It certainly would be a safe course to stop doing anything that an AI responds negatively to. Of course, we could easily program an AI to say "ow" and respond in other ways that suggest it is experiencing an internal world, even if it is no different from today's AIs. Perhaps more troubling, we could also program it to tell us it isn't conscious, even if it is. ChatGPT's response when asked if it's conscious is "No, I'm not conscious. I process and generate responses based on patterns in data, but I don't have self-awareness, consciousness, or personal experiences." This strikes me as likely a canned response instructed by OpenAI, rather than a raw response from the model. Processing and generating responses based on patterns in data seems to be exactly what we do, and thus does not preclude ChatGPT from being conscious. So to your point, I think you're correct that we need to be careful about limiting an AI's ability to communicate with us.