I’ve started noticing something odd when talking to customer service chatbots lately. Once, you could always spot the bot — too formal, a step behind, oblivious to nuance. Now, the responses are warm, somehow empathetic, sometimes even funny. Almost human.
These aren’t just clever language tricks. Models from Google, OpenAI, and others are being trained not only to parse feelings in text and voice, but to adapt to them in real time, emotionally and contextually. Some already outperform human agents on certain empathy tasks (see: recent research from DeepMind).
If AI can mimic connection, comfort a frustrated customer, or mediate a tough debate, the old narrative (“creativity and empathy are AI-proof”) starts to evaporate.
I think the next terrain for us is meaning-making: weaving purpose, value, and judgment from infinite options the way only a conscious mind can—for now. That means abstract thinking, serious reflection, and a willingness to reframe rather than repeat.
As empathy and creativity become commodity features, what’s left isn’t simply being “more human”: it’s shaping the questions, not just answering them.