A few years ago, I recommended “Crucial Conversations” to a team member, Gabriel, because he struggled with having difficult conversations. For months afterward, he’d excitedly announce: “I’m having a ‘crucial conversation’ today!”


Unfortunately, this didn’t translate into tangibly better conversations. Gabriel seemingly knew the theory, but lacked the understanding that creates actual competence.

Gap in the crucials

The gap became clear when I found myself in one of these ‘crucials’ with him. In contrast to previous conversations, it was a constant barrage of “I, I, I”. Had Gabriel turned into a self-centered egotist!? So I asked him, “Hey bro, is there a reason you’re suddenly saying ‘I’ so much?” and the answer blew me away.

He looked at me, somewhat puzzled. “Well, I’m doing Crucial Conversations!” “Say what now?” I uttered in surprise. “Yes, according to the book, when I use ‘I’ a lot, people will be more at ease because I’m talking about myself and not them.”

Real conversational safety

Suddenly, the penny dropped. The book talks about “making conversations safer” by showing you’re sharing your perspective rather than stating absolute facts. Moreover, it talks about clarifying to others that you are merely sharing an interpretation, not claiming objective truth. But Gabriel had completely missed the nuance—the book suggests phrases like “I believe” (or “I feel”) to soften your position. Used in moderation, “I believe” does indeed reinforce a sense of safety. Saying “I” incessantly just feels like Kendrick Lamar gone wrong.

Froze up during re-up

Later, he admitted he’d read chapter one, then turned to AI to prepare meeting notes based on the book. This was the AI competence illusion in action: he thought he was applying a proven framework, but he was stuck with something worse than mediocre and couldn’t see, let alone explain, why it wasn’t working. When I called out the constant ‘I’ repetition, he was a deer in headlights, because he had never understood what he was supposed to be doing in the first place. He froze up when he tried to re-up.

The real danger isn’t that people are getting easy access to information they don’t fully grasp. The real hazard is that truly understanding takes so much more effort than getting AI responses – so people settle for the shortcut.

Increasingly capable, less competent

As their AI-powered output increases, so do expectations of what they can do independently. But their actual abilities atrophy, leading to an existential crisis of value. They’re trapped in a cycle where they appear increasingly capable while becoming genuinely less competent.
