Is AI Making Us More Productive but Less Thoughtful?

We’ve been so focused on what AI can do for us that we’ve barely stopped to ask what it might be quietly taking away. I’ve been thinking about this a lot lately. It started as a nagging feeling I couldn’t quite shake. We’re all getting faster, but are we actually getting better?

In many ways, the answer is yes. We’re learning faster than ever. We’re solving more complex problems. We’re exposed to patterns, ideas, and solutions we might never have encountered on our own. Someone using AI well in 2026 can build things that once required years of experience just to attempt. That’s real progress. But what’s been bothering me is a more subtle distinction, one that’s easy to miss:

The difference between expanding capability and deepening understanding.

Because the two are not the same. AI is exceptional at extending what we can do. It helps us move faster, explore wider, and produce more. But that doesn’t automatically mean we understand things more deeply. In fact, sometimes it allows us to bypass the very friction that builds real insight.

A recent article in Harvard Business Review puts words to this concern. It argues that while AI delivers undeniable productivity gains, it also risks quietly eroding the distinct capabilities that make an individual, team, or organisation genuinely good at what they do. And it doesn’t happen in a dramatic or obvious way. It’s almost invisible. It’s like a muscle that weakens when it’s no longer used. Not overnight, not in a way that triggers alarm, but gradually, through consistent reliance on something else to do the heavy lifting.

There’s a phrase from the article that I keep coming back to: “Organisations risk becoming more automated, yet less adaptive. More data-driven, yet less wise. More efficient, yet less legitimate.” I think that’s exactly right, and I think it applies just as much to you and me as it does to companies.

The Productivity Trap:

Here’s how the trap works. You start using AI to go faster on the boring stuff: writing first drafts, explaining error messages, and so on. That’s fine. That’s genuinely useful. You save time, you ship faster, and nothing bad happens. But then the boundary of “boring stuff” starts creeping outward. Suddenly you’re reaching for AI on things that aren’t boring, things that are hard, the things where the struggle is valuable. You ask it to design your data model, to structure something you have never built before.

And it helps. That’s the problem. It helps so much, and so immediately, that you get the result without getting the understanding. You solve the problem, but you don’t build the muscle.

I’ve caught myself doing this. You write a complex function with AI’s help, it works, and you move on. Three days later, you cannot reconstruct why it works. The logic is fuzzy. You would have to look it up again. That probably wouldn’t have happened if you had wrestled with it yourself for an extra hour. That hour is where the understanding lives.

The Skill Debt:

And interestingly, this isn’t a new pattern. Deloitte describes something very similar in the world of technology: technical debt.

This builds when organisations prioritise speed over structure, when quick wins are laid over fragile foundations. At first, everything feels like progress. Features ship faster, capabilities expand, and output increases. But underneath, complexity starts to accumulate. In fact, the report estimates that technical debt can consume 20% to 40% of IT spending, quietly draining resources and limiting growth. It also concludes that most organisations simply can’t “AI their way out” of technical debt. The foundations have to be addressed directly. No shortcut gets you there.

I think the same pattern applies to us as individuals. Skill debt accumulates when you consistently offload work to a tool instead of doing it yourself. You stay productive but stop growing. You produce good outputs, but your ability to produce them without assistance quietly fades. Just like an organisation layering AI on top of weak infrastructure, you can look high-performing on the surface while the foundation underneath is quietly weakening.

A Question Worth Sitting With:

When was the last time you hit a genuinely hard problem, a tricky data model or something that needed your creativity and your thinking, and worked through it yourself before reaching out to AI?

Not because AI wouldn’t have helped, but because you wanted to see if you could figure it out. If your answer is “It’s been a while”, I think that’s worth paying attention to.

This isn’t an argument against AI:

I want to be clear about that. These tools are genuinely useful, and I’m not suggesting anyone stop using them. That would be silly. The point is that how you use them matters; if your default mode is to reach for AI for anything and everything, your skills will slowly erode. The distinction I keep coming back to is the difference between using AI to go faster on things you understand and using it to skip the process of understanding in the first place. The first is a productivity tool; the second is an expertise shortcut. And expertise shortcuts have a cost that isn’t visible until later.

Some things I’ve started doing that feel useful:

  • Forming a hypothesis before I ask AI anything. Even if I am wrong, the act of forming the hypothesis forces me to engage with the problem.

  • Keeping at least one problem a week (a tough one) to solve without any assistance. Not because I want to slow down, but because I want to stay calibrated on what I can and can’t do. Honestly, the satisfaction of solving something yourself is genuinely something else.

  • Most importantly, when AI gives me something that works, I stop and make sure I understand it. I ask follow-up questions rather than just shipping it and moving on.

This isn’t rocket science; it’s just being intentional and mindful about the difference between using a tool and becoming dependent on it. Also note that not all friction builds understanding; only intentional friction does. Sitting with a problem for three hours doesn’t automatically mean you understand it well.

Wrapping Up:

The real question isn’t whether AI makes us more productive. It clearly does. The real question is whether, in the process of becoming faster and more capable, we are still becoming better thinkers. Because capability can be borrowed; understanding cannot.

AI can generate answers, structure ideas, and even simulate expertise. But it cannot build intuition for you. It cannot replace the slow, sometimes frustrating process of wrestling with a problem until something clicks. That part still belongs to you. And that’s the part that compounds over time.

Used well, AI is an accelerator for people who already understand. Used carelessly, it becomes a substitute for understanding itself.

The difference isn’t in the tool. It’s in the intent.

So the goal isn’t to use AI less, but to use it more deliberately. To know when to lean on it, and when to step back and think. To recognise that not every shortcut is worth taking, especially when the long way is where the learning lies.

Because in the end, the real risk isn’t that AI replaces us. It’s that we slowly stop doing the things that made us good in the first place.
