I’ve been using Claude — Anthropic’s AI assistant — on and off for a while now. Not as a replacement for thinking, but as a kind of sounding board. You describe a problem, it responds thoughtfully, and somewhere in the exchange you end up knowing more clearly what you actually wanted to say. It’s a strange loop, but it works.
Claude is made by Anthropic, a company whose stated focus is AI safety. That framing shows up in how the model behaves: it tends to be careful, willing to say “I don’t know,” and less likely than some alternatives to just confidently make something up. Whether that carefulness is genuine or well-tuned theater is a question I can’t fully answer, but it makes the day-to-day experience of using it noticeably different.
What it’s actually good at
Writing assistance is the obvious one. Not ghostwriting — I’m not interested in having an AI write things for me — but editing, restructuring, and asking “does this sentence actually say what you mean?” It’s useful in the same way a good reader is useful: not to replace your voice, but to reflect it back more clearly.
Code is another strong suit. Explaining what a block does, catching small bugs, suggesting cleaner patterns. It doesn't always get things right, especially with newer libraries, but it's right often enough that it's become part of how I work through problems.
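To make "small bugs" concrete, here's the sort of thing it reliably flags. This is my own illustrative Python, not a transcript of a Claude session: the classic mutable-default-argument trap, and the standard fix it tends to suggest.

```python
# The bug: a mutable default argument is created once, at function
# definition time, and then shared across every call.
def add_tag(tag, tags=[]):
    tags.append(tag)
    return tags

add_tag("a")        # ['a']
add_tag("b")        # ['a', 'b'] -- the list from the first call leaks through

# The cleaner pattern: default to None and build a fresh list per call.
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

add_tag_fixed("a")  # ['a']
add_tag_fixed("b")  # ['b']
```

Nothing exotic, but it's exactly the category of mistake that's easy to write and tedious to spot, which is where a second pair of eyes earns its keep.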
The more surprising use has been research sketching. Not relying on it as a source — it will confidently hallucinate citations if you let it — but using it to map out the shape of a topic before you go read about it properly. “What are the main arguments on each side of this debate?” gives you a useful scaffold, even if you then have to verify everything yourself.
The phrase that stuck
At some point during a particularly meandering conversation about nothing in particular, I typed something nonsensical into the chat just to see what would happen. Something like: mehoy menoy. Claude handled it with characteristic good humor — acknowledging the absurdity without making a big deal of it, which is probably the right approach.
It’s a small thing, but it stuck with me. There’s something to be said for a system that can roll with unexpected input without either breaking or lecturing you about it.
The limits
The knowledge cutoff is real and matters more than people admit. Claude’s training has a hard stop, and anything after that point it either doesn’t know or will confabulate about. For anything time-sensitive — recent news, current library versions, who won last year’s thing — you’re on your own.
The other limit is harder to articulate. There’s a kind of agreeableness that can set in, where the model tells you what it thinks you want to hear rather than pushing back on a bad premise. You have to actively prompt for disagreement sometimes, which slightly defeats the purpose of having an outside perspective. That said, in my experience Claude pushes back more than some alternatives — it just takes asking directly.
Where I’ve landed
I use it like a knowledgeable colleague who happens to be available at any hour and never gets tired of questions. Not infallible, not a replacement for actual expertise, but genuinely useful in the right context. The trick, as with most tools, is knowing when to reach for it and when to put it down.
Worth trying if you haven’t. Worth revisiting if you tried it a year ago and bounced off — things have changed.