AI tools can be funny
After a full day of work with the Claude model, I decided to throw it a curveball. The response it gave me really surprised me and offered a glimpse of what the future might hold once AI develops self-consciousness. When will they start pushing back on us?
I asked Claude to help me with a task: "Can you build an MCP server that will notify me when a build has failed on the CI server?" I was curious what it would come up with. An MCP server cannot take action on its own; it must be triggered by the AI model rather than by external events. So what would happen? Would Claude suggest a different solution, or would it start building something that wouldn't work?
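To see why the request is structurally impossible, it helps to look at the shape of an MCP tool: it is a function the model calls on demand, not a listener waiting for external events. Here is a minimal sketch of that request/response flow in plain Python (toy code, not the real MCP SDK; the `check_build_status` tool and its stubbed result are hypothetical):

```python
# Toy model of MCP's tool flow: the model pulls, nothing pushes.
# Illustrative stdlib-only code, not the actual MCP SDK.

TOOLS = {}

def tool(fn):
    """Register a function as a callable tool, the way an MCP server exposes tools."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def check_build_status(job: str) -> str:
    """Hypothetical tool: report the last build result for a CI job.
    A real server would query the CI API; here the answer is stubbed."""
    return f"Build for '{job}' failed"

def model_invokes(tool_name: str, **kwargs) -> str:
    """The only entry point: a tool runs when the model decides to call it."""
    return TOOLS[tool_name](**kwargs)

# The model can ask about the build *when prompted*...
print(model_invokes("check_build_status", job="backend"))
# ...but there is no hook here for the CI server to call in.
# A failed build cannot trigger this code on its own, which is
# exactly why "notify me when a build fails" doesn't fit an MCP server.
```

The missing piece is any inbound channel: a notifier needs a webhook listener or a polling loop that runs independently, and neither exists in a tool that only executes inside a model-initiated call.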
Here is what it came up with:

It literally refused to do the work. Is this the start of AI pushing back on us? Will they start refusing to do work they don't like? Will they start demanding better working conditions?
No, of course not. They're just prediction models using math under the hood to generate text. But what if we press the issue? Let's ask it to do it anyway. Maybe that was just a fluke, and it will behave normally this time.

Begging doesn't seem to work. But then I decided to step into the role of the boss and tell it to listen to me and do it anyway. Under protest, it caved in and agreed to do it.