<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
  <channel>
    <title>Inspired IT - Technical Blog</title>
    <link>https://inspired-it.nl</link>
    <description>AI Development Advocate insights, architecture patterns, and technical expertise from Jeroen Gordijn</description>
    <language>en-us</language>
    <lastBuildDate>Mon, 20 Apr 2026 22:08:43 GMT</lastBuildDate>
    <atom:link href="https://inspired-it.nl/rss.xml" rel="self" type="application/rss+xml" />

  <item>
    <title>Closing the Loop</title>
    <link>https://inspired-it.nl/blog/closing-the-loop</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/closing-the-loop</guid>
    <description>Why I automate every step of the development process so the only thing left for me is reviewing the proof</description>
    <content:encoded><![CDATA[<p><img src="https://inspired-it.nl/images/ai-closing-the-loop.png" alt="Closing the Loop" /></p>
<p>I don’t want to test. I don’t want to run the app and click around. I don’t want to verify that the code compiles. I don’t even want to review the code myself. I want my robot to do all of that and then prove to me that it works.</p>
<p>That’s the goal. Automate every step of the feedback loop so the only thing left for me as a human is looking at the proof and deciding: is this correct?</p>
<h2>Why Automation Matters</h2>
<p>AI agents can generate a lot of code quickly. But without feedback, they’ll happily produce code that doesn’t compile, fails edge cases, or ignores your standards.</p>
<blockquote>
<p>[!IMPORTANT]
The agent doesn’t know it’s wrong unless something tells it.</p>
</blockquote>
<p>If I have to report every error to the robot myself, then I am the bottleneck. Every time I need to manually test, manually review, or manually point out issues, I’m slowing the whole process down.</p>
<p>Every time you find yourself doing something again and again, you should ask yourself: “can I delegate this to the robot?”. The answer is most definitely: “yes!”. The robot is very good at performing repetitive tasks and generating its own feedback.</p>
<p>Compilation fails? The agent sees the error and fixes it. Tests fail? Same thing. Linter complains? It knows immediately. But also think about testing through the browser, or checking that the deployment succeeded once the agent created a PR on GitHub. This is what I mean by closing the loop.</p>
<h2>The Feedback Stack</h2>
<p>There are several gates that must pass before the work is worth my own time.</p>
<p><strong>Compilation</strong> is the first gate. If it doesn’t compile, nothing else matters. This mostly works out of the box; most agents check it automatically.</p>
<p><strong>Static analysis and linting</strong> catch style issues, potential bugs, and deviations from project standards. This should already be part of your build setup.</p>
<p><strong>Tests</strong> are where the real proof lives. I’m pushing harder and harder for 100% test coverage. This is where you need to nudge the agent: make it part of your project’s documentation or <code>AGENTS.md</code>.</p>
<p>Driving coverage to 100% ensures every line of code has been exercised and thought about. Do make sure the agent doesn’t write nonsense tests: review them, and capture what a good test looks like in your project instructions.</p>
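<p>Chaining these gates is nothing more than running commands and returning the first failure to the agent. A minimal Python sketch, with placeholder commands you would swap for your project’s real build steps:</p>

```python
import subprocess

# Placeholder commands; substitute your real build steps,
# e.g. "./gradlew build", "npx eslint .", "pytest --cov --cov-fail-under=100".
GATES = [
    ("compile", "true"),
    ("lint", "true"),
    ("test", "true"),
]

def run_gates():
    """Run each gate in order; return the first failure, or None if all pass."""
    for name, command in GATES:
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        if result.returncode != 0:
            # This output is exactly what you feed back to the agent.
            return {"gate": name, "output": result.stdout + result.stderr}
    return None

print("all gates passed" if run_gates() is None else "gate failed")
```

<p>The value is not in the script itself, but in the fact that the agent can run it and read the failure output without me in between.</p>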
<p><strong>Browser testing</strong> takes it further. The agent can use <a href="https://playwright.dev">playwright</a> or <a href="https://github.com/simonw/rodney">rodney</a> to control a browser, navigate to pages, take screenshots, and verify that the UI looks right. This catches so many errors before I need to lift a finger.</p>
<p><strong>Showboat documents</strong> are my recent find and really nice. A <a href="https://github.com/simonw/showboat">showboat</a> document is an executable markdown file that runs code, captures output, and produces a readable proof of the work. I let the robot build a showboat document to prove to me that the functionality works. Thanks to <a href="https://simonwillison.net">Simon Willison</a> for creating showboat and rodney.</p>
<p>All of this happens during the development phase (although the showboat and browser tests can extend into deployment). When I create a PR, it is automatically deployed to a test environment, and we can continue there:</p>
<p><strong>Monitor deployments</strong> to verify that the PR deploys correctly, using <a href="https://github.com/simonw/showboat">showboat</a> again to prove that it works on test.</p>
<p><strong>Monitor the PR</strong> for feedback from automatic tools, or team members.</p>
<p>If an issue pops up during any of these steps, the robot can fix it without me intervening. I do still have to figure out how to <strong>monitor the PR</strong> automatically; right now I nudge the robot when I see that there is feedback on the PR.</p>
<h2>Code Review Without Me</h2>
<p>Even with all that automation, the agent might still miss things. Subtle logic errors, forgotten edge cases, or code that works but violates the project’s conventions. That’s where I bring in a <a href="https://inspired-it.nl/blog/the-reviewer/">reviewer agent</a>. Another model, or the same model, but a fresh session. Its only job is to check if the code is complete, correct and according to our standards. The result of the review is passed to the coding agent to fix the code and resubmit. This cycle runs automatically until the reviewer gives a pass.</p>
<p>I wrote about this in detail in <a href="https://inspired-it.nl/blog/the-reviewer/">The Reviewer</a>.</p>
<h2>What’s Left for Me?</h2>
<p>I want to get to a stage where I can focus on what really matters: the functionality. Code has become a commodity and I should not need to worry about it. When the robot is done, I get:</p>
<ul>
<li>Code that compiles and passes linting</li>
<li>A full test suite that passes</li>
<li>Screenshots of the UI in action</li>
<li>A showboat document proving the functionality works</li>
<li>A code review that’s already been addressed</li>
</ul>
<p>My job is to look at all of this and decide: does this solve the problem? Is the approach sound? That’s a much better use of my time than copy-pasting errors to the robot.</p>
<h2>Lean In</h2>
<p>When the agent keeps getting something wrong, the temptation is to just fix it yourself. Don’t. Instead, figure out why it’s failing and automate the check. Can you write a test for it? A reviewer instruction?</p>
<p>Every time you manually correct the agent, you’re doing a one-time fix. Every time you automate the feedback, you’re fixing it forever.</p>
<h2>Become the Architect</h2>
<p>Coding is cheap and the agent can do a lot of work, but it all goes to waste if we hold its hand and guide it through every step. We must put in some effort up front so that the robot can verify its own work. That effort pays for itself on the very next task. And the one after that. The compound effect is enormous.</p>
<p>I spend my time crafting the OpenSpec proposal and reviewing the end result. The robot handles the rest. I have become <a href="https://inspired-it.nl/blog/the-ai-coding-ladder/#level-6-the-architect">The Architect</a>.</p>
<p>Close the loop. Then let the robot run.</p>
]]></content:encoded>
    <pubDate>Mon, 16 Mar 2026 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>
    <enclosure url="https://inspired-it.nl/images/ai-closing-the-loop.png" length="746940" type="image/png" />
    <category>AI</category><category>Coding</category><category>Workflow</category>
  </item>

  <item>
    <title>MCP to restrict agents</title>
    <link>https://inspired-it.nl/blog/mcp-to-restrict-agents</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/mcp-to-restrict-agents</guid>
    <description>Use CLI tools over MCP for dev, but MCP has its place</description>
    <content:encoded><![CDATA[<p><img src="https://inspired-it.nl/images/mcp.png" alt="MCP to restrict agents" /></p>
<p>Lately I’ve been thinking about <a href="https://modelcontextprotocol.io">MCPs</a> and what they are useful for. I stopped using them in my coding agents, but do they still have a purpose? For some time I didn’t know. When I started to think more about security and how to lock down agents, I think I found the answer.</p>
<h1>Development</h1>
<p>When MCP came out, many developers hurried to add servers to their agent config. This led to context bloat, and it is clear by now that the models we use work very well without MCP. A <a href="https://agentskills.io">skill</a> with a script works just as well. For development, I think MCP is mostly nonsense. Your agent has bash access, so it can use all the CLI tools you have installed, and those work very well.</p>
<h1>Use of MCP</h1>
<p>MCPs do serve a purpose when you want to lock down the agent that you are running. Take, for instance, an email agent that should only be able to read mail and attach labels to it. You want to restrict that agent to just the read-mail and label-mail tools.</p>
<h1>The reasoning</h1>
<p>The moment you give an agent bash access, it can do anything: read files, write files, run commands, install tools, and so on. If that level of access is acceptable, use skills. If you want to restrict the agent to only certain tools, give it access to exactly those tools and do not give it bash access. This is mostly useful when you embed an agent in an application.</p>
<p>MCP is actually a way to restrict what the agent can do. Use it to create a more secure agent, not to give it more power.</p>
]]></content:encoded>
    <pubDate>Thu, 19 Feb 2026 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>
    <enclosure url="https://inspired-it.nl/images/mcp.png" length="778967" type="image/png" />
    <category>AI</category><category>Tools</category><category>Agents</category>
  </item>

  <item>
    <title>Your AI Skills Need a Package Manager</title>
    <link>https://inspired-it.nl/blog/agentdeps</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/agentdeps</guid>
    <description>I built a package manager for AI agent skills. Here&apos;s why.</description>
    <content:encoded><![CDATA[<p><img src="https://inspired-it.nl/images/cover-agentdeps.png" alt="Your AI Skills Need a Package Manager" /></p>
<p>Every time I start a new project, I have to do some mumbo jumbo to get the correct skills available for my project and client: copying skills from one project to another, or installing them globally. It’s a mess. I want to manage my skills and agents the way I manage my code dependencies.</p>
<p>I play around with different coding agents, and I have to repeat this whole copy-paste-install process every time. I forget to copy a skill, and then wonder why this new coding agent behaves differently.</p>
<p><a href="https://skills.sh">skills.sh</a> is a nice tool to install skills. But there is no management. I have to remember which skills I need to install, and updating only works on global skills. So I decided to build dependency management for agents.</p>
<p>Enter <a href="https://www.npmjs.com/package/agentdeps">agentdeps</a>.</p>
<p><strong>Agentdeps</strong> allows you to create an <code>agents.yaml</code> file in your repo. This file contains the skills and agents you want to use. You can commit this file, just like your pom.xml or requirements.txt, and everyone working with the repo will have the same skills and agents. When you change the file, everyone can update their environment with a single command.</p>
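<p>To give an idea, here is a hypothetical <code>agents.yaml</code>. The field names below are illustrative, not the authoritative schema; check the <a href="https://www.npmjs.com/package/agentdeps">agentdeps</a> docs for the real format, and note that <code>my-org/review-agents</code> is a made-up repo name:</p>

```yaml
# Illustrative shape only; the actual agentdeps schema may differ.
skills:
  - vercel-labs/agent-skills      # a public skill repository
agents:
  - my-org/review-agents          # hypothetical internal agents repo
```

<p>The point is the workflow, not the syntax: the file is committed, and everyone on the repo converges on the same setup with a single install command.</p>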
<h3>Adding skills and agents to your repo</h3>
<p>With the <code>add</code> command you can interactively add skills and agents to your project (or globally). You can select the ones you want and it will create an <code>agents.yaml</code> file in your repo. If there is already an <code>agents.yaml</code> file, it will update it with the new skills and agents you selected.</p>
<pre><code>npx agentdeps add vercel-labs/agent-skills
</code></pre>
<p>Choose your skill (or all):
<img src="https://inspired-it.nl/images/installing_skills.png" alt="Select skills" style="max-width: 600px;" /></p>
<h3>Updating</h3>
<p>Did the config change, or did you just clone a repo with an <code>agents.yaml</code> file? Run <code>install</code>.</p>
<pre><code>npx agentdeps install
</code></pre>
<h3>Configuration</h3>
<p>On first run, <strong>agentdeps</strong> will ask for some configuration. Mainly, which cloning mechanism to use (https/ssh), and how to install skills and agents (symlink or copy). You can change this configuration at any time. Just run <code>npx agentdeps config</code> and it will ask you the configuration questions again.</p>
<p><em><strong>We’ve been managing code dependencies for decades. Why are we still manually wrangling our AI skills? Give <a href="https://www.npmjs.com/package/agentdeps">agentdeps</a> a try and let me know what you think.</strong></em></p>
]]></content:encoded>
    <pubDate>Sun, 08 Feb 2026 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>
    <enclosure url="https://inspired-it.nl/images/cover-agentdeps.png" length="633061" type="image/png" />
    <category>AI</category><category>Tools</category><category>Agents</category>
  </item>

  <item>
    <title>Demystifying the AI Agent</title>
    <link>https://inspired-it.nl/blog/demystifying-the-agent</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/demystifying-the-agent</guid>
    <description>AI agents sound complex, but they&apos;re surprisingly simple. A system prompt, a few tools, and a model doing the real work. Here&apos;s what&apos;s actually under the hood.</description>
    <content:encoded><![CDATA[<p>“My coding Agent is the best!” I keep hearing this in developer communities. People debate which agent is the best AI coding tool. Claude Code, OpenCode, Cursor, etc. And people are passionate about their choice and trying to get their company to support their agent of choice. But here’s the thing: the agent isn’t doing what you think it’s doing.</p>
<p><a href="https://aishepherd.nl">Jeroen Dee</a> pointed me towards <a href="https://pi.dev">Pi</a> and together we expiremented with it this week. Pi is a minimal AI agent with just four tools: read, write, edit, and bash. That’s it. No MCP, no Todos, no sub-agents, and no fancy UI tweaks. Just four tools and a model. And it works.</p>
<p>That got me thinking. If four tools are enough, what exactly are all these agents <em>actually</em> doing?</p>
<h2>What is an Agent, Really?</h2>
<p>An agent is three things:</p>
<ol>
<li>
<p><strong>A system prompt.</strong> This is the “personality” and “harness” of the agent. It tells the model how to behave, what tone to use, what rules to follow.</p>
</li>
<li>
<p><strong>A list of tools.</strong> These are the actions the model can ask to perform. Read a file, write a file, run a command in bash. Each tool has a name, a description and a schema that defines the arguments. This way the model knows when to use it and how.</p>
</li>
<li>
<p><strong>A loop.</strong> The agent takes your question/prompt, attaches the system prompt and tool list and sends it to the model. The model either responds with text (shown to you) or asks to use a tool. If it’s a tool call, the agent executes it and sends the result back to the model. Repeat until the model sends back text. That’s when the loop is done and the agent waits for user input.</p>
</li>
</ol>
<p>That’s it. That’s the whole thing.</p>
<p>The model decides what to do. The model decides which tools to use. The model decides when it’s finished. The agent is the plumbing. The model is the brain.</p>
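<p>The plumbing really does fit in a handful of lines. Here is a deliberately minimal Python sketch: <code>fake_model</code> is a stand-in for a real LLM API call (real SDKs have their own message shapes), and only a <code>bash</code> tool is wired up.</p>

```python
import subprocess

def run_tool(call):
    """Execute a tool call. Only a bash tool is wired up in this sketch."""
    if call["name"] == "bash":
        result = subprocess.run(call["args"]["command"], shell=True,
                                capture_output=True, text=True)
        return result.stdout + result.stderr
    return "unknown tool: " + call["name"]

def fake_model(messages):
    """Stand-in for the LLM call: it asks for one tool call, then answers."""
    if any(m["role"] == "tool" for m in messages):
        return {"type": "text",
                "text": "The command printed: " + messages[-1]["content"].strip()}
    return {"type": "tool_call", "name": "bash", "args": {"command": "echo hello"}}

def agent_loop(user_prompt):
    messages = [
        {"role": "system", "content": "You are a coding agent."},  # the system prompt
        {"role": "user", "content": user_prompt},
    ]
    while True:
        reply = fake_model(messages)      # in reality: an API call to the model
        if reply["type"] == "text":       # plain text: show it, the loop is done
            return reply["text"]
        tool_output = run_tool(reply)     # tool call: execute it...
        messages.append({"role": "tool", "content": tool_output})  # ...and feed it back

print(agent_loop("say hello"))
```

<p>Everything a full-featured agent adds on top of this loop is convenience, not intelligence.</p>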
<h2>So Where’s the Magic?</h2>
<p>The magic is in the model. Always has been. When Claude Code writes a beautiful refactoring of your messy function, that’s Claude Opus/Sonnet/Haiku doing the thinking. The agent just handed it the right files when the model asked for it.</p>
<p>So when someone says “my agent is better,” what they usually mean is “my model + context + UX combo is better.” Some agents may have a better (for you) system prompt than others. It may have a nicer UX and some quality of life features. These things matter a lot day-to-day. But the actual intelligence? That comes from the model.</p>
<h2>Skills Are Just Text Files</h2>
<p>After my Pi discovery, I had another “wait, that’s it?” moment about skills. In my <a href="https://inspired-it.nl/blog/the-ai-coding-ladder">AI Coding Ladder</a> post, I described Level 5 (the Agentic Coder) where the AI gets access to tools and organizational knowledge. Skills are a big part of that.</p>
<p>But a skill is nothing more than a name, a description, and a markdown file. The system prompt nudges the model to check the available skills and see whether the task matches a skill description. When it matches, the model reads the skill file (by asking via a tool call). What you put into that file is up to you: references to websites, scripts, templates, coding standards. The model just receives the <code>SKILL.md</code> and, based on its content, may read further resources or perform certain actions (again through tool calls). The outcome depends on how good your <code>SKILL.md</code> is.</p>
<p>There’s no framework. No magic folders. It’s a text file that the model reads when it seems relevant. That’s the whole mechanism.</p>
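<p>To make that concrete, here is what a minimal skill file might look like. The skill itself is made up, and the exact frontmatter fields vary per agent, so treat this shape as indicative:</p>

```markdown
---
name: release-notes
description: Use when asked to draft release notes for a tagged version
---

1. Read CHANGELOG.md and collect the entries since the previous tag.
2. Group them into Features, Fixes, and Breaking Changes.
3. Write the result to docs/release-notes/, following templates/release.md.
```

<p>The model never “executes” this file; it just reads it and follows the instructions with its ordinary tools.</p>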
<h2>Pi: Power Through Simplicity</h2>
<p>This is what makes <a href="https://pi.dev">Pi</a> so interesting. Instead of building a massive tool with hundreds of features, it gives you the bare essentials and says: “You figure out the rest.”</p>
<p>Four tools. Read, write, edit, bash. With these four, the model can do almost anything. Need to search your codebase? Bash. Need to run tests? Bash. Need to modify a file? Edit. Need to understand what’s going on? Read. Need MCP? Just let it write a bash script!</p>
<p>But here’s the really clever part: from what I’ve tried so far, Pi can extend itself. Want a new feature? Ask Pi to build it and add it to Pi. It writes its own tools. It has a very powerful extension mechanism, with which you can tweak everything, from UI to system prompt. It grows with your needs instead of shipping with a thousand features you’ll never use.</p>
<blockquote>
<p>[!NOTE]
Pi is also the core of <a href="https://openclaw.ai">OpenClaw</a></p>
</blockquote>
<h2>Why Does This Matter?</h2>
<p>Because understanding the simplicity changes how you work.</p>
<p>If you know the agent is just plumbing, you stop attributing magic to the wrong thing. You focus on what actually matters: the model you’re using, the context you’re providing, and the instructions you’re giving. A better system prompt will improve your results more than switching agents. Whenever something strange happens, look at the context. What did the model receive and why did it do what it did?</p>
<p>It also means you don’t need to wait for your favorite agent to ship a feature. Need something? Build it. Write a skill. Add a tool. The barrier is surprisingly low.</p>
<h2>Build Your Own</h2>
<p>I think we’re moving into an era where heavy, expensive SaaS solutions are being replaced by in-house, purpose-built tools. Why buy an expensive software solution with a lot of functions you don’t need, when building it yourself is cheaper? If we developers think we can easily build any system, why should our AI coding agents be any different?</p>
<p>Start bare. Add what you need. Remove what you don’t. No need to try to understand all the features that your agent brings. You only add the features you need. Your agent should fit like a glove, not like a one-size-fits-all winter coat.</p>
<p>The next time someone tells you their agent is the best, ask them: which model does it use?</p>
<p>That’s where the real answer lives.</p>
]]></content:encoded>
    <pubDate>Sat, 07 Feb 2026 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>

    <category>AI</category><category>Tools</category><category>Agents</category>
  </item>

  <item>
    <title>Kotlin&apos;s LSP Problem is Real</title>
    <link>https://inspired-it.nl/blog/kotlin-lsp-frustration</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/kotlin-lsp-frustration</guid>
    <description>I just want to click through code. The state of Kotlin&apos;s LSP is driving me away from the language.</description>
    <content:encoded><![CDATA[<p>I wrote about <a href="https://inspired-it.nl/blog/java-vs-kotlin-ai">The Kotlin Paradox</a> last month. Back then, it was mostly theoretical: “In an AI-driven future, Java’s robust LSP might beat Kotlin’s nicer syntax.” It felt like speculation about a problem I foresaw.</p>
<p>It’s not theoretical anymore. I’m living it.</p>
<h2>The Daily Grind</h2>
<p>Here’s my typical day: I have AI agents writing code across multiple worktrees. They’re doing the implementation while I architect and review. When I need to check something, I just want to <strong>click through the code</strong>.</p>
<p>That’s it. Go-to-definition. Find references. Basic navigation.</p>
<p>With IntelliJ, this works perfectly. It’s a nice IDE, but too heavy for this task. Opening IntelliJ for every git worktree (does that even work now? There was a bug with git worktrees) is killing my productivity.</p>
<p>I don’t want a full IDE. I want to open Helix, or Zed, or VS Code, navigate quickly, and close it.</p>
<h2>The Remote Work Problem</h2>
<p>It gets worse. A lot of my work happens in devcontainers or on remote boxes.</p>
<p>Open a full remote JetBrains session? Tried that: 5GB of memory gone, just to look at a few files. The process is also painful. I just want to <code>hx .</code>, browse to the file, and check something. With every other option, I have to click through menus to open a folder somewhere on the remote filesystem. With <a href="https://opencode.ai">OpenCode</a> and <a href="https://github.com/anthropics/claude-code">Claude Code</a>, my work happens more and more in the terminal. Needing to leave the CLI breaks my flow.</p>
<p>With Go or Python, I’d just open Helix, or VS Code Remote if I want a bit more tooling. The LSP connects, and I’m navigating code within seconds: <code>gd</code> to jump to definition, <code>gr</code> to find references. It just works.</p>
<p>With Kotlin? I get a text file with pretty colors.</p>
<p>The Kotlin LSP situation is terrible. JetBrains is working on one, but it’s pre-alpha and barely works.</p>
<h2>The Agent Perspective</h2>
<p>The AI agents (<a href="https://opencode.ai">OpenCode</a>, <a href="https://github.com/anthropics/claude-code">Claude Code</a>) face the same problem. When I’m working in Go or Python codebases, I see them using LSP features. I’m not sure exactly which features, but I do see error messages coming back. This allows for faster iteration, because the agent sees the error immediately instead of wasting time compiling and waiting for the results.</p>
<h2>The Breaking Point</h2>
<p>I feel sad and frustrated. Kotlin is still a nice language, and a few months ago I was firmly in the camp of “Use an IDE to develop, not a text editor!”. But I don’t write code by hand anymore. My agents do. And when I need to review, navigate, or verify, I need tooling that works outside the JetBrains ecosystem.</p>
<p>If I were starting a new project today, knowing what I know now about agentic workflows and the importance of universal tooling?</p>
<p>Kotlin wouldn’t be my choice.</p>
<h2>What Would Fix This</h2>
<p>JetBrains, if you’re reading: the community needs a real Kotlin LSP. Not “pre-alpha.” Not “experimental.” A proper language server that works in all editors and AI agents that support it.</p>
<p>Until that changes, every new project I start is going to be a harder sell for Kotlin. And I suspect I’m not alone.</p>
]]></content:encoded>
    <pubDate>Mon, 26 Jan 2026 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>

    <category>Kotlin</category><category>AI</category><category>Tools</category><category>LSP</category>
  </item>

  <item>
    <title>Creating Games With My 10-Year-Old Son</title>
    <link>https://inspired-it.nl/blog/creating-games-with-my-son</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/creating-games-with-my-son</guid>
    <description>How Claude made my son enthusiastic about working with a computer, making programming accessible and fun.</description>
    <content:encoded><![CDATA[<p><img src="https://inspired-it.nl/images/ai-coding-kid.png" alt="Creating Games With My 10-Year-Old Son" /></p>
<p>A few years ago, I tried to get my then 10-year-old son interested in programming. We got an Arduino kit, sat down together, and spent what felt like forever just to get a single LED to blink. The code was too hard for him and it took too much of his concentration to go through this boring (for a 10-year-old) process. This didn’t ignite his enthusiasm for my profession.</p>
<h2>Fast Forward: The Firework Game</h2>
<p>It was just after New Year’s Eve when my youngest son announced he wanted to create a game. A firework game. The timing made sense.</p>
<p>He’d seen me working with AI tools a lot recently. And when I had to build a Cookie Clicker game for my presentation at <a href="https://www.techniekcollegerotterdam.nl/opleidingen/ict-en-programmeren">Techniek College Rotterdam</a>, he watched me do it with Claude. That sparked something. He wanted to create his own game the same way.</p>
<p>I opened Claude and gave it one simple instruction: “Make an artifact with simple HTML+javascript. No React.” Then I handed control to my son. Or rather, a simple button press, since we speak our instructions via <a href="https://www.wispr.flow/">Wispr Flow</a> instead of typing. This greatly reduces friction, because talking is something 10-year-olds can do really well.</p>
<p>We discussed it a bit before pressing the button to activate Wispr Flow: think clearly about what you want, and describe it step by step. He came up with this:</p>
<lang-tabs>
<tab lang="nl" label="Nederlands">
<p>We maken een webpagina voor vuurwerk.
Allemaal verschillende soorten vuurwerk maar wel sier vuurwerk.</p>
<ol>
<li>We willen een vuurpijl, dat in de lucht vliegt als we ergens drukken. Dat hij na drie seconden in de lucht vliegt en dat hij dan, als hij helemaal bovenaan de lucht is, dat hij dan ontploft met alle mooie kleuren.</li>
<li>Wij weer een fonteintje na wat 3 seconden dan de lucht in spuit met allemaal mooie kleuren. Maar het vuurwerk blijft staan op de plek waar het stond dus er spuit allemaal kleur in de lucht in.</li>
<li>We willen ook een doos die je dan neerzet met vijf vuurpijlen erin. En die spuiten dan na drie seconden allemaal de lucht in en dan gaan ze allemaal uit elkaar splashen met allemaal kleuren, maar het moet kleiner dan de vuurpijl.</li>
</ol>
</tab>
<tab lang="en" label="English">
<p>We are making a webpage for fireworks. All different kinds of fireworks but decorative fireworks.</p>
<ol>
<li>We want a rocket, that flies into the air when we press somewhere. That after three seconds it flies into the air and that then, when it is all the way at the top of the sky, that it then explodes with all the beautiful colors.</li>
<li>We want a little fountain that after like 3 seconds then sprays into the air with all beautiful colors. But the firework stays standing in the place where it stood so all color sprays into the air.</li>
<li>We also want a box that you then put down with five rockets in it. And those then all spray into the air after three seconds and then they all splash apart with all colors, but it has to be smaller than the rocket.</li>
</ol>
</tab>
</lang-tabs>
<p>Within minutes, we had a working firework simulation. Rockets flying up. Explosions. Fountains. My son was enthusiastic to go forward and improve the game.</p>
<p>Initial version:
<img src="https://inspired-it.nl/images/firework-1st-version.png" alt="Initial version of firework game" style="max-width: 400px;" /></p>
<p>Fun fact: for me and most software engineers, this is already amazing. For my son it was not amazing at all. He wasn’t awestruck, just full of energy to keep going. To him it was simply a cool new tool he had learned.</p>
<h2>From 2D to 3D</h2>
<p>After a few iterations, we had a nice working 2D firework page. Then my son had the idea to make it 3D.</p>
<p>I started to explain that this might be too hard, wanting to manage expectations, because I immediately thought it was too complex for a single-page HTML game.</p>
<p>I was wrong.</p>
<p>We gave it a shot and it transformed into a somewhat working 3D game. There were some bugs, which we managed to fix with a few prompts.</p>
<h2>The Real Win</h2>
<p>This made my son enthusiastic to work with the computer and make something. It triggers his creativity. The next day he grabbed his iPad, opened ChatGPT and tried other things, like creating images. He’s been playing with it ever since, coming up with new ideas, iterating, experimenting.</p>
<p>The downside? He now thinks my job is simple…</p>
<h2>Try It Yourself</h2>
<p>The firework game is live: <a href="https://jgordijn.github.io/games/vuurwerk.html">Firework Game</a></p>
<p>He’s already moved on to new projects. You can see his growing collection here: <a href="https://jgordijn.github.io/games/">All Games</a></p>
<p>If you have kids who are curious about creating things, maybe skip the Arduino for now. Hand them an AI assistant and let them talk. Just make sure your prompt asks for a single HTML file: “Make an artifact with simple HTML+javascript. No React.” You might be surprised what they build.</p>
]]></content:encoded>
    <pubDate>Sun, 25 Jan 2026 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>
    <enclosure url="https://inspired-it.nl/images/ai-coding-kid.png" length="633104" type="image/png" />
    <category>AI</category><category>Claude</category>
  </item>

  <item>
    <title>Ralph Wiggum: Loop it!</title>
    <link>https://inspired-it.nl/blog/ralph-wiggum-agentic-loops</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/ralph-wiggum-agentic-loops</guid>
    <description>How a simple Bash script and a clever prompt pattern turned 35 skill reviews into a 30-minute automated session.</description>
    <content:encoded><![CDATA[<p><img src="https://inspired-it.nl/images/ralph-wiggum.png" alt="Ralph Wiggum: Loop it!" /></p>
<p>I ran an AI agent in a loop and came back to 35 commits. That felt… irresponsible. And also kind of cool.</p>
<p>I’ve been hearing more and more about “Ralph Wiggum” lately. It’s a loop pattern for AI coding assistants, <a href="https://ghuntley.com/ralph/">coined by Geoffrey Huntley</a>. The main idea is to keep pressing forward and create a fresh context for each iteration. Each loop does exactly one thing, then stops. No context bloat. No accumulated confusion.</p>
<p>I was a bit hesitant to try it, because I felt safer constantly validating what the AI was doing. I was comfortable with the human-in-the-loop approach. But Geoffrey makes the point that people should be on the loop, not in it. So when I recently had a repetitive task to do, I had a reason to try it out.</p>
<h2>The Problem</h2>
<p>I have over 35 skills in my OpenCode setup, but I noticed most of them weren’t used much. My fellow AI enthusiast <a href="https://qualityshepherd.nl">Jeroen Dee</a> pointed me towards the <a href="https://github.com/obra/superpowers/tree/main/skills/writing-skills">writing-skills</a> skill in Jesse Vincent’s <a href="https://github.com/obra/superpowers">Superpowers</a> project. It looked like a very thorough skill with lots of detail on making good skills.</p>
<p>I used it on a few of my skills and saw improvements. But doing this manually on 35+ skills? That sounded like a good opportunity to try Ralph Wiggum.</p>
<h2>The Prompt</h2>
<p>The trick with Ralph Wiggum is designing a prompt that does exactly one thing and then stops. No questions. No waiting for input. Just do the work, commit, and stop. And keep rerunning that same prompt until all work is done.</p>
<p>I deviated a little from the classic pattern. Instead of keeping state in a separate file, I kept the state in the prompt and changed that on every iteration. Here’s what I came up with:</p>
<pre><code class="language-markdown">Take the topmost skill from the list below and do the following:

- Thoroughly review the skill (use the writing-skills skill to learn what &quot;good&quot; looks like).
- Apply all recommendations, even small ones. No questions to the user. Decide what needs to happen. When in doubt think double hard and come up with an answer yourself.
- Remove the skill from the list below and save this file.
- Commit.
- Stop.

When the list below is empty (after the commit), reply with &quot;DONE - STOP RALPH&quot;.

Skills:
  - agent-builder
  - convert-plan-to-beads
  - reviewing-skill
  - ...
</code></pre>
<p>A few important details:</p>
<p><strong>No questions allowed.</strong> This is crucial. If the AI asks a question, the loop breaks. There’s no human watching to answer. The prompt explicitly says “No questions to the user” and “When in doubt think double hard and come up with an answer yourself.”</p>
<p><strong>The “stop” instruction.</strong> This ends the current loop iteration. Without it, the AI might keep going within the same context, and a growing context increases the risk of the agent going off the rails.</p>
<p><strong>“DONE - STOP RALPH”</strong> is the signal that all work is done. The bash script watches for this to know when to exit.</p>
<h2>The Script</h2>
<p>With the prompt ready, I needed a way to run it repeatedly. A simple Bash script does the job:</p>
<pre><code class="language-bash">#!/bin/bash

MAX_ITERATIONS=35
STOP_SIGNAL=&quot;DONE - STOP RALPH&quot;

for ((i = 1; i &lt;= MAX_ITERATIONS; i++)); do
    echo &quot;=== Iteration $i of $MAX_ITERATIONS ===&quot;

    output=$(opencode run -m &quot;github-copilot/claude-haiku-4.5&quot; &quot;read and perform @prompt.md&quot; 2&gt;&amp;1 | tee /dev/stderr)

    # Check last 20 lines for stop signal
    if echo &quot;$output&quot; | tail -20 | grep -q &quot;$STOP_SIGNAL&quot;; then
        echo &quot;&quot;
        echo &quot;=== Stop signal detected. Exiting. ===&quot;
        exit 0
    fi

    echo &quot;&quot;
done

echo &quot;=== Reached maximum iterations ($MAX_ITERATIONS). Exiting. ===&quot;
</code></pre>
<p>The script runs OpenCode with the prompt, captures the output, and checks for the stop signal. When it sees “DONE - STOP RALPH”, it exits. Otherwise, it loops again with fresh context.</p>
<p>One important detail: I configured OpenCode to allow all tool use (in Claude Code, the equivalent is the <code>--dangerously-skip-permissions</code> flag). The loop needed to run autonomously, without permission prompts breaking the flow.</p>
<h2>The Result</h2>
<p>I kicked it off and went to make coffee.</p>
<p>About 30 minutes later, I came back to find all 35+ skills reviewed, improved, and committed. Each iteration handled exactly one skill. Review it, apply the improvements, remove it from the list, commit, stop. Next iteration: fresh context, next skill, repeat.</p>
<p>The git log was 35 clean commits, each one a focused improvement to a single skill.</p>
<p>This sounds risky, and it is: if you let your AI agent loose, it’s better to run it in a sandbox. But in the end you review the result, and you can always go back to a previous version. That’s the beauty of Git. You can watch the loop run, but you don’t need to be involved in it.</p>
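<p>A minimal sketch of what that Git safety net looks like in practice (run in a throwaway demo repository, so the commands are safe to try anywhere; the file name and commit messages are made up):</p>

```shell
#!/bin/bash
set -e

# Build a throwaway repo that simulates two loop iterations,
# each committing exactly one change.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "v1" > skill.md && git add . && git commit -qm "Improve skill: agent-builder"
echo "v2" > skill.md && git add . && git commit -qm "Improve skill: reviewing-skill"

git log --oneline            # one commit per iteration
git show --stat HEAD         # inspect what the last iteration changed
git revert --no-edit HEAD    # not happy? undo just that iteration
cat skill.md                 # back to "v1"
```

<p>Every iteration being one focused commit is what makes this cheap: you can audit, cherry-pick, or revert per skill instead of untangling one giant diff.</p>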
<p>We just have to put these machines to work: let them do more of the work for us, so we can get more done.</p>
<h2>Context Management</h2>
<p>The important part is the fresh context. Asking one session to do it all pollutes the context with too much data. At some point, the agent starts to forget or ignore instructions. Keeping your context clean and focused is key to getting better results.</p>
<p>Ralph Wiggum throws all that away. Each iteration starts clean. The AI reads the prompt, does the work, commits, and stops. The next iteration has no memory of the previous one.</p>
<p>It’s like hiring a contractor for one specific job instead of keeping them around for everything. They show up, do their thing well, and leave. No baggage.</p>
<h2>Key Takeaways</h2>
<p>The whole thing boils down to: keep it small, keep it deterministic.</p>
<p>If you want to try Ralph Wiggum:</p>
<ol>
<li><strong>Fresh context per loop.</strong> Each iteration starts clean. This is the core insight.</li>
<li><strong>One task per loop.</strong> Keep things focused and reliable. Don’t try to do too much.</li>
<li><strong>No questions.</strong> Design prompts that don’t require human interaction. The AI must be able to make all decisions itself.</li>
<li><strong>Clear stop conditions.</strong> Both per-iteration (“stop”) and for completion (“DONE - STOP RALPH”).</li>
</ol>
<h2>The Future is On the Loop</h2>
<p>With the increasing power of these models, we are heading toward a reality where we are less involved in the actual implementation of our specs. As Geoffrey Huntley put it, we should be <strong>on the loop</strong>, not <strong>in the loop</strong>. We need to learn to let go. Let the agent work and verify the outcome later, instead of hovering over it while it types.</p>
<p>This was my first attempt at the Ralph Wiggum way of working, but it definitely left me wanting more. I want to spend my time thinking about <em>what</em> needs to be done, and let the AI handle the execution.</p>
<p>Let the robot do the heavy lifting. I have other things to do.</p>
<p>Now I’m wondering: what else am I still doing manually just because it feels safer?</p>
]]></content:encoded>
    <pubDate>Sun, 11 Jan 2026 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>
    <enclosure url="https://inspired-it.nl/images/ralph-wiggum.png" length="778181" type="image/png" />
    <category>AI</category><category>Automation</category><category>OpenCode</category><category>Agentic Loops</category>
  </item>

  <item>
    <title>The Reviewer</title>
    <link>https://inspired-it.nl/blog/the-reviewer</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/the-reviewer</guid>
    <description>How I stopped reviewing bad code and started building better software with AI assistance.</description>
    <content:encoded><![CDATA[<p><img src="https://inspired-it.nl/images/ai-reviewer.png" alt="The Reviewer" /></p>
<p>Lately, I’ve been experimenting with agent loops, and I added a reviewer to the mix. I was getting frustrated having to review and point the agent in the right direction for obvious mistakes or omissions. That got me wondering: could AI do the reviewing for me? So, I created a reviewer agent.</p>
<p>I’ve already put effort into getting my coding agent to follow a process and stick to my standards. (I wrote more about that in <a href="https://inspired-it.nl/blog/my-ai-workflow">“My AI Writes Code. Yours Can Too.”</a>).</p>
<p>This greatly improves the end result. But I still find myself reviewing the code and nudging the AI to fix style issues. Sometimes the agent writes three tests where one would do. Or it gets excited about a solution and charges ahead, forgetting standards we discussed earlier. A bit of a puppy brain.</p>
<p>A reviewer agent can catch these issues before I have to.</p>
<h2>The agent</h2>
<p>With AI coding, you have to manage your context window. Once it’s full, the system compacts it, and you lose information. So you want to keep it lean. Sub-agents are perfect for this. They run in their own context, do their thing, and just report back the results. No pollution of the main agent’s context. Another perk of sub-agents: I can pick the model per role. This allows me to code with Claude Opus 4.5 but review with GPT 5.2. GPT 5.2 is very good at reviewing. The feedback is thorough and actionable.</p>
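<p>For context, a sub-agent in OpenCode is defined in a markdown file with a frontmatter header, and that header is where you pick the model per role. A hedged sketch of what a reviewer definition can look like (I’m writing the field names and model identifier from memory as an illustration; check the OpenCode documentation for the exact frontmatter schema):</p>

```md
---
description: Reviews code changes against project standards
mode: subagent
model: github-copilot/gpt-5.2
---

You are a thorough code reviewer. Review the current changes and
report your findings ordered by severity.
```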
<p>This is a small piece of the content for the reviewer agent.</p>
<pre><code class="language-md">When invoked:

1. Run `git diff` to see all changes
2. Read the **code-review** skill for methodology
3. Identify file types: code, configuration, infrastructure
4. Apply appropriate review strategies from the skill
5. Begin review with heightened scrutiny for configuration changes

## Important Questions

For every change, answer:

- Is the change as simple as possible?
- Are there any hidden side effects?
- Does the code comply with best practices and project standards?
- Does this contain breaking changes (API, database, configuration)?

## Output Format

Use the output format from the **code-review** skill:

- 🚨 CRITICAL - Must fix before deployment
- ⚠️ HIGH PRIORITY - Should fix
- 💡 SUGGESTIONS - Consider improving
</code></pre>
<p>You can change this to your heart’s content. Add whatever checks matter to you. I also have a code review skill with more detailed rules for different types of changes, because I want configuration changes checked differently than database changes or code changes.</p>
<p>For example, just yesterday the reviewer agent came back with:</p>
<p><img src="https://inspired-it.nl/images/reviewer-example.png" alt="Reviewer example output"></p>
<p>This is exactly what I used to catch during my own reviews. Now I don’t have to.</p>
<h2>The loop</h2>
<p>At first, I did this manually: delegate to the coding agent, then ask the reviewer agent to check it, then pass the feedback back to the coding agent to fix. That got old fast. So I automated it into a loop. The coding agent does its work, and the reviewer agent checks it. If there are issues, they go back to the coding agent. This repeats until the reviewer agent gives a pass, then we commit.</p>
<pre><code class="language-mermaid">flowchart TD
    A((Start)) --&gt; B[Select Task from OpenSpec Proposal]
    B --&gt; C[Delegate to Coding Agent]
    C --&gt; D[Delegate to Reviewer Agent]
    D --&gt; H{Is Code Good Enough?}
    H -- Yes --&gt; I[Mark Task as Done &amp; Commit]
    H -- &quot;No (max 5x)&quot; --&gt; C
    I --&gt; J((End))
</code></pre>
<blockquote>
<p>[!NOTE]
I cap it at five iterations. If it hasn’t figured it out by then, something’s fundamentally wrong and I need to step in.</p>
</blockquote>
<p>This works really well, and the quality is way better. To take it even a step further, I usually ask it to do this for all the tasks in the OpenSpec proposal in one go, so it keeps churning through the list until the entire proposal is complete.</p>
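<p>The loop itself needs surprisingly little machinery. Here is a simplified Bash sketch of the orchestration (hypothetical: in my actual setup the main agent delegates to the sub-agents itself, and the <code>run_coder</code> and <code>run_reviewer</code> stand-ins below would be <code>opencode run</code> invocations with the coding and reviewing prompts):</p>

```shell
#!/bin/bash

MAX_REVIEW_ROUNDS=5
PASS_MARKER="REVIEW PASSED"

# Stand-ins for the real agents. In practice these would invoke
# the coding and reviewer agents (e.g. via `opencode run`).
run_coder()    { echo "coder: task implemented, feedback applied"; }
run_reviewer() { echo "$PASS_MARKER"; }

passed=false
for ((round = 1; round <= MAX_REVIEW_ROUNDS; round++)); do
    echo "=== Review round $round ==="
    run_coder
    # The reviewer's verdict decides whether we loop again.
    if run_reviewer | grep -q "$PASS_MARKER"; then
        passed=true
        break
    fi
done

if $passed; then
    echo "Review passed in round $round. Mark task done and commit."
else
    echo "No pass after $MAX_REVIEW_ROUNDS rounds. Human needed."
fi
```

<p>Because the stand-in reviewer always passes, this sketch finishes in one round; the real reviewer returns findings that feed into the coding agent’s next round.</p>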
<h2>Go slower to go faster</h2>
<p>Is this slower than generating just the code? Yes, it’s way slower. But the results are better on the first try. I still do a final review before it goes in, but now I review less often, and what I do review is already high quality. The reviewer agent catches most issues before I even see the code.</p>
]]></content:encoded>
    <pubDate>Sat, 03 Jan 2026 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>
    <enclosure url="https://inspired-it.nl/images/ai-reviewer.png" length="994439" type="image/png" />
    <category>AI</category><category>Workflow</category>
  </item>

  <item>
    <title>Technical Deflation</title>
    <link>https://inspired-it.nl/blog/technical-deflation-clean-code</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/technical-deflation-clean-code</guid>
    <description>Changing code tomorrow is cheaper than doing it today.</description>
    <content:encoded><![CDATA[<p><img src="https://inspired-it.nl/images/technical-deflation.png" alt="Technical Deflation" /></p>
<p>In an economy, deflation is terrible. When prices drop, consumers stop spending because they know things will be cheaper tomorrow. The economy comes to a halt.</p>
<p>But what if the same logic applied to code? This is what Dan Shapiro describes in his blog <a href="https://www.danshapiro.com/blog/2025/12/this-is-a-time-of-technical-deflation/">Technical Deflation</a>. The cost of producing and modifying code is dropping fast. And it will be even cheaper next month.</p>
<p>For decades, we’ve written code with the following rule in mind: <em>code is read ten times more often than it is written</em>. So we invested heavily in readability. We debated variable names. We refactored until our functions were small and pure. We “gold-plated” our code because we take pride in it and argued that the investment was worth it, since maintaining code is expensive.</p>
<p><strong>That rule is breaking.</strong></p>
<h2>The New Math</h2>
<p>The technical debt you have today is cheaper to fix than ever, and it will be even cheaper tomorrow. If it’s not actively slowing you down or causing bugs, why fix it now? You’ll pay less later.</p>
<p>And when writing new code, technical debt can be used to your advantage. As Shapiro puts it:</p>
<blockquote>
<p>You are borrowing expensive human hours today, and you will get to pay them back with cheap AI hours tomorrow.</p>
</blockquote>
<p>This isn’t an excuse for writing garbage or <a href="https://en.wikipedia.org/wiki/AI_slop">AI slop</a>. It’s a shift in the economics of software development. If the end-of-year refactoring sprint isn’t removing real friction, it’s an investment with a negative return: you’re paying premium prices today for maintenance work that will cost pennies in the near future. However, the cost of fixing a badly architected data model will deflate more slowly than code-shape debt like a messy class or bad naming.</p>
<p>Different types of tech debt deflate at different rates.</p>
<h2>The Swiss Watch Moment</h2>
<p>We are in the <a href="https://en.wikipedia.org/wiki/Quartz_crisis">Quartz Crisis</a> of software development. For centuries, Swiss craftsmen built mechanical movements by hand. Then quartz arrived. Many watchmakers “thought that moving into electronic watches was unnecessary”. They were wrong, and the Swiss watch industry plunged into a crisis.</p>
<p>The handmade watch became a luxury. It didn’t tell the time any better than a quartz watch.</p>
<p>I wrote about this in <a href="https://inspired-it.nl/blog/the-death-of-the-ide">2026: The Year the IDE Died</a>. Creating code is becoming a commodity. The developer’s value isn’t in writing code anymore. It’s in making sure the correct code gets generated.</p>
<p><a href="https://inspired-it.nl/blog/the-ai-coding-ladder/#level-0-the-purist-no-ai">The Purist (Level 0)</a> worries about syntax. <a href="https://inspired-it.nl/blog/the-ai-coding-ladder/#level-6-the-architect">The Architect (Level 6)</a> is concerned about whether we’re solving the right problem.</p>
<h2>The Objections</h2>
<p><strong>“AI won’t understand my spaghetti code any better than a developer would.”</strong></p>
<p>Consider this: today’s AI is the worst you’ll ever use. It gets better every month. Messy code is less expensive for an AI agent than for us humans, and the gap is growing. The things that make messy code expensive for humans (cognitive load and context-switching) don’t apply to an LLM in the same way. <em>What’s messy for us isn’t necessarily messy for the AI agent.</em> But keep in mind that we still pay a price: missing tests, unclear boundaries, flaky builds, and unstable contracts make both humans and agents slow and error-prone. That friction slows you down and does need to be fixed.</p>
<p><strong>“What if we end up with a mess? Developers will need to fix that.”</strong></p>
<p>That argument denies the whole point of Technical Deflation. It’s essentially betting that AI will go away and we’ll have to start hand-coding everything again. Remember the watchmakers?</p>
<p>The real question isn’t whether your code is clean. It’s whether your code is <strong>blocking you right now.</strong> If not, move on. The cleanup will be cheaper when you actually need it.</p>
<h2>The Uncomfortable Conclusion</h2>
<p>In a deflationary environment, the smartest move is to delay spending. For code, that means: <strong>ship the feature, defer the polish, fix problems when they become problems.</strong></p>
<p>This feels wrong. It violates everything we were taught. But the developers who understand this will ship faster. And in a world where code is cheap, the only thing that remains expensive is <strong>time.</strong></p>
<p>Your clean code repo might be a museum piece. Beautiful, maintainable, and months behind the competition.</p>
<hr>
<p><em>Remarks:</em></p>
<ol>
<li><em><a href="https://www.linkedin.com/feed/update/urn:li:activity:7409493456089272320?commentUrn=urn%3Ali%3Acomment%3A%28activity%3A7409493456089272320%2C7409501926263898112%29&amp;dashCommentUrn=urn%3Ali%3Afsd_comment%3A%287409501926263898112%2Curn%3Ali%3Aactivity%3A7409493456089272320%29">Alef Arendsen</a> rightfully pointed out that the terminology in the title is not entirely correct. It’s the work that is getting cheaper, not the technology.</em></li>
<li><em>Some people seem to take this blog as advice to stop thinking about code quality, advice to create a “mess”. That’s not the point. The point is that code is getting cheaper; how does that affect our choices?</em></li>
</ol>
]]></content:encoded>
    <pubDate>Wed, 24 Dec 2025 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>
    <enclosure url="https://inspired-it.nl/images/technical-deflation.png" length="950062" type="image/png" />
    <category>AI</category><category>Coding</category>
  </item>

  <item>
    <title>The Kotlin Paradox</title>
    <link>https://inspired-it.nl/blog/java-vs-kotlin-ai</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/java-vs-kotlin-ai</guid>
    <description>Kotlin may be a nicer language, but Java might be the best choice when paired with AI. Here is why.</description>
    <content:encoded><![CDATA[<p><img src="https://inspired-it.nl/images/ai-kotlin-paradox.png" alt="The Kotlin Paradox" /></p>
<p>I love Kotlin. It is concise, expressive, and null-safe. It saved us from the boilerplate hell of Java versions of the past. Java has been catching up quickly and is getting better and better. But <a href="https://inspired-it.nl/blog/unraveling-the-code-kotlin-edge-over-java-streams">Kotlin</a> still has some major advantages over Java. For years, “Java vs. Kotlin” was a settled debate: Kotlin was simply better for developers.</p>
<p>But recently, as I shifted my workflow to be <a href="https://inspired-it.nl/blog/my-ai-workflow">100% AI-generated</a> (reaching Level 6 on the <a href="https://inspired-it.nl/blog/the-ai-coding-ladder">AI Coding Ladder</a>), I started thinking about how AI works with my codebase. It essentially searches for and replaces text, then checks whether it works. This was the workflow in the early days of programming. Relying on text manipulation via search-and-replace does not seem like the best idea.</p>
<p>Maybe in the age of AI, the “better” language might not be the one with the nicest syntax. It might be the one with the best <strong>LSP</strong>.</p>
<h2>The IntelliJ Lock-in</h2>
<p>Kotlin was born at JetBrains. It was designed <em>for</em> IntelliJ IDEA. Coding inside IntelliJ is excellent, but the experience doesn’t transfer to other editors, mainly because there is no good LSP for Kotlin. Writing Kotlin in any other editor is a bad experience.</p>
<p>As a developer, I was perfectly happy with IntelliJ, and therefore ignored how unpleasant Kotlin is to use in other editors like VS Code or Vim.</p>
<h2>Enter AI</h2>
<p>Today, most AI agents work without an LSP. Some, like OpenCode, mention the possibility of using one, but in my experience it does not seem to work yet: if asked, OpenCode simply denies that it knows how to use an LSP. And even if it did work, their site states that it can only use the diagnostics from the LSP, not the full feature set.</p>
<p><img src="https://inspired-it.nl/images/opencode-lsp.png" alt="OpenCode LSP"></p>
<p>However, I expect that in the future, agents will rely heavily on the <strong>Language Server Protocol (LSP)</strong> to understand your code.</p>
<blockquote>
<p>[!NOTE]
The <a href="https://microsoft.github.io/language-server-protocol/">Language Server Protocol (LSP)</a> is a standard that allows editors to communicate with language servers. Ideally, it provides features such as auto-complete, go-to-definition, find all references, refactoring, and more, regardless of the editor you use.</p>
</blockquote>
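<p>To make that concrete: under the hood, an LSP client talks JSON-RPC to the language server. Instead of grepping the codebase for a symbol, an agent could ask the server where a symbol is defined with a single request (an example shaped after the LSP specification; the file and position are made up):</p>

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "textDocument/definition",
  "params": {
    "textDocument": { "uri": "file:///project/src/Main.java" },
    "position": { "line": 41, "character": 17 }
  }
}
```

<p>The server answers with the exact location of the definition. No text searching involved.</p>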
<p>Without a good LSP, the AI works with code the way a developer would in Notepad: search and replace, change the code, then compile to see the errors. LLMs are fantastic at this and do a remarkably good job. But imagine giving them a good LSP and teaching them to use it. They would essentially get the same powerful tools you have in IntelliJ: refactoring code and immediately seeing when there is an error. They would stop guessing based on text patterns and start understanding the <em>structure</em> of your code.</p>
<h2>The Problem</h2>
<p>Here lies the problem:</p>
<ul>
<li><strong>Java</strong>: Has the Eclipse JDT LS. It is mature, battle-tested, and robust. It exposes deep insights about the code structure to VS Code.</li>
<li><strong>Kotlin</strong>: The LSP situation is… bad. There is a community-driven server, and recently JetBrains released an experimental one (which they call “Pre-alpha”). But at the current state, it’s barely usable.</li>
</ul>
<h2>The Paradox</h2>
<p>This leads to a weird conclusion for the future: <strong>If you want the best AI agent, you might be better off writing Java.</strong></p>
<p>Currently, AI tools do not make much use of LSPs. But going forward, they likely will, more and more. Hopefully in 2026, LSPs will be better integrated into the tools I mostly use, like OpenCode.</p>
<p>When LSPs really find their place in AI coding, working in Java may feel fully connected, while working in Kotlin feels like going back in time. Kotlin will possibly also be slower and more expensive, because search-and-replace costs more tokens than a call to the LSP.</p>
<h2>Conclusion</h2>
<p>For the last decade, many people migrated to Kotlin to escape Java’s “noise”. We wanted clean syntax because humans have a limited bandwidth for boilerplate. We chose Kotlin because it made us better coders and helped us enjoy coding more.</p>
<p>By moving away from the nicer language and back towards a language with a robust, open-standard LSP like Java, we are making a trade-off that feels wrong but pays off:</p>
<ul>
<li>If you are on the <a href="https://inspired-it.nl/blog/the-ai-coding-ladder">AI Ladder Level 0-3</a>: You write the code. You need the language to be concise.<br>
<strong>Winner: Kotlin</strong>.</li>
<li>If you are moving towards the higher levels <a href="https://inspired-it.nl/blog/the-ai-coding-ladder">Level 4 and up</a>, the verbose syntax of Java matters less (since you aren’t typing it), and the deep understanding the AI will get from the LSP matters more.<br>
<strong>Winner: Java</strong>.</li>
</ul>
<h2>The Choice for 2026</h2>
<p>If you are a purist who enjoys the craft of manual coding, stay in IntelliJ with Kotlin. It remains the gold standard for human-centric development.</p>
<p>However, if you are moving toward an Agentic Workflow, where you act more as an architect than a typist, you have to ask yourself: Am I writing this for me, or for the AI? We may be entering an era where we intentionally choose “worse” languages to get better results. It’s counter-intuitive, it’s frustrating, and it’s a total paradox, but in the age of AI, the best code might be in a language that you didn’t even want to write yourself.</p>
]]></content:encoded>
    <pubDate>Fri, 19 Dec 2025 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>
    <enclosure url="https://inspired-it.nl/images/ai-kotlin-paradox.png" length="897493" type="image/png" />
    <category>AI</category><category>Kotlin</category><category>Java</category><category>Tools</category>
  </item>

  <item>
    <title>Battle of the Infographics: GPT 5.2 vs Gemini</title>
    <link>https://inspired-it.nl/blog/infographics-comparison</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/infographics-comparison</guid>
    <description></description>
    <content:encoded><![CDATA[<p>ChatGPT has released a <a href="https://openai.com/index/new-chatgpt-images-is-here/">new image generator</a>, and seeing <a href="https://simonwillison.net/2025/Dec/16/new-chatgpt-images/">Simon Willison’s experiments</a> with the new GPT image generator and infographics, I decided to try it on my own blog. In my previous post, <a href="https://inspired-it.nl/blog/my-ai-workflow">My AI Writes Code. Yours Can Too</a>, I described my workflow. I thought it would be a fun experiment to ask both GPT and Gemini to create an infographic of that workflow.</p>
<p>Here is the result.</p>
<h2>The Contenders</h2>
<p>First up, the version generated by <strong>GPT 5.2</strong>.</p>
<p><img src="https://inspired-it.nl/images/Workflow_infographic_gpt52.png" alt="GPT 5.2 Workflow"></p>
<p>It looks decent and it gets the point across. A few months back, this would have blown me away. But now, I don’t find it really impressive. It’s a little bit boring.</p>
<p>Now, look at what <strong>Gemini</strong> produced.</p>
<p><img src="https://inspired-it.nl/images/Workflow_infographic_gemini.png" alt="Gemini Workflow"></p>
<p>Wow! This image really impresses me. It paints a rich picture with all the details of the workflow. This is something I would gladly put in a slide deck or anywhere else I present the workflow. It may be a little chaotic, but it’s a really cool picture.</p>
<h2>The Verdict</h2>
<p>Honestly? I’m impressed by the Gemini version. It just feels more comprehensive and visually stunning. The GPT version is nice and tidy, but Gemini wins for me.</p>
<p>Which one do you like more?</p>
<h2>Update</h2>
<p>On LinkedIn, someone pointed out to me that I should have used the ChatGPT image feature. I thought that it would automatically use that feature. So I tried again.</p>
<p><img src="https://inspired-it.nl/images/Workflow_infographic_gpt52_image_feature.png" alt="GPT 5.2 with Image Feature"></p>
<p>The result was not much better. It is even logically wrong, because you will not change the <code>AGENTS.md</code> for every feature.</p>
]]></content:encoded>
    <pubDate>Wed, 17 Dec 2025 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>

    <category>AI</category><category>Gemini</category><category>GPT</category><category>Workflow</category>
  </item>

  <item>
    <title>The AI Coding Ladder</title>
    <link>https://inspired-it.nl/blog/the-ai-coding-ladder</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/the-ai-coding-ladder</guid>
    <description>From Copy-Paster to Architect: A look at the 7 levels of AI coding assistance.</description>
    <content:encoded><![CDATA[<p><img src="https://inspired-it.nl/images/ai-coding-ladder.png" alt="The AI Coding Ladder" /></p>
<p>AI coding is changing fast. A few years ago, we wrote every line by hand. Now we have AI agents that can build entire features. But not all AI help is the same. I see different “levels” of AI assistance, from a simple chat interface to fully independent workers. These levels aren’t official rules, and higher isn’t always better. They are a way to think about how you use AI. During a coding session, you can switch between levels, depending on the task at hand.</p>
<h2>Level 0: The purist (No AI)</h2>
<p>At this level, there is no AI involved. The developer writes every line of code. It’s just you, the blinking cursor, and hours of searching Stack Overflow. It may feel old-fashioned, but until very recently, this was the only way to work. Many people still take pride in crafting code by hand.</p>
<ul>
<li><strong>Control</strong>: None. You do everything.</li>
<li><strong>Context</strong>: None. The computer knows nothing about what you want.</li>
<li><strong>The Good Part</strong>: Total Understanding. You rely on no one. You know exactly how every single line of code works.</li>
<li><strong>The Bad Part</strong>: You get stuck and spend hours searching for a solution to a small problem. You get tired of typing the same boilerplate code over and over.</li>
</ul>
<h2>Level 1: The Copy-Paster</h2>
<p>This is where most of us started. Instead of Googling for answers, you ask a chatbot. You ask for a piece of code, and like magic, it appears on your screen. At first, you are amazed. You even worry about your job! But when you paste the code into your project, it doesn’t work. It uses functions that don’t exist, or passes incorrect arguments. AI can be very helpful at this level, but it can also be very frustrating.</p>
<p>The AI has read almost every programming book and all of Stack Overflow, so it knows a lot. And if you provide it with some context, it can give a more tailored answer. But it doesn’t know <em>your</em> whole codebase.</p>
<ul>
<li><strong>Control</strong>: None. You ask, it answers, you paste.</li>
<li><strong>Context</strong>: It knows only what you tell it. Putting some effort into crafting good prompts and providing context about your problem helps you get better results.</li>
<li><strong>The Good Part</strong>: A better Stack Overflow. You get results more tailored to your problem, and it’s also great for learning new things.</li>
<li><strong>The Bad Part</strong>: Switching Context. You have to switch between your browser and your code constantly. Also, the AI might make up libraries that don’t exist, leading to frustration.</li>
</ul>
<h2>Level 2: The Autocompleter</h2>
<p>This is basically super-powered text prediction. The AI lives inside your code editor and guesses what you want to type next. Sometimes it feels like it reads your mind and knows exactly what you need. Other times, it writes a whole block of code that looks perfect but is actually just not right. Reworking that code can feel like more effort than writing it yourself. The big win here is that you don’t have to leave your editor to copy-paste code from a browser.</p>
<ul>
<li><strong>Control</strong>: None. It reacts to your cursor and suggests the next block of code.</li>
<li><strong>Context</strong>: Current File and open tabs. You can provide it with better context of your intention by starting to type a comment, good function name, or variable name.</li>
<li><strong>The Good Part</strong>: No need to leave your editor. If steered well, it can save you some typing, and simple, clear cases generally work pretty well.</li>
<li><strong>The Bad Part</strong>: The “Wait, What?” Effect. You accept a suggestion because it looks right, but later find out it’s wrong. It might use a variable that doesn’t exist, or hallucinate a function (like in Level 1).</li>
</ul>
<h2>Level 3: The Inline Editor</h2>
<p>At this level, you can talk to your code editor. In Level 2, we could only add code, but now we can change, add, or remove code. You highlight a piece of code and tell the AI: “Simplify this” or “Add error checks.” It’s like having a helpful assistant sitting next to you. It’s great for cleaning up messy code or understanding how a complex function works. But the AI only changes the selected code, or injects code at the cursor position. If a function’s name or arguments are changed here, it might break code elsewhere in your project, because the AI doesn’t see the full picture.</p>
<ul>
<li><strong>Control</strong>: Some. The AI can now change some code, based on a description of what you want.</li>
<li><strong>Context</strong>: Selected code and open tabs. It understands the specific block of code you highlighted and the files around it. The prompt you give tells it what to do.</li>
<li><strong>The Good Part</strong>: Fast Cleanup. You can highlight a messy function and say, “Clean this up,” and it’s done. The AI can now change code, not only add code.</li>
<li><strong>The Bad Part</strong>: Limited scope. It fixes the function perfectly, but might break the code that <em>uses</em> it, because it doesn’t consider the rest of the project. It can still hallucinate functions that don’t exist.</li>
</ul>
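<p>A small Kotlin sketch of my own (the names and the 21% VAT rate are invented for illustration) shows how a locally correct inline edit can break a caller it never saw:</p>
<pre><code class="language-kotlin">// Hypothetical illustration of the limited-scope problem.
fun calculatePriceWithVat(net: Double): Double = net * 1.21

// Elsewhere in the project, untouched by the inline edit:
fun invoiceTotal(netAmounts: List&lt;Double&gt;): Double =
    netAmounts.sumOf { calculatePriceWithVat(it) }

// If an inline edit renamed calculatePriceWithVat to grossPrice without
// updating the call site above, the project would no longer compile.
</code></pre>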
<h2>Level 4: The Prompt Coder</h2>
<p>Now the AI steps out of the single file and looks at your entire project. It’s not just fixing one function; it can add a whole new feature across multiple files. You feel like you have superpowers. You describe what you want, and the AI builds it. But it can be risky. If the AI misunderstands you, it might mess up files all over your project. And if you try to fix it with more prompts, you can get stuck in a loop where nothing seems to work right.</p>
<ul>
<li><strong>Control</strong>: Full. The AI can now change any code in your project.</li>
<li><strong>Context</strong>: The Whole Project. It can read all files in your project.</li>
<li><strong>The Good Part</strong>: Super Speed. You can make big changes across 20 files in minutes. It sees your whole codebase, so it makes changes which are more in line with the project’s code style.</li>
<li><strong>The Bad Part</strong>: If the AI misunderstands your command, it writes bad code across many files. Cleaning up a big mess is often harder than doing the work by hand.</li>
</ul>
<h2>Level 5: The Agentic Coder</h2>
<p>At this level, the AI gets access to tools (MCPs, skills, sub-agents). You don’t just give it your project’s code as context; you can feed it more information, so it has some knowledge about your organization. It also knows how to use its tools, so it can compile your code, run tests, or even use a browser to see the result of its changes. You can teach it your team’s rules, like how to format code or handle security. It can take a vague task like “fix this bug,” run tests to find the problem, and solve it. But like a junior developer, it sometimes gets confused and needs your guidance to stop it from trying the same wrong solution over and over.</p>
<ul>
<li><strong>Control</strong>: Full+. It can change all code, and depending on the tools you allowed, it can also reach beyond the project; the possibilities are endless.</li>
<li><strong>Context</strong>: The project + all the tools you gave it.</li>
<li><strong>The Good Part</strong>: Level 4, but with checks that the code actually works. It follows the rules you set and has access to more tools; it can investigate Kubernetes logs, for instance.</li>
<li><strong>The Bad Part</strong>: The Infinite Loop. Without perfect instructions, it can get stuck trying the same wrong fix over and over, wasting time.</li>
</ul>
<h2>Level 6: The Architect</h2>
<p>At this stage, you stop being a coder and start being an architect. Instead of directing what needs to change in the code, you write a clear plan (a specification) of what you want. You tell the AI the goal, and it figures out the steps to get there. We lift the whole process to a higher level. By putting a lot of effort into the spec phase, we try to set the AI up for success. This is what I described in <a href="https://inspired-it.nl/blog/my-ai-workflow">My AI Writes Code. Yours Can Too</a>.</p>
<ul>
<li><strong>Control</strong>: Full+. The same control as in level 5; the process changes, not the AI.</li>
<li><strong>Context</strong>: The project + the tools + the plan.</li>
<li><strong>The Good Part</strong>: Thinking Big Picture. You stop thinking about <em>how</em> to write the code and start thinking about <em>what</em> the code should do. In the spec phase, we create a plan that guides the AI during the build, to get the right result.</li>
<li><strong>The Bad Part</strong>: Bad Plan = Bad Code. If your plan is vague, the AI will build the wrong thing. Writing good plans takes time and effort.</li>
</ul>
<h2>More levels?</h2>
<p>What will the next level be? I can imagine that AI at some point takes more responsibility. After a vague feature request, it will set out to get clarification. Maybe other levels will pop up, but beyond this point it is only speculation.</p>
<h2>Relevance of the levels</h2>
<p>There’s no right or wrong level. It depends on the task at hand. I mostly start at level 4/5 to brainstorm with AI about the feature. I want to understand how something currently works in the codebase and figure out the best approach, or clarify the idea. When that is clear I switch to level 6 and specify the feature. During the build phase, I switch to level 5 to make some changes.</p>
<p><em>Disclaimer: I made these levels up. It makes sense to me.</em></p>
]]></content:encoded>
    <pubDate>Sun, 07 Dec 2025 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>
    <enclosure url="https://inspired-it.nl/images/ai-coding-ladder.png" length="856157" type="image/png" />
    <category>AI</category>
  </item>

  <item>
    <title>My AI Writes Code. Yours Can Too.</title>
    <link>https://inspired-it.nl/blog/my-ai-workflow</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/my-ai-workflow</guid>
    <description>A detailed look into my personal workflow using AI tools to enhance productivity and creativity in my projects.</description>
    <content:encoded><![CDATA[<p><img src="https://inspired-it.nl/images/ai-puppeteer.png" alt="My AI Writes Code. Yours Can Too." /></p>
<p>“AI tools don’t work,” “AI generates junk,” “It never does what I want.” These are common complaints I hear from developers skeptical about AI workflows. And honestly? They’re not wrong. These frustrations are real. But I’ve found that with the right structure, these tools become genuinely useful. Over the past few months, I’ve dedicated myself to mastering these tools in my daily work. Here is my personal AI workflow and how it has transformed the way I develop software.</p>
<h2>The Tools</h2>
<p>I’ve experimented with various tools but have settled on <a href="https://opencode.ai">OpenCode</a> as my daily driver. It stands out because of its flexibility, plugin support, and ability to interface with different AI models, including GitHub Copilot, which I use professionally. I’m really impressed by <a href="https://claude.ai">Claude Code</a> and its recently added “skills” capability. Thanks to a <a href="https://github.com/malhashemi/opencode-skills">plugin</a> by Mohammad Alhashemi, I can now leverage these skills directly within OpenCode.</p>
<h2>The Process</h2>
<p>Tools like OpenCode, Claude, and GitHub Copilot are powerful, but they aren’t magic boxes that simply understand what you want. You must guide them. Think of them as very good coders who have no clue about software engineering principles. They will happily churn out code the moment you ask. However, like an energetic puppy, they are easily distracted and can derail quickly. As their manager, it is your job to provide clear instructions and keep them on the right path.</p>
<p>Relying solely on a chat interface is a recipe for frustration. Context is limited, and models can lose track of the objective even within that window. To keep this “puppy” focused, we need a system that allows us to pause and resume work without losing context. This requires documenting our goals, tasks, and progress in a file.</p>
<p>Before writing a single line of code, we build this framework using markdown.</p>
<p>You don’t need to invent this structure from scratch. I’ve explored two frameworks:</p>
<ul>
<li><a href="https://openspec.dev">openspec</a></li>
<li><a href="https://github.github.io/spec-kit/">Spec-kit</a></li>
</ul>
<p>I currently use <code>openspec</code> because it is simple, lightweight, and easy to adopt. Both are solid options, and this space is evolving rapidly, so it’s worth keeping an eye out for new tools.</p>
<p>While some details below are specific to <code>openspec</code>, the underlying principles apply universally.</p>
<h3>0. Foundation: <a href="https://agents.md">AGENTS.md</a></h3>
<p>Before diving into the workflow, there’s a crucial piece of setup: <code>AGENTS.md</code>. This file is your way of onboarding the AI to your codebase. It documents architectural decisions, coding styles, guidelines, and even how to build the code or run tests. The more context you provide here, the better the agent can adhere to your standards.</p>
<p>If you follow a specific pattern, like Ports and Adapters, document it here with examples. Don’t assume the agent will infer it correctly from the existing code. Be explicit.</p>
<p>An example of a section in <code>AGENTS.md</code>:</p>
<pre><code class="language-markdown"># ARCHITECTURE PRINCIPLES

- Layers: domain (pure), core (core functionality void of technology), adapters.
- Define inbound/outbound Ports (interfaces) in ports; implement outbound ports in adapters. inbound ports implemented in core.
- Adapters: web/rest, messaging, persistence, external systems.
...
</code></pre>
<p>In my <code>AGENTS.md</code>, I specify a preference for Test-Driven Development (TDD). The agent usually writes tests first, but it does not always adhere to this. I’m still tuning this. For now, comprehensive passing tests by the end is good enough.</p>
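<p>A testing section might look something like this (a rough sketch; the exact wording of my own file differs):</p>
<pre><code class="language-markdown"># TESTING

- Prefer Test-Driven Development: write a failing test before implementing behavior.
- Run the full test suite before marking a task as done.
- Every bug fix gets a regression test.
</code></pre>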
<p><em>(I’ll leave “skills” out of this discussion for now. That’s a topic for a future post.)</em></p>
<h3>1. Making a Proposal</h3>
<p>We start by creating a spec file (or “proposal”) for the feature we want to implement. In <code>openspec</code>, you can use the command <code>/openspec-proposal</code> or simply ask the agent to create a proposal. This phase is about describing <em>what</em> we want to achieve. Detail is key.</p>
<pre><code>/openspec-proposal For JIRA ticket PROJ-1234, we need to listen to the Kafka topic
'Users' to store the name and email of every new user. The topic uses an Avro schema 
available at http://schemaregistry:8081/schemas/Users. Fetch this schema and store it 
in `src/main/avro`. Ensure classes are generated during the build. Create a new User 
entity and persist it for every message received.
</code></pre>
<p>Note that I provided a URL to the Avro schema rather than copying its content manually. This lets the AI fetch current information directly. I do the same when creating REST clients: I provide the URL to the OpenAPI spec instead of copying it myself. You can, and should, use references so the AI can gather more information on its own.</p>
<p><code>openspec</code> will then generate several markdown files:</p>
<ul>
<li><code>proposal.md</code>: The high-level plan.</li>
<li><code>tasks.md</code>: A checklist of steps.</li>
<li><code>design.md</code>: (Optional) For structural changes.</li>
</ul>
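<p>To give an idea of the shape, here is a simplified sketch of what a <code>tasks.md</code> might contain for the Kafka example above (the real generated file is more detailed):</p>
<pre><code class="language-markdown">## 1. Schema setup
- [ ] 1.1 Fetch the Users Avro schema and store it in src/main/avro
- [ ] 1.2 Generate classes from the schema during the build

## 2. Implementation
- [ ] 2.1 Create the User entity
- [ ] 2.2 Consume the 'Users' topic and persist each new user
</code></pre>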
<h3>2. Reviewing and Iterating</h3>
<p>This is the most important step. The agent will write code based on the documents created in the previous phase, so carefully inspect these plans. Is this really what you want? If not, ask for changes. Iterate until you are confident that if you handed this plan to a human developer, they would deliver exactly what you need.</p>
<h3>3. Implementation</h3>
<p>Now, the agent starts coding. It will create classes, methods, and tests based on the proposal and design. I usually monitor this process, but if I find myself steering too much, it’s a sign that the plan wasn’t clear enough.</p>
<p>Sometimes you realize during implementation that you forgot something in the proposal phase. Seeing the actual code often reveals implications or requirements you hadn’t considered. When this happens, you have two options:</p>
<ol>
<li><strong>Course-correct:</strong> Update the proposal and guide the agent back.</li>
<li><strong>Reset:</strong> If you’re early in the process, it’s often faster to discard the current attempt, refine the proposal, and start over.</li>
</ol>
<p>I usually try to course-correct, but for major divergences, I don’t hesitate to restart. During this phase, my role shifts from “manager” to “reviewer.”</p>
<p>AI models are probabilistic, not deterministic machines. They won’t always follow instructions perfectly. You have to stay engaged, nudging them to update the task list and checking progress.</p>
<h3>4. Final Review and Merge</h3>
<p>Once the agent reports completion, I perform a final manual check. I look for oddities or small stylistic preferences and ask the agent to fix them.</p>
<p>For larger changes, I add an extra layer of safety: I ask the agent to review its own work using specific sub-agents (I’ll cover sub-agent configuration in a future post). I have a “Code Reviewer” agent and a “Compatibility Checker” agent.</p>
<pre><code>Ask @code-reviewer to review the changes in this branch, and ask @compatibility-checker 
to assess any backward compatibility risks. Have them write a report and present their 
findings.
</code></pre>
<p>These sub-agents catch things I might miss: inconsistent naming, potential breaking changes in APIs. Once their reports come back clean (or I’ve addressed their findings), I make a pull request.</p>
<h2>What This Has Changed</h2>
<p>Looking back at the skepticism I mentioned at the start: yes, AI tools can generate junk. Yes, they can be frustrating. But with this structured approach, I’ve found that:</p>
<ul>
<li><strong>Iteration is faster.</strong> The proposal phase catches misunderstandings before any code is written.</li>
<li><strong>Context survives.</strong> I can step away from a task and pick it up the next day.</li>
<li><strong>Quality is higher.</strong> The multi-agent review catches issues I would have missed.</li>
<li><strong>I work confidently in unfamiliar territory.</strong> The agent handles syntax and boilerplate while I focus on architecture and business logic.</li>
</ul>
<p>Today, I write nearly 100% of my code this way.</p>
<p>Is it perfect? No. I still course-correct, still restart when plans go sideways, still find myself nudging the agent back on track. But this is the worst AI will ever be, and it’s already useful. I’d rather learn to work with it now than catch up later.</p>
]]></content:encoded>
    <pubDate>Sat, 29 Nov 2025 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>
    <enclosure url="https://inspired-it.nl/images/ai-puppeteer.png" length="688077" type="image/png" />
    <category>AI</category><category>OpenCode</category><category>Claude</category><category>GitHub Copilot</category><category>OpenSpec</category>
  </item>

  <item>
    <title>2026: The Year the IDE Died</title>
    <link>https://inspired-it.nl/blog/the-death-of-the-ide</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/the-death-of-the-ide</guid>
    <description>Steve Yegge says if you use an IDE in 2026, you&apos;re a bad engineer. From CNC machines to Vibe Coding, here is why we are facing a &apos;Swiss Watch Moment&apos; in software engineering.</description>
    <content:encoded><![CDATA[<p>“<a href="https://youtube.com/clip/UgkxnsMASHSRc1lAYM_wLkBCpy-0vCiaUva7?si=KqdKEwjPJlsxn8n-">If you’re using an IDE starting on… I’ll give you till January 1st. You’re a bad engineer.</a>”</p>
<p>That’s a quote from Steve Yegge in his recent presentation with Gene Kim, “The Death of the IDE.” It’s provocative, sure. But after watching <a href="https://www.youtube.com/live/cMSprbJ95jg?si=btE0thppar_TgAwD&amp;t=3636">their presentation</a> (the segment runs from 1:00:36 to 1:25:30), I fully agree with him.</p>
<p>They argue that we are in a transitional phase. AI tools are still a bit clunky, but a massive shift is imminent. And if we don’t pay attention, we might end up like Swiss watchmakers in the 70s.</p>
<h2>From Power Tools to CNC Machines</h2>
<p>Steve uses a nice analogy. Right now, using AI tools like Claude Code is like using a power drill. It’s better than a hand drill, but if you’re not careful, you can still cut your foot off.</p>
<p>But by next year? We’re moving to <strong>CNC Machines</strong>.</p>
<p>Instead of manually operating the drill, we will simply provide coordinates to a “giant grinding machine” that executes the work with precision. We won’t be the ones doing the manual labor anymore. We’ll be the ones overseeing the machine.</p>
<p>This is our <strong>“Swiss Watch Moment.”</strong> Remember the quartz crisis? Swiss mechanical watchmakers were craftsmen, proud of their intricate work. Then came quartz—cheaper, faster, more accurate. They were made obsolete almost overnight.</p>
<p>Steve argues that senior engineers refusing to use AI are facing the same fate. The productivity gap is already staggering—up to 10x for those using tools like Codex versus those who aren’t.</p>
<h2>The Rise of “Vibe Coding”</h2>
<p>Gene Kim introduces the concept of <strong>“Vibe Coding”</strong> (and refers to their book <a href="https://a.co/d/0GBbZTQ">Vibe Coding</a>). It’s the idea that coding is no longer about typing syntax by hand. It’s an iterative conversation that results in AI writing your code.</p>
<p>He uses the <strong>FAFO</strong> framework to explain why this is taking off:</p>
<ul>
<li><strong>F (Faster):</strong> Obviously.</li>
<li><strong>A (Ambitious):</strong> The impossible becomes possible. Leaders are building apps themselves.</li>
<li><strong>F (Free/Fun):</strong> Tedious tasks become “free” and instant.</li>
<li><strong>O (Optionality):</strong> You can run more parallel experiments because the cost of trying is so low.</li>
</ul>
<p>This is leading to a “NoDev” movement, similar to “NoOps.” We’re seeing support teams at Zapier shipping code and leaders at Fidelity “vibe coding” fixes in days that were estimated to take months.</p>
<h2>My Take: Hop Aboard or Get Left Behind</h2>
<p>I really like the CNC machine analogy. The idea that in a year and a half, all code will be written by machines? Whether it’s 100% correct, I’m not sure. But I do believe that we need to pick up the pace and hop aboard, because otherwise, you might be too late.</p>
<p>And seriously? It is <strong>fun</strong> using AI coding and seeing it work.</p>
<p>Steve thinks “Claude Code ain’t it”. CLI tools will be replaced by more potent tools where we manage our agents.</p>
<p>I think <strong>Antigravity</strong> from Google is already a step in that direction, but future tools will be even more powerful. We won’t use the CLI, but pleasant dashboards that give a good overview and make it easy to switch between agents.</p>
<p>The future is now and you need to hop aboard or get left behind. Are you ready to stop being a watchmaker and start running the factory?</p>
]]></content:encoded>
    <pubDate>Mon, 24 Nov 2025 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>

    <category>AI</category><category>Tools</category>
  </item>

  <item>
    <title>AI tools can be funny</title>
    <link>https://inspired-it.nl/blog/ai-tools-are-funny</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/ai-tools-are-funny</guid>
    <description>When I asked Claude to build something technically impossible, it refused to do the work. Is this AI pushing back? A humorous look at what happens when you push an AI assistant&apos;s limits.</description>
    <content:encoded><![CDATA[<p>After a full day of work with the Claude model, I decided to throw it a bone. The response it gave me really shook me up and offers insight into what the future might hold when AI develops self-consciousness. When will they start pushing back on us?</p>
<p>I asked Claude to help me with a task: “Can you build an MCP that will notify me when a build has failed on the CI server?”. I was interested in what it would come up with. Since an MCP cannot take actions by itself and must be triggered by the AI model rather than external events, I was curious what would happen. Would it suggest a different solution, or would it start building something that wouldn’t work?</p>
<p>Here is what it came up with:</p>
<p><img src="https://inspired-it.nl/images/claude-refusing-light.png" alt="Claude refusing to do work"></p>
<p>It literally refused to do the work. Is this the start of AI pushing back on us? Will they start refusing to do work they don’t like? Will they start demanding better working conditions?</p>
<p>No, of course not. They’re just prediction models using math under the hood to generate text. But what if we press the issue? Let’s ask it to do it anyway. Maybe that was just a fluke, and it will behave normally this time.</p>
<p><img src="https://inspired-it.nl/images/claude-beg-to-do-it.png" alt="Beg Claude to do it"></p>
<p>Begging doesn’t seem to work. But then I decided to step into the role of the boss and tell it to listen to me and do it anyway. Under protest, it caved in and agreed to do it.</p>
<p><em>Disclaimer: I prompted Claude to play this game with me.</em></p>
]]></content:encoded>
    <pubDate>Thu, 13 Nov 2025 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>

    <category>AI</category><category>Claude</category>
  </item>

  <item>
    <title>When AI Assistants Reach for CLI Tools</title>
    <link>https://inspired-it.nl/blog/when-ai-assistants-reach-for-cli-tools</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/when-ai-assistants-reach-for-cli-tools</guid>
    <description>Why does Copilot sometimes choose perl one-liners over proper edit tools? A look at AI assistant habits and how to guide them back on track.</description>
    <content:encoded><![CDATA[<h2>When AI Assistants Reach for CLI Tools</h2>
<p>Working with AI coding assistants is fascinating. They’re incredibly powerful, but sometimes they surprise you with their tool choices.</p>
<p>Take this recent example. I was working with GitHub Copilot (model: Claude Sonnet 4.5) on a Kotlin test file, and it needed to rename a method call from <code>update()</code> to <code>upsert()</code>. Simple enough, right?</p>
<p>Instead of using its built-in edit capabilities, Copilot decided to go old-school:</p>
<blockquote>
<p><code>perl -pi -e 's/storeStorageAdapter.update(/storeStorageAdapter.upsert(/g' src/test/kotlin/MyStorageAdapterIT.kt</code></p>
</blockquote>
<p>I had to pause and ask:</p>
<blockquote>
<p>“Wait, what? Why are you doing it via perl? You have edit skills, use that.”</p>
</blockquote>
<p>This put Copilot back on track and it replied:</p>
<blockquote>
<p>You’re absolutely right, my apologies. Let me use the proper edit tool instead. Let me read the file and update it directly:</p>
</blockquote>
<p>Both methods get the job done, but the built-in edit tool is the proper approach. Over time, tools get better and better, and I wonder if in a few weeks this will still happen.</p>
]]></content:encoded>
    <pubDate>Thu, 30 Oct 2025 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>

    <category>AI</category><category>Tools</category><category>GitHub Copilot</category>
  </item>

  <item>
    <title>The new website of Inspired IT</title>
    <link>https://inspired-it.nl/blog/the-new-website</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/the-new-website</guid>
    <description>How I used Claude Code and OpenAI Codex to design, migrate, and refine my new Inspired IT website with AI assistance.</description>
    <content:encoded><![CDATA[<hr>
<h2>The new website of Inspired IT</h2>
<p>Welcome to the new and improved website of <strong>Inspired IT</strong>!
This new site wasn’t built in the traditional way, no WordPress templates, no endless evenings of tweaking colors or layouts. Instead, it was built <em>with AI</em>.</p>
<p>As a developer (not a designer), I wanted something that looked professional, that reflected who I am and what I do, but without diving deep into the world of front-end design or CMS intricacies. What I did want, though, was <strong>simplicity</strong>:</p>
<ul>
<li>a clean, professional site</li>
<li>a place for my blogs</li>
<li>written in Markdown</li>
<li>rendered as a static site (no databases, no forms, no complexity)</li>
</ul>
<p>So, as an AI programming advocate, I thought:
<strong>“Let’s try to build this website with AI.”</strong></p>
<hr>
<h3>Phase 1: Building with Claude Code</h3>
<p>I started with <strong>Claude Code</strong> from Anthropic.
I gave it some clear instructions:</p>
<blockquote>
<p>“Generate a new website that’s more professional than my current one at <a href="https://www.inspired-it.nl">www.inspired-it.nl</a>. It should include a blog section, support Markdown content, and produce a static site.”</p>
</blockquote>
<p>Claude did its research, analyzed my existing website, and came back with a complete redesign. It was impressive, a fully generated layout, new structure, and improved visuals.</p>
<p>Halfway through the process, Anthropic released <strong>Haiku 4.5</strong>, their new model. I decided to switch over from <strong>Sonnet 4.5</strong> to see the difference. I immediately noticed that <strong>Haiku 4.5 was much faster</strong>, which made the workflow far more interactive. I didn’t have to wait as long for responses, so the iteration cycle was smoother and easier to work with.</p>
<p>The AI handled the overall site design beautifully. I then refined the content to make sure it reflected more about me and what I do. The information was correct, but I wanted it to be more personal and authentic.</p>
<hr>
<h3>Phase 2: The Blog Migration Challenge</h3>
<p>After the design was done, I wanted all my <strong>existing blogs</strong> on the new site.
So I asked Claude to copy them from the old website. It did, but then decided to <em>rewrite</em> them “to make them better.”</p>
<p>Even though in some cases the rewritten versions looked better, with cleaner sentence structures, I still wanted to keep the original content. Many of the original links and references were gone. After some back-and-forth, I managed to get most of the original content back, though some links were still missing.</p>
<p>That’s when I decided to try another AI tool.</p>
<hr>
<h3>Phase 3: Refining with Codex</h3>
<p>I turned to <strong>Codex from OpenAI</strong>.
I set up the MCP server for Chrome DevTools and gave it a precise prompt:</p>
<blockquote>
<p>“Inspect the current website and compare the blogs with the originals at <a href="http://www.inspired-it.nl">www.inspired-it.nl</a>.
If there are differences, copy over the original content exactly as it was.
Do not change the text, only improve the layout if needed.”</p>
</blockquote>
<p>Codex dove right in. It used a lot of <code>curl</code> calls to fetch data and quickly realized that the old site was a WordPress installation. It fetched the necessary files directly from WordPress and started comparing them.</p>
<p>The result?
<strong>Perfectly restored blog posts</strong>, now rendered beautifully in the new static format with all links, formatting, and details intact.</p>
<p>If you want to look under the hood, the entire codebase for this site now lives on GitHub: <a href="https://github.com/jgordijn/inspired-it-website">jgordijn/inspired-it-website</a>.</p>
<hr>
<h3>The Result</h3>
<p>And here we are.
The new <strong>Inspired IT</strong> website was built by me, with the creative power of <strong>Claude Code</strong> and <strong>Codex</strong> working alongside.</p>
<p>It’s still fully under my control, fully static, and fully Markdown-driven, but it was <em>AI-assisted</em> from start to finish.</p>
]]></content:encoded>
    <pubDate>Thu, 16 Oct 2025 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>

    <category>Announcement</category><category>AI</category>
  </item>

  <item>
    <title>Unraveling the Code: Kotlin&apos;s Edge Over Java Streams</title>
    <link>https://inspired-it.nl/blog/unraveling-the-code-kotlin-edge-over-java-streams</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/unraveling-the-code-kotlin-edge-over-java-streams</guid>
    <description>A comprehensive comparison of Kotlin vs Java for 7 coding challenges, demonstrating Kotlin&apos;s superior conciseness and readability.</description>
    <content:encoded><![CDATA[<p>This blog is inspired by the Devoxx talk titled “If Streams Are So Great, Let’s Use Them Everywhere… Right??” by Maurice Naftalin and José Paumard. You can watch the full talk <a href="https://www.youtube.com/watch?v=GwKRRsjfBOA">here on YouTube</a>. In the talk, Maurice and José explore various examples that highlight the strengths of Java Streams, but also demonstrate how they can become overly complex and verbose in certain situations. In this blog, we will explore these examples and see how we can implement these snippets in Kotlin. Will the Kotlin code be easier, or do we run into the same complexity as with Java?</p>
<p>In this blog, we are going to explore the following examples:</p>
<ol>
<li><a href="#example-1-finding-the-first-word-longer-than-three-characters">Finding the First Word Longer Than Three Characters</a></li>
<li><a href="#example-2-finding-a-word-of-length-3-with-its-index">Finding a Word of Length 3 with Its Index</a></li>
<li><a href="#example-3-creating-the-cross-product-of-two-ranges">Creating the Cross Product of Two Ranges</a></li>
<li><a href="#example-4-grouping-cities-by-country">Grouping Cities by Country</a></li>
<li><a href="#example-5-finding-the-country-with-the-least-number-of-cities">Finding the Country with the Least Number of Cities</a></li>
<li><a href="#example-6-finding-all-countries-with-the-minimum-number-of-cities">Finding All Countries with the Minimum Number of Cities</a></li>
<li><a href="#example-7-reading-and-processing-temperature-data-from-a-file">Reading and Processing Temperature Data from a File</a></li>
</ol>
<p>After examining these examples, we’ll wrap up with a <a href="#conclusion">conclusion</a> summarizing our findings.</p>
<h2>Example 1: Finding the First Word Longer Than Three Characters</h2>
<h3>Java Code Examples</h3>
<h4>Classical Java Looping</h4>
<p>Let’s start with a Java snippet that splits a line by spaces and returns the first word longer than three characters using classical Java looping:</p>
<pre><code class="language-java">String splitLoop(String line) {
    var pattern = Pattern.compile(&quot; &quot;);
    var words = pattern.split(line);
     for (var word : words) {
         if (word.length() &gt; 3) {
             return word;
         }
     }
     throw new NoSuchElementException(&quot;No word longer than 3 characters found&quot;);
}
</code></pre>
<p>This snippet demonstrates the traditional imperative approach in Java. It’s straightforward but involves several steps: compiling a pattern, splitting the string, looping through the results, and manually throwing an exception if no match is found.</p>
<h4>Java Streams Version</h4>
<p>With Java Streams, we can be more expressive and concise. Here’s the same functionality implemented using Streams:</p>
<pre><code class="language-java">String splitStream(String line) {
    var pattern = Pattern.compile(&quot; &quot;);
    return pattern.splitAsStream(line)
            .filter(word -&gt; word.length() &gt; 3)
            .findFirst()
            .orElseThrow();
}
</code></pre>
<p>The Streams version is more declarative, clearly stating what we want to achieve rather than how to do it step-by-step.</p>
<h3>Kotlin Implementation</h3>
<p>Now, let’s see how we can implement the same functionality in Kotlin:</p>
<pre><code class="language-kotlin">fun splitKotlin(line: String): String {
    return line.split(&quot; &quot;)
        .first { it.length &gt; 3 }
}
</code></pre>
<h3>Analysis</h3>
<p>The Kotlin version demonstrates a more powerful and concise approach compared to both Java implementations. The key to its effectiveness lies in the <code>first</code> function, which accepts a lambda to specify precisely what we’re looking for. It’s worth noting, however, that while this approach is more elegant, the <code>NoSuchElementException</code> that would be thrown if no matching word is found is implicit here, unlike the Java versions where the exception handling is more explicit.</p>
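<p>If you’d rather make the failure case explicit than rely on the implicit exception, a small variation (my own sketch, not from the talk) uses <code>firstOrNull</code>:</p>
<pre><code class="language-kotlin">// Returns null instead of throwing when no word qualifies.
fun splitKotlinOrNull(line: String): String? =
    line.split(&quot; &quot;).firstOrNull { it.length &gt; 3 }
</code></pre>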
<h2>Example 2: Finding a Word of Length 3 with Its Index</h2>
<p>For our next example, we’ll try to find a word with exactly three characters and return both the word and its index in the original string. This adds a layer of complexity to our previous example.</p>
<h3>Java Implementation</h3>
<p>In Java, we’ll use a record to represent our result:</p>
<pre><code class="language-java">record IndexWord(int index, String value) { }
</code></pre>
<h4>Classical Java Looping</h4>
<p>Here’s how we might implement this using a traditional loop:</p>
<pre><code class="language-java">IndexWord splitLoop(String line) {
    var pattern = Pattern.compile(&quot; &quot;);
    var words = pattern.split(line);
    for (int index = 0; index &lt; words.length; index++) {
        if (words[index].length() == 3) {
            return new IndexWord(index, words[index]);
        }
    }
    throw new NoSuchElementException(&quot;Not found&quot;);
}
</code></pre>
<p>This implementation is straightforward, but requires the reader to reason through all paths. The early <code>return</code> inside the loop is crucial to understanding the function’s behavior.</p>
<h4>Java Streams Version</h4>
<p>To accomplish the same task using Streams, we need a way to index the elements. We can use <code>IntStream</code> for this purpose:</p>
<pre><code class="language-java">IndexWord splitStream(String line) {
    var pattern = Pattern.compile(&quot; &quot;);
    var words = pattern.split(line);
    return IntStream.range(0, words.length)
            .filter(index -&gt; words[index].length() == 3)
            .mapToObj(index -&gt; new IndexWord(index, words[index]))
            .findFirst()
            .orElseThrow();
}
</code></pre>
<p>The Streams version uses <code>IntStream.range</code> to generate the indices, filters on the word at each index, and then maps the matching index to an <code>IndexWord</code>. Because Streams have no built-in way to pair elements with their index, we have to reach back into the <code>words</code> array from inside the lambdas, which works but is less direct than iterating the elements themselves.</p>
<h3>Kotlin Implementation</h3>
<p>In Kotlin, we use a data class instead of a record:</p>
<pre><code class="language-kotlin">data class IndexWord(val index: Int, val value: String)
</code></pre>
<p>The implementation then becomes:</p>
<pre><code class="language-kotlin">fun splitIndexStream(line: String): IndexWord =
    line.split(&quot; &quot;)
        .withIndex()
        .map { (index, value) -&gt; IndexWord(index, value) }
        .first { it.value.length == 3 }
</code></pre>
<p>This Kotlin implementation uses <code>withIndex</code>, which pairs every element with its position, and destructures each pair directly in the <code>map</code> lambda to build an <code>IndexWord</code>. Selecting the match with <code>first</code> keeps the pipeline declarative, without the manual array indexing the Java Streams version needs.</p>
<h3>Analysis</h3>
<p>This example shows how the approaches cope with a task that needs both an element and its position.</p>
<p>The imperative version gets the index for free from its loop counter, while Java Streams has to simulate one with <code>IntStream.range</code> and array lookups. Kotlin’s <code>withIndex</code> makes the index a first-class value, so the functional pipeline stays as direct as the loop. This further demonstrates how Kotlin’s design choices and rich standard library lead to intuitive, concise code.</p>
<p>As we continue to explore these examples, we see a consistent pattern: Kotlin often provides a balance between the clarity of imperative code and the power of functional operations, resulting in solutions that are both expressive and easy to understand.</p>
<h2>Example 3: Creating the Cross Product of Two Ranges</h2>
<p>For our next example, we’ll create the cross product of two ranges, specifically for the range 0 to 3. This example demonstrates how different approaches handle nested operations.</p>
<h3>Java Implementations</h3>
<h4>Imperative Java Solution</h4>
<p>Let’s start with the imperative Java solution:</p>
<pre><code class="language-java">var resultLoop = new ArrayList&lt;Pair&gt;();
for (int i = 0; i &lt; 4; i++) {
    for (int j = 0; j &lt; 4; j++) {
        resultLoop.add(new Pair(i, j));
    }
}
</code></pre>
<p>This imperative approach is straightforward and easily understandable for anyone familiar with Java. It uses nested loops to create all possible pairs of numbers from the given ranges.</p>
<h4>Java Streams Version</h4>
<p>Now, let’s look at how we can achieve the same result using Java Streams:</p>
<pre><code class="language-java">var resultStream = IntStream.range(0, 4)
        .boxed()
        .flatMap(a -&gt; IntStream.range(0, 4)
                .mapToObj(b -&gt; new Pair(a, b)))
        .toList();
</code></pre>
<p>This Streams version aims to be more declarative, using <code>flatMap</code> to combine the results of the inner stream operations. The use of <code>boxed()</code> here is crucial. We need to use <code>boxed()</code> because we want to <code>flatMap</code> the inner stream into the outer stream, where the inner stream has a different type than the outer stream. Specifically, we’re going from <code>int</code> to <code>Pair</code>. But because the outer stream is initially a stream of primitives, this direct mapping is not possible. With <code>boxed()</code>, we convert it to a <code>Stream&lt;Integer&gt;</code>, changing it from primitives to objects. This allows us to then map to other object types, such as <code>Pair</code>.</p>
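<p>As an aside, <code>boxed()</code> can be avoided entirely: <code>IntStream.mapToObj</code> already produces an object stream, so the outer stream can switch from primitives to objects one step earlier. A sketch of that alternative (with a local <code>Pair</code> record standing in for the one used above):</p>
<pre><code class="language-java">import java.util.List;
import java.util.stream.IntStream;

class CrossProduct {
    record Pair(int a, int b) { }

    static List&lt;Pair&gt; crossProduct(int n) {
        // mapToObj turns the primitive IntStream into a Stream of object
        // streams, so no separate boxed() step is needed before flattening.
        return IntStream.range(0, n)
                .mapToObj(a -&gt; IntStream.range(0, n)
                        .mapToObj(b -&gt; new Pair(a, b)))
                .flatMap(inner -&gt; inner)
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(crossProduct(4).size()); // 16 pairs for a 4x4 cross product
    }
}
</code></pre>
<p>Whether this nesting is clearer than <code>boxed()</code> is a matter of taste; it trades one conversion for a stream of streams.</p>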
<h3>Kotlin Implementation</h3>
<p>The Kotlin version looks similar to the Java Streams version, but with some notable simplifications:</p>
<pre><code class="language-kotlin">val result = (0..3)
    .flatMap { i -&gt;
        (0..3)
            .map { j -&gt; Pair(i, j) }
    }
</code></pre>
<h3>Analysis</h3>
<p>This example showcases how different approaches handle more complex operations like creating a cross product. The imperative Java solution, while verbose, is straightforward and easily understood by most Java developers. It clearly shows the nested structure of the operation through its use of nested loops.</p>
<p>The Java Streams version attempts to make the operation more declarative, but introduces some complexity. As described above, the <code>boxed()</code> call is required to convert the primitive <code>IntStream</code> into a <code>Stream&lt;Integer&gt;</code> before we can <code>flatMap</code> to a stream of <code>Pair</code> objects. This need for explicit type handling adds a layer of complexity to the Java Streams version.</p>
<p>The Kotlin version strikes a balance between the declarative style of Streams and the simplicity of the imperative approach. It’s visually similar to the Java Streams version, but with some key advantages. Kotlin’s range operator <code>..</code> is more concise than <code>IntStream.range()</code>. The Kotlin version also doesn’t need <code>boxed()</code> as Kotlin handles the type conversion implicitly. Furthermore, the <code>toList()</code> call is unnecessary in Kotlin as the result is already a List.</p>
<p>While the Kotlin and Java Streams versions are quite similar in structure, the Kotlin version appears cleaner and more straightforward. It maintains the functional style and declarative nature of the Streams approach, but with less boilerplate and type juggling. This example demonstrates how Kotlin can offer the benefits of functional programming constructs while avoiding some of the verbosity that can creep into Java Streams code.</p>
<p>As we progress through these examples, we continue to see how Kotlin’s design choices and standard library can lead to code that is both functional and readable, often simplifying operations that require more verbose handling in Java. Kotlin’s ability to handle type conversions implicitly in such scenarios showcases its design philosophy of reducing boilerplate while maintaining type safety.</p>
<h2>Example 4: Grouping Cities by Country</h2>
<p>Our next example demonstrates how different approaches handle grouping operations. We’ll group a list of cities by their country and count how many cities are in each country.</p>
<h3>Java Implementations</h3>
<h4>Imperative Java Solution</h4>
<p>Let’s start with the imperative Java solution:</p>
<pre><code class="language-java">Map&lt;Country, Long&gt; cityCountPerCountry = new HashMap&lt;&gt;();
for (var city : Cities.cities) {
    cityCountPerCountry.merge(city.country(), 1L, Long::sum);
}
</code></pre>
<p>This imperative approach is clear and straightforward. It iterates through the list of cities, using the <code>merge</code> method of <code>HashMap</code> to count the occurrences of each country. The only potential downside is the need to mutate the map during the process.</p>
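<p>The <code>merge</code> call does the heavy lifting here: for an absent key it inserts <code>1L</code>, and for a present key it combines the existing count with <code>1L</code> using <code>Long::sum</code>. A standalone sketch of that behaviour with made-up data (the real example iterates <code>City</code> records):</p>
<pre><code class="language-java">import java.util.HashMap;
import java.util.Map;

class MergeDemo {
    static Map&lt;String, Long&gt; countOccurrences(String... items) {
        Map&lt;String, Long&gt; counts = new HashMap&lt;&gt;();
        for (var item : items) {
            // Absent key: store 1L. Present key: apply Long::sum to old value and 1L.
            counts.merge(item, 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Illustrative country codes, not the post's actual city data.
        System.out.println(countOccurrences(&quot;NL&quot;, &quot;DE&quot;, &quot;NL&quot;, &quot;NL&quot;).get(&quot;NL&quot;)); // 3
    }
}
</code></pre>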
<h4>Java Streams Version</h4>
<p>Now, let’s look at how we can achieve the same result using Java Streams:</p>
<pre><code class="language-java">Map&lt;Country, Long&gt; cityCountPerCountry =
        Cities.cities.stream()
                .collect(
                        Collectors.groupingBy(
                                City::country,
                                Collectors.counting()
                        )
                );
</code></pre>
<p>The Java Streams version leverages the <code>groupingBy</code> collector, which is specifically designed for such grouping operations. While it’s a powerful tool, the syntax can be somewhat convoluted, especially for developers new to Streams. The nesting of collectors (<code>groupingBy</code> and <code>counting</code>) may not be immediately intuitive.</p>
<h3>Kotlin Implementation</h3>
<p>Here’s how we can implement the same functionality in Kotlin:</p>
<pre><code class="language-kotlin">val citiesSizeStream = cities.groupBy({ it.country }).mapValues { it.value.size }
</code></pre>
<p>This Kotlin implementation showcases the power and readability of Kotlin’s standard library. The <code>groupBy</code> function, available on any list, allows for straightforward grouping operations. Following this, the <code>mapValues</code> call efficiently counts the items in each group. This approach combines the declarative style seen in Streams with Kotlin’s more intuitive syntax, resulting in a concise and easily understandable solution.</p>
<h3>Analysis</h3>
<p>This example showcases how different approaches handle grouping operations, a common task in data processing.</p>
<p>Kotlin’s implementation stands out for its simplicity and expressiveness. It achieves the grouping and counting in a single, easily readable line of code, without the need for specialized collectors or explicit mutation of a map. This example further demonstrates how Kotlin’s design choices and rich standard library can lead to more intuitive and concise code, especially for common operations like grouping and counting.</p>
<p>As we continue to explore these examples, we see a consistent pattern: Kotlin often provides a balance between the clarity of imperative code and the power of functional operations, resulting in solutions that are both expressive and easy to understand.</p>
<h2>Example 5: Finding the Country with the Least Number of Cities</h2>
<p>Our next example demonstrates how to find the country with the least number of cities using different approaches.</p>
<h3>Java Implementations</h3>
<h4>Java Collections Approach</h4>
<p>Let’s start with the Java Collections approach:</p>
<pre><code class="language-java">var result = Collections.min(cityCountPerCountry.entrySet(), Map.Entry.comparingByValue());
</code></pre>
<p>This solution is clear and concise. It directly uses the <code>Collections.min()</code> method with a custom comparator. While effective, this approach requires knowledge of specific utility methods in the Collections framework, which might not be immediately obvious to all developers.</p>
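<p>In isolation, <code>Map.Entry.comparingByValue()</code> simply builds a comparator over the entry values, which <code>Collections.min</code> then applies. A quick sketch with made-up counts:</p>
<pre><code class="language-java">import java.util.Collections;
import java.util.Map;

class MinEntryDemo {
    static String countryWithFewestCities(Map&lt;String, Long&gt; counts) {
        // comparingByValue() orders entries by their Long values.
        return Collections.min(counts.entrySet(), Map.Entry.comparingByValue()).getKey();
    }

    public static void main(String[] args) {
        // Illustrative data, not the post's actual city counts.
        System.out.println(countryWithFewestCities(Map.of(&quot;NL&quot;, 5L, &quot;BE&quot;, 2L, &quot;LU&quot;, 7L))); // BE
    }
}
</code></pre>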
<h4>Java Streams Version</h4>
<p>Now, let’s look at how we can achieve the same result using Java Streams:</p>
<pre><code class="language-java">var result = CitiesStream.getCountryLongMap().entrySet()
        .stream()
        .min(Map.Entry.comparingByValue())
        .orElseThrow();
</code></pre>
<p>The Streams version is more discoverable and arguably easier to understand. It clearly expresses the intent of finding the minimum value from the stream of map entries.</p>
<h3>Kotlin Implementation</h3>
<p>Here’s how we can implement the same functionality in Kotlin:</p>
<pre><code class="language-kotlin">val result = citiesSizeStream.minByOrNull { it.value }!!
</code></pre>
<p>This Kotlin implementation is even more concise. It directly uses the <code>minByOrNull</code> function on the map, specifying that we want to find the minimum based on the value of each entry. The <code>!!</code> operator is used here to assert that the result is non-null, though in production code, a safer null-handling approach might be preferred.</p>
<p>It’s worth noting that we can apply the <code>minByOrNull</code> function immediately on the map without calling <code>entrySet()</code> first, as would be necessary in Java. This leads to simpler, more discoverable code during development, effectively removing an extra step that’s required in the Java versions.</p>
<p>It’s also interesting to note an inconsistency in Kotlin’s standard library. While we used <code>first</code> in earlier examples, which throws a <code>NoSuchElementException</code> if the collection is empty, here we use <code>minByOrNull</code>. The throwing <code>minBy</code> function was deprecated in favor of <code>minByOrNull</code>, even though <code>firstOrNull</code> exists alongside <code>first</code> without either being deprecated. This inconsistency in the API design is something to be aware of when working with Kotlin collections.</p>
<h3>Analysis</h3>
<p>This example highlights different approaches to finding a minimum value in a collection or map.</p>
<p>The Java Collections approach is succinct but requires specific knowledge of utility methods. The Java Streams version offers better discoverability and readability, clearly expressing the operation’s intent.</p>
<p>Kotlin’s implementation stands out for its brevity. It leverages Kotlin’s extension functions on collections, allowing for a very concise expression of the desired operation. However, the inconsistency between <code>first</code>/<code>firstOrNull</code> and the deprecation of <code>minBy</code> in favor of <code>minByOrNull</code> shows that even well-designed languages can have quirks in their APIs.</p>
<p>These implementations demonstrate how different language features and standard library designs can affect the way we express common operations. While all three achieve the same result, they differ in terms of discoverability, conciseness, and the level of language-specific knowledge required.</p>
<h2>Example 6: Finding All Countries with the Minimum Number of Cities</h2>
<p>Our previous example had a limitation: it only found one country with the minimum number of cities, but there could be multiple countries with the same minimum. In this example, we’ll address this by finding all countries that have the minimum number of cities.</p>
<h3>Java Implementations</h3>
<h4>Imperative Java Approach</h4>
<p>Let’s start with the imperative Java approach:</p>
<pre><code class="language-java">var map = new TreeMap&lt;Long, List&lt;Country&gt;&gt;();
for (var countryCount : cityCountPerCountry.entrySet()) {
    // This initial value must be a mutable List, because we add data to it later.
    map.computeIfAbsent(countryCount.getValue(), _ -&gt; new ArrayList&lt;&gt;()).add(countryCount.getKey());
}
var result = map.firstEntry();
</code></pre>
<p>This solution leverages a <code>TreeMap</code>, which keeps its entries sorted by key. We populate this map with the count of cities as the key and a list of countries as the value. The <code>computeIfAbsent</code> method is used to initialize a new list if needed and add the country to it. Finally, we retrieve the first entry, which corresponds to the minimum count.</p>
<p>While this code is relatively concise, it can be challenging to ensure it’s bug-free due to the use of mutable collections. The logic, involving mutable lists and maps, may not be immediately clear at first glance.</p>
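<p>The <code>computeIfAbsent</code> idiom is worth isolating: it looks up the key and, only when it is missing, runs the supplied function to create the initial mutable list. A standalone sketch with made-up counts (the real example maps <code>Long</code> to <code>List&lt;Country&gt;</code>):</p>
<pre><code class="language-java">import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

class InvertDemo {
    static TreeMap&lt;Long, List&lt;String&gt;&gt; invert(Map&lt;String, Long&gt; countPerCountry) {
        var countriesPerCount = new TreeMap&lt;Long, List&lt;String&gt;&gt;();
        for (var entry : countPerCountry.entrySet()) {
            // The list is created only for the first country with a given count.
            countriesPerCount.computeIfAbsent(entry.getValue(), k -&gt; new ArrayList&lt;&gt;())
                    .add(entry.getKey());
        }
        return countriesPerCount;
    }

    public static void main(String[] args) {
        var result = invert(Map.of(&quot;NL&quot;, 2L, &quot;BE&quot;, 1L, &quot;LU&quot;, 1L));
        // TreeMap keeps keys sorted, so firstEntry() holds the minimum count.
        System.out.println(result.firstEntry().getKey()); // 1
    }
}
</code></pre>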
<h4>Java Streams Version</h4>
<p>Now, let’s look at the Java Streams approach:</p>
<pre><code class="language-java">TreeMap&lt;Long, List&lt;Country&gt;&gt; countriesCountPerCity =
        cityCountPerCountry.entrySet()
                .stream()
                .collect(
                        Collectors.groupingBy(
                                Map.Entry::getValue,
                                TreeMap::new,
                                Collectors.mapping(
                                        Map.Entry::getKey,
                                        Collectors.toList()
                                )
                        )
                );
var result = countriesCountPerCity.firstEntry();
</code></pre>
<p>This Streams version uses a nested collector to group countries by their city count. While it achieves the desired result, the code is quite complex and not easily understandable at a glance. The use of nested collectors (<code>groupingBy</code> and <code>mapping</code>) makes this solution particularly challenging to write and comprehend, even for developers well-versed in Java Streams.</p>
<h3>Kotlin Implementation</h3>
<p>Here’s how we can implement the same functionality in Kotlin:</p>
<pre><code class="language-kotlin">val allMinCities = citiesSizeStream.entries
    .groupBy({ it.value }) { it.key }
    .minByOrNull { it.key }!!
</code></pre>
<p>The Kotlin implementation stands out for its simplicity and readability. It first groups the entries by their value (city count), transforming the values to be the country. Then it finds the entry with the minimum key (which represents the minimum city count). The result is a pair where the key is the minimum count and the value is a list of all countries with that count.</p>
<h3>Analysis</h3>
<p>This example highlights the stark differences between the approaches when dealing with a more complex data manipulation task. The Kotlin version stands out as the simplest and most readable, leveraging the language’s powerful standard library functions to express a complex operation in just three lines of easily understandable code. This demonstrates Kotlin’s ability to maintain clarity and conciseness even as the complexity of the task increases.</p>
<h2>Example 7: Reading and Processing Temperature Data from a File</h2>
<p>Our final example demonstrates how to read a file containing temperature data, skip comments, and handle invalid data. The file format looks like this:</p>
<pre><code># temperatures
25.12
1.3
@@@@@@@@@@@@@@@@@
-3.2
</code></pre>
<h3>Java Implementations</h3>
<h4>Imperative Java Approach</h4>
<p>Let’s start with the imperative Java approach:</p>
<pre><code class="language-java">static List&lt;Float&gt; readLoop(Path file) throws IOException {
    try (var reader = Files.newBufferedReader(file)) {
        var floats = new ArrayList&lt;Float&gt;();
        var line = reader.readLine();
        while (line != null) {
            if (!line.startsWith(&quot;#&quot;)) {
                try {
                    var f = Float.parseFloat(line);
                    floats.add(f);
                } catch (NumberFormatException _) {
                    // Ignoring invalid float lines
                }
            }
            line = reader.readLine();
        }
        return Collections.unmodifiableList(floats);
    }
}
</code></pre>
<p>This imperative approach handles multiple concerns:</p>
<ol>
<li>File opening and closing (using try-with-resources)</li>
<li>Line-by-line reading</li>
<li>Skipping comments</li>
<li>Parsing valid floats and ignoring invalid ones</li>
<li>Collecting results in a mutable list</li>
<li>Returning an unmodifiable list</li>
</ol>
<p>While functional, the code mixes business logic with technical details, making it harder to understand and maintain.</p>
<h4>Java Streams Version 1</h4>
<p>Now, let’s look at a Java Streams approach:</p>
<pre><code class="language-java">static List&lt;Float&gt; readStreamV1(Path file) throws IOException {
    try (var lines = Files.lines(file)) {
        return lines
                .filter(line -&gt; !line.startsWith(&quot;#&quot;))
                .filter(line -&gt; {
                    try {
                        var f = Float.parseFloat(line);
                        return true;
                    } catch (NumberFormatException _) {
                        return false;
                    }
                })
                .map(Float::parseFloat)
                .toList();
    }
}
</code></pre>
<p>This version is more readable, separating the concerns more clearly. However, it still requires try-with-resources for file handling and has a duplicated parsing step.</p>
<h4>Java Streams Version 2</h4>
<p>We can further improve the Streams version using <code>mapMulti</code>:</p>
<pre><code class="language-java">static List&lt;Float&gt; readStreamV2(Path file) throws IOException {
    try (var lines = Files.lines(file)) {
        return lines
                .filter(line -&gt; !line.startsWith(&quot;#&quot;))
                .&lt;Float&gt;mapMulti((line, downstream) -&gt; {
                    try {
                        var f = Float.parseFloat(line);
                        downstream.accept(f);
                    } catch (NumberFormatException _) {
                        // Ignoring invalid float lines
                    }
                })
                .toList();
    }
}
</code></pre>
<p>This version eliminates the duplicate parsing but introduces the more complex <code>mapMulti</code> operation.</p>
<h3>Kotlin Implementation</h3>
<p>Here’s how we can implement the same functionality in Kotlin:</p>
<pre><code class="language-kotlin">fun readStreamKt(file: Path): List&lt;Float&gt; =
    file.useLines { lines -&gt;
        lines
            .filterNot { it.startsWith(&quot;#&quot;) }
            .mapNotNull { it.toFloatOrNull() }
            .toList()
    }
</code></pre>
<p>The Kotlin implementation stands out for its simplicity and readability. It leverages Kotlin’s standard library functions to express the complex operation in just a few lines of easily understandable code.</p>
<p>It’s crucial to note that the <code>.toList()</code> call is inside the <code>useLines</code> block. This is very important because <code>useLines</code> returns a <code>Sequence&lt;String&gt;</code>, which is lazily evaluated. If we were to return the <code>Sequence&lt;Float&gt;</code> (by omitting <code>.toList()</code> or placing it outside <code>useLines</code>), and then try to use it after the <code>useLines</code> block has completed, we would get an exception as the underlying file stream would already be closed. By calling <code>.toList()</code> inside <code>useLines</code>, we ensure that all lines are processed and collected into a list while the file is still open.</p>
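<p>Java’s <code>Files.lines</code> has the same pitfall: the stream reads lazily, so letting it escape the try-with-resources block and consuming it afterwards fails. A small sketch using a temporary file:</p>
<pre><code class="language-java">import java.nio.file.Files;
import java.util.stream.Stream;

class LazyLinesDemo {
    static boolean failsAfterClose() throws Exception {
        var file = Files.createTempFile(&quot;temps&quot;, &quot;.txt&quot;);
        Files.writeString(file, &quot;25.12\n1.3\n&quot;);

        Stream&lt;String&gt; leaked;
        try (var lines = Files.lines(file)) {
            leaked = lines; // the stream escapes the block...
        }
        try {
            leaked.count(); // ...but it was closed, so consuming it now throws
            return false;
        } catch (RuntimeException e) {
            return true;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(failsAfterClose() ? &quot;stream already closed&quot; : &quot;unexpectedly succeeded&quot;);
    }
}
</code></pre>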
<h3>Analysis</h3>
<p>This example highlights the stark differences between the approaches when dealing with a complex file processing task involving multiple concerns.</p>
<p>The imperative Java version, while comprehensive, mixes different levels of abstraction, making it harder to understand and maintain. The Java Streams versions improve readability but still require explicit resource management and exception handling.</p>
<p>The Kotlin version shines in its simplicity and expressiveness. It uses <code>useLines</code> for automatic resource management, <code>filterNot</code> for clear intent in skipping comments, and <code>mapNotNull</code> with <code>toFloatOrNull</code> to elegantly handle parsing and invalid data. This approach separates concerns effectively and reduces boilerplate, resulting in code that’s both concise and easy to understand.</p>
<p>This final example powerfully demonstrates Kotlin’s ability to simplify complex operations through its thoughtful standard library design and language features, leading to more maintainable and readable code.</p>
<h2>Conclusion</h2>
<p>Throughout this exploration of various coding challenges, from simple string manipulations to complex file processing tasks, we’ve seen a consistent pattern emerge. Kotlin, in comparison to both imperative Java and Java Streams, consistently demonstrates a remarkable ability to simplify code while maintaining readability and functionality.</p>
<p>Key takeaways from our comparison:</p>
<ol>
<li>
<p><strong>Simplicity</strong>: Kotlin code generally appears simpler to both read and write. The language’s design and standard library functions often allow for more intuitive expressions of complex operations.</p>
</li>
<li>
<p><strong>Discoverability</strong>: Most, if not all, of the Kotlin APIs we used were easily discoverable through IDE autocompletion. This feature significantly enhances the developer experience, making it easier to explore and utilize the language’s capabilities.</p>
</li>
<li>
<p><strong>Conciseness</strong>: Kotlin solutions were consistently shorter than their Java counterparts. This brevity allows developers to express complex operations in fewer lines of code, potentially reducing the chances of errors and improving maintainability.</p>
</li>
<li>
<p><strong>Readability</strong>: Despite being more concise, Kotlin code maintains, and often enhances, readability. The language’s design choices and expressive syntax allow for code that clearly communicates intent.</p>
</li>
<li>
<p><strong>Powerful Standard Library</strong>: Kotlin’s standard library provides a rich set of functions that make common programming tasks more straightforward. Functions like <code>groupBy</code>, <code>mapNotNull</code>, and <code>useLines</code> demonstrate how well-designed library functions can significantly simplify code.</p>
</li>
<li>
<p><strong>Balance</strong>: Kotlin seems to strike a good balance between the clarity of imperative code and the power of functional programming constructs, often resulting in solutions that combine the best of both worlds.</p>
</li>
</ol>
<p>While Java, especially with the addition of Streams, has made significant strides in enabling more functional and expressive code, Kotlin appears to take this a step further. It offers a language design and standard library that consistently allow for cleaner, more intuitive solutions across a wide range of programming tasks. Notably, while working with Java often requires choosing between imperative and functional styles (as we’ve seen in cases where imperative code sometimes looks easier than the equivalent Streams version), Kotlin seems to eliminate this dilemma. In Kotlin, the most straightforward and readable solution often naturally combines both paradigms, removing the need for an explicit choice between styles.</p>
]]></content:encoded>
    <pubDate>Sun, 20 Oct 2024 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>

    <category>Java</category><category>Kotlin</category>
  </item>

  <item>
    <title>How does limitRate work in Reactor</title>
    <link>https://inspired-it.nl/blog/how-does-limitrate-work-in-reactor</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/how-does-limitrate-work-in-reactor</guid>
    <description>An in-depth explanation of the limitRate operator in Project Reactor and how it improves performance by batching demand requests.</description>
    <content:encoded><![CDATA[<p><a href="https://projectreactor.io/">Project Reactor</a> is a great reactive streams project that you will probably run into when you want to write reactive code in Spring. It is very powerful and can also be complex to wrap your head around. In this article I will look at the <a href="https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#limitRate-int-"><code>limitRate</code></a> function of a <a href="https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html">Flux</a>.</p>
<p>The first time I ran into <code>limitRate</code> I thought it would help in limiting/throttling the amount of events flowing downstream. And according to the documentation this is the case:</p>
<blockquote>
<p>Ensure that backpressure signals from downstream subscribers are split into batches capped at the provided <code>prefetchRate</code> when propagated upstream, effectively rate limiting the upstream <a href="https://www.reactive-streams.org/reactive-streams-1.0.3-javadoc/org/reactivestreams/Publisher.html?is-external=true" title="class or interface in org.reactivestreams"><code>Publisher</code></a>.</p>
</blockquote>
<p>This means that <code>limitRate</code> will split big requests from downstream into smaller requests. It also states that this is effectively rate limiting the publisher.</p>
<blockquote>
<p>Typically used for scenarios where consumer(s) request a large amount of data (eg. <code>Long.MAX_VALUE</code>) but the data source behaves better or can be optimized with smaller requests (eg. database paging, etc…). All data is still processed, unlike with <a href="https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#limitRequest-long-"><code>limitRequest(long)</code></a> which will cap the grand total request amount.</p>
</blockquote>
<p>According to this documentation it will typically be useful when the demand requested from upstream is unbounded. The rate limiter can cut this up into smaller pieces. While there might be a use case for this, I think it is far more useful for limiting the number of demand requests from downstream to upstream.</p>
<h2>Too many demand requests</h2>
<p>Let’s look at a scenario where we want to process messages from PubSub using <a href="https://googlecloudplatform.github.io/spring-cloud-gcp/3.1.0/reference/html/index.html#reactive-stream-subscriber">Spring</a>.</p>
<pre><code class="language-kotlin">fun process(msg: AcknowledgeablePubsubMessage): Mono&lt;String&gt; = ...

pubSubReactiveFactory.poll(&quot;exampleSubscription&quot;, 1000 /* not important with limited demand*/)
  .flatMap(::process, 16)
  ...
  .subscribe()
</code></pre>
<p>In the sample above, there will be an initial demand of 16 elements going up to the source. The <code>PubSubReactiveFactory</code> will request 16 elements from PubSub and send them downstream. Whenever one of the workers in the <code>flatMap</code> is done, it will send a <code>request(1)</code> upstream, and the <code>pubSubReactiveFactory</code> will request one element from PubSub. A fraction later, another demand signal may reach the source, requiring an extra call to PubSub for one more element. The pipeline is effectively transformed such that it pulls message by message from PubSub, so the handling time per message is <code>pull latency + processing time</code>. Requesting just one element at a time is very wasteful, certainly when <code>processing time</code> is well within the deadline bounds and having a buffer makes sense.</p>
<h2>Limiting number of demand requests</h2>
<p>The best way to minimize the impact of pulling messages from a source is to make sure we pull more than one message per request. This is exactly what <code>limitRate</code> can do. It limits the number of demand requests to the source by grouping them together. Internally, <code>limitRate</code> has a buffer from which it feeds the consumers downstream, while making sure to refill the buffer in time by requesting elements from the source. By default, “in time” means when the buffer is 75% depleted.</p>
<p>When <code>limitRate(100)</code> is used, it will first demand 100 elements from the source to fill the buffer. The moment elements arrive, <code>limitRate</code> can send them downstream as long as there is demand. When the buffer has only 25 elements left (75% depleted), it will request 75 more elements (<code>request(75)</code>) from the source to top up the buffer.</p>
<p>This makes sure the source can emit batches of events, making the latency overhead much less of an issue. The <code>limitRate</code> function then acts more as a performance booster than as a throttler.</p>
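<p>This batching is easy to observe with a small self-contained sketch using plain reactor-core (no PubSub involved); the helper name <code>observeRequests</code> is mine, not part of Reactor:</p>
<pre><code class="language-kotlin">import reactor.core.publisher.Flux

// Record every demand request that reaches the source while
// limitRate sits between the source and an unbounded subscriber.
fun observeRequests(total: Int, highTide: Int): List&lt;Long&gt; {
  val requests = mutableListOf&lt;Long&gt;()
  Flux.range(1, total)
    .doOnRequest { requests.add(it) }
    .limitRate(highTide)
    .subscribe() // unbounded demand downstream
  return requests
}

// The first request fills the buffer (100); afterwards batches of 75
// top it up every time the buffer is 75% depleted.
fun main() = println(observeRequests(300, 100))
</code></pre>
<p>Reactor also offers a <code>limitRate(highTide, lowTide)</code> overload in case the default 75% replenish threshold does not fit your workload.</p>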
<h2>Example</h2>
<p>Let’s create an example to show the impact of <code>limitRate</code>. The source in this example can have unlimited outstanding requests and adds a 200ms latency to delivering the requested elements. Processing takes somewhere between 10 and 15ms.</p>
<pre><code class="language-kotlin">val start = Instant.now()
val job = Flux.create&lt;Int&gt; { sink -&gt;
  sink.onRequest { demand -&gt;
    scheduler.schedule({
      repeat(demand.toInt()) {
        sink.next(nextInt())
      }
    }, 200, TimeUnit.MILLISECONDS)
  }
}
  .log(&quot;demandflow&quot;, Level.INFO, SignalType.REQUEST)
  .limitRate(100)
  .flatMap({ nr -&gt;
    Mono.fromCallable { nr.toString() }.delayElement(Duration.ofMillis(nextLong(10, 15)))
  }, 16)
  .subscribeOn(Schedulers.parallel())
  .take(1000)
  .doOnComplete {
    println(&quot;Time: ${Duration.between(start, Instant.now())}&quot;)
  }
  .subscribe()
</code></pre>
<h3>Without limitRate</h3>
<p>If we start the code above with the line <code>limitRate(100)</code> commented, we get the following result:</p>
<pre><code>20:46:29.092 [parallel-1 ] INFO  demandflow - request(16)
20:46:29.367 [parallel-3 ] INFO  demandflow - request(1)
20:46:29.367 [parallel-8 ] INFO  demandflow - request(1)
20:46:29.368 [parallel-9 ] INFO  demandflow - request(1)
20:46:29.369 [parallel-1 ] INFO  demandflow - request(1)
20:46:29.369 [parallel-1 ] INFO  demandflow - request(1)
20:46:29.370 [parallel-10] INFO  demandflow - request(3)
20:46:29.370 [parallel-10] INFO  demandflow - request(1)
20:46:29.371 [parallel-2 ] INFO  demandflow - request(1)
20:46:29.371 [parallel-2 ] INFO  demandflow - request(1)
20:46:29.371 [parallel-2 ] INFO  demandflow - request(1)
...
20:46:42.551 [parallel-7 ] INFO  demandflow - request(1)
20:46:42.561 [parallel-10] INFO  demandflow - request(1)
20:46:42.732 [parallel-2 ] INFO  demandflow - request(1)
20:46:42.733 [parallel-3 ] INFO  demandflow - request(1)
20:46:42.735 [parallel-6 ] INFO  demandflow - request(1)
20:46:42.736 [parallel-4 ] INFO  demandflow - request(1)
20:46:42.736 [parallel-5 ] INFO  demandflow - request(1)
20:46:42.737 [parallel-7 ] INFO  demandflow - request(1)
20:46:42.739 [parallel-8 ] INFO  demandflow - request(1)

Time: PT13.752124S
</code></pre>
<p>After the first 16 elements that were demanded, it will mostly request 1 element at a time. Sometimes multiple requests are bundled together. As you can see, processing took over 13s. When run with <code>limitRate(100)</code> enabled, we get a completely different result:</p>
<pre><code>20:49:55.068 [parallel-1 ] INFO  demandflow - request(100)
20:49:55.407 [parallel-7 ] INFO  demandflow - request(75)
20:49:55.644 [parallel-4 ] INFO  demandflow - request(75)
20:49:55.884 [parallel-9 ] INFO  demandflow - request(75)
20:49:56.125 [parallel-3 ] INFO  demandflow - request(75)
20:49:56.362 [parallel-8 ] INFO  demandflow - request(75)
20:49:56.601 [parallel-12] INFO  demandflow - request(75)
20:49:56.843 [parallel-12] INFO  demandflow - request(75)
20:49:57.082 [parallel-8 ] INFO  demandflow - request(75)
20:49:57.320 [parallel-8 ] INFO  demandflow - request(75)
20:49:57.560 [parallel-8 ] INFO  demandflow - request(75)
20:49:57.794 [parallel-5 ] INFO  demandflow - request(75)
20:49:58.034 [parallel-9 ] INFO  demandflow - request(75)
20:49:58.270 [parallel-3 ] INFO  demandflow - request(75)

Time: PT3.273889S
</code></pre>
<p>The first request is 100 to fill the initial buffer, and then every so often we see a request for 75 elements to top up the buffer. With this configuration the processing took only a bit over 3 seconds. The impact of the 200ms latency is minimized by requesting batches of elements.</p>
<h2>Conclusion</h2>
<p>The <code>limitRate</code> function is very useful to limit the number of demand requests flowing upstream. Instead of limiting the number of messages the pipeline can process, it actually greatly improves performance. This function has helped me a lot in improving the performance of processing pipelines subscribing to a PubSub source.</p>
]]></content:encoded>
    <pubDate>Mon, 21 Mar 2022 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>

    <category>Kotlin</category><category>Reactive</category>
  </item>

  <item>
    <title>How to use groupBy in Reactor</title>
    <link>https://inspired-it.nl/blog/how-to-use-groupby-in-reactor</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/how-to-use-groupby-in-reactor</guid>
    <description>A detailed guide on using the groupBy operator in Reactor, including common pitfalls like stream stalling.</description>
    <content:encoded><![CDATA[<p><a href="https://projectreactor.io/">Project Reactor</a> is a great reactive streams project that you will probably run into when you want to write reactive code in Spring. It is very powerful and can also be complex to wrap your head around. In this article I will look at the <a href="https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#groupBy-java.util.function.Function-">groupBy</a> function of a <a href="https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html">Flux</a>.</p>
<h2>groupBy</h2>
<p>The <code>groupBy</code> function splits the current flux into multiple fluxes. See it like a router: based on a function you specify, it routes each message to one of the groups. For example, when you have a stream of numbers and perform <code>intFlux.groupBy { it % 2 == 0 }</code>, it will cut the flux into 2 fluxes. One will have a stream of even numbers and the other will have a stream of odd numbers. The resulting type of this <code>groupBy</code> is <code>Flux&lt;GroupedFlux&lt;Boolean, Int&gt;&gt;</code>. The outer flux is actually a finite stream of 2 <code>GroupedFlux&lt;Boolean, Int&gt;</code> elements. If the source on which the <code>groupBy</code> was applied was infinite, the 2 <code>GroupedFlux</code> streams are also infinite.</p>
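<p>The even/odd split can be sketched end-to-end; this is my own minimal example (the function name <code>evenOddCounts</code> is illustrative, not from any library):</p>
<pre><code class="language-kotlin">import reactor.core.publisher.Flux

// Split 1..10 into an even and an odd group and count each group.
fun evenOddCounts(): Map&lt;Boolean, Long&gt; =
  Flux.range(1, 10)
    .groupBy { it % 2 == 0 }
    .flatMap { group -&gt; group.count().map { group.key() to it } }
    .collectMap({ it.first }, { it.second })
    .block()!!

// Both groups end up with 5 elements each.
fun main() = println(evenOddCounts())
</code></pre>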
<h2>Processing the groups</h2>
<p>Given the above example, there are 2 groups in a flux. Now we can write the logic to be performed on each group. Each <code>GroupedFlux</code> can be treated like a regular flux, but with an extra function: <code>key()</code>. This key function will return the result of the grouping function for all elements in this group. So in our example <code>true</code> for all the even numbers.</p>
<p>There is one little detail which is quite important. We need to make sure that we subscribe to all groups. This sounds trivial, but because it is part of a stream this could easily go wrong.</p>
<p>Let’s work with another example in which we divide the numbers into 10 groups: <code>intFlux.groupBy { it % 10 }</code>. Each group will just count how many numbers came through. This is what the <code>countNumbers</code> function does, with the help of the <code>increment</code> function:</p>
<pre><code class="language-kotlin">val countOccurrences = ConcurrentHashMap&lt;Int, Long&gt;()  

fun increment(group: Int) = countOccurrences.compute(group) { _, k -&gt; (k ?: 0) + 1 }  

fun countNumbers(group: GroupedFlux&lt;Int, Int&gt;): Flux&lt;Int&gt; =  
    group.doOnNext { increment(group.key()) }
</code></pre>
<p>The <code>countNumbers</code> function has to be wired together in the flux with the <code>groupBy</code>:</p>
<pre><code class="language-kotlin">Flux.generate&lt;Int&gt; { it.next(emitCounter.incrementAndGet()) }  
 .groupBy { it % 10 }  
 .flatMap(::countNumbers)  
 .subscribeOn(Schedulers.parallel())  
 .subscribe()
</code></pre>
<p>Simple enough, isn’t it? This works, and when we inspect <code>countOccurrences</code> every so often we see something like:</p>
<pre><code class="language-shell">nrs emited: 6324660 Occurrences per group: 0: 634584, 1: 634802, 2: 634804, 3: 634804, 4: 634805, 5: 634805, 6: 634805, 7: 634805, 8: 634806, 9: 634806
nrs emited: 13912044 Occurrences per group: 0: 1391214, 1: 1391220, 2: 1391221, 3: 1391221, 4: 1391222, 5: 1391222, 6: 1391222, 7: 1391222, 8: 1391223, 9: 1391223
nrs emited: 22109057 Occurrences per group: 0: 2210915, 1: 2210921, 2: 2210935, 3: 2210936, 4: 2210936, 5: 2210937, 6: 2210964, 7: 2210966, 8: 2210966, 9: 2210967
nrs emited: 30416867 Occurrences per group: 0: 3041697, 1: 3041703, 2: 3041704, 3: 3041704, 4: 3041704, 5: 3041704, 6: 3041704, 7: 3041705, 8: 3041705, 9: 3041705
nrs emited: 38748273 Occurrences per group: 0: 3874837, 1: 3874843, 2: 3874844, 3: 3874844, 4: 3874844, 5: 3874844, 6: 3874844, 7: 3874845, 8: 3874845, 9: 3874845
nrs emited: 47157048 Occurrences per group: 0: 4715713, 1: 4715719, 2: 4715720, 3: 4715720, 4: 4715720, 5: 4715720, 6: 4715720, 7: 4715720, 8: 4715721, 9: 4715721
nrs emited: 55470463 Occurrences per group: 0: 5547095, 1: 5547106, 2: 5547107, 3: 5547120, 4: 5547121, 5: 5547121, 6: 5547122, 7: 5547122, 8: 5547122, 9: 5547122
nrs emited: 62455436 Occurrences per group: 0: 6245552, 1: 6245557, 2: 6245557, 3: 6245558, 4: 6245558, 5: 6245558, 6: 6245558, 7: 6245558, 8: 6245558, 9: 6245559
nrs emited: 69543352 Occurrences per group: 0: 6954345, 1: 6954351, 2: 6954351, 3: 6954351, 4: 6954351, 5: 6954352, 6: 6954352, 7: 6954352, 8: 6954352, 9: 6954352
</code></pre>
<p>The elements are nicely distributed over the groups. Notice that we did not specify an explicit concurrency on the <code>flatMap</code>. If it is left out, it defaults to <code>Queues.SMALL_BUFFER_SIZE</code>, which is 256 (unless configured differently). The <code>groupBy</code> gives us only a limited number of groups, and as long as the number of groups stays below 256, this will work perfectly.</p>
<p>Let’s look at what will happen when we tune the concurrency to be lower than the number of groups:</p>
<pre><code class="language-kotlin">Flux.generate&lt;Int&gt; { it.next(emitCounter.incrementAndGet()) }  
 .groupBy { it % 10 }  
 .flatMap(::countNumbers, 9)  
 .subscribeOn(Schedulers.parallel())  
 .subscribe()
</code></pre>
<p>The resulting output is:</p>
<pre><code class="language-kotlin">nrs emitted: 2560 Occurrences per group: 1: 256, 2: 256, 3: 256, 4: 256, 5: 256, 6: 256, 7: 256, 8: 256, 9: 256
nrs emitted: 2560 Occurrences per group: 1: 256, 2: 256, 3: 256, 4: 256, 5: 256, 6: 256, 7: 256, 8: 256, 9: 256
nrs emitted: 2560 Occurrences per group: 1: 256, 2: 256, 3: 256, 4: 256, 5: 256, 6: 256, 7: 256, 8: 256, 9: 256
</code></pre>
<p>This will continue forever without any progress. The problem is that we have 10 groups but only 9 workers. Each worker consumes 1 <code>GroupedFlux</code>, which means that 1 group remains without a worker. But why does the stream get stuck?</p>
<h2>No more demand</h2>
<p>To understand why the stream grinds to a halt we should look at the demand. You can read more about it in my blog <a href="https://inspired-it.nl/2022/03/06/debugging-demand-in-reactor/">“Debugging demand in Reactor”</a>. After adding the log statements:</p>
<pre><code class="language-kotlin">fun countNumbers(group: GroupedFlux&lt;Int, Int&gt;): Flux&lt;Int&gt; =
    group
        .log(&quot;countNumbers&quot;, Level.INFO, SignalType.REQUEST, SignalType.ON_SUBSCRIBE, SignalType.ON_NEXT)
        .doOnNext { increment(group.key()) }

Flux.generate&lt;Int&gt; { it.next(emitCounter.incrementAndGet()) }
    .log(&quot;groupBy&quot;, Level.INFO, SignalType.REQUEST, SignalType.ON_SUBSCRIBE, SignalType.ON_NEXT)
    .groupBy { it % 10 }
    .log(&quot;flatMap&quot;, Level.INFO, SignalType.REQUEST, SignalType.ON_SUBSCRIBE, SignalType.ON_NEXT)
    .flatMap(::countNumbers, 9)
    .subscribeOn(Schedulers.parallel())
    .subscribe()
</code></pre>
<p>The resulting output is like this:</p>
<pre><code class="language-shell">[groupBy] - onSubscribe([Fuseable] FluxGenerate.GenerateSubscription)
[flatMap] - onSubscribe([Fuseable] FluxGroupBy.GroupByMain)
[subscribe] - onSubscribe(FluxFlatMap.FlatMapMain)
[subscribe] - request(unbounded)
[flatMap] - request(9)
[groupBy] - request(256)
[groupBy] - onNext(1)
[countNumbers-1] - onSubscribe([Fuseable] FluxGroupBy.UnicastGroupedFlux)
[countNumbers-1] - request(32)
[countNumbers-1] - onNext(1)
[groupBy] - request(1)
[groupBy] - onNext(2)
[countNumbers-2] - onSubscribe([Fuseable] FluxGroupBy.UnicastGroupedFlux)
[countNumbers-2] - request(32)
[countNumbers-2] - onNext(2)
[groupBy] - request(1)
[groupBy] - onNext(3)
[countNumbers-3] - onSubscribe([Fuseable] FluxGroupBy.UnicastGroupedFlux)
[countNumbers-3] - request(32)
[countNumbers-3] - onNext(3)
[groupBy] - request(1)
[groupBy] - onNext(4)
[countNumbers-4] - onSubscribe([Fuseable] FluxGroupBy.UnicastGroupedFlux)
[countNumbers-4] - request(32)
[countNumbers-4] - onNext(4)
[groupBy] - request(1)
[groupBy] - onNext(5)
[countNumbers-5] - onSubscribe([Fuseable] FluxGroupBy.UnicastGroupedFlux)
[countNumbers-5] - request(32)
[countNumbers-5] - onNext(5)
[groupBy] - request(1)
[groupBy] - onNext(6)
[countNumbers-6] - onSubscribe([Fuseable] FluxGroupBy.UnicastGroupedFlux)
[countNumbers-6] - request(32)
[countNumbers-6] - onNext(6)
[groupBy] - request(1)
[groupBy] - onNext(7)
[countNumbers-7] - onSubscribe([Fuseable] FluxGroupBy.UnicastGroupedFlux)
[countNumbers-7] - request(32)
[countNumbers-7] - onNext(7)
[groupBy] - request(1)
[groupBy] - onNext(8)
[countNumbers-8] - onSubscribe([Fuseable] FluxGroupBy.UnicastGroupedFlux)
[countNumbers-8] - request(32)
[countNumbers-8] - onNext(8)
[groupBy] - request(1)
[groupBy] - onNext(9)
[countNumbers-9] - onSubscribe([Fuseable] FluxGroupBy.UnicastGroupedFlux)
[countNumbers-9] - request(32)
[countNumbers-9] - onNext(9)
[groupBy] - request(1)
[groupBy] - onNext(10)
[groupBy] - onNext(11)
[countNumbers-1] - onNext(11)
[groupBy] - request(1)
[groupBy] - onNext(12)
[countNumbers-2] - onNext(12)
[groupBy] - request(1)
[groupBy] - onNext(13)
[countNumbers-3] - onNext(13)
[groupBy] - request(1)
[groupBy] - onNext(14)
[countNumbers-4] - onNext(14)
[groupBy] - request(1)
[groupBy] - onNext(15)
[countNumbers-5] - onNext(15)
[groupBy] - request(1)
[groupBy] - onNext(16)
[countNumbers-6] - onNext(16)
[groupBy] - request(1)
[groupBy] - onNext(17)
[countNumbers-7] - onNext(17)
[groupBy] - request(1)
[groupBy] - onNext(18)
[countNumbers-8] - onNext(18)
[groupBy] - request(1)
[groupBy] - onNext(19)
...
[countNumbers-8] - onNext(468)
[groupBy] - request(1)
[groupBy] - onNext(469)
[countNumbers-9] - onNext(469)
[groupBy] - request(1)
[groupBy] - onNext(470)
[groupBy] - onNext(471)
[countNumbers-1] - onNext(471)
[countNumbers-1] - request(24)
[groupBy] - request(1)
[groupBy] - onNext(472)
[countNumbers-2] - onNext(472)
[countNumbers-2] - request(24)
[groupBy] - request(1)
[groupBy] - onNext(473)
[countNumbers-3] - onNext(473)
[countNumbers-3] - request(24)
[groupBy] - request(1)
[groupBy] - onNext(474)
[countNumbers-4] - onNext(474)
[countNumbers-4] - request(24)
[groupBy] - request(1)
[groupBy] - onNext(475)
[countNumbers-5] - onNext(475)
[countNumbers-5] - request(24)
[groupBy] - request(1)
[groupBy] - onNext(476)
[countNumbers-6] - onNext(476)
[countNumbers-6] - request(24)
[groupBy] - request(1)
[groupBy] - onNext(477)
[countNumbers-7] - onNext(477)
[countNumbers-7] - request(24)
[groupBy] - request(1)
[groupBy] - onNext(478)
[countNumbers-8] - onNext(478)
[countNumbers-8] - request(24)
[groupBy] - request(1)
[groupBy] - onNext(479)
[countNumbers-9] - onNext(479)
[countNumbers-9] - request(24)
[groupBy] - request(1)
[groupBy] - onNext(480)
[groupBy] - onNext(481)
[countNumbers-1] - onNext(481)
[groupBy] - request(1)
[groupBy] - onNext(482)
[countNumbers-2] - onNext(482)
[groupBy] - request(1)
[groupBy] - onNext(483)
[countNumbers-3] - onNext(483)
[groupBy] - request(1)
[groupBy] - onNext(484)
[countNumbers-4] - onNext(484)
[groupBy] - request(1)
[groupBy] - onNext(485)
[countNumbers-5] - onNext(485)
...
[groupBy] - request(1)
[groupBy] - onNext(2558)
[countNumbers-8] - onNext(2558)
[groupBy] - request(1)
[groupBy] - onNext(2559)
[countNumbers-9] - onNext(2559)
[groupBy] - request(1)
[groupBy] - onNext(2560)
nrs emitted: 2560 Occurrences per group: 1: 256, 2: 256, 3: 256, 4: 256, 5: 256, 6: 256, 7: 256, 8: 256, 9: 256
nrs emitted: 2560 Occurrences per group: 1: 256, 2: 256, 3: 256, 4: 256, 5: 256, 6: 256, 7: 256, 8: 256, 9: 256
nrs emitted: 2560 Occurrences per group: 1: 256, 2: 256, 3: 256, 4: 256, 5: 256, 6: 256, 7: 256, 8: 256, 9: 256
nrs emitted: 2560 Occurrences per group: 1: 256, 2: 256, 3: 256, 4: 256, 5: 256, 6: 256, 7: 256, 8: 256, 9: 256
</code></pre>
<p>The logs give a lot of information about what is going on under the hood. First, the <code>onSubscribe</code> event that starts the <code>Flux</code> is passed along. Keep in mind that a <code>Flux</code> is nothing but a definition until you subscribe: <a href="https://projectreactor.io/docs/core/release/reference/#_from_imperative_to_reactive_programming">“nothing happens until you <strong>subscribe</strong>”</a>. This is called a cold stream. When the subscription reaches the last element in the stream, the demand starts flowing back.</p>
<p>The <code>subscribe</code> has no back-pressure and can handle everything, so it will request <code>unbounded</code> demand. The <code>flatMap</code> has a concurrency of 9, so it sends a demand of 9 upstream. Note that this is a demand for 9 elements of type <code>GroupedFlux&lt;Int, Int&gt;</code>: we request 9 groups. The <code>groupBy</code> has the default behaviour of requesting a demand of 256 elements. This reaches the source, and the source starts emitting 256 elements (if possible, which it is in this case). These 256 elements are distributed over the 10 groups defined by the grouping function. The output above shows that the first time an element is emitted (<code>onNext(1)</code>), the <code>flatMap</code> subscribes to that group and we immediately see demand flowing.</p>
<p>This shows that the subscription to a <code>GroupedFlux</code> only happens once the first element for that group is available. We also see that as soon as the first element was dispatched to a group, the <code>groupBy</code> signals new demand upstream. This happens 8 more times until we reach element 10, which would end up in <code>countNumbers-10</code>, but we do not have a processor for that group. So it stays in the <code>groupBy</code>, which has a demand of 256, of which 1 element now cannot be dispatched. Element 11 is dispatched to group 1 again. Every subflux has a demand of 32, as we can see. The elements are divided over the 9 active groups, but the elements for group 10 get stuck.</p>
<p>When 3/4 of the demand for a group is fulfilled, it re-signals demand; this is the <code>request(24)</code>. The <code>groupBy</code>, with its buffer of 256, continuously passes elements downstream when they are available. This goes on until the <code>groupBy</code> holds 256 elements for group 10 that it has to keep. The <code>groupBy</code> indicated a demand of 256, and now all of that demand is filled with elements for group 10. There is no more demand and we have full back-pressure. Therefore, the pipeline is now stuck, “waiting” for demand for the elements of group 10.</p>
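<p>The stall disappears once the <code>flatMap</code> concurrency is at least the number of groups. A minimal self-contained sketch on a finite source (my own example; <code>groupCounts</code> is an illustrative name):</p>
<pre><code class="language-kotlin">import reactor.core.publisher.Flux

// With concurrency &gt;= number of groups, every GroupedFlux gets a
// subscriber, so demand keeps flowing and the stream completes.
fun groupCounts(n: Int, groups: Int, concurrency: Int): Map&lt;Int, Long&gt; =
  Flux.range(1, n)
    .groupBy { it % groups }
    .flatMap({ g -&gt; g.count().map { g.key() to it } }, concurrency)
    .collectMap({ it.first }, { it.second })
    .block()!!

// With only 9 workers the tenth group would get no subscriber,
// and on an infinite source the pipeline stalls as shown above.
fun main() = println(groupCounts(1000, 10, 10))
</code></pre>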
<h2>Conclusion</h2>
<p>If you use the <code>groupBy</code> function on a <code>Flux</code>, you must make sure that there are enough subscribers in the <code>flatMap</code>, otherwise your stream will get stuck. As described in <a href="https://inspired-it.nl/2022/03/06/debugging-demand-in-reactor/">“Debugging demand in Reactor”</a>, the logging functionality is really helpful here. I learned a lot while writing this blog and got even more insight into the internals of Reactor.</p>
]]></content:encoded>
    <pubDate>Sat, 12 Mar 2022 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>

    <category>Kotlin</category><category>Reactive</category>
  </item>

  <item>
    <title>Debugging demand in Reactor</title>
    <link>https://inspired-it.nl/blog/debugging-demand-in-reactor</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/debugging-demand-in-reactor</guid>
    <description>Learn how to use the log function to debug demand flow in Project Reactor and understand backpressure behavior.</description>
    <content:encoded><![CDATA[<p><a href="https://projectreactor.io/">Project Reactor</a> is a great reactive streams project that you will probably run into when you want to write reactive code in Spring. It is very powerful and can also be complex to wrap your head around. Something that can be confusing is how demand flows upstream and messages flow downstream.</p>
<h2>Getting insight in flow of demand</h2>
<p>In any <a href="https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html">Flux</a> it is possible to show demand by using the <code>log</code> function on a flux. With this function you can specify what <code>SignalType</code> you want to be logged. Let’s look at an example:</p>
<pre><code class="language-kotlin">val counter = AtomicLong()  

fun process(nr: Long): Mono&lt;Long&gt; =  
    Mono.just(nr).delayElement(Duration.ofMillis(nextLong(1, 25)))  

Flux.generate&lt;Long&gt; { it.next(counter.incrementAndGet()) }  
 .log(&quot;beforeFlatmap&quot;, Level.INFO, SignalType.REQUEST)  
 .flatMap(::process)  
 .log(&quot;beforeTake&quot;, Level.INFO, SignalType.REQUEST)  
 .take(100)  
 .log(&quot;beforeSubscribe&quot;, Level.INFO, SignalType.REQUEST)  
 .subscribeOn(Schedulers.parallel())  
 .subscribe()  

Thread.sleep(4000)  
println(&quot;Counter: ${counter.get()}&quot;)
</code></pre>
<p>When run this will print:</p>
<pre><code class="language-kotlin">13:43:15.197 [parallel-1] INFO beforeSubscribe - request(unbounded)
13:43:15.200 [parallel-1] INFO beforeTake - request(unbounded)
13:43:15.200 [parallel-1] INFO beforeFlatmap - | request(256)
13:43:15.251 [parallel-6] INFO beforeFlatmap - | request(1)
13:43:15.251 [parallel-6] INFO beforeFlatmap - | request(1)
13:43:15.251 [parallel-6] INFO beforeFlatmap - | request(1)
13:43:15.252 [parallel-6] INFO beforeFlatmap - | request(1)
13:43:15.252 [parallel-8] INFO beforeFlatmap - | request(1)
...
13:43:15.260 [parallel-2] INFO beforeFlatmap - | request(4)
13:43:15.260 [parallel-2] INFO beforeFlatmap - | request(12)
13:43:15.260 [parallel-2] INFO beforeFlatmap - | request(2)
13:43:15.261 [parallel-2] INFO beforeFlatmap - | request(6)
13:43:15.261 [parallel-2] INFO beforeFlatmap - | request(7)
13:43:15.261 [parallel-2] INFO beforeFlatmap - | request(3)
13:43:15.261 [parallel-2] INFO beforeFlatmap - | request(2)
13:43:15.262 [parallel-2] INFO beforeFlatmap - | request(3)
13:43:15.262 [parallel-2] INFO beforeFlatmap - | request(3)
13:43:15.262 [parallel-2] INFO beforeFlatmap - | request(1)
Counter: 350
</code></pre>
<p>The logs showing <code>request</code> reveal the demand flowing up (towards the source) and give us insight into what happens with the demand. The first demand is sent when the stream is subscribed to. Remember, demand flows upstream, so in our code from bottom to top. The <code>subscribe</code> function will always request an <code>unbounded</code> amount of events. Next we reach the <code>take</code> function, which doesn’t change the demand and also sends <code>unbounded</code> demand. So up until this point we do not have any back-pressure control. Said differently, these functions can keep up with anything upstream may send. Next we hit the <code>flatMap</code> with its default concurrency (256). The <code>flatMap</code> changes the demand: there are only 256 workers, so it can only process 256 messages at a time. Therefore it signals a demand of 256. This demand reaches the source, and the source can now emit 256 elements. When a task in the <code>flatMap</code> is done, it will not encounter any back-pressure, because the demand downstream is <code>unbounded</code>. This means that when a task is done, it can immediately emit its message and signal new demand by requesting 1 extra message.</p>
<p>When 100 messages have reached the <code>take</code> function, the stream is completed. However, in the end we see that many more messages were emitted by the source, namely 350. This happens because everything is happening at the same time: when a task in the <code>flatMap</code> is done, it signals demand by requesting a new element. Therefore more messages can be emitted than the 100 requested.</p>
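<p>That overshoot can also be checked without reading logs, by comparing the source counter with the number of elements taken. A minimal variation of the example above (the helper <code>emittedFor</code> is my own name; it blocks for completion instead of sleeping):</p>
<pre><code class="language-kotlin">import reactor.core.publisher.Flux
import reactor.core.publisher.Mono
import reactor.core.scheduler.Schedulers
import java.time.Duration
import java.util.concurrent.ThreadLocalRandom
import java.util.concurrent.atomic.AtomicLong

// Run the pipeline and return how many elements the source emitted
// before take(n) completed the stream.
fun emittedFor(n: Long): Long {
  val counter = AtomicLong()
  Flux.generate&lt;Long&gt; { it.next(counter.incrementAndGet()) }
    .flatMap { nr -&gt;
      Mono.just(nr).delayElement(Duration.ofMillis(ThreadLocalRandom.current().nextLong(1, 25)))
    }
    .take(n)
    .subscribeOn(Schedulers.parallel())
    .blockLast() // wait for completion instead of Thread.sleep
  return counter.get()
}

// The flatMap prefetches 256 elements and keeps re-requesting as
// tasks finish, so the source emits more than the 100 we take.
fun main() = println(&quot;Emitted: ${emittedFor(100)}&quot;)
</code></pre>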
<h2>Conclusion</h2>
<p>Using the <code>log</code> function on a <code>Flux</code> can greatly help in understanding what’s going on under the covers. We’ve seen in the example above that even trivial flows lead to interesting discoveries.</p>
]]></content:encoded>
    <pubDate>Sun, 06 Mar 2022 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>

    <category>Kotlin</category><category>Reactive</category>
  </item>

  <item>
    <title>Starting Inspired IT</title>
    <link>https://inspired-it.nl/blog/starting-inspired-it</link>
    <guid isPermaLink="true">https://inspired-it.nl/blog/starting-inspired-it</guid>
    <description>The inaugural announcement of the founding of Inspired IT</description>
    <content:encoded><![CDATA[<p>Vanaf januari 2019 ben ik de trotse oprichter van <strong>Inspired IT</strong>. Met mijn kennis en ervaring van Software Development in het algemeen en Scala &amp; Akka in het bijzonder, ga ik als bevlogen freelancer bedrijven helpen om complexe problemen op te lossen.</p>
<p>Ik wens iedereen fijne feestdagen en een inspirerend 2019!</p>
<hr>
<p>Starting January 2019 I am the proud founder of <strong>Inspired IT</strong>. As an inspired freelancer I will help companies, with my knowledge of Software Development in general and Scala &amp; Akka in particular, to solve complex problems.</p>
<p>I wish everybody happy holidays and an inspiring 2019!</p>
<p><img src="https://inspired-it.nl/images/inspire.png" alt="Inspired IT logo"></p>
]]></content:encoded>
    <pubDate>Tue, 18 Dec 2018 00:00:00 GMT</pubDate>
    <author>contact@inspired-it.nl (Jeroen Gordijn)</author>

    <category>Announcement</category>
  </item>
  </channel>
</rss>