With 20+ years of experience, I specialize in database optimization, system architecture, and performance tuning large-scale applications.
Currently working as a Senior Software Engineer at Hubstaff, and former Developer Advocate at Timescale.
I'm also a community builder, a cyclist and a permaculture enthusiast.
Featured Content
Latest Talks
Explore my recent presentations at conferences and meetups around the world.
View Talks
Interactive Hub & Tools
Apps & Playgrounds
Check out the interactive tools directory, featuring the Semantic Learning journey, full-screen Drawing Recorder, and the 7m Geodesic Dome Builder.
Explore Apps
Mandala Drawings
Explore the Mandala Synth and Playground. Draw mandalas, sonify visuals, and experiment with complex geometries mapped to audio.
Launch Playground
Latest Post
Where is your mind?
April 14, 2026 philosophy productivity technology
Now that we can proxy almost everything to LLMs, a near-infinite cognitive unload is available. But then the question arises: where is my mind? How am I keeping up?
The No-Excuses Era
In my previous post, I shared the idea that we have reached the no-excuses era: as we evolve, we no longer face the barriers we once did. Now I'm focused on what we do with our time, and on how this delegation is fragmenting us, stretching the distance between those who leverage AI to get lazy and simply shrink their working hours, and those who are genuinely learning from it.
This is a classic problem of Time Economics. If we’re unloading the cognitive weight of low-level tasks, we must be intentional about how we reinvest that saved time. Are we investing it in “Future Value” activities—learning foundational concepts that AI still struggles with—or are we just increasing our consumption of “Low Authorship” reactive work?
So, if you're learning fast with AI, what exactly are you learning? Are you just adding more skills and connecting agents, or are you really getting smarter?
Better prompts? A set of tools that helps you use fewer tokens? Settings that can be tweaked internally before the next interaction?
How do you judge it as you learn, and how is learning going to change as systems become more advanced and need less human input?
Are We Getting Smarter?
I’m curious about a few things:
- Am I just QA-ing for AI?
- Is there a set of tools that still needs to be created by humans?
I see AI turning the full-stack developer into a holo-developer: involved in everything from conception, research, and validation through to areas beyond engineering.
The path forward is fast adaptation. Find the gaps where AI is still weak and work hard in them. Get better, and know exactly what you can delegate and what remains in your power: acting as a proxy for the intent, holding the objectives, validating, and iterating.
The Holo-Developer
I'm testing and validating the yoga app, and I'm certainly improving the way I interact: not only building better prompts but iterating better.
Every time you think about your wording and how it will shape the way the LLM sets up work for the agents, every time you iterate, observe, learn, and understand the reasoning, you can use it faster and more accurately.
But use it for what? Now, it seems building systems will become far easier than in the past, so what new things are you going to build?
For some of us, programming is a drug. The dopamine hit of seeing code work is now amplified by AI speed. But if we’re just chasing that high without deep understanding, we’re not becoming better engineers; we’re just becoming more efficient addicts. The real challenge is maintaining the “thinking programmer” mindset when the tool can do most of the “doing.”
The Migration to Agentic Workflows
We'll evolve our mindset toward more agent-driven workflows, where we set the boundaries of the challenge and let AI generate and validate the options, instead of assuming our single point of view is worth everything.
Now we're in the migration to the agentic era: fewer developers, higher throughput, and it's easier to experiment, adopt, and iterate.
So, if you’re not loading the same cognitive weight, where is your mind?