How I Use AI to Scale My Skills Without Losing My Mind
The simplest route to the "correct" answer isn't always the best one. What does correct even mean? Why is my computer talking to me? Hey Google, set a timer for 13 minutes.
Slop fatigue
It's hard to get through a typical week in 2026 without hearing "AI slop." What started as a way to describe the surge in AI-generated content online also signals something broader: the everyday internet is starting to suck.
The AI-generated internet has been on my mind a lot this year. I use LLM tools like Claude and ChatGPT every day at work and at home for brainstorming, research, and the occasional Excel formula. I'm not calling for abandoning these tools. But that familiarity has made me acutely aware of something most people aren't talking about: AI is very good at making you feel productive while quietly eroding the thinking underneath. This is what I've learned to watch out for.
Acknowledging the addiction
I'll admit, it feels great to produce grammatically correct writing with very little effort. It's fun to send your manager's manager a thorough AI-polished explanation of why deliverables are delayed because stakeholders keep changing the MVP.
As a technical professional, AI helps me navigate corporate email with less overhead so I can focus on actual deliverables. It's saved me hundreds of hours and let me maintain and ship significantly more work.
But this workflow has a dark side. The faster AI polishes my tone, the less patience I have for doing it myself. Before long I found myself leaning on it to transform every passive-aggressive draft into palatable corporate word-spaghetti. Left unchecked, it wouldn't just outsource the editing. It would outsource the thinking behind my communication.
Acknowledging that risk was the first step. So I made myself a rule: write the idea first, understand its purpose, then hand it to AI for anything beyond basic grammar and spelling. It's made me more in tune with my own thinking and, honestly, more human for it.
Brainstorming, challenging, and researching ideas
Instant feedback is not the same as instant gratification.
LLM tools deliver feedback in seconds. Traditional search required distilling information across multiple sources just to find a starting point. That speed is useful, but I've found myself susceptible to its flip side: the instant gratification of finishing a normally time-consuming task in seconds can quietly chip away at independent thought without you noticing.
Think about how research used to work. You had to learn the language of a topic before you could search it effectively, skimming sources, picking up terminology, building enough context to ask a precise question. That process was slow, but it forced baseline understanding. LLMs collapse that timeline in a way that feels less like Googling and more like thinking out loud with someone who has read everything.
Recognizing that AI can make us lazier thinkers was enough for me to start enforcing stricter personal guidelines. A well-drafted prompt followed by specific clarifying questions goes a long way. Paired with hands-on tinkering, the two build on each other fast.
No hands: voice controlled computing
"Hello, I am Macintosh. It sure is great to get out of that bag..."
There was a lot of noise around voice assistants in the early 2010s. When Siri launched in 2011, it felt like a turning point: something that could reason, converse, interact with daily services, and unlock entirely new relationships with technology. The decade that followed was supposed to deliver on that promise.
It mostly didn't. Siri, Google Assistant, and Alexa became glorified timers and light switches. The gap between what was pitched and what shipped was disappointing. The vision was always good. The technology just wasn't there. Progress crawled, excitement faded, and what remained were fancy kitchen timers that happened to transmit personal conversations to data brokers.
Fast forward to 2023. OpenAI ships Voice Mode for ChatGPT, and it is exactly what Siri was supposed to be. I remember the moment I realized I wasn't querying a search engine with extra steps. I was having a conversation. It followed my thread, let me push back on its assumptions, and connected ideas across a discussion in a way that felt less like a tool and more like thinking out loud. Think Jamie from Joe Rogan's podcast following you around, ready to look anything up. This was getting good.
What this unlocked was a new way to interact with my technology. A cool idea during my commute no longer fades before I can write it down. I can talk to my assistant, capture the thought, pull up supporting information, and set reminders, all with my voice alone. My hour-long commute went from an energy drain to one of the most thought-provoking parts of my day.
It's not that deep, bro
The gist is simple: don't let the easy way creep into your learning and understanding. It's all fun and games until you're an AI impostor who can't comprehend nuance.
I wrote this mostly to maintain my own sanity navigating the AI landscape of 2026. These tools are useful, but using them well is an active choice, not a default. If it resonates with even one other person, that's enough for me.
If you want to connect, feel free to email me at sudo@tyl.sh.
