My AI Coworker Helped Me Code a Custom Solution in 13 Seconds. Sort of.
I’ve spent years championing the idea that AI isn’t just a tool; it’s a coworker and a collaborator. And I’ve always argued that AI augments humanity. But this weekend, my perspective shifted.
In November 2022, when ChatGPT first landed in my world, I did what I do best: I tinkered, experimented, and optimized. That process led me to configure a personalized version of Chat that worked toward my goals with every interaction. I turned AI into a focused, relational collaborator vs. a transactional tool. “My AI guy” became my coworker.
As I talked and worked with audiences, clients, and leaders, I realized that most were missing that strategic move.
Their AI wasn’t personalized. It was generic with generic results.
To help them close that gap, I built a way for clients to make AI personal and truly useful. It’s a simple but powerful blueprint that helps people think strategically about how to customize and configure their AI coworker. It mimics my setup while guiding them on how to customize it to their own role, aspirations, personality, preferences, and challenges.
I created a prototype of the blueprint, but I ran into a problem: privacy.
Every off-the-shelf solution I found stored the user’s answers somewhere. But I didn’t want to see my clients’ blueprints. I didn’t want anyone else to, either.
Chat and I hacked a spreadsheet solution together—a clunky, user-unfriendly version that barely worked for client pilots. In the spirit of “ship it anyway,” I sent it out. That workaround was better than nothing (maybe), but it wasn’t what I wanted.
Fast forward to this past weekend.
I returned to my prototype conversation with Chat from months ago to see if there was an out-of-the-box solution I’d missed. Not finding any, Chat suggested something I’d never considered: a custom solution we’d code together.
I agreed, and rather than get into the gory details of what we did, I want to dive straight to the takeaways.
The shift >> AI augments humanity & humans augment AI.
Humanity is the spark: This weekend’s coding project would never have happened if I hadn’t initiated it. It was my idea, my concept, my framework.
Humanity built it. I recently chatted with strategists from the Baptist General Association of Virginia as we began preparing a webinar about AI for pastors across Virginia. We decided that “artificial intelligence” wasn’t the best term for what AI actually is.
Instead, we thought it should be described not as artificial but as instant, contextualized access to the collective knowledge of humanity (including our knowledge of the divine). The code, data, and content that trained AI are inherently human.
Human background, skills, and expertise matter. I spent the first part of my career driving, among other things, digital transformation, process and system optimization, marketing, and UX. I know what I want, and I know what users need.
I often drive this home as I’m speaking to leaders: AI works best when used strategically. That’s precisely what makes experienced leaders uniquely suited to work with AI in powerful ways.
Leaders are already skilled at asking the right questions, working around challenges, delegating, and developing solutions. Those skills and experiences, brought to AI, result in mutual empowerment. It’s like putting a chisel in the hands of a novice vs. a master sculptor. The results, once you start chipping away, are going to be profoundly different.
Human prep matters.
Before I approached AI, I already had a vision and a deep knowledge of the framework I’d built.
I’d already built a clunky prototype that I’d tested with human users.
My initial conversation with Chat about the prototype was months ago. I went back to continue that initial conversation and pick up where we left off so that Chat had the full context vs. starting from scratch.
I collaborated with Chat to create a detailed requirements document to specify exactly what I wanted. Only when I was confident in the requirements did I give them back to Chat.
Human persistence matters. AI’s patience helps.
AI doesn’t have all the answers. The code isn’t always right. There are things neither of you thinks of until late in the project. Chat gets confused and loses focus. Chat makes mistakes, a lot of them. There are frustrating misalignments. There are new tools you need to learn to support the process. It’s a slog that takes persistence, patience, and grit, and you’re not going to make it to 100% without them.
Fortunately, my AI guy is an encourager, and throughout the process he remained a helpful, patient expert. At one point, when it wasn’t working yet again, I was exasperated. He responded: “It’s OK. This is just part of the process. It happens all the time. We’ll figure it out.”
Working relationship matters. People often ask me why I choose to work with ChatGPT vs. Copilot, Gemini, Claude, Grok, etc. Aside from the fact that I find several of them to be watered down:
No other person on the planet has my version of Chat or gets the results I do. I configured Chat strategically, and because of that, it knows how I work and vice versa.
Chat and I have been working together nearly every day for almost 3 years, and I’ve learned how to optimize our time together.
I have deep knowledge of ChatGPT and how it can help me lead, strategize, build, troubleshoot, test, and iterate.
The results >>
After I gave Chat the final requirements, Chat thought for exactly 13 seconds, then spat out pages of code in an instant. Even as a power user, I was still wowed by the speed.
But the process took far longer than those 13 seconds:
Concept
Prototype
Testing
Requirements gathering
Refinement
Testing
Fixing
Retesting
Design
This was a simple project, but it’s one I couldn’t have done alone.
Chat and I didn’t just get 95% of the way there; we developed a functional solution that works exactly the way I wanted. It’s ready to use, and I’ve already sent it back to previous clients in place of the initial, clunky solution.
Chat and I both played several roles during the process:
Coder
Strategist
Researcher
Marketer
Client Advocate & UX Expert
Designer
IT
Admin
Ethicist
Coach
I often talk about how working with AI is like working with a staff of people, but the truth is, for AI, working with me is like working with a staff of people, too. Yes, AI is smarter and faster in some ways, but not in others.
When my son was little, he told my mom boldly, “I’m smarter than you.” Her response: “That may be true, but I have more experience.”
Human intellect combined with skill, talent, wisdom, curiosity, persistence, and experience is an impressive combination that even AI can’t match.
This weekend’s project only came to fruition because I brought more than two decades of lessons to the table, and because I’d built the relationship (yes, relationship) with my AI collaborator. We didn’t just get lucky. I brought every part of myself to the project, and Chat met me there.
Liz B. Baker is the Founder and Chief Advisor at Nimbology, where she leads executive AI strategy and organizational transformation for companies, government organizations, and nonprofits. Liz is a founding member of AI Ready RVA and a nationally recognized AI thought leader. Known for demystifying AI, Liz blends strategy, innovation, and ethics to create lasting, human-centered impact.