Practical Lessons Learned using Claude Code to automate Integrations
Field-tested patterns and tricks to get more from Claude Code sub-agents
At Fixify, we live in the messy middle of IT operations—where APIs are cranky, documentation lies, and permission models were built by someone who missed the memo on ring 0 vs ring 3 and thought chmod 777 was a personality trait. To put it bluntly—we have stories 🗑️🔥.
A few months ago, I decided to run a thought experiment:
Could I “zero-shot” an entire integration—permissions, code, tests, live testing—without a human touching the keyboard?
(Yes, of course we’d still review the code. We’re curious, not reckless.)
That question kicked off a journey: could Claude Code orchestrate agents well enough to build integrations end-to-end? Along the way I discovered a few techniques that worked, a few that didn’t, and how to get more out of Claude’s sub-agent model.
This post shares those lessons. If you’re experimenting with Claude Code—or just trying to wring more value out of sub-agents—you’ll find a few practical patterns and tricks I used to push as close to “zero-shot” as possible.
Spoiler: I didn’t reach full autonomy… but I learned a ton. And if you build with AI, these lessons might save you days of frustration.
Background: Why We Built Our Own Integration SDK
If you’re wondering why we didn’t just use an Integration Platform as a Service (iPaaS) or a plug-and-pray connector—trust us, we asked that too. But owning our own SDK meant we kept control over the onboarding experience, the permissions our integrations requested, health checks, normalization, and a host of other details that matter to our product experience.
The nice side effect of building our own SDK is that it’s highly opinionated, which gives it a consistent structure that Claude Code (and others) can reliably understand.
This SDK foundation made it natural to explore how Claude Code could speed up integration development while keeping our standards intact.
Getting Started with Claude Code
Anthropic makes getting started as simple as your first hello world, but with agents. Spin up Claude Code and use its built-in automation to set up your CLAUDE.md file. It really was that simple.
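For anyone following along, the whole setup fits in a few commands. This is a sketch of the session, assuming the npm distribution of Claude Code; the project name is a placeholder:

```
# Install and launch Claude Code, then let /init scaffold CLAUDE.md
npm install -g @anthropic-ai/claude-code
cd my-project
claude
> /init
```

The /init command inspects your repo and generates a starting CLAUDE.md for you to refine.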
Standing on the shoulders of others
Staring at an autogenerated CLAUDE.md file is daunting — what do you add, how do you make changes, what are the impacts of those changes?
But step back and ask: are you truly the first developer to think of an idea, concept, or pattern, or is it likely that one of the tens of millions of GitHub users has already thought of something similar? I tend to believe I’m not that smart (my mom tells me differently), but I digress. So I like to search for prior art, for inspiration. In this case, here’s an example search that might be useful to kick-start things:
path:"claude/agents" language:Markdown
Then start scrolling; something in there will spark the next modification you want to make 🙂
Sub-Agents
I spent a few weeks getting the CLAUDE.md file to zero-shot an integration before the sub-agents feature was announced (or at least before I saw it). Once sub-agents came out, they were a game changer for me. Because our integration SDK is strongly opinionated about file names and structure, it became rather trivial to see which sub-agents I would need or want to create. Specialization became the key, the unlocking function.
integration-api-architect.md — read the vendor docs, produce a canonical reference file
integration-skill-developer.md — write the actual skill implementations
integration-test-engineer.md — ensure coverage, handle edge cases
health-check-developer.md — validate connectivity and permissions
…and so on.
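As a concrete sketch, a Claude Code sub-agent is just a Markdown file in .claude/agents/ with YAML frontmatter naming it, describing when to use it, and listing the tools it may call. Something like this—the body text and tool list here are illustrative, not our actual agent:

```markdown
---
name: health-check-developer
description: Writes and updates integration health checks that validate connectivity and permissions. Use after skills are implemented.
tools: Read, Grep, Glob, Edit, Write
---

You are an integrations engineer responsible for health checks.
When writing a health check:
- Call the cheapest read-only endpoint that exercises every requested permission.
- Surface auth failures and missing scopes as distinct, actionable errors.
- Never mutate vendor data during a health check.
```

The description field matters more than it looks: it’s what the orchestrating session uses to decide when to delegate to this agent.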
Lessons Learned
Claude’s /agents command makes scaffolding easy—the real challenge is tuning. How much context is enough? When does guidance become too specific, or too vague?
Through trial, error, and plenty of broken integrations (rm -rf integrations/<vendor name> was my most common command for many weeks), I uncovered a few patterns that made sub-agents sharper. Hopefully you can steal these for your own experiments.
1. Ask Claude to help itself
One of the most effective techniques I found was stopping Claude when a sub-agent got confused, asking it to analyze where it got confused or hit errors, and then prompting it to update the specific sub-agent that might be causing the confusion. This became a common sequence in my Claude sessions:
│ > Describe the errors you encountered building the integration and what you did about them
[...]
│ > For each error you encountered, find an agent in .claude/agents/ to update and make the update.
2. Treat sub-agents as “Developer Guidance” docs
The best trick I found? Stop treating sub-agents like configs—and start treating them like living documentation for developers. Once I made that mental shift, Claude and Cursor suddenly became collaborators in keeping those docs up to date.
Claude
With Claude Code, I leaned on reflection. I’d ask it to look at how it orchestrated agents—what worked, what failed—and then suggest updates to the agent files as if they were developer guidance docs.
Here’s an example from my Claude Code session:
│ > Act as if you were going to explain each class of error, issue, and how you resolved it to an engineer. You will explain the error so the engineer can understand it, fix it, and fix ones similar to it. Based on your analysis and understanding, suggest updates to .claude/agents so that future engineers don't make the same mistakes.
Cursor
Cursor played a complementary role. Where Claude reflected on orchestration, Cursor generalized from the codebase. I’d feed it an agent file and ask: “Update this so no developer (human or agent) makes the same mistake twice.” Cursor pulled patterns from the repo and rewrote the guidance at a higher level.
Look at @<file>.ts: it’s using a non-existent interface <Wrong Interface> instead of <Correct Interface>. The developer who wrote @<file>.ts used @health-check-developer.md as a guide; review the guide and suggest updates so the developer doesn’t make the same mistake. Make it general to all developers writing health checks across any integration who use @health-check-developer.md as their guidance.
Or tightening test instructions after agents produced invalid tests:
Analyze @tests/ and look at the errors: syntax, TypeScript. The guidance developers use to write integration tests is @integration-test-engineer.md; review it and update it so that it’s generally applicable across integrations for any engineer.
Ground Truth Cuts Hallucination
Unsurprisingly, Claude and its sub-agents hallucinated—permissions that didn’t exist, API routes that weren’t real. Classic problem.
I explored MCP tooling like Context7—promising for injecting fresh, version-specific docs straight into the context window—and even considered Ref MCP, which emphasizes token-efficient retrieval. But based on others’ experience, I wanted more control: control over how docs were structured, when they became available to sub-agents, and the ability to guide agents to specific sections.
So I built an API-Architect agent. Inspired by this repo, I first tried coordinating agents with a shared JSON “write-ahead log.” It didn’t click. Instead, I pivoted: I created the API-Architect sub-agent, which reads vendor docs and distills them into a canonical reference doc covering the vendor’s APIs, permissions, and authentication scheme.
Each sub-agent was then updated to reference only the parts of the doc relevant to its task. That simple move—grounding agents in curated, structured truth—cut down hallucinations dramatically and gave us far more accurate permission mapping.
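To make that concrete, here’s the shape of a canonical reference doc the API-Architect might emit. The vendor, endpoints, and scopes below are invented for illustration; the point is the structure each sub-agent can anchor to:

```markdown
# Acme API: Canonical Reference

## Authentication
- OAuth 2.0 client credentials; token endpoint: /oauth/token

## Permissions
| Scope        | Grants                         | Used by              |
|--------------|--------------------------------|----------------------|
| devices.read | List and read device inventory | skills, health check |

## Endpoints
### GET /v1/devices
- Scope: devices.read
- Pagination: cursor-based (`next` field)
```

Because the sections are predictable, the test-engineer agent can be told to read only “Endpoints,” while the health-check agent reads “Authentication” and “Permissions.”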
What This Means for Fixify
Could we zero-shot an integration?
The answer: No, not yet — but who were we kidding, that’d be wild.
We learned a lot and believe we’ve vastly improved our ability to build high quality integrations quickly.
Will we ever ship a fully autonomous integration? Probably not without human oversight. But we’re rooting for the day when that oversight shrinks from hours to minutes :)
The real value?
We can integrate with more technologies, sooner. That means more skills in Fixify’s library, faster onboarding for customers, and a service that keeps getting smarter without burning out the team.
AI won’t replace thoughtful engineering. But with the right scaffolding, it can act like a junior developer who learns frighteningly fast.
And just like with humans, the magic is in how you train them.
If you liked this and want to follow along as we keep pushing on AI-powered development at Fixify, connect with me on LinkedIn or watch this space.