What our web team learned using Claude Code for a month
Development • 5 minute read
Juwan Wheatley
Engineering
Expo's web team used Claude Code extensively for a month. Learn what worked for us, what didn't, and how Expo gets real value from AI coding tools.

Our website team committed to going deep with Claude Code over the past month. The intention was to invest heavily in one tool and observe how it impacted our process and production.
Yesterday at the all-hands we shared what we learned with the company and today we pass those learnings on to you. (We recognize that the space is moving extremely fast and our insights today might go stale fairly quickly. But today matters!)
In this post you’ll learn what is working well for us with Claude, what isn’t working so well, and how we’re getting value from agentic coding.
Where Claude Code excels
Knowledge retrieval for unfamiliar code
If you're not ready to generate large amounts of code with AI, start here: use Claude Code as a knowledge base for parts of the codebase you don't know well. This is particularly valuable for unblocking yourself on tasks that require system context you haven't built up yet.
That said, don't hesitate to ask your teammates. There's still irreplaceable knowledge stored in human brains that no LLM has indexed.
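A quick way to try this is headless (print) mode, where Claude answers a one-off question about the codebase and exits instead of opening an interactive session. A sketch (the question is just an example):

```shell
# Sketch: "ask the codebase" without starting a full session.
# -p runs Claude Code in print mode: answer, then exit.
claude -p "Where do we handle image optimization for blog posts, and what caching applies?"
```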
Executing well-specified tasks
Claude Code performs best on tasks you already know how to do. Problems for which you can articulate clear requirements and acceptance criteria will drive better output: you get out what you put in.
For more ambiguous or complex features, use Plan mode. It asks clarifying questions, preserves context across iterations, and consistently produces higher-quality results than jumping straight into implementation.
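You can enter Plan mode from the start of a session rather than toggling it later. A sketch, with an illustrative prompt:

```shell
# Sketch: start in Plan mode so Claude proposes an approach and asks
# clarifying questions before it edits any files.
# (In an existing session, Shift+Tab cycles permission modes.)
claude --permission-mode plan "Add dark-mode support to the docs site"
```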
Codebases with strong patterns
Projects with established conventions, such as comprehensive linting, type checking, formatting rules, and integration tests, reliably guide Claude toward solutions that work and meet your standards.
The key is giving the agent tools to close its own feedback loop. This includes enabling MCPs that provide context about your systems and workflows. We've found Linear, Sentry, Figma, and Graphite MCPs particularly useful for enriching Claude's understanding of our work.
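MCP servers are registered through the Claude Code CLI. A sketch, assuming remote MCP endpoints for Linear and Sentry (transports and URLs vary per provider, so check each MCP's own documentation):

```shell
# Sketch: wiring external context into Claude Code via MCP.
# The server names and URLs below are illustrative assumptions.
claude mcp add --transport sse linear https://mcp.linear.app/sse
claude mcp add --transport http sentry https://mcp.sentry.dev/mcp
claude mcp list   # verify the servers are registered and connected
```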
Parallelizing development work
Once you develop an intuition for delegating tasks to AI reliably, consider parallelizing work using Git worktrees. This allows multiple branches to be checked out simultaneously on your local machine. Tools like Conductor and Claude Desktop make this workflow straightforward.
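The worktree setup itself is plain Git. A minimal sketch that creates a throwaway repository and checks out two feature branches side by side, so two agent sessions can each work in their own directory (branch names are illustrative):

```shell
# Sketch: two worktrees from one repo, enabling parallel agent sessions.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q main && cd main
git config user.email dev@example.com   # throwaway identity for the demo commit
git config user.name Dev
git commit -q --allow-empty -m "init"

# Each worktree is a separate checkout of its own branch:
git worktree add -q ../feature-navbar -b feature-navbar
git worktree add -q ../feature-footer -b feature-footer
git worktree list
```

Each directory behaves like an independent checkout, but they share one object store, so branches and commits made in one are immediately visible from the others.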
Where Claude Code still needs your help
Claude Code handles many tasks well, but it has limitations worth understanding:
Training your AI engineer with system prompts
Claude starts fresh every session. Think of it as a new hire who needs onboarding each time. Without guidance in your CLAUDE.md system prompt about how to work with your codebase, you'll repeat the same instructions constantly.
Keep system prompts concise, though. Verbose instructions consume context window space you'll need for actual work.
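To make this concrete, here is an illustrative CLAUDE.md excerpt; the tools, scripts, and paths are examples, not our actual configuration:

```markdown
<!-- CLAUDE.md — illustrative excerpt; commands and paths are examples -->
- Package manager: use `yarn`, never `npm`.
- Run `yarn lint && yarn typecheck` before declaring a task done.
- UI components live in `src/components/`; follow the existing patterns there.
- Never edit generated files under `src/__generated__/`.
```

Short, imperative rules like these tend to survive context compression better than long prose explanations.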
Skill limitations
Skills (pre-packaged recipes or context bundles) provide useful shortcuts, but Claude often forgets to apply them without explicit reminders. We've worked around this by manually invoking skill slash commands when we want specific recipes followed.
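Custom slash commands are just markdown prompt files checked into the repo. A sketch, with a hypothetical command name and body:

```markdown
<!-- .claude/commands/fix-lint.md — invoked in a session as /fix-lint -->
Run the project linter, then fix every reported issue without changing
runtime behavior. Summarize the diff when you are done.
```

Because invoking `/fix-lint` is explicit, the recipe runs when you ask for it rather than relying on Claude to remember it applies.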
Context management challenges
Long-running sessions expose LLM context limitations. Claude attempts to compress and summarize session history to reduce hallucinations, which helps but doesn't eliminate the problem. We observed output quality deteriorating noticeably as context accumulated.
Our workaround: use /clear after completing each discrete task, or ask Claude to export current progress to a markdown file, clear the session, then have it read the file to continue. This resets context while preserving critical information.
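The export-and-reset loop looks roughly like this in a session (prompts and the filename are illustrative):

```shell
# Sketch of the context-reset workflow inside a Claude Code session:
> Write a summary of our progress and the remaining TODOs to PROGRESS.md
> /clear
> Read PROGRESS.md and continue with the next task
```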
Maintaining engineering standards
Don't abandon your engineering judgment when evaluating Claude's output. LLMs still produce poorly architected solutions with surprising frequency, and they present those solutions with confidence (we are all familiar with this). You still need to actively guide the model toward solutions you'd be comfortable shipping to your users.
The bottom line
Building intuition about what LLMs handle well versus poorly takes time. The more you use these tools, the better you'll get at directing them toward powerful results. We're learning as we go, but we've reached a point where AI coding tools are genuinely improving our workflows rather than just adding overhead.
Our entire company (yes, even sales/marketing) is using Claude Code to varying degrees now. We have been aggressive and intentional about how we use AI every day. We are excited, cautious, and meticulous about testing, observing, and sharing our experiences with agentic coding internally.
It is a massive part of how we work at Expo and how you work with Expo. We will have a lot more to say here in the coming months. In the meantime please let us know if you have specific questions about how to build with AI and Expo.
Happy agentic coding!