
We Told 300 Engineers to Stop Writing Code. Here’s What Happened.

Over the past year, our teams have been advancing their AI fluency through tools like Cursor. But when Claude Opus 4.5 was released in November 2025, we knew it was a game-changer. It was fascinating to see an agent reliably operate on a real codebase at our scale – our monorepo, our services, our tests – and watch it reason across files, make multi-step changes, and complete tasks that previously took hours. When Opus 4.6 followed shortly after in early 2026, the step change was even clearer: the models were getting better, fast.

When a technological shift is unfolding in real time, companies have a choice: wait and see how it plays out, or lean in. We chose the latter, with a firm conviction to be trendsetters in the industry.

At Zocdoc, we build technology with purpose. We help people with one of the most important aspects of their lives: their health. Our work has a direct impact on patients and providers navigating a dysfunctional, $5 trillion industry. We’re driven by a belief that better care should be more accessible, and we’re energized by helping to make that a reality.

AI is now a vital instrument in accelerating that mission. Our early experiments demonstrated the opportunity before us. With the models continuing to improve at this pace, we knew it was time to capitalize on the inflection point and fundamentally change how we build.

To drive meaningful impact and prepare for where the industry is headed, we needed to take the learnings from our early experiments and make them a fundamental part of how our entire 300-member technology organization operates.

We’re in the middle of this transformation now, and are learning and sharing in real time what it takes to make it stick.

Why We Had to Act

We saw early proof that AI meaningfully expanded what our engineers could accomplish. In parts of the organization, teams were moving faster and tackling work that had long felt out of reach. But the gains were uneven. Without consistent adoption, AI would remain a local optimization rather than an organizational advantage. 

Realizing the full value of AI across the organization required more than encouragement. Our talent is our most important asset. Investing in, growing, and developing it to meet the evolving technology landscape is a leadership obligation. We decided to create the space and time for this with dedicated AI sprints, even when it meant making deliberate trade-offs in our roadmap.

How We Structured Sprints

We committed to training our entire tech org across three cohorts in six weeks. Each cohort spent one full sprint – two weeks of dedicated, protected time during the work day – focused on AI fluency.

Two weeks felt like the right amount of time to build practical skills focused on real, day-to-day work. We took in-flight priorities that were already planned for the sprint and asked teams to apply AI to those tasks. This gave us useful learnings on how to maintain the AI focus without derailing delivery on the projects teams were already accountable for.

Dividing into cohorts helped ensure that we had enough advanced trainers and mentors to provide a safe space and tangible learning opportunities with an interactive curriculum. We couldn’t spread them thin across 300 people at once. This approach also enabled us to iterate quickly on the training curriculum. We started with the first cohort, fine-tuned the materials, collected feedback from our managers, and improved. The subsequent cohorts have been reaping the benefits of those learnings.

Along the way, we’ve made one thing clear: this isn’t optional enrichment. Using AI is the new baseline expectation for how we operate.

How We Supported Teams

The Training Sessions

Each cohort’s sprint included five hands-on training sessions, with local skill-creation exercises to increase engagement, and culminated in a demo day where teams showcased what they built. The space is evolving at a pace where even our most experienced practitioners are continually learning. Many of our most advanced users surfaced new patterns, constraints, and optimizations through the sessions. The curriculum accelerated everyone.

The Support Structure

We established dedicated Slack channels and empowered AI Champions across our engineering team to serve as mentors who drive sustained momentum. We were intentional about building support into the model through 1:1 conversations and small, trusted peer settings where people could ask questions openly, work through issues, and build confidence alongside teammates they know.

The Feedback Loops

Rapidly upskilling an organization requires listening, not just instructing. So we’ve been polling teams after each cohort and holding checkpoints with managers, enabling us to fine-tune the curriculum as we go.

The Demos Exceeded Expectations

On our first demo day, we saw 21 projects – many more than originally anticipated – and those demos reflected strong, creative, quality-focused work that exceeded our expectations:

  • Quality improvements and refactoring
  • Tech debt reduction at scale
  • Thread-safe Swift migration (previously considered too time-consuming)
  • Flaky test reduction
  • AI search acceleration
  • Full automation of a Jira process

The thesis proved out: give engineers time and tools, and they’ll tackle problems they couldn’t previously justify. We continued to see this hold true across the second sprint, as well as the third, which is in progress now.

Five Things We’ve Learned

1. AI makes it cheaper to tackle the important-but-not-urgent items.

Things that had been sitting on the back burner – important but never urgent enough to prioritize – could now be tackled. One team used the sprint to migrate a Swift codebase to thread-safe patterns, something they’d wanted to do for over a year, but couldn’t justify given the tedium. Another finally tackled a flaky test suite that had been eroding trust in CI for months.
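
For a sense of what that migration involves, here’s a minimal before-and-after sketch. The `SessionCache` type is hypothetical, not Zocdoc code; the real work is applying this pattern across many types and call sites.

```swift
import Foundation

// Before: thread safety depends on every caller taking the lock correctly.
final class LegacySessionCache {
    private var sessions: [String: Date] = [:]
    private let lock = NSLock()

    func touch(_ id: String) {
        lock.lock()
        defer { lock.unlock() }
        sessions[id] = Date()
    }
}

// After: the actor serializes access, and the compiler enforces it.
actor SessionCache {
    private var sessions: [String: Date] = [:]

    func touch(_ id: String) {
        sessions[id] = Date()
    }

    func lastSeen(_ id: String) -> Date? {
        sessions[id]
    }
}

// Callers hop through await, which makes the concurrency explicit:
// let cache = SessionCache()
// await cache.touch("user-123")
```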

Historically, writing the code to set up auditing was slow and took numerous sprints. Now, generating auditing scaffolding is cheap. One team is building a reusable skill to automate auditing setup.
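
As a rough illustration, here’s the kind of scaffolding such a skill might generate; the `Auditable` protocol and `AuditEvent` type are hypothetical stand-ins, not the skill’s actual output.

```swift
import Foundation

// Hypothetical audit event; real scaffolding would match the team's audit schema.
struct AuditEvent: Codable {
    let entity: String
    let action: String
    let actor: String
    let timestamp: Date
}

// Conforming types get audit logging with one call per mutation site.
protocol Auditable {
    var auditEntityName: String { get }
}

extension Auditable {
    func audit(action: String, by actor: String) {
        let event = AuditEvent(entity: auditEntityName,
                               action: action,
                               actor: actor,
                               timestamp: Date())
        // Stub sink: production code would ship this to an audit log service.
        if let data = try? JSONEncoder().encode(event),
           let json = String(data: data, encoding: .utf8) {
            print(json)
        }
    }
}
```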

When doing the right thing becomes cheaper, overall quality improves. As one manager put it, we’re “removing the tedium tax” from engineers’ day-to-day work.

2. Code review becomes a first-class skill.

More code getting generated means more code that needs to be reviewed. Engineers are still accountable for the code, even if they didn’t hand-write it themselves. That means code review is becoming the primary skill.

This is a mindset shift. The discipline of reviewing AI-generated code at scale is a new muscle for the industry. We’re investing in what “advanced code review” looks like as a discipline, with strong judgment as the differentiator.

3. Velocity without guardrails is a risk.

Faster code generation (even at the same ratio of bugs to code as before AI) means faster introduction of bugs, security issues, and technical debt. Teams are shipping more, but that also means they need more rigorous review processes. Infrastructure changes, permission updates, and anything without a verification loop become riskier at higher velocity.

We’re hardening our infrastructure through three efforts: LLM sandboxing, security training, and tooling that defines what can and cannot be done. We’re strengthening Code Owners enforcement and applying stricter review standards to AI-assisted changes.
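
To make “tooling that defines what can and cannot be done” concrete, here’s a toy sketch of an allowlist-style policy an agent runner might consult before executing a command. The names and rules are illustrative, not our actual implementation.

```swift
// Toy allowlist policy: an agent runner checks it before executing a tool.
struct ToolPolicy {
    let allowedCommands: Set<String>
    let blockedPathPrefixes: [String]

    func permits(command: String, path: String) -> Bool {
        guard allowedCommands.contains(command) else { return false }
        return !blockedPathPrefixes.contains { path.hasPrefix($0) }
    }
}

let policy = ToolPolicy(
    allowedCommands: ["swift", "git", "rg"],
    blockedPathPrefixes: ["/etc", "/secrets"]
)

print(policy.permits(command: "git", path: "/repo/src"))    // true
print(policy.permits(command: "curl", path: "/repo/src"))   // false: not allowlisted
print(policy.permits(command: "git", path: "/secrets/key")) // false: blocked path
```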

4. When you speed up coding, the constraints move elsewhere.

Once the tech team is trained, you realize the bottleneck shifts. Design and product teams now need to gear up – providing prototypes and clearer inputs so tech teams can accelerate development and make it production-ready faster.

Beyond development, there’s important SRE work to be done, including testing, observability, and monitoring. There’s also analytics work involved in tracking product trends and measuring impact. 

Once your tech team operates in this new agentic mode, the rest of the ecosystem needs to adapt, too. We’re now thinking about what a new AI-native development lifecycle looks like from design, to product, to development, testing, observability, and analytics.

5. The role of the engineer is shifting, and so must our talent strategy.

Engineers now need stronger product instincts and deeper system design skills. Beyond implementing specs, they’re working alongside agents, directing outcomes, making judgment calls that used to live elsewhere, and thinking across surfaces to design and troubleshoot systems at a more macro level.

This raises important questions around hiring, team composition, and nurturing talent. We don’t have all the answers yet, but we know the profile of a great engineer is evolving toward stronger product instincts, system-level thinking, and end-to-end ownership. We’re evolving our hiring, development, and culture to match.

Where We’re Investing Next

Hardening the infrastructure. We are investing in isolating the execution and integration of model-generated code, restricting tool access through scoped permissions and environment controls, and building an architecture that allows non-technical teams to interact safely with production systems. These investments are especially important because they address control gaps that existed pre-AI.

Holistic metrics for measuring impact. Story points don’t capture the new reality. We’re building a measurement framework focused on business outcomes and customer experience.

Sustaining momentum. The sprints will end, but the field will keep accelerating. We need to ensure people don’t revert to their old ways of working. Our AI Champions and ongoing checkpoints will help, but we plan to keep investing in training and accountability.

Skill management. Teams are creating reusable agent workflows embedded in their repos, which is a positive outcome. We’ve implemented Code Owners enforcement and governance to ensure consistency and prevent redundancy as this scales.

Managing the mental load. We are shipping more, prototyping faster, spotting opportunities daily, and that productivity is exciting and motivating. It’s also more cognitively demanding. The nature of the work has shifted. There’s more code to review. More context to hold in your head. More context switching to accomplish the same tasks at a faster pace. That tension isn’t unique to Zocdoc. It’s showing up across the industry. And we are now focused on helping teams train this new muscle and adapt to a new way of working.

Talent development across the lifecycle. This new way of working is reshaping how we hire, interview, and develop engineers. We’re updating our talent strategy to prioritize system thinking, product instincts, and the ability to direct and review AI-generated work. We’re also rethinking how we develop junior engineers to build strong fundamentals while leveraging AI effectively.

Addressing the Human Side

A change of this magnitude raises important questions about the evolving role of engineers. We’ve addressed these openly in our sessions and conversations with managers.

For many engineers, craftsmanship has long been tied to writing code line by line, by hand. That identity shift is real, and we are emphasizing that the fundamentals remain: architecture, system design, understanding inter-service dependencies and risks, making debugging easier, and unlocking business value. That work isn’t going away. If anything, it matters more now.

What’s new is multi-context switching. When you’re spawning multiple agents, you have to be able to multitask across them. Some teams we’ve spoken with use a “rule of three” heuristic: running no more than three agents at a time. We haven’t enforced it, but it’s a useful mental model.
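
If it helps to picture the heuristic, it behaves like a bounded worker pool: jobs queue up, but only three run at once. A toy Swift sketch, where `runAgent` is a hypothetical stand-in for launching and supervising a real agent:

```swift
// Toy model of the "rule of three": at most three agent tasks in flight.
func runAgents(jobs: [String], maxConcurrent: Int = 3) async {
    await withTaskGroup(of: Void.self) { group in
        var pending = jobs.makeIterator()

        // Seed the pool with up to maxConcurrent jobs.
        for _ in 0..<maxConcurrent {
            guard let job = pending.next() else { break }
            group.addTask { await runAgent(job) }
        }

        // Each completion frees a slot for the next queued job.
        for await _ in group {
            if let job = pending.next() {
                group.addTask { await runAgent(job) }
            }
        }
    }
}

// Stand-in for kicking off and supervising a real coding agent.
func runAgent(_ job: String) async {
    print("agent working on: \(job)")
}
```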

So our answer to these questions isn’t: “don’t worry.” It’s: “yes, the world is evolving and so are our jobs. Here’s where it’s going. Here’s what great looks like. And here’s how we’re investing in helping you get there.”

The Bottom Line

As we approach the end of these sprints, we’re seeing a clear shift in mindset. Teams are raising the bar on quality and unlocking ambitious solutions to problems that previously sat in the backlog.

Just today, in the context of our largest Q1 initiative (a build system migration), an engineer proposed an intermediary step we would have normally skipped or put off because it threatened delivery dates. We’re moving forward with it, and it will likely save us from a potential internal CI incident down the line. The task is well-suited for agents, and the team recognizes that.

That’s the shift. Not simply using AI to write code faster, but rather to do things we wouldn’t have prioritized before. 

We’re seeing similar patterns elsewhere. Teams have been deploying agents to add logging for bug investigation, instrument observability metrics, and analyze codebases to generate documentation and project plans – tasks that used to be far more difficult to prioritize are becoming table stakes.

AI isn’t coming for technology roles. It’s changing the meaning of those roles. The organizations that thrive will be the ones that help their people make that transition.

This is what it looks like, in practice, to make AI fluency foundational for an entire tech org. It means going beyond adopting a tool to fundamentally resetting the expectation of what the engineering craft looks like.

What’s Coming Next

I’m Kamini, Zocdoc’s CTO, and preparing our tech organization for an AI-native future is one of my core focus areas. This post is the first in a series documenting the journey of evolving how we build, measure impact, and develop talent.

Next, we’ll go deeper into:

  • Training curriculum
  • How we keep our AI Native culture up to date
  • Managing the mental load of infinite code
  • Measuring the impact of AI
  • AI beyond product engineering
  • The new flight formation for an AI Native Tech org
  • Learning from industry experts through regular fireside chats

We’re learning in public. Stay tuned.