Behind Bridgit AI: What we’ve learned building our AI stack

This post was written by Vincent Seguin, Bridgit’s Chief Technology Officer

Bridgit AI has now been in the hands of customers for several months, and we’re working every day to expand its capabilities and deliver more value. At the same time, we’re continuing our commitment to transparency by offering another look behind the scenes at how Bridgit is strategically embracing AI.

I previously wrote about cultivating a culture of AI curiosity, and how internal hackathons have helped spark creativity and bring new ideas to life. For this post, I want to pull back the curtain on another side of our AI journey: how we’ve built an AI-enabled technical stack, and how we’ve approached the challenge of keeping an entire organization learning together.

Every company is navigating the AI transition differently, and my hope is that sharing our approach might serve as inspiration for organizations facing similar challenges. To be clear, we don’t have all the answers. With a new tool launching every week—or so it seems—the pace of change has felt dizzying at times. But that’s precisely why we’ve focused on building systems for learning, rather than chasing every new release. I’m excited to share what that looks like for us.

The developer tooling evolution

Like most engineering teams, our AI journey started with GitHub Copilot. It was a useful introduction, but felt limited in scope. The arrival of 2025 brought an explosion of AI-assisted coding tools.

This presented an interesting challenge: How do you balance budget discipline with the need to let people experiment and find what works best for them? At Bridgit, our philosophy has been to lean toward experimentation, while putting lightweight guardrails in place: monitoring spend to avoid surprises and ensuring tools are logged for security purposes.

As the year progressed, we saw adoption of Cursor and JetBrains’ Junie grow steadily. By the second half of the year, Claude Code emerged as a strong contender (and it’s where much of the industry seems to be converging).

As usage matured, we started establishing standards to make AI tooling more consistent and effective across the team. We’ve introduced AGENTS.md rules across our most-used repositories, defined a shared configuration for common MCP servers, and created LLM-friendly markdown files for frequent operations. The goal is to make it easier for anyone on the team to get reliable, high-quality results from these tools – without having to reinvent the wheel each time.
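To make the shared-configuration idea concrete, here is a minimal sketch of what a project-level MCP configuration for Claude Code can look like. The server names, package, and environment variable below are illustrative, not our actual setup:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  }
}
```

Checking a file like this into a repository means every developer (and every agent session) starts from the same set of tool connections, rather than each person wiring up servers by hand.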

We’ve also started experimenting with the product team on a more ambitious front: having Claude Code build features directly from detailed specs. The results so far have been promising, and it’s opened up interesting conversations about how we might rethink parts of our development workflow.

We believe AI fluency is now fundamental to engineering work, which is why we’ve incorporated it into our performance review criteria. Rather than tracking specific tool usage, we’ve wrapped it into a broader criterion called “Growth Mindset.” This focus helps ensure everyone takes the time to level up and explore new approaches, regardless of which tools they choose.

Collective learning as a strategy

One principle has guided our approach from the start: learning can’t be left to individuals alone. In a moment of rapid change, it has to be an organizational responsibility.

On the engineering side, we’ve created dedicated Slack channels for sharing discoveries and asking questions. We also host monthly dev demos, where much of the conversation has naturally gravitated toward showcasing what people have accomplished with new AI tools. These sessions have become a highlight, and are equal parts knowledge-sharing and inspiration.

For the broader organization, we’ve taken a similar approach. We established company-wide channels for AI learning and have used our quarterly kickoffs and Friday all-hands meetings to run demos and talks. These have ranged from basics, like building a shared glossary of AI terms, to more forward-looking discussions about where the technology is headed.

We also ran a company-wide survey to understand how comfortable people are with AI and what they’d most like to learn. This has helped us tailor training to actual needs rather than assumptions.

Choosing the right tools

On the tooling front, we started with a team trial of ChatGPT but quickly found ourselves gravitating toward Anthropic’s Claude. Our general sense is that Claude is better suited to workplace and organizational needs, particularly with its connector capabilities for integrating with other tools. We’ve since made Enterprise plans available to employees upon request and enabled additional capabilities like Claude Desktop.

On the automation side, we reintroduced Zapier across the organization – this time with its AI features enabled. We did evaluate alternatives like n8n, which is a great product with strong technical flexibility. Ultimately, we chose Zapier for its breadth of out-of-the-box integrations and its gentler learning curve for non-technical team members. The goal was to empower people across the organization to build their own automations, not just those comfortable writing code. It’s been a good opportunity to not only build AI fluency but also to embrace automation more broadly.

What’s next

One key piece we’re actively working on is securing and enabling the use of MCP servers. Most of our developers already use them locally with a shared configuration, but we want more visibility and control there. In the same vein, we’re exploring ways to connect Claude to more tools in our stack that don’t yet have official MCP integrations.

On the engineering side, we’re also preparing to dig into AI agents as a team, learning how to build them properly for production. It’s one thing to experiment with agentic patterns in a hackathon; it’s another to deploy them reliably at scale. That’s the next frontier for us.
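For readers less familiar with the agentic pattern being discussed, here is a minimal sketch of the core loop: call a model, execute any tool it requests, feed the result back, and stop on a final answer. The model here is a hard-coded stub (a real system would call an LLM API with retries and timeouts), and all names are illustrative:

```python
# Minimal agent loop with a stubbed model, so the loop itself is runnable.
# In production the stub would be replaced by an LLM API call.

TOOLS = {
    "add": lambda a, b: a + b,  # a toy tool; real agents expose real capabilities
}

def stub_model(messages):
    # Pretend the model requests a tool once, then produces a final answer.
    tool_msgs = [m for m in messages if m["role"] == "tool"]
    if not tool_msgs:
        return {"type": "tool_call", "name": "add", "args": {"a": 2, "b": 3}}
    return {"type": "final", "content": f"The sum is {tool_msgs[-1]['content']}."}

def run_agent(user_prompt, model=stub_model, max_steps=5):
    messages = [{"role": "user", "content": user_prompt}]
    # A hard step limit is a cheap but important production guardrail:
    # it bounds cost and prevents runaway tool-call loops.
    for _ in range(max_steps):
        reply = model(messages)
        if reply["type"] == "final":
            return reply["content"]
        result = TOOLS[reply["name"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent exceeded max_steps without finishing")

print(run_agent("What is 2 + 3?"))  # -> The sum is 5.
```

The gap between this sketch and production is exactly where the hard work lives: observability, permissioning of tools, error handling, and evaluation.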

Beyond that? It’s hard to say what 2026 will bring. But with the learning infrastructure we’ve put in place, we’re confident we’ll be ready to adapt. The pace of change in AI shows no signs of slowing down, and we’re excited to keep building alongside it. If your organization is navigating similar challenges, I hope our experience offers some useful inspiration 🚀