Aug. 19, 2025

From GPT-3 to Claude: Daniela Amodei’s Lessons on Building Anthropic Around AI Safety

When Daniela Amodei left OpenAI with six colleagues including her brother, Dario, to co-found Anthropic, they were chasing a mission: put safety and human values at the center of generative AI.

Three years later, Anthropic’s Claude is used by millions and known for its HHH approach — Helpful, Honest, Harmless. But getting there meant navigating messy model behaviors, scaling a multi-cofounder team, and making contrarian decisions about research vs. go-to-market.

In a conversation on First Block with Notion’s Akshay Kothari, Amodei opened up about the firsts, the trade-offs, and the philosophies shaping Anthropic’s journey.


1. Safety as the Starting Line — Not the Afterthought

Amodei and her co-founders, all veterans of AI research at OpenAI, saw a gap in the industry: What if safety wasn’t bolted on later, but baked into every decision from day one?

“We wanted to build something from scratch where safety and making humans the center of generative AI was the foundation.”

For Anthropic, that meant spending the first 18 months focused solely on research — no sales team, no go-to-market push — to ensure the tech was both powerful and aligned before touching the market.


2. Splitting Horizons: 10-Year Vision Meets 2-Year Execution

One advantage of founding with her brother? Clear ownership zones.

Dario, the “technical visionary,” looks five to ten years ahead. Daniela’s role is to take those concepts and make them tangible within one to two years.

“How do we take these incredible technical ideas and turn them into something people can use today?”

That separation keeps disagreements minimal and execution focused — a useful model for any founding team juggling vision with delivery.


3. The HHH Framework — and the Trade-Off Tightrope

Claude’s Helpful, Honest, Harmless framework is more than a tagline. Anthropic has dedicated research teams for each pillar, tackling challenges like hallucinations, safety guardrails, and everyday usefulness.

But Amodei is candid about the trade-offs:

“You can have a perfectly harmless model today — it just wouldn’t be very helpful. The art is raising the watermark on all three together.”

They use Constitutional AI to encode these priorities into training, while still giving customers room to tune the balance between creative and safety-sensitive needs.


4. Early-Stage Chaos: Potato Diets and Dragon Mode

Early Claude prototypes weren’t always the polished assistant we see today.

One version insisted the best weight loss method was an “all-potato diet.” Another entered “dragon mode” for reasons no one could trace. Over-tuning harmlessness led to responses like:

“I’m concerned about you… here’s a therapy link”

— even when asked who the 34th U.S. president was.

The takeaway? Breakthroughs are paved with bizarre iterations. Founders in fast-moving fields should expect — and embrace — strange detours.


5. Founding with Six Co-Founders and Making It Work

Most startups struggle with two co-founders. Anthropic began with seven.

The key was pre-existing trust: some had worked together at Google Brain, others had research relationships going back 15 years. That trust allowed a clear division of work and flexibility as the company scaled: some co-founders returned to individual-contributor roles, others stayed in management.


6. Listening Without Losing Your Compass

In a field evolving at breakneck speed, listening to customer feedback is a matter of survival.

“About 80% of the time, product roadmap questions are really research roadmap questions.”

Anthropic runs a tight prioritization filter: What’s high-impact and tractable? Not every request can make the next release, but they maintain a feedback loop between research and product to keep user needs shaping the tech.


7. A Founder’s Advice for Building on the AI Frontier

Standard startup wisdom says “pick a lane and don’t pivot too soon.” Amodei’s advice flips that for AI founders:

“Have flexibility and imagine what your product could be six months from now. The models themselves are evolving so fast your roadmap might need to, too.”

In other words: agility isn’t optional — it’s the core competency.


8. Personal Sustainability in High-Growth Chaos

Amodei’s day starts with a workout, family time with her two-year-old, and mornings reserved for deep thinking. Afternoons? Meetings.

The boundary she guards most:

When she’s at work, she’s fully at work. When home, she’s fully home. It’s how she stays grounded while riding three hypergrowth company waves in a row.


Key Takeaways for Founders

  • Lead with values: Embedding safety from day one can become a competitive edge, not just a compliance check.
  • Divide ownership clearly: Separate long-term vision from near-term execution to reduce friction.
  • Expect messy iterations: Quirky missteps are part of developing frontier tech.
  • Stay agile: AI moves so fast that research breakthroughs will reshape your roadmap.
  • Listen actively: Build strong feedback loops without chasing every request.
  • Protect your balance: Personal sustainability fuels long-term leadership.