
Anthropic & the Claude Family: Why We Build on Claude

Anthropic is the AI lab we've bet our company on. Every app in the Aether AI platform talks to Claude. Our consulting practice recommends Claude to clients more often than anything else. This post explains why, without the marketing.

Origins

Anthropic was founded in 2021 by Dario and Daniela Amodei, along with several former OpenAI researchers who left to focus on AI safety. The pitch was: build the most capable AI in the world, but make safety the architecture, not the guardrail. That distinction matters more than it sounds.

By 2026, Anthropic has gone from a research lab to a multi-billion-dollar company with infrastructure partnerships with Amazon, Google, and Apple. Claude is now in iMessage on iPhones, in Slack, in Notion, in countless products you use daily.

The Claude model family

Anthropic ships three tiers:

  1. Haiku: the small, fast, cheap model for high-volume work.
  2. Sonnet: the mid-tier workhorse balancing capability and cost.
  3. Opus: the frontier model for the hardest reasoning tasks.

Each tier has gone through multiple generations: Claude 1, 2, 3, then 3.5, then 4, now 4.5+. Major versions tend to drop roughly 9-12 months apart. Minor versions (Sonnet 4.5 → Sonnet 4.7) are quieter capability bumps that arrive almost monthly.

Picking a tier

For most consulting engagements, we recommend starting with Sonnet. It hits 90% of use cases at 20% of Opus pricing. Drop to Haiku for batch jobs. Bump to Opus only when reasoning quality is the bottleneck.
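That rule of thumb is simple enough to encode. The sketch below does exactly that; the model IDs are illustrative placeholders, not Anthropic's real identifiers, so check the current model list before using them:

```python
# Sketch: choosing a Claude tier by workload.
# Model IDs below are placeholders -- verify against Anthropic's model list.
TIERS = {
    "haiku":  {"model": "claude-haiku-latest",  "use": "batch jobs, high volume"},
    "sonnet": {"model": "claude-sonnet-latest", "use": "default: chat, code, product features"},
    "opus":   {"model": "claude-opus-latest",   "use": "reasoning-bound work"},
}

def pick_tier(batch: bool = False, reasoning_bound: bool = False) -> str:
    """Start at Sonnet; drop to Haiku for batch work;
    go to Opus only when reasoning quality is the bottleneck."""
    if reasoning_bound:
        return "opus"
    if batch:
        return "haiku"
    return "sonnet"

print(pick_tier())                      # sonnet
print(pick_tier(batch=True))            # haiku
print(pick_tier(reasoning_bound=True))  # opus
```

Trivial as it looks, putting the decision in one function keeps model choice auditable per workload instead of scattered across the codebase.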

Constitutional AI

The most distinctive thing about Claude is how it was trained. Anthropic pioneered a technique called Constitutional AI โ€” instead of just learning from human feedback at scale (which is what RLHF effectively is), Claude is trained against a written set of principles called a "constitution."

In practice, this means Claude is harder to jailbreak, more transparent about its limitations, and more likely to say "I don't know" instead of confidently hallucinating. Every model has hallucination problems. Claude tends to admit when it's reaching.

For a consultancy, that's enormous. We don't recommend AI features where confident hallucination would be catastrophic, and Claude's calibration on uncertainty has already stopped us from shipping at least one feature that would have failed in production.

Claude Code

Claude Code is Anthropic's developer tool: a CLI that runs Claude inside your terminal, with full access to your filesystem, git, and shell. It's the tool I'm using right now to write this blog post and ship the website. If you're a developer, it fundamentally changes how you work.

Two reasons it works:

  1. Real codebase context. Claude Code reads your actual files, runs your tests, sees your errors. No copy-pasting snippets into a chat window.
  2. Project memory. A file called CLAUDE.md at your project root persists context across sessions. Style preferences, architecture decisions, tone: all loaded automatically.
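A minimal CLAUDE.md is just plain markdown. The filename and auto-loading behavior are the documented parts; the contents below are an illustrative example, not a prescribed schema:

```markdown
# CLAUDE.md

## Project
iOS app in Swift/SwiftUI; small FastAPI backend.

## Conventions
- Prefer async/await over completion handlers.
- Run the test suite before declaring a task done.

## Tone
Blog drafts: first person, plain language, no marketing copy.
```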

The first time you build a feature with Claude Code and watch it self-correct based on a failed test, you understand why this is the future of development.

Claude Cowork & Design

Cowork and Design are Anthropic's productivity surfaces. Cowork is a workspace for documents, projects, and ongoing collaboration, like ChatGPT's "Projects" but built around Claude's longer context and better writing.

Claude Design is the newest layer: a generative UI/design environment where Claude can produce React components, Figma-style mockups, and full design systems. We use it for app mockups before any Xcode work begins.

MCP: the protocol that changes everything

The Model Context Protocol (MCP) is an open standard Anthropic released in late 2024 to let AI models talk to tools: your file system, your database, GitHub, Slack, anything. Think of it as USB for AI.

Before MCP, every integration was a custom one-off. After MCP, any tool can expose itself once and every MCP-compatible AI client (Claude Desktop, Claude Code, others) can use it. The ecosystem of MCP servers has exploded; there are now thousands.

This matters for clients because AI integration no longer means building a custom backend for each AI feature. It means installing an MCP server and pointing Claude at it.
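Concretely, wiring a server into Claude Desktop is a few lines of JSON in its config file. The `mcpServers` shape follows the MCP quickstart; the server package and the path here are placeholders:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    }
  }
}
```

Each entry is just a command the client launches and talks to over the protocol, which is why "install an MCP server" is configuration work rather than backend work.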

Computer Use

In October 2024, Anthropic introduced Computer Use: a capability where Claude can take screenshots of your screen, move the mouse, click, type, and operate your computer like a human. It's not AGI, but it's the first practical step toward AI agents that can use software you didn't write specifically for them.
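The request shape, as of that 2024 beta, looks roughly like the sketch below (no network call is made). The tool type string and the action names in the comment are taken from the beta announcement and may have changed since, so treat them as assumptions to verify against current docs:

```python
# Sketch of a Computer Use request payload (built locally, never sent).
# "computer_20241022" is the tool type from Anthropic's October 2024 beta;
# the model ID is a placeholder.
request = {
    "model": "claude-sonnet-latest",
    "max_tokens": 1024,
    "tools": [{
        "type": "computer_20241022",
        "name": "computer",
        "display_width_px": 1280,
        "display_height_px": 800,
    }],
    "messages": [{
        "role": "user",
        "content": "Open the ERP client and export this month's invoices.",
    }],
}

# An agent loop would send this, receive tool_use blocks such as
# {"action": "screenshot"} or {"action": "left_click", "coordinate": [x, y]},
# execute them locally, and feed the results back as tool_result messages.
print(request["tools"][0]["type"])  # computer_20241022
```

The key design point is that the model never touches the machine directly: your loop executes each action and returns a fresh screenshot, so you can log, rate-limit, or veto any step.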

For consulting, Computer Use unlocks automation for clients with legacy software that has no API. Claude can drive a 2008-era ERP system as competently as a junior employee, and at $0.50/hour of compute instead of $30/hour of wages.

Why we build here

We've used every major model. We have OpenAI keys, Google Gemini access, Grok, Perplexity Pro, the works. We still build on Claude.

The reasons:

  1. Calibration: Claude admits uncertainty instead of confidently hallucinating, which matters when a wrong answer is expensive.
  2. Developer tooling: Claude Code and CLAUDE.md make it the model we can actually build with, not just prompt.
  3. Ecosystem: MCP turns integrations into configuration instead of custom backends.
  4. Tiering: Haiku, Sonnet, and Opus let us match cost to workload.

None of this means OpenAI or Google are bad picks; we use them for specific workloads. But for the kind of AI features we build (chatbots with persistent memory, agentic workflows, code generation), Claude is the strongest current foundation.

If you're choosing a model for your business and don't know where to start, book a call. The right answer depends on your use case, but in 2026, "default to Claude unless you have a specific reason otherwise" is a defensible position.

Sources & References
  1. Anthropic: Official Anthropic news & research
  2. Anthropic: Constitutional AI: Harmlessness from AI Feedback (paper)
  3. Model Context Protocol: MCP specification
  4. Anthropic: Computer Use documentation
  5. Anthropic: Claude Code documentation