
Tesla FSD + Grok 2026: How the 2025 Model 3 Drives Itself


The 2025 Tesla Model 3 ships with the most aggressive consumer self-driving stack on the market and is the first Tesla to integrate Grok as an in-car AI assistant. This post is a clear-eyed engineer's take on what's actually in the car, what it does well, what it doesn't, and what the Grok integration adds in 2026.

What FSD actually is (and isn't)

Tesla's Full Self-Driving (FSD) is, as of mid-2026, an SAE Level 2 driver-assistance system in the U.S. That means the car can steer, accelerate, brake, change lanes, navigate intersections, and follow turn-by-turn routes — but the human driver remains legally and operationally responsible at all times, must keep attention on the road, and must be ready to take control instantly. The branding is "Full Self-Driving (Supervised)." The "Supervised" is doing real work.

This is meaningfully different from a true Level 4 / Level 5 system (a Waymo robotaxi, for example) where the vehicle takes operational responsibility within a defined operating domain and there's no human driver requirement.

Hardware 4 (HW4) in the 2025 Model 3

The 2025 Model 3 ships with Tesla's fourth-generation autonomy hardware (HW4), which pairs higher-resolution cameras with a substantially more powerful onboard computer than HW3's.

Vision-only architecture (and why Tesla chose it)

Tesla's argument: humans drive with vision and a couple of mirror checks. If a neural network can be trained to do the same with eight camera streams, you avoid the cost and complexity of LiDAR, the unreliability of automotive radar at distance, and the sensor-fusion problem of reconciling disagreeing sensors.

The counter-argument from the rest of the industry: LiDAR provides geometrically precise distance and shape information that no camera-only system can match in low-visibility conditions (heavy rain, snow, fog, low sun glare). And human-equivalent vision is not actually trivial to replicate.
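The sensor-fusion problem Tesla sidesteps can be made concrete with a toy example: two sensors report different distances to the same object, and the stack has to reconcile them. A minimal inverse-variance-weighted fusion sketch (the numbers and sensor characteristics are illustrative, not from any real stack):

```python
def fuse(estimates):
    """Inverse-variance-weighted fusion of independent range estimates.

    estimates: list of (distance_m, variance) pairs.
    Returns the fused distance and its variance.
    """
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * d for (d, _), w in zip(estimates, weights)) / sum(weights)
    return fused, 1.0 / sum(weights)

# A camera is noisy at range; LiDAR is precise. When they disagree,
# the fused estimate is pulled strongly toward the low-variance sensor.
camera = (10.0, 4.0)   # 10 m estimate, high variance
lidar = (9.0, 0.01)    # 9 m estimate, very low variance
distance, var = fuse([camera, lidar])
```

The fused estimate lands almost exactly on the LiDAR reading, which is the point of the industry's counter-argument: when a precise sensor is present, disagreement resolves in its favor. Tesla's vision-only choice trades that away to avoid carrying (and reconciling) the second sensor at all.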

The bet has partially paid off: the vision stack handles the bulk of everyday driving with no LiDAR and no radar. Whether it can match sensor-rich stacks in the worst visibility conditions is still an open question.

End-to-end neural net (v12 and beyond)

Pre-v12 FSD used a modular pipeline: perception (what's in the scene) → prediction (what will it do) → planning (what should I do) → control (steering/throttle/brake). Each stage was a separate neural network or rule-based system. Engineers wrote hundreds of thousands of lines of C++ to handle special cases ("what if the lane is closed for construction?").
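The modular pipeline above can be sketched schematically. Everything here is illustrative (the class names, the toy threshold rules, the pre-labeled "frame") and stands in for what were, in the real pre-v12 stack, separate neural networks and hand-written C++ rules:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # "car", "pedestrian", "cone", ...
    distance_m: float  # distance ahead of the ego vehicle

def perceive(frame):
    """Stand-in for the perception net: the toy 'frame' is already labeled."""
    return [Detection(k, d) for k, d in frame]

def predict(detections):
    """Stand-in for prediction: assume each object closes by 1 m per step."""
    return [(det, det.distance_m - 1.0) for det in detections]

def plan(predictions):
    """Hand-coded rules, in the spirit of the pre-v12 C++ special cases."""
    for det, future_m in predictions:
        if det.kind == "cone" and future_m < 20.0:
            return "change_lane"   # the "lane closed for construction" case
        if future_m < 5.0:
            return "brake"
    return "cruise"

def control(maneuver):
    """Map the planner's decision to (accel, steer) actuator commands."""
    return {"cruise": (0.2, 0.0),
            "brake": (-0.8, 0.0),
            "change_lane": (0.0, 0.3)}[maneuver]

# The four stages chained: a cone 18 m ahead triggers the special case.
throttle_steer = control(plan(predict(perceive([("cone", 18.0)]))))
```

Each stage has a legible, inspectable interface, which is both the strength of this design (you can debug one stage in isolation) and its weakness (every special case needs a rule someone thought to write).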

FSD v12 (rolled out through 2024) and the subsequent v12.5 and v13 releases moved to an "end-to-end" architecture: a single large neural network that takes camera frames as input and outputs steering and pedal commands directly. The intermediate representations are still there inside the model but they're learned, not hand-coded.
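By contrast, the end-to-end shape is a single learned function from pixels to commands, with no hand-specified stages in between. A tiny stand-in (random untrained weights, one toy grayscale frame instead of eight camera streams; nothing here reflects the real model's architecture or scale):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "end-to-end" net: flattened camera pixels in, (steer, accel) out.
# The hidden layer is the learned intermediate representation -- the
# perception/prediction/planning structure is implicit, not hand-coded.
W1 = rng.normal(scale=0.1, size=(64, 16))
W2 = rng.normal(scale=0.1, size=(16, 2))

def drive(frame):
    """Map an 8x8 grayscale frame directly to bounded (steer, accel)."""
    h = np.tanh(frame.reshape(-1) @ W1)
    return np.tanh(h @ W2)

commands = drive(rng.random((8, 8)))
```

The trade is the mirror image of the modular design: there is no stage boundary to inspect or patch, so improvement comes from training data rather than from writing another rule.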

In practice, end-to-end replaced those hand-written special cases with behavior learned directly from fleet driving data.

FSD v13 (late 2025/2026) doubled the model size again and rolled in improvements specifically around city driving, unprotected lefts, and parking-lot navigation.

Where FSD shines, and where it still needs vigilance

The honest framing: FSD is a remarkably capable Level 2 system that handles ~95% of typical driving well, with the remaining 5% being the part you absolutely cannot disengage from. The driver-monitoring system (eye tracking) is intentionally aggressive about this.

Grok integration in the car

The 2025 Model 3 was the first Tesla to ship with native Grok integration via the in-car software stack. Grok is xAI's large language model, conversationally accessed through the car's voice button. It's not part of the FSD driving stack — it's a separate in-car assistant for everything that isn't driving.

Day to day, that means voice answers, explanations, and conversation while you drive; it never touches steering, braking, or any other driving control.

Privacy: Grok queries are processed on xAI's servers and are associated with the vehicle's ID. Tesla's privacy controls let you opt out of certain data sharing; it's worth reviewing the settings.

Supervision and liability

Inside the Model 3, the driver monitoring system uses an in-cabin camera to track eye gaze and head position. Looking away from the road, looking at the phone, eyes closed — all trigger escalating warnings, then a "strikeout" period where FSD is disabled for the rest of the drive (or week, for repeat offenses).
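The escalation logic described above can be sketched as a small state machine. The thresholds and policy here are made up for illustration; Tesla's actual values and escalation behavior are not public in this form:

```python
class DriverMonitor:
    """Escalating-warning sketch of FSD's attention enforcement.

    Thresholds are illustrative, not Tesla's actual values.
    """
    MAX_WARNINGS = 3

    def __init__(self):
        self.warnings = 0
        self.struck_out = False

    def on_gaze_sample(self, eyes_on_road):
        if self.struck_out:
            return "fsd_disabled"      # strikeout persists for the drive
        if eyes_on_road:
            return "ok"
        self.warnings += 1
        if self.warnings >= self.MAX_WARNINGS:
            self.struck_out = True
            return "strikeout"
        return "warning"

m = DriverMonitor()
events = [m.on_gaze_sample(e) for e in [True, False, False, False, True]]
```

Note that in this sketch a strikeout is sticky: even a driver who looks back at the road stays locked out, which mirrors the "rest of the drive" penalty described above.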

Legally: the human driver remains responsible for accidents on FSD. Tesla has not assumed liability for FSD operation the way Mercedes has done for its Level 3 Drive Pilot in some markets. This will continue to be a meaningful differentiator until FSD becomes a formal Level 3+ system — an open question for 2026-2027.

Tesla vs Waymo vs the rest

The comparison most people want isn't apples-to-apples:

Tesla's bet: a Level 2 system that scales to millions of vehicles will eventually generate enough miles, data, and improvement velocity to reach Level 4 at a fraction of the per-vehicle cost of a LiDAR-based robotaxi. Waymo's bet: focused, conservative, sensor-rich vehicles in defined service areas will reach commercial viability first. Both bets are still active.

What 2026 looks like

The 2025 Model 3 is the best version of this experiment so far. It's not autonomous in the formal sense. It is the most useful and the most aggressive consumer driver-assistance system available, and it ships with an in-car LLM assistant that is genuinely useful day-to-day. For what it actually is, it's remarkable. For what the marketing sometimes implies, it's still a Level 2 system that demands your full attention. Both can be true.


For broader AI context, see Claude vs ChatGPT vs Gemini and AI agents and MCP in 2026.