The 2025 Tesla Model 3 ships with the most aggressive consumer self-driving stack on the market and is the first Tesla to integrate Grok as an in-car AI assistant. This post is a clear-eyed engineer's take on what's actually in the car, what it does well, what it doesn't, and what the Grok integration adds in 2026.
What FSD actually is (and isn't)
Tesla's Full Self-Driving (FSD) is, as of mid-2026, an SAE Level 2 driver-assistance system in the U.S. That means the car can steer, accelerate, brake, change lanes, navigate intersections, and follow turn-by-turn routes — but the human driver remains legally and operationally responsible at all times, must keep attention on the road, and must be ready to take control instantly. The branding is "Full Self-Driving (Supervised)." The "Supervised" is doing real work.
This is meaningfully different from a true Level 4 / Level 5 system (a Waymo robotaxi, for example) where the vehicle takes operational responsibility within a defined operating domain and there's no human driver requirement.
Hardware 4 (HW4) in the 2025 Model 3
The 2025 Model 3 ships with Tesla's fourth-generation autonomy hardware:
- Eight cameras — three forward (different focal lengths), two side-forward, two side-rear, one rearview. HW4 cameras are higher resolution than HW3, with improved low-light performance and wider dynamic range.
- No LiDAR, and no radar on most builds — Tesla removed radar from new vehicles in 2021-2022 and has not adopted LiDAR. This remains the most contested architectural choice in the industry.
- Updated FSD computer — substantially more compute than HW3, with the headroom to run the larger end-to-end models shipping in FSD v12.5+ in real time.
- HD radar reintroduction: some newer builds reintroduced a high-definition imaging radar; check your specific build sheet.
Vision-only architecture (and why Tesla chose it)
Tesla's argument: humans drive with vision and a couple of mirror checks. If a neural network can be trained to do the same with eight camera streams, you avoid the cost and complexity of LiDAR, the unreliability of automotive radar at distance, and the sensor-fusion problem of reconciling disagreeing sensors.
The counter-argument from the rest of the industry: LiDAR provides geometrically precise distance and shape information that no camera-only system can match in low-visibility conditions (heavy rain, snow, fog, low sun glare). And human-equivalent vision is not actually trivial to replicate.
Where Tesla's bet has paid off so far:
- Lower BOM cost — FSD is shipping in every new Tesla, not just expensive trims.
- Scalable training data — Tesla collects driving data from millions of vehicles. LiDAR systems can't be retrofitted to the fleet.
- The neural nets have improved rapidly with more compute and data.
Where the bet is still in question:
- Heavy weather still degrades the system substantially.
- Sun glare and pre-dawn light cause well-documented edge cases.
- Regulators in some jurisdictions remain skeptical without LiDAR-backed redundancy.
End-to-end neural net (v12 and beyond)
Pre-v12 FSD used a modular pipeline: perception (what's in the scene) → prediction (what will it do) → planning (what should I do) → control (steering/throttle/brake). Each stage was a separate neural network or rule-based system. Engineers wrote hundreds of thousands of lines of C++ to handle special cases ("what if the lane is closed for construction?").
FSD v12 (rolled out through 2024) and the subsequent v12.5 and v13 releases moved to an "end-to-end" architecture: a single large neural network that takes camera frames as input and outputs steering and pedal commands directly. The intermediate representations are still there inside the model but they're learned, not hand-coded.
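To make the architectural shift concrete, here is an illustrative sketch — stub functions only, not Tesla's code, with all the values invented — contrasting the hand-wired pre-v12 pipeline with the single-function shape of an end-to-end model:

```python
# Illustrative sketch of the two architectures described above.
# All functions and values are invented stand-ins for neural networks.

def perceive(frames):
    """Perception: turn camera frames into a scene description."""
    return {"objects": [{"kind": "car", "dist_m": 40.0}], "lanes": 2}

def predict(scene):
    """Prediction: estimate what each detected object will do next."""
    return [{"kind": o["kind"], "will_brake": o["dist_m"] < 50.0}
            for o in scene["objects"]]

def plan(scene, futures):
    """Planning: pick a trajectory given the scene and predictions."""
    slow = any(f["will_brake"] for f in futures)
    return {"target_speed_mps": 20.0 if slow else 30.0}

def control(trajectory):
    """Control: convert the plan into actuator commands."""
    return {"steer": 0.0, "throttle": trajectory["target_speed_mps"] / 30.0}

def modular_drive(frames):
    # Pre-v12 style: separate stages, hand-wired together, with
    # rule-based special cases layered on top.
    scene = perceive(frames)
    return control(plan(scene, predict(scene)))

def end_to_end_drive(frames):
    # v12+ style: conceptually one learned function, frames -> commands.
    # The intermediate representations still exist, but inside the model.
    return {"steer": 0.0, "throttle": 0.67}  # stand-in for a net's output

print(modular_drive(["frame0"]))
```

The practical consequence of the second shape is exactly the trade-off listed below: you can no longer inspect or patch an individual stage, because there are no stages.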
What end-to-end changed in practice:
- Significantly smoother behavior in edge cases (the model generalizes rather than falling off a rule-system cliff).
- Less interpretable failures — when something goes wrong it's harder to debug.
- Faster improvement velocity — new behaviors come from training data rather than engineer-written rules.
- More dependence on training data quality. Bad demonstrations train bad driving.
FSD v13 (late 2025/2026) doubled the model size again and rolled in improvements specifically around city driving, unprotected lefts, and parking-lot navigation.
Where FSD shines
- Highway driving. Smooth lane-keeping, sensible lane changes, predictable behavior in traffic. The original Autopilot use case, now with FSD-level routing and exits.
- Stop-and-go traffic. One of the strongest use cases — the car handles the worst part of commuting better than most humans.
- Familiar routes. The model behaves well on roads with clear lane markings, consistent signage, and normal road geometry.
- Parking lot navigation. Much improved in v13 — the car will route through a parking lot to a specific stall.
- Unprotected lefts. Still hard, but improved enough to handle most of them without intervention.
- Navigation handoffs. Highway-to-exit, exit-to-surface-street transitions are smoother than they were in 2023.
Where FSD still fails (or needs vigilance)
- Construction zones with cone-defined detours that contradict the lane markings.
- Emergency vehicles — FSD's handling of stopped emergency vehicles has been a documented NHTSA concern.
- Heavy rain / snow / fog where camera vision degrades.
- Low-sun glare at sunrise/sunset on east-west arterials.
- Unusual traffic control (police officer directing traffic, flagger at a construction site).
- Roundabouts — better than they were, still uneven.
- School zones and crosswalks with kids around — FSD is conservative, which is right, but sometimes too conservative.
- Phantom braking — less common than in 2022 but not gone.
The honest framing: FSD is a remarkably capable Level 2 system that handles roughly 95% of typical driving well; the remaining 5% is exactly where your attention matters most. The driver-monitoring system (eye tracking) is intentionally aggressive about enforcing this.
Grok integration in the car
The 2025 Model 3 was the first Tesla to ship with native Grok integration via the in-car software stack. Grok is xAI's large language model, conversationally accessed through the car's voice button. It's not part of the FSD driving stack — it's a separate in-car assistant for everything that isn't driving.
What Grok does in the car:
- General-knowledge Q&A while driving (hands-free).
- Trip planning — "find me a good Italian restaurant 20 minutes from here that's open at 9pm and has parking."
- Calendar and message dictation with conversational follow-up.
- Vehicle control via natural language — "warm up the cabin to 72°F before we get home" or "schedule a Service Center visit next Tuesday morning."
- Voice-driven media control — "play that podcast I was listening to last week about ancient Rome."
- Charging and routing recommendations with context awareness.
- Conversational entertainment for kids in the back seat.
What Grok doesn't do:
- It does not drive the car. FSD remains a separate, vision-based driving stack with no LLM in the control loop.
- It does not have authority over safety-critical systems. You cannot ask Grok to "disable phantom-braking alerts" or "drive faster."
- It does not see what the FSD cameras see (no cross-feed between the two systems).
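The boundary drawn by the two lists above can be sketched as a simple allow-list router. This is a hypothetical illustration — the intent names are invented and Tesla has not published this interface — but it captures the stated design: the assistant can touch comfort features, never driving behavior:

```python
# Hypothetical sketch of the assistant/vehicle boundary described above.
# Intent names like "set_cabin_temp" are invented for illustration;
# the real interface is not public.

COMFORT_COMMANDS = {
    "set_cabin_temp",      # climate control
    "schedule_service",    # Service Center booking
    "play_media",          # music / podcasts
}

SAFETY_CRITICAL = {
    "set_speed",           # driving behavior is FSD's job, not the LLM's
    "disable_alerts",      # safety warnings are not voice-controllable
}

def route_request(intent: str, args: dict) -> str:
    """Route a parsed voice intent. The LLM never reaches the drive stack."""
    if intent in SAFETY_CRITICAL:
        return "refused: safety-critical systems are not voice-controllable"
    if intent in COMFORT_COMMANDS:
        return f"ok: {intent}({args})"
    return "handled conversationally"  # general Q&A stays in the LLM

print(route_request("set_cabin_temp", {"deg_f": 72}))
print(route_request("disable_alerts", {}))
```

The design choice worth noting: the safe default is conversational handling, and driving-related intents are refused outright rather than forwarded with guardrails.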
Privacy: Grok queries go to xAI servers, associated with the vehicle ID. Tesla's privacy controls let you opt out of certain data sharing; review the settings.
Supervision and liability
Inside the Model 3, the driver monitoring system uses an in-cabin camera to track eye gaze and head position. Looking away from the road, looking at the phone, eyes closed — all trigger escalating warnings, then a "strikeout" period where FSD is disabled for the rest of the drive (or a week, for repeat offenses).
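The escalation behavior described above amounts to a small state machine. A minimal sketch, with the warning threshold invented (Tesla does not publish the actual values or timing):

```python
# Illustrative escalation logic for the driver-monitoring behavior
# described above. The threshold is an assumption, not Tesla's value.

class AttentionMonitor:
    WARNINGS_BEFORE_STRIKE = 3  # assumed escalation depth

    def __init__(self):
        self.warnings = 0
        self.fsd_enabled = True

    def on_gaze_sample(self, eyes_on_road: bool) -> str:
        if not self.fsd_enabled:
            return "fsd disabled for this drive"
        if eyes_on_road:
            self.warnings = 0          # attention restored resets the count
            return "ok"
        self.warnings += 1
        if self.warnings >= self.WARNINGS_BEFORE_STRIKE:
            self.fsd_enabled = False   # strikeout: FSD off for the drive
            return "strikeout"
        return f"warning {self.warnings}"

m = AttentionMonitor()
for sample in [True, False, False, False]:
    print(m.on_gaze_sample(sample))
```

The real system escalates on time-looking-away rather than discrete samples, and repeat strikeouts extend the lockout, but the shape — warn, escalate, disable — is the same.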
Legally: the human driver remains responsible for accidents on FSD. Tesla has not assumed liability for FSD operation the way Mercedes has done for its Level 3 Drive Pilot in some markets. This will continue to be a meaningful differentiator until FSD becomes a formal Level 3+ system — an open question for 2026-2027.
Tesla vs Waymo vs the rest
The comparison most people want isn't apples-to-apples:
- Waymo: Level 4 robotaxi in defined service areas (Phoenix, San Francisco, parts of LA, Austin, etc.). No driver. LiDAR + radar + cameras. Geofenced. Phenomenally safe per-mile but limited to specific cities.
- Tesla FSD: Level 2 supervised driver assistance available to any Tesla owner anywhere FSD has been validated (most U.S. states, Canada, EU expanding). Camera-only. Driver required.
- Mercedes Drive Pilot: Level 3 conditional autonomy on specific highways under specific conditions. Driver can legally look away. Geographically and condition-limited.
- Cruise / others: various states of development, with several programs paused or restructured since 2023.
Tesla's bet: a Level 2 system that scales to millions of vehicles will eventually generate enough miles, data, and improvement velocity to reach Level 4 at a fraction of the per-vehicle cost of a LiDAR-based robotaxi. Waymo's bet: focused, conservative, sensor-rich vehicles in defined service areas will reach commercial viability first. Both bets are still active.
What 2026 looks like
- FSD v14 is expected to push further on city driving and on parking/charging routing.
- Robotaxi expansion — Tesla's robotaxi service launched in limited markets in 2025-2026 with a much narrower operational design domain than the consumer FSD product.
- Cybertruck and Model Y roll forward the same HW4 + FSD stack.
- Grok integration is expected to expand — in particular, deeper integration with the car's actual state ("why did my range drop 20% today?" with model awareness of cabin temperature, terrain, and driving style).
- Regulatory environment remains the wild card. NHTSA investigations and state-level rules will continue to shape what features can ship.
The 2025 Model 3 is the best version of this experiment so far. It's not autonomous in the formal sense. It is the most useful and the most aggressive consumer driver-assistance system available, and it ships with an in-car LLM assistant that is genuinely useful day-to-day. For what it actually is, it's remarkable. For what the marketing sometimes implies, it's still a Level 2 system that demands your full attention. Both can be true.
For broader AI context, see Claude vs ChatGPT vs Gemini and AI agents and MCP in 2026.
Further reading
- SAE — J3016 Levels of Driving Automation
- NHTSA — National Highway Traffic Safety Administration
- Tesla — Autopilot & FSD documentation
- xAI — xAI / Grok