ADAS/Autonomous Vehicle Tutorial
This tutorial is for educational and academic purposes only. The incident analyses, root cause discussions, and liability scenarios presented are based on publicly available information and represent possible interpretations for teaching systems engineering concepts. Actual accident causes are often complex, multi-factorial, and may be subject to ongoing investigation, legal proceedings, or dispute.
This material should not be construed as definitive statements of fact regarding any specific incident, company, or product. References to companies (Tesla, Uber, Cruise, Waymo, etc.) are for illustrative purposes based on publicly reported information. The hypothetical mitigations and liability scenarios are intended solely to illustrate systems engineering principles and do not represent legal advice or claims about what any organization did or should have done.
NirmIQ and its authors disclaim any liability for interpretations or applications of this educational content.
The Central Question
When an autonomous vehicle crashes, who is liable - the car company or the human?
This tutorial demonstrates why systems engineering with full traceability is MORE important in the AI age, not less.
Overview
- Project: ADAS/Autonomous Vehicle - AI Safety & Liability Edition
- Size: 370 requirements, 240 failure modes
- Focus: Academic exploration of liability attribution concepts
- Key Differentiator: 130+ requirement-to-FMEA traceability links
The Liability Problem
Traditional Vehicles (Simple)
- Human driving → Human responsible
- Accident → Police determine fault → Driver's insurance pays
- Clear liability chain
Autonomous Vehicles (Complex)
- WHO WAS IN CONTROL? AI or human?
- If AI was driving → Manufacturer liable
- If human was driving → Human liable
- If unclear → Litigation nightmare (both parties sue each other)
Without proper engineering: Car manufacturers face unlimited liability for every accident.
With NirmIQ traceability: Clear evidence of:
- AI system was properly designed
- All failure modes were analyzed
- Safety requirements were met
- Event data logged for forensic analysis
Real-World Autonomous Vehicle Incidents
Tesla Autopilot Fatalities
- Issue: AI perception failures - e.g., failure to detect a crossing tractor-trailer in the widely reported 2016 Florida crash
- Liability question: Was the driver paying attention? Was the AI in control?
- Legal outcome: Cases have largely settled out of court or produced mixed verdicts; no clear precedent yet
Uber Autonomous Test Vehicle (2018)
- Issue: AI decision-making failure - saw pedestrian but didn't brake
- Liability: Backup driver criminally charged (later pleaded guilty to endangerment); Uber was not charged but faced civil liability
- Lesson: Need clear logging of AI decisions and human override capability
Cruise Robotaxi Incidents (2023)
- Issue: AI couldn't handle edge cases (emergency vehicles, construction zones)
- Regulatory action: Driverless permits suspended by the California DMV
- Lesson: AI systems need comprehensive failure mode analysis
Waymo Incidents
- Issue: Complex scenarios AI wasn't trained for
- Lesson: Need requirements for all operational scenarios + FMEA for each
Project Structure
7-Level Requirement Hierarchy
Autonomous Vehicle System
├── System Requirements (Perception, Planning, Control)
│ ├── Subsystem Requirements (Camera, Lidar, Radar)
│ │ ├── Component Requirements (AI Model, Sensor Fusion)
│ │ │ ├── Software Requirements (ISO 26262 ASIL-D)
│ │ │ │ ├── Safety Requirements (Fail-operational + liability logging)
│ │ │ │ │ └── Verification Requirements (Test validation)
6 FMEA Analyses
- Perception System DFMEA - AI vision failures (40 modes)
- Decision-Making System DFMEA - AI logic failures (40 modes)
- Sensor Fusion DFMEA - Multi-sensor integration (40 modes)
- Event Data Recorder DFMEA - Legal logging requirements (40 modes)
- Human-Machine Interface DFMEA - Takeover scenarios (40 modes)
- Redundancy Management DFMEA - Fail-safe mechanisms (40 modes)
Key Learning Objectives
1. Liability-Critical Requirements
This sample demonstrates requirements specifically designed for legal defense:
Example: AV-LEGAL-001
"System shall log all AI decisions with explainable rationale
for forensic accident reconstruction"
Why this matters:
- Insurance company asks: "Why did the AI turn left?"
- Without logging: No answer → Manufacturer presumed at fault
- With logging: "AI detected pedestrian, calculated optimal avoidance path, human did not override" → Clear liability determination
Linked FMEA: "Event data recorder failure" → RPN 200 (Critical; the calculation is sketched after this list)
- Severity: 10 (Liability exposure = $millions)
- Occurrence: 2 (Unlikely but possible)
- Detection: 10 (Cannot detect until accident happens)
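The RPN arithmetic behind ratings like these is RPN = Severity × Occurrence × Detection, each scored 1-10. A minimal sketch in Python; the priority thresholds are illustrative assumptions consistent with the numbers in this tutorial, not NirmIQ's actual rules:

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number: product of the three 1-10 FMEA ratings."""
    for rating in (severity, occurrence, detection):
        assert 1 <= rating <= 10, "FMEA ratings use a 1-10 scale"
    return severity * occurrence * detection

def priority(value: int) -> str:
    """Illustrative priority bands; real programs define their own thresholds."""
    if value >= 200:
        return "Critical"
    if value >= 150:
        return "High"
    return "Monitor"

# Event data recorder failure: Severity=10, Occurrence=2, Detection=10
value = rpn(10, 2, 10)
print(value, priority(value))  # 200 Critical
```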
2. Explainable AI Requirements
Example: AV-SAFE-015
"AI decision-making shall be explainable for court admissibility"
Why this matters:
- Court asks: "How does the AI work?"
- "Deep learning black box" → Not admissible as evidence
- Explainable decision tree + logged data → Admissible evidence
Linked FMEA: "AI makes unexplainable decision" → Action: Add decision logging layer
3. Clear Responsibility Boundaries
Example: AV-HMI-003
"System shall clearly indicate to driver whether AI or human
is in control at all times"
Why this matters:
- Accident occurs
- Driver claims: "I thought the AI was driving"
- Manufacturer claims: "Driver was supposed to be alert"
- Without clear indication: Both parties liable (worst outcome)
- With clear indication + logging: Liability determination is factual, not argumentative (see the sketch below)
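A minimal sketch of what a logged control-authority record could look like; the `ControlMode` states and field names are illustrative assumptions, not the sample project's actual schema:

```python
import json
import time
from dataclasses import asdict, dataclass
from enum import Enum

class ControlMode(Enum):
    AUTONOMOUS = "autonomous"
    HUMAN = "human"
    TRANSITION = "transition"  # takeover requested but not yet confirmed

@dataclass
class ControlAuthorityRecord:
    timestamp_s: float          # from a synchronized clock source
    mode: ControlMode
    indicator_active: bool      # was the driver-facing indicator on?
    driver_hands_on_wheel: bool

    def to_log_line(self) -> str:
        record = asdict(self)
        record["mode"] = self.mode.value
        return json.dumps(record)

print(ControlAuthorityRecord(time.time(), ControlMode.AUTONOMOUS, True, False).to_log_line())
```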
Exploring the Sample in NirmIQ
Step 1: View Liability Requirements (5 min)
- Open "ADAS/Autonomous Vehicle" sample
- Go to Requirements tab
- Filter by type: "Legal" or "Safety"
- Click AV-LEGAL-001: "Log all AI decisions"
- See linked FMEA analyses showing how this is validated
Step 2: Explore Event Data Recorder FMEA (10 min)
- Go to Advanced FMEA tab
- Open "Event Data Recorder DFMEA"
- See failure modes:
- "EDR storage failure" → Lost legal evidence
- "Timestamp synchronization failure" → Cannot reconstruct timeline
- "Data tampering" → Evidence inadmissible
- All have RPN ≥ 150 (High/Critical priority)
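Two of these failure modes - timestamp trouble and tampering - are commonly mitigated with a monotonically timestamped, hash-chained log, where each entry embeds the hash of its predecessor so any later edit breaks the chain. A minimal sketch of the idea, not the sample's actual EDR design:

```python
import hashlib
import json
import time

class HashChainedLog:
    """Append-only log; modifying any past entry invalidates every later hash."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value

    def append(self, event: dict) -> None:
        entry = {
            "ts": time.monotonic(),  # monotonic clock resists wall-clock resets
            "event": event,
            "prev": self._prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "event", "prev")}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False  # chain broken: evidence was altered or reordered
            prev = e["hash"]
        return True

log = HashChainedLog()
log.append({"type": "ai_decision", "action": "brake_max"})
log.append({"type": "human_override", "action": "steer_left"})
assert log.verify()
```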
Step 3: Check Traceability for Legal Requirements (10 min)
- Go to Analytics → Traceability
- Filter: "Legal Requirements"
- See 100% coverage - every legal requirement has FMEA validation
- This is what insurance companies and courts want to see
Step 4: Compare with Human-Machine Interface FMEA (10 min)
- Open "Human-Machine Interface DFMEA"
- See failure mode: "Driver unaware of control mode transition"
- RPN: 240 (Critical) - Severity=10, Occurrence=3, Detection=8
- Action items:
- Add visual indicator (steering wheel LED)
- Add audible warning
- Add haptic feedback
- Log all transitions (sketched after this list)
- Verification: Test with 100 test drivers
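A minimal sketch of how those action items could combine into one escalation routine; the actuator functions are hypothetical placeholders, not a real vehicle API:

```python
import time

# Placeholder actuator stubs so the sketch runs standalone:
def set_steering_wheel_led(mode: str) -> None: print(f"[LED] {mode}")
def play_chime(message: str) -> None: print(f"[AUDIO] {message}")
def pulse_steering_wheel() -> None: print("[HAPTIC] pulse")

def announce_control_transition(new_mode: str, log: list) -> None:
    """Redundant visual + audible + haptic alerts, plus a logged record."""
    set_steering_wheel_led(new_mode)
    play_chime(f"Control mode is now: {new_mode}")
    pulse_steering_wheel()
    log.append({"type": "mode_transition", "to": new_mode, "ts": time.time()})

transition_log = []
announce_control_transition("human", transition_log)
```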
How This Prevents Liability Nightmares
Scenario: Autonomous Vehicle Hits Pedestrian
Without NirmIQ Traceability:
- Insurance company: "What happened?"
- Manufacturer: "We don't know, AI made a decision"
- Court: "Show us the AI decision process"
- Manufacturer: "It's a black box"
- Result: Manufacturer presumed 100% at fault, plus exposure to punitive damages
With NirmIQ Traceability:
- Insurance company: "What happened?"
- Manufacturer: "EDR shows:
- Time 14:32:15 - AI detected pedestrian at 50m
- Time 14:32:16 - AI calculated braking trajectory
- Time 14:32:17 - AI applied maximum braking
- Time 14:32:18 - Human override attempted
- Time 14:32:19 - Impact occurred
- AI recommendation: Stop in 40m (safe)
- Human override: Swerve left (unsafe)"
- Result: Evidence shows human override caused accident → Driver liable
The traceability + logging is the legal defense.
Standards Compliance Demonstrated
- ISO 26262 ASIL-D - Automotive functional safety
- SAE J3016 Level 4 - Autonomous driving classification
- UN ECE R157 - Automated Lane Keeping Systems
- NHTSA ADS Guidelines - Automated Driving Systems
Follow-Up Learning Paths
Path 1: Build Simplified ADAS Project (10-20 reqs)
Create your own scaled-down version:
- 10 safety requirements (sensor failures, decision errors)
- 5 legal requirements (logging, explainability)
- 2 FMEA analyses (Perception + Decision-Making)
- Link all failure modes to requirements
Path 2: Explore Commercial Aviation Sample
See similar traceability principles for aerospace:
- Commercial Aviation Tutorial
- Focus: Boeing 737 MAX lessons learned
- 317 requirement-FMEA links
Path 3: Deep-Dive Training
Questions?
- Support: beta@nirmiq.com
- Documentation: docs.nirmiq.com
- Feedback: Use in-app feedback button (bottom-right corner)
Additional Topics
The Three Critical Questions Insurance/Courts Ask
After an autonomous vehicle accident, three questions determine liability:
Question 1: Was the vehicle in autonomous mode?
If you can't answer this with certainty → Manufacturer loses by default
Requirements from our sample:
- ADS-VEH-019: "System shall log liability-critical events with GPS, time, and full sensor data"
- ADS-SAFE-029: "System shall log insurance-critical data: speed, braking, steering inputs"
- ADS-SAFE-030: "Vehicle shall provide manufacturer liability data vs driver liability data"
Linked FMEA: EDR-FM-002 - "System cannot determine if vehicle or human was in control during accident"
- Effect: Insurance denies ALL claims, litigation nightmare
- Mitigation: Log control authority EVERY 10ms (% autonomous vs % human) - a sampling sketch follows
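A minimal sketch of that 10 ms control-split sampler; the loop structure, field names, and the stand-in sensor are illustrative assumptions (a production system would use a real-time scheduler, not `time.sleep`):

```python
import time

def sample_control_split(read_autonomy_fraction, log: list,
                         period_s: float = 0.010, duration_s: float = 0.05) -> None:
    """Record the autonomous-vs-human control split every `period_s` seconds."""
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        auto = read_autonomy_fraction()  # 0.0 = fully human, 1.0 = fully autonomous
        log.append({
            "ts": time.monotonic(),
            "pct_autonomous": round(auto * 100, 1),
            "pct_human": round((1 - auto) * 100, 1),
        })
        time.sleep(period_s)

log = []
sample_control_split(lambda: 0.8, log)  # stand-in sensor: 80% autonomous
print(len(log), "samples; first:", log[0])
```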
Question 2: Did the AI system function as designed?
If you can't prove this → Negligence is assumed
Requirements:
- ADS-VEH-012: "AI models shall be versioned, validated, and traceable to training data"
- ADS-SAFE-016: "AI training data shall be versioned with traceability to model performance"
- ADS-SAFE-031: "AI model updates shall not degrade validated safety performance"
Linked FMEA: DEC-FM-002 - "AI decides to continue driving despite sensor failure"
- Effect: Operating with impaired perception, inevitable accident
- Mitigation: ANY safety-critical sensor failure = DISABLE autonomous mode (guard sketched below)
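That mitigation is essentially a guard condition evaluated continuously. A minimal sketch; the sensor names are assumptions, and the conservative default (unknown sensor = failed) is the point:

```python
SAFETY_CRITICAL_SENSORS = ("front_camera", "lidar", "front_radar")  # assumed set

def autonomy_permitted(sensor_health: dict) -> bool:
    """Fail-safe guard: ANY unhealthy safety-critical sensor forbids autonomy.
    Sensors missing from the report are treated as failed (conservative default)."""
    return all(sensor_health.get(name, False) for name in SAFETY_CRITICAL_SENSORS)

health = {"front_camera": True, "lidar": False, "front_radar": True}
if not autonomy_permitted(health):
    print("Degraded perception: disable autonomous mode, request driver takeover")
```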
Question 3: Can you explain WHY the AI made the decision it made?
If you can't explain → Jury sees "black box that killed someone"
Requirements:
- ADS-VEH-007: "AI decision-making shall be explainable and auditable for legal proceedings"
- ADS-SAFE-017: "System shall provide explainable AI decisions for accident reconstruction"
Linked FMEA: EDR-FM-003 - "Cannot explain why AI made a decision that caused accident"
- Effect: Manufacturer cannot defend in court - LOSES BY DEFAULT
- Mitigation: Log what the AI saw, what it predicted, and why it chose its action (see the sketch below)
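The saw/predicted/chose structure maps directly onto a per-decision log record. A minimal sketch; the field names are illustrative, not a certified EDR format:

```python
import json
import time

def decision_record(perceived, predicted, candidates, chosen, rationale) -> str:
    """One explainable-AI record: what the system saw, what it predicted,
    which actions it weighed, what it chose, and why."""
    return json.dumps({
        "ts": time.time(),
        "perceived": perceived,    # classified objects with positions
        "predicted": predicted,    # forecast trajectories
        "candidates": candidates,  # actions that were evaluated
        "chosen": chosen,
        "rationale": rationale,    # human-readable explanation for reconstruction
    })

print(decision_record(
    perceived=[{"class": "pedestrian", "range_m": 50}],
    predicted=[{"object": 0, "crossing_path": True}],
    candidates=["brake_max", "swerve_left"],
    chosen="brake_max",
    rationale="stopping distance 40 m < range 50 m; swerving risks adjacent lane",
))
```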
Deep Dive: Uber Autonomous Vehicle Fatality (2018)
What Really Happened:
Uber's autonomous system:
- Detected the pedestrian about 6 seconds before impact
- Repeatedly reclassified her (vehicle? bicycle? unknown object?), resetting its path prediction with each change
- As a result, the decision system never committed to braking
- Emergency braking was DISABLED to prevent "false positives"
- Safety driver was watching her phone
The Systems Engineering Failures:
Missing Perception Requirements:
- Missing requirement: "Classify all objects as POTENTIALLY DANGEROUS until proven safe"
- Missing requirement: "ALWAYS brake for uncertain objects in path"
FMEA Failure - Disabling Emergency Braking:
- Decision: Disable emergency braking to prevent false positives (customer comfort)
- Missing FMEA Analysis: "What if obstacle detection is CORRECT but system doubts itself?"
- Effect: CATASTROPHIC - System detects pedestrian but doesn't brake
Liability Ambiguity:
- Was safety driver responsible for intervening?
- Was Uber responsible for disabling emergency braking?
- Result: Criminal charges filed, massive settlement, program shut down
How NirmIQ Would Have Prevented This:
From our sample:
- Requirement ADS-SAFE-002: "System shall never accelerate when obstacle detected within safe stopping distance"
- Requirement ADS-SAFE-004: "System shall detect and avoid all vulnerable road users (pedestrians, cyclists, motorcycles)"
Linked to FMEA PER-FM-002: "AI accelerates into stationary obstacle (Uber/Cruise collision scenario)"
- Cause: AI classifies stationary object as 'safe', sensor fusion failure
- Mitigation 1: Implement 'assume dangerous' default for all unclassified objects
- Mitigation 2: PROHIBIT acceleration when ANY sensor shows potential obstacle in path
- Mitigation 3: Add time-to-collision (TTC) calculations with mandatory braking if TTC < 2 seconds (see the sketch below)
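Mitigation 3 is simple arithmetic: TTC = range ÷ closing speed, with mandatory braking below the threshold. A minimal sketch, assuming the 2-second threshold named above:

```python
def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    """TTC in seconds; infinite if the gap is not closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return range_m / closing_speed_mps

TTC_BRAKE_THRESHOLD_S = 2.0  # from Mitigation 3

def mandatory_brake(range_m: float, closing_speed_mps: float) -> bool:
    return time_to_collision(range_m, closing_speed_mps) < TTC_BRAKE_THRESHOLD_S

# Obstacle 50 m ahead, closing at 17 m/s (~61 km/h): TTC ≈ 2.9 s
print(mandatory_brake(50, 17))  # False - above threshold, keep monitoring
print(mandatory_brake(30, 17))  # True  - TTC ≈ 1.8 s, brake now
```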
The lesson: Conservative design with full traceability prevents "efficiency over safety" decisions.
Deep Dive: Cruise Robotaxi Pedestrian Dragging (2023)
What Happened:
- A pedestrian was first struck by a human-driven vehicle and thrown into the path of the Cruise robotaxi
- The Cruise vehicle braked but could not avoid striking her
- It then DRAGGED the pedestrian roughly 20 feet while trying to pull over
- Cruise initially downplayed severity to regulators
The Systems Engineering Failure:
Missing Requirement:
- "If collision detected with pedestrian under vehicle, STOP IMMEDIATELY"
- "Detect trapped pedestrian and disable movement"
Missing FMEA:
- Failure Mode: "AI attempts to pull over with pedestrian trapped under vehicle"
- Effect: CATASTROPHIC - Severe injuries from dragging
- Cause: AI not trained on "pedestrian trapped under vehicle" scenario
Liability Disaster:
- Was Cruise responsible for dragging pedestrian?
- Or was original collision (human driver) the cause?
- Result: California DMV suspended Cruise's permit, massive regulatory setback
How NirmIQ Would Have Prevented This:
Requirements needed:
- "Vehicle shall detect objects/people trapped under vehicle before moving"
- "If collision involves pedestrian, perform damage assessment before any movement"
- "System shall log all collision events with full sensor data for liability determination"
FMEA needed:
- "AI detects collision but doesn't detect trapped pedestrian" → PROHIBIT vehicle movement until verified safe
The Insurance Question Explained
Option 1: Manufacturer Liability Insurance
- Car company carries insurance for ALL accidents in autonomous mode
- Problem: UNLIMITED liability exposure (every accident potentially $millions)
- Cost: Insurance becomes prohibitively expensive
- Result: Autonomous vehicles economically unviable
Option 2: Occupant Liability Insurance (Status Quo)
- "Driver" still liable even when AI driving
- Problem: Legally dubious - how can someone be liable if not in control?
- Result: First major lawsuit sets precedent → likely shifts to manufacturer
Option 3: Shared Liability Model (Requires Clear Data)
- If AI was in control → Manufacturer liable
- If human was in control → Human liable
- If human ignored takeover request → Human liable
- Requirement: CRYSTAL CLEAR logging of control authority
Our sample project provides Option 3 implementation:
ADS-SAFE-030: "Vehicle shall provide manufacturer liability data vs driver liability data"
- Logged every 10ms: % autonomous vs % human control
- Logged: Driver attentiveness score (camera + steering + pedal inputs)
- Logged: All takeover requests with outcome
This enables insurance to determine liability fairly.
Learning Objectives for Students
1. Systems Engineering is MORE Important in the AI Age
Myth: AI is so smart, it handles everything automatically.
Reality: AI is a black box. Without proper engineering:
- Can't prove AI is safe
- Can't explain AI decisions
- Can't determine liability
- Can't deploy at scale
Lesson: AI requires MORE rigorous systems engineering, not less.
2. Traceability is Your Legal Defense
Scenario: Your autonomous vehicle kills someone.
Prosecutor: "Did you anticipate this failure mode?"
Without traceability: "Uh... we did testing... I think..."
With traceability: "Yes. Requirement ADS-SAFE-001, FMEA PER-FM-001, mitigation validated in test case TC-PER-047. Here's the data."
Verdict: Not guilty vs. guilty.
3. The Liability Question is THE Question
Every autonomous vehicle company must answer: "When your vehicle crashes, who is liable - your company or the occupant?"
Without clear answer:
- Insurance denies coverage
- Regulations block deployment
- Lawsuits bankrupt company
With clear answer (via proper logging):
- Fair liability attribution
- Insurance coverage available
- Regulatory approval
- Deployment possible
4. Documentation Wins Lawsuits
In court:
- He-said-she-said: Loses
- Documentation: Wins
Requirements + FMEA + Traceability + Test Results + Event Logs = Winning defense
Why This Tool Matters
NirmIQ provides:
- Requirements management with full hierarchy
- FMEA analysis with S/O/D ratings
- Explicit requirement-to-FMEA traceability links ← KEY DIFFERENTIATOR (illustrated after this list)
- Change history tracking
- Compliance reporting
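As a rough mental model of what one such traceability link carries, here is an illustrative sketch (not NirmIQ's actual data schema), using the requirement, FMEA, and test-case IDs quoted earlier in this tutorial:

```python
from dataclasses import dataclass

@dataclass
class TraceLink:
    """One explicit link from a requirement to the failure mode that validates it."""
    requirement_id: str   # e.g. "ADS-SAFE-001"
    failure_mode_id: str  # e.g. "PER-FM-001"
    mitigation: str       # free-text summary of the FMEA action
    verified_by: str      # test case providing closure evidence

link = TraceLink(
    requirement_id="ADS-SAFE-001",
    failure_mode_id="PER-FM-001",
    mitigation="Conservative perception defaults validated in test",  # illustrative text
    verified_by="TC-PER-047",
)
print(f"{link.requirement_id} -> {link.failure_mode_id} (verified by {link.verified_by})")
```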
Result: Build systems that are:
- Safer (comprehensive failure mode analysis)
- Defensible (traceability proves due diligence)
- Certifiable (compliance evidence)
- Insurable (liability attribution clear)
Conclusion
The age of AI doesn't make systems engineering obsolete - it makes it MORE CRITICAL than ever.
Autonomous vehicles represent the ultimate test of systems engineering:
- Lives depend on getting it right
- Liability depends on proving you got it right
- Economics depend on insurance accepting your proof
Proper requirements engineering + FMEA + traceability = The difference between:
- Successful autonomous vehicle deployment
- Regulatory shutdown after first fatality
NirmIQ's key differentiator: Explicit requirement-to-FMEA traceability links that prove every safety requirement addresses real failure modes.
This is how you build robust, safe, DEFENSIBLE AI systems.
Remember: Blaming AI for accidents is lazy. Proper systems engineering prevents them.