Commercial Aviation Tutorial
This tutorial is for educational and academic purposes only. The incident analyses, root cause discussions, and failure scenarios presented are based on publicly available information and represent possible interpretations for teaching systems engineering concepts. Actual accident causes are often complex, multi-factorial, and may be subject to ongoing investigation or dispute.
This material should not be construed as definitive statements of fact regarding any specific incident, company, or product. The hypothetical mitigations and "what-if" scenarios are intended solely to illustrate systems engineering principles and do not represent claims about what any organization did or should have done.
NirmIQ and its authors disclaim any liability for interpretations or applications of this educational content.
Overview
Project: Commercial Aviation - Next-Generation Narrow-Body Aircraft
Size: 370 requirements, 240 failure modes
Focus: Academic exploration of systems engineering lessons from aviation incidents
Key Differentiator: 317 requirement-to-FMEA traceability links
Why This Sample Matters
This large sample project demonstrates NirmIQ's core differentiator: comprehensive requirement-to-FMEA traceability that prevents catastrophic single points of failure.
Real-World Context
- Boeing 737 MAX crashes (346 deaths) - Single AOA sensor failure
- Air India Express Flight 812 (158 deaths) - Unstabilized approach and runway overrun; cockpit warnings went unheeded
- Lion Air Flight 610 (189 deaths, the first of the two 737 MAX crashes) - Lack of requirement-to-FMEA traceability
What Makes This Special
| Feature | Standard Sample | This Project |
|---|---|---|
| Requirements | 185 | 370 (2x) |
| Failure Modes | 120 | 240 (2x) |
| Req-FMEA Links | ~40 | 317 |
| Safety Coverage | ~60% | 100% |
| Change History | Minimal | 25+ tracked |
Project Structure
7-Level Requirement Hierarchy
Aircraft System (Top)
├── System Requirements (Flight Control, Navigation, etc.)
│ ├── Subsystem Requirements (Primary FCS, Backup FCS)
│ │ ├── Component Requirements (Sensors, Actuators)
│ │ │ ├── Software Requirements (DO-178C Level A)
│ │ │ │ ├── Safety Requirements (Fail-operational)
│ │ │ │ │ └── Verification Requirements (Test specs)
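For readers who want to model such a hierarchy themselves, here is a minimal Python sketch of a requirement tree. The `Requirement` class and its fields are illustrative, not NirmIQ's actual data model:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One node in the requirement hierarchy (fields are illustrative)."""
    req_id: str
    text: str
    level: str                              # e.g. "System", "Safety"
    children: list[Requirement] = field(default_factory=list)

def depth(req: Requirement) -> int:
    """Number of levels in the tree rooted at this requirement."""
    if not req.children:
        return 1
    return 1 + max(depth(child) for child in req.children)

# A tiny slice of the hierarchy above; the full sample reaches 7 levels.
root = Requirement("AV-001", "Aircraft system", "Aircraft", children=[
    Requirement("AV-FCS-001", "Flight control system", "System", children=[
        Requirement("AV-FCS-PRI-001", "Primary FCS", "Subsystem"),
    ]),
])
print(depth(root))  # -> 3 for this slice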
6 FMEA Analyses
- Primary Flight Control System DFMEA - 40 failure modes
- Backup Flight Control System DFMEA - 40 failure modes
- Navigation System DFMEA - 40 failure modes
- Angle of Attack Sensor DFMEA - 40 failure modes (MCAS scenario)
- Flight Control Computer DFMEA - 40 failure modes
- Actuator System DFMEA - 40 failure modes
Key Learning Objectives
1. Understanding Requirement-to-FMEA Traceability
Traditional approach (as described in the 737 MAX investigations):
- Requirements team writes specifications
- FMEA team analyzes failures independently
- NO LINK between safety requirements and failure modes
- Result: MCAS single point of failure went undetected
NirmIQ approach (This sample):
- Every critical safety requirement linked to specific FMEA analysis
- Failure modes explicitly trace back to requirements they address
- Gaps immediately visible in Traceability Matrix
- 317 explicit links ensure nothing falls through the cracks (a minimal gap check is sketched below)
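As an illustration of that gap check: the sketch below uses illustrative requirement IDs and a flat set of (requirement, failure mode) pairs; NirmIQ's actual schema is richer, but the principle is the same.

```python
# Minimal sketch of requirement-to-FMEA gap detection. IDs and the
# flat link set are illustrative, not NirmIQ's actual schema.
safety_requirements = {"AV-SAFE-001", "AV-SAFE-006", "AV-SAFE-013"}

req_to_fmea_links = {
    ("AV-SAFE-001", "FM-042: Single AOA sensor failure"),
    ("AV-SAFE-013", "FM-107: Total hydraulic pressure loss"),
}

covered = {req for req, _ in req_to_fmea_links}
gaps = safety_requirements - covered
print(f"Coverage: {len(covered)}/{len(safety_requirements)}")
print("Unlinked safety requirements:", sorted(gaps))
# -> AV-SAFE-006 has no linked failure mode: exactly the kind of gap
#    that a traceability matrix makes immediately visible.
```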
2. Exploring the Sample in NirmIQ
Step 1: View Requirements Hierarchy (5 min)
- Log into NirmIQ
- Open "Commercial Aviation - Next-Generation Narrow-Body Aircraft" sample
- Go to Requirements tab
- Expand the hierarchy to see 7 levels
- Click on AV-SAFE-001: "Flight control system shall continue operation with any single failure"
- Note the Linked FMEA section showing which failure modes address this requirement
Step 2: Explore FMEA Analyses (10 min)
- Go to Advanced FMEA tab
- Open "Angle of Attack Sensor DFMEA"
- Find failure mode: "Single AOA sensor failure causing incorrect MCAS activation"
- See RPN ratings: Severity=10, Occurrence=3, Detection=7 → RPN=210 (High priority)
- View action items: Design changes to add redundancy
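The RPN shown in Step 2 is simply the product of the three 1-10 ratings. A minimal sketch:

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number: the product of three 1-10 ratings."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("FMEA ratings must be between 1 and 10")
    return severity * occurrence * detection

# The AOA sensor failure mode from Step 2:
print(rpn(severity=10, occurrence=3, detection=7))  # -> 210
```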
Step 3: Check Traceability Matrix (10 min)
- Go to Analytics → Traceability
- Filter by "Safety Requirements"
- See 100% coverage - every safety requirement has linked FMEA
- Compare with "Software Requirements" → 50% coverage (DO-178C Level A subset)
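The per-category coverage figures in this step can be reproduced with a simple tally. The records below are illustrative stand-ins for the sample data:

```python
from collections import defaultdict

# Sketch of per-category coverage as shown in the Traceability view.
# Categories, IDs, and link lists are illustrative.
requirements = [
    {"id": "AV-SAFE-001", "category": "Safety",   "linked_fmea": ["FM-042"]},
    {"id": "AV-SAFE-013", "category": "Safety",   "linked_fmea": ["FM-107"]},
    {"id": "AV-SW-004",   "category": "Software", "linked_fmea": ["FM-210"]},
    {"id": "AV-SW-009",   "category": "Software", "linked_fmea": []},
]

totals, linked = defaultdict(int), defaultdict(int)
for req in requirements:
    totals[req["category"]] += 1
    linked[req["category"]] += bool(req["linked_fmea"])  # True counts as 1

for category in totals:
    pct = 100 * linked[category] / totals[category]
    print(f"{category}: {pct:.0f}% coverage")
# -> Safety: 100% coverage
#    Software: 50% coverage
```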
Step 4: Review Change History (5 min)
- Go to Requirements tab
- Select requirement AV-SAFE-001
- Click Change History
- See 3 revisions documenting lessons from the Boeing 737 MAX incidents
How This Prevents Boeing-Type Failures
Boeing 737 MAX MCAS Failure
What went wrong:
- Safety requirement: "System shall handle sensor failures" (implied, not explicit)
- FMEA: Single AOA sensor failure analyzed but not linked to requirement
- Result: Nobody realized the requirement wasn't satisfied → 346 deaths
How NirmIQ prevents this:
- Explicit requirement: AV-SAFE-001 "Shall continue operation with any single failure"
- Linked FMEA: "Single AOA sensor failure" → RPN 210 (High)
- Traceability shows: This failure mode violates the requirement
- Action item generated: Add second AOA sensor + voting logic
- Verification requirement: Test with single sensor failure
The link is the safety net.
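The "voting logic" named in the action item above is worth a closer look. Here is a hedged sketch of sensor voting: the 5-degree tolerance and the fallback behavior are illustrative choices for teaching, not a certified design.

```python
def aoa_vote(readings: list[float], tolerance_deg: float = 5.0):
    """Return (selected_value, valid). Mid-value selection masks one
    bad sensor when three are available; with only two, disagreement
    can be detected but not resolved, so automation is inhibited."""
    if len(readings) >= 3:
        return sorted(readings)[len(readings) // 2], True  # mid-value select
    if len(readings) == 2:
        agree = abs(readings[0] - readings[1]) <= tolerance_deg
        return (sum(readings) / 2 if agree else None), agree
    return readings[0], False  # single sensor: no cross-check possible

# One sensor stuck at 21 degrees; the three-way vote masks the fault.
value, valid = aoa_vote([4.8, 5.1, 21.0])
print(value, valid)  # -> 5.1 True
```

Note the design choice this sketch makes visible: two sensors can only detect a fault, while three can survive one. That is why the FMEA mitigation discussed later calls for triple, not dual, redundancy.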
Standards Compliance Demonstrated
- FAA Part 25 - Transport category aircraft certification
- DO-178C Level A - Software development for critical systems
- ARP 4754A - Development of civil aircraft and systems
- DO-254 - Hardware design assurance
Follow-Up Learning Paths
Path 1: Build Your Own (Scaled-Down)
Create a simplified version (10-20 requirements) following the same patterns:
- Define 10 safety requirements
- Create 2 FMEA analyses with 10 failure modes each
- Link every failure mode to at least one requirement
- Verify 100% coverage in Traceability Matrix
Path 2: Explore ADAS Sample
See how the same principles apply to autonomous vehicles:
- ADAS/Autonomous Vehicle Tutorial
- Focus: Liability attribution in AI systems
- 130+ requirement-FMEA links for legal defense
Path 3: Deep-Dive Training
Questions?
- Support: beta@nirmiq.com
- Documentation: docs.nirmiq.com
- Feedback: Use in-app feedback button (bottom-right corner)
Additional Topics
Understanding Systems Thinking
Traditional engineering courses teach:
- Component design in isolation
- "Design this part to specification"
This project teaches:
- Systems engineering: How components interact
- Emergent behaviors: Failure of Component A can cascade to Components B, C, D
- Holistic analysis: Can't analyze a sensor in isolation - must consider entire system
Exercise:
- Find requirement AV-SAFE-006 (AOA sensor disagreement)
- Trace it to the FMEA failure mode
- List all OTHER components affected if AOA sensor fails
- Draw a system diagram showing cascading effects
You'll discover:
- AOA sensor → Flight Control Computer → Control Surfaces → Entire Aircraft!
- A single sensor failure can bring down an aircraft carrying nearly 200 people
- This is why redundancy matters
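Cascading effects like these can be explored programmatically as a graph walk. The dependency edges below are a simplified, illustrative model of the chain you just traced:

```python
from collections import deque

# Illustrative propagation edges: a failure of the key component
# can propagate to each listed downstream component.
propagates_to = {
    "AOA sensor": ["Flight Control Computer"],
    "Flight Control Computer": ["Elevator actuators", "Stabilizer trim"],
    "Elevator actuators": ["Pitch control"],
    "Stabilizer trim": ["Pitch control"],
}

def cascade(start: str) -> list[str]:
    """Breadth-first walk of every component a failure can reach."""
    seen, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()
        for downstream in propagates_to.get(node, []):
            if downstream not in seen:
                seen.add(downstream)
                order.append(downstream)
                queue.append(downstream)
    return order

print(cascade("AOA sensor"))
# -> ['Flight Control Computer', 'Elevator actuators',
#     'Stabilizer trim', 'Pitch control']
```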
Real-World Consequences
Every failure mode in this project references real accidents where people died.
Exercise:
- Read about Lion Air Flight 610 crash
- Find the corresponding failure mode in our FMEA
- Read the mitigations we've specified
- Ask yourself: "If Boeing had this traceability, would 346 people be alive today?"
Answer: Almost certainly yes.
Robustness Principles
Principle 1: No Single Points of Failure
Definition: A single component failure shall not cause catastrophic system failure.
How traceability ensures this:
- Safety requirement states: "Flight control shall continue with any single failure"
- Requirement is linked to FMEA failure mode: "Single AOA sensor failure"
- FMEA shows: Severity = 10 (CATASTROPHIC), Mitigation = "Triple redundancy required"
- If design only has dual redundancy, the link reveals the gap!
Without traceability, the gap might not be discovered until after an accident.
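That "dual vs. triple redundancy" gap check is easy to automate once requirement and design data live in one place. A sketch, with illustrative records and field names:

```python
# Sketch: compare the redundancy an FMEA mitigation demands with what
# the design provides. The records and numbers are illustrative.
failure_modes = [
    {"id": "FM-042", "item": "AOA sensor",
     "required_redundancy": 3, "designed_redundancy": 2},
    {"id": "FM-107", "item": "Hydraulic system",
     "required_redundancy": 3, "designed_redundancy": 3},
]

for fm in failure_modes:
    if fm["designed_redundancy"] < fm["required_redundancy"]:
        print(f"GAP {fm['id']}: {fm['item']} has "
              f"{fm['designed_redundancy']}x redundancy, mitigation "
              f"requires {fm['required_redundancy']}x")
# -> GAP FM-042: AOA sensor has 2x redundancy, mitigation requires 3x
```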
Principle 2: Defense in Depth
Definition: Multiple independent protection layers prevent accidents.
How traceability ensures this:
Protection Layer 1: Requirement AV-SAFE-001
"Flight control system shall continue operation with any single failure"
|
| (linked to)
V
Protection Layer 2: FMEA Analysis
"Triple-redundant FCCs with dissimilar redundancy"
|
| (linked to)
V
Protection Layer 3: Test Requirement
"Verify system operates with any single FCC failure"
|
V
Complete verification chain!
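Checking that every safety requirement completes this three-layer chain is a transitive lookup. A minimal sketch with illustrative link tables:

```python
# Sketch: confirm the full requirement -> FMEA -> test chain exists.
# The link tables and IDs are illustrative.
req_to_fmea = {"AV-SAFE-001": ["FM-001"]}
fmea_to_test = {"FM-001": ["TEST-FCC-017"]}

def chain_complete(req_id: str) -> bool:
    """True only if the requirement reaches at least one test."""
    return any(fmea_to_test.get(fm) for fm in req_to_fmea.get(req_id, []))

print(chain_complete("AV-SAFE-001"))  # -> True: all three layers linked
print(chain_complete("AV-SAFE-099"))  # -> False: a broken chain to flag
```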
Principle 3: Fail-Safe Design
Definition: When failures occur, system transitions to safe state.
Example from our project:
- Requirement AV-SAFE-013: "Landing gear shall extend by gravity if all hydraulic systems fail"
- Linked to FMEA: "Hydraulic System PFMEA - Total hydraulic pressure loss"
- Mitigation: "Gravity extension with manual release handle"
- Test: "Verify gear extends within 30 seconds using gravity alone"
Traceability ensures fail-safe mechanisms are not forgotten!
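The fail-safe transition itself can be captured as simple control logic. The pressure threshold below is illustrative, not a real aircraft value:

```python
# Fail-safe sketch for AV-SAFE-013. The threshold is illustrative.
HYDRAULIC_MIN_PSI = 1500.0

def landing_gear_command(hydraulic_pressure_psi: float,
                         gear_selected_down: bool) -> str:
    if not gear_selected_down:
        return "RETRACTED"
    if hydraulic_pressure_psi < HYDRAULIC_MIN_PSI:
        # Safe state: release the uplocks and let gravity extend the gear.
        return "GRAVITY_EXTENSION"
    return "HYDRAULIC_EXTENSION"

print(landing_gear_command(0.0, gear_selected_down=True))
# -> GRAVITY_EXTENSION: total hydraulic loss still yields gear down
```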
Air India Flight 171 Case Study (Ahmedabad Fuel Cutoff Accident)
What Happened (per the AAIB preliminary report; the investigation is ongoing):
- Air India Flight 171 (Boeing 787-8), departing Ahmedabad, Gujarat, for London Gatwick (12 June 2025)
- BOTH engines lost thrust seconds after liftoff when the fuel control switches transitioned to CUTOFF
- The crew had neither the time nor the altitude to restore thrust
- The aircraft crashed moments after takeoff; 241 of the 242 people on board died, along with people on the ground
The Systems Engineering Failure:
The critical question that reveals the systems engineering gap:
WHY was a dual-engine fuel cutoff possible seconds after liftoff, in one of the most critical phases of flight?
The FADEC (Full Authority Digital Engine Control) system had access to:
- Angle of attack sensors
- Airspeed indicators
- Altitude data
- Throttle position
- Landing gear position (extended = takeoff, approach, or landing phase)
- Flight phase information
Any competent FMEA would have identified:
- "Uncommanded fuel cutoff during critical flight phase" = CATASTROPHIC failure mode
- Mitigation: "FADEC shall IGNORE fuel cutoff commands when:
- Airspeed < 200 knots AND altitude < 2000 ft AND landing gear extended"
- OR: "Require dual confirmation for fuel cutoff during takeoff, approach, and landing"
- This is basic fail-safe design!
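As a sketch only: the flight-phase interlock proposed above might look like the following. Every threshold and signal name here comes from this tutorial's hypothetical mitigation, not from any actual FADEC design.

```python
# Sketch of the hypothetical flight-phase interlock proposed above.
# Thresholds and signal names are illustrative, not a real FADEC.
def fuel_cutoff_permitted(airspeed_kt: float, altitude_ft: float,
                          gear_extended: bool,
                          dual_confirmation: bool) -> bool:
    """Inhibit single-channel fuel cutoff in critical flight phases."""
    critical_phase = (airspeed_kt < 200 and altitude_ft < 2000
                      and gear_extended)
    if not critical_phase:
        return True               # normal shutdown logic applies
    return dual_confirmation      # demand an explicit second action

# An uncommanded single-channel cutoff seconds after liftoff is inhibited:
print(fuel_cutoff_permitted(180, 650, True, dual_confirmation=False))
# -> False
```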
How NirmIQ Would Have Prevented This:
In our large sample project, see:
- Requirement AV-SAFE-018: "Fuel system shall prevent fuel starvation with any single component failure"
- Linked to Propulsion DFMEA: "FADEC uncommanded fuel cutoff during critical flight phase"
- Failure Mode Analysis:
- Cause: Electrical fault, short circuit, or false signal causing fuel cutoff command
- Effect: CATASTROPHIC - Loss of all engines during approach/landing
- Current Control: Pilot manual override (NOT sufficient - pilots may not react in time)
- MISSING CONTROL: FADEC flight phase awareness!
- Required Mitigation: "FADEC shall maintain fuel flow despite a single cutoff command during critical flight phases (takeoff, approach, landing)"
Why This Matters:
In the Air India Flight 171 accident:
- The crew had only seconds to react
- A total loss of thrust just after liftoff leaves virtually no margin for recovery
- The system did nothing to protect them - no interlock tied fuel cutoff to flight phase
If proper FMEA with traceability had existed:
- Requirement: "Fuel supply shall be fail-safe during critical flight phases"
- FMEA would have identified: "Uncommanded fuel cutoff" as CATASTROPHIC
- Design mitigation: "FADEC inhibits single-channel cutoff commands during critical flight phases"
- Result: The accident could plausibly have been PREVENTED
The traceability link would have flagged that the safety requirement wasn't adequately implemented!
The Business Case for Traceability
Boeing's 737 MAX Grounding:
- Aircraft grounded for 20 months (March 2019 - November 2020)
- Estimated cost: $20+ billion
- Reputation damage: Incalculable
- Legal settlements: $2.5 billion+
If proper requirement-to-FMEA traceability had existed:
- MCAS single-sensor design would have been flagged during development
- Estimated cost to fix: ~$50 million (dual-sensor design)
- Potential savings: roughly $19.95 billion
- Lives saved: 346
The business case is clear: Traceability is not optional.
Professional Engineering Responsibility
As future engineers, you will design systems that can kill people when designed wrong.
This sample project demonstrates:
- How to do it RIGHT: Complete traceability, no single points of failure
- What happens when done WRONG: Boeing 737 MAX
Your Responsibility:
- Speak up when you see single points of failure
- Demand traceability from requirements to tests
- Don't assume someone else will catch the problem
- Remember: Your signature on a drawing means "I verified this is safe"
Exercise: Imagine you're the engineer who approved the single-sensor MCAS design.
- Could you sleep at night after the crashes?
- How would you face the families of the 346 victims?
- This is not hypothetical - it happened to real engineers at Boeing
Conclusion
This large sample project demonstrates that requirement-to-FMEA traceability is not optional - it's life-critical.
Key Takeaways:
- Transparency: Every stakeholder can see how safety is ensured
- Robustness: No single point of failure goes unanalyzed
- Compliance: Auditors can verify complete coverage instantly
- Lives Saved: Proper traceability could have prevented the 346 deaths in the Boeing 737 MAX crashes
NirmIQ's Differentiator:
- Complete bidirectional traceability
- Automated coverage analysis
- Change impact visualization
- Compliance reporting built-in
Remember: Engineering is about responsibility. Traceability saves lives.