Life-critical system

A life-critical system or safety-critical system is a system whose failure or malfunction may result in one (or more) of the following outcomes:

  • death or serious injury to people
  • loss or severe damage to equipment/property
  • environmental harm

Risks of this sort are usually managed with the methods and tools of safety engineering. A life-critical system is designed to lose less than one life per billion (10⁹) hours of operation.[1] Typical design methods include probabilistic risk assessment, a method that combines failure mode and effects analysis (FMEA) with fault tree analysis. Safety-critical systems are increasingly computer-based.
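The fault-tree side of probabilistic risk assessment reduces to simple probability arithmetic over independent basic events. The following is a minimal sketch; the component failure rates and the tree shape are illustrative assumptions, not values from any real system.

```python
# Minimal fault-tree arithmetic: per-hour failure probabilities of
# independent basic events are combined through OR and AND gates.

def p_or(*probs):
    """OR gate: the top event occurs if ANY input event occurs."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

def p_and(*probs):
    """AND gate: the top event occurs only if ALL input events occur."""
    p_all = 1.0
    for q in probs:
        p_all *= q
    return p_all

# Hypothetical per-hour failure probabilities for three components.
sensor, primary_computer, backup_computer = 1e-5, 1e-6, 1e-6

# The redundant computer pair must both fail (AND); either that pair
# or the single shared sensor brings the system down (OR).
top_event = p_or(sensor, p_and(primary_computer, backup_computer))
print(top_event)  # dominated by the unduplicated sensor, ~1e-5 per hour
```

Note how redundancy drives the computer pair's contribution down to 10⁻¹², while the single sensor dominates the result; this is why analysis of this kind is used to find the components worth duplicating.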

Contents

  • 1 Reliability regimes
  • 2 Software engineering for life-critical systems
  • 3 Examples of life-critical systems
    • 3.1 Infrastructure
    • 3.2 Medicine
    • 3.3 Nuclear engineering
    • 3.4 Recreation
    • 3.5 Transport
      • 3.5.1 Railway
      • 3.5.2 Automotive
      • 3.5.3 Aviation
      • 3.5.4 Spaceflight
  • 4 See also
  • 5 References
  • 6 External links

Reliability regimes

Several reliability regimes for life-critical systems exist:

  • Fail-operational systems continue to operate when their control systems fail. Examples include elevators, the gas thermostats in most home furnaces, and passively safe nuclear reactors. Fail-operational mode is sometimes unsafe: a launch-on-loss-of-communications posture was rejected as a control scheme for U.S. nuclear forces precisely because it is fail-operational — a loss of communications would cause a launch, making this mode of operation too risky. This contrasts with the fail-deadly behavior of the Perimeter system built during the Soviet era.[2]
  • Fail-safe systems become safe when they cannot operate. Many medical systems fall into this category. For example, an infusion pump can fail, and as long as it alerts the nurse and ceases pumping, it will not threaten the loss of life because its safety interval is long enough to permit a human response. In a similar vein, an industrial or domestic burner controller can fail, but must fail in a safe mode (i.e. turn combustion off when they detect faults). Famously, nuclear weapon systems that launch-on-command are fail-safe, because if the communications systems fail, launch cannot be commanded. Railway signaling is designed to be fail-safe.
  • Fail-secure systems maintain maximum security when they cannot operate. For example, while fail-safe electronic doors unlock during power failures, fail-secure ones lock, keeping an area secure.
  • Fail-passive systems continue to operate in the event of a system failure. An example is an aircraft autopilot: after a failure, the aircraft remains in a controllable state, allowing the pilot to take over and complete the flight with a safe landing.
  • Fault-tolerant systems avoid service failure when faults are introduced to the system. Examples include control systems for ordinary nuclear reactors. The normal method of tolerating faults is to have several computers continually test the parts of a system and switch in hot spares for failing subsystems. As long as faulty subsystems are replaced or repaired at normal maintenance intervals, these systems are considered safe. Note that the computers, power supplies, and control terminals used by human operators must all be duplicated in these systems in some fashion.
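The fail-safe behavior described above can be sketched as a controller whose output defaults to the safe state whenever a fault is detected. This is a minimal illustration, loosely modeled on the burner-controller example; the function and signal names are hypothetical.

```python
# Sketch of the fail-safe pattern: on any detected fault, drive the
# output to the safe state (combustion off) instead of continuing.

SAFE_OFF = "off"

def control_burner(flame_detected: bool, sensor_ok: bool, demand: bool) -> str:
    # A failed sensor means the controller cannot trust its inputs,
    # so it must fail in the safe mode.
    if not sensor_ok:
        return SAFE_OFF
    # Only run the burner when heat is demanded AND the flame is
    # confirmed; every other combination falls through to "off".
    if demand and flame_detected:
        return "on"
    return SAFE_OFF
```

The key design choice is that "off" is the default reached from every path: the unsafe output requires an explicit, positively confirmed set of conditions, mirroring how railway signals default to "stop" and infusion pumps default to "cease pumping".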

Software engineering for life-critical systems

Software engineering for life-critical systems is particularly difficult. Three aspects aid the engineering of such software. The first is process engineering and management. The second is selecting appropriate tools and an environment for the system, which lets developers test the system by emulation and observe its effectiveness. The third is addressing legal and regulatory requirements, such as FAA requirements for aviation: setting a standard under which a system must be developed forces designers to adhere to its requirements. The avionics industry has succeeded in producing standard methods for producing life-critical avionics software, and similar standards exist for the automotive (ISO 26262), medical (IEC 62304), and nuclear (IEC 61513) industries. The standard approach is to carefully code, inspect, document, test, verify, and analyze the system. Another approach is to certify a production system and a compiler, and then generate the system's code from specifications. Yet another uses formal methods to generate proofs that the code meets the requirements. All of these approaches improve software quality in safety-critical systems by testing or eliminating manual steps in the development process, because people make mistakes, and such mistakes are the most common cause of potentially life-threatening errors.
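The "carefully code, inspect, test, verify" approach can be illustrated with requirements-based testing: a guard function whose behavior is pinned down by explicit checks at the requirement's boundaries. The device, function name, and limit below are illustrative assumptions, not taken from any real standard.

```python
# Sketch of a defensively coded safety function for a hypothetical
# infusion pump: invalid requests are rejected outright, and valid
# ones are clamped to a safe operating envelope.

MAX_RATE_ML_PER_H = 999.0  # illustrative upper limit, not a real spec value

def clamp_infusion_rate(requested: float) -> float:
    """Return a rate guaranteed to lie within [0, MAX_RATE_ML_PER_H]."""
    # NaN compares unequal to itself; treat it, like negatives,
    # as an invalid request rather than silently coercing it.
    if requested != requested or requested < 0.0:
        raise ValueError("invalid rate request")
    return min(requested, MAX_RATE_ML_PER_H)
```

In a requirements-based process, each clause of the requirement ("reject invalid input", "never exceed the limit", "pass normal values through unchanged") maps to at least one test exercising its boundary, so an inspector can trace every branch of the code back to a stated requirement.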

Examples of life-critical systems

Infrastructure

Medicine[3]

Technology requirements can go beyond avoiding failure: medical systems may also actively facilitate intensive care (which deals with healing patients) and life support (which is for stabilizing patients).

Nuclear engineering[4]

Recreation

Transport

Railway[5]

  • Railway signalling and control systems
  • Platform detection to control train doors[6]
  • Automatic train stop[6]

Automotive[7]

Aviation[8]

Spaceflight[9]

See also

References

  1. ^ AC 25.1309-1A
  2. ^ "Inside the Apocalyptic Soviet Doomsday Machine". WIRED. 
  3. ^ "Medical Device Safety System Design: A Systematic Approach". mddionline.com. 
  4. ^ "Safety of Nuclear Reactors". world-nuclear.org. 
  5. ^ http://rtos.com/images/uploads/Safety-Critical_Systems_In_Rail_Transportation.pdf
  6. ^ a b http://www.fersil-railway.com/wp-content/uploads/PLAQUETTEA4-ENGL.pdf
  7. ^ "Safety-Critical Automotive Systems". sae.org. 
  8. ^ Leanna Rierson. Developing Safety-Critical Software: A Practical Guide for Aviation Software and DO-178C Compliance.  
  9. ^ http://www.dept.aoe.vt.edu/~cdhall/courses/aoe4065/NASADesignSPs/N_PG_8705_0002_.pdf

External links

  • An Example of a Life-Critical System
  • Safety-critical systems Virtual Library
  • Explanation of Fail Operational and Fail Passive in Avionics
  • Useful Slides which explain Fault Tolerance and Fail * in distributed Systems