Avionics software

From Wikipedia, the free encyclopedia

Avionics software is embedded software with legally mandated safety and reliability concerns used in avionics. The main difference between avionics software and conventional embedded software is that the development process is required by law and is optimized for safety. It is claimed that the process described below is only slightly slower and more costly (perhaps 15 percent) than the normal ad hoc processes used for commercial software. Since most software fails because of mistakes, eliminating the mistakes at the earliest possible step is also a relatively inexpensive and reliable way to produce software. In some projects, however, mistakes in the specifications may not be detected until deployment, at which point they can be very expensive to fix.

The basic idea of any software development model is that each step of the design process has outputs called "deliverables."[1] If the deliverables are tested for correctness and fixed, then normal human mistakes cannot easily grow into dangerous or expensive problems. Most manufacturers[2] follow the waterfall model to coordinate the design product,[3] but almost all explicitly permit earlier work to be revised. The result is more often closer to a spiral model.

For an overview of embedded software, see embedded system and software development models. The rest of this article assumes familiarity with that information, and discusses the ways in which avionics software differs from commercial embedded systems and commercial development models.

General overview

Since most avionics manufacturers see software as a way to add value without adding weight, the importance of embedded software in avionic systems is increasing.

Most modern commercial aircraft with autopilots use flight computers and so-called flight management systems (FMS) that can fly the aircraft without the pilot's active intervention during certain phases of flight. Also under development or in production are unmanned vehicles: missiles and drones that can take off, cruise and land without airborne pilot intervention.

In many of these systems, failure is unacceptable. The reliability of the software running in airborne vehicles (civil or military) is shown by the fact that most airborne accidents occur due to manual errors. Unfortunately, reliable software is not necessarily easy to use or intuitive; poor user interface design has been a contributing cause of many aerospace accidents and deaths.[citation needed]

Regulatory issues

Due to safety requirements, most nations regulate avionics, or at least adopt standards in use by a group of allies or a customs union. The three regulatory regimes that most affect international aviation development are those of the U.S., the E.U. and Russia.

In the U.S., avionic and other aircraft components have safety and reliability standards mandated by the Federal Aviation Regulations, Part 25 for Transport Airplanes, Part 23 for Small Airplanes, and Parts 27 and 29 for Rotorcraft. These standards are enforced by "designated engineering representatives" of the FAA who are usually paid by a manufacturer and certified by the FAA.

In the European Union, the IEC describes "recommended" requirements for safety-critical systems, which are usually adopted without change by governments. A safe, reliable piece of avionics has a "CE Mark." The regulatory arrangement is remarkably similar to fire safety in the U.S. and Canada: the government certifies testing laboratories, and the laboratories certify both manufactured items and organizations. Essentially, the oversight of the engineering is outsourced from the government and manufacturer to the testing laboratory.

To assure safety and reliability, national regulatory authorities (e.g. the FAA, CAA, or DOD) require software development standards. Some representative standards include MIL-STD-2167 for military systems, and RTCA DO-178B and its successor DO-178C for civil aircraft.

The regulatory requirements for this software can be expensive compared to other software, but they are usually the minimum that is required to produce the necessary safety.

Development process

The main difference between avionics software and other embedded systems is that the actual standards are often far more detailed and rigorous than commercial standards, usually described by documents with hundreds of pages. The software is usually run on a real-time operating system.

Since the process is legally required, most processes have documents or software to trace requirements from numbered paragraphs in the specifications and designs to exact pieces of code, with exact tests for each, and a box on the final certification checklist. This is specifically to prove conformance to the legally mandated standard.
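
A common low-tech traceability convention is to tag each routine and its test with the requirement paragraph it implements, so that a script or traceability tool can cross-reference code, tests and specification. The sketch below illustrates the idea in C; the requirement number "SRS-4.2.1" and the saturation rule are invented for the example, not taken from any real specification.

    #include <stdint.h>
    #include <assert.h>

    /* Implements: SRS-4.2.1 -- reported altitude is saturated to 0..60000 ft.
     * (The paragraph number and the rule are hypothetical.) */
    static int32_t report_altitude_ft(int32_t raw_altitude_ft)
    {
        if (raw_altitude_ft < 0)
            return 0;              /* SRS-4.2.1, lower bound */
        if (raw_altitude_ft > 60000)
            return 60000;          /* SRS-4.2.1, upper bound */
        return raw_altitude_ft;
    }

    /* Verifies: SRS-4.2.1 -- listed against the same paragraph in the
     * traceability matrix and ticked off on the certification checklist. */
    int main(void)
    {
        assert(report_altitude_ft(-5) == 0);
        assert(report_altitude_ft(70000) == 60000);
        assert(report_altitude_ft(35000) == 35000);
        return 0;
    }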

A specific project may deviate from the processes described here, either because alternative methods are used or because the required safety level is low.

Almost all software development standards describe how to perform and improve specifications, designs, coding, and testing (see software development model). However, avionics software development standards add some steps to the development process for safety and certification:

Human interfaces

Projects with substantial human interfaces are usually prototyped or simulated. The videotape is usually retained, but the prototype is retired immediately after testing, because otherwise senior management and customers may come to believe the system is complete. A major goal is to find human-interface issues that can affect safety and usability.

Hazard analysis

Safety-critical avionics usually have a hazard analysis. Even in the early stages, the project already has at least a vague idea of the main parts of the system. An engineer then takes each block of the block diagram and considers the things that could go wrong with that block, and how they affect the system as a whole. Subsequently, the severity and probability of the hazards are estimated. The problems then become requirements that feed into the design's specifications.
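
The classification step can be illustrated with a small data structure. The severity and probability categories and the acceptability rule below are a simplified, invented sketch for illustration only; they are not the classification scheme of any particular standard.

    #include <stdio.h>

    /* Illustrative hazard record; categories and rule are hypothetical. */
    typedef enum { MINOR, MAJOR, HAZARDOUS, CATASTROPHIC } severity_t;
    typedef enum { EXTREMELY_IMPROBABLE, REMOTE, PROBABLE } probability_t;

    typedef struct {
        const char   *block;        /* block-diagram element */
        const char   *failure;      /* what could go wrong with it */
        severity_t    severity;
        probability_t probability;
    } hazard_t;

    /* Invented rule: the more severe the failure, the less probable it
     * must be shown to be; otherwise a derived requirement is written. */
    static int hazard_acceptable(const hazard_t *h)
    {
        if (h->severity == CATASTROPHIC)
            return h->probability == EXTREMELY_IMPROBABLE;
        if (h->severity == HAZARDOUS)
            return h->probability <= REMOTE;
        return 1;
    }

    int main(void)
    {
        hazard_t h = { "air data computer", "frozen pitot input",
                       HAZARDOUS, PROBABLE };
        printf("%s / %s: %s\n", h.block, h.failure,
               hazard_acceptable(&h) ? "acceptable"
                                     : "becomes a derived requirement");
        return 0;
    }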

Projects involving military cryptographic security usually include a security analysis, using methods very like the hazard analysis.

Maintenance manual

As soon as the engineering specification is complete, writing the maintenance manual can start. A maintenance manual is essential to repairs, and of course, if the system can't be fixed, it will not be safe.

There are several levels to most standards. A low-safety product such as an in-flight entertainment unit (a flying TV) may escape with a schematic and procedures for installation and adjustment. A navigation system, autopilot or engine may have thousands of pages of procedures, inspections and rigging instructions. Documents are now (2003) routinely delivered on CD-ROM, in standard formats that include text and pictures.

One of the odder documentation requirements is that most commercial contracts require an assurance that system documentation will be available indefinitely. The normal commercial method of providing this assurance is to form and fund a small foundation or trust. The trust then maintains a mailbox and deposits copies (usually in ultrafiche) in a secure location, such as rented space in a university's library (managed as a special collection), or (more rarely now) buried in a cave or a desert location.[4]

Design and specification documents

These are usually much like those in other software development models. A crucial difference is that requirements are usually traced as described above. In large projects, requirements traceability is such a large and expensive task that it requires large, expensive computer programs to manage it.

Code production and review

The code is written, then usually reviewed by a programmer (or group of programmers, usually working independently) who did not write it originally (another legal requirement). Special organizations also usually conduct code reviews with a checklist of possible mistakes. When a new type of mistake is found, it is added to the checklist and fixed throughout the code.

The code is also often examined by special programs that analyze correctness (static code analysis), such as the SPARK Examiner for SPARK (a subset of the Ada programming language) or lint for the C family of programming languages (primarily C). The compilers or special checking programs such as lint check whether types of data are compatible with the operations on them; such tools are also regularly used to enforce strict usage of valid programming-language subsets and programming styles. Another set of programs measures software metrics, to look for parts of the code that are likely to have mistakes. All the problems are fixed, or at least understood and double-checked.
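
The kind of defect such tools report can be shown with a short C fragment. The first function contains two problems that a lint-style checker, or a modern compiler with warnings enabled (for example gcc -Wall -Wextra -Wconversion), would typically flag; the second is a corrected version. The scaling rule itself is an invented example.

    #include <stdint.h>

    /* Two defects of the kind static analysis is meant to catch. */
    uint8_t scale_reading_bad(uint16_t raw)
    {
        uint16_t scaled = raw * 4u;  /* can silently wrap for raw > 16383 */
        return scaled;               /* implicit narrowing uint16_t -> uint8_t */
    }

    /* Corrected version: widen before scaling and saturate explicitly. */
    uint8_t scale_reading(uint16_t raw)
    {
        uint32_t scaled = (uint32_t)raw * 4u;
        return (uint8_t)(scaled > 255u ? 255u : scaled);
    }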

Some code, such as digital filters, graphical user interfaces and inertial navigation systems, is so well understood that software tools have been developed to write it. In these cases, specifications are developed and reliable software is produced automatically.
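
A digital filter is a typical case: given a specification of the filter, a code-generation tool can emit a routine like the hand-written sketch below. The first-order low-pass form and the smoothing constant are invented for illustration and only show the general shape of such generated code.

    /* First-order low-pass filter step; ALPHA is a hypothetical
     * smoothing constant taken from the (invented) specification. */
    #define ALPHA 0.125f

    float lowpass_step(float previous_output, float new_sample)
    {
        return previous_output + ALPHA * (new_sample - previous_output);
    }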

Unit testing

"Unit test" code is written to exercise every instruction of the code at least once to get 100% code coverage. A "coverage" tool is often used to verify that every instruction is executed, and then the test coverage is documented as well, for legal reasons.

This test is among the most powerful. It forces detailed review of the program logic, and detects most coding, compiler and some design errors. Some organizations write the unit tests before writing the code, using the software design as a module specification. The unit test code is executed, and all the problems are fixed.
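
As a sketch of the practice, the small routine below has three paths, and three test cases are enough to execute every statement; a coverage tool (for example gcc --coverage with gcov) can then confirm and document the 100% figure. The clamping function and its limits are invented for the example.

    #include <assert.h>

    /* Unit under test: clamps a commanded rate to +/- limit (hypothetical). */
    static float clamp_rate(float rate, float limit)
    {
        if (rate > limit)
            return limit;
        if (rate < -limit)
            return -limit;
        return rate;
    }

    /* Three cases reach every statement above; run under a coverage
     * tool to record the result for the certification evidence. */
    int main(void)
    {
        assert(clamp_rate( 9.0f, 5.0f) ==  5.0f);
        assert(clamp_rate(-9.0f, 5.0f) == -5.0f);
        assert(clamp_rate( 2.0f, 5.0f) ==  2.0f);
        return 0;
    }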

Integration testing

As pieces of code become available, they are added to a skeleton of code, and tested in place to make sure each interface works. Usually the built-in tests of the electronics are finished first, so that burn-in and radio-emissions tests of the electronics can begin.

Next, the most valuable features of the software are integrated. It is very convenient for the integrators to have a way to run small selected pieces of code, perhaps from a simple menu system.
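
Such a menu system can be as small as a table of named self-tests that the integrator selects from the command line. The module names and test stubs below are invented placeholders standing in for real module self-tests.

    #include <stdio.h>
    #include <string.h>

    /* Placeholder self-tests; real ones would exercise actual modules. */
    static int test_adc(void) { puts("air data computer: ok"); return 0; }
    static int test_gps(void) { puts("GPS receiver: ok");      return 0; }

    static const struct { const char *name; int (*run)(void); } tests[] = {
        { "adc", test_adc },
        { "gps", test_gps },
    };

    int main(int argc, char **argv)
    {
        /* Run the test named on the command line, e.g. "./integ adc". */
        for (size_t i = 0; i < sizeof tests / sizeof tests[0]; i++) {
            if (argc > 1 && strcmp(argv[1], tests[i].name) == 0)
                return tests[i].run();
        }
        puts("usage: integ <adc|gps>");
        return 1;
    }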

Some program managers try to arrange this integration process so that after some minimal level of function is achieved, the system becomes deliverable at any following date, with increasing numbers of features as time passes.

Black box and acceptance testing

Meanwhile, the test engineers usually begin assembling a test rig, and releasing preliminary tests for use by the software engineers. At some point, the tests cover all of the functions of the engineering specification. At this point, testing of the entire avionic unit begins. The object of the acceptance testing is to prove that the unit is safe and reliable in operation.

The first test of the software, and one of the most difficult to meet in a tight schedule, is a realistic test of the unit's radio emissions. This usually must be started early in the project to assure that there is time to make any necessary changes to the design of the electronics. The software is also subjected to a structural coverage analysis, where tests are run and code coverage is collected and analyzed.

Certification

Each step produces a deliverable, either a document, code, or a test report. When the software passes all of its tests (or enough to be sold safely), these are bound into a certification report, which can literally run to thousands of pages. The designated engineering representative, who has been striving for completion, then decides whether the result is acceptable. If it is, he signs it, and the avionic software is certified.

References

  1. ^ "Software models". www.cs.uct.ac.za. Retrieved 2024-01-28.
  2. ^ "What is the Waterfall Model? - Definition and Guide". Software Quality. Retrieved 2023-12-01.
  3. ^ "Software models". www.cs.uct.ac.za. Retrieved 2024-01-28.
  4. ^ Personal Information, Robert Yablonsky, Engineering manager, B.E. Aerospace, Irvine, CA, 1993