Category Archives: Certification

Integrating Formal Program Verification with Testing

This is the paper that Yannick Moy presented at the recent ERTS 2012 conference:
Verification activities mandated for critical software are essential to achieve the required level of confidence expected in life-critical or business-critical software. They are becoming increasingly costly as, over time, they require the development and maintenance of a large body of functional and robustness tests on larger and more complex applications. Formal program verification offers a way to reduce these costs while providing stronger guarantees than testing. Addressing verification activities with formal verification is supported by upcoming standards such as DO-178C for software development in avionics. In the Hi-Lite project, we pursue the integration of formal verification with testing for projects developed in C or Ada. In this paper, we discuss the conditions under which this integration is at least as strong as testing alone. We describe associated costs and benefits, using a simple banking database application as a case study. The full paper can be downloaded here.


Trusting the tools: An agile approach to tool qualification for DO-178C

Ben Brosgol and Greg Gicca look at agility, tool qualification and the new DO-178C standard in this article in Military Embedded Systems.

The DO-178C standard improves upon the previous DO-178B version by better supporting newer concepts in software development processes and methods. The new standard contains a core document and four supplements. The core document is an updated version of DO-178B with clarifications, improvements and many known issues addressed. The supplements clarify certification using object-oriented technology and related techniques, model-based development and verification, formal methods, and software tool qualification considerations. This paper will discuss the new DO-178C guidance for tool qualification and will present an approach to tool qualification based on iterative and agile development methods. Through this approach, tools may be qualified at a lower cost and, more importantly, re-qualified for new software development environments as tool sets (such as compilers) are updated during the main application’s certified development life span.

Full article here.


Nose Gear Challenge Problem joins Open-DO

The Nose Gear Challenge Problem has joined the Open-DO forge. It was initially developed to stimulate and unify discussions at the 2nd Workshop on Theorem Proving in Certification, held on December 5–6, 2011 in Cambridge, UK. The goal of this project is to consider how, and whether, theorem proving can have value in providing assurance in the context of the DO-178C formal methods supplement. Contributors to the project can use the Nose Gear problem as a worked example to explain their techniques.

Visit the Nose Gear Challenge Problem project page on the Open-DO forge.


Prove Your Plane Now!

DO-333 is now available! (OK, it’s not free: $215 for an electronic version, or $300 for a hard copy, phew!)

Hiding under this amazingly explicit name is the formal methods supplement for DO-178C. In other words, the document that allows you, as a developer of avionics software, to replace tests, reviews, and analyses with formal methods; or you, as a provider of formal methods techniques and tools, to find customers in the avionics market. Ah yes, because the new version of the certification standard for avionics software, DO-178C, has also been issued at the same time. So that starts today!

Here is what the abstract of this document says:

This supplement identifies the additions, modifications and substitutions to DO-178C and DO-278A objectives when formal methods are used as part of a software life cycle, and the additional guidance required. It discusses those aspects of airworthiness certification that pertain to the production of software, using formal methods for systems approved using DO-178C.

Formal methods are mathematically-based techniques for the specification, development and verification of software aspects of digital systems. The mathematical basis of formal methods consists of formal logic, discrete mathematics and computer-readable languages. The use of formal methods is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analyses can contribute to establishing the correctness and robustness of a design.


Prove & Fly!

On December 5-6, I participated in the 2nd workshop on Theorem Proving in Certification, in Cambridge (UK). This turned out to be even more interesting than last year’s program promised.

The goal of the workshop is to clarify under which conditions theorem proving can be applied in the context of DO-178C Formal Methods Supplement (hence Prove & Fly!):

  • extent of verifications performed
  • cost/benefit compared to testing
  • characteristics of a technique/tool to be called theorem proving
  • tool qualification needs

The workshop was organized around a common challenge (the nose gear challenge), which all participants were invited to address from different angles. The challenge was to compute the velocity of the nose gear of a plane while on the ground. This was made even more interesting by the need to comply with a small certification standard (the Tamarack standard). Both the challenge and the certification standard were created by Jeff Joyce from CSL.

Besides sharing the strategy we follow in the Hi-Lite project, and showing how it applies to the common challenge, I was very interested in the discussions we had over tool qualification and the alternate objectives to coverage in DO-178C when using formal verification instead of testing. An interesting shared opinion was that an automatic prover does not need to be qualified if it generates a trace that can be double-checked independently by a theorem prover (based on a small set of induction rules); that is the case for CVC3, for example. In the discussion on alternate objectives to coverage, Jeff Joyce clearly stated that the underlying goal is to detect incompleteness of specifications or, equivalently (from the opposite point of view), unintended functionalities. It appeared that we may be able to use either model checking, to perform a symbolic coverage analysis, or the information given by automatic provers stating which hypotheses (and thus which source code constructs) were used in proofs — but not, for example, concolic testing, which is based on the source code.

Many of these subjects will need to be further explored as DO-178C is adopted in new projects and tools based on formal methods are applied in this context. In particular, I look forward to the evolution of the Tamarack standard and to new solutions to the nose gear challenge. Hot news: Open-DO will host the workshop forge and wiki to support these evolutions. :)


Non-intrusive Code Coverage

In his recent Embedded Computing Design article, Ben Brosgol discusses “Non-intrusive code coverage for safety-critical software” and, more specifically, how a “tool that derives precise source-level coverage metrics from execution trace data for a non-instrumented program” can really help with DO-178B evidence requirements. The abstract is below, with a link to the full article…

Certification standards such as DO-178B for avionics require evidence that the system source code is completely exercised by tests derived from requirements. Traditional tools obtain the coverage data for a test run through code instrumentation, but this complicates analysis since the code being exercised is not the code that will finally execute. A solution to this problem is provided by a combination of two new tools, one for target emulation and one for coverage analysis. GNATemulator translates target object code into native host instructions, with the resulting code running on the host. This approach is efficient (target code is not being interpreted dynamically) and convenient (a significant amount of development can be conducted without an actual target board). Running on an instrumented version of GNATemulator, the GNATcoverage tool non-intrusively provides coverage data at both the source and object levels. At the object code level the tool performs instruction and branch coverage. At the source code level it provides statement coverage, decision coverage, and Modified Condition/Decision Coverage (MC/DC), performing the necessary analysis when MC/DC cannot be deduced from object branch coverage, and fully supports all levels of DO-178B safety certification.


EE Times Design article – The Big Thaw

Matteo Bordin, Jerome Lambourg, and Ben Brosgol discuss some of the principles behind the Open-DO initiative in an article entitled The “Big Thaw” – An Agile Process for Software Certification for EE Times Design.


“To achieve certification, safety-critical systems must demonstrate compliance with domain-specific standards such as DO-178 for commercial avionics. Developing a certified system consists of various interrelated activities that produce outputs (collections of artifacts) as evidence of successful completion. For example, one of the DO-178 verification activities is a traceability analysis; its output is a report showing that each software requirement is implemented in the source code. Conducting the certification-required activities and producing the artifacts demand a major effort, much more than for conventional Quality Assurance on non safety-critical systems.”

Read the full article.


Best Paper Award for Results of Verification Competition

Last year, the VSTTE 2010 conference organized a competition of software verification systems (language + tools) to improve understanding of each system’s pros and cons. Rod Chapman from Altran Praxis participated with the SPARK language and toolset, and solved the first problem, going even beyond what the subject asked. Like other teams, we have since provided solutions in SPARK to all five problems, which formed the basis for a report that you can find on this page.

This report was deemed important enough by the organizers of the Formal Methods 2011 conference that they granted it the best paper award.

Furthermore, the competition page contains an archive with all the solutions, in many different languages and systems… you can even DIY!


Language Vulnerabilities for Dummies

In case you do not know the “for Dummies” series of books, its principle is to explore a subject from the ground up, with rich explanations and examples for non-experts. That is, in my view, a valid alternative title for the recently published “Guidance to Avoiding Vulnerabilities in Programming Languages through Language Selection and Use”: rich (around 70 vulnerabilities explored), detailed (130 pages!), and accessible (it contains the best discussion I have read of unspecified/implementation-defined/undefined behavior).

The ISO/IEC committee has produced here a language-neutral evaluation of the ways in which a language may “get in the way”, and of how to avoid traps and pitfalls either upfront (in language design) or in the field (through coding standards and the use of static analysis tools). This is a must-read for anyone whose task is to establish coding guidelines, recommend the use of a static analysis tool, or choose a programming language for a project.

While Ada and SPARK naturally stand out as the languages with the fewest vulnerabilities, the document also shows the many uses of static analysis tools, from coding standard checking (like GNATcheck) to static analysis (like CodePeer) and formal proof (like the SPARK toolset). The recommendations also match well the restrictions of the Alfa subset of Ada that we are defining in the Hi-Lite project. (See for example the discussion of aliasing in section 6.39, “Passing Parameters and Return Values”.)


NIST paper highlights language vulnerabilities

A recently published paper by the National Institute of Standards and Technology (NIST) examines software assurance tools as a fundamental resource for improving quality in today’s software applications. It looks at the behavior of one class of software assurance tool: the source code security analyzer. Because many software security weaknesses are introduced at the implementation phase, using a source code security analyzer should help reduce the number of security vulnerabilities in software.

The report – Source Code Security Analysis Tool Functional Specification Version 1.1 (NIST Special Publication 500-268 v1.1) – defines a minimum capability to help software professionals understand how a tool can help meet their software security assurance needs. The example languages studied are C, C++, Java and SPARK. The NIST report identifies the languages’ vulnerabilities. As you would expect, the SPARK language comes out well.
