
Formal Specification and Composition

Correctness in Compositional Designs

The goal of a compositional design is to be able to build a system from components. To some extent, all designs are compositional: systems are built from machine statements, which can be seen as tiny components. Of course, for composition to be useful, we require a little more from a compositional design. A component must represent a part of the design, i.e., it must result from some intellectual effort. Only then is composition useful, by allowing designers to share this intellectual effort.

Compositional designs involve different kinds of problems. Here, we are specifically interested in the correctness issue: How can we claim the correctness of a system from the specifications of its components, or, in other words, what do we need to know about the components to be able to predict the behavior of a system? We call a proof of composition a correctness proof in which system correctness is deduced from component specifications. Again, any correctness proof is, in some sense, a proof of composition: we always use some knowledge of the components to prove a system. As before, we add a further constraint: we want components to contain a part of the correctness proof, as they should contain a part of the design. In other words, we want to build proofs of composition from statements already proved at the component level, without having to redo any of those proofs. This picture describes specifications and proofs in a compositional design:

Intuitively, the amount of proof effort in proofs of composition depends on the way components are specified. If a component specification is just a formal description of the implementation (i.e., the program text itself, with a well-defined semantics), like a UNITY program or a CSP process, no part of the proof effort is achieved at the component level, and designers have to deal with an operational, explicit description of the component behavior when they build the proof of composition. Component specifications must be abstractions of component behavior. They must describe useful facts about a component, facts that have to be proved from the component text. Then, these facts are used in a proof of composition, without the obligation to prove them again.

Actually, sharing the proof effort does not require a compositional design. It is perfectly possible to split the effort of a monolithic proof. In that case, one starts from a specification and an operational description of the complete system and deduces a global proof obligation. This proof obligation is itself decomposed, and the correctness proof is built from lemmas and intermediate results. This approach has been investigated, for instance, by Leslie Lamport with TLA+ proofs.

However, the approach we consider here, which requires specifying and proving components as a first step and then building a proof of composition, has (potentially) another advantage. One goal of compositional designs, yet unachieved, is to reuse components, i.e., to build systems from already existing components, possibly components already used to build other systems. The idea is that one does not start by chopping trees when building a house: requirements are specified and components are bought ready-made (or, at least, are manufactured independently). If such a reuse of components becomes possible, then the chosen approach allows designers to reuse parts of a correctness proof when they reuse components, namely, the proofs of component correctness ("T proofs" in the picture). The larger that part is (compared to the proof of composition itself), the better, which is another reason why component specifications should be abstract and explicitly proved from program texts. (Yet another reason is that abstract specifications allow designers to express just what they expect from components and to avoid over-specifying their requirements. This is necessary if they want to increase their chances of finding an existing component that solves their problem.)

The difficulty with such an approach is that specifications may not compose very well. It is possible that no (non-trivial) system property can be logically deduced from component properties. For the approach to work, component specifications must allow designers to deduce system properties. Some component specifications, because they do not contain enough information, do not allow that deduction. The problem is that, because we want them to be abstract, component specifications are likely not to contain enough information. There is clearly a tradeoff between keeping specifications abstract and making them useful in a proof of composition: too much information, too many details revealed in the specification, and abstraction is lost; not enough, and composition is lost.

The Case of invariant and always

To illustrate the balance between abstraction and the ability to compose, consider the case of systems (and components) defined by their infinite computations (reactive systems), composed through parallel composition. For these systems, two closely related kinds of properties are often used:

- invariant P: the state predicate P holds in the initial states and is preserved by every statement of the program;
- always P: the state predicate P holds in every state of every computation, i.e., in every reachable state.

In the world of temporal logic, always properties are often called "invariants", while invariant properties are called "inductive invariants." On the other hand, in the world of sequential program verification, invariant properties are called "invariants" and always properties often do not have a name. (However, they are called "assertions" in the B method.) This has resulted in some confusion (see the substitution axiom page).

Naturally, always and invariant are related: any predicate that is invariant is also always true (by induction). However, a predicate can be always true without being an invariant. This means that always properties are, in general, weaker than invariant properties. They are also more abstract: they assert that a predicate is true in every state of every computation, but they do not say why. Invariant properties say why. In other words, invariant properties are a useful tool to prove always properties, but they are not abstract enough to be used to specify components.
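In generic notation (the symbols used here, such as init for the initial condition and the Hoare triple {P} s {P} for preservation by a statement s, are notational choices made for this sketch, not the page's own), the two notions and the implication between them can be written as:

\mathit{invariant}\ P \;\equiv\; (\mathit{init} \Rightarrow P) \,\wedge\, (\forall s : \{P\}\, s\, \{P\})
\mathit{always}\ P \;\equiv\; (\forall \sigma \in \mathit{Reachable} : P(\sigma))
\mathit{invariant}\ P \;\Rightarrow\; \mathit{always}\ P \quad \text{(by induction on the number of steps)}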

With respect to composition, invariant and always properties also differ. Because program statements are preserved through parallel composition, a state predicate is invariant in a system as soon as it is invariant in all the components of the system (in our vocabulary, invariant properties are said to be universal). Because a concurrent system may have more reachable states than its components taken separately, this is not true for always properties: a predicate may be always true in every component of a system and still be falsified by the concurrent system as a whole.
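A small sketch can make this concrete. The following Python fragment is not from the original page: the components A and B, the shared variables x and y, and the predicate P are illustrative choices, and components are modeled, in a spirit loosely reminiscent of UNITY, as sets of guarded actions over a shared state. The predicate x == 0 is always true in each component taken alone, yet it is falsified by their parallel composition; it also fails the inductive-invariant check in A, which is why its composability cannot be taken for granted.

# A state is a pair (x, y); both variables start at 0.
INIT = (0, 0)

# Each action maps a state to the next state, or returns None when not enabled.
def a_set_x(state):
    x, y = state
    return (1, y) if y == 1 else None   # A: if y == 1 then x := 1

def b_set_y(state):
    x, y = state
    return (x, 1)                       # B: y := 1 (always enabled)

COMPONENT_A = [a_set_x]
COMPONENT_B = [b_set_y]

def reachable(actions, init=INIT):
    """All states reachable by interleaving the given actions."""
    seen, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        for act in actions:
            t = act(s)
            if t is not None and t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def always(pred, actions):
    """'Always' property: pred holds in every reachable state."""
    return all(pred(s) for s in reachable(actions))

def inductive(pred, actions, states):
    """Inductive-invariant check over a finite state space: pred holds
    initially and is preserved by every enabled action from every pred-state."""
    if not pred(INIT):
        return False
    return all(t is None or pred(t)
               for s in states if pred(s)
               for t in (act(s) for act in actions))

def P(state):
    """P: x == 0 (holds in the initial state)."""
    return state[0] == 0

STATES = [(x, y) for x in (0, 1) for y in (0, 1)]

print(always(P, COMPONENT_A))                  # True: A alone never sets x
print(always(P, COMPONENT_B))                  # True: B alone never touches x
print(always(P, COMPONENT_A + COMPONENT_B))    # False: B enables A's action
print(inductive(P, COMPONENT_A, STATES))       # False: from (0, 1), A falsifies P

The last line is the key: because P is not an inductive invariant of A, the universality of invariant properties does not apply, and the always property obtained for each component in isolation gives no guarantee about the composed system.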

To summarize, invariant properties can be composed, but they are too strong (not abstract enough) to be used in component specifications; always properties are exactly the abstraction that is required, but they cannot be composed. (A worthwhile question is to determine which properties are weaker than invariant properties, stronger than always properties, and can still be composed. This question is not as simple as it may seem. Readers are invited to try to solve a challenge question related to this issue.)

Our Research

In our research, Mani Chandy and I address the following problem: can we find component properties strong enough to be composed, but weak enough to preserve abstraction? More specifically, we focus on two forms of composition: existential (an existential property holds in a system as soon as it holds in at least one component) and universal (a universal property holds in a system as soon as it holds in all components). We consider these two forms of composition in a general context. Components are abstract entities, not necessarily programs. They need not have such attributes as "states" or "computations." They are composed with a composition law that is not assumed to be parallel composition (in particular, it is not assumed to be symmetric or idempotent).
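In generic notation (the symbol \circ for the arbitrary composition law, X for a property, and C_1, ..., C_n for components are choices made for this sketch, not the page's own), the two forms of composition can be written as:

X \text{ existential:} \quad (\exists i : X(C_i)) \;\Rightarrow\; X(C_1 \circ \cdots \circ C_n)
X \text{ universal:} \quad (\forall i : X(C_i)) \;\Rightarrow\; X(C_1 \circ \cdots \circ C_n)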

Interesting results (and even more interesting questions) can be explored under these hypotheses. This framework can be applied to reactive systems and temporal logic; in particular, the case of invariants, discussed above, can be nicely expressed. These ideas can also be generalized when several laws of composition are used together.


Michel Charpentier
Last modified: Thu Jan 24 09:46:45 EST 2013