A Report on the

Workshop on Applied Formal Methods

Hyderabad, India, December 14 to 17, 1996

Sanjiva Prasad

Dept of Computer Sc and Engg, IIT Delhi
E-mail: sanjiva@cse.iitd.ernet.in

Formal Methods is a loose term that encompasses a variety of methods, both theoretical and practical, for developing reliable software. John Rushby likened the mathematical basis of the area (using models for analysis in building reliable software) to the analytical laws and techniques, such as the Navier-Stokes equations, that engineers apply to practical problems such as laminar flow.

Applied Formal Methods comprise the tools and techniques for developing reliable software that have a sound formal basis, and can be used in a variety of application areas. Thus they constitute an important effort to bridge the yawning gap between what theoreticians prescribe and what practitioners use or find usable. In the context of the workshop, the speakers tended to identify formal methods with verification and verification technology, whereas some of the audience and participants from Indian industry and R&D organisations understood the term in a looser sense, encompassing various testing methodologies, and systematic use of software engineering tools and protocols.

The participant speakers from outside India included Deepak Kapur, Gerard Huet, J. Strother Moore, John Rushby, Rajeev Alur, Zhou Chao Chen, Mandayam Srivas, and Natarajan Shankar.

The idea of having a workshop on Applied Formal Methods prior to FST&TCS-16 was a wonderful one, and I must commend the organisers, moving spirits (particularly Shankar) and the invited participants for their efforts in making possible an excellent workshop. The organisation and facilities were excellent, and the workshop certainly enriched those who attended. The recent Turing award winner Amir Pnueli regretfully couldn't attend, and that was a small disappointment, especially for students.

John Rushby of SRI got the workshop off to a flying start, with a talk on Formal Methods and (Safety) Critical Systems. He gave the audience an excellent perspective on the entire activity that goes on in the name of Formal Methods. His talk was rich with anecdotal information, particularly from the aerospace and space industries---arguably the applications that most require reliable software. Rushby outlined the need for Formal Methods in these and other important application areas, and traced the various phases that so-called formal methods have gone through---from advocacy, especially by Dijkstra-Gries-Floyd-Hoare, of developing programs and proofs of correctness in concert, to becoming an academic (almost ivory-tower) activity, to finding active support from hardware manufacturers.

He mentioned the main classes of verification tools: theorem proving (both automatic and human-assisted) and model checking. He also described the technical developments that made these advances in verification technology possible, in particular the recently developed fast model-checking methods that can deal with zillions of states. In the later discussion on safety-critical systems, Rushby argued with great facility the virtues and shortcomings of both theorem proving and model checking. He advocated an integration of the two approaches, exemplified by the PVS system, with model checkers used as decision procedures within a theorem-proving environment.

Rushby's point was that tool support is important if formal methods are to make an impact, but he stressed that formal verification is a skilled and costly activity. One should therefore concentrate one's effort on those applications where the cost and effort are justified, and where the delays involved in formal verification are acceptable. His suggestion was to concentrate on verifying specifications rather than code, and on safety-critical systems, where the effort, time and money spent are absolutely necessary.

Natarajan Shankar went deeper into the comparison between theorem proving and model checking, pointing out that model checking is eminently suited to analysing the control aspects of software models of systems, whereas theorem proving suits the data-oriented aspects. He contended that mechanisation is an absolute necessity for formal methods, and that so-called descriptive formal methods are just not formal and often are not even methods. He cited examples where even very careful human proofs had errors, and where lemmas with incorrect proofs, or even non-theorems, were claimed in respected and refereed journal publications.

He demonstrated the use of PVS, giving examples (such as verifying protocols for mutual exclusion) where theorem proving is lousy but model-checking flies, as well as examples where model-checking is not appropriate and human-assisted theorem proving is necessary. He spoke highly of the contribution of Ken McMillan to advances in model-checking technology. In the discussion he moderated, and also later, one question that arose was ``how trustworthy are the tools if they haven't themselves been subject to verification?''
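To give a flavour of why model checking ``flies'' on such protocols: mutual exclusion for a two-process protocol like Peterson's reduces to reachability over a small finite state space, which a few lines of exhaustive search can settle. The sketch below is my own toy illustration, not anything demonstrated at the workshop:

```python
from collections import deque

def successors(state):
    # state = (pc0, pc1, flag0, flag1, turn); pc meanings:
    # 0: set own flag, 1: give way, 2: busy-wait, 3: in critical section
    pcs, flags, turn = [state[0], state[1]], [state[2], state[3]], state[4]
    out = []
    for i in (0, 1):                     # try one step of each process
        j = 1 - i
        pc, flag, t = list(pcs), list(flags), turn
        if pcs[i] == 0:                  # announce interest
            flag[i] = True; pc[i] = 1
        elif pcs[i] == 1:                # politely yield the turn
            t = j; pc[i] = 2
        elif pcs[i] == 2:                # enter only if safe (Peterson's test)
            if not flags[j] or turn == i:
                pc[i] = 3
            else:
                continue                 # blocked: no successor from here
        else:                            # pc == 3: leave critical section
            flag[i] = False; pc[i] = 0
        out.append((pc[0], pc[1], flag[0], flag[1], t))
    return out

def check_mutex(init=(0, 0, False, False, 0)):
    # breadth-first search of the reachable state space,
    # looking for a state with both processes at pc == 3
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if s[0] == 3 and s[1] == 3:
            return False                 # mutual exclusion violated
        for t in successors(s):
            if t not in seen:
                seen.add(t); frontier.append(t)
    return True                          # no bad state is reachable
```

check_mutex() exhausts every reachable state and reports that no state has both processes in their critical sections; a theorem-proving approach to the same question would instead require inventing an inductive invariant.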

Deepak Kapur spoke on Using Rewrite Rule Laboratory for Design Analysis. RRL is a simple but surprisingly powerful automatic verification tool based on rewriting techniques. In a later two-fer-the-price-of-one talk, he demo'ed RRL, and discussed verification of adder circuits, progressing on to multiplier circuits.
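For readers unfamiliar with the rewriting approach underlying tools like RRL, the idea is to orient equations into directed rules and decide equalities by reducing terms to normal form. The following is a toy sketch of my own (not RRL itself), with Peano numerals and addition encoded as nested tuples:

```python
# Terms: ('0',), ('s', t), ('add', t1, t2); pattern variables are strings.
RULES = [
    (('add', ('0',), 'x'), 'x'),                           # 0 + x -> x
    (('add', ('s', 'x'), 'y'), ('s', ('add', 'x', 'y'))),  # s(x) + y -> s(x + y)
]

def match(pat, term, env):
    # first-order matching; returns an extended binding or None
    if isinstance(pat, str):
        if pat in env:
            return env if env[pat] == term else None
        env = dict(env); env[pat] = term
        return env
    if not isinstance(term, tuple) or pat[0] != term[0] or len(pat) != len(term):
        return None
    for p, t in zip(pat[1:], term[1:]):
        env = match(p, t, env)
        if env is None:
            return None
    return env

def subst(t, env):
    # replace pattern variables in a rule's right-hand side
    if isinstance(t, str):
        return env[t]
    return (t[0],) + tuple(subst(a, env) for a in t[1:])

def step(term):
    # one leftmost-innermost rewrite step; returns (term, changed?)
    if isinstance(term, tuple):
        for i, a in enumerate(term[1:], 1):
            new, ch = step(a)
            if ch:
                return term[:i] + (new,) + term[i + 1:], True
    for pat, rhs in RULES:
        env = match(pat, term, {})
        if env is not None:
            return subst(rhs, env), True
    return term, False

def rewrite(term):
    # reduce to normal form by repeated single steps
    term, ch = step(term)
    while ch:
        term, ch = step(term)
    return term
```

For example, rewrite(('add', ('s', ('0',)), ('s', ('0',)))) normalises the numeral for 1+1 to ('s', ('s', ('0',))), the numeral for 2; proving an equation amounts to checking that both sides reach the same normal form.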

Zhou Chao Chen gave an Introduction to Duration Calculus, with some simple working examples. The Duration Calculus seems to be easily accessible and should find application in specifying real-time systems. In the context of this workshop, I wish Prof Zhou had had some more time in which to describe the proof support for DC in PVS, which he mentioned.
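To give a flavour of the calculus: the classic gas-burner requirement (due to Zhou, Hoare and Ravn; I do not recall whether it figured in this particular talk) states that in any observation interval of at least 60 seconds, gas may leak for at most one twentieth of the time:
$$
\ell \geq 60 ~\Rightarrow~ 20 \int \mathit{Leak} \leq \ell
$$
Here $\ell$ denotes the length of the interval, and $\int \mathit{Leak}$ the accumulated duration for which the state predicate $\mathit{Leak}$ holds within it.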

Rajeev Alur spoke about ``Model checking for timed and hybrid systems''. His talk concentrated on timed automata as a formalism for specifying real-time systems, and summarised some of the automata-theoretic results on timed automata. He also listed some of the major and successful model-checkers available.

A later discussion led by Alur on compositionality, the ability to decompose a problem based on its structure, generated much debate. I don't think there is sufficient agreement on what the appropriate composition operations are, nor about the properties we wish to describe, in order to authoritatively prescribe compositional frameworks for concurrent systems. In fact, many property-description frameworks are called logics, but sadly lack even a rudimentary proof theory. Here's what I thought was wrong with the ``Assume-Guarantee'' rules presented by Alur: The general rule he presented allowed formulae (properties) on the left of the turnstile where models (programs) are expected, though there is a way to explain this usage as shorthand for an infinitary rule. However, the particular specialised rule he presented, involving an ``implies'' connective, neither made sense nor was sound, irrespective of whether ``implies'' was classical or intuitionistic.

Here were the two versions of the rules:
$$
\frac{\textstyle{P,\phi \vdash \psi ~~~ Q,\psi \vdash \phi}} {\textstyle{P \| Q \vdash \phi \land \psi}}
$$
$$
\frac{\textstyle{P \vdash \phi \rightarrow \psi, ~~ Q \vdash \psi \rightarrow \phi}} {\textstyle{P \| Q \vdash \phi \land \psi}}
$$
(Consider $\phi$ and $\psi$ both false in the second rule.)
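To spell that parenthetical out: with $\phi = \psi = \mathit{false}$, the implication $\mathit{false} \rightarrow \mathit{false}$ is valid both classically and intuitionistically, so the premises of the second rule hold for arbitrary $P$ and $Q$; the rule then lets us conclude
$$
P \| Q \vdash \mathit{false} \land \mathit{false}
$$
that is, that any parallel composition whatsoever satisfies an unsatisfiable property.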

J S Moore, who arrived in Hyderabad a day or so late after seeing more of Mumbai's airport than he intended, gave a talk and a talk-plus-demo. The first was on the ``Features of ACL2 Supporting Industrial Strength Applications''. ACL2 stands for A Computational Logic for Applicative Common Lisp, and is realised as a theorem prover that is the successor of the legendary Boyer-Moore prover Nqthm. His second talk (-and-demo) showed us features of ACL2, and described the verification of the floating point division algorithm of the AMD 5K86 processor. Both talks were excellent, being highly informative and interesting. Prof. Moore identified the development of reusable theory libraries as a major issue in theorem-proving activity.

Chris George of UNU, Macau gave a talk on Transforming applicative axioms into imperative and concurrent programs, and later demo'ed his system. While the problem addressed by George was certainly interesting, I thought the ``ideal path'' he spoke of---turning an applicative specification into an imperative sequential program (by what sounded like a CPS transformation) and then making it concurrent---to be a somewhat strange strategy, since the imperative transformation is likely to eliminate almost all potential (and perhaps all non-trivial) concurrency. I did, however, like the slick system demo'ed by George---which is built using GrammaTech's Synthesizer Generator (Teitelbaum and Reps's attribute-grammar-based tool for generating front-ends).

Mandayam Srivas spoke on Mechanical Verification of Microprocessors, first surveying various approaches to hardware verification -- BDDs for combinational circuits, model-checking for reachability analysis, and theorem-proving using systems such as Nqthm, ACL2, PVS, HOL, RRL. He identified the common features of such efforts, where hardware is modelled at the register-transfer (RTL) level, with a finite state controller and a data path. What is verified is a correspondence between the macro-instruction-level operation and its implementation at the micro-instruction level, the main difficulty lying in characterising the abstraction functions, the visible (observable) states and the distance between visible states. In another two-fer, Srivas described the use of PVS in the verification of the AAMP5 microprocessor. The problem, as described, seemed somewhat simpler than it actually is, and I wonder if the audience were able to appreciate how much non-trivial effort is really involved in such a verification.
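The shape of such a correspondence proof can be conveyed by a toy example of my own (nothing like the AAMP5, of course): a macro-level MUL instruction implemented by a micro-level loop of additions, with the abstraction function simply projecting out the visible result. On a small finite range the correspondence can even be checked exhaustively, where in a real proof one would argue by induction over the micro-steps:

```python
def macro_step(acc, k):
    # macro (instruction-set level) semantics: MUL in one step
    return acc * k

def micro_run(acc, k):
    # micro-level implementation: MUL as repeated ADD, with a hidden
    # counter register; each loop iteration models one micro-step
    result, count = 0, k
    while count > 0:
        result += acc
        count -= 1
    return result        # abstraction: project the visible accumulator

def check_correspondence(bound=8):
    # exhaustively check that the abstraction of the micro-level run
    # matches the macro-level step on a finite range of operands
    return all(macro_step(a, k) == micro_run(a, k)
               for a in range(bound) for k in range(bound))
```

The hard part in a real verification, as Srivas stressed, is precisely what this toy hides: finding the right abstraction function and the right notion of visible state.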

Gerard Huet spoke on ``Axiomatisations, Proofs and Formal Specifications of Algorithms: Commented Case Studies in the Coq Proof Assistant''. Coq is a theorem proving environment based on a flavour of Type Theory called the Calculus of Inductive Constructions. Huet presented Coq as a first step in mechanising mathematics. Later in a demo of the system, he illustrated this point with some examples, and pointed to applications including program certification and extracting programs from proofs.

There were three talks from TIFR, by R K Shyamasundar, Basant Rajan and Paritosh Pandya. The first two spoke about real-time extensions to Esterel, in particular Multi-Clock Esterel. I would have liked to see how both extensions are actually used in applications, as well as a description of what tool support was available. To be fair, though, the speakers were constrained by time. Pandya gave a very nice and entertaining talk on an extension to the Duration Calculus with modal ($\mu$) operators.

The Industrial Experience session was somewhat mixed, and as I mentioned earlier, the speakers from industry approached the workshop with perhaps a very different understanding of formal methods than did the academics. I must apologize for not having noted the names of all the speakers. Mr KRM Rao of CMC gave a longish presentation that reiterated many software engineering home truths. The participant from Philips described the use of software engineering techniques in their organisation. The gentleman from BARC emphasised their need for verified software, and expressed his interest in seeing the application of the tools and techniques presented during the workshop for their critical systems. He gave a very open and candid presentation of the concerns of BARC, but stressed his organisation's commitment towards safe and reliable systems. Mr Chandrashekhar of the LCA project later described the use of software engineering techniques and testing methodologies in his project. The interest evinced by industry, at least their research labs (CSIR, DRDO, and major software houses), was on the whole very positive, and we can hope to see greater application of formal methods in their R&D cycle.

The workshop was an eminent success in educating us about various existing tools and techniques, their strengths and shortcomings. We also got a good sense of how to get started with using them, and of what applications they had successfully targeted. In the closing session, Shankar listed the sites from where we could download the systems, documentation about the systems and their use, as well as several good textbooks. What I felt the workshop didn't achieve, which I wish it did, was to help us understand the architecture of these systems, and how to go about building theorem provers. Perhaps that needs a different and more focused workshop on building theorem provers.

N. Shankar has set up a home page containing details about the Workshop, including PostScript versions of some of the speakers' transparencies.

