Review: Compiling concurrency correctly (Verifying software transactional memory)

A new verification thesis in Agda has been published: Compiling concurrency correctly (Verifying software transactional memory).

Its author is Liyang Hu, supervised by Graham Hutton.

The thesis was presented last June at the University of Nottingham.

The abstract of the thesis is:

Concurrent programming is notoriously difficult, but with multi-core processors becoming the norm, it is now a reality that every programmer must face. Concurrency has traditionally been managed using low-level mutual exclusion locks, which are error-prone and do not naturally support the compositional style of programming that is becoming indispensable for today’s large-scale software projects.

A novel, high-level approach that has emerged in recent years is that of software transactional memory (STM), which avoids the need for explicit locking, instead presenting the programmer with a declarative approach to concurrency. However, its implementation is much more complex and subtle, and ensuring its correctness places significant demands on the compiler writer.

This thesis considers the problem of formally verifying an implementation of STM. Utilising a minimal language incorporating only the features that we are interested in studying, we first explore various STM design choices, along with the issue of compiler correctness via the use of automated testing tools. Then we outline a new approach to concurrent compiler correctness using the notion of bisimulation, implemented using the Agda theorem prover. Finally, we show how bisimulation can be used to establish the correctness of a low-level implementation of software transactional memory.
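To illustrate the declarative style the abstract refers to, here is a minimal sketch using GHC's Control.Concurrent.STM library (the thesis verifies a minimal language of its own, not GHC's implementation; the names `transfer` and `demo` are illustrative):

```haskell
import Control.Concurrent.STM

-- A transactional transfer between two accounts. The 'check' call
-- retries the whole transaction until the balance suffices, and the
-- two writes commit atomically, with no explicit locks anywhere.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

-- Run one transfer and return the final balances.
demo :: IO (Int, Int)
demo = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  (,) <$> readTVarIO a <*> readTVarIO b
```

Because `transfer` is an ordinary `STM` action, two transfers can be sequenced inside a single `atomically` and still commit as one transaction — the compositionality that lock-based code lacks, and whose low-level implementation the thesis proves correct via bisimulation.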

Review: Formal specification and verification of well-formedness in business process product lines

A new piece of formalized reasoning in PVS has been published: Formal specification and verification of well-formedness in business process product lines.

Its authors are Giselle Machado, Vander Alves and Rohit Gheyi.

The paper will be presented on September 23 at LA-WASP 2012 (6th Latin-American Workshop on Aspect-Oriented Software Development).

Its abstract is:

Quality assurance is a key challenge in product lines (PLs). Given the exponential growth of product variants as a function of the number of features, ensuring that all products meet given properties is a non-trivial issue. In particular, this holds for well-formedness in PLs. Although this has been explored at some levels of abstraction (e.g., implementation), this remains unexplored for business process PLs developed using a compositional approach. Accordingly, in this paper we report ongoing work on formalizing Business Process Product Lines, including the definition of well-formedness rules, formal specification of transformations, and the proof that transformations preserve well-formedness without having to instantiate all variants. Formalization and proofs are provided in the Prototype Verification System, which has a formal specification language and a proof assistant.

The PVS formalization code can be found here.

Review: Proving the impossibility of trisecting an angle and doubling the cube

A new piece of formalized reasoning in Isabelle on geometry has been published in The Archive of Formal Proofs: Proving the impossibility of trisecting an angle and doubling the cube.

Its authors are Ralph Romanos and Lawrence Paulson (University of Cambridge).

Its abstract is:

Squaring the circle, doubling the cube and trisecting an angle, using a compass and straightedge alone, are classic unsolved problems first posed by the ancient Greeks. All three problems were proved to be impossible in the 19th century. The following document presents the proof of the impossibility of solving the latter two problems using Isabelle/HOL, following a proof by Carrega. The proof uses elementary methods: no Galois theory or field extensions. The set of points constructible using a compass and straightedge is defined inductively. Radical expressions, which involve only square roots and arithmetic of rational numbers, are defined, and we find that all constructible points have radical coordinates. Finally, doubling the cube and trisecting certain angles require solving certain cubic equations that can be proved to have no rational roots. The Isabelle proofs require a great many detailed calculations.
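The cubic behind angle trisection can be made explicit. This is the standard sketch of the argument that the formalization makes precise: instantiating the triple-angle identity at $\theta = 60^\circ$ (a constructible angle whose third, $20^\circ$, would have to be constructible if trisection were always possible) gives

```latex
\cos\theta = 4\cos^{3}\tfrac{\theta}{3} - 3\cos\tfrac{\theta}{3}
\quad\Longrightarrow\quad
\tfrac{1}{2} = 4x^{3} - 3x
\quad\text{where } x = \cos 20^{\circ},
```

that is, $8x^{3} - 6x - 1 = 0$. By the rational root theorem the only candidate rational roots are $\pm 1, \pm\tfrac{1}{2}, \pm\tfrac{1}{4}, \pm\tfrac{1}{8}$, and substituting each one shows none is a root, so $\cos 20^{\circ}$ is not rational. Doubling the cube reduces in the same way to $x^{3} - 2 = 0$, which likewise has no rational root.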

Review: Mechanization of an algorithm for deciding KAT terms equivalence

A new piece of work on formal verification in Coq has been published: Mechanization of an algorithm for deciding KAT terms equivalence.

Its authors are Nelma Moreira, David Pereira and Simão Melo de Sousa (Universidade do Porto, Portugal).

Its abstract is:

This work presents a mechanically verified implementation of an algorithm for deciding the (in-)equivalence of Kleene algebra with tests (KAT) terms. This mechanization was carried out in the Coq proof assistant. The algorithm decides KAT terms equivalence through an iterated process of testing the equivalence of their partial derivatives. It is a purely syntactical decision procedure, and so it does not construct the underlying automata. The motivation for this work comes from the possibility of using KAT encoding of propositional Hoare logic for reasoning about the partial correctness of imperative programs.
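To give the flavour of the method, here is a minimal Haskell sketch of the same derivative-based technique, restricted to plain regular expressions (Kleene algebra without tests) over a fixed alphabet, using Antimirov's partial derivatives; the names (`Re`, `pd`, `equiv`) are illustrative, and the actual Coq development handles full KAT terms:

```haskell
import qualified Data.Set as S

-- Regular expressions over Char (Kleene algebra without tests).
data Re = Empty | Eps | Chr Char | Alt Re Re | Seq Re Re | Star Re
  deriving (Eq, Ord, Show)

-- Does the term accept the empty word?
nullable :: Re -> Bool
nullable Empty     = False
nullable Eps       = True
nullable (Chr _)   = False
nullable (Alt r s) = nullable r || nullable s
nullable (Seq r s) = nullable r && nullable s
nullable (Star _)  = True

-- Antimirov's partial derivatives: pd c r is a finite set of terms
-- whose languages jointly match { w | r matches c:w }.
pd :: Char -> Re -> S.Set Re
pd _ Empty     = S.empty
pd _ Eps       = S.empty
pd c (Chr d)   = if c == d then S.singleton Eps else S.empty
pd c (Alt r s) = pd c r `S.union` pd c s
pd c (Seq r s) =
  let left = S.map (`Seq` s) (pd c r)
  in if nullable r then left `S.union` pd c s else left
pd c (Star r)  = S.map (`Seq` Star r) (pd c r)

-- Decide equivalence by iterating derivatives: explore pairs of
-- derivative sets, failing if one side accepts the empty word and the
-- other does not. The set of partial derivatives is finite, so the
-- iteration terminates; no automaton is built explicitly.
equiv :: [Char] -> Re -> Re -> Bool
equiv sigma r0 s0 = go S.empty [(S.singleton r0, S.singleton s0)]
  where
    nullS   = any nullable . S.toList
    stepS c = S.unions . map (pd c) . S.toList
    go _ [] = True
    go seen ((u, v) : rest)
      | (u, v) `S.member` seen = go seen rest
      | nullS u /= nullS v     = False
      | otherwise              =
          go (S.insert (u, v) seen)
             ([(stepS c u, stepS c v) | c <- sigma] ++ rest)
```

For instance, `equiv "ab" (Star (Chr 'a')) (Alt Eps (Seq (Chr 'a') (Star (Chr 'a'))))` decides the unfolding law a* = ε + a·a* purely by computing derivatives.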

The Coq formalization code can be found here.

Meta-theory à la carte

A new article on automated reasoning in Coq about programming language metatheory has been published: Meta-theory à la carte.

Its authors are Benjamin Delaware, Bruno C. d. S. Oliveira and Tom Schrijvers.

Its abstract is:

Formalizing meta-theory, or proofs about programming languages, in a proof assistant has many well-known benefits. However, the considerable effort involved in mechanizing proofs has prevented it from becoming standard practice. This cost can be amortized by reusing as much of an existing formalization as possible when building a new language or extending an existing one. Unfortunately, reuse of components is typically ad hoc, with the language designer cutting and pasting existing definitions and proofs, and expending considerable effort to patch up the results.

This paper presents a more structured approach to the reuse of formalizations of programming language semantics through the composition of modular definitions and proofs. The key contribution is the development of an approach to induction for extensible Church encodings which uses a novel reinterpretation of the universal property of folds. These encodings provide the foundation for a framework, formalized in Coq, which uses type classes to automate the composition of proofs from modular components.

Several interesting language features, including binders and general recursion, illustrate the capabilities of our framework. We reuse these features to build fully mechanized definitions and proofs for a number of languages, including a version of mini-ML. Bounded induction enables proofs of properties for non-inductive semantic functions, and mediating type classes enable proof adaptation for more feature-rich languages.
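The modular-composition idea is in the spirit of Swierstra's "Data types à la carte". Here is a minimal Haskell sketch of signatures composed by coproduct, with a fold and a type class assembling the semantics feature by feature (the paper itself works with extensible Church encodings in Coq; all names below are illustrative):

```haskell
{-# LANGUAGE TypeOperators #-}

-- Fixpoint of a signature functor: the syntax of the assembled language.
newtype Fix f = In (f (Fix f))

-- Coproduct of signatures: features are defined independently
-- and combined with (:+:).
data (f :+: g) e = Inl (f e) | Inr (g e)

instance (Functor f, Functor g) => Functor (f :+: g) where
  fmap h (Inl x) = Inl (fmap h x)
  fmap h (Inr y) = Inr (fmap h y)

-- Two independent language features.
data Lit e = Lit Int
data Add e = Add e e

instance Functor Lit where fmap _ (Lit n)   = Lit n
instance Functor Add where fmap h (Add x y) = Add (h x) (h y)

-- The fold: each definition (or inductive proof) is given per feature
-- as an algebra, then composed.
foldFix :: Functor f => (f a -> a) -> Fix f -> a
foldFix alg (In t) = alg (fmap (foldFix alg) t)

-- A type class composes the semantics of a coproduct automatically.
class Eval f where
  evalAlg :: f Int -> Int

instance Eval Lit where evalAlg (Lit n)   = n
instance Eval Add where evalAlg (Add x y) = x + y
instance (Eval f, Eval g) => Eval (f :+: g) where
  evalAlg (Inl x) = evalAlg x
  evalAlg (Inr y) = evalAlg y

eval :: (Functor f, Eval f) => Fix f -> Int
eval = foldFix evalAlg

-- 2 + 3 in the language assembled from Lit and Add.
example :: Fix (Lit :+: Add)
example = In (Inr (Add (In (Inl (Lit 2))) (In (Inl (Lit 3)))))
```

Adding a new feature means writing one data type, one `Functor` instance, and one `Eval` instance; existing code is reused untouched. The paper's contribution is making this kind of composition work for *proofs* by induction, not just definitions.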

The framework described in the article, as well as the mini-ML case study, can be found at Meta-Theory à la Carte + Modular mini-ML.