Review: A formal proof of Sasaki-Murao algorithm

Last week, at MAP 2012 (Mathematics, Algorithms and Proofs 2012), a paper was presented that applies libraries of formalised reasoning in Coq to the verification of a computer algebra algorithm.

The paper is titled A formal proof of Sasaki-Murao algorithm, and its authors are Thierry Coquand, Anders Mörtberg and Vincent Siles (University of Gothenburg, Sweden).

The abstract of the paper is

The Sasaki-Murao algorithm computes the determinant of any square matrix over a commutative ring in polynomial time. The algorithm itself can be written as a short and simple functional program, but its correctness involves non-trivial mathematics. We here represent this algorithm in Type Theory with a new correctness proof, using the Coq proof assistant and the SSReflect extension.
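To give a flavour of the "short and simple functional program" the abstract mentions, here is a minimal Python sketch of fraction-free elimination in the style of Bareiss, the classical algorithm that Sasaki-Murao generalises to arbitrary commutative rings (the paper's actual development is in Coq/SSReflect; the function name and the pivoting details below are illustrative assumptions, shown over the integers where every division is exact):

```python
def det_bareiss(m):
    """Determinant by fraction-free (Bareiss-style) elimination.
    All divisions are exact, so no fractions ever appear."""
    n = len(m)
    a = [row[:] for row in m]   # work on a copy
    prev = 1                    # previous pivot (1 initially)
    sign = 1                    # tracks row swaps
    for k in range(n - 1):
        if a[k][k] == 0:        # find a nonzero pivot, swapping rows
            for i in range(k + 1, n):
                if a[i][k] != 0:
                    a[k], a[i] = a[i], a[k]
                    sign = -sign
                    break
            else:
                return 0        # whole column is zero: singular matrix
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # the 2x2 "condensation" step; division by prev is exact
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    return sign * a[n - 1][n - 1]
```

For example, `det_bareiss([[1, 2], [3, 4]])` yields -2. The intermediate entries stay polynomially bounded, which is what makes the algorithm polynomial-time over the integers.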

The Coq code of the formalisation is available here, and the slides of the talk are available here.

This work is part of the ForMath (Formalisation of Mathematics) project.

Review: Proving concurrent noninterference

At CPP 2012 (The Second International Conference on Certified Programs and Proofs), which begins on 13 December, a formal verification work in Isabelle/HOL titled Proving concurrent noninterference will be presented.

Its authors are Andrei Popescu, Johannes Hölzl and Tobias Nipkow (of the Theorem Proving Group at the Technical University of Munich).

Its abstract is

We perform a formal analysis of compositionality techniques for proving possibilistic noninterference for a while language with parallel composition. We develop a uniform framework where we express a wide range of noninterference variants from the literature and compare them w.r.t. their contracts: the strength of the security properties they ensure weighed against the harshness of the syntactic conditions they enforce. This results in a simple implementable algorithm for proving that a program has a specific noninterference property, using only compositionality, which captures uniformly several security type-system results from the literature and suggests a further improved type system. All formalism and theorems have been mechanically verified in Isabelle/HOL.
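As a toy illustration of the kind of syntactic, compositional check the abstract alludes to, here is a small Python sketch of a Volpano-Smith-style security discipline for a while language: low-labelled variables may never be written with data (or under a guard) that depends on high-labelled variables. The encoding of commands as tuples, the names, and the particular discipline are my assumptions for illustration, and parallel composition is omitted; the paper's actual framework is far more general and is formalised in Isabelle/HOL:

```python
def level(expr_vars, env):
    """An expression is high if it reads any high variable.
    env maps variable names to True (high) / False (low)."""
    return any(env[v] for v in expr_vars)

def secure(cmd, env, pc=False):
    """Syntactic noninterference check; pc is the security level
    of the enclosing guards (the 'program counter' label)."""
    tag = cmd[0]
    if tag == "skip":
        return True
    if tag == "assign":                 # ("assign", x, vars_read_by_rhs)
        _, x, rhs = cmd
        # reject a low write whose data or control context is high
        return not ((pc or level(rhs, env)) and not env[x])
    if tag == "seq":                    # ("seq", c1, c2)
        return secure(cmd[1], env, pc) and secure(cmd[2], env, pc)
    if tag in ("if", "while"):          # guard raises the pc level
        g = pc or level(cmd[1], env)
        return all(secure(c, env, g) for c in cmd[2:])
    raise ValueError("unknown command: " + tag)
```

With `env = {"l": False, "h": True}`, the explicit flow `l := h` and the implicit flow `if h then l := 0` are both rejected, while `h := l` is accepted.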

This work is part of the Security Type Systems and Deduction project.

The code with the corresponding theories is available here.

Review: Operational refinement for compiler correctness

This month (September 2012), a doctoral dissertation on formal verification in Coq, titled Operational refinement for compiler correctness, will be defended at Princeton University.

Its author is Robert W. Dockins, supervised by Andrew W. Appel.

Its abstract is

Compilers are an essential part of the software development process. Programmers all over the world rely on compilers every day to correctly translate their intentions, expressed as high-level source code, into executable low-level machine code. But what does it mean for a compiler to be correct?

This question is surprisingly difficult to answer. Despite the fact that various groups have made concerted efforts to prove the correctness of compilers since at least the early 1980’s, no clear consensus has arisen about what it means for a compiler to be correct. As a result, it seems that no two compiler verification efforts have stated their correctness theorems in the same way.

In this dissertation, I will advance a new approach to compiler correctness based on refinements of the operational semantics of programs. The cornerstones of this approach are behavioral refinement, which allows programs to improve by going wrong less often, and choice refinement, which allows compilers to reduce the amount of internal nondeterminism present in a program. I take particular care to explain why these notions of refinement are the correct formal interpretations of the informal ideas above.

In addition, I will show how these notions of refinement can be realistically applied to compiler verification efforts. First, I will present a toy language, WHILE-C, and show how choice and behavioral refinement can be used to verify the correctness of several interesting program transformations. The WHILE-C language and the transformations themselves are simple enough to be presented here in full detail. I will also show how the ideas of behavioral and choice refinement may be applied to the CompCert formally verified compiler [Ler09a], a realistic compiler for a significant subset of C.
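The two refinement notions from the abstract can be caricatured as a containment check on sets of observable behaviours. In this deliberately coarse Python sketch (the constant, function name, and the flattening of "going wrong after a trace prefix" into a single behaviour are my simplifying assumptions, not the dissertation's definitions), a compiled program refines its source if every behaviour it exhibits was already allowed, where a source that may go wrong licenses anything:

```python
WRONG = "wrong"  # the "goes wrong" behaviour, e.g. undefined behaviour

def refines(target_behaviours, source_behaviours):
    """Caricature of refinement as behaviour containment:
    - behavioural refinement: the target may replace a going-wrong
      source behaviour with any behaviour at all;
    - choice refinement: the target may exhibit fewer of the
      source's nondeterministic outcomes (a strict subset is fine)."""
    if WRONG in source_behaviours:
        return True  # source may go wrong, so any target is an improvement
    return target_behaviours <= source_behaviours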

The Coq code accompanying the dissertation is available here.

This dissertation is part of the Verified Software Toolchain project.

Review: Correctness of pointer manipulating algorithms illustrated by a verified BDD construction

A new formal verification work in Isabelle/HOL, titled Correctness of pointer manipulating algorithms illustrated by a verified BDD construction, has been published.

The authors of the paper are Mathieu Giorgino and Martin Strecker (University of Toulouse).

The paper will be presented on 29 August at FM 2012 (18th International Symposium on Formal Methods).

The paper presents a methodology for verifying imperative programs using the Isabelle/HOL proof assistant and its Imperative_HOL extension, together with Isabelle's Scala code generator. As an application of the methodology, a construction of binary decision diagrams (BDDs) is verified.

Its abstract is

This paper is an extended case study using a high-level approach to the verification of graph transformation algorithms: To represent sharing, graphs are considered as trees with additional pointers, and algorithms manipulating them are essentially primitive recursive traversals written in a monadic style. With this, we achieve almost trivial termination arguments and can use inductive reasoning principles for showing the correctness of the algorithms. We illustrate the approach with the verification of a BDD package which is modular in that it can be instantiated with different implementations of association tables for node lookup. We have also implemented a garbage collector for freeing association tables from unused entries. Even without low-level optimizations, the resulting implementation is reasonably efficient.
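The "association tables for node lookup" mentioned in the abstract are the heart of any reduced, shared BDD package: a constructor looks up a `(variable, low, high)` triple in a table before allocating, so structurally equal subgraphs become a single shared node. A minimal Python sketch of that idea (class and method names are mine; the paper's verified package is in Isabelle/HOL and is modular over the table implementation):

```python
class BDD:
    """Minimal store of reduced, shared BDD nodes.
    Node ids 0 and 1 are the terminals False and True."""

    def __init__(self):
        self.table = {}            # (var, low, high) -> node id
        self.nodes = [None, None]  # node id -> (var, low, high)

    def mk(self, var, low, high):
        """The hash-consing constructor that keeps the BDD reduced
        (no redundant tests) and shared (no duplicate triples)."""
        if low == high:            # both branches equal: test is redundant
            return low
        key = (var, low, high)
        if key not in self.table:  # look up before allocating => sharing
            self.table[key] = len(self.nodes)
            self.nodes.append(key)
        return self.table[key]
```

Building the same triple twice returns the same node id, and `mk(v, n, n)` collapses to `n`; these two invariants are exactly what makes BDD equality checking a constant-time pointer comparison.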

The code of the formalisation is available here.

The paper is part of the CLIMT (Categorical and Logical Methods in Model Transformation) project.

Review: Compiling concurrency correctly (Verifying software transactional memory)

A new verification thesis in Agda has been published: Compiling concurrency correctly (Verifying software transactional memory).

Its author is Liyang Hu, supervised by Graham Hutton.

The thesis was defended last June at the University of Nottingham.

The abstract of the thesis is

Concurrent programming is notoriously difficult, but with multi-core processors becoming the norm, is now a reality that every programmer must face. Concurrency has traditionally been managed using low-level mutual exclusion locks, which are error-prone and do not naturally support the compositional style of programming that is becoming indispensable for today’s large-scale software projects.

A novel, high-level approach that has emerged in recent years is that of software transactional memory (STM), which avoids the need for explicit locking, instead presenting the programmer with a declarative approach to concurrency. However, its implementation is much more complex and subtle, and ensuring its correctness places significant demands on the compiler writer.

This thesis considers the problem of formally verifying an implementation of STM. Utilising a minimal language incorporating only the features that we are interested in studying, we first explore various STM design choices, along with the issue of compiler correctness via the use of automated testing tools. Then we outline a new approach to concurrent compiler correctness using the notion of bisimulation, implemented using the Agda theorem prover. Finally, we show how bisimulation can be used to establish the correctness of a low-level implementation of software transactional memory.
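To make the object of study concrete: a typical STM implementation runs each transaction against a private read/write log and only touches shared memory at commit time, retrying if another transaction invalidated its reads. The thesis formalises a minimal language and its compiled, log-based implementation in Agda; the following Python analogue of a coarse-grained log-based STM is purely illustrative, with all names and the single global commit lock being my simplifying assumptions:

```python
import threading

class TVar:
    """A transactional variable."""
    def __init__(self, value):
        self.value = value

_commit_lock = threading.Lock()  # one coarse lock, taken only at commit

def atomically(tx):
    """Run tx(read, write) against a private log. At commit time,
    validate that every value read is still current, then apply the
    writes; on a conflict, discard the log and retry."""
    while True:
        reads, writes = {}, {}
        def read(v):
            if v in writes:                 # read-after-write in same tx
                return writes[v]
            reads.setdefault(v, v.value)    # record the snapshot
            return reads[v]
        def write(v, x):
            writes[v] = x                   # buffered until commit
        result = tx(read, write)
        with _commit_lock:
            if all(v.value == snap for v, snap in reads.items()):
                for v, x in writes.items():
                    v.value = x
                return result
        # validation failed: someone committed under us, so retry
```

A transfer between two `TVar`s written this way is atomic: either both updates commit or the whole transaction reruns, with no explicit per-variable locking in user code.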