Starting today, I will systematically send an email to ~lthms/lthms.xyz@lists.sr.ht when I publish a new blog post. Send a first email to ~lthms/lthms.xyz+subscribe@lists.sr.ht if you want to subscribe to the mailing list, and another to ~lthms/lthms.xyz+unsubscribe@lists.sr.ht when you want to unsubscribe. The objective is for readers to be able to react to my posts more easily, without my having to modify the way I handle this website. As a bonus, you don’t need to create any kind of account!

Finally, an archive of the mailing list can be found on lists.sr.ht.

It has been a long journey (4 years, 10 days) but I completed my PhD on October 25, 2018. The exact title of my PhD thesis is *“Specifying and Verifying Hardware-based Security Enforcement Mechanisms”*.

In this thesis, we consider a class of security enforcement mechanisms we call *Hardware-based Security Enforcement* (HSE). In such mechanisms, some trusted software components rely on the underlying hardware architecture to constrain the execution of untrusted software components with respect to targeted security policies. For instance, an operating system which configures page tables to isolate userland applications implements a HSE mechanism.

For a HSE mechanism to correctly enforce a targeted security policy, it requires both hardware and trusted software components to play their parts. During the past decades, several vulnerability disclosures have defeated HSE mechanisms. We focus on the vulnerabilities that are the result of errors at the specification level, rather than implementation errors. In some critical vulnerabilities, the attacker makes a legitimate use of one hardware component to circumvent the HSE mechanism provided by another one. For instance, cache poisoning attacks leverage inconsistencies between cache and DRAM’s access control mechanisms. We call this class of attacks, where an attacker leverages inconsistencies in hardware specifications, *compositional attacks*.

Our goal is to explore approaches to specify and verify HSE mechanisms using formal methods that would benefit both hardware designers and software developers. Firstly, a formal specification of HSE mechanisms can be leveraged as a foundation for a systematic approach to verify hardware specifications, in the hope of uncovering potential compositional attacks ahead of time. Secondly, it provides unambiguous specifications to software developers, in the form of a list of requirements.

Our contribution is two-fold:

- We propose a theory of HSE mechanisms against hardware architecture models. This theory can be used to specify and verify such mechanisms. To evaluate our approach, we propose a minimal model for a single core x86-based computing platform. We use it to specify and verify the HSE mechanism provided by Intel to isolate the code executed while the CPU is in System Management Mode (SMM), a highly privileged execution mode of x86 microprocessors. We have written machine-checked proofs in the Coq proof assistant to that end.
- We propose a novel approach inspired by algebraic effects to enable modular verification of complex systems made of interconnected components, as a first step towards addressing the challenge posed by the scale of the x86 hardware architecture. This approach is not specific to hardware models, and could also be leveraged to reason about the composition of software components. In addition, we have implemented our approach in the Coq theorem prover, and the resulting framework takes advantage of Coq's proof automation features to provide general-purpose facilities to reason about component interactions.

If you are interested, you can have a look at the papers I wrote during my PhD:

- SpecCert: Specifying and Verifying Hardware-based Security Enforcement Mechanisms, with Pierre Chifflier, Guillaume Hiet and Benjamin Morin, at Formal Methods 2016
- Modular Verification of Programs with Effects and Effect Handlers in Coq, with Yann Régis-Gianas, Pierre Chifflier and Guillaume Hiet, at Formal Methods 2018

You can also have a look at the Coq frameworks I have published:

- SpecCert on Github (CeCILL-B)
- FreeSpec on Github (GPL-3.0)

Finally, the interested reader might want to see the final material of my PhD:

I always wanted to learn some Lisp dialect. In the meantime, lykan (my Slayers Online clone) is beginning to take shape. So, of course, my brain got an idea: *why not write a client for lykan in some Lisp dialect?* I asked on Mastodon if there was a good game engine for Lisp, and someone told me about trivial-gamekit.

I have no idea if I will manage to implement a decent client using trivial-gamekit, but why not try? This article is the first of a series about my experiments, discoveries and difficulties.

The code of my client is hosted on my server, using the pijul vcs. If you have pijul installed, you can clone the repository.

In addition, the complete project detailed in this article is available as a gist.

The trivial-gamekit website lists several requirements. Two are related to Lisp:

- Quicklisp
- SBCL or CCL

Quicklisp is an experimental package manager for Lisp projects (it was easy to guess, because there is a link to the Quicklisp website in the trivial-gamekit documentation). As for SBCL and CCL, it turns out they are two Lisp implementations. I had already installed clisp, and it took me quite some time to understand my mistake. Fortunately, sbcl is also packaged in ArchLinux.

With a compatible Lisp implementation, installing Quicklisp as a user is straightforward. Following the website instructions is enough. At the end of the process, you will have a new directory `${HOME}/quicklisp`, whose purpose is similar to the Go workspace.
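For the record, the instructions boil down to something like this (fetching the installer and running it from sbcl; check the Quicklisp website for the up-to-date procedure):

```
curl -O https://beta.quicklisp.org/quicklisp.lisp
sbcl --load quicklisp.lisp --eval '(quicklisp-quickstart:install)' --quit
```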

Quicklisp is not a native feature of sbcl, and has to be loaded to be available. To do so automatically, you have to create a file `${HOME}/.sbclrc` with the following content:

```
(load "~/quicklisp/setup")
```

There is one final step to be able to use trivial-gamekit.

As of now^{1}, Quicklisp does not support HTTPS.

The first thing I look for when I learn a new language is how projects are organized. From this perspective, trivial-gamekit pointed me directly to Quicklisp.

Creating a new Quicklisp project is very simple, and this is a very good thing. As I said, the `${HOME}/quicklisp` directory acts like the Go workspace. As far as I can tell, new Quicklisp projects have to be located inside `${HOME}/quicklisp/local-projects`. I am not particularly happy with it, but it is not really important.

The current code name of my Lisp game client is lysk.

Quicklisp packages (systems?) are defined through `asd` files. I first created `lysk.asd` as follows:

```
(asdf:defsystem lysk
  :description "Lykan Game Client"
  :author "lthms"
  :license "GPLv3"
  :version "0.0.1"
  :serial t
  :depends-on (trivial-gamekit)
  :components ((:file "package")
               (:file "lysk")))
```

`:serial t` means that the files listed in the `components` field depend on the previous ones. That is, `lysk.lisp` depends on `package.lisp` in this case. It is possible to manage file dependencies manually, with the following syntax:

```
(:file "second" :depends-on ("first"))
```

I have declared only one dependency: trivial-gamekit. That way, Quicklisp will load it for us.

The first “true” Lisp file we define in our skeleton is `package.lisp`. Here is its content:

```
(defpackage :lysk
  (:use :cl)
  (:export run app))
```

Basically, this means our package exposes two symbols, `run` and `app`.

The `lysk.lisp` file contains the program itself. My first goal was to obtain the following behaviour: at startup, the program shall create a new fullscreen window, and exit when the user releases the left button of the mouse. It is worth mentioning that I had to report an issue to the trivial-gamekit upstream in order to make my program work as expected. While it may sound scary (it definitely shows trivial-gamekit is a relatively young project), the author implemented a fix in less than an hour! He also took the time to answer many questions I had when I joined the `#lispgames` Freenode channel.

Before going any further, let’s have a look at the complete file.

```
(cl:in-package :lysk)

(gamekit:defgame app () ()
  (:fullscreen-p 't))

(defmethod gamekit:post-initialize ((app app))
  (gamekit:bind-button :mouse-left :released
                       (lambda () (gamekit:stop))))

(defun run ()
  (gamekit:start 'app))
```

The first line is some kind of header which tells Lisp the package this file belongs to.

The `gamekit:defgame` function allows for creating a new game application (called `app` in our case). I ask for a fullscreen window with `:fullscreen-p`. Then, we use the `gamekit:post-initialize` hook to bind a handler to the release of the left button of our mouse. This handler is a simple call to `gamekit:stop`. Finally, we define a new function `run` which only starts our application.

Pretty straightforward, right?

To “play” our game, we can start the sbcl REPL.
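In my case, the session looked something like this (the two forms mirror what the `run` target of the Makefile at the end of this article automates; the `*` prompts are sbcl's):

```
$ sbcl
* (ql:quickload :lysk)
* (lysk:run)
```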

And it works!

Lisp really seems to empower a REPL-driven development workflow. That being said, once the development is finished, I don’t think I will have a lot of success if I ask my future players to start sbcl to enjoy my game. Fortunately, trivial-gamekit provides a dedicated function to bundle the game as a standalone executable.

Following the advice of borodust (the trivial-gamekit author), I created a second package to that end. First, we need to edit the `lysk.asd` file to declare a second package:

```
(asdf:defsystem lysk/bundle
  :description "Bundle the Lykan Game Client"
  :author "lthms"
  :license "GPLv3"
  :version "0.0.1"
  :serial t
  :depends-on (trivial-gamekit/distribution lysk)
  :components ((:file "bundle")))
```

This second package depends on lysk (our game client) and trivial-gamekit/distribution. The latter provides the `deliver` function, and we use it in the `bundle.lisp` file:

```
(cl:defpackage :lysk.bundle
  (:use :cl)
  (:export deliver))

(cl:in-package :lysk.bundle)

(defun deliver ()
  (gamekit.distribution:deliver :lysk 'lysk:app))
```

To bundle the game, we can use `sbcl` from our command line interface.
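The invocation loads the `lysk/bundle` system and calls `deliver` (this is exactly what the `bundle` target of the Makefile below automates):

```
sbcl --eval "(ql:quickload :lysk/bundle)" \
     --eval "(lysk.bundle:deliver)" \
     --quit
```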

Objectively, there is not much in this article. However, because I am totally new to Lisp, it took me quite some time to get these few lines of code to work together. All things considered, I think this constitutes a good trivial-gamekit skeleton. Do not hesitate to use it this way.

Thanks again to borodust, for your time and all your answers!

I like Makefiles, so here is one to `run` the game directly, or `bundle` it.

```
run:
	@sbcl --eval "(ql:quickload :lysk)" \
	      --eval "(lysk:run)"

bundle:
	@echo -en "[ ] Remove old build"
	@rm -rf build/
	@echo -e "\r[*] Remove old build"
	@echo "[ ] Building"
	@sbcl --eval "(ql:quickload :lysk/bundle)" --eval "(lysk.bundle:deliver)" --quit
	@echo "[*] Building"

.PHONY: bundle run
```

June 2018↩

`pi-hoole` is a collection of tools to enforce access control for self-hosted pijul repositories. It can be seen as a cgit-like solution, for authenticated (public key, SSH) and anonymous accesses. You can grant read and write accesses to a whole repository, or to a determined subset of branches.

`pi-hoole` is distributed under the terms of the AGPL v3.

It should not be a surprise that `pi-hoole` is versioned under `pijul`, with the patch format introduced by `pijul-0.9`. Fair warning: use `pijul-0.10.1` and you should be able to clone the `pi-hoole` repository.

Under the hood, `pi-hoole` is implemented in Haskell. We will need `stack` to build it.
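Assuming `stack` is installed, building should boil down to the usual commands:

```
stack setup    # install the GHC version pinned by the project, if needed
stack build
stack install  # copy the executables to ~/.local/bin
```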

We are using the latest lts available, but `pi-hoole` has already been built with `lts-10` (it does not build with older ones).

This builds three executables:

- `pi-hoole-cfg` generates a `.authorized_keys` file to enforce access control for SSH.
- `pi-hoole-shell` is called to determine if an authenticated user is allowed to perform a given `pijul` command.
- `pi-hoole-web` is an HTTP proxy to enforce access control for anonymous requests.

In order to use `pi-hoole` on your server, the first step is to create a new, dedicated user (*e.g.* `pijul`). You then need to make `pijul` and `pi-hoole-shell` available to this user. Currently, we do not provide any packaging solution to that end, but this might change in the future. Although it is not mandatory, the rest of this article assumes `pi-hoole-cfg` and `pi-hoole-web` have also been made available to the `pijul` user.

`pi-hoole` executables assume the configuration files are stored in `${XDG_CONFIG_DIRECTORY}/pi-hoole`, that is `~/.config/pi-hoole` by default.

In this directory, `pi-hoole` will scan the `keys/` directory, if it exists, to know the list of authorized authenticated users. One user may have as many public keys as required. Keys should be saved in files following the naming scheme `<user>(\.<label>)?\.pub`. Thus, `lthms.pub`, `lthms.laptop.pub` and `lthms.work.pub` are three valid public key filenames for the `lthms` user.
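The naming scheme is easy to check mechanically. Here is an illustrative shell snippet (the character class allowed in user and label names is my assumption, not something `pi-hoole` specifies):

```shell
# Illustrative check of the naming scheme <user>(\.<label>)?\.pub.
# The character class for user and label names is an assumption.
pattern='^[A-Za-z0-9_-]+(\.[A-Za-z0-9_-]+)?\.pub$'

is_valid_key_name () {
    echo "$1" | grep -Eq "$pattern"
}

is_valid_key_name "lthms.laptop.pub" && echo "valid"
```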

To generate a `.ssh/authorized_keys` file, use `pi-hoole-cfg`.

The resulting file will be of the form:

```
command="pi-hoole-shell \".lthms\" \"${SSH_ORIGINAL_COMMAND}\"",no-port-forwarding,no-x11-forwarding,no-agent-forwarding <ssh_keys_1>
command="pi-hoole-shell \".lthms\" \"${SSH_ORIGINAL_COMMAND}\"",no-port-forwarding,no-x11-forwarding,no-agent-forwarding <ssh_keys_2>
```

To configure the authorized accesses for the users, you have to set up a valid `pi-hoole` configuration file. The latter uses the YAML syntax, and is located at `${XDG_CONFIG_DIRECTORY}/pi-hoole/config.yaml`. It has two root fields: `groups` and `repositories`.

You can define groups of users, and grant certain rights to a group, effectively granting those rights to each user who is a member of the group.

Group names are prefixed by `+`, and user names are prefixed by `.`.

Then, if you want to create one group `ogma` which contains the users `lgeorget` and `lthms`:

```
groups:
  +ogma: [.lthms, .lgeorget]
```

You can create as many groups as you want, and one user may be added to several groups. Currently, nested groups (*i.e.* a group being a member of another group) are not allowed.

The `repositories` field allows for defining which repositories are available, and what given users can do with them. The content of `repositories` is a map, whose keys are paths to the repositories (relative to the `HOME` directory), and whose values are maps from roles to rights.

For instance, given the following configuration:

```
repositories:
  my/first/repo:
    .lthms: +w
    +contrib: +r +w[master]
    anon: +r
  my/second/repo:
    .lthms: +r
    .lgeorget: +w
```

We declare two repositories, one located at `${HOME}/my/first/repo`, and another at `${HOME}/my/second/repo`. The user `lthms` (identified by `.lthms`) can read and write to the first repository, on arbitrary branches. The members of the group `contrib` can read from any branch, but can only write to the `master` branch. Finally, the anonymous user (through HTTP) can read from any branch.

Read means being able to clone or pull. Write means being able to push.

For the second repository, `lthms` can read from any branch, while `lgeorget` can read and write to any branch. Therefore, it is not possible to clone `my/second/repo` through HTTP, because `anon` has not been granted any particular rights.

`pi-hoole-web` is a very simple HTTP proxy. It receives HTTP requests, ideally issued by a `pijul` client, and turns them into `pijul` commands if the `anon` role has been granted the required rights.

Currently, `pi-hoole-web` cannot be configured, and listens on port 8080. Also, it does not daemonize itself. The easiest way to set up a `pi-hoole-web` instance reliably is to use systemd, for instance with the following unit:

```
[Unit]
Description=HTTP Proxy for Pijul anonymous accesses

[Service]
Type=simple
User=pijul
Group=pijul
ExecStart=/usr/bin/pi-hoole-web
Restart=on-failure
TimeoutStopSec=300

[Install]
WantedBy=multi-user.target
```

For reference, the instance of `pi-hoole-web` responsible for `pijul.lthms.xyz` sits behind an nginx server. A very straightforward nginx configuration can be:

```
server {
    listen 80;
    server_name pijul.lthms.xyz;

    access_log /var/log/nginx/pijul.lthms.xyz.access.log;
    error_log /var/log/nginx/pijul.lthms.xyz.error.log;

    location / {
        proxy_pass http://localhost:8080/;
    }
}
```

The command `pijul patch` does not have an argument to specify the branch from which a patch is initially fetched; as a consequence, having read access to a repository, even to another branch where the patch is not applied, is enough. For this reason, a user who obtains the hash of a patch from a branch they cannot access can still fetch that patch. That is, a patch is as private as its hash. Private branches will eventually be supported in a better manner, but for now, private branches should mean separate repositories.

Today, I am pleased to announce that `nordless` is now a MELPA package. `nordless` is a mostly colourless theme, with two main sources of inspiration: nofrils, an extremely minimalist colour scheme for vim, and nord, a north-bluish colour palette.

`nordless` is distributed under the terms of the GPL-3.0 licence. It would not have existed at all without `nofrils`, a neat, “anti-highlighting syntax” theme for vim by robertmeta. A lot of credits go to Arctic Ice Studio.

I should warn you: I have only tested `nordless` with my emacs configuration, and I am pretty sure there is still a lot to do. If you use it and find a case where some strange colour pops up, feel free to report it by filing an issue on the GitHub repository.
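If you want to give it a try, the installation should look something like this (assuming the MELPA package and the theme are both named `nordless-theme` and `nordless` respectively, and that MELPA is already in your `package-archives`):

```
(package-install 'nordless-theme)
(load-theme 'nordless t)
```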

A colleague of mine introduced me to the benefits of `error-chain`, a crate which aims to implement *“consistent error handling”* for Rust. I found the overall design pretty convincing, and in his use case, the crate really makes error handling clearer and more flexible. I knew *pijul* uses `error-chain` too, but I never had the occasion to dig more into it.

At the same time, I have read quite a lot about *extensible effects* in Functional Programming, for an academic article I have submitted to Formal Methods 2018^{1}. In particular, the freer package provides a very nice API to define monadic functions which may use well-identified effects. For instance, we can imagine that `Console` identifies the functions which may print to and read from the standard output. A function `askPassword` which displays a prompt and gets the user password would have this type signature:
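With the `Member` typeclass of freer, such a signature might look like this (a sketch, not a verbatim quote from the package):

```
askPassword :: Member Console r => Eff r ()
```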

Compared to `IO`, `Eff` allows for meaningful type signatures. It becomes easier to reason about function composition, and you know that a function which lacks a given effect in its type signature will not be able to use it. As a predictable drawback, `Eff` can become burdensome to use.

Basically, when my colleague showed me his Rust project and how he was using `error-chain`, the question popped out. **Can we use an approach similar to Eff to implement a Haskell-flavoured error-chain?**

Spoiler alert: the answer is yes. In this post, I will dive into the resulting API, leaving for another time the details of the underlying implementation. Believe me, there is plenty to say. If you want to have a look already, the current implementation can be found on GitHub.

In this article, I will use several “advanced” GHC pragmas. I will not explain each of them, but I will *try* to give some pointers for the reader who wants to learn more.

This is not an academic publication, and my goal was primarily to explore the arcana of the Haskell type system, so I might have skipped a proper study of the state of the art. That being said, I have written programs in both Rust and Haskell before.

In Rust, `Result<T, E>` is the counterpart of `Either E T` in Haskell^{2}. You can use it to wrap either the result of a function (`T`) or an error encountered during the computation (`E`). Both `Either` and `Result` are used to achieve the same end, that is writing functions which might fail.

On the one hand, `Either E` is a monad. It works exactly like `Maybe` (returning an error acts as a shortcut for the rest of the function), but gives you the ability to specify *why* the function has failed. To deal with effects, the `mtl` package provides `EitherT`, a transformer version of `Either` to be used in a monad stack.

On the other hand, the Rust language provides the `?` syntactic sugar to achieve the same thing. That is, both languages provide you the means to write potentially failing functions without the need to care locally about failure. If your function `B` uses a function `A` which might fail, and `B` wants to fail itself whenever `A` fails, it becomes trivial.
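For instance, in Rust (an illustrative example of mine, not taken from the article), a function which propagates the failures of two `parse` calls needs nothing more than `?`:

```rust
use std::num::ParseIntError;

// `?` returns early with the error if either `parse` call fails;
// otherwise the happy path continues.
fn parse_pair(a: &str, b: &str) -> Result<(i32, i32), ParseIntError> {
    let x: i32 = a.parse()?;
    let y: i32 = b.parse()?;
    Ok((x, y))
}
```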

Out of the box, neither `EitherT` nor `Result` is extensible. The functions must use the exact same `E`, or errors must be converted manually.

Rust and the `error-chain` crate provide several means to overcome this limitation. In particular, the `Into` and `From` traits ease the conversion from one error type to another. Among other things, the `error-chain` crate provides a macro to easily define a wrapper around many error types, basically your own and the ones defined by the crates you are using.

I see several drawbacks to this approach. First, it is only extensible if you take the time to modify the wrapper type each time you want to consider a new error type. Second, you can either use one error type or every error type.

However, the `error-chain` package provides a way to solve a very annoying limitation of `Result` and `Either`. When you “catch” an error, after a given function returns its result, it can be hard to determine where the error is coming from. Imagine you are parsing a very complicated source file, and the error you get is `SyntaxError` with no additional context. How would you feel?

`error-chain` solves this by providing an API to construct a chain of errors, rather than a single value.

The `chain_err` function makes it easy to wrap a given error in some context, which in turn allows for writing more meaningful error messages, for instance.

The `ResultT` monad is an attempt to bring together the extensibility of `Eff` and the error chaining of `chain_err`. I will admit that, for the latter, the current implementation of `ResultT` is probably less powerful, but to be honest, I mostly cared about the “extensible” part, so it is not very surprising.

This monad is an alternative neither to monad stacks à la mtl, nor to the `Eff` monad. In its current state, it aims to be a more powerful and flexible version of `EitherT`.

As often in Haskell, the `ResultT` monad can be parameterised in several ways.

- `msg` is the type of messages you can stack to provide more context to error handling
- `err` is a *row of errors*^{3}; it basically describes the set of errors you will eventually have to handle
- `m` is the underlying monad stack of your application, knowing that `ResultT` is not intended to be stacked itself
- `a` is the expected type of the computation result
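Putting these four parameters together, the head of the type declaration presumably looks like this (a sketch; the `[*]` kind requires the DataKinds extension, and the actual definition lives in the GitHub repository):

```
data ResultT msg (err :: [*]) (m :: * -> *) a
```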

`achieve` and `abort`

The two main monadic operations which come with `ResultT` are `achieve` and `abort`. The former allows for building the context, by stacking so-called messages which describe what you want to do. The latter allows for bailing out of a computation and explaining why.

`achieve` should be used for `do` blocks. You can use `<?>` to attach a contextual message to a given computation.
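From this description, `achieve` and `<?>` plausibly have signatures along these lines (my reconstruction, not the authoritative definitions):

```
achieve :: Monad m => msg -> ResultT msg err m a -> ResultT msg err m a

(<?>) :: Monad m => ResultT msg err m a -> msg -> ResultT msg err m a
```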

The type signature of `abort` is also interesting, because it introduces the `Contains` typeclass (which is the equivalent of `Member` for `Eff`).
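A plausible signature (my reconstruction, mirroring how `Contains` is used elsewhere in this article) is:

```
abort :: (Contains err e, Monad m) => e -> ResultT msg err m a
```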

This reads as follows: *“you can abort with an error of type e if and only if the row of errors err contains the type e.”*

For instance, imagine we have an error type `FileError` to describe filesystem-related errors. Then, we can imagine the following function:
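Mirroring the `parseConfiguration` signature below, `readContent` could be declared as follows (an illustrative reconstruction):

```
readContent :: (Contains err FileError, MonadIO m)
            => FilePath
            -> ResultT msg err m String
```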

We could leverage this function in a given project, for instance to read its configuration files (for the sake of the example, it has several configuration files). This function can use its own type to describe an ill-formed configuration (`ConfigurationError`).

```
parseConfiguration :: (Contains err ConfigurationError, MonadIO m)
                   => String
                   -> String
                   -> ResultT msg err m Configuration
```

To avoid repeating `Contains` when the row of errors needs to contain several elements, we introduce `:<`^{4} (read *subset or equal*):

```
getConfig :: ( '[FileError, ConfigurationError] :< err
             , MonadIO m)
          => ResultT String err m Configuration
getConfig = do
  achieve "get configuration from ~/.myapp directory" $ do
    f1 <- readContent "~/.myapp/init.conf"
            <?> "fetch the main configuration"
    f2 <- readContent "~/.myapp/net.conf"
            <?> "fetch the net-related configuration"
    parseConfiguration f1 f2
```

You might see, now, why I say `ResultT` is extensible. You can use two functions with totally unrelated errors, as long as the caller advertises that with `Contains` or `:<`.

Monads are traps; you can only escape them by playing by their rules. `ResultT` comes with `runResultT`.
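Its signature presumably constrains the row of errors to be empty, along these lines (my reconstruction):

```
runResultT :: Monad m => ResultT msg '[] m a -> m a
```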

This might be surprising: we can only escape from `ResultT` if we do not use *any errors at all*. In fact, `ResultT` forces us to handle errors before calling `runResultT`.

`ResultT` provides several functions prefixed by `recover`. Their type signatures can be a little confusing, so we will dive into the simplest one:

```
recover :: forall e m msg err a.
           (Monad m)
        => ResultT msg (e ': err) m a
        -> (e -> [msg] -> ResultT msg err m a)
        -> ResultT msg err m a
```

`recover` allows for *removing* an error type from the row of errors. To do so, it requires an error handler which determines what to do with the error raised during the computation, given the stack of messages accumulated at that time. Using `recover`, a function may use more errors than advertised in its type signature, but we know by construction that, in such a case, it handles these errors, so that this is transparent to the function user. The type of the handler is `e -> [msg] -> ResultT msg err m a`, which means the handler *can raise errors if required*. `recoverWhile msg` is basically a synonym for `achieve msg $ recover`. `recoverMany` allows for doing the same with a row of errors, by providing as many handlers as required. Finally, `recoverManyWith` simplifies `recoverMany`: you can provide only one handler tied to a given typeclass, on the condition that the handled errors implement this typeclass.

Using `recover` and its siblings often requires helping the Haskell type system a bit, especially when we use lambdas to define the error handlers. This is usually done with the `Proxy a` datatype (where `a` is a phantom type). I would rather use the TypeApplications^{5} pragma.

```
recoverManyWith @[FileError, NetworkError] @DescriptiveError
                (do x <- readFromFile f
                    y <- readFromNetwork socket
                    printToStd x y)
                printErrorAndStack
```

The `DescriptiveError` typeclass can be seen as a dedicated `Show`, to give a textual representation of errors. It is inspired by the macros of `error_chain`.

We can start from an empty row of errors, and allow ourselves to use more errors thanks to the `recover*` functions.

`cat` in Haskell using ResultT

`ResultT` only cares about error handling. The rest of the work is up to the underlying monad `m`. That being said, nothing forbids us from providing a fine-grained API for, e.g., filesystem-related functions. From an error handling perspective, the functions provided by the Prelude (the standard library of Haskell) are pretty poor, and the documentation is not really precise regarding the kinds of errors we can encounter while using them.

In this section, I will show you how we can leverage `ResultT` to **(i)** define an error-centric API for basic file management functions and **(ii)** use this API to implement a `cat`-like program which reads a file and prints its content to the standard output.

We could have one sum type to describe, in the same place, all the errors we can encounter, and later use the pattern matching feature of Haskell to determine which one has been raised. The thing is, this job is already done by the row of errors of `ResultT`. Besides, this would mean that a function which *opens* a file could raise an error about not being able to *write* to it.

Because `ResultT` is intended to be extensible, we should rather define several types, so we can have a fine-grained row of errors. Of course, too many types would become burdensome, so this is yet another situation where we need to find the right balance.

```
newtype AlreadyInUse = AlreadyInUse FilePath
newtype DoesNotExist = DoesNotExist FilePath
data AccessDeny = AccessDeny FilePath IO.IOMode
data EoF = EoF
data IllegalOperation = IllegalRead | IllegalWrite
```

To be honest, this is a bit too much for real life, but we are in a blog post here, so we should embrace the potential of `ResultT`.

By reading the System.IO documentation, we can infer what our function type signatures should look like. I will not discuss their actual implementation in this article, as this would require me to explain how `IO` deals with errors itself (and this article is already long enough to my taste). You can have a look at this gist if you are interested.

```
openFile :: ( '[AlreadyInUse, DoesNotExist, AccessDeny] :< err
            , MonadIO m)
         => FilePath -> IOMode -> ResultT msg err m Handle
```

`cat`

We can use the `ResultT` monad, its monadic operations and our filesystem functions to implement a `cat`-like program. I tried to comment on the implementation to make it easier to follow.

```
cat :: FilePath -> ResultT String err IO ()
cat path =
  -- We will try to open and read this file to mimic
  -- `cat` behaviour.
  -- We advertise what we are trying to do, in case
  -- something goes wrong during the process.
  achieve ("cat " ++ path) $ do
    -- We will recover from a potential error,
    -- but we will abstract away the error using
    -- the `DescriptiveError` typeclass. This way,
    -- we do not need to give one handler by error
    -- type.
    recoverManyWith @[Fs.AlreadyInUse, Fs.DoesNotExist, Fs.AccessDeny, Fs.IllegalOperation]
                    @(Fs.DescriptiveError)
                    (do f <- Fs.openFile path Fs.ReadMode
                        -- `repeatUntil` works like `recover`, except
                        -- it repeats the computation until the error
                        -- actually happens.
                        -- I could not have used `getLine` without
                        -- `repeatUntil` or `recover`, as it is not
                        -- in the row of errors allowed by
                        -- `recoverManyWith`.
                        repeatUntil @(Fs.EoF)
                                    (Fs.getLine f >>= liftIO . print)
                                    (\_ _ -> liftIO $ putStrLn "%EOF")
                        closeFile f)
                    printErrorAndStack
  where
    -- Using the `DescriptiveError` typeclass, we
    -- can print both the stack of Strings which form
    -- the context, and the description of the generic
    -- error.
    printErrorAndStack e ctx = do
      liftIO . putStrLn $ Fs.describe e
      liftIO $ putStrLn "stack:"
      liftIO $ print ctx
```

The type signature of `cat` teaches us that this function handles any error it might encounter. This means we can use it anywhere we want: in another computation inside `ResultT` which might raise errors completely unrelated to the file system, for instance. Or we can use it with `runResultT`, escaping the `ResultT` monad (only to fall into the `IO` monad, but that is another story).
For once, I wanted to write about the *result* of a project, instead of *how it is implemented*. Rest assured, I do not want to skip the latter. I need to clean up the code a bit before bragging about it.

1. If the odds are in my favour, I will have plenty of occasions to write more about this topic. ↩
2. I wonder if they deliberately chose to swap the two type arguments. ↩
3. You might have noticed that `err` is of kind `[*]`. To write such a thing, you will need the DataKinds GHC pragma. ↩
4. If you are confused by `:<`, it is probably because you were not aware of TypeOperators before. Maybe it was for the best. :D ↩
5. The TypeApplications pragma is probably one of my favourites. When I use it, it feels almost as if I were writing some Gallina. ↩

I’ve been using Coq for more than two years now. So far, it has been a very interesting journey. The least I can say is that I’ve learnt a lot, from dependent types to generalized rewriting. However, until very recently there has been one thing I’ve been avoiding as much as possible: Ltac, the “tactic language” of Coq. You can write your own tactics to be used later in your proof scripts. The main reason behind my reluctance was how poorly it has been described to me by several developers. Also, I find the reference manual quite unhelpful.

Fortunately, you don’t need to know how to write a tactic to prove most of your lemmas. SpecCert^{1} has some custom `intros`-like tactics, but they are nothing more than aliases.

Sometimes, even if you *could* do without Ltac, the automation power it grants you is just too appealing. For the past few months, I have been working on a new project in Coq that introduces a relatively verbose proof framework. For toy examples, the “do it yourself” way was enough, but very quickly the proof goals become insane. I needed Ltac.

Turns out, it is not that hard.

This article is a quick overview of my findings. If you never wrote a tactic in Coq and are curious about the subject, it might be a good starting point.

The first thing you will probably want to do with Ltac is define aliases for recurring (sequences of) tactics.

For instance, in SpecCert, I had a lot of similar lemmas to prove. They have exactly the same “shape”: `forall h1 ev h2, P h1 -> R h1 ev h2 -> P h2`. `h1`, `h2` and `ev` were records I had to destruct, and `P` and `R` were compositions of sub-predicates. As a consequence, most of my proofs started with something like `intros [x1 [y1 z1]] [a b] [x2 [y2 z2]] [HP1 [HP2 [HP3 HP4]]] [R1|R2]`. Nothing copy/paste cannot solve at first, of course, but as soon as the definitions change, you have to update every single `intros`, which is quite boring, to say the least.

The solution is simple: define a new tactic to use in place of your “raw” `intros`:

```
Ltac my_intros_1 := intros [x1 [y1 z1]] [a b] [x2 [y2 z2]] [HP1 [HP2 [HP3 HP4]]] [R1|R2].
```

It *is* as simple as that. If your alias uses two tactics instead of one (e.g. you always use `cbn` after your `intros`), you have to chain them with `;`^{2}:

```
Ltac my_intros_2 := intros [x1 [y1 z1]] [a b] [x2 [y2 z2]] [HP1 [HP2 [HP3 HP4]]] [R1|R2]; cbn.
```

That being said, there is an issue with these definitions. What if you want to use them twice? The name `x1` would already be used and the tactic would fail. To solve this issue, Ltac provides `fresh`, a keyword to get an unused name.

```
Ltac and_intro :=
  let Hl := fresh "Hl" in
  let Hr := fresh "Hr" in
  intros [Hl Hr].
```

It is straightforward to use:

```
Goal (forall P Q, P /\ Q -> P).
  intros P Q.
  and_intro.
  exact Hl.
Qed.
```

Here, `fresh "H"` means Coq will try to use `H` as a name. If it already exists, it will try `H0`, then `H1`, etc. In your Ltac definition, you can just use `H`.

Tactics can be recursive too! For instance, we can write `and_intros` pretty easily:

```
Ltac and_intros :=
  let Hl := fresh "Hl" in
  let Hr := fresh "Hr" in
  intros [Hl Hr]; try and_intros.
```

So, for instance, if we consider the following (incomplete) proof script:

```
Goal (forall P Q, Q /\ P -> P /\ Q -> P).
  intros P Q.
  and_intros.
```

Here is what we get after `and_intros`:

```
1 subgoal, subgoal 1 (ID 15)
P, Q : Prop
Hl : Q
Hr, Hl0 : P
Hr0 : Q
============================
P
```

Finally, tactics can take a set of arguments:

```
Ltac destruct_and H :=
  let Hl := fresh "Hl" in
  let Hr := fresh "Hr" in
  destruct H as [Hl Hr].
```
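As a quick, hedged sketch of how `destruct_and` can be used (the goal is mine, not from the original post):

```
Goal (forall P Q : Prop, P /\ Q -> P).
  intros P Q H.
  destruct_and H. (* the conjunction H is split into Hl : P and Hr : Q *)
  exact Hl.
Qed.
```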

With that, you can already write some very useful “aliases.” It can save you quite some time when refactoring your definitions, but Ltac is capable of much more.

One thing that can be useful while writing/debugging a tactic is the ability to print a message. You have two strategies available in Ltac as far as I know: `idtac` and `fail`, where `idtac` does not stop the proof script on an error whereas `fail` does.
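For instance, here is a hedged illustration (the tactic names are mine): `idtac` prints its message and lets the script continue, while `fail` prints its message and aborts.

```
Ltac log_and_go := idtac "still alive". (* prints, then the script goes on *)
Ltac give_up := fail "cannot proceed".  (* prints and stops the script *)

Goal True.
  log_and_go.
  exact I.
Qed.
```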

If you need to remember one thing from this blogpost, it is probably this: Ltac is mostly about pattern matching. That is, you will try to find relevant terms and subterms in your goal and hypotheses that you can work with.

You can pattern match a value as you would do in Gallina, but in Ltac, the pattern match is also capable of more. The first scenario is when you have a hypothesis name and you want to check its type:

```
Ltac and_or_destruct H :=
  let Hl := fresh "Hl" in
  let Hr := fresh "Hr" in
  match type of H with
  | _ /\ _
    => destruct H as [Hl Hr]
  | _ \/ _
    => destruct H as [Hl|Hr]
  end.
```

For the following incomplete proof script:

```
Goal (forall P Q, P /\ Q -> Q \/ P -> True).
  intros P Q H1 H2.
  and_or_destruct H1.
  and_or_destruct H2.
```

We get two sub goals:

```
2 subgoals, subgoal 1 (ID 20)
P, Q : Prop
Hl : P
Hr, Hl0 : Q
============================
True
subgoal 2 (ID 21) is:
True
```

We are not limited to constructors with the `type of` keyword; we can also pattern match using our own definitions. For instance:

```
Ltac and_my_predicate_simpl H :=
  let H_ := fresh "H" in
  match type of H with
  | my_predicate _ /\ _
    => destruct H as [Hmy_pred H_]; clear H_
  | _ /\ my_predicate _
    => destruct H as [H_ Hmy_pred]; clear H_
  end.
```

Last but not least, it is possible to introspect the current goal of your proof development:

```
Ltac rewrite_something :=
  match goal with
  | [ H: ?x = _ |- context[?x] ]
    => rewrite H
  end.
```

So once again, as an example, the following proof script:

```
Goal (forall (x: nat)
             (H: x = 2),
         x + 2 = 4).
  intros x Heq.
  rewrite_something.
```

Gives us the following goal to prove:

```
1 subgoal, subgoal 1 (ID 6)
x : nat
Heq : x = 2
============================
2 + 2 = 4
```

The `rewrite_something` tactic will search for an equality relation it can use to modify your goal. How does it work?

- `match goal with` tells Coq we want to pattern match on the whole proof state, not only on a known, named hypothesis
- `[ H: ?x = _ |- context[?x] ]` is a pattern to tell Coq we are looking for a hypothesis of the form `_ = _`
    - `?x` is a way to bind the left operand of `=` to the name `x`
    - The right side of `|-` is dedicated to the current goal
    - `context[?x]` is a way to tell Coq we don’t really want to pattern-match the goal as a whole, but rather we are looking for a subterm of the form `?x`
- `rewrite H` will be used in case Coq is able to satisfy the constraints specified by the pattern, with `H` being the hypothesis selected by Coq^{3}

Finally, there is one last thing you really need to know before writing a tactic: the difference between `match` and `lazymatch`. Fortunately, it is pretty simple. On the one hand, with `match`, if one pattern matches but the branch body fails, Coq will backtrack and try the next branch. On the other hand, `lazymatch` will stop on error.

So, for instance, the two following tactics will print two different messages:

```
Ltac match_failure :=
  match goal with
  | [ |- _ ]
    => fail "fail in first branch"
  | _
    => fail "fail in second branch"
  end.

Ltac match_failure' :=
  lazymatch goal with
  | [ |- _ ]
    => fail "fail in first branch"
  | _
    => fail "fail in second branch"
  end.
```

We can try that quite easily by starting a dumb goal (e.g. `Goal (True).`) and calling our tactic. For `match_failure`, we get:

```
Ltac call to "match_failure" failed.
Error: Tactic failure: fail in second branch.
```

On the other hand, for `match_failure'`, we get:

```
Ltac call to "match_failure'" failed.
Error: Tactic failure: fail in first branch.
```

I was able to tackle my automation needs with these Ltac features. As always with Coq, there is more to learn. For instance, I saw that there is a third kind of pattern matching (`multimatch`) I know nothing about.

Monads are hard to get right. I think it took me around a year of Haskelling to feel like I understood them. The reason is, in my opinion, that there is no such thing as *the* Monad. It is even the contrary. When someone asks me how I would define Monads in only a few words, I say a Monad is a convenient formalism to chain specific computations. Once I got that, I started noticing “monadic constructions” everywhere, from the Rust `?` operator to the Elixir `with` keyword.

Haskell often uses another concept above Monads: Monad Transformers. These allow you to work not only with *one* Monad, but rather with a stack of them. Each Monad brings its own properties, and you can mix them into your very own one. You cannot have that in Rust or Elixir, but it works great in Haskell. Unfortunately, it is not an easy concept and it can be hard to understand. This article is not an attempt to do so, but rather a postmortem review of one situation where I found them extremely useful. If you think you have understood how they work, but don’t see the point yet, you might find here the beginning of an answer.

Recently, I ran into a very good example of why Monad Transformers are worth it. I have been working on a project called ogma for a couple of years now. In a nutshell, I want to build “a tool” to visualize a story in time and space. We are not there just yet, but in the meantime I have written a software called `celtchar` to build a novel from a list of files. One of its newest features is the choice of language and, by extension, of typographic rules. This information is read from a configuration file very early in the program flow. Unfortunately, its use comes much later, after several function calls.

In Haskell, you deal with that kind of challenge by relying on the Reader Monad. It carries an environment in a transparent way. The only thing is, I was already using the State Monad to carry the computation result. But that is not an issue with Monad Transformers.

As you may have already understood, I wasn’t using the “raw” `State` monad, but rather the transformer version, `StateT`. The underlying monad was `IO`, because I needed to be able to read some files from the filesystem. By replacing `IO` with `ReaderT Language IO`, I basically fixed my “carry the variable to the correct function call easily” problem.

Retrieving the chosen language is as simple as:
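(The snippet is missing from this archived version; here is a hedged reconstruction of what it may have looked like, assuming a `Builder` stack of the form `StateT` over `ReaderT Language IO` — the exact definitions of `Builder`, `Language` and `getLanguage` are my guesses, not the actual celtchar code.)

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State (StateT)
import Control.Monad.Trans.Reader (ReaderT, ask)
import Data.Text (Text)

data Language = English | French

-- the Builder stack: a state (the novel being built) over an
-- environment (the chosen language) over IO
type Builder = StateT Text (ReaderT Language IO)

-- `ask` fetches the environment; `lift` carries it through `StateT`
getLanguage :: Builder Language
getLanguage = lift ask
```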

And that was basically it. The complete commit can be found on Github.

Now, my point is not that Monad Transformers are the ultimate beast we will have to tame once, after which everything will be shiny and easy. There are a lot of other ways to achieve what I did with my `Builder` stack. For instance, in an OO language, I probably would have added a new class member to my `Builder` class and done basically the same thing.

However, there is something I really like about this approach: the `Builder` type definition already gives you a lot of useful information. Both the `State` and `Reader` monads have a well-established semantics most Haskellers will understand at a glance. A bit of documentation won’t hurt, but I suspect it is not as necessary as one could expect. Moreover, the presence of the `IO` monad tells everyone that using the `Builder` monad might cause I/O.

I have to confess something. In the published codebase of SpecCert lies a shameful secret. It takes the form of a set of axioms which are not required. I thought they were when I wrote them, but it was before I heard about “generalized rewriting,” setoids and morphisms.

Now, I know the truth. I will have to update SpecCert eventually. But, in the meantime, let me try to explain how it is possible to rewrite a term in a proof using an ad-hoc equivalence relation and, when necessary, a proper morphism. Be aware, however, that I may not be completely familiar with the underlying mathematics. I will try not to say anything wrong, but if I do, please feel free to tell me so.

Now, why would anyone want such a thing as “generalized rewriting”, when the `rewrite` tactic works just fine?

The thing is: it does not in some cases. To illustrate my statement, we will consider the following definition of a gate in Coq:

```
Record Gate :=
  { open: bool
  ; lock: bool
  ; lock_is_close: lock = true -> open = false }.
```

According to this definition, a gate can be either open or closed. It can also be locked, but if it is, it cannot be open at the same time. To express this constraint, we embed the appropriate proposition inside the Record. By doing so, we *know* for sure that we will never meet an ill-formed `Gate` instance. The Coq engine will prevent it, because to construct a gate, one will have to prove that the `lock_is_close` predicate holds.

The `Program` keyword makes it easy to work with embedded proofs. For instance, defining the “open gate” is as easy as:

```
Require Import Coq.Program.Tactics.

Program Definition open_gate :=
  {| open := true
   ; lock := false
   |}.
```

Under the hood, `Program` proves what needs to be proven, that is the `lock_is_close` proposition. Just have a look at its output:

```
open_gate has type-checked, generating 1 obligation(s)
Solving obligations automatically...
open_gate_obligation_1 is defined
No more obligations remaining
open_gate is defined
```

In this case, using `Program` is a bit like cracking a nut with a sledgehammer. We can easily do it ourselves using the `refine` tactic.

```
Definition open_gate': Gate.
  refine ({| open := true
           ; lock := false
           |}).
  intro Hfalse.
  discriminate Hfalse.
Defined.
```

What does it mean for two gates to be equal? Intuitively, we know they have to share the same states (`open` and `lock` in our case).

When you write something like `a = b` in Coq, the `=` refers to the `eq` relation, which relies on what is called the Leibniz Equality. You can have a look at the Coq documentation if you want more details, but the main idea is pretty simple:

`x` and `y` are equal iff every property on `A` which is true of `x` is also true of `y`.

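This characterization can be stated and proved in Coq directly; here is a minimal sketch (the lemma name is mine):

```
Lemma leibniz {A: Type} (x y: A)
  : x = y -> forall (P: A -> Prop), P x -> P y.
Proof.
  intros Heq P Hx.
  (* rewrite the goal P y into P x using x = y *)
  rewrite <- Heq.
  exact Hx.
Qed.
```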
As for myself, when I first started to write some Coq code, the Leibniz Equality was not really something I cared about and I tried to prove something like this:

```
Lemma open_gates_are_equal:
  forall (g g': Gate),
    open g = true
    -> open g' = true
    -> g = g'.
```

Basically, it means that if two doors are open, then they are equal. That made sense to me, because by definition of `Gate`, a locked door is closed, meaning an open door cannot be locked.

Here is an attempt to prove the `open_gates_are_equal` lemma.

```
Proof.
  induction g; induction g'.
  cbn.
  intros H0 H2.
  assert (lock0 = false).
  + case_eq lock0; [ intro H; apply lock_is_close0 in H;
                     rewrite H0 in H;
                     discriminate H
                   | reflexivity
                   ].
  + assert (lock1 = false).
    * case_eq lock1; [ intro H'; apply lock_is_close1 in H';
                       rewrite H2 in H';
                       discriminate H'
                     | reflexivity
                     ].
    * subst.
```

The next tactic I wanted to use is `reflexivity`, because I’d basically proven `open g = open g'` and `lock g = lock g'`, which met my definition of equality at that time.

Except Coq wouldn’t agree. See how it reacts:

```
Error: In environment
lock_is_close0, lock_is_close1 : false = true -> true = false
Unable to unify "{| open := true; lock := false; lock_is_close := lock_is_close1 |}" with
"{| open := true; lock := false; lock_is_close := lock_is_close0 |}".
```

It cannot unify the two records. More precisely, it cannot unify `lock_is_close1` and `lock_is_close0`. So we abort and try something else.

```
Abort.
```

This is a familiar pattern. Coq cannot guess what we have in mind. Giving a formal definition of “our equality” is fortunately straightforward.

```
Definition gate_eq
  (g g': Gate)
  : Prop :=
  open g = open g' /\ lock g = lock g'.
```

Because “equality” means something very specific in Coq, we won’t say “two gates are equal” anymore, but “two gates are equivalent”. That is, `gate_eq` is an equivalence relation. But “equivalence relation” is also something very specific. For instance, such a relation needs to be symmetric (`R x y -> R y x`), reflexive (`R x x`) and transitive (`R x y -> R y z -> R x z`).

We can prove that quite easily.

```
Lemma gate_eq_refl
  : forall (g: Gate),
    gate_eq g g.
Proof.
  split; reflexivity.
Qed.

Lemma gate_eq_sym
  : forall (g g': Gate),
    gate_eq g g'
    -> gate_eq g' g.
Proof.
  intros g g' [Hop Hlo].
  symmetry in Hop; symmetry in Hlo.
  split; assumption.
Qed.

Lemma gate_eq_trans
  : forall (g g' g'': Gate),
    gate_eq g g'
    -> gate_eq g' g''
    -> gate_eq g g''.
Proof.
  intros g g' g'' [Hop Hlo] [Hop' Hlo'].
  split.
  + transitivity (open g'); [exact Hop|exact Hop'].
  + transitivity (lock g'); [exact Hlo|exact Hlo'].
Qed.
```

Now, Coq has a nice way to declare an equivalence relation. We need to import the `Coq.Setoids.Setoid` module first, in order to have the `Add Parametric Relation` syntax available.

```
Require Import Coq.Setoids.Setoid.

Add Parametric Relation: (Gate) (gate_eq)
    reflexivity proved by (gate_eq_refl)
    symmetry proved by (gate_eq_sym)
    transitivity proved by (gate_eq_trans)
      as gate_eq_rel.
```

Afterwards, the `symmetry`, `reflexivity` and `transitivity` tactics also work with `gate_eq`, in addition to `eq`. We can try again to prove the `open_gates_are_the_same` lemma and it will work^{1}.

```
Lemma open_gates_are_the_same:
  forall (g g': Gate),
    open g = true
    -> open g' = true
    -> gate_eq g g'.
Proof.
  induction g; induction g'.
  cbn.
  intros H0 H2.
  assert (lock0 = false).
  + case_eq lock0; [ intro H; apply lock_is_close0 in H;
                     rewrite H0 in H;
                     discriminate H
                   | reflexivity
                   ].
  + assert (lock1 = false).
    * case_eq lock1; [ intro H'; apply lock_is_close1 in H';
                       rewrite H2 in H';
                       discriminate H'
                     | reflexivity
                     ].
    * subst.
      split; reflexivity.
Qed.
```

So here we are, with our ad-hoc definition of gate equivalence. We can use `symmetry`, `reflexivity` and `transitivity` along with `gate_eq` and it works fine, because we have told Coq `gate_eq` is indeed an equivalence relation for `Gate`.
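To illustrate, here is a small, hedged example (the goal is mine) of these tactics working with `gate_eq`:

```
Goal (forall (g g' g'': Gate),
         gate_eq g g' -> gate_eq g'' g' -> gate_eq g g'').
Proof.
  intros g g' g'' H1 H2.
  symmetry in H2. (* now H2 : gate_eq g' g'' *)
  transitivity g'; assumption.
Qed.
```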

Can we do better? Can we actually use `rewrite` to replace an occurrence of `g` by an occurrence of `g'` as long as we can prove `gate_eq g g'`? The answer is “yes”, but it will not come for free.

Before moving forward, just consider the following function:

```
Require Import Coq.Bool.Bool.

Program Definition try_open
  (g: Gate)
  : Gate :=
  if eqb (lock g) false
  then {| lock := false
        ; open := true
        |}
  else g.
```

It takes a gate as an argument and returns a new gate. If the former is not locked, the latter is open. Otherwise the argument is returned as is.

```
Lemma gate_eq_try_open_eq
  : forall (g g': Gate),
    gate_eq g g'
    -> gate_eq (try_open g) (try_open g').
Proof.
  intros g g' Heq.
Abort.
```

What we would have wanted to do is `rewrite Heq`. Indeed, `g` and `g'` “are the same” (`gate_eq g g'`), so, *of course*, the results of `try_open g` and `try_open g'` have to be the same. Except…

```
Error: Tactic failure: setoid rewrite failed: Unable to satisfy the following constraints:
UNDEFINED EVARS:
?X49==[g g' Heq |- relation Gate] (internal placeholder) {?r}
?X50==[g g' Heq (do_subrelation:=Morphisms.do_subrelation)
|- Morphisms.Proper (gate_eq ==> ?X49@{__:=g; __:=g'; __:=Heq}) try_open] (internal placeholder) {?p}
?X52==[g g' Heq |- relation Gate] (internal placeholder) {?r0}
?X53==[g g' Heq (do_subrelation:=Morphisms.do_subrelation)
|- Morphisms.Proper (?X49@{__:=g; __:=g'; __:=Heq} ==> ?X52@{__:=g; __:=g'; __:=Heq} ==> Basics.flip Basics.impl) eq]
(internal placeholder) {?p0}
?X54==[g g' Heq |- Morphisms.ProperProxy ?X52@{__:=g; __:=g'; __:=Heq} (try_open g')] (internal placeholder) {?p1}
.
```

What Coq is trying to tell us here —in a very poor manner, I’d say— is actually pretty simple. It cannot replace `g` by `g'` because it does not know whether two equivalent gates actually give the same result when passed as the argument of `try_open`. This is actually what we want to prove, so we cannot use `rewrite` just yet, because `rewrite` needs this result to do its magic. A chicken-and-egg problem.

In other words, we are making the same mistake as before: not telling Coq what it cannot guess by itself.

The `rewrite` tactic works out of the box with the Coq equality (`eq`, or most likely `=`) because of the Leibniz Equality:

`x` and `y` are equal iff every property on `A` which is true of `x` is also true of `y`.

This is a pretty strong property, and one that a lot of equivalence relations do not have. Want an example? Consider the relation `R` over `A` such that for all `x` and `y`, `R x y` holds true. Such a relation is reflexive, symmetric and transitive. Yet, there is very little chance that, given a function `f : A -> B` and `R'` an equivalence relation over `B`, we have `R x y -> R' (f x) (f y)`. Only if we have this property would we know that we could rewrite `f x` into `f y`, e.g. in `R' z (f x)`. Indeed, by transitivity of `R'`, we can deduce `R' z (f y)` from `R' z (f x)` and `R' (f x) (f y)`.

If `R x y -> R' (f x) (f y)`, then `f` is a morphism because it preserves an equivalence relation. In our previous case, `A` is `Gate`, `R` is `gate_eq`, `f` is `try_open` and therefore `B` is `Gate` and `R'` is `gate_eq`. To make Coq aware that `try_open` is a morphism, we can use the following syntax:

```
Add Parametric Morphism: (try_open)
    with signature (gate_eq) ==> (gate_eq)
      as open_morphism.
```

The `Add Parametric Morphism` command is read as follows:

```
Add Parametric Morphism: (f)
    with signature (R) ==> (R')
      as <name>.
```

This notation is actually more generic, because you can deal with functions that take more than one argument. Hence, given `g : A -> B -> C -> D` and `R` (respectively `R'`, `R''` and `R'''`) an equivalence relation over `A` (respectively `B`, `C` and `D`), we can prove `g` is a morphism as follows:

```
Add Parametric Morphism: (g)
    with signature (R) ==> (R') ==> (R'') ==> (R''')
      as <name>.
```

Back to our `try_open` morphism. Coq needs you to prove the following goal^{2}:

```
1 subgoal, subgoal 1 (ID 50)
============================
forall x y : Gate, gate_eq x y -> gate_eq (try_open x) (try_open y)
```

Here is a way to prove that:

```
Proof.
  intros g g' Heq.
  assert (gate_eq g g') as [Hop Hlo] by (exact Heq).
  unfold try_open.
  rewrite <- Hlo.
  destruct (bool_dec (lock g) false) as [Hlock|Hnlock]; subst.
  + rewrite Hlock.
    split; cbn; reflexivity.
  + apply not_false_is_true in Hnlock.
    rewrite Hnlock.
    cbn.
    exact Heq.
Qed.
```

Now, back to our `gate_eq_try_open_eq` lemma: we can now use `rewrite` and `reflexivity`.

```
Lemma gate_eq_try_open_eq
  : forall (g g': Gate),
    gate_eq g g'
    -> gate_eq (try_open g) (try_open g').
Proof.
  intros g g' Heq.
  rewrite Heq.
  reflexivity.
Qed.
```

We did it! We actually rewrote `g` as `g'`, even though we weren’t able to prove `g = g'`.

There is one last thing to cover: how to rewrite `R g` into `R g'` when we have proven `gate_eq g g'`? By declaring a new morphism, of course! Except the output relation is neither `gate_eq` nor `R`. What we really want, indeed, is the following: `gate_eq g g' -> R g -> R g'`. The Coq library provides `impl` (in `Coq.Program.Basics`) to stand for `R g -> R g'`, so we can do as follows:

```
Parameter (R: Gate -> Prop).

Add Parametric Morphism: (R)
    with signature (gate_eq) ==> (Basics.impl)
      as r_morphism.
Admitted.
```

Once this is done, we can easily rewrite `R g'` into `R g`:

```
Lemma r_gate
  : forall (g g': Gate),
    gate_eq g g' -> R g -> R g'.
Proof.
  intros g g' Heq Hr.
  rewrite <- Heq.
  exact Hr.
Qed.
```

**Do not do what follows, use equivalence relation and morphisms instead.**

Maybe you are curious and want to know how I managed to deal with equality in SpecCert without all this setoids-and-morphisms dark magic. It is pretty simple, actually: I defined an `Eq` typeclass, inspired by its Haskell counterpart:

```
Class Eq (A: Type) :=
  { eq: forall (a a': A), Prop
  ; eq_dec: forall (a a': A), {eq a a'}+{~eq a a'}
  ; eq_refl: forall (a: A), eq a a
  ; eq_sym: forall (a a': A), eq a a' -> eq a' a
  ; eq_trans: forall (a a' a'': A), eq a a' -> eq a' a'' -> eq a a''
  ; equality_eq: forall (a a': A), a = a' -> eq a a'
  ; eq_equality: forall (a a': A), eq a a' -> a = a'
  }.
```
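For the sake of illustration, here is a hedged sketch of what an instance may look like, for `bool` (this instance is mine, not from SpecCert; for `bool`, `eq_equality` is provable without any axiom, because we pick `eq` to coincide with `=`):

```
Require Import Coq.Bool.Bool.

Instance bool_Eq : Eq bool :=
  { eq := fun a a' => a = a'
  ; eq_dec := bool_dec (* decidability of boolean equality *)
  ; eq_refl := fun a => Logic.eq_refl
  ; eq_sym := fun a a' H => Logic.eq_sym H
  ; eq_trans := fun a a' a'' H H' => Logic.eq_trans H H'
  ; equality_eq := fun a a' H => H
  ; eq_equality := fun a a' H => H
  }.
```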

The `equality_eq` goal is pretty straightforward to prove (`intro H; rewrite H; apply eq_refl`), but the `eq_equality` one… Well. Did I mention I had to use the `Axiom` keyword?

For lkn, I am using pijul as my distributed version control system (DVCS). Because pijul remains at an early stage, I cannot say that using it “for a real project” does not come with some… inconvenience. Yes, pijul ate my data, as advertised. I can think of a few times when the tool didn’t want to record my changes properly. Also, the 0.4 release wiped out my history.

As recently as this weekend, I ran into a brand new bug in `pijul record`. Therefore, I asked myself: should I just quit using pijul for now? I mean, it *is* unstable, and even if that is completely normal and understandable, when a new bug strikes, it still hurts. Besides, even a bug-free pijul has its drawbacks. The main one is that you lose the Github ecosystem, including Travis and all the integrated services which come with it (e.g. Coveralls). I have set up a Github repo for lkn. In comparison with the Nest, there shall be no debate.

That being said, I **will continue to use pijul as the main DVCS for lkn**. I want to believe in the pijul promises and I want to be part of pijul becoming a real thing. I hope to contribute eventually, but in the meantime, actively using it is the best thing I can do for the project. I can report the bugs I find and give as many details as possible while doing so. Yes, lkn is a toy project, and it will not become “a thing” anytime soon, but it is still the kind of project pijul will be used for. **So, yeah. Keep hurting me, pijul.** I won’t say I like it, but I will deal with it so that maybe, one day, others can use you confidently.

That being said, I will try to keep a git mirror, just because lkn might be of some interest to some folks and not everybody has to be threatened by an unstable bird.

For the past few weeks, I have been playing around with `Program`. The recent publications[1] [2] of Robert C. Martin (author of *Clean Code*) have convinced me to write something about it here. I have been trying hard enough to *use* `Program` to build myself an “intuition”, meaning I am not randomly typing tactics until `C-c C-n` works. This text is an attempt to restore what I’ve learned.

Note: the code works with Coq 8.6, but not with Coq 8.5 (at least, the `extract` definition does not). The complete gist can be found here (8.5) or here (8.6). I want to apologize if this text sometimes lacks explanation. I found it very difficult to write, probably because I have struggled a lot with this matter and tried a lot of different strategies. I really wanted to finish it, even if it is not perfect, because I think it can be a good starting point for a conversation.

If I had to explain `Program`, I would say `Program` is the heir of the `refine` tactic. It gives you a convenient way to embed proofs within functional programs, proofs that are supposed to fade away during code extraction. But what do I mean when I say “embed proofs” within functional programs? I found two ways to do it.

First, we can define a record with one or more fields of type `Prop`. By doing so, we can constrain the values of other fields. Put another way, we can specify an invariant for our type. For instance, in SpecCert, I have defined the memory controller’s SMRAMC register as follows:

```
(* see: specs/intel/5th-gen-core-family-datasheet-vol-2.pdf *)
Record SmramcRegister := {
  d_open: bool;
  d_lock: bool;
  (* the following proof is justified by:
   * “When D_LCK=1, then D_OPEN is reset to 0 and all
   *  writeable fields in this register are locked (become RO).”
   *)
  lock_is_close: d_lock = true -> d_open = false;
}.
```

So `lock_is_close` is an invariant I know each instance of `SmramcRegister` will have to comply with, because every time I construct a new instance, I will have to prove `lock_is_close` holds true. For instance:

```
Definition lock
  (reg: SmramcRegister)
  : SmramcRegister.
  refine ({| d_open := false
           ; d_lock := true
           |}).
  (* goal: true = true -> false = false *)
  trivial.
Defined.
```

Using tactics to define functions is not straightforward. In particular, as gasche noticed on reddit, it is hard to know what the extracted code will look like. Also, you end up mixing definition and proposition proofs.

From that perspective, `Program` helps. Indeed, the `lock` function can also be defined as follows:

```
Require Import Program.

Program Definition lock'
  (reg: SmramcRegister)
  : SmramcRegister :=
  {| d_open := false
   ; d_lock := true
   |}.
```

Another way to “embed proofs in a program” is by specifying pre- and post-conditions for its component. In Coq, this is done using Sigma-types. I have already written a while ago about Sigma-types.

On the one hand, a precondition is a proposition a function input has to satisfy in order for the function to be applied. For instance, a precondition for `head : list A -> A`, the function that returns the first element of a list `l`, requires `l` to contain at least one element. We can write that using a Sigma-type.

```
Definition head
  {A: Type}
  (l: list A | l <> nil)
  : A.
```

On the other hand, a post-condition is a proposition a function output has to satisfy in order for the function to be correctly implemented. In this way, `head` should in fact return the first element of `l` and not something else.

We can write such a specification without Program.

```
Definition head
  {A: Type}
  (l: list A | l <> nil)
  : { a: A | exists l', cons a l' = (proj1_sig l) }.
```

Because `{ l: list A | l <> nil }` is not the same as `list A`, we cannot just compare `l` with `cons a l'`. One benefit of `Program` is to deal with this using an implicit coercion, so we can write:

```
Program Definition head
  {A: Type}
  (l: list A | l <> nil)
  : { a: A | exists l', cons a l' = l }.
```

Note that, for its type inference, `Program` uses the left operand of `=`, so you cannot write `l = cons a l'`.

Now that `head` has been specified, we have to implement it. Without `Program`, one can use `refine`, whereas with `Program` we can just write regular ML-like code:

```
Program Definition head
  {A: Type}
  (l: list A | l <> nil)
  : { a: A | exists l', cons a l' = l } :=
  match l with
  | cons a l' => a
  | nil => !
  end.
Next Obligation.
  exists l'.
  reflexivity.
Qed.
```

I want to highlight several things here:

- We return `a` (of type `A`) rather than a Sigma-type; `Program` is smart enough to wrap it. To do so, it tries to prove the post-condition and, because it fails, we have to do it ourselves (this is the Obligation we solve after the function definition).
- The `nil` case is absurd regarding the precondition; we tell Coq that using the bang (`!`) symbol.

We can have a look at the extracted code:

```
(** val head : 'a1 list -> 'a1 **)
let head = function
| Nil -> assert false (* absurd case *)
| Cons (a, _) -> a
```

The implementation is pretty straightforward, but the pre- and post-conditions have faded away. Also, the absurd case is discarded using an assertion. This means one thing: `head` should not be used directly from the OCaml world. “Interface” functions have to be total.
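One way to provide such a total interface is to add a wrapper that handles the empty case itself, so only the wrapper is exposed after extraction. Here is a hedged sketch (`head_opt` is mine, and the obligation script may vary between Coq versions):

```
Program Definition head_opt
  {A: Type}
  (l: list A)
  : option A :=
  match l with
  | nil => None
  | cons _ _ => Some (head l)
  end.
Next Obligation.
  (* in this branch, l is a cons, so it cannot be nil *)
  discriminate.
Qed.
```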

I have challenged myself to build a strongly specified library. My goal was to define a type `vector : Type -> nat -> Type` such that `vector A n` is a list of `n` instances of `A`.

```
Inductive vector (A: Type) : nat -> Type :=
| vcons {n:nat} : A -> vector A n -> vector A (S n)
| vnil : vector A 0.
Arguments vcons [A n] _ _.
Arguments vnil [A].
```

I had three functions in mind: `take`, `drop` and `extract`. I learned a few lessons. The first one is maybe the most important: **do not use Sigma-types, Program and dependent types together**. From my point of view, Coq is not yet ready for this. Maybe it is possible to make those three work together, but I have to admit I did not find out how. As a consequence, my preconditions are defined as extra arguments.

To be able to specify the post conditions of my three functions and some others, I first defined `nth`:

```
Program Fixpoint nth
    {A: Type}
    {n: nat}
    (v: vector A n)
    (i: nat)
    {struct v}
  : option A :=
  match v with
  | vcons x r => match i with
                 | 0 => Some x
                 | S i' => nth r i'
                 end
  | vnil => None
  end.
```

With `nth`, it is possible to give a very precise definition of `take`:

```
Program Fixpoint take
    {A: Type}
    {n: nat}
    (v: vector A n)
    (e: nat)
    (Hbound: e <= n)
    {struct v}
  : { v': vector A e | forall i : nat, i < e -> nth v' i = nth v i } :=
  match e with
  | 0 => vnil
  | S e' => match v with
            | vcons x r => vcons x (take r e' _)
            | vnil => !
            end
  end.
Next Obligation.
  apply (Nat.nlt_0_r i) in H.
  destruct H.
Qed.
Next Obligation.
  apply (le_S_n e' wildcard').
  exact Hbound.
Qed.
Next Obligation.
  induction i.
  + reflexivity.
  + apply e0.
    apply lt_S_n in H.
    exact H.
Defined.
Next Obligation.
  apply Nat.nle_succ_0 in Hbound.
  exact Hbound.
Qed.
```

I’ve learned another lesson here. I wanted to define the post condition as follows: `{ v': vector A e | forall (i : nat | i < e), nth v' i = nth v i }`. I quickly realized it was a very bad idea: the goals and hypotheses become very hard to read and to use.

```
(* Extraction Implicit take [A n].
Extraction take *)
(** val take : 'a1 vector -> nat -> 'a1 vector **)
let rec take v = function
| O -> Vnil
| S e' ->
(match v with
| Vcons (_, x, r) -> Vcons (e', x, (take r e'))
| Vnil -> assert false (* absurd case *))
```

Then I could tackle `drop` in a very similar manner:

```
Program Fixpoint drop
    {A: Type}
    {n: nat}
    (v: vector A n)
    (b: nat)
    (Hbound: b <= n)
  : { v': vector A (n - b) | forall i, i < n - b -> nth v' i = nth v (b + i) } :=
  match b with
  | 0 => v
  | S n => match v with
           | vcons _ r => drop r n _
           | vnil => !
           end
  end.
Next Obligation.
  rewrite <- minus_n_O.
  reflexivity.
Qed.
Next Obligation.
  induction n;
    rewrite <- eq_rect_eq;
    reflexivity.
Qed.
Next Obligation.
  apply le_S_n in Hbound.
  exact Hbound.
Qed.
Next Obligation.
  apply Nat.nle_succ_0 in Hbound.
  destruct Hbound.
Defined.
```

The proofs are easy to write, and the extracted code is exactly what one might want it to be:

```
(* Extraction Implicit drop [A n].
Extraction drop *)
(** val drop : 'a1 vector -> nat -> 'a1 vector **)
let rec drop v = function
| O -> v
| S n ->
(match v with
| Vcons (_, _, r) -> drop r n
| Vnil -> assert false (* absurd case *))
```

But `Program` really shone when it came to implementing `extract`: I just had to combine `take` and `drop`.

```
Program Definition extract
    {A: Type}
    {n: nat}
    (v: vector A n)
    (b e: nat)
    (Hbound: b <= e <= n)
  : { v': vector A (e - b) | forall i, i < (e - b) -> nth v' i = nth v (b + i) } :=
  take (drop v b _) (e - b) _.
Next Obligation.
  omega.
Qed.
Next Obligation.
  omega.
Qed.
Next Obligation.
  remember (drop v b (extract_obligation_1 n b e (conj l l0))) as vd.
  remember (take (`vd) (e - b) (extract_obligation_2 n b e (conj l l0))) as vt.
  induction vd as [v' Hv'].
  rewrite <- (Hv' i).
  simpl in vt.
  induction vt as [v'' Hv''].
  simpl.
  apply Hv''.
  exact H.
  omega.
Qed.
```

To be honest, I think the last proof can be better, but anyway, it is easy to convince ourselves from the `extract` definition that our implementation is bug-free.

I was pretty happy, so I tried some more. Each time, using `nth`, I managed to write a precise post condition and to prove that it holds. For instance, here is `map`, which applies a function `f` to each element of a vector `v`:

```
Program Fixpoint map
    {A B: Type}
    {n: nat}
    (v: vector A n)
    (f: A -> B)
  : { v': vector B n | forall i, nth v' i = option_map f (nth v i) } :=
  match v with
  | vnil => vnil
  | vcons a v => vcons (f a) (map v f)
  end.
Next Obligation.
  induction i.
  + reflexivity.
  + apply e.
Defined.
```

Once again, the extracted code is what I wanted it to be once you tell Coq that the vector size is implicit.
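The directives are presumably the same as the ones shown in comments for `take` and `drop`; I am guessing the exact argument list here:

```
(* Hypothetical extraction directives, mirroring those used for take. *)
Extraction Implicit map [A B n].
Extraction map.
```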

I also managed to specify and write `append`:

```
Program Fixpoint append
    {A: Type}
    {n m: nat}
    (v: vector A n)
    (v': vector A m)
  : { v'' : vector A (n + m) | forall i, (i < n /\ nth v'' i = nth v i)
                                      \/ (n <= i /\ nth v'' i = nth v' (i - n)) } :=
  match v with
  | vnil => v'
  | vcons a v => vcons a (append v v')
  end.
Next Obligation.
  destruct (lt_dec i 0).
  * omega.
  * right.
    split; [ omega |].
    rewrite Nat.sub_0_r.
    reflexivity.
Qed.
Next Obligation.
  destruct (lt_dec i (S wildcard')).
  left.
  split; [ exact l |].
  induction i.
  + reflexivity.
  + assert (i < wildcard' /\ nth x i = nth v i
            \/ wildcard' <= i /\ nth x i = nth v' (i - wildcard')) by (apply (o i)).
    destruct H as [[Hi Hn]|[Hi Hn]].
    * exact Hn.
    * omega.
  + right.
    split.
    omega.
    induction i; [ omega |].
    assert (i < wildcard' /\ nth x i = nth v i
            \/ wildcard' <= i /\ nth x i = nth v' (i - wildcard')) by (apply (o i)).
    destruct H as [[Hi Hn]|[Hi Hn]].
    * omega.
    * rewrite Hn.
      rewrite Nat.sub_succ.
      reflexivity.
Defined.
```

This time, the proofs are bigger, but that is not the main issue. The main issue is that I was not happy with the extracted code, as it was not possible to make the size of the second vector implicit. This is quite normal, because `m` is used to construct the resulting vector. It is hidden by Coq, but `vcons` takes the size of the inner vector, so when we append two vectors, we need to add `m` to each `vcons` of `v`. I think it is possible to find our way out. For instance, we could probably define `length` such that:

```
Program Definition length
    {A: Type}
    {n: nat}
    (v: vector A n)
  : { n': nat | n = n' } :=
  match v with
  | @vcons _ m _ _ => S m
  | vnil => 0
  end.
```

Then, we should never use the vector size directly, but rather `length v`.

Finally, I tried to implement `map2`, which takes a vector of `A`, a vector of `B` (both of the same size) and a function `f: A -> B -> C`, and returns a vector of `C`.

I struggled a lot but could not find a way to use `Program` to implement it. I had to use tactics to achieve my ends, so I am not very happy about it. However, using a proper configuration, the extracted code looks quite good.

```
Definition option_app
    {A B: Type}
    (opf: option (A -> B))
    (opx: option A)
  : option B :=
  match opf, opx with
  | Some f, Some x => Some (f x)
  | _, _ => None
  end.

Program Fixpoint map2
    {A B C: Type}
    {n: nat}
    (v: vector A n)
    (v': vector B n)
    (f: A -> B -> C)
    {struct v}
  : { v'': vector C n | forall i,
        nth v'' i = option_app (option_map f (nth v i)) (nth v' i) } := _.
Next Obligation.
  dependent induction v; dependent induction v'.
  + remember (IHv v' f) as v''.
    inversion v''.
    refine (exist _ (vcons (f a a0) x) _).
    intros i.
    induction i.
    * cbv.
      reflexivity.
    * simpl.
      apply (H i).
  + refine (exist _ vnil _).
    cbv.
    reflexivity.
Qed.
```

And the extracted code:

```
(* Extraction Inline map2_obligation_1 solution_left solution_right simplification_heq.
Extraction map2. *)
(** val map2 : nat -> 'a1 vector -> 'a2 vector -> ('a1 -> 'a2 -> 'a3) -> 'a3 vector **)
let rec map2 n v v' f =
let rec f0 _ v0 v'0 f1 =
match v0 with
| Vcons (n0, y, v1) ->
(match v'0 with
| Vcons (_, y0, v2) -> let v'' = f0 n0 v1 v2 f1 in Vcons (n0, (f1 y y0), v'')
| Vnil -> assert false (* absurd case *))
| Vnil ->
(match v'0 with
| Vcons (_, _, _) -> assert false (* absurd case *)
| Vnil -> Vnil)
in f0 n v v' f
```

This post mostly gives the “happy ends” for each function. I think I tried too hard for what I got in return, and therefore I am convinced `Program` is not ready yet (at least for dependent types; I cannot tell for the rest). For instance, I found at least one bug in the `Program` logic (I still have to report it). Have a look at the following code:

```
Program Fixpoint map2
    {A B C: Type}
    {n: nat}
    (v: vector A n)
    (v': vector B n)
    (f: A -> B -> C)
    {struct v}
  : vector C n :=
  match v with
  | _ => vnil
  end.
```

It gives the following error:

```
Error: Illegal application:
The term "@eq" of type "forall A : Type, A -> A -> Prop"
cannot be applied to the terms
"nat" : "Set"
"0" : "nat"
"wildcard'" : "vector A n'"
The 3rd term has type "vector A n'" which should be coercible to "nat".
```

That being said, it is very promising and I look forward to seeing what it will become in the future. Coming back to the blog posts of Uncle Bob, I am sure this is what he would call “a *very* Dark Path”. I have never read deeply enough into a Coq program to see whether it uses `Program` or a similar approach. I know the only substantial Coq program I ever wrote is SpecCert, and it is not a program but a model and its proofs. But I am eager to learn more and to find an opportunity to make this leap.

Over time, hardware designs have constantly grown in complexity and modern platforms involve multiple interconnected hardware components. From a security point of view, this brings two issues. Firstly, during the last decade, several vulnerability disclosures have proven that trust in hardware can be misplaced. Secondly, software developers have to struggle with hundreds of pages of specifications divided into dozens of documents.

SpecCert is an attempt to tackle these two challenges. SpecCert is a framework to specify what we have called “Hardware-based Security Enforcement (HSE) mechanisms”. As part of such a mechanism, a trusted software component configures the hardware platform so that untrusted software components cannot violate a security property. For instance, in order to enforce application isolation, an operating system (the trusted software component in this scenario) partly relies on the memory paging mechanisms and user/supervisor execution contexts of modern CPUs.

By formally specifying a HSE mechanism, we gain two benefits. Firstly, we can verify that our definition brings the desired security property *according to our model*. Secondly, we provide an unambiguous specification to any software developer interested in actually implementing the HSE mechanism.

SpecCert is still a work in progress. I have implemented and published a first proof of concept on GitHub. The project is written in Gallina using the Coq proof assistant. If you are interested in the subject, feel free to have a look. You can also read our paper for a more in-depth description of the formalism.

]]>In the ogmarkup library, the output generation is driven by a set of templates and functions. In the earlier versions of ogmarkup, the configuration was the `GenConf` record (module `Text.Ogmarkup.Private.Config`). However, a few days ago, we merged a Pull Request in `dev` that changes things a bit.

Commit after commit, the definition of the `GenConf` record grew in complexity until it had more than ten fields. That made the declaration of a `GenConf` instance a little tricky. See for yourself:

```
htmlConf :: GenConf Html
htmlConf =
  GenConf englishTypo
          (\doc -> [shamlet|<article>#{doc}|])
          id
          asideTemp
          (\paragraph -> [shamlet|<p>#{paragraph}|])
          id
          (\a dialogue -> [shamlet|$newline never
                                   <span .dialogue .#{a}>
                                     #{dialogue}|])
          (\a thought -> [shamlet|$newline never
                                  <span .thought .by-#{a}>
                                    #{thought}|])
          (\reply -> [shamlet|$newline never
                              <span .reply>
                                #{reply}|])
          (preEscapedToHtml ("</p><p>" :: Text))
          (\text -> [shamlet|$newline never
                             <em>#{text}|])
          (\text -> [shamlet|$newline never
                             <strong>#{text}|])
          auth
          htmlPrintSpace
```

It was a real mess: hard to write, hard to read and, as a consequence, very error-prone. In these conditions, GHC is not of great assistance to the developer. Its error messages basically consist of a “too few arguments” complaint.

A while ago, I read an article on typeclass-based libraries, and we made the switch in commit `888a86dc`. I do not regret our choice, for several reasons. The `GenConf` typeclass definition brings several default implementations. Another benefit arises when the time comes to declare a new instance: you can override several configuration functions without worrying about the declaration order. As a side effect, the function names are visible, which improves the declaration’s readability.

We kept the record field names, so the ogmarkup library update was pretty straightforward. I just updated the function signatures.

Yet, I found one major drawback to this change: `GenConf` is a multi-parameter type class. You have to enable at least two GHC extensions:

- `FlexibleInstances`
- `MultiParamTypeClasses`

Without them, you won’t be able to define a new `GenConf` instance. I am not very comfortable with the idea of obliging our potential future users to enable specific GHC extensions. At least, we will have to make it very clear in the documentation^{1}.

All in all, I think this update was a good idea, especially because it would not be surprising to see the `GenConf` definition change again in the future and become even more complex.

Finally, let’s have a look at the typeclass definition.
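Its shape is roughly the following; this is a simplified sketch of mine, with illustrative method names rather than the exact ogmarkup API:

```
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE FlexibleInstances #-}

-- A simplified, hypothetical sketch of a typeclass-based configuration.
-- `c -> o` is the functional dependency discussed below.
class GenConf c o | c -> o where
  emphTemplate :: c -> o -> o
  emphTemplate _ = id            -- default implementation

  strongTemplate :: c -> o -> o
  strongTemplate _ = id          -- default implementation

-- An instance only overrides what it needs, in any order.
data TextConf = TextConf

instance GenConf TextConf String where
  strongTemplate _ txt = "**" ++ txt ++ "**"
```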

I had already seen such strange declarations before, but I didn’t know what they meant. `c -> o` means that if you know which type `c` is, you can deduce `o`. In other words, there is only one `o` for a given `c`. This is called a functional dependency (see the `FunctionalDependencies` GHC extension). The main thing to understand is that, without this dependency, for a given `c`, GHC cannot guess which `o` to choose.

It is still not the case, but it *will* be.↩

- Defining the `list` type
- Is this list empty?
- Defining some utility functions
- Let’s prove some theorems

I started to play with Coq, the interactive theorem prover developed by Inria, a few weeks ago. It is a very powerful tool, yet hard to master. Fortunately, there are some very good readings if you want to learn^{1}. This article is not one of them.

In this article, we will see how to implement strongly-specified list manipulation functions in Coq. Strong specifications are used to ensure properties of a function’s arguments and return value. They make the Coq type system very expressive. For instance, it is possible to specify in the type of the function `pop` that the return value is the list passed as argument with its first element removed!

### Defining the `list` type

Thanks to Coq polymorphism and inductive types, it is very simple to define what a list is. Note that to deal with polymorphism, we enclose all our Coq definitions in a `Section` block.

```
Section lists.
  Variable A:Set.

  Inductive list:Set :=
  | nil: list
  | cons: A → list → list.
End lists.
```

We can take a look at the OCaml extracted code.
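It is presumably the straightforward translation that also appears in the fuller extraction later in this article:

```
type 'a list =
| Nil
| Cons of 'a * 'a list
```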

### Is this list empty?

It’s the first question to deal with when manipulating lists. Some functions require their arguments not to be empty. It’s the case for the `pop` function, for instance: it is not possible to remove the first element of a list that does not have any elements in the first place.

When one wants to answer such a question as “Is this list empty?”, one has to keep in mind that there are two ways to do it: with a predicate or with a boolean function. Indeed, `Prop` and `bool` are two different worlds that do not mix easily. One solution is to write two definitions and to prove their equivalence, that is, `∀ args, predicate args ↔ bool_function args = true`.
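To make this first approach concrete, here is a hypothetical example over Coq’s standard lists (the names are mine, not from the original post):

```
(* A predicate and a boolean function answering the same question,
   related by an equivalence lemma.  These names are illustrative. *)
Definition is_empty {A} (l : list A) : Prop := l = nil.

Definition is_empty_b {A} (l : list A) : bool :=
  match l with
  | nil => true
  | cons _ _ => false
  end.

Lemma is_empty_iff {A} (l : list A) :
  is_empty l <-> is_empty_b l = true.
Proof.
  destruct l; split; intro H.
  - reflexivity.        (* nil: the boolean is true *)
  - reflexivity.        (* nil: the predicate holds *)
  - discriminate H.     (* cons: H : cons _ _ = nil is absurd *)
  - discriminate H.     (* cons: H : false = true is absurd *)
Qed.
```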

Another solution is to use the `Sumbool` type as a middleman. The scheme is the following:

- Defining `predicate: args → Prop`
- Defining `predicate_dec: args → { predicate args } + { ~predicate args }`
- Defining `predicate_b`:

```
Definition predicate_b (args) :=
  if predicate_dec args then true else false.
```

#### The `empty` predicate

A list is empty if it is `nil`. It’s as simple as that!

```
Definition empty (l:list):Prop := l = nil.
```

#### Deciding `empty`

A decidable version of `empty` (`empty_dec`) is a function that takes a list and returns a `sumbool (empty l) (¬empty l)`. Such a function can be used in an `if ... then ... else ...` construction.

```
Definition empty_dec (l:list): {empty l} + {¬empty l}.
  refine (
    match l with
    | nil => left _ _
    | _ => right _ _
    end
  ); unfold empty; trivial.
  unfold not; intro H; discriminate H.
Defined.
```

The definition of `empty_dec`^{2} uses the `refine` tactic. It’s a powerful tactic that helps when it comes to dealing with specified types, such as `Sumbool`.

#### `empty_b`

With `empty_dec`, we can define `empty_b`.

```
Definition empty_b (l:list):bool :=
if empty_dec l then true else false.
```

Let’s try to extract the `empty_b` code:

```
type bool =
| True
| False
type sumbool =
| Left
| Right
type 'a list =
| Nil
| Cons of 'a * 'a list
(** val empty_dec : 'a1 list -> sumbool **)
let empty_dec = function
| Nil -> Left
| Cons (a, l0) -> Right
(** val empty_b : 'a1 list -> bool **)
let empty_b l =
match empty_dec l with
| Left -> True
| Right -> False
```

In addition to `'a list`, Coq has created the `sumbool` and `bool` types, and `empty_b` is basically a translation from the first to the second. We could have stopped with `empty_dec`, but `Left` and `Right` are less readable than `True` and `False`.

### Defining some utility functions

#### `pop`

There are several ways to write a function that removes the first element of a list. One is to return `nil` if the given list was already empty:

```
Definition pop (l:list) :=
  match l with
  | nil => nil
  | cons a l => l
  end.
```

But it’s not really satisfying: a `pop` call over an empty list should not be possible at all. This can be achieved by adding an argument to `pop`: a proof that the list is not empty.

```
Definition pop (l:list)(h:¬empty l): list.
  induction l.
  + unfold not, empty in h. intuition. (* case (nil), absurd *)
  + exact l. (* case (cons a l) *)
Defined.
```

It’s better, and yet it can still be improved. Indeed, according to its type, `pop` returns “some list”. As a matter of fact, `pop` returns “the same list without its first element”. It is possible to write such a precise definition thanks to sigma-types, defined as:

```
Inductive sig (A:Type) (P:A -> Prop) : Type :=
exist : forall x:A, P x -> sig P.
```

Rather than `sig A P`, sigma-types can be written using the notation `{ a | P }`. They express subsets. Thus, it is possible to write a strongly-specified version of `pop` that way:

```
Definition pop (l:list)(h:¬empty l): {l' | exists a, l = cons a l'}.
  induction l.
  + unfold not, empty in h; intuition.
  + refine (exist _ l _).
    exists a.
    trivial.
Defined.
```

`{l' | exists a, l = cons a l'}` expresses the condition “the same list without the first element”. The tactic `refine` is used to construct the result.

Let’s have a look at the extracted code:

```
(** val pop : 'a1 list -> 'a1 list **)
let pop = function
| Nil -> assert false (* absurd case *)
| Cons (a, l0) -> l0
```

If one tries to call `(pop nil)`, the `assert` ensures the call fails. The extra information given by the sigma-type has been stripped away. It can be confusing, but the implementation still respects the related property.
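On the Coq side, calling `pop` requires supplying the non-emptiness proof explicitly. Here is a usage sketch of mine, inside the same `Section` (the names are not from the original post):

```
(* Usage sketch: pop demands a proof that its argument is not nil. *)
Lemma cons_not_empty (a:A) (l:list): ¬empty (cons a l).
Proof.
  unfold not, empty.
  intro H.
  discriminate H.
Qed.

Definition pop_cons (a:A) (l:list) := pop (cons a l) (cons_not_empty a l).
```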

#### `push`

It is possible to specify `push` the same way `pop` has been. The only difference is that `push` accepts lists with no restriction at all. Thus, its definition is simpler, but it still uses `refine` to deal with the `exist` constructor.

```
Definition push (l:list) (a:A): {l' | l' = cons a l}.
  refine (
    exist _ (cons a l) _
  ); reflexivity.
Defined.
```

And the extracted code:

```
let push l a =
Cons (a, l)
```

#### `head`

As with `pop` and `push`, it is possible to add extra information in the type of `head`. It is no surprise that its definition is very close to `pop`’s.

```
Definition head (l:list)(h:¬empty l): { a | ∃ r, l = cons a r }.
  induction l.
  + unfold not, empty in h; intuition.
  + refine (exist _ a _).
    exists l.
    trivial.
Defined.
```

And the extracted code:
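Presumably very close to the extraction of `pop`, returning the head instead of the tail:

```
(** val head : 'a1 list -> 'a1 **)
let head = function
| Nil -> assert false (* absurd case *)
| Cons (a, l0) -> a
```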

### Let’s prove some theorems

`head`, `push` and `pop` are well-specified: their types alone give enough information about what the functions are doing. However, we might want to prove some generic list properties, just to be sure.

#### `push` and `head`

For instance, given a list `l` and an element `a`, the result of `head (push l a)` should always be `a`.

```
Theorem push_head_equal: ∀ l a,
  match push l a with
  | exist l' h => ∀ h',
      match head l' h' with
      | exist a' h'' => a = a'
      end
  end.
Proof.
  intros; unfold push; intros; unfold head.
  reflexivity.
Qed.
```

The use of sigma-types and proofs as arguments makes the theorem a little hard to read, but the proof stays simple.

#### `push` and `pop`

Our implementation of `push` and `pop` must guarantee that their composition is the identity, that is, informally, `l = pop (push l a)`.

```
Theorem push_pop_equal: ∀ l a,
  match push l a with
  | exist l' h => ∀ h',
      match pop l' h' with
      | exist l'' h'' => l = l''
      end
  end.
Proof.
  intros.
  unfold push.
  intros.
  unfold pop.
  reflexivity.
Qed.
```

And if you want, here is a complete gist of the code.