This is a continuation of my last post on how to write a tree-sitter grammar in an afternoon. Building on the grammar we wrote, we’re now going to write a linter for Imp, and it’s even easier! The final result clocks in at under 60 SLOC and can be found here.
Recall that tree-sitter is an incremental parser generator. That is, you give it a description of the grammar of your programming language and it spits out a parser in C that creates a syntax tree based on the rules you specified. What’s notable about tree-sitter is that it is resilient in the presence of syntax errors, and because it is incremental, the parser is fast enough to reparse the file on every keystroke, changing only the parts of the tree that need to change.
Specifically, we’ll write a program that suggests simplifications of assignments and some conditional constructs. First I’ll describe the tree-sitter query language with some examples, then show how a little bit of JavaScript lets us manipulate the results programmatically. You can get the code in this post here. Ready? Set? Go!
Note: There are many language bindings that let you work with tree-sitter parsers through the respective language’s FFI. I’ve used only two to date, the Rust and the JavaScript bindings, and from my brief experience, the JavaScript bindings are much more usable. With the Rust bindings, the lifetime and mutability restrictions make abstraction more difficult, especially for a non-critical program such as a linter.
Tree-sitter has a built-in query language that lets you write queries to match parts of the AST of interest. Think of it as pattern matching, but without having to handle every case of a syntactic construct.
Tree-sitter queries are written as a series of one or more patterns in an S-expression syntax. We first match on a node’s type (corresponding to the name of a node in the grammar file), then possibly the types of the node’s children as well. After each pattern, write @m (or any other valid capture name) so you can refer to the matched node later.
Our running example will be some Python code.
def factorial(n):
    return 1 if n == 0 else (n * (1 * 1)) * factorial(n - 1)
Let’s match all expressions involving binary operators.
(binary_operator) @m
def factorial(n):
    return 1 if n == 0 else (n * (1 * 1)) * factorial(n - 1)
Tree-sitter lets us specify what the children should be, so we can match all binary expressions involving at least one integer:
(binary_operator (integer)) @m
def factorial(n):
    return 1 if n == 0 else (n * (1 * 1)) * factorial(n - 1)
Or match all binary expressions involving two integers:
(binary_operator (integer) (integer)) @m
def factorial(n):
    return 1 if n == 0 else (n * (1 * 1)) * factorial(n - 1)
Try playing around with queries in the playground.
You can also assign capture names to nodes that you match, letting you refer to them later by name. In the running example, suppose we wanted to capture the left and right integer arguments to a binary operator, labeling them a and b respectively. Then our query would look like this, and tree-sitter would highlight the matches accordingly.
(binary_operator (integer) @a (integer) @b) @m
def factorial(n):
    return 1 if n == 0 else (n * (1 * 1)) * factorial(n - 1)
The tree-sitter query language also lets you specify additional constraints on matches. For instance, we can match binary expressions where the left-hand side is n, which now gets highlighted in blue. The underscore _ lets us match any node.
((binary_operator _ @a _ @b) (#eq? @a "n")) @m
def factorial(n):
    return 1 if n == 0 else (n * (1 * 1)) * factorial(n - 1)
Now that we have the basic parts out of the way, we can get to writing a linter! Instead of Python, we’ll continue working with Imp. Note that it’s easy to adapt this linter for any language with a tree-sitter grammar. Imp also has much simpler semantics than Python, so we can focus on “obviously correct” lints rather than worry about suggestions changing program behavior.
We can start with a basic package.json:
{
  "name": "imp-lint",
  "type": "module",
  "version": "1.0.0",
  "description": "Linter for Imp",
  "main": "index.js",
  "scripts": {
    "lint": "node index.js"
  },
  "author": "Ben Siraphob",
  "license": "MIT",
  "devDependencies": {
    "tree-sitter": "^0.20.0",
    "tree-sitter-imp": "github:siraben/tree-sitter-imp"
  }
}
Then run npm install to install the dependencies. We’ll write our code in index.js; then we can call our linter by running npm run lint <file>.
Nothing fancy here: just the Parser class from the tree-sitter library, our language definition Imp (discussed in my last blog post), and a library to read from the filesystem.
import Parser from "tree-sitter";
import Imp from "tree-sitter-imp";
import { readFileSync } from "fs";
const { Query } = Parser;
const parser = new Parser();
parser.setLanguage(Imp);
const args = process.argv.slice(2);
if (args.length != 1) {
console.error("Usage: npm run lint <file to lint>");
process.exit(1);
}
// Load the file passed as an argument
const sourceCode = readFileSync(args[0], "utf8");
Having already created the parser and set its language to Imp, we run it on our source code to get out a syntax tree.
const tree = parser.parse(sourceCode);
If we have the following file:
x := x + 1
The corresponding output from console.log(tree.rootNode.toString())
would be:
(program (stmt (asgn name: (id) (plus (id) (num)))))
That was some preliminary work. Now let’s see what queries would be interesting to run over more realistic Imp programs. Say we have:
z := x;
y := 1;
y := y;
while ~(z = 0) do
  y := y * z;
  z := z - 1;
  x := x;
end;
x := x;
if x = y then x := 1 else x := 1 end
There are some redundancies for sure! We can tell the user about assignments such as x := x, which are no-ops, and that last if statement certainly looks redundant since both branches are the same statement.
It’s simple to create a Query object in JavaScript and run it over the root node.
const redundantQuery = new Query(
Imp,
"((asgn name: (id) @left _ @right) (#eq? @left @right)) @redundantAsgn"
);
console.log(redundantQuery.captures(tree.rootNode));
This is what we get:
[
{
name: 'redundantAsgn',
node: AsgnNode {
type: asgn,
startPosition: {row: 2, column: 0},
endPosition: {row: 2, column: 6},
childCount: 3,
}
},
{
name: 'left',
node: IdNode {
type: id,
startPosition: {row: 2, column: 0},
endPosition: {row: 2, column: 1},
childCount: 0,
}
},
// etc...
]
OK, that’s a lot of detail! Notice that every capture name is reported along with the type of node that matched and the start and end of the match. Some tools might want all this information, but for us it’s enough to report only the start of the match and the text the match corresponds to:
// Given a raw list of captures, extract the row, column and text.
function formatCaptures(tree, captures) {
return captures.map((c) => {
const node = c.node;
delete c.node;
c.text = tree.getText(node);
c.row = node.startPosition.row;
c.column = node.startPosition.column;
return c;
});
}
Now we get something more concise:
[
{ name: 'redundantAsgn', text: 'y := y', row: 2, column: 0 },
{ name: 'left', text: 'y', row: 2, column: 0 },
{ name: 'right', text: 'y', row: 2, column: 5 },
{ name: 'redundantAsgn', text: 'x := x', row: 6, column: 2 },
{ name: 'left', text: 'x', row: 6, column: 2 },
{ name: 'right', text: 'x', row: 6, column: 7 },
{ name: 'redundantAsgn', text: 'x := x', row: 8, column: 0 },
{ name: 'left', text: 'x', row: 8, column: 0 },
{ name: 'right', text: 'x', row: 8, column: 5 }
]
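If you want to sanity-check formatCaptures without wiring up a real parser, a stub tree exposing only the getText method is enough. The snippet below repeats the function so it is self-contained and feeds it mocked data; the _text field and the stub tree are my own invention, but the capture shape mirrors what the query’s captures() call returns:

```javascript
// Same formatCaptures as above, repeated so this snippet runs standalone.
function formatCaptures(tree, captures) {
  return captures.map((c) => {
    const node = c.node;
    delete c.node;
    c.text = tree.getText(node);
    c.row = node.startPosition.row;
    c.column = node.startPosition.column;
    return c;
  });
}

// Stub tree: getText just reads a mock _text field off the node.
const stubTree = { getText: (node) => node._text };

const mockCaptures = [
  {
    name: "redundantAsgn",
    node: { _text: "y := y", startPosition: { row: 2, column: 0 } },
  },
];

console.log(formatCaptures(stubTree, mockCaptures));
// [ { name: 'redundantAsgn', text: 'y := y', row: 2, column: 0 } ]
```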
And of course, it’s trivial to filter out the captures corresponding to a given name:
// Get the captures corresponding to a capture name
function capturesByName(tree, query, name) {
return formatCaptures(
tree,
query.captures(tree.rootNode).filter((x) => x.name == name)
).map((x) => {
delete x.name;
return x;
});
}
Passing tree, redundantQuery and "redundantAsgn" to capturesByName, we get:
[
{ text: 'y := y', row: 2, column: 0 },
{ text: 'x := x', row: 6, column: 2 },
{ text: 'x := x', row: 8, column: 0 }
]
Now you can process these objects however you like. Note that tree-sitter uses zero-based indexing for rows and columns, so you might want to offset them by one so users can locate matches in their text editor. Here’s a simple approach:
// Lint the tree with a given message, query and match name
function lint(tree, msg, query, name) {
console.log(msg);
console.log(capturesByName(tree, query, name));
}
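As written, lint still prints zero-based positions. A helper along these lines can do the shift just before display (toOneBased is my own name for it, not part of the original linter):

```javascript
// Shift zero-based row/column fields to one-based, matching what most
// text editors display. All other fields are left untouched.
function toOneBased(captures) {
  return captures.map(({ row, column, ...rest }) => ({
    ...rest,
    row: row + 1,
    column: column + 1,
  }));
}

console.log(toOneBased([{ text: "y := y", row: 2, column: 0 }]));
// [ { text: 'y := y', row: 3, column: 1 } ]
```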
lint(tree, "Redundant assignments:", redundantQuery, "redundantAsgn");
We get the output:
Redundant assignments:
[
{ text: 'y := y', row: 2, column: 0 },
{ text: 'x := x', row: 6, column: 2 },
{ text: 'x := x', row: 8, column: 0 }
]
As a bonus, we can reuse our existing code for new queries! Here are a couple:
((if condition: _ @c consequent: _ @l alternative: _ @r)
(#eq? @l @r)) @redundantIf
((plus (num) @n) (#eq? @n "0")) @addzero
Here are some exercises to try:

- Write a query that flags redundant skip statements.

To appreciate tree-sitter more, think about what we would have done had we not used it. The process might have gone something like this: pick a parsing library, write the parser, thread source locations through the AST, then bolt on error handling.
Note that there are several steps where things could go wrong or block us later. If we had written the parser ourselves, say in Haskell using megaparsec, we would not have been able to recover the rows and columns of the syntax elements (or we would have had to painfully write an abstract data type with annotations).
And even worse, what happens when the user supplies syntactically invalid input? Some parser generators based on GLR parsing, such as Bison, allow for error recovery, but then we’d need to define a custom error token and come up with ad-hoc logic for dealing with it.
Tree-sitter separates these design choices into orthogonal ones. A tree-sitter grammar is easy to write and reusable in any language with a C FFI. The error recovery logic is pervasive yet unwritten, and the resulting AST is annotated with locations and can easily be pattern-matched over with queries.
Should we throw tree-sitter at every problem involving parsing? No! There are certainly areas where we need syntax trees without error nodes, and sometimes incremental parsing is unnecessary. For instance, on a build farm we don’t want to build package definitions that contain syntax errors!
Beyond linting, tree-sitter has also found applications in GitHub’s search-based code navigation, which also makes use of the query language to annotate the AST with tags.
Every passing decade, it seems as if the task of implementing a new programming language becomes easier. Parser generators take the pain out of parsing and can give us informative error messages. Expressive type systems in the host language let us pattern-match over a recursive syntax tree with ease, letting us know if we’ve forgotten a case. Property-based testing and fuzzers let us test edge cases faster and more completely than ever. Compiling to intermediate languages such as LLVM gives even the simplest languages reasonable performance.
Say you have just created a new language leveraging the latest and greatest technologies in programming-language land. What should you turn your sights to next if you want people to actually adopt and use it? I’d argue that it should be writing a tree-sitter grammar. Before I elaborate on what tree-sitter is, here’s what you’ll be able to achieve much more easily:
And the best part is that you can do it in an afternoon! In this post we’ll write a grammar for Imp, a simple imperative language, and you can get the source code here.
This post was inspired by my research in improving the developer experience for FORMULA and Spin.
Tree-sitter is a parser generator tool. Unlike other parser generators, it especially excels at incremental parsing, creating useful parse trees even when the input has syntax errors. And best of all, it’s extremely fast and dependency-free, letting you parse the entirety of a file on every keystroke in milliseconds. The generated parser is written in C, and there are bindings for many other programming languages, so you can programmatically walk the tree as well.
Imp is a simple imperative language often used as an illustrative example in programming language theory. It has arithmetic expressions, boolean expressions and different kinds of statements including sequencing, conditionals and while loops.
Here’s an Imp program that computes the factorial of x and places the result in y.
// Compute factorial
z := x;
y := 1;
while ~(z = 0) do
  y := y * z;
  z := z - 1;
end
Check out the official tree-sitter development guide. If you’re using Nix, run nix shell nixpkgs#tree-sitter nixpkgs#nodejs-16_x to enter a shell with the necessary dependencies.
Note that you don’t need to have it set up to continue reading this post, since I’ll provide the terminal output at appropriate points.
First we follow the grammar for expressions given in the chapter. Here it is for reference.
a := nat
   | id
   | a + a
   | a - a
   | a * a
   | (a)
b := true
   | false
   | a = a
   | a <= a
   | ~b
   | b && b
a corresponds to arithmetic expressions and b corresponds to boolean expressions.
The easiest things to handle are numbers and variables. We can add the following rules:
id: $ => /[a-z]+/,
nat: $ => /[0-9]+/,
The grammar for arithmetic expressions can easily be translated:
program: $ => $.aexp,
aexp: $ => choice(
  /[0-9]+/,
  /[a-z]+/,
  seq($.aexp, '+', $.aexp),
  seq($.aexp, '-', $.aexp),
  seq($.aexp, '*', $.aexp),
  seq('(', $.aexp, ')'),
),
Let’s try to compile it! Here’s what tree-sitter outputs:
Unresolved conflict for symbol sequence:
aexp '+' aexp • '+' …
Possible interpretations:
1: (aexp aexp '+' aexp) • '+' …
2: aexp '+' (aexp aexp • '+' aexp)
Possible resolutions:
1: Specify a left or right associativity in `aexp`
2: Add a conflict for these rules: `aexp`
Tree-sitter immediately tells us that our rules are ambiguous, that is, the same sequence of tokens can have different parse trees. We don’t want ambiguity when writing code! Let’s make everything left-associative:
program: $ => $.aexp,
aexp: $ => choice(
  /[0-9]+/,
  /[a-z]+/,
  prec.left(1, seq($.aexp, '+', $.aexp)),
  prec.left(1, seq($.aexp, '-', $.aexp)),
  prec.left(1, seq($.aexp, '*', $.aexp)),
  seq('(', $.aexp, ')'),
),
However, something’s not quite right when we parse 1*2-3*4: it’s being parsed as ((1*2)-3)*4, which is clearly a different interpretation! We can fix this by specifying prec.left(2,...) for *. The resulting parse tree is then what we want.
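For concreteness, here is what the fixed rule might look like in grammar.js. This is a sketch of a grammar fragment, not standalone JavaScript: prec, seq and choice are provided by the tree-sitter grammar DSL, so it only runs inside tree-sitter generate.

```javascript
aexp: $ => choice(
  /[0-9]+/,
  /[a-z]+/,
  prec.left(1, seq($.aexp, '+', $.aexp)),
  prec.left(1, seq($.aexp, '-', $.aexp)),
  // Higher precedence than + and -, so 1*2-3*4 parses as (1*2)-(3*4).
  prec.left(2, seq($.aexp, '*', $.aexp)),
  seq('(', $.aexp, ')'),
),
```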
Note that in many real language specs, the precedence of binary operators is given, so it becomes pretty routine to figure out the associativity and precedence to specify.
The grammars for boolean expressions and statements are similar, and can be found in the accompanying repository.
Phew, so now we have a grammar that tree-sitter compiles. How do we actually run it? The tree-sitter CLI has two subcommands to help with this, tree-sitter parse and tree-sitter test. The parse subcommand takes a path to a file and parses it with the current grammar, printing the parse tree to stdout. The test subcommand runs a suite of tests defined in a very simple syntax:
===
skip statement
===

skip

---

(program
  (stmt
    (skip)))
The rows of equal signs denote the name of the test, followed by the program to parse, then a line of dashes followed by the expected parse tree.
When we run tree-sitter test, we get a check if a test passed and a cross if it failed, complete with a diff showing the expected vs. actual parse tree (to illustrate the error, I replaced the example code with skip; skip instead):
tests:
✗ skip
✓ assignment
✓ prec
✓ prog
1 failure:
expected / actual
1. skip:
(program
(stmt
(seq
(stmt
(skip))
(stmt
(skip)))))
(skip)))
Believe it or not, that was pretty much all there is to writing a tree-sitter grammar! We can immediately put it to use for syntax highlighting. Traditional syntax-highlighting methods in editors rely on regexes and ad-hoc heuristics to colorize tokens, whereas tree-sitter, having access to the entire parse tree, can not only color identifiers, numbers and keywords, but can do so in a context-aware fashion: for instance, highlighting local variables and user-defined types consistently.
The tree-sitter highlight command lets you generate syntax highlighting of your source code and render it in your terminal or output it as HTML. Tree-sitter’s syntax highlighting is based on queries. Importantly, we need to assign highlight names to different nodes in the tree. We only need the following five lines for this simple language. The square brackets indicate alternations; that is, if any node in the tree matches an item in the list, assign the given capture name (prefixed with @) to it.
[ "while" "end" "if" "then" "else" "do" ] @keyword
[ "*" "+" "-" "=" ":=" "~" ] @operator
(comment) @comment
(num) @number
(id) @variable.builtin
And here is what tree-sitter highlight --html gives on the factorial program:
// Compute factorial
z := x;
y := 1;
while ~(z = 0) do
  y := y * z;
  z := z - 1;
end
Not bad! Operators, keywords, numbers and identifiers are clearly highlighted, and the comment being grayed out and italicized makes the code more readable.
Creating a tree-sitter grammar is only the beginning. Now that you have a fast, reliable way to generate syntax trees even in the presence of syntax errors, you can use it as a base to build other tools on. I’ll briefly describe some of these topics below, but they really deserve their own blog posts at a later date.
Syntax highlighting can become more semantically informative with tree-sitter. That is, the highlighter can color local variable names one color and global variables another, distinguish between field access and method access, and more. Doing such nuanced highlighting with a regex-based highlighter is about as futile as trying to parse HTML with regexes.
Tree-sitter grammars compile to a dynamic library which can be loaded into editors such as Emacs, Atom and VS Code on any platform (including WebAssembly). Using each editor’s extension mechanisms, you can build packages on top which use the syntax tree for a variety of things, such as structural code navigation, querying the syntax tree for specific nodes (see screenshot), and of course syntax highlighting. Here’s an incomplete list of projects that use tree-sitter to enhance editing:
Tree-sitter has bindings in several languages. You can use this information and tree-sitter’s query language to traverse the syntax tree looking for specific patterns (or anti-patterns) in your programming language. To see this in action for Imp, see my minimal example of linting Imp with the JavaScript bindings. More details in a future post!
Parsing technology has come a long way since the birth of computer science almost a century ago (see this excellent timeline of parsing). We’ve gone from being unable to handle recursive expressions and precedence to LALR parser generators, and now GLR and fast incremental parsing with tree-sitter. It stands to reason that the tools millions of developers use every day to look at their code should take advantage of such developments. We can do better than line-oriented editing and hacky regexes to transform and highlight our code. The future is structural, and perhaps tree-sitter will play a big role in it!
Nevertheless, this hermeticity comes with some downsides, especially when it comes to bandwidth, disk space and CPU usage. The reason for this is that Nixpkgs occasionally merges PRs that “rebuild the world”: for instance, staging-next cycles, or urgent updates to OpenSSL and other critical packages (which cause a rebuild in, say, Vim, because they affect the git derivation used to fetch it). Thus when you want to use a package that depends on an older or newer commit of Nixpkgs and some mass-rebuild PR landed in the intervening time, you’ll be faced with mass downloads of almost every dependency: builds whose contents probably did not change, but whose build environments differed enough that Nix considers them different.
After over a year of using flakes in practice, I’ve noticed certain ways in which I overcome these inconveniences, which I’ll elaborate below.
Note that this isn’t to say the hacks are without drawbacks. I’ll make it apparent in each hack what the benefits and drawbacks are.
Scenario: you want to avoid a mass rebuild when trying to build an older project
Fix: override the nixpkgs input with a fixed reference
Drawbacks: might lose reproducibility, but it’s fine if the changes between the pinned commit and the overridden one weren’t major
Around a year ago, I started pinning my Nixpkgs registry. This keeps my flake reference to nixpkgs consistent across my systems (as opposed to using channels). It also helps when running commands with nix run: instead of using the most up-to-date commit of Nixpkgs, they use the pinned one from my system.
I then deploy my server configuration using a simple tool. So when I want to update my server I run the following command
$ nix run github:winterqt/deploy -- siraben-land
[0/81 built, 1/0/14 copied (3.7/924.4 MiB), 1.0/161.4 MiB DL] fetching llvm-13.0.0-lib from https://cache.nixos.org
Huh? What does LLVM have to do with using the deployment tool? Why are there 81 rebuilds? Such scenarios are commonplace in my experience, due to the gap between the Nixpkgs commit the package set is pinned to and where Nixpkgs currently is. The solution is to override the flake input altogether. Many flake commands accept the --override-input flag, which takes two arguments: a path to override and the new flake reference to override it with. In the following command I’m overriding the input called nixpkgs with nixpkgs from my registry.
$ nix run github:winterqt/deploy --override-input nixpkgs nixpkgs -- siraben-land
warning: not writing modified lock file of flake 'github:winterqt/deploy':
• Updated input 'nixpkgs':
'github:NixOS/nixpkgs/5c37ad87222cfc1ec36d6cd1364514a9efc2f7f2' (2021-12-25)
→ 'github:NixOS/nixpkgs/a529f0c125a78343b145a8eb2b915b0295e4f459' (2022-01-31)
Notice that the reference to Nixpkgs went forward in time by a month. In this case, I avoided the rebuilds and the server config deployed without any problems. Of course, the natural downside is that you might lose reproducibility if there were major changes between the two commits. In most non-critical cases, the resources and time saved are worth the risk.
Scenario: when working with a pre-flakes project, we want to be able to build a derivation specified by a given expression
Fix: pass the --impure flag
Drawbacks: could lead to larger closure sizes
In the world of Nix flakes, impure references to things such as the current directory are outright banned. For instance, suppose we’re on aarch64-darwin and we want to build GNU Hello for x86_64-darwin. Before flakes we might run
$ nix-build -E 'with (import ./. {system="x86_64-darwin";}); hello'
So the Nix command equivalent would be
$ nix build --expr 'with (import ./. {system="x86_64-darwin";}); hello'
error: access to absolute path '/Users/siraben/Git/forks/nixpkgs' is forbidden in pure eval mode (use '--impure' to override)
(use '--show-trace' to show detailed location information)
As the error message suggests, we have to pass --impure, resulting in
$ nix build --impure --expr 'with (import ./. {system="x86_64-darwin";}); hello'
which succeeds as usual. Note that this might lead to increased closure sizes because a path reference results in the entire directory of the package being copied to the Nix store.
Scenario: you want to build unfree packages, or packages marked as broken for the current platform
Fix: pass --impure to nix build
Drawbacks: mostly harmless™
As an example, the mathcomp book has a flake.nix file defined, so we might be tempted to build the book with flakes:
$ nix build github:mathcomp/mcb
error: Package ‘mathcomp-book’ in /nix/store/z5d23mcmv3va30nfkg1q40iz62xyi57a-source/flake.nix:36 has an unfree license (‘cc-by-nc-40’), refusing to evaluate.
a) To temporarily allow unfree packages, you can use an environment variable
for a single invocation of the nix tools.
$ export NIXPKGS_ALLOW_UNFREE=1
b) For `nixos-rebuild` you can set
{ nixpkgs.config.allowUnfree = true; }
in configuration.nix to override this.
Alternatively you can configure a predicate to allow specific packages:
{ nixpkgs.config.allowUnfreePredicate = pkg: builtins.elem (lib.getName pkg) [
"mathcomp-book"
];
}
c) For `nix-env`, `nix-build`, `nix-shell` or any other Nix command you can add
{ allowUnfree = true; }
to ~/.config/nixpkgs/config.nix.
(use 'showtrace' to show detailed location information)
Unfortunately, in this case it’s not clear what the fix is. Even if you set that environment variable, you still get the same error message. Again harkening back to the philosophy of Nix flakes, querying environment variables is considered impure. The fix is to again pass the --impure flag while setting the environment variable at the same time.
$ NIXPKGS_ALLOW_UNFREE=1 nix build --impure github:mathcomp/mcb && tree ./result
./result
└── share
└── book.pdf
There really isn’t any downside to this method, as far as I know. Unless environment variables set in your shell also affect other aspects of the build, everything should be the same, and you’ll be able to run and build packages that were previously marked as broken or unfree.
Nix flakes aren’t to blame for these workarounds per se. In a sense, Nix becomes too pure, to the extent that resources are used when they don’t strictly need to be, especially for non-critical use cases. In the future, features such as a content-addressed store may help with issues such as mass rebuilds, where package hashes are determined by their build contents and not their input derivations.
From left to right, the structures can be roughly classified as pertaining to order theory, algebra and topology. For the object-oriented programmer: how many instances of multiple inheritance do you see?
It’s important to capture the way structures are organized in mathematics in a proof assistant with some uniform strategy, well known in the OOP world as “design patterns.” In this article I will catalogue and explain a selection of patterns and their strengths and benefits. They are (in order of demonstration):
For convenience as a reference, I will start with the most recommended, elegant and boilerplate-free patterns and end with the ugliest and most broken ones.
The running example will be a simple algebraic hierarchy: semigroup, monoid, commutative monoid, group, Abelian group. That should be elaborate enough to show how the approaches hold up in a more realistic setting. Here’s an overview of the hierarchy we’ll be building over a type A:

- Semigroup: add : A -> A -> A (a binary operation over A) with addrA : forall x y z, add x (add y z) = add (add x y) z
- Monoid: zero : A with add0r : forall x, add zero x = x and addr0 : forall x, add x zero = x
- Commutative monoid: addrC : forall (x y : A), add x y = add y x
- Group: opp : A -> A (an inverse function) with addNr : forall x, add (opp x) x = zero (addition of an element with its inverse results in the identity)

You may also see ssreflect-style statements such as associative add.
Then, if all goes well, we will test the expressiveness of our hierarchy by proving a simple lemma, which makes use of a law from every structure.
(* Let A an instance of AbGroup, then the lemma holds *)
Lemma example A (a b : A) : add (add (opp b) (add b a)) (opp a) = zero.
Proof. by rewrite addrC (addrA (opp b)) addNr add0r addNr. Qed.
Reading: Type Classes for Mathematics in Type Theory
A wellknown and vanilla approach is to use typeclasses. This goes
very well, our declaration for AbGroup
is just the constraints,
similar to how it would be done in Haskell. However, pay special
attention to the definition of AbGroup
, there’s a !
in front of
the ComMonoid
constraint to expose the implicit arguments again, so
that it can implicitly inherit the monoid instance from G
.
Require Import ssrfun ssreflect.
Class Semigroup (A : Type) (add : A -> A -> A) := { addrA : associative add }.
Class Monoid A `{M : Semigroup A} (zero : A) := {
  add0r : forall x, zero + x = x;
  addr0 : forall x, x + zero = x
}.
Class ComMonoid A `{M : Monoid A} := { addrC : commutative add }.
Class Group A `{M : Monoid A} (opp : A -> A) := {
  addNr : forall x, add (opp x) x = zero
}.
Class AbGroup A `{G : Group A} `{CM : !ComMonoid A}.
The example lemma is easily proved, showing the power of typeclass resolution in unifying all the structures.
Lemma example A `{M : AbGroup A} (a b : A)
: add (add (opp b) (add b a)) (opp a) = zero.
Proof. by rewrite addrC (addrA (opp b)) addNr add0r addNr. Qed.
See the accompanying gist for the instantiation of the structures over ℤ.
The Hierarchy Builder (HB) package is best described as a boilerplate generator, but in a good way! From a usability point of view, it is similar to typeclasses.
First we define semigroups. HB.mixin Record IsSemigroup A
declares
that we are about to define a predicate IsSemigroup
over a type A
,
and the two entries in the record denote the binary operation and its
associativity, respectively. We also define an infix notation for
convenience.
From HB Require Import structures.
From Coq Require Import ssreflect.
(* Semigroup definition *)
HB.mixin Record IsSemigroup A := {
  add : A -> A -> A;
  addrA : forall x y z, add x (add y z) = add (add x y) z;
}.
HB.structure Definition Semigroup := { A of IsSemigroup A }.
(* Left associative by default *)
Infix "+" := add.
Next we define monoids. Similarly to semigroups we use the mixin
command, but now declare the inheritance by of IsSemigroup A
. That
is, for a type to be a monoid, it must be a semigroup first.
(* Monoid definition, inheriting from Semigroup *)
HB.mixin Record IsMonoid A of IsSemigroup A := {
zero : A;
add0r : forall x, add zero x = x;
addr0 : forall x, add x zero = x;
}.
HB.structure Definition Monoid := { A of IsMonoid A }.
Notation "0" := zero.
Now that we’ve seen two examples, there are no surprises in how to define commutative monoids and groups.
(* Commutative monoid definition, inheriting from Monoid *)
HB.mixin Record IsComMonoid A of Monoid A := {
addrC : forall (x y : A), x + y = y + x;
}.
HB.structure Definition ComMonoid := { A of IsComMonoid A }.
(* Group definition, inheriting from Monoid *)
HB.mixin Record IsGroup A of Monoid A := {
  opp : A -> A;
  addNr : forall x, opp x + x = 0;
}.
HB.structure Definition Group := { A of IsGroup A }.
Notation "- x" := (opp x).
Now for the interesting part. Hierarchy Builder makes it easy for us to do multiple inheritance and combine the constraints, much like typeclasses. We can then seamlessly prove the lemma exactly as we did before.
(* Abelian group definition, inheriting from Group and ComMonoid *)
HB.structure Definition AbGroup := { A of IsGroup A & IsComMonoid A }.
(* Lemma that holds for Abelian groups *)
Lemma example (G : AbGroup.type) (a b : G) : - b + (b + a) + - a = 0.
Proof. by rewrite addrC (addrA (opp b)) addNr add0r addNr. Qed.
The underlying code it generates follows a pattern known as packed classes (elaborated in the next section). For reference, the generated code can be shown by prefixing an HB command with #[log]. When the HB.structure command is invoked, a bunch of mixins and definitions are created. For brevity I’ve omitted some of them here.
...
Top_AbGroup__to__Top_Semigroup is defined
Top_AbGroup__to__Top_Monoid is defined
Top_AbGroup__to__Top_Group is defined
Top_AbGroup__to__Top_ComMonoid is defined
join_Top_AbGroup_between_Top_ComMonoid_and_Top_Group is defined
...
In more detail, here is the output of Print Top_AbGroup__to__Top_ComMonoid., which shows that it is a coercion letting us go from an Abelian group structure back up the hierarchy to a commutative monoid structure. Hierarchy Builder automatically creates these coercions and joins for us.
Top_AbGroup__to__Top_ComMonoid =
fun s : AbGroup.type =>
  {| ComMonoid.sort := s; ComMonoid.class := AbGroup.class s |}
     : AbGroup.type -> ComMonoid.type
Top_AbGroup__to__Top_ComMonoid is a coercion
It is worth noting that the mathcomp library is undergoing a transition to use Hierarchy Builder in the future, instead of handwritten instances and coercions.
Reading: Canonical structures for the working Coq user
In the mathcomp library, the approach taken is known as the packed classes design pattern. It's a fairly complicated construct that I might elaborate on in a future blog post, but I'll give some highlights and a full example.
Note that mathcomp is being ported to Hierarchy builder, so this style is being phased out.
According to the Mathematical Components book,
Telescopes suffice for most simple — treelike and shallow — hierarchies, so new users do not necessarily need expertise with the more sophisticated packed class organization covered in the next section
Here’s how to define a monoid. We create a module, postulate a type
T
and an identity element zero
of type T
, and combine the laws
into a record called law
. The exports section is small here but we
export just the operator
coercion.
Module Monoid.
Section Definitions.
Variables (T : Type) (zero : T).
Structure law := Law {
operator : T -> T -> T;
_ : associative operator;
_ : left_id zero operator;
_ : right_id zero operator
}.
Local Coercion operator : law >-> Funclass.
End Definitions.
Module Import Exports.
Coercion operator : law >-> Funclass.
End Exports.
End Monoid.
Export Monoid.Exports.
With that defined, we can instantiate the monoid structure for
booleans (note that zero
is automatically unified with true
).
Import Monoid.
Lemma andbA : associative andb. Proof. by case. Qed.
Lemma andTb : left_id true andb. Proof. by case. Qed.
Lemma andbT : right_id true andb. Proof. by case. Qed.
Canonical andb_monoid := Law andbA andTb andbT.
Let’s define a semigroup using one of the most basic features of Coq: records. Writing it this way means a structure is simply a conjunction of laws, an n-ary predicate over n components. We define the semigroup structure first, then consider monoids as an augmented semigroup.
Require Import ssrfun.
Record Semigroup {A : Type} : Type := makeSemigroup {
s_add : A -> A -> A;
s_addrA : associative s_add;
}.
Record Monoid {A : Type} : Type := makeMonoid {
m_semi : @Semigroup A;
m_zero : A;
m_add0r : forall x, (s_add m_semi) m_zero x = x;
m_addr0 : forall x, (s_add m_semi) x m_zero = x;
}.
Unfortunately we already have to make an awkward choice to do some sort of indexing to access the underlying shared associative binary operation. At the next level when one defines groups as an augmented monoid, the situation only gets worse:
Record Group {A : Type} : Type := makeGroup {
m_monoid : @Monoid A;
g_inv : A -> A;
g_addNr : forall x, (s_add (m_semi m_monoid)) (g_inv x) x = m_zero m_monoid;
}.
We have to access the operation through two different indexes! Perhaps we might want to add a member to the record that is equal to the inherited operation, but this too is not satisfactory, since it prevents us from creating a canonical name for the operation in question (for instance, add), and we’d have to do this at arbitrarily nested levels. Thus, while flexible, this approach does not scale.
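The same nesting pain is easy to reproduce in any language with plain records; here is a Haskell sketch of mine (record and field names are illustrative):

```haskell
-- Nested "structure" records, mirroring the Coq development above.
newtype SemigroupS a = SemigroupS { sAdd :: a -> a -> a }

data MonoidS a = MonoidS
  { mSemi :: SemigroupS a
  , mZero :: a
  }

data GroupS a = GroupS
  { gMonoid :: MonoidS a
  , gInv    :: a -> a
  }

-- The pain point: the shared operation hides behind two projections.
groupAdd :: GroupS a -> a -> a -> a
groupAdd g = sAdd (mSemi (gMonoid g))

-- An instance for the additive group of integers.
intGroup :: GroupS Integer
intGroup = GroupS (MonoidS (SemigroupS (+)) 0) negate
```

Each extra level of the hierarchy adds another projection to every access, which is exactly the scaling problem described above.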
One approach, seen in CPDT, is to use the module system to organize the hierarchy. It seems fine for the first few structures: we declare parameterized modules and postulate additional axioms on top of the structure from which we inherit.
Require Import ssrfun.
Module Type SEMIGROUP.
Parameter A: Type.
Parameter Inline add : A -> A -> A.
Axiom addrA : associative add.
End SEMIGROUP.
Module Type MONOID.
Include SEMIGROUP.
Parameter zero : A.
Axiom add0r : left_id zero add.
End MONOID.
Module Type COM_MONOID.
Include MONOID.
Axiom addrC : commutative add.
End COM_MONOID.
Module Type GROUP.
Include MONOID.
Parameter opp : A > A.
Axiom addNr : forall x, add (opp x) x = zero.
End GROUP.
However, we immediately run into an issue when trying to create an Abelian group from a commutative monoid and a group: the carrier type A is already in scope from the first include, so we cannot share the carrier type (or even the underlying monoid) with GROUP. So we give up.
Module Type COM_GROUP.
Include COM_MONOID.
Fail Include GROUP.
End COM_GROUP.
The command has indeed failed with message:
The label A is already declared.
Just like software engineering, there are many ways to organize mathematical theories in proof assistants such as Coq.
Personally, I would lean more towards organizing my theories with Hierarchy Builder—or at the very least, typeclasses, if external dependencies are an issue.
This article was discussed on Hacker News.
This year, I had a lot of fun with Advent of Code. There’s a nice problem solving aspect of it and refining your initial solution is always a fun exercise.
I do my solutions in Haskell, and AoC problems can be nice explorations into various performance aspects of GHC’s runtime. My README has some notes on my general approach to getting programs to run faster.
But sometimes, you can get solutions that run orders of magnitude faster, and this year I encountered such a case where my solution ran over 2700 times faster! Let’s dive in.
Day 15 asks us to compute the lowest sum possible that results from traversing a grid from one corner to the other, where the possible moves are moving by one in the four cardinal directions. When I saw this problem, I thought, “this is just recursion!” Naturally, I wrote such a recursive solution (glguy notes on IRC that this would be a lot faster if I memoized the calls). The base case is if we’re at the origin point, the cost is 0. Otherwise, the minimum cost starting at (r,c) is the cost of the cell at (r,c) added to the minimum of recursively solving the same problem for (r-1,c), (r,c-1) and (r,c+1).
import qualified Data.Map as M
minSum :: Int -> Int -> M.Map (Int, Int) Int -> Int
minSum r c g
  | r == 0 || c == 0 = 0
  | otherwise = g M.! (r, c) + minimum [minSum (r - 1) c g, minSum r (c - 1) g, minSum r (c + 1) g]
This was sufficient for the very small example they gave. But it fails even on part 1, which was a 100 by 100 grid! In fact, it is an incorrect solution as well (if we restrict the problem to only down and right moves then a dynamic programming solution would work.)
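For the down/right-only restriction just mentioned, a lazy-Map dynamic program suffices; this is a sketch of mine, not code from the original solution:

```haskell
import qualified Data.Map as M

-- Down/right-only variant as a dynamic program. The Map memo refers back
-- to itself lazily, so every cell's answer is computed at most once.
minSumDP :: M.Map (Int, Int) Int -> (Int, Int) -> Int
minSumDP g goal = memo M.! goal
  where
    memo = M.mapWithKey cost g
    cost (0, 0) _ = 0
    cost (r, c) v =
      v + minimum ([memo M.! (r - 1, c) | r > 0] ++ [memo M.! (r, c - 1) | c > 0])
```

This relies on Haskell's laziness: the Map of results is defined in terms of itself, and demanding the goal cell forces exactly the entries it depends on. It does not solve the real Day 15, which allows moves in all four directions.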
Clearly, even a naïve solution won’t save us here. The next approach I went with uses Dijkstra’s Algorithm. One limitation I impose on myself in solving Advent of Code is to not use any libraries outside the GHC bootstrap libraries. This is simply because in some environments, such as Google Code Jam or Codeforces, fancy libraries would not exist, but ones such as containers certainly would.
And sure enough, Dijkstra’s algorithm was able to solve part 1 in a few seconds. Part 2 however, just kept churning. I left my computer running then returned to see the result, which was accepted by the validator online. My shell also prints out how long the previous command took to execute, so that acted as a timer.
Even though we got the right answer, there’s no way that this is a reasonable runtime for a solution. To find the culprit, I ran the Haskell profiler on the program solving part 1.
COST CENTRE MODULE SRC %time %alloc
dijkstra.step3.nextVert.v.\ Main day15.hs:101:47-96 86.1 0.0
dijkstra.step3.nextVert.v Main day15.hs:101:13-128 11.8 28.2
...
The culprit is the function that computes the next vertex to visit (see the pseudocode) by choosing the vertex with minimum distance from the source.
v :: (Int, Int)
v = minimumBy (\x y -> compare (dv M.! x) (dv M.! y)) q
where q
is the set of vertices left to visit and dv
is a map from
vertices to distances. Using minimumBy
on a Set
in Haskell calls
foldl'
on sets, which is
here.
This procedure of course will be linear in the size of the set, and a
500 by 500 grid has 250,000 vertices to find the minimum of each time
we want to select another vertex. Yikes. I imagined, “what if we were
able to just find the next vertex of minimum distance from the source
in constant time?” Thus, we would be able to breeze through this part
of the algorithm and bring the runtime significantly down.
There’s a nice “duality” to the operation of Dijkstra’s algorithm when
you use a priority queue. You have a map where the keys are vertices
and the elements are distances, but when you select the next vertex to
visit, you use a priority queue where the keys (priorities) are
distances and the elements are vertices. The structure of each is
optimized for a different aspect of the algorithm, so conflating the
two would intuitively cause slowdown. With that in mind, we can
define a priority queue as just a map from Int
s to lists of values
of that priority.
type PQueue a = IntMap [a]
Getting the next minimal element from the priority queue is easy, since IntMaps already provide a minViewWithKey function. Insertion is similarly easy to write up. The empty priority queue is just an empty IntMap.
-- Retrieve an element with the lowest priority
pminView :: PQueue a -> ((Key, a), PQueue a)
pminView p =
  let Just (l, p') = IM.minViewWithKey p
   in case l of
        (_, []) -> pminView p'
        (k, x : xs) -> ((k, x), if null xs then IM.delete k p' else IM.insert k xs p')

-- Insertion of an element with a specified priority
pins :: Int -> a -> PQueue a -> PQueue a
pins k x = IM.insertWith (++) k [x]
Note that pminView
already returns the new map with the minimal
element removed, so we don’t need to write another deletion function.
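As a quick sanity check of mine (not from the original post), we can repeat the two definitions and drain a small queue in priority order; drain is a hypothetical helper:

```haskell
import qualified Data.IntMap as IM
import Data.IntMap (IntMap, Key)

-- Definitions from the post, repeated so this sketch is self-contained.
type PQueue a = IntMap [a]

pins :: Int -> a -> PQueue a -> PQueue a
pins k x = IM.insertWith (++) k [x]

pminView :: PQueue a -> ((Key, a), PQueue a)
pminView p =
  let Just (l, p') = IM.minViewWithKey p
   in case l of
        (_, []) -> pminView p'
        (k, x : xs) -> ((k, x), if null xs then IM.delete k p' else IM.insert k xs p')

-- Drain the whole queue, smallest priority first.
drain :: PQueue a -> [(Key, a)]
drain q
  | IM.null q = []
  | otherwise = let (kv, q') = pminView q in kv : drain q'
```

Draining `pins 5 'b' (pins 2 'a' (pins 5 'c' IM.empty))` yields the priority-2 element before the two priority-5 ones, which is all Dijkstra's algorithm needs.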
With those functions in hand, and lots of rewriting, I finally cracked it!
The results were staggering: part 2 was sped up by a factor of 2545, which is a serious demonstration that even if you have the right algorithm, the choice of how you represent auxiliary data in the algorithm matters.
benchmarking day15/part2
time 1.172 s (1.088 s .. 1.259 s)
0.999 R² (0.997 R² .. 1.000 R²)
mean 1.213 s (1.190 s .. 1.244 s)
std dev 29.96 ms (12.16 ms .. 39.32 ms)
variance introduced by outliers: 19% (moderately inflated)
After checking the cost centers again, a small
adjustment
to how neighbors were computed reduced the mean running time to
1.098
seconds, which amounts to a 2716 times speedup!
Not using premade libraries was a great pedagogical constraint because it forced me to get to the essence of an algorithm or data structure. While implementations of Dijkstra’s algorithm exist in various Haskell libraries, they are often too optimized or specialized to certain structures. There’s a lot to be learned from doing things from scratch!
After having gone through most of the first 3 books in Software Foundations, and proving various theorems in classical real analysis in Coq, I decided to formalize some basic stuff from Algebra: Chapter 0, in particular the following statement.
Proposition 2.1. Assume \(A\) is nonempty, let \(f : A \to B\) be a function. \(f\) has a left-inverse if and only if it is injective.
This is a pretty trivial statement with a pretty trivial proof on paper (try it!), so I expected the Coq proof to be quite easy as well. What I didn’t know was that it would take me on a short journey through mathematical foundations, computability and some philosophy of mathematics as well. Why was it harder than expected, and what does it say about mathematics as a field? I will replicate the dead ends I ran into here, to illustrate various points.
On the first attempt, I translated the statements as directly as possible into Coq. The forward direction is easy.
Definition injective {A B} (f : A -> B) :=
  forall a' a'', a' <> a'' -> f a' <> f a''.
Definition left_inverse {A B} (f : A -> B) g := forall a, g (f a) = a.
Definition right_inverse {A B} (f : A -> B) g := forall b, f (g b) = b.
Lemma P_2_1 {A B} (f : A -> B) (s : A) :
  (exists g, left_inverse f g) <-> injective f.
Proof.
  unfold left_inverse, injective.
  split; intros H.
  - intros a' a'' eq. destruct H as [g Hg].
    congruence.
  -
Now we prove the reverse direction, that if a function is injective, then it has a leftinverse. We have the following proof state:
A : Type
B : Type
f : A -> B
s : A
H : forall a' a'' : A, a' <> a'' -> f a' <> f a''
============================
exists g : B -> A, forall a : A, g (f a) = a
The goal is exists g : B -> A, .... In Coq we prove such statements by providing the function \(g\), then showing it is indeed a left-inverse. The paper proof constructed such a \(g\) as well, so just translate!
Here is the paper proof:
\((\Longleftarrow)\) Now assume \(f : A\to B\) is injective. In order to construct a function \(g : B\to A\) we have to assign a unique value \(g(b)\in A\) for each element \(b\in B\). For this, choose any fixed element \(s\in A\) (which we can do because \(A\neq\emptyset\)); then set
\[{\displaystyle g(b)={ \begin{cases} a&{\text{if }}b = f(a)\,\text{for some}\,a\in A,\\[4px] s&{\text{if }}b\notin \text{im } f \end{cases}}}\]
We have a function \(g\) that, given a \(b\), returns an \(a\) if such an \(a\) satisfying \(b = f(a)\) exists, or returns a fixed element \(s\) of \(A\). Except, the devil is in the details, or in this case, in the word if. We have to make a decision depending on whether something is in the image or not. Actually, if we can do this for arbitrary sets \(A\) and \(B\), we can solve the halting problem. Here’s a short proof:
Assume we can always tell if \(b : B\) is in the image of \(f\) or not. Let \(M\) be a Turing machine. Define \(f : \mathbb{N} \to \mathbb{N}\) where \(f(n) = 0\) if \(M\) halts in \(n\) steps, otherwise return \(n+1\). This is obviously injective. So, what would \(g(0)\) give? It would return the number of steps that it takes for \(M\) to halt, or the fixed element if \(M\) diverges, but \(M\) is an arbitrary Turing machine, so that means we can solve the halting problem. (In fact you can also prove LEM from the theorem, see Appendix B).
In type theory, all the functions we ever write are computable by construction, so it actually turns out to be impossible to prove this lemma as stated. Thus, we need to strengthen our hypothesis, by assuming that the condition is decidable. In the Coq standard library, there is the following datatype:
(** [sumor] is an option type equipped with the justification of why
it may not be a regular value *)
Inductive sumor (A:Type) (B:Prop) : Type :=
  | inleft : A -> A + {B}
  | inright : B -> A + {B}
where "A + { B }" := (sumor A B) : type_scope.
Essentially this is an option type that gives a value of type \(A\) or a proof of why such a value cannot be produced. This is important because we want to use the left of the sum as the return value for the left-inverse of \(f\), and the right of the sum as a single bit telling us to return the default value \(s\).
(* Property of decidability of being in the image *)
Definition im_dec {A} {B} (f : A > B) :=
forall b, { a  f a = b } + ~ (exists a, f a = b).
(* If being in the image of an injective function f is decidable, it
has a left inverse. *)
Lemma injective_left_inverse {A B} (s : A) (f : A > B) :
im_dec f > injective f > { g  left_inverse f g }.
Proof.
unfold im_dec, injective; intros dec inj.
exists (fun b => match dec b with inl (exist _ a _) => a  inr _ => s end).
intros a'; destruct (dec (f a')) as [[a Ha]  contra].
With this assumption, we can continue the proof, but then we are stuck at following proof state.
A : Type
B : Type
s : A
f : A > B
dec : forall b : B, {a : A | f a = b} + (~ (exists a : A, f a = b))
inj : forall a' a'' : A, a' <> a'' -> f a' <> f a''
a', a : A
Ha : f a = f a'
============================
a = a'
We need to prove \(a = a'\), which follows from \(f\) being injective; however, note that the hypothesis inj states the contrapositive claim.
In an undergrad discrete math class, one quickly learns about the contrapositive law in classical logic, \((P \to Q) \leftrightarrow (\neg Q \to \neg P)\). It turns out that in type theory (more generally, in intuitionistic logic), the forward implication is provable but the backward implication requires double negation elimination, which is equivalent to LEM. As a result, we can prove a slightly more general theorem if we use the following definition of injective. The proof that our definition of injectivity implies the one used in the book is trivial.
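For reference, the intuitionistically provable direction of contraposition is a one-line program; here is a sketch in Lean (not part of the original Coq development):

```lean
-- Contraposition: the direction that is provable without LEM.
theorem contrapose {P Q : Prop} (h : P → Q) : ¬Q → ¬P :=
  fun nq p => nq (h p)
```

The other direction, \(({\neg Q \to \neg P}) \to (P \to Q)\), is where double negation elimination sneaks in.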
(* New definition of injective *)
Definition injective {A B} (f : A -> B) :=
  forall a' a'', f a' = f a'' -> a' = a''.
(* Book's definition *)
Definition injective' {A B} (f : A -> B) :=
  forall a' a'', a' <> a'' -> f a' <> f a''.
(* injective implies injective' *)
Theorem injective_injective' : forall {A B} (f : A -> B),
  injective f -> injective' f.
Proof. cbv; auto. Qed.
With all this done, we can finally prove the backwards direction. One
last twist is that we’ll use a sigma type and make the proof
transparent using Defined.
so that we can extract the computational
content later.
Lemma injective_left_inverse {A B} (s : A) (f : A -> B) :
  im_dec f -> injective f -> { g | left_inverse f g }.
Proof.
  unfold injective, left_inverse, im_dec.
  intros dec inj.
  (* It's decidable to check if b is in the image or not *)
  exists (fun b => match dec b with inl (exist _ a _) => a | inr _ => s end).
  intros a'.
  destruct (dec (f a')) as [[a Ha] | contra].
  - apply inj; auto.
  - exfalso. apply contra. exists a'; auto.
Defined.
So, what did we learn from this not-so-trivial proof of a trivial theorem?
This experience left me feeling a bit philosophical. Note some of these points are subjective, and I speak from my perspective as an undergraduate in pure math and CS.
We lost a bit of symmetry. What used to be a simple iff is now two separate lemmas, where the backward direction takes a proof that finding a preimage is decidable. Did that make matters worse? I don’t think so. I think the central question is: what do we want from this theorem? We can obtain left-inverses of any injective function; presumably we would want to compute with the left-inverse!
If we used the law of the excluded middle anywhere, we would lose computability. But by paying careful attention and performing minor adjustments to the theorem, we have still preserved computability, and in fact can use this to find left-inverses of injective functions (see Appendix A).
What does this say about using constructive type theory as a foundation for mathematics at large? This is a difficult question that far more qualified researchers such as Andrej Bauer can answer better than I can (I highly recommend his talk on constructive mathematics). My naïve view is that once a foundation such as set theory is fixed, it is inevitable that “implementation details” will be used to the fullest extent. You can “just” check if something is in the image of an arbitrary function and voila, you have your well-defined function! You can “just” construct the set of all Turing machines that halt. No words said about whether it’s computable or not.
Another analogous situation I ran into was when trying to formalize concepts from differential topology, but the problem was at my very feet: I couldn’t prove that a function restricted to the whole domain is equal to the original function, something taken for granted in the classical world. Or how about quotients? They are ubiquitous in mathematics; one can perform quotients of groups, rings, topological spaces and more. See this passionate discussion regarding setoid hell.
On the other hand, type theory feels like the right logical system to work in for CS. Most proofs in CS are constructive anyway, making them very easy to translate into Coq. You also get nicer properties: all functions are computable, all functions are Scott-continuous, or even topologically continuous. You can also extract computational content from proofs. See Appendix A for how the left-inverse of the successor function can be obtained, something you couldn’t easily do in a set-theoretic setting.
Do we have to give up LEM to have all these things? Not necessarily. To quote Bauer from his talk,
Constructive mathematics keeps silent about the law of the excluded middle. We do not accept it, we do not deny it, we just don’t use it. Particular cases of law of excluded middle might be OK, but you have to establish them first.
While \(P \vee \neg P\) is not provable for an arbitrary proposition \(P\), if you can show it for some particular \(P\) (or a family of such \(P\)’s), you regain classical reasoning. For further excellent discussion on this topic, I recommend this Zulip discussion regarding LEM and decidability.
Using the proofs-as-programs principle, we can in fact obtain left-inverses of functions, provided that being in the image is decidable and that the function is injective.
Definition eq_dec A := forall (a1 a2 : A), a1 = a2 \/ a1 <> a2.
Lemma nat_eq_dec : eq_dec nat.
Proof.
unfold eq_dec.
induction a1; destruct a2; auto.
destruct (IHa1 a2); auto using f_equal.
Qed.
Definition succ (n : nat) := S n.
Definition pred' : nat -> nat.
Proof.
  refine (fun n => _ (injective_left_inverse 0 succ _ _)).
  - intros H. destruct H as [g Hg]. exact (g n).
  - unfold im_dec.
    induction b.
    + right. intros H. destruct H; discriminate.
    + left. refine (exist _ b _). reflexivity.
  - unfold injective. intros a' a'' H. inversion H; auto.
Defined.
Eval compute in (pred' 1000). (* => 999 *)
Exercise (4 stars): define double n = n + n and derive its left-inverse halve in a similar manner. You’ll need to prove that being in the image of double is decidable (hint: parity) and that it’s injective, along with some additional lemmas as you see fit. You might want to use the following induction principle to help. Don’t forget to make some proofs transparent so that Eval compute in (halve 1000). reduces to 500! Send me an email if you solve it!
(* Idea: if a property P holds on 0 and 1, and n+2 whenever it holds on n,
then it holds for all n. *)
Definition nat_ind2 :
  forall (P : nat -> Type),
    P 0 ->
    P 1 ->
    (forall n : nat, P n -> P (S (S n))) ->
    forall n : nat, P n :=
  fun P => fun P0 => fun P1 => fun PSS =>
    fix f (n:nat) := match n with
                     | 0 => P0
                     | 1 => P1
                     | S (S n') => PSS n' (f n')
                     end.
Here’s something wild: we can prove LEM from the backward direction of the original theorem! (Assuming proof irrelevance.)
Require Import ProofIrrelevance.
Lemma inj_left_inverse_implies_lem :
  (forall {A B} (f : A -> B), A -> injective f -> exists g, left_inverse f g)
  -> (forall (P : Prop), P \/ ~ P).
Proof.
  unfold left_inverse. intros H P.
  set (f := fun a : P + True => match a with | inl _ => inl I | inr _ => inr I end).
  pose proof (H _ _ f (inr I)).
  assert (Hf : injective f).
  {
    unfold injective; intros.
    destruct a', a''; try discriminate.
    - f_equal. apply proof_irrelevance.
    - destruct t, t0. reflexivity.
  }
  specialize (H0 Hf).
  destruct H0 as [g Hg].
  destruct (g (inl I)) eqn:E; auto. right.
  intros a. destruct t.
  replace (inl I) with (f (inl a)) in E by auto.
  rewrite Hg in E. inversion E.
Qed.
So how difficult would it be to translate a Common Lisp program to Haskell, in a way that makes the translated code seem idiomatic? The answer is through a careful choice of what Haskell features to use, in this case, monad transformers, but also a lesser-known technique: the tagless final style.
Get the full code here.
A couple of years ago, malisper wrote a blog post on writing an assembler for the esoteric programming language FRACTRAN in Common Lisp. It’s quite a nice display of the power of Common Lisp, particularly the macro system. We’re going to display a Common Lisp code block followed by its translated Haskell variant, when possible.
(defparameter *cur-inst-prime* nil)
(defparameter *next-inst-prime* nil)
(defparameter *lisptran-labels* nil)
(defparameter *lisptran-vars* nil)
(defparameter *next-new-prime* nil)
Unsurprisingly, this global state is encapsulated with the state
monad. ExceptT
appears too because we want to be able to throw an
error when a label is not found in the map. We’re also using the
Math.NumberTheory.Primes
package to generate the infinite stream of
primes for us.
import qualified Data.Map as M
import qualified Math.NumberTheory.Primes as P
data CompState =
CompState
{ currInstPrime, nextInstPrime :: Integer
, labels, vars :: M.Map String Integer
, primes :: [P.Prime Integer]
, gensymCount :: Integer
}
deriving (Show)
newtype Comp a =
Comp { runComp :: ExceptT String (State CompState) a }
deriving ( Functor, Applicative, Monad
         , MonadState CompState, MonadError String)
new-prime generates a fresh prime and places it in *next-new-prime*. In our case, we have an infinite list of primes, so newPrime should just advance the list and return the old head.
(defun new-prime ()
  "Returns a new prime we haven't used yet."
  (prog1 *next-new-prime*
    (setf *next-new-prime*
          (loop for i from (+ *next-new-prime* 1)
                if (prime i)
                return i))))
newPrime :: Comp Integer
newPrime = do
  l <- gets primes
  modify (\s -> s {primes = tail l})
  return (P.unPrime (head l))
advance
is a sequence of assignments, so the translation is
straightforward.
(defun advance ()
  (setf *cur-inst-prime* *next-inst-prime*
        *next-inst-prime* (new-prime)))
advance :: Comp ()
advance = do
  c <- gets nextInstPrime
  p <- newPrime
  modify (\s -> s {currInstPrime = c, nextInstPrime = p})
prime-for-label looks up a label and returns its value if found, and inserts it otherwise. prime-for-var is defined similarly.
(defun prime-for-label (label)
  (or (gethash label *lisptran-labels*)
      (setf (gethash label *lisptran-labels*)
            (new-prime))))
primeForLabel :: String -> Comp Integer
primeForLabel label = do
  labels <- gets labels
  case M.lookup label labels of
    Just p -> return p
    Nothing -> do
      p <- newPrime
      modify (\s -> s {labels = M.insert label p labels})
      return p
Now we run into a little bit of an issue:
(defmacro deftran (name args &body body)
  "Define a Lisptran macro."
  `(setf (gethash ',name *lisptran-macros*)
         (lambda ,args ,@body)))
We don’t have macros in Haskell! This is where our translation starts
to diverge. In such a case, it is useful to read the rest of the Lisp
code and see the larger structures at play, in this case, how the
deftran
macro is used, for instance, in the definitions of addi
,
subi
and >=i
.
(deftran addi (x y)
  (prog1 (list (/ (* *next* (expt (prime-for-var x) y))
                  *cur*))
    (advance)))
(deftran subi (x y) `((addi ,x ,(- y))))
(deftran >=i (var val label)
  (prog1 (let ((restore (new-prime)))
           (list (/ restore
                    (expt (prime-for-var var) val)
                    *cur-inst-prime*)
                 (/ (* (prime-for-label label)
                       (expt (prime-for-var var) val))
                    restore)
                 (/ *next-inst-prime* *cur-inst-prime*)))
    (advance)))
It would seem that we are stuck. We could generate lists of Rationals, but the use of advance forces us to use the State monad. Furthermore, subi calls addi!
One approach would be to express the instructions as a data type:
type Var = String
type Label = String
data Instr = Addi Var Int
           | Subi Var Int
           | Jge Var Int Label
...
But we lose a critical feature of macros, that they can be used in other macros, such as subi calling addi. And when we add a new instruction, we have to go through the entire codebase to handle the extra case; this is the infamous expression problem. Fortunately, much work has been carried out in attempting to resolve this, with one promising approach being the tagless final approach. That is, can we express addi, subi and more using a typeclass, rather than a data declaration? The answer is a resounding yes.
class MonadState repr => FracComp repr where
  lit :: Integer -> repr [Rational]
  label :: String -> repr [Rational]
  addi :: String -> Integer -> repr [Rational]
  jge :: String -> Integer -> String -> repr [Rational]
  gensym :: repr String
  subi :: String -> Integer -> repr [Rational]
  subi x y = addi x (- y)
Now the definition of subi looks just like the Lisp one! What’s going on in this typeclass is that repr is a higher-kinded type, repr :: * -> *. The FracComp typeclass has a constraint: repr has to support being a State monad, because we will need a notion of sequencing label generation to assemble programs correctly.
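If the tagless final pattern is unfamiliar, here is a minimal standalone sketch of mine, independent of the FRACTRAN assembler: one typeclass of syntax, interpreted at two different repr types (the class name ExpSYM follows the convention from the tagless final literature).

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- A tiny expression language, defined as a typeclass rather than a data type.
class ExpSYM repr where
  lit :: Int -> repr
  add :: repr -> repr -> repr

-- Interpreter 1: evaluate to an Int.
instance ExpSYM Int where
  lit = id
  add = (+)

-- Interpreter 2: pretty-print to a String.
instance ExpSYM String where
  lit = show
  add x y = "(" ++ x ++ " + " ++ y ++ ")"

-- One term, many meanings: the choice of repr picks the interpreter.
expr :: ExpSYM repr => repr
expr = add (lit 1) (add (lit 2) (lit 3))
```

Evaluating `expr :: Int` gives 6, while `expr :: String` gives `"(1 + (2 + 3))"`; adding a new operation is just another class, with no existing code touched.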
This extends naturally to deftran
definitions that have side
effects, for instance, gensym
in goto
.
(deftran goto (label) `((>=i ,(gensym) 0 ,label)))
goto :: FracComp repr => String -> repr [Rational]
goto dest = do
  g <- gensym
  jge g 0 dest
That’s neat, but now we only have a typeclass; we need to actually instantiate it. Indeed, Comp can be made an instance of FracComp.
instance FracComp Comp where
  addi x 0 = primeForVar x $> []
  addi x y = do
    g <- (^ abs y) <$> primeForVar x
    f <-
      if y < 0
        then (%) <$> gets nextInstPrime <*> ((* g) <$> gets currInstPrime)
        else (%) <$> ((* g) <$> gets nextInstPrime) <*> gets currInstPrime
    advance
    return [f]
  gensym = newsym

newsym = do
  n <- gets gensymCount
  modify (\s -> s {gensymCount = n + 1})
  return ('t' : show n)
There’s a little bit of a hiccup when y
is negative, because raising
to a negative exponent raises an error. Otherwise, the code is
remarkably close to Lisp.
Now we need to actually assemble a program. assemble
initializes
the state to the initial state.
(defun assemble (insts)
  "Compile the given Lisptran program into Fractran.
   Returns two values. The first is the Fractran program
   and the second is the alphabet of the program."
  (let* ((*cur-prime* 2)
         (*cur-inst-prime* (new-prime))
         (*next-inst-prime* (new-prime))
         (*lisptran-labels* (make-hash-table))
         (*lisptran-vars* (make-hash-table)))
    (values (assemble-helper insts)
            (alphabet *lisptran-vars*))))
initState =
let (c:n:p) = P.primes
in (CompState
{ currInstPrime = P.unPrime c
, nextInstPrime = P.unPrime n
, primes = p
, labels = mempty
, vars = mempty
, gensymCount = 0
})
run a = a & runComp & runExceptT & (`evalState` initState)
Now, we want to run the assembler. Something like this:
λ> [addi "x" 3] :: FracComp repr => [repr [Rational]]
λ> assemble [addi "x" 3] :: FracComp f => f [Rational]
So, assemble
should have the following type:
assemble :: FracComp repr => [repr [Rational]] -> repr [Rational]
We can calculate it as follows:
λ> :t [addi "x" 3]
it :: FracComp repr => [repr [Rational]]
λ> :t sequence [addi "x" 3]
it :: FracComp m => m [[Rational]]
λ> :t concat <$> sequence [addi "x" 3]
it :: FracComp f => f [Rational]
And for kicks, we can generalize concat
to join
, yielding our final
result.
assemble :: (Traversable m, Monad m, Monad f) => m (f (m a)) -> f (m a)
assemble l = join <$> sequence l
λ> run (assemble [addi "x" 3])
Right [375 % 2]
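Since the generalized assemble mentions only Traversable and Monad, it works on ordinary containers too; a small illustration of mine, instantiating m to lists and f to Maybe:

```haskell
import Control.Monad (join)

-- The generalized assemble from above, free of any FRACTRAN specifics.
assemble :: (Traversable m, Monad m, Monad f) => m (f (m a)) -> f (m a)
assemble l = join <$> sequence l

-- With m = [] and f = Maybe, it sequences the effects and flattens:
--   assemble [Just [1,2], Just [3]]  is  Just [1,2,3]
--   assemble [Just [1], Nothing]     is  Nothing
```

This is the same shape as the assembler: a container of effectful program fragments becomes one effectful program.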
The genius of the tagless final approach is that it lets us define new data variants, in this case, new modular pieces of FRACTRAN code.
Some examples:
(deftran while (test &rest body)
(let ((gstart (gensym))
(gend (gensym)))
`((goto ,gend)
,gstart
,@body
,gend
(,@test ,gstart))))
(deftran zero (var)
`((while (>=i ,var 1)
(subi ,var 1))))
(deftran move (to from)
(let ((gtemp (gensym)))
`((zero ,to)
(while (>=i ,from 1)
(addi ,gtemp 1)
(subi ,from 1))
(while (>=i ,gtemp 1)
(addi ,to 1)
(addi ,from 1)
(subi ,gtemp 1)))))
while test body = do
  gstart <- gensym
  gend <- gensym
  assemble
    (concat [[goto gend,
              label gstart],
             body,
             [label gend, test gstart]])
zero var = while (jge var 1) [subi var 1]
move to from = do
gtemp <- gensym
assemble
[ zero to
, while (jge from 1)
[addi gtemp 1, subi from 1]
, while (jge gtemp 1)
[addi to 1, addi from 1, subi gtemp 1]
]
adds a b = do
gtemp <- gensym
assemble
[ while (jge b 1) [addi gtemp 1, subi b 1]
, while (jge gtemp 1) [addi a 1, addi b 1, subi gtemp 1]
]
With the tagless final embedding, we can write Haskell functions that
generate FRACTRAN programs as easily as we construct values. For
instance, a function that takes an integer n
and returns a FRACTRAN
program that computes the sum of the numbers from 1 to n
inclusive.
sumTo :: FracComp repr => Integer > [repr [Rational]]
sumTo n = [ addi "c" 0
, addi "n" n
, while (jge "n" 0)
[adds "c" "n", subi "n" 1]]
Now let’s see the assembler in action!
λ> runAssembler (sumTo 10)
Right [847425747 % 2,13 % 3,19 % 13,11 % 3,11 % 29,31 % 11,41 % 31,
23 % 11,23 % 47,2279 % 23,59 % 301,59 % 41,67 % 413,329 % 67,61 % 59,
73 % 61,83 % 73,71 % 61,71 % 97,445 % 71,707 % 89,103 % 5353,
103 % 83,109 % 5459,5141 % 109,107 % 103,113 % 749,113 % 19,
131 % 113,29 % 131,127 % 113]
We’re done. Let’s see what directions we can take our newly translated FRACTRAN assembler. Since we used the tagless final approach, we can do cool things such as interpreting the values under a different semantic domain. In other words, a fully assembled and final (pun intended) program FracComp f => f [Rational] has a concrete type that depends on the appropriate choice of f, which in turn depends on the call site! In particular, we can let f be the newtype S, defined as
newtype S a = S { unS :: StateT Int (Writer [Doc]) a }
  deriving (Functor, Applicative, Monad,
            MonadWriter [Doc], MonadState Int)
And write the FracComp instance for S.
instance FracComp S where
  lit i = tell [text (show i)] $> []
  label l = tell ["label" <+> text l] $> []
  addi l x = tell ["addi" <+> text l <+> text (show x)] $> []
  jge l x dest = tell ["jge" <+> text l <+> text (show x) <+> text dest] $> []
  gensym = gets (('g' :) . show) <* modify (+ 1)
pretty :: Traversable t => t (S a) -> Doc
pretty x = x
  & fmap unS
  & sequence
  & (`evalStateT` 0)
  & execWriter
  & vcat
pretty works by unwrapping the t (S a) to a stateful writer, then handling the state and writing.
-- Assuming Traversable t:
pretty x = x              -- :: t (S a)
  & fmap unS              -- :: t (StateT Int (Writer [Doc]) a)
  & sequence              -- :: StateT Int (Writer [Doc]) (t a)
  & (`evalStateT` 0)      -- :: Writer [Doc] (t a)
  & execWriter            -- :: [Doc]
  & vcat                  -- :: Doc
λ> pretty (sumTo 10)
addi n 10
jge g2 0 g1
label g0
jge g6 0 g5
label g4
addi g3 1
addi n 1
label g5
jge n 1 g4
jge g9 0 g8
label g7
addi c 1
addi n 1
addi g3 1
label g8
jge g3 1 g7
addi n 1
label g1
jge n 0 g0
But so far we have only defined pretty printers for the basic opcodes; let’s also write specialized printers for the high-level constructs like while. Once again, tagless final helps us achieve this.
instance FracComp S where
  lit i = tell [text (show i)] $> []
  label l = tell ["label" <+> text l] $> []
  addi l x = tell [text l <+> "+=" <+> text (show x)] $> []
  jge l x dest = tell [text l <+> ">=" <+> (text (show x) <+> text dest)] $> []
  gensym = gets (('g' :) . show) <* modify (+ 1)

  jle l x dest = tell [text l <+> "<=" <+> (text (show x) <+> text dest)] $> []
  adds l x = tell [text l <+> "+=" <+> text x] $> []
  subi l x = tell [text l <+> "-=" <+> text (show x)] $> []
  goto l = tell ["goto" <+> text l] $> []
  while test body = do
    censor ((\x -> "while " <> x <> "{") <$>) (test "")
    censor (nest 2 <$>) (sequence body)
    tell ["}"]
    return []
As a result, we can now output FRACTRAN programs in a language resembling C.
λ> pretty (sumTo 10)
c += 0
n += 10
while n >= 0 {
  c += n
  n -= 1
}
Porting code can be challenging, as there are multiple facets to consider: for instance, what if the target language lacks a feature of the source language? Keeping the code idiomatic across paradigms adds further challenges. In this translation, some Lisp functions were omitted entirely, either because they were not needed or because they did not fit the model (for instance, the assemble-helper function). Nevertheless, code translation is (in my opinion) a good way to deepen understanding and practice.
Get the full code here.
Haskellers love monads, and there’s a good reason why. Without monads, Haskell wouldn’t have much of an I/O system at all; as Simon Peyton Jones described it, it was embarrassing to note at conferences that the emperor lacked I/O. Nevertheless, Haskell could be considered the world’s finest imperative language because of the way its type system forces one to separate purity and impurity, allowing the use of pure functions in impure code, but not vice versa.
The next stepping stone on the road to Haskell proficiency is often learning monad transformers, so that effects can be combined. In other words, we want to write code like this, resembling an impure language.
ticke n = do y <- recall
             incr 5
             z <- recall
             if z > n
               then raise "too big"
               else return y
But this blog post is not about monad transformers; it’s about another, less well-known idea: free monads, a neat way to combine effects with less boilerplate that has been applied in works such as Extensible Effects and Polysemy.
This blog post assumes a working knowledge of Haskell, typeclasses, and some category theory.
A free object is a construction that obeys the axioms for that object, generated from something simpler. A canonical example of a free object is a free monoid. Recall that a monoid is a set \(S\) together with a binary operation \(\cdot\) and an element \(e\in S\) such that \(a\cdot(b\cdot c)=(a\cdot b)\cdot c\) for all \(a,b,c\) and \(e\cdot a=a\cdot e=a\) for all \(a\).
So, given a set \(S = \{a, b, c, d\}\), what is the free monoid over it? It’s simply the language \(\{a, b, c, d\}^*\), giving us strings like \(e\), \(aa\), \(abc\), \(acdbcd\) and so on. The only thing we can do with these objects is treat them according to the axioms they obey; in other words, we can only concatenate them and nothing more.
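To make this concrete, here is a small Haskell sketch of my own (the names are not from the text): the free monoid over a type of generators is just the list type over those generators, with concatenation as the operation and the empty list as the identity.

```haskell
-- The free monoid over the generators {a, b, c, d}: lists of generators.
-- Concatenation is the monoid operation; the empty list is the identity e.
data Gen = A | B | C | D deriving (Show, Eq)

type FreeMonoid g = [g]

-- The strings "abc" and "acdbcd" from the text, as values:
abc, acdbcd :: FreeMonoid Gen
abc    = [A, B, C]
acdbcd = [A, C, D, B, C, D]

-- All we can do is concatenate; the axioms hold by the list monoid laws.
assoc :: Bool
assoc = (abc ++ acdbcd) ++ abc == abc ++ (acdbcd ++ abc)
```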
People who have written interpreters may notice that free objects are like valid ASTs of a particular programming language. The axioms let us perform some manipulations; for instance, if we create a free group, we know we can rewrite \(a\cdot a^{-1}\) to \(e\), because that’s one of the group axioms. However, we would be unable to reduce something like \(2+3\) because we don’t have an interpretation for it. This will come up later when we implement interpreters for effects.
Enough of this abstract algebra, let’s see some code!
infixr :+:
data (f :+: g) r = Inl (f r)
                 | Inr (g r)
                 deriving (Functor)
:+: is a type operator, i.e. it takes two type constructors f :: * -> * and g :: * -> * and a type r :: *, producing a new type (f :+: g) r :: *. To construct a value of this type, we use the Inl and Inr constructors. This resembles the Either type, but over higher-kinded types f and g. Categorically, this is the coproduct construction in the functor category.
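As a runnable sketch (repeating the :+: definition so the snippet stands alone; the copair eliminator is my own addition, illustrating the coproduct’s universal property): we build values with Inl and Inr, and eliminate them with one handler per side.

```haskell
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE DeriveFunctor #-}

infixr :+:
data (f :+: g) r = Inl (f r)
                 | Inr (g r)
                 deriving (Functor)

-- A Maybe value embedded on the left, a list embedded on the right:
left, right :: (Maybe :+: []) Int
left  = Inl (Just 3)
right = Inr [1, 2, 3]

-- The universal property of the coproduct: given a handler for each side,
-- we get a handler for the whole coproduct.
copair :: (f r -> b) -> (g r -> b) -> (f :+: g) r -> b
copair fl _  (Inl x) = fl x
copair _  fr (Inr y) = fr y
```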
Now the main data type:
data Term f a = Pure a
              | Impure (f (Term f a))
We can read Term f a as a term of our language generated by f, having a final value of type a. Here are some examples.
λ> Pure 3
Pure 3 :: Term f Integer
λ> Impure (Just (Pure 3))
Impure (Just (Pure 3)) :: Term Maybe Integer
λ> Impure Nothing
Impure Nothing :: Term Maybe a
But generated how? Notice that the argument to Impure has type f (Term f a), but we’re showing how to construct a term of type Term f a in the data declaration, so this is like a fixpoint of a function, with Pure as the “base case”. In the case that f happens to be a functor, Term is what is known as a fixpoint of a functor, as seen in papers such as Data types à la carte by Swierstra.
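For comparison, here is a sketch of the plain fixpoint of a functor (the Fix name is mine, following the literature); Term adds a Pure base case on top of this.

```haskell
-- The plain fixpoint of a functor: a value whose every layer is an f.
newtype Fix f = Fix (f (Fix f))

-- Fix Maybe is the natural numbers: Nothing is zero, Just is successor.
zero, two :: Fix Maybe
zero = Fix Nothing
two  = Fix (Just (Fix (Just zero)))

-- Count the layers of Just.
toInt :: Fix Maybe -> Int
toInt (Fix Nothing)  = 0
toInt (Fix (Just n)) = 1 + toInt n
```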
It gets better. If we know f is a functor, we know that it must have fmap; even better, we can write the Functor, Applicative and Monad instances for Term f!
instance Functor f => Functor (Term f) where
  fmap f (Pure x) = Pure $ f x
  fmap f (Impure t) = Impure $ fmap (fmap f) t

instance Functor f => Applicative (Term f) where
  pure = Pure
  Pure f <*> Pure x = Pure $ f x
  Pure f <*> Impure b = Impure $ fmap (fmap f) b
  Impure fx <*> a = Impure $ fmap (<*> a) fx

instance Functor f => Monad (Term f) where
  return = pure
  Pure x >>= f = f x
  Impure t >>= f = Impure $ fmap (>>= f) t
At first sight this looks scary, but in fact it isn’t at all. Here’s the same code with only the last line of each instance declaration shown.
instance Functor f => Functor (Term f) where
  -- ...
  fmap f (Impure t) = Impure $ fmap (fmap f) t

instance Functor f => Applicative (Term f) where
  -- ...
  Impure fx <*> a = Impure $ fmap (<*> a) fx

instance Functor f => Monad (Term f) where
  -- ...
  Impure t >>= f = Impure $ fmap (>>= f) t
Notice how we’re implementing >>= with >>= on the right, and <*> with <*>, but applying them via fmap? It seems like an impossible trick at first; we don’t even know that f forms a monad! But we do know some laws about applicatives and monads.
fmap f (pure x) = pure (f x)
pure f <*> pure x = pure (f x)
pure x >>= f = f x
Believe it or not, this is exactly the code we need to implement the fmap, <*> and >>= operators for Term f! Perhaps the only thing left to explain is the implementation of <*> in the case where the first argument is pure and the second argument is impure, left as an exercise.
Pure f <*> Impure b = Impure $ fmap (fmap f) b
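With the instances in place, here is a small runnable check (repeating the definitions so the snippet stands alone; the runMaybe helper is my own, not from the post): a do-block over Term Maybe sequences through an Impure layer exactly as the Monad instance prescribes.

```haskell
{-# LANGUAGE DeriveFunctor #-}

data Term f a = Pure a
              | Impure (f (Term f a))

instance Functor f => Functor (Term f) where
  fmap f (Pure x) = Pure $ f x
  fmap f (Impure t) = Impure $ fmap (fmap f) t

instance Functor f => Applicative (Term f) where
  pure = Pure
  Pure f <*> Pure x = Pure $ f x
  Pure f <*> Impure b = Impure $ fmap (fmap f) b
  Impure fx <*> a = Impure $ fmap (<*> a) fx

instance Functor f => Monad (Term f) where
  return = pure
  Pure x >>= f = f x
  Impure t >>= f = Impure $ fmap (>>= f) t

-- A term with one Impure layer, sequenced with a pure computation.
prog :: Term Maybe Int
prog = do
  x <- Impure (Just (Pure 1))
  y <- pure 2
  return (x + y)

-- Collapse Term Maybe: Nothing means failure, Just peels off a layer.
runMaybe :: Term Maybe a -> Maybe a
runMaybe (Pure a)          = Just a
runMaybe (Impure Nothing)  = Nothing
runMaybe (Impure (Just t)) = runMaybe t
```

Here prog reduces to Impure (Just (Pure 3)), since the bind pushes the continuation under the Just via fmap.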
We’re quite close now, just one more class and a couple of instances and we’ll have the core for our free monad effects library.
class (Functor sub, Functor sup) => sub :<: sup where
  inj :: sub a -> sup a
sub and sup are functors, and inj is the natural transformation that turns a sub into a sup. Intuitively we can think of sup as allowing an embedding of sub into it, but not necessarily the other way around. Now that we’re in the functor category, we know there is an identity natural transformation from every functor to itself, so there must be an embedding of every functor into itself.
instance Functor f => f :<: f where
  inj = id
What about the relation between f and f :+: g? Now that we’re talking category theory, f :+: g is really a coproduct of f and g in the functor category. So indeed there is a morphism from f to f :+: g, the Inl morphism.
instance (Functor f, Functor g) => f :<: (f :+: g) where
  inj = Inl
Finally, we want to be able to extend an existing embedding. Let’s say we are given functors f, g and h, and know that f :<: g. It stands to reason that if f already has an embedding into g, it also has an embedding into whatever g can embed into. In particular, g can embed into h :+: g. Therefore, f can be embedded into h :+: g as well. First we embed f into g via inj, then we go from g to h :+: g by Inr. inj and Inr are natural transformations, so we can compose them to get a natural transformation from f to h :+: g.
instance (Functor f, Functor g, Functor h, f :<: g) => f :<: (h :+: g) where
  inj = Inr . inj
First, we’re going to want to convert a Term f a into an a, i.e. escape out of the Term monad. We can extract such an a by pattern matching on Pure a. It now remains to settle on an appropriate functor f. We don’t want to handle the Impure case, because that would mean we have some function of type f a -> a, which may not exist (e.g. for [], IO, Maybe). Since Impure should not happen, f (Term f a) must not happen either, which means a -> f a should not exist!
A functor f with no way of getting from a -> f a nor f a -> a? This forces our choice of f to be the Void functor, which has no constructors.
data Void t
  deriving (Functor)

efRun :: Term Void a -> a
efRun (Pure a) = a
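As a quick sanity check (repeating the definitions so the snippet stands alone; EmptyDataDeriving is needed for the derived Functor on an empty data type in recent GHCs): since Void has no constructors, the only closed terms of Term Void a are Pure values, so the single-equation efRun never misses.

```haskell
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE EmptyDataDeriving #-}

data Term f a = Pure a
              | Impure (f (Term f a))

-- No constructors: an Impure (Void (Term Void a)) can never be built.
data Void t
  deriving (Functor)

efRun :: Term Void a -> a
efRun (Pure a) = a

example :: Int
example = efRun (Pure 42)
```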
And that’s it. While it looks like boilerplate, we can more or less mechanically write out the instances, thanks to universal objects and notions from category theory. To really illustrate the power of this, we’re going to see a series of increasingly ambitious examples of effects.
Let’s implement a simple calculator with a single Int register. We define the operations of this calculator separately: Incr and Reader.
- Incr i t increments the state by i and returns t
- Reader f extracts the state and runs f on it
- Mem i represents the state having value i
data Incr t = Incr Int t deriving (Functor)
data Reader t = Reader (Int > t) deriving (Functor)
data Mem = Mem Int deriving (Show)
Given the commands, we need to define their denotation, i.e. their meaning. This is given by the efRunCalc function. If the command is Pure x, just return x along with the state.
efRunCalc :: Functor r => Mem
          -> Term (Incr :+: (Reader :+: r)) a
          -> Term r (a, Mem)
efRunCalc s (Pure x) = return (x, s)
If the command is Incr k r, modify the state by k and continue with r. The slight twist is that we need to pattern match with Inl in order to extract Incr from (Incr :+: (Reader :+: r)). To extract Reader from (Incr :+: (Reader :+: r)) we have to match Inr followed by Inl. Finally, to handle r, we fmap (efRunCalc s) over t.
efRunCalc (Mem s) (Impure (Inl (Incr k r))) = efRunCalc (Mem (s + k)) r
efRunCalc (Mem s) (Impure (Inr (Inl (Reader r)))) = efRunCalc (Mem s) (r s)
efRunCalc s (Impure (Inr (Inr t))) = Impure (efRunCalc s <$> t)
We’ll also define a couple of helper functions; incr and recall can be thought of as the user-facing functions for the calculator effects.
inject :: (g :<: f) => g (Term f a) -> Term f a
inject = Impure . inj

incr :: (Incr :<: f) => Int -> Term f ()
incr i = inject (Incr i (Pure ()))

recall :: (Reader :<: f) => Term f Int
recall = inject (Reader Pure)
Now we can combine our Incr and Reader effects in a single monad.
tick :: (Incr :<: f, Reader :<: f) => Term f Int
tick = do
  y <- recall
  incr 1
  return y
What you see in the code examples is all there is, no monad transformers needed. We can combine effects freely. When we interpret effects, we handle them one by one.
λ> tick & efRunCalc (Mem 4) & efRun
(4,Mem 5)
Suppose now we want to add a raise command to our calculator: the Exc effect. This is done separately. A pattern arises for making new effects: we define our type, our user-facing functions, and an interpreter for our effect.
data Exc e t = Exc e deriving Functor
raise :: (Exc e :<: f) => e -> Term f a
raise e = inject (Exc e)
efRunExc :: Functor r => Term (Exc e :+: r) a -> Term r (Either e a)
efRunExc (Pure x) = return (Right x)
efRunExc (Impure (Inl (Exc e))) = return (Left e)
efRunExc (Impure (Inr t)) = Impure (efRunExc <$> t)
Due to our coproduct construction, Exc is separate from the other effects. ticke uses all the effects we have seen so far.
ticke :: (Exc String :<: f, Incr :<: f, Reader :<: f) => Int -> Term f Int
ticke n = do y <- recall
             incr 5
             z <- recall
             if z > n
               then raise "too big"
               else return y
efRunExcString :: Functor r => Term (Exc String :+: r) a
               -> Term r (Either String a)
efRunExcString = efRunExc
The reason why we need efRunExcString is that we need to be explicit about handling the Exc String effect; otherwise we get a type error about there being no instance for (Exc String :<: Void). It’s really a type inference failure, because Haskell could not infer the instance at the call site (this is a problem effect libraries attempt to address).
We can also show that the order in which effects are handled makes a difference. For instance, in the first line below we handle the exception first then the state after (so the state persists even though an exception is thrown), and in the second line the state is handled first followed by the exception, which means the state is discarded when the exception is thrown.
λ> ticke 1 & efRunExcString & efRunCalc (Mem 0) & efRun
(Left "too big",Mem 5)
λ> ticke 1 & efRunCalc (Mem 0) & efRunExcString & efRun
Left "too big"
For our final example, we generalize the state to an arbitrary type s, and write an imperative program to sum the first n numbers.
-- Enable the ConstraintKinds language extension for this
data Writer s t = Put s t deriving Functor
data Reader s t = Get (s -> t) deriving Functor

-- The state wrapper and the combined effect stack:
data State s = State s deriving Show
type St s r = Writer s :+: (Reader s :+: r)
type StEff s r = (Reader s :<: r, Writer s :<: r)

efRunSt :: Functor r => State s -> Term (St s r) a -> Term r (a, s)
efRunSt (State s) (Pure x) = return (x, s)
efRunSt (State s) (Impure (Inl (Put k r))) = efRunSt (State k) r
efRunSt (State s) (Impure (Inr (Inl (Get r)))) = efRunSt (State s) (r s)
efRunSt s (Impure (Inr (Inr t))) = Impure (efRunSt s <$> t)

put :: (Writer s :<: f) => s -> Term f ()
put s = inject (Put s (Pure ()))

get :: (Reader s :<: f) => Term f s
get = inject (Get Pure)
sumN :: (StEff (Integer, Integer) f) => Term f ()
sumN = do (acc, n) <- get
          if n == 0
            then return ()
            else do { put (acc + n, n - 1); sumN }
sumNEx :: ((), (Integer, Integer))
sumNEx = sumN
       & efRunSt (State (0, 10))
       & efRun
Run the code shown in this post by downloading the gist.
I hope this post was illuminating in understanding what free monads are and how they are implemented. There’s quite a lot of literature on the subject, and the various effect libraries in use are worth checking out, because they address problems we weren’t able to handle in our from-scratch implementation: reducing boilerplate even further, producing better error messages, among other things.