MPL Research Report, First Ed.
04 Jun 2015

In this novella I’ll explain MPL, the research language I’m working on in the Programming Language Lab at the University of Calgary. It’s a concurrent language, which is a bit strange because none of us here have any major experience making concurrent software. Instead, MPL is derived largely from Category Theory, a branch of mathematics that I have so far been unable to understand personally.
At first this was just going to be a summary-tutorial about what I had learned about MPL, but it turned out that I learned a lot in the process of writing all this, so now this document does three things:
- It teaches MPL to anyone who wants to know (mainly, those working for Dr. Cockett).
- It goes far beyond the incomplete example programs previously written, introducing a variety of small, complete, realistic examples.
- It teaches the limitations of the MPL language as currently planned, and proposes design changes and workarounds to deal with them. The example programs are crucial to show why we need these changes. It seems to be impractical to write larger (but still small!) programs without changing MPL.
As originally conceived, MPL is divided into two separate parts, a “concurrent” part (“processes”) and a sequential part (“functions”). The sequential part is not very important (it’s basically a pure functional language like Haskell), but it has a couple of interesting concepts so I’ll talk about it for a while.
The original syntax of MPL is very unusual. To make this document easier to follow, I’ll use a C++/Java-style syntax instead (and I will assume you know C++, C# or Java). I’ll still call it MPL, because the “real” syntax is not stable and not important; it’s the internal ideas that matter.
MPL as originally conceived uses left and right “polarities”. MPL in this document does not use polarities because they don’t fit nicely into the C-style syntax.
The key properties of MPL
MPL has two key properties:
- Type-safe concurrency
- Guaranteed termination (optional)
MPL programs consist of a set of “processes” that run concurrently. Conceptually, what we call processes are threads. Much like Erlang, if MPL had a production-quality compiler it would surely support some kind of lightweight processes (e.g. green threads with small stacks), because it would probably not be unusual for MPL programs to consist of hundreds or thousands of processes.
Processes communicate with each other exclusively by passing messages; there is no concept of shared memory, so MPL would easily scale across multiple computers. However, if you happen to be passing large data structures between processes then shared memory could (as an implementation detail) improve performance. MPL “solves” the problem of shared mutable state simply by not allowing mutable variables; the Haskell language already showed this design option to be viable (as suggested by Haskell’s mathematical predecessor, a certain variant of the lambda calculus).
Type-safe concurrency means that when one process sends an integer, there is always another process that is ready (or will be ready in the future) to receive an integer. When one process sends a string, another is ready to receive a string. And so on. Communication between processes is governed by special types called protocols. A protocol is a type that explains how communication works between a pair of processes, on a “channel” (a channel is a connection between exactly two processes). Here’s a simple example of a protocol, which I’ll call a “lookup” protocol:
- Put an integer.
- Get a string.
- Start over.
If one process is following the above protocol, another process must be following the complement (or negation) of the protocol:
- Get an integer.
- Put a string.
- Start over.
As an example, imagine a simple “list service” process that contains a large list of strings. It follows the second protocol: it gets an integer (waiting if necessary), and when it gets one, it treats it as an index, looks up the string at that index in the big list, and sends the string back.
There are a couple of things missing, though. First, the list service needs to initialize when it starts, so a process can take zero or more so-called “sequential” parameters which must be provided to start the process. Some outside code might provide the list this way. How does one start a process? I’ll get to that later.
Second, the list service needs to know when it should shut down. MPL has named protocol “constructors” or “selectors” which represent different “paths” through the protocol. In one of the paths, one of the processes may be required to shut down (the other process keeps running). For example:
- Choose either:
  - Lookup: Put an integer, get a string, and start over.
  - StopLookups: Close the channel.
The complement protocol, implemented by the “list service”, is
- Receive either:
  - Lookup: Get an integer, put a string, and start over.
  - StopLookups: Shut down the current process.
Here’s what the protocol might look like in source code (one of many possible choices of syntax):
protocol StringLookup = put Lookup: put Int, get String, loop
| put StopLookups: close;
In this version of the syntax, “loop” means “start over at the beginning of StringLookup”. Since there’s a “close” on the StopLookups branch, the process using the complement of StringLookup must halt itself. Note that the two paths (Lookup and StopLookups) must both start with “put” or both start with “get”. It makes no sense otherwise: which process would choose the path to take? And in MPL as originally conceived, the protocol definition is also defining the identifiers Lookup and StopLookups, and these names must be unique program-wide.
Actually I’m going to talk about “MPL as originally conceived” a lot, so I’ll abbreviate that as MPL-AOC.
Q. Can there be another “version” of the complement protocol in which neither process shuts down? A. In MPL-AOC, no. Whenever a channel closes, one process always shuts down.
Now here’s how the list service might look:
process ListService(neg<StringLookup> channel, String[] data)
{
switch (channel.get) {
case Lookup:
let index = channel.get;
channel.put(data[index, ""]);
ListService(channel, data);
case StopLookups: // no-op
}
}
Here, neg<StringLookup> is the complement of the StringLookup protocol.
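For contrast, here is a minimal sketch of my own (not one of the existing MPL examples) of a client process that follows the non-negated StringLookup protocol; it performs a single lookup, discards the result, and ends the session:
process LookupOnce(StringLookup channel, Int index)
{
    channel.put(Lookup);        // choose the Lookup path
    channel.put(index);         // put an integer
    let result = channel.get;   // get a string; the protocol loops back to its start
    channel.put(StopLookups);   // choose the StopLookups path
    channel.close();            // close the channel; ListService halts on its side
}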
MPL-AOC does not support loops; instead, a process has to “call” itself. It wasn’t made clear to me whether this is truly a “call” or actually a “goto”, because in all existing examples the call happens at the very end of the process. And if it is a call, I wasn’t told whether it’s sequential (blocking) or parallel. There was a concept of “plug” for creating new processes; Dr. Cockett recently decided that “plug” would occur implicitly, making it syntactically the same as a call, so I would guess that a “call”, like a “plug”, is a parallel call. This implies that there is no call stack of processes.
Of course, you can implement something like protocols in an existing programming language, but the verification that your code follows the protocol would have to happen at run-time. In MPL the program will not compile unless the compiler can tell that your code always follows the protocol.
MPL-AOC has no arrays either, so to understand “String[]” we can either imagine a version of MPL that has arrays, or imagine String[] as some other data structure that MPL-AOC does support, such as a linked list. MPL also has no concept of “exceptions”, so the act of getting data from the list cannot fail. I’ve used data[index, ""] to mean “get the item at index, or an empty string if the index is invalid.”
It should be noted that this concept of protocols-on-channels already exists; it’s called “session types” and was invented by others. I was given the impression that the first paper on session types was published before I arrived but after this lab started designing MPL. It looks like the first paper on modern session types is “Multiparty Asynchronous Session Types”, published in 2008. I’m no mathematician, but it looks to me like session types are isomorphic (equivalent) to MPL’s typed protocols.
The second main feature of MPL-AOC is guaranteed termination. If you follow certain rules,
- Sequential code is always guaranteed to terminate (no infinite loops)
- Concurrent code cannot have deadlocks or livelocks
So the compiler may be able to prove that the whole program terminates. Now, I’ll talk about sequential MPL for a while, then resume discussion of the concurrent part.
Sequential MPL
Roughly speaking, the sequential part of MPL is a simplified version of Haskell with a couple of extra features so that you can write code that is guaranteed to terminate. Unlike Haskell, MPL is not “lazy”; it runs code immediately like most other programming languages, evaluating a function’s arguments before calling the function. The current plan is not to have currying as a core feature either, which means that continuing to use Haskell-style syntax would look downright strange (different from every other language I’ve ever seen). Also, closures aren’t planned to be plain functions, but are more like Java’s anonymous inner classes.
Something called fold can verify that a given subprogram will (eventually) terminate. Consider the following MPL code (using a funny mix of C++ and Haskell style):
data List<t> = Nil // Empty linked list
| Cons(t, List<t>); // Add an item to the list
fold {
Int length(fold by List<t> list) {
switch list {
case Nil: 0
case Cons(_, rest): 1 + length(rest)
}
}
}
Note that in Haskell the equivalent type
data List t = Nil | Cons t (List t);
could be an infinite list, whereas this data type must be finite (which follows from immutability and the decision not to use lazy evaluation). MPL has a separate category of data types that can be infinite (the “coinductive” ones, see below). Also note that normal data types and “constructors” (Nil & Cons) must start with a capital letter; the lowercase t implies that t is a generic parameter, so that (in Haskell lingo) length is a polymorphic function. By the way, MPL-AOC has type inference, so you could leave out the type of list and the return type of length. However, type inference is pedagogically counterproductive, so in this document I’ll usually indicate the types of everything explicitly.
The fold {...} block is a group of functions that are allowed to call each other recursively (often it’s a group of size one). fold by tells the compiler that the specified function parameter must be “reduced” during recursion. Here you can see that the list is unwrapped by one level, e.g. Cons(1, Cons(2, Cons(3, Nil))) would become Cons(2, Cons(3, Nil)). The compiler verifies that this unwrapping occurs, and that the call chain stops at Nil, to prove that the function will eventually terminate. And remember, there can be no infinite loops, because loops don’t exist.
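To make the termination check concrete, here’s a sketch of the kind of definition the compiler would have to reject, because the fold by parameter is passed to the recursive call without being reduced:
fold {
    // Rejected: the recursive call receives `list` itself rather than `rest`,
    // so the fold by parameter never shrinks.
    Int badLength(fold by List<t> list) {
        switch list {
            case Nil: 0
            case Cons(_, rest): 1 + badLength(list)
        }
    }
}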
How exactly is the compiler to verify that the function terminates? Well, we were given a plan based on an idea that I think was called “circular types”. I didn’t like the plan, though, because it requires exactly one level of unwrapping; for example, it makes the following variation of the function (which unwraps two levels) illegal:
fold {
Int length(fold by List<t> list) {
switch list {
case Nil: 0
case Cons(_, rest):
switch rest {
case Nil: 1
case Cons(_, more): 2 + length(more)
}
}
}
}
To complete this example, here’s another type and a couple more functions:
// This type represents zero or one items. It's like "Nullable<T>" in C#.
data Maybe<t> = Nothing | Just(t);
t unwrap(Maybe<t> value, t defaultValue) {
switch value {
case Just(x): x
case Nothing: defaultValue
}
}
Maybe<t> head(List<t> list) { // Get first item in list
switch list {
case Nil: Nothing
case Cons(h, _): Just(h)
}
}
List<t> tail(List<t> list) {
switch list {
case Nil: Nil
case Cons(_, rest): rest
}
}
By the way, in MPL-AOC, if your function started with case __ of (i.e. switch) you could split up the function into pieces, so the following definitions would be equivalent to the above:
unwrap(Just x, _) { x }
unwrap(Nothing, defaultValue) { defaultValue }
head(Nil) { Nothing }
head(Cons(h, _)) { Just(h) }
tail(Nil) { Nil }
tail(Cons(_, rest)) { rest }
List<t> is known as an “inductive” data type, meaning its shape is tree-like (acyclic) and always ends; you can’t make a linked list that refers back to itself. There is another kind of data type that can appear to go on forever, called the “coinductive” data type. A simple example is the “stream”:
codata Stream<t> {
Maybe<t> head();
Stream<t> tail();
}
A coinductive data type is basically like an interface in Java, but without support for inheritance, and you create a value of such a data type with a “record” expression, which works much like creating an anonymous class in Java:
Stream<Int> intStream(Int start)
{
record Stream<Int> {
Maybe<Int> head() { Just start }
Stream<Int> tail() { intStream(start + 1) }
}
}
Note that the variable start is captured from the outer function intStream(); since support for ordinary anonymous inner functions is not planned, this is how you make closures in MPL. Terminology note: what normal people would call “methods” or “member functions” are called “destructors” in MPL-AOC. This name reflects a mathematical duality with the “constructors” of inductive (data) types.
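Here’s one more capture example, a small sketch of mine (not from the existing MPL examples): a stream that repeats a single value forever.
Stream<t> constantStream(t value)
{
    record Stream<t> {
        Maybe<t> head() { Just value }              // `value` is captured from the outer function
        Stream<t> tail() { constantStream(value) }  // the "next" stream repeats the same value
    }
}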
Since everything is immutable in MPL, you can’t tell a Stream to “advance”; tail() must create a new stream object instead. Here are two examples that construct lists out of streams:
fold {
// Constructs a linked list from the given stream. Since a stream
// may be infinite, `count` specifies the max size of the output.
List<t> streamToList(Stream<t> s, fold by Int count) {
if (count <= 0)
Nil
else
switch s.head() {
case Nothing: Nil
case Just h: Cons(h, streamToList(s, count-1))
}
}
}
fold {
// Constructs a linked list in reverse order from the given stream.
List<t> reverseStream(Stream<t> s, fold by Int count) {
reverseStreamImpl(s, count, Nil);
}
List<t> reverseStreamImpl(Stream<t> s, fold by Int count, List<t> list) {
if (count <= 0)
list
else
switch s.head() {
case Nothing: list
case Just h: reverseStreamImpl(s.tail(), count - 1, Cons(h, list))
}
}
}
I’m cheating a little here by assuming you can “fold by Int”; MPL-AOC had no such feature (perhaps it didn’t have integers at all). Because of fold by Int, the compiler ought to give an error if I wrote if (count == 0) instead of if (count <= 0) (if it said count == 0, count = -1 would cause infinite recursion).
MPL-AOC did have type inference so you could leave out the data types. But in MPL-AOC, head() and tail() were defined in the global namespace, not called using the dot notation. I don’t like that because it implies that no other head or tail function can exist in the program (a serious annoyance of Haskell’s record types), given that (as in Haskell) function overloading is not supported.
You can convert a List<t> into a Stream<t>:
Stream<t> toStream(List<t> list)
{
record Stream<t> {
// this code calls the head() and tail() functions defined earlier
Maybe<t> head() { head(list) }
Stream<t> tail() { toStream(tail(list)) }
}
}
But since the compiler can prove termination only with inductive data, it’s preferred to use inductive data where possible. Note that there’s no such thing as a “class”; you must use record inside a function to construct coinductive data. (In Java terminology, you can think of toStream(list) as a constructor of an anonymous class that implements the Stream interface, and record means “new anonymous class”.)
If you need to simulate a data structure that contains a cycle, you could use codata that refers to some other data or codata. For example, this code constructs a cyclic (repeating) stream from a linked list:
Stream<t> toCyclicStream(List<t> list) { toCyclicStream'(list, list); }
Stream<t> toCyclicStream'(List<t> ptr, List<t> list) {
record Stream<t> {
Maybe<t> head() { head(ptr) }
Stream<t> tail() {
switch ptr {
case Nil : toCyclicStream'(list, list)
case Cons(x,Nil) : toCyclicStream'(list, list)
case Cons(x,rest): toCyclicStream'(rest, list)
}
}
}
}
Often there is an equivalence between inductive and coinductive data. For instance we can represent a “pair” of things in either form:
// Inductive: "destructors" (first & second) must be defined manually
data Pair<a,b> = Pair(a, b)
first(Pair(x,y)) { x }
second(Pair(x,y)) { y }
// Coinductive: "constructor" (mk_pair) must be defined manually
codata Pair<a,b> { a first(); b second(); }
Pair<a,b> mk_pair(a itemA, b itemB) {
record Pair<a,b> {
first() { itemA }
second() { itemB }
}
}
In the rest of this document I’ll assume MPL has a built-in linked list type. Instead of Nil I’ll write [] from now on; instead of using Cons(head,tail) to add a new item at the front of the list, I’ll write head:tail; and I will refer to the List<t> type as [t] for short.
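To illustrate the shorthand, here’s the earlier length function restated with it (nothing new, just notation):
fold {
    Int length(fold by [t] list) {
        switch list {
            case []: 0
            case (_:rest): 1 + length(rest)
        }
    }
}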
MPL isn’t planned to support “traditional” lambda functions or closures. Instead, the plan is to use codata for the same purpose. Thus, the standard map and filter functions (known as map and filter in Java Streams, or Select and Where in C#) would be defined as
codata Func<t, ret> {
ret call(t input);
}
fold {
[ret] map(Func<t, ret> mapper, fold by [t] data)
{
switch data {
case (item:rest): mapper.call(item) : map(mapper, rest)
case []: []
}
}
[t] filter(Func<t, Bool> mapper, fold by [t] data)
{
switch data {
case (item:rest):
if (mapper.call(item))
item : filter(mapper, rest)
else
filter(mapper, rest)
case []: []
}
}
}
And here’s another function I’ll use later.
fold {
[t] take(fold by Int qty, [t] list)
{ // Note: `fold by [t] list` is equally valid
if (qty <= 0)
[]
else switch list {
case []: []
case (x:xs): x : take(qty - 1, xs)
}
}
[t] skip(fold by Int qty, [t] list)
{
if (qty <= 0)
list
else switch list {
case []: []
case (x:xs): skip(qty - 1, xs)
}
}
}
Concurrent MPL
Protocols
As mentioned earlier, a protocol is a list of named communication actions. Two processes run concurrently and are connected together through a protocol. One process follows the protocol, and the other follows the complement of the protocol. The complement is also known as the negation, written in this document as neg<NameOfProtocol>.
For example, we might represent a basic e-commerce protocol like this:
data Result = Success | Fail(String);
data ProductId = ProductId(String);
// Street, city, province, postal code
data StreetAddress = StreetAddress(String, String, String, String);
// CC number, security code, expiry
data CreditCardInfo = CreditCardInfo(String, String, String);
protocol CartSystem
= put AddToCart: put ProductId, put Int /*quantity*/, loop
| put RemoveFromCart: put ProductId, get Result, loop
| put GetCartContents: get [Pair<ProductId, Int>], loop
| put PurchaseAll: put StreetAddress, put CreditCardInfo, get Result, loop
| put EndSession: halt;
All (five) branches of the protocol must start with the same keyword, either put or get. Each branch ends with loop (meaning, start over with a new action), close (close the channel), or halt (close the channel and terminate the current process). Protocols can also switch to other protocols (more on that later).
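To show how such a protocol would be consumed, here is a sketch of my own (not existing MPL code) of a process that holds neg<CartSystem>; removeItem and chargeCard are hypothetical sequential helper functions.
process CartService(neg<CartSystem> client, [Pair<ProductId, Int>] cart)
{
    switch client.get {
        case AddToCart:
            let id  = client.get;
            let qty = client.get;
            CartService(client, Pair(id, qty) : cart);
        case RemoveFromCart:
            let id = client.get;
            client.put(Success);                       // a real version would put Fail(...) if the item is absent
            CartService(client, removeItem(id, cart)); // removeItem: hypothetical helper
        case GetCartContents:
            client.put(cart);
            CartService(client, cart);
        case PurchaseAll:
            let addr = client.get;
            let card = client.get;
            client.put(chargeCard(card, addr, cart));  // chargeCard: hypothetical helper returning a Result
            CartService(client, []);
        case EndSession: // the client halts; nothing more happens on this channel
    }
}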
Hello World & terminal output
It’s natural to ask how we can write “Hello, World!” on the screen… even if MPL’s designer never thought of it until I came along.
Actually, this is a nontrivial design question because MPL has no global variables or mutable state. The “sequential” part of MPL-AOC has no side effects or anything resembling them, but the concurrent part does at least have “channels”, and it is natural that the terminal be represented by some kind of channel.
There are different ways that a terminal API could work, and it’s not really obvious which to choose. First of all, can we have multiple terminals? Well, we want MPL to be able to run on multiple computers, but we’re very early in development, and in any case it seems like there should be a “default” terminal: the one connected to stdin and stdout, or at least the same terminal that was used to start the program.
And since channels are bidirectional, should the input and output be bundled together as a single channel, or split into two separate channels? You might say “well, we should bundle them: we can’t read a line of input and write a line at the same time. We need bundling to enforce an ordering between reads and writes”. However, terminals actually can read and write simultaneously, even if it makes the terminal look and act clumsily. Another issue is that stdout is capable of writing single characters, but stdin is (on some platforms anyway) limited to reading entire lines at once. But I guess we could ignore that for the purpose of this decision.
So let’s say we have a character-based terminal. Possible protocols include…
// Predefined (`GetSeq<Char>` for an input terminal)
protocol GetSeq<t> = put Get: get t, loop
| put StopGet: close
// Predefined (`PutSeq<Char>` for an output terminal)
protocol PutSeq<t> = put Put: put t, loop
| put StopPut: close
// Predefined (`BidiSeq<Char>` for a bidirectional terminal)
protocol BidiSeq<t> = put Get1: get t, loop
| put Put1: put t, loop
| put StopBidi: close
Of course, it occurs to me that requiring tags like Get and Get1 to be unique program-wide is going to get annoying fast, especially the Stop commands (which are already annoying just by being included explicitly). Personally, I’m wondering if a union type system is the answer…
Anyway, if there are two separate channels, we can easily make a process that combines them:
process makeBidi(neg<BidiSeq<t>> combined, GetSeq<t> getc, PutSeq<t> putc)
{
switch combined.get {
case Get1 : getc.put(Get); combined.put(getc.get);
case Put1 : putc.put(Put); putc.put(combined.get);
case StopBidi: getc.put(StopGet); putc.put(StopPut); halt;
}
makeBidi(combined, getc, putc);
}
However, given a bidirectional channel, it’s not generally possible to separate it into GetSeq and PutSeq. We’ll see why later, but my conclusion is that it’s better to provide separate terminal input and output channels. That said, Jonathan picked a combined design.
Note: In MPL-AOC, a protocol that started with “put Tag:” was called a “protocol”, while a protocol that started with “get Tag:” was called a “coprotocol”, and the word “get” or “put” was left out:
protocol Foo =>> c = DoFoo:: ...; /// Current MPL parser
protocol Foo = put DoFoo: ...; /// My proposed syntax
coprotocol c =>> Bar = DoBar:: ...; /// Current MPL parser
protocol Bar = get DoBar: ...; /// My proposed syntax
Obviously, MPL isn’t the kind of language where you can actually implement access to a terminal; if MPL is built as a standalone language, the terminal channels must be provided from outside the program, and for simplicity we’ll probably make them built-in.
So again, how do we write “Hello, World”?
At the outer level of the program (outside any process) there is a mechanism to run a group of processes; at this level, the terminal channels can each be predefined, with one endpoint pre-established. You can start multiple processes, but the compiler will verify that each terminal channel is used only once. Let’s suppose the names of our terminals are stdin and stdout (of course, they may be files rather than terminals, which is fine). The current plan is that stdin and stdout can only be mentioned at the top level, so we could write “Hello, World!” as
// In MPL-AOC you'd surround the next two lines in some kind of "run" block.
makeBidi(~console, stdin, stdout);
helloWorld(console);
process helloWorld(BidiSeq<Char> console)
{
console.put(Put1); console.put('H');
console.put(Put1); console.put('e');
console.put(Put1); console.put('l');
console.put(Put1); console.put('l');
console.put(Put1); console.put('o');
// add 14 more statements here
console.close();
}
This code is horrifyingly tedious, but it can be improved with helper process(es). The identifier console wasn’t defined anywhere; I have the impression that Dr. Cockett’s new plan is to allow new channels to be created implicitly, by creating a channel whenever an undefined identifier is used in exactly two places in a series of wiring (i.e. process-start) commands. Here, the notation ~console means “the negation (complement) of the protocol of console”. Either side of the channel could be negated:
makeBidi(console, stdin, stdout);
helloWorld(~console);
In fact, MPL-AOC doesn’t use complements; instead there is a concept of “polarity”: left and right. In that design, a channel must always be connected to the “right” side of one process and to the “left” side of another process; in this case, “negation” is never necessary (at least in theory). In fact, the current MPL design also includes polarity, and hey, won’t my labmates be surprised to see that I’m talking about an MPL without polarity! I believe that MPL with polarities is equivalent to MPL without polarities, and there’s only one reason I’ve described MPL without polarities here: clarity. Most programmers are familiar with C++/C#/Java syntax, and Dr. Cockett’s current syntax is potentially confusing, especially the signature (header). It looks like this:
process bulkPutAdapter :: PutSeq(|[t]) =>> PutSeq(|t) | [t] ::=
client =>> putseq = putInitially ->>
case putInitially of
[] ->>
handle client as
Put ->> get buf on client
bulkPutAdapter(client =>> putseq | buf)
StopPut ->> put StopPut on putseq
close putseq
end client
c:rest ->
put Put on putseq
put c on putseq
bulkPutAdapter(client =>> putseq | rest);
I simply think beginners will be more comfortable with an ordinary argument list.
Let’s assume that a String is a synonym for a linked list of characters ([Char]), using Haskell notation to prepend a character (head:tail). Then the following is a “write string” adapter (but generalized, so I call it bulkPutAdapter):
process bulkPutAdapter(neg<PutSeq<[t]>> client, PutSeq<t> putseq, [t] putInitially)
{
switch putInitially {
case []:
switch client.get {
case Put: bulkPutAdapter(client, putseq, client.get);
case StopPut: putseq.put(StopPut); halt;
}
case (c:rest):
putseq.put(Put);
putseq.put(c);
bulkPutAdapter(client, putseq, rest);
}
}
Now we can write “Hello, World” as
bulkPutAdapter(~strOut, stdout, "");
helloWorld(strOut);
process helloWorld(PutSeq<[Char]> strOut)
{
strOut.put(Put);
strOut.put("Hello, world!\n");
}
The guessing game & terminal input
Now let’s suppose we want to play a guessing game, something that is easy in a sequential imperative language. The game is “I’m thinking of a number between 1 and 100. Guess it!” For that we’ll need a way to “read a line” from the console. Here’s a process for that (but generalized, so I call it bulkGetAdapter). It’s more complicated than bulkPutAdapter because it watches for a delimiter at which to split the input sequence.
process bulkGetAdapter(neg<GetSeq<[t]>> client, GetSeq<t> getter, t delimiter)
{
switch client.get {
case Get: bulkGet(client, getter, delimiter, []);
case StopGet: getter.put(StopGet); halt;
}
}
// A protocol in which you get a "t" value, then run another protocol
protocol GetThen<t,#p> = get t, #p;
process bulkGet(neg<GetThen<[t],GetSeq<[t]>>> client,
GetSeq<t> getter, t delimiter, [t] buf)
{
getter.put(Get);
let c = getter.get;
// I'm cheating: we haven't planned a general equality operator yet
if (c == delimiter) {
client.put(buf);
bulkGetAdapter(client, getter, delimiter);
} else {
bulkGet(client, getter, delimiter, c:buf);
}
}
Remember that GetSeq<t> is defined as put Get: get t, loop | ....
bulkGet is called after client.get, which means we’re somewhere in the middle of protocol GetSeq<[t]>. This means the first parameter of bulkGet cannot have type neg<GetSeq<[t]>>, because that type represents the beginning of the protocol.
Specifically, when bulkGet is called, the protocol is at the get t, loop part of GetSeq<[t]>, i.e. get [t], GetSeq<[t]> in this context. How do we represent the “middle” of the protocol? Well, I’ve defined a protocol GetThen<t,#p> as get t, #p. What’s with the hash sign? I realized that sometimes it’s important whether a generic parameter represents a protocol or not; so I decided, arbitrarily, to use a hash sign to indicate that an unknown generic type is a protocol and not an ordinary “sequential” type.
Therefore, neg<GetThen<[t],GetSeq<[t]>>> means put [t], neg<GetSeq<[t]>>. Sure enough, bulkGet puts one [t] into client and then calls bulkGetAdapter with the protocol, which has become a neg<GetSeq<[t]>>. Whew. (At least MPL-AOC has type inference.)
You may wonder, does this code have to be written as two separate processes? I’m afraid so, since there are no loops in MPL-AOC and it does not even allow you to pass a channel to an ordinary function.
Next, where can we get a random number? There are no true globals, but let’s suppose there is an outside timeService that provides a high-precision CPU timer. In this case the language restriction of having only a single process talking to it is onerous (multiple processes may want to read the timer), so I think we should revise that plan. Anyway, the low bits of a microsecond timer will seem random enough. Here’s my guessing game implementation.
// predefined
protocol TimeService = put GetCpuTimerMicroseconds: get Int, loop
| // other time services here
| put StopTimeService: close
// prelude
Maybe<Int> strToInt(String s) {...}   // Nothing if `s` is not a number
String intToStr(Int i) {...}
String concat(String s1, String s2) {...}
t unwrap(Maybe<t> value, t defaultValue) {...}
// wiring
guessingGame(timeService, console);
makeBidi(~console, strIn, strOut);
bulkPutAdapter(~strOut, stdout, "");
bulkGetAdapter(~strIn, stdin, '\n');
process guessingGame(TimeService time, BidiSeq<[Char]> console)
{
time.put(GetCpuTimerMicroseconds);
let num = time.get % 100 + 1;
console.put(Put1);
console.put("I'm thinking of a number between 1 and 100. Guess it!\n");
time.close();
guessingGame2(num, console);
}
process guessingGame2(Int num, BidiSeq<[Char]> console)
{
console.put(Get1);
let guess = unwrap(strToInt(console.get), -1);
if (guess != num) {
if (guess < num) {
console.put(Put1);
console.put("Too low! Guess again!\n");
} else {
console.put(Put1);
console.put("Too high! Guess again!\n");
}
guessingGame2(num, console);
} else {
console.put(Put1);
console.put(concat(concat("Right on! I was thinking of ", intToStr(num)), "!\n"));
console.close();
}
}
It’s pretty annoying that I have to do two put commands to print a single line of text. The language could improve this a little by supporting chaining, as in
console.put(Put1).put("Too high! Guess again!\n");
But it’s clear that such calls will be pervasive in MPL, and therefore a nicer syntax is desirable, maybe something like
console.Put1("Too high! Guess again!\n");
But for now, I’ll settle for chaining. I also see no reason why I would have to close a console explicitly, or halt explicitly; these are not my ideas, and I’m not quite sure what the rules are supposed to be about them.
Split, Fork, Plug
In many of these examples we’ve “called” from one process to another. Conceptually, the new process runs in parallel with the current one, but if calling a process is the last act of a process, the new process can obviously continue on the same thread, as the old process disappears.
Though I needed some months to figure out the meaning of ‘fork’ and ‘split’, their meaning turned out to be quite simple. Given two processes with a channel between them (plus some extra channels a b c
, attached to P0 with dotted lines),
+--------+ +-------+
a....| | | |
b....| P0 |---------| Q |
c....| | qlink | |
+--------+ +-------+
fork is a command that splits the current process (P0, here) into two new processes while terminating the current process:
(fork) (split)
+--------+ +-------+
a....| P1 |---------| |
+--------+ qlink1 | |
| Q |
+--------+ | |
b....| P2 |---------| |
c....| | qlink2 | |
+--------+ +-------+
The other process uses split, which means “I expect this channel to change into two channels.” The protocol of qlink must also say “there’s going to be a split”. Here, letters a b c represent three existing channels. P0 can distribute these channels to P1 and P2 in any way it chooses. MPL-AOC seems to have a concept of “inner processes” (closures) for the sole purpose of forking:
process P0(ProtocolA a, ProtocolB b, ProtocolC c, ProtocolQ qlink, Int num)
{
qlink.put(Q).put(12345);
fork qlink as // simplified syntax compared to MPL-AOC
qlink1 { /* P1 */ a := qlink1; },
qlink2 { /* P2 */ P2(b, c, qlink2, num); };
// P0 ends; no additional code is allowed down here
}
process Q(neg<ProtocolQ> qlink)
{
switch qlink.get {
case Q:
Int num = qlink.get;
split(qlink) as (qlink1, qlink2);
// qlink no longer exists at this point
...
}
}
If two channels have complement protocols (or the same protocol with opposite polarity), you can simply join them with := as you see here (MPL-AOC used == to mean the same thing).
Protocol Q might look something like this:
protocol ProtocolQ = put Q: put Int, fork (neg<ProtocolA>, ProtocolQ2)
| ...
The other primitive for creating processes was called “plug” in MPL-AOC, but the new plan seems to be to make process creation implicit. You simply “call” two or more processes, which then become separate entities from the current process:
process P0(ProtocolA a, ProtocolB b, ProtocolC c)
{
P1(a, link);
P2(b, ~link);
// there can be additional code down here
}
Before:
+--------+
| |....a
| P0 |....b
| |....c
+--------+
After:
+--------+ +------+....a +------+
| | | | | |
| P0 | | P1 |------------| P2 |....b
| |....c | | link | |
+--------+ +------+ +------+
The channel link is created implicitly by being mentioned exactly twice.
Exactly two processes have access to any given channel, so when P0 passes a to P1, it loses access to a. Curiously, I’ve gotten a strong impression that the processes P1 and P2 are completely separate, and have no ability to communicate with P0 (not even indirectly). A more sensible plan, it seems to me, would be to allow developers to create a channel which you can then use inside P0, something like this:
process P0(ProtocolA a, ProtocolB b, ProtocolC c)
{
channel p0link, link;
P1(a, link, p0link);
P2(b, ~link);
// there can be additional code down here
}
+--------+ +------+....a +------+
| | p0link | | | |
| P0 |----------| P1 |------------| P2 |....b
| |....c | | link | |
+--------+ +------+ +------+
It wasn’t clearly stated whether you could create more than two processes at once, but I think you should be able to wire up a process graph of arbitrary size this way, as long as its shape is specified statically (i.e. in a series of source code statements, not computed).
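For instance, here is a hypothetical top-level wiring (my own sketch) of a three-process pipeline whose shape is fixed in the source code; producer, transformer and consumer are made-up processes, and the channels raw and cooked are created implicitly because each is mentioned exactly twice:
// wiring
producer(raw);
transformer(~raw, cooked);
consumer(~cooked, stdout);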
Note that a channel must connect two processes. The two sides of a channel cannot be given to the same process.
Deadlock Avoidance
A deadlock is a situation in which two or more competing actions are each waiting for the other to finish. For example, process A is waiting for B to do something, B is waiting for C to do something, and C is waiting for A. In other words deadlocks always involve a cycle. The way that MPL proposes to avoid deadlocks is simple: no cycles are permitted.
This is enforced by construction. Suppose that we start with an (undirected) process graph with no cycles:
P0---P2 P5------P7
\ / \ |
\ / \ P8
\ / \
P3---P1---P4 P6
Also, suppose that the compiler verifies that when you create a new set of processes (the mechanism formerly known as ‘plug’) there are no cycles, you do not connect two processes with more than one channel, and you do not give both ends of a channel to the same process. This is easy for the compiler to check. In case you’re clever enough to think about aliasing, I would point out that MPL won’t allow channel aliasing. All channel references are unique, a restriction that also makes it straightforward for the compiler to verify that the channel’s protocol is being followed.
Clearly, any of the processes in this graph (or any other acyclic graph) is attached to one or more independent subtrees. For example, P5:
+----+
P1(etc.)-----| P5 |-----P7(etc.)
+----+\
\---P6
Clearly, P5 represents the only link, direct or indirect, between P1 and P7, P6 and P7, and between P1 and P6. If there is only a single channel connecting each pair of processes, and the whole graph is acyclic, there can be no cycle of waiting, so there can be no deadlocks. For example, if P5 is waiting for another process, say P7, there’s no way P7 can also be waiting for P5. Remember that the protocol type system verifies that along a single channel, a get in one process is matched up with a put in the other, and there’s only one channel between P5 and P7. QED.
Knowing that any acyclic graph is deadlock-free, we just have to check that the two primitives given above for creating new processes (fork/split and the process-creation primitive formerly known as ‘plug’) do not create cycles. When creating a new set of processes, the compiler simply needs to check that there are no cycles in the new subgraph. Likewise it is straightforward to convince oneself with diagrams that fork/split cannot create a cycle, although I don’t want to bother drawing more diagrams here (it’s kind of tedious.)
I fully expect that if someone with a lot of experience in concurrent programming is reading this, they might be saying “duh, I knew that”, and yet they may still have trouble with deadlocks in their large systems. Sometimes this is due to programming mistakes or design flaws, something that a strong type-checker like MPL’s would help with, but I suspect that at other times, the design might intentionally use cycles for good reasons that I am unaware of (for example, having no cycles seems to guarantee a single point of failure). For that reason, I guess, Dr. Cockett is proposing some sort of weaker compiler mode that won’t ensure there are no deadlocks, but in that case it isn’t clear whether MPL has advantages over session types (which are already described in the academic literature). I don’t think anyone here has investigated the properties of the plug/fork primitives in a cyclic environment.
Earlier I said “given a bidirectional channel, it’s not generally possible to separate it into GetSeq and PutSeq”. Actually, there are two separate problems:
- MPL-AOC doesn’t seem to have a primitive that lets you ask “is there something available to get on this channel?” or “get the first available input from a list of channels”. So when we try to write a process to split BidiSeq, there seems to be no way to do it:

process splitBidi(neg<GetSeq<t>> getc, neg<PutSeq<t>> putc, BidiSeq<t> bidi)
{
    /// `getFromEither` doesn't exist
    switch getFromEither(getc, putc) {
        case Left(Get):
            /// Besides, it's impossible to keep processing Put commands from
            /// putc while waiting for the Get1 command to finish on bidi.
            getc.put(bidi.put(Get1).get);
        case Right(Put):
            /// Also, putc.get could block
            bidi.put(Put1).put(putc.get);
        case Left(StopGet):  // TODO
        case Right(StopPut): // TODO
    }
}
- But suppose getFromEither existed and you were able to write splitBidi. Then you have a new problem. It is attached to three channels:

+-----+  getc   +---------+   bidi   +----------+
|  G  |---------|splitBidi|----------|BidiSeq<t>|
+-----+         +---------+          +----------+
                     | putc
                +---------+
                |    P    |
                +---------+

Because the graph is supposed to be acyclic, this implies that G and P cannot communicate with each other, implying there cannot be any relationship between input and output (unless the protocol of splitBidi is extended for the express purpose of allowing some kind of communication). This is usually not an acceptable constraint, so the acyclicity constraint implies that splitting a bidirectional channel is kind of useless.
Sending channels as an alternative to fork/split
In MPL-AOC, you are extremely limited in what you can do with channels. You cannot store channels in data structures, you cannot pass a channel to a “sequential” function, and you cannot send a channel along another channel to an existing process. Thus, it achieves its goals at a very high cost.
I find the fork/split primitive to be an awkward thing. Instead I would propose allowing sending channels along other channels. It provides more flexibility, is simpler for the compiler (no “inner process”, no “fork/split” primitive, nothing extra in the protocol types), and provides the same guarantee: it cannot create cycles. Let’s “split” P0 like in the earlier example, but without actually terminating P0 as split requires:
Before:
+--------+ +-------+
a....| | | |
b....| P0 |---------| Q |
c....| | qlink | |
+--------+ +-------+
After:
+--------+ +-------+
a....| P0 |---------| |
+--------+ qlink | |
| Q |
+--------+ | |
b....| P2 |---------| |
c....| | qlink2 | |
+--------+ +-------+
protocol ProtocolQ = put Q: put Int, put ProtocolQ2, neg<ProtocolA>
| ...
process P0(ProtocolA a, ProtocolB b, ProtocolC c, ProtocolQ qlink, Int num)
{
qlink.put(Q).put(12345);
channel qlink2;    // one end goes to P2 below; the other end is sent to Q (as a ProtocolQ2)
P2(b, c, qlink2, num);
qlink.put(~qlink2);
a := qlink;        // after the puts, qlink's remaining protocol is neg<ProtocolA>
}
process Q(neg<ProtocolQ> qlink)
{
switch qlink.get {
case Q:
Int num = qlink.get;
ProtocolQ2 qlink2 = qlink.get;
...
}
}
I also see no reason not to allow channels to be sent to (sequential) functions. Functions receiving channels won’t exactly be “purely functional”, but that’s fine with me, and in terms of compiler complexity I don’t see a major change.
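For example, under this proposal a tiny helper function could pull a single value off a GetSeq channel (a sketch only; this isn’t legal in MPL-AOC):
t getOne(GetSeq<t> getter)
{
    getter.put(Get);   // request the next value
    getter.get         // return it; the channel loops back to its starting state
}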
The other big limitation of channels is that you can’t store them in data structures. Allowing this is very tricky since it implies aliasing, but the problem is already well-researched; just read about typestate, such as you’d find in Plaid. Even without typestate, I’m fairly confident that MPL-like channels are doable in Rust, including storing them in data structures.
Recursive protocols & sequential processes
As the final example I want to write a TCP server, so that means I’ll need predefined MPL protocols for TCP. The following protocols are inadequate for real-world usage, but for experimental purposes I guess they will suffice.
/// Note: MPL-AOC does not support anonymous choice `( ... | ... )`, seen here
protocol TcpListenService = put ListenOnPort: put Int,
( get TcpBindSuccess: BoundTcpListener
| get TcpBindFailed: loop )
| put StopTcpListenService: close;
/// When you listen for connections in a loop, you get a sequence of connections
/// (TcpPipePairs). Each pipe consists of an output stream and an input stream.
/// Note: MPL-AOC does not support unlabeled protocols like TcpConnection.
protocol BoundTcpListener = GetSeq<TcpConnection>;
protocol TcpConnection = get PutSeq<Byte>, get neg<PutSeq<Byte>>, close;
I designed BoundTcpListener assuming that the next connection cannot fail to arrive, and that only the listener (user code) can decide to terminate the listening process. Is that actually true? I doubt it, but oh well (waves hands).
Now, why does the TcpConnection receive neg<PutSeq<Byte>> instead of GetSeq<Byte>? It’s a matter of choice: who gets to choose when the sequence is over? If you have a GetSeq<t>, it implies your code gets to choose when to end the sequence. If you have a neg<PutSeq<t>>, you still receive a sequence, but the other side gets to choose when to end the sequence. So in this design you can decide when to stop sending (by sending the StopPut signal), but the other side gets to decide when it stops sending.
I can already sense this is not a very good design. What if the other side never stops sending but we want to close the connection? Or what if I want to do a non-blocking get? We can’t even find out the IP address of the remote party! But for now, I’ll leave it this way. We can redesign it later.
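As a quick sanity check of how these protocols compose, here is a sketch of mine (not existing MPL code) that accepts a single connection from an already-bound listener and hands its two streams to a hypothetical handleConnection process:
process acceptOne(BoundTcpListener listener)
{
    listener.put(Get);
    TcpConnection conn = listener.get;          // a fresh channel for the new connection
    PutSeq<Byte> toRemote = conn.get;           // we put bytes here to send them
    neg<PutSeq<Byte>> fromRemote = conn.get;    // the remote side puts bytes here
    conn.close();
    listener.put(StopGet);                      // stop listening after this one connection
    handleConnection(toRemote, fromRemote);     // hypothetical process that serves the connection
}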
Three or four weeks ago, while still trying to figure out MPL, I asked Dr. Cockett if he could give me an example of a nontrivial protocol composed from at least one other protocol. He did not answer my question, so it still isn’t quite clear to me how he expects protocol composition to work. Here, I’ve assumed that when BoundTcpListener is used inside TcpListenService it’s a “goto” command rather than a “call”. This interpretation would mean that I need not (and cannot) have anything after BoundTcpListener in the protocol. It would suggest that the following protocol is not possible:
// Put a certain number of 't' values, then get an equal number back
protocol GetsAndPuts<t> = GetsAndPuts2<t>, close;
protocol GetsAndPuts2<t> = get GetT: get t, GetsAndPuts2<t>, put t
| get StopGetT: halt;
But wait, hold on. On second thought, could polymorphism rescue us, assuming protocols are allowed as generic parameters?
protocol PutThen<t,#p> = put t, #p
protocol GetsAndPuts<t> = GetsAndPuts2<t, close>;
protocol GetsAndPuts2<t, #p> = get GetT: get t, GetsAndPuts2<t, PutThen<t, #p>>
| get StopGetT: #p;
Holy crap, I think it’ll work! It’s unintuitive, but … yeah. It seems to make sense, let’s see…
Actually, in MPL-AOC I don’t think it’s allowed to have a ‘bare’ protocol like PutThen that doesn’t start with a choice label (i.e. get Label: or put Label:), but then again, in MPL-AOC there was actually no comma operator in protocols; instead, the primitive get took the next action after itself as a polymorphic parameter. That is to say, instead of put a, get b, close you’d write Put<a, Get<b, Top>> or some such, but with different syntax. Thus PutThen<t,#p> is the same thing as the primitive Put command in MPL-AOC. I suppose we can get rid of PutThen from this protocol, too, by writing it like this instead:
protocol GetsAndPuts<t> = GetsAndPuts2<t, close>;
protocol GetsAndPuts2<t, #p> = get GetT: get t, GetsAndPuts2<t, (put t, #p)>
| get StopGetT: #p;
Let’s try using this protocol in a process. Here’s a service that buffers up the things put in by some other process, then sends them back.
// first-in, last-out buffer
process FiloBuffer(GetsAndPuts2<t, #p> channel, [t] list) {
switch channel.get {
case GetT: let next = channel.get;
FiloBuffer(channel, next : list);
case StopGetT:
er... what now????
}
}
Uh-oh. The problem here is that after StopGetT comes #p, but #p could be anything: we don’t know how to fulfill the protocol. It would be nice if we could simply “return” to whatever process called FiloBuffer and let them fulfill the protocol, but I have the impression that processes are always started in parallel (meaning that there is no call stack of processes). If there were a call keyword, to call a process sequentially on a call stack, then we could write FiloBuffer something like this:
process FiloBuffer(GetsAndPuts2<t, #p> channel) {
switch channel.get {
case GetT: t next = channel.get;
call FiloBuffer(channel);
channel.put(next);
case StopGetT: // do nothing
}
return;
}
This would work. Assuming the compiler is sophisticated enough to do the type analysis on this, it would be able to tell that when FiloBuffer returns, the channel is left in state (put t, #p), and so it is legal to put t at this point, leaving the channel in state #p on both branches of the case statement.
call would have limited use, it seems, because processes have no return value. There is no mutable state and it’s illegal for a thread to attach both ends of a channel to itself; it’s not quite impossible for the called process to return information to its caller, but the information would have to pass through a parallel process. In this example, happily, FiloBuffer doesn’t need to return anything to its caller, but of course that’s not true in general.
We could solve this problem by once again abolishing the call stack and instead introducing some kind of process closure (essentially, what’s known as continuation-passing style). But CPS is clunky, and supporting process closures could add more compiler complexity than my preferred solution.
Let’s reconsider our bulkGetAdapter from earlier, which we used to read lines from the terminal:
process bulkGetAdapter(neg<GetSeq<[t]>> client, GetSeq<t> getter, t delimiter) {...}
protocol GetThen<t,#p> = get t, #p
process bulkGet(neg<GetThen<[t],GetSeq<[t]>>> client,
GetSeq<t> getter, t delimiter, [t] buf) {...}
There are a couple of odd things about it:
- bulkGet’s responsibility is to get a list of t’s and send them to the client. Because processes have no call stack, it calls bulkGetAdapter at the end as a way of “returning” to its caller. This implies that no other process can call bulkGet, since bulkGet “returns” to the wrong place.
- bulkGetAdapter is a separate process from the process that is requesting a line from the terminal, so potentially it runs on its own thread. In a way this is good; if a process main wants to read a line from the terminal, it could do some background work while it is waiting. However, 99% of the time main really just wants to sit and wait for the input to arrive. Potentially, two threads could be idling, both waiting for a third (the external terminal process) to receive keyboard input. That seems wasteful.
My preferred solution is to unify the sequential and parallel parts of MPL by allowing functions to accept channels. In that case we don’t necessarily need bulkGetAdapter at all, just bulkGet, and the code is simpler.
[Char] readLine(GetSeq<Char> getter) { bulkGet(getter, '\n', []); }
[t] bulkGet(GetSeq<t> getter, t delimiter, [t] buf)
{
getter.put(Get);
let c = getter.get;
if (c == delimiter)
buf
else
bulkGet(getter, delimiter, c:buf)
}
For good measure I’ve thrown in a bonus readLine function. Let’s do the same thing with bulkPutAdapter, renaming it simply bulkPut:
void writeLine(PutSeq<Char> putseq, [Char] text)
{
bulkPut(putseq, text);
putseq.put(Put).put('\n');
}
void bulkPut(PutSeq<t> putseq, [t] thingsToPut)
{
switch thingsToPut {
case (c:rest):
putseq.put(Put).put(c);
bulkPut(putseq, rest);
case []: // do nothing
}
}
We can still define adapters as separate processes if we want; for instance here’s the bulkPutAdapter reprogrammed to use bulkPut:
process bulkPutAdapter(neg<PutSeq<[t]>> client, PutSeq<t> putseq, [t] putInitially)
{
bulkPut(putseq, putInitially);
switch client.get {
case Put: bulkPutAdapter(client, putseq, client.get);
case StopPut: putseq.put(StopPut); halt;
}
}
To complete this idea, let’s rewrite guessingGame to use this new paradigm:
guessingGame(timeService, stdin, stdout);
process guessingGame(TimeService time, GetSeq<Char> input, PutSeq<Char> output)
{
time.put(GetCpuTimerMicroseconds);
let num = time.get % 100 + 1;
writeLine(output, "I'm thinking of a number between 1 and 100. Guess it!\n");
time.close();
guessingGame2(num, input, output);
input.close();
output.close();
}
void guessingGame2(Int num, GetSeq<Char> input, PutSeq<Char> output)
{
let guess = unwrap(strToInt(readLine(input)), -1);
if (guess != num) {
if (guess < num) {
writeLine(output, "Too low! Guess again!");
} else {
writeLine(output, "Too high! Guess again!");
}
guessingGame2(num, input, output);
} else {
writeLine(output, concat(concat("Right on! I was thinking of ", intToStr(num)), "!"));
}
}
Notice that the program is slightly simpler now, apart from the fact that stdin and stdout are no longer bundled together. Remember, our type system doesn’t allow us to put channels in data structures such as Pair<a,b>, though perhaps we could make an exception for a built-in tuple type.
Now that I have allowed functions to do something previously allowed only to processes (to use channels), and I’ve allowed processes to do something that only functions could do before (to block the caller), what differences remain between the two?
- Only processes can be started concurrently.
- Only functions can return values.
- Before stopping, processes need to reach the end (close) of all channels they receive from outside (and still have valid references to); functions do not. Both processes and functions need to close or send any channels they create inside themselves, except that functions could return such channels to their caller (see the sketch below).
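For example, under these rules a function could wire up a helper process and return the resulting channel to its caller. Here is a sketch of that idea (my proposal, certainly not MPL-AOC; it also assumes the implicit-channel-creation rule counts the return as the second mention of strOut):
// Returns a line-oriented output channel backed by the given character channel.
PutSeq<[Char]> makeLineWriter(PutSeq<Char> putseq)
{
    bulkPutAdapter(~strOut, putseq, "");   // start the adapter process; strOut is created implicitly
    strOut                                 // hand the other end back to the caller
}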
Concurrent MPL: process tree
Sometimes you need a list of channels (which one can also think of as processes). Unfortunately, the current plan says we can’t: MPL doesn’t allow channels to be stored in data structures.
As an ugly alternative, I figured out that you can construct a binary tree of processes to represent a list of channels. Each process holds a reference to, at most, one channel from the list. The binary tree I’m going to implement here is an unusual beast that associates integers with channels and requires O(log N) time to retrieve a channel, where N is the integer “index” associated with the channel (i.e. time proportional to the number of bits in the index). A tree with 18 items, numbered 0..17, is structured like this:
[3]--[7]---[15]
/ |
/ +--[11]
[1]
/ \ +--[9]---[17]
/ \ |
/ [5]--[13]
[0]
\ [4]--[8]---[16]
\ / |
\ / +--[12]
[2]
\ +--[10]
\ |
[6]--[14]
My tree implementation allows any non-negative integer (not just the next available integer) to be associated with a channel. Negative indexes are treated the same as 0. Each node in the tree has a reference to zero or one channels, and between zero and two child processes. The pattern may not be obvious, but the calculations that produce this tree are simple bit shifts and decrements: to find index 13, for example, the root computes i = 13 - 1 = 12, which is even, so it descends to its “even” child with new index 12 >> 1 = 6; there i = 5 is odd, so it descends to the “odd” child with index 2; there i = 1 is odd again, so it descends to the “odd” child with index 0, which is where the channel is stored (the path [0], [1], [5], [13] in the diagram above). Here is the protocol for using the tree:
protocol Try<#success,#fail> = get Success: #success
                             | get Fail: #fail;
protocol GMaybe<#p, #then> = get GJust: get #p, #then
                           | get GNothing: #then;
protocol ProcessTree<#p> =
  /// Put a channel in, getting the old channel (if any) out
    put PutProc: put Int, put #p, GMaybe<#p, loop>
  /// Take a channel out without replacement
  | put TakeProc: put Int, GMaybe<#p, loop>
  /// Take the channel out that has the lowest index (also returns the index)
  /// Terminates the process tree if there are no channels left.
  | put TakeMinOrClose: Try<(get Int, get #p, loop), close>
  /// Get the current number of channels in the process tree
  | put GetProcCount: get Int, loop;
Implementing this tree in MPL is fairly difficult. I decided that doing a proper binary tree, such as a red-black tree, is too difficult to be worth the trouble. In fact, I’ve determined that doing any legal tree implementation is too tedious! The following implementation relies on something illegal instead: Maybe<ProcessTree<#p>> (it’s illegal because Maybe counts as a data structure). In theory we could rewrite this not to use Maybe, by writing 8 versions of processTree, one for each of the 8 possible combinations of Nothing and Just #p. And we could reduce this to 6 versions by somehow sharing the code for childE=Nothing, childO=Just #p with the code for childE=Just #p, childO=Nothing. Still pretty hard!
The point is, this code serves as a proof-of-concept that it isn’t literally impossible to write useful programs without the ability to store channels in data structures.
process emptyProcessTree(neg<ProcessTree<#p>> user)
{
processTree(user, Nothing, Nothing, Nothing, 0);
}
void processTree(neg<ProcessTree<#p>> user, Maybe<#p> item,
Maybe<ProcessTree<#p>> childE, Maybe<ProcessTree<#p>> childO, Int itemCount)
{
switch user.get {
case PutProc:
Int index = user.get;
Int i = index - 1;
Int halfI = i >> 1;
#p newItem = user.get;
if (index <= 0) {
switch item {
case Nothing:
user.put(GNothing);
processTree(user, Just newItem, childE, childO, itemCount + 1);
case Just(oldItem):
user.put(GJust).put(oldItem);
processTree(user, Just newItem, childE, childO, itemCount);
}
} else if ((i & 1) == 0) {
let (newChild, deltaCount) = putChild(childE, user, halfI, newItem);
processTree(user, item, newChild, childO, itemCount + deltaCount);
} else { // (i & 1) == 1
let (newChild, deltaCount) = putChild(childO, user, halfI, newItem);
processTree(user, item, childE, newChild, itemCount + deltaCount);
}
case TakeProc:
Int index = user.get;
takeProc(user, index, item, childE, childO, itemCount);
case TakeMinOrClose:
takeMinOrClose(user, item, childE, childO, itemCount);
case GetProcCount:
user.put(itemCount);
processTree(user, item, childE, childO, itemCount);
}
}
ProcessTree<#p> startChild(neg<ProcessTree<#p>> user, Int halfI, #p newItem)
{
ProcessTree<#p> newChild;
emptyProcessTree(~newChild);
switch newChild.put(PutProc).put(halfI).put(newItem).get {
case GNothing: user.put(GNothing);
case GJust: /// UNREACHABLE!!!
user.put(GJust).put(newChild.get);
}
newChild
}
(Maybe<ProcessTree<#p>>, Int) putChild(Maybe<ProcessTree<#p>> mChild,
neg<ProcessTree<#p>> user, Int halfI, #p newItem)
{
switch mChild {
case Nothing:
ProcessTree<#p> newChild = startChild(user, halfI, newItem);
(Just newChild, 1);
case Just(child):
switch child.put(PutProc).put(halfI).put(newItem).get {
case GNothing: // Put OK
user.put(GNothing);
(mChild, 1);
case GJust: // Swap OK
user.put(GJust).put(child.get);
(mChild, 0);
}
}
}
void takeProc(neg<ProcessTree<#p>> user, Int index, Maybe<#p> item,
Maybe<ProcessTree<#p>> childE, Maybe<ProcessTree<#p>> childO, Int itemCount)
{
Int i = index - 1;
Int halfI = i >> 1;
if (index <= 0) {
let delta = switch item {
case Nothing:
user.put(GNothing);
0;
case Just(item'):
user.put(GJust).put(item');
-1;
}
processTree(user, Nothing, childE, childO, itemCount + delta);
} else if ((i & 1) == 0) {
switch childE {
case Nothing:
user.put(GNothing);
processTree(user, item, childE, childO, itemCount);
case Just(child):
switch child.put(TakeProc).put(halfI).get {
case GNothing:
user.put(GNothing);
processTree(user, item, Just child, childO, itemCount);
case GJust:
user.put(GJust).put(child.get);
processTree(user, item, Just child, childO, itemCount - 1);
}
}
} else {
switch childO {
case Nothing:
user.put(GNothing);
processTree(user, item, childE, childO, itemCount);
case Just(child):
switch child.put(TakeProc).put(halfI).get {
case GNothing:
user.put(GNothing);
processTree(user, item, childE, Just child, itemCount);
case GJust:
user.put(GJust).put(child.get);
processTree(user, item, childE, Just child, itemCount - 1);
}
}
}
}
void takeMinOrClose(neg<Try<(get Int, get #p, loop), close>> user, Maybe<#p> item,
Maybe<ProcessTree<#p>> childE, Maybe<ProcessTree<#p>> childO, int itemCount)
{
switch item {
case Just(item'):
user.put(Success).put(0).put(item');
processTree(user, Nothing, childE, childO, itemCount - 1);
case Nothing:
switch childE {
case Just(child):
let childE' = takeMinOrCloseChild(user, child, 0);
let delta = countMaybe(childE');
processTree(user, Nothing, childE', childO, itemCount - delta);
case Nothing:
switch childO {
case Just(child):
let childO' = takeMinOrCloseChild(user, child, 1);
let delta = countMaybe(childO');
processTree(user, Nothing, childE, childO', itemCount - delta);
case Nothing:
user.put(Fail).close();
}
}
}
}
Maybe<ProcessTree<#p>> takeMinOrCloseChild(
neg<Try<(get Int, get #p, loop), close>> user, ProcessTree<#p> child, Int odd)
{
switch child.put(TakeMinOrClose).get {
case Fail:
child.close();
Nothing;
case Success:
user.put(Success);
user.put(child.get * 2 + odd);
user.put(child.get);
Just child;
}
}
Int countMaybe(Nothing) { 0 }
Int countMaybe(Just _) { 1 }
Wow, that took a long time, and I probably made mistakes. And clearly I’m not going to spend the time to rewrite this to eliminate all uses of Maybe<...#...>
. But this proves it is possible, albeit unwieldy and inefficient, to work around MPL's restriction against holding channels in data structures.
Recall that a protocol represents not just the abstract concept of a communication protocol, but also our current location within that protocol. Notice that we must require every channel stored in the tree to be in the same state. When you get a channel out, you have to do a sequence of operations that leaves it in the original state before you can put it back into the tree.
Note: there is a line of unreachable code in startChild
which, nevertheless, must be written. Inevitably, the type system will not be able to detect that certain situations are impossible, but because there's no such thing as a "run-time error" we'll have to write some code to satisfy the compiler, code which puts the protocols involved into the state that the compiler expects. I believe this requirement potentially makes some programs impossible, or very difficult, to write. For example, suppose the impossible path requires a value of polymorphic type t
, but no t
value is in scope at that point. Then we’re stuck. There’s no way to create a value of unknown type.
Bitmap filter matrix
Until now, all our MPL code would be perfectly at home in an ordinary sequential language. Let’s (finally) write a program that actually takes advantage of parallelism.
I’d like to implement a “bitmap filter matrix”. This algorithm takes a bitmap as input, along with an N×M matrix and a constant value to add, and returns another bitmap as output. For simplicity, I will support only a one-channel (monochrome) bitmap.
The N×M matrix (for some odd numbers N, M) specifies the value of each new pixel as a weighted sum of the old pixels. The value in the center of the matrix represents the weight assigned to the current pixel. It is often used for “emboss”, “edge detect” and “blur” effects.
For example, here’s a 3×1 matrix for horizontal emboss / edge detection:
[[-1, 1, 0]]
That is, the pixel on the left is subtracted from the current pixel. So when a vertical “edge” is encountered, the output will be either a large negative or large positive value, and if there is no edge, the output will be near zero. Of course, pixels are normally unsigned and constrained to a particular range, so the filter algorithm could take extra parameters to either avoid exceeding the range of a pixel, or to decide what to do when the range is exceeded. My code will accept a constant value to be added, and then the output will be clipped to the range (0,255).
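For example (toy numbers of my own): with the added constant set to 128 and three horizontally adjacent pixels 10, 200, 60, the new value of the center pixel is clip(128 + (-1)*10 + 1*200 + 0*60) = clip(318) = 255, a strong edge; in a flat region of 200s it is clip(128 - 200 + 200) = 128, i.e. "no edge".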
A simple blur matrix might look like this (assuming [a, b, c, ...]
constructs a built-in linked list type):
let a = 1.0/4, b = 1.0/16, c = 1.0/32, d = c;
/// so that 1*a + 4*b + 8*c + 8*d == 1.0;
let matrix = [[0, d, c, d, 0],
[d, c, b, c, d],
[c, b, a, b, c],
[d, c, b, c, d],
[0, d, c, d, 0]];
Notice that filters have a problem at the edges: there is no “pixel on the left” when we are processing the leftmost pixel. To keep things simple, my algorithm will cheat, and skip over those pixels on the edges. In fact it will cut them out of the output, producing an output that is smaller than the input.
I'll start with a sequential filter algorithm. But first: how will the bitmap and matrix be stored? I could represent them as nested linked lists, e.g. [[Byte]]
, but this is disturbing in two different ways:
- Linked lists are a terribly inefficient way to store bitmaps.
- There is no guarantee that all scanlines, or matrix rows, are the same length.
In the very beginning of this document I suggested we just “imagine” there is an array type; the only problem with this is that, in a language without mutable state, it’s hard to imagine how you would construct an array incrementally. The Haskell tutorial on arrays seems to imply that indeed you cannot construct an array incrementally, not really; you supply a linked list which is then converted to an array. Now, there is an “incremental” update function in Haskell, but the program can keep using the old version of the array afterward, which leads me to assume that the “incremental” update duplicates the entire array (at least until someone proves me wrong… if it were not so, why would the “incremental” update function require a list of changes instead of a single change?)
I guess we could envision a mutable array as an external process following an “Array” protocol:
protocol Array<t> = put GetAt: put Int, get t, loop
| put SetAt: put Int, put t, loop
| put DeleteArray, close
But then we’d still have to add some new mechanism for creating a new array, we’d have to make sure we can do nested arrays this way (um… uh-oh…), and we still wouldn’t have two-dimensional arrays (problem #2 above).
So, tell you what, I’m just going to settle for linked lists. I’ll assume we have a Byte
type with values from 0 to 255, and a Float
type to represent the values of the matrix.
So, assuming row-major order (i.e. head(bitmap)
is the first row of pixels, not the first column), finally I can write my sequential algorithm:
byte[,] FilterBitmap(byte[,] bitmap, float[,] matrix, float constAdd)
{
int width = bitmap.GetLength(1), height = bitmap.GetLength(0);
int matW = matrix.GetLength(1), matH = matrix.GetLength(0);
int offsX = matW/2, offsY = matH/2;
byte[,] output = new byte[height-matH+1, width-matW+1];
for (int y = 0; y <= height - matH; y++) {
for (int x = 0; x <= width - matW; x++) {
float sum = constAdd;
for (int xm = 0; xm < matW; xm++)
for (int ym = 0; ym < matH; ym++)
sum += matrix[ym, xm] * bitmap[y + ym, x + xm];
output[y, x] = (byte)(sum < 0 ? 0 : sum > 255 ? 255 : sum);
}
}
return output;
}
Oh, I’m sorry, I wrote that in C#. Sorry, I’ll rewrite it in MPL. It’s just that the prospect of doing the same task with linked lists (and no loops) makes me vomit in my mouth a little. Oh by the way, remember that fold
thing that guarantees termination? Yeah, um, I’m too overloaded to think about that right now.
What about type conversions? Well, there is no plan for that. I’ll assume that byteToFloat
and floatToByteClipped
exist for the necessary conversions (Clipped
refers to values outside the valid range: -12.3 becomes 0 and 262.0 becomes 255, etc.).
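Just to pin down what I'm assuming, here is a minimal sketch of floatToByteClipped; floatToByte is a hypothetical truncating primitive:
Byte floatToByteClipped(Float f)
{
if (f < 0.0)
0
else if (f > 255.0)
255
else
floatToByte(f)   /// hypothetical primitive: truncate a Float in 0..255 to a Byte
}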
[[Byte]] filterBitmap([[Byte]] bitmap, [[Float]] matrix, Float offset)
{
Int width = length(unwrap(head(bitmap), [])), height = length(bitmap);
Int matW = length(unwrap(head(matrix), [])), matH = length(matrix);
filterBitmap2(bitmap, width, height, matrix, matW, matH, offset);
}
[[Byte]] filterBitmap2([[Byte]] bitmap, Int width, Int height,
[[Float]] matrix, Int matW, Int matH, Float offset)
{
if (height >= matH) {
/// Filter the top row, then the rest
[Byte] newRow = filterRow(take(matH, bitmap), width, matrix, matW, offset);
newRow : filterBitmap2(tail(bitmap), width, height - 1, matrix, matW, matH, offset);
} else
[]
}
/// Computes one row of the solution
[Byte] filterRow([[Byte]] topOfBitmap, Int width,
[[Float]] matrix, Int matW, Float offset)
{
if (width >= matW) {
Float newPx = offset + filterPixel(topOfBitmap, matrix);
Byte newPx2 = floatToByteClipped(newPx);
[[Byte]] rightSide = getShorterRows(topOfBitmap);
newPx2 : filterRow(rightSide, width - 1, matrix, matW, offset);
} else
[]
}
/// Clips off one pixel from the beginning of several rows
[[t]] getShorterRows([[t]] bitmap)
{
switch bitmap {
case []: []
case row:rows:
switch row { case []: [] case (px:pxs): pxs } : getShorterRows(rows)
}
}
/// Computes one pixel of the solution
Float filterPixel([[Byte]] bitmap, [[Float]] matrix)
{
switch matrix {
case []: 0
case (mrow:rest): filterPixel2(unwrap(head(bitmap), []), mrow) +
filterPixel(tail(bitmap), rest)
}
}
Float filterPixel2([Byte] bitmapRow, [Float] matrixRow)
{
switch matrixRow {
case []: 0
case (factor:rest):
factor * unwrap(head(bitmapRow), 0) +
filterPixel2(tail(bitmapRow), rest)
}
}
Whew! That was quite a lot harder (and I am even more likely to have made a mistake than before). Now let’s parallelize it: we’ll split the task off into N processes, each of which will produce H/N scanlines of output if there are H lines of output. We’ll store subprocesses in the processTree
from the previous section.
protocol FilterBitmapTask = put ([[Byte]], [[Float]], Float), get ([[Byte]]), close
[[Byte]] filterBitmapInParallel([[Byte]] bitmap, [[Float]] matrix,
Float offset, Int numChildren)
{
Int width = length(unwrap(head(bitmap), [])), height = length(bitmap);
Int matW = length(unwrap(head(matrix), [])), matH = length(matrix);
ProcessTree<FilterBitmapTask> taskList;
emptyProcessTree(~taskList);
Int linesPer = (height-matH + numChildren-1) / numChildren;
runFilterTasks(bitmap, height, linesPer, 0, matrix, matH, offset, taskList);
getFilterResults(taskList);
}
void runFilterTasks([[Byte]] bitmap, Int height, Int linesPer, Int i,
[[Float]] matrix, Int matH, Float offset, ProcessTree<FilterBitmapTask> taskList)
{
if (height-matH > 0) {
let part = take(linesPer + matH, bitmap);
filterBitmapTask(~taskChannel);
taskChannel.put(part, matrix, offset);
switch taskList.put(PutProc).put(i).put(taskChannel).get {
case GNothing: ;
case GJust: let _ = taskList.get; /// UNREACHABLE
}
runFilterTasks(skip(linesPer, bitmap), height - linesPer,
linesPer, i + 1, matrix, matH, offset, taskList);
}
}
[[Byte]] getFilterResults(ProcessTree<FilterBitmapTask> taskList)
{
switch taskList.put(TakeMinOrClose).get {
case Fail: taskList.close(); []
case Success:
Int index = taskList.get;
[[Byte]] part = (taskList.get).get;
concat(part, getFilterResults(taskList))
}
}
process filterBitmapTask(neg<FilterBitmapTask> channel)
{
channel.put(filterBitmap channel.get);
}
This, of course, is what’s known as an “embarrassingly parallel” problem, something that could be parallelized more easily in several existing languages. I guess MPL isn’t really about making things easy, though; it’s about statically checking that a protocol is being followed, and deadlock avoidance.
On the unreachable branch I’m doing something that is probably illegal in MPL-AOC: I’m discarding the (nonexistent) task.
I’ve done something strange in filterBitmapTask
:
channel.put(filterBitmap channel.get);
Of course, every function and process call before now has used parentheses, as in f(x)
. But consider this: what if every process and function in MPL only takes a single parameter? What if every function that appears to take three parameters is really taking a single parameter that happens to be a tuple of size three? Well, that’s what I’m assuming here. At this point in the code, channel.get
returns a tuple ([[Byte]], [[Float]], Int)
, which I am using directly as an argument list to filterBitmap
.
This “juxtaposition” operator isn’t really compatible with C notation because if, for example, C# used this notation, a variable declaration like Foo x;
would be indistinguishable from a function call Foo(x);
. In MPL the situation is different, since all functions start with a lower-case letter and all types start with an upper-case letter. Some languages let you expand a tuple tup
when calling a function, with a notation such as Foo(*tup)
, but this is not the same as what I’m talking about. This “juxtaposition” operator is not an extra feature that allows you to expand a tuple; instead I’m saying we can subtract the concept of “argument lists” from the language and just use tuples instead. I think this idea works best if there is no such thing as a “tuple of one item”; then the one-tuple (x)
and x
are the same thing.
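To make this concrete, here is a hypothetical example (add3, args, x and y are my own names):
Int add3(Int a, Int b, Int c) { a + b + c }
let args = (1, 2, 3);
let x = add3 args;       /// juxtaposition: the tuple IS the whole argument list, so x == 6
let y = add3(1, 2, 3);   /// conventional notation; exactly the same call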
A file server
As my final and most complex example, I’ll try to work out how to implement a TCP file server in MPL (allowing the new features I’ve proposed). For this we’ll need access to the file system and to TCP sockets, perhaps with protocols like these:
/// TCP server-side protocols
protocol TcpListenService = put ListenOnPort: put Int,
( get TcpBindSuccess: BoundTcpListener
| get TcpBindFailed: loop )
| put StopTcpListenService;
protocol BoundTcpListener = GetSeq<TcpConnection>;
protocol TcpConnection = get PutSeq<Byte>, get neg<PutSeq<Byte>>, close;
/// File system access (assume a predefined fileSystem service)
/// (Issue: Do you see the problem with how GEither is used?)
data Error = Error String;
protocol GEither<#left,#rite,#then> = get GLeft: get #left, #then
| get GRight: get #rite, #then;
protocol FileSystem = put OpenFile: put String, put Bool,
GEither<Error, File, FileSystem>
| ...
| put StopFileSystem: close;
protocol File = put GetLength: get Int, loop
| put GetBytes: GetLoop<Char,File>
| ...
| put CloseFile: close
protocol GetLoop<t,#p> = put Get2: get t, loop
| put StopGetLoop: #p
/// Same as before
protocol GetSeq<t> = put Get: get t, loop | put StopGet: close
protocol PutSeq<t> = put Put: put t, loop | put StopPut: close
I talked about the TCP protocols a little bit already. I realize they are impoverished, especially TcpConnection
: The connection could “timeout” at any time and our simple protocol definition won’t tell us the difference between a timeout and a connection that closed normally. It also doesn’t give us a way to configure the socket’s behavior (e.g. how long should we wait for an “ack” before assuming that a timeout has occurred? Should there be a system of “keepalives” when keeping a socket open indefinitely?) Heck, we can’t even find out what IP address we’re connected to! But as they say, you have to walk before you can run, so for now I’ll accept this impoverished interface.
And of course, I’ll ignore security concerns. A file server that obeys all requests? Haha! Great idea. So let’s write the server code:
/// Startup command
fileServer(fileSystem, tcpListenService, stdout, 9998);
process fileServer(FileSystem fs, TcpListenService lService,
GetSeq<Char> stdout, Int portNum)
{
lService.put(ListenOnPort).put(portNum);
switch lService.get {
case TcpBindFailed:
writeLine(stdout, concat("Couldn't bind to port ", intToStr(portNum)));
lService.put(StopTcpListenService);
case TcpBindSuccess:
listen(fs, lService);
}
}
process listen(FileSystem fs, BoundTcpListener listener)
{
let connection = listener.put(Get).get;
serveFiles(fs, connection);
listen(fs, listener);
}
process serveFiles(FileSystem fs, TcpConnection connection)
{
/// TODO: respond to requests for files on the new connection
}
Side note: wasn’t MPL supposed to guarantee termination? The compiler can detect that the fileServer
will never terminate in case of TcpBindSuccess
, but it seems like this must be allowed.
We have a problem here. I want listen
to give serveFiles
access to the file system, but the file system is represented as a channel, which means only one process can use it. How can we “split” the file system channel so any number of connections can use it?
This is not so simple. In fact, I can’t think of any solution that doesn’t involve changing the definition of FileSystem
itself! It seems that FileSystem
needs a CloneRef
command that gets a new channel to talk to the file system:
protocol FileSystem = put OpenFile: put String, put Bool,
GEither<Error, File, FileSystem>
| ...
| put CloneRef: get FileSystem, loop
| put StopFileSystem: close;
Then listen
would look like this:
process listen(FileSystem fs, BoundTcpListener listener)
{
let connection = listener.put(Get).get;
serveFiles(fs.put(CloneRef).get, connection);
listen(fs, listener);
}
But in fact, this is cheating; we assume FileSystem can “clone” itself, which is not possible in MPL itself. With two clients connected, the file server’s process graph will look like this:
+----------+ +-----------+
| |----------------|File System|
| | FileSystem +-----------+
| listen |
| | BoundTcpListener +--------+
| |------------------|Listener|
+----------+ +--------+
+----------+ +-----------+
| |----------------|File System|
| | FileSystem +-----------+
|serveFiles|
| | BidiSeq<Byte> +----------+
| |------------------|Connection|
+----------+ +----------+
+----------+ +-----------+
| |----------------|File System|
| | FileSystem +-----------+
|serveFiles|
| | BidiSeq<Byte> +----------+
| |------------------|Connection|
+----------+ +----------+
This is cheating, because in reality there is only one file system. You may notice that if the graph looked like this instead:
+----------+ +-----------+
| |------------------------|File System|
| | FileSystem +-----------+
| listen | | |
| | BoundTcpListener +--------+ | |
| |------------------|Listener| | |
+----------+ +--------+ | |
| |
+----------+ FileSystem | |
| |------------------------------+ |
|serveFiles| |
| | BidiSeq<Byte> +----------+ |
| |------------------|Connection| |
+----------+ +----------+ |
|
+----------+ FileSystem |
| |---------------------------------+
|serveFiles|
| | BidiSeq<Byte> +----------+
| |------------------|Connection|
+----------+ +----------+
This would be an acceptable graph, since it is acyclic. But if File System
were a process written in (strict) MPL, it would be unable to give out new references to itself, so this graph is impossible to construct. Remember that an MPL process is not allowed to have a reference to both ends of a channel simultaneously, because it could deadlock itself by using get
(implying it is waiting for itself to put
). If you send one end of the channel to another process you’ll still have a cycle. For example,
+--------+ a +-------+
| |---------| |
| P | | Q |
| |---------| |
+--------+ b +-------+
If P
does a.get
and Q
does b.get
, the processes will be deadlocked. So, if the file system process were to send out a new reference to itself along an existing channel, it would (temporarily) produce a cyclic graph like this between it and listen
.
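To make the deadlock concrete, here is a minimal sketch; Chat, P and Q are hypothetical:
protocol Chat = put Int, get Int, close;
process P(neg<Chat> a, Chat b) { Int x = a.get; ... }   /// waits for Q to put on a
process Q(Chat a, neg<Chat> b) { Int y = b.get; ... }   /// waits for P to put on b
/// Neither put ever happens, so both processes wait forever.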
There is an alternate design of FileSystem
that doesn’t have this problem:
protocol FileSystem = put OpenFile: put String, put Bool,
GEither<Error, File, FileSystem>
| ...
| put ShareFSWith: put neg<FileSystem>, loop
| put StopFileSystem, close;
Here I’ve replaced CloneRef
with what I call ShareFSWith
. In this design, the file system process receives its own complement neg<FileSystem>
(the complement of a FileSystem
is a process that uses the file system). Now we can write listen
like this:
process listen(FileSystem fs, BoundTcpListener listener)
{
let connection = listener.put(Get).get;
serveFiles(newFileSystemRef, connection);
fs.put(ShareFSWith).put(~newFileSystemRef);
listen(fs, listener);
}
But does this “sharing” concept scale? For example, suppose that serveFiles
needs access to the TimeService
for some reason. Suppose we extend TimeService
with a sharing feature too. Can we provide serveFiles
with both a FileSystem
and a TimeService
at the same time?
protocol TimeService = put GetCpuTimerMicroseconds: get Int, loop
| // other time services here
| put ShareTSWith: put neg<TimeService>, loop
| put StopTimeService: close
process listen(FileSystem fs, TimeService ts, BoundTcpListener listener)
{
let connection = listener.put(Get).get;
serveFiles(newFileSystemRef, newTimeServiceRef, connection);
fs.put(ShareFSWith).put(~newFileSystemRef);
ts.put(ShareTSWith).put(~newTimeServiceRef);
listen(fs, ts, listener);
}
Soon after starting the serveFiles
process, the graph is acyclic, so it feels like this code should be allowed:
+------------+
+---------------------------------------|Time Service|
| TimeService +------------+
| |
+----------+ +-----------+ |
| |------------------------|File System| |
| | FileSystem +-----------+ |
| listen | | |
| | BoundTcpListener +--------+ | |
| |------------------|Listener| | |
+----------+ +--------+ | |
| |
+----------+ FileSystem | |
| |--------------------------------+ |
| | TimeService |
| |----------------------------------------+
|serveFiles|
| | TcpConnection +----------+
| |------------------|Connection|
+----------+ +----------+
Unfortunately, this code is actually illegal under my current plan. The way I formulated MPL, I defined two separate primitives:
- Create a graph of processes.
- Send a channel to another process.
In this case I’m using (1), then (2):
/// (1) Create a new process
serveFiles(newFileSystemRef, newTimeServiceRef, connection);
/// (2) Send two channels to other processes
fs.put(ShareFSWith).put(~newFileSystemRef);
ts.put(ShareTSWith).put(~newTimeServiceRef);
In between (1) and (2), listen
has ownership of two channels (newFileSystemRef
and newTimeServiceRef
) that both point to the same process (serveFiles
), so there is a temporary cycle. I am not sure how to redefine the rules of MPL to make this code legal. Perhaps there is some workaround, some modification to this code that would make it legal, but I’d rather change the rules of the compiler instead.
At least the idea is sound. As long as listen
doesn’t create a permanent link from serveFiles
back to listen
, there is no cycle, so no risk of deadlocks.
By the way, I’ve treated the time service like it was the same scenario as the file system service, but is it? Certainly, this graph is illegal in MPL:
+------------+
|Time Service|
+------------+
/ \
/ \
+---------+ +---------+
| P |----| Q |
+---------+ +---------+
But even though there is only one of each clock circuit in a computer, it couldn’t hurt for the time service to provide “clones” of itself to give us a graph like this:
+------------+ +------------+
|Time Service| |Time Service|
+------------+ +------------+
| |
| |
+---------+ +---------+
| P |-----| Q |
+---------+ +---------+
The clock is physically shared, but assuming it is read-only (albeit mutating), a cloned time service, sharing that circuit (with traditional locks or whatever), would be indistinguishable from having multiple physical clocks. In contrast, if the file system were “clonable”, we would expect that changes to one copy would not affect the others, which isn’t normally how you want or expect a file system to behave. In either case, cloning can only happen outside the “safe” MPL type system. But I digress.
So, how do we serve files on each connection?
process serveFiles(FileSystem fs, TcpConnection connection)
{
let send = connection.get;
let recv = connection.get;
// Now what?
}
I immediately realize that serving files over a TCP connection, which is a sequence of raw bytes, could be unpleasant compared to the friendly typed world of MPL. I wish I could just use an MPL protocol over the TCP socket. Sadly, I don’t know of a way to enable that. But let’s see what we can do with this byte-oriented protocol.
In fact, I’ll be ambitious. I’d like to implement an incomplete HTTP server implementing a small part of HTTP 1.1.
The client will send a series of requests (using “\r\n” for newline), like this:
GET /folder/file%20one.txt HTTP/1.1
Host: fileserver.dyndns.com:80
Etc: other fields we can ignore
GET /folder/file%20two.txt HTTP/1.1
Etc: other fields we can ignore
HEAD /folder/file%20two.txt HTTP/1.1
POST /folder/file%20two.txt HTTP/1.1
Content-Length: 5
WTF?!
The server will respond like this:
HTTP/1.1 200 OK
Content-Length: 8
File oneHTTP/1.1 200 OK
Content-Length: 10
File two
HTTP/1.1 200 OK
Content-Length: 10
HTTP/1.1 405 Method Not Allowed
So let’s start writing the code…
process serveFiles(FileSystem fs, TcpConnection connection)
{
PutSeq<Byte> send = connection.get;
neg<PutSeq<Byte>> recv = connection.get;
String firstLine = receiveLine(recv, '\n');
...
}
Suddenly we have a serious problem. The type system won’t allow this receiveLine
function (and not just because I used ==
again):
[t] receiveLine(neg<PutSeq<t>> recv, t delimiter)
{
switch recv.get {
case Put: let c = recv.get;
if (c == delimiter)
[]
else {
let buf = receiveLine(recv, delimiter);
c:buf
}
case StopPut: []
}
}
See the problem? It’s about the recv
protocol. recv
might get StopPut
, meaning the channel is closing, or it might receive Put
, meaning the channel stays open. The state of the channel is unknown at the end of receiveLine
. In a traditional language, you might return a Bool
to the caller to indicate whether the channel stays open. But informing the caller isn’t good enough in MPL; somehow you must also inform the compiler.
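In other words, the signature you’d naturally reach for, sketched below, isn’t something the MPL type checker can accept, because the final state of recv depends on a run-time value:
/// Hypothetical, and NOT legal MPL: the Bool would tell the caller whether recv
/// is still open, but the compiler has no idea which state recv was left in.
([t], Bool) receiveLine(neg<PutSeq<t>> recv, t delimiter) { ... }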
Perhaps I can use a record
(coinductive type) as a workaround… let’s see:
codata Continuation<chan, data, ret> {
ret cont(chan c, data d);
ret stop(data d);
}
ret receiveLine(neg<PutSeq<t>> recv, t delimiter,
Continuation<neg<PutSeq<t>>, [t], ret> after)
{
receiveLineImpl(recv, delimiter, [],
record Continuation<neg<PutSeq<t>>, [t], ret> {
ret stop([t] buf) {
after.stop(reverse(buf));
}
ret cont(neg<PutSeq<t>> chan, [t] buf) {
after.cont(chan, reverse(buf));
}
});
}
ret receiveLineImpl(neg<PutSeq<t>> recv, t delimiter, [t] buf,
Continuation<neg<PutSeq<t>>, [t], ret> after)
{
switch recv.get {
case Put:
let c = recv.get;
if (c == delimiter)
after.cont(recv, buf);
else
receiveLineImpl(recv, delimiter, c:buf, after);
case StopPut:
recv.close();
after.stop(buf);
}
}
Whew! Trust me, that was a lot harder to write than the original version! It also appears to be less efficient, since the linked list of characters is built in the wrong order and must be reversed at the end.
So, I’ve defined a coinductive type called Continuation
that calls either stop
if the channel happens to close, or cont
if it stays open.
But there’s another problem; the idea of passing a channel into codata
isn’t clearly defined. When we pass a channel to a normal function, like this one:
void putStar(PutSeq<Char> ps) { ps.put(Put).put('*'); }
It seems perfectly practical to allow this, because the compiler can use the body of the function to infer the state of the channel ps
at the end of the function. However, the body of a codata
function is unknown. What is the state (protocol) of the channel after the call? For the purposes of this example, the cont
function needs to “use up” the recv
channel, so that when the function returns, the channel is either closed or passed to a different process, so it effectively no longer exists. Perhaps that’s okay as a default. But it seems as if, in general, when defining a codata
type we would need some special syntax to indicate the state that each channel will have when the function returns.
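For example, one could imagine an annotation along these lines; the "-> consumed" marker is entirely made-up syntax, included only to show the kind of information the compiler would need:
codata Continuation<chan, data, ret> {
ret cont(chan c -> consumed, data d);   /// "consumed": c must be closed or handed off before cont returns
ret stop(data d);
}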
Anyway, we can now write serveFiles
like this (I’ve assumed we have tuple types available, i.e. (item1, item2, ...)
):
process serveFiles(FileSystem fs, TcpConnection connection)
{
PutSeq<Byte> send = connection.get;
neg<PutSeq<Byte>> recv = connection.get;
handleNextRequest(fs, send, recv);
}
void handleNextRequest(FileSystem fs, PutSeq<Byte> send, neg<PutSeq<Byte>> recv)
{
receiveLine(recv, '\n', record Continuation<,,> {
ret stop([Byte] buf) {
send.put(StopPut).close(); /// Connection closed early
}
ret cont(neg<PutSeq<Byte>> recv', [Byte] line) {
let (method, path, ver) = parseFirstLine(bytesToString(line));
readHeaderLines(method == "POST", recv', record Continuation<,,> {
ret stop([String] headers) {
sendResponse(fs, send, method, path);
send.put(StopPut).close();
}
ret cont(neg<PutSeq<Byte>> recv'', [String] headers) {
sendResponse(fs, send, method, path);
handleNextRequest(fs, send, recv'');
}
});
}
});
}
void readHeaderLines(Bool expectBody, neg<PutSeq<Byte>> recv,
Continuation<neg<PutSeq<Byte>>, [String], void> after)
{
/// TODO: read headers into a list of strings and run after.cont()
/// when a blank line is received. If `expectBody`, we must also read the
/// "body" of the request.
}
/// Send the file if it exists and the method is GET,
/// send a header in case of HEAD, or send an error response.
process sendResponse(FileSystem fs, PutSeq<Byte> send, String method, String path)
{
Bool isHead = (method == "HEAD");
if (isHead || method == "GET") {
/// TODO: respond to the request for `path` (include Content-Length header)
} else {
bulkPut(send, stringToBytes("HTTP/1.1 405 Method Not Allowed\r\n\r\n"));
}
}
(String,String,String) parseFirstLine(String line)
{
/// TODO: Parse a line such as "GET /path/file%20one.txt HTTP/1.1"
/// into ("GET", "/path/file one.txt", "HTTP/1.1")
}
String bytesToString([Byte] bytes)
{
/// TODO
}
[Byte] stringToBytes(String str)
{
/// TODO
}
After writing all this, I realize there is another problem, a serious problem with the idea of closures in this context. As you can see, the closures cont
and stop
in handleNextRequest
need to refer to services fs
and send
. However, the compiler cannot determine this is safe via local analysis. In this particular program, receiveLine
calls stop
or cont
within itself in the same process, and it calls exactly one of them, exactly once.
The problem is, there are no restrictions on passing around or using codata record
s.
- The record could be passed to another process and called, which would mean fs and send were effectively transferred to another process. It is possible to create a cyclic graph of process relationships this way.
- stop and cont could be called multiple times, or not at all, implying fs and send are left in an indeterminate state.
Thus, while passing channels into codata
functions seems to be okay, “capturing” a channel in a closure is not acceptable.
Rust addresses a related issue by having multiple kinds of closures; our situation seems to call for something like Rust’s FnOnce
closure. However, adding such a thing is a substantial complication to MPL’s simple type system. Besides, this whole approach of passing “continuations” to receiveLine
is pretty clunky anyhow. It’s quite inconvenient to use.
Instead, I suggest that closures not be allowed to capture channels at all, at least not initially. Instead, we need to be able to put channels into normal data
types such as Maybe
. In that case, receiveLine
can have some of its original simplicity back:
([t], Maybe<neg<PutSeq<t>>>) receiveLine(neg<PutSeq<t>> recv, t delimiter)
{
switch recv.get {
case Put:
let c = recv.get;
if (c == delimiter)
([], Just(recv))
else {
let (buf, chan) = receiveLine(recv, delimiter);
(c:buf, chan)
}
case StopPut:
recv.close();
([], Nothing)
}
}
In this design, the argument recv
is always unusable after calling receiveLine
, but the caller may receive the same channel back as a return value.
Note: I have assumed I can just put a channel inside the normal Maybe<t>
type. And as long as the compiler knows that t
is a channel, that could work. However, you could not pass that Maybe
value to a function like unwrap
:
t unwrap(Maybe<t> value, t defaultValue) {
switch value {
case Just(x): x
case Nothing: defaultValue
}
}
If t
were a protocol type, unwrap
would leave defaultValue
in an indeterminate state, which cannot be allowed.
Now we must reimplement some of our earlier code:
void handleNextRequest(FileSystem fs, PutSeq<Byte> send, neg<PutSeq<Byte>> recv)
{
let (line, mRecv) = receiveLine(recv, '\n');
switch mRecv {
case Nothing:
send.put(StopPut).close(); /// Connection closed early
case Just(recv'):
let (method, path, ver) = parseFirstLine(bytesToString(line));
let (headers, mRecv') = readHeaderLines(method == "POST", recv');
sendResponse(fs, send, method, path);
switch mRecv' {
case Nothing:
send.put(StopPut).close();
case Just(recv''):
handleNextRequest(fs, send, recv'');
}
}
}
([String], Maybe<neg<PutSeq<Byte>>>)
readHeaderLines(Bool expectBody, neg<PutSeq<Byte>> recv)
{
/// TODO: read headers into a list of strings until a blank line is received,
/// then return them along with the remaining channel (if any). If `expectBody`,
/// we must also read the "body" of the request.
}
That’s it. I’m tired, I’m not actually going to complete this thing.
Additional thoughts
Close/halt
I’ve been haphazardly throwing in .close()
commands and the occasional halt
, but this part of MPL really doesn’t make sense to me. In MPL-AOC it seems that every “close” on one channel had to be matched with a “halt” (end process command) on another channel. The purpose of this requirement seems to be ensuring that a connected graph of processes remains connected, by ensuring that “leaf nodes” are killed. However,
- It appears that one is allowed to create disconnected graphs directly.
- When a “halt” is the next element of a protocol, it appears that one can do an arbitrary amount of work (including spawning new processes) before actually halting.
- So, I don’t see how “halting” is different from “closing” in any practical sense.
Last but not least, manually issuing a close command seems redundant; the compiler could infer when it needs to close a channel.
Simulating fork/split
If you can send channels along other channels, and if functions can accept and return channels, then fork
and split
are easily simulated like this:
protocol Fork<#a,#b> = put #a, put #b, halt;
void fork(Fork<#a,#b> old, #a a, #b b)
{
old.put(a).put(b);
}
(#a, #b) split(neg<Fork<#a,#b>> old)
{
let a = old.get;
let b = old.get;
old.close();
(a, b)
}
/// This code for P0, P1, P2, and Q is an adaptation of the (purposeless)
/// code in the earlier section "Concurrent MPL: split, fork, plug".
process P0(ProtocolA a, ProtocolB b, ProtocolC c, ProtocolQ qlink, Int num)
{
qlink.put(Q).put(12345);
fork(qlink, ~qlink1, ~qlink2);
P1(qlink1, a);
P2(qlink2, b, c, num);
}
process P1(ProtocolQ1 qlink1, ProtocolA a) { ... }
process P2(ProtocolQ2 qlink2, ProtocolB b, ProtocolC c, Int num) { ... }
process Q(neg<ProtocolQ> qlink)
{
switch qlink.get {
case Q:
Int num = qlink.get;
let (qlink1, qlink2) = split(qlink);
// qlink no longer exists at this point
...
}
}
Unique tag problem
It is clear from this document alone that requiring protocol tags to be unique is quite onerous. I’ve already defined Get
, Get1
, Get2
, GetT
and numerous Stop
commands such as StopGet
, StopPut
, StopBidi
, StopTimeService
, StopTcpListenService
etc.
We should find a way to drop this requirement. But how can we do it while supporting type inference?
When getting labels:
switch channel.get {
case Label1: ...
case Label2: ...
}
It seems to be required to handle all possible labels, so it would be possible to infer the protocol type from the set of labels rather than requiring each label to be unique. Inference would always work if every protocol has some difference in the set of labels.
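For example, if the label Get were allowed to appear in two different protocols, the set of labels handled would still pin down the type; GetOnce and ch are hypothetical:
protocol GetOnce<t> = put Get: get t, close
switch ch.get {
case Get: ...       /// handling {Get, StopGet} means ch must be neg<GetSeq<t>>,
case StopGet: ...   /// not neg<GetOnce<t>>, even though both protocols use Get
}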
When putting labels, however, it is certainly not required to put all possible labels. So it would seem type inference in this case wouldn’t be reliable, and users may sometimes have to attach an explicit protocol to the channel. I think that’s an acceptable price to pay for the right to use the same labels in different protocols.
Protocols invoking other protocols: “call” vs “goto” interpretation
My current interpretation of how protocols are supposed to work is that when one protocol refers to another protocol, it transfers control “permanently” to the new protocol. In other words, it’s a “goto”, not a “call”.
However, I think the “call” and “goto” interpretations are equivalent. I have a feeling that, given any set of protocols expressed in “call” form, I can give you one expressed in “goto” form and vice versa.
Example 1: “Put some ‘t’ values, then get an equal number back.”
/// "call" version
protocol GetsAndPuts<t> = get Get: get t, GetsAndPuts<t>, put t
| get Stop: ;
/// equivalent "goto" version
protocol GetsAndPuts<t, #then> = get Get: get t, GetsAndPuts<t, (put t, #then)>
| get Stop: #then;
When a protocol under the “call” interpretation comes to an end without looping or closing, this is taken to mean “return to the caller”, and if there is no “caller”, the protocol is assumed to close
. Thus, the upper protocol GetsAndPuts<t>
is equivalent to the lower protocol GetsAndPuts<t, close>
.
Example 2: “File System + GEither”
/// "call" version
protocol GEither<#left,#rite> = get GLeft: get #left
| get GRight: get #rite;
protocol FileSystem = put OpenFile: put String, put Bool,
GEither<Error, File>, loop
| ...
| put StopFileSystem: ;
/// equivalent "goto" version
protocol GEither<#left,#rite,#then> = get GLeft: get #left, #then
| get GRight: get #rite, #then;
protocol FileSystem<#then> = put OpenFile: put String, put Bool,
GEither<Error, File, loop>
| ...
| put StopFileSystem: #then;
It may not be obvious how to map the first version to the second, so I will explain in more detail:
- To convert a “call” protocol into a “goto” protocol, add an extra #then parameter which is inserted at any point where the protocol ends without using loop or close and without calling a protocol that never returns. (If the protocol never ends in one of these three ways then the protocol is already in “goto” form and should be left alone.)
- When a protocol calls a “call-protocol”, the “goto” version of that same call will include whatever is supposed to follow as an extra generic parameter.
/// Call-protocols
protocol GetsAndPuts<t> = get Get: get t, GetsAndPuts<t>, put t
| get Stop: ;
protocol GEither<#left,#rite> = get GLeft: get #left
| get GRight: get #rite;
protocol FileSystem = put OpenFile: put String, put Bool,
GEither<Error, File>, loop
| ...
| put StopFileSystem: ;
/// After step 1
protocol GetsAndPuts<t, #then> = get Get: get t, GetsAndPuts<t>, put t, #then
| get Stop: #then;
protocol GEither<#left,#rite,#then> = get GLeft: get #left, #then
| get GRight: get #rite, #then;
protocol FileSystem<#then> = put OpenFile: put String, put Bool,
GEither<Error, File>, loop
| ...
| put StopFileSystem: #then;
/// After step 2
protocol GetsAndPuts<t, #then> = get Get: get t, GetsAndPuts<t, (put t, #then)>
| get Stop: #then;
protocol GEither<#left,#rite,#then> = // No change: GEither doesn't "call" anything
protocol FileSystem<#then> = put OpenFile: put String, put Bool,
GEither<Error, File, loop>
| ...
| put StopFileSystem: #then;
- When a process uses a “call protocol”, it’s equivalent to using the “goto protocol” with an extra
close
parameter (e.g.FileSystem
->FileSystem<close>
).
One way to think of this is that a “call-based” protocol is simply syntactic sugar for a “goto-based” protocol. In fact, the syntax get a, put b, #c
can be considered syntactic sugar for the equivalent “goto” protocol Get<a, Put<b, #c>>
.
Is t
a subtype of #t
?
Consider the following protocol:
protocol GEither<#left,#rite,#then> = get GLeft: get #left, #then
| get GRight: get #rite, #then;
Earlier I suggested that the hash sign #
be used to mark generic arguments that are protocols rather than sequential data
/codata
. But wait, is there really anything wrong with using, say, GEither<Int,Bool,close>
? I don’t think so. If a generic function expects unknown channels but gets sequential types instead, there’s not really a problem:
void fn(GEither<#left,#rite,#then> e, PutSeq<#left> pL, PutSeq<#rite> pR)
{
switch e.get {
case GLeft: pL.put(Put).put(e.get);
case GRight: pR.put(Put).put(e.get);
}
}
If #left
and #rite
happen to be sequential types, this code still makes sense. I can’t think of anything you can do with an unknown protocol type that you can’t also do with a sequential type. Unknown sequential types seem to act like subtypes of unknown channels.
On the other hand if we write a function like
([left],[rite]) fn(GEither<left,rite,#then> e)
{
switch e.get {
case GLeft: let x = e.get; ([x, x],[]); /// duplication
case GRight: let x = e.get; ([],[x, x]); /// duplication
}
}
Then certainly you could not call this function with protocol types for left
and rite
.
“after” construct for specifying the middle of a protocol
Consider a protocol like
protocol Foo<t> = get Foo: get Int, put String, get t, put Bool, loop
| ...;
suppose we then write a pair of functions like
void fn1(Foo<t> foo) {
switch(foo.get) {
case Foo:
Int x = foo.get;
fn2(foo);
}
}
void fn2(foo) {
...
}
Until now, I’ve assumed if we want to specify the type of foo
in fn2
, we have to define a new protocol like
protocol Foo2<t> = put String, get t, put Bool, Foo<t>
void fn2(Foo2<t> foo) {
...
}
That’s pretty damn annoying. Instead, I think there should be an “after” modifier, something like this, that will represent a protocol in an altered state:
void fn2(Foo<t>.after<get Foo, get Int> foo) {
...
}
Much better.
Summary
MPL’s core elements are
- Inductive
data
types, which cannot contain loops or cycles - Coinductive
codata
types, which are like Java interfaces, and therecord
expression, which is like creating an anonymous implementation of a Java interface. fold
, which verifies termination of a set of functions- Processes, which behave like threads (except that they could be distributed across multiple machines)
- Channels, which are like pairs of bounded queues between processes
- Acyclic graphs (trees) of processes as a means of avoiding deadlocks
What you just learned
This document has explored MPL through numerous examples:
helloWorld
! (exploring the design of a terminal API and wrappersmakeBidi
,bulkPutAdapter
,bulkGetAdapter
)- The
guessingGame
(plusTimeService
,bulkGetAdapter
) - The
splitBidi
process, which MPL does not support - The design of a
TcpListenService
andTcpConnection
GetsAndPuts2<t, #p>
andFiloBuffer
, which demonstrates the feasibility of a recursive protocol, on the condition that blocking calls are allowed.emptyProcessTree
/processTree
and theProcessTree<#p>
protocol, which demonstrate that (1) a tree of processes can simulate a list of channels, but (2) the code for such a thing is very difficult to write.filterBitmap
andfilterBitmapInParallel
, which demonstrate the awful pain of storing bitmaps as linked lists.- The design of
FileSystem
andFile
services as external processes - The problem of writing a TCP/HTTP
fileServer
, which leads us to- the impossibility of cloning references to mutable services, the
ShareWith
pattern, and the unsolved issue of sharing multiple services with a new process. - two major difficulties with using channels in
codata
functions. - the apparent necessity of using
Maybe<#c>
where#c
is a channel type. This could probably be simulated with an ancillary “maybe” process, but that sounds clumsy and wasteful of CPU resources.
- the impossibility of cloning references to mutable services, the
Recommended modifications to MPL
A few of these recommendations are cosmetic, but it seems to me that many of them are required to give us a programming language in which it is practical to write nontrivial programs, like the file server.
- I would recommend that MPL’s syntax use mainly C++/C#/Java or Rust style, optionally substituting Pascal-style type indicators (
x: Int
instead ofInt x
) and/or a different style for generic types (e.g.PutSeq t
orPutSeq!t
instead ofPutSeq<t>
). Haskell’s general syntactic style is a poor fit for MPL, because MPL does not support currying, implying that we should use parentheses around argument lists. Also, if channels have polarity, MPL will look even less like Haskell. Besides, C-style syntax is far more widely known than Haskell. The stylex: Int
for typing is also well-known, used in languages as diverse as Pascal, Rust, TypeScript and mathematics. - So-called “destructors” of
codata
types should be invoked with dot notation (e.g.stream.head()
). Since there is no plan to support method overloading, name collisions will be a big problem without this feature. For example, would thehead(x)
function be used to get the head of a list or of a stream? Ifhead(x)
gets the head of a list, how would we denote getting the head of a stream? - Although this document does not use “left” and “right” polarities, I am actually rather neutral about whether they should be used. Similarly, while this document minimizes the use of type inference for the sake of clarity, some inference is required, and it’s nice if the compiler supports more.
- For clarity, the distinction between
protocol
andcoprotocol
should be replaced with a distinction betweenput Label:
andget Label:
, respectively. - Also, allow “bare” named protocols that do not begin with a label (no
get _:
orput _:
) - Terminal input and output should have separate channels, and similarly for TCP connections.
- It is clear that requiring protocol tags to be unique is onerous. Somehow, this requirement should be dropped.
- Allow chaining of
put
operations on channels:channel.put(...).put(...).get
. - Unify concurrent and sequential MPL: allow channels to be passed to functions and returned from functions.
- Eliminate
fork
andsplit
in favor of sending channels along channels (SCAC), because multiple examples above would be much more difficult to write if they were forced to usefork
. SCAC is more powerful;fork
andsplit
are easily simulated with SCAC, but the converse is not true. - Eliminate the concept of inner processes, but support the concept of
codata
closures inside processes (the latter will not be allowed to capture channels). - Explicit close or halt commands should be optional.
- Realistically, we need a way to allow channels to be stored in inductive data types, even though it may not be required in theory. This issue has been well-researched already under the banner of “typestate”. The “process tree” example demonstrates that it is difficult to write programs in a language without this capability.
- Furthermore, unless there is something wrong with my analysis, a generic type
t
should be treated as a subtype of#t
. I don’t think it actually matters whether the definition of adata
orprotocol
type uses#
or not; the important thing is whether a generic function uses#
or not. - Support an
after
construct for representing the “middle” of a protocol. - Figure out a way to allow the
==
operator to operate on generic types. - Consider allowing anonymous branching, as shown in the definition of
TcpListenService
. - Last but not least, it is my opinion that a standalone MPL compiler, even if it implements all of my recommendations, and regardless of its performance characteristics, would be unappealing for writing large-scale applications for several reasons, such as the unfilled need for a comprehensive “standard library”, and the practical need for some way to deal with runtime errors. Ultimately it makes more sense to build MPL features on top of an existing language, perhaps documenting a subset of that language that would provide the desired mathematical properties.
Limitations of MPL still not covered by the above
- Requiring an acyclic process graph implies that you cannot easily split a bidirectional channel, and it may place serious constraints on programs that use global services like
FileSystem
andTimeService
. - It appears there is no primitive planned for checking whether input is waiting on a channel, or for waiting for the first available input from multiple channels.
Other issues
- How do we avoid creating a new OS thread for each process, which would be too expensive?
- It is not clear that MPL will scale to support realistic concurrent programs given the acyclicity constraint. I recommend research be done into “isolating” the risk of deadlocks so that cycles can be allowed while minimizing risk.
- How do we support sharing more than one service with a new process?
- The design of protocols for system services (file system, TCP, etc.) seems to need a lot of careful thought and more effort than designing a traditional API. I sense that changing these APIs will be more difficult than changing traditional APIs… it could be very hard to change APIs without breaking existing code.
- It would be nice if APIs could be invoked over protocols with a more “natural” syntax, something that looks more like an ordinary function call.
- To avoid stack overflow, MPL seems to need a back-end that guarantees the tail-call optimization. .NET, for instance, doesn’t. However, the compiler front-end could implement this optimization itself, except in the case of mutual recursion. But it makes a lot of sense, I think, to simply allow loops.
- I suspect we’ll need some form of runtime error, and we’ll need some easy way for processes to receive external, asynchronous shutdown commands.
- People will discover that the compiler makes them write code in unreachable branches that the compiler does not know are unreachable. Sometimes it is impossible to add code in such a branch that will satisfy the compiler; the ability to halt with a runtime error would be useful in such a case.
- Earlier I suggested a feature called
call
while defining a recursive process, but in hindsight I believe the feature to be unnecessary (handy, but unnecessary). We just need to be able to pass channels to functions.
Things I don’t know:
- Are session types mathematically equivalent to MPL protocols?
- Does MPL have a feature for verifying that concurrent programs always terminate? How does
fold
factor into concurrent code?
Lolcat
I can haz masterz degree?
Comments