# Introduction

It’s been way too long since I last wrote about this side-project of mine, as I got side-tracked by other stuff both inside and outside the realm of LINQ (more about that some other time). Last time, I showed how to bend “the query pattern” to our will by providing an implementation of the Where operator on top of some Theorem&lt;T&gt; type — or, without the HTML escapes, Theorem<T>. Let’s recap that to set the scene:

```
/// <summary>
/// Strongly-typed theorem type for use with LINQ syntax.
/// </summary>
/// <typeparam name="T">Environment type over which the theorem is defined.</typeparam>
public class Theorem<T> : Theorem
{
    …

/// <summary>
/// Where query operator, used to add constraints to the theorem.
/// </summary>
/// <param name="constraint">Theorem constraint expression.</param>
/// <returns>Theorem with the new constraint applied.</returns>
public Theorem<T> Where(Expression<Func<T, bool>> constraint)
{
return new Theorem<T>(base.Context, base.Constraints.Concat(new List<LambdaExpression> { constraint }));
}

/// <summary>
/// Solves the theorem.
/// </summary>
/// <returns>Environment type instance with properties set to theorem-satisfying values.</returns>
public T Solve()
{
return base.Solve<T>();
}
}
```

We explained how the non-generic base type keeps track of a set of constraints that have been declared by the user through use of the where query operator, turned into expression trees by the Where signature’s first parameter. This allows us to define constraints in a manner that’s easy on the eye:

```
using (var ctx = new Z3Context())
{
ctx.Log = Console.Out;

var theo = from t in ctx.NewTheorem<Symbols<int, int, int, int, int>>()
where t.X1 - t.X2 >= 1
where t.X1 - t.X2 <= 3
where t.X1 == (2 * t.X3) + t.X5
where t.X3 == t.X5
where t.X2 == 6 * t.X4
select t;

var result = theo.Solve();
Console.WriteLine(result);
}
```

The Symbols generic type is one way to create a set of symbols to use within the declared theorem’s constraints (and to print the result if the constraints can be satisfied).
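Its definition lives in the previous post, but a minimal sketch of what such a Symbols type could look like is shown below. The property names X1 through X5 and the ToString override are assumptions inferred from how the type is used here:

```csharp
using System;

/// <summary>
/// Hypothetical sketch of the five-parameter Symbols type: a plain bag of
/// theorem variables exposed as get/set properties, plus a ToString override
/// used to print a satisfying assignment.
/// </summary>
public sealed class Symbols<T1, T2, T3, T4, T5>
{
    public T1 X1 { get; set; }
    public T2 X2 { get; set; }
    public T3 X3 { get; set; }
    public T4 X4 { get; set; }
    public T5 X5 { get; set; }

    public override string ToString()
    {
        // Print all variables in a single line, e.g. {X1 = 7, X2 = 6, ...}.
        return string.Format("{{X1 = {0}, X2 = {1}, X3 = {2}, X4 = {3}, X5 = {4}}}",
            X1, X2, X3, X4, X5);
    }
}
```

Lower-arity variants (e.g. a two-parameter Symbols<T1, T2>, used in the samples at the end) would follow the same pattern.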

From the above you should be able to get an idea about the LambdaExpression instances that are kept in the Theorem<T>’s base type’s instance field, as collected through the successive calls to Where. The remaining question is how to turn the defined theorem into action, something that’s triggered through the call to Solve. That’s the mystery we’ll reveal today.

# Three steps to solve a theorem

LINQ to Z3 is in fact nothing more than a runtime translator of LINQ expression trees into Z3 calls. The Solve method lies at the heart of this translation and acts as the entry-point for the user. It’s similar in nature to GetEnumerator in a typical LINQ provider, causing a translation of the query’s LINQ expression tree into the underlying query language (e.g. SQL, CAML, etc.).

Three steps take place to make this work:

• First of all, we need to establish an environment with the theorem’s symbols and inform Z3 about those. In other words, we map things like X1 to a Z3 counterpart, capturing the name and the type of the symbol (e.g. “T1”, int). We keep track of the Z3 targets (in fact those will be IntPtr pointers, see further) so that subsequent phases of the translation can refer to those when asserting constraints.
• Secondly, constraints expressed through LINQ need to be translated into Z3 constraints, based on the established environment above. We call this phase the assert constraints phase which is implemented as an expression tree visitor pattern. Here the bulk of the implementation of LINQ to Z3 lives.
• Finally, Z3 is queried for a model that satisfies the asserted constraints. If such a model exists, it’s still not ready for consumption by the end-user. To expose the solution in a user-friendly way, the symbols from the environment are mapped onto a .NET object that contains the values satisfying the expressed constraints. This phase is referred to as the get solution phase.

All of the above translates into the following implementation of Solve, on the Theorem base class:

```
/// <summary>
/// Solves the theorem using Z3.
/// </summary>
/// <typeparam name="T">Theorem environment type.</typeparam>
/// <returns>Result of solving the theorem; default(T) if the theorem cannot be satisfied.</returns>
protected T Solve<T>()
{
// TODO: some debugging around issues with proper disposal of native resources…
// using (Context context = _context.CreateContext())
Context context = _context.CreateContext();
{
var environment = GetEnvironment<T>(context);

AssertConstraints<T>(context, environment);

Model model = null;
if (context.CheckAndGetModel(ref model) != LBool.True)
return default(T);

return GetSolution<T>(model, environment);
}
}
```

Recall from the previous post that _context is a Z3Context object that wraps the Z3 Config type. All we do there is create a Z3 Context object, which is the Z3 primitive on which a theorem gets defined. Let’s have a brief look at what Z3’s Context type provides (simplified):

```
namespace Microsoft.Z3
{
public class Context : IDisposable
{
public Context(Config config);

public void AssertCnstr(IntPtr a);
…
public LBool Check();
public LBool CheckAndGetModel(ref Model m);
…
public IntPtr MkAdd(IntPtr arg1, IntPtr arg2);
public IntPtr MkAnd(IntPtr arg1, IntPtr arg2);
public IntPtr MkApp(IntPtr d, IntPtr[] args);
…
public IntPtr MkBoolType();
…
public IntPtr MkConst(string s, IntPtr ty);
…
public IntPtr MkFuncDecl(string s, IntPtr[] domain, IntPtr range);
…
public IntPtr MkGe(IntPtr arg1, IntPtr arg2);
public IntPtr MkGt(IntPtr arg1, IntPtr arg2);
…
public IntPtr MkXor(IntPtr t1, IntPtr t2);
…
public IntPtr Simplify(IntPtr a);
…
}
}
```

As you can see, the way to communicate with the Z3 library is through IntPtr objects. Recall the managed Z3 library is a simple wrapper on top of the native library, and the native “handle” mechanism used to represent Z3 objects is surfaced through IntPtr values (ultimately just pointers). Various Mk* functions exist to build up expressions that are used to represent constraints. For example, one could write:

```
IntPtr cA = ctx.MkConst("a", ctx.MkBoolType());
IntPtr cB = ctx.MkConst("b", ctx.MkBoolType());
IntPtr aAndB = ctx.MkAnd(cA, cB);
ctx.AssertCnstr(aAndB);
```

This asserts the constraint “a && b” (in C# lingo), where a and b are Boolean values, to the Z3 context. Ultimately we can call CheckAndGetModel to see whether the constraints can be satisfied, and if so a Model is passed back to us:

```
Model model = null;
if (context.CheckAndGetModel(ref model) != LBool.True)
    …
```

To consume the model, we can use the Model type in various ways:

```
namespace Microsoft.Z3
{
public class Model : IDisposable
{
public void Display(TextWriter w);
public void Display(TextWriter w, IntPtr v);
…
public IntPtr Eval(IntPtr __p1);
…
public bool GetBoolValueBool(IntPtr v);
…
}
}
```

Either we print the whole model or we evaluate expressions against the Model (amongst other options). Below is a sample of how a and b can be retrieved. Notice how cA and cB are used again to query the model:

```
bool a = model.GetBoolValueBool(model.Eval(cA));
bool b = model.GetBoolValueBool(model.Eval(cB));
```

All of the above is obviously a code fragment specialized to solve the constraint “a && b”, getting the result back as two Booleans. In LINQ to Z3 we need to generalize the steps above:

• We’ll introduce an environment that maps every symbol from the theorem onto the underlying IntPtr retrieved from calling MkConst.
• We’ll visit the expression trees that declare the constraints, generating Z3 expressions (e.g. via MkAnd) and asserting them on the Context.
• We’ll extract the model’s values for the bindings in the environment to instantiate an object representing the solution (if any).

Let’s start by looking at the code to establish an environment.

# Taking care of the environment

First we need to establish the environment from the T generic parameter type. E.g. in the aforementioned sample Symbols<…> is our starting point:

`from t in ctx.NewTheorem<Symbols<int, int, int, int, int>>() …`

But the user could equally well use different means to establish a bag of symbols to write a theorem against, e.g.:

`from t in ctx.NewTheorem(new { x = default(bool), y = default(bool) }) …`

We need to let Z3 know what the types are of the symbols we’re using here. Obviously we’re going to use reflection to extract this information, as shown below:

```
/// <summary>
/// Maps the properties on the theorem environment type to Z3 handles for bound variables.
/// </summary>
/// <typeparam name="T">Theorem environment type to create a mapping table for.</typeparam>
/// <param name="context">Z3 context.</param>
/// <returns>Environment mapping table from .NET properties onto Z3 handles.</returns>
private static Dictionary<PropertyInfo, IntPtr> GetEnvironment<T>(Context context)
{
var environment = new Dictionary<PropertyInfo, IntPtr>();

//
// All public properties are considered part of the theorem's environment.
// Notice we can't require custom attribute tagging if we want the user to be able to
// use anonymous types as a convenience solution.
//
foreach (var parameter in typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance))
{
//
// Normalize types when facing Z3. Theorem variable type mappings allow for strong
// typing within the theorem, while underlying variable representations are Z3-
// friendly types.
//
var parameterType = parameter.PropertyType;
var parameterTypeMapping = (TheoremVariableTypeMappingAttribute)parameterType.GetCustomAttributes(typeof(TheoremVariableTypeMappingAttribute), false).SingleOrDefault();
if (parameterTypeMapping != null)
parameterType = parameterTypeMapping.RegularType;

//
// Map the environment onto Z3-compatible types.
//
if (parameterType == typeof(bool))
environment.Add(parameter, context.MkConst(parameter.Name, context.MkBoolType()));
else if (parameterType == typeof(int))
environment.Add(parameter, context.MkConst(parameter.Name, context.MkIntType()));
else
throw new NotSupportedException("Unsupported parameter type for " + parameter.Name + ".");
}

return environment;
}
```

To understand the above, start from the method’s contract: given a Z3 context, we return a dictionary that maps PropertyInfo objects onto IntPtr objects representing the symbols introduced on the Z3 context. The code is fairly straightforward and only cares about properties (though one could extend it to allow fields as well). Ignore the TheoremVariableTypeMapping part for the time being; it can be dropped from the code and will be explained in a subsequent post. Notice we only support Z3’s Bool and Int types.
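As an aside, supporting fields as hinted at above would boil down to widening the dictionary key type from PropertyInfo to MemberInfo and adding a second reflection loop. A rough sketch, with the method name invented for illustration and assuming the same Context wrapper shown earlier:

```csharp
// Hypothetical variant keyed on MemberInfo so both public properties and
// public fields can act as theorem variables. MkConst/MkBoolType/MkIntType
// are the Z3 wrapper methods shown in the simplified Context listing.
private static Dictionary<MemberInfo, IntPtr> GetEnvironmentWithFields<T>(Context context)
{
    var environment = new Dictionary<MemberInfo, IntPtr>();

    foreach (var field in typeof(T).GetFields(BindingFlags.Public | BindingFlags.Instance))
    {
        if (field.FieldType == typeof(bool))
            environment.Add(field, context.MkConst(field.Name, context.MkBoolType()));
        else if (field.FieldType == typeof(int))
            environment.Add(field, context.MkConst(field.Name, context.MkIntType()));
        else
            throw new NotSupportedException("Unsupported field type for " + field.Name + ".");
    }

    // Properties would be handled exactly as in GetEnvironment<T> above,
    // adding them to the same MemberInfo-keyed dictionary.

    return environment;
}
```

The downstream visitor would then look up MemberInfo instead of PropertyInfo in VisitMember, and GetSolution would need a similar tweak when assigning values back.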

This was easy, wasn’t it? Next, the meat of the translator: translating and asserting constraints.

# Asserting constraints

Asserting the constraints from the LINQ expression trees onto Z3 constraints is the next, and most complicated, step. The entry-point to the method responsible for this looks as follows:

```
/// <summary>
/// Asserts the theorem constraints on the Z3 context.
/// </summary>
/// <param name="context">Z3 context.</param>
/// <param name="environment">Environment with bindings of theorem variables to Z3 handles.</param>
/// <typeparam name="T">Theorem environment type.</typeparam>
private void AssertConstraints<T>(Context context, Dictionary<PropertyInfo, IntPtr> environment)
{
var constraints = _constraints;

//
// Global rewriter registered?
//
var rewriterAttr = (TheoremGlobalRewriterAttribute)typeof(T).GetCustomAttributes(typeof(TheoremGlobalRewriterAttribute), false).SingleOrDefault();
if (rewriterAttr != null)
{
//
// Make sure the specified rewriter type implements the ITheoremGlobalRewriter.
//
var rewriterType = rewriterAttr.RewriterType;
if (!typeof(ITheoremGlobalRewriter).IsAssignableFrom(rewriterType))
throw new InvalidOperationException("Invalid global rewriter type definition. Did you implement ITheoremGlobalRewriter?");

//
// Assume a parameterless public constructor to new up the rewriter.
//
var rewriter = (ITheoremGlobalRewriter)Activator.CreateInstance(rewriterType);

//
// Do the rewrite.
//
constraints = rewriter.Rewrite(constraints);
}

//
// Visit, assert and log.
//
foreach (var constraint in constraints)
{
IntPtr c = Visit(context, environment, constraint.Body, constraint.Parameters[0]);
context.AssertCnstr(c);

_context.LogWriteLine(context.ToString(c));
}
}
```

I’m not holding any information back, so some of the above can be ignored for the time being: mask away all of the code with the word rewriter in it, as it will be explained next time around. (To satisfy your curiosity, rewriters allow you to create domain-specific constraint languages, e.g. to solve a Kakuro puzzle.)

The essence of the code lives in the foreach-loop that translates the lambda expression-based constraints through a call to the Visit method, returning a Z3 expression for the constraint, which is subsequently asserted on the Z3 context. Obviously Visit is where all the magic happens. To do its work, it gets four parameters:

• The Z3 context, to call Mk* functions on during the translation.
• Our environment, to be able to map occurrences of symbols (expressed as properties) onto the introduced Z3 corresponding objects.
• Obviously the lambda expression’s body to translate (cf. Expression<Func<T, bool>> constraint).
• And finally, the lambda expression’s parameter expression object (so that calls on it can be recognized, e.g. symbols => … symbols.X1 …).

On to the visitor now. Before we dive in, let’s think a bit about the operations we want to support in constraints: arithmetic (+, -, *, /, %), logic (&&, ||, !) and relational (<, <=, >, >=, ==, !=). There’ll be a few more, as we shall see in a minute. As usual with expression trees, lots of switch-statements will be attending the party:

```
/// <summary>
/// Main visitor method to translate the LINQ expression tree into a Z3 expression handle.
/// </summary>
/// <param name="context">Z3 context.</param>
/// <param name="environment">Environment with bindings of theorem variables to Z3 handles.</param>
/// <param name="expression">LINQ expression tree node to be translated.</param>
/// <param name="param">Parameter used to express the constraint on.</param>
/// <returns>Z3 expression handle.</returns>
private IntPtr Visit(Context context, Dictionary<PropertyInfo, IntPtr> environment, Expression expression, ParameterExpression param)
{
//
// Largely table-driven mechanism, providing constructor lambdas to generic Visit*
// methods, classified by type and arity.
//
switch (expression.NodeType)
{
case ExpressionType.And:
case ExpressionType.AndAlso:
return VisitBinary(context, environment, (BinaryExpression)expression, param, (ctx, a, b) => ctx.MkAnd(a, b));
case ExpressionType.Or:
case ExpressionType.OrElse:
return VisitBinary(context, environment, (BinaryExpression)expression, param, (ctx, a, b) => ctx.MkOr(a, b));
case ExpressionType.ExclusiveOr:
return VisitBinary(context, environment, (BinaryExpression)expression, param, (ctx, a, b) => ctx.MkXor(a, b));
case ExpressionType.Not:
return VisitUnary(context, environment, (UnaryExpression)expression, param, (ctx, a) => ctx.MkNot(a));
case ExpressionType.Negate:
case ExpressionType.NegateChecked:
return VisitUnary(context, environment, (UnaryExpression)expression, param, (ctx, a) => ctx.MkUnaryMinus(a));
case ExpressionType.Add:
case ExpressionType.AddChecked:
return VisitBinary(context, environment, (BinaryExpression)expression, param, (ctx, a, b) => ctx.MkAdd(a, b));
case ExpressionType.Subtract:
case ExpressionType.SubtractChecked:
return VisitBinary(context, environment, (BinaryExpression)expression, param, (ctx, a, b) => ctx.MkSub(a, b));
case ExpressionType.Multiply:
case ExpressionType.MultiplyChecked:
return VisitBinary(context, environment, (BinaryExpression)expression, param, (ctx, a, b) => ctx.MkMul(a, b));
case ExpressionType.Divide:
return VisitBinary(context, environment, (BinaryExpression)expression, param, (ctx, a, b) => ctx.MkDiv(a, b));
case ExpressionType.Modulo:
return VisitBinary(context, environment, (BinaryExpression)expression, param, (ctx, a, b) => ctx.MkRem(a, b));
case ExpressionType.LessThan:
return VisitBinary(context, environment, (BinaryExpression)expression, param, (ctx, a, b) => ctx.MkLt(a, b));
case ExpressionType.LessThanOrEqual:
return VisitBinary(context, environment, (BinaryExpression)expression, param, (ctx, a, b) => ctx.MkLe(a, b));
case ExpressionType.GreaterThan:
return VisitBinary(context, environment, (BinaryExpression)expression, param, (ctx, a, b) => ctx.MkGt(a, b));
case ExpressionType.GreaterThanOrEqual:
return VisitBinary(context, environment, (BinaryExpression)expression, param, (ctx, a, b) => ctx.MkGe(a, b));
case ExpressionType.Equal:
return VisitBinary(context, environment, (BinaryExpression)expression, param, (ctx, a, b) => ctx.MkEq(a, b));
case ExpressionType.NotEqual:
return VisitBinary(context, environment, (BinaryExpression)expression, param, (ctx, a, b) => ctx.MkNot(ctx.MkEq(a, b)));
case ExpressionType.MemberAccess:
return VisitMember(environment, (MemberExpression)expression, param);
case ExpressionType.Constant:
return VisitConstant(context, (ConstantExpression)expression);
case ExpressionType.Call:
return VisitCall(context, environment, (MethodCallExpression)expression, param);
default:
throw new NotSupportedException("Unsupported expression node type encountered: " + expression.NodeType);
}
}
```

Notice the above is made as declarative as possible: most expression tree node types are mapped onto either unary or binary specialized visitor methods, passing in a lambda expression that acts as the “Z3 constructor”. For example, And (for & in C#) uses a “(ctx, a, b) => ctx.MkAnd(a, b)” argument. The VisitBinary method is responsible for recursively translating the left-hand side and right-hand side of the And operation (e.g. x & y, where x and y can be arbitrarily complex expressions), ultimately feeding the resulting Z3-returned IntPtr objects into the supplied lambda expression.

Next to the VisitBinary and VisitUnary calls, there are specialized calls for the MemberAccess, Constant and Call expression tree nodes. We’ll defer the discussion of those for a moment and focus on binary and unary calls first:

```
/// <summary>
/// Visitor method to translate a binary expression.
/// </summary>
/// <param name="context">Z3 context.</param>
/// <param name="environment">Environment with bindings of theorem variables to Z3 handles.</param>
/// <param name="expression">Binary expression.</param>
/// <param name="ctor">Constructor to combine recursive visitor results.</param>
/// <param name="param">Parameter used to express the constraint on.</param>
/// <returns>Z3 expression handle.</returns>
private IntPtr VisitBinary(Context context, Dictionary<PropertyInfo, IntPtr> environment, BinaryExpression expression, ParameterExpression param, Func<Context, IntPtr, IntPtr, IntPtr> ctor)
{
return ctor(context, Visit(context, environment, expression.Left, param), Visit(context, environment, expression.Right, param));
}

/// <summary>
/// Visitor method to translate a unary expression.
/// </summary>
/// <param name="context">Z3 context.</param>
/// <param name="environment">Environment with bindings of theorem variables to Z3 handles.</param>
/// <param name="expression">Unary expression.</param>
/// <param name="ctor">Constructor to combine recursive visitor results.</param>
/// <param name="param">Parameter used to express the constraint on.</param>
/// <returns>Z3 expression handle.</returns>
private IntPtr VisitUnary(Context context, Dictionary<PropertyInfo, IntPtr> environment, UnaryExpression expression, ParameterExpression param, Func<Context, IntPtr, IntPtr> ctor)
{
return ctor(context, Visit(context, environment, expression.Operand, param));
}
```

More documentation than anything else :-). Those methods simply visit the expression’s operands (e.g. Left, Right for a binary expression, Operand for a unary one) recursively. The obtained results, of type IntPtr, are fed into the “ctor” delegate parameter to obtain the final result. Trivial stuff.

Now the more complicated visitor methods for members, constants and method calls. First, constants:

```
/// <summary>
/// Visitor method to translate a constant expression.
/// </summary>
/// <param name="context">Z3 context.</param>
/// <param name="constant">Constant expression.</param>
/// <returns>Z3 expression handle.</returns>
private static IntPtr VisitConstant(Context context, ConstantExpression constant)
{
if (constant.Type == typeof(int))
return context.MkNumeral((int)constant.Value, context.MkIntType());
else if (constant.Type == typeof(bool))
return (bool)constant.Value ? context.MkTrue() : context.MkFalse();

throw new NotSupportedException("Unsupported constant type.");
}
```

This code should be easy enough to understand as well: looking at the type of the constant value, different Z3 calls are made based on the Value property’s value.

Members get a tiny bit more involved, as that’s where we need to recognize property accesses for symbols. In other words, we need to recognize accesses like t.X1 on the constraint parameter and map them back onto the Z3 handles stored in the environment:

```
/// <summary>
/// Visitor method to translate a member expression.
/// </summary>
/// <param name="environment">Environment with bindings of theorem variables to Z3 handles.</param>
/// <param name="member">Member expression.</param>
/// <param name="param">Parameter used to express the constraint on.</param>
/// <returns>Z3 expression handle.</returns>
private static IntPtr VisitMember(Dictionary<PropertyInfo, IntPtr> environment, MemberExpression member, ParameterExpression param)
{
//
// E.g. Symbols l = ...;
//      theorem.Where(s => l.X1)
//                         ^^
//
if (member.Expression != param)
throw new NotSupportedException("Encountered member access not targeting the constraint parameter.");

//
// Only members we allow currently are direct accesses to the theorem's variables
// in the environment type. So we just try to find the mapping from the environment
// bindings table.
//
PropertyInfo property;
IntPtr value;
if ((property = member.Member as PropertyInfo) == null
|| !environment.TryGetValue(property, out value))
throw new NotSupportedException("Unknown parameter encountered: " + member.Member.Name + ".");

return value;
}
```

First we make sure the user didn’t access something outside the realm of the constraint parameter, e.g.:

```
Symbols<int, int, int, int, int> oops = new Symbols<int, int, int, int, int>();
var theo = from t in ctx.NewTheorem<Symbols<int, int, int, int, int>>()
where t.X1 - oops.X2 >= 1
where t.X1 - t.X2 <= 3
where t.X1 == (2 * t.X3) + t.X5
where t.X3 == t.X5
where t.X2 == 6 * t.X4
select t;```

In essence this is some kind of scope check with regards to the expressed theorem. Recall the above corresponds to calls to the Where method that look as follows:

`ctx.NewTheorem<Symbols<int, int, int, int, int>>().Where(t => t.X1 - oops.X2 >= 1)…`

We’ve captured the lambda expression parameter “t” in the AssertConstraints method and carry it throughout the visitor so we can check it’s used as the left-hand side of member accesses inside constraint expressions.

Note: This restriction limits expressiveness quite a bit. For example, capturing an outer variable gives rise to a closure being created, which is accessed through field accesses (of type MemberExpression). Such potentially valid uses get rejected by the code above. To restore such functionality, one can employ partial compilation using LambdaExpression.Compile, after checking the expression doesn’t contain the lambda parameter. The essence is shown below:

```
if (member.Expression != param)
{
if (!IsLambdaParameterFree(member.Expression, param))
throw new NotSupportedException("Unsupported member expression using constraint parameter.");

// Evaluation outside Z3.
object value = Expression.Lambda(member.Expression).Compile().DynamicInvoke();
return VisitConstant(context, Expression.Constant(value));
}
```

We leave the implementation of such techniques to the reader, together with the task of hooking this logic up in all the appropriate places (the default case of Visit, in VisitCall further on, etc.).
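For completeness, one possible shape for the IsLambdaParameterFree helper mentioned above (hypothetical; not part of the original code) is a small recursive walk over the node types we care about, conservatively rejecting anything unknown:

```csharp
using System.Linq;
using System.Linq.Expressions;

static class ExpressionChecks
{
    /// <summary>
    /// Hypothetical helper: returns true if the expression tree doesn't reference
    /// the constraint parameter anywhere, so it can safely be evaluated locally
    /// through LambdaExpression.Compile.
    /// </summary>
    public static bool IsLambdaParameterFree(Expression expression, ParameterExpression param)
    {
        if (expression == null)
            return true;
        if (expression == param)
            return false;

        var binary = expression as BinaryExpression;
        if (binary != null)
            return IsLambdaParameterFree(binary.Left, param)
                && IsLambdaParameterFree(binary.Right, param);

        var unary = expression as UnaryExpression;
        if (unary != null)
            return IsLambdaParameterFree(unary.Operand, param);

        var member = expression as MemberExpression;
        if (member != null)
            return IsLambdaParameterFree(member.Expression, param);

        var call = expression as MethodCallExpression;
        if (call != null)
            return IsLambdaParameterFree(call.Object, param)
                && call.Arguments.All(a => IsLambdaParameterFree(a, param));

        if (expression is ConstantExpression || expression is ParameterExpression)
            return true; // a different lambda's parameter, or a plain constant

        return false; // conservatively reject node types we don't handle
    }
}
```

A production version would cover more node types (conditionals, array indexing, …), but erring on the side of rejection keeps the translator safe.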

VisitMember finally consults the environment to map the property access back to the Z3 object generated in the GetEnvironment call.

Finally, we have a VisitCall method as well. Once more, I’ve kept some advanced rewriter stuff in here (to be explained in a subsequent post), so ignore that for the time being. The reason we implement VisitCall is to provide a means to expose advanced Z3 constructs efficiently, e.g. the Distinct operator. To do so, a Z3Methods class is provided:

```
/// <summary>
/// Z3 predicate methods.
/// </summary>
public static class Z3Methods
{
/// <summary>
/// Creates a predicate constraining the given symbols as distinct values.
/// </summary>
/// <typeparam name="T">Type of the parameters.</typeparam>
/// <param name="symbols">Symbols that are required to be distinct.</param>
/// <returns>Predicate return value.</returns>
/// <remarks>This method should only be used within LINQ expressions.</remarks>
public static bool Distinct<T>(params T[] symbols /* type? */)
{
throw new NotSupportedException("This method should only be used in query expressions.");
}
}
```

This is very similar to the SqlMethods class of LINQ to SQL, and leaves room for future extensions. VisitCall has the simple task of recognizing calls to Z3Methods methods, turning them into the corresponding Z3 primitives.

```
/// <summary>
/// Visitor method to translate a method call expression.
/// </summary>
/// <param name="context">Z3 context.</param>
/// <param name="environment">Environment with bindings of theorem variables to Z3 handles.</param>
/// <param name="call">Method call expression.</param>
/// <param name="param">Parameter used to express the constraint on.</param>
/// <returns>Z3 expression handle.</returns>
private IntPtr VisitCall(Context context, Dictionary<PropertyInfo, IntPtr> environment, MethodCallExpression call, ParameterExpression param)
{
var method = call.Method;

//
// Does the method have a rewriter attribute applied?
//
var rewriterAttr = (TheoremPredicateRewriterAttribute)method.GetCustomAttributes(typeof(TheoremPredicateRewriterAttribute), false).SingleOrDefault();
if (rewriterAttr != null)
{
//
// Make sure the specified rewriter type implements the ITheoremPredicateRewriter.
//
var rewriterType = rewriterAttr.RewriterType;
if (!typeof(ITheoremPredicateRewriter).IsAssignableFrom(rewriterType))
throw new InvalidOperationException("Invalid predicate rewriter type definition. Did you implement ITheoremPredicateRewriter?");

//
// Assume a parameterless public constructor to new up the rewriter.
//
var rewriter = (ITheoremPredicateRewriter)Activator.CreateInstance(rewriterType);

//
// Make sure we don't get stuck when the rewriter just returned its input. Valid
// rewriters should satisfy progress guarantees.
//
var result = rewriter.Rewrite(call);
if (result == call)
throw new InvalidOperationException("The expression tree rewriter of type " + rewriterType.Name + " did not perform any rewrite. Aborting compilation to avoid infinite looping.");

//
// Visit the rewritten expression.
//
return Visit(context, environment, result, param);
}

//
// Filter for known Z3 operators.
//
if (method.IsGenericMethod && method.GetGenericMethodDefinition() == typeof(Z3Methods).GetMethod("Distinct"))
{
//
// We know the signature of the Distinct method call. Its argument is a params
// array, hence we expect a NewArrayExpression.
//
var arr = (NewArrayExpression)call.Arguments[0];
var args = from arg in arr.Expressions select Visit(context, environment, arg, param);
return context.MkDistinct(args.ToArray());
}
else
throw new NotSupportedException("Unknown method call: " + method.ToString());
}
```

In particular, a call to Distinct is processed by inspecting its argument corresponding to the params-array, recursively translating any of its elements (e.g. Z3Methods.Distinct(1, 2, s.X1, s.X2) will give rise to two constant expressions and two member accesses, but more complex expressions can occur as the arguments as well…). Ultimately, MkDistinct is called and all is happy.

Now that we’re done investigating the visitor, we can have a look at the final stage of the LINQ to Z3-driven theorem solving, the get solution phase.

# Getting the solution

To expose the solution to the user, we new up a Plain Old CLR Object (POCO) of the theorem’s generic parameter type T. All that’s left to do here is to use some reflection voodoo to instantiate T, populate its properties somehow, and return the object. Different situations can occur, based on the way the type is defined. In particular, use of an anonymous type poses some problems, as there’s no formal correlation between the generated constructor and the type’s properties (which are get-only, so we can’t assign values directly). This isn’t super-exciting stuff, but nonetheless:

```
/// <summary>
/// Gets the solution object for the solved theorem.
/// </summary>
/// <typeparam name="T">Environment type to create an instance of.</typeparam>
/// <param name="model">Z3 model to evaluate theorem parameters under.</param>
/// <param name="environment">Environment with bindings of theorem variables to Z3 handles.</param>
/// <returns>Instance of the environment type with theorem-satisfying values.</returns>
private static T GetSolution<T>(Model model, Dictionary<PropertyInfo, IntPtr> environment)
{
Type t = typeof(T);

//
// Determine whether T is a compiler-generated type, indicating an anonymous type.
// This check might not be reliable enough but works for now.
//
if (t.GetCustomAttributes(typeof(CompilerGeneratedAttribute), false).Any())
{
//
// Anonymous types have a constructor that takes in values for all its properties.
// However, we don't know the order and it's hard to correlate back the parameters
// to the underlying properties. So, we want to bypass that constructor altogether
// by using the FormatterServices to create an uninitialized (all-zero) instance.
//
T result = (T)FormatterServices.GetUninitializedObject(t);

//
// Here we take advantage of undesirable knowledge on how anonymous types are
// implemented by the C# compiler. This is risky but we can live with it for
// now in this POC. Because the properties are get-only, we need to perform
// nominal matching with the corresponding backing fields.
//
var fields = t.GetFields(BindingFlags.NonPublic | BindingFlags.Instance);
foreach (var parameter in environment.Keys)
{
//
// Mapping from property to field.
//
var field = fields.Where(f => f.Name.StartsWith("<" + parameter.Name + ">")).SingleOrDefault();

//
// Evaluation of the values through the handle in the environment bindings.
//
IntPtr val = model.Eval(environment[parameter]);
if (parameter.PropertyType == typeof(bool))
field.SetValue(result, model.GetBoolValueBool(val));
else if (parameter.PropertyType == typeof(int))
field.SetValue(result, model.GetNumeralValueInt(val));
else
throw new NotSupportedException("Unsupported parameter type for " + parameter.Name + ".");
}

return result;
}
else
{
//
// Straightforward case of having an "onymous type" at hand.
//
T result = Activator.CreateInstance<T>();

foreach (var parameter in environment.Keys)
{
//
// Normalize types when facing Z3. Theorem variable type mappings allow for strong
// typing within the theorem, while underlying variable representations are Z3-
// friendly types.
//
var parameterType = parameter.PropertyType;
var parameterTypeMapping = (TheoremVariableTypeMappingAttribute)parameterType.GetCustomAttributes(typeof(TheoremVariableTypeMappingAttribute), false).SingleOrDefault();
if (parameterTypeMapping != null)
parameterType = parameterTypeMapping.RegularType;

//
// Evaluation of the values through the handle in the environment bindings.
//
IntPtr val = model.Eval(environment[parameter]);
object value;
if (parameterType == typeof(bool))
value = model.GetBoolValueBool(val);
else if (parameterType == typeof(int))
value = model.GetNumeralValueInt(val);
else
throw new NotSupportedException("Unsupported parameter type for " + parameter.Name + ".");

//
// If there was a type mapping, we need to convert back to the original type.
// In that case we expect a constructor with the mapped type to be available.
//
if (parameterTypeMapping != null)
{
var ctor = parameter.PropertyType.GetConstructor(new Type[] { parameterType });
if (ctor == null)
throw new InvalidOperationException("Could not construct an instance of the mapped type " + parameter.PropertyType.Name + ". No public constructor with parameter type " + parameterType + " found.");

value = ctor.Invoke(new object[] { value });
}

parameter.SetValue(result, value, null);
}

return result;
}
}```

Focus on the non-anonymous type case at the bottom, ignoring the parts on type mappings (to be explained another time). The essence is quite simple:

• Create an instance of the type.
• Go over the environment and:
  • Extract the value from the Z3 model using the environment’s stored Z3 object.
  • Assign it to the property.

Note that get-only properties are not supported, though one could look at LINQ to SQL as a source of inspiration for dealing with such cases as well, if desired.
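The non-anonymous branch boils down to reflection-driven property assignment. As a minimal sketch, with the Z3 evaluation and type-mapping details elided (hypothetical helper and demo type, not the article's exact code):

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Hypothetical demo type standing in for a theorem environment type.
class Sym { public int X { get; set; } }

static class ModelMaterializer
{
    // Simplified sketch of the loop above: given the solved value for each
    // environment property, instantiate T and assign the properties through
    // reflection. As noted, get-only properties would fail here.
    public static T Materialize<T>(IDictionary<PropertyInfo, object> solved)
    {
        T result = Activator.CreateInstance<T>();
        foreach (var entry in solved)
            entry.Key.SetValue(result, entry.Value, null);
        return result;
    }
}
```

The real code additionally calls model.Eval per variable and branches on bool versus int before assigning.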

# Samples

A few samples of simple theorems using the infrastructure provided above are shown below:

```using (var ctx = new Z3Context())
{
ctx.Log = Console.Out;

Print(from t in ctx.NewTheorem(new { x = default(bool) })
where t.x && !t.x
select t);

Print(from t in ctx.NewTheorem(new { x = default(bool), y = default(bool) })
where t.x ^ t.y
select t);

Print(from t in ctx.NewTheorem(new { x = default(int), y = default(int) })
where t.x < t.y + 1
where t.x > 2
select t);

Print(from t in ctx.NewTheorem<Symbols<int, int>>()
where t.X1 < t.X2 + 1
where t.X1 > 2
where t.X1 != t.X2
select t);

Print(from t in ctx.NewTheorem<Symbols<int, int, int, int, int>>()
where t.X1 - t.X2 >= 1
where t.X1 - t.X2 <= 3
where t.X1 == (2 * t.X3) + t.X5
where t.X3 == t.X5
where t.X2 == 6 * t.X4
select t);

Print(from t in ctx.NewTheorem<Symbols<int, int>>()
where Z3Methods.Distinct(t.X1, t.X2)
select t);
}```

The reader should be able to trace through those samples mentally without much trouble. The Print method is shown below:

```private static void Print<T>(Theorem<T> t) where T : class
{
Console.WriteLine(t);
var res = t.Solve();
Console.WriteLine(res == null ? "none" : res.ToString());
Console.WriteLine();
}```

Given all of this, the output looks as follows:

Sweet! Notice how setting the Log property on the context object causes the constraints to be emitted to the screen, in LISP-style prefix notation for operators and operands.

# Sneak peek

Given the baseline infrastructure we covered in this post, we can start adding layers of abstraction over it to make more advanced theorems easier to express. An example is a Sudoku puzzle:

```var s1 = from t in Sudoku.Create(ctx)
where t.Cell13 == 2 && t.Cell16 == 1 && t.Cell18 == 6
where t.Cell23 == 7 && t.Cell26 == 4
where t.Cell31 == 5 && t.Cell37 == 9
where t.Cell42 == 1 && t.Cell44 == 3
where t.Cell51 == 8 && t.Cell55 == 5 && t.Cell59 == 4
where t.Cell66 == 6 && t.Cell68 == 2
where t.Cell73 == 6 && t.Cell79 == 7
where t.Cell84 == 8 && t.Cell87 == 3
where t.Cell92 == 4 && t.Cell94 == 9 && t.Cell97 == 2
select t;

Console.WriteLine(s1);
Console.WriteLine(s1.Solve());
Console.WriteLine();```

Or more compactly represented as a string:

```var s2 = Sudoku.Parse(ctx, @"..2..1.6.
..7..4...
5.....9..
.1.3.....
8...5...4
.....6.2.
..6.....7
...8..3..
.4.9..2..");
Console.WriteLine(s2);
Console.WriteLine(s2.Solve());
Console.WriteLine();```

Here we’ve created a domain-specific theorem type, Sudoku, that’s responsible for providing part of the constraints. In fact, a Sudoku has many more constraints than the ones specified by the user: rows, columns and blocks need to contain unique values from 1 through 9, and every cell has to be within that range. For the innocent-looking Sudoku above, we get the following set of constraints:

((1 <= p.Cell11) && (p.Cell11 <= 9)),
((1 <= p.Cell12) && (p.Cell12 <= 9)),
((1 <= p.Cell13) && (p.Cell13 <= 9)),
((1 <= p.Cell14) && (p.Cell14 <= 9)),
((1 <= p.Cell15) && (p.Cell15 <= 9)),
((1 <= p.Cell16) && (p.Cell16 <= 9)),
((1 <= p.Cell17) && (p.Cell17 <= 9)),
((1 <= p.Cell18) && (p.Cell18 <= 9)),
((1 <= p.Cell19) && (p.Cell19 <= 9)),
((1 <= p.Cell21) && (p.Cell21 <= 9)),
((1 <= p.Cell22) && (p.Cell22 <= 9)),
((1 <= p.Cell23) && (p.Cell23 <= 9)),
((1 <= p.Cell24) && (p.Cell24 <= 9)),
((1 <= p.Cell25) && (p.Cell25 <= 9)),
((1 <= p.Cell26) && (p.Cell26 <= 9)),
((1 <= p.Cell27) && (p.Cell27 <= 9)),
((1 <= p.Cell28) && (p.Cell28 <= 9)),
((1 <= p.Cell29) && (p.Cell29 <= 9)),
((1 <= p.Cell31) && (p.Cell31 <= 9)),
((1 <= p.Cell32) && (p.Cell32 <= 9)),
((1 <= p.Cell33) && (p.Cell33 <= 9)),
((1 <= p.Cell34) && (p.Cell34 <= 9)),
((1 <= p.Cell35) && (p.Cell35 <= 9)),
((1 <= p.Cell36) && (p.Cell36 <= 9)),
((1 <= p.Cell37) && (p.Cell37 <= 9)),
((1 <= p.Cell38) && (p.Cell38 <= 9)),
((1 <= p.Cell39) && (p.Cell39 <= 9)),
((1 <= p.Cell41) && (p.Cell41 <= 9)),
((1 <= p.Cell42) && (p.Cell42 <= 9)),
((1 <= p.Cell43) && (p.Cell43 <= 9)),
((1 <= p.Cell44) && (p.Cell44 <= 9)),
((1 <= p.Cell45) && (p.Cell45 <= 9)),
((1 <= p.Cell46) && (p.Cell46 <= 9)),
((1 <= p.Cell47) && (p.Cell47 <= 9)),
((1 <= p.Cell48) && (p.Cell48 <= 9)),
((1 <= p.Cell49) && (p.Cell49 <= 9)),
((1 <= p.Cell51) && (p.Cell51 <= 9)),
((1 <= p.Cell52) && (p.Cell52 <= 9)),
((1 <= p.Cell53) && (p.Cell53 <= 9)),
((1 <= p.Cell54) && (p.Cell54 <= 9)),
((1 <= p.Cell55) && (p.Cell55 <= 9)),
((1 <= p.Cell56) && (p.Cell56 <= 9)),
((1 <= p.Cell57) && (p.Cell57 <= 9)),
((1 <= p.Cell58) && (p.Cell58 <= 9)),
((1 <= p.Cell59) && (p.Cell59 <= 9)),
((1 <= p.Cell61) && (p.Cell61 <= 9)),
((1 <= p.Cell62) && (p.Cell62 <= 9)),
((1 <= p.Cell63) && (p.Cell63 <= 9)),
((1 <= p.Cell64) && (p.Cell64 <= 9)),
((1 <= p.Cell65) && (p.Cell65 <= 9)),
((1 <= p.Cell66) && (p.Cell66 <= 9)),
((1 <= p.Cell67) && (p.Cell67 <= 9)),
((1 <= p.Cell68) && (p.Cell68 <= 9)),
((1 <= p.Cell69) && (p.Cell69 <= 9)),
((1 <= p.Cell71) && (p.Cell71 <= 9)),
((1 <= p.Cell72) && (p.Cell72 <= 9)),
((1 <= p.Cell73) && (p.Cell73 <= 9)),
((1 <= p.Cell74) && (p.Cell74 <= 9)),
((1 <= p.Cell75) && (p.Cell75 <= 9)),
((1 <= p.Cell76) && (p.Cell76 <= 9)),
((1 <= p.Cell77) && (p.Cell77 <= 9)),
((1 <= p.Cell78) && (p.Cell78 <= 9)),
((1 <= p.Cell79) && (p.Cell79 <= 9)),
((1 <= p.Cell81) && (p.Cell81 <= 9)),
((1 <= p.Cell82) && (p.Cell82 <= 9)),
((1 <= p.Cell83) && (p.Cell83 <= 9)),
((1 <= p.Cell84) && (p.Cell84 <= 9)),
((1 <= p.Cell85) && (p.Cell85 <= 9)),
((1 <= p.Cell86) && (p.Cell86 <= 9)),
((1 <= p.Cell87) && (p.Cell87 <= 9)),
((1 <= p.Cell88) && (p.Cell88 <= 9)),
((1 <= p.Cell89) && (p.Cell89 <= 9)),
((1 <= p.Cell91) && (p.Cell91 <= 9)),
((1 <= p.Cell92) && (p.Cell92 <= 9)),
((1 <= p.Cell93) && (p.Cell93 <= 9)),
((1 <= p.Cell94) && (p.Cell94 <= 9)),
((1 <= p.Cell95) && (p.Cell95 <= 9)),
((1 <= p.Cell96) && (p.Cell96 <= 9)),
((1 <= p.Cell97) && (p.Cell97 <= 9)),
((1 <= p.Cell98) && (p.Cell98 <= 9)),
((1 <= p.Cell99) && (p.Cell99 <= 9)),
Distinct(new [] {p.Cell11, p.Cell12, p.Cell13, p.Cell14, p.Cell15, p.Cell16, p.Cell17, p.Cell18, p.Cell19}),
Distinct(new [] {p.Cell21, p.Cell22, p.Cell23, p.Cell24, p.Cell25, p.Cell26, p.Cell27, p.Cell28, p.Cell29}),
Distinct(new [] {p.Cell31, p.Cell32, p.Cell33, p.Cell34, p.Cell35, p.Cell36, p.Cell37, p.Cell38, p.Cell39}),
Distinct(new [] {p.Cell41, p.Cell42, p.Cell43, p.Cell44, p.Cell45, p.Cell46, p.Cell47, p.Cell48, p.Cell49}),
Distinct(new [] {p.Cell51, p.Cell52, p.Cell53, p.Cell54, p.Cell55, p.Cell56, p.Cell57, p.Cell58, p.Cell59}),
Distinct(new [] {p.Cell61, p.Cell62, p.Cell63, p.Cell64, p.Cell65, p.Cell66, p.Cell67, p.Cell68, p.Cell69}),
Distinct(new [] {p.Cell71, p.Cell72, p.Cell73, p.Cell74, p.Cell75, p.Cell76, p.Cell77, p.Cell78, p.Cell79}),
Distinct(new [] {p.Cell81, p.Cell82, p.Cell83, p.Cell84, p.Cell85, p.Cell86, p.Cell87, p.Cell88, p.Cell89}),
Distinct(new [] {p.Cell91, p.Cell92, p.Cell93, p.Cell94, p.Cell95, p.Cell96, p.Cell97, p.Cell98, p.Cell99}),
Distinct(new [] {p.Cell11, p.Cell21, p.Cell31, p.Cell41, p.Cell51, p.Cell61, p.Cell71, p.Cell81, p.Cell91}),
Distinct(new [] {p.Cell12, p.Cell22, p.Cell32, p.Cell42, p.Cell52, p.Cell62, p.Cell72, p.Cell82, p.Cell92}),
Distinct(new [] {p.Cell13, p.Cell23, p.Cell33, p.Cell43, p.Cell53, p.Cell63, p.Cell73, p.Cell83, p.Cell93}),
Distinct(new [] {p.Cell14, p.Cell24, p.Cell34, p.Cell44, p.Cell54, p.Cell64, p.Cell74, p.Cell84, p.Cell94}),
Distinct(new [] {p.Cell15, p.Cell25, p.Cell35, p.Cell45, p.Cell55, p.Cell65, p.Cell75, p.Cell85, p.Cell95}),
Distinct(new [] {p.Cell16, p.Cell26, p.Cell36, p.Cell46, p.Cell56, p.Cell66, p.Cell76, p.Cell86, p.Cell96}),
Distinct(new [] {p.Cell17, p.Cell27, p.Cell37, p.Cell47, p.Cell57, p.Cell67, p.Cell77, p.Cell87, p.Cell97}),
Distinct(new [] {p.Cell18, p.Cell28, p.Cell38, p.Cell48, p.Cell58, p.Cell68, p.Cell78, p.Cell88, p.Cell98}),
Distinct(new [] {p.Cell19, p.Cell29, p.Cell39, p.Cell49, p.Cell59, p.Cell69, p.Cell79, p.Cell89, p.Cell99}),
Distinct(new [] {p.Cell11, p.Cell12, p.Cell13, p.Cell21, p.Cell22, p.Cell23, p.Cell31, p.Cell32, p.Cell33}),
Distinct(new [] {p.Cell14, p.Cell15, p.Cell16, p.Cell24, p.Cell25, p.Cell26, p.Cell34, p.Cell35, p.Cell36}),
Distinct(new [] {p.Cell17, p.Cell18, p.Cell19, p.Cell27, p.Cell28, p.Cell29, p.Cell37, p.Cell38, p.Cell39}),
Distinct(new [] {p.Cell41, p.Cell42, p.Cell43, p.Cell51, p.Cell52, p.Cell53, p.Cell61, p.Cell62, p.Cell63}),
Distinct(new [] {p.Cell44, p.Cell45, p.Cell46, p.Cell54, p.Cell55, p.Cell56, p.Cell64, p.Cell65, p.Cell66}),
Distinct(new [] {p.Cell47, p.Cell48, p.Cell49, p.Cell57, p.Cell58, p.Cell59, p.Cell67, p.Cell68, p.Cell69}),
Distinct(new [] {p.Cell71, p.Cell72, p.Cell73, p.Cell81, p.Cell82, p.Cell83, p.Cell91, p.Cell92, p.Cell93}),
Distinct(new [] {p.Cell74, p.Cell75, p.Cell76, p.Cell84, p.Cell85, p.Cell86, p.Cell94, p.Cell95, p.Cell96}),
Distinct(new [] {p.Cell77, p.Cell78, p.Cell79, p.Cell87, p.Cell88, p.Cell89, p.Cell97, p.Cell98, p.Cell99}),
(and (<= 1 Cell11) (<= Cell11 9))
(and (<= 1 Cell12) (<= Cell12 9))
(and (<= 1 Cell13) (<= Cell13 9))
(and (<= 1 Cell14) (<= Cell14 9))
(and (<= 1 Cell15) (<= Cell15 9))
(and (<= 1 Cell16) (<= Cell16 9))
(and (<= 1 Cell17) (<= Cell17 9))
(and (<= 1 Cell18) (<= Cell18 9))
(and (<= 1 Cell19) (<= Cell19 9))
(and (<= 1 Cell21) (<= Cell21 9))
(and (<= 1 Cell22) (<= Cell22 9))
(and (<= 1 Cell23) (<= Cell23 9))
(and (<= 1 Cell24) (<= Cell24 9))
(and (<= 1 Cell25) (<= Cell25 9))
(and (<= 1 Cell26) (<= Cell26 9))
(and (<= 1 Cell27) (<= Cell27 9))
(and (<= 1 Cell28) (<= Cell28 9))
(and (<= 1 Cell29) (<= Cell29 9))
(and (<= 1 Cell31) (<= Cell31 9))
(and (<= 1 Cell32) (<= Cell32 9))
(and (<= 1 Cell33) (<= Cell33 9))
(and (<= 1 Cell34) (<= Cell34 9))
(and (<= 1 Cell35) (<= Cell35 9))
(and (<= 1 Cell36) (<= Cell36 9))
(and (<= 1 Cell37) (<= Cell37 9))
(and (<= 1 Cell38) (<= Cell38 9))
(and (<= 1 Cell39) (<= Cell39 9))
(and (<= 1 Cell41) (<= Cell41 9))
(and (<= 1 Cell42) (<= Cell42 9))
(and (<= 1 Cell43) (<= Cell43 9))
(and (<= 1 Cell44) (<= Cell44 9))
(and (<= 1 Cell45) (<= Cell45 9))
(and (<= 1 Cell46) (<= Cell46 9))
(and (<= 1 Cell47) (<= Cell47 9))
(and (<= 1 Cell48) (<= Cell48 9))
(and (<= 1 Cell49) (<= Cell49 9))
(and (<= 1 Cell51) (<= Cell51 9))
(and (<= 1 Cell52) (<= Cell52 9))
(and (<= 1 Cell53) (<= Cell53 9))
(and (<= 1 Cell54) (<= Cell54 9))
(and (<= 1 Cell55) (<= Cell55 9))
(and (<= 1 Cell56) (<= Cell56 9))
(and (<= 1 Cell57) (<= Cell57 9))
(and (<= 1 Cell58) (<= Cell58 9))
(and (<= 1 Cell59) (<= Cell59 9))
(and (<= 1 Cell61) (<= Cell61 9))
(and (<= 1 Cell62) (<= Cell62 9))
(and (<= 1 Cell63) (<= Cell63 9))
(and (<= 1 Cell64) (<= Cell64 9))
(and (<= 1 Cell65) (<= Cell65 9))
(and (<= 1 Cell66) (<= Cell66 9))
(and (<= 1 Cell67) (<= Cell67 9))
(and (<= 1 Cell68) (<= Cell68 9))
(and (<= 1 Cell69) (<= Cell69 9))
(and (<= 1 Cell71) (<= Cell71 9))
(and (<= 1 Cell72) (<= Cell72 9))
(and (<= 1 Cell73) (<= Cell73 9))
(and (<= 1 Cell74) (<= Cell74 9))
(and (<= 1 Cell75) (<= Cell75 9))
(and (<= 1 Cell76) (<= Cell76 9))
(and (<= 1 Cell77) (<= Cell77 9))
(and (<= 1 Cell78) (<= Cell78 9))
(and (<= 1 Cell79) (<= Cell79 9))
(and (<= 1 Cell81) (<= Cell81 9))
(and (<= 1 Cell82) (<= Cell82 9))
(and (<= 1 Cell83) (<= Cell83 9))
(and (<= 1 Cell84) (<= Cell84 9))
(and (<= 1 Cell85) (<= Cell85 9))
(and (<= 1 Cell86) (<= Cell86 9))
(and (<= 1 Cell87) (<= Cell87 9))
(and (<= 1 Cell88) (<= Cell88 9))
(and (<= 1 Cell89) (<= Cell89 9))
(and (<= 1 Cell91) (<= Cell91 9))
(and (<= 1 Cell92) (<= Cell92 9))
(and (<= 1 Cell93) (<= Cell93 9))
(and (<= 1 Cell94) (<= Cell94 9))
(and (<= 1 Cell95) (<= Cell95 9))
(and (<= 1 Cell96) (<= Cell96 9))
(and (<= 1 Cell97) (<= Cell97 9))
(and (<= 1 Cell98) (<= Cell98 9))
(and (<= 1 Cell99) (<= Cell99 9))
(distinct Cell11 Cell12 Cell13 Cell14 Cell15 Cell16 Cell17 Cell18 Cell19)
(distinct Cell21 Cell22 Cell23 Cell24 Cell25 Cell26 Cell27 Cell28 Cell29)
(distinct Cell31 Cell32 Cell33 Cell34 Cell35 Cell36 Cell37 Cell38 Cell39)
(distinct Cell41 Cell42 Cell43 Cell44 Cell45 Cell46 Cell47 Cell48 Cell49)
(distinct Cell51 Cell52 Cell53 Cell54 Cell55 Cell56 Cell57 Cell58 Cell59)
(distinct Cell61 Cell62 Cell63 Cell64 Cell65 Cell66 Cell67 Cell68 Cell69)
(distinct Cell71 Cell72 Cell73 Cell74 Cell75 Cell76 Cell77 Cell78 Cell79)
(distinct Cell81 Cell82 Cell83 Cell84 Cell85 Cell86 Cell87 Cell88 Cell89)
(distinct Cell91 Cell92 Cell93 Cell94 Cell95 Cell96 Cell97 Cell98 Cell99)
(distinct Cell11 Cell21 Cell31 Cell41 Cell51 Cell61 Cell71 Cell81 Cell91)
(distinct Cell12 Cell22 Cell32 Cell42 Cell52 Cell62 Cell72 Cell82 Cell92)
(distinct Cell13 Cell23 Cell33 Cell43 Cell53 Cell63 Cell73 Cell83 Cell93)
(distinct Cell14 Cell24 Cell34 Cell44 Cell54 Cell64 Cell74 Cell84 Cell94)
(distinct Cell15 Cell25 Cell35 Cell45 Cell55 Cell65 Cell75 Cell85 Cell95)
(distinct Cell16 Cell26 Cell36 Cell46 Cell56 Cell66 Cell76 Cell86 Cell96)
(distinct Cell17 Cell27 Cell37 Cell47 Cell57 Cell67 Cell77 Cell87 Cell97)
(distinct Cell18 Cell28 Cell38 Cell48 Cell58 Cell68 Cell78 Cell88 Cell98)
(distinct Cell19 Cell29 Cell39 Cell49 Cell59 Cell69 Cell79 Cell89 Cell99)
(distinct Cell11 Cell12 Cell13 Cell21 Cell22 Cell23 Cell31 Cell32 Cell33)
(distinct Cell14 Cell15 Cell16 Cell24 Cell25 Cell26 Cell34 Cell35 Cell36)
(distinct Cell17 Cell18 Cell19 Cell27 Cell28 Cell29 Cell37 Cell38 Cell39)
(distinct Cell41 Cell42 Cell43 Cell51 Cell52 Cell53 Cell61 Cell62 Cell63)
(distinct Cell44 Cell45 Cell46 Cell54 Cell55 Cell56 Cell64 Cell65 Cell66)
(distinct Cell47 Cell48 Cell49 Cell57 Cell58 Cell59 Cell67 Cell68 Cell69)
(distinct Cell71 Cell72 Cell73 Cell81 Cell82 Cell83 Cell91 Cell92 Cell93)
(distinct Cell74 Cell75 Cell76 Cell84 Cell85 Cell86 Cell94 Cell95 Cell96)
(distinct Cell77 Cell78 Cell79 Cell87 Cell88 Cell89 Cell97 Cell98 Cell99)
(and (and (= Cell13 2) (= Cell16 1)) (= Cell18 6))
(and (= Cell23 7) (= Cell26 4))
(and (= Cell31 5) (= Cell37 9))
(and (= Cell42 1) (= Cell44 3))
(and (and (= Cell51 8) (= Cell55 5)) (= Cell59 4))
(and (= Cell66 6) (= Cell68 2))
(and (= Cell73 6) (= Cell79 7))
(and (= Cell84 8) (= Cell87 3))
(and (and (= Cell92 4) (= Cell94 9)) (= Cell97 2))

Only the last nine lines were specified by the user through the LINQ query expression. In fact, the reader should already be able to create a Sudoku class based on today’s material (tip: work your way down starting from the signature of Sudoku.Create, which you can infer from the sample above; reuse Theorem<T>, of course).
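As a hint for that exercise: the 27 Distinct groups (nine rows, nine columns, nine blocks) can be generated mechanically. A sketch, assuming cell indices of the form row * 9 + column (hypothetical helper, not part of the article's code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class SudokuGroups
{
    // Enumerate the 27 groups of 9 cell indices (index = row * 9 + column)
    // over which a Distinct constraint has to be asserted.
    public static IEnumerable<int[]> DistinctGroups()
    {
        for (int r = 0; r < 9; r++)   // rows
            yield return Enumerable.Range(0, 9).Select(c => r * 9 + c).ToArray();
        for (int c = 0; c < 9; c++)   // columns
            yield return Enumerable.Range(0, 9).Select(r => r * 9 + c).ToArray();
        for (int b = 0; b < 9; b++)   // 3x3 blocks
        {
            int br = b / 3 * 3, bc = b % 3 * 3;
            yield return (from r in Enumerable.Range(br, 3)
                          from c in Enumerable.Range(bc, 3)
                          select r * 9 + c).ToArray();
        }
    }
}
```

Each of these groups, plus the per-cell 1..9 range bounds, corresponds one-to-one with the generated constraints listed above.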

More complex puzzles exist though; KenKen is one example.

Besides constraints on rows, columns and cells, similar to a Sudoku’s (here using digits 1 through 4), there are more domain-specific constraints: the arithmetic operators attached to regions are treated as order-independent. For example, 1- in a two-cell region means that either Cell12 - Cell22 or Cell22 - Cell12 equals 1. For bigger regions, all permutations of the cells are considered. In other words, we need different semantics for the operators. For example:

```var k1 = from t in KenKen4By4.Create(ctx)
where t.Add(4, t.Cell11, t.Cell21)
where t.Sub(1, t.Cell12, t.Cell22)
where t.Mul(4, t.Cell13, t.Cell14, t.Cell23)
where t.Mul(24, t.Cell31, t.Cell32, t.Cell33)
where t.Sub(3, t.Cell24, t.Cell34)
where t.Div(2, t.Cell41, t.Cell42)
where t.Sub(1, t.Cell43, t.Cell44)
select t;
Console.WriteLine(k1.Solve());
Console.WriteLine();```

results in the following constraints:

(and (<= 1 Cell11) (<= Cell11 4))
(and (<= 1 Cell12) (<= Cell12 4))
(and (<= 1 Cell13) (<= Cell13 4))
(and (<= 1 Cell14) (<= Cell14 4))
(and (<= 1 Cell21) (<= Cell21 4))
(and (<= 1 Cell22) (<= Cell22 4))
(and (<= 1 Cell23) (<= Cell23 4))
(and (<= 1 Cell24) (<= Cell24 4))
(and (<= 1 Cell31) (<= Cell31 4))
(and (<= 1 Cell32) (<= Cell32 4))
(and (<= 1 Cell33) (<= Cell33 4))
(and (<= 1 Cell34) (<= Cell34 4))
(and (<= 1 Cell41) (<= Cell41 4))
(and (<= 1 Cell42) (<= Cell42 4))
(and (<= 1 Cell43) (<= Cell43 4))
(and (<= 1 Cell44) (<= Cell44 4))
(distinct Cell11 Cell12 Cell13 Cell14)
(distinct Cell21 Cell22 Cell23 Cell24)
(distinct Cell31 Cell32 Cell33 Cell34)
(distinct Cell41 Cell42 Cell43 Cell44)
(distinct Cell11 Cell21 Cell31 Cell41)
(distinct Cell12 Cell22 Cell32 Cell42)
(distinct Cell13 Cell23 Cell33 Cell43)
(distinct Cell14 Cell24 Cell34 Cell44)
(= 4 (+ Cell11 Cell21))
(or (= 1 (- Cell12 Cell22)) (= 1 (- Cell22 Cell12)))
(= 4 (* (* Cell13 Cell14) Cell23))
(= 24 (* (* Cell31 Cell32) Cell33))
(or (= 3 (- Cell24 Cell34)) (= 3 (- Cell34 Cell24)))
(or (= 2 (div Cell41 Cell42)) (= 2 (div Cell42 Cell41)))
(or (= 1 (- Cell43 Cell44)) (= 1 (- Cell44 Cell43)))

Again, the last set of constraints is the result of the LINQ query expression (as a side note, different Z3 techniques are available to tackle such problems, something outside the scope of this discussion just yet). In order to generate those constraints, the domain-specific Add, Sub, Mul and Div methods need to be translated into Z3 primitives, something that will be done by implementing a rewriter type (cf. the pieces of code I asked the reader to skim over for the time being). We could go one step further and allow the user to specify the puzzle by means of a textual DSL:

```var k2 = KenKen4By4.Parse(ctx, @"+#####+#####+#####+#####+
# 4+  # 1-  # 4*        #
+     +     +     +#####+
#     #     #     # 3-  #
+#####+#####+#####+     +
# 24*             #     #
+#####+#####+#####+#####+
# 2/        # 1-        #
+#####+#####+#####+#####+");
Console.WriteLine(k2.Solve());
Console.WriteLine();```

Or even consume the Word file (from which the screenshot of the puzzle was taken earlier) directly:

```var k3 = KenKen4By4.FromWord(ctx, @"C:\Users\Bart\Desktop\KenKen.docx");
Console.WriteLine(k3.Solve());
Console.WriteLine();```
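To make the permutation-based operator semantics concrete, here is what a check like Sub amounts to, sketched as a plain boolean function over candidate values (hypothetical helper; the real implementation emits an Or over the permutations as a Z3 constraint instead of evaluating them):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class KenKenOps
{
    // Order-independent subtraction: the target must be reachable by picking
    // some cell and subtracting all the others from it, in some order.
    public static bool Sub(int target, params int[] cells)
    {
        return Permutations(cells).Any(p =>
            p.Skip(1).Aggregate(p[0], (acc, x) => acc - x) == target);
    }

    // All orderings of the given values.
    static IEnumerable<int[]> Permutations(int[] xs)
    {
        if (xs.Length <= 1) { yield return xs; yield break; }
        for (int i = 0; i < xs.Length; i++)
            foreach (var rest in Permutations(xs.Where((_, j) => j != i).ToArray()))
                yield return new[] { xs[i] }.Concat(rest).ToArray();
    }
}
```

For a two-cell region this degenerates to exactly the two-disjunct Or shown in the constraint listing above; Div works analogously.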

Finally, there are Kakuro puzzles, which call for yet more domain-specific theorem expressiveness:

```var u1 = from k in ctx.NewTheorem<Kakuro>()
where k.Cell11.IsBlack
where k.Cell14.IsBlack
where k.Cell15.IsBlack
where k.Cell24.IsBlack
where k.Cell48.IsBlack
where k.Cell51.IsBlack
where k.Cell61.IsBlack
where k.Cell85.IsBlack
where k.Cell12.VerticalSum == 23
where k.Cell13.VerticalSum == 30
where k.Cell16.VerticalSum == 27
where k.Cell17.VerticalSum == 12
where k.Cell18.VerticalSum == 16
where k.Cell21.HorizontalSum == 16
where k.Cell25.VerticalSum == 17
where k.Cell25.HorizontalSum == 24
where k.Cell31.HorizontalSum == 17
where k.Cell34.VerticalSum == 15
where k.Cell34.HorizontalSum == 29
where k.Cell41.HorizontalSum == 35
where k.Cell47.VerticalSum == 12
where k.Cell52.HorizontalSum == 7
where k.Cell55.VerticalSum == 7
where k.Cell55.HorizontalSum == 8
where k.Cell58.VerticalSum == 7
where k.Cell62.VerticalSum == 11
where k.Cell63.VerticalSum == 10
where k.Cell63.HorizontalSum == 16
where k.Cell71.HorizontalSum == 21
where k.Cell76.HorizontalSum == 5
where k.Cell81.HorizontalSum == 6
where k.Cell86.HorizontalSum == 3
select k;```

Here, properties are used to state the constraints as simply as possible, for the following puzzle:

+--------+--------+--------+--------+--------+--------+--------+--------+
| ****** | 23     | 30     | ****** | ****** | 27     | 12     | 16     |
+--------+--------+--------+--------+--------+--------+--------+--------+
|     16 |        |        | ****** | 17  24 |        |        |        |
+--------+--------+--------+--------+--------+--------+--------+--------+
|     17 |        |        | 15  29 |        |        |        |        |
+--------+--------+--------+--------+--------+--------+--------+--------+
|     35 |        |        |        |        |        | 12     | ****** |
+--------+--------+--------+--------+--------+--------+--------+--------+
| ****** |      7 |        |        | 7    8 |        |        | 7      |
+--------+--------+--------+--------+--------+--------+--------+--------+
| ****** | 11     | 10  16 |        |        |        |        |        |
+--------+--------+--------+--------+--------+--------+--------+--------+
|     21 |        |        |        |        |      5 |        |        |
+--------+--------+--------+--------+--------+--------+--------+--------+
|      6 |        |        |        | ****** |      3 |        |        |
+--------+--------+--------+--------+--------+--------+--------+--------+

We don’t want the user to express things like Cell22 + Cell32 + Cell42 == 23, but instead want to use all the information captured in the puzzle to allow constraints like Cell12.VerticalSum == 23. As we’ll see, this requires other rewriting mechanisms to allow for a good balance of responsibilities between:

• The LINQ to Z3 infrastructure.
• The Kakuro puzzle library writer.
• The end-user.

The resulting Z3 constraints for the sample puzzle are shown below:

(and (= (+ (+ Cell22 Cell32) Cell42) 23) (distinct Cell22 Cell32 Cell42))
(and (= (+ (+ (+ Cell23 Cell33) Cell43) Cell53) 30)
(distinct Cell23 Cell33 Cell43 Cell53))
(and (= (+ (+ (+ (+ Cell26 Cell36) Cell46) Cell56) Cell66) 27)
(distinct Cell26 Cell36 Cell46 Cell56 Cell66))
(and (= (+ Cell27 Cell37) 12) (distinct Cell27 Cell37))
(and (= (+ Cell28 Cell38) 16) (distinct Cell28 Cell38))
(and (= (+ Cell22 Cell23) 16) (distinct Cell22 Cell23))
(and (>= Cell22 1) (<= Cell22 9))
(and (>= Cell23 1) (<= Cell23 9))
(and (= (+ (+ Cell26 Cell27) Cell28) 24) (distinct Cell26 Cell27 Cell28))
(and (= (+ Cell35 Cell45) 17) (distinct Cell35 Cell45))
(and (>= Cell26 1) (<= Cell26 9))
(and (>= Cell27 1) (<= Cell27 9))
(and (>= Cell28 1) (<= Cell28 9))
(and (= (+ Cell32 Cell33) 17) (distinct Cell32 Cell33))
(and (>= Cell32 1) (<= Cell32 9))
(and (>= Cell33 1) (<= Cell33 9))
(and (= (+ (+ (+ Cell35 Cell36) Cell37) Cell38) 29)
(distinct Cell35 Cell36 Cell37 Cell38))
(and (= (+ (+ (+ (+ Cell44 Cell54) Cell64) Cell74) Cell84) 15)
(distinct Cell44 Cell54 Cell64 Cell74 Cell84))
(and (>= Cell35 1) (<= Cell35 9))
(and (>= Cell36 1) (<= Cell36 9))
(and (>= Cell37 1) (<= Cell37 9))
(and (>= Cell38 1) (<= Cell38 9))
(and (= (+ (+ (+ (+ Cell42 Cell43) Cell44) Cell45) Cell46) 35)
(distinct Cell42 Cell43 Cell44 Cell45 Cell46))
(and (>= Cell42 1) (<= Cell42 9))
(and (>= Cell43 1) (<= Cell43 9))
(and (>= Cell44 1) (<= Cell44 9))
(and (>= Cell45 1) (<= Cell45 9))
(and (>= Cell46 1) (<= Cell46 9))
(and
(= (+ (+ (+ Cell57 Cell67) Cell77) Cell87) 12)
(distinct Cell57 Cell67 Cell77 Cell87))
(and (= (+ Cell53 Cell54) 7) (distinct Cell53 Cell54))
(and (>= Cell53 1) (<= Cell53 9))
(and (>= Cell54 1) (<= Cell54 9))
(and (= (+ Cell56 Cell57) 8) (distinct Cell56 Cell57))
(and (= (+ Cell65 Cell75) 7) (distinct Cell65 Cell75))
(and (>= Cell56 1) (<= Cell56 9))
(and (>= Cell57 1) (<= Cell57 9))
(and (= (+ (+ Cell68 Cell78) Cell88) 7) (distinct Cell68 Cell78 Cell88))
(and (= (+ Cell72 Cell82) 11) (distinct Cell72 Cell82))
(and
(= (+ (+ (+ (+ Cell64 Cell65) Cell66) Cell67) Cell68) 16)
(distinct Cell64 Cell65 Cell66 Cell67 Cell68))
(and (= (+ Cell73 Cell83) 10) (distinct Cell73 Cell83))
(and (>= Cell64 1) (<= Cell64 9))
(and (>= Cell65 1) (<= Cell65 9))
(and (>= Cell66 1) (<= Cell66 9))
(and (>= Cell67 1) (<= Cell67 9))
(and (>= Cell68 1) (<= Cell68 9))
(and (= (+ (+ (+ Cell72 Cell73) Cell74) Cell75) 21)
(distinct Cell72 Cell73 Cell74 Cell75))
(and (>= Cell72 1) (<= Cell72 9))
(and (>= Cell73 1) (<= Cell73 9))
(and (>= Cell74 1) (<= Cell74 9))
(and (>= Cell75 1) (<= Cell75 9))
(and (= (+ Cell77 Cell78) 5) (distinct Cell77 Cell78))
(and (>= Cell77 1) (<= Cell77 9))
(and (>= Cell78 1) (<= Cell78 9))
(and (= (+ (+ Cell82 Cell83) Cell84) 6) (distinct Cell82 Cell83 Cell84))
(and (>= Cell82 1) (<= Cell82 9))
(and (>= Cell83 1) (<= Cell83 9))
(and (>= Cell84 1) (<= Cell84 9))
(and (= (+ Cell87 Cell88) 3) (distinct Cell87 Cell88))
(and (>= Cell87 1) (<= Cell87 9))
(and (>= Cell88 1) (<= Cell88 9))

Again, part of these constraints directly reflects what the user expressed, while the remainder is all generated by the domain-specific Kakuro implementation.
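A sketch of that expansion at work: each sum clue over a run of cells yields a sum-plus-distinct constraint and per-cell range bounds. The helper below is hypothetical and produces the textual form shown above (with sums flattened for brevity); the real implementation builds Z3 terms instead of strings:

```csharp
using System;
using System.Collections.Generic;

static class KakuroExpansion
{
    // Expand one horizontal or vertical sum clue over a run of cell names
    // into the constraints it implies: the sum equality combined with
    // distinctness, followed by a 1..9 range bound per cell.
    public static IEnumerable<string> ExpandSumClue(int sum, params string[] run)
    {
        string cells = string.Join(" ", run);
        yield return "(and (= (+ " + cells + ") " + sum + ") (distinct " + cells + "))";
        foreach (var cell in run)
            yield return "(and (>= " + cell + " 1) (<= " + cell + " 9))";
    }
}
```

For instance, the clue Cell12.VerticalSum == 23 over the run Cell22, Cell32, Cell42 expands into four constraints, matching the first few lines of the listing above.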

# Conclusion

Creating a simple LINQ to Z3 implementation isn’t too hard and involves just a little bit of plumbing in the bathroom of reflection and some use of expression tree visitor patterns. In future posts, we’ll have a look at domain-specific theorem solving techniques based on declarative expression tree rewriters. Enjoy!


In this post, I’m going to do something out of the ordinary: write a post on a non-technical matter, in the “Personal” and “Microsoft” (big-bucket general stuff kind-a) categories I’ve left unattended for way too long. So, if you’re looking for interesting technical stuff, go look elsewhere this time around (but I promise to be back soon :-)).

So yes, this has been an interesting non-technical start (ignoring the continuation to follow after this post…) of a Saturday over here in Bellevue. To set the scene: a few months ago, Dandy and I were asked by a Belgian community (in the Microsoft sense) member to write up something on “The Road to Redmond”. Dandy came up with the idea to create a video highlighting several hot spots in the area. Finally we got around to taking care of this request and shot the moving pictures today. Lots of things interfered with the planning before: camera defects, Best Buy shopping for a new camera, Dandy’s US Open attendance (to see Kim Clijsters win; little did she know she’d become a recurring theme in the video), other weekend activities (occult and otherwise).

It’s kind of funny I’m pretty natural in front of large audiences, but simple video interviews make me behave differently somehow, especially when the spoken language falls back to Dutch instead of the English lingo my brain is in a steady state for. But anyway, we did some great shoots today starting with Belgian-style food and beverages (the latter not for me, as I’m a teetotaler) in downtown Bellevue where the waitress claimed to speak a little French but couldn’t properly pronounce Duchesse De Bourgogne. At least we got Belgian fries with mayonnaise to go with beef stew.

During a short visit to Best Buy to buy a tripod, I got a first question from the cashier about my unusual (way too geeky, I admit) T-shirt meant for shameless self-promotion of this blog, something that became a running theme throughout the day as well. Its front – typing for Curry-style System F-omega (different forms of definition can be found in the literature, but I chose one that fits a medium-sized T-shirt) – and back are shown below:

People who saw the filming in progress or got somehow around us can testify I really do wear a yellow T-shirt with the above :-).

Next shots were primarily to show off Dandy’s fancy car for the take-off (almost literally, velocity was getting close enough :-)) to the Space Needle. Though I’ve been here about two years now (and so has Dandy), I hadn’t been up there just yet. Too touristy if you will, but for the occasion I had an excuse. So we got there and went up, disregarding the cloudy weather and rather windy meteorological circumstances. I must say, the view is kind of nice but nothing too fancy to write home about. Kind as I am, I even took a picture of two ladies discovering some digital cameras still have a proper mechanical “take picture now” button. Last time I had a fancy digital camera in my hands, I couldn’t even find the power button, so I felt really good about this little victory near a camera this time. Dandy insisted I show them my T-shirt, but maybe I did it a little too explicitly (though I sincerely don’t believe so; notice my fellow colleague was a few Belgian beers in), so after thanking me for taking the picture and some little giggles (IMO mostly due to the presence of my colleague…) they took off. I know the (App) rule looks frightening enough for people to jump down, so I kept things a bit more hidden while on the outside up there.

Back on the inside, we met this pretty Tacoma girl who works up there to answer tourist questions, all dressed up in a fabulous “I am Seattle” outfit. Can you believe she even went to Brussels a while ago, totally jetlagged and looking for some old brewery nearby (can’t think of what that would have been, but nearby on the US scale is, euhm, not small)? But hey, that’s better than nothing (unlike people thinking Belgium is the capital of Brussels, though in the limit things get close :-)). Anyway, she did a splendid job in our video interviews and we even made her pronounce Kim Clijsters more or less correctly. And in contrast to the folks on the deck, she didn’t look too frightened by my T-shirt and even found my binary watch (standard equipment for a geek) cool.

Down again, we did a bit more hanging around in their souvenir shop (I know touristy things all look the same, but I needed some confirmation), where we shot some crazy scenes doing Plop dances in appropriate garments. Put developers together with IT Pros (yes, another cliché) and you get that sort of thing. Oh well, I don’t mind making myself a bit ridiculous from time to time :-).

Last but not least, we headed over to campus for a few shots around the new Studio buildings, where people were playing cricket on the fields and jogging around in Office T-shirts. All of a sudden, attention for my extraordinary clothing started to drop significantly as people over there are kind of used to this sort of weirdness :-). We did some talking on camera about serious matters this time around, covering subjects on life at Microsoft, the Microsoft culture, etc.

So yes, once a year or so (give or take) I can do non-geeky stuff as well. Though I need a little compensation by wearing appropriate clothing and such :-). People who believe they were part of this crazy plot today, feel free to drop a note in the comments section below this post.


# Introduction

Side-effects don’t fit together very well with functional-style delayed computation, as observed in LINQ. Having such constructs embedded in an otherwise eagerly evaluated imperative language like C# can cause quite some confusion from time to time. A few samples (due to Erik Meijer) of such “mishaps” are shown below:

```//
// When does it throw?
//

IEnumerable<double> res = null;
try
{
res = from x in Enumerable.Range(0, 10)
select 1.0 / x;
}
catch (DivideByZeroException) {}

foreach (var rcp in res)
Console.WriteLine(rcp);

//
// What does it print?
//
Func<int, bool> div2 = i => { Console.WriteLine("Div2 test " + i); return i % 2 == 0; };
Func<int, bool> div3 = i => { Console.WriteLine("Div3 test " + i); return i % 3 == 0; };
var res2 = from x in Enumerable.Range(0, 10)
where div2(x)
where div3(x)
select x;

foreach (var sxt in res2)
Console.WriteLine(sxt);```

The side-effects in the samples above are of various kinds: exceptions and output operations all hurt in a functional setting. In pure “fundamentalist” functional programming languages, those effects are made explicit by means of the type system (in particular, the IO monad). When looking at a .NET “function” signature, the types are lying to us:

```class Random
{
public int Next();
}```

Sure, Next returns an Int32 numeric value when called. But that’s only part of the story: every time you call it, it may return something else. In other words, we can’t simplify the following program:

```var rand = new Random();
var fems = new Pair(rand.Next(), rand.Next());```

into

```var rand = new Random();
var __rn = rand.Next();
var fems = new Pair(__rn, __rn);```

The main reason is simply that we don’t know enough about the behavior, i.e. the side-effects, the Next function exposes. Or to state it another way, Next is not a (pure) function but a method. In Haskell, the types tell us a side-effect is going on:

```randomIO :: (Random a) => IO a```

Roughly speaking, Random a stands for Random<T> and IO a stands for T, “tagged” with the fact it has a side-effect through the IO monad (not the same as an IO<T> type, as the CLR’s type system doesn’t support the higher-kinded types required to give monads a proper typing).

# Multiplication of side-effects

Being aware of the presence of side-effects is one thing, having them amplified or multiplied (almost) invisibly through the use of query combinators and such is even more painful. Consider the following iterator:

```static class RandomExtensions
{
    public static IEnumerable<int> GetNumbers(this Random rand, int max)
    {
        while (true)
            yield return rand.Next(max);
    }
}```

which can be called like:

```var rand = new Random();
foreach (var x in rand.GetNumbers(100).Take(10))
    Console.WriteLine(x);```

All is well and good so far: we’ll get 10 random numbers printed to the screen. Now assume we’re already on .NET 4 (just to make the illustration a little easier; the general topic of “taming side-effects” is version-independent and, Haskell excluded, platform-independent) and have the Zip method available. Let’s write the following:

```var rand = new Random();
var res = rand.GetNumbers(100).Take(10)
              .Zip(rand.GetNumbers(100).Take(10), (l, r) => l + r)
              .All(x => x % 2 == 0);```

What do you think res should be? Clearly, the sum of a number and itself is always even. The two calls to rand.GetNumbers(100).Take(10) form a common subexpression: on the face of it, you may think it’d produce the same results for both “sides” of Zip, hence the conclusion that the zipped sums should always be even. Nothing is further from the truth: GetNumbers propagates the side-effect of the underlying IO operation (reading from a random number generator), and the two calls create separate iterator instances, each merrily ticking along. A simple Print operator can help to illustrate this:

```static class Enumerable
{
    public static IEnumerable<T> Print<T>(this IEnumerable<T> source, string formatMessage)
    {
        foreach (var item in source)
        {
            Console.WriteLine(formatMessage, item);
            yield return item;
        }
    }
}```

Yes, even more side-effects added to the melting pot. Now we can write the following to illustrate the mishap:

```var rand = new Random();
var res = rand.GetNumbers(100).Take(10).Print("SRC1: {0}")
              .Zip(rand.GetNumbers(100).Take(10).Print("SRC2: {0}"), (l, r) => l + r)
              .All(x => x % 2 == 0);```

One run on the author’s machine produced:

```SRC1: 82
SRC2: 86
SRC1: 17
SRC2: 91
SRC1: 48
SRC2: 15```

Clearly, both sources are out of sync. Now we go and knock on the door of a fellow programmer and ask how to solve this issue. “Well, clearly you’re calling GetNumbers twice – that’s your problem right there.” Not so fast… Let’s see what the proposed fix would look like:

```var rand = new Random();
var src = rand.GetNumbers(100).Take(10);

var res = src.Print("SRC1: {0}")
             .Zip(src.Print("SRC2: {0}"), (l, r) => l + r)
             .All(x => x % 2 == 0);```

And here’s the result:

```SRC1: 89
SRC2: 39
SRC1: 22
SRC2: 15```

What’s happening here? Although we’ve reduced the number of calls to GetNumbers to just a single one, its execution is still lazily evaluated as we’re using an iterator. It’s not until we start iterating that computation is triggered. Moreover, the iterator returns an IEnumerable<int>, which merely provides the potential to enumerate through the IEnumerator<int> objects it hands out. In effect, Zip will call GetEnumerator twice, resulting in two instances of the iterator (simplified code below):

```public static IEnumerable<R> Zip<T1, T2, R>(this IEnumerable<T1> left, IEnumerable<T2> right, Func<T1, T2, R> zip)
{
    var leftE = left.GetEnumerator();
    var rightE = right.GetEnumerator();
    while (leftE.MoveNext() && rightE.MoveNext())
        yield return zip(leftE.Current, rightE.Current);
}```

In our case, left and right are pretty much the same (ignore Print which acts as an all-pass filter iterator), namely src:

```src.Zip(src, …)```

Two calls to GetEnumerator result, each querying the same random generator instance that happily continues on its side-effecting mission. The iterator we wrote for GetNumbers is semantically equivalent to the following, illustrating this issue more clearly for those who are not familiar with the way iterators are compiled (simplified presentation):

```static class RandomExtensions
{
    public static IEnumerable<int> GetNumbers(this Random rand, int max)
    {
        return new GetNumbersIterator(rand, max);
    }
}

class GetNumbersIterator : IEnumerable<int>
{
    private Random _rand;
    private int _max;

    public GetNumbersIterator(Random rand, int max)
    {
        _rand = rand;
        _max = max;
    }

    public IEnumerator<int> GetEnumerator()
    {
        return new GetNumbersIteratorEnumerator(this);
    }

    class GetNumbersIteratorEnumerator : IEnumerator<int>
    {
        private GetNumbersIterator _iter;
        private int _curr;

        public GetNumbersIteratorEnumerator(GetNumbersIterator iter)
        {
            _iter = iter;
        }

        public bool MoveNext()
        {
            _curr = _iter._rand.Next(_iter._max);
            return true;
        }

        public int Current
        {
            get { return _curr; }
        }
    }
}```

In fact, you could say the side-effect that’s killing us now is the instantiation of GetNumbersIteratorEnumerator inside the GetEnumerator method. Observing that, the innocent unenlightened developer may say: what if we eliminate that additional object instantiation? Let’s “desugar” the iterator by writing it as the thing above (quite a bit of work, but the innocent developer is willing to go to great lengths to get his code working As The Natural Numbers Intended) and eliminate the construction of a new GetNumbersIteratorEnumerator every time.

```class GetNumbersIterator : IEnumerable<int>
{
    private Random _rand;
    private int _max;
    private GetNumbersIteratorEnumerator _curr;

    public GetNumbersIterator(Random rand, int max)
    {
        _rand = rand;
        _max = max;
        _curr = new GetNumbersIteratorEnumerator(this);
    }

    public IEnumerator<int> GetEnumerator()
    {
        return _curr;
    }

    // (remainder as before)
}```

Guess what? Here’s the result:

```SRC1: 15
SRC2: 97
SRC1: 31
SRC2: 57
SRC1: 21
SRC2: 57
SRC1: 63
SRC2: 94```

Bummer. We’re just pushing the problem in front of us. Though we have the same enumerator object inside the Zip method now, we’re still calling MoveNext twice:

```public static IEnumerable<R> Zip<T1, T2, R>(this IEnumerable<T1> left, IEnumerable<T2> right, Func<T1, T2, R> zip)
{
    var leftE = left.GetEnumerator();
    var rightE = right.GetEnumerator();
    Debug.Assert(leftE == rightE);
    while (leftE.MoveNext() && rightE.MoveNext())
        yield return zip(leftE.Current, rightE.Current);
}```

What would the desperate developer’s next suggestion be? Let’s not go there – clearly something bigger is the matter here. Everything you try at this level of decomposing iterator objects and second-guessing precisely how query operators work is a very ugly band-aid covering up the bigger problem of side-effect replication.

One way to tackle the problem from the outside is to break laziness:

```var rand = new Random();
var src = rand.GetNumbers(100).Take(10).ToArray();

var res = src.Print("SRC1: {0}")
             .Zip(src.Print("SRC2: {0}"), (l, r) => l + r)
             .All(x => x % 2 == 0);```

Operators like ToArray and ToList drain the left-hand side sequence on the spot, persisting the retrieved elements in-memory. Iterators versus persisted data sequences aren’t differentiated on the type level: both are “enumerable”. Great from a generalization point of view, less so to understand the runtime behavior of dealing with those objects. Either way, the above works and prints:

```SRC1: 44
SRC2: 44
SRC1: 6
SRC2: 6
SRC1: 81
SRC2: 81
SRC1: 14
SRC2: 14
SRC1: 65
SRC2: 65
SRC1: 1
SRC2: 1
SRC1: 9
SRC2: 9
SRC1: 67
SRC2: 67
SRC1: 65
SRC2: 65
SRC1: 75
SRC2: 75```

For the first time we reached the end of the zipped sequence. Wonderful. But we’ve made a big sacrifice, something I anticipated right from the start of the post by using Take(10): the above doesn’t work if we’re dealing with infinite sequences. Obviously you should never try to drain those, as the program would get stuck (which in fact can be formalized as well, maybe stuff to cover another time), yet infinite sequences are great to give the user the potential to compute as many results as desired. Having to break laziness to make query operator composition work, e.g. for Zip in our sample, spoils the whole idea of lazy computation.

By now it should be clear that side-effects have far-reaching implications, but we should not ignore the fact they’re useful at times. What we need to enable consistent consumption of side-effecting sequences is some kind of let construct.

# What’s let anyway?

Functional programming languages typically provide a “let” construct, e.g. as seen in ML-style languages like F#. One form is to use the following syntax:

```let id = expr1 in expr2```

The above is shorthand for abstraction and application carried out right after one another:

```(fun id -> expr2) expr1```

Consider the next sample:

```let x = 1 in (x, x)```

where (…, …) is the tupling operator. Semantically the above is equivalent to:

```(fun x -> (x, x)) 1```

In a call-by-value setting this effectively avoids re-evaluating the expr1 expression. Of course, in the sample above there’s not much work required to “re-evaluate” the rather minimalistic expression “1”. However, when evaluating the expression causes side-effects, the let-binding eliminates duplicate evaluation. This can be shown with the Stopwatch type, which retrieves a timestamp based on a high-resolution performance counter (see the MSDN documentation for information and potential caveats).
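To make that concrete in C# terms, here’s a minimal sketch (assuming Stopwatch.GetTimestamp as the side-effecting expression, with Tuple.Create standing in for F#’s tupling):

```csharp
using System;
using System.Diagnostics;

class StopwatchLet
{
    static void Main()
    {
        // Without a let-binding: the timestamp expression is evaluated twice,
        // so the two tuple components may differ.
        var noLet = Tuple.Create(Stopwatch.GetTimestamp(), Stopwatch.GetTimestamp());

        // With a let-binding: evaluate once, reuse the bound value; both
        // components are guaranteed to be equal.
        var t = Stopwatch.GetTimestamp();
        var withLet = Tuple.Create(t, t);

        Console.WriteLine(withLet.Item1 == withLet.Item2); // True
        Console.WriteLine(noLet.Item1 == noLet.Item2);     // may well be False
    }
}
```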

Of course we can apply the same trick to our Random type sample as well: bind the result of a single Next call and reuse it, rather than calling Next twice.
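A sketch of that binding, reusing Tuple.Create as the pairing construct:

```csharp
using System;

class RandomLet
{
    static void Main()
    {
        var rand = new Random();

        // let n = rand.Next() in (n, n): one evaluation, one side-effect,
        // and a pair of two identical values.
        var n = rand.Next();
        var withLet = Tuple.Create(n, n);
        Console.WriteLine(withLet.Item1 == withLet.Item2); // True

        // (rand.Next(), rand.Next()): two evaluations, two side-effects,
        // and almost certainly two different values.
        var noLet = Tuple.Create(rand.Next(), rand.Next());
        Console.WriteLine(noLet.Item1 == noLet.Item2);     // most likely False
    }
}
```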

You get the idea. How does this apply to our side-effecting iterators? Well, what we really want is a mechanism by which we can expose the same iterator to multiple consumers without re-evaluating the side-effects that get triggered by calls to IEnumerator.MoveNext (where the iterator’s code lives in one form or another). In addition to this functional requirement, we can’t drain the whole iterator as we’d break our teeth on infinite (computed) sequences.

# IEnumerable.Let – Signature and use

First, let’s have a look at the contract for the IEnumerable.Let operator we’re after here. Obviously we want to keep the order as natural as possible and what better to use than an extension method to make syntax as natural as possible:

```… Let<T>(this IEnumerable<T> source, …)```

That’s a good start: we can take in an IEnumerable<T> as the expr1 for the let construct. Looking back at the F# equivalent for a moment:

```(fun id -> expr2) expr1```

It’s clear we need a function that exposes the one-time evaluated (read: enumerated) sequence to a block of code where the user can party on it without having to worry about duplicating side-effects. The input of such a function clearly is IEnumerable<T>, leading to:

```… Let<T>(this IEnumerable<T> source, Func<IEnumerable<T>, …```

So, “id” from the F# code sample above corresponds to the input of the Func delegate. Finally, what’s the output meant to be? Obviously just the output of the expr2 evaluation after feeding in expr1 through the parameter. As we’re talking about sequence operators, a natural choice is to make the output an IEnumerable as well; to allow projections and such to happen inside, we choose another generic parameter:

```… Let<T, U>(this IEnumerable<T> source, Func<IEnumerable<T>, IEnumerable<U>> function)```

You can guess where the output of the function is meant to go: it becomes the output of the Let construct itself:

```IEnumerable<U> Let<T, U>(this IEnumerable<T> source, Func<IEnumerable<T>, IEnumerable<U>> function)```

Makes sense? For example, for our fancy Zip over a random sequence we want to be able to write the following:

```var rand = new Random();
var res = rand.GetNumbers(100).Take(10)
              .Let(nums => nums.Zip(nums, (l, r) => l + r))
              .All(x => x % 2 == 0);```

See what’s happening here? The left-hand side to Let becomes the source, which internally is exposed as “nums”. Zip produces its output, which becomes the output of the Let construct that’s subsequently fed into All. In fact, the above could be read as:

```var res = (let nums = rand.GetNumbers(100).Take(10) in
           nums.Zip(nums, (l, r) => l + r))
          .All(x => x % 2 == 0);```

Now on to the question of the night, how to realize this?

# Implementing IEnumerable.Let

Before we start implementing the let construct for LINQ to Objects, we need to make sure we fully understand the problem. Let me rephrase it one more time:

Given an IEnumerable<T> source, we need to expose it to a function in such a way that multiple consumers trigger only a single enumerator to be active on the source. This ensures that the side-effects occurring as part of the enumeration (using MoveNext/Current) over the enumerator are not replicated.

Since many consumers can be active inside the “let body” (the function parameter in the signature), we can expect those to move at different paces. Here’s an example:

```nums.Skip(5).Zip(nums, …)```

Here, the Skip operator readily advances the iteration over the source 5 positions. Then Zip consumes this input as well as the raw source starting from position 0. Given a sequence of { 1, 2, 3, 4, 5, 6, 7, 8, 9 }, the inputs to Zip are skewed by a distance of 5, combining (6, 1), (7, 2), etc. This forces us to memorize the first 5 elements that have already been consumed by Skip, as they still have to be observed through Zip’s second input (and obviously we can’t recompute them, as that’s exactly what we’re trying to eliminate!).

For one thing, we need to keep track of elements that have been consumed from the Let-source so that consumers inside the body can get access to those computed values. An immediate question is whether we can optimize this: wouldn’t it be possible to scavenge the computed values once we are sure no consumer inside the body can still reach them? Sounds reasonable, but there’s a deep problem here: the number of consumers can be dynamic. A good sample is SelectMany:

```from x in nums
from y in nums
select x + y```

This translates into the following:

```nums.SelectMany(x => nums, (x, y) => x + y)```

SelectMany is conceptually similar to nested loops used to enumerate a collection and for each of its elements retrieve associated objects (e.g. for each product, and each order for that product, do …). In the sample above we use nums twice, so assuming the source looks like { 1, … , 9 }, we will see the following pairs being fed to the projection function parameter:

(1, 1), (1, 2), …, (1, 9), (2, 1), …, (9, 1), (9, 2), …, (9, 9)

For the 9 elements in the left-hand side source, we’ll enumerate all of the elements in the corresponding source found by evaluating x => nums (where, in this degenerate case, we disregard the parameter x and simply return the source itself). How many times will we trigger enumeration on nums here? Well, one time for the input to SelectMany as the left-hand side, and for each element one more time for the nested enumeration. So, with 9 elements, the total will be 9 + 1 = 10. There’s no way for us to know upfront how many subscriptions will happen, so we can’t make assumptions about when we can discard elements we’ve retrieved from the source.
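We can verify that count of 10 with a small sketch; the Counted wrapper below is a hypothetical helper (not part of LINQ) that tallies how many enumerations of the wrapped source are started:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class CountingDemo
{
    // Hypothetical helper: counts how many times an enumeration over the
    // wrapped sequence is started (the increment happens on the first
    // MoveNext of each enumerator instance, since iterators run lazily).
    static IEnumerable<T> Counted<T>(this IEnumerable<T> source, int[] counter)
    {
        counter[0]++;
        foreach (var item in source)
            yield return item;
    }

    static void Main()
    {
        var counter = new int[1];
        var nums = Enumerable.Range(1, 9).Counted(counter);

        var total = (from x in nums
                     from y in nums
                     select x + y).Count();

        // 9 elements, so 81 pairs; the source was enumerated 1 + 9 = 10 times.
        Console.WriteLine(total);      // 81
        Console.WriteLine(counter[0]); // 10
    }
}
```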

All of those observations give rise to the introduction of a “memoized enumerator”. A what? Memoization is a technique used in functional programming to cache the result of evaluating a function knowing that function is pure, i.e. given the same inputs it always gives back the same output. A typical example is recursive Fibonacci:

```Func<uint, uint> fib = null;
fib = n => n <= 1 ? 1 : fib(n - 2) + fib(n - 1);

for (uint i = 0; i < 50; i++)
    Console.WriteLine(fib(i));```

Good luck running this code – it will take forever to compute fib(50) (ignore the fact it doesn’t fit in a uint) due to the explosion of recursive steps and re-evaluations incurred. A generic memoization function will cache results:

```static Func<T, R> Memoize<T, R>(Func<T, R> f)
{
    var cache = new Dictionary<T, R>();

    return t =>
    {
        R res;
        if (cache.TryGetValue(t, out res))
            return res;
        return cache[t] = f(t);
    };
}```

This memoizer is a higher-order function that transparently (in terms of the function signature) maps a function onto one that caches evaluation results. As a result, for a given function input “t”, the function “f” will be evaluated at most once (for subsequent calls with the same input, the dictionary is consulted). It should be pretty clear this is precisely what we want for our Let-enumerator as well, but instead of keying the avoidance of multiple evaluations off the function input, we’ll key it off the position of an enumerator. Let’s discuss this.
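To see the memoizer fix the Fibonacci sample, the recursive calls must go through the memoized delegate, i.e. the fib variable is reassigned to the memoized version before any call happens (here we also widen the result type to ulong so fib(50) fits):

```csharp
using System;
using System.Collections.Generic;

class MemoFib
{
    static Func<T, R> Memoize<T, R>(Func<T, R> f)
    {
        var cache = new Dictionary<T, R>();
        return t =>
        {
            R res;
            if (cache.TryGetValue(t, out res))
                return res;
            return cache[t] = f(t);
        };
    }

    static void Main()
    {
        Func<uint, ulong> fib = null;
        fib = Memoize<uint, ulong>(n => n <= 1 ? 1UL : fib(n - 2) + fib(n - 1));

        // Completes instantly: every fib(i) is computed at most once.
        Console.WriteLine(fib(50)); // 20365011074
    }
}
```

Had the lambda recursed into the raw, unmemoized function instead of the fib variable, the cache would only catch top-level calls and the exponential blow-up would remain.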

Assume you have two consumers of the Let-input sequence: one skips ahead 5 positions, the other starts from the beginning, just as we saw before:

```src.Let(nums => nums.Skip(5).Zip(nums, (l, r) => l + r))```

Let’s decompose the interactions here. Zip pulls its left-hand side followed by its right-hand side (the first parameter), at an equal pace. First, the end result:

I hope the above diagram makes sense, if not look at the more conceptual diagram below:

Now we loop bottom-up: if something pulls at Zip, what does it start pulling at by itself? First it calls MoveNext on its left-hand side, which is the Skip(5) block. Skip starts by fetching the first 5 + 1 items from its input, which is the Let block, in order to yield the 6th item back to the consumer. This leads to the following situation:

The red arrows indicate the active “channels” of enumeration. Reading bottom-up, Zip requested one element from Skip(5), which in turn drew 6 items through its upstream channel connected to Let. Since Let hadn’t yet received calls to retrieve elements from the source at positions 0 through 5, it pulled on its source to get those items, causing potential side-effects (indicated by the bombs). But besides just yielding them back to the Skip(5) caller, it also “memoizes” them in an indexed collection. The “@5” annotation on the outgoing Let arrows – representing enumerators – indicates the cursor position in the source sequence of the element that was last yielded through the channel.

Now that Zip has an element from its left-hand side source, it needs to fetch an element from its right-hand side source, in order for the two elements to be combined by the zipper function (in our case numerical +). This looks as follows:

Notice the red channel doesn’t extend all the way to the source – we already have received the element at position 0, so we can consult the memoization store to retrieve the value from there. No observable source-induced side-effects are triggered by doing so. The cursor position for the active Let-consumer is moved to 0.

Essentially, whenever a channel queries for an element below the high-water mark of the memoization store, the element is retrieved from the store instead of being pulled from the source. On to the code now. We’ll start by looking at a memoizing enumerable, as we’ll use that as the building block for the Let construct. First, the IEnumerable<T> implementation:

```class MemoizeEnumerable<T> : IEnumerable<T>, IDisposable
{
    private readonly IEnumerable<T> _source;
    private IList<T> _data;
    private IEnumerator<T> _enumSource;
    private bool _disposed;
    private bool _reachedEnd;

    public MemoizeEnumerable(IEnumerable<T> source)
    {
        _source = source;
    }

    public IEnumerator<T> GetEnumerator()
    {
        if (_disposed)
            throw new ObjectDisposedException("LetIterator");

        if (_enumSource == null && !_reachedEnd)
        {
            _enumSource = _source.GetEnumerator();
            _data = new List<T>();
        }

        return new MemoizeEnumerator<T>(this);
    }

    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }

    private bool TryMoveNext(int cursor, out T value)
    {
        Debug.Assert(cursor <= _data.Count);

        if (_disposed)
            throw new ObjectDisposedException("LetIterator");

        if (cursor < _data.Count)
        {
            value = _data[cursor];
            return true;
        }
        else
        {
            if (_reachedEnd)
            {
                value = default(T);
                return false;
            }

            if (_enumSource.MoveNext())
            {
                value = _enumSource.Current;
                _data.Add(value); // memoize the element for later consumers
                return true;
            }
            else
            {
                // When we reach the end, no reason to delay the disposal of the source
                // any longer. Now we should avoid calls to MoveNext further on, so we
                // set a _reachedEnd flag to detect this case.
                _enumSource.Dispose();
                _enumSource = null;
                _reachedEnd = true;

                value = default(T);
                return false;
            }
        }
    }

    public void Dispose()
    {
        // Facilitates deterministic disposal of the underlying source. We can't rely
        // on Dispose calls on the enumerator objects as we don't know the number of
        // consumers we're dealing with.
        if (!_reachedEnd && _enumSource != null)
        {
            _enumSource.Dispose();
            _enumSource = null;
        }

        _disposed = true;

        if (_data != null)
            _data.Clear();
    }

    class MemoizeEnumerator<T_> : IEnumerator<T_>
    {
        …
    }
}```

A MemoizeEnumerable<T> encapsulates an underlying IEnumerable<T> that represents the source. The GetEnumerator method for consumers to call in order to establish an enumerator triggers the MemoizeEnumerable to start consuming the input source unless it’s already doing so. This is the key to avoiding multiple source-consumers, which would lead to duplication of side-effects. The memoization store is simply a List<T>, which is indexable. What the MemoizeEnumerator<T> looks like will be shown soon.

Skipping over TryMoveNext for just a second, we look at Dispose first. As we’ve seen before, the number of enumerators over the exposed memoized data structure is something we can’t know upfront or infer from context. So, the disposal of individual enumerator objects doesn’t mean we can dispose the enumerator we established on the (memoized) source. We need a hint from the outside to say it’s fine to let go of the source (which could be holding on to, say, a network connection underneath). This will correspond to “scoped usage” of the Let construct, as we shall see later.

Now on to TryMoveNext. Here the fetching from the underlying source happens, based on a passed-in cursor. We enforce linear progression, which is expected for MoveNext calls: the consuming enumerators should take at most one step forward at a time. If the cursor falls within the range of the memoization store, we simply retrieve the cached value from there. Otherwise, we fetch the next element from the underlying source (causing a potential side-effect once and for all for that element) and keep it in the store. If there are no further elements to be fetched, as signaled by the return value of the source’s MoveNext call, we can dispose the enumerator over the source. To avoid subsequent calls on the disposed object, we keep a flag to signal we’ve reached the end. Notice we’re not dealing with multi-threaded consumers here – such worries are left as an exercise for the reader.

Finally, we should have a look at the MemoizeEnumerator inner class which obviously will call TryMoveNext in order to make progress over the shared source enumerator and memoization store. This type is fairly trivial:

```class MemoizeEnumerator<T_> : IEnumerator<T_>
{
    private readonly MemoizeEnumerable<T_> _source;
    private int _cursor;
    private T_ _lastValue;

    public MemoizeEnumerator(MemoizeEnumerable<T_> source)
    {
        _source = source;
        _cursor = 0;
    }

    public T_ Current
    {
        get { return _lastValue; }
    }

    object System.Collections.IEnumerator.Current
    {
        get { return Current; }
    }

    public bool MoveNext()
    {
        return _source.TryMoveNext(_cursor++, out _lastValue);
    }

    public void Reset()
    {
        _cursor = 0;
    }

    public void Dispose()
    {
        // No disposal here - the enumerable object does the bookkeeping of the
        // underlying enumerator object. Here we're essentially just dealing with
        // a cursor into the underlying sequence.
    }
}```

All there is to a memoizing enumerator is keeping its cursor, communicating it to TryMoveNext, and incrementing it. Notice the assert in TryMoveNext will trigger when an ill-behaved consumer continues to call MoveNext on the enumerator after it has returned false. One could pimp this case a little by either keeping a _reachedEnd flag inside the enumerator (so as not to bother the enumerable object anymore in such a case) or by keeping the _cursor value at its final position as opposed to incrementing it.

Notice the enumerator’s Dispose method is not implemented for reasons explained earlier. The only thing one could do as a meaningful implementation is a _disposed flag to throw an ObjectDisposedException if the object is used beyond its disposal state. We leave this as a trivial exercise for the reader as well.

Ultimately we’ve reached a state where we can give away the entry-point to IEnumerable.Let:

```public static IEnumerable<U> Let<T, U>(this IEnumerable<T> source, Func<IEnumerable<T>, IEnumerable<U>> function)
{
    using (var mem = new MemoizeEnumerable<T>(source))
    {
        foreach (var item in function(mem))
            yield return item;
    }
}```

The most important thing here is obviously the memoization of the source, which is fed to the function for the let-body to party on. The use of a using-block inside an iterator causes that iterator’s Dispose method to call the Dispose method on the resource in use. In other words, if one foreach’es over the result of a Let-call, exiting the loop will trigger a call to Dispose, which in turn disposes the memoized enumerable, in turn disposing the underlying source’s singleton enumerator. This means re-enumerating the sequence produced by Let will cause re-iteration over the underlying source. This makes sense, as we only protect against duplication of side-effects inside the let-body. If you want to prevent re-execution of the Let operator itself from duplicating side-effects, wrap it in yet another Let-call.

# A few samples

Time for some samples to conclude this journey. We go for two source iterators that are only of interest for their side-effects:

```static IEnumerable<int> GetNums(int n)
{
    try
    {
        for (int i = 0; i <= n; i++)
        {
            Console.ForegroundColor = ConsoleColor.Red;
            Console.Write(i + "   ");
            Console.ResetColor();
            yield return i;
        }
    }
    finally
    {
        Console.ForegroundColor = ConsoleColor.Green;
        Console.WriteLine("Disposed");
        Console.ResetColor();
    }
}

static Random rand = new Random();

static IEnumerable<int> Lotto()
{
    try
    {
        for (int i = 0; i < 6; i++)
        {
            var n = rand.Next(100);
            Console.ForegroundColor = ConsoleColor.Red;
            Console.WriteLine(n);
            Console.ResetColor();
            yield return n;
        }
    }
    finally
    {
        Console.ForegroundColor = ConsoleColor.Green;
        Console.WriteLine("Disposed");
        Console.ResetColor();
    }
}```

We log produced values to the console to visualize how many times the iterator is enumerated. Obviously we write those in red, as they are bloody side-effects. We’re also interested in seeing proper disposal, so we log that event too. A first example using SelectMany without Let shows how bad things get as we replicate side-effects:

```Console.WriteLine("Without let");
{
    var source = GetNums(10);

    var res = from x in source
              from y in source
              select x + y;

    foreach (var i in res)
        ;
}
Console.WriteLine();```

The result is as follows: no fewer than 11 iterations over the source sequence:

Next, we eliminate the common sub-expression “source” by lifting it into a Let block. Source becomes the input to Let, and inside the let-body we represent the source with the “src” identifier:

```Console.WriteLine("With let");
{
    var source = GetNums(10);

    var res = source.Let(src =>
        from x in src
        from y in src
        select x + y);

    foreach (var i in res)
        ;
}
Console.WriteLine();```

Now things look much better:

Obviously the computed results will be the same (not visualized here – we’re just showing the side-effects), but we got rid of the excessive side-effect replication. This said, consuming a Let’s output multiple times doesn’t prevent the source from being consulted more than once: only inside the Let-block are consumers shielded from side-effect duplication; consumers of the Let-block itself don’t get protection:

```Console.WriteLine("With let (twice)");
{
    var source = GetNums(10);

    var res = source.Let(src =>
    {
        Console.WriteLine("Let!");
        return from x in src
               from y in src
               select x + y;
    });

    foreach (var i in res)
        ;

    foreach (var i in res)
        ;
}
Console.WriteLine();```

We enumerate twice over the output of Let, hence we see the side-effects twice (and not 22 times as would be the case if we left Let out of the picture altogether):

However, nothing prevents us from putting a Let over the Let we already have: let’s eliminate the common subexpression res in both foreach-statements as follows:

```Console.WriteLine("With let (nested)");
{
    var source = GetNums(10);

    var res = source.Let(src =>
    {
        Console.WriteLine("Let!");
        return from x in src
               from y in src
               select x + y;
    });

    res.Let(src =>
    {
        foreach (var i in src)
            ;

        foreach (var i in src)
            ;

        return new int[] { }; // doesn't matter, we want an "imperative let"
    }).ToArray(); // forces evaluation
}
Console.WriteLine();```

Sure enough, we’re back to the minimum amount of side-effects:

And finally, some direct uses of Memoize, assuming the following extension method on IEnumerable<T> is present:

```public static MemoizeEnumerable<T> Memoize<T>(this IEnumerable<T> source)
{
    return new MemoizeEnumerable<T>(source);
}```

Now we use our lottery iterator as a source and show that the sequences are not equal without memoization as they are enumerated multiple times and hence different random numbers are picked for two uses of the source:

```Console.WriteLine("Without memoize");
{
    var src = Lotto();
    Console.WriteLine(src.SequenceEqual(src));
}
Console.WriteLine();

Console.WriteLine("With memoize");
{
    using (var src = Lotto().Memoize())
        Console.WriteLine(src.SequenceEqual(src));
}
Console.WriteLine();```

With memoization in use, the two sequences obviously ought to be equal every time:

One more thing to illustrate is the use of infinite streams by means of a Repeat operator:

```public static IEnumerable<T> Repeat<T>(T value)
{
    while (true)
        yield return value;
}

public static IEnumerable<T> Repeat<T>(IEnumerable<T> source)
{
    while (true)
        foreach (var item in source)
            yield return item;
}```

It should be clear that the use of eagerly evaluated operators like ToArray and ToList won’t work as a side-effect-avoiding technique in such a setting. However, with Memoize one can eliminate side-effects without getting stuck:

```Console.WriteLine("Memoize prevents _|_ out");
{
    using (var mem = Enumerable.Repeat(1).Memoize())
        Console.WriteLine(mem.Take(10).Count());
}
Console.WriteLine();

Console.WriteLine("ToList may _|_ out");
{
    Console.WriteLine(Enumerable.Repeat(1).ToList().Take(10).Count());
}
Console.WriteLine();```

The samples above are not of interest for their side-effects (though one can easily come up with such samples), but rather for their infinite character. Use of ToList will exhaust the managed heap quickly, while Memoize just fills memory with the objects that are actually requested downstream (in the case above, Take(10)).

Wonder about the _|_ notation? This is the “bottom type”, one way to type non-termination. “Entre parentheses”, we don’t have such a thing in the world of .NET, which is a pity. For example, if you have an exception throwing helper method:

```static void ThrowAlways()
{
    throw new MyException(/* localization fluff */);
}```

and you call the above from a function to signal an exception:

```static int Bar()
{
    if (cond)
        …
    else
        ThrowAlways();
}```

the compiler will complain that “not all code paths return a value”. Void is not the same as bottom: flow analysis cannot know the method won’t ever return. With a bottom type, e.g. using a keyword “never” (or juicier ones like “blackhole”, “getstuck”, “goodbye”, “farewell”), one could say:

```static never ThrowAlways()
{
    throw new MyException(/* localization fluff */);
}```

Now the compiler would check statically that ThrowAlways can never reach its end (either by throwing an exception, looping infinitely or calling other “never”-returning methods). At the call-site, the compiler would be happy with Bar’s definition as well since it knows ThrowAlways never returns, hence there’s no need to be worried about not having a return value for the enclosing method. And of course, if we also had union types, we could (or maybe should, depending on how pesky the compiler would be made) warn the user that Bar may never return:

```static (never | int) Bar()
{
if (cond)
…
else
ThrowAlways();
}```

But all of this deserves its own discussion another time.
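Some type systems do ship a form of this today. Python’s typing module, for instance, has NoReturn, which static checkers such as mypy treat exactly as described: a call to a NoReturn function ends the code path. A sketch (names are mine):

```python
from typing import NoReturn

class MyException(Exception):
    pass

def throw_always() -> NoReturn:
    raise MyException("localization fluff")

def bar(cond: bool) -> int:
    if cond:
        return 42
    else:
        # A checker that understands NoReturn knows this branch cannot fall
        # through, so no "missing return" complaint is warranted here.
        throw_always()
```

At runtime NoReturn is just an annotation; the static guarantee comes from the checker, not the interpreter.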

# Conclusion

If you don’t want to tame your side-effects, you should at least have good notions about where they happen and how they may get replicated by mixing lazy and eager evaluation in languages like C#. Enumerable.Let shows one approach to taming side-effects in the context of LINQ queries with side-effecting computed sequences (iterators).

If you can’t get enough of this stuff, check out Erik Meijer’s talks on Fundamentalist Functional Programming. Credits where credits are due, the Enumerable.Let exercise shown in here presents the dual of the IObservable’s Let operator (from the Reactive LINQ framework Erik and his team have been working on lately), which I’ll talk about another time.

Left as a challenge for the reader: can you think of a way to port let’s brother “letrec” to the world of IEnumerable<T> as well? Let completeness be your source of inspiration :-).


A Belgian friend of mine was reading up on my lambda calculus obsession and came across this Dutch wiki page. It reminds me of exactly the same mistake 5 years ago: [FUNNY] Rule one when translating software - review the translation. I’m taking the opportunity to introduce a new IIgnorable blog category for the rare occasions I feel the urge to post nonsense.


# Introduction

A while back, I blogged about (Mis)using C# 4.0 Dynamic – Type-Free Lambda Calculus, Church Numerals, and more which was a fun post and got some good feedback and solid reading numbers. So, let’s continue our journey of brain-exploding theoretical foundational posts with “The Revenge of the Type-Free Lambda Calculus, Now Compatible with C# 3.0”.

Last time around, we truly misused C# 4.0 dynamic to uni-type (not a spelling mistake) functions and values, and went bananas turning basic computation with numerals and Booleans into functions thanks to Church encodings. All in all a great unit test for the Dynamic Language Runtime that sweats under the pressure put on it by our crazy function calls:

```//
// Church numerals.
// Basic idea: natural number n gets represented as \fx.f^n(x)
//
var N0 = new Func<dynamic, Func<dynamic, dynamic>>(f => x => x);
var succ = new Func<dynamic, Func<dynamic, Func<dynamic, dynamic>>>(n => f => x => f(n(f)(x)));

Func<dynamic, uint> toInt = x => x(new Func<dynamic, dynamic>(i => i + 1))(0u);
Func<uint, dynamic> fromInt = null;
fromInt = n =>
{
return (n == 0 ? N0 : succ(fromInt(n - 1)));
};```
`var plus = new Func<dynamic, Func<dynamic, Func<dynamic, Func<dynamic, dynamic>>>>(m => n => f => x => m(f)(n(f)(x)));`

The above allows us to write a simple iterative (ah, I’m cheating) Fibonacci function as follows:

```var i = N0;
var prev = N0;
var curr = succ(N0);

while (true /* could have used a Church Boolean as well ;-) */)
{
Console.WriteLine(toInt(curr));

var t = curr;
curr = plus(curr)(prev);
prev = t;
}```

If you don’t believe the above works, or you don’t get how it does its magic, revisit my previous post on the topic, and check back after you’ve plowed through it.

# Exceptions rule the world…

Today, we’ll take a step back and have a look at how we can do all of the above in the pre-.NET 4.0 era, where we don’t have the “convenient” dynamic type to escape from the typed reality. So, how can we make this work? A possible answer can be found in the use of unchecked exceptions, as we have them at our service in the .NET Framework. The underlying idea is to “smuggle” data out of a function without having to declare it anywhere in the function’s signature (checked exceptions would prevent this, as the “throws clause” would have to reveal what’s being thrown in a … statically typed manner, aye!).

This is one of the findings described in Mark Lillibridge’s paper entitled “Unchecked Exceptions can be Strictly More Powerful than Call/CC” on page 301, though much more mind-blowing stuff can be found in there (as the title reveals). Once I get into blogging about continuations, most likely in the context of Reactive LINQ (otherwise known as Reactive Framework or Reactive Fx), I may refer to this paper again to assist in blowing out people’s brains (just kidding). Talking about Reactive LINQ, I got the initial reference to this paper from Erik Meijer.

So, what’s the big idea? Let me try to summarize it. You’ll remember we need to define two operations to make the essentials of lambda calculus work: abstraction and application. We also know that both functions and values need to be represented in an equal manner, as functions are first-class: a function can take another function as its input or produce one as its output. So, it’s still clear we need to uni-type things just like we did before. Let’s spill the beans now without much further ado: that single type we’ll be using today is nothing less than System.Action, a delegate defined as follows:

`public delegate void Action();`

In fundamentalist functional programming the above is the most useless type for a function: it takes nothing and produces nothing, hence all those guys can be compiled away, right? But as they’ll be our bread and butter for today’s installment, you can guess we’ll joyfully dance the tango of side-effects. One such great side-effect is called an exception.

Let’s take a step back first and summarize the dense info above: values and functions will be represented as System.Action objects. Right, in our lambda calculus “emulator” System.Action is the new System.Object. The careful reader will notice that System.Action plays a dual role here: on the outside we use it as the single type we’re dealing with both for values and functions. But on the inside, we'll leverage its delegate (a “CLR function object” if you will) characteristic somehow. Let’s go there now…

How can we realize abstraction? As we know from the previous time around, all functions of multiple arguments can be represented as functions of one argument through currying. So, to construct this simplest form of a function, all we need is a way for the user to define a function between two of our “first-class” objects (of type System.Action), which is something we can do as a Func&lt;Action, Action&gt; in C#. This allows the users to use the lambda arrow to write such a function. However, since a defined function (through abstraction) needs to be capable of being passed elsewhere as an argument, a Func&lt;Action, Action&gt; won’t do as-is: we need to morph it back into a System.Action. This is what our lambda abstraction operator will have to do:

```static Action λ(Func<Action, Action> x)
{
// But how???
}```

We’ll see in a moment how we can realize this function. But first, let’s have a look at the contract of an application operator we so badly need to have this come off the ground. Application takes two objects at a time: the first one will be a function, the second one a function or a value. It applies the function to the given argument. Objects are of type System.Action, and the result of the application call is an object too, hence the following signature:

```static Action ß(Action x, Action y)
{
// But how???
}```

Okay, let’s fill in the voids. How can we smuggle out a function between two Action objects into a single action, to realize abstraction? Well, what about carrying the function on the back of an exception object. We’ll define a useful generic Exception type for this purpose:

```class Exception<T> : Exception
{
public Exception(T data)
{
Data = data;
}

public new T Data { get; private set; }
}```

This allows us to define the lambda function like this:

```static Action λ(Func<Action, Action> x)
{
return () => { throw new Exception<Func<Action, Action>>(x); };
}```

In Lillibridge’s paper this is referred to as the “roll” operation. Think of this smuggling operation as rolling the function in a carpet of type Action: nobody will notice from the outside that it contains some dirty secrets. (Tip: don’t try this technique at airport security gates, they won’t allow exceptions.) But what does this mean? Well, from the outside, one can simply construct a proper lambda calculus function, surfaced as an Action object (making it deserve the status of “first-class”). How do we, the “lambda runtime”, get it out of there again? Simply invoke the resulting Action when needed, and catch the generic exception type. That’s precisely what the application function will have to do:

```static Action ß(Action x, Action y)
{
Func<Action, Action> xe;

try
{
x();
xe = _ => _;
}
catch (Exception<Func<Action, Action>> ex)
{
xe = ex.Data;
}

return xe(y);
}```

This is referred to as the “unroll” operation. In order to apply the function “x” to argument “y”, we first need to evaluate x. We do so by calling the delegate, which (when properly constructed through the lambda operator) will throw an exception allowing us to unroll the underlying function type from our smuggling exception carpet. Then we can feed in the argument ”y” to the resulting Func<Action, Action> to get the result of the function call back. The remaining one-billion dollar question is how we’d ever end up executing the last statement in the try-block:

`xe = _ => _;`

This is simply an identity function (you could equally well have written z => z or so), which ultimately will cause the application to return its argument. We use this as a convenient mechanism to allow “regular” Action objects as participants in our application operation. At a later point, this will become clear.

# Lambda Language Runtime for CLR 2.0 – summary

The “LLR” I’m illustrating here is a way to layer untyped computation on the statically typed foundation of the “CLR”. In contrast to the DLR’s complex infrastructure, we need nothing more than two methods:

```static Action λ(Func<Action, Action> x)
{
return () => { throw new Exception<Func<Action, Action>>(x); };
}

static Action ß(Action x, Action y)
{
Func<Action, Action> xe;

try
{
x();
xe = _ => _;
}
catch (Exception<Func<Action, Action>> ex)
{
xe = ex.Data;
}

return xe(y);
}```

# Reality check: Church Booleans

In the previous post, I’ve explained what Church Booleans are, so I won’t do it again now. Instead, let’s try to redefine those in terms of our lambda and beta operations:

```//
// Church Booleans
//
var T = λ(a => λ(b => a)); // \ab.a
var F = λ(a => λ(b => b)); // \ab.b

var Not = λ(m => λ(a => λ(b => ß(ß(m, b), a)))); // \m.\ab.mba
var And = λ(m => λ(n => ß(ß(m, n), m))); // \mn.mnm
var Or = λ(m => λ(n => ß(ß(m, m), n))); // \mn.mmn
var Xor = λ(m => λ(n => λ(a => λ(b => ß(ß(m, ß(ß(n, b), a)), ß(ß(n, a), b)))))); // \mn.\ab.m(nba)(nab)```

This should do the trick, shouldn’t it? How to calculate, say, T && F? Here it is:

`ß(ß(And, T), F)`

Quite easy, but with a bit of beta function noise. To prove all of this works, we’ll have to provide pretty-printing functions. Similarly we may benefit from a function that promotes a CLR Boolean into the corresponding Church Boolean:

```Func<Action, bool> toBool = b =>
{
bool res = false;
ß(ß(b, () => { res = true; }), () => { res = false; })();
return res;
};
Func<bool, Action> fromBool = b => b ? T : F;```

Those are very comparable to the ones we used in the previous post, using C# 4.0 dynamic. Notice how an impure lambda expression is used as an argument to the beta-application in order to realize toBool.

Based on all of this we’d like to be able to write a testing function that can print results of applying our Boolean operators to various arguments. Again we base this off the functions used in the previous post:

```static void Unary<T, PArg, PRes>(string name, Func<Action, T> f, Func<Action, PArg> printArg, Func<T, PRes> printRes, params Action[] values)
{
foreach (var value in values)
Console.WriteLine("{0}({1}) = {2}", name, printArg(value), printRes(f(value)));
}

static void Binary<T, PArg, PRes>(string name, Func<Action, Action, T> f, Func<Action, PArg> printArg, Func<T, PRes> printRes, params Action[] values)
{
foreach (var valueL in values)
foreach (var valueR in values)
Console.WriteLine("{0}({1}, {2}) = {3}", name, printArg(valueL), printArg(valueR), printRes(f(valueL, valueR)));
}```

For example, Unary could be used with the “Not” operator, using toBool for print-functions which turn the Church Boolean into the corresponding System.Boolean that Console.WriteLine can deal with. Notice one thing though: the above is not pure either, in the sense that we’re not passing in plain Action objects for the function “f”; instead we’re using Func&lt;Action, …&gt; types. None of our lambda functions like “Not” and “And” are compatible with this form. How do we do this (purely a cosmetic thing, btw)?

```//
// Conversion operations
//
Func<Action, Func<Action, Action>> toFunc1 = f => x => ß(f, x);
Func<Action, Func<Action, Action, Action>> toFunc2 = f => (x, y) => ß(ß(f, x), y);
Func<Action, Func<Action, Action, Action, Action>> toFunc3 = f => (x, y, z) => ß(ß(ß(f, x), y), z);
Func<Func<Action, Action>, Action> fromFunc1 = f => λ(x => f(x));
Func<Func<Action, Action, Action>, Action> fromFunc2 = f => λ(x => λ(y => f(x, y)));
Func<Func<Action, Action, Action, Action>, Action> fromFunc3 = f => λ(x => λ(y => λ(z => f(x, y, z))));```

The conversion operators above provide the answer: they allow us to make plain lambda functions callable as if they were CLR functions (Func&lt;…&gt;) and also allow for abstraction based on CLR functions. In fact, the family of toFunc functions could be called “decurry” (e.g. toFunc3 produces a function of three arguments) while the fromFunc functions are “curry” operators (e.g. fromFunc3 takes a function of three arguments). Based on this we can write “C#-user friendly” forms of our Church Boolean operators:
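The curry/decurry duality is language-agnostic; a two-argument Python sketch (curry2 and uncurry2 are my names, mirroring fromFunc2 and toFunc2):

```python
def curry2(f):
    # fromFunc2 analog: a two-argument function becomes a chain of one-argument ones.
    return lambda x: lambda y: f(x, y)

def uncurry2(f):
    # toFunc2 analog: a curried chain becomes callable with two arguments at once.
    return lambda x, y: f(x)(y)

add = lambda a, b: a + b
curried = curry2(add)
assert curried(1)(2) == 3
assert uncurry2(curried)(1, 2) == 3
```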

```var not = toFunc1(Not);
var and = toFunc2(And);
var or = toFunc2(Or);
var xor = toFunc2(Xor);```

Notice the different casing used. Exercise for the reader: infer the types of the eight objects that appear in the fragment above (i.e. excluding the to* functions).

Now we can write a test procedure as follows:

```Console.WriteLine("Church Booleans");
Unary("toBool", toBool, toBool, x => x, F, T);
Unary("not", not, toBool, toBool, F, T);
Binary("and", and, toBool, toBool, F, T);
Binary("or", or, toBool, toBool, F, T);
Binary("xor", xor, toBool, toBool, F, T);
Console.WriteLine();```

Here’s the result:

It worked, didn’t it?

# Lambda Language Runtime for CLR 2.0 – noise reducing intermezzo

In the sample above, we’ve encountered utterly complicated application calls due to the fact the beta function only takes two arguments at a time (just like classical lambda calculus prescribes):

`var Xor = λ(m => λ(n => λ(a => λ(b => ß(ß(m, ß(ß(n, b), a)), ß(ß(n, a), b)))))); // \mn.\ab.m(nba)(nab)`

It’s a convention that applications are left-associative, hence we’d like to be able to write the following instead:

`var Xor = λ(m => λ(n => λ(a => λ(b => ß(m, ß(n, b, a), ß(n, a, b)))))); // \mn.\ab.m(nba)(nab)`

Obviously this is easy to realize as follows, using LINQ’s Aggregate (“left fold”) operator:

```static Action ß(Action x, Action y, params Action[] zs)
{
return zs.Aggregate(ß(x, y), (l, z) => ß(l, z));
}```

Can we make the excessive lambda arrows a bit less invasive too? Sure, inspired by our fromFunc objects:

```static Action λ(Func<Action, Action, Action> f)
{
return λ(x => λ(y => f(x, y)));
}

static Action λ(Func<Action, Action, Action, Action> f)
{
return λ(x => λ(y => λ(z => f(x, y, z))));
}
```

That puts us in a position to write the Xor operator as:

`var Xor = λ((m, n) => λ((a, b) => ß(m, ß(n, b, a), ß(n, a, b)))); // \mn.\ab.m(nba)(nab)`

Much cleaner, isn’t it?

# Church numerals

If Booleans work, we should get numerals working too. Obviously this is the case:

```//
// Church numerals
//
var N0 = λ(f => λ(x => x)); // \fx.x = F
var N1 = λ(f => λ(x => ß(f, x))); // \fx.fx
var N2 = λ(f => λ(x => ß(f, ß(f, x)))); // \fx.f(fx)

var Zero = λ(n => ß(ß(n, λ(x => F)), T)); // \n.n(\x.F)T
var Succ = λ(n => λ(f => λ(x => ß(f, ß(ß(n, f), x))))); // \n.\fx.f(nfx)
var Pred = λ(n => λ(f => λ(x => ß(ß(ß(n, λ(g => λ(h => ß(h, ß(g, f))))), λ(u => x)), λ(u => u)))));
var Plus = λ(m => λ(n => λ(f => λ(x => ß(ß(m, f), ß(ß(n, f), x)))))); // \mn.\fx.mf(nfx)
var Sub = λ(m => λ(n => ß(ß(n, Pred), m)));
var Mul = λ(m => λ(n => λ(f => ß(n, ß(m, f))))); // \mn.\f.n(mf)
var Exp = λ(m => λ(n => ß(n, m))); // \mn.nm

var zero = toFunc1(Zero);
var succ = toFunc1(Succ);
var pred = toFunc1(Pred);
var plus = toFunc2(Plus);
var sub = toFunc2(Sub);
var mul = toFunc2(Mul);
var exp = toFunc2(Exp);

Func<Action, uint> toInt = x =>
{
uint n = 0;
ß(ß(x, () => { n++; }), nop)();
return n;
};
Func<uint, Action> fromInt = null;
fromInt = n =>
{
return (n == 0 ? N0 : succ(fromInt(n - 1)));
};```

The nop helper function used in the toInt function is simply the no-op Action delegate () => {}. To get to know how toInt works (as well as all the other functions, by means of formal proofs), have a look at my previous post again. Either way, a simple test shows all of the above works as well:

```Console.WriteLine("Church numerals");
Unary("toInt", toInt, toInt, x => x, Enumerable.Range(0, 3).Select(i => fromInt((uint)i)).ToArray());
Unary("succ", succ, toInt, toInt, Enumerable.Range(0, 3).Select(i => fromInt((uint)i)).ToArray());
Unary("pred", pred, toInt, toInt, Enumerable.Range(1, 3).Select(i => fromInt((uint)i)).ToArray());
Unary("zero", zero, toInt, toBool, Enumerable.Range(0, 2).Select(i => fromInt((uint)i)).ToArray());
Binary("plus", plus, toInt, toInt, Enumerable.Range(0, 3).Select(i => fromInt((uint)i)).ToArray());
Binary("mul", mul, toInt, toInt, Enumerable.Range(0, 3).Select(i => fromInt((uint)i)).ToArray());
Binary("exp", exp, toInt, toInt, Enumerable.Range(1, 3).Select(i => fromInt((uint)i)).ToArray());
Console.WriteLine();```

Resulting in:

# Church pairs and lists

As an outtake for today’s installment, let’s show how to realize pairs and lists using functions:

```//
// Pairs
//
var Pair = λ(x => λ(y => λ(z => ß(ß(z, x), y)))); // \xy.\z.zxy
var Fst = λ(p => ß(p, T)); // \p.pT
var Snd = λ(p => ß(p, F)); // \p.pF

var pair = toFunc2(Pair);
var fst = toFunc1(Fst);
var snd = toFunc1(Snd);

Func<Action, Tuple<object, object>> toPair = a => new Tuple<object, object>(ext(fst(a)), ext(snd(a)));
Func<Tuple<object, object>, Action> fromPair = p => pair(inj(p.Item1), inj(p.Item2));

Assert(toInt(fst(pair(N1, N2))) == 1, "fst(pair(N1, N2))");
Assert(toInt(snd(pair(N1, N2))) == 2, "snd(pair(N1, N2))");

var me = new Tuple<object, object>("Bart", 26);
Assert(toPair(fromPair(me)).Equals(me), "Roundtrip object pair");

var nums = new Tuple<object, object>(N1, N2);
Assert(toPair(fromPair(nums)).Equals(nums), "Roundtrip nums pair");```

A pair is constructed by giving it two arguments, resulting in a function that will return one of them when given a Boolean (“z” in the Pair function above). The first and second extraction functions are then defined in terms of applying the “pair as a function” to either T or F, which will select the right value out of the pair.
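The selection mechanics are visible in a few lines of plain Python lambdas (again leaving out the exception encoding; names are mine):

```python
true = lambda a: lambda b: a   # \ab.a
false = lambda a: lambda b: b  # \ab.b
pair = lambda x: lambda y: lambda z: z(x)(y)  # \xy.\z.zxy
fst = lambda p: p(true)   # \p.pT selects the first component
snd = lambda p: p(false)  # \p.pF selects the second component

p = pair(1)(2)
assert fst(p) == 1
assert snd(p) == 2
```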

The toPair and fromPair functions are defined to go back and forth between a Church pair and a Tuple<object, object> in CLR-land. This is based on two functions I’m not showing here (left as an exercise for the reader): ext and inj, for extract and inject respectively. The signature of those functions is as follows:

```//
// Values
//
Func<object, Action> inj = o => /* left as an exercise */
Func<Action, object> ext = a => /* left as an exercise */

Assert((string)ext(inj("Bart")) == "Bart", "ext(inj(\"Bart\"))");
Assert((int)ext(inj(42)) == 42, "ext(inj(42))");
DateTime now = DateTime.Now;
Assert((DateTime)ext(inj(now)) == now, "ext(inj(now))");```

Warning: This is not a trivial exercise, as arbitrary .NET objects need to be able to be “lifted into” lambda land (e.g. the sample above allows to roundtrip a string and a DateTime). Have fun <g>!

Church lists can be defined in terms of Nil and Cons operations, together with extraction functions like Head and Tail, and a test function IsNil:

```//
// Lists
//
var Nil = ß(ß(Pair, T), T);
var IsNil = Fst;
var Cons = λ(h => λ(t => ß(ß(Pair, F), ß(ß(Pair, h), t))));
var Head = λ(z => ß(Fst, ß(Snd, z)));
var Tail = λ(z => ß(Snd, ß(Snd, z)));

var nil = Nil;
var isNil = fst;
var cons = toFunc2(Cons);
var head = toFunc1(Head);
var tail = toFunc1(Tail);

Func<Action, List<object>> toList = a =>
{
List<object> res = new List<object>();

while (!toBool(isNil(a)))
{
res.Add(ext(head(a)));
a = tail(a);
}

return res;
};
Func<List<object>, Action> fromList = l => Enumerable.Reverse(l).Aggregate(nil, (t, o) => cons(inj(o), t));

Assert(toBool(isNil(nil)) == true, "isNil(nil)");
Assert(toBool(isNil(cons(N1, nil))) == false, "isNil(cons(N1, nil))");
Assert(toInt(tail(cons(N1, N2))) == 2, "tail(cons(N1, N2))");

var lst = new List<object> { 1, N2, 3 };
Assert(toList(fromList(lst)).SequenceEqual(lst), "Roundtrip list");```

Again we provide ways to turn a Church list into a List&lt;object&gt; (again based on ext), and the other way around (using the magical inj). Yes, the above works just fine… As an additional exercise, write a ForEach extension method that can take a Church list (of type … Action) and an Action&lt;Action&gt; delegate as the body of the loop, so we can use it as follows (tip: mirror the toList implementation):

```cons(N1, N2).ForEach(x =>
{
Console.WriteLine(toInt(x));
});```
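For comparison, the Nil/Cons/Head/Tail encoding with plain Python lambdas, including the list round-trip (all names are mine):

```python
true = lambda a: lambda b: a
false = lambda a: lambda b: b
pair = lambda x: lambda y: lambda z: z(x)(y)
fst = lambda p: p(true)
snd = lambda p: p(false)

nil = pair(true)(true)              # (T, T): the first component flags "is nil"
is_nil = fst
cons = lambda h: lambda t: pair(false)(pair(h)(t))
head = lambda z: fst(snd(z))
tail = lambda z: snd(snd(z))

def to_list(xs):
    res = []
    while is_nil(xs) is false:      # walk the spine until the nil flag is hit
        res.append(head(xs))
        xs = tail(xs)
    return res

def from_list(items):
    xs = nil
    for o in reversed(items):
        xs = cons(o)(xs)
    return xs

assert to_list(from_list([1, 2, 3])) == [1, 2, 3]
```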

# Conclusion

What can I say? This was fun for sure, if nothing more. I hope I’ve convinced the reader that functions are immensely powerful animals capable of representing data. In addition, we’ve shown how powerful exceptions can be (something the “Unchecked Exceptions can be Strictly More Powerful than Call/CC” paper goes much much deeper into). Yeah, performance guys will have the hairs on their back standing upright, but hey: it’s Crazy Sundays after all. And, as an additional excuse I now can claim to be referred to as a “stunt coder” on Twitter (thanks James, also for subtly using the word “fold”):

Enjoy!


I promise, it will be a (relatively) short post this time. You all know the foreach statement in C#, don’t you? Think twice before you answer and tell me exactly how the following works:

```foreach (int x in src)
{
// Do something with x.
}```

Got an answer? Let me disappoint you: if you have the answer, you’re wrong. There’s no single answer to the question above as you need to know more about the type of src to make a final decision on how the above works…

You may say that clearly that object needs to implement IEnumerable or IEnumerable<T>, and maybe you’ll even mention that in the former case the compiler inserts a cast for you when it gets “x” back from the call to the IEnumerator’s Current property getter. In other words, the code gets translated like this:

```var e = src.GetEnumerator();
while (e.MoveNext())
{
var x = (int)e.Current; // without the cast if src was an IEnumerable<T>
// Do something with x.
}```

A worthy attempt at the translation but not quite right. First of all, the variable x is declared in an outer scope (causing some grief when talking about closures, but that’s a whole different topic…). Secondly, the enumerator may implement IDisposable, in which case the foreach-statement will ensure proper disposal a la “using”:

```{
int x;

using (var e = src.GetEnumerator())
{
while (e.MoveNext())
{
x = (int)e.Current; // without the cast if src was an IEnumerable<T>
// Do something with x.
}
}
}```

That’s a bit more sane, but we’re missing out on another kind of source foreach can work with: any object, as long as it exposes the enumeration pattern of GetEnumerator in tandem with MoveNext and Current. Here’s a sample object that just works fine with the foreach-statement:

```class Source
{
public SourceEnumerator GetEnumerator()
{
return new SourceEnumerator();
}
}

class SourceEnumerator
{
private Random rand = new Random();

public bool MoveNext()
{
return rand.Next(100) != 0;
}

public int Current
{
get
{
return rand.Next(100);
}
}
}```

With its usage shown below:

```foreach (int x in new Source())
Console.WriteLine(x);```

Okay, that’s flexible, isn’t it? In fact, the foreach-statement can be said to be duck typed: it’s not nominal typing that matters (note that Source is never declared to implement IEnumerable, nor SourceEnumerator IEnumerator); just the structure of the object determines “compatibility” with the foreach-statement.
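Python’s for statement rests on a similar structural protocol: no interface declaration is required, just the right method shapes (__iter__/__next__). A sketch mirroring Source/SourceEnumerator, with a bounded sequence instead of random values so the result is checkable:

```python
class Source:
    def __iter__(self):
        return SourceEnumerator()

class SourceEnumerator:
    # No base class or interface: the method shapes alone make this an iterator.
    def __init__(self):
        self.current = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.current >= 3:
            raise StopIteration
        self.current += 1
        return self.current

assert [x for x in Source()] == [1, 2, 3]
```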

But who sees a foreach over a collection these days without immediately starting to think about LINQ, no? Say the consumer of Source looked like this:

```List<int> res = new List<int>();
foreach (int x in new Source())
if (x % 2 == 0)
res.Add(x);```

A great candidate for LINQ it seems, especially as we start adding more and more logic to the “query”. Nothing surprising about this conclusion, but trying to realize it fails miserably:

Why? Because LINQ is statically typed (update: to be taken with a grain of salt, see comments below this post; agreed, it'd be more precise to write LINQ to Objects as the subject of this sentence), so it expects what I’ve referred to as a nominal enumerator implementation: something that has explicitly stated to be an IEnumerable and not something that “accidentally” happens to look like that. Question of the day: how to morph an existing structural enumerator onto a nominal one so it can be used with LINQ? Sure, we could write specialized code for the Source object above that essentially creates an iterator on top of Source:

```static void Main()
{
var res = from x in IterateOver(new Source())
where x % 2 == 0
select x;

foreach (var x in res)
Console.WriteLine(x);
}

static IEnumerable<int> IterateOver(Source s)
{
foreach (int i in s)
yield return i;
}```

But maybe you’re in a scenario with plenty of those structural enumerator constructs around (e.g. some Office automation libraries expose GetEnumerator on types like Range, while the Range object itself doesn’t implement IEnumerable hence it’s not usable with LINQ), so you want to generalize the above. Essentially, given any object you’d like to provide a duck-typed iterator over it, a suitable task for another extension method and C# 4.0 dynamic:

```static class DuckEnumerable
{
public static IEnumerable<T> AsDuckEnumerable<T>(this object source)
{
dynamic src = source;

var e = src.GetEnumerator();
try
{
while (e.MoveNext())
yield return e.Current;
}
finally
{
var d = e as IDisposable;
if (d != null)
{
d.Dispose();
}
}
}
}```

Question to the reader: why can’t we simply write a foreach-loop over the “source casted as dynamic” object? Tip: how would you implement the translation of foreach when encountering a dynamic object as its source?

Yes, you’re cluttering the apparent member list on System.Object, so use with caution or just use plain old method calls to do the “translation”. What matters more is the inside of the operator, which uses the dynamic type quite a bit to realize the enumeration pattern. Notice how easy on the eye dynamically typed code looks in C# 4.0. With explicit casts all over the place, it’d look like this:

```static class DuckEnumerable
{
public static IEnumerable<T> AsDuckEnumerable<T>(this object source)
{
dynamic src = (dynamic)source;

dynamic e = src.GetEnumerator();
try
{
while ((bool)e.MoveNext())
yield return (T)e.Current;
}
finally
{
var d = e as IDisposable;
if (d != null)
{
d.Dispose();
}
}
}
}```

And now we can write:

```var res = from x in new Source().AsDuckEnumerable<int>()
where x % 2 == 0
select x;

foreach (var x in res)
Console.WriteLine(x);```

Dynamic glue – why not? In fact, even objects from other languages (like Ruby or Python) that follow the pattern will now work with LINQ, and for existing compatible objects the operator call is harmless (but wasteful). Oh, and notice you can also have an IEnumerable of “dynamic” objects if you’re dealing with objects originating from dynamic languages...

Can you implement the AsDuckEnumerable operator in C# 3.0? Absolutely, if you limit yourself to reflection-based discovery methods (left as an exercise for the reader).

Enjoy!


# Introduction

Sunday morning, time for another episode of the Crazy Sundays series. Again one in the category with risk for exploding brains, but that’s what we like, don’t we? This time around, we’re going to have a look at the type free lambda calculus in C#. But wait a minute, isn’t C# a typed language? True. Does that mean everything you do in it should be statically typed? Not necessarily: typing is there as a tool you can take or leave. In this post we’ll take a look at the new C# 4.0 dynamic feature (which provides a static type to do dynamic dispatch) from a somewhat bizarre angle…

Sometimes types do get in the way: maybe something is intrinsically untyped (like XML documents without a schema, web services without a WSDL contract, etc) or meant to be dynamically typed (like objects coming from a dynamic language like Ruby or Python). And then there are notorious APIs that “peter out” in their typing: one moment you’re statically typed, but all of a sudden you end up with System.Object everywhere (like with COM interop to the Office libraries). All such scenarios are what C# 4.0 dynamic is meant for. The classical sample to illustrate the feature is to talk to an IronPython object from C#. First the Python definition:

```class Calc:
    def Add(self, a, b):
        return a + b

def GetCalc():
    return Calc()```

The nice thing about this Python-based calculator is that it will work with anything that supports an addition operator. In other words, we could feed it two int objects, two string objects, or even a DateTime and a TimeSpan. To make such calls, we use the dynamic keyword in C#:

```var py = Python.CreateEngine();
dynamic script = py.ImportModule("Calc");

dynamic calc = script.GetCalc();
int three = calc.Add(1, 2);```

Ignore the few lines of plumbing to load the Python file; what matters here is the fact the calc variable is typed to be “dynamic”, meaning all operations invoked on it will be resolved at runtime:

I won’t go into details on how all of this works, maybe another time (outside the scope of Crazy Sundays posts), but that’s the essence of the feature. Instead, we’re going to push this feature to the limits in a “don’t try this at homework” style: enter type free lambda calculus.

# Type free lambda calculus in a nutshell

The type free lambda calculus, due to Alonzo Church around 1930, is a theory about the computational aspects of functions. It views functions as rules and defines two complementary operations to work with those: abstraction and application. In essence, that’s all there is, but nevertheless the theory gives rise to a whole research area of its own. To convince you of that statement, check the page count for the following book on your favorite online bookshop:

So, allow me to summarize the essentials here, citing/paraphrasing the book above (due to the non-trivial nature of mathematical notation, pasted as images from Word equations). First, we need to define the concept of a lambda term:

As a typical exercise, expand the fourth lambda term in the examples above to its fully parenthesized and bloated form.

The essence of lambda terms is that they denote functions, just like lambda expressions in C# do. For example, the second sample above is a function with argument “x”, returning “x”. In other words, it’s the identity function. However, it has no type: it can operate on anything (in particular, any other lambda term). In C# we could write this as follows:

Func<T, T> I = x => x;

where T stands for a generic parameter. It’s clear this function can operate on values but equally well it can operate on functions: given a function, it will return exactly that function:

I(5) // produces 5
I(I) // produces I
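For the dynamically inclined, here are the same two applications as a runnable Python sketch, where the absence of static types makes I(I) entirely unremarkable:

```python
I = lambda x: x  # the untyped identity: works on values and functions alike

print(I(5))       # 5
print(I(I) is I)  # True: applying I to itself hands back I unchanged
```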

In fact, the middle three samples above are closed terms, meaning all symbols used in their “function body” (the part after the dot) are in “scope” by means of the abstraction(s) over it. We call those combinators:

The last term in the previous sample is not closed though: it returns “x”, which is not being abstracted over. This reflects the concept of a closure, just as we have in C#:

R x = …;
Func<T, R> f = z => x;

Let’s not go there for now, but suffice it to say that fancy words like “closure” are deeply rooted in theoretical foundations. Count yourself lucky to work with a language layered on top of solid theoretical foundations :-).

Next, we need the concept of free variables. In short, this allows us to identify those variables not introduced by an abstraction in a given term. The definition is fairly straightforward:

For combinators, the free variables set will be empty. In contrast, for our last sample (the one resulting in a closure) the set would be a singleton containing “x”. Nothing too fancy, right?

Finally, we can define how application is carried out, based on the concept of substitution:

To avoid name clashes (something that can be formally avoided by adopting a variable convention, using the definition of FV above), careful renames are permitted. We all know this from C#, and this is merely the theoretical foundation for scoping:

Func<T, T> I1 = x => x;

and

Func<T, T> I2 = y => y;

are in fact the “same”. Using such a big word in the context of a whole runtime (CLR) and language (C#) is a dangerous business. I’m not saying that I1 and I2 will refer to the same delegate: there’s no identification between delegates that says “x => x” and “y => y” are the same. Rather, what I’m pointing out here is that the behavior of I1 and I2 will be the same when applied to the same object. In the lambda calculus this is referred to as alpha-conversion.

Ignoring the important concern of avoiding name clashes for the time being, have a closer look at the application “beta-conversion” rule above. The idea is simple: “application of an abstraction with another term results in substitution”. This is pretty much like a delegate call in C#, but yet different enough: in the world of the lambda calculus, substitutions are nothing but mechanical rewrites on terms. In languages like C#, code is compiled as-is and delegate calls don’t magically rewrite the delegate’s body on the fly. But more importantly, in C# we get also immediately concerned with call semantics like call-by-value: before making a call, its arguments need to be reduced to a “value” that can be passed to the receiving end of the call (through the delegate invocation mechanism).

# Untyped or uni-typed?

Now the question of the day: can we type (in the CLR/C# type system sense) all lambda terms? It seems we were successful doing so already with the I combinator, concluding it’s a Func<T, T> with the use of generics: passing a value of a certain type T, the result is the same value and hence of the same type. In fact, what about inferring such signatures? C# doesn’t do so, but F# can (through Hindley-Milner type inference, as is typical in ML-inspired languages):

Okay, “I” is inferred to be a type that goes from 'a to 'a, where 'a is the notation for a generic parameter. What about K, the combinator that given two arguments always returns the first one (a “constant” value producer):

We’re still good to go it seems. F# has inferred the type to be 'a -> 'b -> 'a. Wow, what’s going on here? In C#, this would look like Func<T1, Func<T2, T1>>: a function that given an argument of type T1 returns a function that given an argument of type T2 returns a value of type T1. Get it? What’s happening here is “currying”, where a function of n arguments is turned into “nested” functions that consume one argument at a time. This means we can write, say, “K 5” to create a new function that will eat the remaining “y” argument (of any type) and will always return 5.

Note: There’s a little complication here in the context of F#, called the “value restriction”. I won’t go there as this is irrelevant for the discussion here. What we’re focusing on is solely whether or not we can type lambda expressions.

Even for the (relatively) complicated beast S, a type can be inferred:

Impressive? Not so hard in fact. Follow me as we infer the type ourselves by hand, from the function’s body. First of all, we see x followed by two terms: “z” and “(y z)”. That means “x” is a function with two arguments, and we assume it returns something. Assign type names to those things: the first argument “z” gets type 'a and the result of “(y z)” gets type 'b. For the result we write 'c. In other words, the type of “x” is already inferred to be 'a -> 'b -> 'c. Next, we need to infer the type for “y” based on our prior identification of the type for “(y z)” as 'b. We see that “y” is a function with one argument, “z”. We already typed “z” to be 'a, so the type of “y” becomes 'a -> 'b. And finally, our “S” function also takes in “z” as a third argument, which is typed to be 'a. All of this, brought together with the return type of the call to “x”, leads to the signature shown above.

So, it looks like we can type all lambda terms, right? Unfortunately, no. Here’s the proof:

Here I’ve written a function W that given an argument x applies that argument to itself. Such a simple lambda term, and yet one can’t find a type for it. Follow me in this brain exercise: W takes one argument x, say of type 'a. Now, x is applied to x. From this it looks like x is a function type with one argument. The argument is x, which we’ve already typed to be 'a. What about the return type? Let’s say it’s 'b, so now we have that x needs to be of type 'a -> 'b, which isn’t the same as the 'a we had before: unification fails.

That’s where the differences between the type free lambda calculus and typed variants (like the simply typed lambda calculus, and for generics and such something called “System F”) crop up. C# being a typed language doesn’t allow us to get rid of types altogether, so there’s no way we can get “untyped”. But what about “uni-typed” (a term due to Robert Harper): replace all types with one single type. Can we get there? The answer is: with “dynamic” we can!

`dynamic W = new Func<dynamic, dynamic>(x => x(x));`

The fact we need some ugly delegate constructor call is unfortunate, but notice how we’re assigning the “dynamic –> dynamic” type to a “dynamic” on the left. In other words, we’re treating everything (non-functional values and function “objects” themselves) as dynamic. The code above compiles just fine, but how does the x(x) in the lambda body work? Well, at runtime the system will figure out what the type of x is and ensure it can be used to be called as a unary function. For example:

```dynamic W = new Func<dynamic, dynamic>(x => x(x));
W(1);```

This will clearly fail. It corresponds to calling the integer 1 as if it were a function, with the integer 1 as its argument. Clearly, an integer value cannot be used as a function (a good thing!), so a runtime error results:

Unhandled Exception: Microsoft.CSharp.RuntimeBinder.RuntimeBinderException: Cannot invoke a non-delegate type
at CallSite.Target(Closure , CallSite , Object , Object )
at System.Dynamic.UpdateDelegates.UpdateAndExecute2[T0,T1,TRet](CallSite site, T0 arg0, T1 arg1)
at UntypedLambda.Program.<Main>b__46(Object x)
at CallSite.Target(Closure , CallSite , Object , Int32 )
at System.Dynamic.UpdateDelegates.UpdateAndExecuteVoid2[T0,T1](CallSite site, T0 arg0, T1 arg1)
at UntypedLambda.Program.Main()

How this works internally is beyond the scope of this post (I’m sure I’ll blog about the DLR machinery and the role of the C# and VB compilers in that mix some time in the relatively near future) but the DLR was right to conclude the call above is nonsense.

What about the following?

```var I = new Func<dynamic, dynamic>(x => x);
dynamic W = new Func<dynamic, dynamic>(x => x(x));
Console.WriteLine(W(I)(42));```

Now, that works. Passing I to W results in I(I), which reduces (using application, or beta-conversion) into I. That function is then used with argument 42 subsequently, returning 42.
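Since Python is uni-typed out of the box, the same experiment can be replayed there almost verbatim (a sketch mirroring the C# fragments above):

```python
W = lambda x: x(x)
I = lambda x: x

print(W(I)(42))  # W(I) reduces to I(I) = I, which applied to 42 gives 42

# Calling W on a non-function fails at run time, like the C# dynamic version:
try:
    W(1)
except TypeError as e:
    print("runtime error:", e)
```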

You can guess it … we’re going to uni-type our whole programs using dynamic in this Crazy Sundays post. Did I ever mention Scheme? If not, I did now :-).

# SKI combinators

As we’ve already played with S, K and I in the samples above, let’s see how those look in uni-typed C#:

```//
// Where else to start than with ... SKI combinators.
//
var S = new Func<dynamic, Func<dynamic, Func<dynamic, dynamic>>>(x => y => z => x(z)(y(z)));
var K = new Func<dynamic, Func<dynamic, dynamic>>(x => y => x);
var I = new Func<dynamic, dynamic>(x => x);```

Notice we’re currying all functions so that we can partially apply functions. For example, we can do the following to create a “constant function” that always returns 42:

```var answer = K(42);

Console.WriteLine(answer(1));
Console.WriteLine(answer("Hello"));
Console.WriteLine(answer(S));```

All of the calls above will print 42, regardless of what’s passed in to answer. Follow the reduction in your head: applying 42 to K returns a function from a parameter called “y” to 42. In other words, “y” is discarded altogether. We’re generating the constant 42, no matter what (both in terms of value and type) we throw at the function as an argument. Not just plain values: in the last line we throw S, another function, at the answer function and yet it persists in telling us the answer is 42.

It can be shown, as an easy exercise, that SKK (and SKS) are the same as I, so the following is a complicated way of writing 5:

`int five = S(K)(K)(5);`

See how functions are first-class citizens: they can be passed and returned everywhere. Also notice how functions have to be called one argument at a time because of currying (due to Schönfinkel, not Curry).
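As a cross-check, the three combinators collapse to one-liners in Python, where the currying is equally visible (a sketch, not part of the C# sample):

```python
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = lambda x: x

answer = K(42)
print(answer("anything you like"))  # 42, no matter the argument
print(S(K)(K)(5))                   # 5: SKK behaves exactly like I
```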

While we’re at it, we’ll write a helper function to apply a function to different argument values and print the result to the screen. In subsequent paragraphs we’ll use this function (and a variant thereof) to aid us in printing test results:

```static void Unary<T, PArg, PRes>(string name, Func<dynamic, T> f, Func<dynamic, PArg> printArg, Func<dynamic, PRes> printRes, params dynamic[] values)
{
    foreach (var value in values)
        Console.WriteLine("{0}({1}) = {2}", name, printArg(value), printRes(f(value)));
}```

Quite a signature, but essentially we take a friendly function name, a function to be tested, some function to turn the “dynamic” argument into a print-friendly form (and a similar function for the result of the function call), and an array of values to be fed in. For example:

```Console.WriteLine("SKI");
Unary("SKK", S(K)(K), I, I, 5, DateTime.Now, "Bart");
Unary("I", I, I, I, 5, DateTime.Now, "Bart");
Console.WriteLine();```

This results in the following:

As an exercise, try to do something more meaningful with the S combinator.

# Church Booleans

Functions and data are very closely related. More closely than you may think. Most programmers think of functions as pieces of code that have a certain behavior and get applied to zero or more arguments, producing some value (or none). That’s a very code-centric view of the world, while functions in the mathematical sense are obviously not related to “code” at all. Functions are often defined as graphs and are a special kind of relation between two sets. Sets are all about data, aren’t they? Based on that observation, one can implement functions in a computer program as table lookups, i.e. as data.

But there’s more: even values themselves (and not just mappings between them) can be encoded using functions. Let’s start easy, with Booleans. Easy, because there are only two values to distinguish. Here’s the proposed mapping:

```//
// Church Booleans.
// Basic idea: true and false are dyadic functions returning respectively
//             the first or second argument, acting as a conditional (?:)
//
var F = new Func<dynamic, Func<dynamic, dynamic>>(a => b => b);
var T = new Func<dynamic, Func<dynamic, dynamic>>(a => b => a);```

Okay, the comment reveals it already: the idea is that true and false are encoded as functions with two arguments (again curried) of which one is returned: false returns the second one, true the first one. Sounds familiar, doesn’t it? Right, the conditional operator (sometimes awkwardly referred to as the ternary operator) in C# does exactly that. Notice that using alpha-conversion, T is exactly the same as K.

Notice it’s easy to convert between the untyped world and the typed world using two back-and-forth conversion functions:

```Func<dynamic, bool> toBool = x => x(true)(false);
Func<bool, dynamic> fromBool = b => b ? T : F;```

We wouldn’t need toBool if we didn’t want to print the results to the screen somehow :-). For completeness, I’ve added fromBool to the equation. Both are easy to understand, but let’s start with fromBool. Plain easy: give it true, and it returns T; give it false, and you get F back. The inverse function, toBool, is simple function application (beta conversion) of the defined functions to C# Booleans: if the function bound to x is T, the first argument will be returned (true). If it’s F, the second one (false) will. Clear as crystal. Notice toBool can be applied to any object as its argument: when going from untyped to typed, the argument gets blamed if something goes wrong (the notion of blame is due to Philip Wadler).
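The same encoding and conversions, sketched in Python for those who want to poke at it interactively (to_bool and from_bool are just renamed counterparts of the C# functions):

```python
T = lambda a: lambda b: a   # Church true: select the first argument
F = lambda a: lambda b: b   # Church false: select the second argument

to_bool = lambda x: x(True)(False)
from_bool = lambda b: T if b else F

print(to_bool(T), to_bool(F))    # True False
print(to_bool(from_bool(True)))  # True
```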

Not very useful yet, if we don’t have a way to define operations between such Booleans. Again functions come to the rescue (we don’t have anything else after all), so let’s have a look at how we define a simple operator: not.

`var not = new Func<dynamic, Func<dynamic, Func<dynamic, dynamic>>>(m => a => b => m(b)(a));`

Notice all operators on Church Booleans will be higher-order functions in their very nature, since their arguments are functions already. That’s the nature of the beast we’re dealing with. So, how can we turn a Church Boolean “m” in its opposite? We already know that Church Booleans are dyadic functions, so the result of calling not with a Church Boolean should be a function with two arguments: that’s what “a” and “b” are for. The body of the function may look a bit weird: we’re calling “m” (a Church Boolean, but remember everything is a function hence executable) with arguments b and a. That has a flipping effect as proven below:

Impressive, isn’t it? A word on notation: before doing a beta-reduction we indicate the abstractions we’re going to apply substitutions for by putting a bar on top of them (e.g. in the second step m, in the fourth step x and y). When doing multiple reductions we write an arrow with a double arrowhead. When substituting terms for their definition (like T and F in the proof above), we carry out alpha-conversion to avoid the risk of name clashes (though strictly speaking for closed terms we could play a more risky game).

How can we do binary operators, like and, or and xor? Turns out those are fairly simple to do as well:

```var and = new Func<dynamic, Func<dynamic, dynamic>>(m => n => m(n)(m));
var or = new Func<dynamic, Func<dynamic, dynamic>>(m => n => m(m)(n));
var xor = new Func<dynamic, Func<dynamic, Func<dynamic, Func<dynamic, dynamic>>>>(m => n => a => b => m(n(b)(a))(n(a)(b)));```

Agreed, one needs to see a proof of correctness before being convinced those functions have the desired behavior. By the way, outside C# we could save ourselves some parentheses, which mainly come from the (dynamic) delegate invocations above. Also notice that the functions for and and or don’t have abstractions in their “body”: the result of calling “mnm” or “mmn” already yields a function of the right arity, as can be seen in the proofs below:

The proof for xor is left as an exercise to the reader (feel free to use case analysis of all four combinations of arguments, although you can simplify matters a bit…).
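Short of a formal proof, we can at least brute-force the truth tables. Here’s a Python sketch that checks every operator against its built-in counterpart (the trailing underscores merely dodge Python’s keywords):

```python
T = lambda a: lambda b: a
F = lambda a: lambda b: b
to_bool = lambda x: x(True)(False)

not_ = lambda m: lambda a: lambda b: m(b)(a)
and_ = lambda m: lambda n: m(n)(m)
or_ = lambda m: lambda n: m(m)(n)
xor_ = lambda m: lambda n: lambda a: lambda b: m(n(b)(a))(n(a)(b))

# Compare every Church result against the built-in Boolean operators.
for m in (F, T):
    for n in (F, T):
        assert to_bool(and_(m)(n)) == (to_bool(m) and to_bool(n))
        assert to_bool(or_(m)(n)) == (to_bool(m) or to_bool(n))
        assert to_bool(xor_(m)(n)) == (to_bool(m) != to_bool(n))
    assert to_bool(not_(m)) == (not to_bool(m))
print("all truth tables check out")
```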

Of course you want to see this in action. We already have our Unary function to test lambda functions with given arguments, let’s define a similar one for binary operators:

```static void Binary<T, PArg, PRes>(string name, Func<dynamic, T> f, Func<dynamic, PArg> printArg, Func<dynamic, PRes> printRes, params dynamic[] values)
{
    foreach (var valueL in values)
        foreach (var valueR in values)
            Console.WriteLine("{0}({1}, {2}) = {3}", name, printArg(valueL), printArg(valueR), printRes(f(valueL)(valueR)));
}```

It basically creates all combinations of input arguments and applies them to the function, putting some pretty printing to the mix. So we write:

```Console.WriteLine("Church Booleans");
Unary("toBool", toBool, toBool, I, F, T);
Unary("not", not, toBool, toBool, F, T);
Binary("and", and, toBool, toBool, F, T);
Binary("or", or, toBool, toBool, F, T);
Binary("xor", xor, toBool, toBool, F, T);
Console.WriteLine();```

The use of toBool allows us to go from a “dynamic” function to a concrete C# Boolean value which is printable by Console.WriteLine. The result looks like this:

Looks right, doesn’t it?

# Church numerals

If we can do Booleans, we can do numeric values too, right? Let’s restrict ourselves to the natural numbers (including 0), and we end up with the concept of Church numerals. Now that we have an infinite domain of values to represent, we need a way to control this complexity somehow (in order to be able to define nice operators over them). One possible solution is the use of repeated function application to encode a numeric value, as follows:

Can you see it? Calling f on argument x once corresponds to 1, twice corresponds to 2, etc. Quite nice. What kind of simple operations can we define over such value representation? Clearly we don’t want to define all different N-objects for the whole domain of natural numbers. In fact, we don’t even want to define N1 explicitly. Two basic ingredients should suffice to define every natural number: a representation of 0, and a way to “add one” to a Church numeral (i.e. returning a new function that represents the value of the argument, plus one). This is the much desired successor function:

```var N0 = new Func<dynamic, Func<dynamic, dynamic>>(f => x => x);
var succ = new Func<dynamic, Func<dynamic, Func<dynamic, dynamic>>>(n => f => x => f(n(f)(x)));```

The successor function is a function with one argument “n”, returning a function of two arguments “f” and “x” (like all Church numerals, it’s a double abstraction over some term). The definition is “f(nfx)”, with some more parentheses in C#. But how does it work? See it with your own eyes and feed it N0 for starters: “N0” is substituted (beta conversion) for “n”, and N0 itself is a lambda expression of the form shown in the figure above (going from f and x to x). A few beta-conversions later you’ll end up with f(x), exactly the definition of N1. In other words, succ simply adds one more application of “f” to the existing term’s body. A more formal proof is given below:

Based on this, can we create a simple conversion function to promote a C# positive integer value into the corresponding Church numeral? The answer is obviously yes, and the idea is fairly easy: we’ll call succ repeatedly, n times for an integer with value n, starting from N0 as the base case. To go back from a Church numeral to a C# unsigned integer, we exploit the repeated function call behavior on argument “f” and use a devilish side-effect to increment a counter whenever we get called:

```Func<dynamic, uint> toInt = x =>
{
    uint n = 0;
    x(new Action<dynamic>(_ => { n++; }))(null);
    return n;
};

Func<uint, dynamic> fromInt = null;
fromInt = n =>
{
    return (n == 0 ? N0 : succ(fromInt(n - 1)));
};```

If you don’t get this immediately, don’t worry. Just feed it some Church numerals and see what happens. For example, N0 is a function that simply returns its second argument. When calling toInt on N0, x is bound to N0, which gets executed with two arguments, of which the second one – null – is returned. The Action<dynamic> in the first argument is never called, so the result is 0. But if N1 is fed in, how many times will that first-argument function get called? Right, just once: the side-effecting n++ runs one time, and the result is 1. In other words, we should be able to “roundtrip” C# integers through Church numerals:

```for (uint i = 0; i <= 10; i++)
Console.WriteLine(toInt(fromInt(i)));```

Try it at home and you should see the numbers 0 through 10 being printed on the screen. Victory. Oh, and fromInt is simply a recursive function that makes successor calls till n == 0 is reached, at which point N0 is returned. So, fromInt(5) will be encoded as succ(succ(succ(succ(succ(N0))))).
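Here’s the same roundtrip sketched in Python. One deliberate deviation: instead of the devilish side-effect, this to_int hands the numeral an increment function and a 0 seed, which is the side-effect-free way to count the applications of “f”:

```python
N0 = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# Count applications of f without side effects: increment starting from 0.
to_int = lambda n: n(lambda k: k + 1)(0)

def from_int(i):
    return N0 if i == 0 else succ(from_int(i - 1))

print([to_int(from_int(i)) for i in range(11)])  # 0 through 10
```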

All of this looks good, doesn’t it? Another useful function is a zero-test. In the typed world this would be a function from an integer to a Boolean, but of course in the world of type-free lambda calculus we lose that typing safety net. Nevertheless, here’s what zero looks like:

`var zero = new Func<dynamic, dynamic>(n => n(K(F))(T));`

Recall that N0 is the same as F, and we know that F returns its second argument as its result. A zero-check on N0 clearly should return T, so the zero function should look like n (???) T. This satisfies the case where N0 is fed in. What about the other cases? Well, the first argument to n (the Church numeral passed in) is the function that gets called repeatedly on the second argument (which we already made T). For all those numerals we want the zero-check to return F, so the question is whether we know a function that will always return F no matter what argument it’s given. We do: it’s K with argument F, the constant-generator combinator which we kindly ask to stubbornly return F all the time. A formal proof is shown below:
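Proof aside, the zero-check is easy to exercise in Python (a sketch reusing the encodings from before):

```python
N0 = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
T = lambda a: lambda b: a
F = lambda a: lambda b: b
K = lambda p: lambda q: p
to_bool = lambda x: x(True)(False)

# n(K(F))(T): N0 ignores K(F) and returns T; any other numeral applies
# K(F) at least once, which stubbornly yields F.
zero = lambda n: n(K(F))(T)

print(to_bool(zero(N0)))        # True
print(to_bool(zero(succ(N0))))  # False
```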

Finally, the real stuff. What about operations like add and multiply, or even exponential? The good thing is they exist indeed :-). And here they are:

```var plus = new Func<dynamic, Func<dynamic, Func<dynamic, Func<dynamic, dynamic>>>>(m => n => f => x => m(f)(n(f)(x)));
var mul = new Func<dynamic, Func<dynamic, Func<dynamic, dynamic>>>(m => n => f => n(m(f)));
var exp = new Func<dynamic, Func<dynamic, dynamic>>(m => n => n(m));```

How they work is another piece of cake. Here’s a proof for plus’s behavior:

That was fairly easy, wasn’t it? Feel free to feed it a couple of values to confirm the behavior, but we’ll run it through some tests in just a while. For mul and exp, proofs need some inductive reasoning as shown below:

Notice how beautiful the definition of exp is: apply n with argument m, and you’ve got an exponential. Wow. If I drank alcohol, this would be a reason to get drunk. I’ll stick with Diet Coke anyhow, but suffice it to say I absolutely love it. And for that matter, mul is nice too, don’t you agree?
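To see plus, mul and exp earn their names, here’s a Python sketch running them over small numerals:

```python
N0 = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
to_int = lambda n: n(lambda k: k + 1)(0)

def from_int(i):
    return N0 if i == 0 else succ(from_int(i - 1))

plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul = lambda m: lambda n: lambda f: n(m(f))
exp = lambda m: lambda n: n(m)

two, three = from_int(2), from_int(3)
print(to_int(plus(two)(three)))  # 5
print(to_int(mul(two)(three)))   # 6
print(to_int(exp(two)(three)))   # 8, i.e. 2 ** 3
```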

What about a predecessor and subtract function? Unfortunately those are quite ugly in this Church numeral encoding system:

`var pred = new Func<dynamic, Func<dynamic, Func<dynamic, dynamic>>>(n => f => x => n(new Func<dynamic, Func<dynamic, dynamic>>(a => b => b(a(f))))(K(x))(I));`

Bweik. It’s so disgusting I’ll leave the definition of a “sub” function (tip: use pred) to the reader. Oh, and if you’re really fearless, prove the correctness of “pred” using induction.
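If you’d rather test than prove, here’s pred transliterated to Python; it does produce the predecessor (and maps N0 to N0, since this encoding has no negative numbers):

```python
N0 = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
K = lambda p: lambda q: p
I = lambda x: x
to_int = lambda n: n(lambda k: k + 1)(0)

def from_int(i):
    return N0 if i == 0 else succ(from_int(i - 1))

# Direct transliteration of the C# pred:
pred = lambda n: lambda f: lambda x: n(lambda a: lambda b: b(a(f)))(K(x))(I)

print([to_int(pred(from_int(i))) for i in range(1, 6)])  # [0, 1, 2, 3, 4]
```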

Anyway, here’s a test-application for Church numerals:

```Console.WriteLine("Church numerals");
Unary("toInt", toInt, toInt, I, Enumerable.Range(0, 3).Select(i => fromInt((uint)i)).ToArray());
Unary("succ", succ, toInt, toInt, Enumerable.Range(0, 3).Select(i => fromInt((uint)i)).ToArray());
Unary("pred", pred, toInt, toInt, Enumerable.Range(1, 3).Select(i => fromInt((uint)i)).ToArray());
Unary("zero", zero, toInt, toBool, Enumerable.Range(0, 2).Select(i => fromInt((uint)i)).ToArray());
Binary("plus", plus, toInt, toInt, Enumerable.Range(0, 3).Select(i => fromInt((uint)i)).ToArray());
Binary("mul", mul, toInt, toInt, Enumerable.Range(0, 3).Select(i => fromInt((uint)i)).ToArray());
Binary("exp", exp, toInt, toInt, Enumerable.Range(1, 3).Select(i => fromInt((uint)i)).ToArray());
Console.WriteLine();```

We use our beloved fromInt function in combination with some LINQ operators to produce an array of test inputs, for which all combinations will be fed to the passed-in functions:

Wonderful. (Don’t bother to ask about performance…)

# Going nuts – recursive functions

We could go much further and define pairs, tuples and lists using Church encodings as well, but as I have another rabbit in my hat for another time, I won’t spill the beans at this point. A little sneak peek though (this is without C# 4.0 dynamic…), for pairs:

```var Pair = λ(x => λ(y => λ(z => ß(ß(z, x), y)))); // \xyz.zxy
var Fst = λ(p => ß(p, T)); // \p.pT
var Snd = λ(p => ß(p, F)); // \p.pF```

and for lists:

```var Nil = ß(ß(Pair, T), T);
var IsNil = Fst;
var Cons = λ(h => λ(t => ß(ß(Pair, F), ß(ß(Pair, h), t))));
var Head = λ(z => ß(Fst, ß(Snd, z)));
var Tail = λ(z => ß(Snd, ß(Snd, z)));```

Instead, let’s conclude this post with two recursive functions:

```//
// The icing on the cake.
//
Func<Func<Func<dynamic, dynamic>, Func<dynamic, dynamic>>, Func<dynamic, dynamic>> Fix = null;
Fix = f => x => f(Fix(f))(x);

var fact = Fix(new Func<dynamic, Func<dynamic, dynamic>>(f => x => zero(x)(new Func<dynamic, dynamic>(_ => succ(N0)))(new Func<dynamic, dynamic>(_ => mul(x)(f(pred(x)))))(I)));
var fib = Fix(new Func<dynamic, Func<dynamic, dynamic>>(f => x => zero(x)(new Func<dynamic, dynamic>(_ => N0))(new Func<dynamic, dynamic>(_ => zero(pred(x))(new Func<dynamic, dynamic>(__ => succ(N0)))(new Func<dynamic, dynamic>(__ => plus(f(pred(x)))(f(pred(pred(x))))))(I)))(I)));

Unary("fact", fact, toInt, toInt, Enumerable.Range(0, 10).Select(i => fromInt((uint)i)).ToArray());
Unary("fib", fib, toInt, toInt, Enumerable.Range(0, 10).Select(i => fromInt((uint)i)).ToArray());```

This gets quite ugly due to the use of a fixpoint combinator and the need to circumvent call-by-value semantics (exercise: where and how does that happen precisely in the fragments above?), which would otherwise cause endless evaluation (exhausting the stack). But rest assured, those functions work just fine:
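Stripped of the Church-numeral noise, the fixpoint trick itself is easy to verify in Python; note this Fix cheats the same way the C# one does, referring to itself by name and eta-expanding to survive call-by-value (a sketch over plain integers, not Church numerals):

```python
# Eta-expansion ("lambda x:") delays the recursive Fix(f) just enough to
# survive call-by-value; without it, Fix(f) would recurse forever.
Fix = lambda f: lambda x: f(Fix(f))(x)

fact = Fix(lambda f: lambda n: 1 if n == 0 else n * f(n - 1))
fib = Fix(lambda f: lambda n: n if n < 2 else f(n - 1) + f(n - 2))

print(fact(5))                     # 120
print([fib(i) for i in range(7)])  # [0, 1, 1, 2, 3, 5, 8]
```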

Finally, an outtake :-). What does the following result in?

```var Omega = new Func<dynamic, dynamic>(x => x(x))(new Func<dynamic, dynamic>(x => x(x)));
Omega(I);```
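For comparison, here’s what the corresponding term does in Python: under call-by-value, the reduction of Omega itself already never terminates, so the stack gives out before I is ever consulted.

```python
# Omega = (λx.x x)(λx.x x): self-application of self-application.
try:
    Omega = (lambda x: x(x))(lambda x: x(x))
    Omega(lambda x: x)  # never reached
except RecursionError:
    print("Omega has no normal form: reduction never terminates")
```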


# Introduction

In my last post, Expression Trees, Take Two – Introducing System.Linq.Expressions v4.0, I showed how the extensions to the LINQ expression trees API open the door to full-blown statement trees, including support for assignment, control flow, etc. One popular question that came up in the comments section concerns the lack of language-level support for statement trees, like this:

```Expression<Func<int, List<int>>> getPrimes = to =>
{
    var res = new List<int>();
    for (int n = 2; n <= to; n++)
    {
        bool found = false;

        for (int d = 2; d <= Math.Sqrt(n); d++)
        {
            if (n % d == 0)
            {
                found = true;
                break;
            }
        }

        if (!found)
            res.Add(n);
    }
    return res;
};```

At this point, the above won’t work and triggers the following compile error:

A lambda expression with a statement body cannot be converted to an expression tree

This hasn’t changed from the previous release. Though it would be a nice feature to have, there are various reasons not to integrate it with the language at this point. My replies in the comments section of my previous post elaborate on some of the conservative constraints employed during language design, applied to this particular case. So, sorry: not at this point.

The result is that the above turns into quite an involved statement tree declaration if done by hand. Let’s repeat it here to set the scene:

```var to = Expression.Parameter(typeof(int), "to");
var res = Expression.Variable(typeof(List<int>), "res");
var n = Expression.Variable(typeof(int), "n");
var found = Expression.Variable(typeof(bool), "found");
var d = Expression.Variable(typeof(int), "d");
var breakOuter = Expression.Label();
var breakInner = Expression.Label();
var getPrimes =
// Func<int, List<int>> getPrimes =
Expression.Lambda<Func<int, List<int>>>(
// {
Expression.Block(
// List<int> res;
new [] { res },
// res = new List<int>();
Expression.Assign(
res,
Expression.New(typeof(List<int>))
),
// {
Expression.Block(
// int n;
new [] { n },
// n = 2;
Expression.Assign(
n,
Expression.Constant(2)
),
// while (true)
Expression.Loop(
// {
Expression.Block(
// if
Expression.IfThen(
// (!
Expression.Not(
// (n <= to)
Expression.LessThanOrEqual(
n,
to
)
// )
),
// break;
Expression.Break(breakOuter)
),
// {
Expression.Block(
// bool found;
new[] { found },
// found = false;
Expression.Assign(
found,
Expression.Constant(false)
),
// {
Expression.Block(
// int d;
new [] { d },
// d = 2;
Expression.Assign(
d,
Expression.Constant(2)
),
// while (true)
Expression.Loop(
// {
Expression.Block(
// if
Expression.IfThen(
// (!
Expression.Not(
// d <= Math.Sqrt(n)
Expression.LessThanOrEqual(
d,
Expression.Convert(
Expression.Call(
null,
typeof(Math).GetMethod("Sqrt"),
Expression.Convert(
n,
typeof(double)
)
),
typeof(int)
)
)
// )
),
// break;
Expression.Break(breakInner)
),
// {
Expression.Block(
// if (n % d == 0)
Expression.IfThen(
Expression.Equal(
Expression.Modulo(
n,
d
),
Expression.Constant(0)
),
// {
Expression.Block(
// found = true;
Expression.Assign(
found,
Expression.Constant(true)
),
// break;
Expression.Break(breakInner)
// }
)
)
// }
),
// d++;
Expression.PostIncrementAssign(d)
// }
),
breakInner
)
),
// if
Expression.IfThen(
// (!found)
Expression.Not(found),
// res.Add(n);
Expression.Call(
res,
typeof(List<int>).GetMethod("Add"),
n
)
)
),
// n++;
Expression.PostIncrementAssign(n)
// }
),
breakOuter
)
),
res
),
to
// }
).Compile();```

I’ve formatted a bunch of nodes in the expression tree above to indicate the “tiles” that are based on the v3.0 subset of the Expression Tree APIs, which do have language support. What if we could simplify the declaration above by leveraging the language support at places where it’s worthwhile? A tour through simplifying statement trees…

# Abstraction + application = ???

Without much further ado, let’s dive straight into our pool of expression tree tricks and reveal the goods for this round: abstraction and application. Guess what, we’re back to fundamental lambda calculus. An explanation is in order:

• Abstraction is the act of abstracting over an expression, i.e. introducing a parameterized function. Given an expression, turning a symbol therein into a parameter is what it takes to abstract over the expression:

• Application is also known as beta-reduction and corresponds to the act of applying a function given one or more arguments (strictly speaking one at a time):

Given those fundamental laws in lambda calculus, we can create a very round-about way of expressing 1 + 2. Just reverse the arrows in the last diagram: 2 becomes y, 1 becomes x, both “abstracting away” the constants one at a time, resulting in a function that’s readily applied to the two constants.
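In ordinary code, that round-about 1 + 2 looks as follows (a Python sketch of the abstraction-then-application detour, before we rebuild it with expression trees):

```python
# ((λx.λy.x + y) 1) 2: abstract the constants away, then apply them back.
add = lambda x: lambda y: x + y
print(add(1)(2))  # 3, the long way around
```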

This basic insight is what will help us to simplify the statement tree declaration, allowing us to “tile in” expression trees in the bigger soup of statement tree nodes. But before we go there, let’s show the technique above in practice. Say we want to over-complicate the act of adding 1 and 2 together and do so through a deviation along the road of abstraction and application:

```var x = Expression.Parameter(typeof(int), "x");
var y = Expression.Parameter(typeof(int), "y");
var f = Expression.Lambda<Func<int, int, int>>(
Expression.Add(x, y),
x, y
);
var three = Expression.Invoke(
f,
Expression.Constant(1),
Expression.Constant(2)
);
var res = Expression.Lambda<Func<int>>(three).Compile();

Console.WriteLine(res());```

What’s happening here is not as difficult as it may seem at first glance. The “f” lambda expression is simply a function that goes from x and y to x + y (in C# lingo, (x, y) => x + y). Expression.Invoke is used to invoke a delegate (lambdas ultimately are delegates), this time supplying the arguments. So, “three” is shorthand for ((x, y) => x + y)(1, 2), which is longhand for 1 + 2. Finally, the “res” lambda expression is a simple function returning an int, being the result of the invocation of the sum abstraction.

Looking at the generated IL code for the “res” delegate (using the tricks revealed in the previous post), we see the following:

IL_0000: ldc.i4.1
IL_0001: ldc.i4.2
IL_0002: stloc.1
IL_0003: stloc.0
IL_0004: ldloc.0
IL_0005: ldloc.1
IL_0006: add
IL_0007: ret

Did you see it? You did? Great! Exactly, not a single trace of a delegate call in here. The combination of application (InvocationExpression) over an abstraction (LambdaExpression) got optimized away. Instead of pushing constants 1 and 2 on the stack in preparation of a delegate invocation call instruction, they get stored in locals, followed by a dump of the invoked function’s IL where the expected ldarg instructions are replaced by ldloc instructions. All in all, it’s still as if we’d written:

```var res = Expression.Lambda<Func<int>>(
Expression.Add(
Expression.Constant(1),
Expression.Constant(2)
)
).Compile();```

which translates into slightly simpler IL code:

IL_0000: ldc.i4.1
IL_0001: ldc.i4.2
IL_0002: add
IL_0003: ret

Eliminating the excessive stloc/ldloc pairs in the original fragment is something the JIT compiler can take care of if it feels like doing so, but the core message here is that this trick of “abstract & apply” is cheaper than it looks.

Why am I telling you all of this? In fact, the trick outlined above is what we’ll use to tile in language-generated expression trees in a bigger sea of hand-crafted statement trees. For the sample above, what if we made the function definition through abstraction happen using the language-level support, while keeping the invocation call in plain hand-crafted expression trees:

```var three = Expression.Invoke(
(Expression<Func<int, int, int>>)((x, y) => x + y),
Expression.Constant(1),
Expression.Constant(2)
);
var res = Expression.Lambda<Func<int>>(three).Compile();```

See it coming? Think about it for a while: we’re combining a compiler-generated expression tree with a manually created bigger one in which we embed the smaller one.

# Theory applied

To keep things a bit more controlled, focus on one part of the original statement tree:

```Expression.IfThen(
// (!
Expression.Not(
// d <= Math.Sqrt(n)
Expression.LessThanOrEqual(
d,
Expression.Convert(
Expression.Call(
null,
typeof(Math).GetMethod("Sqrt"),
Expression.Convert(
n,
typeof(double)
)
),
typeof(int)
)
)
// )
),
// break;
Expression.Break(breakInner)
),```

This implements the terminating condition of the inner loop, shown here in its entirety:

```for (int d = 2; d <= Math.Sqrt(n); d++)
{
if (n % d == 0)
{
found = true;
break;
}
}```

As we all know, that’s a plain old valued expression that could be turned into a function as follows:

```Func<int, int, bool> cond = (x, y) => x <= Math.Sqrt(y);
for (int d = 2; cond(d, n); d++)
{
if (n % d == 0)
{
found = true;
break;
}
}```

Now we’re getting somewhere. The code above is exactly the same as the original one, but we’ve abstracted over the condition by turning it into a function of its own, which we subsequently apply given d and n. For regular code, that doesn’t make much sense, but it’s exactly the trick we’re talking about here and that will make the expression tree case simpler. What if we translated the code above as follows?

```Expression<Func<int, int, bool>> cond = (x, y) => x <= Math.Sqrt(y);
for (int d = 2; cond.Compile()(d, n); d++)
{
if (n % d == 0)
{
found = true;
break;
}
}```

Heh, there’s our expression tree for the condition. As we’re still not generating the whole for-loop as a statement tree, we have to call Compile explicitly on the intermediate lambda expression to invoke it in the condition part of the for-loop. Now we can take it one step further and go back to the expression tree for the whole for-loop, patching it up with our intermediate “cond” expression tree … using application. At the same time, we can eat the “Not” node. Here’s the fragment we’re about to rewrite:

```Expression.IfThen(
// (!
Expression.Not(
// d <= Math.Sqrt(n)
Expression.LessThanOrEqual(
d,
Expression.Convert(
Expression.Call(
null,
typeof(Math).GetMethod("Sqrt"),
Expression.Convert(
n,
typeof(double)
)
),
typeof(int)
)
)
// )
),
// break;
Expression.Break(breakInner)
),```

In essence, the first argument to IfThen (i.e. the if-statement’s condition expression) contains two variables from the outer scope: d and n. We’ll abstract over those, introducing an intermediate lambda expression, and invoke (apply) that one using d and n. Code is worth a thousand words:

```Expression.IfThen(
// (!(d <= Math.Sqrt(n)))
Expression.Invoke(
(Expression<Func<int, int, bool>>)((x, y) => !(x <= Math.Sqrt(y))),
d,
n
),
// break;
Expression.Break(breakInner)
),```

Wow, that got a bit simpler, didn’t it? The cast lambda expression corresponds to the language-generated expression tree. All we have to do is wrap that thing in an InvocationExpression, passing in d and n as arguments (bound to x and y, respectively).

Similarly, we can simplify other parts of the statement tree. For example:

```// if (n % d == 0)
Expression.IfThen(
Expression.Equal(
Expression.Modulo(
n,
d
),
Expression.Constant(0)
),
// {
Expression.Block(
// found = true;
Expression.Assign(
found,
Expression.Constant(true)
),
// break;
Expression.Break(breakInner)
// }
)
)```

It doesn’t make sense to try to abstract Expression.Constant calls (it would only blow up the code size and make things more cumbersome), but the n % d == 0 part is something we could ask the compiler to generate for us:

```// if (n % d == 0)
Expression.IfThen(
Expression.Invoke(
(Expression<Func<int, int, bool>>)((x, y) => x % y == 0),
n,
d
),
// {
Expression.Block(
// found = true;
Expression.Assign(
found,
Expression.Constant(true)
),
// break;
Expression.Break(breakInner)
// }
)
)```

It’s a little unfortunate that lambda expressions need to have a type forced upon them to be emitted either as a delegate (Func<…> types) or as an expression tree (Expression<Func<…>> types), requiring us to use an explicit cast to make the compiler generate an expression tree. This adds some noise, but still we’re saving quite a bit of code. Also for method invocations, we can get rid of the ugly reflection code required to find the right overload, e.g.:

```// if
Expression.IfThen(
// (!found)
Expression.Not(found),
Expression.Call(
res,
typeof(List<int>).GetMethod("Add"),
n
)
)```

Finding the Add method is simple in this case, but if you end up creating a MethodCallExpression with a bunch of arguments to a method with lots of overloads available, the code gets quite complicated. In the sample above, there’s little to gain, but just for the sake of illustration:

```// if
Expression.IfThen(
// (!found)
Expression.Not(found),
Expression.Invoke(
(Expression<Action<List<int>, int>>)((l, e) => l.Add(e)),
res,
n
)
)```
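If the cast syntax bothers you, one trick — a sketch using a hypothetical helper class, not something from the original code — is a static method whose parameter type forces the expression tree conversion, so call sites can drop the cast:

```using System;
using System.Linq.Expressions;

static class Expr
{
    // The Expression<...> parameter type makes the compiler emit an
    // expression tree for the lambda passed in; the method just hands it back.
    public static Expression<Func<T1, T2, TResult>> Of<T1, T2, TResult>(
        Expression<Func<T1, T2, TResult>> e)
    {
        return e;
    }
}```

With that in place, `(Expression<Func<int, int, bool>>)((x, y) => x % y == 0)` could be written as `Expr.Of<int, int, bool>((x, y) => x % y == 0)` — the type arguments are still needed (they can’t be inferred from the lambda alone), but the cast syntax disappears.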

Moreover, when typing the lambda expression above you’ll get full-blown (statically typed) IntelliSense.

In fact, this trick is a runtime-aided trick to implement an “infoof” operator (info-of) which I use from time to time. For example, if you want to get the MethodInfo for Console.WriteLine(“{0} is {1}”, “Bart”, 26) that can get quite involved using reflection flags (public, static), a method name in a string (“WriteLine”), a type array for the arguments (beware of the “params” behavior above), etc. Instead, you could do this:

```static MethodInfo InfoOf(Expression<Action> ex)
{
var mce = ex.Body as MethodCallExpression;
if (mce != null)
{
return mce.Method;
}
else
{
throw new InvalidOperationException("InfoOf called on expression without any kind of member access.");
}
}

static MemberInfo InfoOf<T>(Expression<Func<T>> ex)
{
var me = ex.Body as MemberExpression;
if (me != null)
{
return me.Member;
}
else
{
throw new InvalidOperationException("InfoOf called on expression without any kind of member access.");
}
}```

which we can call as follows:

```var cw = InfoOf(() => Console.WriteLine("{0} is {1}", "Bart", 26));
var now = InfoOf(() => DateTime.Now);```
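The same pattern extends to other member kinds. For instance, a hypothetical CtorOf helper (not part of the original code) can fish a ConstructorInfo out of a NewExpression:

```using System;
using System.Collections.Generic;
using System.Linq.Expressions;
using System.Reflection;

static class Info
{
    // Extracts the ConstructorInfo from a "() => new T(...)" expression.
    public static ConstructorInfo CtorOf<T>(Expression<Func<T>> ex)
    {
        var ne = ex.Body as NewExpression;
        if (ne != null)
        {
            return ne.Constructor;
        }
        throw new InvalidOperationException("CtorOf called on expression without a constructor call.");
    }
}```

Usage would look like `var ctor = Info.CtorOf(() => new List<int>(16));`, resolving the List<int>(int capacity) overload without any reflection flag juggling.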

So, all in all, the lack of statement tree support in the language is a pity, but by leveraging existing expression tree support we can simplify the task at hand in some cases. The result for our sample is as follows:

```var to = Expression.Parameter(typeof(int), "to");
var res = Expression.Variable(typeof(List<int>), "res");
var n = Expression.Variable(typeof(int), "n");
var found = Expression.Variable(typeof(bool), "found");
var d = Expression.Variable(typeof(int), "d");
var breakOuter = Expression.Label();
var breakInner = Expression.Label();
var getPrimes =
// Func<int, List<int>> getPrimes =
Expression.Lambda<Func<int, List<int>>>(
// {
Expression.Block(
// List<int> res;
new [] { res },
// res = new List<int>();
Expression.Assign(
res,
Expression.New(typeof(List<int>))
),
// {
Expression.Block(
// int n;
new [] { n },
// n = 2;
Expression.Assign(
n,
Expression.Constant(2)
),
// while (true)
Expression.Loop(
// {
Expression.Block(
// if
Expression.IfThen(
// (!
Expression.Not(
// (n <= to)
Expression.LessThanOrEqual(
n,
to
)
// )
),
// break;
Expression.Break(breakOuter)
),
// {
Expression.Block(
// bool found;
new[] { found },
// found = false;
Expression.Assign(
found,
Expression.Constant(false)
),
// {
Expression.Block(
// int d;
new [] { d },
// d = 2;
Expression.Assign(
d,
Expression.Constant(2)
),
// while (true)
Expression.Loop(
// {
Expression.Block(
// if
Expression.IfThen(
// (!(d <= Math.Sqrt(n)))
Expression.Invoke(
(Expression<Func<int, int, bool>>)((x, y) => !(x <= Math.Sqrt(y))),
d,
n
),
// break;
Expression.Break(breakInner)
),
// {
Expression.Block(
// if (n % d == 0)
Expression.IfThen(
Expression.Invoke(
(Expression<Func<int, int, bool>>)((x, y) => x % y == 0),
n,
d
),
// {
Expression.Block(
// found = true;
Expression.Assign(
found,
Expression.Constant(true)
),
// break;
Expression.Break(breakInner)
// }
)
)
// }
),
// d++;
Expression.PostIncrementAssign(d)
// }
),
breakInner
)
),
// if
Expression.IfThen(
// (!found)
Expression.Not(found),
Expression.Invoke(
(Expression<Action<List<int>, int>>)((l, e) => l.Add(e)),
res,
n
)
)
),
// n++;
Expression.PostIncrementAssign(n)
// }
),
breakOuter
)
),
res
),
to
// }
).Compile();```

Still quite involved to write, but a bit simpler than our previous attempt. As an exercise, identify a few other potential rewrite sites in the fragment above. Have fun!


# Introduction

Avid blog readers know I have a weak spot for expression trees in particular, and for the broader picture of meta-programming facilities in general. With the introduction of LINQ in the .NET Framework 3.5 timeframe, we ended up adding expression trees to the framework as a way to represent code as data. This was a fundamental requirement to make remote LINQ scenarios such as LINQ to SQL work, as the LINQ expression written by the developer needs to be cross-compiled into the corresponding query language, e.g. SQL. To enable this, two routes could be taken:

1. Decompile IL code for the query at runtime, and turn it into the target language – quite roundabout
2. Enable representing the query’s code as data at runtime by emitting an expression tree at compile-time

Turning the query into the required target language at compile time isn’t a viable option either, since the front-end compilers would have to know about all of the target query domains; clearly not a good idea.

# A quick recap of the 3.5 landscape

Before diving into the .NET 4.0 feature set around enhanced expression trees, let’s have a quick recap of the corresponding baseline features introduced in .NET 3.5. Since LINQ was the driver for this feature after all, it makes sense to take a top-down approach and turn a LINQ query into more primitive building blocks.

Consider the following query expression:

```var res = from p in db.Products
where p.UnitPrice > 100
select p.ProductName;```

Given this, we can’t say much about the query just yet. All we know is that from a language’s point of view, we can “desugar” the above into the below:

```var res = db.Products
.Where(p => p.UnitPrice > 100)
.Select(p => p.ProductName);```

From this point on, regular method resolution rules kick in to find a Where method on db.Products, and to find a Select method on the result of calling Where. The argument to both those methods is a lambda expression and here’s where the flexibility comes from. Lambda expressions by themselves don’t have a type:

`var twice = (int x) => x * 2;`

Though the right-hand side seems to have all the information to infer a type (going from int to an expression of type int, ought to be Func<int, int> right?) the above doesn’t compile. Lambda expressions can be assigned to variables of two distinct types: regular delegates or expression trees thereof. The below illustrates this:

```Func<int, int>             twiceD = x => x * 2;
Expression<Func<int, int>> twiceE = x => x * 2;```

The first one is shorthand for an anonymous method we already supported in C# 2.0:

`Func<int, int> twiceD = delegate(int x) { return x * 2; };`

However, the second one gives rise to quite a bit more generated code; in particular, code is emitted that constructs an expression tree at runtime to represent the code the user wrote as data. Given this, library functions (like Where and Select in case of LINQ) can consume that expression tree to analyze the code the user wrote, extracting the intent and turning it into execution in whatever way is right for the task at hand (e.g. by cross-compiling the expression into a target query language like SQL). In particular, the twiceE lambda expression gets turned into:

```ParameterExpression x = Expression.Parameter(typeof(int), "x");
Expression<Func<int, int>> twiceE = Expression.Lambda<Func<int, int>>(
Expression.Multiply(
x,
Expression.Constant(2)
),
x
);```

That’s exactly what happens in the LINQ case when using an IQueryable<T> data source: the query operator methods like Where and Select don’t take in a plain delegate (like a Func<T, bool> for the filter fed in to the Where operator) but instead they take in an Expression<TDelegate> thereof (like an Expression<Func<T, bool>>). This causes the compiler to emit code generating an expression tree (by means of the Expression factory methods as shown above) for the argument. Ultimately the Queryable class puts together all those expression tree fragments into a big one representing the whole query expression, ready for the query provider to do whatever magic it sees fit. We won’t go there now.
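To make the overload-driven switch concrete, here’s a minimal sketch (using an in-memory array promoted to IQueryable<T> via AsQueryable; class and variable names are mine) contrasting the two Where signatures:

```using System;
using System.Linq;

class WhereDemo
{
    static void Main()
    {
        var xs = new[] { 1, 2, 3, 4 };

        // Enumerable.Where takes a Func<int, bool>:
        // the compiler emits a plain delegate for the lambda.
        var d = xs.Where(x => x > 2);

        // Queryable.Where takes an Expression<Func<int, bool>>:
        // the compiler emits expression-tree-building code instead.
        var q = xs.AsQueryable().Where(x => x > 2);

        // The whole query is now available as data on the Expression property.
        Console.WriteLine(q.Expression);
    }
}```

Same lambda text, two radically different compilations — all driven by the parameter type of the method being resolved.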

To finish up this recap, a single picture that tells it all:

There’s one thing I haven’t mentioned yet: the Compile method on the LambdaExpression type as produced by the Expression.Lambda call. This plain and simple API surface gives access to a gigantic machinery behind the scenes that turns the expression tree into executable IL code at runtime: the tip of the Compiler-as-a-Service (CaaS) iceberg.
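For example, compiling the twiceE tree shown earlier produces a regular delegate backed by runtime-emitted IL (a small self-contained sketch):

```using System;
using System.Linq.Expressions;

class CompileDemo
{
    static void Main()
    {
        ParameterExpression x = Expression.Parameter(typeof(int), "x");
        Expression<Func<int, int>> twiceE = Expression.Lambda<Func<int, int>>(
            Expression.Multiply(x, Expression.Constant(2)),
            x
        );

        // Compile turns the tree into IL at runtime and wraps it in a delegate.
        Func<int, int> twice = twiceE.Compile();
        Console.WriteLine(twice(21)); // 42
    }
}```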

As a little extra, let’s dig a little deeper and show the generated IL code through the Son-of-Strike (SOS) debugger extension. I’m using a Visual Studio 2010 project targeting .NET 3.5 for this demo. If it weren’t for the fact I have a 64-bit machine, I could equally well have used Visual Studio 2008, but mixed mode debugging (native + managed, as required by SOS) isn’t available on 64-bit in the 2008 version. In Visual Studio 2010, this restriction is gone (hooray).

Inspired by the code above, we can write a simple expression tree and have it compile at runtime. The resulting delegate object is based on dynamically emitted code, which we’ll visualize now:

First, we load the SOS extension from the .NET 2.0 installation folder. To do this, you’ll have to enable unmanaged code debugging:

Next, we use !dso to dump stack objects, which will reveal our twice object of type Func<int, int>. Dumping the object using !do reveals the target object, the underlying method, etc. We’ll have a look at the _methodBase next, where we’ll be able to see the dynamic method’s code:

The !dumpil command is all we need to dump the contents of the dynamically generated IL body, revealing the “times two” logic. Feel free to experiment with more complex lambda expressions, revealing their contents as IL after runtime compilation. For example, what lambda expression did I write to arrive at the following?

!dumpil 026b89f0
This is dynamic IL. Exception info is not reported at this time.
If a token is unresolved, run "!do <addr>" on the addr given
in parenthesis. You can also look at the token table yourself, by
running "!DumpArray 026bde24".

IL_0000: ldarg.1
IL_0001: ldarg.2
IL_0002: callvirt 6000002 System.String.Substring(Int32)
IL_0007: ret

So, to wrap up, the .NET 3.5 story of expression trees consisted of two core parts:

• Library support for expression trees, in the System.Linq.Expressions namespace defined in the System.Core.dll assembly (meaning one can manually new up expression trees and obviously inspect their properties).
• Language support for expression trees, triggered by assigning a lambda expression to an Expression<TDelegate> type, available in C# and VB (as invisible glue to make LINQ work in certain scenarios).

# After expressions come … statements

Looking at expression trees in .NET 3.5, one observes they’re only able to represent expressions, i.e. those portions of languages that are valued in their very nature. Things like if and while don’t fall under this umbrella and are statements, and one more step higher up we land at full-blown declarations (like classes and interfaces). It’s clear that if we want to represent more than valued expressions and aim our arrows at full method bodies, we need more than pure expression trees. So .NET 4.0 introduces statement trees as an extension to the existing expression tree API (as a little unfortunate side-effect of this, the Expressions part in the containing namespace becomes a bit of a misnomer).

The following picture illustrates the gaps we have in expression trees that hinder us from representing full method bodies. The reason we’re looking for such support is to be able to use expression trees as one of the driving forces behind the Dynamic Language Runtime (DLR). To communicate code to the runtime, one uses expression trees. One could think of this in a rather extremist fashion as follows: where IL is used to represent code in the CLR, expression trees are used in the DLR. Ultimately the DLR is layered on top of the CLR, so at the end of the day expression trees are compiled – at runtime – into IL instructions as shown above with the Compile method of the v3.5 API. And from that point on, the IL can be turned into native code by the JIT compiler.

Everything in green was already exposed through the expression tree APIs in .NET 3.5. Things like method calls, various operators, constants, constructor calls, and variables (though you couldn’t assign to them but use them as parameters to lambdas) can be represented through factory method calls on the Expression type.

One first thing we need to add is all sorts of assignments so that mutation of variables can be achieved. Lots of languages live and breathe mutation, so assignment becomes an essential building block to realize the potential of those languages on top of the DLR. This is indicated in orange. Notice things like n++ fall under this category as well, due to their dual increment and assign nature. In fact, assignments in C# are categorized as expressions (why?), so the old API didn’t expose the whole expression surface of the language. Instead, the LINQ expressions API provided just enough expressiveness as was needed for commonly used operations in the context of writing queries. Writing things like the following doesn’t make sense in that context:

```var res = from p in db.Products
where p.UnitPrice > 100
select p.ProductName = "Changed";```

(Note: The above could represent some kind of update statement, but together with LINQ’s deeply rooted lazy evaluation the above would turn out very cumbersome from an operational point of view.)

Finally, we require a way to perform sequencing of operations (the semi-colon in C# if you will) and organize such operations in blocks where the concept of scoping is alive and kicking (the idea of curly braces in C#). In addition, every statement that influences control flow is useful to have too. Examples include if and loops (together with their break and continue mechanisms), returning from a method, throwing and catching exceptions, etc.
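As an appetizer for what’s to come, here’s a small sketch combining the new assignment and control flow nodes (the class and variable names are mine; the factory methods are the real .NET 4.0 API):

```using System;
using System.Linq.Expressions;

class StatementDemo
{
    static void Main()
    {
        // () => { int a; a = 5; if (a > 3) a = a * 2; return a; }
        var a = Expression.Variable(typeof(int), "a");
        var body = Expression.Block(
            new[] { a },
            Expression.Assign(a, Expression.Constant(5)),
            Expression.IfThen(
                Expression.GreaterThan(a, Expression.Constant(3)),
                Expression.Assign(a, Expression.Multiply(a, Expression.Constant(2)))
            ),
            a // the block's last expression is its value
        );

        var f = Expression.Lambda<Func<int>>(body).Compile();
        Console.WriteLine(f()); // 10
    }
}```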

Adding all those ingredients to the mix results in the v4.0 version of the expression tree API, including the ability to do dynamic dispatch (which is what the DLR is all about, and is surfaced through the new C# 4.0 dynamic keyword). Note: the “table” mentioned in the rectangle for dynamic dispatch above refers to all the operations supported by the DLR, such as calling properties, invoking methods, etc.

The following query can be used on both .NET 3.5 and .NET 4.0 to illustrate the differences between the APIs:

```var ops = from m in typeof(Expression).GetMethods()
where m.IsStatic
group m by m.Name into g
orderby g.Key
select new { g.Key, Overloads = g.Count() };

foreach (var op in ops)
Console.WriteLine(op);```

In .NET 3.5, this is the result:

{ Key = Add, Overloads = 2 }
{ Key = AddChecked, Overloads = 2 }
{ Key = And, Overloads = 2 }
{ Key = AndAlso, Overloads = 2 }
{ Key = ArrayIndex, Overloads = 3 }
{ Key = ArrayLength, Overloads = 1 }
{ Key = Bind, Overloads = 2 }
{ Key = Call, Overloads = 6 }
{ Key = Coalesce, Overloads = 2 }
{ Key = Condition, Overloads = 1 }
{ Key = Constant, Overloads = 2 }
{ Key = Convert, Overloads = 2 }
{ Key = ConvertChecked, Overloads = 2 }
{ Key = Divide, Overloads = 2 }
{ Key = ElementInit, Overloads = 2 }
{ Key = Equal, Overloads = 2 }
{ Key = ExclusiveOr, Overloads = 2 }
{ Key = Field, Overloads = 2 }
{ Key = GetActionType, Overloads = 1 }
{ Key = GetFuncType, Overloads = 1 }
{ Key = GreaterThan, Overloads = 2 }
{ Key = GreaterThanOrEqual, Overloads = 2 }
{ Key = Invoke, Overloads = 2 }
{ Key = Lambda, Overloads = 5 }
{ Key = LeftShift, Overloads = 2 }
{ Key = LessThan, Overloads = 2 }
{ Key = LessThanOrEqual, Overloads = 2 }
{ Key = ListBind, Overloads = 4 }
{ Key = ListInit, Overloads = 6 }
{ Key = MakeBinary, Overloads = 3 }
{ Key = MakeMemberAccess, Overloads = 1 }
{ Key = MakeUnary, Overloads = 2 }
{ Key = MemberBind, Overloads = 4 }
{ Key = MemberInit, Overloads = 2 }
{ Key = Modulo, Overloads = 2 }
{ Key = Multiply, Overloads = 2 }
{ Key = MultiplyChecked, Overloads = 2 }
{ Key = Negate, Overloads = 2 }
{ Key = NegateChecked, Overloads = 2 }
{ Key = New, Overloads = 6 }
{ Key = NewArrayBounds, Overloads = 2 }
{ Key = NewArrayInit, Overloads = 2 }
{ Key = Not, Overloads = 2 }
{ Key = NotEqual, Overloads = 2 }
{ Key = Or, Overloads = 2 }
{ Key = OrElse, Overloads = 2 }
{ Key = Parameter, Overloads = 1 }
{ Key = Power, Overloads = 2 }
{ Key = Property, Overloads = 3 }
{ Key = PropertyOrField, Overloads = 1 }
{ Key = Quote, Overloads = 1 }
{ Key = RightShift, Overloads = 2 }
{ Key = Subtract, Overloads = 2 }
{ Key = SubtractChecked, Overloads = 2 }
{ Key = TypeAs, Overloads = 1 }
{ Key = TypeIs, Overloads = 1 }
{ Key = UnaryPlus, Overloads = 2 }

And in .NET 4.0, where I’ve used the same color scheme to outline differences from the previous version (notice some additions exist to the “expression” subset of available tree node types):

{ Key = Add, Overloads = 2 }
{ Key = AddAssign, Overloads = 3 }
{ Key = AddAssignChecked, Overloads = 3 }
{ Key = AddChecked, Overloads = 2 }
{ Key = And, Overloads = 2 }
{ Key = AndAlso, Overloads = 2 }
{ Key = AndAssign, Overloads = 3 }
{ Key = ArrayAccess, Overloads = 2 }
{ Key = ArrayIndex, Overloads = 3 }
{ Key = ArrayLength, Overloads = 1 }
{ Key = Assign, Overloads = 1 }
{ Key = Bind, Overloads = 2 }
{ Key = Block, Overloads = 12 }
{ Key = Break, Overloads = 4 }
{ Key = Call, Overloads = 14 }
{ Key = Catch, Overloads = 4 }
{ Key = ClearDebugInfo, Overloads = 1 }
{ Key = Coalesce, Overloads = 2 }
{ Key = Condition, Overloads = 2 }
{ Key = Constant, Overloads = 2 }
{ Key = Continue, Overloads = 2 }
{ Key = Convert, Overloads = 2 }
{ Key = ConvertChecked, Overloads = 2 }
{ Key = DebugInfo, Overloads = 1 }
{ Key = Decrement, Overloads = 2 }
{ Key = Default, Overloads = 1 }
{ Key = Divide, Overloads = 2 }
{ Key = DivideAssign, Overloads = 3 }
{ Key = Dynamic, Overloads = 6 }
{ Key = ElementInit, Overloads = 2 }
{ Key = Empty, Overloads = 1 }
{ Key = Equal, Overloads = 2 }
{ Key = ExclusiveOr, Overloads = 2 }
{ Key = ExclusiveOrAssign, Overloads = 3 }
{ Key = Field, Overloads = 3 }
{ Key = GetActionType, Overloads = 1 }
{ Key = GetDelegateType, Overloads = 1 }
{ Key = GetFuncType, Overloads = 1 }
{ Key = Goto, Overloads = 4 }
{ Key = GreaterThan, Overloads = 2 }
{ Key = GreaterThanOrEqual, Overloads = 2 }
{ Key = IfThen, Overloads = 1 }
{ Key = IfThenElse, Overloads = 1 }
{ Key = Increment, Overloads = 2 }
{ Key = Invoke, Overloads = 2 }
{ Key = IsFalse, Overloads = 2 }
{ Key = IsTrue, Overloads = 2 }
{ Key = Label, Overloads = 6 }
{ Key = Lambda, Overloads = 18 }
{ Key = LeftShift, Overloads = 2 }
{ Key = LeftShiftAssign, Overloads = 3 }
{ Key = LessThan, Overloads = 2 }
{ Key = LessThanOrEqual, Overloads = 2 }
{ Key = ListBind, Overloads = 4 }
{ Key = ListInit, Overloads = 6 }
{ Key = Loop, Overloads = 3 }
{ Key = MakeBinary, Overloads = 3 }
{ Key = MakeCatchBlock, Overloads = 1 }
{ Key = MakeDynamic, Overloads = 6 }
{ Key = MakeGoto, Overloads = 1 }
{ Key = MakeIndex, Overloads = 1 }
{ Key = MakeMemberAccess, Overloads = 1 }
{ Key = MakeTry, Overloads = 1 }
{ Key = MakeUnary, Overloads = 2 }
{ Key = MemberBind, Overloads = 4 }
{ Key = MemberInit, Overloads = 2 }
{ Key = Modulo, Overloads = 2 }
{ Key = ModuloAssign, Overloads = 3 }
{ Key = Multiply, Overloads = 2 }
{ Key = MultiplyAssign, Overloads = 3 }
{ Key = MultiplyAssignChecked, Overloads = 3 }
{ Key = MultiplyChecked, Overloads = 2 }
{ Key = Negate, Overloads = 2 }
{ Key = NegateChecked, Overloads = 2 }
{ Key = New, Overloads = 6 }
{ Key = NewArrayBounds, Overloads = 2 }
{ Key = NewArrayInit, Overloads = 2 }
{ Key = Not, Overloads = 2 }
{ Key = NotEqual, Overloads = 2 }
{ Key = OnesComplement, Overloads = 2 }
{ Key = Or, Overloads = 2 }
{ Key = OrAssign, Overloads = 3 }
{ Key = OrElse, Overloads = 2 }
{ Key = Parameter, Overloads = 2 }
{ Key = PostDecrementAssign, Overloads = 2 }
{ Key = PostIncrementAssign, Overloads = 2 }
{ Key = Power, Overloads = 2 }
{ Key = PowerAssign, Overloads = 3 }
{ Key = PreDecrementAssign, Overloads = 2 }
{ Key = PreIncrementAssign, Overloads = 2 }
{ Key = Property, Overloads = 7 }
{ Key = PropertyOrField, Overloads = 1 }
{ Key = Quote, Overloads = 1 }
{ Key = ReferenceEqual, Overloads = 1 }
{ Key = ReferenceNotEqual, Overloads = 1 }
{ Key = Rethrow, Overloads = 2 }
{ Key = Return, Overloads = 4 }
{ Key = RightShift, Overloads = 2 }
{ Key = RightShiftAssign, Overloads = 3 }
{ Key = RuntimeVariables, Overloads = 2 }
{ Key = Subtract, Overloads = 2 }
{ Key = SubtractAssign, Overloads = 3 }
{ Key = SubtractAssignChecked, Overloads = 3 }
{ Key = SubtractChecked, Overloads = 2 }
{ Key = Switch, Overloads = 6 }
{ Key = SwitchCase, Overloads = 2 }
{ Key = SymbolDocument, Overloads = 4 }
{ Key = Throw, Overloads = 2 }
{ Key = TryCatch, Overloads = 1 }
{ Key = TryCatchFinally, Overloads = 1 }
{ Key = TryFault, Overloads = 1 }
{ Key = TryFinally, Overloads = 1 }
{ Key = TryGetActionType, Overloads = 1 }
{ Key = TryGetFuncType, Overloads = 1 }
{ Key = TypeAs, Overloads = 1 }
{ Key = TypeEqual, Overloads = 1 }
{ Key = TypeIs, Overloads = 1 }
{ Key = UnaryPlus, Overloads = 2 }
{ Key = Unbox, Overloads = 1 }
{ Key = Variable, Overloads = 2 }

As you can see, there’s a wealth of new capabilities added to the expression tree API, and one can’t help but love this from the bottom of one’s .NET developer’s heart. What hasn’t changed though, at the moment, is the language support for expression tree generation. That is, when writing a statement-bodied lambda expression and assigning it to an Expression<TDelegate> variable, the compiler won’t turn it into an expression tree using the new nodes for you. It keeps on supporting the subset required for LINQ. Till we truly enter the domain of meta-programming, I expect the language to stay with the true expression tree subset.
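Concretely, a statement-bodied lambda still triggers a compile-time error when targeted at an expression tree type:

```using System;
using System.Linq.Expressions;

class LanguageLimitDemo
{
    static void Main()
    {
        // Fine: an expression-bodied lambda converts to an expression tree.
        Expression<Func<int, int>> ok = x => x * 2;

        // error CS0834: A lambda expression with a statement body
        // cannot be converted to an expression tree.
        // Expression<Func<int, int>> bad = x => { return x * 2; };

        Console.WriteLine(ok);
    }
}```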

# A first sample

To get readers excited about this API enrichment, let’s have a go at writing a bigger expression tree containing statement and control flow nodes. Our running primes sample is a good choice for the task at hand. But first, the regular C# code to set the scene:

```Func<int, List<int>> getPrimes = to =>
{
var res = new List<int>();
for (int n = 2; n <= to; n++)
{
bool found = false;

for (int d = 2; d <= Math.Sqrt(n); d++)
{
if (n % d == 0)
{
found = true;
break;
}
}

if (!found)
{
res.Add(n);
}
}
return res;
};

foreach (var num in getPrimes(100))
Console.WriteLine(num);```

This code should be fairly straightforward to grasp. Notice I’m using a lambda expression to define the “method” we’re talking about, so that we have a local variable called getPrimes through which we can invoke the code. Our goal is to generate this code at runtime using the expression tree APIs, so that the use site (the foreach loop at the bottom) can remain the same.

Next, we’re on for a stroll in expression tree land. The top-level node is a plain old LambdaExpression, but then things start to change from what we’re used to. First of all, we’ll need sequencing, which can be achieved through a BlockExpression. Each block can have a number of variables that are in scope, as well as a number of expressions that make up its body. In our case, the body of the lambda has a res variable and two statements: one for the for-loop and one for the return statement:

```var to = Expression.Parameter(typeof(int), "to");
var res = Expression.Variable(typeof(List<int>), "res");
var getPrimes =
// Func<int, List<int>> getPrimes =
Expression.Lambda<Func<int, List<int>>>(
// {
Expression.Block(
// List<int> res;
new [] { res },
// res = new List<int>();
Expression.Assign(
res,
Expression.New(typeof(List<int>))
),
// loop goes here
res
),
to
// }
).Compile();

foreach (var num in getPrimes(100))
Console.WriteLine(num);```

Again, this piece of code is not too hard to understand. The Lambda factory method takes in the body and the lambda parameters. The difference from the past API landscape is that the body now can be a BlockExpression, and in our case it has a “res” variable in scope. Assignment is trivial using the Assign factory method, passing in the left-hand side and right-hand side. Returning from the lambda expression can simply be done by using the “res” expression, since the last valued expression of a block becomes its return value (an Expression.Return factory exists for more complicated cases where one returns from the middle of a block of code, which boils down to a “value-carrying” GotoExpression behind the scenes). Notice this is a more general property of expression trees, as languages like Ruby use this property throughout (e.g. the value of an if-statement is the last expression’s value from either of the branches).
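To illustrate the Expression.Return remark, here’s a minimal sketch (a hypothetical clamp-to-zero function, not from the original post) of an early return through a value-carrying label:

```using System;
using System.Linq.Expressions;

class ReturnDemo
{
    static void Main()
    {
        var ret = Expression.Label(typeof(int)); // int-valued label
        var x = Expression.Parameter(typeof(int), "x");

        var body = Expression.Block(
            // if (x < 0) return 0;
            Expression.IfThen(
                Expression.LessThan(x, Expression.Constant(0)),
                Expression.Return(ret, Expression.Constant(0))
            ),
            // falling through to the label yields x as the block's value
            Expression.Label(ret, x)
        );

        var clamp = Expression.Lambda<Func<int, int>>(body, x).Compile();
        Console.WriteLine(clamp(-5)); // 0
        Console.WriteLine(clamp(7));  // 7
    }
}```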

Now the loop. Although a whole bunch of loop constructs exist in front-end languages like C#, the expression tree APIs only provide one out of the box. Its goal is to centralize the plumbing involved with re-executing the loop’s body while having break and continue capabilities, as well as maintaining the last iteration’s value as the result of the loop (again for languages like Ruby that work on the DLR using expression trees). As we’ve written a for-loop in the original fragment, we need a way to cross-compile a for-loop into this basic building block for loops:

```for (initializer; condition; update)
    body```

stands for

```{
    initializer;
    while (condition)
    {
        body;
    continueLabel:
        update;
    }
breakLabel:
}```

where the body’s immediate inner statements get a substitution applied to rewrite break and continue statements in terms of the added labels. Notice the use of a separate block, keeping the for-loop’s initialized variables in a separate scope. Moreover, the expression tree API’s loop construct doesn’t have a terminating condition, so that too needs to be replaced by a custom jump to the break label:

```{
    initializer;
    while (true)
    {
        if (!condition)
            goto breakLabel;
        body;
    continueLabel:
        update;
    }
breakLabel:
}```
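To convince ourselves this rewritten shape really works, here’s a small self-contained sketch (my own example, not part of the primes fragment) that compiles a while(true)-style loop summing 1 through 10, using a typed break label so the loop itself produces the result:

```csharp
using System;
using System.Linq.Expressions;

var i = Expression.Variable(typeof(int), "i");
var sum = Expression.Variable(typeof(int), "sum");
var brk = Expression.Label(typeof(int)); // typed label: the loop produces a value

var sumTo10 = Expression.Lambda<Func<int>>(
    Expression.Block(
        new[] { i, sum },
        Expression.Assign(i, Expression.Constant(1)),
        Expression.Assign(sum, Expression.Constant(0)),
        Expression.Loop(
            Expression.Block(
                // if (!(i <= 10)) break, carrying sum out as the loop's value
                Expression.IfThen(
                    Expression.Not(Expression.LessThanOrEqual(i, Expression.Constant(10))),
                    Expression.Break(brk, sum)
                ),
                Expression.AddAssign(sum, i),
                Expression.PostIncrementAssign(i)
            ),
            brk
        )
    )
).Compile();

Console.WriteLine(sumTo10()); // 55
```

Because the break label carries type int, the LoopExpression itself has type int, and being the block’s last expression it becomes the lambda’s return value.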

As our loops don’t use continue statements, we can omit that complication. For the outer loop we end up with:

```// {
Expression.Block(
// int n;
new [] { n },
// n = 2;
Expression.Assign(
n,
Expression.Constant(2)
),
// while (true)
Expression.Loop(
// {
Expression.Block(
// if
Expression.IfThen(
// (!
Expression.Not(
// (n <= to)
Expression.LessThanOrEqual(
n,
to
)
// )
),
// break;
Expression.Break(breakOuter)
),
// loop body goes here
// n++;
Expression.PostIncrementAssign(n)
// }
),
breakOuter
))```

A more thorough explanation is in order here. Think of Expression.Loop as providing an infinite loop. So, right inside the loop’s body (the first argument to the Loop call) we provide a condition to jump out of the loop when the terminating condition holds. As part of the loop’s body we also need to have the inner loop (see further) as well as the update part of the for-statement, which is a post-increment-assign node. One more thing is required here though: the use of a break label to denote the statement right after the loop. We create such a beast using the following call:

`var breakOuter = Expression.Label();`

and specify it as the last argument to the Loop call, where the expression tree API can take care of emitting the label in the right spot. Inside the loop, we can simply use the Break factory method to emit a GotoExpression that essentially breaks out of the loop. Also notice the use of the outer loop’s block as the scope for the variable “n”, which was declared like this:

`var n = Expression.Variable(typeof(int), "n");`

No rocket science either. Finally, we need to write the loop’s body, which is a piece of cake again: use a block to group multiple statements. It also contains an inner for-loop for which we use the same tricks, including the use of a break label etc.:

```// {
Expression.Block(
// bool found;
new[] { found },
// found = false;
Expression.Assign(
found,
Expression.Constant(false)
),
// {
Expression.Block(
// int d;
new [] { d },
// d = 2;
Expression.Assign(
d,
Expression.Constant(2)
),
// while (true)
Expression.Loop(
// {
Expression.Block(
// if
Expression.IfThen(
// (!
Expression.Not(
// d <= Math.Sqrt(n)
Expression.LessThanOrEqual(
d,
Expression.Convert(
Expression.Call(
null,
typeof(Math).GetMethod("Sqrt"),
Expression.Convert(
n,
typeof(double)
)
),
typeof(int)
)
)
// )
),
// break;
Expression.Break(breakInner)
),
// {
Expression.Block(
// if (n % d == 0)
Expression.IfThen(
Expression.Equal(
Expression.Modulo(
n,
d
),
Expression.Constant(0)
),
// {
Expression.Block(
// found = true;
Expression.Assign(
found,
Expression.Constant(true)
),
// break;
Expression.Break(breakInner)
// }
)
)
// }
),
// d++;
Expression.PostIncrementAssign(d)
// }
),
breakInner
)
),
// if
Expression.IfThen(
// (!found)
Expression.Not(found),
// res.Add(n);
Expression.Call(
res,
typeof(List<int>).GetMethod("Add"),
n
)
)
),```

It should be straightforward to find out how the variables d, found and the label breakInner have been created upfront. Around the d <= Math.Sqrt(n) code there’s quite a bit of plumbing needed because of the implicit conversions that take place all over, resulting in the use of a few convert nodes. Other than that, the code above doesn’t use anything new.
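If you want to poke at just that conversion plumbing, here’s the (int)Math.Sqrt((double)n) fragment in isolation, as a standalone sketch with Convert nodes on both sides of the static call:

```csharp
using System;
using System.Linq.Expressions;

var n = Expression.Parameter(typeof(int), "n");

// (int)Math.Sqrt((double)n)
var isqrt = Expression.Lambda<Func<int, int>>(
    Expression.Convert(
        Expression.Call(
            typeof(Math).GetMethod("Sqrt"),
            Expression.Convert(n, typeof(double))
        ),
        typeof(int)
    ),
    n
).Compile();

Console.WriteLine(isqrt(17)); // 4, since Math.Sqrt(17) ≈ 4.12
```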

For simplicity when trying this out for yourself in .NET 4.0 and Visual Studio 2010, here’s the whole fragment:

```var to = Expression.Parameter(typeof(int), "to");
var res = Expression.Variable(typeof(List<int>), "res");
var n = Expression.Variable(typeof(int), "n");
var found = Expression.Variable(typeof(bool), "found");
var d = Expression.Variable(typeof(int), "d");
var breakOuter = Expression.Label();
var breakInner = Expression.Label();
var getPrimes =
// Func<int, List<int>> getPrimes =
Expression.Lambda<Func<int, List<int>>>(
// {
Expression.Block(
// List<int> res;
new [] { res },
// res = new List<int>();
Expression.Assign(
res,
Expression.New(typeof(List<int>))
),
// {
Expression.Block(
// int n;
new [] { n },
// n = 2;
Expression.Assign(
n,
Expression.Constant(2)
),
// while (true)
Expression.Loop(
// {
Expression.Block(
// if
Expression.IfThen(
// (!
Expression.Not(
// (n <= to)
Expression.LessThanOrEqual(
n,
to
)
// )
),
// break;
Expression.Break(breakOuter)
),
// {
Expression.Block(
// bool found;
new[] { found },
// found = false;
Expression.Assign(
found,
Expression.Constant(false)
),
// {
Expression.Block(
// int d;
new [] { d },
// d = 2;
Expression.Assign(
d,
Expression.Constant(2)
),
// while (true)
Expression.Loop(
// {
Expression.Block(
// if
Expression.IfThen(
// (!
Expression.Not(
// d <= Math.Sqrt(n)
Expression.LessThanOrEqual(
d,
Expression.Convert(
Expression.Call(
null,
typeof(Math).GetMethod("Sqrt"),
Expression.Convert(
n,
typeof(double)
)
),
typeof(int)
)
)
// )
),
// break;
Expression.Break(breakInner)
),
// {
Expression.Block(
// if (n % d == 0)
Expression.IfThen(
Expression.Equal(
Expression.Modulo(
n,
d
),
Expression.Constant(0)
),
// {
Expression.Block(
// found = true;
Expression.Assign(
found,
Expression.Constant(true)
),
// break;
Expression.Break(breakInner)
// }
)
)
// }
),
// d++;
Expression.PostIncrementAssign(d)
// }
),
breakInner
)
),
// if
Expression.IfThen(
// (!found)
Expression.Not(found),
// res.Add(n);
Expression.Call(
res,
typeof(List<int>).GetMethod("Add"),
n
)
)
),
// n++;
Expression.PostIncrementAssign(n)
// }
),
breakOuter
)
),
res
),
to
// }
).Compile();

foreach (var num in getPrimes(100))
Console.WriteLine(num);```

# Inspecting the IL code

Obviously you want to see the generated code after calling Compile. Near the top of this post you saw how to do this using SOS, so I won’t repeat the recipe over here (just make sure to load the sos.dll file from the v4.0 installation folder instead), but below is the result:

!dumpil 023f2bc0
This is dynamic IL. Exception info is not reported at this time.
If a token is unresolved, run "!do <addr>" on the addr given
in parenthesis. You can also look at the token table yourself, by
running "!DumpArray 023f985c".

// res = new List<int>();
IL_0000: newobj 023ef050 is not a MethodDesc
6000003
IL_0005: stloc.0

// n = 2
IL_0006: ldc.i4.2
IL_0007: stloc.1

// n <= to
IL_0008: ldloc.1
IL_0009: ldarg.1
IL_000a: ble.s IL_000f

// “mumble, mumble”
IL_000c: ldc.i4.0
IL_000d: br.s IL_0010
IL_000f: ldc.i4.1
IL_0010: ldc.i4.0
IL_0011: ceq
IL_0013: brfalse IL_001d
IL_0018: br IL_0079

// found = false;
IL_001d: ldc.i4.0
IL_001e: stloc.2

// d = 2
IL_001f: ldc.i4.2
IL_0020: stloc.3

// d <= Math.Sqrt(n)
IL_0021: ldloc.3
IL_0022: ldloc.1
IL_0023: conv.r8
IL_0024: call 023ef9a0 is not a MethodDesc
6000004
IL_0029: conv.i4
IL_002a: ble.s IL_002f

// “mumble, mumble”

IL_002c: ldc.i4.0
IL_002d: br.s IL_0030
IL_002f: ldc.i4.1
IL_0030: ldc.i4.0
IL_0031: ceq
IL_0033: brfalse IL_003d
IL_0038: br IL_005c

// n % d == 0
IL_003d: ldloc.1
IL_003e: ldloc.3
IL_003f: rem
IL_0040: ldc.i4.0
IL_0041: ceq

IL_0043: brfalse IL_004f

// found = true;
IL_0048: ldc.i4.1
IL_0049: stloc.2

IL_004a: br IL_005c

// d++;
IL_004f: ldloc.3
IL_0050: stloc.s VAR OR ARG 4
IL_0052: ldloc.s VAR OR ARG 4
IL_0054: ldc.i4.1
IL_0056: stloc.3

IL_0057: br IL_0021

// if (!found)
IL_005c: ldloc.2
IL_005d: ldc.i4.0
IL_005e: ceq
IL_0060: brfalse IL_006c

IL_0065: ldloc.0
IL_0066: ldloc.1
IL_0067: callvirt 023f0548 is not a MethodDesc
6000005

// n++;
IL_006c: ldloc.1
IL_006d: stloc.s VAR OR ARG 5
IL_006f: ldloc.s VAR OR ARG 5
IL_0071: ldc.i4.1
IL_0073: stloc.1

IL_0074: br IL_0008

// return res;
IL_0079: ldloc.0
IL_007a: ret

Amazing, isn’t it? That’s it for now. Expect to see more expression tree fun in the future, in my continuation of the Control Library series of posts.


From a hotel lobby in the sunny city of Durban, South Africa, waiting for my plane transfer after a great TechEd Africa event. Why not write a blog post on one of my talks: the Managed Extensibility Framework, or MEF? As we steam ahead to release .NET 4.0, it’s great to see the number of new APIs that will make it into this release. I had the opportunity to talk on three of those the past few days:

• The Dynamic Language Runtime (System.Dynamic) with dynamic support in C# and VB, and dynamic languages like IronPython and IronRuby.
• New additions for parallel programming (Task Parallel Library, PLINQ, Coordination Data Structures).
• The Managed Extensibility Framework (System.ComponentModel.Composition).

# Why does extensibility matter?

The world’s most successful APIs succeed in taking away developer frustration. Think of web services (“Add Service Reference”, anyone?), LINQ (thanks to its unified query experience across domains) and many more. With extensibility, that’s no different:

That extensibility is a big deal for customers is no secret. Look at the success of add-ins within popular software packages: browsers, office productivity suites, developer tools, and even the tool I’m writing this post in: Windows Live Writer. Having a brand new installation of Windows 7, I haven’t yet installed something I’ll need further on in this post: the ability to paste code from Visual Studio into my post. Why didn’t the Windows Live Writer team think about that capability?

Honestly, that’s not a silly question to ask at all. But tell me: if 99% of users with a blog (okay, I’m exaggerating a little bit) tend to write about how great the spaghetti was they consumed last night in one of their favorite bars, then pasting code from a tool as specific as Visual Studio doesn’t seem to meet the bar of top feature requests, right?

What if the Live Writer team would say: let’s not freaking care about such a feature ourselves but provide the ability for others to extend our product?

So what about ALT-PrntScrn, CTRL-V to illustrate my point?

Okay, I visited Windows 7’s new Paint utility for a short while to draw a red rectangle, but the above shows how the ability to paste code from Visual Studio is painfully missing but how the “Add a plug-in…” capability is the magic bottle of snake-oil waiting to be consumed. Clicking the link, I arrive on a web page with a bunch of extensions waiting for me to be added to my favorite blogging tool:

I find one plug-in that sounds promising, click Download, accept a few prompts and the installation is on its way. Seconds later, even without restarting Windows Live writer, I take another screenshot:

If you’re a true developer, you’ll have to admit being jealous of the Live Writer team. Don’t you want your application to be that pluggable as well? Sure you do! That’s why extensibility matters: customers love it, but it’s way too hard for developers to do and get right today. Guess what, that’s where MEF comes in…

# Our running sample

Boring as I am, my demos always go the minimalistic route: no UI candy, no contrived samples, plain basics using the purest black gold in the Windows and .NET universe: console applications. I bet my statistics of application development in Visual Studio by project type exceed a 100:1 ratio for console applications versus anything else. So live with it or navigate away (please don’t).

Because I realize “Hello World” would be too boring of a sample, let’s go with an extensible console-driven calculator. Be prepared to be shocked as I show the stunning result:

Little do you know that Add is a built-in operation, while the Pythagorean option was loaded from an extension provided by someone other than the calculator “core” developer. And what about MEF? It’s the secret sauce that puts the pieces of the puzzle together.

# Introducing FunCalc’s project structure

But before going into MEF terrain, let’s have a look at how we can make extensibility work without using MEF. Our goal? To show how much plumbing is involved in order to get such a seemingly simple thing off the ground. In order to be a bit architecturally hygienic, we’ll divide our application into a few components:

1. Our superb million dollar business UI, a console application project.
2. Extensibility interface(s), as a class library project.
3. Third party extensions, again as a class library project.

So we’ll go ahead and create a solution with three projects and the necessary cross-references between them: both the UI and the extension projects need to refer to the extensibility interface project to access the contract that’s established between the “host” and the “extenders”. Here’s what the interface will look like:

```public interface ICalculatorOperation
{
char Symbol { get; }
string Name { get; }
int Calc(int a, int b);
}```

How did I insert the code snippet above? Right, using the plug-in I installed just a few minutes before. Life is great and I’ll be delighted to give this level of flexibility to the users of my superior quality calculator as well.

What does the interface look like? Honestly, it’s already overkill as I seem to have cared a tiny little bit about presentation. Shame on me: the Symbol and Name get-only properties are all about visualization. The former one represents the infix operator symbol while the latter one is used to add a menu option (‘+’ for an “Add” operation, ‘#’ for a “Pythagorean” operation – don’t ask me why I chose that symbol but I reserve the “symbolic mathematical right” to choose whatever symbol I see fit :-)).

Besides the UI candy properties (which more UI-savvy developers would bloat with whole control-exposing properties, which would act as the labels on a button in the calculator UI) there’s some real hardcore stuff in the interface as well: Calc is the binary operator that takes two integer numbers and produces another one back. That’s where the operator’s implementation will live.

But first, the solution’s project structure. Don’t worry about the code files just yet but rather focus on the references between projects:

I’ve also cleaned up the references to assemblies I don’t need (like System.Data) to make the structure as clean as possible for sake of the demonstration. The startup project, FunCalc, is the Console Application project. It references the Interfaces class library assembly where the aforementioned interface ICalculatorOperation is defined. You can already go ahead and do that. Similarly, the Extensions assembly (which you should really think of as being created by a developer on the other side of the world in a separate solution altogether) refers to the Interfaces assembly. I’m keeping everything in one solution again for the sake of demonstration but you should envision extensions coming from tons of places just as raw DLL assembly files.

One final thing left to do in this setting is to provide a way for the extensions to become discoverable to the host application, FunCalc.exe. In other words, the Extensions.dll file needs to be placed in a well-known location where our application will look for extensions. Again, in a real-world setting this would be done by providing a well-known folder location (maybe registered somewhere in the registry) that plug-in installers would copy their extension DLL file(s) to. All of that would distract us from our core mission here, so let’s use the tools at hand. Using a post-build event command on the Extensions project, I’ll copy the produced assembly to an AddIns subfolder underneath the build location for the FunCalc executable:

Don’t worry too much about this little bit of magic but a few give-aways: TargetPath contains the full path to the compiled file (Extensions.dll), while SolutionDir and SolutionName contain the full path to the solution root and the name of the solution (FunCalc, which happens to be the same as the Console Application’s project name, the only bit of magic we’re relying on here) respectively. I’m also hardcoding the bin\Debug folder, which could be pimped up a little bit if you were to do this for real. But the crux of it is that we’re copying the extensions DLL file to the AddIns folder next to the FunCalc.exe binary. Give it a try and have a look in Explorer:

Sweet. We’re ready to go.

# Designing the “UI”

Time to design and build the host application’s UI. In Program.cs, add the following code to the Main method for starters:

```Calculator calc = new Calculator();

while (true)
{
ICalculatorOperation op = AskUserInput(calc);
if (op == null)
return;

int a = ReadInt("a", x => true);
int b = ReadInt("b", x => true);
int res = op.Calc(a, b);

Console.WriteLine();
Console.WriteLine("{0} {1} {2} = {3}", a, op.Symbol, b, res);
Console.WriteLine();
}```

We don’t have the Calculator class just yet, but be patient for a little while. What does the main loop do? It calls AskUserInput to show the menu with options and return the calculator operation the user chose. If the result is null, the user entered the exit option (0). To retrieve the operands to the selected (always binary/dyadic in our sample) operator, we call a helper function, ReadInt, twice. The first argument is the prompt, the second one a validator function (Func<int, bool>) that has ReadInt repeat its question till the validator passes (e.g. for range checks, see further). Finally, we invoke the calculator’s selected operation by calling Calc and print the result with some fancy formatting, resulting in infix notation using the operator’s Symbol property getter.

Let’s have a look at AskUserInput next:

```private static ICalculatorOperation AskUserInput(Calculator calc)
{
int i = 1;
foreach (var op in calc.Operations)
{
Console.WriteLine(i++ + ". " + op.Name);
}
Console.WriteLine();

int max = i - 1;

int choice = ReadInt("Your choice", x => x >= 0 && x <= max);

if (choice == 0)
{
return null;
}

return calc.Operations[choice - 1];
}```

As trivial as it can get. Some printing of a one-based menu based on a yet-to-be-defined Operations property on the Calculator object (see further), followed by another use of ReadInt to ask the user for a menu option, this time passing in a validator function using a concise lambda expression making only the range of menu options (including 0 for exit) valid. If the choice was 0, we return null to indicate we should exit, otherwise we select the operation of the user’s choice using an indexer operation.

Which brings us to ReadInt as the final helper function:

```private static int ReadInt(string prompt, Func<int, bool> isValid)
{
bool valid = false;
int choice = 0;
while (!valid)
{
Console.Write(prompt + ": ");
string input = Console.ReadLine();
valid = int.TryParse(input, out choice) && isValid(choice);
}

return choice;
}```

Simple once more. Keep on asking for user input given the specified prompt till the input is a valid integer. That’s all it takes to create the UI and you didn’t have to touch the mouse once. How good does life get?

# The Calculator type

Finally our host needs its Calculator type which will hold on to all the supported operations. You can envision it to be much more complex, potentially providing UI candy and additional functionality (like support for different operator arities, a graphical calculator unit or unit conversion modes – have a look at the Windows 7 calculator for inspiration). But here is our minimalistic Calculator type:

```class Calculator
{
public List<ICalculatorOperation> Operations { get; set; }
}```

At this point, we can try to run the application. It will fail in grandeur:

Why? Because we didn’t compose operators with the Operations list. It’s still null and has never been assigned. A capital mistake but it turns out this is the place where MEF will come in: gluing things together.

But before we embrace MEF, let’s try to wire up things manually “the naive way”. And even before we do so, let’s make our calculator sellable by having built-in operations for Add, Subtract, Multiply and Divide.

# Built-in operations

Creating built-in operations is no rocket science: simply implement the interface. The built-in nature reveals where we’re going to define those operations: right inside the FunCalc console application project. Here’s a sample implementation:

```class Add : ICalculatorOperation
{
public char Symbol
{
get { return '+'; }
}

public string Name
{
get { return "Add"; }
}

public int Calc(int a, int b)
{
return a + b;
}
}```

Consult your books on higher-order mathematics to implement the extremely difficult Subtract, Multiply and Divide operations yourself. While you do so, I’ll go ahead and wire up those in the Main method as follows:

```Calculator calc = new Calculator();
calc.Operations = LoadBuiltIns().ToList();

while (true)
{```

And the LoadBuiltIns method looks as follows:

```private static IEnumerable<ICalculatorOperation> LoadBuiltIns()
{
yield return new Add();
yield return new Sub();
yield return new Mul();
yield return new Div();
}```

A trivial iterator which we turn into a List<ICalculatorOperation> higher up by calling LINQ’s ToList operator. Great team-work isn’t it? You gotta love contract-first interface-driven implementation, don’t you? And now, our not-yet-extensible calculator simply works:

Now on to making it extensible.

# Extensibility the naive way

While we already have a LoadBuiltIns method, we also want to be able to load extensions dynamically at runtime by looking in our AddIns folder and discovering types in assemblies that are suitable for use by our Calculator type. The way we do so is by scanning the whole directory for .dll files (a first mistake, since a .dll is not necessarily a .NET assembly), loading them (a mistake if we do it naively, as we shall see), looking for types that implement the interface (could be a mistake too, because we may pick up types that accidentally implemented the interface but weren’t meant – explicitly, that is – for composition), and creating instances of those types by invoking the default constructor (it gets tedious, and this isn’t ideal either). Here’s what I came up with as the most naive, very error-prone way of doing extensibility the .NET 1.0 way. In fact, let’s make it the .NET 3.5 way and be super-naive, writing everything as a LINQ query:

```private static IEnumerable<ICalculatorOperation> LoadExtensions()
{
return from dll in System.IO.Directory.EnumerateFiles("AddIns", "*.dll")
let asm = System.Reflection.Assembly.LoadFrom(dll)
from t in asm.GetTypes()
where t.GetInterfaces().Contains(typeof(ICalculatorOperation))
select (ICalculatorOperation)Activator.CreateInstance(t);
}```
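For contrast, here’s a slightly more defensive sketch of the same idea (my own variant, still not production-grade): it skips unmanaged DLLs via BadImageFormatException and ignores abstract types, two of the pitfalls called out above. The folder parameter and the LoadExtensionsSafer name are mine, and the interface is repeated here only to make the sketch self-contained:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;

public interface ICalculatorOperation
{
    char Symbol { get; }
    string Name { get; }
    int Calc(int a, int b);
}

public static class ExtensionLoader
{
    public static IEnumerable<ICalculatorOperation> LoadExtensionsSafer(string folder)
    {
        if (!Directory.Exists(folder))
            yield break; // no add-in folder, no extensions

        foreach (var dll in Directory.EnumerateFiles(folder, "*.dll"))
        {
            Assembly asm;
            try
            {
                asm = Assembly.LoadFrom(dll);
            }
            catch (BadImageFormatException)
            {
                continue; // not a managed assembly after all
            }

            foreach (var t in asm.GetTypes())
            {
                // skip abstract types; only concrete implementers can be instantiated
                if (!t.IsAbstract && typeof(ICalculatorOperation).IsAssignableFrom(t))
                    yield return (ICalculatorOperation)Activator.CreateInstance(t);
            }
        }
    }
}
```

Even this variant still does the discovery and instantiation by hand, which is exactly the plumbing MEF is about to take off our shoulders.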

Obviously we also need to patch up the assignment to the Calculator object’s Operations property to include the extension objects:

```Calculator calc = new Calculator();
calc.Operations = LoadBuiltIns().Concat(LoadExtensions()).ToList();

while (true)
{```

# Writing our first add-in

But does the magic above work? Let’s try. Add the following class definition to your Extensions project, which has been set up to copy the resulting assembly to the AddIns folder before. That should be enough to pick it up, right?

```public class Pyth : ICalculatorOperation
{
public char Symbol
{
get { return '#'; }
}

public string Name
{
get { return "Pythagorean"; }
}

public int Calc(int a, int b)
{
return (int)Math.Sqrt(a * a + b * b);
}
}```

Obviously you’ll have to import the Interfaces namespace where the interface lives, as referenced through the project reference to the Interfaces project. Build and run and lo and behold:

If it didn’t work, check a few things:

• Did you include the LoadExtensions result in the assignment to Operations?
• Was the rebuilt Extensions.dll copied correctly to the AddIns folder?
• Is your Pythagorean operation class declared as public?

# Reflecting on our solution

How do we like our current solution? Positive thing: it works. Negative thing: the implementation sucks. Why?

• Discovering extensions was a pain.
• Instantiating extensions was tedious.
• Wiring up the list of extensions to the Calculator is tedious too.

So, if we boil down things to the bare essence we end up with a few things we want to make easier to do:

• Contracts – The interface between various parties is something we can’t eliminate. Notice how general this concept is: things like WCF are contract-based too (address, binding, contract).
• Catalogs – What about making discoverability generic by means of a catalog concept? Such a catalog could be based on a directory search, amongst other sources.
• Composition – Why can’t wiring up the parts of an application be as easy as throwing everything together and letting the runtime figure out what fits where (e.g. operations should be added to the Operations collection)?

That’s the essence of MEF: its three C’s:

# MEF = playing with blocks

To make those concepts a bit easier to grasp, imagine yourself playing with Lego blocks again (or admit you spend all your free time doing so nevertheless):

Blocks fit onto one another in a very particular way: there’s a contract between different blocks that determines whether they fit. Next, there’s a catalog where we can find all the blocks we can play with. It’s about discovering what parts exist. Finally, in an ideal world, we’d be able to buy a magic box, throw the parts in there, and out comes a giant structure where everything is fit together automagically:

That magic box is MEF. It’s a composition engine. No wonder the MEF logo looks as follows:

# MEF essentials

MEF is about give and take. The equation is plain easy:

Import is what our Calculator will do: it needs operations to play with. The operations on the other hand export themselves by saying: I have an operation available for you. MEF wires up the parts in order to reach a composed application. In a practical scenario the picture looks as follows:

The import/export relationship is expressed in a purely declarative way using custom attributes: ImportAttribute (with an ImportManyAttribute variant) and ExportAttribute.

# Putting MEF into action

Time to simplify our application using MEF. First of all, let’s get rid of all the code in LoadBuiltIns and LoadExtensions. More specifically, we want to get rid of this line of code:

`calc.Operations = LoadBuiltIns().Concat(LoadExtensions()).ToList();`

This is where we’ll put some MEF code in order to request composition to happen. But first, we’re going to use the MEF attributes to denote what’s importing functionality and what’s exporting functionality. Let’s start with export. In order to make this happen we need to reference System.ComponentModel.Composition, the assembly where MEF lives. We do so in both the host application (FunCalc) and extensions (Extensions) projects:

Next, go to all of your operation types (Add, Sub, Mul, Div and Pyth) to put the Export attribute on their definitions as follows:

Notice you’ll have to import the System.ComponentModel.Composition namespace. The core essence of the ExportAttribute is to explicitly state to MEF that the type exports a certain contract (which can be based on an interface type):

```[Export(typeof(ICalculatorOperation))]
public class Pyth : ICalculatorOperation```

We do this for all of our ICalculatorOperation types, including the built-in ones. This allows us to have MEF take care of the composition for those built-in ones as well, taking the burden of doing the wiring-up ourselves away. On the other side of the equation we find the Import attribute where the exported parts will be accepted for composition:

```class Calculator
{
[ImportMany(typeof(ICalculatorOperation))]
public List<ICalculatorOperation> Operations { get; set; }
}```

We’re actually using ImportMany instead of Import in order to tell MEF that we’re doing a Many-to-One mapping here: our Operations property will contain a List containing all of the exported ICalculatorOperation parts it can find. This will ultimately include built-ins as well as extensions.

Now that we have attributed all of our implementers and the consumer, MEF is capable of wiring up everything for us as long as we instruct it to do so. Before getting into the domain of discovering parts, let’s do a manual composition first. The code to do this looks like this:

```Calculator calc = new Calculator();

var container = new CompositionContainer();
container.ComposeParts(
calc,
new Add(), new Sub(), new Mul(), new Div()
);```

Make sure to import the System.ComponentModel.Composition and System.ComponentModel.Composition.Hosting namespaces in Program.cs for the code above to compile:

```using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;```

When we run the application, sure enough our calculator again exposes the included operations through its UI. MEF went ahead and wired up the objects for us, something you could see in action when setting a breakpoint on the Operations property setter on Calculator:

Notice I’ve also changed the property to have a private setter. This looks far cleaner from an encapsulation point of view and MEF is perfectly capable of initializing through private setters (and even directly on fields) because of its reflection-based nature (and its intentional support for this type of scenario).
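For completeness, the import site with the private setter then looks like this (a fragment; it reuses the ICalculatorOperation contract from earlier):

```csharp
using System.Collections.Generic;
using System.ComponentModel.Composition;

class Calculator
{
    // MEF assigns this through reflection, so a private setter suffices.
    [ImportMany(typeof(ICalculatorOperation))]
    public List<ICalculatorOperation> Operations { get; private set; }
}
```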

Though we haven’t seen how to discover parts yet, the power of MEF’s composition engine has become apparent in the sample above. Time to move on to the discoverability aspect next: catalogs.

# Catalogs in MEF

The concept of a catalog is what drives discoverability of parts in MEF. Various types of catalogs are built into MEF, but you can easily create your own by deriving from the ComposablePartCatalog class:

To get started with catalogs, we’ll use the TypeCatalog to abstract away from our manual hook up of the built-in operation instances:

```using (var catalog = new TypeCatalog(typeof(Add), typeof(Sub), typeof(Mul), typeof(Div)))
using (var container = new CompositionContainer(catalog))
{
container.ComposeParts(calc);
}```

How’s that any better, I hear the reader wonder. If we still have to specify all the types, what are we saving here? In fact, using the TypeCatalog we’re leaving the burden of instantiating part objects to the MEF composition engine. More advanced features allow controlling the constructor invocation as well, so in essence we’re yielding more control to MEF here. But we can do better: what if we add a built-in operation to the calculator? Now we need to update the code above to include the implementing type in the catalog manually. Can’t we discover all types in the current assembly? It turns out we can, using the AssemblyCatalog:

```using (var catalog = new AssemblyCatalog(typeof(Program).Assembly))
using (var container = new CompositionContainer(catalog))
{
container.ComposeParts(calc);
}```

Here we’re instructing MEF to search the whole current assembly, i.e. the one where the calculator’s Program type is defined in, for attributed types. For the types it found, it will take over the plumbing associated with instantiating the objects and obviously the rest of the composition as it did before.

Finally, we want to add the extensions to the mix. To do so, we need two more things. First of all, we need a DirectoryCatalog for MEF to search an entire directory for assemblies containing types that can be used for composition. This is the superior equivalent of our LoadExtensions iterator we played with before. Since we now are faced with two catalogs: one AssemblyCatalog for the built-ins and one DirectoryCatalog for the extensions, we need a way to compose those catalogs. That’s what the AggregateCatalog is for:

```using (var builtIns = new AssemblyCatalog(typeof(Program).Assembly))
using (var extensions = new DirectoryCatalog("AddIns"))
using (var catalog = new AggregateCatalog(builtIns, extensions))
using (var container = new CompositionContainer(catalog))
{
container.ComposeParts(calc);
}```

And, as you can probably imagine, taking this one step further and mixing in other directories or other types of catalogs is trivial to do using the AggregateCatalog: it simply acts as the union of catalogs.
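For instance, assuming a hypothetical second extension folder called “MoreAddIns” (my name, not part of the sample), folding it into the mix is one extra line (a fragment, continuing the composition code above):

```csharp
using (var builtIns = new AssemblyCatalog(typeof(Program).Assembly))
using (var extensions = new DirectoryCatalog("AddIns"))
using (var more = new DirectoryCatalog("MoreAddIns")) // hypothetical extra folder
using (var catalog = new AggregateCatalog(builtIns, extensions, more))
using (var container = new CompositionContainer(catalog))
{
    container.ComposeParts(calc);
}
```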

# The result

We replaced a significant amount of plumbing code with five lines of MEF code and a few attributes here and there. For people extending our application, things are as easy as implementing the contract interface and sticking an attribute on the resulting type to opt in explicitly to MEF composition. For us, the application creators, building an extensible application got significantly easier. Obviously, the resulting application still runs flawlessly:

In its true essence, MEF is a way to play with blocks or to brew soup (in the latter interpretation, one could argue it’s the Maizena plus in the picture below). We haven’t explored the concept of a CompositionBatch just yet, so the figure below acts as a little stimulus for further exploration on your own :-).

Enjoy this nice new library in .NET 4! In the meantime, you can play around with MEF from http://mef.codeplex.com.

