Introduction

Before reaching the catharsis in the “More LINQ with System.Interactive” series over here, I wanted to ensure a solid understanding of the essence of LINQ in my reader base. Often people forget the true essence of a technology due to the large number of auxiliary frameworks and extensions that are being provided. Or worse, sometimes a sense for the essence never materialized.

Searching for essence is nothing other than a “group by” operation, partitioning the world into fundamentals and derived portions. One succeeds in this mission if the former group is much smaller than the latter. In this post, we’ll try to reach that point for the IEnumerable<T> and IObservable<T> LINQ implementations, illustrating both are fundamentally similar (and dare I say, dual?). You can already guess much of the essence lies in the concept of monads. By the end of the post, we’ll have distilled the core of LINQ, which I refer to as MinLINQ since small is beautiful.

 

Interfaces are overrated?

While loved by object-oriented practitioners, interfaces are essentially nothing but records of functions. And functions, as we all know, are the fundamental pillars of functional programming languages. This trivial observation is illustrated below. I’ll leave it to the reader to think about various implications of the use of a (covariant) IRecord representation for objects:

class Program
{
    static void Main()
    {
        for (var c = new Counter(); c.Get() < 10; c.Inc(1))
            Console.WriteLine(c.Get());
    }
}

interface IRecord<out T1, out T2>
{
    T1 First { get; }
    T2 Second { get; }
}

class Counter : IRecord<Func<int>, Action<int>>
{
    // Data
    private int _value;

    // Code - explicit implementation to hide First, Second
    Func<int> IRecord<Func<int>, Action<int>>.First { get { return () => _value; } }
    Action<int> IRecord<Func<int>, Action<int>>.Second { get { return i => _value += i; } }

    // Code - friendly "interface"
    public Func<int> Get { get { return ((IRecord<Func<int>, Action<int>>)this).First; } }
    public Action<int> Inc { get { return ((IRecord<Func<int>, Action<int>>)this).Second; } }
}

Why do we care? Well, it turns out that IEnumerable<T> and IObservable<T> tend to obscure the true meaning of the objects a bit by having many different methods to facilitate the task of enumeration and observation, respectively. The source of this apparent bloating is irrelevant (and in fact follows design guidelines of an object-oriented inspired framework); what matters more is to see how the two mentioned interfaces can be boiled down to their essence.

Minimalistic as we are, we’re going to drop the notion of error cases that manifest themselves through MoveNext throwing an exception and OnError getting called, respectively on IEnumerator<T> and IObserver<T>. For similar reasons of simplification, we’ll also not concern ourselves with the disposal of enumerators or subscriptions. The resulting picture looks as follows:

image

To consolidate things a bit further, we’ll collapse MoveNext/Current on the left, and OnNext/OnCompleted on the right. How so? Well, either getting or receiving the next element can provide a value or a termination signal. This is nothing but an optional value: a value paired with a Boolean indicating whether it’s present. Turns out we have such a thing in the framework, called Nullable<T>, but since one can’t nest those guys or use them on reference types, it doesn’t help much. Instead, we’ll represent the presence or absence of a value using an Option<T> type:

public abstract class Option<T>
{
    public abstract bool HasValue { get; }
    public abstract T Value { get; }

    public sealed class None : Option<T>
    {
        public override bool HasValue
        {
            get { return false; }
        }

        public override T Value
        {
            get { throw new InvalidOperationException(); }
        }

        public override string ToString()
        {
            return "None<" + typeof(T).Name + ">()";
        }
    }

    public sealed class Some : Option<T>
    {
        private T _value;

        public Some(T value)
        {
            _value = value;
        }

        public override bool HasValue
        {
            get { return true; }
        }

        public override T Value
        {
            get { return _value; }
        }

        public override string ToString()
        {
            return "Some<" + typeof(T).Name + ">(" + (_value == null ? "null" : _value.ToString()) + ")";
        }
    }
}

The subtypes None and Some are optional though convenient, hence I’ll leave them in. With this, IEnumerator<T> would boil down to an interface with a single method retrieving an Option<T>. When it returns a Some object, there was a next element and we got it; when it returns None, we’ve reached the end of the enumeration. Similarly for IObserver<T>: OnNext and OnCompleted are merged into a single method receiving an Option<T>. Interfaces with a single method have a name: they’re delegates. So both those types can be abbreviated to:

IObserver<T>  –>  Action<Option<T>>
IEnumerator<T>  –>  Func<Option<T>>

A quick recap: an observer is something you can give a value or tell it has reached the end of the observable object, hence it takes in an Option<T>; an enumerator is something you can get a value from but it can also signal the end of the enumerable object, hence it produces an Option<T>. In a more functional notation, one could write:

Option<T> –> ()
() –> Option<T>

Here the arrow indicates “goes to”, just as in lambda expressions, with the argument on the left and the return type on the right. All that has happened is reversing the arrows to go from an observer to an enumerator and vice versa. That’s the essence of dualization.
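To make this concrete, here’s a tiny hand-rolled illustration (a snippet of my own, using the Option<T> type defined above) of both delegate shapes:

// An "enumerator": every call pulls the next signal out of it.
int i = 0;
Func<Option<int>> enumerator = () => i++ == 0
    ? (Option<int>)new Option<int>.Some(42)
    : (Option<int>)new Option<int>.None();

// An "observer": every call pushes the next signal into it.
Action<Option<int>> observer = x => Console.WriteLine(x);

observer(enumerator()); // Some<Int32>(42)
observer(enumerator()); // None<Int32>()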

But we’re not done yet. Look one level up at the IEnumerable<T> and IObservable<T> interfaces. Those are single-method ones too, hence we can play the same trick as we did before. The IEnumerable<T> interface’s single method returns an IEnumerator<T>, which we already collapsed into a simple function above. And in a dual manner, IObservable<T>’s single method takes in an IObserver<T>, which we also collapsed above. This yields the following result:

IObservable<T>  –>  Action<Action<Option<T>>>
IEnumerable<T>  –>  Func<Func<Option<T>>>

If that isn’t a simplification, I don’t know what would be. An observable is nothing other than an action taking in an action taking in an observed value, while an enumerable is nothing other than a function returning a function returning a yielded value. Or, in concise functional notation:

(Option<T> –> ()) –> ()
() –> (() –> Option<T>)

Again, to go from one world to the other, it suffices to reverse the arrows to reach the dual form. In summary, have a look at the following figure:

image

 

Flat functions – FEnumerable and FObservable

Since we’ve flattened the imperative interfaces into flat functions over which we’re going to provide several operators, we need a name for the type to put those operators in. Though we’re not going to make things purely functional on the inside (as we’ll rely on side-effects to implement various operators), I still like to call it function-style enumerable and observable, hence the names FEnumerable and FObservable (not meant to be pronounceable), where F stands for Function as opposed to I for Interface. In addition, Ex additions will materialize to realize some layering as discussed below. The result, including FEnumerableEx2 that’s left as an exercise, is shown below:

image

 

Five essential operators, or maybe even less

To continue on our merry way towards the essence of LINQ, we’ll be providing five essential operators as the building blocks to construct most other operators out of. Needless to say, those operators will use the above flat function “interfaces” to do their work on. Let’s start with a couple of easy ones: Empty and Return.

 

Empty

The Empty operator is very straightforward and never deals with Option<T>.Some values, immediately signaling an Option<T>.None to indicate completion. Hence the produced collection is empty. How do we realize this operator in the enumerable and observable case? Not surprisingly, the implementation is straightforward in both cases:

public static class FEnumerable
{
    public static Func<Func<Option<T>>> Empty<T>()
    {
        return () => () => new Option<T>.None();
    }

First, the FEnumerable one. All it does is simply return a function that returns an end-of-sequence None signal in return to getting called. Notice the two levels of nesting needed to conform to the signature. The outer function is the one retrieving the enumerator, while the inner is the equivalent to MoveNext and Current. For absolute clarity:

image

On the FObservable side of things, we find a similar implementation shuffled around a little bit, as shown below:

public static class FObservable
{
    public static Action<Action<Option<T>>> Empty<T>()
    {
        return o => o(new Option<T>.None());
    }

What used to be output now becomes input: the None constructor call no longer appears in an output position but has moved to an input position. Similar for the observer, indicated with o, which has moved to an input position. Upon giving the observable object (the whole thing) an observer (o), the latter simply gets called with a None object indicating the end of the sequence. The inner call is equivalent to OnCompleted, while the whole lambda expression is equivalent to Subscribe.

image

The careful reader may spot an apparent difference in the multiplicity of the involved operations. Where one enumerable can be used to get multiple enumerators, it seems that one observable cannot be used with multiple observers. This is only how it looks, as duality comes to the rescue to explain this again. The statement for enumerables goes as follows: “multiple calls to GetEnumerator each return one IEnumerator”. The dual of that becomes “a single call to Subscribe can take in multiple IObservers”. While that’s not exactly the case in the real IObserver land, where you either wrap all of your observers in a single IObserver to achieve this effect, or make multiple calls to Subscribe (assuming – and that’s where the MinLINQ approach differs slightly – a call to Subscribe doesn’t block), it is literally true in FObservable. How so? Well, one can combine delegates using the + operator to achieve the effect of subscribing multiple observers at the same time:

Action<Option<int>> observer1 = x => Console.WriteLine("1 <- " + x);
Action<Option<int>> observer2 = x => Console.WriteLine("2 <- " + x);

var xs = FObservable.Return(1);
xs(observer1 + observer2);

The above will print Some(1) and None() twice, since both observers are getting it (in invocation order, coinciding with lexical order).

 

Return

The previous sample brings us seamlessly to the next operator: Return, which realizes a singleton enumerable or observable collection. Though this one seems easy as well, it’s getting a bit more complex in the enumerable case as we need to maintain state across calls to “MoveNext”. Moreover, we need to do so on a per-enumerator basis as they all need to have their own view on the sequence. In our observable case, for the reasons mentioned above, things are slightly simpler as we can just “fire and forget” all data upon receiving a call to Subscribe. (Exercise: how would you make Subscribe asynchronous with respect to the sequence producing its values? When is this useful and when is it harmful?)

Let’s first look at the Return operator realization in FEnumerable:

public static Func<Func<Option<T>>> Return<T>(T value)
{
    return () =>
    {
        int i = 0;
        return () =>
            i++ == 0
            ? (Option<T>)new Option<T>.Some(value)
            : (Option<T>)new Option<T>.None();
    };
}

The state local to the “enumerator block” contains a counter that keeps track of the number of MoveNext calls that have been made. The first time, we return a Some(value) object, and the second (and subsequent) time(s) we answer with None. Notice this has the implicit contract of considering a None value as a terminal in the grammar. If you want to enforce this policy, an exception could be raised if i reaches 2.
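For instance, a stricter variant (call it ReturnStrict; the name and sketch are my own) could enforce that policy as follows:

public static Func<Func<Option<T>>> ReturnStrict<T>(T value)
{
    return () =>
    {
        int i = 0;
        return () =>
        {
            // Enforce the "None is terminal" protocol: pulling beyond the end is an error.
            if (i > 1)
                throw new InvalidOperationException("Enumeration already completed.");

            return i++ == 0
                ? (Option<T>)new Option<T>.Some(value)
                : (Option<T>)new Option<T>.None();
        };
    };
}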

In the FObservable world, things are quite easy. Upon a subscription call, we signal a Some message followed by a None message to the observer function, like this:

public static Action<Action<Option<T>>> Return<T>(T value)
{
    return o =>
    {
        o(new Option<T>.Some(value));
        o(new Option<T>.None());
    };
}

 

Bind

Who says Return and knows about monads immediately thinks about Bind (>>= in Haskell). The Bind operator, known as SelectMany in LINQ, provides an essential combinator allowing us to compose objects in the monad. In our case, those monads are IEnumerable<T> and IObservable<T>. In a previous episode of my More LINQ series, I’ve explained the basic idea of monadic composition a bit further, as summarized in the figure below:

image

In the above, M<.> has to be substituted with either Func<Func<Option<.>>> or Action<Action<Option<.>>> to yield the signature for both FEnumerable’s and FObservable’s Bind operators. The implementation of the operator in the latter case is the more straightforward of the two:

public static Action<Action<Option<R>>> Bind<T, R>(this Action<Action<Option<T>>> source, Func<T, Action<Action<Option<R>>>> selector)
{
    return o => source(x =>
    {
        if (x is Option<T>.None)
        {
            o(new Option<R>.None());
        }
        else
        {
            selector(x.Value)(y =>
            {
                if (y is Option<R>.Some)
                    o(y);
            });
        }
    });
}

Here, upon subscribing to an observable using observer “o”, the operator itself subscribes to the source observable that was fed in to the function. It does so by providing an observer that takes in the received element as “x”. Inside the observer’s body, which gets called for every element raised by the source, “x” is analyzed to see whether or not the source has terminated. If not, Bind does its combining work by calling the selector function for the received element, getting back a new observable source “selector(x.Value)”. The goal of Bind is to surface the values raised on this source to the surface of the operator call. Hence, we subscribe to this computed source “selector(x.Value)” by providing an observer that takes in the received value as “y” and raises that to the surface by calling “o” (the external observer). Again we assume None is terminating the sequence, which could be enforced by keeping a bit of state (left as an exercise). We’ll see examples of operator usage later on.

(Exercise: What if we want the Subscribe method to return immediately, running the Bind in the background. How would you do so?)
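As a tiny sanity check ahead of those examples (a snippet of my own), here’s Bind composing two singleton observables; the observer sees Some(42) followed by None:

var ys = FObservable.Return(21).Bind(x => FObservable.Return(x * 2));
ys(y => Console.WriteLine(y)); // prints Some<Int32>(42), then None<Int32>()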

In the FEnumerable case, things get more complex as we need to keep track of where we are in the source and projected sequences across different calls to “MoveNext”. While we could realize this using a state machine (just like iterators would do), I’ve taken on the challenge to write a state-keeping set of loops by hand. It may well be optimized or tweaked but it seems to do its job. Important situations to keep in mind include encountering empty inner sequences (signaled by None), requiring us to loop till we eventually find an object to yield. It’s also important to properly return a Option<R>.None object when we reach the end of the outer source. One of the most essential parts of the code below is the storage of state outside the inner lambda, hence keeping per-enumerator state. Besides cursors into the outer and current inner sequences, we also keep the inner enumerator (recall the signature corresponding to IEnumerator<T>) in “innerE”.

public static Func<Func<Option<R>>> Bind<T, R>(this Func<Func<Option<T>>> source, Func<T, Func<Func<Option<R>>>> f)
{
    return () =>
    {
        var e = source();

        Option<T> lastOuter = new Option<T>.None();
        Option<R> lastInner = new Option<R>.None();
        Func<Option<R>> innerE = null;

        return () =>
        {
            do
            {
                while (lastInner is Option<R>.None)
                {
                    lastOuter = e();

                    if (lastOuter is Option<T>.None)
                    {
                        return new Option<R>.None();
                    }
                    else
                    {
                        innerE = f(lastOuter.Value)();
                    }

                    lastInner = innerE();
                    if (lastInner is Option<R>.Some)
                    {
                        return lastInner;
                    }
                }

                lastInner = innerE();
            } while (lastInner is Option<R>.None);

            return lastInner;
        };
    };
}

The reader is invited to make sense of the above at his or her own pace, keeping in mind the regular LINQ to Objects implementation is the following much more comprehensible code:

public static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> f)
{
    foreach (var item in source)
        foreach (var result in f(item))
            yield return result;
}

The interesting thing about the SelectMany implementation is that the types in the signature exactly tell you what to do: the main operation on an IEnumerable is to enumerate it using foreach. The only parameter you can do that on is source, but you can’t yield those elements as the output expects elements of type R and we got elements of type T. However, the function “f” accepts a T and produces an IEnumerable<R>, so if we call that one and enumerate the results, we get what we can yield. Simple.

This operator is essential to LINQ (and monads) in that it allows many other operators to be written in terms of it. Where and Select are two that pop to mind immediately, and we’ll come to those when we talk about FEnumerableEx (and FObservableEx) later.

 

Ana

An anamorphism is the fancy word for an operator that produces an M<T> out of something outside M<.>, by use of unfolding. Given some seed value and an iterator function, one can produce a potentially infinite sequence of elements. Implementation of this operator is straightforward in both cases, again with the enumerable case requiring some state:

public static Func<Func<Option<T>>> Ana<T>(T seed, Func<T, bool> condition, Func<T, T> next)
{
    return () =>
    {
        Option<T> value = new Option<T>.None();
        return () =>
            condition((value = new Option<T>.Some(
                value is Option<T>.None
                ? seed
                : next(value.Value))).Value)
            ? (Option<T>)new Option<T>.Some(value.Value)
            : (Option<T>)new Option<T>.None();
    };
}

For fun and giggles I wrote this one using conditional operator expressions only, with an assignment side-effect nicely interwoven. It’s left to the reader to write it in a more imperative style. Again, we’re assuming the enumerator function is not called after a None object has been received. The basic principle of the operator is clear and implementation would look like this in regular C# with iterators:

public static IEnumerable<T> Ana<T>(T seed, Func<T, bool> condition, Func<T, T> next)
{
    for (T t = seed; condition(t); t = next(t))
        yield return t;
}

On the FObservable side, things are simpler again (the main reason being that FEnumerable is hard because of its lazy nature and because we can’t use iterators):

public static Action<Action<Option<T>>> Ana<T>(T seed, Func<T, bool> condition, Func<T, T> next)
{
    return o =>
    {
        for (T t = seed; condition(t); t = next(t))
            o(new Option<T>.Some(t));
        o(new Option<T>.None());
    };
}

Again, the reader is invited to think about what it’d take to have this sequence getting generated in the background, as opposed to blocking the caller.
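Here’s a quick consumption check of my own, pulling the first five natural numbers out of the FEnumerable variant by hand and pushing them out of the FObservable variant:

// Pull side: drain the enumerator function until None shows up.
var nat5 = FEnumerable.Ana<int>(0, x => x < 5, x => x + 1);
var e = nat5();
Option<int> cur;
while ((cur = e()) is Option<int>.Some)
    Console.WriteLine(cur.Value);   // 0, 1, 2, 3, 4

// Push side: hand an observer to the observable function.
FObservable.Ana<int>(0, x => x < 5, x => x + 1)(x => Console.WriteLine(x));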

As an additional exercise, can you rewrite Return and Empty in terms of Ana, therefore making those two operators no longer primitives? Doing so will bring down the total of essentials to three: Ana, Cata and Bind:

image

 

Cata

The opposite of an anamorphism is a catamorphism, also known as Aggregate in LINQ. Its goal is to fold an M<T> into something outside M<.>, e.g. computing the sum of a sequence of numbers. Since this is a greedy operation, we can do it on the spot for both the FEnumerable and FObservable cases as shown below:

public static R Cata<T, R>(this Func<Func<Option<T>>> source, R seed, Func<R, T, R> f)
{
    var e = source();

    Option<T>.Some value;
    R result = seed;
    while ((value = e() as Option<T>.Some) != null)
    {
        result = f(result, value.Value);
    }

    return result;
}

First for the enumerable case, we simply run till we get a None object, continuously calling the aggregation function, starting with the seed value. In the observable case, things are equally simple:

public static R Cata<T, R>(this Action<Action<Option<T>>> source, R seed, Func<R, T, R> f)
{
    R result = seed;

    bool end = false;
    source(x =>
    {
        if (x is Option<T>.Some && !end)
            result = f(result, x.Value);
        else
            end = true; // or break using exception
    });

    return result;
}

This time we have to hook up an observer with the source and analyze what we got back. Notice the code above shows one approach to break out of or immunize an observer after a None message has been received. Notice though that if all constructor functions can be trusted (which is not the case with a raw Action or Func delegate), such protections wouldn’t be required as we’re defining a closed world of constructors and combinators. If the former group never emits sequences that don’t follow the described protocol and the latter never combines existing sequences into an invalid one (i.e. preserving the protocol properties), it shouldn’t be possible to fall off a cliff.
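To see Cata and Ana cooperate (a quick check of my own), here’s a factorial-style fold on the pull side and a sum on the push side:

var product = FEnumerable.Ana<int>(1, x => x <= 4, x => x + 1)
                         .Cata(1, (acc, x) => acc * x);
Console.WriteLine(product); // 24

var sum = FObservable.Ana<int>(1, x => x <= 4, x => x + 1)
                     .Cata(0, (acc, x) => acc + x);
Console.WriteLine(sum);     // 10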

 

Bridging the brave new world with the old-school one

Before getting into more operators layered on top of the essential ones provided above, we should spend a few minutes looking at ways to convert back and forth between the new functionally inspired “flat” world and the familiar interface-centric “bombastic” world of LINQ. In particular, can we establish the following conversions?

  • IEnumerable<T> to Func<Func<Option<T>>>
  • Func<Func<Option<T>>> to IEnumerable<T>
  • IObservable<T> to Action<Action<Option<T>>>
  • Action<Action<Option<T>>> to IObservable<T>

Obviously the answer is we can. Let’s focus on the first two as a starter. It’s clear that in order to go from an IEnumerable<T> to our new world of FEnumerable we should iterate the specified sequence. We should do so in a lazy manner such that upon every call to FEnumerable’s inner function (playing the enumerator’s role) we fetch an element out of the source IEnumerator<T>, but no earlier. In other words, we have to keep the iteration state which is represented by an IEnumerator<T> as the local state to the enumerator function:

public static Func<Func<Option<T>>> AsFEnumerable<T>(this IEnumerable<T> source)
{
    return () =>
    {
        var e = source.GetEnumerator();
        return () => e.MoveNext()
            ? (Option<T>)new Option<T>.Some(e.Current)
            : (Option<T>)new Option<T>.None();
    };
}

This should be fairly straightforward code to grasp, ensuring we properly terminate a (finite) sequence with a None object to signal completion. The opposite operation is easy as well, now calling a FEnumerable’s enumerator function, providing results to the caller in a lazy fashion by means of a typical C# iterator:

public static IEnumerable<T> AsEnumerable<T>(this Func<Func<Option<T>>> source)
{
    var e = source();
    Option<T>.Some value;
    while ((value = e() as Option<T>.Some) != null)
    {
        yield return value.Value;
    }
}

As soon as we encounter a None object, we’ll break out of the loop causing the consuming enumerator to terminate. Using the operators above, we can readily verify the back and forth conversions:

// IEnumerable -> FEnumerable
var xs = Enumerable.Range(0, 10).AsFEnumerable();
{
    var xse = xs();
    Option<int> x;
    while ((x = xse() as Option<int>.Some) != null)
        Console.WriteLine(x.Value);
}

// FEnumerable -> IEnumerable
var ys = xs.AsEnumerable();
{
    foreach (var y in ys)
        Console.WriteLine(y);
}

This is very convenient as we’ll be able to treat arrays and other enumerable collections as FEnumerable functions in the blink of an eye. Now we can start to mix and match typical LINQ to Objects operators with our own academic playground.

On to the dual world, we can also provide conversions between IObservable<T> and its FObservable counterpart, back and forth. Both are relatively easy to realize as well, but let’s start with new to old:

public static IObservable<T> AsObservable<T>(this Action<Action<Option<T>>> source)
{
    return Observable.Create<T>(o =>
    {
        source(x =>
        {
            if (x is Option<T>.Some)
                o.OnNext(x.Value);
            else
                o.OnCompleted();
        });
        return () => { };
    });
}

Here I’m using Rx’s Observable.Create operator to simplify the creation of an IObservable<T>, passing in an observer’s code body. Lambda parameter “o” is an IObserver<T>, so all we got to do is subscribe to our source (by means of just calling it, passing in an FObserver function) and forward received objects “x” to the external observer. As we don’t have a notion of running asynchronously in our little world, we simply return a no-op action delegate from the subscription function. Since all execution happens synchronously upon a Subscribe call to the produced IObservable<T>, there’s little for us to do in reaction to an unsubscribe invocation.

In the other direction, things are even simpler. We simply use an Rx extension method for IObservable<T> to subscribe given an OnNext and OnCompleted delegate:

public static Action<Action<Option<T>>> AsFObservable<T>(this IObservable<T> source)
{
    return o =>
    {
        source.Subscribe(x => o(new Option<T>.Some(x)), () => o(new Option<T>.None()));
    };
}

Again we can test this easily, this time using Observable.Range. Since that one runs asynchronously, we have to do a bit of synchronization to see the results printed nicely:

// IObservable -> FObservable
var evt = new ManualResetEvent(false);
var xs = Observable.Range(0, 10).AsFObservable();
{
    xs(x =>
    {
        if (x is Option<int>.Some)
            Console.WriteLine(x.Value);
        else
            evt.Set();
    });
}
evt.WaitOne();

// FObservable -> IObservable
var ys = xs.AsObservable();
{
    // We got this one synchronous inside.
    ys.Subscribe(Console.WriteLine);
}

The result of all this plumbing is summarized in the following diagram. The direct conversion between a FEnumerable and FObservable (and vice versa) is left to the reader as an interesting exercise:

image 

 

Where and Select for monadic dummies

While we leave the implementation of operators like Snoc (Cons in reverse, to construct sequences out of a single element and a sequence) and Concat (concatenating arbitrary sequences to one another) to the reader, we should focus on a few operators that can be realized using the essential building blocks provided before. In particular, we’ll implement Where and Select in terms of Bind, Empty and Return.

Recall what Bind does: it combines a sequence with sequences generated from a function call, collecting the elements of all sequences that result from those function calls. In a concrete sample: given a list of products and a way to get all the suppliers for each product, we can return a sequence of all suppliers across all products. Or with function arrows: IE<Product> –> (Product –> IE<Supplier>) –> IE<Supplier>. This is exactly the signature of Bind or SelectMany.

How can we use this to create a filter like Where? The answer is pretty simple, by controlling the “selector” function passed to Bind and make it analyze each element that’s passed in, deciding whether or not to return it to Bind. The “whether or not” part can be realized using a conditional either returning Return(element) or Empty(). And there we got our filtering logic:

public static Func<Func<Option<T>>> Where<T>(this Func<Func<Option<T>>> source, Func<T, bool> filter)
{
    return source.Bind(t => filter(t) ? FEnumerable.Return(t) : FEnumerable.Empty<T>());
}

A picture is worth a thousand words, so let’s have a look at the Where operator realization in terms of Bind:

image

And guess what, the FObservable implementation can be derived by mechanical translation from the one for FEnumerable:

public static Action<Action<Option<T>>> Where<T>(this Action<Action<Option<T>>> source, Func<T, bool> filter)
{
    return source.Bind(t => filter(t) ? FObservable.Return(t) : FObservable.Empty<T>());
}

In fact, the code is exactly the same with FEnumerable replaced by FObservable. If we had typedefs for the function signatures or static extension methods on a delegate type, we’d actually see both pieces of code collapse into the following “template”:

public static M<T> Where<T>(this M<T> source, Func<T, bool> filter)
{
    return source.Bind(t => filter(t) ? M<T>.Return(t) : M<T>.Empty());
}

Such an M<T> abstraction would be realized as a type constructor in Haskell and the packaging of both Return and Bind on M<T> would be realized by means of a type class that looks as follows:

class Monad m where
    return :: a -> m a
    (>>=)  :: m a -> (a -> m b) -> m b

The second function is Haskell’s infix operator for bind.

How can we realize Select using Bind and Return as well? The answer is again very straightforward: this time we simply apply the projection function to the object passed to the bind selector function and wrap the result using Return. Here’s the code for both worlds, again ready to be abstracted to M<T>:

public static Func<Func<Option<R>>> Select<T, R>(this Func<Func<Option<T>>> source, Func<T, R> selector)
{
    return source.Bind(t => FEnumerable.Return(selector(t)));
}
public static Action<Action<Option<R>>> Select<T, R>(this Action<Action<Option<T>>> source, Func<T, R> selector)
{
    return source.Bind(t => FObservable.Return(selector(t)));
}

Again a picture will make the above more clear:

image

With those extension methods in place, we can actually start writing LINQ expressions against FEnumerable and FObservable (function!) objects. That’s right: now you got a delegate you can dot into, thanks to the magic of extension methods. But using convenient LINQ syntax, we don’t even have to see any of that:

var res = (from x in Enumerable.Range(0, 10).AsFEnumerable()
           where x % 2 == 0
           select x + 1).AsEnumerable();

foreach (var x in res)
    Console.WriteLine(x);

Notice how we go back and forth between classic IEnumerable<T> and our FEnumerable implementation? But the key thing to see here is that our Where and Select operators are getting called. The result obviously prints 1, 3, 5, 7, 9, and to convince ourselves of the calls happening to our methods, we’ll have a look in the debugger:

image

I hope this suffices to convince the reader we got query expression syntax working around our MinLINQ implementation. It’s left to the reader to decipher the exact call stack we’re observing above. The same exercise can be repeated for the FObservable case, using the following equivalent code:

var res = (from x in Observable.Range(0, 10).AsFObservable()
           where x % 2 == 0
           select x + 1).AsObservable();

res.Subscribe(Console.WriteLine);
Console.ReadLine(); // Stuff happening on the background; don't exit yet

Since Bind is none other than SelectMany in disguise, we could rename it as such to enable it for use in LINQ as well, triggered by query expressions having multiple from clauses. In fact, to fully enable query expressions of that form, you’ll need a slight tweak to the SelectMany signature, as follows (same for the observable case of course):

public static Func<Func<Option<R>>> SelectMany<T, C, R>(this Func<Func<Option<T>>> source, Func<T, Func<Func<Option<C>>>> selector, Func<T, C, R> result)
{
    // Left as an exercise.
}
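Should you want to check your answer, one possible realization (a sketch of mine) simply composes the existing Bind and Return:

public static Func<Func<Option<R>>> SelectMany<T, C, R>(this Func<Func<Option<T>>> source, Func<T, Func<Func<Option<C>>>> selector, Func<T, C, R> result)
{
    // Bind over the source, bind over each projected inner sequence, and wrap the
    // combined element back up using Return.
    return source.Bind(t => selector(t).Bind(c => FEnumerable.Return(result(t, c))));
}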

If you implement this one correctly, you will be able to run a query of the following shape:

var res = (from x in Enumerable.Range(1, 5).AsFEnumerable()
           from y in Enumerable.Range(1, x).AsFEnumerable()
           select new string((char)('a' + x - 1), y)).AsEnumerable();

foreach (var item in res)
    Console.WriteLine(item);

This will print the following output:

a
b
bb
c
cc
ccc
d
dd
ddd
dddd
e
ee
eee
eeee
eeeee

Finally, just to go nuts with some back-and-forth transitioning between all worlds (as shown in our diagram before), an all-inclusive sample mixing all sorts of execution:

var res = (from y in
               (from x in Enumerable.Range(0, 20).AsFEnumerable()
                where x % 2 == 0
                select x + 1).AsEnumerable()
               .ToObservable() // Rx
               .AsFObservable()
           where y % 3 == 0
           select y * 2)
          .AsObservable()
          .ToEnumerable(); // Rx

foreach (var item in res)
    Console.WriteLine(item);

The interested reader is invited to create short-circuiting operators to provide a direct path for .AsEnumerable().ToObservable().AsFObservable() and .AsObservable().ToEnumerable().AsFEnumerable(). Refer back to the diagram to see where those operators’ corresponding arrows occur.

 

Fueling Range and Sum with Ana and Cata

To conclude this post, let’s also have a look at how to derive constructor and aggregator operators from our Ana and Cata primitives. As a sequence constructor we’ll consider Range and for the aggregator we’ll consider Sum. Let’s start with Range in terms of Ana:

public static Func<Func<Option<int>>> Range(int from, int length)
{
    return FEnumerable.Ana<int>(from, x => x < from + length, x => x + 1);
}

and (again exactly the same code thanks to the shared primitives)

public static Action<Action<Option<int>>> Range(int from, int length)
{
    return FObservable.Ana<int>(from, x => x < from + length, x => x + 1);
}

Now we can get rid of the AsFEnumerable() use in our samples when creating a range and construct our range sequence immediately in our world (similar example for FObservable of course):

var res = (from x in FEnumerableEx.Range(0, 10)
           where x % 2 == 0
           select x + 1).AsEnumerable();

foreach (var x in res)
    Console.WriteLine(x);

As an exercise, also abstract the AsEnumerable call followed by foreach into a Run method, as seen in System.Interactive, so that you can write the code below. Implement this operator in terms of Cata (!):

(from x in FEnumerableEx.Range(0, 10)
 where x % 2 == 0
 select x + 1).Run(
     Console.WriteLine
);

(Question: could you benefit from such an operator in FObservable as well?)
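One possible Run sketch of my own, built on Cata as suggested; the fold is used purely for its side-effect and the accumulated value is thrown away:

public static void Run<T>(this Func<Func<Option<T>>> source, Action<T> action)
{
    source.Cata(0, (acc, x) => { action(x); return acc; });
}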

For the Sum realization we can use Cata:

public static int Sum(this Func<Func<Option<int>>> source)
{
    return source.Cata(0, (sum, x) => sum + x);
}

and

public static int Sum(this Action<Action<Option<int>>> source)
{
    return source.Cata(0, (sum, x) => sum + x);
}

The following example illustrates how to sum 1 to 10 using Range and Sum:

Console.WriteLine(FEnumerableEx.Range(1, 10).Sum());
Console.WriteLine(FObservableEx.Range(1, 10).Sum());

Both print 55 just fine.

Implement more aggregation operators as found in the Standard Query Operators. Also think about how to implement those over nullable value types (e.g. Sum with int?). Could you reuse Option<T> as an alternative to nullables? Could you reuse monadic computation to carry out nullable arithmetic (tip: the Maybe monad)? A few aggregates that some people don’t see as aggregates include All, Any, First, Last, ElementAt, and more. Don’t forget to implement those either (most of them should be a one-liner making a single call to Cata). As an additional caveat, the following implementation of Average is inadequate (why?):

public static double Average(this Func<Func<Option<int>>> source)
{
    return (double)source.Sum() / source.Count();
}
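The likely objection is that Sum and Count each restart the source, so the sequence gets enumerated twice (wasteful, and outright wrong for one-shot or side-effecting sources). A single-pass alternative, sketched by me using one Cata over an anonymous-typed accumulator, could look like this:

public static double Average(this Func<Func<Option<int>>> source)
{
    // Accumulate sum and count together in a single traversal; an empty source yields NaN.
    var acc = source.Cata(new { Sum = 0, Count = 0 },
                          (a, x) => new { Sum = a.Sum + x, Count = a.Count + 1 });
    return (double)acc.Sum / acc.Count;
}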

 

Conclusion

Boiling down LINQ to its core essence can be fun and a great eye-opener to many users of the technology. While optimizations often mandate a lower degree of layering, it’s good to have an idea of the conceptual layering of various operators to see which ones are essential and which ones are not so much. If kids can build castles out of Lego blocks, surely every self-respecting developer should be able to exploit the expressive power of a few primitive building blocks to create great libraries and applications. Choosing the right set of primitives can get you a long way in such a design, as illustrated in this post. Readers who can’t get enough of essential primitives and the composition thereof are cordially invited to have a go at another Crazy Sunday post titled Unlambda .NET – With a Big Dose of C# 3.0 Lambdas (and many others in that category).

In the continuation of my “More LINQ with System.Interactive” series we’ll get back to less academic stuff with System.Interactive. And before I forget: a happy 2010!


With the recent release of the Reactive Extensions for .NET (Rx) on DevLabs, you’ll hear quite a bit about reactive programming, based on the IObservable<T> and IObserver<T> interfaces. A great amount of resources is available on Channel 9. In this series, I’ll focus on the dual of the System.Reactive assembly, which is System.Interactive, providing a bunch of extensions to the LINQ Standard Query Operators for IEnumerable<T>. In today’s installment we’ll talk about new combinator operators provided by EnumerableEx:

image

 

Combine and conquer?

Combinators are at the heart of LINQ’s expressive power, allowing sequences to be combined into new ones. In earlier posts, I’ve shown the essence of monadic computation through the following illustration:

image

It’s fair to say that SelectMany (or Bind) is the mother of all combinators, as many others can be derived from it (Exercise: implement Where and Select using SelectMany and a limited number of auxiliary operators like Return). In today’s post we’ll look at various new combinators added to the IEnumerable<T> set of operators.

So, what’s a combinator? In one world view (the one we’re using), it’s an operator that combines one or more instances of a given entity into a new such entity. For example, in functional programming we got S, K and I combinators that act on functions:

S x y z = x z (y z)
K x y = x
I x = x

A more precise definition can be found on http://en.wikipedia.org/wiki/Combinator, for those interested in more foundational stuff. In our case, we’ll combine one or more IEnumerable<T> instances into a new IEnumerable<R> (where R can be different from T).

 

Concat, now with more arguments

LINQ to Objects has always had a Concat operator, with the following signature:

public static IEnumerable<TSource> Concat<TSource>(this IEnumerable<TSource> first, IEnumerable<TSource> second);

However, this is merely a special case of a more general version of Concat, introduced in EnumerableEx:

public static IEnumerable<TSource> Concat<TSource>(params IEnumerable<TSource>[] sources);
public static IEnumerable<TSource> Concat<TSource>(this IEnumerable<IEnumerable<TSource>> sources);

The second one is the core operator we’re talking about here, with the first overload providing convenience due to the lack of a “params enumerable” feature in the language. The Concat operator is simple to understand, simply yielding all TSource objects from all sequences in the sources parameter. If an error occurs while enumerating any of the sequences, the resulting concatenated sequence terminates as well. In fact, this operator is very similar to OnErrorResumeNext, which differs only in that the error condition is ignored.

Below is a sample illustrating the main scenarios:

new[] {
    new[] { 1, 2 },
    new[] { 3, 4 },
    new[] { 5, 6 }
}
.Concat()
.Materialize(/* for pretty printing */)
.Run(Console.WriteLine);

new[] {
    new[] { 1, 2 },
    new[] { 3, 4 }.Concat(EnumerableEx.Throw<int>(new Exception())),
    new[] { 5, 6 }
}
.Concat()
.Materialize(/* for pretty printing */)
.Run(Console.WriteLine);

The first sample will print numbers 1 through 6, while the second one will print 1 through 4 and an error notification.

image

 

Merge, a parallel Concat

Where Concat will proceed through the sources collection sequentially, guaranteeing in-order retrieval of data, one could get all the data from the sources in a parallel manner as well. To do so, Merge spawns workers that drain all of the sources in parallel, flattening or “sinking” all the results to the caller:

public static IEnumerable<TSource> Merge<TSource>(params IEnumerable<TSource>[] sources);
public static IEnumerable<TSource> Merge<TSource>(this IEnumerable<IEnumerable<TSource>> sources);
public static IEnumerable<TSource> Merge<TSource>(this IEnumerable<TSource> leftSource, IEnumerable<TSource> rightSource);

The three overloads share the same signatures as the Concat equivalents, with the second one being the most general overload again. The same sample as for Concat can be used to illustrate the working:

new[] {
    new[] { 1, 2 },
    new[] { 3, 4 },
    new[] { 5, 6 }
}
.Merge()
.Materialize(/* for pretty printing */)
.Run(Console.WriteLine);

new[] {
    new[] { 1, 2 },
    new[] { 3, 4 }.Concat(EnumerableEx.Throw<int>(new Exception())),
    new[] { 5, 6 }
}
.Merge()
.Materialize(/* for pretty printing */)
.Run(Console.WriteLine);

What the results are will depend on the mood of your task scheduler. Either way, for the first sample you should get to see all of the numbers from 1 through 6 getting printed, in any order (though 1 will come before 2, 3 before 4 and 5 before 6). On my machine I got 1, 3, 5, 4, 2, 6 in my first run. For the second sample, it’s entirely possible to see 5 and 6 getting printed before the exception for the second source is reached. But then that’s what you expect from parallel computation, don’t you?

Merge can speed up your data retrieval operations significantly, if you don’t care about the order in which results are returned. For example, you could cause two LINQ to SQL queries that provide stock quotes to run in parallel by using Merge, followed by a client-side duplicate entry elimination technique:

var stocks =
    from quote in
        EnumerableEx.Merge(
            (from quote in t1 select quote).Do(q => Console.WriteLine("t1: " + q)),
            (from quote in t2 select quote).Do(q => Console.WriteLine("t2: " + q))
        )
    group quote by quote.Symbol into g
    select new { g.Key, Price = g.Average(p => p.Price) };

stocks.Run(Console.WriteLine);

Results could look as follows, with the main idea being the parallel retrieval of both query results:

Query: SELECT Symbol, Price FROM Trader1
Query: SELECT Symbol, Price FROM Trader2
t2: { Symbol = MSFT, Price = 30.94 }
t1: { Symbol = MSFT, Price = 30.99 }
t1: { Symbol = ORCL, Price = 24.92 }
t1: { Symbol = GOOG, Price = 618.35 }
t1: { Symbol = AAPL, Price = 209.10 }
t2: { Symbol = ORCL, Price = 25.06 }
t2: { Symbol = GOOG, Price = 610.25 }
t2: { Symbol = AAPL, Price = 204.99 }
{ Key = MSFT, Price = 30.965 }
{ Key = ORCL, Price = 24.99 }
{ Key = GOOG, Price = 614.30 }
{ Key = AAPL, Price = 207.045 }

image

(Note: behavior in face of an exception will depend on timing and is not included in the diagram.)

 

Amb, a racing game

Amb is the ambiguous operator as introduced by McCarthy in 1963. Because of its nostalgic background, it’s been chosen to preserve the name as-is instead of expanding it. What’s so ambiguous about this operator? Well, the idea is that Amb allows two sequences to race to provide the first result causing the winning sequence to be elected as the one providing the resulting sequence from the operator call. The operator’s signatures make this clear:

public static IEnumerable<TSource> Amb<TSource>(params IEnumerable<TSource>[] sources);
public static IEnumerable<TSource> Amb<TSource>(this IEnumerable<IEnumerable<TSource>> sources);
public static IEnumerable<TSource> Amb<TSource>(this IEnumerable<TSource> leftSource, IEnumerable<TSource> rightSource);

Again, there are three overloads, just like for Concat and Merge. To provide a sample of the operator’s behavior, use the following simple implementation of a Delay operator:

public static IEnumerable<TSource> Delay<TSource>(this IEnumerable<TSource> source, int delay)
{
    return EnumerableEx.Defer(() => { Thread.Sleep(delay); return source; });
}

Now we can write the following two test cases:

var src1 = new[] { 1, 2 }.Delay(300);
var src2 = new[] { 3, 4 }.Delay(400);
src1.Amb(src2).Run(Console.WriteLine);

var src3 = new[] { 5, 6 }.Delay(400);
var src4 = new[] { 7, 8 }.Delay(300);
src3.Amb(src4).Run(Console.WriteLine);

The expected result will be that src1 and src4 win their Amb battles against src2 and src3, respectively. One practical use for this operator is to have two or more redundant data sources, all containing the same data, fight to provide the quickest answer to a query. Here’s a sample illustrating this:

var stocks =
    EnumerableEx.Amb(
        (from quote in t1 select quote).Do(q => Console.WriteLine("t1: " + q)),
        (from quote in t2 select quote).Do(q => Console.WriteLine("t2: " + q))
    );

stocks.Run(Console.WriteLine);

Results could look as follows, assuming t2 was the quickest to provide an answer:

Query: SELECT Symbol, Price FROM Trader1
Query: SELECT Symbol, Price FROM Trader2
t2: { Symbol = MSFT, Price = 30.94 }
t2: { Symbol = ORCL, Price = 25.06 }
t2: { Symbol = GOOG, Price = 610.25 }
t2: { Symbol = AAPL, Price = 204.99 }
{ Key = MSFT, Price = 30.94 }
{ Key = ORCL, Price = 25.06 }
{ Key = GOOG, Price = 610.25 }
{ Key = AAPL, Price = 204.99 }

image

 

Repeat, again and (maybe) again

The purpose of Repeat is self-explanatory and it could be seen as a constructor function as well. Two categories of overloads exist: one that takes a single element and an optional repeat count (unspecified = infinite) and another that takes a sequence and an optional repeat count. While the former is more of a constructor, the latter is more of a combinator over a single input sequence:

public static IEnumerable<TSource> Repeat<TSource>(this IEnumerable<TSource> source);
public static IEnumerable<TSource> Repeat<TSource>(TSource value);
public static IEnumerable<TSource> Repeat<TSource>(this IEnumerable<TSource> source, int repeatCount);
public static IEnumerable<TSource> Repeat<TSource>(TSource value, int repeatCount);
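For the sequence-repeating overload, a plausible iterator-based sketch (mine, not the actual Rx source) looks like this:

public static IEnumerable<TSource> Repeat<TSource>(this IEnumerable<TSource> source, int repeatCount)
{
    // Re-enumerate the source once per requested repetition.
    for (int i = 0; i < repeatCount; i++)
        foreach (var item in source)
            yield return item;
}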

Samples don’t need much further explanation either:

EnumerableEx.Repeat(1).Take(5).Run(Console.WriteLine);
EnumerableEx.Repeat(2, 5).Run(Console.WriteLine);

new[] { 3, 4 }.Repeat().Take(4).Run(Console.WriteLine);
new[] { 5, 6 }.Repeat(2).Run(Console.WriteLine);

It goes almost without saying that an input sequence causing an exception will also terminate the enumeration of a repeated form of the same sequence:

new[] { 5, 6 }.Concat(EnumerableEx.Throw<int>(new Exception())).Repeat(2).Run(Console.WriteLine);

image

 

Zip ‘em together

Introduced in .NET 4.0, I’ve covered the new Zip operator already in my earlier post on C# 4.0 Feature Focus - Part 3 - Intermezzo: LINQ's new Zip operator. Rx ports this operator back to the .NET 3.5 System.Interactive library for consistency. In summary, Zip walks two sequences hand-in-hand, combining their respective yielded elements using a given function to produce a result. The signature is as follows:

public static IEnumerable<TResult> Zip<TFirst, TSecond, TResult>(this IEnumerable<TFirst> first, IEnumerable<TSecond> second, Func<TFirst, TSecond, TResult> resultSelector);

A simple example is shown below:

Enumerable.Range(1, 26).Zip(
    "abcdefghijklmnopqrstuvwxyz",
    (i, c) => "alpha[" + i + "] = " + c
).Run(Console.WriteLine);

Here, the first sequence is an IEnumerable<int> and the second one is a string, hence an IEnumerable<char>. The result is a table of mappings between numbers and letters. As an exercise, implement the following overload of Select using Zip and Generate, in terms of the more commonly used overload of Select that doesn’t take a position in the selector function:

public static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, int, TResult> selector);
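One way to tackle that exercise (a sketch of mine; Enumerable.Range stands in for Generate here) pairs each element with its position and defers to the positional selector:

public static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, int, TResult> selector)
{
    // Zip the source with an index sequence and apply the positional selector.
    return source.Zip(Enumerable.Range(0, int.MaxValue), (x, i) => selector(x, i));
}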

One thing that’s interesting about the interactive version of Zip is its left-to-right characteristic with regards to enumeration of first and second. Internally, it does something along the following lines:

while (first.MoveNext() && second.MoveNext())
    …

In other words, “first” is dominant in that it can prevent a MoveNext call on second from happening, e.g. because of an exception getting thrown, non-termination (stuck forever) or termination (returning false). The following matrix shows the implications of this:

image

It’s left as an exercise to the reader to implement the right-hand side behavior (notice the transposition symmetry!) for fun, where a Zip could fetch results from both sources simultaneously, combining their results or exceptions into produced results. What are advantages and disadvantages of such an approach? As an additional question, think about ways to detect and report an asymmetric zip, where one of both sides still has an element while the other side has signaled termination.

Finally, the diagram illustrating some of the regular operations of Zip. Other combinations of behavior can be read from the matrix above.

image

 

Scan, a running aggregation operator

Readers familiar with the LINQ to Objects APIs will know about the Aggregate operator, which we also mentioned before when talking about the new Generate operator (as the opposite of Aggregate). Aggregate “folds” or reduces a sequence of elements into a single value, eating the elements one by one using some specified function. However, sometimes you may not want to lose the intermediate results, e.g. if you want to compute a running sum or so. Scan allows you to do so:

public static IEnumerable<TSource> Scan<TSource>(this IEnumerable<TSource> source, Func<TSource, TSource, TSource> accumulator);
public static IEnumerable<TAccumulate> Scan<TSource, TAccumulate>(this IEnumerable<TSource> source, TAccumulate seed, Func<TAccumulate, TSource, TAccumulate> accumulator);

You’ll see big similarities with the existing Aggregate operator when looking at the signatures above, and use of the operator is straightforward as well:

Enumerable.Range(1, 10)
.Scan((sum, i) => sum + i)
.Run(Console.WriteLine);

Enumerable.Range(2, 9).Reverse()
.Scan(3628800, (prod, i) => prod / i)
.Run(Console.WriteLine);

The first sample will simply print 1, 1+2 = 3, 3+3 = 6, 6+4 = 10, … In the second sample, a seed value is used to illustrate an inverse factorial computation, dividing a given value by subsequent descending values (from 10 to 2).
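Conceptually, the seeded Scan overload behaves like the following iterator (my own approximation, not the shipped implementation):

public static IEnumerable<TAccumulate> Scan<TSource, TAccumulate>(this IEnumerable<TSource> source, TAccumulate seed, Func<TAccumulate, TSource, TAccumulate> accumulator)
{
    var acc = seed;
    foreach (var item in source)
    {
        acc = accumulator(acc, item);
        yield return acc;   // yield every intermediate accumulation result
    }
}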

image

 

SelectMany

Finally, as an honor to the monadic bind operator, a new overload was added for SelectMany :-). Its signature is shown below, and it’s left to the reader to figure out what it does (simple):

public static IEnumerable<TOther> SelectMany<TSource, TOther>(this IEnumerable<TSource> source, IEnumerable<TOther> other);
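My reading of it: every source element is ignored and the other sequence is yielded once per element, i.e. it behaves like the sketch below:

public static IEnumerable<TOther> SelectMany<TSource, TOther>(this IEnumerable<TSource> source, IEnumerable<TOther> other)
{
    // For every element of the source, ignore it and yield the other sequence once.
    return source.SelectMany(_ => other);
}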

 

Next on More LINQ

Functionally inspired constructs that allow enumerables to be shared and their side-effects to be tamed.


With the recent release of the Reactive Extensions for .NET (Rx) on DevLabs, you’ll hear quite a bit about reactive programming, based on the IObservable<T> and IObserver<T> interfaces. A great amount of resources is available on Channel 9. In this series, I’ll focus on the dual of the System.Reactive assembly, which is System.Interactive, providing a bunch of extensions to the LINQ Standard Query Operators for IEnumerable<T>. In today’s installment we’ll talk about the materialization and dematerialization operators provided by EnumerableEx:

image

 

von Neumann was right

Code and data are very similar animals, much more similar than you may expect them to be. We can approach this observation from two different angles, one being a machine-centric view. Today’s computers are realizations of von Neumann machines where instructions and data are treated on the same footing from a memory storage point of view. While this is very useful, it’s also the source of various security-related headaches such as script or SQL injection and data execution through e.g. stack overruns (Data Execution Prevention is one mitigation).

Another point of view goes back to the foundational nature of programming, in particular the essentials of functional programming, where functions are used to represent data. One example is Church numerals, which are functions that are behaviorally equivalent to natural numbers (realized by repeated application of a function, equal in number to the natural number being represented). This illustrates how something that seems exclusively code-driven can be used to represent or mimic data.

If the above samples seem far-fetched or esoteric, there are a variety of more familiar grounds where the “code as data” paradigm is used or exploited. One such sample is LISP where code and data representation share the same syntactical form and where the technique of quotation can be used to represent a code snippet as data for runtime inspection and/or manipulation. This is nothing other than meta-programming in its earliest form. Today we find exactly the same principle back in C#, and other languages, through expression trees. The core property here is so-called homo-iconicity, where code can be represented as data without having to resort to a different syntax (homo = same; iconic = appearance):

Func<int, int> twiceD = x => x * 2;
Expression<Func<int, int>> twiceE = x => x * 2;

What what does all of this have to do with enumerable sequences? Spot on! The matter is that sequences seem to be a very data-intensive concept, and sure they are. However, the behavior and realization of such sequences, e.g. through iterators, can be very code-intensive as well, to such an extent that we introduced means to deal with exceptions (Catch for instance) and termination (Repeat, restarting after completing). This reveals that it’s useful to deal with all possible states a sequence can go through. Guess what, state is data.

 

The holy trinity of IEnumerator<T> and IObserver<T> states

In all the marble diagrams I’ve shown before, there was a legend consisting of three potential states an enumerable sequence can go through as a result of iteration. Those three states reflect possible responses to a call to MoveNext caused by the consumer of the sequence:

image

In the world of IObserver<T>, the dual to IEnumerator<T> as we saw in earlier episodes, those three states are reflected in the interface definition directly, with three methods:

// Summary:
//     Supports push-style iteration over an observable sequence.
public interface IObserver<T>
{
    // Summary:
    //     Notifies the observer of the end of the sequence.
    void OnCompleted();
    //
    // Summary:
    //     Notifies the observer that an exception has occurred.
    void OnError(Exception exception);
    //
    // Summary:
    //     Notifies the observer of a new value in the sequence.
    void OnNext(T value);
}

Instead of having an observer getting called on any of those three methods, we could equally well record the states “raised” by the observable, which turns calls (code) into object instances (data) of type Notification<T>. This operation is called materialization. Thanks to dualization, the use of Notification<T> can be extended to the world of enumerables as well.

image

Notification<T> is a discriminated union with three notification kinds, reflecting the three states we talked about earlier:

public enum NotificationKind
{
    OnNext = 0,
    OnError = 1,
    OnCompleted = 2,
}

 

It’s a material dual world

Materialization is the act of taking a plain enumerable and turning it into a data-centric view based on Notification<T>. Dematerialization reverses this operation, going back to the code-centric world. Thanks to this ability to go back and forth between the two worlds of code and data, we can use LINQ over notification sequences and put the result back into the regular familiar IEnumerable<T> world. A figure makes this clear:

image

The power of this lies in the ability to use whichever domain is more convenient to perform operations over a sequence. Maybe you want to do thorough analysis of error conditions, corresponding to the OnError notification kind, or maybe it's more convenient to create a stream of notification objects before turning them into a "regular" sequence of objects that could exhibit certain additional behavior (like error conditions). This is exactly the same as the tricks played in various other fields, like mathematics, where one can do Fourier analysis in either the time or the frequency domain. Sometimes one is more convenient than the other; all that counts is knowing there are reliable ways to go back and forth.

image

(Note: For the Queryable sample, you may want to end up in the bottom-right corner, so the AsQueryable call is often omitted.)

 

Materialize and Dematerialize

What remains to be said in this post are the signatures of the operators and a few samples. First, the signatures:

public static IEnumerable<Notification<TSource>> Materialize<TSource>(this IEnumerable<TSource> source);
public static IEnumerable<TSource> Dematerialize<TSource>(this IEnumerable<Notification<TSource>> source);
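
Before the samples, here's a sketch of what both operators could look like on the inside. This is an assumption about one possible implementation (using the Notification<T> shape sketched earlier), not the shipped code:

static IEnumerable<Notification<TSource>> Materialize<TSource>(this IEnumerable<TSource> source)
{
    using (var e = source.GetEnumerator())
    {
        while (true)
        {
            Notification<TSource> n;
            try
            {
                // Only the calls that can throw live inside the try-block.
                n = e.MoveNext()
                    ? (Notification<TSource>)new Notification<TSource>.OnNext(e.Current)
                    : new Notification<TSource>.OnCompleted();
            }
            catch (Exception ex)
            {
                n = new Notification<TSource>.OnError(ex);
            }

            yield return n;

            if (n.Kind != NotificationKind.OnNext)
                yield break; // a terminal notification ends the materialized sequence
        }
    }
}

static IEnumerable<TSource> Dematerialize<TSource>(this IEnumerable<Notification<TSource>> source)
{
    foreach (var n in source)
    {
        switch (n.Kind)
        {
            case NotificationKind.OnNext:
                yield return ((Notification<TSource>.OnNext)n).Value;
                break;
            case NotificationKind.OnError:
                throw ((Notification<TSource>.OnError)n).Exception;
            default: // OnCompleted
                yield break;
        }
    }
}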

An example of materialization is shown below, where we take a simple range generator to materialize. We expect to see OnNext notifications for all the numeric values emitted, terminated by a single OnCompleted call:

Enumerable.Range(1, 10)
.Materialize()
.Run(Console.WriteLine);

This prints:

OnNext(1)
OnNext(2)
OnNext(3)
OnNext(4)
OnNext(5)
OnNext(6)
OnNext(7)
OnNext(8)
OnNext(9)
OnNext(10)
OnCompleted()

A sample where an exception is triggered by the enumerator is shown below. Notice the code won’t blow up when enumerating over the materialized sequence: the exception is materialized as a passive exception object instance in an error notification.

Enumerable.Range(1, 10).Concat(EnumerableEx.Throw<int>(new Exception()))
.Materialize()
.Run(Console.WriteLine);

The result is as follows:

OnNext(1)
OnNext(2)
OnNext(3)
OnNext(4)
OnNext(5)
OnNext(6)
OnNext(7)
OnNext(8)
OnNext(9)
OnNext(10)
OnError(System.Exception)

Starting from a plain IEnumerable<T> the grammar of notifications to be expected is as follows:

OnNext* ( OnCompleted | OnError )?

In the other direction, starting from the world of IEnumerable<Notification<T>>, one can write down a richer set of sequences, defined by the following grammar:

( OnNext | OnCompleted | OnError )*

For example:

var ns = new Notification<int>[] {
    new Notification<int>.OnNext(1),
    new Notification<int>.OnNext(2),
    new Notification<int>.OnCompleted(),
    new Notification<int>.OnNext(3),
    new Notification<int>.OnNext(4),
    new Notification<int>.OnError(new Exception()),
    new Notification<int>.OnNext(5),
};

Dematerializing this sequence of notifications will produce an enumerable sequence that will run no further than the first OnCompleted or OnError:

ns
.Dematerialize()
.Run(Console.WriteLine);

This prints 1 and 2 and then terminates. The reason this can still be useful is to create a stream of notifications that will be pre-filtered before doing any dematerialization operation on it. For example, a series of batches could be represented in the following grammar:

( OnNext* OnCompleted )*

If the user requests to run n batches, the first n – 1 OnCompleted notifications can be filtered out using some LINQ query expression, before doing dematerialization.
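
As a sketch of what such a query could look like (TakeBatches is a hypothetical helper name, not an Rx operator), a stateful filter can swallow the first n - 1 OnCompleted markers and stop right after the n-th one:

static IEnumerable<Notification<T>> TakeBatches<T>(this IEnumerable<Notification<T>> source, int n)
{
    int completed = 0;
    foreach (var notification in source)
    {
        if (notification.Kind == NotificationKind.OnCompleted)
        {
            if (++completed == n)
            {
                yield return notification; // the n-th OnCompleted terminates the result
                yield break;
            }
            continue; // swallow the first n - 1 batch separators
        }
        yield return notification;
    }
}

With that in place, something like batches.TakeBatches(2).Dematerialize() should yield the elements of the first two batches as one uninterrupted sequence.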

Finally, a sample of some error-filtering code going back and forth between IEnumerable<T> and IEnumerable<Notification<T>> showing practical use for those operators when doing sophisticated error handling:

var xs1 = new[] { 1, 2 }.Concat(EnumerableEx.Throw<int>(new InvalidOperationException()));
var xs2 = new[] { 3, 4 }.Concat(EnumerableEx.Throw<int>(new ArgumentException()));
var xs3 = new[] { 5, 6 }.Concat(EnumerableEx.Throw<int>(new OutOfMemoryException()));
var xs4 = new[] { 7, 8 }.Concat(EnumerableEx.Throw<int>(new ArgumentException()));

var xss = new[] { xs1, xs2, xs3, xs4 };
var xns = xss.Select(xs => xs.Materialize()).Concat();

var res = from xn in xns
          let isError = xn.Kind == NotificationKind.OnError
          let exception = isError ? ((Notification<int>.OnError)xn).Exception : null
          where !isError || exception is OutOfMemoryException
          select xn;

res.Dematerialize().Run(Console.WriteLine);

Given some input sequences, we materialize and concatenate all of them into sequence xns. Now we write a LINQ query over the notifications to filter out exceptions, unless the exception is a critical OOM one (you could add others to this list). The result is we see 1 through 6 being printed to the screen. (Question: What’s the relationship to OnErrorResumeNext that we saw in the previous post? What’s similar, what’s different?)

 

Exercises

As an exercise, try to implement the following operators in a notification-oriented manner:

  1. Catch
    (tip: use SelectMany and lots of conditional expressions)
  2. Finally
    (tip: use SelectMany and Defer)
  3. OnErrorResumeNext – overload taking two IEnumerable<TSource> sequences
    (tip: use TakeWhile)
  4. Retry – overload with a retry count
    (tip: recursion, ignore stack overflow conditions)

The skeleton code for those operators is shown below:

return source
    .Materialize()
    // Your stuff here
    .Dematerialize();
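
To give a feel for the pattern, here's one possible take on exercise 3, the overload taking two sequences. Treat it as a sketch rather than the reference answer; it assumes Dematerialize simply runs to the end when no terminal notification shows up, and it doesn't pin down what should happen when the second sequence fails:

static IEnumerable<TSource> OnErrorResumeNext<TSource>(this IEnumerable<TSource> source, IEnumerable<TSource> next)
{
    return source
        .Materialize()
        .TakeWhile(n => n.Kind == NotificationKind.OnNext) // drop the terminal OnError/OnCompleted
        .Dematerialize()
        .Concat(next);
}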

All-inclusive unit test:

    new[] { 1, 2 }
        .Finally(() => Console.WriteLine("Finally inner"))
    .Concat(EnumerableEx.Throw<int>(new InvalidOperationException()))
.Catch((InvalidOperationException _) => new[] { 3, 4 }.Concat(EnumerableEx.Throw<int>(new Exception())))
.Finally(() => Console.WriteLine("Finally outer"))
.OnErrorResumeNext(new[] { 5, 6 })
.Concat(EnumerableEx.Throw<int>(new ArgumentException()))
.Retry(2)
.Run(Console.WriteLine);

This should produce the same results with the built-in operators and with your implementation of those operators. More specifically, the result has to be:

1
2
Finally inner
3
4
Finally outer
5
6
1
2
Finally inner
3
4
Finally outer
5
6

with no exception leaking to the surface at the call site (the behavior of Retry after the retry count has been exceeded).

 

Next on More LINQ

Various combinators to combine or transform existing observable sources into others.


With the recent release of the Reactive Extensions for .NET (Rx) on DevLabs, you’ll hear quite a bit about reactive programming, based on the IObservable<T> and IObserver<T> interfaces. A great amount of resources is available on Channel 9. In this series, I’ll focus on the dual of the System.Reactive assembly, which is System.Interactive, providing a bunch of extensions to the LINQ Standard Query Operators for IEnumerable<T>. In today’s installment we’ll talk about constructor operators provided by EnumerableEx:

image

 

Constructing sequences

In order to perform operations over sequences using various combinators and operators, it’s obviously a prerequisite to have such sequences available. While collection types in the .NET Framework implement IEnumerable<T> (or the non-generic counterpart, bridgeable to LINQ using the Cast<T> Standard Query Operator), one often wants to construct sequences on the spot. Moreover, sequences often should have a lazy nature as their persistence in memory may be problematic or infeasible (infinite sequences). For all those reasons, constructor operators come in handy.

LINQ to Objects already has a constructor function called Enumerable.Range to produce a sequence of integral numbers starting from a certain value, returning the requested number of values lazily:

// Imperative
for (int i = 0; i < 10; i++)
{
    Console.WriteLine(i);
}

// LINQish
Enumerable.Range(start: 0, count: 10).Run
(
    Console.WriteLine
);

The lazy nature should not be underestimated, as one could create infinite sequences representing the potential to produce a certain (ordered) set of objects. When combined with restriction operators, it becomes possible to use composition to limit the produced results in a manner very close to the domain we're talking about. For example, the natural numbers are the integers greater than or equal to zero; the numbers starting from 5 can be obtained by means of a Skip operation or something similar; and taking a limited number of elements can be done using Take. Without deviating too much from today's blogging mission, here's what I'm alluding to:

static IEnumerable<int> Integer()
{
    for (int i = int.MinValue; i < int.MaxValue; i++)
        yield return i;

    yield return int.MaxValue;
}
var ints = Integer();
var nats = from i in ints where i >= 0 select i;
var some = nats.Skip(5).Take(5); // Good luck :-)
some.Run(Console.WriteLine);

I’ll leave it to the reader as a challenge to come up with ways to optimize this in a variety of ways whilst preserving the declarative nature on the use site (i.e. make the sarcastic “Good luck” go away).
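
As a small hint (one possible direction, certainly not the only answer), the generator itself could start where the natural-number filter would end up anyway, so the where clause filters nothing out and Skip(5).Take(5) returns promptly:

static IEnumerable<int> Natural()
{
    // Starts at zero, so "where i >= 0" is a no-op and the first
    // elements are available immediately.
    for (int i = 0; i < int.MaxValue; i++)
        yield return i;

    yield return int.MaxValue;
}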

Back to Rx: in today’s installment we’ll look at various constructor functions in EnumerableEx.

 

Return and the cruel return of the monad

The simplest constructor function is Return, simply yielding the single value specified on demand. It’s similar to a one-element array and that’s about it from a practical point of view:

public static IEnumerable<TSource> Return<TSource>(TSource value);

You should be able to guess the implementation of the operator for yourself. Use is straightforward as shown below:

EnumerableEx.Return(42).Run(Console.WriteLine);
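
If you did the guessing, chances are you ended up with a trivial iterator along these lines (a sketch; the shipped operator may differ cosmetically):

static IEnumerable<TSource> Return<TSource>(TSource value)
{
    // Yields exactly one value; laziness comes for free from the iterator.
    yield return value;
}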

One interesting thing about this constructor function is its signature, going from TSource to IEnumerable<TSource>. This is nothing but the return function (sometimes referred to as unit) used on a monad, with a more general signature of T -> M<T>; it's the little brother to the bind function, which has signature M<T> -> (T -> M<R>) -> M<R> and is known as SelectMany in LINQ. The triplet (known as a Kleisli triple) of the type constructor M (in LINQ the particular cases of IEnumerable<T> and IQueryable<T> are used, i.e. not a general type constructor), the unit function and the bind function forms a monad.

image

For a great overview of Language Integrated Monads, have a look at Wes Dyer’s The Marvels of Monads post. For a more foundational paper (with lots of applications though), have a look at Philip Wadler’s Monads for Functional Programming paper.

 

Throw me an exception please

Another singleton constructor is the Throw function that we’ve seen repeatedly in the previous post on exception handling over sequences. Its role is to provide an enumerable that will throw an exception upon the first MoveNext call during enumeration:

public static IEnumerable<TSource> Throw<TSource>(Exception exception);

In fact, this is a lazily thrown exception constructor. Use is simple again:

EnumerableEx.Throw<int>(new Exception()).Run();

Notice you have to specify the element type for the returned (never-yielding) sequence, as we're constructing an IEnumerable<T> and there's no information to infer T from. Obviously, the resulting sequence can be combined with other sequences of the same type in various places, e.g. using Concat. Below is a sample of how to use the Throw constructor with SelectMany to forcefully reject even numbers in a sequence (rather than filtering them out):

var src = Enumerable.Range(1, 10);//.Where(i => i % 2 != 0);
var res = src.SelectMany(i =>
    i % 2 == 0
    ? EnumerableEx.Throw<int>(new Exception("No evens please!"))
    : EnumerableEx.Return(i)
);
res.Run(Console.WriteLine);

Here we use the conditional operator to decide between an exception throwing sequence or a singleton element sequence (in this case, “Many” in “SelectMany” has “Single” semantics).
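
As an aside, the lazy nature of Throw falls out naturally when it's written as an iterator, since an iterator's body doesn't run until the first MoveNext call. A sketch (an assumption about shape, not the actual source):

static IEnumerable<TSource> Throw<TSource>(Exception exception)
{
    throw exception;

    // Unreachable, but its mere presence turns the method into an iterator,
    // deferring the throw above until the first MoveNext (expect a CS0162 warning).
    yield break;
}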

 

Empty completing the triad

Since the introduction of LINQ in .NET 3.5 (thanks to reader Keith for reminding me about my heritage), there’s been an Empty constructor as well, with the following signature and implementation:

public static IEnumerable<TSource> Empty<TSource>()
{
    yield break;
}

There seems to be little use for this, though I challenge the reader to use this one to build the Where operator using SelectMany (one possible answer is sketched right after the figure below). In fact, the reason I say it completes the triad is illustrated below:

image
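
As for the reader challenge, one possible answer (a sketch, and certainly not the only way to do it) combines SelectMany with Return and Empty: keep an element by mapping it onto a singleton sequence, drop it by mapping it onto the empty one:

static IEnumerable<TSource> Where<TSource>(IEnumerable<TSource> source, Func<TSource, bool> predicate)
{
    return source.SelectMany(x => predicate(x)
        ? EnumerableEx.Return(x)
        : Enumerable.Empty<TSource>());
}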

 

StartWith = Snoc (or Cons in disguise)

People familiar with LISP, ML, Scala, and many other functional languages will know the concept of cons by heart. Cons is nothing but the abbreviation of "construct", used to create a bigger list (in LISP lingo) out of an existing list and an element to be prepended:

(cons 1 (cons 2 nil))

The above creates a list with 1 as the head and (cons 2 nil) as the tail, which by itself expands into a cell containing 2 and a tail with the nil (null) value. The underlying pair of the head value and the "reference" to the tail list is known as a cons cell. Decomposition operators exist, known as car and cdr (from old IBM machine terminology where cons cells were realized in machine words consisting of so-called "address" and "decrement" registers, explaining the a and d in car and cdr; c and r stand for contents and register respectively):

(car (cons 1 2)) == 1
(cdr (cons 1 2)) == 2

The StartWith operator is none other than Cons in reverse (sometimes jokingly referred to as “Snoc” by functional programmers):

public static IEnumerable<TSource> StartWith<TSource>(this IEnumerable<TSource> source, params TSource[] first);
public static IEnumerable<TSource> StartWith<TSource>(this IEnumerable<TSource> source, TSource first);

Focus on the second one first. See how the "first" parameter is taken in as the second argument to StartWith. The reason is that it'd be very invasive to put the extension method's this parameter on the "first" parameter, as it would pollute all types in the framework with a "Cons" method:

public static IEnumerable<TSource> Cons<TSource>(this TSource head, IEnumerable<TSource> tail);

So, StartWith has to be read in reverse as illustrated below:

EnumerableEx.StartWith(
    EnumerableEx.StartWith(
        EnumerableEx.Return(3),
        2
    ),
    1
).Run(Console.WriteLine);

This prints 1, 2, 3 since 2 is put in front of 3 and 1 in front of that { 2, 3 } result. An overload exists to start a sequence with multiple elements in front of it:

EnumerableEx.StartWith(
    EnumerableEx.Return(3),
    1, 2
).Run(Console.WriteLine);
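
Conceptually, the single-element overload is just Cons with its arguments flipped; a sketch as a plain iterator (an assumption about shape) shows there's not much more to it:

static IEnumerable<TSource> StartWith<TSource>(this IEnumerable<TSource> source, TSource first)
{
    yield return first;      // the new head

    foreach (var item in source)
        yield return item;   // followed by the original sequence as the tail
}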

image

 

Generate is your new anamorphism

Generate is the most general constructor function for sequences you can imagine. It’s the dual of Aggregate in various ways. Where Aggregate folds a sequence into a single object by combining elements in the input sequence onto a final value in a step-by-step way, the Generate function unfolds a sequence out of a generator function also in a step-by-step way. To set the scene, let’s show the power of Aggregate by refreshing its signature and showing how to implement a bunch of other LINQ combinators in terms of it:

public static TResult Aggregate<TSource, TAccumulate, TResult>(this IEnumerable<TSource> source, TAccumulate seed, Func<TAccumulate, TSource, TAccumulate> func, Func<TAccumulate, TResult> resultSelector);

Given a seed value and a function to combine an element of the input sequence with the current accumulator value into a new accumulator value, the Aggregate function can produce a result that’s the result of (left-)folding all elements in the sequence one-by-one. For example, a sum is nothing but a left-fold thanks to left associativity of the numerical addition operation:

1 + 2 + 3 + 4 + 5 = ((((1 + 2) + 3) + 4) + 5)

The accumulated value is the running sum of everything to the left of the current element. Seeing the elements of a sequence being eaten one-by-one is quite a shocking catastrophic event for the sequence, hence the name catamorphism. Below are implementations of Sum, Product, Min, Max, FirstOrDefault, LastOrDefault, Any and All:

var src = Enumerable.Range(1, 10);

Console.WriteLine("Sum = " + src.Aggregate(0, (sum, i) => sum + i));
Console.WriteLine("Prd = " + src.Aggregate(1, (prd, i) => prd * i));
Console.WriteLine("Min = " + src.Aggregate(int.MaxValue, (min, i) => i < min ? i : min));
Console.WriteLine("Max = " + src.Aggregate(int.MinValue, (max, i) => i > max ? i : max));
Console.WriteLine("Fst = " + src.Aggregate((int?)null, (fst, i) => fst == null ? i : fst));
Console.WriteLine("Lst = " + src.Aggregate((int?)null, (lst, i) => i));
Console.WriteLine("AlE = " + src.Aggregate(true, (all, i) => all && i % 2 == 0));
Console.WriteLine("AnE = " + src.Aggregate(false, (any, i) => any || i % 2 == 0));

As the dual to catamorphisms we find anamorphisms, where one starts from an initial state and generates elements for the resulting sequence. I leave it to the reader to draw parallels with other words starting with ana- (from the Greek "up"). The most elaborate signature of Generate is shown below:

public static IEnumerable<TResult> Generate<TState, TResult>(TState initial, Func<TState, bool> condition, Func<TState, IEnumerable<TResult>> resultSelector, Func<TState, TState> iterate);

To see this is the dual to Aggregate, you have to use a bit of imagination, but the parallels are there. Where Aggregate takes in an IEnumerable<TSource> and produces a TResult, the Generate function produces an IEnumerable<TResult> from a given TState (and a bunch of other things). On both sides, there's room for an initial state and a way to make progress ("func" versus "iterate"), both staying in their respective domains for the accumulation type (TAccumulate and TState). To select the results (that will end up in the output sequence), the overload above allows multiple TResult values to be produced per TState. And finally, there's a stop condition, which is implicit in the case of a catamorphism since the "remaining tail of the sequence is empty" condition can be used for it (i.e. MoveNext returns false).

Another way to look at Generate is to draw the parallel with a for loop's three parts: initialization, termination condition and update. In fact, Generate is implemented as a couple of for loops. More signatures exist:

public static IEnumerable<TValue> Generate<TValue>(this Func<Notification<TValue>> function);
public static IEnumerable<TResult> Generate<TState, TResult>(TState initial, Func<TState, IEnumerable<TResult>> resultSelector, Func<TState, TState> iterate);
public static IEnumerable<TResult> Generate<TState, TResult>(TState initial, Func<TState, Notification<TResult>> resultSelector, Func<TState, TState> iterate);
public static IEnumerable<TResult> Generate<TState, TResult>(TState initial, Func<TState, bool> condition, Func<TState, IEnumerable<TResult>> resultSelector, Func<TState, TState> iterate);
public static IEnumerable<TResult> Generate<TState, TResult>(TState initial, Func<TState, bool> condition, Func<TState, TResult> resultSelector, Func<TState, TState> iterate);

We'll discuss the ones with Notification<T> types in the next episode titled "Code = Data", but the remaining three are all straightforward to understand. Some lack a terminating condition while others lack the ability to yield multiple results per intermediate state. Below is a sample of Generate producing the same results as Enumerable.Range:

Func<int, int, IEnumerable<int>> range = (start, count) => EnumerableEx.Generate(0, i => i < count, i => i + start, i => i + 1);
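
Asking this for three values starting at 5 should behave just like its Enumerable.Range counterpart, printing 5, 6 and 7:

range(5, 3).Run(Console.WriteLine);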

The other constructors we’ve seen can be written in terms of Generate as well:

Func<IEnumerable<int>> empty = () => EnumerableEx.Generate<object, int>(null, o => false, o => null, o => o);
Func<int, IEnumerable<int>> @return = i => EnumerableEx.Generate<int, int>(0, n => n < 1, o => new [] { i }, n => n + 1);
Func<Exception, IEnumerable<int>> @throw = ex => EnumerableEx.Generate<object, int>(null, o => true, o => { throw ex; return null; }, o => o);
Func<int, IEnumerable<int>, IEnumerable<int>> cons = (a, d) => EnumerableEx.Generate<int, int>(0, n => n < 2, o => o == 0 ? new [] { a } : d, n => n + 1);

@return(1).Run(Console.WriteLine);
@throw(new Exception()).Catch((Exception ex) => @return(22)).Run(Console.WriteLine);
cons(1, cons(2, cons(3, empty()))).Run(Console.WriteLine);

 

Defer what you can do now till later

The intrinsic lazy nature of sequences with regards to enumeration allows us to push more delayed effects into the sequence’s iteration code. In particular, the construction of a sequence can be hidden behind a sequence of the same type. Let’s show a signature to make this more clear:

public static IEnumerable<TSource> Defer<TSource>(Func<IEnumerable<TSource>> enumerableFactory);

In here, an IEnumerable<TSource> is created out of a factory function. What’s handed back from the call to Defer is a stub IEnumerable<TSource> that will only call its factory function (getting the real intended result sequence) upon a triggered enumeration. An example is shown below:

var xs = EnumerableEx.Defer(() =>
{
    Console.WriteLine("Factory!");
    return EnumerableEx.Return(1);
});

Console.ReadLine();

xs.Run(Console.WriteLine);
xs.Run(Console.WriteLine);

In here, the Factory message won’t be printed till something starts enumerating the xs sequence. Both calls to Run do so, meaning the factory will be called twice (and could in fact return a different sequence each time).
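
A sketch of Defer shows why that's the case: the factory call sits inside an iterator body, so it only runs once enumeration actually starts, and it runs again for every fresh enumeration (an assumption about shape, not the shipped code):

static IEnumerable<TSource> Defer<TSource>(Func<IEnumerable<TSource>> enumerableFactory)
{
    // Nothing happens until MoveNext is called for the first time;
    // each new enumeration invokes the factory again.
    foreach (var item in enumerableFactory())
        yield return item;
}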

image

 

Next on More LINQ

More duality, this time between “code and data” views on sequences, introducing Notification<T>.


With the recent release of the Reactive Extensions for .NET (Rx) on DevLabs, you’ll hear quite a bit about reactive programming, based on the IObservable<T> and IObserver<T> interfaces. A great amount of resources is available on Channel 9. In this series, I’ll focus on the dual of the System.Reactive assembly, which is System.Interactive, providing a bunch of extensions to the LINQ Standard Query Operators for IEnumerable<T>. In today’s installment we’ll talk about exception handling operators provided by EnumerableEx:

image

 

Iterating with and without exceptions

Under regular circumstances, one expects sequences to produce data in response to iteration. However, it's perfectly possible for an iterator (or any enumerable object) to throw an exception in response to a MoveNext call. For example:

Enumerable.Range(0, 10)
    .Reverse()
    .Select(x => 100 / x)
    .Run(Console.WriteLine);

This piece of code produces the following output:

11
12
14
16
20
25
33
50
100

Unhandled Exception: System.DivideByZeroException: Attempted to divide by zero.
   at Demo.Program.<Main>b__0(Int32 x) in Program.cs:line 15
   at System.Linq.Enumerable.<>c__DisplayClass12`3.<CombineSelectors>b__11(TSource x)
   at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()
   at System.Linq.EnumerableEx.Run[TSource](IEnumerable`1 source)
   at Demo.Program.Main(String[] args) in Program.cs:line 13

Only when the Select operator's iterator hits 0 for its input does its projection function throw a DivideByZeroException, causing the iterator to come to an abrupt stop as seen above. In the connected world, where iterators may reach out to external services that can signal error conditions, the ability to handle such sequences in a better, composable way becomes increasingly important.

In this post, we'll have a look at the exception handling primitives for enumerable sequences provided by Rx in System.Interactive.EnumerableEx. A related constructor operator, Throw, will be discussed later but is simple enough to reveal in this context because of its relevance:

var oops = EnumerableEx.Throw<int>(new Exception("Oops"));
oops.Run();

The Throw operator simply creates an iterator that throws the specified exception upon the first MoveNext call on its enumerator. It’s the counterpart to the Return operator creating a single-element iterator. Logically both correspond to the OnNext and OnError methods of IObserver<T> in the reactive world. In addition, we’ll see the relation between those operators and Notification<T> later on, when covering “Code = Data” discussing the Materialize and Dematerialize operators.

 

Catch it and move on

First up is the Catch operator, which is available with the following signatures:

public static IEnumerable<TSource> Catch<TSource>(IEnumerable<IEnumerable<TSource>> sources);
public static IEnumerable<TSource> Catch<TSource, TException>(this IEnumerable<TSource> source, Func<TException, IEnumerable<TSource>> handler) where TException : Exception;

The second overload is the one used directly for exception handling as you’re used to it in your favorite imperative language. While you normally associate a handler code block with a “protected code block”, here a handler consists of a function producing a sequence in response to an exceptional iteration over the corresponding “protected sequence”. A sample will make things clearer. Consider the following iterator:

static IEnumerable<int> CouldThrow()
{
    yield return 1;
    yield return 2;
    throw new InvalidOperationException("Oops!");
}

Assume you can’t handle the exceptional condition from the inside and you got the iterator from somewhere else, so the following is impossible to achieve:

static IEnumerable<int> CouldThrow()
{
    try
    {
        yield return 1;
        yield return 2;
        throw new InvalidOperationException("Oops!");
    }
    catch (InvalidOperationException)
    {
        yield return 3;
        yield return 4;
        yield return 5;
    }
}

In fact, the above is invalid C# since you can’t yield from a try-block that’s associated with a catch clause, and neither can you yield from a catch clause. Either way, this illustrates basically what we want to achieve from a conceptual point of view, but on the consuming side of the iterator. This is what Catch allows us to do, as follows:

CouldThrow()
    .Catch((InvalidOperationException ex) => new[] { 3, 4, 5 })
    .Run(Console.WriteLine);

This simply prints the numbers 1 through 5 on the screen, where the last three values originate from the exception handler. Obviously one could inspect the exception object in the handler. Just like with regular block-based exception handling constructs, one can have multiple “nested” catch clauses associated with the same source sequence. This is achieved by simply chaining Catch operator calls:

new [] {
    /* yield return */ 1,
    /* yield return */ 2 }.Concat(
    /* throw */        EnumerableEx.Throw<int>(new InvalidOperationException("Oops!")))
.Catch((InvalidOperationException ex) => new [] {
    /* yield return */ 3,
    /* yield return */ 4 }.Concat(
    /* throw */        EnumerableEx.Throw<int>(new FormatException("Aargh!"))))
.Catch((FormatException ex) => new [] {
    /* yield return */ 5 })
.Run(Console.WriteLine);

Here, the first catch clause throws an exception by itself, being caught by the next catch clause. This is completely similar to regular exception handling. In summary, the Catch operator allows iteration of a sequence to continue with another one when an exception occurs during the first’s iteration. The handler function provided to Catch isn’t evaluated till an exception occurs, so if the resulting sequence isn’t iterated far enough for an exception to be triggered, the handler obviously won’t execute.
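
Since you can't yield from inside a catch clause, you may wonder how such an operator can be written at all. A sketch along the following lines (an assumption, not the shipped implementation) keeps the try/catch around the MoveNext and Current calls, and does the yielding outside of it:

static IEnumerable<TSource> Catch<TSource, TException>(this IEnumerable<TSource> source, Func<TException, IEnumerable<TSource>> handler)
    where TException : Exception
{
    IEnumerable<TSource> fallback = null;

    using (var e = source.GetEnumerator())
    {
        while (true)
        {
            TSource current;
            try
            {
                if (!e.MoveNext())
                    break;              // regular completion; no handler involved
                current = e.Current;
            }
            catch (TException ex)
            {
                fallback = handler(ex); // the handler is only evaluated upon an exception
                break;
            }

            yield return current;
        }
    }

    if (fallback != null)
        foreach (var item in fallback)
            yield return item;
}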

The other overload of Catch, the first one listed above, takes a sequence of sequences (IEnumerable<IEnumerable<T>>), continuing a sequence that terminated with an exception with the sequence following it. For example:

var ex = EnumerableEx.Throw<int>(new Exception());
EnumerableEx.Catch(new[]
    {
        new [] { 1, 2 }.Concat(ex),
        new [] { 3, 4 }.Concat(ex),
        new [] { 5 },
        new [] { 6 },
    }).Run(Console.WriteLine);

This again will print the numbers 1 through 5, but not 6. Reason is that the first sequence blew up after yielding 1 and 2, causing the next sequence yielding 3 and 4 to be looped in, again causing an exception followed by a hand-over to the third sequence yielding 5. This third sequence finishes regularly (as opposed to exceptionally), so the story ends. I leave it to the reader to write down the corresponding block-structured nested try-catch statements this corresponds to from a conceptual angle.

Exercise: how would you implement a rethrow operation?

image

 

Finally, too

Now that we've seen the Catch operator, Finally should come as no surprise. From the signature alone, we can see what it does:

public static IEnumerable<TSource> Finally<TSource>(this IEnumerable<TSource> source, Action finallyAction);

No matter how enumeration over the source terminates, the finallyAction will be executed. Obviously this can be illustrated using two cases, one for the regular case and one for the exceptional case. For the latter, we use EnumerableEx.Throw again. First, the regular case:

/* try { */ new [] {
    /* yield return */ 1,
    /* yield return */ 2 }
.Finally(() =>
    Console.WriteLine("Finally"))
.Run(Console.WriteLine);

This will print 1 and 2, followed by the Finally message. In case of an exception, let’s show the similarity to the lexical nesting of exception handler blocks in C#:

/* try { */
    /* try { */ new[] {
        /* yield return */ 1,
        /* yield return */ 2 }.Concat(
        /* throw */        EnumerableEx.Throw<int>(new Exception()))
    .Finally(() =>
        Console.WriteLine("Finally"))
.Catch((Exception ex) => new[] {
    /* yield return */ 3,
    /* yield return */ 4,
    /* yield return */ 5 })
.Run(Console.WriteLine);

Here the innermost enumerable yields 1 and 2, followed by the throwing of an exception. The Finally operator ensures the printing action is executed no matter how this sequence terminates. In this case, the exception will be caught downstream by the Catch operator, so the end result on the screen will be 1, 2, Finally, 3, 4, 5. As a simple exercise, think about what the following code will and should print:

/* try { */
    /* try { */ new[] {
        /* yield return */ 1,
        /* yield return */ 2 }.Concat(
        /* throw */        EnumerableEx.Throw<int>(new Exception()))
    .Finally(() =>
        Console.WriteLine("Finally"))
.Catch((Exception ex) => new[] {
    /* yield return */ 3,
    /* yield return */ 4,
    /* yield return */ 5 })
.Take(2)
.Run(Console.WriteLine);

image 

(Note: break happens when a consumer stops iterating over the resulting sequence.)
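
For reference, Finally itself is easy to sketch, since C# does allow a yield return inside a try block as long as only a finally clause is attached to it (again, an assumption about shape rather than the actual source):

static IEnumerable<TSource> Finally<TSource>(this IEnumerable<TSource> source, Action finallyAction)
{
    try
    {
        foreach (var item in source)
            yield return item;
    }
    finally
    {
        // Runs on regular completion, on an exception, and when the consumer
        // stops early (disposing the enumerator triggers the finally block).
        finallyAction();
    }
}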

 

OnErrorResumeNext as in VB

Visual Basic fans will recognize this operator without doubt. Its operation is fairly straightforward: given a sequence of sequences, those are enumerated one by one, yielding their result to the caller. This is pretty much the same as the Concat operator we’ll see when talking about combinators, with the main difference being that an exceptional termination of any of the sequences does not bubble up. Instead, the OnErrorResumeNext operator simply moves on to the next sequence it can “yield foreach”. A sample will make this clear, but first the signatures:

public static IEnumerable<TSource> OnErrorResumeNext<TSource>(params IEnumerable<TSource>[] sources);
public static IEnumerable<TSource> OnErrorResumeNext<TSource>(this IEnumerable<IEnumerable<TSource>> sources);
public static IEnumerable<TSource> OnErrorResumeNext<TSource>(this IEnumerable<TSource> source, IEnumerable<TSource> next);

The following sample prints numbers 1 through 9, with no exception surfacing, even though the third sequence did terminate exceptionally. Replacing the OnErrorResumeNext call with the use of the Concat operator would surface that exception, terminating the resulting sequence after 1 through 7 have been yielded:

EnumerableEx.OnErrorResumeNext(
    new [] { 1, 2 },
    new [] { 3, 4, 5 },
    new [] { 6, 7 }.Concat(EnumerableEx.Throw<int>(new Exception())),
    new [] { 8, 9 }
).Run(Console.WriteLine);

This operator can be useful for batch processing of records where exceptional termination of an individual batch is tolerable.

image

 

Using resources

Just like C#'s and VB's using statements are related to exceptions due to their "finally"-like guarantees for cleanup, System.Interactive's Using operator is used for proper resource cleanup, this time in the face of delayed execution of a sequence. The signature for Using is as follows:

public static IEnumerable<TSource> Using<TSource, TResource>(Func<TResource> resourceSelector, Func<TResource, IEnumerable<TSource>> resourceUsage) where TResource : IDisposable;

The idea is to create a sequence that acquires a resource when its iteration is started (by running resourceSelector), which is subsequently used to provide a data sequence “using the resource” (obtained through resourceUsage). It’s only when the resulting sequence terminates (exceptionally or regularly) that the resource is disposed by calling its Dispose method. To illustrate this, let’s cook up our own Action-based disposable:

class ActionDisposable : IDisposable
{
    private Action _a;

    public ActionDisposable(Action a)
    {
        _a = a;
    }

    public void Dispose()
    {
        _a();
    }
}

Now we can write the following two samples:

EnumerableEx.Using<int, ActionDisposable>(() => new ActionDisposable(() => Console.WriteLine("Gone")), a =>
{
    // Now we could be using a to get data back...
    Console.WriteLine(a is ActionDisposable);
    // ... but let's just return some stock data.
    return new[] { 1, 2, 3 };
})
.Run(Console.WriteLine);

EnumerableEx.Using<int, ActionDisposable>(() => new ActionDisposable(() => Console.WriteLine("Gone")), a =>
{
    // Now we could be using a to get data back...
    Console.WriteLine(a is ActionDisposable);
    // ... which may result in an exception.
    return new[] { 1, 2 }.Concat(EnumerableEx.Throw<int>(new Exception()));
})
.Catch((Exception ex) => new [] { 4, 5, 6 })
.Run(Console.WriteLine);

The first one will nicely obtain the Gone-printing resource when enumeration is triggered by Run, returning values 1, 2 and 3, before Using calls dispose on the resource, causing it to print “Gone”. In the second example, the results produced under the acquired resource scope trigger an exception, so upon leaving Using the resource will be disposed again (printing “Gone”), putting us in the Catch operator’s body as we saw before. Now the output will be 1, 2, Gone, 4, 5, 6. Again, as an exercise, think about the following one (easy, just stressing the point…):

EnumerableEx.Using<int, ActionDisposable>(() => new ActionDisposable(() => Console.WriteLine("Gone")), a =>
{
    // Now we could be using a to get data back...
    Console.WriteLine(a is ActionDisposable);
    // ... but let's just return some stock data.
    return new[] { 1, 2, 3 };
})
.Take(2)
.Run(Console.WriteLine);

image

(Note: break is caused by the consumer’s termination of iteration over the resulting sequence.)
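
A sketch of Using makes the timing visible: acquisition sits inside an iterator body, so it's deferred until enumeration starts, and the using statement guarantees disposal however iteration ends (my assumption about shape, not the shipped code):

static IEnumerable<TSource> Using<TSource, TResource>(Func<TResource> resourceSelector, Func<TResource, IEnumerable<TSource>> resourceUsage)
    where TResource : IDisposable
{
    // The resource is acquired lazily, upon the first MoveNext call...
    using (var resource = resourceSelector())
    {
        foreach (var item in resourceUsage(resource))
            yield return item;
    }
    // ... and disposed when iteration completes, fails, or is abandoned.
}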

 

Retry till you succeed

A final operator in the exception handling category we're discussing in this post is Retry. The idea of Retry is to retry enumerating and yielding a sequence till it terminates successfully:

public static IEnumerable<TValue> Retry<TValue>(this IEnumerable<TValue> source);
public static IEnumerable<TValue> Retry<TValue>(this IEnumerable<TValue> source, int retryCount);

Obviously, Retry has no effect if the source sequence iterates without an exception being triggered:

// A no-op.
new [] { 1, 2, 3 }
.Retry()
.Run(Console.WriteLine);

On the other hand, if an exception occurs, a new enumerator over the source sequence is obtained (using GetEnumerator) and iteration is retried. If the exception condition is persistent, this may cause infinite retry:

// Will go forever...
new [] { 1, 2, 3 }.Concat(EnumerableEx.Throw<int>(new Exception()))
.Retry()
.Run(Console.WriteLine);

The overload taking a retryCount can be used to cap the number of retries. If the exception condition is dependent on dynamic factors (e.g. network connectivity to a stream of data), use of Retry will eventually make the iteration succeed:

static int s_count = 0;
static IEnumerable<int> MayGetNumbers()
{
    try
    {
        yield return 4;
        if (s_count == 0)
            throw new Exception();
        yield return 5;
        if (s_count == 1)
            throw new Exception();
        yield return 6;
    }
    finally
    {
        s_count++;
    }
}

The iterator above will make a bit more progress every time it’s called, the first time getting stuck after yielding 4, the second time after yielding 4 and 5, and finally succeed to yield 4, 5 and 6. Using Retry on this one will produce the following result:

// 4, (!), 4, 5, (!), 4, 5, 6
MayGetNumbers()
.Retry()
.Run(Console.WriteLine);

I’ll leave it as an exercise to the reader to come up with a diagram for this operator, introducing a distinction between IEnumerable and IEnumerator, the latter being potentially different for every time the GetEnumerator method is called. It’s because of the potential different enumeration results that Retry has a chance to be effective.

 

Next on More LINQ

Constructor operators, producing (sometimes trivial) sequences.


With the recent release of the Reactive Extensions for .NET (Rx) on DevLabs, you’ll hear quite a bit about reactive programming, based on the IObservable<T> and IObserver<T> interfaces. A great amount of resources is available on Channel 9. In this series, I’ll focus on the dual of the System.Reactive assembly, which is System.Interactive, providing a bunch of extensions to the LINQ Standard Query Operators for IEnumerable<T>. In today’s installment we’ll talk about the imperative style operators provided on EnumerableEx:

image

 

Laziness and side-effecting iterators

LINQ can be quite deceptive on a first encounter due to the lazy island it provides in otherwise eagerly evaluated languages like C# and Visual Basic. Simply writing down a query doesn't cause it to be executed, assuming no eager operators like ToArray, ToList or ToDictionary are used. In fact, the composition of sequences lies at the heart of this, since sequences evaluate lazily, on demand, as MoveNext is called on an enumerator. Iterators are a simple means to provide such a sequence, potentially capturing a sea of side-effects interleaved with the act of producing (or "yielding") values.

Let’s start with a quite subtle kind of side-effect, reading from a random number generator:

static Random s_random = new Random();

static IEnumerable<int> GetRandomNumbers(int maxValue)
{
    while (true)
    {
        yield return s_random.Next(maxValue);
    }
}

Every time you execute this, you'll get to see different numbers. What's more important in this context, though, is the fact that every yield return point in the code is a place where the iterator suspends till the next call to MoveNext occurs, causing it to run till the next yield return is encountered. In other words, the whole loop lies dormant till a consumer comes along. To visualize this a bit more, let's add some Console.WriteLine output calls as an additional side-effect:

static Random s_random = new Random();

static IEnumerable<int> GetRandomNumbers(int maxValue)
{
    while (true)
    {
        Console.WriteLine("Next");
        yield return s_random.Next(maxValue);
    }
}

The following code fragment illustrates the point in time where the sequence executes:

var res = GetRandomNumbers(100).Take(10);
Console.WriteLine("Before iteration");
foreach (var x in res)
    Console.WriteLine(x);

The result is the following:

Before iteration
Next
16
Next
56
Next
46
Next
58
Next
22
Next
91
Next
77
Next
20
Next
91
Next
92

 

Run, run, run

System.Interactive's Run operator in EnumerableEx allows execution of the sequence on the spot, in a fashion equivalent to having a foreach loop. Two overloads exist, one discarding the elements consumed from the sequence and another one feeding them into an Action<T>:

public static void Run<TSource>(this IEnumerable<TSource> source);
public static void Run<TSource>(this IEnumerable<TSource> source, Action<TSource> action);

Rewriting the code above using the second overload will produce similar results:

var res = GetRandomNumbers(100).Take(10);
Console.WriteLine("Before iteration");
res.Run(x => Console.WriteLine(x)); // equivalent to res.Run(Console.WriteLine);

Since Run returns void, it's only used for its side-effects, which can be useful from time to time. Previously, a similar effect could be achieved by calling ToArray or ToList, at the cost of burning memory for no good reason. In the above, that wouldn't even be a viable option in case you simply want to print random numbers ad infinitum, as an infinite sequence would cause the system to run out of memory in a ToArray or ToList context.
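
Conceptually, Run is little more than a foreach statement packaged as a method; a sketch of the two overloads (an assumption about shape) drives that home:

static void Run<TSource>(this IEnumerable<TSource> source)
{
    foreach (var item in source)
    {
        // Enumerate purely for the side-effects of iteration; the values are discarded.
    }
}

static void Run<TSource>(this IEnumerable<TSource> source, Action<TSource> action)
{
    foreach (var item in source)
        action(item);
}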

Let’s assume for the continuation of this post that GetRandomNumbers doesn’t exhibit a printing side-effect in and of itself:

static IEnumerable<int> GetRandomNumbers(int maxValue)
{
    while (true)
    {
        yield return s_random.Next(maxValue);
    }
}

In this setting, our Run call above effectively adds the side-effect of printing to the screen “from the outside”, at the (consuming) end of the “query”. Using the Do operator, one can inject a side-effect in a lazily evaluated sequence composed of different combinators.

image

 

Adding side-effects using Do

The Do method has the following signature:

public static IEnumerable<TSource> Do<TSource>(this IEnumerable<TSource> source, Action<TSource> action);

Taking in an IEnumerable<T> and producing one, it simply iterates over the source, executing the specified action before yielding the result to the consumer. Other than producing the side-effect during iteration, it doesn’t touch the sequence at all. You can write this operator in a straightforward manner yourself:

static IEnumerable<T> Do<T>(this IEnumerable<T> source, Action<T> action)
{
    foreach (var item in source)
    {
        action(item);
        yield return item;
    }
}

Or you could build it out of other combinator primitives, in particular Select:

static IEnumerable<T> Do<T>(this IEnumerable<T> source, Action<T> action)
{
    return source.Select(item =>
    {
        action(item);
        return item;
    });
}

This is useful primarily for debugging purposes, where you want to “probe” different points of execution in a query. For example, consider the following query expression:

var res = from x in GetRandomNumbers(100).Take(10)
          where x % 2 == 0
          orderby x
          select x + 1;
res.Run(x => Console.WriteLine(x));

Don’t know why it produces the results you’re seeing? Using Do, you can inject “checkpoints”. First, realize the above query desugars into:

var res = GetRandomNumbers(100).Take(10)
          .Where(x => x % 2 == 0)
          .OrderBy(x => x)
          .Select(x => x + 1);

Now we can put Do calls “on the dots” to see the values flowing through the pipeline during consumption of the query result.

var res = GetRandomNumbers(100).Take(10)
          .Do(x => Console.WriteLine("Source  -> {0}", x))
          .Where(x => x % 2 == 0)
          .Do(x => Console.WriteLine("Where   -> {0}", x))
          .OrderBy(x => x)
          .Do(x => Console.WriteLine("OrderBy -> {0}", x))
          .Select(x => x + 1)
          .Do(x => Console.WriteLine("Select  -> {0}", x));

The below shows what’s triggered by the call to Run:

Source  -> 96
Where   -> 96
Source  -> 25
Source  -> 8
Where   -> 8
Source  -> 79
Source  -> 25
Source  -> 3
Source  -> 36
Where   -> 36
Source  -> 51
Source  -> 53
Source  -> 81
OrderBy -> 8
Select  -> 9
9
OrderBy -> 36
Select  -> 37
37
OrderBy -> 96
Select  -> 97
97

For example, 25 produced by the source didn’t survive the Where operator filtering. From the output one can also see that all Where and Source consumption calls precede any OrderBy calls, since the ordering operator eagerly drains its source before carrying out the ordering and passing the results to its consumer.

Looking at the output before the first result, 9, is printed, you can observe the effect of the first MoveNext call on the resulting sequence: the whole source is consulted and fed through the Where operator in order for OrderBy to produce the first (smallest) result. A conceptual diagram illustrating the interception of sequences using Do is shown below:

image

In fact, one can make Do surface through query syntax as well, by providing an extension method overload for e.g. Where (note: this is purely for illustration purposes, and admittedly over-overloading and misusing existing operators :-)):

public static class DoEnumerable
{
    public static IEnumerable<T> Where<T>(this IEnumerable<T> source, Action<T> action)
    {
        return source.Do(action);
    }
}

The resulting usage pattern is the following:

var res = from x in GetRandomNumbers(100).Take(10)
          /*do*/ where Console.WriteLine("Source  -> {0}", x)
          where x % 2 == 0
          /*do*/ where Console.WriteLine("Where   -> {0}", x)
          orderby x
          /*do*/ where Console.WriteLine("OrderBy -> {0}", x)
          select x + 1 into x
          /*do*/ where Console.WriteLine("Select  -> {0}", x)
          select x;

image

 

A lame semi-cooperative scheduler

Let's first say there's no good justification (this is the lame part) for doing this sample other than educational purposes: it shows the use of a sequence purely for its side-effects. The idea of the code below is to declare a worker thread with varying priorities for portions of its code. Sure, we could have set thread priorities directly in the code, but the special part is feeding the desired priorities back to the driver loop ("Start") of the scheduler, which can decide how to implement this prioritization scheme. The cooperative nature lies in the fact that the workers yield their run by signaling a new priority, effectively handing control over to the driver loop. I'm calling it semi-cooperative just because the following sample implementation relies on preemptive scheduling as provided by the Thread class, though the reader challenge will be to shake off that part.

First of all, work is declared by an iterator that yields priorities followed by the work that will run under that priority. The driver can decide whether or not to call MoveNext, effectively causing the iterator to proceed till the next yield return statement. For example:

static IEnumerable<ThreadPriority> Work1()
{
    int i = 0;
    Action print = () =>
    {
        Console.WriteLine("{0} @ {1} -> {2}", Thread.CurrentThread.ManagedThreadId, Thread.CurrentThread.Priority, i++);
        for (int j = 0; j < 10000000; j++)
            ;
    };
    yield return ThreadPriority.Normal;
    {
        print();
    }
    yield return ThreadPriority.Lowest;
    {
        print();
    }
    yield return ThreadPriority.Normal;
    {
        print();
    }
    yield return ThreadPriority.Highest;
    {
        print();
    }
    yield return ThreadPriority.Highest;
    {
        print();
    }
}

The block-based work item declaration after a yield syntactically groups work items and their priorities. Obviously we fake the work to illustrate the point. A driver loop, called Start, can be implemented as lamely as by relying on the managed Thread type:

static void Start(IEnumerable<ThreadPriority> work)
{
    new Thread(() =>
    {
        work.Do(p => Thread.CurrentThread.Priority = p).Run();
    }).Start();
}

In here, we’re using both Run and Do to respectively run the work and cause the side-effect of adjusting the priority of the thread hosting the work. The reader is invited to cook their own dispatcher with the following signature:

static void Start(params IEnumerable<ThreadPriority>[] workers);

The idea of this one will be to implement a prioritization scheme – just for fun and definitely no profit other than intellectual stimulus – by hand: run all the work on the same thread, with MoveNext calls standing for an uninterruptible quantum. During a MoveNext call, the worker will proceed till the next yield return is encountered, so you may cause an unfair worker to run away and do work forever. This pinpoints the very nature of cooperative scheduling: you need trust in the individual workers. But when you regain control, retrieving the priority for the next work item the worker plans to do, you can make a decision whether you let it go for another quantum (by calling MoveNext) or let another worker from the worker list take a turn (tip: use an ordering operator to select the next worker to get a chance to run). This process continues till all workers have no more work items left, indicated by MoveNext returning false (tip: keep a list of “schedulable” items).

In the scope of this post, the sole reason I showed this sample is because of the use of Do and Run to drive home the point of those operators. Sure, you can achieve the same result (if desired at all) by tweaking the managed thread priority directly in each worker.

 

Next on More LINQ

Dealing with exceptions caused by sequence iteration.


With the recent release of the Reactive Extensions for .NET (Rx) on DevLabs, you’ll hear quite a bit about reactive programming, based on the IObservable<T> and IObserver<T> interfaces. A great amount of resources is available on Channel 9. In this series, I’ll focus on the dual of the System.Reactive assembly, which is System.Interactive, providing a bunch of extensions to the LINQ Standard Query Operators for IEnumerable<T>. In today’s installment we’ll talk about getting started with System.Interactive, also touching briefly on the deep duality.

 

Where to get it?

To get the Reactive Extensions, which include System.Interactive, visit the landing page on DevLabs over here. Downloads are available for .NET Framework 3.5 SP1, .NET Framework 4.0 Beta 2 and Silverlight 3. In this series, I’ll be using the “desktop CLR” distributions from Visual Studio 2008 and Visual Studio 2010.

The differences between the various distributions are of a technical nature and have to do with backporting certain essentials Rx relies on, to the .NET Framework 3.5 SP1 stack. For instance, the IObservable<T> and IObserver<T> interfaces exist in .NET 4.0 but don’t in .NET 3.5. Similarly, the Task Parallel Library (TPL) is available in .NET 4.0’s System.Threading namespace, while Rx redistributes it to run on .NET 3.5 SP1.

 

What’s in it?

Once you’ve installed, have a look at your Program Files (x86) folder, under Microsoft Reactive Extensions. I’m using the “DesktopV2” version here, which refers to CLR 2.0 and the .NET Framework 3.5 SP1 package. The main difference with the “DesktopV4” version is the presence of System.Threading, which contains the Parallel Extensions that ship in .NET 4.0:

image

A brief introduction to the remaining assemblies:

  • System.CoreEx.dll contains some commonly used types like Action and Func delegates with bigger arities (up to 16 parameters), new Property<T> primitives, a Unit type, an Event type wrapping “object sender, EventArgs e” pairs, a Notification<T> (which will be discussed extensively) and some notions of time in the form of TimeInterval<T> and Timestamped<T>.
  • System.Interactive.dll, the subject of this new series, contains extension methods for IEnumerable<T> and additional LINQ to Objects operators, provided in a type called EnumerableEx.
  • System.Reactive.dll, which is where Rx gets its name from and which will be discussed in future series, is the home for reactive programming tools. It contains IObservable<T> and IObserver<T>, as well as various combinators over them (sometimes referred to as "LINQ to Events"). In addition, it provides primitives like subjects and contains a join library (more about this in a separate installment).

 

Duality? Help!

As we like to use expensive words like "mathematical dual", it makes sense to provide an easy-to-grasp introduction to the subject. The first thing to look at is the distinction between interactive and reactive programming, as illustrated in the diagram below:

image

In the world of interactive programming, the application asks for more information. It pulls data out of a sequence that represents some data source, in particular by calling MoveNext on an enumerator object. The application is quite active in the data retrieval process: besides getting an enumerator (by calling GetEnumerator on an enumerable), it also decides about the pace of the retrieval by calling MoveNext at its own convenience.

In the world of reactive programming, the application is told about more information. Data is pushed to it from a data source by getting called on the OnNext method of an observer object. The application is quite passive in the data retrieval process: apart from subscribing to an observable source, it can’t do anything but reacting to the data pushed to it by means of OnNext calls.

The nice thing about those two worlds is that they’re dual. The highlighted words in the paragraphs above have dual meanings. Because of this observation, it’s desirable to search for dualities on a more formal and technical level as well. In particular, the interfaces being used here are the exact duals of one another: IEnumerable<T> is to IObservable<T> as IEnumerator<T> is to IObserver<T>. Dualization can be achieved by turning inputs (e.g. method parameters) into output (e.g. return values):

image

Lots of dualities exist in various disciplines, providing for great knowledge transfers between different domains. For example, in formal logic, De Morgan's law allows converting expressions built from conjunctions into ones built from disjunctions, and vice versa. In electronics, similarities exist between the behavior of capacitors and inductors: know one and how to go back and forth between domains, and you know the other. Fourier calculus provides duals between the time and frequency domains.

One thing all those have in common is a way to go back and forth between domains. Such a mechanism exists in the world of System.Reactive and System.Interactive as well. Every observable collection can be turned into an enumerable one and vice versa, using operators called ToEnumerable and ToObservable. To get a feel about how those work, imagine an enumerable collection first. The only thing one can do to retrieve its data is enumerate over it. For all the values received, signal them on the resulting observable’s observer. In the opposite direction, you subscribe on an observable collection to receive the values thrown at you and keep them so that the resulting enumerable can fetch them.
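
To make the first direction concrete, here's a deliberately naive sketch of what ToObservable could do, pushing everything synchronously during Subscribe. The real operator involves schedulers, error propagation and proper unsubscription, so this is only meant to illustrate the idea described above:

class EnumerableObservable<T> : IObservable<T>
{
    private readonly IEnumerable<T> _source;

    public EnumerableObservable(IEnumerable<T> source)
    {
        _source = source;
    }

    public IDisposable Subscribe(IObserver<T> observer)
    {
        // Naive: enumerate the source and signal every value on the observer.
        foreach (var item in _source)
            observer.OnNext(item);
        observer.OnCompleted();

        return new Nothing();
    }

    private class Nothing : IDisposable
    {
        public void Dispose() { }
    }
}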

In this series, we’ll not look over the garden wall to the reactive world just yet. Instead, we’ll get our hands dirty in the world of System.Interactive, a logical extension to .NET 3.5’s IEnumerable<T> extension methods, known as the Standard Query Operators.

 

Operators overview

The System.Linq.EnumerableEx static class in System.Interactive contains various (extension) methods that operate on IEnumerable<T> collections. It should be seen as a logical extension to the System.Linq.Enumerable class in System.Core. In the illustration below I've summarized the various categories those new operators fall into. Some could be considered to fall into multiple categories, so take this with a grain of salt. Nevertheless, we'll look at those big buckets in subsequent posts in this series:

  • Imperative use – provides operators that execute a sequence (Run) and inject side-effecting Actions in a chain of query operator calls (Do), which is handy for debugging.
  • Exceptions – enumeration of sequences can cause exceptions (e.g. if you write an iterator, but also by other means – see later), which may need to be handled somehow. Methods like Catch, Finally, Using, OnErrorResumeNext and Retry provide means to make a sequence resilient in face of exceptions.
  • Constructors – instead of creating an iterator yourself, it’s possible to let the system create a sequence on your behalf, e.g. by providing it a generator function (Generate), by composing sequences and elements (Return, StartWith, Throw), or triggering the call of a deferred constructor function when a client start enumerating (Defer).
  • Code = Data – the triplet of OnNext, OnError and OnCompleted seen on IObserver<T> is a very code-centric way of signaling various outcomes of data consumption. An alternative view is to treat those outcomes as pieces of data, called notifications (Notification<T>). Using Materialize and Dematerialize, one can transfer back and forth between those two domains.
  • Combinators – producing sequences out of one or more existing sequences is what combinators generally do. One can repeat a sequence a number of times (Repeat), zip two sequences together (Zip), let two sequences battle to provide a result the fastest (Amb), and more. Those operators are most “in line” with what you already know from System.Linq today.
  • Functional – while the imperative and exception categories acknowledge the possibility for a sequence to exhibit side-effects, the functional category is meant to tame those side-effects, typically in one-producer-many-consumer scenarios. When a sequence may produce side-effects during iteration, it may be desirable to avoid duplicating those when multiple consumers iterate.
  • Miscellaneous – just that, miscellaneous.
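
To give a quick taste of the first bucket (an example of my own; it assumes a reference to System.Interactive and a using directive for System.Linq), Do injects a side-effecting action per element while Run simply drains the sequence:

Enumerable.Range(1, 3)
          .Do(x => Console.WriteLine("Saw " + x))
          .Run();

This prints "Saw 1" through "Saw 3" and nothing else, since Run discards the elements after forcing the enumeration.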

image

Next time, we’ll start by looking at the “Imperative use” category. Download the libraries today and start exploring!


The CLR’s exception handling facilities provide for protected blocks (“try”) one can associate a handler with. There are four kinds of handlers, and exactly one can be associated with a protected block (but nesting can be used to associate multiple handlers with a block of code):

  • A finally handler is executed whenever the block is exited, regardless of whether this happened by normal control flow or an unhandled exception. C# exposes this using the finally keyword.
  • A type-filtered handler handles an exception of a specified class or any of its subclasses. Better known as a “catch block”, C# provides this through its catch keyword.
  • A user-filtered handler runs user-specified code to determine whether the exception should be ignored, handled by the associated handler, or passed on to the next protected block. C# doesn’t expose this, but Visual Basic does by means of its When keyword.
  • A fault handler is executed if an exception occurs, but not on completion of normal control flow. Neither C# nor Visual Basic provide a fault handler language feature.
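
As a quick illustration of the nesting remark above (my own example, not part of the challenge), the familiar C# try-catch-finally statement is really two protected blocks, each with exactly one handler: the compiler wraps an inner try with a type-filtered handler inside an outer try with a finally handler.

try
{
    try
    {
        DoStuff(); // hypothetical method that may throw
    }
    catch (InvalidOperationException ex) // type-filtered handler on the inner block
    {
        Console.WriteLine(ex);
    }
}
finally // finally handler on the outer block
{
    Console.WriteLine("Done");
}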

In this reader challenge, we’re going to focus on fault handlers. Due to their lack of language surface, their effect is often mimicked by using some local state to determine whether the protected block exited gracefully or not:

bool success = false;
try
{
    // Do stuff
    success = true;
}
finally
{
   if (!success)
   {
       // There was a fault. Do something special.
   }
   // Fault or not; this is what finally does.
}

If an exception happens during “Do stuff”, we end up in the finally block and come to conclude success was never set to true. This indicates an error happened, and we should handle the fault case. However, this technique can get a bit tricky when there are different paths exiting the try block: one could return from the enclosing method in various places, requiring the “success = true” code to be sprinkled around. This is exactly what exception handling was designed for: reducing clutter in your code that has to do with error condition/code tracking. So, we’re defeating that purpose.
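
To make that sprinkling concrete, here’s a hypothetical method of my own with two exit paths out of the try block; each one needs its own success assignment, or the finally block would falsely report a fault:

static int GetOrAdd(Dictionary<string, int> cache, string key)
{
    bool success = false;
    try
    {
        int value;
        if (cache.TryGetValue(key, out value))
        {
            success = true; // exit path 1
            return value;
        }

        value = key.Length; // pretend this is an expensive computation
        cache[key] = value;

        success = true; // exit path 2
        return value;
    }
    finally
    {
        if (!success)
        {
            // There was a fault. Do something special.
        }
    }
}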

Today’s challenge is to create a true fault handler in C#, just for the sake of it. This is merely a brain teaser, encouraging readers to find out what happens behind the scenes of compiled C# code. We won’t be addressing certain concerns like non-local return (the case I mentioned above) but will be hunting for the true “fault” handler treasure hidden deeply in the C# compiler’s IL code emitter. The operational specification is the following:

var f = Fault(() => Console.WriteLine("Okay"),
              () => Console.WriteLine("Fault"));
f();
Console.WriteLine();

var g = Fault(() => { throw new Exception("Oops"); },
              () => Console.WriteLine("Fault"));
try
{
    g();
}
catch (Exception ex)
{
    Console.WriteLine(ex);
}

The above should produce the following output:

Okay

Fault
System.Exception: Oops
   at Program.<Main>b__2()
   (I won’t reveal the secrets here yet…)
   at Program.Main()

Action f illustrates the non-exceptional case where the fault handler is not invoked (a finally handler would get invoked). Action g illustrates the exceptional case where the fault handler gets invoked and the exception bubbles up to the catch-block surrounding its invocation.

It’s strictly forbidden to use local state in Fault (or a method it calls) to track the successful execution of the protected block. Therefore, the below is an invalid solution:

static Action Fault(Action protectedBlock, Action faultHandler)
{
    return () =>
    {
        bool success = false;
        try
        {
            protectedBlock();
            success = true;
        }
        finally
        {
            if (!success)
                faultHandler();
        }
    };
}

Moreover, execution of your Fault method should really use a fault handler as encountered in IL code. It should be a fault handler, not mimic one. In addition, you should not go for a solution where you write a Fault method in ILASM by hand and link it as a netmodule in a C# project, using al.exe:

.class private FaultClosure
{
  .field class [System.Core]System.Action protectedBlock
  .field class [System.Core]System.Action faultHandler

  .method void .ctor()
  {
    ldarg.0
    call instance void [mscorlib]System.Object::.ctor()
    ret
  }

  .method void Do()
  {
    .try
    {
      ldarg.0
      ldfld class [System.Core]System.Action Program/FaultClosure::protectedBlock
      callvirt instance void [System.Core]System.Action::Invoke()
      leave.s END
    }
    fault
    {
      ldarg.0
      ldfld class [System.Core]System.Action Program/FaultClosure::faultHandler
      callvirt instance void [System.Core]System.Action::Invoke()
      endfault
    }
    END: ret
  }
}

.method static class [System.Core]System.Action Fault(class [System.Core]System.Action protectedBlock, class [System.Core]System.Action faultHandler)
{
  .locals init (class Program/FaultClosure V_0)
  newobj void Program/FaultClosure::.ctor()
  stloc.0
  ldloc.0
  ldarg.0
  stfld class [System.Core]System.Action Program/FaultClosure::protectedBlock
  ldloc.0
  ldarg.1
  stfld class [System.Core]System.Action Program/FaultClosure::faultHandler
  ldloc.0
  ldftn instance void Program/FaultClosure::Do()
  newobj void [System.Core]System.Action::.ctor(object, native int)
  ret
}

Again, this exercise is just for fun with no profit other than brain stimulation. Hint: what C# 2.0 or later feature may cause a “fault” block to be emitted (i.e. if you ildasm a compiled valid C# application, you can find a “fault” keyword)?

Happy holidays!


Introduction

Recursion is a widely known technique to decompose a problem in smaller “instances” of the same problem. For example, performing tree operations (e.g. in the context of data structures, user interfaces, hierarchical stores, XML, etc) can be expressed in terms of a navigation strategy over the tree where one performs the same operation to subtrees. A base case takes care of the algorithm’s “bounding”; in case of tree operations that role is played by the leaf nodes of the tree.

Looking at mathematical definitions, one often finds recursive definitions, as well as more imperative style operations:

Imperative

n! = 1 × 2 × … × n

Recursive

n! = 1 if n = 0; n! = n × (n − 1)! otherwise

In here, the first definition lends itself nicely to implementation in an imperative language like C#, e.g. using a foreach-loop. Or, in a more declarative and functionally inspired style, one could write this one using LINQ’s Aggregate operator (which really is a catamorphism):

Func<int, int> fac = n => Enumerable.Range(1, n).Aggregate(1, (p, i) => p * i);

It’s left as an exercise to the reader to define all other catamorphic operators in LINQ in terms of the Aggregate operator.
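
To get you started on that exercise, here are a couple of such folds written against Aggregate (quick examples of my own, by no means an exhaustive list):

Func<IEnumerable<int>, int> sum = xs => xs.Aggregate(0, (a, x) => a + x);
Func<IEnumerable<int>, int> count = xs => xs.Aggregate(0, (a, _) => a + 1);
Func<IEnumerable<bool>, bool> any = xs => xs.Aggregate(false, (a, b) => a || b);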

But this is not what we’re here for today. Instead, we’re going to focus on the recursive definition. We all know how to write this down in C#, as follows:

int fac(int n)
{
    return n == 0 ? 1 : n * fac(n - 1);
}

Or, we could go for lambda expressions, requiring a little additional trick to make this work:

Func<int, int> fac = null;
fac = n => n == 0 ? 1 : n * fac(n - 1);

The intermediate assignment with the null literal is required to satisfy the definite assignment rules at the point fac is used in the body of the lambda expression (the recursive fac(n - 1) call). What goes on here is quite interesting. When the compiler sees that the fac local variable is used inside a lambda’s body, it’s hoisted into a closure object. In other words, the local variable is not that local (shown below with an anonymous type purely for illustration; the real closure is a compiler-generated class with a writable field):

var closure = new { fac = (Func<int, int>)null };
closure.fac = n => n == 0 ? 1 : n * closure.fac(n - 1);

Because of the heap-allocated nature of a closure, we can pass it around – including all of its “context” – to other locations in the code, potentially lower on the call stack. Let’s not go there, but focus on the little null-assignment trick we had to play here. Turns out we can eliminate this.

 

Please tell me … Y

Our two-step recursive definition of a lambda expression isn’t too bad, but it should stimulate the reader’s curiosity: can’t we do a one-liner recursive definition instead? The following doesn’t work for reasons alluded to above (try it yourself in your C# compiler):

Func<int, int> fac = n => n == 0 ? 1 : n * fac(n - 1);

In languages like F#, a separate recursive definition variant of “let” exists:

let rec fac n = if n = 0 then 1 else n * fac (n - 1)

An interesting (well, in my opinion at least) question is whether we can do something similar in C#, realizing “anonymous recursion”. What’s anonymous about it? Well, just having a single expression, without any variable assignments, that captures the intent of the recursive function. In other words, I’d like to be able to:

ThisMethodExpectsUnaryFunctionIntToInt(/* I want to pass the factorial function here, defining it inline */)

To do this, in the factorial-function-defining expression we can’t refer to a local variable, as we did in the C# (and the F#) fragment. Yet, we need to be able to refer to the function being defined to realize the recursion. If it ain’t a local variable, and we need to refer to it, it ought to be a parameter to the lambda expression:

fac => n => n == 0 ? 1 : n * fac(n - 1)

Now we can start to think about types here. On the outer level, we have a function that takes in a “fac” and produces a function “n => …”. The latter function, at the inner level, is a function that takes in “n” and produces “n == 0 ? …”. That last part is simple to type: Func<int, int>. Back to the outer layer of the lambda onion, what has to be the type of fac? Well, we’re using fac inside the lambda expression, giving it an int and expecting an int back (see “fac(n – 1)”), hence it needs to be a Func<int, int> as well. Pasting all pieces together, the type of the thing above is:

Func<Func<int, int>, Func<int, int>>

Or, in a full-typed form, the expression looks as follows:

Func<Func<int, int>, Func<int, int>> f = (Func<int, int> fac) => ((int n) => n == 0 ? 1 : n * fac(n - 1));

But the “ThisMethodExpectsUnaryFunctionIntToInt” method expects, well, a Func<int, int>. We somehow need to shake off one of the seemingly redundant Func<int, int> parts of the resulting lambda expression. In fact, we need to fix the lambda expression by eliminating the fac parameter, substituting the recursive function itself for it. For now, all we can do is misuse the function above:

f(n => n * 2)(5) -> 40

The n => n * 2 part somehow needs to be the factorial function itself. This can be realized by means of a fixpoint combinator. From a typing perspective, it has the following meaning:

Func<int, int> Fix(Func<Func<int, int>, Func<int, int>> f)

In other words, we should be able to:

ThisMethodExpectsUnaryFunctionIntToInt(Fix(fac => n => n == 0 ? 1 : n * fac(n - 1)))

and leave the magic of realizing the recursion to Fix. This method can be defined as follows (warning: danger for brain explosion):

Func<T, R> Fix<T, R>(Func<Func<T, R>, Func<T, R>> f)
{
    FuncRec<T, R> fRec = r => t => f(r(r))(t);
    return fRec(fRec);
}

delegate Func<T, R> FuncRec<T, R>(FuncRec<T, R> f);

To see how the Fix method works, step through it, feeding it our factorial definition. The mechanics of it are less interesting in the context of this blog post, suffice to say it can be done. By the way, this Fix method is inspired by the Y combinator, a fixpoint combinator created by lambda calculus high priests.
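
For the brave, here’s a short unfolding of a small case, written as comments. It shows both why the recursion terminates and why the eta-expanded “t => f(r(r))(t)” matters: evaluating r(r) eagerly, without hiding it behind a lambda, would loop forever.

// Fix(f)     = fRec(fRec)             with fRec = r => t => f(r(r))(t)
// fRec(fRec) = t => f(fRec(fRec))(t)  -- the self-application is delayed behind "t =>"
//
// Fix(f)(2)  = f(fRec(fRec))(2)
//            = 2 * fRec(fRec)(1)      -- body of the factorial lambda
//            = 2 * f(fRec(fRec))(1)
//            = 2 * (1 * fRec(fRec)(0))
//            = 2 * (1 * f(fRec(fRec))(0))
//            = 2 * (1 * 1)
//            = 2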

 

Oops … there goes my stack :-(

So far, so good. We have a generic fixpoint method called “Fix”, allowing us to define the factorial function (amongst others of course) as follows:

Fix(fac => n => n == 0 ? 1 : n * fac(n - 1))

Since factorial grows incredibly fast, our Int32-based calculation will overflow in no time, so feel free to use .NET 4’s new BigInteger instead:

var factorial = Fix<BigInteger, BigInteger>(fac => n => n == 0 ? 1 : n * fac(n - 1));
factorial(10000);

Either way, let’s see what happens under the debugger:

image

That doesn’t look too good, does it? All the magic that Fix did was to realize the recursion, but we’re still using recursive calls to compute the value. After some 5000 recursive calls, the call stack blew up. Clearly we need to do something if we are to avoid this, whilst staying in the comfort zone of recursive algorithm implementations. One such technique is a trampoline. But before we go there, it’s worthwhile to talk about tail calls.

 

Don’t stand on my tail!

One of the inherent problems with this kind of recursion is that we still need the result of a recursive call after it returns, in order to finish our own computation. That seems obvious, but think about it for a while. When we’re computing the factorial of 5, we really are doing this:

fac 5 =
          fac 4 =
                   fac 3 =
                           fac 2 =
                                   fac 1 =
                                   1
                           2 = 2 * 1
                   6 = 3 * 2
          24 = 4 * 6
120 = 5 * 24

What happens here is that after we return from the recursive call, we still have to carry out a multiplication. From this observation it follows that we need a call stack frame to keep track of the computation going on. One way to improve on the situation is by avoiding the need to do computation after a recursive call returns. This can be achieved by accumulating the result of recursive calls, effectively carrying the result “forward” till the point we hit the base case. In essence, we’re dragging along the partially computed result on every recursive call. In the case of factorial this accumulator will contain the partial multiplication, starting with a value of 1:

fac 5 1 =
          fac 4 (5 * 1) =
                           fac 3 (4 * 5) = 
                                           fac 2 (3 * 20) =
                                                            fac 1 (2 * 60) =
                                                                             120

In here, the second parameter is the accumulated product so far. In the base case, we simply return the accumulated value. Now we don’t need to do any more work after the recursive call returns. In other words, we’ve eliminated a “tail” of computation after a recursive call. Compilers can come to this insight and eliminate the recursive call. Below is a sample of an accumulating factorial definition in F#:

image

If we compile this code in the F# compiler (instead of just staring at F# interactive) and disassemble it, we get to see exactly this optimization carried out by the compiler:

image

In fact, this code is equivalent to the following piece of C#:

int Fac(int n, int a)
{
    while (n > 1)
    {
        a *= n;
        n--;
    }
    return a;
}

Wonderful, isn’t it? While we preserved a recursive definition, we really got the performance of an imperative loop-construct and are not exhausting the call stack in any way. The C# compiler on the other hand wouldn’t figure this out. In what follows, we will be using this definition of factorial in combination with a trampoline to realize the same kind of stack-friendly recursion in C#.
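
For reference, here’s the same accumulating shape written as plain recursive C# (a sketch of mine; as just noted, the C# compiler happily keeps the recursive call and the stack frames that come with it):

int Fac(int n, int a)
{
    return n <= 1 ? a : Fac(n - 1, a * n);
}

// Fac(5, 1) == 120, but every call still burns a stack frame in C#.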

 

The art of jumping the trampoline

One of the main characteristics of trampolines is that they bounce back. Jump on one and you’ll be catapulted into the air because you’re given a kinetic energy boost. While in the air you can make funny transformations (corresponding to the body of the recursive function, as we shall see), but in the end you’ll end up on the trampoline again. The whole cycle repeats till you run out of energy and just stay at rest on the trampoline. That state will correspond to the end of the recursion.

image

This all may sound very vague but things will become clear in a moment. The core idea of a trampoline is to throw a (recursive) function call on a trampoline, let it compute and have it land on the trampoline with a new function. It’s important to see that both the function and its arguments are jumping on there. Compare it to an acrobat that jumps on the trampoline and counts down every time he bounces. The function is the acrobat, the argument is the counter he maintains. When that counter reaches the base case, he breaks from the bouncing by carefully landing next to the trampoline.

How can we realize such a thing in C# for functions of various arities? To grasp the concept, it helps to start from the simplest case, i.e. an Action with no arguments and – obviously, as we’re talking about actions – no return value. We want to be able to write something like this, but without exhausting the call stack:

void Motivate()
{
    Console.WriteLine("Go!");
    Motivate();
}

It goes without saying this can be achieved using a simple loop construct, but it’s no surprise the base case of our investigation is trivial. Keep in mind most of my blog posts are about esoteric programming, so don’t ask “Why?” just yet. To realize this recursion, we should start from the signature of a recursive action delegate. To get the trampoline behavior, a recursive action should not just return “void” but return another instance of itself to signal to the trampoline what to call next. Compare it with the acrobat again: his capability (a “function” that can be called by the ring master initially: “start jumping!”) to jump up returns the capability (a function again, to be called by the trampoline upon landing) to jump another time. This leads to the following (type-recursive!) signature:

delegate ActionRec ActionRec();

To write an anonymous recursive function, we use the same fixpoint technique as we saw before. In other words, the action is going to be passed as a parameter to a lambda expression, so that it can be called – to realize the recursion – inside the lambda expression’s body. For example, our Motivate sample can be written like this:

Func<ActionRec, Func<ActionRec>> _motivate = motivate => () =>
{
Console.WriteLine("Go!");
return motivate();
};

Read the lambda expression from left to right to grasp it: given an ActionRec (which will be fixed to the action itself further on by means of a Fix call), we’re tasked with providing something the trampoline can call (with no arguments in our simple case) to run the next “bounce”. This by itself should return an ActionRec to facilitate the further recursion. Apart from the return keyword and some lambda arrow tokens this looks quite similar to the typical recursive C# method shown earlier. To get the real recursive function with a regular Action signature, we’ll call a fixpoint method called Fix:

Action Motivate = _motivate.Fix();

Now we can call Motivate and should see no StackOverflowException, even though the function will run forever. The obvious question is how the Fix method works. Since we have no control over the Func delegate type used to define the non-fixed _motivate delegate, it ought to be an extension method. The signature therefore looks like this:

public static Action Fix(this Func<ActionRec, Func<ActionRec>> f)

Now let’s reason about what the Fix method can do. Obviously it has to return an Action, which looks like “return () => { /* TODO */ };”. Question is what the body of the action has to do. Well, it will have to call f at some point, passing in an ActionRec. This returns a function that, when called, will give us another of those ActionRec delegates. As long as a non-null delegate object is returned (null will be used later on as a way to break from the recursion), we can keep calling it in a loop. And that’s where the stack-friendly nature comes from: we realize the recursion using a loop. Here’s how it looks:

public static Action Fix(this Func<ActionRec, Func<ActionRec>> f)
{
    return () =>
    {
        ActionRec a = null;
        for (a = () => a; a != null; a = f(a)())
            ;
    };
}

The last part of the for-statement is the most explanatory one: it calls the user-defined function with the recursive action, which returns an ActionRec. That gets called with the arguments, which for a plain vanilla action are empty, (). To get started, we use the definite assignment “closure over self” trick we saw at the very start of the post (starting with null):

Func<int, int> fac = null;
fac = n => n == 0 ? 1 : n * fac(n - 1);

That’s essentially the fixpoint part of Fix. It will definitely help the reader to trace through the code for the Motivate sample step by step. You’ll see how code in the Fix trampoline will get interleaved with calls to your own delegate:

image

The second frame in the call stack is the trampoline that lives in the anonymous action inside the Fix method. We’re currently broken in the debugger inside the recursive call to our own delegate. Notice though the call stack’s depth is constant at two frames (ignoring Main), even though we’ve already made a number of recursive calls. Contrast this to the original C#-style Motivate definition, which would already have grown the stack to 10 frames:

image

The way to break from a trampoline-based recursion is by returning null from the trampolined function. While that works, we want to add a bit of syntactical surface to it for reasons that will become apparent later (hint: we’ll need a place to stick return values on). So, we define a trivial Break extension method that will return a null-valued ActionRec:

public static ActionRec Break(this ActionRec a) { return null; }

Based on certain conditions we can now decide to break out of the recursion, simply by calling Break on the ActionRec passed in. For example, we could capture a local variable from the outer scope, to act as a counter:

Console.WriteLine("Action of arity 0");
{
    int i = 0;
    Func<ActionRec, Func<ActionRec>> f = a => () => { Console.WriteLine("Go! " + i++); return i < 10 ? a() : a.Break(); };
    f.Fix()();
}
Console.WriteLine();

This will just print 10 Go! messages. Notice I’ve omitted an intermediate variable for the f.Fix() result and call the resulting Action delegate in one go.

 

More recursive Action types

To do something more useful, we want to support higher arities for recursive Action and Func delegates. Let’s start by looking at the Action delegates since we’ve already looked at the simplest case of a recursive Action delegate before. Below is a sample of a recursive Action delegate with one parameter, printing the powers of two with exponents 0 to 9:

Console.WriteLine("Action of arity 1");
{
    int i = 0;
    Func<ActionRec<int>, Func<int, ActionRec<int>>> f = a => x => { Console.WriteLine("2^" + i++ + " = " + x); return i < 10 ? a(x * 2) : a.Break(); };
    f.Fix()(1);
}
Console.WriteLine();

Notice we’re cheating a bit by using a captured outer local variable to restrict the number of recursive calls. It’s left as an exercise to the reader to define another such recursive function where the input parameter is used to represent the “to” argument, i.e. specifying the largest exponent to calculate a power of two for.
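
One possible take on that exercise, as a sketch of my own: let the single parameter be the largest exponent and count it down, so no captured counter is needed (at the price of printing in descending order):

Func<ActionRec<int>, Func<int, ActionRec<int>>> g =
    a => n => { Console.WriteLine("2^" + n + " = " + (1 << n)); return n > 0 ? a(n - 1) : a.Break(); };
g.Fix()(9); // prints 2^9 = 512 down to 2^0 = 1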

In here, the ActionRec<T> delegate represents a recursive action delegate with one generic argument:

delegate ActionRec<T> ActionRec<T>(T t);

In order to define the recursive action that produces the powers of two, we use a regular function that maps such a recursive action onto a function that can create a new one of those, given an int as the input. Changing the names of the parameters may help to grasp this:

Console.WriteLine("Action of arity 1");
{
    int i = 0;
    Func<ActionRec<int>, Func<int, ActionRec<int>>> _printPowersOfTwo = printPowersOfTwo => x =>
    {
        Console.WriteLine("2^" + i++ + " = " + x);
        return i < 10 ? printPowersOfTwo(x * 2) : printPowersOfTwo.Break();
    };

    Action<int> PrintPowersOfTwo = _printPowersOfTwo.Fix();
    PrintPowersOfTwo(1);
}
Console.WriteLine();

Now the indented block reads like “void printPowersOfTwo(int x) { … }”. The Fix method’s trampoline is a bit trickier than the one we saw before, as it needs to deal with the one parameter that has to be fed to the called delegate. There’s a bit of voodoo here since the argument can change every time one makes a recursive call. After all, it’s an argument to the delegate. In the sample above, printPowersOfTwo is fed consecutive powers of two. The little hack is shown below:

public static Action<T> Fix<T>(this Func<ActionRec<T>, Func<T, ActionRec<T>>> f)
{
    return t =>
    {
        ActionRec<T> a = null;
        for (a = t_ => { t = t_; return a; }; a != null; a = f(a)(t))
            ;
    };
}

Trace through this for the PrintPowersOfTwo sample, where t starts as value 1. Clearly, a is non-null at that point (due to the assigned lambda expression in the initializer of the for-loop), so we get to call f with that action and argument 1. Now we’re in our code where 1 got assigned to parameter x, causing 2^0 = 1 to be printed to the screen. Ultimately this results in a call to printPowersOfTwo with argument 2. This happens on the action delegate “a” created by the first iteration of the trampoline’s for-loop:

a = t_ => { t = t_; return a; }

So, as a side-effect of calling this delegate, the local variable t got assigned the value 2. And the returned object from this call, “a”, gets assigned in the trampoline’s driver loop to the local variable “a”. In the next iteration, 2 will be fed to the recursive delegate. And so on:

image

Increasing the number of arguments with one more is done in a completely similar way:

Console.WriteLine("Action of arity 2");
{
    int i = 0;
    Func<ActionRec<int, int>, Func<int, int, ActionRec<int, int>>> f = a => (x, y) => { Console.WriteLine("2^" + x + " = " + y); return ++i < 10 ? a(x + 1, y * 2) : a.Break(); };
    f.Fix()(0, 1);
}
Console.WriteLine();

Where the new ActionRec delegate takes two generic parameters:

delegate ActionRec<T1, T2> ActionRec<T1, T2>(T1 t1, T2 t2);

In this sample we use two input parameters, one to represent the exponent and one to accumulate the powers of two. The Fix method now has to deal with two input parameters that need to be captured upon recursive calls. This is achieved as follows:

public static Action<T1, T2> Fix<T1, T2>(this Func<ActionRec<T1, T2>, Func<T1, T2, ActionRec<T1, T2>>> f)
{
    return (t1, t2) =>
    {
        ActionRec<T1, T2> a = null;
        for (a = (t1_, t2_) => { t1 = t1_; t2 = t2_; return a; }; a != null; a = f(a)(t1, t2))
            ;
    };
}

What we haven’t shown yet are the definitions of the Break methods that return null to signal breaking from the recursion. Here they are for completeness:

public static ActionRec<T> Break<T>(this ActionRec<T> a) { return null; }
public static ActionRec<T1, T2> Break<T1, T2>(this ActionRec<T1, T2> a) { return null; }

Below is an insight-providing screenshot illustrating the way recursion happens:

image

In Main, we called the fixed delegate with arguments 0 and 1. This caused us to enter the outermost lambda expression in Fix with t1 and t2 respectively set to 0 and 1. This is the second frame on the call stack (read from the bottom). The for-loop has proceeded to the first call of its update expression, resulting in a call to f with argument a and a subsequent invocation on the result with arguments 0 and 1. As a result, our lambda expression, lexically nested in the Main method, got called as observed by the third frame on the call stack, with x and y respectively set to 0 and 1. Here the recursive call happens by invoking the a delegate with arguments 1 (x + 1) and 2 (y * 2). Finally, this put us back in the trampoline where those two values will be captured in t1 and t2, and that’s where the debugger is currently sitting.

Moving on from here, we’ll back out of the trampoline and return the result of the apparent recursive call on “a” from lambda “f” in Main. This by itself puts us back in the driver for-loop, where “a” will be tested for null (which it isn’t yet) and the whole cycle starts again. This illustrates the key essence of the trampoline: instead of having the user directly causing a recursive call, callbacks to the trampoline code cause it to capture enough state information to make the call later on. This effectively flattens recursive calls into the for-loop. What we lost is the ability to do work after the recursive call returns (something we could work around by getting into the land of continuations).

 

Recursive Func types

The essential tricks to deal with input parameters have been explored above. However, Func delegate types have one more property we haven’t investigated just yet: the ability to return a value. We’ve seen the Break method before, but for Action delegates it doesn’t do anything but return null. In the case of recursive Func types, we’ll have to do something in addition to this, in order to return an object to the caller.

Let’s get started by defining the FuncRec delegate types. Again, those are mirrored after the regular Func delegates, but we have to sacrifice the return type position for a FuncRec:

delegate FuncRec<R> FuncRec<R>();
delegate FuncRec<T, R> FuncRec<T, R>(T t);
delegate FuncRec<T1, T2, R> FuncRec<T1, T2, R>(T1 t1, T2 t2);

Returning from a recursive FuncRec delegate will be done through the Break methods that now will take an argument for the return value:

public static FuncRec<R> Break<R>(this FuncRec<R> a, R res) { _brr[a] = res; return a; }
public static FuncRec<T, R> Break<T, R>(this FuncRec<T, R> a, R res) { _brr[a] = res; return a; }
public static FuncRec<T1, T2, R> Break<T1, T2, R>(this FuncRec<T1, T2, R> a, R res) { _brr[a] = res; return a; }

What’s happening inside those Break methods will be discussed further on. For now, it suffices to see the signatures, taking in an R parameter to hold the return value of the recursive call. Also notice how those methods return “a” instead of null.

Before we dig any deeper in the implementation, let’s see a couple of recursive functions in action:

Console.WriteLine("Function of arity 0");
{
    int i = 0;
    Func<FuncRec<int>, Func<FuncRec<int>>> f = a => () => { Console.WriteLine("Fun! " + i++); return i < 10 ? a() : a.Break(i); };
    Console.WriteLine("Result: " + f.Fix()());
}
Console.WriteLine();

Console.WriteLine("Function of arity 1");
{
    int i = 0;
    Func<FuncRec<int, int>, Func<int, FuncRec<int, int>>> f = a => x => { Console.WriteLine("2^" + i++ + " = " + x); return i < 10 ? a(x * 2) : a.Break(i); };
    Console.WriteLine("Result: " + f.Fix()(1));
}
Console.WriteLine();

Console.WriteLine("Function of arity 2");
{
    int i = 0;
    Func<FuncRec<int, int, int>, Func<int, int, FuncRec<int, int, int>>> f = a => (x, y) => { Console.WriteLine("2^" + x + " = " + y); return ++i < 10 ? a(x + 1, y * 2) : a.Break(i); };
    Console.WriteLine("Result: " + f.Fix()(0, 1));
}
Console.WriteLine();

We bound the recursion again by means of an outer local variable; this is not a requirement, but it keeps the samples from running away. Concerning the input parameters, things look identical to the ActionRec samples. What’s different are the Break calls and the output types specified in the FuncRec type parameters. We’ve simply used the bounding variable “i” as the return value for illustration purposes. Later, when we see factorial again, the output value will be more interesting.

How does Fix work this time? Let’s show one sample for the function with one argument:

public static Func<T, R> Fix<T, R>(this Func<FuncRec<T, R>, Func<T, FuncRec<T, R>>> f)
{
    return t =>
    {
        object res_;
        FuncRec<T, R> a = null;
        for (a = t_ => { t = t_; return a; }; !_brr.TryGetValue(a, out res_); a = f(a)(t))
            ;
        var res = (R)res_;
        _brr.Remove(a);
        return res;
    };
}

I’m using an ugly trick here to store the return value. Have a look at the Break methods that do stick the specified result in a dictionary, which is typed as follows:

// Would really like to store result on a property on the delegate,
// but can't derive from Delegate manually in C#... This is "brr".
private static Dictionary<Delegate, object> _brr = new Dictionary<Delegate, object>();

Break adds the return value to this dictionary, while the trampoline driver loop checks for such a value repeatedly. If one is found, a Break call has been done and the loop terminates, stopping the recursion and sending the answer to the caller. Alternative, potentially cleaner, tricks can be thought of, but I haven’t spent much more time thinking about this.

All in all, the core Fix is pretty much the same as for the action-based delegates, apart from the TryGetValue call in the condition, and some dictionary-related cleanup code. Below is our destination factorial sample:

Console.WriteLine("Factorial");
{
    Func<FuncRec<int, int, int>, Func<int, int, FuncRec<int, int, int>>> fac_ = f => (x, a) => x <= 1 ? f.Break(a) : f(x - 1, a * x);
    Func<int, int> fac = (int n) => fac_.Fix()(n, 1);
    Enumerable.Range(1, 10).Select(n => new { n, fac = fac(n) }).Do(Console.WriteLine).Run();
}
Console.WriteLine();

The type of the intermediate function definition is quite impressive due to the fixpoint structure, but the essence of the function is quite easy to grasp:

f => (x, a) => x <= 1 ? f.Break(a) : f(x - 1, a * x)

Given a function (that will represent the fixed factorial definition, i.e. itself) and two arguments, one to count down and one to represent the accumulated product, we simply continue multiplying till we hit the base case, where we return (using Break) the accumulated value. The next line creates a simple wrapper function to hide away the accumulator base value of 1:

Func<int, int> fac = (int n) => fac_.Fix()(n, 1);

And now we have a simple factorial function we can call in the regular manner we’re used to, using delegate invocation syntax. To illustrate it for multiple values, I'm using a simple LINQ statement, projecting each value from 1 to 10 onto an anonymous object with both that number and the corresponding factorial value. The Do and Run methods will be introduced in the Reactive Framework as new extensions to IEnumerable:

public static IEnumerable<T> Do<T>(this IEnumerable<T> src, Action<T> a)
{
    foreach (var item in src)
    {
        a(item);
        yield return item;
    }
}

public static void Run<T>(this IEnumerable<T> src)
{
    foreach (var _ in src)
        ;
}

To prove the stack utilization remains constant, we can extend the sample using the handy System.Diagnostics.StackTrace class and the .NET 4.0 Tuple class. In the non-trampolined version, we’d see the stack grow on every call, reaching its maximum depth at the point we return from the base case. So, watching the stack depth at the point of the base case’s return call (using Break) will be a good metric of success:

Console.WriteLine("Factorial + stack analysis");
{
    Func<FuncRec<int, int, Tuple<int, int>>, Func<int, int, FuncRec<int, int, Tuple<int, int>>>> fac_ =
        f => (x, a) => x <= 1 ? f.Break(new Tuple<int, int>(a, new StackTrace().FrameCount)) : f(x - 1, a * x);

    Func<int, Tuple<int, int>> fac = (int n) => fac_.Fix()(n, 1);

    (from n in Enumerable.Range(1, 10)
     let f = fac(n)
     select new { n, fac = f.Item1, stack = f.Item2 }).Do(Console.WriteLine).Run();
}
Console.WriteLine();

The result is shown below:

image

This looks good, doesn’t it? If you get tired of the long generic Func types, simply call the Fix method directly, passing in the types of the arguments and return value:

var fac_ = Ext.Fix<int, int, Tuple<int, int>>(f => (x, a) =>
    x <= 1 ? f.Break(new Tuple<int, int>(a, new StackTrace().FrameCount)) : f(x - 1, a * x)
);

Beautiful! Almost reads like a regular C# method declaration (given plenty of imagination, which the author happens to possess).

 

Putting the pieces together

Since readers often want to try out the thing as a whole, here’s the implementation of my latest Esoteric namespace:

// Trampoline for tail recursive Action and Func delegate creation and invocation in constant stack space
// bartde - 10/29/2009

using System;
using System.Collections.Generic;

namespace Esoteric
{
    delegate ActionRec ActionRec();
    delegate ActionRec<T> ActionRec<T>(T t);
    delegate ActionRec<T1, T2> ActionRec<T1, T2>(T1 t1, T2 t2);

    delegate FuncRec<R> FuncRec<R>();
    delegate FuncRec<T, R> FuncRec<T, R>(T t);
    delegate FuncRec<T1, T2, R> FuncRec<T1, T2, R>(T1 t1, T2 t2);

    static class Ext
    {
        public static ActionRec Break(this ActionRec a) { return null; }
        public static ActionRec<T> Break<T>(this ActionRec<T> a) { return null; }
        public static ActionRec<T1, T2> Break<T1, T2>(this ActionRec<T1, T2> a) { return null; }

        public static Action Fix(this Func<ActionRec, Func<ActionRec>> f)
        {
            return () =>
            {
                ActionRec a = null;
                for (a = () => a; a != null; a = f(a)())
                    ;
            };
        }

        public static Action<T> Fix<T>(this Func<ActionRec<T>, Func<T, ActionRec<T>>> f)
        {
            return t =>
            {
                ActionRec<T> a = null;
                for (a = t_ => { t = t_; return a; }; a != null; a = f(a)(t))
                    ;
            };
        }

        public static Action<T1, T2> Fix<T1, T2>(this Func<ActionRec<T1, T2>, Func<T1, T2, ActionRec<T1, T2>>> f)
        {
            return (t1, t2) =>
            {
                ActionRec<T1, T2> a = null;
                for (a = (t1_, t2_) => { t1 = t1_; t2 = t2_; return a; }; a != null; a = f(a)(t1, t2))
                    ;
            };
        }

        // Would really like to store result on a property on the delegate,
        // but can't derive from Delegate manually in C#... This is "brr".
        private static Dictionary<Delegate, object> _brr = new Dictionary<Delegate, object>();

        public static FuncRec<R> Break<R>(this FuncRec<R> a, R res) { _brr[a] = res; return a; }
        public static FuncRec<T, R> Break<T, R>(this FuncRec<T, R> a, R res) { _brr[a] = res; return a; }
        public static FuncRec<T1, T2, R> Break<T1, T2, R>(this FuncRec<T1, T2, R> a, R res) { _brr[a] = res; return a; }

        public static Func<R> Fix<R>(this Func<FuncRec<R>, Func<FuncRec<R>>> f)
        {
            return () =>
            {
                object res_;
                FuncRec<R> a = null;
                for (a = () => a; !_brr.TryGetValue(a, out res_); a = f(a)())
                    ;
                var res = (R)res_;
                _brr.Remove(a);
                return res;
            };
        }

        public static Func<T, R> Fix<T, R>(this Func<FuncRec<T, R>, Func<T, FuncRec<T, R>>> f)
        {
            return t =>
            {
                object res_;
                FuncRec<T, R> a = null;
                for (a = t_ => { t = t_; return a; }; !_brr.TryGetValue(a, out res_); a = f(a)(t))
                    ;
                var res = (R)res_;
                _brr.Remove(a);
                return res;
            };
        }

        public static Func<T1, T2, R> Fix<T1, T2, R>(this Func<FuncRec<T1, T2, R>, Func<T1, T2, FuncRec<T1, T2, R>>> f)
        {
            return (t1, t2) =>
            {
                object res_;
                FuncRec<T1, T2, R> a = null;
                for (a = (t1_, t2_) => { t1 = t1_; t2 = t2_; return a; }; !_brr.TryGetValue(a, out res_); a = f(a)(t1, t2))
                    ;
                var res = (R)res_;
                _brr.Remove(a);
                return res;
            };
        }
    }
}

Another sample illustrating the stack-friendly nature of the trampoline is shown below:

Console.WriteLine("Forever! (CTRL-C to terminate)");
{
    bool boom = false;
    Console.CancelKeyPress += (s, e) =>
    {
        boom = true;
        e.Cancel = true;
    };
    Func<ActionRec, Func<ActionRec>> f = a => () =>
    {
        if (boom)
            throw new Exception("Stack use is constant!");
        return a();
    };
    try
    {
        f.Fix()();
    }
    catch (Exception ex)
    {
        // Inspect stack trace here
        Console.WriteLine(ex);
    }
}

This function never returns unless you force it by pressing CTRL-C. At that point, you’ll see the exception’s stack trace being printed, revealing the constant stack space:

image

It also illustrates how the trampoline is sandwiched between our call to the recursive function (f.Fix()()) and the callback to the code we wrote (f).

 

Homework

The reader is invited to think about realizing mutual recursion in a stack-friendly way. For example, the sample below is an F# mutual recursive set of two functions used to determine whether a number is odd or even:

let rec isEven n =
  if n = 0 then
    true
  else
    isOdd (n - 1)
and isOdd n =
  if n = 0 then
    false
  else
    isEven (n - 1)

Its use is shown below:

image

In fact, the F# implementation generates mutually recursive calls here, but in a stack-friendly way by using tail calls (only shown for the isEven function below, but similar for isOdd):

image

Tail calls reuse the current stack frame, therefore not exhausting the stack upon recursion. The same can be achieved by means of a trampoline if you’re brave enough to give it a try. Hint: notice how mutually recursive functions in F# are subtly bundled by means of an “and” keyword.
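
Not the full answer the hint is steering you toward, but one cheap way to get going with the machinery we already have is to fold both functions into a single trampolined one and add a flag saying which of the two we’re currently “in” (a sketch of mine, reusing the arity-2 Ext.Fix overload):

var evenOdd = Ext.Fix<int, bool, bool>(f => (n, inIsEven) =>
    n == 0 ? f.Break(inIsEven)     // isEven 0 = true, isOdd 0 = false
           : f(n - 1, !inIsEven)); // isEven n = isOdd (n - 1), and vice versa

Func<int, bool> isEven = n => evenOdd(n, true);
Func<int, bool> isOdd = n => evenOdd(n, false);

Console.WriteLine(isEven(1000000)); // True, computed in constant stack space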

As an additional piece of homework, think about ways we could use a trampoline to call functions that still return a useful value, to be used in code after the call returns. As an example, consider the classic definition of factorial:

int fac(int n)
{
    return n == 0 ? 1 : n * fac(n - 1);
}

How would you realize exactly the code above, using trampolines and whatnot, without exhausting call stack space? Recall the problem with the above is the fact we need to do a multiplication after the recursive fac call returns. Hint: think of continuations and maybe even the typical “von Neumann machine trade-off” between code (CPU) and data (memory).
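
To nudge you in the direction of that hint, here’s a sketch of my own (not the full answer): continuation-passing style moves the “work after the call” into the argument we pass forward, trading stack frames for heap-allocated closures. By itself this still recurses on the stack; marrying it with a trampoline is the part left to you.

Func<int, Func<int, int>, int> facCps = null;
facCps = (n, k) => n == 0 ? k(1) : facCps(n - 1, r => k(n * r));

Console.WriteLine(facCps(5, r => r)); // 120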

Happy jumping!


Introduction

Today, a colleague and I were playing with new C# 4.0 and BCL 4.0 features, me trying (and succeeding I think) to convince my co-worker about the merits of LINQ and its peripheral technologies. Users of Visual Studio 2010 Beta 2 may have noticed the new IObservable<T> and IObserver<T> interfaces in the System namespace:

namespace System
{
    // Summary:
    //     Defines a provider for push-based notification.
    //
    // Type parameters:
    //   T:
    //     The object that provides notification information.This type parameter is
    //     covariant. That is, you can use either the type you specified or any type
    //     that is more derived. For more information about covariance and contravariance,
    //     see Covariance and Contravariance in Generics.
    public interface IObservable<out T>
    {
        // Summary:
        //     Notifies the provider that an observer is to receive notifications.
        //
        // Parameters:
        //   observer:
        //     The object that is to receive notifications.
        //
        // Returns:
        //     The observer's interface that enables resources to be disposed.
        IDisposable Subscribe(IObserver<T> observer);
    }
    // Summary:
    //     Provides a mechanism for receiving push-based notifications.
    //
    // Type parameters:
    //   T:
    //     The object that provides notification information.This type parameter is
    //     contravariant. That is, you can use either the type you specified or any
    //     type that is less derived. For more information about covariance and contravariance,
    //     see Covariance and Contravariance in Generics.
    public interface IObserver<in T>
    {
        // Summary:
        //     Notifies the observer that the provider has finished sending push-based notifications.
        void OnCompleted();
        //
        // Summary:
        //     Notifies the observer that the provider has experienced an error condition.
        //
        // Parameters:
        //   error:
        //     An object that provides additional information about the error.
        void OnError(Exception error);
        //
        // Summary:
        //     Provides the observer with new data.
        //
        // Parameters:
        //   value:
        //     The current notification information.
        void OnNext(T value);
    }
}

Those interfaces deserve a whole blog series of their own (I promise to do so, some day) and have to do with the Reactive Framework, discussed on various occasions on Channel 9.

More info on this will appear on Channel 9 in the foreseeable future (stay tuned; I’ll be there!) but all that matters in the scope of this post is the use of two interesting keywords in the interface definitions above:

public interface IObservable<out T> { … }
public interface IObserver<in T> { … }

Did you see them? The reuse of keywords out and in as modifiers on generic type parameters is what’s referred to as the new C# 4.0 feature called definition-site generic co- and contravariance for interface and delegate types, or generic co- and contravariance for short. I’ve written about this in the past, explaining the concept in terms of apples and tomatoes.

If you don’t know about this stuff just yet, take a break and visit the link above before returning back here. The nice thing about all of this is that as a user of generic types declared to be co- and/or contravariant in certain type parameters, you don’t need to worry about it at all. In fact, this feature allows you to write things you couldn’t write before:

IEnumerable<Apple> apples = null;
IEnumerable<Fruit> fruits = apples; // didn't compile before!

But in .NET 4.0 with C# 4.0 and VB 10.0, you can. The point of variance annotations on generic type parameters is to make such operations provably, at compile-time, safe. Before this new language-level feature, generics were always invariant.
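
The mirror image works too. Assuming the same Fruit and Apple types used above, and knowing that IComparer<T> is declared contravariant in T (in T), a comparer that can handle any Fruit is a perfectly fine comparer of Apples:

IComparer<Fruit> fruitComparer = Comparer<Fruit>.Default;
IComparer<Apple> appleComparer = fruitComparer; // didn't compile before C# 4.0 either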

 

An observed problem?

Loaded with my knowledge of generic co- and contravariance, I was surprised to see the following code compile fine. To be honest, the real scenario was more contrived than the simplified one below, with many intermediate types in the picture:

//
// When apples are thrown at us, we'll write them to the screen.
// We don't expect to see other fruits here!
//
// Note: This uses an extension method on IObservable<T> that
//       hides the declaration of an IObserver<T>. C# doesn't
//       have anonymous inner types unfortunately...
//
IObservable<Apple> apples = new AngryAppleThrower();
apples.Subscribe(Console.WriteLine);

//
// Fine, since IObservable<T> is covariant in T.
//
IObservable<Fruit> fruits = apples;

//
// What? A Banana is injected into an observable of Fruits?
// Though you may think this is fine, how will the observer
// above, expecting Apple objects, react to seeing a Banana?
// This is not type-safe; covariance should prevent this!
//
fruits.Inject(new Banana());

It just couldn’t be true that you could somehow sneak a Banana in an observable of Apple objects. That’s the whole point of type safety guarantees provided by co- and contravariance on generic types in C# 4.0.  I tried a few things to verify we could really pass subtypes of Fruit into the IObservable<T>. The following didn’t compile while the Banana-case did:

fruits.Inject(new string());

Just looking at the sample above you can already smell what’s going on, maybe? Tip: look at the types of the variables. What letter do they start with? Then, where does Inject come from? If you don’t see it yet, don’t worry as I’ll reveal the answers in a second. Either way, I started to repro the above with simpler fictional types:

interface IO<out T>
{
}

interface IMyIO<out T> : IO<T>
{
    void Inject<S>(S s) where S : T;
}

In fact, the two interfaces shown here add an additional layer of complexity, just as was the case in the application I was staring at. The above should fail to compile: since IMyIO<T> declares T as a covariant (output-only) type parameter, it should not be possible to inject an object of type T, or any of its subtypes, into it. So Inject should not work. And indeed it didn’t compile:

Invalid variance: The type parameter 'T' must be contravariantly valid on 'IMyIO<T>.Inject<S>(S)'. 'T' is covariant.

What this says is that Inject requires T to be contravariant (available for input positions) but it’s declared to be covariant. Good job, dear compiler (and its developers), just what I expected to see here! Still, the Banana injection was exactly what I was observing in the concrete fruity flavored sample shown to me. To make matters worse, we were dealing with concrete types implementing the interfaces, so things were far less obvious than they are in the sample shown above. Going back to the questions I asked the reader earlier, it’s clear the Inject method in the Fruit-sample is coming from an extension method definition, as the left-hand side is an interface that doesn’t have the method by itself:

static class EvilLooking
{
    public static IObservable<T> Inject<T, S>(this IObservable<T> source, S s) where S : T
    {
        // Implementation doesn't matter yet...
        return null;
    }
}

Well, clearly this is fine. Inject is just a method sitting on the outside of the type it extends (in this case IObservable<T>), hence it cannot reach into its internals in any way. The syntactical characteristic of extension method invocation as if it were an instance method simply threw me off:

fruits.Inject(new Banana());

Even though it looks like the covariant Fruit container accepts an object of subtype Banana, we’re not doing so. The three words “covariant”, “accepts” and “subtype” are a contradictio in terminis. Valid triplets are:

  • contravariant, accept, subtype – e.g. an IComparer<Fruit> can accept Apple objects since IComparer<T> is contravariant in T
  • covariant, return, subtype – e.g. an IEnumerable<Fruit> can return Apple objects since IEnumerable<T> is covariant in T

All that’s happening here is that a “peer method” (with no more “privileges” than the caller in terms of reaching out to the fruits object’s state and internals) is being called:

EvilLooking.Inject(fruits, new Banana());

Here it’s clear that fruits is not accepting a Banana directly. And obviously the “Implementation doesn’t matter yet…” comment in the Inject method implementation shown above is wishful thinking. It won’t be able to contaminate the IObservable<T> source object with an S object (where S is a subtype of T) no matter how hard it tries (short of reflection-based techniques that will fail at runtime).

Case solved. Remaining work for the night was to come up with a good illustration (recall the real problem was not with Fruit, Banana or Apple objects, nor with IObservable<T> directly, but with less intuitive types) of how:

  1. The extension method doesn’t break safety at all, though it looks as if it does (to the trained covariant eye);
  2. Co- and contravariance are a great feature;
  3. And maybe throwing in some of the dangers of dynamic typing when used inappropriately (a tangent to the original topic).

 

Extension methods ain't Trojan horses

Here’s what I came up with on the bus heading home: the story of how the Trojan War could have gone differently if the city of Troy had protected itself against the Trojan horse adequately... Lots of references to this story appear in the code below, so keep a copy of the linked pages on the side (using Windows 7 Snap if you have installed the brand new wonderful OS already). I hope you have as much fun reading it (and working your way through it as a bedside lecture) as I had writing it. I know, it’s geeky humor at best. And analogies are just that: analogies. I don’t guarantee it to be perfect.

/* Extension methods ain't Trojan horses
 * 
 * Illustration of safety guarantees provided by generic covariance in C# 4.0
 * and how extension methods can merely provide a fata morgana making you
 * believe they can defeat it.
 * 
 * bartde - 10/22/2009
 */
using System;
using System.Collections.Generic;
using System.Dynamic;
using System.Threading;
using Microsoft.CSharp.RuntimeBinder;

// How the Trojan war could have been...
namespace NewTrojanWar
{
    /// <summary>
    /// Running the attack vector!
    /// </summary>
    class Program
    {
        /// <summary>
        /// Proves the attack DOESN'T work.
        /// </summary>
        static void Main()
        {
            try
            {
                GreekArmy.TheOneAndOnly.Attack();
            }
            catch (EntryPointNotFoundException ex)
            {
                // Ouch. "The gates are closed it says." Maybe they are covariant after all?
                // They got smarter than we thought! Did they hire Anderius?
                Console.WriteLine(ex);

                // What happened to Epeius is left to your own imagination...
            }
        }
    }

    /// <summary>
    /// City protected by covariance.
    /// </summary>
    /// <typeparam name="NotAllowedToComeIn">Persona or objects of this type are not allowed to come in.</typeparam>
    interface ISecureCity<out NotAllowedToComeIn>
    {
        /// <summary>
        /// What can't come in, can go out.
        /// </summary>
        /// <returns>You're fine to leave, but beware of the point of no return!</returns>
        NotAllowedToComeIn Escape();

        /* Thanks to Laocoon and Cassandra, the following is not possible anymore due the covariantly
         * protected city. The Trojans are a little unhappy as the annual horse meeting can't come in
         * anymore though. It was quite an invasion every year; too bad we lost it since Laocoon and
         * Cassandra put their Troy Sharp 4.0 plan into action. The fact they convinced us that the
         * city has support for covariant protection in place since the Troy City Council 2.0 came out
         * almost five years ago, we - the Trojans - are convinced that cancelling the annual horse
         * meeting for additional protection is a good thing. Out with those foreign horses!
         */
        //void Invade(NotAllowedToComeIn evil);
    }

    /// <summary>
    /// Base class for every horse.
    /// </summary>
    class Horse
    {
    }

    /// <summary>
    /// Troy doesn't want horses to come in.
    /// </summary>
    sealed /* we're an incredibly safe city, no-one should override us! */ class Troy :
        DynamicObject /* Another protection put in place by Laocoon and Cassandra, see further... */,
        ISecureCity<Horse /* Laocoon and Cassandra were right */>
    {
        /// <summary>
        /// We constructed it ourselves and are proud of it.
        /// </summary>
        private Troy()
        {
        }

        /// <summary>
        /// There's only one real Troy!
        /// </summary>
        private static Troy s_city;

        /// <summary>
        /// For publication in the Easy Jet brochure.
        /// </summary>
        /// <returns>Attractive city in the sun; and safe against incoming horses!</returns>
        public static ISecureCity<Horse> City
        {
            get
            {
                if (s_city == null)
                    s_city = new Troy();
                return s_city;
            }
        }

        /// <summary>
        /// Troy breeds horses and people can ask for them from the outside.
        /// </summary>
        /// <returns>You want a horse? Have one!</returns>
        public Horse Escape()
        {
            return new Horse();
        }

        /* Sigh. This used to be fun. To be a good covariant citizen, we have to drop it for
         * better or for worse. Now we implement the new ISecureCity interface!
        /// <summary>
        /// Horse parade.
        /// </summary>
        /// <param name="parade">External horses.</param>
        [Obsolete]
        public void Invade(Horse parade) // implemented the invariant old ICity interface
        {
            // Hang out in the streets and watch the horse parade coming in through the gates.
        }
         */

        /// <summary>
        /// We'll pretend to accept dynamic calls.
        /// </summary>
        /// <param name="binder">Greek language invocation.</param>
        /// <param name="args">What are they saying?</param>
        /// <param name="result">We won't every talk to them though...</param>
        /// <returns>Hard cheese.</returns>
        public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
        {
            throw new EntryPointNotFoundException("The gates are closed!");
        }
    }

    /// <summary>
    /// Greeks trying to attack will need to sit in the horse for quite a while till
    /// the Trojans wheel the horse in.
    /// </summary>
    interface ICanSitStill
    {
        void SitStill();
    }

    /// <summary>
    /// Greeks trying to attack by creating a special horse.
    /// </summary>
    /// <typeparam name="T">Objects to put inside the horse.</typeparam>
    abstract class FilledHorse<T> : Horse where T : ICanSitStill
    {
        /// <summary>
        /// The Trojans will be curious to see what's inside the horse and will subscribe
        /// to the present to receive whatever is in there. Greeks will invade through the
        /// well-known OnNext method that fires the attack inside the city walls.
        /// </summary>
        public abstract IObservable<object> Present { get; }
    }

    /// <summary>
    /// Non-HR-compliant base class for the type below, but according to the story all soldiers were men.
    /// </summary>
    class Male
    {
    }

    /// <summary>
    /// Soldiers need to be tough men.
    /// </summary>
    sealed /* inheritance is not allowed in the army */ class GreekSoldier : Male, ICanSitStill
    {
        /// <summary>
        /// Event to launch the attack.
        /// </summary>
        internal ManualResetEvent GoGoGo { get; private set; }

        /// <summary>
        /// Fresh recruits start out waiting; the signal hasn't been given yet.
        /// </summary>
        public GreekSoldier()
        {
            GoGoGo = new ManualResetEvent(false);
        }

        /// <summary>
        /// Wait for the attack to start.
        /// </summary>
        public void SitStill()
        {
            GoGoGo.WaitOne();
            // Really they looked for the city's gates and let in their friends. But let's assume they were
            // particularly eager to attack straight away. No matter what they do, thanks to the new city
            // security plan, they won't even get here: Troy won't burn!
            throw new DestroyTroyException();
        }
    }

    /// <summary>
    /// Destroy Troy!
    /// </summary>
    class DestroyTroyException : Exception
    {
    }

    /// <summary>
    /// A present for the Trojans.
    /// </summary>
    sealed /* don't mess with us! */ class InterestingHorse :
        FilledHorse<GreekSoldier /* Let's hope they don't look at the type label at the "base" of the horse. */>
    {
        /// <summary>
        /// Invaders.
        /// </summary>
        private IEnumerable<GreekSoldier> _soldiers;

        /// <summary>
        /// Fill the horse with invaders. Made internal not to arouse suspicion.
        /// </summary>
        /// <param name="soldiers">Invaders.</param>
        internal void FillWith(IEnumerable<GreekSoldier> soldiers)
        {
            _soldiers = soldiers;
        }

        /// <summary>
        /// You got a present! Get it and listen to it :-).
        /// </summary>
        /// <remarks>Typed as a mysterious observable of objects. Little do they know the objects are soldiers.</remarks>
        public override IObservable<object> Present
        {
            get { return new WaitingInvaders(_soldiers); }
        }

        /// <summary>
        /// Other than a regular horse, it can be wheeled in.
        /// </summary>
        public void Wheel()
        {
            // This should really override the Horse's feet.
        }

        /// <summary>
        /// But the horse is full of invaders waiting till the attack has to happen at night.
        /// </summary>
        sealed class WaitingInvaders : IObservable<GreekSoldier>
        {
            /// <summary>
            /// Invaders.
            /// </summary>
            private IEnumerable<GreekSoldier> _soldiers;

            /// <summary>
            /// Creating new waiting invaders.
            /// </summary>
            /// <param name="soldiers">Invaders sitting still.</param>
            public WaitingInvaders(IEnumerable<GreekSoldier> soldiers)
            {
                _soldiers = soldiers;
                foreach (var invader in _soldiers)
                    invader.SitStill();
            }

            /// <summary>
            /// Curious Trojans will definitely call this.
            /// </summary>
            /// <param name="observer">The curious Trojans will observe.</param>
            /// <returns>It doesn't give you back something to unsubscribe. In a disappointment, the Trojans go to bed.</returns>
            public IDisposable Subscribe(IObserver<GreekSoldier> observer)
            {
                new Thread(() =>
                {
                    // Wait till the night falls. We anticipate there will be at most 12 hours of celebration
                    // from the time they wheel in the horse. Then all Trojans will be drunk and asleep.
                    Thread.Sleep(12 * 60 * 60 * 1000);

                    // If coast is clear...
                    foreach (var invader in _soldiers)
                    {
                        observer.OnNext(invader);

                        // Wake-up call! Don't ask me what alarm mechanism they used back then... For one thing,
                        // it wasn't the Windows 7 kernel's thread scheduler.
                        invader.GoGoGo.Set();
                    }
                }).Start();

                return null; // if you try to dispose the attackers, you'll null-ref yourself :P
            }
        }
    }

    /// <summary>
    /// There's this evil invader that believes in static typing, hence he declared himself as a classy
    /// static man. He can't have anything but static things. In an evil mood, he decided to help out the
    /// Greek army by building the horse and providing an invasion plan for them to enter Troy.
    /// </summary>
    static class Epeius
    {
        /// <summary>
        /// Build the horse.
        /// </summary>
        /// <returns>The invading horse.</returns>
        public static InterestingHorse BuildHorse()
        {
            return new InterestingHorse();
        }

        /// <summary>
        /// How evil. Putting a Gift method before the city door. How could you resist
        /// calling it? But Epeius is faking it, as we shall see. The city is protected
        /// quite well thanks to the Troy Sharp 4.0 feature.
        /// </summary>
        /// <param name="city">The city to invade.</param>
        /// <param name="present">Hmm, horse beef (assuming the Trojans are not vegetarian)!</param>
        public static void Gift(this ISecureCity<Horse> city, InterestingHorse present)
        {
            // Sometimes Epeius silently surrenders to dynamic typing because he's lazy.
            // No-one will suspect him for describing Troy as a dynamic vibrant city. After
            // all, they eagerly want to get into it and Epeius was pretty excited about
            // his job for the Greek army helping them to invade the city. No reason for
            // suspicion at all. We believe Epeius' use of dynamic is fine...
            dynamic vibrantTroy = city;

            // But Epeius didn't find (statically) a method or property on city that accepts
            // the present. He makes the Greek army believe this method works by putting
            // dynamic fairy dust in their eyes though. In fact, Epeius fails here and will
            // let down the Greek army: there is no way to invade Troy. So, the invasion
            // plan (the compiled IL) contains some hidden decision logic on finding out how
            // to carry out the invasion (the DLR machinery behind the scenes). When the
            // soldiers are in the horse, they'll eventually fail right here and their horse
            // will explode (see DynamicObject::TryInvokeMember method implementation!).
            vibrantTroy.Invade(present);

            // Unfortunately, Epeius made himself so static he doesn't even have a this
            // reference to call .Escape() on himself. The rest of the story of what happened
            // with Epeius after the Greek army discovered he cheated on them is not for the
            // sensitive reader.
        }
    }

    /// <summary>
    /// The Greek army is cruel.
    /// </summary>
    class GreekArmy
    {
        /// <summary>
        /// Army.
        /// </summary>
        private List<GreekSoldier> _soldiers;

        /// <summary>
        /// Only one!
        /// </summary>
        private GreekArmy()
        {
            _soldiers = new List<GreekSoldier> { /* recruit good fighters */ };
        }

        /// <summary>
        /// Only one!
        /// </summary>
        private static GreekArmy s_army;

        /// <summary>
        /// Only one!
        /// </summary>
        /// <returns>Only one!</returns>
        public static GreekArmy TheOneAndOnly
        {
            get
            {
                if (s_army == null)
                    s_army = new GreekArmy();
                return s_army;
            }
        }

        /// <summary>
        /// Written by Odysseus who believes Epeius' poisonous Gift call on the City of Troy will work.
        /// </summary>
        public void Attack()
        {
            var horse = Epeius.BuildHorse();
            horse.FillWith(_soldiers);
            Troy.City.Gift(horse); // Sweet, Odysseus sees a Gift command he can shout to the City of Troy...
        }
    }
}
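
For readers who'd rather see the variance mechanics without the mythology, here's a minimal sketch, not part of the listing above and with made-up type names, of the two rules the story leans on: a covariant ('out') type parameter may only occur in output positions, and a covariant interface converts "upwards" along its type argument, which is exactly what lets the horse expose an IObservable<GreekSoldier> as an IObservable<object>.

using System;

class VarianceSketch
{
    // 'out T' makes ISource<T> covariant: T may only show up in output positions.
    interface ISource<out T>
    {
        T Get();
        // void Put(T item); // would not compile: a covariant T can't be used as an input
    }

    class HorseSource : ISource<string>
    {
        public string Get() { return "a wooden horse"; }
    }

    static void Main()
    {
        // Covariance at work: an ISource<string> is usable wherever an ISource<object>
        // is expected, just like IObservable<GreekSoldier> flows into IObservable<object>.
        ISource<object> source = new HorseSource();
        Console.WriteLine(source.Get());
    }
}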

So much for this lecture on Latin epic poems, delivered in C# 4.0.
