## July 2007 - Posts

In order to help people understand the hows of expression trees, I've created a simple application called the ExpressionLearner (download it here). It's written for Orcas Beta 2 and shows how expression trees work. Here's a quick overview.

## What are expression trees?

I assume you know a bit about programming with functions (I'm intentionally not talking about "functional programming" as in languages like Haskell, although we're getting closer and closer to such languages with the advent of .NET Framework 3.5). To set the scene, consider the following piece of code in C# 1.0:

delegate int BinOp(int a, int b);

class Calculator
{
static void Main()
{
Console.WriteLine(DoBinOp(new BinOp(Add), 1, 2));
}

static int Add(int a, int b)
{
return a + b;
}

static int DoBinOp(BinOp op, int a, int b)
{
return op(a, b);
}
}

I think you agree this code is pretty heavy for what it's supposed to do. In C# 2.0 things got easier with anonymous methods:

delegate int BinOp(int a, int b);

class Calculator
{
static void Main()
{
Console.WriteLine(DoBinOp(delegate(int a, int b) { return a + b; }, 1, 2));
}

static int DoBinOp(BinOp op, int a, int b)
{
return op(a, b);
}
}

But what about that ugly piece of inline code just to add two numbers? That's why we now have lambdas in C# 3.0:

delegate int BinOp(int a, int b);

class Calculator
{
static void Main()
{
Console.WriteLine(DoBinOp((a,b) => a + b, 1, 2));
}

static int DoBinOp(BinOp op, int a, int b)
{
return op(a, b);
}
}

and we could even get rid of the BinOp delegate thanks to the BCL generic Func<T1,T2,R> delegate (other "overloads" on generic type parameters exist; the T params stand for inputs, R is the - functional - output):

class Calculator
{
static void Main()
{
Console.WriteLine(DoBinOp((a,b) => a + b, 1, 2));
}

static int DoBinOp(Func<int,int,int> op, int a, int b)
{
return op(a, b);
}
}
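For reference, here's roughly what that generic Func delegate family looks like; these delegates live in the System namespace in .NET Framework 3.5's System.Core assembly (note the BCL spells the output type parameter TResult where I wrote R above):

```csharp
// The Func<...> "overloads" on generic type parameters, covering 0 through 4 inputs:
public delegate TResult Func<TResult>();
public delegate TResult Func<T, TResult>(T arg);
public delegate TResult Func<T1, T2, TResult>(T1 arg1, T2 arg2);
public delegate TResult Func<T1, T2, T3, TResult>(T1 arg1, T2 arg2, T3 arg3);
public delegate TResult Func<T1, T2, T3, T4, TResult>(T1 arg1, T2 arg2, T3 arg3, T4 arg4);
```

DoBinOp above simply picks the two-input shape, Func&lt;int,int,int&gt;.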

So far, so good. But what does this have to do with expression trees? Before I can answer that, reinspect the code above. All of this C# 3.0 code gets compiled into IL code that's ready for execution by the CLR:

where the lambda was translated into an anonymous method, called "<Main>b__0":

Now, side-step to LINQ. Using LINQ, you can write queries like this:

var res = from p in products where p.UnitPrice >= 100 select p.ProductName;

In reality, this piece of code gets translated into a chain of (extension) method calls, like this:

var res = products.Where(p => p.UnitPrice >= 100).Select(p => p.ProductName);

Observe the two lambdas. But what's next? Where do Where and Select come from? It depends. If you're using LINQ to Objects, they come from a set of extension methods on System.Linq.Enumerable (simplified code below):

static class Enumerable
{
public static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T,bool> predicate)
{
foreach (T item in source)
if (predicate(item))
yield return item;
}

public static IEnumerable<R> Select<T,R>(this IEnumerable<T> source, Func<T,R> project)
{
foreach (T item in source)
yield return project(item);
}
}

So, essentially, the LINQ query above gets translated into IL code from A to Z, ready for direct execution on the target computer. That's because we've assumed "products" is an IEnumerable&lt;T&gt;, so the extension methods for IEnumerable&lt;T&gt; take effect. But what if "products" isn't an IEnumerable&lt;T&gt; (for example an IQueryable&lt;T&gt;, which I'll blog about again in a future post)? Assume it's some kind of class that acts as a proxy to write queries against, while the written queries are intended to be executed remotely, e.g. in a target language such as SQL or CAML or ... In such a case we can't do anything with IL code (well, übergeeks could disassemble/decompile the IL prior to converting it to the target query language); instead, we'd like to have the same query represented in some other intermediate format that we can deal with ourselves in the way we see fit (we = the implementors of the class you write the queries against). This is where expression trees enter the stage.

Let's go back to the original calculator sample and change the code a little:

class Calculator
{
static void Main()
{
Console.WriteLine(DoBinOp((a,b) => a + b, 1, 2));
}

static int DoBinOp(Expression<Func<int,int,int>> op, int a, int b)
{
return (int)op.Compile().DynamicInvoke(a, b);
}
}

There's one core difference here: the signature of the DoBinOp method now takes an Expression&lt;Func&lt;int,int,int&gt;&gt; as its first parameter, instead of a Func&lt;int,int,int&gt;. I've also changed the implementation of the DoBinOp method (which I'll discuss later), but you can ignore that for now. Observe, however, that the caller of the code doesn't see any change. Yet the lambda is now compiled to something completely different than regular IL instructions: it's compiled to IL code that generates an in-memory expression tree representation of the original code at runtime. Why does the compiler make that decision? Because it has to assign the lambda to a variable of type Expression&lt;Func&lt;...&gt;&gt; (in this case, to such a parameter). In IL, it looks like this:

This Main method code is equivalent to the following (self-written) code:

static void Main()
{
ParameterExpression a = Expression.Parameter(typeof(int), "a");
ParameterExpression b = Expression.Parameter(typeof(int), "b");
BinaryExpression add = Expression.Add(a, b);
Expression<Func<int,int,int>> l = Expression.Lambda<Func<int,int,int>>(add, a, b);

Console.WriteLine(DoBinOp(l, 1, 2));
}

Thus, essentially, you can think of expression trees as the data representation of an AST (abstract syntax tree). In other words, an expression tree represents a piece of code as data. Not every piece of code can be transformed into an expression tree, however. In contrast to code-generation mechanisms like CodeDOM, expression trees cannot represent statements (CS0834: A lambda expression with a statement body cannot be converted to an expression tree):

Func<int, int> abs = (int a) => { if (a >= 0) return a; else return -a; }; //compiles

Expression<Func<int, int>> abs = (int a) => { if (a >= 0) return a; else return -a; }; //doesn't compile (CS0834)

So, what the DoBinOp method gets from its caller is an expression tree instead of a delegate instance. One thing it can do with that expression tree (that represents a lambda expression) is to compile it into IL at that stage of the game, which is what happens in this line of code:

return (int)op.Compile().DynamicInvoke(a, b);
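As an aside, since Compile() on an Expression&lt;Func&lt;int,int,int&gt;&gt; returns a strongly typed Func&lt;int,int,int&gt; delegate, the cast and the reflection-based DynamicInvoke call aren't strictly necessary; a cheaper equivalent would be:

```csharp
static int DoBinOp(Expression<Func<int,int,int>> op, int a, int b)
{
    Func<int,int,int> f = op.Compile(); // turns the tree into IL, yielding a typed delegate
    return f(a, b);                     // direct invocation, no reflection involved
}
```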

Instead, you could interpret the expression tree in order to execute it. Or, and that's what custom LINQ query providers do, you could translate the expression tree into some target language for further execution by another system (e.g. a DBMS).
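To make the "interpret or translate" option a bit more tangible, here's a minimal sketch of my own (not code taken from the ExpressionLearner sample) that translates the body of a simple addition lambda back into infix text by recursively walking the tree:

```csharp
using System;
using System.Linq.Expressions;

static class Translator
{
    public static string Translate(Expression e)
    {
        switch (e.NodeType)
        {
            case ExpressionType.Add:
                // A binary node: recurse into both operands.
                BinaryExpression b = (BinaryExpression)e;
                return "(" + Translate(b.Left) + " + " + Translate(b.Right) + ")";
            case ExpressionType.Parameter:
                return ((ParameterExpression)e).Name;
            case ExpressionType.Constant:
                return ((ConstantExpression)e).Value.ToString();
            default:
                // Just like a real provider, signal untranslatable constructs.
                throw new NotSupportedException("Can't translate " + e.NodeType);
        }
    }
}
```

Feeding it the body of (a, b) => a + b yields the string "(a + b)"; a real query provider performs the same kind of recursive walk, just with SQL or CAML as the target language.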

## The Expression Learner

The goal of the Expression Learner sample is to show translations of various lambdas to their corresponding expression trees. In the System.Linq.Expressions namespace one can find the enumeration called ExpressionType that hosts 46 values:

namespace System.Linq.Expressions
{
// Summary:
//     Describes the node types for the nodes of an expression tree.
public enum ExpressionType
{
And = 2,
AndAlso = 3,
...
TypeIs = 45,
}
}

For each of these possible expression types, one method has been supplied that illustrates the C# 3.0 equivalent (where applicable, i.e. in almost all cases) to such an expression. At the same time, the code allows for dynamic compilation and invocation of the lambdas in order to test their functionality. The 'learner' should be of most interest to anyone who wants to use expression trees explicitly (i.e. not just for writing LINQ queries, but to parse trees, e.g. as part of a LINQ query provider implementation or as part of some kind of expression interpreter engine). Below is a screenshot of the sample in action:

And here's a screenshot of the dynamic invocation:

You might want to take a look at the Main method too, since it uses a few LINQ to Objects queries itself in order to get all of the sample methods dynamically.


With the advent of Orcas Beta 2 earlier this week, it's about time to ship an update to the LINQ-SQO project. You can download the 0.9.2 release right now from CodePlex at http://www.codeplex.com/LINQSQO. The goal of this project is to provide a custom implementation of the LINQ to Objects Standard Query Operators (SQO). Basically this means you should be able to compile and run any piece of LINQ to Objects code against the LINQ-SQO API just by replacing the "using System.Linq" namespace import with "using BdsSoft.Linq", with the same runtime behavior of course.

The project implements all of the System.Linq.Enumerable (extension) methods with all of their overloads in a similar static class called BdsSoft.Linq.Enumerable. It ships with a total of 148 (simple) unit tests to check functionality. If you want to make sure all of the operators are there, you could eat LINQ's own dogfood to match both implementations:

var res1 = from mi in typeof(System.Linq.Enumerable).GetMethods()
orderby mi.Name
group mi by mi.Name into g
select new { Name = g.Key, Overloads = g.Count() };
var res2 = from mi in typeof(BdsSoft.Linq.Enumerable).GetMethods()
orderby mi.Name
group mi by mi.Name into g
select new { Name = g.Key, Overloads = g.Count() };

if (!res1.SequenceEqual(res2))
{
Console.WriteLine("Implementation doesn't match the official LINQ to Objects standard query operators. Mismatches are:");

var mismatches = from m1 in res1
                 join m2 in res2 on m1.Name equals m2.Name
                 where m1.Overloads != m2.Overloads
                 select new { m1.Name, Official = m1.Overloads, Custom = m2.Overloads };

foreach (var m in mismatches)
    Console.WriteLine("{0}: {1} official vs {2} custom overload(s)", m.Name, m.Official, m.Custom);
}

I use this code myself to check for new operator overloads that appear in newer Orcas builds. From Orcas Beta 1 to Orcas Beta 2 five methods were added: four GroupBy overloads and one SelectMany overload:

public static IEnumerable<TResult> GroupBy<TSource, TKey, TResult>(this IEnumerable<TSource> source, Func<TSource, TKey> keySelector, Func<TKey, IEnumerable<TSource>, TResult> resultSelector)
public static IEnumerable<TResult> GroupBy<TSource, TKey, TResult>(this IEnumerable<TSource> source, Func<TSource, TKey> keySelector, Func<TKey, IEnumerable<TSource>, TResult> resultSelector, IEqualityComparer<TKey> comparer)
public static IEnumerable<TResult> GroupBy<TSource, TKey, TElement, TResult>(this IEnumerable<TSource> source, Func<TSource, TKey> keySelector, Func<TSource, TElement> elementSelector, Func<TKey, IEnumerable<TElement>, TResult> resultSelector)
public static IEnumerable<TResult> GroupBy<TSource, TKey, TElement, TResult>(this IEnumerable<TSource> source, Func<TSource, TKey> keySelector, Func<TSource, TElement> elementSelector, Func<TKey, IEnumerable<TElement>, TResult> resultSelector, IEqualityComparer<TKey> comparer)

public static IEnumerable<TResult> SelectMany<TSource, TCollection, TResult>(this IEnumerable<TSource> source, Func<TSource, int, IEnumerable<TCollection>> collectionSelector, Func<TSource, TCollection, TResult> resultSelector)

It might be a nice exercise to play the human compiler on the code fragment above to make sure you're (still) familiar with the extension method machinery and the query operators (Q: which operators play a role in the code fragment above, and which overloads get called?).

WARNING: Notice this implementation isn't meant for production use; rather, it's a reference implementation that can help you understand how the query operators work internally.

Enjoy!


As you've heard by now, Orcas beta 2 (or should I start to talk about VS2008 and .NET Framework 3.5 instead?) has hit the web. If you didn't know yet, here are a few pointers:

On to the real stuff. In this post I want to talk about a new C# 3.0 feature, called partial methods, that's introduced in the Beta 2 release. Likely you already know about partial classes, which were added in the 2.0 timeframe. So, what's in a name? In short, partial classes allow you to split the definition of a class across multiple files; alternatively, you can think of it as one code compilation unit spread over multiple files. The primary reason for this feature is to provide a clean split between generated code and user code, as in the Windows Forms designer, which generates its code in a separate file while developers keep almost full control over the form's other code file (the one where the event handlers find a place to live) - the one thing you shouldn't do is delete the initialization call in the ctor.

Partial methods are methods in partial classes that are marked as partial. Their existence also stems from the world of code generation - although they're likely to be useful outside that scope too - and they allow the compiler to emit efficient code while still enabling end-user extensions to the class by implementing a method. I know this sounds a little vague, so let's take a look at a more concrete sample. Over here I have a simple console app:

using System;

partial class PartialMethods //Part 1
{
static void Main()
{
Do();
}

static partial void Do();
}

partial class PartialMethods //Part 2
{
static partial void Do() {}
}

I've defined both parts of the partial class in the same file, but in real scenarios you'd have the two parts in separate files, of course. So, what's happening here? In part 1 of the class definition, I've declared the Do method as partial. Notice it's static, but that doesn't need to be the case; it works in a similar fashion with instance methods. Partial methods don't take an implementation body; it's just a declaration, much as you're used to in interfaces or abstract classes. In part 2 of the class definition, I've 'implemented' the partial method, for demo purposes just with an empty body. In reality, the complete definition from above is equivalent to:

using System;

class PartialMethods
{
static void Main()
{
Do();
}

static void Do() {}
}

If you take a look at the IL code:

(Notice the new version number on the C# compiler)

you can see that the Main method contains a call to the Do method. But what if we'd omit the definition of the partial method, like this:

using System;

partial class PartialMethods
{
static void Main()
{
Do();
}

static partial void Do();
}

In other words, what if no part of the partial class provides a method body for Do? Then, the following happens:

Right, not a single trace of Do at the caller's side. It goes even further than that: all of the argument evaluation is omitted too. Try to guess what the following will print:

using System;

partial class PartialMethods
{
static void Main()
{
int i = 0;
Console.WriteLine(i);
Do(i++);
Console.WriteLine(i);
}

static partial void Do(int i);
}

Right, if there's an implementation of Do, you'll see this piece of code in Main:

The region indicated by the red rectangle is the piece of IL code that's part of the Do(i++) method call. Ignore the nop instructions, as I'm generating non-optimized debuggable code (for the unaware: nop instructions are inserted in debug builds to allow setting breakpoints on various code elements, including lines with just curly braces; in the code above, the whole method body is surrounded by two nops, one for each of the Main method body's curly braces). If you don't have an implementation somewhere, you'll just see this:

There's just a nop left, and the i++ side-effect's code is gone too. In other words, you can't tell what the code will print if you don't know whether or not there's a method body somewhere. Notice this is somewhat similar to conditional compilation with the ConditionalAttribute:

using System;
using System.Diagnostics;

class Program
{
static void Main()
{
int i = 1;
Console.WriteLine(i);
Do(i++);
Console.WriteLine(i);

}

[Conditional("BAR")]
static void Do(int i)
{
}
}

As you can see, the caller's IL is very similar:

unless you define BAR (e.g. using #define or using the /define:BAR csc command line switch):

Notice the use of System.Diagnostics.ConditionalAttribute isn't limited to partial classes. The best-known use of this attribute is likely the Debug class, which has static methods (such as Assert) that are marked with [Conditional("DEBUG")]: if you're running a non-debug build, no Debug.* calls are left in the code. There are a few core differences, however; start by taking a look at the callee. No matter how the code is built, the Do method definition will be there:

Tip: try to read the serialized custom attribute's data; it says: 01 00 03 42 41 52 00 00, which really means "the three following bytes are B A R" (consult ECMA 335 for full details on custom attributes in IL).

In case of partial methods, it's really partial: there can be calls to 'non-implemented' methods (at the surface it looks as if the method signature is still there, so it feels like a non-implemented method, although in the resulting code there's just nothing left from a partial method if no method body is found).

Of course, there are a few limitations in using partial methods. First of all, partial methods are always private. The following won't compile (error CS0750: A partial method cannot have access modifiers or the virtual, abstract, override, new, sealed, or extern modifiers):

using System;

partial class Bar
{
public partial void Foo();
}

The reason for this is simple: if not a single bit of code is generated, not even at the callee side (i.e. there's no metadata describing a "partial method declaration"), the method shouldn't be visible outside the scope of the class, since external callers can't know whether the method really exists or not. For the same reason, you can't create a delegate to a partial method (CS0762: Cannot create delegate from method %1 because it is a partial method without an implementing declaration):

using System;

partial class Program
{
static void Main()
{
Program p = new Program();
Action a = p.Worker; //CS0762
}

partial void Worker();
}

If you have an implementing declaration however, the code will compile fine (but in such a case you're intentionally specifying an implementation, so you won't have much benefit of declaring the method as partial).

Another limitation is that the method needs to have the void return type. The following won't compile (CS0766: Partial methods must have a void return type):

using System;

partial class Bar
{
partial int Foo();
}

Again, the reason is straightforward. If we don't know for sure there will be a method implementation, how can we possibly know what the return value should be?

int i = new Bar().Foo();
int j = i * 2; //???

Similarly, out parameters are not allowed (error CS0752: A partial method cannot have out parameters):

using System;

partial class Bar
{
partial void Foo(out int i);
}

for the same reason. In general, I tend to avoid out parameters in most cases, especially in the public interface of an API design. The main reason is the lack of composability when working with such APIs: calling a method with out params requires users to define a variable first, prior to making the call. A functional style (functions, in math terms, have a single output value - which of course can be a composed type) is much easier to use, though it might require a bit of additional work to create a suitable return type that wraps all of the to-be-returned values. Ref parameters are allowed nevertheless:

using System;

partial class Bar
{
partial void Foo(ref int i);
}

In reality, out and ref are the same under the covers, but the compiler enforces different checks: out params must be definitely assigned within the method body (CS0177), while ref params carry no such requirement; at the caller's side, ref params must be assigned prior to making the call (CS0165).
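A small illustration of those two checks (hypothetical methods, purely for demonstration):

```csharp
static void Init(out int x)
{
    x = 42;      // required: an out param must be assigned on every path (else CS0177)
}

static void Bump(ref int x)
{
    x++;         // allowed to read x: the caller is forced to have assigned it already
}

static void Caller()
{
    int a;       // fine to leave unassigned when passed as out
    Init(out a);

    int b = 0;   // must be assigned before being passed as ref (else CS0165)
    Bump(ref b);
}
```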

Obviously, there shouldn't be more than one declaration and/or implementation:

using System;

partial class Bar
{
static partial void Foo();
static partial void Foo(); //CS0756: A partial method may not have multiple defining declarations
}

partial class Bar
{
static partial void Foo() {}
static partial void Foo() {} //CS0757: A partial method may not have multiple implementing declarations
}

The code fragment above establishes the vocabulary: defining declaration and implementing declaration. Also, you can't have an implementing declaration if there isn't a defining declaration (CS0759).

Where does Orcas eat its own dogfood? LINQ to SQL is one place where you see partial methods in action. In the illustration below, I've created a LINQ to SQL Classes ".dbml" file:

and I created a mapping for some SQL Server 2005 table from TFS:

Now, when you take a look at the generated code in the corresponding designer file, you'll see a region marked as "Extensibility Method Definitions". This one contains a bunch of partial methods:

I've indicated one pair of a partial method definition and an invocation, as used in a column-mapping auto-generated property, in this case for a field called "AssemblySignature" (don't ask me about the TFS db schema):

For each such property, the setter has two "guards" that call a generated partial method. If you don't do anything beyond generating the entity classes, these calls simply aren't emitted, because there's only a defining declaration without an implementing one. However, these inserted calls are really extension points for the end users of the generated code; in this case for LINQ to SQL, they allow you to add business logic validation rules, e.g. as follows:

Just define another part of the partial class and type "partial". IntelliSense will jump in and tell you about the partial methods that you can provide an implementing declaration for. Select it and press enter to implement the method:

Once you've implemented such a method, the compiled code will contain the calls to it in the property setters, and you were able to do so without touching the generated code (which you shouldn't do, because it will be overwritten sooner or later). Notice one could get similar results by using events, but these cause runtime overhead that can't be eliminated. A sample with events is shown below (sorry to stress Gen 0 of your GC):

using System;
using System.Diagnostics;

#region Consumer

class Program
{
static void Main()
{
Stopwatch sw = new Stopwatch();
sw.Start();
for (int i = 0; i < 1000000; i++)
{
Bar b = new Bar();
b.Callback += delegate { /* Console.WriteLine("ET calling home."); */ };
b.Do();
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);
}
}

#endregion

#region Provider

delegate void Callback();

class Bar
{
public event Callback Callback;

public void Do()
{
if (Callback != null)
Callback();
}
}

#endregion

Execution time of this piece of code is around 72 ms on my Orcas Beta 2 VPC. If you drop the callback event registration on the consumer's side, it's about 17 ms. Below is an alternative using partial methods:

using System;
using System.Diagnostics;

#region Consumer

class Program
{
static void Main()
{
Stopwatch sw = new Stopwatch();
sw.Start();
for (int i = 0; i < 1000000; i++)
{
Bar b = new Bar();
b.Do();
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);
}
}

partial class Bar
{
partial void Callback()
{
/* Console.WriteLine("ET calling home."); */
}
}

#endregion

#region Provider

partial class Bar
{
partial void Callback();

public void Do()
{
Callback();
}
}

#endregion

Notice the consumer's side has been extended a little: in order to "register for the event" you need to write a partial method implementing declaration. Executing this piece of code costs about 12 ms with the callback in there (6 times faster). If you drop the callback (i.e. no "partial class Bar" part in the Consumer region), perf will be about the same (though, in theory, slightly better). However, observe the difference with events: the callback overhead through delegates is worse than having a "regular" method call in place. Of course, you can't compare both approaches at a general level, since events and delegates are much richer constructs (to name just one difference: partial methods are always private, so you can't cross the boundary of a class definition, while events can be exposed as public members).

I guess I shouldn't forget to mention that VB 9.0 has partial methods as well. Although it looks a bit like a zebra in the code editor <g>, it works similarly:

One unfortunate thing about all of this is the lack of CodeDOM support for partial methods (it does support partial classes). So, you'll have to rely on a CodeSnippetTypeMember instead of a CodeMemberMethod to create partial methods, using (constant) string values. The reason for this discrepancy is that CodeDOM lives in the v2.0 framework assemblies, which don't change in .NET FX 3.5.

Pretty cool, isn't it?


No further comments needed - Soma has all of the information on his blog: http://blogs.msdn.com/somasegar/archive/2007/07/26/announcing-the-release-of-visual-studio-2008-beta-2-net-fx-3-5-beta-2-and-silverlight-1-0-rc.aspx. I'll be switching from previous builds to Beta 2 this weekend and will blog about some changes to the "LINQ query provider" stuff (more specifically the introduction of System.Linq.IQueryProvider).

Have fun!


Posted Thursday, July 26, 2007 3:16 PM by bart | with no comments

TechEd Developers 2007 is on its way. The place to be is sunny Barcelona, from Monday 5 November till Friday 9 November. Yes indeed, TechEd Developers now offers five days of in-depth technical training (there's no pre-conference anymore) with 21 session slots throughout the week. An opportunity you shouldn't miss! In this post I just want to draw your attention to the Super Early Bird Offer that still runs till July 31st and covers:

• €300 discount
• Special invitation to a private technical session with a top Microsoft speaker
• Reserved priority seating in the Opening Keynote presentation
• Limited edition baseball cap

What about the content? Finding the right technical tracks is always a challenge for the content teams, but this is the list they came up with this year:

• Architecture - Patterns, practices and guidance for cross-product and cross-technology solution development
• Business Applications - Microsoft Dynamics et al
• Business Intelligence - Analytics and reporting for A to Z with all of the Microsoft BI core technologies, such as SQL Server 2005, PerformancePoint Server 2007, SharePoint 2007, etc
• Connected Systems - From services, business process integration to identity federation: learn everything about WCF, WF, WCS and BizTalk
• Database Development - SQL Server 2005 and Katmai all the way
• Designer - Express yourself with Expression Studio and get to know the amazing graphical power of WPF and Silverlight
• Infrastructure for Developers - Everything developers need to know about Windows Server 2008, Virtual Server, PowerShell, IIS 7 and much more
• Mobile & Embedded - Covers Windows Mobile, Windows CE, embedded software development and .NET Framework CF/MF
• Office System - How to build Office-driven and Office-integrated applications using SharePoint, VSTO, WF, etc? Get a sneak peek at VSTO v3 as well!
• Security - If there's one software-aspect that deserves a whole track on its own it's security: learn about the SDL, cryptography, Windows Vista security and much much more
• Tools & Languages - There's a lot of goodness coming in the Orcas timeframe: LINQ, VS 2008, the entity framework, "Rosario", ...
• Web Development - ASP.NET, AJAX, IIS, Windows Live, Expression, ...
• Windows and Frameworks - Managed and unmanaged APIs, .NET Framework 3.x, etc

All registration information can be found on this page. Why hesitate any longer? ;-)


It's there. After three weeks of hard work, I'm proud to tell you about the availability of LINQ to SharePoint alpha 0.2.2.0. Read more about it on the team blog at http://community.bartdesmet.net/blogs/linqtosharepoint/archive/2007/07/20/the-0-2-2-alpha-interim-release-an-overview.aspx. If you want to get the release immediately, follow this link.

Enjoy!


As the winner of last year's Speaker Idol competition at TechEd Developers, I'm happy to announce this year's edition of Speaker Idol.

So, if you want to get a chance to present at TechEd Developers EMEA in 2008, this is the ideal opportunity to show us your talents as a speaker.

In order to participate:

• be a registered delegate at TechEd Developers EMEA 2007;
• create a short three-minute home video of yourself delivering a presentation;
• submit it to the jury by October 8th.

In total, the jury will accept 30 submissions, and 16 of these will go to the next round. The 16 finalists will speak at the event's Speaker Idol Theatre in one of four timeslots, i.e. during the welcome reception and during three exhibition slots. The jury will pick one winner per slot, and these four winners will compete against each other in the final time slot.

All detailed info can be found on the Speaker Idol Contest page. Also take a look at the Speaker Idol 2006 Winners page for some quotes from last year's finalists about participating in the contest. It was great to compete with both Anthony and Bogdan in last year's final.

I hope to see you at the Speaker Idol 07 competition in Barcelona this fall; if time permits, I'll be on the first row to watch you become the next speaker star. And if your schedule permits, come and see me during my session (topic still TBD).

A final tip to all of you who are thinking of participating: impress the judges and show you know your stuff. And if you're really feeling confident: developers love to see (live) code ;-)


Posted Monday, July 09, 2007 5:25 PM by bart | with no comments

The last couple of weeks my blog has been filled with LINQ to SharePoint stuff. Because of the project's dimensions, today and in the near future, it was decided to branch off all the LINQ to SharePoint content to the LINQ to SharePoint team blog. That's right, it says "team blog", since LINQ to SharePoint won't stay a one-man show for much longer (more info about these evolutions will be posted to the team blog over the next couple of weeks).

So, what will you find on the team blog? Here's a list of the topics that will be covered:

• Information about upcoming and new releases;
• A look at the internals of LINQ to SharePoint, including the parser and much more;
• Stuff about the tools that ship with LINQ to SharePoint, including SpMetal and VS2008 IDE integration;
• Samples and how-to videos to learn how to work with LINQ to SharePoint.

On my own blog I'll put references to LINQ to SharePoint posts now and then but the team blog will become the ultimate resource for LINQ to SharePoint news and insights, while the CodePlex site is intended for the technical side of the project, i.e. source control, work item tracking, etc. Most CodePlex wiki pages will link to the blog and vice versa. (It would be a great feature for CodePlex to have some team blog functionality in order to centralize information about projects.)

In the meantime, stay tuned (subscribe to the team blog's RSS feed) for an interim release that will be published later this month, before we get to the 0.3 alpha stage. The interim release will contain overall design improvements and foundation work in order to enable the 0.3 work that's mostly related to entity updates.


Welcome back to what's going to end up as "LINQ to SharePoint: The Cruel Sequel" :-). The last couple of days, LINQ to SharePoint has been a full-time job and the result is getting better and better build after build. In this post, I'd like to highlight another feature that was planned from the start but didn't make its way to the 0.2 release of last month: parser enhancements.

So, what's up? Simply stated, the query parser so far was a runtime parser only. When executing LINQ to SharePoint queries, the LINQ query expression tree gets parsed sooner or later, possibly throwing exceptions in case something can't be translated into CAML. A typical example is the following:

var res = from u in users where u.FirstName.EndsWith("t") select u;

The reason this can't be translated is the EndsWith call on the FirstName entity property. Since CAML doesn't have an equivalent in its query language, we can't provide a translation. There are many more constructs that make the parse operation fail, due to the relatively limited expressiveness of CAML. The problem however, especially with big queries, is for developers to find out where exactly the problem is located. In the previous alphas an InvalidOperationException was thrown with some message, possibly referring to something in the query that couldn't be translated (e.g. "Unsupported string filtering query expression detected: EndsWith. Only the methods Contains and StartsWith are supported."). Although this sample message is pretty easy to understand, there are more complex ones that deserve a better approach.
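To see why a call like EndsWith is detectable at parse time, consider how a provider inspects the expression tree it receives. The following is a self-contained sketch (the User class is a stand-in for a SpMetal-generated entity; the real parser is far more elaborate):

```csharp
using System;
using System.Linq.Expressions;

class ParserSketch
{
    // Stand-in for a SpMetal-generated entity class.
    class User { public string FirstName { get; set; } }

    static void Main()
    {
        // The predicate from the query, captured as an expression tree.
        Expression<Func<User, bool>> pred = u => u.FirstName.EndsWith("t");

        // The body is a MethodCallExpression; a CAML translator inspects
        // the method name and rejects what it can't express.
        var call = (MethodCallExpression)pred.Body;
        if (call.Method.Name != "Contains" && call.Method.Name != "StartsWith")
            Console.WriteLine(
                "Unsupported string filtering query expression detected: "
                + call.Method.Name);
    }
}
```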

<Intermezzo>

To put this in a broader context, you should be aware of the fact that LINQ lacks support for compile-time query validation by custom query providers. All the LINQ-capable compilers (C# 3.0, VB 9.0) do is generate an expression tree representing the query. Therefore, the only way to find out about problems in the query is to execute the code, which triggers the custom query provider's expression tree parser (behind IQueryable), which can signal issues in the query by throwing some exception. All LINQ providers exhibit this behavior. As an example, take a look at the following situation in LINQ to SQL:

Luckily the message is clear enough to figure out what's going wrong. Also observe when and where the exception occurs: not at definition time of the query (the query - i.e. var res = ... in our case - remains an expression) but when the iteration statement is executed.

Note: LINQ to SharePoint alpha 0.1 did produce parse errors at query expression definition time instead; this has been fixed in 0.2 so that the query parser isn't invoked before query execution time (i.e. iteration over the results).
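This deferred timing can be mimicked with a plain iterator method (a sketch only; the real provider sits behind IQueryable): defining the query is side-effect free, and the exception only surfaces once iteration forces the parse.

```csharp
using System;
using System.Collections.Generic;

class DeferredDemo
{
    // The throw only runs when the sequence is iterated, mirroring how the
    // 0.2 parser is invoked at query execution time rather than definition time.
    public static IEnumerable<int> Query()
    {
        throw new NotSupportedException("parse error");
        yield break; // unreachable, but makes this an iterator method
    }

    static void Main()
    {
        var res = Query();          // no exception: the query is merely defined
        try
        {
            foreach (var x in res)  // the parser (and the exception) kicks in here
                Console.WriteLine(x);
        }
        catch (NotSupportedException ex)
        {
            Console.WriteLine("At iteration time: " + ex.Message);
        }
    }
}
```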

So what's wrong with this? Not that much, except for the fact that we would be able to signal such problems at compile time if we had the appropriate infrastructure in place at the compiler's side. This would mean that the C# and VB compilers would have to pass the generated expression tree to the custom query provider's query parser (which could be interfaced for communication with a front-end compiler) as part of the compilation job. Our query parser could then feed a set of warnings and errors back to the compiler, which would be presented to the developer as regular compiler warnings or errors (albeit generated by the custom query provider instead of the compiler itself).

Since we don't have such a thing at this very moment, alternatives have to be invented. That's exactly what we've done in LINQ to SharePoint in order to help the developer spot the location of the problem in his/her query.

</Intermezzo>

So, what's our approach? Of course we don't drop the NotSupportedException approach: if your query can't be translated, you're out of luck and we need to signal this in some way or another at runtime. However, when debugging we provide a debugger visualizer for LINQ to SharePoint queries that allows you to inspect the query, including the generated CAML. Essentially, the debugger visualizer triggers the parser albeit in a slightly different "parser run mode": instead of throwing exceptions for parse-time errors, all errors are collected and fed back to the visualizer with enough information to spot the problem. A picture is worth a thousand words, so take a look at this:

This is the debugger visualizer for LINQ to SharePoint that will become available in a later release (keep an eye on my blog). At the top of the dialog you can see the LINQ query. Admittedly, it's not in its original shape anymore, but it's the best we can do right now (the original LINQ query in either C# 3.0 or VB 9.0 has been eaten by the respective compiler at this stage of execution). The original query looks as follows:

var res = from t in lst where !(t.FirstName.Contains("Bart") && t.Age >= 24) || t.LastName.EndsWith("De Smet") && CamlMethods.DateRangesOverlap(t.Modified.Value) orderby 1 select t;

The LINQ query you can see in the dialog above is basically the result of calling ToString() on the query's expression tree. With a little knowledge about extension methods and expression trees, you can read such an expression string representation in just a few seconds (as a little exercise, play the human compiler, translating a LINQ query to an expression tree followed by a mental ToString call).
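For instance (a minimal example, unrelated to SharePoint), here's how the ! operator ends up as a Not node in the string representation:

```csharp
using System;
using System.Linq.Expressions;

class ToStringDemo
{
    static void Main()
    {
        // A predicate captured as an expression tree; ToString() produces the
        // flattened representation the visualizer shows, with a Not(...) node
        // in place of the ! operator.
        Expression<Func<string, bool>> f =
            s => !(s.Contains("Bart") && s.StartsWith("B"));
        Console.WriteLine(f.ToString());
    }
}
```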

What the LINQ to SharePoint parser does when running in "debug mode" - in addition to its regular parsing job - is identify subexpressions that can't be translated while continuing to parse (instead of throwing an exception). All places where something went wrong are marked by <ParseError /> placeholders in the CAML query and each of these has a unique identifier that's linked (bidirectionally) with the subexpression in the LINQ query that caused the problem. This way, developers can identify problems in a more visually attractive way.

Even better, in the example above we can see four problems with the query at once. If we ran the application we'd get only a single exception (which would result in at least four "run-crash-fix" iterations). The goal is to take this to the maximum level possible, providing links from the debugger visualizer to specific help information about the parser issues that occurred (observe the unique identification code on the error, in this case SP0011). In case you're curious why you're seeing SP0011 in the fragment above: observe that the t.FirstName.Contains("Bart") expression is nested inside a Not expression. CAML doesn't have a Boolean negation operator in its query schema, so we can't express the !t.FirstName.Contains("Bart") expression as a whole.
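A rough sketch of what such a dual-mode parser can look like (hypothetical code, not the actual LINQ to SharePoint implementation; SP0011 is the error code from the screenshot above, SP0010 is made up for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

class ErrorCollectingParser
{
    // Sketch of the two "parser run modes": throw on the first problem
    // (runtime mode) or collect all problems (debugger visualizer mode).
    readonly bool collect;
    public readonly List<string> Errors = new List<string>();

    public ErrorCollectingParser(bool collect) { this.collect = collect; }

    void Fail(string code, Expression offender)
    {
        string message = code + ": can't translate " + offender;
        if (collect)
            Errors.Add(message);                      // keep parsing
        else
            throw new NotSupportedException(message); // bail out immediately
    }

    public void Parse(Expression e)
    {
        switch (e.NodeType)
        {
            case ExpressionType.AndAlso:
            case ExpressionType.OrElse:
                var b = (BinaryExpression)e;
                Parse(b.Left);
                Parse(b.Right);
                break;
            case ExpressionType.Not:
                // CAML has no Boolean negation operator (cf. SP0011 above).
                Fail("SP0011", e);
                break;
            case ExpressionType.Call:
                var c = (MethodCallExpression)e;
                if (c.Method.Name != "Contains" && c.Method.Name != "StartsWith")
                    Fail("SP0010", e); // hypothetical error code
                break;
        }
    }

    static void Main()
    {
        Expression<Func<string, bool>> pred =
            s => !s.Contains("Bart") || s.EndsWith("De Smet");

        var parser = new ErrorCollectingParser(true);
        parser.Parse(pred.Body);
        foreach (var err in parser.Errors)
            Console.WriteLine(err); // both problems reported in a single pass
    }
}
```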

Stay tuned for more LINQ to SharePoint fun soon!


In today's post I'd like to introduce a new concept that will appear in LINQ to SharePoint v0.3: SPML, the SharePoint Mapping Language. It didn't make it into the 0.2 release due to time restrictions, but tonight the first portion of the POC code was checked in to CodePlex. So what is it?

Let's start at the very beginning: SpMetal. As you might already know, the SpMetal tool is used to generate entity classes from a SharePoint list definition. It connects to the SharePoint site, grabs the list definition and converts it into an entity class type that can be used to write queries using SharePointDataSource<T>. Therefore, the core of its work is to generate code in either C# 3.0 or VB 9.0. To set our minds, here's a little sample:

The syntax of SpMetal

A list with various fields

SpMetal in action, exporting the Demo list

Code generated by SpMetal

In the very first alpha of LINQ to SharePoint (before we went live on CodePlex), there was no such tool at all and the creation of entity classes was a manual job (after all, it wasn't that difficult yet: there was no base class to derive from, and with automatic properties in C# 3.0 the mapping was just a matter of minutes). Lazy as developers (including myself) are, the SpMetal tool was created as a very simple tool based on a quick-n-dirty string concatenation and formatting technique (take a look at the sources to see how it's done in 0.2). However, things were getting more complex and a few weeks ago work was started to port the tool to a CodeDOM-based approach for code generation. I decided not to merge these changes into the 0.2 release since full testing of the tool's correctness hasn't been done yet, so it will become part of 0.3 instead.

However, there's more than just a new back-end to SpMetal. The cool thing about it is its potential for reuse elsewhere, including the VS 2008 IDE. Over time, the goal is to make entity creation as easy as dragging and dropping lists from an add-in in Server Explorer onto a designer surface. We're not there yet, but an important milestone is under development right now: SPML. Designers are just overlays on top of some source definition, for example a partial class with Windows Forms designer generated code or a resx file or ... In a similar way, SPML is the source side of a SharePoint list mapping for LINQ to SharePoint. Currently it's very minimalistic, but over time it will gain more and more expressiveness to drive the mapping process (e.g. you'll be able to decide which fields to include in the entity mapping and you'll be able to control a few aspects of entity updating, another 0.3 feature that's under development right now). Let's take a brief look at it:

Observe a few things:

• The file extension of an SPML file is .spml (duh!).
• SPML files contain the definition of a SharePointDataContext, something that will be introduced in 0.3 (I'll blog about it once we get closer to 0.3). For now, think of it as a set of list entities (see <Lists> section).
• The SPML file has a Custom Tool associated, called LINQtoSharePointGenerator. You can specify a code namespace as well.
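Based on the points above, an SPML file might look roughly like the following (a hypothetical sketch: apart from SharePointDataContext and the <Lists> section mentioned above, the element and attribute names are guesses for illustration only):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical SPML sketch; only SharePointDataContext and Lists are
     names taken from the post, everything else is illustrative. -->
<SharePointDataContext Url="http://localhost/demo">
  <Lists>
    <List Name="Demo" />
  </Lists>
</SharePointDataContext>
```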

What the LINQtoSharePointGenerator does is pretty straightforward: it parses the SPML file, finds enough information to connect to the WSS site and lets the SpMetal back-end (now called the EntityGenerator) do the rest of the work, returning a code file (support for VB and C#) that's added to the solution. Furthermore, it adds a reference to BdsSoft.SharePoint.Linq if it's not already present. All of this magic is done automatically when you build the project (or you can trigger it manually). This means that using LINQ to SharePoint doesn't require SpMetal anymore: just write an SPML file and add it to the project with the right Custom Tool setting. Here's an example:

Manual triggering of the LINQtoSharePointGenerator...

...the result: a .Designer.cs file

With VB support included!

Start to write LINQ queries right away

If you want to play with this already, you can grab the sources from CodePlex (change set 7418). However, the VS Orcas integration requires that you have the VS Orcas SDK installed as well. Also remember this is very early work in progress, but step by step we're getting there.

Enjoy!
