July 2010 - Posts

Introduction

In preparation for some upcoming posts related to LINQ (what else?), Windows PowerShell and Rx, I had to set up a local LDAP-capable directory service. (Hint: It will pay off to read till the very end of the post if you’re wondering what I’m up to...) In this post I’ll walk the reader through the installation, configuration and use of Active Directory Lightweight Directory Services (LDS), formerly known as Active Directory Application Mode (ADAM). Having used the technology several years ago, in relation to the LINQ to Active Directory project (which as an extension to this blog series will receive an update), it was a warm and welcome reencounter.

 

What’s Lightweight Directory Services anyway?

Use of hierarchical storage and auxiliary services provided by technologies like Active Directory often has advantages over alternative designs, e.g. using a relational database. For example, user accounts may be stored in a directory service for an application to make use of. While Active Directory seems the natural habitat to store (and replicate, secure, etc.) additional user information, IT admins will likely point you – the poor developer – at the door when you ask to extend the schema. That’s one of the places where LDS comes in, offering the ability to take advantage of the programming model of directory services while keeping your hands off “the one and only AD schema”.

The LDS website quotes other use cases, which I’ll just copy here verbatim:

Active Directory Lightweight Directory Service (AD LDS), formerly known as Active Directory Application Mode, can be used to provide directory services for directory-enabled applications. Instead of using your organization’s AD DS database to store the directory-enabled application data, AD LDS can be used to store the data. AD LDS can be used in conjunction with AD DS so that you can have a central location for security accounts (AD DS) and another location to support the application configuration and directory data (AD LDS). Using AD LDS, you can reduce the overhead associated with Active Directory replication, you do not have to extend the Active Directory schema to support the application, and you can partition the directory structure so that the AD LDS service is only deployed to the servers that need to support the directory-enabled application.

  • Install from Media Generation. The ability to create installation media for AD LDS by using Ntdsutil.exe or Dsdbutil.exe.

  • Auditing. Auditing of changed values within the directory service.

  • Database Mounting Tool. Gives you the ability to view data within snapshots of the database files.

  • Active Directory Sites and Services Support. Gives you the ability to use Active Directory Sites and Services to manage the replication of the AD LDS data changes.

  • Dynamic List of LDIF files. With this feature, you can associate custom LDIF files with the existing default LDIF files used for setup of AD LDS on a server.

  • Recursive Linked-Attribute Queries. LDAP queries can follow nested attribute links to determine additional attribute properties, such as group memberships.

Obviously that last bullet point grabs my attention, though I will restrain myself from digressing here.

 

Getting started

If you’re running Windows 7, the following explanation is the right one for you. For older versions of the operating system, things are pretty similar though different downloads will have to be used. For Windows Server 2008, a server role exists for LDS. So, assuming you’re on Windows 7, start by downloading the installation media over here. After installing this, you should find an entry “Active Directory Lightweight Directory Services Setup Wizard” under the “Administrative Tools” section in “Control Panel”:

image

LDS allows you to install multiple instances of directory services on the same machine, just like SQL Server allows multiple server instances to co-exist. Each instance has a name and listens on certain ports using the LDAP protocol. Starting this wizard – which lives at %SystemRoot%\ADAM\adaminstall.exe, revealing the former product name – brings us here:

image

After clicking Next, we need to decide whether we create a new unique instance that hasn’t any ties with existing instances, or whether we want to create a replica of an existing instance. For our purposes, the first option is what we need:

image

Next, we’re asked for an instance name. The instance name will be used for the creation of a Windows Service, as well as to store some settings. Each instance will get its own Windows Service. In our sample, we’ll create a directory for the Northwind Employees tables, which we’ll use to create accounts further on.

image

We’re almost there with the baseline configuration. The next question is to specify a port number, both for plain TCP and for SSL-encrypted traffic. The default ports, 389 and 636, are fine for us. Later we’ll be able to connect to the instance by connecting to LDP over port 389, e.g. using the System.DirectoryServices namespace functionality in .NET. Notice every instance of LDS should have its own port number, so only one can be using the default port numbers.

image

Now that we have completed the “physical administration”, the wizard moves on to a bit of “logical administration”. More specifically, we’re given the option to create a directory partition for the application. Here we choose to create such a partition, though in many concrete deployment scenarios you’ll want the application’s setup to create this at runtime. Our partition’s distinguished name will mimic a “Northwind.local” domain containing a partition called “Employees”:

image

After this bit of logical administration, some more physical configuration has to be carried out, specifying the data files location and the account to run the services under. For both, the default settings are fine. Also the administrative account assigned to manage the LDS instance can be kept as the currently logged in user, unless you feel the need to change this in your scenario:

image image

Finally, we’ve arrived at an interesting step where we’re given the option to import LDIF files. And LDIF file, with extension .ldf, contains the definition of a class that can be added to a directory service’s schema. Basically those contain things like attributes and their types. Under the %SystemRoot%\ADAM folder, a set of out-of-the-box .ldf files can be found:

image
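To give an idea of what such a file contains, here’s a hypothetical schema-extension fragment in LDIF format (the attribute name and OID below are made up for illustration; the real .ldf files in the ADAM folder follow the same general shape):

```
# Add a custom single-valued Unicode string attribute to the schema.
dn: CN=favoriteDrink,CN=Schema,CN=Configuration,DC=X
changetype: add
objectClass: attributeSchema
lDAPDisplayName: favoriteDrink
attributeID: 1.2.840.113556.1.8000.2554.999.1
attributeSyntax: 2.5.5.12
omSyntax: 64
isSingleValued: TRUE
```

The DC=X placeholder stands for the instance’s configuration naming context and is typically substituted at import time.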

Instead of having to run the ldifde.exe tool, the wizard gives us the option to import LDIF files directly. Those classes are documented in various places, such as RFC 2798 for inetOrgPerson. On TechNet, the information is presented in a more structured manner, e.g. revealing that inetOrgPerson is a subclass of user. Custom classes can be defined and imported after setup has completed. In this post, we won’t extend the schema ourselves but will simply be using the built-in User class, so let’s tick that one:

image

After clicking Next, we get a last chance to revisit our settings or can confirm the installation. At this point, the wizard will create the instance – setting up the service – and import the LDIF files.

image image

Congratulations! Your first LDS instance has materialized. If everything went alright, the NorthwindEmployees service should show up:

image

 

Inspecting the directory

To inspect the newly created directory instance, a bunch of tools exist. One is ADSI Edit which you could already see in the Administrative Tools. To set it up, open the MMC-based tool and go to Action, Connect to… In the dialog that appears, specify the server name and choose Schema as the Naming Context.

image

For example, if you want to inspect the User class, simply navigate to the Schema node in the tree and show the properties of the User entry.

image

To visualize the objects in the application partition, connect using the distinguished name specified during the installation:

image

Now it’s possible to create a new object in the directory using the context menu in the content pane:

image

After specifying the class, we get to specify the “CN” name (for common name) of the object. In this case, I’ll use my full name:

image image

We can also set additional attributes, as shown below (using the “physicalDeliveryOfficeName” to specify the office number of the user):

image image

After clicking Set, closing the Attributes dialog and clicking Finish to create the object, we see it pop up in the items view of the ADSI editor snap-in:

image

 

Programmatic population of the directory

Obviously we’re much more interested in a programmatic way to program Directory Services. .NET supports the use of directory services and related protocols (LDAP in particular) through the System.DirectoryServices namespace. In a plain new Console Application, add a reference to the assembly with the same name (don’t both about other assemblies that deal with account management and protocol stuff):

image

For this sample, I’ll also assume the reader has a Northwind SQL database sitting somewhere and knows how to get data out of its Employees table as rich objects. Below is how things look when using the LINQ to SQL designer:

image

We’ll just import a few details about the users; it’s left to the reader to map other properties onto attributes using the documentation about the user directory services class. Just a few lines of code suffice to accomplish the task (assuming the System.DirectoryServices namespace is imported):

static void Main()
{
    var path = "LDAP://bartde-hp07/CN=Employees,DC=Northwind,DC=local";
    var root = new DirectoryEntry(path);

    var ctx = new NorthwindDataContext();
    foreach (var e in ctx.Employees)
    {
        // Note the space between first and last name; without it, the
        // common names would run together (e.g. "CN=NancyDavolio").
        var cn = "CN=" + e.FirstName + " " + e.LastName;

        var u = root.Children.Add(cn, "user");
        u.Properties["employeeID"].Value = e.EmployeeID;
        u.Properties["sn"].Value = e.LastName;         // surname
        u.Properties["givenName"].Value = e.FirstName;
        u.Properties["comment"].Value = e.Notes;
        u.Properties["homePhone"].Value = e.HomePhone;
        u.Properties["photo"].Value = e.Photo.ToArray();
        u.CommitChanges();
    }
}

After running this code – obviously changing the LDAP path to reflect your setup – you should see the following in ADSI Edit (after hitting refresh):

image

Now it’s just plain easy to write an application that visualizes the employees with their data. We’ll leave that to the UI-savvy reader (just to tease that segment of my audience, I’ve also imported the employee’s photo as a byte-array).

 

A small preview of what’s coming up

To whet the reader’s appetite about next episodes on this blog, below is a single screenshot illustrating something – IMHO – rather cool (use of LINQ to Active Directory is just an implementation detail below):

image

Note: What’s shown here is the result of a very early experiment done as part of my current job on “LINQ to Anything” here in the “Cloud Data Programmability Team”. Please don’t fantasize about it as being a vNext feature of any product involved whatsoever. The core intent of those experiments is to emphasize the omnipresence of LINQ (and more widely, monads) in today’s (and tomorrow’s) world. While we’re not ready to reveal the “LINQ to Anything” mission in all its glory (rather think of it as “LINQ to the unimaginable”), we can drop some hints.

Stay tuned for more!


Introduction

A while ago I was explaining runtime mechanisms like the stack and the heap to some folks. (As an aside, I’m writing a debugger course on “Advanced .NET Debugging with WinDbg with SOS”, which is an ongoing project. Time will tell when it’s ready to hit the streets.) Since the context was functional programming where recursion is a typical substitute (or fuel if you will) for loops, an obvious topic for discussion is the possibility to hit a stack overflow. Armed with my favorite editor, Notepad.exe, and the C# command-line compiler, I quickly entered the following sample to show “looping with recursion” and how disaster can strike:

using System;

class Program
{
    static void Main()
    {
        Rec(0);
    }

    static void Rec(int n)
    {
        if (n % 1024 == 0)
            Console.WriteLine(n);

        Rec(n + 1);
    }
}

The modulo-based condition in there is to avoid excessive slowdown due to Console.WriteLine use, which is rather slow due to the way the Win32 console output system works. To my initial surprise, the overflow didn’t come anywhere in sight and the application kept running happily:

image

I rather expected something along the following lines:

image

So, what’s going on here? Though I realized pretty quickly what the root cause of this unexpectedly good behavior was, I’ll walk the reader through the thought process used to “debug” the application’s code.

 

I made a call, didn’t I?

The first thing to check is that we really are making a recursive call in our Rec method. Obviously ildasm is the way to go to inspect that kind of stuff, so here’s the output which we did expect.

image

In fact, the statement made above – “which we did expect” – is debatable. Couldn’t the compiler just turn the call into a jump right to the start of the method after messing around a bit with the local argument slot that holds argument value n? That way we wouldn’t have to make a call and the code would still work as expected. Essentially what we’re saying here is that the compiler could have turned the recursive call into a loop construct. And indeed, some compilers do exactly that. For example, consider the following F# sample:

#light

let rec Rec n =
   if n % 1024 = 0 then
       printfn "%d" n

   Rec (n + 1)

Rec 0

Notice the explicit indication of the recursive nature of a function by means of the “rec” keyword. After compiling this piece of code using fsc.exe, the following code is shown in Reflector (decompiling to C# syntax) for the Rec function:

image

The mechanics of the printf call are irrelevant. What matters is the code that’s executed after the n++ statement, which isn’t a recursive call to Rec itself. Instead, the compiler has figured out a loop can be used. Hence, no StackOverflowException will result.

Back to the C# sample though. What did protect the code from overflowing the stack? Let’s have some further investigations, but first … some background.

 

Tail calls

One optimization that can be carried out for recursive functions is to spot tail calls and optimize them away into looping – or at a lower level, jumps – constructs. A tail call is basically a call after which the current stack frame is no longer needed upon return from the call. For example, our simple sample can benefit from tail call optimization since the Rec method doesn’t really do anything anymore after returning from the recursive Rec call:

static void Rec(int n)
{
    if (n % 1024 == 0)
        Console.WriteLine(n);

    Rec(n + 1);
}

This kind of optimization – as carried out by F# in the sample shown earlier – can’t always take place. For example, consider the following definition of a factorial method:

static int Fac(int n)
{
    if (n == 0)
        return 1;

    return n * Fac(n - 1);
}

The above has quite a few issues such as the inability to deal with negative values and obviously the arithmetic overflow disaster that will strike when the supplied “n” parameter is too large for the resulting factorial to fit in an Int32. The BigInteger type introduced in .NET 4 (and not in .NET 3.5 as originally planned) would be a better fit for this kind of computation, but let’s ignore this fact for now.
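To illustrate the point about BigInteger, here’s a sketch of the same factorial using System.Numerics (requires .NET 4 and a reference to System.Numerics.dll); the recursion issue remains, only the arithmetic overflow goes away:

```csharp
using System;
using System.Numerics; // needs a reference to System.Numerics.dll (.NET 4)

class Program
{
    static BigInteger Fac(BigInteger n)
    {
        if (n == 0)
            return 1;

        // Same non-tail-recursive shape as before, but the result
        // can grow arbitrarily large without overflowing.
        return n * Fac(n - 1);
    }

    static void Main()
    {
        // 30! is far beyond Int32.MaxValue (and Int64.MaxValue).
        Console.WriteLine(Fac(30)); // 265252859812191058636308480000000
    }
}
```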

A more relevant issue in the context of our discussion is the code’s use of recursion where a regular loop would suffice, but there I’m making a value judgment of imperative control flow constructs versus a more functional style of using recursion. True nonetheless is the fact that the code above is not immediately amenable to tail call optimization. To see why this is, rewrite the code as follows:

static int Fac(int n)
{
    if (n == 0)
        return 1;

    int t = Fac(n - 1);
    return n * t;
}

See what’s going on? After returning from the recursive call to Fac, we still need to have access to the value of “n” in the current call frame. As a result, we can’t reuse the current stack frame when making the recursive call. Implementing the above in F# (just for the sake of it) and decompiling it, shows the following code:

image

The culprit keeping us from employing tail call optimization is the multiplication instruction needed after the return from the recursive call to Fac. (Note: the second operand to the multiplication was pushed onto the evaluation stack in IL_0005; in fact IL_0006 could also have been a dup instruction.) C# code will be slightly different but achieve the same computation (luckily!).

Sometimes it’s possible to make a function amenable for tail call optimization by carrying out a manual rewrite. In the case of the factorial method, we can employ the following trick:

static int Fac(int n)
{
    return Fac_(n, 1);
}

static int Fac_(int n, int res)
{
    if (n == 0)
        return res;

    return Fac_(n - 1, n * res);
}

Here, we’re not only decrementing n in every recursive call, we’re also keeping the running multiplication at the same time. In my post Jumping the trampoline in C# – Stack-friendly recursion, I explained this principle in the “Don’t stand on my tail!” section. The F# equivalent of the code, shown below, results in tail call optimization once more:

let rec Fac_ n res =
   if n = 0 then
       res
   else
       Fac_ (n - 1) (n * res)

let Fac n =
   Fac_ n 1

The compilation result is shown below:

image

You can clearly see the reuse of local argument slots.

 

A smart JIT

All of this doesn’t yet explain why the original C# code is just working fine though our look at the generated IL code in the second section of this post did reveal the call instruction to really be there. One more party is involved in getting our much beloved piece of C# code to run on the bare metal of the machine: the JIT compiler.

In fact, as soon as I saw the demo not working as intended, the mental click was made to go and check this possibility. Why? Well, the C# compiler doesn’t optimize tail calls into loops, nor does it emit tail.call instructions. The one and only remaining party is the JIT compiler. And indeed, since I’m running on x64 and am using the command-line compiler, the JIT compiler is more aggressive about performing tail call optimizations.

Let’s explain a few things about the previous paragraph. First of all, why does the use of the command-line compiler matter? Won’t the same result pop up if I used a Console Application project in Visual Studio? Not quite, if you’re using Visual Studio 2010 that is. One of the decisions made in the last release is to mark executable IL assemblies (managed .exe files) as 32-bit only. That doesn’t mean the image contains 32-bit instructions (in fact, the C# compiler never emits raw assembler); all it does is tell the JIT to emit only 32-bit assembler at runtime, hence resulting in a WOW64 process on 64-bit Windows. The reasons for this are explained in Rick Byers’ blog post on the subject. In our case, we’re running the C# compiler without the /platform:x86 flag – which is now passed by the default settings of a Visual Studio 2010 executable (not library!) project – therefore resulting in an “AnyCPU” assembly. The corflags.exe tool can be used to verify this claim:

image
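In case the screenshot is hard to read, corflags output for an AnyCPU build looks along these lines (tool version and header numbers will vary per machine):

```
C:\temp>corflags ovf.exe
Microsoft (R) .NET Framework CorFlags Conversion Tool.
Copyright (c) Microsoft Corporation.  All rights reserved.

Version   : v2.0.50727
CLR Header: 2.5
PE        : PE32
CorFlags  : 1
ILONLY    : 1
32BIT     : 0
Signed    : 0
```

32BIT : 0 combined with PE32 is what “AnyCPU” amounts to; corflags /32bit+ would flip the 32BIT flag to 1.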

In Visual Studio 2010, a new Console Application project will have the 32-bit only flag set by default. Again, reasons for this decision are brought up in Rick’s post on the subject.

image

Indeed, when running the 32-bit only assembly, a StackOverflowException results. An alternative way to tweak the flags of a managed assembly is by using corflags.exe itself, as shown below:

image

It turns out that when the 64-bit JIT is involved – i.e. when the AnyCPU platform target is set, the default on the csc.exe compiler – tail call optimization is carried out for our piece of code. A whole bunch of conditions under which tail calls can be optimized by the various JIT flavors can be found on David Broman’s blog. Grant Richins has been blogging about improvements made in .NET 4 (which don’t really apply to our particular sample). One important change in .NET 4 is that the 64-bit JIT now honors the “tail.” prefix on call instructions, which is essential to the success of functional-style languages like F# (indeed, the F# compiler has a --tailcalls flag, which is on by default due to the language’s nature).

 

Seeing the 64-bit JIT’s work in action

In order to show the reader the generated x64 code for our recursive Rec method definition, we’ll switch gears and open up WinDbg, leveraging the SOS debugger extension. Obviously this requires one to install the Debugging Tools for Windows. Notice that this section’s title applies to x64. For x86 users, the same experiment can be carried out, revealing the x86 instructions generated without the tail call optimization, hence explaining the overflow observed on 32-bit executions.

Loading the ovf.exe sample (making sure the 32-bit only flag is not set!) under the WinDbg debugger – using windbg.exe ovf.exe – brings us to the first loader breakpoint as shown below. In order to load the Son Of Strike (SOS) debugger extension, set a module load breakpoint for clrjit.dll (which puts us in a convenient spot where the CLR has been sufficiently loaded to use SOS successfully). When that breakpoint hits, the extension can be loaded using .loadby sos clr:

image

Next, we need to set a breakpoint on the Rec method. In my case, the assembly’s file name is ovf.exe, the class is Program and the method is Rec, requiring me to enter the following commands:

image

The !bpmd extension command is used to set a breakpoint based on a MethodDesc – a structure used by the CLR to describe a method. Since the method hasn’t been JIT compiled yet, and hence no physical address for the executable code is available yet, a pending breakpoint is added. Now we let the debugger go and end up hitting the breakpoint, which got set automatically when the JIT compiler took care of compiling the method (since it came “in sight” for execution, i.e. because of Main’s call into it). Using the !U – for unassemble – command we can now see the generated code:

image

Notice the presence of code like InitializeStdOutError, which is the result of inlining the Console.WriteLine method’s code. What’s going on here with regard to the tail call behavior is the replacement of a call instruction with a jump simply to the beginning of the generated code. The rest of the code can be deciphered with a bit of x86/x64 knowledge. For one thing, you can recognize the 1024 value (used for our modulo arithmetic) in 3FF, which is 1023. The modulo check stretches over a few instructions that basically apply a mask to the value to see whether any of the low bits are non-zero. If so, the value is not divisible by 1024; otherwise, it is. Based on this test (whose result gets stored in eax), a jump is made or not, either going through the path of calling Console.WriteLine or not.

 

Contrasting with the x86 assembler being used

In the x86 setting, we’ll see different code. To show this, let’s use a Console Application in Visual Studio 2010, whose default platform target is – as mentioned earlier – 32-bit. In order to load SOS from inside the Immediate Window, enable the native debugger through the project settings:

image

Using similar motions as before, we can load the SOS extension upon hitting a breakpoint. Instead of using !bpmd, we can use !name2ee to resolve the JITTED Code Address for the given symbol, in this case the Program.Rec method:

image

Inspecting the generated code, one will encounter the following call instruction to the same method. This is the regular recursive call without any tail call optimization carried out. Obviously this will cause a StackOverflowException to occur. Also notice from the output below that the Console.WriteLine method call didn’t get inlined in this particular x86 case.

image

 

Revisiting the tail. instruction prefix

As referred to before, the IL instruction set has a tail. prefix for call instructions. Before .NET 4, this was merely a hint to the JIT compiler. For x86, it was (and still is) a request of the IL generator to the JIT compiler to perform a tail call. For x64, prior to CLR 4.0, this request was not always granted. For our x86 case, we can have a go at inserting the tail. prefix for the recursive call in the code generated by the C# compiler (which doesn’t emit this instruction by itself as explained before). Using ildasm’s /out parameter, you can export the ovf.exe IL code to a text file. Notice the COR flags have been set to “32-bit required” using either the x86 platform target flag on csc.exe or by using corflags /32bit+:

image

Now tweak the code of Rec as shown below. After a tail call, no further code should execute other than a ret. If this rule isn’t obeyed, the CLR will throw an exception signaling an invalid program. Hence we remove the nop instruction that resulted from a non-optimized build (a Debug build, or csc.exe use without the /o+ flag). To turn the call into a tail call, we add the “tail.” prefix. Don’t forget the space after the dot though:

image
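For reference, the tweaked tail of the Rec method body looks roughly like this in text form (IL offsets are illustrative):

```
  IL_000c:  ldarg.0
  IL_000d:  ldc.i4.1
  IL_000e:  add
  IL_000f:  tail.
            call       void Program::Rec(int32)
            ret
```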

The round trip through ILDASM and ILASM, with the manual tweak in Notepad, is shown here:

image

With this change in place, the ovf.exe will keep on running without overflowing the stack. Looking at the generated code through the debugger, one would see a jmp instruction instead of a call, explaining the fixed behavior.

 

Conclusion

Tail calls are the bread and butter of iterative programs written in a functional style. As such, the CLR has evolved to support tail call optimization in the JIT when the tail. prefix is present, e.g. as emitted by the F# compiler when needed (though the IL code itself may be turned into a loop by the compiler itself). One thing to know is that on x64, the JIT is more aggressive about detecting and carrying out tail recursive calls (since it has a good value proposition with regards to “runtime intelligence cost” versus “speed-up factor”). For more information, I strongly recommend you to have a look at the CLR team’s blog: Tail Call Improvements in .NET Framework 4.


Introduction

Recently I’ve been playing with Windows PowerShell 2.0 again, in the context of my day-to-day activities. One hint should suffice for the reader to get an idea of what’s going on: push-based collections. While I’ll follow up on this subject pretty soon, this precursor post explains one of the things I had to work around.

 

PowerShell: a managed application or not?

Being designed around the concept of managed object pipelines, one may expect powershell.exe to be a managed executable. However, it turns out this isn’t the case completely. If you try to run ildasm.exe on the PowerShell executable (which lives in %windir%\system32\WindowsPowerShell\v1.0 despite the 2.0 version number, due to setup complications), you get the following message:

image

So much for the managed executable theory. What else could be going on to give PowerShell the power of managed objects? Well, it could be hosting the CLR. To check this theory, we can use the dumpbin.exe tool with the /imports flag, checking for mscoree.dll functions being called. And indeed, we encounter the CorBindToRuntimeEx function, which was the way to host the CLR prior to .NET 4’s in-process side-by-side introduction (a feature I should blog about as well, since I wrote a CLR host for in-process side-by-side testing on my prior team here at Microsoft).

image

One of the parameters passed to CorBindToRuntimeEx is the version of the CLR to be loaded. Geeks can use WinDbg or cdb to set a breakpoint on this function and investigate the version parameter passed to it by the PowerShell code:

image

Notice the old code name of PowerShell still being revealed in the third stack frame (from the top). In order to hit this breakpoint on a machine that has .NET 4 installed, I’ve used the mscoreei.dll module rather than mscoree.dll. The latter has become a super-shim in the System32 folder, while the former is where the CLR shim really lives (“i” stands for “implementation”). This refactoring was done to aid in servicing the CLR on different versions of Windows, where the operating system “owns” the files in the System32 folder.

Based on this experiment, it’s crystal clear the CLR is hosted by Windows PowerShell, with hardcoded affinity to v2.0.50727. This is in fact a good thing since automatic roll-forward to whatever the latest version of the CLR is on the machine could cause incompatibilities. One can expect future versions of Windows PowerShell to be based on more recent versions of the CLR, once all required testing has been carried out. (And in that case, one will likely use the new “metahost” CLR hosting APIs.)

 

Loading .NET v4 code in PowerShell v2.0

The obvious question with regard to some of the stuff I’ve been working on was whether we can run .NET v4 code in Windows PowerShell v2.0. It shouldn’t be a surprise this won’t work as-is, since the v2.0 CLR is loaded by the PowerShell host. Even if the hosting APIs weren’t involved and the managed executable were compiled against .NET v2.0, that version’s CLR would take precedence. This is in fact the case for ISE:

image

Trying to load a v4.0 assembly in Windows PowerShell v2.0 pathetically fails – as expected – with the following message:

image

So, what are the options to get this to work? Let’s have a look.

Warning:  None of those hacks are officially supported. At this point, Windows PowerShell is a CLR 2.0 application, capable of loading and executing code targeting .NET 2.0 through .NET 3.5 SP1 (all of which run on the second major version of the CLR).

 

Option 1 – Hacking the parameter passed to CorBindToRuntimeEx

If we just need an ad-hoc test of Windows PowerShell v2.0 running on CLR v4.0, we can take advantage of WinDbg once more. Simply break on CorBindToRuntimeEx and replace the v2.0.50727 string in memory with the v4.0 version, i.e. v4.0.30319. The “eu” command used for this purpose stands for “edit memory Unicode”:

image
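The session looks roughly like this (on x64, where the version string arrives as the first parameter in rcx; addresses will obviously differ). Note both version strings have the same length, so the in-place overwrite is safe:

```
0:000> bp mscoreei!CorBindToRuntimeEx
0:000> g
Breakpoint 0 hit
mscoreei!CorBindToRuntimeEx: ...
0:000> du @rcx
... "v2.0.50727"
0:000> eu @rcx "v4.0.30319"
0:000> g
```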

If we let the debugger go after this tweak, we’ll ultimately get to see Windows PowerShell running seemingly fine, this time on CLR 4.0. One proof is the fact that we can load the .NET 4 assembly we tried to load before:

image

Another proof can be found by looking at the DLL list for the PowerShell.exe instance in Process Explorer:

image

No longer do we see mscorwks.dll (which is indicative of CLR 2.0 or below); a clr.dll module appears instead. While this hack works fine for single-shot experiments, we may want to get something more usable for demo and development purposes.

Note:  Another option – not illustrated here – would be to use Detours and intercept the CorBindToRuntimeEx call programmatically, performing the same parameter substitution as the one we’ve shown through the lenses of the debugger. Notice though that CorBindToRuntimeEx is deprecated since .NET 4, so this is and remains a bit of a hack either way.

 

Option 2 – Hosting Windows PowerShell yourself

The second option we’ll explore is to host Windows PowerShell ourselves, not by hosting the CLR and mimicking what PowerShell.exe does, but by using the APIs provided for this purpose. In particular, the ConsoleShell class is of use to achieve this. Moreover, besides simply hosting PowerShell in a CLR v4 process, we can also load snap-ins out of the box. But first things first, starting with a .NET 4 Console Application, add a reference to the System.Management.Automation and Microsoft.PowerShell.ConsoleHost assemblies which can be found under %programfiles%\Reference Assemblies\Microsoft\WindowsPowerShell\v1.0:

image

The little bit of code required to get basic hosting to work is shown below:

using System;
using System.Management.Automation.Runspaces;
using Microsoft.PowerShell;

namespace PSHostCLRv4
{
    class Program
    {
        static int Main(string[] args)
        {
            var config = RunspaceConfiguration.Create();
            return ConsoleShell.Start(
                config,
                "Windows PowerShell - Hosted on CLR v4\nCopyright (C) 2010 Microsoft Corporation. All rights reserved.",
                "",
                args);
        }
    }
}

Using the RunspaceConfiguration object, it’s possible to load snap-ins if desired. Since that would reveal the reason I was doing this experiment, I won’t go into detail on that just yet :-). The tip in the introduction should suffice to get an idea of the experiment I’m referring to. Here’s the output of the above:

image

While this hosting on .NET 4 is all done using legitimate APIs, it’s better to be conservative when it comes to using this in production, since PowerShell hasn’t been blessed for hosting on .NET 4. Compatibility between CLR versions and for the framework assemblies has been a huge priority for the .NET teams (I was there when it happened), so everything should be fine. But the slightest bit of pixie dust (e.g. changes in timing for threading, a classic!) could reveal some issue. Till further notice, use this technique only for testing and experimentation.

Enjoy and stay tuned for more PowerShell fun (combined with other technologies)!


A quick update for my readers on a few little subjects. First of all, some people have noticed my blog welcomed readers with a not-so-sweet 404 error message the last few days. It turned out my monthly bandwidth was exceeded, which was reason enough for my hosting provider to take the thing offline.

image

Since this is quite inconvenient I’ve started some migration of image content to another domain, which is work in progress and should (hopefully) prevent the issue from occurring again. Other measures will be taken to limit the download volumes.

Secondly, many others have noticed it’s been quite silent on my blog lately. As my colleague Wes warned me, once you start enjoying every day of functional programming hacking on Erik’s team, time for blogging steadily decreases. What we call “hacking” has been applied to many projects we’ve been working on over here in the Cloud Programmability Team, some of which are yet undisclosed. The most visible one today is obviously the Reactive Extensions both for .NET and for JavaScript, which I’ve been evangelizing both within and outside the company. Another one which I can only give the name for is dubbed “LINQ to Anything” that’s – as you can imagine – keeping me busy and inspired on a daily and nightly basis. On top of all of this, I’ve got some other writing projects going on that are nearing completion (finally).

Anyway, the big plan is to break the silence and start blogging again about our established technologies, including Rx in all its glory. Subjects will include continuation passing style, duality between IEnumerable<T> and IObservable<T>, parameterization for concurrency, discussion of the plethora of operators available, a good portion of monads for sure, the IQbservable<T> interface (no, I won’t discuss the color of the bikeshed) and one of its applications (LINQ to WMI Events), etc. Stay tuned for a series on those subjects starting in the hopefully very near future.

See you soon!

