February 2009 - Posts

Geeks need fancy hardware, don't they? Well, for a geek, this evening has been a most exciting one. A while back, I decided I should start thinking about upgrading my laptop. I'm currently running a dual-core 2.16 GHz machine with 2GB of RAM. As I'm writing this, I conclude I've already forgotten when I got the thing, but my blog reveals the precise timing. That's almost three years now, enough for Moore's law to strike twice or so. So, the specs I have in mind are as follows:

  • Quad Core CPU (I’m running one of these at work on a desktop and build times benefit greatly)
  • 8GB of RAM (I want to be able to run Hyper-V while I'm "on the road")
  • 1920 x 1200 resolution (I’d never settle for a lower resolution)
  • Solid State Drive (I’m a geek after all…)

Not everything needs to be the sweetest piece of hardware for me though. The first thing I downgrade is the video card (and I’m actually proud of that); gaming and movies are the biggest waste of time on earth (personal opinion, feel free to disagree) and I don’t care about the fancy pixels some humans require to transmit optical impulses carrying fiction for the brain. A good old boring text editor for coding is all I need (and occasionally a mail reader and web browser to survive in the connected jungle called the internet).

Due to availability issues with the LCD screen, I held off on ordering the new laptop, but I wanted to prepare myself for the upgrade anyway. So, a few weeks back I heard about the Intel X25-M SATA Solid-State Drive from a colleague at work. It seems that most laptop vendors that offer SSDs are kind of vague about what exactly they put inside, and I've heard some horror stories about the reliability of a few models. But the X25-M, 80GB in capacity, had very good reviews. So last Friday, I decided to order the thing, and this morning it arrived; some online stores offer amazing delivery times in a country as big as this one (where I'm considered a "resident alien"; only the first part was new to me when I first heard it). A bit expensive, I have to admit, but I don't care (another colleague of mine insists I shouldn't care at all, being a single geek driving no car, carrying no cell phone and not owning photon-emitting devices called TVs).

In the meantime, this weekend, I visited an electronics store in the wider Seattle area, looking for a 7200 RPM high-capacity classic (boring) mechanical hard drive. Historically I've been a huge fan of Western Digital, and recent quotes from another WD believer at work (who experienced a non-WD drive crash) confirmed my faith, so I got one. Ultimately this will become the secondary disk in my new laptop, to store virtual machine images on. For now, it contains a couple of demo partitions for my upcoming trip. Actually, this makes me wonder: am I the only one spending far too much time strategizing partition sizes, only to conclude I've gotten the partitioning wrong once more?

Actually, I seldom reinstall my computer; most of the time I get a new hard disk instead, mostly because the old one is such a mess it's almost impossible to back up all the files I still care about, so I just keep it around, likely never to plug it in again (a great way to reset the brain is to take some distance from existing pieces of work :-)). However, that barrier has been removed now too, as I ordered a Thermaltake docking station along with my SSD last Friday. Kind of funny, because earlier tonight I gave the thing a try. I removed the 320 GB 7200 RPM disk that had been in the laptop for barely two days, placed it in the device and plugged it in. Next, I tried to boot the machine from it, and yes: there was Windows 7, resuming from hibernation…

Anyway, here I am, writing this blog post on a brand new snappy install of some recent Windows 7 build, live on SSD. Here’s how the install went, carefully written down on the analog device commonly referred to as a whiteboard behind me:

  • 8:20 – Boot from network, contacting our Windows Deployment Services servers.
  • 8:28 – Went through the first few clicks of the Windows 7 installer, selecting keyboard layout, locale and the destination partition. My 80 GB SSD appears, obviously subject to the usual 2^10 division discrepancy, as our OS believes in GiB more than GB. File copy starts.
  • 8:37 – File copy (over the network) and file extraction finishes. The patterns in the blinking of the disk (no longer hard-) activity LED look different, but that might be my imagination. One thing is sure: the icon for disk activity needs to be replaced by the outlines of a chip as opposed to a cylinder.
  • 8:38 – The machine reboots twice over the next two minutes, finishing the installation and detecting some hardware.
  • 8:39 – Windows starts for the first time after the installation.
  • 8:40 – I’m on the desktop of the freshly installed machine.

The core of the installer (eliminating booting from the network and loading the Windows PE image, that is) took barely 12 minutes to go from a completely empty disk to a 10 GB footprint for the clean install (including page file, cache of the setup binaries, etc). I attribute this to two things: I've found the Windows 7 installer to be really fast for clean installs, and the solid state disk seems to make a difference (although SSDs don't excel at write speeds, according to the specifications).

Next, I wanted to put boot time to the test. A cold start, from a press of the power button, takes 14 seconds to the boot screen (4 to the first pixel of the Windows 7 boot logo). Disk activity is completely absent at the end. Next I log in to the local profile, and 3 seconds later I'm on my desktop with no disk activity whatsoever. The clean install on the mechanical drive last weekend took 34 seconds all the way to the desktop, with drive activity till the very end. (Note: I seldom use red text on my blog, but from this you can tell I'm as excited as a new-born, not that I've seen a new-born in years...)

Windows 7 downloads a few drivers that didn’t come on the OS image (for my crappy video card), the system reboots and I have to re-do the hardware assessment as new hardware has been found. This is the moment of truth:

image

Windows 7 seems to tell me indirectly I should become a gamer by subtly pointing out the slightly-below-average performance of a thing called "3D business and gaming graphics performance". Unfortunately I don't have a microphone installed: the speech recognition engine would have had a hard time recognizing my yell, which reflected an emotional state anywhere between "Graphics go to ****" and "Solid State go to ****" (where the two **** notations have totally opposite meanings). I should note the disk's SATA interface has a higher bandwidth than my motherboard is capable of (this will get resolved soon with the new baby), so the result doesn't even do full justice to the drive's speed.

In the meantime I've copied my Virtual PC demo images over from the mechanical drive, and the performance of the images has clearly increased significantly. The machine feels reborn: the right bottom corner is dead silent and cold (for the first time, the left-hand side carrying the DVD drive gets hotter – no, it doesn't contain a movie, just a DVD with robust software bits). Visual Studio and Office applications have been installed in the meantime and start up noticeably faster. The next test is battery drain, but over the last 40 minutes I've only lost 15% of my battery charge, so that seems to be going in the right direction too. Finally I'll be able to do a bit more work on transatlantic flights…

Needless to say, the SSD movement has a new believer. Get one to believe it yourself!


Readers can expect traffic caused by my blog to decrease a bit over the next few weeks, as I'll be travelling to Europe for a couple of events I'll be speaking at (not to mention the preparations required for the seven distinct talks <g>). If you're attending any of those events or are in the neighborhood, feel free to drop by and say 'Hi' (or better, attend one of my talks). Here's an overview of the topics I'll be delivering:

 

TechDays Finland – 5 and 6 March

More information about the event at http://www.techdays.fi. I can’t read a word of it, so no idea how my talks (and my bio :-)) are described…

  • Introduction to Oslo and "M" – Covers the essentials of the Oslo modeling platform, with a bit more attention to the language aspect of it. During this talk, I'll be showing stuff attendees can play with immediately using the Oslo January 2009 CTP SDK, covering the repository, Intellipad, the m/mx toolchain, the M language and maybe a bit of MGrammar. Check out my blog series on the topic as well.
  • Introduction to F# – One of my favorite .NET languages is without doubt F# (the first part of a blog series on the language is sitting in my Windows Live Writer drafts folder as we speak). This session will be a gentle introduction to F#, explaining why functional programming matters and how it can help solve real-world problems. We'll also look at the more "exotic" features of F#, like asynchronous workflows, which make the language unique.

I've never been to Helsinki, so I plan on doing half a day of sightseeing during this hectic trip :-). If you have any tips on must-see places, let me know!

 

ICTdag Belgium – 9 March

An event I've been speaking at quite a bit in the past, when I was still a full-time Belgian... The site is in Dutch, but the curious can find information here: http://ictdag.be. Basically it's an event for education folks (teachers, ICT coordinators, etc.) on how to use technology in schools. It's always great to meet a different kind of audience than developers or IT pros.

  • A sneak peek at Windows 7 – What’s new in Windows 7: covers improvements to the user experience, a few looks at more technical concepts, Internet Explorer 8, etc.
  • Windows PowerShell Introduction – How to use Windows PowerShell to improve manageability of school infrastructures and networks on Windows.

The event takes place in Hasselt, and a short train ride brings me to…

 

TechDays Belgium – 11 and 12 March

This time in Antwerp; more information at http://www.techdays.be. Looking forward to meeting old friends from Microsoft Belux and the community once more.

  • The Future of C# – Essentially a redelivery of Anders’ great PDC08 session on the topic, but with a couple of little “under the covers” investigations. In this talk, we’ll take a look at what’s coming next for C# in the 4.0 release. Features that will be covered are dynamic, generic co- and contravariance, optional and named parameters, better COM interop and “no PIA”.
  • LINQ in Breadth – Ah, this one will be truly fun (the others too of course, but this one just that little bit more). Last year, I talked at the same conference on "LINQ in Depth" (custom LINQ providers); this time, I'm rotating the approach. I have a few surprises cooking, but basically we'll take a look at LINQ from a different angle: how to LINQify virtually everything, how to apply LINQ design concepts in other places, etc.
  • Windows PowerShell v2: the IT revolution, part two – An overview of Windows PowerShell 2.0 features, with a closer look at the WS-MAN and remoting functionality that will be part of the second release of PowerShell. Other topics that will be covered are script cmdlets, interactive script debugging, the ISE (Integrated Scripting Environment) and more.

 

I have a few upcoming blog posts ready that will arrive in your RSS feeds on auto-pilot, but my responses to feedback might be slow.


Update (2/22/2009, 5:29 PM): Thanks to all the readers for the feedback on this post so far. Over the weekend I had some mail conversation with the BCL team, and it turns out the bits for this feature mistakenly made it into the CTP. Allow me to quote Justin's mail:

Hi Bart,

I saw your blog post on System.Shell.CommandLine. We’re actually not planning to include this in .NET 4; it was mistakenly public in the CTP. The design wasn’t something we were happy with so it has been removed and will not be available in the next preview release. However, we are planning to release a much better designed command line parsing library on CodePlex later this year. When it is available, we’ll be sure to announce it on the BCL team blog. If you could let your readers know, I’d appreciate it.

Thanks,
Justin


So, stay tuned for more information on this upcoming library. As you can see, some of the concerns raised through the feedback are well-known to the team, hence their investment in a better library design, implementation, testing, documentation and shipping effort. I'm sorry for the inconvenience this post may have caused you. The next parts of this series will be archived for posterity until the better library arrives :-).


Introduction

Command-line parsing isn’t a trivial thing and the wheel has been reinvented many times to make this job easier. Starting with .NET 4.0 though, developers will have built-in support for command-line parsing in the framework. In this series of posts I’ll dive into the details of this hidden treasure in .NET Framework 4.0. You can actually start playing with it today, by downloading the Visual Studio 2010 and .NET Framework 4.0 CTP.

So, what's the deal? Why focus on something seemingly old-fashioned like command-line parsing? It turns out quite a few programs want to support command-line arguments of some sort, no matter whether the tool is intrinsically command-line driven (e.g. console applications such as compilers) or comes with a GUI: they all have a Main method. And that's where the pain starts. All you get from Main's arguments is an array of strings, and you're on your own from there on. The very first thing you're likely going to do is some kind of parsing, to turn the arguments into rich objects. That's far from trivial if you want even a little bit of flexibility at the command line:

  • is the argument required or optional?
  • positional (like copy <first> <last>) or named (like csc /t:library)?
  • what’s the type of the argument?
  • how to validate an argument’s value?
  • support for "parameter sets"?
  • etc.

Lots of things to think about. And we're lucky an entire team has been thinking about exactly that for quite a while: Windows PowerShell. As you'll see, the core concepts of command-line parsing in the .NET Framework are based on the techniques applied to PowerShell cmdlets. Not only does that make command-line experiences consistent, it also allows developers to transfer their knowledge from one domain to the other, and even to port command-line parsing code from one side to the other.

When to use what? Windows PowerShell is definitely the way to go for automation, so if you find yourself writing command-line tools that are applicable in such a scenario, PowerShell should be a no-brainer. In addition, the use of PowerShell gives you a lot of infrastructure to party on with regard to pipelining, types, error handling, etc. But if you find yourself in a scenario where it just feels right to write a .NET console application, or to add command-line support to any kind of application, System.Shell.CommandLine should be your next big friend…

System.Shell.CommandLine

The new namespace for the command-line parsing functionality is System.Shell.CommandLine, which lives in the System.Core.dll assembly, so it will be referenced by default in new projects. It contains quite a few types:

image

The main entry points to the API are CommandLineParser and AttributeCommandLineParser. What's the difference between the two? The first one, CommandLineParser, is the simplest to use. It's very basic and imperative in nature: you call a few methods to add parameters to the parser, invoke the parser, and get the detected values (or an exception if the command line was invalid) back through a dictionary-like lookup. The second one, AttributeCommandLineParser, is declarative in nature. You declare a class whose properties are annotated with metadata indicating the corresponding parameter's behavior. Next, you pass a new instance of that type into the parser, and given the command line it will populate the properties with the values found. All of this can be done in conjunction with attribute-driven validation and even transformations on parameters (like turning short file names into full paths).
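
To make the contrast concrete, here's a purely hypothetical sketch of what the declarative style could look like. The attribute and its Required/NameRequired/HelpText properties below are my own assumptions for illustration (declared locally to keep the sketch self-contained); they are not the actual types shipped in the CTP:

using System;

// Hypothetical stand-in for the CTP's real attribute; assumed names only.
[AttributeUsage(AttributeTargets.Property)]
class CommandLineParameterAttribute : Attribute
{
    public bool Required { get; set; }
    public bool NameRequired { get; set; }
    public string HelpText { get; set; }
}

// A class whose annotated properties describe the parameters; the parser
// would populate an instance of this type from the command line.
class ChkdskArguments
{
    [CommandLineParameter(Required = true, HelpText = "Specifies the drive letter.")]
    public string Volume { get; set; }

    [CommandLineParameter(NameRequired = true, HelpText = "Fixes errors on the disk.")]
    public bool Fix { get; set; }
}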

To get started with the basics, I’ll dive into the CommandLineParser class today. In the next episodes we’ll look at the AttributeCommandLineParser in all its glory.

CommandLineParser

Let’s start by looking at the definition of CommandLineParser:

using System;

namespace System.Shell.CommandLine
{
    public class CommandLineParser
    {
        public CommandLineParser();

        public bool AllowRemainingArguments { get; set; }

        public void AddParameter(CommandLineParameterType type, string name, ParameterRequirement required);
        public void AddParameter(CommandLineParameterType type, string name, ParameterRequirement required, ParameterNameRequirement nameRequired);
        public void AddParameter(CommandLineParameterType type, string name, ParameterRequirement required, ParameterNameRequirement nameRequired, string helpText);
        public bool GetBooleanParameterValue(string name);
        public double? GetDoubleParameterValue(string name);
        public string GetHelp();
        public int? GetInt32ParameterValue(string name);
        public string[] GetRemainingArguments();
        public string GetStringParameterValue(string name);
        public void Parse();
        public void Parse(string commandLine);
    }
}

As you can see, this class isn’t that big. I should stress that CommandLineParser is the least powerful of the two, but still applicable in a lot of cases where you don’t require automatic mapping onto an object, data validation and/or transformation, etc.

What's better than looking at a simple example? Say we want to rewrite chkdsk with support for a few of its arguments:

image

Let’s tweak it a little though to reduce the number of parameters (so we can get to the essence) and make one required:

CHKDSK volume [-F] [-L:size]

This gives us a chance to show a required parameter (volume), an optional "flag" parameter (-F) and one that takes a value (-L). The basic steps to parse this are:

  • Create a CommandLineParser instance.
  • Add parameters to it using the AddParameter methods.
  • Call Parse, feeding in a string, or calling it without arguments to parse the process's command line.
  • Retrieve parameter values through Get* methods.
  • Catch an exception for the case where parsing failed.

Now in terms of code:

using System;
using System.Shell.CommandLine;

class Program
{
    static void Main()
    {
        Console.WriteLine("Checks a disk and displays a status report.\n");

        CommandLineParser cmd = new CommandLineParser();
        cmd.AddParameter(CommandLineParameterType.String,  "volume", ParameterRequirement.Required,    ParameterNameRequirement.NotRequired, "Specifies the drive letter.");
        cmd.AddParameter(CommandLineParameterType.Boolean, "F",      ParameterRequirement.NotRequired, ParameterNameRequirement.Required,    "Fixes errors on the disk.");
        cmd.AddParameter(CommandLineParameterType.Int32,   "L",      ParameterRequirement.NotRequired, ParameterNameRequirement.Required,    "Changes the log file size to the specified number of kilobytes.");

        try
        {
            cmd.Parse();
        }
        catch (ParameterParsingException ex)
        {
            Console.WriteLine(cmd.GetHelp());
            Console.WriteLine(ex.Message);
            return;
        }

        string volume = cmd.GetStringParameterValue ("volume");
        bool fix      = cmd.GetBooleanParameterValue("F");
        int? logSize  = cmd.GetInt32ParameterValue  ("L");

        Console.WriteLine("Checking volume {0}...", volume);

        if (fix)
            Console.WriteLine("Fixing errors...");

        if (logSize != null)
            Console.WriteLine("Changing log size to {0}...", logSize);
        else
            Console.WriteLine("Current log size: {0}", 1024); 
    }
}

A quick journey through the code. First, we new up the CommandLineParser object. Nothing special here. Next, three parameters are added. I'm using the most specific overloads to specify everything up to a parameter description (ignoring localization, lazy as I am…). The supported types for arguments are string, boolean, int32 and double. Names are not case-sensitive for the user, but do matter in the code: I'm referring to the fix parameter as "F" here, so later on I'll have to use a capital "F" again. Whether or not a parameter is required should be self-explanatory. The name requirement might be less obvious, but essentially it boils down to either allowing positional use of the parameter or requiring the parameter to be paired with its name all the time. For the "flag" -F it makes sense that the name is required. For -L this means the value can only be specified like "-L:size", requiring the "L" to be spelled out explicitly. The last argument of AddParameter takes the help string.

Once we have declared the parameters (in an imperative way, since that's how CommandLineParser works; see the next post for a more declarative, metadata-driven way), we can invoke the parser. Just calling Parse without an argument will use Environment.CommandLine as the input. Alternatively, we could feed in a string ourselves. When the user violates the contract (omitting a required parameter, specifying a value of the wrong type for a valued parameter, etc.), an exception of type ParameterParsingException is thrown. You can take a look at the Message property for detailed error info (as expected), but I'm also printing the auto-generated syntax report using GetHelp. If the user made a mistake, this causes something like the following to appear:

image

Other types of errors will be handled in a later episode (like duplication of parameters, validation errors, and such). (Note: I’ve called my assembly mchkdsk for “managed chkdsk”, but feel free to find other explanations for the “m” prefix given the dysfunctional nature of the thing…)

Finally, we retrieve the parameters using the type-specific Get*ParameterValue methods. As Boolean parameters represent a flag, they're not retrieved as nullable (the absence of the parameter means false, presence means true), but all other parameter types are nullable (string obviously always is). And once we have obtained the parameter values, we get into the program's logic, which is just some dummy code as you can imagine. Below is the output of executing mchkdsk C: -F -L:1024 as an example:

image

Some quick notes:

  • Named parameters are prefixed with a dash ‘-‘.
  • String parameters with spaces can be surrounded by double quotes; ‘\’ acts as the escape for quotes in between quotes.
  • Named parameters that take a value (i.e. non-Boolean) can have an optional colon ‘:’ between the name and the value, as well as spaces (in BNF, something along the lines of <name>‘:’<space>*<value> or <name><space>+<value>).
  • Remaining arguments are supported on an opt-in basis (see AllowRemainingArguments). If not opted in, an exception is thrown when arguments other than the recognized ones are found. Otherwise, you can find them in the array returned by GetRemainingArguments.
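
Putting those rules together, the following invocations of the mchkdsk sample should all be equivalent (the last line, using the positional volume parameter by name, is my assumption based on the name merely being optional):

mchkdsk C: -F -L:1024
mchkdsk C: -F -L 1024
mchkdsk -volume:C: -F -L:1024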

That was easy, no? Next time, the more die-hard way :-). Enjoy the weekend!


A few colleagues kindly hinted that my post on Type Theory Essentials in Pictures – Part 1 – Quiz might be a little too subtle or non-obvious, despite the verbal hints in the surrounding text. So I decided to give away the first couple of pictures (including titles, that is) to set a bit more context.

Here’s the first one:

Slide1

Infer from this what the dotted “hollow” box means, versus a solid colored one. Then looking at the second picture, you should be able to make the connection between a non-dotted hollow box and a solid one again:

Slide2

Finally, let me give a hint about the faces: they are actors.

Answers will follow later, and then we’ll get into more serious concepts in parts 2 and 3. Happy puzzling!


I'm currently in the middle of preparing my talks for a trip to Europe during the first two weeks of March (I'll post a conference/session list later this week). One thing I love about speaking to (technical) audiences is bringing rather theoretical but essential topics in a way that's easy to consume in the short time available for a session. There are two main strategies: use a lot of pictures, or do a lot of on-stage coding. The latter is definitely the more fun, both for me and for the audience, and I try to apply it wherever possible. However, having a graphical illustration of the concepts used in the subsequent coding demos is often a good idea to make things stick.

Starting with this post, and in the next few ones, I want to share some of my illustrations. Notice my artistic talents are limited to Paint and PowerPoint :-). I've removed the slide titles though, so that the reader can try to map each illustrated concept back to something familiar (or still unfamiliar) in the domain of types. Some advice:

  1. Go through the pictures in order (left-to-right, top-to-bottom traversal of the table) as graphical notations are reused subsequently.
  2. Pictures are grouped as well (e.g. 1 and 2 belong together); groups start with abstract concepts and concretize them, as indicated by an increase in color usage.
  3. Transitions between adjacent pictures could have been made more gradual in some cases (i.e. an abstract concept is sometimes concretized a few steps at once).

Today’s series shouldn’t be too hard, although I’m biased obviously :-). Some hints: blocks are passive things that compose but are themselves composed. Humans are active (sometimes). Think in terms of objects, types and functions.

Answers can typically be formulated in C# syntax or pseudo-syntax. As we go further in this series of posts, some things might no longer have a notation. If that’s the case, answering with a theoretical term will be required (or using syntax from another language).

image 
Figure 1
image 
Figure 2
   
image 
Figure 3
image 
Figure 4
   
image 
Figure 5
image 
Figure 6
   
image 
Figure 7
image 
Figure 8
   
image 
Figure 9
image 
Figure 10
   
image 
Figure 11 (extra)
image 
Figure 12
   
image 
Figure 13
image 
Figure 14
   
image 
Figure 15

Good luck!


Last time in this series, we looked at M’s structural type system, pointing out the differences compared to nominal type systems and why structural typing has its benefits when dealing with data. Obviously there’s more to data than types: we need containers to store that data in. That’s what collections and extents are for. Ready? Go!

 

Collections

In the last post, we already saw a core operator that acts on collections: the in operator. But wait a minute, didn't in operate on a value and a type, as in "Hello" in Text? It turns out M's type system is so unified that types and collections are very closely related. Indeed, a type describes the set of possible values that something declared to be of that type can take. But besides types, collections too can be used to group values together. Values contained in a collection are called elements. Also, collections are themselves treated as values.

Again, we’ll rely on MrEPL to teach us how M works in an interactive fashion. Let’s start by playing around a bit with the syntax to define collections of values:

image

Collections are built from comma-separated lists ("lists" used in an informal way here – see further on) of values between a pair of curly braces, much like collection initialization syntax in C# 3.0 and beyond (or arrays before that). A few notable things, each illustrated in the sketch after this list:

  • There’s an empty collection.
  • Collections can contain different types of values.
  • Nesting of collections is possible as collections themselves can be treated as values.
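
For readers without the screenshot at hand, the values fed to MrEPL looked along these lines (a sketch illustrating the three observations above):

{ }
{ 1, 2, 3 }
{ 1, "Hello", true }
{ 1, { 2, 3 } }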

When talking about collections, different questions come to mind; the most important ones are "are duplicates allowed?" (a.k.a. "is it a set or not?") and "does order matter?" (a.k.a. "is it a list or not?"). Let's put those questions to the test and find out:

image

Clearly, duplicates are allowed but order is irrelevant. (You know a name for such a collection, don’t you?) It’s not hard to see why this design was chosen, as M is about modeling data in a general way, and typically maps to repositories based on database technologies. We’ll see later how uniqueness can be enforced when dealing with “real” data.

From the sample above, you can already infer a few operators that act on sets: == and != check for equality and inequality respectively. What about other operations? As M is inspired by set theory it shouldn’t be too surprising operators exist to check for subset (<=) and superset (>=) relationships and to define unions (|) and intersections (&):

image
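
The screenshot boils down to expressions of this kind (a sketch; expected results annotated as comments):

{ 1, 2 } <= { 1, 2, 3 }    // subset: true
{ 1, 2, 3 } >= { 2, 3 }    // superset: true
{ 1, 2 } | { 2, 3 }        // union: { 1, 2, 3 }
{ 1, 2 } & { 2, 3 }        // intersection: { 2 }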

Notice though how set operations always return sets (i.e. containing no duplicates). The choice of | and & as the operators for union and intersection respectively comes from the relationship between set theory and logic. If a set is defined as a predicate (the membership condition that determines whether a value is part of the set or not), taking the union means finding all the elements for which either one (or both) of the sets' predicates evaluates to true. Similarly, the intersection holds all elements for which both predicates are true. Actually, think of it this way:

{ 1 } ~ (Number where value == 1)
{ 2 } ~ (Number where value == 2)

then

({ 1 } | { 2 }) ~ (Number where value == 1 || value == 2)
({ 1 } & { 2 }) ~ (Number where value == 1 && value == 2)

What are the other operations that can be carried out on collections? What about checking whether a value belongs to a collection, and what about query operations?

image

Friends of LINQ should be immediately familiar with query operators like filtering and projection. Next, let's point out some other nice features, such as checking the number of elements (including duplicates) in a collection and turning a collection into a set by means of the Distinct operator:

image
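
Again, a sketch of the kind of expressions shown, with expected results as comments:

{ 1, 2, 2 }.Count                 // 3, duplicates included
{ 1, 2, 2 }.Distinct              // { 1, 2 }
{ 1, 2, 2 }.Distinct == { 1, 2 }  // true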

In the previous post, we’ve seen how to declare types. Collections obviously can take values of any type, so you can have collections of things like Products. Let’s show a sample based on entities, which consist of name/value pairs (notice the anonymous construction of values below):

image

 

Collection types

In the previous paragraph we looked at collection values, which can be thought of as containers of elements that are themselves values. That makes sense, right? Now we'll take a look at the same concept from a different angle, using types. So far we haven't spelled out the type of a particular collection value, but obviously there should be a way to do this. If not, how would we say things like: I have a Person type, and each object of that type should contain a collection of numbers (whatever they represent)?

The common base for collection types is called, no surprise, Collection. First of all, it's important to know the difference between a singleton (a collection with one value) and a scalar (a single value, which could be based on a type that is itself compound):

image

In the sample above you've seen two distinct "types" of types. Actually their "order" is different: Number (representing a "scalar" value) versus Collection (representing a collection of values). But how do we go from a "scalar" type (a Number, a Person, whatever) to a collection type based on it? The answer is by means of a type constructor. You already know type constructors from the world of the CLR. Given any type T you can build a new type like T[] for an array of objects of that type (I'm not using the word "value" here as that would be ambiguous in CLR lingo). Notice that no-one had to declare Person[] explicitly; the mere fact there is a Person type allows the [] type constructor to be applied to it, yielding a new type (constructed by the runtime) that represents an array of Person objects. If you read ECMA 335 cover to cover you'll discover other constructs in the CLR that play the role of type constructors, although there aren't that many. M, though, has quite a few type constructors that allow you to define a collection:

image

The type constructors restrict the type of the collection elements and the cardinality bounds of the collection. Four constructs are available to limit the cardinality: the three Kleene operators (? = 0 or 1, * = 0 or more, + = 1 or more) and the #m..n operator specifying (inclusive) lower and upper bounds to the element count.
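
As a sketch, declaring collection types with each of the four cardinality constructs looks like this (the type names are mine):

type NoneOrOneNumber : Number?;
type Numbers : Number*;
type AtLeastOneNumber : Number+;
type TwoToFourNumbers : Number#2..4;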

Based on this, we can define our own collection types. It’s important to note though that constraints in collection type definitions have two “pseudo”-variables available: value, referring to the collection itself, and item, referring to each individual item in the collection:

image
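
For instance, a constrained collection type using both pseudo-variables could look like this (a sketch):

type SmallEvenNumbers : Number* where value.Count <= 10 && item % 2 == 0;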

I’ll leave it to the reader to play a bit more with collection types.

 

Extents

Values are one thing, but without storage for them they're not really very usable in modeling scenarios where you want to keep data around. So we need dynamic storage for those values (this includes, and typically is, a collection of values), which is what we call an extent. To show how this works, we'll walk through the tool chain and create a table of Person values, using the following key steps in the declaration of the model:

  1. Define the type for the values, i.e. Person.
  2. Define an extent for the values, based on a collection type over our entity type (i.e. Person* becomes People).
  3. Wrap the whole thing in a module in order to make it deployable.

Here’s the basic sample:

image
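
The model in the screenshot comes down to something like this (a sketch; the member names are assumed, and concrete storage types are used as discussed right below):

module Demo
{
    type Person
    {
        Name : Text;
        Age : Integer32;
    }

    People : Person*;
}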

Notice that in order to make this work, you’ll need to provide a concrete type for storage. E.g. for Age, you’ll use an Integer32 or so (well, if you expect the modeled people to get really old that is :-)).

image

Where's the extent in the sample above? The last line in the Demo module is where the storage is allocated, concretized using a table definition in SQL. How to get this model into SQL Server was the subject of my introductory post: Getting Started with Oslo – Introducing "M". In the next episodes we'll dive a little deeper into things like SQL generation, computed values and queries, before tackling MGrammar.

Cheers!


One of my favorite features in our upcoming Windows 7 (and Server 2008 R2) release is the native support for VHDs, a.k.a. Virtual Hard Disks, the file format used by Virtual PC, Virtual Server and Hyper-V to represent virtual disks. In this post, I’ll show you the basics of this feature and how to use it. I won’t cover a sister-feature, boot from VHD, in this post but as we get closer to shipping Windows 7, I plan on blogging about that one too.

 

Why native VHD support?

The first question quite a few readers might raise is why we'd build support for the VHD format natively into Windows. As you might have figured out by now, virtualization is becoming increasingly important for the consolidation of servers, so it just makes sense for the OS to have an intrinsic understanding of one of virtualization's core pillars: the disk format. Having such support helps in a variety of scenarios, ranging from maintenance of virtual disks, over creation of new virtual disks for data, to booting from them. And you can expect the virtualization story to grow significantly over the months and years to come.

This said, support for VHDs isn't unprecedented. A while back I blogged about VHDmount, a tool that shipped with Virtual Server 2005 R2, allowing you to take a VHD file and mount it to a drive letter, using a driver installed by the tool. Native VHD support, besides its built-in nature, takes all of this many steps further, as explained above.

 

Virtual² Disk Service

The hub of disk administration on Windows is the Virtual Disk Service (VDS). The name is a bit of a historical misnomer, but an understandable one ("Prophetical Engineer" isn't a job role at Microsoft, yet), as "virtual" here doesn't refer to the VHD format in any way. VDS was introduced in Windows Server 2003 as a unification mechanism on top of different kinds of storage, thereby virtualizing the underlying software and hardware used for storage.

VDS is surfaced to users by means of various tools, like diskpart and the Disk Management MMC snap-in (diskmgmt.msc), but is made accessible through a set of COM interfaces (IVds*) as well. As usual in the world of software, it helps to think about it in terms of the layered-cake principle:

image

In this post I’ll cover the Disk Management MMC snap-in enhancements in Windows 7 as well as the enhancements made to diskpart, both to surface the new VHD support. In a later post, I’ll talk about API enhancements too (once we have a Windows 7 update for the Windows SDK published).

So, with the native VHD support in Windows 7, the Virtual in Virtual Disk Service is applicable twice, as now the VDS can be used to manage virtual hard disks or VHDs as well, hence my subtitle Virtual² Disk Service.

 

Diskpart

As this is a blog for geeks, let's start on the dark side of the picture, with diskpart. As many of you know, diskpart is centered around the idea of objects being managed. In the past, those objects were disk, partition or volume. Now, in Windows 7, the object "type" vdisk has been added to that set. Quite a few commands know how to deal with vdisk objects (like create and select), while others are meant to be used with vdisks exclusively (like attach and detach). The picture below outlines the most important vdisk-aware or vdisk-specific commands:

image

Let’s go through the steps required to create a new vdisk, attach it and use it.

 

Step 1 – Create the VHD (only if you want a new disk)

Quite predictably, creating a VHD is carried out by the “create vdisk” command. Here’s the full syntax:

image

Notice the support for creating fixed-size VHDs and expandable VHDs. Other than that, you can set the maximum size (which becomes the preallocated space for a fixed-size VHD, and the maximum size for an expandable one) and create differencing disks by specifying the parent. I won't dive into the advanced options, so let's stick with the simplest configuration possible:

create vdisk file="c:\temp\demo.vhd" maximum=1000

(Note: the documented default of "expandable" seems to be wrong currently; the actual default is "fixed", so you'll get a file of the specified size.) Executing this command creates a new empty VHD in the specified location:

image

image

and triggers the installation of the VHD miniport driver (HBA = Host Bus Adapter, see the WDK for more information):

image

image

image

 

Step 2 – Select the virtual disk

Whether you're creating a new VHD (see step 1) or using an existing one, the vdisk object needs to be selected prior to continuing with the next step of attaching the disk. This is done with the "select vdisk" command, pointing at the VHD to be selected:

image
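
For the VHD we just created, that boils down to:

select vdisk file="c:\temp\demo.vhd"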

I won't go into merging this time, but if you're using a differencing virtual disk and intend to merge it with its parent or parents, depth becomes a relevant setting, making it possible to merge up to a certain parent level (e.g. root.vhd, diff1.vhd, diff2.vhd, diff3.vhd would require a depth of at least 4 to merge all the way up to the root VHD, allowing the merge command to execute ((diff3 + diff2) + diff1) + root).

Anyway, selection is no piece of art, but let's use list vdisk to make sure "focus" has indeed moved to the selected virtual disk:

image

That seems to have worked.

 

Step 3 – Surface/attach the disk

The “create vdisk” command is your VHD factory (much like an assembly line for physical harddisks), the “select vdisk” command is you unwrapping the newly manufactured disk and taking it in your electrostatic-free hands, and the “attach vdisk” command is you opening the case of the computer and connecting the thing to the motherboard:

image
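
With focus on our vdisk, the simplest form is just:

attach vdisk

(A read-only variant, attach vdisk readonly, exists as well; it comes in handy when you only want to inspect a disk, as we'll see again in the MMC discussion below.)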

Notice you should have focus on the vdisk first, as outlined in the previous step. Upon executing "attach vdisk", the focused object will be attached to the computer and appear in Device Manager again (you'll hear the "insert hardware" sound):

image

Although it appears in the list of devices…

image

… it’s still a virgin disk without any partitions, so you won’t see it in Windows Explorer yet:

image 

You can verify the Virtual Disk Service knows about the VHD being “online” by executing “list vdisk” again:

image

 

Step 4 – Partitions and volumes (only if you’re creating a new disk)

The last part of creating a new virtual disk is nothing new compared to real physical disks: you need to partition them and assign drive letters.

image
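
The diskpart sequence in the screenshot comes down to something like this (the drive letter is mine):

create partition primary
assign letter=v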

Windows gently reminds you to format your new disk:

image

And after the usual procedure, we’re done:

image

 

Disk Management MMC snap-in

The above is the geeky way to deal with VHDs, but a gentler way is available as well, through the Disk Management MMC snap-in, diskmgmt.msc (also reachable for regular humans through Computer Management, assuming you can find that one :-)).

Let's continue where we left off above: we already have an attached VHD, so Disk Management should show it. And indeed it does, even with a slightly different icon:

image

All the usual operations apply to deal with partitions and volumes, but obviously there should be some context-sensitivity for the fact we’re dealing with a VHD:

image

Let’s give it a try (you’ll hear the “remove hardware” sound when pressing OK):

image

How to create or attach a VHD? Plain simple again:

image

Most options of Diskpart (well, VDS to be precise) are surfaced through the UI:

image

Notice the drop-down with units of measure is prepared for the TB range, which could have been even more ambitious given how optimistic we are in the "disk quota" management UI, but that's a whole different story…:

image

And finally, the "Attach VHD" dialog isn't too surprising either:

image

Recall though how it's possible to attach a disk as read-only, and how easy it is to do that through the UI. This is extremely handy if you simply want to inspect a VHD but avoid making any mistakes whatsoever.

 

Happy VHD’ing!


Welcome to the first real post in my new series on the M Programming Language. Last time, we bootstrapped ourselves by looking at the tool spectrum available in the Oslo SDK. If you haven’t read through it yet: The M Programming Language – Part 0 – Intellipad, MrEPL, VS2008 project support and Excel integration. Today, we’ll dive into the type system of M.

 

Nominal or structural?

The first thing to realize about M is its fundamentally different type system, compared to the languages most readers will be familiar with (which I suspect to be OO languages from the curly brace family and such, like C#). The taxonomy of type systems we’re looking at right now differentiates between:

  • Nominal (sometimes referred to as nominative) type systems
  • Structural type systems

Let's dive into both families a little bit. First, nominal type systems. What's in a name? Nominal comes from the Latin word 'nomen', which stands for name. In other words, the name of a type is relevant for something. That something is "type compatibility". What makes it possible to say that a Giraffe object can be treated as an Animal, for instance in C#? Determining whether this statement is true has to be carried out by analysis of the type hierarchy, which defines relationships between types by their names. For instance, the declaration of the type Giraffe will say: "my base type is (referred to by name) Mammal". Next, we analyze the Mammal type in a similar way and conclude it derives from Animal, allowing us to conclude a Giraffe instance can be treated as an Animal instance. What we've been analyzing here are the "is a" relationships, which are declared by name. In other words, compatibility is an explicitly stated fact, and no accidental equivalence is possible. For example, instances of the types below cannot be treated as equivalent:

class Point { int X; int Y; }
class Punt { int X; int Y; }

even though they’re structurally equivalent. And that brings us seamlessly to the concept of structural type systems. In a structural type system, all that matters to make decisions about type compatibility or subtype relationships is the structure of the declared types, not the name. Let’s go straight to an example, from M this time:

type Point { X; Y; }
type Punt { X; Y; }

This time, we can say that instances of Point and Punt can be treated as equivalent in the type dimension. Even more, when you ask the system whether a 3D point can be treated as a 2D point, the subtype relationship will confirm that's the case:

type Point2D { X; Y; }
type Point3D { X; Y; Z; }

So, we didn't have to say something like "Point3D derives from Point2D and just adds the value Z to its representation". Clearly, structural typing is more flexible, although it has a feel of "compatible by accident" to it. This also means concepts like sealing don't work in a structural type system. We can create subtypes and supertypes of any given type just by looking at that type's structure and providing type declarations that are either "more" or "less" than the given type. All that matters in a structural world is the "has a" relationship: as long as an object has an X and a Y (with compatible types, i.e. applying the rules of structural subtyping recursively), it will be compatible with any of the above type declarations.

Actually, allow me to deviate a bit from my path here and draw a parallel with Windows PowerShell. If you're familiar with the pipeline processor of PowerShell, you know its role is to flow objects (.NET, WMI, etc.) through different cmdlets that act upon them. In doing so, the pipeline processor needs to determine, for each object that passes through the pipeline, whether or not it can act as the input of the next cmdlet. For instance, here's the help for get-process:

image

Take a look at the part indicated in green. This is one of the crucial parts of PowerShell that makes it such a powerful and flexible environment. PowerShell isn't picky about types when determining compatibility of objects in relation to the cmdlets that act upon them. As soon as the input object has a property called "ComputerName", that property can be bound to the ComputerName parameter of the Get-Process cmdlet. This too is all about a "has a" relationship. If PowerShell were to demand nominative-based compatibility only, you wouldn't be able to get a list of processes without using "exactly the right type of input", which would be far from flexible (isn't it nice to be able to use CSV files, output of WMI commands, rich .NET objects, XML-based data, etc. as input to the command, with the only requirement being a ComputerName property?).

Back to our main discussion thread though. One more thing about structural typing: don't confuse it with duck typing, although (agreed) the distinction is a bit blurry. In duck typing, the "has a" relationship is exercised too, but it happens in a dynamic fashion at runtime. Structural typing doesn't imply a dynamic runtime environment, though. However, there's another distinction that's more relevant to point out here. Duck typing only cares about the parts of a type that are used by the program. For example, given an object (of who knows what type) you can use duck typing to access its X property, without requiring a full-blown interface or base type that makes the "has an X property" requirement explicit (like IHaveX). Similarly you can (optimistically, as violations will only be detected at runtime) access a hypothetical Y property. Notice you never said you wanted to treat the input object as "something compatible with, say, Point2D". In other words: you never really tried to establish some kind of type relationship.

In summary:

  • Structural typing establishes type compatibility based on the structure of a type, i.e. not based on names of types.
  • Structural typing is still statically typed.
  • Structural typing allows more flexibility for treatment of data.

It also helps to think about types as sets of possible values. In such a mindset, structural typing means that two types are considered identical when they describe the same set of possible values.

 

Built-in types

Time to take a look at M in practice. We’ll be using MrEPL for this purpose, so time to spin up Intellipad and start MrEPL. For more information on how to do this, see my previous post: The M Programming Language – Part 0 – Intellipad, MrEPL, VS2008 project support and Excel integration.

First, a short exploration of some of the built-in types. Obviously you’ll expect numbers, strings, dates and times, and such just to work. And luckily that’s the case. A few types are shown below:

image
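
A few checks of the kind shown in the screenshot (a sketch, assuming the CTP's built-in type names):

42 in Number          // true
42 in Integer32       // true
"Hello" in Text       // true
true in Logical       // true
42 in Any             // true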

Notice the use of the "in" keyword to check for types. I've shown positive cases, but obviously things like "true in Text" will produce a negative result. Just to prove it doesn't always print true, I've shown how errors are handled at the bottom of the screen :-). Since we have relationships between types, it makes sense to have a "mother of all types", which is the Any type. In the above, I've been using abstract types like Number, but concrete types like Integer32, Decimal19 and Double exist too. An exhaustive list of all the types can be found in the documentation of the Oslo SDK. I'll cover built-in operations that act on values in a subsequent post.

Why "in"? Because types are set-oriented constructs. The mother of all types, Any, can contain any value that's valid in the language. Subtypes like Number restrict that set of acceptable values; hence, a subtype denotes a subset. The "in" keyword reflects this set-oriented nature, because a type check in this world corresponds to a set-membership check.

One final thing to pay attention to in this sample is nullability. As you know, in the .NET Framework and the CLR nullability has been tied to the distinction between value types and reference types, which is why Nullable<T> was invented in the 2.0 timeframe. M is value-centric and makes nullability an orthogonal concept. Although there's notation for nullability, similar to C#'s and VB's, it doesn't seem to work yet in MrEPL. In a future post, where I'll be translating things into SQL again, I'll dive a little deeper into nullability. For the curious, here's the notation:

Integer?

which is shorthand for the following set-notation:

Integer | { null }

 

Defining types

Next, let’s define a custom type. Staying in the world of points:

image

Notice how the same value is a member of two different, but structurally equivalent, types. Also notice how the three-dimensional point value is accepted to be typed as a two-dimensional one (assuming we’re using the same coordinate letters though). Notice I haven’t specified types for the X and Y members yet: anything (Any) will do. To restrict this, we use the type ascription operator ‘:’ as shown below:

image

Again, think about a type as a set of possible values. Based on this, a subtype can be thought of as a subset of an existing type, and indeed one can declare such a subtype easily in M. In the sample below I’m adding a constraint to the accepted values, restricting Point values to those in the first quadrant:

image
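
In M notation, the constrained subtype looks something like this (a sketch):

type Point { X : Integer32; Y : Integer32; }
type FirstQuadrantPoint : Point where value.X >= 0 && value.Y >= 0;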

I’ll talk about constraints again in later posts. Similarly, a subtype can have additional members as illustrated below:

image

Once more, pay attention to the membership tests and the relationship with structural typing. Let’s go one step further and try to ascribe values to the respective types and see what happens:

image

Nothing should be too surprising here. A few notes though:

  • Member access is carried out using the familiar ‘.’ operator.
  • Ascription can be used safely to make a type less specific (e.g. treating Point3D-compatible values as Point2D) but is not guaranteed to work the other way around.
  • Trying to access an undefined member produces a null value (“undefined”).

For now, this discussion should suffice for the reader to start declaring types. Next time we’ll take a closer look at other core concepts of “M”: collections and extents.


A couple of days ago I wrote about Oslo in my blog entry Getting started with Oslo – Introducing "M". It turned out to be quite a bit of a hassle to get the thing posted, for various reasons that make my setup less than ideal. Here's what the battlefield looked like:

  • A short combat with IntelliMirror. Our network infrastructure allows folder redirection to be set up, so that your documents follow you everywhere. However, Windows Live Writer kept claiming the disk was full. I had forgotten where Windows Live Writer stores its stuff (obviously under My Documents, which were on the file server), so my reaction was: "can't be true – I have plenty of space free on my disk". I opened up Process Monitor, only to conclude Windows Live Writer was being honest with me and was reminding me in an indirect way that I was IntelliMirrored:

    image

    And the rest was easy to figure out: I had reached my quota (due to other large Windows Live Writer drafts with lots of pictures).
  • Convincing my network stack to connect to the FTP server to upload pictures. It turned out I didn't have the ISA Firewall Client software installed yet, and our IT department knows how to lock down network access quite well :-). Once more, Windows Live Writer wasn't to blame at all.

But now the final challenge came up:

image

Let's spell out the message for search-indexing friendliness: "Blog Server Error. Server Error 0 Occurred. Specified argument was out of the range of valid values. Parameter name: The requested index value does not exist". Hmm, looks like an IndexOutOfRangeException to me, doesn't it? Yes, it was a pretty complex post in terms of size, number of pictures, etc., but nothing too fancy in there (and It Should Just Work, irrelevant link). The fact that the title of the error says "Blog Server Error" told me Windows Live Writer was putting the blame on my blog engine, so I turned my attention to that one. It started to feel like the cloud was letting me down that night :-).

However, I couldn't really find a clue in the exception logs on the server, so it was about time to train my psychic debugging skills (for various, non-technical, reasons I can't debug through the source code of the blog engine). Let's analyze what I was trying to do: posting quite a bit of text with images in it, but I had also added new categories from within Live Writer:

image

Maybe the server tried to access the new categories and hit an issue doing so (because they were too new or whatever, you never know). Bummer, deselecting the categories didn't fix it. What else? Lots of images in my post, but that had worked before. It had to be the text somehow, but I couldn't yet think of anything special in there. So, I started a new post, copied in parts of my original post and tried to isolate the problem using divide and conquer (a proven technique). The precise same post without the images still had the problem, so that was reassuring. As I started to cut out pieces, I came to the realization there was something different about this post: it had SQL code in it, something almost unprecedented on my blog.

Thinking I was on the right track with the theory of SQL code being the root cause, I removed my SQL fragment and sure enough, it worked fine. So, I started to weed out the duplicate statements (multiple create tables, insert intos, create schemas, etc.) and was able to isolate the issue to something like this:

image

Ultimately I removed everything but the [ Name ] part (notice how I'm inserting spaces now), and sure enough, it still failed to post. If square brackets were somehow the issue, I figured I should find something about it around the web. I searched and found Problem with brackets and Date in CS 2, another blogger sharing a technical story in the realm of SQL, with OLAP. I ended up dropping the SQL fragment for now (it was meant to be an introductory post anyway, or how a bug can help you improve introductory content :-)), but the "fix" was easy: don't use certain special names within brackets.

[screenshot: the post, with the offending bracketed name worked around]

I’ll apply the real fix soon: upgrading to a newer build of the blog software. But for now, I’m satisfied that rational thinking (and directed web searches) still pays off when debugging issues. All in all, a happy ending!

 

PS: Help, I’m blogging about blogging. I’m not becoming a meta-blogger, am I?


Introduction

As promised in my previous introductory post on “Oslo” and “M”, I’ll be running a new language-focused blog series, this time about the modeling language “M”. In this first installment, our goal is to become intimately familiar with the “M” programming environment, so we can start to study the language without any hurdles.

 

Getting started

If you haven’t done so already, go and download the Oslo SDK January 2009 CTP. Also take a look at the Release Notes. What we’ll be focusing on in this post is the higher-level tool support provided in the SDK. We’ll skip the command-line tools for now (m, mx, etc.) – for more info, see my introductory post on “Oslo” and “M” – and focus on the following instead:

SDK Tools

  • Microsoft Visual Studio® integration: edit and build "M" in Visual Studio 2008
  • “Intellipad”: a text editor with "M" language services
  • "M" Add-in for Microsoft Excel® 2007: imports and exports "M" into and from Microsoft Excel 2007

In the first couple of subsequent posts, I’ll try to stay within the boundaries of those tools. At a later stage, we’ll dive a bit deeper again and focus on other SDK tools.

 

Intellipad

This should be your new Notepad. It’s a modern, extensible and highly command-driven text editor, ideal for developers. You’ll find it in your Start Menu, under Microsoft Oslo SDK, Tools. It looks plain and simple in its innocent form (i.e. with no special modes selected):

[screenshot: Intellipad in its default, mode-less form]

What do you need to know about it? First, there’s a Help menu. Use it to find out about all the shortcuts: Help, Commands (Alt-F1):

[screenshot: the Intellipad Commands list, opened via Help, Commands (Alt-F1)]

Notice the concept of the “Mini Buffer” at the top. Basically, it’s a simple command line that can be opened by pressing CTRL+/. Note for non-QWERTY keyboards: this means CTRL plus the key that has ‘/’ on it (without SHIFT, if ‘/’ appears in the top position on that key, as is the case on some AZERTY keyboards). Enter Mini Buffer Command mode and you’ll see this:

[screenshot: the Mini Buffer command line at the top of the editor]

Let’s enter a command like Zoom(2.0); and see what happens:

[screenshot: the editor zoomed in after executing Zoom(2.0)]

Handy for demos. There are alternative ways to zoom as well: CTRL++ and CTRL+-.

Other things you need to know about Intellipad? Not much, except for the selection of modes in the top-right corner. For example, here I switched to M Mode (actually, modes are another extensibility point of Intellipad, something I might talk about later as well) and typed some nonsense:

[screenshot: Intellipad in M Mode, flagging some nonsense input]

Notice how mode-specific menus appear upon switching to another mode.

(For the interested, search the Oslo SDK folder for .py Python files and get an idea about how scripting is used to make Intellipad extensible…)

 

MrEPL

Read-Eval-Print-Loops are great for interactive development, experimentation, testing, debugging, etc. The languages used in modeling are very well-suited to benefit from such a REPL tool, so the Oslo SDK comes with such a beast integrated in Intellipad. You can find the binaries of MrEPL under %ProgramFiles%\Microsoft Oslo SDK 1.0\Bin\Intellipad\Samples\Microsoft.Intellipad.Scripting.M. So, how do we get it started within Intellipad? What you’re looking at is a kind of add-in model, so we should launch ipad.exe in such a way that it knows where to find the MrEPL “add-in”. The way this works is by loading something called a catalog, which is defined in XAML (once more). You’ll find a file called ipad-vs-samples.xaml under the install location of the Oslo SDK, which contains stuff like this:

<!-- Copyright (c) Microsoft Corporation
     All rights reserved -->
<ipad:IntellipadCore
  xmlns:ipad='clr-namespace:Microsoft.Intellipad;assembly=Microsoft.Intellipad.Core'
  xmlns:mi='clr-namespace:Microsoft.Intellipad;assembly=Microsoft.Intellipad.Core'
  xmlns:cm='clr-namespace:System.ComponentModel.Activation;assembly=Activation'
  xmlns:s='clr-namespace:System;assembly=mscorlib'
  xmlns:scg='clr-namespace:System.Collections.Generic;assembly=mscorlib'
  xmlns:x='http://schemas.microsoft.com/winfx/2006/xaml'>
  <ipad:IntellipadCore.CatalogSources>
    <cm:FileCatalogSource RelativeCacheFilePath='ipad\root.catalog'>
      <cm:FileCatalogSource.Files>
          <s:String>Microsoft.Intellipad.Core.dll</s:String>
          <s:String>Microsoft.Intellipad.Framework.dll</s:String>
          <s:String>Microsoft.VisualStudio.Platform.Editor.dll</s:String>
      </cm:FileCatalogSource.Files>
    </cm:FileCatalogSource>
    <mi:IntellipadCatalogSource SubDirectoriesOnly='true' DirectoryPath='Components' RelativeCacheFilePath='ipad\components.catalog' />
    <mi:IntellipadCatalogSource SubDirectoriesOnly='true' DirectoryPath='Samples' RelativeCacheFilePath='ipad\samples.catalog' />
  </ipad:IntellipadCore.CatalogSources>
    <ipad:IntellipadCore.SettingsSources>
        <mi:IntellipadCatalogSource DirectoryPath='Settings' RelativeCacheFilePath='ipad\settings.catalog'/>
        <mi:IntellipadCatalogSource DirectoryPath='Settings\VisualStudio' RelativeCacheFilePath='ipad\settings-visualstudio.catalog'/>
    </ipad:IntellipadCore.SettingsSources>
</ipad:IntellipadCore>

The second IntellipadCatalogSource points at the Samples folder, where it can find the MrEPL sample. The remaining question is how to tell ipad.exe to use this file to populate its catalogs and load the referenced add-ins. Actually, you don’t need to know anything about this, since there’s a shortcut in the Start Menu that does precisely that:

[screenshot: the Start Menu shortcut that launches Intellipad with the samples catalog loaded]

Alright, let’s launch it and see whether we can invoke MrEPL by going to the Mini Buffer (CTRL+/) and typing SetMode('MScriptMode'):

[screenshot: entering SetMode('MScriptMode') in the Mini Buffer]

and guess what, there we are, interactive and well:

[screenshot: the MrEPL read-eval-print-loop up and running]

This will become our playground quite a bit when exploring the language features of M, so you’d better start getting used to it :-).

 

Visual Studio 2008 support

Intellipad is great, but sometimes it just makes sense to get the support of a project system, MSBuild, source control, etc. from a rich IDE like Visual Studio. So, not surprisingly, the Oslo SDK comes with support for Visual Studio integration out of the box. After installing the SDK, go to Visual Studio and create a new project. You should see a template for “Oslo”:

[screenshot: the “Oslo” project template in Visual Studio’s New Project dialog]

Integration with Visual Studio is rather limited at the moment but all the essentials are in. “M” projects contain, not too surprisingly, .m files:

[screenshot: an “M” project with .m files in Solution Explorer]

Mistakes produce errors, and building the project is supported:

[screenshot: a mistake in a .m file producing a build error]

For the curious, the Add Reference dialog allows you to select .mx files, i.e. “M” image files (see my previous post for information about those). And if you want to know how the build system works, take a look at the targets file in %ProgramFiles%\MSBuild\Microsoft\M\v1.0. Building a project produces both a .sql file and a .mx file:

[screenshot: the build output containing a .sql file and a .mx file]
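
To give you an idea of what ends up in that .sql file, here’s a hypothetical sketch – the extent and all names are made up, and the actual CTP output contains quite a bit more plumbing – of the kind of T-SQL the “M” compiler generates for a module with a single extent:

-- Hypothetical sketch of generated output; the real CTP output differs
-- in the details and contains additional setup logic.
CREATE SCHEMA [MySuperModels];
GO

CREATE TABLE [MySuperModels].[People] (
    [Id]   int NOT NULL IDENTITY,
    [Name] nvarchar(max) NOT NULL,
    CONSTRAINT [PK_People] PRIMARY KEY CLUSTERED ([Id])
);
GO

Roughly speaking, a module becomes a SQL schema and an extent becomes a table.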

Deployment of the model definition is a manual step, but more about that later (or, again, in my previous post).

 

Excel 2007 integration

Finally, as “M” is about data, and most information workers use Excel as a data front-end, “M” offers integration with Excel too. To install it, go to the Start Menu, under Microsoft Oslo SDK, Tools, and run the “M” add-in installer:

[screenshot: the “M” add-in installer in the Start Menu]

You’ll see the Office Customization installer pop up; just accept the installation of the customization:

[screenshot: the Microsoft Office Customization installer prompt]

The result: Excel gets extended with ribbon options under the Data tab that allow you to import and export data from and to “M”:

[screenshot: the “M” import and export options on Excel’s Data tab]

For example, importing my (empty) MySuperModels project created before produces the following table:

[screenshot: the imported (empty) table in Excel]

More about that later.

 

Conclusion

If you were able to complete all of the steps above, you’re ready to get started with our M blog series. Next time, we’ll start by looking at M’s type system and play more with MrEPL.

