Effective use of Nullable
Posted on May 22nd, 2009
In this post I will give a quick usage scenario for Nullable<T>, along with the handy shortcut of the null-coalescing operator (??).
I know when I first ran into some code with this ?? thing in it, I wondered what the heck it was. Well, it is something every developer should know about. So what does it do? If you are working with objects or value types that can be null, you can use it to guarantee that you end up with a value.
Simple example of how this works:
string test = null; Console.WriteLine(test ?? "We had a null value");
What this will do is print “We had a null value” to the console. .NET 2.0 also introduced Nullable<T>, and the null-coalescing operator works quite effectively with it as well.
Let’s look at DateTime and how we used to have to use it:
DateTime today = new DateTime(1900, 1, 1); // a sentinel "empty" date
if (today.Year == 1900)
{
    today = DateTime.Now;
}
In the old days we would have sentinel dates floating around instead of a nice null value. With .NET 2.0 we can now do something like this:
DateTime? today = null; if (!today.HasValue) { today = DateTime.Now; }
Now that still looks like a lot of code. This is where the null coalescing operator comes into play. Take a look:
DateTime? today = null; today = today ?? DateTime.Now;
What this allows us to say is: hey, if the today variable is null, set it to DateTime.Now. It is clean and concise.
You may also be asking what the ? after DateTime means. It is shorthand for Nullable<T>. You could define the same code like this and it would mean exactly the same thing; I just find the ? easier to read.
Nullable<DateTime> today = null; today = today ?? DateTime.Now;
As you can see, not only are nullable types handy, the null-coalescing operator is even handier. So now you might ask: can I use it on my own objects? The answer is YES, you can use the ?? operator on anything that can be null. That gives this operator true versatility, and it should be in every developer's playbook.
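To illustrate, here is a minimal sketch (the Customer class and the values are made up for illustration, they are not from this post) showing ?? used with a plain reference type and with a Nullable<T>:

// Hypothetical example: the null-coalescing operator works on any reference type, not just strings.
class Customer
{
    public string Name;
}

class Demo
{
    static void Main()
    {
        Customer fromCache = null;                        // e.g. a lookup that returned nothing
        Customer customer = fromCache ?? new Customer();  // fall back to a default instance
        System.Console.WriteLine(customer.Name ?? "(no name set)");

        int? retryCount = null;
        System.Console.WriteLine(retryCount ?? 3);        // works with Nullable<int> too; prints 3
    }
}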
Using Predicate
Posted on May 14th, 2009
Unless you have been living under a rock, or have been stuck on .NET 1.1, you have probably run across the Predicate<T> delegate while using List<T>. I know lots of people use these lists, and there are other objects in the framework that use Predicate<T> as well; another common one is the Array class. The basic usage of Predicate<T> is to provide a delegate pointing to a method that takes a parameter of the same type as the objects in your list and returns a boolean. The List, Array, or other object will essentially enumerate over your collection and test each item with that method.
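For example, here is a minimal sketch (the numbers are made up, not from the post) of Predicate<T> in its two most common homes, Array and List<T>:

// Predicate<int> is just a delegate: it takes an int and returns a bool.
int[] numbers = new int[] { 1, 3, 4, 7, 8 };
Predicate<int> isEven = delegate(int n) { return n % 2 == 0; };

int firstEven = Array.Find(numbers, isEven);   // 4
List<int> list = new List<int>(numbers);
List<int> allEven = list.FindAll(isEven);      // { 4, 8 }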
What I’m going to show you here is a simple find process that lets you create a reusable delegate for Predicate<T> scenarios using reflection.
First let’s get our pieces in place. We will start with the inline approach, which is what most people use by default. Here is the Person class we will use:
class Person
{
    private string _firstName;
    private string _lastName;
    private int _age;

    public Person(string firstName, string lastName, int age)
    {
        this._firstName = firstName;
        this._lastName = lastName;
        this._age = age;
    }

    public string FirstName
    {
        get { return this._firstName; }
        set { this._firstName = value; }
    }

    public string LastName
    {
        get { return this._lastName; }
        set { this._lastName = value; }
    }

    public int Age
    {
        get { return this._age; }
        set { this._age = value; }
    }
}
Next, in our main app, let's add some data.
List<Person> people = new List<Person>();
people.Add(new Person("John", "Smith", 35));
people.Add(new Person("Caitlin", "Smith", 13));
people.Add(new Person("Steve", "Long", 23));
people.Add(new Person("Justin", "Short", 45));
people.Add(new Person("Karigan", "Patterson", 16));
Now that we have data in the collection, we want to search it. First I will show you the delegate method written inline.
// Find one result
Person p = people.Find(delegate(Person p1)
{
    if (p1.LastName == "Long")
        return true;
    else
        return false;
});
Console.WriteLine("{0}, {1} - {2}", p.LastName, p.FirstName, p.Age);
As you can see, we have created a delegate using the delegate keyword, and it takes an object of the same data type as our list as a parameter. Since we are doing a simple find operation, we are looking for just one person with a last name of Long.
This statement is a little long but not too bad in the grand scheme of things. However, what if you were writing an application where you had to do a lot of finds based on a single property? That would become very tedious, and you would end up with a lot of repetitive code.
So let's build a class that we can use to help us with this.
using System.Reflection; // required for PropertyInfo

public class SimpleFind<T>
{
    private string _property;
    private object _valueToFind;
    private PropertyInfo _p;

    public SimpleFind(string property, object value)
    {
        this._property = property;
        this._valueToFind = value;
        this._p = typeof(T).GetProperty(this._property);
        Protect.Against<NullReferenceException>(this._p == null,
            string.Format("Property {0} not found on type {1}", this._property, typeof(T).FullName));
    }

    public bool Find(T t)
    {
        try
        {
            if (this._p.GetValue(t, null).Equals(this._valueToFind))
            {
                return true;
            }
            else
            {
                return false;
            }
        }
        catch
        {
            return false;
        }
    }
}
So let’s go over the class itself. First, notice that the class uses generics in its definition; the type you specify needs to match the type you use in your list. Next is the constructor. Since the delegate must take the type of object that matches your data type, we pass the information we need in through the constructor. In this case, because we are trying to find data based on a property value, we specify the property name we are going to search against and the value we want to search for. The constructor also grabs the property via reflection and makes sure it actually exists before we use it; the Protect object was discussed here. The Find method then does the real work, and as you will notice, the code pretty much matches what we did earlier.
Now to see the difference:
// Find one result
Person p = people.Find(new SimpleFind<Person>("LastName", "Long").Find);
Console.WriteLine("{0}, {1} - {2}", p.LastName, p.FirstName, p.Age);
As you can see, it's a bit shorter and highly reusable. So what if you wanted to find more than one record? The same class can be used again.
// Find all matching results
List<Person> p2 = people.FindAll(new SimpleFind<Person>("LastName", "Smith").Find);
Console.WriteLine(p2.Count.ToString());
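And because the property name and value are just constructor arguments, the same helper works against any property. A quick illustrative example (the age search is mine, not from the original post) using the data added earlier:

// Search on a different property with the same helper.
List<Person> sixteen = people.FindAll(new SimpleFind<Person>("Age", 16).Find);
Console.WriteLine(sixteen.Count.ToString()); // 1 (Karigan Patterson)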
Well, that's it. It's a pretty straightforward process, but I find it very useful and easy to use, and the nice thing is that since it's an object you can reuse it with just a little bit of tweaking.
Enjoy.
*** UPDATE ***
After doing some more testing with this class, I found that I needed to move the PropertyInfo lookup up into the constructor to improve performance and reduce overhead. I also moved the Protect statement to the constructor, because an error there was causing the system to just return false and keep trying to process the find when it should stop. Last but not least, the if condition in the Find method was changed to just use the .Equals method.
Exception Handling Basics
Posted on April 14th, 2009
Download the code Here.
It’s amazing to see how often exception handling is still not used properly after all these years that the .NET Framework has been out. Some of it can be attributed to the Internet: anybody can post information anywhere, it gets propagated through Google, and then upcoming developers find it and think that is how things are done. Of course, for those who understand exception handling, it's always obvious when you see a rookie mistake. I don't think we should always blame the rookie, since to this day people still post things with bad exception handling, just making the problem worse.
The goal of this post is to show a very simple sample of how you should code your exception handling, so that if you ever have to debug a problem you can get right to it instead of having to hunt for it.
I'm going to give you three examples of exception handling; two of them work correctly. The first one I will show is what you tend to find a lot, and it is the one that is wrong. I cannot say that rookies are the only ones that do this, either. There are plenty of very strong developers who still make this simple mistake. However, doing it right can mean a world of difference when you run into a production problem.
The wrong way:
public void ExceptionHandlingA()
{
    try
    {
        throw new ArgumentNullException("ExceptionA", "Test Exception Call from DoStuff.ExceptionHandlingA");
    }
    catch (Exception ex)
    {
        throw ex;
    }
}
Now you may tell yourself that there is nothing wrong with this code. The problem is both subtle and devious, and it all hinges on the catch block. Just looking at it you think it bubbles the exception up. It does bubble it up, but because it uses throw ex, the exception's stack trace is reset at that point, so the information about where it was originally thrown is lost. With only a single layer of code this would never be obvious. However, when this method is called from other classes, which are in turn called by other classes, is where you start to run into your problem.
Next I'll show you two proper methods that do exception handling, and then walk you through creating the code you need to see the very subtle difference in the result: namely, the stack trace.
Good Exceptions:
public void ExceptionHandlingB()
{
    try
    {
        throw new ArgumentNullException("ExceptionB", "Test Exception Call from DoStuff.ExceptionHandlingB");
    }
    catch (Exception ex)
    {
        throw;
    }
}

public void ExceptionHandlingC()
{
    try
    {
        throw new ArgumentNullException("ExceptionC", "Test Exception Call from DoStuff.ExceptionHandlingC");
    }
    catch
    {
        throw;
    }
}
If you look at this code you will see that the only difference is that in the catch block we use only the keyword throw. This is very important and something you should understand. When you call just throw, the compiler emits a rethrow in the IL, which preserves the original exception and its stack trace, whereas throw ex throws the caught exception as if it originated at that point, resetting the stack trace.
In case you don't believe what I've just said, here is the IL from the bad exception handler:
.method public hidebysig instance void ExceptionHandlingA() cil managed
{
    // Code size 22 (0x16)
    .maxstack 3
    .locals init ([0] class [mscorlib]System.Exception ex)
    IL_0000: nop
    .try
    {
        IL_0001: nop
        IL_0002: ldstr "ExceptionA"
        IL_0007: ldstr "Test Exception Call from DoStuff.ExceptionHandlingA"
        IL_000c: newobj instance void [mscorlib]System.ArgumentNullException::.ctor(string, string)
        IL_0011: throw
    } // end .try
    catch [mscorlib]System.Exception
    {
        IL_0012: stloc.0
        IL_0013: nop
        IL_0014: ldloc.0
        IL_0015: throw
    } // end handler
} // end of method DoStuff::ExceptionHandlingA
and the IL from the good exception handler:
.method public hidebysig instance void ExceptionHandlingC() cil managed
{
    // Code size 22 (0x16)
    .maxstack 3
    IL_0000: nop
    .try
    {
        IL_0001: nop
        IL_0002: ldstr "ExceptionC"
        IL_0007: ldstr "Test Exception Call from DoStuff.ExceptionHandlingC"
        IL_000c: newobj instance void [mscorlib]System.ArgumentNullException::.ctor(string, string)
        IL_0011: throw
    } // end .try
    catch [mscorlib]System.Object
    {
        IL_0012: pop
        IL_0013: nop
        IL_0014: rethrow
    } // end handler
} // end of method DoStuff::ExceptionHandlingC
Notice towards the end of the IL that the good handler uses the rethrow instruction where the bad handler does not. This is very important, as rethrow allows your exception object to retain the full stack trace. Stack traces are what help you find where your problem is; the better they are, the faster you can find your problem and fix it.
Now to show you what we have done so you can see it with your own eyes. Create a new Visual Studio console project, or just put this code in Notepad and compile using the SDK, whatever your preference. I've included a VS2008 version of the project for download if you just want to download and run it.
Take the three methods from above and put them in a class called DoStuff, like so:
using System;
using System.Collections.Generic;
using System.Text;

namespace EffectiveErrorHandling
{
    class DoStuff
    {
        public void ExceptionHandlingA()
        {
            try
            {
                throw new ArgumentNullException("ExceptionA", "Test Exception Call from DoStuff.ExceptionHandlingA");
            }
            catch (Exception ex)
            {
                throw ex;
            }
        }

        public void ExceptionHandlingB()
        {
            try
            {
                throw new ArgumentNullException("ExceptionB", "Test Exception Call from DoStuff.ExceptionHandlingB");
            }
            catch (Exception ex)
            {
                throw;
            }
        }

        public void ExceptionHandlingC()
        {
            try
            {
                throw new ArgumentNullException("ExceptionC", "Test Exception Call from DoStuff.ExceptionHandlingC");
            }
            catch
            {
                throw;
            }
        }
    }
}
Next we are going to create a second layer that calls this class, using the same exception handling constructs for each method, so we have a layer that can absorb our exceptions and show the loss of the stack trace.
using System;
using System.Collections.Generic;
using System.Text;

namespace EffectiveErrorHandling
{
    class CallStuff
    {
        public void CallExceptionA()
        {
            try
            {
                DoStuff d = new DoStuff();
                d.ExceptionHandlingA();
            }
            catch (Exception ex)
            {
                throw ex;
            }
        }

        public void CallExceptionB()
        {
            try
            {
                DoStuff d = new DoStuff();
                d.ExceptionHandlingB();
            }
            catch (Exception ex)
            {
                throw;
            }
        }

        public void CallExceptionC()
        {
            try
            {
                DoStuff d = new DoStuff();
                d.ExceptionHandlingC();
            }
            catch
            {
                throw;
            }
        }
    }
}
Next put the following code in your Program.cs file:
using System;
using System.Collections.Generic;
using System.Text;

namespace EffectiveErrorHandling
{
    class Program
    {
        static void Main(string[] args)
        {
            CallStuff c = new CallStuff();

            Console.WriteLine("Calling Exception Type A");
            try
            {
                c.CallExceptionA();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message + Environment.NewLine + ex.StackTrace);
            }

            Console.WriteLine("");
            Console.WriteLine("");
            Console.WriteLine("Calling Exception Type B");
            try
            {
                c.CallExceptionB();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message + Environment.NewLine + ex.StackTrace);
            }

            Console.WriteLine("");
            Console.WriteLine("");
            Console.WriteLine("Calling Exception Type C");
            try
            {
                c.CallExceptionC();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message + Environment.NewLine + ex.StackTrace);
            }
        }
    }
}
As you can see, we are handling the exception that is raised in each case and outputting it to the console.
Download the code Here.
Below is the output from the program when you run it. As you can see, the first exception has one less level of detail, because it does not rethrow the exception but instead re-throws it in a way that resets the stack trace as it bubbles up. So as you progress through your development career, please take this to heart. You'll thank me for it when you run up against a bug you could have found and fixed quickly if the right exception handling had been in place.
Output from the console:
Calling Exception Type A
Test Exception Call from DoStuff.ExceptionHandlingA
Parameter name: ExceptionA
   at EffectiveErrorHandling.CallStuff.CallExceptionA() in G:\a951072\My Documents\Visual Studio 2008\Projects\EffectiveErrorHandling\EffectiveErrorHandling\CallStuff.cs:line 18
   at EffectiveErrorHandling.Program.Main(String[] args) in G:\a951072\My Documents\Visual Studio 2008\Projects\EffectiveErrorHandling\EffectiveErrorHandling\Program.cs:line 16

Calling Exception Type B
Test Exception Call from DoStuff.ExceptionHandlingB
Parameter name: ExceptionB
   at EffectiveErrorHandling.DoStuff.ExceptionHandlingB() in G:\a951072\My Documents\Visual Studio 2008\Projects\EffectiveErrorHandling\EffectiveErrorHandling\DoStuff.cs:line 29
   at EffectiveErrorHandling.CallStuff.CallExceptionB() in G:\a951072\My Documents\Visual Studio 2008\Projects\EffectiveErrorHandling\EffectiveErrorHandling\CallStuff.cs:line 31
   at EffectiveErrorHandling.Program.Main(String[] args) in G:\a951072\My Documents\Visual Studio 2008\Projects\EffectiveErrorHandling\EffectiveErrorHandling\Program.cs:line 27

Calling Exception Type C
Test Exception Call from DoStuff.ExceptionHandlingC
Parameter name: ExceptionC
   at EffectiveErrorHandling.DoStuff.ExceptionHandlingC() in G:\a951072\My Documents\Visual Studio 2008\Projects\EffectiveErrorHandling\EffectiveErrorHandling\DoStuff.cs:line 42
   at EffectiveErrorHandling.CallStuff.CallExceptionC() in G:\a951072\My Documents\Visual Studio 2008\Projects\EffectiveErrorHandling\EffectiveErrorHandling\CallStuff.cs:line 45
   at EffectiveErrorHandling.Program.Main(String[] args) in G:\a951072\My Documents\Visual Studio 2008\Projects\EffectiveErrorHandling\EffectiveErrorHandling\Program.cs:line 38
Exception woes and the dreaded clr20r3 error
Posted on February 13th, 2009
Well, we got bit by this obscure bug as well. As most of you who have encountered it have found out, there is not a lot of information out there on the web to explain what is going on. All we know is that it kills our app and we get no real information out of it.
So hopefully my information will shed some extra light on the subject. Each case is different, but from my research they are also very similar in their respective problems.
My scenario
Windows Service running on Windows 2003 server
Multi-threaded app (roughly 70 threads)
Lots of real-time messaging using Tibco EMS
nHibernate database layer
.Net 2.0 framework
The problem
Without reason or warning the Windows service would crash, without any notifications going out, and our users would be dead in the water.
What we tried:
- First, we tried adding the AppDomain.UnhandledException logic. Bam – NO good.
- Next, we tried adding the .NET legacy exception handling tag to our app.config file. Bam – NO good. Not only that, but we could not even start our service properly.
- Next, we called MS. Opened a case and got some tools to try and capture some mem dumps if we could replicate the server failure in our dev environment. This might have worked but we fixed the problem before we could find out.
- Lastly, we reviewed a lot of our code to try and find any leaks, holes, etc. that could cause a critical thread to fail and bring the service down. Honestly, we got lucky. It just so happens we found an area that did not look right, fixed the code, and Bam – no more exceptions.
What actually caused the problem
In as few lines as possible, here is what caused the problem. This method is a shortened and modified version just showcasing the issue; it is also the method that was given to the thread to run once the thread was started.
What we had in our code
public void RunMe()
{
    try
    {
        List<SomeObject> data = new List<SomeObject>();
        data.Add(new SomeObject());
        data.Add(new SomeObject());
        data.Add(new SomeObject());

        foreach (SomeObject o in data)
        {
            DoSomethingToObject(o);
            data.Remove(o); // modifying the collection while enumerating it
        }
    }
    catch (ThreadAbortException ex)
    {
        Log(ex);
    }
}
So basically we had a bit of logic that had data in a List collection. We were enumerating over it and, once an item was processed, removing it from the collection. We also had a try..catch block to catch a ThreadAbortException if one occurred.
Why it blew up
Well, looking at the code you'd think that should not blow up. However, if you stop and think about it for a second you will see what happened. If you guessed an InvalidOperationException… here's your kewpie doll. 🙂 You guessed it: we were throwing an exception because we were removing items from the collection while we were enumerating it. It does not matter if you have a lock or anything else, this is just a no-no. Now, if we had used a for loop instead of a foreach and iterated in reverse, that would have been fine. However, the rules around IEnumerable don't allow what we were doing.
So we threw the InvalidOperationException, and since we were in a thread and our try..catch handler was not catching generic exceptions, it ended up being an unhandled thread exception, which then bubbles up and bubbles up and bubbles up… you get the point. Even though we had try..catch handlers at the service layers, it does not matter: this type of unhandled exception will just shut you down. It won't even fire the AppDomain unhandled exception handler.
How we fixed it
Obviously we had to fix the foreach loop. However, the biggest thing we did to fix the problem was to actually catch the exceptions and handle them. Once we handled the exception it would still cause our thread to shut down (until we fixed the underlying issue), but our service stayed up and there were no more clr20r3 errors.
From everything I have found, the crux of the clr20r3 error is exception handling. Make sure that in your threads you have a generic exception handler, and log the exception to a log file, the event log, a database, or wherever else you need to, so you can actually get the answer you need and go fix the underlying problem.
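As a rough sketch of that advice (the class, the event log source name, and the empty work section are placeholders, not code from our actual service), every thread entry point gets its own catch-all so a failure is logged instead of silently taking the process down:

using System;
using System.Diagnostics;
using System.Threading;

class Worker
{
    public void Start()
    {
        Thread t = new Thread(new ThreadStart(this.Run));
        t.IsBackground = true;
        t.Start();
    }

    private void Run()
    {
        try
        {
            // ... the thread's real work goes here ...
        }
        catch (Exception ex)
        {
            // Log it somewhere visible instead of letting an unhandled exception kill the process.
            EventLog.WriteEntry("MyServiceSource", ex.ToString(), EventLogEntryType.Error);
        }
    }
}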
The final solution
In case you wanted to see the code that fixed the problem here it is:
public void RunMe()
{
    try
    {
        List<SomeObject> data = new List<SomeObject>();
        data.Add(new SomeObject());
        data.Add(new SomeObject());
        data.Add(new SomeObject());

        // Iterate in reverse with a for loop so items can be removed safely.
        for (int i = data.Count - 1; i >= 0; i--)
        {
            DoSomethingToObject(data[i]);
            data.RemoveAt(i);
        }
    }
    catch (Exception ex)
    {
        LogToEventLog(ex);
    }
}
nAnt: The master build file
Posted on February 13th, 2009
If you have been following along with the other nAnt articles, this is the final step of setting up your base project. From here you will be able to add additional projects, processes, etc. Then, if you have a CI server (CruiseControl.NET, TeamCity, or some other solution), you are ready to go.
Ok so without further ado here is the master build file for our sample project.
<?xml version="1.0" encoding='iso-8859-1' ?> <project default="build" xmlns="http://nant.sf.net/release/0.85/nant.xsd" > <property name="root.dir" value="${directory::get-current-directory()}" /> <include buildfile="${root.dir}\common.xml" /> <fileset id="buildfiles.all"> <include name="${root.dir}/NantSupportLibrary/default.build"/> <include name="${root.dir}/NantBuildSample/default.build"/> </fileset> <target name="build" depends="cleanup common.init display-current-build-info"> <echo message="${directory::get-current-directory()}"/> <echo message="${root.dir}"/> <echo message="${root.dir}/common.xml"/> <nant> <buildfiles refid="buildfiles.all" /> </nant> </target> </project>
As you can see, there is not a whole lot in this file that is different from a standard build file. We will start at the top.
The very first thing we do is set our “root.dir” property. I did this one a little differently to show you another way to do things: instead of hard-coding a relative path (the way the project build files set value="../."), it uses the current working directory. Either approach will work in this case, because this file sits in the root of our source tree, so the current working directory is the root.
Next as with the other files we include our common.xml file.
The next part is a little new. Since this is considered a master build file, we are going to set up a fileset that tells the build where all the project-level build files are.
Important: the order in which you include the files matters. They are executed from top to bottom, so make sure any dependencies are figured out and built in the proper order.
Last but not least we have our build target. This is the sucker that starts all the work. As you can see, it has several dependencies. The first is a “cleanup” step, which cleans up our output folder so we always get a nice clean build. Next it runs “common.init” again to make sure our output folders are in place and that we have our project sources defined (needed to compile the code). Lastly we call the “display-current-build-info” target, which displays some of our settings, the framework version, etc. You can add to this output in the common.xml file if you feel the need.
Once all those have run we get into the core of the task. Here we essentially have nAnt spawn instances of itself to run all the build files we defined up above. It loops through that list and runs each build file sequentially.
Well, that's about it. You're ready to run the thing. If you downloaded the zip file you can just run nant in the root source folder and it should spit everything out for you.
After it’s done running you should have your output in the output folder.
I hope you enjoyed these posts and that they help you if you plan on using nAnt as a build tool. In the future I will add more articles that build off of these: things like adding unit testing, calling external objects, using extensions like NAntContrib, etc.
If there is anything you’d like to see drop a line and let me know.
Download the zip sample project.
nAnt: Adding a Visual Studio ItemTemplate for your project build file
Posted on February 13th, 2009
Download nAnt Project Item Template zip file.
If you want to know more about building Visual Studio templates check out MSDN.
Believe it or not, this is actually pretty easy. I took the basic project build file from our project build file article and ran the File -> Export Template command, then followed the wizard. Afterwards I extracted the zip file, put in some of the replace parameters (found on MSDN), and voilà! A nice little template is created. Just update the files in the zip file and you're done. Right-click, add a new item, and away you go.
If you download the file you want to copy it to the following location:
Just drop it right there in the root. Once it is there, open up Visual Studio, open a project, and add a new item; at the bottom of the list you should see the following:
nAnt: The project build file
Posted on February 13th, 2009
Next in the series on nAnt is the process of creating the individual build files for each project. This assumes, of course, that you want a DLL, EXE, or what have you for each project in your Visual Studio solution. If you just want everything to compile into one DLL or EXE, you can do that too with some minor adjustments.
This project file assumes that you have the common.xml file we built in the last nAnt article. If you do not have it, that is OK; it is included with the sample code for this article.
First things first, let's create our build file by adding a new XML file to our project.
Below is a basic project file we will use.
<?xml version="1.0" encoding='iso-8859-1' ?> <project name="NantSupportLibrary" default="build" xmlns="http://nant.sf.net/release/0.85/nant.xsd" > <property name="root.dir" value="../." /> <include buildfile="${root.dir}/common.xml" /> <target name="init" depends="common.init"> <assemblyfileset id="project.references" basedir="${build.output.dir}"> <include name="System.dll" /> <include name="System.Core.dll" /> </assemblyfileset> <resourcefileset id="project.resources"> </resourcefileset> </target> <target name="build" description="Build ${project::get-name()}" depends="init common.compile.dll.cs" > </target> </project>
You will notice that the first thing we have is the project tag. This is the root-level element. Just like the common.xml file, you need to set the name and default target to run for the project. Also make sure you have the xmlns attribute set correctly for the version of nAnt you are running.
The very first thing we need to do inside it is set our root.dir property. This property grounds everything to a consistent path structure and makes having a common file and build process much easier. Its value should be the relative path needed to get back to the root of your source tree: if the project is down one level you use "..", if it's two levels "../..", and so on.
Next we include our common.xml build file. The syntax, as you can see, is very straightforward. If you have not seen it before, anything enclosed in ${} is a special instruction telling nAnt to do something; it is effectively a token, variable, or function call. In the case of this include line, it concatenates the value of the root.dir property with "/common.xml".
Once we have that, our project file has access to all the targets we set up in the common file. So let's set up an "init" target to initialize the filesets and other various things we need for this particular project. Since this project is pretty basic, we do not have a whole lot in here. First we depend upon the "common.init" target from our common build file, which defines our fileset of source files. Next we set up our assembly and resource filesets. In the assembly fileset you add a line for each reference your project needs; if you have project references to other projects in your solution, add the appropriate project output reference here (the sample includes one so you can see what I mean). If this were a Windows Forms app, you would include your resx files in the resourcefileset element.
Last but not least we create our build target. This is what will be called when we run this build file. As you can see, it depends on the "init" target we just created and on "common.compile.dll.cs" from our common.xml build file. The description attribute will output the project name from the top of the build file.
One thing to remember: the name attribute of the project element must be the name of the compiled output you want. So if you want "MyApp.Is.Awesome.dll", your project name would be "MyApp.Is.Awesome".
In the next article we will walk through creating the master build file, which actually runs all this stuff and provides what you want: a working build of a project.
Download the sample code
nAnt: Designing your common build file (Updated)
Posted on February 2nd, 2009
In this article we will walk through designing a basic common build file. If done properly, it's one you can carry with you to all your projects and it will help you get your automated builds kicked off pretty quickly.
Download the Common.xml file.
The Basics
The first thing we need to do is make sure you understand the basics of nAnt. At its heart it is just an XML document, so you always need to be mindful of the strict nature of XML; case is very important. Beyond that you have the next two items: targets and properties. Through these two constructs you can do just about anything. From within targets you call into other tasks like csc, vbc, or even the NAntContrib and custom tasks you can build yourself.
The basic structure of a property looks like this:
<property name="MyPropName" value="MyValue"/>
As you can see, it's pretty straightforward at this level. There are other features of properties we will cover a bit later. These objects will be very important in our construction of a common build file.
Next we have Targets. This is the meat and potatoes of nAnt. Without it you don’t really have anything. The basic structure of a Target looks like this:
<target name="MyTarget"> <!-- do something here --> </target>
As you can see, there is not much here either. Where its power comes in is the depends attribute, which allows you to chain multiple targets together into one call. There are other tasks and attributes that you can apply to your build file; if you want to see them all, go to the nAnt SourceForge site.
I said it was the basics, and that is just about it. The rest will depend upon what you are trying to do. Our main focus in this article will be building C# projects; however, if you need to compile VB or some other language, it would be just as easy to apply these concepts and change the tasks inside the build targets.
Basic Properties
Next we will look at setting up some of the basic properties that will be used throughout the build files, from the common file all the way to your individual build files. The first thing you need to address, though, is your folder layout: how you are going to structure your project(s) on disk. This makes a difference in how you configure your project files and, to a small degree, your common file. The structure we will be using is one that I've used for a while; it works for anything from a single project to multiple projects. Below is a screenshot of the basic folder structure. There is nothing in the child folders as of yet.
The common.xml file we will be building goes into the build folder. As you add projects to your solution, each project should have its own build file in the same folder as the project file. In addition, if you have multiple distributions, say a web site, desktop component, and Windows service as part of your deployment, you can have a root distrib build file; it can be stored in the build folder or at the root of your folder structure. These extra build files can be used to control the build for each individual section of your build output.
Now let’s start by creating our first couple of properties.
<?xml version="1.0"?> <project xmlns="http://nant.sf.net/release/0.85-rc3/nant.xsd"> <property name="build.output.dir" value="${root.dir}\output" /> <property name="build.dir" value="${root.dir}\build" /> <property name="lib.dir" value="${root.dir}\build\references" /> <property name="build.debug" value="true" overwrite="false"/> <property name="build.optimize" value="true"/> <property name="build.rebuild" value="true"/> </project>
Now you will probably ask yourself what the ${root.dir} thing is. Well, it's essentially another property. The syntax to use a property, function, or other defined item is ${name}, for example ${root.dir} or ${directory::get-current-directory()}.
Most of these property names make perfect sense. We are essentially setting up some variables that will hold all our path data, so when it comes time to compile or run any ancillary tasks we are covered and know where everything is.
The references folder under the build folder is for third-party DLLs. This is where I'd put things like log4net, NHibernate, vendor controls, Microsoft assemblies, and any other objects my project needs in order to compile. You can also move this to a root folder called something like vendor or 3rdParty (I've used both before). It's really up to you; just change the property value to match whatever folder structure you plan on using.
Build Information
The next part of the common build file is a target we will call to output some helpful information you can use in case you have a build failure: things like SDK paths, framework versions, etc. This can help you debug the case where, say, you are in a web farm and one of the machines does not have the latest .NET Framework and your build keeps failing. With this data you will be able to quickly see what versions are present and adjust accordingly.
<target name="display-current-build-info"> <echo message=""/> <echo message="----------------------------------------------------------" /> <echo message=" ${framework::get-description(framework::get-target-framework())}" /> <echo message="----------------------------------------------------------" /> <echo message="" /> <echo message="framework : ${framework::get-target-framework()}" /> <echo message="description : ${framework::get-description(framework::get-target-framework())}" /> <echo message="sdk directory : ${framework::get-sdk-directory(framework::get-target-framework())}" /> <echo message="framework directory : ${framework::get-framework-directory(framework::get-target-framework())}" /> <echo message="assembly directory : ${framework::get-assembly-directory(framework::get-target-framework())}" /> <echo message="runtime engine : ${framework::get-runtime-engine(framework::get-target-framework())}" /> <echo message="" /> <echo message="----------------------------------------------------------" /> <echo message="Current Build Settings"/> <echo message="----------------------------------------------------------" /> <echo message="build.debug=${build.debug}"/> <echo message="build.rebuild=${build.rebuild}"/> <echo message="build.optimize=${build.optimize}"/> <echo message="lib.dir=${lib.dir}"/> <echo message="build.output.dir=${build.output.dir}" /> <echo message="build.dir=${build.dir}"/> <echo message=""/> </target>
Yes, there are a lot of echo message calls. 🙂 The available functions can be found on the nAnt SourceForge site. Basically what we have done is make use of some of the core functions nAnt provides to display a bunch of helpful information. You can break this down into smaller targets, say one with build-specific and one with framework-specific data, or you could ignore this altogether. It is helpful, though, so I'd urge you to keep the framework information at the least.
Build Initialization
Next we are going to create the initialization tasks. This is where we set up basic things like making sure our output folder is clean, fileset definitions, and generally anything you want to set up every time you do a build.
<target name="cleanup" if="${directory::exists(build.output.dir)}"> <!-- do any folder clean up before we start --> <delete dir="${build.output.dir}" failonerror="false"/> </target> <target name="common.init"> <fileset id="project.sources.cs"> <include name="**/*.cs" /> </fileset> <fileset id="project.sources.vb"> <include name="**/*.vb" /> </fileset> <mkdir dir="${build.output.dir}" if="${ not directory::exists(build.output.dir)}"/> </target>
You will notice that we have a couple of new items introduced in this section, namely the depends attribute and the if condition. First let's cover the if condition. Just about every task (at least every one I've looked at) supports the if and unless conditionals. This is a great way to control your tasks. In the case of our cleanup task, what would happen if the output directory did not exist? We'd fail the build. We don't want that to happen, so we add an if condition to make sure the directory exists first. If it does, we run the delete; otherwise we skip it altogether.
Next, the depends list. This is an important attribute to understand; namely, you need to grasp the chaining ability and the order of execution of the targets. You can add multiple targets as dependencies as long as you put a space between them. They are executed in left-to-right order, and if a target in the chain has its own dependencies, those have to finish before the next one in your parent chain starts.
Next, and an important one: we have set up a fileset that defines the source files we want compiled. We will use its id to tell the csc task what to compile. This can be very handy. We will do something similar for resources and references, but those are handled in the actual project build files since they vary from project to project.
For the fileset, I've included a sample of how you can set up your common build file to support both VB and C#. Do you need to do this? No. However, if you know you need to support multiple languages within your build, you might want to set it up this way. All you would need to do is set up multiple compile targets with the language extension as part of the target name. We will look at this in the next section.
Build Targets
Next we are going to set up the build targets. These are the things we need to actually compile our code, which in this case means the csc task. Since this is a common.xml file to be used with multiple projects, we will set up the targets needed to compile a DLL, an EXE, and a web DLL. The web one is not really any different from the DLL target, but it allows you to add custom tasks that you may or may not need without impacting the other targets.
<target name="common.compile.dll.cs"> <csc target="library" debug="${build.debug}" optimize="${build.optimize}" warnaserror="${build.warnaserror}" output="${build.output.dir}/${project::get-name()}.dll" doc="${build.output.dir}/${project::get-name()}.xml" rebuild="${build.rebuild}" > <nowarn> <warning number="1591" /> <!-- No XML comment for publicly visible member --> </nowarn> <sources refid="project.sources.cs" /> <references refid="project.references" /> <resources refid="project.resources" /> </csc> </target> <target name="common.compile.exe.cs"> <csc target="exe" debug="${build.debug}" optimize="${build.optimize}" warnaserror="${build.warnaserror}" output="${build.output.dir}/${project::get-name()}.exe" doc="${build.output.dir}/${project::get-name()}.xml" rebuild="${build.rebuild}" > <nowarn> <warning number="1591" /> <!-- No XML comment for publicly visible member --> </nowarn> <sources refid="project.sources.cs" /> <references refid="project.references" /> <resources refid="project.resources" /> </csc> </target> <target name="common.compile.dll.forweb.cs"> <csc target="library" debug="${build.debug}" optimize="${build.optimize}" warnaserror="${build.warnaserror}" output="${build.output.dir}/${project::get-name()}.dll" doc="${build.output.dir}/${project::get-name()}.xml" rebuild="${build.rebuild}" > <nowarn> <warning number="1591" /> <!-- No XML comment for publicly visible member --> </nowarn> <sources refid="project.sources.cs" /> <references refid="project.references" /> <resources refid="project.resources" /> </csc> </target>
As you can see, I targeted these at the .cs versions. If you only want to support one language, you can just drop that suffix from the target names. The last target is for the web. Remember, web compiles are pretty much just DLL compiles; however, if you needed to do any special copying or resource changes, you could do so in this target. By no means do you need to do this if you don't want to.
Conclusion
That just about does it for our common build file. As you can see, there really is not much to it, but it goes a long way toward creating consistent builds. In the next article we will look at the project build file and how to hook these two pieces together to create a full, working build.
As an exercise for the reader, you could look at adding tasks to execute unit testing, or even build your documentation output from the doc comments extracted during the build.
** Update **
Well, as sometimes happens, I fat-fingered some of my cut-paste-rebuild of my common file when preparing it for this article. As I used it for the project file article, I noticed a couple of things were missing. The project element was missing its name, default target, and xmlns attributes; it should look like this:
<project name="common" default="build" xmlns="http://nant.sf.net/release/0.85/nant.xsd">
The other thing I messed up was the dependency chain on the common.init target. It should not have had any dependencies.
nAnt: Getting your machine ready
Posted on January 27th, 2009
In this post we will get your PC ready to use nAnt. The examples and links are geared towards VS2008, but they also work for VS2005. VS2003 requires a bit more effort but can be handled in very similar ways.
- If you have not already done so go get nAnt. This example and all future ones will use version 0.85
- Extract the file to a location on your c: drive like c:\tools\nant
- Add the location of the bin folder to your path. So in this example it would be: c:\tools\nant\bin or c:\tools\nant\0.85\bin depending on how you extracted your files.
- Copy the file nant.xsd from the c:\tools\nant\schemas folder to the following location(s):
- C:\Program Files\Microsoft Visual Studio 8\Xml\Schemas — for visual studio 2005
- C:\Program Files\Microsoft Visual Studio 9.0\Xml\Schemas — for visual studio 2008
Now that you have done all that, we are going to open Visual Studio. This next bit works the same in either 2005 or 2008. Once you have Visual Studio open:
- Go to the menu: Tools -> Options
- Navigate in the left hand tree to the Text Editor section
- Select the File Extensions item. Your options dialog should look like this:
- In the Extension box, enter the word: build. Note: I use build (rather than xml) as the extension for my nAnt build files so I know what they are at a glance.
- For this option dialog you do not put any periods in the extension box.
- Next select XML Editor from the list and then hit the Add button. Your options dialog should then look like this:
Now select OK and off we go.
That's it for the setup. If you want to test it, add a new XML file to a project in Visual Studio and call it something like default.build. You should then get IntelliSense for your build files; the root-level element will be project. And because we added the file extension, you also get the auto-complete benefits of the XML editor: things like adding the closing tag, quotes, etc.
That takes care of the first part of our nAnt setup. In the next article I'll walk you through adding item and project templates that you can use and customize for easily adding build files, and projects that already have build files, to your solution. After that we will get into the usage of nAnt and setting up a common build file that you can reuse from project to project.
Building your code or CI and you
Posted on January 27th, 2009
I've been seeing a lot of material on the web lately about continuous integration (CI), automated builds, build tools, unit testing, etc. Figured maybe it's time I start to post about some of this stuff. I've been using CI in various shapes and sizes for many years, from custom-rolled solutions to full commercial packages. As such, I will be posting many articles around CI, builds, unit testing, etc. to help people who maybe have never seen it before, or, if I'm lucky, to answer a problem you have been having.
First, let me say that I don't care if you are a single-person shop, a team, a department, or a whole company: you NEED to be using some form of CI. There are free options out there like CruiseControl.NET and TeamCity (which is now free for limited installations). I personally have set up a TeamCity installation on my big developer desktop, and I'm just a one-person show for the stuff I do at home.
So, what is CI and why should you use it? CI is a means by which you can have an autonomous process running somewhere, on your machine, a server, a cloud computing platform, a server farm, you name it, that will take your code and compile it. Big whoop-dee-do, you say, I can do that by just building from my desktop in my IDE. OK, that's great if you are a one-person shop. But what happens if you don't get the latest from your source control? Sure, it builds with the code you have, but your buddy in the cube across from you just changed everything. Now your build won't work, but you don't find out until later when somebody says something. This is where CI comes into play. It does not remove the build check from your box; you should always do that. Where CI comes in is as a sanity check and a means to automate tedious tasks. Using the scenario above, what happens when two people check in code at the same time, such that you both think everything is working, but then in the morning you get latest and bam, nothing compiles? Wouldn't it have been nice to have something email you telling you it broke, and maybe even why? What about unit tests: do you use them, run them, all the time, some of the time? You could have all of this automated for you up front.
Setting up a CI system is an upfront task. Yes, it takes some time; yes, there could be integration issues with your code base; yes, you may need to change the way you build code. But in the end it's all worth it. Once you get onto a CI system and everything is up and running, you will start to get a sense of peace. Not only that, but you will quickly come to rely upon it. It becomes that great little tool you wish you had found sooner.
Now the catch: if you don't use a source control provider, CI won't do much for you. You really should be using source control. This falls into the you MUST do this category. Again, I don't care if you are just one person or a whole company: you NEED source control.
Why is this important if you are a single person? Well, what happens when you inadvertently delete a folder with code and you had no backup? Come on, how many tech people do you know who actively back up their stuff? If it's not automated, we don't usually do it. Let's even say you are working on some project that you might want to sell. How do you know you have everything? Just because your folder is there doesn't count. What happens if you need somebody to help you code the app, so you just went from being a one-person show to a small team? If you have source control, you're golden: just give them access and away you go. It is an important process, and it does not need to be used strictly for source code. I've used it for Word docs so I can go back and pull, say, version 1 of a requirements spec to show the business unit or partner how things have changed over the course of a year or even a month. You just never know.
CI needs source control. It provides a means by which you can have a third party verify your code base. In a single-person shop it can tell you whether you have really checked in all your code. So let's say you, the one-man team, are working on two or more computers. If you have source control, it's easy to share all your code across the wire and know that you have everything, because your CI build works from what is in the repository.
Another good use of CI is that most tools let you set up scheduled builds, like a nightly build. What if you have a project where you want to provide a nightly build to people? You could set up your build script to take all your code, build it, and zip it; the CI server can do this for you. So no late nights waiting for people to finish. You can even have the CI server send you emails when it succeeds or fails.
The advantages of running CI and source control are too many to count. If you haven't ever used them, I suggest you do so. Go get Subversion and TeamCity at the minimum and install them. Work through them, play with them, and use them. They will save you time, money, and effort in the end.
In the next posts I hope to show you how to get a Subversion installation up and running on Windows, along with configuring TeamCity to use the same setup. I'll also start posting some topics on using nAnt and unit testing.
The more we can test and automate our builds and deployments, the more time we can spend actually coding our solutions.