This post has been imported from the old blog and has not yet been converted to the new syntax.
Normally, a Windows service is used to provide a system service to other applications, an antivirus service for example. The usual practice for a business application providing a service would be to create a Windows Forms application that runs on the desktop of a server: a dedicated account logs in to the server, starts the application, and locks the server, at which point the application provides its service to other systems. This is because a business application normally isn’t part of the operating system infrastructure, while all Windows services are.

For this project, however, Windows Services were used in order to try out the technology. And I have to admit, after having tried it, that it brings along more problems than it solves. Since some things you have to learn from experience, this was a valuable lesson for future, similar projects.

To create a new Windows Service, a template can be used from Visual Studio when creating a new project. This provides a starting class with the two most important methods, OnStart and OnStop, which Windows calls when the service is started or stopped.

However, these methods have to return within a reasonable time, being 30 seconds; otherwise the Service Control Manager reports an error. It is quite possible that the initialization of a service takes longer than that. To solve this, a Timer was added to the project, with an interval of 10 milliseconds, initially disabled, and with a method listening for its Elapsed event. The OnStart method simply started this timer and nothing more. This caused the service to start immediately, while it had all the time it needed to perform its initialization inside the Elapsed event handler.

[csharp]
protected override void OnStart(string[] args) {
    // Only enable the initialization timer, so OnStart returns immediately.
    this.serviceTimer.Enabled = true;
}
[/csharp]
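
For completeness, a minimal sketch of how the surrounding pieces could look. Only serviceTimer appears in the original code; the class name, handler name and constructor wiring below are my own assumptions about the rest of the service.

[csharp]
using System.ServiceProcess;
using System.Timers;

public class PlayerService : ServiceBase {
    // 10 ms interval: fires almost immediately after OnStart has returned.
    private Timer serviceTimer;

    public PlayerService() {
        this.serviceTimer = new Timer(10);
        this.serviceTimer.AutoReset = false;  // run the initialization only once
        this.serviceTimer.Enabled = false;    // started from OnStart
        this.serviceTimer.Elapsed += new ElapsedEventHandler(this.OnServiceTimerElapsed);
    }

    protected override void OnStart(string[] args) {
        // Returns right away, keeping the Service Control Manager happy.
        this.serviceTimer.Enabled = true;
    }

    private void OnServiceTimerElapsed(object sender, ElapsedEventArgs e) {
        // Perform the lengthy initialization here, outside of OnStart.
    }
}
[/csharp]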

After this, the Windows Service could be coded just like you would code anything else. When everything was done, it was time to add an installer to install the service into the system. To do this, a new class had to be added to the project, marked with the RunInstaller attribute set to true, inheriting from Installer and including the following using statements:

[csharp]
using System;
using System.ComponentModel;
using System.ServiceProcess;
using System.Configuration.Install;

namespace MediaService.Player {
    [RunInstaller(true)]
    public class PlayerServiceInstaller : Installer {
[/csharp]

The installer itself had to be configured in the constructor, with the following code:

[csharp]
private ServiceInstaller PlayerInstaller;
private ServiceProcessInstaller PlayerProcessInstaller;

public PlayerServiceInstaller() {
    // Describes the service itself: its name, display name and start type.
    this.PlayerInstaller = new ServiceInstaller();
    this.PlayerInstaller.StartType = ServiceStartMode.Manual;
    this.PlayerInstaller.ServiceName = "MediaServicePlayer";
    this.PlayerInstaller.DisplayName = "MediaService - Media Player";
    this.Installers.Add(this.PlayerInstaller);

    // Describes the process the service runs in, here under a user account.
    this.PlayerProcessInstaller = new ServiceProcessInstaller();
    this.PlayerProcessInstaller.Account = ServiceAccount.User;
    this.Installers.Add(this.PlayerProcessInstaller);
} /* PlayerServiceInstaller */
[/csharp]

At this point, the Windows Service was ready to be installed. To do this, the installutil utility had to be used. This tool is available from the Visual Studio .NET 2003 Command Prompt and takes the service executable as a parameter.
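
For example, assuming the service compiles to MediaService.Player.exe (the actual assembly name is an assumption on my part), installing and later uninstalling with the /u switch would look like this:

installutil MediaService.Player.exe
installutil /u MediaService.Player.exe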



During the installation, a log was generated and a dialog box appeared, allowing the account the service had to run under to be configured. After this, the service was successfully installed and accessible from the Services snap-in in MMC.
 
This post has been imported from the old blog and has not yet been converted to the new syntax.
To demonstrate a possible use of eID in Windows applications, I created a small client/server application. It consists of a central server which listens on a certain port for clients. After a client connects, it has to authenticate with the user’s eID card. The server then validates the certificate and checks whether it is in the list of users allowed to connect.

If everything is valid, the client can connect and chat with other clients. Every message sent to the server is signed by the client and validated, making sure each message arriving at the server originated from that user. The server then extracts the username from the certificate and uses this to broadcast the message to the other clients. Ultimately, this means users only have to insert their eID card and enter their PIN, and they are safely chatting away with others.

The steps used to authenticate a client are as follows:

  • The client asks for a logon.
  • The server sends a random challenge back to the client and remembers this value.
  • The client signs this challenge and sends the signed challenge back to the server, along with its certificate.
  • The server first checks whether the serial number of the certificate is in the database of allowed serials; otherwise the client is denied.
  • After this, it checks whether the certificate is still valid. If it is expired or revoked, the client is denied.
  • The server takes the public key from the certificate and verifies the client’s signature on the challenge.
  • If the signature is valid, the client really is who he claims to be and is allowed to log on. The client certificate is stored, both to verify future communication and to extract the client’s name to include in the broadcast messages.

These steps can be implemented with CAPICOM or WSE in C# to provide authentication with eID.
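
As an illustration of the server side of these steps, here is a minimal sketch using the .NET Framework’s System.Security.Cryptography classes rather than CAPICOM or WSE; the class and method names are mine, and the assumption is that the client signs a SHA-1 hash of the challenge with its RSA authentication key.

[csharp]
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

public static class ChallengeVerifier {
    // Generates the random challenge the server remembers for a connecting client.
    public static byte[] CreateChallenge(int length) {
        byte[] challenge = new byte[length];
        new RNGCryptoServiceProvider().GetBytes(challenge);
        return challenge;
    }

    // Returns true when 'signature' was made over 'challenge' with the private key
    // belonging to 'clientCertificate', i.e. the eID authentication certificate.
    public static bool VerifyChallenge(byte[] challenge, byte[] signature,
                                       X509Certificate2 clientCertificate) {
        RSACryptoServiceProvider publicKey =
            (RSACryptoServiceProvider)clientCertificate.PublicKey.Key;

        // Assumes the client signed a SHA-1 hash of the challenge.
        return publicKey.VerifyData(challenge, "SHA1", signature);
    }
}
[/csharp]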
 
This post has been imported from the old blog and has not yet been converted to the new syntax.
Another thing I had to do was a feasibility study on eID. This means I had to look into the technology and research what the possible uses are, whether they can be implemented and how they would have to be implemented.

The eID project is an initiative of the Belgian government to replace the current passport of every citizen with an eID card. This is a smartcard which looks like the current Belgian passport and contains certificates and identity data on its chip. The main functionalities of the eID card are data capture, authentication and digital signature.

Data capture is used in applications to read identity data from the card, such as name, address, gender and so on. This is an advantage for business applications that use this data, because it takes less time to enter the data and typing errors are avoided.

Authentication is done by using a certificate on the card. When the private key of the certificate is accessed, the eID middleware provided by the government shows a dialog asking for the PIN code of the card. Normally, only the owner of the card knows this code and can therefore allow access to the private key. Authentication could be used on websites, at physical locations, in client-server applications and more.

A digital signature can be used to prove that some content originates from a certain user and has not been modified along the way. Possible uses are signing an email or a document. With eID, a digital signature has the same legal value as a written one.



Every eID card contains an authentication certificate and a digital signature certificate, signed by the Citizen CA, which itself is signed by the Belgium Root CA.

When a citizen requests an eID card at his municipality, it gets registered at the population registry, which requests a new certificate. After this, the citizen can, for example, log on to a website, which will validate the certificate with the CA through the OCSP protocol.

On the eID file system there are two main directories. One contains the specific user data in a proprietary format and the other one is PIN protected and contains the certificates.

Windows applications can use the Crypto API to access the certificates while everything else can use PKCS#11. There are also toolkits which hide the internal workings of the card.
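
For a Windows application in C#, assuming the eID middleware has propagated the card’s certificates to the current user’s Personal ("MY") certificate store, which is how the Crypto API integration is typically exposed, listing them could look like this sketch:

[csharp]
using System;
using System.Security.Cryptography.X509Certificates;

public static class EidCertificates {
    public static void ListPersonalCertificates() {
        // The Personal store of the current user; the eID certificates are
        // assumed to have been registered there by the middleware.
        X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);

        foreach (X509Certificate2 certificate in store.Certificates) {
            Console.WriteLine("{0} (expires {1})",
                certificate.Subject, certificate.NotAfter);
        }

        store.Close();
    }
}
[/csharp]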

A certificate always has to be validated, meaning the validity period has to be checked and the serial number of the certificate has to be checked with OCSP or against a CRL.
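
A rough sketch of such a check with the .NET X509Chain class, which verifies the validity period and the chain up to the root and performs revocation checking through a CRL (and OCSP where the platform supports it). This is only an illustration, not the validation code used in the project.

[csharp]
using System.Security.Cryptography.X509Certificates;

public static class CertificateValidator {
    public static bool IsValid(X509Certificate2 certificate) {
        X509Chain chain = new X509Chain();

        // Check revocation online for the whole chain, up to the Belgium Root CA.
        chain.ChainPolicy.RevocationMode = X509RevocationMode.Online;
        chain.ChainPolicy.RevocationFlag = X509RevocationFlag.EntireChain;

        // Build() returns false when the certificate is expired, revoked or untrusted.
        return chain.Build(certificate);
    }
}
[/csharp]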
 
This post has been imported from the old blog and has not yet been converted to the new syntax.
During my internship I had to test against different kinds of products, and to be sure everything worked on a clean install of each product, I had to create multiple virtual PCs. One method of doing this was to create one clean Windows 2003 installation inside Virtual PC and copy this image to a new folder for every different server I needed. This was the method I started with, but one disadvantage was that it required a lot of disk space, as the base image alone already took 1.8 GB.

A solution to this problem was to use a feature of Virtual PC called Differencing Disks. This allows for the creation of a read-only base image, called the parent, which can be shared with an unlimited number of other virtual machines, the children.



Every child stores its disk changes in a separate file, making it possible to have one clean Windows 2003 parent image and a child which only adds Windows SharePoint Services in its own file. The combination of parent and child then behaves as a Windows 2003 machine running Windows SharePoint Services.

This way, having a lot of different children uses a lot less space than having to copy the complete base image each time.

Additionally, this method can also be used on a network to provide complete base images to all network clients. This makes it possible to create an archive of base images for each platform (Windows 98, 2000, XP, 2003, Linux, BSD, …), place them on a read-only network share, and have them ready to be consumed by all users, who create their own local child disks.

 
This post has been imported from the old blog and has not yet been converted to the new syntax.


On March 2, 2005, the ASP.NET 2.0 On Tour event came to Brussels, Belgium. This is an international tour, all about the latest Microsoft technology, featuring speakers such as David Platt and Dave Webster.

The sessions of this event showed what ASP.NET 2.0 and Visual Studio 2005 have to offer, and how to migrate to these new products and technologies.

One of the sessions was “Personalization & Membership in ASP.NET 2.0” by Gunther Beersaerts and Bart De Smet, which was very enjoyable thanks to the good balance between demos and slides.



They talked about the Membership Service, which takes care of managing users, generating passwords, validating logins and everything else related to authentication. Other areas of ASP.NET 2.0 they touched on were the Role Management Service and the Profile Service.

Through the Role Management Service, everything related to role-based authorization can be done in a simple way, with static methods for the key management tasks, while the Profile Service takes care of storing user-specific data persistently in a strongly typed manner, making it very easy to customize your site for the logged-on user.
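
To give an idea of what this looks like in code, here is a minimal sketch of the static Membership and Roles APIs in System.Web.Security, assuming an ASP.NET 2.0 application with membership and role management enabled in web.config; the user name, password and role name are made up, and the strongly typed Profile properties are generated from web.config so they are not shown here.

[csharp]
using System.Web.Security;

public class MembershipAndRolesSketch {
    public static void Demo() {
        // Membership Service: create a user and validate a login against the configured provider.
        MembershipCreateStatus status;
        Membership.CreateUser("wouter", "S0me-Passw0rd!", "wouter@example.com",
                              "Favorite color?", "blue", true, out status);

        // Role Management Service: role-based authorization through static methods.
        if (!Roles.RoleExists("Administrators")) {
            Roles.CreateRole("Administrators");
        }
        Roles.AddUserToRole("wouter", "Administrators");

        if (Membership.ValidateUser("wouter", "S0me-Passw0rd!") &&
            Roles.IsUserInRole("wouter", "Administrators")) {
            // The user authenticated correctly and is authorized as an administrator.
        }
    }
}
[/csharp]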

This event really gave a good view of what is to come in the web development area.