Instead of making a webservice call each time certain data was needed, the data was stored in the SqlCe database on the Pocket PC and retrieved from there when needed. This allowed data to be displayed quickly after it had been retrieved once, while still keeping the possibility to fetch the latest data and update the local cache with it.

To implement this, a Db class was used with the Singleton pattern to provide database access to the local SqlCe engine. A database on the Pocket PC is simply a file on the file system, MediaService.sdf in this case.

In the CheckDB method, the database was created in case it did not exist yet. This was done with normal SQL queries containing CREATE TABLE commands.

The following code made up the base functionality of the Db class:

[csharp]
using System;
using System.IO;
using System.Text;
using System.Data;
using System.Data.Common;
using System.Data.SqlServerCe;
using System.Collections;

namespace MediaService.Pocket {
public class Db {
private const String DB_NAME = "MediaService.sdf";
private static Db instance = null;

private Db() { }

public static Db NewInstance() {
lock(typeof(Db)) {
if (instance == null) {
instance = new Db();
}
return instance;
}
} /* NewInstance */

private void CheckDB() {
if (!File.Exists(DB_NAME)) {
SqlCeConnection conn = null;
SqlCeTransaction trans = null;
SqlCeEngine engine = new SqlCeEngine("Data Source = " + DB_NAME);
engine.CreateDatabase();
try {
conn = new SqlCeConnection("Data Source = " + DB_NAME);
conn.Open();
trans = conn.BeginTransaction();

SqlCeCommand availableTable = conn.CreateCommand();
availableTable.Transaction = trans;
availableTable.CommandText = "CREATE TABLE Available(songId int, " +
"songTitle nvarchar(200), songArtist nvarchar(200))";
availableTable.ExecuteNonQuery();

trans.Commit();
} catch {
if (trans != null) {
trans.Rollback();
}
} finally {
if (conn != null && conn.State == ConnectionState.Open) {
conn.Close();
}
}
}
} /* CheckDB */
[/csharp]

Storing songs in the database was done every time results were returned from the webservice with the following code:

[csharp]
private void OnGetSongs(IAsyncResult songsResult) {
this.availableSongsCache = this.GetService().EndGetSongs(songsResult);
Db.NewInstance().StoreSongs(this.availableSongsCache);
} /* OnGetSongs */
[/csharp]

To store the songs, the table was first emptied, after which the new results were inserted all at once by using the following method:

[csharp]
public void StoreSongs(Song[] songs) {
this.CheckDB();

SqlCeConnection conn = null;
SqlCeTransaction trans = null;

try {
conn = new SqlCeConnection("Data Source = " + DB_NAME);
conn.Open();
trans = conn.BeginTransaction();
SqlCeCommand deleteSong = conn.CreateCommand();
deleteSong.Transaction = trans;
String deleteSql = "DELETE FROM Available";
deleteSong.CommandText = deleteSql;
deleteSong.ExecuteNonQuery();

SqlCeCommand insertSong = conn.CreateCommand();
String insertSql = "INSERT INTO Available(songId, songTitle, songArtist) " +
"VALUES (@songId, @songTitle, @songArtist)";
insertSong.Transaction = trans;
insertSong.CommandText = insertSql;

foreach (Song song in songs) {
insertSong.Parameters.Clear();
insertSong.Parameters.Add("@songId", song.ID);
insertSong.Parameters.Add("@songTitle", song.Title);
insertSong.Parameters.Add("@songArtist", song.Artist);
insertSong.ExecuteNonQuery();
}
trans.Commit();
} catch (SqlCeException ex) {
if (trans != null) {
trans.Rollback();
}
System.Windows.Forms.MessageBox.Show(FormatErrorMessage(ex));
} finally {
if (conn != null && conn.State == ConnectionState.Open) {
conn.Close();
}
}
} /* StoreSongs */
[/csharp]

Retrieving the songs can be done exactly as with the regular SqlClient classes.
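
For example, reading the cache back out could look like the following sketch of an extra method on the Db class (hypothetical code; it assumes a simplified Song(id, title, artist) constructor, while the real Song class may take more fields):

[csharp]
public Song[] GetCachedSongs() {
this.CheckDB();

ArrayList songs = new ArrayList();
SqlCeConnection conn = null;
try {
conn = new SqlCeConnection("Data Source = " + DB_NAME);
conn.Open();
SqlCeCommand selectSongs = conn.CreateCommand();
selectSongs.CommandText =
"SELECT songId, songTitle, songArtist FROM Available";
SqlCeDataReader reader = selectSongs.ExecuteReader();
while (reader.Read()) {
// Hypothetical Song(id, title, artist) constructor.
songs.Add(new Song(reader.GetInt32(0), reader.GetString(1),
reader.GetString(2)));
}
reader.Close();
} finally {
if (conn != null && conn.State == ConnectionState.Open) {
conn.Close();
}
}
return (Song[])songs.ToArray(typeof(Song));
} /* GetCachedSongs */
[/csharp]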
 
On the client-side, a Pocket PC application was used. Since this has no guaranteed connectivity, some additional techniques had to be used to improve the end-user experience.

First of all, when calling a webservice from a Pocket PC, the call could take a long time. Had it been made synchronously, the application would have locked up for as long as the call was being processed. To prevent this, the call was made asynchronously and a progress bar was displayed.

To achieve this, a Timer from the System.Threading namespace was used to update the progress bar while it was visible. This timer runs on a thread-pool thread, separate from the UI thread, and makes callbacks at a fixed interval to update the user interface containing the progress bar.

The following code was used to easily start and stop the progress bar:

[csharp]
using System;
using System.Threading;

namespace MediaService.Pocket {
public class MediaForm : System.Windows.Forms.Form {
private System.Threading.Timer progressTimer;
private OpenNETCF.Windows.Forms.ProgressBarEx asyncProgress;
private System.Windows.Forms.Label asyncLabel;

public MediaForm(Int32 userId, String authTicket) {
TimerCallback progressDelegate = new TimerCallback(this.UpdateProgress);
this.progressTimer = new System.Threading.Timer(progressDelegate, null,
Timeout.Infinite, Timeout.Infinite);
} /* MediaForm */

private void StartProgress(ProgressEnum progressType) {
// Reset progressbar and show
this.asyncProgress.Value = this.asyncProgress.Minimum;
this.asyncProgress.Visible = true;
this.asyncLabel.Visible = true;
this.asyncLabel.Text = "Retrieving Content";
this.progressTimer.Change(0, 100);
} /* StartProgress */

protected void UpdateProgress(Object state) {
if (this.asyncProgress.Value + 1 > this.asyncProgress.Maximum) {
this.asyncProgress.Value = this.asyncProgress.Minimum;
} else {
this.asyncProgress.Value++;
}
} /* UpdateProgress */

private void StopProgress() {
this.progressTimer.Change(Timeout.Infinite, Timeout.Infinite);
this.asyncProgress.Visible = false;
this.asyncLabel.Visible = false;
} /* StopProgress */
[/csharp]


After the progress bar was started, an asynchronous call was made to the webservice, preventing the application from locking up, using the following syntax:

[csharp]
AsyncCallback callBack = new AsyncCallback(this.OnGetSongs);
IAsyncResult songsResult = this.GetService().BeginGetSongs(callBack, null);
[/csharp]

This started the call to the webservice on a different thread, and when the webservice call finished, it called back to the OnGetSongs method in this case. In this method, the results were retrieved and the user interface was updated.

[csharp]
private void OnGetSongs(IAsyncResult songsResult) {
this.availableSongsCache = this.GetService().EndGetSongs(songsResult);
if (this.InvokeRequired()) {
this.Invoke(new EventHandler(this.UpdateAvailableSongs));
} else {
this.UpdateAvailableSongs(this, System.EventArgs.Empty);
}
} /* OnGetSongs */
[/csharp]

It was possible that the callback occurred from a different thread. In that case it was not possible to update the user interface, since the thread did not own the form controls. To detect if the callback occurred on another thread or not, the following code was used:

[csharp]
namespace MediaService.Pocket {
public class MediaForm : System.Windows.Forms.Form {
private readonly Thread formThread = Thread.CurrentThread;

private Boolean InvokeRequired() {
return !this.formThread.Equals(Thread.CurrentThread);
} /* InvokeRequired */
[/csharp]

If the callback happened on another thread, the Invoke method had to be used to handle the update of the user interface on the thread that owned the interface. For this reason, the method updating the interface had to have the following signature:

[csharp]
private void UpdateAvailableSongs(object sender, EventArgs e) {
[/csharp]
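
A minimal sketch of what that method could look like (the songListBox control is an assumption, not part of the original code):

[csharp]
private void UpdateAvailableSongs(object sender, EventArgs e) {
// Runs on the thread that owns the form, so the controls may be touched.
this.songListBox.Items.Clear();
foreach (Song song in this.availableSongsCache) {
this.songListBox.Items.Add(song.Title);
}
// Hide the progress bar again now that the data has arrived.
this.StopProgress();
} /* UpdateAvailableSongs */
[/csharp]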

At this point, it was possible to make a webservice call without locking the user interface, while the progress bar informed the user that something was going on.
 
After having implemented a data layer in the Data project, it was time to make a real data implementation. A Sql Server 2000 implementation was the default data source, located in the Data.SqlServer project.

Enterprise Library was used to provide the data access to Sql Server. This contained a Data Access Application Block, which allows configuring the connection string through the Enterprise Library Configuration tool.

A reference to Microsoft.Practices.EnterpriseLibrary.Data was needed, together with the Configuration and Common assemblies of Enterprise Library.

Through the Enterprise Library Configuration tool, an existing App.config was loaded, where the Data Access Application Block was added. The database and server values had to be configured to the actual server being used, together with the database containing the data. Additional connection string properties could be added as well, for example the Integrated Security property, which was set to True.



After saving this file, it was possible to create a data implementation for each Accessor interface previously defined in the Data project, as in this skeleton:

[csharp]
using System;
using System.Data;
using System.Collections;

using MediaService.Logging;
using MediaService.Objects;
using MediaService.Data.Accessors;

using Microsoft.Practices.EnterpriseLibrary.Data;
using Microsoft.Practices.EnterpriseLibrary.Logging;

namespace MediaService.Data.SqlServer {
public class SongDataAccessor: ISongDataAccessor {
} /* SongDataAccessor */
} /* MediaService.Data.SqlServer */
[/csharp]

Thanks to the Enterprise Library Data Access Application Block, the Sql Server implementation used best practices from the Microsoft Patterns & Practices group, which followed Microsoft guidelines and were optimized for performance.

To get an array of objects from the database, a new Database object had to be created, after which a stored procedure was wrapped in a command, executed and read from to materialize, for example, Song objects. This was done with the following code:

[csharp]
public Song[] GetSongs() {
Database db = DatabaseFactory.CreateDatabase("MediaServiceSqlServer");

DBCommandWrapper dbCommandWrapper =
db.GetStoredProcCommandWrapper("GetSongs");

Logger.Write("Retrieving songs.", Category.SqlServer,
Priority.Lowest, 1, Severity.Information);

ArrayList songs = new ArrayList();
using (IDataReader dataReader = db.ExecuteReader(dbCommandWrapper)) {
while (dataReader.Read()) {
songs.Add(new Song(dataReader.GetInt32(0), dataReader.GetString(1),
dataReader.GetString(2), dataReader.GetString(3),
dataReader.GetString(4), dataReader.GetString(5),
dataReader.GetString(6), dataReader.GetInt32(7),
dataReader.GetInt32(8), dataReader.GetInt32(9)));
}
}

Logger.Write(String.Format("Retrieved {0} {1}.", songs.Count,
(songs.Count == 1) ? "song" : "songs"),
Category.SqlServer, Priority.Lowest, 1, Severity.Information);

return (Song[])songs.ToArray(typeof(Song));
} /* GetSongs */
[/csharp]

Updating an item through a stored procedure that takes parameters was done with the following code:

[csharp]
public void UpdateSongPlayCount(Int32 songId) {
Database db = DatabaseFactory.CreateDatabase("MediaServiceSqlServer");

DBCommandWrapper dbCommandWrapper =
db.GetStoredProcCommandWrapper("UpdateSongPlayCount");
dbCommandWrapper.AddInParameter("@songId", DbType.Int32, songId);

Logger.Write(String.Format("Updating play count for song: {0}.", songId),
Category.SqlServer, Priority.Lowest, 1, Severity.Information);

try {
db.ExecuteNonQuery(dbCommandWrapper);
} catch (Exception ex) {
Logger.Write(String.Format("Failed to update play count for song: {0}. " +
"Error: {1}", songId, ex.ToString()),
Category.SqlServer, Priority.Highest, 1, Severity.Error);
}
} /* UpdateSongPlayCount */
[/csharp]

Using stored procedures made it possible to have another layer of abstraction. This made it easy to change an existing stored procedure, for example to keep track of statistics, without having to change any code in the implementation. At the same time, using parameterized stored procedures also protected against SQL injection attacks. After all Accessors were implemented, this implementation could be used by deploying the SqlServer dll and selecting it as the data source.
 
Any application using data benefits from having a separate data layer. This enables an administrator to select which data source to use. It also gives your application an advantage, making it easier to sell.

Apart from the advantages for the end users, separating the data layer from the presentation and business logic layers is also a best practice.

To provide the data layer to the application, a Data project was added. The layers above the data layer never accessed the real data implementations, but worked with objects implementing certain data interfaces. This way, it was possible to define all data-related methods in an interface and implement them afterwards in a real implementation.

A logical grouping was applied when creating the interfaces, starting from a generic IDataAccessor from which every other interface inherited.

[csharp]
using System;

namespace MediaService.Data.Accessors {
public interface IDataAccessor {
} /* IDataAccessor */
} /* MediaService.Data.Accessors */
[/csharp]

One of the logical sections was for example everything related to Song objects:

[csharp]
using System;

using MediaService.Objects;

namespace MediaService.Data.Accessors {
public interface ISongDataAccessor: IDataAccessor {
Song[] GetSongs();
Song[] GetQueue();
Song[] GetMostPlayed(Int32 numberOfSongs);
Song[] GetMostPopular(Int32 numberOfSongs);
} /* ISongDataAccessor */
} /* MediaService.Data.Accessors */
[/csharp]

Since the other projects did not reference the real data implementations, but only the Data project, this project had to take care of loading the correct implementation. Loading the correct class from the real implementation was done with factories. For every Accessor interface a factory existed, returning an instance of the data implementation, using the following code:

[csharp]
using System;
using MediaService.Data.Accessors;

namespace MediaService.Data.Factory {
internal class SongFactory: Factory {
internal static ISongDataAccessor Create() {
return Factory.Create(Accessor.Song) as ISongDataAccessor;
} /* Create */
} /* SongFactory */
} /* MediaService.Data */
[/csharp]

In the Data project, there was one Factory class, responsible for loading the assembly containing the data implementation and instantiating the correct Accessor class. This was done by using Reflection together with Configuration to retrieve the location. The location consisted of the class name and the assembly name, separated by a comma, as in this example for the SongDataAccessor:

[xml]
<SongDataAccessor>MediaService.Data.SqlServer.SongDataAccessor,MediaService.Data.SqlServer</SongDataAccessor>
[/xml]

This location data was retrieved through configuration, after which it was split into its class and assembly parts and loaded with Reflection, using the following code:

[csharp]
using System;
using System.Reflection;

using MediaService.Configuration;
using MediaService.Data.Accessors;

using Microsoft.Practices.EnterpriseLibrary.Configuration;

namespace MediaService.Data.Factory {
internal enum Accessor {
Song
} /* Accessor */

internal class Factory {
internal static IDataAccessor Create(Accessor accessorType) {
DatabaseData configData = LoadConfiguration();

if (configData == null) {
throw new ApplicationException("Could not load configuration.");
}

String blockToLoad = String.Empty;
switch (accessorType) {
case Accessor.Song: blockToLoad = configData.SongDataAccessor; break;
}

if (blockToLoad == String.Empty) {
throw new ApplicationException(String.Format(
"Type entry not found for {0}.", accessorType.ToString()));
}

Int32 index = blockToLoad.IndexOf(",");
string typeToLoad = blockToLoad.Substring(0,index);
string assemblyToLoad = blockToLoad.Substring(typeToLoad.Length + 1,
blockToLoad.Length - typeToLoad.Length - 1);
return (IDataAccessor)Assembly.Load(
assemblyToLoad).CreateInstance(typeToLoad);
} /* Create */

private static DatabaseData LoadConfiguration() {
ConfigurationManager.ClearSingletonSectionCache("databaseConfiguration");
return ConfigurationManager.GetConfiguration(
"databaseConfiguration") as DatabaseData;
} /* LoadConfiguration */
} /* Factory */
} /* MediaService.Data.Factory */
[/csharp]
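
The DatabaseData class referenced above lived in the Configuration project. A hypothetical sketch of its shape, following the same pattern as the PlayerData class shown in the configuration section of this report (the element name is an assumption):

[csharp]
using System;
using System.Xml.Serialization;

namespace MediaService.Configuration {
public class DatabaseData {
private String songDataAccessor;

// Location string: "class name,assembly name" (element name assumed).
[XmlElement("SongDataAccessor")]
public String SongDataAccessor {
get { return this.songDataAccessor; }
set { this.songDataAccessor = value; }
} /* SongDataAccessor */

public DatabaseData() { }
} /* DatabaseData */
} /* MediaService.Configuration */
[/csharp]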

All of the factories were marked internal because they were only meant for the internal workings of the data layer, while all Accessors remained public because they had to be accessible to the real data implementations implementing them.

Besides the Accessor interfaces, the Data project also exposed one public class, named Dalc. This class contained static properties for each logical data section, returning an instantiated Accessor from the configured data source.

[csharp]
using System;

using MediaService.Data.Accessors;
using MediaService.Data.Factory;

namespace MediaService.Data {
public class Dalc {
public static ISongDataAccessor Song {
get { return SongFactory.Create(); }
} /* Song */
} /* Dalc */
} /* MediaService.Data */
[/csharp]

After this, it was possible to access data by adding a reference to the Data project, adding a real data implementation assembly to the deployed location and configuring it. For example, the following code retrieved all songs from the data source:

[csharp]
using MediaService.Objects;
using MediaService.Data;
using MediaService.Data.Accessors;

namespace MediaService.Web {
public class Media {
public Song[] GetSongs() {
return Dalc.Song.GetSongs();
} /* GetSongs */
[/csharp]

With this data layer, all details about data access were contained in the real data implementations, and no data-source-specific code existed anywhere else. The entire application worked on data objects implementing the data interfaces, while under the hood the correct data source was selected through the configuration file.
 
When developing an application, it's possible to use debug mode to figure out why something is wrong, but when it's time to deploy the application, something else has to be integrated. The solution to this is logging. Implementing good logging functionality also makes the Operations people who have to deploy your application happy, because they can integrate it into existing monitoring systems.

Enterprise Library was used to implement a flexible logging solution. Thanks to the Logging and Instrumentation Application Block it's possible to integrate logging into your code, but decide externally from the application where the log output goes. This way it's possible to log to a flat file, a database, an email address, the event log, a message queue, WMI or a custom implementation, without having to change code.

To start, a Logging project was added to provide some helper constants:

[csharp]
using System;

namespace MediaService.Logging {
public struct Priority {
public const Int32 Lowest = 0;
public const Int32 Low = 1;
public const Int32 Normal = 2;
public const Int32 High = 3;
public const Int32 Highest = 4;
} /* Priority */

public struct Category {
public const String Player = "Player";
public const String Remoting = "Remoting";
public const String Data = "Data";
public const String SqlServer = "SqlServerData";
} /* Category */
} /* MediaService.Logging */
[/csharp]

After this, a reference to the Logging project was added together with a reference to Microsoft.Practices.EnterpriseLibrary.Logging. A reference to the Enterprise Library Configuration Application Block was needed as well.

At this point it was possible to log messages by using the following construct:

[csharp]
if (randomSong == null) {
Logger.Write("Unable to select a random song.", Category.Player,
Priority.High, 1, Severity.Error);
} else {
Logger.Write(String.Format("Fetched song: {0} (Random).", randomSong.Path),
Category.Player, Priority.Low, 1, Severity.Information);
}
[/csharp]

Similar code was added throughout the application to provide meaningful feedback. The only thing left was configuring where the log output had to go.

Using the Enterprise Library Configuration tool, an existing App.config was loaded and the Logging and Instrumentation Application Block was added. Under Client Settings, LoggingEnabled was set to True.

A new category had to be added, called Player, by right clicking on Categories and selecting New – Category. This is the name that was used in the code to specify which log the output belongs to. It was possible to define multiple categories.

To define where the output had to go to, a new sink was added, called Player Flat File Sink, by right clicking on Sinks and selecting New – Flat File Sink. Player.log was chosen as a filename, without a header.

Formatters define how a single log entry looks. By default a Text Formatter was provided, which included extensive information. To get a more compact overview log, a new formatter was added by right clicking Formatters and selecting New – Text Formatter. The template for this Simple Text Formatter was the following:

[code]
{severity}
[{timestamp}]: {message}
[/code]

Finally a new destination was added by right clicking the Player category and selecting New – Destination. This destination was configured to use the Simple Text Formatter together with the Player Flat File Sink.



After this, the loggingconfiguration.config and loggingdistributorconfiguration.config files had to be copied in the Post-build event as well, using the same kind of copy commands as shown for playerconfiguration.config elsewhere in this report. At this point, the application had a flexible logging strategy in place, where an administrator could easily decide to turn logging on or off, where the log output had to go, and which template had to be used.
 
To configure the solution, Enterprise Library was used. It contains a Configuration Application Block which allows defining various configuration sources externally from the application. This approach gave the ability to switch from XML configuration files to a database without having to change anything in the code. Because of this, it was also possible to distribute the application and let an administrator choose where it should read its configuration from. An additional advantage of using the application block was the ability to automatically detect when the configuration had changed and retrieve the new values.

The easiest way to implement configuration was to create a new project containing all the possible configuration sections in the solution. Each configuration section was defined by a different class.

For example, to make the port used by Remoting in the application configurable, I created a class which contained this value, together with a public property for it. This class used the Xml serialization namespace, because it was serialized to XML, and the application block used the public properties to populate the configuration data.

A default constructor also had to be present for XML Serialization.

The configuration data for the player was for example contained in the PlayerData class, which looked like this:

[csharp]
using System;
using System.Xml.Serialization;

namespace MediaService.Configuration {
public class PlayerData {
private Int32 remotingPort;

[XmlElement("RemotingPort")]
public Int32 RemotingPort {
get { return this.remotingPort; }
set { this.remotingPort = value; }
} /* RemotingPort */

public PlayerData() { }

public PlayerData(Int32 remotingPort) {
this.remotingPort = remotingPort;
} /* PlayerData */
} /* PlayerData */
} /* MediaService.Configuration */
[/csharp]

To use these values, a reference to the Configuration project had to be added, together with a reference to Microsoft.Practices.EnterpriseLibrary.Configuration. After this the configuration could be loaded with the following code:

[csharp]
using Microsoft.Practices.EnterpriseLibrary.Configuration;

namespace MediaService.Player {
public class PlayerService : System.ServiceProcess.ServiceBase {
private PlayerData configData = null;

private void LoadConfiguration() {
ConfigurationManager.ClearSingletonSectionCache("playerConfiguration");
try {
this.configData = ConfigurationManager.
GetConfiguration("playerConfiguration") as PlayerData;
} catch (Exception) {
this.configData = new PlayerData(4000); // fall back to the default port
}
[/csharp]

And to receive notifications the following code had to be added:

[csharp]
protected override void OnStart(string[] args) {
ConfigurationManager.ConfigurationChanged += new
ConfigurationChangedEventHandler(ConfigurationManager_ConfigurationChanged);
} /* OnStart */

private void ConfigurationManager_ConfigurationChanged(object sender,
ConfigurationChangedEventArgs e) {
this.LoadConfiguration();
// Check new values and perform possible actions
} /* ConfigurationManager_ConfigurationChanged */
[/csharp]

At this point, all code needed for configuration was done. Now the Enterprise Library Configuration tool had to be used to configure the application’s configuration source.

First, a new application had to be defined by using File – New Application. Then the Configuration Application Block had to be added through Action – New – Configuration Application Block. After this, a new configuration section was added by right clicking on the new application block and selecting New – Configuration Section.

This new section was called playerConfiguration, matching the name used in the code, and used an XML Storage Provider and an Xml Serializer Transformer, both added by right clicking the new section and selecting them from the New menu.

The only thing that still had to be changed was the XML Storage Provider: playerConfiguration.config had to be set as its FileName. After this, the configuration had to be saved.



The XML file used for configuration was the following:

[xml]
<?xml version="1.0" encoding="utf-8"?>
<PlayerData xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<RemotingPort>4000</RemotingPort>
</PlayerData>
[/xml]

The only thing left, to make sure the application loaded the configuration during testing, was to provide a Post-build event that copied the configuration file. In the project's properties, under Common Properties, there is a Build Events menu where the Post-build event can be defined. The following command was used to copy the playerconfiguration.config file:

[code]
copy "$(ProjectDir)playerconfiguration.config" "$(TargetDir)" >Nul
[/code]

When the application was started, it would call the LoadConfiguration method, which would populate the PlayerData class and provide all configuration settings of the application. If the configuration file was changed while the application was running, the ConfigurationChanged event would be raised and the new configuration would be used.
 
After everything was done on the server-side of the Remoting implementation, inside the Windows Service, it was time to add the consumer side. In the case of this project, the consumer was an ASP.NET Webservice running on the same machine.

This required little effort. First, System.Runtime.Remoting had to be referenced, together with the assembly containing the interface used for the controller object. After this it was possible to retrieve the controller object with the following code:

[csharp]
private IPlayerServiceController GetPlayerServiceController() {
return (IPlayerServiceController)Activator.GetObject(
typeof(IPlayerServiceController),
String.Format("tcp://{0}:{1}/MediaServicePlayerController",
this.configData.RemotingHost,
this.configData.RemotingPort));
} /* GetPlayerServiceController */
[/csharp]

In this solution, the webservice does not need to reference the assembly containing the real controller, only an assembly containing the interface. The real implementation runs on the server side as an instantiated, marshalled object.
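
A hypothetical sketch of that shared interface (the namespace and the StartPlaying member are assumptions; StopPlaying is the method used in the code below):

[csharp]
using System;

namespace MediaService.Player {
public interface IPlayerServiceController {
void StartPlaying(); // illustrative member, not confirmed by the post
void StopPlaying();
} /* IPlayerServiceController */
} /* MediaService.Player */
[/csharp]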

When the instance was returned, it could be used very simply, as in this code:

[csharp]
[WebMethod(Description="Stop playing the queue.")]
public void Stop() {
this.GetPlayerServiceController().StopPlaying();
} /* Stop */
[/csharp]
 
When the Windows Service was successfully running, a way had to be found to control it. There is a ServiceController class which allows controlling a service and sending messages to it through the ExecuteCommand method, but that method is limited to sending integers without getting anything back. A better solution was to use Remoting to control the service.

Remoting allows for interprocess communication, making objects available between different processes. An object is passed from server to client by reference, and the client can work with it as if it were a local object. Remoting takes care of collecting information about the client's calls and sending it to the server, where it is passed to the server object, which performs the action on the client's behalf. The result of this operation is then sent back to the client.

Remoting can transport this information over different channels, such as TCP and HTTP. In this project, a TCP channel was used, with the following code:

[csharp]
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

namespace MediaService.Player {
public class PlayerService : System.ServiceProcess.ServiceBase {
private TcpChannel tcpChannel;

private void SetupRemoting() {
this.tcpChannel = new TcpChannel(this.configData.RemotingPort);
ChannelServices.RegisterChannel(this.tcpChannel);
RemotingConfiguration.ApplicationName = "MediaServicePlayer";
[/csharp]


After a channel had been set up, there were different possibilities to make a certain object available. But first, the object had to be created. This was a simple class, which inherited from MarshalByRefObject and implemented a custom interface.

[csharp]
using System;
namespace MediaService.Player {
public class PlayerServiceController: MarshalByRefObject,
IPlayerServiceController {
[/csharp]

The interface was used to make it possible to give consumers of the remote object only the assembly containing the interface, instead of the implementation.

After the object was created, it was possible to register its type as a well-known type. There were two possibilities for this. It could be registered as a Singleton, which would make sure only one instance of the object lived on the server at any given time. The other possibility was to register it as SingleCall, which would create a new object for each call. Neither of these proved successful, because the objects created that way had no link to the running service and could not call its methods. The solution was to instantiate a new object when the service started, and to make this instance remotely available. This kept the object in contact with the Windows Service, making it possible to control it. The following code published the object on tcp://host:port/MediaServicePlayerController:

[csharp]
RemotingServices.Marshal(this.playerController,
"MediaServicePlayerController");
[/csharp]

At the end, when the service was stopped, everything had to be cleaned up: the published object was disconnected and the channel unregistered.

[csharp]
private void TearDownRemoting() {
RemotingServices.Disconnect(this.playerController);
ChannelServices.UnregisterChannel(this.tcpChannel);
} /* TearDownRemoting */
[/csharp]
 
Normally, a Windows Service is used to provide a system service to other applications, an antivirus service for example. The best practice for a business application providing a service would be to create a Windows Forms application which runs on the desktop of a server: a dedicated account would be used to log in to the server, start the application and lock the server, at which point the application would provide its service to other systems. This is because a business application normally isn't part of the operating system infrastructure, while all Windows Services are.

For this project however, to try out technologies, Windows Services were used. And, I have to admit, after having tried it out, it brings more problems along than it solves. As you have to learn from experience, this was a valuable lesson for future similar projects.

To create a new Windows Service, a template can be used from Visual Studio when creating a new project. This provides a starting class, with the two most important methods, OnStart and OnStop. When you want to start or stop the service from Windows, these methods will be called.

However, these methods had to finish in a reasonable time, 30 seconds, otherwise the service control manager would give an error, and the initialization of a service can easily take longer than that. To solve this, a Timer was added to the project which had an interval of 10 milliseconds, was disabled, and had a method listening for its Elapsed event. During the OnStart method this timer was simply started, and nothing more. This made the service start immediately, while it had all the time it needed to perform its initialization inside the Elapsed event.

[csharp]
protected override void OnStart(string[] args) {
this.serviceTimer.Enabled = true;
}
[/csharp]
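
The matching Elapsed handler could look like this sketch (the handler and InitializeService names are assumptions):

[csharp]
private void serviceTimer_Elapsed(Object sender,
System.Timers.ElapsedEventArgs e) {
// Fire only once: the timer exists purely to get off the SCM's thread.
this.serviceTimer.Enabled = false;
this.InitializeService(); // the actual, possibly slow, startup work
} /* serviceTimer_Elapsed */
[/csharp]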

After this, the Windows Service could be coded just like anything else. When everything was done, it was time to add an installer to install the service into the system. To do this, a new class had to be added to the project which had the RunInstaller attribute set to true, inherited from Installer and included the following using statements:

[csharp]
using System;
using System.ComponentModel;
using System.ServiceProcess;
using System.Configuration.Install;

namespace MediaService.Player {
[RunInstaller(true)]
public class PlayerServiceInstaller: Installer {
[/csharp]

The installer itself had to be configured in the constructor, with the following code:

[csharp]
private ServiceInstaller PlayerInstaller;
private ServiceProcessInstaller PlayerProcessInstaller;

public PlayerServiceInstaller() {
this.PlayerInstaller = new ServiceInstaller();
this.PlayerInstaller.StartType = ServiceStartMode.Manual;
this.PlayerInstaller.ServiceName = "MediaServicePlayer";
this.PlayerInstaller.DisplayName = "MediaService - Media Player";
this.Installers.Add(this.PlayerInstaller);

this.PlayerProcessInstaller = new ServiceProcessInstaller();
this.PlayerProcessInstaller.Account = ServiceAccount.User;
this.Installers.Add(this.PlayerProcessInstaller);
} /* PlayerServiceInstaller */
[/csharp]

At this point, the Windows Service was ready to be installed. To do this, the installutil utility had to be used. This tool is available from the Visual Studio .NET 2003 Command Prompt and takes the service executable as a parameter.
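
For example (the executable name is assumed):

[code]
installutil MediaService.Player.exe
[/code]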



During the installation, a log was generated, and a dialog box appeared that allowed configuring the account the service had to run under. After this, the service was successfully installed and accessible from MMC.
 
To demonstrate the possible use of eID in Windows applications, I created a small client/server application. This application contains a central server, which listens on a certain port for clients. After a client connects, it has to authenticate with the user's eID card. The server then validates the certificate and checks whether it is in the list of users allowed to connect.

If everything is valid, the client can connect and chat with other clients. Every message sent to the server is signed by the client and validated, making sure each message arriving at the server originated from that user. The server then extracts the username from the certificate and uses this to broadcast the message to the other clients. Ultimately, this means users only have to insert their eID card and enter their PIN, and they are safely chatting away with others.

The steps used to authenticate a client are as follows:




  • The client asks for a logon.

  • The server sends a random challenge back to the client and remembers this value.

  • The client signs this challenge and sends the signed challenge back to the server along with its certificate.

  • The server first checks whether the serial number of the certificate is in the database of allowed serials; otherwise the client is denied.

  • After this, it checks whether the certificate is still valid. If it is expired or revoked, the client is denied.

  • The server takes the public key from the certificate and verifies the signature of the client.

  • If the signature is valid, the client really is who he claims to be and is allowed to log on. The client certificate is stored, to be used for verifying future communication and to extract the client's name to include in the broadcasted communication.



These steps can be implemented with CAPICOM or WSE in C# to provide authentication with eID.
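
As an illustration, the server-side signature check could look like this sketch, here using the .NET 2.0 certificate classes rather than CAPICOM or WSE (class name and hash choice are assumptions, not the internship code):

[csharp]
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

public class EidAuthentication {
// Verifies that the signature was produced over the challenge with the
// private key belonging to the client's authentication certificate.
public static Boolean VerifyChallenge(Byte[] challenge, Byte[] signature,
X509Certificate2 clientCertificate) {
RSACryptoServiceProvider publicKey =
(RSACryptoServiceProvider)clientCertificate.PublicKey.Key;
// eID authentication signatures were SHA-1 based at the time (assumed).
return publicKey.VerifyData(challenge, "SHA1", signature);
} /* VerifyChallenge */
} /* EidAuthentication */
[/csharp]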
 
Another thing I had to do was a feasibility study on eID. This meant I had to look into this technology and research what the possible uses are, whether they can be implemented, and how they have to be implemented.

The eID project is an initiative of the Belgian government to replace the current passport of every citizen with an eID card. This is a smartcard which looks like the current Belgian passport and contains certificates and identity data on its chip. The main functionalities of the eID card are data capture, authentication and digital signatures.

Data capture is used in applications to read identity data from the card, such as name, address, gender and others. This gives an advantage to business applications which use this data, because it takes less time to enter the data, and no more typing errors can occur.

Authentication is done by using a certificate on the card. When the private key of the certificate is accessed, the eID middleware, provided by the government, shows a dialog asking for the PIN code of the card. Normally, only the owner of the card knows this code and can thus allow access to the private key. Authentication could be used on websites, at physical locations, in client-server applications and more.

A digital signature can be used to prove that some content originates from a certain user and has not been modified along the way. Possible uses are signing an email or a document. With eID, a digital signature has the same legal value as a written one.



Every eID card contains an authentication and digital signature certificate, signed by the Citizen CA, which itself is signed by the Belgium Root CA.

When a citizen requests an eID card at his municipality, it is registered at the population registry, which requests the new certificates. After this, the citizen can log on to a website, which validates the certificate with the CA through the OCSP protocol.

On the eID file system there are two main directories. One contains the specific user data in a proprietary format and the other one is PIN protected and contains the certificates.

Windows applications can use the Crypto API to access the certificates while everything else can use PKCS#11. There are also toolkits which hide the internal workings of the card.

A certificate always has to be validated, meaning the validity period has to be checked and the serial number of the certificate has to be checked with OCSP or against a CRL.
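
A small sketch of accessing the certificates and checking the validity period, again with the .NET 2.0 certificate classes (the store location is an assumption; a full check would still need OCSP or a CRL for revocation):

[csharp]
using System;
using System.Security.Cryptography.X509Certificates;

public class EidValidation {
public static void ListEidCertificates() {
// The eID middleware propagates the card's certificates into the
// current user's personal store (assumed location).
X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadOnly);
foreach (X509Certificate2 certificate in store.Certificates) {
// Validity period check; revocation still needs OCSP or a CRL.
Boolean inPeriod = DateTime.Now >= certificate.NotBefore
&& DateTime.Now <= certificate.NotAfter;
Console.WriteLine("{0} (within validity period: {1})",
certificate.Subject, inPeriod);
}
store.Close();
} /* ListEidCertificates */
} /* EidValidation */
[/csharp]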
 
During my internship I had to test against different kinds of products, and to be sure everything worked on a clean install of each product, I had to create multiple virtual PCs. One method of doing this was to create one clean Windows 2003 installation inside Virtual PC and copy this image to a new folder for every different server I needed. This is the method I started with, but one disadvantage was that it required a lot of disk space, as the base image alone already took 1.8 GB.

A solution to this problem was a feature of Virtual PC called Differencing Disks. This allows for the creation of a read-only base image, called the parent, which can be shared with an unlimited number of other virtual machines, the children.



Each child stores its disk changes in a separate file, making it possible to have one clean Windows 2003 parent image and, for example, a child which only adds Windows SharePoint Services in its own file. The combination of parent and child then becomes a Windows 2003 machine running Windows SharePoint Services.

This way, having a lot of different children uses a lot less space than having to copy the complete base image each time.

Additionally, this method can also be used on a network to provide complete base images to all network clients, making it possible to create an archive of base images for each platform (Windows 98, 2000, XP, 2003, Linux, BSD, …) and place them on a read-only network share, ready to be consumed by all users creating their own local child disks.

 


On March 2, 2005 the ASP.NET 2.0 On Tour event came to Brussels, Belgium. This is an international tour, all about the latest Microsoft technology, featuring speakers such as David Platt and Dave Webster.

The sessions showed what ASP.NET 2.0 and Visual Studio 2005 have to offer, and how to migrate to these new products and technologies.

One of the sessions was about “Personalization & Membership in ASP.NET 2.0”, by Gunther Beersaerts and Bart De Smet, which was very nice thanks to the good balance between demos and slides.



They talked about the Membership Service, which takes care of the management of users, password generation, login validation and everything else related to authentication. Other areas of ASP.NET 2.0 they touched on were the Role Management Service and the Profile Service.

Through the Role Management Service, everything related to role-based authorization can be done in a simple way, with static methods for the key management tasks. The Profile Service takes care of storing user-specific data persistently, in a strongly typed manner, making it very easy to customize your site to the logged-on user.

This event really gave a good view on what is to come in the web development area.
 
Blogs are a new communication medium, mainly used as a single-direction information channel. On a blog, the owner publishes new posts, which can be read and commented on by readers.

This model looks a lot like a forum where threads are started and replies are given, except that on a blog only the blog owner creates new posts. Some compare this model to an online diary or the private newspaper of an amateur journalist.

The greatest strength of blogs is the fact that they are very personal and contain a lot of valuable information. They also show the human side of companies when employees are blogging.

Another great advantage of a blog is syndication. This is the use of an XML file with a well-known schema, called an RSS feed or an Atom feed, exposing all the information on a particular blog. By using so-called feed readers, it is possible to follow several blogs from one application.

As a part of my internship, I had to post articles about what I did on my blog. Most of these articles correspond with the content of this report.



 
Another tool I had at my disposal was Microsoft Virtual PC. This is a product that enables you to run several operating systems inside your existing one, each one of them acting as a real PC.



This was very useful when I needed to test some of the things I created on a different server, a Windows 2003 server running Windows SharePoint Services for example.

After I had created my Windows 2003 image, I could use it on any PC I wanted to work on, on my laptop as well as on my desktop. This proved very useful when having to test against a specific machine.

A virtual PC can share its network with the host operating system, making it possible to run several virtual machines at once, simulating a complete network, as if they were separate servers on the network. This was a very nice feature to have before deploying something to the real production servers.



 
For the Microsoft Student Council, I decided to write a small database application that would keep track of the users' points. The council has access to a SharePoint site, so I decided to create a WebPart that would use the existing user data and integrate nicely into the existing infrastructure.



The first thing I did was install the Web Part Templates for Visual Studio .NET on my machine. The installation required Microsoft.SharePoint.dll, so I decided to install Windows SharePoint Services first, which was freely available on the Microsoft Download website.

After I installed SharePoint, I discovered the SmartPart. This is a special WebPart, created by Jan Tielens and Fons Sonnemans, which allows you to encapsulate a regular ASP.NET User Control in a WebPart. This is a great solution because it gives you the productivity of using the designer and the power of accessing the SharePoint Object Model at the same time.

As this WebPart would be storing data, I had to decide where to store it. My first idea was to store it right in the SharePoint database, but there was almost no information on doing so, and I was advised against putting third-party data in that database. In the end I created two Custom Lists in SharePoint, which would act as database tables to store the data.
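
Adding an entry to one of those lists would go through the SharePoint object model, along these lines (the list and field names here are assumptions, not the actual ones used):

[csharp]
private void AddPunten(String userLogin, String puntenType, Int32 punten) {
// "PuntenData" and the field names are hypothetical.
SPList puntenData = this.SPWeb.Lists["PuntenData"];
SPListItem newItem = puntenData.Items.Add();
newItem["User"] = userLogin;
newItem["Punten Type"] = puntenType;
newItem["Punten"] = punten;
newItem.Update();
} /* AddPunten */
[/csharp]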



The next step was the creation of the User Control. To do this, references to Microsoft.SharePoint.dll and SmartPart.dll had to be added, the SmartPart, Microsoft.SharePoint and Microsoft.SharePoint.WebControls namespaces had to be imported and the IUserControl interface had to be implemented. This interface takes care of the link between your User Control and the SharePoint Web.

After this, it was possible to access everything of the SharePoint Object Model. For example, to fill a dropdown list with the available users, I used the following code:

[csharp]
private void FillUserList() {
this.viewUserList.Items.Clear();

SPUserCollection webUsers = this.SPWeb.Users;
this.viewUserList.Items.Add(new ListItem("All", "-1"));
foreach (SPUser webUser in webUsers) {
this.viewUserList.Items.Add(new ListItem(
Microsoft.SharePoint.Utilities.SPEncode.HtmlEncode(webUser.Name),
webUser.LoginName));
}
} /* FillUserList */
[/csharp]

This dropdown was later used as a filter in the administrative part of the WebPart.

To retrieve the available types from the Custom List, I used the following piece:

[csharp]
private void FillTypeList() {
this.typeList.Items.Clear();
SPListItemCollection puntenTypes = this.SPWeb.Lists["PuntenList"].Items;
foreach (SPListItem puntenType in puntenTypes) {
if (!Convert.ToBoolean(puntenType["Obsolete"].ToString())) {
this.typeList.Items.Add(new ListItem(
puntenType["Punten Type"].ToString(),
puntenType["Punten Type"].ToString()));
}
}
} /* FillTypeList */
[/csharp]

With small pieces of code like these, demonstrating the SharePoint Object Model, I created a User Control containing a DataGrid to display the items, some input fields to add new items, and a dropdown list with users to filter on when logged on as an Administrator.

 
NSurvey already provides general charts displaying the results, but it only uses bar charts, and I had to output pie charts as well. So I implemented them.

First, I created a new, empty page called PieChartReport.aspx. After this I used the same code as the BarChartReport and filled a ChartPointCollection, which I then used to create a new PieChart, render it and send it back to the client.

[csharp]
ChartEngine engine = new ChartEngine();
ChartCollection charts = new ChartCollection(engine);

engine.Size = new Size(350, 400);
engine.Charts = charts;
engine.Title = new ChartText();

if (questions.Questions[0].IsParentQuestionIdNull()) {
engine.Title.Text = Server.HtmlDecode(
Regex.Replace(questions.Questions[0].QuestionText, "<[^>]*>", " "));
} else {
String questionText = String.Format("{0} - {1}",
questions.Questions[0]["ParentQuestionText"].ToString(),
questions.Questions[0].QuestionText);
questionText = questionText.Replace(Environment.NewLine, "");
questionText = questionText.Replace("\t", "");
// The string literals in the next two calls were lost when the post was
// imported; most likely they stripped HTML break/paragraph tags.
questionText = questionText.Replace("<br>", "");
questionText = questionText.Replace("<p>", "");
engine.Title.Text = Server.HtmlDecode(
Regex.Replace(questionText, "<[^>]*>", " "));
}

PieChart pie = new PieChart(data); // data: the filled ChartPointCollection
engine.Charts.Add(pie);
ChartLegend legend = new ChartLegend();
legend.Position = LegendPosition.Bottom;
engine.HasChartLegend = true;
engine.Legend = legend;
engine.GridLines = GridLines.None;
[/csharp]



Update: I used the following control by the way (which was already in NSurvey): http://www.carlosag.net/Tools/WebChart/Default.aspx
 
NSurvey provides chart reporting by default, but this could be enhanced by using the Country List control together with the added Belgian regions. With this question in a survey as a required question, every entry would have location information. Together with MapPoint, a graphical overview could be made, showing additional information per country and region.

To accomplish this, I created a new administration page and edited the UINavigator and HeaderControl classes to add the new page to the menu. On this page were two dropdown lists containing the column names of a survey's text entries; these were used to indicate in which columns NSurvey stored the Country and Region questions. A button next to them would generate the chart.



To generate the chart, all distinct locations were first collected by grouping the entries and storing the unique countries and regions. After this, the MapPoint FindService was instantiated and the Find method was called for each address.

[csharp]
FindServiceSoap findService = new FindServiceSoap();
if (ConfigurationSettings.AppSettings["MapPointProxy"] != String.Empty) {
findService.Proxy = this.proxyObject;
}
findService.Credentials = this.ourCredentials;
findService.PreAuthenticate = true;

FindSpecification findSpec = new FindSpecification();
findSpec.DataSourceName = "MapPoint.EU";

foreach (DictionaryEntry locationEntry in locationData) {
// key example: "West-Vlaanderen, BE"
findSpec.InputPlace = locationEntry.Key.ToString();
FindResults foundResults = findService.Find(findSpec);
if (foundResults.NumberFound > 0) {
((CustomLocation)locationEntry.Value).LatLong =
foundResults.Results[0].FoundLocation.LatLong;
}
}
[/csharp]

This gave me the LatLong of every location MapPoint had found, which I used to create an array of Location objects to pass to the GetBestMapView method. This method returned a MapViewRepresentations object describing the view to use when calling the GetMap method, a view that ensured every location was visible.

[csharp]
MapViewRepresentations mapRepresentations =
renderService.GetBestMapView(myLocations, "MapPoint.EU");
ViewByHeightWidth[] myViews = new ViewByHeightWidth[1];
myViews[0] = mapRepresentations.ByHeightWidth;
[/csharp]

At this point all required location information was known; the only thing left was to define the pushpins that would show up on the generated map and be clickable.

[csharp]
Pushpin[] myPushpins = new Pushpin[foundRegions.Count];
Int32 pinCounter = 0;
foreach (DictionaryEntry foundRegion in foundRegions) {
myPushpins[pinCounter] = new Pushpin();
myPushpins[pinCounter].IconDataSource = "MapPoint.Icons";
myPushpins[pinCounter].IconName = "1"; // Red pin
Int32 nrResults = ((CustomLocation)foundRegion.Value).ResultCount;
myPushpins[pinCounter].Label = String.Format("{0} {1}",
nrResults,
(nrResults == 1) ? "result" : "results");
myPushpins[pinCounter].LatLong = (LatLong)foundRegion.Key;
myPushpins[pinCounter].ReturnsHotArea = true;
myPushpins[pinCounter].PinID =
((CustomLocation)foundRegion.Value).Location();
pinCounter++;
}
[/csharp]

To get the map, I had to call the GetMap method and supply a MapSpecification. This specification describes the size of the map, the quality, the pushpins and what MapPoint should return. Here, it returns a URL pointing to the generated map.

[csharp]
MapSpecification mapSpec = new MapSpecification();
mapSpec.DataSourceName = "MapPoint.EU";
mapSpec.Views = myViews;
mapSpec.Options = new MapOptions();
mapSpec.Options.ReturnType = MapReturnType.ReturnUrl;
mapSpec.Options.Format = new ImageFormat();
mapSpec.Options.Format.Height = 500;
mapSpec.Options.Format.Width = 500;
mapSpec.Options.Style = MapStyle.Locator;
mapSpec.Pushpins = myPushpins;
MapImage[] mapImages = renderService.GetMap(mapSpec);
[/csharp]

After the call, MapPoint returned a MapImage object, containing the URL to the map, together with information about the special areas on the map, called HotAreas. To make these areas clickable on the map, an HTML image map had to be generated.

[csharp]
StringBuilder imageMapName = new StringBuilder();
// The HTML in the following string literals was mangled during the import;
// rebuilt here as a standard client-side image map. The href querystring
// parameter name is an assumption.
// hotAreas came from the HotAreas property of the returned MapImage.
imageMapName.Append("<map name=\"").Append(mapObject.ID).Append("_Map\">");
for (Int32 i = 0; i < hotAreas.Length; i++) {
String pinId = hotAreas[i].PinID;
imageMapName.Append("\n<area shape=\"rect\" coords=\"");
imageMapName.Append(hotAreas[i].IconRectangle.Left).Append(",");
imageMapName.Append(hotAreas[i].IconRectangle.Top).Append(",");
imageMapName.Append(hotAreas[i].IconRectangle.Right).Append(",");
imageMapName.Append(hotAreas[i].IconRectangle.Bottom);
imageMapName.Append("\" href=\"?region=").Append(pinId);
imageMapName.Append("\" title=\"").Append(pinId).Append("\">");
}
imageMapName.Append("</map>");
this.imageMapHotAreas.Text = imageMapName.ToString();
mapObject.Attributes["USEMAP"] = "#" + mapObject.ID + "_Map";
[/csharp]

The result was a map, scaled to the best size to include all locations, with clickable pushpins on it that point to the same page with an additional querystring.



This made it possible to visualize the results per region and, when a certain region was selected, to provide filtered results for that region.
 
One of the standard controls of NSurvey is the Country List, which provides a dropdown list of countries. Belgium is one of these countries, but when you select Belgium, it doesn’t display the possible regions for Belgium.

This is because the region information is also implemented with the subscriber model. When the country selection changes, it publishes the selected country to the Region List, which then looks up the XML file of the selected country, containing the region information. The problem was that there was no region file for Belgium. So I looked up the Belgian regions from Microsoft Passport and created the be.xml file:

[xml]
<?xml version="1.0" encoding="utf-8"?>
<!-- Element names are approximations; the original tags were lost and only
the values survived. -->
<Regions>
<Text>Region :</Text>
<Region><Name>[Select Region]</Name><Value></Value></Region>
<Region><Name>Antwerpen</Name><Value>Antwerpen</Value></Region>
<Region><Name>Vlaams-Brabant</Name><Value>Vlaams-Brabant</Value></Region>
<Region><Name>Hainaut</Name><Value>Hainaut</Value></Region>
<Region><Name>Liege</Name><Value>Liege</Value></Region>
<Region><Name>Limburg</Name><Value>Limburg</Value></Region>
<Region><Name>Luxembourg</Name><Value>Luxembourg</Value></Region>
<Region><Name>Namur</Name><Value>Namur</Value></Region>
<Region><Name>Oost-Vlaanderen</Name><Value>Oost-Vlaanderen</Value></Region>
<Region><Name>Waals-Brabant</Name><Value>Waals-Brabant</Value></Region>
<Region><Name>West-Vlaanderen</Name><Value>West-Vlaanderen</Value></Region>
</Regions>
[/xml]
 
As this application was going to collect feedback from Microsoft events, it had to look like it belonged to Microsoft, and it had to be designed professionally. To do this, I visited the Microsoft site and saved the page to my dev PC. There I stripped all the content and created a template with two user controls, SiteHeader and SiteFooter.

The next step was to include the previously created SurveyListControlOverview on the Default.aspx page to provide a starting point for the user.



When the user selected a survey and clicked the button, the OverviewSurveyId property was retrieved and forwarded to the Survey.aspx page, which displayed the survey in the same layout, together with the survey title.



If an error occurred, the user was redirected to a generic error page and an email was dispatched to the site administrators.



A contact page was also added to provide a contact person for users having problems or questions.



The last step in creating the layout was testing whether it worked the same in Internet Explorer and Mozilla Firefox. Luckily it did from the first try, and the layout was finished.
 
This post was imported from the old blog and has not yet been converted to the new syntax.
By default NSurvey provides different kinds of answer types, for example Basic Field, Rich Field, Calendar Field, Email Field, Hidden Field, Password Field, Large Field, and dropdown lists which use XML files as their data source. NSurvey, however, also allows you to extend these types to create new answer types with specific functionality.

One of the requirements of the survey was that it had to be possible for students to select their school from a list, but also to enter it manually if it wasn't listed. To do this, I created a School answer type.

This type inherited from the regular Basic Field type, but was invisible by default. The special feature of this field was that it subscribed to a dropdown list listing all available schools plus an Other option. When the selection of the dropdown list changed, the list published the new selection to all subscribed answers. Because of this, when the Other option was chosen, the field was made visible and the school could be entered manually.

To do this, I had to implement the IAnswerSubscriber interface and use the following code for the ProcessPublishedAnswers method:

[csharp]
public void PublisherCreation(Object sender, AnswerItemEventArgs e) { }

public void ProcessPublishedAnswers(Object sender, AnswerItemEventArgs e) {
    if (e != null && e.PostedAnswers != null && e.PostedAnswers.Count > 0) {
        String selectedSchool = ((PostedAnswerData)e.PostedAnswers[0]).FieldText;
        this.ShowField = selectedSchool.ToLower().Equals("other");
        this.CreateChildControls();
    }
} /* ProcessPublishedAnswers */
[/csharp]

I also provided a modified CreateChildControls method:

[csharp]
protected override void CreateChildControls() {
    if (this.ShowField) {
        if (this.ShowAnswerText) {
            // This prevents the Answer title being displayed twice
            if (Controls.Count > 2) {
                Controls.RemoveAt(1);
                Controls.RemoveAt(0);
            }

            if (this.ImageUrl != null && this.ImageUrl.Length != 0) {
                Image selectionImage = new Image();
                selectionImage.ImageUrl = this.ImageUrl;
                selectionImage.ImageAlign = ImageAlign.Middle;
                selectionImage.ToolTip = Text;
                Controls.AddAt(0, selectionImage);
            } else {
                Literal literalText = new Literal();
                literalText.Text = this.Text;
                Controls.AddAt(0, literalText);
            }

            // Line break between the answer text and the field; the original
            // literal was stripped on import, "<br />" is assumed here
            Controls.AddAt(1, new LiteralControl("<br />"));
        }

        if (this.FieldHeight > 1) {
            // Creates a multi line field
            _fieldTextBox.TextMode = TextBoxMode.MultiLine;
            _fieldTextBox.Wrap = true;
            _fieldTextBox.Columns = this.FieldWidth;
            _fieldTextBox.Rows = this.FieldHeight;
        } else {
            _fieldTextBox.MaxLength = this.FieldLength;
            _fieldTextBox.Columns = this.FieldWidth;
        }

        Controls.Add(_fieldTextBox);
        OnAnswerPublisherCreated(new AnswerItemEventArgs(GetUserAnswers()));
    } else {
        Controls.Clear();
    }
} /* CreateChildControls */
[/csharp]

This way, the field was only shown when the published answer equaled "other"; otherwise it stayed hidden. Another version of this was the CheckBoxField answer type, which provided a field that was invisible by default and became visible once a certain checkbox was checked.



 
This post was imported from the old blog and has not yet been converted to the new syntax.
NSurvey supports matrix questions in its surveys. The only problem with this question type was that the reporting section of NSurvey listed each row of a matrix question as a possible selection, but didn't include which matrix it belonged to. This led to a very confusing list when there were several identical matrix questions that differed only in the main question asked.

The solution I had in mind was to change the output from “row question” to “matrix question – row question”. To do this, I first had to modify some stored procedures to include the ParentQuestionText field. After this I tracked down the places where the possible questions were added to the dropdown list, and added some logic that checked whether it was a matrix question and, if so, concatenated the matrix question with the row question.

One of the places where I had to do this was the BarChartReport class, which was responsible for generating charts of the rated matrix questions. In the SetQuestionData method the following piece of code could be found:

[csharp]
engine.Title.Text = Server.HtmlDecode(
Regex.Replace(_dataSource.Questions[0].QuestionText, "<[^>]*>", " "));
[/csharp]

Which I changed to the following:

[csharp]
if (_dataSource.Questions[0].IsParentQuestionIdNull()) {
    engine.Title.Text = Server.HtmlDecode(
        Regex.Replace(_dataSource.Questions[0].QuestionText, "<[^>]*>", " "));
} else {
    String questionText = String.Format("{0} - {1}",
        _dataSource.Questions[0]["ParentQuestionText"].ToString(),
        _dataSource.Questions[0].QuestionText);
    questionText = questionText.Replace(Environment.NewLine, "");
    questionText = questionText.Replace("\t", "");
    // Note: the two literals below were stripped on import; removing
    // literal <br> tags is an assumption here
    questionText = questionText.Replace("<br>", "");
    questionText = questionText.Replace("<br />", "");
    engine.Title.Text = Server.HtmlDecode(
        Regex.Replace(questionText, "<[^>]*>", " "));
}
[/csharp]

This change, together with the modified stored procedure that supplied the ParentQuestionText field, resulted in charts with the correct title.



The only thing left was to make sure this change also occurred in the HTML report and the questions dropdown list.

To do this I had to add the following piece of code to the GetQuestionListWithSelectableAnswers method in the DataAccess part:

[csharp]
foreach (QuestionData.QuestionsRow row in questions.Questions) {
    if (!row.IsParentQuestionIdNull()) {
        row.QuestionText = String.Format("{0} - {1}",
            row["ParentQuestionText"].ToString(),
            row.QuestionText);
    }
}
[/csharp]

These changes made the matrix questions display correctly, as you can see in this picture, which represents a five-question matrix.

 
This post was imported from the old blog and has not yet been converted to the new syntax.
The first thing I noticed was the small dropdown in the admin section listing all available surveys. This would become my starting point for users, a perfect place to choose the survey they wanted to take.

I tracked this down to the SurveyListControl user control, which I inherited to create SurveyListControlOverview. This user control removes the automatic postback when it's in overview mode and also provides an OverviewSurveyId property to indicate the selected survey. It also displays all surveys, because it had to run in anonymous mode, without users having to log on before being able to answer. A shared password would be provided at the event, giving access to the survey.

After this, the user could select a survey from the dropdown list. The only problem was that the choices were ordered by creation date, which would become a problem in the long run once a lot of surveys were available. To change this I added a simple ORDER BY Title to the vts_spSurveyGetList stored procedure.

At this point, I had a dropdown list with all surveys listed alphabetically to add to any aspx page I wanted.

 
This post was imported from the old blog and has not yet been converted to the new syntax.
For one of my projects, I had to create an online survey application, which would be used to gather feedback from Microsoft events. Up until then, feedback was collected by handing out a form and entering the data manually.

As I was given free choice on how to solve this problem, I suggested using an existing open-source framework and extending it to meet the requirements. This suggestion was quickly approved, because on the one hand it meant commitment from Microsoft towards open source, and on the other hand it prevented re-inventing the wheel. The project used for this solution is called NSurvey. It provides a survey framework, making it very easy to set up surveys, add questions, add users, do mailings, implement security on a per-survey level, perform statistical operations on the answers, and add new functionality by extending existing classes.



NSurvey is an ASP.NET application, written in C#, which uses a SQL Server back-end with stored procedures, and various other layers. The installation of NSurvey went very smoothly thanks to an msi file that placed all files in their correct location.

I started by testing the application and learning the overall structure of how it worked. During this small test round, I began thinking about how the final solution would look.
 
This post was imported from the old blog and has not yet been converted to the new syntax.
One of the tools I had in my toolbox was Reflector. This tool, written by Lutz Roeder, allows you to examine a .NET assembly. Through the use of reflection it can display all namespaces, classes, methods, properties, events and fields in the assembly.

It is possible to view the code in IL, C#, VB.NET and Delphi. Some of the useful features are the Call and Callee Graph.

The Call Graph shows you which items are used by a given method, while the Callee Graph displays all the methods that call a given method.







I forgot something for blog readers: the URL where to get it: http://www.aisto.com/roeder/dotnet/
 
This post was imported from the old blog and has not yet been converted to the new syntax.
In the .NET Framework there is a feature called boxing, which goes hand in hand with unboxing. Boxing is an implicit conversion of a value-type to the object type. While this is a very nice feature when programming normal applications, its overhead becomes a big performance hit when working with algorithms that need to do lots of operations, such as path finding.

When you create a value-type, it's originally placed on the stack. If you box this value-type, the required memory is allocated on the heap, the value gets copied, and a reference is placed on the stack, pointing to the boxed value on the heap.

(Figure: boxing an int)

(Figure: unboxing the int again)
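
To make the mechanics concrete, here is a minimal sketch (not from the original post):

[csharp]
int number = 42;           // value-type, lives on the stack
object boxed = number;     // boxing: the value is copied to the heap
int unboxed = (int)boxed;  // unboxing: the cast copies the value back
[/csharp]

Every Add on an ArrayList implicitly performs the second line for a value-type; every typed read performs the third.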



In a pathfinder I created long ago, in the .NET 1.1 era, a list of points had to be maintained. While this was possible using an ArrayList, it proved faster to write a typed list. The main reason was that the ArrayList's method signatures all work with objects, causing implicit boxing when using them. Retrieving items from an ArrayList also required unboxing, because the objects had to be cast back to their Point type.

I wrote a small application to demonstrate the boxing and unboxing taking place, and the performance impact. The test data were 10 million random Point values.

[csharp]// Adding to an ArrayList
for (int i = 0; i < itemsToGenerate; i++) {
arrayList.Add(rndCosts[i]);
}[/csharp]

When we take a look at the IL code this piece generates, we see the following:

[code]ldobj [System.Drawing]System.Drawing.Point
box [System.Drawing]System.Drawing.Point
callvirt instance int32 [mscorlib]System.Collections.ArrayList::Add(object)[/code]

On the second line, you can see it uses the box operation to box our value-type Point (stored in the typed Point array rndCosts) before calling the Add method.

The solution to this is using generics, available from .NET 2.0 onwards, or writing your own typed list object. As .NET 1.1 had to be used, I chose the second solution. To do this, I used Reflector to look at the ArrayList code, and used that code to create a new list, replacing all instances of object with the required type, Point in this example.
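
A minimal sketch of what such a typed list boils down to; the member set and growth strategy here are illustrative, the real class mirrored the full ArrayList code:

[csharp]
using System;
using System.Drawing;

public class PointList {
    private Point[] items = new Point[16];
    private Int32 count = 0;

    // Stores the Point directly in a typed array: no boxing
    public Int32 Add(Point item) {
        if (this.count == this.items.Length) {
            // Double the backing array when it is full
            Point[] bigger = new Point[this.items.Length * 2];
            Array.Copy(this.items, bigger, this.count);
            this.items = bigger;
        }
        this.items[this.count] = item;
        return this.count++;
    }

    // Returns a Point directly: no unboxing cast needed
    public Point this[Int32 index] {
        get { return this.items[index]; }
    }

    public Int32 Count {
        get { return this.count; }
    }
}
[/csharp]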

Now we use the same piece of test code with our PointList.
[csharp]// Adding to a typed PointList
for (int i = 0; i < itemsToGenerate; i++) {
pointList.Add(rndCosts[i]);
}[/csharp]

If we look at the IL this piece generates, we notice the following:

[code]ldobj [System.Drawing]System.Drawing.Point
callvirt instance int32 [PointList]CumpsD.Collections.PointList::Add(
valuetype [System.Drawing]System.Drawing.Point)[/code]

The Framework does not use the box operation anymore, effectively getting rid of the overhead we had with our first solution. When we take a look at the test results, we can clearly see the difference between the first solution and the second.

[code]ArrayList: added 10000000 items.
00:00:03.1845792 seconds
PointList: added 10000000 items.
00:00:00.2804032 seconds[/code]

By using the strongly typed PointList it was possible to get just a bit more performance out of the code, making the project operate better in the long run. Just for fun, I revisited this test using .NET 3.5, with generics, and created a List<Point> to store the items. I ended up with similar performance (00:00:00.2915599 seconds) to the typed list I created in .NET 1.1, pretty good considering I wrote it two years ago ;)
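
That revisited test boils down to something like this sketch, reusing the same test variables from the snippets above:

[csharp]
// Adding to a generic List<Point>: the compiler emits no box
// operation, just like the hand-written PointList
List<Point> genericList = new List<Point>(itemsToGenerate);
for (int i = 0; i < itemsToGenerate; i++) {
    genericList.Add(rndCosts[i]);
}
[/csharp]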

The main conclusion here is to avoid any non-typed collection when performance is an issue. Some brownie points for our Bits & Bytes Religion when it comes to the happy feeling you get when working with generics :)

 
This post was imported from the old blog and has not yet been converted to the new syntax.
Important note (to avoid confusion):
The text below (and the future posts) is written in the past tense, because these are texts that have to go into a report for school, which you write after your internship, describing what you did. But I'm writing it piece by piece, because otherwise it would be too much to remember, and a lot less detailed :)




For the second year in a row, Microsoft organized the Imagine Cup. This is an international contest with various categories. As part of my internship, I had to compete in the Visual Gaming and the Information Technology categories.



The IT category was about solving real-life IT problems. With questions about networks, databases and various servers, the content was really diverse.

You got 30 minutes to solve 30 questions, scoring 3 points for each correct answer, 0 for a blank answer and -1 for a wrong answer. The first 5 people of each country advanced to the next round; reaching it was the first goal of my internship.

After having spent half an hour taking the quiz, I had to wait a day for the results, to prevent abuse. My score ended up being 66/90, placing me at a shared first place in the Belgian competition. My first goal was reached.



The Visual Gaming competition was a coding challenge. In this competition you were given an SDK, which included a 2D-viewer, 3D-viewer, documentation and the required assemblies.

In the VG competition you had to write code for the robots in a small game: their brains, also called AI. The main things I learned thanks to this competition were algorithm knowledge, performance tuning and logical thinking.

Algorithm knowledge was useful to find optimal actions for the robots; for example, the A* algorithm explained above helped to find the shortest path. Other problems, such as the Traveling Salesman Problem, also had to be solved and implemented to gain a strategic advantage in the game.

Performance was a very important aspect of the VG competition. Because your code had a limited time-window to run in, it had to be very fast, to make sure it stayed inside that window. This was the reason why I implemented a binary heap in the pathfinder, and why I made a lot of performance optimizations, such as preventing boxing and unboxing, storing certain 2-dimensional arrays in a 1-dimensional array (see the sketch below), and cutting back on object initializations.
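
As a quick sketch of that last trick, with illustrative names, flattening a grid looks like this:

[csharp]
// A 200 x 200 grid stored in one flat array: a single allocation,
// and cheaper element access than a 2-dimensional array
int width = 200, height = 200;
int[] costs = new int[width * height];

// The element that would be costs[x, y] in a 2D array:
int x = 12, y = 34;
int cost = costs[y * width + x];
[/csharp]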

But knowing how the algorithms work and how to tweak for performance alone didn't do the trick. The difficulty lies in making everything work together, operating as one big unit, working its way to a victory. That's where logical thinking came in, to determine which tactics and which algorithms to use.

After having played with this for an afternoon I managed to get 1325 points, which was enough to get to Round 2 and to achieve another internship goal. My next personal goal was to score as well as possible in the second round.

(Which I will talk about when the second round is on its way :p)
 
This post was imported from the old blog and has not yet been converted to the new syntax.
A pathfinder has to be very fast, but when we take a look at the performance we notice that working with the open-list is the bottleneck.

There are several possible data structures to store the open-list in. We could use an ArrayList to store the values and keep it sorted using an IComparable interface. With this solution we end up with too much overhead from keeping the entire list sorted. After all, the only thing our pathfinder is interested in is the node with the lowest F-score; it doesn't care about the other nodes.

A better solution is a binary heap. In a binary heap, each item has up to two children with values higher than or equal to its own, which means the lowest item ends up at the top of the heap, easily accessible.



One of the nice things about a binary heap is the fact that it can be stored in a 1-dimensional array, making sorting the heap a very quick operation.

The top of the heap is always stored at index 1; index 0 is simply not used, even though the arrays are zero-based.



The children of any given item are always stored at the item’s location * 2 and the item’s location * 2 + 1. For example, in the image given above, the item with value 20 is stored at index 3 and its two children can be found at index 6 (3 * 2) and index 7 (3 * 2 + 1).
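
In code, that index arithmetic comes down to the following, using the item at index 3 from the example:

[csharp]
// Array-backed heap navigation, with the top of the heap at index 1
Int32 itemIndex = 3;                        // the item with value 20
Int32 parentIndex = itemIndex / 2;          // 1 (integer division)
Int32 leftChildIndex = itemIndex * 2;       // 6
Int32 rightChildIndex = itemIndex * 2 + 1;  // 7
[/csharp]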

Adding an item to a binary heap can be achieved by adding the new item at the end of the array and then letting the new item bubble its way up.

This is achieved by comparing the item with its parent, swapping them when the item is smaller than its parent, and repeating this until the parent is smaller or the item has reached the top.

[csharp]
public void Add(Int32 fCost) {
    // Place the new item in the first free slot at the end of the heap
    this.binaryHeap[this.numberOfItems] = fCost;

    // Bubble the new item up until its parent is smaller,
    // or until it reaches the top of the heap
    Int32 bubbleIndex = this.numberOfItems;
    while (bubbleIndex != 1) {
        Int32 parentIndex = bubbleIndex / 2;
        if (this.binaryHeap[bubbleIndex] <= this.binaryHeap[parentIndex]) {
            Int32 tmpValue = this.binaryHeap[parentIndex];
            this.binaryHeap[parentIndex] = this.binaryHeap[bubbleIndex];
            this.binaryHeap[bubbleIndex] = tmpValue;
            bubbleIndex = parentIndex;
        } else {
            break;
        }
    }
    this.numberOfItems++;
} /* Add */
[/csharp]

To remove an item from a binary heap, we simply take the item at index 1. But now we have to repair our heap, because there is a gap at the top. To fix this we take the last item and place it at the top, after which we let it sink downwards. This is done by comparing the value with its two children, swapping it with the smallest child, and repeating this until the parent is smaller than both children.

[csharp]
public Int32 Remove() {
    // Take the lowest value from the top of the heap
    Int32 returnItem = this.binaryHeap[1];

    // Move the last item to the top and shrink the heap
    this.numberOfItems--;
    this.binaryHeap[1] = this.binaryHeap[this.numberOfItems];

    // Let the new top sink down until it is smaller than both children
    Int32 swapItem = 1, parent = 1;
    do {
        parent = swapItem;
        if ((2 * parent + 1) <= this.numberOfItems) {
            // Both children exist
            if (this.binaryHeap[parent] >= this.binaryHeap[2 * parent]) {
                swapItem = 2 * parent;
            }
            if (this.binaryHeap[swapItem] >= this.binaryHeap[2 * parent + 1]) {
                swapItem = 2 * parent + 1;
            }
        } else if ((2 * parent) <= this.numberOfItems) {
            // Only one child exists
            if (this.binaryHeap[parent] >= this.binaryHeap[2 * parent]) {
                swapItem = 2 * parent;
            }
        }
        // If one of the parent's children is smaller or equal, swap them
        if (parent != swapItem) {
            Int32 tmpIndex = this.binaryHeap[parent];
            this.binaryHeap[parent] = this.binaryHeap[swapItem];
            this.binaryHeap[swapItem] = tmpIndex;
        }
    } while (parent != swapItem);
    return returnItem;
} /* Remove */
[/csharp]

A small comparison between an ArrayList and this binary heap implementation gives the following results:

[code]
Binary Heap: added 4000 items.
Time needed: 00:00:00
Lowest F-score: 1
Sorted ArrayList: added 4000 items.
Time needed: 00:00:07.2968750
Lowest F-score: 1

Binary Heap: added 10000 items.
Time needed: 00:00:00.0156250
Lowest F-score: 1
Sorted ArrayList: added 10000 items.
Time needed: 00:00:56.1250000
Lowest F-score: 1
[/code]

Inspiration and some images were taken from Patrick Lester.
 
This post was imported from the old blog and has not yet been converted to the new syntax.
The A* algorithm works with two lists, an open-list and a closed-list. The open-list contains the nodes still to be checked, while the closed-list contains the nodes that have already been checked. Each node also gets scored with F, G and H-scores.

F-score: the total cost for a node (G-score + H-score).
G-score: the movement cost from the start node to this node.
H-score: the estimated movement cost from this node to the end node.


In my demonstration program I used pixels as nodes in a grid-based map. A rule the pathfinder had to obey was that it could only move horizontally and vertically.

We start with a begin point and an endpoint on a map, and we know which nodes are not passable.



The first step in A* is to add the start point to the closed-list and examine its neighbors.

We ignore a neighbor if it's an obstacle or already on the closed-list. When it's not yet on the open-list, we add it to the open-list; when it's already on the list, we check whether the new G-score is lower than its current G-score. If the new score is lower, we change the parent of the node.



These steps are repeated until the goal-node is added to the open-list.

Thanks to the parent information of each node it is possible to reconstruct the shortest path from end to start.
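
A rough sketch of that walk; the Node type with its Parent property is assumed here, not taken from the original code:

[csharp]
// Walk the parent links from the end node back to the start,
// then reverse to get the path from start to end
ArrayList path = new ArrayList();
Node current = endNode;
while (current != null) {
    path.Add(current);
    current = current.Parent;
}
path.Reverse();
[/csharp]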



The most important piece of information for the algorithm is the node with the lowest F-score, because this is most likely the one leading to the shortest path, and it is the one that will be added to the closed-list during the next iteration.

Because the pathfinder has to be fast, the data structure used to store these values has to be very fast as well. That’s why a binary heap is used in combination with A*.

(Other data structures can be used, but this solution proved to be the fastest on a 200 * 200 map)

 
This post was imported from the old blog and has not yet been converted to the new syntax.
The first task I got during my internship was to manage a meeting of the Microsoft Student Council. This was also the start of my internship, the real thing from the beginning.

The Microsoft Student Council is a Microsoft program focused on students. Its goals are to support dedicated students who are willing to work with new technologies and are passionate about learning new things. Every student is able to join the council, without any costs. The only requirement is that the students come to Microsoft, instead of Microsoft asking students to join. In other words, if you are motivated, you can be part of it.

The council organizes four meetings a year, each with a different theme. These students also get the opportunity to attend Microsoft seminars and to get involved in discussions with Microsoft developers. Nowadays there are about 100 students who are part of this council.

Besides organizing training and meetings, the Microsoft Student Council also tries to get students involved in various competitions and challenges, trying to spark the students' interest in new things. This results in a good community feeling among the involved students, who teach each other new things.

On the 11th of February 2005 the second meeting took place, with Digital Entertainment as its theme. Due to the unexpected absence of the person organizing it, I was assigned to manage the meeting that day.

Practically this meant I had to provide the required information for everyone on the SharePoint site used by the council, provide a route description, act as a contact person for the speakers, welcome everyone at the meeting, close the meeting at the end, and make sure all speakers kept to their assigned speaking times.

I was also a speaker myself during this event. My talk was about Game Programming - AI in Games, in particular the A* pathfinding algorithm and using a binary heap as its data structure.

Pathfinding is very important in gaming; because it is used so much, it has to be as fast and efficient as possible. A good pathfinder can be used to calculate distances as well.

For navigation, the A* algorithm is used to find the shortest path.



Note to blog readers: Because I'm writing all of this in Word, I can't paste it into .Text without the layout getting screwed up, so I removed some table formatting and images. I do try to make it readable though ;)
 
This post was imported from the old blog and has not yet been converted to the new syntax.
As some of you might know, I'll be doing my internship at Microsoft.

One of the things I'm supposed to do is blog about my time there.

So, because of this, there is now an MS Internship category on my blog.

Of course not everything I do will be posted, but I'm sure there will be enough :)


I'm really looking forward to working there!