Self-referencing Generic Type Constraint

Here is a simple and nice piece of code where I found the use of a self-referencing generic type constraint useful.

This interface is used to represent a hierarchy. The generic type parameter T is constrained to IHierarchical<T>. Although at first glance this might look like recursion, of course it isn't; it simply requires the type argument to implement the same interface.
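The interface itself did not survive in this copy of the post, but based on the description it was presumably along these lines (the Parent/Children member names are taken from the side note below; treat the exact shape as an assumption):

```csharp
using System.Collections.Generic;

// Sketch of the interface described above: T is constrained to
// implement IHierarchical<T> itself, so Parent and Children are
// strongly typed to the implementing type.
public interface IHierarchical<T> where T : IHierarchical<T>
{
    T Parent { get; }
    IList<T> Children { get; }
}

// Example implementation: the constraint is satisfied because
// Category implements IHierarchical<Category>.
public class Category : IHierarchical<Category>
{
    public Category Parent { get; set; }
    public IList<Category> Children { get; } = new List<Category>();
}
```

The payoff is that a consumer of `Category` gets `Parent` and `Children` typed as `Category`, not as some weaker base type, without any casting.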

Looking around for others who use this pattern, I found this post by Eric Lippert, who more or less discourages it. Although I agree with him that the pattern can be confusing if misused, I still think it can be quite elegant and expressive in a suitable context.

I think this hierarchy example is a good fit for the pattern.

Side note on the actual use of this interface:
As I said, I use this to represent a hierarchy. In my code this interface is actually implemented by several classes that participate in several hierarchies. The Parent pointer may seem redundant, but in my case it is not: the classes that implement this interface are actually SharePoint content types, so the Children property is a multi-lookup column in SharePoint and the Parent pointer is the column at the other end of the relationship. Both values are populated by executing a single Linq2SharePoint query.

Two way data binding on a DependencyProperty

I was creating a WPF UserControl today which (among other things) exposed a DependencyProperty for use in data binding. The problem I came across was that it didn't bind two-way by default, and for a while I couldn't figure out why, nor what the slickest way around it was.

So here is the trick in case anyone stumbles upon this post while facing a similar problem. In the metadata for your DependencyProperty, don't forget to set the BindsTwoWayByDefault property to true.
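For example (a minimal sketch; the "Value" property name and the MyUserControl owner type are placeholders, not from the original post):

```csharp
using System.Windows;
using System.Windows.Controls;

public partial class MyUserControl : UserControl
{
    // The key is the BindsTwoWayByDefault flag in the metadata:
    // bindings to this property become TwoWay unless the consumer
    // explicitly specifies another mode. Equivalently, you can set
    // FrameworkPropertyMetadata.BindsTwoWayByDefault = true.
    public static readonly DependencyProperty ValueProperty =
        DependencyProperty.Register(
            "Value",
            typeof(string),
            typeof(MyUserControl),
            new FrameworkPropertyMetadata(
                null,
                FrameworkPropertyMetadataOptions.BindsTwoWayByDefault));

    public string Value
    {
        get { return (string)GetValue(ValueProperty); }
        set { SetValue(ValueProperty, value); }
    }
}
```

Note that this requires FrameworkPropertyMetadata, not the plain PropertyMetadata base class.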

Starting RSS Graffiti

A year ago, around January 2009, I was looking for a way to liven up the Fan Page I was maintaining on Facebook for Blues.Gr (the Blues social network I run on Ning). One easy thing to do was obviously to post updates from Blues.Gr to the Facebook Fan Page wall, so that people could follow what was going on over at Blues.Gr through their daily Facebook activity feeds. The best way to do that, of course, was to do it automatically: read the RSS feeds available on Blues.Gr and post any new entries to the Facebook Fan Page.

The view from my attic in Stockholm where RSS Graffiti was built at nights.

So I started looking for a Facebook application that would do just that. To my amazement I discovered that I could not find any application that would do the job in a decent way (by my definition of “decent”, anyway).

Existing applications that I did not like enough included Social RSS, Networked Blogs, Involver, Facebook Notes and some others that I wouldn’t even consider.

I didn’t like Social RSS mainly for two reasons:

  1. I don’t like ugly user interfaces (they don’t make me feel good about myself, my taste etc.)
  2. It didn’t (at the time) use the Facebook Stream API and thus didn’t provide any distribution of posted stories to the news feeds of the Page’s fans etc.

I didn’t like Networked Blogs enough also mainly for two other reasons:

  1. It was a pain for me to claim the RSS feeds from my social network due to a number of silly restrictions that were out of my control including restrictions imposed by Networked Blogs itself.
  2. I felt it was built for a different purpose than the one I needed it for and frankly I didn’t even like the whole concept enough.

I rejected the Involver apps because, although they were an obviously serious professional effort, I found them ridiculously expensive for my not-for-profit activity on Blues.Gr, and I was also unhappy that their functionality was broken down into many smaller applications sold separately.

Facebook Notes was using the Stream API but it was not doing what I wanted. I wanted to post news from my site and link directly back to my site from my Fan Page. Notes was copying (importing) the news from my site to Facebook and then distributing links to those copies instead. That’s not what I wanted. Moreover, it didn’t look nice either: the formatting of the content got mangled during import and display, and the whole thing was just out of the question.

So I thought why not write my own application for the job anyway? How hard could that be?

I started (on January 19th, 2009) by writing a post on the internal blog I maintain with my business partner for logging ideas, and a few days later we decided to work on the project.

So RSS Graffiti was born. We initially called the project RSS Minifeeder (because at the time Facebook called what is now the “news feed” the “minifeed”). Later, looking for a better name, I came up with the word “graffiti” as representing the activity of writing on a wall (often in an aesthetically pleasing way). The .com domain was free and time was running out, so the name “RSS Graffiti” was coined, even though it is not the clearest, most straightforward name for what the application does.

Dimitris and I started working on RSS Graffiti on January 21st, 2009, devoting 16 man-hours per week to the project. RSS Graffiti version 1.0 Alpha was released 24 calendar weeks later (roughly 400 man-hours). RSS Graffiti version 1.2.1 Beta was the first version to be listed in Facebook’s Application Catalog (on August 22nd, 2009) and essentially the first version exposed to the open public (we therefore consider August 22nd, 2009 the starting date of the application).

Four months later, RSS Graffiti is added to 17,000 walls and actively publishes stories from 17,000 feeds to the walls of 2,500 Facebook Profiles, 5,500 Facebook Fan Pages and 500 Facebook Groups. These numbers are rounded to be easier to read; they change by the minute anyway.

RSS Graffiti is still in Beta (current version is 1.8.0 Beta at the time of writing this) and is available as a free beta service while plans for premium services are also being considered for Q1 2010.

ECDIS software

I am working on two ECDIS projects in parallel. Both have to do with monitoring sea traffic.

As some details of the projects are confidential, I am not currently able to release any details on the projects themselves. But I can talk about my work in those two projects.

Project A:

The goal is to create a system that helps track ships involved in, or responsible for, different types of events that can happen at sea. For instance: an oil spill is located and reported somewhere, and the authorities need to identify the vessels suspected of causing the environmental damage.

The project approaches the problem by creating a system that allows for sea traffic monitoring and then correlates that information with earth observation (EO) data to produce a list of suspect vessels. Vessels are ranked by certain qualitative, quantitative and spatial criteria to help the user make the final decision and identify the offending vessel, using the hints provided by the system along with his/her experience and best knowledge.

What I am building for this project is a custom ECDIS system that implements all the functionality required for this application.

Sea traffic is monitored using a network of AIS receivers. AIS messages are temporarily recorded in local databases and transmitted in real time to a central database through web services over a VPN.

The ECDIS software I am building uses AIS signals to mark the positions of different vessels at specific points in time. It also provides a mechanism for importing and recording vector data that come from EO sources (processed satellite imagery). EO data are used to identify event locations and possibly other targets detected in the area of the event during data capture.

The concept of the software is described below. I will post some screenshots or a screencast to support the description when possible:

The application window is divided vertically into two panes. The main pane (on the left) is the map pane where the ENC is displayed. The second narrower pane on the right is used for context sensitive information display. Both panes are tabbed for better information and functionality grouping. Application commands are available through main and context menus and toolbars.

Below the ENC pane there is a set of playback controls much like those you see in video players: a Play/Pause button, a timeline you can scroll, a playback speed selector etc.

All data recorded in the system (be they AIS messages, EO data or Events) are tied to a specific point in time and space.

The concept is that while the system's database maintains historical sea traffic data over long periods of time, the user only needs to focus on the specific subset of those data related to a particular event. To enable this approach, the application lets the user select one of the recorded events to focus on. Focusing on an event implicitly means filtering AIS and EO data to a specific point in time and space; more accurately, around a specific point in time and space. This way the system loads only relevant data from the database, which makes processing faster and consumes fewer system resources. The timeframe and area are selected either implicitly or explicitly by the user while choosing the event under investigation.

So keep in mind two concepts here:

  • the "investigated time-frame", which is essentially a period determined by a starting date/time and a length (duration)
  • and the "investigated area" determined by the central location of the event and a range in nautical miles around it.
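As a sketch, the two concepts boil down to a simple predicate over timestamped, geo-located records (all type and member names here are illustrative, not from the actual system):

```csharp
using System;

// Illustrative record: every AIS/EO item carries a time and a position.
public class TrackPoint
{
    public DateTime Time;
    public double Lat, Lon; // degrees
}

public class Investigation
{
    public DateTime Start;              // start of the investigated time-frame
    public TimeSpan Duration;           // its length
    public double CenterLat, CenterLon; // event location
    public double RangeNm;              // investigated area radius, nautical miles

    // A record is loaded only if it falls inside both the
    // time-frame and the area around the event.
    public bool Covers(TrackPoint p) =>
        p.Time >= Start && p.Time <= Start + Duration &&
        DistanceNm(CenterLat, CenterLon, p.Lat, p.Lon) <= RangeNm;

    // Great-circle distance via the haversine formula, in nautical miles.
    static double DistanceNm(double lat1, double lon1, double lat2, double lon2)
    {
        double Rad(double d) => d * Math.PI / 180.0;
        double dLat = Rad(lat2 - lat1), dLon = Rad(lon2 - lon1);
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(Rad(lat1)) * Math.Cos(Rad(lat2)) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return 2 * 3440.065 * Math.Asin(Math.Sqrt(a)); // Earth radius ≈ 3440 nm
    }
}
```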

All data that fall into the selected time and space are loaded into in-memory data sets, but not all of those data are displayed concurrently in the ENC pane. The AIS and EO data displayed in the ENC pane at any given time are a subset of the loaded data, determined by:

  • the "focus time"
  • the length of the "visible time-frame"
  • and the view-port (which is identified by its center coordinates and range)

The "focus time" is equivalent to the playback position in a video player. As the user scrolls the timeline control to the left or right, the focus time changes.

The length of the "visible time-frame" refers to the time span before focus time during which all recorded signals should be visualized. For instance if the visible time-frame is set to 1 hour then the track behind a vessel's "current" position will be displayed for the last hour ("current" being determined by "focus-time").

The "view-port" is nothing more than the visible area of the ENC and is defined by panning and zooming the map.
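Putting the three together, the visible subset can be sketched like this (again, illustrative names; for simplicity I model the view-port as a lat/lon box, whereas the actual system identifies it by center coordinates and range):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Signal
{
    public string Mmsi;   // vessel identifier
    public DateTime Time;
    public double Lat, Lon;
}

public static class Display
{
    // Of all loaded signals, show only those that fall inside the
    // visible time-frame ending at the focus time AND inside the
    // current view-port.
    public static List<Signal> VisibleSubset(
        IEnumerable<Signal> loaded,
        DateTime focusTime,
        TimeSpan visibleTimeFrame,
        double minLat, double maxLat, double minLon, double maxLon)
    {
        return loaded
            .Where(s => s.Time <= focusTime &&
                        s.Time >= focusTime - visibleTimeFrame)
            .Where(s => s.Lat >= minLat && s.Lat <= maxLat &&
                        s.Lon >= minLon && s.Lon <= maxLon)
            .OrderBy(s => s.Time) // oldest first, so tracks draw tail-to-head
            .ToList();
    }
}
```

Scrolling the timeline just moves `focusTime` and re-evaluates this filter, which is what makes the playback cheap once the investigated data are in memory.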

You are probably getting the "big picture" by now: a system that plays back what happened around an event (e.g. an oil spill) and lets you watch it as you would watch a video. What you see is what you would see if you were flying over the event in a plane at the selected time, only mapped onto the ENC, loaded with useful information and, of course, interactive.

During playback, you can move around the map by zooming and panning, change the playback speed and generally interact fully with the application (all functionality remains available).

By pointing your mouse at a vessel's latest signal or track, you can see relevant information in the right pane of the application. The information available for each vessel includes all fields of AIS message type 5 (ship static and voyage information), all fields of AIS message types 1, 2 & 3 (position reports), and derived information, based on algorithms and ranking databases, that helps classify the ship and rank the probability of it being the offending vessel.

At the time of writing, this project is in its final stages. It has been demonstrated to the customer and, given their satisfaction, is pending some further development and optimization before it is officially presented and delivered.

Project B:

Project B is an entirely independent project from Project A. Nevertheless, it is so closely related in context that it is being developed in parallel. So far I have not even seen a need to branch the first project's code; minor behavioral differences are handled very effectively through configuration files.

The goal of this project is to use AIS, VTS radar and EO data to identify certain types of vessels in the context of naval security.

AIS, VTS and EO data are correlated with data fusion algorithms and the results are again ranked by risk level.

My work in this project involves the creation of the visualization console. Data acquisition and fusion are handled by other project parties, and the results of their work are simply input data for my application.

The main difference from Project A is that this time, the software must be used mainly for near real-time monitoring. Playback is just a useful feature.

This project uses more sophisticated data sets and also includes estimated data. There are also considerable differences in data formats. All of these were handled during the design phase of the software, and provisions were made so that it can read a wider variety of data sources.

This project is also approaching its demonstration phase.

Technologies used in both projects include:

Apart from the above technologies these projects required extensive understanding of ECDIS, ENCs, AIS, VTS and EO related literature and of course engineering know-how in both software and earth sciences.

Both systems are being developed using Microsoft Visual Studio 2005 and C#.


This one is a personal project. It started a few years ago (somewhere in mid 2003), when I created my previous web site (same address, older technology), which was based on Windows SharePoint Services version 2.0.
The goal of the project was to create a single-source CV for multiple platforms and applications:

  • Edit the CV content only once
  • Maintain the CV in two languages (English & Greek)
  • Use everywhere
    • web for online reading,
    • MS Word & Adobe PDF for distributing and printing,
    • whatever other application comes along, as needed

The obvious solution was to use XML and XSLT.
After browsing around for standards I discovered (back then) the XML Résumé Library project, which defined an XML vocabulary for CVs along with a set of tools for visualizing and printing them.
I didn't like the tools they provided as I wanted to approach the whole thing "the Microsoft way". So I just took the DTD from there.
The first thing I tried was to create an InfoPath form from the DTD, but converting the DTD to XSD did not yield a reasonably solid schema. It would have been nice to have an InfoPath form for editing my CV, but the time needed to produce a solid result wasn't worth it. So I scrapped that effort and decided to edit straight XML.
Next thing I had to do was create an XSL transformation to visualize my CV on my web site. I wanted to maintain the layout and style of the CV I already had in Word format so I created the XSLT from scratch.
Now I had another problem to solve: I needed two versions of the CV in two different languages. There was no provision for multilingualism in the XML Résumé Library DTD, so I had three options:

  • to alter the DTD,
  • trick it somehow (using the "targets" attribute),
  • or just maintain two different XML sources one for each language.

I opted for the third approach (because actually I did not think of the second one at the time).
Maintaining two XML files was not the actual "problem". The problem now was maintaining a single XSLT, because apart from the content provided in XML I also had to translate the static text of the CV (labels etc.).
To do that I used a separate XML file (which I named Resources.XML) with a schema I defined for the purpose. This file included all static text in translated versions distinguished by a "language" attribute. The Resources.XML file was included by the XSLT using xsl:include and referenced wherever needed, passing a parameter that specified the selected language.
So far I had the following files:

  • "My Resume.Greek.XML" containing the Greek version of my CV.
    This file had to be edited every time I needed to update my CV in Greek.
  • "My Resume.English.XML" containing the English version of my CV.
    This file had to be edited every time I needed to update my CV in English.
  • Resources.XML containing the labels used in my CV localized in both languages.
    This was a static file created once and never really had to be altered. Here is a sample part of the file:
  • "My Resume.XSL" containing all the XSL transformation required to convert either language source of my CV to DHTML. This was a static file created once and only had to be altered whenever I needed to improve the style and layout of the output. Here is the rough structure of the XSL file:
  • "My Resume.Greek.XSL", a minimal file that just sets a variable indicating the selected language to Greek and includes "My Resume.XSL" to do the actual transformation. This file is static and never needs to be edited either. Here is the content of this XSL file:
  • "My Resume.English.XSL", a minimal file that just sets a variable indicating the selected language to English and includes "My Resume.XSL" to do the actual transformation. This file is static and never needs to be edited either. Its content is analogous to its Greek equivalent displayed right above; it just changes the value of the "language" variable to English.
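The samples themselves did not survive in this copy of the post, but based on the description above, the Greek wrapper presumably looked roughly like this (element and variable names are my guesses, not the originals):

```xml
<!-- "My Resume.Greek.XSL" (sketch): set the language, then include
     the shared stylesheet that does the real work. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:variable name="language" select="'Greek'"/>
  <xsl:include href="My Resume.XSL"/>
</xsl:stylesheet>
```

and Resources.XML would carry every label in both languages, distinguished by the "language" attribute, along these lines:

```xml
<resources>
  <label id="experience" language="English">Work Experience</label>
  <label id="experience" language="Greek">Επαγγελματική Εμπειρία</label>
</resources>
```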

These are the basic elements of my first XML CV solution. In practice I maintained a different XSLT for use in MS Word because the XSLT for the web included DHTML interactivity (JavaScript) and slightly different styling than what looked best for print.

All this was not as easy or straightforward as it sounds. Problems I faced included:

  • Issues with MS Word integration:
    • CSS needed some tweaking to produce the results I wanted in Word.
    • I also had to have an automatically updating Word document. So I used a Word field to include the XML and transform it on the fly, and a Word macro to automatically update the field every time someone opened the file. Here is the field code:
  • Issues with SharePoint integration when moving to WSS 3.0:
    • See this relevant post for a clue.
  • Issues with PDF transformation:
    I never really tried to solve this one. I am still creating PDF versions by hand (by saving to PDF through MS Word 2007). I will have to look for an automated solution for this one in the future.
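The Word field code mentioned above did not survive in this copy of the post. Word's INCLUDETEXT field can apply an XSLT to an included XML file, so it was presumably something like the following (the paths are placeholders and the \t transform switch is my recollection; double-check it against Word's field documentation):

```
{ INCLUDETEXT "C:\\CV\\My Resume.English.XML" \t "C:\\CV\\My Resume.Word.XSL" }
```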

All these pretty much remain under investigation, since they need time and effort that, for the moment, are not practically justified.

So this about sums up the first phases of this project, which brings us to 2008. Many things have changed since 2003, when all this started, and even since 2006, when migration to WSS 3.0 made me re-investigate some of the project's details.

Today we have new things like Europass, hResume and microformats, the HR-XML specs, LinkedIn, Xing and other Web 2.0 stuff. So the project is being revisited these days, in whatever spare time I can get hold of for it.

What do I currently do?

  • I am making a new XSL to convert from XMLResume to Europass layout.
    This is being done purely for practical reasons.
  • I am considering the problems of integrating with Europass specs in general.
    This has a lot of implications, as the two formats have fundamental differences. HR-XML is also being considered along this path.
  • I am about to implement hResume in my existing and new XSL transforms.
  • I am considering the problem of integrating with LinkedIn.

There are a lot of thoughts on these issues but I will not make more comments on them until I feel I have something concrete to say about them.

Can't change access modifiers when inheriting from Generic Types.

Well, I might be slow, but I had not run up against this one until just now:

You cannot change the accessibility level of a class member by means of hiding when inheriting from a generic type.

Consider this example: a simple console application using two classes, where MyList inherits from List<T> and My2ndList inherits from MyList. The generic type List<T> has a public method named Add that is not declared virtual.
My intention was to completely hide the base implementation of the Add method in my derived class. In other words, let's assume that I want the MyList class not to expose an Add method. What one would normally do in this case is hide the method using the new modifier and change its access modifier from public to private, as I tried to do in line 32 of the code snippet that follows.
Well, guess what: this does not work if you are inheriting from a generic type. Try the code below, then play around with the access modifiers in lines 32 and 47. Although I would normally assume that the code below would not even compile, it does!
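The original snippet is missing from this copy of the post, so here is a minimal reconstruction of the situation (class and member names are mine):

```csharp
using System;
using System.Collections.Generic;

public class MyList : List<int>
{
    // Attempt to hide the public List<int>.Add with a private member.
    private new void Add(int item)
    {
        throw new InvalidOperationException("Add is hidden");
    }
}

public class Demo
{
    public static void Main()
    {
        var list = new MyList();
        // This compiles: since MyList.Add is inaccessible from here,
        // member lookup skips it and binds to List<int>.Add, so the
        // "hidden" method is silently bypassed.
        list.Add(42);
        Console.WriteLine(list.Count); // 1
    }
}
```

As a side note, I believe the same thing happens with non-generic base classes too: C# member lookup simply ignores inaccessible members at the call site, so hiding cannot reduce a base member's effective accessibility.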

So why am I posting about this?

Well... I didn't come across any comments on the subject on the Internet in the little looking around I did, and I thought it was an interesting thing to talk about.

If you know of any links discussing the subject and explaining the internals of the compiler or generics implementation in C#, then by all means please do leave a comment to this post.

Multiple calls to RegisterOnSubmitStatement and Client-Side Validation

Ok! Here is a new thing I discovered yet again the hard way...

In short: Do not call Page.ClientScript.RegisterOnSubmitStatement after the Page Load event.


Well yes! You can't notice the difference under all circumstances, but it's there and it's major!
I don't really want to just describe this, so I'll take you through it with an example:
Let's say you have an aspx page. The page has two controls in it. For simplicity, let's make those controls UserControls. The controls are pretty simple: just a TextBox and a RequiredFieldValidator in each of them.

So there you have it:

Control A (let's call it OnSubmitControlA):
and the code file:

Control B (let's call it OnSubmitControlB):

and the code file:

And finally the page itself:

(the codefile has nothing special in it...)

The page has of course a submit button so that we can submit and test it...

So! What we have here!?
  • A Page,
  • two controls that want to run client-side code just before the page submits (for no particular reason)
  • and at least a Validator Control that will fail validation at some point. (If we did not have a validator then I would not have a case here!)
Now go render the page and see the result. If you leave either TextBox empty and click the submit button, you will notice that only the alert from the first control pops up. The other registered script is never called...

Now go back and make a slight change. In both controls' code files, move the call to Page.ClientScript.RegisterOnSubmitStatement from OnPreRender to OnLoad, like this:

do the same on the other control:

Done! Go back and render the page! Leave either TextBox empty and click submit... See??? Now both alerts pop up!!!

Why is that???

Well, look at the source of the rendered page before and after the change to see what's going on:

Here is the script rendered when the call to RegisterOnSubmitStatement is placed in the OnPreRender event:

And here is the script rendered when the call to RegisterOnSubmitStatement is placed in the OnLoad event:

Got it?

If RegisterOnSubmitStatement is called after OnLoad, then on the first call the framework appends the statement that calls ValidatorOnSubmit() and returns false if validation fails (effectively blocking the rest of the script from executing). Subsequent calls to RegisterOnSubmitStatement (after OnLoad) are appended after the script generated by the first call, and get blocked by the effect I just described.

Instead, if all your calls to RegisterOnSubmitStatement come before the end of the OnLoad phase, then all registered scripts are appended to previously generated scripts before the eventual injection of the call to ValidatorOnSubmit().
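So the safe pattern, in a sketch (the control name and the registered script are illustrative, since the original listings did not survive in this copy):

```csharp
using System;
using System.Web.UI;

public partial class OnSubmitControlA : UserControl
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        // Register here, not in OnPreRender: scripts registered by the
        // end of the Load phase end up before the validation check that
        // can short-circuit the onsubmit handler.
        Page.ClientScript.RegisterOnSubmitStatement(
            GetType(), "OnSubmitControlA", "alert('A is submitting');");
    }
}
```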

Hoping for comments on this...

xsl:include, xsl:import & msxml:script blocked on WSS 3 XML WebParts

I discovered today (the hard way) that we can no longer use xsl:include and xsl:import elements in XSL transformations for Windows SharePoint Services 3.0 XML WebParts.

Along with those, msxml:script is also blocked.

I am going to investigate this further and see if there is an administrative way to bypass this security restriction. XSLT code reuse is a useful thing and I want to keep it as an option in some environments.


I am including two relevant links I found on the web:
I would appreciate comments from people who know more on the subject.

Moving to WSS 3

I am in the process of replacing my old personal site with a new version based on Windows SharePoint Services 3.0.

My previous site is based on Windows SharePoint Services 2.0, and is currently located at

Though I could gradually upgrade the old site to the new version of WSS, I opted not to, because over the last few months I have been considering restructuring my sites completely. I am not getting into any more details on this right now, but the bottom line is that I ended up with far too many personal sites, which rendered my main site almost useless. On the other hand, I need a WSS-based site in order to enjoy the goodies of WSS, like custom lists and document sharing with my family, friends and colleagues.

So what you can expect to see here is either a private site or a public site replacing the current versions of and

CS:Editor ViewState

Quick Tip:
CommunityServer 2.1 uses TinyMCE as a rich text editor, through what appears to be a wrapper control: CommunityServer.Controls.Editor. As I am developing an application on top of CS 2.1, I am using the same rich text editor and encountered the following problem:
I was setting the Height property of the Editor control declaratively, but it could not retain its value after postbacks.
Look at the ASPX part:

So far so good but, after a postback the Editor control returned to its default height (messing-up my layout).

To make a long story short, it appears that something is wrong either with timing or with the implementation. I mean that either the wrapper or the wrapped control does not assign the value of Height before ViewState is saved, or assigns it to a property that is not saved in ViewState, or assigns it during an event that fires before LoadViewState.

So what I did to work around the issue was simply disable ViewState on the Editor control.
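In markup, the work-around amounts to something like this (the tag prefix and attribute set are assumptions; use whatever prefix your page registers for CommunityServer.Controls):

```aspx
<%-- Height stays declarative; with ViewState disabled, LoadViewState
     can no longer overwrite it after a postback. --%>
<CSControl:Editor id="Editor1" runat="server"
    Height="400px" EnableViewState="false" />
```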

The problem went away, which means that LoadViewState was overwriting my value.

I have not devoted any time to looking further into this matter, so I'm just posting the work-around; if I do, I'll let you know the details.

UseSubmitBehavior & GetPostBackEventReference

funny thing…

I needed a button (in an ASP.NET 2.0 page) to post-back and raise an event for another control. That means that I click a button but the post-back event is not handled by the button clicked but instead by another control on the same page.

The easy way was to add a System.Web.UI.WebControls.Hyperlink and set its NavigateUrl property to Page.ClientScript.GetPostBackClientHyperlink(myOtherControl, "Arguments")
But I wanted it to look like a Button so instead I added a System.Web.UI.HTMLControls.HTMLButton Control and did the same thing to its OnClientClick property.
All worked fine.

The funny thing happened when I decided that I like most to use a System.Web.UI.WebControls.Button instead.

ASP.NET Buttons are rendered as <input type="submit" /> controls, and by default their client-side onclick event is wired to the __doPostBack JavaScript method by ASP.NET.
To change that behavior in ASP.NET 2.0, we are supposed to use the UseSubmitBehavior property, which, when set to false, causes the Button to be rendered as <input type="button" /> and allows you to set the client-side onclick event programmatically (or declaratively) through the OnClientClick server-side property.

That's what I did, only to discover that whatever I assigned to the OnClientClick property, the original __doPostBack call was automatically appended by the framework.
I did not spend all the time in the world trying to figure this out, and as I failed to find a reasonable way out of the problem, what I did was trick the client side by appending a JavaScript “return;” statement to the script I was assigning to OnClientClick.

To make all this more obvious, here is the hands-on part:
in the aspx/ascx file:

Notice that lines 5 and 6 above define the Hyperlink and HTMLButton that worked fine and line 7 defines the WebControls.Button that did not behave. Lines 1 to 3 define a DetailsView just because that is the control I wanted to handle the post-back event.

Now in the code file:

These get rendered as follows at run-time (if you look at the source in the browser):

This works just fine if you click the Hyperlink1 or Button1 controls, but not at all if you click Button2. That is because Button2 calls __doPostBack twice: once because I asked it to (by assigning its OnClientClick property as you see in line 4 of the code-file above) and once because the framework automatically inserted the default post-back event reference for the button (Button2) itself.

To overcome this behavior (as I could not find another way around it) I just changed the last line in the code-file (line 4 in the snippet) by appending a javascript return statement as follows:
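The code-behind snippet did not survive in this copy of the post, but the change presumably amounted to something like this (control names follow the post's description; the "Insert" argument is illustrative):

```csharp
// Before: the framework-injected __doPostBack for Button2 still
// runs after the script we assign here.
Button2.OnClientClick =
    Page.ClientScript.GetPostBackEventReference(DetailsView1, "Insert");

// Work-around: append "return;" so anything the framework appends
// after our script never executes.
Button2.OnClientClick =
    Page.ClientScript.GetPostBackEventReference(DetailsView1, "Insert")
    + "; return;";
```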

This caused Button2 to render as follows:

Notice the return statement (in bold) rendered between the two calls to __doPostBack. Although this does not prevent the second __doPostBack call from being inserted by the framework, it does prevent it from executing…

If you know a proper way out of this little puzzle, please let me know…


  1. As you probably noticed, what I needed was an Insert button that would put a DetailsView into Insert mode even when the view has no items to display. The button (when clicked) causes the DetailsView to receive an Insert command.
  2. Of course, fortunately, you do not need to do all this just to make a DetailsView enter Insert mode. It is enough to add any button with a server-side Click event handler that changes the mode by calling DetailsView1.ChangeMode(DetailsViewMode.Insert);

ASP.NET 2.0 Experiences

With this post I am -hopefully- opening a series of posts based on my experiences with ASP.NET 2.0. This post is itself incomplete at the moment.

I have been programming ASP.NET since version 1.0 and did most of my work in ASP.NET 1.1. Only recently, and just after the release of Visual Studio 2005 and the .Net Framework 2.0 I started working in ASP.NET 2.0. I have two initial goals to accomplish (apart from learning all the new and changed features of the new release):

  • I want to port my version 1.1 code libraries to ASP.NET 2.0
  • I want to create a web project I had in mind for the last couple of months in ASP.NET 2.0

In the future I will probably work in ASP.NET 2.0 exclusively and I also might want to port some of my version 1.1 apps to version 2.0.

My first findings so far:

  • ASP.NET 2.0 includes a framework for web site membership, role-based security and personalization. It is based on the same authentication and authorization mechanisms as version 1.1, but introduces the use of a data store and a "provider" API to support the new features. Previously I used my own custom implementation of such features, which the new version makes somewhat obsolete. I am already trying membership, roles and profiles, and finding them easy and helpful. I'll get back to you with more specifics on this… to be continued…

My Funny Surprises With .NET 2.0

As I was developing applications for .Net Framework 1.0 and 1.1, I was building a couple of code libraries with features I found smart and useful for my apps.

Being an active member of a Greek .NET developer community often nicknamed DNZ, I recently decided to share some of that code with the other members.

The idea was to do that while porting the code to .Net Framework 2.0 through Visual Studio 2005.

Now, what’s so funny about that?

First thing I wanted to share was my Wizard classes. I implemented them in the context of a web project and used to find them brilliant.

Using SourceSafe, I started today pinning and branching code to a new VS 2005 solution that would contain the part of my libraries related to Wizards.

“Come on! Where is the funny part?” you would still ask!

Well, I compiled it and got the following compile-time errors:

  • Error 1: 'Wizard' is an ambiguous reference between 'System.Web.UI.WebControls.Wizard' and 'Softbone.Shared.Wizards.Wizard'
  • Error 2: 'WizardStep' is an ambiguous reference between 'System.Web.UI.WebControls.WizardStep' and 'Softbone.Shared.Wizards.WizardStep'
  • Error 3: 'WizardStepCollection' is an ambiguous reference between 'System.Web.UI.WebControls.WizardStepCollection' and 'Softbone.Shared.Wizards.WizardStepCollection'

(Note: Softbone is a "brand" name I am using, and Softbone.Shared is the namespace of the shared code library I am creating while porting my code from .NET Framework 1.1 to 2.0.)


Not only did Microsoft implement the same functionality as I did, but they just happened to have chosen the same class names as I did!
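(Resolving such name collisions is straightforward, by the way: a using alias lets both types coexist in the same file. The alias names here are just an example.)

```csharp
// Disambiguate the colliding type names with namespace aliases;
// an unqualified 'Wizard' would be an ambiguous reference.
using WebWizard = System.Web.UI.WebControls.Wizard;
using MyWizard = Softbone.Shared.Wizards.Wizard;
```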

Well that, I thought was funny enough to mention…

So, it might be the case that my library is obsolete with the new version of the .NET Framework, but here is what is more interesting about it now than ever: I have to check my implementation against Microsoft's to see where mine falls short, or even whether I did some things better or smarter than Microsoft did.

So I will be posting my code soon to my main blog and my .NET blog in DNZ, and keep you posted here in case I have more interesting or funny encounters while porting my Wizard library to .NET Framework 2.0.

I ‘ll let you know….


Reading a recent post, I decided to implement, as a proof of concept, an algorithm using regular expressions to convert a decimal numeral representing an amount of money to a verbal form in Greek text.

So, to make it more obvious, the problem was to convert 1,234,567.89 Euros to the string "One Million, Two Hundred Thirty-Four Thousand, Five Hundred Sixty-Seven Euros and Eighty-Nine Cents" (only the text should be in Greek and not in English as I typed it here, to make the concept clear for everyone).

The algorithm I came up with is not the best I could do, but it is enough as a proof of concept. Do not be alarmed if it seems too long at first glance; it's just the code comments that make it so.

Here it goes:
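To make the idea concrete, here is a minimal sketch of the grouping logic in C# (with English words and plain arithmetic rather than the Greek, regex-based version described above):

```csharp
using System;

// Minimal sketch: spell out a money amount (up to 999,999,999 euros) in English.
static class MoneyToWords
{
    static readonly string[] Ones =
    {
        "", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine",
        "Ten", "Eleven", "Twelve", "Thirteen", "Fourteen", "Fifteen", "Sixteen",
        "Seventeen", "Eighteen", "Nineteen"
    };
    static readonly string[] Tens =
    {
        "", "", "Twenty", "Thirty", "Forty", "Fifty", "Sixty", "Seventy", "Eighty", "Ninety"
    };

    // Spell a single three-digit group (0..999); returns "" for zero.
    static string Group(int n)
    {
        string s = "";
        if (n >= 100) { s += Ones[n / 100] + " Hundred "; n %= 100; }
        if (n >= 20)  { s += Tens[n / 10] + " ";          n %= 10;  }
        if (n > 0)      s += Ones[n] + " ";
        return s;
    }

    public static string Convert(decimal amount)
    {
        int euros = (int)decimal.Truncate(amount);
        int cents = (int)((amount - euros) * 100);
        string s = "";
        // Break the integral part into three-digit groups, largest first.
        if (euros >= 1000000) { s += Group(euros / 1000000) + "Million ";  euros %= 1000000; }
        if (euros >= 1000)    { s += Group(euros / 1000)    + "Thousand "; euros %= 1000; }
        s += Group(euros) + "Euros";
        if (cents > 0) s += " and " + Group(cents) + "Cents";
        return s;
    }
}
```

Calling MoneyToWords.Convert(1234567.89m) yields the English equivalent of the example above; the Greek version additionally has to handle grammatical gender and the different word forms, which is where the regular expressions earn their keep.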

You can also find this in the relevant blog article (in Greek).

MetaBlogAPI for CS 1.1

I wanted to test w.Bloggar and BlogJet today for posting to a CS 1.1 blog. MetaBlogAPI comes as a separate download from CS 1.1, so I tried to find and get it. Believe me, I had a really hard time!

After an hour or so of googling around for it, a friend finally pointed me to it.

Here it is:

If you need it, go get it…

The other versions found in various posts are usually older and will just give you a "Method not found: System.Collections.ArrayList CommunityServer.Blogs.Components.Weblogs.GetWebLogs(CommunityServer.Components.User, Boolean, Boolean, Boolean). (EGetRecentError)" error message.

I wish they had a more standard place for it on the web than just a forum post…

JavaScript - The World's Most Misunderstood Programming Language

An incredible article on JavaScript!

I have been programming in object-oriented languages for a very long time (since OO Turbo Pascal in the late '80s). The truth is that I myself had probably misunderstood JavaScript. Why? Two sets of reasons: a) all those reasons described in the article mentioned below, and b) as a result of (a), I devoted no time to studying it.

Well, better late than never: From now on I am going to think twice when I need a piece of functionality on the client.

Do read this article by Douglas Crockford:

JavaScript - The World's Most Misunderstood Programming Language

P.S. Don’t miss the link in the article above to the code for inheritance in JavaScript. I am not going through the cons of JavaScript here, but on the pros side, compare it to the inheritance models supported by C# and Java.

BTW: What do you think of multiple inheritance? We used to be able to use it in the C++ days (C++ is still around, but anyway). Do you miss it in C# or Java? (I mean multiple inheritance of classes, not interfaces, i.e. class C inheriting from both class A and class B. In C# and Java you can inherit from at most one class and implement as many interfaces as required, but of course interfaces do not include base implementations…)

Also: Can you give me your best example where you really missed multiple inheritance?

VS.NET Copy Project

Here is a solution for an issue I ran into today.
In VS.NET 2003 I tried to use the "Copy Project" feature to copy (deploy) the files required to run a web application to another web server. Nevertheless, I got an error saying (more or less): ..."error occurred while copying the project" ... "Visual Studio .NET has detected that the specified Web server is not running ASP.NET version 1.1. You will be unable to run ASP.NET Web applications or services".
I was evaluating Community Server 1.1 and I had a full source installation on my Windows XP workstation altering bits and pieces to customize the application. I had previously installed a binary installation of the same product to my Windows 2003 Web Server.
First I made sure ASP.NET was running on my remote web server. Just to be on the safe side I even ran aspnet_regiis -i, but it did not fix the problem. I had never encountered an issue like that, so I googled for it. I found similar problems, but all had solutions I had already tried with no luck. So mine had to be different.
I figured out what was wrong when I tried to find out what VS.NET was doing on my workstation to determine the ASP.NET version of my remote web server. It turned out that VS.NET issued an HTTP request to a URL of the form http://<server>/get_aspx_ver.aspx.

This file (an actual ASPX page named get_aspx_ver.aspx) does not exist, so the server returns the well-known "resource not found" error page. But ASP.NET's default error page includes the .NET Framework version information, which VS.NET in turn uses to verify the version. Silly but true!
Now! The problem was that Community Server had custom error pages defined by default, so the version information never made it back to VS.NET. Suddenly it occurred to me, and here is what I did to solve the problem:
I went to my remote web server, and added the following lines to the web.config of my application:
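The idea is to switch the custom error pages off for that one probe page, with a location override along these lines (the exact lines I used may have differed slightly):

```xml
<!-- Let the default ASP.NET error page (which carries the framework
     version information) through for VS.NET's version-probe request only. -->
<location path="get_aspx_ver.aspx">
  <system.web>
    <customErrors mode="Off" />
  </system.web>
</location>
```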

P.S.: I also added the same lines to the web.config on my workstation, since I was about to overwrite the web.config on the server. Remember, I was originally trying to copy the files needed to run the project from my workstation to the web server using the Copy Project feature of VS.NET.

CSS cache

One annoying thing about modifying themes and styles is CSS caching. Although including a stylesheet in a page using a <link rel="stylesheet" ...> tag is a good thing (the stylesheet gets cached by the browser and is not downloaded again and again for all the pages sharing it), this exact feature becomes a pain when you try to modify the CSS and instantly view the changes in the refreshed page.

I last encountered this problem when I was trying to modify this blog's theme by editing CSS files.

To overcome this problem I kept renaming the CSS file each time I made a modification to it, by appending a version number to its name (i.e. base_1.css, then base_2.css, and so on).

This forced my browser to reload the new file.
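A lighter variant of the same trick is to keep the file name stable and bump a version query string instead, which the browser treats as a brand-new URL:

```html
<!-- base.css keeps its name; change the query string on every edit
     to force browsers to fetch the file again. -->
<link rel="stylesheet" type="text/css" href="base.css?v=3" />
```

That way no other references to the stylesheet need updating.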

Better ideas are always welcome.