By Stefan Koell on Thursday, May 27, 2010 10:03:51 PM


The last couple of weeks were really slow. I had lots of trouble getting infrastructure stuff up and running. I had to get rid of my Exchange server, needed to switch to Team Foundation Server because I’m really sick of Subversion, and the latest challenge was to get Royal TS 2.0 to compile against .NET 4.0. Let me tell you, it’s not as easy as you might think!

Anyhow, I’m back on track and I think I will be able to finish the Options dialog by the end of the upcoming weekend.

Two sections of the Options dialog are done already:

The “General” Section:
“Application Start” will allow you to set one of three options which are pretty self-explanatory:

  1. Do not open any documents
  2. Open documents from last session
  3. Open a selection of documents (see screenshot)

The option “Application Close” moved from the main window’s “Tools” menu to the options dialog and will prompt for confirmation before you close the application if any connection is still active.

“Theme” works pretty much the same way as before except for one small detail: changing the selection will preview how the theme will look by temporarily changing it.

Royal TS 2.0 will have some informational popup banners which you can hide and some prompts with a “Do not show this again” checkbox. “Reset Warnings and Messages” will reset all those dialogs and popup banners which may annoy you.

The “Encryption” Section:

This section is exactly the same UI element you can find when encrypting and password-protecting your documents. It makes sense here because Royal TS 2 will allow you to store credentials in your application settings. The general idea is that you do not have any credentials defined in your documents – you still can, but you should define your credentials in your application settings and just reference them in your documents. This way you do not really have to encrypt your document (again, you still can if you need to), and it makes document sharing much easier. I will post a detailed blog post in a couple of weeks explaining how document sharing and credential referencing will work.

One important question on which I would love to get your feedback:

I’m considering reducing the priority of, or maybe dropping, one feature present in Royal TS 1.x for the first Royal TS 2.0 release: “Minimize to Systray”. It seems that Windows 7 changed the game a bit, and the “Show Desktop” function and the new taskbar integration seem to cause weird behavior. I originally planned to keep every Royal TS 1.x feature in the 2.0 release, but this could delay the release date a bit, so I was wondering how important that feature is for you. Please leave a comment or drop me an email…

By Stefan Koell on Tuesday, May 18, 2010 1:13:38 PM

I’m starting to think I will never get V2 of Royal TS out. There’s always something coming up consuming time, and I’m not talking about family here. Things you expect to just work actually don’t – at least not always. First I was very happy when I got my new server. I was looking forward to setting up all the stuff which should actually help me develop more efficiently. I bought a nice box where I could run Hyper-V, run my test boxes on it as well as Team Foundation Server 2010 (which, btw, is great compared to Subversion!). But then reality caught up with me.

First there were problems with the network driver on my Hyper-V box (reminder to myself: never trust a Broadcom driver – only use Intel network adapters!). The NIC on the host worked perfectly, but none of the guests moved from another Hyper-V machine were able to get networking up and running, even when I used the legacy network device. Days of research and testing were necessary before I gave up and tried a different NIC brand. After installing an Intel card, everything worked fine – on the host and on the guests. On a side note: this server is a Dell box and is officially “Hyper-V certified”.

My Exchange server died a horrible death because of this incident. Fortunately there was no data loss. Btw, the nice guys @ netmonic offered to host my Exchange mailbox for a reasonable price, and let me say, I would never go back to hosting it myself. Perfect service, good value, no more headaches! (Contact them for the latest rates; the prices on their site are a bit outdated.)

Then, out of the blue(!), my new server started to blue screen whenever I copied approx. 1 GB to or from it. Even when doing backups, it blue screened somewhere after a GB. The server had run fine for weeks – even with the backups! The good news: there’s a fix for that. The bad news: it wasn’t really easy to find, and I am starting to lose confidence in MS server products.

The past few months showed that things like WHQL, certified drivers and Hyper-V certification don’t really mean anything. If you are out of luck, you can have a hard time. I hope everything is stable now and I can start to do actual work…

By Stefan Koell on Saturday, May 15, 2010 7:22:54 PM

Since a lot of users are confused about targeting, I decided to give it another shot and try to explain from a more practical angle why targeting is done the way it is. There are some fundamental technical reasons why you need to use overrides and cannot target a group directly (as many might expect at first), so let's take a closer look at the platform beneath and it might become clearer why it has to be done this way.

Here's the line you read (or hear) when it comes to targeting a workflow (i.e. discovery, rule, monitor, etc.):

You cannot target a group directly, you always need to target a class like Windows Computer, disable the workflow, create an override afterwards to enable the workflow for a limited number of instances (Windows Computers in this case).

So, what is it about targets (classes) and groups?

A group (after you create one) is basically a class, very much like the "Windows Computer" class. The main difference is that it's a singleton class (exactly one instance will always exist) and it is hosted on the RMS (which maintains the group membership). Because of this fact, you:

  • see all the groups in your system when you have to choose a target class
  • see every workflow targeted directly at a group's class run on the RMS (the host of the singleton instance), because OpsMgr thinks that's the place you want to execute the workflow.

A group can have members (nothing more than instances of other classes) of different kinds. You can actually have one or more "Windows Computer" instances and one or more "IIS Web Site" instances mixed with some "Logical Disk" instances, all in the same group.
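To make the "a group is a class" point concrete, here is a rough sketch of how a group shows up in management pack XML – a class declared with Singleton="true". The ID, alias and base class below are illustrative, not taken from a real MP:

```xml
<!-- Illustrative sketch: a group is just a singleton class definition -->
<ClassType ID="My.Custom.WebServers.Group"
           Accessibility="Public"
           Abstract="false"
           Base="SC!Microsoft.SystemCenter.ComputerGroup"
           Hosted="false"
           Singleton="true" />
```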

Why do we need to specify a target which is not a group when we create a workflow?

The most obvious reason is the "variable replacement mechanism" (I have no clue if there's an official name for it). When you target a workflow, OpsMgr knows all attributes (properties) of your target. This allows you to pass any attribute from the current instance your workflow is running against to your workflow. For example, when you target a workflow (i.e. a script monitor) at the class "IIS Web Site", you can pass the attribute "LogFileDirectory" to your script.
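Inside a workflow's XML configuration, such an attribute is passed using the $Target/Property$ substitution syntax. A rough sketch of what that looks like (the alias and class ID below are illustrative, not copied from a real management pack):

```xml
<Parameters>
  <Parameter>
    <Name>LogFileDirectory</Name>
    <!-- Replaced at runtime with the value from the targeted IIS Web Site instance -->
    <Value>$Target/Property[Type="IIS!Microsoft.Windows.InternetInformationServices.WebSite"]/LogFileDirectory$</Value>
  </Parameter>
</Parameters>
```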


After you select the target you can access all the attributes from that target in your workflow:


This is one of the most powerful features OpsMgr has to offer in all the workflow processing. Once you've wrapped your brain around that concept you will realize that this beast can do almost everything in a relatively simple way.

As a result, the following rules apply:

  • Once you choose a target class, you cannot change it after the workflow has been applied to the system. This is simply because of the implications of the "variable replacement mechanism". When you change the target, all the dynamic values you pull in might or might not work afterwards. I guess the engineers at MS could come up with a solution allowing you to change the target and prompt for each and every dynamic value to change for the new target class, but as you can see, the outcome is unpredictable. Therefore they just disabled the option to change the target class. Even doing so in XML can be a mess, and you often just end up creating the workflow from scratch using your new target class.
  • Because groups can have members of different classes, there's no way for OpsMgr to find out which attributes can be used for variable replacement. Consider our previously shown group containing instances of "IIS Web Site", "Logical Disk" and "Windows Computer". Each class has a different set of attributes, so targeting a group directly would prevent you from using the replacement feature. Many of the built-in workflows and vendor MPs depend on that feature!

But why can I target a group in the Windows Service monitoring template wizard?

Since OpsMgr 2007 R2 the Windows Service monitoring template wizard has been “upgraded” and allows you to specify a group you want to target the monitor at. I really wish they had designed that dialog differently, because every new user gets confused when the workflows are not directly targeted at the group specified in this dialog!


In the end, the wizard is doing the exact same thing behind the scenes you would do when you create your own workflow:

It creates a disabled workflow (in this special case a discovery) targeted to "Windows Computer". 

It then creates an override for this workflow, enabling it for the group you chose. 

Ok, I need to do overrides, any other benefits of using this technique?

Using an override to enable/disable a workflow for a group of members also has its advantages. See Jakub's post here:

Jakub explains how overrides are actually applied and shows how powerful this mechanism really is. In short, an override doesn't necessarily need the exact same target as the workflow it overrides. The calculation algorithm allows you to be more flexible here. For example:

A workflow targeted at an "IIS Web Site" instance can be overridden using a group containing "Windows Computer" instances. OpsMgr will figure out which "IIS Web Site" instances the override applies to and will include all instances running on the specified "Windows Computer" instances.


I completely understand anyone having a hard time with this concept. It is a bit strange at first, and like most of you, I had to get used to it as well. The above is just a very condensed view and is far from complete. There are several other reasons, benefits and rules in the workflow engine I did not mention here. These are the facts that helped me understand the engine and platform better, and I hope I could illustrate some of that for you.

One thing MS can and should do better is making these concepts more accessible in the UI. It begins with the very confusing terminology in all the dialogs and wizards and surely ends in the documentation. The documentation is getting better and better, while the UI is still confusing or getting even more confusing (see the Windows Service monitoring template wizard).

By Stefan Koell on Wednesday, April 28, 2010 8:48:26 PM

I spent the last couple of days installing and configuring TFS 2010. Getting TFS up and running is really easy and went smoothly. I’m really impressed by what MS did with the installation experience.

However, when you want to make TFS and TFS Web Access accessible over https, it’s not that easy anymore. I also couldn’t find any detailed instructions, so it was a bit of trial and error…

Lessons Learned:

  • When you use a self-signed certificate, make sure that the CN is the same FQDN as used in Visual Studio to connect to your TFS. Invalid certificates are accepted as long as you have installed the certificate in the Trusted Root Certification Authorities store. A name mismatch (the CN of the certificate doesn’t match the name of the host you are trying to access) is not accepted. Internet Explorer lets you decide whether you want to continue; Visual Studio does not – it just blocks the connection…
  • Import the self-signed certificate into your computer account’s Personal store.
  • Add another binding to the TFS site in IIS for https with the self-signed certificate
  • Change the notification URL to https://FQDN/tfs
  • Open the web.config at “C:\Program Files\Team Foundation Server 2010\Application Tier\Web Access\Web\web.config” and uncomment/adjust the remarked “tfServers” section:
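For reference, the section I ended up with looks roughly like this. Treat it as a sketch – substitute your own FQDN and port, and check the comments in the shipped web.config for the exact markup:

```xml
<!-- Sketch of the tfServers section in the TFS Web Access web.config -->
<tfServers>
  <add name="https://tfs.example.com:8080/tfs" />
</tfServers>
```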
By Stefan Koell on Sunday, April 25, 2010 10:54:18 AM

PowerWF is a very cool visual PowerShell workflow designer with the ability to create OpsMgr management packs with just a click. A very impressive tool – see for yourself:

It’s a bit expensive for my taste, and when you look at Apple’s Automator, something like this should really be a product by MS, included in Windows…

By Stefan Koell on Friday, April 16, 2010 12:21:46 PM

If you are a SCOM geek, want to get some attention and win some cool prizes, come over to the site and enter the Management Pack Extension Contest! There are four separate contest categories:

  • Reporting pack extensions
  • Diagram or SLM pack extensions
  • Visio or Dashboard pack extensions
  • Tuning pack extensions

You can submit one extension per category. The contest started a couple of days ago and ends on June 7, 2010. Click here for more details:

See you on the other side ;-)

By Stefan Koell on Sunday, March 28, 2010 7:28:02 PM

Here’s another Version 2.0 progress report; this time I will blog about the computer browser and the bulk-add feature in Royal TS Version 2.

The wizard dialog for new remote desktop connection items allows you to add multiple connections at once. This screenshot shows the wizard for a new RDP item (click the screenshots for the original size image):


Since this is the first time I reveal a central UI part of the new version, let me comment on some of the things you see:

  • Royal TS 2 will have a whole lot more options!
  • Red borders indicate required fields
  • Lots of very detailed tool tips
  • You can finish the wizard without going through all pages
  • The screenshots were made using the Black Office 2007 theme - many other themes are available. Right now, I personally like the black theme.

Notice the little browse button on the right edge of the computer textbox. Clicking on that button opens the standard computer browser dialog:

Computer Browser

The screenshot shows the “Advanced” dialog which opens when you click the “Advanced…” button of the “Select Computers” dialog. Because my dev machine is not a member of a domain, it shows all machines in a workgroup. If your machine is a member of a domain, you can change the location using the “Locations…” button to search for machines in your Active Directory.

After selecting one or more machines you return to the wizard form:


When you select more than one machine, the computer textbox will show each machine separated by a semicolon. You can still edit the computer textbox to add or remove machines. So basically you can also do a bulk-add without invoking the computer browser, just by entering multiple entries separated by semicolons.
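The semicolon handling described above is simple enough to sketch. This is not Royal TS code – the function name and the details (trimming whitespace, dropping empty entries from a trailing semicolon) are my assumptions about how such a parser typically behaves:

```python
def parse_computer_list(text):
    """Split a semicolon-separated computer list, as typed in the
    computer textbox, into individual machine names.
    Whitespace around names is trimmed; empty entries are dropped."""
    return [part.strip() for part in text.split(";") if part.strip()]

print(parse_computer_list("server1; server2;server3;"))
# → ['server1', 'server2', 'server3']
```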

The display name textbox will be disabled (and ignored) when you use the bulk-add function. Royal TS will then use the computer name as the display name for those items.

When you edit an existing item, you can still use the computer browser but it will only allow you to select one machine from the browser.

Again, same procedure as usual: If you have any feedback, leave a comment or contact me directly.


By Stefan Koell on Friday, February 26, 2010 6:26:26 PM

It’s been a while since my last post, but there wasn’t really much to blog about Royal TS 2.0. I’m making progress, just not at the pace I imagined. Over the last couple of weeks I tried to wrap my head around the “Details” view – essentially the right explorer pane. In its current implementation, Royal TS shows you the list of connections from a folder or the document. If a connection is selected in the tree, you see the “Dashboard” in the details view. Basically this behavior will remain unchanged. There will be dashboards for connections, tasks and credentials, and there will be a details view for folders and documents.

Here’s how the details view will look in Version 2 (this isn’t really the final design; I guess one detail or another might change before 2.0 is released):

Let me explain the screen above: as you can see, Royal TS will finally have tabs. The “Details” tab is kind of a special tab which will be activated as soon as you click on an inactive connection item to show you its dashboard, or when you click on a folder/document to show you the details view (the content of the folder).

Now, as the picture above suggests, the details view has lots of end-user capabilities which might be handy for your organizational tasks:

  • Group by one or multiple columns (a feature well known to Outlook users)
  • Sort columns/groups
  • Customize columns (show/hide) and reorder columns
  • Find as you type
  • Column filters and quick column filters you may know from Excel
  • Build complex filters using a filter editor (see screenshot)
  • Filter across all columns (similar to the filter functionality in Royal TS 1.6.x)
  • Quickly filter out Active or Inactive connections
  • Auto Best Fit / Best Fit adjusts the column width to best fit the contents
  • Optionally keep all filter settings when you change the selection in the tree (by default, all filters are cleared when you change the selection)
  • Optionally show items from all subfolders as well

This piece is still not ready and I think I need another day or two to get it done. That’s it for now. If you have feedback, don’t hesitate…

By Stefan Koell on Wednesday, February 17, 2010 4:32:41 PM

I was asked recently to post an article on how we do web page monitoring. For a number of reasons we do not really use the built-in “Web Application” monitoring template. One of the reasons is that we are not really happy with the selection of the watcher nodes. We needed a way to monitor every web server in our farms without managing the watcher nodes manually all the time. We create host entries on our web servers pointing to themselves. So every time you browse to on one of the web servers you do not go through the load balancer. Since the host entry for points to the web server itself, you will browse to the web hosted on the server you are currently connected to.

So I created a small script which does web monitoring the way we wanted it. In this blog post I will talk about the implementation we started using back in MOM 2005 and still use (slightly modified) in our SCOM 2007 environments. We have recently migrated all those scripts to PowerShell and did our own class definitions using the Authoring Console. For now, I will focus on the much simpler implementation using VBScript and the Ops Console, without any work in the Authoring Console. Download the VBScript from the following link:

Before you begin you should create a group containing all the computers whose web pages you want to monitor. Of course, you can also use the script like the Web Application template, monitoring a web page through a load balancer using watcher nodes. In any case, create a computer group with your web servers/watcher nodes.

In your Operations Manager console, switch to the “Authoring” space, expand “Management Pack Objects”, right-click the “Monitors” node and select Create a Monitor –> Unit Monitor.

Now select “Scripting / Generic / Timed Script Two State Monitor”

Select a destination management pack.

Attention: the group I talked about earlier needs to be in the same management pack as the script monitor we are about to create – unless the group is in a sealed management pack, in which case you can select a different destination management pack.

Click next.
Provide a name for your monitor and select a target like “Windows Server”.

Notice that we uncheck the checkbox “Monitor is enabled”. We will later create an override to enable the monitor for all the web servers/watcher nodes in the group we created earlier.
Configure a schedule. In general we schedule all our monitors (and rules) to run every 5 minutes (of course there are exceptions).
I strongly suggest providing a meaningful script file name on this page, as it will help you find the script on the agent when you have to troubleshoot something.

Set the timeout to 5 minutes.

Open the script attached to this blog post and copy everything from code4ward.Sample.WebContentCheck.vbs into the script text field.

The script is very generic and needs 3 parameters to run successfully. 

As you can see from the script body:

  1. Parameter 1 is the URL of the web page you want to monitor
  2. Parameter 2 is the expected text in the content
  3. Parameter 3 is the timeout in seconds (-1 means no timeout)

Before clicking Next, click the Parameters button to specify your parameters.
To be on the safe side, I always put the parameters in double quotes. The parameters line reads:
”” “code4ward” “30”

This configuration means: download the web page every 5 minutes (the schedule we configured earlier), look for the string “code4ward” (without the quotes) in the content, and abort the request after 30 seconds if there’s no answer from the web server.
If “code4ward” is in the content and the web page was returned within 30 seconds, the monitor is healthy.
If “code4ward” is not in the content or the web page took longer than 30 seconds, the monitor is unhealthy.
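The script itself is VBScript, but the health decision it reports can be sketched in a few lines of Python. The names here are mine; this just illustrates the logic, it is not the actual script:

```python
def evaluate_web_check(content, expected_text, elapsed_seconds, timeout_seconds):
    """Return the status value the monitor's property bag would carry:
    'OK' when the expected text was found and the page came back in time,
    'Error' otherwise. A timeout of -1 means no timeout at all."""
    timed_out = timeout_seconds != -1 and elapsed_seconds > timeout_seconds
    if not timed_out and expected_text in content:
        return "OK"
    return "Error"
```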
Now we need to hook up the property bag status messages from the script with the health monitor’s unhealthy state:

Property[@Name='Status'] Equals Error
Now we need to hook up the property bag status messages from the script with the health monitor’s healthy state:

Property[@Name='Status'] Equals OK
Here you can decide whether you want the unhealthy state to be warning or critical.
The last page of the wizard lets you configure the alert properties for this monitor. In order to get all the nice output from the monitor into the alert description, you need to copy “$Data/Context/Property[@Name='Message']$” (without the quotes) into the alert description field.

Now click on “Create” and your monitor is ready to use.

All you need to do now is create an enable override on the monitor for the group we created before.

As you can see, the monitor itself is pretty simple and doesn’t have all the features you know from the Web Application template. But sometimes less is more, and we use this script monitor to monitor hundreds of sites without any problems.

If you have any questions or feedback, just comment or drop me an email.


By Stefan Koell on Thursday, February 11, 2010 9:32:16 PM

Today we had a short downtime of our web site because a long-overdue update to the latest and greatest DotNetNuke version was installed.

Site performance seems to be significantly better and I hope that the main issues in the forum (broken posts) are now history.