<August 2010>
By Stefan Koell on Friday, August 06, 2010 4:32:07 PM

Operations Manager has some really nice reporting features. Most management packs have several predefined reports on board, and even community management packs (especially those from the last MP contest) include pretty cool reports as well. Still, sometimes the right report just isn’t available. In that case you have a couple of options; here are some popular examples:

  1. Access the data directly in the database
  2. Create a custom report for SCOM 2007 R2 with SQL 2008 Reporting Services in Microsoft Visual Studio 2008

While those options have many advantages and give you full control over how your report will look and feel, they are very time consuming and there’s also a steep learning curve involved. If you just want to get the data out of your data warehouse and have it sent to you by email as an Excel file, it’s a lot of trouble to go through.

Another option is to use the built-in Report Builder which ships with SQL Reporting Services. I couldn’t find much information on how to use this beast with SCOM, so I decided to play around with it and blog about it.

But before we get started, we need to install the models on the reporting server. Pete posted an excellent blog post over a year ago describing how to install the models, which have shipped with SCOM since SP1:

After you have installed the models and assigned the right data source to them, we can get started…

The Mission:

For this demo, I’ll show how to create a report listing all computers whose average CPU utilization over the last 7 days is below a certain value. There’s a similar report in the Virtual Machine Manager management pack, but unfortunately it is not designed to be scheduled because it only accepts absolute date values. So our report should be schedulable to run every week, showing the results for the last 7 days.

Get Started:

In the reporting space you should see an “Actions” item called “Design a new report” (be sure the Actions pane is visible!):

Clicking the item above will open the Report Builder of SQL Reporting Services. If you installed the models successfully as explained in Pete’s post, you should see the two models in the Getting Started pane:


Select the Performance model and the Table layout, then click OK. On the left-hand side you should now see the “Explorer” pane showing all the entities from the model, with the corresponding fields below.

Basic Report Design:

Our report should contain a column showing the computer name, one showing the average value, one for the minimum value and another for the maximum value of the CPU utilization. In the Entities explorer, select “Object” and drag the “Name” field from the fields box to the column fields on the design surface.


After that, click the “Performance Data Dailies” entry in the Entities explorer and expand the fields “# Average Value”, “# Min Value” and “# Max Value”, like this:


Drag the child elements “Avg Average Value”, “Min Min Value” and “Max Max Value” to the column fields area on the design surface, next to the Name field. Right-click each of the columns and uncheck the “Show Subtotal” item to get rid of the subtotal line. You can also double-click the column headers to give each column a meaningful name. Your designer should now look like this:


Fine Tuning the Report Design:

Right-click the field below the computer name column header (on the “xxxxxx” text) and select “Edit Formula…”. Let’s use the formula dialog to ensure every computer name is written in upper case:
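The expression in the formula dialog can be as simple as wrapping the field in UCASE (shown schematically; the exact field name depends on your model):

```
UCASE(Name)
```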


For the average, min and max values we want to round the numbers to two decimal places. Open the formula dialog for these columns as before and use the ROUND function:
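For the average column, the expression looks roughly like this (schematic; the field reference is shown as it appears in the dialog, and likewise for the min and max columns):

```
ROUND(Avg Average Value, 2)
```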



Click the “Sort and Group” toolbar button to specify which column is sorted by default:


You may also check out the Report Properties available using the Report menu:



Now it’s time to pick the right data from our model. The first thing to do is limit our results to the “Windows Computer” class. To do that, select Class as the entity, drag “Class Default Name” to the right panel and select Windows Computer from the filter list:


After that, we need the fields “Performance Object Name” (equals Processor) and “Performance Counter Name” (equals % Processor Time) from the “Performance Data Daily\Performance Rule Instance\Performance Rule” entity.

From the “Performance Data Daily\Performance Rule Instance” entity we need the field “Instance Name” (equals _Total).

Then select “Performance Data Daily” in the entity box, drag the Date Time field to the right and configure it as “after 7 days ago”.

Lastly, expand the # Average Value field, drag Avg Average Value to the right and configure it as “less than or equal to 1”. Right-click the last condition and select “Prompt” from the context menu to make this value configurable. The filter should then look like this:
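Put together, the filter from the steps above boils down to these conditions (shown schematically with the model’s entity and field names):

```
Class Default Name        equals                  Windows Computer
Performance Object Name   equals                  Processor
Performance Counter Name  equals                  % Processor Time
Instance Name             equals                  _Total
Date Time                 after                   7 days ago
Avg Average Value         less than or equal to   1   (prompted)
```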



Now you can test and run the report from the Report Builder or save it directly to the Reporting Services instance. Use the web access of Reporting Services to create a new folder if you wish and save your freshly built report to that folder. After a refresh in the Operations Console, the folder and the report will appear, ready to run and, more importantly, ready to schedule!


You can always re-open your reports in the Report Builder using the File – Open menu. I hope I could show you how to use the models and the Report Builder to create your own reports. If you have any feedback or suggestions for improvement, let me know.

Have fun!

Stefan Koell 
Operations Manager MVP

By Stefan Koell on Sunday, July 11, 2010 8:02:17 PM

Still working hard on V2. This time I will blog about controlling Royal TS settings using Group Policies:



An ADMX file will be supplied with some predefined policies. So far, all policies can be set at computer level and at user level. Policies configured at computer level take precedence over those configured at user level.
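A policy definition in such an ADMX file might look roughly like this — a hypothetical sketch, since the actual category, policy names and registry key Royal TS uses are not published here:

```xml
<!-- Hypothetical example: the key, names and category are invented for illustration -->
<policy name="EnableDatabaseLogging" class="Machine"
        displayName="$(string.EnableDatabaseLogging)"
        explainText="$(string.EnableDatabaseLogging_Help)"
        key="Software\Policies\code4ward\RoyalTS"
        valueName="EnableDatabaseLogging">
  <parentCategory ref="RoyalTS" />
  <supportedOn ref="windows:SUPPORTED_WindowsVista" />
  <enabledValue><decimal value="1" /></enabledValue>
  <disabledValue><decimal value="0" /></disabledValue>
</policy>
```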

Here’s an example of how to configure database logging using Group Policy Objects:


Royal TS will pick up the policy change and apply it immediately:


Notice the label at the bottom indicating that some settings cannot be changed because they are applied via group policies.

Here are some more settings we plan to integrate:




As always, if you have any feedback, let us know…

By Stefan Koell on Friday, July 02, 2010 10:34:22 AM

Yesterday I received word that I had been awarded an MVP for System Center Operations Manager.

My thanks to Microsoft and to the community for this award. I have been very active in the SystemCenterCentral forums, created a series of blog posts about creating PowerShell modules, and offer a freeware tool called LogSmith, which enables you to easily slice and dice collected events from OpsMgr.

I’m very excited about the next version of OpsMgr and I will try to keep up with my community work. You can soon expect a brand new version of LogSmith and I hope I can do some more step-by-step guides on my blog. So if you have any suggestions, let me know.

By Stefan Koell on Wednesday, June 16, 2010 9:59:23 AM

A very interesting article on the different methods of implementing a singleton pre-.NET 4, and a new way to do it with C#:
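For reference, the .NET 4 approach boils down to `Lazy<T>` — a minimal C# sketch of the pattern (not a quote from the linked article):

```csharp
using System;

public sealed class Singleton
{
    // Lazy<T> gives thread-safe, lazy initialization without explicit locking.
    private static readonly Lazy<Singleton> instance =
        new Lazy<Singleton>(() => new Singleton());

    public static Singleton Instance
    {
        get { return instance.Value; }
    }

    private Singleton() { }
}
```

Before .NET 4 you would typically hand-roll this with double-checked locking or a static nested holder class; `Lazy<T>` makes the intent explicit and the thread safety a library guarantee.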

By Stefan Koell on Thursday, May 27, 2010 10:03:51 PM


The last couple of weeks were really slow. I had lots of trouble getting infrastructure stuff up and running: I had to get rid of my Exchange server, needed to switch to Team Foundation Server because I’m really sick of Subversion, and the latest challenge was to get Royal TS 2.0 to compile against .NET 4.0. Let me tell you, it’s not as easy as you might think!

Anyhow, I’m back on track and I think I will be able to finish the Options dialog by the end of the upcoming weekend.

Two sections of the Options dialog are already done:

The “General” section:
“Application Start” lets you set one of three options, which are pretty self-explanatory:

  1. Do not open any documents
  2. Open documents from last session
  3. Open a selection of documents (see screenshot)

The “Application Close” option moved from the main window’s “Tools” menu to the Options dialog and will prompt for confirmation before you close the application if any connection is still active.

“Theme” works pretty much the same way as before, except for one small detail: changing the selection will preview the theme by temporarily applying it.

Royal TS 2.0 will have some informational popup banners which you can hide, and some prompts with a “Do not show this again” checkbox. “Reset Warnings and Messages” will reset all those dialogs and popup banners that may annoy you.

The “Encryption” Section:

This section is exactly the same UI element you can use to encrypt and password-protect your documents. Because Royal TS 2 will allow you to store credentials in your application settings, this makes sense. The general idea is that you do not define any credentials in your documents – you still can, but you should define your credentials in your application settings and just reference them in your documents. This way you do not really have to encrypt your documents (again, you still can if you need to), which makes document sharing much easier. I will post a detailed blog post in a couple of weeks on how document sharing and credential referencing will work.

One important question on which I would love to get your feedback:

I’m considering reducing the priority of, or maybe dropping, one feature present in Royal TS 1.x for the first Royal TS 2.0 release: “Minimize to Systray”. It seems that Windows 7 changed the game a bit, and the “Show Desktop” function and the new task bar integration seem to cause weird behavior. I originally planned to keep every Royal TS 1.x feature in the 2.0 release, but this could delay the release date a bit, and I was wondering how important that feature is to you. Please leave a comment or drop me an email…

By Stefan Koell on Tuesday, May 18, 2010 1:13:38 PM

I’m starting to think I will never get V2 of Royal TS out. There’s always something coming up that consumes time, and I’m not talking about family here. Things you expect to just work actually don’t – at least not always. At first I was very happy when I got my new server. I was looking forward to setting up all the stuff that should actually help me develop more efficiently. I bought a nice box where I could run Hyper-V, my test boxes and Team Foundation Server 2010 (which, by the way, is great compared to Subversion!). But then reality caught up with me.

First, problems with the network driver on my Hyper-V box (reminder to myself: never trust a Broadcom driver – only use Intel network adapters!). The NIC on the host worked perfectly, but guests moved from another Hyper-V machine weren’t able to get networking up and running, even when I used the legacy network device. Days of research and tests were necessary until I gave up and tried a different NIC brand. After installing an Intel card, everything worked fine – on the host and on the guests. On a side note: this server is a Dell box and is officially “Hyper-V certified”.

My Exchange server died a horrible death because of this incident. Fortunately there was no data loss. By the way, the nice guys at netmonic offered to host my Exchange mailbox for a reasonable price, and let me say, I would never go back to hosting it myself. Perfect service, good value, no more headaches! (Contact them for the latest rates; the prices on their site are a bit outdated.)

Then, out of the blue(!), my new server started to blue screen whenever I copied approx. 1 GB to or from it. Even when doing backups, it blue screened somewhere after a GB. The server had run fine for weeks – even with the backups! The good news: there’s a fix for that. The bad news: it wasn’t easy to find, and I am starting to lose confidence in MS server products.

The past few months showed that things like WHQL, certified drivers and Hyper-V certification don’t really mean anything. If you are out of luck, you can have a hard time. I hope everything is stable now and I can finally start to do actual work…

By Stefan Koell on Saturday, May 15, 2010 7:22:54 PM

Since a lot of users are confused about targeting, I decided to give it another shot and try to explain, from a more practical angle, why targeting is done the way it is. There are some fundamental technical reasons why you need to use overrides and cannot target a group directly (as many might expect at first), so let’s take a closer look at the platform beneath; it might then be clearer why things have to be done this way.

Here’s the line you read (or hear) when it comes to targeting a workflow (i.e. a discovery, rule, monitor, etc.):

You cannot target a group directly; you always need to target a class like Windows Computer, disable the workflow, and create an override afterwards to enable the workflow for a limited number of instances (Windows Computers in this case).

So, what is it with targets (classes) and groups?

A group (after you have created one) is basically a class, much like the "Windows Computer" class. The main difference is that it’s a singleton class (only one instance will ever exist) and it is hosted on the RMS (which maintains the group membership). Because of this, you:

  • see all the groups in your system when you have to choose a target class
  • see every workflow targeted directly at a group’s class run on the RMS (the host of the singleton instance), because OpsMgr thinks that’s the place you want to execute the workflow.

A group can have members (nothing more than instances of other classes) of different kinds. You can actually have one or more "Windows Computer" instances and one or more "IIS Web Site" instances mixed with some "Logical Disk" instances, all in the same group.
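In management pack XML, such a group really is just a singleton class definition — roughly like this sketch (the ID and the library alias are hypothetical):

```xml
<!-- Hypothetical group class: a non-hosted singleton based on the instance group library -->
<ClassType ID="My.Custom.InstanceGroup" Accessibility="Public"
           Abstract="false" Hosted="false" Singleton="true"
           Base="MSIL!Microsoft.SystemCenter.InstanceGroup" />
```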

Why do we need to specify a target which is not a group when we create a workflow?

The most obvious reason is the "variable replacement mechanism" (I have no clue whether there’s an official name for it). When you target a workflow, OpsMgr knows all attributes (properties) of your target. This allows you to pass any attribute of the current instance your workflow is running against into your workflow. For example, when you target a workflow (i.e. a script monitor) at the class "IIS Web Site", you can pass the attribute "LogFileDirectory" to your script.
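In management pack XML this uses the $Target notation — schematically like this (the alias and class name of the property type are assumptions for illustration):

```xml
<!-- Pass the web site's log directory into a script parameter -->
<Value>$Target/Property[Type="IIS!Microsoft.Windows.InternetInformationServices.WebSite"]/LogFileDirectory$</Value>
```

At runtime OpsMgr substitutes the property value of the concrete instance the workflow is running against.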


After you select the target you can access all the attributes from that target in your workflow:


This is one of the most powerful features OpsMgr has to offer in its workflow processing. Once you’ve wrapped your brain around that concept, you will realize that this beast can do almost everything in a relatively simple way.

As a result, the following rules apply:

  • Once you have chosen a target class, you cannot change it after the workflow has been applied to the system. This is simply because of the implications of the variable replacement mechanism: when you change the target, the dynamic values you pull in might or might not work afterwards. I guess the engineers at MS could have come up with a solution that allows you to change the target and prompts for each dynamic value to adjust for the new target class, but as you can see, the outcome is unpredictable. Therefore they simply disabled the option to change the target class. Even doing so in XML can be a mess, and you often just end up recreating the workflow from scratch with your new target class.
  • Because groups can have members of different classes, there’s no way for OpsMgr to figure out which attributes can be used for variable replacement. Consider our previously shown group containing instances of "IIS Web Site", "Logical Disk" and "Windows Computer": each class has a different set of attributes, so targeting a group directly would prevent you from using the replacement feature. Many built-in workflows and vendor MPs depend on that feature!

But why can I target a group in the Windows Service monitoring template wizard?

Since OpsMgr 2007 R2, the Windows Service monitoring template wizard has been “upgraded” and allows you to specify a group to target the monitor at. I really wish they had designed that dialog differently, because every new user gets confused: the workflows are not directly targeted at the group you specify in this dialog!


In the end, the wizard does the exact same thing behind the scenes that you would do when creating your own workflow:

It creates a disabled workflow (in this special case a discovery) targeted to "Windows Computer". 

It then creates an override for this workflow, enabling it for the group you chose. 
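In management pack XML, that second step looks roughly like this sketch (all IDs are hypothetical): an override that flips the discovery’s Enabled property to true in the context of the chosen group:

```xml
<!-- Hypothetical override: enables a disabled discovery for one group only -->
<DiscoveryPropertyOverride ID="My.Discovery.EnableOverride"
                           Context="My.Service.Group"
                           Discovery="My.Service.Discovery"
                           Enforced="false"
                           Property="Enabled">
  <Value>true</Value>
</DiscoveryPropertyOverride>
```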

Ok, so I need to do overrides; are there any other benefits to this technique?

Using an override to enable/disable a workflow for a group of members also has its advantages. See Jakub’s post here:

Jakub explains how overrides are actually applied and shows how powerful this mechanism really is. In short, an override doesn’t necessarily need the exact same target as the workflow it overrides. The calculation algorithm allows you to be more flexible here. For example:

A workflow targeted at "IIS Web Site" instances can be overridden using a group containing "Windows Computer" instances. OpsMgr will figure out which "IIS Web Site" instances the override applies to and include all instances running on the specified "Windows Computer" instances.


I completely understand anyone having a hard time with this concept. It is a bit strange at first, and like most of you, I had to get used to it as well. The above is just a very condensed view and far from complete; there are several other reasons, benefits and rules in the workflow engine that I did not mention here. These are the facts that helped me understand the engine and platform better, and I hope I could illustrate some of that for you.

One thing MS can and should do better is make these concepts more accessible in the UI. It begins with the very confusing terminology in all the dialogs and wizards, and extends to the documentation. The documentation is getting better and better, while the UI is still confusing, or getting even more confusing (see the Windows Service monitoring template wizard).

By Stefan Koell on Wednesday, April 28, 2010 8:48:26 PM

I spent the last couple of days installing and configuring TFS 2010. Getting TFS up and running is really easy and went smoothly; I’m really impressed with what MS did with the installation experience.

However, when you want to make TFS and TFS Web Access accessible over HTTPS, it’s not that easy anymore. I also couldn’t find any detailed instructions, so it was a bit of trial and error…

Lessons Learned:

  • When you use a self-signed certificate, make sure that the CN matches the FQDN used in Visual Studio to connect to your TFS. Invalid certificates are accepted as long as you have installed the certificate in the Trusted Root Certification Authorities store; a name mismatch (the CN of the certificate doesn’t match the name of the host you are trying to access) is not. Internet Explorer lets you decide whether to continue or not; Visual Studio does not – it just blocks…
  • Import the self-signed certificate into the personal store of your computer account.
  • Add another binding to the TFS site in IIS for https with the self-signed certificate
  • Change the notification URL to https://FQDN/tfs
  • Open the web.config at “C:\Program Files\Team Foundation Server 2010\Application Tier\Web Access\Web\web.config” and uncomment/adjust the remarked “tfServers” section:
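After uncommenting, the section should end up looking something like this (FQDN is a placeholder for your server name):

```xml
<tfServers>
  <add name="https://FQDN/tfs" />
</tfServers>
```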
By Stefan Koell on Sunday, April 25, 2010 10:54:18 AM

PowerWF is a very cool visual PowerShell workflow designer with the ability to create OpsMgr management packs with just a click. A very impressive tool – see for yourself:

A bit expensive for my taste, and when you look at Apple’s Automator, something like this should really be a product by MS included in Windows…

By Stefan Koell on Friday, April 16, 2010 12:21:46 PM

If you are a SCOM geek and want to get some attention and win some cool prizes, come over to the site and enter the Management Pack Extension Contest! There are four separate contest categories:

  • Reporting pack extensions
  • Diagram or SLM pack extensions
  • Visio or Dashboard pack extensions
  • Tuning pack extensions

You can submit one extension for each category. The contest started a couple of days ago and ends on June 7, 2010. Click here for more details:

See you on the other side ;-)