HowTo: Perform “On Behalf Of” Calls Using Azure Active Directory

Probably every developer out there is familiar with the scenario of a UI-driven application (let's say a web app) that needs to make calls to a backend service, and in quite a few of those scenarios the backend service needs to know which user is logged in in order to fulfill the request. And if you have ever been in charge of deciding on an implementation for this, you have been at the crossroads: do I go with the full-fledged impersonation / delegation solution, or do I conveniently decide that I trust the web app to make the correct calls?

If you’ve chosen the latter, you went with the so-called trusted subsystem architecture. Simply put: your backend service is treating the web app as a system that can be trusted to properly authenticate end users, and only perform backend service calls if and when appropriate, possibly including end user identifiers (such as usernames) as part of these calls.

[Figure: The Trusted Subsystem solution]
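
To make the contrast concrete, here's a minimal sketch of what a trusted subsystem call might look like: the web app authenticates as itself and simply asserts who the end user is. The header name, URL, and the appOnlyAccessToken variable are illustrative, not from any particular sample.

using System.Net.Http;
using System.Net.Http.Headers;

// The web app calls the backend under its own (service) identity...
var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", appOnlyAccessToken); // app credentials, not the user's

// ...and merely tells the backend who the end user is; the backend trusts this value
client.DefaultRequestHeaders.Add("X-End-User", User.Identity.Name);

var response = await client.GetAsync("https://backend.example.com/api/orders");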

If you opted for the full-fledged impersonation and delegation solution, you probably learned very soon that this is hard. In the old on-prem enterprise world, you would have to learn about the intimate details of Kerberos Constrained Delegation. And if you were ‘lucky’ enough to be working with WIF and WS-Federation or SAML, you would find out that these protocols do support these scenarios, but still make it pulling-your-hair-out-difficult to implement. And now we’re just calling one downstream service from our web app; once we need to call yet another service from the first service, we more often than not just give up and go with the trusted subsystem approach after all.

Azure Active Directory To The Rescue

Luckily, SAML and WS-* are no longer the only protocols available. OAuth 2.0 and OpenID Connect have been gaining momentum for some time now, and are treated as first class citizens in the latest Identity & Access Management solutions that Microsoft is offering, especially Azure Active Directory. To add to that, Microsoft has provided a client-side library called ADAL (Active Directory Authentication Library) for a variety of platforms (including AngularJS and iOS for example) to simplify interaction with Azure Active Directory as much as possible.

And the good news is: even impersonation and delegation has gotten really simple, with a lot less moving parts on the client. (Everyone who has ever struggled with config files trying to get this to work using WS-* and WIF knows exactly what I mean…)

The guys at Microsoft are also putting a lot of effort into code samples on GitHub that show how to use Azure AD and ADAL to get all sorts of scenarios working.


The On Behalf Of scenario is also available there. It's a native client that calls an API, which in turn calls the Graph API on behalf of the logged-in user. Obviously, the native client app can be substituted for an ASP.NET Core MVC web app, as shown in this repo.

Not every platform-scenario combo is available, though. For example, the API-calling-another-API scenario (i.e. the On Behalf Of scenario) is not available in its ASP.NET Core incarnation. And since the code to achieve this for ASP.NET Core Web API is not readily deducible from the native client sample that is only available with an ASP.NET Web API, I'd like to share some of it here.

First of all, the middleware to wire up an ASP.NET Core Web API to actually consume tokens is a bit different from how it used to be done. You can take your cue from the aforementioned repo; just make sure to save the token you receive so that you can access it later:

app.UseJwtBearerAuthentication(new JwtBearerOptions
{
    AutomaticAuthenticate = true,
    AutomaticChallenge = true,
    Authority = String.Format(Configuration["AzureAd:AadInstance"], Configuration["AzureAd:Tenant"]),
    Audience = Configuration["AzureAd:Audience"],

    // Persist the incoming token so we can retrieve it later to bootstrap the On Behalf Of flow
    SaveToken = true
});

Actually using this token to bootstrap the On Behalf Of flow works like this:

var authority = [insert authority here];
var clientId = [insert client ID here];
var clientSecret = [insert client secret here];
var resourceId = [insert the resource ID for the called API here];

AuthenticationContext authContext = new AuthenticationContext(authority);
ClientCredential credential = new ClientCredential(clientId, clientSecret);

// Retrieve the token that the JwtBearer middleware saved earlier (SaveToken = true)
AuthenticateInfo info = await HttpContext.Authentication.GetAuthenticateInfoAsync(JwtBearerDefaults.AuthenticationScheme);
var token = info.Properties.Items[".Token.access_token"];

// Wrap the incoming token in a UserAssertion and trade it for a token for the downstream API
var username = User.FindFirst(ClaimTypes.Upn).Value;
var userAssertion = new UserAssertion(token, "urn:ietf:params:oauth:grant-type:jwt-bearer", username);
AuthenticationResult result = await authContext.AcquireTokenAsync(resourceId, credential, userAssertion);

The AuthenticationResult that ADAL returns here contains an access token that can be used to call the downstream Web API. Simple, right? OK, it involves some code, but it's pretty straightforward when compared to a WS-*-and-a-WCF-service scenario I wrote about earlier.
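
To close the loop, here's a minimal sketch of using that access token on the downstream call; the URL is made up.

using System.Net.Http;
using System.Net.Http.Headers;

// Attach the On Behalf Of access token as a Bearer token
var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", result.AccessToken);

var response = await client.GetAsync("https://downstream.example.com/api/values");
response.EnsureSuccessStatusCode();
var content = await response.Content.ReadAsStringAsync();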

Enter Microservices

As said before, we've all encountered On Behalf Of scenarios and the perils of getting them to work using SAML, WS-* or Kerberos, and more often than not we gave up on the full-fledged scenario. But in an increasingly API-centered world, we are calling other external services much more frequently than we did only a couple of years ago. And now that microservices are gaining a lot of momentum as an architectural style, this frequency increases even more, since fulfilling a user request in a microservices environment is pretty much always a matter of multiple services collaborating.

Advocates of microservices recognize that flowing user identities through services is a concern that deserves more attention in a microservices architecture. Sam Newman, for example, discusses this issue in his book Building Microservices, in a paragraph aptly titled “The Deputy Problem”.

He recognizes the ease of use that comes with OpenID Connect and OAuth 2.0. And while he is still somewhat skeptical about whether these protocols will make it into the mainstream market any time soon, for all you devs out there on the Microsoft ecosystem, this is not a concern anymore.

Extending The Scenario

Obviously, we want to do more than simply impersonate end users when calling downstream services. Especially in a microservices environment, where multiple clients are calling multiple services for even the most mundane of tasks, we may want to have varying levels of trust: "Sure, I'd be more than happy to perform this request for the user, but only if he is calling me through an application that is entrusted to make these types of delegated calls." In other words, you may want to base your authorization decisions on characteristics of both the end user and the calling app.
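
As a sketch of what such a check might look like inside a Web API controller action: the claim types are the ones Azure AD emits, but the required scope value and the trustedClientIds whitelist are illustrative assumptions.

// Authorize on characteristics of both the end user and the calling app.
// trustedClientIds is a hypothetical whitelist of client application IDs.
// (Requires System.Linq for Contains.)
var scopeClaim = User.FindFirst("http://schemas.microsoft.com/identity/claims/scope");
var appIdClaim = User.FindFirst("appid");

bool allowed =
    scopeClaim != null &&
    scopeClaim.Value.Split(' ').Contains("user_impersonation") &&   // the user consented to this scope
    appIdClaim != null &&
    trustedClientIds.Contains(appIdClaim.Value);                    // the calling app is entrusted

if (!allowed)
{
    return StatusCode(403); // the user is fine, but this app may not make delegated calls
}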

Azure Active Directory is capable of handling these types of scenarios as well, for example by using scopes. I'm not getting into those now, but I'll be teaming up with my colleague Jurgen van den Broek for a session at the Dutch TechDays 2016, in which we will cover these and a lot more scenarios – including a peek into the future by discussing what the AAD v2 endpoint brings to the table.

Immediately after the TechDays session, I’ll update this post with a link to the full code sample. So stay tuned, and feel free to post a comment if you need help in the meantime.

What Exactly Is That CORS-Thing?? The What, the Why and the How Explained

If you’ve stumbled upon this post, chances are you’ve encountered some strange behavior while trying to call an endpoint, like a REST API for example, from within the browser. You may have seen your browser issue an OPTIONS request that is greeted with a 405 Method Not Allowed issued by the API.


If this has happened to you, you are probably serving the JavaScript from another application than the one hosting the API.

You were probably expecting an XmlHttpRequest fetching a JSON document instead of that failed OPTIONS request, so read on to find out what’s going on and what you can do to successfully call that API.

So What’s CORS?

In short, you would probably benefit from enabling CORS on your API. CORS is short for Cross Origin Resource Sharing, and enabling CORS is basically a way of allowing your web application to call the API from the client browser, while that API is hosted on a different host than the one your web application is served from. You are not allowed to do that out of the box for security reasons. If you are only interested in actually getting this to work, feel free to skip to the How-part of this post.

OK, so you are interested in a little more background. As said, you can't call APIs from the browser out of the box if they do not reside on the same host ('have the same origin') as the web application. Same origin here means: same URI scheme, hostname and port number. This behavior is enforced by the browser. If a piece of JavaScript attempts to call an API of different origin than its own, the browser will first make a pre-flight request to the target server to ask whether the server is OK with being called from another origin. Enabling CORS means: instructing the API on how to meaningfully respond to such pre-flight requests. Without CORS enabled, APIs typically respond with the 405 we talked about. Most modern browsers support this pre-flight request, which is a prerequisite for using CORS.
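
For illustration, here's roughly what that pre-flight exchange looks like on the wire; the hostnames are made up:

OPTIONS /api/orders HTTP/1.1
Host: api.example.com
Origin: https://webapp.example.com
Access-Control-Request-Method: GET

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://webapp.example.com
Access-Control-Allow-Methods: GET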


The ‘security reasons’ behind all this are known as the same-origin policy. According to this principle, resources are isolated from each other on the basis of their origin. So, a piece of script for example can only access other documents in the browser when they share the same origin, and it can only call endpoints on that same origin; all resources from other origins are off-limits.



This makes good sense, because failure to restrict this would mean that a malicious web page that is opened in a user’s browser session would have access to all documents and endpoints for other websites the user is also visiting. Imagine one of these other websites being your personal banking environment, and you probably get why the same-origin policy is kind of a good thing.

But obviously, there are also legitimate use cases for cross-origin API calls. Strategists, visionaries and evangelists preach the API-driven world, in which every company should disclose their processes through APIs to be consumed by clients. Those clients typically will not reside on the same origin, but we do want them to be able to call our APIs.

In recent years, several hacks have been conjured up to bypass the same-origin policy, with JSONP being one of the more prominent ones. I won’t dive into the specifics here; you can read all about it online. The issue with JSONP (apart from some sophisticated exploits) is that, as an API publisher, you open up your API for all origins by definition. And this is where CORS comes in: a controlled way of whitelisting some origins while treating all others according to the same-origin policy. And, as a bonus, the implementation is much easier: CORS is entirely a server-side setting, whereas JSONP requires the client to do part of the heavy lifting.


So, on to the actual way of doing this. And this is actually the simplest part: you just need to make sure that the API responds differently to the OPTIONS request. What the browser is actually asking, by means of the Origin header it sends along, is whether the specified origin is allowed to call the API. The API may either not allow this at all (the default), only allow a specific list of origins, or allow all origins. And it communicates this by including an Access-Control-Allow-Origin header in the response to the pre-flight request.


A specific value is indicative of an API that allows this specific origin. Alternatively, an asterisk (*) indicates that all origins are allowed.

For a .NET-based WebAPI, you can use OWIN middleware or the WebAPI CORS package, depending on your application architecture and the requirements. The use of CORS through OWIN middleware is nicely described here, while the CORS package method is detailed over here.

using Microsoft.Owin.Cors;
using Owin;

public partial class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Allow cross-origin calls; consider a restrictive policy instead of AllowAll in production
        app.UseCors(CorsOptions.AllowAll);
    }
}
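
If you go the Web API CORS package route instead, the setup looks roughly like the sketch below; the origin value is an example, and you'd call Register from your existing configuration bootstrap.

using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Whitelist a single origin; an asterisk for origins would allow all of them
        var cors = new EnableCorsAttribute("https://webapp.example.com", "*", "*");
        config.EnableCors(cors);
    }
}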

And yes, you can only enable CORS on the API side; not on the caller side. After all, the same-origin policy is meant to protect the API from access by malicious websites the user may be visiting.

Hope this helps!

Blast from the Past: A Delegation Scenario using WS-Federation and WS-Trust

A couple of weeks ago, I wanted to outline some of the different flavors and protocols available for delegation scenarios using a federated identity. One of the protocols on my list was WS-Federation and WS-Trust. Yes, I know, all the cool kids are doing OpenID Connect these days, but some of us are working for enterprises that bought into the whole federation-thing rather early and while still on-premise. For those environments ADFS is most likely the Identity Provider. And if the relying parties are .NET-based apps, the protocol of choice for identity federation is WS-Federation.

Of course, I did want to use the latest and greatest as much as possible, so I checked out the new OWIN/Katana gear for WS-Federation. And sure enough, getting identity federation to work using ADFS as the Identity Provider was a breeze. However, delegating the federated identity to a backend WCF service: not so much…

The theory here is that, firstly, the WCF service is also registered as a relying party in ADFS; secondly, that the web application is allowed to delegate identities to that relying party; and thirdly, that the web application can use the ADFS-issued user token to send back to ADFS as part of the request for a delegation token. Now the issue I encountered is that the token, as persisted by the OWIN middleware, does not have the same format as is expected by the time the delegation call is being made. More specifically, the token is persisted as a string, whereas the delegation code is expecting a SecurityToken.

I’ve tried to work this out in just about every way I could think of. This was not exactly made easier by the utter lack of online resources when it comes to WS-Federation (especially in its .NET 4.5 and OWIN incarnations). Still, I did not get this to work using the OWIN middleware. So I defaulted back to the ‘classic’ way of making this work, configuring the initial federation with ADFS through the web.config for both the front end MVC application and the backend WCF service that the web app is calling into. And as said, the online resources on WS-Federation in .NET 4.5 are limited, so I figured I’d share my sample on GitHub.

There’s a lot of moving parts to this sample, and principles to grasp if you want to fully understand the code. Luckily, all of that is pretty much covered in this guide. The ADFS part of it is pretty accurate as it is, and even though it is aimed at ADFS 2.0, it’s easily transferable to ADFS 3.0. As far as the code goes, the principles remain the same but the implementation is based on WIF on .NET 4.0. So you’ll have to do some digging through my sample to match it to the way it is described in the guide. Just see it as a learning opportunity ;).

I will reveal one difference: the guide assumes that the account running the web application is domain-joined so the web app can authenticate itself to ADFS using Windows Authentication when it makes the call to get the delegation token. To simplify the setup, I chose to authenticate to ADFS using a username and password so that I wouldn’t have to set up Kerberos. To make the username-based binding work, I used Dominick Baier’s UserNameWSTrustBinding. This was available in WIF on 4.0 but did not make it into 4.5, so Dominick added it to his Thinktecture.IdentityModel NuGet package.
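
For reference, the delegation request with that binding looks roughly like the sketch below. The ADFS endpoint, service account credentials, and the bootstrapToken variable (the ADFS-issued user token as a SecurityToken) are placeholders for your own environment, and the exact Thinktecture namespace may differ from what's shown here.

using System.IdentityModel.Protocols.WSTrust;
using System.IdentityModel.Tokens;
using System.ServiceModel;
using System.ServiceModel.Security;
using Thinktecture.IdentityModel.WSTrust; // UserNameWSTrustBinding lives in the Thinktecture package

// Authenticate to ADFS with username/password instead of Windows Authentication
var factory = new WSTrustChannelFactory(
    new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential),
    new EndpointAddress("https://adfs.example.com/adfs/services/trust/13/usernamemixed"));
factory.TrustVersion = TrustVersion.WSTrust13;
factory.Credentials.UserName.UserName = "[service account]";
factory.Credentials.UserName.Password = "[password]";

// Ask for a token for the backend WCF service, acting as the federated user
var rst = new RequestSecurityToken
{
    RequestType = RequestTypes.Issue,
    AppliesTo = new EndpointReference("https://backend.example.com/service"),
    ActAs = new SecurityTokenElement(bootstrapToken) // the ADFS-issued user token
};

var channel = factory.CreateChannel();
var delegationToken = channel.Issue(rst); // use this token on the channel to the WCF service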

Oh, and don’t expect the sample to be production ready. In fact, it won’t even work out of the box when you run it. You will have to configure several URLs to match your environment. And as said, you’ll have to configure ADFS using the tutorial I mentioned.

Of course, I haven’t totally given up on the OWIN route, nor am I finished outlining the different delegation flavors. So stay tuned, because there’s more to come on coding identity federation and delegation!

TFS Build: How To Customize Work Item Association

Recently I got involved in implementing Team Foundation Server 2012 for a large development project. And even though I’ve worked with several versions of TFS over the years, this was the first time I really dove into the options for customization beyond your average workflow modifications. I’ve picked up some interesting tips and tricks over the last couple of months. Most of these customizations were already available somewhere on the web (see for example Ewald Hofman’s excellent series on customizing Team Build 2010, which in general applies quite nicely to TFS 2012 as well), but some modifications are my original work in that I did not find it anywhere else on the web. I thought I’d share some of the things I encountered.

A standard build based on the default build template will associate changesets and work items to builds. The way it does that is by retrieving the label for the previous successful build for the given build’s Build Definition, and by determining which changesets are included in the current build that were not included in the previous build. Some or all of those changesets might have work items associated with them, and those work items get associated with the build. Assuming the default build template, this is done as part of the “Compile, Test, and Associate Changesets and Work Items” parallel sequence. Look for a sequence like this in the template editor:


Now if your team is anything like my team, they will not deliver a specific piece of functionality or fix a particular bug in a single checkin. A single functional requirement as documented in a Product Backlog Item may require the involvement of multiple persons, all with different specialties. So if one team member checks in his changes and associates them with a Task, that does not mean that the Task’s parent Product Backlog Item should be considered part of the next build. And the opposite can also be true. Let’s assume, for example, a scenario in which the project’s goal is to deliver a standard software product (an ERP system for example) and some customizations to go with it. Let’s also assume that the metadata for this ERP system does not reside in TFS. Instead, during a release build, the metadata is extracted from the development environment, committed to TFS, and from there pushed to QA, and ultimately to Production. Now, if a bug is filed, the bug may very well be caused by some configuration setting in the ERP system. The ERP guy on the team fixes the bug by modifying the configuration, which is pushed to QA with the next release. The bug is then marked as Resolved, but TFS Build will never associate this bug with a build because no changeset occurred that contained the fix.

To solve this problem, I tried my hand at an alternative method for associating a build with work items. What I wanted was to associate all work items that have a specific status (such as Done or Resolved), and are not yet associated to any previous builds. That means that we have to query TFS to get that specific set of work items, since the set is no longer a function of the changesets for this build. Now, querying TFS for work items is pretty well covered on the web, so I’m not going to elaborate on how to do that. The relevant part here is that the result of this query is stored in a variable scoped to the topmost Sequence, and that the query is the first thing that gets executed on the Build Agent. This is to ensure that the correct work items are retrieved before the sources are downloaded so as to avoid synchronicity issues. This variable is then used as a parameter in a custom build activity. For each work item in the list, the activity sets the value of the IntegrationBuild field to the build number, and it associates the work item with the build. The code for this activity looks something like the following:

using System;
using System.Activities;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Build.Workflow.Activities;
using Microsoft.TeamFoundation.VersionControl.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

namespace CustomActivities
{
    public sealed class AssociateResolvedWorkItems : CodeActivity
    {
        public InArgument<IList<WorkItem>> WorkItems { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            var workItems = context.GetValue(WorkItems);

            var buildDetail = context.GetExtension<IBuildDetail>();
            var store = buildDetail.BuildServer.TeamProjectCollection.GetService<WorkItemStore>();

            foreach (WorkItem workItem in workItems)
            {
                // Update the work item
                workItem["Microsoft.VSTS.Build.IntegrationBuild"] = buildDetail.BuildNumber;
                workItem.History = ActivitiesResources.Format(ActivitiesResources.BuildUpdateForWorkItem, null);
            }

            var array = workItems.ToArray();
            var errors = store.BatchSave(array);

            var associated = new List<WorkItem>();

            foreach (WorkItem workItem in workItems)
            {
                var error = errors.FirstOrDefault(item => item.WorkItem.Id == workItem.Id);
                if (error != null)
                {
                    // Alert update error
                    context.TrackBuildWarning(String.Concat("Unable to associate work item '", workItem.Id.ToString(CultureInfo.InvariantCulture), "': '", workItem.Title,
                                      "' - ", error.Exception), BuildMessageImportance.High);
                }
                else
                {
                    // Write update message...
                    context.TrackBuildMessage(String.Concat("The work item '", workItem.Id.ToString(CultureInfo.InvariantCulture), "' was updated with build label '",
                                      buildDetail.BuildNumber, "'."), BuildMessageImportance.Low);

                    // ... and add to the list of work items to associate
                    associated.Add(workItem);
                }
            }

            // Associate updated work items to the build, via the InformationNodeConverters
            // extension methods from Microsoft.TeamFoundation.Build.Client
            buildDetail.Information.AddAssociatedWorkItems(associated.ToArray());
            buildDetail.Information.Save();
        }
    }
}
You see that this is a two-way association. First, the build number is set as the value of the IntegrationBuild field on the work item. This makes sure that work item queries that use this field still work as expected. Second, the work item is added to the Build Information. This makes sure that the work items are displayed on the Build Details. And as an added bonus, Microsoft Test Manager’s Recommended Tests view also correctly displays the list of work items when comparing two builds.

The activity is inserted in the template at the same place as where the standard associations occurred:


One final thing to do is determine whether you also still want the standard association (based on changeset-related work items) to occur. If you don’t want that, the “Associate Changesets And Workitems” activity (which appears right above our newly inserted activity) has an interesting parameter for you to edit. If you right-click on the activity and select Properties, you notice that one of the parameters is named “UpdateWorkItems” and has the value ‘True’. Simply changing that to ‘False’ will make sure that only the work items you selected are associated with the build.

ACS Now Supports Federated Signout

For all of us who gave identity federation a try, federated signout has probably been a theme of some controversy. If your application supports it, you might have had to explain to users why logging out of your application also means that they are logged out of all other applications that happen to use the same Identity Provider. But if your application does not support it, you might have had a discussion or two about why the logoff functionality “does not work” – meaning that a user that is logged off can log back in to your application without re-authenticating to the Identity Provider.

This is a conceptual problem that, to my mind, is not quite solved yet. And it may prove impossible to solve as long as we don’t rethink the concepts of logging in and out of web applications. For example, if I use my Microsoft Account (formerly known as LiveID) to log on to some random web application and then log off again, I might be surprised that I am also logged out of the Windows Azure Management Portal, my Office365 environment, and my MSDN Subscriber pages. That’s not what I want when I click ‘Logoff’ at that random web application. On the other hand, if I click logoff and browse away, and then return after a while, I am probably surprised to see that I am logged back in again without re-submitting my credentials. Now I’m a technical guy, and I will probably have noticed my browser flickering due to the redirects to the Microsoft Logon Page and back, but my manager – or my girlfriend for that matter – may not be as perceptive as me.

As said, this problem may very well turn out not to have a technical solution. It is, however, an interesting topic for more philosophically inspired moments. But no matter where you stand on this matter, if you used Azure Access Control Service you did not really have a choice whether to implement federated signout or not, simply because ACS did not support it. Attempting to send the correct messages to ACS simply resulted in a static page that made the omission clear.

Recently, however, I discovered that ACS has been updated to support federated signout. The update apparently happened back in December 2012, so it took me quite a while to stumble upon this new feature, but here is how to use it from an ASP.NET application that acts as a Relying Party and uses WIF:

// Use the issuer and reply addresses from the WS-Federation config to send the signout request
var config = FederatedAuthentication.FederationConfiguration.WsFederationConfiguration;
WSFederationAuthenticationModule.FederatedSignOut(new Uri(config.Issuer), new Uri(config.Reply));

Simple, right? So now we cannot hide behind the limited signout functionality in ACS anymore when we have to choose whether or not to support federated signout. So I guess that leaves no alternative than to really start thinking about the experience we want our users to get used to.

Different Ways To Verify ACS Token Signing Certificate

Today, I had an interesting discussion with a colleague on the possible attack vectors on the communication between a Relying Party application and ACS (Azure Access Control Service, also known as Windows Azure Active Directory Access Control). The discussion focused on the extent to which the RP can be sure that it is in fact communicating with the STS (Security Token Service, which in this scenario is ACS). The discussion was a bit blurred by some other issues that arose, so we didn’t really get to the heart of the matter, but we were both left with a nagging concern that communication was not as secure as we would want it to be.

Now it’s never a good idea to keep a factual disagreement undecided, and when it comes to security there’s all the more reason to figure out what’s really going on. So, to set the background: we have an RP that lets its users authenticate with ACS (which in turn uses LiveID as Identity Provider). We consciously did not configure the tokens to be encrypted, so we only have them signed by ACS. This feature cannot be disabled (or I did not find the button for it) and that makes good sense, too. Since the communication between the RP and the STS all flows through the client (see this post for an extensive description), we want to make sure that the client is not able to tamper with the SAML token that it gets from the STS to pass on to the RP. There’s no viable scenario in which you would want to enable clients fiddling with the tokens, so not being able to disable token signing is a good thing as far as I’m concerned.

OK, so token signing is obligatory. The way this actually happens is – broadly – as follows if you don’t customize your ACS namespace: ACS generates an X.509 certificate that is used for signing the tokens. Upon signing a token, it uses this certificate’s private key to do the signing, and then embeds its own public key in the RequestSecurityTokenResponse that flows from ACS to the client and then to the RP. The RP then extracts this public key from the response, and uses it to validate the signed SAML token that is also part of this message.

So the question becomes: what is keeping a client from creating its own X.509 certificate on the fly, creating its own SAML token and signing it with the private key of the rogue certificate, and embedding that certificate’s public key in the response? How will the RP, upon extracting the public key, detect that this is not the public key from the STS? Of course, the normal way of doing this would be to place the public key of the legitimate signing certificate in the Trusted Person Certificate Store on the client’s host (assuming it’s Windows). But that’s not how it is configured if you just use the Identity and Access Tool to register your app as an RP at your ACS namespace. In fact, the relevant part of your web.config will look like this:

    <system.identityModel>
      <identityConfiguration>
        <audienceUris>
          <add value="[your URI]" />
        </audienceUris>
        <issuerNameRegistry type="System.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, System.IdentityModel, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089">
          <trustedIssuers>
            <add thumbprint="[a certificate thumbprint]" name="[a friendly name]" />
          </trustedIssuers>
        </issuerNameRegistry>
        <certificateValidation certificateValidationMode="None" />
      </identityConfiguration>
    </system.identityModel>

As this snippet shows, the certificateValidationMode is None, meaning that you can put anything you like in whichever Certificate Store on the client host, but your app will not hit that store to validate anything. Of course, we can always switch the certificateValidationMode to PeerTrust or PeerOrChainTrust and add the public key of the signing certificate to the Trusted Person Certificate Store. But if that would be the only way to be sure that the token does indeed come from the STS, it would for example mean that, first of all, the default configuration as done by the Identity and Access Tool would leave you severely unprotected, and second of all it would mean that there’s no way to get any level of issuer assurance in, for example, an Azure Websites scenario.

Luckily, it turns out that things are not as bad as they seem, and the key to this is the <trustedIssuers> element in the snippet above. As you can see, this element contains a collection of certificate thumbprints. The one element that we have in our web.config in this case is placed there by the Identity and Access Tool when I first set up my application to use ACS as STS, and it is the thumbprint for the certificate that ACS uses to sign my tokens. So now, when the RP receives a RequestSecurityTokenResponse from the client and extracts the public part of the certificate from that message, it first validates the certificate by comparing its thumbprint to the one in the web.config. If the client forged the response with his own certificate, the thumbprint in the web.config will not match with the certificate used, and authentication does not occur. The class responsible for this is the ConfigurationBasedIssuerNameRegistry, which holds a registry of all valid issuers based on the items in the <trustedIssuers> element. This class is referenced in the <issuerNameRegistry> element as the class to use for this purpose.

Of course, there’s always a theoretical possibility that a second certificate resolves to the same thumbprint. After all, the output space for thumbprints is limited and in fact smaller than the output space for certificate keys. But the chance that an attacker is able to willingly create a certificate that results in the same thumbprint as the ACS certificate is extremely small. But hey, if the risk is still too big for your comfort, you can always take the Trusted Persons Certificate Store route on top of all this :).

UPDATE: Today the discussion with my colleague continued for a bit. He made the valid point that the above behavior (i.e. authentication fails if the thumbprints don’t match) is not intended as a security feature. Instead, what happens here is that the WIF framework wants to determine the name of the issuer based on the incoming public key. It does this by calling into the ConfigurationBasedIssuerNameRegistry, which will return an empty string if the incoming public key does not resolve to a known thumbprint. Further down the line, WIF throws an exception if the issuer name is null or empty. This means that the availability of a matching thumbprint is not really what is evaluated here; it’s just that it happens to function as the “bridge” between the public key and the issuer name. This becomes a real concern given the possibility to provide your own implementation of the IssuerNameRegistry instead of going with the default configuration-based variant. Now if that custom implementation happens to ignore the thumbprint while resolving the certificate into an issuer name, the RP can no longer be sure that the incoming token actually came from the STS.
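
To make that point concrete, here’s a minimal sketch of what a thumbprint-checking custom IssuerNameRegistry could look like; the class name and the thumbprint placeholder are mine, not from WIF.

using System;
using System.IdentityModel.Tokens;

public class ThumbprintIssuerNameRegistry : IssuerNameRegistry
{
    private const string TrustedThumbprint = "[the ACS signing certificate thumbprint]";

    public override string GetIssuerName(SecurityToken securityToken)
    {
        var x509Token = securityToken as X509SecurityToken;
        if (x509Token != null &&
            String.Equals(x509Token.Certificate.Thumbprint, TrustedThumbprint, StringComparison.OrdinalIgnoreCase))
        {
            // A non-empty issuer name tells WIF the issuer is known
            return x509Token.Certificate.Subject;
        }

        // Null or empty makes WIF reject the token further down the line
        return null;
    }
}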

So, to conclude: if there’s really no possibility to do proper certificate validation, you can be sure of untampered communication with the STS, but only as a side effect of resolving the certificate to a name through the thumbprint. If at all possible, it’s a recommended practice to set the certificateValidationMode to at least PeerTrust and to place the token signing certificate’s public key in the Trusted Persons Certificate Store.

WCF Custom Headers: A Short How-To

Recently I was working on a project that entailed a web application that called into a back end WCF service. That WCF service was connected to a system for which the users of the web application were not known as users due to licensing issues, so the calls were made under the credentials of a service account. However, the back end system did have access to some information on the users of the web application and needed to make some authorization decisions based on that information. So I needed to make the username of the web user known to the back end WCF service, even though the call takes place under different credentials.

The solution to these types of requirements is usually a custom header, so that’s the approach I took. However, I did not want the developers to be required to manually modify every service proxy they create. Moreover, adding the header in code just does not seem as clean as it can be. So I decided to create message inspectors to handle the application of the headers, and let custom behaviors be responsible for applying the message inspectors. The behaviors can then be applied to both the WCF service and the client through the config files. I will go through these steps in this post, but if you just want a working sample, the sources are available here.

The Header

First off, I created a custom header that derives from MessageHeader.

    public class UsernameHeader : MessageHeader
    {
        public const string UserAttribute = "userName";

        public const string MessageHeaderName = "UserName";

        public const string MessageHeaderNamespace = "";

        public string UserName { get; private set; }

        public UsernameHeader(string userName)
        {
            if (String.IsNullOrEmpty(userName))
                throw new ArgumentNullException("userName");

            UserName = userName;
        }

        public override string Name
        {
            get { return MessageHeaderName; }
        }

        public override string Namespace
        {
            get { return MessageHeaderNamespace; }
        }

        protected override void OnWriteHeaderContents(XmlDictionaryWriter writer, MessageVersion messageVersion)
        {
            writer.WriteAttributeString(UserAttribute, UserName);
        }
    }

Of course I could have opted for an untyped header that would have slightly simplified extracting it from the request, but this gives more flexibility in case it’s not just a single value that needs to be transmitted.

The Client

The next step is to create the client part of the solution. For the message inspector, we need to create a class that implements IClientMessageInspector. The method implementations look like this:

        public object BeforeSendRequest(ref Message request, IClientChannel channel)
        {
            // TODO: fill the header below with the user name from the current thread, for example
            var header = new UsernameHeader("UserName");

            // Add the custom header to the outgoing request
            request.Headers.Add(header);

            return null;
        }

        public void AfterReceiveReply(ref Message reply, object correlationState)
        {
        }
As this implementation shows, I only need to handle the BeforeSendRequest method, because I only have to do something upon sending a request; not while receiving a response. In the BeforeSendRequest implementation, I simply create the UserNameHeader and initialize it with the username, and add it to the request headers.

Now I want to create a behavior that applies this message inspector to the runtime. This can be done by implementing IEndpointBehavior. Since both IEndpointBehavior and IClientMessageInspector are just interfaces (as opposed to base classes), I can use my existing class to implement this second interface. The implementation looks like this:

        public void Validate(ServiceEndpoint endpoint)
        {
        }

        public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters)
        {
        }

        public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
        {
        }

        public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
        {
            // Add this message inspector to the client runtime
            clientRuntime.MessageInspectors.Add(this);
        }

The only method with an actual implementation is ApplyClientBehavior, which adds the instance (i.e. the IClientMessageInspector implementation) to the collection of message inspectors of the passed in ClientRuntime.

All that’s left to do for the client behavior is to implement BehaviorExtensionElement so as to be able to add the behavior to the client through the config file. And since this is the only class we need to implement for our solution, we can use the same class that already implements the two interfaces. The implementation of BehaviorExtensionElement looks like this:

        protected override object CreateBehavior()
        {
            return this;
        }

        public override Type BehaviorType
        {
            get { return GetType(); }
        }

The Service

Now for the service side. We basically do the same thing as we did with the client, the only difference being in the interfaces to implement. Because we want to modify the service runtime now, we implement IDispatchMessageInspector instead of IClientMessageInspector. The implementation looks like this:

        public UsernameHeader UsernameHeader { get; private set; }

        public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
        {
            var headerIndex = request.Headers.FindHeader(UsernameHeader.MessageHeaderName, UsernameHeader.MessageHeaderNamespace);
            if (headerIndex >= 0)
            {
                var reader = request.Headers.GetReaderAtHeader(headerIndex);
                UsernameHeader = ParseHeader(reader);
            }

            return null;
        }

        public void BeforeSendReply(ref Message reply, object correlationState)
        {
        }

        private static UsernameHeader ParseHeader(XmlDictionaryReader reader)
        {
            if (reader.IsStartElement(UsernameHeader.MessageHeaderName, UsernameHeader.MessageHeaderNamespace))
            {
                var originatingUser = reader.GetAttribute(UsernameHeader.UserAttribute);
                if (String.IsNullOrEmpty(originatingUser))
                {
                    throw new FaultException("No originating user provided", FaultCode.CreateSenderFaultCode(new FaultCode("ParseHeader")));
                }

                return new UsernameHeader(originatingUser);
            }

            return null;
        }

What this does upon receiving a request is parse out the UsernameHeader and set it to a property. This way, the deserialized header becomes available for user code as a property of the service behavior, because we will implement the behavior and the message inspector in the same class, just like we did with the client. Again, I could have opted for a simpler approach that involves extracting the header directly from the OperationContext when needed. Going through the extra step of accessing it through the behavior, however, is a cleaner approach. First of all I just need to extract the header once, but more importantly this approach conforms to the general idea of the behavior as the primary WCF extension and interaction point.

Now the interface we implement for the behavior is IServiceBehavior. This differs slightly from the IEndpointBehavior interface, which can be an alternative in some scenarios. The implementation is, again, very simple:

        public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
        {
        }

        public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters)
        {
        }

        public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
        {
            foreach (ChannelDispatcher channelDispatcher in serviceHostBase.ChannelDispatchers)
            {
                foreach (var endpointDispatcher in channelDispatcher.Endpoints)
                {
                    // Add this message inspector to each endpoint's dispatch runtime
                    endpointDispatcher.DispatchRuntime.MessageInspectors.Add(this);
                }
            }
        }
What we do here is apply the instance (i.e. the IDispatchMessageInspector) to all endpoints. And just as with the client, we have our class inherit from the BehaviorExtensionElement so as to make it configurable through the config file.

Connecting The Dots

Now all that’s left to do is modify the config files to make all this work. For both the client and the service, we need to register the custom BehaviorExtensionElement like this:

      <add name="UsernameBehavior" type="[Fully qualified assembly name of wither the client or the service behavior]" />

Then, we need to register the behavior for the client:

      <client>
        <endpoint behaviorConfiguration="ClientBehavior" />
      </client>
      <behaviors>
        <endpointBehaviors>
          <behavior name="ClientBehavior">
            <UsernameBehavior />
          </behavior>
        </endpointBehaviors>
      </behaviors>

And the service:

      <behaviors>
        <serviceBehaviors>
          <behavior>
            <UsernameBehavior />
          </behavior>
        </serviceBehaviors>
      </behaviors>
To actually get the value from the header, simply extract the behavior from the OperationContext:

            if (OperationContext.Current != null)
            {
                var usernameBehavior = OperationContext.Current.Host.Description.Behaviors.Find<UsernameServiceBehavior>();
                if (usernameBehavior != null && usernameBehavior.UsernameHeader != null)
                {
                    return usernameBehavior.UsernameHeader.UserName;
                }
            }

            return null;

And that’s it! Hope this helps. Full source code is available here.

UPDATE: I added a small code snippet that shows how to actually access the header on the service side. I also elaborated a bit on the reasoning behind the approach of making the header available through the service behavior instead of accessing it directly.