Leveraging ARM templates for deploying Azure CosmosDB

The other day I stumbled upon the brand new (but long awaited) ARM support for CosmosDB databases and collections. CosmosDB ARM support used to be limited to just provisioning the database account. Everything inside it, such as databases and collections, had to be provisioned using some other mechanism, such as PowerShell or Azure CLI – or heaven forbid, the portal.

But that’s over: support for ARM is finally here! It’s not all perfect yet, though. I guess Rome wasn’t built in a day either, so let’s count our blessings – and find workarounds for what’s still missing.

One of those workarounds has to do with provisioning and updating throughput (i.e. RU/s) on either a database or a collection. Provisioning throughput upon creating a new database can be done by including an options object with a throughput property in the database resource, for example like so:
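
A sketch of such a database resource, assuming the databaseAccounts/apis/databases resource type and API version from the initial ARM support; the parameter names are my own:

```json
{
  "type": "Microsoft.DocumentDB/databaseAccounts/apis/databases",
  "name": "[concat(parameters('accountName'), '/sql/', parameters('databaseName'))]",
  "apiVersion": "2016-03-31",
  "properties": {
    "resource": { "id": "[parameters('databaseName')]" },
    "options": { "throughput": "[parameters('throughput')]" }
  }
}
```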

But, changing that value after initial creation is not allowed. Updates to that value can instead be passed through a nested settings resource, like this:
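
Again a sketch under the same assumptions about resource type and API version; note the settings child resource named ‘throughput’:

```json
{
  "type": "Microsoft.DocumentDB/databaseAccounts/apis/databases/settings",
  "name": "[concat(parameters('accountName'), '/sql/', parameters('databaseName'), '/throughput')]",
  "apiVersion": "2016-03-31",
  "dependsOn": [
    "[resourceId('Microsoft.DocumentDB/databaseAccounts/apis/databases', parameters('accountName'), 'sql', parameters('databaseName'))]"
  ],
  "properties": {
    "resource": { "throughput": "[parameters('throughput')]" }
  }
}
```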

And this settings resource, in turn, is only valid as a child of a parent resource that has itself already been provisioned with throughput – so a template holding only a settings resource with throughput in it (i.e. without the options object) fails on initial creation. So, it seems that updating throughput requires a different template than the one that was used for the initial creation. That’s not how I like my ARM templates…

So I set out to devise a workaround, with the end goal of a deployment pipeline that uses ARM wherever possible and that can be run multiple times yielding the same result, i.e. is idempotent. I found that, while you need to use the options object to initially provision the throughput, the throughput is allowed to be set to null on subsequent deployments. This can be leveraged with a simple piece of logic that conditionally sets the value to either the specified throughput or null, depending on whether it’s an update or a create:
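
In template form that boils down to something like this, where databaseExists is a boolean parameter I introduce for the purpose (the name is arbitrary):

```json
"options": {
  "throughput": "[if(parameters('databaseExists'), json('null'), parameters('throughput'))]"
}
```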

Now, I just need to pass that flag to the template from the outside. For that, I can use an Azure CLI or PowerShell task to determine whether the database already exists, and pass the result into the ARM template as an input parameter. I won’t go into all the details here, but something like the sketch below should do the trick.
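
As a sketch, assuming an Az PowerShell task and the same databaseAccounts/apis/databases resource type as in the template; the variable and parameter names are placeholders:

```powershell
# Check whether the Cosmos DB SQL database already exists and expose the result
# as a pipeline variable for the subsequent ARM deployment task.
$resourceId = "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName" +
              "/providers/Microsoft.DocumentDB/databaseAccounts/$accountName/apis/databases/$databaseName"

$existing = Get-AzResource -ResourceId $resourceId -ApiVersion "2016-03-31" -ErrorAction SilentlyContinue
$databaseExists = if ($existing) { "true" } else { "false" }

# Azure DevOps logging command: makes $(databaseExists) available to later tasks.
Write-Host "##vso[task.setvariable variable=databaseExists]$databaseExists"
```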

Obviously, this is not ideal. I’m sacrificing the decoupling of my ARM template from other tasks in my pipeline: the template depends on another task to provide input instead of just a parameter file, and I no longer have the option to let the ARM template figure out the database name based on naming conventions or the like, because I need to know that name up front in order to pass it to the CLI / PowerShell task. But, putting it all together, I do have a single template used for creates and updates, and a single place (i.e. the parameter file) to keep track of the throughput I provisioned. To me, that’s worth the sacrifice.

The full ARM template looks like this:
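
Resource types, API version and parameter names are again assumptions on my part, so treat this as a sketch rather than gospel:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "accountName": {
      "type": "string",
      "metadata": { "description": "Name of the existing Cosmos DB account." }
    },
    "databaseName": {
      "type": "string",
      "metadata": { "description": "Name of the SQL API database to create or update." }
    },
    "throughput": {
      "type": "int",
      "defaultValue": 400,
      "metadata": { "description": "Provisioned throughput in RU/s." }
    },
    "databaseExists": {
      "type": "bool",
      "metadata": { "description": "Supplied by the pipeline: true when the database already exists." }
    }
  },
  "resources": [
    {
      "comments": "On initial creation, throughput is provisioned through the options object; on updates it is set to null.",
      "type": "Microsoft.DocumentDB/databaseAccounts/apis/databases",
      "name": "[concat(parameters('accountName'), '/sql/', parameters('databaseName'))]",
      "apiVersion": "2016-03-31",
      "properties": {
        "resource": { "id": "[parameters('databaseName')]" },
        "options": {
          "throughput": "[if(parameters('databaseExists'), json('null'), parameters('throughput'))]"
        }
      }
    },
    {
      "comments": "Throughput updates on an existing database go through the nested settings resource.",
      "condition": "[parameters('databaseExists')]",
      "type": "Microsoft.DocumentDB/databaseAccounts/apis/databases/settings",
      "name": "[concat(parameters('accountName'), '/sql/', parameters('databaseName'), '/throughput')]",
      "apiVersion": "2016-03-31",
      "dependsOn": [
        "[resourceId('Microsoft.DocumentDB/databaseAccounts/apis/databases', parameters('accountName'), 'sql', parameters('databaseName'))]"
      ],
      "properties": {
        "resource": { "throughput": "[parameters('throughput')]" }
      }
    }
  ]
}
```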

Hope this helps!

Updates on IP Restrictions for Azure App Services

Two weeks ago, I wrote about the new VNet Integration feature on Azure App Services. This has everything to do with being able to lock down downstream systems to only accept traffic coming from a specific VNet under your control, as opposed to a set of public IP addresses that are managed by Microsoft, shared with other tenants, and prone to change.

Today it’s time to follow up on that, because we may not only want to protect downstream systems, but also the web apps themselves. For public web apps, that protection typically does not exist at the network level, but wouldn’t it be nice if there were some network-level protection option available for private apps? Until now, the only real way to do that was by employing an App Service Environment: an ASE sits in your own VNet, with all the security and flexibility this brings, but it is exceedingly expensive.

But the need to actually deploy an ASE has now disappeared if all you want to do is lock down access to a web app so that it’s only available from your network: the Microsoft.Web service endpoint has just become available!

That’s right: you can now configure a service endpoint for Azure App Services on one or more of your subnets:

[Screenshot: enabling the Microsoft.Web service endpoint on a subnet]

Then, on the Web App, you can configure IP restrictions so that only traffic coming from that subnet, via the Service Endpoint, is allowed:

[Screenshot: configuring an access restriction rule for the subnet on the Web App]
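
For those who prefer to script these steps rather than click through the portal: in ARM, the combination could look roughly like the fragment below. The parameter names are made up for the example and the API versions are assumptions, so double-check them against the current schemas:

```json
{
  "type": "Microsoft.Network/virtualNetworks/subnets",
  "name": "[concat(parameters('vnetName'), '/', parameters('subnetName'))]",
  "apiVersion": "2018-11-01",
  "properties": {
    "addressPrefix": "[parameters('subnetPrefix')]",
    "serviceEndpoints": [
      { "service": "Microsoft.Web" }
    ]
  }
},
{
  "type": "Microsoft.Web/sites/config",
  "name": "[concat(parameters('webAppName'), '/web')]",
  "apiVersion": "2018-02-01",
  "properties": {
    "ipSecurityRestrictions": [
      {
        "name": "AllowFromMySubnet",
        "action": "Allow",
        "priority": 100,
        "vnetSubnetResourceId": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('vnetName'), parameters('subnetName'))]"
      }
    ]
  }
}
```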

So, there you have it: not only can you rigorously protect access to downstream systems while allowing traffic originating from Azure App Services; you can now also protect an Azure App Service itself to only allow internal traffic!

Enabling Azure App Service VNet Integration ‘v2’ from CI/CD

If you’re anything like me, you want to automate everything, from deploying your basic Azure infrastructure all the way to the application code. And the bar is set exceedingly high for deviations from this rule.

Sometimes – and especially with new and/or preview features – that requires some extra work, because support for such features in your deployment technology of choice may not yet be available.

Take the case of the new VNet Integration feature for Azure Web Apps. All very cool that it can be set through the portal, but that’s about the last thing I want to do when creating robust deployments. For me, ARM templates are the technology of choice when it comes to deploying Azure resources, even though a case can be made for alternatives such as Azure CLI. But in the case of VNet Integration, neither ARM nor the Azure CLI is an option yet at the time of writing. There aren’t even proper Azure PowerShell commands to get this done.

In situations like this, I head over to the Azure Resource Explorer and see if I can reverse-engineer what happens in the Azure Resource Manager API when I change a setting through the portal. Armed with that knowledge, I’m able to craft an API call that gets the job done. Wrapped in a fairly simple PowerShell script, it may end up like this:
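
The resource path, API version and property names below are what I observed in the Resource Explorer at the time, so treat them as assumptions and verify them before relying on this:

```powershell
param(
    [Parameter(Mandatory = $true)] [string] $ResourceGroupName,
    [Parameter(Mandatory = $true)] [string] $AppServiceName,
    [Parameter(Mandatory = $true)] [string] $SubnetResourceId
)

# Acquire a bearer token for the Azure Resource Manager API from the current
# AzureRM context (the 'old' modules, see the remarks below).
$context        = Get-AzureRmContext
$azureRmProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
$profileClient  = New-Object Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient($azureRmProfile)
$token          = $profileClient.AcquireAccessToken($context.Subscription.TenantId)

# PUT the VNet Integration settings directly against the ARM REST API,
# mimicking what the portal does (path and payload reverse-engineered).
$uri = "https://management.azure.com/subscriptions/$($context.Subscription.Id)" +
       "/resourceGroups/$ResourceGroupName/providers/Microsoft.Web/sites/$AppServiceName" +
       "/networkConfig/virtualNetwork?api-version=2018-02-01"

$body = @{
    properties = @{
        subnetResourceId = $SubnetResourceId
        swiftSupported   = $true
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Put -Uri $uri `
    -Headers @{ Authorization = "Bearer $($token.AccessToken)" } `
    -ContentType "application/json" `
    -Body $body
```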

Some remarks are in order here, the most important of which is that the subnet with which the app is integrated must be preconfigured with a delegation to Microsoft.Web/serverFarms. This is done automatically when you enable VNet Integration through the portal, but not when making the API call yourself as in the script above. Fortunately, this is settable through ARM, so there’s no need to include it in the script.

Second, the script works with the ‘old’ AzureRM modules for all Azure interactions apart from the actual API call. For me, this works best for interoperability with Azure DevOps hosted agents, which didn’t support the new Az modules yet at the time I created this script. But of course it can quite easily be adapted to the new modules.

And lastly, this is not really pretty, production-ready code (but then again, is PowerShell ever pretty?). It can certainly be improved upon, but for me this is dispensable code in the sense that I’ll take it out of my pipeline and discard it the minute ARM or Azure CLI support becomes available. It’s just that I don’t feel like waiting for that to happen before I can automate deployments that use this feature.

Hope this helps anyone looking for a way to automate the new VNet Integration feature, and just maybe it can also serve as a bit of inspiration as to how you could create your own temporary solutions to features not available in your primary deployment technology of choice.

New: SAS token support in the new Azure Service Bus .NET Core client

For some time now, Azure Service Bus has come with two client libraries. The first is the good old WindowsAzure.ServiceBus, which is functionally complete and mature, but requires the full .NET Framework 4.5. The second is the new Microsoft.Azure.ServiceBus library, which targets .NET Standard and is therefore usable within .NET Core, but is not functionally complete yet.

But a couple of days ago, at least one of those functional omissions was (partly) resolved with the release of version 2.0.0 of the client, because this version now offers rudimentary support for SAS tokens. Rudimentary because it will not generate tokens for you yet, but it will play nicely with tokens you crafted yourself.

Why is that important? Well, because SAS tokens play a key role in messaging scenarios that cross organizational boundaries. When two parties from different organizations communicate via Request/Reply messaging for example, one or both parties will be communicating via one or more queues that belong to the other party’s organization. In those situations, you’d typically prefer granting access using SAS tokens instead of keys.

Because crafting a SAS token is a rather precise task that can take some time for first-timers, I created a simple Request/Reply sample that involves using a SAS token. It’s intentionally kept simple so you will want to expand upon it before using it in your own application; it just aims to showcase the general idea of generating and using SAS tokens in a Request/Reply scenario. Let me know what you think in the comments!
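
To give an idea of what crafting such a token involves, here’s a minimal sketch using nothing but BCL types; the resource URI, policy name and key in the usage example are placeholders:

```csharp
using System;
using System.Net;
using System.Security.Cryptography;
using System.Text;

public static class SasTokenFactory
{
    // Crafts a Service Bus SAS token: an HMAC-SHA256 signature over the
    // URL-encoded resource URI plus an expiry timestamp, signed with the key
    // of a Shared Access Policy (the 'skn' in the resulting token).
    public static string Create(string resourceUri, string keyName, string key, TimeSpan timeToLive)
    {
        var expiry = DateTimeOffset.UtcNow.Add(timeToLive).ToUnixTimeSeconds();
        var encodedUri = WebUtility.UrlEncode(resourceUri);
        var stringToSign = $"{encodedUri}\n{expiry}";

        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
        {
            var signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));

            return $"SharedAccessSignature sr={encodedUri}" +
                   $"&sig={WebUtility.UrlEncode(signature)}" +
                   $"&se={expiry}&skn={keyName}";
        }
    }
}

// Example usage: a token granting access to a single queue for one hour.
// var token = SasTokenFactory.Create(
//     "https://contoso.servicebus.windows.net/requestqueue",
//     "SendOnlyPolicy",
//     "<shared access policy key>",
//     TimeSpan.FromHours(1));
```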

HowTo: Secure a Custom Webhook for Azure Event Grid

As I mentioned in my previous post, I’ve been playing around with the new Azure Event Grid lately. Custom event publishers and subscribers hold a lot of promise, especially while we are still awaiting the bulk of Azure services to be hooked up to Event Grid.

But for custom publishers and subscribers to actually lift off, we need some way to authorize calls: both those from the publisher to Event Grid, and those from Event Grid to the subscriber endpoint. The first is pretty well covered in the docs, but the call from Event Grid to the subscriber endpoint is not very well described at this point in time. The docs just mention an initial validation sequence, which is supposed to prove ownership of the endpoint but in actuality just verifies that the endpoint expects to handle Event Grid events.

If this were the whole story, having an Event Grid subscriber endpoint would imply accepting unauthorized calls containing event payloads, meaning anyone with knowledge of the endpoint address could send bogus events your way – and since you’d have no way to tell authentic from fake events, you’d also be open to Denial of Service.

Luckily, a conversation with the Product Team quickly revealed that this is not the whole story. When you register a subscriber endpoint in Azure Event Grid, you can include a query string. This query string will be included in each and every call to your endpoint, both the initial validation call and subsequent event notification calls. If you put some sort of key in there and then verify its presence in each incoming call, you’ve effectively locked out anyone who merely knows the endpoint address, and you’ve made a Denial of Service attack a lot harder.
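
To illustrate, here’s a minimal sketch of how a webhook could enforce such a key on every incoming call; the query parameter name (‘key’) and the configuration setting are made up for this example:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Configuration;

// Rejects any call that does not carry the expected key in its query string.
// Apply it to the webhook controller with [EventGridKeyFilter].
public class EventGridKeyFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        var configuration = (IConfiguration)context.HttpContext.RequestServices
            .GetService(typeof(IConfiguration));

        var expectedKey = configuration?["EventGrid:WebhookKey"];
        var providedKey = context.HttpContext.Request.Query["key"].ToString();

        if (string.IsNullOrEmpty(expectedKey) || providedKey != expectedKey)
        {
            context.Result = new UnauthorizedResult();
        }
    }
}
```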

[Screenshot: adding a query string with a key when creating the event subscription]

Furthermore, query strings that are added this way are not visible when enumerating Event Grid subscriptions in the portal, as an added layer of security.

[Screenshot: listing event subscriptions in the portal, with the query string hidden]

I’ve updated my code samples to include a possible way to handle this for an ASP.NET Core WebAPI webhook.

Thanks to Dan Rosanova for clearing up how authorizing Event Grid calls to custom webhooks can be done.

Handling Custom Azure Event Grid Events

Lately I’ve been exploring the new possibilities opened up by Azure Event Grid, which was introduced last month.

Azure Event Grid is a fully managed platform for publishing and subscribing to event notifications. It is intended to ultimately encompass all Azure services as event publishers and/or subscribers, but it also allows for custom, non-Azure participants. At this point, the following publishers and handlers are available:

[Diagram: the Event Grid functional model, showing the currently available publishers and handlers]

For a little bit of background information on how AEG relates to the other eventing offerings on Azure, such as Event Hubs or Service Bus, see this write-up by Saravana Kumar.

Long story short: it all looks very promising, but since most Azure services are yet to be hooked up to Event Grid, the custom topic publishers and WebHook subscribers may hold the most promise for the short term.

So I tried my hand at actually getting that to work, with a console app for publishing and an ASP.NET Core WebAPI for handling; the code is available here.

In general, it’s relatively straightforward. The only thing that took me some time to get right was the handling of the validation process. The issue was that the documentation says the validation request contains a header ‘Event-Key’ with a value of ‘Validation’. In actuality, the header is ‘aeg-event-key’ with value ‘SubscriptionValidation’. Since my API routes the validation request to a special action based on this header, that’s pretty relevant. But let’s keep in mind that Event Grid is in preview at this point, and the documentation is part of that status.
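
For illustration, here’s a minimal sketch of a webhook action that branches on that header. For brevity it handles validation and regular notifications in a single action, and the payload shapes (data.validationCode echoed back as validationResponse) reflect the preview behavior I observed, so verify them against the current docs:

```csharp
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json.Linq;

[Route("api/[controller]")]
public class EventsController : Controller
{
    [HttpPost]
    public IActionResult Post([FromBody] JArray events)
    {
        // Route on the header Event Grid actually sends: 'aeg-event-key'
        // with value 'SubscriptionValidation' for the validation handshake.
        var eventKey = Request.Headers["aeg-event-key"].ToString();

        if (eventKey == "SubscriptionValidation")
        {
            // Echo the validation code back to prove we expect Event Grid events.
            var validationCode = (string)events[0]["data"]["validationCode"];
            return Ok(new { validationResponse = validationCode });
        }

        foreach (var gridEvent in events)
        {
            // Handle the actual event notification here.
        }

        return Ok();
    }
}
```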

Microsoft Build 2017 – Day 2

The first day of Build 2017 was packed with exciting announcements and great content, such as the announcement of IoT Edge, new stuff on .NET Core and .NET Standard, a lot of work on AI and ML, and much more; see Peter’s write-up for some more detail (in Dutch). So let’s see if the second day can top this :).

Keynote

The Thursday keynote was pretty much centered around Windows 10, and more specifically the Windows 10 Fall Creators Update, that will include new features such as OneDrive Files On Demand and Windows Timeline. With OneDrive Files On Demand, your files in OneDrive will be available for you to work with, regardless of whether or not they are actually present on the device. If they’re not, they’ll be pulled from the Cloud when needed. And with Windows Timeline, your work on these documents or whatever else you have going on, will travel with you from device to device, including Android and iOS devices. All this is made possible by combining Cortana and the Microsoft Graph to track your data and your activities. And how about copying something on your PC, and pasting it on your iPhone? You can do that with the Cloud Powered Clipboard. Obviously a lot more was covered during the keynote, which is available at Channel 9.

Discussing All Things Azure

An interesting new session format this year is the Open Q&A, and for me the one with Mark Russinovich and Corey Sanders was a must-attend. They discussed upcoming features in Azure, such as the future possibility of deploying most of the storage options, including Azure SQL Database, in a private network to cut them off from direct Internet access; the expansion of Azure AD to include identity information for compute objects such as VMs, with roughly the same capabilities as computer objects in on-prem ADs; and upcoming support for encryption at rest for all storage services. They also touched upon the state of Cloud Services as pretty much the oldest service in the book: it’s not going anywhere as long as customers depend on it, but don’t expect a lot of new innovation coming to it anymore.

Serverless, Containers, Service Fabric

Of course, serverless architectures are among the top-ranking topics at the conference, as are container services and Service Fabric. Some highlights in the serverless computing area are the availability of the Azure Functions Runtime for on-prem deployment of Azure Functions, increased Visual Studio tooling support for Azure Functions, and so on. Service Fabric is becoming increasingly integrated with all sorts of related technologies, such as Azure Networking, API Management, containers and .NET Core 2.0.

Presenting As A Form Of Art

And in closing this post, an honorable mention goes to the Anders Hejlsberg session on TypeScript. He did one of those sessions where it’s just a very experienced presenter with a microphone and a code editor, and he showed off some very cool stuff that’s made possible just by layering a type system on top of JavaScript. It’s impossible for me to do it justice here, so just watch for the session to appear on Channel 9 and treat yourself to an hour of entertainment.