Azure Virtual WAN migration – tales from the frontline

We recently conducted an Azure Virtual WAN migration for a customer to help resolve a number of issues in their legacy Azure networking environment. Some statistics about their environment included:

  • 60 connected virtual networks
  • 20 virtual network gateways deployed
  • 6 firewall instances
  • 2 SDWAN fabrics
  • 300+ route tables
  • multiple hub and spoke deployments

The goals for the program were set out as follows:

  • Simplified network design
  • Simplified network management
  • Centralised security controls for all traffic
  • Support for SDWAN platforms
  • Right sizing of resources such as ExpressRoute
  • Ability to support additional regions and office locations

I came up with the following high level design which was accepted:

The Azure Virtual WAN migration approach was run along the following lines:

  • Virtual WAN to be deployed into a parallel environment including new ExpressRoute circuits
  • ExpressRoute routers to be linked to provide access between legacy environment and the new virtual WAN environment
  • Staged migration to allow for dev/test environments to be migrated first to complete testing and prove capability

This meant during migration, we had an environment running similar to this:

Azure Virtual WAN migration

What follows is an outline of some of the issues we faced during the migration, ranging from the “that seems obvious” to the “not so obvious”.

The “That seems obvious” issues we faced

Conflicting Network Address Ranges

This one seems the most obvious and hindsight is always 20-20. Some of the networks migrated were connected in strange and undocumented ways:

In this configuration, the network range was not automatically routed and could only be seen by its immediate neighbour. The migration process, however, broke and reconnected all peerings to meet the centralised traffic management requirement. When the network was migrated to the Virtual WAN, everything could see it, including a remote office site that used the same subnet for its local servers.

Network Policy for Private Endpoints required if using “Routing Intent and Routing Policies”

This one is also obvious in hindsight. It was missed initially due to inconsistent deployment configurations on subnets. Not all subnets were subject to route tables and user defined routes, so some subnets with private endpoints had been deployed without network policies configured:

When “Routing intent and Routing Policies” are enabled, this effectively is the same as applying a route table to the subnet and therefore a private endpoint network policy is required.
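The remediation is a one-off change per affected subnet. A minimal PowerShell sketch, with hypothetical resource names:

# Enable private endpoint network policies on the affected subnet
$vnet = Get-AzVirtualNetwork -Name "spoke-vnet" -ResourceGroupName "rg-network"
($vnet.Subnets | Where-Object Name -eq "private-endpoints").PrivateEndpointNetworkPolicies = "Enabled"
$vnet | Set-AzVirtualNetwork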

Propagate Gateway Routes

Some of the virtual networks contained firewalls that provided zone isolation between subnets. The default route table for a network with Routing Intent enabled sends Virtual Network traffic to VNETLocal. To shift the functionality to the new centralised firewalls, a local Virtual Network route via the managed appliance was needed.

Without “Propagate Gateway Routes” enabled, the locally generated route table at the device included the new Virtual Network route plus the default set of routes that Microsoft applies to all network interfaces, including a default route to the internet.
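As a sketch (hypothetical names and address ranges), the spoke route table needs the local Virtual Network route via the appliance while leaving gateway route propagation at its default of enabled:

# Send local VNet traffic via the centralised firewall appliance
$route = New-AzRouteConfig -Name "vnet-via-fw" -AddressPrefix "10.10.0.0/16" `
    -NextHopType VirtualAppliance -NextHopIpAddress "10.10.1.4"

# Omitting -DisableBgpRoutePropagation leaves "Propagate Gateway Routes" enabled
New-AzRouteTable -Name "rt-zone-a" -ResourceGroupName "rg-network" `
    -Location "australiaeast" -Route $route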

The “Not So Obvious” issues we faced

Enabling “Routing Intent and Routing Policies” for Private traffic only

When deciding how internet egress would be handled, the initial directive from the Cyber team was to ingress/egress through the same point. As there was a “default route” coming up the ExpressRoute from the on-premises connection, I turned on “Routing Intent and Routing Policies” for Private traffic only:

The unexpected behaviour of managing only private traffic is that in all connected Virtual Networks, the applied route table sends the RFC1918 address ranges via the managed application, but then applies the remaining default route table you would normally see on any standard virtual network. Routes advertised via the ExpressRoute gateways are not propagated to the end devices. In the end, we needed to apply “Internet traffic” policies via Routing Intent and Routing Policies as well, so that internet egress also flowed through our centrally managed applications.

Asymmetric Routing

Asymmetric routing, the bane of Azure network administrators everywhere. With ExpressRoute connections to the on-premises network in two locations, all Virtual WAN networks are made available via two paths into the environment.

Hub to hub communication adds additional AS-Path information to the route, which should play into route determination, but in our case the on-premises router connections added even more. Traffic originating in Hub2 would therefore enter the network via the local ExpressRoute, but the return path via Hub1 was the same length or shorter, and so was preferred. With firewalls in play, traffic was dropped due to an unknown session and an asymmetric route.

There were two ways to handle this. The new (Preview) Route Map feature for Virtual Hubs is designed to assist with this issue by using policies to prepend additional AS-Path information to the route. The problem is that, at the time of writing, this feature is in preview and we are in production.

The alternative was to use BGP community tags and allow the ExpressRoute routers to apply policy based routing information to the AS path.

BGP Community tags

On the surface, this looked to be a simple solution. Apply a BGP community tag to each VNET based on the hub it is peered with. By default, the BGP community region tag is also automatically applied and this information is shared with the ExpressRoute routers.

Except, Virtual WAN does not fully support BGP community tagging. Community tags are shared with the local ExpressRoute service, but are stripped in inter-hub communication. Applying region specific policies to the routing path is not possible if both regions’ community tags are not shared.

Next-gen firewalls

The next-gen managed applications that were deployed presented a number of issues in our migration configuration as well. Some of the issues we faced were vendor agnostic, while others were specific to the brand.

I will cover these in another post.


This article is also available here: Azure Virtual WAN migration – tales from the frontline – Arinco

Securing service traffic in Azure – Part 3

Securing service traffic in Azure

In Part 1 we looked at using private endpoints to secure access to a data source from only our app service. In Part 2 I showed you how to achieve a similar result using service endpoints. The same set of controls that recommends we secure everything, and that led us down this path, also recommends that we implement application security. In this final part of the “Securing service traffic in Azure” series, I will take the application configuration from Part 2:

And show you how to protect your traffic end to end by placing it behind a Web Application Firewall.

Web Application Firewall

A Web Application Firewall, or WAF, is a security solution that monitors, filters, and blocks incoming and outgoing web traffic to protect web applications from various cyber threats and attacks. From a high level:

  1. The client connects to the WAF
  2. The WAF inspects the request for malicious intent
  3. If OK, the WAF then forwards the request to the application layer

It is fair to say that Web Application Firewalls, their best practices, and their configuration form a large topic, and not one covered here. As our focus is on securing service traffic in Azure, I will only be focusing on the basic functionality of the Microsoft WAF and the application gateway required to secure my application. For more detail on Microsoft Web Application Firewalls, you can read about it here.

The WAF capability requires a Microsoft Application Gateway with WAF enabled. I am going to step through the configuration of the Application Gateway and how to secure our function app behind it. Once the app is secured behind the application gateway, I can then apply the additional security of WAF policies to it.

Microsoft Application Gateway

Selecting “Microsoft Application Gateway” from the marketplace, I have given my application gateway a name, and as this is used purely for our testing, I have set the autoscaling to “No” and selected the smallest instance count. As the application gateway behaves like an external load balancer, it also requires a subnet to connect to in order to interact with internal services. This subnet is important to securing the app service completely behind the Application gateway.

For my front end, I have added a new public IP address and accepted the defaults:

The backend pool is where I link the App Gateway to the function app, so I have selected “App Services” from the target type, and then selected my function app:

Now that I have a front end and back end connection, the application gateway requires a routing rule to link the two together. I have given the routing rule a name of “forward_traffic” and a priority of 100. The routing rule needs a listener and a target. As this is about securing an app service behind a WAF, I will not delve into configuring HTTPS listeners. That is a topic for another discussion. For now, I will be using open HTTP over port 80 as our connection point:

I then need to configure the backend targets. My target is the pool created in the previous tab “bepool1”:

I am creating a new “Backend settings”. For this, I have accepted most of the defaults, however I need to link to the app service.

As the Hostname for the Function App continues to resolve even when the firewalling is complete, I pick the hostname from the backend target. This is for test/demo purposes only so I can ignore the warning.
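For reference, the portal steps above map roughly onto the following PowerShell sketch. All resource names are hypothetical, and HTTPS listener configuration is omitted, as in the walkthrough:

# Hypothetical names throughout (Az.Network module)
$vnet     = Get-AzVirtualNetwork -Name "vnet-demo" -ResourceGroupName "rg-demo"
$subnet   = Get-AzVirtualNetworkSubnetConfig -Name "appgw-subnet" -VirtualNetwork $vnet
$pip      = New-AzPublicIpAddress -Name "appgw-pip" -ResourceGroupName "rg-demo" -Location "australiaeast" -AllocationMethod Static -Sku Standard

$gipcfg   = New-AzApplicationGatewayIPConfiguration -Name "gw-ipcfg" -Subnet $subnet
$fipcfg   = New-AzApplicationGatewayFrontendIPConfig -Name "fe-ip" -PublicIPAddress $pip
$feport   = New-AzApplicationGatewayFrontendPort -Name "fe-port80" -Port 80
$pool     = New-AzApplicationGatewayBackendAddressPool -Name "bepool1" -BackendFqdns "func-demo.azurewebsites.net"

# "Pick host name from backend target", as selected in the portal
$settings = New-AzApplicationGatewayBackendHttpSetting -Name "be-https" -Port 443 -Protocol Https -CookieBasedAffinity Disabled -PickHostNameFromBackendAddress
$listener = New-AzApplicationGatewayHttpListener -Name "listener80" -Protocol Http -FrontendIPConfiguration $fipcfg -FrontendPort $feport
$rule     = New-AzApplicationGatewayRequestRoutingRule -Name "forward_traffic" -RuleType Basic -Priority 100 -HttpListener $listener -BackendAddressPool $pool -BackendHttpSettings $settings
$sku      = New-AzApplicationGatewaySku -Name Standard_v2 -Tier Standard_v2 -Capacity 1

New-AzApplicationGateway -Name "appgw-demo" -ResourceGroupName "rg-demo" -Location "australiaeast" -Sku $sku `
    -GatewayIPConfigurations $gipcfg -FrontendIPConfigurations $fipcfg -FrontendPorts $feport `
    -BackendAddressPools $pool -BackendHttpSettingsCollection $settings -HttpListeners $listener -RequestRoutingRules $rule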

Once this is all entered, I let the system create my application gateway. Testing with the IP address of the public IP, my datafile is returned:

Securing the Function App

The application gateway is now forwarding traffic through to the Function App, however, my function app is still responding to its FQDN:

To restrict the Function App, I need to go into the “Networking” settings and create an “Access Restriction”:

Enabling this will open a new blade of settings. As I am using service endpoints, not private endpoints, I am leaving “Allow public access” enabled. For the site access rules, I have set the “unmatched rule action” to Deny, and then created a new rule:

For the new rule, I have linked it to my virtual network and subnet that is configured for the Application gateway. This generates a warning that there is no service endpoint. But when we finalise the configuration, the system creates it for us. Back to the main page and I save the configuration:
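If you prefer to script this step, a minimal sketch with hypothetical names (Az.Websites module):

# Allow only the Application Gateway subnet; as in the portal,
# the missing service endpoint is created during configuration
Add-AzWebAppAccessRestrictionRule -ResourceGroupName "rg-demo" -WebAppName "func-demo" `
    -Name "allow-appgw" -Priority 100 -Action Allow `
    -VirtualNetworkName "vnet-demo" -SubnetName "appgw-subnet"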

Now when we test access to my function app, we can see the firewall is preventing access:

However, when we test the application gateway again, you can see that the traffic is flowing as expected:

Now that the function app is secured behind an Application Gateway my configuration logically looks like this:

That just leaves us to inspect traffic as it passes through the application gateway.

Adding WAF policies

As I mentioned earlier, WAF configuration is a much bigger topic than I can cover here. For the purposes of this demonstration, I will be using defaults to demonstrate the theory only.

My first step is to create the WAF policy. From the marketplace, I have selected “Web Application Firewall (WAF)” by Microsoft. I have selected “Regional WAF”, given the policy a name, selected “Prevention”, and then selected “Review and Create” to create the policy:
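The equivalent PowerShell is only a couple of lines. A sketch, with hypothetical names:

# Create a regional WAF policy in Prevention mode
$settings = New-AzApplicationGatewayFirewallPolicySetting -Mode Prevention -State Enabled
New-AzApplicationGatewayFirewallPolicy -Name "waf-policy-demo" -ResourceGroupName "rg-demo" `
    -Location "australiaeast" -PolicySetting $settings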

Once the policy is created, to connect it to the application gateway, I need to upgrade the Application Gateway from “Standard V2” to “WAF V2”. This is found under the “Web Application firewall” tab for our application gateway:

Here you can see I have selected my recently created WAF policy and then clicked save. To test the applied policy, I now try calling my function app with a SQL injection in the URL:
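A quick way to simulate that test from PowerShell; the path and query parameter here are hypothetical:

# In Prevention mode, the OWASP rules should answer this classic
# injection pattern with a 403 Forbidden
Invoke-WebRequest -Uri "http://<appgw-public-ip>/api/getdata?id=1' OR '1'='1"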

As you can now see, my application is not only secured so that only one external entry point exists for my data, it is also protected by a web application firewall that inspects and filters the traffic.

This is the final part in the series “Securing service traffic in Azure”.

Securing service traffic in Azure – Part 2

In “Securing service traffic in Azure – Part 1” I talked about using private endpoints to secure traffic between your Azure services. In this part, I explain another way to secure your traffic between Azure services using service endpoints.

So why not just use private endpoints?

As I demonstrated in Part 1, by configuring a private endpoint, I successfully secured traffic between my function app and my data source, so why would we use something else? The two primary reasons are:

  • Purpose – You should only ever deploy the minimum required services to support your environment. Private endpoints are designed to provide “private” routable access to a service running in Azure. This means traffic from anywhere within your network environment can access this endpoint. If your only requirement is to secure traffic between services in Azure, then the routing capability is not required.
  • Cost – For each private endpoint there is a cost to run the endpoint, approximately $9USD/month. As your usage of endpoints increases, so too do your costs, and this can add up quickly over time.

If our only purpose is to secure traffic between services in Azure, then this is where service endpoints provide an alternative.

The Test Environment

Like Part 1 of this series, for simplicity, I will continue to use a test environment that consists of:

  • A storage account hosting a CSV file of data.
  • A PowerShell function app based on a http trigger that will retrieve the data from the CSV file.

Securing the storage account with a service endpoint

This time, the first step to securing the storage account is to use the firewall to disable public access and restrict access to selected virtual networks:

In this case I am connecting to the subnet used in Part 1 of this series. Similar to disabling public network access completely, by restricting to specific virtual networks, I have now disabled access from the internet:

and my function app no longer works:

Securing the service traffic in Azure by linking the function app to the storage account

Just like with private endpoints, I need to link the function app to the service endpoint that was created when I restricted access to specific virtual network subnets. This is done in exactly the same way as for private endpoints, by enabling “virtual network integration” and linking it to the same subnet used for the storage account.
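A rough PowerShell equivalent of these steps, with hypothetical resource names (Az.Network and Az.Storage modules):

# Add a Microsoft.Storage service endpoint to the integration subnet
$vnet = Get-AzVirtualNetwork -Name "vnet-demo" -ResourceGroupName "rg-demo"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "functionapps" `
    -AddressPrefix "10.0.2.0/24" -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork

# Deny by default, then allow that subnet on the storage account firewall
$subnetId = ($vnet.Subnets | Where-Object Name -eq "functionapps").Id
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "rg-demo" -Name "storagedemo" -DefaultAction Deny
Add-AzStorageAccountNetworkRule -ResourceGroupName "rg-demo" -Name "storagedemo" -VirtualNetworkResourceId $subnetId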

Now our environment will look like this:

  • The function app is integrated with the virtual network
  • The storage account has a network rule allowing the same subnet used for the virtual network integration
  • Storage is now accessible only from within the Azure fabric and not across the open internet

And now, my function app is again returning data from the file on the storage account:

In part 3, I will demonstrate how to present a protected address with traffic secured between each layer.

Securing service traffic in Azure – Part 1

A question I come across a lot is how to secure traffic between services running in Azure. In this multi-part series, I will look at using two different approaches for securing service traffic in Azure using a Function App and a storage account as my example services.

The test environment

For the purposes of this post, I am going to use a simple function app that communicates with a storage account and retrieves the content of a file. Granted, I could get the content directly from the storage account, but this would not demonstrate the required functionality. Therefore the demo environment will consist of:

  • A storage account hosting a CSV file full of data.
  • A PowerShell function app based on a http trigger that will retrieve the data from the CSV file.
  • And an account with contributor rights to the resource group in which everything is being created.
Function App accessing storage file

When called, our function app will write the contents of the CSV to the browser:

Function app returned data

Concerns about the described environment

Now, while our communications are over HTTPS and use access keys (for the sake of this demonstration), concerns are sometimes raised that the communication, encrypted or not, travels over the open internet. In addition, for this communication to occur over the open internet, our data source (the storage account) requires a public endpoint. All it takes is a simple misstep and our data could be exposed to the internet.

So how do we restrict public access to our source and ensure communications occur over a more secure channel?

Microsoft provide two mechanisms to secure the connection for the data source:

  • Private endpoints
  • Service endpoints

In this post, I am going to focus on private endpoints.

Securing the storage account with a private endpoint

Our first step is to secure the storage account from public access by disabling “Public Network Access” in the Firewall and Virtual Networks tab:

securing the storage account
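The same change can be made from PowerShell in recent Az.Storage versions. A sketch, with a hypothetical resource group name:

# Disable all public network access on the storage account
Set-AzStorageAccount -ResourceGroupName "rg-demo" -Name "apptestrmcn0" -PublicNetworkAccess Disabled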

With this committed, you can see that our function app no longer returns the content of the data file:

Securing Azure storage account prevents services from accessing it

To allow secured access to our data store, I am now going to need to create a private endpoint for the storage account. For this demo, I am going to call the instance “storage-endpoint” and attach it to the blob resource:

and now I need a virtual network to attach the endpoint to. For this demo, I have created a virtual network and added a subnet “privateips” to place private endpoints in:

I am accepting the defaults for DNS:

and then creating the private endpoint. As I accepted the default DNS settings, this will also create a privatelink DNS zone to host the A record for the private endpoint:
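For reference, the equivalent PowerShell looks roughly like this. The endpoint and subnet names are from this demo, the other names are hypothetical, and the private DNS zone wiring is omitted:

# Hypothetical VNet and resource group names
$vnet = Get-AzVirtualNetwork -Name "vnet-demo" -ResourceGroupName "rg-demo"
$sa   = Get-AzStorageAccount -ResourceGroupName "rg-demo" -Name "apptestrmcn0"

# Connect the endpoint to the blob sub-resource of the storage account
$plsConn = New-AzPrivateLinkServiceConnection -Name "storage-endpoint" `
    -PrivateLinkServiceId $sa.Id -GroupId "blob"
New-AzPrivateEndpoint -Name "storage-endpoint" -ResourceGroupName "rg-demo" `
    -Location $vnet.Location -Subnet ($vnet.Subnets | Where-Object Name -eq "privateips") `
    -PrivateLinkServiceConnection $plsConn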

I have now successfully secured the blob storage from the internet:

However, as demonstrated earlier, our function app also has no access. I will now show you how to remedy that.

Connecting the function app to the private endpoint

Before we go any further, it is worth noting that the SSL certificate for the storage account is issued for “apptestrmcn0.blob.core.windows.net” and I have just created a private endpoint with the name “apptestrmcn0.privatelink.blob.core.windows.net”. Any service trying to connect to the private endpoint by that name is going to fail due to a certificate mismatch. Not to worry: if I try resolving the original name of the storage account from an internal host in the same virtual network, you will see that the FQDN also maps to the new private endpoint:

As the storage account has now been isolated to private endpoint traffic only, I need to connect the Function App to the virtual network too. This is achieved via “VNet integration”, which relies on service delegations. Service delegations require a dedicated subnet per delegation type, so I cannot use the same subnet as our private endpoint. For the sake of simplicity, I am going to use a separate subnet called “functionapps” within the same Virtual Network to demonstrate the functionality.

VNET integration is configured via the Networking tab for the function app:

When I select VNET integration, I then select our subnet for function apps:

When I click connect, the service delegation will happen in the background if the subnet is not already delegated, and the function app will get a connection to the subnet. In the next window, as our function app’s sole purpose is to fetch data from the blob, I will disable internet traffic for the function app and apply the change:

This is the last of the configuration. It should look like this:

And as you can see, our function app is, again, able to retrieve data from the storage account:

In “Securing service traffic in Azure – Part 2” I will look at the alternative to private endpoints.

Azure firewall basic is now generally available

Firewall

On the 15th March, 2023, Microsoft announced the general availability of Azure Firewall Basic.

Azure Firewall is a cloud native network security service that provides threat protection for cloud workloads running in Azure. It is a stateful service offering both east/west and north/south protection along with high availability and scalability. Azure Firewall is available in three SKUs: Standard, Premium, and now Basic. All three SKUs provide the following features:

  • Built-in high availability
  • Availability Zones
  • Application FQDN filtering rules
  • Network traffic filtering rules
  • FQDN tags
  • Service tags
  • Threat intelligence
  • Outbound SNAT support
  • Inbound DNAT support
  • Multiple public IP addresses
  • Azure Monitor logging
  • Certifications

And while Premium has additional features such as TLS inspection and IDPS, the Basic SKU has the following limitations:

  • Supports Threat Intel alert mode only.
  • Fixed scale unit to run the service on two virtual machine backend instances.
  • Recommended for environments with an estimated maximum throughput of 250 Mbps.

Where Azure Firewall Basic comes into its own is in cost to run. The Basic pricing model is designed to provide essential protection to SMB customers at an affordable price point for low volume workloads.
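For completeness, a rough PowerShell sketch of a Basic deployment. Names are hypothetical, and the VNet is assumed to already contain an AzureFirewallSubnet plus, for Basic, a dedicated AzureFirewallManagementSubnet:

# Basic requires both a data-plane and a management public IP
$vnet    = Get-AzVirtualNetwork -Name "vnet-hub" -ResourceGroupName "rg-network"
$pip     = New-AzPublicIpAddress -Name "fw-pip" -ResourceGroupName "rg-network" -Location "australiaeast" -AllocationMethod Static -Sku Standard
$mgmtPip = New-AzPublicIpAddress -Name "fw-mgmt-pip" -ResourceGroupName "rg-network" -Location "australiaeast" -AllocationMethod Static -Sku Standard

New-AzFirewall -Name "fw-basic" -ResourceGroupName "rg-network" -Location "australiaeast" `
    -VirtualNetwork $vnet -PublicIpAddress $pip -ManagementPublicIpAddress $mgmtPip `
    -SkuName AZFW_VNet -SkuTier Basic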

Approximate Costs

At the time of writing this article, the approximate retail costs for running Azure Firewall are:

SKU        Cost
Basic      $0.592 (AU) per deployment hour, or $432.16 (AU) per month
Standard   $1.871 (AU) per deployment hour, or $1,365.83 (AU) per month
Premium    $2.619 (AU) per deployment hour, or $1,911.87 (AU) per month
Recommended retail costs for running Azure Firewall

As you can see, Azure Firewall Basic is considerably cheaper than the Standard or Premium SKUs just to turn on. But as mentioned previously, it is only intended for small workloads: the per-GB processing cost for data through Azure Firewall Basic is roughly four times higher.

If we look at processing 100GB in an hour, the running costs would look like this:

SKU        Cost per GB    Processing cost    Total cost (inc run cost)
Basic      $0.098 (AU)    $9.80 (AU)         $10.39 (AU)
Standard   $0.024 (AU)    $2.40 (AU)         $4.27 (AU)
Premium    $0.024 (AU)    $2.40 (AU)         $5.02 (AU)
Recommended retail data processing costs

Clearly, sustained high workloads are much more expensive through the Basic SKU than through the Standard or Premium SKUs. The Basic SKU is cheaper only when customers are processing less than 9,520GB per month, or 13GB per hour.

Recommendation

The new pricing model gives SMB customers a much cheaper option for securing essential workloads where data volumes are low.

New enhanced connection troubleshoot for Azure Networking

On the 1st March, 2023, Microsoft announced that “New enhanced connection troubleshoot” for Azure Network Watcher has gone GA. Previously, Azure Network Watcher provided specialised standalone tools for network troubleshooting, but these have now been consolidated into one place with additional tests and actionable insights to assist with troubleshooting.

Complex network paths
Network Troubleshooting can be difficult and time consuming.

With customers migrating advanced, high-performance workloads to Azure, it’s essential to have better oversight and management of the intricate networks that support these workloads. A lack of visibility can make it challenging to diagnose issues, leaving customers with limited control and feeling trapped in a “black box.” To enhance your network troubleshooting experience, Azure Network Watcher combines these tools with the following features:

  • Unified solution for troubleshooting NSGs, user defined routes, and blocked ports
  • Actionable insights with step-by-step guidance to resolve issues
  • Identification of configuration issues impacting connectivity, including:
      • NSG rules that are blocking traffic
      • Inability to open a socket at the specified source port
      • No servers listening on designated destination ports
      • Misconfigured or missing routes

These new features are not available via the portal at the moment:

connection troubleshoot via portal does not display enhanced connection troubleshoot results
Connection Troubleshooting via the portal

The portal will display that there are connectivity issues, but will not provide the enhanced information. That is accessible via PowerShell, the Azure CLI, and the REST API. Below, I use PowerShell to surface the real reason the connection is failing.

Accessing “enhanced connection troubleshoot” output via PowerShell

I am using the following PowerShell to test the connection between the two machines:

# Get the regional Network Watcher, then the source and destination VMs
$nw  = Get-AzNetworkWatcher -Location australiaeast
$svm = Get-AzVM -Name Machine1
$dvm = Get-AzVM -Name Machine2

# Test connectivity from Machine1 to Machine2 on TCP 445 (SMB)
Test-AzNetworkWatcherConnectivity -NetworkWatcher $nw -SourceId $svm.Id -DestinationId $dvm.Id -DestinationPort 445

This returns the following output (the Hops property is rendered as JSON):

ConnectionStatus : Unreachable
AvgLatencyInMs   :
MinLatencyInMs   :
MaxLatencyInMs   :
ProbesSent       : 30
ProbesFailed     : 30
Hops             : [
                     {
                       "Type": "Source",
                       "Id": "a49b4961-b82f-49da-ae2c-8470a9f4c8a6",
                       "Address": "10.0.0.4",
                       "ResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/CONNECTIVITYTEST/providers/Microsoft.Compute/virtualMachines/Machine1",
                       "NextHopIds": [
                         "6c6f06de-ea3c-45e3-8a1d-372624475ced"
                       ],
                       "Issues": [
                         {
                           "Origin": "Local",
                           "Severity": "Error",
                           "Type": "GuestFirewall",
                           "Context": []
                         }
                       ]
                     },
                     {
                       "Type": "VirtualMachine",
                       "Id": "6c6f06de-ea3c-45e3-8a1d-372624475ced",
                       "Address": "172.16.0.4",
                       "ResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/CONNECTIVITYTEST/providers/Microsoft.Compute/virtualMachines/Machine2",
                       "NextHopIds": [],
                       "Issues": []
                     }
                   ]

As you can see, the issues discovered are explained in more detail; in this case, the local firewall is affecting the communication. If we check the local Defender firewall, we can see there is a specific rule blocking this traffic:

Blocked outbound protocols

If we remove the local firewall rule, connectivity is restored:

ConnectionStatus : Reachable
AvgLatencyInMs   : 1
MinLatencyInMs   : 1
MaxLatencyInMs   : 2
ProbesSent       : 66
ProbesFailed     : 0
Hops             : [
                     {
                       "Type": "Source",
                       "Id": "f1b763a1-f7cc-48b6-aec7-f132d3fdadf8",
                       "Address": "10.0.0.4",
                       "ResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/CONNECTIVITYTEST/providers/Microsoft.Compute/virtualMachines/Machine1",
                       "NextHopIds": [
                         "7c9c103c-44ab-4fd8-9444-22354e5f9672"
                       ],
                       "Issues": []
                     },
                     {
                       "Type": "VirtualMachine",
                       "Id": "7c9c103c-44ab-4fd8-9444-22354e5f9672",
                       "Address": "172.16.0.4",
                       "ResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/CONNECTIVITYTEST/providers/Microsoft.Compute/virtualMachines/Machine2",
                       "NextHopIds": [],
                       "Issues": []
                     }
                   ]

The enhanced connection troubleshoot can detect 6 fault types:

  • Source high CPU utilisation
  • Source high memory utilisation
  • Source Guest firewall
  • DNS resolution
  • Network security rule configuration
  • User defined route configuration

The first four faults are returned by the Network Watcher Agent extension for Windows, as demonstrated above. The remaining two faults come from the Azure fabric. As you can see below, when a Network Security Group is misconfigured on the source or destination, our issue returns, but the output clearly displays where the fault lies and which network security group is responsible:

ConnectionStatus : Unreachable
AvgLatencyInMs   :
MinLatencyInMs   :
MaxLatencyInMs   :
ProbesSent       : 30
ProbesFailed     : 30
Hops             : [
                     {
                       "Type": "Source",
                       "Id": "3cbcbdbe-a6ec-454f-ad2e-946d6731278a",
                       "Address": "10.0.0.4",
                       "ResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/CONNECTIVITYTEST/providers/Microsoft.Compute/virtualMachines/Machine1",
                       "NextHopIds": [
                         "29e33dac-45ae-4ea3-8a9d-83dccddcc0eb"
                       ],
                       "Issues": []
                     },
                     {
                       "Type": "VirtualMachine",
                       "Id": "29e33dac-45ae-4ea3-8a9d-83dccddcc0eb",
                       "Address": "172.16.0.4",
                       "ResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/CONNECTIVITYTEST/providers/Microsoft.Compute/virtualMachines/Machine2",
                       "NextHopIds": [],
                       "Issues": [
                         {
                           "Origin": "Inbound",
                           "Severity": "Error",
                           "Type": "NetworkSecurityRule",
                           "Context": [
                             {
                               "key": "RuleName",
                               "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/ConnectivityTest/providers/Microsoft.Network/networkSecurityGroups/Ma
                   chine2-nsg/SecurityRules/DenyAnyCustom445Inbound"
                             }
                           ]
                         }
                       ]
                     }
                   ]

In addition to the fault detection, IP Flow is also a part of the enhanced connection troubleshoot, providing a list of hops to a service. An excerpt of a trace to a public storage account is below:

PS C:\temp> Test-AzNetworkWatcherConnectivity -NetworkWatcher $nw -SourceId $svm.Id -DestinationAddress https://announcementtest.blob.core.windows.net/test1 -DestinationPort 443

ConnectionStatus : Reachable
AvgLatencyInMs   : 1
MinLatencyInMs   : 1
MaxLatencyInMs   : 1
ProbesSent       : 66
ProbesFailed     : 0
Hops             : [
                     {
                       "Type": "Source",
                       "Id": "23eb09fd-b5fa-4be1-83f2-caf09d18ada0",
                       "Address": "10.0.0.4",
                       "ResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/CONNECTIVITYTEST/providers/Microsoft.Compute/virtualMachines/Machine1",
                       "NextHopIds": [
                         "78f3961c-9937-4679-97a7-4a19f4d1232a"
                       ],
                       "Issues": []
                     },
                     {
                       "Type": "PublicLoadBalancer",
                       "Id": "78f3961c-9937-4679-97a7-4a19f4d1232a",
                       "Address": "20.157.155.128",
                       "NextHopIds": [
                         "574ad521-7ab7-470c-b5aa-f1b4e6088888",
                         "e717c4bd-7916-45bd-b3d1-f8eecc7ed1e3",
                         "cbe6f6a6-4281-402c-a81d-c4e3d30d2247",
                         "84769cde-3f92-4134-8d48-82141f2d9bfd",
                         "aa7c2b73-0892-4d15-96c6-45b9b033829c",
                         "1c3e3043-98f2-4510-b37f-307d3a98a55b",
                         "b97778cb-9ece-4e87-bf6d-71b90fac3847",
                         "cb92d16d-d4fe-4233-b958-a4d3dbe78303",
                         "ec9a2753-3a60-4fce-9d92-7dbbc0d0219d",
                         "df2b1a3e-6555-424c-8e48-5cc0feba3623"
                       ],
                       "Issues": []
                     },
                     {
                       "Type": "VirtualNetwork",
                       "Id": "574ad521-7ab7-470c-b5aa-f1b4e6088888",
                       "Address": "10.124.144.2",
                       "NextHopIds": [],
                       "Issues": []
                     },
                     {
                       "Type": "VirtualNetwork",
                       "Id": "e717c4bd-7916-45bd-b3d1-f8eecc7ed1e3",
                       "Address": "10.124.146.2",
                       "NextHopIds": [],
                       "Issues": []
                     },

Centralising the troubleshooting tools under one command is obviously a great enhancement, but the increased visibility into configuration and system performance also makes this a great addition to your troubleshooting toolbox.

Azure Premium SSD V2 disks

Azure Premium SSD V2 disks generally available

Microsoft have recently announced the release of Azure Premium SSD V2 disks as generally available for select regions, but what does this mean for you?

Azure Premium SSD v2 disks uncouple IOPS and throughput from disk size, meaning you can adjust the IOPS, throughput, and capacity independently according to your workload needs, whereas with Azure Premium SSD disks these metrics are fixed by disk size. Premium SSD v2 disks are designed to handle performance-sensitive and general-purpose workloads that require low average read and write latency, high IOPS, and high throughput. This can make Premium SSD v2 disks an efficient and cost-effective option for running and scaling transaction-intensive workloads.

Achieving High IO loads with Premium SSD

Assume you have a highly transactional 100GB database generating a sustained load of 10,000 IOPS with a throughput of 200MB/sec, and an appropriate compute configuration to support this. Achieving this with Premium SSD disks would require 4 x P20 disks or 2 x P30 disks presented to your compute and striped as a volume set. This, however, significantly over-provisions both capacity and throughput.

              4 x P20       2 x P30
Size          2,048GB       2,048GB
IOPS          9,200         10,000
Throughput    600 MB/sec    400 MB/sec
Achieving High IO loads with Premium SSD

Note that the above configuration only takes into account performance and does not address any concerns such as redundancy and data protection.

Achieving High IO loads with Premium SSD V2 disks

As V2 disks are configurable across all three parameters, the allocated disk is much more closely aligned with the needs of the workload. All three parameters can also be adjusted as workload requirements change through growth. For the example above, a Premium SSD V2 disk would look like:

                      Required    Included    Additional
Size                  100GB       n/a         n/a
IOPS                  10,000      3,000       7,000
Throughput (MB/sec)   200         125         75
Achieving High IO loads with Premium V2 SSD

According to the Microsoft article for Managed Disks Pricing, this configuration would represent a significant saving in cost for the disk as well as reduced overhead. Like the previous example, this example only takes into account performance and does not address any concerns such as redundancy and data protection.
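As a sketch of what creating a disk with this shape looks like in PowerShell (hypothetical names, assuming a region and zone where Premium SSD v2 is available):

# 100GB disk tuned to 10,000 IOPS and 200 MB/sec
$cfg = New-AzDiskConfig -Location "eastus" -Zone "1" -CreateOption Empty `
    -DiskSizeGB 100 -SkuName PremiumV2_LRS `
    -DiskIOPSReadWrite 10000 -DiskMBpsReadWrite 200
New-AzDisk -ResourceGroupName "rg-demo" -DiskName "sqldata-disk" -Disk $cfg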

Current limitations of Azure Premium SSD v2 disks

As of the writing of this article, there are some limitations to Premium v2 disks. Firstly, they are only available in the following regions:

  • East US
  • West Europe

The other limiting factors are:

  • Disks are only available on Locally Redundant Storage
  • Snapshots are not supported
  • Encryption capabilities are very limited
  • Azure Backup and Site Recovery are not supported for VMs with V2 disks

I would expect these limitations to change over time.

Why Azure Premium SSD v2 disks should be on your radar

Current Premium SSD disk configurations are fixed in their size/IOPS/throughput ratios, so achieving specific workload targets for high performing systems requires additional overhead and creativity, and usually leaves you paying for wasted resources.

With the introduction of Premium SSD V2 disks, you have better control over your configuration and therefore only pay for what you need. Reduced operating costs and management overhead benefit all users, so keep an eye out for availability in your Azure deployment region.

Connecting a Seeed Xiao with a Waveshare 1.02 inch e-ink display

After completing the Teams status light, for my next project I’m moving on to something useful for the other half. This project requires very low power draw while driving an external display. E-ink displays are great for this task: once the image is drawn, the device can go to sleep while the display continues to show the summary data with no power. The e-ink display uses SPI, and my sensors use I2C to keep the pin count down. Not all small microprocessors can supply both I2C and SPI, but the Seeeduino Xiao met all my needs. I came across a few challenges on this project, so this post covers how to connect a Seeeduino Xiao to a Waveshare 1.02 inch e-ink display.

Parts

For this project, I used the following parts:

Both Seeed Studio and Waveshare provide guides on how to install the Arduino IDE and get it working with their hardware. This guide specifically covers the wiring and the code changes needed to get the 1.02″ display working with the Seeeduino Xiao.

Seeeduino Xiao Pinout selection

The Seeeduino Xiao has 16 pins as below:

For this project, I am using:

Pins        Purpose
3V3, GND    Powering the display and sensors
D8, D10     Display SPI pins
D4, D5      I2C sensors
D0-D3       Display pins

Waveshare 1.02″ e-ink pins

The Waveshare display has the following pins:

Connecting the Seeeduino Xiao with a Waveshare 1.02 inch display

The pins are connected as follows:

Waveshare e-ink display    Seeeduino Xiao
VCC                        3V3
GND                        GND
DIN                        D10/MOSI
CLK                        D8/SCK
CS                         D0
DC                         D1
RST                        D2
BUSY                       D3

Configuring the example code

Waveshare have example code for their screens running with various hardware platforms on their GitHub page. I downloaded the “epd1in02d” directory and opened it up in the Arduino IDE.

Before compiling the sample code, I first needed to update it to reflect the Seeeduino Xiao pin numbers.

The GPIO pins are defined in DEV_Config.h. I updated the pin numbers to reflect the Xiao’s configuration:

/**
 * GPIO config
**/
#define EPD_RST_PIN         2
#define EPD_DC_PIN          1
#define EPD_CS_PIN          0
#define EPD_BUSY_PIN        3

My test compile still failed, due to the following error:

invalid conversion from 'const unsigned char*' to 'uint8_t* {aka unsigned char*}' [-fpermissive]

This occurred in a couple of places. If I updated the IDE to compile for an Arduino UNO, there were no problems with the compile. This error was specific to the Xiao code compile.

Investigating the errors, I found them to occur with the following files:

  • font24cn.cpp
  • imagedata.cpp

As I’m not using Chinese fonts or loading images, I commented out the fonts from the file fonts.h:

// extern cFONT Font12CN;
// extern cFONT Font24CN;

and I commented out the image processing include from the sample file:

#include "GUI_Paint.h"
#include "DEV_Config.h"
#include "EPD_1in02d.h"
#include "fonts.h"

//#include "imagedata.h"

and removed the image display:

//  EPD_Display_Image(IMAGE_DATA);
//  DEV_Delay_ms(500);
//  EPD_Clear();

At this point, the code successfully compiled and was able to display the first sample page on the display:

Successful sample screen display
Successfully displayed the sample screen

And there you have it, a Seeeduino Xiao driving a Waveshare 1.02″ e-ink display. Now on to the rest of the project.

Building an automated Microsoft Teams status light – Part 3

In “Building an automated Microsoft Teams status light – Part 2”, I demonstrated how to retrieve your Teams status via a REST call. In this post, I will take the returned status information and make an additional REST call to the HomeAssistant instance to update the colour displayed by the light.

Status information

The REST API call returns a number of status and availability values. I am interested in the following combinations:

Availability    Activity
Available       Available
Busy            InACall
Busy            InAConferenceCall
DoNotDisturb    <n/a>

As I have a second ring light for the webcam, I am tracking the activity. So, if I am “Available”, the light is green. If I am “Busy” or “DoNotDisturb”, then the light is red. However, if I am in a call, then the system also activates additional lighting for the webcam.

HomeAssistant Information

In order to connect with HomeAssistant, I needed two key pieces of information:

  • HomeAssistant Access Key
  • Light Entity details

Access token

To create an access token, the menu can be found at the bottom of the User Profile section:

Create a new token and copy the token key as it will not be displayed again. The other piece of information required is the Entity ID for the light:

Updating the status light state

Updating the state of the status light is relatively straightforward. Firstly, the HomeAssistant REST interface requires the bearer token to be passed via the request headers:

$hasstoken = "<insert token here>"
$headers = @{ Authorization   = "Bearer $hasstoken"
            'Content-Type'  = "application/json"
        }

The payload for the REST call sets the entity, the colour and the brightness for the light, and as the content type indicates, the payload will be in JSON:

$payload = @{"entity_id"="<Insert Entity ID here>";
    "color_name"="red";
    "brightness"=255}

We now have all of the information we need. The payload is a PowerShell hashtable, so it is converted to JSON as part of the REST call:

Invoke-RestMethod -Headers $headers -Method Post -Body ($payload|ConvertTo-Json) -Uri 'http://<home assistant server>:<home assistant port>/api/services/light/turn_on'

Putting it all together

We now have all the pieces required to poll Microsoft Teams status and then update our light:

# Import PSMSGraph
Import-Module PSMSGraph

# Tenant and Client IDs
$tenantID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
$clientID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# Create Secret
$clientSecret = (ConvertTo-SecureString "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" -AsPlainText -Force)

# Create credentials from the client ID and secret
$creds = [pscredential]::new($clientID, $clientSecret)

# Call the Graph App
$GraphApp = New-GraphApplication -Name "presence-app" -Tenant $tenantID -ClientCredential $creds -RedirectUri "http://localhost/"

# This will prompt you to log in with your O365/Azure credentials and authorise access
$AuthCode = $GraphApp | Get-GraphOauthAuthorizationCode
$GraphAccessToken = $AuthCode | Get-GraphOauthAccessToken -Resource 'https://graph.microsoft.com'

# HomeAssistant token and headers for the light update calls
$hasstoken = "<insert token here>"
$headers = @{ Authorization   = "Bearer $hasstoken"
            'Content-Type'  = "application/json"
        }

$laststatus = $null
$pollinterval = 5

while ($true)
{
    $presence = Invoke-RestMethod -Headers @{Authorization = "Bearer $($graphaccesstoken.GetAccessToken())" } -Uri 'https://graph.microsoft.com/beta/me/presence' -method Get    
    
    if ($presence.availability -ne $laststatus)
    {
        if ($presence.availability -eq "Available")
        {
            $payload = @{"entity_id"="light.neopixel_light";
                "color_name"="green";
                "brightness"=255}
        }
        elseif ($presence.availability -eq "Away") {
            $payload = @{"entity_id"="light.neopixel_light";
                "color_name"="yellow";
                "brightness"=100}
        }
        else 
        {
            $payload = @{"entity_id"="light.neopixel_light";
                "color_name"="red";
                "brightness"=255}
        }
        $returninfo = Invoke-RestMethod -Headers $headers -Method Post -Body ($payload|ConvertTo-Json) -Uri 'http://homeassistant:8123/api/services/light/turn_on'
        $laststatus = $presence.availability
    }
    Start-Sleep -Seconds $pollinterval
    if ($GraphAccessToken.IsExpired)
    {
        $GraphAccessToken | Update-GraphOAuthAccessToken -Verbose    
    }
}


And there you have it, a script that polls Microsoft Teams and updates a light in HomeAssistant to reflect your current status.

Building an automated Microsoft Teams status light – Part 2

In “Building an automated Microsoft Teams status light – Part 1”, I outlined the very basic method of wiring up a NeoPixel LED to an ESP8266 and connecting it to HomeAssistant. Any RGB smart bulb connected to HomeAssistant will achieve the same goal; I just wanted something that could sit on the bookshelf outside my makeshift office.

In this part, I will show how to connect to your online Teams presence to retrieve your current status.

Set up your Azure Active Directory

The first step is to create an AAD application and grant the appropriate rights. In order to achieve this, you will need the “Application Developer” role if “Users can register applications” has been set to “No” in your AAD. I set up my initial registration as follows:

Register an application

Once registered, I then needed to set up the permissions for the application to query presence information. The application requires the following permissions:

  • Presence.Read
  • User.Read

As I was also toying with some calendar information, I added Calendars.Read rights as well. The last step was to create a secret for the application:

Now that this is all complete, record the following information:

  • Tenant ID
  • Client ID
  • Client secret

These are all required for PowerShell to collect your presence information.

Query AAD for presence information with PowerShell

For this piece of code, I will be using the PSMSGraph module to get the presence information.

# Import PSMSGraph
Import-Module PSMSGraph

Using the information recorded earlier, create the following variables:

# Tenant and Client IDs
$tenantID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
$clientID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# Create Secret
$clientSecret = (ConvertTo-SecureString "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" -AsPlainText -Force)

# Create credentials from the client ID and secret
$creds = [pscredential]::new($clientID, $clientSecret)

With all of the required ID’s and secrets set, I now call the Graph application and get the access tokens. This will prompt for you to log in with your O365 credentials.

# Call the Graph App
$GraphApp = New-GraphApplication -Name "presence-app" -Tenant $tenantID -ClientCredential $creds -RedirectUri "http://localhost/" 

# This will prompt you to log in with your O365/Azure credentials and authorise access
$AuthCode = $GraphApp | Get-GraphOauthAuthorizationCode
$GraphAccessToken = $AuthCode | Get-GraphOauthAccessToken -Resource 'https://graph.microsoft.com'

Now that I have an access token, I am able to make a REST call to get the presence information:

$presence = Invoke-RestMethod -Headers @{Authorization = "Bearer $($graphaccesstoken.GetAccessToken())" } -Uri 'https://graph.microsoft.com/beta/me/presence' -method Get

We can now see the presence information returned:

PS C:\> $presence


@odata.context      : https://graph.microsoft.com/beta/$metadata#users('xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx')/presence/$entity
id                  : xxxxxxxx-xxxx-xxxx-xxxxx-xxxxxxxxxxxx
availability        : Available
activity            : Available
outOfOfficeSettings : @{message=; isOutOfOffice=False}

We can see two properties on the object:

  • availability
  • activity

The following values are the ones I’m using. Availability is the primary control for the light, while Activity lets me link an additional light that activates the webcam ring light when I join a video call:

Availability    Available
                Busy
                Away
                DoNotDisturb
Activity        Available
                InACall
                InAConferenceCall

Now that I have the presence information, I can loop with a delay to poll it; however, the access token will eventually expire. If we look at the access token properties, we can see the following:

PS C:\> $graphaccesstoken


GUID            : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
RequestedDate   : 8/03/2022 8:04:25 PM
LastRequestDate : 8/03/2022 8:04:25 PM
IsExpired       : False
Expires         : 8/03/2022 9:18:17 PM
ExpiresUTC      : 8/03/2022 10:18:17 AM
NotBefore       : 8/03/2022 7:59:27 PM
NotBeforeUTC    : 8/03/2022 8:59:27 AM
Scope           : {Calendars.Read, Presence.Read, User.Read}
Resource        : https://graph.microsoft.com
IsRefreshable   : True
Application     : Guid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Name: presence-app

From the properties, we can see “IsExpired” and “IsRefreshable”. When the token expires, we can update the token without the need for re-authentication:

    if ($GraphAccessToken.IsExpired)
    {
        $GraphAccessToken | Update-GraphOAuthAccessToken -Verbose    
    }

Putting this all together, my code looks like:

# Import PSMSGraph
Import-Module PSMSGraph

# Tenant and Client IDs
$tenantID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
$clientID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# Create Secret
$clientSecret = (ConvertTo-SecureString "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" -AsPlainText -Force)

# Create credentials
$creds = [pscredential]::new($clientID, $clientSecret)

# Call the Graph App
$GraphApp = New-GraphApplication -Name "presence-app" -Tenant $tenantID -ClientCredential $creds -RedirectUri "http://localhost/" 

# This will prompt you to log in with your O365/Azure credentials and authorise access
$AuthCode = $GraphApp | Get-GraphOauthAuthorizationCode
$GraphAccessToken = $AuthCode | Get-GraphOauthAccessToken -Resource 'https://graph.microsoft.com'


while ($true)
{
    $presence = Invoke-RestMethod -Headers @{Authorization = "Bearer $($graphaccesstoken.GetAccessToken())" } -Uri 'https://graph.microsoft.com/beta/me/presence' -method Get
    write-host "Availability -" $presence.availability
    Write-Host "Activity -" $presence.activity
    Write-Host "`n"

    Start-Sleep -Seconds 2
    if ($GraphAccessToken.IsExpired)
    {
        $GraphAccessToken | Update-GraphOAuthAccessToken -Verbose    
    }
}

In the next post, I will outline how to update the Teams status light in HomeAssistant.