Service Bus Tooling in Serverless360

At the Composite Application Overview screen, you can click the [Manage] button to see the run details per application resource. The Detail screen not only gives you an Overview tab with all the resources in your application, but also a separate tab for each individual resource. If you open the tab for a Service Bus topic, you will find a circle with dots at the far right that gives an overview of the actions you can perform on Service Bus topics.

From here you can, for instance, get a list of the topic subscriptions. Each subscription again has a circle with dots at the far right, where you can view and edit the subscription rules.

Another important scenario is peeking or deferring messages from the Service Bus dead-letter queue. Click the particular queue to get to a screen with three tabs: Messages, Dead-Letter and Deferred Dead-Letter. On the Dead-Letter tab you can retrieve messages in peek-lock mode or defer them. Deferred messages are no longer available to other applications; they can only be resubmitted or deleted from Serverless360. So select Defer, enter a message count and, if you like, an error reason as well.
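
Serverless360 handles all of this for you, but to make clear what deferral means at the Service Bus level, here is a minimal sketch using the classic .NET Service Bus SDK (the WindowsAzure.ServiceBus package); the connection string and queue name are placeholders. A deferred message stays on the queue, but can only be retrieved again by its sequence number, which is why a tool has to keep track of it for you:

using System;
using Microsoft.ServiceBus.Messaging;

class DeadLetterDeferSketch
{
    static void Main()
    {
        // Placeholders: replace with your own namespace connection string and queue name.
        string connectionString = "Endpoint=sb://...";
        string deadLetterPath = QueueClient.FormatDeadLetterPath("myqueue");

        var client = QueueClient.CreateFromConnectionString(connectionString, deadLetterPath, ReceiveMode.PeekLock);

        // Peek-lock a dead-lettered message: it remains on the queue until it is settled.
        BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(5));
        if (message != null)
        {
            long sequenceNumber = message.SequenceNumber;

            // Defer it: regular receivers no longer see the message.
            message.Defer();

            // Later on, the deferred message can only be fetched by its sequence number,
            // after which it can be completed (deleted) or resubmitted elsewhere.
            BrokeredMessage deferred = client.Receive(sequenceNumber);
            deferred.Complete();
        }
    }
}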

If you select one or more messages, you can delete, resubmit or save/download them. If you click on an individual message, you can view and change its content and properties and resubmit it directly or at a later stage.

Automatic resubmit or delete from the dead-letter queue is also possible. Select the Activities option in the menu on the left-hand side. The Create drop-down at the top lets you create a Dead Letter Activity, so you can resubmit without manual intervention.

You can, for instance, delete all TTL-expired messages or resubmit messages with certain characteristics (selection criteria). The resubmit can run directly or on a schedule.

The Send Message activity can be used to test a logic app by sending messages to the Service Bus one by one or in batch, which can be of great help as well.
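
If you would rather script such a test yourself, a small sender along these lines does the same job. This is just a sketch with the classic .NET Service Bus SDK; the connection string and queue name are placeholders:

using System.Collections.Generic;
using System.Linq;
using Microsoft.ServiceBus.Messaging;

class TestSenderSketch
{
    static void Main()
    {
        // Placeholders: replace with your own connection string and queue name.
        var client = QueueClient.CreateFromConnectionString("Endpoint=sb://...", "myqueue");

        // Send a single test message.
        client.Send(new BrokeredMessage("<test/>"));

        // Or send a batch of test messages in one call.
        List<BrokeredMessage> batch = Enumerable.Range(1, 10)
            .Select(i => new BrokeredMessage(string.Format("<test id=\"{0}\"/>", i)))
            .ToList();
        client.SendBatch(batch);
    }
}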

Batch Processing in Logic Apps

Let's look at a specific scenario:

  • The first logic app reads files from blob storage (triggered by EventGrid). The files are debatched and messages are sent to the EventGrid one by one.
  • The second logic app reads messages from the EventGrid and sends these messages to Azure Service Bus after transforming them to a common data format.
  • The third logic app reads the messages from the Service Bus using peek-lock. Messages are either completed or sent back to the Service Bus using deferred or scheduled messaging (sketched in C# below).

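To illustrate what the third logic app does, here is a rough equivalent in C# with the classic .NET Service Bus SDK. The connection string, queue name and the Process() call are placeholders for the real business logic:

using System;
using Microsoft.ServiceBus.Messaging;

class CompleteOrRescheduleSketch
{
    // Placeholder for the real processing logic.
    static bool Process(BrokeredMessage message) { return true; }

    static void Main()
    {
        var client = QueueClient.CreateFromConnectionString("Endpoint=sb://...", "myqueue", ReceiveMode.PeekLock);

        BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(5));
        if (message == null) return;

        if (Process(message))
        {
            // Success: completing removes the message from the queue.
            message.Complete();
        }
        else
        {
            // Not ready yet: send a copy that only becomes visible again later,
            // then settle the original so it is not redelivered immediately.
            BrokeredMessage retry = message.Clone();
            retry.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddMinutes(5);
            client.Send(retry);
            message.Complete();
        }
    }
}
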
The first two logic apps seemed to work correctly, but the third logic app showed strange behavior when sending larger message quantities. After sending 3000 messages to the Service Bus, we saw the first 2800 messages being processed smoothly. For the last 200 records, however, we saw some sort of dropping behavior: the logic app ran at very irregular intervals, processing one message at a time. Sometimes it processed one message per minute, sometimes five messages per minute, sometimes one message in two minutes. Very strange behavior indeed. Most probably this was caused by some sort of retry mechanism kicking in.

Anyhow, we continued searching. In the end it turned out we ran into all sorts of Azure limitations. It's very hard to pinpoint the exact problem, but it's good to refer to the following link.

The first problem was with the second logic app, which receives messages from the EventGrid. If you look at the trigger history of this logic app, it seems like all triggers are being processed. If you look more carefully, you will notice that the trigger duration increases to multiple minutes instead of just a few seconds. So the assumption that the first two logic apps were running correctly was actually wrong: larger amounts of messages quickly lead to an overload. In other words, you will have to spread the message load. You can do this by replacing the EventGrid (push-push) with Azure Service Bus (push-pull): the first logic app sends messages to the Service Bus, and the second logic app reads messages from the Service Bus in a loop construct. This prevents 3000 concurrent logic app runs from being triggered at once via the EventGrid, which in turn also prevents flooding of the third logic app.
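
As a sketch of the push-pull idea: instead of Event Grid starting one run per message, a single loop pulls the messages from the Service Bus in small batches. This uses the classic .NET SDK, with a placeholder connection string and queue name:

using System;
using System.Collections.Generic;
using Microsoft.ServiceBus.Messaging;

class PullLoopSketch
{
    static void Main()
    {
        var client = QueueClient.CreateFromConnectionString("Endpoint=sb://...", "myqueue", ReceiveMode.PeekLock);

        // Endless polling loop, purely for illustration.
        while (true)
        {
            // Pull at most 20 messages at a time instead of being pushed thousands of concurrent triggers.
            IEnumerable<BrokeredMessage> batch = client.ReceiveBatch(20, TimeSpan.FromSeconds(10));
            foreach (BrokeredMessage message in batch)
            {
                // ... transform and forward the message here ...
                message.Complete();
            }
        }
    }
}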

The third logic app makes service calls via an HTTP action. Here we ran into a limit of 2500 concurrent outgoing calls. Initially we had a Service Bus trigger running every minute. This construct was replaced by an EventGrid trigger that fires on every new Service Bus message, followed by a loop construct processing 50 batches of 20 messages.
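
The same idea expressed in C#: keep the number of outgoing HTTP calls in flight at any moment well below the limit by capping the parallelism. This is only an illustration of the batching principle, not the logic app construct itself; the endpoint URL and payloads are placeholders:

using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class ThrottledHttpSketch
{
    static async Task Main()
    {
        var http = new HttpClient();
        var gate = new SemaphoreSlim(20);   // at most 20 calls in flight at the same time

        // Placeholder payloads; in the real scenario these come from the Service Bus messages.
        string[] payloads = Enumerable.Range(1, 1000).Select(i => "{\"id\":" + i + "}").ToArray();

        var tasks = payloads.Select(async payload =>
        {
            await gate.WaitAsync();
            try
            {
                await http.PostAsync("https://example.org/api", new StringContent(payload));
            }
            finally
            {
                gate.Release();
            }
        });

        await Task.WhenAll(tasks);
    }
}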

The problem was solved with acceptable performance, but it took a lot of extra work. In my opinion, this is quite a disqualifier for the EventGrid solution. Carefully look at the scenario and the type of messages being sent (data messages or event messages) before opting to use Azure EventGrid.

Side note: you can check a logic app for throttling behavior in the Metrics section, where you can select the metrics Trigger Throttled Events, Action Throttled Events or Run Throttled Events. Throttling behavior is then shown in a graph.

Service Bus Next Available

Thanks to my colleague Eldert Grootenboer, I have written this post on how to combine singleton and concurrent processing in a queuing solution. The trick is to use Service Bus sessions. When sending messages to the Service Bus, you set SessionId equal to the unique id of the client. When receiving messages from the Service Bus, SessionId is set to Next Available. This way, different clients are processed concurrently, while multiple updates for the same client are processed one by one. You want to process different clients concurrently for performance reasons. You want to process the updates of a single client one by one to preserve their ordering, so that more recent client updates are not overwritten by older ones.
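
A minimal sketch of this pattern with the classic .NET Service Bus SDK (the queue must have sessions enabled; the connection string, queue name and client id are placeholders):

using System;
using Microsoft.ServiceBus.Messaging;

class SessionSketch
{
    static void Main()
    {
        // Placeholder connection string; the queue "clientupdates" must have sessions enabled.
        string connectionString = "Endpoint=sb://...";

        // Sender: SessionId = unique client id, so all updates of one client share a session.
        var sender = QueueClient.CreateFromConnectionString(connectionString, "clientupdates");
        sender.Send(new BrokeredMessage("<update/>") { SessionId = "client-42" });

        // Receiver: accept the next available session. Different sessions (clients) can be
        // processed concurrently, while messages within one session are handled one by one, in order.
        var receiver = QueueClient.CreateFromConnectionString(connectionString, "clientupdates", ReceiveMode.PeekLock);
        MessageSession session = receiver.AcceptMessageSession(TimeSpan.FromSeconds(30));

        BrokeredMessage message;
        while ((message = session.Receive(TimeSpan.FromSeconds(5))) != null)
        {
            // ... process the client update ...
            message.Complete();
        }
        session.Close();
    }
}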

Sending messages to the Service Bus:

When sending messages to the Service Bus, don't forget to base64-encode the message body:
"body": {
    "ContentData": "@{base64(body('Transform_RelatieBericht'))}",
    "SessionId": "@variables('RelatieNummer')"
}

Receiving messages from the Service Bus:

To get values from the received Service Bus message, you can use the following syntax, with [xpath] being the XPath statement: @xpath(xml(base64ToBinary(triggerBody()?['ContentData'])), [xpath])

And then the final step: because of the peek-lock construct, messages are not actually removed from the Service Bus after reading. After successful processing, always call the Service Bus Complete action.

The lock token and SessionId are taken from the trigger (see: receiving messages from the Service Bus).
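
For comparison, this is roughly what the same settle step looks like in C# with the classic .NET SDK: after processing a peek-locked message, you complete it either via the message itself or via its lock token, which is what the logic app connector carries around. Queue client and message here are placeholders:

using Microsoft.ServiceBus.Messaging;

class CompleteSketch
{
    static void Handle(QueueClient client, BrokeredMessage message)
    {
        // ... process the message here ...

        // Settle via the message itself ...
        message.Complete();

        // ... or, when only the lock token travelled along (as in the logic app trigger output):
        // client.Complete(message.LockToken);
    }
}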

BizTalk 2016 Relay Endpoint

In this post, I want to describe a hybrid integration scenario where a BizTalk 2016 service is exposed to Azure using a Service Bus relay. Note that the relay is not found under Service Bus, but under a resource named Relays, which may be a point of confusion. First you have to create a relay namespace, then you add a WCF Relay to this namespace. I used an excellent QuickLearn YouTube video as the basis for this post. Let's dive into it.

The first gotcha is that you can now use SAS for Service Bus relays. Prior to BizTalk 2016, you could only use ACS. ACS is a federated identity solution with trusted authentication providers like Google, Facebook and Microsoft LiveID, and it is quite complicated for the relay scenario. SAS simply uses a shared secret token for authentication. It's recommended to use SAS over ACS, as it provides a simple, flexible and easy-to-use authentication scheme for hosting a relay in the Azure Service Bus.

BizTalk 2016 has two relay bindings that use SAS: BasicHttpRelay and NetTcpRelay. Both adapters expose an https relay endpoint by default. To configure the relay endpoint with SAS, create a receive location with the BasicHttpRelay adapter and specify the Shared Access Signature on the Security tab. The SAS key needs Manage-level access to the entire Service Bus namespace, because it actually has to create the relay endpoint; Send- or Listen-level access is not enough.

On the same Security tab, you can specify client security. The client can have anonymous access, but more likely it will have to provide an access key (a relay access token) as well: a SAS signature with Send-level access.

You can directly use the primary SAS token from the Azure Portal (found under Shared access policies), but this token protects the Azure Service Bus at the namespace level. For more fine-grained access, you can generate your own SAS token, which lets you control the start and expiration time, the resources (i.e. relay services) you are granting access to, and the permissions being granted (manage, send, receive). To generate a SAS token you need a reference to the Service Bus NuGet package. Then you can use the following code in C# to get a token to send messages to a specific relay service for the period of a year.
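
A sketch of what that code roughly looks like, assuming the classic WindowsAzure.ServiceBus NuGet package (Microsoft.ServiceBus namespace); the key name, key and relay address are placeholders:

using System;
using Microsoft.ServiceBus;

class SasTokenSketch
{
    static void Main()
    {
        // Placeholders: a SAS policy with Send rights, its key and the address of the relay service.
        string keyName = "SendPolicy";
        string key = "<primary key from the Azure Portal>";
        string relayAddress = "https://mynamespace.servicebus.windows.net/myrelay";

        // Generate a token that allows sending to this specific relay for one year.
        string token = SharedAccessSignatureTokenProvider.GetSharedAccessSignature(
            keyName, key, relayAddress, TimeSpan.FromDays(365));

        Console.WriteLine(token);
    }
}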

The SAS token ends up in the so-called Service Bus authorization header.

There's another gotcha: the BizTalk relay adapters are in-process adapters. That's different from the regular WCF adapters, which are isolated adapters, meaning that the services are hosted in IIS. As a consequence of being in-process, relay services don't have access to additional processing in the IIS request pipeline, such as rate limiting, throttling and/or caching. We can make up for this missing functionality by using Azure API Management. Note that API Management can not only add policies, but can also expose a REST/JSON endpoint for the SOAP/XML service exposed by BizTalk.

As an example, a two-way receive port is created that uses the BasicHttpRelay adapter, together with a map that transforms the input message to an output message. On enabling the receive port, we see the relay endpoint being created in the Azure Portal. In Azure API Management, the set-header policy is used to add the Service Bus authorization header, which means I don't have to distribute the secret key to all the clients. To prove that it works, we can use the Azure API Management Developer Portal.

Finding the Service Bus Queue

I had to troubleshoot an Azure App Services solution that I didn't develop myself. In the solution, a web app sends a message to a Service Bus queue named From4PS_queue, which activates a logic app with a Service Bus queue trigger. The question was: which Service Bus queue is actually used?

In the Azure Portal I found a Service Bus namespace named SupportCalls for development, test and production. At the Service Bus level I found an ACS policy named SharedAccessKey; double-clicking this policy revealed the primary connection string for the Service Bus. I also found four queues at the Service Bus level, among them the queue I was looking for. This queue had two shared access key policies: api for sending messages to the queue and logicapp for reading messages from the queue. Both access key policies also held a primary connection string.

In the app settings of the API app, the primary connection string of the queue (policy: api) was used; I could see this right away. The settings for the logic app were harder to find. In the logic app's parameters file I found the Service Bus connection string TstSupportCalls (not the queue connection string, and not the Dev version). This was confusing, because the setting from the parameters file is actually not used. When I turned to the JSON file, I saw the queue trigger with a connection named DevSupportCalls. From the resource group of the API connection I could confirm that the Service Bus connection string DevSupportCalls was used. The name of the queue was entered in another property of the API app.

[box type="success"] Use the Service Bus connection string in logic apps. Use the queue connection string in API apps. [/box]