Itinerary-based message routing using Azure Logic Apps and Service Bus Actions

This is another integration pattern used quite extensively in the integration world. It is used when a message must be routed to several endpoints in a particular order using some form of routing list. Depending on your business requirements, the message being routed may be enriched or replaced before it is sent to the next service endpoint in the list.

image

This is probably one of the easiest integration patterns to implement using Logic Apps and Service Bus actions. There are other methods to implement this pattern, but I wanted to abstract the routing logic away from the Logic App itself, leaving it to focus on the business process rather than worrying about setting up the next routing endpoint.

By using the Actions feature of the service bus I can define the routing order of the next service endpoint, or in this case the next service bus subscription, by setting properties to match the next subscriber’s filter condition. A service bus action is executed after the filter condition has been met and is used to set the value of either a system or custom property before the message is consumed from the service bus. It is set in a similar way to the T-SQL “SET” command, where you set a field to a value.

For this scenario I have a sales order message that is required to be passed to several logic apps in a particular sequence. There is a sales order header logic app, a sales order lines logic app, a sales order payments logic app and finally a sales order completion logic app. The sales order message will look similar to the one below.

image

Provisioning Azure Service Bus

The method involves creating a service bus topic called “salesorder” with the following subscriptions.

servicebus

The filter and action rules are set up as in the following table. There are 2 custom properties, “MsgType” and “ItineraryLeg”, used for the subscriptions. The MsgType is used to group the messages and the ItineraryLeg defines the order of the subscribers.

The action is used to specify the next subscriber to route the message to after the current process, by setting the ItineraryLeg property.

Seq | Subscription Name | Rule Filter                                            | Rule Action
1   | SOHeader          | MsgType='salesorder' and ItineraryLeg = ''             | SET ItineraryLeg = 'solines'
2   | SOLines           | MsgType='salesorder' and ItineraryLeg = 'solines'      | SET ItineraryLeg = 'sopayments'
3   | SOPayments        | MsgType='salesorder' and ItineraryLeg = 'sopayments'   | SET ItineraryLeg = 'socompletion'
4   | SOCompletion      | MsgType='salesorder' and ItineraryLeg = 'socompletion' | (none)

A sample of the SOHeader rules is shown below:

image
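
If you script your Service Bus artefacts with Azure Resource Manager templates (as suggested in the summary at the end of this post), a rule is a child resource of the subscription. Below is a minimal sketch for the SOHeader rule; the namespace segment of the name and the apiVersion are assumptions.

{
    "type": "Microsoft.ServiceBus/namespaces/topics/subscriptions/rules",
    "apiVersion": "2017-04-01",
    "name": "<yournamespace>/salesorder/SOHeader/SOHeaderRule",
    "properties": {
        "filterType": "SqlFilter",
        "sqlFilter": {
            "sqlExpression": "MsgType='salesorder' and ItineraryLeg = ''"
        },
        "action": {
            "sqlExpression": "SET ItineraryLeg = 'solines'"
        }
    }
}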

Creating the Logic Apps

Next we will provision the 4 logic apps to create the SO header, process the line items, apply the payments and complete the sales order. 

image

The logic apps (SalesOrderHeaderProcessor, SalesOrderLinesProcessor, SalesOrderPaymentsProcessor) are constructed in a similar manner as shown below; the only difference is the topic subscription name used for the service bus trigger. The Delay action is where you would implement your business process logic on the received message.

image

Expanding the service bus trigger shape shows the following properties set. The other logic apps will have different topic subscription names set.

image

The real smarts of these workflows is the “Republish message” service bus action shape. You will need to add this shape to every logic app that is involved in processing the message. Here we set the ItineraryLeg and MsgType property values to the same values received in the service bus trigger. I am also adding another custom property called “Tracking”. This is used to record the names of the logic apps the message was routed through in chronological order, which can be useful for debugging. The content property is simply set to the message received from the service bus trigger. Depending on your business requirements, you may be required to publish a totally different message.

image

Below is the code-behind for the Republish message shape showing how to set up the properties section.

"body": {
    "ContentData": "@{triggerBody()?['ContentData']}",
    "Properties": {
        "ItineraryLeg": "@triggerBody()?['Properties']['ItineraryLeg']",
        "MsgType": "@triggerBody()?['Properties']['MsgType']",
        "Tracking": "@concat(coalesce(triggerBody()?['Properties']?['Tracking'],'Begin'),',',workflow()?['name'])"
    }
},

The other logic app SalesOrderCompletionProcessor simply pulls the message from the last subscription and sends the Tracking property value to RequestBin.

image

Messages are placed onto the service bus using the MessagePublisher logic app, which accepts a JSON message and initialises the service bus custom properties “ItineraryLeg” and “MsgType”. Here the ItineraryLeg is set to an empty string and the MsgType to “salesorder”. The body of the HTTP request trigger is used as the message content for the Send message action shape.

image
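
The code view of the Send message shape would look something like the sketch below. Base64 encoding the trigger body into ContentData is an assumption here, mirroring how the Republish message shape reads it back out.

"body": {
    "ContentData": "@{base64(triggerBody())}",
    "Properties": {
        "ItineraryLeg": "",
        "MsgType": "salesorder"
    }
}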

Testing

Using Postman we can POST a message to the MessagePublisher logic app. If you set the delays long enough in the logic apps, you can see the message being published and consumed by the subscribers in the order defined by the service bus actions.

Here is the result of the message being posted to the MessagePublisher logic app and the output from the SalesOrderCompletionProcessor logic app, which sends it to RequestBin.

Begin,SalesOrderHeaderProcessor,SalesOrderLinesProcessor,SalesOrderPaymentsProcessor,SalesOrderCompletionProcessor

Now say I wanted to change the order of the message processing, for example processing the payments before the sales order lines. I would simply update the actions on the service bus subscriptions as follows:

Seq | Subscription Name | Rule Filter                                            | Rule Action
1   | SOHeader          | MsgType='salesorder' and ItineraryLeg = ''             | SET ItineraryLeg = 'sopayments'
3   | SOLines           | MsgType='salesorder' and ItineraryLeg = 'solines'      | SET ItineraryLeg = 'socompletion'
2   | SOPayments        | MsgType='salesorder' and ItineraryLeg = 'sopayments'   | SET ItineraryLeg = 'solines'
4   | SOCompletion      | MsgType='salesorder' and ItineraryLeg = 'socompletion' | (none)

Now the output of the SalesOrderCompletionProcessor looks like this. You can clearly see the payments logic app being executed after the sales order header process.

Begin,SalesOrderHeaderProcessor,SalesOrderPaymentsProcessor,SalesOrderLinesProcessor,SalesOrderCompletionProcessor

In Summary

Use service bus actions to manage the itinerary list and abstract the routing logic away from the normal business processes of the logic apps.

Typically you would use Azure Resource Manager templates to set up the service bus subscriptions and rules, kept under TFS source control.

Keep a watch out for my next article on Content Based Routing using Azure Logic Apps and Service Bus.

Enjoy…


Integration Scatter-Gather pattern using Azure Logic Apps and Service Bus

Recently I have taken on a role as the Integration Architect and tech lead for a project to integrate messaging between MS Dynamics 365 AX/CRM and 3rd party systems. This was a huge solution that took over 10 months to design and develop, and consisted of over 60 Logic Apps.

With a large integration project like this, where multiple entities are required to be updated across multiple systems in a transactional manner, I required a process to correlate and aggregate all the responses into one composite message before proceeding onto the next task in the workflow. The scatter-gather pattern was chosen as a good candidate for this type of scenario.

The next problem was how to implement this pattern using Logic Apps. Using what I had learnt from my previous blog on enforcing ordered delivery of messages  https://connectedcircuits.wordpress.com/2017/08/26/enforcing-ordered-delivery-using-azure-logic-apps-and-service-bus/ I was able to design the solution below.

image

It is based on a single Logic App to publish the message onto a service bus topic and to retrieve all the responses from another service bus queue. I also wanted the solution to be flexible about which services the message is scattered to, by passing in a routing list that determines who to send the request message to.

The main component of the solution is the Scatter/Gather logic app, which sends the request message to the service bus topic after setting the subscription property values and a unique batch Id. In this example I have 3 sub-processor logic apps which will process the message in some form or another and then write a response message onto the service bus queue. Because sessions have been enabled on this queue and I am setting the session Id to the same BatchId that was sent with the message, we are able to correlate all the responses from the sub-processors within the same logic app instance that initially published the message onto the service bus topic.

After the Scatter/Gather logic app sends the message onto the service bus topic, it will go into a loop until all the response messages from the Process logic apps are received. It will then return the aggregated responses from the sub-process logic apps.

The whole solution can be broken down into 3 sections: setting up the Azure Service Bus, developing the Scatter-Gather Logic App workflow, and the sub-processor Logic Apps subscribing to the topic.

Setting up the service bus

Let's start by setting up the Azure Service Bus. A topic is provisioned to publish the request message to, with rules created for each subscriber. In this scenario I will have 3 subscriptions, with default rules set to a property called MsgProcessor = '1', MsgProcessor = '2' and MsgProcessor = '3' on each subscription respectively.

image

Next a Service Bus queue is provisioned to receive all the response messages from the subscribers. Sessions must be enabled on this queue to allow a single consumer to process all messages with the same session Id, which in our case will be the Scatter/Gather Logic App.

image
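
If you provision the queue with an Azure Resource Manager template, enabling sessions is a single property on the queue resource. A minimal sketch, with placeholder names:

{
    "type": "Microsoft.ServiceBus/namespaces/queues",
    "apiVersion": "2017-04-01",
    "name": "<yournamespace>/<responsequeue>",
    "properties": {
        "requiresSession": true
    }
}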

Developing the Scatter/Gather Logic App

The next part to develop is the scatter-gather Logic App which is the real smarts of the solution. Below is the full workflow with all the actions collapsed. We will go through and expand each of the shapes.

image

This workflow is triggered by an HTTP request, but it could also be triggered by other connectors.

image

Next is a series of variables required by the workflow, listed below:

  • BatchId – Used to group all the messages together; it is used as the session Id and is initialised with a random GUID.
  • CompositeMsg – Holds the aggregated response messages from the sub-processor logic apps.
  • IntermediateMsg – Used in the loop to temporarily store the aggregated message.
  • ScatterCount – The number of sub-processors the request body was published to.
  • ResponseCount – Holds the current number of responses received from the sub-processor logic apps.

image
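
In code view, the BatchId variable is a standard InitializeVariable action seeded with the guid() function. A sketch of what it might look like (the action name is an assumption):

"Initialize_BatchId": {
    "type": "InitializeVariable",
    "inputs": {
        "variables": [
            {
                "name": "BatchId",
                "type": "String",
                "value": "@{guid()}"
            }
        ]
    },
    "runAfter": {}
}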

Once the variables have been set up, a parallel branch is used to scatter the HTTP request body contents to the subscribing systems. For this demo I have the option to send the request body content to 3 different sub-processors.

image

To determine which sub-processes to send the request body to, I pass a list of the process names as one of the HTTP request header properties, as shown here:

"X-ScatterList": "'Process1','Process2','Process3'"

The filters on the parallel conditions are set up as below, where each checks whether the list in the HTTP header contains the process name.

image
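
In code view, each branch condition boils down to an expression along these lines (sketched here for Process1):

@contains(triggerOutputs()?['headers']?['X-ScatterList'], 'Process1')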

Expanding one of the condition tasks shows an action to send the request onto the service bus topic and to increment the ScatterCount variable by one.

image

Below is the “Send message processor 1” service bus connector expanded to show the properties. The two important pieces of information are the custom properties BatchId and MsgProcessor. The BatchId will be used to set the session Id later, and the MsgProcessor is used by the subscription filters on the service bus topic.

image
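
The code view of this action would be roughly as follows. Base64 encoding the request body is an assumption, matching how the responses are decoded later in the loop.

"body": {
    "ContentData": "@{base64(triggerBody())}",
    "Properties": {
        "BatchId": "@variables('BatchId')",
        "MsgProcessor": "1"
    }
}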

The next step is to increment the ScatterCount variable. This is how we keep track of the number of sub-processors that the message was scattered to.

image

Once the request body content has been published to the service bus topic, we cycle in a loop until we have received all the response messages from the service bus queue or the Until loop times out.

image

The first step in the loop is to get one new message at a time from the queue, using the BatchId variable as the session Id. This ensures we only get messages off the queue matching this same Id.

image

Then we check if a message was found using the Condition action with this filter: @equals(length(body('Get_messages_from_a_queue_(peek-lock)')), 1)

image

If a message was found on the service bus queue then the “If true” branch is executed, and if no message was found the “If false” branch is executed, setting a delay of a few seconds before iterating again.

Below are the actions inside the “If true” branch.

image

First we check if this is the first message received off the queue using the following filter: @equals(variables('ResponseCount'), 0)

image

If it is, then we just set the “CompositeMsg” variable to the service bus queue content data.

image

As we are using the Service Bus connector that is capable of returning multiple messages, we need to use an indexer to get to the first message. Below is the code view of the above action. Note that when the sub-processor logic app puts a message onto the service bus, it base64 encodes it, therefore we need to decode it back to a string value.

image
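
Reconstructed from that description, the value expression would be something like this, assuming the action name used in the filter above:

"value": "@{base64ToString(body('Get_messages_from_a_queue_(peek-lock)')?[0]['ContentData'])}"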

Now if this was not the first message received off the service bus, we need to append it to the other messages received. This is done with the following actions in the “If false” branch shown below.

image

First we copy the contents of the CompositeMsg variable into the “IntermediateMsg” variable, then concatenate the message received off the service bus with the contents of the “IntermediateMsg” variable, and assign the result back to the “CompositeMsg” variable. The syntax of the value for the “Append intermediate to composite” action is shown below:

"value": "@{concat(variables('IntermediateMsg'),',',base64ToString(body('Get_messages_from_a_queue_(peek-lock)')?[0]['ContentData']))}"

After the message from the queue has been added or appended to the CompositeMsg variable, it is completed to remove it from the queue, and the ResponseCount variable is incremented.

 image

Once all the response messages have been received from the sub-processor logic apps, or the “Until loop” times out, we need to close the service bus queue session using the BatchId and return the composite response message.

image

Here I am just sending the composite response message to a RequestBin endpoint. Ideally you would place this onto another service bus queue, perhaps together with the initial HTTP request body to tie everything together.

Sub-Processor Logic Apps

The last part of this solution is the sub-processor logic apps, which subscribe to the service bus topic and process the message in some form or another before returning a response message onto the service bus queue. Using a separate logic app to manage the post-processing of the initial request message provides scalability and separation from the messaging orchestration components.

image

The properties of the trigger service bus connector are shown below, which is a typical setup.

image

Next, the delay task is there just to simulate processing of the message received from the topic. This is where you would normally call another API endpoint or process the message in some other fashion. Remember the maximum lock duration is 5 minutes; if your process will take longer than this, you will need to renew the lock.

image

After the message has been processed, you will need to compose a response message to put back onto the service bus queue. The schema of the response message should be generic across all the sub-processor logic apps to make it easier to parse later on. For the demo a relatively simple schema will be used, consisting of the received BatchId and the sub-processor logic app name.

image
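
A Compose action along the lines of this sketch would produce that schema. Using the workflow name as the response message is an assumption, matching the aggregated output shown in the testing section below.

{
    "BatchId": "@{triggerBody()?['Properties']?['BatchId']}",
    "ResponseMsg": "@{workflow()?['name']}"
}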

Once the response message has been composed, it is placed onto the service bus queue, ensuring the session Id is set to the BatchId that was sent with the message. Remember this queue has been provisioned with “Sessions Enabled”.

image

The code view for the connector looks like this:

image

The last task of the workflow is to complete the message in the topic subscription, as shown here:

image

Testing the solution

We can use Postman to send the following request to the scatter-gather logic app. I am also setting the scatter list header property to publish the message to all 3 sub-processor logic apps.

POST /workflows/…/triggers/manual/paths/invoke?api-version=2016-10-01&sp=%2Ftriggers%2Fmanual HTTP/1.1
Host: prod-11.australiasoutheast.logic.azure.com:443
Content-Type: application/json
X-ScatterList: 'Process1','Process2','Process3'
Cache-Control: no-cache
Postman-Token: 644c5cdb-fd60-4903-b3ec-2d3d6febfe7e

{
    "OrderId": "12",
    "Message": "Hello1"
}

Using RequestBin we can see the aggregated response messages from all 3 sub-processors.

[{"BatchId":"5e1df244-2153-4529-aaba-12a82aa788fd","ResponseMsg":"Process1"},{"BatchId":"5e1df244-2153-4529-aaba-12a82aa788fd","ResponseMsg":"Process2"},{"BatchId":"5e1df244-2153-4529-aaba-12a82aa788fd","ResponseMsg":"Process3"}]

What’s next

Further enhancements can be made by adding flexibility to the timeout of the Until loop, as some responses may take hours or days to come back. When waiting for very long periods for a response, you may need to extend the delay period in the “Until loop” to avoid reaching the 5000 iteration limit.

Keep a watch for my next blog where I will show you how to implement the Itinerary-based routing integration pattern using Azure Logic Apps and Service Bus.

Enjoy…


Using an Azure APIM Policy to call an OAuth endpoint and cache the token

Recently I have been involved as the Integration Architect and tech lead for a project to integrate between MS Dynamics 365 AX/CRM and 3rd party systems.

One of the challenges was to provide single endpoints for internal systems calling web services outside of the organisation that utilise OAuth 2.0. Normally the consumer has to call an OAuth token endpoint first and then append the token to the request before calling the actual web service.

By using APIM as a proxy, and policies in APIM, I managed to achieve the goal of providing a single URL endpoint for the consumer. The policy initially gets the token from the authorisation endpoint, caches the token and then passes the token to the web service being called. This process is known as fragment caching, where parts of the response are cached for subsequent requests. Caching the bearer token also improved performance significantly for subsequent calls.

Below are the steps I used to add a web API to create transfer orders in Dynamics AX, along with a policy, using the Azure APIM management portal.

1. First create the properties for the OAuth clientId and client secret. A good tip is to prefix the property name and set the Tags with the name of the API you are calling. This helps later on when you have multiple properties to manage.

image

The Properties page should look something like below after adding your custom properties:

image

2. Next, go to the APIs page to either add or import the definition of the API you are calling. Here I am just going to add the API manually by first entering the name, API endpoint address and the public facing URL suffix. Remember to add it to an existing product.

image

3. After the API has been created we will then add the operations. For this demo I will just add one operation to create a transfer order.

image

4. Now that the API we wish to call has been added to the APIM service, we can finally turn our attention to creating a policy which does the background task of obtaining the OAuth token. Here I will set up a policy on the CreateTransferOrder operation of the web API. Once selected, click the ADD POLICY button to begin creating the policy from the template.

image

5. The “<inbound>” policy section is applied to the incoming request before forwarding the request to the backend service. This is where we will check the cache for the authorisation token and, if there is no hit, call the OAuth token endpoint to obtain a token.

   1: <inbound>
   2:     <cache-lookup-value key="token-{{Dev-Web1-ClientId}}" variable-name="bearerToken" />
   3:     <choose>
   4:         <when condition="@(!context.Variables.ContainsKey("bearerToken"))">
   5:             <send-request mode="new" response-variable-name="oauthResponse" timeout="20" ignore-error="false">
   6:                 <set-url>https://login.microsoftonline.com/xxxxxx.onmicrosoft.com/oauth2/token</set-url>
   7:                 <set-method>POST</set-method>
   8:                 <set-header name="Content-Type" exists-action="override">
   9:                     <value>application/x-www-form-urlencoded</value>
  10:                     <!-- for multiple headers with the same name add additional value elements -->
  11:                 </set-header>
  12:                 <set-body>@("grant_type=client_credentials&client_id={{Dev-Web1-ClientId}}&client_secret={{Dev-Web1-ClientSecret}}&resource=https%3A%2F%2Fxxxx.xxxxx.ax.dynamics.com")</set-body>
  13:             </send-request>
  14:             <set-variable name="accessToken" value="@((string)((IResponse)context.Variables["oauthResponse"]).Body.As<JObject>()["access_token"])" />
  15:             <!-- Store result in cache -->
  16:             <cache-store-value key="token-{{Dev-Web1-ClientId}}" value="@((string)context.Variables["accessToken"])" duration="3600" />
  17:         </when>
  18:     </choose>
  19: </inbound>

Let's go through the key points of this definition below.

  • Line 2: Assigns the value in cache to the context variable called “bearerToken”. On first entry, the cache value will be null and the variable will not be created. Note I am adding the ClientId as part of the cache key name to keep it unique. Property values are accessed by surrounding the key name with double braces, e.g. {{myPropertyName}}.
  • Line 4: Checks if the context variable collection contains a key called “bearerToken” and, if not found, executes the code between the opening and closing “<when>” XML elements.
  • Line 5: Initiates the request to the OAuth endpoint with a response timeout of 20 seconds. This will put the response message into the variable called “oauthResponse”.
  • Line 6: Is where you set the URL to send the request to. In this scenario I am using the Azure AD OAuth token endpoint below as our STS service: https://login.microsoftonline.com/xxxxxx.onmicrosoft.com/oauth2/token
  • Line 12: Is where you define the body payload for the request, which is a typical client credentials grant type payload. Here I am getting the values for the client Id and secret from the user-definable properties set in the Properties page. The resource parameter is just hardcoded to the URL-encoded resource URL of the API, but it can also be parameterised.

<set-body>@("grant_type=client_credentials&client_id={{Dev-Web1-ClientId}}&client_secret={{Dev-Web1-ClientSecret}}&resource=https%3A%2F%2Fxxxx.xxxxx.ax.dynamics.com")</set-body>

  • Line 14: Casts the response as a JSON object to allow the retrieval of the “access_token” value using an indexer, and assigns it to the context variable “accessToken”.
  • Line 16: Is where we add the contents of the variable “accessToken” into the cache for a period of 3600 seconds.

6. Now that the “<inbound>” section has been completed, we can look at the “<backend>” section of the policy. This is where the policy forwards your request to the backend web service as defined in the API configuration page.

   1: <backend>
   2:     <send-request mode="copy" response-variable-name="transferWSResponse" timeout="20" ignore-error="false">
   3:         <set-method>POST</set-method>
   4:         <set-header name="Authorization" exists-action="override">
   5:             <value>@("Bearer " + (string)context.Variables["bearerToken"])</value>
   6:         </set-header>
   7:         <set-header name="Ocp-Apim-Subscription-Key" exists-action="delete" />
   8:         <set-header name="Content-Type" exists-action="override">
   9:             <value>application/json</value>
  10:         </set-header>
  11:     </send-request>
  12: </backend>

Let's go through this section as before.

  • Line 2: Creates the request to the backend web service. Here we are placing the response from the web service into the variable called “transferWSResponse”.
  • Line 4: Creates the “Authorization” header to be sent with the request.
  • Line 5: Adds the bearer token value from the context variable “bearerToken” to the authorisation header.
  • Line 7: Removes the APIM subscription key header so it is not forwarded to the backend web service.

7. Now we need to return the response message from the backend web service to the caller. This is done in the “<outbound>” policy section. Here we simply return the value of the variable “transferWSResponse” back to the caller.

   1: <outbound>
   2:     <return-response response-variable-name="transferWSResponse">
   3:     </return-response>
   4:     <base />
   5: </outbound>

That’s the whole policy defined, which will call the OAuth endpoint to get the token and cache it for subsequent calls.

Using the tracing feature in APIM, we can see that when the first request is made, the cache lookup misses and the variable is not set, as shown below.

image

The next trace shows that subsequent requests hit the cache and set the context variable to the bearer token until it expires.

image

One important note about retrieving data from the cache: it is an out-of-process call and can add tens of milliseconds to a request.

Working with APIM policies can make a huge impact on your API development efforts, as concerns such as logging and access management can be off-loaded onto the policies available in APIM. A full list of expressions used in policies can be found here: https://docs.microsoft.com/en-us/azure/api-management/api-management-policy-expressions

Enjoy…


APIM backup & restore using Azure Automation Services

In this post I will describe the steps for using PowerShell scripts to back up APIM, and for using the Automation service to schedule the backup every month. The restore function also allows you to restore APIM into another resource group or APIM service. For the project I am working on now, this is how I move the configuration settings between each environment.

First you need to create a blob store, which ideally should be Read-Access geo-redundant storage (RA-GRS). This is where the APIM backups will be stored. After the blob store has been provisioned, create a container for the backup files as shown below.

image

Once the container is created, take note of the Storage account name and Access key for the blob store. These values will be used in the PowerShell script later.

image

Next provision an Azure Automation service and ensure the Create Azure Run As account is set to “yes”.

image

Once it has been provisioned, ensure the modules have been updated by clicking on the “Modules” link on the left hand navigation panel and then “Update Azure Modules”. Note this does take a while to complete.

 image

After the update has been completed, click the “Browse gallery” link and in the search textbox type “apim”. Once found, double click on the row to open the import blade.

 image

Now click the Import icon to import the module. This can take several minutes to complete.

image

After the PowerShell module has been imported, create a new Runbook and ensure the type has been set to “PowerShell”. Then click the Create button at the bottom of the page.

image

This will open up a new blade where we can add and test the PowerShell script to back up the APIM settings.

image

Now add the following script into the text editor, remembering to update the variables with your environment settings. Once you have added the script, click the “Save” button and then the “Test pane” button to ensure the script runs successfully.

Disable-AzureDataCollection
Write-Output "Starting backup of APIM..."

# sign in non-interactively using the service principal
$connectionName = "AzureRunAsConnection";
$storageAccountName = "apimstorebackup";
$storageAccountKey = "<storage account key>";
$resourceGroupName = "APIMService";
$apimName = "apimmanager";
$targetContainerName = "backup";
$targetBlobName = "AzureAPIM.apimbackup";

try
{
    # Get the connection "AzureRunAsConnection"
    $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName

    Write-Output "Logging in to Azure..."
    Add-AzureRmAccount `
        -ServicePrincipal `
        -TenantId $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
}
catch {
    if (!$servicePrincipalConnection)
    {
        $ErrorMessage = "Connection $connectionName not found."
        throw $ErrorMessage
    } else {
        Write-Error -Message $_.Exception
        throw $_.Exception
    }
}

$sourceContext = (New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey);

Write-Output "Starting backup of APIM instance";
Backup-AzureRmApiManagement `
    -ResourceGroupName $resourceGroupName `
    -Name $apimName `
    -StorageContext $sourceContext `
    -TargetContainerName $targetContainerName `
    -TargetBlobName $targetBlobName;

Write-Output "Backup of APIM completed.";

Here is a description of the variables:

  • $connectionName = “AzureRunAsConnection” – the default connection account that was created when the Automation service was provisioned.
  • $storageAccountName = “apimstorebackup” – name of the blob storage account that was created in the first step.
  • $storageAccountKey = “<storage account key>” – the blob store access key obtained from the portal.
  • $resourceGroupName = “APIMService” – name of the Azure resource group.
  • $apimName = “apimmanager” – the name of the APIM service.
  • $targetContainerName = “backup” – name of the backup container in the blob store.
  • $targetBlobName = “AzureAPIM.apimbackup” – file name of the backup file. This can be omitted, in which case a default filename {apimName}-{yyyy-MM-dd-HH-mm}.apimbackup is created.

Once you have confirmed the script executes without any errors, you can now set up a recurring schedule by creating a new schedule in the Automation service blade under Shared Resources.

image

image

Next you need to link your Runbook to this schedule by double clicking on your runbook name and then clicking the Schedule button on the top menu. This will open another blade where you can select from all the schedules you have created.

image

That completes the automated backup process. Below is the PowerShell script required to restore the backup file.

# get the storage context
$sourceContext = (New-AzureStorageContext `
                    -StorageAccountName "<blob storage name>" `
                    -StorageAccountKey "<blob storage account key from Azure portal>")

# restore the backup
Restore-AzureRmApiManagement -ResourceGroupName "<name of resource group>" `
                             -Name "<name of the APIM service>" `
                             -StorageContext $sourceContext `
                             -SourceContainerName "<blob storage container name>" `
                             -SourceBlobName "<backup file name>"

More details on these scripts can be found here: https://docs.microsoft.com/en-us/powershell/module/azurerm.apimanagement/restore-azurermapimanagement?view=azurermps-4.3.1

Enjoy.


Enforcing Ordered Delivery using Azure Logic Apps and Service Bus

When consuming messages from an Azure service bus, the order may not be guaranteed due to the brokered messaging scheme, where multiple consumers can consume messages from the bus. Sure, you can force the Logic App to execute as a single instance, but then you sacrifice performance and scalability. You can also use ReceiveAndDelete, but then you lose the transactional nature of the bus. Ultimately, to ensure messages are consumed in the correct order while keeping the transactional nature of the bus, you would add a sequence number to each message and use this to enforce the ordering.

To achieve ordered delivery using Logic Apps, you need to ensure all related messages are consumed by the same Logic App instance, and for this we use the session Id property on the service bus. Below is the full workflow process to force ordered delivery using Logic Apps and session Ids on the service bus subscription.

image

This scenario is based on a financial institution which requires all monetary transfers to be processed in an ordered fashion. The key is choosing a suitable session identifier, and with this in mind the account number was the most suitable candidate, as we want a single consumer to process all the transactions for a particular account number.

Here we have created a subscription for a topic called AccountTransfers. Note that Enable sessions is checked.

image

Once the service bus has been configured, we can now dissect the workflow to see how we can achieve ordered delivery.

The workflow is initiated by a polling Service Bus connector. The properties of this connector are shown below. The key point here is to set the Session id to “Next Available”. This forces the Logic App to create a new instance for each unique session Id value found on the service bus.

image

The next action, “ProcessSBMessage”, calls another logic app which does the processing of the message found on the bus. Here I am just passing the raw base64 encoded message from the Service Bus trigger. Using the “separation of concerns” pattern moves the business logic away from the process of ensuring ordered delivery.

image

Once the message has been sent to the chained Logic App and a response has been returned, we can complete the message from the bus with the following action.

image

Next we go into a loop until the exit condition has been satisfied. I am going to use a counter that is incremented if no messages are found on the service bus. If no more messages are found on the service bus after 30 seconds, the loop will exit.

image

The loop starts with another service bus action which gets the messages from the topic subscription. Here we only want to retrieve one message at a time from the service bus using a peek-lock, re-using the Session Id from the initial service bus trigger “When a message is received in a topic subscription”. We then check if a message was found in the output body using the expression @not(equals(length(body('Get_messages_from_a_topic_subscription_(peek-lock)')), 0))

image

If a message is found, the “If True” branch is executed, which again calls the same Logic App as before to process the message. Note the indexer used to get to the ContentData, as the service bus action above returns a collection.

image

Once a successful response is received from the ProcessSBMessage Logic App, the message is completed and the LoopCounter variable is reset to zero. Note the lock token comes from the service bus action within the loop, while the Session Id comes from the initial service bus trigger which started the workflow.

image

Below is the code view for setting the lockToken and SessionId of the “Complete the message” action inside the loop.  Take note of the indexer “[0]” before the LockToken element.

image
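
Reconstructed from the description above, the relevant inputs would look roughly like this (the action and trigger names are assumptions based on the earlier screenshots):

"lockToken": "@{body('Get_messages_from_a_topic_subscription_(peek-lock)')?[0]['LockToken']}",
"sessionId": "@{triggerBody()?['SessionId']}"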

If no messages are found on the service bus, the False branch is executed. This simply has a delay action so as not to poll too quickly, and increments the LoopCounter.

image

The last step is to close the session when the Until loop exits, using the Session Id from the initial service bus trigger which started the workflow.

image

Now you are ready to send messages into the service bus. You should see a Logic App instance spin up for each unique session Id. Remember to set the session Id property on the message to some value before sending it.

Enjoy…


Error updating AX entities using the Dynamics 365 for Operations connector in Logic Apps

When trying to update an entity via the Dynamics 365 for Operations connector, you may encounter the following error:

{ "status": 400, "message": "Only 1 of 2 keys provided for lookup, provide keys for SalesOrderNumber,dataAreaId.", "source": "127.0.0.1" }

One would think passing the ItemInternalId guid value, which is the primary key for the entity, as the Object Id property would be adequate to find the record to update. Seems not, judging by the error being thrown back.

image

 

Apparently you need to supply the 2 keys mentioned in the error response message, SalesOrderNumber and dataAreaId, as the Object Id, as shown below. Note the comma between the sales order number (Sales Order) and the dataAreaId (Company).

image

So the item path for the entity to update, as seen from the code view, would look like this:

image
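
As a rough sketch only (the entity name, order number and company value are hypothetical, and the exact path format depends on the connector), the Object Id segment ends up carrying both keys separated by the comma:

"path": "/datasets/default/tables/@{encodeURIComponent('SalesOrderHeaders')}/items/@{encodeURIComponent('000123,usmf')}"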

Enjoy…

 


Fixing syntax errors when porting Logic Apps to Visual Studio

After initially designing your Logic App in the Azure portal, you may wish to port it to Visual Studio 2017 to manage the template in a source control repository and to further develop it from Visual Studio.

After porting the code, you may encounter some of the following errors when trying to save or deploy your Logic App from Visual Studio.

Error: …the string character ‘@’ at position ‘0’ is not expected.

This is fairly easy to identify, as Visual Studio highlights the code it thinks is syntactically incorrect, as shown below. Remember this code was ported over from the Azure portal, where it parsed without any issues.

image

It is complaining about an unrecognised function. To get around this issue we need to use the “concat” string function to treat this as a string literal.

The original syntax is here: "ProductCodes": "[@{outputs('Compose_Product_Detail')}]"

Surrounding the whole value "[@{outputs('Compose_Product_Detail')}]" with concat, as shown below, resolves this error.

"@concat('[', outputs('Compose_Product_Detail'), ']')"

Breaking the designer when parameterising the subscription Id when calling a function

You may have a call-back function defined in your Logic App which has the subscription guid embedded in the code (blanked out for security reasons) similar to below:

image

When trying to parameterise the subscription Id by simply adding the function subscription().subscriptionId in place of the guid value as shown below:

"function": {
    "id": "/subscriptions/subscription().subscriptionId/resourceGroup…

You will get the following error when trying to save the changes.

image

To overcome this issue, wrap the value in a concat function as shown below. Note the URL has been shortened with “…” to keep it readable.

"function": {
    "id": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/…/providers/Microsoft.Web/sites/…/functions/Cmn_GuidMapNullValue')]"

You can also apply this same technique to parameterise other information in the Id key such as the website location.

Enjoy…
