
Azure Automation Hybrid Workers

When starting out with Azure Automation, the number of options available can be quite daunting. Our internal IT environment still uses a lot of on-premises resources, which triggered my interest in hybrid workers.

This article goes into detail on what hybrid workers are and how they work. Fair warning: there is a lot of text in this one.

What are hybrid workers?

The primary use of hybrid workers is to provide the ability to start Azure Automation jobs in locations that Azure cannot reach by itself. A hybrid worker allows the automation code to run directly on the host system where the worker is installed, and that is exactly how you should approach them:

Azure Automation Hybrid Workers allow you to run automation code on a remote system.

Now, the naming Microsoft chooses can be somewhat unclear at times (really?). The Windows installer explicitly calls it the On-Premises Hybrid Worker… but don’t let that stop you from installing hybrid workers on Azure VMs, AWS VMs, or wherever else you might need them. Whenever you need to run local scripts on a virtual machine from Azure Automation, hybrid workers are your go-to resource.

Worker Groups

Hybrid workers are organized into worker groups. These groups are used to provide a form of high-availability to your runbooks. Whenever an automation task is started against a hybrid worker group the first worker to respond picks up the job and runs it.

This way of dispatching tasks has an impact on the way we use workers. For consistent execution of the automation scripts, members of a hybrid worker group should have equal access rights to the resources they touch, and should have the same Powershell modules, settings, etc. installed. Consider them members of the same load-balancing group: scripts should be able to complete their tasks from each node in the cluster.
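To spot-check that parity, you could compare the module lists of two group members over remoting. A minimal sketch, assuming Powershell remoting is enabled and using placeholder computer names:

```powershell
# Compare the Powershell modules available on two hybrid workers in the same group.
# WORKER-1 and WORKER-2 are placeholder names for your own group members.
$moduleLists = Invoke-Command -ComputerName 'WORKER-1', 'WORKER-2' -ScriptBlock {
    Get-Module -ListAvailable | Select-Object -ExpandProperty Name | Sort-Object -Unique
}

# Remoting tags each result with PSComputerName, so we can group per worker
# and diff the two sets; no output means the module lists match.
$byWorker = $moduleLists | Group-Object -Property PSComputerName
Compare-Object -ReferenceObject $byWorker[0].Group -DifferenceObject $byWorker[1].Group
```

Any lines in the comparison output point at modules that exist on one worker but not the other.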

Run Powershell automation scripts on multiple servers

I found myself in the situation where I needed to use Azure Automation to perform scripted tasks against a multitude of servers. My first approach was to install a hybrid worker on each server and then run the Powershell scripts directly. This resulted in a number of difficulties. First, when a server was offline or unavailable the task simply failed to start. This is because each of my workers was the sole member of its own worker group, which in turn meant that each time a machine went down, its worker group went down with it.

The second problem was installing the agent itself. The Hybrid Worker has a dependency on the Az Powershell module, which meant that all my machines got a large number of Powershell modules installed that I needed only for the sake of the worker (note: the script Microsoft provides to install the worker uses the almost obsolete AzureRM module. After installation it is no longer required, and you can uninstall it safely).
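If you want to do that clean-up, a hedged sketch is below; it assumes the Az.Accounts module is present (it ships the Uninstall-AzureRm helper) and an elevated session:

```powershell
# List the leftover AzureRM modules first, so you can see what will be removed.
Get-Module -ListAvailable -Name 'AzureRM*' | Select-Object Name, Version

# Remove all installed AzureRM modules.
# Uninstall-AzureRm comes with the Az.Accounts module; run this elevated.
Uninstall-AzureRm
```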

This was about the moment I realized I was doing it the wrong way around. Hybrid workers should be considered proxy endpoints in your internal network, being able to reach systems that cannot be reached natively from Azure.

The solution was to install the hybrid worker on a server in the internal network (in the case of my demo environment, the management server) and treat that server as my Powershell break-out server. Let me explain the difference with an example.

Retrieving a machine’s culture settings (the wrong way)

My demo environment consists of four virtual machines.

  • DC: my domain controller (Server 2019 core)
  • MGT: my management server (Server 2019 datacenter)
  • DEMO-1: A demo machine (Server 2019 datacenter)
  • DEMO-2: A demo machine (Server 2019 datacenter)

I started out great. My script was really simple – it was a oneliner:

Get-Culture

I had installed hybrid workers on all of my machines, and I was good to go.

Opening my Azure Automation account I created a runbook:

Create a basic runbook

Now, when I test the runbook it runs fine. The output is exactly as expected: a single line showing the culture settings of the machine. Problems start when I need to run this against all my machines. All of a sudden I have four jobs to start and four output windows to review. My only conclusion was that there had to be a better way to do this.
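To make the pain concrete: with one worker group per machine, those four jobs have to be started individually. A sketch of what that looks like from the Az.Automation module, with placeholder resource names:

```powershell
# Start the same runbook once per single-machine worker group.
# Resource group, automation account, runbook and group names are placeholders.
$workerGroups = 'DC', 'MGT', 'DEMO-1', 'DEMO-2'

foreach ($group in $workerGroups) {
    # -RunOn targets a specific hybrid worker group instead of an Azure sandbox.
    Start-AzAutomationRunbook -ResourceGroupName 'rg-automation' `
        -AutomationAccountName 'aa-demo' `
        -Name 'Get-CultureRunbook' `
        -RunOn $group
}
```

Four jobs, four job IDs, four output streams to review.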

The proper way to use hybrid workers

To come back to my earlier point, hybrid workers should be considered target endpoints from which we can start scripts. Taking the same demo environment described in the previous paragraph, we need to think about where the hybrid worker should be installed.

In the case of my lab environment I have three specific roles: a domain controller, a management server and two servers for running applications. The best place in this scenario is the management server:

  • As a management server it has network access to the other servers
  • It is “the right place” to install Powershell modules and other requirements
  • In the case of Powershell it is a suitable place to initiate remoting.

After installing and configuring the hybrid worker the following script can be used to retrieve the culture status using remoting:

Param (
    [Parameter(Mandatory=$true)]
    [string[]]$VMnames
)

function Get-CultureDetails {
    param ([string[]]$VMComputerNames)
    Invoke-Command -ComputerName $VMComputerNames -ScriptBlock { Get-Culture }
}

Write-Output (Get-CultureDetails -VMComputerNames $VMnames)

The basic layout of the runbook is split into three parts: a parameter block, the functions that perform the actions, and the actual task.

When starting the script you will receive the following dialog box:

The parameter defined in the script is provided as an input box that accepts a string array as JSON (for example ["MGT","DEMO-1","DEMO-2"]). I’ve selected Hybrid Worker as the Run on option, and selected the MGT worker group.

As the job completes we get one line as a result, and three errors. The task completed successfully against the local machine, but remoting failed:

Good old access denied.

The credential thing

Now would be a good time to have “the talk”. Repeat after me: domain admin credentials should never be used for simple tasks!

Jokes aside, as an admin you should implement the principle of least privilege, also when remoting. Back to the example script: there are a few ways we can resolve the access denied issue.

Option 1: Change the hybrid worker account

This is the easiest way to solve the issue, but not much of a least-privilege solution. By default the worker group runs under the local system account. As this is not a domain account, it cannot be used to access remote machines.

We can safely store credentials in the Automation account under “Credentials”. This can be done either with a UPN or with the DOMAIN\USER format. I’ve used a domain account that is a local administrator on all servers.
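Creating such a credential asset can also be scripted instead of clicking through the portal. A sketch assuming the Az.Automation module and placeholder resource and asset names:

```powershell
# Prompt interactively for the domain account; nothing is stored in the script.
$cred = Get-Credential -Message 'Enter the remoting account (DOMAIN\USER or UPN)'

# Store it as a credential asset in the Automation account.
# Resource group, account and asset names are placeholders.
New-AzAutomationCredential -ResourceGroupName 'rg-automation' `
    -AutomationAccountName 'aa-demo' `
    -Name 'DemoRemotingCred' `
    -Value $cred
```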

Now, when accessing the Worker Group we can change the credentials used:

Go to the Hybrid worker group settings and set the custom credentials.

When running the task now, the output is complete:

Option 2: Using local credentials in your script

This option is only viable for test and demo environments, as the account password will be readable in plain text in the script. Approach it the same way you would fill in a username and password in a Powershell script:

# Demo only: the password is visible in plain text!
# Note: the original used $pwd, which shadows the automatic $PWD variable,
# so a different variable name is safer.
$password = "P@ssword1" | ConvertTo-SecureString -AsPlainText -Force
$username = "administrator"
$credential = New-Object System.Management.Automation.PSCredential($username, $password)

After that you change the Invoke-Command call to include the credentials and you will be good to go.
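With the credential object from above, the modified call would look roughly like this (again, demo only):

```powershell
# Pass the (plain-text, demo-only!) credential to the remote call.
Invoke-Command -ComputerName $VMComputerNames -Credential $credential -ScriptBlock {
    Get-Culture
}
```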

This is a BAD solution. You should never, ever, have credentials in plain sight!

Option 3: PSCredentials

The most robust option is to use credentials stored in the Automation account and pass them to the script by defining a PSCredential parameter. We update the script to look as follows:

Param (
    [Parameter(Mandatory=$true)]
    [string[]]$VMnames,
    [Parameter(Mandatory=$true)]
    [pscredential]$RemotingCredentials
)

function Get-CultureDetails {
    param (
        [string[]]$VMComputerNames,
        [pscredential]$Credentials
    )
    Invoke-Command -ComputerName $VMComputerNames -ScriptBlock { Get-Culture } -Credential $Credentials
}

Write-Output (Get-CultureDetails -VMComputerNames $VMnames -Credentials $RemotingCredentials)

I’ve added the parameter to the opening block and modified the function to accept it. One thing that almost caught me off guard is that the PSCredential object in the parameters is not a standard PSCredential object. When you run the code, check what is mentioned in the description:

The input required is an Automation.PSCredential. This means we can split roles: we can have any number of different credentials pointing to specific accounts for specific tasks. Just enter the right remoting credentials and you get the right results!
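In practice, this means you supply the name of a credential asset rather than an actual username and password. A hedged sketch of starting the runbook from Powershell, where every resource, runbook and asset name is a placeholder:

```powershell
# Start the runbook on the MGT worker group, pointing the credential
# parameter at a credential asset by name. All names are placeholders.
Start-AzAutomationRunbook -ResourceGroupName 'rg-automation' `
    -AutomationAccountName 'aa-demo' `
    -Name 'Get-CultureRunbook' `
    -RunOn 'MGT' `
    -Parameters @{
        VMnames             = @('MGT', 'DEMO-1', 'DEMO-2')
        RemotingCredentials = 'DemoRemotingCred'
    }
```

A runbook for patching could reference one asset, a runbook for reporting another, each tied to an account with only the rights that task needs.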

Option 4: With thanks to Chris Twiest: Get-AutomationPSCredential

Of course, just 20 hours after I published the post one of my former colleagues pointed me to the fact that we can use a built-in mechanism of the Automation service.

While editing your script you can use the sidebar to insert a code snippet that automatically fills in a Powershell line retrieving the credential from the vault:
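The inserted line is a call to the internal Get-AutomationPSCredential cmdlet, which is only available inside runbooks. A sketch of the end result, with a placeholder asset name:

```powershell
# Retrieve the credential asset from the Automation account at runtime.
# Only the asset name appears in the script; the secret stays in the vault.
# 'DemoRemotingCred' is a placeholder for your own asset name.
$credential = Get-AutomationPSCredential -Name 'DemoRemotingCred'

Invoke-Command -ComputerName $VMnames -Credential $credential -ScriptBlock { Get-Culture }
```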

Using this method you can specify a credential set in your script without exposing the actual credentials. This is the way it is supposed to work, and like option 3 it allows for a perfect application of the principle of least privilege.

Thanks Chris 🙂

Wrapping it up

Hybrid Workers can be quite daunting to get started with. I personally found the documentation on Microsoft Docs helpful but scattered all over the place. The use case for a hybrid worker group is a valid one, however: you can perfectly well use a bridgehead machine to perform all manner of tasks directly on virtual machines.

Leave the worker group running under the local system account and split off the roles required for the different scripts, following the principle of least privilege.

With that, have fun with Azure Automation, and let me know how this works out for you!
