Category Archives: Series: PowerShell Monitoring

Monitoring with PowerShell: Monitoring log on of specific users.

Hi guys! This’ll be the last blog before I go on holiday, so enjoy it and see you all in two weeks.

This time we’re going to talk about monitoring the logon of specific users. We use named accounts for all our engineers and want to alert if an unnamed account is logged on in an interactive session. To do this, we’ll use the WMI class Win32_LoggedOnUser.

Just as an extra disclaimer: please remember that in co-managed environments, or environments that belong to others, you won’t always be able to perform all best practices. This script will mostly be used to monitor those messy environments and give you a little bit of extra security. My personal advice would always be to disable accounts that are no longer allowed to log in, use managed service accounts without interactive permissions for services, and delete the accounts of ex-employees directly after they leave the company.

The script

Let’s get started on the script. First we’ll have to define which accounts we do not want to have in interactive sessions:

$ForbiddenList = @("Cyberdrain","cyber","migration","administrator","admin","service-QuickBooks*","svc-QB*","ExEmployee1")

So in this list, we name all accounts that are forbidden; you can add any user you would like. We manage a lot of servers, and sometimes after an engineer leaves our company, servers are still logged in with the user that we’ve disabled or deleted. With this script we also monitor those situations, so we can log the deleted user out of all servers (a remediation sketch follows the complete script below).

The next step is getting a list of the active users and comparing them:

$ActiveUsers = (Get-CimInstance Win32_LoggedOnUser).antecedent | Select-Object -Unique | Where-Object {$_.name -in $ForbiddenList}

So here we get all users that are currently logged on to the machine and compare them to our forbidden list; the $ActiveUsers variable only gets filled if there is a match.

if(!$ActiveUsers){$ActiveUsers = 'false'}

And this last line says that if $ActiveUsers is empty, meaning no logged-on users were found that are on our list, it will report “false”. The complete script is just 3 lines and can be found below.

$ForbiddenList = @("Cyberdrain","cyber","migration","administrator","admin","service-QuickBooks*","svc-QB*","ExEmployee1")
$ActiveUsers = (Get-CimInstance Win32_LoggedOnUser).antecedent | Select-Object -Unique | Where-Object {$_.name -in $ForbiddenList}
if(!$ActiveUsers){$ActiveUsers = 'false'}
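One caveat: the -in operator does exact matching only, so wildcard entries in the list (like "service-QuickBooks*") will never match. A small sketch of a wildcard-aware version of the matching line, comparing each logged-on user against every pattern with -like:

#Hypothetical wildcard-aware variant: match each user against every pattern in the list.
$ActiveUsers = (Get-CimInstance Win32_LoggedOnUser).antecedent | Select-Object -Unique | Where-Object {
    $Name = $_.Name
    [bool]($ForbiddenList | Where-Object { $Name -like $_ })
}

And since we mentioned logging deleted users out of all servers: the monitoring set itself only alerts. A hedged remediation sketch, parsing the plain-text quser output for the session ID, could look like this. Test carefully before using it in production, as quser’s output format differs per OS language:

#Hypothetical remediation: log off every session belonging to a forbidden user.
if ($ActiveUsers -ne 'false') {
    foreach ($User in $ActiveUsers) {
        quser 2>$null | Where-Object { $_ -match [regex]::Escape($User.Name) } | ForEach-Object {
            $SessionId = [regex]::Match($_, '\s(\d+)\s').Groups[1].Value
            if ($SessionId) { logoff $SessionId }
        }
    }
}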

And that’s it. Some monitoring for situations you do not want to end up in. Remember to always follow security best practices first. Only use these scripts as an early warning system that someone, somewhere has made a mistake. 🙂 As always, Happy PowerShelling!

Documenting with PowerShell: Chapter 2 – Documenting Bitlocker keys

Our RMM system currently does not have support to securely store the BitLocker key inside the RMM system itself. I’ve subscribed to the school of bitlocking everything that passes through my company, including computers that sometimes never get connected to Azure AD or Active Directory to store the key in. We also get users that have lost the USB drive or the piece of paper that the key was stored on.

As we use a documentation system (IT-Glue) to store all our passwords, I figured: why not also store our BitLocker keys there, while tagging the device too, so we can always easily find which device belongs to which key.

First, for the non-IT-Glue users, I’ll generate an HTML file. With some small adaptation you can upload this to Confluence, ITBoost, or any other system you use. After that example, we’ll get onto IT-Glue again. So let’s get started!

Base script

The base script is the part of the script that captures the data we want, in our case the BitLocker key, and outputs it as an HTML file at C:\Temp\Temp.html. You can use this script however you’d like.

$BitlockVolumes = Get-BitLockerVolume
#Some HTML to make the page pretty.
$head = @"
<script>
function myFunction() {
    const filter = document.querySelector('#myInput').value.toUpperCase();
    const trs = document.querySelectorAll('table tr:not(.header)');
    trs.forEach(tr => tr.style.display = [...tr.children].find(td => td.innerHTML.toUpperCase().includes(filter)) ? '' : 'none');
  }</script>
<title>Audit Log Report</title>
<style>
body { background-color:#E5E4E2;
      font-family:Monospace;
      font-size:10pt; }
td, th { border:0px solid black; 
        border-collapse:collapse;
        white-space:pre; }
th { color:white;
    background-color:black; }
table, tr, td, th {
     padding: 2px; 
     margin: 0px;
     white-space:pre; }
tr:nth-child(odd) {background-color: lightgray}
table { width:95%;margin-left:5px; margin-bottom:20px; }
h2 {
font-family:Tahoma;
color:#6D7B8D;
}
.footer 
{ color:green; 
 margin-left:10px; 
 font-family:Tahoma;
 font-size:8pt;
 font-style:italic;
}
#myInput {
  background-image: url('https://www.w3schools.com/css/searchicon.png'); /* Add a search icon to input */
  background-position: 10px 12px; /* Position the search icon */
  background-repeat: no-repeat; /* Do not repeat the icon image */
  width: 50%; /* Full-width */
  font-size: 16px; /* Increase font-size */
  padding: 12px 20px 12px 40px; /* Add some padding */
  border: 1px solid #ddd; /* Add a grey border */
  margin-bottom: 12px; /* Add some space below the input */
}
</style>
"@

foreach($BitlockVolume in $BitlockVolumes) {
$HTMLTop = @"
    <h1>Bitlocker Information</h1>
    <b>Computername: </b>$($BitlockVolume.ComputerName)<br>
    <b>Encryption Method:</b>$($BitlockVolume.EncryptionMethod)<br>
    <b>Volume Type:</b>$($BitlockVolume.VolumeType)<br>
    <b>Volume Status:</b>$($BitlockVolume.VolumeStatus)<br>
"@
$HTML += $BitlockVolume.KeyProtector | convertto-html -Head $head -PreContent "$HTMLTop <br> <h1>Keys for $($ENV:COMPUTERNAME) - $($BitlockVolume.Mountpoint)</h1>"
}
#Make sure the output folder exists before writing the report.
if (!(Test-Path "C:\Temp")) { New-Item -ItemType Directory -Path "C:\Temp" | Out-Null }
$html | Out-File C:\Temp\temp.html

Now, that’s cool. This gives us a good ol’ HTML file. We now have a choice: use the previous script found here and adapt it to upload to IT-Glue as a Flexible Asset, or upload it as an embedded password and tag the correct device. That sounds cooler to me!

This script looks for a configuration in your IT-Glue database based on the computer’s serial number. If it finds a match, it uploads the BitLocker key as an embedded password with the name “COMPUTERNAME – DRIVE:”; for my computer, for example, “DESKTOP-U3984 – C:”. We do this because the hostname might change over time, and you’d want the keys to be uploaded separately.

IT-Glue script

#####################################################################
$APIKEy =  "APIKEYHERE"
$APIEndpoint = "https://api.eu.itglue.com"
$orgID = "ORGIDHERE"
#####################################################################
#Grabbing ITGlue Module and installing,etc
If(Get-Module -ListAvailable -Name "ITGlueAPI") {Import-module ITGlueAPI} Else { install-module ITGlueAPI -Force; import-module ITGlueAPI}
#Settings IT-Glue logon information
Add-ITGlueBaseURI -base_uri $APIEndpoint
Add-ITGlueAPIKey $APIKEy
#This is the data we'll be sending to IT-Glue. 
$BitlockVolumes = Get-BitLockerVolume
#The script uses the following line to find the correct asset by serialnumber, match it, and connect it if found. Don't want it to tag at all? Comment it out by adding #
$TaggedResource = (Get-ITGlueConfigurations -organization_id $orgID -filter_serial_number (get-ciminstance win32_bios).serialnumber).data
foreach($BitlockVolume in $BitlockVolumes) {
$PasswordObjectName = "$($Env:COMPUTERNAME) - $($BitlockVolume.MountPoint)"
$PasswordObject = @{
    type = 'passwords'
    attributes = @{
            name = $PasswordObjectName
            #Select the recovery password protector explicitly; indexing the array directly can fail when the protector order differs.
            password = ($BitlockVolume.KeyProtector | Where-Object { $_.KeyProtectorType -eq 'RecoveryPassword' }).RecoveryPassword
            notes = "Bitlocker key for $($Env:COMPUTERNAME)"

    }
}
if($TaggedResource){ 
    $Passwordobject.attributes.Add("resource_id",$TaggedResource.Id)
    $Passwordobject.attributes.Add("resource_type","Configuration")
}

#Now we'll check if it already exists, if not. We'll create a new one.
$ExistingPasswordAsset = (Get-ITGluePasswords -filter_organization_id $orgID -filter_name $PasswordObjectName).data
#If the Asset does not exist, we edit the body to be in the form of a new asset, if not, we just upload.
if(!$ExistingPasswordAsset){
Write-Host "Creating new Bitlocker Password" -ForegroundColor yellow
$ITGNewPassword = New-ITGluePasswords -organization_id $orgID -data $PasswordObject
} else {
Write-Host "Updating Bitlocker Password" -ForegroundColor Yellow
$ITGNewPassword = Set-ITGluePasswords -id $ExistingPasswordAsset.id -data $PasswordObject
}
}

This script can also be found as an AMP file here. That’s it! As always, happy PowerShelling!

Documenting with PowerShell – New series

Hi All!

Starting this week I’ll be blogging about using PowerShell with your RMM/automation platform and running scripts to collect valuable documentation. I’ll try to keep it as generic as possible and export the documentation to HTML, but I’ll always include a version to upload it to IT-Glue or Confluence. As requested by some, I’ll also include the AMP for N-Central so you can get going with it.

To get started straight away, I’ll share the script that we will be using throughout this series to upload documentation to IT-Glue fully automated. You won’t even need to create flexible assets as the script does this for you.

The Script

For the script you’ll need at least Windows 10 or Server 2012 R2+. You’ll also need your IT-Glue API key and the API URL; generally speaking that URL is “https://api.itglue.com”, or “https://api.eu.itglue.com” for European users. Now let’s get started on our uploading script. 🙂
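Before running the full script, you can sanity-check your API key and endpoint with a few lines. A quick sketch, assuming the community ITGlueAPI module is already installed (the script below installs it for you if it’s missing):

Import-Module ITGlueAPI
Add-ITGlueBaseURI -base_uri "https://api.eu.itglue.com"
Add-ITGlueAPIKey "YOURAPIKEYHERE"
#If the key and URI are correct, this lists your organisations and their IDs.
(Get-ITGlueOrganizations).data | Select-Object id, @{n = 'name'; e = { $_.attributes.name } }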

N-Able users can download the AMP for this script here (right click -> Save as). The script can use Custom Device or Organisation Properties as input, and thus you can enter the Organisation ID in a Custom Organisation Property and automate your documentation process completely.

#####################################################################
$APIKEy =  "YOUR API KEY GOES HERE"
$APIEndpoint = "https://api.eu.itglue.com"
$orgID = "THE ORGANISATIONID YOU WOULD LIKE TO UPDATE GOES HERE"
$FlexAssetName = "ITGlue AutoDoc - Quick example"
$Description = "a quick overview of how easy it is to upload data to IT-Glue"
#####################################################################
#This is the object we'll be sending to IT-Glue. 
$HTMLStuff = @"
<b>Servername</b>: $ENV:COMPUTERNAME <br>
<b>Number of Processors</b>: $ENV:NUMBER_OF_PROCESSORS <br>

This is a little example of how we upload data to IT-Glue.
"@
$FlexAssetBody = 
@{
    type = 'flexible-assets'
    attributes = @{
            name = $FlexAssetName
            traits = @{
                "name" = $ENV:COMPUTERNAME
                "information" = $HTMLStuff
            }
    }
}

#ITGlue upload starts here.
If(Get-Module -ListAvailable -Name "ITGlueAPI") {Import-module ITGlueAPI} Else { install-module ITGlueAPI -Force; import-module ITGlueAPI}
#Settings IT-Glue logon information
Add-ITGlueBaseURI -base_uri $APIEndpoint
Add-ITGlueAPIKey $APIKEy
#Checking if the FlexibleAsset exists. If not, create a new one.
$FilterID = (Get-ITGlueFlexibleAssetTypes -filter_name $FlexAssetName).data
if(!$FilterID){ 
    $NewFlexAssetData = 
    @{
        type = 'flexible-asset-types'
        attributes = @{
                name = $FlexAssetName
                icon = 'sitemap'
                description = $description
        }
        relationships = @{
            "flexible-asset-fields" = @{
                data = @(
                    @{
                        type       = "flexible_asset_fields"
                        attributes = @{
                            order           = 1
                            name            = "name"
                            kind            = "Text"
                            required        = $true
                            "show-in-list"  = $true
                            "use-for-title" = $true
                        }
                    },
                    @{
                        type       = "flexible_asset_fields"
                        attributes = @{
                            order          = 2
                            name           = "information"
                            kind           = "Textbox"
                            required       = $false
                            "show-in-list" = $false
                        }
                    }
                )
                }
            }
              
       }
New-ITGlueFlexibleAssetTypes -Data $NewFlexAssetData 
$FilterID = (Get-ITGlueFlexibleAssetTypes -filter_name $FlexAssetName).data
} 

#Upload data to IT-Glue. We try to match the Server name to current computer name.
$ExistingFlexAsset = (Get-ITGlueFlexibleAssets -filter_flexible_asset_type_id $Filterid.id -filter_organization_id $orgID).data | Where-Object {$_.attributes.name -eq $ENV:COMPUTERNAME}

#If the Asset does not exist, we edit the body to be in the form of a new asset, if not, we just upload.
if(!$ExistingFlexAsset){
$FlexAssetBody.attributes.add('organization-id', $orgID)
$FlexAssetBody.attributes.add('flexible-asset-type-id', $FilterID.id)
Write-Host "Creating new flexible asset"
New-ITGlueFlexibleAssets -data $FlexAssetBody

} else {
Write-Host "Updating Flexible Asset"
Set-ITGlueFlexibleAssets -id $ExistingFlexAsset.id  -data $FlexAssetBody}

The script does multiple things for you that a lot of other scripts tend to skimp on:

  • We check if a Flexible Asset type with our chosen name is already present; if it’s not, we create it.
  • We then check if a Flexible Asset already exists with the same name we’ve entered. If not, we upload a fresh one; if it does, we upload an update for that specific item.

In the following series I’ll teach you how to get the organisation ID from information we gather on the machine you are running your script on. We’ll also be tackling how to get the correct devices tagged on your flexible assets, but of course we’ll start by taking apart the script above and teaching you how to create fully automated network documentation.

As always, Happy Powershelling!

Monitoring with PowerShell Chapter 3: Monitoring network state

Our clients often want us to monitor specific network connections, such as VPN tunnels that need to be online, services that always need to be reachable, or even simply to report on internet connection speeds. To do this, we mostly use our network controller software and default RMM sets. In rare cases, that is not enough, so we’ve developed some monitoring sets for our RMM to help us with this.

To start, we have an RMM monitoring set that uses the Test-Connection cmdlet to ping multiple hosts entered in our RMM system. We define these per client, so their most important resources are checked constantly.

#$IPsToPing is set per client in our RMM system as a comma-separated list of hosts.
$State = "Healthy"
$IPsToPing = $IPsToPing.split(",")
try {
    $ConnectionTest = Test-Connection $IPsToPing -Count 3 -ErrorAction Stop -Verbose
}
catch [System.Management.Automation.ActionPreferenceStopException] {
    #Rethrow so we can catch the specific inner exception type.
    try {
        throw $_.exception
    }
    catch [System.Net.NetworkInformation.PingException] {
        $State = "$($error[0])"
    }
    catch {
        $State = "$($error[0])"
    }
}

$EndResult = $ConnectionTest | Measure-Object -Property ResponseTime -Average -Sum -Maximum -Minimum

$AVGMS = $EndResult.Average.ToString("0")
$MaxMS = $EndResult.Maximum.ToString("0")
$MinMs = $EndResult.Minimum.ToString("0")

$AVGMS shows the average response time in ms over 3 pings, $MaxMS shows the highest ms reached during the 3 pings, and $MinMs, you’ve guessed it, shows the fastest pings in the west 😉
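If your RMM expects a single result string per check, you could combine these values into one output. A minimal sketch; adapt the format to whatever your RMM wants to see:

#Hypothetical: one summary string for the RMM dashboard.
$PingSummary = "State: $State | Avg: $($AVGMS)ms | Max: $($MaxMS)ms | Min: $($MinMs)ms"
Write-Host $PingSummary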

Next to ping monitoring, we also check the health state of the internal network on Windows Servers. When doing takeovers of infrastructure, we see a lot of situations where Network Location Awareness does not function or start correctly.

The Network Location Awareness service is the service that tells your OS which network profile it should use, like "Public", "Private" or "Domain". Not having it running can cause a myriad of issues, such as SSPI issues on SQL servers, firewalling issues, and much more!

Most of our environments use $Domainname.com, or when doing takeovers, $domainname.local. The common denominator is that the network profile name contains a period, so from there we compare it to the actual NetworkCategory. If these do not match, we alert and check whether we need to recover the Network Location Awareness service.

$NetworkProfile = Get-NetConnectionProfile

#Profiles with a period in the name are most likely domain networks and should be DomainAuthenticated.
foreach ($NetProfile in $NetworkProfile | Where-Object { $_.Name -match "\." }) {
    if ($NetProfile.NetworkCategory -ne "DomainAuthenticated") {
        $NLAState += "$($NetProfile.Name) is not set to DomainAuthenticated. Domain authentication might not work properly.`n"
    }
}

if (!$NLAState) { $NLAState = "Healthy" }

And that’s the blog for today! Enjoy, and as always, Happy PowerShelling!

Monitoring with PowerShell Chapter 3: Monitoring Modern Authentication

Modern Authentication is turned on by default for new tenants, but if you have legacy tenants or take over tenants from other MSPs, you might have tenants that do not use Modern Authentication yet.

Monitoring and auto-remediation are key here when using Multi-Factor Authentication. We want the best user experience, so we must have Modern Authentication enabled to make sure users get a nice-looking pop-up in Outlook. We also want to avoid using App Passwords.

PowerShell Monitoring script:

This script only monitors the Modern Auth status, and does not auto-remediate.

$creds = get-credential
Connect-MsolService -Credential $creds
$clients = Get-MsolPartnerContract -All
 
foreach ($client in $clients) { 
 $ClientDomain = Get-MsolDomain -TenantId $client.TenantId | Where-Object {$_.IsInitial -eq $true}
 Write-host "Logging into portal for $($client.Name)"
 $DelegatedOrgURL = "https://ps.outlook.com/powershell-liveid?DelegatedOrg=" + $ClientDomain.Name
 $ExchangeOnlineSession = New-PSSession -ConnectionUri $DelegatedOrgURL -Credential $creds -Authentication Basic -ConfigurationName Microsoft.Exchange -AllowRedirection
 Import-PSSession -Session $ExchangeOnlineSession -AllowClobber -DisableNameChecking
 
 $Oauth = Get-OrganizationConfig 
 
 if($Oauth.OAuth2ClientProfileEnabled -eq $false){ $ModernAuthState += "$($ClientDomain.name) has modern auth disabled"}
 
 Remove-PSSession $ExchangeOnlineSession
}

if(!$ModernAuthState){ $ModernAuthState = "Healthy"}

PowerShell auto-remediation script

$creds = get-credential
Connect-MsolService -Credential $creds 
$clients = Get-MsolPartnerContract -All
 
foreach ($client in $clients) { 
 $ClientDomain = Get-MsolDomain -TenantId $client.TenantId | Where-Object {$_.IsInitial -eq $true}
 Write-host "Logging into portal for $($client.Name)"
 $DelegatedOrgURL = "https://ps.outlook.com/powershell-liveid?DelegatedOrg=" + $ClientDomain.Name
 $ExchangeOnlineSession = New-PSSession -ConnectionUri $DelegatedOrgURL -Credential $creds -Authentication Basic -ConfigurationName Microsoft.Exchange -AllowRedirection
 Import-PSSession -Session $ExchangeOnlineSession -AllowClobber -DisableNameChecking
 
 $Oauth = Get-OrganizationConfig 
 
 if($Oauth.OAuth2ClientProfileEnabled -eq $false){ Set-OrganizationConfig -OAuth2ClientProfileEnabled $true }
 
 Remove-PSSession $ExchangeOnlineSession
}

And that’s it! Hope it helps and as always, Happy PowerShelling.

Monitoring with PowerShell Chapter 3: Monitoring SSL certificates on IIS

For some clients that still have on-site servers running IIS, we have to monitor the SSL status to make sure that things such as the Remote Desktop Gateway or SSTP VPN are always available. To make sure we can replace SSL certificates on time, we need to know when specific certificates expire.

To do this, we will use the Get-IISSite cmdlet. This cmdlet exposes all of the IIS configuration in an easy-to-use format. First, we’ll get all sites with bindings like this:

(Get-IISSite).bindings

Now if you run this, you won’t see that much information. To make sure we get a list of all possible items we have a couple of choices, but the simplest is using a select statement:

(Get-IISSite).bindings | select *

Here we’re telling PowerShell we want to select every property of the objects returned by our command. With this you’ll see we have a lot more information about the bindings, including the protocol used, which we can filter on. Because we only want information about the pages secured with a certificate, we run the same command again, but this time filtering on the protocol "https":

(Get-IISSite).bindings | where-object {$_.Protocol -eq "https"}

Now we have the information that we want! Our next step is easy: we find the bound certificate in the local certificate store and check the expiry date. We like to be warned 14 days before expiry, but you can change the days to anything you’d like 🙂

$Days = (Get-Date).AddDays(14)
$CertsBound = (Get-IISSite).bindings | Where-Object { $_.Protocol -eq "https" }
foreach ($Cert in $CertsBound) {
    $CertFile = Get-ChildItem -Path "CERT:LocalMachine\$($Cert.CertificateStoreName)" | Where-Object -Property ThumbPrint -eq $Cert.RawAttributes.certificateHash
    if ($CertFile.NotAfter -lt $Days) { $CertState += "$($CertFile.FriendlyName) will expire on $($CertFile.NotAfter)`n" }
}

if (!$CertState) { $CertState = "Healthy" }

And that’s it! If the certificate will expire within 14 days, this set will start alerting. Happy PowerShelling!

Monitoring with PowerShell Chapter 3: Monitoring user creation

We all know that bad actors often create accounts for repeat access when they gain access to your network. To make sure that we are aware of these situations, we use PowerShell monitoring.

For security measures we monitor 4 types of user creation:

  • All created domain users,
  • All users added to privileged groups (e.g. Domain Admins),
  • All created local users,
  • All (temporary) users created that contain specific keywords.

Temporary users monitoring:

The temporary users monitoring set is only used on Active Directory users, not local users; as we’re already monitoring all local user creation, this would be redundant.

$ArrayOfNames = @("test", "tmp","skykick","mig", "migwiz","temp","-admin","supervisor")

foreach($name in $ArrayOfNames){
$filter =  'Name -like "*'+ $($name) + '*"'
$Users = Get-ADUser -Filter $filter
if($users -ne $null){
foreach($user in $users){
$TemporaryUser += "$($user.name) has been found, created at $($user.whencreated)`n"
}
}
}
if(!$TemporaryUser){$TemporaryUser = "No Temporary Accounts Found" }

Created Domain Users

For domain users, we alert on all users created in the last 24 hours. After 24 hours, this monitoring set goes back to a healthy state.

$When = ((Get-Date).AddDays(-1)).Date
$GetUsers = Get-ADUser -Filter {whenCreated -ge $When} -Properties whenCreated

foreach($user in $GetUsers){
$userchanges += "$($user.name) has been created at $($user.whencreated) `n"
}

if($UserChanges -eq $Null) { $UserChanges = "No Changes Detected"}

All created local users

Local users do not have a creation date, which makes them a little more difficult to monitor. To solve this, we create a file with the results of Get-LocalUser and update that file every 24 hours. From there on, we compare the file to the most recent output of Get-LocalUser. We are alerted if the comparison finds changed content, meaning a user has been created or removed in the local user database.

$Outputfile = "C:\Windows\temp\LocalUsers.txt"
$Localusers = Get-LocalUser | Select-Object *
#On the very first run, write a baseline file so the comparison below has something to work with.
if (!(Test-Path $Outputfile)) { $Localusers | ConvertTo-Json | Out-File $Outputfile }
$OutPutFileContents = Get-Content $Outputfile | ConvertFrom-Json

#Refresh the baseline file every 24 hours.
If ((Get-Item $Outputfile).LastWriteTime -lt (Get-Date).AddDays(-1)) { $Localusers | ConvertTo-Json | Out-File $Outputfile }

$Compare = Compare-Object -DifferenceObject $OutPutFileContents.name -ReferenceObject $Localusers.name

if(!$Compare) { $Compare = "Healthy"}
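If you want the alert to say what actually changed, the raw Compare-Object output can be translated into a readable message. A small sketch: with our parameter order, SideIndicator "<=" means the account exists now but not in the baseline file, so it was created; "=>" means the reverse:

#Hypothetical: turn the compare result into a readable alert message.
if ($Compare -ne "Healthy") {
    $Compare = $Compare | ForEach-Object {
        if ($_.SideIndicator -eq "<=") { "$($_.InputObject) has been created" } else { "$($_.InputObject) has been removed" }
    }
}
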
Privileged Group Changes

We monitor the privileged groups by using Ashley McGlone’s script to get the changes. You can find the full script here.

Function Get-PrivilegedGroupChanges {
    Param(
        $Server = "localhost",
        $Hour = 24
    )

    #AdminCount -eq 1 selects the protected (privileged) groups.
    $ProtectedGroups = Get-ADGroup -Filter 'AdminCount -eq 1' -Server $Server
    $Members = @()

    ForEach ($Group in $ProtectedGroups) {
        $Members += Get-ADReplicationAttributeMetadata -Server $Server `
            -Object $Group.DistinguishedName -ShowAllLinkedValues |
        Where-Object { $_.IsLinkValue } |
        Select-Object @{name = 'GroupDN'; expression = { $Group.DistinguishedName } }, `
        @{name = 'GroupName'; expression = { $Group.Name } }, *
    }

    $Members |
    Where-Object { $_.LastOriginatingChangeTime -gt (Get-Date).AddHours(-1 * $Hour) }
}

$ListOfChanges = Get-PrivilegedGroupChanges

if ($ListOfChanges -eq $Null) {
    $GroupChanges = "No Changes Detected"
}
else {
    foreach ($item in $ListOfChanges) {
        $GroupChanges += "Group $($item.GroupName) has been changed - $($item.AttributeValue) has been added or removed`n"
    }
}

if (!$GroupChanges) { $GroupChanges = "No Changes Detected" }

Monitoring with PowerShell Chapter 3: Monitoring and remediating Windows Feature Update status

With the advent of Windows 10, all MSPs are faced with a new challenge: how do we manage the different Windows 10 feature versions, and how do we make sure we can automatically upgrade our clients to the latest version of the Windows 10 OS? Microsoft has not made feature updating very straightforward, and sometimes the automatic updates error out.

A lot of RMM systems claim they do version upgrades perfectly; unfortunately I have not seen any RMM that gracefully upgrades machines without too many issues and without user intervention. As an MSP it’s key to automate as much as possible and prevent engineers from having to bother users to perform machine upgrades, so here is PowerShell to the rescue:

Checking the Windows version and comparing it to your standards.

We’ve decided we want all of our users on the same feature level. Because of this, we’ve created a monitoring set that alerts us when Windows 10 computers have a lower version than the one we’ve centrally set. Quite simply, our monitoring set only returns the current release ID, which we pull from the registry:

$ReleaseID = (Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion" -Name releaseid).releaseid

We alert on this monitoring set when it is below an expected value; in our case, anything lower than 1903 generates an alert. Our system then sees that this alert has been generated and performs a remediation script during the next maintenance cycle we’ve agreed with the client.
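Our RMM performs the threshold comparison for us, but if yours only runs scripts, the same check fits in a few lines. A minimal sketch, with 1903 as the hypothetical baseline value:

$MinimumRelease = 1903 #The feature level we expect; set this centrally in your RMM.
$ReleaseID = (Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion" -Name releaseid).releaseid
if ([int]$ReleaseID -lt $MinimumRelease) { $ReleaseState = "Alert: running $ReleaseID, expected $MinimumRelease or higher" } else { $ReleaseState = "Healthy" }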

Remediation

Remediation for having the wrong feature update is easy: run the update and done. The problem is that most RMM systems don’t handle this very cleanly. To resolve this, we’ve created a PowerShell script that grabs the ISO from a network share, copies it to a temporary location, and runs the upgrade from there.

The script also checks if there is enough space on the GPT boot partition, and if not, deletes the unnecessary font files.

If the network share is not available, it will download the ISO from a web server you specify, which is great for a mobile workforce. All you have to change in this script is $ISOPath and $WebServer.

$ISOPath = "\\Servername\Netlogon\Windows10.iso"
$WebServer = "http://YOURWEBSERVER/Windows10.iso"
$OSDiskIndex = gwmi -query "Select * from Win32_DiskPartition WHERE Bootable = True" | Select-Object -ExpandProperty DiskIndex
$PartTypeFull = gwmi -query "Select * from Win32_DiskPartition WHERE Index = 0" | Select-Object -ExpandProperty Type
$PartTypeMid = $PartTypeFull.Substring(0,3)
$PartType = Out-String -InputObject $PartTypeMid
if ($PartType -like "*GPT*")
{
write-output ("System has a GPT partition, clearing EFI fonts....");
cmd.exe /c "mountvol b: /s"
Remove-Item b:\efi\Microsoft\Boot\Fonts\*.* -force
cmd.exe /c "mountvol b: /d"
}
if (Test-Path $ISOPath)
{
write-output ("Mounting ISO from: "+$ISOPath);
}
else
{
write-output ("Warn: ISO not found at: "+$ISOPath);
write-output ("Downloading ISO from webserver....");
if (!(Test-Path "c:\temp")) { mkdir "c:\temp" } #Don't error out if the folder already exists.
$ISOPath = "c:\temp\Windows_10_upgrade.iso";
invoke-webrequest $Webserver -OutFile $ISOPath
}
Mount-DiskImage -ImagePath $ISOPath
$ISODrive = Get-DiskImage -ImagePath $ISOPath | Get-Volume | Select-Object -ExpandProperty DriveLetter
write-output ("Mounted ISO on drive: "+$ISODrive)
$Exe = ":\setup.exe"
$Arguments = "/auto upgrade /quiet /noreboot"
$ExePath = $ISODrive + $Exe
write-output ("Running setup from ISO: " + $ExePath)
Start-Process $ExePath $Arguments

And tada! That’s it. The upgrade will run, but the machine will not reboot until the user performs the reboot. We schedule more tasks during our maintenance cycle, so we’d rather have the RMM system handle the reboot.

Happy PowerShelling!

Monitoring with PowerShell Chapter 3: Hyper-V state

We manage a whole lot of Hyper-V servers for our clients, including large clusters but also smaller single-server solutions. This makes it difficult to make sure that everyone creates VMs as they should, and sometimes mistakes are made by engineers or backup software that cause a checkpoint to be left on a production server.

To make sure we don’t get problems along the way we use the following monitoring sets.

Monitoring checkpoints, snapshots and AVHDs

We monitor each VM for lingering checkpoints and snapshots by running the following script. It checks whether a snapshot is older than 24 hours and creates an alert based on that. If no snapshots are found, it reports that the snapshot state is healthy.

$snapshots = Get-VM | Get-VMSnapshot | Where-Object {$_.CreationTime -lt (Get-Date).AddDays(-1) }
foreach($Snapshot in $snapshots){
$SnapshotState += "A snapshot has been found for VM $($snapshot.vmname). The snapshot has been created at $($snapshot.CreationTime) `n"
}
if(!$SnapshotState) { $snapshotstate = "Healthy"}

We used this monitoring set for a while, but then found that we had some servers that got restored from a backup without a snapshot available, but that did run on an AVHDX. That can cause issues, as the AVHDX can grow without you noticing, since it doesn’t have a complete snapshot available. To also monitor AVHDXs, we’re using the following set:

$VHDs = Get-VM | Get-VMHardDiskDrive
foreach($VHD in $VHDs){
if($vhd.path -match "avhd"){ $AVHD += "$($VHD.VMName) is running on AVHD: $($VHD.path) `n"}
}
if(!$AVHD){ $AVHD = "Healthy" }

Version of Integration services

Monitoring Integration Services on older versions of Hyper-V, or on migrated VMs, is quite important, as the Hyper-V Integration Services also provide driver interfaces to the client VMs. To solve this we use the following monitoring script:

$VMMS = gwmi -namespace root\virtualization\v2 Msvm_VirtualSystemManagementService
 
# 1 == VM friendly name. 123 == Integration State
$RequestedSummaryInformationArray = 1,123
$vmSummaryInformationArray = $VMMS.GetSummaryInformation($null, $RequestedSummaryInformationArray).SummaryInformation
 

$outputArray = @()
 

foreach ($vmSummaryInformation in [array] $vmSummaryInformationArray)
   {  
 
   switch ($vmSummaryInformation.IntegrationServicesVersionState)
      {
       1       {$vmIntegrationServicesVersionState = "Up-to-date"}
       2       {$vmIntegrationServicesVersionState = "Version Mismatch"}
       default {$vmIntegrationServicesVersionState = "Unknown"}
      }

   $vmIntegrationServicesVersion = (get-vm $vmSummaryInformation.ElementName).IntegrationServicesVersion
   if ($vmIntegrationServicesVersion -eq $null) {$vmIntegrationServicesVersion = "Unknown"}
 
   $output = new-object psobject
   $output | add-member noteproperty "VM Name" $vmSummaryInformation.ElementName
   $output | add-member noteproperty "Integration Services Version" $vmIntegrationServicesVersion
   $output | add-member noteproperty "Integration Services State" $vmIntegrationServicesVersionState
 
   # Add the PSObject to the output Array
   $outputArray += $output
 
   }

foreach ($VM in $outputArray){
if ($VM.'Integration Services State' -contains "Version Mismatch"){
$ISState += "$($VM.'VM Name') Integration Services state is $($VM.'Integration Services State')`n"
}}
if(!$ISState){ $ISState = "Healthy" }

NUMA spanning

The next script is made to monitor the NUMA spanning of virtual machines. You might notice a decrease in performance when your NUMA spanning is incorrect; not just in assigned memory, but a general performance degradation of up to 80%. For more information, you can check this link and this link.

$VMs = Get-VM
foreach ($VM in $VMs){
$GetvCPUCount = Get-VM -Name $VM.Name | select Name,NumaAligned,ProcessorCount,NumaNodesCount,NumaSocketCount
$CPU = Get-WmiObject Win32_Processor
$totalCPU = $CPU.numberoflogicalprocessors[0]*$CPU.count
if ($GetvCPUCount.NumaAligned -eq $False){
$vCPUoutput += "NUMA not aligned for; $($VM.Name). vCPU assigned: $($GetvCPUCount.ProcessorCount) of $totalCPU available`n"
}}
if(!$vCPUOutput){ $vCPUOutput = "Healthy" }

Monitoring with PowerShell Chapter 3: Monitoring MFA-Server and Office365 MFA status

We use Azure MFA Server to secure our on-site resources, and Office 365 MFA for our clients. To make sure we don’t have attackers changing the MFA settings, or simply administrators forgetting to set up MFA for clients, we make sure that we alert on both.

The issue with monitoring the MFA server is that it’s a product Microsoft bought later in its life. As such, it does not have a PowerShell module included, and it does not conform to the current Common Engineering Criteria.

To solve this, we load a DLL that exposes the functionality we need and use it to get a list of users. The example below grabs the users starting with "a"; a sketch that covers A through Z follows the script.

Add-Type -Path "C:\Program Files\Multi-Factor Authentication Server\pfsvcclientclr.DLL"
$problem = [PfSvcClientClr.ConstructResult]::miscError;
$main = [PfSvcClientClr.PfSvcClient]::construct([PfSvcClientClr.ConstructTarget]::local, [ref] $problem);
$DLLModule = $main.GetType().GetMethod("getInterface").MakeGenericMethod([PfSvcClientClr.IPfMasterComposite]).Invoke($main, $null);
$users = $DLLModule.find_users_startsWith("a")
foreach($username in $users){
if($DLLModule.get_user_disabledBehavior("$username") -eq "succeed") { $UserOutput += "$($username) is set to succeed authentication without MFA `n" }
}
if(!$UserOutput) { $UserOutput = "Healthy" }
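As noted, the example above only checks users whose names start with "a". A sketch that loops over the whole alphabet, assuming find_users_startsWith behaves as its name suggests:

#Hypothetical: loop over every letter so users A through Z are all checked.
$UserOutput = $null
foreach ($Letter in [char[]](97..122)) {
    foreach ($username in $DLLModule.find_users_startsWith("$Letter")) {
        if ($DLLModule.get_user_disabledBehavior("$username") -eq "succeed") { $UserOutput += "$($username) is set to succeed authentication without MFA `n" }
    }
}
if (!$UserOutput) { $UserOutput = "Healthy" }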

Next we will monitor the multi-factor authentication state on the Office 365 side. For this we will use the MSOL module and the partner credentials you’ve generated using this blog. The script can be used to get information for all clients, or for a single client. To demonstrate, I’ve added both.

All Clients script:

$ApplicationId       = "xxxxxx-xxxx-xxxx-xxxx-xxxxxxxx"
$ApplicationSecret   = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx="
$RefreshToken        = "VeryLongRefreshToken"
$Tenantname          = "YourTenant.onmicrosoft.com"

$PasswordToSecureString = $ApplicationSecret  | ConvertTo-SecureString -asPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential($ApplicationId,$PasswordToSecureString)
$aadGraphToken = New-PartnerAccessToken -RefreshToken $refreshToken -Resource https://graph.windows.net -Credential $credential -TenantId $Tenantname
$graphToken =  New-PartnerAccessToken -RefreshToken $refreshToken -Resource https://graph.microsoft.com -Credential $credential -TenantId $Tenantname

Connect-MsolService -AdGraphAccessToken $aadGraphToken.AccessToken -MsGraphAccessToken $graphToken.AccessToken
$AllUSers = Get-MsolPartnerContract |  Get-MsolUser -all | select DisplayName,UserPrincipalName,@{N="MFA Status"; E={ 
if( $_.StrongAuthenticationRequirements.State -ne $null) {$_.StrongAuthenticationRequirements.State} else { "Disabled"}}}
foreach($User in $allusers | Where-Object{ $_.'MFA Status' -eq "Disabled"}){
$DisabledUsers += "$($User.UserPrincipalName) Has MFA disabled `n"
}
if(!$DisabledUsers){ $DisabledUsers = "Healthy"}

Single client script

$ApplicationId       = "xxxx-xxxxx-xxxxx-xxxxx-xxxxxx"
$ApplicationSecret   = "xxxxxxxxxxxxxxxxxxxxxxxxxxxx="
$RefreshToken        = "VeryLongStringHere"
$Tenantname          = "YourTenant.onmicrosoft.com"
$ClientTenantName    = "TheClientsTenant.onmicrosoft.com"
$PasswordToSecureString = $ApplicationSecret  | ConvertTo-SecureString -asPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential($ApplicationId,$PasswordToSecureString)
$aadGraphToken = New-PartnerAccessToken -RefreshToken $refreshToken -Resource https://graph.windows.net -Credential $credential -TenantId $Tenantname
$graphToken =  New-PartnerAccessToken -RefreshToken $refreshToken -Resource https://graph.microsoft.com -Credential $credential -TenantId $Tenantname

Connect-MsolService -AdGraphAccessToken $aadGraphToken.AccessToken -MsGraphAccessToken $graphToken.AccessToken
$AllUSers = Get-MsolPartnerContract -DomainName $ClientTenantName |  Get-MsolUser -all | select DisplayName,UserPrincipalName,@{N="MFA Status"; E={ 
if( $_.StrongAuthenticationRequirements.State -ne $null) {$_.StrongAuthenticationRequirements.State} else { "Disabled"}}}
foreach($User in $allusers | Where-Object{ $_.'MFA Status' -eq "Disabled"}){
$DisabledUsers += "$($User.UserPrincipalName) Has MFA disabled `n"
}
if(!$DisabledUsers){ $DisabledUsers = "Healthy"}