Category Archives: Series: PowerShell Monitoring

Functional PowerShell for MSPs (Beginner course)

Hi guys,

I’m organising another PowerShell event: a webinar about PowerShell for MSPs. You can join the event here.

The session is mostly oriented towards beginners. We’ll have a public Q&A, and everyone will be able to submit content during the presentation if you have questions about specific scripts or other issues.

The session will not focus on the theoretical parts of PowerShell. This will be a completely functional session in which you’ll pick up the following:

  1. Configuring your IDE (5-10 minutes)
  2. Gathering information you want using PowerShell
  3. Finding the correct module for your job.
  4. Passing information to different systems (RMM, documentation, etc.)
  5. Q&A

I hope you’ll find the time to join me! Happy PowerShelling.

Monitoring with PowerShell: Monitoring Office C2R updates

This blog might be a little shorter than normal; I’ve been a bit swamped with work, so if you have any questions, let me know!

This time we’re going to monitor the update status of Microsoft Office installations that have been installed using C2R. C2R installers do not get updates from the Microsoft Update services, and thus RMM systems often can’t update these. Seeing as C2R is now the standard for all Office installations, we’ll need to start monitoring this separately from Windows Updates. We also want all of our clients to be in the same update channel.

$ReportedVersion = Get-ItemPropertyValue -Path "HKLM:\SOFTWARE\Microsoft\Office\ClickToRun\Configuration" -Name "VersionToReport"
$Channel = Get-ItemPropertyValue -Path "HKLM:\SOFTWARE\Microsoft\Office\ClickToRun\Configuration" -Name "CDNBaseUrl"

If(!$Channel) { 
    $Channel = "Non-C2R version or No Channel selected."
} else {
    switch ($Channel) { 
        "http://officecdn.microsoft.com/pr/492350f6-3a01-4f97-b9c0-c7c6ddf67d60" {$Channel = "Monthly Channel"} 
        "http://officecdn.microsoft.com/pr/64256afe-f5d9-4f86-8936-8840a6a4f5be" {$Channel = "Insider / Monthly Channel (Targeted)"} 
        "http://officecdn.microsoft.com/pr/7ffbc6bf-bc32-4f92-8982-f9dd17fd3114" {$Channel = "Semi-Annual Channel"} 
        "http://officecdn.microsoft.com/pr/b8f9b850-328d-4355-9145-c59439a0c4cf" {$Channel = "Semi-Annual Channel (Targeted)"} 
    }
}
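If you also want to alert on the reported build itself, you can compare it against a minimum version that you maintain yourself, based on Microsoft’s version history page. A minimal hedged sketch; the minimum build below is purely an example value, not an authoritative number:

#Hedged sketch: alert when the reported Office build is below a self-maintained minimum.
#The minimum version here is an example value; update it from Microsoft's release history.
$MinimumVersion = [version]"16.0.11929.20300"
if ([version]$ReportedVersion -lt $MinimumVersion) {
    $VersionState = "Unhealthy - Office is at $ReportedVersion, expected at least $MinimumVersion"
} else {
    $VersionState = "Healthy"
}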

We monitor the versions we want to support by checking this page by Microsoft. We also monitor the channel by alerting on anything that is not “Monthly Channel”. As soon as we see an agent that has the incorrect channel, we fix it by running the following command:

"C:\Program Files\Common Files\Microsoft Shared\ClickToRun\OfficeC2RClient.exe" /changesetting Channel=Monthly

When a client is not up to date, we force the latest update via the following command, which updates the client to the specific version we want:

 "C:\Program Files\Common Files\Microsoft Shared\ClickToRun\OfficeC2RClient.exe"  /update USER displaylevel=False updatetoversion=16.0.7341.2029

If you want to update to whatever update is available for the channel the installation is in, use the following:

 "C:\Program Files\Common Files\Microsoft Shared\ClickToRun\OfficeC2RClient.exe"  /update USER displaylevel=False

And that’s it! You can now use this to update to the latest versions and monitor the minimum required version you need installed. As always, Happy PowerShelling!

Monitoring with PowerShell: UPS Status (APC, Generic, and Dell)

So we’re using several types of UPSes at our clients, and sometimes bump into generic USB UPS systems too. To monitor these, we use a couple of methods that all have benefits and downsides. Let’s get started.

If a generic USB UPS is installed, Windows Server recognizes this as a battery unit. The status is sent to the server by a generic Windows driver called “Microsoft Compliant Control Method Battery”, which is quite the mouthful. The good thing is that with this driver, we can use a couple of small PowerShell commands to find the exact status of the battery.

USB UPS systems

$Battery = Get-CimInstance -ClassName Win32_Battery
switch ($Battery.Availability) {
    1  { $Availability = "Other"; break }
    2  { $Availability = "Not using battery"; break }
    3  { $Availability = "Running or Full Power"; break }
    4  { $Availability = "Warning"; break }
    5  { $Availability = "In Test"; break }
    6  { $Availability = "Not Applicable"; break }
    7  { $Availability = "Power Off"; break }
    8  { $Availability = "Off Line"; break }
    9  { $Availability = "Off Duty"; break }
    10 { $Availability = "Degraded"; break }
    11 { $Availability = "Not Installed"; break }
    12 { $Availability = "Install Error"; break }
    13 { $Availability = "Power Save - Unknown"; break }
    14 { $Availability = "Power Save - Low Power Mode"; break }
    15 { $Availability = "Power Save - Standby"; break }
    16 { $Availability = "Power Cycle"; break }
    17 { $Availability = "Power Save - Warning"; break }
}

$BatteryStatus = $Battery.Status
$BatteryName = "$($Battery.name)"
$Remaining = $Battery.EstimatedChargeRemaining
$EstRunTimeMinutes = $Battery.EstimatedRunTime
$BatAvailability = $Availability

The script gets the battery status out of WMI. It shows whether the machine is running on battery or not, and you can alert on this. We’ve set our systems up to alert when the availability changes to anything but “Not using battery”, and possibly shut down the machine.

Another thing to pay attention to is the battery status: most APC and Dell units connected over USB even tell the OS if the battery is in a warning state or has failed. You should alert on anything but “OK” for the status.
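To wrap both checks into something an RMM can alert on, a hedged sketch could look like this; per the logic above, anything other than “Not using battery” or a status other than “OK” warrants an alert, but tune this to your own environment:

#Hedged sketch: compose a single health string from the values collected above.
$UPSHealth = "Healthy"
if ($BatAvailability -ne "Not using battery") { $UPSHealth = "Unhealthy - $BatteryName availability is $BatAvailability" }
if ($BatteryStatus -ne "OK") { $UPSHealth = "Unhealthy - $BatteryName status is $BatteryStatus" }
$UPSHealth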

We can’t really monitor network UPS systems with this, as they do not get their data into win32_battery, so we’ll have to use a couple of different solutions for those. I’ll try covering this in a future blog. As always, Happy PowerShelling!

Monitoring with PowerShell: Monitoring Security state

After the last couple of blogs, I’ve been asked how I monitor the security state of Windows Servers, so I figured I would create a blog about monitoring some security settings. Of course, there is another disclaimer involved.

Disclaimer: Monitoring these security settings is only a small part of what your entire security monitoring suite should look like. There are a lot more settings and changes you’d need to monitor than just these, but these are items that can be used as an early warning system.

Now that we’ve got that out of the way we can start our monitoring script. We will dissect the script together and have the complete version at the bottom of the page.

The Script

First we will start with monitoring debuggers. This can be done on both workstations and servers. Debuggers are often used to secretly start a different process with elevated credentials, or to have an executable start without the user ever clicking on it.

$debug = Get-ChildItem -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\" -Recurse | Where-Object { $_.Property -eq "Debugger" } | Where-Object { $_.PSChildName -ne "DeviceCensus.exe" }
if (!$debug) {
    $DebuggerFound = "Healthy - No debuggers found"
} else {
    foreach ($key in $debug) {
        $DebuggerFound += "$($key.PSChildName) is debugged `n"
    }
}

Using this, we find exactly which processes have a debugger attached. DeviceCensus.exe always has a debugger attached, so we can ignore this executable. Next, we’ll move on to WDigest monitoring.

WDigest is a protocol that was introduced in the Windows XP era; the idea at the time was that it would be used for web-based authentication. WDigest is enabled by default from Server 2003 until Server 2012 R2. The problem is that for WDigest to run correctly, plain-text passwords get stored in LSASS. To resolve this, Microsoft released an update that makes it possible to disable WDigest.

The problem is that all that is required to enable WDigest again is a registry key change, so we are going to monitor this key with two simple PowerShell commands. You want these items to be set to 0; if not, you should resolve this by setting them to 0, as shown in the sketch after the commands.

$WDigestNegotiate           = get-childitem -path "HKLM:\System\CurrentControlSet\Control\SecurityProviders\WDigest" | Where-Object {$_.Property -eq "Negotiate"}
$WDigestUseLogonCredential  =  get-childitem -path "HKLM:\System\CurrentControlSet\Control\SecurityProviders\WDigest" | Where-Object {$_.Property -eq "UseLogonCredential"}
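These two commands only check whether the properties exist; to check the actual value and remediate, a hedged sketch like this could work (shown for UseLogonCredential, the same approach applies to Negotiate):

#Hedged sketch: read the actual value and force it to 0 when it isn't already.
$WDigestKey = "HKLM:\System\CurrentControlSet\Control\SecurityProviders\WDigest"
$UseLogonCredentialValue = (Get-ItemProperty -Path $WDigestKey -ErrorAction SilentlyContinue).UseLogonCredential
if ($UseLogonCredentialValue -ne 0) {
    Set-ItemProperty -Path $WDigestKey -Name "UseLogonCredential" -Value 0 -Type DWord
}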

Next up is cached credentials monitoring; again, this can be used on both workstations and servers. Cached credentials are used to log on when the domain controller is not available. For servers and workstations, I would advise lowering this to 0. On laptops that is more difficult, as users need to be able to work offline; currently we set it to 3 (2 for system logons, and 1 for the actual user account).

$CachedCredentialsAllowed   = (Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon").CachedLogonsCount

And as a last check, we monitor the LM Compatibility Level. This declares what types of authentication can be used on the device. For more information on NTLM, LM compatibility, and Kerberos, check this blog from Microsoft. We always go to the maximum security level of 5.

$NTLMCompatibilityLevel     = (Get-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa").lmcompatibilitylevel
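To turn these values into a single item your RMM can alert on, a hedged sketch along these lines could work, using the thresholds advised above:

#Hedged sketch: evaluate the collected values against the advice in this post.
$SecurityIssues = @()
if ($WDigestNegotiate -or $WDigestUseLogonCredential) { $SecurityIssues += "WDigest values present - verify they are set to 0" }
if ([int]$CachedCredentialsAllowed -gt 3) { $SecurityIssues += "Cached credentials count is $CachedCredentialsAllowed" }
if ($NTLMCompatibilityLevel -ne 5) { $SecurityIssues += "LM Compatibility Level is $NTLMCompatibilityLevel, expected 5" }
if (!$SecurityIssues) { $SecurityIssues = "Healthy" }
$SecurityIssues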

And that’s it. Monitoring these items makes your environment a little bit more secure and protects you against most forms of Pass-the-Hash. The full script can be found below.

Full Script

$debug = Get-ChildItem -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\" -Recurse | Where-Object { $_.Property -eq "Debugger" } | Where-Object { $_.PSChildName -ne "DeviceCensus.exe" }
if (!$debug) {
    $DebuggerFound = "Healthy - No debuggers found"
} else {
    foreach ($key in $debug) {
        $DebuggerFound += "$($key.PSChildName) is debugged <br>`n"
    }
}
$WDigestNegotiate           = get-childitem -path "HKLM:\System\CurrentControlSet\Control\SecurityProviders\WDigest" | Where-Object {$_.Property -eq "Negotiate"}
$WDigestUseLogonCredential  =  get-childitem -path "HKLM:\System\CurrentControlSet\Control\SecurityProviders\WDigest" | Where-Object {$_.Property -eq "UseLogonCredential"}
$CachedCredentialsAllowed   = (Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon").CachedLogonsCount
$NTLMCompatibilityLevel     = (Get-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa").lmcompatibilitylevel

Monitoring with PowerShell: Monitoring Dell device updates

I’m a big fan of Dell’s Command Update utility. Dell Command Update is a program that makes updating Dell devices super easy: a single utility that you can install on any workstation to update all of its devices. We always deploy Dell Command Update with any machine we hand out to clients.

The next issue is that we need to know if the updates are running well. For this, I’ve made a monitoring set. To make sure that you don’t just monitor without action, we also created a set that automatically remediates.

The monitoring script

The monitoring script downloads a zip file with the Dell Command Update utility. You can create this zip file yourself by installing Dell Command Update and simply zipping the install location. The script then unzips the downloaded file and runs the DCU CLI with the report parameter. I would advise running this set only on an hourly or even daily schedule, using your RMM system of course.

#Replace the Download URL to where you've uploaded the ZIP file yourself. We will only download this file once. 
$DownloadURL = "https://www.cyberdrain.com/wp-content/uploads/2019/09/DCU.zip"
$DownloadLocation = "$($Env:ProgramFiles)\DCU\"
#Script: 
$TestDownloadLocation = Test-Path $DownloadLocation
if (!$TestDownloadLocation) {
    New-Item $DownloadLocation -ItemType Directory -Force
    Invoke-WebRequest -Uri $DownloadURL -OutFile "$($DownloadLocation)\DCU.zip"
    Expand-Archive "$($DownloadLocation)\DCU.zip" -DestinationPath $DownloadLocation -Force
}
#We start DCU with a reporting parameter set. We wait until the report has been generated.
Start-Process "$($DownloadLocation)\DCU-CLI.exe" -ArgumentList "/report `"$($DownloadLocation)\Report.xml`"" -Wait
[xml]$XMLReport = Get-Content "$($DownloadLocation)\Report.xml"

$BIOSUpdates        = ($XMLReport.updates.update | Where-Object {$_.type -eq "BIOS"}).name.Count
$ApplicationUpdates = ($XMLReport.updates.update | Where-Object {$_.type -eq "Application"}).name.Count
$DriverUpdates      = ($XMLReport.updates.update | Where-Object {$_.type -eq "Driver"}).name.Count
$FirmwareUpdates    = ($XMLReport.updates.update | Where-Object {$_.type -eq "Firmware"}).name.Count
$OtherUpdates       = ($XMLReport.updates.update | Where-Object {$_.type -eq "Other"}).name.Count
$PatchUpdates       = ($XMLReport.updates.update | Where-Object {$_.type -eq "Patch"}).name.Count
$UtilityUpdates     = ($XMLReport.updates.update | Where-Object {$_.type -eq "Utility"}).name.Count
$UrgentUpdates      = ($XMLReport.updates.update | Where-Object {$_.Urgency -eq "Urgent"}).name.Count

As this is a number monitor, if something is 0 you are completely up to date. We monitor all types of updates, and we also like knowing if an update is urgent, so that has a separate category.
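If your RMM doesn’t do number monitors, a hedged sketch like this could compose the counts into a single status string instead:

#Hedged sketch: flag urgent updates first, then any other outstanding updates.
$TotalUpdates = $BIOSUpdates + $ApplicationUpdates + $DriverUpdates + $FirmwareUpdates + $OtherUpdates + $PatchUpdates + $UtilityUpdates
if ($UrgentUpdates -gt 0) {
    $DCUHealth = "Unhealthy - $UrgentUpdates urgent update(s) pending"
} elseif ($TotalUpdates -gt 0) {
    $DCUHealth = "Warning - $TotalUpdates update(s) pending"
} else {
    $DCUHealth = "Healthy"
}
$DCUHealth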

Remediation

Remediation can be done quickly. In theory, we would only have to run a single command, which is the following script:

$DownloadLocation = "$($Env:ProgramFiles)\DCU\"
Start-Process "$($DownloadLocation)\DCU-CLI.exe" -Wait

The problem with running this script directly is that by default all updates that DCU finds will be installed, and you cannot set a classification to be excluded. If you would like to exclude specific update types, such as BIOS updates or utility software, you’ll have to do this:

  • Open DCU on your administrator workstation.
  • Click on the cog in the top-right corner.
  • Under “Update filter”, unselect the updates you want to exclude.
  • Under “Export/Import”, export the MySettings.xml file.
  • Add this MySettings.xml file to your self-hosted DCU zip file.

If you’ve done this small list of tasks, then use the following script to install the updates instead:

$DownloadLocation = "$($Env:ProgramFiles)\DCU\"
Start-Process "$($DownloadLocation)\DCU-CLI.exe" -ArgumentList "/import /policy `"$($DownloadLocation)\MySettings.xml`"" -Wait
Start-Process "$($DownloadLocation)\DCU-CLI.exe" -Wait

When executing Thunderbolt or BIOS updates, you will also need to suspend BitLocker. You can use the following script for this. My advice would be to execute the reboot immediately in this case, and only use this if you are certain that the device is in a secure environment during execution.

$DownloadLocation = "$($Env:ProgramFiles)\DCU\"
Start-Process "$($DownloadLocation)\DCU-CLI.exe" -ArgumentList "/import /policy `"$($DownloadLocation)\MySettings.xml`"" -Wait
Suspend-BitLocker -MountPoint 'C:' -RebootCount 1
Start-Process "$($DownloadLocation)\DCU-CLI.exe" -Wait

The AMP file can be found here. As always, Happy PowerShelling!

Monitoring with PowerShell: SMART status via CrystalDiskInfo

In a peer group that I am a member of, we recently had a small discussion about monitoring the SMART status of hard drives. We all agreed that the issue with SMART monitoring is that it is often unreliable when using RMM systems. This is because RMM systems use only the Windows SMART output, which lacks some critical values you should monitor. SMART itself can be a pretty decent early warning system when all supplied values are used.

To resolve this, I’ve created a set that uses CrystalDiskInfo, a tool made by CrystalMark that presents the values to you in a nice overview. We’ve used this in the past to troubleshoot or check disks for predictive failures manually, but figured we should also try the same automated. This piece of PowerShell makes SMART monitoring more agile and reliable, because we alert on more information than just the predicted failure values.

The script relies on Invoke-WebRequest and Expand-Archive; as such, at least Windows 8.1 will be required.

The script

As always, the script is self-explanatory. Please upload the zip file to your own web server, or point the download location to wherever the latest version of CrystalDiskInfo is hosted. The script also creates a folder in the Program Files directory and unzips itself there.

#Replace the Download URL to where you've uploaded the ZIP file yourself. We will only download this file once. 
$DownloadURL = "http://rwthaachen.dl.osdn.jp/crystaldiskinfo/71535/CrystalDiskInfo8_3_0.zip"
$DownloadLocation = "$($Env:ProgramFiles)\CrystalDiskInfo\"
#Script: 
$TestDownloadLocation = Test-Path $DownloadLocation
if(!$TestDownloadLocation){
new-item $DownloadLocation -ItemType Directory -force
Invoke-WebRequest -Uri $DownloadURL -OutFile "$($DownloadLocation)\CrystalDiskInfo.zip"
Expand-Archive "$($DownloadLocation)\CrystalDiskInfo.zip" -DestinationPath $DownloadLocation -Force
}
#We start CrystalDiskInfo with the COPYEXIT parameter. This just collects the SMART information in DiskInfo.txt
Start-Process "$($Env:ProgramFiles)\CrystalDiskInfo\DiskInfo64.exe" -ArgumentList "/CopyExit" -wait
$DiskInfoRaw  = get-content "$($Env:ProgramFiles)\CrystalDiskInfo\DiskInfo.txt" | select-string "-- S.M.A.R.T. --------------------------------------------------------------" -Context 0,16
$diskinfo = $DiskInfoRaw -split "`n" | select -skip 2 | Out-String | convertfrom-csv -Delimiter " " -Header "NOTUSED1","NOTUSED2","ID","RawValue" | Select-Object ID,RawValue

[int64]$CriticalWarnings      = "0x" + ($diskinfo | Where-Object { $_.ID -eq "01" }).RawValue
[int64]$CompositeTemp         = "0x" + ($diskinfo | Where-Object { $_.ID -eq "02" }).RawValue - 273.15
[int64]$AvailableSpare        = "0x" + ($diskinfo | Where-Object { $_.ID -eq "03" }).RawValue
[int64]$ControllerBusyTime    = "0x" + ($diskinfo | Where-Object { $_.ID -eq "0A" }).RawValue
[int64]$PowerCycles           = "0x" + ($diskinfo | Where-Object { $_.ID -eq "0B" }).RawValue
[int64]$PowerOnHours          = "0x" + ($diskinfo | Where-Object { $_.ID -eq "0C" }).RawValue
[int64]$UnsafeShutdowns       = "0x" + ($diskinfo | Where-Object { $_.ID -eq "0D" }).RawValue
[int64]$IntegrityErrors       = "0x" + ($diskinfo | Where-Object { $_.ID -eq "0E" }).RawValue
[int64]$InformationLogEntries = "0x" + ($diskinfo | Where-Object { $_.ID -eq "0F" }).RawValue

The output variables will always contain data, and you can threshold against this data in your RMM system. The thresholds I would use are listed below; a small sketch that applies them follows the list.

  • $CriticalWarnings = 0
  • $CompositeTemp = 55 (this is 55 degrees Celsius)
  • $AvailableSpare = 50 (This means there are 50 reallocation blocks available. This is extremely preventive so you might want to tune it to your personal preference)
  • $ControllerBusyTime = Not monitored, currently only log this for reporting purposes
  • $PowerCycles = Not monitored, currently only log this for reporting purposes
  • $PowerOnHours = 40000 (This is around 5 years of constant runtime.)
  • $UnsafeShutdowns = 365 (I like to know if users are not shutting down their computers normally. This could also point at other software related problems.)
  • $IntegrityErrors = 1 (This is what Windows normally reports on. We want to know as soon as these issues arise)
  • $InformationLogEntries = 1 (How many events have been generated related to disk SMART events)
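A hedged sketch that applies those thresholds to the variables from the script above (tune the numbers to your own preference):

#Hedged sketch: compare the collected SMART values against the suggested thresholds.
$SMARTIssues = @()
if ($CriticalWarnings -gt 0) { $SMARTIssues += "Critical warnings: $CriticalWarnings" }
if ($CompositeTemp -gt 55) { $SMARTIssues += "Composite temperature: $CompositeTemp degrees Celsius" }
if ($AvailableSpare -lt 50) { $SMARTIssues += "Available spare: $AvailableSpare" }
if ($PowerOnHours -gt 40000) { $SMARTIssues += "Power-on hours: $PowerOnHours" }
if ($UnsafeShutdowns -gt 365) { $SMARTIssues += "Unsafe shutdowns: $UnsafeShutdowns" }
if ($IntegrityErrors -ge 1) { $SMARTIssues += "Integrity errors: $IntegrityErrors" }
if ($InformationLogEntries -ge 1) { $SMARTIssues += "Information log entries: $InformationLogEntries" }
if (!$SMARTIssues) { $SMARTIssues = "Healthy" }
$SMARTIssues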

I hope this helps MSPs that are having issues with SMART monitoring in their RMM systems. As always, Happy PowerShelling!

Monitoring with PowerShell: Monitoring log on of specific users.

Hi guys, this’ll be the last blog before I go on holiday, so enjoy it and see you all in two weeks.

This time we’re going to talk about monitoring the logon of specific users. We use named accounts for all our engineers and want to alert if an unnamed account has been logged on in an interactive session. To do this, we’ll use the WMI class Win32_LoggedOnUser.

Just as an extra disclaimer: please remember that in co-managed environments, or environments that belong to others, you won’t always be able to perform all best practices. This script will mostly be used to monitor those messy environments and give you a little bit more sense of extra security. My personal advice would always be to disable accounts that are no longer allowed to log in, use managed service accounts without interactive permissions for services, and delete accounts of ex-employees directly after they leave the company.

The script

Let’s get started on the script. First we’ll have to define which accounts we do not want to have in interactive sessions:

$ForbiddenList = @("Cyberdrain","cyber","migration","administrator","admin","service-QuickBooks*","svc-QB*","ExEmployee1")

So in this list, we name all accounts that are forbidden. You can add any user you would like to this list. We manage a lot of servers, and sometimes after an engineer leaves our company, servers are still logged in with the user that we’ve disabled or deleted. With this script we also monitor those situations and can log the deleted user out of all servers.

The next step is getting a list of the active users and comparing them:

$ActiveUsers = (Get-CimInstance Win32_LoggedOnUser).Antecedent | Select-Object -Unique | Where-Object { $ActiveUser = $_.Name; $ForbiddenList | Where-Object { $ActiveUser -like $_ } }

So here we get all users that are currently logged on to the machine and compare them to our forbidden list, using -like because the list contains wildcards; the $ActiveUsers variable only gets filled if there is a match.

if(!$ActiveUsers){$ActiveUsers = 'false'}

And this last line says that if $ActiveUsers is empty, meaning no logged-on users from our list were found, it will report “false”. The complete script is just 3 lines and can be found below.

$ForbiddenList = @("Cyberdrain","cyber","migration","administrator","admin","service-QuickBooks*","svc-QB*","ExEmployee1")
#The list contains wildcards, so we match with -like instead of -in.
$ActiveUsers = (Get-CimInstance Win32_LoggedOnUser).Antecedent | Select-Object -Unique | Where-Object { $ActiveUser = $_.Name; $ForbiddenList | Where-Object { $ActiveUser -like $_ } }
if(!$ActiveUsers){$ActiveUsers = 'false'}
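If you also want the remediation mentioned above (logging the forbidden user out), a hedged sketch using quser and logoff.exe could look like this; note that quser’s output is column-based and locale-dependent, so test it before deploying:

#Hedged sketch: log off every interactive session belonging to a matched user.
if ($ActiveUsers -ne 'false') {
    foreach ($User in $ActiveUsers) {
        quser 2>$null | Where-Object { $_ -match [regex]::Escape($User.Name) } | ForEach-Object {
            #The session ID is the first standalone number on the quser line.
            if ($_ -match '\s(\d+)\s') { logoff $Matches[1] }
        }
    }
}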

And that’s it. Some monitoring for situations you do not want to end up in. Remember to always follow security best practices first. Only use these scripts as an early warning system that someone, somewhere has made a mistake. 🙂 As always, Happy PowerShelling!

Documenting with PowerShell: Chapter 2 – Documenting Bitlocker keys

Our RMM system currently does not have support for securely storing the Bitlocker key inside the RMM system itself. I’ve subscribed to the school of bitlocking everything that passes through my company, so we also have computers that never get connected to Azure AD or Active Directory to store the key in. We also get users that have lost the USB drive or piece of paper that the key was stored on.

As we use a documentation system (IT-Glue) to store all our passwords, I figured: why not also store our Bitlocker keys there, while tagging the device too, so we can always easily find which device belongs to which key.

First, for the non-IT-Glue users, I’ll generate an HTML file. With some small adaptation you can upload this to Confluence, ITBoost, or any other system you use. After that example, we’ll get onto IT-Glue again. So let’s get started!

Base script

The base script is the part of the script that captures the data we want, in our case the Bitlocker key, and outputs it to an HTML file at C:\Temp\Temp.html. You can use this script however you’d like.

$BitlockVolumes = Get-BitLockerVolume
#Some HTML to make the page pretty.
$head = @"
<script>
function myFunction() {
    const filter = document.querySelector('#myInput').value.toUpperCase();
    const trs = document.querySelectorAll('table tr:not(.header)');
    trs.forEach(tr => tr.style.display = [...tr.children].find(td => td.innerHTML.toUpperCase().includes(filter)) ? '' : 'none');
  }</script>
<title>Audit Log Report</title>
<style>
body { background-color:#E5E4E2;
      font-family:Monospace;
      font-size:10pt; }
td, th { border:0px solid black; 
        border-collapse:collapse;
        white-space:pre; }
th { color:white;
    background-color:black; }
table, tr, td, th {
     padding: 2px; 
     margin: 0px;
     white-space:pre; }
tr:nth-child(odd) {background-color: lightgray}
table { width:95%;margin-left:5px; margin-bottom:20px; }
h2 {
font-family:Tahoma;
color:#6D7B8D;
}
.footer 
{ color:green; 
 margin-left:10px; 
 font-family:Tahoma;
 font-size:8pt;
 font-style:italic;
}
#myInput {
  background-image: url('https://www.w3schools.com/css/searchicon.png'); /* Add a search icon to input */
  background-position: 10px 12px; /* Position the search icon */
  background-repeat: no-repeat; /* Do not repeat the icon image */
  width: 50%; /* Full-width */
  font-size: 16px; /* Increase font-size */
  padding: 12px 20px 12px 40px; /* Add some padding */
  border: 1px solid #ddd; /* Add a grey border */
  margin-bottom: 12px; /* Add some space below the input */
}
</style>
"@

foreach($BitlockVolume in $BitlockVolumes) {
$HTMLTop = @"
    <h1>Bitlocker Information</h1>
    <b>Computername: </b>$($BitlockVolume.ComputerName)<br>
    <b>Encryption Method:</b>$($BitlockVolume.EncryptionMethod)<br>
    <b>Volume Type:</b>$($BitlockVolume.VolumeType)<br>
    <b>Volume Status:</b>$($BitlockVolume.VolumeStatus)<br>
"@
$HTML += $BitlockVolume.KeyProtector | convertto-html -Head $head -PreContent "$HTMLTop <br> <h1>Keys for $($ENV:COMPUTERNAME) - $($BitlockVolume.Mountpoint)</h1>"
}
$html | Out-File C:\Temp\temp.html

Now, that’s cool. This gives us a good ol’ HTML file. We now have a choice: use the previous script found here and adapt it to upload to IT-Glue as a Flexible Asset, or choose to upload it as an embedded password and tag the correct device. That sounds cooler to me!

This script looks for a configuration in your IT-Glue database based on the computer’s serial number. If it finds a match, it uploads the Bitlocker key as an embedded password with the name “COMPUTERNAME - DRIVE:”, for example “DESKTOP-U3984 - C:” for my computer. We do this because the hostname might change over time, and you’d want the keys to be uploaded separately.

IT-Glue script

#####################################################################
$APIKEy =  "APIKEYHERE"
$APIEndpoint = "https://api.eu.itglue.com"
$orgID = "ORGIDHERE"
#####################################################################
#Grabbing ITGlue Module and installing,etc
If(Get-Module -ListAvailable -Name "ITGlueAPI") {Import-module ITGlueAPI} Else { install-module ITGlueAPI -Force; import-module ITGlueAPI}
#Settings IT-Glue logon information
Add-ITGlueBaseURI -base_uri $APIEndpoint
Add-ITGlueAPIKey $APIKEy
#This is the data we'll be sending to IT-Glue. 
$BitlockVolumes = Get-BitLockerVolume
#The script uses the following line to find the correct asset by serialnumber, match it, and connect it if found. Don't want it to tag at all? Comment it out by adding #
$TaggedResource = (Get-ITGlueConfigurations -organization_id $orgID -filter_serial_number (get-ciminstance win32_bios).serialnumber).data
foreach($BitlockVolume in $BitlockVolumes) {
$PasswordObjectName = "$($Env:COMPUTERNAME) - $($BitlockVolume.MountPoint)"
$PasswordObject = @{
    type = 'passwords'
    attributes = @{
            name = $PasswordObjectName
            password = $BitlockVolume.KeyProtector.recoverypassword[1]
            notes = "Bitlocker key for $($Env:COMPUTERNAME)"

    }
}
if($TaggedResource){ 
    $Passwordobject.attributes.Add("resource_id",$TaggedResource.Id)
    $Passwordobject.attributes.Add("resource_type","Configuration")
}

#Now we'll check if it already exists, if not. We'll create a new one.
$ExistingPasswordAsset = (Get-ITGluePasswords -filter_organization_id $orgID -filter_name $PasswordObjectName).data
#If the Asset does not exist, we edit the body to be in the form of a new asset, if not, we just upload.
if(!$ExistingPasswordAsset){
Write-Host "Creating new Bitlocker Password" -ForegroundColor yellow
$ITGNewPassword = New-ITGluePasswords -organization_id $orgID -data $PasswordObject
} else {
Write-Host "Updating Bitlocker Password" -ForegroundColor Yellow
$ITGNewPassword = Set-ITGluePasswords -id $ExistingPasswordAsset.id -data $PasswordObject
}
}

This script can also be found as an AMP file here. That’s it! As always, Happy PowerShelling!

Documenting with PowerShell – New series

Hi All!

Starting this week, I’ll be blogging about using PowerShell with your RMM/automation platform and running scripts to collect valuable documentation. I’ll try to keep it as generic as possible and export the documentation to HTML, but I’ll always include a version to upload it to IT-Glue or Confluence. As requested by some, I’ll also include the AMP for N-Central so you can get going with it.

To get started straight away, I’ll share the script that we will be using throughout this series to upload documentation to IT-Glue fully automated. You won’t even need to create flexible assets as the script does this for you.

The Script

For the script you’ll need at least Windows 10 or Server 2012 R2+. You’ll also need your IT-Glue API key and the URL; generally speaking, that URL is “https://api.itglue.com”, or “https://api.eu.itglue.com” for European users. Now let’s get started on our uploading script. 🙂

N-Able users can download the AMP for this script here (right-click -> Save as). The script can use Custom Device or Organisation Properties as input, and thus you can enter the Organisation ID on each Custom Organisation Property and automate your documentation process completely.

#####################################################################
$APIKEy =  "YOUR API KEY GOES HERE"
$APIEndpoint = "https://api.eu.itglue.com"
$orgID = "THE ORGANISATIONID YOU WOULD LIKE TO UPDATE GOES HERE"
$FlexAssetName = "ITGLue AutoDoc - Quick example"
$Description = "a quick overview of easy it is to upload data to IT-Glue"
#####################################################################
#This is the object we'll be sending to IT-Glue. 
$HTMLStuff = @"
<b>Servername</b>: $ENV:COMPUTERNAME <br>
<b>Number of Processors</b>: $ENV:NUMBER_OF_PROCESSORS <br>

This is a little example of how we upload data to IT-Glue.
"@
$FlexAssetBody = 
@{
    type = 'flexible-assets'
    attributes = @{
            name = $FlexAssetName
            traits = @{
                "name" = $ENV:COMPUTERNAME
                "information" = $HTMLStuff
            }
    }
}

#ITGlue upload starts here.
If(Get-Module -ListAvailable -Name "ITGlueAPI") {Import-module ITGlueAPI} Else { install-module ITGlueAPI -Force; import-module ITGlueAPI}
#Settings IT-Glue logon information
Add-ITGlueBaseURI -base_uri $APIEndpoint
Add-ITGlueAPIKey $APIKEy
#Checking if the FlexibleAsset exists. If not, create a new one.
$FilterID = (Get-ITGlueFlexibleAssetTypes -filter_name $FlexAssetName).data
if(!$FilterID){ 
    $NewFlexAssetData = 
    @{
        type = 'flexible-asset-types'
        attributes = @{
                name = $FlexAssetName
                icon = 'sitemap'
                description = $description
        }
        relationships = @{
            "flexible-asset-fields" = @{
                data = @(
                    @{
                        type       = "flexible_asset_fields"
                        attributes = @{
                            order           = 1
                            name            = "name"
                            kind            = "Text"
                            required        = $true
                            "show-in-list"  = $true
                            "use-for-title" = $true
                        }
                    },
                    @{
                        type       = "flexible_asset_fields"
                        attributes = @{
                            order          = 2
                            name           = "information"
                            kind           = "Textbox"
                            required       = $false
                            "show-in-list" = $false
                        }
                    }
                )
                }
            }
              
       }
New-ITGlueFlexibleAssetTypes -Data $NewFlexAssetData 
$FilterID = (Get-ITGlueFlexibleAssetTypes -filter_name $FlexAssetName).data
} 

#Upload data to IT-Glue. We try to match the Server name to current computer name.
$ExistingFlexAsset = (Get-ITGlueFlexibleAssets -filter_flexible_asset_type_id $Filterid.id -filter_organization_id $orgID).data | Where-Object {$_.attributes.name -eq $ENV:COMPUTERNAME}

#If the Asset does not exist, we edit the body to be in the form of a new asset, if not, we just upload.
if(!$ExistingFlexAsset){
$FlexAssetBody.attributes.add('organization-id', $orgID)
$FlexAssetBody.attributes.add('flexible-asset-type-id', $FilterID.id)
Write-Host "Creating new flexible asset"
New-ITGlueFlexibleAssets -data $FlexAssetBody

} else {
Write-Host "Updating Flexible Asset"
Set-ITGlueFlexibleAssets -id $ExistingFlexAsset.id  -data $FlexAssetBody}

The script does multiple things for you that a lot of other scripts tend to skimp over:

  • We check if a Flexible Asset type with our chosen name is already present; if it’s not, we create it.
  • We then check if a Flexible Asset form already exists with the same name we’ve entered; if not, we upload a fresh one, and if it does, we upload an update for that specific item.

In the following series, I’ll teach you how to get the organisation ID from information we gather on the machine you are running your script on. We’ll also tackle how to get the correct devices tagged on your flexible assets, but of course we’ll start by taking apart the script above and teaching you how to create fully automated network documentation.

As always, Happy Powershelling!

Monitoring with PowerShell Chapter 3: Monitoring network state

Our clients often want us to monitor specific network connections, such as VPN tunnels that need to be online, services that always need to be reachable, or even simply to report on internet connection speeds. To do this, we mostly use our network controller software and default RMM sets. In rare cases, that is not enough, so we’ve developed some monitoring sets for our RMM to help us with this.

To start, we have an RMM monitoring set that uses the Test-Connection cmdlet to ping multiple hosts entered in our RMM system. We define these per client, so their most important resources are checked constantly.

$State = "Healthy"
$IPsToPing = $IPsToPing.split(",")
try{
$ConnectionTest = test-connection $IPsToPing -count 3 -ErrorAction stop -Verbose
}
catch [System.Management.Automation.ActionPreferenceStopException]            
{            
try {            
throw $_.exception            
}                 
catch [System.Net.NetworkInformation.PingException] {            
$state = "$($error[0])"            
}                      
catch {            
$state = "$($error[0])"           
}            
}

$EndResult = $ConnectionTest | measure-Object -Property ResponseTime -Average -sum -Maximum -Minimum

$AVGMS = $EndResult.Average.ToString(00)
$MaxMS = $EndResult.Maximum.ToString(00)
$MinMs = $EndResult.Minimum.ToString(00)

$AVGMS shows the average response time in milliseconds over 3 pings, $MaxMS shows the highest response time reached during 3 pings, and $MinMs, you’ve guessed it, shows the fastest pings in the west 😉

Next to ping monitoring, we also check the health state of the internal network on Windows Servers. When doing takeovers of infrastructure, we see that in a lot of situations Network Location Awareness does not function or start correctly.

The Network Location Awareness service is what tells your OS which network profile it should use, like “Public”, “Private” or “Domain”. Not having it running can cause a myriad of issues, such as SSPI issues on SQL servers, firewalling issues, and much more!

Most of our environments use $Domainname.com, or $domainname.local when doing takeovers. The common denominator is that the network profile name contains a period, and from there we compare the profile to the actual network category. If these do not match, we alert and see if we need to recover the Network Location Awareness service.

$NetworkProfile = Get-NetConnectionProfile

#A profile name containing a period suggests a domain network, which should be DomainAuthenticated.
foreach ($NetProfile in $NetworkProfile | Where-Object { $_.Name -match "\." }) {
    if ($NetProfile.NetworkCategory -ne "DomainAuthenticated") {
        $NLAState = "Network is not set to DomainAuthenticated. Domain authentication might not work properly."
    }
}

if (!$NLAState) { $NLAState = "Healthy" }
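When the category is wrong, restarting the Network Location Awareness service and letting it re-detect the network often recovers it. A hedged remediation sketch; dependent services restart too, so run this during a maintenance window:

#Hedged sketch: restart NLA so it re-evaluates the connection profile.
if ($NLAState -ne "Healthy") {
    Restart-Service -Name NlaSvc -Force
}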

And that’s the blog for today! Enjoy, and as always, Happy PowerShelling!