Category Archives: Windows Server

Deploying Auto-VPN or Always-On VPN with SSTP

Hi All,

Sorry for the break in blogs about monitoring – I’ve been quite busy with work, so I haven’t had the time to create a monitoring blog. I have, however, been able to create a blog about deploying Always-On VPN, or as Microsoft used to call it, “Auto-VPN”. Always-On VPN is going to be the replacement for DirectAccess. DirectAccess was a technology that created two hidden VPN tunnels over SSL and encrypted all the data between your client machine and your local network. The downside was that it required the Windows Enterprise edition.

Warning: Long read 🙂
Continue reading

Monitoring with PowerShell Chapter 2: DHCP Pool status

Hi All,

As I’ve explained in my previous post, the series is taking a bit of a turn here and we’re going to start some blogs about remediation instead of just monitoring. I’ll link back to a previous blog and explain how we automatically react to these issues within our RMM. If you do not have an RMM – don’t worry! We’ll include the monitoring + remediation script so you can combine the scripts any way you’d like.

The second monitoring and remediation task we’re taking on is a full DHCP scope, with auto-remediation when the scope is completely full. We’ll monitor several aspects such as the number of free IPs, the scope status and the lease time; we’ll also try to clean very old leases and BAD_ADDRESS entries from the scope when it reaches a full state. Remember that if you bump into this issue a lot, it’s better to increase the scope size or manage your devices and network 🙂

Continue reading

Blog Series: Monitoring using PowerShell: Part Seven – Monitoring back-ups with PowerShell

Hi All,

My next couple of blogs will be a series where I will be explaining how to use PowerShell for the monitoring of critical infrastructure. I will be releasing a blog every day that touches on how to monitor specific software components, but also network devices from Ubiquiti, third-party APIs and Office 365. I will also show how you can integrate this monitoring into current RMM packages such as Solarwinds N-Central and Solarwinds RMM MSP, and even include the required files to import the monitoring set directly into your system.

Continue reading

Blog Series: Monitoring using PowerShell: Part Six – Monitoring CSV volumes for space and status

Hi All,

My next couple of blogs will be a series where I will be explaining how to use PowerShell for the monitoring of critical infrastructure. I will be releasing a blog every day that touches on how to monitor specific software components, but also network devices from Ubiquiti, third-party APIs and Office 365. I will also show how you can integrate this monitoring into current RMM packages such as Solarwinds N-Central and Solarwinds RMM MSP, and even include the required files to import the monitoring set directly into your system.

Continue reading

Mini-Blog: How to use Azure Functions to run PowerShell scripts

 

Lately I’ve had a couple of scripts that needed to run on a daily basis. In the past I used the Task Scheduler on a server for this, but that meant I had to mess around with expiring passwords and all the other misery related to standard scheduled tasks.

To work around all these known limitations I’ve created a new Azure Function App. Azure Functions allow you to run scripts at timed intervals which you set. To learn more about Azure Functions you can find information here.
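To illustrate the timed-interval part: a timer-triggered PowerShell function is bound to an NCRONTAB schedule in its function.json. The sketch below is my own minimal example (the schedule and the script body are assumptions, not from the original post) and would run once a day at 07:00 UTC:

```powershell
# function.json - the timer binding definition (shown as a comment for reference):
# {
#   "bindings": [
#     {
#       "name": "Timer",
#       "type": "timerTrigger",
#       "direction": "in",
#       "schedule": "0 0 7 * * *"
#     }
#   ]
# }

# run.ps1 - the script body the Function App executes on each trigger
param($Timer)

Write-Output "Daily task started at $(Get-Date -Format o)"
# ... your daily maintenance script goes here ...
```

The six-field NCRONTAB schedule includes seconds, so "0 0 7 * * *" means second 0, minute 0, hour 7, every day.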

Continue reading

Free online PowerShell training

Hi all!

After a couple of weeks of silence I have some great news: I will be giving free online PowerShell courses for beginners and intermediates. Hopefully I’ll be able to assist some of you with questions you have about your own scripts, or scripts you’ve used from my blog.

The first course will be August 7 at 19:00 GMT+1. You can join the course by emailing me at Kelvin [at] Limenetworks.nl or via the following Skype for Business URL: Skype for Business Meeting

In the first course I will be focusing on using PowerShell in your day-to-day operations and automating minor tasks. It’ll be as hands-on as possible and not only focus on theory. There will be room for questions during the one-hour course.

Hope to see you there!

Mini Blog: Checking processor performance

I haven’t been blogging a lot lately, mostly due to renovating at home and having some very large projects in the office. To compensate I’ve decided to write some quick mini blogs to make sure I don’t lose the magic 🙂

With some application monitoring I’ve been setting up, I found it was necessary to take a quick snapshot of how heavily the processor was being used during my scripts; as the application made some SQL queries, it could create spikes in the CPU that I wanted to avoid.

To take a quick snapshot of the current processor status, I’ve used the Get-Counter cmdlet and retrieved the cooked value to query it further:

# Processor queue length (system-wide count of threads waiting for CPU time)
$CPUQueueLength = (Get-Counter -Counter "\System\Processor Queue Length").CounterSamples.CookedValue
# Percentage of time spent in user mode, one sample per core plus the _Total instance
$CPUUserTime = (Get-Counter -Counter "\Processor(*)\% User Time").CounterSamples.CookedValue
# Percentage of time spent in kernel (privileged) mode
$CPUPrivTime = (Get-Counter -Counter "\Processor(*)\% Privileged Time").CounterSamples.CookedValue
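Because the \Processor(*) counters return one sample per core plus a _Total instance, the cooked value is an array rather than a single number. A small follow-up sketch (my own addition, including the hypothetical 80% threshold) that reduces it to one average for alerting:

```powershell
# Collect the per-core user-time samples and average them into a single value
$CPUUserTime = (Get-Counter -Counter "\Processor(*)\% User Time").CounterSamples.CookedValue
$AvgUserTime = ($CPUUserTime | Measure-Object -Average).Average

# Alert when the average crosses a chosen threshold (80% here is an example value)
if ($AvgUserTime -gt 80) {
    Write-Output "High CPU user time: $([math]::Round($AvgUserTime, 2))%"
}
```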

Happy scripting 🙂

Using PowerShell to monitor Backups

We are using an RMM that has integrated BackupExec monitoring. I’ve found that this integrated monitoring was somewhat lacking: it gave us the current job status and that’s about it, meaning there was not really a way to resolve issues pre-emptively.

To resolve this we’ve created the following monitoring PowerShell script and integrated it into our RMM solution. For your convenience we’ll dissect the script so you can re-use it in your own solution or set it up as a scheduled job 🙂

We’ll start by importing the BEMCLI module, which is included in BackupExec 2012 and up.

#Importing the BackupExec Management CLI module
Import-Module BEMCLI

After importing BEMCLI we’ll be able to use the cmdlets that get us the info we want – in this case Get-BEAlert, which gives us the current alerts that have not been acknowledged within BackupExec.

#Getting Alerts
$Alerts = Get-BEAlert

Now that we have the alerts stored in a variable, we will loop through their contents to find what we require:

#Looping through the alerts and writing the message to the console
foreach ($Alert in $Alerts) {
    Write-Host $Alert.Message
}

Now that we’ve seen the messages posted to the console, we’d of course like a better overview.

#Looping through the alerts and setting a variable per category
foreach ($Alert in $Alerts) {
    switch ($Alert.Category) {
        JobWarning { $JobWarning = "TRUE - $($Alert.Message)" }
        JobFailure { $JobFailure = "TRUE - $($Alert.Message)" }
        JobCancellation { $JobCancellation = "TRUE - $($Alert.Message)" }
        CatalogError { $CatalogError = "TRUE - $($Alert.Message)" }
        SoftwareUpdateWarning { $SoftwareUpdateWarning = "TRUE - $($Alert.Message)" }
        SoftwareUpdateError { $SoftwareUpdateError = "TRUE - $($Alert.Message)" }
        DatabaseMaintenanceFailure { $DatabaseMaintenanceFailure = "TRUE - $($Alert.Message)" }
        IdrCopyFailed { $IdrCopyFailed = "TRUE - $($Alert.Message)" }
        BackupJobContainsNoData { $BackupJobContainsNoData = "TRUE - $($Alert.Message)" }
        JobCompletedWithExceptions { $JobCompletedWithExceptions = "TRUE - $($Alert.Message)" }
        JobStart { $JobStart = "TRUE - $($Alert.Message)" }
        ServiceStart { $ServiceStart = "TRUE - $($Alert.Message)" }
        ServiceStop { $ServiceStop = "TRUE - $($Alert.Message)" }
        DeviceError { $DeviceError = "TRUE - $($Alert.Message)" }
        DeviceWarning { $DeviceWarning = "TRUE - $($Alert.Message)" }
        DeviceIntervention { $DeviceIntervention = "TRUE - $($Alert.Message)" }
        MediaError { $MediaError = "TRUE - $($Alert.Message)" }
        MediaWarning { $MediaWarning = "TRUE - $($Alert.Message)" }
        MediaIntervention { $MediaIntervention = "TRUE - $($Alert.Message)" }
        MediaInsert { $MediaInsert = "TRUE - $($Alert.Message)" }
        MediaOverwrite { $MediaOverwrite = "TRUE - $($Alert.Message)" }
        MediaRemove { $MediaRemove = "TRUE - $($Alert.Message)" }
        LibraryInsert { $LibraryInsert = "TRUE - $($Alert.Message)" }
        TapeAlertWarning { $TapeAlertWarning = "TRUE - $($Alert.Message)" }
        TapeAlertError { $TapeAlertError = "TRUE - $($Alert.Message)" }
        IdrFullBackupSuccessWarning { $IdrFullBackupSuccessWarning = "TRUE - $($Alert.Message)" }
        LicenseAndMaintenanceWarning { $LicenseAndMaintenanceWarning = "TRUE - $($Alert.Message)" }
        default { $OtherErr = "TRUE - $($Alert.Message)" }
    }
}

Now if we put this all together, the result would be:

<#
.SYNOPSIS
Gets BackupExec information and reports on active alerts - Only works on BackupExec 2012 and higher.
.DESCRIPTION
Using BEMCLI we retrieve data from BackupExec, including multiple types of alerts, LastRunTime, etc. Currently alerts are generated for the following categories:
JobWarning
JobFailure
JobCancellation
CatalogError
SoftwareUpdateInformation
SoftwareUpdateWarning
SoftwareUpdateError
DatabaseMaintenanceFailure
IdrCopyFailed
IdrFullBackupSuccess
BackupJobContainsNoData
JobCompletedWithExceptions
JobStart
ServiceStart
ServiceStop
DeviceError
DeviceWarning
DeviceIntervention
MediaError
MediaWarning
MediaIntervention
MediaInsert
MediaOverwrite
MediaRemove
LibraryInsert
TapeAlertWarning
TapeAlertError
IdrFullBackupSuccessWarning
LicenseAndMaintenanceWarning
.LINK
http://www.cyberdrain.com
#>
#Importing the BackupExec Management CLI module
Import-Module BEMCLI
#Getting Alerts
$Alerts = Get-BEAlert
#Looping through the alerts and setting a variable per category
foreach ($Alert in $Alerts) {
    switch ($Alert.Category) {
        JobWarning { $JobWarning = "TRUE - $($Alert.Message)" }
        JobFailure { $JobFailure = "TRUE - $($Alert.Message)" }
        JobCancellation { $JobCancellation = "TRUE - $($Alert.Message)" }
        CatalogError { $CatalogError = "TRUE - $($Alert.Message)" }
        SoftwareUpdateWarning { $SoftwareUpdateWarning = "TRUE - $($Alert.Message)" }
        SoftwareUpdateError { $SoftwareUpdateError = "TRUE - $($Alert.Message)" }
        DatabaseMaintenanceFailure { $DatabaseMaintenanceFailure = "TRUE - $($Alert.Message)" }
        IdrCopyFailed { $IdrCopyFailed = "TRUE - $($Alert.Message)" }
        BackupJobContainsNoData { $BackupJobContainsNoData = "TRUE - $($Alert.Message)" }
        JobCompletedWithExceptions { $JobCompletedWithExceptions = "TRUE - $($Alert.Message)" }
        JobStart { $JobStart = "TRUE - $($Alert.Message)" }
        ServiceStart { $ServiceStart = "TRUE - $($Alert.Message)" }
        ServiceStop { $ServiceStop = "TRUE - $($Alert.Message)" }
        DeviceError { $DeviceError = "TRUE - $($Alert.Message)" }
        DeviceWarning { $DeviceWarning = "TRUE - $($Alert.Message)" }
        DeviceIntervention { $DeviceIntervention = "TRUE - $($Alert.Message)" }
        MediaError { $MediaError = "TRUE - $($Alert.Message)" }
        MediaWarning { $MediaWarning = "TRUE - $($Alert.Message)" }
        MediaIntervention { $MediaIntervention = "TRUE - $($Alert.Message)" }
        MediaInsert { $MediaInsert = "TRUE - $($Alert.Message)" }
        MediaOverwrite { $MediaOverwrite = "TRUE - $($Alert.Message)" }
        MediaRemove { $MediaRemove = "TRUE - $($Alert.Message)" }
        LibraryInsert { $LibraryInsert = "TRUE - $($Alert.Message)" }
        TapeAlertWarning { $TapeAlertWarning = "TRUE - $($Alert.Message)" }
        TapeAlertError { $TapeAlertError = "TRUE - $($Alert.Message)" }
        IdrFullBackupSuccessWarning { $IdrFullBackupSuccessWarning = "TRUE - $($Alert.Message)" }
        LicenseAndMaintenanceWarning { $LicenseAndMaintenanceWarning = "TRUE - $($Alert.Message)" }
        default { $OtherErr = "TRUE - $($Alert.Message)" }
    }
}

Now you can schedule this script in your own RMM or send e-mails based on the result 🙂 Happy scripting!

Forcing DFS to prefer the local DC, Without creating subnets and sites

I recently did some temporary work on a legacy environment for a client. This client had recently added some 2012 servers as domain controllers and file servers; the only issue was that there was no way the client could edit Sites and Services to create correct sites and associated subnets, due to a legacy in-house application depending on the default site containing all domain controllers.

The client only had two sites (Belgium and NL), and to resolve this we simply edited the registry on both DC/file servers with the following key:

Name: PreferLogonDC
Type: DWORD (32-bit)
Value: 1
Location: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Dfs
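If you have more than a couple of servers to touch, the same key can be set from PowerShell. This is just a sketch of the registry write described above (run it elevated on each DC/file server):

```powershell
# Set PreferLogonDC so DFS referrals for the default shares prefer the logon DC
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Dfs" `
    -Name "PreferLogonDC" -PropertyType DWord -Value 1 -Force
```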

Just note that this only does the magic for the default DFS shares NETLOGON and SYSVOL – and of course I advised the client to stop using that awful in-house application 😉

Search Service crashing on RDS / Server 2012R2

I’ve recently been experiencing many issues with the Windows Search service on Server 2012 R2 – the Search Index service would crash and cause entire RDP sessions to hang whenever something was typed in the Start menu. On RDS servers this was of course a major issue, and my clients got very upset at not being able to work for multiple hours without calling us for assistance.

This issue was actually rather difficult to troubleshoot as all of the servers experiencing this had all Windows Updates installed and no similar software. We even found the issue on a brand new fresh install without any external software once.

After several days of constant troubleshooting we found the following symptoms to be true in all cases:

  • The Search service itself does not crash – a subprocess of the Search service (FilterHost) crashes.
  • Restarting the Search service resolves the issue temporarily.
  • Deleting and recreating the index does not resolve the issue.
  • The search index grows to extreme sizes (50 GB+).
  • An offline defrag of the search databases often helps in shrinking the size, but does not resolve the issue.

Of course we’ve tried everything to resolve this – We’ve performed clean boots on servers, used procmon to find if another process causes the crash, enabled crash dumps and increased the time-out value on the search service to check if this was not a performance issue. Unfortunately none of these items helped in the slightest. The crashes kept on coming and our client actually considered removing the Windows Servers and moving to a different provider.

Afterwards, reading through the details, I found we may have a classic scenario of the built-in Search feature causing NTDLL thread pool exhaustion. Windows Search uses the NTDLL thread pool to perform natural language processing such as word breaking. It is possible that at the time of the crash there were many queued requests for Search – maybe the server got too many requests and Search couldn’t handle them. Windows Search has a limit of 512 threads to cater to search operations. Seeing as our clients have large operations with many Outlook and full-text file indexes, this could well be the case.

To resolve this, we restricted Search to use only a single thread per query, which should resolve the problem if it is indeed reaching the thread threshold due to many search queries.

This can be done by configuring this Registry value:

Name : CoreCount
Type : DWORD
Value : 1
Location : HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Search
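For rolling this out across an RDS farm, the same value can be set from PowerShell; a sketch of the registry write above (run elevated, reboot afterwards):

```powershell
# Restrict Windows Search to a single thread per query (CoreCount = 1)
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows Search" `
    -Name "CoreCount" -PropertyType DWord -Value 1 -Force
```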

After configuring this value and rebooting, we’ve not seen these issues at all – and we deployed the solution to our entire RDS farm. Of course, some credit should go to my co-worker Maarten van der Horst, as he was relentless and did not let this issue go 🙂