
How to remove “local autodiscover” or SCP when migrating to O365

I work in the SMB market, and we migrate most of our clients to Office 365 from local Exchange servers. Often we still keep a local server for user and computer management and move the users from Small Business Server 2008/2011 to Server 2012R2.

The caveat is that you often also bring along Exchange settings embedded deep in the Active Directory schema. When users open Outlook on their local machines, it first finds the SCP in Active Directory and does not use AutoDiscover. That means users are logged onto the old (decommissioned) server, even though external AutoDiscover outside of the client’s network works perfectly.

There are three ways of resolving this:

 

  1. Remove the SCP using PowerShell
  2. Remove the SCP using ADSI Edit
  3. Disable IIS on the decommissioned server

 

1.) Using PowerShell

Using PowerShell on the old server is probably the easiest method. You can run the following cmdlet in the Exchange Management Shell.
Remember to only run this if all local Exchange servers are being decommissioned.

Get-ClientAccessServer | Set-ClientAccessServer -AutoDiscoverServiceInternalUri $null

If you still have one CAS server in the network, you can run the following command instead:

$ServerName = "OLDSERVER"
Set-ClientAccessServer -identity $ServerName -AutoDiscoverServiceInternalUri $null
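
To verify the change, you can run a quick check in the same Exchange shell – the AutoDiscoverServiceInternalUri should now show up empty:

Get-ClientAccessServer | Format-List Name,AutoDiscoverServiceInternalUri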

2.) Using ADSI Edit
To delete the SCP record from Active Directory you can use ADSI Edit to open the following path. Replace the ServerName, OrganizationName and DomainName parts with the values for your environment:
CN=ServerName,CN=Autodiscover,CN=Protocols,CN=ServerName,CN=Servers,CN=Exchange Administrative Group (FYENEBBA),CN=Administrative Groups,CN=OrganizationName,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=DomainName,DC=Suffix

3.) Disabling IIS

The last method is a “lazy man’s” solution and is advised only if you do not want to make any permanent changes. You can disable the IIS services on the CAS server, preventing logons; Outlook will automatically fall back to AutoDiscover when it finds it cannot log into the local server.

 

Happy migrating 🙂

Managing Office 365/Azure tenants using powershell

One of the fantastic benefits of having Microsoft Partner portal access is the ability to remotely manage your clients/tenants. One of the downsides is that the partner portal is sometimes somewhat slow, or has a convoluted approach to remote management. A great way to resolve this is to use PowerShell to manage the tenants instead. This is just a quick post that could help you understand the commands involved.

First off – you’ll need to download and install the tooling required to connect to the Azure PowerShell objects:

  • Download the Microsoft Online Services Sign-In Assistant for IT Professionals: here
  • Download the Microsoft Azure Active Directory Powershell objects here

After downloading these and performing the required reboots, you’ll be able to connect to Azure/O365 by issuing the following command in your new Azure PowerShell module:

connect-msolservice

After connecting to the MSOL service you now have access to the Microsoft Online Services modules. To manage your partner tenants, we’ll first retrieve the tenant IDs available to us by executing the following command:

Get-MsolPartnerContract  | fl

Of course, this gives us way too much information – we only need the tenant ID. To narrow the results down, you can list all contracts or filter on the client’s domain name:

Get-MsolPartnerContract -All
OR
Get-MsolPartnerContract -DomainName ClientDomain.ORG
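
If you only want to see which tenant ID belongs to which client, piping to Select-Object keeps the output readable (DefaultDomainName and TenantId are the property names as I remember them – verify against your own output):

Get-MsolPartnerContract -All | Select-Object DefaultDomainName,TenantId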

Now, with this tenant ID, we’re able to execute PowerShell commands against the tenant instead of our own environment, simply by adding -TenantId to the normal MSOL commands, e.g.

Get-MsolUser -TenantId tenantID | set-msoluser -StrongPasswordRequired $true
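
For repeated use it can be handy to store the contract first and reference its TenantId property – a small example along the same lines:

$Tenant = Get-MsolPartnerContract -DomainName ClientDomain.ORG
Get-MsolUser -TenantId $Tenant.TenantId | Set-MsolUser -StrongPasswordRequired $true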

Happy PowerShelling!

Using Azure MFA on an onsite RDS 2012R2

Azure MFA is a fantastic product – it’s easy to set up and maintain, and not very costly to purchase (for pricing, click here). The great thing about Azure MFA is that it becomes very easy to secure your local directory, but also your Remote Desktop connections on your RDS 2008/2012 farms. There is just one downside: out of the box, Remote Desktop (Terminal Services) security does not work on Server 2012R2. I’m not sure why Microsoft decided not to support 2012R2 RDP access; I actually have a ticket outstanding with the Azure MFA team.

Of course there is a solution: instead of securing direct RDP access, you can secure the Remote Desktop Gateway and have your users connect through it. This might sound like a large change, but I always advise my clients to use RD Gateway anyway – mostly because it is accessible from almost all locations thanks to running on port 443, and the SSL security is a nice added bonus.

To add MFA to RD Gateway we need to perform the following prerequisites:

  1. Deploy a standard RD Gateway, with NPS. This can be done on a separate server, or on the RDS server if you have a small farm.
  2. Deploy Microsoft Azure MFA on a different server. Please note: MFA and NPS cannot run on the same server because the NPS and MFA RADIUS components use the same ports. For a good tutorial on how to install Azure MFA see the following link: link
  3. Open port 443 to your RD Gateway server.
  4. Choose a shared secret and note it – we’ll use the example “ThisIsNotASecret”.

After performing the first three steps, it’s time to set up RD Gateway, NPS and the Azure MFA server.

RD Gateway setup:

  • Open the RD Gateway console, right-click the server name and choose the tab “RD CAP Store”.
  • Turn off the “Request clients to send a statement of health” check box if you have clients that are not NAP capable.
  • Select “Central server running NPS” and remove the current server if there is any. Now enter the hostname of the MFA server and our selected shared secret “ThisIsNotASecret”.
  • Close the console – we’re done on this side. 🙂

NPS Setup:

  • Open the NPS console and go to RADIUS Clients, right-click and select New (a PowerShell alternative for this step is sketched after this list).
  • Enter a friendly name – e.g. AzureMFA – and note this.
  • Enter the IP of the MFA server and our selected shared secret “ThisIsNotASecret”.
  • Click OK and move to “Remote RADIUS Servers” in the left-hand menu.
  • Double-click the default TS GATEWAY SERVER GROUP and click Edit, select the Azure MFA server from the list and click Load Balancing.
    • Change the priority to 1 and the weight to 50.
    • Change the number of seconds before a connection is dropped to 45 seconds (could be less, but I select 45 seconds to keep uniformity among servers).
    • Change the number of seconds before the server is considered unavailable to 45 seconds (could be less, but I select 45 seconds to keep uniformity among servers).
    • Click OK and close this window. Move to Connection Request Policies.
  • You should see the default connection policy here – disable or delete this, as we will create our own policies.
  • Right-click the policies and select “New”. Name this policy “Receive MFA Requests”. The settings for this policy are:
    • NAS Port type: Virtual(VPN)
    • Client Friendly Name: AzureMFA
    • Authentication Provider: Local Computer
    • Override Authentication: Disabled
  • Create another policy and name this “Send MFA requests”. The settings for this policy are:
    • NAS Port type: Virtual (VPN)
    • Accounting provider name: TS GATEWAY SERVERS GROUP
    • Authentication Provider name: TS GATEWAY SERVER GROUP
    • Authentication provider: Forwarding request
  • And that concludes the NPS setup. Almost there! 🙂
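
As a side note, the RADIUS client from the first step can also be created with the NPS PowerShell module instead of the console – a minimal sketch, assuming the module is available on your NPS server and using a placeholder IP for the MFA server:

#Create the RADIUS client that points at the Azure MFA server (replace the placeholder IP with your own)
New-NpsRadiusClient -Name "AzureMFA" -Address "192.168.1.10" -SharedSecret "ThisIsNotASecret"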

Azure MFA Setup:

The last steps are fairly straight forward:

  • Open the MFA administrator console and select the RADIUS option in the left-hand menu.
  • Enable RADIUS and on the Clients tab add the IP of the NPS server.
  • Enter the shared secret “ThisIsNotASecret”.
  • Now select the “Targets” tab and enter the IP of the RDS server.
  • Go to the left-hand menu and select Users. Enable a user for testing with SMS messages or the app.
  • Open the Windows Firewall for inbound RADIUS traffic.
  • Test! 🙂 If you followed the manual to the letter you have now secured your RD Gateway with MFA.

 

Happy MFA’ing! 🙂

Forcing DFS to prefer the local DC, Without creating subnets and sites

I recently did some temporary work on a legacy environment for a client. This client had recently added some 2012 servers as domain controllers and file servers. The only issue was that there was no way the client could edit Sites and Services to create correct sites and associated subnets, due to a legacy in-house application depending on the default site containing all domain controllers.

The client only had two sites (Belgium and NL), and to resolve this we simply edited the registry on both DC/file servers with the following key:

Name:PreferLogonDC
Type:dword(32 bit)
Value:1
Location:HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Dfs
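
If you prefer to script it, the same value can be set with a standard PowerShell one-liner on each DC/file server:

New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dfs' -Name 'PreferLogonDC' -PropertyType DWord -Value 1 -Force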

Just note that this only does the magic for the default DFS shares, NETLOGON and SYSVOL – and of course I advised the client to stop using that awful in-house application 😉

Using powershell to backup SSH devices (And more!)

I’ve recently been moving to a new RMM product that offers better automation policies than the one I used before. The new RMM product also has the ability to run scripts with input and output from the RMM product itself – e.g. if a network contains a Juniper router, you can run automation policies based on the devices in that network.

Of course this opens up great opportunities for automating device-based backups of routers, switches, etc. I’ve created a PowerShell script to automate this. The script currently supports Draytek, Juniper SRX series, Juniper SSG series, and SonicWall devices :). The script is pretty much self-explanatory thanks to the comments.

#######################################
#script created by TeGek – http://www.cyberdrain.com
#Router / Juniper SRX backup script version 0.1
#Runs a backup of the juniper config, drops the file in C:\RouterBackups and uploads this file to a remote FTP site.
#Parameters: RouterIP, DeviceType, Username, Password
#######################################
Param(
[string]$RouterIP,
[string]$DeviceType,
[string]$Username,
[string]$Password
)
#######################################
#set variables and create a secure string for username/password of the router.
$date = Get-Date -Format "dd-M-yyyy" #We get the date in the European format.
$clientID = $env:USERDNSDOMAIN #We'll use the user DNS domain to define the client name; it assists in FTP uploads and clarifies who this config file belongs to.
$secpasswd = ConvertTo-SecureString $Password -AsPlainText -Force
$mycreds = New-Object System.Management.Automation.PSCredential ($Username, $secpasswd)
$FTPSERVER = "1.1.1.1"
#######################################
#Set the correct command based on the device type. You can add a device by copy-pasting a statement and entering the correct SSH command. A switch statement probably would have been nicer here, but this is quicker to copy.
if($DeviceType -eq "Juniperssg"){ $command = "get config"}
if($DeviceType -eq "Junipersrx"){ $command = "cli show config"}
if($DeviceType -eq "draytek"){ $command = "sys config"}
if($DeviceType -eq "sonicwall"){ $command = "export current-config cli"}
#######################################
#Next, we try to create a new directory on C:\ to store the temporary files. You can also choose to keep this folder intact for local router backups 🙂
try{
New-Item -Path C:\RouterBackups -ItemType Directory -Force
}catch{
write-host "Router Backups Directory exists, Moving on"
}
#######################################
#We download darkoperator/Carlos Perez's Posh-SSH client. For more information go to: http://www.darkoperator.com/.
Try{
iex (New-Object Net.WebClient).DownloadString("https://gist.github.com/darkoperator/6152630/raw/c67de4f7cd780ba367cccbc2593f38d18ce6df89/instposhsshdev")
Import-Module "$env:homepath\documents\windowspowershell\modules\posh-ssh"
} catch {
write-host "Download of the SSH client failed! Backup Failed"
}
#######################################
#After downloading, we connect to the device, run the backup command for the selected device type and write the output to the backup folder.
try{
New-SSHSession -ComputerName $RouterIP -Credential $mycreds
(Invoke-SSHCommand -Index 0 -Command $command).Output | Out-File C:\RouterBackups\$clientid-$date.txt
Get-SSHSession | Remove-SSHSession
}catch{
write-host "Could not connect to SSH, Backup Failed"
}
#######################################
#And here we try to upload the file – if it's not required just delete this section 🙂
Try{
$file = "C:\RouterBackups\$clientid-$date.txt"
$ftp = "ftp://$FTPSERVER/$clientID-$date.txt"
"ftp url: $ftp"
$webclient = New-Object System.Net.WebClient
$uri = New-Object System.Uri($ftp)
"Uploading $File..."
$webclient.UploadFile($Uri, $File)
}catch{
write-host "Uploading file to FTP Failed!"
}
}
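
To test it by hand (or to see how the RMM should call it), the invocation looks like this – a hypothetical example, assuming you saved the script as Backup-RouterConfig.ps1:

.\Backup-RouterConfig.ps1 -RouterIP 192.168.1.1 -DeviceType Junipersrx -Username admin -Password 'MyRouterPassword'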

Search Service crashing on RDS / Server 2012R2

I’ve recently been experiencing many issues with the Windows Search service on Server 2012R2 – the search index service would crash and cause entire RDP sessions to hang whenever something was typed into the Start menu. On RDS servers this was a major issue, and my clients got very upset because they could not work for hours at a time without calling us for assistance.

This issue was actually rather difficult to troubleshoot, as all of the servers experiencing it were fully patched with Windows Updates and had no software in common. We even found the issue once on a brand-new fresh install without any external software.

After several days of constant troubleshooting we found the following symptoms to be true in all cases:

  • The Search service itself does not crash – a subprocess of the Search service (the filter host, SearchFilterHost.exe) crashes.
  • Restarting the Search service resolves the issue temporarily.
  • Deleting and recreating the index does not resolve the issue.
  • The search index grows to extreme sizes (50GB+).
  • An offline defrag of the search databases often helps in shrinking the size, but does not resolve the issue.

Of course we tried everything to resolve this – we performed clean boots on servers, used Procmon to see if another process caused the crash, enabled crash dumps and increased the time-out value on the search service to rule out a performance issue. Unfortunately none of these helped in the slightest. The crashes kept on coming and our client actually considered removing the Windows servers and moving to a different provider.

Afterwards I read through the crash details and found we may have a classic scenario of the built-in search feature causing NTDLL thread pool exhaustion. Windows Search uses the NTDLL thread pool for natural language processing such as word breaking. It is possible that at the time of the crash there were many queued requests for Search – maybe the server received too many requests and Search couldn’t handle it. Windows Search has a limit of 512 threads to cater for search operations. Seeing as our clients run large operations with many Outlook mailboxes and full-text indexes for files, this could well be the case.

To resolve this, we restricted Search to use only a single thread per query, which should fix the problem if it is indeed hitting the thread threshold due to many simultaneous search queries.

This can be done by configuring this Registry value:

Name : CoreCount
Type : DWORD
Value : 1
Location : HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Search
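
If you’d rather script it, the same value can be set with a quick PowerShell snippet (a reboot is still needed afterwards):

New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows Search' -Name 'CoreCount' -PropertyType DWord -Value 1 -Force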

After configuring this value and rebooting, we have not seen these issues at all, and we deployed the solution to our entire RDS farm. Of course some credit should go to my co-worker Maarten van der Horst, as he was relentless and did not let this issue go. 🙂

Stop AADSync logs from clogging up your servers disk space

I’ve been rolling out a lot of large AADSync deployments recently – I love how AADSync gives an SSO experience to the SMB market without having to deploy ADFS. But as always, these deployments in the SMB market have some downsides: the default configuration for AADSync/DirSync is that it logs everything using tracing files and the ForeFront (FIM) SQL database.

On smaller deployments, or deployments where disk space is expensive, you might want to limit the size of these logs. Of course my advice, as always, is to only make these changes when your DirSync/AADSync environment is running well and not experiencing any issues whatsoever.

To prevent the SQL database from growing to absurd sizes:

param([int]$DaysToKeep=2)
$DirSync = Get-WmiObject -Class "MIIS_SERVER" -Namespace "root\MicrosoftIdentityIntegrationServer"
$DirSync.ClearRuns([DateTime]::Today.AddDays(-$DaysToKeep)) | Format-Table ReturnValue

Save this script as .\ClearDirSyncDB.ps1 and run it. Of course you can also set it up as a scheduled task to automate it, as shown below.
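
A minimal sketch of such a scheduled task, assuming a Server 2012 or later sync server and that you saved the script to C:\Scripts:

$Action = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-File C:\Scripts\ClearDirSyncDB.ps1 -DaysToKeep 2'
$Trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName 'ClearDirSyncDB' -Action $Action -Trigger $Trigger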

To prevent the TRACING folder from filling with logs:

  • Stop the FIMSynchronizationService service via services.msc.
  • Run Notepad or your favorite text editor as administrator.
  • Open the miiserver.exe.config file at X:\Program Files\Microsoft Online Directory Sync\SYNCBUS\Synchronization Service\Bin\miiserver.exe.config (where X is the drive you installed AADSync on).
  • Paste the following after the closing </appSettings> tag:

<system.diagnostics>
  <trace autoflush="true">
  <listeners>
  <add name="DirectorySynchronizationTraceFile" type="System.Diagnostics.TextWriterTraceListener" initializeData="C:\Temp\DirectorySynchronizationTrace.log" traceOutputOptions="DateTime" />
  </listeners>
  </trace>
  <sources>
  <source name="passwordSync" switchName="sourceSwitch" switchType="System.Diagnostics.SourceSwitch">
  <listeners>
  <add name="console" type="System.Diagnostics.ConsoleTraceListener">
  <filter type="System.Diagnostics.EventTypeFilter" initializeData="Information" />
  </add>
  <add name="sharedTextLogger" type="System.Diagnostics.TextWriterTraceListener" initializeData="PasswordSync.log">
  <filter type="System.Diagnostics.EventTypeFilter" initializeData="Verbose" />
  </add>
  <remove name="Default" />
  </listeners>
  </source>
  </sources>
  <switches>
  <add name="sourceSwitch" value="Verbose" />
  </switches></system.diagnostics>

  • Restart the FIMSynchronizationService service via services.msc.
  • Reactivate your synchronization by running the configuration wizard and entering your domain details.

And then you’re all done! DirSync should no longer eat up your precious disk space 🙂

Creating hosted mailboxes while on premises mailboxes still exist.

One of the major drawbacks of using AADSync or DirSync is that remote mailboxes are not created when a local mailbox exists. This can prove difficult when migrating to Office 365, because the mailboxes cannot exist simultaneously.

For instance, we use third-party migration tools that require the remote mailbox to exist. In a normal situation, assigning an Office 365 Exchange license to a user automatically creates a mailbox. When this user still has an on-premises mailbox, this does not happen due to the msExchMailboxGuid property that has been set in Active Directory. Removing this property also marks the mailbox for deletion in Exchange, as it is no longer able to find the correct GUID for the user.

To resolve this we have two options, both of which require some work – it just depends on which you personally prefer.

Option 1: Disable DirSync or AADSync and create the mailboxes online.

 

The first option is quite simple but requires manual intervention. This option is best if the following statements are both true:

  1. You are not working in the cloud environment yet and the users have never logged in.
  2. You are phasing out the local directory you are syncing from or are planning to remove Exchange locally on short notice.

If these are not true, I’d advise you to use option 2, which gives you a more flexible solution.

First you remove licenses from all the users you have synced with Active Directory. Then disable Active Directory Sync. Your synchronized users will be converted into Cloud objects and you will be able to reassign licenses. When reassigning licenses an online mailbox will be created.

After 24 hours you can re-enable Active Directory Sync; the users’ Office 365 mailboxes will now exist and the msExchMailboxGuid will be ignored.

Option 2: Change the DirSync or AADSync settings to prevent msExchMailboxGuid replication.

Another option is to prevent the synchronization of the msExchMailboxGuid. I often prefer this, as I do not need to make any modifications to the already set up cloud environment and it gives me more freedom when creating new users locally. I often incorporate auto-provisioning scripts for mailboxes and other specialized items, so this way I can still use those local scripts without worrying that the hosted mailbox will not be created.

As always, please be careful when performing these tasks. Do so at your own risk.

  • Launch X:\Program Files\Windows Azure Active Directory Sync\SYNCBUS\Synchronization Service\UIShell\miisclient.exe (where X: is the drive you installed the sync tool on).
  • Click on “Management Agents”.
  • Right-click on “Active Directory Connector” and click on “Configure Attribute Flow”.
  • Open the ObjectType: User property.
  • Find the mapping to msExchMailboxGuid.
  • Click the Delete button to remove the mapping to the msExchMailboxGuid.
  • Click OK to save the changes.
  • Close the miisclient.

We have now prevented new objects from syncing the msExchMailboxGuid property to the cloud. The problem that remains is that existing objects have already replicated this property to the Office 365 Azure Active Directory. To make sure the next import job clears this property on the hosted side, we need to perform a few SQL tasks. First we use SQL Management Studio to connect to the ServerName\MSONLINE SQL instance. When connected, we run the following query:

UPDATE mms_metaverse SET msExchMailboxGuid = NULL WHERE (msExchMailboxGuid IS NOT NULL)

When this query runs, the msExchMailboxGuid value in the SQL database is set to NULL; your local Active Directory object is not changed. Now you can force a full sync using the miisclient or the DirSync PowerShell module, as sketched below.
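
A rough sketch of forcing that full sync from PowerShell – this assumes the DirSync PowerShell console (DirSyncConfigShell.psc1) and its Start-OnlineCoexistenceSync cmdlet are present in your version; double-check the cmdlet name on your sync server before relying on it:

Start-OnlineCoexistenceSync -FullSync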

After a successful sync we are able to reassign a license and the hosted mailbox will be created! That way the local and hosted mailboxes coexist and you are able to replicate the data using third-party tools.

Have fun migrating!

 

 

Migrating permissions over domains

As I’ve stated in previous blogs, I work at a Managed Services Provider, which makes me lucky enough to work in environments ranging from very small and simple to large-scale operations that require every step to be a planned one. This means we need a fairly easy policy to manage and migrate permissions across a large range of Microsoft products, from Server 2003 to Server 2012. So when planning a policy we need to account for all the small discrepancies that can occur between clients, domains, etc.

I will not share our exact procedure and policy for maintaining permissions, but I would like to share our way of moving permissions across servers, forests and entire domains, namely SubInACL:

SubInACL is a command-line tool that enables administrators to obtain security information about files, registry keys, and services, and transfer this information from user to user, from local or global group to group, and from domain to domain

One of the great features of this tool is the plain-text export functionality. Most tools, such as PowerShell’s Get-Acl and various external tools, rely on the system to resolve SIDs, and when making an export they export only the SIDs instead of the plain-text account names. The issue with this is that when you migrate data to a different server that has no relation to the previous server (e.g. a new domain), you cannot restore the permissions and are forced to recreate your permission structure by hand.

SubInACL, however, does not rely on the system to resolve the SIDs at restore time: it resolves them during the export-to-text step. To export permissions to a text file you simply install SubInACL and run the following command:

Subinacl /noverbose /output=D:\Permissions.txt /subdirectories “D:\FolderWeWantToBackup”

With this command we tell SubInACL to create a backup of all the permissions of D:\FolderWeWantToBackup, including subdirectories and files, to D:\Permissions.txt. The /noverbose switch is actually required to create a file that can be used as a play file – without the switch it would create a more readable format, but unfortunately that file would fail at import with the message “Invalid Function” or “Invalid Device”.

To restore the file we have multiple options. The first step will always be to copy the entire file structure to the new server; for this you can use any tool you prefer – I use SyncToy or Robocopy. After this you have to check what type of restore you need to perform. These are some examples you could use:

Replace single account name:

Subinacl /playfile D:\Permissions.txt /replace=[DomainName\]OldAccount=[DomainName\]New_Account

Replace old domain name:

Subinacl /playfile D:\Permissions.txt /replacestringonoutput=OldDomainName=NewDomainName

Replace SIDs with new SIDs(See SubInACL Documentation for more information):

Subinacl /playfile D:\Permissions.txt /changedomain=OldDomainName=NewDomainName[=MappingFile[=Both]]
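
Put together, moving one folder to a new domain could look something like the sketch below – this assumes the data ends up on the same drive letter and path on the new server (so the paths in the play file still match) and that Permissions.txt is copied over and replayed on the new server:

robocopy D:\FolderWeWantToBackup \\NEWSERVER\D$\FolderWeWantToBackup /E /R:1 /W:1
Subinacl /playfile D:\Permissions.txt /replacestringonoutput=OldDomainName=NewDomainName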

So with these restore commands you should be able to create easy-to-use scripts to move across domains, as long as you retain the old user names and group names, or use the replace-string-on-output functionality. Happy migrating 🙂

Using BEMCLI to automate restores

We use Symantec Backup Exec for backups at some of our larger clients. Of course we have monitoring set up on these backup sets and jobs: we monitor when they fail, and we have a verification phase to check that the content in the backup is correct and matches the data from the snapshot of the server. But of course this is not enough.

If you need absolute certainty that a backup is correct, there is only ONE true way of testing it: restore the backup to a different location and check that the restore jobs are able to run. For this we use BEMCLI, a set of PowerShell cmdlets that you can import on the Backup Exec machine itself to send PowerShell-based commands to the server.

Let’s build a very simple restore test. First we import the BEMCLI module to make sure we get all the BE commands; a complete list of commands and a help file can be found here.

import-module BEMCLI
Submit-BEFileSystemRestoreJob -FileSystemSelection C:\RestoreFolder -AgentServer Testserver.testdomain.local -NotificationRecipientList me@mycompany.com -RedirectToPath \\BACKUPEXECSERVER\Restore

Lets dissect the command a little:

  • Submit-BEFileSystemRestoreJob simply means we want to start a restore job; as we do not specify a date filter, the restore runs from the latest backup set.
  • -FileSystemSelection C:\RestoreFolder is the folder we would like to restore data from.
  • -AgentServer is the server where C:\RestoreFolder is located.
  • -NotificationRecipientList me@mycompany.com – I think this speaks for itself: which contact should be notified about this job. 🙂 Please note that this needs to be an existing recipient within the Backup Exec notification options!
  • -RedirectToPath \\BACKUPEXECSERVER\Restore – and of course, we want to put the files in a different location and not overwrite our existing copy. To do this I always create a “Restore” share to dump the files in.

Of course you can schedule this script using Task Scheduler and be done with it – you always restore C:\RestoreFolder…But that doesn’t sound like a good test, does it? Always restoring the same file in a backup…Of course not! 😉 So that’s why we’ll now select a random file (not folder!) to restore.

import-module BEMCLI
$file = Get-ChildItem -Recurse -File \\Testserver.domain.local\c$\ | Get-Random -Count 1
Submit-BEFileSystemRestoreJob -FileSystemSelection $file.fullname -AgentServer Testserver.testdomain.local -NotificationRecipientList me@mycompany.com -RedirectToPath \\BACKUPEXECSERVER\Restore

Of course the command is mostly the same as before, but the file selection is now random thanks to Get-ChildItem and Get-Random: we select one random file from the server using its UNC path (\\Testserver.domain.local\c$\). All you have to do is make sure the account you run this script under has the correct permissions, and of course that the selected path is part of your backup selection list.

Happy restoring!