
Using BEMCLI to automate restores

We use Symantec Backup Exec for backups at some of our larger clients. Of course we have monitoring set up on these backup sets and jobs: we monitor when they fail, and we have a verification phase that checks whether the content in the backup is correct and matches the data from the snapshot of the server. But of course this is not enough.

If you need absolute certainty that a backup is correct, there is only ONE true way of testing it: restore the backup to a different location and check that the restore jobs are able to run. For this we use BEMCLI. BEMCLI is a set of PowerShell cmdlets that you can import on the Backup Exec machine itself to send PowerShell-based commands to the server.

Let’s build a very simple restore test. First we import the BEMCLI module to make sure we get all the BE commands; a complete list of commands and a help file can be found here (and can also be pulled up locally, as shown right after the command below).

Import-Module BEMCLI
Submit-BEFileSystemRestoreJob -FileSystemSelection C:\RestoreFolder -AgentServer Testserver.testdomain.local -NotificationRecipientList me@mycompany.com -RedirectToPath \\BACKUPEXECSERVER\Restore
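By the way, once the module is imported you can also browse the full command list and the help locally with standard PowerShell discovery; nothing here is BEMCLI-specific apart from the module name:

Import-Module BEMCLI
# List every cmdlet the BEMCLI module exposes
Get-Command -Module BEMCLI
# Read the built-in help for the restore cmdlet used above
Get-Help Submit-BEFileSystemRestoreJob -Full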

Let’s dissect the command a little:

  • Submit-BEFileSystemRestoreJob simply means we want to start a restore job; since we do not supply any date filter, the restore runs from the latest backup set.
  • -FileSystemSelection C:\RestoreFolder is the folder we would like to restore data from.
  • -AgentServer is the server where C:\RestoreFolder is located.
  • -NotificationRecipientList me@mycompany.com I think this speaks for itself: the contact that should be notified about this job. 🙂 Please note that this needs to be an existing recipient within the Backup Exec notification options!
  • -RedirectToPath \\BACKUPEXECSERVER\Restore And of course, we want to put the files in a different location and not overwrite our existing copy. To do this I always create a “Restore” share to dump the files on. A small try/catch wrapper around the whole command is sketched right after this list.
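If you want a bit more feedback than the e-mail notification alone, you can wrap the call in a try/catch and keep whatever the cmdlet returns. This is only a sketch with the same placeholder server, recipient and share as above; the exact shape of the returned job object may differ in your BEMCLI version:

Import-Module BEMCLI
try {
    # Same placeholder server, recipient and restore share as in the example above
    $job = Submit-BEFileSystemRestoreJob -FileSystemSelection C:\RestoreFolder -AgentServer Testserver.testdomain.local -NotificationRecipientList me@mycompany.com -RedirectToPath \\BACKUPEXECSERVER\Restore -ErrorAction Stop
    # Assumption: the cmdlet returns a job object we can dump for logging
    $job | Out-String | Write-Output
}
catch {
    Write-Error "Could not submit the restore job: $_"
}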

Of course you could schedule this script using Task Scheduler and be done with it: you always restore C:\RestoreFolder… But that doesn’t sound like a good test, does it? Always restoring the same file in a backup… Of course not! 😉 So that’s why we’ll now select a random file (not a folder!) to restore.

Import-Module BEMCLI
$file = Get-ChildItem -Recurse -File \\Testserver.domain.local\c$\ | Get-Random -Count 1
Submit-BEFileSystemRestoreJob -FileSystemSelection $file.FullName -AgentServer Testserver.testdomain.local -NotificationRecipientList me@mycompany.com -RedirectToPath \\BACKUPEXECSERVER\Restore

The command is mostly the same as before, but the file selection is now random by using Get-ChildItem and Get-Random: we select 1 random file from the server via its UNC path (\\Testserver.domain.local\c$\). All you have to do is make sure the account you run this script under has the correct permissions, and of course that the selected file falls within your backup selection list.
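To actually put this on a schedule, something along these lines with the built-in ScheduledTasks cmdlets will do. This is just a sketch: the script path, task name and account are placeholders you will need to adapt, and the account needs rights both in Backup Exec and on the UNC paths involved:

# Placeholder path to the random-restore script shown above
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Restore-RandomFile.ps1'
# Run the restore test once a week, early on Monday morning
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Monday -At 6am
# Register the task under a placeholder service account
Register-ScheduledTask -TaskName 'BackupExec-RandomRestoreTest' -Action $action -Trigger $trigger -User 'TESTDOMAIN\backupadmin' -Password 'YourPasswordHere'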

Happy restoring!

 

Juniper SRX: Using RPM to monitor and change routes

I’ve been using Juniper’s SRX series for about a year now. I’ve always used the SSG series with pleasure and never had any doubts or issues with them. I often deploy dual-WAN solutions that need to be highly available, or at least have some form of fail-over, because my clients use VoIP or cloud services that rely on a stable internet connection.

In the old SSG series this was very straightforward: set up track-ip on the interface and it will bring the interface down when several pings to IPs fail and reach your configured threshold. On the SRX series this gets a little more complicated. There are now 3 types of monitors, which all have extra subtypes (a quick sketch of the non-ICMP variants follows the list):

  • HTTP probes
  • ICMP (ping) probes
  • TCP/UDP port connection probes
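The rest of this post sticks to ICMP, but for reference this is roughly what the other probe types look like. These are alternatives for the same test (a test has one probe-type), the URL, address and port are placeholders, and per Juniper’s documentation the TCP/UDP probes may need a target that actually answers on that port (for example another Junos device configured as an RPM probe server):

set services rpm probe example test test-name probe-type http-get
set services rpm probe example test test-name target url http://www.example.com

set services rpm probe example test test-name probe-type tcp-ping
set services rpm probe example test test-name target address 203.0.113.10
set services rpm probe example test test-name destination-port 80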

 

Now there are some things to really pay attention to. When you set up RPM you add routes on top of your static routing table with the ones in the RPM configuration. That means that if you have a route-based VPN enabled, you will need to add that route to your RPM configuration instead of the static route configuration, as the routes in RPM will take precedence.

Time to dissect the simple ICMP RPM config given to us by Juniper:

Here we set up the basics. We create a probe named "example" and add a blank test on it with the name "test-name". We tell RPM that 3 probes should be sent, with a probe interval of 15 seconds; that means the 3 probes are sent 15 seconds apart. The final setting is the test interval, which tells the RPM service to wait 10 seconds between tests. Quite simply put, it sends 1 probe every 15 seconds, and once all 3 probes have been sent it waits 10 seconds and starts again, so one full cycle takes roughly 40 seconds.

set services rpm probe example test test-name probe-count 3
set services rpm probe example test test-name probe-interval 15
set services rpm probe example test test-name test-interval 10

Next we tell the RPM service how many failures are allowed within this test. Seeing as we’re sending 3 probes, I only want to change the route when all 3 pings have failed:

set services rpm probe example test test-name thresholds successive-loss 3
set services rpm probe example test test-name thresholds total-loss 3

After that we set the test action to perform, in this case a simple ICMP ping to Google DNS (please note that in a production environment you should never ping a host that is not under your management). The probes are sent out of the external interface fe-0/0/0.0:

set services rpm probe example test test-name target address 8.8.8.8
set services rpm probe example test test-name destination-interface fe-0/0/0.0
set services rpm probe example test test-name next-hop 8.8.8.8

And to finish it up we tie the RPM probe to an ip-monitoring policy and set the route to be used if the probes fail:

set services ip-monitoring policy test match rpm-probe example
set services ip-monitoring policy test then preferred-route route 0.0.0.0 next-hop 192.168.1.1

Tada! Simple monitoring and fail-over achieved. 🙂 You can check the status via the web interface or via the CLI using show services ip-monitoring status.
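From the CLI, these two operational commands show the fail-over state and the underlying probe statistics (output omitted here):

show services ip-monitoring status
show services rpm probe-results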