Tag Archives: SQL

My First vCenter Orchestrator Workflow – Part 5 – A little bit of SQL

Thus far in this series of posts we have installed and configured vCenter Orchestrator as well as set up and utilized a couple of plug-ins: the vCenter plug-in and the PowerShell plug-in.  Before the series ends I wanted to go through one more plug-in: the SQL plug-in.  The SQL plug-in is used for, well, you guessed it: running SQL queries against a database.  If you remember, in Part 4 we set up a workflow that took two input parameters, a host and a location code.  The script then went on to create a new port group on a vSwitch named locationcode-VM_Network.  The logic itself works fine and the workflow validates; the only problem I see with it is that the user needs to enter the 'location code' manually as an input.  Now I know you are all very careful and take your time when doing things, but I'm not, and more than likely after doing this 40 or 50 times you can count on me keying in that location code wrong 🙂 – Enter the SQL plug-in.

So the setup is as follows: I have a database containing a table with two columns, locationcode and deployed (boolean).  What I would like to do is have an input parameter that lets me select the location code from a drop-down list of non-deployed locations (rather than manually entering this information) and, in turn, pass the location code to my PowerShell script.  Once the script has run I'd like to update the deployed column for that specific location, ensuring it isn't displayed in the list again and in turn ensuring I can't deploy the same location twice.  Make sense?  I know it's a lot to follow, but the concepts of setting it up are pretty much the same no matter what you are looking to do.
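For reference, a table matching that description might look something like the following – the table and column names here are my own illustration (chosen to match the queries used later in this post), so adjust to suit your environment:

```sql
-- Hypothetical schema for the deployment-tracking table described above.
-- Column names match the queries shown later (locationcode, deployed).
CREATE TABLE Locations (
    locationcode VARCHAR(50) NOT NULL PRIMARY KEY,
    deployed     BIT         NOT NULL DEFAULT 0   -- 0 = not yet deployed
);

-- A few sample rows: two locations awaiting deployment, one already done.
INSERT INTO Locations (locationcode, deployed)
VALUES ('TOR01', 0), ('MTL01', 0), ('VAN01', 1);
```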

Alright – enough background – let's get into it.  I'm not going to go through the installation of the SQL plug-in – it's exactly the same installation process as the PowerShell plug-in, which we covered in Part 4.  Similar to how we set up our PowerShell host, we need to set up our SQL Server database connection inside of vCO.  To do this, fire up your vCO client, navigate through the workflow tree to Library->SQL->Configuration, select the 'Add a database' workflow, right-click and Start Workflow.  As you can see, there are a few parameters we need to pass to the workflow in order for it to run successfully.  First, give this connection a name and select your desired Database Type – in my case MS SQL.  We also need to pass a Connection URL here.  Now if you don't know anything about JDBC connection URLs, no fear, it's not that difficult.  Simply enter it in the following format…


So, for a SQL Server named DC.lab.local running on the default port 1433 with a database named ServerDeploy you would use the following…
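The screenshots from the original post aren't shown in this archive, so as a rough guide: the vCO SQL plug-in uses a JDBC connection string, and (assuming the bundled jTDS driver – check your plug-in's documentation for the exact prefix it expects) the general format is:

```
jdbc:jtds:sqlserver://<host>:<port>/<database>
```

So for the example above, that works out to something like:

```
jdbc:jtds:sqlserver://DC.lab.local:1433/ServerDeploy
```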



After clicking Next we are once again presented with our Shared/Per User session mode – again, I chose Shared to use one simple set of credentials rather than per-user authentication.  When you are ready, click 'Submit' to add your new database to vCO's inventory.  One thing to note here is that this step is not strictly necessary.  If we wanted, we could perform all of this at run time inside code; however, for tutorial purposes it's sometimes easier to do it this way.

Alright, nuff config!  It's time now to get started on our task at hand: querying the database for non-deployed locations and presenting the result as a drop-down input parameter on the workflow we created in Part 4.  First off there is a simple change we need to make to our existing workflow.  Here's a tip – if you don't feel like buffalo-ing your original workflow, simply right-click on it and select 'Duplicate Workflow' to copy it.  OK, first off we need a new attribute.  We originally had locationcode set up as an input parameter of type string – and we still need this; however, the result we get back from our database will be an array of strings.  So, on the General tab of your workflow, create a new attribute called databaseParameter of type SQL:Database and assign it the value of the database we created earlier (you should be able to browse the inventory to do so).  Once that's done, simply Save & Close and continue on past any validation errors.


So here comes the real magic!   We need to take that database attribute and pass it to a newly created action, which will in turn spit out an array of strings (the locations in our database).   Again, you could do the following all in script embedded within your workflow, but you never know when you are going to need to reuse something, so I've decided to create a new action to do it.    To create a new action, be sure you are in 'Design' view within your client and click on the Actions tab in the left-hand menu.  Where you place your action doesn't really matter; I chose to right-click com.vmware.library.sql and create my action inside that module.  Just remember where you put it and what you named it 🙂.


OK, you should now be in the Action editor.  This is where we place the code that does the actual querying of the database.  As I said earlier, we need to pass this action a database parameter and it will return an array of strings.   The setup of input parameters and return types, along with all the other work we are going to do, happens on the Scripting tab.  First, define your return type as Array/string.  Second, add an input parameter of type SQL:Database.  You can see all of this outlined in the capture below…


Hey!  That was easy!  Now it's time for a little scripting.  vCO script is nothing but JavaScript which calls the rich set of APIs that vSphere provides, along with the functions and properties of all the plug-ins installed in vCO.  The script I used is displayed below…

var resultingArray = new Array();
var query = "SELECT locationcode FROM Locations WHERE deployed = 0";
var resultingActiveRecords = databaseParameter.readCustomQuery(query);
for each (var resultRecord in resultingActiveRecords) {
    var locationcode = resultRecord.getProperty("locationcode");
    resultingArray.push(locationcode);
}
return resultingArray;

Simply paste this into the script pane of your action.  As you can see it's a pretty simple script: it creates an array, queries our database, pushes the locationcode column value of each returned record into that array and finally returns the array.  So – time to Save and Close and head back to our workflow.

So at this point we are left with two tasks, the first being the creation of the drop-down box as an input.  To do this we need to change the way our original input parameter, locationcode, is displayed, which is done on the Presentation tab of the workflow editor.  Be sure you have selected locationcode in the Presentation tree at the top of the screen, then select the icon to add a new property to it.  There are several different properties listed, but the one we are after is called 'Predefined list of elements'.  Under the value field of our new property, select the icon to assign an action.  In the popup, find/filter for the action we created earlier, assign the action's parameter to our database attribute and click Apply.


There, done, right?  Well, kinda.  You could run the workflow now and it would populate our input, go ahead and run the PowerCLI script, and the VM Network would be created.  However, if you remember, it was my intent to go back at the end of the workflow and update our database to ensure the same locationcode can't be selected twice.  To do this we need to drag a new Scriptable task element in to run after we invoke our PowerCLI script.  Inside this scriptable task we need to bring in a couple of local attributes in order to run the SQL we need: locationcode and databaseParameter.


As for the script, it's going to look very similar to the syntax we placed inside our action, with the exception of an executeCustomQuery function in place of readCustomQuery – and, of course, the query itself is different.  The following is what I used…

var query = "UPDATE Locations SET deployed = 1 WHERE locationcode = '" + locationcode + "'";
databaseParameter.executeCustomQuery(query);

And now at last we are done!!  Go ahead and run your workflow, either from within the vCO client or the Web Client, and you should now see a drop-down selection for the locationcode.  Once a locationcode is selected the script will run, create the VM Network, then update our database to ensure that the selected locationcode is not shown again in the drop-down.


So that, my friends, is the end of the line for me on creating my first vCenter Orchestrator workflow, but it is most definitely not the end of the line with vCO.  With access to a huge set of vSphere APIs, all of the functionality and properties provided by its plug-ins, and some sweet integration with the vSphere Web Client thrown in, I'm beginning to see my usage of vCO ramp up within my current role.  This series has hardly even scratched the surface of vCO's functionality, so I urge you all to go out there and learn as much as you can about this product.  I'll do one more post in the series outlining some of the resources I found along the way while creating this workflow, so expect to see that soon.

My first vCenter Orchestrator Workflow

Friday Shorts – SQL Server Editions and the NHL Unlocked

So it’s been a little while since I’ve put out a Friday Shorts – mainly because everything that I’ve been doing has been either so insignificant or requires a blog post all to itself, but without further ado, here’s my shorts…

Get the SQL Server Edition you are using

Ran into a situation where I was rebuilding an application for someone, and this someone did not know whether they had initially deployed the application with the SQL Server Express edition that it came with, or installed a separate version locally on the machine.  Well, a quick SQL query like the following will solve that issue and let you know exactly what you are currently running…


SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY('productlevel'), SERVERPROPERTY('edition')

OMG!  The Lockout is over!!!!

So there is a book that I often read to my son before bed called "The Hockey Sweater", a book that was often read to me as a child.  The first line of this book is as follows: "The winters of my childhood were long, long seasons. We lived in three places – the school, the church and the skating rink – but our real life was on the skating rink."    This simple phrase could not be more true when it comes to my attitude towards hockey and the NHL.  The recent announcement of the NHL lockout ending brought along thousands of tweets and blog articles from "die-hard" fans stating that they will be boycotting the NHL.  They will not be attending games, with some people saying they will refuse to even watch a game on TV.  Well, I could jump on this bandwagon as well, but I know deep down inside that, even though the owners and the players have most certainly hurt the fans, I will still follow my team religiously throughout this shortened season, and I will get just as excited as I did when I was 14 watching Les Glorieux hoist their 24th Stanley Cup around the Montreal Forum.   There is no way I could ever boycott that feeling, and IMO there are many, many others who feel the same way and live by that same simple phrase from "The Hockey Sweater".  I know there is at least one person anyways; whoever designed our five-dollar bill printed those same words on it 🙂



P.S. – Get this Subban stuff sorted out!!!

Upgrading to vCenter Server 5.1 and SSO Database connection has failed.

So this post isn't so much a how-to-upgrade-to-vSphere-5.1 post as it is a simple outline of some of the gotchas I ran into – and not so much during the actual upgrade of vCenter, but with the introduction of the vCenter Single Sign On service. vCenter SSO was introduced in 5.1 to act as an authentication broker as well as a security token exchange provider, enabling you to authenticate to the SSO service once and then pass those tokens to various other VMware solutions that utilize the SSO components, such as Orchestrator.

Honestly, all of the information you find here (aside from some of the SQL Server tasks) you can find within VMware's documentation set. That being said, I'm going to throw it out here anyways, since sometimes I find it easier to follow a blog post rather than a 500-page PDF. Also, this post will really only apply if you are using the embedded SQL Express database for your current vCenter Server; you shouldn't experience these issues if using an external DB.

So first off, even though I wanted to install all components on the same machine, I opted to go with each individual install rather than the "Simple Install". I think I've read somewhere to do this but can't remember where; either way, that's what I did.

Anyways, SSO in itself requires a database – a separate database from your original vCenter database. Now, VMware does provide you with the SQL scripts to create that database as well as the roles and users that go along with it. You can find these buried within the vCenter Server install media at "Single Sign On\DBScripts\SSOServer\Schema\mssql". If you don't have SQL Server Management Studio you might as well download and install that as well, as I'm not going to be touching on the SQL command line at all. So, the first script you need to run is rsaIMSLiteMSSQLSetupTablespaces.sql – simply open this script up in SSMS and change the <CHANGE ME> placeholders to the directories you wish to store your mdf/ldf database files in. If you don't know, you can always right-click on your vCenter database and have a look at where its files are located; with the default install of MSSQL Express it's normally C:\Program Files (x86)\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\ –


Once you have changed these to your preferred locations, simply execute the query; this should create the new SSO DB for you as well as some tables within it. As for the users, the script is located in that same directory on the install media and is titled rsaIMSLiteMSSQLSetupUsers.sql. Again, load this script into SSMS, replace <CHANGE ME> with your desired passwords for these database users and execute.

Also in accordance with the VMware documentation you need to be sure your SQL server is running under mixed authentication, at the time mine was only running under Windows Authentication.  This can be done by right-clicking your server inside of management studio, selecting properties and modifying the Server authentication section under the Security tab.

So with all of these prerequisites met I went along my merry way of upgrading my vCenter Server.  Everything was fine until I got to the point in the installation which sets up the vSphere SSO database.  I entered my server name and the users I had created earlier, yet still ended up getting the following error: "Database connection has failed.  You can refer to the vm-sso-javalib.log in the system temporary folder for more information."


After a frenzy of googling and reading, I tracked the issue down to MS SQL Express's default of utilizing dynamic ports.  It's certainly not my place to try and explain dynamic ports; what I will try to explain is how to get your SSO database connected.  We need to change our SQL instance from using dynamic ports to a static port, and in the end it's actually quite easy.  First off, start SQL Server Configuration Manager (it should be in your Start menu).  Go to the network configuration of the server, then double-click on TCP/IP.  From here a second dialog box will appear.  Scroll all the way down to the IPAll section and simply enter the port you wish to use in the TCP Port field.  I chose 1433 (the default SQL port and the one specified in the database connection screen during the SSO setup).  Really you could choose any port, so long as they match.  After changing this value and doing another quick restart of the SQL service, I was able to complete my installation of vSphere Single Sign-On.
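If you want to confirm which port the instance actually ended up listening on after the restart, one quick way (assuming you connect to it over TCP from Management Studio) is to ask your own connection:

```sql
-- Returns the TCP port of the current connection; it comes back NULL
-- for shared-memory or named-pipe connections, so connect over TCP.
SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```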

Needless to say, it took a lot of work to get the first prerequisite of vCenter Server 5.1 up and running; honestly, the rest of the installations (Inventory Service and vCenter Server itself) went flawlessly.  Either way, I thought I would write up the issues I ran into and how I resolved them in case it might help someone else.  As always, comments are most certainly encouraged in the box below.

AppAssure 5 – A review!

About a month ago AppAssure released the first version of their backup software since being acquired by Dell.  Dell AppAssure is one of the great sponsors of this blog, so I figured hey, why not grab myself the trial version, install it, and share my experiences with my readers.  I tried to take note of as many of the features as I could, but being in a limited nested workstation lab there was only so much I could do.  So honestly, if you like what you see, I would head on over to AppAssure.com, get your own free trial and give it a go.  Also, although I only tested from within a VMware environment, AppAssure actually supports protecting machines across your complete infrastructure, be it Hyper-V, VMware, or even physical machines.


AppAssure Installation Prerequisites

First off is installation.  AppAssure is installed on top of either Windows 7, Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012 (you must have a 64-bit OS, 8 GB of RAM and, obviously, some sort of storage).  The installation process was a breeze.  There were a few prerequisites, but as you can see the installer had the option to go ahead and install them for me.  This is a small but great feature to have, as there's nothing I hate more than searching around for prerequisites and updates.  For the most part it was a 'Next->Next->Done' install.


Once you have the AppAssure core installed you can access it by pointing your browser to http://HOSTNAME:8006/apprecovery/admin and logging in with your local administrator username and password.  At this point you could click around and set everything up manually, but the 'Setup Core' button located near the top right of your screen will guide you through configuring mostly all of the parameters and functions that need to be setup in order to make AppAssure functional.  The 'Setup Core' guides you through the following areas….

Display Name – This is simply the name you want displayed within your AppAssure core console – not a lot to talk about here.

Email Notifications – Here you can configure your SMTP settings, Email subject, and your to/from Email addresses.  Another really small but very cool feature here is the ability to customize the actual message that the core is sending out.  Using variables you can basically move information around in the message.  As I said, a very small feature, but certainly a useful one.




Repository – This is the section where you define where you want your backups to go.  Essentially you create a repository and then add one or more disk targets to it.  The targets can be either direct attached storage or a cifs share somewhere on your network.  By showing/hiding the details you can get very granular on how the repository stores its data by changing items such as bytes per sector, average bytes per record and whether or not you want Windows or AppAssure to handle the write caching.







Encryption – Another very useful feature especially for those with strict security standards.  Here you have the ability to create an encryption key in order to fully encrypt (AES 256 bit) all of your backup data.  This is not something that I see included with most backup software and it's certainly a nice to have feature.



Now that all of the setup and configuration is done, we can get down to the core purpose of this application, and that's backing up your infrastructure.  The 'Machines' tab is where we will spend most of our time in this section.  This is where we will add and protect (the term AppAssure uses for backing up a client) our critical machines (AppAssure's term for a server, whether virtual or physical).

The first thing we need to do is deploy the AppAssure agent to a machine.  The agent is used for many things, but its main objectives are tracking changed blocks, performing quiescence of the applications, and handling all communication between the core and the client.  This can be done manually by installing the agent from the target machine, or you can simply use the 'Remote Deploy' wizard from the core, which initiates the install of the agent and all of the required prerequisites on the target machine.  To do the latter, simply select 'Actions->Deploy Agent', provide the hostname and some credentials for your target machine and you are done.  The status of this, and almost all tasks, can be monitored through the 'Events' tab.

After the deploy has completed you still will not see the server listed on the Machine tab.  We still need to add or protect the machine.  To do so, select Actions->Protect Machine.  In the dialog that appears we need to provide the hostname/ip and some credentials to connect to the machine.  During this process you will also be able to edit what AppAssure calls their protection schedule.  By default AppAssure will snapshot the machine every 60 minutes and perform a backup, however you can change these values during the addition of the machine by selecting 'Edit'.  You can once again get very granular with how often you are performing the snapshots.  You can specify different schedules for peak and non-peak times, as well as set different schedules for weekends as opposed to weekdays.  Also, you can get as simple as saying one snapshot per day, or just disable protection all together.

Once the Protect Machine tasks have completed you should see your machine listed in the Machine tab.  At this point you could just walk away as the protection schedule will kick in immediately and follow the parameters and policies that you set up during the wizard.  Probably not the case however so as you can see below there are a few more tabs that we can go through dealing with this machine.

On the configuration tab there are several additional options that can be applied to your protected machine.  First off the settings area allows you to change the display name and port/encryption information.  Under Configuration->Events you can override the global email alert settings and configure alerts that are specific to this machine.

Under Retention Policy you can either use those core defaults that you set up earlier, or you can set up a customized retention policy for this machine.  As you can see from the screenshot you are able to get very granular on how retention is handled and they have a very nice graphical representation of the retention information that you have selected.  This is always a very confusing item for myself to configure and it's nice to have a visual to see how simple changes to your retention schedule affect the recovery points that are being saved.

Licensing, Protection Settings, and Transfer settings are again simply individual overrides from the global configuration.  One huge note here (and I didn't have a chance to test all of this) is that if the target machine you are protecting is running SQL, Exchange, or SharePoint you will be given additional options such as the ability to truncate logs, etc.

From here you are basically good to go.  You can let the protection settings take effect and start taking snapshots of your machines.  Or, you can immediately force a snapshot of your machine.


AppAssure handles restores in a few different ways.  

Exporting Data to a VM
This option basically takes a selected recovery point and restores it to a virtual machine located in your infrastructure.  You are not just limited to ESXi/vCenter either; as you can see in the snapshot above, AppAssure supports VMware Workstation as well as Hyper-V as export targets.  Again, this process is dead simple and basically only requires a recovery point, a target host and authentication to get started.  Outlined in the screenshot, there are many other options you can set while performing this restore, such as location (datacenter, resource pool, datastore), which volumes you want to restore, and how the disks are to be provisioned (thin/thick).
Bare Metal Restores
Bare Metal restores allow you to take a backup of a physical machine and completely restore it to either the same machine, or even a machine with dissimilar hardware.  Now I didn't have the resources to completely test a Bare Metal Restore, but from the documentation it appears that it basically encompasses creating a boot CD image, burning the image to disk, booting up the target server from disk, connecting to the recovery console instance, mapping volumes, initiating the recovery, and then monitoring the process.  I can see this being a very handy tool for those physical boxes that companies still have, not only for disaster recovery, but even for major hardware upgrades as well.

Performing a rollback.
Rollback is essentially a full restore of a volume or volumes back to the state they were in when a snapshot was taken.  These are performed directly on the production machine, or can also be set up to roll one machine back to an alternate machine somewhere else.  Again, select the desired recovery point and then click 'Rollback'.  As you can see from the screenshot it is possible to do a live recovery, which is a very cool feature.  Once you are all set up, simply click 'Rollback' to go back in time.

Mounting for a file-level restore

The first step of a file-level restore is the same as all the others: select the recovery point to which you want to restore.  From there, you can simply click Mount.  Here you get the option of which drive to mount, where to mount it, the write permissions on the mount point and whether to add a Windows share to it or not.  Once completed, if you browse to c:\ProgramData\MountPoints\ (assuming that is where you mounted it) you should be able to see all of your files and folders from the recovery point you selected.  From there you can copy any file you need out of the mount point and back to the original VM or wherever you desire.  Once completed, don't forget to dismount by going to Tools->Mounts and selecting dismount.
Object-Level Restore
Similar to the file-level restore, AppAssure can actually restore application objects and items from SharePoint, SQL, and Exchange.  Meaning you could restore SharePoint objects (individual documents and items, sites and subsites), SQL objects (databases), and Exchange objects (storage groups, datastores, messages, volumes) without interrupting your production servers and without taking the time to completely restore a full backup.  Object-level restore is a must for any organization, as it saves you valuable time when corruption occurs.  And speaking of corruption, AppAssure will constantly monitor your SQL databases and Exchange datastores for corruption and alert you if it finds any.
Replication
AppAssure handles replication by basically connecting two cores together.  The core that you choose to replicate to can be local to your datacenter, in a remote location, or even a subscription service from a third party that supports AppAssure backups.  During the replication configuration you have the ability to set up a 'seed drive', which will basically house all of the AppAssure recovery points and allow you to physically transport the drive to the secondary location and perform the initial seed manually.  By utilizing AppAssure's replication you have the ability to perform a full or individual-machine failover to your replica site.  Also, you have the ability to fail back to your production site once you have sorted out the issues.  Again, I didn't have the resources to fully test AppAssure's replication.
Virtual Standby
Virtual Standby by AppAssure is a pretty nice feature that allows you to essentially perform a P2V of a physical machine and have a complete copy of that machine sitting within your virtual infrastructure.  A Virtual Standby will contain all of the recovery points within your backups, stored as snapshots.  This allows you to perform a very fast recovery of a machine in the event that it 'disappears'.  You can configure a Virtual Standby through the VM Export process explained above.  Basically, every snapshot taken during the backup process will incrementally update your Virtual Standby machine.  A very nice feature to have for those companies with a lot of physical infrastructure still running.
So there you have it – my experience checking out AppAssure 5.  The application certainly has a lot of nice features; among the most exciting to me is the ability to back up and restore both your physical and virtual infrastructure from within one application, one interface.  I love the ability to create a Virtual Standby for a physical machine.  This function certainly makes DR a lot easier for those physical boxes kicking around, and helps save you a bit of money on hardware/resources at the secondary site as well.  Although I'd love to see some Linux support, AppAssure certainly provides me everything I need from within my Windows environment.  The interface is clean, consistent, and very easy to navigate and use.  Again, I didn't have the resources to completely test out everything, but from what I have seen they seem to have a very scalable architecture in place.  The item-level recovery for SharePoint, SQL, and Exchange is an awesome feature… no need to restore complete servers when I can just restore a single mailbox!   Another great feature is the ability to take a recovery point and restore it to a completely different subset of hardware.  This feature somewhat steps outside the realm of backup and allows you to perform hardware upgrades (especially of physical machines) with ease…  I'd certainly recommend heading on over to AppAssure.com, grabbing a trial copy of AppAssure and checking it out yourself…. As always, I'd love to hear your comments, concerns, opinions or experiences with AppAssure below in the comments area…


DR for your DR Solution – Backing up your Veeam Backup & Replication Database

Last week I published an article on moving your Veeam Backup & Replication database to a new server.  In that article I used a backup-and-restore method to move the Veeam DB somewhere else.  Well, the feedback I received from that article was mostly about how to set up a scheduled job to back up the Veeam database, so, without further ado, here's how I would recommend doing so.

Connect to your Veeam DB server

First off, we need to initiate a connection to the SQL Server hosting your Veeam DB.  Honestly, it's probably easiest just to RDP or console in to the SQL Server hosting the database and then connect to it.  In the examples below I will be using SQL Server Management Studio to do so.  Just a note: if you are running the default installation of Veeam with SQL Server Express, you can connect to the server using 'SERVERNAME\VEEAM' as your server name and use Windows Authentication.  Otherwise, just connect to your SQL instance as you normally would.

Create the Maintenance Plan

This is a pretty easy step.  Once you are connected, just create a new maintenance plan by expanding Management -> Maintenance Plans.  Right-click on Maintenance Plans and select New Maintenance Plan.  You could also choose to go through the wizard, but again, for this example I chose to create it from scratch.

Create the Backup Database and Maintenance Cleanup tasks

You should now be in what is called Design Mode of the maintenance plan.  In the bottom left hand corner of your screen you should see all of the Design Tasks that we are able to just click and drag over to our Design Surface in the centre.  Here is where we need to setup our first task which is to do the actual backups of our databases.  Simply select the Backup Database task and drag it over into our Design Surface.  Do the same with the Maintenance Cleanup task as well.

Configure the Backup Database Task

Ok, we are now getting into the meat of this job.  Double click the Backup Database task that we just moved over.  Here is where we will set up all of the parameters revolving around our backups.  First off we need to select our connection.  In most cases this can just be left at 'Local Server Connection', however depending on how you are set up you may need to enter the actual hostname or IP of the server.  Leave the backup type set to Full.  Under the database(s) section is where you can select which databases you would like to backup with this job.  You can just select the Veeam DB if you want, but I chose 'All Databases'.  Back up to – select your target, usually disk nowadays.  I left the expiration of the backup sets at the default, which is to not expire.  Check 'Create a backup file for every database', and in this example I've also checked the option to create a sub-directory for each database.  I just find this to be a bit more polished when browsing through the backups 🙂  You can use whatever you want for the backup extension, but the standard for SQL is 'bak'.  Select a location to store the backups, click 'OK' and you're done.
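Under the hood, a maintenance plan backup task just issues a T-SQL BACKUP DATABASE statement.  Here's a rough sketch of the equivalent command (the database name, path and file name are examples – yours will differ):

```sql
-- Full backup of a database to its own sub-directory, 'bak' extension
-- (path and database name are examples; adjust to your environment)
BACKUP DATABASE [VeeamBackup]
TO DISK = N'D:\Backups\VeeamBackup\VeeamBackup_backup.bak'
WITH NOINIT, NAME = N'VeeamBackup - Full Database Backup', STATS = 10;
```

Seeing the raw statement also makes it easier to troubleshoot the job later if it fails in the agent history.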

Configure the Maintenance Cleanup Task

This step can be skipped if you want, but you will have to watch your free space.  In this step we will set the job to 'clean up after itself' by deleting backups that are a certain number of days or weeks old.  Again, set up your connection the same as you did in the previous step.  You will also want to select 'Delete Backup Files'.  Select 'Search folder and delete files based on an extension' and set up the folder the same as above.  Select to include first-level subfolders.  Enter the same file extension as above (bak).  Select your desired retention policy under the 'File Age' section.  I usually choose somewhere along the lines of a couple of weeks, depending on the importance as well as the free space on the backup targets.  After all this is set up, click 'OK' to save your cleanup task.
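For reference, the Maintenance Cleanup task calls the xp_delete_file extended stored procedure behind the scenes.  A sketch of what it effectively runs (the folder path is an example, and the final parameter enables first-level subfolders):

```sql
-- Delete .bak files older than two weeks under the backup folder,
-- including first-level subfolders (last parameter = 1)
DECLARE @cutoff datetime;
SET @cutoff = DATEADD(week, -2, GETDATE());
EXECUTE master.dbo.xp_delete_file 0, N'D:\Backups', N'bak', @cutoff, 1;
```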

Connect the two tasks together

Here's an easy one for you.  If you single click on the Backup Database task you will see a green arrow appear underneath it.  Simply drag that arrow over to the Maintenance Cleanup task.  You could also do this the other way around – single click the Cleanup task and drag its arrow to the Backup task.  Either way, whichever task runs first, you should end up with the same results.

Schedule the plan

Click the schedule button in the Maintenance Plan toolbar.  In the job schedule dialog that opens select the required times and frequency that you wish to run this job and click 'OK' to save the schedule.  The schedules are very flexible and allow for virtually every scheduling combination that you can think of.

This is all the modification that we need to do so just save your maintenance plan by going to File -> Save Selected Item or by right-clicking the tab at the top and selecting 'Save'.

Check the Agent and Test.

If you now have a look under SQL Server Agent -> Jobs you should see that a new agent job has been created with a name similar to that of your maintenance plan.  You can right click on this job and select Properties.  You should see your schedule as well as the SSIS call that runs your maintenance plan.  If you would like to test your backup job now, simply right click the job and select 'Start Job at Step'.  This should pop up a dialog showing you the status of the job.  Once completed you should see some backups in the target backup location you entered earlier.
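If you prefer a query window over the GUI, the agent job can also be started and checked with a couple of procedures in msdb.  The job name below is an example – use whatever name the maintenance plan generated for you:

```sql
-- Start the agent job that the maintenance plan created
EXEC msdb.dbo.sp_start_job @job_name = N'VeeamBackupPlan.Subplan_1';

-- Review the job history afterwards to confirm it succeeded
EXEC msdb.dbo.sp_help_jobhistory @job_name = N'VeeamBackupPlan.Subplan_1';
```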

So there you have it!  DR for your DR Solution.

Moving your Veeam Backup & Replication Database

As you are probably aware, in the beginning there are few VMs.  People start out by virtualizing their low flyers, web servers, etc.  As time goes on, something called VM sprawl occurs.  Suddenly there are many, many VMs being spun up for no real reason at all.  Tier 1 apps make their way into your vCenter, and things begin to grow exponentially.  The problem with this is that sometimes you have solutions installed from the beginning which were initially set up in the 'small' environment you had back then.  As is the case with my Veeam installation.  When I first installed it, I just used the SQL Express option that came with it.  Now, since the environment has grown, and the stakes are higher when failures occur, I have the need to move this database from its default SQL Express to a full blown version of SQL that is managed by all of our monitoring and backup tools.  Below is just how I did that…

First off, you will need to get your hands on a copy of SQL Server Management Studio.  We will need this in order to perform the backups of the SQL express databases that Veeam has created.  You can find 2005 here and 2008 here.

After you have installed this on your Veeam server just follow the steps below to move your database.  Oh, and be sure to stop and / or disable any running jobs or jobs that may run while you are performing these actions.  It's probably best to just disable all of them until you are done.

Stop Veeam Services

We will need to stop (and I also disabled) all of the Veeam services.  Don't worry, when we get to re-installing the Veeam applications later the services will be re-enabled again.  The reason for this is that we do not want any data flowing into our database while we are backing it up and moving it.  You will need to stop the following services: Veeam Backup Enterprise Manager, Veeam Backup Service, Veeam Indexing Service, and the Veeam vPower NFS Service, as shown below


Backup your VeeamBackup database

Once the services are stopped it's time to get going on backing up our Veeam databases.  Veeam has 2 databases that it uses: VeeamBackup and VeeamBackupReporting.  We will need to back both of these up in order to restore them on our new SQL server.  So, fire up SQL Server Management Studio, find the VeeamBackup database and follow these steps:


  • Right-click the VeeamBackup database and select Tasks->Backup
  • Make note of the location near the bottom where it's going to save the backup file
  • Click 'OK', then browse to the target location and copy the backup file to your new SQL server

You will need to repeat the bulleted points above for the VeeamBackupReporting database as well.
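If you'd rather script it than click through the GUI, the T-SQL equivalent of the steps above is roughly the following (the target paths are examples):

```sql
-- Back up both Veeam databases to disk
BACKUP DATABASE [VeeamBackup]
TO DISK = N'C:\Temp\VeeamBackup.bak' WITH INIT, STATS = 10;

BACKUP DATABASE [VeeamBackupReporting]
TO DISK = N'C:\Temp\VeeamBackupReporting.bak' WITH INIT, STATS = 10;
```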

Restore on your new SQL server as the same name

Now that we have the backups on the new SQL Server we will need to restore them.  The following is how I did it…


  • Right-click Databases and select Restore
  • In the 'To Database' field, type the name of the target database.  I would just use the same name as before – 'VeeamBackup'
  • Select 'From Device' and then 'Add' the backup file that we just copied over
  • After we have added our file, be sure to check the checkbox under the Restore heading
  • Check out the settings on the Options tab.  Here is where you will want to be sure that the mdf and ldf files will restore to the locations that you want them to
  • When you're happy, click 'OK'

Again, we need to repeat this for the VeeamBackupReporting database as well.
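Again, for those who prefer T-SQL over the GUI, here's a sketch of the restore side.  The logical file names and target paths below are assumptions – run RESTORE FILELISTONLY first to see the real logical names inside your backup:

```sql
-- Inspect the logical file names contained in the backup
RESTORE FILELISTONLY FROM DISK = N'C:\Temp\VeeamBackup.bak';

-- Restore, moving the mdf and ldf to the desired locations
-- (logical names and paths are examples)
RESTORE DATABASE [VeeamBackup]
FROM DISK = N'C:\Temp\VeeamBackup.bak'
WITH MOVE N'VeeamBackup' TO N'D:\SQLData\VeeamBackup.mdf',
     MOVE N'VeeamBackup_log' TO N'E:\SQLLogs\VeeamBackup_log.ldf',
     STATS = 10;
```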

Security Setup

Depending on how you had Veeam set up before, this may or may not need to be done, or it may need to be done in a different way.  Essentially, the user that you are going to run the Veeam installation as will need to have db_owner rights to both of these databases.  So if you installed Veeam under a local user account previously, you might want to think about creating and using a domain account this time around, as we now have two servers that we need to authenticate against.  However you decide, you will need to grant that user dbo rights as follows…

  • Under Security->Logins, right click and select New Login.  Browse to the windows user account that you want to use.
  • In the 'User Mapping' section, select both VeeamBackup and VeeamBackupReporting and ensure that the db_owner role membership is checked off.
  • Also in that section, assign the Default Schema for both of those databases to dbo.
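The same setup can be scripted if you prefer.  The domain account below is a made-up example – substitute the account you actually plan to install Veeam with:

```sql
-- Create the login and grant db_owner on both Veeam databases
-- ('DOMAIN\veeamsvc' is a hypothetical account)
CREATE LOGIN [DOMAIN\veeamsvc] FROM WINDOWS;

USE [VeeamBackup];
CREATE USER [DOMAIN\veeamsvc] FOR LOGIN [DOMAIN\veeamsvc] WITH DEFAULT_SCHEMA = [dbo];
EXEC sp_addrolemember N'db_owner', N'DOMAIN\veeamsvc';

USE [VeeamBackupReporting];
CREATE USER [DOMAIN\veeamsvc] FOR LOGIN [DOMAIN\veeamsvc] WITH DEFAULT_SCHEMA = [dbo];
EXEC sp_addrolemember N'db_owner', N'DOMAIN\veeamsvc';
```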

Remove and Re-Install Veeam

There may be other ways to 're-point' Veeam to the new database server (registry hacks), but for all intents and purposes it takes literally less than 5 minutes to install Veeam, so I found this the easiest route.  Go into Control Panel and remove both Veeam Backup and Replication and the Veeam Enterprise Manager.  Once done, re-install both of these applications.  When prompted for your database location, enter your desired information to point to the new SQL Server.  You should be prompted that a Veeam server is already connected to the database; when asked, just select 'Yes' to connect this (the new installation) server to the database instead.

And it's that easy!  You now have a more expandable database and the ability to utilize more RAM and CPU.

Just a note, I cannot guarantee that this will work for you…it worked great for my environment!  If in doubt, open up a support call with Veeam and have them give you a hand or directions to do so…  Also, if you are looking for a way to automate a backup of your Veeam Database, check out this post.

Migrating vCenter to a new server

Maybe it's during the upgrade from vSphere 4.0 to 4.1 and you require the 64 bit hardware.  Maybe you need to upgrade to take advantage of a newer OS and you just don't trust the next-next-done upgrade that Microsoft provides you.  Maybe it's the fact that you currently have a physical vCenter and you want to take advantage of VMware High Availability and Distributed Resource Scheduler.  Or maybe it's the complete opposite and you are currently virtualized and you have your reasons to go physical.  Whatever your reasons may be, there will more than likely be a time where you as a VI admin will want or have to migrate your vCenter to a new server.  In my case, it was option number 4.

We had been running a virtualized vCenter ever since our upgrade to vSphere 4, however with the addition of the Distributed Virtual Switch, along with more and more third party applications depending on vCenter, coupled with a failure that left me unable to assign a VM to a network, I just thought it was time to move our vCenter Server outside of the cluster that it was controlling.  The decision to go physical was purely overkill, but I wanted to have my vCenter completely disconnected from the SAN fabrics and interconnects that currently carry the I/O flowing in and out of our hosts.  I had read about the recommendation of management clusters, but the cost of building one would for sure outweigh the advantages of having it virtualized.  Also, this will give us a central spot to house our PowerCLI scripts and also run our UPS shutdown scripts from.  So, physical it is, and here is my plan to get there.  It wouldn't have been much of a concern, but I wanted to have a solution where the dvSwitches would continue to function, and where I wouldn't need to go and touch every host afterwards.  So, assigning this new physical vCenter the same name and same IP was a key step.  Since I wanted to ensure a smooth transition I ran this one through the lab first using these steps.  So here is what I had: vCenter1 (the current active vCenter) and vCenter2 (the new vCenter).

1. Get everything that you need (vCenter installation media, 64 bit SQL Native Client, sysprep files, etc.).  If vCenter1 is virtualized, be sure to take note of which host it's running on, as you will need to disable or disconnect its network later on.
2. When you are ready to go, stop the vCenter service on vCenter1.
3. Now we need to disconnect or disable the network on vCenter1.
4. Delete the current computer account for vCenter1 from Active Directory.  I assume you could also just rename it; I just chose to delete the account.
5. Now let's move over to vCenter2 and do a few things.  Rename this machine to vCenter1 and also give it the same IP as vCenter1.  This should ensure that we don't have to SSH into each host after the upgrade and touch the hosts file.
6. Now just complete the installation of vCenter on vCenter2, which is now vCenter1.  (Confused yet?)  Note – when asked about the database, be sure to select to use the existing DB, otherwise you can kiss all your hard work goodbye.

That's all, you should be able to connect to your new vCenter instance using the same name or IP as you always have.  You may have to reconnect your hosts however, as a new password for the vpxuser would have been generated with the new install and the hosts and vCenter will be out of sync.  That's as simple as right clicking your host, selecting Connect and providing root credentials though.  There you have it, a brand new vCenter!  While you're at it you might as well defragment your vCenter Server database as well to give it that snappy new feeling again.  Also, there is a KB article which takes you through some of the steps above here.  As always, if you notice anything that I'm lacking or if I'm performing actions that will cause the market to crash, please comment and let me know.

Giving vCenter a kick in the rearend!! – Defragmenting your vCenter Database

I've seen this happen numerous times on a number of our VMware vCenter installations.  For the first few months (sometimes days) vCenter Server is very snappy and responsive; when moving from page to page things come up instantly, and moving between different statistics and metrics on performance graphs is very quick.  Then, as time goes by, things begin to drag down a bit, tabs start to take a little bit longer to load when moving between them, performance graphs can throw exceptions and things just generally slow down.

Of course there are many issues that could be causing this to happen, but the most common that I have found is a fairly simple one, a neglected VMware vCenter DB that is full of fragmentation and in need of a little TLC.  I don't want to go into what exactly database/index fragmentation is, but a good read if you have the time (and interest) is here.  Also, VMware has released KB Article 1003990 as well, which covers off fragmentation within the vCenter DB.

And then there is also my explanation….

When I'm looking at tuning my vCenter DB, the tables that I find myself always defragmenting are as follows..


As you may guess, these tables hold historical and rolled up performance statistics and data.  Since vCenter is always  collecting data (depending of course on your Intervals and durations) these tables are constantly being updated (New stats coming in, old stats going out).  Just as in file level defragmentation, the large number of writes, updates, and deletes causes some tables to become heavily fragmented.

I'm not going to go through defragmenting all of these as the steps are the same for each table/index.  For this purpose I'll just go through VPX_HIST_STAT3.  First off, to see the fragmentation inside a table just run the following command in SQL Server Management Studio

USE [vcenter database name]
GO
dbcc showcontig([tablename],[indexname])

You can retrieve the names of the indexes either from  KB Article 1003990 or by expanding the Indexes folder in SSMS.  Essentially this command translates to..

dbcc showcontig ('VPX_HIST_STAT3','PK_VPX_HIST_STAT3')

In my example this returns the following stats.

To summarize these results for a VI Admin, the lines that you should really be looking at are 'Scan Density' and 'Logical Scan Fragmentation'.  In short, you want Scan Density to be as close to 100% as possible, and Logical Scan Fragmentation to be as close to 0% as possible.  To defrag the indexes in this table we use the following command….

dbcc indexdefrag('[databasename]','[tablename]','[indexname]')

so, after filling in the values we get…

dbcc indexdefrag ('VIM_VCDB','VPX_HIST_STAT3','PK_VPX_HIST_STAT3')
which then returns….

As you can see we now have a lower Logical Scan Fragmentation and a higher Scan Density.  This is the expected result we want from the defragmentation.  Just repeat this step on all of the indexes and tables you want to defragment and you should be enjoying a much snappier, more responsive vCenter Server in no time.  Keep in mind, some smaller indexes and tables will always be fragmented and not much can be done to correct that issue.  Personally, I concentrate on the 8 tables listed above.  Keep these tables/indexes happy and you are well on your way….
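One last note: dbcc showcontig and dbcc indexdefrag still work, but both are deprecated in newer SQL Server releases.  If you're on SQL Server 2005 or later, the same job can be done with the sys.dm_db_index_physical_stats DMV and ALTER INDEX – a quick sketch using the example table from above:

```sql
-- Check fragmentation with the DMV instead of dbcc showcontig
SELECT i.name AS index_name,
       s.avg_fragmentation_in_percent,
       s.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(N'VIM_VCDB'),
       OBJECT_ID(N'VPX_HIST_STAT3'), NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id;

-- Reorganize (online) is the equivalent of dbcc indexdefrag;
-- a rebuild goes further but can take the index offline
ALTER INDEX PK_VPX_HIST_STAT3 ON VPX_HIST_STAT3 REORGANIZE;
```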