Category Archives: Uncategorized

Win your way to VMworld with VMTurbo!

It’s that special time of year again when we start to see a lot of planning and chatter around VMworld.  This year, the mega virtualization show has been moved back to Las Vegas at the Mandalay Bay resort – running from August 28th through September 1st!

Now if you are struggling to find the funding to get yourself out there, or if your company just doesn’t want to fork out the dollars to purchase a conference pass for you, then why not try your hand with VMTurbo?

Starting today!  Yes, TODAY VMTurbo will be giving away 2 full conference passes to three lucky winners!  All you need to do is sign up and answer a few basic questions about your virtual environment!

And if you are late to the game and didn’t see this in time don’t fret – VMTurbo will be holding two more draws for two conference passes on June 17 and July 15th as well!  You only have to sign up once to be included in all three draws so don’t hesitate, sign up now!

A few updates to the Veeam silent install!

A ways back I wrote an article about how to utilize vRealize Orchestrator to deploy and install Update 2 for Veeam v8 on any of your virtualized Veeam Backup & Replication servers.  This has worked well for me as I normally have the vSphere Web Client open anyways – why not just do my updates from there as well?

Well, with v9 Update 1 there have been a few changes to how you go about silently deploying a Veeam update.  First up, there is no need to extract the update file – we can simply pass our install parameters directly to the update package – therefore we no longer need the setup.bat file that orchestrates the unpacking and installing!  We can simply copy the file out and pass the /silent, /noreboot, and /log arguments to it – as shown below…

"c:\downloads\VeeamBackup&Replication_9.0.0.1491_Update1.exe" /silent /noreboot /log c:\downloads\patch.log VBR_AUTO_UPGRADE=1
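If you want to test this by hand before wiring it into a workflow, a quick way to run the update synchronously from PowerShell and grab the exit code looks something like the following – just a sketch, assuming the same file and log paths as the example above…

$process = Start-Process -FilePath "c:\downloads\VeeamBackup&Replication_9.0.0.1491_Update1.exe" -ArgumentList "/silent /noreboot /log c:\downloads\patch.log VBR_AUTO_UPGRADE=1" -Wait -PassThru
$process.ExitCode   # 0 generally indicates the update installed cleanly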

So, if you had followed along with the previous post and want to update your vRO workflows to be compatible with v9U1 you may need to change a few things.  For those starting fresh, I’ll go over the whole process again below.

The Workflow

The vRO workflow is pretty simple (especially now with the silent installation improvements) – it really needs to do only two things: 1 – copy out our update file, 2 – run the executable.  In order to support these two functions we first need to set up a few arguments and input parameters on our workflow.

Arguments

  • vroPathToVeeam (string) – this will contain the path to the Veeam update file that we place on the vRO server (e.g. /var/log/vco/VeeamBackup&Replication_9.0.0.1491_Update1.exe )
  • guestPathToVeeam (string) – this will contain the path on the target server to which we would like to copy the file (e.g. c:\downloads\VeeamBackup&Replication_9.0.0.1491_Update1.exe )
  • vmUsername (string) – a username with permissions to copy/run files on the Veeam Backup and Replication server (e.g. administrator@domain.local)
  • vmPassword (SecureString) – the above user’s password 🙂

Input Parameters

  • vm (VC:VirtualMachine) – this will be the virtualized Veeam Backup & Replication server.  We’ve set this up as an input parameter instead of an attribute so we can automatically pass the VM from the right-click context menu inside the vSphere Web Client.

The Schema

As you can see below we basically mimic the two functions mentioned previously with Scriptable Tasks within vRO; Copying the files and Executing the update.

VeeamWorkflow

As far as copying the file goes, it’s a pretty simple script…

// establish connection
var host = vm.sdkConnection;
var guestOperationsManager = host.guestOperationsManager;
// create authentication
var guestAuth = new VcNamePasswordAuthentication();
guestAuth.username = vmUsername;
guestAuth.password = vmPassword;
// construct fileManager
var fileManager = guestOperationsManager.fileManager;
result = false;
var attr = new VcGuestFileAttributes();
var srcFile = new File(vroPathToVeeam);
var uri = fileManager.initiateFileTransferToGuest(vm , guestAuth ,guestPathToVeeam, attr, srcFile.length, true);
// Copy File
result = fileManager.putFile(vroPathToVeeam, uri);

And to execute the actual file once it has been copied (The Execute Update scriptable task)…

var host = vm.sdkConnection;
var guestOperationsManager = host.guestOperationsManager;
// create authentication for the guest OS
var guestAuth = new VcNamePasswordAuthentication();
guestAuth.username = vmUsername;
guestAuth.password = vmPassword;
guestAuth.interactiveSession = false;
// build the program spec - note the doubled backslashes so the log path survives JavaScript string escaping
var guestProgramSpec = new VcGuestProgramSpec();
guestProgramSpec.programPath = guestPathToVeeam;
guestProgramSpec.arguments = "/silent /noreboot /log c:\\downloads\\patch.log VBR_AUTO_UPGRADE=1";
guestProgramSpec.workingDirectory = "";
// kick off the update inside the guest
var processManager = guestOperationsManager.processManager;
result = processManager.startProgramInGuest(vm, guestAuth, guestProgramSpec);

And with that we are done – don’t forget to map the relevant input parameters from the global workflow to those inside the individual scriptable tasks…

We can now run this workflow from within the vRO client, select a virtualized Veeam B&R server, and have v9 Update 1 automatically deployed and installed to it.  To go one step further you can also associate this workflow with the right-click context menu within the vSphere Web Client if you like – then you never even have to open up vRO 🙂

Keep in mind this workflow does nothing to ensure that you have no active jobs – this is something I’d love to add someday: check for running jobs and, if there are any, wait for them to finish or cancel them before running the update…
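For what it’s worth, here’s a rough sketch of what that pre-flight check could look like in PowerShell on the Veeam server itself – treat it as an assumption of how I’d approach it (using GetLastState() to spot in-flight jobs) rather than a tested piece of the workflow above…

# Hypothetical pre-flight check - bail out if any backup job is currently running
Add-PSSnapin VeeamPSSnapIn
$runningJobs = Get-VBRJob | Where-Object { $_.GetLastState() -eq "Working" }
if ($runningJobs) {
    Write-Output "Jobs still running, skipping the update: $(($runningJobs | Select-Object -ExpandProperty Name) -join ', ')"
    exit 1
}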

Thanks for reading!

 

StarWind Software HCA brings hyperconvergence to the SMB

Virtualization drastically changed the way we deploy applications and servers within our datacenters – eliminating racks upon racks of single-purpose servers and replacing them with compute clusters and shared storage.  In terms of IT, the timespan that virtualization has been mainstream has been short, but even in that small amount of time we have seen evolutions in the underlying infrastructure that supports it.  We’ve seen converged systems enter the marketplace, bundling the server clusters and storage together under one support SKU.  From there we’ve gone one level further with hyperconvergence, eliminating all of the complexity and troubles of having separate pieces of infrastructure for compute and storage.  Hyperconvergence has certainly gained a foothold in the industry, but for the most part these types of deployments have been aimed at the small, medium and large ENTERPRISE, not the small business.  Hyperconvergence is a perfect fit for the SMB, however up until now it has been out of reach in terms of price for companies needing a small deployment for 50-100 VMs…

Enter StarWind

StarWind has been around since 2003 and is best known for its flagship product StarWind Virtual SAN – a shared storage solution running on Windows and providing capacity to both VMware and Hyper-V clusters.  StarWind has supported a 2-node, highly available storage setup for VMware with Virtual SAN for a few years and has had success doing so.  Now they have taken that one step further by providing a hyperconverged solution including the hardware, compute, storage, network, and management under one simple solution called the StarWind HyperConverged Appliance (HCA).

Note: This review was sponsored by StarWind Software, meaning I did receive compensation for the words on this page! That said, the words on this page are mine and mine alone and not StarWind’s :)

 

The hardware

Before we get too much into the software driving the HCA let’s first take a step back and check out some of the unique ways StarWind is providing hardware.  The commodity servers underneath the StarWind HyperConverged Appliance (HCA) are key to how they are able to offer a powerful solution yet keep costs at a minimum and target SMBs and ROBOs.  We have a few options when it comes to hardware and the StarWind HCA – we can buy new, buy refurbished, or quite simply bring our own hardware.

You will see the phrase “best of breed” a lot within this review as that is the path StarWind has chosen to take while putting together the pieces of their HCA.  StarWind is a software company, not a hardware company, so they’ve opted to choose Dell as their preferred provider for the infrastructure beneath the HCA.  As we all know, Dell brings tremendous solutions to their customers, providing infrastructure that can be scaled to meet the needs of the “mom and pop” shop all the way through to the large enterprise.  Not to mention they have one of the biggest distribution networks in the world for providing hardware and servicing warranty and parts replacements.

 

Whether you choose to buy new or refurbished (StarWind has partnered with xByte and Arrow) there are some commonalities between the solutions.  First off, customers have the option to purchase up to 5 years of 4-hour pro support on any hardware purchased (even refurbished) so you can ensure that you are covered in terms of hardware failures.  Also, whether it’s new or refurbished hardware, the StarWind HCA comes in three different flavors, ranging from small to large depending on your needs.

Model S – The HCA Model S is the entry-level system, mainly designed and targeted at SMBs and ROBOs.  This unit, a Dell T320, is a tower format – perfect for those remote/small offices that don’t have proper data centers or racks already installed.  The Model S does not require additional network switching and can be connected directly into existing switches – while all storage traffic is routed through a directly connected 10GbE back end.  The Model S starts with a 2-node starter set, and scales out to a maximum of 16 nodes.

Starwind-models

Model L – HCA Model L takes the next step and provides a mid-level system for SMBs, providing the ability to pack more CPU, Memory, and storage into each node and moving up into a rack architecture (Dell R620).  Again, the 2 node starter set utilizes existing switching and scales to a maximum of 64 nodes.

Starwind-modell

Model XL – Finally, the Model XL provides even bigger hardware configurations and faster CPUs.  As with the Model L this solution comes in a rack-mount architecture (Dell R720) and is designed for SMBs with high performance computing demands, VDI deployments, or mid-size enterprise ROBOs.  The Model XL provides us with maximum storage density and allows us to utilize a dedicated 10 or 40 GbE back end for storage traffic.  Just like the other models the starter set comes with 2 nodes, with the Model XL having the capability to scale to 64 nodes total.

Starwind-modelxl

As we can see above there is a wide range of compute, memory, storage and networking configurations available with the HCA which can meet the needs of almost any SMB/ROBO deployment out there.  All models are equipped with a directly connected 10GbE back end to handle the storage replication and are sold on a node-by-node basis, or in a 2-node starter set to get you up and running quickly.

One advantage to the StarWind solution is that you aren’t locked in to specific model types once you purchase them.  In fact, StarWind has published many “typical configurations” that meet the needs of various use-cases for the SMB.

Two Model L Nodes – Typical setup for an SMB looking to run a File Server, Exchange environment, SQL Server, etc.

Three Model L Nodes – Same as the previous configuration, but with an additional node added to provide more compute and storage to run additional workloads.

Three Model L Nodes + Two Model XL Nodes – Configuration for an SMB looking to run File Server, Exchange, SQL Server, etc with the addition of a 150-seat VDI deployment.

Three Model L Nodes + Two Model XL Nodes + Six Model S Nodes – This would be a typical setup for those organizations looking to deploy a central solution with support for three remote offices.  Each remote office would have a 2-node Model S cluster to support local services.  The 3 Model L and 2 Model XL nodes would be deployed at a central office to provide support for File Server, Exchange, SQL, VDI, etc as well as acting as a replication target for the remote locations.

Bring your own hardware

If the solutions provided by StarWind don’t quite meet your requirements, or if you simply want to leverage a past investment, customers also have the option to use their own hardware for the StarWind HCA.  By purchasing just the software, services and support, a small business is able to keep costs down while getting more bang for their buck on their previous hardware investments by utilizing infrastructure they may already have in place.  Obviously there are concerns in terms of warranty and support when going with this model, but the key is there is a lot of flexibility when it comes to how the customer can deploy the StarWind HCA.

The software

Let’s face it, the underlying hardware is quickly becoming a commodity in today’s world and focus has quickly moved to the functionality of the software.  StarWind recognizes this and has taken a stance to deploy their HCA in a “best of breed” type scenario.  To best understand this we need look no further than the diagram below, which outlines each piece of software included within StarWind’s VMware offering…

starwind-software

Hypervisors

The StarWind HCA comes at a minimum with 2 servers running vSphere 6 yet can scale to 64 nodes easily by simply dropping in more hosts.  As far as management goes, the vCenter Server Appliance is licensed and preconfigured within the StarWind HCA in order to allow organizations to manage their environment with a product they may already be familiar with – no need to learn new interfaces or attend additional training.  It should be noted that this review is focused around VMware, but the StarWind HCA does support Microsoft Hyper-V as well.

Storage

Hyperconvergence takes our traditional storage arrays, those big metal, external, shared storage solutions and collapses them down into the local storage, which in turn performs some magic and presents these local disks back out to the cluster as shared storage.  StarWind has built their company based on providing shared storage to clusters and their shared storage product, StarWind Virtual SAN is the backbone to providing availability within their HCA deployment, thus, we will spend most of our time focusing on this.

StarWind Virtual SAN is an installable product that runs on Microsoft Windows.  With a two-node deployment of the StarWind Hyperconverged Appliance you will see two instances of StarWind Virtual SAN, with each VM living on its own node.  From there, the local storage is claimed by the corresponding Virtual SAN VM, and presented back out to the ESXi hosts as an iSCSI datastore.  The real power of Virtual SAN however comes in the form of availability, as the product comes preconfigured in a highly available model.  It does this by utilizing NICs within the ESXi hosts to synchronously mirror the datastore from one Virtual SAN instance to what they call a Replication Partner, which is essentially the StarWind Virtual SAN instance on the other host.  On the ESXi host end of things, the software iSCSI initiator is used and binds multiple paths to the iSCSI target together, with each path pointing to a different StarWind Virtual SAN instance.  When looking at it from a physical mapping we see StarWind utilize 2 NICs on each ESXi host for their synchronization/replication traffic, as well as 1 NIC for their heartbeating/failover mechanism.

starwindvirtualsan

Aside from availability StarWind Virtual SAN offers a lot more as well – too much really to go over in this post, and it could possibly use a post of its own.  In terms of my favorites, check out the following list.

  • Server-Side Cache – RAM and flash-based devices are utilized to provide performance in terms of write-back caching, which in turn reduces I/O latency and eliminates a lot of useless network traffic.  These caches, well, they are also synchronized between hosts so we aren’t left in a situation that could result in data loss.
  • Scale – StarWind can scale in multiple ways including both up and out.  We can scale up by simply increasing the number of drives and spindles that we have within each node.  Scaling out is achieved by adding another node complete with StarWind Virtual SAN which in turn gives us capacity, as well as CPU and Memory to our cluster.
  • Deduplication and Compression – Most physical arrays are deploying this in some fashion these days and even though StarWind Virtual SAN is software-based we can still get the capacity and I/O advantages that are offered through built-in inline deduplication and compression.
  • Snapshots – LUN based snapshots are provided within StarWind Virtual SAN and cold data can also be redirected to a less expensive, secondary storage tier if need be.
  • Future integration with VVOLs now in tech preview.

Data Protection

Even though the StarWind HCA does a great job at providing availability for your production data locally, there are still times where corruption or user-generated events happen and a backup needs to be called upon to save the day.  StarWind doesn’t have a backup solution of their own, so in a true “best of breed” mentality they have gone out and selected Veeam as their preferred partner to provide data protection within the HCA solution.   If customers opt to purchase Veeam with the StarWind HCA, it means that Veeam Backup & Replication (currently v8) will come pre-installed and fully configured along with all the other software – the only thing the customer has left to do is add some backup storage and set up the backup jobs.

Support

Another huge benefit of going with the StarWind HCA comes in terms of support.  Although the StarWind HCA contains various pieces from different vendors (StarWind, VMware, Veeam, Dell), all of the support and maintenance is processed under one SKU, leaving the customer with only one number to dial when needing some help.  This eliminates all of the “he said, she said” and finger-pointing that sometimes happens when dealing with multiple vendors’ support on their own.  All support is handled by StarWind Software, and is provided 24/7, 365 days a year.

What does it look like in the end?

When you purchase a StarWind 2 node HCA you can expect to see something similar to the following diagram

starwind-solution

First up, 2 nodes, preinstalled with ESXi and fully configured.  On top of those we have a StarWind Virtual SAN node on each host, which claims all of the local storage and presents it back to the corresponding ESXi hosts in a fully replicated, highly available manner.  The key to the whole solution is that when the units are shipped to you they are completely configured and ready to go, with 4 preconfigured VMs (vCenter, StarWind Virtual SAN x2, and Veeam) – all you have to do is start deploying virtual machines and supporting your business.

starwind-networking

As far as networking goes it will all be preconfigured for us, with the exception of the domain network (management/production) which would be specific to your business.  We can see that we have dedicated links for the heartbeating mechanisms (the magic behind Virtual SAN’s failover mechanism) as well as dedicated links to handle the storage replication – the magic behind availability.

So what do I think…

StarWind has been in the software SAN game for a long time and it’s nice to see them start to provide a complete solution including the hardware and configuration as well.  Hyperconvergence is a great fit for the SMB space – however most solutions today seem to be priced outside of what an SMB can afford.  StarWind has placed their hyperconverged solution at a price point the SMB can handle – and provided a complete solution, including the hardware, software, hypervisor, and data protection – all under one support umbrella.  This is the first generation of StarWind’s HCA so I’m excited to see where they go from here – I’ve already been told that they are previewing support for features such as VMware’s VVOLs, so work is still being done.  In fact, it seems to me that a lot of the features we see on traditional hardware-based arrays are also being supported on StarWind’s software-based Virtual SAN.  The StarWind HCA is a very easy solution to use – essentially it’s just vSphere and requires no configuration from the client’s end.  In the end my overall experience with the StarWind HCA has been a good one!    StarWind HCA not only brings simplicity with their appliance, but also choice!  Customers can choose to utilize new or refurbished hardware, or simply roll their own install.  They can choose to include Veeam within their HCA deployment or go some other route!  They can choose to scale up or scale out depending on their needs.  Choice and simplicity are key when it comes to providing a solution to the SMB, as they don’t normally have the IT resources, budget or time to spend on training and deployment.  The StarWind HCA is certainly a viable option for those small businesses that fit that description, looking to deploy an easy to use, highly available hyperconverged solution that can grow and shrink as they do at a fraction of the cost of other solutions out there today.

If you would like to learn more about StarWind’s HCA you can do so here – they also offer a fully functioning trial version as well as a free version of StarWind Virtual SAN – the backbone of the HCA.  Also, if you are a StarWind HCA or StarWind Virtual SAN user I’d love to hear your thoughts – simply use the comment boxes below for any questions, concerns, comments, etc…

Running free VeeamZip directly from the vSphere Web Client

There are a lot of times I find myself needing to take a one-off backup of a VM – prior to software upgrades or patching I always like to take a backup of the affected VM(s) in the event that, well, you know, I mangle things.  VeeamZip is great for this – it allows me to process a quick backup of my VM that is separate from its normal backup and replication routines.  Since I work in an environment that runs paid Veeam licenses I have access to the Veeam Plug-in for the vSphere Web Client – and this plug-in does exactly what the title of this blog post says – it allows us to perform VeeamZips of our VMs without having to leave the vSphere Web Client and/or log into our Veeam Backup and Replication console.

What if I’m using Veeam Backup and Replication FREE?

So this is all great for me, but I got to thinking – what if I wasn’t running a paid version of Veeam Backup?  What if I was simply running the free version – this doesn’t come with Enterprise Manager, therefore it doesn’t come with a means of getting the Veeam Backup and Replication Web Client plug-in installed – therefore no VeeamZip from the Web Client, right? – Wrong!  Ever since Veeam Backup and Replication v8 U2 came out, the product has included PowerShell cmdlets around the VeeamZip functionality.  I wrote about how to use it last year in Scheduling Veeam Free Edition Backups.  Well, since we have PowerShell that means we can use vRealize Orchestrator to build a workflow around it – and we have the ability to execute workflows directly from within the vSphere Web Client – so without further ado, running the free VeeamZip functionality directly from the vSphere Web Client.

First up the script

I didn’t get too elaborate with the script as you can see below.  It’s simply a handful of lines that take in a few parameters: the VM to backup, the destination to store the backup, and the retention, or auto-deletion, of the backup.

Param(
[Parameter(Mandatory=$true)][string]$VM,
[Parameter(Mandatory=$true)][string]$Destination,
[Parameter(Mandatory=$true)][ValidateSet("Never","Tonight","TomorrowNight","In3days","In1Week","In2Weeks","In1Month")][string]$Autodelete
)
#Load Veeam Toolkit
& "C:\Program Files\Veeam\Backup and Replication\Backup\Initialize-VeeamToolkit.ps1"
#Get the VM Veeam Entity.
$vmentity = Find-VBRViEntity -Name $VM
 
#VeeamZip it!
Start-VBRZip -Entity $vmentity -Folder $Destination -AutoDelete $Autodelete -DisableQuiesce

That’s it for the script – simple right – feel free to take this and add whatever you see fit to suit your needs 🙂
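If you’d like to kick it off by hand before wiring up Orchestrator, the call would look something like the following – assuming you saved the script as VeeamZip.ps1, and with the VM name and folder here being nothing more than placeholders…

.\VeeamZip.ps1 -VM "VEEAMTEST01" -Destination "E:\VeeamZips" -Autodelete "In1Week"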

The Orchestrator Configuration

Before we get to creating our workflow there are a few things we need to do within orchestrator, mainly adding our server that hosts our Veeam Free instance as a PowerShell host within vRO.  But even before we run the ‘Add a PowerShell Host’ workflow we need to run a few winrm commands on the Veeam Free instance.  I have a complete post about setting up a PowerShell host here, but will include the commands you need to run below for quick reference.

First up, on the Veeam server run the following in a command shell…

  • winrm quickconfig
  • winrm set winrm/config/service/auth @{Kerberos="true"}
  • winrm set winrm/config/service @{AllowUnencrypted="true"}
  • winrm set winrm/config/winrs @{MaxMemoryPerShellMB="2048"}
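Once those have run you can quickly sanity-check that the settings took before heading back into vRO…

winrm get winrm/config/service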

Next, from within vRO (as shown below) we can run the “Add a PowerShell host” workflow…

PowerShellHost1

As you can see my Veeam Server is actually the same as my vCenter server – I don’t recommend doing this but hey, it’s a small lab!  Just be sure to use the FQDN of your Veeam Server for the Host/IP column.

PowerShellHost2

Ensure that the remote host type is WinRM, and that the Authentication method is set to Kerberos.

PowerShellHost3

And be sure that we are passing our username in the ‘username@domain’ format, along with ‘Shared Session’ for the session mode.  Once you are done go ahead and click ‘Submit’.  If everything goes as planned your Veeam Backup and Replication server should be added as a PowerShell host within vRealize Orchestrator.

And now, the workflow!

Finally we can get to actually building our workflow.  If you remember, our script takes in three parameters: VM, Destination and AutoDelete – so we will mimic the same with our workflow, only calling them Input Parameters within vRO (shown below).

workflow1

Now since we will be using the built-in Powershell workflow ‘Invoke an External Script’ we will also need to have some workflow attributes setup in order to pass to that workflow.  Below you can see how I’ve setup mine…

workflow2

Your configuration may vary a little from this one, but as you can see we simply add a PowerShell host attribute and map it to our newly added host, as well as assign the ScriptPath attribute to the representation of where we saved our little VeeamZip script earlier.  The arguments attribute can remain empty as we will only use this to build the arguments string to pass to the script.

workflow3

The first element we want to add to our workflow schema is a Scriptable task – go ahead and drag that over into your workflow.  This is where we will create our arguments string.

workflow4

As far as what goes into the scripting you can see I’ve simply brought in the arguments attribute, along with our three input parameters, and simply chained them together into one string (arguments = '"' + VM.name + '" "' + Destination + '" "' + AutoDelete + '"';), then ensured that my arguments attribute was included in the output as well.

workflow5

Next drag the ‘Invoke an external script’ workflow into your schema (you can see I’ve renamed mine ‘Run VeeamZip’).  Ignore all of the prompts regarding the setup of parameters that pop up – the easiest way I like to do this is by editing the workflow (the pencil above it) and using the ‘Visual Binding’ tab as shown below.

workflow6

Simply drag and drop your in attributes to their corresponding in attributes on the external script workflow, along with mapping your output to output.  Easy Peasy!

At this point you can go ahead and save and close your workflow – we are done with Orchestrator.  If you want to run the workflow a few times to test from within vRO go ahead – but the point of this post was to run it from within the Web Client so let’s move on to that step.

vRO and vSphere Web Client

I love vRealize Orchestrator and I love the fact that I can contextually execute custom workflows from within the vSphere Web Client.  To do this you need to first register your vRO instance with vCenter – this should be automatically done for you depending on how you set everything up – I’m not going to get into that configuration today.  To get to our context mappings we need to click on Home->vRealize Orchestrator.  With the vRO Home in context, select the ‘Manage’ tab and then ‘Context Actions’.  We then want to hit the little green + sign to add a new workflow context map.

webclientworkflow1

As far as the next steps go, they are pretty self-explanatory – navigate through your vRO inventory to your workflow, click ‘Add’, and select ‘Virtual Machine’ from the types box.  This is what will allow us to right-click on a VM and run our VeeamZip, passing the contextually selected VM to the workflow’s VM input parameter.  Click ‘OK’ and it’s time to VeeamZip!

Now when you want to run the workflow you can simply right click a VM and navigate to (in my case) All vRealize Orchestrator Actions->VeeamZipVM

execute1

As you can see our workflow will start, using our VM selected as the VM input, and simply prompt us for the destination and AutoDelete settings.

execute2

And there you have it!  We can now use the Free version of Veeam Backup and Replication to VeeamZip our VMs directly from within the vSphere Web Client.  In fact, our workflow will even show up within our vSphere tasks so we can monitor the status of the job.  Now, there is no error checking or anything like that…yet!  Let me know if you have any questions, concerns, etc… always happy to read the comments!

Friday Shorts – #vDM, New Web Client, Linux Cleanup, Betas and more…

Is it flowing? I like flowing, cascading hair. Thick lustrous hair is very important to me. Let me ask you this. If you stick your hand in the hair is it easy to get it out?

George Costanza – Seinfeld

Virtual Design Master 4 looking for sponsors

If you have never checked out the Virtual Design Master challenge I suggest you stop reading this and head over to their site and peruse the last 3 seasons, then come back here of course….  Anyways, the online, reality-based challenge is back for Season 4 and they are looking for sponsors to help provide prizes, swag, infrastructure, etc for the upcoming season!  So if you work for a vendor and want to get your brand attached to VDM4, follow this link to indicate your interest!  They are looking to get everything firmed up to have a July/August competition.

New HTML5 vSphere Web Client!

Why VMware feels the need to change the lightning fast, crazy responsive, highly reliable vSphere Web Client that is currently out there is beyond me, but they are…  I hope you can detect the sarcasm in that last sentence.  Anyways, they have been hard at work (re)developing the vSphere Web Client, removing its reliance on Flash and Flex and providing the same functionality through code based on HTML5.  I’ve not yet had a chance to check this out, but from the reactions on the blogosphere and Twitter I’d say that they are on the right track!  They are releasing the vSphere Web Client 6.5 as a Fling, allowing the product to get out into everyone’s hands before it’s integrated into a vSphere version.  If you have a chance go and check it out – it’s simply a virtual appliance that integrates with your current environment.

Getting Linux ready for a vSphere Template!

Fellow VFD4 delegate Larry Smith recently posted in regards to cleaning up your Ubuntu templates!  It’s a great post that covers off a lot of things that you can do to ensure you have a clean, prepped instance of Ubuntu to use as a template within your vSphere environment.  That said, he takes it one step further, scripting out the complete cleanup in bash – and in Ansible.  If you deal with Linux/Ubuntu templates I would definitely recommend heading over to Larry’s blog and applying some of this scripty goodness.

vSphere.next – Beta Time!

VMware has announced that the next version of vSphere will enter a (limited) public beta.  If you feel like you have the time and are ready to put in the effort of providing feedback, submitting bugs, etc to VMware in regards to the next release of vSphere then you can head here and indicate your interest in being a part of the beta.  As far as I know not everyone will be accepted – careful consideration will be taken on who is chosen to participate as they want to ensure they are getting valuable feedback and discovering any gotchas in the product before releasing it to the masses!

ZertoCon – The Premier Business Continuity Conference

Zerto has been a long time sponsor of this blog so I thought I’d place a shoutout to them and what they have in the works this spring!  You can join Zerto and many others from May 23-25 in beautiful Boston for ZertoCon.  Lately we have seen a lot of these smaller vendors opting to hold their own conferences – and honestly, if you use their products they are a must for you to attend!  The VM/EMC Worlds are a great venue, but honestly, these smaller, laser-focused conferences are absolutely fabulous if you are looking to gain more knowledge around certain vendors and their ecosystems!  I encourage you to check it out and sign up if you have the chance to go!

Adding Veeam Proxies to jobs via Powershell

There will come a time in every growing environment when you need to scale your Veeam Backup and Replication deployment to help keep up with ever increasing demands as it pertains to backing up all those new virtual machines.  Veeam itself has a couple of different deployment models when it comes to scaling – we can scale up – this is done by adding more CPU and RAM to our current proxies and increasing the number of maximum concurrent tasks that our proxies can process – a good rule of thumb for this one is dedicating a CPU per task, so 2 concurrent tasks = 2 CPUs.  Another option when it comes to scaling Veeam is to scale out, which is done by building and adding additional Veeam proxies into the fold.  Which one you choose is completely up to you, however in my experience I’ve had better luck scaling out and adding more Veeam proxies into my infrastructure – why?  Not really sure, I just don’t like having more than 2 or 3 processes hitting any one proxy at the same time – just a preference really…

If we have accepted the defaults when creating our backup/replication jobs they should be set to ‘Automatic selection’ as it pertains to our Backup Proxy settings – this means our work is done, as as soon as we add the proxy into our backup infrastructure it will be available to all the jobs.  That said, if you have changed settings (like me) to specify certain groups of proxies for certain jobs then we will have to edit each and every job in order to have it utilize our new proxy.   This isn’t a hard process but can be time-consuming depending on how many jobs you have.  I don’t have an extreme number of jobs, maybe 10 or so, but I also don’t like doing the same thing over and over as it often leads to mistakes.

Enter PowerShell

So with all that said, here’s a quick little PowerShell script that you can utilize to add a new proxy to a list of existing jobs.  As you can see I’ve chosen to add it to all of my jobs, but this can easily be modified to target only the jobs you want by filtering the results of the Get-VBRJob cmdlet.  The script is pretty simple, taking only one parameter, the proxy to add (***NOTE*** you will need to go into Veeam B&R and configure this machine as a proxy within your Backup Infrastructure, that part isn’t automated), looping through all of my jobs, retrieving the list of existing proxies and adding the new one to that list, then applying the new list back to the job.  It does this for both the source proxies and the target proxies (as you can see with -Target).

Param ( [string]$proxyToAdd )

Add-PSSnapin VeeamPSSnapIn

$newProxy = Get-VBRVIProxy -Name $proxyToAdd
$jobs = Get-VBRJob
 
foreach ($job in $jobs)
{
    # add the new proxy to the job's source proxy list
    $existingProxies = Get-VBRJobProxy -Job $job
    $newProxyList = $existingProxies + $newProxy
    Set-VBRJobProxy -Job $job -Proxy $newProxyList
 
    # do the same for the job's target proxy list
    $existingProxies = Get-VBRJobProxy -Job $job -Target
    $newProxyList = $existingProxies + $newProxy
    Set-VBRJobProxy -Job $job -Proxy $newProxyList -Target
}

Simply save your script (I called it AddProxyToJobs.ps1) and run it as follows…

c:\scripts\AddProxyToJobs.ps1 newproxyname
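And if you’d rather hit only a subset of jobs instead of everything Get-VBRJob returns, swapping out the $jobs line for something like the following will do the trick – the ‘Prod’ prefix here is just a placeholder for whatever naming convention you happen to use…

$jobs = Get-VBRJob | Where-Object { $_.Name -like "Prod*" }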

There is absolutely no error checking within the script so by all means make sure you get the syntax right or you could end up with a slew of errors.  Either way, this is a nice way to add a new proxy to a list of jobs without having to manually edit every job.  And as I mention with every script I write if you have any better ways to accomplish this, or see any spots where I may have screwed up please let me know….