StarWind Software HCA brings hyperconvergence to the SMB

Virtualization drastically changed the way we deploy applications and servers within our datacenters – eliminating racks upon racks of single-purpose servers and replacing them with compute clusters and shared storage.  In IT terms, virtualization has been mainstream for only a short time, but even in that small window we have seen the underlying infrastructure that supports it evolve.  We've seen converged systems enter the marketplace, bundling the server clusters and storage together under one support SKU.  From there we've gone one level further with hyperconvergence, eliminating the complexity and headaches of maintaining separate pieces of infrastructure for compute and storage.  Hyperconvergence has certainly gained a foothold in the industry, but for the most part these deployments have been aimed at the small, medium, and large enterprise – not the small business.  Hyperconvergence is a perfect fit for the SMB, however up until now it has been out of reach in terms of price for companies needing a small deployment for 50-100 VMs…

Enter StarWind

StarWind has been around since 2003 and is most famous for its flagship product StarWind Virtual SAN – a shared storage solution running on Windows and providing capacity to both VMware and Hyper-V clusters.  StarWind has supported a 2-node, highly available storage setup for VMware with Virtual SAN for a few years now and has had success doing so.  However, it has now taken that one step further by providing a hyperconverged solution including the hardware, compute, storage, network, and management under one simple offering called the StarWind HyperConverged Appliance (HCA).

Note: This review was sponsored by StarWind Software, meaning I did receive compensation for the words on this page!  That said, the words on this page are mine and mine alone and not StarWind's :)

 

The hardware

Before we get too much into the software driving the HCA let's first take a step back and check out some of the unique ways StarWind is providing hardware.  The commodity servers underneath the StarWind HyperConverged Appliance (HCA) are key to how StarWind is able to offer a powerful solution yet keep costs at a minimum and target SMBs and ROBOs.  We have a few options when it comes to hardware and the StarWind HCA – we can buy new, buy refurbished, or quite simply, bring our own hardware.

You will see the phrase "best of breed" a lot within this review as that is the path that StarWind has chosen to take while putting together the pieces of their HCA.  StarWind is a software company, not a hardware company, so they've opted to choose Dell as their preferred provider for the infrastructure beneath the HCA.  As we all know, Dell brings tremendous solutions to their customers, providing infrastructure that can be scaled to meet the needs of the "mom and pop" shop all the way through to the large enterprise.  Not to mention they have one of the biggest distribution networks in the world for providing hardware and handling warranty and parts replacements.

 

Whether you choose to buy new or refurbished (StarWind has partnered with xByte and Arrow) there are some commonalities between the solutions.  First off, customers have the option to purchase up to 5 years of 4-hour pro support on any hardware purchased (even refurbished), so you can ensure that you are covered in terms of hardware failures.  Also, whether it's new or refurbished hardware, the StarWind HCA comes in three different flavors, ranging from small to large depending on your needs.

Model S – The HCA Model S is the entry-level system, mainly designed and targeted at SMBs and ROBOs.  This unit, a Dell T320, comes in a tower format – perfect for those remote/small offices that don't have proper data centers or racks already installed.  The Model S does not require additional network switching and can be connected directly into existing switches – while all storage traffic is routed through a directly connected 10GbE back end.  The Model S starts with a 2-node starter set and scales out to a maximum of 16 nodes.

Starwind-models

Model L – The HCA Model L takes the next step and provides a mid-level system for SMBs, providing the ability to pack more CPU, memory, and storage into each node and moving up into a rack architecture (Dell R620).  Again, the 2-node starter set utilizes existing switching and scales to a maximum of 64 nodes.

Starwind-modell

Model XL – Finally, the Model XL provides even bigger hardware configurations and faster CPUs.  As with the Model L this solution comes in a rack-mount architecture (Dell R720) and is designed for SMBs with high performance computing demands, VDI deployments, or mid-size enterprise ROBOs.  The Model XL provides us with maximum storage density and allows us to utilize a dedicated 10 or 40 GbE back end for storage traffic.  Just like the other models the starter set comes with 2 nodes, with the Model XL having the capability to scale to 64 nodes total.

Starwind-modelxl

As we can see above there is a wide range of compute, memory, storage, and networking configurations available with the HCA which can meet the needs of almost any SMB/ROBO deployment out there.  All models are equipped with a directly connected 10GbE backend to handle the storage replication and are sold on a node-by-node basis, or in a 2-node starter set to get you up and running quickly.

One advantage to the StarWind solution is that you aren’t locked in to specific model types once you purchase them.  In fact, StarWind has published many “typical configurations” that meet the needs of various use-cases for the SMB.

Two Model L Nodes – Typical setup for an SMB looking to run a File Server, Exchange environment, SQL Server, etc.

Three Model L Nodes – Same as the previous configuration, but with an additional node added to provide more compute and storage to run additional workloads.

Three Model L Nodes + Two Model XL Nodes – Configuration for an SMB looking to run File Server, Exchange, SQL Server, etc. with the addition of a 150-seat VDI deployment.

Three Model L Nodes + Two Model XL Nodes + Six Model S Nodes – This would be a typical setup for those organizations looking to deploy a central solution with support for three remote offices.  Each remote office would have a 2-node Model S cluster to support local services.  The 3 Model L and 2 Model XL nodes would be deployed at a central office to provide support for File Server, Exchange, SQL, VDI, etc as well as acting as a replication target for the remote locations.

Bring your own hardware

If the solutions provided by StarWind don't quite meet your requirements, or if you simply want to leverage a past investment, customers also have the option to use their own hardware for the StarWind HCA.  By purchasing just the software, services, and support, a small business is able to keep costs down while getting more bang for their buck by utilizing infrastructure they may already have in place.  Obviously there are concerns in terms of warranty and support when going with this model, but the key point is that there is a lot of flexibility when it comes to how the customer can deploy the StarWind HCA.

The software

Let's face it, the underlying hardware is quickly becoming a commodity in today's world and the focus has quickly moved to the functionality of the software.  StarWind recognizes this and has taken a stance to deploy their HCA in a "best of breed" type scenario.  To best understand this we need look no further than the diagram below, which outlines each piece of software included within StarWind's VMware offering…

starwind-software

Hypervisors

The StarWind HCA comes at a minimum with 2 servers running vSphere 6, yet can scale to 64 nodes easily by simply dropping in more hosts.  As far as management goes, the vCenter Server Appliance is licensed and preconfigured within the StarWind HCA in order to allow organizations to manage their environment with a product they may already be familiar with – no need to learn new interfaces or attend additional training.  It should be noted that this review is focused around VMware, but the StarWind HCA does support Microsoft Hyper-V as well.

Storage

Hyperconvergence takes our traditional storage arrays – those big metal, external, shared storage solutions – and collapses them down into local storage, which in turn performs some magic and presents those local disks back out to the cluster as shared storage.  StarWind has built their company on providing shared storage to clusters, and their shared storage product, StarWind Virtual SAN, is the backbone of availability within the HCA deployment – thus, we will spend most of our time focusing on it.

StarWind Virtual SAN is an installable product that runs on Microsoft Windows.  With a two-node deployment of the StarWind HyperConverged Appliance you will see two instances of StarWind Virtual SAN, with each VM living on its own node.  From there, the local storage is claimed by the corresponding Virtual SAN VM and presented back out to the ESXi hosts as an iSCSI datastore.  The real power of Virtual SAN, however, comes in the form of availability, as the product comes preconfigured in a highly available model.  It does this by utilizing NICs within the ESXi hosts to synchronously mirror the datastore from one Virtual SAN instance to what they call a Replication Partner, which is essentially the StarWind Virtual SAN instance on the other host.  On the ESXi host end of things, the software iSCSI initiator is used and binds multiple paths to the iSCSI target together, with each path pointing to a different StarWind Virtual SAN instance.  When looking at it from a physical mapping we see StarWind utilize 2 NICs on each ESXi host for their synchronization/replication traffic, as well as 1 NIC for their heartbeat/failover mechanisms.

starwindvirtualsan
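If you're curious and want to poke at the multipathing yourself, a quick look from PowerCLI will show the iSCSI LUN and the paths pointing at each Virtual SAN instance.  This is just an illustrative sketch – the vCenter address, credentials, and host name below are placeholders, not part of the HCA configuration:

# Connect to the preconfigured vCenter that ships with the HCA (placeholder details)
Connect-VIServer -Server vcsa.lab.local -User administrator@vsphere.local -Password 'VMware1!'

# Grab the software iSCSI adapter on one of the HCA nodes
$hba = Get-VMHost esxi01.lab.local | Get-VMHostHba -Type iScsi | Where-Object { $_.Model -like "*Software*" }

# List the LUNs presented by the StarWind Virtual SAN VMs and the paths behind them
Get-ScsiLun -Hba $hba -LunType disk | Get-ScsiLunPath | Select-Object Name, SanID, State

You should see one path per StarWind Virtual SAN instance, each in an active state.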

Aside from availability, StarWind Virtual SAN offers a lot more as well – too much really to go over in this post; it could probably use a post of its own.  In terms of my favorites, check out the following list.

  • Server-Side Cache – RAM and flash-based devices are utilized to provide performance in terms of write-back caching, which in turn reduces I/O latency and eliminates a lot of useless network traffic.  These caches, well, they are also synchronized between hosts so we aren’t left in a situation that could result in data loss.
  • Scale – StarWind can scale in multiple ways, including both up and out.  We can scale up by simply increasing the number of drives and spindles that we have within each node.  Scaling out is achieved by adding another node complete with StarWind Virtual SAN, which in turn adds capacity, as well as CPU and memory, to our cluster.
  • Deduplication and Compression – Most physical arrays are deploying this in some fashion these days and even though StarWind Virtual SAN is software-based we can still get the capacity and I/O advantages that are offered through built-in inline deduplication and compression.
  • Snapshots – LUN based snapshots are provided within StarWind Virtual SAN and cold data can also be redirected to a less expensive, secondary storage tier if need be.
  • Future integration with VVOLs now in tech preview.

Data Protection

Even though the StarWind HCA does a great job of providing availability for your production data locally, there are still times when corruption or user-generated events happen and a backup needs to be called upon to save the day.  StarWind doesn't have a backup solution of their own, so in a true "best of breed" mentality they have gone out and selected Veeam as their preferred partner to provide data protection within the HCA solution.  If customers opt to purchase Veeam with the StarWind HCA, Veeam Backup & Replication (currently v8) will come pre-installed and fully configured along with all the other software – the only thing left for the customer to do is add some backup storage and set up the backup jobs.

Support

Another huge benefit of going with the StarWind HCA comes in terms of support.  Although the StarWind HCA contains various pieces from different vendors (StarWind, VMware, Veeam, Dell), all of the support and maintenance is processed under one SKU, leaving the customer with only one number to dial when needing some help.  This eliminates all of the "he said, she said" and finger-pointing that sometimes happens when dealing with multiple vendors' support organizations on their own.  All support is handled by StarWind Software and is provided 24/7, 365 days a year.

What does it look like in the end?

When you purchase a StarWind 2-node HCA you can expect to see something similar to the following diagram.

starwind-solution

First up, 2 nodes, preinstalled with ESXi and fully configured.  On top of those we have a StarWind Virtual SAN VM on each host, which claims all of the local storage and presents it back to the corresponding ESXi hosts in a fully replicated, highly available manner.  The key to the whole solution is that when the units are shipped to you they are completely configured and ready to go, with 4 preconfigured VMs (vCenter, StarWind Virtual SAN x2, and Veeam) – all you have to do is start deploying virtual machines and supporting your business.

starwind-networking

As far as networking goes, it will all be preconfigured for us, with the exception of the domain network (management/production), which would be specific to your business.  We can see that we have dedicated links for the heartbeat mechanisms (the magic behind Virtual SAN's failover mechanism) as well as dedicated links to handle the storage replication – the magic behind availability.

So what do I think…

StarWind has been in the software SAN game for a long time and it's nice to see them start to provide a complete solution including the hardware and configuration as well.  Hyperconvergence is a great fit for the SMB space – however, solutions today seem to be priced outside of what an SMB can afford.  StarWind has placed their hyperconverged solution at a price the SMB can handle – and provided a complete solution, including the hardware, software, hypervisor, and data protection – all under one support umbrella.  This is StarWind's first generation of their HCA so I'm excited to see where they go from here – I've already been told that they are previewing support for features such as VMware's VVOLs, so work is still being done.  In fact, it seems to me that a lot of the features we see on traditional hardware-based arrays are also being supported on StarWind's software-based Virtual SAN.  The StarWind HCA is a very easy solution to use – essentially it's just vSphere and requires no configuration from the client's end.  In the end my overall experience with the StarWind HCA has been a good one!  The StarWind HCA not only brings simplicity with the appliance, but also choice!  Customers can choose to utilize new or refurbished hardware, or simply roll their own install.  They can choose to include Veeam within their HCA deployment or go some other route!  They can choose to scale up or scale out depending on their needs.  Choice and simplicity are key when it comes to providing a solution to the SMB, as they don't normally have the IT resources, budget, or time to spend on training and deployment.  The StarWind HCA is certainly a viable option for those small businesses that fit that description, looking to deploy an easy-to-use, highly available hyperconverged solution that can grow and shrink as they do at a fraction of the cost of other solutions out there today.

If you would like to learn more about StarWind's HCA you can do so here – they also offer a fully functioning trial as well as a free version of StarWind Virtual SAN – the backbone of the HCA.  Also, if you are a StarWind HCA or StarWind Virtual SAN user I'd love to hear your thoughts – simply use the comment boxes below for any questions, concerns, comments, etc.

Running free VeeamZip directly from the vSphere Web Client

There are a lot of times I find myself needing to take a one-off backup of a VM – prior to software upgrades or patching I always like to take a backup of the affected VM(s) in the event that, well, you know, I mangle things.  VeeamZip is great for this – it allows me to process a quick backup of a VM that is separate from its normal backup and replication routines.  Since I work in an environment running paid Veeam licenses I have access to the Veeam Plug-in for the vSphere Web Client – and this plug-in does exactly what the title of this blog post says – it allows us to perform VeeamZips of our VMs without having to leave the vSphere Web Client and/or log into our Veeam Backup and Replication console.

What if I’m using Veeam Backup and Replication FREE?

So this is all great for me, but I got to thinking – what if I wasn't running a paid version of Veeam Backup?  What if I was simply running the free version?  It doesn't come with Enterprise Manager, therefore it doesn't come with a means of getting the Veeam Backup and Replication Web Client plug-in installed – therefore no VeeamZip from the Web Client, right?  Wrong!  Ever since Veeam Backup and Replication v8 U2 came out, Veeam has been including PowerShell cmdlets around the VeeamZip functionality.  I wrote about how to use it last year in Scheduling Veeam Free Edition Backups.  Well, since we have PowerShell that means we can use vRealize Orchestrator to build a workflow around it – and we have the ability to execute workflows directly from within the vSphere Web Client – so without further ado, running the free VeeamZip functionality directly from the vSphere Web Client.

First up the script

I didn't get too elaborate with the script as you can see below.  It's simply a handful of lines that take in a few parameters: the VM to back up, the destination to store the backup in, and the retention, or auto-deletion, of the backup.

Param(
    [Parameter(Mandatory=$true)][string]$VM,
    [Parameter(Mandatory=$true)][string]$Destination,
    [Parameter(Mandatory=$true)][ValidateSet("Never","Tonight","TomorrowNight","In3days","In1Week","In2Weeks","In1Month")][string]$Autodelete
)
#Load Veeam Toolkit
& "C:\Program Files\Veeam\Backup and Replication\Backup\Initialize-VeeamToolkit.ps1"

#Get the VM Veeam Entity
$vmentity = Find-VBRViEntity -Name $VM

#VeeamZip it!
Start-VBRZip -Entity $vmentity -Folder $Destination -AutoDelete $Autodelete -DisableQuiesce

That's it for the script – simple, right?  Feel free to take this and add whatever you see fit to suit your needs 🙂
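If you want to test the script on its own before wiring it into Orchestrator, running it from a PowerShell prompt on the Veeam server looks something like the following – the script name, VM name, and destination path here are just placeholders for illustration:

.\VeeamZipVM.ps1 -VM "Web01" -Destination "E:\AdHocBackups" -Autodelete "In1Week"

If the toolkit loads and the VM is found you should see the VeeamZip session kick off in the Backup & Replication console.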

The Orchestrator Configuration

Before we get to creating our workflow there are a few things we need to do within Orchestrator, mainly adding the server that hosts our Veeam Free instance as a PowerShell host within vRO.  But even before we run the 'Add a PowerShell host' workflow we need to run a few winrm commands on the Veeam Free instance.  I have a complete post about setting up a PowerShell host here, but will include the commands you need to run below for quick reference.

First up, on the Veeam server run the following in a command shell…

  • winrm quickconfig
  • winrm set winrm/config/service/auth @{Kerberos="true"}
  • winrm set winrm/config/service @{AllowUnencrypted="true"}
  • winrm set winrm/config/winrs @{MaxMemoryPerShellMB="2048"}
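If you want to confirm that WinRM is actually reachable before jumping into vRO, a quick Test-WSMan from another Windows machine can save some troubleshooting later – just a sanity-check sketch, with a placeholder hostname:

# Should return WS-Management identity information if the listener is up and Kerberos auth works
Test-WSMan -ComputerName veeam01.lab.local -Authentication Kerberos -Credential (Get-Credential)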

Next, from within vRO (as shown below) we can run the "Add a PowerShell host" workflow…

PowerShellHost1

As you can see my Veeam server is actually the same as my vCenter server – I don't recommend doing this, but hey, it's a small lab!  Just be sure to use the FQDN of your Veeam server for the Host/IP column.

PowerShellHost2

Ensure that the remote host type is WinRM, and that the Authentication method is set to Kerberos.

PowerShellHost3

And be sure that we are passing our username in the ‘username@domain’ format, along with ‘Shared Session’ for the session mode.  Once you are done go ahead and click ‘Submit’.  If everything goes as planned your Veeam Backup and Replication server should be added as a PowerShell host within vRealize Orchestrator.

And now, the workflow!

Finally we can get to actually building our workflow.  If you remember, our script takes in three parameters: VM, Destination, and AutoDelete – so we will mimic the same with our workflow, only calling them Input Parameters within vRO (shown below).

workflow1

Now since we will be using the built-in PowerShell workflow 'Invoke an external script' we will also need to have some workflow attributes set up in order to pass to that workflow.  Below you can see how I've set up mine…

workflow2

Your configuration may vary a little from this one, but as you can see we simply add a PowerShell host attribute and map it to our newly added host, as well as assign the ScriptPath attribute to the path where we saved our little VeeamZip script earlier.  The arguments attribute can remain empty as we will only use this to build the arguments string to pass to the script.

workflow3

The first element we want to add to our workflow schema is a Scriptable task – go ahead and drag that over into your workflow.  This is where we will create our arguments string.

workflow4

As far as what goes into the scripting, you can see I've simply brought in the arguments attribute along with our three input parameters and chained them together into one string (arguments = '"'+VM.name+'" "'+Destination+'" "'+AutoDelete+'"';), then ensured that my arguments attribute was included in the output as well.

workflow5

Next drag the 'Invoke an external script' workflow into your schema (you can see I've renamed mine 'Run VeeamZip').  Ignore all of the prompts regarding the setup of parameters that pop up – the easiest way I like to do this is by editing the workflow (the pencil above it) and using the 'Visual Binding' tab as shown below.

workflow6

Simply drag and drop your in-attributes to their corresponding in-attributes on the external script workflow, along with mapping your output to output.  Easy peasy!

At this point you can go ahead and save and close your workflow – we are done with Orchestrator.  If you want to run the workflow a few times to test from within vRO go ahead – but the point of this post was to run it from within the Web Client so let’s move on to that step.

vRO and vSphere Web Client

I love vRealize Orchestrator and I love the fact that I can contextually execute custom workflows from within the vSphere Web Client.  To do this you need to first register your vRO instance with vCenter – this may have been done automatically for you depending on how you set everything up – I'm not going to get into that configuration today.  To get to our context mappings we need to click on Home->vRealize Orchestrator.  With the vRO Home in context, select the 'Manage' tab and then 'Context Actions'.  We then want to hit the little green + sign to add a new workflow context map.

webclientworkflow1

As far as the next steps go, they are pretty self-explanatory – navigate through your vRO inventory to your workflow, click 'Add', and select 'Virtual Machine' from the types box.  This is what will allow us to right-click on a VM and run our VeeamZip, passing the contextually selected VM to the workflow's VM input parameter.  Click 'OK' and it's time to VeeamZip!

Now when you want to run the workflow you can simply right click a VM and navigate to (in my case) All vRealize Orchestrator Actions->VeeamZipVM

execute1

As you can see our workflow will start, using our VM selected as the VM input, and simply prompt us for the destination and AutoDelete settings.

execute2

And there you have it!  We can now use the Free version of Veeam Backup and Replication to VeeamZip our VMs directly from within the vSphere Web Client.  In fact, our workflow will even show up within our vSphere tasks so we can monitor the status of the job.  Now, there is no error checking or anything like that…yet!  Let me know if you have any questions, concerns, etc… always happy to read the comments!

Friday Shorts – #vDM, New Web Client, Linux Cleanup, Betas and more…

Is it flowing? I like flowing, cascading hair. Thick lustrous hair is very important to me. Let me ask you this. If you stick your hand in the hair is it easy to get it out?

George Costanza – Seinfeld

Virtual Design Master 4 looking for sponsors

If you have never checked out the Virtual Design Master challenge I suggest you stop reading this and head over to their site and peruse the last 3 seasons, then come back here of course…  Anyways, the online, reality-based challenge is back for Season 4 and they are looking for sponsors to help provide prizes, swag, infrastructure, etc. for the upcoming season!  So if you work for a vendor and want to get your brand attached to VDM4, follow this link to indicate your interest!  They are looking to get everything firmed up to have a July/August competition.

New HTML5 vSphere Web Client!

Why VMware feels the need to change the lightning fast, crazy responsive, highly reliable vSphere Web Client that is currently out there is beyond me, but they are…  I hope you can detect the sarcasm in that last sentence.  Anyways, they have been hard at work (re)developing the vSphere Web Client, removing its reliance on Flash and Flex and providing the same functionality through code based on HTML5.  I've not yet had a chance to check this out, but from the reactions on the blogosphere and Twitter I'd say that they are on the right track!  They are releasing the new HTML5 web client as a Fling, allowing the product to get out into everyone's hands before it's integrated into a vSphere release.  If you have a chance go and check it out – it's simply a virtual appliance that integrates with your current environment.

Getting Linux ready for a vSphere Template!

Fellow VFD4 delegate Larry Smith recently posted about cleaning up your Ubuntu templates!  It's a great post that covers off a lot of things that you can do to ensure you have a clean, prepped instance of Ubuntu to use as a template within your vSphere environment.  That said, he takes it one step further, scripting out the complete cleanup in bash – and in Ansible.  If you deal with Linux/Ubuntu templates I would definitely recommend heading over to Larry's blog and applying some of this scripty goodness.

vSphere.next – Beta Time!

VMware has announced that the next version of vSphere will enter a (limited) public beta.  If you feel like you have the time and are ready to put in the effort of providing feedback, submitting bugs, etc. to VMware in regards to the next release of vSphere, then you can head here and indicate your interest in being a part of the beta.  As far as I know not everyone will be accepted – careful consideration will be taken on who is chosen to participate, as they want to ensure they are getting valuable feedback and discovering any gotchas in the product before releasing it to the masses!

ZertoCon – The Premier Business Continuity Conference

Zerto has been a long-time sponsor of this blog so I thought I'd place a shoutout to them and what they have in the works this spring!  You can join Zerto and many others from May 23-25 in beautiful Boston for ZertoCon.  Lately we have seen a lot of these smaller vendors opting to have their own conferences – and honestly, if you use their products they are a must for you to attend!  VMworld and EMC World are great venues, but honestly, these smaller, laser-focused conferences are absolutely fabulous if you are looking to gain more knowledge around certain vendors and their ecosystems!  I encourage you to check it out and sign up if you have the chance to go!

Adding Veeam Proxies to jobs via Powershell

There will come a time in every growing environment when you need to scale your Veeam Backup and Replication deployment to help keep up with the ever increasing demands of backing up all those new virtual machines.  Veeam itself has a couple of different deployment models when it comes to scaling.  We can scale up – this is done by adding more CPU and RAM to our current proxies and increasing the maximum number of concurrent tasks that our proxies can process – a good rule of thumb here is dedicating a CPU per task, so 2 concurrent tasks = 2 CPUs.  The other option is to scale out, which is done by building and adding additional Veeam proxies into the fold.  Which one you choose is completely up to you; however, I've had a better experience scaling out and adding more Veeam proxies into my infrastructure – why?  Not really sure, I just don't like having more than 2 or 3 processes hitting any one proxy at the same time – just a preference really…

If we accepted the defaults when creating our backup/replication jobs they should be set to 'Automatic selection' for the Backup Proxy settings – this means our work is done, as the moment we add the proxy into our backup infrastructure it becomes available to all the jobs.  That said, if you have changed settings (like me) to specify certain groups of proxies for certain jobs, then you will have to edit each and every job in order to have it utilize the new proxy.  This isn't a hard process, but it can present some challenges in terms of time depending on how many jobs you have.  I don't have an extreme number of jobs, maybe 10 or so, but I also don't like doing the same thing over and over as it often leads to mistakes.

Enter PowerShell

So with all that said, here's a quick little PowerShell script that you can utilize to add a new proxy to a list of existing jobs.  As you can see I've chosen to add it to all of my jobs, but this can easily be modified to grab only the jobs you want by passing filtering parameters (such as -Name) to the Get-VBRJob cmdlet.  The script is pretty simple, taking only one parameter, the proxy to add (***NOTE*** you will need to go into Veeam B&R and configure this machine as a proxy within your Backup Infrastructure – that part isn't automated), looping through all of my jobs, retrieving a list of existing proxies and adding the new one to that list, then applying the new list back to the job.  It does this for both the source proxies and the target proxies (as you can see with -Target).

Param ( [string]$proxyToAdd )

Add-PSSnapin VeeamPSSnapIn

$newProxy = Get-VBRVIProxy -Name $proxyToAdd
$jobs = Get-VBRJob

foreach ($job in $jobs)
{
    # Add the new proxy to the job's source proxy list
    $existingProxies = Get-VBRJobProxy -Job $job
    $newProxyList = $existingProxies + $newProxy
    Set-VBRJobProxy -Job $job -Proxy $newProxyList

    # ...and do the same for the target proxy list
    $existingProxies = Get-VBRJobProxy -Job $job -Target
    $newProxyList = $existingProxies + $newProxy
    Set-VBRJobProxy -Job $job -Proxy $newProxyList -Target
}

Simply save your script (I called it AddProxyToJobs.ps1) and run it as follows…

c:\scripts\AddProxyToJobs.ps1 newproxyname

There is absolutely no error checking within the script so by all means make sure you get the syntax right or you could end up with a slew of errors.  Either way, this is a nice way to add a new proxy to a list of jobs without having to manually edit every job.  And as I mention with every script I write if you have any better ways to accomplish this, or see any spots where I may have screwed up please let me know….
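If you did want a small safety net, a minimal sketch might look like the following – bail out if the proxy name can't be resolved, and optionally narrow the job list with a Where-Object filter (the "SQL*" pattern is just an example):

$newProxy = Get-VBRVIProxy -Name $proxyToAdd
if ($null -eq $newProxy)
{
    Write-Error "Proxy '$proxyToAdd' was not found in the Veeam backup infrastructure."
    exit 1
}

# Optionally scope the change to a subset of jobs instead of everything
$jobs = Get-VBRJob | Where-Object { $_.Name -like "SQL*" }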

How the Friday Shorts posts come to be

I'm always looking for ways to automate things – whether it be in my work life, personal life, home life, or even my blog life.  When I first started doing Friday Shorts, an initiative for me to share out some blogs and articles that sparked my interest, it was a manual process.  There was a lot of "Who wrote that article again?  What site was that again?  Where's that link I emailed?" going on in my head, and even though it was only a handful of links it was still a lot of work.  Now, I had already automated my sharing of links out to Twitter (which needs an update BTW – oauth1 is deprecated in Google Script now – another post) and for the most part I've found that the articles I choose for Friday Shorts are the same ones I share out.  So with all that said I set out to automate a process that would, at the very least, get a draft of a Friday Shorts article into this blog – and this is what I've come up with.

So in order to do this I've utilized four different services: Google Drive/Scripts, IFTTT, Delicious, and WordPress – each playing a key role in the automation.  The process goes as follows:

  • While perusing all the great blogs out there if I find one I feel needs to be included, I quickly use the Delicious toolbar button to create a public bookmark with the ‘FridayShorts’ tag.  Also, while using my RSS reader of choice (Digg) – if I ‘Digg’ a post, IFTTT runs a recipe to automatically create bookmarks of the post with the proper tag.
  • Another IFTTT recipe takes all of my Delicious ‘FridayShorts’ tagged bookmarks, and appends them to a Google Spreadsheet within my Drive account.
  • From there I have a WordPress plugin I’ve developed which essentially connects to a Google Script which parses a spreadsheet – allowing me to select which articles I’d like to include and finally creating a draft WordPress post (Friday Shorts) following a specific template.  The articles which I select are then updated (through the Google Script calls) in order to ensure they aren’t displayed the next time I go through the process.
  • I clean up the post, add some pictures, descriptions, links and whatever and publish it…

So with all that said let's have a quick look at how each of the components is set up and configured.

IFTTT

If This Then That is a great tool for automating almost anything within your life.  For the purposes of creating a Friday Shorts post I have two main recipes which are utilized…  In all honesty, recipes on IFTTT are fairly simple to set up so I'll simply show the screenshots outlining what they do.

Digg to Delicious
Delicious to Google
DiggToDeliciousIFTTT DelciousToGoogleIFTTT

See – pretty simple to set up – and at the end of it all we should be left with a spreadsheet similar to the following…

FridayShortsSpreadsheet

Google Script

Now we have the information that we want sitting inside a Google Spreadsheet – but before we get into the WordPress plugin we first need to create a Google Script (script.google.com) containing all of the functions and methods which will actually do the work we request from WordPress.  To do so, go to script.google.com and select File->New Project.  This should open up a blank script for you to run with.

Sometimes it’s easier just to see the whole thing first so let’s just lay it out there for you – we can go through some specifics at the end of the script.

// open up the spreadsheet and set the active sheet.
var files = DriveApp.getFilesByName('FridayShortsLinkCollector');
var file;
while (files.hasNext())
{
  file = files.next();
  Logger.log(file.getName());
}

var ss = SpreadsheetApp.open(file);
var sheet = ss.getSheets()[0];

// return specific link
function getLink(articleid)
{
  var values = sheet.getDataRange().getValues();
  var articles = [];
  for(var i=0, iLen=values.length; i<iLen; i++)
  {
    if(values[i][6] == articleid)
    {
      var article = [];
      article.push(values[i][6].toString());
      article.push(values[i][1].toString());
      article.push(values[i][4].toString());
      article.push(values[i][0].toString());
      article.push(values[i][5].toString());
      articles.push(article);
    }
  }
  return articles;
}

// return all unprocessed links
function getAllLinks2()
{
  var values = sheet.getDataRange().getValues();
  var articles = [];
  for(var i=0, iLen=values.length; i<iLen; i++)
  {
    if(values[i][5] == "0")
    {
      var article = [];
      article.push(values[i][6].toString());
      article.push(values[i][1].toString());
      article.push(values[i][4].toString());
      article.push(values[i][0].toString());
      article.push(values[i][5].toString());
      articles.push(article);
    }
  }
  return articles;
}

// mark specific link as processed
function markArticleProcessed(articleid)
{
  values = sheet.getDataRange().getValues();

  for(var i=0, iLen=values.length; i<iLen; i++)
  {
    if (values[i][6] == articleid)
    {
      values[i][5] = "1";
    }
  }
  sheet.getDataRange().setValues(values);
}

So, if you are having trouble with my great comments throughout the code (I know, one-line here and there :)) let me try and explain a few things.  First of all, Lines 1 through 11 simply find the Spreadsheet within my Google Drive containing the links, then set the active sheet to the first one.

From there things are a bit simpler – the getLink() function returns an array for one specific article, taking in the articleID (an auto-generated number in the spreadsheet) as a parameter.  The getAllLinks2() function returns an array of all of the articles within the spreadsheet (so long as their 5th cell – or "processed" as I call it – contains a value of 0).  The markArticleProcessed() function takes in a specific articleID as a parameter and simply changes its 5th cell to a 1 – meaning it has been processed into a Friday Shorts article.

GoogleDeployAPI

That's it for the Google code, but there are a few other things that you will need to do in order to make your script available for use by the WordPress plugin.  First off, select 'Publish->Deploy as API executable'.  Be sure to select 'New' under the version (you will need to do this upon making any changes to the code) and make note of the API ID – we will need this for the PHP calls in WordPress.  Go ahead and click 'Update' when ready!

Secondly, we need to open up some APIs within Google in order to allow the script to access your Google Drive content.  This is done in the Developers Console.  The easiest way I've found to get to the place we need to be is by selecting Resources->Developer Console Project.  If you haven't already, you will have to give your project a name and save it.  Once that is done simply click the link shown to go to the script's associated project.

DevelopersConsoleLink

The two APIs that we will need to open up for this project are the Drive API and the Google Apps Script Execution API (as shown below).  From the Dashboard select 'Enable APIs and get credentials like keys'.  From there you should immediately see the Drive API, but you may have to search for the Script Execution API.  Either way, in the end you need to have them both enabled as shown below…

APIsEnabled

From here move down to the Credentials section.  We need to create credentials to allow WordPress to access our Google content.  To do so select 'New Credentials' and then 'OAuth Client ID' as the type.  When presented with the application type choose 'Web Application', give it a name, and set the proper redirect URLs.  This can get tricky and will certainly change given your setup, but my working setup is shown below.

redirectURIs

As you can see I’ve entered quite a few redirect URIs – not sure if I need them all but it works this way.  Also, make note of your Client ID and Secret – put it in the same place as your Script/API Id as we will need all of this for the WordPress Plugin.  For now, we are done with Google 🙂

The WordPress Plugin

I don't want to go too deep into the specifics of how to create WordPress plugins – partly because there is a lot to cover – too much for this post, oh, and partly because I have no idea about 99% of it.  I simply fiddled until I got what I needed to work.  Again, let me simply lay out some code and explain what it does – I'll leave the finer details of the structure of WordPress plugins to someone else.

folderstructure

As for my plugin there are really only 5 components to it..

  • The Google PHP SDK – download it here
  • fridayshorts.php – this is the main plugin page
  • fridayshorts-functions.php – this is a page containing all of the function calls to the Google Script
  • options.php – this page contains a means to setup the options for the plugin (the Script ID, Client ID and Client Secret)
  • js – some Javascript for checking boxes and stuff 🙂

Due to the fact that there are probably a few hundred lines of code within the plugin itself I'm not going to throw it all out there – instead I'll just put up a few examples of how I call the Script Execution API from within the PHP code.

First up we need to import the PHP SDK for Google and declare some variables – I've stored my client ID and secret (from the Google section above) in WordPress options – so to recall these we simply need to do the following…

require_once 'Google/autoload.php';
 
$client_id = get_option('fsGoogleClientID');
$client_secret = get_option('fsGoogleClientSecret');
$redirect_uri = get_option('fsGoogleRedirectURI');

Now that we have this information we can start setting up the objects we need to interact with our Google script as shown below, storing everything we need in the client object.  As you can see I’ve also specified the scopes in which the API requests will fall under.

$scriptId = get_option('fsGoogleScriptID');
$client = new Google_Client();
$client->setApplicationName("Process Friday Shorts");
$client->setClientId($client_id);
$client->setClientSecret($client_secret);
$client->setRedirectUri($redirect_uri);
$client->setScopes(array('https://www.googleapis.com/auth/drive','https://spreadsheets.google.com/feeds','https://www.googleapis.com/auth/spreadsheets'));
$client->setAccessType('offline'); // Gets us our refreshtoken

As far as processing one of the functions in the Google Script, that can be done as shown below.  I've also shown the code I use to display data on my plugin pages so you can sort of visualize what is happening.  The function name getAllLinks is passed through the setFunction method (a lot of "function" in there :)) and you can see how I go about parsing the response back from the API call to build out an HTML table containing data from my spreadsheet.

function getAllLinks()
{
    global $client, $scriptId;
    $client->setAccessToken($_SESSION['token']);
    $service = new Google_Service_Script($client);
    $request = new Google_Service_Script_ExecutionRequest();
    $request->setFunction('getAllLinks');
    $response = $service->scripts->run($scriptId, $request);
    $resp = $response->getResponse();
    $articles = $resp['result'];

    //build html table and return
    $content = '<table width="98%" bgcolor="white"><TR><TD><input type="checkbox" onchange="fridayshorts_links_checkall(this)" name="checkAll" value="all"></TD><TD><B>Title</B></TD></TR>';
    foreach ($articles as $article)
    {
        $content .= '<TR><TD><input type="checkbox" name="art[]" value="'.$article.'"></TD><TD>'.$article.'</TD></TR>';
    }
    $content .= '</TABLE>';
    return $content;
}

Anyways, enough with the PHP code – if you are in dire need of it just let me know and I'll send it to you – way too boring to go through it all line by line.  In the end though I'm left with a nice little GUI that allows me to select which items I'd like to include in my Friday Shorts post, as shown below…

fridayshortsplugin

Once I've selected which posts I'd like to include within my new Friday Shorts post I can go ahead and click Create Draft.  What happens then is a new draft is created within WordPress in the format that I specified for my Friday Shorts posts.  The code to do so is as follows…

$my_post = array(
'post_content' => $content,
'post_title' => "Friday Shorts",
'post_status' => 'draft',
'tags_input' => 'Friday Shorts'
);
$post_id = wp_insert_post( $my_post, true );

Additionally, remember that markArticleProcessed function within our Google Script?  It's called as well – as the plugin loops over the selected links it sends each article's ID back to the Google Script using the setParameters method on the request object – as follows…

function markArticleProcessed($articleid)
{
    global $client, $scriptId;
    $client->setAccessToken($_SESSION['token']);
    $service = new Google_Service_Script($client);
    $request = new Google_Service_Script_ExecutionRequest();
    $request->setFunction('markArticleProcessed');
    $request->setParameters($articleid);
    $response = $service->scripts->run($scriptId, $request);
    $resp = $response->getResponse();
    $article = $resp['result'];

    return;
}

So now you know just how far I will go in order to maintain my comfort level of laziness – honestly, automation is key in my life and anything I can automate means more time for creativity!  Again, I'm sorry I couldn't go deeper into the PHP/WordPress plugin development – it would just be one heaping pile of code on a page that makes no sense – but if you are interested definitely get in touch with me and I will send it along!  Anyways, thanks for reading this far and I hope this post helps you in some way automate something of your own!

Setting up VVOLs on HP 3PAR

As I've recently brought an HPE 3PAR 7200 into production with an ESXi 6.0 U2 cluster I thought what better time than now to check out just how VVOLs are implemented by HPE.
Although the tasks to do so aren't difficult by any means, I find the documentation around them is a bit scattered across different KBs and documents between VMware and HPE, especially if you have upgraded to the latest firmware (3.2.2 MU2).

Pre-reqs

As far as prerequisites go there really aren't many, other than ensuring you are up to date on both your 3PAR firmware and ESXi versions.  For the 3PAR you will need to ensure you are running at the very least 3.2.1.  In terms of vSphere – 6.0 or higher.  Also don't forget to check your HBAs on the VMware HCL and ensure that they are actually supported as well, and note the proper firmware/driver combinations recommended by VMware.
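If you want a quick inventory of the HBAs across your cluster to cross-reference against the HCL, a rough PowerCLI sketch like the one below can help – the cluster name is a placeholder, and you'll still need to check the reported models and drivers against the HCL manually:

# List the FC HBAs on every host in the cluster, with model and driver for HCL cross-checking
Get-Cluster "Production" | Get-VMHost | ForEach-Object {
    $_ | Get-VMHostHba -Type FibreChannel |
        Select-Object @{N='Host';E={$_.VMHost.Name}}, Device, Model, Driver, Status
}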

After spending the day(s) updating firmware (ugh!) it’s finally time to get going.

Step 1 – Time

NTP is your friend for this step.  Before proceeding any further you need to ensure that all of your hosts, your vCenter Server, and the 3PAR are all synced in terms of time.  If you have NTP set up and running then you are laughing here, but if you don't, stop looking at VVOLs and set it up now!  It should be noted that the 3PAR and the VMware infrastructure can be set to different time zones; however, they must still be synced in terms of time!
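A quick way to eyeball NTP across the hosts is PowerCLI – again just a sketch, with a placeholder cluster name:

# Show the configured NTP servers and whether the ntpd service is running on each host
Get-Cluster "Production" | Get-VMHost | ForEach-Object {
    [PSCustomObject]@{
        Host        = $_.Name
        NtpServers  = (Get-VMHostNtpServer -VMHost $_) -join ', '
        NtpdRunning = (Get-VMHostService -VMHost $_ | Where-Object { $_.Key -eq 'ntpd' }).Running
    }
}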

Step 2 – Can we see the protocol endpoint?

At this stage we should actually check our ESXi hosts to ensure we can see the protocol endpoint on the 3PAR.  To do so we will need to ensure that we see the same WWN after running a couple of different commands.  First, as shown below, the ‘showport’ command on our 3PAR.  Circled is the WWN of our 3PAR array.  Make note of this!

showport

With the WWN of our storage array in memory we can now head over to our ESXi hosts.  SSH in and run the 'esxcli storage core device list --pe-only' command.  This command will return any protocol endpoints visible from the ESXi host.  If all goes well we should see the same WWN that we did with showport, and the 'Is VVOL PE' flag set to true – as shown below.

pe-only

As you can see, we have a match so at least we have some visibility from our hosts!

Step 3 – VASA

showvasa

As we all know, the whole concept of VVOLs requires the array to support VASA 2.0 and act as a storage provider for vCenter – this is what allows us to create our VM profiles and have the array automatically provision VVOLs depending on what profile is selected.  On the 3PAR we can check the status of VASA by simply running the 'showvasa' command.  In the case shown we can see that it is already enabled and functioning properly; however, this wasn't always the case for me.  To enable the service I first tried the 'startvasa' command, however it complained about not having a certificate.  The easiest way to generate one, if you plan on using self-signed certificates, is to simply run the 'setvasa -reset' command.  This will reset your VASA configuration and generate a self-signed cert.  After this you can simply run 'startvasa' to get everything up and running…

Step 4 – Create the storage container

Now if you are following the HPE VVOL integration guide you won't see this step, mainly because the guide was written around the 3.2.1 firmware, which would have already had a single default storage container created for you.  If you are running 3.2.2, though, you have the option to create more than one storage container – and by default it comes with, well, no storage containers at all.  So before we go and register our vCenter with the VASA provider we first need to create a storage container to host our VVOL datastore.  First, create a new Virtual Volume set with the following command:

createvvset myvvolsetname

Then, let’s create our storage container and assign our newly created set to it

setvvolsc -create set:myvvolsetname

Again, these commands wouldn't be required in 3.2.1 as far as I know, but they are in 3.2.2.

Step 5 – Register our VASA within vCenter

Now it's time to head over to the familiar, lightning fast interface we call the vSphere Web Client and register the 3PAR's VASA implementation as a storage provider.  Make note of the 'VASA_API2_URL' shown in step 3 – you will need this when registering.  With your vCenter Server context selected, navigate to Manage->Storage Providers and click the plus sign to add a new storage provider.

registerprovider

Enter your VASA URL from step 3, along with a name, username, and password, and click 'OK'.  For this instance I've used 3paradm, but you may be better off creating a new account with just the 'service' role within the 3PAR.  Either way, get your new storage provider registered in vCenter and wait for the status to show as online and active.
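If you'd rather script the registration, recent PowerCLI releases ship VASA provider cmdlets – treat the sketch below as an assumption to verify against your PowerCLI version, and note that the name, URL, and credentials are just placeholders for whatever showvasa reported:

# Register the 3PAR VASA provider with vCenter (verify New-VasaProvider exists in your PowerCLI build)
New-VasaProvider -Name "3PAR-VASA" -Url "https://3par.lab.local:9997/VASA2" -Credential (Get-Credential)

# Confirm it shows up
Get-VasaProvider -Name "3PAR-VASA"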

Step 6 – The VVOL datastore

We are almost there, I promise!  Before we can deploy VMs within a VVOL or assign storage profiles that match certain CPGs within the 3PAR, we need to have our VVOL datastore set up within vCenter.  I found the best spot to create this datastore is by right-clicking the cluster or ESXi host we want to have access to VVOLs and selecting Storage->New Datastore.  Instead of selecting VMFS or NFS as we normally would, select VVOL as the type as shown below.

vvoltype

On the next screen simply give your datastore a name and select the storage container (this is what we made available in step 4).  Then, simply select the hosts you wish to have access to deploy VVOLs to and away you go!

Step 7 – Storage Profiles

At this point you could simply deploy VMs into your newly created VVOL datastore – the 3PAR will intelligently choose the best CPG to create the VVOL in – but the real power comes from being able to assign VM storage profiles to our disks and having the VVOL land in the proper CPG depending on the array capabilities.  Storage profiles are created by clicking on the Home icon and navigating to Policies and Profiles within the web client.  In the VM Storage Profiles section simply click the 'Create new storage profile' button.  Give your new profile a name and continue on to the Rule-Set section.

profile

The rule sets of my "Silver" VM storage profile are shown above.  As you can see, I've specified that I want this storage profile to place VM disks within my FastClass RAID 5 CPG, and place their subsequent snapshots in the SSD tier CPG.  When you click next you will be presented with a list of the compatible and incompatible storage.  Select your compatible storage and click next.  Once we have all of the profiles we need we can simply assign them to our VM's disks as shown below…

vmprofile

As you can see I've selected our newly created "Silver" policy for our new VM.  What this states is that when this VM is created, a new VVOL will be created on the FastClass disks on the 3PAR to house it.
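Storage policies can also be applied after the fact with PowerCLI's SPBM cmdlets.  The following is a rough sketch, assuming the SPBM module that ships with PowerCLI 6.x and using placeholder VM and policy names:

# Grab the "Silver" policy and apply it to an existing VM's hard disks
$policy = Get-SpbmStoragePolicy -Name "Silver"
$disks  = Get-VM "MyNewVM" | Get-HardDisk

# Associate the policy with each disk (the VM home object can be handled the same way)
Get-SpbmEntityConfiguration -HardDisk $disks | Set-SpbmEntityConfiguration -StoragePolicy $policy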

Step 8 – VVOL visibility

Although we are technically done with deploying VVOLs at this point, I wanted to highlight the showvvolvm command that we can utilize on the 3PAR in order to gain visibility into our VVOLs.  The first use is simply listing out all of the VMs that reside on VVOLs within the 3PAR:

showvvolvm -sc

showvvol1

As you can see by the Num_vv column we have 3 VVOLs associated with our VM (MyNewVM).  But how do we get information on those VVOLs individually?  We can use the same command, just with the -vv flag:

showvvolvm -sc -vv

showvvol2

So now we can see that we have 1 VVOL dedicated for the config, 1 VVOL dedicated for the actual disk of the VM, and finally 1 VVOL hosting a snapshot that we have taken on the VM.

Anyways, that’s all I have for now – although I haven’t gone too deep into each step I hope this post helps someone along the way get their VVOLs deployed as I had a hard time finding all of this information in one spot.  For now I like what I see between HP and VMware concerning VVOLs – certainly they have a long road ahead of them in terms of adoption – we are still dealing with a 1.0 product from VMware here and there are a lot of things that need to be worked out concerning array based replication, VASA high availability, functionality without VASA, GUI integration, etc – but that will come with time.  Certainly VVOLs will change the way we manage our virtualized storage and I’m excited to see what happens – for now, it’s just fun to play with 🙂  Thanks for reading!