Category Archives: Posts

Primary Data set to make their 5th appearance at Storage Field Day

This June @ SFD13 Primary Data will make their 5th appearance at a full-fledged Tech Field Day event despite having GA’d their flagship product just 9 short months ago at VMworld 2016 (they have a few TFD Extra events under their belt as well).  Now, pardon my math, but for a company that was founded in 2013 that means they spent roughly 3 years in development – gathering loads of customer feedback and beta testing before releasing their data management platform to the wild, all the while giving updates via TFD – there’s an example of a company taking its time and doing something right!

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. are all paid for by Gestalt IT; however, I’m not required or obliged to return the favor in any way other than my presence 🙂 – which still feels weird to say 🙂 Well, my presence and possibly a little bit of maple syrup.

Now, before we go into detail about Primary Data’s products and what they provide I wanted to take a second to talk a little bit about their founders and leadership team – because we all know great products and execution start from the top – and Primary Data has quite the top!  Let’s have a look below…

Lance Smith (CEO) – Despite holding many top-level leadership positions over the years, perhaps the most relevant would be COO of Fusion-io and, following the acquisition, Senior Vice President and GM inside SanDisk.

Rick White (Co-Founder and CMO) – This isn’t Rick’s first go-around, as he was also one of the original co-founders and CMO at Fusion-io.  Are you beginning to sense a little pattern here around Fusion-io? 🙂

David Flynn (Co-Founder and CTO) – Here comes that pattern again!  Prior to founding Primary Data, David Flynn was a co-founder and CEO at Fusion-io.  His bio on the site also states that David holds over 100 patents across a wide range of technologies – not too shabby.

Steve Wozniak (Chief Scientist) – Yeah, that’s right, the Steve Wozniak, you know, the guy from Dancing with the Stars – oh, and he kind of helped found and shape a small little valley startup named Apple.

Now keep in mind these are just 4 people that I have picked out off of Primary Data’s company page!  And even though there is a ton of brainpower listed here, there are still a lot more people at Primary Data whose experience in the industry just blows my mind!

So what exactly does Primary Data bring to market?

Primary Data’s key storage solution focuses around their flagship product DataSphere.  Now DataSphere in itself isn’t a storage array – it’s best described as what they coin a “Metadata Engine for the Enterprise”.   So what does this really mean?  Well, hopefully I’m getting it here, but to me DataSphere looks like somewhat of an abstraction tool – a way to de-couple all of your applications from the data that they use and store.  The first step is to take all of an organization’s storage and pool it together into a single logical namespace.  It’s this storage, be it direct-attached, SAN based, or even cloud, which can in turn be presented back to your enterprise applications.   But it’s not necessarily this pooling which drives up the value of DataSphere – it’s the analytics and automatic data movement that really stand out for me.  DataSphere is able to map a set of application rules or objectives to automatically move data across different tiers of storage in order to ensure that certain SLAs are met, or more so, that the right resources are being assigned to the right applications.  Meaning the proper storage resources are provisioned to applications, no matter where that application is running, no matter where that storage is running – all we need to do is specify that we want Application X to have Storage Requirements Y and let DataSphere tier and tune to its heart’s delight!  The metadata engine keeps track of where things are and what’s going on.

As you can imagine this gives us a really good opportunity to scale in terms of storage – and really prevents the need to over-provision, which has become such a de facto standard in deployments today – in turn, organizations save dollars!    Now from what I can tell DataSphere doesn’t necessarily sit inside the data path for the application either – meaning it’s virtually invisible and doesn’t affect performance in any way.  Instead it lies in somewhat of an out-of-band architecture – allowing applications to have direct paths to their data while DataSphere simply handles the metadata processing and analytics.

There are a ton of benefits that I can initially see with DataSphere – scaling, migration to cloud, and simply performance compliance are a few that come straight to mind.  I’m most certain there is much, much more that can be done and I know I’m just scratching the surface, so with that I guess we will have to wait until Primary Data presents at Storage Field Day in June to learn more.  As always follow along with the hashtag #SFD13 or check out my page here for all that is SFD13!

My Veeam VMCE-A Course and Exam Experience

First up a bit of a back story – as a Veeam Vanguard I was lucky enough to have received the required training last June in order to qualify for my VMCE exam, which I wrote and passed in August of 2016!  A nice little perk of the program if you ask me!  Anyways, earlier this month a handful of us were again lucky to participate in the next level of training, the VMCE-A Design & Optimization course in an online pilot, thus qualifying us to write the VMCE-A exam.    Under normal circumstances I would take a lot of time to study up and possibly create guides on this blog for the certifications I write – however, with VeeamON right around the corner and the ability to take advantage of a “Free Second Chance” offer for writing certifications on site, my normal study strategies didn’t apply.  I couldn’t pass up the chance of at the very least getting a look at the exam, even if it meant failing – hey, a free second chance!

So with the course fresh in my memory I studied where I could between it and my exam appointment at the conference, be it on the car ride to the airport, at 30,000 feet in the air, and during a few meals at the conference.  Anyways, the tl;dr version is I passed the exam…barely – getting only 4% over the pass mark of 70%.  Certainly not a mark I’d be super proud of, but in the end a pass is a pass and I’ll take it!

On to the exam!

The VMCE-A D&O exam is 40 randomized questions, all multiple choice.  Some questions have only one answer, while some are in the “Select 2, Select 3” format.  As mentioned earlier a passing score is 70% or higher.  As far as the content goes I can’t say a lot as NDAs are in effect, however what I can say is that all questions I received are fully covered within the VMCE-A D&O course material – and in fact, at the end you get a nice little detailed breakdown of how you scored in the different sections covered in the course (Design & Sizing, Infrastructure, Security, Optimization, Automation & Compliance, and Troubleshooting).  This certainly helps you to nail down where you might want to freshen up in order to improve your skill-sets.

One big thing I will say is that this exam is tough!  For as easy as Veeam can be to simply get up and running, there is a lot to know about their complete suite of products – and a lot to cover in order to test on all of the features and benefits of just Veeam Backup & Replication.  Now being a customer I’m not designing these Veeam solutions day in and day out, so I focused a lot of my attention on the design section, as well as other parts of VBR that I don’t use that often.  But just as with the VMCE it’s not enough to solely focus on VBR – Veeam ONE, Cloud Connect, etc. – these are all fair game for testing on this exam – so if you don’t use them I would certainly recommend brushing up on them.   I can’t stress enough that all of the content I was tested on in the exam is covered within the course materials (textbook/slides) – so pay attention during the course!  I can say that if you see something labeled as a best practice or a formula you should remember it – remember, this is an architect exam based on designing Veeam environments!  Just keep that in the back of your mind while studying!

As far as timing goes you have 1 hour (add another 30 minutes if English isn’t your first language) to complete the 40 questions.  I found this to be more than enough time.  Just like VMware’s VCP exams you can flag certain questions for review, and go back and forth between questions on the exam at your leisure.  The strategy I took, since I had no idea how much of a time crunch there might be, was to simply go through the questions, answering the ones I knew I had right and flagging any that I was unsure of for review after.  This process took me roughly 30 minutes, which allowed me another 30 minutes to go back and review those questions I didn’t quite have a grasp of.  My review took roughly 10 minutes – after that I went through every question again, double-checking and tallying in my head how many I knew I had right, hoping to come up with a score high enough to make me feel comfortable enough to click that dreadful ‘End Exam’ button.  In the end I knew I was close, but ended it anyways!

You will get your score immediately after completing the exam – so you know whether it was a pass or fail right away – no painful time spent wondering 🙂  Also, as mentioned earlier, upon exiting the facility you will get a printout showing how you scored in each category.  I’m certainly happy I passed and know that I can for sure improve in some areas – maybe another study guide is in the cards for me!

The Veeam Certification Paths

For those that don’t know, Veeam currently has 2 different certifications.  The VMCE, which documents proof that the engineer has the necessary level of knowledge to correctly deploy, configure and administer Veeam Availability Suite.  Then, the VMCE-A D&O, which adds on to the knowledge from the VMCE, bringing in more of a design and optimize feel to the test, all the while following the Veeam best practices.  Once you have achieved both the VMCE and the VMCE-A, Veeam accredits you with the title of Veeam Certified Architect, or VMCA.  The VMCA is not a separate certification and does not require a separate step – it’s simply a designation handed to those who have completed the requirements for both the VMCE and VMCE-A and passed both exams.

[Image: Veeam certification path]

A little about the course

Honestly, even if you don’t go through with the exam, the VMCE-A Design and Optimization course is an awesome course to take.  I guarantee you will get something out of it even if you design on a daily basis.  For me, being a customer and administrator of these products, it was an awesome opportunity to walk through the Veeam design methodologies, deep diving into each step one by one to come out with the full solution.  The course has a couple of design scenarios inside of it, of which there is really no right or wrong answer.  We broke into a couple of different groups to do these and it was amazing to see just how different the end designs were.  The instructors take the opportunity to pick away at these designs, trying to understand your thought process and figure out how you think – asking a lot of questions in regards to why you set it up the way you did!  This to me was the biggest advantage of the course – having that interaction and learning other ways to accomplish similar results – and seeing where you might be going astray in your thought process.

So with that I hope this helps anyone else who might be on the fence about taking either the course or the exam.  I can proudly say that I am a VMCA now and that feels great (and I’m glad I don’t have to cash in that second chance as it’s a very tough exam – or at least it was to me).

A field day of Storage lies ahead!

I’ve had the awesome opportunity to participate in a few Tech Field Day events throughout the last few years, travelling to Austin for VFD4, Boston for VFD5, and finally San Jose for TFD12 just last November!  To be honest, these days provide me with more knowledge than a lot of the week-long training courses I take!  They are jam-packed with technical deep dives from a variety of companies – companies who are eager to share and get the message out in regards to their offerings!  And it’s not just the presenting companies jamming the knowledge in my brain, it’s the delegates as well!  I’ve honestly met some of the smartest people I know there as a fellow delegate – not just on virtualization, but spreading across the whole IT ecosystem.   Anyways, I’m super excited to have been invited back to another field day experience – this one, a storage event, taking place June 14-16 in Denver!

#SFD13

As it stands today Storage Field Day 13 is shaping up to have 6 presenting companies – all with their own different ties into the storage market.  We have everything from the tried and true companies such as DellEMC and Seagate all the way through to the startup technologies of Primary Data and the newly acquired Exablox.  In between sits the non-profit, vendor-neutral SNIA along with the hometown Colorado-based X-IO Technologies.


Certainly a wide variety of companies to hear from – which should help to keep the event both exciting and interesting!

I mentioned that I’ve gained a lot of knowledge from other delegates in the past – and man o man this one will be no different.  I’m actually super excited for this event as I’ve really only met a few people on this list – so there will be a lot of new faces and friends to make here for me.  Honestly, it’s a little bit intimidating as there are some real storage rockstars on this list!

I’ll do my best to try and get a few preview posts out for some of those presenting companies I know little about.  Mainly for my own homework so I don’t “go in cold” and get thrown into the deep end 🙂  That said I can’t promise anything as this event is quickly sneaking up on me and is only a couple of weeks away now!   As always I’ll try to get the stream set up on my event page here – as well as place any content I create surrounding SFD13.   Be sure to follow along on Twitter as well using the hashtag #SFD13 and be sure to keep an eye on the official event page!  See you in Denver!

#VeeamON 2017 – Wait! There’s more!

If you packed your bags up and started to shut down after the Wednesday keynote thinking you had heard the last of the VeeamON announcements, then you might want to think about digging out that notebook and sharpening your pencils again, as we aren’t done yet here at VeeamON 2017!

Thursday, Paul Matiz took the stage for the final keynote of the show and made some more announcements around existing products – and threw the gauntlet down on one brand new product, the Veeam PN, which I released a separate post about!  That said, Veeam PN wasn’t the only Thursday announcement – there were a few others outlined below.

Veeam Backup for Office 365

Veeam released the first version of their Office 365 email backup last year and honestly people flocked to it!  With more and more companies migrating to the Microsoft-hosted solution rather than putting up with the headaches of dealing with multiple on-premises Exchange servers, Veeam wanted to take advantage and help those organizations protect their most critical communication asset!

With version 1.5 announced just the other day, things like scalability have been added, letting you horizontally scale your Office 365 backups by adding new proxies and repositories to help reduce the time it takes to pull down your O365 mailboxes.

In addition to this, automation has also been a focus.  With full PowerShell support coming into the product we can now use the easy verb-noun cmdlets to back up and restore Office 365 mailboxes.  And, more than just PowerShell – a fully supported RESTful API is also available.
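Purely to illustrate that verb-noun style, here’s a minimal sketch on my part – the module and cmdlet names below are assumptions about how the product’s PowerShell support is exposed, so double-check them against the official documentation before relying on this:

# Load the Veeam Backup for Microsoft Office 365 PowerShell module (module name assumed)
Import-Module Veeam.Archiver.PowerShell

# List the configured Office 365 backup jobs, then kick one off by name
Get-VBOJob
$job = Get-VBOJob -Name "O365 Mailbox Backup"   # "O365 Mailbox Backup" is an example job name
Start-VBOJob -Job $job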

That said, why stop at version 1.5 – let’s get moving on to 2.0.   The Veeam community spoke – we loved the ability to back up the email and use the explorer functionality to perform restores back into mailboxes – but Office 365 is so much more – what about SharePoint?  What about OneDrive?

Well, Veeam now has an answer for those questions as they released somewhat of a roadmap for their Office 365 backup strategy, with both SharePoint and OneDrive on it in version 2.0.

Veeam Management Pack v8 update

For those System Center users that use Veeam Management Pack to monitor and gain insights into their critical applications running on both VMware and Hyper-V, you will be pleased to know that Veeam has released a slew of new features into Veeam MP v8.  Now providing Azure-backed dashboards in Update 4, Veeam MP users will be able to update instantaneously.

Veeam has certainly announced a lot of things this week – with a heavy focus on cloud.  With Mark Russinovich doing the final keynote of the conference I can certainly say that cloud is most definitely the future – and data protection needs to be part of that!

Veeam announces the new Veeam Powered Network (Veeam PN)

During the final keynote of VeeamON 2017 Veeam took the stage and threw down the gauntlet on a brand new Veeam product release, the Veeam Powered Network, or Veeam PN for short.

Veeam PN is a new product, not a feature added to any others, which was initially developed to solve an internal issue within Veeam.  Veeam has a lot of employees and developers in remote sites all across the world – and the pain of constantly connecting those sites together via VPN, coupled with the frustration of tunnels dropping all the time, gave birth to the Veeam PN.  It feels a lot like how a VMware fling comes to life: first being internal only, then released to the masses, then actually built out as an application offering.  Although Veeam PN can be used to establish this connectivity between any sites at all, the real benefits and the initial design intentions all focus on Microsoft Azure.

Veeam PN – Disaster Recovery to Microsoft Azure

Veeam PN is deployed into your Azure environment via the Azure Marketplace.  Once your cloud network has been established, another virtual appliance is then deployed from veeam.com into your on-premises environments.  From there it’s as simple as setting up which networks you wish to have access into Azure and importing the automatically generated site configuration files at your remote sites – with that, you have a complete and secure site-to-site tunnel established.  I’m not sure of the scalability of Veeam PN just yet, but I do know it supports having multiple sites connected into Azure for those ROBO situations.  As for those remote workers on the road, well, they can simply connect into Veeam PN and download a configuration file that simplifies their setup of the OpenVPN client to establish a client-to-site VPN.

[Image: Veeam PN]

So at this point you may be thinking “Why would Veeam develop this tech focused around networking and what does it have to do with backup or DR?”  Well, let’s couple this together with a little feature Veeam has called “Direct Restore to Microsoft Azure”.  By recovering our VMs and physical endpoints directly into Azure, and then easily establishing the network connectivity using Veeam PN, we can now leverage true DR in the cloud in an easy-to-use, scalable, and secure way.  This is the “nirvana” of recovery that we have all been looking for.

One more thing – it’s free!

There it is – the Veeam way!  They released Backup and Replication with a free tier, Windows/Linux endpoint agents free, Direct Restore to Azure – free, the explorer tech – free!  Let’s add one more to that list!  Veeam PN is absolutely free!  And even though they have talked a lot about it being leveraged for Azure, companies and organizations can essentially use this technology to connect any of their sites and clients together – absolutely free!

Details around any betas or GA haven’t been revealed yet but keep your eyes open and I’ll do my best to help spread around any opportunity for you to get your hands on the new Veeam Powered Network!

A glimpse into #VeeamVanguard day!

Sure, the Veeam Vanguard program comes complete with tons of great swag and free trips to VeeamON and whatnot – but in all honesty the biggest benefit of the program in my opinion is the access that Veeam provides – access to fellow Vanguards and access to key people within Veeam, across the whole company from executives to SEs.  Here at VeeamON 2017 we get a special day jam packed full of access – and below is a bit of a lowdown on what happened (or as much as we can tell you about anyways).

Veeam Availability Orchestrator – Michael White

The day started off with a couple of hours with Michael White (@mwVme) giving the lowdown on Veeam Availability Orchestrator – one of Veeam’s newest products, which helps orchestrate and automate disaster recovery fail over.  Before getting into any product specifics, Michael went through a brief discussion about what Disaster Recovery and Business Continuity actually are, and how we can best prepare for any situation that may occur.  Michael is a perfect fit to evangelize this product as he had a lot of examples from other companies he has worked for over the years, outlining how he was prepared, or at times, unprepared for disasters that hit.  In all honesty it was a great way to start the day, getting a little bit of education rather than just immediately diving into product specifics!

Veeam Availability Console – Clint Wyckoff

Directly after Michael we had Veeam evangelist Clint Wyckoff come in and give us a breakdown of the new release candidate of Veeam Availability Console.  I’ve seen the product before, but like anything Veeam there are always a number of changes in a short time – and it was nice to see the product as it moves into tech preview.  For those that don’t know, VAC is Veeam’s answer to a centralized management solution for those large, dispersed enterprises as well as Veeam Service Providers to manage, deploy, and configure both their Veeam Backup & Replication servers as well as the newly minted Veeam Agents for Microsoft Windows and Linux.

Vanguard support from the top down

One great thing that I like about the Veeam Vanguard program is that it’s not just a “pet project” for the company.  During Vanguard day we were introduced to Danny Allan, VP of Cloud and Alliance Strategy at Veeam.  Danny is our new executive sponsor at Veeam – meaning we have support at the highest levels of the company.  It’s really nice to see a company sink so much support and resources from all roles into a recognition program – one of the many reasons why I feel the Vanguard program is so successful.

Nimble

After lunch we had Nimble come in and brief us on their Secondary Flash Array and the interesting performance enhancements it has when being used with Veeam.  Last year during our Vanguard day we didn’t have any vendor other than Veeam present.  It’s nice to see some of Veeam’s partners and ecosystem vendors reaching out to find some time to talk with us.  Nimble certainly has a great product – and since I’m not sure what all was covered under NDA I’ll simply leave it at that!

AMA with Veeam R&D

Earlier when I mentioned that one of the biggest benefits of the Vanguard program was access, this is basically what I was referring to.  For the rest of the afternoon we basically had a no-holds-barred, ask-me-anything session with Anton Gostev, Mike Resseler, Alexy Vasilev, Alec King, Vladimir Eremin, Dmitry Popov, and Andreas Newfert – all Veeam employees who manage, or work very closely with, R&D – deciding what features are implemented, when they get implemented, and basically defining a road map for when these features get inserted into products.  Now this session was definitely NDA as a lot was talked about – but just let me say this was the best and most interesting portion of the whole day!

With so much being under NDA and embargo there isn’t a lot I can tell you about the content – but for those wondering this is just a brief description of how much access you get into Veeam being under the Vanguard label.  Certainly if you wish, I encourage you to apply for the program – you won’t regret it!

Veeam Availability Suite v10 – what we know so far…

Although we got a hint at some of the announcements coming out of VeeamON during partner day on Tuesday, it was really the general session Wednesday morning which brought forth the details surrounding what Veeam has in store for the future.  In true Veeam fashion we see yet more innovation and expansion in their flagship Veeam Availability Suite – covering your data protection needs from all things virtual, physical, and cloud.  So without further ado let’s round up some of what we saw during the Wednesday keynote at VeeamON 2017.

 

Veeam Agent Management

It’s no surprise that as soon as Veeam released their support for protecting Windows and Linux physical workloads that customers and partners all begged for integration into VBR.  Today, we are seeing just that, as Veeam has wrapped a very nice management interface around managing backups for both our virtual machines and our physical Windows and Linux workloads.  This not only gives us the ability to manage those physical backups within VBR, but also gives us the ability to remotely discover, deploy, and configure the agents for the physical endpoints as well!

Backup and restore for file shares.

Veeam Availability Suite v10 brings with it the ability to back up and restore directly from our file shares.  Basically, those SMB shares can be accessed via a UNC path and the files backed up and protected by Veeam.  Different from Veeam’s traditional restore point though, Veeam backup and restore for file shares doesn’t necessarily store restore points, but acts almost like a versioning system instead – allowing administrators to state how many days they would like to version the files, whether or not to keep deleted files, and also specify some long-term retention around the file.  This is a pretty cool feature set to be added to v10 and I can’t wait to see where this goes – whether the file share functionality can somehow be mapped to the image-level backup and work together to restore complete restore points as well as apply any newer file versions that may exist.

Continuous Data Protection for all

Perhaps some of the most exciting news of all is Veeam’s announcement of support for Continuous Data Protection, allowing enterprises and organizations to drastically lower the RPO, by default, to a whopping 15-second restore point.  Ever since Veeam hit the market their replication strategy has been to snapshot VMs in order to gain access to CBT data and replicate that across.   That said, we all recognize the pain points of running our infrastructure with the impact of snapshots.  That’s why, with the new CDP strategy set forth by Veeam today, they will utilize VMware vSphere’s Storage APIs for I/O filtering in order to intercept and capture the I/O streams to our VMs and immediately replicate the data to another location.  This to me is a huge improvement to an already outstanding RTPO that organizations can leverage Veeam to achieve.  This is truly groundbreaking for Veeam as we can now, say, have 4 hours of 15-second restore points to choose from.  It’s nice to see a vendor finally take advantage of the APIs set forth by VMware.

vCloud Director Integration into Cloud Connect.

Veeam service providers have been providing many customers the ability to consume both backup and replication as a service – allowing customers to essentially ship off their data to them, with the SP becoming the DR site.  That said, it has always been limited to those VMs that live within vCenter and vSphere.  Today Veeam announced support for vCloud Director organizations and units to also take advantage of the Cloud Connect offering – allowing those running vCloud Director to consume the DR as a Service that Veeam partners have been providing, keeping their virtual datacenters and hardware plans while failing over their environments.

Veeam Availability for AWS

Yeah, you heard that right!   We have seen Veeam hit the market focusing solely on virtualized workloads, slowly moving into the support of physical workloads – and now, supporting the most famous and well-known public cloud – Amazon AWS.  Cloud always introduces risk into an environment, which in turn means that we need something exactly like Veeam Availability for AWS to protect those cloud workloads and ensure our data is always recoverable and available if need be.  In true Veeam fashion, the solution will be agentless.

Ability to archive older backup files

Veeam v10 now brings with it the ability for us to archive aging backup files in our backup policies off to some cheaper storage.  Now we all know that cloud and archive storage is a great solution for this, so guess what – yeah, we now have the ability to create what is called an “Archive Storage” repository, which can live on any type of native object storage, be it Amazon or even your own Swift integration.  This frees up your primary backup storage performance in order to manage things such as restores, etc. – while the archive storage can do what it does best – hold those large, less frequently accessed backup files.

Universal Storage Integration API

For the last few VeeamON events the question of who the next storage vendor to integrate into Veeam would be was always on everyone’s mind.  With the announcement of the new Universal Storage Integration APIs the next storage vendor could literally be anyone.   This is basically an API set that will allow storage vendors to integrate into Veeam – giving Veeam the ability to control the array, creating and removing storage snapshots, allowing customers to lower RTO and RPO without ever leaving the familiar Veeam console.

This honestly just scratches the surface of some of the announcements Veeam has in store for us this week, so stay tuned as there is another keynote tomorrow where I’m sure we will hear more about VBR v10 and also, possibly, some NEW product announcements.  For now, it’s off to some deep dives to learn more about some of these great features!  Thanks for reading!

Veeam Availability Orchestrator – Automation for your Disaster Recovery

As a member of the Veeam Vanguards here at VeeamON 2017 we got to spend a couple of hours with Michael White (@mwVme), who gave us an update on Veeam Availability Orchestrator – Veeam’s answer to orchestrating and automating fail-over to their replicated VMs.  Michael certainly is a great choice when looking for someone to evangelize this product as he had a number of examples of DR situations he has either helped with, or orchestrated companies through – with both good and bad outcomes!  But back to topic – VAO was announced a while back; in fact, over a year ago Veeam revealed their plans for VAO during their “Next big thing” event in April of 2016.  Since then I’ve gotten to see the application move along through various beta stages and was pleasantly surprised to see how the product has matured as they gear up for their 1.0 release (no, I don’t know when that is).

For those not familiar with VAO let me give you a little bit of a breakdown.  VAO is essentially a wrapper, or an engine, that interacts with other Veeam products via API calls.  Think Veeam ONE, Veeam Business View, and Veeam Backup & Replication all talking together to one centralized Disaster Recovery orchestration machine.  As far as the architecture goes there really isn’t anything special – it’s a web interface with a SQL backend.   As far as I know the only limitations associated with Veeam Availability Orchestrator are the fact that it is only supported within a VMware environment and that an Enterprise Plus license must be applied to the VBR instance VAO connects to.

So what does VAO do that VBR doesn’t?

Hearing phrases like “testing our replicas” and “using the Virtual Labs” you might be wondering what exactly VAO does that VBR doesn’t.  I mean, we have the SureReplica technology within VBR and it works great at testing whether or not we can recover, so why would we need VAO?  The answer here is really about the details.  Sure, VAO doesn’t re-invent the wheel when it comes to DR testing – why would they force you to reconfigure all of those Virtual Labs again?  They simply import them, along with a lot of information from VBR, to use within VAO.  That said though, VAO does much, much more.  From what I’ve seen we can basically break VAO down into three separate components.

Orchestration

VAO takes what you have already set up within VBR and allows you to automate and orchestrate around that.  Meaning we have already replicated our VMs to a DR location, set up our fail-over plans and virtual labs, and completed configuration around re-IPing and post fail-over scripts to handle our recovery.  VAO takes all of this and adds flexibility into our recovery plans to execute and trigger pre and post fail-over scripts, along with per-VM testing scripts as well.  At the moment we are limited to just PowerShell, however we may see more scripting languages supported come GA time.   Essentially VAO gives us more flexibility in running and triggering external processes during a fail-over event than what VBR provides on its own.
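Just to give a sense of what one of those per-VM PowerShell test scripts could look like, here is a minimal sketch of my own (not something that ships with VAO) – it simply checks that a failed-over VM is answering on its service port and returns a non-zero exit code if it isn’t.  The address and port are example values:

# Hypothetical per-VM verification script – values below are examples only
param(
    [string]$VMAddress = "192.168.50.10",   # re-IPed address of the failed-over VM
    [int]$Port = 443                        # service port we expect to be listening
)

$result = Test-NetConnection -ComputerName $VMAddress -Port $Port
if (-not $result.TcpTestSucceeded) {
    Write-Error "Service on ${VMAddress}:${Port} did not respond after fail-over"
    exit 1
}
Write-Output "Service on ${VMAddress}:${Port} is responding"
exit 0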

Automated DR Testing

VAO takes all of this fail-over orchestration and applies it to our testing environments as well.    By giving us the ability to test, and test often, we as organizations can drastically increase our success rate when a true disaster occurs.  Certainly virtualization has really impacted our ability to test DR plans, in a good way – but there are still a lot of challenges when it comes to performing a true test – VAO closes that gap even more.

Dynamic Documentation

Probably the biggest feature of VAO, in my opinion, is its ability to automatically and dynamically create Disaster Recovery documentation.  DR documentation is often overlooked, left sitting on some file server, stale and not updated at all.  Environments today are under constant change, and when our production environments change so do our DR requirements.  VAO does a good job of dynamically pulling in any new VMs added or old VMs removed and adjusting its documentation accordingly.  In the end we are left with some nicely updated documentation and run books to reference when the time comes that we need them.

All of this said though, to me the true value of VAO really is its ability to focus on the details.  From what I’ve seen VAO does a great job of reporting any warnings, errors or failures as they apply to any DR test or fail-over event.  Not just on its canned testing scripts (for instance, connecting to a mailbox on a failed-over Exchange server), but on our custom-built PowerShell scripts as well.  Without this attention to detail a lot of false positives can creep in during a DR test – leaving us with an inconsistent state during an actual fail-over event.  VAO, in all of its reporting and messaging, certainly provides nice visibility into each and every VM, and each and every task associated with that VM inside of a fail-over plan.

We still don’t have a solid release date on VAO but in true Veeam fashion let me give you this estimate – “When it’s ready” 🙂

No vMotion for you! – A general system error occurred: vim.faultNotFound

vMotion is pretty awesome, am I right?  Ever since I saw my first VM migrate from one host to another without losing a beat I was pretty blown away – you always remember your first 🙂  In my opinion it’s the vMotion feature that truly brought VMware to where they are today – it laid the groundwork for all of the amazing features you see in the current release.  It’s something I’ve taken for granted as of late – which is why I was a little perplexed when all of a sudden, for only a few VMs, it just stopped working…

[Image: vMotion error – A general system error occurred: vim.faultNotFound]

You can see above one of my VMs that just didn’t seem to want to budge!  Thankfully we get a very descriptive and helpful error message of “A general system error occurred: vim.faultNotFound” – you know, because that really helps a lot!  With my Google-Fu turning up no results and coming up empty-handed in forum scouring, I decided to take a step back to the VCP days and look at what the actual requirements of vMotion are – surely, this VM is not meeting one of them!  So with that, a simplified version of the requirements for vMotion…

  • Proper vSphere licensing
  • Compatible CPUs
  • Shared Storage (for normal vMotion)
  • vMotion portgroups on the hosts (min 1GbE)
  • Sufficient Resources on target hosts
  • Same names for port groups

Licensing – check!  vCloud Suite

CPU Compatibility – check! Cluster of blades all identical

Shared Storage – check!  LUNs available on all hosts

vMotion interface – check!  Other VMs moved no problem

Sufficient Resources – check!  Lots of resources free!

Same names for port groups – check!  Using a distributed switch.

So, yeah, huh?

Since I’d already moved a couple dozen other VMs, and this single VM was failing no matter what host I tried to move it to, I ruled out anything host-related causing this and focused my attention on the single VM.  Firstly I thought maybe the VM was tied to the host somehow, using local resources of some sort – but the VM had no local storage attached to it, no CD-ROMs mounted, nothing – it was the perfect candidate for vMotion, but no matter what I tried I couldn’t get this VM to move!  I then turned my attention to networking – maybe there was an issue with the ports on the distributed switch, possibly having none available.

After a quick glance, there were lots of ports available, but there was another abnormality that reared its ugly head!  The VM was listed as being connected to the switch on the ‘VMs’ tab – however on the ‘Ports’ tab it was nowhere to be found!   So what port was this VM connected to?  Well, let’s SSH directly to the host to figure this one out…

To figure this out we need to run the “esxcli network vm port list” command and pass it the VM’s world ID – to get that, we can simply execute the following

esxcli network vm list

From there, we can grab the world ID of our VM in question and run the following

esxcli network vm port list -w <world_id>

In my case, I came up with the following…

[Image: esxcli network vm port list output – Port ID 317]

Port 317!  Sounds normal right?  Not in my case.  In fact, I knew for certain from my documentation that the ports on this port group only went up to 309!  So, I had a VM, connected to the port group, on a port that essentially didn’t exist!

How about a TL;DR version?

The problem stemmed from the VM being connected to an essentially non-existent port!  Since I couldn’t have any downtime on this VM my fix was to simply create another port group on the dvSwitch, mimicking the settings from the first.  After attaching the VM to the newly built port group, then re-attaching it back to the existing one, I was finally attached to what I saw as a valid port, Port #271.

[Image: VM reconnected on a valid port – Port ID 271]

After doing this, guess what finally started working again – that’s right, the wonderful and amazing vMotion 🙂.  I’m sure you could achieve the same result by simply disconnecting and reconnecting the adapter, however you will experience downtime with that method – so I went the duplicate port group route.
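For anyone who would rather do that bounce between port groups from PowerCLI instead of the web client, a rough sketch looks like this – the vCenter, VM, and port group names are made up for the example, and you would want the temporary group to mirror the original’s VLAN and policies:

# Connect to vCenter and grab the stuck VM (server and object names are examples)
Connect-VIServer -Server vcenter.lab.local
$vm   = Get-VM -Name "StuckVM01"
$temp = Get-VDPortgroup -Name "Temp-PG"     # temporary port group mimicking the original
$orig = Get-VDPortgroup -Name "Prod-PG"     # the original port group

# Flip the VM's adapter to the temporary port group, then back to the original
Get-NetworkAdapter -VM $vm | Set-NetworkAdapter -Portgroup $temp -Confirm:$false
Get-NetworkAdapter -VM $vm | Set-NetworkAdapter -Portgroup $orig -Confirm:$false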

Where there is one there’s many

All of this got me thinking – this can’t be the only VM that’s experiencing this issue, can it?  I started looking around trying to find some PowerCLI scripts that I could piece together and, as it turns out, knowing what the specific problem was certainly helps with the Google-Fu – I found a blog by Jason Coleman dealing with this exact same issue!  Wish I could’ve found that earlier 🙂.  Anyways, Jason has a great PowerCLI script attached to his post that peels through and detects which VMs in your environment are experiencing this exact problem!  He has even automated the creation of the temporary port groups as well!  Good work Jason!  After running it my conclusions were correct – there were about a dozen VMs that needed fixing in my environment.

How or why this occurred I have no idea – I’m just glad I found a way around it and, as always, thought I’d share with the intention of maybe helping others!  Also – it gave me a chance to throw in some Seinfeld action on the blog!  Thanks for reading!

VCSA 6.5 Migration deployment sizes limited!

Recently I finally bit the bullet and decided to bring the vCenter portion of a vSphere environment up to version 6.5.  Since the migration from a Windows-based vCenter to the VCSA is now a supported path I thought it would also be a good time to migrate to the appliance as well.  So with that I ran through a few blogs I found in regards to the migration, checked out the vSphere Upgrade Guide and peeled through a number of KBs looking for gotchas.  With my knowledge in hand I headed into the migration.

At this point I had already migrated my external Windows-based PSC to version 6.5 and got started on the migration of the Windows-based vCenter Server.  Following the wizard I was prompted for the typical SSO information along with where I would like to place the appliance.  The problem, though, came when I was prompted to select a deployment size for my new VCSA.  The only options available were Large and X-Large.  That might not be a big deal if this environment actually required that amount of resources – but looking at the table below, those deployment sizes are scoped to fit the 1000-hosts-and-above mark.

[Image: VCSA deployment size table]

Did this environment have 1000+ hosts and 10000+ VMs?  Absolutely not!  At its largest it contained maybe 70 hosts and a few hundred VMs running on them – a Small configuration at best, Medium if you want to be conservative!  At first I thought maybe I was over-provisioned in terms of resources on my current vCenter Server – but again, it only had 8 vCPUs and 16GB of RAM.  With nothing out of the ordinary with vCenter itself I turned my attention to the database – and that’s where my attention stayed as it was currently sitting at a size of 200GB.  Honestly, this seemed super big to me and, knowing that it had been through a number of upgrades over the years, I figured I would make it my goal to shrink this down as small as possible before trying again!  TL;DR version – the database was the culprit and I did end up with the “Small” option – but I did a number of things after a frenzy of Googling and searching – all listed below…

WAIT!!!!  Don’t be that guy!  Make sure you have solid backups and can restore if things here go sideways – engage VMware GSS if needed – don’t just “do what I do” 🙂

 

Reset the vpx provider

The vpx data provider basically supplies the object cache for vCenter – caching all inventory objects such as hosts, clusters, VMs, etc. in order to provide that super-snappy response time in the vSphere Web Client 6.0 (is this sarcasm?).  Anyways, resetting this will essentially reduce the size of our Inventory Database.  Now, the problem in versions prior to 5.5 Update 3 is that there was no way to reset individual data providers – in order to do one you had to do them all – and that meant losing all of your tags, storage profiles/policies, etc.  Thankfully, 5.5 U3 and 6.0 allow us to simply reset just vpx, leaving the rest of our environment intact.  In order to do so we must first get into the vSphere Inventory Service Managed Object Browser (MOB) and get the UUID of the vpx provider.  ***NOTE: this is different than the MOB you may be used to logging into, see below***

First, log into the Inventory Service MOB by pointing your browser to https://vCenterIP/invsvc/mob1/    From there, simply click the ‘RetrieveAllProviderConfigs’ link within the Methods section as shown below

[Image: ‘RetrieveAllProviderConfigs’ link in the Inventory Service MOB Methods section]

In the pop up dialog, click ‘Invoke Method’, then run a search for vpx

[Image: search results showing the vpx provider and its providerUuid]

It’s the providerUuid string that we are looking for – go ahead and copy that string to your clipboard and return to https://vCenterIP/InvSvc/mob1/ – this time, clicking the ‘ResetProviderContent’ link under Methods.  In the pop up dialog, paste in your copied UUID and click ‘Invoke Method’ as shown below…

[Image: ‘ResetProviderContent’ dialog with the copied UUID pasted in]

After a little while the window should refresh and hopefully you see no errors!   The reset process took roughly 5 minutes to complete for me…

Getting rid of logs

Although vCenter does its own log rotation, you may want to check and see just how much space your logs are taking up on your current vCenter Server before migrating, as some of this data is processed during the migration/upgrade.  I freed up around 30GB of disk by purging some old logs – not a lot, but 30GB that didn’t need to be copied across the wire during the migration.  There is a great KB article here outlining the location and purpose of all of the vCenter Server log files – have a look at it and then peruse through your install and see what you may be able to get rid of.   For the Windows version of vCenter you can find all of the logs in the %ALLUSERSPROFILE%\VMware\vCenterServer\logs\ folder.  I mostly purged anything that was gzipped and archived from most of the subfolders within this directory.  Again, not a difference maker in terms of unlocking my “Small” deployment option – but certainly a time-saver during the migration!  So what was the culprit that was not allowing me to select “Small”?  Yeah, let’s get to that right now…
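If you want a quick read on how much of that log directory is old compressed archives before you delete anything, a couple of lines of PowerShell on the vCenter box will total it up (just a sizing sketch – what you actually purge is up to you):

# Total up the gzipped/zipped archives under the Windows vCenter log directory
$logRoot  = Join-Path $env:ALLUSERSPROFILE "VMware\vCenterServer\logs"
$archives = Get-ChildItem -Path $logRoot -Recurse -Include *.gz,*.zip -File -ErrorAction SilentlyContinue
"{0:N1} GB across {1} archived log files" -f (($archives | Measure-Object Length -Sum).Sum / 1GB), $archives.Count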

My Bloated vCenter Database

Yeah, 200GB is a little much, right?  Even after resetting the vpx provider and shrinking the database files I was still sitting pretty high!  So, since I had no intention of migrating historical events, tasks and performance data, I thought I’d look at purging it beforehand!  Now if you have ever looked at the tables within your vCenter Server database you will find that VMware seems to create a lot of tables by appending a number to the VPX_HIST_STAT table.  I had a lot of these – and going through them one by one wasn’t an option I felt like pursuing.  Thankfully, there’s a KB that provides a script to clean all of this up – you can find that here!  Go and get the MSSQL script in that KB and copy it over to your SQL Server.  Once you stop the vCenter service we can simply run the following command via the command prompt on our SQL Server to peel through and purge our data.

sqlcmd -S IP-address-or-FQDN-of-the-database-machine\instance_name -U vCenter-Server-database-user -P password -d database-name -v TaskMaxAgeInDays=task-days -v EventMaxAgeInDays=event-days -v StatMaxAgeInDays=stat-days -i download-path\2110031_MS_SQL_task_event_stat.sql

Obviously you will need to assign some values to the parameters passed (TaskMaxAgeInDays, EventMaxAgeInDays, & StatMaxAgeInDays).  For these you have a few options.

  • -1 – skips the respective parameter and deletes no data
  • 1 or more – specifies that the data older than that amount of days will be purged
  • 0 – deletes it all!

For instance, I went with 0, making my command look like the following…

sqlcmd -S IP-address-or-FQDN-of-the-database-machine\instance_name -U vCenter-Server-database-user -P password -d database-name -v TaskMaxAgeInDays=0 -v EventMaxAgeInDays=0 -v StatMaxAgeInDays=0 -i download-path\2110031_MS_SQL_task_event_stat.sql

After purging this data, and running a shrink on both my data and log files, I finally had my vCenter database reduced in size – but only to 30GB.  Which, in all honesty, still seemed a bit large to me – and after running the migration process again I still didn’t see my “Small” deployment option.   So I went looking for other large tables within the database and…

Hello VPX_TEXT_ARRAY

It’s not very nice to meet you at all!!!  After finally getting down to this table – and running “sp_spaceused ‘VPX_TEXT_ARRAY’” – I found that it was sitting at a whopping 27GB.  Again, a flurry of Googling!  What is VPX_TEXT_ARRAY and what data does it hold?  Can I purge it?  Well, yes….and no.  VPX_TEXT_ARRAY, from what I can gather, keeps track of VM/host/datastore information – including information in regards to snapshots being performed on your VMs.  Also from what I can gather, in my environment anyways, this data exists within this table from, well, the beginning of time!  So, think about backup/replication products which constantly perform snapshots on VMs in order to protect them – yeah, this could cause that table to grow.  Also, if you are like me, and have a database that has been through a number of upgrades over the years, you may end up having quite a bit of data and records within this table as it doesn’t seem to be processed in any sort of maintenance job.  In my case, 7 million records resided within VPX_TEXT_ARRAY.  Now, don’t just go and truncate that table as it most likely has current data residing in it – data vCenter needs in order to work – there’s a reason it tracks it all in the first place, right?  Instead, we have to parse through the table, comparing the records with those that are in the VPX_ENTITY table, ensuring we only delete items which no longer exist there.  The SQL you can use to do so is below…

DELETE FROM VPX_TEXT_ARRAY
WHERE NOT EXISTS(SELECT 1 FROM VPX_ENTITY WHERE ID=VPX_TEXT_ARRAY.MO_ID)

A long and boring process – 18 hours later I was left with a mere 9000 records in my VPX_TEXT_ARRAY table.  Almost 7 million removed.  Just a note, there is a KB outlining this information as well – in which it says to drop to SINGLE_USER mode – you can if you wish, but I simply stopped my vCenter Server service and stayed in MULTI_USER so I could check in from time to time to ensure I was still actually removing records.  Running sp_spaceused ‘VPX_TEXT_ARRAY’ in another query window will let you track just that.   Also, it might be easier, if you have the space, to set the initial size of your transaction logs to something bigger than the amount of data in this table.  This allows SQL to not have to worry about growing them as it deletes records – you can always go back in the end and reset the initial size of the tlogs to 0 to shrink them.
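If you would rather watch the delete from a PowerShell prompt than keep a second query window open, something along these lines will poll the table size for you (a sketch only – it assumes the SqlServer module is installed, and the instance and database names are examples to swap for your own):

# Poll the size of VPX_TEXT_ARRAY every 5 minutes while the delete runs
Import-Module SqlServer
while ($true) {
    Invoke-Sqlcmd -ServerInstance "SQL01\vCenterInstance" -Database "VCDB" `
        -Query "EXEC sp_spaceused 'VPX_TEXT_ARRAY'" | Format-Table -AutoSize
    Start-Sleep -Seconds 300
}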

So – a dozen coffees and a few days later I finally ran another shrink on both the data and log files, setting their initial sizes to 0 and voila – a 3GB database.  Another run at the migration and upgrade and there it was – the option to be “Small”!  Again, this worked in my environment – it may not work in yours – but it might help get you pointed in the right direction!  Do reach out if you have any questions and do ensure you have solid backups before you attempt any of this, or anything you read on the net really 🙂  Also, there’s always that Global Support Services thing that VMware provides if you want some help!   Thanks for reading!

Spring forward to the Toronto VMUG UserCon

Ahh, Spring –  most people describe this as a time when the rain falls and cleans everything up around us – flowers blooming, grass growing – a sign of warmth to come!  In Canada though, it’s a sign of giant muddy snow piles full of gravel, salt and sand from all of the plowing and shoveling performed all winter long – for me, it’s a muddy white dog and two little munchkins tracking muck all over the house – All that said, there is some hope for Spring this year!  March 23rd marks the date for our next Toronto VMUG UserCon – so, if you want to escape the mud and the muck come on down to the Metro Toronto Convention Centre this Thursday and join 600+ of your peers for some great learning, technical sessions and some awesome keynotes!  We’ve got a great one planned this year and I just wanted to highlight some of the keynotes and sponsors we have lined up for Thursday!

First up – Mr. Frank Denneman

Over the years we have been lucky enough to have some awesome keynote speakers for our UserCon – this year is no exception!  I’m super excited to hear from Frank Denneman!  If you don’t know who Frank is let me try and enlighten you a little – this man literally wrote the book on DRS – three times!   The “HA and DRS/Clustering Deepdive” books – written by Frank and his co-author Duncan Epping – are honestly some of the greatest tech books ever.  They’re written in a style that is easy to read, and have literally taught me so much about HA and DRS I can’t even begin to explain it all!  Certainly a must-read for any VMware admin.  Frank moved on from VMware for a little while to work with PernixData as the CTO and has just recently returned to VMware, taking on the role of Senior Staff Architect within their SDDCaaS Cloud Platform Business Unit.  Frank will be giving a talk titled “A Closer Look at VMware Cloud on AWS”.  With VMware and Amazon recently announcing a partnership allowing us to consume bare-metal ESXi from within the wide range of Amazon’s data centers, this will most certainly be an interesting keynote explaining just how it works – and what we can expect from it in terms of unified management between our on-premises and AWS infrastructure.

The Breakouts and Panels!

After Frank, the morning breakout sessions will kick off – here we will have sessions from a variety of partners and vendors who provide everything from hardware to storage to backup to monitoring.  You will see all of the familiar names here with 30-minute breakout sessions covering off their technologies.  Take a look at our sponsors below – without these companies these events wouldn’t be possible!    A round of sessions from VMware follows a couple of rounds of sessions from third-party vendors; then, lunch, and an aspiring/VCDX panel talk where you can be sure to get some in-depth answers to any questions you may have about design, architecture, or every day management of your VMware infrastructure.

Drinks, Food, and DiscoPosse’s

After lunch we have another couple of rounds of breakout sessions by VMware and our sponsors – with a reception following immediately thereafter.  vSphere with Operations Management will sponsor our networking reception, complete with drinks and appetizers – a perfect way to end what I’m sure will be a jam-packed day!  That said, what’s a beer without entertainment, right?  We are super happy to have our own VMUG co-leader Eric Wright (@discoposse) giving our closing keynote for the day!  Think of this a little like the technology version of CBC’s Hometown Heroes segment that they offer on Hockey Night in Canada!  Eric, our own hometown hero, will deliver a jam-packed hour of all things VMware and Terraform, showing us just how easy it is to start automating our infrastructure with the open-source software!  I got a sneak peek of this at our last local VMUG meeting and this is something you won’t want to miss!

Free Stuff!

Then, yes, of course, giveaways!  We have some pretty cool prizes this year including cold hard cash (VISA gift cards), GoPros, and the ever-popular grand prize of a complete vSphere homelab!   This is on top of all the great giveaways we see from our sponsors!

So if you aren’t busy this Thursday, register now & drop in – we’d love to see you there!  Even if you are busy, cancel everything and come on down!  Can’t make it?  Follow along via Twitter with the hashtag #tovmug and hey, we have more meetings coming up as well to help you all get the Toronto VMUG experience.  Our Q2 meeting is May 31st sponsored by Veeam and Mid-Range and our Q3 meeting is tentative for September 19th with sponsors Zerto and Tanium (still in development) – come and check us out.  As always, stay connected.  You can follow us on Twitter, connect on LinkedIn, watch our website, or become a member of the Toronto VMUG Community in order to stay up to date on all things tovmug!  See you Thursday!

 

Don’t delay!  Register now for the March 23rd Toronto VMUG UserCon!