Monthly Archives: May 2017

X-IO Technologies – A #SFD13 preview

In the technology sector we always joke that a startup being 5 years old sometimes makes it legacy!  Meaning, 5 years is a long time in the eyes of a technologist – things change, tech changes, new hardware emerges.  All of this drives change!  Well, if 5 years makes a mature company then I’m not sure what to call X-IO Technologies.  X-IO was founded more than 20 years ago, in 1995 – taking them right off the scale in terms of aging for a tech company!  Honestly, I’ve heard the name before (or seen the logo) but I’ve never really looked at what X-IO does – so today let’s take a look at the current X-IO offerings and solutions and what they bring to the table – and, if you are interested, you can always learn more when they present at the upcoming Storage Field Day 13 event in Denver come June 14th – 16th.  But for now, the tech…

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. are all paid for by Gestalt IT; however, I’m not required or obliged to return the favor in any way other than my presence 🙂 – which still feels weird to say 🙂 Well, my presence and possibly a little bit of Maple Syrup.

What does X-IO bring?

From their website it appears that X-IO has a couple of basic offerings, all hardware appliances, and all serving different points of interest in the storage market.  Let’s try and figure out what each of them does.

Axellio Edge Computing

This appears to be an edge computing system marketed mainly to companies needing performance for big data analytics, as well as those looking for a platform to crunch data from IoT sensors.  These converged storage and compute boxes are very dense in CPU, memory, and storage – supporting up to 88 cores of CPU, 2TB of memory, and a maximum of 72 (yes, 72!) 2.5” NVMe SSD drives.  Each appliance is basically broken down into two server modules for the compute and memory, along with up to 6 FlashPacs (a FlashPac is essentially a module hosting 12 dual-ported NVMe slots – which is where the 72-drive maximum comes from).  As far as scale goes I don’t see much mention of pooling appliances, so it appears that these are standalone boxes, each serving a single purpose.

iglu Enterprise Storage Systems

Here it appears we have a storage array.  The iglu storage system can be built using all flash, a mixture of flash and disk, or just spinning disk.  They appear to have multiple models supporting each disk configuration, with their all-flash version supporting over 600,000 IOPS.  Controllers on the iglu system are distributed, meaning whenever we add more capacity we are also adding more controllers, thus increasing both space and performance with the same upgrade.  As far as software goes we see all the familiar features such as snapshots, CDP, replication, stretched clustering, and integration with VMware, SQL, Oracle, etc.  One nice aspect is that all iglu systems, no matter the model, have access to all of the software features – there is no licensing of individual aspects of the software.

I’m excited to see what X-IO has to say at SFD13 come this June.  There was some mention of a unique way of handling drive failures, as well as a lengthy 5-year warranty on everything, which may separate them from the storage vendor pack – but I’m hoping they have much more to talk about in regards to their storage offerings to give it that wow factor!  As always you can find all my SFD13 related information here or follow the event page here to stay updated and catch the live stream!

Hear more from Exablox at Storage Field Day 13

As I continue along with some SFD13 previews I stumble upon Exablox.  Exablox is one of the few presenting companies at SFD13 that I know very little about, so I’m excited to hear what they have to offer and say come June 14th in Denver when the event kicks off.  Also, Exablox is coming into SFD after being acquired by StorageCraft earlier this year, and given the partnerships between the two companies in the past I’m sure we will hear some more about how that integration is going.  Headquartered in Mountain View, Exablox was the brainchild of Matthew Catino, Tad Hunt and Frank Barrus.  Founded in 2010, the three set out to create a more scalable filer array – and in the end we are left with a product they call OneBlox.

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. are all paid for by Gestalt IT; however, I’m not required or obliged to return the favor in any way other than my presence 🙂 – which still feels weird to say 🙂 Well, my presence and possibly a little bit of Maple Syrup.

OneBlox – A reimagined scale-out storage solution.

Honestly, when I first started diving into OneBlox I pretty much assumed it was just a simple filer box that you purchased chock-full of drives – but after some more digging there are honestly a lot of differences between OneBlox and a lot of the other scale-out NAS boxes I’m familiar with.

  • You can bring your own drives – yeah, you can put pretty much whatever drives you want in these things!  Different speeds and capacities?  No problem.
  • It runs off of object storage – underneath all that SMB/NFS presentation is a heaping helping of object-based storage, which is a big enabler for our next point.
  • There is no RAID – rather than utilizing parity to protect against failure, OneBlox utilizes its custom object-based file system to intelligently write 3 copies of every object, ensuring your data lands not only on different drives, but on different nodes within a ring as well (see the sketch after this list).  Wait! What’s a ring?
  • Their ring architecture – a ring is essentially a cluster of one or more OneBlox nodes.  All nodes within a single ring are essentially pooled to form one single global file system which shrinks and grows as drives and nodes are added.  Features such as deduplication and compression are applied globally across this file system – meaning we have inline deduplication across multiple OneBlox nodes.
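To make that no-RAID idea a little more concrete, here’s a minimal PowerShell sketch of the general concept – placing three copies of an object on drives that sit on three different nodes of a ring.  This is purely illustrative; the node names, the function, and the placement logic are my own assumptions, not Exablox’s actual implementation.

# Purely illustrative sketch of 3-way object placement across a ring.
# Node/drive names and logic are assumptions, not Exablox's actual code.
function Get-ReplicaTargets {
    param(
        [Parameter(Mandatory)] [hashtable] $Ring,   # NodeName -> array of drive names
        [int] $Copies = 3
    )
    if ($Ring.Keys.Count -lt $Copies) {
        throw "Ring needs at least $Copies nodes to place $Copies copies on distinct nodes."
    }
    # Pick N distinct nodes, then one drive on each - no two copies share a node or a drive.
    $nodes = $Ring.Keys | Get-Random -Count $Copies
    foreach ($node in $nodes) {
        [pscustomobject]@{
            Node  = $node
            Drive = ($Ring[$node] | Get-Random)
        }
    }
}

# Example ring: 3 OneBlox-style nodes with a mix of drive sizes (bring your own drives)
$ring = @{
    'node-a' = @('sda-4TB','sdb-8TB')
    'node-b' = @('sda-2TB','sdb-6TB','sdc-8TB')
    'node-c' = @('sda-10TB')
}
Get-ReplicaTargets -Ring $ring -Copies 3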

OneSystem – Cloud Based Management

As of late it seems like everyone is jumping on this “cloud-based” management bandwagon – and rightly so.  We get the ability to manage our infrastructure from anywhere with a browser and an internet connection.  OneSystem is Exablox’s play in this field.  OneSystem essentially allows administrators to access all of their deployed OneBlox appliances from a secure, available, cloud-based management server through a browser.  OneSystem provides real-time monitoring and health reporting, and supports a multi-tenant environment for those service providers and large enterprises that may need it.  Don’t trust the cloud just yet?  No problem – the whole OneSystem can be deployed on-premises as well if need be.

As I mentioned earlier I don’t know a lot about Exablox, so I’m excited to see what they have to say at SFD13.  I read a little about some unique CDP and replication strategies they have for protecting data between different rings.  On the surface it looks like some cool tech and I can’t wait to learn more about some of their biggest use cases and see more about how that underlying custom object-based file system works!  Hopefully we see more at SFD13!  For those that want to follow along I’ll have the live stream along with any of my SFD13 related content on my page here, or be sure to stay up to date by keeping an eye on the official SFD13 event page.  And hey, follow along and ask questions using the hashtag #SFD13 – see you in June!

Primary Data set to make their 5th appearance at Storage Field Day

This June at SFD13 Primary Data will make their 5th appearance at a fully-fledged Tech Field Day event, despite having GA’d their flagship product just 9 short months ago at VMworld 2016 (they have a few TFD Extra events under their belt as well).  Now, pardon my math, but for a company that was founded in 2013 that means they spent roughly 3 years in development – gathering loads of customer feedback and beta testing before releasing their data management platform to the wild, all the while giving updates via TFD – there’s an example of a company taking its time and doing something right!

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. are all paid for by Gestalt IT; however, I’m not required or obliged to return the favor in any way other than my presence 🙂 – which still feels weird to say 🙂 Well, my presence and possibly a little bit of Maple Syrup.

Now, before we go into detail about Primary Data’s products and what they provide I wanted to take a second to talk a little bit about their founders and leadership team – because we all know great products and execution start from the top – and Primary Data has quite the top!  Let’s have a look below…

Lance Smith (CEO) – Despite holding many top-level leadership positions over the years, perhaps the most relevant would be COO of Fusion-io and, following the acquisition, Senior Vice President and GM inside SanDisk.

Rick White (Co-Founder and CMO) – This isn’t Rick’s first go-around, as he was also one of the original co-founders and CMO at Fusion-io.  Are you beginning to sense a little pattern here around Fusion-io? 🙂

David Flynn (Co-Founder and CTO) – Here comes that pattern again!  Prior to founding Primary Data, David Flynn was a co-founder and CEO at Fusion-io.  His bio on the site also states that David holds over 100 patents across a wide range of technologies – not too shabby.

Steve Wozniak (Chief Scientist) – Yeah, that’s right, the Steve Wozniak, you know, the guy from Dancing with the Stars – oh, and he kind of helped found and shape a small little valley startup named Apple.

Now keep in mind these are just 4 people that I have picked out off of Primary Data’s company page!  And even though there is a ton of brainpower listed here, there are still a lot more people at Primary Data whose experience in the industry just blows my mind!

So what exactly does Primary Data bring to market?

Primary Data’s key storage solution focuses around their flagship product, DataSphere.  Now DataSphere in itself isn’t a storage array – it’s best described as what they coin a “Metadata Engine for the Enterprise”.  So what does this really mean?  Well, hopefully I’m getting it right here, but to me DataSphere looks like somewhat of an abstraction tool – a way to de-couple all of your applications from the data that they use and store.  The first step is to take all of an organization’s storage and pool it together into a single logical namespace.  It’s this storage – be it direct-attached, SAN-based, or even cloud – which can in turn be presented back to your enterprise applications.  But it’s not necessarily this pooling which drives up the value of DataSphere – it’s the analytics and automatic data movement that really stand out for me.  DataSphere is able to map a set of application rules or objectives to automatically move data across different tiers of storage in order to ensure that certain SLAs are met, or more so, that the right resources are being assigned to the right applications.  Meaning the proper storage resources are provisioned to applications, no matter where that application is running, no matter where that storage is running – all we need to do is specify that we want Application X to have Storage Requirements Y and let DataSphere tier and tune to its heart’s delight!  The metadata engine keeps track of where things are and what’s going on.
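To illustrate that “Application X gets Storage Requirements Y” idea, here’s a rough PowerShell sketch of objective-based placement.  The objective names, tiers, and matching logic are entirely my own assumptions for illustration – this is not Primary Data’s actual API or policy engine.

# Conceptual sketch only - objective names, tiers and logic are assumptions,
# not DataSphere's actual implementation.

# Available storage tiers with rough performance/cost characteristics
$tiers = @(
    [pscustomobject]@{ Name = 'NVMe-Flash';   MaxLatencyMs = 1;   CostPerGB = 0.50 }
    [pscustomobject]@{ Name = 'SAN-Hybrid';   MaxLatencyMs = 10;  CostPerGB = 0.15 }
    [pscustomobject]@{ Name = 'Cloud-Object'; MaxLatencyMs = 100; CostPerGB = 0.02 }
)

# Per-application objectives: a latency ceiling in milliseconds
$objectives = @{
    'OLTP-Database' = 2
    'File-Archive'  = 200
}

foreach ($app in $objectives.Keys) {
    # Pick the cheapest tier that still meets the application's latency objective
    $target = $tiers |
        Where-Object { $_.MaxLatencyMs -le $objectives[$app] } |
        Sort-Object CostPerGB |
        Select-Object -First 1
    "{0} -> {1}" -f $app, $target.Name
}

Run as-is, this maps the hypothetical OLTP database to the flash tier and the archive share to cheap object storage – the same kind of decision the metadata engine is described as making continuously, and invisibly, across the namespace.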

As you can imagine this gives us a really good opportunity to scale in terms of storage – and really prevents the need to over-provision, which has become such a de facto standard in deployments today – in turn, organizations save dollars!  Now, from what I can tell DataSphere doesn’t necessarily sit inside the data path for the application either – meaning it’s virtually invisible and doesn’t affect performance in any way.  Instead it sits in somewhat of an out-of-band architecture – allowing applications to have direct paths to their data while DataSphere simply handles the metadata processing and analytics.

There are a ton of benefits that I can initially see with DataSphere – scaling, migration to cloud, and simply performance compliance are a few that come straight to mind.  I’m most certain there is much, much more that can be done and I know I’m just scratching the surface, so with that I guess we will have to wait until Primary Data presents at Storage Field Day in June to learn more.  As always follow along with the hashtag #SFD13 or check out my page here for all that is SFD13!

My Veeam VMCE-A Course and Exam Experience

First up, a bit of a back story – as a Veeam Vanguard I was lucky enough to have received the required training last June in order to qualify for my VMCE exam, which I wrote and passed in August of 2016!  A nice little perk of the program if you ask me!  Anyways, earlier this month a handful of us were again lucky to participate in the next level of training, the VMCE-A Design & Optimization course, in an online pilot – thus qualifying us to write the VMCE-A exam.  Under normal circumstances I would take a lot of time to study up and possibly create guides on this blog for the certifications I write – however, with VeeamON right around the corner and the ability to take advantage of a “Free Second Chance” offer for writing certifications on site, my normal study strategies didn’t apply.  I couldn’t pass up the chance of at the very least getting a look at the exam, even if it meant failing – hey, a free second chance!

So with the course fresh in my memory I studied where I could between it and my exam appointment at the conference, be it on the car ride to the airport, at 30,000 feet in the air, and during a few meals at the conference.  Anyways, the tl;dr version is I passed the exam… barely – getting only 4% over the pass mark of 70%.  Certainly not a mark I’d be super proud of, but in the end a pass is a pass and I’ll take it!

On to the exam!

The VMCE-A D&O exam is 40 randomized questions, all multiple choice.  Some questions have only one answer, while some are in the “Select 2 / Select 3” format.  As mentioned earlier, a passing score is 70% or higher.  As far as the content goes I can’t say a lot as NDAs are in effect; however, what I can say is that all questions I received are fully covered within the VMCE-A D&O course material – and in fact, at the end you get a nice little detailed breakdown of how you scored in the different sections covered in the course (Design & Sizing, Infrastructure, Security, Optimization, Automation & Compliance, and Troubleshooting).  This certainly helps you to nail down where you might want to freshen up in order to improve your skill-sets.

One big thing I will say is that this exam is tough!  For as easy as Veeam can be to simply get up and running, there is a lot to know about their complete suite of products – and a lot to cover in order to test on all of the features and benefits of just Veeam Backup & Replication.  Now, being a customer I’m not designing these Veeam solutions day in and day out, so I focused a lot of my attention on the design section, as well as other parts of VBR that I don’t use that often.  But just as with the VMCE, it’s not enough to solely focus on VBR – Veeam ONE, Cloud Connect, etc. are all fair game for testing on this exam – so if you don’t use them I would certainly recommend brushing up on them.  I can’t stress enough that all of the content I was tested on in the exam is covered within the course materials (textbook/slides) – so pay attention during the course!  I can say that if you see something labeled as a best practice or a formula you should remember it – remember, this is an architect exam based on designing Veeam environments!  Just keep that in the back of your mind while studying!

As far as timing goes you have 1 hour (add another 30 minutes if English isn’t your first language) to complete the 40 questions.  I found this to be more than enough time.  Just like VMware’s VCP exams you can flag certain questions for review, and go back and forth between questions on the exam at your leisure.  The strategy I took, since I had no idea how much of a time crunch there might be, was to simply go through the questions, answering the ones I knew I had right and flagging any that I was unsure of for review after.  This process took me roughly 30 minutes, which allowed me another 30 minutes to go back and review those questions I didn’t quite have a grasp of.  My review took roughly 10 minutes – after that I went through every question again, double-checking and tallying in my head how many I knew I had right, hoping to come up with a score high enough to make me feel comfortable enough to click that dreadful ‘End Exam’ button.  In the end I knew I was close, but ended it anyways!

You will get your score immediately after completing the exam – so you know whether it was a pass or fail right away – no painful time spent wondering 🙂  Also, as mentioned earlier, upon exiting the facility you will get a printout showing how you scored in each category.  I’m certainly happy I passed and know that I can for sure improve in some areas – maybe another study guide is in the cards for me!

The Veeam Certification Paths

For those that don’t know, Veeam currently has 2 different certifications.  The VMCE documents proof that the engineer has the necessary level of knowledge to correctly deploy, configure and administer Veeam Availability Suite.  Then there’s the VMCE-A D&O, which builds on the knowledge from the VMCE, bringing more of a design and optimization feel to the test, all the while following Veeam best practices.  Once you have achieved both the VMCE and the VMCE-A, Veeam accredits you with the title of Veeam Certified Architect, or VMCA.  The VMCA is not a separate certification and does not require a separate step – it’s simply a designation handed to those who have completed the requirements for both the VMCE and VMCE-A and passed both exams.


A little about the course

Honestly, even if you don’t go through with the exam, the VMCE-A Design and Optimization course is an awesome course to take.  I guarantee you will get something out of it even if you design on a daily basis.  For me, being a customer and administrator of these products, it was an awesome opportunity to walk through the Veeam design methodologies, deep diving into each step one by one to come out with a full solution.  The course has a couple of design scenarios inside of it, for which there is really no right or wrong answer.  We broke into a couple of different groups to do these and it was amazing to see just how different the end designs were.  The instructors take the opportunity to pick away at these designs, trying to understand your thought process and figure out how you think – asking a lot of questions about why you set it up the way you did!  This to me was the biggest advantage of the course – having that interaction and learning other ways to accomplish similar results – and seeing where you might be going astray in your thought process.

So with that I hope this helps anyone else who might be on the fence about taking either the course or the exam.  I can proudly say that I am a VMCA now and that feels great (and I’m glad I don’t have to cash in that second chance as it’s a very tough exam – or at least it was to me).

A field day of Storage lies ahead!

I’ve had the awesome opportunity to participate in a few Tech Field Day events throughout the last few years, travelling to Austin for VFD4, Boston for VFD5, and finally San Jose for TFD12 just last November!  To be honest, these days provide me with more knowledge than a lot of the week-long training courses I take!  They are jam-packed with technical deep dives from a variety of companies – companies who are eager to share and get the message out about their offerings!  And it’s not just the presenting companies jamming the knowledge into my brain, it’s the delegates as well!  I’ve honestly met some of the smartest people I know there as a fellow delegate – not just on virtualization, but across the whole IT ecosystem.  Anyways, I’m super excited to have been invited back to another field day experience – this one, a storage event, taking place June 14-16 in Denver!

#SFD13

As it stands today Storage Field Day 13 is shaping up to have 6 presenting companies – all with their own different ties into the storage market.  We have everything from the tried-and-true companies such as DellEMC and Seagate all the way through to the startup technologies of Primary Data and the newly acquired Exablox.  In between sit the non-profit, vendor-neutral SNIA along with the hometown, Colorado-based X-IO Technologies.


Certainly a wide variety of companies to hear from – which should help to keep the event both exciting and interesting!

I mentioned that I’ve gained a lot of knowledge from other delegates in the past – and man oh man, this one will be no different.  I’m actually super excited for this event as I’ve really only met a few people on this list – so there will be a lot of new faces and friends to make here for me.  Honestly, it’s a little bit intimidating as there are some real storage rockstars on this list!

I’ll do my best to try and get a few preview posts out for some of those presenting companies I know little about.  Mainly for my own homework so I don’t “go in cold” and get thrown into the deep end 🙂  That said, I can’t promise anything as this event is quickly sneaking up on me and only a couple of weeks away now!  As always I’ll try to get the stream set up on my event page here – as well as place any content I create surrounding SFD13 there.  Be sure to follow along on Twitter as well using the hashtag #SFD13 and keep an eye on the official event page!  See you in Denver!

#VeeamON 2017 – Wait! There’s more!

If you packed your bags up and started to shut down after the Wednesday keynote thinking you had heard the last of the VeeamON announcements, then you might want to think about digging out that notebook and sharpening your pencils again, as we aren’t done yet here at VeeamON 2017!

Thursday, Paul Matiz took the stage for the final keynote of the show and made some more announcements around existing products – and threw the gauntlet down on one brand new product, the Veeam PN, which I released a separate post about!  That said, Veeam PN wasn’t the only Thursday announcement – there were a few others outlined below.

Veeam Backup for Office 365

Veeam released the first version of their SaaS email backup for Office 365 last year and honestly people flocked to it!  With more and more companies migrating to the Microsoft-hosted solution rather than putting up with the headaches of dealing with multiple on-premises Exchange servers, Veeam wanted to take advantage and help those organizations protect their most critical communication asset!

With version 1.5 announced just the other day, things like scalability have been added in order to horizontally scale your Office 365 backups – adding new proxies and repositories to help speed up the time it takes to pull down your O365 mailboxes.

In addition to this, automation has also been a focus.  With full PowerShell support coming into the product we can now use the easy verb-noun cmdlets to back up and restore Office 365 mailboxes.  And, more than just PowerShell – a fully supported RESTful API is also available.
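As a rough idea of what that verb-noun automation could look like, here’s a tiny sketch.  The module and cmdlet names are based on the Veeam Backup for Microsoft Office 365 PowerShell module as I understand it – treat them as assumptions and check the current documentation before relying on them.

# Hedged example - module and cmdlet names assumed from the Veeam Backup for
# Microsoft Office 365 PowerShell module; verify against the current docs.

# Load the module shipped with Veeam Backup for Office 365
Import-Module Veeam.Archiver.PowerShell

# List the existing backup jobs
Get-VBOJob | Select-Object Name

# Kick off a specific job on demand ("O365-Mailboxes" is a made-up job name)
$job = Get-VBOJob -Name "O365-Mailboxes"
Start-VBOJob -Job $job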

That said, why stop at version 1.5 – let’s get moving on to 2.0.  The Veeam community spoke – we loved the ability to back up the email and use the explorer functionality to perform restores back into mailboxes – but Office 365 is so much more.  What about SharePoint?  What about OneDrive?

Well, Veeam now has an answer for those questions as they released somewhat of a roadmap for their Office 365 backup strategy, with both SharePoint and OneDrive on it in version 2.0.

Veeam Management Pack v8 update

For those System Center users that use Veeam Management Pack to monitor and gain insights into their critical applications running on both VMware and Hyper-V, you will be pleased to know that Veeam has released a slew of new features into Veeam MP v8.  Now providing Azure-backed dashboards in Update 4, Veeam MP users will be able to update instantaneously.

Veeam has certainly announced a lot of things this week – with a heavy focus on cloud.  With Mark Russinovich doing the final keynote of the conference I can certainly say that cloud is most definitely the future – and data protection needs to be part of that!

Veeam announces the new Veeam Powered Network (Veeam PN)

During the final keynote of VeeamON 2017, Veeam took the stage and threw down the gauntlet on a brand new product release: the Veeam Powered Network, or Veeam PN for short.

Veeam PN is a new product, not a feature added to any others, and it was initially developed to solve an internal issue within Veeam.  Veeam has a lot of employees and developers in remote sites all across the world – and the pain of constantly connecting those sites together via VPN, coupled with the frustration of tunnels dropping all the time, gave birth to the Veeam PN.  It feels a lot like how a VMware fling comes to life: first being internal only, then released to the masses, then actually built out as an application offering.  Although Veeam PN can be used to establish this connectivity between any sites at all, the real benefits and the initial design intentions all focus on Microsoft Azure.

Veeam PN – Disaster Recovery to Microsoft Azure

Veeam PN is deployed into your Azure environment via the Azure Marketplace.  Once your cloud network has been established, another virtual appliance is deployed from veeam.com into your on-premises environments.  From there it’s as simple as selecting which networks you wish to have access into Azure and importing the automatically generated site configuration files into your remote sites – with that, you have a complete and secure site-to-site tunnel established.  I’m not sure of the scalability of Veeam PN just yet, but I do know it supports having multiple sites connected into Azure for those ROBO situations.  For those remote workers on the road, they can simply connect into Veeam PN and download a configuration file that simplifies their setup of the OpenVPN client to establish a client-to-site VPN.


So at this point you may be thinking “Why would Veeam develop this tech focused around networking, and what does it have to do with backup or DR?”  Well, let’s couple this together with a little feature Veeam has called “Direct Restore to Microsoft Azure”.  By recovering our VMs and physical endpoints directly into Azure, and then easily establishing the network connectivity using Veeam PN, we can now leverage true DR in the cloud in an easy-to-use, scalable, and secure way.  This is the “nirvana” of recovery that we have all been looking for.

One more thing – it’s free!

There it is – the Veeam way!  They released Backup and Replication with a free tier, Windows/Linux endpoint agents free, Direct Restore to Azure – free, the explorer tech – free!  Let’s add one more to that list!  Veeam PN is absolutely free!  And even though they have talked a lot about it being leveraged for Azure, companies and organizations can essentially use this technology to connect any of their sites and clients together – absolutely free!

Details around any betas or GA haven’t been revealed yet but keep your eyes open and I’ll do my best to help spread around any opportunity for you to get your hands on the new Veeam Powered Network!

A glimpse into #VeeamVanguard day!

Sure, the Veeam Vanguard program comes complete with tons of great swag and free trips to VeeamON and whatnot – but in all honesty the biggest benefit of the program, in my opinion, is the access that Veeam provides – access to fellow Vanguards and access to key people within Veeam, across the whole company from executives to SEs.  Here at VeeamON 2017 we get a special day jam-packed full of access – and below is a bit of a lowdown on what happened (or as much as we can tell you about anyways).

Veeam Availability Orchestrator – Michael White

The day started off with a couple of hours with Michael White (@mwVme) giving the lowdown on Veeam Availability Orchestrator – one of Veeam’s newest products, which helps orchestrate and automate disaster recovery failover.  Before getting into any product specifics, Michael went through a brief discussion about what Disaster Recovery and Business Continuity actually are, and how we can best prepare for any situation that may occur.  Michael is a perfect fit to evangelize this product as he had a lot of examples from other companies he has worked for over the years, outlining how he was prepared, or at times unprepared, for disasters that hit.  In all honesty it was a great way to start the day, getting a little bit of education rather than just immediately diving into product specifics!

Veeam Availability Console – Clint Wyckoff

Directly after Michael we had Veeam evangelist Clint Wyckoff come in and give us a breakdown of the new release candidate of Veeam Availability Console.  I’ve seen the product before, but like anything Veeam there are always a number of changes in a short time – and it was nice to see the product as it moves into tech preview.  For those that don’t know, VAC is Veeam’s answer to a centralized management solution for those large, dispersed enterprises as well as Veeam Service Providers to manage, deploy, and configure both their Veeam Backup & Replication servers as well as the newly minted Veeam Agents for Microsoft Windows and Linux.

Vanguard support from the top down

One great thing that I like about the Veeam Vanguard program is that it’s not just a “pet project” for the company.  During Vanguard day we were introduced to Danny Allan, VP of Cloud and Alliance Strategy at Veeam.  Danny is our new executive sponsor at Veeam – meaning we have support at the highest levels of the company.  It’s really nice to see a company sink so much support and resources, from all roles, into a recognition program – one of the many reasons why I feel the Vanguard program is so successful.

Nimble

After lunch we had Nimble come in and brief us on their Secondary Flash Array and the interesting performance enhancements it has when being used with Veeam.  Last year during our Vanguard day we didn’t have any vendor other than Veeam present, so it’s nice to see some of Veeam’s partners and ecosystem vendors reaching out to find some time to talk with us.  Nimble certainly has a great product – and since I’m not sure what all was covered under NDA I’ll simply leave it at that!

AMA with Veeam R&D

Earlier, when I mentioned that one of the biggest benefits of the Vanguard program was access, this is basically what I was referring to.  For the rest of the afternoon we had a no-holds-barred, ask-me-anything session with Anton Gostev, Mike Resseler, Alexy Vasilev, Alec King, Vladimir Eremin, Dmitry Popov, and Andreas Neufert – all Veeam employees who manage, or work very closely with, R&D – deciding what features are implemented, when they get implemented, and basically defining a roadmap for when these features get inserted into products.  Now this session was definitely NDA as a lot was talked about – but just let me say this was the best and most interesting portion of the whole day!

With so much being under NDA and embargo there isn’t a lot I can tell you about the content – but for those wondering this is just a brief description of how much access you get into Veeam being under the Vanguard label.  Certainly if you wish, I encourage you to apply for the program – you won’t regret it!

Veeam Availability Suite v10 – what we know so far…

Although we got a hint at some of the announcements coming out of VeeamON during partner day on Tuesday, it was really the general session Wednesday morning which brought forth the details surrounding what Veeam has in store for the future.  In true Veeam fashion we see yet more innovation and expansion in their flagship Veeam Availability Suite – covering your data protection needs across all things virtual, physical, and cloud.  So without further ado let’s round up some of what we saw during the Wednesday keynote at VeeamON 2017.

 

Veeam Agent Management

It’s no surprise that as soon as Veeam released their support for protecting Windows and Linux physical workloads, customers and partners all begged for integration into VBR.  Today we are seeing just that, as Veeam has wrapped a very nice management interface around managing backups for both our virtual machines and our physical Windows and Linux workloads.  This not only gives us the ability to manage those physical backups within VBR, but also gives us the ability to remotely discover, deploy, and configure the agents for the physical endpoints as well!

Backup and restore for file shares

Veeam Availability Suite v10 brings with it the ability to back up and restore directly from our file shares.  Basically, those SMB shares can be accessed via a UNC path and the files backed up and protected by Veeam.  Different from Veeam’s traditional restore points though, backup and restore for file shares doesn’t necessarily store restore points, but acts almost like a versioning system instead – allowing administrators to state how many days they would like to version the files, whether or not to keep deleted files, and also specify some long-term retention around the files.  This is a pretty cool feature set to be added to v10 and I can’t wait to see where this goes – whether the file share functionality can somehow be mapped to the image-level backup and work together to restore complete restore points as well as apply any newer file versions that may exist.

Continuous Data Protection for all

Perhaps some of the most exciting news of all is Veeam’s announcement of support for Continuous Data Protection, allowing enterprises and organizations to drastically lower their RPO to a whopping 15 seconds.  Ever since Veeam hit the market their replication strategy has been to snapshot VMs in order to gain access to CBT data and replicate it across.  That said, we all recognize the pain points of running our infrastructure with the impact of snapshots.  That’s why, with the new CDP strategy set forth by Veeam today, they will utilize VMware’s vSphere APIs for I/O Filtering to intercept and capture the I/O streams to our VMs and immediately replicate the data to another location.  This to me is a huge improvement for an already outstanding RTPO that organizations can leverage Veeam to achieve.  This is truly groundbreaking for Veeam as we can now, say, have 4 hours of 15-second restore points to choose from.  It’s nice to see a vendor finally take advantage of the APIs set forth by VMware.
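To put that example in perspective, here’s a quick bit of back-of-the-napkin math on what a 4-hour window of 15-second restore points actually works out to – my own illustration of the arithmetic, not an official Veeam figure.

# Back-of-the-napkin math: how many 15-second restore points fit in a 4-hour window?
$rpoSeconds    = 15
$windowHours   = 4
$restorePoints = ($windowHours * 3600) / $rpoSeconds
"A $windowHours-hour window at a $rpoSeconds-second RPO gives $restorePoints restore points to choose from."
# => 960 restore points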

vCloud Director Integration into Cloud Connect

Veeam service providers have been giving many customers the ability to consume both backup and replication as a service – allowing customers to essentially ship their data off to them, with the SP becoming the DR site.  That said, it’s always been limited to those VMs that live within vCenter and vSphere.  Today Veeam announced support for vCloud Director organizations to also take advantage of the Cloud Connect offering – allowing those running vCloud Director to consume the DR as a Service that Veeam partners have been providing, keeping their virtual datacenters and hardware plans while failing over their environments.

Veeam Availability for AWS

Yeah, you heard that right!  We have seen Veeam hit the market focusing solely on virtualized workloads, slowly moving into support for physical workloads – and now, supporting the most famous and well-known public cloud – Amazon AWS.  Cloud always introduces risk into an environment, which in turn means that we need something exactly like Veeam Availability for AWS to protect those cloud workloads and ensure our data is always recoverable and available if need be.  In true Veeam fashion, the solution will be agentless.

Ability to archive older backup files

Veeam v10 also brings with it the ability to archive backup files off to cheaper storage as they age out of our backup policies.  Now, we all know that cloud and archive storage is a great solution for this, so guess what – yeah, we now have the ability to create what is called an “Archive Storage” repository, which can live on any type of native object storage, be it Amazon or even your own Swift integration.  This frees up your primary backup storage to handle things such as restores, etc. – while the archive storage can do what it does best – hold those large, lesser-accessed backup files.

Universal Storage Integration API

For the last few VeeamON events the question of who the next storage vendor to integrate with Veeam would be has always been on everyone’s mind.  With the announcement of the new Universal Storage Integration API, the next storage vendor could literally be anyone.  This is basically an API set that allows storage vendors to integrate into Veeam – giving Veeam the ability to control the array, creating and removing storage snapshots, allowing customers to lower RTO and RPO without ever leaving the familiar Veeam console.

This honestly just scratches the surface of some of the announcements Veeam has in store for us this week, so stay tuned as there is another keynote tomorrow where I’m sure we will hear more about VBR v10 and possibly some NEW product announcements.  For now, it’s off to some deep dives to learn more about some of these great features!  Thanks for reading!

Veeam Availability Orchestrator – Automation for your Disaster Recovery

As a member of the Veeam Vanguards here at VeeamON 2017, we got to spend a couple of hours with Michael White (@mwVme), who gave us an update on Veeam Availability Orchestrator – Veeam’s answer to orchestrating and automating failover to their replicated VMs.  Michael certainly is a great choice when looking for someone to evangelize this product, as he had a number of examples of DR situations he has either helped with or orchestrated companies through – with both good and bad outcomes!  But back to topic – VAO was announced a while back; in fact, over a year ago Veeam announced their plans for VAO during their “Next big thing” event in April of 2016.  Since then I’ve gotten to see the application move along through various beta stages and was pleasantly surprised to see how the product has matured as they gear up for their 1.0 release (no, I don’t know when that is).

For those not familiar with VAO, let me give you a little bit of a breakdown.  VAO is essentially a wrapper, or an engine, that interacts with other Veeam products via API calls.  Think Veeam ONE, Veeam Business View, and Veeam Backup & Replication all talking together to one centralized disaster recovery orchestration machine.  As far as the architecture goes there really isn’t anything special – it’s a web interface with a SQL backend.  As far as I know the only limitations associated with Veeam Availability Orchestrator are that it is only supported within a VMware environment and that an Enterprise Plus license must be applied to the VBR instance VAO connects to.

So what does VAO do that VBR doesn’t?

Hearing phrases like “testing our replicas” and “using the Virtual Labs” you might be wondering what exactly VAO does that VBR doesn’t.  I mean, we have the SureReplica technology within VBR and it works great at testing whether or not we can recover, so why would we need VAO?  The answer here is really about the details.  Sure, VAO doesn’t re-invent the wheel when it comes to DR testing – why would they force you to reconfigure all of those Virtual Labs again?  They are simply imported, along with a lot of information from VBR, for use within VAO.  That said, VAO does much, much more.  From what I’ve seen we can basically break VAO down into three separate components.

Orchestration

VAO takes what you have already set up within VBR and allows you to automate and orchestrate around that.  Meaning we have already replicated our VMs to a DR location, set up our failover plans and virtual labs, and completed configuration around re-IPing and post-failover scripts to handle our recovery.  VAO takes all of this and adds flexibility into our recovery plans to execute and trigger pre- and post-failover scripts, along with per-VM testing scripts as well.  At the moment we are limited to just PowerShell; however, we may see more scripting languages supported come GA time.  Essentially VAO gives us more flexibility in running and triggering external processes during a failover event than what VBR provides on its own.
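Since those pre/post and per-VM scripts are plain PowerShell, a simple per-VM test script might look something like the sketch below – checking that a web service answers after failover.  The hostname, port, and script structure here are purely my own example, not something VAO ships with.

# Example per-VM test script (my own illustration, not a VAO-supplied script):
# verify a failed-over web VM is answering before calling the failover a success.
param(
    [string] $VMHostname = 'web01-dr.lab.local',   # hypothetical DR-side name
    [int]    $Port       = 443
)

# 1. Is the guest reachable on the expected port?
$tcp = Test-NetConnection -ComputerName $VMHostname -Port $Port
if (-not $tcp.TcpTestSucceeded) {
    throw "TCP $Port on $VMHostname is not responding - failing the test."
}

# 2. Does the application actually answer?
$response = Invoke-WebRequest -Uri "https://$VMHostname/" -UseBasicParsing -TimeoutSec 30
if ($response.StatusCode -ne 200) {
    throw "Unexpected HTTP status $($response.StatusCode) from $VMHostname."
}

Write-Output "$VMHostname passed its post-failover test."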

Automated DR Testing

VAO takes all of this failover orchestration and applies it to our testing environments as well.  By giving us the ability to test, and test often, we as organizations can drastically increase our success rate when a true disaster occurs.  Certainly virtualization has really impacted our ability to test DR plans, in a good way – but there are still a lot of challenges when it comes to performing a true test – VAO closes that gap even more.

Dynamic Documentation

Probably the biggest feature of VAO, in my opinion, is its ability to automatically and dynamically create disaster recovery documentation.  DR documentation is often overlooked, left sitting on some file server, stale and not updated at all.  Environments today are under constant change, and when our production environments change so do our DR requirements.  VAO does a good job of dynamically pulling in any new VMs added or older VMs removed and adjusting its documentation accordingly.  In the end we are left with some nicely updated documentation and runbooks to reference when the time comes that we need them.

All of this said though, to me the true value of VAO really is its ability to focus on the details.  From what I’ve seen VAO does a great job of reporting any warnings, errors or failures as they apply to any DR test or failover event.  Not just from its canned testing scripts (for instance, connecting to a mailbox on a failed-over Exchange server), but from our custom-built PowerShell scripts as well.  Without this attention to detail a lot of assumptions and false positives can creep into a DR test – leaving us with an inconsistent state during an actual failover event.  VAO, in all of its reporting and messaging, certainly provides nice visibility into each and every VM, and each and every task associated with that VM, inside of a failover plan.

We still don’t have a solid release date on VAO but in true Veeam fashion let me give you this estimate – “when it’s ready” 🙂