
Top vBlog voting is underway

The sheer number of blogs listed over on Eric Siebert's vLaunchpad is simply amazing!  I don't know how many are listed there – but there is certainly a lot of scrolling that needs to happen in order to get to the bottom – it's awesome to see just how much information the virtualization community shares with each other!  Props to everyone for that!  And props to Eric – I sometimes struggle with setting up links within my blog posts, let alone tracking the RSS and twitter profiles of all of those blogs/bloggers.  Every year Eric keeps this list up to date – ensuring blogs are current, active, and categorized – all with the intention of hosting the annual Top vBlog voting!

Well – that time is now.  It's time to go and show your support for the blogs, news sites, and podcasts out there that help guide you through your daily job or spark your interest in new technologies.

This year we see some changes for the better to the contest – firstly, blogs with 10 posts or fewer during 2016 will not be listed on the ballot – ensuring that we are only voting for those who put forth the time and effort of releasing content.  Secondly, public voting will not be the sole measurement in the ranking of the blogs.  Sure, your opinion will still hold the majority of the ranking at 80%, however the remaining 20% will be split between the number of posts published on the blog and the Google PageSpeed score of the blog – forcing bloggers to sharpen up their web hosting skills and try to optimize their sites.
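To make the new weighting concrete, here is a minimal, purely illustrative Python sketch of how such a composite score could be computed.  The 80/20 split comes from the post above; the even 10/10 split of the remainder, the function name, and the normalization are my own assumptions, not Eric's actual methodology.

```python
def composite_score(vote_share, post_count, pagespeed, max_posts=365):
    """Illustrative only: blend public votes (80%) with post volume and
    Google PageSpeed (assumed 10% each), everything normalized to 0..1."""
    posts_norm = min(post_count / max_posts, 1.0)   # assumed normalization
    speed_norm = pagespeed / 100.0                  # PageSpeed reports 0..100
    return 0.80 * vote_share + 0.10 * posts_norm + 0.10 * speed_norm

# Example: a blog with 5% of the public vote, 60 posts in 2016, PageSpeed 85
print(round(composite_score(0.05, 60, 85), 3))
```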

So with that – if you have found a blog post particularly useful throughout the year or enjoyed reading a particular community member's blog – go and vote and support them!  In all honesty, it's not as if there is a massive prize or anything at the end, but I can say, as a blogger, I enjoy looking at the results and seeing where people have ranked, as well as where I rank among them!  For me, it's a humbling experience to even be listed!  So big thanks to Eric for tallying up all these votes and handling all of the category submissions and everything!  I know that it's not for the faint of heart!  And also, huge thanks to Turbonomic for supporting the Top vBlog this year!  If you are looking to right-size your environment, migrate to cloud, or simply get the most bang for your buck no matter where your workloads live, I would recommend checking out what Turbonomic has to offer!  And, when you are done, go vote & Make Virtualization Great Again!

The StarWind Cloud VTL for AWS and Veeam

When companies are approaching a data protection strategy something dubbed the "3-2-1 rule" often comes up in conversation.  In its essence, the 3-2-1 rule is designed as a process to ensure that you always have data availability should you need it.  That's 3 copies of your data, on 2 different media types/sets, with 1 copy located offsite.  Now, when taking this rule and applying it to our data protection design, the subject of tape usually comes up as it facilitates that second type of media we need to satisfy the "2" portion.  Tape has played a part in data protection for a long time, but the mundane tasks of removing a tape and inserting another just don't fit well inside of our modern datacenters.  When it's time to restore we are then left with the frustration of finding the proper tapes and then the slow performance of moving data off of that tape back into production.  That's why companies like StarWind initially built what is called the Virtual Tape Library (VTL).  The StarWind VTL mimics a physical tape library, however instead of requiring the manual intervention of removing and loading tapes it simply writes the data to disk.

The StarWind VTL is nothing new – in fact it's been around since 2009.  But just this past month at VeeamON, StarWind announced yet another version of their VTL, only this time, instead of just writing the data to local disk they now have the option to additionally sync those virtual tapes to the cloud.  The software, called StarWind Cloud VTL for AWS and Veeam, couldn't come at a more opportune time, as only a week before the announcement "WannaCry" was worming its way through Europe, encrypting both production and backup data – leaving companies without some sort of offsite, air-gapped backup with very few options.

[Image: StarWind Cloud VTL for AWS and Veeam overview]

So how does it work?

The StarWind Cloud VTL for AWS and Veeam is 100% software based – therefore no extra hardware or appliances need to be racked and stacked in your datacenter at all.  In fact, for convenience and cost reasons StarWind Cloud VTL can even be installed directly alongside your Veeam Backup & Replication backup server.  If you have ever installed any other StarWind products then the Cloud VTL setup will look very similar, utilizing a very easy-to-use, wizard-driven installation.

Once installed, the configuration (as shown below) is really just adding our virtual tape device (drive) and however many virtual tapes we want.  As we can see, StarWind actually mimics the HPE MSL8096 Tape Library – therefore we may need to pull down the appropriate device drivers in order to support it.  Once installed we are essentially left with an iSCSI target that points to our VTL, which in turn maps to local disk.

[Images: virtual tape library location and HPE MSL8096 emulation settings]

So by now you might be thinking "Hey, these tapes are mapped to disk, not cloud" and you are absolutely correct in that thought.  StarWind Cloud VTL implements what they call a Disk to Disk to Cloud process – meaning data is first copied to disk (StarWind) from disk (Production) and then further replicated to Cloud (Amazon S3/Glacier).  This scenario allows the actual Veeam tape job to complete much faster as it's simply streaming to local disk – after which, the data is replicated to Amazon.  To set this up we simply need to click on the 'Cloud Replication' option within the StarWind management console and provide our access and region information for our S3 bucket.
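Under the hood, the cloud replication step amounts to virtual tape files landing in an S3 bucket you control.  StarWind hasn't published their implementation here, so treat the following as a hedged conceptual sketch only – the bucket name, prefix, and file path are made up for illustration, and it simply shows what "disk to disk to cloud" boils down to using boto3.

```python
import boto3

# Assumed values for illustration only; substitute your own bucket, region,
# and credentials (boto3 picks credentials up from the environment).
s3 = boto3.client("s3", region_name="us-east-1")

def replicate_virtual_tape(local_path, bucket="my-vtl-bucket", prefix="virtual-tapes/"):
    """Copy a locally written virtual tape file up to S3 (the 'to Cloud' hop)."""
    key = prefix + local_path.split("\\")[-1]
    s3.upload_file(local_path, bucket, key)   # boto3 handles multipart uploads
    return key

replicate_virtual_tape(r"D:\StarWindVTL\Tape001.vtl")
```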

Above I hinted at yet another feature of the StarWind Cloud VTL with the mention of Glacier.  As shown below we can see a few options as they pertain to our retention – the most interesting being the ability to migrate our tape data out of S3 and into the cheaper, more archive-suitable Glacier service after a certain period of time.  This tiering feature allows us to keep costs down by essentially staging and de-staging our backup data depending on age to a lower-tier, lower-performance storage class while keeping our most recent restore points on a more reliable, higher-performance cloud storage service.

[Image: S3 to Glacier retention and tiering options]

We can also see that we have options surrounding when to purge the local on-site tape data as well as how long to  wait after the virtual tape has been ejected locally before we start the replication to S3.
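In plain AWS terms, the S3-to-Glacier tiering described above maps to an S3 lifecycle rule that transitions aged objects to the GLACIER storage class.  This is not necessarily how StarWind implements it – just a hedged boto3 sketch of the same idea, with the bucket name, prefix, and 30-day threshold as assumptions.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Transition virtual-tape objects older than 30 days from S3 to Glacier.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-vtl-bucket",                      # assumed bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-old-tapes-to-glacier",
            "Filter": {"Prefix": "virtual-tapes/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)
```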

That's really it as far as the StarWind setup is concerned.  The only thing left to do now is set up the VTL as a tape server within VBR.  Before we can do this we will first need to establish a connection to our VTL.  This, just as with StarWind Virtual SAN, is simply an iSCSI target that is mounted with the standard Windows iSCSI tools.  As mentioned previously, the VTL mimics an HPE MSL8096, so be sure those drivers are downloaded and installed to ensure the VTL can be discovered.

For the VBR configuration we simply add the StarWind VTL we have set up to our backup infrastructure as a "Tape Server".  After doing so we should be able to see all of the virtual tapes that we have created and can simply set up our tape jobs or File to Tape jobs just as we always have within Veeam – only this time, our tapes are essentially being replicated to S3.

In the end I think StarWind is on to something here!  This is their first go at cloud replication and I'm sure there is much more to come.  In fact we have already seen the addition of Microsoft Azure blob storage into the StarWind Cloud VTL portfolio, so things are moving quickly.  The idea of still achieving the ultimate goal of the 3-2-1 rule while not having to physically mess around with tape is appealing – not to mention that by utilizing cloud we get that offsite, scalable storage tier without the need to manage, update, or even procure the hardware.  Personally I can see Veeam shops jumping on this.  It certainly enables that ideal environment of having some uber-fast backup repository for your most recent backups on-site while leaving StarWind and AWS with the job of migrating and managing the more "cold", archival-type data up in the cloud.  Remember, you don't want to be "that" IT shop that can't recover from the next piece of ransomware that comes down the pipe.  If you would like to give StarWind Cloud VTL for Amazon and Veeam a shot you can pick yourself up a free 30-day trial here.

Turbonomic 5.9 adds visibility into YOUR cloud!

As of late I've been making it somewhat of a personal goal to try to learn more about cloud – AWS in particular.  I've been going through the training over at acloud.guru, messing around with the free tier in AWS, and toying with the possibility of writing my AWS Certified Solutions Architect Associate exam.  Now, one thing that I have learned over the past couple of months is that AWS is a beast – there are a lot of services provided – and gaining visibility into these services, from both a cost and a performance perspective, seems next to impossible.  Now this post isn't going to be focused on my struggles, but more so on how Turbonomic (formerly VMTurbo), more specifically the recently announced 5.9 version, can help organizations bridge that visibility gap and reach that ultimate goal of maximum performance at minimum cost.

Turbonomic 5.9 – Making Hybrid Cloud possible.

Although this is a minor release it certainly does come with some major enhancements to the product in terms of cloud integration.  Turbonomic has always done a great job at monitoring our on-premises environments – ensuring that VMs and services are right-sized and running in the most cost-efficient way, while ensuring that performance and SLAs are met.  Their supply-demand analytics engine is second to none when it comes to determining these placements, automatically resolving issues, and providing an instant ROI to organizations' datacenters.  That said, more and more organizations are now looking to move away from housing their own datacenters and are investigating cloud-enabled solutions, be it public, private, or a hybrid model – and, in typical customer fashion, we really want to use the same tools and concepts that we are used to.  Turbonomic 5.9 seems to deliver on this expectation with the addition of a number of cloudy features to the product (summarized below).

  • Cloud Migration Planning – 5.9 gives us the ability to perform very in-depth cost analysis of moving our workloads to the public cloud.  I.e. what would it cost me to move workload X to Amazon?  What would the cost be of migrating workloads A and B to Azure?  What's the cost comparison of migrating workload X from this AWS region to this Azure region?  Getting cost estimates from Azure, AWS, and SoftLayer in regards to these questions is very beneficial when performing feasibility studies around cloud adoption and migration.
  • Workload Specific Costing – Once we have our workloads in the cloud, Turbonomic will now track and report cost metrics, in real-time back to the dashboard.
  • Cloud Budgeting – Imagine setting a defined budget for your cloud services and seeing just how that budget is being consumed across the different regions, tags, and workloads defined within it.  Aside from seeing your real-time budget impacts, Turbonomic will also take into account past costs in order to project future cloud consumption costs based on your growth and performance needs.  Also, if you have some sort of discounted account or agreement with any of the cloud providers, Turbonomic uses your credentials – so you are getting YOUR actual costs – not industry averages!
  • Lower Cloud Costs – This is really what Turbonomic is about IMO – ensuring you are reaching maximum performance at the lowest cost – and now we see this in the cloud as well.  Think about gaining visibility into what it may cost to scale up to a larger instance, or how much you can save by scaling down.  Turbonomic can predict these costs as well as automatically scale these instances down, or better yet, suspend them during times they aren't needed (a rough sketch of that kind of cost arithmetic follows this list).
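To illustrate the kind of arithmetic behind a scale-down recommendation, here is a toy Python comparison.  This is not Turbonomic's analytics engine – the instance names and hourly rates are made-up placeholders, and 730 hours is used as a rough month.

```python
# Toy cost comparison only: hourly on-demand rates below are illustrative
# placeholders, not real AWS pricing.
HOURLY_RATE = {"m4.2xlarge": 0.40, "m4.xlarge": 0.20, "m4.large": 0.10}

def monthly_cost(instance_type, hours=730):
    """Approximate a month of on-demand cost for a given instance size."""
    return HOURLY_RATE[instance_type] * hours

current, proposed = "m4.xlarge", "m4.large"
savings = monthly_cost(current) - monthly_cost(proposed)
print(f"Scaling {current} down to {proposed} saves roughly ${savings:.2f}/month")
```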

So yeah – all the benefits of the previous version of Turbonomic are now applicable to the cloud – allowing organizations to get that "single pane of glass" cost view of their on-premises workloads right next to their AWS, Azure, or SoftLayer workloads as well!  Certainly these aren't the only enhancements that have been released with 5.9 – we are also blessed with some pretty hefty performance improvements to the analytics engine as well – think 9 minutes to analyze and report on 100,000 VMs – not too shabby.  Also, as highlighted during their TFD presentations recently – the HTML5 interface is currently running in "dual" mode – with the intention of having all functionality fully available by the end of 2017!  But to me, the meat and potatoes of this release revolve around cloud.  Turbonomic answers a lot of the costing questions that come with cloud – and from what they claim, can lower your cloud bill by an average of 30%!  That should enable a very fast ROI for organizations!  If you want to read more about the new features, as I haven't covered them all, definitely check out the Turbonomic "What's New" page!  Also Vladan Seget has a great round-up on his blog, as does Dave Henry on his!  And hey – if you want to check it all out for yourself you can grab a free 30-day full-featured trial of Turbonomic here!

 

SNIA comes back for another Storage Field Day

SNIA, the Storage Networking Industry Association, is a non-profit organization made up of a number of member companies striving to create vendor-neutral architectures and standards throughout the storage industry.  Think Dell, VMware, HPE, Hitachi – all the likely names, all behind closed doors, working for the greater good.  Ok – that's their definition.  Mine?  Well, I compare it to Rocky III – you know, Rocky and Apollo, sworn enemies teaming up to make the world a better place by knocking out Mr. T.  So, I may be a little off with that, but not that far off!  Replace "Rocky and Apollo" with some "very big name storage companies" and swap out "knocking out Mr. T" with "releasing industry standards and specifications" and I think we are pretty close.

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. are all paid for by Gestalt IT, however I'm not required or obliged to return the favor in any way other than my presence 🙂 – Which still feels weird to say 🙂 Well, my presence and possibly a little bit of Maple Syrup.

So, in all seriousness, SNIA has been around for 20 years and was formed initially to deal with interoperability issues surrounding networked storage.  Today we see them really focusing on architectures and standards, as well as a slew of education services, training, and certifications.  There is also a ton of work being performed by SNIA around current storage trends such as flash, cloud, object storage, persistent memory, etc.  You name it, they have some work being done around it.  From their website, here is a handful of the work that SNIA is currently investigating…

  • Cloud Data Management Interface (CDMI)
  • Linear Tape File System (LTFS)
  • IP Based Drive Management Specifications
  • NVM Programming Model
  • Self-contained Information Retention Format
  • Solid State Storage Performance Test Specifications
  • Swordfish Scalable Storage Management APIs

Wait!  They aren’t selling anything!

Honestly, I've never been to a Tech Field Day event where a non-profit organization has spoken – so I'm very excited to see what SNIA will choose to talk about!  As shown above they have a broad range of topics to choose from – and judging by past SNIA videos from TFD, they can go quite deep on these subjects.  It will be nice to hear a vendor-neutral approach to a TFD session.  I applaud SNIA for their efforts – it can't be easy organizing and keeping all of its members in check – and it's nice to see an effort from a company, non-profit or not, looking out for the customers, the partners, the people that have to take all of these storage arrays and protocols and make them all work!  As always, follow along with all my SFD13 content here – keep your eye on the official event page here – and we will see you in June!

X-IO Technologies – A #SFD13 preview

In the technology sector we always joke that when a startup is 5 years old that sometimes makes it legacy!  Meaning, 5 years is a long time in the eyes of a technologist – things change, tech changes, new hardware emerges.  All of this drives change!  Well, if 5 years makes a mature company, then I'm not sure what to call X-IO Technologies.  X-IO was founded back in 1995, more than 20 years ago – taking them right off the scale in terms of aging for a tech company!  Honestly, I've heard the name before (or seen the logo) but I've never really looked at what it is X-IO does – so today let's take a look at the current X-IO offerings and solutions and what they bring to the table – and, if you are interested, you can always learn more when they present at the upcoming Storage Field Day 13 event in Denver come June 14th – 16th.  But for now, the tech…

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. are all paid for by Gestalt IT, however I'm not required or obliged to return the favor in any way other than my presence 🙂 – Which still feels weird to say 🙂 Well, my presence and possibly a little bit of Maple Syrup.

What does X-IO bring?

From their website it appears that X-IO has a couple of basic offerings, all hardware appliances, and all serving different points of interest in the storage market.  Let's try and figure out what each of them does…

Axellio Edge Computing

This appears to be an edge computing system marketed mainly to companies needing performance for big data analytics as well as those looking for a platform to crunch data from IoT sensors.  These converged storage and compute boxes are very dense in CPU, memory, and storage – supporting up to 88 CPU cores, 2TB of memory, and a maximum of 72 – yes, 72 – 2.5" NVMe SSD drives.  Each appliance is basically broken down into two server modules for the compute and memory, as well as up to 6 FlashPacs (a FlashPac is essentially a module hosting 12 dual-ported NVMe slots).  As far as scale goes I don't see much mention of pooling appliances, so it appears that these are standalone boxes, each serving a single purpose.

iglu Enterprise Storage Systems

Here it appears we have a storage array.  The iglu storage system can be built using all flash, a mixture of flash and disk, or just spinning disk.  They appear to have multiple models supporting each disk configuration, with their all-flash version supporting over 600,000 IOPS.  Controllers on the iglu system are distributed, meaning whenever we add more capacity we are also adding more controllers, thus increasing both space and performance with the same upgrade.  As far as software goes we see all the familiar features such as snapshots, CDP, replication, stretched clustering, and integration with VMware, SQL, Oracle, etc.  One nice aspect is that all iglu systems, no matter the model, have access to all of the software features – there is no need to license individual aspects of the software.

I'm excited to see what X-IO has to say at SFD13 come this June.  There was some mention of a unique way of handling drive failures, as well as a lengthy 5-year warranty on everything, which may separate them from the storage vendor pack – but I'm hoping they have much more to talk about in regards to their storage offerings to give it that wow factor!  As always you can find all my SFD13 related information here or follow the event page here to stay updated and catch the live stream!

Hear more from Exablox at Storage Field Day 13

As I continue along with some SFD13 previews I stumble upon Exablox.  Exablox is one of the few presenting companies at SFD13 that I know very little about, so I'm excited to hear what they have to offer and say come June 14th in Denver when the event kicks off.  Also, Exablox is coming into SFD after being acquired by StorageCraft earlier this year, and given the partnerships between the two companies in the past, I'm sure we will hear more about how this integration is going.  Headquartered in Mountain View, Exablox was the brainchild of Matthew Catino, Tad Hunt, and Frank Barrus.  Founded in 2010, the three set out to create a more scalable filer array – and in the end we are left with a product they call OneBlox.

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. are all paid for by Gestalt IT, however I'm not required or obliged to return the favor in any way other than my presence 🙂 – Which still feels weird to say 🙂 Well, my presence and possibly a little bit of Maple Syrup.

OneBlox – A reimagined scale-out storage solution.

Honestly, when I first started diving into OneBlox I pretty much assumed it was just a simple filer box that you purchased chock full of drives – but after some more digging there are honestly a lot of differences between OneBlox and a lot of the other scale-out NAS boxes I'm familiar with.

  • You can bring your own drives – yeah, you can put pretty much whatever drives you want in these things!  Different speeds and capacities?  No problem.
  • It runs off of object storage – underneath all that SMB/NFS presentation is a heaping helping of object-based storage, which is a big enabler for our next point.
  • There is no RAID – rather than utilizing parity to protect against failure, OneBlox utilizes their custom object-based file system to intelligently write 3 copies of every object, ensuring your data is written not only on different drives, but on different nodes within a ring as well.  Wait!  What's a ring?
  • Their ring architecture – A ring is essentially a cluster of one or more OneBlox nodes.  All nodes within a single ring are essentially pooled to form one single global file system which shrinks and grows as drives and nodes are added.  Items such as deduplication and compression are all performed globally across this file system – meaning we have inline deduplication across multiple OneBlox nodes (a rough placement sketch follows this list).
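To make the "three copies, different drives, different nodes" idea concrete, here is a minimal placement sketch in Python.  This is only my guess at the general approach – the ring layout, node names, and drive-selection logic are assumptions, not Exablox's actual file system code.

```python
import itertools

# Illustrative ring: a cluster of OneBlox-style nodes, each with a few drives.
ring = {
    "node-a": ["d1", "d2", "d3"],
    "node-b": ["d1", "d2", "d3"],
    "node-c": ["d1", "d2", "d3"],
}

def place_object(obj_id, copies=3):
    """Spread copies of an object across distinct nodes (and one drive each)."""
    nodes = list(ring)
    if len(nodes) >= copies:
        chosen = nodes[:copies]                       # one copy per node
    else:
        chosen = list(itertools.islice(itertools.cycle(nodes), copies))
    return [(n, ring[n][hash((obj_id, n)) % len(ring[n])]) for n in chosen]

print(place_object("object-42"))   # e.g. [('node-a', 'd2'), ('node-b', 'd1'), ...]
```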

OneSystem – Cloud Based Management

As of late it seems like everyone is jumping on this "cloud-based" management bandwagon – and rightly so.  We get the ability to manage our infrastructure from anywhere with a browser and an internet connection.  OneSystem is Exablox's play in this field.  OneSystem essentially allows administrators to access all of their deployed OneBlox systems from a secure, available, cloud-based management server through a browser.  OneSystem provides real-time monitoring and health reporting, and supports a multi-tenant environment for those service providers and large enterprises that may need it.  Don't trust the cloud just yet?  No problem, the whole OneSystem can be deployed on-premises as well if need be.

As I mentioned earlier I don't know a lot about Exablox, so I'm excited to see what they have to say at SFD13.  I read a little about some unique CDP and replication strategies they have for protecting data between different rings.  On the surface it looks like some cool tech and I can't wait to learn more about some of their biggest use cases and see how that underlying custom object-based file system works!  Hopefully we see more at SFD13!  For those that want to follow along I'll have the live stream along with any of my SFD13 related content on my page here, or be sure to stay up to date by keeping an eye on the official SFD13 event page.  And hey, follow along and ask questions using the hashtag #SFD13 – See you in June!

Primary Data set to make their 5th appearance at Storage Field Day

This June at SFD13, Primary Data will make their 5th appearance at a full-fledged Tech Field Day event, despite having GA'd their flagship product just 9 short months ago at VMworld 2016 (they have done a few TFD Extra events as well).  Now, pardon my math, but for a company that was founded in 2013 that means they spent roughly 3 years in development – gathering loads of customer feedback and beta testing before releasing their data management platform to the wild, all the while giving updates via TFD – there's an example of a company taking its time and doing something right!

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. are all paid for by Gestalt IT, however I'm not required or obliged to return the favor in any way other than my presence 🙂 – Which still feels weird to say 🙂 Well, my presence and possibly a little bit of Maple Syrup.

Now, before we go into detail about Primary Data’s products and what they provide I wanted to take a second to talk a little bit about their founders and leadership team – because we all know great products and execution start from the top – and Primary Data has quite the top!  Let’s have a look below…

Lance Smith (CEO) – Despite holding many top level leadership positions over the years perhaps the most relevant would be COO of Fusion-io and following the acquisition moving into Senior Vice President and GM inside of SanDisk.

Rick White (Co-Founder and CMO) – This isn't Rick's first go-around, as he was also one of the original co-founders and CMO at Fusion-io.  Are you beginning to sense a little pattern here around Fusion-io? 🙂

David Flynn (Co-Founder and CTO) – Here comes that pattern again!  Prior to founding Primary Data, David Flynn was a co-founder and CEO at Fusion-io.  His bio on the site also states that David holds over 100 patents across a wide range of technologies – not too shabby.

Steve Wozniak (Chief Scientist) – Yeah, that’s right, the Steve Wozniak, you know, the guy from Dancing with the Stars – oh, and he kind of helped found and shape a small little valley startup named Apple.

Now keep in mind these are just 4 people that I have picked out from Primary Data's company page!  And even though there is a ton of brainpower listed here, there are still a lot more people at Primary Data whose experience in the industry just blows my mind!

So what exactly does Primary Data bring to market?

Primary Data's key storage solution focuses on their flagship product, DataSphere.  Now DataSphere in itself isn't a storage array – it's best described as what they coin a "Metadata Engine for the Enterprise".  So what does this really mean?  Well, hopefully I'm getting it here, but to me DataSphere looks like somewhat of an abstraction tool – a way to decouple all of your applications from the data that they use and store.  The first step is to take all of an organization's storage and pool it together into a single logical namespace.  It's this storage – be it direct-attached, SAN-based, or even cloud – which can in turn be presented back to your enterprise applications.  But it's not necessarily this pooling which drives up the value of DataSphere – it's the analytics and automatic data movement that really stand out for me.  DataSphere is able to map a set of application rules or objectives to automatically move data across different tiers of storage in order to ensure that certain SLAs are met, or more so, that the right resources are being assigned to the right applications.  Meaning the proper storage resources are provisioned to applications, no matter where that application is running and no matter where that storage lives – all we need to do is specify that we want Application X to have Storage Requirements Y and let DataSphere tier and tune to its heart's delight!  The metadata engine keeps track of where things are and what's going on.
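My mental model of that "objective to placement" idea, expressed as a hedged Python sketch.  The tier names, metrics, and cheapest-tier-that-meets-the-SLA rule are all my own assumptions for illustration – DataSphere's actual engine is far richer than this.

```python
# Toy objective-driven placement: pick the cheapest tier that still meets the
# application's latency objective. All numbers are illustrative placeholders.
TIERS = [
    {"name": "nvme-das",  "latency_ms": 0.2,  "cost_gb_month": 0.40},
    {"name": "san-ssd",   "latency_ms": 1.0,  "cost_gb_month": 0.20},
    {"name": "cloud-obj", "latency_ms": 20.0, "cost_gb_month": 0.02},
]

def place(app, max_latency_ms):
    """Return the lowest-cost tier that satisfies the app's latency objective."""
    candidates = [t for t in TIERS if t["latency_ms"] <= max_latency_ms]
    tier = min(candidates, key=lambda t: t["cost_gb_month"])
    return f"{app} -> {tier['name']}"

print(place("Application X", max_latency_ms=2.0))   # cheapest tier meeting SLA
```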

As you can imagine this gives us a really good opportunity to scale in terms of storage – and really prevents the need to over-provision, which has become such a de facto standard in deployments today – in turn, organizations save dollars!  Now from what I can tell DataSphere doesn't necessarily sit inside the data path for the application either – meaning it's virtually invisible and doesn't affect performance in any way.  Instead it lies in somewhat of an out-of-band architecture – allowing applications to have direct paths to their data while DataSphere simply handles the metadata processing and analytics.

There are a ton of benefits that I can initially see with DataSphere – scaling, migration to cloud, and simply performance compliance are a few that come straight to mind.  I'm certain there is much, much more that can be done and I know I'm just scratching the surface, so with that I guess we will have to wait until Primary Data presents at Storage Field Day in June to learn more.  As always, follow along with the hashtag #SFD13 or check out my page here for all that is SFD13!

My Veeam VMCE-A Course and Exam Experience

First up, a bit of a back story – as a Veeam Vanguard I was lucky enough to have received the required training last June in order to qualify for my VMCE exam, which I wrote and passed in August of 2016!  A nice little perk of the program if you ask me!  Anyways, earlier this month a handful of us were again lucky to participate in the next level of training, the VMCE-A Design & Optimization course, in an online pilot – thus qualifying us to write the VMCE-A exam.  Under normal circumstances I would take a lot of time to study up and possibly create guides on this blog for the certifications I write – however, with VeeamON right around the corner and the ability to take advantage of a "Free Second Chance" offer for writing certifications on site, my normal study strategies didn't apply.  I couldn't pass up the chance of at the very least getting a look at the exam, even if it meant failing – hey, a free second chance!

So with the course fresh in my memory I studied where I could between it and my exam appointment at the conference, be it on the car ride to the airport, at 30,000 feet in the air, or during a few meals at the conference.  Anyways, the tl;dr version is I passed the exam… barely – scoring only 4% over the pass mark of 70%.  Not a mark I'd be super proud of, but in the end a pass is a pass and I'll take it!

On to the exam!

The VMCE-A D&O exam is 40 randomized questions, all multiple choice.  Some questions have only one answer, while some are in the "Select 2 / Select 3" format.  As mentioned earlier, a passing score is 70% or higher.  As far as the content goes I can't say a lot as NDAs are in effect, however what I can say is that all the questions I received are fully covered within the VMCE-A D&O course material – and in fact, at the end you get a nice little detailed breakdown of how you scored in the different sections covered in the course (Design & Sizing, Infrastructure, Security, Optimization, Automation & Compliance, and Troubleshooting).  This certainly helps you nail down where you might want to freshen up in order to improve your skill set.

One big thing I will say is that this exam is tough!  For as easy as Veeam can be to simply get up and running, there is a lot to know about their complete suite of products – and a lot to cover in order to test on all of the features and benefits of just Veeam Backup & Replication.  Being a customer, I'm not designing these Veeam solutions day in and day out, so I focused a lot of my attention on the design section, as well as other parts of VBR that I don't use that often.  But just as with the VMCE, it's not enough to focus solely on VBR – Veeam ONE, Cloud Connect, etc. – these are all fair game for testing on this exam – so if you don't use them I would certainly recommend brushing up on them.  I can't stress enough that all of the content I was tested on in the exam is covered within the course materials (textbook/slides) – so pay attention during the course!  I can say that if you see something labeled as a best practice, or a formula, you should remember it – remember, this is an architect exam based on designing Veeam environments!  Just keep that in the back of your mind while studying!

As far as timing goes you have 1 hour (add another 30 minutes if English isn't your first language) to complete the 40 questions.  I found this to be more than enough time.  Just like VMware's VCP exams, you can flag certain questions for review and go back and forth between questions at your leisure.  The strategy I took, since I had no idea how much of a time crunch there might be, was to simply go through the questions, answering the ones I knew I had right and flagging any that I was unsure of for review after.  This process took me roughly 30 minutes, which allowed me another 30 minutes to go back and review those questions I didn't quite have a grasp of.  My review took roughly 10 minutes – after that I went through every question again, double-checking and tallying in my head how many I knew I had right, hoping to come up with a score high enough to make me feel comfortable enough to click that dreadful 'End Exam' button.  In the end I knew I was close, but ended it anyways!

You will get your score immediately after completing the exam – so you know whether it was a pass or a fail right away – no painful time spent wondering 🙂  Also, as mentioned earlier, upon exiting the facility you will get a printout showing how you scored in each category.  I'm certainly happy I passed and know that I can for sure improve in some areas – maybe another study guide is in the cards for me!

The Veeam Certification Paths

For those that don't know, Veeam currently has 2 different certifications.  The VMCE, which documents proof that the engineer has the necessary level of knowledge to correctly deploy, configure, and administer Veeam Availability Suite.  Then, the VMCE-A D&O, which builds on the knowledge from the VMCE, bringing more of a design-and-optimization feel to the test, all the while following Veeam best practices.  Once you have achieved both the VMCE and the VMCE-A, Veeam accredits you with the title of Veeam Certified Architect, or VMCA.  The VMCA is not a separate certification and does not require a separate step – it's simply a designation handed to those who have completed the requirements for both the VMCE and VMCE-A and passed both exams.

[Image: Veeam certification path]

A little about the course

Honestly, even if you don't go through with the exam, the VMCE-A Design and Optimization course is an awesome course to take.  I guarantee you will get something out of it even if you design on a daily basis.  For me, being a customer and administrator of these products, it was an awesome opportunity to walk through the Veeam design methodologies and deep dive into each step, one by one, to come out with a full solution.  The course has a couple of design scenarios inside of it, for which there is really no right or wrong answer.  We broke into a couple of different groups to do these and it was amazing to see just how different the end designs were.  The instructors take the opportunity to pick away at these designs, trying to understand your thought process and figure out how you think – asking a lot of questions about why you set it up the way you did!  This to me was the biggest advantage of the course – having that interaction, learning other ways to accomplish similar results, and seeing where you might be going astray in your thought process.

So with that I hope this helps anyone else who might be on the fence about taking either the course or the exam.  I can proudly say that I am a VMCA now and that feels great (and I’m glad I don’t have to cash in that second chance as it’s a very tough exam – or at least it was to me).

A field day of Storage lies ahead!

I've had the awesome opportunity to participate in a few Tech Field Day events throughout the last few years, travelling to Austin for VFD4, Boston for VFD5, and finally San Jose for TFD12 just last November!  To be honest, these days provide me with more knowledge than a lot of the week-long training courses I take!  They are jam packed with technical deep dives from a variety of companies – companies who are eager to share and get the message out about their offerings!  And it's not just the presenting companies jamming the knowledge into my brain, it's the delegates as well!  I've honestly met some of the smartest people I know there as a fellow delegate – not just in virtualization, but across the whole IT ecosystem.  Anyways, I'm super excited to have been invited back to another field day experience – this one, a storage event, taking place June 14-16 in Denver!

#SFD13

As it stands today, Storage Field Day 13 is shaping up to have 6 presenting companies – all with their own different ties into the storage market.  We have everything from the tried and true companies such as DellEMC and Seagate all the way through to the startup technologies of Primary Data and the newly acquired Exablox.  In between sit the non-profit, vendor-neutral SNIA and the hometown Colorado-based X-IO Technologies.


Certainly a wide variety of companies to hear from – which should help to keep the event both exciting and interesting!

I mentioned that I've gained a lot of knowledge from other delegates in the past – and man oh man, this one will be no different.  I'm actually super excited for this event as I've really only met a few people on this list – so there will be a lot of new faces and friends to make here for me.  Honestly, it's a little bit intimidating as there are some real storage rockstars on this list!

I'll do my best to try and get a few preview posts out for some of those presenting companies I know little about – mainly for my own homework, so I don't "go in cold" and get thrown into the deep end 🙂  That said, I can't promise anything as this event is quickly sneaking up on me and is only a couple of weeks away now!  As always I'll try to get the stream set up on my event page here – as well as place any content I create surrounding SFD13 there.  Be sure to follow along on Twitter as well using the hashtag #SFD13 and keep an eye on the official event page!  See you in Denver!

#VeeamON 2017 – Wait! There’s more!

If you packed your bags up and started to shut down after the Wednesday keynote thinking you had heard the last of the VeeamON announcements, then you might want to think about digging out that notebook and sharpening your pencils again, as we aren't done yet here at VeeamON 2017!

Thursday, Paul Matiz took the stage for the final keynote of the show and made some more announcements around existing products – and threw the gauntlet down on one brand new product, the Veeam PN, which I released a separate post about!  That said, Veeam PN wasn’t the only Thursday announcement – there were a few others outlined below.

Veeam Backup for Office 365

Veeam released the first version of their SaaS email backup for Office 365 last year and honestly people flocked to it!  With more and more companies migrating to the Microsoft-hosted solution rather than putting up with the headaches of dealing with multiple on-premises Exchange servers, Veeam wanted to take advantage and help those organizations protect their most critical communication asset!

With version 1.5 announced just the other day, scalability has been added – you can now horizontally scale your Office 365 backups by adding new proxies and repositories to help speed up the time it takes to pull down your O365 mailboxes.

In addition to this, automation has also been a focus.  With full PowerShell support coming into the product we can now use the easy verb-noun cmdlets to back up and restore Office 365 mailboxes.  And, more than just PowerShell – a fully supported RESTful API is also available.

That said, why stop at version 1.5 – let's move on to 2.0.  The Veeam community spoke – we loved the ability to back up email and use the explorer functionality to perform restores back into mailboxes – but Office 365 is so much more.  What about SharePoint?  What about OneDrive?

Well, Veeam now has an answer for those questions as they released somewhat of a roadmap for their Office 365 backup strategy, with both SharePoint and OneDrive on it in version 2.0.

Veeam Management Pack v8 update

For those System Center users that use the Veeam Management Pack to monitor and gain insights into their critical applications running on both VMware and Hyper-V, you will be pleased to know that Veeam has released a slew of new features in Veeam MP v8.  Now providing Azure-backed dashboards in Update 4, Veeam MP users will be able to update instantaneously.

Veeam has certainly announced a lot of things this week – with a heavy focus on cloud.  With Mark Russinovich doing the final keynote of the conference I can certainly say that cloud is most definitely the future – and data protection needs to be part of that!

Veeam announces the new Veeam Powered Network (Veeam PN)

During the final keynote of VeeamON 2017, Veeam took the stage and threw down the gauntlet on a brand new Veeam product release: the Veeam Powered Network, or Veeam PN for short.

Veeam PN is a new product, not a feature added to any others, which was initially developed to solve an internal issue within Veeam.  Veeam has a lot of employees and developers in remote sites all across the world – and the pain of constantly connecting those sites together via VPN, coupled with the frustration of tunnels dropping all the time, gave birth to the Veeam PN.  It kind of feels a lot like how a VMware fling comes to life: first being internal only, then released to the masses, then actually built out as an application offering.  Although Veeam PN can be used to establish this connectivity between any sites at all, the real benefits and the initial design intentions all focus on Microsoft Azure.

Veeam PN – Disaster Recovery to Microsoft Azure

Veeam PN is deployed into your Azure environment via the Azure Marketplace.  Once your cloud network has been established, another virtual appliance is then deployed from veeam.com into your on-premises environments.  From there it's as simple as setting up which networks you wish to have access into Azure and importing the automatically generated site configuration files to your remote sites – with that, you have a complete and secure site-to-site tunnel established.  I'm not sure of the scalability of Veeam PN just yet, but I do know it supports having multiple sites connected into Azure for those ROBO situations.  As for those remote workers on the road, they can simply connect into Veeam PN and download a configuration file that simplifies their setup of the OpenVPN client to establish a client-to-site VPN.

[Image: Veeam Powered Network overview]

So at this point you may be thinking "Why would Veeam develop this tech focused around networking, and what does it have to do with backup or DR?"  Well, let's couple this together with a little feature Veeam has called "Direct Restore to Microsoft Azure".  By recovering our VMs and physical endpoints directly into Azure, and then easily establishing the network connectivity using Veeam PN, we can now leverage true DR in the cloud in an easy-to-use, scalable, and secure way.  This is the "nirvana" of recovery that we have all been looking for.

One more thing – it’s free!

There it is – the Veeam way!  They released Backup and Replication with a free tier, Windows/Linux endpoint agents free, Direct Restore to Azure – free, the explorer tech – free!  Let’s add one more to that list!  Veeam PN is absolutely free!  And even though they have talked a lot about it being leveraged for Azure, companies and organizations can essentially use this technology to connect any of their sites and clients together – absolutely free!

Details around any betas or GA haven’t been revealed yet but keep your eyes open and I’ll do my best to help spread around any opportunity for you to get your hands on the new Veeam Powered Network!