Tag Archives: VFD5

#VFD5 Preview – Ravello Systems

Ravello Systems have certainly had their fair share of buzz lately and rightly so – the sheer fact that you can run a 64-bit VM, on top of a nested ESXi host, on top of their hypervisor (HVX), on either Amazon or Google Cloud is, to say the least, the bomb!

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

I’ve had the chance to work with Ravello during their nested ESXi beta along with a few other bloggers and was blown away by the performance they provided while doing the exact scenario described above.  I did a few posts on Ravello, one of which involved a vMotion from Amazon AWS to Google Cloud, if you’d like to check it out!  Needless to say I’m excited to see Ravello IRL at VFD5 on June 26 in Boston.  Also, I’ve heard through the grapevine that long-time Toronto VMUG attendee and friend Kyle Bassett will be part of the presentation – Kyle is a brilliant mind so you won’t want to miss it!

A home lab replacement?

In a lot of ways I can get the performance that I need in order to replace my home lab!  That said, I’m nowhere near as extravagant when it comes to home labs as a lot of people in these communities.  When it comes down to it though, a lot of what I do within the lab is configuration validation, testing different setups, etc.  All of this is easily accomplished in Ravello!  In fact, in some ways I can do a lot more within Ravello than I can within my own home lab.  Stringing together two datacenters, one in Google and one in Amazon, via VXLAN for example!  For the most part I’m finding myself working more in cloud platforms than in my basement anymore.

Bells and whistles

I would be selling Ravello short if I just said they allowed you to run nested ESXi in Amazon – they have a lot of value-add features, bells and whistles so to speak, that make the service what it is.

Firstly, they have what’s called an application – an application is essentially one or more VMs that perform some sort of function.  You could think of a couple of ESXi hosts, a vCenter Server, and some sort of iSCSI storage appliance as an application.  Applications can be started and stopped as a whole unit, rather than each individual VM.
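
Just to make that concept a little more concrete, here’s a rough sketch of what grouping a few VMs into an application and starting it as a single unit could look like against Ravello’s REST API. Fair warning: the endpoint paths, field names, and credentials below are my own assumptions for illustration, not something I’ve pulled from their documentation.

```python
# Rough sketch only: endpoint paths, payload fields, and auth style here are
# assumptions for illustration, not taken from Ravello's documentation.
import requests

BASE = "https://cloud.ravellosystems.com/api/v1"  # assumed base URL
session = requests.Session()
session.auth = ("user@example.com", "password")   # placeholder credentials
session.headers.update({"Content-Type": "application/json",
                        "Accept": "application/json"})

# An "application" groups related VMs (e.g. two ESXi hosts, a vCenter
# Server, and an iSCSI appliance) so they can be managed as one unit.
app = {
    "name": "nested-esxi-lab",
    "description": "2 x ESXi, vCenter, iSCSI appliance",
}
resp = session.post(f"{BASE}/applications", json=app)
app_id = resp.json()["id"]

# Starting the application starts every VM in it; no need to power on
# each ESXi host, the vCenter Server, and the storage VM individually.
session.post(f"{BASE}/applications/{app_id}/start")
```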

Secondly, they have blueprints.  We can think of a blueprint as a point-in-time snapshot of an application.  Basically, blueprints allow you to save a configuration of an application to your library, which you can then deploy to either another application or another cloud.  Think of a blueprint as a base install of your ESXi/vCenter setup – you know, before you go mangling inside of it.  If your original application ever breaks, or you’d like to explore new features without affecting your current setup, you can simply save your application as a blueprint and deploy a new instance of it.  One newly released feature is the Ravello Repo, which allows customers to essentially share their blueprints with others, saving a lot of time when it comes to building up test and use cases.

Thirdly, pricing!  Honestly I’m not sure what hard costs I’ve incurred as I have gotten 1000 CPU hours/month for free – if you are a vExpert you can too, as they have just extended this offer to all vExperts – very generous!  Not a vExpert?  No problem, you can still get a free, fully functioning trial here, good for 14 days’ worth of all-you-can-eat cloud.  Although I’ve never seen my pricing I have looked at their pricing calculator – selecting 12 vCPUs, 20GB of RAM and a TB of storage comes out to around $1.32/hour – which to me is more than enough resources to get a small lab up and running and is more than affordable for what you get.  Plus you don’t deal with Amazon or Google at all – Ravello takes care of all of that.
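
To put that hourly figure into perspective, here’s the back-of-the-napkin math. The only real number is the ~$1.32/hour from the calculator; the hours per month are just an example of casual lab usage.

```python
# Back-of-the-napkin lab cost, based on the ~$1.32/hour calculator figure
# for 12 vCPUs / 20 GB RAM / 1 TB. Hours per month is a made-up example.
hourly_rate = 1.32          # USD/hour for the sample configuration
hours_per_month = 80        # e.g. a few evenings and weekends of lab time

monthly_cost = hourly_rate * hours_per_month
print(f"~${monthly_cost:.2f}/month")   # ~$105.60/month for 80 hours
```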

What Ravello has in store for us on June 26 we don’t know, but I can assure you that it will be a treat to watch.  Speaking of watching, if you want to follow along with all the action you can do so by watching the live stream on the Tech Field Day page or on my VFD5 event page where all my content will live.

#VFD5 Preview – Scale Computing

Virtualization Field Day 5 in Boston will be Scale Computing’s fifth appearance at a Tech Field Day event, dating all the way back to VMworld 2012 when they launched their hyperconvergence solution, HC3.  Thinking about this is kind of funny really – picture the Scale Computing booth on the VMworld show floor – at the time they were a scale-out storage company, however they were launching their KVM-based hyperconvergence solution which really has nothing to do with VMware at all!  One word – ballsy!

Either way, since then Scale has been promoting the HC3, which targets the SMB market, and they have been doing a great job of it as I’ve seen them at nearly every event I’ve been to, big or small.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

So what is it?

We all know what hyperconvergence is, right?  It’s just so hot right now!  Scale Computing, just like the Nutanixes and SimpliVitys of the world, has combined compute, network, and storage into one box, allowing businesses to gain performance and agility by implementing their building-block-type architecture.  Scale currently ships three different models of their HC3, differing in capacity and memory…

And the uniqueness?

In order to succeed in any market you really need to have something which sets you apart from the “other guys”.  Something which makes your offering so compelling that you just have to have it!  What’s Scale’s?  I would most definitely say their niche is really knowing their target market, which in turn puts the HC3 at a very compelling price.

Scale has never once deviated from the market they say they serve.  They bring a hyperconverged, scalable platform to the SMB.  But price isn’t the only thing that helps them succeed in the SMB space.  They have really evaluated everything from their interface, to ease of use, to the options that they expose within their management software.  Basically, Scale provides the SMB with a solution to create and run VMs – no more, no less.  When I watched Scale at VFD4 I often found myself asking questions like, “So is this it?  You just click create VM and you are done?  Where are all the options?”.  The answers I got were “Yes, you are done, there are no other options.”  It’s simply a solution for the SMB admin, who probably has little to no time to mess around with anything or learn anything new – it lets them get in, create a VM, and get out.

Now I’d be selling them a little short if I didn’t say that there were other options – they have the ability to take snapshots, to clone VMs, and to set up replication with another Scale cluster, all of these implemented in the same easy-to-use, very-little-setup kind of way as everything else.  They also have all the “enterprisey” features as well – things like HA, Live Migration, Thin Provisioning, etc. – however they are all enabled by default and require no setup at all.

I’m very excited to see what Scale will be talking about at VFD5.  Their presentation was honestly one of my favorites at VFD4 (and that’s not just the shot of bourbon talking).  I’m interested to see if they stay true to their “SMB” focus when talking about any future releases – I believe that Scale really knowing their target market plays a big part in the successes that they have been having.  If you want to follow along be sure to watch the live stream over at the VFD5 page; I should also have it up and running, along with all of my VFD5-related content, on this page as well.  I can say that their CTO, Jason Collier, is a great speaker and it will be an entertaining 2 hours to say the least!

#VFD5 Preview – PernixData

I’ve had the pleasure of seeing PernixData a number of times, both at our local Toronto VMUGs as well as at VMworld.  Also, I have a couple of close friends working for Pernix so I’m very familiar with the solutions they currently offer.  One interesting thing about Pernix is that they have a bit of a history of releasing new features and enhancements at Tech Field Day events (see their Storage Field Day 5 presentations) so I’m definitely looking forward to seeing them on June 24th in Boston.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

So what do they do?

PernixData in its simplest form is a server-side cache play.  Their software, FVP, essentially allows you to accelerate both reads and writes utilizing server components, both RAM and SSD drives.  Basically, they sit in the middle of your data path, between your hypervisor sending the I/O and your storage array which receives the I/O.  What this does is allow your server components to essentially act as a cache for your storage array – and since they sit right next to all of your compute you can imagine the benefits in terms of efficiency and performance FVP provides.

Pernix recognizes that the first thing that comes to mind when looking at all of this is that the cache, the SSD and RAM, is not shared storage – so what happens when a host decides to take a walk and brings all of that non-committed write cache with it?  Because of situations just like this, Pernix basically replicates any writes across all nodes (or the nodes you choose) in your FVP cluster before acknowledging the write back to the VM – allowing for host failure scenarios and ensuring that your writes are safely written back to your storage array.  All this while still supporting advanced vSphere features such as HA and DRS.
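
Conceptually, the write path works something like the sketch below. To be clear, this is my own simplification of the write-back-with-peer-replication idea, not PernixData’s actual code or API:

```python
# Conceptual sketch of write-back caching with peer replication.
# This is my own simplification of the idea, not PernixData FVP code.

class WriteBackCache:
    def __init__(self, local_flash, peer_hosts, array):
        self.local_flash = local_flash    # host-side SSD/RAM cache
        self.peer_hosts = peer_hosts      # peers holding redundant copies
        self.array = array                # the backing storage array

    def write(self, block, data):
        # 1. Land the write in local flash/RAM.
        self.local_flash[block] = data
        # 2. Replicate to the chosen peer hosts so a host failure
        #    doesn't take uncommitted writes with it.
        for peer in self.peer_hosts:
            peer.store_replica(block, data)
        # 3. Only now acknowledge the write back to the VM.
        return "ACK"

    def destage(self):
        # Asynchronously flush cached writes back to the array.
        for block, data in list(self.local_flash.items()):
            self.array.write(block, data)
            del self.local_flash[block]
```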

So is server-side cache a band-aid?

I’ve heard this term a lot in the industry – stating that server-side caching is just a band-aid for the real problem, your underlying storage.  But when I hear this I ask myself – if Pernix and other companies can deliver me a solution that drives enough IOPS and enough performance to successfully and efficiently run my environment, do I really care if my underlying storage isn’t doing that on its own?  Honestly, if no one is complaining and everything is running up to my expectations I feel like it’s a win-win – not a band-aid.

Pernix definitely has some awesome innovation in their software – FVP covers all the angles when it comes to providing that fault-tolerant, mirrored, read and write cache for your hosts.  You can enable caching on a per-datastore or per-VM level – allowing you to accelerate only your most crucial or needed workloads – also, FVP now supports not just block storage, but NFS as well!  I have no idea what Pernix has in store for us at VFD5 but you can bet it will be pretty awesome!  Once again, you can tune into all the action by watching the live stream on the VFD5 event page – as well, all my content and the live stream will also be on my VFD5 page.

#VFD5 Preview – NexGen

Alright, here’s another company presenting at VFD5 in Boston that I recognize but know very little about!  Thankfully the Stanley Cup playoffs are done and I now have a little extra room in my brain to take in all the info that will be thrown at us.  Anyways, I started to do a little digging on NexGen and oh boy, what a story they have!  Stephen Foskett has a great article on his blog in regards to the journey NexGen has taken – it’s pretty crazy!  Certainly read Stephen’s article, but I’ll try to summarize the craziness as best I can…

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

Basically, a couple of the LeftHand founders got together and founded NexGen – ok, this story doesn’t seem all that crazy so far.  Well, after a few years Fusion-io came in with their wallets open and acquired NexGen – again, not a real crazy spin on a story!  Moving on, we all know that SanDisk walked in and acquired Fusion-io, with that getting NexGen.  Then, the next thing you know SanDisk spun out NexGen on their own, putting them right back where they started!  This all just seems wild to me!

So where do they stand today?

NexGen is a storage company, a storage company offering a hybrid flash array with software that helps their customers align their business practices with their storage by prioritizing the data they store.  So what does that really mean?  Basically it comes down to QoS and service levels.  NexGen customers can use these two concepts to define the performance, availability, and protection of their data by specifying the IOPS, throughput, and latency that they need for each and every application.  Depending on the service levels assigned to a workload, NexGen can borrow IOPS from a lower-tiered service in order to meet the QoS defined on a business-critical application.
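
My mental model of that per-application QoS idea looks roughly like the sketch below. The tier names and numbers are completely made up for illustration; they aren’t NexGen’s actual policies or product logic.

```python
# Conceptual sketch of per-application storage QoS. Tier names and
# numbers are invented for illustration; they are not NexGen's policies.

service_levels = {
    "mission-critical":  {"iops": 20000, "mbps": 400, "latency_ms": 5},
    "business-critical": {"iops": 10000, "mbps": 200, "latency_ms": 15},
    "non-critical":      {"iops": 2500,  "mbps": 50,  "latency_ms": 40},
}

app_policy = {
    "sql-prod":   "mission-critical",
    "file-share": "non-critical",
}

def borrow_iops(needed, donor="non-critical", amount=500):
    """If a higher tier is starved, shave some IOPS off a lower tier."""
    service_levels[donor]["iops"] -= amount
    service_levels[needed]["iops"] += amount

# Example: SQL is under pressure, so the array temporarily borrows
# IOPS from the non-critical tier to hold the mission-critical target.
borrow_iops("mission-critical")
```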

Another unique feature of NexGen Storage is the way they use flash.  Most arrays will place their flash behind some sort of RAID controller, whereas NexGen utilizes the PCIe bus to access their flash, providing a redundant, high-speed, low-latency caching mechanism for both reads and writes.

There are certainly a lot more bells and whistles within the NexGen arrays and a much bigger story to be told here.  The way NexGen is utilizing flash within the array is definitely piquing my interest, but honestly, I’m interested more in the story of the company and how all those acquisitions and spin-offs have helped them.  I’m sure they will address both at VFD5 and believe me, there will be more posts around NexGen and their offerings.  If you want to follow along during the VFD5 presentations you can see them live both on the official VFD5 event page, as well as my VFD5 event page where all my content will be posted.

#VFD5 Preview – Rubrik

There has been quite a buzz about Rubrik over the last few weeks, with them going GA and coming up with, oh, you know, a cool $41 million in Series B funding.  Certainly if you haven’t heard of them before, you can probably recognize their name now!  I, for one, had not looked at their solutions at all.  I’ve heard the name, but never gave it a look!  That will change come June 25th at Virtualization Field Day 5 when Rubrik takes the stage to deep dive into what they dub “the world’s first converged data management platform”.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

So what exactly is a data management platform?

It’s most certainly a fancy name, but it’s also much, much more.  In simple terms you can think of the Rubrik appliance (Brik) as a backup appliance – a backup appliance that is architected in such a way that you can scale to thousands of nodes depending on the amount of data you are looking to protect.  Currently they offer the r330, which is a 3-node appliance with 10 TB of disk, and the r340, a 4-node appliance with 15 TB of disk.

Wait – did you say backup?

Sure, there are a lot of players in the backup space.  We have our traditional players that have seen it all.  Companies like Symantec and EMC come to mind.  Then virtualization came along and we started to see backup solutions being purpose-built for virtualization.  Veeam, Unitrends, and Trilead are near the top of that list.  So with all of these companies still at play within the data center backup space, do we have room for one more?  Can Rubrik differentiate themselves from the others?

So what makes Rubrik unique?

Appliance driven – With the exception of Unitrends, I don’t see many backup vendors coming in the form of a full appliance.  Essentially what Rubrik has done is take the software and hardware requirements of their backup solution and deliver them in a 2U scalable appliance architecture.  Speaking of scale, Rubrik’s building-block architecture allows all tasks and operations to be run on any node within the cluster – therefore, adding more nodes doesn’t just expand capacity, it should also increase performance and availability as well.

Global File Search – This one is a big feature in my opinion.  There have been countless times where someone I support has come up to me looking for a file to be restored, but can’t remember where they saved that file.  “I just clicked it from my recent documents,” they normally say.  Rubrik has a file search capability that spans across all of your VMs and actually incorporates auto-complete functionality – a little like Google for your backups.
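
To picture how a “Google for your backups” style search with auto-complete could work, here’s a toy sketch using a simple filename index across VMs. This is purely my own illustration of the concept and has nothing to do with how Rubrik actually implements it.

```python
# Toy sketch of a cross-VM file index with prefix auto-complete.
# Purely illustrative; not Rubrik's implementation.
from collections import defaultdict

index = defaultdict(list)   # filename -> list of (vm, path, snapshot) hits

def catalog(vm, snapshot, paths):
    """Record every file seen in a VM snapshot under its bare filename."""
    for path in paths:
        filename = path.rsplit("/", 1)[-1].lower()
        index[filename].append((vm, path, snapshot))

def autocomplete(prefix):
    """Return every indexed filename starting with the typed prefix."""
    prefix = prefix.lower()
    return sorted(name for name in index if name.startswith(prefix))

catalog("fileserver01", "2015-06-20T02:00", ["/users/mike/budget-2015.xlsx"])
catalog("desktop-vm42", "2015-06-21T02:00", ["/docs/budget-draft.docx"])

print(autocomplete("bud"))        # ['budget-2015.xlsx', 'budget-draft.docx']
print(index["budget-2015.xlsx"])  # where (and in which snapshot) it lives
```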

Multi-Tiered Storage – Man!  Some companies are just getting around to incorporating some kind of auto-tiering in their production storage – Rubrik is doing it in your backup storage.  What this does is increase efficiency and speed.  All data sent to the Rubrik appliance enters through a flash tier – and we all know the benefits of flash.  The flash tier also provides the basis for the global file search magic, as it stores all metadata on SSD as well.

Cloud Integrated – Well, Amazon S3 anyways.  Users are able to choose where backups are located, whether that be on premises or inside Amazon!  A great solution for any of those backups that you are required to keep long-term and seldom access!

I mentioned earlier that I don’t know a lot about Rubrik – in fact, all that I know is what I’ve written in this blog post!  The buzz surrounding Rubrik has been nothing short of amazing so I’m excited to see what they have to offer and what separates them from the already established players in the market!  On June 25th @ 10:30 we will get our answer.  You too can watch the live stream on the VFD5 event page or on my VFD5 event page where all of my content and blogs about the show will be posted.

#VFD5 Preview – OneCloud

Am I looking forward to the presentation at Virtualization Field Day 5 from OneCloud?  I have no idea!  Why?  Well, here is a company that I know absolutely nothing about!  I can’t remember ever coming across OneCloud in any of my journeys or conferences!  Honestly, I think this is the only company presenting at VFD5 that I have absolutely no clue about when it comes to what they do…

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

That will certainly change fast

OneCloud will present at VFD5 on June 24th at 1:00 PM where I’m sure we will all be enlightened a little more on the solutions they provide.  That said I don’t like going in cold, knowing nothing about someone – thus, this preview blog post will at least help me understand a little bit about everything OneCloud has to offer…

So let’s start from the ground up.  OneCloud is essentially a management platform for a hybrid cloud play.  Their core technology, the Automated Cloud Engine (ACE), is the base upon which they provide other services.  From what I can tell, ACE essentially facilitates the discovery of your on-premises data center, taking into account all of your VMs, physical storage, and networking information.  From here, ACE can take different business objectives and transform these into API calls in order to essentially replicate all of your infrastructure into the public cloud – for now, it appears that only Amazon’s AWS is supported.

The service running on top of ACE is OneCloud Recovery.  OneCloud Recovery allows organizations to facilitate a disaster recovery or business continuity solution with the public cloud as the primary target – skipping the costs and complexity of implementing a second or third site on premises.

So here is how it all happens from start to finish – OneCloud is deployed into your environment via the virtual appliance route.  Another instance is also deployed into Amazon.  From there it auto-discovers your environment; your networking setup, storage configurations, data, and applications are all tied together and somewhat of a blueprint of your environment is created.  You then use their policy engine to apply RTO and RPO objectives to your applications.  OneCloud will then provision a fully functioning virtual data center in Amazon – one that mirrors your environment in terms of networking and configuration.  OneCloud not only duplicates your environment into Amazon, but it will also optimize both your compute and storage in order to minimize costs – meaning it will scale down on CPU where it believes it can and place your data onto the most cost-effective storage.  Once your data is there, OneCloud performs ongoing replication in order to meet the RPO you have selected.  From there it’s just a matter of performing your normal DR tests and engaging in any failover (and failback) operations.
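
As a way of thinking about that policy engine piece, here’s a rough sketch of how RPO/RTO objectives could translate into a replication schedule and a cost-conscious storage choice. All of the logic and numbers below are my own guesswork for illustration, not how OneCloud’s ACE actually works.

```python
# Rough guesswork to illustrate turning RPO/RTO objectives into
# replication and storage decisions; not OneCloud's actual logic.

def replication_interval_minutes(rpo_minutes):
    # Replicate at least twice as often as the RPO to leave headroom
    # for transfer time and retries.
    return max(5, rpo_minutes // 2)

def storage_tier(rto_minutes):
    # A tight RTO wants data on faster (pricier) storage in the recovery
    # cloud; a relaxed RTO can live with cheaper, slower storage.
    return "ssd-backed" if rto_minutes <= 60 else "cold-object-storage"

apps = {
    "erp":        {"rpo_min": 15,  "rto_min": 30},
    "fileserver": {"rpo_min": 240, "rto_min": 480},
}

for name, objectives in apps.items():
    print(name,
          f"replicate every {replication_interval_minutes(objectives['rpo_min'])} min,",
          f"land on {storage_tier(objectives['rto_min'])}")
```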

OneCloud seems to have some interesting technology and I’m looking forward to learning more at VFD5.  Some questions for OneCloud that come to mind – How do they compare to VMware’s vCloud Air DR services?  Do they plan on expanding out to other public clouds such as Google, Azure, or vCloud Air?  With a strong software base in ACE, do they plan on moving outside just the DR/BC realm – things such as DevOps and public cloud labs come to mind.  I really like how they are abstracting away what can be some very complicated API calls to Amazon – any time a company provides a solution that involves simplicity it’s a good thing, but especially so when dealing with the complex networking and configuration of public cloud and disaster recovery.  If you would like to learn more about OneCloud with me, you can do so by watching the live stream on the VFD5 event page.  That stream, along with any other content I create, will be posted on my VFD5 event page as well.