Once again it looks like I’m going to have to get on a plane and travel to the great US of A in order to see my fellow Toronto VMUG Co-leader Eric Wright, who lives within a couple hours of where I’m sitting right now! But that’s ok, because Eric will be bringing with him the VMTurbo Virtualization Field Day 5 presentation in Boston! For those that have heard Eric speak, you’ll know what I mean – he certainly has a way of keeping the audience interested and getting his point across – a couple of great qualities to have when speaking…
Anyways, it feels like we just got done having a look at VMTurbo during VFD4 in Austin, and here they are right back in front of us at VFD5 in Boston. A lot has changed since January with both the company and their flagship product, Operations Manager. They’ve kicked their TurboFest User Groups into high gear, hosting meetings in San Fran, London and Atlanta; they were named one of the best places to work by the Boston Business Journal; and Operations Manager 5.2 was released, bringing features such as QoS Adherence, more support at the application level for MS SQL and Oracle, integration with Arista Networks to help make more “network aware decisions” and, of course, the complete package now delivered through a SaaS offering in Amazon AWS. So, yeah, they’ve been busy!
An economic look at your data center
If you haven’t had a look at Operations Manager you probably should. VMTurbo takes a unique approach as it pertains to monitoring and tuning your environment to ensure you get to what they like to call “Data Center Nirvana”. Essentially they take an economic model and apply it to your infrastructure – turning your data center into a supply chain. By treating your resources – things like CPU, memory, disk, etc. – as suppliers and your VMs as consumers, VMTurbo is able to apply economic formulas to your infrastructure, increasing the cost of resources when supply is scarce and decreasing it when supply is plentiful. By doing so, Operations Manager is able to determine that while migrating a VM may make sense at first glance, the cost on the destination host may be too high, and thus recommend leaving it be. It’s an interesting way of looking at things and makes a lot of sense to me…
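To make the idea concrete, here’s a toy sketch of that supply-and-demand pricing model. This is not VMTurbo’s actual algorithm – the pricing function, penalty value, and host numbers are all made up for illustration – but it shows the core economic intuition: resource “price” climbs as supply gets scarce, and a VM only moves when the move actually lowers its total cost.

```python
# Toy supply/demand pricing model for VM placement decisions.
# NOT VMTurbo's real algorithm -- just an illustration of the concept.

def price(utilization):
    """Price a resource: cheap when plentiful, expensive when scarce."""
    assert 0 <= utilization < 1
    return 1.0 / (1.0 - utilization)  # cost skyrockets as supply runs out

def host_cost(cpu_util, mem_util):
    """Total 'cost of living' on a host for its current load."""
    return price(cpu_util) + price(mem_util)

def should_migrate(src, dst, move_penalty=1.0):
    """Move only if the destination is cheaper even after paying for the move."""
    return host_cost(*dst) + move_penalty < host_cost(*src)

# A very busy host (90% CPU) vs. a moderately loaded one (70% CPU):
print(should_migrate(src=(0.90, 0.60), dst=(0.70, 0.60)))  # True

# But when the "cheaper" host is nearly as loaded, the economics say stay put,
# even though a naive "find a less busy host" check would still move the VM:
print(should_migrate(src=(0.75, 0.60), dst=(0.70, 0.60)))  # False
```

The second case is the interesting one: the destination really is less utilized, yet the model recommends leaving the VM where it is because the savings don’t cover the cost of moving.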
Now there is certainly a lot more to what Operations Manager does and I encourage you all to tune into VFD5 to learn all of it. You can do so by heading over to the VFD5 page and watching the live stream, as well as keep up to date with all my content here. VMTurbo is a fast growing company with a unique idea so I’m sure they will have something mind-blowing for us come next Wednesday when they kick off all that is VFD5!
Ravello Systems have certainly had their fair share of buzz lately, and rightly so – the sheer fact that you can run a 64-bit VM, on top of a nested ESXi host, on top of their hypervisor (HVX), on either Amazon or Google Cloud is, to say the least – the bomb!
I’ve had the chance to work with Ravello during their nested ESXi beta along with a few other bloggers and was blown away by the performance they provided while doing the exact scenario described above. I did a few posts on Ravello, one of which involved a vMotion from Amazon AWS to Google Cloud, if you’d like to check it out! Needless to say I’m excited to see Ravello IRL at VFD5 on June 26 in Boston. Also, I’ve heard through the grapevine that long-time Toronto VMUG attendee and friend Kyle Bassett will be part of the presentation – Kyle is a brilliant mind so you won’t want to miss it!
A home lab replacement?
In a lot of ways I can get the performance that I need in order to replace my home lab! That said, I’m nowhere near as extravagant when it comes to home labs as a lot of people in these communities. When it comes down to it though, a lot of what I do within the lab is configuration validation, testing different setups, etc. All of this is easily accomplished in Ravello! In fact, in some ways I can do a lot more within Ravello than I can within my own home lab – stringing together two datacenters via VXLAN, one in Google and one in Amazon, for example! For the most part I’m finding myself working more in cloud platforms than in my basement anymore.
Bells and whistles
I would be selling Ravello short if I just said they allowed you to run nested ESXi in Amazon – they have a lot of value add, bells and whistles so to speak, that make the service what it is.
Firstly, they have what’s called an application – an application is essentially one or more VMs that perform some sort of function. You could think of a couple of ESXi hosts, a vCenter Server and some sort of iSCSI storage appliance as an application. Applications can be started and stopped as a whole unit, rather than each individual VM.
Secondly, they have blueprints. We can think of a blueprint as a point-in-time snapshot of any application. Basically, blueprints allow you to save a configuration of an application to your library, which you can then deploy to either another application or another cloud. Think of a blueprint as a base install of your ESXi/vCenter setup – you know, before you go mangling inside of it. If your original application ever breaks, or you’d like to explore new features without affecting your current setup, you could simply save your application as a blueprint and deploy a new instance of it. One newly released feature is the Ravello Repo, which allows customers to essentially share their blueprints with others, saving a lot of time when it comes to building up test and use cases.
Thirdly is pricing! Honestly I’m not sure what hard costs I’ve incurred, as I have gotten 1000 CPU hours/month for free – if you are a vExpert you can too, as they have just extended this offer to all vExperts – very generous! Not a vExpert? No problem, you can still get a free fully functioning trial here, good for 14 days’ worth of all-you-can-eat cloud. Although I’ve never seen my own bill, I have looked at their pricing calculator – selecting 12 vCPUs, 20GB of RAM and a TB of storage, it comes out to around $1.32/hour – which to me is more than enough resources to get a small lab up and running, and is more than affordable for what you get. Plus you don’t deal with Amazon or Google at all – Ravello takes care of all of that.
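To put that hourly rate in perspective, here’s some quick back-of-the-envelope math. The $1.32/hour figure is the calculator quote above for 12 vCPUs / 20GB RAM / 1TB; the usage patterns are my own assumptions, since the whole point of an on-demand lab is that you only pay while it’s powered on.

```python
# Rough lab cost at the calculator's quoted rate for 12 vCPU / 20GB / 1TB.
# The usage patterns below are assumptions, not Ravello's numbers.

rate_per_hour = 1.32

# A lab you only power on while working: ~3 hours a night, 20 nights a month.
casual_hours = 3 * 20
print(f"Casual use: ${rate_per_hour * casual_hours:.2f}/month")    # $79.20

# Versus leaving the whole application running 24/7 for a 30-day month:
always_on_hours = 24 * 30
print(f"Always on:  ${rate_per_hour * always_on_hours:.2f}/month")  # $950.40
```

The gap between those two numbers is exactly why being able to start and stop an entire application as a unit matters – an evenings-and-weekends lab costs a fraction of what the hourly rate might suggest at first glance.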
What Ravello has in store for us on June 26 we don’t know, but I can assure you that it will be a treat to watch. Speaking of watching, if you want to follow along with all the action you can do so by watching the live stream on the Tech Field Day page or on my VFD5 event page where all my content will live.
Virtualization Field Day 5 in Boston will be Scale Computing’s fifth appearance at a Tech Field Day event, dating all the way back to VMworld 2012 when they launched their hyperconvergence solution, HC3. Thinking about this is kind of funny really – picture the Scale Computing booth on the VMworld show floor – at the time they were a scale-out storage company, yet they were launching their KVM-based hyperconvergence solution, which really has nothing to do with VMware at all! One word – ballsy!
Either way, since then Scale has been promoting the HC3, which targets the SMB market, and they have been doing a great job of it – I’ve seen them at nearly every event I’ve been to, big or small.
So what is it?
We all know what hyperconvergence is, right? It’s just so hot right now! Scale Computing, just like the Nutanixes and SimpliVitys of the world, has combined compute, network, and storage into one box, allowing businesses to gain performance and agility by implementing their building-block type architecture. Scale currently ships three different models of the HC3, differing in capacity and memory…
And the uniqueness?
In order to succeed in any market you really need to have something which sets you apart from the “other guys”. Something which makes your offering so compelling that you just have to have it! What’s Scale’s? I would most definitely say their niche is really knowing their target market, which in turn puts the HC3 at a very compelling price.
Scale has never once deviated from the market they say they serve. They bring a hyperconverged, scalable platform to the SMB. But price isn’t the only thing that helps them succeed in the SMB space. They have really evaluated everything, from their interface, to ease of use, to the options that they expose within their management software. Basically, Scale provides the SMB with a solution to create and run VMs – no more, no less. When I watched Scale at VFD4 I often found myself asking questions like, “So is this it? You just click create VM and you are done? Where are all the options?”. The answers I got were “Yes, you are done, there are no other options.” It’s simply a solution for the SMB admin, who probably has little to no time to mess around with anything or learn anything new – it lets them get in, create a VM, and get out.
Now I’d be selling them a little short if I didn’t say that there were other options – they have the ability to take snapshots, to clone VMs, to set up replication to another Scale cluster. All of these are implemented in the same easy-to-use, very-little-setup kind of way as everything else. They also have all the “enterprisey” features as well – things like HA, Live Migration, Thin Provisioning, etc. – however they are all enabled by default and require no setup at all.
I’m very excited to see what Scale will be talking about at VFD5. Their presentation was honestly one of my favorites at VFD4 (and that’s not just the shot of bourbon talking). I’m interested to see if they have stayed true to their SMB focus when talking about any future releases – I believe that Scale really knowing their target market plays a big part in the successes they have been having. If you want to follow along, be sure to watch the live stream over at the VFD5 page, or I should have it up and running, along with all of my VFD5 related content, on this page as well. I can say that their CTO, Jason Collier, is a great speaker and it will be an entertaining 2 hours to say the least!
I’ve had the pleasure of seeing PernixData a number of times, both at our local Toronto VMUGs as well as at VMworld. Also, I have a couple close friends working for Pernix, so I’m very familiar with the solutions they currently offer. One interesting thing about Pernix is that they have a bit of a history of releasing new features and enhancements at Tech Field Day events (see their Storage Field Day 5 presentations), so I’m definitely looking forward to seeing them on June 24th in Boston.
So what do they do?
PernixData in its simplest form is a server-side cache play. Their software, FVP, essentially allows you to accelerate both reads and writes utilizing server components – both RAM and SSD drives. Basically, FVP sits in the middle of your data path, between the hypervisor sending the I/O and the storage array receiving it. What this does is allow your server components to essentially act as a cache for your storage array – and since they sit right next to all of your compute, you can imagine the benefits in terms of efficiency and performance FVP provides.
The first thing that comes to mind when looking at all of this is that the cache – the SSD and RAM – is not shared storage, so what happens when a host decides to take a walk and brings all of that non-committed write cache with it? Because of situations just like this, Pernix replicates any writes across all nodes (or the nodes you choose) in your FVP cluster before acknowledging the write back to the VM – allowing for host failure scenarios and ensuring that your writes are safely written back to your storage array. All this while still supporting advanced vSphere features such as HA and DRS.
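That write path can be sketched in a few lines. This is a toy illustration of the concept described above – cache locally, mirror to a peer, and only then acknowledge the write – not FVP’s actual implementation; the class and host names are made up.

```python
# Toy sketch of a replicated write-back cache: a write is acknowledged to
# the VM only after it lands in the local cache AND on a peer node, so a
# dead host can't take the only copy of uncommitted data with it.
# Illustration of the concept only -- not PernixData's implementation.

class CacheNode:
    def __init__(self, name):
        self.name = name
        self.pending = {}  # writes cached but not yet destaged to the array

    def store(self, key, data):
        self.pending[key] = data

class WriteBackCluster:
    def __init__(self, nodes, replicas=1):
        self.nodes = nodes
        self.replicas = replicas  # peer copies required before we ack

    def write(self, local, key, data):
        local.store(key, data)                      # cache locally first
        peers = [n for n in self.nodes if n is not local][:self.replicas]
        for peer in peers:                          # mirror to peer(s)
            peer.store(key, data)
        return "ACK"                                # only now tell the VM it's safe

    def survives_failure_of(self, failed, key):
        """After a host dies, is the uncommitted write still held somewhere?"""
        return any(key in n.pending for n in self.nodes if n is not failed)

a, b, c = CacheNode("esx-a"), CacheNode("esx-b"), CacheNode("esx-c")
cluster = WriteBackCluster([a, b, c], replicas=1)
cluster.write(a, "block-42", b"data")
print(cluster.survives_failure_of(a, "block-42"))  # True -- a peer holds a copy
```

The trade-off is exactly what you’d expect: every acknowledged write costs a network round trip to a peer, which is the price of doing write-back caching safely instead of write-through.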
So is server-side cache a band-aid?
I’ve heard this term a lot in the industry – stating that server-side caching is just a band-aid for the real problem: your underlying storage. But when I hear this I ask myself – if Pernix and other companies can deliver a solution that drives enough IOPS and enough performance to successfully and efficiently run my environment, do I really care if my underlying storage isn’t doing that on its own? Honestly, if no one is complaining and everything is running up to my expectations, I feel like it’s a win-win – not a band-aid.
Pernix definitely has some awesome innovation in their software – FVP covers all the angles when it comes to providing that fault-tolerant, mirrored, read and write cache for your hosts. You can enable caching at a per-datastore or per-VM level – allowing you to accelerate only your most crucial workloads – and FVP now supports not just block storage, but NFS as well! I have no idea what Pernix has in store for us at VFD5 but you can bet it will be pretty awesome! Once again, you can tune into all the action by watching the live stream on the VFD5 event page – as well, all my content and the live stream will also be on my VFD5 page.
Alright, here’s another company presenting at VFD5 in Boston that I recognize, but know very little about! Thankfully the Stanley Cup playoffs are done and I now have a little extra room in my brain to take in all the info that will be thrown at us. Anyways, I started to do a little digging on NexGen and oh boy, what a story they have! Stephen Foskett has a great article on his blog in regards to the journey NexGen has taken – it’s pretty crazy! Certainly read Stephen’s article, but I’ll try to summarize the craziness as best I can…
Basically, a couple of the LeftHand founders got together and founded NexGen – ok, this story doesn’t seem all that crazy so far. Well, after a few years Fusion-io came in with their wallets open and acquired NexGen – again, not a real crazy spin on a story! Moving on, we all know that SanDisk walked in and acquired Fusion-io, getting NexGen in the process. Then, the next thing you know, SanDisk spun NexGen out on their own, putting them right back where they started! This all just seems wild to me!
So where do they stand today?
NexGen is a storage company offering a hybrid flash array with software that helps their customers align their business practices with their storage by prioritizing the data they store. So what does that really mean? Basically it comes down to QoS and service levels. NexGen customers can use these two concepts to define the performance, availability, and protection of their data by specifying the IOPS, throughput and latency they need for each and every application. Depending on the service levels assigned to a workload, NexGen can borrow IOPS from a lower-tiered service in order to meet the QoS defined on a business-critical application.
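Here’s a toy model of that borrowing behavior. The tier names, workload names and IOPS figures are all made up for illustration – this is not NexGen’s actual algorithm – but it shows the idea: when a business-critical app falls short of its target, capacity is drained from the lowest service tier first.

```python
# Toy model of tiered QoS "borrowing": a shortfall on a critical workload
# is covered by taking IOPS from the lowest service tier first.
# Workloads, tiers, and numbers are invented for illustration.

def rebalance(apps, shortfall_app, needed):
    """Shift IOPS to shortfall_app, draining lower tiers before higher ones."""
    # tier 1 = mission critical ... tier 3 = best effort
    donors = sorted((a for a in apps if a is not shortfall_app),
                    key=lambda a: -a["tier"])  # lowest tier donates first
    for donor in donors:
        if needed <= 0:
            break
        take = min(donor["iops"], needed)
        donor["iops"] -= take
        shortfall_app["iops"] += take
        needed -= take
    return apps

apps = [
    {"name": "oracle-prod", "tier": 1, "iops": 4000},
    {"name": "file-server", "tier": 2, "iops": 2000},
    {"name": "dev-test",    "tier": 3, "iops": 1500},
]

# oracle-prod needs 2000 more IOPS to meet its service level:
rebalance(apps, apps[0], needed=2000)
print([(a["name"], a["iops"]) for a in apps])
# dev-test (tier 3) is drained first, then file-server covers the remainder
```

Note that the tier-2 file server only gives up what tier 3 couldn’t cover – the middle tier degrades gracefully instead of being raided first.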
Another unique feature of NexGen Storage is the way they use flash. Most arrays will place their flash behind some sort of RAID controller, whereas NexGen utilizes the PCIe bus to access its flash, providing a redundant, high-speed, low-latency caching mechanism for both reads and writes.
There are certainly a lot more bells and whistles within the NexGen arrays and a much bigger story to be told here. The way NexGen is utilizing flash within the array is definitely piquing my interest, but honestly, I’m more interested in the story of the company and how all those acquisitions and spin-offs have helped them. I’m sure they will address both at VFD5, and believe me, there will be more posts around NexGen and their offerings. If you want to follow along during the VFD5 presentations you can see them live both on the official VFD5 event page, as well as my VFD5 event page where all my content will be posted.
There has been a lot of buzz about Rubrik over the last few weeks, with them going GA and coming up with, oh, you know, a cool $41 million in Series B funding. Certainly, if you hadn’t heard of them before, you probably recognize their name now! I for one had not looked at their solutions at all. I’d heard the name, but never gave it a look! That will change come June 25th at Virtualization Field Day 5, when Rubrik takes the stage to deep dive into what they dub “the world’s first converged data management platform”.
So what exactly is a data management platform?
It’s most certainly a fancy name, but it’s much, much more. In simple terms you can think of the Rubrik appliance (Brik) as a backup appliance – a backup appliance architected in such a way that you can scale to thousands of nodes depending on the amount of data you are looking to protect. Currently they offer the r330, a 3-node appliance with 10TB of disk, and the r340, a 4-node appliance with 15TB of disk.
Wait – did you say backup?
Sure, there are a lot of players in the backup space. We have our traditional players that have seen it all. Companies like Symantec and EMC come to mind. Then virtualization came along and we started to see backup solutions being purpose built for virtualization. Veeam, Unitrends, Trilead are near the top of the list. So with all of these companies still at play within the data center backup space do we have room for one more? Can Rubrik differentiate themselves from the others?
So what makes Rubrik unique?
Appliance driven – With the exception of Unitrends, I don’t see many backup vendors coming in the form of a full appliance. Essentially what Rubrik has done is take the software and hardware requirements of their backup solution and deliver it in a 2U scalable appliance architecture. Speaking of scale, Rubrik’s building-block architecture allows all tasks and operations to be run on any node within the cluster – therefore added nodes don’t just expand capacity, they should also increase performance and availability as well.
Global File Search – This one is a big feature in my opinion. There have been countless times where someone I support has come up to me looking for a file to be restored, but can’t remember where they saved it. “I just clicked it from my recent documents,” they normally say. Rubrik has a file search capability that spans across all of your VMs and actually incorporates auto-complete functionality – a little like Google for your backups.
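To show why this is such a quality-of-life win, here’s a toy sketch of backup-wide file search with prefix auto-complete: index every filename across every VM’s backups, then match as the user types. A sorted index plus binary search keeps prefix lookups fast; this is just an illustration of the idea, not Rubrik’s implementation, and the VM names and paths are invented.

```python
# Toy backup-wide file search with prefix auto-complete.
# Illustration only -- not Rubrik's actual implementation.

import bisect

class BackupIndex:
    def __init__(self):
        self.entries = []  # (lowercased filename, vm, full path), kept sorted

    def add(self, vm, path):
        name = path.rsplit("/", 1)[-1].lower()
        bisect.insort(self.entries, (name, vm, path))

    def autocomplete(self, prefix):
        """Every file, on any VM, whose name starts with the typed prefix."""
        prefix = prefix.lower()
        i = bisect.bisect_left(self.entries, (prefix,))
        hits = []
        while i < len(self.entries) and self.entries[i][0].startswith(prefix):
            hits.append((self.entries[i][1], self.entries[i][2]))
            i += 1
        return hits

idx = BackupIndex()
idx.add("fileserver01", "/shares/finance/Budget2015.xlsx")
idx.add("desktop-42",   "/Users/bob/Documents/budget_draft.docx")
idx.add("fileserver01", "/shares/hr/Handbook.pdf")

# The user only remembers "it was a budget something":
print(idx.autocomplete("bud"))
# finds the budget files on BOTH VMs, no matter where they were saved
```

The key point is that the user never has to remember which VM or which folder the file lived in – a few typed characters are enough, which is exactly the “recent documents” scenario above.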
Multi-Tiered Storage – Man! Some companies are just getting around to incorporating some kind of auto-tiering in their production storage – Rubrik is doing it in your backup storage. What this does is increase efficiency and speed. All data sent to the Rubrik appliance enters through a flash tier – and we all know the benefits of flash. The flash tier also provides the basis for the global file search magic, as it stores all metadata on SSD as well.
Cloud Integrated – Well, Amazon S3 anyways. Users are able to choose where backups are located, whether that be on premises or inside Amazon! A great solution for any of those backups that you are required to keep long-term and seldom access!
I mentioned earlier that I don’t know a lot about Rubrik – in fact, all that I know is what I’ve written in this blog post! The buzz surrounding Rubrik has been nothing short of amazing, so I’m excited to see what they have to offer and what separates them from the already established players in the market! On June 25th @ 10:30 we will see what Rubrik has to offer. You too can watch the live stream on the VFD5 event page, or on my VFD5 event page where all of my content and blogs about the show will be posted.