Monthly Archives: January 2015
Let’s set the scene
You are at your desk when your CIO can be heard descending from the marble-floored, nicely painted floor above you. They stop and ask for an update on the private cloud initiatives. As you begin laying out all your options they quickly intervene, quoting several articles they read in the in-flight magazine on their return trip from Hawaii. The conversation caps off with an "OpenStack, it's all the rage, and we need to be doing it, so, do it!" The endless articles and blogs you have read about OpenStack and its complexity suddenly rush back into your brain. As the sweat begins to drip off your nose, you wonder whether your own planned trip to Hawaii is even feasible now that you will be wrapped up in the OpenStack project for the next year. Your coworker, who was quietly eavesdropping on the whole conversation, promptly fires off an "FYI" into your inbox containing the link to all the Platform9 videos from VFD4, stating they might help, followed up with some nervous text about the whereabouts of his stapler.
FYI – Platform9 at #VFD
Platform9, the second presenter on the first day of VFD4, featured Sirish Raghuram (CEO) and Madhura Maskasky (Head of Product) showcasing their OpenStack-as-a-Service solution. Throughout all of the Platform9 presentations one simple message really shone through, and that message happens to be Platform9's manifesto:
Platform9’s mission is to make private clouds easy for organizations of any scale.
So how does Platform9 do this? Well, ironically they use one of the most complex cloud projects out there – OpenStack.
So why would Platform9 go down the OpenStack road given its reputation for being difficult and time consuming to implement and maintain? Well, it all comes down to how Platform9 envisions the future of private cloud. Sure, private cloud will have all of the usual components – self service, orchestration layers, resource pooling and placement – however Platform9 looks beyond this, stating that the private cloud of the future must span virtualization technologies – essentially not differentiate between ESXi, KVM, or Hyper-V. On top of spanning hypervisors, Platform9 states future private clouds must also span containers, as products like Docker have become all the buzz these days. One of the biggest aspects of Platform9's vision, however, is that private cloud must be open source – and since OpenStack is the 2nd largest open source project ever, coupled with its incredible drive from the community – hey, why not OpenStack?
So when looking at all of these requirements – orchestration, hypervisor agnosticism, container integration, resource pooling – OpenStack begins to make a lot of sense, seeing as it meets all the criteria defined in Platform9's vision of private cloud.
Wait a minute, you said as a service, isn’t OpenStack installed on-premises?
There are a couple of common ways enterprises can deploy OpenStack today. The first is completely on premises, utilizing existing or new infrastructure, all within your data center walls. This scenario is great in that you have complete control of your OpenStack environment – the management and data all reside safely within your business. However, this scenario can pose challenges as well. To implement OpenStack on premises, businesses need the resources to do so – and those resources encompass both skill-sets and time, both of which can add up in terms of dollars. You first need the skill-sets – the know-how, if you will, to design, implement, and manage OpenStack. Secondly, you need the time: time to manage the infrastructure and keep up with the updates and upgrades that, as we know, can come like rapid fire in the open source world.
The second common deployment method for OpenStack is a hosted private cloud. In this scenario a service provider is leveraged which completely hosts a company's OpenStack deployment. In most cases they look after the installation and configuration, the management and updates, removing this burden from the customer. That said, this model does not allow companies to utilize existing infrastructure and usually results in a greenfield deployment. And depending on the scale of the infrastructure needed, in some ways this can cost just as much as a public cloud instance, and your data still sits outside of your company's data center.
Platform9 takes an approach that merges both these scenarios, giving you the best of both worlds – in essence, they abstract the management layer of OpenStack from the hypervisor/data layer. In the end you are left with your data sitting on your existing infrastructure and the OpenStack management layer running, and managed by Platform9, within their infrastructure. Thus you get all of the benefits of an OpenStack private cloud, without the complexity or requirements of setting it all up and managing it.
The nuts and bolts
I know, I know – enough private cloud blah blah blah OpenStack blah blah blah – how the heck does this work? First up, let's look at what Platform9 actually delivers from within OpenStack. Platform9 handles the key modules in OpenStack – Keystone, Nova, and Glance – and delivers them to you through their cloud. You provide only the compute (BYOC?) and the infrastructure to power them. What you won't see inside of Platform9's solution is Horizon – this is replaced by their own custom-built dashboard.
Once you are ready you can simply download and install Platform9's agent to any of the Linux/KVM instances that you would like to pair with your new OpenStack solution. Once initiated, the agent will begin discovering anything and everything there is to know about your environment – this information is then encrypted and sent back to Platform9 to be reported inside of the dashboard. From the dashboard, roles can be assigned to your physical servers – this is a KVM server, this is my image catalog (Glance), etc. – and changes are reflected back down to your hardware. That said, Platform9 is not just for greenfield deployments – if you already have KVM running on that physical server, your VMs and images are seamlessly imported into Platform9's OpenStack as instances, thus the whole "leverage your existing infrastructure" play.
Capacity is automatically calculated and reported into Platform9's dashboard, allowing customers to quickly see what CPU, memory, and storage they have available, deployed, consumed, etc. The custom HTML5 Platform9 dashboard is quite slick and easy to navigate. It supports multiple tenants, users, and tiers of infrastructure, which can be stretched across multiple data centers – meaning you could assign a specific user a specific amount of resources (CPU, memory, storage, networks) which come from specific resource tiers or data centers.
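The rollup behind that dashboard view is conceptually simple math: sum what each node reports, subtract what's consumed, group by tier. As a rough illustration only (the node data and field names below are entirely made up, not Platform9's actual schema), it might look something like this:

```python
# Hypothetical per-node reports, as an agent might phone them home.
nodes = [
    {"tier": "gold",   "cpu_cores": 12, "ram_gb": 96,  "ram_used_gb": 40},
    {"tier": "gold",   "cpu_cores": 12, "ram_gb": 96,  "ram_used_gb": 72},
    {"tier": "silver", "cpu_cores": 24, "ram_gb": 192, "ram_used_gb": 50},
]

def capacity_by_tier(nodes):
    """Aggregate total and available RAM per resource tier."""
    tiers = {}
    for n in nodes:
        t = tiers.setdefault(n["tier"], {"ram_total": 0, "ram_free": 0})
        t["ram_total"] += n["ram_gb"]
        t["ram_free"] += n["ram_gb"] - n["ram_used_gb"]
    return tiers

report = capacity_by_tier(nodes)
print(report)  # gold: 192GB total / 80GB free; silver: 192GB total / 142GB free
```

A dashboard assigning a user a slice of the "gold" tier would simply check requests against that tier's `ram_free` number before placement.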
Once environments are discovered and imported into Platform9, the custom dashboard then becomes the entry point for all OpenStack management and API calls. The dashboard takes those API calls and executes them accordingly, instructing the agents and your local infrastructure to take appropriate action. All OpenStack APIs are still available within Platform9.
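Because the standard OpenStack APIs remain exposed, any Keystone/Nova-aware tooling should in principle work against a Platform9 cloud. As a rough sketch (the endpoint URL and credentials below are hypothetical placeholders, not a real Platform9 account), here is the shape of a Keystone v2.0 token request a client would POST before making Nova calls:

```python
import json

# Hypothetical Platform9-hosted Keystone endpoint -- yours would differ.
AUTH_URL = "https://example.platform9.net/keystone/v2.0/tokens"

def build_token_request(username, password, tenant):
    """Build the JSON body for a Keystone v2.0 token request."""
    return {
        "auth": {
            "passwordCredentials": {"username": username, "password": password},
            "tenantName": tenant,
        }
    }

body = build_token_request("admin", "s3cret", "demo")
payload = json.dumps(body)
# A real client would POST `payload` to AUTH_URL, then pull the token and
# the Nova endpoint out of the response's serviceCatalog.
print(payload)
```

The point being: nothing about the hosted model changes the wire protocol, so existing OpenStack scripts and SDKs keep working.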
Wait! You said vSphere earlier – that was all KVM stuff!
Whoops! Did I forget to talk about vSphere integration? The good news is that since VFD4, Platform9's OpenStack has entered GA for use with KVM, and the vSphere integration beta has been announced. This means that all of Platform9's OpenStack management functionality can also be used with your existing vSphere environment, with the agent simply being deployed as an OVA appliance rather than directly on the hypervisor itself. In turn, your existing VMs will be discovered by Platform9 and imported as instances within the dashboard, and your templates are converted and imported into Platform9's OpenStack image catalog. Basically, all of the functionality that is there with KVM is also available with vSphere, allowing you to manage both your KVM and vSphere environments side by side – just replace KVM in the above section with vSphere. Oh, and Docker integration is in the works!
What's OpenStack and what's Platform9?
The OpenStack project is ever changing – there is plenty of work being done on each and every component – which provides a bit of a challenge for Platform9 when deciding what to include or exclude in their product. Take reporting, for instance: there is a service being developed inside the OpenStack project called Ceilometer that will handle the collection of utilization data and present it for further analysis – so Platform9 has opted to wait for Ceilometer before enhancing some of their reporting functionality. No point in veering away from vanilla OpenStack if you don't have to. That said, some things can't wait. The auto discovery of your existing workloads and infrastructure that Platform9 does is not native to OpenStack – this is something they have gone ahead and developed as a value-add to their solution. Platform9 is also looking to enhance the functionality of the different components, services, and drivers that already exist within OpenStack. Take the vSphere driver, for instance: Platform9 is working on support for more than one cluster per vCenter inside their environment, on natively accessing vSphere templates rather than performing copy/move operations, and on leveraging vCenter customization specifications directly from Platform9. They also note that they fully intend to push all of this work back into the OpenStack project – a value that every company should have.
So, in the end, do I think that Platform9 has achieved their mission of making private clouds easy for orgs of any scale? Absolutely. The key differentiator for me with Platform9's service is the sheer fact that you can not only use your existing infrastructure, but do so in a way that is simple, seamless, and all-encompassing in terms of discovering the workloads and templates you currently have. In the end, you are left with your KVM/VMware environment, managed by Platform9's OpenStack, set up within minutes – leaving you with a lot of free time to, oh, I don't know, look for Nelson's stapler.
Now I know I titled this post "in 10 minutes or it's free" but guess what? It's free anyways! You can try out Platform9 on 6 CPUs and 100 VMs absolutely free! For more info definitely check out Platform9's site and all of the #VFD4 videos here!
Scale Computing was the very last presenter at Virtualization Field Day 4 in Austin, Texas, yet they are the very first presenter that I'm going to blog about. This has nothing to do with the fact that they had the "last word" or that they are the most recent vendor in my memory – it's more because Scale achieved something that seemed a bit out of the ordinary for the week: they took a room full of VMware people (and one Hyper-V guy, sorry Jeff) and proceeded to pique everyone's interest in a product that is based on KVM. Scale's presentation was one of the best IMO – the delivery, the format, the technology, the people they had there – it all meshed together well and I was very impressed. You can see it for yourself here.
Know your market
All too often we see vendors try to play the all-encompassing role – catering to enterprise yet stating they also have offerings for small and mid-market businesses, and/or catering to SMB yet marketing their product at enterprise data center scale at the same time. This may be an easy thing for a larger company to do, say Dell, but for smaller companies it becomes a challenge – they have fewer employees and resources to truly be an "all in one" provider. Scale fits neither of these scenarios – right from the get-go Scale has positioned themselves as a provider to the small and mid markets, and throughout their lifespan and their VFD4 presentation they didn't veer away from that. Scale knows who their (potential) customers are and has certainly developed a solution that caters to them. As they stated, there are only 500 Fortune 500 companies, yet there are 380,000 small businesses.
What does SMB need?
Typically in the SMB space we see a very slim, if any, IT department – think anywhere from 1 to 12 people doing everything from fixing digital signage and deploying iPhones to maintaining Exchange environments and developing web applications/sites. It's these types of companies that don't have the resources, either people or money, to employ subject matter experts – they don't have a virtualization guy, they don't have a storage guy, they don't have an Active Directory guy, nor do they have the money to send employees off on training to become experts in these technologies. So when it comes to running 50 or so virtualized workloads there are definitely challenges. These companies need a solution that is, first off, simple – something they can plug in and use without too much trouble or training. It also needs to be a solution that is cheap – more often than not, SMBs enter the buying phase, especially in IT, without a budget. They don't have a fixed amount to spend on IT, so if they go to the CEO or CFO with a solution that is priced through the roof, you can pretty much bet on being shot down. One more thing SMBs may look for in a solution is scale – let's face it, most SMBs don't enter a market with hopes of simply being an SMB their entire life. They want to grow, they want to expand, and with that, their technology solutions will need to adapt.
What does Scale offer?
Although Scale Computing originally went to market as a scale-out storage solution, their initial intent was always to be in the hyperconvergence space. As Jason Collier stated during his presentation, "Scale Computing, a pretty stupid name for a storage company right?" They've successfully done this by coming to market with their hyperconverged solution, dubbed the HC3. The HC3 aims to be a "data center in a box" by lumping storage, compute and virtualization into a singular appliance offering a plug-and-play type deployment. HC3 is sold in three separate hardware platforms – the HC1000, HC2000, and HC4000 – all of which come as a 3-node starter system taking up only 3U of rack space, perfect for cramming into a broom closet or the corner of the room like SMB so often does. The platforms vary as follows:
- HC1000 – a SuperMicro SATA-based solution with 6-24TB raw storage, 96GB RAM, and 12 CPU cores
- HC2000 – a Dell-based SAS solution with 7.2-14.4TB raw storage, 192GB RAM, and 18 cores
- HC4000 – a Dell-based SAS solution with 14.4-28TB raw storage, 384GB RAM, and 36 cores
Although the hardware specs are important, the real value of the HC3 comes from the HyperCore software layered on each node within your HC3 cluster. HyperCore is purpose-built for the HC3 system, continuously monitoring VMs, hardware, and software to ensure that nodes can automatically respond to infrastructure events and failures, maintaining the level of high availability that we all need. For the VMware faithful, you can think of this like vCenter, but integrated into every single node – no separate license, no separate management server.
Underneath all of the orchestration included in the HyperCore software is perhaps some of the most innovative IP within Scale Computing: how they manage the underlying storage. They do this utilizing a technology they have built from scratch – the Scale Computing Reliable Independent Block Engine, or SCRIBE for short. SCRIBE, in layman's terms, is a direct block engine which bypasses any Linux file-systems whatsoever. It spans all of the HC3's storage within the cluster, providing no single point of failure and ensuring consistency for the VMs running atop it. SCRIBE is what allows the HC3 to make its clones in a matter of seconds, as it doesn't necessarily need to duplicate all data all the time – same goes for snapshots and replication. For the VMware informed, you could think of it a lot like VSAN, providing multiple copies of data spanning multiple nodes without the use of a VSA. SCRIBE is a technical beast (at least for me) and how it updates metadata and ensures consistency can be a bit of a challenge to understand for those who aren't real storage nerds. I encourage you all to check out Scale's explanation of SCRIBE from Storage Field Day 5 as it does a good job of laying it all out (get it?).
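Scale hasn't published SCRIBE's internals in detail, but the core idea – every block gets multiple copies, each copy on a different node, spread across the whole cluster – can be sketched with a toy placement function. This is purely my own illustration of the concept, not Scale's actual algorithm:

```python
import hashlib

NODES = ["node1", "node2", "node3"]
COPIES = 2  # each block lives on two distinct nodes

def place_block(block_id, nodes=NODES, copies=COPIES):
    """Deterministically pick `copies` distinct nodes for a block."""
    # Hash the block id to a starting position on the node ring...
    h = int(hashlib.md5(block_id.encode()).hexdigest(), 16)
    start = h % len(nodes)
    # ...then walk the ring, so replicas never share a node.
    return [nodes[(start + i) % len(nodes)] for i in range(copies)]

placement = place_block("vm42-block-0007")
assert len(set(placement)) == COPIES  # copies land on different nodes
print(placement)
```

Because placement is deterministic and spread across every node, any node can find a block's copies without a central lookup, and losing one node never loses both copies – which is the property that makes the fast clones, snapshots and node-failure recovery possible.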
How does all this fit into SMB?
Sure, when you pull the covers off and take a look at the inner workings of SCRIBE and HyperCore it all seems complicated; however, Scale has placed a very stripped-down, unique GUI atop all of the complexity that is KVM. We mentioned earlier that an SMB/mid-market's key technology needs revolve around simplicity, availability, scalability and budget – let's look further at how Scale addresses each of those factors.
You would think that utilizing KVM and a purpose-built storage solution would really complicate things for an end user – the command line and bash are not normally familiar territory for an SMB customer. That's exactly why Scale has focused a ton of effort on the GUI and overlying HyperCore software that abstracts this complexity. The interface is trimmed down, providing end users with only the functionality they need. It's a platform on which SMBs can deploy, manage, snapshot and protect their VMs – no selecting datastores, no configuring switches, no hardware compatibility lists, etc. In fact, I found myself continuously asking "So, is this it? This is the product? Where do I get the advanced settings?" Their answer was always "Yes, this is it, there are no advanced settings, remember our target customers." This is a hard realization for a VMware guy, someone who has been twisting knobs and playing with advanced configuration parameters for years, but I do get where they are going with it. Most SMBs simply don't have the budget and/or time to train employees on advanced settings and just want to provision their VM and move on to the next task. With the HC3 you can accomplish just that in a matter of seconds.
Available and Scalable
Every size of business has what the market likes to call business critical applications, and this most certainly applies to SMB as well. We don't and can't differentiate what is and isn't a BCA depending on business size – if we did, they would be called enterprise business critical apps. Small and mid-market businesses, just like enterprises, have many applications that are extremely important to their company and need to be available and protected. The HC3 provides an avenue for the SMB to gain access to what is sometimes thought of as an enterprise-only feature: high availability. Each and every time a VM is deployed, it's deployed in a way that makes it highly available – no enabling anything, no tweaking knobs; this is the HC3's default behavior. Implementing disaster recovery is also a breeze with the HC3 – it's as simple as adding another HC3 cluster at a remote location, configuring it as a replication target and enabling replication on a VM-by-VM basis inside of the HyperCore software. DR is something that is normally out of budget for an SMB – Scale brings it within arm's reach!
As mentioned earlier, SMBs are always looking to grow, and when they do, their technology needs can skyrocket, leaving them to answer tough questions in regards to compute, storage and licensing. More often than not, SMBs have bought into a solution that simply cannot scale with their growth, resulting in expensive rip-and-replace scenarios. The HC3 is built in such a way that scalability is on the front line – I mean, it's right in the company's name, right? When a company has outgrown their HC3 starter pack they are able to simply add nodes to their cluster, and this process is done seamlessly: rack and stack another node, point to an IP inside of your configuration interface, and the configuration is applied to the node. The HC3 can support up to an 8-node cluster, certainly providing more than enough wiggle room for most SMBs to grow.
Perhaps the biggest driver in an SMB/mid-market business is budget – they simply don't have the dollars to spend that enterprise companies do. The HC1000 starter pack is offered at a really affordable price, starting at roughly $25K – incredibly lower than most solutions providing the same type of functionality. There is one licensing tier – either you have an HC3 with all of the features or you don't. There is no paying for "enterprise" features like HA or replication; one SKU gives you everything Scale has to offer. But Scale addresses budget in more ways than just money. You simply don't need the level of employee knowledge to operate an HC3 that you would with a full-blown enterprise virtualization stack, thus saving huge amounts of money and time on training. The simplicity plays a role here as well: most if not all of the features are set up and ready to go when your HC3 arrives – no spending time or money on consultancy to configure HA or enable the migration of VMs between nodes. And the features that do require a little setup, such as replication, are completed in a couple of clicks. Support plays another key role in budget friendliness – Scale ranks way above average, receiving a Net Promoter Score of 79, and they have one support type: 24/7. Every HC3 starter pack sold comes with 1 year of 24/7 support already included, no messing around.
So in the end we are left with a company that really knows their target market, and it shines through in the product they deliver. Do I think the HC3 is a good fit for an SMB? Absolutely. Maybe not every SMB, but I'd say 99% of them. Scale has addressed a market that a lot of companies simply skim over. Most will strip a few "enterprise" features out of their existing enterprise product, lower its maximum configurations and call it their SMB offering. This is not the case with Scale – what you see is what you get: a simple, easy-to-use, purpose-built, scalable hyperconverged appliance that even a 4-year-old can use (with the bribery of chocolate chips, of course). No licensing tiers, no need for feature comparisons and tough licensing discussions – just buy what you need and grow!
I've been a VMUG Advantage member going on three years now, so I've experienced first hand some of the great value that it brings along with it. Basically, if you plan on attending VMworld and doing a couple of exams or training courses throughout the year, the VMUG Advantage discounts more than pay for the program fees. The full VMUG Advantage benefits can be seen here, but for laziness' sake, here's a quick outline of the "biggies":
- $100 discount on VMworld
- 20% off VMware delivered classes
- 20% off VMware certifications
- Access to all VMworld online content
- 50% off VMware Workstation/Fusion
- Discounts on VMware Learning Zone, VMware On-Demand, Lab Connect and more.
With that said, those who can't attend VMworld or are not planning on taking any VMware training or exams might find themselves scratching their heads wondering what real value the VMUG Advantage program can bring them. And honestly, before now there wasn't a whole lot.
New in 2015, VMUG Advantage subscribers will now have access to EVALExperience! EVALExperience provides subscribers with exclusive access to certain pieces of VMware software, coupled with a 365-day non-production NFR license. Basically, VMUG Advantage subscribers will be able to download and use nine different pieces of VMware software in their home labs in order to explore new features, gain hands-on experience and further educate themselves on the offerings. So what's included? Well, it's not a simple lightweight set of software – there are some sweet (and expensive) applications included (shown below) that should be able to keep you busy for the year:
- VMware vCenter Server 5 Standalone
- VMware vSphere with Operations Management (Enterprise Plus)
- VMware vCloud Suite Standard
- VMware vRealize Operations Insight
- VMware vRealize Operations 6 Enterprise
- VMware vRealize Log Insight
- VMware vRealize Operations for Horizon
- VMware Horizon Advanced
- VMware Virtual SAN
To me, EVALExperience is a solution to a couple of gripes within the community. First, it adds that extra value for those VMUG Advantage subscribers who don't attend VMworld or take any official training or certifications. IMO, the ability to download and evaluate software for longer than the usual 60-day trial period is well worth the $200 price tag of VMUG Advantage. Secondly, it provides somewhat of a replacement for the VMTN subscription that VMware used to offer. VMTN, which offered a similar solution of downloading and using NFR licenses of VMware's products, was put to rest a number of years back. The community has been trying for a while now to get VMware to reinstate it – this, along with teaming up with VMUG, answers those screams about the VMTN.
So, if you are already a VMUG Advantage subscriber you should have received an email outlining how you can gain access; if you aren't, go and sign up here. More info on the EVALExperience program can also be found here. For now, a big thanks to VMUG, and happy testing!!!
In a previous post I highlighted 4 of the 8 sponsors taking part in Virtualization Field Day 4 coming up January 14th through 16th in Austin, Texas. Now it’s time to move on to the final four! As mentioned in the previous post, the Tech Field Day events would certainly not be possible without the support of the sponsors so a big thanks goes out to all 8 who are participating this round. Without further ado, let’s get to it…
If we look at the hyper-convergence market today it would be somewhat of an understatement to say that it is "red hot". SimpliVity, along with their competitor Nutanix and fellow VFD sponsor Scale Computing, have really changed the way companies are deploying in-house IT. Even VMware has jumped on board by providing OEMs a go-to-market strategy leveraging their EVO:RAIL hyper-convergence reference architecture. It's a fair statement to say that hyper-convergence is here to stay, and a big part of that is due to the technology and material that SimpliVity has produced. Their product, the OmniCube, provides customers with a scalable, building-block type architecture, encapsulating server compute, storage, networking and switching into a single pool of resources. I've seen SimpliVity's solution in action many times during tradeshows and VMworld, but never "out in the wild." Honestly, I think they have a great solution and there are a lot of things I like about it – the global source-side dedupe is awesome, the compression is great. I also like the overall way SimpliVity goes to market, by allowing commodity x86 hardware to take their software, along with their custom-built hardware accelerator PCIe card, and essentially end up with a choose-your-own-adventure type deployment. The hyper-convergence market is "the in thing" right now, so I can't wait to see what SimpliVity has in store for VFD4. SimpliVity has dabbled in some of the Tech Field Day happenings in previous years, such as the SDDC Symposium and the TFD Extras held during VMworld, but this will be their first go at a full Tech Field Day event.
What more can we say about Tech Field Day and SolarWinds?!?! They have been a long time supporter of the event, participating in (I'm using abbreviations since there are a lot of them) NFD1, NFD3, NFD5, NFD6, TFD4, TFD6, TFD7, and TFD9 – that's quite a resume when it comes to sponsorship. As a company, I really respect the way SolarWinds handles the community surrounding them. I had the chance a few months back to participate as a thwack ambassador and I can't give this community enough praise! They are engaged, helpful and smart! Be sure to check out thwack if you get a chance! But, on to what matters – SolarWinds and their technology. This being a day about virtualization, one can only assume that SolarWinds will speak to their management software, cleverly titled "Virtualization Manager". I've personally never used the product but have seen it in action many times during demos, webinars, etc., and honestly, if you are utilizing both VMware and Hyper-V in your environment and looking for a monitoring/management solution, I wouldn't hesitate to recommend that you at least check out SolarWinds. They have a ton of fully customizable alerts and reports to help customers track things like CPU ready and memory ballooning, as well as a complete section to help with capacity planning by finding under- and over-sized VMs within the environment. All this integrates with other traditional SolarWinds products such as Server & Application Manager (SAM). If you have other SolarWinds products in your environment, Virtualization Manager may be a perfect fit. Whatever SolarWinds is presenting at VFD4, I will be all ears and will for sure have the info posted here.
When it comes to StorMagic, they are one of those companies that I've heard of before but have never really looked at too deeply. Honestly, up until I began checking them out in more detail for this post, I had assumed they were "just another VSA vendor". And in some ways, well, I'm right. But in a lot of ways, I'm wrong. StorMagic's product, SvSAN, is indeed a VSA, but not "just another VSA". SvSAN serves a distinct type of customer: one with a large centralized infrastructure and many remote/branch offices to support. In a perfect world these remote sites would have crazy awesome fibre connections back to the central office, and all the applications, VMs and services would be driven from the central office's datacenter. In the real world we have crappy WAN links and "needy" applications – applications that need to run inside these branch offices in order to provide low latency and meet performance requirements. Perfect world: we have the budget and infrastructure to throw at this problem – SANs in every remote office. Real world: there's no money! Back to the perfect world: we'd have IT staff in every office babysitting all of this stuff. Real world: we don't – we have staff sitting in our central offices running rampant, having the person who answers the phone offsite reboot servers for them! SvSAN really helps bridge these perfect and real worlds. By utilizing the SvSAN VSA in the remote sites, we are able to provide shared storage to our remote locations in an active-active fashion with as little as 2 nodes, all managed centrally. Watch for more on StorMagic and SvSAN next week.
I'm pretty pumped to see VMTurbo at VFD4 since I know that one of the presenters, Eric Wright (twitter/blog), will be representing. The thing is, Eric, along with Angelo Luciani and myself, are co-leaders of the Toronto VMUG – which means we have seen countless presentations and sessions sitting side by side. It will definitely be cool to see Eric on the other side of the fence and I'm sure he will knock it out of the park. As far as VMTurbo goes, like many other players they participate in the operations/management end of things. You can truly see that they have put a lot of development, time, and effort into their flagship product, Operations Manager. OM, like others, is a monitoring solution – looking for performance issues and troublesome areas in your environment, and making recommendations on how to alleviate them. OM, though, takes a drastically different approach than most monitoring tools. VMTurbo takes your virtual data centre and transforms it into what they call an "economic market". Picture this: your resources – things like memory, CPU, etc. – are all in demand, and they all have a cost. The cost of these items goes up and down depending on availability. Not much memory around? Cost goes up. Have an abundance of memory? Well, things are going to be a bit cheaper. The VMs are the consumers running around buying up all this merchandise. So when considering a recommendation – say a VM looking to move from one host to another – it may or may not be a good idea: things may cost more over on that other host, so although the VM is experiencing issues, it may simply be cheaper to stay right where it is. VMTurbo has been around a while, and they are a big player when it comes to community participation. If you are looking for a good primer, check out the videos from VFD3. Again, I'm excited to see Eric, excited to see VMTurbo, and excited to learn more about the interesting model they have.
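That supply-and-demand idea can be caricatured in a few lines of code: the scarcer a resource is on a host, the more it "costs", and a VM only moves if shopping elsewhere is genuinely cheaper. This is purely my own toy illustration of the concept, not VMTurbo's actual pricing engine:

```python
def price(used, capacity):
    """Resource price rises sharply as utilization climbs toward capacity."""
    utilization = used / capacity
    return 1.0 / max(1.0 - utilization, 0.01)  # near-full => very expensive

def host_cost(host, vm):
    """What the VM would 'pay' to run its memory footprint on this host."""
    return vm["mem"] * price(host["mem_used"], host["mem_cap"])

vm = {"mem": 8}
host_a = {"mem_used": 56, "mem_cap": 64}   # busy host: memory is pricey
host_b = {"mem_used": 16, "mem_cap": 64}   # quiet host: memory is cheap

stay = host_cost(host_a, vm)
move = host_cost(host_b, vm)
decision = "move" if move < stay else "stay"
print(decision)  # prints "move" -- the quiet host undercuts the busy one
```

Flip the numbers so the destination is the busy one and the same arithmetic tells the VM to stay put – which is exactly the "it may cost more over there" behavior described above.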
See you in Texas!
So, there you have it. Between the above and this post we have a brief look into the 8 sponsors who will be participating in Virtualization Field Day 4 next week (Jan 14-16) in Austin, TX. Simply reviewing old TFD YouTube videos has me pumped and excited, and I can barely wait to sit at those tables amongst all the brain power that is attending.
If you want to watch live, you can – just head to the VFD4 landing page during the event as all sessions are live streamed. Also, don’t forget to watch and participate via Twitter using the #VFD4 hashtag.
Virtualization Field Day 4 is right around the corner, taking place January 14-16 in Austin, Texas! Now, I’m trying to be smart and perform due diligence on the sponsors by doing a little pre-blogging and research on all of the vendors that will be participating. There are a total of 8 sponsors, and honestly, without them, Virtualization Field Day would be pretty boring. They are the ones that drive the content and spark the great conversations that happen over the three days, both at the event and over Twitter.
Some of the sponsors I’m very familiar with, however there are some that I’ve simply just heard of (thus my need to research). So, without further ado, here’s a small glimpse into 4 of the 8 wonderful sponsors making #VFD4 possible. Watch for a post with the remaining sponsors soon!
I had always been under the impression that CommVault was somewhat of a newer company, not quite “startup” status, but I had always just assumed they were a decade or so old – I couldn’t be more wrong. CommVault was actually formed as a development group within Bell Labs way back in 1988. To put 1988 into perspective, well, I was 10 and the Edmonton Oilers won the Stanley Cup – I have a son who will soon go through the first situation himself; the second, well, I don’t see Lord Stanley in oil country any time soon 🙂 Yeah, so back to CommVault. 1988 places CommVault at the 27-year mark – certainly a long period of time for a software company to be around. The magic sauce at CommVault is their flagship product, Simpana. Simpana dubs itself “A Single Platform to Protect, Manage, and Access all of your company’s information” and does so by providing customers with one code base, one product that handles all of your backup, replication, archive, and recovery needs. And by all they truly mean all – Simpana has support for physical servers, virtual servers (VMware and Hyper-V), desktops, laptops, and even support for backing up and archiving individual application items such as SQL databases and Exchange emails. One driving feature behind Simpana that sparks my interest is the ability to “migrate” or backup/restore to/from VMware, Hyper-V, vCloud, etc. – this can definitely give customers options in terms of disaster recovery. Simpana has a ton of features, too many to go over in a small intro blurb, so check them out yourself – I can’t wait to see what CommVault has to offer for VFD4 and you can bet that I’ll summarize it as best I can here.
This was one of those lesser known companies that I’ve never heard of and had to do a little research on (sorry – that was a really bad joke 🙂 ). This will be Dell’s first time presenting at a Virtualization Field Day – they have certainly been present at Networking Field Day, Storage Field Day, as well as a couple of Tech Field Days, but this is their first go at a VFD. Dell is a huge player in terms of virtualization and the company covers almost, if not all, components of a virtual data center – meaning they sell servers, networking, storage, software and services all relating to virtualization (server, desktop, networking, storage) technologies. With that said, the mystery of which component Dell is going to present on is definitely making Dell number 1 on my most anticipated sessions. The fact that the Dell sessions will be held deep within the guts of their Austin headquarters is also a pretty awesome perk.
Less than 6 months after coming out of stealth, Platform9 is positioned to make their first ever appearance at a Tech Field Day event on Wednesday afternoon. From what I can tell, Platform9’s goal is really to take the simplicity, agility, and convenience of the public cloud and apply that to a customer’s on-site, local hardware – kind of like, yup, you guessed it – a private cloud. There is a lot of competition in this space – and to tell you the truth, I personally don’t have a whole lot of experience with either Platform9 or their competitors, so I can only go on what I see on their website and what I have heard others say – this will change come next week during VFD4. Currently Platform9 supports KVM, vSphere, and Docker – whether or not this will be expanded I have no idea. The highlights I have noticed are the fact that the Platform9 management layer is somewhat hypervisor/technology agnostic, meaning vSphere/KVM VMs, along with Docker containers, are all treated as what is called an instance within Platform9, and it’s 100% cloud managed, delivered in a SaaS model. All of this, built on OpenStack – which could be a huge +1 on their part if they have simplified this enough. Again, I can’t find very much in terms of demos or videos out there, so I’m very excited to see what Platform9 has to offer come next Wednesday.
Scale Computing has been around since 2009, which makes them somewhat of a veteran in terms of converged infrastructure – but the fact is, unlike Simplivity and Nutanix, Scale started out on a different path. They broke into the IT world by shipping scale-out storage – NAS/SAN models which were targeted toward the SMB market to help drive companies’ storage costs down while providing a very simple, easy to use storage solution. This all changed in 2012 when Scale announced the availability of HC3 – a scalable, clustered hyper-converged node architecture that included their core storage, but now with compute and virtualization thrown into the mix. By 2012 Nutanix already had a piece of hardware shipping, with Simplivity not long to follow the year after – but there seem to be a few things that differentiate the HC3. Perhaps the biggest is target audience – Scale has always been, and is still, very focused on the SMB market, which means price is one of the major differences. In order to drive down price, Scale developed their very own fork of KVM, meaning their offering comes complete – no need for VMware or any other hypervisor/management licensing. The HC3 piques my interest as SMB has a lot of potential with virtualization – a lot of small companies are just getting started or still exploring virtualization options. I’ve not explored HC3 in enough detail to see if they have what it takes in terms of benefits and features to become a viable player – so I’m very anxious to see what they have to offer at VFD4.
And then there were four…
In an effort to keep this post small enough for someone to read (and to give me a break from writing) I think I’ll take the remaining four VFD4 sponsors (Simplivity, SolarWinds, StorMagic and VMTurbo) and place them in a second post. Watch for that sometime soon!
Virtualization Field Day 4 kicks off next Wednesday, January 14th. As always, it will be live-streamed, so you too can join in on the action by watching the live stream on the VFD4 landing page and participating in the conversations via Twitter using the hashtag #VFD4. Can’t wait!
Disclaimer: As a delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.
Since I do a lot of posts related to Veeam Backup and Replication, I thought I would give any readers who haven’t already heard a quick heads up that VeeamON, Veeam’s premier data center availability event, has been scheduled for a second appearance in Vegas come 2015! That’s right! It’s back!
The return of the green
I was lucky enough to be one of the bloggers selected to attend VeeamON 2014, and I can say from first-hand experience that if you have anything to do with Veeam – whether you are a customer implementing it, an administrator managing it, a partner selling it, or even the person who signs the dotted line on the purchase order – you will find something worthwhile at this conference. The highlight for me, other than the most epic party ever, was definitely the technical sessions that Veeam puts together for this event. Anything and everything in regards to Veeam Backup and Replication is covered during this show – and the sessions do not disappoint in terms of technical level. There are a ton of Veeam engineers present at the show, so let’s just say no question should go unanswered! The expo lounge was great – more low-key than most events, which made it really nice to have conversations (and beers) with both Veeam and the vendor ecosystem surrounding them.
Last year the show was held at the Cosmopolitan; this year it has moved across the road (if that exists in Vegas) to the Aria Resort in order to support a projected growth in attendance. Let me just say that having the conference all enclosed under one roof (even though it is a Vegas roof) is very nice, as it provides a more intimate setting and allows you to easily run back to your room to drop off swag, do some work, etc. Just as last year, you can pick yourself up some deeply discounted VMCE training – however, this time it starts a few days early to allow attendees to take in both the training as well as all the conference has to offer – a great opportunity for those looking to get certified!
If any of this (education, learning, certification, networking, partying) sparks your interest, you can go ahead and pre-register for VeeamON 2015 here! Taking place October 26 through the 29th, it should be a great time and I hope to see you there again! For now, keep calm and VeeamON!