Tag Archives: VFD5

Ravello Systems – Inception without the kick!

If you have at all visited this blog in the last 4 or so months you shouldn’t be surprised to hear that I’m a pretty big Ravello Systems fan!  I was part of their beta for nested ESXi and I’ve written about my thoughts on that plenty of times.  With the beta out of the way and access granted to all the vExperts, Ravello Systems took hold of the clicker at VFD5 in Boston for the first of what I hope are many Tech Field Day presentations.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I did not receive any compensation nor am I required to write anything in regards to the event or the sponsors. I have also been granted early access to the Ravello Systems ESXi beta in the past, and have received free access as a vExpert.  All that said, this is done at my own discretion.

As I mentioned earlier I’ve written plenty about what I’ve done utilizing Ravello Systems.  The platform is great for configuration validations, home lab type stuff, and for just exploring different functionality within vSphere.   You know, user type stuff.  At VFD5 Ravello went pretty deep in regards to how their software functions within Google and AWS, so I thought I’d take a different approach and try and dive a little deeper into how their technology functions this time around…to the point that my brain started to hurt.

HVX – A hypervisor that runs hypervisors, designed to run on a hypervisor – huh?!?!

Ravello’s magic sauce, HVX, is custom built from the ground up to be a high-performance hypervisor that runs applications (and other hypervisors) while itself running on a hypervisor (in the public cloud).  To say Ravello would know a thing or two about developing a hypervisor would be a major understatement – Ravello’s co-founders, Benny Schnaider and Rami Tamir, were once the co-founders of another start-up called Qumranet.  You know, the same Qumranet that originally authored this little known thing called the Kernel-based Virtual Machine, better known as….yeah, KVM.  So needless to say they have a little experience in the hypervisor world.

The first dream within a dream

As we know, Amazon’s EC2 is essentially an instance of Xen, whereas Google’s Cloud utilizes KVM.  So when we publish our application inside of Ravello we essentially deploy an instance of HVX, installed within a VM that has been spun up on either Xen or KVM – once our HVX hypervisor has been instantiated on our cloud hypervisor, our images or VMs within Ravello are deployed on top of HVX.  So even without yet touching ESXi within Ravello we are 2 levels deep!  Now, in a native ESXi deployment we know we can take advantage of common virtualization extensions such as Intel VT and AMD SVM, however in HVX, since we have already been abstracted away from the hardware by the cloud hypervisor, we don’t have these – instead, HVX implements a technology called Binary Translation to translate any executable code from the guests that is deemed “unsafe”.  This is coupled with something called Direct Execution, which basically allows any code that doesn’t need to be translated to run directly on the CPU.  Honestly, if you want to dive deeper into binary translation and direct execution, Ravello has a great blog post outlining it in a lot more detail than can fit into my maple-syrup-soiled, hockey-statistic-filled Canadian brain.

Aside from the performance features, HVX also presents emulated hardware up to the guests – the same hardware that we as VMware administrators are all used to – things like PVSCSI, VMXNet3, LSI, etc.  This is all available to our guests running on top of HVX, even to our guests running on top of our ESXi guests on top of HVX – I know, right!

[Image: HVX architecture diagram]
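
Just to make that binary translation / direct execution split a little more concrete, here’s a toy Python sketch of the general idea – this is purely illustrative and nothing like how HVX actually implements it (real hypervisors do this on x86 machine code in translation caches), but it shows the decision being made for every chunk of guest code: run it straight through, or rewrite it first.

```python
# Toy model of binary translation + direct execution.
# Purely illustrative -- real hypervisors do this on machine code,
# in translation caches, with far more sophistication.

UNSAFE_OPS = {"read_cr3", "hlt", "out"}  # hypothetical "privileged" ops

def translate(instr):
    """Rewrite an unsafe instruction into a safe, emulated equivalent."""
    return {"op": "emulate_" + instr["op"], "args": instr.get("args", ())}

def run(guest_code, cpu):
    for instr in guest_code:
        if instr["op"] in UNSAFE_OPS:
            instr = translate(instr)      # binary translation path
        cpu.execute(instr)                # direct execution path

class Cpu:
    def execute(self, instr):
        print("executing", instr["op"])

if __name__ == "__main__":
    run([{"op": "add"}, {"op": "hlt"}, {"op": "mov"}], Cpu())
```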

So, what actually happens when we click that ‘publish’ button from within the Ravello interface is somewhat unique – we know we need to install HVX into our cloud VM, but how many instances of HVX actually get deployed?  I’m not going to try to understand their algorithms around how they size their hypervisor, so I’m just going to say it depends on the resource allocation of the VMs within your application.  You could end up with a single VM running on one instance of HVX or you could end up with 6 VMs running on 2 instances of HVX – however the deployment scenario plays out, you can be assured that it will only be VMs belonging to that single application that get deployed on those HVX instances – no VMs from other people’s applications, not even any VMs from other applications that you may have.
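
Ravello didn’t share the sizing algorithm, so take this with a grain of salt – but a naive first-fit-decreasing packing like the Python sketch below gives you a feel for why one application might land on a single HVX instance while another gets spread across two.  The 32GB HVX capacity and the VM sizes are completely made up.

```python
# Naive first-fit-decreasing packing of application VMs onto HVX-sized
# cloud VMs. This is NOT Ravello's algorithm -- just an illustration of
# why one application might land on one HVX instance and another on two.

def pack_vms(vms, hvx_capacity_gb):
    """vms: list of (name, memory_gb). Returns a list of HVX 'instances'."""
    instances = []  # each instance is {"free": gb_left, "vms": [...]}
    for name, mem in sorted(vms, key=lambda v: v[1], reverse=True):
        for inst in instances:
            if inst["free"] >= mem:
                inst["free"] -= mem
                inst["vms"].append(name)
                break
        else:
            instances.append({"free": hvx_capacity_gb - mem, "vms": [name]})
    return instances

app = [("esxi-01", 16), ("esxi-02", 16), ("vcenter", 8), ("dc01", 4)]
for i, inst in enumerate(pack_vms(app, hvx_capacity_gb=32), start=1):
    print(f"HVX instance {i}: {inst['vms']}")
```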

That networking though!

Perhaps one of Ravello’s major strong points is how it exposes a complete L2 network to the applications running on top of it!  By that I mean we have access to everything L2 provides – services such as VLANs, broadcasting, multicasting, etc. – within the overlay network Ravello implements.  As mentioned before, depending on the size of the application being deployed, we may or may not have multiple instances of HVX instantiated within the cloud provider.  If we are limited to a single HVX instance, then the networking is “simple” in the sense that traffic doesn’t have to leave that hypervisor – all switching, routing, etc. can be performed within the one HVX instance.  However, when an application spans multiple HVX instances, creative technologies come into play as shown below.  Ravello has essentially built their own distributed virtual switching mechanism which tunnels the traffic between HVX instances or cloud VMs via UDP.

[Image: Ravello overlay networking across HVX instances]
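
If you’re wondering what “tunnel the traffic via UDP” boils down to, here’s a minimal Python sketch of wrapping an L2 frame inside a UDP datagram with a virtual-network id stamped on the front.  The header layout and port number are my own stand-ins – Ravello’s actual overlay wire format is theirs.

```python
# Minimal sketch of tunnelling an L2 frame between two HVX-style endpoints
# over UDP. The header layout and port are made up for illustration --
# Ravello's real overlay protocol is their own.

import socket
import struct

TUNNEL_PORT = 4789          # assumption: VXLAN-like port, purely illustrative

def encapsulate(vnet_id, ethernet_frame):
    # 4-byte virtual-network id followed by the raw L2 frame
    return struct.pack("!I", vnet_id) + ethernet_frame

def send_frame(peer_ip, vnet_id, ethernet_frame):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(encapsulate(vnet_id, ethernet_frame), (peer_ip, TUNNEL_PORT))

def decapsulate(datagram):
    vnet_id = struct.unpack("!I", datagram[:4])[0]
    return vnet_id, datagram[4:]   # hand the inner frame to the right vSwitch
```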

And storage…

The last challenge as it pertains to running Ravello applications inside the cloud comes in terms of storage performance.  Having HVX slotted in between the running applications and AWS allows Ravello to take advantage of the object storage capabilities of S3, yet still present the underlying storage to the VMs as a block device.  Essentially, when we import a VM into Ravello Systems, it’s stored in its native format on top of HVX and appears to be a block device, but under the covers the HVX file system is storing this information in object storage.  Aside from all this abstraction, HVX implements a copy-on-write file system, delaying the actual allocation of storage until it is absolutely needed – in the end we are left with the ability to take very fast snapshots of the images and applications we deploy, easily duplicating environments and allowing people like myself to “frequently mess things up”.

[Image: Ravello storage architecture]
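
Here’s a very simplified Python model of the copy-on-write idea – a guest-visible block device where reads fall through to the imported base image, writes are only allocated (as objects) when they first happen, and a snapshot is nothing more than copying a small block-to-object map.  A plain dict stands in for S3, and HVX’s real file system is obviously far more involved.

```python
# Simplified copy-on-write block device backed by an object store.
# A dict stands in for S3 here; the point is that the guest sees a block
# device while writes are deferred and stored as objects only when needed.

BLOCK_SIZE = 4096

class CowDisk:
    def __init__(self, object_store, base_prefix):
        self.store = object_store          # e.g. S3 in the real thing
        self.base = base_prefix            # imported image, read-only
        self.overlay = {}                  # block index -> object key

    def read(self, block):
        key = self.overlay.get(block, f"{self.base}/{block}")
        return self.store.get(key, b"\x00" * BLOCK_SIZE)  # unallocated = zeros

    def write(self, block, data):
        key = f"overlay/{id(self)}/{block}"
        self.store[key] = data             # allocate only on first write
        self.overlay[block] = key

    def snapshot(self):
        # a snapshot is just a copy of the (tiny) block -> object mapping
        return dict(self.overlay)

store = {"vm1-base/0": b"boot sector".ljust(BLOCK_SIZE, b"\x00")}
disk = CowDisk(store, "vm1-base")
disk.write(1, b"new data".ljust(BLOCK_SIZE, b"\x00"))
snap = disk.snapshot()                     # near-instant, no data copied
```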

The Ravello presentation at VFD5 was one of my favorites from a technology standpoint – they did a great job outlining just what it is they do, how they do it, and how they are choosing to deliver their solution.  There were some questions around performance that were met head on with a whiteboard, and overall it was a great couple of hours.  Certainly check out some of the other great community posts below centered around Ravello to get some more nested goodness…

Ravello has a great product which honestly completely blows my mind when I try to wrap my head around it – we have our VMs, running on ESXi, running on HVX, running on Xen, running on some piece of physical hardware inside an Amazon data center – attaching to both Amazon EBS and S3 – we are snapshotting these things, saving them as blueprints, redeploying to Google Cloud which completely flips the underlying storage and hypervisor!!  We’re exporting VMs out of our current vSphere environments and deploying them into the public cloud, complete with all of their underlying networking – already set up for you!  Ravello has coined their nested virtualization capabilities as Inception, and if you have ever seen the movie I’d say it certainly lives up to the name.  It has this magic about it – where you are in so deep yet still in control.  If you have a chance, check out their VFD5 videos and sign up for a free trial to check them out for yourself.

VMTurbo – allowing smart people to do smart things

Let’s face it, our environments now are way more complex than they were 10 years ago!  Although some tasks and components may be easier to work with and not quite as specialized, we have a lot of them – and they all need to work, in perfect harmony, together.  The problem with this is that at times we get a couple members of the choir who get a little out of key – CPU starts screaming, network gets chatty, and next thing you know we have an environment that’s screaming out of control, CPU starts shoving network, network starts drowning out memory, and to be quite honest, pretty much everyone in the choir at this point sounds like $@#!.

Although this scenario may sound a little far-fetched or a wee bit out there – I mean, CPU can’t sing, we all know that!  Either way you put it, any choir needs a conductor, a leader, someone who oversees the complete environment, instructing certain members to gear down and others to ramp up.  Last month in Boston at VFD5, VMTurbo showed us just how they can wave the baton when it comes to bringing together the components of enterprise IT.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

Having just seen VMTurbo at #VFD4 in Austin only 6 months prior I was skeptical as to what they would have to talk about in Boston, thinking it was mainly going to be the same presentation – I was wrong!  They could have easily filled another 4 hours talking about the new features they have embedded into the new release of their flagship product, Operations Manager 5.2.

MOAR APPLICATION CTRL!

Traditionally VMTurbo has gathered its market intelligence by polling and retrieving statistics and metrics from the hypervisor – while this is a good technique and is used by many monitoring solutions today, there are some applications which don’t necessarily work well in this scenario.  Applications which look after their own resource usage – think SQL, Java heaps, etc. – may not properly reflect their true usage at the hypervisor layer.  For this reason VMTurbo has released an Application Control Module (ACM), which completely integrates into their supply/demand model of monitoring.  To help put it into perspective let’s have a look at SQL – ACM essentially brings in statistics around transactions, response time, database memory usage, etc. – all items which are not available within the hypervisor itself.

[Image: VMTurbo application performance metrics]
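
To give a feel for the kind of application-level data we’re talking about, the sketch below pulls a few SQL Server counters (transactions, batch requests, server memory) straight from the database engine with pyodbc – numbers the hypervisor simply never sees.  This is just an illustration of the data, not how ACM collects it, and the connection string is hypothetical.

```python
# Rough sketch of pulling application-level SQL Server metrics that never
# show up at the hypervisor layer. This is not VMTurbo's ACM -- just an
# example of the kind of data it collects. Connection string is hypothetical.

import pyodbc

COUNTERS = ("Transactions/sec", "Batch Requests/sec", "Total Server Memory (KB)")

def sql_app_metrics(conn_str):
    query = """
        SELECT counter_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE RTRIM(counter_name) IN ({})
    """.format(",".join("?" * len(COUNTERS)))
    with pyodbc.connect(conn_str) as conn:
        rows = conn.cursor().execute(query, COUNTERS).fetchall()
    return {name.strip(): value for name, value in rows}

# metrics = sql_app_metrics("DRIVER={ODBC Driver 17 for SQL Server};"
#                           "SERVER=sql01;DATABASE=master;Trusted_Connection=yes")
```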

From here, VMTurbo users are able to define QoS policies, or SLAs, around their applications’ performance.  Think “I need x number of milliseconds of response time.”  VMTurbo then looks holistically at your environment – it knows about the infrastructure underneath the app and what resources are available – and it now knows how that application is configured, memory management and all.  With all of this knowledge VMTurbo can then configure your environment and your application to a desired state, one in which we know we are running efficiently while meeting those SLAs and QoS policies we have set up for the application!

MOAR NETWORK CTRL!

Aside from applications, VMTurbo has been busy with a few other cool benefits as well!  With the adoption of public and hybrid cloud on the rise, they’ve seen a need to introduce a lot of enhancements in terms of networking – for example, knowing the physical location of applications is key to placing “chatty” applications close to each other in order to reduce latency, while still maintaining their “desired state” in terms of CPU, memory and storage as well.  They do this by grouping chatty applications together in what they call a vPOD.  From there Operations Manager uses NetFlow to discover your physical switching configuration, and can work to ensure that vPODs are grouped together on the same top-of-rack switch or in the same public cloud region, etc., moving the entire vPOD if one application requires more resources.
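
As a rough illustration of the vPOD concept, the sketch below groups VMs into “chatty” sets based on observed flow volume – the kind of pairwise data NetFlow gives you.  The threshold and flows are invented, and this is nowhere near VMTurbo’s actual placement logic, but it shows the grouping idea.

```python
# Illustrative grouping of "chatty" VMs into vPOD-like sets based on
# observed flow volume (the sort of thing NetFlow data enables). This is a
# simplification, not VMTurbo's placement logic; the threshold is made up.

from collections import defaultdict

CHATTY_THRESHOLD_MB = 500   # assumption, purely for illustration

def build_vpods(flows):
    """flows: dict of (vm_a, vm_b) -> megabytes transferred."""
    graph = defaultdict(set)
    for (a, b), mb in flows.items():
        if mb >= CHATTY_THRESHOLD_MB:
            graph[a].add(b)
            graph[b].add(a)
    seen, vpods = set(), []
    for vm in graph:
        if vm in seen:
            continue
        group, stack = set(), [vm]
        while stack:                      # walk the connected component
            v = stack.pop()
            if v not in group:
                group.add(v)
                stack.extend(graph[v] - group)
        seen |= group
        vpods.append(group)
    return vpods

flows = {("web01", "app01"): 900, ("app01", "db01"): 1200, ("web01", "dns01"): 5}
print(build_vpods(flows))   # web01/app01/db01 form one vPOD; dns01 is not chatty
```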

MOAR STORAGE CTRL!

Just as VMTurbo has made steps to get more information out of the application stack, they are doing the same with storage!  By completely understanding the storage array underneath your infrastructure, OM is able to take action on overcoming storage issues around capacity and performance.  Think of things such as knowing whether to expand a current volume or deploy a new one!  OM understands almost everything there is to know about your infrastructure and applications, and can therefore make the best decision on how to meet the SLAs defined on those applications from a storage standpoint – one time it may make sense to simply grow a volume, while other times, due to other applications running on that same volume, it may be more efficient to create a new volume and migrate the application in question.

VMTurbo has certainly taken a unique approach to monitoring and resolving issues within your environment.  This whole economic market play – with supply/demand being applied to your infrastructure and applications – is different, but honestly makes sense when looking at resource utilization.  I like how Operations Manager has been built – this modular approach allows them to come out with new features such as the application and storage modules and simply plug them into the product, where they are simply inherited into the supply chain model and analytics can immediately be applied to them.  And as of now you can run it all as a SaaS offering on AWS!

If you want to watch the VMTurbo videos yourself you can do so here – or check out my complete VFD5 page here.  Also, we have had some other great community posts around what VMTurbo spoke about – be sure to check out each of them below as each delegate seemed to write about a different part of their presentation…

Operations Manager can certainly do some amazing things, allowing you to automate things such as moving an application to the cloud based on its supply/demand analytics – which at first sounds a bit scary – but hey, it wasn’t that long ago that people were wary of enabling DRS, right?!?

Rubrik brought Apple simplicity and Google scale together, you’ll never guess what happens next…

Finally I have figured out a way to incorporate techniques from the vast amounts of click-bait on my Facebook into this blog!  Did it work?

In all seriousness though, Rubrik, who recently presented at Virtualization Field Day 5 in Boston, has brought together both the simplicity of Apple and the scale of Google – and what happened next was a scalable, converged backup and recovery solution containing both hardware and software.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

Now, all the time I hear people moan about backup solutions, shrug them off and say they are boring – I for one am not one of those people – I find the backup and recovery space very interesting, and honestly there has been a ton of innovation in this space over the last 10 years that has made backup solutions more than just a simple copy of your data – it’s been quite fun to watch…but backup and recovery has a lot of players in the game right now, so what really can Rubrik bring to the table?

Rubrik – Time machine for Cloud Infrastructure

The biggest and most prominent difference that sets Rubrik apart is that they are one of really only a few vendors that ship their backup solution as a physical appliance.  While most backup vendors ship a software solution and leave it up to the customer to decide what to use for their backup storage, Rubrik is a scale-out solution that comes with both the software and hardware.  Their first shipping product is the R300, supporting roughly 200 VMs with around 100TB of storage available.

[Image: Rubrik appliance]

With control of the storage as well as the software, Rubrik is able to leverage its various storage tiers in order to make both the backup and recovery process more efficient.  Take Instant VM Recovery for example – although this is not a new feature, and we’ve seen a lot of backup solutions integrate the ability to simply power on their VMs directly from a backup file, Rubrik can go one step further by granting flash resources to the instantly restored VM as they see fit – essentially turning your backup appliance into a tier 2 storage array, instantly restoring a new copy of your VM – allowing you to run the VM on the Rubrik appliance with full storage resources until you see a fitting time to migrate it back to your production array.  During the backup process Rubrik is also able to ingest the initial bits into their flash tier, allowing a much quicker response back to vCenter and keeping the amount of time snapshots are open very minimal.  From there data is deduplicated and compressed inline before being written down to another tier of disks or even into the cloud.
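
Inline deduplication and compression are well-worn techniques, and a bare-bones version looks something like the Python below – chunks keyed by their hash so identical data is stored once, compressed on the way in.  This is a generic illustration of the technique, not Rubrik’s implementation.

```python
# Bare-bones inline deduplication and compression, the general technique
# described above (a generic illustration, not Rubrik's implementation).
# Identical chunks are stored once, keyed by their hash, and compressed.

import hashlib
import zlib

CHUNK_SIZE = 64 * 1024

class ChunkStore:
    def __init__(self):
        self.chunks = {}            # sha256 digest -> compressed bytes

    def ingest(self, stream):
        """Returns the list of chunk ids making up this backup."""
        recipe = []
        for i in range(0, len(stream), CHUNK_SIZE):
            chunk = stream[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:           # dedupe happens here
                self.chunks[digest] = zlib.compress(chunk)
            recipe.append(digest)
        return recipe

    def restore(self, recipe):
        return b"".join(zlib.decompress(self.chunks[d]) for d in recipe)

store = ChunkStore()
backup1 = store.ingest(b"A" * 200_000)        # mostly identical chunks
backup2 = store.ingest(b"A" * 200_000)        # stores almost nothing new
assert store.restore(backup1) == b"A" * 200_000
```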

Wait did you say cloud?

Another nice feature that Rubrik has built in is Amazon S3 integration.  Essentially, when you set policies on a per-VM basis you not only specify the number of restore points you want on disk, but you can also state that data which is x number of years old be maintained inside of Amazon.  This provides customers with somewhat of an archiving solution, and saves money on needing to purchase additional local storage for these purposes.  When it comes time to restore data from Amazon, Rubrik has technology built in to simply pull individual files down from Amazon, without having to pull down the entire VM or VMDK – certainly saving an organization a bit of money in transfer costs if you only need to get at individual files.
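
The two halves of that – pushing aged restore points up to S3 and pulling back just the bytes for a single file – map nicely onto plain S3 operations, as the boto3 sketch below shows.  The bucket, keys and offsets are hypothetical, and Rubrik does all of this under the covers for you.

```python
# Sketch of archiving a restore point to S3 and restoring a single file via
# a ranged GET, instead of pulling down the whole VMDK. Bucket, keys and
# offsets are hypothetical; the product handles this internally.

import boto3

s3 = boto3.client("s3")
BUCKET = "example-backup-archive"     # assumption

def archive_restore_point(local_path, key):
    s3.upload_file(local_path, BUCKET, key)

def restore_single_file(key, offset, length, out_path):
    # Ranged GET: only the bytes for the file we want come back over the wire
    resp = s3.get_object(Bucket=BUCKET, Key=key,
                         Range=f"bytes={offset}-{offset + length - 1}")
    with open(out_path, "wb") as f:
        f.write(resp["Body"].read())

# archive_restore_point("/backups/vm01-2014.img", "vm01/2014/full.img")
# restore_single_file("vm01/2014/full.img", offset=10_485_760,
#                     length=4_096, out_path="recovered-file.bin")
```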

If you are looking for more delegate and community opinions in regards to Rubrik and the technology they are creating I’d recommend the following…

Rubrik definitely has some awesome technology built into their 1.0 product, which is heavily focused around their metadata foundation.  Having control over both the hardware and software has certainly given them many advantages: fast search, quick recovery and more control over how the backup data gets laid out on their tiered storage platform.  They do have some areas where they are behind, things like object-level recovery of emails, Active Directory objects, databases, etc., but to me the real key to Rubrik’s success will hinge on pricing – they told us it’s not going to break the bank – but honestly, a lot of the time backup solutions get thrown down to a second-tier budget line and are not in the forefront of strategic IT decisions – so having to drop a large sum of money on an insurance policy may not be in the cards for some IT shops.  That said, it’s 1.0 and they have done a lot in the year and a bit they have been in existence – including adding some really smart and well known people to their staff.  Rubrik is certainly a company to keep an eye on in the coming year as they have built a solid, efficient foundation on which they have layered their backup solution – so far they have “taken the backup out of recovery” and I can’t wait to see what they do next…

You can see more of my Virtualization Field Day 5 coverage here, as well as all of the recorded streams here.

Removing disaster recovery from your business with OneCloud

Let’s face it!  We all have enough to worry about with our production environments!  We’ve got servers, storage, switches, fabrics, subnets, hypervisors, cabling – ugh, cabling!  Anyways, the point is that having to worry about all of the complexity and components within our production data center is enough to drive any IT pro crazy – and yet we are still asked to magically duplicate all these worries in a secondary location for disaster recovery purposes – in the words of my 6yo, “no fair”.

Sure, we’ve seen virtualization come along and make our lives easier – and in turn it’s made disaster recovery easier as well.  Encapsulating our physical servers into a group of files and abstracting away the hardware – aka a VM – has opened up so many doors for us to simply pick up and move that VM elsewhere.  The problem with DR is that there is still so much more to consider – things like what do our networks look like in our secondary site, do our VMs get re-IP’d during a fail-over, and one of the most important aspects – when we fail over, how do we ever get back?  Wouldn’t it just be nice to simply take the worries of disaster recovery right out of your business?

OneCloud is all it takes!

OneCloud Software took the stage at VFD5 in Boston this June and began their pitch on just how they can solve the disaster recovery complexities inside organizations today.  Disaster recovery is one of the biggest use-cases for cloud today – lately we have seen VMware make an entrance into the space, Veeam has given their partners the option to become a cloud service provider for their solutions, and most disaster recovery solutions provide some sort of cloud-like integration.  So how does OneCloud plan to enter this market, and what exactly can they bring to separate themselves and make them unique?  Their stance – hey, DR is complex, cloud is challenging – OneCloud can abstract away the complexities of both, turning the public cloud into a secure extension of your data center.  At its core, their first product, OneCloud Recovery, ties together your on premises VMware environment with Amazon AWS, and then replicates data as it changes, essentially duplicating your environment in Amazon.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

So with that in mind let’s take a deeper look at just how OneCloud Recovery works…

First up is planning – as with any project, planning is a very important phase and OneCloud does not take this lightly.  Their solution relies upon a lightweight tool they call OneCloud Software Insight.  Insight is a tool that installs within your on premises VMware environment – from there it discovers and maps out your infrastructure, including your VMs, storage and network.  It then determines which VMs are eligible for their DR solution, allows you to change the policies applied, assigning greater or lesser RPOs on a per-VM basis, and uses that data to estimate costs within Amazon EC2.  Another feature of Insight I found interesting is that it also takes into account the amount of bandwidth required compared to the amount of bandwidth available in order to get your data into the cloud while still maintaining the SLAs and RPOs you have set up within the application – this is something that I’ve not seen in a lot of other assessment/cloud-readiness type tools.  At the end you are left with a very detailed cost breakdown of what your environment might cost to run in Amazon using OneCloud Recovery, taking into account all of the individual Amazon costs (EC2, S3, Glacier, etc.) as well as the licensing costs of OneCloud Recovery.  Honestly, this tool by itself provides some unique value and data to a customer, even if they don’t plan to leverage OneCloud Recovery.
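
The bandwidth check is really just arithmetic – how much data changes per RPO window versus what the pipe can move in that window – and a back-of-the-envelope version looks like the snippet below.  The change rates and link size are invented; Insight derives the real numbers from its discovery data.

```python
# Back-of-the-envelope version of the bandwidth check described above:
# given each VM's change rate and its target RPO, is the WAN link big enough?
# The numbers are invented for illustration.

def required_mbps(changed_gb_per_rpo, rpo_hours):
    bits = changed_gb_per_rpo * 8 * 1024**3
    return bits / (rpo_hours * 3600) / 1_000_000

# VM -> (GB changed per RPO window, RPO in hours)
vms = {"sql01": (12, 1), "file01": (40, 4), "web01": (2, 8)}

total = sum(required_mbps(gb, rpo) for gb, rpo in vms.values())
available_mbps = 100
print(f"need ~{total:.0f} Mbps, have {available_mbps} Mbps:",
      "RPOs achievable" if total <= available_mbps else "RPOs at risk")
```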

On to the magic..

With planning out of the way, purchase orders issued, and the trigger pulled on OneCloud Recovery, the magic can now begin – but it’s not magic, it’s tech – and this is how it all works…

First up is installation – OneCloud Recovery is installed by simply importing an OVA into your existing vSphere environment and providing two sets of credentials: your vSphere and AWS credentials/keys.  So far so good, eh?  From there automation kicks in: your environment is discovered, mapped out and blueprinted, and what they call the bootstrap process begins.  This process first creates a virtual private cloud within Amazon, and then deploys a couple of appliances to both your on premises and Amazon environments.  First, the management appliance is deployed, which essentially allows you to manage your OneCloud Recovery environment – this can be done from either on premises or within Amazon AWS since its configuration and data are replicated to both sites.  Second, we have worker appliances which are deployed at both ends – these are the heavy lifters of the solution which do all the moving and transfer of data, fail-over/fail-back, etc.  At the end of roughly 10 minutes or so, your complete virtual private cloud is built within Amazon and completely bridged to your on premises environment over a secure VPN tunnel that was established between the two sites – all automated!
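
For a sense of what that bootstrap automation is doing on the Amazon side, here’s a stripped-down boto3 gist – carve out a VPC, drop a subnet in it, launch a worker.  OneCloud’s process does considerably more (VPN, management appliance, blueprinting), and the AMI id and CIDRs here are placeholders.

```python
# Stripped-down gist of the kind of bootstrap the product automates: a VPC,
# a subnet, and a worker appliance instance. Not OneCloud's code; the AMI id
# and CIDR blocks are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc = ec2.create_vpc(CidrBlock="10.50.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.50.1.0/24")["Subnet"]

worker = ec2.run_instances(
    ImageId="ami-xxxxxxxx",              # placeholder worker appliance AMI
    InstanceType="m3.medium",
    MinCount=1, MaxCount=1,
    SubnetId=subnet["SubnetId"],
)["Instances"][0]

print("worker appliance launched:", worker["InstanceId"])
```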

Next we create our protection groups.  A protection group is essentially a policy that defines certain SLAs/RPOs, etc.  Think of it in terms of a job that runs every 1 hour, every 2 hours, every 4 hours, etc.  Then we simply drag and drop our VMs into their designated protection groups depending on the RPO we wish to assign to the particular application.  And as you can see, the UI associated with OneCloud Recovery is very clean and looks like a joy to work with.

[Image: OneCloud Recovery UI]

With our VMs now assigned to a protection group the technology kicks into high gear.  The protection process begins with the OneCloud Recovery worker first snapshotting the VM in order to free up its underlying disks.  It then converts the data into their own highly compressed format and stores it locally on some tier 2/3 storage on-site, finally replicating the data to AWS – obviously performing a full seed first and subsequently leveraging vSphere’s Changed Block Tracking from that point forward.
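
Conceptually the protection cycle is “full seed once, then only the changed blocks”, which is the role Changed Block Tracking plays.  The toy Python below shows that shape – it’s not OneCloud’s code, just the pattern.

```python
# Conceptual full-seed-then-incremental replication driven by a changed-block
# map (the role CBT plays in vSphere). A toy illustration of the pattern only.

import zlib

class ToyDisk:
    def __init__(self, data):
        self.data, self.size = data, len(data)
    def read(self, offset, length):
        return self.data[offset:offset + length]

class Replicator:
    def __init__(self, disk, block_size=4096):
        self.disk, self.block_size, self.seeded = disk, block_size, False

    def cycle(self, changed_blocks, send):
        if not self.seeded:                      # first run: full seed
            blocks = range(self.disk.size // self.block_size)
            self.seeded = True
        else:                                    # later runs: deltas only
            blocks = sorted(changed_blocks)
        for b in blocks:
            data = self.disk.read(b * self.block_size, self.block_size)
            send(b, zlib.compress(data))         # stand-in for the worker's
                                                 # compressed transfer to AWS

sent = []
rep = Replicator(ToyDisk(b"\x00" * 4096 * 8))
rep.cycle(set(), lambda b, d: sent.append(b))    # seeds all 8 blocks
rep.cycle({3, 5}, lambda b, d: sent.append(b))   # replicates only blocks 3, 5
```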

As far as fail-over options go, like many others OneCloud Recovery provides both test and live fail-overs.  They do however offer a bit of uniqueness in this process.  We all know that compute in the cloud costs money, right?  Imagine we had an 8 vCPU, 32GB RAM production VM replicated into AWS – do you think we really need all that compute and memory during a fail-over test?  Probably not!  OneCloud Recovery recognizes this and allows you to power on your VMs inside of AWS in an undersized fashion!  I mean, that 8 vCPU/32GB RAM VM may function fine with 2 vCPU/16GB of RAM with no load during a test, thus saving you money!

When it’s time to fail over for real, OneCloud Recovery also has some unique features!  While most disaster recovery solutions halt their protection during a fail-over action, OneCloud Recovery actually continues to provide you with options to protect your VMs, even while they are running in AWS.  They do so by leveraging Amazon EBS snapshots and protecting the data within your AWS region.

As far as fail-back is concerned, OneCloud Recovery has some unique features as well.  When a primary data center comes back online, OneCloud Recovery is able to determine whether any existing data is still intact.  If it is, OneCloud Recovery can perform its fail-back in a delta fashion, moving only those changes that were made in AWS during the fail-over back to your primary data center, rather than having to extract entire VMs out of AWS, which could be both costly and, more importantly, time consuming.

More to come from OneCloud?

In terms of OneCloud Recovery we are looking at a 1.0 product, meaning there is a lot of development left to be done on the solution.  My personal requests – I’d love to see the software get a little more aggressive and customizable in terms of its RPOs.  Currently RPOs are set in stone at 1, 2, 4, 8, 12 and 24 hours.  Aggressively, I’d like to see some smaller numbers here – 1 hour of lost data could equate to quite a lot of money for a lot of enterprises.  Also, I’d love to see some integration into clouds other than just Amazon – think Google for example, as there could potentially be some cost savings for customers there.

Looking at OneCloud Software as a company, however, I don’t think that OneCloud Recovery is going to be their only product.  They have a solid core technology that maps, discovers, and blueprints our on premises environments, then duplicates that to Amazon.  I don’t see this only being used for disaster recovery.  I had a chat with Marc Crespi, CEO and Co-Founder of OneCloud, and ran this past him – he didn’t confirm or deny any of it – but if I had to guess, I can definitely see them exploring other areas in the future – think DevOps, migrations, hybrid cloud, etc.  All areas that OneCloud Software’s core blueprint technology may be a good fit for.  This is all speculation on my part – but still, watch these guys – they are on to something…

If you are looking for more info around OneCloud Recovery I definitely recommend checking out some of the other great community posts resulting from VFD5

OneCloud mentioned that they waited to go to market with OneCloud Recovery until they had met three internal requirements – it must be simple, it must be cost optimized and it must be a complete solution.  Honestly, I think they have accomplished those goals with OneCloud Recovery.  The automation that has been built into the product, coupled with a very clean UI, takes the cake in terms of simplicity – fewer knobs to turn = more simplicity!  Cost optimization – well, the Insight tool definitely gives you a great understanding of all the costs involved in using OneCloud Recovery, even taking into account that they can down-size VMs when running in Amazon.  And as for a complete solution, they have for sure achieved what they set out to do – establish the public cloud as an extension of your data center for disaster recovery purposes.

PernixData – transforming to a new arc…

Nearing the end of their Virtualization Field Day 5 presentation, Satyam Vaghani, CTO and co-founder of PernixData, touched on something dubbed “The arc of company life” – basically explaining how a company’s journey involves an arc, moving from startup, through growth, till finally peaking and then beginning a decline.  Certainly all of these stages sound ideal for a company, minus the decline phase – but how does a company avoid this decline?

Transformation fuels growth

Satyam described this avoidance as transformation.  By creating a new arc, spawning off the original arc, companies can transform and avoid the dreaded decline phase – essentially performing a pivot and focusing on something new, something useful, something different.

[Image: a new arc spawning off the original arc of company life]

There has been a lot of talk in regards to disruption in the last 10 or so years – VMware has come along and disrupted our data centers – which is a good thing, change is good – I think we can all agree on that – but Satyam says that in order for PernixData to cause disruption in our data centers they must first focus on technologies and solutions that will disrupt PernixData as a company – force themselves to look beyond server-side cache, beyond providing just performance to our VMs, force themselves to innovate and at the same time to transform and begin their journey on a new arc…

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.


A new hill to climb

So what is this new innovation, this pivot if you will, that PernixData has decided to take?  Satyam took us on a journey explaining just how PernixData is trying to shift, gave us a few examples of some new products and solutions that PernixData is now focusing on, and showed how some enhancements to their core solution, FVP, are fuelling their transformation to grow…

PernixData Architect

Let’s start with PernixData Architect – a new monitoring solution for your data center which, as you can probably imagine, focuses heavily on storage.  The interface for PernixData Architect looks smooth, implementing an SSO solution between FVP and other soon-to-be-released applications from PernixData (announced below).  PernixData Architect goes deeper than simply reporting on those famous virtualization storage parameters such as DAVG, KAVG, and QAVG – think more in terms of statistics that we don’t see that often, but that certainly do matter.  Sitting on top of the kernel, PernixData Architect does indeed report on latency and IOPS statistics, however it also presents us with metrics around commonly used block sizes, read/write ratios, active working sets, etc.  What can we do with all this data?  Well, aside from tuning and sizing our storage infrastructure, it’s the perfect data for configuring and setting up FVP (imagine that).

[Image: PernixData Architect interface]
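
Metrics like read/write ratio and the most common block sizes are easy to picture if you imagine them computed from an I/O trace, as in the snippet below – Architect of course gathers this in the kernel in real time, and the trace here is invented.

```python
# The sort of derived storage metrics described above, computed here from a
# made-up I/O trace: read/write ratio and the most common block sizes.
# Purely illustrative; the product gathers this in the kernel, not from a list.

from collections import Counter

trace = [                      # (op, block_size_bytes) -- invented sample
    ("read", 4096), ("read", 8192), ("write", 4096),
    ("read", 4096), ("write", 65536), ("read", 4096),
]

reads = sum(1 for op, _ in trace if op == "read")
writes = len(trace) - reads
block_histogram = Counter(size for _, size in trace)

print(f"read/write ratio: {reads}:{writes}")
print("most common block sizes:", block_histogram.most_common(3))
```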

PernixData Cloud

So, PernixData is gathering all this data as it pertains to storage and giving us visibility into our own data centers, but what if we wanted to take that data and compare it to others?  PernixData Cloud gives us the ability to do just that.  I like to think of PernixData Cloud as a graphical representation of the hallway talks that take place at conferences.  We’ve all had those experiences where we are standing around at a conference asking our peers questions like “What’s your max IOPS?”  “What type of latency are you seeing?”  “Oh, you’re running an EMC VNX, so am I – what type of maximum throughput do you see?”  PernixData takes these types of questions and places them (and the answers) into an interface familiar from Architect and FVP.  Imagine being able to see what the most common type of SSD drive being utilized is.  Or what type of storage array the industry is using behind FVP.  All the while comparing your environment to the average of similar environments in terms of latency, IOPS, etc.  PernixData Cloud Insights can certainly be used to help influence purchasing decisions, drive adoption, and gauge how you stack up against the rest of the world.  Again, taking those hallway conversations and bringing them into your data center, with a more accurate and exact answer (basically, no lies).

[Image: PernixData Cloud comparison view]

That said, while data center comparison solutions such as PernixData Cloud Insights are extremely helpful, they aren’t much good until they have a vast, wide range of data from all verticals – and getting this data can sometimes be a major challenge.  I mean, you can ask customers for data all day long but unless something is in it for them, you will usually come up short.

Enter PernixData FVP Freedom

PernixData FVP Freedom solves the “What’s in it for me?” problem when providing data for Cloud Insights, but before we get into how it does that let’s take a quick look at a few of the benefits of FVP.  If you are looking for a great overview of PernixData FVP I’d definitely check out Peter Chang’s blog around FVP 2.0.  For those looking for a quick description of what FVP is, we can look no further than Satyam’s words: “Making previously impossible storage systems possible”.  They do this by accelerating read and write activity to your storage array, caching to either SSD storage or RAM locally on each host.  The cache-to-memory feature is interesting to say the least – with more and more vendors shipping servers with ungodly amounts of RAM it only makes sense to take advantage of this and utilize it to its fullest potential.  FVP allows you to do this by utilizing their Distributed Fault Tolerant Memory (DFTM) solution.  By aggregating RAM resources from all your hosts and synchronously replicating to at the very least two hosts within your cluster, you are left with the performance of memory without the risk that comes with its volatility, ensuring you get both efficiency and availability for all of your accelerated workloads.
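
A greatly simplified model of that write path looks like the Python below: a write is acknowledged only once it sits in local RAM and in at least one peer host’s RAM, so an acknowledged write survives the loss of any single host.  This is my own toy model, not PernixData’s code.

```python
# Greatly simplified model of the DFTM idea: a write is acknowledged only
# after it sits in local RAM *and* in at least one peer host's RAM, so losing
# a single host doesn't lose acknowledged writes. A toy model only.

class HostCache:
    def __init__(self, name):
        self.name, self.ram = name, {}

    def store(self, key, data):
        self.ram[key] = data

class DftmWriteBack:
    def __init__(self, local, peers, replicas=1):
        self.local, self.peers, self.replicas = local, peers, replicas

    def write(self, key, data):
        self.local.store(key, data)
        for peer in self.peers[: self.replicas]:   # synchronous peer copies
            peer.store(key, data)
        return "ack"                               # only now ack the VM;
        # the backing array is updated later, asynchronously (write-back)

hosts = [HostCache(f"esx0{i}") for i in range(1, 4)]
fvp = DftmWriteBack(hosts[0], hosts[1:], replicas=1)
fvp.write("vm1-block-42", b"hot data")
assert "vm1-block-42" in hosts[1].ram              # survives loss of esx01
```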

Now let’s get back to the data seeding issue within PernixData Cloud – if you haven’t already guessed by the newly announced product name (FVP Freedom), PernixData is creating a new version of their FVP software for the low low price of free!  Now there will be a few caveats with the release – technology limitations allow you to only use one DFTM cluster, which is limited to 128GB of write-through cache only, but in essence you can accelerate unlimited applications, VMs and hosts with what you have to work with.  Outside of the technology there is one more catch: any PernixData Freedom implementation will upload statistics into PernixData Cloud – this is how they solve the issue of seeding the PernixData Cloud environment.  A trade for your environmental data – which will of course be aggregated and anonymized.

Movin’ on up!

All in all I like the announcements coming from PernixData out of VFD5!  And I’m not alone with this feeling – there are a number of other posts from some great bloggers listed below, definitely check them out.

Offering up FVP in a freemium-type model can only help adoption – getting the application into more data centers and more potential customers’ hands is a good thing.  And it’s not like you are that limited with 128GB of memory either – the example PernixData gave us during the presentation revolved around LoginVSI testing.  They went from 181 VSImax users to 328 after implementing a 128GB DFTM cluster across 2 hosts – basically cutting their per-desktop VDI storage costs in half.  Needless to say this is a pretty big benefit for budget-constrained companies that might not mind sharing their environment data with PernixData Cloud users in exchange for free software.  If you’re interested you can sign up for FVP Freedom here.  The other new applications (Architect and Cloud Insights) were a surprise to me – it’s not like they are doing anything new here, there is tons of monitoring software out there, but they are reporting on and using the data differently than most of the others.  Sometimes having a unique spin on the data that you report on is enough to raise eyebrows – and I have no doubt that PernixData is on to something here, if not a solid base for something more to come!  However it plays out, they certainly took another path on the arc and are on the up and up yet again!

 

#VFD5 Preview – VMTurbo

Once again it looks like I’m going to have to get on a plane and travel to the great US of A in order to see my fellow Toronto VMUG co-leader Eric Wright, who lives within a couple hours of where I’m sitting right now!  But that’s ok, because Eric will be bringing with him the VMTurbo Virtualization Field Day 5 presentation in Boston!  For those that know Eric or have heard him speak, you will know what I mean – he certainly has a way of keeping the audience interested and getting his point across – a couple great qualities to have when speaking…

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

Anyways, it feels like we just got done having a look at VMTurbo during VFD4 in Austin and here they are right back in front of us at VFD5 in Boston.  And a lot has changed since January with both the company and their flagship product Operations Manager – they’ve kicked their TurboFest user groups into high gear, hosting meetings in San Francisco, London and Atlanta, they were named one of the best places to work by the Boston Business Journal, and Operations Manager 5.2 was released, bringing features such as QoS adherence, more support at the application level for MS SQL and Oracle, integration with Arista Networks to help make more “network aware” decisions and, of course, the complete package now delivered through a SaaS offering on Amazon AWS.  So, yeah, they’ve been busy!

An economic look at your data center

If you haven’t had a look at Operations Manager you probably should.  VMTurbo takes a unique approach as it pertains to monitoring and tuning your environment to ensure you get to what they like to call “Data Center Nirvana”.  Essentially they take an economic model and apply it to your infrastructure – turning your data center into a supply chain.  By treating your resources – things like CPU, memory, disk, etc. – as suppliers and your VMs as consumers, VMTurbo is able to apply economic formulas to your infrastructure, increasing the cost of resources when supply is scarce, and decreasing it when supply is plentiful.  By doing so, Operations Manager is able to determine that while migrating a VM may make sense at eye level, costs may be too high on the other host, and so recommend leaving it be.  It’s an interesting way of looking at things and makes a lot of sense to me…
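
To make the economics concrete, here’s a toy version in Python: a resource’s “price” climbs sharply as it approaches full utilization, and a migration is only recommended if the destination would be cheaper once the VM lands there.  The pricing function is my own stand-in, not VMTurbo’s analytics.

```python
# Toy version of the supply/demand idea: a resource gets more "expensive" as
# it gets scarcer, and a move only makes sense if it is cheaper overall on
# the destination. The pricing function is a stand-in, not VMTurbo's.

def price(utilization):
    """Price rises sharply as a resource approaches full utilization."""
    return 1.0 / max(1e-6, 1.0 - utilization)

def host_cost(cpu_util, mem_util):
    return price(cpu_util) + price(mem_util)

def should_move(vm, src, dst):
    """vm/src/dst are dicts of fractional cpu/mem figures."""
    cost_here = host_cost(src["cpu"], src["mem"])
    cost_there = host_cost(dst["cpu"] + vm["cpu"], dst["mem"] + vm["mem"])
    return cost_there < cost_here

vm = {"cpu": 0.10, "mem": 0.05}
src = {"cpu": 0.85, "mem": 0.60}           # busy host: resources are "pricey"
dst = {"cpu": 0.40, "mem": 0.50}
print("recommend migration" if should_move(vm, src, dst) else "leave it be")
```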

Now there is certainly a lot more to what Operations Manager does and I encourage you all to tune into VFD5 to learn all of it.  You can do so by heading over to the VFD5 page and watching the live stream, as well as keeping up to date with all my content here.  VMTurbo is a fast growing company with a unique idea, so I’m sure they will have something mind-blowing for us come next Wednesday when they kick off all that is VFD5!