Tag Archives: Tech Field Day

Cohesity bringing secondary storage to Tech Field Day

Cohesity is next up in my flurry of Tech Field Day 12 previews with their secondary storage play.  I just recently got to see Cohesity present as a sponsor at our Toronto VMUG, which took place at the legendary Hockey Hall of Fame – so I guess you could say that Cohesity is the only vendor I've seen present in the same building as the Stanley Cup.  Okay, I'll try to get the Canadian out of me here and continue on with the post…

Disclaimer: As a Tech Field Day 12 delegate, all of my flight, travel, accommodations, eats, and drinks are paid for. However, I did not receive any compensation, nor am I required to write anything regarding the event or the presenting companies. All that said, this is done at my own discretion.

Who is Cohesity?

Cohesity was founded in 2013 (I'm detecting somewhat of a Tech Field Day 12 pattern here) by Mohit Aron, former CTO and co-founder of Nutanix.  You can certainly see Mohit's previous experience at Google and Nutanix shining through in Cohesity's offering – complete visibility into an organization's "dark data" on their secondary storage appliance.

Secondary Storage?

Cohesity's appliance doesn't claim to be a primary storage array – they aim at the secondary storage market.  Think of non-mission-critical data – data such as backups, file shares, and test/dev copies.  All of this data is a perfect fit for a Cohesity appliance.  How this data gets there, and what we can do with it, all lies within Cohesity's DataProtect and DataPlatform software!

DataProtect and DataPlatform

For the most part, the onboarding of all this data onto their appliance is done through backups – Cohesity's DataProtect platform, to be more specific.  DataProtect seamlessly integrates into your vSphere environment and begins to back up your infrastructure using a set of predefined and custom policies, or SLAs if you will.  Policies are set up to define things such as RPO – how often we want to back up – as well as retention policies for archival – say, backups over 30 days old shall be archived to Azure/Amazon/Google.
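To make the policy idea concrete, here's a minimal sketch of what an RPO-plus-archival policy boils down to. The field names and thresholds are my own illustration, not Cohesity's actual API:

```python
from datetime import datetime, timedelta

# Hypothetical protection policy: back up every 4 hours, archive to cloud
# after 30 days. Purely illustrative names, not Cohesity's schema.
policy = {
    "name": "gold",
    "rpo": timedelta(hours=4),
    "archive_after": timedelta(days=30),
}

def backup_due(last_backup: datetime, now: datetime) -> bool:
    """A new backup is due once the RPO window has elapsed."""
    return now - last_backup >= policy["rpo"]

def should_archive(backup_taken: datetime, now: datetime) -> bool:
    """Backups older than the retention threshold get tiered to cloud."""
    return now - backup_taken >= policy["archive_after"]

now = datetime(2016, 11, 1, 12, 0)
print(backup_due(datetime(2016, 11, 1, 7, 0), now))   # 5h since last run -> True
print(should_archive(datetime(2016, 9, 1), now))      # ~2 months old -> True
```

The appeal of this model is that the schedule falls out of the SLA: you declare the RPO and retention once, and the engine decides when to run.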

Once the data resides within Cohesity's appliance, another technology, DataPlatform, takes over.  DataPlatform provides a Google-esque search across all the data, be it on-premises or archived to cloud.  Here is where we can do some risk management, searching for patterns such as credit card or social insurance numbers.  DataPlatform also allows us to leverage our backups for items such as test/dev, creating a complete copy of our environments very quickly – isolated from our actual production networks.


With the release of 3.0, we have also seen physical Windows and Linux support added to the platform – so just as we protect our VMs, we can protect our physical servers, along with applications such as SQL/Exchange/SharePoint that are running on them.

With a Best of VMworld 2016 award under their belts I'm pretty excited to go deeper into Cohesity – and expect to hear a lot more as to what their next steps might be!  Stay up to date on Cohesity and all things mwpreston/Tech Field Day by watching my page here – and see all there is to know about Tech Field Day 12 on the main landing page here!  Thanks for reading and see yah in November 🙂

DellEMC will make its first appearance at a Field Day event since the merger!

Next in the long list of previews for Tech Field Day 12 is DellEMC – you know, that small company previously known as EMC that provides a slew of products primarily based on storage, backup, cloud, and security.  Yeah, well, apparently 67 billion dollars and the largest acquisition in the tech industry ever lets you throw Dell in front of their name 🙂  November 16th will be DellEMC's first Tech Field Day presentation under the actual DellEMC name – split out, we have seen Dell at 7 events and EMC at 5.  So let's call this their first rather than combining them both for that dreaded number 13…

Disclaimer: As a Tech Field Day 12 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I did not receive any compensation nor am I required to write anything in regards to the event or the presenting companies. All that said, this is done at my own discretion.

We all got a look at just what these two companies look like combined, as the newly minted DellEMC World just wrapped up!  We saw a number of announcements around how things will play out now that these two companies are sharing the same playground – summarized as best I can as follows…

  • Hyper-converged – Big announcements around how PowerEdge servers will now be a flavor of choice for VxRail deployments.  Certainly this brings an element of choice, in terms of the customization of performance and capacity provided by Dell, to the hyper-converged solution once provided by EMC.  The same goes for the rail's big brother, VxRack.
  • DataDomain – the former EMC backup storage solution will also be available on DellEMC PowerEdge servers.  What was once a hardware appliance is now a piece of software bundled on top of your favourite PowerEdge servers.  On top of that, there are some updates allowing data to be archived to cloud, plus multi-tenancy for service providers.
  • Updates to the Isilon series, including a new all-flash version being added to the scale-out NAS system.


Dell has not been shy as of late about making BIG moves – going private, then buying out EMC.  Certainly this transition is far from over – there is a lot of work that still has to take place in order to really merge the two companies together.  From the outside, things appear on the upside (except for the fact that I'm getting a ton of calls from both companies looking to explain everything now); however, there are still many unanswered questions as to what will happen with overlapping product lines…  From the inside I can't really say – I have no idea – all I know is I'm sure it's not an easy thing for anyone when you take 70,000 EMC employees and throw them in with Dell's 100,000+ – there will definitely be some growing pains there…

Only time will tell how DellEMC changes the story, if at all, at Tech Field Day 12.  DellEMC is up first thing on November 16th – follow along with the live-stream, keep up with all things mwpreston @ Tech Field Day 12 here, and stay tuned to the official landing page for more info!  This is destined to be a good one!  Thanks for reading!

Intel to take the stage at Tech Field Day!

Intel?  Who?  Never heard of them!  I always find the mix of presenting companies that Gestalt IT gets for their field day events interesting – a lot may think it's just for startups trying to get their name out, but with Intel, the 40+ year old tech giant, involved I think we can say that's pretty much debunked!  And this isn't their first either – Intel has presented at 3 Storage Field Day events and a couple of Networking Field Day events as well!  So you can say they are well versed in the format…

It's kind of hard to do a preview post for Intel as they have been around for so long and have their hands in so many parts of the datacenter – I mean, they could talk about so many things.  Aside from the well-known processors, they could talk about SSDs, chipsets, caching, networking – pretty much anything and everything.  Since Virtualization Field Day has been renamed to Tech Field Day we can expect any of this, or anything else, from Intel.

Disclaimer: As a Tech Field Day 12 delegate, all of my flight, travel, accommodations, eats, and drinks are paid for. However, I did not receive any compensation, nor am I required to write anything regarding the event or the presenting companies. All that said, this is done at my own discretion.

With that said, let's just have a look at the history of Intel rather than guess what they may talk about, as I'm always interested in how companies form – especially those that were there at the very beginning of this crazy IT world we live in now.  I always picture some kind of scene from Mad Men or Halt and Catch Fire! 🙂

My Coles Notes of Wikipedia 😉

So yeah, Intel, founded in 1968 by Gordon Moore and Robert Noyce.  Initially, when selecting a name, the combination Moore-Noyce was quickly rejected for sounding too much like "more noise" 🙂 – instead Intel, short for Integrated Electronics, was chosen – and after paying a hotel chain which had the rights to the name a whopping $15,000, the name has stuck – for 48 years!  Their first commercial chip, the 4004, contained 2,300 transistors – put that into perspective against a 10-core Haswell Xeon with its 2,600,000,000 transistors!  My how the times have changed – and if that isn't enough, take a look at some of the money surrounding the company.  When Intel initially IPO'd in 1971, they did so at a valuation of $6.8 million; their Q3 2016 revenue – $15.8 billion!

Intel plugged away in the early years generating most of their revenue from random-access memory circuits, pumping chips into the DRAM, SRAM, and ROM markets.  What would turn out to be their bread and butter, the microprocessor, wasn't really on the radar – that is, until the early '80s or so, when IBM started to use the 80286.  After that it's a story we know all too well: the 80386, 486, Pentium, and so on and so forth!

Anyways, that's enough of my Wikipedia paraphrasing – yeah, Intel has been around a loooong time and has pivoted many times, surviving it all – check out some cool facts about the company here if you are still interested (did you know they pioneered the cubicle?)!  I've never been part of a Field Day event where Intel has presented (alone), so I'm interested to see what they have to talk about.  If you want to follow along as well, keep your eyes on the official landing page for Tech Field Day 12 here – and use the hashtag #TFD12 come November!

Rubrik to talk cloud data management at Tech Field Day

Is it just me or does it seem that every time you turn around Rubrik is breaking news about receiving some crazy high number of dollars in funding?  Their last round, a Series C of a wee $61 million, brought them up to a total of $112 million – that last round more than doubled their total!  In all honesty it's only three rounds – maybe it's just that every time I end up writing about them it's close to the closing of a round!  Either way, the Palo Alto based company will be spending a little of that money to present at the upcoming Tech Field Day 12, taking place November 15/16 in Silicon Valley!

Disclaimer: As a Tech Field Day 12 delegate, all of my flight, travel, accommodations, eats, and drinks are paid for. However, I did not receive any compensation, nor am I required to write anything regarding the event or the presenting companies. All that said, this is done at my own discretion.

So who’s Rubrik?

Besides being the company that is always tempting me into webinars and trade shows with Lego (yeah, I got a thing for free Lego), they deliver what they call a "Cloud Data Management Platform".  Rubrik came to light just over a couple of years ago, when some peeps from Google/Nutanix/Oracle got together and aimed to bring a new approach to the 41-billion-dollar data protection industry.  It feels odd to say they were founded just a couple of years ago, as it feels like they have been around for quite a while – maybe it's because I saw them way back at Virtualization Field Day 5 – but the more appropriate reason is that they are already on their third major release, this one dubbed Firefly, of their flagship software/hardware appliances!

Cloud Data Management – huh?

Yeah, let's take that term and break it down so we can see what Rubrik really does.  At its most basic, it's a data protection/disaster recovery appliance – but it's much, much more.  Sure, the core functionality of the Rubrik boxes is to back up your VMware/physical environment, but the benefits of Rubrik really come from the policy-based approach that they take.  We don't necessarily create backup jobs on Rubrik's platform – instead we create policies, or SLAs if you will, and from there we add our VMs and our data sources to those policies.  The simplicity of Rubrik is that once the policies are all created and the objects added to them, we are essentially done – we can let the software do the rest.  Need a 30-minute RPO on that VM?  Create a policy.  Want that same RPO on your physical SQL server?  Add it to the same policy!  How about archiving older/stale data from those backups up to Amazon or Azure – hey, Rubrik can do that too!
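The declarative flavour of this is worth spelling out: you define the SLA once and attach any object, virtual or physical, to it. A quick sketch of that model – the names here are mine, not Rubrik's API:

```python
from datetime import datetime, timedelta

# One SLA domain, shared by a VM and a physical SQL server alike.
sla_domains = {"gold-30min": {"rpo": timedelta(minutes=30)}}

protected = [
    {"name": "web-vm01",   "sla": "gold-30min", "last": datetime(2016, 11, 1, 8, 0)},
    {"name": "sql-phys01", "sla": "gold-30min", "last": datetime(2016, 11, 1, 8, 25)},
]

def due_for_backup(now: datetime) -> list:
    """Objects whose SLA's RPO window has elapsed since their last snapshot."""
    return [o["name"] for o in protected
            if now - o["last"] >= sla_domains[o["sla"]]["rpo"]]

print(due_for_backup(datetime(2016, 11, 1, 8, 45)))  # -> ['web-vm01']
```

The point is that there is no per-object backup job anywhere: the schedule for every object is derived from the one SLA it belongs to.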


I mentioned earlier, however, that Rubrik is much, much more than backup.  Sure, backup is the bread and butter of the platform – that's how they get the data onto their box so they can apply the real magic against it.  Need to spin up a copy of certain VMs for testing/development purposes?  Let Rubrik do it – they can do it on flash!  Looking for a certain file inside of all those backups?  Yeah, remember I said Rubrik was founded by some people from Google – well, they have a pretty nifty search that will search your backups globally, no matter where they are – meaning if they have been archived to Amazon or are sitting on another Rubrik box, the search results are still global!

I'm sure we will hear much, much more from Rubrik come November and I'm excited to see them at a Field Day event once again!  Be sure to follow along – I should have the live-stream set up on my page here – and get all of the Tech Field Day 12 information that you need by checking out the official landing page!  Thanks for reading!

Docker to make 4th Field Day appearance!

Ah, Docker – probably the coolest logo of any tech company I know!  Certainly as of late that whale has been all the rage – well, more so those containers sitting up on top of him.  We've seen the likes of Microsoft and VMware declaring support for Docker, and we have seen startups spawning around Docker, supporting things such as management and persistent storage.  All of this says to me that containers and Docker are pretty much gearing up to go mainstream and start being utilized in a lot more enterprises around the world.  Docker is the last company to present at Tech Field Day 12 – and in my opinion "last but not least" certainly applies to this situation.

So who’s Docker?

So, in terms of who/what Docker is, well, they are kind of one and the same – confused?  Docker is essentially a project, an open source project, whereas Docker Inc. is the company that originally authored said project.  While the use of Docker containers is most certainly free, Docker the company sells services and solutions around them…

Disclaimer: As a Tech Field Day 12 delegate, all of my flight, travel, accommodations, eats, and drinks are paid for. However, I did not receive any compensation, nor am I required to write anything regarding the event or the presenting companies. All that said, this is done at my own discretion.

So what does Docker offer?

First up is support – open source software is great and all, but for adoption in the enterprise you certainly need to have someone whom you can call upon when things go awry – companies like Red Hat and Canonical certainly know this space well.  Software's free, support is extra – and that's one element where Docker the company comes into play, offering support on a case-by-case basis, as well as premium subscriptions around the container world.


Next is Docker Datacenter.  At its most basic, Docker Datacenter is a service which allows customers to get the same agility, efficiency, and portability of containers, while bringing security, policy, and controls into the mix – all things that, again, enterprises prefer when going "all-in" on a product.  It can be deployed either on-premises or in a virtual private cloud type deployment hosted by Docker.

To be totally honest I've read a lot about containers but haven't actually been involved in any "production" deployments, as I've been striving to find use-cases around them.  I can see this changing in the future – with VMware moving into the space, making it easier and easier to deploy containers alongside your virtual machines, it's only a matter of time before containers really hit mainstream.  I'm excited to see what Docker has to talk about during Tech Field Day 12.  If you want to follow along, the whole event will be live-streamed.  I'll hopefully have the stream going, as well as all my other Field Day content, on my page here – and for more information and everything Tech Field Day 12, head over to the official page here.  Thanks for reading!

Igneous bringing the cloud to you at Tech Field Day

Today we will continue our Tech Field Day 12 preparation of trying to get a grasp on some of the companies presenting at the event.  Next up, Igneous Systems – again, another company I've not had any interaction with or really even heard of.  With that, let's take a quick look at the company and the services, solutions, and products they provide.

Who is Igneous?

Founded in 2013, Igneous Systems is based out of Seattle and entered the market looking to solve the issues around large unstructured data and public cloud.  Their founders have a strong storage background – Kiran Bhageshpur (CEO/co-founder) and Jeff Hughes (CTO/co-founder) both come from engineering roles in the Isilon division at EMC, and Byron Rakitzis (Architect/co-founder) was the first employee hired at NetApp, being responsible for a good chunk of code there and holding over 30 patents to his name.  I'm always interested in seeing the paths that startup founders have taken – this appears to be the first go-around for these three guys, so let's hope they are successful!

Disclaimer: As a Tech Field Day 12 delegate, all of my flight, travel, accommodations, eats, and drinks are paid for. However, I did not receive any compensation, nor am I required to write anything regarding the event or the presenting companies. All that said, this is done at my own discretion.

Igneous – True Cloud for Local Data

These three guys have set out to bring the benefits and agility of public cloud down into the four walls of your datacenter.  If we think about the different types of data flowing around within the enterprise today, we can identify quite a few that just aren't a good fit to ship up to services like Amazon S3.  Think IoT, with sensors that can generate a vast amount of data that you may want to access often – it may not be cost-efficient to ship this data up to the cloud for storage.  Other types of data, such as security or syslog, fall into that same category.  Aside from the sheer volume of data, enterprises also struggle with what to do with large datasets such as media content.  But the real driving factor keeping most data away from services such as S3 comes in terms of security and compliance – we may just not want our sensitive data sitting outside of our buildings!

The problem with this, though, is that enterprises still want the agility of public cloud.  They want to be able to budget in terms of storing this data – and after you buy a big honking box of storage to sit in your datacenter, it's pretty hard to scale down and somehow reclaim those dollars initially spent!  This is where Igneous comes into play.

Igneous is a hardware appliance – it's still that big honking box of storage that sits inside our firewall – the difference being we don't actually buy it, we rent it.  And the terms of this rental contract are based on capacity – a "pay as you go" type service.  Now you may be thinking: yeah, great, we just don't have to pay for it upfront – we still have to manage it!  That's not the case.  When Igneous is engaged they deliver the appliance to your datacenter, they install it, and they manage it throughout its lifetime – meaning hardware and software upgrades are all performed by Igneous during the lifetime of the contract.


But the real advantage of Igneous, like most other products, comes in terms of software.  Having local storage is great, but if it can't be accessed and utilized the same way as services such as S3 and Google Cloud, then we haven't really deployed the cloud into our datacenter.  The APIs provided by the Igneous box are the same familiar API calls that you are used to using with services like Azure, S3, and Google – so we still have the agility and efficiency of a cloud service, the difference being that your data is still your data and remains local inside your datacenter.  Obviously Igneous provides visibility into your data, allowing you to do capacity management and run analytics against the data consumed.
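That compatibility claim is the whole trick: client code stays identical and only the endpoint changes. A tiny sketch of the idea (the on-premises hostname is made up):

```python
# With an S3-compatible appliance, the same bucket/key addressing works
# against two different homes for the data - only the endpoint differs.
def object_url(endpoint: str, bucket: str, key: str) -> str:
    """Build the path-style URL an S3 client would PUT/GET against."""
    return f"{endpoint}/{bucket}/{key}"

in_cloud = object_url("https://s3.amazonaws.com", "sensor-data", "2016/11/cam01.bin")
on_prem  = object_url("https://igneous.dc1.example.com", "sensor-data", "2016/11/cam01.bin")
print(in_cloud)
print(on_prem)
```

With a real SDK such as boto3 this amounts to passing a custom `endpoint_url` when constructing the S3 client, which is why existing S3 tooling should keep working against the appliance unchanged.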

Igneous has an interesting solution and one that I feel can be incredibly useful.  How it integrates with other products is interesting to me.  Essentially, if they support the S3 API then technically we should be able to find some way to use Igneous with 3rd-party products that can send data to Amazon.  I'm thinking of backup and other products here which have the ability to copy data to S3 – we could essentially place an Igneous box at our DR site and possibly copy the data there, keeping it within our organization.  We will most definitely find out more about Igneous and their local cloud solution come Tech Field Day 12 when they present.  I encourage you to follow along – I'll have the live-stream up on my page here, and you can also find a ton more information over at the official Tech Field Day 12 page!  Thanks for reading!

DriveScale set to make first ever Tech Field Day appearance!

DriveScale – another company presenting at Tech Field Day 12 that I know very little about – consider this post a homework assignment on my part, to at least learn a little bit about the company, the problems they are trying to solve, and the products and services offered.  Just like the last company I had a look at, StorageOS, DriveScale is relatively young in the IT vendor space.  Not that that is a bad thing – startups normally execute quickly and solve real-world issues that exist today.  DriveScale has been around since 2013 but just came out of stealth in May of this year – so naturally, this is their first appearance at a field day event.  Before we get into what DriveScale does and how their technology works, we should take a look at something that piqued my interest right off the hop – and that's the founders.  In order to best understand this, let me list each founder with some highly summarized bulleted accomplishments – I think you will be a little impressed.

Satya Nishtala

  • Holds 31 patents to his name in core datacenter areas – 31, yes, 31!
  • Technology Fellow at Nuova (Eventually acquired by Cisco and baked into UCS/Nexus platform)

Tom Lyon

  • Founder of Nuova (Eventually acquired by Cisco and baked into UCS/Nexus platform)
  • Employee #8 at Sun Microsystems – think about that for a minute – Sun Microsystems, Employee #8

Duane Northcutt

  • Conceived of and led development of the Sun Ray desktop!
  • Held CTO/Vice President positions at Technicolor, Trident, and Silicon Image

The list goes on and on for these guys, but man, those are some hefty accomplishments to have at the helm of one company for sure.  Anyways, what they have done is not as important as what they are doing now, so let's have a look at that.

Disclaimer: As a Tech Field Day 12 delegate, all of my flight, travel, accommodations, eats, and drinks are paid for. However, I did not receive any compensation, nor am I required to write anything regarding the event or the presenting companies. All that said, this is done at my own discretion.

Smarter Scale-Out

DriveScale's whole solution is based around being a smarter scale-out play – offering a rack-scale architecture which includes both hardware and software to bring "advantages of proprietary scale-up infrastructure environments to the commodity of the scale-out world"  <- When I read this I kind of thought, huh, I don't get it – it sounds good, but I really don't know what it means.  This is mostly due to the fact that they really target Hadoop and big data environments, something I'm not well versed in at all!  I'm sure we will all learn more when they present at TFD, but for now here's what I can gather about DriveScale's solution.

Basically, they take a group of physical servers and disaggregate them into pools of both compute and storage resources.  Converting these resources into what they call "Software Defined Physical Nodes" allows DriveScale to use both software and hardware to present these resources to our applications, with the ability to grow and shrink them as needed.  When the time comes to scale out we aren't faced with the usual challenge of purchasing pre-defined nodes where compute and storage come married together – instead, we can leverage DriveScale to simply add more compute by bringing more physical nodes into the pool, or add more storage by importing a bunch of commodity JBODs.  In the end, we can scale up or down as much compute and storage as we need, without having to worry about things like data locality – because we have DriveScale sitting between our compute and storage resources.
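My mental model of that disaggregation, as a sketch: keep servers and drives in separate pools and bind them into logical nodes on demand, so storage-heavy and compute-heavy nodes can be composed independently. All names here are illustrative, not DriveScale's software:

```python
# Two independent pools instead of pre-married server+disk nodes.
compute_pool = ["server-1", "server-2", "server-3"]
drive_pool = [f"jbod-disk-{i}" for i in range(12)]

def compose_node(drives_needed: int) -> dict:
    """Bind one free server to N free drives to form a logical node."""
    server = compute_pool.pop(0)
    drives = [drive_pool.pop(0) for _ in range(drives_needed)]
    return {"server": server, "drives": drives}

# A storage-heavy node and a compute-heavy node from the same pools:
hadoop_data = compose_node(drives_needed=8)
hadoop_compute = compose_node(drives_needed=1)
print(hadoop_data["server"], len(hadoop_data["drives"]))  # server-1 8
print(len(drive_pool))                                    # 3 drives still free
```

Scaling out is then just adding to one pool or the other – more servers for compute, more JBODs for capacity – without buying the other half along with it.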


This is all made possible by a couple of hardware and software components.  First we have the DriveScale Management Server and the DriveScale software agents – these provide the core functionality of the product by pooling all of the compute and storage resources within the rack into logical nodes.  All data runs through a hardware appliance called the DriveScale Adapter, which is basically a central connection point for all of our physical servers over a 10GbE network.

There is certainly a lot more that DriveScale's solution provides – things like high availability and visibility – but before I completely make a fool of myself explaining how this all works, I'll just leave this post off right here.  Here's hoping I can learn a bit more about how DriveScale technically does all of this at Tech Field Day 12 – and hopefully convey that information back 🙂  That said, if you want to learn more about DriveScale for yourself, their Tech Field Day 12 presentation will be live-streamed.  I'll hopefully have it set up on my page here, or if you wish, keep your eyes on the official Tech Field Day 12 page.

Tech Field Day 12 – Goin’ back to Cali!

Yes, Mr. Cool J, you heard that right – although you don't 'think' you are heading back, this guy is indeed 'Goin' back to Cali'!  While Mr. Cool J would rather stay in New York, I'm heading to Silicon Valley to partake in Tech Field Day 12 with a slew of great delegates and sponsors alike!  This will be my first time in the Valley – so I'm pretty pumped, to say the least!  I'm excited to finally be in the heart of all of the companies and technologies that I've been using my whole life, and writing about here for the past 5 years or so!

TFD what?

So if you haven't heard of Tech Field Day then you have most certainly been missing out!  TFD is the brainchild of Stephen Foskett and his company Gestalt IT, and is essentially a learning resource for the community.  Now I know, I know, there are already many, many resources out there for us to find out about certain technologies or companies – we have white papers, books, blogs, videos, training, etc. – but the problem is most of this stuff usually stems from strong marketing roots and, at times, it can be a bit overwhelming trying to weed out the message from the technology!  TFD solves this by deep diving into the technology, and by placing a dozen or so tech-minded folks in a room with a vendor it helps to keep the presentations and messages on point – it's about the technology, not the marketing!  You know when you are sitting through a webinar or a presentation and someone poses a question – and said question is responded to with "I'll connect you with an SE or with someone afterwards to talk"?  This kind of stuff doesn't really happen at TFD – most of the time, vendors and companies presenting have the knowledge and the resources in the room to leave no question unanswered.  That's what I like to think TFD is!

Cohesity | DellEMC | Docker | DriveScale | Igneous | Intel | Rubrik | StorageOS

Anyways, so yeah, the Valley – so excited for this!!  Tech Field Day 12 has a number of great sponsors and vendors lined up to present at the event (you can see them above).  Some of these companies are giants (DellEMC, Intel), some are fairly new to the market (Rubrik, Cohesity), some are all the rage right now (Docker), and honestly, some I've never dealt with or even really heard of (StorageOS, DriveScale, Igneous).  It's normally the latter that really impress me at these events!  So heads up, the time is near – TFD12 airs November 15th and 16th with two jam-packed days!  To learn more about the event, certainly check out the official landing page!

As I have with the other TFD events I've participated in, I'll try to consolidate all of my content surrounding the event on a single page, which you can find here!  A huge thanks to Gestalt IT for having me back!  I can't wait!  Oh, and sorry for the 90's hip hop references – it was as witty as I could get at the moment 🙂  Either way, I can almost hear those scratching records and that crazy jazz music which kicked off the song right now 🙂

Rubrik Firefly – Now with physical, edge, and moar cloud!

Rubrik, the Palo Alto based company who strives to simplify data protection within the enterprise, has recently announced a Series C worth a cool $61 million, doubling their total capital to $112 million since being founded just over a couple of years ago!  And as much as I love to hear about venture capital and money and whatnot, I'm much more into the tech – as I'm sure my readers are as well!  With that, alongside that Series C announcement comes a new release of their product, dubbed Rubrik Firefly!

Rubrik Firefly – A Cloud Data Management Platform

With this third major release from Rubrik comes a bit of a rebrand, if you will – a cloud data management platform.  Nearly all organizations today have some sort of cloud play in their business; whether that be building out a private cloud to support legacy applications or consuming public cloud resources for cloud-native applications, they all have some kind of initiative that aligns with cloud.  The problem Rubrik sees is that the data management and data protection solutions running within those businesses simply don't scale to match what the cloud offers.  Simply put, customers need to be able to manage, secure, and protect their data no matter where it sits – onsite, offsite, cloud – no matter what stage of cloud they are at.  Thus spawned the Cloud Data Management Platform.


So what’s new?

Aside from a number of improvements and enhancements, Rubrik Firefly brings a few big new features to the table: physical workloads, edge environments, and spanning across clouds.  Let's take a look at each in turn…

Physical Workloads

I had a chance to see Rubrik way back at Virtualization Field Day 5, where we got a sneak peek at their roadmap – at the time they supported vSphere only and had no immediate plans for physical workloads.  The next time they showed up, at Tech Field Day 10, they actually had a bit of a tech preview of physical MSSQL support – and today that has become a reality.  As you can see, they are moving very fast with development of some of these features!  Rubrik Firefly adds official support for those physical SQL servers that you have in your environment – you know, the ones that take up so many resources that the DBAs just will not let you virtualize.  Rubrik can now back these up in an automated, forever-incremental fashion and give you the same ease of use, efficiency, and policy-based environment that you have within your virtual workload backups.  Firefly does this by deploying a lightweight Windows service, the Rubrik Connector Service, onto your SQL server, allowing you to perform point-in-time restores and log processing through the same UI you've come to know with Rubrik.  Aside from deploying the service, everything else is exactly the same – we still have the SLA policy engine, SLA domains, etc.

And they don't stop at just SQL!  Rubrik Firefly offers the same type of support for those physical Linux workloads you have lying around.  Linux hosts are connected to Rubrik through an rpm package, allowing for ease of deployment.  From there Rubrik pulls in a list of files and directories on the machine and, again, provides the same policy-based approach as to what to back up, when to back it up, and where to store it!

Both the SQL msi installer and the Linux rpm package are fingerprinted to the Rubrik cluster that creates them – allowing you to ensure you are only processing backups from the boxes you allow.
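Conceptually, this kind of installer fingerprinting works like a keyed signature: the cluster derives a token from a secret only it holds, and later refuses traffic from agents whose token doesn't verify.  Here's a minimal sketch of the idea in Python – the function names and key handling are purely illustrative, not Rubrik's actual mechanism:

```python
import hmac
import hashlib

def fingerprint_package(cluster_secret: bytes, package_bytes: bytes) -> str:
    """Derive a fingerprint tying an installer package to the cluster that built it."""
    return hmac.new(cluster_secret, package_bytes, hashlib.sha256).hexdigest()

def is_trusted(cluster_secret: bytes, package_bytes: bytes, claimed: str) -> bool:
    """Only accept backups from agents whose fingerprint verifies against our secret."""
    expected = fingerprint_package(cluster_secret, package_bytes)
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, claimed)
```

The upshot is that an installer built by a different cluster (a different secret) simply won't verify, which is exactly the "only the boxes you allow" property described above.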

Edge Support

Although Rubrik ships as a physical appliance, we all know this is a software-based world – and that doesn't change with Rubrik.  The real value in Rubrik is the way the software works!  Rubrik has taken its software and bundled it into a virtual appliance aimed at remote/branch offices.  This allows enterprises with remote or branch offices to deploy a Rubrik instance at each location, all talking back to the mothership, if you will, at the main office.    The same policy-based approach can then be applied to the workloads running at those remote locations, so things such as replication back to the main office and archive to cloud can be performed at the edge of the business as well as at the main office.  The virtual appliance is bundled as an OVA and sold on a number-of-VMs-protected basis – so if you have only a handful of VMs to protect, you aren't paying through the nose to get that protection.

Cloud Spanning

Finally we come to cloud spanning.  Rubrik has always supported AWS as a target for archiving backups and brought us an easy-to-use, efficient way of getting back just the pieces of data we need from AWS – but we all know that Microsoft has been pushing Azure quite heavily as of late, handing out lots and lots of credits!  You can now take those spare credits and put them to good use, as Firefly brings support for Azure blob storage!  The same searching and indexing technology that Rubrik has for Amazon can now be applied to Azure as well, giving customers options as to where they archive their data!

Bonus Feature – Erasure Coding

How about one more?  With the Firefly release Rubrik now utilizes erasure coding, bringing a number of performance and capacity enhancements to customers with a simple software upgrade!  Without putting hard numbers to it, customers can expect to see a big increase in their free capacity once they perform the non-disruptive switch over to erasure coding!
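The capacity win comes from storing parity instead of full replicas: with, say, three data chunks plus one parity chunk you can survive losing any one chunk at roughly 1.33x overhead, versus 2x for straight mirroring.  The post doesn't detail Rubrik's actual scheme, so here's a purely conceptual single-parity (RAID-5-style) XOR sketch in Python:

```python
from functools import reduce

def make_parity(chunks: list[bytes]) -> bytes:
    """XOR equal-length data chunks together to produce one parity chunk."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks))

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild the single missing chunk: XOR of the survivors plus parity."""
    return make_parity(surviving + [parity])
```

Real erasure codes (Reed–Solomon and friends) tolerate multiple simultaneous losses, but the trade-off is the same: a little parity buys the redundancy that whole extra copies used to, which is where that extra free capacity appears.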

Firefly seems like a great step towards the cloud data management platform – a topology-agnostic approach to wrapping policy around your data, no matter where it is, ensuring it's protected and secured!  The release of a virtual appliance perks my ears up as well – although it's aimed directly at ROBO deployments for now, who knows where it might go in the future – perhaps we will see a software-only release of Rubrik someday?!?   If you are interested in learning more, Rubrik has a ton of resources on their site – I encourage you to check them out for yourself.  Congratulations Rubrik on the Series C and the new release!

#VFD5 Preview – OneCloud

Am I looking forward to the presentation at Virtualization Field Day 5 from OneCloud?  I have no idea!  Why?  Well, here is a company that I know absolutely nothing about!  I can't remember ever coming across OneCloud in any of my journeys or conferences!  Honestly, I think this is the only company presenting at VFD5 that I have absolutely no clue about…

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

That will certainly change fast

OneCloud will present at VFD5 on June 24th at 1:00 PM, where I'm sure we will all be enlightened a little more on the solutions they provide.  That said, I don't like going in cold, knowing nothing about someone – thus, this preview post will at least help me understand a little bit about what OneCloud has to offer…

So let's start from the ground up.  OneCloud is essentially a management platform for a hybrid cloud play.  Their core technology, the Automated Cloud Engine (ACE), is the base on which they provide other services.  From what I can tell, ACE facilitates the discovery of your on-premises data center, taking into account all of your VMs, physical storage, and networking information.  From there, ACE can take different business objectives and transform them into API calls in order to replicate all of your infrastructure into the public cloud – for now, Amazon's AWS appears to be the only cloud supported.

The service running on top of ACE is OneCloud Recovery.  OneCloud Recovery allows organizations to build a disaster recovery or business continuity solution with the public cloud as the primary target – skipping the cost and complexity of implementing a second or third site on premises.


So here is how it all happens from start to finish: OneCloud is deployed into your environment via a virtual appliance, and another instance is deployed into Amazon.  From there it auto-discovers your environment; your networking setup, storage configurations, data, and applications are all tied together, and somewhat of a blueprint of your environment is created.  You then use their policy engine to apply RTO and RPO objectives to your applications.  OneCloud will then provision a fully functioning virtual data center in Amazon – one that mirrors your environment in terms of networking and configuration.  OneCloud not only duplicates your environment into Amazon, it also optimizes both compute and storage in order to minimize costs – meaning it will scale down on CPU where it believes it can and place your data onto the most cost-effective storage.  Once your data is there, OneCloud performs ongoing replication in order to meet the RPO you have selected.  From there it's just a matter of performing your normal DR tests and engaging in any failover (and failback) operations.
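That ongoing-replication step boils down to a simple policy loop: compare the age of the newest replicated recovery point against the RPO you assigned, and kick off another replication cycle when it goes stale.  A rough sketch of that decision – the class and field names here are mine, not OneCloud's actual API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ProtectedApp:
    name: str
    rpo: timedelta            # maximum tolerable data loss, e.g. timedelta(hours=4)
    last_replicated: datetime # timestamp of the newest recovery point in the cloud

def replication_due(app: ProtectedApp, now: datetime) -> bool:
    """True when the newest recovery point is older than the app's RPO allows."""
    return now - app.last_replicated >= app.rpo
```

A scheduler evaluating this per application is all it takes to turn a business-level RPO into concrete "replicate now" actions against the cloud APIs.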

OneCloud seems to have some interesting technology and I'm looking forward to learning more at VFD5.  Some questions for OneCloud come to mind – How do they compare to VMware's vCloud Air DR services?   Do they plan on expanding to other public clouds such as Google, Azure, or vCloud Air?  With a strong software base in ACE, do they plan on moving outside the DR/BC realm – things such as DevOps and public cloud labs come to mind.   I really like how they abstract away what can be some very complicated API calls to Amazon – any time a company delivers simplicity it's a good thing, but especially so when dealing with the complex networking and configuration of public cloud disaster recovery.  If you would like to learn more about OneCloud with me, you can do so by watching the live stream on the VFD5 event page.  That stream, along with any other content I create, will be posted on my VFD5 event page as well.

Friday Shorts – Certs, Tools, Loads, VVOLs and #SFD7

It’s been quite a long time since my last “Friday Shorts” installment and the links are certainly piling up!  So, without further ado here’s a few tidbits of information that I shared over the last little while…

A little bit of certification news!

VMware education and certification has certainly taken its fair share of backlash in the last few months, and honestly it's rightly deserved!  People don't like investing in a certification, both money and time, just to have an expiry date placed on all their efforts!  Either way, that's old news and nothing is changing there.  What I was most concerned about was whether or not I would be able to skip the upgrade of my VCP and just take a VCAP exam instead, which would in turn re-up my VCP.  Then the announcement of no more VCAP was made – which threw those questions of mine for a loop – but now, after this announcement, it appears that there will be an upgrade/migration path for current VCAP holders to work towards the newly minted VCIX.  Have a read, figure out where you fit in, and start planning.   I already hold a VCAP5-DCA, so by taking the design portion of the VCIX I would be able to earn my VCIX certification in full – sounds good to me!  Now we just need the flipping exam blueprints to come out so we can all get to studying! 🙂

New version of RVTools!

rvtoolsYup, the most famous piece of "nice to haveware" has an updated version.  I've used RVTools for quite some time now – as an administrator, any piece of free software that helps me with my job is gold!  RVTools saves me a ton of time when gathering information about my virtual environment and my VMs.  If you haven't used it, definitely check it out – if you have, upgrade – you can see all of the new changes and download it here!

KEMP giving away LoadMaster!

kempKeeping on the topic of free tools, let's talk about KEMP for a moment!  They are now offering their flagship KEMP LoadMaster with a free tier!  If you need any load balancing done at all I would definitely check this out!  Now, there are going to be some limitations, right – nothing in this world is completely free 🙂  Certainly it's only community supported and you can only balance up to a maximum of 20 Mbps – but hey, it may be a great solution for your lab!  Eric Shanks has a great introduction to getting it up and going on his blog, so if you need a hand check it out!  I also did a quick review a few months back on load balancing your Log Insight installation with KEMP.  Anyways, if you are interested, go and get yourself a copy!

You got your snapshot in my VVOL!

As my mind wanders during the tail end of the NHL season I often find my mind racing about different things during the commercial breaks of Habs games – this time I said to myself, self, do snapshots work the same when utilizing the new VVOL technology?  Then myself replied and it said, hey self, you know who would know this answer – Cormac Hogan.  A quick look at his blog and lo and behold, there it was: a post about snapshots and VVOLs.  If you have some time check it out – Cormac has a great way of laying things out in quick, easy-to-follow blog posts and this one is no exception.  In fact, before the first-place team in the Eastern Conference returned from the TV timeout I had a complete understanding of it – now, back to our regularly scheduled programming.

#SFD7 – Did you see it?

SFD-Logo2-150x150It appears that most if not all of the videos from Storage Field Day 7 have been uploaded from the Silicon Valley internets into the wide world of YouTube!  There was a great list of delegates, vendors, and presenters there, so I would definitely recommend you check them out!  There were crazy hard drive watches, fire alarms, and best of all, a ton of great tech being talked about!  IMO the show could have done with just a few more memes though 🙂  With that said, you can find all there is to know about Storage Field Day 7 over at GestaltIT's landing page!