Tag Archives: Tech Field Day

Cohesity bringing secondary storage to Tech Field Day

Cohesity is next up in my flurry of Tech Field Day 12 previews with their secondary storage play.  I recently got to see Cohesity present as a sponsor at our Toronto VMUG, which took place at the legendary Hockey Hall of Fame – so I guess you could say that Cohesity is the only vendor I've seen present in the same building as the Stanley Cup.  Okay, I'll try to get the Canadian out of me here and continue on with the post…

Disclaimer: As a Tech Field Day 12 delegate, all of my flight, travel, accommodations, eats, and drinks are paid for. However, I did not receive any compensation, nor am I required to write anything in regard to the event or the presenting companies. All that said, this is done at my own discretion.

Who is Cohesity?

Cohesity was founded in 2013 (I'm detecting somewhat of a Tech Field Day 12 pattern here) by Mohit Aron, former CTO and co-founder of Nutanix.  You can certainly see Mohit's previous experience at Google and Nutanix shining through in Cohesity's offering – complete visibility into an organization's "dark data" on their secondary storage appliance.

Secondary Storage?

Cohesity's appliance doesn't claim to be a primary storage array – they aim at the secondary storage market.  Think of non-mission-critical data: backups, file shares, and test/dev copies.  All of this data is a perfect fit for a Cohesity appliance.  How the data gets there, and what we can do with it once it does, all lies within Cohesity's DataProtect and DataPlatform software!

DataProtect and DataPlatform

For the most part, the onboarding of all this data onto their appliance is done through backups – Cohesity's DataProtect platform, to be more specific.  DataProtect integrates seamlessly into your vSphere environment and begins backing up your infrastructure using a set of predefined and custom policies, or SLAs if you will.  Policies define things such as RPO – how often we want to back up – as well as retention rules for archival, for example archiving backups older than 30 days to Azure/Amazon/Google.
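Just to make that policy-driven model a bit more concrete, here's a quick sketch of what an SLA-style policy could look like if you wrote it out in code. To be clear, this is purely illustrative – the ProtectionPolicy class and its field names are my own invention, not Cohesity's actual policy format or API.

```python
from dataclasses import dataclass, field

@dataclass
class ProtectionPolicy:
    """Hypothetical SLA-style backup policy (not Cohesity's real schema)."""
    name: str
    rpo_minutes: int          # how often to take a backup
    retain_days: int          # keep local copies this long
    archive_after_days: int   # then tier off to a cloud target
    archive_target: str       # e.g. "azure", "aws-s3", "gcs"
    vms: list = field(default_factory=list)

# One policy covers many objects - add VMs to it instead of building per-VM jobs.
gold = ProtectionPolicy(
    name="gold-tier",
    rpo_minutes=60,
    retain_days=30,
    archive_after_days=30,
    archive_target="azure",
)
gold.vms.extend(["sql-prod-01", "exchange-01", "fileserver-02"])
print(f"{gold.name}: backup every {gold.rpo_minutes} min, "
      f"archive to {gold.archive_target} after {gold.archive_after_days} days")
```

The nice part of this approach is that the schedule and retention fall out of the policy itself – you describe the SLA once and attach objects to it.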

Once the data resides on Cohesity's appliance, another piece of technology, DataPlatform, takes over.  DataPlatform provides a Google-esque search across all of the data, whether it sits on premises or has been archived to cloud.  This is where we can do some risk management, searching for patterns such as credit card numbers or social insurance numbers.  DataPlatform also lets us leverage our backups for things like test/dev, spinning up complete copies of our environments very quickly – isolated from our actual production networks.
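To give a feel for the kind of pattern-based risk scanning I'm talking about, here's a tiny, generic example that scans files for things that look like credit card numbers. This isn't Cohesity's search engine by any stretch – just an illustration of the concept, with a deliberately naive regex and a made-up mount path.

```python
import re
from pathlib import Path

# Very rough pattern for 16-digit card-like numbers (illustration only -
# real risk scanning would also validate with checksums such as Luhn).
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def scan_for_card_numbers(root: str):
    """Walk a directory tree and report files containing card-like numbers."""
    hits = []
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        if CARD_PATTERN.search(text):
            hits.append(str(path))
    return hits

if __name__ == "__main__":
    for f in scan_for_card_numbers("/mnt/restored-backups"):
        print("possible card data in:", f)
```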

[Image: Cohesity DataPlatform]

With the release of 3.0, we have also seen physical Windows and Linux support added to the platform – so just as we protect our VMs, we can protect our physical servers, along with applications such as SQL/Exchange/SharePoint that are running on them.

With a Best of VMworld 2016 award under their belts, I'm pretty excited to go deeper into Cohesity – and I expect to hear a lot more about what their next steps might be!  Stay up to date on Cohesity and all things mwpreston/Tech Field Day by watching my page here – and see all there is to know about Tech Field Day 12 on the main landing page here!  Thanks for reading and see yah in November 🙂

DellEMC will make its first appearance at a Field Day event since the merger!

Next in the long list of previews for Tech Field Day 12 is DellEMC – you know, that small company previously known as EMC that provides a slew of products primarily based on storage, backup, cloud and security.  Yeah, well, apparently 67 billion dollars and the largest acquisition in the tech industry ever allows you to throw Dell in front of their name 🙂  November 16th will be DellEMC's first Tech Field Day presentation under the actual DellEMC name – split out, we have seen Dell at 7 events and EMC at 5.   So let's call this their first rather than combining them both for that dreaded number 13…

Disclaimer: As a Tech Field Day 12 delegate, all of my flight, travel, accommodations, eats, and drinks are paid for. However, I did not receive any compensation, nor am I required to write anything in regard to the event or the presenting companies. All that said, this is done at my own discretion.

We all got a look at just what these two companies look like combined, as the newly minted DellEMC World just wrapped up!   We saw a number of announcements around how things will play out now that these two companies are sharing the same playground, summarized as best I can as follows…

  • Hyper-converged – Big announcements around how PowerEdge servers will now be a flavor of choice for VxRail deployments.  Certainly this brings an element of choice, in terms of the performance and capacity customization provided by Dell, to the hyper-converged solution once provided by EMC.  The same goes for VxRail's big brother, VxRack.
  • DataDomain – the former EMC backup storage solution will also be available on DellEMC PowerEdge servers.  What was once a hardware appliance is now a piece of software bundled on top of your favourite PowerEdge servers.  On top of that come updates allowing data to be archived to the cloud, along with multi-tenancy for service providers.
  • Updates to the Isilon series, including a new All Flash version being added to the scale-out NAS system.


Dell has not been shy as of late about making BIG moves – going private, then buying out EMC.  Certainly this transition is far from over – there is a lot of work that still has to take place in order to really merge the two companies together.  From the outside, things appear to be on the upside (except for the fact that I'm getting a ton of calls from both companies looking to explain everything now), however there are still many unanswered questions as to what will happen with overlapping product lines…  From the inside I can't really say – I have no idea – all I know is that it can't be an easy thing for anyone when you take 70,000 EMC employees and throw them in with Dell's 100,000+ – there will definitely be some growing pains there…

Only time will tell how DellEMC changes the story, if at all, at Tech Field Day 12.  DellEMC is up first thing on November 16th – follow along with the live-stream, keep up with all things mwpreston @ Tech Field Day 12 here, and stay tuned to the official landing page for more info!  This is destined to be a good one!  Thanks for reading!

Intel to take the stage at Tech Field Day!

Intel?  Who?  Never heard of them!  I always find it interesting to see the mix of presenting companies that Gestalt IT gets for their Field Day events – a lot of people may think it's just for startups trying to get their name out, but with Intel, the 40+ year old tech giant, involved, I think we can consider that pretty much debunked!  And this isn't their first appearance either – Intel has presented at 3 Storage Field Day events and a couple of Networking Field Day events as well!  So you could say they are well versed in the format…

It's kind of hard to do a preview post for Intel, as they have been around for so long and have their hands in so many parts of the datacenter – I mean, they could talk about so many things.  Aside from the well-known processors, they could talk about SSDs, chipsets, caching, networking – pretty much anything and everything.  Since Virtualization Field Day has been renamed to Tech Field Day, we can expect any of this, or anything else, from Intel.

Disclaimer: As a Tech Field Day 12 delegate, all of my flight, travel, accommodations, eats, and drinks are paid for. However, I did not receive any compensation, nor am I required to write anything in regard to the event or the presenting companies. All that said, this is done at my own discretion.

With that said, let's just have a look at the history of Intel rather than guess what they may talk about, as I'm always interested in how companies form – especially those that were there in the very beginnings of this crazy IT world we live in now.  I always picture some kind of scene from Mad Men or Halt and Catch Fire! 🙂

My Coles Notes of Wikipedia 😉

So yeah, Intel, founded in 1968 by Gordon Moore and Robert Noyce.  When selecting a name, the combination Moore-Noyce was quickly rejected for sounding too much like "more noise" 🙂 – instead Intel, short for Integrated Electronics, was chosen, and after paying a hotel chain which had the rights to the name a whopping $15,000, the name has stuck – for 48 years!  Their first commercial chip, the 4004, contained 2,300 transistors – put that into perspective against a 10-core Haswell Xeon with its 2,600,000,000 transistors!  My how the times have changed – and if that isn't enough, take a look at some of the money surrounding the company.  When Intel IPO'd in 1971 they raised $6.8 million; their Q3 2016 revenue – $15.8 billion!

Intel plugged away in the early years generating most of their revenue from random-access memory circuits, pumping chips into the DRAM, SRAM and ROM markets.  What would turn out to be their bread and butter, the microprocessor, wasn't really on the radar – that is, until the early '80s or so when IBM started to use the 80286.  After that it's a story we know all too well: the 80386, 486, Pentium, and so on and so forth!

Anyways, that's enough of my Wikipedia paraphrasing – yeah, Intel has been around a loooong time and has pivoted many times, surviving it all – check out some cool facts about the company here if you are still interested (did you know they pioneered the cubicle?)!  I've never been part of a Field Day event where Intel has presented (alone), so I'm interested to see what they have to talk about.  If you want to follow along as well, keep your eyes on the official landing page for Tech Field Day 12 here – and use the hashtag #TFD12 come November!

Rubrik to talk cloud data management at Tech Field Day

Is it just me or does it seem that every time you turn around Rubrik is breaking news about receiving some crazy high number of dollars in funding?  Their last round, a Series C of a wee $61 million, brought them up to a total of $112 million – that last round more than doubled their total!  In all honesty it's only three rounds – maybe it's just that every time I end up writing about them it's close to the closing of a round!  Either way, the Palo Alto based company will be spending a little of that money to present at the upcoming Tech Field Day 12 taking place November 15/16 in Silicon Valley!

Disclaimer: As a Tech Field Day 12 delegate, all of my flight, travel, accommodations, eats, and drinks are paid for. However, I did not receive any compensation, nor am I required to write anything in regard to the event or the presenting companies. All that said, this is done at my own discretion.

So who’s Rubrik?

Besides being the company that is always tempting me into webinars and trade shows with Lego (yeah, I've got a thing for free Lego), they deliver what they call a "Cloud Data Management Platform".  Rubrik came to light just over a couple of years ago, when some peeps from Google/Nutanix/Oracle got together and aimed to bring a new approach to the $41 billion data protection industry.  It feels odd to say they were founded just a couple of years ago, as it seems like they have been around for quite a while – maybe it's because I saw them way back at Virtualization Field Day 5 – but the more likely reason they seem older is that they are already on their third major release, this one dubbed Firefly, of their flagship software/hardware appliances!

Cloud Data Management – huh?

Yeah, let's take that term and break it down so we can see what Rubrik really does.  At its most basic, it's a data protection/disaster recovery appliance – but in reality, it's much, much more.  Sure, the core functionality of the Rubrik boxes is to back up your VMware/physical environment, but the benefits of Rubrik really come from the policy-based approach that they take.  We don't necessarily create backup jobs on Rubrik's platform; instead we create policies, or SLAs if you will, and from there we add our VMs and our data sources to those policies.  The simplicity of Rubrik is that once the policies are created and the objects added to them, we are essentially done – we can let the software do the rest.  Need a 30-minute RPO on that VM?  Create a policy.  Want that same RPO on your physical SQL server?  Add it to the same policy!  How about archiving older/stale data from those backups up to Amazon or Azure – hey, Rubrik can do that too!
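To illustrate the difference between jobs and policies, here's a small sketch of how a policy-driven scheduler might decide what's due for backup. Again, this is just my own illustration of the concept – the structures, names and timestamps below are hypothetical, not Rubrik's actual API.

```python
from datetime import datetime, timedelta

# Hypothetical SLA-style policies: each carries an RPO and a list of
# protected objects; there are no per-object backup "jobs" to hand-schedule.
policies = {
    "gold":   {"rpo": timedelta(minutes=30), "objects": ["vm-web-01", "sql-phys-01"]},
    "silver": {"rpo": timedelta(hours=4),    "objects": ["vm-test-07"]},
}

last_backup = {
    "vm-web-01":   datetime(2016, 11, 1, 9, 0),
    "sql-phys-01": datetime(2016, 11, 1, 8, 15),
    "vm-test-07":  datetime(2016, 11, 1, 5, 0),
}

def objects_due(now: datetime):
    """Return every protected object whose policy RPO has been exceeded."""
    due = []
    for name, policy in policies.items():
        for obj in policy["objects"]:
            if now - last_backup[obj] >= policy["rpo"]:
                due.append((obj, name))
    return due

print(objects_due(datetime(2016, 11, 1, 9, 15)))
# -> [('sql-phys-01', 'gold'), ('vm-test-07', 'silver')]
```

The point is that the schedule falls out of the policy – you never define a per-VM backup job.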

[Image: Rubrik feature overview]

I mentioned earlier, however, that Rubrik is much more than backup.  Sure, backup is the bread and butter of the platform – that's how they get the data onto their box so they can apply the real magic to it.  Need to spin up a copy of a VM (or several) for testing/development purposes?  Let Rubrik do it – and it can do it on flash!  Looking for a certain file inside all of those backups?  Remember, Rubrik was founded by some people from Google, and they have a pretty nifty search that spans your backups no matter where they live – whether they've been archived to Amazon or are sitting on another Rubrik box, the search results are global!

I'm sure we will hear much, much more from Rubrik come November and I'm excited to see them at a Field Day event once again!  Be sure to follow along – I should have the live-stream set up on my page here, and you can get all the Tech Field Day 12 information you need by checking out the official landing page!  Thanks for reading!

Docker to make 4th Field Day appearance!

Ah, Docker – probably the coolest logo of any tech company I know!  Certainly as of late that whale has been all the rage – well, more so those containers sitting up on top of it.  We've seen the likes of Microsoft and VMware declaring support for Docker, and we have seen startups spawning around Docker to support things such as management and persistent storage.  All of this says to me that containers and Docker are gearing up to go mainstream and start being utilized in a lot more enterprises around the world.  Docker is the last company to present at Tech Field Day 12 – and in my opinion "last but not least" certainly applies here.

So who’s Docker?

So, in terms of who/what Docker is, well, they are kind of one and the same – confused?  Docker is essentially a project, an open source project, whereas Docker Inc. is the company that originally authored said project.  While the use of Docker containers is most certainly free, Docker the company sells services and solutions around them…
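If you've never kicked the tires on the free side of that equation, here's a minimal sketch of spinning up a throwaway container from Python using the Docker SDK (the docker package on PyPI). It assumes a local Docker engine is already installed and running, and the image/command are just examples.

```python
import docker  # pip install docker

# Connect to the local Docker engine using environment defaults
# (DOCKER_HOST, etc.); assumes the engine is installed and running.
client = docker.from_env()

# Pull (if needed) and run a small container, capturing its output.
output = client.containers.run("alpine:latest", ["echo", "hello from a container"])
print(output.decode().strip())  # -> hello from a container

# Containers run this way are left behind when they exit; clean up
# explicitly or pass remove=True to containers.run().
```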

Disclaimer: As a Tech Field Day 12 delegate, all of my flight, travel, accommodations, eats, and drinks are paid for. However, I did not receive any compensation, nor am I required to write anything in regard to the event or the presenting companies. All that said, this is done at my own discretion.

So what does Docker offer?

First up is support – open source software is great and all, but for adoption in the enterprise you certainly need someone you can call upon when things go awry; companies like Red Hat and Canonical know this space well.  The software is free, support is extra – and that's one area where Docker the company comes into play, offering support on a case-by-case basis as well as premium subscriptions around the container world.

[Image: Docker Universal Control Plane dashboard]

Next is Docker Datacenter.  At its most basic, Docker Datacenter is a service which gives customers the same agility, efficiency and portability of containers while bringing security, policy and controls into the mix – all things that, again, enterprises prefer when going "all-in" on a product.  It can be deployed on-premises or in a virtual private cloud type deployment hosted by Docker.

To be totally honest, I've read a lot about containers but haven't actually been involved in any "production" deployments, as I've been striving to find use-cases for them.  I can see this changing in the future – with VMware moving into the space and making it easier and easier to deploy containers alongside your virtual machines, it's only a matter of time before containers really hit mainstream.  I'm excited to see what Docker has to talk about during Tech Field Day 12.  If you want to follow along, the whole event will be live-streamed.  I'll hopefully have the stream going, as well as all my other Field Day content, on my page here – and for more information and everything Tech Field Day 12, head over to the official page here.  Thanks for reading!

Igneous bringing the cloud to you at Tech Field Day

Today we will continue our Tech Field Day 12 preparation of trying to get a grasp on some of the companies presenting at the event.  Next up, Igneous Systems – again, another company I've not had any interaction with or really even heard of.  With that, let's take a quick look at the company and the services, solutions, and products they provide.

Who is Igneous?

Founded in 2013, Igneous Systems is based out of Seattle and entered the market looking to solve the issues around large unstructured data and public cloud.  Their founders have a fairly strong storage background – Kiran Bhageshpur (CEO/co-founder) and Jeff Hughes (CTO/co-founder) both come from engineering backgrounds in the Isilon division at EMC, and Byron Rakitzis (Architect/co-founder) was the first employee hired at NetApp, responsible for a good chunk of code there and holding over 30 patents to his name.  I'm always interested in seeing the paths that startup founders have taken – this appears to be the first go-around for these three guys, so let's hope they are successful!

Disclaimer: As a Tech Field Day 12 delegate, all of my flight, travel, accommodations, eats, and drinks are paid for. However, I did not receive any compensation, nor am I required to write anything in regard to the event or the presenting companies. All that said, this is done at my own discretion.

Igneous – True Cloud for Local Data

These three guys have set out to bring the benefits and agility of public cloud down into the four walls of your datacenter.  If we think about the different types of data flowing around the enterprise today, we can identify quite a few that just aren't a good fit to ship up to services like Amazon S3.  Think IoT, with sensors that can generate a vast amount of data that you may want to access often – it may not be cost efficient to ship this data up to the cloud for storage.  Other types of data, such as security or syslog data, fall into the same category.  Aside from the sheer volume of data, enterprises also struggle with what to do with large datasets such as media content.  But the real driving factor behind keeping some data out of services such as S3 comes down to security and compliance – we may just not want our sensitive data sitting outside of our own buildings!

The problem with this, though, is that enterprises still want the agility of public cloud.  They want to be able to budget in terms of storing this data – and after you buy a big honking box of storage to sit in your datacenter, it's pretty hard to scale down and somehow reclaim those dollars initially spent!  This is where Igneous comes into play.

Igneous is a hardware appliance – it's still that big honking box of storage sitting inside your firewall – the difference being you don't actually buy it, you rent it.  The terms of the rental contract are based around capacity – a "pay as you go" type service.  Now you may be thinking: great, we don't have to pay for the storage upfront, but we still have to manage it!  That's not the case.  When Igneous is engaged they deliver the appliance to your datacenter, install it, and manage it throughout its lifetime, meaning hardware and software upgrades are all performed by Igneous for the duration of the contract.

[Image: Igneous architecture]

But the real advantage of Igneous, like most other products, comes in terms of software.  Having local storage is great, but if it can't be accessed and utilized the same way we use services such as S3 and Google Cloud, then we haven't really brought the cloud into our datacenter.  The APIs provided by the Igneous box are the same familiar API calls you are used to using with services like Azure, S3, and Google – so we still have the agility and efficiency of a cloud service, with the difference that your data is still your data and remains local inside your datacenter.   Obviously Igneous also provides visibility into your data, allowing you to do capacity management and run analytics against the data consumed.
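If the object API really is S3-compatible, then in theory any standard S3 tooling should work simply by pointing it at a different endpoint. Here's a minimal boto3 sketch of that idea – the endpoint URL and credentials are completely made up for illustration, and whether Igneous exposes things exactly this way is something I hope to find out at the event.

```python
import boto3

# Hypothetical local endpoint and credentials - purely illustrative,
# not Igneous' actual addresses or auth scheme.
s3 = boto3.client(
    "s3",
    endpoint_url="https://igneous.example.internal",
    aws_access_key_id="LOCAL-KEY",
    aws_secret_access_key="LOCAL-SECRET",
)

# The calls themselves are the familiar S3 operations.
s3.create_bucket(Bucket="sensor-archive")
s3.put_object(Bucket="sensor-archive", Key="iot/2016-11-01.log", Body=b"temp=21.4\n")

for obj in s3.list_objects_v2(Bucket="sensor-archive").get("Contents", []):
    print(obj["Key"], obj["Size"])
```

The appeal here is that nothing about the application code changes – only the endpoint does.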

Igneous has an interesting solution and one that I feel can be incredibly useful.  How it integrates with other products is what interests me most.  Essentially, if they support the S3 API, then technically we should be able to find a way to use Igneous with third-party products that can send data to Amazon.  I'm thinking of backup and other products here which have the ability to copy data to S3 – we could essentially place an Igneous box at our DR site and possibly copy the data there, keeping it within our organization.  We will most definitely find out more about Igneous and their local cloud solution when they present at Tech Field Day 12.  I encourage you to follow along – I'll have the live-stream up on my page here, and you can also find a ton more information over at the official Tech Field Day 12 page!  Thanks for reading!