Tag Archives: TFD12
Intel? Who? Never heard of them! I always find the mix of companies presenting at Gestalt IT's Field Day events interesting – a lot of people may think it's just for startups trying to get their name out, but with Intel, the 40+ year old tech giant, involved I think we can say that's pretty much debunked! And this isn't their first appearance either: Intel has presented at three Storage Field Day events and a couple of Networking Field Day events as well, so you could say they are well versed in the format…
It's kind of hard to do a preview post for Intel as they have been around for so long and have their hands in so many parts of the datacenter – I mean, they could talk about so many things. Aside from the well-known processors, they could talk about SSDs, chipsets, caching, networking – pretty much anything and everything. Since Virtualization Field Day has been renamed to Tech Field Day we can expect any of this, or anything else, from Intel.
With that said, let's just have a look at the history of Intel rather than guess what they may talk about, as I'm always interested in how companies form – especially those that were there at the very beginning of this crazy IT world we live in now. I always picture some kind of scene from Mad Men or Halt and Catch Fire! 🙂
My Coles Notes of Wikipedia 😉
So yeah, Intel, founded in 1968 by Gordon Moore and Robert Noyce. When selecting a name, the combination Moore-Noyce was quickly rejected for sounding too much like "more noise" 🙂 – instead they chose Intel, short for Integrated Electronics, and after paying a hotel brand which held the rights to the name a whopping $15,000, the name has stuck – for 48 years! Their first commercial microprocessor, the 4004, contained 2,300 transistors – put that into perspective against a Haswell-era Core i7 with its roughly 2,600,000,000 transistors! My how the times have changed – and if that isn't enough, take a look at some of the money surrounding the company. When Intel IPO'd in 1971 they raised $6.8 million; their Q3 2016 revenue – $15.8 billion!
Intel plugged away in the early years generating most of their revenue from random-access memory circuits, pumping chips into the DRAM, SRAM, and ROM markets. What would turn out to be their bread and butter, the microprocessor, wasn't really on the radar – that is, until the early '80s, when IBM adopted Intel chips for its PC line, first the 8088 and later the 80286. After that it's a story we know all too well: the 80386, 486, Pentium, and so on and so forth!
Anyways, that's enough of my Wikipedia paraphrasing – yeah, Intel has been around a loooong time and has pivoted many times, surviving it all – check out some cool facts about the company here if you are still interested (did you know they pioneered the cubicle?)! I've never been part of a Field Day event where Intel has presented alone, so I'm interested to see what they have to talk about. If you want to follow along, keep your eyes on the official landing page for Tech Field Day 12 here – and use the hashtag #TFD12 come November.
Is it just me or does it seem that every time you turn around Rubrik is breaking news about receiving some crazy high amount of funding? Their last round, a Series C of a wee $61 million, brought them up to a total of $112 million – that single round more than doubled their total! In all honesty it's only three rounds – maybe it's just that every time I end up writing about them it's close to the closing of a round! Either way, the Palo Alto based company will be spending a little of that money to present at the upcoming Tech Field Day 12 taking place November 15/16 in Silicon Valley!
So who’s Rubrik?
Besides being the company that is always tempting me into webinars and trade shows with Lego (yeah, I got a thing for free Lego), they deliver what they call a "Cloud Data Management Platform". Rubrik came to light just over a couple of years ago, when some peeps from Google/Nutanix/Oracle got together aiming to bring a new approach to the $41 billion data protection industry. It feels odd to say they were founded just a couple of years ago, as it seems like they have been around for quite a while – maybe it's because I saw them way back at Virtualization Field Day 5 – but the more likely reason is that they are already on their third major release, this one dubbed Firefly, of their flagship software/hardware appliances!
Cloud Data Management – huh?
Yeah, let's take that term and break it down so we can see what Rubrik really does. At its most basic, it's a data protection/disaster recovery appliance – but in reality, it's much, much more. Sure, the core functionality of the Rubrik boxes is to back up your VMware/physical environment, but the benefits of Rubrik really come from the policy-based approach they take. We don't necessarily create backup jobs on Rubrik's platform; instead we create policies, or SLAs if you will – from there, we add our VMs and our data sources to those policies. The simplicity of Rubrik is that once the policies are created and the objects added to them, we are essentially done – we can let the software do the rest. Need a 30-minute RPO on that VM? Create a policy. Want that same RPO on your physical SQL server? Add it to the same policy! How about archiving older/stale data from those backups up to Amazon or Azure – hey, Rubrik can do that too!
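To make the policy-based idea a little more concrete, here's a purely illustrative Python sketch – this is my own model, not Rubrik's actual API – showing how objects get assigned to an SLA rather than to individual backup jobs. All names and values are made up:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SlaPolicy:
    """Toy model of an SLA-style protection policy (illustration only)."""
    name: str
    rpo_minutes: int                     # how often a snapshot is taken
    retention_days: int                  # how long snapshots are kept
    archive_target: Optional[str] = None # e.g. an S3/Azure bucket for old data
    members: List[str] = field(default_factory=list)

    def assign(self, obj: str) -> None:
        # VMs and physical hosts can share the same policy
        self.members.append(obj)

# One policy, many heterogeneous members – no per-object backup jobs
gold = SlaPolicy("Gold", rpo_minutes=30, retention_days=90,
                 archive_target="s3://offsite-archive")
gold.assign("vm-web01")
gold.assign("sql-physical-01")
```

The point of the sketch is the shape of the workflow: define the SLA once, attach objects to it, and let the platform schedule everything that satisfies the policy.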
I mentioned earlier, however, that Rubrik is much more than backup. Sure, backup is the bread and butter of the platform – that's how they get the data onto their box so they can apply the real magic against it. Need to spin up a copy of a VM for testing/development purposes? Let Rubrik do it – they can do it on flash! Looking for a certain file inside all those backups? Remember I said Rubrik was founded by some people from Google – well, they have a pretty nifty search that works globally across your backups, no matter where they are – meaning if a backup has been archived to Amazon or is sitting on another Rubrik box, the search results still cover it!
I'm sure we will hear much more from Rubrik come November, and I'm excited to see them at a Field Day event once again! Be sure to follow along – I should have the live-stream set up on my page here, and you can get all of the Tech Field Day 12 information you need by checking out the official landing page! Thanks for reading!
Ah Docker – probably the coolest logo of any tech company I know! Certainly as of late that whale has been all the rage – well, more so those containers sitting on top of him. We've seen the likes of Microsoft and VMware declaring support for Docker, and we have seen startups spawning around Docker, supporting things such as management and persistent storage. All of this says to me that containers and Docker are gearing up to go mainstream and start being utilized in a lot more enterprises around the world. Docker is the last company to present at Tech Field Day 12 – and in my opinion, "last but not least" certainly applies here.
So who’s Docker?
So, in terms of who/what Docker is – well, they are kind of one and the same. Confused? Docker is essentially an open source project, whereas Docker Inc. is the company that originally authored said project. While the use of Docker containers is most certainly free, Docker the company sells services and solutions around them…
So what does Docker offer?
First up is support – open source software is great and all, but for adoption in the enterprise you certainly need someone you can call upon when things go awry – companies like Red Hat and Canonical know this space well. The software's free; support is extra – and that's one area where Docker the company comes into play, offering support on a case-by-case basis as well as premium subscriptions around the container world.
Next is Docker Datacenter. At its core, Docker Datacenter is a service which gives customers the same agility, efficiency, and portability of containers while bringing security, policy, and controls into the mix – all things that, again, enterprises prefer when going "all-in" on a product. It can be deployed either on-premises or in a virtual-private-cloud type deployment hosted by Docker.
To be totally honest, I've read a lot about containers but haven't actually been involved in any "production" deployments, as I've been striving to find use-cases for them. I can see this changing in the future – with VMware moving into the space and making it easier and easier to deploy containers alongside your virtual machines, it's only a matter of time before containers really hit mainstream. I'm excited to see what Docker has to talk about during Tech Field Day 12. If you want to follow along, the whole event will be live-streamed. I'll hopefully have the stream going, as well as all my other Field Day content, on my page here – and for more information and everything Tech Field Day 12, head over to the official page here. Thanks for reading!
Today we will continue our Tech Field Day 12 preparation of trying to get a grasp on some of the companies presenting at the event. Next up is Igneous Systems – again, a company I've not had any interaction with, or really even heard of. With that, let's take a quick look at the company and the services, solutions, and products they provide.
Who is Igneous?
Founded in 2013, Igneous Systems is based out of Seattle and entered the market looking to solve the problem of large unstructured data and the public cloud. Their founders have a strong storage background: Kiran Bhageshpur (CEO/co-founder) and Jeff Hughes (CTO/co-founder) both come from engineering roles in the Isilon division at EMC, and Byron Rakitzis (Architect/co-founder) was the first employee hired at NetApp, responsible for a good chunk of code there and holding over 30 patents to his name. I'm always interested in seeing the paths that startup founders have taken – this appears to be the first go-around for these three, so let's hope they are successful!
Igneous – True Cloud for Local Data
These three guys have set out to bring the benefits and agility of public cloud down into the four walls of your datacenter. If we think about the different types of data flowing around within the enterprise today, we can identify quite a few that just aren't a good fit to ship up to services like Amazon S3. Think IoT, with sensors that can generate a vast amount of data you may want to access often – it may not be cost-efficient to ship this data up to the cloud for storage. Other types of data, such as security or syslog, fall into the same category. Aside from sheer volume, enterprises also struggle with what to do with large datasets such as media content. But the real driving factor against shipping data to services such as S3 comes in terms of security and compliance – we may just not want our sensitive data sitting outside of our buildings!
The problem, though, is that enterprises still want the agility of public cloud. They want to be able to budget for this storage as they go – and after you buy a big honking box of storage to sit in your datacenter, it's pretty hard to scale down and somehow reclaim those dollars initially spent! This is where Igneous comes into play.
Igneous is a hardware appliance – it's still that big honking box of storage sitting inside our firewall – the difference being we don't actually buy it, we rent it. And the terms of this rental contract are based on capacity – a "pay as you go" type service. Now you may be thinking: yeah, great, we just don't have to pay for it upfront – we still have to manage it! That's not the case. When Igneous is engaged they deliver the appliance to your datacenter, they install it, and they manage it throughout its lifetime, meaning hardware and software upgrades are all performed by Igneous for the duration of the contract.
But the real advantage of Igneous, like most other products, comes in terms of software. Having local storage is great, but if it can't be accessed and utilized the same way as services like S3 and Google Cloud, then we haven't really deployed the cloud into our datacenter. The Igneous box is accessed using the same familiar API calls you are used to with services like Azure, S3, and Google – so we still have the agility and efficiency of a cloud service, the difference being that your data is still your data and remains local inside your datacenter. Igneous also provides visibility into your data, allowing you to do capacity management and run analytics against the data consumed.
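To illustrate what "same familiar API calls" means in practice, here's a minimal sketch of the path-style S3 PUT an object-storage client issues – only the endpoint host changes when you point at an on-prem box instead of AWS. The endpoint, bucket, and key names are made up, and real S3 requests would also carry SigV4 auth headers, which I've omitted:

```python
from urllib.request import Request

# Hypothetical on-prem appliance endpoint; against AWS this would be
# the public S3 URL instead -- the rest of the call is identical.
ENDPOINT = "https://igneous.dc.internal"

def put_object_request(bucket: str, key: str, data: bytes) -> Request:
    # Same path-style PUT an S3 client would issue (auth signing omitted)
    return Request(f"{ENDPOINT}/{bucket}/{key}", data=data, method="PUT")

req = put_object_request("syslog-archive", "2016/11/fw01.log", b"...")
```

In other words, existing S3-aware tooling should need little more than an endpoint swap to target local storage.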
Igneous has an interesting solution and one that I feel can be incredibly useful. How it integrates with other products is interesting to me: if they support the S3 API, then technically we should be able to find some way to use Igneous with third-party products that can send data to Amazon. I'm thinking of backup and other products here which have the ability to copy data to S3 – we could essentially place an Igneous box at our DR site and possibly copy the data there, keeping it within our organization. We will most definitely find out more about Igneous and their local cloud solution come Tech Field Day 12 when they present. I encourage you to follow along – I'll have the live-stream up on my page here, and you can also find a ton more information over at the official Tech Field Day 12 page! Thanks for reading.
DriveScale – another company presenting at Tech Field Day 12 that I know very little about – so consider this post a homework assignment on my part, to at least learn a little bit about the company, the problems they are trying to solve, and the products and services offered. Just like the last company I had a look at, StorageOS, DriveScale is relatively young in the IT vendor space. Not that that's a bad thing – startups normally execute quickly and solve real-world issues that exist today. DriveScale has been around since 2013 but just came out of stealth in May of this year – so naturally, this is their first appearance at a Field Day event. Before we get into what DriveScale does and how their technology works, we should take a look at something that piqued my interest right off the hop – the founders. To best understand, let me list some of their highly summarized accomplishments in bulleted form – I think you will be a little impressed.
- Holds 31 patents to his name in core datacenter areas – 31, yes, 31!
- Technology Fellow at Nuova (Eventually acquired by Cisco and baked into UCS/Nexus platform)
- Founder of Nuova (Eventually acquired by Cisco and baked into UCS/Nexus platform)
- Employee #8 at Sun Microsystems – think about that for a minute – Sun Microsystems, Employee #8
- conceived of and led development of the Sun Ray desktop!
- Held CTO/Vice President positions at Technicolor, Trident, and Silicon Image
The list goes on and on, but man, those are some hefty accomplishments to have at the helm of one company. Anyways, what they have done is not as important as what they are doing now, so let's have a look at that.
DriveScale's whole solution is based around being a smarter scale-out option – offering a rack-scale architecture which includes both hardware and software to bring "advantages of proprietary scale-up infrastructure environments to the commodity of the scale-out world". When I read this I kind of thought: huh, I don't get it – it sounds good, but I really don't know what it means. This is mostly due to the fact that they target Hadoop and big data environments, something I'm not well versed in at all! I'm sure we will all learn more when they present at TFD, but for now, here's what I can gather about DriveScale's solution.
Basically, they take a group of physical servers and disaggregate them into pools of both compute and storage resources. Converting these resources into what they call "Software Defined Physical Nodes" allows DriveScale to use both software and hardware to present them to our applications, with the ability to grow and shrink as needed. When the time comes to scale out, we aren't faced with the usual challenge of purchasing pre-defined nodes where compute and storage come married together – instead, we can leverage DriveScale to simply add more compute by bringing more physical nodes into the pool, or add more storage by importing a bunch of commodity JBODs. In the end, we can scale up or down as much compute and storage as we need, without having to worry about things like data locality – because DriveScale sits between our compute and storage resources.
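The disaggregation idea above can be sketched in a few lines of Python – this is purely conceptual, not DriveScale's actual software: servers and drives live in separate pools and are only bound together when a logical node is composed. All names are invented for illustration:

```python
# Separate pools: compute scales independently of storage
compute_pool = ["server-1", "server-2", "server-3"]
jbod_pool = [f"disk-{i}" for i in range(12)]

def compose_node(n_disks: int) -> dict:
    """Bind a free server to n_disks free drives, forming a logical node."""
    server = compute_pool.pop(0)
    disks = [jbod_pool.pop(0) for _ in range(n_disks)]
    return {"server": server, "disks": disks}

node = compose_node(4)
# Scaling storage later just means adding JBODs to jbod_pool, and scaling
# compute means adding servers -- neither purchase drags the other along.
```

The contrast with traditional scale-out nodes is that there, the server and its disks are "married" at purchase time; here the binding is a runtime decision.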
This is all made possible by a couple of hardware and software components. First, we have the DriveScale Management Server and the DriveScale software agents – these provide the core functionality of the product, pooling all of the compute and storage resources within the rack into logical nodes. All data runs through a hardware appliance called the DriveScale Adapter, which is basically a central connection point for all of our physical servers over a 10GbE network.
There is certainly a lot more that DriveScale's solution provides – things like High Availability and visibility – but before I completely make a fool of myself explaining how this all works, I'll just leave this post off right here. Here's hoping I can learn a bit more about how DriveScale technically does all of this at Tech Field Day 12 – and hopefully convey that information back 🙂 That said, if you want to learn more about DriveScale for yourself, their Tech Field Day 12 presentation will be live-streamed. I'll hopefully have it set up on my page here, or if you wish, keep your eyes on the official Tech Field Day 12 page.
As with previous Field Day events I've participated in, I always like to do a little research on the companies presenting so I don't simply go in blind, not knowing what they do. First up for this year is StorageOS – a company I'd not heard of until this very minute of writing! Now, in terms of IT time StorageOS has been around for only a brief second, so my ignorance is maybe a little justified – it was just in 2013 that four men (Chris Brandon, Alex Chircop, Simon Croome, and Frank Black), all from the financial services industry, united over their frustrations with the way the 'all too legacy' storage industry was intermingling with the 'all the rage' container industry, and with that, StorageOS was born.
So what’s the big deal?
Containers, containers, containers – that's the big deal! We know containers are a big buzzword in the industry right now – we see them all over the place. As soon as the Docker project first hit the scene, interest began to spark everywhere – and just recently we've seen the godfathers come into the space, with Microsoft partnering up with Docker to introduce container support in Windows Server 2016 and VMware doing much the same with their latest announcements around vSphere Integrated Containers. With all of this happening I'm sure we are about to see a rise in the number of containers being deployed – but with this will come new challenges, challenges which are indeed already present today…
When we think about a container we think of it as being stateless, or ephemeral in nature – meaning if one instance goes down we can simply spin another one up. If we need to scale, we simply spin another one up. If we need to move a container from one spot to another, we simply destroy one and create another. No dependencies, no worries about conflicting packages, just pure developer zen. This flexibility is a big part of the benefit of containers. But taking this attitude and applying it to enterprise applications – or pretty much any application – creates some problems. I think we can all agree that most applications require some sort of persistence – some data that needs to be stored somewhere, be it a database or persistent files – and with containers being destroyed and instantiated all the time, where does this persistent storage reside?
Where does this persistent storage reside? This is the question StorageOS has the answer for! StorageOS at its most basic aims to bring an enterprise-class persistent storage platform to our existing containers, but delivers it in an agile, automated, container-like way! In fact, StorageOS runs itself as a container within a Linux-based system and locates any storage available to it at the time, whether direct-attached, network-attached, or cloud-attached. All of this storage gets added to a pool – scale-up is supported by simply adding more local/network storage to a node, and scale-out is fully supported by instantiating new StorageOS nodes. This storage is then made available to our container engines – be it Docker, VMware, AWS, Google Cloud, etc. – via plugins. From there, we simply specify the StorageOS volume driver flag during container run commands and we have ourselves some persistent storage shared out to our containers.
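As a rough sketch of what "specify the volume driver flag" looks like, here's a small Python helper that composes the `docker run` invocation using Docker's `--volume-driver` flag. The volume, mount point, and image names are made up for illustration, and this assumes a host with the StorageOS plugin installed:

```python
def docker_run_with_storageos(volume: str, mountpoint: str, image: str) -> list:
    """Build the docker CLI argv for a container backed by a driver-managed volume."""
    return [
        "docker", "run", "-d",
        "--volume-driver", "storageos",   # route volume operations to the plugin
        "-v", f"{volume}:{mountpoint}",   # named volume outlives any one container
        image,
    ]

cmd = docker_run_with_storageos("pgdata", "/var/lib/postgresql/data", "postgres:9.6")
# subprocess.run(cmd) would then launch it on a suitably configured host
```

The key point is that the named volume lives in the driver's storage pool, so the container can be destroyed and recreated anywhere the pool is reachable without losing its data.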
StorageOS also provides a number of features we would expect to see in a storage array – think High Availability, failover, encryption, caching, and deduplication – but due to its design it also brings to the table a number of things that other storage arrays simply don't have! First up, think about where it runs – if our containers are running within the four walls of our datacenter, then hey, instantiate StorageOS storage in the same spot! Instantiating container instances in the cloud? Well, why not instantiate your storage right next to them as well. It's data locality like this that provides the low-latency, high-performing storage we all need! It's traditional persistent storage, but with the flexibility and efficiency of containers!
I find what StorageOS is doing very interesting and can see a lot of other “use-cases” that can pop up due to their architectural design. Certainly migration of data to the cloud is one of them. This will be StorageOS’ first Tech Field Day presentation and I’m happy I’ll be a part of it and excited to learn more about their technology. They are up at 2PM on Tuesday, November 15 – so keep tabs on the official event page as well as my Field Day page here where I hope to have the live-stream up and running for folks!