Tag Archives: TFD
Ah Docker – probably the coolest logo of any tech company I know! Certainly as of late that whale has been all the rage – well, more so those containers sitting on top of him. We've seen the likes of Microsoft and VMware declaring support for Docker, and we have seen startups spawning around Docker to support things such as management and persistent storage. All of this says to me that containers and Docker are gearing up to go mainstream and start being utilized in a lot more enterprises around the world. Docker is the last company to present at Tech Field Day 12 – and in my opinion "last but not least" certainly applies to this situation.
So who’s Docker?
So, in terms of who/what Docker is – well, they are kind of one and the same. Confused? Docker is essentially an open source project, whereas Docker Inc is the company that originally authored said project. While the use of Docker containers is most certainly free, Docker the company sells services and solutions around them…
So what does Docker offer?
First up is support – open source software is great and all, but for adoption in the enterprise you certainly need someone you can call upon when things go awry – companies like Red Hat and Canonical know this space well. The software is free, support is extra – and that's one area where Docker the company comes into play, offering support on a case by case basis as well as premium subscriptions around the container world.
Next is Docker Datacenter – at its core, Docker Datacenter is a service which gives customers the same agility, efficiency, and portability of containers while bringing security, policy, and controls into the mix. All things that, again, enterprises prefer when going "all-in" on a product. It can be deployed either on-premises or in a virtual private cloud type deployment hosted by Docker.
To be totally honest I've read a lot about containers but haven't actually been involved in any "production" deployments, as I've been striving to find use-cases for them. I can see this changing in the future – with VMware moving into the space and making it easier and easier to deploy containers alongside your virtual machines, it's only a matter of time before containers really hit mainstream. I'm excited to see what Docker has to talk about during Tech Field Day 12. If you want to follow along, the whole event will be live-streamed. I'll hopefully have the stream going, as well as all my other Field Day content, on my page here – and for more information on everything Tech Field Day 12, head over to the official page here. Thanks for reading!
Today we will continue our Tech Field Day 12 preparation of trying to get a grasp on some of the companies presenting at the event. Next up, Igneous Systems – again, another company I've not had any interaction with or really even heard of. With that, let's take a quick look at the company and the services, solutions, and products they provide.
Who is Igneous?
Founded in 2013, Igneous Systems is based out of Seattle and entered the market looking to solve the issues around large unstructured data and public cloud. Their founders have a fairly strong storage background – Kiran Bhageshpur (CEO/co-founder) and Jeff Hughes (CTO/co-founder) both come from engineering roles in the Isilon division at EMC, and Byron Rakitzis (Architect/co-founder) was the first employee hired at NetApp, responsible for a good chunk of code there and holding over 30 patents to his name. I'm always interested in seeing the paths that startup founders have taken – this appears to be the first go-around for these three, so let's hope they are successful!
Igneous – True Cloud for Local Data
These three have set out to bring the benefits and agility of public cloud down into the four walls of your datacenter. If we think about the different types of data flowing around within the enterprise today, we can identify quite a few that just aren't a good fit to ship up to services like Amazon S3. Think IoT, with sensors that can generate a vast amount of data that you may want to access often – it may not be cost efficient to ship this data up to the cloud for storage. Other types of data, such as security or syslog data, fall into that same category. Aside from the sheer volume, enterprises also struggle with what to do with large datasets such as media content. But the real driving factor behind keeping most data out of services such as S3 comes in terms of security and compliance – we may just not want our sensitive data sitting outside of our buildings!
The problem is that enterprises still want the agility of public cloud. They want to be able to budget in terms of storing this data – and after you buy a big honking box of storage to sit in your datacenter, it's pretty hard to scale down and somehow reclaim those dollars initially spent! This is where Igneous comes into play.
Igneous is a hardware appliance – it's still that big honking box of storage that sits inside our firewall – the difference being we don't actually buy it, we rent it. The terms of this rental contract are based around capacity – a "pay as you go" type service. Now you may be thinking: yeah, great, we don't have to pay for it upfront, but we still have storage that we have to manage! That's not the case. When Igneous is engaged, they deliver the appliance to your datacenter, they install it, and they manage it throughout its lifetime, meaning hardware and software upgrades are all performed by Igneous during the lifetime of the contract.
But the real advantage of Igneous, like most other products, comes in terms of software. Having local storage is great, but if it can't be accessed and utilized the same way as services such as S3 and Google Cloud, then we haven't really deployed the cloud into our datacenter. The APIs provided by the Igneous box are accessed using the same familiar API calls you are used to with services like Azure, S3, and Google – so we still have the agility and efficiency of a cloud service, with the difference being that your data is still your data and remains local inside your datacenter. Obviously Igneous also provides visibility into your data, allowing you to do capacity management and run analytics against the data consumed.
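The "same familiar API calls" point is easiest to see in code. Below is a minimal, purely illustrative sketch (the class, endpoints, bucket, and key names are all invented – this is not Igneous's actual SDK): client code written against an S3-style put/get interface doesn't change when the endpoint moves from public cloud to an on-prem appliance.

```python
class S3Like:
    """Toy in-memory stand-in for an S3-compatible object store.
    The point: client code uses the same put/get calls whether the
    endpoint is public cloud or an on-prem appliance."""

    def __init__(self, endpoint):
        self.endpoint = endpoint
        self._buckets = {}

    def put_object(self, bucket, key, body):
        self._buckets.setdefault(bucket, {})[key] = body

    def get_object(self, bucket, key):
        return self._buckets[bucket][key]


cloud = S3Like("https://s3.amazonaws.com")           # public cloud
local = S3Like("https://igneous.example.internal")   # hypothetical on-prem endpoint

# Identical client code runs against both targets:
for store in (cloud, local):
    store.put_object("sensor-data", "device42/readings.json", b'{"temp": 21}')
    assert store.get_object("sensor-data", "device42/readings.json") == b'{"temp": 21}'
```

With a real S3 client such as boto3, the same idea shows up as simply pointing the client's endpoint URL at the local appliance instead of amazonaws.com.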
Igneous has an interesting solution and one that I feel can be incredibly useful. How it integrates with other products is interesting to me. Essentially, if they support the S3 API, then technically we should be able to find some way to use Igneous with other third-party products that can send data to Amazon. I'm thinking of backup and other products here which have the ability to copy data to S3 – we could essentially place an Igneous box at our DR site and possibly copy the data there, keeping it within our organization. We will most definitely find out more about Igneous and their local cloud solution come Tech Field Day 12 when they present. I encourage you to follow along – I'll have the live-stream up on my page here, and you can also find a ton more information over at the official Tech Field Day 12 page! Thanks for reading!
Before we get too far into this post let’s first get some terminology straight. StorMagic refers to your remote or branch offices as the “edge” – This might help when reading through a lot of their marketing material as sometimes I tend to relate “edge” to networking, more specifically entry/exit points.
StorMagic, a UK-based company founded in 2006, set forth to develop a software-based storage appliance that enterprises can use to solve one big issue – shared storage at the edge. StorMagic is another one of those companies that presented at VFD4 with a really strong sense of who their target markets are – they aren't looking to go into the data center (although there is no reason they can't), and they aren't looking to become the end-all be-all of enterprise storage (although I'm sure they would love that) – they simply provide a shared, highly available storage solution for those companies that tend to have many remote branch offices with a couple of ESXi (or Hyper-V) hosts. On the second day of VFD4, in a conference room at SolarWinds, StorMagic stood up and explained how their product, SvSAN, can solve these issues.
Another VSA but not just another VSA
Choosing between deploying a traditional SAN vs. a VSA is a pretty easy thing to do – most times it comes down to the sheer fact that you simply don't have enough room at your remote site to deploy a complete rack of infrastructure, nor do you have the resources on site to manage the complexity of a SAN – so a VSA presents itself as a perfect fit. That said, there are a ton of VSAs on the market today, so what sets StorMagic apart from all the other players in the space? Why would I choose SvSAN over any other solution? To answer these questions, let's put ourselves in the shoes of a customer in StorMagic's target market – a distributed enterprise with anywhere from 10 to 10,000 remote "edge" offices.
One of the driving forces behind SvSAN's marketing material is the fact that you can set up your active/active shared storage solution with as little as 2 nodes. 2 – two. Most VSA vendors require at least a three-node deployment, and justifiably so – they do this to prevent a scenario called split-brain. Split-brain occurs when nodes within a clustered environment become partitioned, with each surviving node thinking it's active, which results in a not-so-appealing situation. So how does StorMagic prevent split-brain with only two nodes? The answer lies in a heartbeat mechanism called the Neutral Storage Host (NSH). The NSH is recommended and designed to run centrally, with one NSH supporting multiple SvSAN clusters – think one NSH supporting 100 remote SvSAN sites. The NSH communicates back and forth with the SvSAN nodes in order to determine who is up and who is down, thus acting as the "tie breaker" in the event the nodes become partitioned. That said, while the NSH is an important piece of the SvSAN puzzle, it doesn't necessarily need to run centralized – for those sites that have poor or no bandwidth, the NSH can run locally at the site on any Windows, Linux, or Raspberry Pi device you want. Beyond the heartbeat mechanisms of the NSH, SvSAN also does a multitude of things locally between the two nodes to prevent split-brain – it can utilize any one of its networks, be it the management, iSCSI, or mirroring network, to detect and prevent nodes from becoming partitioned. So what advantages come from not requiring that third node of compute within the cluster? Well, one less VMware license, one less piece of hardware you have to buy, and one less piece of infrastructure you need to monitor, troubleshoot, and back up – which can add up to a pretty hefty weight in loonies if you have 10,000 remote sites.
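To make the tie-breaker idea concrete, here's a hedged sketch of the decision a two-node cluster with an external witness might make – the function name and logic are my illustration of the general quorum pattern, not StorMagic's actual implementation:

```python
# Illustrative two-node quorum with an external witness (NSH-style tie breaker).
# This is a toy model of the general pattern, not StorMagic's code.

def may_stay_active(can_see_peer, can_see_witness):
    """A node keeps serving I/O if it can still see its partner (normal
    mirrored operation), or, failing that, if the external witness still
    vouches for it. Otherwise it must stand down to avoid split-brain."""
    if can_see_peer:
        return True             # normal mirrored operation
    return can_see_witness      # partitioned: the witness breaks the tie


# The mirror link drops, but only node A can still reach the witness:
assert may_stay_active(can_see_peer=False, can_see_witness=True)        # A survives
assert not may_stay_active(can_see_peer=False, can_see_witness=False)   # B stands down
```

The key property is that the two nodes can never both pass the check while partitioned, since a single witness only vouches for the side it can actually reach.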
Aside from lowering our infrastructure requirements, SvSAN brings a lot of enterprise functionality to your remote sites. It acts in an active/active fashion, synchronously replicating writes between the nodes. When a second node of SvSAN is introduced, a second path to the VMs' storage is presented to our hosts. If at any time one host fails, the other host containing the mirrored data can pick up where it left off, which essentially allows VMware HA to take the VMs that were running on local storage on the failed host and restart them on the surviving host using local storage. While the failed node is gone, the surviving SvSAN node journals and writes metadata about the changes that occur in the environment, minimizing the time it will take to re-synchronize when the original node returns. That said, the original node isn't required for re-synchronization – the SvSAN architecture allows a replacement node to come up on different hardware or even different storage. This newly added node will be automatically configured, set up, and re-synchronized into the cluster – and the same goes for the third, fourth, fifth node and so on, with just a few clicks.
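The journaling-to-minimize-resync idea can be sketched roughly like this – a toy model of my own, not SvSAN's code: the survivor tracks which blocks changed while its peer was down, so re-synchronization copies only those blocks instead of the full dataset.

```python
# Toy model of journal-assisted mirror resynchronization (illustrative only).

class MirrorJournal:
    def __init__(self):
        self.dirty = set()

    def write(self, block, storage, data):
        storage[block] = data
        self.dirty.add(block)       # remember what changed while degraded

    def resync(self, src, dst):
        for block in self.dirty:    # copy only the changed blocks,
            dst[block] = src[block] # not the entire dataset
        self.dirty.clear()


src = {0: "a", 1: "b", 2: "c"}
dst = dict(src)                     # mirror copy before the failure
journal = MirrorJournal()
journal.write(1, src, "b-new")      # peer is down; survivor journals the change
journal.resync(src, dst)            # peer returns; only block 1 is copied
assert dst == src
```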
As far as storage goes, SvSAN can take whatever local or network storage you have presented to the host and use it for its datastores. The appliance itself sits on a datastore local to the host, consuming somewhere in the neighborhood of 100GB – from there, the remaining storage can be passed straight up to SvSAN in a JBOD, RDM, or "VMDK on a datastore" fashion. SvSAN also gives us the ability to create different storage tiers, presenting different datastores to your hosts depending on the type of disk presented, be it SATA, SAS, etc. In terms of SSD, SvSAN supports either running your VMs directly on solid state datastores, or carving up an SSD tier to be used as a write-back cache to help accelerate some of those slower tiers of storage.
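The write-back caching behavior described above can be sketched as follows – a toy model (class and names invented, not SvSAN's implementation) where writes are acknowledged from the fast SSD-like tier and destaged to the slower tier later:

```python
# Toy write-back cache in front of a slow storage tier (illustrative only).

class WriteBackCache:
    def __init__(self, backing):
        self.backing = backing      # the slow (e.g. SATA) tier
        self.cache = {}             # the fast (e.g. SSD) tier

    def write(self, key, value):
        self.cache[key] = value     # acknowledged from the fast tier

    def flush(self):
        self.backing.update(self.cache)  # destage to the slow tier
        self.cache.clear()

    def read(self, key):
        # prefer the fast tier; fall back to the slow tier
        return self.cache.get(key, self.backing.get(key))


slow = {}
c = WriteBackCache(slow)
c.write("blk7", b"data")
assert c.read("blk7") == b"data" and "blk7" not in slow  # not yet destaged
c.flush()
assert slow["blk7"] == b"data"
```

The design trade-off is the usual one: writes complete at SSD speed, at the cost of the dirty data living only in the cache until it is destaged.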
In terms of management, StorMagic is fully integrated into the vSphere Web Client via a plug-in. From what I've seen, all of the tasks and configuration you need to perform are done through very slick, wizard-driven menus within the plug-in, and for the most part StorMagic has automated a lot of the configuration for you. When adding new nodes into the VSA cluster, vSwitches, network configurations, and iSCSI multipathing are all set up and applied for you – and when recovering existing nodes, surviving VSAs can push configuration and IQN identifiers down to the new nodes, making the process of coming out of a degraded state that much faster.
Wait speaking of VMware
Worst transition ever, but hey, who better to validate your solution than one of the hypervisors you run on? As of Feb 4th, VMware and StorMagic have announced a partnership which basically allows customers to couple the new vSphere ROBO licensing with a license for SvSAN as well. Having VMware, who took a shot at their own VSA in the past (ugh, remember that?), choose your product as the one they bundle their ROBO solutions with has to be a big boost of confidence for both StorMagic and their potential customers. You can read more about the partnership and offering here – having both products bundled together is a great move on StorMagic's part IMO, as it can really help push both adoption and recognition within the VSA market.
Should I spend my loonies on this?
IMO StorMagic has a great product in SvSAN – they have done a great job of stating who their target market is and who they sell to, and of fielding questions to no end with that market in mind. HA and continuous uptime are very important to those enterprises that have a distributed architecture. They've placed these workloads at the "edge" of their business for a reason – they need the low latency, and honestly, the "edge" is where a company makes its money, so why not protect it? With that said, I see no reason why an SMB or mid-market business wouldn't use this within their primary data center and/or broom closet, and I feel StorMagic could really benefit by focusing some of their efforts in that space – but that's just my take, and the newly coupled VMware partnership, combining SvSAN with the ROBO licenses, kind of invalidates my thinking and validates that of StorMagic – so what do I know. Either way, I highly recommend checking out StorMagic and SvSAN for yourself – you can get a 60-day trial on their site and you can find the full library of their VFD4 videos here.
In a previous post I highlighted 4 of the 8 sponsors taking part in Virtualization Field Day 4 coming up January 14th through 16th in Austin, Texas. Now it’s time to move on to the final four! As mentioned in the previous post, the Tech Field Day events would certainly not be possible without the support of the sponsors so a big thanks goes out to all 8 who are participating this round. Without further ado, let’s get to it…
If we look at the hyper-convergence market today, it would be somewhat of an understatement to say that it is "red hot". SimpliVity, along with their competitor Nutanix and fellow VFD sponsor Scale Computing, have really changed the way companies are deploying in-house IT. Even VMware has jumped on board by providing OEMs a go-to-market strategy leveraging their EVO:RAIL hyper-convergence reference architecture. It's a fair statement to say that hyper-convergence is here to stay, and a big part of that is due to the technology and material that SimpliVity has produced. Their product, the OmniCube, provides customers with a scalable, building-block type architecture, encapsulating server compute, storage, networking, and switching into a single pool of resources. I've seen SimpliVity's solution in action many times during tradeshows and VMworld, but never "out in the wild." Honestly, I think they have a great solution and there are a lot of things I like about it – the global source-side dedupe is awesome, and the compression is great. I also like the overall way SimpliVity goes to market, allowing commodity x86 hardware to take their software along with their custom-built hardware accelerator PCIe card, essentially ending up with a choose-your-own-adventure type deployment. The hyper-convergence market is "the in thing" right now, so I can't wait to see what SimpliVity has in store for VFD4. SimpliVity has dabbled in some of the Tech Field Day happenings in previous years, such as the SDDC Symposium and the TFD Extras held during VMworld, but this will be their first go at a full Tech Field Day event.
What more can we say about Tech Field Day and SolarWinds?! They have been a long-time supporter of the event, participating in (I'm using abbreviations since there are a lot of them) NFD1, NFD3, NFD5, NFD6, TFD4, TFD6, TFD7, and TFD9 – that's quite a resume when it comes to sponsorship. As a company, I really respect the way SolarWinds handles the community surrounding them. I had the chance a few months back to participate as a thwack ambassador and I can't give this community enough praise! They are engaged, helpful, and smart! Be sure to check out thwack if you get a chance! But on to what matters – SolarWinds and their technology. This being a day about virtualization, one can only assume that SolarWinds will speak to their management software, cleverly titled "Virtualization Manager". I've personally never used the product but have seen it in action many times during demos, webinars, etc., and honestly, if you are utilizing both VMware and Hyper-V in your environment and looking for a monitoring/management solution, I wouldn't hesitate to recommend that you at least check out SolarWinds. They have a ton of fully customizable alerts and reports to help customers track things like CPU ready and memory ballooning, as well as a complete section to help with capacity planning by finding under- and over-sized VMs within the environment. All this integrates with other traditional SolarWinds products such as Server & Application Monitor (SAM). If you have other SolarWinds products in your environment, Virtualization Manager may be a perfect fit. Whatever SolarWinds is presenting at VFD4, I will be all ears and will for sure have the info posted here.
When it comes to StorMagic, they are one of those companies that I've heard of before but have never really looked too deeply at. Honestly, up until I began checking them out in more detail for this post, I had assumed that they were "just another VSA vendor". And in some ways, well, I'm right. But in a lot of ways, I'm wrong. StorMagic's product, SvSAN, is indeed a VSA, but not "just another VSA". SvSAN seems to serve a distinct type of customer – a customer with a large centralized infrastructure and many remote/branch offices that it supports. In a perfect world these remote sites would have crazy awesome fibre connections back to the central office, and all the applications, VMs, and services would be driven from the central office's datacenter. In the real world we have crappy WAN links and "needy" applications – applications that need to run inside those branch offices in order to provide low latency and meet performance requirements. Perfect world: we have the budget and infrastructure to throw at this problem – SANs in every remote office. Real world: there's no money! Back to the perfect world: we'd have IT staff in every office babysitting all of this stuff. Real world: we don't – we have staff sitting in our central offices running rampant, having the person who answers the phone offsite reboot servers for them! SvSAN really helps bridge these perfect and real worlds. By utilizing the SvSAN VSA in the remote sites, we are able to provide shared storage to our remote locations in an active/active fashion with as little as 2 nodes, all managed centrally. Watch for more on StorMagic and SvSAN next week.
I'm pretty pumped to see VMTurbo at VFD4 since I know that one of the presenters, Eric Wright (twitter/blog), will be representing. The thing is, Eric, along with Angelo Luciani and myself, are co-leaders of the Toronto VMUG – which means we have seen countless presentations and sessions sitting side by side. It will definitely be cool to see Eric on the other side of the fence, and I'm sure he will knock it out of the park. As far as VMTurbo goes, like many other players they participate in the operations/management end of things. You can truly see that they have put a lot of development, time, and effort into their flagship product, Operations Manager. OM, like others, is a monitoring solution – looking for performance issues and troublesome areas in your environment, and making recommendations on how to alleviate them. OM, though, takes a drastically different approach than most monitoring tools. VMTurbo takes your virtual data centre and transforms it into what they call an "economic market". Picture this: your resources – things like memory, CPU, etc. – are all in demand, and they all have a cost. The cost of these items goes up and down depending on availability. Not much memory around? Cost goes up. Have an abundance of memory? Well, things are going to be a bit cheaper. The VMs are the consumers running around buying up all this merchandise. Depending on the product's recommendations – say a VM was looking to move from one host to another – it may or may not be a good idea: things may cost more over on that other host, so in turn, although the VM is experiencing issues, it may simply be too cost-prohibitive to move, leaving it right where it is. VMTurbo has been around a while, and they are a big player when it comes to community participation. If you are looking for a good primer, check out the videos from VFD3. Again, I'm excited to see Eric, excited to see VMTurbo, and excited to learn more about the interesting model they have.
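The economic-market idea can be illustrated with a toy model – the pricing formula and numbers here are entirely my own invention, not VMTurbo's algorithm: a resource's price climbs as its utilization approaches 100%, and a VM only "moves" if the destination is cheaper even after paying a relocation cost.

```python
# Toy supply/demand model of the "economic market" idea (illustrative only).

def price(utilization):
    """Price a resource higher as it becomes scarce (utilization -> 1.0)."""
    return 1.0 / max(1e-6, 1.0 - utilization)

def should_move(src_util, dst_util, move_cost=0.5):
    """Move only if the destination is cheaper even after the relocation cost."""
    return price(dst_util) + move_cost < price(src_util)


# A VM on a 95%-utilized host finds a 50%-utilized host far cheaper: move.
assert should_move(0.95, 0.50)
# A marginal price difference doesn't cover the cost of moving: stay put.
assert not should_move(0.60, 0.55)
```

That second case is the interesting one described above: even a VM that is "experiencing issues" can rationally stay where it is when the move itself costs more than it saves.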
See you in Texas!
So, there you have it. Between the above and this post, we have a brief look at the 8 sponsors who will be participating in Virtualization Field Day 4 next week (Jan 14-16) in Austin, TX. Simply reviewing old TFD YouTube videos has really got me pumped and excited, and I can barely wait to sit at those tables amongst all the brain power that is attending.
If you want to watch live, you can – just head to the VFD4 landing page during the event as all sessions are live streamed. Also, don’t forget to watch and participate via twitter using the #VFD4 hashtag.
I know, I didn't leave much to the imagination with the blog title, and as you may have guessed, I'm going to be attending Virtualization Field Day 4 in Austin, Texas this January!
I was ecstatic when I received the invitation, and it didn't take much convincing to get me to go! I've been a huge fan and supporter of the Tech Field Day format over the years, and not too many events go by where I don't catch a few sessions on the livestream. The fact that Austin is on average 30 degrees Celsius warmer than here in January sure does help too!
Aside from the heat, I'm definitely looking forward to being a part of VFD4. This will be the fourth installment of Virtualization Field Day and it takes place January 14th through the 16th in Austin, Texas. The Tech Field Day events bring vendors and bloggers/thought leaders together in a presentation/discussion style room to talk about anything and everything regarding given products or solutions. I'll point you to techfieldday.com for a much better explanation of the layout of the events.
This will be my first time as a delegate and I'm feeling very humbled to have been selected. Honestly, I get to sit alongside some of the brightest minds that I know. Thus far Amit Panchal (@AmitPanchal76), Amy Manley (@WyrdGirl), James Green (@JDGreen), Julian Wood (@Julian_Wood), Justin Warren (@JPWarren), and Marco Broeken (@MBroeken) have all been confirmed as delegates, with more to be announced as time ticks on. Some of these people I've met before, some I know strictly from Twitter, and others I haven't met at all, so I'm excited to catch up with some people as well as meet some new ones.
So far, 6 sponsors have signed up for #VFD4 – Platform9, Scale Computing, SimpliVity, SolarWinds, StorMagic, and VMTurbo. Just as with the delegates, some of these companies I know a lot about, some I know a little, and others I certainly need to read up on. Having seen many, and I mean many, vendor presentations in my lifetime, I have tremendous respect for those that sponsor and present at Tech Field Day. The sessions tend to be very technical, very interactive, and very informative – three traits that I believe make a great presentation. I'm really looking forward to seeing my fellow Toronto VMUG Co-Leader and friend Eric Wright (@discoposse) sitting on the other side of the table 🙂
Be sure to follow along via Twitter by watching the #VFD4 hashtag leading up to and during the event. Also a livestream will be setup so you too can watch as it all goes down.
I'm so grateful to everyone for this opportunity – so thank you to my peers, the readers of this blog, Stephen Foskett and all the organizers of the great Tech Field Day events, and the virtualization community in general – see you in Texas!