Tag Archives: Tech Field Day
DriveScale – another company presenting at Tech Field Day 12 that I know very little about – consider this post a homework assignment on my part – to at least learn a little bit about the company, the problems they are trying to solve, and the products and services offered. Just like the last company I had a look at, StorageOS, DriveScale is relatively young in the IT vendor space. Not to say that is a bad thing – normally startups execute quickly and solve real-world issues that exist today. DriveScale has been around since 2013 but just came out of stealth in May this year – so naturally, this is their first appearance at a field day event. Before we get into what DriveScale does and how their technology works, we should take a look at something that piqued my interest right off the hop – and that's the founders. In order to best understand this let me list each founder with some highly summarized bulleted accomplishments – I think you will be a little impressed.
- Holds 31 patents in core datacenter areas – 31, yes, 31!
- Technology Fellow at Nuova (Eventually acquired by Cisco and baked into UCS/Nexus platform)
- Founder of Nuova (Eventually acquired by Cisco and baked into UCS/Nexus platform)
- Employee #8 at Sun Microsystems – think about that for a minute – Sun Microsystems, Employee #8
- conceived of and led development of the Sun Ray desktop!
- Held CTO/Vice President positions at Technicolor, Trident, and Silicon Image
The list goes on and on for these guys but man those are some hefty accomplishments to have at the helm of one company for sure. Anyways, what they have done is not as important as what they are doing now, so let’s have a look at that.
DriveScale’s whole solution is based around being a smarter scale-out solution – offering a rack-scale architecture which includes both hardware and software to bring “advantages of proprietary scale-up infrastructure environments to the commodity of the scale-out world” <- When I read this I kind of thought, huh, I don’t get it – It sounds good, but I really don’t know what it means. This is mostly due to the fact that they really target Hadoop and big data environments, something I’m not well versed on at all! I’m sure we will all learn more when they present at TFD but for now here’s what I can gather around DriveScale’s solution.
Basically they take a group of physical servers and disaggregate these into pools of both compute and storage resources. Converting these resources into what they call “Software Defined Physical Nodes” allows DriveScale to use both software and hardware to present these resources to our applications, with the ability to grow and shrink these resources as needed. When the time comes to scale out we aren’t faced with the same challenges of purchasing pre-defined nodes where compute and storage come married together – Instead, we can leverage DriveScale to simply add more compute by bringing more physical nodes into the pool, or add more storage by importing a bunch of commodity JBODs. In the end, we can scale up or down as much compute and storage as we need, without having to worry about things like data locality – because we have DriveScale sitting between our compute and storage resources.
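To make the idea of composing "logical nodes" from separate pools a little more concrete, here's a toy sketch in Python. This is purely illustrative – it is not DriveScale's actual API or implementation, and all names here are made up – it just models the core disaggregation idea: compute and storage live in independent pools, and a logical node is a temporary binding between them.

```python
# Toy model of disaggregated rack-scale composition (NOT DriveScale's API).
# Compute and storage are pooled separately; a "logical node" is just a
# binding of one compute server to some number of drives.

class ResourcePool:
    def __init__(self, compute_nodes, jbod_disks):
        self.compute = list(compute_nodes)  # e.g. ["server-1", "server-2"]
        self.disks = list(jbod_disks)       # e.g. ["disk-0", "disk-1", ...]

    def compose_node(self, n_disks):
        """Bind one compute node to n_disks drives as a logical node."""
        if not self.compute or len(self.disks) < n_disks:
            raise RuntimeError("pool exhausted")
        return {"compute": self.compute.pop(0),
                "disks": [self.disks.pop(0) for _ in range(n_disks)]}

    def release(self, node):
        """Shrink: return a logical node's resources to the pools."""
        self.compute.append(node["compute"])
        self.disks.extend(node["disks"])

pool = ResourcePool(["server-1", "server-2"],
                    [f"disk-{i}" for i in range(8)])
node = pool.compose_node(n_disks=3)  # a storage-heavy logical node
print(node["compute"], len(node["disks"]))  # server-1 3
```

The point of the sketch: adding compute or storage just means growing the corresponding pool, and nothing forces the two to scale together the way married converged nodes do.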
This is all made possible by a couple of hardware and software components. First we have the DriveScale Management Server and the DriveScale software agents – these provide the core functionality of the product by pooling all of the compute and storage resources within the rack into logical nodes. All data runs through a hardware appliance called the DriveScale adapter, which is basically a central connection point for all of our physical servers over a 10GbE network.
There is certainly a lot more that DriveScale’s solution provides, things like High Availability and visibility – but before I completely make a fool of myself explaining how this all works I’ll just leave this post off right here. Here’s hoping I can learn a bit more about how DriveScale technically does all of this at Tech Field Day 12 – and hopefully convey that information back 🙂 That said, if you want to learn more about DriveScale for yourself their Tech Field Day 12 presentation will be live-streamed. I’ll hopefully have it set up on my page here, or if you wish, keep your eyes on the official Tech Field Day 12 page.
Yes Mr Cool J you heard that right – Although you don’t ‘think’ you are heading back, this guy is indeed ‘Goin back to Cali!’ While Mr Cool J would rather stay in New York I’m heading to Silicon Valley to partake in Tech Field Day 12 with a slew of great delegates and sponsors alike! This will be my first time in the Valley – so I’m pretty pumped to say the least! I’m excited to finally be in the heart of all of the companies and technologies that I’ve been using my whole life, and writing about here for the past 5 years or so!
So if you haven’t heard of Tech Field Day then you have most certainly been missing out! TFD is the brainchild of Stephen Foskett and his company Gestalt IT and is essentially a learning resource for the community. Now I know, I know, there are already many, many resources out there for us to find out about certain technologies or companies – we have white papers, books, blogs, videos, training, etc – but the problem is most of this stuff usually stems from strong marketing roots, and at times, it can be a bit overwhelming trying to weed out the message from the technology! TFD solves this by deep diving into the technology, and by placing a dozen or so tech-minded folks in a room with a vendor it helps to keep the presentations and messages on point – it’s about the technology, not the marketing! You know when you are sitting through a webinar or a presentation and someone poses a question – and said question is responded to with an “I’ll connect you with an SE or with someone afterwards to talk” – this kind of stuff doesn’t really happen at TFD – most of the time, vendors and companies presenting have the knowledge and the resources in the room to leave no question unanswered – that’s what I like to think TFD is!
Anyways, so yeah, the Valley – so excited for this!! Tech Field Day 12 has a number of great sponsors and vendors lined up to present at the event (you can see them above). Some of these companies are giants (Dell EMC, Intel), some fairly new to the market (Rubrik, Cohesity), some are all the rage right now (Docker), and honestly some I’ve never dealt with or even really heard of (StorageOS, DriveScale, Igneous). It’s normally the latter that really impress me at these events! So heads up, the time is near – TFD12 airs November 15th and 16th with two jam-packed days! To learn more about the event, certainly check out the official landing page!
As I have with the other TFD events I’ve participated in I’ll try to consolidate all of my content surrounding the event on a single page, which you can find here! A huge thanks to Gestalt IT for having me back! I can’t wait! Oh, and sorry for the 90’s hip hop references – it was as witty as I could get at the moment 🙂 Either way, I can almost hear those scratching records and that crazy jazz music which kicked off the song right now 🙂
Rubrik, the Palo Alto based company who strives to simplify data protection within the enterprise has recently announced a Series C worth a cool 61 million, doubling their total capital to 112 million since founding just over a couple of years ago! And as much as I love to hear about venture capital and money and whatnot I’m much more into the tech, as I’m sure my readers are as well! With that, alongside that Series C announcement comes a new release of their product, dubbed Rubrik Firefly!
Rubrik Firefly – A Cloud Data Management Platform
With this third major release from Rubrik comes a bit of a rebrand if you will – a cloud data management platform. Nearly all organizations today have some sort of cloud play in their business; whether that be to build out a private cloud and support legacy applications or consume public cloud resources for cloud-native applications – they all have some kind of initiative within their business that aligns with cloud. The problem Rubrik sees here is that the data management and data protection solutions running within those businesses simply don’t scale to match what the cloud offers. Simply put, customers need to be able to manage, secure, and protect their data no matter where it sits – onsite, offsite, cloud, no matter what stage of cloud they are at – thus spawning the Cloud Data Management Platform.
So what’s new?
Aside from a number of improvements and enhancements Rubrik Firefly brings a few big new features to the table; Physical Workloads, Edge Environments, and spanning across clouds. Let’s take a look at each in turn…
I had a chance to see Rubrik way back at Virtualization Field Day 5 where we got a sneak peek at their roadmap – at the time they supported vSphere only and had no immediate plans for physical workloads. The next time they showed up at Tech Field Day 10 they actually had a bit of a tech preview of their support for physical MSSQL – and today that has become a reality. As you can see they are moving very fast with development of some of these features! Rubrik Firefly adds official support for those physical SQL servers that you have in your environment, you know, the ones that take up so many resources that the DBAs just will not let you virtualize. Rubrik can now back these up in an automated, forever-incremental fashion and give you the same ease of use, efficiency, and policy-based environment that you have within your virtual workload backups. Firefly does this by deploying a lightweight Windows service, the Rubrik Connector Service, onto your SQL server, allowing you to perform point-in-time restores and log processing through the same UI you’ve come to know with Rubrik. Aside from deploying the service everything else is exactly the same – we still have the SLA policy engine, SLA domains, etc.
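For readers who haven't run into the "forever incremental" term before, the idea is simple: take one initial full backup, then every subsequent run captures only what changed since the last one. Here's a minimal sketch of that idea – the function names and block model are mine, not anything from Rubrik's implementation.

```python
# Hypothetical sketch of "forever incremental" backup: after one initial
# full, each run ships only the blocks that changed. Illustrative only -
# not Rubrik's actual on-disk format or API.

def incremental_backup(previous, current):
    """Return only the blocks that differ from the last backup."""
    return {addr: data for addr, data in current.items()
            if previous.get(addr) != data}

full = {0: "aaa", 1: "bbb", 2: "ccc"}      # initial full backup
disk_now = {0: "aaa", 1: "xyz", 2: "ccc"}  # block 1 changed since then
delta = incremental_backup(full, disk_now)
print(delta)  # {1: 'xyz'} - only the changed block is shipped
```

The payoff is that backup windows stay small forever, and a point-in-time restore is just the full plus the chain of deltas up to the chosen point.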
And they don’t stop at just SQL! Rubrik Firefly offers the same type of support for those physical Linux workloads you have lying around. Linux is connected into Rubrik through an rpm package, allowing for ease of deployment – From there Rubrik pulls in a list of files and directories on the machine, and again, provides the same policy based approach as to what to back up, when to back it up, and where to store it!
Both the SQL msi installer and the Linux rpm package are fingerprinted to the Rubrik cluster that creates them – allowing you to ensure you are only processing backups from the boxes you allow.
Although Rubrik is shipped as a physical appliance we all know that this is a software-based world – and that doesn’t change with Rubrik. The real value in Rubrik is the way the software works! Rubrik has taken their software and bundled it up into a virtual appliance aimed at Remote/Branch Offices. What this does is allow those enterprises with remote or branch offices to deploy a Rubrik instance at each location, all talking back to the mothership, if you will, at the main office. This allows for the same policy-based approach to be applied to those workloads running at the remote locations, thus allowing things such as replication back to the main office, archive to cloud, etc to be performed at the edge of the business along with at the main office. The Virtual Appliance is bundled as an OVA and sold on a “# of VMs protected” basis – so if you have only a handful of VMs to protect you aren’t paying through the nose to get that protection.
Finally we come to cloud spanning. Rubrik has always supported AWS as a target for archiving backups and brought us an easy to use efficient way of getting just the pieces of data we need back from AWS – but, we all know that Microsoft has been pushing Azure quite heavily as of late handing out lots and lots of credits! You can now take those spare credits and put them to good use as Firefly brings in support for Azure blob storage! The same searching and indexing technology that Rubrik has for Amazon can now be applied to Azure as well, giving customers options as to where they archive their data!
Bonus Feature – Erasure Coding
How about one more? With the Firefly release Rubrik now utilizes erasure coding, bringing in a number of performance and capacity enhancements to their customers with a simple software upgrade! Without putting hard numbers to it customers can expect to see a big increase in their free capacity once they perform the non-disruptive switch over to erasure coding!
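Rubrik hasn't published the details of their scheme in anything I've read, but the general capacity math behind erasure coding is easy to show with the simplest possible example: XOR parity (the RAID-5 idea). Everything below is a generic illustration, not Rubrik's implementation.

```python
# Minimal erasure-coding illustration using XOR parity (RAID-5 style).
# Generic example - NOT Rubrik's actual scheme. It shows why erasure
# coding frees capacity versus plain replication: you can survive a
# lost chunk while storing far less than a full extra copy.

def xor_bytes(chunks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]  # four data chunks
parity = xor_bytes(data)                     # one parity chunk

# Lose chunk 2; rebuild it from the survivors plus parity.
survivors = [data[0], data[1], data[3], parity]
rebuilt = xor_bytes(survivors)
assert rebuilt == data[2]

# Overhead: 5 chunks stored for 4 chunks of data (1.25x),
# versus 2x for a plain mirror of the same data.
print(len(data + [parity]) / len(data))  # 1.25
```

That 1.25x-versus-2x gap is the kind of headroom that shows up as "free capacity" after a cluster switches from replication to erasure coding, which is presumably why the upgrade alone can grow usable space.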
Firefly seems like a great step towards the cloud data management platform – a topology agnostic approach to wrapping policy around your data, no matter where it is, ensuring it’s protected and secured! The release of a Virtual Appliance perks my ears up as well – although it’s aimed directly at ROBO deployments now who knows where it might go in the future – perhaps we will see a software-only release of Rubrik someday?!? If you are interested in learning more Rubrik has a ton of resources on their site – I encourage you to check them out for yourself. Congratulations Rubrik on the Series C and the new release!
Am I looking forward to the presentation at Virtualization Field Day 5 from OneCloud? I have no idea! Why? Well, here is a company that I know absolutely nothing about! I can’t remember ever coming across OneCloud in any of my journeys or conferences! Honestly, I think this is the only company presenting at VFD that I have absolutely no clue about…
That will certainly change fast
OneCloud will present at VFD5 on June 24th at 1:00 PM where I’m sure we will all be enlightened a little more on the solutions they provide. That said I don’t like going in cold, knowing nothing about someone – thus, this preview blog post will at least help me understand a little bit about everything OneCloud has to offer…
So let’s start from the ground up. OneCloud is essentially a management platform for a hybrid cloud play. Their core technology, the Automated Cloud Engine (ACE), is the base on which they provide other services. From what I can tell ACE essentially facilitates the discovery of your on-premises data center, taking into account all of your VMs, physical storage and networking information. From here, ACE can take different business objectives and transform these into API calls in order to essentially replicate all your infrastructure into the public cloud – for now, it appears that only Amazon’s AWS is supported.
The service running on top of ACE is OneCloud Recovery. OneCloud Recovery allows organizations to facilitate a disaster recovery or business continuity solution involving the public cloud as the primary target – skipping costs and complexity of implementing a second or third site on premises.
So here is how it all happens from start to finish – OneCloud is deployed into your environment via the virtual appliance route. Another instance is also deployed into Amazon. From there it auto-discovers your environment; your networking setup, storage configurations, data and applications are all tied together and somewhat of a blueprint of your environment is created. You then use their policy engine to apply RTO and RPO objectives to your applications. OneCloud will then provision a fully functioning virtual data center in Amazon – one that mirrors your environment in terms of networking and configuration. OneCloud not only duplicates your environment into Amazon, but it will also optimize both your compute and storage in order to minimize costs. Meaning it will scale down on CPU where it believes it can and place your data onto the most cost-effective storage. Once your data is there OneCloud performs ongoing replication in order to meet the RPO you have selected. From there it’s just a matter of performing your normal DR tests and engaging in any failover (and failback) operations.
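The two policy-driven pieces of that flow – turning an RPO into a replication schedule, and right-sizing cloud compute to cut cost – can be sketched in a few lines. To be clear, every name and formula here is my own invention for illustration; OneCloud's actual policy engine is a black box to me at this point.

```python
# Hypothetical sketch of the DR-policy ideas described above. None of
# these names or formulas come from OneCloud's actual product.

def replication_interval(rpo_minutes):
    """Replicate at least twice per RPO window, so a failure between
    replications never loses more than the stated objective."""
    return max(1, rpo_minutes // 2)

def right_size(on_prem_vcpus, avg_utilization):
    """Scale down cloud vCPUs where observed utilization allows."""
    return max(1, round(on_prem_vcpus * avg_utilization))

app = {"name": "erp", "vcpus": 8, "util": 0.30, "rpo": 60}
plan = {
    "replicate_every_min": replication_interval(app["rpo"]),
    "cloud_vcpus": right_size(app["vcpus"], app["util"]),
}
print(plan)  # {'replicate_every_min': 30, 'cloud_vcpus': 2}
```

The interesting part is that the DR copy doesn't have to mirror the production sizing at all until a failover actually happens – which is where the cost savings over a second physical site come from.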
OneCloud seems to have some interesting technology and I’m looking forward to learning more at VFD5. Some questions for OneCloud that come to mind – How do they compare to VMware’s vCloud Air DR services? Do they plan on expanding out to other public clouds such as Google, Azure, or vCloud Air? With a strong software base in ACE do they plan on moving outside just the DR/BC realm – things such as DevOps and public cloud labs come to mind. I really like how they are abstracting away what can be some very complicated API calls to Amazon – any time a company provides a solution that involves simplicity it’s always a good thing, but especially so when dealing with the complex networking and configuration of public cloud and disaster recovery. If you would like to learn more about OneCloud with me you can do so by watching the live stream on the VFD5 event page. That stream, along with any other content created by myself will be posted on my VFD5 event page as well.
It’s been quite a long time since my last “Friday Shorts” installment and the links are certainly piling up! So, without further ado here’s a few tidbits of information that I shared over the last little while…
A little bit of certification news!
VMware education and certification has certainly taken its fair share of backlash in the last few months, and honestly it’s rightly deserved! People don’t like when they invest in a certification, both in money and time, just to have an expiry date placed on all their efforts! Either way, that’s old news and nothing is changing there. What I was most concerned about was whether or not I would be able to skip the upgrade of my VCP and just take a VCAP exam instead, which would in turn re-up my VCP. Then the announcement of no more VCAP was made – which threw those questions of mine for a loop – but now, after this announcement it appears that there will be an upgrade/migration path for those current VCAP holders to work towards the newly minted VCIX. Have a read and figure out where you fit in and start planning. I already hold a VCAP5-DCA so by taking the design portion of the VCIX I would be able to earn my VCIX certification in full – sounds good to me! Now we just need the flipping exam blueprints to come out so we can all get to studying! 🙂
New version of RVTools!
Yup, the most famous piece of “nice to haveware” has an updated version. I’ve used RVTools for quite some time now – as an administrator any piece of free software that I can get to help me with my job is gold! RVTools saves me a ton of time when gathering information as it pertains to my virtual environment and my VMs. If you haven’t used it definitely check it out – if you have, upgrade – you can see all of the new changes and download here!!
KEMP giving away LoadMaster!
Keeping on the topic of free tools let’s talk about KEMP for a moment! They are now offering their flagship KEMP LoadMaster with a free tier! If you need any load balancing done at all I would definitely check this out! Now, there are going to be some limitations, right – nothing in this world is completely free 🙂 Certainly it’s only community supported and you can only balance up to a maximum of 20 MB/s – but hey, it may be a great solution for your lab! Eric Shanks has a great introduction to how to get it up and going on his blog so if you need a hand check it out! I’ve also done up a quick review a few months back on load balancing your LogInsight installation with KEMP. Anyways, if you are interested in checking it out go and get yourself a copy!
You got your snapshot in my VVOL!
As my mind wanders during the tail end of the NHL season I often find my mind racing about different things during the commercial breaks of Habs games – this time I said to myself, self, do snapshots work the same when utilizing the new VVOL technology? Then myself replied and it said, hey self, you know who would know this answer – Cormac Hogan. A quick look at his blog and lo and behold there it was, a post in regards to snapshots and VVOLs. If you have some time check it out – Cormac has a great way of laying things out in quick and easy to follow blog posts and this one is no exception. In fact, before the first place team in the eastern conference returned from the tv timeout I had a complete understanding of it – now, back to our regularly scheduled programming.
#SFD7 – Did you see it?
It appears that most if not all the videos from Storage Field Day 7 have been uploaded from the Silicon Valley internets into the wide world of YouTube! There was a great list of delegates, vendors and presenters there so I would definitely recommend you check them out! There were crazy hard drive watches, fire alarms, and best of all, a ton of great tech being talked about! IMO the show could have done with just a few more memes though 🙂 With that said you can find all there is to know about Storage Field Day 7 over at GestaltIT’s landing page!
In a previous post I highlighted 4 of the 8 sponsors taking part in Virtualization Field Day 4 coming up January 14th through 16th in Austin, Texas. Now it’s time to move on to the final four! As mentioned in the previous post, the Tech Field Day events would certainly not be possible without the support of the sponsors so a big thanks goes out to all 8 who are participating this round. Without further ado, let’s get to it…
If we look at the hyper-convergence market today it would be somewhat of an understatement to say that it is “red hot”. Simplivity, along with their competitor Nutanix, and fellow VFD sponsor Scale Computing have really changed the way companies are deploying in-house IT. Even VMware has jumped on board by providing OEMs a go-to-market strategy leveraging their EVO:Rail hyper-convergence reference architecture. It’s a fair statement to say that hyper-convergence is here to stay and a big part of that is due to the technology and material that Simplivity has produced. Their product, the OmniCube, provides customers with a scalable, building-block type architecture while encapsulating server compute, storage, networking, and switching into a single pool of resources. I’ve seen Simplivity’s solution in action many times during tradeshows and VMworld, but never “out in the wild”. Honestly, I think they have a great solution and there are a lot of things I like about it – The global source-side dedupe is awesome, compression is great. I also like the overall way Simplivity goes to market, by allowing commodity x86 hardware to take their software, along with their custom-built hardware accelerator PCIx card, and essentially end up with a build-your-own-adventure type deployment. The hyper-convergence market is “the in thing” right now so I can’t wait to see what Simplivity has in store for VFD4. Simplivity has dabbled with some of the Tech Field Day happenings in previous years, such as the SDDC Symposium and the TFD Extras that are held during VMworld, but this will be their first go at a full Tech Field Day event.
What more can we say about Tech Field Day and SolarWinds?!?! They have been a long time supporter of the event, participating in (I’m using abbreviations since there are a lot of them) NFD1, NFD3, NFD5, NFD6, TFD4, TFD6, TFD7, and TFD9 – That’s quite a resume when it comes to sponsorship. As a company I really respect the way SolarWinds handles the community surrounding them. I had the chance a few months back to participate as a thwack ambassador and I can’t give this community enough praise! They are engaged, helpful and smart! Be sure to check out thwack if you get a chance! But, on to what matters – SolarWinds and their technology. This being a day about virtualization one can only assume that SolarWinds will speak to their management software, cleverly titled “Virtualization Manager”. I’ve personally never used the product but have seen it in action many times during demos, webinars, etc. and honestly if you are utilizing both VMware and Hyper-V in your environment and looking for a monitoring/management solution I wouldn’t hesitate to recommend that you at least check out SolarWinds. They have a ton of fully customizable alerts and reports to help customers track things like CPU ready and memory ballooning, as well as a complete section to help during capacity planning by finding under- and over-sized VMs within the environment. All this, integrating with other traditional SolarWinds products such as Server & Application Manager (SAM). If you have other SolarWinds products in your environment, Virtualization Manager may be a perfect fit. Whatever SolarWinds is presenting at VFD4 I will be all ears and for sure have the info posted here.
When it comes to StorMagic they are one of those companies that I’ve heard of before, but have never really looked too deeply at. Honestly, up until I began checking them out in more detail for this post I had assumed that they were “just another VSA vendor”. And in some ways, well, I’m right. But in a lot of ways, I’m wrong. StorMagic’s product, SvSAN, is indeed a VSA, but not “just another VSA”. SvSAN seems to serve a distinct type of customer. A customer with a large centralized infrastructure and many remote/branch offices that it supports. In a perfect world these remote sites would have crazy awesome fibre connections back to the central office and all the applications, VMs and services would be driven from the central office’s datacenter. In the real world we have crappy WAN links and “needy” applications – These needy applications need to run inside of these branch offices in order to provide low latency and meet performance requirements. Perfect world, we have the budget and infrastructure to throw at this problem – SANs in every remote office. Real world, there’s no money! Back to the perfect world, we’d have IT staff in every office babysitting all of this stuff. Real world, we don’t – we have staff sitting in our central offices running rampant, having the person that answers the phone offsite reboot servers for them! SvSAN really helps bridge these perfect and real worlds. By utilizing the SvSAN VSA in the remote sites we are able to provide shared storage to our remote locations in an active-active fashion with as little as 2 nodes, all managed centrally. Watch for more on StorMagic and SvSAN next week.
I’m pretty pumped to see VMTurbo at VFD4 since I know that one of the presenters, Eric Wright (twitter/blog), will be representing. The thing is Eric, along with Angelo Luciani and myself, are co-leaders of the Toronto VMUG – which means we have seen countless presentations and sessions sitting side by side – It will be definitely cool to see Eric on the other side of the fence and I’m sure he will knock it out of the park. As far as VMTurbo goes, like many other players they participate in the operations/management end of things. You can truly see that they have put a lot of development, time, and effort into their flagship product Operations Manager. OM, like others, is a monitoring solution – looking for performance issues and troublesome areas in your environment, and making recommendations on how to alleviate them. OM though takes a drastically different approach than most monitoring tools. VMTurbo takes your virtual data centre and transforms it into what they call an “economic market”. Picture this: your resources, things like memory, cpu, etc – these are all in demand, and they all have a cost. The cost of these items goes up and down depending on availability. Not much memory around, cost goes up – have an abundance of memory, well, things are going to be a bit cheaper. The VMs are the consumers running around buying up all this merchandise. And depending on the product’s recommendations, say a VM was looking to move from one host to another, it may or may not be a good idea. Things may cost more over on that other host, so even though the VM is experiencing issues, it may simply be cheaper to stay right where it is. VMTurbo has been around a while, and they are a big player when it comes to community participation. If you are looking for a good primer, check out the videos from VFD3. Again I’m excited to see Eric, excited to see VMTurbo, and excited to learn more about the interesting model they have.
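The supply-and-demand idea above can be boiled down to a few lines of toy code: price a resource inversely to how much of it is free, and only move a VM when the savings beat the cost of moving. This is purely my own illustration of the concept – VMTurbo's actual pricing and placement algorithms are certainly far more sophisticated.

```python
# Toy version of the "economic market" idea: scarce resources cost
# more, and a VM only moves if the savings beat the cost of moving.
# Purely illustrative - NOT VMTurbo's actual algorithm.

def price(free_ratio):
    """Price rises as free capacity shrinks (floored to avoid div-by-zero)."""
    return 1.0 / max(free_ratio, 0.05)

hosts = {"host-a": 0.10, "host-b": 0.60}  # fraction of memory free
current, move_cost = "host-a", 3.0        # our VM lives on host-a

quotes = {h: price(free) for h, free in hosts.items()}
cheapest = min(quotes, key=quotes.get)
savings = quotes[current] - quotes[cheapest]

# Move only when the price difference justifies the disruption.
decision = cheapest if savings > move_cost else current
print(quotes, "->", decision)
```

Run it and the VM relocates to host-b, since memory there is abundant and therefore "cheap" – but shrink the gap between the two hosts and it stays put despite its constraints, which is exactly the counter-intuitive behavior described above.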
See you in Texas!
So, there you have it. Between the above and this post we have a brief look into the 8 sponsors who will be participating in Virtualization Field Day 4 next week (Jan 14-16) in Austin, TX. Just simply reviewing old TFD youtube videos has really got me pumped and excited and I can barely wait to sit at those tables amongst all the brain power that is attending.
If you want to watch live, you can – just head to the VFD4 landing page during the event as all sessions are live streamed. Also, don’t forget to watch and participate via twitter using the #VFD4 hashtag.