Tag Archives: SFD13

The concept of “Scale In” for high volume data

When we think about scaling the infrastructure within our data centers, a couple of different models come to mind – we can scale out, which essentially means adding more servers or nodes to our application's cluster or infrastructure – or we can scale up, which means adding more resources to our existing servers to support the applications running on them.  But what's with this "Scale In" business?

First, let's look at the pros and cons of "Scale Out" and "Scale Up".

Scaling up tends to be a little easier on the licensing side of things and helps with the cooling and power bills in our data centers; however, it imposes a greater risk from hardware failure and tends to get expensive once we start to hit the maximums that a single server can hold.

Scaling out, while providing nice upgrade paths and a practically "unlimited" ceiling on availability, leaves a much bigger footprint within our data centers, resulting in higher cooling and power bills and possibly more dollar signs when license and maintenance renewals come due.

For our average, everyday workloads, whether we scale out or up may not have that great an effect on our bottom line – but what about this whole new trend of machine learning and big data analytics?  These types of workloads require an extremely large amount of resources to process data.  Scaling out and up in these situations certainly has a huge effect on our data center bills – and often it still doesn't provide the data locality and performance we need, since it forces us to rely too heavily on our networks, which in turn eventually need to scale to support more data flow.  So how do we overcome this?

X-IO Technologies may have the answer!

At SFD13 in Denver this June, X-IO Technologies invited us into their offices to see what they have been up to lately.  In fact, it had been three years since X-IO last participated in a Tech Field Day event, and a lot has changed since then!  At SFD5, X-IO talked about their flagship ISE technology – a general-purpose storage array targeted at the mid-market enterprise, with the usual features around performance, availability, and so on.  Fast forward to today and their story has completely changed.  While still supporting their older product lines, ISE and iglu, X-IO has shifted R&D resources and pivoted into the big-data market with their Axellio Edge solution – a converged storage and server appliance packing a lot of compute power and a ton of NVMe storage on the back end – their own "Scale In" solution!

Hello Axellio

Before delving into exactly how this Axellio "Scale In" solution performs, let's first take a look at what everyone is interested in – the hardware specs!  Axellio is a converged appliance – meaning it combines compute, memory, and storage into one 2U rack-mounted appliance.

As far as compute and memory go, Axellio contains 2 nodes, each supporting up to 44 CPU cores and 1TB of RAM – so yeah, do the math – we basically have 88 cores and 2TB of memory to work with here.

That said, the biggest benefit of Axellio in my opinion is the storage back end – Axellio's backplane supports up to 72 dual-ported NVMe SSDs.  Currently that brings Axellio's maximum capacity to 460TB with 6.4TB NVMe drives – and in the future, with larger drives, we are looking at a whopping 1 petabyte of storage – all at bus speed with NVMe performance – think greater than 12 million IOPS at 35 microseconds of latency!
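Since the capacity claim is really just slot count times drive size, the back-of-the-envelope math is easy to check – here's a quick sketch in Python (the future drive size is my assumption, not X-IO's published roadmap):

```python
drives = 72                # dual-ported NVMe SSDs per appliance
drive_tb = 6.4             # current per-drive capacity, in TB

print(drives * drive_tb)   # 460.8 -> the ~460TB figure quoted today

# With a hypothetical ~14TB NVMe drive (my assumption), the same 72 slots
# land right around the petabyte mark:
print(drives * 14.0)       # 1008.0 TB, i.e. roughly 1PB
```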

So….back to scaling in.

To help explain their "Scale In" concept, let's take a look at a logical diagram of the Axellio platform.  As we can see, Axellio doesn't function the same way as the traditional converged and hyperconverged appliances we see today – the storage, in essence, isn't distributed – meaning the server nodes do not have their own local storage that they pool together and present globally to a cluster, nor is it addressable by any sort of global namespace.  Although the FabricXpress functionality does allow for inter-node communication to support things like memory mapping back and forth between the nodes, they are essentially two distinct server nodes.

[Figure: Axellio logical diagram]

What we have here is basically two separate and distinct compute nodes, connecting to the same FabricXpress backplane and both accessing the same shared NVMe storage!  As you can start to imagine, this is where the "scale in" concept comes into play – we get the scale-out advantage of having two nodes, combined with the scale-up benefit of having a lot of cores and memory – all backed by the blazing speed of NVMe on the back end!

But the magic is in the software, right?

Of course – software rules the world today – but Axellio isn't providing you with any!  X-IO's play with Axellio isn't to sell you something to run your VMs on, or something where you simply pipe your data into some X-IO-built analytics engine – this isn't a general-purpose server!  Axellio is basically an OEM box – a box targeted at companies and enterprises that need a massive amount of compute and storage performance to solve specific problems.  Think things like streaming analytics or in-memory big data applications.  In the end, it's the customer who is left with the choice of how to leverage the Axellio platform – meaning they put the OS on the compute nodes, they determine whether they want RAID or any other form of availability on the storage, and they decide whether to use each server node independently or to set up some form of HA between the two – the customer is in full control!

One interesting use case they showed was an analytics engine where one server node takes on the role of writing the streaming data to the drives, while the other server node provides the compute for any real-time analytics that need to run against it!  Now, while this use case can be handled many different ways, Axellio does it at very high speed and very low latency – oh yeah, and within 2U of rack space!

So in the end I think X-IO Technologies is on to something with Axellio – and honestly, it appears to me that they are still "learning" how they plan to bring this to market!  Currently they are focusing on providing a hardware platform to a somewhat niche group of players, looking to solve very specific use cases and problems – a big change from directing all their efforts into a storage array market flooded with general-purpose vendors.  And rightly so – they need to explore this area and gather more data and use cases before going down any other roads with Axellio.  Where those roads may lead is yet to be determined, but in my opinion I can see one of two things happening with Axellio: it moves toward a reference architecture model – meaning we get in-depth documentation on how to do things like Hadoop or large-scale Splunk deployments on Axellio – or maybe, just maybe, X-IO Technologies has something in the works in terms of their own software that they can layer on top of Axellio!

If you want to learn more about X-IO Technologies and Axellio, certainly check out their website here.  You can also find their SFD13 recorded presentations here – and if you want to get really nerdy, I'd suggest watching Richard Lary talk about dedup and math!  And of course don't forget to check out the posts from fellow delegates Brandon Graves, Dan Frith, and Ray Lucchesi as well!  Thanks for reading!

Is there still a need for specialized administrators?

We have been hearing the clichés for quite some time now within the technology industry.  Sayings like "breaking down silos" and "jack of all trades, master of none" have been floating around IT offices for the past 5 years – and while I believe these sayings hold some clout, I still have my doubts about this new "generalized IT admin".  Honestly, with the changing landscape of technology and the fast-paced change being introduced into our infrastructure, by all means we need to know (or know how to quickly learn) a lot – A LOT.  And while this generalized, broad skill-set approach may be perfect for the day-to-day management of our environments, the fact is that when the sky clouds over and the storm rolls in, taking certain pieces of our data centers with it, we will want those storage specialists, that crazy-smart network person, or the flip-flop-wearing virtualization dude who knows things inside and out available to troubleshoot and perform root-cause analysis, so we can get our environments back up and running as quickly as possible!

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. were all paid for by Gestalt IT; however, I'm not required or obliged to return the favor in any way other than my presence 🙂 – which still feels weird to say 🙂 Well, my presence and possibly a little bit of Maple Syrup.

Now, all that said, these problem situations don't come up (I hope) that often, and coupled with the fact that we are seeing more and more "converged" support solutions, organizations can leverage their "one throat to choke" support contract and get the specialists they need over the phone – all of which brings them one step closer to being able to employ these "jack of all trades, master of none" personnel in their IT departments.  But perhaps the biggest stepping stone toward eliminating these specialized roles is the new rage being set forth by IT vendors: a little concept called "Policy Based Management".

Enter NetApp SolidFire

Andy Banta from NetApp SolidFire spoke at Storage Field Day 13 about how they are utilizing policy-based management to make it easier and more efficient for everyday administrators to consume and manage their storage environments.  I had the chance to sit as a delegate at SFD13 and watch his presentation, cleverly titled "The Death of the IT Storage Admin" – and if you fancy, you can see the complete recorded presentations here.

NetApp SolidFire is doing a lot of things right in terms of introducing efficiency into our environments and eliminating a lot of those difficult, mundane storage tasks that we used to see dedicated teams of specialized administrators perform.  With that said, let's take a look at a few of those tasks and explore how NetApp SolidFire, coupled with VMware's VVOL integration, is providing policy-based automation around them.

Storage Provisioning

In the olden days (and I mean like 5 years ago) the way we went about provisioning storage to our VMware environments was, how do I say this, a little bit inefficient.  Traditionally, we as "generalized VMware administrators" would determine that we needed more storage.  From there, we'd put a request out to the storage team to provision us a LUN.  Normally, this storage team would come back with all sorts of questions – things like "How much performance do you need?", "How much capacity do you need?", "What type of transport mechanism would you like this storage delivered over?", "What type of availability are you looking for?".  After answering (or sometimes lying) our way through these conversations, the storage team would FINALLY provision the LUN and zone it out to our hosts.  We would then create our datastore, present it to our ESXi hosts, and away we go filling it up – only to come back to the storage team with the same request the very next month.  It's not a fun experience, and it's highly inefficient.

VMware's VVOLs are a foundation to help change this, and NetApp SolidFire has complete integration with them.  So, in true VVOLs fashion, we have our storage container, which consumes space from the SolidFire cluster on a per-VM/per-disk basis.  What this means is that as administrators we simply assign a policy to our VM, or to an individual VM disk, and our VMDK is provisioned automatically on the SolidFire cluster – no LUNs, no storage-team conversations – all performed by our "generalized admin".
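To make that contrast concrete, here's a minimal sketch of what the workflow collapses to once policies drive provisioning.  The class and helper names below are purely illustrative – not the actual vSphere SPBM or SolidFire API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class StoragePolicy:
    name: str
    rules: dict = field(default_factory=dict)   # e.g. capacity/QoS rules

@dataclass
class VMDisk:
    vm: str
    disk: str
    policy: Optional[StoragePolicy] = None

def assign_policy(disk: VMDisk, policy: StoragePolicy) -> None:
    """Attach a policy to a single disk; in a real VVOL setup the VASA
    provider then carves a per-disk VVOL on the array to match --
    no LUN request, no zoning, no ticket to the storage team."""
    disk.policy = policy

gold = StoragePolicy("gold", {"minIOPS": 1000, "maxIOPS": 5000, "burstIOPS": 8000})
assign_policy(VMDisk(vm="sql01", disk="Hard disk 2"), gold)
```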

Storage Performance/Quality of Service

Now, as far as provisioning capacity with VVOLs goes, there isn't a whole lot that's different between SolidFire and other storage vendors – but when we get into QoS, I think we can all agree that SolidFire takes a step above the crowd.  SolidFire has always held that application performance and quality of service are the single most important pieces of their storage story – and with their VVOL implementation this is still true.

When setting up our policies within vSphere SPBM, NetApp SolidFire exposes a number of QoS metrics and configuration options in our rule setup.  We can configure settings that let us set minimum, maximum, and burst IOPS on both our data VVOLs (the VMDKs) as well as our configuration VVOLs (the VMX files, etc.).  Once set up, we simply apply these policies to our VMs, and immediately we have assurance that certain VMs will always get the performance they need – or, on the flip side, that certain VMs will not be able to flood our storage, consuming IOPS and affecting their neighboring workloads.  This is a really cool feature IMO – while I see a lot of vendors allowing disk-type placement for VVOLs (placing a VMDK on SSD, SAS, etc.), I've not seen many that go as deep as SolidFire in letting us guarantee and limit IOPS.
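Those same min/max/burst knobs that the SPBM rules map onto have long been exposed through SolidFire's own Element API, which is JSON-RPC over HTTPS.  As a hedged sketch (the endpoint version, volume ID, and credentials are placeholders – check them against your own cluster):

```python
import requests

MVIP = "https://sf-cluster.example.com/json-rpc/9.0"  # placeholder endpoint

payload = {
    "method": "ModifyVolume",
    "params": {
        "volumeID": 42,             # placeholder volume
        "qos": {
            "minIOPS": 1000,        # guaranteed floor, even under contention
            "maxIOPS": 5000,        # sustained ceiling
            "burstIOPS": 8000,      # short-term burst allowance
        },
    },
    "id": 1,
}

resp = requests.post(MVIP, json=payload, auth=("admin", "password"), verify=False)
print(resp.json())
```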

All of this essentially removes the complexity of troubleshooting storage performance needs and constraints on our workloads – the setup is completed within the familiar vSphere Web Client (complete with a NetApp SolidFire plug-in) and is applied the same way you have always edited a VM's settings.

So – is the storage admin dead?

NetApp SolidFire has definitely taken a good chunk of the storage admin's duties away and put them into the laps of our generalized admins!  Even though I haven't mentioned it, even the scaling of a NetApp SolidFire cluster, as well as VASA provider failover, is automated in some way within their product.  So, yeah, I think they are on the right track – they have taken some very difficult and complex tasks and turned them into a simple policy.  Now, I wouldn't jump to conclusions and say that the storage admin is 100% dead (there are still a lot of storage complexities and a lot of storage-related tasks to do within the data center), but NetApp SolidFire has, how do I put this – maybe just put him into a pretty good coma and left him lying in a hospital bed!  If you have made it this far I'd love to hear your take on things – leave a comment, hit me up on Twitter, whatever…  Take a look at the NetApp SolidFire videos from SFD13 and let me know – do you think the storage admin is dead?  Thanks for reading!

SNIA comes back for another Storage Field Day

SNIA, the Storage Networking Industry Association, is a non-profit organization made up of a number of member companies striving to create vendor-neutral architectures and standards throughout the storage industry.  Think Dell, VMware, HPE, Hitachi – all the likely names, behind closed doors, working for the greater good.  Ok – that's their definition.  Mine?  Well, I compare it to Rocky III – you know, Rocky and Apollo, sworn enemies teaming up to make the world a better place by knocking out Mr. T.  So, I may be a little off with that, but not that far off!  Replace "Rocky and Apollo" with some "very big name storage companies" and swap out "knocking out Mr. T" with "releasing industry standards and specifications" and I think we are pretty close.

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. are all paid for by Gestalt IT; however, I'm not required or obliged to return the favor in any way other than my presence 🙂 – which still feels weird to say 🙂 Well, my presence and possibly a little bit of Maple Syrup.

So, in all seriousness, SNIA has been around for 20 years and was formed initially to deal with interoperability issues surrounding networked storage.  Today we see them really focusing on architectures and standards, as well as a slew of education services, training, and certifications.  We can also see a ton of work being performed by SNIA around current storage trends such as flash, cloud, object storage, and persistent memory – you name it, they have some work being done around it.  From their website, here is a handful of the work SNIA is currently investigating…

  • Cloud Data Management Interface (CDMI)
  • Linear Tape File System (LTFS)
  • IP Based Drive Management Specifications
  • NVM Programming Model
  • Self-contained Information Retention Format
  • Solid State Storage Performance Test Specifications
  • Swordfish Scalable Storage Management APIs

Wait!  They aren’t selling anything!

Honestly, I've never been to a Tech Field Day event where a non-profit organization has spoken – so I'm very excited to see what SNIA will choose to talk about!  As shown above, they have a broad range of topics to choose from – and judging by past SNIA videos from TFD, they can go quite deep on these subjects.  It will be nice to hear a vendor-neutral approach to a TFD session.  I applaud SNIA for their efforts – it can't be easy organizing and keeping all of their members in check – and it's nice to see an effort from an organization, non-profit or not, looking out for the customers, the partners, the people who have to take all of these storage arrays and protocols and make them work together!  As always, follow along with all my SFD13 content here – keep your eye on the official event page here – and we will see you in June!

X-IO Technologies – A #SFD13 preview

In the technology sector we always joke that when a startup is 5 years old, that sometimes makes it legacy!  Meaning, 5 years is a long time in the eyes of a technologist – things change, tech changes, new hardware emerges – all of this drives change!  Well, if 5 years makes a mature company, then I'm not sure what to call X-IO Technologies.  X-IO was founded more than 20 years ago, back in 1995 – taking them right off the scale in terms of aging for a tech company!  Honestly, I've heard the name before (or seen the logo) but I've never really looked at what it is X-IO does – so today let's take a look at the current X-IO offerings and solutions and what they bring to the table – and, if you are interested, you can always learn more when they present at the upcoming Storage Field Day 13 event in Denver come June 14th – 16th.  But for now, the tech…

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. are all paid for by Gestalt IT; however, I'm not required or obliged to return the favor in any way other than my presence 🙂 – which still feels weird to say 🙂 Well, my presence and possibly a little bit of Maple Syrup.

What does X-IO bring?

From their website it appears that X-IO has a couple of basic offerings, all hardware appliances, and all serving different points of interest in the storage market.  Let's try to figure out what each of them does…

Axellio Edge Computing

This appears to be an edge computing system marketed mainly to companies needing performance for big data analytics, as well as those looking for a platform to crunch data from IoT sensors.  These converged storage and compute boxes are very dense in CPU, memory, and storage – supporting up to 88 CPU cores, 2TB of memory, and a maximum of 72 – yes, 72 – 2.5" NVMe SSD drives.  Each appliance is basically broken down into two server modules for the compute and memory, as well as up to 6 FlashPacs (a FlashPac is essentially a module hosting 12 dual-ported NVMe slots).  As far as scale goes, I don't see much mention of pooling appliances, so it appears that these are standalone boxes, each serving a single purpose.

iglu Enterprise Storage Systems

Here it appears we have a storage array.  The iglu storage system can be built using all flash, a mixture of flash and disk, or just spinning disk.  They appear to have multiple models supporting each disk configuration, with their all-flash version supporting over 600,000 IOPS.  Controllers on the iglu system are distributed, meaning whenever we add more capacity we are also adding more controllers, thus increasing both space and performance with the same upgrade.  As far as software goes, we see all the familiar features such as snapshots, CDP, replication, stretched clustering, and integration with VMware, SQL, Oracle, etc.  One nice aspect is that all iglu systems, no matter the model, have access to all of the software features – there is no licensing of individual pieces of the software.

I'm excited to see what X-IO has to say at SFD13 come this June.  There was some mention of a unique way of handling drive failures, as well as a lengthy 5-year warranty on everything, which may separate them from the storage vendor pack – but I'm hoping they have much more to talk about in regard to their storage offerings to give it that wow factor!  As always, you can find all my SFD13-related information here or follow the event page here to stay updated and catch the live stream!

Hear more from Exablox at Storage Field Day 13

As I continue along with some SFD13 previews I stumble upon Exablox.  Exablox is one of the few presenting companies at SFD13 that I know very little about, so I'm excited to hear what they have to offer and say come June 14th in Denver when the event kicks off.  Also, Exablox comes into SFD after being acquired by StorageCraft earlier this year, and given the partnerships between the two companies in the past, I'm sure we will hear some more about how that integration is going.  Headquartered in Mountain View, Exablox was the brainchild of Matthew Catino, Tad Hunt, and Frank Barrus.  Founded in 2010, the three set out to create a more scalable filer array – and in the end we are left with a product they call OneBlox.

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. are all paid for by Gestalt IT; however, I'm not required or obliged to return the favor in any way other than my presence 🙂 – which still feels weird to say 🙂 Well, my presence and possibly a little bit of Maple Syrup.

OneBlox – A reimagined scale-out storage solution.

Honestly, when I first started diving into OneBlox I pretty much assumed it was just a simple filer box that you purchased chock-full of drives – but after some more digging, there are a lot of differences between OneBlox and a lot of the other scale-out NAS boxes I'm familiar with.

  • You can bring your own drives – yeah, you can put pretty much whatever drives you want in these things!  Different speeds and capacities?  No problem.
  • It runs off of object storage – underneath all that SMB/NFS presentation is a heaping helping of object-based storage, which is a big enabler for our next point.
  • There is no RAID – rather than utilizing parity to protect against failure, OneBlox utilizes its custom object-based file system to intelligently write three copies of every object, ensuring your data is written not only to different drives, but to different nodes within a ring as well.  Wait!  What's a ring?
  • Their ring architecture – a ring is essentially a cluster of one or more OneBlox nodes.  All nodes within a single ring are pooled to form one global file system which shrinks and grows as drives and nodes are added.  Items such as deduplication and compression are performed globally across this file system – meaning we have inline deduplication across multiple OneBlox nodes.  (There's a small placement sketch just after this list.)
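Since Exablox hasn't published the details of their placement algorithm, here's a purely illustrative sketch of the "three copies, different nodes" idea using rendezvous hashing – the node names and hashing scheme are my assumptions, not Exablox's implementation:

```python
import hashlib

NODES = ["oneblox-1", "oneblox-2", "oneblox-3", "oneblox-4"]  # a 4-node ring
REPLICAS = 3

def place_object(key: str) -> list:
    """Choose REPLICAS distinct nodes for an object, rendezvous-hash style,
    so every copy lands on a different node in the ring."""
    ranked = sorted(
        NODES,
        key=lambda node: hashlib.sha256(f"{key}:{node}".encode()).hexdigest(),
    )
    return ranked[:REPLICAS]

print(place_object("shares/backups/vm-042/chunk-0007"))
```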

OneSystem – Cloud Based Management

As of late it seems like everyone is jumping on the "cloud-based" management bandwagon – and rightly so.  We get the ability to manage our infrastructure from anywhere with a browser and an internet connection.  OneSystem is Exablox's play in this field.  OneSystem essentially allows administrators to access all of their deployed OneBlox systems from a secure, available, cloud-based management server through a browser.  OneSystem provides real-time monitoring and health reporting, and supports a multi-tenant environment for those service providers and large enterprises that may need it.  Don't trust the cloud just yet?  No problem – the whole OneSystem can be deployed on-premises as well if need be.

As I mentioned earlier, I don't know a lot about Exablox, so I'm excited to see what they have to say at SFD13.  I read a little about some unique CDP and replication strategies they have as well for protecting data between different rings.  On the surface it looks like some cool tech, and I can't wait to learn more about their biggest use cases and see how that underlying custom object-based file system works!  Hopefully we see more at SFD13!  For those who want to follow along, I'll have the live stream along with any of my SFD13-related content on my page here, or be sure to stay up to date by keeping an eye on the official SFD13 event page.  And hey, follow along and ask questions using the hashtag #SFD13 – see you in June!

Primary Data set to make their 5th appearance at Storage Field Day

This June at SFD13, Primary Data will make their 5th appearance at a fully fledged Tech Field Day event, despite having GA'd their flagship product just 9 short months ago at VMworld 2016 (they have a few TFD Extra events under their belt as well).  Now, pardon my math, but for a company that was founded in 2013 that means they spent roughly 3 years in development – gathering loads of customer feedback and beta testing before releasing their data management platform to the wild, all the while giving updates via TFD – there's an example of a company taking its time and doing something right!

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. are all paid for by Gestalt IT; however, I'm not required or obliged to return the favor in any way other than my presence 🙂 – which still feels weird to say 🙂 Well, my presence and possibly a little bit of Maple Syrup.

Now, before we go into detail about Primary Data's products and what they provide, I wanted to take a second to talk a little bit about their founders and leadership team – because we all know great products and execution start from the top – and Primary Data has quite the top!  Let's have a look below…

Lance Smith (CEO) – Despite holding many top-level leadership positions over the years, perhaps the most relevant would be COO of Fusion-io and, following the acquisition, Senior Vice President and GM inside SanDisk.

Rick White (Co-Founder and CMO) – This isn't Rick's first go-around, as he was also one of the original co-founders and CMO at Fusion-io.  Are you beginning to sense a little pattern here around Fusion-io? 🙂

David Flynn (Co-Founder and CTO) – Here comes that pattern again!  Prior to founding Primary Data, David Flynn was a co-founder and CEO at Fusion-io.  His bio on the site also states that David holds over 100 patents across a wide range of technologies – not too shabby.

Steve Wozniak (Chief Scientist) – Yeah, that’s right, the Steve Wozniak, you know, the guy from Dancing with the Stars – oh, and he kind of helped found and shape a small little valley startup named Apple.

Now keep in mind these are just 4 people that I picked off of Primary Data's company page!  And even though there is a ton of brainpower listed here, there are still a lot more people at Primary Data whose experience in the industry just blows my mind!

So what exactly does Primary Data bring to market?

Primary Data's storage solutions center around their flagship product, DataSphere.  Now, DataSphere in itself isn't a storage array – it's best described as what they coin a "Metadata Engine for the Enterprise".  So what does this really mean?  Well, hopefully I'm getting it right here, but to me DataSphere looks like somewhat of an abstraction tool – a way to de-couple all of your applications from the data that they use and store.  The first step is to take all of an organization's storage and pool it together into a single logical namespace.  It's this storage – be it direct-attached, SAN-based, or even cloud – which can in turn be presented back to your enterprise applications.  But it's not necessarily this pooling which drives up the value of DataSphere – it's the analytics and automatic data movement that really stand out for me.  DataSphere is able to map a set of application rules or objectives to automatically move data across different tiers of storage in order to ensure that certain SLAs are met – or, more to the point, that the right resources are being assigned to the right applications.  Meaning the proper storage resources are provisioned to applications, no matter where that application is running and no matter where that storage lives – all we need to do is specify that we want Application X to have Storage Requirements Y and let DataSphere tier and tune to its heart's delight!  The metadata engine keeps track of where things are and what's going on.
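To illustrate the "Application X gets Storage Requirements Y" idea, here's a tiny, hypothetical sketch of objective-based placement – the tier names, numbers, and selection logic are all mine, not Primary Data's actual engine:

```python
from dataclasses import dataclass

@dataclass
class Objective:
    max_latency_ms: float   # the "Storage Requirements Y" for an application
    min_iops: int

# Hypothetical tier catalog (illustrative numbers only).
TIERS = {
    "nvme":  {"latency_ms": 0.1,  "iops": 500_000, "cost": 3.0},
    "sas":   {"latency_ms": 5.0,  "iops": 50_000,  "cost": 1.0},
    "cloud": {"latency_ms": 50.0, "iops": 5_000,   "cost": 0.2},
}

def cheapest_tier_meeting(obj: Objective) -> str:
    """Return the lowest-cost tier that still satisfies the objective --
    the kind of decision a metadata engine would re-evaluate continuously."""
    candidates = [
        (spec["cost"], name)
        for name, spec in TIERS.items()
        if spec["latency_ms"] <= obj.max_latency_ms and spec["iops"] >= obj.min_iops
    ]
    if not candidates:
        raise ValueError("no tier meets this objective")
    return min(candidates)[1]

print(cheapest_tier_meeting(Objective(max_latency_ms=10.0, min_iops=20_000)))  # -> sas
```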

As you can imagine, this gives us a really good opportunity to scale in terms of storage – and it really prevents the need to over-provision, which has become such a de facto standard in deployments today – in turn, organizations save dollars!  Now, from what I can tell, DataSphere doesn't necessarily sit inside the data path for the application either – meaning it's virtually invisible and doesn't affect performance in any way.  Instead it sits in somewhat of an out-of-band architecture – allowing applications to have direct paths to their data while DataSphere simply handles the metadata processing and analytics.

There are a ton of benefits that I can initially see with DataSphere – scaling, migration to cloud, and performance compliance are a few that come straight to mind.  I'm certain there is much, much more that can be done, and I know I'm just scratching the surface, so I guess we will have to wait until Primary Data presents at Storage Field Day in June to learn more.  As always, follow along with the hashtag #SFD13 or check out my page here for all that is SFD13!

A field day of Storage lies ahead!

I've had the awesome opportunity to participate in a few Tech Field Day events over the last few years, travelling to Austin for VFD4, Boston for VFD5, and finally San Jose for TFD12 just last November!  To be honest, these days provide me with more knowledge than a lot of the week-long training courses I take!  They are jam-packed with technical deep dives from a variety of companies – companies who are eager to share and get the message out about their offerings!  And it's not just the presenting companies jamming knowledge into my brain, it's the delegates as well!  As a fellow delegate I've honestly met some of the smartest people I know there – not just on virtualization, but across the whole IT ecosystem.  Anyways, I'm super excited to have been invited back to another field day experience – this one, a storage event, taking place June 14-16 in Denver!

#SFD13

As it stands today, Storage Field Day 13 is shaping up to have 6 presenting companies – all with their own different ties into the storage market.  We have everything from tried-and-true companies such as DellEMC and Seagate all the way through to the startup technologies of Primary Data and the newly acquired Exablox.  In between sit the non-profit, vendor-neutral SNIA and the hometown, Colorado-based X-IO Technologies.


Certainly a wide variety of companies to hear from – which should help to keep the event both exciting and interesting!

I mentioned that I've gained a lot of knowledge from other delegates in the past – and man oh man, this one will be no different.  I'm actually super excited for this event, as I've really only met a few people on this list – so there will be a lot of new faces and friends to make here for me.  Honestly, it's a little bit intimidating, as there are some real storage rockstars on this list!

I'll do my best to try and get a few preview posts out for some of the presenting companies I know little about – mainly for my own homework, so I don't "go in cold" and get thrown into the deep end 🙂  That said, I can't promise anything, as this event is quickly sneaking up on me and is only a couple of weeks away now!  As always, I'll try to get the stream set up on my event page here – as well as place any content I create surrounding SFD13 there.  Be sure to follow along on Twitter as well using the hashtag #SFD13 and keep an eye on the official event page!  See you in Denver!