Lessons learned from #vDM30in30

Phew!  I’m not sorry to say that #vDM30in30 is over with!  Not to say it wasn’t a lot of fun, but honestly, it’s a lot of work – especially when juggling family, travel, the day job and all!  One might think that simply blasting out 30 pieces of content in 30 days would be relatively easy – but it’s not!  That said, I learned a lot about my writing process and styles during this challenge, and as my final post of the month – unfortunately only my 28th – I’d like to share those lessons with you…

The challenge of topics

It’s not easy coming up with topics to write about, especially when writing so often.  I was lucky enough to have had a handful of ideas already sitting in my drafts folder – and #vDM30in30 finally gave me the opportunity to write about them.  That said, I know I thought of more throughout the month and simply forgot to write them down.  So whatever your means of tracking ideas is (drafts, post-its, bullet journals), write them down!  I found that if I didn’t commit an idea to something I would forget it.  Needless to say I have a dozen or so topics just sitting in my drafts now – which leads me to the next challenge…

The challenge of time

This is probably the biggest hurdle of all – finding the time to articulate yourself and get a blog post written.  I find that this varies for me – for some topics I’ll simply start writing and have a complete post hashed out in an hour or so.  Others I find myself having to go do research, read other blogs and whitepapers, trying to fully understand what I’m writing about 🙂  Those are the ones that sometimes take days – 10 minutes here and there, revisiting the same ol’ things.  I’m best off dedicating all the time I need to write a post in one sitting – otherwise I have a hard time reading my own writing once I revisit it.  That said, time is a tricky thing to find – we have families, commitments, other things we need to take care of – so I kept a critical eye on how I was spending my time.  If I was watching a Habs game I would try to at least do something “blog productive” while doing so.  Those endless hours on an airplane – perfect for editing and getting things ready!  My advice here: just use your time wisely and don’t sacrifice the things you love the most just to write a blog post – the kids will eventually go to sleep – do it then 🙂

The challenge of writing

Perhaps this is the oddest hurdle to overcome.  Sometimes the words just come; other times I struggle trying to explain myself.  There were times where, even though I knew I would have a hard time coming back to complete a post, I simply had to walk away.  If you are burnt out, nothing will make sense.  Take breaks, either small or large – we are all different, so just find what works for you.  For me, that was walking…

So I’m happy to say that even though I was two shy of the infamous thirty – I did learn some things about my writing process and styles.  With that said, here’s a look at what I accomplished throughout the month of November on mwpreston.net.

Tech Field Day 12 Stuff

My favorite Veeamy things…

Other vendor stuff

My Friday Shorts

Randoms

So there you have it!  Thanks all for following along and reading and I hope to participate next year as well.  All that said, don’t expect a post per day to continue here – I need some sleep!

The Atlas File System – The foundation of the Rubrik Platform

One of the core selling points of the Rubrik platform is the notion of something called “unlimited scale” – the ability to start small and scale as large as you need, all the while maintaining their masterless deployment!  Up until a few weeks ago I was unaware of how they actually achieved this, but after witnessing Adam Gee and Roland Miller present at Tech Field Day 12 in San Jose I have no doubts that the Atlas file system is the foundation upon which all of Rubrik is built.

[Image: Atlas at the core of the Rubrik platform]

As shown above, the Atlas file system sits at the core of the product, communicating with nearly every other component in the Rubrik platform.  Now picture each node containing exactly this same layout, scaled up to whatever number of nodes you might have – each node running its own Atlas file system, with its own local applications accessing it – yet the storage is distributed and treated as one scalable blob of storage addressable by a single global namespace.

Disclaimer: As a Tech Field Day 12 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I did not receive any compensation nor am I required to write anything in regards to the event or the presenting companies. All that said, this is done at my own discretion.

Atlas – a distributed scalable file system.

As shown above, other core modules such as Callisto (Rubrik’s distributed metadata store) and the Cluster Management system all leverage Atlas under the hood – and in turn Atlas utilizes these for some of its functions.  For instance, to scale, Atlas leverages data from the Cluster Management system to grow and shrink – when a new brik is added, Atlas is notified via the CMS, at which point the capacity from the new nodes is added to the global namespace, increasing both the total capacity available and the flash resources to consume for things such as ingest and cache.  It should also be noted that Atlas takes care of data placement, so adding a new node to the cluster will trigger it to re-balance.  However, it’s got the “smarts” to process this as a background task and take into account all of the other activity occurring within the cluster, which it gets from the Distributed Task Framework – meaning we won’t see a giant performance hit directly after adding new nodes or briks, thanks to the tight integration between all of the core components.
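
To picture how that hangs together, here’s a tiny, purely conceptual sketch (in Python, and in no way Rubrik’s actual code) of a cluster manager notifying a file system that a node has joined, growing a global namespace, and queueing the rebalance as a background task:

```python
# Conceptual sketch only -- not Rubrik's implementation. It illustrates the idea of a
# cluster-management layer notifying the file system when a node is added, growing the
# global namespace, and kicking off a low-priority background rebalance.

class GlobalNamespace:
    def __init__(self):
        self.nodes = {}            # node_id -> capacity in TB
        self.background_tasks = [] # rebalances queued, not run immediately

    @property
    def total_capacity_tb(self):
        return sum(self.nodes.values())

    def on_node_added(self, node_id, capacity_tb):
        """Called by the cluster-management layer when a new node/brik joins."""
        self.nodes[node_id] = capacity_tb
        # Rebalancing is queued as a background task so it can be throttled
        # against other cluster activity instead of running at full speed.
        self.background_tasks.append(("rebalance", node_id))
        print(f"Node {node_id} added: namespace is now {self.total_capacity_tb} TB")


ns = GlobalNamespace()
ns.on_node_added("node-1", 30)
ns.on_node_added("node-2", 30)   # capacity grows, rebalance queued in the background
```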

Adding disk and scaling is great; however, the challenge for any distributed file system is how to react when failures occur, especially when dealing with low-cost commodity hardware.  Atlas replicates data in a way that provides for failure at both the disk level and the node level, allowing two disks, or one full node, to fail without experiencing data loss.  How Atlas handles this replication depends on the version of Rubrik in your datacenter today.  Pre-3.0 releases used a technology called mirroring, which essentially triple-replicated our data across nodes.  Although triple replication is a great way to ensure we don’t experience any loss of data, it does so at the expense of capacity.  The Firefly release, 3.0 or higher, implements a different replication strategy via erasure coding.  By its nature, erasure coding takes the same data we once would have replicated three times and splits it into chunks – the chunks are then processed and additional parity chunks are encoded which can be used to rebuild the data if need be.  It’s these chunks that are intelligently placed across disks and nodes within our cluster to provide availability.  The short of the story here is that erasure coding gives us the same benefit as triple replication without the cost of consuming triple the capacity – therefore more space is available within Rubrik for what matters most, our data.
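
The capacity difference is easy to see with some quick back-of-the-napkin math – note the 4+2 scheme below is just an assumption for illustration, not necessarily the parameters Rubrik uses:

```python
# Rough capacity math only -- the 4+2 erasure coding scheme is an illustrative
# assumption, not necessarily what Rubrik ships.

usable_tb = 100                      # logical data we want to protect

# Triple replication (pre-3.0 "mirroring"): every byte stored three times.
mirrored_raw = usable_tb * 3

# Erasure coding: split data into k chunks, add m parity chunks (here 4 + 2).
k, m = 4, 2
ec_raw = usable_tb * (k + m) / k     # 1.5x overhead instead of 3x

print(f"Triple replication: {mirrored_raw} TB raw for {usable_tb} TB of data")
print(f"Erasure coding {k}+{m}: {ec_raw:.0f} TB raw, tolerating {m} lost chunks")
```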

Aside from replicating our data, Atlas employs other techniques to keep it available as well – features such as self-healing and CRC detection allow Atlas to throw away and repair data as it becomes corrupt.  Now, these are features we expect to see within any file system, but Atlas can handle them a little differently due to its distributed architecture.  The example given was with three briks, each containing four nodes – when a node fails, or data becomes corrupt, Atlas repairs the data on a surviving node within the same brik, ensuring we stay spread out across briks.  If a brik happens to fail, the chunk of data would then have to land on the same brik as another copy, but would be placed on another node, still allowing for node failure.  It’s this topology-aware placement that really allows Rubrik to maximize data availability and provide protection not only across nodes within a brik, but across brik failures as well, maximizing the failure tolerance guarantees they are providing.
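
If checksums and self-healing sound abstract, here’s a minimal, generic sketch of the idea – store a CRC with each chunk, verify it on read, and repair from a surviving copy when it doesn’t match (again, not Rubrik’s code, just the principle):

```python
import zlib

# Generic CRC + self-heal sketch: keep a checksum with each chunk, verify on read,
# and fall back to a replica (or parity rebuild) when it doesn't match.

def write_chunk(data: bytes):
    return {"data": data, "crc": zlib.crc32(data)}

def read_chunk(chunk, replica):
    if zlib.crc32(chunk["data"]) != chunk["crc"]:
        # Corruption detected: self-heal by repairing from a surviving copy.
        chunk["data"] = replica["data"]
        chunk["crc"] = replica["crc"]
    return chunk["data"]

good = write_chunk(b"backup block")
bad = {"data": b"backup blocX", "crc": good["crc"]}   # simulate bit rot
print(read_chunk(bad, good))                          # b'backup block'
```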

Perhaps the most interesting aspects of Atlas, though, are how it exposes its underlying functions and integration points to the applications running on top of it – the Rubrik applications.  First up, the meat of Rubrik’s solution: mounting snapshots for restore/test purposes.  While all of our backup data is immutable, meaning it cannot be changed in any way, Atlas does leverage a “Redirect on Write” technology in order to mount these backups for test/dev/restore purposes.  What this means is that when a snapshot is requested for mount, Atlas can immediately assemble the point in time using incremental pointers – no merging of incrementals into full backups, no data creation of any kind – the full VM at that point in time is simply presented.  Any writes issued to this VM are redirected – written elsewhere and logged – thus not affecting the original source data whatsoever, all the while allowing the snapshot to be written to.
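
Here’s a toy model of what “Redirect on Write” looks like conceptually – the original snapshot blocks are never touched, writes land in a side log, and reads prefer that log.  This is my own illustration, not Atlas internals:

```python
# A toy "redirect on write" overlay: the immutable snapshot is never modified;
# writes land in a side map and reads prefer it. Conceptual only.

class MountedSnapshot:
    def __init__(self, snapshot_blocks):
        self.base = snapshot_blocks    # immutable point-in-time blocks
        self.redirects = {}            # block_id -> data written after mount

    def read(self, block_id):
        # Prefer redirected (post-mount) writes, otherwise the original backup.
        return self.redirects.get(block_id, self.base[block_id])

    def write(self, block_id, data):
        self.redirects[block_id] = data   # original snapshot stays untouched


snap = MountedSnapshot({0: b"boot", 1: b"data"})
snap.write(1, b"data-modified-in-sandbox")
print(snap.read(1))       # b'data-modified-in-sandbox'
print(snap.base[1])       # b'data' -- the backup remains immutable
```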

Atlas also exposes a lot of its underlying functionality to applications in order to improve performance.  Take, for instance, the creation of a scratch or temporary partition – if Rubrik needs to instantiate one of these, it can tell Atlas that it is indeed temporary; thus Atlas has no need to replicate the file making up the partition at all, as it doesn’t require protection and can simply be tossed away when we are done with it.  And that tossing away – the cleaning up after itself – can also be set from the application level.  In that same example we could simply set a TTL or expiry on our scratch file and let the normal garbage collection maintenance job clean it up during its regular run, rather than wasting time and resources having the application make second or third calls to do it.  Applications can also leverage Atlas’s placement policies, specifying whether files or data should be placed on SSD or spinning disk, or even whether said data should be located as close as possible to other data.
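
As a rough illustration of the TTL idea (the file attributes and garbage-collection job below are hypothetical, not Atlas’s real API), tagging a scratch file with an expiry means the regular GC run can clean it up with no extra calls from the application:

```python
import time

# Sketch of the TTL idea: an application tags a scratch file with an expiry and the
# periodic garbage-collection job removes it -- no second API call needed.
# The attributes and GC job here are hypothetical, purely for illustration.

files = {
    "scratch-001": {"replicate": False, "expires_at": time.time() + 3600},
    "vm-backup-042": {"replicate": True, "expires_at": None},
}

def garbage_collect(catalog, now=None):
    now = now or time.time()
    for name in list(catalog):
        expiry = catalog[name]["expires_at"]
        if expiry is not None and expiry <= now:
            del catalog[name]          # expired scratch data is simply dropped

garbage_collect(files, now=time.time() + 7200)   # pretend two hours have passed
print(list(files))                               # only the protected backup file remains
```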

So as you can see, although Rubrik is a very simple, easy, policy-based, set-and-forget type of product, there is a lot of complexity under the hood.  Complexity that is essentially abstracted away from the end-user, but available to the underlying applications making up the product.  In my mind this paves the way for a quick development cycle – being able to leverage the file system for all it’s worth while not having to worry about “crazy” configurations customers may have.  We have certainly seen a major influx of custom-built file systems entering our data centers today – and this is not a bad thing.  While the “off the shelf”, commodity-type play may fit well for hardware, the software is evolving – and this is evident in the Rubrik Atlas file system.  If you want to learn more, definitely check out their Tech Field Day 12 videos here – they had a lot more to talk about than just Atlas!

VembuHIVE – A custom built file system for data protection

Virtualization has opened many doors in terms of how we treat our production environments.  We are now vMotioning or Live Migrating our workloads across a cluster of hosts, cloning workloads with ease, and deploying new servers into our environments at a very rapid rate.  We have seen many advantages and benefits from the portability and encapsulation that virtualization provides.  For a while, though, our backups were treated the same as they always had been – simply copies of our data sitting somewhere else, only utilized during those situations when a restore was required.  That said, over the past 5 years or so we have seen a shift in what we do with our backup data as well.  Sure, it’s still primarily used for restores, both at a file and an image level – but backup companies have begun to leverage that otherwise stale data in ways we could only imagine.  We see backups being used for analytics, compliance, and audit scans.  We see backups now being used in a devops fashion – allowing us to spin up isolated, duplicate copies of our data for testing and development purposes.  We have also seen the ‘restore’ process dwindling away, with the “instant” recovery feature taking its place, powering up VMs immediately from within the deduplicated and compressed backup files, drastically decreasing our organizations’ RTO.

So with all of this action being performed on our backup files, a question of performance comes into play.  No longer is it OK to simply store our backups on a USB drive formatted with a traditional file system such as FAT or NTFS.  The type of data we are backing up – modern virtualization disk images such as VHDX and VMDK – demands something more from the file system it’s living on, which is why Vembu, a data protection company out of India, has developed its own file system for storing backups: the VembuHIVE.

Backups in the HIVE

When we hear the word VembuHIVE we can’t help but turn our attention towards bees – and honestly, they make the perfect comparison for how the proprietary file system from Vembu performs.  A beehive, at its most basic, is the control center for bees – a place where they all work collectively to support themselves and each other – the hive is where the bees harvest their magic, organizing food, eggs, and honey.  The VembuHIVE is the central point of storage for Vembu’s magic, storing the bits and controlling how files are written, read and pieced together.  While VembuHIVE can’t produce honey (yet), it does produce data.  And it’s because of the way that VembuHIVE writes and reads our source data that we are able to mount and extract our backups in multiple file formats such as ISO, IMG, VMDK and VHDX – in a near-instant fashion.

In essence, VembuHIVE is like a virtualized file system overlaid on top of your existing file system that can utilize utilities that mimic other OS file systems – I know that’s a mouthful but let’s explore that some more.

Version Control is key

In my opinion the key characteristic that makes VembuHIVE run is version control – where each and every file produced is accompanied by metadata describing what version, or point in time, the file is from.  Probably the easiest comparison is to that of Git.

We all know of Git – the version control system that keeps track of changes to our code.  Git solved a number of issues within the software development ecosystem.  For instance, instead of copying complete projects before making changes we can simply branch in Git – which tracks changes to source code and stores only those lines which have changed – allowing us to easily roll back or forward to any point in time within our code, reverting or redoing any changes that were made.  This is all done by storing only the changes and creating metadata to describe those changes – which in the end gives us a very fast way to revert to different points and fork off new ones, all the while utilizing our storage capacity in the most efficient way possible.

VembuHIVE works much in the same way as Git; however, instead of tracking source code we are tracking changed blocks within our backup files – allowing us to roll back and forward within our backup chain.  Like most backup products, Vembu creates a full backup during the first run and subsequently utilizes CBT within VMware to copy only changed blocks during incremental backups.  That said, the way it handles and intelligently stores the metadata of those incremental backups allows Vembu to present any incremental backup as what they call a virtual full backup.  Basically, this is what allows Vembu BDR to expose our backups, be they full or incremental, in various file formats such as VMDK and VHDX.  This is done without performing any conversion on the underlying backup content, and in the case of incremental backups there is no merging of changes into the previous full backup beforehand.  It’s simply an instant export of our backups in whatever file format we choose.  I mention that we can instantly export these files, but it should be noted that these point-in-time backups can be instantly booted and mounted as well – again, no merge, no wait time.
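
A toy model helps show why no merge is needed – keep the full backup plus a changed-block map per incremental, and any restore point can be assembled on the fly by layering those maps over the full.  This is just my own sketch of the concept, not Vembu’s implementation:

```python
# Toy model of a "virtual full": a full backup plus per-increment changed-block maps.
# Any restore point can be presented by layering the maps over the full -- no merge
# into the full backup required. Conceptual only, not Vembu's code.

full = {0: b"A0", 1: b"B0", 2: b"C0"}          # first full backup
incrementals = [
    {1: b"B1"},                                # day 1: only block 1 changed
    {0: b"A2", 2: b"C2"},                      # day 2: blocks 0 and 2 changed
]

def virtual_full(restore_point):
    """Assemble the disk image as it looked at the given restore point."""
    image = dict(full)
    for inc in incrementals[:restore_point]:   # apply changed-block maps up to that point
        image.update(inc)
    return image

print(virtual_full(0))   # the original full backup
print(virtual_full(2))   # latest point in time, assembled on the fly
```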

VembuHIVE also contains most of the features you expect to see in a modern file system.  Deduplication, compression and encryption are all available within VembuHIVE, and on top of all of this it has built-in error correction.  Every data chunk within the VembuHIVE file system has its own parity file – meaning when data corruption occurs, VembuHIVE can reference the parity file in order to rebuild or repair the data in question.  Error correction within VembuHIVE can be performed at many levels as well, protecting data at the disk-image, file, chunk, or backup-file level – I think we are covered pretty well here.
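
Here’s a super-simplified illustration of how a parity file lets you rebuild a corrupt chunk – Vembu hasn’t published the details of their parity scheme, so treat this purely as a demonstration of the principle:

```python
# Simple parity illustration: a parity block lets a corrupt chunk be rebuilt by
# XOR-ing the surviving chunks with the parity. Demonstration of the principle only.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

chunks = [b"1111", b"2222", b"3333"]

# Parity is the XOR of all chunks.
parity = b"\x00" * 4
for c in chunks:
    parity = xor_bytes(parity, c)

# Chunk 1 is found corrupt; rebuild it from the surviving chunks plus parity.
rebuilt = parity
for i, c in enumerate(chunks):
    if i != 1:
        rebuilt = xor_bytes(rebuilt, c)

assert rebuilt == chunks[1]
print("rebuilt chunk:", rebuilt)     # b'2222'
```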

Finally, we’ve mentioned a lot that we can instantly mount and export our VMs at the VM level; however, the intelligence and metadata within the VembuHIVE file system go way beyond that.  Aside from exporting as VMDKs or VHDXs, VembuHIVE understands how content is organized within the backup file itself – paving the way for instant restores at an application level – think Exchange and Active Directory objects here.  Again, this can be done instantly, from any restore point at any point in time, without performing any kind of merge process.

In the end, VembuHIVE is really the foundation of almost all the functionality that Vembu BDR provides.  In my opinion Vembu has made the correct decision by architecting everything around VembuHIVE and by first developing a purpose-built, modern file system geared solely at data protection.  A strong foundation always makes for a strong product, and Vembu has certainly embraced that with their implementation of VembuHIVE.

Friday Shorts – VeeamON, Storage Protocols, REST, and Murica!

“If that puck would’ve crossed the line Gord, that would’ve been a goal!” – Pierre McGuire – A Mr Obvious, annoying hockey commentator that drives me absolutely insane! (Sorry, watching the Habs game as I put all this together :))

Jambalaya and Backups – Get there!

Veeam had some big announcements this year along with a slew of new product releases, betas and big updates to existing products.  With all that, we can only assume that VeeamON, the availability conference focused on the green, is going to be a big one!  This year it takes place May 16-18 in New Orleans – a nice break from the standard Vegas conferences!  I’ve been to both VeeamON conferences thus far and I can tell you that they are certainly worth it – all of Veeam’s engineers and support are there, so if you have a question, yeah, it’ll get answered and then some!  So, if you can go, go!  If you can’t, if it’s a money thing – guess what???  Veeam is raffling off 10, yes 10, fully paid (airfare, hotel, conference) trips over the holidays – so yeah, go sign up!

But we have a REST API?

Although this post by John Hidlebrand may be a month old, I just read it this week and it sparked some of my own inner frustrations that simmer around deep inside me 🙂  John talks about how having a REST API is just not enough at times – and I completely agree!  I’m seeing more and more companies simply state “oh yeah, we have a REST API, we are our first customer!”  That’s all well and good – but guess what, you wrote it and you know how to use it!  All too often companies are simply developing the API and releasing it, without any documentation or code examples on how to consume it!  John brings up a good point: hey, how about having some PowerShell cmdlets built around it?  How about having an SDK we can consume?  Building your application off of a REST API is a great start, don’t get me wrong, but if you want people to automate around your product – help us out a little please 🙂
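
For what it’s worth, this is roughly what “help us out a little” looks like in practice – a thin, documented client that wraps the raw REST calls so users don’t have to reverse-engineer endpoints.  Everything below (URLs, endpoints, fields) is hypothetical, just to show the shape of it:

```python
import requests

# A hypothetical vendor-provided client wrapping a REST API so consumers don't
# have to guess at endpoints and auth. Base URL, paths and fields are made up.

class BackupClient:
    def __init__(self, base_url, api_token):
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {api_token}"
        self.base_url = base_url.rstrip("/")

    def list_jobs(self):
        """Return all backup jobs as a list of dicts."""
        resp = self.session.get(f"{self.base_url}/api/v1/jobs")
        resp.raise_for_status()
        return resp.json()

    def start_job(self, job_id):
        """Kick off a job and return its task ID."""
        resp = self.session.post(f"{self.base_url}/api/v1/jobs/{job_id}/start")
        resp.raise_for_status()
        return resp.json()["taskId"]

# Usage (hypothetical):
# client = BackupClient("https://backup.example.com", "my-token")
# print(client.list_jobs())
```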

In through iSCSI, out through SMB, in through SWIFT, out through REST

Fellow Veeam Vanguard and TFD12 delegate Tim Smith has a post over on his blog describing a lot of the different storage protocols on the market today and how EMC, sorry, Dell EMC Isilon is working to support them all without locking specific data to a specific protocol.  If you have some time I’d certainly check it out!

Happy Thanksgiving Murica!

I’ve always found it odd that Canadians and Americans not only celebrate Thanksgiving on different days, but in different months as well!  Come to find out there are quite a few other differences too.  You can see the two holidays compared on the diffen.com site.  It makes sense that we here in Canada celebrate a bit earlier – especially if our thanks revolves around the harvest.  I mean, no one wants to wait till November in Canada to harvest their gardens and crops – you’d be shoveling snow off of everything!  Either way – Happy Thanksgiving to all my American friends – may your turkey comas be long-lasting!

A VMware guy’s perspective on containers

Recently I attended Tech Field Day 12 in San Jose and was lucky enough to sit down with Docker for a couple of hours.  Docker talked about a number of things including Containers as a Service, security, networking, cloud, and the recent integration points in Microsoft Server 2016.  Now I’m not going to pretend here – Docker, or more specifically containers, are something that I’d heard of before (how could you not have?), but I’d never really gone too deep into what they do, how they perform, or what use cases they fit well into.  I knew they had something to do with development – but that’s as far as I’d really gone with them.  Listening to Docker and the other delegates’ questions during the presentation got me thinking that I should really start learning some of this stuff – and it’s that thought right there which sent me down a rabbit hole for the last few days, reading countless blogs and articles, watching numerous videos and keynotes, and scratching my head more often than I would’ve liked to – in the end I’m left with the conclusion that there are a lot of misconceptions in regards to containers, and I was falling right into almost all of them…

VMware vs Docker

Here’s the first misconception I was reading a lot about.  Quite a lot of chatter is happening out there on the interwebs about the downfall of the VM and the rise of the container.  For some environments this may hold true, but, even according to Docker, these two technologies are not necessarily competitors.  You see, VMs by their nature encapsulate a complete running machine – the OS, applications, libraries, and data are all encapsulated into a VM, with hardware emulation and a BIOS.  A container on the other hand is application-focused – more of an application delivery construct that shares the Linux kernel and operating system it’s running on.  Still confused?  Don’t worry – so was(am) I.  There’s an analogy that Docker uses quite often that might help: houses vs apartments.  Think of a VM as a house, complete with all the different living spaces and its own self-contained services such as heat, electricity, and plumbing.  On the flip side, containers are like apartments – sure, each one may be a little different, but they share common services in the building – electricity and plumbing are shared and all come from the same source.  So in essence there is room for both in the market; in fact, they really provide quite different platforms for running our applications – while Docker focuses on stateless, scalable, non-persistent apps, mostly providing advantages around development and portability, our VMs give us the “warm and fuzzy” feeling of having separate OS instances for our applications, with their front doors shut and locked.

Docker is just for developers

Another pretty big misconception if you ask me!  Sure, Docker is getting huge adoption in the developer space because of the consistency it provides – a developer can begin by pulling down a Docker image and have the libraries and components set up on their laptop exactly how they want.  They can then share this image out to be forked by others, meaning we have a consistent environment no matter where the application is being developed.  When the time comes to move to test, or production, we are still running within that same, consistent environment – no more patch or library conflicts – a true developer’s nirvana!  But after reading so much about this I have come to the realization that Docker is not just a “developer” thing, it’s for all of us, even us crazy operations guys!  The sheer nature of having a container limited to one service – microservices if you will – allows us as administrators to deploy applications in our data center in the same way – think a container for Apache, a container for MySQL, each its own separate entity, each working together to provide a full application to our end users – and with the maturity and availability of images out there today, take a guess who doesn’t have to go through all of the headaches and processes of setting all of this stuff up – operations doesn’t!  And spawning multiple instances of all of these is just one command line away!  It just feels right to me, and just as we have seen the adoption of virtualization and of companies shipping software bundled in virtual appliances, I can see a day where we will soon see those same services packaged and shipped as containers.
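
To show just how little effort that “one command line away” really is, here’s a small sketch using the Docker SDK for Python – the image versions, names, ports and passwords are just example values:

```python
import docker  # pip install docker -- the Docker SDK for Python

# An ops-flavoured sketch: standing up an Apache container and a MySQL container
# straight from public images, one call each. Names, ports and passwords here
# are example values only.

client = docker.from_env()

db = client.containers.run(
    "mysql:5.7",
    name="app-db",
    environment={"MYSQL_ROOT_PASSWORD": "example"},
    detach=True,
)

web = client.containers.run(
    "httpd:2.4",
    name="app-web",
    ports={"80/tcp": 8080},       # expose Apache on host port 8080
    detach=True,
)

print(db.name, web.name, "are running -- no OS installs, no patch conflicts")
```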

But Docker is just for Linux nerds

Not anymore…  Have yourself a copy of Windows 10 or Server 2016?  Yeah, simply install the Containers feature, grab the Docker engine and away you go!  Microsoft and Docker have formed a huge partnership, and as of right now you can even pull down some Microsoft applications right off of the “App Store” if you will.  Need yourself a SQL Server?  docker run -d -p 1433:1433 -e sa_password=password -e ACCEPT_EULA=Y microsoft/mssql-server-windows-express – yeah, that’s all – you’re done!  Still think Docker is just for developers???  Microsoft has been doing some really out-of-character things as of late – think bash on Windows, open-sourcing .NET, SQL Server on Linux – just super weird, non-traditional Microsoft things – but in a good way!  Don’t be surprised if we see Microsoft going all in with containers and Docker in the future!!!  Let the benefits of continuous integration and deployment be spread among all the nerds!!!

So I can deliver all my Windows apps through containers now!  Awesome!

Yes…but no!  Docker is not ThinApp/XenApp/App-V.  It doesn’t capture changes and compile things into an executable to be run off a desktop or deployed through group policy.  In fact, it’s just server-side applications that are supported in a Windows container.  We can’t, for instance, try to run Internet Explorer 6 with a certain version of the Java plugin, nor can we run Microsoft Word within a container.  The purpose of this is to provide a portable, scalable, consistent environment to run our server-side, non-GUI Windows applications – think SQL Server, IIS, .NET, etc…  Now I can’t say where the technology will go in the future – a world in which we can all containerize desktop applications with Docker doesn’t sound too far-fetched to me :).

So with all that, I think I have a little better handle on containers and Docker since my Tech Field Day adventures – and wanted to simply lay it out the way I see it in the event that someone else may be struggling with the mountains of content out there.  If you want to learn more and dig deeper, certainly check out all of the TFD videos that Docker has.  Also, Stephen Foskett has a great keynote – “What’s the deal with containers?” – which I would certainly recommend you watch!  I’m still sort of discovering all of this but plan to really invest some time in the container world come next year – there are a lot of components that I want and need to understand a bit more, such as persistent storage and networking – also, if I’m wrong or misinformed on any of this, do call me out 🙂 – that’s how we all learn!  Thanks for reading!

Disclaimer: As a Tech Field Day 12 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I did not receive any compensation nor am I required to write anything in regards to the event or the presenting companies. All that said, this is done at my own discretion.

Did you know there is a Veeam User Group?

Like most of you I’ve been attending VMUGs for quite a while now, and over the last few years I’ve been helping out by co-leading the Toronto chapter.  Each and every one I attend I always get some value out of it – whether it’s from presenting sponsors, talking with peers, or just creepily listening to conversations from the corner – but one of the challenges we seem to have is getting the “conversation” going – getting those customers and community members sitting in the audience to voice their opinion or even, at times, get up and do a presentation on something.  For our last meeting I reached out to Matt Crape (@MattThatITGuy) to see if he might be interested in presenting – Matt was quick to simply say yes – yes, but on one condition – would I come and present at his Veeam User Group?  So, with that, a deal was cut and I headed out this morning to my first Veeam User Group.

Veeam User Group – VMUG without the ‘M’

Matt runs the Southwest Ontario Veeam User Group (SWOVUG) – I’ve seen the tweets and blogs around the SWOVUG events taking place, and have always wanted to attend, but something always seemed to get in the way.  For those that know me, I’m a huge Veeam user and fan – so these events are right up my alley.  So, I did the early morning thing again, battled the dreaded Toronto traffic and headed up to Mississauga for the day to check it out.

The layout of the meeting is somewhat similar to a VMUG meeting: two companies kindly supported the event, HPE and Mid-Range, and in return got the chance to speak.  HPE started with a short but good talk about their products that integrate with Veeam, mainly 3PAR, StoreOnce and StoreVirtual.  They also touched on HP OneView and the fact that they are laser-focused on providing API entry points into all their products.

I’m glad HPE didn’t go too deep into the 3PAR integrations, as I was up next and my talking points were around just that.  I simply outlined how my day job is benefiting from said integrations; more specifically the Backup from Storage Snapshot, Restore from Storage Snapshot and On-Demand Sandbox for Storage Snapshots features.

After a quick but super tasty lunch (insert Justin Warren disclaimer post here), Mid-Range took the stage.  Mid-Range is a local Veeam Cloud Connect partner offering DRaaS and a ton of other services around that.  They did more than simply talk about the services they provide – they went into the challenges and roadblocks of consuming disaster recovery as a service, then touched briefly on how Veeam and they themselves could help solve some of those…

Finally, to cap the day off, we had David Sayavong, a local Veeam SE, take the stage to talk to us about “What’s new in version 9.5?”.  David’s presentation was not just him up there flipping through slides of features, but more of a conversation around certain features such as ReFS integration and how all of the new Veeam Agents will come into play.  Just a fun fact for the day – the audience was asked who had already upgraded to 9.5, and honestly around 1/3 of the room raised their hands.  That’s 33% who have already upgraded to a product that GA’ed only 7 days ago – talk about instilling confidence in your customers.

Anyways, I wanted to briefly outline the day for those that may be thinking of attending like I was, but haven’t yet set aside the time to do so.

But there’s more…

I mentioned at the beginning of the post that there are always struggles with getting people to “speak up” – this didn’t seem to be the case at the Veeam User Group.  I’m not sure what it was, but conversations seemed to be flying all over the place – for instance, after I was done talking about the 3PAR integration, a big conversation started up around ransomware and security.  Each presentation seemed more like a round-table discussion than a sales pitch.  It truly was a great day with lots of interaction from both the presenting companies and the audience – everything you want from a user group.

The user group intrigued me – and maybe some day I’ll throw my name in to try and get something started up on “my side of Toronto” – it’s Canada, right – there’s only a handful of IT guys here, so everything east of Toronto is mine 🙂  For more information about the Veeam User Groups keep an eye on the Veeam Events page and @veeamug on Twitter!  And to keep track of the SWOVUG dates I suggest following @MattThatITGuy and watching the swovug.ca site!  Good job Matt and team on a great day for all!