Tag Archives: #VDM30in30

Learning 3PAR – Part 2 – Moar Chunklets

In Part 1 we went through some of the common terminology within the HP 3PAR array, and now we will go into a bit more detail about one of them – the Chunklet!  A Chunklet is a key player in how the 3PAR utilizes all of the disks within the array, and in turn maximizes the performance and protection it can squeeze out of them!  With that said, I mentioned that during the initialization of a physical disk it is divided up into 1GB Chunklets, but what I didn't mention is that there are a few different types of Chunklets within the HP 3PAR.  Now, these may not be "official" HP names, as I kind of named them myself during my reading.  And for some reason I'm now craving gum 🙂

Normal Used Chunklets

These are the Chunklets that are utilized by Logical Disks.  They are strung together within different RAID sets across different physical disks in order to provide capacity to a CPG, which in turn passes it along to a Virtual Volume (essentially our datastore when it's all said and done).  These chunklets hold our production data.

Normal Reserved Chunklets (Logging Chunklets)

I don't know if these really exist but this is what I'm going to call them.  They are pretty much the same as Normal Used Chunklets, however they have been pre-configured into reserved Logical Disks which are created by the system.  We normally see reserved Logical Disks for logging (used for disk failures/rebuilds), admin (used to store event logs and administration information), and srdata (used to store historical stats and information).  We will often see these logical disks containing chunklets closer to the end of the spindles as well.

Normal Unused (Free) Chunklets

These Chunklets are exactly what they sound like – Chunklets that are provisioned, and are NOT spares, but have not yet been claimed by any Logical Disk.  It's pretty safe to say that during installation all chunklets (except designated spares and reserved chunklets) are essentially free chunklets until you start provisioning LUNs.

Spare Chunklets

Some Chunklets will be designated as spares during the initialization of the 3PAR, meaning not all 1GB Chunklets are available to be used within a Logical Disk.  Spare Chunklets are essentially placeholders which are utilized when we have a physical disk failure and the Logical Disk RAID set needs to be rebuilt.  An intelligent note here – the system automagically selects which Chunklets are to be assigned as spares, and it does so in a way that most of the spare chunklets are located as close to the end of the physical disk's block space as possible, leaving the closer blocks for production.

Chunklet Relationships

Everything just seems silly with the word chunklet in front of it 🙂  Either way, there are a few terms that are used to describe the relationships between our Normal Used Chunklets and all other chunklets within the system.

  • Local Spare Chunklet – A chunklet designated as a spare, whose primary path is connected to the same node that owns the source logical disk containing the used chunklet.
  • Local Free Chunklet – An unused/free chunklet whose primary path is connected to the same node that owns the source logical disk containing the used chunklet.
  • Remote Spare Chunklet – A spare chunklet whose primary path is connected to a node different from the node owning the source logical disk containing the used chunklet.
  • Remote Free Chunklet – A free/unused chunklet whose primary path is connected to a node different from the node owning the source logical disk containing the used chunklet.

So, we have mentioned failing physical disks a couple of times, so I think now would be a good time to discuss what exactly happens during a disk failure and how it affects our Chunklets…

  • When a connection is lost or a physical disk fails, the system immediately redirects all cached writes destined for the failed chunklets to chunklets contained in the reserved Logging Logical Disk.  This occurs until the failed physical disk/chunklets come back online, until the Logging LD becomes full, or until the rebuild process has completed.
  • The rebuild process occurs concurrently with the above step, where the system begins to reconstruct lost data utilizing the remaining chunklets and RAID levels provided.
    • There is some logic that happens during this rebuild/relocation phase as well – the system first looks to select a local spare chunklet; if none are to be found it moves on to a local free chunklet, then a remote spare chunklet, and finally a remote free chunklet – all the while trying to maintain consistency between the characteristics of the failed and target chunklets (speed, drive type, etc.).  There's a small sketch of this selection order just after this list.
  • Once the rebuild has completed, the logging disks are replayed and the data is flushed back down to the newly constructed volume.
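To make that selection order concrete, here's a minimal sketch in Python – this is not 3PAR code, just the priority described above (local spare, then local free, then remote spare, then remote free) expressed with hypothetical field names:

```python
# Minimal sketch of the relocation-target priority described above -
# not actual 3PAR code; field names are hypothetical.

def pick_relocation_target(chunklets, owner_node, drive_type):
    """Return the best chunklet to rebuild onto, or None if nothing fits."""
    # Selection order: local spare, local free, remote spare, remote free.
    priority = [("spare", True), ("free", True), ("spare", False), ("free", False)]
    for state, want_local in priority:
        for c in chunklets:
            is_local = (c["node"] == owner_node)
            # Keep the target's characteristics consistent with the failed
            # chunklet (simplified here to drive type).
            if c["state"] == state and is_local == want_local \
                    and c["drive_type"] == drive_type:
                return c
    return None

pool = [
    {"id": 7, "state": "free",  "node": 1, "drive_type": "FC"},
    {"id": 3, "state": "spare", "node": 0, "drive_type": "FC"},
]
# A logical disk owned by node 0 loses an FC chunklet:
print(pick_relocation_target(pool, owner_node=0, drive_type="FC"))  # -> id 3, the local spare
```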

So in the end these tiny 1GB chunks of contiguous space are a key player in the 3PAR array.  To help understand them I tend to ignore the fact that they share individual drives, and think of them somewhat as really small, granular 1GB drives – some marked as spares, some in different logical drives with different RAID sets, and some set aside to provide functionality for the array.  All that said, they are not separate drives – different chunklets live on the same drive – which gives us the ability to provide different RAID levels on the same drive, mix and match different sized drives without wasting capacity, and stripe our logical disks across multiple shelves, in some cases even providing shelf-level protection.  Plus, they make for a nice little visualization of coloured blocks within the 3PAR Management Console 🙂

Veeam v9 – What we know so far…

Just as I did last year during VeeamON with Veeam Backup and Replication v8, I thought I would throw a post out there about some of the new features of version 9.  Understandably this list can and probably will change – maybe new features will be added or existing features will change slightly – but either way these are the features that I've heard about thus far.  If I'm wrong or missing any, please let me know and I'll update accordingly.

Unlimited Scale-out Backup Repository

This is perhaps one of the biggest features included within v9 – all too often we see environments over-provision the storage for their backup repositories – you never know when we might get a large delta or incremental, and the last thing we want is to run out of space and have to go through the process of provisioning more.  In the end we are left with a ton of unused and wasted capacity, and when we need more, instead of utilizing what we have we simply buy more – not efficient in terms of capacity or budget management.  This is the problem Veeam is looking to solve in v9 with its Unlimited Scale-out Backup Repository functionality.  In a nutshell, the scale-out backup repo will take all of those individual backup repositories you have now and group them into a single entity or pool of storage.  From there, we can simply select this global pool of storage as our target rather than an individual repository.  Veeam can then choose the best location to place your backup files within the pool depending on the functionality and user-defined roles each member of the pool is assigned.  In essence it's a software-defined storage play, only targeted at backup repositories – gone are the days of worrying about which repository to assign to which job – everybody in the pool! 🙂
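Veeam hasn't published exactly how the placement decision is made, but conceptually it might look something like this Python sketch – the extent roles and the "pick the matching extent with the most free space" rule are purely my own assumptions for illustration:

```python
# Conceptual sketch only - not Veeam's actual placement logic.
# A scale-out repository is modeled as a pool of extents, each with a
# user-defined role and some free capacity; a new backup file lands on
# the best-fitting extent instead of a manually chosen repository.
from dataclasses import dataclass

@dataclass
class Extent:
    name: str
    role: str        # hypothetical roles, e.g. "full" or "incremental"
    free_gb: int

def place_backup(pool, file_kind, size_gb):
    """Pick the extent whose role matches the file type and has the most free space."""
    candidates = [e for e in pool if e.role == file_kind and e.free_gb >= size_gb]
    if not candidates:
        raise RuntimeError("scale-out pool has no suitable extent")
    best = max(candidates, key=lambda e: e.free_gb)
    best.free_gb -= size_gb
    return best.name

pool = [Extent("repo-01", "full", 4000), Extent("repo-02", "incremental", 900)]
print(place_backup(pool, "full", 250))   # -> repo-01
```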

More Snapshot/Repository Integration

Backup and restore from storage snapshots is no doubt a more efficient way to process your backups.  Just as Veeam has added support for HP 3PAR/StoreVirtual and NetApp, we are now seeing EMC thrown into the mix.  As of v9 we will be able to leverage storage snapshots on EMC VNX/VNXe arrays to process our backups and restores directly from Veeam Backup and Replication – minimizing the impact on our production storage and allowing us to keep more restore points, process them faster, and truly have a < 15 minute RTPO.

On the repository end of things we've already seen integration with Data Domain and ExaGrid – as of v9 we can throw HP StoreOnce Catalyst into that mix.  Tighter integration between Veeam and the StoreOnce deduplication appliance provides a number of performance enhancements to your backups and restores.  First off, you will see efficiencies in copying data over slower links due to the source-side deduplication that StoreOnce provides.  StoreOnce can also create synthetic full backups by performing only metadata operations, eliminating the need to actually copy data during the synthetic creation, which in turn makes a very I/O-intensive operation far more efficient.  And of course, creating repositories for Veeam backups on StoreOnce Catalyst can be done directly from within Veeam Backup & Replication, without the need to jump into separate management tools or UIs.
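To illustrate why a metadata-only synthetic full is so cheap, here's a toy Python sketch – my own simplification, not the Catalyst API.  Because the appliance deduplicates, a backup is really just a list of references into a block store, so building a new full means merging reference lists rather than copying data:

```python
# Toy model of a deduplicating store: block data lives once, and each
# backup is just a list of references into the store (pure metadata).
block_store = {"b1": b"...", "b2": b"...", "b3": b"...", "b2v2": b"..."}

previous_full = ["b1", "b2", "b3"]   # last full backup (references only)
incremental = {"b2": "b2v2"}         # blocks changed since that full

# Synthetic full = merge the reference lists; no block data is read or copied.
synthetic_full = [incremental.get(ref, ref) for ref in previous_full]
print(synthetic_full)                # ['b1', 'b2v2', 'b3']
```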

Cloud Connect Replication

Last year Veeam announced the Cloud Connect program, which essentially allows partners to become somewhat of a service provider for their customers looking to ship their Veeam backups offsite.  Well, it's 2015, and the same type of Cloud Connect technology is now available for replication.  Shipping backups offsite was a great feature but, honestly, being able to provide customers with a simple way to replicate their VMs offsite is ground-breaking.  Disaster recovery is a process and technology that is simply out of reach for a lot of businesses – there isn't the budget set aside for a secondary site, let alone extra hardware sitting at that site essentially doing nothing.  Now customers can simply leverage a Veeam Cloud/Service Provider and replicate their VMs to the provider's data center on a subscription basis.

DirectNFS

When VMware introduced the VMware APIs for Data Protection (VADP) it was ground-breaking in what it allowed vendors such as Veeam to do in terms of backup.  VADP is the basis for how Veeam accesses data in its Direct SAN transport mode, allowing data to be transferred directly from the SAN to the Veeam Backup and Replication console.  That said, VADP is only supported on block transports, limiting Direct SAN to just iSCSI and Fibre Channel.  In true Veeam fashion, when they see an opportunity to innovate and develop functionality where it may be lacking, they do so.  As of v9 we will be able to leverage a Direct SAN-style mode on our NFS arrays using a technology called DirectNFS.  DirectNFS will allow the VBR console server to directly mount our NFS exports, allowing Veeam to process the data directly from the array and leaving the ESXi hosts to do what they do best – run production!

On-Demand Sandbox for Storage Snapshots

The opportunities that vPower and Virtual Labs have brought to organizations have been endless.  Having the ability to spin up exact duplicates of our production environments, running them directly from our deduplicated backup files, has solved many issues around patch testing, application upgrades, etc.  That said, up until now we could only use backup files as the source for these VMs – starting with v9 we can now leverage storage snapshots on supported arrays (HP, EMC, NetApp) to create completely isolated copies of the data that resides on them.  This is huge for those organizations that leverage Virtual Labs frequently to test code or run training.  Instead of waiting for backups to occur, we could have a completely isolated testing sandbox spun up from storage snapshots in, essentially, minutes.  A very awesome feature in my opinion.

ROBO Enhancements

Customers who currently use Veeam across multiple locations will be happy to hear about some of the enhancements in v9 centered around remote/branch offices.  A typical Veeam deployment has a centralized console controlling the backups at all of our remote locations.  In v8, even if you had a remote proxy and repository located at the remote office, all the guest interaction traffic was forced to traverse your WAN, as it was communicated directly from the centralized console.  In v9 things have changed – a new Guest Interaction Proxy can be deployed to handle this type of traffic.  When placed at the remote location, only simple commands are sent across the WAN from the centralized console to the new GIP, which in turn facilitates the backup of the remote VMs, thus saving on bandwidth and providing more room for, oh, I don't know, this little thing called production.

When it comes to recovery things have also drastically changed.  In v8, when we performed a file-level recovery the data actually had to traverse our WAN twice – once when the centralized backup console pulled the data, then again as it pushed it back out to its remote target – not ideal by any means.  Within v9 we can now designate a remote Windows server as a mount server for that remote location – when a file-level recovery is initiated, the mount server handles the processing of the files rather than the backup console, saving again on bandwidth and time.

Standalone Console

"Veeam Backup & Replication console is already running"  <- Any true Veeam end-user is sure to have seen this message at one time or another, forcing us to either find and kill the process or yell at someone to log off 🙂  As of v9 the Veeam Backup & Replication console has been broken out from the Veeam Backup & Replication server, meaning we can install a client on our laptops in order to access Veeam.  This is not a huge technical change, but honestly it's one of my favorite v9 features.  I have a lot of VBR consoles and am just sick of having all those RDP sessions open – this alone is enough to make me upgrade to VBR v9 🙂

Per-VM backup files

The way Veeam stores our backup files is getting another option in version 9.  Instead of having one large backup file that contains multiple VMs, we can now enable what is called a "Per-VM backup file chain" option.  What this does is store each VM's restore points within the job in its own dedicated backup file.  Some advantages to this?  Think about writing multiple streams in parallel into our repositories – this should increase the performance of our backup jobs.  That said, it sounds like an option you may only want to use if your repository provides its own deduplication, as you would lose the job-wide deduplication Veeam provides if you enable this.

New and improved Explorers

The Veeam Explorers are awesome, allowing us to restore individual application objects from our backup files depending on what application is inside them.  Well, with v9 we get one new explorer as well as some great improvements to the existing ones.

  • Veeam Explorer for Oracle – New in v9 is explorer functionality for Oracle.  Transaction-level recovery and transaction log backup and replay are just a couple of the innovative features we can now perform on our Oracle databases.
  • Veeam Explorer for MS Exchange – We can now get a detailed export report which outlines exactly what has been exported from our Exchange servers – great for auditing and reporting purposes for sure!  Another small but great feature – Veeam will now provide us with an estimate of the export size for the data contained in our search queries, so at least we will have some idea of how long it might take.
  • Veeam Explorer for Active Directory – Aside from users, groups, and the other normal AD objects we might want to restore, we can now process GPOs and AD-integrated DNS records.  Oh, and if you know what you are doing, Veeam v9 can also restore configuration partition objects (I'll stay away from this one 🙂).
  • Veeam Explorer for MS SQL – One big item that has been missing from the SQL explorer is table-level recovery – in v9 this is now possible.  Also in v9 is the ability to process even more SQL objects such as stored procedures, functions, and views, as well as utilize a remote SQL server as a staging server for the restore.
  • Veeam Explorer for SharePoint – As much as I hate it, SharePoint is still widely used, therefore we are still seeing development within Veeam on their explorer.  In v9 we can process and restore full sites as well as site collections.  Also, list- and item-level permissions are now possible to restore as well.

There are a few more enhancements and features, but honestly I can't write them all down – we will just have to wait and see for ourselves!  Veeam Backup & Replication version 9 is slated to be released later this year – so we won't have to wait long!

Learning 3PAR – Part 1 – Chunklets, Logical Disk, CPGs, and Virtual Volumes

As I'm currently in the beginning phases of an HP 3PAR deployment, I thought it might be a good idea to write a few posts centering on some of the concepts built into the 3PAR architecture.  For the most part I can relate the different terminology to other storage arrays I've used in the past, but some of it is somewhat new to me as well.  Either way, I'm no expert and am still learning myself, so ease up on me if I make a mistake eh!  Anyways, for the first part of this series I'll concentrate simply on some of the terminology and layers that exist within the 3PAR StoreServ and try to explain them the best I can – remember, I'm explaining them to me as well!

5 Layers to the hosts

As with any array, the path that data takes to get from our hosts to its final destination on disk is a complex one – but thankfully we don't have to worry about all of the bumps in the road along the way.  That said, it's always nice to understand the road as best we can in order to determine how best practices and configuration changes will apply to our environment.  With the 3PAR that path contains 5 essential layers: Virtual Volumes, Common Provisioning Groups, Logical Disks, Chunklets, and Physical Disks.

[Diagram: the five 3PAR layers – Virtual Volumes, Common Provisioning Groups, Logical Disks, Chunklets, and Physical Disks]

The diagram gives us some sense of the relationship between each layer, but before taking a holistic view let's first discuss each one…

Physical Disks

This is an easy one, right?  A physical disk is just that – a physical disk located inside your 3PAR array, encompassing all disk types within the array.

Chunklets

The first thing a 3PAR does when it is discovering its storage is break down all of the capacity on your physical disks into chunklets.  Each chunklet is 1GB in size and occupies contiguous space on a physical disk.  Chunklets are local to that physical disk only and cannot span to others.
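As a toy illustration (my own sketch, not 3PAR internals – the real array also reserves some capacity for itself), initialization essentially carves every spindle into fixed 1GB extents addressed by disk and offset:

```python
# Toy illustration: each physical disk is carved into 1 GB chunklets of
# contiguous space, and a chunklet never spans disks.
CHUNKLET_GB = 1

def carve_chunklets(disk_id, disk_capacity_gb):
    """Return (disk_id, start_offset_gb) pairs - one per chunklet."""
    return [(disk_id, offset) for offset in range(0, disk_capacity_gb, CHUNKLET_GB)]

# A 600 GB spindle yields 600 chunklets, all local to disk 0:
chunklets = carve_chunklets(disk_id=0, disk_capacity_gb=600)
print(len(chunklets), chunklets[:3])   # 600 [(0, 0), (0, 1), (0, 2)]
```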

Logical Disks

Logical Disks are essentially groupings of chunklets which are arranged as rows of like RAID sets.  LDs ensure that each chunklet within a RAID set is physically located on a different physical disk.  We don't directly create LDs on the 3PAR – they are generated during the creation of a CPG (explained next), or more precisely, when a Virtual Volume is created on a CPG.  All of an LD's metadata however – RAID type, allocation, growth – is defined when creating the CPG itself.

Common Provisioning Groups (CPG)

A CPG is simply a pool of Logical Disks that provides the means for a Virtual Volume (explained next) to consume space.  When we deploy a CPG we do not actually use any of the space in our pooled logical disks until a virtual volume is created – meaning a 2TB CPG with no virtual volumes consumes no space at all.  We can think of a CPG as similar to an EVA disk group, but feeding on logical disks instead of physical disks.

Virtual Volumes

No, these aren't the VVols you're looking for – this is simply the term 3PAR uses for the LUNs that are presented to hosts – they are not the VVols we have all seen come supported in vSphere 6.  Either way, a Virtual Volume is a LUN that draws its capacity from a CPG – one CPG can provide space to many virtual volumes.  A virtual volume is the LUN that is exported to your ESXi hosts, which eventually hosts your datastores.  Just like on most arrays, Virtual Volumes can be provisioned either thick or thin – a thin-provisioned Virtual Volume only instructs its associated CPG to draw space from the logical disks as space is needed.  CPGs can create logical disks as needed to handle the increased demand for capacity, up until the user-defined size limit of the CPG is reached.

So working backwards we come to somewhat of the following (there's a small code sketch of this stack after the list):

  • A datastore is located on a Virtual Volume
  • A Virtual Volume draws its space from a Common Provisioning Group (CPG).
  • A Common Provisioning Group is any given number of Logical Disks joined together to form some sort of contiguous space.
  • A Logical Disk is simply a collection of chunklets which are joined together in rows in order to produce a certain RAID set (1, 5, 6, etc.).
  • A Chunklet is a 1GB piece (chunk) of any given physical disk within the array.  It’s also a very funny word.
  • A physical disk is…well, a physical disk.
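For anyone who thinks in code, here's a rough Python sketch of those relationships – hypothetical classes of my own making, not the actual 3PAR object model:

```python
# Rough sketch of the five layers and who feeds whom.
class PhysicalDisk:                  # ...well, a physical disk
    def __init__(self, disk_id):
        self.disk_id = disk_id

class Chunklet:                      # 1GB slice of one physical disk
    def __init__(self, disk, offset_gb):
        self.disk, self.offset_gb = disk, offset_gb

class LogicalDisk:                   # rows of chunklets forming a RAID set
    def __init__(self, raid_level, chunklets):
        self.raid_level, self.chunklets = raid_level, chunklets

class CPG:                           # pool of logical disks
    def __init__(self, logical_disks):
        self.logical_disks = logical_disks

class VirtualVolume:                 # the LUN exported to the hosts
    def __init__(self, cpg, size_gb):
        self.cpg, self.size_gb = cpg, size_gb

disks = [PhysicalDisk(i) for i in range(4)]
row = [Chunklet(d, 0) for d in disks]        # each chunklet on a different spindle
ld = LogicalDisk("RAID5", row)
vv = VirtualVolume(CPG([ld]), size_gb=500)   # the datastore-to-be
```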

So there we have it – it being the very, very, very basic understanding of some of the terminology within the HP 3PAR.  Certainly we can dive deeper into some of these terms, and we will in later posts – I mean, there are many different types of Chunklets, some reserved, some spare, but we will save those and some other terms such as Adaptive Optimization for another post (mainly because I have no idea quite yet 🙂).

Friday Shorts – #VDM30in30, Post Power On SRM commands, vExpert and more

Here we go, my latest edition of Friday Shorts – a collection of blogs and news that I've found interesting over the last few weeks!  In true Canadian fashion, I apologize for the list being so short this week 🙂

#VDM30in30

I'm not sure how well known it is, but there is a fun little challenge happening right now from the Virtual Design Master folks.  It's called #VDM30in30.  The concept is simple – 30 blogs in 30 days throughout the month of November.  Just write and syndicate out to Twitter with the #VDM30in30 hashtag.  I watched last year and it actually generated a lot of great content – content that stretched beyond people's main focus – blogs about career challenges, office setups, etc.  It's nice to see another side of some of the great bloggers that are participating.  Speaking of participating, I asked Eric Wright (@discoposse) if it was too late to join – his answer: it's never too late 🙂  So, let's consider this Post #1 in my list!  Honestly, I don't think I'm going to hit 30 blogs – I'd be scraping the bottom of the barrel for topics and would end up with some kind of crazy carpal tunnel – but I'll do my best to get as many out as I can – #VDM5in30 ???

Using PowerCLI to Automate SRM

I don't (I mean, I've never) used SRM, but when it comes to automation, be it through APIs or PowerShell, I'm always interested.  Conrad Ramos (@vnoob) has a great article about how to automate some post power-on commands within SRM using PowerShell and PowerCLI.  And let's face it, if you are ever in a situation where you have initiated a failover within SRM, you probably want to utilize all of the automation you can, since you will most likely have a number of crazed employees standing behind you panicking 🙂

Oooh Top vBlogs is coming soon!

Every year Eric Siebert spends a tireless amount of time putting together his Top vBlog voting!  Although wherever I end up in the standings really doesn't affect my writing or post frequency, it's still a fun little way of placing a number on this blog, as well as ranking my favorite bloggers, writers, and podcasters out there.  It appears he has already begun the planning for the 2016 challenge, so all I ask is that as you are perusing through your feed readers, syndication, and Google results, you take note of whose blog you're on – their name may very well be on the list – give them a little love when you hit the polls!

vExpert 2016 applications are open!

For those who haven't heard, applications for vExpert 2016 are now open!  Current vExperts are able to quickly apply by filling out a fast-track application, and those looking to apply for the first time will need to fill out an application that is slightly longer!  So, if you have started writing, blogging, or evangelizing in any way, I encourage you to apply!  It won't take long and hey, who knows, you might get in.  The vExperts are a humble bunch and always shrug off the benefits, but in all honesty there are some nice perks that come along with the designation – licenses, PluralSight subscriptions, and a lot of great swag provided by a lot of vendors.  Just apply – it can't hurt!