Tag Archives: VCAP5-DCA

8 weeks of #VCAP – Host Cache Scenario by @tomverhaeg

Big thanks to Tom Verhaeg ( BLOG / TWITTER ) for another awesome practice scenario for the VCAP5-DCA

You recently acquired some SSD drives for your hosts. You're not running vSphere 5.5 yet, so vFRC is not an option. You read something about swap to host cache, and you think it might be wise to configure your SSD drive for use as host cache.

Well, the process of configuring this isn't that hard. Swap to host cache is used as a last resort, as a replacement for swapping to "disk". Remember that vSphere has 4 main memory management techniques:

1) Transparent page sharing: Eliminates redundant copies of memory pages by removing them from memory and keeping a single referenced copy instead.

2) Memory ballooning: In times of contention, the balloon driver (which comes with VMware Tools) asks the guest OS for unused memory and hands it back to the hypervisor.

3) Memory compression: If ballooning isn't enough, vSphere tries compressing memory pages (basically gzipping them) before swapping them out.

4) Swap to disk / host cache: Swap memory out to a disk of some sort.

So, swapping itself comes last in the memory management process. It's still not something you want, but swapping to an SSD beats swapping to shared storage or slow local disks.
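If you want to poke at how a couple of these techniques are configured on a host, the related advanced settings can be pulled up from the ESXi shell. A quick sketch (the option names below are from memory on a 5.x host, so verify them before relying on them):

esxcli system settings advanced list -o /Mem/ShareScanGHz

esxcli system settings advanced list -o /Mem/MemZipEnable

The first shows the transparent page sharing scan rate, the second whether memory compression is enabled.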

Back to the scenario – you configure host cache by offering up (a portion of) an SSD-tagged datastore. Go to Configuration -> Host Cache Configuration

clip_image002

All devices recognized as SSD drives will show up here. You can right-click a datastore and set the amount of disk space you are willing to spend on host cache. If you haven't formatted a datastore yet but do have an SSD in place, you can use the Add Storage wizard shown in the screenshot above.

clip_image003

Once you’ve configured this, you can browse the datastore which you have (partially) allocated to Host cache. On your datastore, you will find a hashed folder, and in that folder a folder named hostCache.

Something like this: 5241d252-0687-cf96-f89a-10ddb1eabcf5/hostCache

In this folder, you will find as many .vswp files as the number of GB you allocated to host cache.
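If you'd rather sanity check this from the ESXi shell, something like the following should work (the esxcli sched swap namespace is from memory, so verify the exact syntax on your host):

esxcli sched swap system get

ls /vmfs/volumes/5241d252-0687-cf96-f89a-10ddb1eabcf5/hostCache

The first command reports whether host cache swapping is enabled, and the second lists the .vswp files backing the cache.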

Hurray!

8 weeks of #VCAP – vSphere Network I/O Control

Alright – here we go, Network I/O Control – Objective 2.4 of the blueprint lists this as a skill you must know.  Honestly, I've never used this before writing this post…thankfully, it's a very very easy thing to configure.  Unless I'm missing something, in which case I'm in for some trouble come exam time 🙂

First up, let's have a look at the requirements.

  • Enterprise Plus licensing – since you need a distributed switch to use NIOC, in turn you need Ent+ licenses.

OK, maybe I should have said requirement – not plural.  I can't seem to find any other requirements for using NIOC.  Anyways, the first step in getting NIOC set up is to enable it, and that in itself is a matter of checking a box.  From within the Networking inventory view, on the Resource Allocation tab, select 'Properties' and check the box 🙂

nioc-enable

 

System Network Resource Pools

Easy enough, right?  Now on to our network resource pools.  As you can see, there are some default system network resource pools already set up within NIOC.

  • Fault Tolerance
  • iSCSI
  • Management Traffic
  • Virtual Machine Traffic
  • vMotion
  • vSphere Replication

I’ll leave it to your imagination as to what traffic these represent.  Basically, these resource pools are automatically applied to their corresponding traffic type when we enable NIOC.  NIOC uses the same share mechanism that resource pools do: each network resource pool is assigned a share value, which applies relative to the other pools during network contention.  So, going by the example in the Networking guide, if we assign FT and iSCSI a share value of 100 while all other resource pools have 50 shares, iSCSI and FT would each get 100 / 400 = 25% of the available bandwidth during contention, while the remaining pools would each receive 50 / 400 = 12.5%.  The table below should help with that formula.

Resource Pool    Shares    Total Shares    Percentage
iSCSI            100       400             25%
FT               100       400             25%
Management       50        400             12.5%
VM               50        400             12.5%
vMotion          50        400             12.5%
Replication      50        400             12.5%

What if I want to further segregate my VM traffic?

A valid question.  To handle this, NIOC allows us to create our own user-defined network resource pools.  Again, this is a very easy process.  Selecting ‘New Network Resource Pool’ will open the dialog box we need.  See below…

newresourcepool

As you can see, we can create our own resource pool and assign it either a predefined share level (high, normal, low) or a custom number, as well as a QoS priority tag if we need to tag outbound traffic leaving the virtual switch.  Just a note: we can change the share values and QoS tags on the system-defined resource pools as well if need be.

Now that we have our resource pool created there’s only one final step in applying it.  Using the ‘Manage Port Groups’ link we can assign our newly created resource pool to one of our dvPortGroups.  Below I’ve done just that by assigning ‘My Server Traffic’ to dvServers.

assignportgroup

And that’s all there is to NIOC really.  Again, not too hard, but something I’ve never touched before now.  Also, something that could have caught me off guard on the exam – the last thing I want to do is spend time reading documentation!  Good luck studying!

8 weeks of #VCAP – Section 3 scenario – CPU Affinity!

Thanks once again to Tom Verhaeg for this great scenario.

The voice team has recently setup Cisco Unity. The VoIP administrator sends you an e-mail. To comply with Cisco best practices, the Cisco Unity VM needs to have CPU affinity set. You really don’t like this, but the VoIP administrator and your boss insist. Make it happen……..

Damn, this really isn’t a fun thing to do. CPU affinity restricts a VM to run only on the specific cores / processors that you specify. There may be requirements for it (such as the one above), but in general you shouldn’t do it. It breaks NUMA scheduling and, more importantly, fully automated DRS! To support CPU affinity, the DRS automation level for the VM has to be manual or partially automated.

The process itself isn’t that complicated. Edit the settings of the VM and go to the Resources tab. Under Advanced CPU, you’ll find the option for CPU affinity.

cpuaffinity1
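As a reference point, the GUI setting above ends up as an advanced configuration parameter in the VM's .vmx file. A minimal sketch pinning the VM to cores 0 and 1 (the core numbers here are just an example, not a Cisco recommendation):

sched.cpu.affinity = "0,1"

Setting the value back to "all" removes the affinity.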

If you do not see the Scheduling Affinity section on a host in a DRS cluster, you are running DRS in fully automated mode. You can override this per VM by going to the cluster settings and, under DRS, selecting Virtual Machine Options. Set the automation level for this VM to disabled, manual, or partially automated.

cpuaffinity2

Hurray!

8 weeks of #VCAP – More Networking Scenarios by Tom!

Another top notch scenario built by Tom Verhaeg! (blog/twitter)  Thanks Tom!

Your recent work on the new portgroup was top notch! Now, the network administrators have some new requirements. You currently use one pNIC as an uplink for the DvS. A second pNIC has been connected to the network and you have been tasked with adding it to the DvS. Also ensure that the DvS_StorageNetwork port group only uses the new pNIC and does VLAN tagging on VLAN ID 20.

Another networking objective. Whoohoo! Alright, let us first check out the current network adapters available on the host:

ns-scenario1
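If you prefer the shell over the screenshot, the same information is available with:

esxcli network nic list

which lists every vmnic along with its link state, speed and driver.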

Alright, so vmnic2 is the one we can add to DvS_AMS01. Go over to the networking view (Ctrl + Shift + N) and edit the settings of your DvS. We first need to check whether the DvS allows for 2 uplinks instead of just 1.

ns-scenario2

And check this out! It’s still set to 1. This is a good one to remember for the exam: on the DvS object itself, you configure the maximum number of physical adapters (also called uplink ports) per host. So set that to 2 and let’s continue with adding vmnic2 to the DvS.

Since the host is already connected to the DvS, click the DvS and select Manage Hosts. You will find your host, and you can add the second NIC.

ns-scenario3

You could also do this from the hosts and clusters view, do whatever works for you.
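To confirm the host now presents both uplinks to the DvS, a quick check from the ESXi shell works too:

esxcli network vswitch dvs vmware list

The Uplinks field for DvS_AMS01 should now show vmnic2 alongside the original adapter.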

Now that we have added that pNIC to the DvS, we need to create the DvS_StorageNetwork port group. Remember that we need to do VLAN tagging on VLAN ID 20 here. Create the new port group now; its settings should look like this:

ns-scenario4

Now, for the last part: since ESXi load balances across uplinks by default (route based on originating virtual port ID), we now have load balancing on DvS_ProductionNetwork, which is great, but not what we need for the storage network.

Open up the settings of that port group and go to the Teaming and Failover section.

ns-scenario5

Both uplink ports are now under Active Uplinks. Let’s review real quick what the options are:

Active Uplinks – actively being used for traffic flow

Standby Uplinks – will only become active if a failure occurs on one of the active uplinks

Unused Uplinks – this adapter will never be used for this port group

We need to ensure that the storage port group never uses the original uplink, so move dvUplink1 over to Unused Uplinks. It should then look like this:

ns-scenario6

Hurray!

8 weeks of #VCAP – Network Scenario by @tomverhaeg

First off I want to thank Tom Verhaeg (blog/twitter) for providing this scenario.  Tom got in contact with me and wanted to do what he can to help out with the 8 weeks of #VCAP series, as he is going through a similar process while studying for the VCAP5-DCA.  So props to Tom for taking the time and initiative to give back.  Hopefully we see more from him in the coming weeks!  Even better for me, as I get to run through some scenarios that I didn't make up 🙂  Be sure to follow Tom on Twitter and check out his blog.  Thanks for the help Tom!!!

Your company leverages the full Enterprise Plus licensing and has set up a Distributed vSwitch. Recently, the number of ports needed on a particular portgroup exceeded the number configured. You are tasked with creating a new Portgroup, called DvS_ProductionNetwork which only connects the running VM’s and also functions when vCenter is down.

Off we go again. So, let’s recall: there are 3 different port binding options on a DvS.

Static binding – Creates a port group with a manually set number of ports. A port is assigned whenever a vNIC is added to a VM. You can connect a vNIC to a static binding port group only through vCenter.

Dynamic binding (deprecated in vSphere 5.0!) – A port is assigned to a vNIC when the VM is powered on and its vNIC is in a connected state. You can connect to a dynamic binding port group only through vCenter.

Ephemeral binding – A port is assigned to a vNIC when the VM is powered on and its vNIC is in a connected state. This binding method bypasses vCenter, allowing you to manage virtual machine networking when vCenter is down.

So, that’s the one we need! Ephemeral binding! Luckily, it’s quite simple to configure. Hop over to the networking inventory (Ctrl + Shift + N) and create the new port group. Give it a name and leave the number of ports on the default of 128.

Now edit the settings of this port group and select Ephemeral binding under the port binding dropdown. Also note that the number of ports is now greyed out.

Hurray!

tom

8 weeks of #VCAP – Storage Scenarios (Section 1 – Part 2)

Hopefully you all enjoyed the last scenario-based post because you are about to get another one 🙂  Kind of a different take on covering the remaining skills from the storage section, section 1.  So, here we go!

Scenario 1

A coworker has come to you complaining that every time he performs storage-related functions from within the vSphere client, VMware kicks off these long-running rescan operations.  He's downright sick of seeing them and wants them to stop, saying he will rescan when he feels the need to, rather than having vSphere decide when to do it.  Make it happen!

So, quite the guy, your coworker, thinking he's smarter than the inner workings of vSphere, but luckily we have a way to help him.  The functions we are going to perform are also part of the VCAP blueprint – coincidence?  Either way, the answer to our coworker's prayers is something called vCenter Server storage filters, and there are 4 of them, explained below…

RDM Filter (config.vpxd.filter.rdmFilter) – filters out LUNs that are already mapped as an RDM

VMFS Filter (config.vpxd.filter.vmfsFilter) – filters out LUNs that are already used as a VMFS datastore

Same Host and Transports Filter (config.vpxd.filter.SameHostAndTransportsFilter) – filters out LUNs that cannot be used as a VMFS datastore extent

Host Rescan Filter (config.vpxd.filter.hostRescanFilter) – automatically rescans storage adapters after storage-related management functions are performed

As you might have concluded, it's the Host Rescan Filter that we need to deal with.  You may also have concluded that these are advanced vCenter Server settings, judging by the config.vpxd prefixes.  What is certain is that all of these filters are enabled by default – so if we need to disable one, such as the Host Rescan Filter, we need to add the corresponding key and set it to false.  Another funny thing is that we won't see these listed by default; they are silently enabled.  Anyways, let's get on to solving our coworker's issue.

Head into the advanced settings of vCenter Server (Home -> vCenter Server Settings -> Advanced Options).  From here, disabling the Host Rescan Filter is as easy as entering config.vpxd.filter.hostRescanFilter and false in the text boxes near the bottom of the screen and clicking 'Add' – see below

hostrescanfilter

And voila!  That coworker of yours should no longer have to put up with those pesky storage rescans after he's done performing his storage related functions.
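As a side note, the same filters can also be defined directly in vpxd.cfg on the vCenter Server. Roughly like this – the structure is recalled from memory, so treat it as a sketch and check the documentation for your version:

<config>
  <vpxd>
    <filter>
      <hostRescanFilter>false</hostRescanFilter>
    </filter>
  </vpxd>
</config>

If you go the vpxd.cfg route, the vCenter Server service needs a restart to pick up the change.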

Scenario 2

You work for the mayor's office in the largest city in Canada.  The mayor himself has told you that he installed some SSD into a host last night and it is showing up as mpx.vmhba1:C0:T0:L0 – but not being picked up as SSD!  You mention that you think it's simply a SAS disk, but he insists it isn't (what is this guy, on crack? :)).  Either way, you are asked if there is anything you can do to somehow 'trick' vSphere into thinking that this is in fact an SSD.

OK, so this one isn't that bad really – a whole lot of words for one task.  Although most SSD devices will be tagged as SSD by default, there are times when they aren't.  Obviously this device isn't really an SSD, but the thing is we can tag it as SSD if we want to.  To start, we need to find the identifier of the device we wish to tag.  This time I'm going to run esxcfg-scsidevs to do so (with -c for a compact display).

esxcfg-scsidevs -c

From there I'll grab the identifier of the device I wish to tag, in my case mpx.vmhba1:C0:T0:L0 (crazy Rob Ford).  Now if I have a look at that device with esxcli, I can see that it is most certainly not SSD.

esxcli storage core device list -d mpx.vmhba1:C0:T0:L0

ssd-no

So, our first step is to find out which SATP is claiming this device.  The following command will let us do just that:

esxcli storage nmp device list -d mpx.vmhba1:C0:T0:L0

whichsatp

Alright, so now that we know the SATP, we can go ahead and define a SATP rule that states this device is SSD:

esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d mpx.vmhba1:C0:T0:L0 -o enable_ssd
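Before reclaiming, you can verify the rule actually landed by listing the SATP rules and looking for the enable_ssd option:

esxcli storage nmp satp rule list | grep enable_ssd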

And from here we need to reclaim the device

esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T0:L0

And another look at the device listing should now show us that we are dealing with a device that is SSD.

esxcli storage core device list -d mpx.vmhba1:C0:T0:L0

ssd-yes

So there you go Mr. Ford, I mean Mr. Mayor – it's now SSD!!!!

And that's all for now 🙂