
8 weeks of #VCAP – The ESXi Firewall

Alright, continuing in the realm of security, let's have a look at the built-in firewall on ESXi.  This post relates directly to Objective 7.2 on the blueprint!  Basically, a lot of this work can be done in either the GUI or the CLI, so choose whichever you are most comfortable with.  I'll be jumping back and forth between both!  Some things are just easier in the GUI I find… anyways, I only have about 4 weeks to go so let's get going…

First up, enable/disable pre-configured services

Easy/Peasy!  Hit up 'Security Profile' on a host's Configuration tab and select 'Properties' in the 'Services' section.  You should see something similar to what's shown below


I guess as far as enabling/disabling goes, you would simply stop the service and set its automation policy to manual.

Speaking of automation, that's the second skill

As you can see above we have a few options in regards to automation behavior.  We can Start/Stop with the host (basically on startup and shutdown), Start/Stop manually (we will go in here and do it), or Start automatically if any ports are open – meaning the service starts when at least one of its firewall ports is opened, and stops when they are all closed.  Anyways, that's all there is to this!

We are flying through this, Open/Close Ports

Same spot as above just hit the 'Properties' link on the Firewall section this time.  Again, this is just as easy – just check/uncheck the boxes beside the service containing the port you want to open or close!  Have a look below – it's pretty simple!


Another relevant spot here is the 'Firewall' button at the bottom.  Aside from opening and closing a port, we can also specify which networks are able to get through when our port is open.  Below I'm allowing access only from a specific network.

Again this can be done within the CLI, but I find it much easier to accomplish inside of the GUI.  But, that's a personal preference so pick your poison!

That's what I get for talking about the CLI – custom services!

Aha!  Too much talk of the CLI leads us to a task that can only be completed via the CLI; Custom Services.  Basically, if you have a service that utilizes ports that aren't covered off by the default services you need to create your own spiffy little service so you can enable/disable it and open/close those ports and allow access to it.  So, off to the CLI we go…

The services in the ESXi firewall are defined by XML files located in /etc/vmware/firewall.  The service.xml file contains the bulk of them and you can define yours in there, or you can simply add any xml file to the directory and it will be picked up (so long as it is defined properly).  If you have enabled HA you are in luck – you will see an fdm.xml file there.  Since the VCAP is time sensitive this might be your quickest way out, as you can just copy that file, rename it to your service, and modify it as you see fit.  If not, then you will have to get into service.xml and copy text out of there.  I'm going to assume HA is enabled and go the copy/modify route.

So, copy fdm.xml to your service name

cp fdm.xml mynewservice.xml

Before modifying mynewservice.xml you will need to add write permission to the file (the copy will be read-only) – use the following to do so…

chmod o+w mynewservice.xml

Now vi mynewservice.xml – if you don't know how to use 'vi', well, you had better just learn – go find a site 🙂  Let's say we have a requirement to open up tcp/udp 8000 inbound and tcp/udp 8001 outbound.  We would make that file look as follows, simply replacing the name and ports and setting the enabled flag.
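As a reference, here's roughly what mynewservice.xml could look like for tcp/udp 8000 inbound and tcp/udp 8001 outbound – modeled on the stock entries in service.xml, so double-check the schema against what's actually on your host:

```xml
<ConfigRoot>
  <service>
    <id>mynewservice</id>
    <rule id="0000">
      <direction>inbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>8000</port>
    </rule>
    <rule id="0001">
      <direction>inbound</direction>
      <protocol>udp</protocol>
      <porttype>dst</porttype>
      <port>8000</port>
    </rule>
    <rule id="0002">
      <direction>outbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>8001</port>
    </rule>
    <rule id="0003">
      <direction>outbound</direction>
      <protocol>udp</protocol>
      <porttype>dst</porttype>
      <port>8001</port>
    </rule>
    <enabled>true</enabled>
    <required>false</required>
  </service>
</ConfigRoot>
```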


Alright, save that bad boy, and it's probably a good idea to run 'chmod o-w mynewservice.xml' to take away that write permission.  If you go and look at your services, or simply run 'esxcli network firewall ruleset list', you might say, "hey, where's my new service?"  Well, it won't show up until you refresh the firewall – to do so, use the following command…

esxcli network firewall refresh

Now you can go check in the GUI or do the following to list out your services…

esxcli network firewall ruleset list

Woot!  Woot!  It's there!  But wait, it's disabled.  No biggie, we can go ahead and enable it just as we did the others in the steps earlier in this post – or, hey, since we are in the CLI let's just do it now!

esxcli network firewall ruleset set -r mynewservice -e true

And that's that!  You are done!  If asked to set the allowedIP information, I'd probably just jump back to the GUI and do that!
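That said, the allowed-IP list does have a CLI equivalent – here's a sketch using the mynewservice ruleset, with a made-up 192.168.1.0/24 network you'd swap for your own:

```shell
# Stop allowing all IPs through this ruleset
esxcli network firewall ruleset set -r mynewservice --allowed-all false

# Permit only a specific network (example subnet)
esxcli network firewall ruleset allowedip add -r mynewservice -i 192.168.1.0/24

# Verify the allowed-IP configuration
esxcli network firewall ruleset allowedip list
```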

Set firewall security level – More CLI goodness

Well before we can set the firewall security level let's first understand what security levels are available to us.  ESXi gives us three…

High – This is the default – basically, the firewall blocks all incoming and outgoing ports except for the essential ports ESXi needs to run.

Medium – All incoming traffic is blocked, except for any port you open – outgoing is a free-for-all

Low – Nada – have at it, everything is open.  

Anyway, we can get the default action by specifying

esxcli network firewall get

and to change it we have a few options…  Passing '-d false' would set us to DROP (the default HIGH security level), passing a '-d true' will set us up to PASS traffic (I think this would be the medium security) and setting a '-e false' will disable the firewall completely (the low settings).  So, to switch to medium we could do the following

esxcli network firewall set -d true

I could be wrong here, so if I am just let me know and I'll update it 🙂

And guess what?  We are done with the firewall!  I would practice this stuff as it's easily measurable and it can quickly be determined whether you are doing something right or wrong – I'd bet this will be on the exam in one way or another.  Good Luck!

8 weeks of #VCAP – Security

Just as I said, I'm going to hop around from topic to topic, so without further ado we move from HA to security.  This post will cover pretty much all of Objective 7 on the blueprint – some things I may graze over while focusing heavily on others.

So first up is Objective 7.1 – now there is a lot of information in here and I'll just pull out what's most important in my opinion, as well as the tasks I don't commonly perform.  That said, I'm going to leave out the users, groups, lockdown mode, and AD authentication.  These things are pretty simple to configure anyways.  Also, this whole authentication proxy thing – I'm just going to hope for the best that it isn't on the exam 🙂  So, let's get started on this beast of an objective.


First up, SSH

Yeah, we all enable it, right – and we all suppress that warning with that advanced setting.  The point is, SSH is something that is near and dear to all our hearts, and we like to have the ability to access a host via the CLI in the case the GUI or vCenter or something is down.  So with that said, let's have a look at what the blueprint states in regards to SSH customization.  Aside from enabling and disabling it, which is quite easy so I won't go over it, I'm not sure what the blueprint is getting at.  I've seen lots of sites referencing the timeout setting so we can show that.  Simply change the value in the Advanced Settings of a host to the desired time in seconds (UserVars->ESXiShellTimeOut) as shown below

As far as 'Customize SSH settings for increased security' goes, I'm not sure what else you can enable/disable or tweak to do so.  If you are familiar with sshd, I suppose you could prevent root from logging in and simply utilize SSH with a local user account.

Certificates and SSL

The blueprint mentions the enabling and disabling of certificate checking.  This is simply done by checking/unchecking a checkbox in the SSL section of the vCenter Server settings.

The blueprint also calls out the generation of ESXi host certs.  Before doing any sort of certificate generation or crazy ssl administration always back your original certs up.  These are located in /etc/vmware/ssl – just copy them somewhere.  To regenerate new certs simply shell into ESXi and run generate-certificates – this will create new certs and keys, ignore the error regarding the config file 🙂  After doing this you will need to restart your management agents (/etc/init.d/hostd restart) and quite possibly reconnect your host to vCenter.

To deploy a CA-signed cert you can simply copy your certs to the same directory (/etc/vmware/ssl), making sure they are named rui.crt and rui.key, and restart hostd the same as above.

As far as SSL timeouts go, I couldn't find this in any of the recommended tools for this objective – it's actually in the Security Guide (which makes sense, right?  We are doing the security objective #fail).  Either way, you need to edit the /etc/vmware/hostd/config.xml file and add the following two entries to modify the SSL read and handshake timeout values respectively (remember, they are in milliseconds)
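For reference, the two entries described in the Security Guide look like this – they live inside the <ssl> section under <vmacore> in /etc/vmware/hostd/config.xml, and the 20-second values here are just examples:

```xml
<config>
  <vmacore>
    <ssl>
      <!-- SSL read timeout, in milliseconds -->
      <readTimeoutMs>20000</readTimeoutMs>
      <!-- SSL handshake timeout, in milliseconds -->
      <handshakeTimeoutMs>20000</handshakeTimeoutMs>
    </ssl>
  </vmacore>
</config>
```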



Once again you will need to restart hostd after doing this!

Password policies

Yikes!  If you want to get confused, try to understand the pam password policies.  I'll do my best to explain them – keep in mind it will be high level though – this is in the blueprint, however I'm not sure if they are going to have you doing this on the exam.  Either way, it's good to know…  Honestly, I don't think I'm going to memorize this; if you work with it daily then you might, but me, no!  I'll just know that it is also in the Security Guide (search for PAM).  Anyways, here's the command

password requisite /lib/security/$ISA/pam_passwdqc.so retry=N min=N0,N1,N2,N3,N4

Wow!  So what the hell does that mean?  Well, first off, the Ns represent numbers (N = retry attempts, N0 = minimum length if using only one character class, N1 = minimum length if using two character classes, N2 = minimum length of words inside passphrases, N3 = minimum length if using three character classes, N4 = minimum length if using all four character classes).  Character classes are basically lower case, upper case, numbers, and special characters.  They also confuse things by slamming the passphrase setting right in the middle – nice!  Either way, this is the example from the Security Guide.

password requisite /lib/security/$ISA/pam_passwdqc.so retry=3 min=12,9,8,7,6

This translates into three retry attempts; a 12 character minimum if using only one class, 9 characters if using two classes, 7 characters if using three classes, and 6 characters if using all four classes.  As well, passphrases are required to have words that are at least 8 characters long.
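If it helps, the mapping is easier to see in code – a purely illustrative Python sketch, not anything that lives on the host (and it ignores pam_passwdqc's extra wrinkles, like not counting a leading uppercase letter toward the class count):

```python
def character_classes(password):
    """Count which of pam_passwdqc's four character classes appear:
    lower case, upper case, digits, and everything else."""
    checks = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(not c.isalnum() for c in password),
    ]
    return sum(checks)

def required_length(password, minimums=(12, 9, 8, 7, 6)):
    """Minimum length demanded for a single-word password under
    min=N0,N1,N2,N3,N4.  N2 (index 2) is the passphrase setting, so
    single words map 1/2/3/4 classes onto indexes 0/1/3/4."""
    index = {1: 0, 2: 1, 3: 3, 4: 4}[character_classes(password)]
    return minimums[index]

print(required_length("alllowercase"))   # 12 -- only one class used
print(required_length("Upperlower1!"))   # 6  -- all four classes used
```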

No way can I remember this, I'm just going to remember Security Guide + CTRL+F + PAM 🙂

I'm going to cut this post off here and give the ESXi firewall its own post – my head hurts!!!! 🙂

8 weeks of #VCAP – HA

Although High Availability is something I’ve been configuring for many years now I thought it might be a good idea to go over the whole process again.  This became especially evident after watching the HA section of Jason Nash’s TrainSignal/PluralSight course, as I quickly realized there are a lot of HA advanced settings that I’ve never modified or tested – with that said, here’s the HA post.

First off I’m not going to go over the basic configuration of HA – honestly, it’s a checkbox, right?  I think we can all handle that.  I will give a brief description of a few of the HA bullet points that are listed within the blueprint and point out where we can manage them.

First up, Admission Control


When an HA event occurs in our cluster, we need to ensure that enough resources are available to successfully failover our infrastructure – Admission control dictates just how many resources we will set aside for this event.  If our admission control policies are violated, no more VMs can be powered on inside of our cluster – yikes!  There are three types…

Specify Failover Host – Ugly!  Basically you designate a host to be used in the event of an HA failover.  The aftermath of an HA event is the only time that this host will have VMs running on it – all other times, it sits there wasting money 🙂

Host failures cluster tolerates – This is perhaps the most complicated policy.  Essentially a slot size is calculated for CPU and memory, the cluster then does some calculations in order to determine how many slot sizes are available.  It then reserves a certain number of failover slots in your cluster to ensure that a certain number of hosts are able to failover.  There will be much more on slot size later on in this post so don’t worry if that doesn’t make too much sense.

Percentage of Cluster resources reserved – This is probably the one I use most often.  Allows you to reserve a certain percentage of both CPU and Memory for VM restarts.

So, back to slot size – a slot is made up of two components; memory and cpu.  HA will take the largest reservation of any powered on VM in your environment and use that as its memory slot size.  So even if you have 200 VMs that have only 2GB of RAM, if you place a reservation on just one VM of say, oh, 8GB of RAM, your memory slot size will be 8GB.  If you do not have any reservations set, the slot size is deemed to be 0MB + memory overhead.

As for CPU, the same rules apply – the slot size is the largest reservation set on a powered-on VM.  If no reservations are used, the slot size is deemed to be 32MHz.  Both the CPU and memory slot sizes can be capped by a couple of HA advanced settings – das.slotCpuInMhz and das.slotMemInMb (**Note – all HA advanced settings start with das. – so if you are doing the test and you can’t remember one, simply open the Availability doc and search for das – you’ll find them).  These do not change the default slot size values, but rather specify an upper limit on what a slot size can be.
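To make the slot math concrete, here's a rough Python sketch – the capacities and reservations below are made-up numbers, and real HA also factors in memory overhead and takes the more restrictive of the CPU and memory slot counts per host:

```python
def slot_size(vm_reservations, default, cap=None):
    """Slot size = largest reservation among powered-on VMs (or the default
    when no reservations are set), limited by the das.slotCpuInMhz /
    das.slotMemInMb advanced setting when one is configured."""
    size = max(vm_reservations) if any(vm_reservations) else default
    if cap is not None:
        size = min(size, cap)
    return size

def total_slots(host_capacities, slot):
    """Slots per host = host capacity // slot size, summed across the cluster."""
    return sum(capacity // slot for capacity in host_capacities)

# One VM with a 500MHz reservation drives the slot size for everyone
cpu_slot = slot_size([500, 0, 0], default=32)            # -> 500
print(total_slots([4000, 4000], cpu_slot))               # 8 + 8 = 16 slots

# Capping with das.slotCpuInMhz=64 shrinks the slot and multiplies the count
cpu_slot = slot_size([500, 0, 0], default=32, cap=64)    # -> 64
print(total_slots([4000, 4000], cpu_slot))               # 62 + 62 = 124 slots
```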

So let’s have a look at these settings and slot sizes – first up, we can see our current slot size by selecting the ‘Advanced Runtime Info’ link on the cluster’s Summary tab.  As shown below, my current slot size is 500MHz for CPU and 32MB for memory; also I have 16 total slots, 4 of which have been taken.


So let’s now set the advanced setting das.slotCpuInMhz to something lower than 500 – say we only ever want our CPU slot size to be 64MHz.  Within the cluster’s HA settings (Right-click cluster->Edit Settings, vSphere HA) you will see an Advanced Options button; select that and set das.slotCpuInMhz to 64 as shown below.

Now we have essentially stated that HA should use the smaller of either the largest VM CPU reservation or the value of das.slotCpuInMhz as our CPU slot size.  A quick check on our runtime settings reflects the change we just made.  Also, if you look, you will see that we have increased our total available slots to 128, since we are now using a CPU slot size of 64MHz rather than 500.


So that’s admission control and slot sizes in a nutshell.  Seems like a good task to have you limit or change some slot sizes on the exam.  Also, I’m not sure how much troubleshooting needs to be performed on the exam but if presented with any VMs failing to power on scenarios, slot sizes and admission control could definitely be the answer.

More Advanced Settings

As you may have seen in the earlier screenshots there were a few other of those das. advanced settings shown.  Here are a few that you may need to know for the exam – maybe, maybe not; either way, good to know…

das.heartbeatDsPerHost – used to increase the number of heartbeat datastores used – default is 2, however can be overridden to a maximum of 5.  Requires complete reconfiguration of HA on the hosts.

das.vmMemoryMinMb – value to use for the memory slot size if no reservation is present – default of 0

das.slotMemInMb – upper value of a memory slot size – meaning we can limit how large the slot size can be by using this value.

das.vmCpuMinMhz – value to use for the cpu slot size if no reservations are present – default of 32.

das.slotCpuInMhz – upper value of a CPU slot size – meaning we can limit how large the slot size can be by using this value

das.isolationAddress – can be used to change the IP address that HA pings when determining isolation – by default this is the default gateway.

das.isolationAddressX – can be used to add additional IPs to ping – X can be any number between 0 and 9.

das.useDefaultIsolationAddress – can be used to specify whether HA should even attempt to use the isolation address.

Anyways, those are the most commonly used settings – again, any others will be listed in the availability guide so use that if needed to find others on the exam – but remember, having to open those pdf’s will take away valuable time.

Other random things

Just a few notes on some other parts of HA that I haven’t used that often.  The first being VM Monitoring.  VM Monitoring is a process that watches for heartbeats and I/O activity from the VMware Tools service inside your virtual machines.  If it doesn’t detect activity from a VM, it determines that the VM has failed and can proceed with a reboot of that VM.  vSphere has a few options as it pertains to VM Monitoring that we can use to help prevent false positives and unneeded VM reboots.

Failure Interval – Amount of time in seconds to check for heartbeats and I/O activity.

Minimum Uptime – The amount of time in seconds that VM monitoring will wait after a power on or restart before it starts to poll.

Maximum Per VM Resets – the number of times that a VM can be reset in a given time period (Reset Time Window)

Reset Time Window – used for the Maximum Per VM Resets setting – specified in hours

The blueprint also mentions heartbeat datastore dependencies and preferences.  Quickly: vSphere will choose which datastores to use as HA heartbeat datastores automatically, depending on a number of things like storage transport, number of hosts connected, etc.  We can change this as well in our options.  We can instruct vSphere to only choose from our preferred list (and by selecting only 2 datastores in that list we in turn determine exactly which datastores are used), or we can say to use our preferred datastores if possible, but fall back to its own choices if it can’t.

As well, most all of the settings we set for defaults such as isolation response and restart priority can be set on a per-VM basis as well.   This is pretty easy so I won’t explain it but just wanted to mention that it can be done.

I’d say that’s enough for HA – it’s not a hard item to administer.  That said, lab it, lab all of it!  Practice Practice Practice.

8 weeks of #VCAP – Auto Deploy

Ok!  Here we go!  AutoDeploy!  Now that we have covered both Image Builder and Host Profiles we can have a look at how to set up AutoDeploy!  Now I don't really expect to have to go through this process from start to finish on the exam – it was quite a big job to get it running.  That said, there are a lot of little tiny tasks involved in making AutoDeploy work properly that could be fair game on the exam.  Also, I know that we already covered Image Builder and Host Profiles, but I will quickly go over what needs to be done again, as it will help to engrain it in my mind. 🙂

First off, AutoDeploy Installation and Configuration

I'm not going to cover very much in regards to the installation – I mean, you run the vCenter installer, click next next next finish, then enable the plugin in your client.  Yup, that easy!  Configuration however, that will create you some work.  AutoDeploy is cool, but it doesn't come without its fair share of requirements and infrastructure that needs to be in place.  Let's go over some of that!

TFTP Server

First up is a TFTP server!  This needs to be in place!  So go and find a free one if you want or use one that you already have.  If you are using AutoLab then I've got great news for you, there is one already there.  It's installed on your domain controller!  Either way what we need to do is grab a zip file and extract it into our TFTP root.  The zip file can be found by going into the 'AutoDeploy' settings from your vSphere Client.  You should see an option there to 'Download TFTP Boot Zip'.  Basically, just download this and extract it to your TFTP root.  


DHCP Settings

Another third-party component that we need in place to use AutoDeploy is DHCP.  Whatever DHCP server you use, you need the ability to set options 66 and 67 in your scope settings.  These specify where the TFTP server is and what boot file to use.  You can see in the above image that the boot file we need to use is called undionly.kpxe.vmw-hardwired – I'll leave the IP of your TFTP server up to you.  So go ahead and set those options on your DHCP server, 66 being the IP and 67 being the boot file.  You can see below how I have done it using the Windows DHCP server that is configured with AutoLab.
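For what it's worth, if you're on the Windows DHCP server you can also set the two options from the command line – the scope and TFTP server addresses below are made up, so substitute your own:

```shell
REM Option 066 = boot (TFTP) server address, Option 067 = boot file name
netsh dhcp server scope 192.168.199.0 set optionvalue 066 STRING "192.168.199.4"
netsh dhcp server scope 192.168.199.0 set optionvalue 067 STRING "undionly.kpxe.vmw-hardwired"
```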


Basically, once you have installed AutoDeploy, TFTP, and setup DHCP with the proper boot server and file options we can go ahead and start with some of the goodness that is AutoDeploy.  Next up, Image Builder!

Image Builder

Remember, we went over this in depth here – but hey, let's build a quick image again!  First, let's get access to our offline depot.

Add-EsxSoftwareDepot C:\OfflineBundle\update-from-esxi5.0-5.0_update03.zip

From there let's go ahead and clone one of the ESXi images into our own new profile that we can use to deploy to hosts.  First, let's get just the name of the Image Profiles so we can easily just copy/paste it 🙂

Get-EsxImageProfile | Select Name

And now, we can do the clone process.

New-EsxImageProfile -Name VCAP_Image_Profile -CloneProfile ESXi-5.0.0-20131002001-standard -Vendor mwpreston.net

At this point we are good, unless we want to add some external VIBs.  For the sake of this article let's just leave it at that.

Host Profiles

I'll let you go over the Host Profiles section that I wrote up yourself.  Basically, we just need a Host Profile setup that will further configure our host after it has been installed by AutoDeploy.

AutoDeploy Rules

Ok, so we have TFTP, we have DHCP, we have our Image Profile and our Host Profile, time to create some AutoDeploy rules.  An AutoDeploy rule basically applies either an image, cluster configuration, or host profile to a host based on a specific component of that host matching a setting.  By that I mean we can match on the IP address, MAC, vendor, model, etc..  I will be using model in this example.  

So there are three auto deploy rules that we need to create and we need to do this in order.  First the Image, then state which cluster we want to join, and finally stating which host profile we would like to apply.  So, let's get started with the image.

New-DeployRule -Name InitialBootRule -Item VCAP_Image_Profile -Pattern "vendor=VMware, Inc."
Add-DeployRule -DeployRule InitialBootRule

And on to the cluster we want the host to join…

New-DeployRule -Name ClusterRule -Item MyCluster -Pattern "vendor=VMware, Inc."
Add-DeployRule -DeployRule ClusterRule

And finally the host profile we wish to apply to the host…

New-DeployRule -Name HostProfileRule -Item MyHostProfile -Pattern "vendor=VMware, Inc."
Add-DeployRule -DeployRule HostProfileRule

We can also specify the boot image, cluster, and host profile all in one rule if we want to.  This is done by specifying them separated by commas as follows

New-DeployRule -Name AutoDeployNestedVMwareHosts -Item VCAP_Image_Profile,MyCluster,MyHostProfile -Pattern "vendor=VMware, Inc."
Add-DeployRule -DeployRule AutoDeployNestedVMwareHosts

At this point we are basically done configuring everything.  You should be able to deploy a host successfully!  But this is the VCAP right?  So I bet they will have this partially setup, or something that needs to be fixed.  I can't see having to go through this whole process.  So, we had better brush up on a few of the other AutoDeploy cmdlets that are available…

Get-DeployRuleSet – gets the current ruleset that we just loaded.

Set-DeployRuleSet – allows you to set the list of rules in the ruleset (Get-DeployRule Rule1, Rule2 | Set-DeployRuleSet)

Set-DeployRule – Update an existing rule (Set-DeployRule -Name "HostProfileRule" -Item "NewHostProfile")

Get-VMHostMatchingRules – Checks to see which rules match on a given host.

Repair-DeployImageCache – used to rebuild the Auto Deploy cache in the case it is deleted

And the most important one to remember for the VCAP, since it will remind you of the syntax for them all…

Get-DeployCommand – lists all of the available Auto Deploy cmdlets.

Also, another notable mention is the pattern.  AutoDeploy can match on a number of different characteristics of a host to determine which rules to apply.  By simply running Get-VMHostAttributes against a given host we can see all of the different items we have to match on in terms of a pattern.


Ok!  At this point I think we are done!!!!  I know this was lengthy but hopefully helpful!!  I'm going to have to go over this stuff a few times in my lab as it is a lot to remember!!!  Good Luck!

8 weeks of #VCAP – Netflow, SNMP, and Port Mirroring

Objective 2.1 covers off some other components in regards to distributed switches, so I thought I would just group them all together in this post since there isn't a whole lot to their setup.

First up, SNMP

Remember a week or so ago when we went over how to manage hosts with the vSphere Management Assistant?  Well, I hope you paid attention, as we will need to have our hosts connected to the vMA in order to configure SNMP (technically you could do it with any instance of the vSphere CLI, but the vMA is already there for you on the exam so you might as well use it).  We will need to use a command called vicfg-snmp in order to set up a trap target on our hosts.  So to start off, let's set our target host with the following command

vifptarget -s host1.lab.local

Once our host is set as the target host we can start to configure SNMP.  First off, let's specify our target server, port, and community name.  For a trap destination on the default port of 162 with a community name of Public, the syntax looks like this (substitute your own SNMP server's address for the placeholder)

vicfg-snmp -t <target-server>@162/Public


Now, simply enable SNMP on the host with -E

vicfg-snmp -E

You know what – you're done!  Want to test it?  Use -T, then check your SNMP server to be sure you have received the trap!

vicfg-snmp -T

I would definitely recommend exploring the rest of the options with vicfg-snmp.  You can do so by browsing the help of the command.  Look at things like multiple communities (-c), how to reset the settings to default (-r), and how to list out the current configuration (-s) etc…

vicfg-snmp --help

Also, don't forget you need to do this on all of your hosts!  Keep in mind that vCenter also has SNMP settings.  These are configured in the vCenter Server Settings under the SNMP section.  There is a complete GUI around this so I'm not going to go over how to configure these.


Next up, NetFlow

NetFlow is configured in the settings of your dvSwitch (Right-click dvSwitch->Edit Settings) on the NetFlow tab.  There are a number of items we can configure here.  First off, our collector IP and port.  This is the IP and port of the actual NetFlow collector where we are sending the data to.  To allow all of your traffic to appear as coming from a single source, rather than multiple ESXi management networks, you can specify an IP address for the dvSwitch here as well.  This doesn't actually live on your network, it just shows up in your NetFlow collector.


There are a few other settings here as well: Active Flow Export Timeout and Idle Flow Export Timeout handle timeouts for the flows, whereas the sampling rate determines what portion of data to collect – i.e., a sampling rate of 2 will collect every other packet, 5 every fifth packet, and so on.  'Process internal flows only' will only collect data between VMs on the same host.  That's really it for NetFlow – not that hard to configure.
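That sampling behavior is trivial to model if you want to convince yourself of the math – a toy sketch, where treating a rate of 0 (or 1) as "collect everything" is my assumption:

```python
def sampled(packets, rate):
    """Model of the NetFlow sampling rate: a rate of N keeps every Nth
    packet, so rate=2 keeps every other packet, rate=5 every fifth.
    Assumption: a rate of 0 or 1 means no sampling, i.e. keep everything."""
    if rate <= 1:
        return list(packets)
    return list(packets)[::rate]

print(sampled(range(10), 2))  # [0, 2, 4, 6, 8]
```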

Port Mirroring

I suppose you may be asked to mirror a certain port to an uplink or VM on the exam, so it's probably best to go over this.  First off, if you were asked to mirror traffic from VMA to VMB then you need to determine what ports these VMs are attached to.  You can see this on the Ports tab of the dvSwitch.  Just sort by the 'Connectee' column and find their corresponding Port IDs.  For the sake of this example let's say VMA is on port 150 and VMB is on 200.

To do the actual mirroring we need to be on the Port Mirroring tab of the dvSwitch's settings.  Here we can click 'Add' to set up the mirror.  As shown, we give our session a name and description, and there are a few settings regarding encapsulating VLANs and the maximum length of packet to capture.


The next couple of steps simply set up the source and destination for our mirror.  To follow our example we can use port 150 for the source and port 200 for the destination.  Unless we explicitly check the 'Enable' box when completing the setup, all port mirrors are disabled by default.  They can be enabled by going back into the session and enabling it there.

I'm going to practice setting these up until I can do it with my eyes closed.  They are something that I don't use in my day-to-day operations, but I also recognize that the VCAP may ask you to do them, as they can easily be scored.

8 weeks of #VCAP – Private VLANS

While we are on the topic of vSphere Distributed Switches, why not just cover Private VLANs?  Private VLANs are something I've never used in production, thus the reason I'm covering them in this series.  Honestly, this lazy Sunday night is the first time I've even touched them, and they are very easy to configure technically, so long as you understand the concepts first.

What is a PVLAN?

A Private VLAN is essentially a VLAN within a VLAN!  Can somebody say inception!!  Basically they allow us to take one VLAN and split it into three different private VLANs, each carrying restrictions in regards to connectivity with the others.  As far as use cases go, the most common I can see is a DMZ-type scenario where lots of restrictions and security are in place.  The three types are promiscuous, community, and isolated, and they are explained below.

Promiscuous PVLAN

A promiscuous PVLAN has the same VLAN ID as your main VLAN.  Meaning, if you wanted to set up some Private VLANs on VLAN 200, the promiscuous PVLAN would have an ID of 200.  VMs attached to the promiscuous PVLAN can see all other VMs on other PVLANs, and all other VMs on the PVLANs can see any VMs on the promiscuous PVLAN.  In the DMZ scenario, firewalls and network devices are normally placed on the promiscuous PVLAN, as all VMs normally need to see them.

Community PVLAN

VMs that are members of the community PVLAN can see each other, as well as see VMs in the promiscuous PVLAN.  They cannot see any VMs in the isolated PVLAN.  Again, in the DMZ scenario a community PVLAN could house VMs that need interconnectivity with each other, such as a web and database server.

Isolated PVLAN

VMs in an isolated PVLAN are just that – isolated!  The only other VMs they can communicate with are those in the promiscuous PVLAN.  They cannot see any VMs that are in the community PVLAN, nor can they see any other VMs that might be in the isolated PVLAN.  A good spot for a service that only needs connectivity to the firewall and nothing else.
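The three sets of rules above boil down to a small reachability check – here's a toy model (not a vSphere API, just a way to reason about who can ping whom):

```python
def can_communicate(vm_a, vm_b):
    """Toy model of PVLAN reachability within one primary VLAN.
    Each VM is a (pvlan_type, pvlan_id) tuple, e.g. ("community", 202)."""
    type_a, id_a = vm_a
    type_b, id_b = vm_b
    # Promiscuous VMs (firewalls, routers) can reach, and be reached by, everyone
    if "promiscuous" in (type_a, type_b):
        return True
    # Community members can talk among themselves -- same community only
    if type_a == "community" and type_b == "community":
        return id_a == id_b
    # Anything involving an isolated VM (with no promiscuous end) is blocked
    return False

# The DMZ example: web + db in community 202, streamer isolated on 201
assert can_communicate(("community", 202), ("community", 202))   # web <-> db
assert can_communicate(("isolated", 201), ("promiscuous", 200))  # streamer <-> firewall
assert not can_communicate(("isolated", 201), ("community", 202))
```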

PVLANs in vSphere

PVLANs can be implemented within vSphere only on a vSphere Distributed Switch.  Before we can assign a VM to a PVLAN there is a little legwork that needs to be done on the switch itself in terms of configuring the PVLAN.  To do so, right-click your dvSwitch and select 'Edit Settings'.  The Private VLAN tab (shown below) is where you initially set up your PVLAN.  As you can see, I've set my main private VLAN ID to 200, therefore my promiscuous PVLAN is also 200.  Then, I have an isolated and a community PVLAN configured with IDs of 201 and 202 respectively.

Now our Private VLAN is set up and ready to be consumed.  The only thing left to do is create some port groups that reference the Private VLAN.  We need the port groups in order to place VMs on the respective networks.  Again, right-click your dvSwitch and select 'New Port Group'.  Give your port group a name, and set the VLAN type to Private VLAN.  Once this happens you will see another box appear where we can select either the promiscuous, isolated, or community entry of our PVLAN.  Go ahead and make three port groups, each one being assigned to 200, 201, or 202.

Now it is as simple as attaching your VMs' network adapters to the desired port group.  For my testing I created 4 small Linux instances: a firewall, a web server, a database server and a video streaming server.  Trying to recreate a DMZ type scenario, I assigned the web and database servers to the Community PVLAN as they needed to communicate with each other.  I assigned the video streaming server to the Isolated PVLAN as it has no need to communicate with either the web or db server.  And I assigned the firewall to the promiscuous PVLAN, as all VMs need to be able to communicate with it in order to gain access to the outside world.  After much pinging I found that everything was working as expected.  So try it out for yourself.  Try reassigning VMs to different port groups and watch how the ping responses stop.  Like I said, these are technically very easy to set up; just understand the implications of what happens when VMs do not belong to the proper PVLAN.  Good Luck!

8 weeks of #VCAP – Migrating to vSphere Distributed Switches

Alright, there are a ton of blog posts out there that revolve around migrating to vSphere Distributed Switches and they are all great!  I hate to throw yet another one into the mix, but as I've stated before, things tend to sink in when I post them here.  Call it selfish if you want – I'm just going to call it studying Objective 2.1 in the blueprint 🙂

Before we get too involved in the details I'll go through a few key pieces of information.  As you can see below, there are a lot of port groups that I will need to migrate.  These are in fact the port groups that are set up by default with AutoLab.  Also, I'm assuming you have redundant NICs set up on all your vSwitches.  This will allow us to migrate all of our VM networks and port groups without incurring any downtime.  As stated before, there are many blog posts around this subject and many different ways to do the migration.  This is just the way I've done it in the past; I'm sure you could do it in fewer steps, but this is the process I've followed.


Step 1 – Create the shell

So the first step is to create our distributed switch.  This is pretty simple!  Just head into your network view and select 'New vSphere Distributed Switch', then follow the wizard – it's not that hard.  Pay attention to the number of uplinks you allow, as you need to be sure that you have as many uplinks in your distributed switch as you have physical adapters assigned to your standard switches.  Also, I usually add my hosts into the distributed switch during this process, just without importing any physical NICs.  Basically we're left with a distributed switch containing our hosts with no uplinks assigned.  Once we have our switch we need to duplicate all of the port groups we wish to migrate (Management, vMotion, FT, VM, etc.).  If you are following along with the AutoLab you should end up with something similar to the following (ignore my PVLAN port groups – that's another blog post).

One note about the uplinks that you can't see in the image above: I've gone into each of my port groups and set up the teaming/failover to mimic that of the standard switches.  So, for the port groups that were assigned to vSwitch0, I've set dvUplink1 and 2 as active, and 3/4 as unused.  For those in vSwitch1, 3/4 are active and 1/2 are unused.  This provides us with the same connectivity as the standard switches and allows us to segregate the traffic the exact same way that the standard switches did.  This can be done by editing the settings of your port group and modifying the Teaming and Failover section.  See below.


Step 2 – Split up your NICs

Alright!  Now that we have a shell of a distributed switch configured we can begin the migration process.  This is the process I mentioned at the beginning of the post that can be performed a million and one ways!  This is how I like to do it.  From the host's networking configuration page, be sure you have switched to the vSphere Distributed Switch context.  The first thing we will do is assign a physical adapter from every vSwitch on each host to an uplink on our dvSwitch.  Now, we have redundant NICs on both vSwitches so we are able to do this without affecting our connectivity (hopefully).  To do this, select the 'Manage Physical Adapters' link in the top right hand corner of the screen.  This will display our uplinks with the ability to add NICs to each one.

Basically, we want to add vmnic0 to dvUplink1 and vmnic2 to dvUplink3.  This is because we want one NIC from each standard switch in each of the active/unused configurations that we set up previously.  It's hard to explain, but once you start doing it you should understand.  To do this, just click the 'Click to Add NIC' links on dvUplink1 and 3 and assign the proper NICs.  You will get a warning letting you know that you are removing a NIC from one switch and adding it to another.

Be sure you repeat the NIC additions on each host you have, paying close attention to the uplinks you are assigning them to.

Step 3 – Migrate our vmkernel port groups

Once we have a couple of NICs assigned to our dvSwitch we can begin to migrate our vmkernel interfaces.  To do this task, switch to the networking inventory view, right-click on our dvSwitch and select 'Manage Hosts'.  Select the hosts we want to migrate from (usually all of them in the cluster).  The NICs that we just added should already be selected in the 'Select Physical Adapters' dialog.  Leave this as default; we will come back and grab the other NICs once we have successfully moved our vmkernel interfaces and virtual machine networking.  It's the next screen, the 'Network Connectivity' dialog, where we will perform most of the work.  This is where we say which source port group should be migrated to which destination port group.  It's an easy step – simply adjusting all of the dropdowns beside each port group does the trick.  See below.  When you're done, skip the VM Networking for now and click 'Finish'.

After a little bit of time we should now have all of our vmkernel interfaces migrated to our distributed switch.  This can be confirmed by looking at our standard switches and ensuring we see no vmkernel interfaces.  What you might still see, though, is VMs attached to Virtual Machine port groups on the standard switches.  This is what we will move next.
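If you'd rather confirm the vmkernel migration from the CLI than the GUI, a couple of quick commands on each host will show where the interfaces now live.  This is just a sketch of how I'd check it; run it on your own host and eyeball the output:

```shell
# List the vmkernel NICs - migrated vmks should now reference a DVPort ID
# on the distributed switch rather than a standard switch port group
esxcfg-vmknic -l

# List the standard switches - their port group sections should no longer
# contain any vmkernel interfaces, only the remaining VM port groups
esxcli network vswitch standard list
```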

Step 4 – Move your virtual machine port groups

Again, this is done through the networking inventory view and is very simple.  Right-click your dvSwitch and select 'Migrate Virtual Machine Networking'.  Set the VM network you wish to migrate as your source, and the one you created for it in your dvSwitch as your destination (see below).  When you click next you will be presented with a list of VMs on that network, and whether or not the destination network is accessible.  If we have done everything right up to this point it should be.  Select all your VMs and complete the migration wizard.

This process will have to be done for each and every virtual machine port group you wish to migrate – in the case of AutoLab, Servers and Workstations.  Once this is done we have successfully migrated all of our port groups to our distributed switch.

Step 5 – Pick up the trash!

The only thing left to do at this point would be to go back to the hosts' view of the distributed switch, select 'Manage Physical Adapters' and assign the remaining two NICs from our standard switches to the proper uplinks in our dvSwitch.
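Once those last two NICs are moved over, a quick sanity check from the CLI will confirm the dvSwitch now owns all the uplinks.  These are standard esxcli commands, but what exactly the output lists will depend on your own lab, so treat this as a rough sketch:

```shell
# Show the distributed switch and which vmnics are claimed as its uplinks -
# all four should now appear under the dvSwitch
esxcli network vswitch dvs vmware list

# The standard switches should now report no uplinks at all (or can simply
# be deleted once you're happy everything has been migrated)
esxcli network vswitch standard list
```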

Step 6 – Celebrate a VCAP pass

And done!  This seems like a good thing to have someone do on the exam!  That said, if it does come up, take caution.  The last thing you want to do is mess up a Management Network on one of your hosts and lose contact with it!  Yikes!

8 weeks of #VCAP – vSphere Management Assistant (vMA)

Alright, here we go: the vMA.  I promised you I would bounce around topics which don't relate to each other whatsoever.

So, first off, let's get started with installing and configuring the vMA.  Installation really doesn't even need to be described.  It comes as an OVF and it's as simple as just importing that…

Configuration can get a bit tricky, especially if you haven't used IP Pools before.  We will cover IP Pools in another blog post so I'll just leave it at that.  For the moment, I just went into the vMA VM settings and disabled all of the vApp options!

Anyways, once you finally get the appliance booted up you will be prompted to enter some network information – pretty simple stuff, menu driven – and then prompted to change the default password for vi-admin.  Easy stuff thus far.  Speaking of authentication, the vMA utilizes 'sudo' to execute commands.  This basically allows vi-admin to execute commands under the root user account – a bit of a security and safeguard mechanism utilized in some Linux OSes.

Alright, so we are now up and running so let's just go over some common tasks that we might perform in relation to the vSphere Management Assistant.  Probably a good idea to know all of these for the exam as vMA does have its very own objective and is referenced in many others.

vMA and your domain!

Certainly we may want to join the appliance to our domain.  This will give us plenty of benefits security wise, the biggest being that we will not have to store any of our target hosts' passwords within the vMA credential store – so long as the hosts are members of the domain as well.  Commands related to vMA and domains are as follows…

To join vMA to a domain, obviously substituting your own domain name and authentication…this requires a restart of the appliance afterwards.

sudo domainjoin-cli join FQDN user_with_privileges

And to remove the vMA from the domain, it's the same command with different parameters

sudo domainjoin-cli leave

And to view information

sudo domainjoin-cli query

So, as mentioned above, we can do some unattended Active Directory authentication to our hosts.  This is a pretty long, drawn-out process so I doubt it will be asked, but then again I'm wrong 100% of 50% of the time – I'd just know where this information is in the vSphere Management Assistant user guide (HINT: Page 15).

Host Targets

Before we can use the vMA to execute commands on hosts we need to, well, add hosts to our vMA.  In vMA terms, our hosts are called targets; targets on which we can execute commands.  When adding hosts we have to provide the hostname and some credentials, thus we have a couple of options in regards to how we authenticate: adauth or fpauth (the default).  Examples of adding a host with both authentication types are below…along with some other host options…

Using local ESXi credentials

vifp addserver HOSTNAME

Using AD credentials

vifp addserver HOSTNAME --authpolicy adauth

Viewing the hosts we have added

vifp listservers

Removing a server

vifp removeserver HOSTNAME

Set a host as the current target server – meaning set it up so you can run commands against the host without re-entering credentials

vifptarget -s HOSTNAME

To clear the current target

vifptarget -c
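To tie the target commands together, here's a rough sketch of what a typical fastpass session on the vMA looks like.  The hostname is just an example from my lab, so substitute your own:

```shell
# Set esx01 as the current fastpass target (hostname is an example)
vifptarget -s esx01.lab.local

# Commands now run against esx01 without prompting for credentials
esxcli storage core device list
vicfg-nics -l

# Clear the target when finished
vifptarget -c
```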

Security and user related functions

The vMA also has a few commands we can run to help better secure our systems.  When you add a host to vMA, it actually creates vi-admin and vi-user accounts on your ESXi host.  You can tell vMA to rotate these passwords using the following command.

vifp rotatepassword (--now, --never or --days #)

vMA also has a vi-user account locally, which by default is disabled since it has no password.  This account can be used to run commands on an ESXi host that would not normally require administrative privileges.  Enabling this account is as easy as simply setting a password on it using the following

sudo passwd vi-user

For now, that's it – that's all I can think of that is vMA related.  We will be using it for some other components in the future, like setting up SNMP and different things, but I wanted to keep this post strictly about vMA specific commands.  Happy Studying!

8 weeks of #VCAP – LUN Masking

Alright – here we go – the push is on.  8 weeks to cover some random, sparingly used topics off of the VCAP5-DCA blueprint.  Today, let's tackle an item out of the very first objective on the blueprint; LUN masking.

LUN masking is essentially a process that masks away LUNs, making those LUNs inaccessible to certain ESXi hosts.  You know when you go into your backend array and say which hosts can have access to which LUNs – yeah, that's basically LUN masking.  However, for the sake of this exam, it's performed on the host itself through something called claimrules.  Doing it this way is much harder, but it's explained below…

So first off, we need to decide on a LUN that we want to mask.  There are many ways to list all of your LUNs/datastores through the CLI and through the vSphere Client, so pick your beast.  What we need to get is the LUN's identifier – the long string of characters that ESXi uses to uniquely identify the LUN.  Since the claimrule is created within the CLI we might as well just find these numbers inside of the CLI as well – since you may be pressed for time on the exam.  So, let's first list our LUNs, showing each identifier.

esxcli storage core device list | less


As you can see I piped the output to less.  If we don't do this and there are a lot of LUNs attached to your host then you may get a little overwhelmed with the output.  "esxcfg-scsidevs -m" will also give you some great information here, which may be a little more compact than the esxcli command.  Choose your weapon, so long as you can get the identifier.  The LUN shown in the above image has an identifier of "naa.6006048c6fc141bb051adb5eaa0c60a9" – this is the one I'm targeting.

So now that we have our identifier it's time to do some masking.  We have some decisions to make at this point though.  We can mask by path (removing individual path visibility), by vendor (this will mask all LUNs from a specific vendor), or by storage transport (yeah, like all iSCSI or all FC).  If we look at the currently defined claimrules we can see most types are utilized.  To do so, use the following command

esxcli storage core claimrule list


For our sake here we will go ahead and perform our masking by path.  I will note below, though, where vendor or transport would be set up if you were to choose one of those instead.

So, in order to do it by path, we need to see all of the paths associated with our identifier.  To do so, we can use the following command along with grepping for our identifier.

esxcfg-mpath -m | grep naa.6006048c6fc141bb051adb5eaa0c60a9

Alright, so you can see we have 2 paths.  That means in order to completely mask away this LUN we will need to do all of the following twice; once using the vmhba32:C1:T0:L0 path and once using vmhba32:C0:T0:L0.

Now, time to begin constructing our claimrule!  First off we will need an ID number.  Certainly don't use one that is already taken (remember "esxcli storage core claimrule list"), or you can use the "-u" option to auto-assign a number.  I like to have control over this stuff so I'm picking 200.  Also of note is the -t option – this specifies the type of claimrule (remember when I said we could mask by vendor).  Our -t to mask by path will be location, however this could be vendor or transport as well. ** Running "esxcli storage core claimrule add" with no arguments will output a bunch of examples **  So, in order to mask by location we will specify the -A, -C, -T, and -L parameters referencing our path, and the -P states we want to use the MASK_PATH plugin.  The command should look like the one below.

esxcli storage core claimrule add -r 200 -t location -A vmhba32 -C 1 -T 0 -L 0 -P MASK_PATH

and for our second path – don't forget to put a new rule ID

esxcli storage core claimrule add -r 201 -t location -A vmhba32 -C 0 -T 0 -L 0 -P MASK_PATH

Running "esxcli storage core claimrule list" will now show our newly created rules, however they haven't been applied yet.  Basically they are running in "file" – we need them to be in "runtime".  This is as easy as running

esxcli storage core claimrule load

Now we are all set to go – kinda.  They are in runtime, but the rules will not be applied until the device is reclaimed.  So, a reboot would work here – or, a more ideal solution, we can run a reclaim on our device.  To do so we will need that device identifier again, and the command to run is…

esxcli storage core claiming reclaim -d naa.6006048c6fc141bb051adb5eaa0c60a9

And done!  And guess what – that LUN is gonzo!!!  Congrats Master Masker!
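To recap, here's the whole masking sequence from above collected into one sketch.  The identifier, adapter and path numbers are from my lab, so substitute your own:

```shell
# The LUN identifier we want to mask (example from my lab)
ID=naa.6006048c6fc141bb051adb5eaa0c60a9

# Find every path to the LUN - each one needs its own claimrule
esxcfg-mpath -m | grep $ID

# Add a MASK_PATH claimrule per path (rule IDs 200/201 were free on my host)
esxcli storage core claimrule add -r 200 -t location -A vmhba32 -C 1 -T 0 -L 0 -P MASK_PATH
esxcli storage core claimrule add -r 201 -t location -A vmhba32 -C 0 -T 0 -L 0 -P MASK_PATH

# Load the rules from "file" into "runtime"
esxcli storage core claimrule load

# Reclaim the device so the rules take effect without a reboot
esxcli storage core claiming reclaim -d $ID
```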

HEY!  Wait!  I needed that LUN

Oh SNAP!  This is my lab environment and I need that LUN back, well, here's how we can undo everything we just did!

First off, let's get rid of those claimrules we just added

esxcli storage core claimrule remove -r 200
esxcli storage core claimrule remove -r 201

Listing them out will only show them in runtime now; they should no longer be in file.  Let's get them out of runtime by loading our claimrule list again.

esxcli storage core claimrule load

Now a couple of unclaim commands on our paths.  This will allow them to be reclaimed by the default plugin.

esxcli storage core claiming unclaim -t location -A vmhba32 -C 0 -T 0 -L 0
esxcli storage core claiming unclaim -t location -A vmhba32 -C 1 -T 0 -L 0

A rescan of your vmhba and voila!  Your LUN should be back!  Just as with Image Builder, I feel like this would be a good thing to know for the exam.  Again, it's something that can easily be marked and tracked and is very specific!  Happy studying!
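That final rescan can also be kicked off from the CLI rather than the GUI; either of these should do the trick (the adapter name is from my lab):

```shell
# Rescan just the one adapter so the unmasked LUN is rediscovered...
esxcli storage core adapter rescan -A vmhba32

# ...or use the legacy equivalent
esxcfg-rescan vmhba32
```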

Friday Shorts – VCAP 2nd Chance, VMworld session downloads, VSAN and expectations

You have the voice of an angel.  Your voice is like a combination of Fergie and Jesus. – Dale Doback (John C. Reilly) from Step Brothers

VCAP x 2

Find yourself teeter-tottering on whether to write your VCP or VCAP exam but just not sure if you are quite ready?  Worried about dropping the $$$, coming up short and leaving with nothing?  Live in Australia, New Zealand, India, Japan or Korea?  If so, VMware Education has a nice promotion going on right now that will let you go write that first exam, and if you don't pass, the second attempt is on them.  It's hard not to take anyone up on that offer.  Go ahead, get a feel for the test and if you fail…well, try again.  Check out the promotion here and hurry!!!  It all expires at the end of March.  Oh, and while you are at it you might as well nab your VCA for free!

VMworld Sessions – get'em all!

As we gear up for VMworld EMEA, the attendees of the US version of the show are aggressively going over all the online session content that they missed during the show due to networking, the Solutions Exchange, hangspace, shiny objects in hallways, etc.  In previous years I've done my fair share of streaming from the vmworld.com site, until last year when I discovered this wicked awesome script.  Well, it looks like Damian Karlson and his team have updated the script for this year's show.  Damian has an article on his blog outlining all of the requirements, so be sure to read that and get the script.  I know I will be pulling down all of my commute entertainment soon.

Networking for VMware Admins

Ok.  Anything that Scott Lowe blogs about is definitely worth paying attention to.  He's a crazy smart and crazy nice guy whom I have had the chance to meet a few times at VMworld and through my involvement with the Toronto VMUG.  Well, Scott's got a great series going on over on his blog right now titled Introduction to Networking.  The best part about this series is that Scott is trying to put networking into the context of what a VMware admin would need to know.  A cool take on a very complex technology.  He's already on part 2, so be sure to bookmark his blog (like you don't already).

What's new in VSAN

As always, another release of vSphere unleashes another flurry of white papers and technical documents outlining the features and benefits.  And hey, VSAN is no exception in that flurry.  What's New in VSAN has now been added to that list!  It's a whitepaper covering the requirements, installation, configuration and architectural details, all focused on VSAN.  When you have a few minutes I'd recommend checking it out.  And speaking of VSAN, get in the beta!!!!

Managing expectations

We as IT pros have all been in a situation where we have to manage something called expectations.  Whether they come from our customers, executives, managers, coworkers, employees or even ourselves, managing expectations can really be key to defining whether or not we are succeeding in whatever it is that we are doing.  That's the thought that came to my mind after reading this article written by Paul Stewart titled "The Importance of Setting Expectations".  A very quick read and something to think about!  #winning