8 weeks of #VCAP – Netflow, SNMP, and Port Mirroring

Objective 2.1 covers a few other components related to distributed switches, so I thought I would group them all together in this post since there isn't a whole lot to their setup.

First up, SNMP

Remember a week or so ago when we went over how to manage hosts with the vSphere Management Assistant?  Well, I hope you paid attention, as we will need to have our hosts connected to the vMA in order to configure SNMP (technically you could do it with any instance of the vSphere CLI, but the vMA is already there for you on the exam so you might as well use it).  We will need to use a command called vicfg-snmp to set up a trap target on our hosts.  To start off, let's set a host target with the following command

vifptarget -s host1.lab.local

Once our host is set as the target host, we can start to configure SNMP.  First off, let's specify our target server, port, and community name.  For a target server of 192.168.199.5 on the default port of 162 and a community name of Public, we can use the following command

vicfg-snmp -t 192.168.199.5@162/Public

Now, simply enable SNMP on the host with -E

vicfg-snmp -E

You know what, you're done!  Want to test it?  Use -T, and check your SNMP server to be sure you have received the trap!

vicfg-snmp -T

I would definitely recommend exploring the rest of the options with vicfg-snmp.  You can do so by browsing the help of the command.  Look at things like multiple communities (-c), how to reset the settings to default (-r), how to list out the current configuration (-s), etc…

vicfg-snmp --help
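
Put together, a minimal end-to-end pass looks something like this (reusing the lab values from above, and assuming the host is already set as the vifptarget):

vicfg-snmp -t 192.168.199.5@162/Public   # point traps at the collector
vicfg-snmp -E                            # enable the SNMP agent on the host
vicfg-snmp -s                            # show the resulting configuration
vicfg-snmp -T                            # fire off a test trap at the collector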

Also, don't forget you need to do this on all of your hosts!  Keep in mind that vCenter also has SNMP settings.  These are configured in the vCenter Server Settings under the SNMP section.  There is a complete GUI around this so I'm not going to go over how to configure these.

NetFlow

NetFlow is configured in the settings of your dvSwitch (Right-click dvSwitch->Edit Settings) on the NetFlow tab.  There are a number of items we can configure here.  First off, our collector IP and port – this is the IP and port of the actual NetFlow collector where we are sending the data to.  To allow all of your traffic to appear as coming from a single source, rather than from multiple ESX management networks, you can specify an IP address for the dvSwitch here as well.  This doesn't actually live on your network; it just shows up in your NetFlow collector.

There are a few other settings here as well: Active Flow Export Timeout and Idle Flow Export Timeout handle timeouts for the flows, whereas the sampling rate determines what portion of data to collect – i.e., a sampling rate of 2 will collect every other packet; 5, every fifth packet; and so on.  The 'Process internal flows only' option will collect data only between VMs on the same host.  That's really it for NetFlow; it's not that hard to configure.

Port Mirroring

I suppose you may be asked to mirror a certain port to an uplink or VM on the exam, so it's probably best to go over this.  First off, if you were asked to mirror traffic from VM A to VM B, then you need to determine what ports these VMs are attached to.  You can see this on the Ports tab of the dvSwitch – just sort by the 'Connectee' column and find their corresponding Port IDs.  For the sake of this example let's say VM A is on port 150 and VM B is on 200.

To do the actual mirroring we need to be on the Port Mirroring tab of the dvSwitch's settings.  Here we can click 'Add' to set up the mirror.  As shown, we give our session a name and description, and there are a few settings regarding the encapsulation VLAN and the maximum length of packet to capture.

The next couple of steps simply set up our source and destination for the mirror.  To follow our example we can use port 150 for the source and port 200 for the destination.  Unless we explicitly check the 'Enable' box when completing the setup, all port mirrors are disabled by default.  They can be enabled by going back into the session and explicitly enabling it.

I'm going to practice setting these up until I can do it with my eyes closed.  Port mirrors aren't something I use in my day-to-day operations, but I also recognize that the VCAP may ask you to do them, as they can easily be scored.

8 weeks of #VCAP – Host Profiles

As I build up my knowledge for the VCAP5-DCA, one item that I realize I have no clue about is Auto Deploy.  Auto Deploy basically has a couple of prerequisites: one being Image Builder, which we have already covered, and the second being Host Profiles.  Auto Deploy uses Host Profiles in order to configure the host once it has loaded its image – configuring things like datastores, networking, etc.

First, we need to create a Host Profile.  Browse to the Host Profiles view in vCenter and select the 'Create Profile' button.  There are a couple of options for creating one: you can either import a profile or create one from what is called a reference host (a host that you have set up perfectly and would like to duplicate).  In this example we will create a profile from an existing host.  The wizard is pretty simple – select a host and give your profile a name.

Now that the profile is created we can go and explore the settings inside it by selecting the profile and clicking 'Edit Profile' along the top bar.  Within the Edit Profile screen there is a ton of information, and this is where the blueprint's skills mostly point.  One of those skills is to use Host Profiles to create sub-profiles.  As you can see, there are already a number of sub-profiles in our main profile.  For instance, the NFS sub-profiles are shown below; the three 'NFS Storage configuration' options are actually mount points to certain NFS datastores that will be deployed with the profile.  If we wanted to add another, we could simply right-click on the NFS storage configuration folder, select 'Add Profile', and then fill in the required information for another NFS mount.

The blueprint also mentions deploying vSphere Distributed Switches with Host Profiles.  This is something that I have never done, so I will try and fumble through it here 🙂  There is a great whitepaper which outlines the process as well here.  First off, I've already done most of the work, as I had a vDS already set up on my reference host when I created the profile from it.  If you didn't, you would need to go back to that reference host, create the vDS, attach the host, and then re-create your Host Profile from it.  From there it simply looks as if you need to attach the profile to a host and provide some answers around the networking and IP information; however, in the case of Auto Deploy we would probably want to use an answer file, which we will discuss a bit later.

The blueprint also mentions storage configuration settings within Host Profiles, so it's best to go over some of these.  Aside from the NFS scenarios I mentioned earlier, you may want to have a look at some of the sub-profiles below.

Native Multipathing -> SATP default PSP configuration – this is used to define a default path selection policy for a given SATP.  A very real-world scenario – you may want to set your EVA up to default to Round Robin, etc…

Pluggable Storage Architecture -> PSA Claimrule – remember a week or so ago when we discussed creating claim rules for LUN masking?  Well, these are in Host Profiles as well, in case you need to apply a claim rule to multiple hosts.

iSCSI Initiator Configuration -> Software iSCSI Initiator – explore around in here, as this too is a real-world scenario.  You may want to prepopulate your discovery IP address, ensure the software initiator is always vmhba##, set up CHAP, etc…

Attaching, checking for compliance, and applying host profiles is pretty simple and can be done by right-clicking a host and navigating through the Host Profile context menus.  You will see that when you apply a profile the host needs to be in maintenance mode, and a lot of the time you will be prompted for input regarding passwords, IP addresses, etc.  To get around having to enter this input that is unique to a host (and for use with Auto Deploy) we can generate answer files.

Answer files are managed through the Hosts and Clusters tab in the Host Profile settings in vCenter.  As you can see below, I have a couple of hosts with an answer file status of unknown – meaning vCenter doesn't yet know anything about their answer files.

In order to update these answer files it's as simple as right-clicking on the host and selecting 'Update Answer File'.  From there you will be prompted to enter in all of the information that requires user input, such as IP addresses, etc.

That's really it for Host Profiles.  I don't expect the exam to quiz you on every possible sub-profile, as there are a lot of them, but the blueprint does specifically call out vSphere Distributed Switches and the storage configuration sections, so I would definitely have a poke around the lab in those two areas.  The security section may also be a good one to explore, as I tend to use it the most when dealing with Host Profiles.

8 weeks of #VCAP – Private VLANS

While we are on the topic of vSphere Distributed Switches, why not just cover Private VLANs?  Private VLANs are something I've never used in production, thus the reason I'm covering them in this series.  Honestly, this lazy Sunday night is the first time I've even touched them, and they are very, very easy to configure technically, so long as you understand the concepts first.

What is a PVLAN?

A Private VLAN is essentially a VLAN within a VLAN!  Can somebody say inception!!  Basically, they allow us to take one VLAN and split it into three different private VLANs, each with restrictions on connectivity to the others.  As far as use cases go, the most common I can see is a DMZ-type scenario where lots of restrictions and security are in place.  The three types – promiscuous, community, and isolated – are explained below.

Promiscuous PVLAN

A promiscuous PVLAN has the same VLAN ID as your main VLAN, meaning if you wanted to set up some Private VLANs on VLAN 200, the promiscuous PVLAN would have an ID of 200.  VMs attached to the promiscuous PVLAN can see all other VMs on other PVLANs, and all other VMs on the PVLAN can see any VMs on the promiscuous PVLAN.  In the DMZ scenario, firewalls and network devices are normally placed on the promiscuous PVLAN, as all VMs normally need to see them.

Community PVLAN

VMs that are members of the community PVLAN can see each other, as well as see VMs in the promiscuous PVLAN.  They cannot see any VMs in the isolated PVLAN.  Again, in the DMZ scenario, a community PVLAN could house VMs that need connectivity to each other, such as a web and a database server.

Isolated PVLAN

VMs in an isolated PVLAN are just that: isolated!  The only other VMs they can communicate with are those in the promiscuous PVLAN.  They cannot see any VMs in the community PVLAN, nor can they see any other VMs that might be in the isolated PVLAN.  A good spot to put a service that only needs connectivity to the firewall and nothing else.

PVLANs in vSphere

PVLANs can be implemented within vSphere only on a vSphere Distributed Switch.  Before we can assign a VM to a PVLAN there is a little legwork that needs to be done on the switch itself in terms of configuring the PVLAN.  To do so, right-click your dvSwitch and select 'Edit Settings'.  The Private VLAN tab (shown below) is where you initially set up your PVLAN.  As you can see, I've set up my main private VLAN ID as 200; therefore my promiscuous PVLAN is also 200.  Then I have an isolated and a community PVLAN configured with IDs of 201 and 202 respectively.

Now our Private VLAN is set up to be consumed.  The only thing left to do is create some port groups that contain the Private VLAN.  We need the port groups in order to assign VMs to the respective network.  Again, right-click your dvSwitch and select 'New Port Group'.  Give your port group a name, and set the VLAN type to Private VLAN.  Once this happens you will see another box appear where we can select either the promiscuous, isolated, or community entry of our PVLAN.  Go ahead and make three port groups, each one being assigned to either 200, 201, or 202.

Now it is as simple as attaching your VMs' network adapters to the desired port group.  For my testing I created four small Linux instances: a firewall, a web server, a database server, and a video streaming server.  Trying to recreate a DMZ-type scenario, I assigned the web and database servers to the community PVLAN, as they needed to communicate with each other.  I assigned the video streaming server to the isolated PVLAN, as it has no need to communicate with either the web or DB server.  And I assigned the firewall to the promiscuous PVLAN, as all VMs need to be able to communicate with it in order to gain access to the outside world.  After 'much a-pinging' I found that everything was working as expected.  So try it out for yourself – try reassigning VMs to different port groups and watch how the ping responses stop.  Like I said, these are very easy to set up technically; just understand the implications of what happens when VMs do not belong to the proper PVLAN.  Good luck!
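
As a footnote, here's roughly the ping behaviour you should see from the web server's console in a setup like mine (the IP addresses below are made up purely for illustration):

# run from the web server (community PVLAN) - all IPs are hypothetical
ping 10.0.0.1    # firewall, promiscuous PVLAN - should reply
ping 10.0.0.21   # database server, same community PVLAN - should reply
ping 10.0.0.30   # video streaming server, isolated PVLAN - no response expected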

8 weeks of #VCAP – Migrating to vSphere Distributed Switches

Alright, there are a ton of blog posts out there about migrating to vSphere Distributed Switches, and they are all great!  I hate to throw yet another one out there, but as I've stated before, things tend to sink in when I post them here.  Call it selfish if you want – I'm just going to call it studying Objective 2.1 in the blueprint 🙂

Before we get too involved in the details, I'll go through a few key pieces of information.  As you can see below, there are a lot of port groups that I will need to migrate; these are in fact the port groups that are set up by default with AutoLab.  Also, I'm assuming you have redundant NICs set up on all your vSwitches.  This will allow us to migrate all of our VM networks and port groups without incurring any downtime.  As stated before, there are many blog posts around this subject and many different ways to do the migration.  This is just the way I've done it in the past – I'm sure you could probably do it in fewer steps, but this is the process I've followed.

Step 1 – Create the shell

So the first step is to create our distributed switch.  This is pretty simple!  Just head into your network view and select 'New vSphere Distributed Switch', then follow the wizard – it's not that hard.  Pay attention to the number of uplinks you allow, as you need to be sure that you have as many uplinks in your distributed switch as you have physical adapters assigned to your standard switches.  Also, I usually add my hosts into the distributed switch during this process, just without importing any physical NICs.  Basically we're left with a distributed switch containing our hosts with no uplinks assigned.  Once we have our switch we need to duplicate all of the port groups we wish to migrate (Management, vMotion, FT, VM, etc.).  If you are following along with AutoLab you should end up with something similar to the following (ignore my PVLAN port groups – that's another blog post).

One note about the uplinks that you can't see in the image above: I've gone into each of my port groups and set up the teaming/failover to mimic that of the standard switches.  So, for the port groups that were assigned to vSwitch0, I've set dvUplink1 and 2 as active, and 3/4 as unused.  For those on vSwitch1, 3/4 are active and 1/2 are unused.  This provides us with the same connectivity as the standard switches and allows us to segregate the traffic the exact same way that the standard switches did.  This can be done by editing the settings of your port group and modifying the Teaming and Failover section.  See below.

Step 2 – Split up your NICs

Alright!  Now that we have a shell of a distributed switch configured, we can begin the migration process.  This is the process I mentioned at the beginning of the post that can be performed a million and one ways – this is how I like to do it.  From the host's networking configuration page, be sure you have switched to the vSphere Distributed Switch context.  The first thing we will do is assign a physical adapter from every vSwitch on each host to an uplink on our dvSwitch.  Now, we have redundant NICs on both vSwitches, so we are able to do this without affecting our connectivity (hopefully).  To do this, select the 'Manage Physical Adapters' link in the top right-hand corner of the screen.  This will display our uplinks with the ability to add NICs to each one.

Basically, we want to add vmnic0 to dvUplink1 and vmnic2 to dvUplink3.  This is because we want one NIC from each standard switch in each of the active/unused configurations that we set up previously.  It's hard to explain, but once you start doing it you should understand.  To do this, just click the 'Click to Add NIC' links on dvUplink1 and 3 and assign the proper NICs.  You will get a warning letting you know that you are removing a NIC from one switch and adding it to another.

Be sure you repeat the NIC additions on each host you have, paying close attention to the uplinks you are assigning them to.

Step 3 – Migrate our vmkernel port groups

Once we have a couple of NICs assigned to our dvSwitch we can begin to migrate our vmkernel interfaces.  To do this task, switch to the networking inventory view, right-click on our dvSwitch and select 'Manage Hosts'.  Select the hosts we want to migrate from (usually all of them in the cluster).  The NICs that we just added should already be selected in the 'Select Physical Adapters' dialog.  Leave this as default – we will come back and grab the other NICs once we have successfully moved our vmkernel interfaces and virtual machine networking.  It's the next screen, the 'Network Connectivity' dialog, where we will perform most of the work.  This is where we say which source port group should be migrated to which destination port group.  An easy step – simply adjusting all of the dropdowns beside each port group does the trick.  See below.  When you're done, skip the VM networking for now and click 'Finish'.

After a little bit of time we should now have all of our vmkernel interfaces migrated to our distributed switch.  This can be confirmed by looking at our standard switches and ensuring we see no vmkernel interfaces.  What you might still see, though, is VMs attached to virtual machine port groups on the standard switches.  This is what we will move next.

Step 4 – Move your virtual machine port groups

Again, this is done through the networking inventory view and is very simple.  Right-click your dvSwitch and select 'Migrate Virtual Machine Networking'.  Set the VM network you wish to migrate as your source, and the one you created for it in your dvSwitch as your destination (see below).  When you click next you will be presented with a list of VMs on that network, and whether or not the destination network is accessible.  If we have done everything right up to this point, it should be.  Select all your VMs and complete the migration wizard.

This process will have to be done for each and every virtual machine port group you wish to migrate – in the case of AutoLab, Servers and Workstations.  Once we are done with this, we have successfully migrated all of our port groups to our distributed switch.

Step 5 – Pick up the trash!

The only thing left to do at this point is to go back to the hosts view of the distributed switch, select 'Manage Physical Adapters', and assign the remaining two NICs from our standard switches to the proper uplinks in our dvSwitch.

Step 6 – Celebrate a VCAP pass

And done!  This seems like a good thing to have someone do on the exam!  That said, if so, take caution.  The last thing you want to do is mess up a management network on one of your hosts and lose contact with it!  Yikes!

8 weeks of #VCAP – vSphere Management Assistant (vMA)

Alright, here we go – the vMA.  I promised you I would bounce around between topics which don't relate to each other whatsoever.

So, first off, let's get started with installing and configuring the vMA.  Installation really doesn't even need to be described: it comes as an OVF, and it's as simple as just importing that…

Configuration can get a bit tricky, especially if you haven't used IP Pools before.  We will cover IP Pools in another blog post so I'll just leave it at that.  For the moment, I just went into the vMA VM settings and disabled all of the vApp options!

Anyways, once you finally get the appliance booted up you will be prompted to enter some network information – pretty simple stuff, menu-driven – and then prompted to change the default password for vi-admin.  Easy stuff thus far.  Speaking of authentication, the vMA utilizes 'sudo' to execute commands.  This basically allows vi-admin to execute commands under the root user account – a bit of a security safeguard mechanism utilized in some Linux OSes.

Alright, so we are now up and running; let's go over some common tasks that we might perform in relation to the vSphere Management Assistant.  It's probably a good idea to know all of these for the exam, as the vMA has its very own objective and is referenced in many others.

vMA and your domain!

Certainly we may want to join the appliance to our domain.  This will give us plenty of benefits security-wise, the biggest being that we will not have to store any of our target hosts' passwords within the vMA credential store – so long as the hosts are members of the domain as well.  Commands related to the vMA and domains are as follows…

To join the vMA to a domain, obviously substituting your own domain name and credentials – this requires a restart of the appliance afterwards.

sudo domainjoin-cli join FQDN user_with_privileges
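
To make that concrete, joining a hypothetical lab.local domain with its administrator account (and then restarting, as noted above) would look something like:

sudo domainjoin-cli join lab.local administrator
sudo reboot   # the appliance needs a restart for the join to take effect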

And to remove the vMA from the domain, it's the same command with different parameters

sudo domainjoin-cli leave

And to view information

sudo domainjoin-cli query

So, as mentioned above, we can do some unattended Active Directory authentication to our hosts.  This is a pretty long, drawn-out process so I doubt it will be asked – but then again, I'm wrong 100% of 50% of the time – so I'd just know where this information is in the vSphere Management Assistant user guide (HINT: Page 15).

Host Targets

Before we can use the vMA to execute commands on hosts we need to, well, add hosts to our vMA.  In vMA terms, our hosts are called targets; targets on which we can execute commands.  When adding hosts we have to provide the hostname and some credentials, which gives us a couple of options for how we authenticate: adauth or fpauth (the default).  Examples of adding a host with both authentication types are below, along with some other host options…

Using local ESXi credentials

vifp addserver HOSTNAME

Using AD credentials

vifp addserver HOSTNAME --authpolicy adauth

Viewing the hosts we have added

vifp listservers

Removing a server

vifp removeserver HOSTNAME

Set a host as the target server – meaning set it up so you can run commands against the host without being prompted for credentials

vifptarget -s HOSTNAME

To clear the current target

vifptarget -c
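
Putting those together, a typical fastpass session against a hypothetical host might look like this:

vifp addserver host1.lab.local    # add the host to the vMA credential store
vifp listservers                  # confirm it was added
vifptarget -s host1.lab.local     # set it as the current target
esxcli storage core device list   # commands now run against host1 without prompting
vifptarget -c                     # clear the target when finished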

Security and user related functions

The vMA also has a few commands we can run to help better secure our systems.  When you add a host to the vMA, it actually creates vi-admin and vi-user accounts on your ESXi host.  You can tell the vMA to rotate these passwords using the following command.

vifp rotatepassword (--now, --never or --days #)
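
For example, to have the vMA rotate those passwords every 30 days:

vifp rotatepassword --days 30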

The vMA also has a vi-user account locally, which is disabled by default since it has no password.  This account can be used to run commands on an ESXi host that do not require administrative privileges.  Enabling this account is as easy as setting a password on it using the following

sudo passwd vi-user

For now, that's it – that's all I can think of that is vMA-related.  We will be using the vMA for some other components in the future, like setting up SNMP and other things, but I wanted to keep this post strictly about vMA-specific commands.  Happy studying!

8 weeks of #VCAP – LUN Masking

Alright – here we go – the push is on.  8 weeks to cover some random, sparingly used topics off of the VCAP5-DCA blueprint.  Today, let's tackle an item out of the very first objective on the blueprint; LUN masking.

LUN masking is essentially a process that masks away LUNs – that is, makes those LUNs inaccessible to certain ESXi hosts.  You know when you go into your backend array and define which hosts have access to which LUNs?  Yeah, that's basically LUN masking.  However, for the sake of this exam, it's performed on the host itself through something called claimrules.  That way is much harder, but it's explained below…

So first off, we need to decide on a LUN that we want to mask.  There are many ways to list all of your LUNs/datastores through the CLI and through the vSphere Client, so pick your beast.  What we need to get is the LUN's identifier – the long string of characters that ESXi uses to uniquely identify the LUN.  Since the claimrule is created within the CLI, we might as well find these numbers inside the CLI too, since you may be pressed for time on the exam.  So, let's first list our LUNs, showing each identifier.

esxcli storage core device list | less

As you can see, I piped the output to less.  If we don't do this and there are a lot of LUNs attached to your host, you may get a little overwhelmed with the output.  "esxcfg-scsidevs -m" will also give you some great information here, and it may be a little more compact than the esxcli command.  Choose your weapon, so long as you can get the identifier.  The LUN shown in the above image has an identifier of "naa.6006048c6fc141bb051adb5eaa0c60a9" – this is the one I'm targeting.

Now that we have our identifier, it's time to do some masking.  We have some decisions to make at this point, though.  We can mask by path (removing individual path visibility), by vendor (masking all LUNs from a specific vendor), or by storage transport (yeah, like all iSCSI or all FC).  If we look at the currently defined claimrules we can see most types are utilized.  To do so, use the following command

esxcli storage core claimrule list

For our sake here we will go ahead and perform our masking by path.  I will note below, though, where vendor or transport would come into play if you were to choose one of those.

So, in order to do it by path, we need to see all of the paths associated with our identifier.  To do so, we can use the following command along with grepping for our identifier.

esxcfg-mpath -m | grep naa.6006048c6fc141bb051adb5eaa0c60a9

Alright, so you can see we have two paths.  That means in order to completely mask away this LUN we will need to do all of the following twice: once using the vmhba32:C1:T0:L0 path and once using vmhba32:C0:T0:L0.

Now, time to begin constructing our claimrule!  First off we will need an ID number.  Certainly don't use one that is already taken (remember "esxcli storage core claimrule list"), or you can use "-u" to auto-assign a number.  I like to have control over this stuff, so I'm picking 200.  Also of note is the -t option – this specifies the type of claimrule (remember when I said we could mask by vendor).  Our -t to mask by path will be location, but this could be vendor or transport as well.  (Running "esxcli storage core claimrule add" with no arguments will output a bunch of examples.)  So, in order to mask by location we will specify the -A, -C, -T, and -L parameters referencing our path, and -P states we want to use the MASK_PATH plugin.  The command should look like the one below.

esxcli storage core claimrule add -r 200 -t location -A vmhba32 -C 1 -T 0 -L 0 -P MASK_PATH

and for our second path – don't forget to put a new rule ID

esxcli storage core claimrule add -r 201 -t location -A vmhba32 -C 0 -T 0 -L 0 -P MASK_PATH

Running "esxcli storage core claimrule list" will now show our newly created rules, however they haven't been applied yet.  Basically they are running in "file" – we need them to be in "runtime"  This is as as easy as running

esxcli storage core claimrule load

Now we are all set to go – kinda.  The rules are in runtime, but they will not be applied until the device is reclaimed.  So, a reboot would work here – or, as a more ideal solution, we can run a reclaim on our device.  To do so we will need that device identifier again, and the command to run is…

esxcli storage core claiming reclaim -d naa.6006048c6fc141bb051adb5eaa0c60a9

And done!  And guess what – that LUN is gonzo!!!  Congrats Master Masker!
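
If you want proof before moving on, re-run the mapping list from earlier – the masked LUN should no longer show up:

esxcfg-scsidevs -m | grep naa.6006048c6fc141bb051adb5eaa0c60a9   # no output expected once the mask is applied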

HEY!  Wait!  I needed that LUN

Oh SNAP!  This is my lab environment and I need that LUN back.  Well, here's how we can undo everything we just did!

First off, let's get rid of those claimrules we just added

esxcli storage core claimrule remove -r 200
esxcli storage core claimrule remove -r 201

Listing them out will now show them only in runtime; they should no longer be in file.  Let's get them out of runtime by loading our claimrule list again.

esxcli storage core claimrule load

Now a couple of unclaim commands on our paths.  This will allow them to be reclaimed by the default plugin.

esxcli storage core claiming unclaim -t location -A vmhba32 -C 0 -T 0 -L 0
esxcli storage core claiming unclaim -t location -A vmhba32 -C 1 -T 0 -L 0
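
To finish the job, the rescan can be kicked off right from the CLI as well – something like this should do it (assuming the same vmhba32 adapter as above):

esxcli storage core adapter rescan -A vmhba32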

Run that rescan and voila!  Your LUN should be back!  Just as with Image Builder, I feel like this would be a good thing to know for the exam.  Again, it's something that can easily be marked and tracked, and it's very specific!  Happy studying!