Tag Archives: vSphere 5

8 weeks of #VCAP – The ESXi Firewall

Alright, continuing on in the realm of security, let's have a look at the built-in firewall on ESXi.  This post relates directly to Objective 7.2 on the blueprint!  Basically, a lot of this work can be done in either the GUI or the CLI, so choose whichever you are most comfortable with.  I'll be jumping back and forth between both!  Some things are just easier in the GUI I find…anyways, I only have like 4 weeks to go so let's get going…

First up, enable/disable pre-configured services

Easy peasy!  Hit up 'Security Profile' on a host's Configuration tab and select 'Properties' in the 'Services' section.  You should see something similar to the below.

[screenshot: built-in services dialog]

I guess as far as enabling/disabling goes, you would simply start or stop the service here and set its startup policy to 'Start and stop manually' if you want it to stay that way.

Speaking of automation, that's the second skill

As you can see above we have a few options in regards to automation behavior.  We can Start/Stop with the host (basically on startup and shutdown), Start/Stop manually (we will go in here and do it), or Start automatically if any ports are open and stop when all ports are closed (i.e. the service only runs while its firewall ports are actually in use).  Anyways, that's all there is to this!  If you'd rather drive it from PowerCLI, there's a quick sketch below.
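A minimal PowerCLI sketch – the host name is a placeholder and I'm assuming you're already connected to vCenter; the SSH service, for example, has the key TSM-SSH:

# set the startup policy and then start the service right now
Get-VMHostService -VMHost esx01.lab.local | Where-Object { $_.Key -eq "TSM-SSH" } | Set-VMHostService -Policy "On"
Get-VMHostService -VMHost esx01.lab.local | Where-Object { $_.Key -eq "TSM-SSH" } | Start-VMHostService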

We are flying through this, Open/Close Ports

Same spot as above, just hit the 'Properties' link in the Firewall section this time.  Again, this is just as easy – just check/uncheck the boxes beside the service containing the port you want to open or close!  Have a look below – it's pretty simple!

[screenshot: firewall properties – open/close ports]

Another relevant spot here is the 'Firewall' button at the bottom.  Aside from opening and closing a port, we can also specify which networks are able to get through if our port is open.  Below I'm allowing access only from the 192.168.1.0/24 network.

[screenshot: allowed IP networks]

Again, this can be done within the CLI, but I find it much easier to accomplish inside of the GUI.  But that's a personal preference, so pick your poison!  For reference, the CLI equivalents are sketched below.
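A rough esxcli equivalent, using sshServer as an example ruleset name – swap in whichever service you're actually working with:

esxcli network firewall ruleset set --ruleset-id sshServer --enabled true
esxcli network firewall ruleset set --ruleset-id sshServer --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id sshServer --ip-address 192.168.1.0/24
esxcli network firewall ruleset allowedip list --ruleset-id sshServer

The first line opens the ports for the service, the second stops allowing all source IPs, and the last two add and then verify the allowed network.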

That's what I get for talking about the CLI – custom services!

Aha!  Too much talk of the CLI leads us to a task that can only be completed via the CLI: custom services.  Basically, if you have a service that utilizes ports that aren't covered off by the default services, you need to create your own spiffy little service so you can enable/disable it, open/close those ports, and allow access to it.  So, off to the CLI we go…

The services in the ESXi firewall are defined by XML files located in /etc/vmware/firewall.  The service.xml file contains the bulk of them and you can define yours in there, or you can simply add any XML file to the directory and it will be picked up (so long as it is defined properly).  If you have enabled HA you are in luck – you will see an fdm.xml file there.  Since the VCAP is time sensitive this might be your quickest way out, as you can just copy that file, rename it to your service and modify it as needed.  If not, then you will have to get into service.xml and copy text out of there.  I'm going to assume HA is enabled and go the copy/modify route.

So, copy fdm.xml to your service name

cp fdm.xml mynewservice.xml

Before modifying mynewservice.xml you will need to make it writable (the firewall XML files are read-only by default); use the following to do so…

chmod o+w mynewservice.xml

Now vi mynewservice.xml – if you don't know how to use 'vi', well, you'd better just learn, go find a site 🙂  Let's say we have a requirement to open up tcp/udp 8000 inbound and tcp/udp 8001 outbound.  We would make that file look as follows, simply replacing the name and ports and setting the enabled flag.

[screenshot: custom service XML]
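Something along these lines – a sketch of the 5.x service XML format, so double-check it against an existing file like fdm.xml before relying on it:

<!-- /etc/vmware/firewall/mynewservice.xml -->
<ConfigRoot>
  <service>
    <id>mynewservice</id>
    <rule id='0000'>
      <direction>inbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>8000</port>
    </rule>
    <rule id='0001'>
      <direction>inbound</direction>
      <protocol>udp</protocol>
      <porttype>dst</porttype>
      <port>8000</port>
    </rule>
    <rule id='0002'>
      <direction>outbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>8001</port>
    </rule>
    <rule id='0003'>
      <direction>outbound</direction>
      <protocol>udp</protocol>
      <porttype>dst</porttype>
      <port>8001</port>
    </rule>
    <enabled>true</enabled>
    <required>false</required>
  </service>
</ConfigRoot>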

Alright, save that bad boy, and it's probably a good idea to run 'chmod o-w mynewservice.xml' and take away that write permission.  If you go and look at your services, or simply run 'esxcli network firewall ruleset list' you might say, "hey, where's my new service?"  Well, it won't show up until you refresh the firewall – to do so, use the following command…

esxcli network firewall refresh

Now you can go check in the GUI or do the following to list out your services…

esxcli network firewall ruleset list

[screenshot: ruleset list showing the new service]

Woot!  Woot!  It's there!  But wait, it's disabled.  No biggie, we can go ahead and enable it just as we did the others in the steps earlier in this post – or, hey, since we are in the CLI let's just do it now!

esxcli network firewall ruleset set -r mynewservice -e true

And that's that!  You are done!  If asked to set the allowedIP information, I'd probably just jump back to the GUI and do that!

Set firewall security level – More CLI goodness

Well before we can set the firewall security level let's first understand what security levels are available to us.  ESXi gives us three…

High – This is the default – basically, the firewall blocks all incoming and outgoing ports except for the essential ports it needs to run.

Medium – All incoming traffic is blocked, except for any port you open – outgoing is a free-for-all.

Low – Nada – have at it, everything is open.  

Anyway, we can get the default action by specifying

esxcli network firewall get

and to change it we have a few options…  Passing '-d false' sets the default action to DROP (the default HIGH security level), passing '-d true' sets it to PASS traffic (I think this would be the medium security level), and passing '-e false' disables the firewall completely (the low setting).  So, to switch to medium we could do the following

esxcli network firewall set -d true

I could be wrong here, so if I am just let me know and I'll update it 🙂

And guess what?  We are done with the firewall!  I would practice this stuff as it's easily measurable and it's quick to tell whether you've done it right or wrong – I'd bet this will be on the exam in one way or another.  Good Luck!

8 weeks of #VCAP – Security

Just as I said, I'm going to hop around from topic to topic, so without further ado we move from HA to security.  This post will cover pretty much all of Objective 7 on the blueprint – some things I may skim over while focusing heavily on others.

So first up is Objective 7.1 – now there is a lot of information in here and I'll just pull out the most important bits in my opinion, as well as the tasks I don't commonly perform.  That said, I'm going to leave out the users, groups, lockdown mode, and AD authentication.  These things are pretty simple to configure anyways.  Also, this whole authentication proxy thing – I'm just going to hope for the best that it isn't on the exam 🙂  So, let's get started on this beast of an objective.

SSH

Yeah, we all enable it right – and we all suppress that warning with that advanced setting.  The point is, SSH is something that is near and dear to all our hearts, and we like to have the ability to access something via the CLI in case the GUI or vCenter or something is down.  So with that said, let's have a look at what the blueprint states in regards to SSH – customization.  Aside from enabling and disabling this, which is quite easy so I won't go over it, I'm not sure what the blueprint is getting at.  I've seen lots of sites referencing the timeout setting so we can show that.  Simply change the value in the Advanced Settings of a host to the desired time in seconds (UserVars -> ESXiShellTimeOut) as shown below

[screenshot: ESXiShellTimeOut advanced setting]

As far as 'Customize SSH settings for increased security' goes, I'm not sure what else you can enable/disable or tweak to do so.  If you are familiar with sshd I suppose you could prevent root from logging in and simply utilize SSH with a local user account.  The timeout can also be set from the CLI, as sketched below.
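Something like this should do it from the ESXi shell or vCLI (the value is just an example – double-check what units your build expects):

esxcli system settings advanced set -o /UserVars/ESXiShellTimeOut -i 900
esxcli system settings advanced list -o /UserVars/ESXiShellTimeOut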

Certificates and SSL

The blueprint mentions the enabling and disabling of certificate checking.  This is simply done by checking/unchecking a checkbox in the SSL section of the vCenter Server settings.

The blueprint also calls out the generation of ESXi host certs.  Before doing any sort of certificate generation or crazy SSL administration, always back your original certs up.  These are located in /etc/vmware/ssl – just copy them somewhere.  To regenerate new certs simply shell into ESXi and run generate-certificates – this will create new certs and keys; ignore the error regarding the config file 🙂  After doing this you will need to restart your management agents (/etc/init.d/hostd restart) and quite possibly reconnect your host to vCenter.  Roughly, the whole dance looks like the below.
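A quick sketch from the ESXi shell – the backup filenames are just my choice:

cd /etc/vmware/ssl
cp rui.crt rui.crt.bak
cp rui.key rui.key.bak
/sbin/generate-certificates
/etc/init.d/hostd restart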

To deploy a CA-signed cert you can simply copy your certs to the same directory (/etc/vmware/ssl), be sure they are named rui.crt and rui.key, and restart hostd the same as above.

As far as SSL timeouts go, I couldn't find this in any of the recommended tools for this objective – it's actually in the Security Guide (which makes sense, right, we are doing the security objective – #fail that the guide isn't listed).  Either way, you need to edit the /etc/vmware/hostd/config.xml file and add the following two entries to modify the SSL read and handshake timeout values respectively (remember, they are in milliseconds):

<readTimeoutMs>15000</readTimeoutMs>

<handshakeTimeoutMs>15000</handshakeTimeoutMs>
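For placement, the Security Guide's example puts the read timeout under the http section and the handshake timeout under the ssl section of config.xml – roughly like this (a sketch only, so double-check against the guide):

<!-- /etc/vmware/hostd/config.xml (fragment) -->
<config>
  <vmacore>
    <http>
      <readTimeoutMs>15000</readTimeoutMs>
    </http>
    <ssl>
      <handshakeTimeoutMs>15000</handshakeTimeoutMs>
    </ssl>
  </vmacore>
</config>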

Once again you will need to restart hostd after doing this!

Password policies

Yikes!  If you want to get confused, try to understand the PAM password policies.  I'll do my best to explain them – keep in mind it will be high level though – this is in the blueprint, however I'm not sure if they are going to have you doing this on the exam.  Either way, it's good to know…  Honestly, I don't think I'm going to memorize this; if you work with it daily then you might, but me, no!  I'll just remember that it is also in the Security Guide (search for PAM).  Anyways, here's the command

password requisite /lib/security/$ISA/pam_passwdqc.so retry=N min=N0,N1,N2,N3,N4

Wow!  So what the hell does that mean?  Well, first off the Ns represent numbers (N = retry attempts, N0 = minimum length if the password uses only one character class, N1 = minimum length if using two character classes, N2 = minimum length of words inside passphrases, N3 = minimum length if using three character classes, N4 = minimum length if using all four character classes).  Character classes are basically lower case, upper case, numbers and special characters.  They also confuse things by slamming the passphrase setting right in the middle – nice!  Either way, this is the example from the security guide.

password requisite /lib/security/$ISA/pam_passwdqc.so retry=3 min=12,9,8,7,6

This translates into three retry attempts, 12 character password min if using only one class, 9 character minimum if using two classes, 7 character minimum if using three classes, and 6 character minimum if using all four classes.  As well, passphrases are required to have words that are at least 8 characters long.

No way can I remember this, I'm just going to remember Security Guide + CTRL+F + PAM 🙂

I'm going to cut this post off here and give the ESXi firewall its own post – my head hurts!!!! 🙂

8 weeks of #VCAP – HA

Although High Availability is something I’ve been configuring for many years now I thought it might be a good idea to go over the whole process again.  This became especially evident after watching the HA section of Jason Nash’s TrainSignal/PluralSight course, as I quickly realized there are a lot of HA advanced settings that I’ve never modified or tested – with that said, here’s the HA post.

First off I’m not going to go over the basic configuration of HA – honestly, it’s a checkbox right – I think we can all handle that.  I will give a brief description of a few of the HA bullet points that are listed within the blueprint and point out where we can manage them.

First up, Admission Control

[screenshot: admission control settings]

When an HA event occurs in our cluster, we need to ensure that enough resources are available to successfully failover our infrastructure – Admission control dictates just how many resources we will set aside for this event.  If our admission control policies are violated, no more VMs can be powered on inside of our cluster – yikes!  There are three types…

Specify Failover Host – Ugly!  Basically you designate one host as the host that will be used in the event of an HA failover.  An HA event is the only time that this host will have VMs running on it – all other times, it sits there wasting money 🙂

Host failures cluster tolerates – This is perhaps the most complicated policy.  Essentially a slot size is calculated for CPU and memory, the cluster then does some calculations in order to determine how many slot sizes are available.  It then reserves a certain number of failover slots in your cluster to ensure that a certain number of hosts are able to failover.  There will be much more on slot size later on in this post so don’t worry if that doesn’t make too much sense.

Percentage of Cluster resources reserved – This is probably the one I use most often.  Allows you to reserve a certain percentage of both CPU and Memory for VM restarts.

So, back to slot size – a slot is made up of two components: memory and CPU.  HA will take the largest memory reservation of any powered-on VM in your environment and use that as its memory slot size.  So even if you have 200 VMs that have only 2GB of RAM, if you place a reservation on just one VM of say, oh, 8GB of RAM, your memory slot size will be 8GB.  If you do not have any reservations set, the slot size is deemed to be 0MB + memory overhead.

As for CPU, the same rules apply – the slot size is the largest CPU reservation set on a powered-on VM.  If no reservations are used, the slot size is deemed to be 32MHz.  Both the CPU and memory slot sizes can be controlled by a couple of HA advanced settings – das.slotCpuInMhz and das.slotMemInMb (**Note – all HA advanced settings start with das. – so if you are doing the test and you can’t remember one, simply open the Availability doc and search for das – you’ll find them).  These do not change the default slot size values, but rather specify an upper limit on how large a slot size can be.
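To make the slot math concrete (made-up numbers): with a 500MHz / 32MB slot size, a host with 8,000MHz of CPU and 1,024MB of memory available for VMs gives min(8000/500, 1024/32) = min(16, 32) = 16 slots – the more constrained resource wins, and the cluster total is simply the sum across hosts.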

So let’s have a look at these settings and slot size – first up, we can see our current slot size by selecting the ‘Advanced Runtime Info’ link on a cluster's Summary tab.  As shown below my current slot size for CPU is 500MHz and 32MB for memory; also I have 16 total slots, 4 of which have been taken.

[screenshot: slot size before the change]

So let’s now set the das.slotCpuInMhz advanced setting to something lower than 500 – say we only ever want our CPU slot size for a VM to be 64MHz.  Within the cluster's HA settings (Right-click cluster->Edit Settings, vSphere HA) you will see an Advanced Options button; select that and set das.slotCpuInMhz to 64 as shown below.

[screenshot: das.slotCpuInMhz advanced option]

Now we have essentially stated that HA should use the smaller of either the largest VM CPU reservation or the value of das.slotCpuInMhz as our CPU slot size.  A quick check on our runtime settings again reflects the change we just made.  Also, if you look, you will see that we have also increased our total available slots to 128, since we are now using a CPU slot size of 64MHz rather than 500.

[screenshot: slot size after the change]
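If you'd rather make that change with PowerCLI than click through the GUI, something like this should do it (the cluster name is a placeholder):

New-AdvancedSetting -Entity (Get-Cluster "LabCluster") -Type ClusterHA -Name "das.slotCpuInMhz" -Value 64 -Confirm:$false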

So that’s admission control and slot sizes in a nutshell.  Seems like a good task to have you limit or change some slot sizes on the exam.  Also, I’m not sure how much troubleshooting needs to be performed on the exam but if presented with any VMs failing to power on scenarios, slot sizes and admission control could definitely be the answer.

More Advanced Settings

As you may have seen in the earlier screenshots there were a few other of those das. advanced settings shown.  Here’s a few that you may need to know for the exam, maybe, maybe not, either way, good to know…

das.heartbeatDsPerHost – used to increase the number of heartbeat datastores used – default is 2, however can be overridden to a maximum of 5.  Requires complete reconfiguration of HA on the hosts.

das.vmMemoryMinMb – value to use for the memory slot size if no reservation is present – default of 0

das.slotMemInMb – upper value of a memory slot size – meaning we can limit how large the slot size can be by using this value.

das.vmCpuMinMhz – value to use for the cpu slot size if no reservations are present – default of 32.

das.slotCpuInMhz – upper value of a CPU slot size – meaning we can limit how large the slot size can be by using this value

das.isolationAddress – can be used to change the IP address that HA pings when determining isolation – by default this is the default gateway.

das.isolationAddressX – can be used to add additional IPs to ping – X can be any number between 0 and 9.

das.useDefaultIsolationAddress – can be used to specify whether HA should even attempt to use the isolation address.

Anyways, those are the most commonly used settings – again, any others will be listed in the availability guide so use that if needed to find others on the exam – but remember, having to open those PDFs will take away valuable time.

Other random things

Just a few notes on some other parts of HA that I haven’t used that often.  The first being VM Monitoring.  VM Monitoring is a process that will monitor for heartbeats and I/O activity from the VMware tools service inside your virtual machines.  If it doesn’t detect activity from the VM, it determines that it has failed and can proceed with a reboot of that VM.  vSphere has a few options as it pertains to VM monitoring that we can use to help prevent false positives and unneeded VM reboots.

Failure Interval – Amount of time in seconds to check for heartbeats and I/O activity.

Minimum Uptime – The amount of time in seconds that VM monitoring will wait after a power on or restart before it starts to poll.

Maximum Per VM Resets – the number of times that a VM can be reset in a given time period (Reset Time Window)

Reset Time Window – used for the maximum VM resets – specified in hours

The blueprint also mentions heartbeat datastore dependencies and preferences.  Quickly, vSphere will choose which datastores to use as HA heartbeat datastores automatically, depending on a number of things like storage transport, number of hosts connected, etc.  We can change this as well in our options.  We can instruct vSphere to only choose from our preferred list (and by selecting only 2 preferred datastores we in turn determine exactly which datastores are used), or we can say to use our preferred datastores if possible but let vSphere pick others if it has to.

As well, most of the defaults we set, such as isolation response and restart priority, can be overridden on a per-VM basis.  This is pretty easy so I won’t explain it, but just wanted to mention that it can be done.

I’d say that’s enough for HA – it’s not a hard item to administer.  That said, lab it, lab all of it!  Practice Practice Practice.

8 weeks of #VCAP – Auto Deploy

Ok!  Here we go!  AutoDeploy!  Now that we have covered both Image Builder and Host Profiles we can have a look at how to set up AutoDeploy!  Now I don't really expect to have to go through this process from start to finish on the exam; it was quite a big job to get it running.  That said, there are a lot of little tiny tasks involved in making AutoDeploy work properly that could be fair game on the exam.  Also, I know that we already covered Image Builder and Host Profiles, but I will quickly go over what needs to be done again, as it will help to engrain it into my mind. 🙂

First off, AutoDeploy Installation and Configuration

I'm not going to cover very much in regards to the installation – I mean, you run the vCenter installer, click next next next finish, then enable the plugin in your client.  Yup, that easy!  Configuration, however – that will create some work for you.  AutoDeploy is cool, but it doesn't come without its fair share of requirements and infrastructure that needs to be in place.  Let's go over some of that!

TFTP Server

First up is a TFTP server!  This needs to be in place!  So go and find a free one if you want or use one that you already have.  If you are using AutoLab then I've got great news for you, there is one already there.  It's installed on your domain controller!  Either way what we need to do is grab a zip file and extract it into our TFTP root.  The zip file can be found by going into the 'AutoDeploy' settings from your vSphere Client.  You should see an option there to 'Download TFTP Boot Zip'.  Basically, just download this and extract it to your TFTP root.  

[screenshot: Auto Deploy settings – Download TFTP Boot Zip]

DHCP Settings

Another third-party component that we need in place to use AutoDeploy is DHCP.  And whatever DHCP server you use, you need the ability to set options 66 and 67 in your scope settings.  What these do is specify where the TFTP server is and what boot file to use.  You can see in the above image that the boot file we need to use is called undionly.kpxe.vmw-hardwired – I'll leave the IP of your TFTP server up to you.  So go ahead and set those options on your DHCP server, 66 being the TFTP server IP and 67 being the boot file.  You can see below how I have done this using the Windows DHCP server that is configured with AutoLab.
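If you prefer scripting the lab setup, the same options can likely be set from the command line on a Windows DHCP server – something along these lines, where the scope and IPs are placeholders and the boot file name is the one from the zip above:

netsh dhcp server scope 192.168.199.0 set optionvalue 066 STRING "192.168.199.4"
netsh dhcp server scope 192.168.199.0 set optionvalue 067 STRING "undionly.kpxe.vmw-hardwired"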

[screenshot: DHCP scope options 66 and 67]

Basically, once you have installed AutoDeploy, TFTP, and setup DHCP with the proper boot server and file options we can go ahead and start with some of the goodness that is AutoDeploy.  Next up, Image Builder!

Image Builder

Remember, we went over this in depth here – but hey, let's build a quick image again!  First, let's get access to our offline depot.

Add-EsxSoftwareDepot C:\OfflineBundle\update-from-esxi5.0-5.0_update03.zip

From there let's go ahead and clone one of the ESXi images into our own new profile that we can use to deploy to hosts.  First, let's get just the name of the Image Profiles so we can easily just copy/paste it 🙂

Get-EsxImageProfile | Select Name

And now, we can do the clone process.

New-EsxImageProfile -Name VCAP_Image_Profile -CloneProfile ESXi-5.0.0-20131002001-standard -Vendor mwpreston.net

At this point we are good unless we wanted to add some external VIBs.  For the sake of this article let's just leave it at that.

Host Profiles

I'll let you go over the Host Profiles section that I wrote up yourself.  Basically, we just need a Host Profile setup that will further configure our host after it has been installed by AutoDeploy.

AutoDeploy Rules

Ok, so we have TFTP, we have DHCP, we have our Image Profile and our Host Profile – time to create some AutoDeploy rules.  An AutoDeploy rule basically applies an image, cluster configuration, or host profile to a host based on a specific attribute of that host matching a pattern.  By that I mean we can match on the IP address, MAC, vendor, model, etc.  I will be using vendor in this example.

So there are three AutoDeploy rules that we need to create, and we need to do this in order: first the image, then the cluster we want the host to join, and finally the host profile we would like to apply.  So, let's get started with the image.

New-DeployRule -Name InitialBootRule -Item VCAP_Image_Profile -Pattern "vendor=VMware, Inc."
Add-DeployRule -DeployRule InitialBootRule

And on to the cluster we want the host to join…

New-DeployRule -Name ClusterRule -Item MyCluster -Pattern "vendor=VMware, Inc."
Add-DeployRule -DeployRule ClusterRule

And finally the host profile we wish to apply to the host…

New-DeployRule -Name HostProfileRule -Item MyHostProfile -Pattern "vendor=VMware, Inc."
Add-DeployRule -DeployRule HostProfileRule

We can also specify the boot image, cluster, and host profile all in one rule if we want to.  This is done by specifying them separated by commas as follows

New-DeployRule -Name AutoDeployNestedVMwareHosts -Item VCAP_Image_Profile,MyCluster,MyHostProfile -Pattern "vendor=VMware, Inc."
Add-DeployRule -DeployRule AutoDeployNestedVMwareHosts

At this point we are basically done configuring everything.  You should be able to deploy a host successfully!  But this is the VCAP right?  So I bet they will have this partially setup, or something that needs to be fixed.  I can't see having to go through this whole process.  So, we had better brush up on a few of the other AutoDeploy cmdlets that are available…

Get-DeployRuleSet – gets the current ruleset that we just loaded.

Set-DeployRuleSet – allows you to set the list of rules in the ruleset (Get-DeployRule Rule1, Rule2 | Set-DeployRuleSet)

Set-DeployRule – Update an existing rule (Set-DeployRule -Name "HostProfileRule" -Item "NewHostProfile")

Get-VMHostMatchingRules – Checks to see which rules match on a given host.

Repair-DeployImageCache – used to rebuild the Auto Deploy cache in the case it is deleted

And the most important one to remember for VCAP since it will remind you of the syntax for them all..

Get-AutoDeployCommand

Also another notable mention is the pattern.  AutoDeploy can match on a number of different characteristics of a host to determine which rules to apply.  By simply running Get-VMHostAttributes against a given host we can see all of the different items we have to match on in terms of a pattern.

[screenshot: Get-VMHostAttributes output]

Ok!  At this point I think we are done!!!!  I know this was lengthy but hopefully helpful!!  I'm going to have to go over this stuff a few times in my lab as it is a lot to remember!!!  Good Luck!

8 weeks of #VCAP – Netflow, SNMP, and Port Mirroring

Objective 2.1 covers off some other components in regards to distributed switches, so I thought I would just group them all together in this post since there isn't a whole lot to getting them set up.

First up, SNMP

Remember a week or so ago when we went over how to manage hosts with the vSphere Management Assistant?  Well I hope you paid attention, as we will need to have our hosts connected to the vMA in order to configure SNMP (technically you could do it with any instance of the vSphere CLI, but the vMA is already there for you on the exam so you might as well use it).  We will need to use a command called vicfg-snmp in order to set up a trap target on our hosts.  So to start off, let's set a host target with the following command

vifptarget -s host1.lab.local

Once our host is set as the target host we can start to configure SNMP.  First off, let's specify our target server, port, and community name.  For a target server of 192.168.199.5 on the default port of 162 and a community name of Public we can use the following command

vicfg-snmp -t 192.168.199.5@162/Public

[screenshot: vicfg-snmp target configuration]

Now, simply enable SNMP on the host with -E

vicfg-snmp -E

You know what, you're done!  Want to test it?  Use -T.  Check your SNMP server to be sure you have received the trap!

vicfg-snmp -T

I would definitely recommend exploring the rest of the options with vicfg-snmp.  You can do so by browsing the help of the command.  Look at things like multiple communities (-c), how to reset the settings to default (-r), and how to list out the current configuration (-s) etc…

vicfg-snmp --help

Also, don't forget you need to do this on all of your hosts!  Keep in mind that vCenter also has SNMP settings.  These are configured in the vCenter Server Settings under the SNMP section.  There is a complete GUI around this so I'm not going to go over how to configure these.

NetFlow

NetFlow is configured in the settings of your dvSwitch (Right-click dvSwitch->Edit Settings) on the NetFlow tab.  There are a number of items we can configure here.  First off, our collector IP and port.  This is the IP and port of the actual NetFlow collector where we are sending the data to.  To allow all of your traffic to appear as coming from a single source, rather than from multiple ESXi management networks, you can specify an IP address for the dvSwitch here as well.  This address doesn't actually live on your network, it just shows up in your NetFlow collector.

[screenshot: dvSwitch NetFlow settings]

There are a few other settings here as well: Active Flow Export Timeout and Idle Flow Export Timeout handle timeouts for the flows, whereas the sampling rate determines what portion of data to collect – i.e., a sampling rate of 2 will collect every other packet, 5 every fifth packet, and so on.  The 'Process internal flows only' option will only collect data between VMs on the same host.  That's really it for NetFlow, not that hard to configure.

Port Mirroring

I suppose you may be asked to mirror a certain port to an uplink or VM on the exam, so it's probably best to go over this.  First off, if you were asked to mirror traffic from VMA to VMB then you need to determine what ports these VMs are attached to.  You can see this on the Ports tab of the dvSwitch.  Just sort by the 'Connectee' column and find their corresponding Port IDs.  For the sake of this example let's say VMA is on port 150 and VMB is on 200.

To do the actual mirroring we need to be on the Port Mirroring tab of the dvSwitch's settings.  Here we can click 'Add' to set up the mirror.  As shown, we give our session a name and description, and there are a few settings regarding encapsulation VLANs and the maximum length of packet to capture.

[screenshot: port mirroring session setup]

The next couple of steps simply set up our source and destination for our mirror.  To follow our example we can use port 150 for the source and port 200 for the destination.  Unless we explicitly check the 'Enable' box when completing the setup, port mirror sessions are disabled by default; they can be enabled later by going back into the session and explicitly enabling it.

I'm going to practice setting these up until I can do it with my eyes closed.  This is something that I don't use in my day-to-day operations, but I also recognize that the VCAP may ask you to do it as it can easily be scored.

8 weeks of #VCAP – Migrating to vSphere Distributed Switches

Alright, there is a ton of information and blog posts that revolve around migrating to vSphere Distributed Switches and they are all great!  I hate to throw yet another one out there, but as I've stated before things tend to sink in when I post them here.  Call it selfish if you want – I'm just going to call it studying Objective 2.1 in the blueprint 🙂

Before we get too involved in the details I'll go through a few key pieces of information.  As you can see below, there are a lot of port groups that I will need to migrate.  These are in fact the port groups that are set up by default with AutoLab.  Also, I'm assuming you have redundant NICs set up on all your vSwitches.  This will allow us to migrate all of our VM networks and port groups without incurring any downtime.  As stated before, there are many blog posts around this subject and many different ways to do the migration.  This is just the way I've done it in the past – I'm sure you can probably do this in fewer steps, but this is just the process I've followed.

[screenshot: standard switch port groups to migrate]

Step 1 – Create the shell

So the first step is to create our distributed switch.  This is pretty simple!  Just head into your network view, select 'New vSphere Distributed Switch' and follow the wizard – it's not that hard.  Pay attention to the number of uplinks you allow, as you need to be sure that you have as many uplinks in your distributed switch as you have physical adapters assigned to your standard switches.  Also, I usually add my hosts into the distributed switch during this process, just not importing any physical NICs.  Basically we're left with a distributed switch containing our hosts with no uplinks assigned.  Once we have our switch we need to duplicate all of the port groups we wish to migrate (Management, vMotion, FT, VM, etc.).  If you are following along with AutoLab you should end up with something similar to the following (ignore my PVLAN port groups – that's another blog post).

[screenshot: dvSwitch port groups]

One note about the uplinks that you can't see in the image above.  I've gone into each of my port groups and set up the teaming/failover to mimic that of the standard switches.  So, for the port groups that were assigned to vSwitch0, I've set dvUplink1 and 2 as active, and 3/4 as unused.  For those on vSwitch1, 3/4 are active and 1/2 are unused.  This provides us with the same connectivity as the standard switches and allows us to segregate the traffic the exact same way that the standard switches did.  This can be done by editing the settings of your port group and modifying the Teaming and Failover section.  See below.

[screenshot: port group teaming and failover settings]

Step 2 – Split up your NICs

Alright!  Now that we have a shell of a distributed switch configured we can begin the migration process.  This is the process I was mentioning at the beginning of the post that can be performed a million and one ways!  This is how I like to do it.  From the host's networking configuration page, be sure you have switched to the vSphere Distributed Switch context.  The first thing we will do is assign a physical adapter from every vSwitch on each host to an uplink on our dvSwitch.  Now, we have redundant NICs on both vSwitches so we are able to do this without affecting our connectivity (hopefully).  To do this, select the 'Manage Physical Adapters' link in the top right-hand corner of the screen.  This will display our uplinks with the ability to add NICs to each one.

[screenshot: manage physical adapters]

Basically, we want to add vmnic0 to dvUplink1 and vmnic2 to dvUplink3.  This is because we want one NIC from each standard switch in each of the active/unused configurations that we set up previously.  It's hard to explain, but once you start doing it you should understand.  To do this, just click the 'Click to Add NIC' links on dvUplink1 and 3 and assign the proper NICs.  You will get a warning letting you know that you are removing a NIC from one switch and adding it to another.

[screenshot: NIC removal warning]

Be sure you repeat the NIC additions on each host you have, paying close attention to the uplinks you are assigning them to.

Step 3 – Migrate our vmkernel port groups

Once we have a couple of NICs assigned to our dvSwitch we can now begin to migrate our vmkernel interfaces.  To do this task, switch to the networking inventory view, right-click on our dvSwitch and select 'Manage Hosts'.  Select the hosts we want to migrate from (usually all of them in the cluster).  The NICs that we just added should already be selected in the 'Select Physical Adapters' dialog.  Leave this as default; we will come back and grab the other NICs once we have successfully moved our vmkernel interfaces and virtual machine networking.  It's the next screen, the 'Network Connectivity' dialog, where we will perform most of the work.  This is where we say what source port group should be migrated to what destination port group.  An easy step – simply adjusting all of the dropdowns beside each port group does the trick.  See below.  When you're done, skip the VM Networking for now and click 'Finish'.

[screenshot: network connectivity mappings]

After a little bit of time we should now have all of our vmkernel interfaces migrated to our distributed switch.  This can be confirmed by looking at our standard switches and ensuring we see no vmkernel interfaces.  What you might still see, though, is VMs attached to Virtual Machine port groups on the standard switches.  This is what we will move next.

Step 4 – Move your virtual machine port groups

Again, this is done through the networking inventory and is very simple.  Right-click your dvSwitch and select 'Migrate Virtual Machine Networking'.  Set the VM network you wish to migrate as your source, and the one you created for it in your dvSwitch as your destination (see below).  When you click next you will be presented with a list of VMs on that network, and whether or not the destination network is accessible.  If we have done everything right up to this point it should be.  Select all your VMs and complete the migration wizard.

[screenshot: migrate virtual machine networking wizard]

This process will have to be done for each and every virtual machine port group you wish to migrate – in the case of AutoLab, Servers and Workstations.  Once that is done we have successfully migrated all of our port groups to our distributed switch.

Step 5 – Pick up the trash!

The only thing left to do at this point would be to go back to the hosts' view of the distributed switch, select 'Manage Physical Adapters' and assign the remaining two NICs from our standard switches to the proper uplinks in our dvSwitch.

Step 6 – Celebrate a VCAP pass

And done!  This seems like a good thing to have someone do on the exam!  That said, if so, take caution.  The last thing you want to do is mess up a Management Network on one of your hosts and lose contact to it!  Yikes!

8 weeks of #VCAP – vSphere Management Assistant (vMA)

Alright, here we go – the vMA!  I promised you I would bounce around between topics which don't relate to each other whatsoever.

So, first off, let's get started with installing and configuring the vMA.  Installation really doesn't even need to be described.  It comes as an ovf and it's as simple as just importing that…

Configuration can get a bit tricky, especially if you haven't used IP Pools before.  We will cover IP Pools in another blog post so I'll just leave it at that.  For the moment, I just went into the vMA VM settings and disabled all of the vApp options!

Anyways, once you finally get the appliance booted up you will be prompted to enter in some network information – pretty simple stuff, menu driven, and then prompted to change the default password for vi-admin.  Easy stuff thus far.  Speaking of authentication the vMA utilizes 'sudo' to execute commands.  This basically allows vi-admin to execute commands under the root user account.  A bit of a security and safeguard mechanism utilized in some Linux OSes.

Alright, so we are now up and running so let's just go over some common tasks that we might perform in relation to the vSphere Management Assistant.  Probably a good idea to know all of these for the exam as vMA does have its very own objective and is referenced in many others.

vMA and your domain!

Certainly we may want to join the appliance to our domain.  This will give us plenty of benefits security-wise, the biggest being we will not have to store any of our target hosts' passwords within the vMA credential store – so long as the hosts are members of the domain as well.  Commands related to vMA and domains are as follows…

To join vMA to a domain, obviously substituting your domain name and authentication…requires a restart of the appliance afterwards.

sudo domainjoin-cli join FQDN user_with_privileges

And, to remove the vMA it's the same command, different parameters

sudo domainjoin-cli leave

And to view information

sudo domainjoin-cli query

So as mentioned above we can do some unattended Active Directory authentication to our hosts.  This is a pretty long drawn-out process so I doubt it will be asked, but then again I'm wrong 100% of 50% of the time – I'd just know where this information is in the vSphere Management Assistant user guide (HINT: Page 15).

Host Targets

Before we can use the vMA to execute commands on hosts we need to, well, add hosts to our vMA.  In vMA terms, our hosts are called targets; targets on which we can execute commands.  So when adding hosts we have to provide the hostname and some credentials.  Thus we have a couple of options in regards to how we authenticate: adauth or fpauth (the default).  Examples of adding a host with both authentication types are below, along with some other host options…

Using local ESXi credentials

vifp addserver HOSTNAME

Using AD credentials

vifp addserver HOSTNAME --authpolicy adauth

Viewing the hosts we have added

vifp listservers

Removing a server

vifp removeserver HOSTNAME

Set a host as the target server – meaning set it up so you can run a command on the host without authentication

vifptarget -s HOSTNAME

To clear the current target

vifptarget -c

Security and user related functions

The vMA also has a few commands we can run to help better secure our systems.  When you add a host to vMA, it actually creates a vi-admin and vi-user account on your ESXi host.  You can tell vMA to rotate these passwords using the following command.

vifp rotatepassword (--now, --never or --days #)

vMA also has a vi-user account locally, which by default is disabled, since it has no password.  This account can be used to run commands on an ESXi host that do not require administrative privileges.  Enabling this account is as easy as simply setting a password on it using the following

sudo passwd vi-user

For now that's it – that's all I can think of that is vMA related.  We will be using it for some other components in the future, like setting up SNMP and different things, but I wanted to keep this post strictly about vMA-specific commands.  Happy Studying!

8 weeks of #VCAP – LUN Masking

Alright – here we go – the push is on.  8 weeks to cover some random, sparingly used topics off of the VCAP5-DCA blueprint.  Today, let's tackle an item out of the very first objective on the blueprint; LUN masking.

LUN masking is essentially a process that will mask away LUNs, or make those LUNs inaccessible to certain ESXi hosts.  You know when you go into your backend array and say which hosts can have access to which LUNs – yeah, that's basically LUN masking.  However, for the sake of this exam, it's performed on the host itself through something called claimrules.  That said, it's much harder, but explained below…

So first off, we need to decide on a LUN that we want to mask.  There are many ways to list all of your LUNs/datastores through the CLI and through the vSphere Client, so pick your beast.  What we need to get is the LUN's identifier – the long string of characters that ESXi uses to uniquely identify the LUN.  Since the claimrule is created within the CLI we might as well just find these numbers inside of the CLI as well – since you may be pressed for time on the exam.  So, let's first list our LUNs, showing each identifier.

esxcli storage core device list | less

[screenshot: esxcli storage core device list output]

As you can see I piped the output to less.  If we don't do this and there are a lot of LUNs attached to your host then you may get a little overwhelmed with the output.  "esxcfg-scsidevs -m" will also give you some great information here, which may be a little more compact than the esxcli command.  Choose your weapon, so long as you can get the identifier.  The LUN shown in the above image has an identifier of "naa.6006048c6fc141bb051adb5eaa0c60a9" – this is the one I'm targeting.

So now that we have our identifier, it's time to do some masking.  We have some decisions to make at this point though.  We can mask by path (removing individual path visibility), by vendor (this will mask all LUNs from a specific vendor), or by storage transport (yeah, like all iSCSI or all FC).  If we look at the currently defined claimrules we can see most types are utilized.  To do so, use the following command

esxcli storage core claimrule list

[screenshot: claimrule list output]

For our sake here we will go ahead and perform our masking by path.  I will note below where vendor or transport would come into play if you were to choose those instead.

So, in order to do it by path, we need to see all of the paths associated with our identifier.  To do so, we can use the following command along with grepping for our identifier.

esxcfg-mpath -m | grep naa.6006048c6fc141bb051adb5eaa0c60a9

[screenshot: esxcfg-mpath output showing two paths]

Alright, so you can see we have 2 paths.  That means in order to completely mask away this LUN we will need to do all of the following twice; once using the vmhba32:C1:T0:L0 path and once using vmhba32:C0:T0:L0.

Now, time to begin constructing our claimrule!  First off we will need an ID number.  Certainly don't use one that is already taken (remember "esxcli storage core claimrule list"), or you can use "-u" to autoassign a number.  I like to have control over this stuff so I'm picking 200.  Also to note is the -t option – this specifies the type of claimrule (remember when I said we could mask by vendor).  Our -t to do a path will be location, however this could be vendor or transport as well.  ** Running "esxcli storage core claimrule add" with no arguments will output a bunch of examples **  So, in order to mask by location we will specify -A, -C, -T, and -L parameters referencing our path, and the -P states we want to use the MASK_PATH plugin.  The command should look like the one below.

esxcli storage core claimrule add -r 200 -t location -A vmhba32 -C 1 -T 0 -L 0 -P MASK_PATH

and for our second path – don't forget to put a new rule ID

esxcli storage core claimrule add -r 201 -t location -A vmhba32 -C 0 -T 0 -L 0 -P MASK_PATH

Running "esxcli storage core claimrule list" will now show our newly created rules, however they haven't been applied yet.  Basically they are running in "file" – we need them to be in "runtime"  This is as as easy as running

esxcli storage core claimrule load

Now we are all set to go – kinda.  They are in runtime, but the rules will not be applied until that device is reclaimed.  So, a reboot would work here – or, a more ideal solution, we can run a reclaim on our device.  To do so we will need that device identifier again and the command to run is…

esxcli storage core claiming reclaim -d naa.6006048c6fc141bb051adb5eaa0c60a9

And done!  And guess what – that LUN is gonzo!!!  Congrats Master Masker!

HEY!  Wait!  I needed that LUN

Oh SNAP!  This is my lab environment and I need that LUN back, well, here's how we can undo everything we just did!

First off, let's get rid of those claimrules we just added

esxcli storage core claimrule remove -r 200
esxcli storage core claimrule remove -r 201

Listing them out will now show them only in runtime; they should no longer be in file.  Let's get them out of runtime by loading our claimrule list again.

esxcli storage core claimrule load

Now a couple of unclaim commands on our paths.  This will allow them to be reclaimed by the default plugin.

esxcli storage core claiming unclaim -t location -A vmhba32 -C 0 -T 0 -L 0
esxcli storage core claiming unclaim -t location -A vmhba32 -C 1 -T 0 -L 0

A rescan of your vmhba and voila!  Your LUN should be back!  Just as with Image Builder I feel like this would be a good thing to know for the exam.  Again, it's something that can easily be marked and tracked and very specific!  Happy studying!

8 weeks of #VCAP – Image Builder

As I have roughly 8 weeks till I sit my first VMware advanced certification, I'm going to be pushing out somewhat random posts dealing with areas on the VCAP5-DCA blueprint where I feel I need practice.  Don't be surprised if they jump around from objective to objective.  Maybe some of you will find them useful, maybe not.  But in my experience writing stuff down and sharing it here really helps me learn it.  And since everything I read about the VCAP-DCA says you need to move at superman speed, I had better know my stuff.  So, without further ado, here's the first – Image Builder.

I've never used Image Builder at all, from which you can conclude that you will most likely see some Auto Deploy and Host Profile posts to follow.  I've just never had the 'need' to get any of this set up and configured.  Anyways, enough of that, let's have a look at how to build a custom image with Image Builder.

First off let's figure out what Image Builder is.  In its basic functionality, it is essentially a way of managing multiple software depots and packages – packages meaning those VIBs that you need to download for certain pieces of software and drivers that don't come with the vanilla ESXi installer.  So, basically we can use Image Builder to take the ESXi software bundle (the installer), add and remove drivers or software in the form of VIBs to and from it, and then output our results to an image profile for use by Auto Deploy, or simply export everything to our own customized ISO file to install from disk.

So first, in order to add items to the ESXi installer we need to get the ESXi installer – pretty simple stuff thus far.  Don't get the ISO file, we need what is called an offline bundle, a zip file containing all of the information that would be in that ISO.  You should be able to pull that down from the same place in myvmware that you would get the ISO.

So, the first thing we need to do is add the ESXi Offline Bundle that we downloaded as a software depot.

Add-ESXSoftwareDepot c:\VCAP\update-from-esxi5.1-5.1_update01.zip

[screenshot: Add-EsxSoftwareDepot output]

So this will basically give us four image profiles.  An image profile is essentially a group of packages (VIBs).  This is what we are going to be using to export to ISOs and bundles.  To view the profiles available, use

Get-ESXImageProfile | Select Name

[screenshot: Get-EsxImageProfile output]

As we can see there are 4.  The one we are concerned with is "ESXi-5.1.0-20130402001-standard".  Now, instead of modifying the default profile we will instead clone it to one of our own.  Always good practice in case we end up having issues.

Now we can clone our profile to a new name using the following command

New-EsxImageProfile -CloneProfile ESXi-5.1.0-20130402001-standard -Name MikesProfile -Vendor mwpreston.net

[screenshot: New-EsxImageProfile clone output]

As you can see, running our Get-ESXImageProfile will now list our newly created profile.

So, here's where the magic starts to happen!  In this case I have a need to add a driver for an Intel NIC.  I've already downloaded the software depot for it, so just as I did for the ESXi software bundle I will add the Intel software bundle using the following command.

Add-EsxSoftwareDepot C:\VCAP\ixgbe-3.14.3.1-offline_bundle-1265488.zip

So at this point I need to look inside of these depots and find the actual name of the package (VIB) that I would like to add to my profile.  I already added all of the ESXi VIBs when I did the clone.  To find my Intel VIB I can run 'Get-EsxSoftwarePackage', however that will return everything.  Let's take an easier route and filter by vendor, since I know it is Intel.

Get-ESXSoftwarePackage | where {$_.Vendor -eq 'Intel'}

[screenshot: Get-EsxSoftwarePackage filtered by Intel]

So, now that we know the name of the package we wish to add to our Image Profile is net-ixgbe we can add it with the following command.

Add-EsxSoftwarePackage -ImageProfile MikesProfile -SoftwarePackage net-ixgbe

We can also remove packages with, guess what, Remove-EsxSoftwarePackage 🙂 (quick example below).  At this point our Image Profile has been created so we are left with only the task of exporting it – and honestly, this is the easiest part.
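Backing a package out of the profile looks like this (reusing the same assumed package name):

Remove-EsxSoftwarePackage -ImageProfile MikesProfile -SoftwarePackage net-ixgbe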

To export to an ISO

Export-EsxImageProfile -ImageProfile MikesProfile -ExportToIso -FilePath c:\VCAP\MyISO.iso

To export to a bundle – for use with Auto Deploy

Export-EsxImageProfile -ImageProfile MikesProfile -ExportToBundle -FilePath c:\VCAP\MyBundle.zip

And that's it!  Image Profiles checked off the list.  These are pretty simple to work with, providing you can remember all of the PowerCLI cmdlets.  Honestly, this feels like something that they might include on the exam as it is easily measurable.  Stay tuned.  I'll try and complete the process with Auto Deploy soon!

Why Orchestrate?

[image: orchestrate]

As you all can probably tell by reading my blog lately, I have gone head first down a path that led me directly to vCenter Orchestrator.  Honestly, since doing that I haven't looked back.  I've known about the application for quite some time but never could find a use case for it in my environment.  Sure, getting the same results every time is great, that's the most obvious reason to script anything, but I had already been achieving this with Perl, PowerCLI and PowerShell, so why orchestrate?

This is an attitude I had for quite some time.  I'll just keep plugging away at all of the tasks I need to do throughout my day, finding efficiencies here and there for things, scripting some, manually pounding away on the keyboard for others; no biggie!  Then something happened – and by something I mean our environment grew…substantially.  We for the most part doubled in size over the course of a few months and things started getting really crazy really fast.  Those daily tasks, or one-off things that I had been doing started to become a hold up to the rest of the team and the business.  Let's take a simple example of deploying a new application or VM into your environment…

Wait, I thought VMware was supposed to improve provisioning time?

Well it certainly has, I can deploy servers in a matter of minutes now, have them on the right network, with our base load of software – and even with some of the Software Defined Datacenter pieces implemented I can have security and compliance built right into this deployment process as well.  But, the fact of the matter is I still have a lot of other things I need to do in order to call my server or VM completely deployed.  That's where vCenter Orchestrator comes in.

So I'm secure, provisioned and have a base software load installed and configured, what else is there?

Backup/Replication/DR – Some products will point to a datastore and/or cluster as their target, which means this may already be set up for you when a new VM is deployed.  However, I don't have my backup solutions configured that way.  I like to add my VMs to jobs which are based on RPO, therefore this is something I need to do manually after the VM has been provisioned.

Monitoring/Reporting – Again, some products will automatically pick up new VMs and begin monitoring them.  I do have vCOPs set up, however there are many other tools I use to monitor specific application-level statistics and performance, which I need to go and set up manually after I deploy the VM.

Active Directory placement and group policy – For the Windows VMs I like these to be sitting in the proper OU after I deploy them, without this they will never receive the proper group policy – again, needs to be setup after the fact.

So how does vCO help with this?

vCenter Orchestrator by itself doesn't – but coupled with all the plug-ins available it becomes a pretty powerful tool.  If any of the services that provide you with those additional tasks have some sort of way to programmatically perform tasks such as an API, PowerShell cmdlets, SQL backends, etc – you should be able to find a vCO plug-in or simply use straight up JavaScript to interact with these programs and automate all that needs to be done.  In fact you could use the vCenter plug-in and design out the whole deployment process from start to finish using nothing but vCenter Orchestrator.  And for some icing on the cake, you can still initiate these workflows from directly inside the vSphere Web Client.

So this is just one small example of how I've been using vCenter Orchestrator.  Honestly, I'm finding more and more use cases for vCO everyday and reaping the benefits of orchestration and automation – which usually involve myself and a coffee watching scripts run 🙂  So, my question to you is…

Do you orchestrate?

vSphere USB Passthru and Autoconnect Devices and PowerCLI

[image: USB connector]

Wait!  I thought I had a UPS plugged into my host and set up as passthru to my VM already!  Why can't I see it now?  What happened?  Who moved that external drive I had connected to my Veeam console to seed an offsite backup?  Ever find yourself asking any of these questions…I certainly have!  Due to circumstances out of my control I have a few hosts that tend to be "out in the wild" – available and insecure, readily accessible to the hundreds of people walking by them each day.  From time to time someone trips over a cord, someone deliberately unplugs something, or equipment needs to be moved and gets plugged into different ports upon reconnection.

Not as many options in the GUI

As with most other products, using the GUI to configure something sometimes doesn't give you all the options that you need.  Essentially, when configuring USB passthru to a VM from within either the vSphere Client or the vSphere Web Client, the device needs to be plugged in and it gets assigned to the VM based on the host USB port it is connected to.  Again, in most cases this is fine, but in my situation I needed this device connected to the VM no matter what port it was plugged into.  It turned out, after reading some documentation around the vSphere API as well as having a great discussion with Luc Dekens on the VMTN forums, that there is indeed a way to do exactly what I needed to do.

Community to the rescue!

So, in the API reference for the VirtualUSBUSBBackingInfo object it states the following

To identify the USB device, you specify an autoconnect pattern for the deviceName. The virtual machine can connect to the USB device if the ESX server can find a USB device described by the autoconnect pattern. The autoconnect pattern consists of name:value pairs. You can use any combination of the following fields.

  • path – USB connection path on the host
  • pid – idProduct field in the USB device descriptor
  • vid – idVendor field in the USB device descriptor
  • hostId – unique ID for the host
  • speed – device speed (low, full, or high)

Perfect, this is exactly what I was looking for.  Basically, I can have the USB device autoconnect to the VM by using ANY combination of the above parameters as the deviceName.  First off, path is out of the question – it's what is going to change when the device is plugged into a different port.  So I decided to use pid, vid, and hostId.  Therefore, if a device with the specified product id and vendor id is plugged into a host with the specified hostId, it will automatically be passed through to the VM I assign it to!  Awesome!  One problem: I still don't have a clue what the pid, vid, and hostId are, nor do I know the PowerCLI syntax to add the device.

PIDS and VIDS and More…

So how do you find out the pid and vid of the device you want to add?  Well, there's a KB for that…kinda – KB1648 mentions how to do it.  Basically, go into the Windows registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\USB\ and search around and you'll find it 🙂  In my case I was using an APC SmartUPS 1500, which had a pid and vid of 0003 and 051D respectively.

The final piece of information we need is the hostId.  By hostId the documentation really means the actual hardware id that is stored within the BIOS of your host.  Accessing ExtensionData.Hardware.SystemInfo.Uuid will retrieve this for you.  As you will see in the script below there are certainly some formatting issues that need to be resolved with it, but it most certainly works 🙂

Putting it all together

Now that we have all the information we need it's time to fire up PowerCLI to get this task scripted (I needed to do it 40 times 🙂 ).  I'm not going to lie here, I had no idea how to do this with PowerCLI but by using Onyx I got a great start.

As you can see in the script below a few things happen.  First you need to specify some credentials and your vCenter server, as well as the pid, vid, the hostname of the host the device will be connected to, and the name of the VM you want to pass the device through to.  Lines 17 and 18 get some information in regards to the target VM and lines 21 and 22 get that hardware uuid from the host.  Lines 24 through 36 are the reformatting of the uuid that I described earlier.  You can see the format in the API reference.  The rest of the script does the actual setup of the USB device – this is the code, minus a few changes here and there, that Onyx spit out for me.

The Script

  1. Add-PSSnapin VMware.VIMAutomation.Core  
  2.    
  3. $vcserver = "vcenterserver"  
  4. $vcusername = "username"  
  5. $vcpassword = "password"  
  6.    
  7. $myhost = "Host that USB is attached to"  
  8. $myvm = "VM which to pass USB through to"  
  9.    
  10. $ppid = "PID of USB Device"  
  11. $vvid = "VID of USB Device"  
  12.    
  13. Connect-VIserver $vcserver -user $vcusername -pass $vcpassword  
  14.    
  15.    
  16. #get id of VM  
  17. $vm = get-vm $myvm  
  18. $vmid = $vm.ID  
  19.    
  20. #get host uuid from BIOS  
  21. $vmhost = get-vmhost $myhost  
  22. $vmhostId =  $vmhost.ExtensionData.Hardware.SystemInfo.Uuid  
  23.    
  24. #reformat vmhostID to the proper format for autoconnect string  
  25. $vmhostid = $vmhostid.Replace("-","")  
  26. $section1 = $vmhostid.substring(0,16)  
  27. $section2 = $vmhostid.substring(16)  
  28. $newsec1 = (&{for ($i = 0;$i -lt $section1.length;$i += 2)  
  29.    {  
  30.      $section1.substring($i,2)  
  31.    }}) -join '\ '  
  32. $newsec2 = (&{for ($i = 0;$i -lt $section2.length;$i += 2)  
  33.    {  
  34.      $section2.substring($i,2)  
  35.    }}) -join '\ '  
  36. $hostId = "$newsec1-$newsec2"  
  37.    
  38.    
  39. #create usb device and add it to the VM.  
  40. $spec = New-Object VMware.Vim.VirtualMachineConfigSpec  
  41. $spec.deviceChange = New-Object VMware.Vim.VirtualDeviceConfigSpec[] (1)  
  42. $spec.deviceChange[0] = New-Object VMware.Vim.VirtualDeviceConfigSpec  
  43. $spec.deviceChange[0].operation = "add"  
  44. $spec.deviceChange[0].device = New-Object VMware.Vim.VirtualUSB  
  45. $spec.deviceChange[0].device.key = -100  
  46. $spec.deviceChange[0].device.backing = New-Object VMware.Vim.VirtualUSBUSBBackingInfo  
  47. $spec.deviceChange[0].device.backing.deviceName = "pid:$ppid vid:$vvid hostId:$hostId"  
  48. $spec.deviceChange[0].device.connectable = New-Object VMware.Vim.VirtualDeviceConnectInfo  
  49. $spec.deviceChange[0].device.connectable.startConnected = $true  
  50. $spec.deviceChange[0].device.connectable.allowGuestControl = $false  
  51. $spec.deviceChange[0].device.connectable.connected = $true  
  52. $spec.deviceChange[0].device.connected = $false  
  53.    
  54. $_this = Get-View -Id "$vmid"  
  55. $_this.ReconfigVM_Task($spec)  
  56.    

So there you have it!  You can unplug and plug this USB device in to your heart's delight.  ESXi should pick up the device no matter what port it is plugged into and pass it on to your VM every time!  Certainly this isn't something that you will do every day, but for those who have hosts sitting out in the open, it may be a handy configuration to have set up in their environment.  As always, I'm not the greatest scripter in the world, so any comments, suggestions, improvements, concerns, thoughts are most definitely appreciated.