Monthly Archives: June 2015

#VFD5 Preview – VMTurbo

Once again it looks like I’m going to have to get on a plane and travel to the great US of A in order to see my fellow Toronto VMUG co-leader Eric Wright, who lives within a couple hours of where I’m sitting right now!  But that’s ok, because Eric will be bringing with him the VMTurbo presentation at Virtualization Field Day 5 in Boston!  Those who have heard Eric speak will know what I mean – he certainly has a way of keeping the audience interested and getting his point across – a couple of great qualities to have when presenting…

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

Anyways, it feels like we just got done having a look at VMTurbo at VFD4 in Austin, and here they are right back in front of us at VFD5 in Boston.  A lot has changed since January with both the company and their flagship product, Operations Manager.  They’ve kicked their TurboFest user groups into high gear, hosting meetings in San Francisco, London and Atlanta; they were named one of the best places to work by the Boston Business Journal; and Operations Manager 5.2 was released, bringing features such as QoS adherence, more support at the application level for MS SQL and Oracle, integration with Arista Networks to help make more “network-aware decisions” and, of course, the complete package now delivered through a SaaS offering in Amazon AWS.  So, yeah, they’ve been busy!

An economic look at your data center

If you haven’t had a look at Operations Manager you probably should.  VMTurbo takes a unique approach to monitoring and tuning your environment to ensure you reach what they like to call “Data Center Nirvana”.  Essentially they apply an economic model to your infrastructure – turning your data center into a supply chain.  By treating your resources – things like CPU, memory, and disk – as suppliers and your VMs as consumers, VMTurbo can apply economic formulas to your infrastructure, increasing the cost of resources when supply is sparse and decreasing it when supply is bountiful.  By doing so, Operations Manager can determine that while migrating a VM may make sense at eye level, the cost may be too high on the other host, and recommend leaving it be.  It’s an interesting way of looking at things and makes a lot of sense to me…
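To make the supply-chain idea a little more concrete, here’s a toy sketch of utilization-based pricing – my own illustration with made-up numbers and formulas, not VMTurbo’s actual math:

# Toy model of pricing resources by their utilization - the numbers
# and formulas here are invented purely for illustration.

def price(utilization):
    # Price climbs steeply as a resource nears saturation, so a
    # nearly-full host becomes a very expensive "supplier".
    return 1.0 / (1.0 - min(utilization, 0.99))

def should_migrate(current_util, target_util):
    # Only move a VM if the target host is a meaningfully cheaper
    # supplier - a small saving isn't worth the cost of the move.
    return price(target_util) < price(current_util) * 0.8

print(price(0.50))                 # 2.0  - supply is bountiful
print(price(0.90))                 # ~10  - supply is sparse
print(should_migrate(0.90, 0.60))  # True  - worth moving
print(should_migrate(0.65, 0.60))  # False - leave it be

That last “leave it be” case is exactly the behaviour described above – a migration that looks sensible at eye level but doesn’t pay for itself once resources carry a price.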

Now, there is certainly a lot more to what Operations Manager does, and I encourage you all to tune into VFD5 to learn all of it.  You can do so by heading over to the VFD5 page and watching the live stream, and you can keep up to date with all of my content here.  VMTurbo is a fast-growing company with a unique idea, so I’m sure they will have something mind-blowing for us come next Wednesday when they kick off all that is VFD5!

#VFD5 Preview–Ravello Systems

Ravello Systems have certainly had their fair share of buzz lately, and rightly so – the sheer fact that you can run a 64-bit VM, on top of a nested ESXi host, on top of their hypervisor (HVX), on either Amazon or Google Cloud is, to say the least – the bomb!

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

I’ve had the chance to work with Ravello during their nested ESXi beta along with a few other bloggers, and was blown away by the performance they provided while doing exactly the scenario described above.  I did a few posts on Ravello, one of which involved a vMotion from Amazon AWS to Google Cloud, if you’d like to check it out!  Needless to say I’m excited to see Ravello IRL at VFD5 on June 26 in Boston.  Also, I’ve heard through the grapevine that long-time Toronto VMUG attendee and friend Kyle Bassett will be part of the presentation – Kyle is a brilliant mind, so you won’t want to miss it!

A home lab replacement?

In a lot of ways I can get the performance that I need in order to replace my home lab!  That said, I’m nowhere near as extravagant when it comes to home labs as a lot of people in these communities.  When it comes down to it though, a lot of what I do within the lab is configuration validation, testing different setups, etc.  All of this is easily accomplished in Ravello!  In fact, in some ways I can do a lot more within Ravello than I can within my own home lab – stringing together two data centers, one in Google and one in Amazon, via VXLAN, for example!  For the most part I’m finding myself working more in cloud platforms than in my basement anymore.

Bells and whistles

I would be selling Ravello short if I just said they allowed you to run nested ESXi in Amazon – they have a lot of value add, bells and whistles so to speak that make the service what it is.

Firstly, they have what’s called an application – an application is essentially one or more VMs that perform some sort of function.  You could think of a couple of ESXi hosts, a vCenter Server and some sort of iSCSI storage appliance as an application.  Applications can be started and stopped as a whole unit, rather than each individual VM.

Secondly, they have blueprints.  We can think of a blueprint as a point-in-time snapshot of an application.  Basically, blueprints allow you to save a configuration of an application to your library, which you can then deploy to either another application or another cloud.  Think of a blueprint as a base install of your ESXi/vCenter setup – you know, before you go mangling inside of it.  If your original application ever breaks, or you’d like to explore new features without affecting your current setup, you can simply save your application as a blueprint and deploy a new instance of it.  One newly released feature is the Ravello Repo, which allows customers to share their blueprints with others, saving a lot of time when it comes to building up test and use cases.
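If it helps, here’s a little conceptual model of the application/blueprint workflow in plain Python – nothing to do with Ravello’s real API, just the idea of saving a point-in-time copy and stamping out fresh instances from it:

import copy

class Application:
    # An "application" groups VMs so they start and stop as one unit.
    def __init__(self, name, vms):
        self.name, self.vms, self.running = name, vms, False

    def start(self):
        self.running = True   # every member VM powers on together

def save_blueprint(app):
    # A blueprint is a point-in-time copy of the application's config.
    return copy.deepcopy(app.vms)

def deploy(name, blueprint):
    # Deploying stamps out a brand new application from the saved copy.
    return Application(name, copy.deepcopy(blueprint))

lab = Application("esxi-lab", ["esxi-01", "esxi-02", "vcenter", "iscsi"])
base = save_blueprint(lab)              # save before mangling the lab
redo = deploy("esxi-lab-take2", base)   # a clean instance, any time
redo.start()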

Thirdly, pricing!  Honestly, I’m not sure what hard costs I’ve incurred, as I’ve gotten 1000 CPU hours/month for free – and if you are a vExpert you can too, as they have just extended this offer to all vExperts – very generous!  Not a vExpert?  No problem – you can still get a free, fully functioning trial here, good for 14 days’ worth of all-you-can-eat cloud.  Although I’ve never seen my own pricing, I have looked at their pricing calculator – selecting 12 vCPUs, 20GB of RAM and a TB of storage comes out to around $1.32/hour, which to me is more than enough resources to get a small lab up and running, and more than affordable for what you get.  Plus you don’t deal with Amazon or Google at all – Ravello takes care of all of that.
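A bit of back-of-napkin math on those numbers (my own arithmetic, and I’m assuming the free allotment is counted in per-vCPU hours):

free_cpu_hours = 1000           # the monthly vExpert allotment
vcpus = 12                      # the lab sized in the calculator above
print(free_cpu_hours / vcpus)   # ~83 wall-clock hours of lab per month

rate = 1.32                     # $/hour for 12 vCPU / 20GB / 1TB
print(rate * 40)                # ~$53 for a 40-hour lab week, if paying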

What Ravello has in store for us on June 26 we don’t know, but I can assure you that it will be a treat to watch.  Speaking of watching, if you want to follow along with all the action you can do so by watching the live stream on the Tech Field Day page or on my VFD5 event page where all my content will live.

#VFD5 Preview – Scale Computing

Virtualization Field Day 5 in Boston will be Scale Computing’s fifth appearance at a Tech Field Day event, dating all the way back to VMworld 2012 when they launched their hyperconvergence solution, HC3.  Thinking about this is kind of funny really – picture the Scale Computing booth on the VMworld show floor – at the time they were a scale-out storage company, yet there they were launching their KVM-based hyperconvergence solution, which really has nothing to do with VMware at all!  One word – ballsy!

Either way, since then Scale has been promoting HC3, which targets the SMB market, and they have been doing a great job of it – I’ve seen them at nearly every event I’ve been to, big or small.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

So what is it?

We all know what hyperconvergence is, right?  It’s just so hot right now!  Scale Computing, just like the Nutanixes and SimpliVitys of the world, has combined compute, network, and storage into one box, allowing businesses to gain performance and agility by implementing their building-block type architecture.  Scale currently ships three different models of the HC3, differing in capacity and memory…


And the uniqueness?

In order to succeed in any market you really need to have something which sets you apart from the “other guys” – something which makes your offering so compelling that you just have to have it!  What’s Scale’s?  I would most definitely say their niche is really knowing their target market, which in turn puts HC3 at a very compelling price.

Scale has never once deviated from the market they say they serve.  They bring a hyperconverged, scalable platform to the SMB.  But price isn’t the only thing that helps them succeed in the SMB space.  They have really evaluated everything, from their interface, to ease of use, to the options they expose within their management software.  Basically, Scale provides the SMB with a solution to create and run VMs – no more, no less.  When I watched Scale at VFD4 I often found myself asking questions like “So is this it?  You just click create VM and you are done?  Where are all the options?”  The answer I got was “Yes, you are done; there are no other options.”  It’s simply a solution for the SMB admin, who probably has little to no time to mess around with anything or learn anything new – it lets them get in, create a VM, and get out.

Now, I’d be selling them a little short if I didn’t mention that there are other options – they have the ability to take snapshots, to clone VMs, and to set up replication to another Scale cluster, all implemented in the same easy-to-use, very-little-setup kind of way as everything else.  They also have all the “enterprisey” features – things like HA, live migration, and thin provisioning – however these are all enabled by default and require no setup at all.

I’m very excited to see what Scale will be talking about at VFD5.  Their presentation was honestly one of my favorites at VFD4 (and that’s not just the shot of bourbon talking).  I’m interested to see whether they stay true to their SMB focus when talking about any future releases – I believe that Scale really knowing their target market plays a big part in the success they have been having.  If you want to follow along, be sure to watch the live stream over at the VFD5 page – I should have it up and running, along with all of my VFD5-related content, on this page as well.  I can say that their CTO, Jason Collier, is a great speaker, and it will be an entertaining two hours to say the least!

#VFD5 Preview–PernixData

I’ve had the pleasure of seeing PernixData a number of times, both at our local Toronto VMUGs and at VMworld.  Also, I have a couple of close friends working for Pernix, so I’m very familiar with the solutions they currently offer.  One interesting thing about Pernix is that they have a bit of a history of releasing new features and enhancements at Tech Field Day events (see their Storage Field Day 5 presentations), so I’m definitely looking forward to seeing them on June 24th in Boston.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

So what do they do?

PernixData in its simplest form is a server-side cache play.  Their software, FVP, essentially allows you to accelerate both reads and writes utilizing server components – both RAM and SSD drives.  Basically they sit in the middle of your data path, between the hypervisor sending the I/O and the storage array receiving it.  This allows your server components to act as a cache for your storage array – and since they sit right next to all of your compute, you can imagine the efficiency and performance benefits FVP provides.

The first thing that comes to mind in looking at all of this is that the cache – the SSD and RAM – is not shared storage, so what happens when a host decides to take a walk and brings all of that non-committed write cache with it?  Because of situations just like this, Pernix replicates any writes across all nodes (or the nodes you choose) in your FVP cluster before acknowledging the write back to the VM – allowing for host failure scenarios and ensuring that your writes are safely written back to your storage array.  All this while still supporting advanced vSphere features such as HA and DRS.
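Here’s a rough sketch of that write path as I understand it – my own illustration of write-back caching with peer replication, not PernixData’s code:

class FVPNode:
    def __init__(self, name):
        self.name = name
        self.write_cache = []   # non-committed writes held in RAM/SSD

def cached_write(block, local, peers, copies=1):
    # Land the write locally, then mirror it to peer nodes *before*
    # acknowledging back to the VM - that way a dead host can't take
    # the only copy of uncommitted data with it.
    local.write_cache.append(block)
    for peer in peers[:copies]:
        peer.write_cache.append(block)
    return "ACK"   # the VM sees the write complete at cache speed

host_a, host_b, host_c = FVPNode("a"), FVPNode("b"), FVPNode("c")
cached_write("block-42", host_a, [host_b, host_c], copies=1)
# "block-42" now lives on two hosts; the array gets it written back later.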

So is server-side cache a band-aid?

I’ve heard this term a lot in the industry – the claim that server-side caching is just a band-aid for the real problem: your underlying storage.  But when I hear this I ask myself – if Pernix and other companies can deliver a solution that drives enough IOPS and enough performance to successfully and efficiently run my environment, do I really care that my underlying storage isn’t doing that on its own?  Honestly, if no one is complaining and everything is running up to my expectations, I feel like it’s a win-win – not a band-aid.

Pernix definitely has some awesome innovation in their software – FVP covers all the angles when it comes to providing that fault-tolerant, mirrored, read-and-write cache for your hosts.  You can enable caching at a per-datastore or per-VM level – allowing you to accelerate only your most crucial workloads – and FVP now supports not just block storage, but NFS as well!  I have no idea what Pernix has in store for us at VFD5, but you can bet it will be pretty awesome!  Once again, you can tune into all the action by watching the live stream on the VFD5 event page – and all my content, along with the live stream, will also be on my VFD5 page.

#VFD5 Preview – NexGen

Alright, here’s another company presenting at VFD5 in Boston that I recognize but know very little about!  Thankfully the Stanley Cup playoffs are done and I now have a little extra room in my brain to take in all the info that will be thrown at us.  Anyways, I started to do a little digging on NexGen and oh boy, what a story they have!  Stephen Foskett has a great article on his blog about the journey NexGen has taken – it’s pretty crazy!  Certainly read Stephen’s article, but I’ll try to summarize the craziness as best I can…

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

Basically, a couple of the LeftHand founders got together and founded NexGen – ok, this story doesn’t seem all that crazy so far.  Well, after a few years Fusion-io came in with their wallets open and acquired NexGen – again, not a real crazy spin on a story!  Moving on, we all know that SanDisk walked in and acquired Fusion-io, and with that got NexGen.  Then, the next thing you know, SanDisk spun NexGen out on their own, putting them right back where they started!  This all just seems wild to me!

So where do they stand today?

NexGen is a storage company – a storage company offering a hybrid flash array with software that helps customers align their business practices with their storage by prioritizing the data they store.  So what does that really mean?  Basically it comes down to QoS and service levels.  NexGen customers can use these two concepts to define the performance, availability, and protection of their data by specifying the IOPS, throughput and latency they need for each and every application.  Depending on the service levels assigned to a workload, NexGen can borrow IOPS from a lower-tiered service in order to meet the QoS defined on a business-critical application.
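Here’s a toy illustration of what that borrowing might look like – made-up numbers and a deliberately naive algorithm of my own, not NexGen’s actual QoS engine:

# Each tier has a provisioned IOPS target and a current demand.
tiers = {
    "mission-critical": {"target_iops": 20000, "demand": 24000},
    "business":         {"target_iops": 10000, "demand": 6000},
    "non-critical":     {"target_iops": 5000,  "demand": 2000},
}

def rebalance(tiers):
    # A starved top tier borrows unused headroom from lower tiers.
    mc = tiers["mission-critical"]
    shortfall = max(0, mc["demand"] - mc["target_iops"])
    for name in ("non-critical", "business"):
        spare = max(0, tiers[name]["target_iops"] - tiers[name]["demand"])
        take = min(shortfall, spare)
        tiers[name]["target_iops"] -= take
        mc["target_iops"] += take
        shortfall -= take

rebalance(tiers)
print(tiers["mission-critical"]["target_iops"])  # 24000 - QoS met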

Another unique feature of NexGen Storage is the way they use flash and SSD.  Most arrays place their flash behind some sort of RAID controller, whereas NexGen utilizes the PCIe bus to access their flash, providing a redundant, high-speed, low-latency caching mechanism for both reads and writes.

There are certainly a lot more bells and whistles within the NexGen arrays, and a much bigger story to be told here.  The way NexGen is utilizing flash within the array is definitely piquing my interest, but honestly, I’m even more interested in the story of the company and how all those acquisitions and spin-offs have shaped them.  I’m sure they will address both at VFD5, and believe me, there will be more posts around NexGen and their offerings.  If you want to follow along during the VFD5 presentations you can see them live both on the official VFD5 event page and on my VFD5 event page, where all my content will be posted.

#VFD5 Preview – Rubrik

There has been much buzz about Rubrik over the last few weeks, with them going GA and coming up with, oh, you know, a cool $41 million in Series B funding.  Certainly if you hadn’t heard of them before, you can probably recognize their name now!  I, for one, had not looked at their solutions at all – I’d heard the name but never given it a look!  That will change come June 25th at Virtualization Field Day 5, when Rubrik takes the stage to deep dive into what they dub “the world’s first converged data management platform”.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

So what exactly is a data management platform?

It’s most certainly a fancy name, but there’s much more to it.  In simple terms you can think of the Rubrik appliance (Brik) as a backup appliance – a backup appliance architected in such a way that it can scale to thousands of nodes, depending on the amount of data you are looking to protect.  Currently they offer the r330, a 3-node appliance with 10TB of disk, and the r340, a 4-node appliance with 15TB of disk.

Wait – did you say backup?

Sure, there are a lot of players in the backup space.  We have our traditional players that have seen it all – companies like Symantec and EMC come to mind.  Then virtualization came along and we started to see backup solutions purpose-built for virtualization – Veeam, Unitrends, and Trilead are near the top of that list.  So with all of these companies still at play within the data center backup space, is there room for one more?  Can Rubrik differentiate themselves from the others?

So what makes Rubrik unique?

Appliance driven – With the exception of Unitrends, I don’t see many backup vendors coming in the form of a full appliance.  Essentially what Rubrik has done is take the software and hardware requirements of their backup solution and deliver them in a 2U scalable appliance architecture.  Speaking of scale, Rubrik’s building-block architecture allows all tasks and operations to be run on any node within the cluster – therefore, the nodes you add don’t just expand capacity, they should also increase performance and availability as well.

Global File Search – This one is a big feature in my opinion.  There have been countless times where someone I support has come up to me looking for a file to be restored but can’t remember where they saved it.  “I just clicked it from my recent documents,” they normally say.  Rubrik has a file search capability that spans all of your VMs and actually incorporates autocomplete functionality – a little like Google for your backups (there’s a small sketch of the idea after this list).

Multi-Tiered Storage – Man!  Some companies are just getting around to incorporating some kind of auto-tiering in their production storage – Rubrik is doing it in your backup storage.  What this does is increase efficiency and speed.  All data sent to the Rubrik appliance enters through a flash tier – and we all know the benefits of flash.  The flash tier also provides the basis for the global file search magic, as all metadata is stored on SSD as well.

Cloud Integrated – Well, Amazon S3 anyways.  Users are able to choose where backups are located, whether that be on premises or inside Amazon!  A great solution for any of those backups that you are required to keep long-term but seldom access!
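That global file search is the feature I keep coming back to, so here’s a tiny sketch of the idea – a prefix-searchable filename index spanning every VM’s backups.  This is just my illustration of the concept, not Rubrik’s implementation:

from collections import defaultdict

index = defaultdict(list)   # filename -> [(vm, full path)]

def ingest(vm, path):
    # As backups land, index each file name against its VM and path.
    name = path.rsplit("/", 1)[-1].lower()
    index[name].append((vm, path))

def autocomplete(prefix):
    # Match any indexed filename starting with what the user typed,
    # across every VM that's ever been backed up.
    prefix = prefix.lower()
    return {n: hits for n, hits in index.items() if n.startswith(prefix)}

ingest("fileserver01", "/shares/finance/budget2015.xlsx")
ingest("desktop-jane", "/home/jane/Documents/budget2015.xlsx")
print(autocomplete("bud"))   # both copies, whichever VM they were on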

I mentioned earlier that I don’t know a lot about Rubrik – in fact, all that I know is what I’ve written in this blog post!  The buzz surrounding Rubrik has been nothing short of amazing, so I’m excited to see what they have to offer and what separates them from the already established players in the market!  On June 25th @ 10:30 we will find out.  You too can watch the live stream on the VFD5 event page or on my VFD5 event page, where all of my content and blogs about the show will be posted.

Test driving vCloud Air On-Demand–Part 2

In part 1 of my vCloud Air test drive we toured the vCloud Air UI and went over the steps it took to get a VM up and running in the cloud.  This is all great, except for the fact that our VM had no connection to the internet – nor did we have any way of accessing our VM outside of the default console that vCloud Air provides.  This section will deal with just that – we will explore the NAT and firewall rules that need to be set up in order to give our VM access to the internet, as well as port forward our public IP in order to provide SSH access into resources within vCloud Air.

Just a note – if you want to try out vCloud Air On-Demand on your own you can do so by following this URL and using the promo code Influencer2015.  This will get you $500 in service credits to burn in 90 days – more than enough credit to give it a valid test.  This code and URL expire June 30, 2015 so be sure to register ASAP – also, it’s valid only for new MyVMware accounts, meaning you will most likely need to register under a different email than you currently use.

Just as I did in the first post in this series, part 2 will have a video and an accompanying blog post.  The video, embedded below, and the blog post both accomplish the same result – so hopefully I’m covering off everyone’s content type of choice!

Connecting our cloud to the internet

As we saw in part 1, our VM was not connected to the internet by default.  Thankfully, accomplishing this is not that hard – it’s even automated to a certain extent inside of vCloud Air (notice a recurring pattern here?) – it basically comes down to creating the NAT and firewall rules we need in order to allow communication out.

Speaking of NAT, let’s get a little of the vCloud Air terminology straight before we continue.  First up is the firewall – the vCloud Air firewall is essentially closed by default, meaning all traffic both in and out of the public IP is blocked.  In order to change this, we create what are called Firewall Exceptions.  NAT, in turn, is always interpreted relative to the internal vCloud Air network, as follows:

  • SNAT (Source NAT) – deals with traffic originating from within the vCloud Air network (source) destined for another network (i.e. the internet).
  • DNAT (Destination NAT) – deals with traffic originating from another network (i.e. the internet) destined for the vCloud Air network (destination).
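As a concrete example, using our internal network from this lab (192.168.109.0/24) and a made-up public IP of 203.0.113.10, the two rule types end up looking something like this:

  • SNAT example – original source 192.168.109.0/24, translated source 203.0.113.10 (our VMs reach the internet appearing as the public IP)
  • DNAT example – original destination 203.0.113.10:22, translated destination 192.168.109.2:22 (SSH from the internet lands on our VM)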

Now, before we can get into any NATing or firewalling we need a public IP address to NAT out of.  This is done by browsing to your gateway configuration and then the ‘Public IPs’ tab.  The actual adding of the IP is done by clicking ‘Add IP Address’ – tricky, huh?

Once we have our public IP set up we can do one of two things – create the SNAT and firewall exceptions manually, or right-click our VM and select ‘Connect to Internet’.  The latter option will automatically create the SNAT and firewall exceptions we need in order to allow outbound access from our newly created VM.

[Screenshot: the ‘Connect to Internet’ option]

Just a note here as well – I found it best to simply let the UI report back that everything completed successfully – not just here, but when doing things such as deleting VMs and creating data centers.  Sometimes navigating away from a page while a task was in process caused the task either to take an extremely long time or to force me to log back into the UI.  Anyways, after the task completes you should see the following rules created.

[Screenshot: SNAT rules created for internet access]

[Screenshot: firewall rules created for internet access]

Now, if you are wondering how to manually create a firewall rule, don’t worry, because we are going to do that as well.  Although rules have been created to allow HTTP/HTTPS/DNS out of our network, nothing has been created for ICMP, or ping.  This is a commonly used method of testing connectivity, so I’m not sure why it isn’t included in the ‘Connect to Internet’ workflow.  Either way, it gives us the opportunity to go through the process.  Simply clicking the Add button allows us to configure the following rule, which allows ping out not only from our VM (109.2) but from our whole 109.0 network out to our external IP (shown below).

[Screenshot: firewall rule allowing outbound ICMP]

At this point we are almost there in terms of access to the internet.  vCloud Air has statically assigned us an IP from our default IP pool, however it hasn’t done any configuration in regards to DNS – so if you were to try and ping google.ca at this point, your VM would have no way of resolving it.  If you need to add some name servers to your Ubuntu VM’s interface you can do so by running the following commands (as root).

echo "dns-nameservers 8.8.8.8 8.8.4.4" >> /etc/network/interfaces

ifdown eth0 && ifup eth0

At this point, we should be successful in pinging google.ca or any other network address located on the internet – we have properly connected our cloud to the internet.

Connecting the internet to our cloud

Remember back in part 1 when I was griping a bit about not being able to send CTRL+ commands to my VM through the default console?  Well, one way around this is to configure and allow SSH through our firewall, which lets me use PuTTY or any other SSH client and issue CTRL+ commands to my heart’s delight.  Keep in mind this scenario also works for Windows VMs and RDP – simply use port 3389, or whichever port you desire.

So, since our SSH traffic is going to originate from the internet, destined for our vCloud network, we first need to create a DNAT rule in order to forward port 22 from our external IP to our internal Ubuntu server (note: the default Ubuntu image is already listening on port 22 for SSH).  The setup of the DNAT rule is shown below – remember to wait after clicking ‘Finish’ until vCloud Air reports success back.

[Screenshot: DNAT rule forwarding port 22 to the Ubuntu VM]

Even though we have our DNAT set up, we still need to allow access on port 22 on our external IP through the firewall – remember, everything is blocked by default, so the following firewall exception needs to be created.  I’ve left the source IP and port as Any/Any, essentially allowing access from anywhere – if you had a specific IP that you would always be connecting from, you could be a bit more secure and use that.  For my testing though, I don’t care so much…

[Screenshot: firewall rule allowing inbound SSH]

And there you have it!  After waiting for the rules to apply (just wait) you should now be able to open up PuTTY or your favorite SSH client, enter your external (public) IP, and log in to your VM.  Any other services and ports you want to open up?  Simply repeat the preceding steps using whichever port you desire.

Although writing all this down after the fact makes it seem pretty self-explanatory and easy, to be honest I struggled a bit during the networking portion.  Not to say it isn’t intuitive, but with everything else being a breeze within the vCloud Air UI, I would’ve thought there would be some pre-built workflows around opening up services, given the number of steps it takes – even if just for common items such as SSH, RDP, or WWW.  That said, it’s possible that if I had RTFM’d it might have been a bit easier – but I like to jump right in; it helps me evaluate the usability.

All in all, VMware has a great service in vCloud Air On-Demand.  It’s a piece that was originally missing from their cloud offerings.  Having a pay-as-you-go service where you don’t need to fork out long-term commitments or budget is key, especially when you think in terms of timely workloads, dev/test, etc.  In the end vCloud Air has impressed me – a clean UI and an easy-to-use solution that doesn’t break the bank!

Again, if you want to test out vCloud Air On-Demand for yourself, go ahead and get a new MyVMware account and sign up at this URL using the promo code Influencer2015.  I know I’ve mentioned this a lot over the past two blog posts, but it will get you $500 in service credits, and that’s more than enough to form a solid judgment on the service.  Honestly, who doesn’t want free things?  Thanks for reading/watching.

Test driving vCloud Air On-Demand–Part 1


As a vExpert I tend to get a number of opportunities to evaluate different pieces of software and platforms – and as much as I’d like to look at every one, I just don’t have the time to do so.  That said, when the vCloud team reached out with an offer to have a go at their vCloud Air On-Demand service, I rearranged some of my priorities – partly because cloud is interesting to me, but mostly because they also gave me the chance to let my readers have the same opportunity!  VMware offers everyone $300 in service credits to evaluate vCloud Air On-Demand, but they gave me an extra $200 – and a promotional code to give you guys the same!  So, if you register using this exact link and use the promo code Influencer2015, you too can have a total of $500 in service credits to play with.  Just a note – you have 90 days to use up your credits before they expire – oh, and you need to register before June 30th, 2015, so hurry!  Another caveat: this offer is valid for NEW MyVMware accounts only – so, ummm, uh, yeah, find another email to register with!

On to the evaluation

So I’ve recorded a couple of videos about what I’ve done inside of vCloud Air.  The first one, attached just below this paragraph, takes us through a little tour of the vCloud Air web UI and shows the steps to get our first VM up and running.

[Video: a tour of the vCloud Air web UI and creating our first VM]

Now, if you don’t feel like listening to my Canadian-accented, cold-infested, whispering (I had a house full of sleeping kids) voice, I’ve written the process down as well.  Hey, we all learn in different ways, right?  Some people like videos and others can’t stand them – so here’s both.

Judging a book by its cover

A simple, clean interface can go a long way when it comes to people’s reactions to and opinions of the software they use.  The vCloud Air team certainly kept this in mind when developing the UI supporting their on-demand service.  It’s very clean – showing only the basic information one would really need to get a handle on their virtual data centers and VMs.  If you have ever used vCloud Director (vCD) you know just how many different tabs and options are available within VMware’s cloud offering – there are a ton of them, and I find the vCD interface cumbersome and hard to use.  It’s nice to see that VMware has taken some of the basic functionality that vCD provides and abstracted it into the vCloud Air UI – allowing customers to perform common tasks such as power operations, network setup, and VM creation/snapshotting without ever having to set foot inside vCD.

Let’s Cloud Bro!

Let’s get to it!  The first step after logging into the vCloud Air portal is to create a virtual data center.  Before we do that though, we have to determine exactly what region we want to work in.  As shown below, we have some options as to where our virtual data center will be located – I’ve chosen Virginia for some of my testing – but if you are following along, choose one close to you.

[Screenshot: choosing a region for the virtual data center]

To create our virtual data center, select the + icon next to the Virtual Data Centers label.  As you can see, there isn’t a whole lot of configuration required in this step – simply a name.  You can also see that each VDC allows for 50 VMs containing 130 GHz of CPU, 100GB of RAM, and 2TB each of SSD-accelerated and standard storage.

[Screenshot: creating the virtual data center]

At this point automation kicks in and our virtual data center is created.  Once it’s complete we can see that a number of components have been created and configured for us by default.  Selecting our VDC from the left-hand menu and clicking on the ‘Networks’ tab, we can see a number of these pre-configured items, such as our public gateway IP address, the default gateway IP for our internal network, and the IP range that will be handed out to VMs within our VDC.  We can also create new networks directly within the vCloud Air UI; however, if you need to delve a little deeper into the services offered, you can do so by using the ‘Manage in vCloud Director’ link in the top right-hand corner.  This opens an already-authenticated vCloud Director session where you can manage your networks and add services such as DHCP, load balancing, etc. – essentially all of the functionality you would normally have when running a full instance of vCD.

[Screenshot: the Networks tab]

In order to create firewall rules, NAT rules, and assign an accessible public IP to our gateway, we need to select our default gateway under the ‘Gateways’ tab.  Again, we can break out into a vCloud Director window here as well.  We will come back to this section in part 2 of this series to connect our VM to the internet and grant SSH access, but for now it’s just good to know where this information is located.

[Screenshot: the Gateways tab]

Speaking of VMs, let’s get on with the show and get our first VM created.  This is done on the ‘Virtual Machines’ tab (use the giant “Create your first virtual machine” button).  When creating a VM you can select from the catalog provided by VMware, or create your own catalog, upload an ISO, and build your VM from scratch.  For the sake of this evaluation I just used the 32-bit Ubuntu server provided by VMware.

[Screenshot: creating a VM from the catalog]

After selecting your VM from the catalog you can then name it and customize the CPU/memory/storage to your choosing.  vCloud Air will default these settings to their preferred amounts, but you can change them using their respective sliders.  What’s nice about this screen is that you can see how a simple CPU, RAM, or storage change affects your price per hour.  In my case, this Ubuntu VM with 1 CPU, 2GB RAM and 10GB of accelerated storage is a mere 5 cents/hour – not bad!

[Screenshot: customizing the VM’s hardware and seeing the hourly price]
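For fun, a quick back-of-napkin sketch of how far the promo credit stretches at that rate (straight arithmetic, using the numbers above):

rate = 0.05           # $/hour for this 1 vCPU / 2GB / 10GB VM
credit = 500.00       # the Influencer2015 service credit
print(credit / rate)        # 10000.0 hours of runtime...
print(credit / rate / 24)   # ...about 416 days - though credits expire in 90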

Once the VM has been created it will be listed under the Virtual Machines tab.  Right-clicking the VM brings up a context menu showing all the available actions, including power options, console access, snapshotting, etc.

[Screenshot: the VM context menu]

Clicking on the VM’s name within our list brings us into more detail about that VM – the ‘Resource Usage’ tab showing estimated costs, the ‘Settings’ tab showing various configurable items, and the ‘Networks’ tab showing the networking information for the VM.  As shown below, our new Ubuntu VM has claimed the first address within our IP pool – 192.168.109.2.

[Screenshot: the VM’s network information]

Another important note about the ‘Settings’ tab is the ‘Guest OS Password’ section.  In order to log in to our newly created VM we will need the root password, which can be revealed by clicking ‘show initial password’.  By default, all the VMs from the catalog provided by vCloud Air will prompt you to change the default password after first login.  Let’s make note of this password and go ahead and open a console to change it.

[Screenshot: revealing and changing the guest OS password]

As we can see below, the console provided by the vCloud Air UI is pretty barebones – giving us simply a way to provide input to the VM and a button to send CTRL+ALT+DEL.  I found this a little frustrating at times, especially since I was using a Linux VM.  There were times when I needed to send a CTRL+C to the VM but had no way of doing so; instead I had to proceed with a complete reboot of the VM.  An on-screen keyboard might be a better solution here.

[Screenshot: the barebones VM console]

At this point we are done with part 1 of my test drive.  My goal here was simply to get a VM up and running, and we’ve certainly accomplished that.  So far my opinion of vCloud Air On-Demand is a good one – aside from the little hiccup of trying to send CTRL+ commands to the VM through the built-in console, everything else has been a breeze.  I really like the UI – how they have taken some of the complexity involved with performing certain tasks within vCD and provided a one-click, automated solution without ever having to touch vCD – yet still giving users the option to drop into vCD if needed.  In part 2 we will have a look at setting up some of the networking and firewalling in our virtual data center – things will get a bit more complicated as we explore the NAT and firewall rules inside our gateway.

If you have any experience or thoughts about vCloud Air I’d love to hear them – leave a comment below or find me on Twitter.  And as mentioned before, if you want to evaluate vCloud Air On-Demand yourself, go ahead and register here, using the Influencer2015 promotional code to get yourself $500.00 in service credits.

Don’t forget to read Test Driving vCloud Air On-Demand Part 2

#VFD5 Preview – OneCloud

Am I looking forward to the presentation at Virtualization Field Day 5 from OneCloud?  I have no idea!  Why?  Well, here is a company that I know absolutely nothing about!  I can’t remember ever coming across OneCloud in any of my journeys or conferences!  Honestly, I think they are the only company presenting at VFD5 that I have absolutely no clue about what they do…

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

That will certainly change fast

OneCloud will present at VFD5 on June 24th at 1:00 PM, where I’m sure we will all be enlightened a little more on the solutions they provide.  That said, I don’t like going in cold, knowing nothing about someone – thus, this preview blog post will at least help me understand a little bit about what OneCloud has to offer…

So let’s start from the ground up.  OneCloud is essentially a management platform for a hybrid cloud play.  Their core technology, the Automated Cloud Engine (ACE), is the base upon which they provide other services.  From what I can tell, ACE essentially facilitates the discovery of your on-premises data center, taking into account all of your VMs, physical storage and networking information.  From here, ACE can take different business objectives and transform them into API calls in order to essentially replicate all of your infrastructure into the public cloud – for now, it appears that just Amazon’s AWS is supported.

The service running on top of ACE is OneCloud Recovery.  OneCloud Recovery allows organizations to implement a disaster recovery or business continuity solution with the public cloud as the primary target – skipping the costs and complexity of implementing a second or third site on premises.


So here is how it all happens, from start to finish – OneCloud is deployed into your environment via a virtual appliance, and another instance is deployed into Amazon.  From there it auto-discovers your environment: your networking setup, storage configurations, data and applications are all tied together, and somewhat of a blueprint of your environment is created.  You then use their policy engine to apply RTO and RPO objectives to your applications.  OneCloud will then provision a fully functioning virtual data center in Amazon – one that mirrors your environment in terms of networking and configuration.  OneCloud not only duplicates your environment into Amazon, it also optimizes both your compute and storage to minimize costs – meaning it will scale down on CPU where it believes it can and place your data onto the most cost-effective storage.  Once your data is there, OneCloud performs ongoing replication in order to meet the RPO you have selected.  From there it’s just a matter of performing your normal DR tests and engaging in any failover (and failback) operations.
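To illustrate the policy engine piece, here’s a trivial sketch of how an RPO objective might translate into a replication schedule – my invention entirely, not OneCloud’s engine:

def replication_interval(rpo_minutes, safety_factor=0.5):
    # Replicate twice as often as the RPO strictly demands, so one
    # missed cycle doesn't immediately blow the objective.
    return rpo_minutes * safety_factor

policies = {"erp": 15, "fileserver": 240, "dev-lab": 1440}  # RPOs in minutes
for app, rpo in policies.items():
    print(app, "-> replicate every", replication_interval(rpo), "minutes")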

OneCloud seems to have some interesting technology and I’m looking forward to learning more at VFD5.  Some questions that come to mind: How do they compare to VMware’s vCloud Air DR services?  Do they plan on expanding to other public clouds such as Google, Azure, or vCloud Air?  With a strong software base in ACE, do they plan on moving beyond just the DR/BC realm – things such as DevOps and public cloud labs come to mind?  I really like how they are abstracting away what can be some very complicated API calls to Amazon – any time a company provides a solution built on simplicity it’s a good thing, but especially so when dealing with the complex networking and configuration of public cloud and disaster recovery.  If you would like to learn more about OneCloud with me, you can do so by watching the live stream on the VFD5 event page.  That stream, along with any other content created by me, will be posted on my VFD5 event page as well.

#VFD5 Preview–DataGravity

Let’s set the stage here!  We’ve got Paula Long – yes, the same Paula Long that co-founded EqualLogic – yes, the same EqualLogic that Dell purchased in 2008 for $1.4 billion.  We have John Joseph – another long-time (as long as you can get in startups) EqualLogic member!  These two got together to execute on an idea, hired David Siles, a long-term member of the senior leadership team at Veeam, to be their CTO, and then, on Tuesday, August 19th, 2014 at approximately 12:01 am, weighing in at 85 lbs and 26.75” tall, DataGravity was born.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

DataGravity will present at Virtualization Field Day 5 in Boston on June 25th and I could not be more excited to hear what they have to say.  I’ve spoken with them before, briefly, at the craziness that is VMworld – and honestly, the booth was so busy with people wanting to get in to see the new baby that I couldn’t stay long – so a couple of hours with them is long overdue.

Just another storage startup?

Technically yes and technically no!  In terms of the “technically yes”: DataGravity is a storage array!  They are your primary storage!  They can provide storage to your ESXi hosts not only through traditional NFS mounts and iSCSI targets, but also through a built-in VM-aware storage provider – allowing you to skip the whole LUN-provisioning exercise and treat your VMs as first-class citizens living on the array!  VM-awareness of course makes it easier to perform things like monitoring, data protection and provisioning.  That said, haven’t we seen all this before?  Isn’t the market full of this?

Those questions lead me to the “technically no” part of my answer!  Sure, they do the primary storage, and they have their flash piece!  If this blog post ended here they would certainly be just another storage startup – but it doesn’t!  DataGravity’s differentiator, in my opinion, is the way they split their storage nodes, and the unique functionality those nodes provide!

Not just another storage startup!

I’m not going to go too deep into how DataGravity works, partly because they are going to jam two hours of awesomeness into my brain at the end of the month, so I’ll save it for then, and partly because I don’t really know how it all works…yet.

The main thing I get is that they “optimize, protect, track, and analyze data as it’s stored” – their words.  My words – it does more than just primary storage, with the sweet spot being the analysis.  Basically the primary storage is just that – primary storage – but as data comes in, it’s also stored on a secondary node.  This node can be used for the obvious, data protection, but also for analysis.  So think of it this way – it’s easy today to see who created a certain file, but do we have visibility into who has modified that file over time, who else has read it, where else it might be stored, or what other files that person has created?  DataGravity gives us this functionality – and not just at a per-VM level, at a complete array level!  And all of this analysis and querying runs on the secondary storage node, leaving production to do production-like things.  Essentially it’s like Google for your storage array!
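To make the kind of questions that secondary node can answer a bit more tangible, here’s a toy file-activity ledger – my sketch of the concept, not how DataGravity actually implements it:

from collections import defaultdict

activity = defaultdict(list)   # file path -> [(user, action)]

def record(user, action, path):
    # Every create/read/modify gets tracked alongside the data itself.
    activity[path].append((user, action))

record("alice", "create", "/vm12/docs/roadmap.docx")
record("bob",   "read",   "/vm12/docs/roadmap.docx")
record("carol", "modify", "/vm12/docs/roadmap.docx")

# Who has touched this file over time, and how?
print(activity["/vm12/docs/roadmap.docx"])

# Everything alice has created, across the whole array:
print([f for f, ev in activity.items() if ("alice", "create") in ev])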

For now that’s all I have to give you, but expect a deeper post on DataGravity at the end of June or in early July, once I’ve heard what they have to say at VFD5.  Don’t forget, if you want to join in on the Virtualization Field Day 5 action you can do so by watching the live stream and following along with the #VFD5 hashtag on Twitter!  And just a reminder – I’ll try to have the live stream and any event-related content on my VFD5 landing page here as well!
