Recommendation for a large scale VM machine. More Questions.

Our "pub" where you can post about things completely Off Topic or about non-silent PC issues.

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

Post Reply
aristide1
*Lifetime Patron*
Posts: 4284
Joined: Fri Apr 04, 2003 6:21 pm
Location: Undisclosed but sober in US

Recommendation for a large scale VM machine. More Questions.

Post by aristide1 » Fri Jun 15, 2012 1:34 pm

Looking at configuring a VM PC capable of at least 20 VM sessions simultaneously. The consensus on the VMware forums favors no more than 4 VMs per core. Some say up to 10 is reasonable, but I find that excessive. People I know say their laptop has trouble running 2. Total RAM apparently should equal the sum of all RAM allocated to the VMs, though frankly I would just get 64GB and be done with it, no point monkeying with 48. HDs are a separate issue. This would be running VMware ESXi.

Any suggestions?

UPDATE - The hardware we managed to configure dropped in price and we've been able to buy more. We're going to have 128GB of RAM, and frankly VMware pricing is becoming a concern, per my last post below.

Using Sandy Bridge 26xx CPUs we're guessing we need about 1GHz per VM, meaning a 2.5GHz core will support 2.5 VMs, ergo an 8-core processor will handle 20 VMs rather nicely. We're going to have 2 26xx physical CPUs, and each socket will get 64GB of RAM. 40 VMs total, though it sounds like 50 should not be a problem. No chance I'll be able to borrow it for folding.
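For anyone who wants to play with these numbers, here's a rough Python sketch of the arithmetic (the 1GHz-per-VM figure and the hypervisor overhead are my assumptions, not VMware guidance):

Code: Select all

def max_vms(sockets, cores_per_socket, ghz_per_core, ghz_per_vm=1.0):
    """Estimate concurrent VMs from aggregate clock speed."""
    return int(sockets * cores_per_socket * ghz_per_core / ghz_per_vm)

def avg_ram_per_vm(total_ram_gb, vm_count, overhead_gb=4):
    """Average guest RAM left per VM after a guessed hypervisor overhead."""
    return (total_ram_gb - overhead_gb) / vm_count

vms = max_vms(sockets=2, cores_per_socket=8, ghz_per_core=2.5)
print(vms, "VMs, about", round(avg_ram_per_vm(128, vms), 1), "GB each")
# -> 40 VMs, about 3.1 GB each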
Last edited by aristide1 on Tue Jul 03, 2012 3:15 pm, edited 2 times in total.

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Recommendation for a large scale VM machine.

Post by washu » Fri Jun 15, 2012 2:31 pm

How many VMs you can run really depends on what kind of work they are doing. If the VMs are mostly idle then you can run more than if they are CPU intensive. However, RAM is almost always the biggest limiting factor, followed by disk I/O. CPU is a distant third.

I run a VM cluster at work with 8 x 2.13 GHz CPU cores (2 x Xeon E5506) in each server, so not really fast processors. RAM always runs out long before CPU. Our most loaded server has 32 VMs, so 4 per core. However, it has 84 virtual cores (vCPUs) allocated in total, so 10.5 per physical core. The only reason it doesn't have more VMs is lack of RAM.

Unless most of your VMs are going to be very CPU intensive then a modern Intel quad core should have no problems with 20 VMs.

Be aware that the free version of ESXi can only use 32 GB of RAM.

aristide1
*Lifetime Patron*
Posts: 4284
Joined: Fri Apr 04, 2003 6:21 pm
Location: Undisclosed but sober in US

Re: Recommendation for a large scale VM machine.

Post by aristide1 » Fri Jun 15, 2012 3:02 pm

No, I don't expect them to be mostly idle, but CPU usage will be erratic. Your comments about memory confirm my suspicions, but I also need to find out what the memory allocations are on our current VMs. CPU-wise I'd like to see one of Intel's lower-priced hex cores like the Xeon E5645. A quad would probably work but leave no room for growth, though that's only a problem with a single-socket motherboard.
Be aware that the free version of ESXi can only use 32 GB of RAM.
That was really useful, hadn't researched it to that point yet.
Thanks.

SebRad
Patron of SPCR
Posts: 1121
Joined: Sun Nov 09, 2003 7:18 am
Location: UK

Re: Recommendation for a large scale VM machine.

Post by SebRad » Sat Jun 16, 2012 2:58 am

Hi, have you considered the Sandy Bridge Extreme platform? It may be cheaper, with a wider range of choices.
It looks like it supports the hardware virtualization features, unlike the normal Sandy and Ivy Bridge desktop CPUs.
The i7 3820 is quite affordable, and there are 6-core/12-thread CPUs for twice the price, too. 8 DIMMs in quad channel allow up to 64GB RAM, and 40 PCIe lanes leave room for lots of video cards or other high-end PCIe devices.
It also supports overclocking, although I guess CPU power isn't really your bottleneck, and high-speed RAM, although I'm not sure you need that with quad-channel bandwidth anyway.

Regards, Seb

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Re: Recommendation for a large scale VM machine.

Post by andyb » Sat Jun 16, 2012 1:55 pm

If you really need a huge amount of RAM and a lot of cores, the only price-sensible option is an Intel Extreme CPU (a new one, the i7 3930K, is coming out at about £440). Most of the S2011 motherboards that have 8 DIMM slots support 128GB of RAM (read the small print) and can be bought for £150.

If you need more CPU cores and more RAM than that can offer, you need to move into the realm of servers. 16 cores and 128GB of ECC registered RAM is an expensive jump up.

£1,650 will get you a dual-socket AMD G34 Asus mobo that supports 256GB through 16 DIMM slots (£390), 2x 8-core 2.0 GHz CPUs (£220 each), and 16x ECC registered 8GB DIMMs (£50 each), and then you will need to get a huge case to put it in.

Or, back to another Intel option: Xeon S2011. A Supermicro mobo that supports 512GB of RAM through 16 DIMM slots (£460), 2x 6-core (12 threads with HT) Xeon CPUs (from £310 each), and 16x ECC registered 8GB DIMMs (£50 each), but that will set you back £1,900.

It's not just that server motherboards are expensive and that you then have to buy 2 CPUs; it's the RAM that hurts the wallet. £50 per 8GB DIMM is a lot, but it gets even worse if you want 256GB: 16GB ECC registered DIMMs cost £140 each.
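Totting up the two builds above (the per-part prices are today's rough UK street prices, so treat this as a sanity check only):

Code: Select all

# Totals for the two builds above; prices will have drifted since.
amd_g34    = 390 + 2 * 220 + 16 * 50  # mobo + 2x 8-core CPUs + 16x 8GB DIMMs
xeon_s2011 = 460 + 2 * 310 + 16 * 50  # mobo + 2x 6-core Xeons + 16x 8GB DIMMs
print(f"AMD G34 build:    ~£{amd_g34}")     # 1630, quoted as ~£1,650
print(f"Xeon S2011 build: ~£{xeon_s2011}")  # 1880, quoted as ~£1,900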

Unless you think you need the advantages of a true server motherboard or ECC registered RAM, your best option looks to be an Intel Extreme S2011 CPU, not least because of the (relatively) inoffensive price tag.


Andy

aristide1
*Lifetime Patron*
Posts: 4284
Joined: Fri Apr 04, 2003 6:21 pm
Location: Undisclosed but sober in US

Re: Recommendation for a large scale VM machine.

Post by aristide1 » Sun Jun 17, 2012 5:32 pm

Well obviously I need to go back and collect more requirements, but this is not a bad start. We can barely run 2 VMs on a dual-core laptop, so 4-5 on a quad sounds reasonable. RAM is always the bottleneck, so I'll start a couple of sessions and see what Task Manager tells me. CPU is 2nd on the requirements list, and HD use is third. There's also the matter of how it's going to be used.

I was told at least 20 VMs, but the consensus over on the VMware forums is no more than 30 per host, which is why I prefer a hex core over a quad. I'm not sure they would go for a scratch build, but I certainly will make a pitch. Dell has a server configurator, and HP's stuff is expensive but may offer larger ultimate RAM capacities. Both are willing to talk one through the process, but I'd better have the answers to their questions before I look into that.

A "desktop" versus a true "server" is still in question. A server in this case being a dual socket board at the minimum with RAM capacity above 64GB. I hate the fact that one gives up some speed here for ECC RAM and the 16GB sticks are more expensive as well, but it seems some of the mainstream boards with 8 DIMM slots max out at 64GB, meaning each slot maxes out with an 8GB DIMM. The ASUS Z9PE-D8 maxes out as 64GB, but for $600 one can get a real server MB. The thing I hate about server motherboards is that just about have lousy ratings on NewEgg, but yes not many ratings overall.

I should probably keep an eye on which CPU has the best QPI; I'm sure it's the least affordable ones.

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Re: Recommendation for a large scale VM machine.

Post by andyb » Mon Jun 18, 2012 10:21 am

A question I have to ask.

What is the requirement to run all of these VMs on a single machine?

I might have missed the point, but I don't see why you can't do this with several much cheaper machines.


Andy

aristide1
*Lifetime Patron*
Posts: 4284
Joined: Fri Apr 04, 2003 6:21 pm
Location: Undisclosed but sober in US

Re: Recommendation for a large scale VM machine.

Post by aristide1 » Tue Jul 03, 2012 2:44 pm

I might have missed the point, but I don't see why you can't do this with several much cheaper machines.
What I do know is they want a machine that's ESXi certified, installable or embedded. The server was chosen from VMware's HCL. They also wanted TXT (Trusted Execution Technology), which eliminated a lot of boxes (boxes spelled A-M-D). Other things I noticed: the smaller boxes had older technology (not Sandy Bridge) and lower QPI numbers. This Dell server has 24 DIMM slots, so one of the requirements is that the box can grow over time. You know, like a fungus.
Be aware that the free version of ESXi can only use 32 GB of RAM.
I need somebody to elaborate. Does this refer to ESXi itself using 32GB of RAM or the vRAM pool?
And how does one get past it? Through licensing no doubt, but of what?

Sorry for the late response, we thought we had it down pretty well, but perhaps not.

VMware pricing is convoluted. We're looking at the Essentials kit; it claims a max of 192GB of vRAM, but upon closer inspection that's the software's maximum capacity. The licensing itself appears to grant the customer 32GB at a time. If you use the full 192GB of vRAM then you need SIX licenses, by my understanding. In our case we have 128GB of vRAM, so we can get by with just FOUR licenses.

And all that provided we don't require services available only in vSphere Standard or Enterprise. (I've developed a phobia of all software with the word Enterprise in it; it's doublespeak for "the most costly".)

VMware certainly knows how to suck people in. You get a trial version for 60 days with everything turned on. You get used to using certain options and processes, and then you figure out what you need to license.

Some people were asking why ESXi (free) requires such a pricey license. To which I replied :shock: .

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Recommendation for a large scale VM machine.

Post by washu » Tue Jul 03, 2012 5:47 pm

aristide1 wrote: I need somebody to elaborate. Does this refer to ESXi itself using 32GB of RAM or the vRAM pool?
The limit is the total physical RAM in the server. If you have more than 32GB installed ESXi with the free license will refuse to run properly.

aristide1
*Lifetime Patron*
Posts: 4284
Joined: Fri Apr 04, 2003 6:21 pm
Location: Undisclosed but sober in US

Re: Recommendation for a large scale VM machine. More Questi

Post by aristide1 » Tue Jul 03, 2012 6:27 pm

Which makes it borderline useless. Hopefully multiple licenses of Essentials will resolve that issue.

32GB is only good enough to determine the average RAM use across perhaps 10 VMs.

Thanks washu.

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Re: Recommendation for a large scale VM machine. More Questi

Post by andyb » Tue Jul 10, 2012 4:11 pm

Is there a reason why you can't run Linux, then have 4x copies of Windows running, then run your VMs?

Although this would be a very convoluted way to get around VMware's stupid licensing policy, I know it is possible to do. Practicality is another question, as is whether it creates more problems than it solves.

Just a thought. The other thought is to look at VM software from other companies.


Andy

aristide1
*Lifetime Patron*
Posts: 4284
Joined: Fri Apr 04, 2003 6:21 pm
Location: Undisclosed but sober in US

Re: Recommendation for a large scale VM machine. More Questi

Post by aristide1 » Tue Jul 10, 2012 5:46 pm

vSphere licensing is poorly explained, but they have no interest in what OSes you run or even how many. They charge on vRAM use and on the number of CPU cores, if you happen to exceed the 6 cores per physical processor allowed by the less expensive license options.

Example: a server with 2 8-core processors and 128GB of RAM, which will be devoted to and probably used by the VMs. Each vSphere license (at the lower levels) provides a 32GB vRAM allotment per license. In this setup, if you average 96GB of vRAM use on their sliding scale you need 3 licenses; if you use more, you require 4. With no license you're capped at 32GB no matter what. But suppose the entire system has only 32GB of RAM allotted: for Essentials and vSphere Standard each processor can only be up to 6 cores, so one still requires 2 licenses.
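Here's that arithmetic as a quick Python sketch (the 32GB entitlement and the one-license-per-socket floor are my reading of their tiers from the documents I've seen, not official VMware terms):

Code: Select all

import math

def licenses_needed(avg_vram_gb, sockets, vram_per_license_gb=32):
    """Enough licenses to cover pooled vRAM, with at least one per socket."""
    by_vram = math.ceil(avg_vram_gb / vram_per_license_gb)
    return max(by_vram, sockets)

print(licenses_needed(96, 2))    # -> 3, matching the example above
print(licenses_needed(128, 2))   # -> 4, our build
print(licenses_needed(32, 2))    # -> 2, the per-socket floor still binds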

The vRAM limits used to be even lower, but customers complained and they slowly raised their limits.

I hate the fact that I can't find specific details about VM management with their products. We will mostly need to do snapshots and restores, lots of restores. All the other stuff is not much of an issue; no 24/7 uptime or business-critical requirements.

celondil
Posts: 8
Joined: Tue Jul 10, 2012 6:24 pm

Re: Recommendation for a large scale VM machine. More Questi

Post by celondil » Tue Jul 10, 2012 6:35 pm

There is a lot of talk about CPU, RAM, and licensing, but nothing about storage... If any of your workloads are at all IO intensive, storage could easily become a bottleneck (more VMs = more seeks). In that case I definitely recommend multiple spindles and a hardware controller with a battery-backed or flash-backed cache of some sort.

aristide1
*Lifetime Patron*
Posts: 4284
Joined: Fri Apr 04, 2003 6:21 pm
Location: Undisclosed but sober in US

Re: Recommendation for a large scale VM machine. More Questi

Post by aristide1 » Tue Jul 10, 2012 6:55 pm

We're starting with 4 HDs, and I thought 6 would be better for the reasons you state. It's also the reason I don't want spanned volumes; individual drives allow for a degree of traffic control. We could easily end up swapping out all the drives for a higher number of 10K or even 15K RPM SAS units. SSDs are beyond budget. We're handling this issue just like the licensing: we start, then we see what our actual needs are before we commit. I'd partition the first half (the outer portion) of each disk for actual activity, with a separate partition for VM snapshots, but that's just me.

I want to see the VM monitor report this info while 30 VMs are cranking away.

celondil
Posts: 8
Joined: Tue Jul 10, 2012 6:24 pm

Re: Recommendation for a large scale VM machine. More Questi

Post by celondil » Tue Jul 10, 2012 7:50 pm

aristide1 wrote: We're starting with 4 HDs, and I thought 6 would be better for the reasons you state. It's also the reason I don't want spanned volumes; individual drives allow for a degree of traffic control. We could easily end up swapping out all the drives for a higher number of 10K or even 15K RPM SAS units. SSDs are beyond budget. We're handling this issue just like the licensing: we start, then we see what our actual needs are before we commit. I'd partition the first half (the outer portion) of each disk for actual activity, with a separate partition for VM snapshots, but that's just me.

I want to see the VM monitor report this info while 30 VMs are cranking away.
I figured SSDs were out of budget; none of the enterprisey ones are remotely cost-effective for main storage.

I'd say a good RAID controller with some sort of flash/battery-backed cache would be more cost-effective than SSDs at this point. They basically make a lot of the write IO trivial, since it can go to the cache for a while and free up IO time for reads. As long as you aren't hammering the disks with writes, you can see a major improvement in latency and IOPS. If you go beyond the cache's ability to store and flush the data, though, you start sliding back to the spindles being the bottleneck.
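A toy back-of-envelope of that effect (150 IOPS per spindle is a generic 7200 RPM ballpark, not a measured figure):

Code: Select all

SPINDLE_IOPS = 150  # rough 7200 RPM ballpark; 10K/15K SAS would be higher

def read_iops_available(spindles, write_fraction, writes_cached):
    """Read IOPS left over, assuming the BBU cache absorbs the write stream."""
    raw = spindles * SPINDLE_IOPS
    return raw if writes_cached else raw * (1 - write_fraction)

print(read_iops_available(4, 0.3, writes_cached=False))  # ~420 left for reads
print(read_iops_available(4, 0.3, writes_cached=True))   # all ~600 for reads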

You can get a good LSI card + battery for probably around $1000 USD online. It's nowhere near the same benefit as an SSD, but it costs a fraction of the price. It can make the difference between needing those 5th and 6th spindles now or later.

LSI does support an option called CacheCade on some of their controllers. Basically it's another SSD used as a large cache (think SRT on 'roids), but it could be an option to consider. I haven't used it myself (we use HP servers at work and they don't support it), but the idea is sound. Put, say, a pair of RAID-1 SSDs in front of a multi-TB LUN made up of spindles and let the hotspots go to cache, thereby freeing up those spindles to work on the other IO.

aristide1
*Lifetime Patron*
Posts: 4284
Joined: Fri Apr 04, 2003 6:21 pm
Location: Undisclosed but sober in US

Re: Recommendation for a large scale VM machine. More Questi

Post by aristide1 » Wed Jul 11, 2012 3:44 am

Back in the days when dinosaurs roamed the earth I had in my PC an EISA caching SCSI controller with 4 SIMMS on it. Yeah, it moved.

I want to implement Intel SRT Smart Response Technology on my next PC, which also uses an SSD as a huge cache, but right now Intel requires your setup be RAID 0 for that to happen. That's an idiotic requirement. Oh well, never underestimate the use of a good software cache.

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Recommendation for a large scale VM machine. More Questi

Post by washu » Wed Jul 11, 2012 7:17 am

aristide1 wrote:I want to implement Intel SRT Smart Response Technology on my next PC, which also uses an SSD as a huge cache, but right now Intel requires your setup be RAID 0 for that to happen. That's an idiotic requirement. Oh well, never underestimate the use of a good software cache.
I'm not sure where you got the RAID 0 requirement, because that is not true. Intel SRT does require the controller to be in "RAID" mode, but the magnetic disk does not need to be in a RAID config. It can be a single drive or any RAID config that Intel RST supports (0, 1, 5 or 10). The cache is limited to 64 GB, any space above that is usable as a separate drive.

I personally don't see the point given how much SSD prices have dropped, but to each their own.

celondil
Posts: 8
Joined: Tue Jul 10, 2012 6:24 pm

Re: Recommendation for a large scale VM machine. More Questi

Post by celondil » Wed Jul 11, 2012 9:03 am

washu wrote:
aristide1 wrote:I want to implement Intel SRT Smart Response Technology on my next PC, which also uses an SSD as a huge cache, but right now Intel requires your setup be RAID 0 for that to happen. That's an idiotic requirement. Oh well, never underestimate the use of a good software cache.
I'm not sure where you got the RAID 0 requirement, because that is not true. Intel SRT does require the controller to be in "RAID" mode, but the magnetic disk does not need to be in a RAID config. It can be a single drive or any RAID config that Intel RST supports (0, 1, 5 or 10). The cache is limited to 64 GB, any space above that is usable as a separate drive.

I personally don't see the point given how much SSD prices have dropped, but to each their own.
If you need multiple TB of storage, SSDs are still not cost-effective. And unfortunately with servers, the prices are still fairly silly. They've come down quite a lot in the last year as well, but they are still in the $5/GB range for what HP offers; last year it was closer to $20/GB. And since in that environment you are pretty much always going to need RAID redundancy, simply buying one won't cut it, so you're likely facing lost capacity in the form of RAID overhead.

What's pretty common is a 25-disk array used in one big RAID 10 (with a hotspare), not due to space requirements but for IOPS. Some systems even use multiple arrays with that kind of config. But all those drives take up space and power. If you can get away with a couple of SSDs acting as a cache and stick with the drives in the chassis rather than using an external array, that's a win. It all depends, though, on whether there are hotspots on the array that consistently get the IO.

It might not make as much sense in the desktop market with SSDs now in the ~$1/GB range, but the amount of space and the number of random IOs needed in the server market, along with the price of the SSDs, change that. If nothing else, lower latency translates to a better experience for your users, which can justify the partial investment, but putting a TB or two of SSDs in a server would be scary expensive. A 2TB LUN of SSDs in RAID-1 at HP's current pricing would run between $31k and $45k. Now if HP supported SSD caching like, say, IBM does, I could get the spindles for about $6k, then add a pair of SSDs for maybe another $4-8k total, and you'd end up with something pretty competitive performance-wise with the pure SSD solution (since that one would max out the controller anyway) at about 1/3 the cost.
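Penciled out, with the ballpark prices above (these are the rough figures from this thread, not quotes):

Code: Select all

def raid1_lun_cost(usable_gb, price_per_gb):
    """RAID-1 mirroring doubles the raw capacity you have to buy."""
    return usable_gb * 2 * price_per_gb

all_ssd = raid1_lun_cost(2000, 8)  # ~$8/GB enterprise SSD -> ~$32k
hybrid  = 6000 + 8000              # spindle LUN + SSD cache pair (high end)
print(f"all-SSD 2TB LUN: ~${all_ssd:,}")  # within the $31k-45k range above
print(f"hybrid setup:    ~${hybrid:,}")   # roughly a third of the cost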

Enterprise really is a synonym for expensive...

aristide1
*Lifetime Patron*
Posts: 4284
Joined: Fri Apr 04, 2003 6:21 pm
Location: Undisclosed but sober in US

Re: Recommendation for a large scale VM machine. More Questi

Post by aristide1 » Sun Aug 12, 2012 6:22 pm

Well, as a follow-up: the company I work for purchased a box.

[Image] This is my first cell phone photo uploaded.

2x Intel E5-2665 (16 cores total, 2.4GHz)
128GB DDR3-1600 memory
with 8 empty DIMM slots left (can you tell this is a server motherboard?)
4x 1TB HDs (they still concern me)
Intel 3500 quad Gb NIC
2x 750 watt PSUs

[Image]

It's extremely quiet under light load, less so when the fans have to move air, and they certainly do.
I think there are eight 80mm fans inside the thing, plus the two small fans in the PSUs. It's a rack server, but a very deep one.

ESXi is loaded, and so far we've only used vClient, though I believe vCenter is going to be inevitable.
We don't move VMs much or need 24/7 uptime; we do testing, and what we need is to restore to a snapshot, routinely and quickly.

We'd like to go with just Essentials licensing (4 of them, sigh) since we use VMs for testing, not for employee desktops.
Nothing on the VMware site spells out what you get, and obviously finding out the day before the 60-day trial expires is not good business practice.

We estimate we need 1GHz of one core to run a VM session (loaded with Windows or a Linux), so we're looking at 2.4GHz x 16 cores = 38.4GHz, or about 35-40 VMs concurrently.
(So far the heaviest HD activity I have seen by far is during a snapshot restore.)

They pretty much knew what they wanted; I made some recommendations (Sandy Bridge for higher memory bandwidth than prior-generation Xeons, DDR3-1600, and more HDs), and they seemed to appreciate the added knowledge. At the same time I noted we could save a lot of money by staying with 8GB DIMMs instead of going to 16GB DIMMs. We still have plenty of empty slots.

I'd love to borrow it for one weekend, load just 1 Linux and fold.

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Re: Recommendation for a large scale VM machine. More Questi

Post by andyb » Mon Aug 13, 2012 12:40 pm

Looks like a good choice. I do have one question though: do you really need 4x slow 1TB HDDs? Would it not be better to swap a couple of them for SSDs so you don't get IO bottlenecks? If you are using it for testing, I assume you are constantly fiddling, installing things, and opening and closing different VMs; if so, storage performance might become an issue.


Andy

aristide1
*Lifetime Patron*
Posts: 4284
Joined: Fri Apr 04, 2003 6:21 pm
Location: Undisclosed but sober in US

Re: Recommendation for a large scale VM machine. More Questi

Post by aristide1 » Mon Aug 13, 2012 2:44 pm

The HDs are an experiment. If they're too slow they'll be swapped out, but commercial/large SSDs are still rather pricey.

And we can use the HDs elsewhere, they certainly won't go to waste.

Overall I'm in agreement, but I'm holding a wait-and-see attitude.

A

Post Reply