The Standard Performance Evaluation Corporation (SPEC) has released its first new virtualisation benchmark in eight years.
The new SPECvirt Datacenter 2021 benchmark succeeds SPEC VIRT_SC 2013. The latter was designed to help users understand performance in the heady days of server consolidation, and so required just one host. The new benchmark requires four hosts – a recognition of modern datacentre realities.
The new tests are designed to measure the combined performance of hypervisors and servers. For now, only two hypervisors are supported: VMware’s vSphere (versions 6.x and 7.x) and Red Hat Virtualisation (version 4.x). David Schmidt, chair of the SPEC Virtualization Committee, told The Register that Red Hat and VMware are paid-up members of the committee, hence their inclusion. But the benchmark can be extended to other hypervisors if their vendors create an SDK. He opined that Microsoft, vendor of the Hyper-V hypervisor that has around 20 per cent market share, didn’t come to play because it’s busy working on other SPEC projects.
SPECvirt Datacenter 2021 runs in three phases. For starters, it assumes that most hosts are in maintenance mode, then brings more hosts online to test load balancing. A third phase of tests saturates all four hosts and gives them a solid workout. The benchmark tests how hypervisors manage resources across a datacentre, and simulates performance under the following five workloads:
- An OLTP database, based on the HammerDB benchmark;
- A Hadoop/Big Data cluster, based on the BigBench benchmark;
- A departmental mail server;
- A departmental web server;
- A departmental collaboration server.
One set of results using the new benchmark has already been published, featuring vSphere 7.0U2a, Lenovo ThinkSystem SR665 servers, and AMD EPYC 7763 CPUs.
That CPU is a 64-core beast that Lenovo has used to make merry licensing mischief in single-socket servers. However, the server-maker chose to use it in a two-socket machine for its benchmark run.
Lenovo and HPE are also paid-up members of SPEC’s virtualisation committee, while Intel and Oracle have made contributions. Schmidt said the long period between benchmarks is attributable to the complexities of designing a valid test. He added that he expects committee members will soon publish more benchmark results but hopes that organisations that put the test to work will also share numbers they generate. ®