Monday 30 January 2017

Hardware Planning for Skype for Business 2015 Enterprise

I've been looking through the MS hardware and virtualisation guides for Skype for Business 2015 (and Lync 2013) and have put together some notes to translate some of the best practices.

There aren't many corners that can be cut when setting this up, and the hardware requirements are pretty large. The smallest enterprise setup that Microsoft details is for 10,000 seats, and Microsoft won't support a configuration with less than the recommended hardware, which makes the requirements for an organisation with under 10,000 seats seem even higher.

Most of the notes are bullet points taken from the Lync 2013 virtualisation guide, but I have put a section at the bottom to translate the official 2015 hardware requirements. The disk IO and capacity requirements are also listed there.

References

Lync Server 2013 Virtualization 
Capacity Planning Doc 
Lync Server 2013 Stress Testing Guide 
2015 Hardware Specifications 
RAID IOPS Calculator 


MS Assumptions in the 2013 Virtualisation Guide


  • 10k users on 3 Front Ends 
  • Shared resource techniques, including processor over subscription, memory over commitment, and I/O virtualisation, cannot be used because of their negative impact on Lync scale and call quality.

General Information
  • Do your own testing with LSS – a must – see the Lync Server 2013 Stress Testing Guide 
  • There is a section in the virtualisation guide on KHIs (Key Health Indicators) which should be checked during testing.
  • No vMotion / Live Migration – VMs can only be moved while powered off. 
  • Mixing physical and virtual in the same pool (Lync role) is not supported 
  • Each server in the pool must have identical resources 
  • Physical servers must be fully redundant (i.e. PSU, RAID) 
  • Using lower-specification servers should be done with caution, and it is highly recommended to use the Stress and Performance tools to verify the final solution. Support will not do anything unless the hardware specs are met.

CPU
  • Disable Hyperthreading 
  • 1:1 vCPU to pCPU 
  • Host must support nested page tables and extended page tables (NPT and EPT) 
  • Disable NUMA spanning (see the sketch after this list) 
  • MS Config uses 8 x HP DL560 G8, each with 4 x E5-4650 (8c/16t) @ 2.70 GHz 
  • The guide assumes 6-10 percent virtualisation overhead for VMs.
  • The Microsoft guide does not account for NUMA home nodes at all, so spanning VMs across NUMA nodes or CPUs should not be an issue (the guide has 12-core VMs on hosts with 8-core Intel CPUs, so those VMs must span NUMA).
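
A minimal Hyper-V PowerShell sketch of the CPU points above (the VM name "SfB-FE01" is a placeholder; disabling hyperthreading itself is a BIOS/UEFI setting, not a cmdlet):

    # Disable NUMA spanning on the host, per the guide
    Set-VMHost -NumaSpanningEnabled $false

    # Review the host NUMA topology before sizing vCPU counts
    Get-VMHostNumaNode

    # Reserve 100% of the allocated vCPUs to honour the 1:1 vCPU:pCPU rule
    Set-VMProcessor -VMName "SfB-FE01" -Count 12 -Reserve 100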

Memory
  • No overcommitment 
  • No Dynamic Memory (or VMware ballooning) – all memory must be reserved (see the sketch after this list)
  • MS Config uses 8 x HP DL560 G8, each with 128 GiB (8 x 16 GiB)
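
A one-line sketch of the memory rule above, again with a placeholder VM name:

    # Static memory only - no Dynamic Memory, no overcommit
    Set-VMMemory -VMName "SfB-FE01" -DynamicMemoryEnabled $false -StartupBytes 32GB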

Networking
  • Must Use VMQ (Virtual Machine Queue) 
  • Segregate host and guest communication onto separate physical NICs.
  • SR-IOV is recommended.
  • MS Config uses 8 x HP DL560 G8, each with 4 x 1 Gb NICs 
  • Each host must have at least one dedicated NIC for the Lync workload.
  • The Lync Server media workload can reach over 500 Mbps. 
  • If there is more than one VM on a host, size the NICs accordingly; consider 10 GbE or multiple 1 GbE NICs, e.g. 3 x 1 Gb NICs teamed (see the sketch after this list).
  • Synthetic NICs in the guest are preferred; also use physical NIC offloads if available.
  • Legacy NICs are not supported for Lync media workloads.
  • Use only IPv4 or only IPv6 on a NIC (not both).
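
A rough Hyper-V PowerShell sketch of the networking points above. Adapter, team and VM names are placeholders, and SR-IOV additionally requires the virtual switch to be created with -EnableIov $true:

    # Enable VMQ on the physical NICs that carry VM traffic
    Enable-NetAdapterVmq -Name "NIC3","NIC4","NIC5"

    # Team three 1 Gb NICs for the Lync workload
    New-NetLbfoTeam -Name "LyncTeam" -TeamMembers "NIC3","NIC4","NIC5" -TeamingMode SwitchIndependent

    # Weight the guest NIC for VMQ (and SR-IOV where the hardware supports it)
    Set-VMNetworkAdapter -VMName "SfB-FE01" -VmqWeight 100 -IovWeight 100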

Storage
  • Fixed / pass-through disks (NOT dynamic) – VHDX format (see the sketch after this list) 
  • VM storage must be dedicated to VMs (i.e. don't use the hypervisor system drive for VMs) 
  • VM Checkpoints not supported 
  • 'be aware' of contention between VMs 
  • MS Config uses 4 x 300 GiB RAID 1 local system drives 
  • MS Config uses 8 drive enclosures with 12 x 600 GiB 15k SAS drives each (96 drives) 
  • Each physical host has 1 x 600 GB array and 3 x 1.2 TB arrays, each with 700 read and 350 write IOPS 
  • VM: IDE for boot disk, SCSI for all other attached drives 
  • iSCSI for data drives is supported 
  • Normal best practice: OS and binaries on the OS drive, DBs and data on the data drives 
  • Implement MPIO for back-end storage
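
A short sketch of the storage points, with placeholder paths and VM names:

    # Fixed-size VHDX on a volume dedicated to VM storage
    New-VHD -Path "D:\VMs\SfB-FE01-Data.vhdx" -SizeBytes 300GB -Fixed

    # Attach it to the virtual SCSI controller (IDE stays for the Gen 1 boot disk)
    Add-VMHardDiskDrive -VMName "SfB-FE01" -ControllerType SCSI -Path "D:\VMs\SfB-FE01-Data.vhdx"

    # Add the MPIO feature for multipathed back-end storage
    Install-WindowsFeature -Name Multipath-IO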

Software (OS)
  • Hypervisor – Windows Server 2012 R2, 2012, 2008 R2 Hyper-V, or an SVVP-tested platform 
  • All Lync Server workloads are supported in a VM 
  • Use VM templates – Sysprep cannot be applied after Lync is installed (see the sketch after this list) 
  • Guest OS – 2012 R2, 2012 or 2008 R2 required
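
For the template point, Sysprep has to run inside the template guest before any Lync/Skype for Business components go on (standard Sysprep switches, nothing Lync-specific):

    # Generalise the template VM - must happen before Lync/SfB is installed
    C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /mode:vm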

DR
  • Front End pools in both sites, both active; both pools must be physical or virtual (not mixed) 
  • An admin can fail over from one site to the other (see the sketch after this list) 
  • Both pools should handle all users
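
The admin-driven failover uses the pool failover cmdlets; a hedged sketch with a placeholder pool FQDN:

    # Fail users over to the backup pool in the other site
    Invoke-CsPoolFailOver -PoolFqdn "fepool01.contoso.com" -DisasterMode

    # Fail back once the primary site is healthy again
    Invoke-CsPoolFailBack -PoolFqdn "fepool01.contoso.com"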

SQL
  • SQL HA using SQL Mirroring plus a witness is recommended. Checking the 2015 hardware requirements, SQL AlwaysOn is also supported and is likely the best choice for a 2015 deployment.
  • See below for supported SQL Server versions (a quick version-check sketch follows the list).
Supported Versions
  • MS SQL 2014 Enterprise or Standard with CU6.
  • MS SQL 2012 Enterprise or Standard with the latest SP.
  • MS SQL 2008 R2 Enterprise or Standard with the latest SP.
  • Mirroring, clustering and HA are all supported; only mirroring can be configured in Topology Builder.
  • Active/passive only; do not use the passive node for anything else.
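
A quick way to confirm the back-end version and edition against the list above (assumes the SqlServer/SQLPS module is available; the instance name is a placeholder):

    # Check the SQL Server version and edition of the Lync/SfB back end
    Invoke-Sqlcmd -ServerInstance "SQL01\RTC" -Query "SELECT SERVERPROPERTY('ProductVersion') AS Version, SERVERPROPERTY('Edition') AS Edition"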

Hypervisor Considerations
  • Place VMs in the same application tier on different hosts for HA.
  • Lync Server 2013 can be deployed on Windows Server 2012 Hyper-V and later, or any third-party hypervisor that has been validated under the SVVP (this implies 2016 is supported – need to check with MS to be sure).
  • Resource allocations are not explicitly required unless oversubscribed – which seems an odd statement, since you cannot overcommit anyway.
  • If you deploy antivirus on the host, ensure exclusions are in place (the guide doesn't detail the exclusions). 
  • Disable the virtual CD/DVD-ROM (see the sketch after this list) 
  • Lync is unable to use the HA or DR capabilities of the hypervisor (SRM, Hyper-V Replica)
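
A sketch covering the DVD-ROM and same-tier placement points (VM and class names are placeholders; the anti-affinity part only applies if your hosts happen to be in a failover cluster):

    # Remove the virtual DVD drive from the Lync VMs
    Get-VMDvdDrive -VMName "SfB-FE01" | Remove-VMDvdDrive

    # Keep same-tier VMs on different hosts with an anti-affinity class
    $class = New-Object System.Collections.Specialized.StringCollection
    $class.Add("SfB Front End") | Out-Null
    (Get-ClusterGroup -Name "SfB-FE01").AntiAffinityClassNames = $class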

[Image: Microsoft Virtualisation Guide VM to Host Placement]


Skype for Business 2015 Hardware Requirements (Per VM)

Front End, Back End, Standard Edition and Persistent Chat

Microsoft Specification | Translation | Comment
64-bit dual, hex-core 2.26 GHz | 12 cores | 
32 gigabytes (GB) | 32 GB | 
8 x 10k rpm "with 72 GB free" -or- SSD with similar performance | 2 in RAID 1 (232 IOPS, 72 GB) + 6 in RAID 10 (697 IOPS, 216 GB) | Lync 2013 doc suggests a 66/33 read/write IO profile (700/350 IOPS per LUN)
1 dual-port 1 Gbps NIC -or- 2 single NICs teamed with a single MAC | 1 Gbps, redundant | Doesn't say how the teaming should be done, so NFT only seems appropriate
OS | 2012 R2 or 2012 | Specific KBs are required, see the MS hardware spec site

Edge, Standalone Mediation, Video Interop and Directors

Microsoft Specification | Translation | Comment
64-bit dual, quad-core 2.26 GHz | 8 cores | 
16 gigabytes (GB) | 16 GB | 
4 x 10k rpm "with 72 GB free" -or- SSD with similar performance | 2 in RAID 1 (232 IOPS, 72 GB) + 2 in RAID 1 (232 IOPS, 72 GB) | Lync 2013 doc suggests a 66/33 read/write IO profile (700/350 IOPS per LUN)
1 dual-port 1 Gbps NIC -or- 2 single NICs teamed with a single MAC | 1 Gbps, redundant | Doesn't say how the teaming should be done, so NFT only seems appropriate
OS | 2012 R2 or 2012 | Specific KBs are required, see the MS hardware spec site

Disk calculations are based on using 72 GB 10,000 RPM drives as detailed in the Microsoft Spec.

2 x 72 GB drives in RAID 1 give 72 GB of capacity and 232 total mixed IOPS.

6 x 72 GB drives in RAID 10 give 216 GB of capacity and 697 mixed IOPS.

See the IOPS calculator linked above for details; a rough sketch of the maths is below.
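
A rough sketch of the maths behind those figures, assuming roughly 155 IOPS per 10k SAS drive, the 66/33 read/write mix from the Lync 2013 doc, and a write penalty of 2 for RAID 1/10. This approximately reproduces the 232 and 697 numbers; the linked calculator is the authoritative source:

    # Effective IOPS = raw IOPS / (read% + write% x RAID write penalty)
    function Get-EffectiveIops {
        param(
            [int]$Drives,
            [int]$IopsPerDrive = 155,     # assumed figure for a 10k SAS drive
            [double]$ReadRatio = 0.66,    # 66/33 read/write profile
            [int]$WritePenalty = 2        # RAID 1 and RAID 10
        )
        $raw = $Drives * $IopsPerDrive
        [math]::Round($raw / ($ReadRatio + (1 - $ReadRatio) * $WritePenalty))
    }

    Get-EffectiveIops -Drives 2   # ~231 IOPS for 2 x 72 GB in RAID 1
    Get-EffectiveIops -Drives 6   # ~694 IOPS for 6 x 72 GB in RAID 10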

You can see the guidelines are pretty detailed, and once you translate them they become quite clear. Once the hypervisor servers and VMs are built, I plan to post a script to configure them as per the best practices.

2 comments:

  1. For the Front End servers, I see the requirement to have 2 disks in RAID 1, and 6 disks in RAID 10. However, nowhere on the Internet does it detail what the 2 disks are for, and what the 6 disks are for. I'm assuming that the 2 disks are for the OS, and Skype should be installed on the 6 disks? If that is the case, where is it outlined so that I don't have to assume? Also, why doesn't the Edge server need separate disks for the OS?

    Replies
    1. I would assume the same as you, but I don't recall anything specific either.

      I'd assume they want the extra data disk in there for the additional IOPS for SQL and Lync data.

      If you look at the 2015 hardware specs link it states the FE servers need 4 disks, 2 x RAID 1. My table appears to be wrong which I'll correct shortly.


