Monday 30 January 2017

Hardware Planning for Skype for Business 2015 Enterprise

I've been looking through the MS hardware and virtualisation guides for Skype for Business 2015 (and Lync 2013) and put together some notes to try to translate some of the best practices.

There aren't many corners that can be cut when setting this up, and the hardware requirements are pretty large. The smallest enterprise setup that Microsoft details is for 10,000 seats, and Microsoft won't support a configuration with less than the recommended hardware, which makes the requirements for an organisation with under 10,000 seats seem even higher.

Most of the notes are bullet points taken from the Lync 2013 virtualisation guide, but I have put a section at the bottom to try and translate the official 2015 hardware requirements. The disk IO and capacity requirements are also listed here.

References

Lync Server 2013 Virtualization 
Capacity Planning Doc 
Lync Server 2013 Stress Testing Guide 
2015 Hardware Specifications 
RAID IOPS Calculator 


MS Assumptions in 2013 virtualisation guide


  • 10k users on 3 Front Ends 
  • Shared resource techniques, including processor over subscription, memory over commitment, and I/O virtualisation, cannot be used because of their negative impact on Lync scale and call quality.

General Information
  • Need to do your own testing with LSS (a must) – see the Lync Server 2013 Stress Testing Guide 
  • There is a section (virtualisation guide) on KHIs (Key Hardware Indicators) which should be checked during testing.
  • No vMotion / Live Migrate – VMs can only be moved while powered off. 
  • Mixing physical and virtual in the same pool (lync role) is not supported 
  • Each server in the pool must have identical resources 
  • Physical servers must be fully redundant (ie PSU, RAID) 
  • Using a lower specification of server should be done with caution and it is highly recommended to use Stress and Performance tools to verify the final solution. Support will not do anything unless HW specs are met.

CPU
  • Disable Hyperthreading 
  • 1:1 vCPU to pCPU 
  • Host must support nested page tables and extended page tables (NPT and EPT) 
  • Disable NUMA spanning 
  • MS Config uses 8 x HPDL560 G8, 4x E5-4650 (8c/16t) 2.70 GHz 
  • 6-10 percent overhead for VMs in guide.
  • Microsoft guide does not account for NUMA home nodes at all, therefore spanning VMs across NUMA nodes or CPUs should not be an issue. (They have 12-core VMs on hosts with 8-core Intel CPUs, therefore the VM must span NUMA)

Memory
  • No over commitment 
  • No Dynamic Memory (or VMware ballooning - must reserve all)
  • MS Config uses 8 x HPDL560 G8, 128 GiB (8 * 16 GiB)

Networking
  • Must Use VMQ (Virtual Machine Queue) 
  • Physical NIC segregation between Host and Guest communication.
  • SR-IOV is recommended.
  • MS Config uses 8x HPDL560 G8 with 4x 1 Gb NIC 
  • Each host must have at least 1 dedicated NIC for lync workload.
  • Lync server media workload can reach over 500 Mbps. 
  • If more than 1 VM runs on a host, size the NICs accordingly; consider 10 GbE or multiple teamed 1 GbE NICs (e.g. 3 x 1 Gb). 
  • Synthetic NICs in guest are preferred, also use physical NIC offloads if available.
  • Legacy NIC not supported in Lync media workloads.
  • Use only IPv4 -OR- IPv6 on a NIC

Storage
  • Fixed / Pass Through disks (NOT Dynamic) - VHDX format 
  • VM storage must be dedicated to VMs (ie. Don't use hypervisor system drive for VMs) 
  • VM Checkpoints not supported 
  • 'be aware' of contention between VMs 
  • MS Config uses 4 * 300 GiB RAID 1 local system drives 
  • MS Config uses 8 * Drive Enclosures with 12 600 GiB SAS 15k drives each (96 drives) 
  • Each physical host has 1 x 600 GB array and 3 x 1.2 TB arrays each with 700 read and 350 write IOPS 
  • VM: IDE for boot disk, SCSI for all other attached drives 
  • ISCSI for data drives supported 
  • Normal best practice for OS and binaries on OS drive, DBs and data on data drives 
  • Implement MPIO for back end storage

Software (OS)
  • Hypervisor – 2012R2, 2012, 2008R2, or SVVP tested platform 
  • All Lync Server workloads supported in VM 
  • Use VM Templates – Sysprep cannot be applied after Lync installation 
  • Guest OS – 2012R2, 2012, 2008 R2 required

DR
  • Front end pools in both sites, both active, both pools must be phys or virt (not mixed) 
  • Admin can fail over from one site to the other 
  • Both pools should handle all users

SQL
  • SQL HA using SQL Mirroring + witness is recommended. Checking the 2015 hardware requirements, SQL AlwaysOn is also supported and is likely the best choice for a 2015 deployment.
  • See below for supported SQL Server versions.
Supported Versions
  • MS SQL 2014 Ent or Std with CU6.
  • MSSQL 2012 Ent or Std with latest SP.
  • MSSQL 2008 R2 Ent or Std with latest SP.
  • Mirroring, clustering and AlwaysOn are all supported, but only mirroring can be configured in Topology Builder.
  • Active/Passive only, do not use passive for anything else.

Hypervisor Considerations
  • Place VMs in the same application tier on different hosts for HA.
  • Lync Server 2013 can be deployed on Windows Server 2012 Hyper-V and later technology, or any third-party hypervisor that has been validated under the SVVP (which implies 2016 is supported -- need to check with MS to be sure).
  • Resource allocations not explicitly required unless oversubscribed -- which seems an absurd statement, since over-commitment isn't supported anyway.
  • If you deploy AV on host, ensure exclusions are in place (doesn't detail exclusions.) 
  • Disable virtual CD/DVD ROM 
  • Lync unable to use HA or DR capabilities of Hypervisor (SRM, Hyper-V Replica)

[Image: Microsoft Virtualisation Guide VM to Host Placement]


Skype for Business 2015 Hardware Requirements (Per VM)

Front End, Back End, Standard Edition and Persistent Chat

Microsoft Specification | Translation | Comment
64-bit dual, hex-core 2.26 GHz | 12 cores |
32 gigabytes (GB) | 32 GB |
8 x 10k rpm "with 72 GB free" -- or SSD with similar performance | 2 in RAID 1 (232 IOPS, 72 GB) + 6 in RAID 10 (697 IOPS, 216 GB) | Lync 2013 doc suggests a 66/33 read/write IO profile (700/350 IOPS per LUN)
1 dual-port 1 Gbps NIC -- or 2 single NICs teamed with a single MAC | 1 Gbps, redundant | Doesn't say how teaming is to be done, so NFT only seems appropriate
OS 2012 R2 or 2012 | | Specific KBs are required, see the MS hardware spec site

Edge, Standalone Mediation, Video Interop and Directors

Microsoft Specification | Translation | Comment
64-bit dual, quad-core 2.26 GHz | 8 cores |
16 gigabytes (GB) | 16 GB |
4 x 10k rpm "with 72 GB free" -- or SSD with similar performance | 2 in RAID 1 (232 IOPS, 72 GB) + 2 in RAID 1 (232 IOPS, 72 GB) | Lync 2013 doc suggests a 66/33 read/write IO profile (700/350 IOPS per LUN)
1 dual-port 1 Gbps NIC -- or 2 single NICs teamed with a single MAC | 1 Gbps, redundant | Doesn't say how teaming is to be done, so NFT only seems appropriate
OS 2012 R2 or 2012 | | Specific KBs are required, see the MS hardware spec site

Disk calculations are based on using 72 GB 10,000 RPM drives as detailed in the Microsoft Spec.

2 x 72 GB drives in RAID 1 gives: 72 GB capacity and 232 mixed total IOPS.

6 x 72 GB drives in RAID 10 gives: 216 GB capacity and 697 mixed total IOPS.

See the IO calculator linked for details.
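As a rough sketch of the arithmetic behind those figures (the per-drive IOPS figure and write penalty below are assumptions, not values from the Microsoft spec):

```powershell
# Functional IOPS = raw IOPS / (read% + write% * write penalty)
$driveIops  = 150    # assumed for a 10k rpm SAS drive
$readRatio  = 0.66   # 66/33 read/write profile from the Lync 2013 doc
$writeRatio = 0.34
$penalty    = 2      # RAID 1 / RAID 10 write penalty (two writes per logical write)

function Get-FunctionalIops ([int]$Drives) {
    [math]::Round(($Drives * $driveIops) / ($readRatio + $writeRatio * $penalty))
}

Get-FunctionalIops -Drives 2   # 2 disks in RAID 1  -> ~224
Get-FunctionalIops -Drives 6   # 6 disks in RAID 10 -> ~672
```

This lands close to the 232 and 697 IOPS figures above; the linked calculator does the same sums with tunable drive speeds.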

You can see the guidelines are pretty detailed and once you translate them they become quite clear. Once the hypervisor servers and VMs are built, I plan to post a script to configure as per best practices.

Tuesday 17 January 2017

Azure Resource Manager (ARM) Templates Getting Started Guide

Preface
I've been doing some research and development work on ARM templates recently and thought I would put together a getting started guide while I was at it.

There are some great online references for ARM template creation; Microsoft's official ARM template documentation and the Azure quickstart template gallery are good starting points.

Getting Started

The basics
ARM templates are JSON-formatted files that describe Azure resources. You can send the file off to Azure in a number of ways and the described resources will be provisioned by the service. Resources are deployed in parallel where their dependencies allow, so even complicated templates can be deployed quickly.

The ARM template is broken down into four sections: Parameters, Variables, Resources and Outputs. Only one section is actually required to have content - resources - the others help to make the templates more dynamic and flexible.

I use Visual Studio Code for creating and editing ARM templates as it has some excellent extensions which provide IntelliSense when creating resources. With this set up, you can just hit Ctrl+Space to get a hint at the possible resource properties on the fly. The official guide to setting up VS Code for IntelliSense is here.

You can also download templates from the Azure Portal to edit later. You can download templates from created resources, from the last page of the resource creation process, or even from entire resource groups.

Here is a really basic, valid ARM template to create a Storage Account. It's pretty rigid in that the storage account name is hard-coded; to fix that, parameters can be used to take user input.
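A minimal sketch along those lines (the account name and API version here are illustrative):

```json
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "name": "mystorageacct0001",
            "apiVersion": "2016-01-01",
            "location": "[resourceGroup().location]",
            "sku": { "name": "Standard_LRS" },
            "kind": "Storage",
            "properties": {}
        }
    ]
}
```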



Execution
ARM templates can be deployed in a number of ways: PowerShell, X-plat CLI and the Azure Portal are some common ways.

The following PowerShell code will create a Resource Group and deploy the template into it.
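A sketch using the AzureRM cmdlets (the group name, location and template path are placeholders):

```powershell
# Log in first with Login-AzureRmAccount (AzureRM module)
New-AzureRmResourceGroup -Name 'demo-rg' -Location 'West Europe'

# Deploy the template file into the new resource group
New-AzureRmResourceGroupDeployment -Name 'storageDeployment' `
    -ResourceGroupName 'demo-rg' `
    -TemplateFile '.\azuredeploy.json'
```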



Taking input
The parameters section allows the template to take input. This input can be from the user at deployment time, from a parameters answer file or from another template. A basic parameter looks like this:
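For example (the parameter name and constraints are illustrative):

```json
"parameters": {
    "storageAccountName": {
        "type": "string",
        "minLength": 3,
        "maxLength": 24,
        "metadata": {
            "description": "Name of the storage account to create"
        }
    }
}
```

The value is then referenced elsewhere in the template as [parameters('storageAccountName')].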


Storing Variables and Transforming input
The variables section is useful for holding regularly used values or for transforming input. For example in some of my templates, I use a resources prefix which is concatenated with resource names so that related resources can be seen at a glance.

The example includes a variable to apply a tag to a resource. In this example there is only a single resource, but in more complicated templates, the variable can be reused in all of the resources.
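A sketch of that pattern (the envPrefix parameter and tag values are my own illustration):

```json
"variables": {
    "namePrefix": "[concat(parameters('envPrefix'), '-')]",
    "commonTags": {
        "environment": "[parameters('envPrefix')]"
    }
},
"resources": [
    {
        "type": "Microsoft.Network/publicIPAddresses",
        "name": "[concat(variables('namePrefix'), 'pip')]",
        "apiVersion": "2016-09-01",
        "location": "[resourceGroup().location]",
        "tags": "[variables('commonTags')]",
        "properties": {
            "publicIPAllocationMethod": "Dynamic"
        }
    }
]
```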



Producing Output
It can be useful to produce outputs from a template to get information on the resources that were created. In this example the DNS name assigned to the public IP is returned after deployment. Outputs can also be used in chained templates.
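Something like this, assuming the template contains a public IP named myPublicIP with a dnsSettings.domainNameLabel set (otherwise there is no FQDN to return):

```json
"outputs": {
    "publicIpFqdn": {
        "type": "string",
        "value": "[reference('myPublicIP').dnsSettings.fqdn]"
    }
}
```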


Creating multiple copies of a resource
A really powerful feature of ARM templates is the copy loop. This means you can create multiple, identical copies of specific resources in your template. For example you can create 3 copies of a VM using one code block. When you use a copy loop, you can reference the copyIndex() function in order to generate unique names for resources. In the following example, there is a simple copy loop to generate 3 public IPs. 
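A sketch of that loop (names and API version illustrative):

```json
{
    "type": "Microsoft.Network/publicIPAddresses",
    "name": "[concat('myPublicIP-', copyIndex())]",
    "apiVersion": "2016-09-01",
    "location": "[resourceGroup().location]",
    "copy": {
        "name": "pipCopy",
        "count": 3
    },
    "properties": {
        "publicIPAllocationMethod": "Dynamic"
    }
}
```

copyIndex() is zero-based, so this produces myPublicIP-0, myPublicIP-1 and myPublicIP-2.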



Using arrays in loops
Arrays can be combined with the copy loop in order to name resources with a list of predefined or runtime input names. In the example, the array is generated as a parameter and then indexed using the copyIndex() function. The count of the copy property can be set to the length of the array so that all array elements are indexed dynamically.
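A sketch (the parameter name and default values are illustrative):

```json
"parameters": {
    "pipNames": {
        "type": "array",
        "defaultValue": [ "web-pip", "app-pip", "db-pip" ]
    }
}
```

Then in the resource, index the array with copyIndex() and size the loop with length():

```json
"name": "[parameters('pipNames')[copyIndex()]]",
"copy": {
    "name": "pipNameCopy",
    "count": "[length(parameters('pipNames'))]"
}
```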


Running scripts in VMs using the extension resource
A nice part of Azure Resource Manager is the ability to add virtual machine extensions. One such extension is the script extension which allows the Resource Manager service to run scripts inside a virtual machine. No external access is required to the virtual machine since Azure executes the script for you. Scripts need to be hosted on an accessible web server or storage account. If there is any sensitive information in the script you should use storage keys and a key vault to access the script privately from the template. The following example just runs a publicly accessible script in the virtual machine 'myVirtualMachine'.
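A sketch of a Windows CustomScriptExtension resource (the script URL is a placeholder):

```json
{
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "myVirtualMachine/customScript",
    "apiVersion": "2015-06-15",
    "location": "[resourceGroup().location]",
    "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.8",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "fileUris": [
                "https://example.com/scripts/configure.ps1"
            ],
            "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File configure.ps1"
        }
    }
}
```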


Linking to other templates for complex solutions
Chaining templates is super useful for creating complex solutions. For instance you can create a template to build an availability group of VMs, then call that template from another template with the copy loop and parameters to build multi-tier applications. 

I have a full implementation of this on my GitHub, the implementation will loop through several templates and create multiple availability groups and VMs. Be careful if you run this as it will spawn a lot of resources.

Here is the 'master' template which calls the other templates in the solution:
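The full version is in the GitHub repo; the general shape of a nested deployment call combined with a copy loop is sketched below (the URI and parameter names are placeholders):

```json
{
    "type": "Microsoft.Resources/deployments",
    "name": "[concat('tierDeployment-', copyIndex())]",
    "apiVersion": "2016-09-01",
    "copy": {
        "name": "tierCopy",
        "count": 2
    },
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "https://example.com/templates/availabilitySet.json",
            "contentVersion": "1.0.0.0"
        },
        "parameters": {
            "tierName": {
                "value": "[concat('tier-', copyIndex())]"
            }
        }
    }
}
```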



And here are all the resources it creates!




Tuesday 10 January 2017

iDRAC Password Change PowerShell Script

I knocked up a quick script for changing the password on iDRAC cards.

This function can be looped through to change a local user password on a bunch of iDRAC cards for when the auditors come! :)

It requires that you have the racadm command-line utility installed and in $ENV:Path.
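A sketch of the approach (the attribute path below is the iDRAC7/8 "set" style; older DRACs use the config -g cfgUserAdmin syntax instead, and the user index assumed here is the usual root slot):

```powershell
# Changes the password of a local iDRAC user via racadm.
# Assumes racadm.exe is in $ENV:Path; index 2 is typically the root user.
function Set-iDracPassword {
    param(
        [Parameter(Mandatory)][string]$DracHost,
        [Parameter(Mandatory)][pscredential]$AdminCredential,
        [Parameter(Mandatory)][string]$NewPassword,
        [int]$UserIndex = 2
    )
    $adminPass = $AdminCredential.GetNetworkCredential().Password
    racadm -r $DracHost -u $AdminCredential.UserName -p $adminPass `
        set "iDRAC.Users.$UserIndex.Password" $NewPassword
}

# Loop it over a list of cards:
# Get-Content .\idracs.txt | ForEach-Object {
#     Set-iDracPassword -DracHost $_ -AdminCredential $cred -NewPassword $newPass
# }
```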
