
Flexible Cloud Platform
Building a Modern Datacenter Fabric
Flexible Cloud Platform
Rev 2.1
Author: Mikael Nystrom
Co-Author: Markus Lassfolk
Copyright: Crocus Lepus
Intel and Xeon are registered trademarks of Intel Corporation. Microsoft is a trademark of the Microsoft group of companies.

Table of Contents

The Team
    Mikael Nyström – TrueSec
    Markus Lassfolk – TrueSec
    Jorgen Brandelius – TrueSec
    Thomas Meltzer – Intel
Overview
    Overview – Design
    Overview – Hardware
        CPU
        Memory
        Network Adapter
        Bus
        Storage
        Server
    Overview – Software
        Windows Server 2012 R2
        System Center 2012 R2
        System Center Virtual Machine Manager 2012 R2
        System Center Data Protection Manager 2012 R2
        System Center Operations Manager 2012 R2
        System Center Orchestrator 2012 R2
        System Center Configuration Manager 2012 R2
        System Center Service Manager 2012 R2
        System Center Service Provider Foundation
        Windows Azure Pack
        Microsoft Operations Management Suite
        Microsoft Deployment Toolkit
Building Fabric
    The Network
        Deploying the Network
    The Management Stack
        Deploy the Management Stack
        The Management Server List
        The Laptop
        The Build Server
        Creating the Servers
            The FAADDS01 Server (Active Directory, DHCP, DNS)
            The FAADDS02 Server (Active Directory, DHCP, DNS)
            The FARRAS01 Server (Routing and Remote Access Server)
            The FARDGW01 Server (Remote Desktop Gateway Server)
            The FAFILE01 Server (File Server configuration)
            The FAMGMT01 Server (Management server configuration)
            The FADEPL01 Server (Windows Deployment Services, Automated Deployment Toolkit, Microsoft Deployment Toolkit 2013)
            The FAADCA01 Server (Certificate Authority)
            The FAWSUS01 Server (Report Viewer 2008, SQL Server Express 2012, Windows Server Update Services)
            The FASCVM01 Server (Automated Deployment Toolkit 8.1, SQL Server 2012, System Center Virtual Machine Manager 2012 R2, System Center Operations Manager 2012 R2)
            The FASCOR01 Server (SQL Server 2012, System Center Orchestrator 2012 R2)
            The FASCOM01 Server (SQL Server 2012, System Center Operations Manager 2012 R2)
            The FASCDP01 Server (SQL Server 2012, System Center Data Protection Manager 2012 R2)
            The FAADFS01 Server (Active Directory Federation Services)
            The FAWAPR01 Server (Web Application Proxy)
            The FAHOST01 Server (Hyper-V)
            The FAHOST02 Server (Hyper-V)
        Wrapping up
            Servers deployed using System Center Virtual Machine Manager 2012 R2
            Servers NOT deployed using System Center Virtual Machine Manager 2012 R2
    The Storage Stack
        Deploying the Storage Stack
        The Storage Stack Server List
        Creating the Servers
            The FASTOR01 Server (Storage)
            The FASTOR02 Server (Storage)
        Wrapping up
            Servers deployed using System Center Virtual Machine Manager 2012 R2
            Servers NOT deployed using System Center Virtual Machine Manager 2012 R2
    The Compute Stack
        Deploying the Compute Stack
        The Compute Stack Server List
        Creating the Servers
            The FAHOST11 Server (Hyper-V)
            The FAHOST12 Server (Hyper-V)
            The FAHOST13 Server (Hyper-V)
            Servers deployed using System Center Virtual Machine Manager 2012 R2
            Servers NOT deployed using System Center Virtual Machine Manager 2012 R2
    Self Service
        Deploying Self Service
        The Management Server List
        Creating the Servers
            The FAMSQL01 Server (SQL Server 2012)
            The FAAZAS01 Server (Windows Azure Pack)
            The FAAZTS01 Server (Windows Azure Pack)
            The FAAZGW01 Server (Remote Desktop Gateway)
        Wrapping up
About TrueSec
About LabCenter
About Intel
The Team
There are many people involved in a document like this and here is the short list:

Mikael Nyström – TrueSec
Principal Technical Architect, Microsoft MVP

Markus Lassfolk – TrueSec
Principal Technical Architect, Microsoft MVP

Jorgen Brandelius – TrueSec
Senior Executive Consultant

Thomas Meltzer – Intel
Enterprise Technology Specialist

Overview
The purpose of this document is to provide guidelines for building a Flexible Cloud Platform system, which is a Private Cloud built on the Microsoft virtualization stack utilizing the latest technology and features available from Microsoft.

Overview – Design
The design is based on a greenfield setup, which makes it easy and fast to build with minimal impact on the existing production environment. All pre-existing workloads are then migrated via Quick or Live Migration to the new platform to minimize downtime.

It is easy to scale and grow by adding hardware and resources to the platform with the help of Software Defined Compute, Software Defined Networking and Software Defined Storage.
Overview – Hardware
When designing and buying hardware, it is important to size all components individually and against each other. A balanced system does not have more resources than needed.

For example, having a storage solution that is running at 100% while more than 50% of the memory in the Hyper-V hosts is unused is not well balanced, since it means that there is a lot of free memory that can't be used. Another example would be a network adapter that has support for only 4 Virtual Machine Queues (VMQ), while another comparable network adapter for roughly the same cost has support for 254 VMQs. Too few VMQs will affect the performance of the virtual machines.

- Read More: http://tinyurl.com/nhjnnc3

CPU
The choice of CPU is important from various perspectives:

- Flexibility requires the ability to move virtual machines between Hyper-V hosts. Preferably use the same vendor for optimal performance and scalability.
- For Hyper-V hosts, the number of cores is generally more important than CPU speed.
- Larger L1, L2 and L3 caches are preferred.
- As a good practice, the ratio between physical cores and vCPUs should be between 6:1 and 8:1.
- The number of PCI Express lanes in the CPU is important in high performance systems.
- Two or more CPUs are usually needed to utilize all PCI slots on the motherboard.

Suggestion: Intel Xeon Processor E5 v3 Family
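As an illustration of the vCPU-per-core guidance above, the following PowerShell sketch compares the physical core count of an existing Hyper-V host with the vCPUs assigned to the virtual machines on it (a sketch only; it assumes it is run locally on a host with the Hyper-V module available):

# Count physical cores across all sockets (not hyper-threaded logical processors)
$cores = (Get-CimInstance Win32_Processor | Measure-Object -Property NumberOfCores -Sum).Sum

# Sum the vCPUs assigned to all virtual machines on this host
$vCPUs = (Get-VM | Measure-Object -Property ProcessorCount -Sum).Sum

# Compare against the suggested 6:1 to 8:1 vCPU-per-core range
"{0} physical cores, {1} assigned vCPUs, {2:N1} vCPUs per core" -f $cores, $vCPUs, ($vCPUs / $cores)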
Memory
When talking about memory, two things are important: the correct amount of memory in each server and the correct speed. It is always a good idea to use the free Microsoft Assessment and Planning (MAP) Toolkit (more info: http://tinyurl.com/k4pd6xz) to measure the current workload in regards to memory, CPU, network and storage utilization. To calculate the needed amount of RAM in a Hyper-V host, use this formula:

(Number of CPUs x Number of Cores per CPU) x 8 = X vCPUs
Total amount of RAM in Host / X vCPUs = Memory:vCPU Ratio

Example: 1 Hyper-V host with 2 CPUs using 8 cores each and 256GB of RAM:

(2 x 8) x 8 = 128 vCPUs
256 / 128 = 2 Memory:vCPU Ratio

The general recommendation is to have a Memory:vCPU ratio of 2-4, so with that number of CPUs and cores this customer could have used 256-512GB of RAM in the Hyper-V hosts.

A VM that has too little RAM will use the page file residing on the storage solution to swap memory. This will impact the overall performance both of that VM and of the storage solution, which in turn can result in degraded performance for all VMs due to unnecessary IOPS.

There is usually no need to dedicate one host for hot stand-by, but verify that there is enough free RAM in the whole cluster to run the workload if one or more hosts are offline, so the cluster is not overcommitted. When using high performance VMs like SQL or other memory intensive applications, it is important to verify NUMA.

- Calculate the expected workload to know how much RAM is needed.
- High availability clusters do need more memory due to redundancy.
- Consider the fact that memory is generally cheaper than storage in regards to IOPS.
- A higher memory speed (MHz) will increase performance.
- Talk to your hardware vendor about where the sweet spot is for RAM vs cost. It may be cheaper to buy an additional server than adding more memory to the existing ones. Though, remember to include the cost for OS and management licenses in that calculation.
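The formula above can also be expressed as a few lines of PowerShell arithmetic (a minimal sketch using the numbers from the example; substitute the values from your own hosts):

# Host from the example: 2 CPUs, 8 cores each, 256 GB of RAM
$cpus        = 2
$coresPerCpu = 8
$ramGB       = 256

# (Number of CPUs x Cores per CPU) x 8 = X vCPUs
$maxVCpus = ($cpus * $coresPerCpu) * 8        # 128 vCPUs

# Total amount of RAM in host / X vCPUs = Memory:vCPU ratio
$memoryPerVCpu = $ramGB / $maxVCpus           # 2, i.e. within the recommended 2-4 range

"Max vCPUs: $maxVCpus, Memory:vCPU ratio: $memoryPerVCpu"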
Network Adapter
Network adapters are a part of both Software Defined Networking and Software Defined Storage. That means that the network layer will be used for both traditional network traffic and for storage traffic (SMB3). It is possible to install 4x10Gbit (or more) Ethernet network adapters and use 2 for VM traffic and 2 for storage traffic, or any other combination. Keeping latency low is very important in a storage network, so consider using network equipment that supports RDMA for storage traffic.

- Network adapters that are used for VM traffic should support a high number of VMQs. For example:
  o Intel Ethernet Converged Network Adapter X520 Product Family
  o Intel Ethernet Converged Network Adapter X540 Family
  o Intel Ethernet Converged Network Adapters X710 Family
- Network adapters used for storage traffic should be at least 10Gbit or better.
- Network adapters used for storage traffic with a speed higher than 10Gbit should have support for RDMA if they are connected to a switch that supports RDMA.

When designing the network infrastructure, remember to include connectivity for Out-Of-Band network adapters. Also include 1Gbit network adapters for management if the 10/40/56Gbit Ethernet adapters do not support PXE, or if dedicated networks are used for Storage and Management.

All 10Gbit network components should use either SFP or RJ45 connections; don't mix and match. Basically, when you ask for a quote on the servers and switches, specify that you want them to use the same connectivity.

The importance of low latency cannot be stressed enough.

Bus
Especially the storage nodes (Scale-Out File Servers) need multiple expansion cards (multiple NICs and HBAs). Most modern servers need to populate CPUs in all sockets to be able to use the corresponding buses.

- Optimize the bus usage by inserting expansion cards that match the bus width and speed of the slot on the motherboard.
- Spread workloads (expansion cards) evenly across all buses. For example: one NIC and one HBA on the left side, and one NIC and one HBA on the right side.
- PCI-E 3.0 or later is recommended.

Storage
Storage is a crucial but often overlooked component in a private cloud solution. If the storage solution is slow, it will directly impact the workload (Virtual Machines) in a negative way. Even a small latency of just 3ms will be noticed inside a VM.

It is possible to use legacy solutions like a traditional SAN in different ways. The SAN can either be directly attached to the hypervisor, as done in the past, or published to the compute nodes through Scale-Out File Servers. The latter is preferred because that solution will be more flexible, as it is possible to replace, migrate and extend the Software Defined Storage solution with minimal impact on the production environment. It is also easier to control QoS (Quality of Service) between the storage and the hypervisor that way.

- Consider using Scale-Out File Servers to present your existing SAN instead of only a traditional SAN connected directly to the Hyper-V hosts.
- Storage Spaces is a first class storage solution that works well in conjunction with Scale-Out File Servers.
- If you build a Storage Spaces solution you will be your own SAN vendor and have to act like one: keep track of performance, failed disks, monitoring etc.
- Before the storage solution is enabled for production, it has to be tested for both performance and failover.
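As a hedged illustration of the Storage Spaces option mentioned above, the sketch below creates a pool from the poolable physical disks and carves out a mirrored virtual disk. The friendly names are examples only, and a production Scale-Out File Server design would also involve enclosure awareness, tiering and failover clustering:

# Find disks that are eligible for pooling (not already in use)
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool on the local Storage Spaces subsystem
New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName (Get-StorageSubSystem -FriendlyName "*Storage Spaces*").FriendlyName -PhysicalDisks $disks

# Create a mirrored, fixed-provisioned virtual disk for the file server workload
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "VDisk01" -ResiliencySettingName Mirror -ProvisioningType Fixed -UseMaximumSize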
Server
Minimize the number of models. That makes it easier to have spare parts or one spare server to replace a broken server. Focus on commodity hardware so the servers are easy to replace, and try to use off the shelf hardware rather than configure-to-order (CTO).

Upgrade the firmware in all components during installation to get full functionality and support from hardware and software vendors. Some servers are delivered with power saver (Green IT) settings enabled, which will impact performance a lot. Our strongest recommendation and best practice is to change that setting in BIOS/UEFI to OS Controlled and let Windows handle the power save settings (see the sketch after the list below), or else follow the vendor's recommendations for "Max Performance" or "Low Latency" workloads.

- The server needs to support IPMI/DCMI or SMASH for operating system deployment (OSD) using System Center Virtual Machine Manager (SCVMM).
- Blade servers are not always the best alternative; consider the ability to use Software Defined Networking and Software Defined Storage.
- Verify that there are enough slots for all expansion cards like HBAs and NICs.
- The only two roles that should use physical servers are Hyper-V hosts and Scale-Out File Servers; everything else should be virtualized.
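To follow the power recommendation above on the Windows side (with BIOS/UEFI set to OS Controlled), the active power plan can be switched to the built-in High performance plan, for example with powercfg. This is only a sketch; vendor-specific BIOS/UEFI settings still have to be changed out of band:

# Show the currently active power scheme
powercfg /getactivescheme

# Activate the built-in "High performance" plan (SCHEME_MIN is its alias)
powercfg /setactive SCHEME_MIN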
Overview – Software

Windows Server 2012 R2
Windows Server 2012 R2 is the base operating system in the Private Cloud. It runs at all levels of the system and has been designed to work in a multitenant environment. The operating system is primarily designed to be managed using PowerShell, and there are features and settings that can't be controlled through the Graphical User Interface (GUI).

Windows Server 2012 R2.

- Read More: http://tinyurl.com/ws2012r2
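One example of such a PowerShell-only setting, used here purely as an illustration, is the SMB bandwidth limit feature, which can cap for instance SMB-based Live Migration traffic. A sketch, assuming the optional feature is installed first:

# Install the SMB Bandwidth Limit feature (there is no GUI wizard for this setting)
Install-WindowsFeature FS-SMBBW

# Cap SMB-based Live Migration traffic at roughly 4 GB per second
Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 4GB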
System Center 2012 R2
System Center 2012 R2 is a datacenter management suite that consists of multiple components. When purchasing a license for the product it contains all components; it is not possible to buy them individually.

System Center 2012 R2 licensing.

System Center Virtual Machine Manager 2012 R2
System Center Virtual Machine Manager (SCVMM) is the main management component in a Private Cloud solution. SCVMM manages the Fabric (hypervisors, storage and network) as well as all virtual machines, and it is connected to Service Provider Foundation, System Center Operations Manager and System Center Orchestrator. SCVMM is one of the first systems to be deployed, as SCVMM will then be used to deploy and configure the rest of the stack.

The Management Console in SCVMM.

- Read more: http://tinyurl.com/TSCVMM2012R2
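A quick way to get a feel for what SCVMM manages is its own PowerShell module, which ships with the VMM console. A read-only sketch (FASCVM01 is the VMM server used later in this document):

# Connect to the VMM management server
Get-SCVMMServer -ComputerName "FASCVM01" | Out-Null

# List the Hyper-V hosts and the virtual machines that VMM currently manages
Get-SCVMHost
Get-SCVirtualMachine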
System Center Data Protection Manager 2012 R2
System Center Data Protection Manager (SCDPM or DPM) is the data protection (backup) component and is used in the Private Cloud to perform backup of data inside Virtual Machines, as well as from outside the Virtual Machines using snapshots directly on the Hyper-V hosts.

The Management Console in DPM.

- Read more: http://tinyurl.com/TSCDPM2012R2
System Center Operations Manager 2012 R2
System Center Operations Manager (SCOM) is the main monitoring component in the Private Cloud. With the help of agents and management packs it monitors the entire platform, both proactively and reactively. It connects directly to SCVMM to provide Performance and Resource Optimization tips (PRO Tips), and it also feeds data both to Microsoft Operations Management Suite, to enable capacity planning, and to chargeback systems.

The Management Console in SCOM.

- Read more: http://tinyurl.com/TSCOM2012R2
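As a hedged example of how SCOM can be queried from PowerShell (assuming the Operations Manager console and its PowerShell module are installed; FASCOM01 is the Operations Manager server used later in this document):

# Connect to the management group
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName "FASCOM01"

# List agents with their health state, and any alerts that are still new
Get-SCOMAgent | Select-Object DisplayName, HealthState
Get-SCOMAlert | Where-Object { $_.ResolutionState -eq 0 } | Select-Object Name, Severity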
System Center Orchestrator 2012 R2
System Center Orchestrator (SCOR, SCORCH) is the automation engine in the Fabric. Basically everything that needs to be done more than once could and should be automated. That is done by creating Runbooks (workflows).