IPP - Planning and Optimising the Green IT DC
Green IT Business Transformation Seminar
Planning & Optimising the Green IT Datacentre:
Design, Operation & Management Best
Practices, Technologies & Challenges
Pierre Ketteridge, IP Performance Ltd
1
Introduction
Yes! Of course… but only with careful planning, design and management!
2
Introduction
• The direct carbon impact (i.e. carbon footprint) of Data Centres on the environment is almost exclusively related to power consumption
• Data Centres do not (when properly designed and managed) vent hot air or polluting gases into the atmosphere – cooling should be a ‘closed system’
• There may be indirect carbon impacts through staffing levels, travel to and from site, operational maintenance and housekeeping
3
Introduction
15% of business power consumption is accounted for by Data Centres & ICT…
[Chart: breakdown of Data Centre power consumption across Cooling, IT Components and Power Distribution]
…Lighting accounts for 1-3%, depending on whether lights-out (LO) operation is implemented or not
4
Cooling
Cooling falls into two categories:
• Air Cooling
• Liquid (water) Cooling
5
Cooling> Air Cooling
Air Cooling
The traditional way of cooling a Data Centre
Computer Room:
• CRAC (Computer Room Air Conditioner)
• Water Chiller
• Cold Aisle/Hot Aisle Configuration
6
Cooling> Air Cooling
Inherent limitations of CRAC-based Air Cooling Systems:
• CRAC capacity needs to be 30% greater than the actual demand
• Limited cooling capacity (5kW – 7kW per rack)
• N+1 active equipment resilience/redundancy drives the efficiency of the cooling system down further
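A minimal sketch of how these sizing rules interact, assuming a hypothetical room of 20 racks at 6kW each, 30% CRAC oversizing and an illustrative 40kW unit size with N+1 redundancy (none of these figures are from the slides):

```python
import math

# Illustrative CRAC sizing sketch - hypothetical figures, not a design tool.
RACKS = 20
KW_PER_RACK = 6.0          # within the 5-7kW per-rack air-cooling limit
OVERSIZE_FACTOR = 1.30     # CRAC capacity ~30% above actual demand
UNIT_CAPACITY_KW = 40.0    # assumed capacity of a single CRAC unit

it_load_kw = RACKS * KW_PER_RACK                        # 120 kW of heat to reject
required_capacity_kw = it_load_kw * OVERSIZE_FACTOR     # 156 kW of CRAC capacity

units_needed = math.ceil(required_capacity_kw / UNIT_CAPACITY_KW)   # 4 units
units_installed = units_needed + 1                      # N+1 redundancy -> 5 units

print(f"IT load: {it_load_kw:.0f} kW")
print(f"CRAC capacity required: {required_capacity_kw:.0f} kW")
print(f"CRAC units installed (N+1): {units_installed}")
```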
7
Cooling> Air Cooling
Some Easy-to-Implement Air Cooling
Optimisation Suggestions:
• Hot Aisle/Cold Aisle Arrangement
• Cold Aisle Containment
• Blanking Panels
• Raised Floor Brush Strips
• Underfloor, Inter- and Intra-rack Cable Management
• Free Air Cooling
8
Cooling> Air Cooling> Hot Aisle/Cold Aisle
• With no hot aisle/cold aisle arrangement, returning heated air mixes with the CRAC-cooled air and cooling to the DC CR equipment is impaired. There is also the issue of bypass cold airflow, which can impact chiller operation.
• With a hot aisle/cold aisle arrangement, chilled air is forced out into the front-of-cabinet-facing cold aisles, across the equipment surface, and warm air is channelled out into the rear-of-cabinet-facing hot aisle for return to the chiller/CRAC.
9
Cooling> Air Cooling> Hot Aisle/Cold Aisle
• Ineffective positioning of CRACs impairs the airflow around the DC CR.
• CRACs along the side walls are too close to the equipment racks and will cause the airflow to bypass the floor vents in those cold aisles.
• Place cooling units at the end of the equipment rows, not mid-row.
• CRACs should be aligned with the hot aisles to prevent hot/cold aisle airflow crossover, which not only increases the temperature of the air supplied to the rack fronts but can also trigger the cooling unit to throttle back, reducing cooling overall.
• Limit maximum cooling unit throw distance to 50'
10
Cooling> Air Cooling> Hot Aisle/Cold Aisle
Separation of High-density Racks
• Air cooling systems become ineffective when high-density racks are co-located
• “Borrowing” of adjacent rack cooling capacity is not possible in this configuration
• An alternative (other than self-contained cooling) is to spread out high-density racks to maintain the cooling averages
• Obviously this is not always practical – witness the prevalence of blade server and virtualisation technologies, which draw two to five times the per-rack power of traditional servers
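A rough sketch of the “spread out the high-density racks” idea, assuming an illustrative 6kW per-rack air-cooling limit and hypothetical rack loads (not figures from the slides):

```python
# Illustrative sketch: spreading high-density racks so the *average* per-rack
# load stays within what the air-cooling system can deliver (hypothetical figures).
AIR_COOLING_LIMIT_KW = 6.0   # mid-point of the 5-7kW per-rack air-cooling range

def max_blade_racks(row_positions: int, blade_kw: float, standard_kw: float) -> int:
    """How many blade racks can sit in a row of `row_positions` racks while
    keeping the average per-rack heat load within the air-cooling limit."""
    for blades in range(row_positions, -1, -1):
        avg = (blades * blade_kw + (row_positions - blades) * standard_kw) / row_positions
        if avg <= AIR_COOLING_LIMIT_KW:
            return blades
    return 0

# e.g. a 10-rack row mixing 20kW blade racks with 3kW traditional racks:
print(max_blade_racks(10, blade_kw=20.0, standard_kw=3.0))   # -> 1
```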
11
Cooling> Air Cooling> Cold Aisle Containment
Cold Aisle Containment
• Very simple to deploy/retrofit
• Hot and cold aisles physically separated
• Greater watts per rack (approx. 10kW)
• Oversizing of the CRAC is reduced
• CRAC efficiency is increased due to a higher delta T
• CRAC fan speed can be reduced (see the sketch below), which provides:
  - Reduced running costs
  - Increased MTBF
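The running-cost point follows from the fan affinity laws, under which fan power scales roughly with the cube of fan speed; a small illustrative calculation (not from the slides):

```python
# Fan affinity law sketch: fan power scales roughly with the cube of fan speed.
def fan_power_fraction(speed_fraction: float) -> float:
    """Approximate fraction of full fan power at a given fraction of full speed."""
    return speed_fraction ** 3

# e.g. running CRAC fans at 80% of full speed:
print(f"{fan_power_fraction(0.8):.0%} of full fan power")   # -> 51%
```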
12
Cooling> Air Cooling> Blanking Panels
• Reduction and stabilisation of equipment air-intake temperatures
• Elimination or reduction of the number and severity of hotspots
• Increased availability, performance and reliability of IT equipment, especially in the top one-third of the equipment cabinet
• Elimination of exhaust air recirculation within the cabinet, optimising cooling and reducing energy consumption and OpEx
• Deferral of CapEx (additional cooling capacity)
• The potential to green the data centre by reducing its carbon footprint
13
Cooling> Air Cooling> Raised Floor Brush Strips
Raised Floor Brush Grommets
• Cable openings allow approx. 60% of conditioned air to escape
• Use brush grommets to seal every cabling entry point
• Increases static pressure in the under-floor plenum – ensures that the DC airflow remains at a pressure above atmospheric
• Extends the reach of the Hot Aisle/Cold Aisle system
• Self-sealing and interwoven closure system
• Brush grommets can be installed as the DC is commissioned, or retro-fitted
• No changes to existing wiring configuration
• Fits into the raised floor tiles prior to cabinet installation
• Simple
• Inexpensive
14
Cooling> Air Cooling> Cable Management
Cable Management – Intra-rack, Inter-rack and Underfloor
• Airflow within racks is also affected by unstructured cabling arrangements
• Deployment of high-density servers creates new problems in cable management
• Cut data cables and power cords to the correct length – use patch panels where appropriate
• Equipment power should be fed from rack-mounted PDUs
• Raised floor/subfloor plenum ducting carries other services apart from airflow:
  – Data cabling, power cabling, water pipes/fire detection & extinguishing systems
• Remove unnecessary or unused cabling – old cabling is often abandoned beneath the floor, particularly in high-churn/turnover Co-Lo facilities
• Spread power cables out on the subfloor, under the cold aisle, to minimise airflow restrictions
• Run subfloor data cabling trays at the stringer level in the hot aisle, or at an “upper level” in the cold aisle, to keep the lower space free to act as the cooling plenum
15
Cooling> Air Cooling> Free Air Cooling
What is Free Cooling?
[Diagram: free cooling arrangement – roof-mounted free air cooler, chiller unit and DC CRAC]
16
Cooling> Air Cooling> Free Air Cooling
Average UK Temperatures
[Chart: average UK day and night temperatures by month, in degrees C]
17
Cooling> Air Cooling> Free Air Cooling
Budgetary Example – Projected Cost of Running the
System for a Year
[Diagram: example free cooling system temperatures, approx. 20°C and 15°C]

Not using the Free Cooler
• Chiller capacity: 150 kW
• Energy needed to run the chiller: 62 kW
• Number of hours running per year: 8,784
• Cost per kWh: £0.0784
Total cost of running per year: £42,697.00

100% free cooling for 70% of the year
• Chiller capacity: 150 kW
• Energy needed to run the chiller: 62 kW
• Number of hours running per year: 2,580
• Cost per kWh: £0.0784
• Cost of running the chiller: £12,540.00
• Cost of running the free cooler (10.4 kW): £5,058.00
Total cost of running per year: £17,599.00
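The arithmetic behind these figures is easy to reproduce; a minimal sketch, assuming the free cooler (10.4kW) runs for all of the hours that the chiller does not:

```python
# Reproduce the budgetary example above (illustrative arithmetic only).
TARIFF = 0.0784            # £ per kWh
HOURS_PER_YEAR = 8784      # leap year, as used in the example
CHILLER_KW = 62.0          # energy needed to run the chiller
FREE_COOLER_KW = 10.4      # energy needed to run the free cooler

# Without free cooling: the chiller runs all year.
cost_no_free = CHILLER_KW * HOURS_PER_YEAR * TARIFF          # ~£42,697

# With 100% free cooling for 70% of the year: chiller runs ~2,580 hours,
# and we assume the free cooler covers the remaining hours.
chiller_hours = 2580
free_hours = HOURS_PER_YEAR - chiller_hours
cost_chiller = CHILLER_KW * chiller_hours * TARIFF           # ~£12,540
cost_free = FREE_COOLER_KW * free_hours * TARIFF             # ~£5,058

print(f"No free cooling:   £{cost_no_free:,.0f} per year")
print(f"With free cooling: £{cost_chiller + cost_free:,.0f} per year")
```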
18
Cooling> Liquid Cooling
High Density Data Centres and Liquid Cooling
• When going above 10kW per rack, a new, more targeted/directed cooling method is required
• The most common method is Water Cooling
19
Cooling> Liquid Cooling
So What is Liquid – or Water – Cooling?
• Delivery of chilled water to multiple heat exchange points from a central unit
• The central unit circulates water from the building's existing chilled water loop
• Heat exchange units in rear doors (one per cabinet, capacity 30kW) or side doors (2 x dual cabinet resilience, 2 x 15kW)
• Heat is carried away in the water – air is ejected back out into the DC at the same temperature it entered the rack – zero thermal footprint
20
Cooling> Liquid Cooling
Why Use Water Cooling?
• Water can carry approx. 3,500 times more heat than air by volume (see the calculation below)
• Air cooling only delivers 5-7kW of cooling per rack (10kW with a hot aisle/cold aisle arrangement)
• High Density DCs place increasing power and thermal control demands on the infrastructure
• Blade servers – up to 80 servers in a standard 42u cabinet, and anything from 80 to 800 virtual machines!
• A fully-loaded blade server rack can produce 25kW of heat
• Water Cooling can deliver 30kW of cooling to a fully-loaded 42u rack
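The 3,500x figure can be sanity-checked from standard volumetric heat capacities (the property values below are textbook figures, not from the slides):

```python
# Back-of-envelope check using textbook volumetric heat capacities.
water_j_per_m3_k = 4186 * 1000     # ~4,186 J/(kg.K) x ~1,000 kg/m3 for water
air_j_per_m3_k = 1005 * 1.2        # ~1,005 J/(kg.K) x ~1.2 kg/m3 for air

ratio = water_j_per_m3_k / air_j_per_m3_k
print(f"Water carries ~{ratio:,.0f}x more heat per cubic metre per degree than air")
# -> ~3,471x, i.e. roughly the 3,500x quoted
```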
21
Cooling> Liquid Cooling
Adding the benefits of Free Cooling, some CapEx/OpEx implications of Water Cooling:
• Water cooling has a slightly higher install cost (more terminations/pipework)…but greater kW per sq ft gives us:
  – 35-45% reduction in required real estate
  – 15-30% reduction in overall construction costs
  – 10-20% reduction in total annual fan power consumption
  – 12-14% reduction in power delivered to the mechanical chilled water plant
• For an average-efficiency data centre, annual savings of £22,000 and £80,000 for small and large data centres respectively
• Significant when the design life of the data centre is 10 years (see the projection below)
• A reduction in energy is a reduction in costs and also a reduction in your carbon footprint
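Projected over the stated 10-year design life, and assuming the quoted annual savings stay flat (a simplification), the cumulative picture is:

```python
# Illustrative only: project the quoted annual savings over the 10-year design life.
DESIGN_LIFE_YEARS = 10
annual_savings = {"small data centre": 22_000, "large data centre": 80_000}  # GBP per year

for size, saving in annual_savings.items():
    print(f"{size}: £{saving * DESIGN_LIFE_YEARS:,} over {DESIGN_LIFE_YEARS} years")
# -> small data centre: £220,000 over 10 years
# -> large data centre: £800,000 over 10 years
```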
22
Network Components
Active Equipment (Networking)
• Switches
• Routers
• Appliances
– Load balancers
– Caching/Proxying
– Bandwidth Management
– Application Acceleration & Optimisation
23
Network Components> Ethernet LAN Switches
Data Centre Switch Requirements
• Port density
• Feeds & Speeds
• Performance
• Functionality
• Capabilities
• Feature set
• Resilience/Redundancy
• Security
• Price
• Power consumption/Heat output
24
Network Components> Ethernet LAN Switches
Data Centre Switch Requirements
Optimised for the environment:
• High port density per chassis
• Low power consumption
Optimised for the application:
• Availability
• High performance
• Low latency
25
Network Components> Ethernet LAN Switches
26
Network Components> Ethernet LAN Switches
Ethernet Switch Power Consumption – A Comparative Example: 15,000 User Network

Solution              ALU Configuration       Cisco Configuration
LAN edge 48 ports     216 x OS6850            216 x Catalyst 3750
LAN edge 24 ports     160 x OS6850            160 x Catalyst 3750
LAN aggregation       40 x OS9000             40 x Catalyst 4500
LAN core              8 x OS9000 Chassis      8 x Catalyst 6500

Delta power consumption, Cisco vs A-L: 54 kW + 48 kW = 102 kW in total

Across an installed network base of 15,000 ports, it was possible to save 102 kW, resulting in:
• Lower Power Consumption
• Less Cooling Equipment
• Smaller Batteries
• Smaller Data Centers
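A rough annualisation of that 102kW saving, assuming 24x7 operation and reusing the £0.0784/kWh tariff from the free-cooling example earlier (both are assumptions for illustration, and the knock-on cooling savings are ignored):

```python
# Rough annualisation of the 102 kW saving (illustrative assumptions only).
POWER_SAVING_KW = 102
HOURS_PER_YEAR = 8760          # 24x7 operation assumed
TARIFF = 0.0784                # £ per kWh, reused from the free-cooling example

energy_saved_kwh = POWER_SAVING_KW * HOURS_PER_YEAR        # ~893,520 kWh
cost_saved = energy_saved_kwh * TARIFF                     # ~£70,052

print(f"Energy saved per year: {energy_saved_kwh:,.0f} kWh")
print(f"Cost saved per year:   £{cost_saved:,.0f}")
```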
27
Network Components> WAN Routers
Routers
• Look at power consumption figures/thermal output
• Deploy shared WAN architecture – MPLS, VPLS, IP VPNs
• Investigate leveraging and integrating bandwidth
optimisation and application acceleration technologies
28
Network Components> Appliances> Load Balancing
LAN/WAN Optimisation Appliances
…an area where we can make a difference, in the way in which technologies
are deployed to optimise LAN/WAN bandwidth usage and availability of backend servers.
An excellent example would be application delivery, traffic management and
web server load balancers:
• High Performance through acceleration techniques
• High Availability
29
Network Components> Appliances> DPI Bandwidth Management
More LAN/WAN Optimisation Options…
DPI Bandwidth Management solutions:
• Inspection, Classification, Policy Enforcement
and Reporting on all traffic:
– Identification - application signature, TCP/UDP port,
protocol, source/destination IP addresses, URL
– Classification – CoS/ToS (IP Prec/Diffserv
CodePoint/DSCP); user-defined QoS policy
– Enforcement based on user-defined policy
– Reporting – real-time (RT) and long-term – extremely valuable for SLAs/SLGs in DC environments
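As a toy illustration of the classification step only, a hypothetical DSCP-to-class mapping (real DPI products classify on application signatures, ports, addresses and URLs as listed above):

```python
# Toy DSCP-based classifier (hypothetical mapping, for illustration only).
DSCP_CLASS_MAP = {
    46: "voice",              # EF
    34: "video",              # AF41
    26: "business-critical",  # AF31
    0:  "best-effort",        # default
}

def classify(dscp: int) -> str:
    """Map a packet's DSCP value to a user-defined traffic class."""
    return DSCP_CLASS_MAP.get(dscp, "best-effort")

print(classify(46))   # -> voice
print(classify(18))   # -> best-effort (unknown DSCP falls through to the default)
```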
30
Network Components> Appliances> WAN Optimisation
LAN/WAN Optimisation Options (cont’d)
WAN optimisation and application acceleration:
• Usually deployed as a reverse proxy device
• Provides some form of bandwidth management
• Protocol optimisation – making LAN protocols more latency-tolerant
  – e.g. TCP handshake spoofing
• Object caching
  – Files, videos, web content, locally cached and served
• Byte caching
  – Repetitive traffic streams, hierarchically indexed and tagged (inline only)
• Compression
  – (inline only)
• Proxy support for common protocols
  – HTTP, CIFS, SSL (HTTPS), FTP, MAPI, P2P, MMS, RTSP, QT, TCPTunnel, DNS etc.
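A highly simplified sketch of the object-caching idea – an in-memory cache keyed by URL with a fetch fallback. `fetch_from_origin` is a hypothetical stand-in for the real origin request; real appliances add TTLs, validation, eviction and protocol awareness:

```python
# Minimal object-cache sketch (illustrative only).
from typing import Callable, Dict

class ObjectCache:
    def __init__(self, fetch_from_origin: Callable[[str], bytes]):
        self._store: Dict[str, bytes] = {}
        self._fetch = fetch_from_origin

    def get(self, url: str) -> bytes:
        """Serve from the local cache if possible; otherwise fetch and cache."""
        if url not in self._store:
            self._store[url] = self._fetch(url)   # cache miss: go to the origin
        return self._store[url]                   # cache hit: served locally

# Usage with a dummy origin fetcher:
cache = ObjectCache(lambda url: f"content of {url}".encode())
cache.get("http://example.com/video.mp4")   # miss - fetched from the 'origin'
cache.get("http://example.com/video.mp4")   # hit  - served from the cache
```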
31
Network Components> Appliances> WAN Optimisation
LAN/WAN Optimisation Options (cont’d)
WAN optimisation and application acceleration:
• Reverse Proxy
• Bandwidth Management
• Protocol optimisation – for latency-intolerant
LAN protocols
– e.g. TCP handshake spoofing
• Object caching
• Byte caching
• Compression (inline only)
• Proxy support for all/most common protocols
32
Infrastructure Management
Managing the Data Centre Infrastructure
“Lights Out” operation requires…
• Little or no human intervention
• Exceptions:
  – Planned maintenance
  – Fault rectification/management (emergency maintenance/repair)
  – Physical installs/removals
  – Housekeeping (cable management, moves/adds/changes)
  – Cleaning
• How are you going to control it? How are you going to manage it?
33
Infrastructure Management
Remote Control and Management
• RDC, VNC – In Band Management
• Console Servers – Out of Band Management
• KVM switching (local/remote)
• KVM/IP switching & USB2 VM Remote Drive Mapping
• IPMI Service Processor OOB Management
• Intelligent Power Management (iPDUs)
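As a small illustration of out-of-band service processor management, a sketch that shells out to the standard ipmitool utility to read a server's chassis power state (the hostname and credentials are placeholders, and ipmitool plus IPMI-over-LAN enabled on the BMC are assumed):

```python
# Sketch: query a server's power state out-of-band via its BMC using ipmitool.
# Hostname and credentials are placeholders; requires ipmitool and IPMI-over-LAN.
import subprocess

def chassis_power_status(host: str, user: str, password: str) -> str:
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
         "chassis", "power", "status"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()   # e.g. "Chassis Power is on"

print(chassis_power_status("bmc-rack01-u12.example.net", "admin", "secret"))
```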
34
Infrastructure Management
[Diagram: remote infrastructure management technologies and their targets]
• Service Processor Management (closed-loop, in-band or out-of-band) – IPMI, iLO, DRAC etc., SMASH CLP
• Intelligent Power Management (iPDU)
• Console Server Management (routers, switches, appliances)
• KVM/KVM-over-IP (servers, blade servers, management PCs, appliance management devices)
• VNC/RDC
35
Summary
Summary - Cooling
• Data Centre “Greening” is mainly down to managing
power consumption
• Cooling is the biggest consumer of power (50%)
• Optimise your air-cooled CRAC system:
  – Cold Aisle/Hot Aisle arrangement
  – Cold Aisle containment
  – Blanking Panels
  – Raised floor/underfloor brush strips/grommets
  – Free air cooling system
36
Summary
Summary – Cooling (Cont’d)
• If deploying high-density blade servers/virtualisation, consider water cooling (maximum cooling per rack rises from 5-10kW to 30kW)
• Targeted control
• Even distribution of cooling
• Full (42u) rack utilisation
• Zero thermal footprint – design flexibility
• Remember free air cooling reduces costs further
• Real Estate savings
37
Summary
Summary - Active Equipment (Networking)
Switches:
• high port density, low power consumption, PSU disconnect/fanless
operation
• Extrapolate power consumption over entire port count
Routers:
• Modular architecture, high density, low power consumption
• Make full use of available bandwidth
– Shared services: IP VPN, point-to-multipoint or meshed MPLS
– Use/honour QoS marking
– Deploy Bandwidth optimisation techniques
38
Summary
Summary - Active Equipment (Networking) – Cont’d
Appliances:
• Load Balancing – maximise performance, utilisation and availability of server resources
• DPI Bandwidth Management and WAN Optimisation – maximise performance, utilisation and availability of WAN resources
39
Summary
Summary – Infrastructure Management
• Remote Infrastructure Control and Management enables “lights-out”
operation
• Remote console management gives CLI access to network
infrastructure – routers, switches, firewalls, other network
optimisation appliances
• KVM-over-IP allows remote, distributed control of server and
workstation systems
• Service Processor Management allows remote control and
management of system processor and environmental
monitors/controls
• Intelligent Power Management enables remote monitoring, control
and management of PDUs, UPS and battery backup resources
40
Close
THANK YOU
Pierre Ketteridge, IP Performance Ltd
[email protected]
[email protected]
www.ip-performance.co.uk
41