
Computer Structure
The Uncore
Personal Computer System
[Block diagram: the processor die contains four cores, each with an LLC slice, the integrated Graphics, and the System Agent (Display Port, ×16 PCIe, DMI, and an IMC driving 2ch DDR3). The System Agent connects over DMI ×4 and FDI to the PCH (Platform Controller Hub), which hosts USB, SATA, HDMI, the audio codec (Line out), BIOS, and additional PCIe lanes.]
3rd Generation Intel Core™
 22nm process
 Quad core die, with Intel HD Graphics 4000
 1.4 billion transistors
 Die size: 160 mm²
The Uncore Subsystem
 High bandwidth bi-directional ring bus
– Connects the IA cores and the various uncore sub-systems
 The uncore subsystem includes
– The System Agent (SA)
– The Graphics Unit (GT)
– The Last Level Cache (LLC)
 In Intel Xeon processors (used in servers)
– No Graphics Unit
– Instead it contains many more components:
 An LLC with larger capacity and snooping capabilities to support multiple processors
 Intel® QuickPath Interconnect (QPI) interfaces that can support multi-socket platforms
 Power management control hardware
 A system agent capable of supporting high-bandwidth traffic from memory and I/O devices
From the Optimization Manual
Scalable Ring On-die Interconnect
• Ring-based interconnect between Cores, Graphics, LLC and
System Agent domain
• Composed of 4 rings
– 32 Byte Data ring, Request ring,
Acknowledge ring, and Snoop ring
– Fully pipelined at core frequency/voltage:
bandwidth, latency and power scale with cores
• Massive ring wire routing runs over the LLC
with no area impact
• Access on the ring always picks the shortest path, minimizing latency
• Distributed arbitration, ring protocol handles
coherency, ordering, and core interface
• Scalable to servers with a large number of processors
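
To make "always picks the shortest path" concrete, here is a minimal C sketch (the stop count and numbering are assumptions of mine, not Intel's implementation) of how a ring agent could choose the direction with fewer hops:

#include <stdio.h>

/* Hypothetical bi-directional ring with NUM_STOPS stations
 * (cores, LLC slices, Graphics, System Agent). */
#define NUM_STOPS 8

typedef enum { CLOCKWISE, COUNTER_CLOCKWISE } ring_dir;

/* Pick the travel direction with fewer hops from src to dst. */
static ring_dir pick_direction(int src, int dst, int *hops) {
    int cw  = (dst - src + NUM_STOPS) % NUM_STOPS;  /* hops clockwise */
    int ccw = (src - dst + NUM_STOPS) % NUM_STOPS;  /* hops counter-clockwise */
    *hops = (cw <= ccw) ? cw : ccw;
    return (cw <= ccw) ? CLOCKWISE : COUNTER_CLOCKWISE;
}

int main(void) {
    int hops;
    ring_dir d = pick_direction(1, 6, &hops);
    printf("direction=%s, hops=%d\n", d == CLOCKWISE ? "cw" : "ccw", hops);
    return 0;
}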
High Bandwidth, Low Latency, Modular
Foil taken from IDF 2011
Last Level Cache – LLC
 The LLC consists of multiple cache slices
– The number of slices is equal to the number of IA cores
– Each slice contains a full cache port that can supply 32 bytes/cycle
 Each slice has a logic portion and a data array portion
– The logic portion handles
 Data coherency and memory ordering
 Access to the data array portion
 LLC misses and write-backs to memory
– The data array portion stores cache lines
 May have 4/8/12/16 ways
 Corresponding to a 0.5MB/1MB/1.5MB/2MB slice size
 The GT sits on the same ring interconnect
– Uses the LLC for its data operations as well
– May in some cases compete with the cores for the LLC
From the Optimization Manual
Ring Interconnect and LLC
 Physical addresses are distributed among the cache slices
– Addresses are uniformly distributed using a hash function
– From the cores' and the GT's view, the LLC acts as one shared cache
 With multiple ports, and bandwidth that scales with the number of cores
– The number of cache slices increases with the number of cores
 The ring and LLC are thus not likely to be a BW limiter to core operation
– The LLC hit latency, ranging between 26-31 cycles, depends on the core's location relative to the LLC block (how far the request needs to travel on the ring)
 Traffic that cannot be satisfied by the LLC
– LLC misses, dirty line write-backs, non-cacheable operations, and MMIO/IO operations
– Travels through the cache-slice logic portion and the ring to the IMC in the system agent
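
The actual slice-selection hash is not published; the C sketch below only illustrates the idea of spreading line addresses uniformly across slices, using an assumed XOR-fold of the address bits above the 64-byte line offset:

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: map a physical address to an LLC slice by
 * folding the bits above the 64-byte line offset. The real hash
 * is undocumented and certainly different. */
static unsigned llc_slice(uint64_t paddr, unsigned num_slices) {
    uint64_t line = paddr >> 6;   /* discard the line offset */
    line ^= line >> 12;           /* fold in higher address bits */
    line ^= line >> 24;
    return (unsigned)(line % num_slices);
}

int main(void) {
    for (uint64_t a = 0; a < 8 * 64; a += 64)   /* 8 consecutive lines */
        printf("line 0x%03llx -> slice %u\n",
               (unsigned long long)a, llc_slice(a, 4));
    return 0;
}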
From the Optimization Manual
Cache Box
• Interface block
– Between Core/Graphics/Media and the Ring
– Between the Cache controller and the Ring
– Implements the ring logic, arbitration, and the cache controller
– Communicates with the System Agent for LLC misses, external snoops, and non-cacheable accesses
• Full cache pipeline in each cache box
– Physical Addresses are hashed at the source
to prevent hot spots and increase bandwidth
– Maintains coherency and ordering for the
addresses that are mapped to it
– LLC is fully inclusive, and eliminates
unnecessary snoops to cores
– Per core “Core Valid bit” indicates if core
needs to be snooped for a given cache line
• Runs at core voltage/frequency, scales with Cores
Distributed coherency & ordering;
Scalable Bandwidth, Latency & Power
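
As a rough model of the Core Valid bits (the structure and names below are mine, not Intel's): the inclusive LLC can keep one bit per core on each line, and an access snoops only the cores whose bit is set:

#include <stdint.h>
#include <stdio.h>

#define NUM_CORES 4

/* Simplified inclusive-LLC line: core_valid bit i set means core i
 * may hold a copy of the line in its private L1/MLC caches. */
typedef struct {
    uint64_t tag;
    uint8_t  core_valid;
} llc_line;

/* Record that core c filled the line into its private caches. */
static void on_core_fill(llc_line *l, int c) { l->core_valid |= 1u << c; }

/* On an LLC hit, snoop only cores (other than the requester) whose
 * valid bit is set; if none is set, no snoop is needed at all. */
static uint8_t cores_to_snoop(const llc_line *l, int requester) {
    return l->core_valid & (uint8_t)~(1u << requester);
}

int main(void) {
    llc_line l = { 0x1234, 0 };
    on_core_fill(&l, 2);
    printf("snoop mask for core 0: 0x%x\n", cores_to_snoop(&l, 0)); /* 0x4 */
    return 0;
}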
Foil taken from IDF 2011
LLC Sharing
• LLC is shared among all Cores, Graphics and Media
– Graphics driver controls which streams are cached/coherent
– Any agent can access all data in the LLC, independent of who
allocated the line, after memory range checks
• Controlled LLC way allocation mechanism
prevents thrashing between Core/GFX
• Multiple coherency domains
– IA Domain (fully coherent via cross-snoops)
– Graphics domain (graphics virtual caches, flushed to the IA domain by the graphics engine)
– Non-Coherent domain (display data, flushed to memory by the graphics engine)
Much higher Graphics performance,
DRAM power savings, more DRAM BW
available for Cores
Foil taken from IDF 2011
Cache Hierarchy
Level          | Capacity | Ways   | Line Size (bytes) | Write Update Policy | Inclusive of Lower Levels | Latency (cycles) | Bandwidth (bytes/cycle)
L1 Data        | 32KB     | 8      | 64                | Write-back          | –                         | 4                | 2×16
L1 Instruction | 32KB     | 8      | 64                | N/A                 | –                         | –                | 1×16
MLC            | 256KB    | 8      | 64                | Write-back          | No                        | 12               | 1×32
LLC            | Varies   | Varies | 64                | Write-back          | Yes                       | 26-31            | 1×32
 The LLC is inclusive of all cache levels above it
– Data contained in the core caches must also reside in the LLC
– Each LLC cache line holds an indication of the cores that may have
this line in their MLC and L1 caches
 Fetching data from the LLC when another core has the data
– Clean hit – the data is not modified in the other core – 43 cycles
– Dirty hit – the data is modified in the other core – 60 cycles
From the Optimization Manual
The GFX request
[Animated slide: the GFX and both cores issue read requests (DRd) that travel on the ring (AD/BL/AK/IV rings) to the LLC slice that owns the address. The requests hit in the LLC, so the slice sends a GO (global observation) acknowledge to each requester and returns the cache line in two chunks (D1, D2) of half a cache line each. The Core Valid Bits indicate that no snoop is needed, so the data is returned directly. The diagram also shows the System Agent (iMPH), the memory controller (MC) with DDR, and the PEG/DMI ports.]
Data Prefetch to MLC and LLC
 Two HW prefetchers fetch data from memory to the MLC and LLC
– Prefetch the data to the LLC
– Typically data is brought also to the MLC
 Unless the MLC is heavily loaded with missing demand requests
 Spatial Prefetcher
– For every line fetched to the MLC, prefetch the next sequential line
 Streamer Prefetcher
– Monitors read requests from the L1 caches for ascending and descending sequences of addresses
 L1 D$ requests: loads, stores, and the L1 D$ HW prefetcher
 L1 I$ code fetch requests
– When a forward or backward stream of requests is detected
 The anticipated cache lines are prefetched
 Prefetches cannot cross a 4K page boundary (same physical page) – see the sketch below
From the Optimization Manual
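
The practical upshot: sequential traversals let the spatial and streamer prefetchers run ahead, while page-sized strides defeat them. A small illustration (a sketch; the stride choice is mine):

#include <stddef.h>

/* Sequential walk: an ascending address stream that the streamer
 * detects and prefetches ahead of the demand loads. */
long sum_sequential(const long *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* 4KB-stride walk: every access touches a new physical page, and
 * since the streamer does not cross 4K page boundaries, no stream
 * ever gets established - each access pays the full miss latency. */
long sum_page_stride(const long *a, size_t n) {
    const size_t stride = 4096 / sizeof(long);
    long s = 0;
    for (size_t i = 0; i < n; i += stride)
        s += a[i];
    return s;
}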
Data Prefetch to MLC and LLC
 Streamer Prefetcher Enhancement
– The streamer may issue two prefetch requests on every MLC lookup
 Runs up to 20 lines ahead of the load request
– Adjusts dynamically to the number of outstanding requests per core
 Not many outstanding requests → prefetch further ahead
 Many outstanding requests → prefetch to LLC only, and less far ahead
– When cache lines are far ahead
 Prefetch to LLC only and not to the MLC
 Avoids replacement of useful cache lines in the MLC
– Detects and maintains up to 32 streams of data accesses
 For each 4K byte page, can maintain one forward and one backward stream (a toy model follows below)
From the Optimization Manual
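
The toy model below mimics the bookkeeping just described (the table layout and field names are assumptions; real hardware is far more involved): up to 32 tracked streams, each confined to one 4KB page:

#include <stdint.h>

#define MAX_STREAMS 32
#define LINES_PER_PAGE 64   /* 4KB page / 64B lines */

typedef struct {
    uint64_t page;       /* 4KB page being tracked */
    int      last_line;  /* last line index seen in the page */
    int      dir;        /* +1 forward, -1 backward, 0 unknown */
    int      valid;
} stream;

static stream table[MAX_STREAMS];

/* Observe an L1 read-miss address; return the next line address to
 * prefetch, or 0 if no stream direction is established yet. */
uint64_t streamer_observe(uint64_t paddr) {
    uint64_t page = paddr >> 12;
    int line = (int)((paddr >> 6) & (LINES_PER_PAGE - 1));
    for (int i = 0; i < MAX_STREAMS; i++) {
        stream *s = &table[i];
        if (!s->valid || s->page != page) continue;
        s->dir = (line >= s->last_line) ? +1 : -1;   /* ascending/descending */
        s->last_line = line;
        int next = line + s->dir;
        if (next < 0 || next >= LINES_PER_PAGE)
            return 0;                                /* never cross the 4K page */
        return (page << 12) | ((uint64_t)next << 6);
    }
    for (int i = 0; i < MAX_STREAMS; i++)            /* allocate a new stream */
        if (!table[i].valid) {
            table[i] = (stream){ page, line, 0, 1 };
            break;
        }
    return 0;
}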
The System Agent
• Contains PCI Express, DMI, Memory Controller, Display Engine…
• Contains the Power Control Unit
– Programmable uController, handles all power management and reset functions in the chip
• Smart integration with the ring
– Provides cores/Graphics/Media with high BW, low latency to DRAM/IO for best performance
– Handles IO-to-cache coherency
• Separate voltage and frequency from ring/cores, Display integration for better battery life
• Extensive power and thermal management for PCI Express* and DDR
Smart I/O Integration
Foil taken from IDF 2011
System Agent Components
 PCIe controllers that connect to external PCIe devices
– Support different configurations: ×16+×4, ×8+×8+×4, ×8+×4+×4+×4
 DMI (Direct Media Interface) controller
– Connects to the PCH (Platform Controller Hub)
 Integrated display engine
– Handles delivering the pixels to the screen
 Flexible Display Interface (FDI)
– Connects to the PCH, where the display connectors (HDMI, DVI) are attached
 DisplayPort (used for the integrated display)
– e.g., a laptop's LCD
 Integrated Memory Controller (IMC)
– Connects to and controls the DRAM
 An arbiter that handles accesses from the Ring and from I/O (PCIe & DMI)
– Routes the accesses to the right place
– Routes main memory traffic to the IMC
The Memory Controller
 The memory controller supports two channels of DDR
– Data rates of 1066MHz, 1333MHz and 1600MHz
 Each channel has its own resources
– Handles memory requests independently
– Contains a 32 cache-line write-data-buffer
– Supports 8 bytes per cycle
– A hash function distributes addresses between the channels
 Attempts to balance the load between the channels in order to achieve maximum bandwidth and minimum hotspot collisions
 For best performance
– Populate both channels with equal amounts of memory
 Preferably the exact same types of DIMMs
– Use the highest supported speed DRAM, with the best DRAM timings
From the Optimization Manual
The Memory Controller
 High-performance out-of-order scheduler
– Attempts to maximize memory bandwidth while minimizing latency
– Writes to the memory controller are considered completed when they are
written to the write-data-buffer
– The write-data-buffer is flushed out to main memory at a later time, not
impacting write latency
 Partial writes are not handled efficiently in the memory controller
– They may result in read-modify-write operations on the DDR channel
 If the partial writes do not complete a full cache line in time
– Software should avoid creating partial write transactions whenever possible
 E.g., buffer the partial writes into full cache-line writes (see the sketch below)
 iMC also supports high-priority isochronous requests
– E.g., USB isochronous, and Display isochronous requests
 High bandwidth of memory requests from the integrated display
engine takes up some of the memory bandwidth
– Impacts core access latency to some degree
From the Optimization Manual
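
A minimal software-side sketch of that advice (the staging structure is mine): collect scattered small writes in a cache-line-sized buffer and emit only complete 64-byte lines, so the controller rarely sees a partial write:

#include <stdint.h>
#include <string.h>

#define LINE 64

/* One-line write-combining buffer. 'off' values are byte offsets
 * into the destination buffer 'dst'; writes are assumed not to
 * cross a 64-byte line boundary. */
typedef struct {
    uint8_t  data[LINE];
    uint64_t base;     /* line-aligned offset of the staged line */
    uint64_t mask;     /* bit i set => byte i of the line is valid */
} wc_buf;

void wc_write(wc_buf *b, uint8_t *dst, uint64_t off,
              const void *src, size_t len) {
    uint64_t base = off & ~(uint64_t)(LINE - 1);
    if (b->mask && b->base != base) {           /* new line: flush the old */
        memcpy(dst + b->base, b->data, LINE);   /* (may still be partial)  */
        b->mask = 0;
    }
    b->base = base;
    memcpy(b->data + (off - base), src, len);
    for (size_t i = 0; i < len; i++)
        b->mask |= 1ull << (off - base + i);
    if (b->mask == ~0ull) {                     /* full line: one complete write */
        memcpy(dst + base, b->data, LINE);
        b->mask = 0;
    }
}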
Integration: Optimization Opportunities
• Dynamically redistribute power between Cores & Graphics
• Tight power management control of all components,
providing better granularity and deeper idle/sleep states
• Three separate power/frequency domains:
System Agent (Fixed), Cores+Ring, Graphics (Variable)
• High BW Last Level Cache, shared among Cores and Graphics
– Significant performance boost, saves memory bandwidth and power
• Integrated Memory Controller and PCI Express ports
– Tightly integrated with Core/Graphics/LLC domain
– Provides low latency & low power – removes intermediate buses
• Bandwidth is balanced across the whole machine,
from Core/Graphics all the way to Memory Controller
• Modular uArch for optimal cost/power/performance
– Derivative products done with minimal effort/time
Foil taken from IDF 2011
DRAM
Basic DRAM chip
[Diagram: the address bus feeds a Row Address Latch (RAS#) and a Column Address Latch (CAS#); the row address decoder selects a row of the memory array, the column address decoder selects the column, and the data is driven on the data pins.]
 DRAM access sequence
– Put the Row on the addr. bus and assert RAS# (Row Addr. Strobe) to latch the Row
– Wait the RAS# to CAS# delay
– Put the Column on the addr. bus and assert CAS# (Column Addr. Strobe) to latch the Col
– Get the data on the data bus
DRAM Operation
 A DRAM cell consists of a transistor + a capacitor
– The capacitor keeps the state; the transistor guards access to the state
– Reading the cell state: raise the access line AL and sense the data line DL
 If the capacitor is charged → a current flows on the data line DL
– Writing the cell state: set DL and raise AL to charge/drain the capacitor
– Charging and draining a capacitor is not instantaneous
 Leakage current drains the capacitor even when the transistor is closed
– Each DRAM cell is therefore refreshed periodically, every 64ms
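
As a worked example (the row count is an assumption for a typical device): spreading refreshes across a 64ms retention window for 8192 rows means one row refresh roughly every 7.8µs:

#include <stdio.h>

int main(void) {
    const double retention_ms = 64.0;  /* cell retention time */
    const int rows = 8192;             /* assumed rows to refresh */
    /* Distributed refresh: 64ms / 8192 rows = 7.8125us per row refresh */
    printf("one row refresh every %.4f us\n", retention_ms * 1000.0 / rows);
    return 0;
}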
DRAM Access Sequence Timing
[Timing diagram: RAS#, CAS#, A[0:7] (Row i, Col n, Row j), and Data (Data n), annotated with tRP (Row Precharge), tRCD (RAS# to CAS# delay), and CL (CAS latency).]
– Put the row address on the address bus and assert RAS#
– Wait the RAS# to CAS# delay (tRCD) between asserting RAS# and CAS#
– Put the column address on the address bus and assert CAS#
– Wait the CAS latency (CL) between the time CAS# is asserted and the data is ready
– Row precharge time (tRP): the time to close the current row and open a new row
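
Putting the three parameters together (the cycle counts below are illustrative, not from the slide): a page hit costs only CL, an access to an idle bank costs tRCD + CL, and a row conflict costs the full tRP + tRCD + CL:

#include <stdio.h>

typedef struct { int tRP, tRCD, CL; } dram_timings;

/* Latency in memory-clock cycles for one read. */
static int access_cycles(dram_timings t, int row_open, int row_hit) {
    if (row_open && row_hit) return t.CL;            /* page hit */
    if (!row_open)           return t.tRCD + t.CL;   /* bank idle: activate + read */
    return t.tRP + t.tRCD + t.CL;                    /* conflict: precharge first */
}

int main(void) {
    dram_timings t = { 3, 3, 3 };                    /* example values */
    printf("hit=%d idle=%d conflict=%d cycles\n",
           access_cycles(t, 1, 1), access_cycles(t, 0, 0),
           access_cycles(t, 1, 0));
    return 0;
}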
DRAM controller
 DRAM controller gets address and command
– Splits address to Row and Column
– Generates DRAM control signals at the proper timing
[Diagram: an address decoder on A[20:23] generates the chip select; an address mux drives A[10:19] (row) and A[0:9] (column) onto the memory address bus, while a time-delay generator sequences RAS# and CAS#; D[0:7] and R/W# connect directly to the DRAM.]
 DRAM data must be periodically refreshed
– DRAM controller performs DRAM refresh, using refresh counter
Improved DRAM Schemes
 Paged Mode DRAM
– Multiple accesses to different columns of the same row
– Saves the RAS and the RAS-to-CAS delay
[Timing diagram: with the row held open, successive CAS# assertions latch Col n, Col n+1, Col n+2, returning Data n, Data n+1, Data n+2.]
 Extended Data Output RAM (EDO RAM)
– A data output latch enables the next column address to be presented in parallel with the current column's data
[Timing diagram: Col n+1 is latched while Data n is still on the data bus.]
Improved DRAM Schemes (cont)
 Burst DRAM
– Generates the consecutive column addresses by itself
[Timing diagram: a single CAS# with Col n returns Data n, Data n+1, Data n+2 as a burst.]
Synchronous DRAM – SDRAM
 All signals are referenced to an external clock (100MHz-200MHz)
– Makes timing more precise with other system devices
 4 banks – multiple pages open simultaneously (one per bank)
 Command driven functionality instead of signal driven
– ACTIVE: selects both the bank and the row to be activated
 ACTIVE to a new bank can be issued while accessing current bank
– READ/WRITE: select column
 Burst oriented read and write accesses
– Successive column locations accessed in the given row
– Burst length is programmable: 1, 2, 4, 8, and full-page
 May end full-page burst by BURST TERMINATE to get arbitrary burst length
 A user-programmable Mode Register
– Sets the CAS latency, burst length, and burst type (a sketch of the encoding follows below)
 Auto pre-charge: may close row at last read/write in burst
 Auto refresh: internal counters generate refresh address
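
As a sketch of how such a mode register might pack its fields (the bit positions follow the common SDR SDRAM layout, but treat them as an assumption here):

#include <stdint.h>

/* Assumed mode-register layout:
 *   bits [2:0] burst length code (0=1, 1=2, 2=4, 3=8, 7=full page)
 *   bit  [3]   burst type (0=sequential, 1=interleaved)
 *   bits [6:4] CAS latency                                          */
static uint16_t mode_register(unsigned burst_code, unsigned interleaved,
                              unsigned cas_latency) {
    return (uint16_t)((burst_code & 0x7)
                    | ((interleaved & 0x1) << 3)
                    | ((cas_latency & 0x7) << 4));
}

/* Example: CL=2, sequential bursts of 4 -> mode_register(2, 0, 2) == 0x22 */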
SDRAM Timing
[Timing diagram (burst length BL = 1, CL = 2): ACTIVE Bank 0 Row i, then READ Col j and READ-with-precharge Col k; an ACTIVE to Bank 1 Row m is issued while Bank 0 is still being accessed. Command spacing obeys tRCD > 20ns, tRRD > 20ns, and tRC > 70ns.]
 tRCD: the ACTIVE to READ/WRITE gap = tRCD(MIN) / clock period
 tRC: successive ACTIVE commands to a different row in the same bank
 tRRD: successive ACTIVE commands to different banks
DDR-SDRAM
 2n-prefetch architecture
– DRAM cells are clocked at the same speed as SDR SDRAM cells
– Internal data bus is twice the width of the external data bus
– Data capture occurs twice per clock cycle
 Lower half of the bus sampled at clock rise
 Upper half of the bus sampled at clock fall
[Diagram: the SDRAM array drives a 2n-bit internal bus (bits 0:n-1 and n:2n-1) that is multiplexed onto an n-bit external bus, yielding 400M transfers/sec from a 200MHz clock.]
 Uses 2.5V (vs. 3.3V in SDRAM)
– Reduced power consumption
DDR SDRAM Timing
[Timing diagram (200MHz clock, CL = 2): ACTIVE Bank 0 Row i, READ Col j; ACTIVE Bank 1 Row m, READ Col n; ACTIVE Bank 0 Row l. Each READ returns a burst of four transfers (j, j+1, j+2, j+3 and n, n+1, n+2, n+3) on both clock edges, with tRCD > 20ns, tRRD > 20ns, tRC > 70ns.]
DIMMs
 DIMM: Dual In-line Memory Module
– A small circuit board that holds memory chips
 64-bit wide data path (72 bit with parity)
– Single sided: 9 chips, each with 8 bit data bus
– Dual sided: 18 chips, each with 4 bit data bus
– Data BW: 64 bits on each rising and falling edge of the clock (see the check below)
 Other pins
– Address – 14, RAS, CAS, chip select – 4, VDC – 17, Gnd – 18,
clock – 4, serial address – 3, …
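
A quick peak-bandwidth check for the 64-bit data path (the clock value is chosen as an example):

#include <stdio.h>

int main(void) {
    const double clock_mhz = 200.0;           /* example DDR clock */
    const double transfers = clock_mhz * 2;   /* both edges: 400 MT/s */
    const double bus_bytes = 8;               /* 64-bit DIMM data path */
    /* 400 MT/s x 8 bytes = 3200 MB/s peak */
    printf("%.0f MT/s -> %.0f MB/s peak\n", transfers, transfers * bus_bytes);
    return 0;
}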
DDR2
 DDR2 doubles the bandwidth
– 4n prefetch: internally reads/writes 4× the amount of data as the external bus
– A DDR2-533 cell works at the same frequency as a DDR266 cell or a PC133 cell
– Prefetching increases latency
[Diagram: memory cell array → I/O buffers → data bus, shown for SDR, DDR, and DDR2.]
 Smaller page size: 1KB vs. 2KB
– Reduces activation power – the ACTIVATE command reads all bits in the page
 8 banks in 1Gb densities and above
– Increases random accesses
 1.8V (vs. 2.5V) operation voltage
– Significantly lower power
DDR3
 30% power consumption reduction compared to DDR2
– 1.5V supply voltage (vs. 1.8V in DDR2)
– 90 nanometer fabrication technology
 Higher bandwidth
– 8-bit deep prefetch buffer (vs. 4-bit in DDR2)
[Diagram: 100MHz memory clock, 400MHz bus clock, 800MT/s data rate; memory cell array → I/O buffers → data bus.]
 2× transfer data rate vs. DDR2
– Effective clock rate of 800–1600 MHz
 Using both rising and falling edges of a 400–800 MHz I/O clock
 DDR2: 400–800 MHz using a 200–400 MHz I/O clock
 DDR3 DIMMs
– 240 pins, the same number as DDR2, and the same size
– Electrically incompatible, with a different key notch location
DDR3 Standards
 DRAM timing, measured in I/O bus cycles, specifies 3 numbers
– CAS Latency (CL), RAS to CAS Delay (tRCD), and RAS Precharge Time (tRP)
 CAS latency (the latency to get data in an open page) in nsec
– CAS Latency × I/O bus cycle time
Standard Name | Mem Clock (MHz) | I/O Bus Clock (MHz) | I/O Bus Cycle Time (ns) | Data Rate (MT/s) | Module Name | Peak Transfer Rate (MB/s) | Timings (CL-tRCD-tRP)     | CAS Latency (ns)
DDR3-800      | 100             | 400                 | 2.5                     | 800              | PC3-6400    | 6400                      | 5-5-5, 6-6-6              | 12½, 15
DDR3-1066     | 133⅓            | 533⅓                | 1.875                   | 1066⅔            | PC3-8500    | 8533⅓                     | 6-6-6, 7-7-7, 8-8-8       | 11¼, 13⅛, 15
DDR3-1333     | 166⅔            | 666⅔                | 1.5                     | 1333⅓            | PC3-10600   | 10666⅔                    | 8-8-8, 9-9-9              | 12, 13½
DDR3-1600     | 200             | 800                 | 1.25                    | 1600             | PC3-12800   | 12800                     | 9-9-9, 10-10-10, 11-11-11 | 11¼, 12½, 13¾
DDR3-1866     | 233⅓            | 933⅓                | 1.07                    | 1866⅔            | PC3-14900   | 14933⅓                    | 11-11-11, 12-12-12        | 11 11⁄14, 12 6⁄7
DDR3-2133     | 266⅔            | 1066⅔               | 0.9375                  | 2133⅓            | PC3-17000   | 17066⅔                    | 12-12-12, 13-13-13        | 11¼, 12 3⁄16
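
The last column follows directly from CL × I/O bus cycle time; a quick check with values from the table:

#include <stdio.h>

/* CAS latency in ns = CL x I/O bus cycle time (1000 / MHz). */
static double cas_ns(int cl, double io_clock_mhz) {
    return cl * 1000.0 / io_clock_mhz;
}

int main(void) {
    printf("DDR3-800  CL 5:  %.2f ns\n", cas_ns(5, 400.0));       /* 12.50 */
    printf("DDR3-1600 CL 9:  %.2f ns\n", cas_ns(9, 800.0));       /* 11.25 */
    printf("DDR3-2133 CL 13: %.2f ns\n", cas_ns(13, 3200.0 / 3)); /* 12.19 */
    return 0;
}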
DDR2 vs. DDR3 Performance
The high latency of DDR3 SDRAM has a negative effect on streaming operations
[Chart comparing DDR2 and DDR3 streaming performance. Source: xbitlabs]
How to Get the Most of Memory?
 For best performance
– Populate both channels with equal amounts of memory
 Preferably the exact same types of DIMMs
– Use the highest supported speed DRAM, with the best DRAM timings
 Each DIMM supports 4 open pages simultaneously
– The more open pages, the better the random access performance
– It is better to have more DIMMs: n DIMMs → 4n open pages
– Dual-sided DIMMs may have a separate CS for each side
 These support 8 open pages
– Dual-sided DIMMs may also have a common CS
SRAM – Static RAM
 True random access
 High speed, low density, high power
 No refresh
 Address not multiplexed
 DDR SRAM
– 2 READs or 2 WRITEs per clock
– Common or Separate I/O
– DDRII: 200MHz to 333MHz Operation; Density: 18/36/72Mb+
 QDR SRAM
– Two separate DDR ports: one read and one write
– One DDR address bus: alternating between the read address and
the write address
– QDRII: 250MHz to 333MHz Operation; Density: 18/36/72Mb+
SRAM vs. DRAM
 Random Access: access time is the same for all locations
              | DRAM – Dynamic RAM          | SRAM – Static RAM
Refresh       | Refresh needed              | No refresh needed
Address       | Address muxed: row + column | Address not multiplexed
Access        | Not true "Random Access"    | True "Random Access"
Density       | High (1 transistor/bit)     | Low (6 transistors/bit)
Power         | Low                         | High
Speed         | Slow                        | Fast
Price/bit     | Low                         | High
Typical usage | Main memory                 | Cache
Read Only Memory (ROM)
 Random access
 Non-volatile
 ROM Types
– PROM – Programmable ROM
 Burnt once using special equipment
– EPROM – Erasable PROM
 Can be erased by exposure to UV light, and then reprogrammed
– E²PROM – Electrically Erasable PROM
 Can be erased and reprogrammed on board
 Write time (programming) much longer than RAM
 Limited number of writes (thousands)