Architecture and Dataflow Overview
LHCb Data-Flow Review
September 2001
Beat Jost
Cern / EP
Overall Architecture
[Diagram: overall architecture and data rates. The LHCb detector (VELO, TRACK, ECAL, HCAL, MUON, RICH) is read out at the 40 MHz bunch-crossing rate (~40 TB/s). The Level-0 trigger (fixed latency 4.0 µs) accepts 1 MHz into the Front-End Electronics; the Level-1 trigger (variable latency <2 ms) reduces the ~1 TB/s stream to 40-100 kHz. Timing & Fast Control distributes the L0/L1 decisions. Front-End Multiplexers (FEM) and Front-End Links feed the Readout Units (RU), which push the data through the Readout Network (RN, 6-15 GB/s, variable latency) to the Sub-Farm Controllers (SFC) and the CPU farm running the Level-2 (~10 ms) and Level-3 (~200 ms) triggers / Event Filter; ~50 MB/s goes to Storage. A throttle line provides flow control, and Control & Monitoring spans the whole system.]
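As a compact restatement of the figures in the diagram above, the following Python fragment (illustrative only, not LHCb software; it simply tabulates the numbers quoted in the figure) prints the stages with their rates and bandwidths:

  stages = [
      # (stage,                     rate,          bandwidth)
      ("Detector / Level-0 input",  "40 MHz",      "40 TB/s"),
      ("Level-0 accept",            "1 MHz",       "1 TB/s into Level-1"),
      ("Level-1 accept",            "40-100 kHz",  ""),
      ("Readout Network",           "40-100 kHz",  "6-15 GB/s"),
      ("Event Filter to Storage",   "~200 Hz",     "~50 MB/s"),
  ]
  for stage, rate, bandwidth in stages:
      print(f"{stage:26} {rate:12} {bandwidth}")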
Functional Components
•Timing and Fast Controls (TFC)
•Front-End Multiplexing (FEM)
•Readout Unit (RU)
•Readout Network (RN)
•Sub-Farm Controllers (SFC)
•CPU Farm
External Interfaces/Sub-Systems
•Front-End Electronics
•Triggers (Level-0 and Level-1)
•Accelerator and Technical Services
•(Controls & Monitoring)
Functional Requirements
•Transfer the physics data from the output of the Level-1 electronics to the CPU farm for analysis and later to permanent storage
•Dead-time-free operation within the design parameters
•Reliable and ‘error-free’ operation, or at least error-detecting
•Provide timing information and distribute trigger decisions
•Provide monitoring information to the controls and monitoring system
•Support independent operation of sub-parts of the system (partitioning); a minimal sketch follows after this list
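Partitioning is only stated as a requirement here; as a minimal sketch of the idea (all class and instance names below are invented for illustration and are not LHCb software), a partition can be thought of as an independently operable slice of the readout system:

  from dataclasses import dataclass

  @dataclass
  class Partition:
      """An independently operable slice of the DAQ (illustrative only)."""
      name: str
      subdetectors: list[str]
      readout_units: list[str]        # RUs serving only these sub-detectors
      subfarm_controllers: list[str]  # SFCs reserved for this partition

  # Example: running the VELO on its own, without the rest of LHCb.
  velo_standalone = Partition(
      name="VELO-standalone",
      subdetectors=["VELO"],
      readout_units=["RU-VELO-01", "RU-VELO-02"],
      subfarm_controllers=["SFC-07"],
  )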
Performance Requirements
LHCb in Numbers
  Number of Channels            ~1'000'000
  Bunch crossing rate           40 MHz
  Level-0 accept rate           <1.1 MHz
  Level-1 accept rate           40 kHz
  Readout Rate                  40 kHz
  Event Size                    100-150 kB
  Event Building Bandwidth      4-6 GB/s
  Level-2 accept rate           ~5 kHz
  Level-3 accept rate           ~200 Hz
  Level-2/3 CPU Power           2·10^6 MIPS
  Data rate to Storage          ~50 MB/s

LHCb DAQ in Numbers
  Level-1 Boards                ~400
  Front-End Links               ~400
  Link Technology               (optical?) GbE
  FEM+RU                        ~120
  Links into Readout Network    70-100
Scalability: the system will be designed against the nominal Level-1 trigger rate of 40 kHz, with a possible upgrade path to a Level-1 trigger rate of 100 kHz (lead-time ~6-12 months).
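The bandwidth figures follow directly from the rates and event sizes in the table; the short Python check below (illustrative only, values copied from the tables above) also shows what the 100 kHz upgrade would imply:

  l1_rate_hz   = 40e3              # nominal Level-1 accept rate
  event_size_b = (100e3, 150e3)    # event size range, 100-150 kB

  nominal  = [l1_rate_hz * s / 1e9 for s in event_size_b]
  print(nominal)                   # [4.0, 6.0] GB/s event-building bandwidth

  upgraded = [100e3 * s / 1e9 for s in event_size_b]
  print(upgraded)                  # [10.0, 15.0] GB/s at the 100 kHz upgrade rate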
General Design Criteria
•Uniformity
  As much commonality as possible among sub-systems and sub-detectors
  Reduced implementation effort
  Reduced maintenance effort (a bug fixed once is fixed for all)
  Reduced cost
•Simplicity
  Keep individual components as simple as possible in functionality
  Minimize the probability of component failure (important for large numbers of components)
  Keep protocols as simple as possible to maximize reliability
•Strict separation of controls and data paths throughout the system
  Possibly at the cost of increased performance requirements in certain areas
Specific Choices (1)
•Only point-to-point links, no shared buses across modules…
  Obvious for the physics data
  Desirable for controls
  Clear separation between data path and control path
•Link and Network Technology
  (optical) Gb Ethernet as the uniform technology from the output of the Level-1 electronics to the input of the SFCs, because of its (expected) abundance and longevity (15+ years)
•Readout Protocol
  Pure push-through protocol throughout the system, i.e. every source of data sends it on as soon as it is available
  Only raw Ethernet frames, no higher-level network protocol (IP); see the sketch after this list
  No vertical or horizontal communication besides data (→ throttle mechanism for flow control)
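As a minimal sketch of what "raw Ethernet frames, pure push" means in practice (using Linux AF_PACKET raw sockets, run as root; the interface name, MAC addresses and EtherType below are invented for illustration and are not LHCb values):

  import socket
  import struct

  IFACE     = "eth0"
  DST_MAC   = bytes.fromhex("0200000000aa")   # next hop, e.g. an RU input port
  SRC_MAC   = bytes.fromhex("0200000000bb")
  ETHERTYPE = struct.pack("!H", 0x88B5)        # arbitrary experimental EtherType

  def push_fragment(payload: bytes) -> None:
      # Pure push: send as soon as the data are available, no handshake and no
      # IP stack; back-pressure is handled separately via the throttle to the TFC.
      frame = DST_MAC + SRC_MAC + ETHERTYPE + payload
      with socket.socket(socket.AF_PACKET, socket.SOCK_RAW) as s:
          s.bind((IFACE, 0))
          s.send(frame)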
Specific Choices (2)
•Integrated Experiment Control System (ECS)
  Same tools and mechanisms for detector and dataflow controls
  Preserving operational independence
•Crates and Boards
  The DAQ components will be housed in standard LHCb crates (stripped-down VME crates)
  The components will be implemented on standard LHCb boards (9U×400 mm VME-like, without VME slave interface)
Constraints and Environment
•The DAQ system will be located at Point 8 of the LHC
•Some equipment will be located underground…
  all the Level-1 electronics
  FEM/RU?
  (parts) of the readout network?
•…and some on the surface (optical GbEthernet allows free distribution of the equipment):
  (parts) of the readout network
  SFCs
  CPU farm
  Computing infrastructure (CPUs, disks, etc.)
  Control-room consoles etc.
•No DAQ equipment will be located in radiation areas
•Issues
  Cooling/Ventilation
  Floor space
Summary
•Design criteria: Simplicity, Commonality, Uniformity
  Potentially at higher cost in certain areas
  Lots of advantages in the operation of the system
•Designed around Gb Ethernet as the basic link technology throughout the system (except for the individual farm nodes)
•Pure push protocol, without any higher-level network protocol
•No shared buses, neither for data nor for controls
•Controls and data paths are separated throughout the system