ACADEMIC TRAINING
B. Panzer – CERN/IT
F. Rademakers – CERN/EP
P. Vande Vyvre – CERN/EP
Outline
Day 1 (Pierre VANDE VYVRE): Outline, main concepts; Requirements of LHC experiments; Data Challenges
Day 2 (Bernd PANZER): Computing infrastructure; Technology trends
Day 3 (Pierre VANDE VYVRE): Trigger and Data Acquisition
Day 4 (Fons RADEMAKERS): Simulation, Reconstruction and analysis
Day 5 (Bernd PANZER): Computing Data Challenges; Physics Data Challenges; Evolution
Trigger and Data Acquisition
Dataflow, Trigger and DAQ architectures
Trigger
Data transfer
Event building
Storage
Software framework
Simulation
Conclusion
Online dataflow
(Block diagram: Detector -> Digitizers -> Front-end Pipeline/Buffer (Trigger Level 0/1 decision) -> Readout Buffer (Trigger Level 2 decision) -> Subevent Buffer -> Event-Building Network -> Event Buffer (High-Level Trigger decision) -> Storage Network -> Transient storage -> Permanent storage.)
TRG/DAQ/HLT
(Diagram: ALICE trigger, DAQ and HLT architecture. The Central Trigger Processor (CTP) distributes the L0, L1a and L2 decisions (with busy and rare/all handling) through the Local Trigger Units (LTU) and the TTC system to the detector front-end electronics (FE/FERO). Each FERO ships its event fragment over a Detector Data Link (DDL) to a Read-Out Receiver Card (RORC) in a Local Data Concentrator (LDC/FEP), which assembles the fragments into a sub-event; data can also be routed, with load balancing, to the HLT farm. The Event Destination Manager (EDM) and the Event Building Network bring the sub-events to the Global Data Collectors (GDC), which build complete events into files and send them over the Storage Network to the Transient Data Storage (TDS) and the Permanent Data Storage (PDS).)
ALICE
Levels: 4
LV-1 rate: 500 Hz
Readout: 25 GB/s
Storage: 1250 MB/s
ATLAS
(Diagram: ATLAS Trigger/DAQ/Dataflow architecture. The 40 MHz bunch-crossing rate is reduced by LVL1 (Calo, MuTrCh and other detectors, ~2.5 µs latency) to an accept rate of 75 kHz. The front-end pipelines feed the Read-Out Drivers (ROD) and Read-Out Buffers (ROB) at ~120 GB/s. LVL2 works on Regions of Interest (RoI data = 2% of the event, ~3 GB/s): the RoI Builder (ROIB) and LVL2 Supervisor (L2SV) send RoI requests over the LVL2 network (L2N) to the LVL2 Processing Units (L2P), which accept ~2 kHz with ~10 ms latency. The Dataflow Manager (DFM), Event Building Network (EBN) and Sub-Farm Input (SFI) assemble full events (~3+3 GB/s) for the Event Filter processors (EFP, ~seconds per event), which accept ~0.2 kHz (~200 Hz); the Sub-Farm Output (SFO) writes ~300 MB/s to storage.)
Levels: 3
LV-1 rate: 100 kHz
Readout: 100 GB/s
Storage: 100 MB/s
CMS
Levels: 2
LV-1 rate: 100 kHz
Readout: 100 GB/s
Storage: 100 MB/s
LHCb
(Diagram: LHCb readout network with two traffic streams over Gb Ethernet. Level-1 traffic: 1.1 MHz and 8.8-16.9 GB/s over 125-239 links from the front-end electronics (FE), through a multiplexing layer of 77-135 network processors (NP), then 77-135 links at 6.4-13.6 GB/s into the readout network; the L1-decision sorter uses 24 NPs and 24 links at 1.5 GB/s. HLT traffic: 40 kHz and 2.3 GB/s over 349 links, concentrated by ~30 switches onto 73-140 links at 7.9-15.1 GB/s. The readout network (37-70 NPs) feeds 50-100 Sub-Farm Controllers (SFC) over 50-100 links at 5.5-10 GB/s; the event-builder farm has ~1200 CPUs, the TFC system distributes timing and fast control, and the data go to the storage system.)
Levels: 3
LV-1 rate: 1 MHz
Readout: 4 GB/s
Storage: 40 MB/s
Farm: ~1200 CPUs
Trigger
Multi-level trigger system
Reject background
Select most interesting collisions
Reduce total data volume
Multi-level trigger
(Chart: trigger levels (L0, L1, L2, HLT) and their time budgets for ALICE, ATLAS, CMS and LHCb, on a logarithmic scale spanning six orders of magnitude.)
Multi-level trigger system to optimize
Rate and granularity
System speed and size
Technology required
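To make the orders of magnitude concrete, here is a small illustrative Python sketch that chains the accept rates quoted on the ATLAS slide (40 MHz bunch crossings, 75 kHz LVL1, ~2 kHz LVL2, ~200 Hz Event Filter) and prints the rejection factor contributed by each level. The numbers come from the slides; the script itself is only a back-of-envelope aid.

# Cumulative rate reduction through the trigger levels (ATLAS numbers).
stages = [
    ("Bunch crossing", 40e6),   # Hz
    ("LVL1",           75e3),
    ("LVL2",            2e3),
    ("Event Filter",   200.0),
]

for (prev_name, prev_rate), (name, rate) in zip(stages, stages[1:]):
    print(f"{name:13s}: {rate:10.0f} Hz, rejection x{prev_rate / rate:6.0f} vs {prev_name}")

print(f"Overall reduction: x{stages[0][1] / stages[-1][1]:.0f}")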
Trigger
Trigger Level 0: custom logic
Trigger Level 1: custom logic, special architectures, computing farm
Trigger Level 2: special architectures, computing farm
High Level Trigger (HLT): computing farm
Custom logic: HEP specific, custom building blocks, home-made development, fast but rigid, programmable by "a few experts"
Computing farm: general-purpose, commodity building blocks, home-made software, slow but flexible, programmable by "all"
Trigger & Timing distribution
Transfer from TRG to electronics
One to many
Massive broadcast (100’s to 1000’s)
Optical, Digital
HEP-specific components
HEP developments
Trigger & Timing distribution
(figure)
Detector & Readout Data Link (1)
Transfer from detector to DAQ
Point-to-point
Massive parallelism (100’s to 1000’s)
Interface detector/readout
Analog: HEP-specific components
Digital: HEP developments based on commodity components (Fibre Channel or Gigabit Ethernet, 2.1 or 2.5 Gb/s)
Detector & Readout Data Link (2)
(Block diagram of the DDL interface: a protocol device implemented in an Altera APEX20KE FPGA drives a TLK2501 serializer/de-serializer (SERDES), with TX/RX clocks and PLLs, at 2.5 Gb/s on the line side and 250 MB/s on the interface side; the optical transceiver (OT) works at 850 nm over 50/125 µm multi-mode fibre. Support components: power monitor circuit (PM), crystal oscillator (OSC), identification memory (ID), I2C and SPI interfaces.)
Optical link source
(figure)
Links Adapters
Transfer from one or several links to the I/O bus of the memory or the computer
Many-to-one
Massive parallelism (100’s to 1000’s)
Interface detector/readout
Physical interface realized by
Custom chip
IP core (VHDL code synthesized in FPGA)
PCI evolution
Initiative of Intel
Public from the start, "imposed" on industry
Industry de-facto standard for local I/O: PCI (PCI SIG)
1992: origin
1993: V2.0
1994: V2.1
1996: V2.2
1999: PCI-X 1.0
2002: PCI-X 2.0
(Over these revisions the bus width grew from 32 to 64 bits, the clock from 33 MHz to 66, 133 and 512 MHz, and the peak bandwidth from 133 MBytes/s to 512 MBytes/s, 1 GBytes/s and 4 GBytes/s.)
Optical link destination & PCI adapter
(figure)
Link and adapter performance (1)
• Example of ALICE DDL and RORC
• PCI 32 bits 33 MHz interface with custom chip
• No local memory. Fast transfer to PC memory
Event rate saturated for block size below 5 kBytes: 35'000 events/sec, i.e. a RORC handling overhead in the LDC of 28 µsec
DDL saturated for block size above 5 kBytes: 101 MBytes/sec
(Plot: RORC readout event rate (events/sec) and data rate (MBytes/sec) versus block size (kBytes).)
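The measured curve can be approximated by a toy model in which the event rate is limited either by the fixed 28 µs per-event handling overhead in the LDC or by the ~101 MB/s DDL bandwidth, whichever is slower. The two constants come from the slide; the overlap assumption and the Python sketch below are illustrative only.

# Toy model of the RORC readout curve: small blocks are overhead-limited,
# large blocks are limited by the DDL bandwidth.
OVERHEAD_S = 28e-6      # per-event RORC handling overhead in the LDC
LINK_MB_S  = 101.0      # saturated DDL data rate

def event_rate(block_kb):
    overhead_limit = 1.0 / OVERHEAD_S                # ~35'700 events/s
    link_limit = LINK_MB_S * 1024.0 / block_kb       # events/s at full link speed
    return min(overhead_limit, link_limit)

for kb in (1, 2, 5, 10, 20, 50):
    r = event_rate(kb)
    print(f"{kb:3d} kB: {r:8.0f} events/s, {r * kb / 1024.0:6.1f} MB/s")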
Link and adapter performance (2)
• PCI 32 bits 66 MHz with commercial IP core
• No local memory. Fast transfer to PC memory
Reaches 200 MB/s for block sizes above 2 kBytes
Data transfer PCI load: 83 %
Total PCI load: 92 %
(Plot: HLT RORC readout data rate (MBytes/sec) versus block size (kBytes).)
Subevent & event buffer
Baseline:
Adopt commodity component (PC)
Develop fast dual-port memories
Key parameters:
Cost/performance
Performance: memory bandwidth
PC Memory Bandwidth
(Bar chart: STREAM v4.0 memory bandwidth, compiled with gcc 2.96-103, www.cs.virginia.edu. SDRAM 100 MHz and SDRAM 133 MHz (Pentium II & III machines): ~250 and ~327 MB/s; DDR SDRAM 133 MHz (AMD modules) and DDR SDRAM 266 MHz: ~471 and ~650 MB/s; RDRAM (Xeon machine): ~1321 MB/s.)
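A crude way to reproduce such a measurement on any PC is to time one large in-memory copy, in the spirit of the STREAM "copy" kernel. The sketch below is an illustrative stand-in, not the official STREAM benchmark, and an interpreted language will underestimate what optimized C achieves.

# Minimal STREAM-style copy-bandwidth probe (illustrative only).
import array, time

N = 20_000_000                         # ~80 MB of 4-byte integers
src = array.array("i", [0]) * N
t0 = time.perf_counter()
dst = src[:]                           # one big memory copy
dt = time.perf_counter() - t0
bytes_moved = 2 * src.itemsize * N     # read src + write dst
print(f"copy bandwidth ~ {bytes_moved / dt / 1e6:.0f} MB/s")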
Event Building Network (1)
Baseline:
Adopt broadly exploited standards: switched Ethernet (ALICE, ATLAS, CMS)
Adopt a performant commercial product: Myrinet (CMS)
Motivations for switched Ethernet:
Performance of Gigabit Ethernet switches currently available is already adequate for most DAQ @ LHC
Use of commodity items: network switches and interfaces
Easy (re)configuration and reallocation of resources
Same technology also used for DAQ services
Event Building Network (2)
(Diagram: ALICE event-building configuration. About 224 LDCs from the TPC (sectors 1-2 up to 35-36), TRD, Pixel-Strips, Drift, TOF-HM-PHOS and Muon-PMD-TRG detectors are grouped behind 22 Ethernet switches, with per-group throughputs between 200 MB/s and 2500 MB/s. The event-building network feeds about 40 GDCs and 20 TDS units (~60 MB/s each), with ~600 MB/s towards storage and a 1250 MB/s data link to the computing centre.)
Event Building Network (3)
Baseline:
Adopt broadly exploited standards
Transport protocol: TCP/IP (ALICE event building)
Adopt efficient protocol
Transport protocol: raw packets (LHCb TRG L1)
Motivations for TCP/IP:
Reliable and stable transport service:
Flow control handling
Lost packet handling
Congestion control
Can be verified during the ALICE Data Challenges
Industry mainstream:
Guaranteed support from present and future industrial providers:
operating systems, switches, interfaces
Constant improvements
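As an illustration of why TCP/IP keeps the event-building code simple, the following sketch moves length-prefixed sub-events from LDCs to a GDC over ordinary sockets and lets TCP handle flow control, lost packets and congestion. The framing and the function names are assumptions for the example, not the DATE protocol.

# Minimal sketch of TCP/IP sub-event transport from LDCs to a GDC.
import socket, struct

HEADER = struct.Struct("!II")          # (event number, payload length)

def send_subevent(sock, event_no, payload):
    sock.sendall(HEADER.pack(event_no, len(payload)) + payload)

def recv_subevent(sock):
    hdr = sock.recv(HEADER.size, socket.MSG_WAITALL)
    event_no, length = HEADER.unpack(hdr)
    return event_no, sock.recv(length, socket.MSG_WAITALL)

# GDC side: collect one sub-event from every LDC connection and group them
# by event number; TCP provides the reliable, flow-controlled transport.
def build_event(ldc_socks):
    fragments = {}
    for s in ldc_socks:
        event_no, data = recv_subevent(s)
        fragments.setdefault(event_no, []).append(data)
    return fragments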
Ethernet NIC Performance
Fast Ethernet (copper):
Intel 82557, 82550, 82559 with eepro100 driver, mostly on-board: around 11 MB/s, stable
3Com 3C980*, 3C905 with 3c59x driver, mostly on-board: around 11 MB/s, stable
Gigabit Ethernet:
NetGear GA620 with acenic driver: up to 78 MB/s
3Com 3C996 with bcm5700 or tg3 driver: up to 88 MB/s (150% of one CPU)
Intel Pro/1000* (82545EM) with e1000 driver: up to 95 MB/s (56% -> 75% of one CPU)
ADC IV: DATE Event Building (1)
(figure)
ADC IV: DATE Event Building (2)
Event building, no recording:
• 5 days non-stop
• 1750 MBytes/s sustained (goal was 1000)
Transient Data Storage
Transient Data Storage at point 2, before archiving (migration to tape), if any, in the computing center
Several options being tested by ALICE DAQ
Technologies:
Disk attachment: DAS: IDE (commodity), SCSI; NAS: disk server; SAN: Fibre Channel
RAID level
Key selection criteria: cost/performance & bandwidth/box
Storage: file & record size (file cache active)
(figure)
Storage: file & record size (file cache inactive)
(figure)
Storage: effect of connectivity
(figure)
Storage: effect of SCSI RAID
(figure)
Transient Data Storage
Disk storage is highly non-scalable
To achieve high bandwidth performance: 1 stream, 1 device, 1 controller, 1 bus
With these conditions:
15-20 MB/s with 7.5 kRPM IDE disks
18-20 MB/s with 10 kRPM SCSI disks
To obtain 1.25 GB/s with commodity solutions
Footprint too big
Infrastructure cost too high
Investigate ways to obtain more compact performance
RAID (Redundant Array of Inexpensive Disks)
RAID 5, large caches, intelligent controllers
HP 3 SCSI devices: 30 MB/s with 10 kRPM disks
HP 6 SCSI devices: 40 MB/s with 10 kRPM disks
EMC 7 FCS: 50 MB/s with 10 kRPM disks (4 U)
IBM 5 FCS: 70 MB/s with 15 kRPM disks
Dot Hill SANnet II: 90 MB/s with 15 kRPM disks (2 U)
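The footprint argument can be checked with simple arithmetic: dividing the 1.25 GB/s target by the per-unit bandwidths quoted above gives the number of boxes needed. Illustrative sketch, values taken from the slide:

# How many storage units are needed to sustain 1.25 GB/s?
TARGET_MB_S = 1250
options = {
    "single 10 kRPM SCSI disk":  19,   # MB/s per unit
    "HP RAID, 6 SCSI devices":   40,
    "EMC, 7 FCS (4 U)":          50,
    "IBM, 5 FCS":                70,
    "Dot Hill SANnet II (2 U)":  90,
}
for name, mb_s in options.items():
    units = -(-TARGET_MB_S // mb_s)    # ceiling division
    print(f"{name}: {units} units needed for {TARGET_MB_S} MB/s")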
Permanent Data Storage (1)
Infinite storage
At very low cost
Must be hidden by a MSS (Mass Storage System)
Critical area
Small market
Limited competition
Not (yet) commodity
Solution demonstrated since ‘02
Permanent Data Storage (2)
Tape drives:
STK 9940A: 10 MB/s, 60 GB/Volume, SCSI
STK 9940B: 30 MB/s, 200 GB/Volume, Fibre Channel
Tape library: several tape drives of both generations
Permanent Data Storage (3)
(figure)
DAQ Software Framework
Common interfaces for detector-dependent applications
Target the complete system from the start
ALICE DATE (Data Acquisition and Test Environment)
Complete ALICE DAQ software framework:
Data-flow: detector readout, event building
System configuration, control (100's of programs to start, stop, synchronize)
Performance monitoring
Evolving with requirements and technology
Key issues:
Scalability
Very small configurations (1 PC), used in test beams
Verified for the scale of the final system (100's of PCs) during the Data Challenges
Support and documentation
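As an illustration of what a "common interface for detector-dependent applications" can look like, here is a minimal sketch in which every detector readout implements the same small interface so that the generic data-flow code can drive any of them. The class and method names are hypothetical, not the DATE API.

# Hypothetical common readout interface for detector-dependent code.
from abc import ABC, abstractmethod

class DetectorReadout(ABC):
    @abstractmethod
    def arm(self, run_number: int) -> None: ...     # prepare for a run
    @abstractmethod
    def read_event(self) -> bytes: ...              # return one sub-event payload
    @abstractmethod
    def disarm(self) -> None: ...                   # end of run

class TpcReadout(DetectorReadout):
    def arm(self, run_number): print(f"TPC armed for run {run_number}")
    def read_event(self): return b"\x00" * 64       # placeholder payload
    def disarm(self): print("TPC disarmed")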
Run Control (1)
(figure)
Run Control (2)
(figure: state of one node)
Performance monitoring - AFFAIR
(Diagram: AFFAIR collects DATE performance data from the LDCs and GDCs, ROOT I/O performance, CASTOR performance from the disk and tape servers, and general fabric monitoring; the measurements are stored in round-robin databases, ROOT files and a ROOT DB, from which ROOT plots for the Web are produced.)
Control Hierarchy
(Diagram: the Experiment Control System (ECS) sits on top of three control trees, each organized per detector (Pixel, Muon, TPC, ...): the Detector Control System (HV, GAS, ...), the Trigger Control System (one LTC per detector) and the DAQ Run Control (LDC 1, LDC 2, ... LDC 216).)
Experiment Control System
ECS functions:
• State machines
• Configuration and booking
• Command/Status
• Synchronize subsystems
• Automated procedures
• Operator console
(Diagram: the ECS, driven by a configuration database and operator consoles, coordinates the DCS, TRG and DAQ of each detector (Pixel, Muon, TPC).)
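State machines are the core of such a control system: every command is only legal in certain states. Below is a minimal run-control state machine sketch; the states and transitions are illustrative, not the actual ECS/DATE state diagram.

# Minimal run-control finite state machine (illustrative).
TRANSITIONS = {
    ("DISCONNECTED", "connect"):   "CONNECTED",
    ("CONNECTED",    "configure"): "CONFIGURED",
    ("CONFIGURED",   "start"):     "RUNNING",
    ("RUNNING",      "stop"):      "CONFIGURED",
    ("CONFIGURED",   "reset"):     "CONNECTED",
}

class RunControl:
    def __init__(self):
        self.state = "DISCONNECTED"

    def command(self, cmd):
        new_state = TRANSITIONS.get((self.state, cmd))
        if new_state is None:
            raise ValueError(f"'{cmd}' not allowed in state {self.state}")
        self.state = new_state
        return new_state

rc = RunControl()
for cmd in ("connect", "configure", "start", "stop"):
    print(cmd, "->", rc.command(cmd))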
Partition: Physics Run
(Diagram: a "Physics Run" partition containing detectors A and B. The CTP and the detector LTCs drive the TTC partitions; each detector's TTC Rx and readout send data over the DDL (SIU/DIU) to the RORCs in the LDC/FEPs, and from there through the Event Building Network to the GDCs and to storage.)
2 Partitions: Physics Run & Standalone
(Diagram: the same setup split into two partitions: one detector takes data in a "Physics Run" partition driven by the CTP, while another detector runs concurrently in a "Standalone" partition driven by its own LTC, each using its own subset of the TTC, DDL, LDC/FEP, event-building and GDC resources.)
ADC IV: DATE Scalability test
(figure)
Simulation conditions/results
Conditions:
• 8000 Hz total: 1600 Hz CE, MB, EL, MU
• EL, MU considered rare
• 50 % rejection at LDC
• HLT rejects 80 % of EL
• Realistic event sizes, distributions, buffer numbers, transfer rates
(Bar chart: original count, count after P/F considerations, and final count of the EL and MU triggers, on a scale of 0-1600 Hz.)
Huge and unacceptable decrease of EL and MU triggers due to detector busy
The problem
(Diagram: the CE, MB, EL and MU triggers deliver ~50 GB/s out of the detectors (after P/F and the time to read events into the detector buffers), ~25 GB/s into the DAQ (limited by the DDL rates) and ~1.25 GB/s into the PDS (after compression by 0.5).)
Huge reduction of the original rare decays (Electron-EL and Muon-MU) due to various backpressures
Would like to accept all rare decays
The proposed solution
High/low water-mark levels at the LDC inform the CTP to block the "non-important" high-bandwidth triggers (CE, MB), preventing the multi-event buffer from getting full
Result: almost no losses after P/F
(Bar chart: EL and MU trigger counts, scale 0-1600 Hz, now essentially preserved.)
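A sketch of the proposed back-pressure logic: when the multi-event buffer occupancy in an LDC crosses a high-water mark, the CTP is asked to block the common trigger classes (CE, MB); rare classes (EL, MU) are never blocked. The thresholds and the interface below are illustrative assumptions.

# Illustrative LDC high/low water-mark throttle for common triggers.
HIGH_WATER = 0.80    # occupancy fraction that raises the veto
LOW_WATER  = 0.60    # occupancy fraction that releases the veto

class LdcThrottle:
    def __init__(self):
        self.common_triggers_blocked = False

    def update(self, buffer_occupancy):
        """Return True if CE/MB triggers should currently be blocked."""
        if buffer_occupancy >= HIGH_WATER:
            self.common_triggers_blocked = True
        elif buffer_occupancy <= LOW_WATER:
            self.common_triggers_blocked = False
        return self.common_triggers_blocked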
Promises of the future
Industry has learned to build switches for telecom:
Silicon has been developed
The exponential development of the Internet has led to commodity networking. Same revolution as workstations in the '90s
Switches are better: switches everywhere!
Industry is mastering wireless technology
Mobility for all!
I/O and system busses
PCI 32 bits/33 MHz (1990, Intel): 32-bit bus, 33 MHz, 132 MBytes/s max on a single channel
PCI 64 bits/33 MHz (1995, PCI SIG): 64-bit bus, 33 MHz, 264 MBytes/s
PCI 64 bits/66 MHz (1995, PCI SIG): 64-bit bus, 66 MHz, 533 MBytes/s
PCI-X (2000, IBM, Compaq, HP): 64-bit bus, 133 MHz, 1056 MBytes/s
Future I/O (IBM, Compaq, HP, Adaptec, 3COM): serial channel, 2.5 Gb/s
NGIO 2.5 Gb (Intel, Sun, Dell, Hitachi, NEC, Siemens): serial channel, 2.5 Gb/s
Infiniband (Intel, Sun, Dell, IBM, Compaq, HP, Microsoft): serial channel, 2.5 Gb/s, 500 MBytes/s per channel
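For the parallel PCI variants the quoted peak bandwidths follow directly from bus width times clock frequency; the quick check below (illustrative arithmetic only) reproduces them up to rounding conventions.

# Peak bandwidth = bus width (bytes) x bus clock, one transfer per clock cycle.
buses = [
    ("PCI 32/33", 32,  33e6),
    ("PCI 64/33", 64,  33e6),
    ("PCI 64/66", 64,  66e6),
    ("PCI-X 133", 64, 133e6),
]
for name, width_bits, clock_hz in buses:
    print(f"{name}: {width_bits // 8 * clock_hz / 1e6:.0f} MB/s peak")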
Infiniband
Technology:
2.5 Gbit/s line rate
1, 4 or 12 lines giving 0.5, 2 or 6 GB/s
Switch-based system
Transport: reliable connection and datagram, unreliable connection and datagram, IPv6, ethertype
Common link architecture and components with Fibre Channel and Ethernet
Chips: Cypress, IBM, Intel, LSI Logic, Lucent, Mellanox, RedSwitch
Products: Adaptec, Agilent
Infiniband
(Diagram: a host with several CPUs and a memory controller connects through a Host Channel Adapter (HCA) to an Infiniband switch; Target Channel Adapters (TCA) attach Fibre Channel, SCSI and Gigabit Ethernet devices to the fabric.)
Infiniband: multiple hosts
(Diagram: two hosts, each with its CPUs, memory controller and HCA, are connected through Infiniband switches; Target Channel Adapters attach SCSI, Fibre Channel and Gigabit Ethernet devices, and a router provides the connection to the Internet.)
Rapid I/O
The RapidIO Interconnect Architecture:
Chip-to-chip and board-to-board communications at performance levels scaling to ten Gigabits per second and beyond
High-performance, packet-switched interconnect technology
Switches on the board
The RapidIO Trade Association:
Non-profit corporation controlled by its members
Directs the future development
For networking products: increased bandwidth, lower costs, and a faster time-to-market than other more computer-centric bus standards
Steering Committee: Alcatel, Cisco Systems, EMC Corporation, Ericsson, Lucent Technologies, Mercury Computer Systems, Motorola, and Nortel Networks
Conclusions
Trigger and Data Acquisition systems
Large and complex systems
100s of components
Most are commodity
Some HEP developments to adapt to special requirements
Integration is THE BIG ISSUE
Testing
Software framework
Data flow
Control
Data Challenges to verify the performance and the integration
Simulation to verify the overall system behavior with a detector in the beam!
Tomorrow
Day 1 (Pierre VANDE VYVRE): Outline, main concepts; Requirements of LHC experiments; Data Challenges
Day 2 (Bernd PANZER): Computing infrastructure; Technology trends
Day 3 (Pierre VANDE VYVRE): Trigger and Data Acquisition
Day 4 (Fons RADEMAKERS): Simulation, Reconstruction and analysis
Day 5 (Bernd PANZER): Computing Data Challenges; Physics Data Challenges; Evolution