ITER Control System Technology Study
Klemen Žagar
[email protected]
Overview
- About ITER
- ITER Control and Data Acquisition System (CODAC) architecture
- Communication technologies for the Plant Operation Network
  - Use cases/requirements
  - Performance benchmark
A Note!
- The information about ITER and the CODAC architecture presented herein is a summary of the ITER Organization's presentations.
- Cosylab prepared the studies on communication technologies for ITER.
About ITER (International Thermonuclear Experimental Reactor)
About ITER
[Diagram: cutaway of the ITER machine (~29 m tall, ~28 m across), with labeled components:
- Central Solenoid: Nb3Sn, 6 modules
- Toroidal Field Coils: Nb3Sn, 18, wedged
- Poloidal Field Coils: Nb-Ti, 6
- Vacuum Vessel: 9 sectors
- Blanket: 440 modules
- Divertor: 54 cassettes
- Torus Cryopumps: 8
- Cryostat: 24 m high x 28 m dia.
- Port Plugs: heating/current drive, test blankets, limiters/RH, diagnostics]

Key parameters:
- Major plasma radius: 6.2 m
- Plasma volume: 840 m³
- Plasma current: 15 MA
- Typical density: 10²⁰ m⁻³
- Typical temperature: 20 keV
- Fusion power: 500 MW
- Machine mass: 23,350 t (cryostat + vacuum vessel + magnets)
  - shielding, divertor and manifolds: 7,945 t, plus 1,060 t of port plugs
  - magnet systems: 10,150 t; cryostat: 820 t
CODAC Architecture
Plant Operation Network (PON)
Use cases (a short Channel Access sketch of the first two follows the list):
- Command Invocation
- Data Streaming
  - Event Handling
  - Monitoring
  - Bulk Data Transfer
- Process Control
  - Reacting on events in the control system by issuing commands or transmitting other events
- Alarm Handling
  - Transmission of notifications of anomalous behavior
  - Management of currently active alarm states
- PON self-diagnostics
  - Diagnosing problems in the PON
  - Monitoring the load of the PON network
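To make the first two categories concrete, here is a minimal sketch in Channel Access terms using the pyepics bindings. The PV names are hypothetical; this only illustrates the use-case taxonomy, not any ITER code.

    # Illustration only: 'DEMO:PUMP:*' are made-up PV names.
    import time
    import epics

    # Command invocation: write a command/setpoint and wait for completion.
    epics.caput('DEMO:PUMP:CMD', 1, wait=True)

    # Monitoring (a data-streaming use case): subscribe to value changes;
    # the callback fires on every update pushed by the server.
    def on_update(pvname=None, value=None, **kwargs):
        print(pvname, '->', value)

    epics.camonitor('DEMO:PUMP:PRESSURE', callback=on_update)
    time.sleep(10)                                 # collect updates for a while
    epics.camonitor_clear('DEMO:PUMP:PRESSURE')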
Prototype and Benchmarking
- We measured latency and throughput in a controlled test environment
  - Allows side-by-side comparison
  - Also makes the hands-on experience more comparable
- Latency test (sketched below):
  - Where a central service is involved (OmniNotify, IceStorm or EPICS/CA):
    - Send a message (to the central service)
    - Upon receipt on the sender node, measure the difference between send and receive times
  - Without a central service (omniORB, ICE, RTI DDS): round-trip test
    - Send a message (to the receiving node)
    - Respond
    - Upon receipt of the response, measure the difference
- Throughput test:
  - Send messages as fast as possible
  - Measure the differences between receive times
- Statistical analysis to obtain average, jitter, minimum, 95th percentile, etc.
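As an illustration of the round-trip method and the reported statistics (not the actual middleware-specific test harness), a minimal benchmark over plain UDP might look as follows. The endpoint, sample count, and payload size are arbitrary assumptions, and a trivial echo responder is assumed to run on the peer.

    # Sketch: round-trip latency over UDP plus the statistics the study reports.
    import socket
    import statistics
    import time

    ADDR = ('127.0.0.1', 9000)            # hypothetical echo responder
    N, PAYLOAD = 10_000, b'x' * 64        # arbitrary sample count and size

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)

    samples = []
    for _ in range(N):
        t0 = time.perf_counter()
        sock.sendto(PAYLOAD, ADDR)        # send to the receiving node...
        sock.recvfrom(2048)               # ...which responds immediately
        samples.append((time.perf_counter() - t0) / 2)   # one-way estimate

    samples.sort()
    print('minimum        :', samples[0])
    print('average        :', statistics.mean(samples))
    print('jitter (stdev) :', statistics.stdev(samples))
    print('95th percentile:', samples[int(0.95 * len(samples)) - 1])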
Applicability to Use Cases
[Table: applicability scores for Channel Access (EPICS and TANGO scored separately), omniORB CORBA, ZeroC ICE, and RTI DDS across seven use cases: command invocation, event handling, monitoring, bulk data transfer, process control, alarm handling, and diagnostics. For the first four use cases each cell carries two numbers, e.g. 5/3: the first rates performance, the second the functional applicability to the use case.]

Rating scale:
1. not applicable at all
2. applicable, but at a significant performance/quality cost compared to the optimal solution; custom design required
3. applicable, but at some performance/quality cost compared to the optimal solution; custom design required
4. applicable, but at some performance/quality cost compared to the optimal solution; foreseen in the existing design
5. applicable, and close to the optimal solution; use case foreseen in the design
PON Latency (small payloads)
[Chart: latency versus payload size (0 to 2,500 bytes) for ICE, omniORB, RTI DDS, and Commercial DDS II.]
PON Latency (small payloads)
[Chart: latency versus payload size (0 to 2,500 bytes) for EPICS (sync), OmniNotify, and IceStorm.]

Ranking (best, i.e. lowest latency, first):
1. omniORB (one-way invocations)
2. ICE (one-way invocations)
3. RTI DDS (not tuned for latency)
4. EPICS
5. OmniNotify
6. IceStorm
PON Throughput
[Chart: 1 Gbps link utilization versus payload size (10 to 1,000,000 bytes, logarithmic scale) for omniORB (oneway), RTI DDS (unreliable), ICE (oneway), and Commercial DDS II (unreliable).]
PON Throughput
[Chart: 1 Gbps link utilization versus payload size (10 to 1,000,000 bytes, logarithmic scale) for EPICS (async), EPICS (sync), IceStorm, and OmniNotify.]

Ranking (best, i.e. highest throughput, first):
1. RTI DDS
2. omniORB (one-way invocations)
3. ICE (one-way invocations)
4. EPICS
5. IceStorm
6. OmniNotify
PON Scalability
- RTI DDS efficiently leverages IP multicasting; a socket-level sketch of the idea follows
- With technologies that do not use IP multicasting/broadcasting, per-subscriber throughput is inversely proportional to the number of subscribers! (source: RTI)
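The following sketch, using only the standard Python socket API, shows why multicast changes the scaling. The group address and port are made-up values, and real middleware such as DDS layers discovery and reliability on top of this primitive.

    # Why IP multicast scales: the publisher sends each sample once,
    # no matter how many subscribers have joined the group.
    import socket
    import struct

    GROUP, PORT = '239.255.0.1', 5005     # hypothetical group and port

    # Subscriber: join the multicast group; the network replicates packets
    # to every member, so per-subscriber throughput stays constant.
    sub = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sub.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sub.bind(('', PORT))
    mreq = struct.pack('4sl', socket.inet_aton(GROUP), socket.INADDR_ANY)
    sub.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    # Publisher: a single sendto() regardless of subscriber count. With
    # unicast, this would be a loop over all N subscribers, dividing the
    # link bandwidth by N.
    pub = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    pub.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    pub.sendto(b'sample', (GROUP, PORT))

    print(sub.recvfrom(1500))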
EPICS
- Ultimately, the ITER Organization has chosen EPICS:
  - Very good performance.
  - Easiest to work with.
  - Very robust.
  - Full-blown control system infrastructure (not just middleware).
  - Likely to be around for a while (widely used by many labs).
- Where could EPICS improve?
  - Use IP multicasting for monitors.
  - A remote procedure call layer (e.g., "abuse" waveforms to transmit data serialized with Google Protocol Buffers, or use PVData in EPICS v4); a sketch of the waveform approach follows.
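A minimal sketch of the waveform idea, using the pyepics bindings: the request is serialized to bytes and pushed through a pair of char-waveform PVs. The PV names are hypothetical, and JSON stands in for Google Protocol Buffers (which would serialize via SerializeToString() in the same way).

    # Illustration only: 'DEMO:RPC:REQ' and 'DEMO:RPC:RESP' are hypothetical
    # waveform(CHAR) PVs sized to hold the largest expected payload. A real
    # implementation would also need a handshake (e.g., a sequence number)
    # before reading the response.
    import json                     # stand-in for Protocol Buffers here
    import numpy as np
    import epics

    def rpc_call(request):
        payload = json.dumps(request).encode() + b'\x00'
        epics.caput('DEMO:RPC:REQ',
                    np.frombuffer(payload, dtype=np.uint8), wait=True)
        raw = np.asarray(epics.caget('DEMO:RPC:RESP'), dtype=np.uint8)
        return json.loads(raw.tobytes().split(b'\x00')[0])

    # Example: invoke a hypothetical procedure exposed by the server side.
    # result = rpc_call({'proc': 'get_status', 'args': []})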
Thank You for Your Attention