
Ongoing activity:
PyECLOUDforPyHEADTAIL development
G. Iadarola, G. Rumolo
Many thanks to:
H. Bartosik, K. Li, L. Mether, M. Schenk
Electron cloud meeting – 14 August 2014
Introduction
We addressed the possibility of re-using PyECLOUD modules to simulate the interaction of a proton bunch with an electron cloud within PyHEADTAIL.
→ The modular structure of the two codes, and the fact that they are written in the same language, eases the task.
In practice, we developed a PyHEADTAIL interface module for PyECLOUD (PyECLOUDforPyHEADTAIL).
Following the PyHEADTAIL philosophy, the module provides an ecloud object with a .track method.
→ The object takes a set of beam macroparticles (MPs) as input, computes the electron dynamics and applies the resulting transverse kicks back to the beam.
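To illustrate the interface only (a schematic sketch with hypothetical names, not the actual PyECLOUDforPyHEADTAIL code), an element exposing a .track(bunch) method can be inserted into a PyHEADTAIL-style one-turn map like any other machine element:

import numpy as np

# Minimal sketch with hypothetical names: an e-cloud element exposing the
# .track(bunch) interface expected by a PyHEADTAIL-style one-turn map.

class DummyBunch(object):
    """Stand-in for a PyHEADTAIL bunch: transverse coordinates and momenta."""
    def __init__(self, n_macroparticles):
        self.x = np.random.randn(n_macroparticles)
        self.y = np.random.randn(n_macroparticles)
        self.xp = np.zeros(n_macroparticles)
        self.yp = np.zeros(n_macroparticles)

class EcloudElement(object):
    """E-cloud interaction point: computes kicks from the cloud and applies them."""
    def track(self, bunch):
        # Placeholder for the real chain (slice the bunch, solve the PIC fields,
        # move the electrons, accumulate the kicks); here a linear stand-in kick.
        k = 1e-6
        bunch.xp += -k * bunch.x
        bunch.yp += -k * bunch.y

# The element is inserted like any other element of the one-turn map:
bunch = DummyBunch(1000)
one_turn_map = [EcloudElement()]   # plus transverse/longitudinal maps in a real setup
for turn in range(10):
    for element in one_turn_map:
        element.track(bunch)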
Structure – PyECLOUD buildup
PyECLOUD main loop (repeated at every time step):
• Generate seed e-
• Evaluate the beam electric field (stored field map for a rigid beam)
• Evaluate the e- electric field (Particle In Cell)
• Compute the e- motion (t -> t+Δt)
• Detect impacts and generate secondaries
• Move to the next time step
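Schematically, the loop above maps onto code roughly as follows (a toy skeleton with illustrative stub names and parameters, not PyECLOUD's actual API):

import numpy as np

# Skeleton of the buildup main loop listed above, with each physics step
# replaced by a stub (names are illustrative, not PyECLOUD's actual API).

def generate_seed_electrons():
    return np.zeros((0, 4))                  # columns: x, y, vx, vy

def beam_field(electrons):
    return np.zeros((len(electrons), 2))     # stored field map, rigid beam

def space_charge_field(electrons):
    return np.zeros((len(electrons), 2))     # Particle In Cell solve

def push(electrons, field, dt):
    return electrons                         # e- motion t -> t + dt

def impacts_and_secondaries(electrons):
    return electrons                         # wall impacts, secondary emission

electrons = np.zeros((0, 4))
dt, n_steps = 2.5e-11, 100                   # toy values
for step in range(n_steps):
    electrons = np.vstack([electrons, generate_seed_electrons()])
    field = beam_field(electrons) + space_charge_field(electrons)
    electrons = push(electrons, field, dt)
    electrons = impacts_and_secondaries(electrons)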
Structure – PyECLOUDforPyHEADTAIL
PyECLOUD object (.track method), acting on the PyHEADTAIL bunch:
• Inputs: the PyHEADTAIL bunch, divided longitudinally by the PyHEADTAIL slicer, and the initial e- distribution (from a PyECLOUD buildup simulation)
• For each slice:
  o Generate seed e-
  o Evaluate the beam slice electric field (Particle In Cell)
  o Evaluate the e- electric field (Particle In Cell)
  o Apply the kick on the beam particles
  o Compute the e- motion (t -> t+Δt), possibly with substeps
  o Detect impacts and generate secondaries
• The kicked particles are handed back to the PyHEADTAIL bunch
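Schematically, the .track method then follows the skeleton below (illustrative stubs and toy parameters only, not the actual implementation):

import numpy as np

# Skeleton of the per-slice interaction sketched above; every function is an
# illustrative stub (not the actual PyECLOUDforPyHEADTAIL code).

def slice_bunch(bunch, n_slices):
    return [bunch] * n_slices                    # PyHEADTAIL slicer stand-in

def seed_electrons():
    return np.zeros((0, 4))                      # columns: x, y, vx, vy

def pic_field_of_slice(slc, grid):
    return grid                                  # beam slice field (Particle In Cell)

def pic_field_of_electrons(electrons, grid):
    return grid                                  # e- field (Particle In Cell)

def kick_beam_particles(slc, electron_field):
    pass                                         # apply the e- kick to the slice

def push_electrons(electrons, field, dt, n_substeps=5):
    return electrons                             # e- motion t -> t + dt, with substeps

def impacts_and_secondaries(electrons):
    return electrons                             # wall impacts, secondary emission

def track(bunch, initial_electrons, dt, n_slices=50):
    electrons = initial_electrons.copy()         # e.g. from a buildup simulation
    grid = np.zeros((128, 128))
    for slc in slice_bunch(bunch, n_slices):
        electrons = np.vstack([electrons, seed_electrons()])
        slice_field = pic_field_of_slice(slc, grid)
        electron_field = pic_field_of_electrons(electrons, grid)
        kick_beam_particles(slc, electron_field)
        electrons = push_electrons(electrons, slice_field + electron_field, dt)
        electrons = impacts_and_secondaries(electrons)

# Example call with empty toy inputs:
track(bunch=None, initial_electrons=np.zeros((0, 4)), dt=2.5e-11)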
Some remarks
This approach has several advantages:
• The two tools will share most of the development and maintenance work.
• Advanced e-cloud modeling features implemented in PyECLOUD (arbitrary chamber shape, secondary electron emission, arbitrary magnetic field map, Boris electron tracker, accurate modeling for curved boundaries) become naturally available for PyHEADTAIL.
This enables several new simulation scenarios:
o Scenarios where the electron-wall interaction cannot be neglected, e.g. long bunches (PS), doublet beams
o Quadrupoles (including the triplets, where for example one beam could be kept rigid)
o Combined function magnets
But there are some concerns about the computational burden (the model is much heavier than HEADTAIL's, plus the interpreted language…).
Where do we stand
A first version has been implemented (only a uniform initial electron distribution for now).
Line profiling was used to remove evident bottlenecks (an f2py routine instead of np.sum for the electrostatic energy evaluation, as sketched below).
First checks against HEADTAIL (for an SPS drift section) were reasonably good:
• Electron motion (pinch)
• Kick on the protons after the first e-cloud interaction
• Instability threshold, centroid and emittance behavior
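For context, a toy illustration of the kind of reduction involved (assuming the field energy is estimated as 0.5·Σ ρφ per grid cell; grid data and sizes here are made up and the actual PyECLOUD routine may differ):

import numpy as np

# Toy illustration of an electrostatic-energy evaluation on a PIC mesh
# (hypothetical grid data, not PyECLOUD's). In the profiled code, a NumPy
# reduction of this kind was replaced by a small Fortran kernel wrapped
# with f2py to cut the per-step overhead.
nx, ny, dx, dy = 128, 128, 1e-3, 1e-3
rho = np.random.rand(nx, ny)     # charge density on the grid
phi = np.random.rand(nx, ny)     # potential on the grid

energy = 0.5 * np.sum(rho * phi) * dx * dy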
Where do we stand
Computing time ("…speed matters")
(SPS drift, 50 slices, 4 σz, 200 000 proton MPs, 70 000 electron MPs, 128x128 grid nodes, lxplus)
→ PyECLOUD/PyHEADTAIL is ~4 times slower than HEADTAIL.
From profiling:
• ~55% of the time goes to the Particle In Cell computation (protons and electrons)
  → of which >90% is the matrix inversion
• ~25% goes to electron tracking (but PyECLOUD uses substeps…)
We can gain significantly if we speed up the linear system solution:
• Not much to do on the algorithm side (the LU factorization is already precomputed, see the sketch after this list)
  → perhaps an iterative solver?
• Improve the implementation (presently scipy.sparse)
  → preliminary tests with the Trilinos library look promising
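For reference, a minimal sketch of the current approach with a toy matrix (not the actual PIC system, which also encodes the chamber boundary): the finite-difference matrix is factorized once with a sparse LU decomposition and the factorization is reused for every field solve.

import numpy as np
import scipy.sparse as sps
import scipy.sparse.linalg as spsl

# Toy stand-in for the 2D finite-difference Poisson matrix on a 128x128 grid.
n = 128
lap_1d = sps.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
A = sps.kronsum(lap_1d, lap_1d).tocsc()        # 2D Laplacian, CSC format for splu

# Precompute the sparse LU factorization once...
lu = spsl.splu(A)

# ...and reuse it for every right-hand side (one solve per slice / time step).
for _ in range(10):
    rho = np.random.rand(n * n)                # toy charge density
    phi = lu.solve(-rho)                       # potential on the grid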
Thanks for your attention!
To-do list for the next weeks
• HL-LHC activities
  → Matching quadrupoles and an update on the triplets (following the results of the numerical convergence scans)
  → Bunch length scans according to the needs of the 200 MHz study
• Electron cloud monitor in an SPS lattice quadrupole:
  → How much can we learn?
  → Is there an optimized hole distribution that gains the most information without significantly perturbing the multipacting?
  → How far off is a possible design? What is the tolerable impact on the aperture?
• Simulations to be done for COLDEX (to be defined with Vincent)
• Preparation for the SPS scrubbing run