Correction

Constants

Specific constants to be used in correction, to help with consistency.

Filters

Filters for different kinds of measurement data, and to filter the entries in the response matrix based on the (presumably already filtered) measurement data. Measurement filters extract valid (or trustworthy) data, e.g. to be used in corrections. The main function is omc3.correction.filters.filter_measurement(), which decides which filter to use for the given keys.

In earlier implementations there was a split between all kinds of measurements, i.e. beta, phase etc. In this implementation most of it is handled by the _get_filtered_generic function.

omc3.correction.filters.filter_measurement(keys: Sequence[str], meas: dict[str, pd.DataFrame], model: pd.DataFrame, opt: DotDict) → dict[source]

Filters measurements in keys based on the dict-entries (keys as in keys) in opt.errorcut, opt.modelcut and opt.weights, and unifies the data-column names to VALUE, ERROR, WEIGHT. If opt.use_errorbars is True, the weights will also be based on the errors.
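A minimal usage sketch is shown below; the keys, cut values and the DotDict import path are illustrative assumptions, not defaults, and meas and model are assumed to be already loaded:

    from generic_parser import DotDict  # assumed import path for the DotDict used throughout omc3
    from omc3.correction.filters import filter_measurement

    # `meas` (dict of measurement DataFrames) and `model` (model DataFrame) are assumed to exist.
    keys = ["PHASEX", "PHASEY"]  # illustrative optics-parameter keys
    opt = DotDict(
        errorcut={"PHASEX": 0.02, "PHASEY": 0.02},  # per-key error cuts (made-up values)
        modelcut={"PHASEX": 0.05, "PHASEY": 0.05},  # per-key model cuts (made-up values)
        weights={"PHASEX": 1.0, "PHASEY": 1.0},     # per-key weights
        use_errorbars=False,                        # if True, weights are also based on the errors
    )
    filtered = filter_measurement(keys, meas, model, opt)  # columns unified to VALUE, ERROR, WEIGHT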

omc3.correction.filters.filter_response_index(response: dict, measurement: dict, keys: Sequence[str])[source]

Filters the index of the response matrices response by the respective entries in measurement.

Handler

This module contains high-level functions to manage most functionality of the corrections calculations.

omc3.correction.handler.correct(accel_inst: Accelerator, opt: DotDict) → None[source]

Perform global correction as described in omc3.global_correction.

Parameters:
  • accel_inst (Accelerator) -- Accelerator Instance

  • opt (DotDict) -- Correction options, see omc3.global_correction for details.

omc3.correction.handler.get_measurement_data(keys: Sequence[str], meas_dir: Path, beta_filename: str, w_dict: dict[str, float] = None) → tuple[list[str], dict[str, tfs.TfsDataFrame]][source]

Loads all measurements defined by keys into a dictionary.
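A short usage sketch; the key names, directory and beta file prefix are illustrative assumptions:

    from pathlib import Path
    from omc3.correction.handler import get_measurement_data

    # Load (assumed) phase and beta measurements from an optics-measurement output directory.
    keys, measurement = get_measurement_data(
        keys=["PHASEX", "PHASEY", "BETX"],
        meas_dir=Path("measurement_output"),
        beta_filename="beta_phase_",
    )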

Arc-by-Arc Global Correction

This module provides functions to turn the linear-equation problem of global correction, which corrects the phase advance at each BPM, into a problem of correcting the phase advances over whole arcs.

This is done by identifying the BPMs closest to the IPs defining each arc that are available in the measurement data, and summing all measured phase advances between them.

In the current implementation, only the measured data is modified to contain the arc phase advances, which will then be globally corrected with the given correctors.

In a future implementation this should be extended to loop over each arc and correct each individually with only the correctors available in the respective arc.

For now everything is very LHC specific, a future implementation should also extract the accelerator specific parts into the accelerator class.

See https://github.com/pylhc/omc3/issues/480 .

omc3.correction.arc_by_arc.circular_sum_phase(phases: Series, tune: float, bpm_pair: tuple[str, str])[source]

Calculate the sum of the phase advances from the first to the second BPM of the given BPM pair, taking into account the circularity of the accelerator.

omc3.correction.arc_by_arc.circular_sum_phase_error(phase_errors: Series, bpm_pair: tuple[str, str])[source]

Calculate the sum of the phase errors from the first to the second BPM of the given BPM pair, taking into account the circularity of the accelerator.
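The idea of the circular summation can be pictured with a small self-contained sketch (an illustration only, not the omc3 implementation), assuming a Series of phase advances from each BPM to the next, indexed by the first BPM of each pair:

    import pandas as pd

    def circular_sum_phase_sketch(phases: pd.Series, tune: float, bpm_pair: tuple[str, str]) -> float:
        """Sum the BPM-to-BPM phase advances from bpm_pair[0] to bpm_pair[1], wrapping around the ring."""
        names = list(phases.index)
        i_start, i_end = names.index(bpm_pair[0]), names.index(bpm_pair[1])
        if i_start <= i_end:
            return float(phases.iloc[i_start:i_end].sum())
        # The pair wraps around the ring: the phase advance over one full turn is the tune.
        return float(tune - phases.iloc[i_end:i_start].sum())

    # Made-up values: the phase advance over one full turn (the tune) is 0.31 + 0.28 + 0.41 = 1.0
    phases = pd.Series([0.31, 0.28, 0.41], index=["BPM.A", "BPM.B", "BPM.C"])
    print(circular_sum_phase_sketch(phases, tune=1.0, bpm_pair=("BPM.C", "BPM.B")))  # 1.0 - 0.28 = 0.72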

omc3.correction.arc_by_arc.get_arc_by_arc_bpm_pairs(bpms: Sequence[str], include_ips: str | None = None) → dict[str, tuple[str, str]][source]

Get a dictionary of bpm_pairs for each arc, defining the start and end of each arc.

Parameters:
  • bpms (Sequence[str]) -- List of BPMs.

  • include_ips (str | None) -- Include the IPs of each arc. Can be either ‘left’, ‘right’, ‘both’ or None

Returns:

Mapping of arc to BPM pairs to use for each arc.

Return type:

dict[str, tuple[str, str]]

omc3.correction.arc_by_arc.get_bpm_pair_phases(phase_df: DataFrame, bpm_pairs: dict[tuple[str, str]], tune: float) → DataFrame[source]

Create a new DataFrame containing as entries the phase advances between the given bpm pairs. The format/columns are the same as used by global correction.

Parameters:
  • phase_df (pd.DataFrame) -- Old DataFrame containing all the phase advances between the measured BPMs.

  • bpm_pairs (dict[tuple[str, str]]) -- Identified BPM pairs to be used.

  • tune (float) -- Model tune of the machine.

Returns:

New DataFrame containing the phase advances between the given bpm pairs.

Return type:

pd.DataFrame

omc3.correction.arc_by_arc.get_left_right_pair(bpms: Sequence[str], arc: str) → tuple[str, str][source]

Get the pair of BPMs that are furthest apart in the given arc, i.e. the ones closest to the IPs defining the arc, left and right.

Parameters:
  • bpms (Sequence[str]) -- List of BPMs.

  • arc (str) -- Arc to find the BPMs in (e.g. ‘12’)

Returns:

The found BPM pair.

Return type:

tuple[str, str]

omc3.correction.arc_by_arc.identify_closest_arc_bpm_to_ip(bpms: Sequence[str], ip: int, side: Literal['L', 'R']) → str[source]

From the given sequence, pick the BPM on the given side of the given IP that has the lowest index, i.e. the one closest to the IP.

TODO: Use a regex instead, filtering the list by [LR]IP and choosing the lowest via sort. This would ensure that e.g. BPMWs could also be used. (jdilly, 2025)
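A possible sketch of this regex approach (an assumption, not the current implementation), using LHC-style names such as 'BPM.10L1.B1', where the captured number is the BPM index and a lower number means closer to the IP:

    import re

    def closest_bpm_to_ip_sketch(bpms: list[str], ip: int, side: str) -> str:
        """Return the BPM with the lowest index on the given side ('L' or 'R') of the given IP."""
        pattern = re.compile(rf"\.(\d+){side}{ip}\.")  # e.g. matches ".10L1." and captures "10"
        candidates = [(int(m.group(1)), name) for name in bpms if (m := pattern.search(name))]
        return min(candidates)[1]

    print(closest_bpm_to_ip_sketch(["BPM.12L1.B1", "BPM.10L1.B1", "BPM.11R1.B1"], ip=1, side="L"))
    # -> BPM.10L1.B1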

omc3.correction.arc_by_arc.reduce_phase_measurements_to_arcs(meas_dict: dict[str, DataFrame], model: TfsDataFrame, include_ips: str | None = None)[source]

Reduce the phase advances in the given measurement to the phase advance between a pair of BPMs at the extremities of each arc.

Parameters:
  • meas_dict (dict[str, pd.DataFrame]) -- Dictionary of measurements as used in Global Correction.

  • model (tfs.TfsDataFrame) -- Model of the machine, used only to get the tunes from the headers.

  • include_ips (str | None) -- Include the IPs of each arc. Can be either ‘left’, ‘right’, ‘both’ or None

Returns:

The modified measurement dict.

Return type:

dict[str, pd.DataFrame]
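A brief usage sketch (the file names and the measurement key are assumptions):

    import tfs
    from omc3.correction.arc_by_arc import reduce_phase_measurements_to_arcs

    # Model twiss (the tunes are taken from its headers) and a phase measurement, indexed by BPM name.
    model = tfs.read("twiss.dat", index="NAME")
    meas_dict = {"PHASEX": tfs.read("phase_x.tfs", index="NAME")}  # illustrative key and file name

    meas_dict = reduce_phase_measurements_to_arcs(meas_dict, model, include_ips="both")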

Model Appenders

Utilities to append new columns to measurement and model dataframes. E.g. get differences between measurement and model and append those to the measurement data (for corrections).

omc3.correction.model_appenders.add_coupling_to_model(model: DataFrame) → DataFrame[source]

Computes the coupling RDTs from the input model TfsDataFrame and returns a copy of said TfsDataFrame with columns for the real and imaginary parts of the computed coupling RDTs.

Parameters:

model (tfs.TfsDataFrame) -- Twiss dataframe.

Returns:

A TfsDataFrame with the added columns.

omc3.correction.model_appenders.add_differences_to_model_to_measurements(model: pd.DataFrame, measurement: dict[str, pd.DataFrame], keys: Sequence[str] = None) → dict[str, pd.DataFrame][source]

Provided with DataFrames from a model and a measurement, and a number of keys to be found in both, returns a dictionary with the variation from measurement to model for each key.

Parameters:
  • model (pd.DataFrame) -- DataFrame of the model.

  • measurement (Dict[str, pd.DataFrame]) -- DataFrames of the measurement.

  • keys (Sequence[str]) -- Parameters to get variation to model for. Optional. If omitted, all entries in measurement are used.

Returns:

A dictionary of optics parameters and the resulting DataFrames.

Model Diff

Calculate the differences in optics parameters between twiss-models. Similar to omc3.correction.model_appenders.add_differences_to_model_to_measurements(), yet operates on two twiss files instead.

omc3.correction.model_diff.diff_twiss_parameters(model_a: TfsDataFrame, model_b: TfsDataFrame, parameters: Sequence[str] = None) → TfsDataFrame[source]

Create a TfsDataFrame containing the difference of the given parameters between model_a and model_b.
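For example (the file and parameter names are illustrative assumptions):

    import tfs
    from omc3.correction.model_diff import diff_twiss_parameters

    model_a = tfs.read("twiss_nominal.dat", index="NAME")  # e.g. the nominal model
    model_b = tfs.read("twiss_matched.dat", index="NAME")  # e.g. the model after matching
    diff = diff_twiss_parameters(model_a, model_b, parameters=["BETX", "BETY", "MUX", "MUY"])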

Response MAD-X

Provides a function to create the responses of beta, phase, dispersion, tune and coupling via iterative MAD-X calls.

The variables under investigation need to be provided as a list (which can be obtained from the accelerator class).

For now, the response matrix is stored in an HDF5 file.

author:

Lukas Malina, Joschua Dilly, Jaime (…) Coello de Portugal

omc3.correction.response_madx.create_fullresponse(accel_inst: Accelerator, variable_categories: Sequence[str], delta_k: float = 2e-05, num_proc: int = 4, temp_dir: Path = None) → dict[str, pd.DataFrame][source]

Generates a dictionary containing response matrices for beta, phase, dispersion, tune and coupling, and saves it to a file.

Parameters:
  • accel_inst -- Accelerator Instance.

  • variable_categories (list) -- Categories of the variables/knobs to use. (from .json)

  • delta_k (float) -- Delta K1L to be applied to the quadrupoles for the sensitivity matrix.

  • num_proc (int) -- Number of processes to use in parallel.

  • temp_dir (str) -- temporary directory. If None, uses folder of original_jobfile.
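A usage sketch, assuming an Accelerator instance accel has already been created via omc3's model facilities; the variable category and file name are illustrative:

    from pathlib import Path
    from omc3.correction.response_madx import create_fullresponse
    from omc3.correction.response_io import write_fullresponse

    # `accel` is assumed to be an existing Accelerator instance (e.g. an LHC model created with omc3).
    fullresponse = create_fullresponse(accel, variable_categories=["MQM"], delta_k=2e-5, num_proc=4)
    write_fullresponse(Path("fullresponse.h5"), fullresponse)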

Response TWISS

Provides a class to get response matrices from Twiss parameters.

Warning

The responses are only valid for MAD-X Beam 1 and Beam 2 twiss files, not for Beam 4! Also, the calculation only works properly for on-orbit twiss files.

The calculation is based on formulas in [1], [2] and is summarized in [3], where the following equations can be found in Eq. 10 - Eq. 15.

  • Beta Response:

\[\delta \beta_{z,j} = \mp \beta_{z,j} \sum_m \delta K_{1,m} \frac{\beta_{z,m}}{2} \frac{cos(2\tau_{z,mj})}{sin(2\pi Q_z)}\]
  • Dispersion Response:

\[\begin{split}\delta D_{x,j} =&+ \sqrt{\beta_{x,j}} \sum_m (\delta K_{0,m} + \delta K_{1S,m} D_{y,m} - \delta K_{1,m} D_{x,m}) \frac{\sqrt{\beta_{x,m}}}{2} \frac{cos(\tau_{x,mj})}{sin(\pi Q_x)} \\ \delta D_{y,j} =&- \sqrt{\beta_{y,j}} \sum_m (\delta K_{0S,m} - \delta K_{1S,m} D_{x,m} - \delta K_{1,m} D_{y,m}) \frac{\sqrt{\beta_{y,m}}}{2} \frac{cos(\tau_{y,mj})}{sin(\pi Q_y)}\end{split}\]
  • Norm. Dispersion Response: similar to the above, but with \(\frac{1}{\sqrt{\beta}}\) linearized

\[\begin{split}\delta \frac{D_{x,j}}{\sqrt{\beta_{x,j}}} =&+ \sum_m (\delta K_{0,m} + \delta K_{1S,m} D_{y,m} - \delta K_{1,m} D_{x,m} ) \frac{\sqrt{\beta_{x,m}}}{2} \frac{cos(\tau_{x,mj})}{sin(\pi Q_x)} &&+ \frac{D_{x,j}}{\sqrt{\beta_{x,j}}} \delta K_{1,m} \frac{\beta_{x,m}}{4}\frac{cos(2\tau_{x,mj})}{2sin(\pi Q_x)} \\ \delta \frac{D_{y,j}}{\sqrt{\beta_{y,j}}} =&- \sum_m (\delta K_{0S,m} - \delta K_{1S,m} D_{x,m} - \delta K_{1,m} D_{y,m}) \frac{\sqrt{\beta_{y,m}}}{2} \frac{cos(\tau_{y,mj})}{sin(\pi Q_y)} &&- \frac{D_{y,j}}{\sqrt{\beta_{y,j}}} \delta K_{1,m} \frac{\beta_{y,m}}{4}\frac{cos(2\tau_{y,mj})}{2sin(\pi Q_y)}\end{split}\]
  • Phase Advance Response:

\[\delta \Phi_{z,wj} = \pm \sum_m \delta K_{1,m} \frac{\beta_{z,m}}{4} \left\{ 2\left[ \Pi_{mj} - \Pi_{mw} + \Pi_{jw} \right] + \frac{sin(2\tau_{z,mj}) - sin(2\tau_{z,mw})}{sin(2\pi Q_z)} \right\}\]
  • Tune Response:

\[\delta Q_z = \pm \sum_m \delta K_{1,m} \frac{\beta_{z,m}}{4\pi}\]
  • Coupling Response:

\[\begin{split}\delta f_{\substack{\scriptscriptstyle 1001 \\ \scriptscriptstyle 1010},j} = \sum_m \delta J_{1,m} \, \frac{\sqrt{\beta_{x,m}\beta_{y,m}}}{4} \, \frac{\exp{(i(\Delta\Phi_{x,mj} \mp \Delta\Phi_{y,mj}))}}{1-\exp{(2\pi i (Q_x \mp Q_y))}}\end{split}\]

For people reading the code, the response matrices are first calculated like:

|  Elements of interest (j) --> ... |
|Magnets (m)                        |
|  |                                |
|  v                                |
|  .                                |
|  .                                |
|  .                                |
|                                   |

This avoids transposing all vectors individually in the beginning. At the end (of the calculation) the matrix is then transposed to fit the \(M \cdot \delta K\) orientation.

Also \(\Delta \Phi_{z,wj}\) needs to be multiplied by \(2\pi\) to be consistent.
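As a small numerical illustration of the tune-response formula above (made-up numbers, horizontal plane with the plus sign):

    import numpy as np

    beta_x_at_quads = np.array([55.0, 120.0, 30.0])  # beta_x at the varied quadrupoles [m]
    delta_k1 = np.array([1e-5, -2e-5, 5e-6])         # integrated strength changes delta K1 [1/m]

    delta_qx = np.sum(delta_k1 * beta_x_at_quads) / (4 * np.pi)  # delta Q_x = sum(dK1 * beta_x) / (4 pi)
    print(delta_qx)  # ~ -1.35e-4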

References

class omc3.correction.response_twiss.TwissResponse(accel_inst, variable_categories, varmap_or_path, at_elements='bpms')[source]

Provides Response Matrices calculated from sequence, model and given variables.

Parameters:
  • accel_inst (accelerator) -- Accelerator Instance (needs to contain elements model).

  • variable_categories (list) -- List of variable categories to get from the accelerator class.

  • varmap_or_path (dict, string) -- Mapping of the variables, either as a dict-structure of Series or as a path to a pickled file.

  • at_elements (str) -- Get the response matrix for these elements. Can be: ‘bpms’: all BPMs (default); ‘bpms+’: BPMs plus the used magnets (i.e. the magnets defined by the variables in the varfile); ‘all’: all BPMs and magnets given in the model (markers are removed).

get_beta_beat(mapped=True)[source]

Returns Response Matrix for Beta Beating

get_coupling(mapped=True)[source]

Returns Response Matrix for the coupling

get_dispersion(mapped=True)[source]

Returns Response Matrix for Dispersion

get_norm_dispersion(mapped=True)[source]

Returns Response Matrix for Normalized Dispersion

get_phase(mapped=True)[source]

Returns Response Matrix for Total Phase

get_phase_adv(mapped=True)[source]

Returns Response Matrix for Phase Advance

get_response_for(observables=None) → dict[source]

Calculates and returns only the desired response matrices

get_tune(mapped=True)[source]

Returns Response Matrix for the Tunes

omc3.correction.response_twiss.create_response(accel_inst: Accelerator, vars_categories: Sequence[str], optics_params: list[str]) → dict[source]

Wrapper to create response via TwissResponse
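A usage sketch (the Accelerator instance, variable category and optics-parameter names are assumptions):

    from omc3.correction.response_twiss import create_response

    # `accel` is assumed to be an Accelerator instance containing an elements model.
    response = create_response(accel, vars_categories=["MQM"], optics_params=["BETX", "BETY", "MUX", "MUY"])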

omc3.correction.response_twiss.dphi(data, q)[source]

Return dphi from phase advances in data, see Eq. 7 in [3]

omc3.correction.response_twiss.get_phase_advances(twiss_df: DataFrame) → dict[str, DataFrame][source]

Calculate phase advances between all elements

Returns:

Matrices similar to DPhi(i,j) = Phi(j) - Phi(i)
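The convention DPhi(i, j) = Phi(j) - Phi(i) can be pictured with a simple outer difference (a self-contained sketch with made-up values, using MUX as the horizontal phase column; not necessarily the omc3 implementation):

    import numpy as np
    import pandas as pd

    twiss = pd.DataFrame({"MUX": [0.00, 0.25, 0.40]}, index=["BPM.A", "BPM.B", "BPM.C"])
    phi = twiss["MUX"].to_numpy()
    dphi_x = pd.DataFrame(phi[np.newaxis, :] - phi[:, np.newaxis], index=twiss.index, columns=twiss.index)
    print(dphi_x.loc["BPM.A", "BPM.B"])  # 0.25 = Phi(B) - Phi(A)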

omc3.correction.response_twiss.response_add(*args) → DataFrame[source]

Merges two or more Response Matrix DataFrames

omc3.correction.response_twiss.tau(data, q)[source]

Return tau from phase advances in data, see Eq. 8 in [3]

omc3.correction.response_twiss.upper(list_of_strings: Sequence[str]) → Sequence[str][source]

Set all items of list to uppercase

Sequence Evaluation

Evaluates the variable responses from a sequence in MAD-X.

First: set all variables to 0. Then: set one variable at a time to 1 and compare the results with the all==0 case.
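Conceptually, the scheme looks like the following sketch (not the omc3 implementation; run_madx stands for an assumed callable that runs MAD-X with the given variable settings and returns the resulting values, e.g. as a pandas Series):

    def evaluate_variables_sketch(run_madx, variables):
        """Evaluate each variable's effect by comparing against the all-zero reference."""
        reference = run_madx({var: 0.0 for var in variables})  # all variables set to 0
        mapping = {}
        for var in variables:
            settings = {v: (1.0 if v == var else 0.0) for v in variables}  # one variable at 1
            mapping[var] = run_madx(settings) - reference                  # effect of this variable
        return mapping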

omc3.correction.sequence_evaluation.check_varmap_file(accel_inst: Accelerator, vars_categories)[source]

Checks for the varmap file and creates it if it is not in the model folder.

omc3.correction.sequence_evaluation.evaluate_for_variables(accel_inst: Accelerator, variable_categories, order: int = 4, num_proc: int = 4, temp_dir: Path = None) → dict[source]

Generates a dictionary containing response matrices for beta, phase, dispersion, tune and coupling, and saves it to a file.

Parameters:
  • accel_inst (Accelerator) -- Accelerator Instance.

  • variable_categories (list) -- Categories of the variables/knobs to use. (from .json)

  • order (int or tuple) -- Max or [min, max] of K-value order to use.

  • num_proc (int) -- Number of processes to use in parallel.

  • temp_dir (Path) -- temporary directory. If None, uses model_dir.

Response Matrix IO

Input and output functions for response matrices.

omc3.correction.response_io.ignore_natural_name_warning()[source]

This context manager catches and ignores the ‘NaturalNameWarning’ emitted within, which in our case comes from pytables. It warns about table entries such as ‘kq4.r8b2’, which cannot be accessed with attribute syntax such as some_table.kq4.r8b2, but we do not care about this. We let pandas handle the access with getattr (which works).

If encountering issues, comment out the context manager and debug.
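A usage sketch (the HDFStore and DataFrame are assumed to exist; the key mirrors the ‘kq4.r8b2’-style names mentioned above):

    from omc3.correction.response_io import ignore_natural_name_warning

    # `store` is assumed to be an open pandas.HDFStore and `df` a DataFrame to be written.
    with ignore_natural_name_warning():
        store.put("kq4.r8b2", df)  # would otherwise emit pytables' NaturalNameWarning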

omc3.correction.response_io.read_fullresponse(path: Path, optics_parameters: Sequence[str] = None) → dict[str, DataFrame][source]

Load the response matrices from disk. Beware: as empty DataFrames are skipped on write, entries that are not found default to empty DataFrames.

omc3.correction.response_io.read_varmap(path: Path, k_values: Sequence[str] = None) → dict[str, dict[str, Series]][source]

Load the variable mapping file from disk. Beware: as empty DataFrames are skipped on write, entries that are not found default to empty Series.

omc3.correction.response_io.write_fullresponse(path: Path, fullresponse: dict[str, DataFrame])[source]

Write the full response matrices to disk. Beware: empty DataFrames are skipped! (HDF creates gigantic files otherwise.)

omc3.correction.response_io.write_varmap(path: Path, varmap: dict[str, dict[str, Series]])[source]

Write the variable mapping file to disk. Beware: empty DataFrames are skipped! (HDF creates gigantic files otherwise.)