Using bapsflib.lapd
The bapsflib.lapd package is a one-stop shop for everything specifically
related to handling data collected on the LaPD. The package provides:
HDF5 file access via
bapsflib.lapd.File
LaPD machine specs and parameters in
bapsflib.lapd.constants
LaPD specific tools (e.g. port number to LaPD \(z\) conversion via
bapsflib.lapd.tools.portnum_to_z()) in bapsflib.lapd.tools.
Accessing HDF5 Files
Opening a File
Opening an HDF5 file is done using the bapsflib.lapd.File class.
File subclasses h5py.File, so group and dataset manipulation is
handled by the inherited h5py.File methods, whereas the new methods
and attributes (see Table 1) focus on mapping the file's internal
data structure and providing high-level access to the experimental
data and metadata recorded by the Large Plasma Device (LaPD) DAQ
system.
To open a LaPD generated HDF5 file do
>>> import h5py
>>> from bapsflib import lapd
>>> f = lapd.File('test.hdf5')
>>> f
<HDF5 file "test.hdf5" (mode r)>
>>>
>>> # f is still an instance of h5py.File
>>> isinstance(f, h5py.File)
True
which opens the file as ‘read-only’ by default.
File
restricts opening modes to ‘read-only’
(mode='r'
) and ‘read/write’ (mode='r+'
), but maintains
keyword pass-through to h5py.File
.
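The mode restriction can be sketched (purely illustratively, not bapsflib source) as a subclass that validates the requested mode before delegating to its base class; BaseFile stands in for h5py.File and all names here are hypothetical:

```python
# Minimal sketch of restricting opening modes while keeping keyword
# pass-through to the base class.  BaseFile stands in for h5py.File.
class BaseFile:
    def __init__(self, name, mode="r", **kwargs):
        self.name = name
        self.mode = mode
        self.kwargs = kwargs  # pass-through keywords (e.g. driver)


class RestrictedFile(BaseFile):
    _allowed_modes = ("r", "r+")

    def __init__(self, name, mode="r", **kwargs):
        if mode not in self._allowed_modes:
            raise ValueError(
                f"mode {mode!r} not allowed; use one of {self._allowed_modes}"
            )
        super().__init__(name, mode=mode, **kwargs)
```

Any other mode (e.g. mode='w') raises a ValueError before the base class ever sees the file.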
After Opening a File
Upon opening a file, File
calls on the
LaPDMap
class
(a subclass of HDFMap
) to construct
a mapping of the HDF5 file’s internal data structure. This mapping
provides the necessary translation for the high-level data reading
methods, read_data()
, read_controls()
, and
read_msi()
. If an element of the HDF5 file
is un-mappable (a mapping module does not exist or the mapping
fails), the data can still be reached using the lower-level
inherited methods of h5py.File. An instance of the mapping
object is bound to File
as
file_map
>>> from bapsflib import lapd
>>> from bapsflib._hdf import HDFMap
>>> f = lapd.File('test.hdf5')
>>> f.file_map
<LaPDMap of HDF5 file 'test.hdf5'>
>>>
>>> # is still an instance of HDFMap
>>> isinstance(f.file_map, HDFMap)
True
For details on how the mapping works and how the mapping objects are
structured, see HDF5 File Mapping (HDFMap). For details on using the
file_map attribute, see File Mapping.
The opened file object (f) provides a set of high-level methods and
attributes for the user to interface with, see Table 1.
method/attribute | Description
---|---
controls | dictionary of control device mappings (quick access to file_map.controls)
digitizers | dictionary of digitizer mappings (quick access to file_map.digitizers)
file_map | instance of the LaPD HDF5 file mapping (instance of LaPDMap) (see File Mapping for details)
info | dictionary of meta-info about the HDF5 file and the experimental run (see File Info: Metadata You Want for details)
msi | dictionary of MSI diagnostic mappings (quick access to file_map.msi)
overview | instance of LaPDOverview, which allows printing and saving of the file mapping results (see File Overview: Getting, Printing, and Saving for details)
read_controls() | high-level method for reading control device data contained in the HDF5 file (returns an instance of HDFReadControls) (see For Control Devices for details)
read_data() | high-level method for reading digitizer data and mating control device data at the time of read (returns an instance of HDFReadData) (see For a Digitizer for details)
read_msi() | high-level method for reading MSI diagnostic data (returns an instance of HDFReadMSI) (see For a MSI Diagnostic for details)
run_description() | print out the LaPD experimental run description
File Mapping
The file mapping is key to bapsflib
’s ability to provide a
consistent user interface to all the possible LaPD HDF5 file
configurations. It acts as the translator between the package interface
and the HDF5 file structure.
The LaPD file mapping is constructed by the
LaPDMap
class, which sub-classes
HDFMap
(see HDF5 File Mapping (HDFMap) for
HDFMap
details), and an instance is bound to
the file object as file_map
. Except for some select
cases, the file_map object does not need to be directly
accessed. Its results are provided through higher-level user interfaces,
such as the attributes/methods info
,
run_description
, read_data()
, etc. The
results can also be easily printed or saved using the
overview
attribute, see File Overview: Getting, Printing, and Saving for details.
Table 2 shows the available attributes and methods
bound to file_map
.
method/attribute | Description
---|---
controls | dictionary of control device mapping objects
digitizers | dictionary of digitizer mapping objects
exp_info | dictionary of experimental info collected from various group attributes in the HDF5 file
get() | retrieve the mapping object for a specified device
hdf_version | version string of the LaPD DAQ Controller software used to generate the HDF5 file
main_digitizer | mapping object for the digitizer that is considered the "main digitizer"
msi | dictionary of MSI diagnostic mapping objects
run_info | dictionary of experimental run info collected from various group attributes in the HDF5 file
unknowns | list of all subgroup and dataset paths in the HDF5 root group, control device group, digitizer group, and MSI group that could not be mapped
File Info: Metadata You Want
Every time an HDF5 file is opened, a dictionary of metadata about the
file and the experiment is bound to the file object as info.
>>> f = lapd.File('test.hdf5')
>>> f.info
{'absolute file path': '/foo/bar/test.hdf5',
'exp description': 'this is an experiment description',
...
'run status': 'Started'}
Table 3 lists and describes all the items that can be found in the info dictionary.
key | Description & Equivalence
---|---
'absolute file path' | absolute path to the HDF5 file; os.path.abspath(f.filename)
'exp description' | description of the experiment; f['Raw data + config'].attrs['Experiment description']
'exp name' | name of the experiment to which the run belongs; f['Raw data + config'].attrs['Experiment Name']
'exp set description' | description of the experiment set; f['Raw data + config'].attrs['Experiment set description']
'exp set name' | name of the experiment set the 'exp name' belongs to; f['Raw data + config'].attrs['Experiment set name']
'filename' | base name of the HDF5 file; os.path.basename(f.filename)
'investigator' | name of the investigator/PI of the experiment; f['Raw data + config'].attrs['Investigator']
'lapd version' | LaPD DAQ software version that wrote the HDF5 file; f.file_map.hdf_version
'run date' | date of the experimental run; f['Raw data + config'].attrs['Status date']
'run description' | description of the experimental run; f['Raw data + config'].attrs['Description']
'run name' | name of the experimental data run; f['Raw data + config'].attrs['Data run']
'run status' | status of the experimental run (started, completed, etc.); f['Raw data + config'].attrs['Status']
File Overview: Getting, Printing, and Saving
The LaPDOverview
class provides a
set of tools (see Table 4) to report the results of
the file mapping generated by LaPDMap
.
An instance of LaPDOverview
is
bound to File
as the
overview
attribute and will report
the current status of the mapping object.
>>> from bapsflib import lapd
>>> f = lapd.File('test.hdf5')
>>> isinstance(f.overview, lapd._hdf.lapdoverview.LaPDOverview)
True
Thus, if any changes are made to the mapping object
(file_map
), which could happen for
certain control devices [*], then those changes will be reflected in the
overview report.
Executing f.overview.print() will print the entire mapping report to
screen, and f.overview.save() will save the report to a text file
with the same name as the HDF5 file and in the same location. The
report is divided into three blocks:
General File and Experimental Info
This block contains information on the file (name, path, etc.), the experiment (exp. name, investigator, etc.), and the experimental run setup (run name, description, etc.).
Example:

========================================================================
                          test.hdf5 Overview
                    Generated by bapsflib (v1.0.0)
                  Generated date: 4/19/2018 3:35:43 PM
========================================================================

Filename:     test.hdf5
Abs. Path:    /foo/bar/test.hdf5
LaPD version: 1.2
Investigator: Smith
Run Date:     8/14/2017 9:49:53 PM

Exp. and Run Structure:
  (set)  <Set Name>
  (exp)  +-- <Exp Name>
  (run)  |   +-- <Run Name>

Run Description:  some description of the experimental run
Exp. Description: some description of the experiment as a whole

Discovery Report
This block gives a brief report on what devices the LaPDMap class discovered. There are no details about each discovered device, just what was discovered.
Example:

Discovery Report
----------------

Control devices (1)
+-- Raw data + config/            found
|   +-- Waveform
Digitizer devices (1)
+-- Raw data + config/            found
|   +-- SIS crate (main)
MSI devices (5)
+-- MSI/                          found
|   +-- Discharge
|   +-- Gas pressure
|   +-- Heater
|   +-- Interferometer array
|   +-- Magnetic field
Unknowns (2)                      aka unmapped
+-- /Raw data + config/Data run sequence
+-- /Raw data + config/N5700_PS

Detailed Report
This block reports details on the mapping results for each discovered device (MSI diagnostics, control devices, and digitizers). It basically reports the constructed configs dictionary of each device's mapping object.
Example:

Detailed Reports
-----------------

Digitizer Report
^^^^^^^^^^^^^^^^
SIS crate (main)
+-- adc's:  ['SIS 3302', 'SIS 3305']
+-- Configurations Detected (1)       (1 active, 0 inactive)
|   +-- sis0-10ch                     active
|   |   +-- adc's (active):  ['SIS 3302']
|   |   +-- path: /Raw data + config/SIS crate/sis0-10ch
|   |   +-- SIS 3302 adc connections
|   |   |   +-- (brd, [ch, ...])         bit  clock rate  nshotnum  nt     shot ave.  sample ave.
|   |   |   +-- (1, [3, 4, 5, 6, 7, 8])  16   100.0 MHz   6160      12288  None       8
|   |   |   +-- (2, [1, 2, 3, 4])        16   100.0 MHz   6160      12288  None       8

Control Device Report
^^^^^^^^^^^^^^^^^^^^^
Waveform
+-- path:    /Raw data + config/Waveform
+-- contype: waveform
+-- Configurations Detected (1)
|   +-- waveform_50to150kHz_df10kHz_nf11
|   |   +-- {...}

MSI Diagnostic Report
^^^^^^^^^^^^^^^^^^^^^
Discharge
+-- path:    /MSI/Discharge
+-- configs
|   +-- {...}
Gas pressure
+-- path:    /MSI/Gas pressure
+-- configs
|   +-- {...}
Heater
+-- path:    /MSI/Heater
+-- configs
|   +-- {...}
Interferometer array
+-- path:    /MSI/Interferometer array
+-- configs
|   +-- {...}
Magnetic field
+-- path:    /MSI/Magnetic field
+-- configs
|   +-- {...}
LaPDOverview
also provides the ability to report
only snippets of the report, see Table 4 below.
Method | Description and Call
---|---
print() | Print the entire overview to screen: >>> f.overview.print()
save() | Save the report to the file given by filename: >>> f.overview.save(filename='foo.txt')
report_general() | Print the general info block: >>> f.overview.report_general()
report_discovery() | Print the discovery report block: >>> f.overview.report_discovery()
report_details() | Print the detailed report block: >>> f.overview.report_details()
report_controls() | Print the detailed report block for all control devices (>>> f.overview.report_controls()) or for a specific control device, e.g. >>> f.overview.report_controls(name='Waveform')
report_digitizers() | Print the detailed report block for all digitizers (>>> f.overview.report_digitizers()) or for a specific digitizer, e.g. >>> f.overview.report_digitizers(name='SIS 3301')
report_msi() | Print the detailed report block for all MSI diagnostics (>>> f.overview.report_msi()) or for a specific MSI diagnostic, e.g. >>> f.overview.report_msi(name='Discharge')
Reading Data from an HDF5 File
Three classes, HDFReadData, HDFReadControls, and HDFReadMSI, are
provided to read data from digitizers, control devices, and MSI
diagnostics, respectively. Each of these read classes is bound to
File (see Table 5) and will return a structured numpy array with the
requested data.
Read Class | Bound Method on File | What it does
---|---|---
HDFReadData | read_data() | Extracts digitizer data from the HDF5 file, with the option of mating control device data at the time of extraction (see reading For a Digitizer)
HDFReadControls | read_controls() | Extracts control device data (see reading For Control Devices)
HDFReadMSI | read_msi() | Extracts MSI diagnostic data (see reading For a MSI Diagnostic)
For a Digitizer
Digitizer data is read using the read_data()
method on
File
. The method also has the option of mating control
device data at the time of declaration (see section
Adding Control Device Data) [1].
At a minimum, the read_data()
method
only needs a board number and channel number to extract data. For
example, the entire dataset for a signal attached to board=1
and
channel=0
can be extracted as follows:
>>> import numpy as np
>>> from bapsflib import lapd
>>> from bapsflib._hdf.utils.hdfreaddata import HDFReadData
>>>
>>> f = lapd.File('test.hdf5')
>>> board, channel = 1, 0
>>> data = f.read_data(board, channel)
>>>
>>> isinstance(data, HDFReadData)
True
>>> isinstance(data, np.ndarray)
True
where data
is an instance of
HDFReadData
, which is a
subclass of numpy.ndarray
. Thus, data
behaves just like
a numpy.ndarray
, but has additional BaPSF focused
methods/attributes that describe the data’s origin and parameters
(see Table 6).
Method/Attribute | Description
---|---
dt | temporal step size (in sec) adjusted for the adc clock rate and sample averaging
dv | voltage step size (in volts) calculated from the adc's bit resolution and voltage range
info | dictionary of meta-data corresponding to the origin of the extracted data
By default, data
is a structured numpy
array with the
following dtype
:
>>> data.dtype
dtype([('shotnum', '<u4'),
('signal', '<f4', (12288,)),
('xyz', '<f4', (3,))])
where 'shotnum'
contains the HDF5 shot number, 'signal'
contains the signal recorded by the digitizer, and 'xyz'
is a
3-element array containing the probe position. In this example,
the digitized signal is automatically converted from bits to voltage
and 12288
is the size of the signal’s time-array. The
'xyz'
is initialized with numpy.nan
values, unless
motion control data is requested at instantiation (see
Adding Control Device Data).
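The layout above can be reproduced stand-alone to experiment with structured arrays; this is purely illustrative (signal length shortened to 4 samples), not data read from a file:

```python
import numpy as np

# Structured array with the same field layout as a read_data() result.
data = np.empty(2, dtype=[("shotnum", "<u4"),
                          ("signal", "<f4", (4,)),
                          ("xyz", "<f4", (3,))])
data["shotnum"] = [1, 2]
data["signal"] = 0.0
data["xyz"] = np.nan  # NaN until motion control data is added

# fields are accessed by name, one row per shot number
first_shot = data["shotnum"][0]
```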
There are several additional keyword options to control the read
behavior of read_data()
:
Keyword | Default | Description
---|---|---
index | slice(None) | row index of the HDF5 dataset (see Extracting a sub-set)
shotnum | slice(None) | global HDF5 file shot number (see Extracting a sub-set)
digitizer | None | digitizer name to which board and channel belong
adc | None | name of the digitizer's analog-digital-converter (adc) to which board and channel belong
config_name | None | name of the digitizer configuration
keep_bits | False | set True to return the digitizer signal in bit values rather than converting to voltage
add_controls | None | list of control devices whose data will be matched and added to the requested digitizer data
intersection_set | True | ensures that the returned data array only contains shot numbers that are inclusive in shotnum, the digitizer dataset, and all control device datasets (see Extracting a sub-set)
silent | False | set True to suppress command-line printout of soft warnings
For details on handling and manipulating data
see
handle_data.
Note
Since bapsflib.lapd
leverages the h5py
package,
the data in the HDF5 file resides on disk until one of the read
methods, read_data()
, read_msi()
, or
read_controls(), is called. In calling one of these
methods, the requested data is brought into memory as a
numpy.ndarray, and a numpy view onto that ndarray is returned to the
user.
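The view behavior noted above can be demonstrated with plain numpy (illustrative only, no HDF5 file involved):

```python
import numpy as np

# A view shares memory with its base ndarray instead of copying it.
base = np.arange(6, dtype="<f4")
view = base.view()

view[0] = 99.0    # modifying the view ...
shared = base[0]  # ... is visible through the base array
```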
Extracting a sub-set
There are three keywords for sub-setting a dataset: index
,
shotnum
, and intersection_set
. index
and
shotnum
are indexing keywords, whereas, intersection_set
controls sub-setting behavior between the indexing keywords and the
dataset(s).
index
refers to the row index of the requested dataset and
shotnum
refers to the global HDF5 shot number. Either indexing
keyword can be used, but shotnum overrides index.
However, there is extra overhead in determining the shotnum
dataset locations, so index
will often execute quicker than, or
at least on par with, shotnum
. index
and
shotnum
can be of type int
, List[int]
,
slice()
, numpy.ndarray
, or numpy.s_
.
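The extra overhead of shotnum can be pictured with a small stdlib sketch (shotnum_to_index is a hypothetical helper, not part of bapsflib): the requested shot numbers must first be located in the dataset before any rows can be read, whereas index addresses rows directly.

```python
def shotnum_to_index(dataset_shotnums, requested):
    """Map requested global shot numbers to dataset row indices."""
    # build a shot-number -> row-index lookup (the extra work that
    # index-based reads skip entirely)
    lookup = {sn: i for i, sn in enumerate(dataset_shotnums)}
    return [lookup[sn] for sn in requested if sn in lookup]
```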
Sub-setting with index
looks like:
>>> import numpy as np
>>> # -- Using int values --
>>> # read dataset row 10
>>> data = f.read_data(board, channel, index=9)
>>> data['shotnum']
HDFReadData([10], dtype=uint32)
>>> # -- using List[int] or numpy.ndarray values --
>>> # read dataset rows 10, 20, and 30
>>> data = f.read_data(board, channel, index=[9, 19, 29])
>>> data = f.read_data(board, channel, index=np.array([9, 19, 29]))
>>> # -- Using slice() or numpy.s_ --
>>> # read dataset rows 10 to 19
>>> data = f.read_data(board, channel, index=slice(9, 19))
>>> data = f.read_data(board, channel, index=np.s_[9:19])
>>> # read every third row in the dataset from row 10 to 19
>>> data = f.read_data(board, channel, index=slice(9, 19, 3))
>>> data = f.read_data(board, channel, index=np.s_[9:19:3])
>>> data['shotnum']
HDFReadData([10, 13, 16, 19], dtype=uint32)
Sub-setting with shotnum
looks like:
>>> import numpy as np
>>> # -- Using int values --
>>> # read dataset shot number 10
>>> data = f.read_data(board, channel, shotnum=10)
>>> data['shotnum']
HDFReadData([10], dtype=uint32)
>>> # -- using List[int] or numpy.ndarray values --
>>> # read dataset shot numbers 10, 20, and 30
>>> data = f.read_data(board, channel, shotnum=[10, 20, 30])
>>> data = f.read_data(board, channel, shotnum=np.array([10, 20, 30]))
>>> # -- Using slice() or numpy.s_ --
>>> # read dataset shot numbers 10 to 19
>>> data = f.read_data(board, channel, shotnum=slice(10, 20))
>>> data = f.read_data(board, channel, shotnum=np.s_[10:20])
>>> # read every 5th dataset shot number from 10 to 19
>>> data = f.read_data(board, channel, shotnum=slice(10, 20, 5))
>>> data = f.read_data(board, channel, shotnum=np.s_[10:20:5])
>>> data['shotnum']
HDFReadData([10, 15], dtype=uint32)
intersection_set
modifies what shot numbers are returned by
read_data()
. By default intersection_set=True
which forces the returned data to only contain shot numbers that
exist in the digitizer dataset, exist in any specified control device
datasets, and are requested by index
or shotnum
.
Setting intersection_set
to False
will return a
data
array that has all shot numbers (\(\ge 1\)) specified
by index
or shotnum
. If a digitizer or control
device dataset does not have an entry corresponding to a specific shot
number, then its spot in the data array will be filled with a “NaN”
value (numpy.nan
for floats, -99999
for signed-integers,
and numpy.empty()
for any other numpy.dtype
).
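The shot-number logic just described can be sketched with plain Python sets (select_shotnums is a hypothetical helper, not part of bapsflib):

```python
def select_shotnums(requested, digi_sns, control_sns, intersection_set=True):
    """Decide which shot numbers a read returns."""
    if intersection_set:
        # keep only shot numbers common to the request, the digitizer
        # dataset, and every control device dataset
        sns = set(requested) & set(digi_sns)
        for csns in control_sns:
            sns &= set(csns)
        return sorted(sns)
    # intersection_set=False: keep every requested shot number >= 1;
    # entries missing from a dataset would be "NaN" filled
    return sorted(sn for sn in set(requested) if sn >= 1)
```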
Specifying digitizer
, adc
, and config_name
It is possible for a LaPD generated HDF5 file to contain multiple
digitizers, each of which can have multiple analog-digital-converters
(adc’s) and multiple configuration settings. For such a case,
read_data()
has the keywords digitizer
,
adc
, and config_name
to direct the data extraction
accordingly.
If digitizer
is not specified, then it is assumed that the
desired digitizer is the one defined in
main_digitizer
. Suppose
the test.hdf5
has two digitizers, 'SIS 3301'
and
'SIS crate'
, then 'SIS 3301'
would be assumed
as the main_digitizer
. To
extract data from 'SIS crate'
one would use the
digitizer
keyword as follows:
>>> data = f.read_data(board, channel, digitizer='SIS crate')
>>> data.info['digitizer']
'SIS crate'
Digitizer 'SIS crate'
can have multiple active
adc’s, 'SIS 3302'
and 'SIS 3305'
. By default, if only
one adc is active then that adc is assumed; however, if multiple adc’s
are active, then the adc with the slower clock rate is assumed.
'SIS 3302'
has the slower clock rate in this case, so to extract
data from 'SIS 3305'
one would use the adc
keyword as
follows:
>>> data = f.read_data(board, channel, digitizer='SIS crate',
... adc='SIS 3305')
>>> data.info['adc']
'SIS 3305'
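The default-adc rule can be sketched as follows (default_adc is a hypothetical helper, and the clock rates are illustrative values, not read from a file):

```python
def default_adc(active_adcs, clock_rates):
    """Pick the adc assumed when none is specified."""
    if len(active_adcs) == 1:
        return active_adcs[0]
    # with multiple active adc's, the slowest clock rate wins
    return min(active_adcs, key=lambda adc: clock_rates[adc])
```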
A digitizer can also have multiple configurations, but typically only
one configuration is ever active for the HDF5 file. In the case that
multiple configurations are active, there is no overlying hierarchy for
assuming one configuration over another. Suppose digitizer
'SIS crate'
has two configurations, 'config_01'
and
'config_02'
. In this case, one of the configurations has to be
specified at the time of extraction. To extract data from
'SIS crate' under the configuration 'config_02', one
would use the config_name
keyword as follows:
>>> f.digitizers['SIS crate'].active_configs
['config_01', 'config_02']
>>> data = f.read_data(board, channel, digitizer='SIS crate',
... config_name='config_02')
>>> data.info['configuration name']
'config_02'
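The configuration-selection rule just described can be sketched as (resolve_config is a hypothetical helper, not part of bapsflib): a lone active configuration is assumed, while multiple active configurations require an explicit config_name.

```python
def resolve_config(active_configs, config_name=None):
    """Decide which digitizer configuration a read uses."""
    if config_name is not None:
        if config_name not in active_configs:
            raise ValueError(f"{config_name!r} is not an active configuration")
        return config_name
    if len(active_configs) == 1:
        # a single active configuration is assumed automatically
        return active_configs[0]
    raise ValueError("multiple active configurations; specify config_name")
```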
Adding Control Device Data
Adding control device data to a digitizer dataset is done with the
keyword add_controls
. Specifying add_controls
will
trigger a call to the
HDFReadControls
class and
extract the desired control device data.
HDFReadData
then compares and
mates that control device data with the digitizer data according to the
global HDF5 shot number.
add_controls
must be a list of strings and/or 2-element tuples
specifying the desired control device data to be added to the digitizer
data. If a control device only controls one configuration, then it is
sufficient to only name that device. For example, if the
'6K Compumotor'
motion control device is only driving one
probe, then the data extraction call would look like:
>>> list(f.controls['6K Compumotor'].configs)
[3]
>>> data = f.read_data(board, channel,
... add_controls=['6K Compumotor'])
>>> data.info['added controls']
[('6K Compumotor', 3)]
In the case that the '6K Compumotor' control device has multiple
control device has multiple
configurations (driving multiple probes), the add_controls
call
must also provide the configuration name to direct the extraction.
This is done with a 2-element tuple entry for add_controls
,
where the first element is the control device name and the second
element is the configuration name. For the '6K Compumotor'
the
configuration name is the receptacle number of the probe drive [2].
Suppose the '6K Compumotor'
is utilizing three probe drives
with the receptacles 2, 3, and 4. To mate control device data from
receptacle 3, the call would look something like:
>>> list(f.controls['6K Compumotor'].configs)
[2, 3, 4]
>>> control = [('6K Compumotor', 3)]
>>> data = f.read_data(board, channel, add_controls=control)
>>> data.info['added controls']
[('6K Compumotor', 3)]
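The add_controls rules described above can be sketched as (normalize_controls is a hypothetical helper, not part of bapsflib): bare strings are accepted only when the device has exactly one configuration, otherwise a (name, config) tuple is required.

```python
def normalize_controls(add_controls, device_configs):
    """device_configs maps control name -> list of configuration names."""
    normalized = []
    for item in add_controls:
        if isinstance(item, tuple):
            # explicit (control name, configuration name) pair
            name, config = item
        else:
            # bare string: only valid for a single-configuration device
            name = item
            configs = device_configs[name]
            if len(configs) != 1:
                raise ValueError(f"{name!r} has multiple configurations; "
                                 f"pass a (name, config) tuple")
            config = configs[0]
        normalized.append((name, config))
    return normalized
```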
Multiple control device datasets can be added at once, but only
one control device for each control type can be added (see
ConType
for control types). Adding
'6K Compumotor'
data from receptacle 3 and 'Waveform'
data would look like:
>>> list(f.controls['Waveform'].configs)
['config01']
>>> f.controls['Waveform'].contype
contype.waveform
>>> f.controls['6K Compumotor'].contype
contype.motion
>>> data = f.read_data(board, channel,
...                    add_controls=[('6K Compumotor', 3),
...                                  'Waveform'])
>>> data.info['added controls']
[('6K Compumotor', 3), ('Waveform', 'config01')]
>>> data.dtype
dtype([('shotnum', '<u4'),
('signal', '<f4', (12288,)),
('xyz', '<f4', (3,)),
('command', '<U150')])
Since '6K Compumotor'
is a motion
control type, it fills out the 'xyz'
field in the returned
data array; whereas, 'Waveform'
will add field names to the
data array according to the fields specified in its mapping
constructor
HDFMapControlWaveform
.
See For Control Devices for details on these added fields.
Control device data can also be independently read using
read_controls()
.
(see For Control Devices for usage)
Each control device has its own concept of what constitutes a
configuration. The configuration has to be unique to a block of
recorded data. For the '6K Compumotor' the receptacle number is used
as the configuration name, whereas for the 'Waveform' control the
configuration name is the name of the configuration group inside the
'Waveform' group. Since the configurations are contained in the
f.file_map.controls[config_name].configs dictionary, the
configuration name need not be a string.
For Control Devices
Note
To be written
For a MSI Diagnostic
MSI diagnostic data is read using the
read_msi()
method on
File
. Only the MSI diagnostic name
needs to be supplied to read the associated data:
>>> from bapsflib import lapd
>>>
>>> # open file
>>> f = lapd.File('test.hdf5')
>>>
>>> # list mapped MSI diagnostics
>>> f.list_msi
['Discharge',
'Gas pressure',
'Heater',
'Interferometer array',
'Magnetic field']
>>>
>>> # read 'Discharge' data
>>> mdata = f.read_msi('Discharge')
The returned data mdata
is a structured numpy
array where
its field structure and population are determined by the MSI diagnostic
mapping object. Every mdata
will have the fields
'shotnum'
and 'meta'
. 'shotnum'
represents the
HDF5 shot number. 'meta'
is a structured array with fields
representing quantities (metadata) that are both diagnostic and shot
number specific, but are not considered “primary” data arrays. Any
other field in mdata
is considered to be a “primary” data array.
Continuing with the above example:
>>> # display mdata dtype
>>> mdata.dtype
dtype([('shotnum', '<i4'),
('voltage', '<f4', (2048,)),
('current', '<f4', (2048,)),
('meta', [('timestamp', '<f8'),
('data valid', 'i1'),
('pulse length', '<f4'),
('peak current', '<f4'),
('bank voltage', '<f4')])])
>>>
>>> # display shot numbers
>>> mdata['shotnum']
array([ 0, 19251], dtype=int32)
Here, the fields 'voltage'
and 'current'
correspond to
“primary” data arrays. To display the first three samples of
the 'voltage'
array for shot number 19251 do:
>>> mdata['voltage'][1][0:3:]
array([-44.631958, -44.708252, -44.631958], dtype=float32)
The metadata field 'meta' has five quantities in it:
'timestamp', 'data valid', 'pulse length', 'peak current', and
'bank voltage'. Now, these metadata fields will vary depending on
the requested MSI diagnostic. To view the 'bank voltage' for shot
number 0 do:
>>> mdata['meta']['bank voltage'][0]
6127.1323
The data array mdata
is also constructed with an info
attribute that contains metadata that is diagnostic specific but not
shot number specific.
>>> mdata.info
{'current conversion factor': [0.0],
'diagnostic name': 'Discharge',
'diagnostic path': '/MSI/Discharge',
'dt': [4.88e-05],
'hdf file': 'test.hdf5',
't0': [-0.0249856],
'voltage conversion factor': [0.0]}
Every info
attribute will have the keys 'hdf file'
,
'diagnostic name'
, and 'diagnostic path'
. The rest of
the keys will be MSI diagnostic dependent. For example,
mdata.info
for the 'Magnetic field'
diagnostic would
have the key 'z'
that corresponds to the axial locations of the
magnetic field array.
>>> # get magnetic field data
>>> mdata = f.read_msi('Magnetic field')
>>> mdata.dtype
dtype([('shotnum', '<i4'),
('magnet ps current', '<f4', (10,)),
('magnetic field', '<f4', (1024,)),
('meta', [('timestamp', '<f8'),
('data valid', 'i1'),
('peak magnetic field', '<f4')])])
>>> mdata.info
{'diagnostic name': 'Magnetic field',
'diagnostic path': '/MSI/Magnetic field',
'hdf file': 'test.hdf5',
'z': array([-300. , -297.727 , -295.45395, ..., 2020.754 ,
2023.027 , 2025.3 ], dtype=float32)}