File

class bapsflib.lapd.File(name: str, mode='r', silent=False, **kwargs)

Bases: bapsflib._hdf.utils.file.File

Open an HDF5 file created by the LaPD at BaPSF.

Parameters:
  • name – name (and path) of file on disk

  • mode – read-only 'r' (DEFAULT) or read/write 'r+'

  • silent – set to True to suppress warnings (DEFAULT False)

  • kwargs – additional keywords passed on to h5py.File

Example:
>>> # open HDF5 file
>>> f = File('sample.hdf5')
>>> type(f)
bapsflib.lapd._hdf.file.File
>>> isinstance(f, bapsflib._hdf.utils.file.File)
True
>>> isinstance(f, h5py.File)
True

Attributes Summary

attrs

Attributes attached to this object

controls

Dictionary of control device mappings.

digitizers

Dictionary of digitizer device mappings

driver

Low-level HDF5 file driver used to open file

file

Return a File instance associated with this object

file_map

LaPD HDF5 file map (LaPDMap)

filename

File name on disk

id

Low-level identifier appropriate for this object

info

Dictionary of general info on the HDF5 file and the experimental run.

libver

File format version bounds (2-tuple: low, high)

meta_block_size

Meta block size (in bytes)

mode

Python mode used to open file

msi

Dictionary of MSI device mappings.

name

Return the full name of this object.

overview

LaPD HDF5 file overview.

parent

Return the parent group of this object.

ref

An (opaque) HDF5 reference to this object

regionref

Create a region reference (Datasets only).

swmr_mode

Controls single-writer multiple-reader mode

userblock_size

User block size (in bytes)

Methods Summary

build_virtual_dataset(name, shape, dtype[, ...])

Assemble a virtual dataset in this group.

clear()

Remove all items from D.

close()

Close the file.

copy(source, dest[, name, shallow, ...])

Copy an object or group.

create_dataset(name[, shape, dtype, data])

Create a new HDF5 dataset

create_dataset_like(name, other, **kwupdate)

Create a dataset similar to other.

create_group(name[, track_order])

Create and return a new subgroup.

create_virtual_dataset(name, layout[, fillvalue])

Create a new virtual dataset in this group.

flush()

Tell the HDF5 library to flush its buffers.

get(name[, default, getclass, getlink])

Retrieve an item or other information.

items()

Get a view object on member items

keys()

Get a view object on member names

move(source, dest)

Move a link to a new location in the file.

pop(k[,d])

Remove specified key and return the corresponding value; if key is not found, d is returned if given, otherwise KeyError is raised.

popitem()

Remove and return some (key, value) pair as a 2-tuple; raise KeyError if D is empty.

read_controls(controls[, shotnum, ...])

Reads data from control device datasets.

read_data(board, channel[, index, shotnum, ...])

Reads data from digitizer datasets and attaches control device data when requested.

read_msi(msi_diag[, silent])

Reads data from MSI Diagnostic datasets.

require_dataset(name, shape, dtype[, exact])

Open a dataset, creating it if it doesn't exist.

require_group(name)

Return a group, creating it if it doesn't exist.

run_description()

Print description of the LaPD experimental run.

setdefault(k[,d])

Return D.get(k, d), also setting D[k]=d if k is not in D.

update([E, ]**F)

Update D from mapping/iterable E and F.

values()

Get a view object on member objects

visit(func)

Recursively visit all names in this group and subgroups (HDF5 1.8).

visititems(func)

Recursively visit names and objects in this group (HDF5 1.8).

Attributes Documentation

attrs

Attributes attached to this object

controls

Dictionary of control device mappings.

digitizers

Dictionary of digitizer device mappings

driver

Low-level HDF5 file driver used to open file

file

Return a File instance associated with this object

file_map

LaPD HDF5 file map (LaPDMap)
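
This map is what the read methods consult internally; a quick way to inspect it (the device names here are the sample ones from the read_controls() example below):

>>> fmap = f.file_map
>>> list(fmap.controls)
['6K Compumotor', 'Waveform']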

filename

File name on disk

id

Low-level identifier appropriate for this object

info

Dictionary of general info on the HDF5 file and the experimental run.
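
Since info is exposed as a dictionary, its contents can be dumped directly (which keys are present depends on the file):

>>> f = File('sample.hdf5')
>>> for key, val in f.info.items():
...     print(key, val)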

libver

File format version bounds (2-tuple: low, high)

meta_block_size

Meta block size (in bytes)

mode

Python mode used to open file

msi

Dictionary of MSI device mappings.

name

Return the full name of this object. None if anonymous.

overview

LaPD HDF5 file overview. (LaPDOverview)

parent

Return the parent group of this object.

This is always equivalent to obj.file[posixpath.dirname(obj.name)]. ValueError if this object is anonymous.

ref

An (opaque) HDF5 reference to this object

regionref

Create a region reference (Datasets only).

The syntax is regionref[<slices>]. For example, dset.regionref[...] creates a region reference in which the whole dataset is selected.

Can also be used to determine the shape of the referenced dataset (via .shape property), or the shape of the selection (via the .selection property).

swmr_mode

Controls single-writer multiple-reader mode
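
A minimal sketch of turning it on for writing, assuming standard h5py behavior (the file must be writable and, per h5py, opened with libver='latest'):

>>> f = File('sample.hdf5', 'r+', libver='latest')
>>> f.swmr_mode = True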

userblock_size

User block size (in bytes)

Methods Documentation

build_virtual_dataset(name, shape, dtype, maxshape=None, fillvalue=None)

Assemble a virtual dataset in this group.

This is used as a context manager:

with f.build_virtual_dataset('virt', (10, 1000), np.uint32) as layout:
    layout[0] = h5py.VirtualSource('foo.h5', 'data', (1000,))

name

(str) Name of the new dataset

shape

(tuple) Shape of the dataset

dtype

A numpy dtype for data read from the virtual dataset

maxshape

(tuple, optional) Maximum dimensions if the dataset can grow. Use None for unlimited dimensions.

fillvalue

The value used where no data is available.

clear() → None.  Remove all items from D.

close()

Close the file. All open objects become invalid

copy(source, dest, name=None, shallow=False, expand_soft=False, expand_external=False, expand_refs=False, without_attrs=False)

Copy an object or group.

The source can be a path, Group, Dataset, or Datatype object. The destination can be either a path or a Group object. The source and destinations need not be in the same file.

If the source is a Group object, all objects contained in that group will be copied recursively.

When the destination is a Group object, by default the target will be created in that group with its current name (basename of obj.name). You can override that by setting “name” to a string.

There are various options which all default to “False”:

  • shallow: copy only immediate members of a group.

  • expand_soft: expand soft links into new objects.

  • expand_external: expand external links into new objects.

  • expand_refs: copy objects that are pointed to by references.

  • without_attrs: copy object without copying attributes.

Example:

>>> f = File('myfile.hdf5', 'w')
>>> f.create_group("MyGroup")
>>> list(f.keys())
['MyGroup']
>>> f.copy('MyGroup', 'MyCopy')
>>> list(f.keys())
['MyGroup', 'MyCopy']
create_dataset(name, shape=None, dtype=None, data=None, **kwds)

Create a new HDF5 dataset

name

Name of the dataset (absolute or relative). Provide None to make an anonymous dataset.

shape

Dataset shape. Use “()” for scalar datasets. Required if “data” isn’t provided.

dtype

Numpy dtype or string. If omitted, dtype('f') will be used. Required if “data” isn’t provided; otherwise, overrides data array’s dtype.

data

Provide data to initialize the dataset. If used, you can omit shape and dtype arguments.

Keyword-only arguments:

chunks

(Tuple or int) Chunk shape, or True to enable auto-chunking. Integers can be used for 1D shape.

maxshape

(Tuple or int) Make the dataset resizable up to this shape. Use None for axes you want to be unlimited. Integers can be used for 1D shape.

compression

(String or int) Compression strategy. Legal values are ‘gzip’, ‘szip’, ‘lzf’. If an integer in range(10), this indicates gzip compression level. Otherwise, an integer indicates the number of a dynamically loaded compression filter.

compression_opts

Compression settings. This is an integer for gzip, 2-tuple for szip, etc. If specifying a dynamically loaded compression filter number, this must be a tuple of values.

scaleoffset

(Integer) Enable scale/offset filter for (usually) lossy compression of integer or floating-point data. For integer data, the value of scaleoffset is the number of bits to retain (pass 0 to let HDF5 determine the minimum number of bits necessary for lossless compression). For floating point data, scaleoffset is the number of digits after the decimal place to retain; stored values thus have absolute error less than 0.5*10**(-scaleoffset).

shuffle

(T/F) Enable shuffle filter.

fletcher32

(T/F) Enable fletcher32 error detection. Not permitted in conjunction with the scale/offset filter.

fillvalue

(Scalar) Use this value for uninitialized parts of the dataset.

track_times

(T/F) Enable dataset creation timestamps.

track_order

(T/F) Track attribute creation order if True. If omitted use global default h5.get_config().track_order.

external

(Iterable of tuples) Sets the external storage property, thus designating that the dataset will be stored in one or more non-HDF5 files external to the HDF5 file. Adds each tuple of (name, offset, size) to the dataset’s list of external files. Each name must be a str, bytes, or os.PathLike; each offset and size, an integer. If only a name is given instead of an iterable of tuples, it is equivalent to [(name, 0, h5py.h5f.UNLIMITED)].

efile_prefix

(String) External dataset file prefix for dataset access property list. Does not persist in the file.

virtual_prefix

(String) Virtual dataset file prefix for dataset access property list. Does not persist in the file.

allow_unknown_filter

(T/F) Do not check that the requested filter is available for use. This should only be used with write_direct_chunk, where the caller compresses the data before handing it to h5py.

rdcc_nbytes

Total size of the dataset’s chunk cache in bytes. The default size is 1024**2 (1 MiB).

rdcc_w0

The chunk preemption policy for this dataset. This must be between 0 and 1 inclusive and indicates the weighting according to which chunks which have been fully read or written are penalized when determining which chunks to flush from cache. A value of 0 means fully read or written chunks are treated no differently than other chunks (the preemption is strictly LRU) while a value of 1 means fully read or written chunks are always preempted before other chunks. If your application only reads or writes data once, this can be safely set to 1. Otherwise, this should be set lower depending on how often you re-read or re-write the same data. The default value is 0.75.

rdcc_nslots

The number of chunk slots in the dataset’s chunk cache. Increasing this value reduces the number of cache collisions, but slightly increases the memory used. Due to the hashing strategy, this value should ideally be a prime number. As a rule of thumb, this value should be at least 10 times the number of chunks that can fit in rdcc_nbytes bytes. For maximum performance, this value should be set approximately 100 times that number of chunks. The default value is 521.
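
Putting a few of these keywords together, a hedged sketch (the dataset name here is arbitrary, and the file must have been opened with mode='r+' so it is writable):

>>> dset = f.create_dataset('scratch/signal', shape=(1000, 100),
...                         dtype='f4', chunks=(100, 100),
...                         compression='gzip', compression_opts=4)
>>> dset.shape
(1000, 100)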

create_dataset_like(name, other, **kwupdate)

Create a dataset similar to other.

name

Name of the dataset (absolute or relative). Provide None to make an anonymous dataset.

other

The dataset which the new dataset should mimic. All properties, such as shape, dtype, chunking, … will be taken from it, but no data or attributes are being copied.

Any dataset keywords (see create_dataset) may be provided, including shape and dtype, in which case the provided values take precedence over those from other.
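
For example, assuming the 'scratch/signal' dataset from the sketch above exists, its properties can be cloned while overriding the dtype:

>>> clone = f.create_dataset_like('scratch/signal2', f['scratch/signal'],
...                               dtype='f8')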

create_group(name, track_order=None)

Create and return a new subgroup.

Name may be absolute or relative. Fails if the target name already exists.

track_order

Track dataset/group/attribute creation order under this group if True. If None use global default h5.get_config().track_order.
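
For example (the group name is arbitrary; requires a writable file):

>>> grp = f.create_group('scratch/analysis')
>>> grp.name
'/scratch/analysis'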

create_virtual_dataset(name, layout, fillvalue=None)

Create a new virtual dataset in this group.

See virtual datasets in the docs for more information.

name

(str) Name of the new dataset

layout

(VirtualLayout) Defines the sources for the virtual dataset

fillvalue

The value to use where there is no data.
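
A hedged sketch of assembling a layout from several source files and creating the virtual dataset (file and dataset names are placeholders; requires a writable file):

>>> import h5py
>>> layout = h5py.VirtualLayout(shape=(4, 100), dtype='i4')
>>> for i in range(4):
...     layout[i] = h5py.VirtualSource(f'data{i}.h5', 'data', shape=(100,))
...
>>> f.create_virtual_dataset('vds', layout, fillvalue=-1)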

flush()

Tell the HDF5 library to flush its buffers.

get(name, default=None, getclass=False, getlink=False)

Retrieve an item or other information.

“name” given only:

Return the item, or “default” if it doesn’t exist

“getclass” is True:

Return the class of object (Group, Dataset, etc.), or “default” if nothing with that name exists

“getlink” is True:

Return HardLink, SoftLink or ExternalLink instances. Return “default” if nothing with that name exists.

“getlink” and “getclass” are True:

Return HardLink, SoftLink and ExternalLink classes. Return “default” if nothing with that name exists.

Example:

>>> cls = group.get('foo', getclass=True)
>>> if cls == SoftLink:
...     print('"foo" is a soft link!')
items()

Get a view object on member items

keys()

Get a view object on member names

move(source, dest)

Move a link to a new location in the file.

If “source” is a hard link, this effectively renames the object. If “source” is a soft or external link, the link itself is moved, with its value unmodified.
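
For example (paths are placeholders; requires a writable file):

>>> f.move('scratch/signal', 'scratch/signal_renamed')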

pop(k[, d]) → v, remove specified key and return the corresponding value.

If key is not found, d is returned if given, otherwise KeyError is raised.

popitem() → (k, v), remove and return some (key, value) pair as a 2-tuple; but raise KeyError if D is empty.

read_controls(controls: List[str | Tuple[str, Any]], shotnum=slice(None, None, None), intersection_set=True, silent=False, **kwargs)

Reads data from control device datasets. See HDFReadControls for more detail.

Parameters:
  • controls (List[Union[str, Tuple[str, Any]]]) – A list of strings and/or 2-element tuples indicating the control device(s). If a control device has only one configuration, then only the device name 'control' needs to be passed in the list. If a control device has multiple configurations, then the device name and its configuration “name” need to be passed as a tuple element ('control', 'config') in the list. (see condition_controls() for details)

  • shotnum (Union[int, list(int), slice(), numpy.array]) – HDF5 file shot number(s) indicating data entries to be extracted

  • intersection_set (bool) – True (DEFAULT) will force the returned shot numbers to be the intersection of shotnum and the shot numbers contained in each control device dataset. False will return the union instead of the intersection, minus shot numbers ≤ 0. (see HDFReadControls for details)

  • silent (bool) – False (DEFAULT). Set True to ignore any UserWarnings (soft-warnings)

Return type:

HDFReadControls

Example:
>>> # open HDF5 file
>>> f = File('sample.hdf5')
>>>
>>> # list control devices
>>> list(f.controls)
['6K Compumotor', 'Waveform']
>>>
>>> # list '6K Compumotor' configurations
>>> list(f.controls['6K Compumotor'].configs)
[2, 3]
>>>
>>> # extract all '6K Compumotor' data for configuration 3
>>> cdata = f.read_controls([('6K Compumotor', 3)])
>>> type(cdata)
bapsflib._hdf.utils.hdfreadcontrols.HDFReadControls
>>>
>>> # list 'Waveform' configurations
>>> list(f.file_map.controls['Waveform'].configs)
['config01']
>>>
>>> # extract 'Waveform' data
>>> cdata = f.read_controls(['Waveform'])
>>> list(cdata.info['controls'])
['Waveform']
>>>
>>> # extract both 'Waveform' and '6K Compumotor'
>>> controls = ['Waveform', ('6K Compumotor', 2)]
>>> cdata = f.read_controls(controls)
>>> list(cdata.info['controls'])
['6K Compumotor', 'Waveform']
read_data(board: int, channel: int, index=slice(None, None, None), shotnum=slice(None, None, None), digitizer=None, adc=None, config_name=None, keep_bits=False, add_controls=None, intersection_set=True, silent=False, **kwargs)

Reads data from digitizer datasets and attaches control device data when requested. (see hdfreaddata.HDFReadData for details)

Parameters:
  • board – digitizer board number

  • channel – digitizer channel number

  • index (Union[int, list(int), slice(), numpy.array]) – dataset row index

  • shotnum (Union[int, list(int), slice(), numpy.array]) – HDF5 global shot number

  • digitizer (str) – name of digitizer

  • adc (str) – name of the digitizer’s analog-digital converter

  • config_name (str) – name of digitizer configuration

  • keep_bits (bool) – True to keep digitizer signal in bits, False (default) to convert digitizer signal to voltage

  • add_controls (List[Union[str, Tuple[str, Any]]]) – A list of strings and/or 2-element tuples indicating the control device(s). If a control device has only one configuration, then only the device name 'control' needs to be passed in the list. If a control device has multiple configurations, then the device name and its configuration “name” need to be passed as a tuple element ('control', 'config') in the list. (see condition_controls() for details)

  • intersection_set (bool) – True (DEFAULT) will force the returned shot numbers to be the intersection of shotnum, the digitizer dataset shot numbers, and, if requested, the shot numbers contained in each control device dataset. False will return the union instead of the intersection, minus shot numbers ≤ 0. (see HDFReadData for details)

  • silent (bool) – False (DEFAULT). Set True to ignore any UserWarnings (soft-warnings)

Return type:

HDFReadData

Example:
>>> # open HDF5 file
>>> f = File('sample.hdf5')
>>>
>>> # list digitizers
>>> list(f.digitizers)
['SIS crate']
>>>
>>> # get active configurations
>>> list(f.digitizers['SIS crate'].configs)
['config01', 'config02']
>>>
>>> # get active adc's for config
>>> f.digitizers['SIS crate'].configs['config01']['adc']
('SIS 3302',)
>>>
>>> # get first connected brd and channels to adc
>>> brd, chs = f.digitizers['SIS crate'].configs['config01'][
...     'SIS 3302'][0][0:2]
>>> brd
1
>>> chs
(1, 2, 3)
>>>
>>> # get data for brd = 1, ch = 1
>>> data = f.read_data(brd, chs[0],
...                    digitizer='SIS crate',
...                    adc='SIS 3302',
...                    config_name='config01')
>>> type(data)
bapsflib._hdf.utils.hdfreaddata.HDFReadData
>>>
>>> # Note: a quicker way to see how the digitizers are
>>> #       configured is to use
>>> #
>>> #       f.overview.report_digitizers()
>>> #
>>> #       which prints to screen a report of the
>>> #       digitizer hookup
read_msi(msi_diag: str, silent=False, **kwargs)

Reads data from MSI Diagnostic datasets. See HDFReadMSI for more detail.

Parameters:
  • msi_diag – name of MSI diagnostic

  • silent (bool) – False (DEFAULT). Set True to ignore any UserWarnings (soft-warnings)

Return type:

HDFReadMSI

Example:
>>> # open HDF5 file
>>> f = File('sample.hdf5')
>>>
>>> # list msi diagnostics
>>> list(f.msi)
['Interferometer array', 'Magnetic field']
>>>
>>> # read 'Interferometer array'
>>> mdata = f.read_msi('Interferometer array')
>>> type(mdata)
bapsflib._hdf.utils.hdfreadmsi.HDFReadMSI
require_dataset(name, shape, dtype, exact=False, **kwds)

Open a dataset, creating it if it doesn’t exist.

If keyword “exact” is False (default), an existing dataset must have the same shape and a conversion-compatible dtype to be returned. If True, the shape and dtype must match exactly.

If keyword “maxshape” is given, the maxshape and dtype must match instead.

If any of the keywords “rdcc_nslots”, “rdcc_nbytes”, or “rdcc_w0” are given, they will be used to configure the dataset’s chunk cache.

Other dataset keywords (see create_dataset) may be provided, but are only used if a new dataset is to be created.

Raises TypeError if an incompatible object already exists, or if the shape, maxshape or dtype don’t match according to the above rules.
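
A minimal sketch (name and shape are placeholders; requires a writable file):

>>> dset = f.require_dataset('scratch/timeseries', shape=(100,), dtype='f8')
>>> dset.shape
(100,)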

require_group(name)

Return a group, creating it if it doesn’t exist.

TypeError is raised if something with that name already exists that isn’t a group.
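
For example (the group name is a placeholder; requires a writable file):

>>> grp = f.require_group('scratch')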

run_description()

Print description of the LaPD experimental run.
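
Example (the printed report depends on the file):
>>> f = File('sample.hdf5')
>>> f.run_description()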

setdefault(k[, d]) → D.get(k,d), also set D[k]=d if k not in D

update([E, ]**F) → None.  Update D from mapping/iterable E and F.

If E present and has a .keys() method, does: for k in E: D[k] = E[k]
If E present and lacks .keys() method, does: for (k, v) in E: D[k] = v
In either case, this is followed by: for k, v in F.items(): D[k] = v

values()

Get a view object on member objects

visit(func)

Recursively visit all names in this group and subgroups (HDF5 1.8).

You supply a callable (function, method or callable object); it will be called exactly once for each link in this group and every group below it. Your callable must conform to the signature:

func(<member name>) => <None or return value>

Returning None continues iteration, returning anything else stops and immediately returns that value from the visit method. No particular order of iteration within groups is guaranteed.

Example:

>>> # List the entire contents of the file
>>> f = File("foo.hdf5")
>>> list_of_names = []
>>> f.visit(list_of_names.append)
visititems(func)

Recursively visit names and objects in this group (HDF5 1.8).

You supply a callable (function, method or callable object); it will be called exactly once for each link in this group and every group below it. Your callable must conform to the signature:

func(<member name>, <object>) => <None or return value>

Returning None continues iteration, returning anything else stops and immediately returns that value from the visit method. No particular order of iteration within groups is guaranteed.

Example:

>>> # Get a list of all datasets in the file
>>> mylist = []
>>> def func(name, obj):
...     if isinstance(obj, Dataset):
...         mylist.append(name)
...
>>> f = File('foo.hdf5')
>>> f.visititems(func)