xbpch provides three main utilities for reading bpch files, all of which are available as top-level package imports. For most purposes you should use open_bpchdataset(); however, a lower-level interface, BPCHFile, is also provided in case you prefer to process the bpch contents manually.

See Usage and Examples for more details.

xbpch.open_bpchdataset(filename, fields=[], categories=[], tracerinfo_file='tracerinfo.dat', diaginfo_file='diaginfo.dat', endian='>', decode_cf=True, memmap=True, dask=True, return_store=False)

Open a GEOS-Chem BPCH file output as an xarray Dataset.
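For example, a typical call might look like the following sketch. The file path, tracer name, and diagnostic category are hypothetical; substitute names from your own GEOS-Chem run, and keep the tracerinfo.dat / diaginfo.dat files from that run alongside the output:

```python
def open_ozone_dataset(path="ND49_output.bpch"):
    """Open a bpch file and extract a single tracer (hypothetical names).

    Restricting ``fields``/``categories`` avoids parsing every
    diagnostic stored in the file.
    """
    import xbpch  # deferred so this sketch imports cleanly without xbpch installed

    ds = xbpch.open_bpchdataset(
        path,
        fields=["O3"],            # hypothetical tracer name
        categories=["IJ-AVG-$"],  # hypothetical diagnostic category
        tracerinfo_file="tracerinfo.dat",
        diaginfo_file="diaginfo.dat",
        memmap=True,  # memory-map the file rather than reading it eagerly
        dask=True,    # wrap variable reads in dask delayed objects
    )
    return ds
```

With decode_cf=True (the default), the result behaves like any other xarray Dataset.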

xbpch.open_mfbpchdataset(paths, concat_dim='time', compat='no_conflicts', preprocess=None, lock=None, **kwargs)

Open multiple bpch files as a single dataset.

You must have dask installed for this to work, since dask greatly simplifies issues relating to multi-file I/O.

Also, please note that this is not a very performant routine. I/O is limited by the need to manually scan/read through each bpch file to figure out its contents, since that metadata isn't saved anywhere. As a result, this routine sequentially loads a Dataset for each bpch file, then concatenates them along the "time" axis. You may wish to simply process each file individually, coerce it to NetCDF, and then ingest it through xarray as normal.

Parameters:

    paths : list of str
        Filenames to load; order doesn't matter, as they will be lexicographically sorted before the data is read.
    concat_dim : str, default='time'
        Dimension along which to concatenate Datasets. We default to "time" since this is how GEOS-Chem splits output files.
    compat : str, optional
        String indicating how to compare variables of the same name for potential conflicts when merging:
            - 'broadcast_equals': all values must be equal when variables are broadcast against each other to ensure common dimensions.
            - 'equals': all values and dimensions must be the same.
            - 'identical': all values, dimensions, and attributes must be the same.
            - 'no_conflicts': only values which are not null in both datasets must be equal. The returned dataset then contains the combination of all non-null values.
    preprocess : callable, optional
        A pre-processing function to apply to each Dataset prior to concatenation.
    lock : False, True, or threading.Lock, optional
        Passed to dask.array.from_array(). By default, xarray employs a per-variable lock when reading data from NetCDF files, but this model has not yet been extended or implemented for bpch files, so this argument is not actually used. However, it is likely necessary before dask's multi-threaded backend can be used.
    **kwargs : optional
        Additional arguments to pass to xbpch.open_bpchdataset().
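Both approaches described above can be sketched as follows. The glob pattern and file names are hypothetical; note that sorting the paths first mirrors the lexicographic ordering the reader applies internally:

```python
import glob


def open_month(pattern="ND49_2006*.bpch"):
    """Open several daily bpch files as one Dataset (hypothetical paths)."""
    import xbpch  # deferred so this sketch imports cleanly without xbpch installed

    paths = sorted(glob.glob(pattern))  # lexicographic order
    return xbpch.open_mfbpchdataset(
        paths,
        concat_dim="time",  # GEOS-Chem splits output along time
        dask=True,          # forwarded to open_bpchdataset()
    )


def convert_each_to_netcdf(pattern="ND49_2006*.bpch"):
    """Alternative workflow: coerce each file to NetCDF, then ingest via xarray."""
    import xbpch

    for path in sorted(glob.glob(pattern)):
        ds = xbpch.open_bpchdataset(path)
        # load() forces the lazy bpch reads before writing out
        ds.load().to_netcdf(path.replace(".bpch", ".nc"))
```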
class xbpch.BPCHFile(filename, mode='rb', endian='>', diaginfo_file='', tracerinfo_file='', eager=False, use_mmap=False, dask_delayed=False)

A file object for representing BPCH data on disk

Attributes

    fp : FortranFile
        A pointer to the open unformatted Fortran binary output (the original bpch file).
    var_data, var_attrs : dict
        Containers of BPCHDataBundles and dicts, respectively, holding the accessor functions for the raw bpch data and their associated metadata.
__init__(filename, mode='rb', endian='>', diaginfo_file='', tracerinfo_file='', eager=False, use_mmap=False, dask_delayed=False)

Parameters:

    filename : str
        Path to the bpch file on disk.
    mode : str
        Mode string to pass to the file opener; this is currently fixed to "rb", and all other values will be rejected.
    endian : str {">", "<", "="}
        Endian-ness of the Fortran output file.
    {tracerinfo, diaginfo}_file : str
        Path to the tracerinfo.dat and diaginfo.dat files containing metadata pertaining to the output in the bpch file being read.
    eager : bool
        Flag to immediately read variable data; if False, nothing will be read from the file and you will need to do so manually.
    use_mmap : bool
        Use memory-mapping to read data from the file.
    dask_delayed : bool
        Use dask to create delayed references to the data-reading functions.
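As a lower-level alternative to open_bpchdataset(), BPCHFile can be driven directly. The sketch below assumes the documented var_data / var_attrs attributes are the intended entry points; the file path and the "units" attribute key are hypothetical:

```python
def inspect_bpch(path="ND49_output.bpch"):
    """Open a bpch file eagerly and report its contents (hypothetical path)."""
    import xbpch  # deferred so this sketch imports cleanly without xbpch installed

    bf = xbpch.BPCHFile(
        path,
        tracerinfo_file="tracerinfo.dat",
        diaginfo_file="diaginfo.dat",
        eager=True,     # read variable data immediately
        use_mmap=True,  # memory-map rather than copying into RAM
    )
    # var_attrs maps variable names to their metadata dicts;
    # var_data holds the matching BPCHDataBundle accessors.
    for name, attrs in bf.var_attrs.items():
        print(name, attrs.get("units", "?"))
    return bf
```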

The remaining internal read steps, in order, are:

- Parse the entire bpch file on disk and set up easy access to meta- and data blocks.
- Process the header information (data model / grid spec).
- Iterate over the blocks of this bpch file and return handlers, in the form of BPCHDataBundles, for access to the data contained therein.