Commit 07a12753 authored by Jan Kotanski's avatar Jan Kotanski

New upstream version 0.11.0

parent 2cfd7210
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
version: 2
updates:
  - package-ecosystem: "pip" # See documentation for possible values
    directory: "/" # Location of package manifests
    schedule:
      interval: "daily"
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see:
name: Python package

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.6', '3.7', '3.8', '3.9']
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install flake8 pytest
          if [ -f ci/requirements_gh.txt ]; then pip install -r ci/requirements_gh.txt; fi
      - name: Build package
        run: |
          python setup.py build
          python setup.py bdist_wheel
      - name: Lint with flake8
        run: |
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: Run tests
        run: |
# C extensions
# Packages
# Installer logs
# Unit test / coverage reports
# Translations
# Mr Developer
# generated c files, i.e. all except the ones in the src directory
# .readthedocs.yml
# Read the Docs configuration file
# See for details

# Required
version: 2

# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: doc/source/

# Build documentation with MkDocs
# mkdocs:
#   configuration: mkdocs.yml

# Optionally build your docs in additional formats such as PDF
# formats:
#   - pdf

# Optionally set the version of Python and requirements required to build your docs
python:
  version: 3.7
  install:
    - requirements: requirements.txt
    - requirements: ci/requirements_rtd.txt
    - method: setuptools
      path: .
language: python
update: true

# Enable 3.7 without globally enabling sudo and dist: xenial for other build jobs
matrix:
  include:
    - python: 3.6
      os: linux
    - python: 3.7
      dist: xenial
    - python: 3.7
      os: linux-ppc64le
      dist: xenial
    - python: 3.7
      os: linux-arm64
      dist: xenial
    - python: 3.8
      dist: xenial

# command to install dependencies
install:
  # Upgrade distribution modules
  - "python -m pip install --upgrade pip"
  - "pip install --upgrade setuptools"
  - "pip install --upgrade wheel"
  # Install build dependencies: HDF5 directory setup here which is needed for PPC64le
  - "if [ $TRAVIS_ARCH = 'ppc64le' ]; then source ci/ppc64le_installer; fi"
  - "pip install -r ci/requirements_travis.txt --upgrade"
  # Print Python info
  - "python ci/"
  - "pip list"
  # Generate source package or wheel
  - "python setup.py build bdist_wheel"
  - "pip install ."

# command to run tests
script:
  - "python setup.py build test"
  - "python ./ --installed"
Please see doc/source/Changelog.rst
# Patterns to exclude from any directory
global-exclude *~
global-exclude *.pyc
global-exclude *.pyo
global-exclude .git
global-exclude .ipynb_checkpoints
recursive-include package/debian* *
recursive-include fabio/ext *.c *.h *.pyx
recursive-exclude test/tiftest *
recursive-exclude test/testimages *
recursive-exclude testimages *
recursive-exclude fabio.egg-info *
recursive-exclude build *
recursive-exclude dist *
recursive-exclude pylint *
include stdeb.cfg
include setup.cfg
exclude MANIFEST
include README.rst
include copyright
include requirements.txt
include pyproject.toml
# Include doc without checkpoints
recursive-include doc *
recursive-exclude doc .ipynb_checkpoints/*.ipynb
FabIO: Fable Input/Output library
=================================
Main websites:
* (historical)
|Build Status| |Appveyor Status|
FabIO is an I/O library for images produced by 2D X-ray detectors, written in Python.
FabIO supports image detectors from a dozen companies (including Mar, Dectris, ADSC, Hamamatsu, Oxford, ...),
for a total of 30 different file formats (like CBF, EDF, TIFF, ...), and offers a unified interface to their
headers (as a Python dictionary) and datasets (as a numpy ndarray of integers or floats).
Getting FabIO
-------------
FabIO is available from `PyPI <>`_.
`Debian/Ubuntu packages <>`_, and
`wheels <>`_ are available
for Windows, Linux and MacOSX from the silx repository:
Documentation is available at `silx <>`_.
The general philosophy of the library is described in:
`FabIO: easy access to two-dimensional X-ray detector images in Python; E. B. Knudsen, H. O. Sørensen, J. P. Wright, G. Goret and J. Kieffer Journal of Applied Crystallography, Volume 46, Part 2, pages 537-539. <>`_
Transparent handling of compressed files
----------------------------------------
FabIO is expected to handle gzip- and bzip2-compressed files transparently.
Following a query about the performance of reading compressed data, some
benchmarking details have been collected at fabio_compressed_speed.
This means that when your Python was configured and built, the bzip2 and gzip
modules needed to be present (e.g. the libbz2-dev package on Ubuntu).
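The extension-based dispatch behind such transparent decompression can be sketched with the standard library alone. This is a minimal illustration, not FabIO's actual implementation, and the helper name ``open_image_bytes`` is made up::

```python
import bz2
import gzip
import os


def open_image_bytes(path):
    """Return the raw bytes of an image file, decompressing
    transparently based on the file extension."""
    ext = os.path.splitext(path)[1].lower()
    # Pick the opener matching the compression suffix; plain open otherwise
    openers = {".gz": gzip.open, ".bz2": bz2.open}
    opener = openers.get(ext, open)
    with opener(path, "rb") as stream:
        return stream.read()
```

The caller never has to know whether the file on disk was compressed; the suffix alone selects the decompressor.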
Using fabio in your own Python programs
---------------------------------------
>>> import fabio
>>> obj = fabio.open("mydata0000.edf")
>>> obj.data.shape
(2048, 2048)
>>> obj.header["Omega"]
Design Specifications
---------------------
FabIO = Fable Input/Output
Have a base class for all our 2D diffraction greyscale images.
This consists of a 2D array (numpy ndarray)
and a python dictionary (actually an ordered dict) of header information in (string key, string value) pairs.
Class FabioImage
Needs a name which will not be confused with an RGB color image.
Class attributes, often exposed as properties:
* data -> 2D array
* header -> ordered dictionary
* rows, columns, dim1, dim2 -> data.shape (property)
* header_keys -> property for list(header.keys()), formerly used to retain the order of the header
* bytecode -> data.typecode() (property)
* m, minval, maxval, stddev -> image statistics, could add others, eg roi[slice]
Class methods (functions):
* integrate_area() -> returns the sum of the data within a slice
* rebin(fact) -> rebins data, adjusts dims
* toPIL16() -> returns a PILimage
* getheader() -> returns self.header
* resetvals() -> resets the statistics
* getmean() -> (computes) returns self.m
* getmin() -> (computes) returns self.minval
* getmax() -> (computes) returns self.maxval
* getstddev() -> (computes) returns self.stddev
* read() -> read image from file [or stream, or shared memory]
* write() -> write image to file [or stream, or shared memory]
* readheader() -> read only the header [much faster for scanning files]
Each individual file format would then inherit all the functionality of this class and just make new read and write methods.
There are also file-series related methods (next(), previous(), ...) which return a FabioImage instance of the next/previous frame in a file series.
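The base-class-plus-read()-override pattern described above can be sketched as follows. This is a simplified illustration of the design spec, not FabIO's actual implementation; the class names ``FabioImageSketch`` and ``NpyImageSketch`` are invented for the example::

```python
from collections import OrderedDict

import numpy


class FabioImageSketch:
    """Minimal sketch of the FabioImage design: a 2D numpy array plus an
    ordered (string key, string value) header dict, with lazily computed
    image statistics."""

    def __init__(self, data=None, header=None):
        self.data = data
        self.header = OrderedDict(header or {})
        self.resetvals()

    def resetvals(self):
        """Reset the cached image statistics."""
        self.m = self.minval = self.maxval = self.stddev = None

    def getmean(self):
        # compute once, then return the cached value
        if self.m is None:
            self.m = float(self.data.mean())
        return self.m

    def getmin(self):
        if self.minval is None:
            self.minval = self.data.min()
        return self.minval

    def read(self, fname):
        raise NotImplementedError("each format subclass provides read()")


class NpyImageSketch(FabioImageSketch):
    """A format subclass only has to override read() (and optionally write())."""

    def read(self, fname):
        self.data = numpy.load(fname)
        self.resetvals()
        return self
```

Statistics are reset whenever new data is read, so the cached values can never describe a stale frame.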
Other feature:
* possibility for using on-the-fly external compression - i.e. if files are
stored as something as .gz, .bz2 etc could decompress them, using an external
compression mechanism (if available).
Supported file formats
----------------------
* ADSC:
+ AdscImage
* Bruker:
+ BrukerImage
+ Bruker100Image
+ KcdImage: Nonius KappaCCD diffractometer
* D3M
+ D3mImage
* Dectris:
+ CbfImage (implements a fast byte offset de/compression scheme in python/cython)
+ PilatusImage (fileformat derived from Tiff)
+ EigerImage (derived from HDF5/NeXus format, depends on `h5py`)
* ESRF:
+ EdfImage: The ESRF Data Format
+ XsdImage: XML serialized image from EDNA
+ Fit2dImage: Fit2d binary format
+ Fit2dmaskImage: Fit2d Mask format
+ Fit2dSpreadsheetImage: Fit2d ASCII tables (spread-sheet)
+ LimaImage: image stacks written by the LImA acquisition system
+ SparseImage: single crystal diffraction images written by pyFAI
* General Electric
+ GEimage (including support for variant used at APS)
* Hamamatsu
+ HiPiCImage
* HDF5: generic format for stack of images based on h5py
+ Hdf5Image
+ EigerImage
+ LimaImage
+ SparseImage
* JPEG image format:
+ JPEG using PIL
+ JPEG 2000 using Glymur
* Mar Research:
+ MarccdImage (fileformat derived from Tiff)
+ Mar345Image imaging plate with PCK compression
* MPA multiwire
+ MpaImage
* Medical Research Council file format for 3D electron density and 2D images
+ MrcImage
* Nonius -> now owned by Bruker
+ KcdImage
* Numpy: generic reader for 2D arrays saved
+ NumpyImage
* Oxford Diffraction Sapphire 3
+ OXDimage uncompressed or with TY1 or TY5 compression scheme
+ Esperanto format (with bitfield compression)
* Pixirad Imaging
+ PixiImage
+ PnmImage
* Princeton Instrument SPE
+ SpeImage
* Raw Binary without compression
* Rigaku
+ RaxisImage
+ DtrekImage
* Tiff
+ TifImage using either:
- Pillow (external dependency)
- TiffIO taken from PyMca
Please see doc/source/INSTALL.rst
Please see doc/source/Changelog.rst
.. |Build Status| image::
.. |Appveyor Status| image::
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""Bootstrap helps you to test scripts without installing them
by patching your PYTHONPATH on the fly.

example: ./ ipython
"""
__authors__ = ["Frédéric-Emmanuel Picca", "Jérôme Kieffer"]
__contact__ = ""
__license__ = "MIT"
__date__ = "26/12/2020"
import sys
import os
import distutils.util
import subprocess
import logging
logger = logging.getLogger("bootstrap")
def is_debug_python():
    """Returns true if the Python interpreter is in debug mode."""
    try:
        import sysconfig
    except ImportError:  # pragma nocover
        # Python < 2.7
        import distutils.sysconfig as sysconfig
    if sysconfig.get_config_var("Py_DEBUG"):
        return True
    return hasattr(sys, "gettotalrefcount")
def _distutils_dir_name(dname="lib"):
    """Returns the name of a distutils build directory"""
    platform = distutils.util.get_platform()
    architecture = "%s.%s-%i.%i" % (dname, platform,
                                    sys.version_info[0], sys.version_info[1])
    if is_debug_python():
        architecture += "-pydebug"
    return architecture
def _distutils_scripts_name():
    """Return the name of the distutils scripts directory"""
    f = "scripts-{version[0]}.{version[1]}"
    return f.format(version=sys.version_info)
def _get_available_scripts(path):
    res = []
    try:
        res = " ".join(os.path.splitext(s)[0] for s in os.listdir(path))
    except OSError:
        res = ["no script available, did you run "
               "'python setup.py build' before bootstrapping ?"]
    return res
def execfile(fullpath, globals=None, locals=None):
    "Python3 implementation for execfile"
    try:
        with open(fullpath) as f:
            data = f.read()
    except UnicodeDecodeError:
        raise SyntaxError("Not a Python script")
    code = compile(data, fullpath, 'exec')
    exec(code, globals, locals)
def run_file(filename, argv):
    """
    Execute a script trying first to use execfile, then a subprocess

    :param str filename: Script to execute
    :param list[str] argv: Arguments passed to the filename
    """
    full_args = [filename]
    full_args.extend(argv)
    try:
        logger.info("Execute target using exec")
        # execfile is considered as a local call.
        # Providing globals() as locals will force to feed the file into
        # globals() (for examples imports).
        # Without this any function call from the executed file loses imports
        old_argv = sys.argv
        sys.argv = full_args
        logger.info("Patch the sys.argv: %s", sys.argv)
        logger.info("Executing %s.main()", filename)
        print("########### EXECFILE ###########")
        module_globals = globals().copy()
        module_globals['__file__'] = filename
        execfile(filename, module_globals, module_globals)
        sys.argv = old_argv
    except SyntaxError as error:
        logger.error(error)
        logger.info("Execute target using subprocess")
        env = os.environ.copy()
        env.update({"PYTHONPATH": LIBPATH + os.pathsep + os.environ.get("PYTHONPATH", ""),
                    "PATH": os.environ.get("PATH", "")})
        print("########### SUBPROCESS ###########")
        run = subprocess.Popen(full_args, shell=False, env=env)
        run.wait()
def run_entry_point(entry_point, argv):
    """
    Execute an entry_point using the current python context

    :param str entry_point: A string identifying a function from a module
    """
    import importlib
    elements = entry_point.split("=")
    target_name = elements[0].strip()
    elements = elements[1].split(":")
    module_name = elements[0].strip()
    function_name = elements[1].strip()
    logger.info("Execute target %s (function %s from module %s) using importlib", target_name, function_name, module_name)
    full_args = [target_name]
    full_args.extend(argv)
    old_argv = sys.argv
    sys.argv = full_args
    print("########### IMPORTLIB ###########")
    module = importlib.import_module(module_name)
    if hasattr(module, function_name):
        func = getattr(module, function_name)
        func()
    else:
        logger.info("Function %s not found", function_name)
    sys.argv = old_argv
def find_executable(target):
    """Find a filename from a script name.

    - Check the script name as a file path,
    - Then check if the name is a target of the setup.py,
    - Then search for the script in the PATH environment variable.

    :param str target: Name of the script
    :returns: Returns a tuple: kind, name.
    """
    if os.path.isfile(target):
        return ("path", os.path.abspath(target))

    # search the file from setup.py
    import setup
    config = setup.get_project_configuration(dry_run=True)
    # scripts from project configuration
    if "scripts" in config:
        for script_name in config["scripts"]:
            if os.path.basename(script_name) == target:
                return ("path", os.path.abspath(script_name))
    # entry-points from project configuration
    if "entry_points" in config:
        for kind in config["entry_points"]:
            for entry_point in config["entry_points"][kind]:
                elements = entry_point.split("=")
                name = elements[0].strip()
                if name == target:
                    return ("entry_point", entry_point)

    # search the file from env PATH
    for dirname in os.environ.get("PATH", "").split(os.pathsep):
        path = os.path.join(dirname, target)
        if os.path.isfile(path):
            return ("path", path)

    return None, None
home = os.path.dirname(os.path.abspath(__file__))
LIBPATH = os.path.join(home, 'build', _distutils_dir_name('lib'))
cwd = os.getcwd()

build = subprocess.Popen([sys.executable, "setup.py", "build"],
                         shell=False, cwd=os.path.dirname(os.path.abspath(__file__)))
build_rc = build.wait()
if build_rc == 0:
    logger.info("Build process ended.")
else:
    logger.error("Build process ended with rc=%s", build_rc)