Module documentation

Functions to set up a Raspberry Pi Camera v2 for scientific use.

This module provides slower, simpler functions to set the gain, exposure, and white balance of a Raspberry Pi camera, using the picamera2 Python library. It's mostly used by the OpenFlexure Microscope, though it deliberately has no hard dependencies on said software, so that it's useful on its own.

There are three main calibration steps:

  • Setting exposure time and gain to get a reasonably bright image.
  • Fixing the white balance to get a neutral image.
  • Taking a uniform white image and using it to calibrate the lens shading table.

The most reliable way to do this, avoiding any issues relating to "memory" or nonlinearities in the camera's image processing pipeline, is to use raw images. This is quite slow, but very reliable. The three steps above can be accomplished by:

    picamera = picamera2.Picamera2()

    adjust_shutter_and_gain_from_raw(picamera)
    adjust_white_balance_from_raw(picamera)
    lst = lst_from_camera(picamera)
    picamera.lens_shading_table = lst
Class ExposureTest: Record the results of testing the camera's current exposure settings.
Function adjust_shutter_and_gain_from_raw: Adjust exposure and analog gain based on raw images.
Function adjust_white_balance_from_raw: Adjust the white balance in a single shot, based on the raw image.
Function as_flat_rounded_list: Flatten array, round, and then convert to list.
Function channels_from_bayer_array: Given the 'array' from a PiBayerArray, return the 4 channels.
Function check_convergence: Check whether the brightness is within the specified target range.
Function copy_alsc_section: Copy the rpi.alsc algorithm from one tuning to another.
Function downsampled_channels: Generate a downsampled, un-normalised image from which to calculate the LST.
Function get_16x12_grid: Compress a channel down to a 16x12 grid (from libcamera).
Function get_static_ccm: Get the rpi.ccm section of a camera tuning dict.
Function grids_from_lst: Convert from luminance/chrominance tables to four RGGB channels.
Function index_of_algorithm: Find the index of an algorithm's section in the tuning file.
Function load_default_tuning: Load the default tuning file for the camera.
Function lst_from_camera: Acquire a raw image and use it to calculate a lens shading table.
Function lst_from_channels: Given the 4 Bayer colour channels from a white image, generate a LST.
Function lst_from_grids: Given 4 downsampled grids, generate the luminance and chrominance tables.
Function lst_is_static: Whether the lens shading table is set to static.
Function raw_channels_from_camera: Acquire a raw image and return a 4xNxM array of the colour channels.
Function recreate_camera_manager: Delete and recreate the camera manager.
Function set_minimum_exposure: Enable manual exposure, with low gain and shutter speed.
Function set_static_ccm: Update the rpi.ccm section of a camera tuning dict to use a static colour correction matrix.
Function set_static_geq: Update the rpi.geq section of a camera tuning dict.
Function set_static_lst: Update the rpi.alsc section of a camera tuning dict to use a static correction.
Function test_exposure_settings: Evaluate current exposure settings using a raw image.
Function upsample_channels: Zoom an image in the last two dimensions.
Type Alias LensShadingTables: The (luminance, Cr, Cb) tuple of lens shading tables.
Function _geq_is_static: Whether the green equalisation is set to static.
def adjust_shutter_and_gain_from_raw(camera: Picamera2, target_white_level: int = 700, max_iterations: int = 20, tolerance: float = 0.05, percentile: float = 99.9) -> float: (source)

Adjust exposure and analog gain based on raw images.

This routine is slow but effective. It uses raw images, so we are not affected by white balance or digital gain.

Parameters
    camera (Picamera2): A Picamera2 object.
    target_white_level (int): The raw, 10-bit value we aim for. The brightest pixels should be approximately this bright. The maximum possible is about 900; 700 is reasonable.
    max_iterations (int): We will terminate after this many iterations, whether or not we have converged. More than 10 shouldn't happen.
    tolerance (float): How close to the target value we consider "done", expressed as a fraction of target_white_level, so 0.05 means +/- 5%.
    percentile (float): Rather than use the maximum value for each channel, we calculate a percentile. This makes us robust to single pixels that are bright/noisy. 99.9% still picks the top of the brightness range, but seems much more reliable than just np.max().
Returns
    float: Undocumented.
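The iteration can be sketched as below. The helper names measure and set_shutter are hypothetical stand-ins for a raw capture plus percentile calculation, and for scaling the camera's shutter time; this is a sketch of the approach, not the module's exact code.

```python
def adjust_exposure_sketch(measure, set_shutter, target=700,
                           max_iterations=20, tolerance=0.05):
    """Iteratively scale the shutter time until ``measure()`` is within
    ``tolerance`` (a fraction of ``target``) of the target white level.

    ``measure`` and ``set_shutter`` are hypothetical callables: the first
    returns the current white level from a raw image, the second
    multiplies the shutter time by the given factor.
    """
    for _ in range(max_iterations):
        level = measure()
        if abs(level - target) <= tolerance * target:
            break  # converged
        # Raw pixel values are linear in exposure time, so a simple
        # proportional step converges quickly.
        set_shutter(target / level)
    return level
```

Because raw values respond linearly to exposure, a perfectly linear camera converges in one step; a real camera needs a few iterations, hence max_iterations.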
def adjust_white_balance_from_raw(camera: Picamera2, percentile: float = 99, luminance: np.ndarray | None = None, Cr: np.ndarray | None = None, Cb: np.ndarray | None = None, luminance_power: float = 1.0, method: Literal['percentile', 'centre'] = 'centre') -> tuple[float, float]: (source)

Adjust the white balance in a single shot, based on the raw image.

NB if channels_from_raw_image is broken, this will go haywire. We should probably have better logic to verify the channels really are BGGR...
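The idea can be sketched as follows, assuming the channels arrive in BGGR order as the note above warns; the function name and exact formula here are illustrative, not the module's implementation.

```python
import numpy as np

def white_balance_gains_sketch(channels, percentile=99):
    """Estimate red and blue gains from a 4xNxM array of raw BGGR channels.

    The gains scale the red and blue channels so that their bright-pixel
    percentiles match the mean of the two green channels.
    """
    b, g1, g2, r = channels  # assumed BGGR order
    green = (np.percentile(g1, percentile) + np.percentile(g2, percentile)) / 2
    red_gain = green / np.percentile(r, percentile)
    blue_gain = green / np.percentile(b, percentile)
    return red_gain, blue_gain
```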

def as_flat_rounded_list(array: np.ndarray, round_to: int = 3) -> list[float]: (source)

Flatten array, round, and then convert to list.
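The documented behaviour is simple enough to sketch in full; the name here is hypothetical:

```python
import numpy as np

def as_flat_rounded_list_sketch(array, round_to=3):
    # Flatten to 1D, convert to Python floats, and round each element.
    return [round(float(x), round_to) for x in np.asarray(array).flatten()]
```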

def channels_from_bayer_array(bayer_array: np.ndarray) -> np.ndarray: (source)

Given the 'array' from a PiBayerArray, return the 4 channels.
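Splitting a Bayer mosaic into its four planes is a matter of slicing alternate rows and columns; a sketch, assuming a plain 2D input array (the Pi Camera v2's IMX219 sensor uses a BGGR pattern):

```python
import numpy as np

def channels_from_bayer_sketch(bayer):
    """Return the four colour planes of a 2D Bayer mosaic as a 4xNxM array.

    Each plane takes one corner of every 2x2 block; which plane is which
    colour depends on the sensor's Bayer pattern.
    """
    return np.stack([
        bayer[0::2, 0::2],  # top-left of each 2x2 block
        bayer[0::2, 1::2],  # top-right
        bayer[1::2, 0::2],  # bottom-left
        bayer[1::2, 1::2],  # bottom-right
    ])
```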

def check_convergence(test: ExposureTest, target: int, tolerance: float) -> bool: (source)

Check whether the brightness is within the specified target range.

def copy_alsc_section(from_tuning: dict, to_tuning: dict): (source)

Copy the rpi.alsc algorithm from one tuning to another.

This is done in-place, i.e. modifying to_tuning.

def downsampled_channels(channels: np.ndarray, blacklevel=64) -> list[np.ndarray]: (source)

Generate a downsampled, un-normalised image from which to calculate the LST.

TODO: blacklevel probably ought to be determined from the camera...

def get_16x12_grid(chan: np.ndarray, dx: int, dy: int) -> np.ndarray: (source)

Compresses a channel down to a 16x12 grid (from libcamera).

This is taken from https://git.linuxtv.org/libcamera.git/tree/utils/raspberrypi/ctt/ctt_alsc.py for consistency.
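Ignoring the uneven-edge handling in libcamera's version, the compression amounts to block averaging; a sketch, assuming the channel divides evenly into 12 rows and 16 columns of cells:

```python
import numpy as np

def grid_16x12_sketch(chan):
    """Average a 2D channel down to a 12 (tall) x 16 (wide) grid."""
    h, w = chan.shape
    dy, dx = h // 12, w // 16
    # Group pixels into 12x16 cells and average within each cell.
    return chan[:12 * dy, :16 * dx].reshape(12, dy, 16, dx).mean(axis=(1, 3))
```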

def get_static_ccm(tuning: dict): (source)

Get the rpi.ccm section of a camera tuning dict.

def grids_from_lst(lum: np.ndarray, Cr: np.ndarray, Cb: np.ndarray) -> np.ndarray: (source)

Convert from luminance/chrominance tables to four RGGB channels.

Note that these will be normalised - the maximum green value is always 1. Also, note that the channels are BGGR, to be consistent with the channels_from_raw_image function. This should probably change in the future.

def index_of_algorithm(algorithms: list[dict], algorithm: str) -> int: (source)

Find the index of an algorithm's section in the tuning file.
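libcamera tuning files store their algorithms as a list of single-key dicts under an "algorithms" key, so the lookup can be sketched as:

```python
def index_of_algorithm_sketch(algorithms, algorithm):
    """Return the index of the section whose key is ``algorithm``
    (e.g. "rpi.alsc") in a tuning file's algorithm list."""
    for i, section in enumerate(algorithms):
        if algorithm in section:
            return i
    raise ValueError(f"Algorithm {algorithm} not found in tuning file")
```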

def load_default_tuning(cam: Picamera2) -> dict: (source)

Load the default tuning file for the camera.

This will open and close the camera to determine its model. If you are using a model that's supported by picamera2 it should have a tuning file built in. If not, this will probably crash with an error.

Error handling for unsupported cameras is not something we are likely to test in the short term.

def lst_from_camera(camera: Picamera2) -> LensShadingTables: (source)

Acquire a raw image and use it to calculate a lens shading table.

def lst_from_channels(channels: np.ndarray) -> LensShadingTables: (source)

Given the 4 Bayer colour channels from a white image, generate a LST.

Internally, it just calls downsampled_channels and lst_from_grids.

def lst_from_grids(grids: np.ndarray) -> LensShadingTables: (source)

Given 4 downsampled grids, generate the luminance and chrominance tables.

The grids are the four Bayer channels, in RGGB order.

The LST format has changed with picamera2 and now uses a fixed resolution, and is in luminance, Cr, Cb format. This function returns three ndarrays of luminance, Cr, Cb, each with shape (12, 16).
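One plausible construction, assuming the luminance table holds per-cell gains relative to the brightest green cell and that Cr and Cb are red/green and blue/green ratios; the module's actual normalisation follows libcamera's camera tuning tool and may differ:

```python
import numpy as np

def lst_from_grids_sketch(grids):
    """Turn four (12, 16) Bayer grids into (luminance, Cr, Cb) tables."""
    r, g1, g2, b = grids          # assumed RGGB order, as documented here
    g = (g1 + g2) / 2.0
    lum = g.max() / g             # gain >= 1, largest in the dark corners
    cr = r / g                    # per-cell red/green ratio
    cb = b / g                    # per-cell blue/green ratio
    return lum, cr, cb
```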

def lst_is_static(tuning: dict) -> bool: (source)

Whether the lens shading table is set to static.

def raw_channels_from_camera(camera: Picamera2) -> LensShadingTables: (source)

Acquire a raw image and return a 4xNxM array of the colour channels.

def recreate_camera_manager(): (source)

Delete and recreate the camera manager.

This is necessary to ensure the tuning file is re-read.

def set_minimum_exposure(camera: Picamera2): (source)

Enable manual exposure, with low gain and shutter speed.

We set the exposure mode to manual, analog and digital gain to 1, and shutter speed to the minimum (8 µs for the Pi Camera v2).

Note ISO is left at auto, because this is needed for the gains to be set correctly.

def set_static_ccm(tuning: dict, col_corr_matrix: tuple[float, float, float, float, float, float, float, float, float]): (source)

Update the rpi.ccm section of a camera tuning dict to use a static colour correction matrix.

tuning will be updated in-place to use the given static matrix, disabling any adaptive tweaking by the algorithm.

def set_static_geq(tuning: dict, offset: int = 65535): (source)

Update the rpi.geq section of a camera tuning dict.

Parameters
    tuning (dict): The Raspberry Pi tuning file. This will be updated in-place to set the geq offset to the given value.
    offset (int): The desired green equalisation offset. Default 65535, the maximum allowed value, which means the brightness will always be below the threshold where averaging is used. This is the default because we always need the green equalisation to average the green pixels in the red and blue rows, due to the chief ray angle compensation issue when the stock lens is replaced by an objective.
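A sketch of the in-place update, assuming the tuning layout described elsewhere in this module (a list of single-key algorithm dicts under "algorithms"):

```python
def set_static_geq_sketch(tuning, offset=65535):
    """Set the rpi.geq offset in-place in a tuning dict."""
    for section in tuning["algorithms"]:
        if "rpi.geq" in section:
            section["rpi.geq"]["offset"] = offset
            return
    raise ValueError("rpi.geq not found in tuning file")
```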
def set_static_lst(tuning: dict, luminance: np.ndarray, cr: np.ndarray, cb: np.ndarray): (source)

Update the rpi.alsc section of a camera tuning dict to use a static correction.

tuning will be updated in-place to set its shading to static, and disable any adaptive tweaking by the algorithm.

def test_exposure_settings(camera: Picamera2, percentile: float) -> ExposureTest: (source)

Evaluate current exposure settings using a raw image.

The camera must already be started before calling this function.

We will acquire a raw image and calculate the given percentile of the pixel values. We return an ExposureTest recording that percentile value (which will be compared to the target), as well as the camera's shutter and gain values.
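The measurement itself reduces to a percentile over the pooled raw channels; a sketch (the surrounding capture and metadata plumbing is omitted):

```python
import numpy as np

def white_level_from_channels(channels, percentile=99.9):
    """Return the given percentile of the raw pixel values, pooled over
    all four Bayer channels. A high percentile, rather than the maximum,
    keeps the measurement robust against isolated hot pixels."""
    return float(np.percentile(channels, percentile))
```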

def upsample_channels(grids: np.ndarray, shape: tuple[int]) -> np.ndarray: (source)

Zoom an image in the last two dimensions.

This is effectively the inverse operation of get_16x12_grid.
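A nearest-neighbour sketch of the shape transformation; a smoother interpolation (e.g. scipy.ndimage.zoom) would be a closer match to real lens shading correction, but the shapes are the point here:

```python
import numpy as np

def upsample_channels_sketch(grids, shape):
    """Repeat each cell so the last two dimensions grow to ``shape``.
    Assumes ``shape`` is an integer multiple of the grid dimensions."""
    sy = shape[0] // grids.shape[-2]
    sx = shape[1] // grids.shape[-1]
    return np.repeat(np.repeat(grids, sy, axis=-2), sx, axis=-1)
```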

LensShadingTables = (source)

The (luminance, Cr, Cb) lens shading tables, as returned by lst_from_grids.

Value
    tuple[np.ndarray, np.ndarray, np.ndarray]
def _geq_is_static(tuning: dict) -> bool: (source)

Whether the green equalisation is set to static.