class documentation

class StreamingPiCamera2(BaseCamera): (source)

Constructor: StreamingPiCamera2(camera_num)

View In Hierarchy

A Thing that provides an interface to the Raspberry Pi Camera.

Currently the Thing only supports the PiCamera v2 board. This needs generalisation.

Method __enter__ Start streaming when the Thing context manager is opened.
Method __exit__ Close the picamera connection when the Thing context manager is closed.
Method __init__ Initialise the camera with the given camera number.
Method analogue_gain The Analogue gain applied by the camera sensor.
Method analogue_gain.setter Undocumented
Method auto_expose_from_minimum Adjust exposure until the target white level is reached.
Method calibrate_lens_shading Take an image and use it for flat-field correction.
Method calibrate_white_balance Correct the white balance of the image.
Method capture_array Acquire one image from the camera and return as an array.
Method capture_image Acquire one image from the camera.
Method capture_jpeg Acquire one image from the camera as a JPEG.
Method colour_correction_matrix.setter Undocumented
Method colour_gains The red and blue colour gains; both must be between 0.0 and 32.0.
Method colour_gains.setter Undocumented
Method correct_colour_gains_for_lens_shading Correct white balance gains for the effect of lens shading.
Method discard_frames Discard frames so that the next frame captured is fresh.
Method exposure_time The camera exposure time in microseconds.
Method exposure_time.setter Undocumented
Method flat_lens_shading Disable flat-field correction.
Method flat_lens_shading_chrominance Disable flat-field correction.
Method full_auto_calibrate Perform a full auto-calibration.
Method get_tuning_algo Return the active tuning algorithm settings for the given algorithm.
Method lens_shading_tables.setter Set the lens shading tables.
Method reset_ccm Overwrite the colour correction matrix in camera tuning with default values.
Method reset_lens_shading Revert to default lens shading settings.
Method save_settings Override save_settings to ensure that camera properties don't recurse.
Method sensor_mode.setter Change the sensor mode used.
Method set_static_green_equalisation Set the green equalisation to a static value.
Method start_streaming Start the MJPEG stream. This is where persistent controls are sent to the camera.
Method stop_streaming Stop the MJPEG stream.
Class Variable mjpeg_bitrate Bitrate for MJPEG stream (None for default).
Class Variable stream_resolution Resolution to use for the MJPEG stream.
Instance Variable camera_configs Undocumented
Instance Variable camera_num Undocumented
Instance Variable colour_correction_matrix The colour_correction_matrix from the tuning file.
Instance Variable default_tuning Undocumented
Instance Variable stream_active Whether the MJPEG stream is active.
Instance Variable tuning The Raspberry PiCamera Tuning File JSON.
Property camera_configuration The "configuration" dictionary of the picamera2 object.
Property capture_metadata Return the metadata from the camera.
Property lens_shading_is_static Whether the lens shading is static.
Property lens_shading_tables The current lens shading (i.e. flat-field correction).
Property manual_camera_settings The camera settings to expose as property controls in the settings panel.
Property primary_calibration_actions The calibration actions for both calibration wizard and settings panel.
Property secondary_calibration_actions The calibration actions that appear only in settings panel.
Property sensor_mode The intended sensor mode of the camera.
Property sensor_modes All the available modes the current sensor supports.
Property sensor_resolution The native resolution of the camera's sensor.
Property streaming True if the camera is streaming.
Method _get_persistent_controls Undocumented
Method _initialise_picamera Acquire the picamera device and store it as self._picamera.
Method _streaming_picamera Lock access to picamera and return the underlying Picamera2 instance.
Class Variable _sensor_modes Undocumented
Instance Variable _analogue_gain Undocumented
Instance Variable _colour_gains Undocumented
Instance Variable _exposure_time Undocumented
Instance Variable _picamera Undocumented
Instance Variable _picamera_lock Undocumented
Instance Variable _sensor_mode Undocumented
Instance Variable _setting_save_in_progress Undocumented

Inherited from BaseCamera:

Method background_detector_data The data for each background detector, used to save to disk.
Method background_detector_data.setter Set the data for each detector. Only to be used as settings are loaded from disk.
Method capture_and_save Capture an image and save it to disk.
Method capture_downsampled_array Acquire one image from the camera, downsample, and return as an array.
Method capture_to_memory Capture an image to memory. This can be saved later with save_from_memory.
Method clear_buffers Clear all images in memory.
Method detector_name The name of the active background selector.
Method detector_name.setter Validate and set detector_name.
Method grab_as_array Acquire one image from the preview stream and return as an array.
Method grab_jpeg Acquire one image from the preview stream and return as blob of JPEG data.
Method grab_jpeg_size Acquire one image from the preview stream and return its size.
Method image_is_sample Label the current image as either background or sample.
Method kill_mjpeg_streams Kill the streams now as the server is shutting down.
Method save_from_memory Save an image that has been captured to memory.
Method set_background Grab an image, and use its statistics to set the background.
Method settle Sleep for the settling time, ready to provide a fresh frame.
Method update_detector_settings Update the settings of the current detector.
Class Variable downsampled_array_factor The downsampling factor when calling capture_downsampled_array.
Class Variable lores_mjpeg_stream Undocumented
Class Variable mjpeg_stream Undocumented
Class Variable settling_time The settling time when calling the settle() method.
Instance Variable background_detectors Undocumented
Property active_detector The active background detector instance.
Property background_detector_status The status of the active detector for the UI.
Method _robust_image_capture Capture an image in memory and return it with metadata.
Method _save_capture Save the captured image and metadata to disk.
Class Variable _memory_buffer Undocumented
Instance Variable _detector_name Undocumented
def __enter__(self): (source)

Start streaming when the Thing context manager is opened.

This opens the picamera connection, initialises the camera, sets the sensor_modes property, and then starts the streams.

def __exit__(self, exc_type, exc_value, traceback): (source)

Close the picamera connection when the Thing context manager is closed.
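A minimal usage sketch, assuming direct use from Python (in normal operation the LabThings server enters the context manager for you; the import path below is a guess and may differ in your installation):

    # Import path is an assumption; adjust to match your installation.
    from labthings_picamera2.thing import StreamingPiCamera2

    camera = StreamingPiCamera2(camera_num=0)
    with camera:                        # __enter__ connects to the camera and starts streaming
        frame = camera.capture_array()  # acquire images while the stream is running
    # __exit__ has now closed the picamera connection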

def __init__(self, camera_num: int = 0): (source)

Initialise the camera with the given camera number.

This makes no connection to the camera (except to get the default tuning file).

Parameters
camera_num: int
    The number of the camera. This should generally be left as 0 as most Raspberry Pi boards only support 1 camera.
@lt.thing_setting
def analogue_gain(self) -> float: (source)

The Analogue gain applied by the camera sensor.

def analogue_gain(self, value: float): (source)

Undocumented

@lt.thing_action
def auto_expose_from_minimum(self, target_white_level: int = 700, percentile: float = 99.9): (source)

Adjust exposure until the target white level is reached.

Starting from the minimum exposure, gradually increase exposure until the image reaches the specified white level.

Parameters
target_white_level: int
    The target 10-bit white level. 10-bit data has a theoretical maximum of 1023, but with black level correction the true maximum is about 950. Default is 700, as this is approximately 70% saturated.
percentile: float
    The percentile to use instead of the maximum. Default 99.9. When calculating the brightest pixel, a percentile is used rather than the maximum in order to be robust to a small number of noisy/bright pixels.
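The strategy described above can be illustrated with a short sketch. This is not the actual implementation: the helper raw_white_level and the exposure stepping are assumptions for illustration only.

    import numpy as np

    def raw_white_level(camera, percentile: float) -> float:
        """Hypothetical helper: brightness of a capture, robust to a few hot pixels."""
        raw = np.asarray(camera.capture_array(stream_name="raw"), dtype=float)
        return float(np.percentile(raw, percentile))

    def auto_expose_sketch(camera, target_white_level: int = 700, percentile: float = 99.9):
        camera.exposure_time = 100  # start near the minimum exposure (illustrative value)
        for _ in range(20):         # bounded loop in case the target is never reached
            if raw_white_level(camera, percentile) >= target_white_level:
                break
            camera.exposure_time = int(camera.exposure_time * 1.5)  # gradually increase exposure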
@lt.thing_action
def calibrate_lens_shading(self): (source)

Take an image and use it for flat-field correction.

This method requires an empty (i.e. bright) field of view. It will take a raw image and effectively divide every subsequent image by the current one. This uses the camera's "tuning" file to correct the preview and the processed images. It should not affect raw images.
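Conceptually, the correction amounts to dividing subsequent images by the flat one, as in the sketch below. This is an illustration only: the real correction is applied by the ISP via the lens shading table in the tuning file, not per-pixel in Python.

    import numpy as np

    flat = np.asarray(camera.capture_array(stream_name="raw"), dtype=float)
    gains = flat.max() / np.clip(flat, 1, None)  # brightest region gets a gain of 1
    later = np.asarray(camera.capture_array(stream_name="raw"), dtype=float)
    corrected = later * gains                    # equivalent to dividing by the flat image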

@lt.thing_action
def calibrate_white_balance(self, method: Literal['percentile', 'centre'] = 'centre', luminance_power: float = 1.0): (source)

Correct the white balance of the image.

This calibration requires a neutral image, such that the 99th centile of each colour channel should correspond to white. We calculate the centiles and use this to set the colour gains. This is done on the raw image with the lens shading correction applied, which should mean that the image is uniform, rather than weighted towards the centre.

If method is "centre", we will correct the mean of the central 10% of the image.

@lt.thing_action
def capture_array(self, stream_name: Literal['main', 'lores', 'raw', 'full'] = 'main', wait: float | None = 0.9) -> ArrayModel: (source)

Acquire one image from the camera and return as an array.

This function will produce a nested list containing an uncompressed RGB image. It's likely to be highly inefficient - raw and/or uncompressed captures using binary image formats will be added in due course.

stream_name: (Optional) The PiCamera2 stream to use, should be one of ["main", "lores", "raw", "full"]. Default = "main".
wait: (Optional, float) Set a timeout in seconds. A TimeoutError is raised if this time is exceeded during capture. Default = 0.9s, lower than the 1s timeout default in picamera yaml settings.
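For example, assuming an active StreamingPiCamera2 instance called camera:

    import numpy as np

    frame = np.asarray(camera.capture_array(stream_name="main", wait=0.9))
    print(frame.shape)  # e.g. (height, width, 3) for an RGB main-stream capture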

def capture_image(self, stream_name: Literal['main', 'lores', 'raw'] = 'main', wait: float | None = 0.9) -> Image: (source)

Acquire one image from the camera.

Return it as a PIL Image.

stream_name: (Optional) The PiCamera2 stream to use, should be one of ["main", "lores", "raw"]. Default = "main".
wait: (Optional, float) Set a timeout in seconds. A TimeoutError is raised if this time is exceeded during capture. Default = 0.9s, lower than the 1s timeout default in picamera yaml settings.

@lt.thing_action
def capture_jpeg(self, metadata_getter: lt.deps.GetThingStates, resolution: Literal['lores', 'main', 'full'] = 'main', wait: float | None = 0.9) -> JPEGBlob: (source)

Acquire one image from the camera as a JPEG.

The JPEG will be acquired using Picamera2.capture_file. If the resolution parameter is main or lores, it will be captured from the main preview stream, or the low-res preview stream, respectively. This means the camera won't be reconfigured, and the stream will not pause (though it may miss one frame).

If full resolution is requested, we will briefly pause the MJPEG stream and reconfigure the camera to capture a full resolution image.

wait: (Optional, float) Set a timeout in seconds. A TimeoutError is raised if this time is exceeded during capture. Default = 0.9s, lower than the 1s timeout default in picamera yaml settings

Note that this always uses the image processing pipeline - to bypass this, you must use a raw capture.

def colour_correction_matrix(self, value): (source)

Undocumented

@lt.thing_setting
def colour_gains(self) -> tuple[float, float]: (source)

The red and blue colour gains; both must be between 0.0 and 32.0.

def colour_gains(self, value: tuple[float, float]): (source)

Undocumented

def correct_colour_gains_for_lens_shading(self, colour_gains: tuple[float, float]) -> tuple[float, float]: (source)

Correct white balance gains for the effect of lens shading.

The white balance algorithm we use assumes the brightest pixels should be white, and that the only thing affecting the colour of said pixels is the colour_gains.

The lens shading correction is normalised such that the minimum gain in the Cr and Cb channels is 1. The white balance assumption above requires that the gain for the brightest pixels is 1. The solution might be that, when calibrating, we note which pixels are brightest (usually the centre) and explicitly use the LST values there. However, for now I will assume that we need to normalise by the maximum of the Cr and Cb channels, which is correct the majority of the time.

@lt.thing_action
def discard_frames(self): (source)

Discard frames so that the next frame captured is fresh.

@lt.thing_setting
def exposure_time(self) -> int: (source)

The camera exposure time in microseconds.

When setting this property, the camera will adjust the requested value to the nearest allowed value that is not higher than it.

def exposure_time(self, value: int): (source)

Undocumented

@lt.thing_action
def flat_lens_shading(self): (source)

Disable flat-field correction.

This method will set a completely flat lens shading table. It is not the same as the default behaviour, which is to use an adaptive lens shading table.

This flat table is used to take an image with no lens shading so that the correct lens shading table can be calibrated.

@lt.thing_action
def flat_lens_shading_chrominance(self): (source)

Disable flat-field correction.

This method will set the chrominance of the lens shading table to be flat, i.e. we'll correct vignetting of intensity, but not any change in colour across the image.

@lt.thing_action
def full_auto_calibrate(self, portal: lt.deps.BlockingPortal): (source)

Perform a full auto-calibration.

This function will call the other calibration actions in sequence (see the sketch after this list):

  • flat_lens_shading to disable flat-field
  • auto_expose_from_minimum
  • set_static_green_equalisation to set geq offset to max
  • calibrate_lens_shading
  • reset_ccm
  • calibrate_white_balance
  • set_background
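The sequence is roughly equivalent to the sketch below. Dependency-injected arguments such as the blocking portal are omitted, and direct calls like this assume the Thing is already running inside its context manager.

    camera.flat_lens_shading()              # disable flat-field correction
    camera.auto_expose_from_minimum()
    camera.set_static_green_equalisation()  # defaults to the maximum offset, 65535
    camera.calibrate_lens_shading()
    camera.reset_ccm()
    camera.calibrate_white_balance()
    camera.set_background()                 # inherited from BaseCamera
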
@overload
def get_tuning_algo(self, algorithm_name: str, raise_if_missing: Literal[True]) -> dict:
@overload
def get_tuning_algo(self, algorithm_name: str, raise_if_missing: bool) -> dict | None:
(source)

Return the active tuning algorithm settings for the given algorithm.

Returns
dict | None
    The algorithm dictionary if found; None if no tuning data is loaded or the tuning algorithm is not found.
Raises
MissingCalibrationError
    If raise_if_missing is true and no tuning file is available, or the requested algorithm is not present.
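For example, the lens shading algorithm could be looked up like this ("rpi.alsc" is the algorithm name used in Raspberry Pi camera tuning files):

    alsc = camera.get_tuning_algo("rpi.alsc", raise_if_missing=False)
    if alsc is None:
        print("No lens shading algorithm in the current tuning data")
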
def lens_shading_tables(self, lst: LensShading): (source)

Set the lens shading tables.

@lt.thing_action
def reset_ccm(self): (source)

Overwrite the colour correction matrix in camera tuning with default values.

These values are from the Raspberry Pi Camera Algorithm and Tuning Guide, page 45.

@lt.thing_action
def reset_lens_shading(self): (source)

Revert to default lens shading settings.

This method will restore the default "adaptive" lens shading method used by the Raspberry Pi camera.

def save_settings(self): (source)

Override save_settings to ensure that camera properties don't recurse.

This method is run by any Thing when a ThingSetting is saved. However, the method reads the thing_setting. As reading the thing setting talks to the camera, and calls save_settings if the value is not as expected, this could cause recursion. It also means that saving one setting would cause all others to be read each time.

def sensor_mode(self, new_mode: SensorModeSelector | dict | None): (source)

Change the sensor mode used.

@lt.thing_action
def set_static_green_equalisation(self, offset: int = 65535): (source)

Set the green equalisation to a static value.

Green equalisation avoids the debayering algorithm becoming confused by the two green channels having different values, which is a problem when the chief ray angle isn't what the sensor was designed for, as is the case in e.g. a microscope using camera module v2.

A value of 0 here does nothing, a value of 65535 is maximum correction.

@lt.thing_action
def start_streaming(self, main_resolution: tuple[int, int] = (820, 616), buffer_count: int = 6): (source)

Start the MJPEG stream. This is where persistent controls are sent to the camera.

Sets the camera resolutions based on input parameters, and sets the low-res resolution to (320, 240). Note: (320, 240) is a standard from the Pi Camera manual.

Create two streams:

  • lores_mjpeg_stream for autofocus at low-res resolution
  • mjpeg_stream for preview. This uses main_resolution if it is smaller than
    (1280, 960), or the low-res resolution otherwise. This allows for high-resolution capture without streaming high-resolution video.

main_resolution: the resolution for the main configuration. Defaults to (820, 616), 1/4 of the sensor size.
buffer_count: the number of frames to hold in the buffer. Higher uses more memory; lower may cause dropped frames. Value must be between 1 and 8. Defaults to 6.
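A usage sketch, restarting the stream with different parameters (the values are illustrative):

    camera.stop_streaming()
    camera.start_streaming(main_resolution=(1640, 1232), buffer_count=4)

Because (1640, 1232) is larger than (1280, 960), the preview mjpeg_stream would fall back to the low-res resolution in this case.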

@lt.thing_action
def stop_streaming(self, stop_web_stream: bool = True): (source)

Stop the MJPEG stream.

mjpeg_bitrate = (source)

Bitrate for MJPEG stream (None for default).

stream_resolution = (source)

Resolution to use for the MJPEG stream.

camera_configs: dict[str, dict] = (source)

Undocumented

camera_num = (source)

Undocumented

@lt.thing_property
colour_correction_matrix: tuple[float, float, float, float, float, float, float, float, float] = (source)

The colour_correction_matrix from the tuning file.

This is broken out into its own property for convenience and compatibility with the micromanager API.

It is a 9-value tuple used to specify the 3x3 matrix that the GPU pipeline uses to convert from the camera R,G,B vector to the standard R,G,B.

See the Raspberry Pi Camera Algorithm and Tuning Guide, page 45.
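A small sketch of how the tuple relates to the matrix; row-major ordering is an assumption here.

    import numpy as np

    ccm = np.array(camera.colour_correction_matrix).reshape(3, 3)
    standard_rgb = ccm @ np.array([0.5, 0.4, 0.3])  # conceptual: camera RGB -> standard RGB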

default_tuning = (source)

Undocumented

stream_active: bool = (source)

Whether the MJPEG stream is active.

tuning = (source)

The Raspberry PiCamera Tuning File JSON.

@lt.thing_property
camera_configuration: Mapping = (source)

The "configuration" dictionary of the picamera2 object.

The "configuration" sets the resolution and format of the camera's streams. Together with the "tuning" it determines how the sensor is configured and how the data is processed.

Note that the configuration may be modified when taking still images, and this property refers to whatever configuration is currently in force - usually the one used for the preview stream.
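For example, the main stream's resolution can be read from this mapping (the key names follow picamera2's configuration dictionaries):

    config = camera.camera_configuration
    print(config["main"]["size"])  # (width, height) of the main stream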

@lt.thing_property
capture_metadata: dict = (source)

Return the metadata from the camera.

@lt.thing_property
lens_shading_is_static: bool = (source)

Whether the lens shading is static.

This property is true if the lens shading correction has been set to use a static table (i.e. the number of automatic correction iterations is zero). The default LST is not static, but all the calibration controls will set it to be static (except "reset").

@lt.thing_property
lens_shading_tables: LensShading | None = (source)

The current lens shading (i.e. flat-field correction).

Return the current lens shading correction, as three 2D lists each with dimensions 16x12, if a static lens shading table is in use.

Return None if:

  • adaptive control is enabled
  • multiple LSTs are in use (for different colour temperatures)

@lt.thing_property
manual_camera_settings: list[PropertyControl] = (source)

The camera settings to expose as property controls in the settings panel.

@lt.thing_property
primary_calibration_actions: list[ActionButton] = (source)

The calibration actions for both calibration wizard and settings panel.

@lt.thing_property
secondary_calibration_actions: list[ActionButton] = (source)

The calibration actions that appear only in settings panel.

@lt.thing_property
sensor_mode: SensorModeSelector | None = (source)

The intended sensor mode of the camera.

@lt.thing_property
sensor_modes: list[SensorMode] = (source)

All the available modes the current sensor supports.

@lt.thing_property
sensor_resolution: tuple[int, int] | None = (source)

The native resolution of the camera's sensor.

@property
streaming: bool = (source)

True if the camera is streaming.

def _get_persistent_controls(self) -> dict: (source)

Undocumented

def _initialise_picamera(self): (source)

Acquire the picamera device and store it as self._picamera.

This duplicates logic in Picamera2.__init__ to provide a tuning file that will be read when the camera system initialises.

@contextmanager
def _streaming_picamera(self, pause_stream=False) -> Iterator[Picamera2]: (source)

Lock access to picamera and return the underlying Picamera2 instance.

Optionally the stream can be paused to allow updating the camera settings.

Parameters
pause_stream

If False the Picamera2 instance is simply yielded. If True:

  • Stop the MJPEG Stream
  • Yield the Picamera2 instance for the function calling the context manager
    to make changes.
  • On closing of the context manager the stream will restart.
Returns
Iterator[Picamera2]
    Undocumented
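A sketch of how this private helper might be used inside the class; the specific control set here is illustrative, and set_controls is a standard Picamera2 call.

    # Pause the stream while changing controls; it restarts when the block exits.
    with self._streaming_picamera(pause_stream=True) as picam2:
        picam2.set_controls({"AnalogueGain": 2.0})
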
_sensor_modes = (source)

Undocumented

_analogue_gain: float = (source)

Undocumented

_colour_gains: tuple[float, float] = (source)

Undocumented

_exposure_time: int = (source)

Undocumented

_picamera = (source)

Undocumented

_picamera_lock = (source)

Undocumented

_sensor_mode: dict | None = (source)

Undocumented

_setting_save_in_progress: bool = (source)

Undocumented