Available tools

Containers

class stracking.containers.SParticles(data=None, properties={}, scale=None)

Container for particles

The container holds two pieces of data: the particle array of shape (N, D+1) and a properties dictionary for the features

data

Coordinates for N points in D+1 dimensions, ordered T, (Z), Y, X. The first column is the integer index of the time point. D is either 2 or 3 for planar or volumetric time series, respectively.

Type:

array (N, D+1)

properties

Properties for each point. Each property should be an array of length N, where N is the number of points.

Type:

dict {str: array (N,)}, DataFrame

scale

Scale factors for the image data.

Type:

tuple of float
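To make the (N, D+1) layout concrete, here is a minimal sketch that builds a 2D+t particle table by hand (plain Python lists stand in for the NumPy array, and the intensity property name is invented for illustration):

```python
# Each row is (T, Y, X): three particles in frame 0, two in frame 1.
data = [
    [0, 10.0, 12.0],
    [0, 25.5, 30.1],
    [0, 40.0, 41.2],
    [1, 11.0, 12.5],
    [1, 26.0, 29.8],
]

# Each property is an array of length N (here N = 5), one value per particle.
properties = {"intensity": [200, 180, 150, 210, 175]}

# Every property column must line up with the particle rows.
assert all(len(v) == len(data) for v in properties.values())
```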

class stracking.containers.STracks(data=None, properties={}, graph={}, features={}, scale=())

Container for trajectories

This container is compatible with the Napari tracks layer

data

Coordinates for N points in D+1 dimensions, ordered ID, T, (Z), Y, X. The first column is the integer ID of the track. D is either 3 or 4 for planar or volumetric time series, respectively.

Type:

array (N, D+1)

properties

Properties for each point. Each property should be an array of length N, where N is the number of points.

Type:

dict {str: array (N,)}, DataFrame

graph

Graph representing associations between tracks. The dictionary maps a track ID to the list of its parent tracks. A track can have one parent (track splitting: the parent has one or more children) or several parents (track merging: the parents share a single child). See examples/tracks_3d_with_graph.py

Type:

dict {int: list}

features

Properties for each track. Each feature should be a map of track ID to feature value, e.g. features['length'][12] = 25.2

Type:

dict {str: dict}

scale

Scale factors for the image data.

Type:

tuple of float
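To make the graph convention concrete, here is a hedged sketch of a split event using the ID, T, Y, X row layout for 2D data (the coordinate values are invented): track 0 ends at frame 1 and tracks 1 and 2 both list it as their parent.

```python
# (ID, T, Y, X) rows: track 0 covers frames 0-1, then splits into tracks 1 and 2.
data = [
    [0, 0, 10.0, 10.0],
    [0, 1, 11.0, 10.5],
    [1, 2, 12.0, 9.0],
    [1, 3, 13.0, 8.5],
    [2, 2, 12.0, 12.0],
    [2, 3, 13.0, 12.5],
]

# Each child track points back to its single parent (track splitting).
graph = {1: [0], 2: [0]}

# A merge would be the reverse: one track with several parents, e.g. {3: [1, 2]}.
```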

Detectors

class stracking.detectors.DoGDetector(min_sigma=1, max_sigma=50, sigma_ratio=1.6, threshold=2.0, overlap=0.5)

Detect spots on 2D+t and 3D+t images using the Difference of Gaussians (DoG) algorithm

Parameters:
  • min_sigma (scalar or sequence of scalars, optional) – The minimum standard deviation for Gaussian kernel. Keep this low to detect smaller blobs. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.

  • max_sigma (scalar or sequence of scalars, optional) – The maximum standard deviation for Gaussian kernel. Keep this high to detect larger blobs. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.

  • sigma_ratio (float, optional) – The ratio between the standard deviations of the Gaussian kernels used for computing the Difference of Gaussians.

  • threshold (float, optional) – The absolute lower bound for scale space maxima. Local maxima smaller than threshold are ignored. Reduce this to detect less intense blobs.

  • overlap (float, optional) – A value between 0 and 1. If the area of two blobs overlaps by a fraction greater than overlap, the smaller blob is eliminated.
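The sigma_ratio parameter makes the tested scales a geometric series: each Gaussian kernel is sigma_ratio times wider than the previous one, and adjacent pairs are subtracted to form the Difference of Gaussians. A pure-Python sketch of that spacing (illustrative only; the exact cutoff logic in scikit-image's blob_dog may differ):

```python
import math

def dog_sigmas(min_sigma, max_sigma, sigma_ratio=1.6):
    """Geometric series of sigmas covering [min_sigma, max_sigma]."""
    # Number of ratio steps needed to reach max_sigma from min_sigma.
    k = int(math.log(max_sigma / min_sigma) / math.log(sigma_ratio)) + 1
    return [min_sigma * sigma_ratio ** i for i in range(k + 1)]

sigmas = dog_sigmas(1, 50, 1.6)
# Each adjacent pair (sigmas[i], sigmas[i + 1]) yields one DoG response layer.
```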

run(image, scale=None)

Run the detection on an ND image

Parameters:
  • image (ndarray) – time frames to analyse

  • scale (tuple or list) – scale of the image in each dimension

Returns:

detections

Return type:

SParticles

class stracking.detectors.DoHDetector(min_sigma=1, max_sigma=30, num_sigma=10, threshold=0.01, overlap=0.5, log_scale=False)

Determinant of Hessian spots detector.

Implementation from scikit-image. Blobs are found using the Determinant of Hessian method. For each blob found, the method returns its coordinates and the standard deviation of the Gaussian kernel used for the Hessian matrix whose determinant detected the blob.

Parameters:
  • min_sigma (float, optional) – The minimum standard deviation for Gaussian Kernel used to compute Hessian matrix. Keep this low to detect smaller blobs.

  • max_sigma (float, optional) – The maximum standard deviation for Gaussian Kernel used to compute Hessian matrix. Keep this high to detect larger blobs.

  • num_sigma (int, optional) – The number of intermediate values of standard deviations to consider between min_sigma and max_sigma.

  • threshold (float, optional) – The absolute lower bound for scale space maxima. Local maxima smaller than threshold are ignored. Reduce this to detect less prominent blobs.

  • overlap (float, optional) – A value between 0 and 1. If the area of two blobs overlaps by a fraction greater than overlap, the smaller blob is eliminated.

  • log_scale (bool, optional) – If set, intermediate values of standard deviations are interpolated using a logarithmic scale to the base 10. If not, linear interpolation is used.

run(image, scale=None)

Run the detection on an ND image

Parameters:
  • image (ndarray) – time frames to analyse

  • scale (tuple or list) – scale of the image in each dimension

Returns:

detections

Return type:

SParticles

class stracking.detectors.LoGDetector(min_sigma=1, max_sigma=50, num_sigma=10, threshold=0.2, overlap=0.5, log_scale=False)

Laplacian of Gaussian spots detector

Detect blobs on an image using the Laplacian of Gaussian method. The implementation is from scikit-image

Parameters:
  • min_sigma (scalar or sequence of scalars, optional) – The minimum standard deviation for Gaussian kernel. Keep this low to detect smaller blobs. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.

  • max_sigma (scalar or sequence of scalars, optional) – The maximum standard deviation for Gaussian kernel. Keep this high to detect larger blobs. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.

  • num_sigma (int, optional) – The number of intermediate values of standard deviations to consider between min_sigma and max_sigma.

  • threshold (float, optional) – The absolute lower bound for scale space maxima. Local maxima smaller than threshold are ignored. Reduce this to detect less intense blobs.

  • overlap (float, optional) – A value between 0 and 1. If the area of two blobs overlaps by a fraction greater than overlap, the smaller blob is eliminated.

  • log_scale (bool, optional) – If set, intermediate values of standard deviations are interpolated using a logarithmic scale to the base 10. If not, linear interpolation is used.
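The num_sigma and log_scale parameters decide how the intermediate scales between min_sigma and max_sigma are sampled. A small sketch of the two spacings (illustrative, not the library's code):

```python
import math

def sigma_scales(min_sigma, max_sigma, num_sigma, log_scale=False):
    """Sample num_sigma scales between min_sigma and max_sigma."""
    if log_scale:
        # Evenly spaced exponents => logarithmically spaced sigmas (base 10).
        lo, hi = math.log10(min_sigma), math.log10(max_sigma)
        step = (hi - lo) / (num_sigma - 1)
        return [10 ** (lo + i * step) for i in range(num_sigma)]
    # Plain linear spacing between min_sigma and max_sigma.
    step = (max_sigma - min_sigma) / (num_sigma - 1)
    return [min_sigma + i * step for i in range(num_sigma)]

linear = sigma_scales(1, 50, 10)
logged = sigma_scales(1, 50, 10, log_scale=True)
```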

run(image, scale=None)

Run the detection on an ND image

Parameters:
  • image (ndarray) – time frames to analyse

  • scale (tuple or list) – scale of the image in each dimension

Returns:

detections

Return type:

SParticles

class stracking.detectors.SDetector

Interface for a particle detector

The parameters must be set in the constructor and the image data passed to the run method.

Example

my_detector = MyParticleDetector(threshold=12.0)
particles = my_detector.run(image)

run(image, scale=None)

Run the detection on an ND image

Parameters:
  • image (ndarray) – time frames to analyse

  • scale (tuple or list) – scale of the image in each dimension

Returns:

detections

Return type:

SParticles

class stracking.detectors.SSegDetector(is_mask=False)

Detections from segmentation image

Create a list of particle positions from a segmentation image. The segmentation image can be a binary mask or a label image. This detector is useful, for example, to create detections from a CellPose segmentation

Parameters:

is_mask (bool) – True if the input image is a binary mask, False if it is a label image

run(image, scale=None)

Run the detection on an ND image

Parameters:
  • image (ndarray) – time frames of label images

  • scale (tuple or list) – scale of the image in each dimension

Returns:

detections

Return type:

SParticles

Properties

class stracking.properties.IntensityProperty(radius)

Calculate the intensity properties of the particles

This measure adds five properties: mean_intensity, min_intensity, max_intensity, std_intensity, and radius (the radius parameter used for the measurement)

run(sparticles, image)

Calculate the feature

Parameters:
  • sparticles (SParticles) – Particles list

  • image (array) – 2D+t or 3D+t image array

Returns:

sparticles – input particles with the calculated feature added to the properties

Return type:

SParticles

class stracking.properties.SProperty

Interface to implement a property measure

Measure a property for each particle in a SParticles

run(sparticles, image)

Calculate the feature

Parameters:
  • sparticles (SParticles) – Particles list

  • image (array) – 2D+t or 3D+t image array

Returns:

sparticles – input particles with the calculated feature added to the properties

Return type:

SParticles

Linkers

class stracking.linkers.EuclideanCost(max_cost=1000)

Calculate the squared Euclidean distance between the centers of two objects

It computes the squared distance rather than the distance itself to save computation

run(obj1, obj2, dt=1)

Calculate the cost of linking particle1 and particle2

Parameters:
  • particle1 (array) – First particle data (t, Y, X) for 2D, (t, Z, Y, X) for 3D

  • particle2 (array) – Second particle data (t, Y, X) for 2D, (t, Z, Y, X) for 3D

Returns:

cost – Link cost

Return type:

float
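A pure-Python sketch of the squared-distance cost described above (standalone for illustration; the library class operates on particle arrays):

```python
def squared_euclidean_cost(p1, p2):
    """Squared distance between two particles (t, Y, X) or (t, Z, Y, X).

    The first entry is the time index, so only the spatial coordinates
    enter the cost. Skipping the square root saves computation while
    preserving the ordering of candidate links.
    """
    return sum((a - b) ** 2 for a, b in zip(p1[1:], p2[1:]))

cost = squared_euclidean_cost((0, 1.0, 2.0), (1, 4.0, 6.0))  # 3**2 + 4**2 = 25.0
```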

class stracking.linkers.SLinker(cost=None)

Interface for a particle tracker

The parameters must be set in the constructor, and the image data and particles passed to the run method.

Example

euclidean_cost = EuclideanCost(max_cost=5.0)
my_tracker = MyParticleTracker(cost=euclidean_cost, gap=1)
tracks = my_tracker.run(image, particles)

Parameters:

cost (SLinkerCost) – Object defining the linker cost

run(particles, image=None)

Run the tracker

Parameters:
  • image (ndarray) – time frames to analyse

  • particles (SParticles) – List of particles for each frame

Returns:

tracks

Return type:

STracks

class stracking.linkers.SLinkerCost(max_cost=1000)

Interface for a linker cost

This calculates the cost between two particles

run(particle1, particle2)

Calculate the cost of linking particle1 and particle2

Parameters:
  • particle1 (array) – First particle data (t, Y, X) for 2D, (t, Z, Y, X) for 3D

  • particle2 (array) – Second particle data (t, Y, X) for 2D, (t, Z, Y, X) for 3D

Returns:

cost – Link cost

Return type:

float

class stracking.linkers.SNNLinker(cost=None, gap=1, min_track_length=2)

Linker using nearest neighbor algorithm

Find the trajectories by linking each detection to its nearest neighbor

This tracker cannot handle split or merge events

Example

particles = SParticles(…)
euclidean_cost = EuclideanCost(max_cost=5.0)
my_tracker = SNNLinker(cost=euclidean_cost, gap=1)
tracks = my_tracker.run(particles)

Parameters:
  • cost (SLinkerCost) – Object defining the linker cost

  • gap (int) – Gap (in frame number) of possible missing detections

  • min_track_length (int) – Minimum number of connections in a selected track

run(particles, image=None)

Run the tracker

Parameters:
  • image (ndarray) – time frames to analyse

  • particles (SParticles) – List of particles for each frame

Returns:

tracks

Return type:

STracks
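To illustrate the idea behind nearest-neighbor linking (without gap closing or track bookkeeping), here is a hedged, pure-Python sketch that greedily links each particle in one frame to its closest unclaimed particle in the next frame. It is not the library's implementation:

```python
def link_frames(frame_a, frame_b, max_cost=1000.0):
    """Greedy nearest-neighbor links between two frames of (Y, X) points.

    Returns (index_in_a, index_in_b) pairs; each target is used at most once
    and links costing more than max_cost are rejected.
    """
    links, taken = [], set()
    for i, p in enumerate(frame_a):
        best, best_cost = None, max_cost
        for j, q in enumerate(frame_b):
            if j in taken:
                continue
            cost = sum((a - b) ** 2 for a, b in zip(p, q))  # squared distance
            if cost < best_cost:
                best, best_cost = j, cost
        if best is not None:
            links.append((i, best))
            taken.add(best)
    return links

links = link_frames([(0.0, 0.0), (10.0, 10.0)], [(0.5, 0.2), (9.8, 10.1)])
```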

class stracking.linkers.SPLinker(cost=None, gap=1, min_track_length=2)

Linker using Shortest Path algorithm

Find the optimal trajectories by iteratively finding the shortest path in the graph of all possible trajectories

This tracker cannot handle split or merge events

Example

particles = SParticles(…)
euclidean_cost = EuclideanCost(max_cost=5.0)
my_tracker = SPLinker(cost=euclidean_cost, gap=1)
tracks = my_tracker.run(particles)

Parameters:
  • cost (SLinkerCost) – Object defining the linker cost

  • gap (int) – Gap (in frame number) of possible missing detections

  • min_track_length (int) – Minimum number of connections in a selected track

run(particles, image=None)

Run the tracker

Parameters:
  • image (ndarray) – time frames to analyse

  • particles (SParticles) – List of particles for each frame

Returns:

tracks

Return type:

STracks

Features

class stracking.features.DisplacementFeature

Calculate the track displacement feature.

Displacement is defined here as the distance between the first and the last point of the track

run(stracks, image=None)

Measure a track property

Parameters:
  • stracks (STracks) – Track data container

  • image (ndarray) – optional image data

Returns:

stracks – tracks with new feature

Return type:

STracks

class stracking.features.DistanceFeature

Calculate the track distance feature.

Distance is defined here as the total path length of the track, i.e. the sum of the distances between consecutive points

run(stracks, image=None)

Measure a track property

Parameters:
  • stracks (STracks) – Track data container

  • image (ndarray) – optional image data

Returns:

stracks – tracks with new feature

Return type:

STracks

class stracking.features.LengthFeature

Calculate track length features.

Length is defined here as the number of points in a track

run(stracks, image=None)

Measure a track property

Parameters:
  • stracks (STracks) – Track data container

  • image (ndarray) – optional image data

Returns:

stracks – tracks with new feature

Return type:

STracks
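The three features above reduce to simple computations on a track's ordered points. A hedged, standalone sketch for one 2D track given as (T, Y, X) rows, using the usual tracking definitions (length = number of points, displacement = start-to-end distance, distance = summed step lengths):

```python
import math

def track_features(points):
    """points: ordered (T, Y, X) rows belonging to a single track."""
    length = len(points)
    # Displacement: straight-line distance from the first to the last position.
    displacement = math.dist(points[0][1:], points[-1][1:])
    # Distance: total path length along consecutive positions.
    distance = sum(math.dist(a[1:], b[1:]) for a, b in zip(points, points[1:]))
    return {"length": length, "displacement": displacement, "distance": distance}

feats = track_features([(0, 0.0, 0.0), (1, 0.0, 3.0), (2, 4.0, 3.0)])
# length 3; displacement 5.0 (a 3-4-5 triangle); distance 3.0 + 4.0 = 7.0
```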

class stracking.features.SFeature

Interface for a track feature measurement

Parameters:

stracks (STracks) – tracks to analyse

run(stracks, image=None)

Measure a track property

Parameters:
  • stracks (STracks) – Track data container

  • image (ndarray) – optional image data

Returns:

stracks – tracks with new feature

Return type:

STracks

Filters

class stracking.filters.FeatureFilter(feature_name, min_val, max_val)

Select trajectories based on feature

This filter selects trajectories where a given feature has a value between given min and max values

Parameters:
  • feature_name (str) – Name of the feature to use

  • min_val (float) – Minimum value of the feature to keep the track

  • max_val (float) – Maximum value of the feature to keep the track

run(stracks)

Run the filtering

Parameters:

stracks (STracks) – Tracks to filter

Returns:

stracks – Filtered tracks

Return type:

STracks
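A hedged sketch of the selection rule: a track is kept only if its feature value lies within [min_val, max_val]. The features mapping follows the STracks convention features[name][track_id] = value; the numbers are invented:

```python
def filter_track_ids(features, feature_name, min_val, max_val):
    """Return the track IDs whose named feature lies within [min_val, max_val]."""
    return [
        track_id
        for track_id, value in features[feature_name].items()
        if min_val <= value <= max_val
    ]

features = {"length": {0: 3, 1: 25, 2: 7}}
kept = filter_track_ids(features, "length", 5, 30)  # tracks 1 and 2 survive
```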

class stracking.filters.STracksFilter

Interface for a tracks filter

A filter can select tracks based on properties or features. Subclasses must implement the run method

run(stracks)

Run the filtering

Parameters:

stracks (STracks) – Tracks to filter

Returns:

stracks – Filtered tracks

Return type:

STracks

IO

class stracking.io.CSVIO(file_path)

Read/write tracks from/to csv file

This format does not support split/merge events or track features

Parameters:

file_path (str) – Path of the csv file

is_compatible()

Check if the file format and the reader are compatible

Returns:

compatible – True if the reader and the file are compatible, False otherwise

Return type:

bool

read()

Read a track file into STracks

The parsed data are stored in the stracks attribute

write(tracks)

Write tracks to file

Parameters:

tracks (STracks) – Tracks to write

class stracking.io.ICYIO(file_path)

Read an ICY model

Parameters:

file_path (str) – Path of the xml ICY file

is_compatible()

Check if the file format and the reader are compatible

Returns:

compatible – True if the reader and the file are compatible, False otherwise

Return type:

bool

read()

Read a track file into STracks

The parsed data are stored in the stracks attribute

write()

Write tracks to file

Parameters:

tracks (STracks) – Tracks to write

class stracking.io.ISBIIO(file_path)

Read/Write a ISBI XML tracks format

Parameters:

file_path (str) – Path of the xml ISBI file

is_compatible()

Check if the file format and the reader are compatible

Returns:

compatible – True if the reader and the file are compatible, False otherwise

Return type:

bool

read()

Read a track file into STracks

The parsed data are stored in the stracks attribute

write()

Write tracks to file

Parameters:

tracks (STracks) – Tracks to write

class stracking.io.StIO(file_path)

Read/write tracking with the native stracking format

This format was created for this library to easily read and write the data stored in the STracks container

Parameters:

file_path (str) – Path of the .st.json file

is_compatible()

Check if the file format and the reader are compatible

Returns:

compatible – True if the reader and the file are compatible, False otherwise

Return type:

bool

read()

Read a track file into STracks

The parsed data are stored in the stracks attribute

write(tracks)

Write tracks to file

Parameters:

tracks (STracks) – Tracks to write

class stracking.io.TrackMateIO(file_path)

Read a TrackMate model

Parameters:

file_path (str) – Path of the xml TrackMate model file

is_compatible()

Check if the file format and the reader are compatible

Returns:

compatible – True if the reader and the file are compatible, False otherwise

Return type:

bool

read()

Read a track file into STracks

The parsed data are stored in the stracks attribute

write()

Write tracks to file

Parameters:

tracks (STracks) – Tracks to write

stracking.io.read_particles(file)

Read particles from a file

The CSV file must contain columns with headers T, Y and X for 2D data, or T, Z, Y and X for 3D data. All additional columns are read as particle properties

Parameters:

file (str) – Path of the input file

Return type:

SParticles container with the particles read from the file

Raises:

IOError – when the file format is not recognised or the file is not well formatted
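To illustrate the CSV layout that read_particles expects, here is a stdlib-only sketch that parses the T, (Z), Y, X columns into (N, D+1) particle rows and treats every additional column as a property (the file content and column names are invented; the real reader returns an SParticles container):

```python
import csv
import io

def parse_particles_csv(text):
    reader = csv.DictReader(io.StringIO(text))
    # Coordinate columns in T, (Z), Y, X order; everything else is a property.
    coords = [c for c in ("T", "Z", "Y", "X") if c in reader.fieldnames]
    data = []
    properties = {c: [] for c in reader.fieldnames if c not in coords}
    for row in reader:
        data.append([float(row[c]) for c in coords])
        for name in properties:
            properties[name].append(float(row[name]))
    return data, properties

text = "T,Y,X,intensity\n0,10.0,12.0,200\n1,11.0,12.5,210\n"
data, props = parse_particles_csv(text)
```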

stracking.io.read_tracks(file_path)

Main track reader

This method calls the first compatible reader found

Parameters:

file_path (str) – Path of the track file to read

Returns:

tracks – Container of the trajectories

Return type:

STracks

stracking.io.write_particles(file, particles)

Write particles into a file

Parameters:
  • file (str) – Path of the file to be written

  • particles (SParticles) – Particles container

stracking.io.write_tracks(file_path, tracks, format_='st.json')

Write tracks to file

Parameters:
  • file_path (str) – Path of the destination file

  • tracks (STracks) – Container of tracks to be saved

  • format_ (str) – Name of the file format ('st.json', 'CSV', 'ICY', 'Trackmate')