PaCeQuant is available since release version 1.8.6 of MiToBo.
We are proud to announce the first version of PaCeQuant, a tool for high-throughput quantification of shape features for pavement cells!
PaCeQuant (available since MiToBo version 1.8.6)
- extraction of 27 characteristic shape features to quantify pavement cell shape
- fully automatic segmentation of cell regions from input images
- optional import of external/manual cell segmentation data
- classification of lobes into type I (2-cell contact) and type II (3-cell contact)
- additional R scripts for feature visualization
Usage - Parameters
To run PaCeQuant perform the following steps:
- install MiToBo by following the instructions on the Installation page
- run MiToBo and start the operator runner by selecting the menu item PaCeQuant from Plugins -> MiToBo
This will bring up the operator window of PaCeQuant.
Phases and Operation modes
PaCeQuant supports the three run options listed below:
- SEGMENTATION_AND_FEATURES: expects images as input, segments the cell regions and extracts features
- SEGMENTATION_ONLY: expects images as input, segments the cell regions only, without feature extraction
- FEATURES_ONLY: works on binary or label images or ImageJ regions and extracts features for the given regions
In addition, PaCeQuant can be run in either of two operation modes:
- INTERACTIVE: PaCeQuant processes data directly within the graphical environment of ImageJ, i.e. it reads regions from the ROI manager and directly displays results
- BATCH: PaCeQuant processes all files (images or ROI files) in a given folder and writes results to disk
In batch mode the user specifies an input directory; in interactive mode PaCeQuant expects an input image or regions to be available in ImageJ.
Depending on the chosen phase and operation mode, the graphical user interface is dynamically re-configured to show only the options relevant for the current selection.
Depending on the chosen phase and operation mode, you need to specify either input data already loaded in ImageJ/Fiji or a directory where PaCeQuant can find the data to process. In detail, provide the following information for the various configurations:
|Operation Mode||Phase(s) to Run||Input Data|
|INTERACTIVE||SEGMENTATION_ONLY||Gray-scale input image already opened in ImageJ/Fiji.|
|INTERACTIVE||SEGMENTATION_AND_FEATURES||Gray-scale input image already opened in ImageJ/Fiji.|
|INTERACTIVE||FEATURES_ONLY||Binary or label image already opened in ImageJ/Fiji, or ImageJ ROI set from the ROI manager.|
|BATCH||SEGMENTATION_ONLY||Directory containing gray-scale input images. First-level sub-folders are also processed.|
|BATCH||SEGMENTATION_AND_FEATURES||Directory containing gray-scale input images. First-level sub-folders are also processed.|
|BATCH||FEATURES_ONLY||Directory containing either binary or label images to process, a collection of ImageJ ROI files (with ending '.roi'), or an archive of multiple ImageJ ROIs (with ending '.zip').|
In BATCH mode PaCeQuant tries to analyze all image files present in the given folder and in any of its direct sub-folders. In particular, PaCeQuant will also analyze data from the result folder of a previous run if one is present in the given folder. To avoid problems caused by analyzing the wrong data, ensure that the provided directory contains only native input images or segmentation data and no other image data unsuitable for processing with PaCeQuant.
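The folder check described above can be done before starting a batch run. The following is a minimal sketch of such a pre-flight check; the file extensions, the "result" folder naming pattern, and the function name are our assumptions for illustration, not part of PaCeQuant itself:

```python
import os

# Hypothetical helper: extensions and the "result" naming pattern are assumptions.
IMAGE_EXTENSIONS = {".tif", ".tiff", ".png", ".jpg"}

def check_batch_folder(folder):
    """Warn about entries that PaCeQuant's BATCH mode might pick up unintentionally."""
    warnings = []
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        if os.path.isdir(path):
            # first-level sub-folders are processed too, so a result folder
            # from a previous run would be re-analyzed
            if "result" in name.lower():
                warnings.append("possible result folder from a previous run: " + name)
        elif os.path.splitext(name)[1].lower() not in IMAGE_EXTENSIONS:
            warnings.append("non-image file that may disturb processing: " + name)
    return warnings
```

Running this on the planned input directory and resolving all warnings first avoids re-analyzing stale results.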
As PaCeQuant measures lengths and areas from the given data it is important that the tool is properly calibrated, i.e. the physical size of a pixel is known. PaCeQuant supports two calibration modes:
- automatic: PaCeQuant seeks to extract calibration information from the given input data
- manual: the user enters calibration data, i.e. the physical size of a pixel and the units to use
Please note that ImageJ ROI files do not store calibration data. Thus, when using PaCeQuant to extract features from external segmentation data provided as ImageJ ROIs, you always need to provide calibration data manually. The same holds when manually post-processing segmentation data extracted with PaCeQuant, since all calibration data of the original images is lost.
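The effect of calibration on the measured features can be sketched as follows: lengths scale linearly with the physical pixel size, while areas scale quadratically. The function names below are illustrative, not PaCeQuant's API, and a square pixel is assumed:

```python
# Minimal sketch of manual calibration, assuming a square pixel of known
# physical size (e.g. in microns per pixel).

def calibrate_length(length_px, pixel_size):
    """Convert a length measured in pixels to physical units."""
    return length_px * pixel_size

def calibrate_area(area_px, pixel_size):
    """Convert an area measured in pixels to physical units; areas scale quadratically."""
    return area_px * pixel_size ** 2
```

This is why entering a wrong pixel size distorts area features twice as strongly (in relative terms) as length features.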
Configuration of Segmentation Phase
For detailed configuration of the algorithms applied during the segmentation phase the following parameters are available:
|Border Contrast||Selects whether the cell boundaries are darker or brighter than the background.|
|Heuristic for Gap Closing||During segmentation, small gaps sometimes remain in the cell boundaries; PaCeQuant can close them by applying one of the following heuristics:
|Unit for Size Thresholds||Unit in which the size thresholds for filtering valid regions (see next two parameters) are specified, i.e. either PIXELS or MICRONS.|
|Minimal Size of Cells||Segmented cells that are too small can be excluded automatically by specifying a minimal size for valid cells.|
|Maximal Size of Cells||Segmented cells that are too large can likewise be excluded automatically by specifying a maximal size for valid cells.|
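The interplay of the size thresholds and the threshold unit can be sketched as follows; the function and parameter names are our own, and we assume that when MICRONS is selected, a region's pixel area is converted to square microns before comparison:

```python
# Illustrative sketch of the min/max size filter; names are assumptions,
# not PaCeQuant's internal API.

def is_valid_cell(area_px, min_size, max_size, unit, pixel_size=1.0):
    """Return True if a region's area lies within the configured size thresholds."""
    if unit == "MICRONS":
        # compare in square microns, using the calibrated pixel size
        area = area_px * pixel_size ** 2
    else:  # unit == "PIXELS"
        area = area_px
    return min_size <= area <= max_size
```

Note that the same pixel region can pass in PIXELS and fail in MICRONS (or vice versa), so the unit and the thresholds must always be chosen together.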
Configuration of Feature Extraction Phase
|Feature Extraction||Feature extraction is performed by MiToBo's MorphologyAnalyzer2D operator, whose parameters can be configured here. Note that changing parameters might hamper comparison of PaCeQuant results among different work groups or laboratories.|
|Analyze lobe types?||Activates the optional classification of individual lobes into type I (2-cell contact) or type II (3-cell contact) lobes.|
Additional Configuration Parameters
|Draw region IDs to output image?||If the segmentation phase is run, PaCeQuant outputs a label image showing the segmented cell regions. If this option is activated, region IDs are drawn onto that image for easier interpretation. Note, however, that this renders the image unsuitable for any further automatic analysis.|
|Verbose||If enabled additional log messages will be printed to console.|
|Show/save additional results?||If enabled, an additional image stack with a collection of intermediate images is generated. The images in this stack provide deeper insight into PaCeQuant's processing and can help to identify problems in case segmentation of the input images fails.|
|Show/save feature stack?||If enabled a stack of images is generated where each image visualizes the values of a specific feature. For most images in this stack the feature values of individual cells are mapped to the intensity value of the corresponding cell (e.g., for features like area, solidity, width, length, branch count, etc.).|
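The mapping behind one slice of such a feature stack can be sketched as follows: every pixel of a cell in the label image is replaced by that cell's feature value. The label image and feature table below are toy data, and PaCeQuant's actual rendering may differ:

```python
# Toy sketch of rendering one feature-stack slice from a label image
# (0 = background) and a per-cell feature table.

def feature_image(label_image, feature_values):
    """Map per-cell feature values onto a label image, keeping background at 0."""
    return [
        [feature_values.get(label, 0.0) if label != 0 else 0.0 for label in row]
        for row in label_image
    ]

# example: two cells with labels 1 and 2 and their (hypothetical) area values
labels = [[0, 1],
          [2, 2]]
areas = {1: 3.5, 2: 7.0}
slice_img = feature_image(labels, areas)
```

In the resulting image, brighter cells simply have larger values of the visualized feature, which makes spatial patterns across the tissue easy to spot.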
Usage - Output data
Will be provided soon...