Foreground and Background Separation in Mouse Brain Tissues Using Statistical Tests: Can Tests Substitute for No Prior Knowledge?
Abstract
The separation of foreground from background regions has long been a central challenge in computer vision, particularly where no predefined model is available. Nevertheless, large image regions likely to contain salient subjects can be identified as foreground through blind modeling approaches. Objective evaluation can be achieved using self-organizing techniques (such as Self-Organizing Maps (SOMs), clustering, and predictive methods) and by assessing their results through self-consistency checks, applying statistical tests across multiple segmentation trials with varying sampling and partitioning conditions. Conventional machine-learning approaches, including logistic regression, can further provide insight into image differentiation by partitioning pixels into foreground and background clusters. These methods were applied to a mouse brain scan. Foreground and background were treated as the two primary pixel classes and were assigned using: (1) logistic regression, which provides a binary classification (foreground versus non-foreground); (2) SOM-based neural networks, which assign each image block to one of two output nodes corresponding to foreground or background; (3) hierarchical clustering (HCL), which further groups and subdivides blocks within each category; and (4) the $k$-means algorithm (KM), operated with two main classes and no other information. The studied case need not fall into the category of immediate Deep Learning (DL) applications. One practical reason is that the data cannot adequately support a DL scheme (many layers capturing very fine details, such as synapses and their structure). At the stage where the paper can be used, there is no need to track such scans across a series of "NanoZoomer" brain scans. Instead, the resulting maps can be used to train a DL scheme, with the performance results serving as additional parameters.
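To make the two-class, no-prior-knowledge setting concrete, the following is a minimal numpy sketch of method (4): a $k$-means separation of image blocks into foreground and background with $k=2$ and no other information. The synthetic image, the block size, and the centroid initialization at the intensity extremes are illustrative assumptions, not the paper's actual data or implementation.

```python
import numpy as np

def block_means(img, b):
    """Mean intensity of each non-overlapping b x b block of a 2-D image."""
    h, w = img.shape
    h2, w2 = h - h % b, w - w % b  # crop to a multiple of the block size
    v = img[:h2, :w2].reshape(h2 // b, b, w2 // b, b)
    return v.mean(axis=(1, 3)).ravel()

def kmeans_two_class(x, iters=20):
    """Plain two-class k-means on 1-D block features.
    Returns labels with 1 = the brighter (foreground) class."""
    c = np.array([x.min(), x.max()], dtype=float)  # centers at the extremes
    for _ in range(iters):
        lab = (np.abs(x - c[0]) > np.abs(x - c[1])).astype(int)
        for k in (0, 1):
            if np.any(lab == k):
                c[k] = x[lab == k].mean()
    return lab if c[1] >= c[0] else 1 - lab

# Synthetic "scan": dim background with a brighter central region (hypothetical data).
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, (64, 64))
img[16:48, 16:48] += 0.8
labels = kmeans_two_class(block_means(img, 8))  # one FG/BG label per 8x8 block
```

The only inputs are the pixel values and the number of classes; no mask, model, or annotation is used, which mirrors the blind-modeling premise of the abstract.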
At a much higher resolution, e.g., $10000 \times 10000$ pixels, DL would possibly be the only practical solution. Examples include automated artifact detection in brain images, small-tumor detection, automatic image tagging (e.g., identifying a lesion), and the analysis of MRI, CT, and EEG data. An example of weakly supervised segmentation of brain images using DL can be found in \cite{Gila2025}, where the weak supervision is applied to bounding boxes around seed pixels or to entire slices selected from a scan. The present work also uses blocks around seed pixels, but only when the logistic regression method is applied. Reportedly, annotated masked (e.g., background) slices or boxes of labeled background pixels are tracked across a scanned brain's volume. The degree of supervision there is estimated to be greater than the one adopted here, since logistic regression is used in only one of the implemented methods and not in KM, HCL, or the statistical tests. The resulting pixel-level maps indicating foreground and background regions were evaluated for self-consistency by varying the block size, the number of nodes, the seed-point placement, and the number of clusters. This methodology may serve as a useful preprocessing step prior to the application of more specialized image analysis techniques.
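The self-consistency idea (statistical tests comparing segmentation trials obtained under different partitioning conditions) can be sketched as follows. This is an assumed, simplified version: the foreground criterion (block mean above the global mean), the two block sizes, and the z-test for pixel-level agreement against chance are illustrative stand-ins for the paper's actual tests.

```python
import numpy as np

def fg_map(img, b):
    """Pixel-level FG/BG map from b x b blocks: a block is foreground when its
    mean intensity exceeds the global image mean (a simple blind criterion)."""
    h, w = img.shape
    h2, w2 = h - h % b, w - w % b
    m = img[:h2, :w2].reshape(h2 // b, b, w2 // b, b).mean(axis=(1, 3))
    fg = m > img.mean()
    # upsample the block decisions back to pixel resolution
    return np.repeat(np.repeat(fg, b, axis=0), b, axis=1)

def agreement_z(map_a, map_b):
    """z-statistic testing whether pixel-level agreement between two
    segmentation trials exceeds chance (p = 0.5)."""
    n = map_a.size
    agree = np.count_nonzero(map_a == map_b)
    return (agree - 0.5 * n) / np.sqrt(0.25 * n)

# Synthetic "scan" (hypothetical data): bright central foreground on a dim background.
rng = np.random.default_rng(1)
img = rng.normal(0.1, 0.02, (64, 64))
img[16:48, 16:48] += 0.8
# Two trials with different block sizes; a large z indicates self-consistent maps.
z = agreement_z(fg_map(img, 4), fg_map(img, 8))
```

The same comparison could be repeated while varying the seed-point placement or the number of clusters, accumulating one test statistic per pair of trials.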