
Pluvianus: Interactive GUI for Exploration and Quality Control of CaImAn Calcium Imaging Analysis Results

Open Access | Apr 2026


(1) Overview

Introduction

Over the last few decades, optical neuroimaging approaches have become essential for dissecting the functional organization of neural circuits. Two-photon fluorescence microscopy, in particular, has provided a powerful means to probe neuronal population dynamics with single-cell resolution in vivo, offering both the penetration depth and signal specificity required for quantitative measurements of large-scale activity patterns. Consequently, two-photon calcium imaging is now widely used to monitor neuronal activity in the living brain, generating terabyte-scale datasets that capture tens of thousands of neurons over extended timescales [1]. Although a decade of methodological innovation has aimed to extract meaningful cellular activity signals from these imaging data, the reliable extraction, curation, and validation of neuronal traces remain a major bottleneck in laboratories using two-photon population imaging [2].

Early efforts such as SIMA introduced the first open-source toolkit for motion correction and region-of-interest segmentation in these dynamic fluorescence recordings [3], but were quickly surpassed by newer algorithms that excelled in the Neurofinder challenge [4]. Still, the search for better performing, more reliable, and widely applicable solutions is ongoing [5].

Two mature implementations now dominate everyday practice: Suite2p, which utilizes singular value decomposition (SVD) [6], and CaImAn, built around constrained non-negative matrix factorization (CNMF), which enables demixing of overlapping neuronal sources [7]. Both algorithms reliably achieve human-level accuracy on standard two-photon imaging datasets, yet download statistics and GitHub activity suggest slightly broader adoption of CaImAn.

Deep-learning methods span a spectrum from spatiotemporal 3D CNNs (STNeuroNet [8]), through shallow U-Nets optimized for speed (SUNS [9]) and semi-supervised U-Net pipelines that reduce the number of required ground-truth labels [10], to online engines such as CITE-On that couple segmentation with real-time trace extraction [11]. Despite impressive benchmark scores, none have yet become routine: for some, high computational demands or limited reproducibility pose barriers; for others, the lack of annotated training data or ongoing software support limits practical adoption. These practical constraints have kept the CNMF-based pipeline the de facto standard in most experimental neuroscience laboratories.

Since reliable source identification in diverse time sequences is non-trivial, automated pipelines such as CaImAn benefit from an interactive layer. From the end-user's perspective, a graphical user interface (GUI) in which the key determinants of accurate neuron identification can be easily tweaked would be highly beneficial, preferably with immediately visualized changes that the user can readily interpret. Real-time calcium data analysis based on CaImAn-defined ground truth segmentation [12] would also benefit from fast manual curation.

CaImAn currently provides two primary interactive interfaces. The first consists of Jupyter notebook–based visualizations used in the tutorial demos. These notebooks are lightweight; however, they require familiarity with code-centric environments and are not optimized for a fully graphical user experience. The second is a minimal Qt-based GUI (caiman_gui.py), considered experimental, which provides basic visualization functionality but remains limited in scope and has not evolved into a comprehensive curation environment [13].

Several groups have wrapped the library in convenience GUIs tailored to specific experimental paradigms or analysis workflows. For example, Mesmerize supports workflow organization and systematic parameter search [14], EZcalcium targets relatively simple rodent sensory cortex recordings [15], and CalciumZero focuses on iPSC-derived brain organoids [16]. While these front-ends lower entry barriers, their tight coupling to specific paradigms compromises CaImAn’s hallmark versatility; uptake has therefore been modest. Workflow integrators such as NeuroWRAP embed CaImAn and Suite2p into containerized pipelines but focus on batch processing rather than interactive quality control [17].

Pluvianus aegyptius, the ‘crocodile bird’, was described by Herodotus as entering the open mouths of crocodiles to pick pieces of food from between their teeth [18]. In the same spirit, we developed a GUI called Pluvianus to aid in picking the best selection of cells by visualizing the output of CaImAn. It provides users with intuitive insight into what the analyzed data actually looks like and enables manual curation. The GUI serves both as a tool for power users to fine-tune and validate algorithmic performance, and as a natural second step for newcomers after running the CaImAn demo notebooks, offering a visual, code-free way to explore results and build understanding before moving on to custom analyses. Unlike previous wrapping attempts that embed custom pre-processing steps or enforce rigid folder hierarchies, Pluvianus deliberately focuses solely on post-hoc inspection and validation. To preserve the analysis package’s flexibility valued by expert users, it imposes no constraints on the computational pipeline or the user’s file organization, which remain entirely untouched (Figure 1).

Figure 1

Workflow of calcium imaging analysis with CaImAn and Pluvianus. Pluvianus is positioned at the end of the CaImAn pipeline and serves as a manual verification and curation tool focused on component selection.

Pluvianus contributes to broader efforts toward standardization and reproducibility in calcium imaging analysis and addresses the critical need for interactive, flexible quality control tools. By providing a standardized interface for quality control that is independent of specific experimental setups or analysis workflows, the software facilitates consistent validation procedures across different laboratories and experimental paradigms. As the field continues to generate increasingly large and complex datasets, tools like Pluvianus that bridge the gap between automated processing and human expertise will become increasingly valuable for ensuring the reliability of scientific conclusions drawn from calcium imaging experiments.

Implementation and Architecture

The software package is implemented entirely in Python with minimal additional dependencies beyond the core CaImAn suite. It provides a standalone, Qt 6-based desktop application designed specifically for visual quality control, seamlessly opening native *.hdf5 result files generated by CaImAn’s CNMF, OnACID or CNMF-E algorithms.

The GUI (Figure 2) combines synchronized spatial and temporal views with an interactive scatter plot of component metrics. This scatter widget allows users to click-select outliers, adjust component acceptance thresholds, or toggle the acceptance of individual components. The two spatial panels can cycle through the raw movie, CNMF component footprints, reconstructed movie, residuals, and summary images (mean, max, correlation). The temporal panel can overlay curves of raw fluorescence, CNMF temporal components, ΔF/F traces, and residuals for instant comparison. Currently, the software handles only 2D movies; microendoscopic one-photon data (CNMF-E) are only partially supported.

Figure 2

Main GUI of Pluvianus. The interface displays temporal (top) and spatial panels (bottom right), as well as a scatter plot of the component evaluation metric (bottom left). All panels are synchronized to the selected component and time point.

The Compute menu provides algorithms (several of which call CaImAn functions) for calculating missing component evaluations, ΔF/F estimations, or—for comparison—the raw fluorescence traces under the component contours. After inspecting the results, the modified threshold levels for component evaluation and the classification state of the components can be saved back to the .hdf5 file for subsequent pipeline steps. The final activity traces can also be saved to pynapple .npy files. Comprehensive documentation of the user interface, including available functions and a tutorial-style usage guide, can be found in the documentation on GitHub (https://github.com/katonage/pluvianus).

Example Use on Visual Cortex Data

We used Pluvianus to visualize and verify CaImAn analysis results from two-photon population activity measurements performed in mice during visual stimulation. The following description briefly summarizes the experimental procedures to provide context for this dataset.

In this project, male Thy1-GCaMP6s mice were used. Experiments were conducted in accordance with the Hungarian Act of Animal Care and Experimentation (1998, XXVIII) and Directive 2010/63/EU of the European Parliament and of the Council (22 September 2010) on the protection of animals used for scientific purposes. The experimental protocol was approved by the regional ethical committee (license number PEI/001/2290-11/2015). No animal surgeries or experiments were performed specifically for the development of Pluvianus or the preparation of this manuscript.

A craniotomy approximately 4 mm in diameter was opened over the visual cortex and covered with KvikSil and a 4 mm coverglass, and a 3D-printed headplate was cemented onto the skull under general anesthesia with fentanyl (0.05 mg/kg), midazolam (5 mg/kg), and medetomidine (0.5 mg/kg) (FMM cocktail). For two-photon imaging, similarly anesthetized mice were head-fixed under a 10× XLPLN10XSVMP objective (Olympus) providing a field of view of approximately 1.1 mm. Two-photon images were acquired using a resonant-scanhead two-photon microscope (Femto2D-Dual, Femtonics) at 920 nm (Chameleon Ultra II, Coherent) and detected with a photomultiplier tube equipped with a 490–550 nm bandpass filter (reference voltage set to 100%). Time series were acquired in resonant scanning mode at 30.9 Hz with approximately 35 mW laser power at a depth of 210 µm below the dura.

Visual stimulation consisted of drifting grating stimuli presented in eight directions on a monitor positioned approximately 15 cm from the animal’s eye. Each 3 s stimulus presentation was followed by a 4 s interstimulus interval displaying a uniform gray screen matched to the mean luminance of the gratings (6.7 cd/m²).

We developed Python scripts to extract two-photon fluorescence data arrays from vendor-specific (.mesc, Femtonics) measurement files and to concatenate stimulus epochs. The use of GCaMP6s enabled 3× temporal averaging to improve per-frame signal-to-noise ratio and reduce data size without compromising temporal resolution. Fluorescence data were analyzed in segments comprising 40–80 visual stimulus events, corresponding to approximately 10 minutes of recording (512 × 512 pixels, ~20,000 frames). We used the mesmerize-core package [19] to wrap CaImAn for rigid motion correction and CNMF-based source extraction. Pluvianus was subsequently used to open and inspect the resulting output files organized within the folder structure generated by mesmerize-core.
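The temporal averaging step can be sketched in plain Python. The `bin_frames` helper below is hypothetical (it is not part of Pluvianus, CaImAn, or our extraction scripts) and merely illustrates the idea: consecutive frames are averaged in non-overlapping groups, trading temporal sampling for per-frame signal-to-noise.

```python
def bin_frames(frames, factor=3):
    """Average consecutive frames in non-overlapping groups of `factor`.

    `frames` is a sequence of equal-length pixel lists; trailing frames
    that do not fill a complete group are discarded.
    """
    n_out = len(frames) // factor
    binned = []
    for i in range(n_out):
        group = frames[i * factor:(i + 1) * factor]
        # Element-wise mean across the frames in this group.
        binned.append([sum(px) / factor for px in zip(*group)])
    return binned

# Example: six 2-pixel frames reduced to two averaged frames.
movie = [[0, 3], [3, 3], [6, 3], [1, 0], [1, 0], [1, 0]]
print(bin_frames(movie))  # [[3.0, 3.0], [1.0, 0.0]]
```

With a 30.9 Hz acquisition rate, averaging by a factor of 3 yields an effective rate of about 10.3 Hz, still well within the kinetics of GCaMP6s.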

We performed the inspection steps described in the tutorial section of the documentation (https://github.com/katonage/pluvianus/blob/main/docs/Tutorial.md). Using this workflow, we observed that the OnACID algorithm did not provide a speed advantage over CNMF under our experimental conditions, while CNMF yielded more robust segmentation results. Initial parameter values were based on those provided in the CaImAn tutorial notebooks. The decay time was set to 0.9 s to match the kinetics of GCaMP6s. The spatial parameters rf and stride were determined according to the average cell diameter in pixels, following CaImAn recommendations (rf = 37, stride = 19, dxy = 2.1 µm).
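The parameter choices above could be collected in a dictionary of the form accepted by CaImAn's `CNMFParams(params_dict=...)`. This is only a sketch: the key names follow CaImAn's parameter naming, the frame rate is assumed to be the 30.9 Hz acquisition rate divided by the 3× averaging factor, and `dxy` is assumed isotropic; verify every value against your own setup and the CaImAn documentation.

```python
# Sketch of the parameter overrides described in the text, in the
# dictionary form accepted by CaImAn's CNMFParams(params_dict=...).
# Values here are assumptions drawn from this dataset; adapt them
# to your own recordings.
cnmf_overrides = {
    "fr": 30.9 / 3,       # effective frame rate after 3x averaging (Hz)
    "decay_time": 0.9,    # indicator decay time matching GCaMP6s (s)
    "rf": 37,             # half-size of analysis patches (pixels)
    "stride": 19,         # overlap between patches (pixels)
    "dxy": (2.1, 2.1),    # pixel size (um); assumed isotropic here
    "K": 20,              # expected components per patch (see below)
}
print(cnmf_overrides["fr"])
```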

Using the component extraction completeness assessment described in the tutorial, we found that several cells were not detected by the algorithm when using the default parameter set. To address this, we increased the K parameter (number of expected components per patch) to ensure that all active neurons were captured (Figure 3). Although this adjustment led to the detection of substantially more components than before (approximately twofold) and increased the computational time, false positives can be removed during subsequent component filtering. In contrast, neurons that are not detected during source extraction are irretrievably lost for downstream analysis. After performing source extraction with the increased sensitivity, we did not observe any residual activity that exceeded the magnitude of a typical component classified as ‘bad’, suggesting that the extraction captured all meaningful sources.

Figure 3

Assessment of the completeness of component extraction on mouse visual cortex two-photon data. With K = 4, multiple false negatives were observed (red arrows), which are the brightest patches on the MaxResAll image. Re-running the CaImAn CNMF algorithm with K = 20 detected all prominent activity sources (green arrows), which were therefore removed from the MaxResAll residual image (right). The remaining activity (e.g., yellow arrow) is comparable to the signal from components correctly identified but later rejected during curation (red delineated components in MaxResNone). Scale bar: 20 µm.

We used Pluvianus, in particular the scatter plot widget, to inspect and refine component acceptance thresholds. This resulted in the following parameter values: SNR_lowest = 1.5, min_SNR = 2.0, rval_lowest = 0.1, rval_thr = 0.7, cnn_lowest = 0.4, min_cnn_thr = 1.0. In our pipeline, we did not use the option to manually reassign components after threshold evaluation.
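These thresholds come in lower/upper pairs, which suggests a two-tier acceptance rule: a component is rejected outright if any metric falls below its hard `*_lowest` bound, and accepted if at least one metric clears its upper threshold. The helper below is a simplified, hypothetical sketch of one plausible reading of this scheme, using the values above; it is not CaImAn's actual implementation, so consult the CaImAn documentation for the exact rule.

```python
def classify(snr, rval, cnn,
             SNR_lowest=1.5, min_SNR=2.0,
             rval_lowest=0.1, rval_thr=0.7,
             cnn_lowest=0.4, min_cnn_thr=1.0):
    """Hypothetical two-tier acceptance test for one component.

    Reject if any metric falls below its hard lower bound; otherwise
    accept if at least one metric clears its upper threshold.
    """
    if snr < SNR_lowest or rval < rval_lowest or cnn < cnn_lowest:
        return False
    return snr >= min_SNR or rval >= rval_thr or cnn >= min_cnn_thr

print(classify(snr=3.1, rval=0.5, cnn=0.8))   # True: SNR clears min_SNR
print(classify(snr=1.8, rval=0.05, cnn=0.8))  # False: rval below rval_lowest
```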

Finally, we used Pluvianus to inspect the spatial and temporal properties of accepted components, particularly to assess the quality of baseline subtraction. A representative example is shown in Figure 4. We found that the algorithm adequately removed baseline fluctuations induced by visual stimulation, which are dominated by neuropil activity in the visual cortex.

Figure 4

Visualization of original and processed component activity. A representative component’s original fluorescence trace (calculated as the mean fluorescence of the movie within the component contour, blue) is shown alongside the corresponding CNMF output (detrended ΔF/F trace, red). The traces demonstrate that the algorithm successfully removed background fluctuations induced by the visual stimulus (indicated by black arrows). This view also allows verification that no fast-rising signal increase occurs at the onset of the visual stimulus (purple arrow), which would indicate stray light contamination in the imaging setup. The spatial widget further allows verification that the activity associated with this component is confined to the contour at the peaks of the ΔF/F trace (inset).

While working with this two-photon dataset, we regularly identified over 200 visually responsive cells within the 1.1 mm field of view of the anesthetized mouse primary visual cortex. Analyzing datasets of this size, Pluvianus proved to be an indispensable tool for assessing source extraction results and facilitating the optimization of processing parameters, as well as for curating results by refining component acceptance thresholds.

Quality Control

Pluvianus has been manually tested through interactive use of its graphical user interface. Functional correctness was verified by visually performing the analysis steps on CaImAn’s official demo datasets (demo_pipeline, demo_OnACID_mesoscope, and demo_pipeline_cnmfE), as well as through extensive practical use on our own in-house dataset, a mouse visual cortex two-photon dataset analyzed using CaImAn. No automated unit or integration tests have been implemented at this stage. Detailed step-by-step instructions for installation and use, as well as a tutorial covering common analysis tasks on output from the CaImAn demos, are provided in the project’s documentation on GitHub. These resources enable users to quickly verify the correct installation and the intended operation of the software.

(2) Availability

Operating system

Windows, Linux, and macOS

Programming language

Python 3.12

Additional system requirements

See requirements for CaImAn: https://caiman.readthedocs.io/en/latest/Installation.html.

Dependencies

List of contributors

Gergely Katona, HUN-REN Research Centre for Natural Sciences.

András Dávid, Budapest University of Technology and Economics.

Andrea Slézia, HUN-REN Research Centre for Natural Sciences.

Attila Kaszás, HUN-REN Research Centre for Natural Sciences.

Software location

Archive: Zenodo

Code repository: GitHub

Language

English

(3) Reuse Potential

CaImAn is designed to analyze calcium imaging datasets. Pluvianus complements CaImAn’s scalable source-extraction algorithms with a modern, researcher-friendly visualization GUI, without imposing constraints on how analyses are performed or how files are organized, thus preserving CaImAn’s exceptional adaptability. This design ensures compatibility with all existing CaImAn-based applications, enabling Pluvianus to be easily integrated wherever researchers require an interactive workflow or the ability to inspect intermediate results during pipeline development. In many cases, results are already saved in the required .hdf5 format; otherwise, only this minor adjustment is needed. Any neuroscience laboratory already utilizing CaImAn’s CNMF or OnACID pipelines can seamlessly add Pluvianus to their existing conda environment and immediately browse native *.hdf5 output files, regardless of the specific imaging modality or experimental setup. It may also complement existing CaImAn wrapper solutions and has been successfully used together with mesmerize-core [19] (Figure 2). Pluvianus is publicly distributed on GitHub under the permissive MIT license, allowing researchers to freely reuse, modify, and extend its functionality to fully leverage CaImAn’s extensive feature set. Community contributions from interested researchers are anticipated and welcomed via GitHub issues and discussions following standard GitHub workflows.

Acknowledgements

We thank Pat Gunn for his helpful comments.

Author Contributions

Gergely Katona: Conceptualization, two-photon imaging data collection, software development, software testing, and manuscript writing. Correspondence: katona.gergely@ttk.hu. András Dávid: Software development, software testing, and manuscript writing. Andrea Slézia: Chronic in vivo mouse surgery and manuscript writing. Attila Kaszás: Conceptualization, chronic in vivo mouse surgery, two-photon imaging data collection, manuscript writing, and resources. Correspondence: kaszas.attila@ttk.hu.

DOI: https://doi.org/10.5334/jors.623 | Journal eISSN: 2049-9647
Language: English
Submitted on: Sep 30, 2025
Accepted on: Apr 2, 2026
Published on: Apr 17, 2026
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2026 Gergely Katona, András Dávid, Andrea Slézia, Attila Kaszás, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.