Introduction
Stars, like our solar system’s Sun, shine brightly because their energy output is sustained by ongoing fusion of hydrogen in their cores. With properties intermediate between those of the smallest, coldest “red dwarf” stars and giant planets (e.g., Jupiter and Saturn), brown dwarfs are an intriguing class of celestial objects: formed like stars from clouds of molecular material, but lacking sufficient mass to maintain stable hydrogen fusion, resulting in faint, infrared-dominated emission (e.g., Reid and Hawley 2005). Brown dwarfs can provide unique astrophysical insights, and hence are extremely scientifically valuable. For instance, brown dwarfs overlap in mass, radius, and temperature with planets around other stars (exoplanets), but have atmospheres much more readily suitable for detailed characterization, as brown dwarfs roam interstellar space alone, free from the contaminating glare of a much brighter host star (e.g., Faherty et al. 2016). Hunting for cold brown dwarfs is also exciting because they have only recently become detectable with infrared telescopes, giving us a fresh chance to find previously undiscovered nearby neighbors of the Sun (e.g., Luhman 2013; Luhman 2014).
Searching for our solar system’s nearest cosmic neighbors is one of the most time-honored and enduring quests in astronomy. The recent advent of powerful infrared telescopes/detectors has catalyzed a new era of mapping the Sun’s neighbors by enabling the discovery of dim and cold brown dwarfs. The coldest brown dwarfs can be detected only in the Sun’s local neighborhood of our Milky Way galaxy (e.g., Kirkpatrick et al. 2019), hence our project’s name: Backyard Worlds: Cool Neighbors (hereafter referred to as “Cool Neighbors”). The vast dataset furnished by NASA’s Wide-field Infrared Survey Explorer (WISE; Wright et al. 2010) space telescope has unrivaled potential to pinpoint brown dwarfs, but this 50 trillion–pixel archive has not yet been fully explored. Our Backyard Worlds: Planet 9 citizen science project (Kuchner et al. 2017), launched in 2017, has serendipitously discovered hundreds of brown dwarfs through extensive visual inspection of WISE sky maps (e.g., Meisner et al. 2020). Despite this success, the Backyard Worlds: Planet 9 interface was optimized for discovery of theorized outer solar system planets (sometimes dubbed “Planet 9” or “Planet X”; e.g., Batygin and Brown 2016, Matese and Whitmire 2011) rather than brown dwarfs.
In mid-2023 we launched the next evolution of Backyard Worlds, a citizen science project called Backyard Worlds: Cool Neighbors (Humphreys et al. 2022). Cool Neighbors is optimized for the discovery of cold brown dwarfs thanks to newly incorporated machine learning (ML; sometimes alternatively referred to as “artificial intelligence” or “AI”) pre-selection of brown dwarf candidates (Caselden et al. 2020). Through our machine learning pre-selection, Cool Neighbors improves the scientific efficiency of volunteers’ time spent visually inspecting WISE telescope images compared with Backyard Worlds: Planet 9, as the latter simply shows random sky locations to participants.
We begin by reviewing the scientific context motivating Backyard Worlds: Planet 9 and Cool Neighbors, then briefly describe salient aspects of the Backyard Worlds: Planet 9 project that represents a precursor to Cool Neighbors. Next, we discuss the design of Cool Neighbors, highlighting key differences compared with Backyard Worlds: Planet 9, and providing an overview of the deep learning brown dwarf candidate pre-selection employed by Cool Neighbors. We subsequently discuss the ways in which Cool Neighbors has presented the human plus ML combination to (potential) volunteers via user training and promotional materials that emphasize the complementarity of citizen science plus machine learning. Finally, we assess the engagement and excitement of Backyard Worlds volunteers about Cool Neighbors via quantitative classification metrics and a survey of highly engaged participants’ views about the usage of ML/AI within Cool Neighbors.
Project Details—Cool Neighbors and Backyard Worlds: Planet 9
Scientific context
Our Milky Way galaxy’s inhabitants extend below the hydrogen burning mass limit, with a substellar sequence of brown dwarfs continuing to much lower temperatures and masses than the smallest stars. But dim brown dwarfs, by their nature elusive and difficult to characterize, have left us with many persistent questions. How far does the population of cool substellar objects born like stars extend into the “planetary mass” regime? Do Jupiter-mass objects form more commonly in isolation or as companions to stars? What are the physical properties of the lowest luminosity brown dwarfs?
Backyard Worlds: Planet 9 has two distinct science goals meant to be achieved via the same Zooniverse citizen science workflow: (1) searching for theorized planets in the outer regions of our own solar system, sometimes referred to as “Planet 9” or “Planet X”, and (2) discovering new nearby brown dwarfs. It is reasonable to address both of these science goals simultaneously because both searches involve hunting for celestial objects that appear to move relatively rapidly across the sky. Backyard Worlds: Planet 9 prioritized the first goal, as discovering a new planet in the solar system would be an extremely high-impact result (the last time a solar system planet was discovered was more than 150 years ago).
Historically, astronomers have searched for nearby stars, brown dwarfs, and planets in the outer solar system by looking for objects that appear to move across the sky relatively rapidly compared with distant stars and galaxies. For instance, Barnard’s star was noted to be likely very close to the solar system in 1916 when its large apparent motion was recognized (Barnard 1916). Thereafter, for nearly a century, the list of the Sun’s closest three neighboring star systems remained unchanged. This stagnation in the census of the Sun’s nearest neighbors was due in part to the fact that, until recently, sensitive sky surveys over wide fields of view were possible only at visible light wavelengths (0.3 microns < λ < 0.7 microns, where λ is the light’s wavelength), whereas cool brown dwarfs shine most prominently in the infrared (λ ~ 4–5 microns).
Beginning in 2010, NASA’s WISE mission began revolutionizing our view of the infrared universe (Wright et al. 2010). WISE is a 40 cm aperture telescope aboard a satellite in low-Earth orbit. WISE brings excellent sensitivity at wavelengths of 3–5 microns and by now has imaged the entire sky more than 20 times in its W1 (3.4 micron) and W2 (4.6 micron) bandpasses, spanning a >10-year time baseline. WISE has discovered hundreds of brown dwarfs (e.g., Kirkpatrick et al. 2011), initially based primarily on their red WISE colors (indicative of low temperature), rather than motion. The novel “unWISE” (Meisner et al. 2017) reprocessing of WISE data goes far deeper than preceding WISE-based data products, enabling a new generation of WISE brown dwarf searches based primarily on motion rather than color. The continued search for the coolest brown dwarfs is particularly critical and timely given James Webb Space Telescope’s (JWST’s) unprecedented ability to conduct spectroscopy of these objects in the mid-infrared, near the peak of their spectral energy distributions (e.g., Miles et al. 2023).
Backyard Worlds: Planet 9
The Backyard Worlds: Planet 9 project (Kuchner et al. 2017; http://backyardworlds.org) was launched on February 15, 2017. Backyard Worlds: Planet 9 is hosted on the Zooniverse online crowdsourcing platform (Simpson et al. 2014). Backyard Worlds: Planet 9 splits the sky into 1.16 million square patches, each 11.7 arcminutes on a side; for comparison, the full Moon as viewed from Earth is roughly 31 arcminutes in diameter. The deep unWISE sky maps for each WISE sky pass are downloaded, processed, and arranged in a time-lapse image blink (i.e., a time-lapse movie) that flips through many years’ worth of WISE data in a matter of seconds. Importantly, part of the Backyard Worlds: Planet 9 image processing prior to upload of Zooniverse subjects is to apply a technique called “difference imaging,” whereby relatively unchanging objects like the vast majority of distant stars and galaxies are subtracted, leaving behind imprints only from those objects that appear to move relatively rapidly or vary significantly in brightness. Random sky patches are shown to volunteers for classification.
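As a quick sanity check on these tiling numbers (a back-of-the-envelope sketch, not the project's actual code), the quoted patch count and patch size can be compared with the ~41,253 square degrees of the celestial sphere:

```python
# Back-of-the-envelope check: do 1.16 million patches, each
# 11.7 arcminutes on a side, cover the full sky?
n_patches = 1.16e6
side_deg = 11.7 / 60.0                    # arcminutes -> degrees
total_area = n_patches * side_deg ** 2    # ~44,000 deg^2
full_sky = 41253.0                        # area of celestial sphere, deg^2

print(f"tiled area: {total_area:.0f} deg^2")
print(f"ratio to full sky: {total_area / full_sky:.2f}")
```

The tiled area slightly exceeds the full-sky area (ratio ~1.07), consistent with modest overlap between neighboring patches.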
Over the past ~7 years, Backyard Worlds: Planet 9 volunteers have performed 8.8 million classifications, and discovered hundreds of brown dwarfs, including several of the coldest known brown dwarfs (e.g., Bardalez Gagliuffi et al. 2020). Both the Backyard Worlds: Planet 9 and Cool Neighbors web interfaces were constructed using the Zooniverse Project Builder tool.
Backyard Worlds: Cool Neighbors
With its focus on outer planet discovery, Backyard Worlds: Planet 9 is not fully optimized for discovering nearby brown dwarfs. Showing entirely random sky locations to citizen scientists is inefficient when searching for celestial moving objects. The aforementioned relatively large size of Backyard Worlds: Planet 9 sky patches is driven by searching for “Planet 9,” which would appear to move much faster across the sky than would even the nearest, fastest brown dwarfs. A targeted variant of Backyard Worlds: Planet 9, zoomed in on specific sky regions thought to be of relatively high interest, was therefore a natural next step toward maximally mining the WISE data set for nearby brown dwarf discoveries.
Hence, we launched the Backyard Worlds spinoff project Cool Neighbors. The critical difference between Cool Neighbors and Backyard Worlds: Planet 9 is that Cool Neighbors is a targeted search for brown dwarfs. Moreover, the Cool Neighbors targeting is performed with a deep neural network algorithm, folding in a new ML element previously absent from the Backyard Worlds: Planet 9 project. The targeted nature of Cool Neighbors also enables multiple follow-on adjustments that may themselves be viewed as enhancements. First, because specific brown dwarf candidates are known in advance of creating the Zooniverse subjects, we can zoom in on the immediate vicinity of each specific candidate, limiting the sky area covered by each subject to 2 arcminutes × 2 arcminutes, a ~35x reduction of sky area to be visually scanned per subject. Relatedly, because such a narrowed sky area is covered by each subject, it makes sense for Cool Neighbors to skip the difference imaging aspect of Backyard Worlds: Planet 9’s pre-processing and simply show the WISE sky maps themselves, containing mostly unchanging distant stars and galaxies. Cool Neighbors images also differ from those of Backyard Worlds: Planet 9 in that the former uses an inverted color scale (empty sky regions are white) whereas the latter uses the opposite (empty sky regions are dark). These differences relative to Backyard Worlds: Planet 9 are spelled out in the user-facing Frequently Asked Questions (FAQ) section of the Cool Neighbors Zooniverse interface. Because of Cool Neighbors candidate pre-selection, this project requires roughly an order of magnitude fewer subjects than does Backyard Worlds: Planet 9.
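The quoted ~35x reduction follows directly from the two per-subject patch sizes; a trivial check, included for concreteness:

```python
# Per-subject sky area: Backyard Worlds: Planet 9 (11.7' x 11.7' patches)
# versus Cool Neighbors (2' x 2' cutouts).
reduction = (11.7 * 11.7) / (2.0 * 2.0)   # = 34.2, i.e. the quoted ~35x
print(f"area reduction per subject: {reduction:.1f}x")
```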
Cool Neighbors launched on June 27, 2023. Cool Neighbors citizen scientists have performed 1.8 million classifications since its launch, as of January 31, 2024. Cool Neighbors represents an excellent case study in the combination of ML and citizen science for multiple reasons. First, user activity/engagement metrics for Cool Neighbors can be compared with those of Backyard Worlds: Planet 9, which does not include any ML, to help assess whether machine learning candidate pre-selection is indeed having the desired positive effects on participant efficiency/productivity. Second, Cool Neighbors and Backyard Worlds: Planet 9 each have well over 1 million classifications, providing robust sample sizes for downstream analyses.
Usage of artificial intelligence techniques for Cool Neighbors’ candidate pre-selection
Cool Neighbors candidate pre-selection is based on an ML algorithm called SMDET (SMDET is not an acronym or initialism). SMDET is a custom recurrent convolutional neural network that segments and classifies astronomical image time series (ITS) according to the presence of (apparent) faint, fast objects (FFOs), with motion serving as a proxy for nearness (Caselden et al. 2020). We trained SMDET with synthetic objects injected into WISE ITS provided by the unWISE project (Meisner et al. 2017). The input data to SMDET typically consist of ~13 unWISE sky images per sky location per WISE channel (W1 and W2), spanning approximately 6–7 years of observation epochs. The SMDET neural network architecture, shown in Figure 1, uses 3-dimensional convolutional layers and 2-dimensional convolutional long short-term memory layers. The image-level outputs (on the right of the figure) are the segmented ITS and reproductions of any FFOs identified in channels W1 and W2. We use these outputs to rank ITS according to the likelihood that they contain one or more FFOs and to estimate precise FFO coordinates. The top ~70,000 ranked ITS, centered on these SMDET “centroid” sky coordinates, became our Zooniverse subject set. SMDET was built and deployed by Backyard Worlds citizen scientist Dan Caselden.

Figure 1
Schematic of the SMDET neural network architecture used to pre-select Cool Neighbors moving object candidates shown to volunteers via Zooniverse. SMDET analysis starts with pixel data (top left of schematic) for a set of 13 time-series unWISE W1 (3.4 micron) images and a corresponding set of 13 time-series unWISE W2 (4.6 micron) images covering the same 176 arcsecond × 176 arcsecond patch of sky. The neural network processes the pixel data with consecutive groups of 3-dimensional convolutional layers and long short-term memory layers. The output of each group is averaged into skip connections (bottom of schematic) to improve gradient calculation during backpropagation. SMDET outputs an “object mask” (top right of schematic) that models faint, high proper motion sources in the input pixel data, and a “segmentation mask” (middle right of schematic) that classifies which input pixels capture 1% or more of the flux from a faint, high proper motion source. ELU: Exponential Linear Unit, LSTM: Long Short Term Memory.
We use SMDET for Cool Neighbors candidate selection because ML techniques have previously shown great promise in revealing WISE moving objects missed by prior searches based on more conventional catalog queries (e.g., Marocco et al. 2019). Nevertheless, SMDET sometimes assigns high FFO likelihoods to a variety of false positives that enormously outnumber true FFOs, such as statistical image noise, bright star artifacts, detector persistence defects, and so forth. SMDET therefore serves as an excellent starting point for WISE moving object searches, but a human visual vetting step is still required afterwards to identify only those bona fide moving objects worthy of investing the follow-up resources of highly oversubscribed telescope facilities. Figure 2 shows examples of the Cool Neighbors Zooniverse classification interface, for a bona fide celestial moving object (top panels) and also for a common class of bright star artifact contaminant (bottom left panel). Since the launch of Cool Neighbors, the Backyard Worlds classifications performed have been dominated by those from Cool Neighbors, though the overall total of ~10.6 million Backyard Worlds classifications from both versions of the project combined is still strongly dominated by the ~8.8 million classifications received by Backyard Worlds: Planet 9 (see Figure 3, left panel).

Figure 2
Cool Neighbors Zooniverse classification interface example (from Humphreys et al. 2022). The two top row telescope images display the visually perceptible motion of a dim brown dwarf (orange dot near the center of each panel). The telescope image at bottom left displays a spurious algorithmically selected brown dwarf candidate caused by an orange donut-shaped observational artifact. Adjacent to this bottom left image is the Zooniverse classification task interface, as is seen by a volunteer performing a classification.

Figure 3
(Left) Cumulative Backyard Worlds classifications since Backyard Worlds: Planet 9 launch in February 2017, including both Backyard Worlds: Planet 9 and Backyard Worlds: Cool Neighbors. Backyard Worlds volunteers have performed more than 10 million total classifications since early 2017, with ~9 million from Backyard Worlds: Planet 9 and ~2 million from Backyard Worlds: Cool Neighbors. We thus have very robust sample sizes for our analyses of classification data from both projects. (Right) Histograms of time per classification separately for Backyard Worlds: Planet 9 (blue histogram) and Backyard Worlds: Cool Neighbors (orange histogram). The legend lists the median and mean of each distribution. Backyard Worlds: Cool Neighbors classifications are typically performed ~3x faster than Backyard Worlds: Planet 9 classifications, both in terms of median time per classification and mean time per classification. The Backyard Worlds: Cool Neighbors distribution peaks at a lower abscissa value (lesser amount of time taken per classification) and has less of a tail toward longer (several minutes or more) classification times compared with Backyard Worlds: Planet 9.
Presentation of Machine Learning/Artificial Intelligence to (Prospective) Cool Neighbors Volunteers
Recent studies of the general public’s views toward AI have found that people tend to be more concerned than excited about AI and to have negative views about the potential impacts of AI on their lives (e.g., Pew Research 2023, Rainie and Husser 2024). Therefore, we have been very careful when presenting the ML/AI component of Cool Neighbors to (prospective) volunteers, and we have endeavored to keep the following two guidelines in mind:
Avoid framing the citizen science project as a competition against the ML algorithm.
Avoid any implication that the intent behind using machine learning is to replace/eliminate the need for future human volunteer contributions.
For Cool Neighbors, adhering to these guidelines is relatively straightforward, as its ML component occurs before the citizen scientists’ candidate vetting, rather than in parallel with the human volunteer classifications (which could lend itself to concerns around guideline #1 above) or after the human volunteer classifications (which could lend itself to concerns around guideline #2 above).
Considering the Cool Neighbors web interface and training materials on Zooniverse in combination with volunteer recruitment materials surrounding the project’s launch in mid-2023, the project has taken a measured approach regarding its presentation of ML/AI to (prospective) volunteers. For instance, neither ML nor AI is mentioned on the project’s landing page (http://coolneighbors.org), which instead focuses on the astronomical concepts of our solar system’s cosmic neighbors and brown dwarfs. Similarly, neither the Cool Neighbors project name nor its logo make any reference to AI or ML, instead aiming to appeal to public interest in space science, stars, and planets.
Regarding terminology, Cool Neighbors uses ML rather than AI in volunteer-facing materials. This decision is rooted in academic astronomy, where the peer-reviewed literature has generally tended to use the term ML rather than AI, perhaps because the latter could be perceived as excessively sensationalized by the standards of academic writing. However, public engagement materials need not adhere to conventions of academic writing, and the SMDET methodology used by Cool Neighbors can reasonably be referred to as AI rather than ML, given SMDET’s neural network approach. In the future, it might be interesting to explore whether volunteer interest, recruitment, engagement, and/or retention are affected by the choice of ML versus AI terminology in our volunteer-facing materials. On the one hand, AI could sound more futuristic or technologically advanced, but on the other hand, it might also come across as more intimidating or perhaps even threatening, in the sense of “bots” replacing humans’ roles in the discovery process.
The following representative list contains many of the mentions of ML within Cool Neighbors’ Zooniverse interface and launch promotion materials:
– Our “About/Research” tab on Zooniverse states: “Using machine learning algorithms, we identified a number of locations across the WISE database that appear to have some degree of movement. However, these algorithms can be tricked by artifacts like detector noise and diffraction spikes. The most efficient way to vet these potential brown dwarf candidates is by utilizing the power of citizen science to analyze huge amounts of data in a short amount of time” (Cool Neighbors 2023a).
– Our Zooniverse “About/FAQ” tab states: “We phrased our classification question in a way that we hope will make it as easy as possible for you to spot most movers. However, our machine learning algorithm isn’t always perfect. If you catch a mover away from the center of the image, feel free to respond ‘yes’ as you would normally.” (Cool Neighbors 2023b).
– The National Optical-Infrared Astronomy Research Laboratory (NOIRLab) announcement of the Cool Neighbors launch says “Cool Neighbors expands on the popular Backyard Worlds: Planet 9 project with a dedicated search for brown dwarfs that now leverages machine learning” (NOIRLab 2023a).
– A NOIRLab Stories blog post about the Cool Neighbors launch says: “Each flipbook is centered on a brown dwarf candidate that was pre-selected via machine learning. These algorithms have limitations, though, so the most efficient way to vet potential brown dwarf candidates is to leverage citizen science to inspect huge amounts of data in a short amount of time” (NOIRLab 2023b).
– A Cool Neighbors X (Twitter) thread from @CoolNeighbors says “Backyard Worlds: Cool Neighbors combines machine-learning and human pattern-recognition to uncover brown dwarfs. Your findings could reshape our view of the Sun’s cosmic neighborhood or even catch JWST’s attention!” (Cool Neighbors 2023c)
– A Backyard Worlds blog post promoting the launch of Cool Neighbors says “Although the machine-learning algorithm significantly reduces the search effort to find potential brown dwarfs, it can still be deceived by artifacts and noise that mimic movement (which is precisely why your contribution is crucial)!” (Backyard Worlds 2023)
All of the above mentions of ML are generic: SMDET is never called out by name in our volunteer-facing materials, nor is the broader ML/AI concept of neural networks. Similarly, we do not use Backyard Worlds training or promotional materials as a venue for attempting to explain the concepts of ML/AI to public audiences, though conceivably more explanatory text could be devoted to this, especially within the project’s Zooniverse training materials (e.g., in the Zooniverse Project Builder’s Frequently Asked Questions section).
Overall, in roughly a third of the cases explicitly calling out ML, our volunteer-facing materials mention ML in passing, with context implying a positive connotation but without further elaboration. In the other roughly two thirds of cases, our volunteer-facing materials emphasize the limitations of ML relative to human visual vetting and hence the continued vital need for citizen scientist contributions. This relatively measured, balanced approach to the presentation of ML to (prospective) Cool Neighbors volunteers could perhaps help explain why our survey of advanced Backyard Worlds participants (see the section entitled “Highly Engaged Volunteers’ Views Toward Cool Neighbors’ Usage of Machine Learning”) finds that the plurality of respondents (49.1%) say that the ML aspect of Cool Neighbors target pre-selection does not strongly influence their level of excitement (either negatively or positively) about participating in the project.
Quantifying Engagement: Classification Analysis
We exported the Backyard Worlds: Planet 9 and Backyard Worlds: Cool Neighbors classification data from Zooniverse on January 31, 2024. All classification analyses presented in this paper are based on these January 31, 2024 “snapshots”. Two areas we focused on toward quantifying user engagement and efficiency in Cool Neighbors versus Backyard Worlds: Planet 9 were participant time invested per classification and number of classifications per registered user.
Time invested per classification
Utilizing Zooniverse’s native data exports and the associated aggregation Python library (https://aggregation-caesar.zooniverse.org/docs), we compiled and analyzed a list of classifications from the Backyard Worlds: Planet 9 and Cool Neighbors projects. Each classification contains useful metadata, such as the Coordinated Universal Time (UTC) at which it was created, the user who made the classification, and the identification number of the subject being classified. Processing this information meaningfully requires two filters on the classification data. Users who are not logged into the Zooniverse website are listed with a username that contains the phrase “not-logged-in” followed by a hashed version of their IP address; these users are filtered from the classification data to ensure that we analyze active, registered users. In addition, a per-user filter is applied to retain only reasonably consecutive classifications: a retained classification must not be the first classification by a particular user (as there would be no previous classification to compare against), and it must be submitted no more than 300 seconds after that user’s previous classification. Classifications failing either criterion are excluded. This filtering minimizes the influence of large outliers and helps ensure that the measured classification time differential reflects the difficulty, or simplicity, of the classification task rather than other unrelated factors.
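The filtering logic described above can be sketched as follows (a simplified illustration with hypothetical field names, not the project's actual pipeline code; the real exports are processed with the Zooniverse aggregation library):

```python
from datetime import datetime, timedelta

MAX_GAP = timedelta(seconds=300)  # discard gaps longer than 300 seconds

def classification_durations(classifications):
    """Compute per-classification time differentials, in seconds.

    `classifications` is an iterable of (username, created_at) pairs,
    where created_at is a UTC datetime. Drops not-logged-in users,
    each user's first classification, and gaps longer than 300 s.
    """
    durations = []
    last_seen = {}  # username -> timestamp of that user's previous classification
    for user, created_at in sorted(classifications, key=lambda c: c[1]):
        if "not-logged-in" in user:
            continue  # keep only registered, logged-in users
        prev = last_seen.get(user)
        last_seen[user] = created_at
        if prev is None:
            continue  # first classification: nothing to compare against
        gap = created_at - prev
        if gap <= MAX_GAP:
            durations.append(gap.total_seconds())
    return durations
```

The 300-second threshold matches the cutoff described above; in practice these timestamps come from the Zooniverse CSV exports.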
With a roughly equal number of classification time differentials, or classification durations, for both projects over the first six months of each project after its respective launch date, we observe a significant difference in how users interacted with the classification task of each project (see Figure 3, right panel). The Cool Neighbors histogram of time invested per classification is seen to peak at faster times, with a mean classification duration of 13.7 seconds for Cool Neighbors versus 38.3 seconds for Backyard Worlds: Planet 9. Similarly, we find a median classification duration of 6.0 seconds for Cool Neighbors versus 22.0 seconds for Backyard Worlds: Planet 9. Both the mean and median values of these classification durations suggest that the ML targeting employed by Cool Neighbors is enabling participants to complete classifications roughly 3 times faster than they had for Backyard Worlds: Planet 9. The two histograms shown in the right panel of Figure 3 represent classification duration distributions that differ with very high statistical significance. Applying a two-sample Kolmogorov-Smirnov test, we find that the hypothesis that the blue and orange histograms are drawn from the same distribution has a vanishingly small p-value, less than 10⁻²⁵⁰.
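Such a two-sample Kolmogorov-Smirnov comparison can be run with SciPy; the sketch below uses synthetic exponential stand-ins (with medians loosely matching the reported 6.0 s and 22.0 s values) rather than the real classification exports:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Synthetic stand-ins for the two duration distributions: an exponential
# with scale s has median s * ln 2, so choose scales accordingly.
cool_neighbors = rng.exponential(scale=6.0 / np.log(2), size=50_000)
planet9 = rng.exponential(scale=22.0 / np.log(2), size=50_000)

# Null hypothesis: both samples are drawn from the same distribution.
stat, p_value = ks_2samp(cool_neighbors, planet9)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3g}")
```

With samples this large and this different, the p-value underflows toward zero, mirroring the vanishingly small p-value reported for the real data.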
Number of classifications per user
Using the same classification exports and analysis tools as above, we also investigated the number of classifications per registered, logged-in Zooniverse user in both projects. In this case, we did not limit analyses to the first six post-launch months of each project. We find (see Table 1) that on average, logged-in, registered users of Cool Neighbors have each contributed 423.2 classifications versus 87.6 classifications for Backyard Worlds: Planet 9. In the median, logged-in, registered users of Cool Neighbors have each contributed 31 classifications versus 15 classifications for Backyard Worlds: Planet 9. Registered Cool Neighbors volunteers have thus each performed roughly 2–5x more classifications than have registered Backyard Worlds: Planet 9 users, depending on the exact metric adopted. This difference is even more striking when considering that the Backyard Worlds: Planet 9 project has been operating for roughly 10 times longer than Cool Neighbors. If we repeat this analysis of mean/median numbers of classifications per logged-in, registered user while restricting to the first seven months of Backyard Worlds: Planet 9 to better match the available time period for Cool Neighbors, we still find very similar results: a median of 16 classifications per user and a mean of 75.1 classifications per user for Backyard Worlds: Planet 9. The increased number of classifications per registered user in Cool Neighbors is encouraging and may suggest that the targeted, “zoomed in” sky images enabled by Cool Neighbors’ usage of ML pre-selection are better at engaging and retaining volunteers than simply showing random, large chunks of the sky.
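The per-user tallies reduce to counting classifications per username; a minimal sketch using Python's standard library (a hypothetical helper, not the project's pipeline):

```python
import statistics
from collections import Counter

def per_user_stats(usernames):
    """Mean and median number of classifications per registered user.

    `usernames` contains one entry per classification (already filtered
    to logged-in, registered users).
    """
    counts = list(Counter(usernames).values())
    return statistics.mean(counts), statistics.median(counts)

# Toy example: 'alice' made 3 classifications, 'bob' made 1,
# so the mean is 2 and the median is 2.0.
print(per_user_stats(["alice", "alice", "alice", "bob"]))
```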
Table 1
Various quantitative comparisons of Backyard Worlds: Planet 9 versus Backyard Worlds: Cool Neighbors.
| PROJECT NAME | LAUNCH DATE | CLASSIFICATIONS (ALL TIME) | CLASSIFICATIONS (FIRST 6 MONTHS) | REGISTERED USERS |
|---|---|---|---|---|
| Backyard Worlds: Planet 9 | 2/15/2017 | 8.8 million | 4.3 million | 79,970 |
| Backyard Worlds: Cool Neighbors | 6/27/2023 | 1.8 million | 1.6 million | 4,087 |
| PROJECT NAME | MEAN CLASSIFICATIONS PER REGISTERED USER (ALL TIME) | MEDIAN CLASSIFICATIONS PER REGISTERED USER (ALL TIME) | MEAN TIME PER CLASSIFICATION (FIRST 6 MONTHS) | MEDIAN TIME PER CLASSIFICATION (FIRST 6 MONTHS) |
|---|---|---|---|---|
| Backyard Worlds: Planet 9 | 87.6 | 15 | 38.3 seconds | 22.0 seconds |
| Backyard Worlds: Cool Neighbors | 423.2 | 31 | 13.7 seconds | 6.0 seconds |

| PROJECT NAME | ROBUST STANDARD DEVIATION OF TIME PER CLASSIFICATION (FIRST 6 MONTHS) | REGISTERED USERS (FIRST 6 MONTHS) |
|---|---|---|
| Backyard Worlds: Planet 9 | 26.5 seconds | 41,298 |
| Backyard Worlds: Cool Neighbors | 7 seconds | 3,641 |
Additional Metrics
Table 1 includes a number of additional quantitative metrics from our classification analysis, in order to provide further context about the data sets employed throughout this section. Note that while Backyard Worlds: Planet 9 has ~80,000 registered users, and Cool Neighbors has only ~4,000 registered users thus far, only ~1,000 registered users overlap between these two sets, suggesting that Cool Neighbors has recruited a large crop of new participants beyond those previously involved with Backyard Worlds: Planet 9. Table 1’s robust standard deviation column uses half the difference between a given distribution’s 84th percentile value and 16th percentile value as a measure of the spread of values. This metric is based on the fact that in a normal distribution ~68% of values fall within +/– 1 standard deviation of the mean, and is a useful statistic for asymmetric distributions with non-Gaussian tails like those shown in the right panel of Figure 3.
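The percentile-based robust standard deviation described above can be written compactly (a sketch of the definition in the text):

```python
import numpy as np

def robust_std(values):
    """Half the spread between the 84th and 16th percentiles.

    For a Gaussian this approximates the standard deviation, since
    ~68% of values fall within +/- 1 standard deviation of the mean;
    unlike the ordinary standard deviation, it is insensitive to
    long, non-Gaussian tails.
    """
    p16, p84 = np.percentile(values, [16, 84])
    return 0.5 * (p84 - p16)

rng = np.random.default_rng(0)
print(robust_std(rng.normal(size=100_000)))  # ~0.99 for a unit Gaussian
```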
Highly Engaged Volunteers’ Views Toward Cool Neighbors’ Usage of Machine Learning
In early 2024, we polled the Backyard Worlds advanced participant group to understand how the incorporation of ML into Cool Neighbors might (or might not) be affecting their level of excitement about participating in this citizen science project. The Backyard Worlds advanced participant group consists of 359 members subscribed to a Google Group that furnishes email exploder and web forum capabilities. Citizen scientists are invited to join this group when they (optionally) provide either Backyard Worlds: Planet 9 or Cool Neighbors with their email address while submitting a moving object candidate to the professional science team via a Google Form linked from the relevant Zooniverse interface. (The form is titled “Think You’ve Got One?” for Backyard Worlds: Planet 9 and “Move-In Form” for Cool Neighbors.) A citizen scientist then joins the Backyard Worlds Google Group by accepting the automatically generated invitation, though not all invites end up being accepted.
The 359 Backyard Worlds advanced participants represent a very small fraction of all Backyard Worlds participants; the number of unique registered Zooniverse users for Backyard Worlds: Planet 9 and Cool Neighbors combined is ~83,000. That said, these advanced participants account for a disproportionately large fraction of the total classifications received by these projects: both Backyard Worlds: Planet 9 and Cool Neighbors have Gini coefficients larger than 0.8 (see Spiers et al. 2019 for further discussion of the Gini index as applied to distributions of classifications per citizen scientist).
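For readers unfamiliar with the Gini index in this context, a minimal Python sketch follows. The per-user classification counts here are invented for illustration (they are not the actual project data); a heavy-tailed distribution in which a handful of highly engaged users dominate the totals yields a Gini coefficient near 1:

```python
import numpy as np

def gini(counts):
    """Gini coefficient of a distribution of per-user classification counts.

    Returns 0 when every user contributed equally; values approaching 1
    mean a small fraction of users contributed nearly all classifications.
    """
    x = np.sort(np.asarray(counts, dtype=float))
    n = x.size
    # Standard formula based on the sorted (cumulative) distribution,
    # using 1-based ranks.
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1.0) / n

# Hypothetical example: 900 users with 1 classification each, 90 users
# with 10 each, and 10 power users with 5,000 each.
counts = [1] * 900 + [10] * 90 + [5000] * 10
print(gini(counts))  # well above 0.9
```

In this toy example, the top 1% of users contribute the overwhelming majority of classifications, which is qualitatively similar to the >0.8 Gini coefficients reported for both Backyard Worlds projects.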
We designed our poll, entitled "Cool Neighbors, Machine Learning & Artificial Intelligence," with anonymity and brevity in mind. The poll does not request any contact information and contains only multiple-choice and selection-box questions. There were four questions in total, shown in Figure 4. The poll was advertised on the Backyard Worlds advanced participant Google Group. Importantly, the invitation to fill out the poll explicitly stated that prior participation in Cool Neighbors was not a prerequisite, to avoid biasing the set of responses toward those who viewed Cool Neighbors favorably compared with Backyard Worlds: Planet 9. In addition to avoiding any collection of personally identifiable information, we felt that anonymity was important to help ensure that respondents would share their unfiltered opinions/beliefs rather than responding in whatever way they perceived might most please the professional science team. The poll's preamble text, displayed to respondents, stated that aggregate results would be submitted for publication in the journal Citizen Science: Theory and Practice. During the 9-day period in which the poll (a Google Form) was left open, 53 responses were received, corresponding to a 14.8% response rate.

Figure 4
Results of our anonymous online survey of Backyard Worlds advanced participants regarding their views about the usage of machine learning to pre-select the Cool Neighbors moving object candidates that subsequently get displayed/classified on Zooniverse. Each of the four vertically arranged panels corresponds to one of the four multiple-choice or selection-box questions, showing the prompt, selection options, and breakdown of results. The top-to-bottom order of the panels matches the order in which the questions were shown to respondents, and each question's selection options are listed here in the same order as they appeared for respondents. The second question (second panel from top) represents the main focus of this poll, with the other questions asked primarily for additional context/metadata. 49.1% of Backyard Worlds advanced participants say that Cool Neighbors' machine learning pre-selection does not strongly influence their excitement level about participating in the project, 39.6% of respondents say that incorporating machine learning makes them more excited about participating, and only 11.3% of respondents say that Cool Neighbors' use of machine learning makes them less excited to participate. All survey questions were marked non-optional for respondents and each question received 53 responses. ML: Machine Learning, AI: Artificial Intelligence.
Figure 4 shows the full results of the poll. The second question represents the main focus of the survey, whereas the other questions were intended to provide further context. 49.1% of respondents say that Cool Neighbors' usage of ML/AI does not strongly affect their excitement about participating in the project, 39.6% say it increases their excitement, and only 11.3% say it decreases their excitement. The third question, about respondents' general views toward ML and AI, shows that respondents typically view these techniques favorably: 77.4% view them positively, 15.1% are neutral, and 7.5% view them negatively. In light of these generally positive views, it is perhaps not surprising that respondents are much more likely to view Cool Neighbors' utilization of ML/AI as a source of added excitement rather than decreased excitement about the project. We note that recent surveys of broader populations have found more negativity about AI than we find in our poll (e.g., Pew Research 2023, which surveyed "Americans"). This difference is understandable, as Backyard Worlds participants, and especially its advanced participants, may well be much more enthusiastic/optimistic about scientific and technological progress than the general public at large (e.g., Pew Research 2023). In retrospect, including additional survey questions about whether respondents felt their views toward ML/AI had been influenced by Cool Neighbors promotional and/or informational resources might have provided further insight regarding the success of our project's ML/AI communication principles.
Conclusion
Within less than three months of launch, Cool Neighbors surpassed 1 million classifications performed by citizen scientists and discovered many new high-probability brown dwarfs not found by any prior searches. In the future, it will be interesting to compare a suitably defined "discovery rate" for Cool Neighbors to that of Backyard Worlds: Planet 9, once more time has elapsed since the Cool Neighbors launch and we have been able to more thoroughly follow up on and confirm promising moving object candidates with additional telescope facilities. It would also be interesting to eventually use Cool Neighbors volunteer classifications as additional training data for the supervised ML candidate pre-selection, thereby improving the SMDET model.
Using the first ~7 months of Cool Neighbors classification data since launch, we find strong indications that the project’s use of ML moving object pre-selection may be enabling more efficient and sustained volunteer participation in the project: Cool Neighbors receives typically 2–5x more classifications per registered user than Backyard Worlds: Planet 9, and Cool Neighbors volunteers can complete consecutive classifications at a rate ~3x faster than for Backyard Worlds: Planet 9.
We also reviewed the measured, balanced approach that Cool Neighbors has taken toward communicating ML/AI via volunteer-facing training and promotional materials, showing a number of specific examples. This communications approach may have played a role in the finding that 49.1% of Cool Neighbors advanced participants say their excitement about the project is neither strongly negatively nor strongly positively influenced by the project's use of ML for moving object candidate pre-selection. The remaining poll respondents were more than three times as likely to report increased excitement about Cool Neighbors owing to its ML/AI pre-selection as to report decreased excitement (see Figure 4, second panel from top). Cool Neighbors represents an early example of the complementarity between citizen science and artificial intelligence, a pairing that is expected to catalyze immense discovery and public engagement in coming years.
Data Accessibility Statement
Data used in this study have not been made available, as they are expected to contain as-yet unpublished brown dwarf discoveries, and therefore publishing the data prematurely would negatively affect Backyard Worlds participants who have generously dedicated their time to making these discoveries.
Acknowledgements
We wish to thank all participants who have contributed classifications to Backyard Worlds: Planet 9 and/or Backyard Worlds: Cool Neighbors. We especially thank those Backyard Worlds advanced participants who responded anonymously to our survey about AI/ML in relation to Backyard Worlds. We also wish to specially thank Backyard Worlds: Planet 9 and Backyard Worlds: Cool Neighbors citizen scientist moderators, whose efforts have significantly enhanced engagement via the Zooniverse Talk forum. This publication uses data generated via the Zooniverse.org platform, development of which is funded by generous support, including a Global Impact Award from Google, and by a grant from the Alfred P. Sloan Foundation.
Funding Information
Backyard Worlds: Cool Neighbors has been supported by NASA, which funded this work through the Citizen Science Seed Funding Program, Grant 80NSSC21K1485.
Competing Interests
The authors have no competing interests to declare.
Author Contributions
A. Meisner led the preparation of this manuscript. D. Caselden produced Figure 1 and related text about SMDET. A. Humphreys performed several classification analyses for this manuscript. M. Kuchner provided input on relevant classification metrics. E. Schapera, J.D. Kirkpatrick, A. Schneider, L.C. Johnson, J. Faherty, S. Casewell, F. Marocco, and A. Burgasser contributed equally to the remainder.
