
The Complex Card Matching Task (CCMT): A PsychoPy-Based Task for Studying the Exploration-Exploitation Trade-Off

Open Access | Feb 2026

(1) Overview

Introduction

The exploration-exploitation trade-off is a decision-making dilemma where an individual must choose between exploitation, leveraging known strategies or resources to maximize immediate rewards, and exploration, seeking new options that may yield greater long-term benefits but come with uncertainty. Managing this trade-off effectively is essential for adapting to changing environments and optimizing behavior [1].

Adaptive Gain Theory (AGT) [2] provides a framework for understanding how the brain dynamically regulates this balance, specifically through the locus coeruleus-norepinephrine (LC-NE) system. According to AGT, the LC-NE system modulates behavioral control states by adjusting the “gain,” or sensitivity, of neural responses to stimuli. This modulation supports a phasic mode for exploitation, where the LC generates bursts of activity in response to task-relevant stimuli, enhancing focused attention and facilitating efficient, goal-oriented actions. In contrast, the tonic mode of LC activity promotes exploration by increasing baseline neural responsiveness, enabling a broad search for new opportunities and adaptation to changing situations.

Pupillometry, which measures fluctuations in pupil size, has emerged as a viable method to indirectly track LC-NE activity [3, 4], as changes in phasic and tonic pupil responses align with these LC modes [2]. Following AGT’s predictions, during exploitative behaviors, when individuals focus on known, rewarding options, phasic LC activity predominates, leading to brief, sharp pupil dilations in response to task-relevant stimuli. This phasic response aligns with focused attention, where the LC’s short bursts of norepinephrine release enhance signal-to-noise ratios, supporting quick and accurate responses. In contrast, during explorative behaviors, individuals shift toward broader, more flexible behavior, seeking new or uncertain options. This shift is accompanied by tonic LC activity, which elevates baseline neural responsiveness across a wider range of stimuli. This tonic mode is reflected in increased baseline (pretrial) pupil dilation, which is often larger before exploration trials compared to exploitation trials. Such tonic dilation indicates heightened arousal and readiness to engage with diverse stimuli, even those not directly relevant to the current task.

While previous studies have used tasks like multi-armed bandit paradigms [5] to investigate the exploration-exploitation trade-off, these tasks often present significant limitations in clearly distinguishing between exploration and exploitation states. Many of these paradigms lack explicit markers or task structures that identify when participants are exploring new options versus exploiting known ones. This absence of clear demarcation can make it challenging to accurately interpret pupillary and neural responses as exclusively representing either exploration or exploitation, potentially conflating the cognitive states associated with each behavior [5, 6, 7].

The Complex Card Matching Task (CCMT) was designed to elicit both exploration and exploitation within a single task, allowing for direct comparison of pupillary response variations both within each state and during transitions between them. This task resembles the Wisconsin Card Sorting Task (WCST) [8] but introduces key changes, such as varied stimuli forms, colors, and rule complexity. For example, whereas the WCST typically involves rules based on single features with stimuli that repeat across trials, the CCMT allows two card features to form a rule, and all presented stimuli are switched with each trial. Additionally, while the WCST is a proprietary test with associated costs, the CCMT is freely available for use and modification.

In the CCMT, participants are presented with five cards on each trial: four cards at the top and one card at the bottom of the screen. They are required to match the bottom card to one of the top cards based on complex, unknown rules that change every 10 trials. Through trial and error, participants explore various options, using feedback after each trial to identify the correct rule and subsequently exploit it until a new rule is introduced. This design enables the tracking of pupillary response changes as participants progress from initial exploration, where information gathering predominates, to exploitation, which continues through to the final trial of each rule block.

The CCMT also allows examination of the effect of task difficulty on behavioral and pupillary responses during exploration and exploitation phases by introducing two distinct difficulty levels. This feature facilitates clearer isolation of difficulty impacts on pupillary dynamics. Shifts between exploration and exploitation can be pinpointed at two critical junctures: (1) when a new rule appears, marking a transition from exploitation at the end of a block to exploration at the start of a new block, and (2) within blocks, after participants have discerned the rule and shift from exploration to exploitation.

Figure 1 illustrates one block of 10 trials in the easy condition of the task. At the start of each block, a rule is selected at random (for instance, a “shape rule”), and participants are given 10 trials to determine the active rule through trial and error. Red rectangles indicate the selected card. After identifying the correct rule, participants typically continue to apply it until the block concludes. At the beginning of the next block, a new rule is implemented.

Figure 1

Example procedure for a single block in the easy condition.

Implementation and architecture

The CCMT was developed using PsychoPy [9], an open-source Python-based software that enables researchers to design and run a wide variety of psychological experiments. To run the CCMT task, PsychoPy must be installed. For the pupillometry version of the experiment, an eye-tracker must also be connected to the display computer.

The task procedure is organized as follows:

  • Participant Demographics: Participants are prompted to enter their unique subject number, age (as an integer), and gender (selected from “male,” “female,” or “other” with customizable options).

  • Instructions: Written instructions are displayed on the screen, with embedded images illustrating the card layout, and specific examples of rules. All possible matching rules are explicitly listed, and participants are encouraged to read these instructions carefully.

  • Practice Trials: Participants complete a series of practice trials, with the number of blocks and trials per block adjustable. By default, two blocks of six trials each are provided for each difficulty level (easy and hard).

  • Main Task (Card Game): Participants complete a predetermined number of blocks and trials. By default, this consists of 15 blocks, each containing 10 trials, for each difficulty level (easy and hard).

This procedure is repeated for each difficulty level. The starting difficulty level is randomly selected at the beginning of the task to ensure counterbalancing across participants. Between each task section (easy or hard), participants are given a break, and the researcher can press a designated key to re-calibrate the eye-tracker before the next section. Upon completing the task, a thank-you screen prompts participants to inform the researcher. This screen can be customized to include debriefing information.
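To make this structure concrete, the following is a minimal sketch (hypothetical names, not the actual CCM_task.py code) of how the counterbalanced session could be assembled: the starting difficulty is randomized per participant, and each difficulty level contains the default 15 blocks of 10 trials.

```python
import random

def build_session(n_blocks=15, n_trials=10):
    """Sketch of the CCMT session structure: the order of the two
    difficulty levels is randomized for counterbalancing, and each
    level contains n_blocks blocks of n_trials trials (paper defaults)."""
    difficulties = ["easy", "hard"]
    random.shuffle(difficulties)  # randomize starting difficulty
    session = []
    for difficulty in difficulties:
        for block in range(1, n_blocks + 1):
            for trial in range(1, n_trials + 1):
                session.append({
                    "difficulty": difficulty,
                    "blockNumber": block,
                    "trialNumber": trial,
                })
    return session

session = build_session()
len(session)  # 2 difficulties x 15 blocks x 10 trials = 300
```

In the real task, a break screen and an eye-tracker re-calibration step would sit between the two difficulty sections; this sketch only captures the trial bookkeeping.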

Source code of the CCMT package

The CCMT package consists of five image files (.png) used in the instruction phase and three Python modules (.py) that can be adapted for different research needs. The main script must be run from within PsychoPy (Coder or Runner), rather than through a standard external Python interpreter, to ensure that all PsychoPy-specific modules and eye-tracking components load correctly. All files need to be stored in the same folder for the task to execute properly.

  • CCM_task.py: The main script that runs the task.

  • libcard.py: A module that simplifies the creation, visualization, and manipulation of cards.

  • librule.py: A module that defines the task rules and evaluates user responses.

The main script includes a demographic tool to record participant information, instructions for each task component, and functions for presenting visual stimuli and recording behavioral responses. It also contains adjustable code to modify the number of trials within each block and the total number of blocks (rules) for each difficulty level. In the pupillometry version, this module also integrates eye-tracking configurations to enable pupillometry measurements. The task is specifically designed for SR Research’s EyeLink eye-trackers (SR Research Ltd., Mississauga, Ontario, Canada), although additional code may be required for compatibility with other eye-trackers (e.g., Tobii). By default, only the participant’s right eye is recorded, with a sampling rate of 250 Hz. Both options can be easily modified as needed.

The libcard.py module manages the specific attributes of each card displayed in the task. Each card contains a set of customizable features: color, shape, number of shapes, and, in the hard condition, size. In each trial, each card displays a specific shape (circle, square, diamond, or triangle), a defined quantity of that shape (from 1 to 4), a particular color (selected from PsychoPy color options, with defaults being “darkblue,” “darkgreen,” “brown,” and “darkgoldenrod”), and, in the hard condition, varying sizes for each shape (small, medium, large).
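A card with these attributes can be sketched as follows. This is an illustrative stand-in, not the actual libcard.py implementation; the class and function names are hypothetical, but the feature values are those listed above.

```python
import random
from dataclasses import dataclass
from typing import Optional

# Feature values described in the paper (defaults of the task).
SHAPES = ["circle", "square", "diamond", "triangle"]
COLORS = ["darkblue", "darkgreen", "brown", "darkgoldenrod"]
SIZES = ["small", "medium", "large"]  # hard condition only

@dataclass(frozen=True)
class Card:
    """Minimal stand-in for the cards managed by libcard.py."""
    shape: str
    number: int                 # 1-4 copies of the shape
    color: str
    size: Optional[str] = None  # only set in the hard condition

def random_card(hard=False, rng=random):
    """Draw a card with random features; size varies only when hard."""
    return Card(
        shape=rng.choice(SHAPES),
        number=rng.randint(1, 4),
        color=rng.choice(COLORS),
        size=rng.choice(SIZES) if hard else None,
    )
```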

The librule.py module defines the possible rules for both easy and hard difficulty levels. In the easy condition, the bottom card must be matched to one of the top cards based on a single attribute (color, shape, or number of shapes) or on a combination of two of these attributes. In the hard condition, the matching rules also comprise six options, but to increase complexity, only combinations of two attributes (color, shape, number of shapes, and size) are used as matching criteria.
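The core matching logic can be sketched in a few lines. This is a simplified illustration in the spirit of librule.py, not its actual code: a rule is represented as a tuple of feature names, and a top card matches when it agrees with the bottom card on every named feature.

```python
def matches(rule, bottom, top):
    """Return True if `top` matches `bottom` under `rule`, where `rule`
    is a tuple of feature names, e.g. ("shape",) for an easy
    single-attribute rule or ("color", "number") for a two-attribute
    rule. Cards are represented here as plain dicts for illustration."""
    return all(bottom[feature] == top[feature] for feature in rule)

bottom = {"shape": "circle", "number": 3, "color": "darkblue"}
top = {"shape": "circle", "number": 1, "color": "brown"}
matches(("shape",), bottom, top)          # True: shapes agree
matches(("shape", "color"), bottom, top)  # False: colors differ
```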

Input

Progression through the instruction phase is controlled by pressing the space bar, while responses during the card game are made by clicking on a card with the mouse. Participants are instructed to keep one hand on the mouse throughout the task to minimize arm movement. In the pupillometry version of the task, re-calibration during the experiment is managed entirely by the researcher, who can initiate it using either the keyboard or the host PC.

Data output

After the task is completed, two data files are saved in the CCMT directory, both named “CCM_XXX,” where “XXX” represents the participant’s unique identifier. The first file is a CSV file containing the behavioral data for each participant, with each row representing a single trial. The columns include:

  • age indicates the participant’s age,

  • gender indicates the participant’s gender,

  • difficulty specifies whether the participant completed the easy or hard version of the task,

  • blockNumber indicates the current block number,

  • trialNumber records the trial index within each block (ranging from 1 to 10, restarting at the beginning of each block),

  • rule provides a descriptive label for the active rule during each trial,

  • accuracy denotes the accuracy of each trial, coded as 0 for incorrect matches and 1 for correct matches,

  • reactionTime captures the time taken by participants to select and click on the card they believed matched the active rule, recorded in seconds to three decimal places.
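Because each row is one trial, the CSV output is straightforward to summarize with the Python standard library. The snippet below computes mean accuracy per block from rows in the documented column format; the data values are invented purely for illustration.

```python
import csv
import io
from collections import defaultdict

# Illustrative rows in the documented CCM_XXX.csv format (values invented).
sample = io.StringIO(
    "age,gender,difficulty,blockNumber,trialNumber,rule,accuracy,reactionTime\n"
    "21,female,easy,1,1,shape,0,1.254\n"
    "21,female,easy,1,2,shape,1,0.981\n"
    "21,female,easy,2,1,color,1,1.102\n"
)

accuracy_by_block = defaultdict(list)
for row in csv.DictReader(sample):
    key = (row["difficulty"], int(row["blockNumber"]))
    accuracy_by_block[key].append(int(row["accuracy"]))

means = {k: sum(v) / len(v) for k, v in accuracy_by_block.items()}
# means[("easy", 1)] == 0.5; means[("easy", 2)] == 1.0
```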

The second file is an EDF file that contains the eye-tracking and pupillometry data. By default, it records eye position, blinks, saccades, and pupil size for the right eye. Additional eye-movement characteristics can be extracted by generating a sample report using the EyeLink Data Viewer software. This EDF file also includes the same behavioral data as the CSV file, allowing for convenient manipulation of both behavioral and pupillometry data within a single file. For each trial, messages are sent to the EDF file’s message column to differentiate between “fixation” (display of a central “+” to measure baseline pupil size), “experiment” (display of the card game), and “feedback” (display of “correct” or “incorrect” messages) phases of the task.

Python Code for Preprocessing

Two Python notebooks are available to preprocess both behavioral and pupillometry data, aiming to classify each trial as either “Exploration” or “Exploitation.”

For behavioral data, the code allows researchers to specify the folder path where their CSV files are stored. It then processes each file in the folder to:

  1. Extract participant IDs,

  2. Organize data by difficulty and block,

  3. Create a new State column to classify each trial as either “Exploration” or “Exploitation” based on predefined rules (detailed below),

  4. Output the results in one of two ways:

    • 4.1. Compiled data from all participants in a single Excel file for further analysis.

    • 4.2. Updated individual participant files, which can be stored separately in a new folder so as to preserve the original data.

Preprocessing pupillometry data involves two main steps: (1) preprocessing behavioral data files (CSV files) as described previously, and (2) integrating the State column from the behavioral files into the pupillometry Excel files. In the first step, behavioral data files are accessed from a specified folder. Each file is updated with the State column and saved in a new folder specified by the researcher. In the second step, the updated behavioral files and pupillometry files are matched based on subject numbers. The State column is then transferred from the updated behavioral file to the corresponding pupillometry file. The updated pupillometry files are then saved in a specified new folder to preserve the original data. These updated files can then be used for further pupillometry preprocessing.
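Step (2) above amounts to matching files by subject number and copying the State label across. The following is a rough sketch of that logic (not the notebook's actual code); it represents each file as a list of per-trial records keyed by subject ID and assumes trials appear in the same order in the behavioral and pupillometry files.

```python
def transfer_state(behavioral_files, pupil_files):
    """Copy trial-level State labels from updated behavioral records
    into matching pupillometry records, pairing files by subject ID and
    rows by trial order (a simplification of the notebook's merge)."""
    for subject, beh_rows in behavioral_files.items():
        if subject not in pupil_files:
            continue  # unmatched subject: leave pupillometry data untouched
        for beh_row, pupil_row in zip(beh_rows, pupil_files[subject]):
            pupil_row["State"] = beh_row["State"]
    return pupil_files

beh = {"001": [{"trial": 1, "State": "Exploration"},
               {"trial": 2, "State": "Exploitation"}]}
pupil = {"001": [{"trial": 1, "pupil": 512},
                 {"trial": 2, "pupil": 498}]}
transfer_state(beh, pupil)["001"][1]["State"]  # "Exploitation"
```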

Each notebook includes code that classifies trials according to specific rules. To begin, each trial is categorized as either correct (indicating that the participant may be exploiting a known rule) or incorrect (indicating that the participant is exploring to identify the current rule). For the exploration phase to be considered complete and for exploitation of the rule to begin, a minimum of three consecutive correct trials is required. This criterion helps ensure that participants have genuinely learned the rule, rather than achieving a correct answer by chance alone. If this condition is met, the exploration period is deemed to have ended, and these trials are considered exploitation. The first trial of each new block is treated as a special case. Because participants are unaware that a new rule is being introduced, this trial is labeled as “T1” (Trial 1) and is not classified as either exploration or exploitation. All subsequent trials are classified as either “Exploration” or “Exploitation” based on the above criteria.
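One reading of these classification rules can be sketched as follows. This is an illustrative reimplementation, not the notebooks' exact code: the first trial of a block is labeled "T1", later trials are "Exploration" until the first run of three consecutive correct responses begins, and that run plus all following trials are labeled "Exploitation".

```python
def classify_block(accuracies, criterion=3):
    """Classify one block's trials from a list of 0/1 accuracy values,
    in trial order. Trial 1 is "T1" (participants cannot know a new
    rule has started); trials are "Exploration" until `criterion`
    consecutive correct trials occur, after which the block is
    "Exploitation" from the start of that correct run onward."""
    labels = ["Exploration"] * len(accuracies)
    labels[0] = "T1"
    run, switch = 0, None
    for i, acc in enumerate(accuracies):
        run = run + 1 if acc == 1 else 0
        if run == criterion:
            switch = i - criterion + 1  # first trial of the correct run
            break
    if switch is not None:
        for i in range(max(switch, 1), len(accuracies)):
            labels[i] = "Exploitation"
    return labels

classify_block([0, 0, 1, 1, 1, 1, 1, 1, 1, 1])
# → ["T1", "Exploration"] + ["Exploitation"] * 8
```

Note that the handling of an incorrect response occurring after exploitation has begun is a design decision left to the researcher; this sketch keeps the "Exploitation" label through the end of the block.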

Quality control

The CCMT has been extensively tested during development, and successfully used in two separate experiments totaling 119 college student participants in a laboratory environment [10]. Each core component of the task (e.g., card generation, rule application, data recording) was individually tested to verify that it functions as expected in isolation. This helped catch issues in specific functions or modules, ensuring that all parts of the code work as intended. Tests were also conducted to ensure seamless interaction between modules, particularly in scenarios where behavioral and pupillometry data are recorded simultaneously. This involved verifying data synchronization across modules and ensuring compatibility with the eye-tracking device.

To help users quickly verify that the software is working as expected, sample output files are provided on GitHub. Researchers can run the task with a few trial blocks and compare their generated output to the example output files, which include both behavioral and pupillometry data. By doing this, users can ensure that the software is recording data correctly and matching the expected output format.

For the pupillometry task, please note that EyeLink eye-trackers record pupil size in arbitrary units. Researchers are advised to consult SR Research’s guidelines for instructions on converting these arbitrary units to millimeters, as the conversion process depends on the specific eye-tracking setup being used.

(2) Availability

Operating system

Windows, macOS.

Programming language

Python.

Additional system requirements

For the pupillometry version of the CCMT, hardware requirements include an SR Research EyeLink, although the code can be modified to support other eye-tracker models (such as Tobii). The behavioral version of the task does not require an eye-tracker and can be run on any supported operating system. The minimum system requirement for PsychoPy is a computer with a graphics card that supports OpenGL, ideally version 2.0 or higher. The CCMT task folder takes up about 400 KB on disk, while the PsychoPy software requires a few hundred megabytes.

Dependencies

PsychoPy version 2.3 or higher (http://psychopy.org)

Jupyter Notebook (https://jupyter.org), Python version 3.8 or higher (for optional data pre-processing)

List of contributors

Giovanna C. Del Sordo, Post-doctoral researcher, New Mexico State University, USA.

Fabio Tardivo, PhD candidate, New Mexico State University, USA.

Megan H. Papesh, associate professor, University of Massachusetts Lowell, USA.

Software location

Code repository

Language

English.

(3) Reuse potential

The CCMT is well-suited for experimental studies of cognitive flexibility and decision-making across fields such as psychology, neuroscience, behavioral economics, and cognitive science. The software is designed to allow easy modification or extension; for example, researchers could adjust the number of trials and difficulty levels to match specific experimental needs, or adjust the timing of task events. Additionally, the task could be adapted for compatibility with various physiological measurement tools, such as EEG or fMRI, beyond its current setup for EyeLink eye-trackers. Technical support is available via the author on GitHub, where researchers can report issues, suggest modifications, or request assistance.

Competing Interests

The authors have no competing interests to declare.

DOI: https://doi.org/10.5334/jors.549 | Journal eISSN: 2049-9647
Language: English
Submitted on: Dec 20, 2024
|
Accepted on: Dec 22, 2025
|
Published on: Feb 9, 2026
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2026 Del Sordo Giovanna C., Tardivo Fabio, Papesh Megan H., published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.