Motor impairments affect approximately 190 million people worldwide, with spinal cord injuries (SCIs) accounting for over 15 million of these cases, according to the World Health Organization (WHO) (World Health Organization, 2024). In Switzerland, an average of one person sustains an SCI each day, with more than half of these injuries occurring at the cervical level, resulting in tetraplegia. This condition may severely compromise both sensory and motor function in the upper and lower limbs as well as the torso, leading to significant limitations in independence and daily living (Spooren, Janssen-Potten, Kerckhofs, & Seelen, 2009). In many cases, paralysis is irreversible, causing partial or complete loss of bodily function (Readioff et al., 2022).
Individuals with tetraplegia often face significant challenges in performing activities of daily living (ADLs), including eating, working, and managing household tasks. Assistive technologies (ATs) offer potential to enhance quality of life (QoL) by fostering greater independence and social participation. Among these, robotic arm systems have emerged as a promising solution, particularly for restoring partial upper limb function. Even limited recovery of arm and hand use can significantly enhance autonomy, simplify daily routines, and reduce caregiver burden (Brose et al., 2010). Survey data underscore the importance of hand function among individuals with tetraplegia: 95% indicated that regaining hand mobility is a higher priority than the recovery of other functions, such as walking, sexual activity, or bladder control (McDowell, Moberg, & Graham Smith, 1979; Snoek, IJzerman, & Stoffers, 2000), highlighting the profound impact of upper limb function on overall QoL.
Despite the increasing availability of assistive robotic arm systems, many existing solutions fall short in terms of usability, particularly for individuals with severe motor impairments such as tetraplegia. A critical gap remains in the development of intuitive and accessible control mechanisms that align with the real-world needs, preferences, and physical capabilities of end users. Current human-machine interfaces (HMIs) often lack adaptability and fail to support effective interaction in everyday contexts, limiting the practical utility of these technologies. For instance, many assistive robotic arms possess more degrees of freedom than the dimensionality of available control inputs (e.g., sip and puff, residual body motions), making intuitive control difficult, especially for users with severe impairments (Jain et al., 2015). Similarly, a recent review of mechanical assistive upper-limb devices cites interference, restricted mobility, and usability constraints as persistent barriers to adoption in real-world ADLs (Gbetoho Atigossou et al., 2024).
To address these challenges, the present work forms part of the project “Development of a Robotic Arm with and for People with Tetraplegia: A User-Centered Research and Development Project.” This initiative aims to design and evaluate an assistive robotic arm system through a participatory, interdisciplinary process that actively involves end users, clinicians, and engineers throughout all development stages. By grounding the design in real-world needs and user feedback, the project seeks to create an intuitive, adaptable, and functionally meaningful assistive device that enhances autonomy and supports daily living activities for individuals with severe motor impairments.
The overarching aim was to develop a suitable user interface (UI) to command an existing prototype wheelchair-mounted robotic arm (WMRA). The solution needed to be tailored to the specific needs and limitations of users with severe motor impairments and enable individuals with tetraplegia to perform a meaningful range of ADLs in both home and clinical settings. The core research question was: How can a WMRA be commanded intuitively and efficiently to facilitate the execution of ADLs? Given the complex and highly individualized nature of motor impairments, a one-size-fits-all solution is often impractical. To address this, and to ensure that the resulting solution bridges the gap between technical feasibility, reliability, and real-world practicability, a user-centered co-development approach was employed in which user feedback was continuously collected and iteratively integrated. This paper highlights the role of interprofessional collaboration and continuous user involvement in guiding the development of ATs that are both technically viable and aligned with real-world user needs.
This study employed a user-centered, iterative design methodology grounded in the Double Diamond framework (see Figure 1) by combining qualitative and quantitative methods, including surveys, interviews, and observational studies.

Double Diamond Model
The research process followed the four phases of the Double Diamond: Discover, Define, Develop, and Deliver. In the Discover phase, user needs were explored through a systematic literature review and informed by findings from a previous study conducted by the project team (Hutmacher et al., 2025) that investigated the needs of individuals with tetraplegia (see Figure 2).

Needs of individuals with tetraplegia
This was complemented by a follow-up survey and ethnographic methods, including shadowing and interviews with individuals with tetraplegia as well as healthcare professionals. During the Define phase, insights from the collected data were synthesized to identify critical requirements for a robotic control system. The Develop phase focused on iterative prototyping of the command structure and graphical user interface (GUI), including early-stage testing with target users. Finally, the Deliver phase evaluated the final system through structured usability assessments and qualitative feedback.
Data collection was conducted in alignment with the iterative development of the command system.
Initially, user needs were defined in a follow-up survey involving 16 individuals with tetraplegia to determine the relevance of 18 predefined ADLs that a robotic arm could assist with (see Figure 4). Participants rated each task on a scale from 0 to 6, where 0 indicated “not important” and 6 indicated “very important”. This was complemented by qualitative methods, including shadowing sessions with users in both home and clinical environments, semi-structured interviews with occupational therapists and physiotherapists, and informal situational conversations during therapy sessions.
These methods enabled the identification of practical challenges and user preferences, which informed interface requirements, task prioritization, and functional expectations for the robotic arm.
An assessment matrix was developed by the multidisciplinary project team, comprising engineers, clinicians, and user representatives, to systematically prioritize the 18 ADLs that could benefit from robotic assistance. The development process involved defining and weighting three key criteria: technical feasibility (weighted at 50%), frequency of task occurrence in daily life (25%), and user-rated importance (25%). These weights were determined through discussions within the team to balance engineering constraints with user needs and practical relevance.
The matrix was then tested and refined through iterative reviews and pilot evaluations within the team, ensuring that prioritization reflected both technical possibilities and real-world user preferences. Figure 3 illustrates the structure of the matrix and summarizes the scoring process, providing a visual representation to help understand how tasks were ranked. This structured approach guided the selection of the most appropriate ADLs for implementation in the robotic system, ensuring a focus on tasks that offered the greatest impact and feasibility.
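To make the weighting concrete, the following minimal Python sketch reproduces the scoring scheme described above (feasibility 50%, frequency 25%, importance 25%); the task names and raw scores are illustrative placeholders rather than the project's actual data.

```python
# Sketch of the weighted ADL prioritization; weights follow the text,
# all raw scores (normalized to 0-1) are hypothetical examples.
WEIGHTS = {"feasibility": 0.50, "frequency": 0.25, "importance": 0.25}

adl_scores = {
    "drinking":        {"feasibility": 0.8, "frequency": 1.0, "importance": 0.9},
    "scratching head": {"feasibility": 0.9, "frequency": 0.7, "importance": 0.8},
    "opening doors":   {"feasibility": 0.4, "frequency": 0.8, "importance": 1.0},
}

def priority(scores: dict[str, float]) -> float:
    """Weighted sum over the three criteria."""
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

# Rank tasks by their weighted priority, highest first.
for task in sorted(adl_scores, key=lambda t: priority(adl_scores[t]), reverse=True):
    print(f"{task}: {priority(adl_scores[task]):.2f}")
```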

ADL (Activities of Daily Living) assessment matrix analysis revealing which tasks are most relevant for integration into the robotic arm system, based on user preferences and technical considerations

Average importance of ADL (Activities of Daily Living) for individuals with tetraplegia from 0 (not important) to 6 (very important)
A user-centered design approach, based on a well-established framework (Gulliksen et al., 2003), guided the development of a GUI tailored to individuals with tetraplegia. This approach emphasizes iterative refinement through ongoing user involvement to ensure that the interface adequately addresses real-world constraints and user needs. Requirements for the GUI were gathered through initial interviews, task analyses, and collaborative testing sessions with potential users. The design process focused on three main pillars: usability, adaptability to varying motor capabilities, and operational safety within assistive contexts.
The GUI was implemented using NiceGUI, a Python-based framework for building responsive web interfaces. The GUI is hosted on a local tablet, running a browser-based client connected to the robotic system over the local network. Communication between the GUI and the underlying control architecture is handled via gRPC-based message passing, with direct integration into the system's Finite State Machine (FSM) logic. This architecture enables the GUI to trigger complex robotic behaviors, receive state feedback, and update the interface in real time.
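To illustrate this setup, the snippet below sketches a tabbed NiceGUI page in the spirit of the described architecture; the send_command helper is a hypothetical stand-in for the project's gRPC stub, which is not shown here.

```python
# Minimal NiceGUI sketch of a tabbed control page; panel names follow the
# text, while send_command is a hypothetical placeholder for the gRPC call
# that would trigger an FSM transition in the real system.
from nicegui import ui

def send_command(name: str) -> None:
    # In the actual system this would invoke a generated gRPC stub method;
    # here we only log the intent.
    print(f"sending command: {name}")

with ui.tabs() as tabs:
    manual = ui.tab('Manual Control')
    tasks = ui.tab('Tasks')

with ui.tab_panels(tabs, value=manual):
    with ui.tab_panel(manual):
        ui.button('Up', on_click=lambda: send_command('move_up'))
        ui.button('Down', on_click=lambda: send_command('move_down'))
    with ui.tab_panel(tasks):
        ui.button('Drink', on_click=lambda: send_command('task_drink'))

ui.run()  # serves the browser-based client on the local network
```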
A hierarchical FSM was implemented to manage both autonomous and manual task control. An abstract base class, known as the Abstract State Machine, was created to encapsulate reusable FSM logic. States, transitions, triggers, and error-handling routines were defined programmatically, with states corresponding to robotic control modes (e.g., manual, task execution, fast navigation). Each GUI interaction triggered FSM transitions, ensuring synchronized behavior between the user interface and robotic system.
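A minimal sketch of such a reusable base class is shown below, assuming transitions keyed by (state, trigger) pairs and an overridable error hook; the concrete states and triggers are illustrative, not the project's full definitions.

```python
# Sketch of an abstract FSM base class encapsulating reusable logic:
# states, transitions, triggers, and a simple error-handling hook.
from abc import ABC

class AbstractStateMachine(ABC):
    def __init__(self, initial: str):
        self.state = initial
        self.transitions: dict[tuple[str, str], str] = {}

    def add_transition(self, source: str, trigger: str, target: str) -> None:
        self.transitions[(source, trigger)] = target

    def fire(self, trigger: str) -> None:
        target = self.transitions.get((self.state, trigger))
        if target is None:
            self.on_error(trigger)  # invalid trigger for the current state
        else:
            self.state = target

    def on_error(self, trigger: str) -> None:
        print(f"invalid trigger '{trigger}' in state '{self.state}'")

class ControlFSM(AbstractStateMachine):
    """Example subclass with states mirroring the control modes."""
    def __init__(self):
        super().__init__(initial="manual")
        self.add_transition("manual", "start_task", "task_execution")
        self.add_transition("task_execution", "finish", "manual")
        self.add_transition("manual", "fast_nav", "fast_navigation")
        self.add_transition("fast_navigation", "target_reached", "manual")

fsm = ControlFSM()
fsm.fire("start_task")
print(fsm.state)  # -> task_execution
```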
The GUI was designed with several modular panels to accommodate different control needs. The Manual Control Panel enabled direct manipulation of the robotic arm, featuring buttons sized and arranged according to user requirements. The Fast Navigation Panel provided a three-step semi-automatic control system that utilized spatial selection to guide the arm to approximate positions before transitioning to manual mode. The Tasks Panel allowed users to execute predefined semi-automated tasks by combining object detection with user confirmation. The Settings Panel offered users the option to customize the interface layout, button size, and control modes according to their preferences.
User testing was carried out in two phases. In the Prototype Evaluation phase, early GUI versions were shared as static screenshots and simple interactive demos, with feedback collected via questionnaires and verbal interviews. The Remote Live Testing phase involved deploying a functional online version of the GUI, which was tested by users with tetraplegia interacting with the system in real time through tablet interfaces. Sessions were recorded with consent and assessed for usability, task success, and subjective satisfaction. The participants consisted of four individuals with tetraplegia and two able-bodied individuals who served as baseline comparisons. Observations during testing focused on interface navigation, success rate of command initiation, error handling, and customization preferences.
To evaluate usability and user acceptance more thoroughly, structured feedback sessions were conducted with both individuals with tetraplegia and non-tetraplegic participants. These sessions aimed to assess the effectiveness of the GUI in conjunction with the newly integrated FSM command structure. Four participants with tetraplegia participated remotely via Zoom, where the system's goals and functionalities were explained prior to interaction. Among these participants, some had prior experience with earlier versions of the system, while one was new to the interface.
During the sessions, participants observed the interface and the robotic arm in real-time. The GUI was displayed on a tablet positioned next to the robotic arm, and its screen was shared through Zoom. Additionally, a secondary camera provided a first-person perspective from the robot's head-level mount on the wheelchair. This dual-video setup allowed participants to view the interface and the robot's actions simultaneously. After an initial walkthrough covering the primary tabs—settings, manual control, task execution, and fast navigation—participants verbally instructed the researcher to perform specific actions, observing the robot's responses accordingly. Two predefined tasks, “Drinking” and “Adjusting Glasses,” were demonstrated alongside basic manual movements. Field notes were taken throughout these sessions to capture participant reactions and qualitative feedback, with each session lasting approximately 45 minutes.
Additionally, three non-tetraplegic participants tested the system in person at the research office. These individuals sat in wheelchairs and directly interacted with the GUI on the tablet for about 20 minutes, enabling a comparison between remote and firsthand interaction modalities. In the firsthand modality, participants operated the system directly themselves while seated in the wheelchair with the physical robot attached to it. In contrast, participants in the remote sessions were at home without the robot, connected through a Zoom video call with an engineer, and saw the GUI together with a live video of the robot responding to the inputs made on the GUI.
Following the interactive sessions, all participants completed the System Usability Scale (SUS), a standardized questionnaire widely used for assessing the perceived usability of interactive systems (Brooke, 1995). A translated German version of the SUS was employed to ensure accessibility for all participants.
The SUS consists of 10 statements rated on a five-point Likert scale, offering a quick and reliable measure of usability. The resulting scores range from 0 to 100, where higher scores reflect greater user satisfaction and ease of use. This quantitative assessment complemented the qualitative insights gathered during the feedback sessions.
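SUS scoring follows a fixed rule: odd-numbered items contribute (score - 1), even-numbered items contribute (5 - score), and the sum is multiplied by 2.5 to yield the 0-100 range. A minimal implementation:

```python
# Standard SUS scoring: responses is a list of ten 1-5 Likert answers.
def sus_score(responses: list[int]) -> float:
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(
        r - 1 if i % 2 == 0 else 5 - r  # 0-based index: even = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5  # scale to 0-100

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 5, 2]))  # hypothetical answers -> 92.5
```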
In the follow-up study, the tasks rated the highest by participants were opening and closing doors, drinking, picking up objects, turning devices on and off, scratching the head, and combing hair. These results highlight that users prioritize independence in basic self-care activities and interaction with their environment. Additionally, qualitative feedback revealed some skepticism about robotic handling of hot liquids due to safety concerns, along with a desire for tools capable of supporting fine motor control tasks, such as cutting plants or handling small objects.
The assessment matrix (see Figure 3) was used to rank the 18 ADLs based on three criteria: technical feasibility, task frequency, and user-rated importance. The matrix identified six top-priority tasks for implementation: drinking, scratching the head, adjusting glasses, turning devices on or off, bringing a cup of coffee, and picking up objects. These tasks were considered to offer the most practical value by balancing technical feasibility with the functionality most desired by users.
Observations with a 57-year-old man diagnosed with Spinal Muscular Atrophy provided valuable insights into the real-world use of the robotic arm. The Jaco robotic arm, made by Kinova Robotics (Canada), is an assistive, lightweight robotic arm that mounts on wheelchairs to help people with limited mobility perform daily tasks independently. Although he preferred to use his own hand supported by a passive arm-lift device (MiAssiSt, 2024), he relied on the Jaco arm to maintain independence when alone. He utilized 3D-printed custom tool holders to perform tasks such as scratching his head, adjusting his glasses, and pushing buttons, demonstrating a high need for task customization. However, several challenges emerged during his use of the system: the complexity of joystick controls and mode switching, the absence of semi-automated functions, limited weight handling capacity, and the inability to perform fine motor tasks.
Additionally, a separate visit to a rehabilitation center in Switzerland provided valuable insights into the needs of a woman recovering from Miller Fisher Syndrome. Although she had regained some mobility, she continued to experience difficulties with activities such as eating and transferring between surfaces. Healthcare professionals involved in her care confirmed that robotic assistance could address these specific limitations and enhance patient independence, particularly in settings where caregivers are not always present.
The feedback and insights gained from these at-home and clinical visits were subsequently integrated into the ongoing development and design of both the GUI and the overall robotic system.
The GUI was designed to be tailorable to users' needs by taking into account the differing requirements of users with varying pathologies (see Figure 5). These included both neurological and traumatic conditions that significantly impact motor control and functional independence.

Different HMI (Human-Machine Interface) designs depending on pathology. Abbreviations: Spinal Cord Injury (SCI), Spinal Muscular Atrophy (SMA), Amyotrophic Lateral Sclerosis (ALS), Guillain-Barré Syndrome (GBS), Brain Computer Interface (BCI)
For instance, SCI often results from trauma and can lead to partial or complete paralysis, particularly when the cervical spine is affected, causing tetraplegia. Spinal Muscular Atrophy (SMA) is a genetic disorder characterized by progressive muscle wasting resulting from the degeneration of motor neurons. Amyotrophic Lateral Sclerosis (ALS) is a neurodegenerative disease that gradually impairs voluntary muscle control, typically leading to complete paralysis while leaving cognitive functions largely intact. Guillain-Barré Syndrome (GBS) is an acute autoimmune condition in which the peripheral nervous system is attacked, resulting in sudden and sometimes severe muscle weakness. Finally, traumatic injuries, even those not directly involving the spinal cord, can also result in significant loss of motor function and require long-term support through ATs.
To guide the design process and ensure the GUI would be responsive to these varying user profiles, a structured matrix was developed (see Figure 5). This matrix maps out the specific functional needs and interaction requirements of individuals affected by each condition, providing a foundation for tailoring interface features accordingly.
Users with different levels of motor function were able to interact with the GUI effectively using their preferred input methods, including chin joysticks, touchscreen, and (in future integration) speech recognition. The adaptable layout—particularly in the Manual Control Panel—proved critical to accommodating diverse needs. Three versions of the manual control were tested: the Standard Layout, which was usable by most participants (see Figure 6); the Large Button Layout, preferred by users with limited touch precision (see Figure 7); and the Compact Layout with closer buttons, favored by chin joystick users (see Figure 8). Each layout option could be accessed via the Settings tab, allowing participants to switch layouts independently according to their preferences.

Standard GUI (Graphical User Interface) Layout

Large Button GUI (Graphical User Interface) Layout

Compact GUI (Graphical User Interface) Layout
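As a sketch of how such layout switching can be realized in NiceGUI, the snippet below renders the manual control buttons from a layout configuration selected in the settings; the layout names follow the text, while the sizing values are illustrative assumptions.

```python
# Sketch of layout-dependent rendering of the Manual Control Panel;
# the concrete sizes and spacing are hypothetical values.
from nicegui import ui

LAYOUTS = {
    'standard': {'size': '4rem', 'gap': '0.5rem'},
    'large':    {'size': '7rem', 'gap': '1rem'},    # limited touch precision
    'compact':  {'size': '3rem', 'gap': '0.1rem'},  # chin joystick users
}

def build_manual_panel(layout: str) -> None:
    cfg = LAYOUTS[layout]
    with ui.row().style(f"gap: {cfg['gap']}"):
        for label in ('Up', 'Down', 'Left', 'Right'):
            ui.button(label).style(f"width: {cfg['size']}; height: {cfg['size']}")

build_manual_panel('large')  # re-built when the user changes the setting
ui.run()
```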
User testing demonstrated that the semi-automated task execution features significantly enhanced usability and user satisfaction. Participants with tetraplegia responded particularly positively to the “Drink” and “Adjust Glasses” routines, both of which were executed through structured finite state machine logic. These routines allowed users to trigger complex, multi-step actions with minimal input, while still maintaining control through pause and cancel options. Feedback highlighted that the semi-automated approach reduced cognitive and physical effort, increased reliability, and improved task confidence. Users described the routines as intuitive and empowering, with several noting that they would like similar functionality in commercially available systems such as the Jaco arm; that is, they would welcome the implementation of such semi-automated tasks in commercial systems and would see a clear added benefit in using them in daily life. The “Drink” task was especially well received, as it enabled a typically difficult daily activity to be performed smoothly and comfortably. Participants also expressed interest in adding more task templates and environment-specific configurations, underscoring the value and scalability of the semi-automated framework. Overall, the results validate the decision to integrate predefined task sequences, supporting further development of personalized semi-automation for assistive robotics.
The Fast Navigation feature enabled faster execution of common reaching tasks compared to manual control alone. A specific position in space, together with its gripper orientation, can be selected in three steps, allowing the target to be reached in a faster and more intuitive manner compared to manual movements alone. An illustration of this three-step GUI can be found in Figure 9. The numbers and letters on the horizontal and vertical axes serve to define each position in the matrix. This is particularly helpful if a user cannot use the touchscreen but would like to give input via speech.

Fast navigation three-step GUI. Left: First step, choose a position that the robotic arm should reach. Center: Second step, choose the lateral position to be reached. Right: Third step, choose the gripper orientation
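Because every target is addressed by a letter-number cell, the grid lends itself equally to touch and to spoken input. The following sketch maps a cell label such as “B3” onto workspace coordinates, assuming an illustrative grid resolution and workspace bounds:

```python
# Hypothetical mapping from a fast-navigation grid cell to a 2D target;
# grid size and workspace bounds are illustrative assumptions.
def cell_to_target(cell: str,
                   x_range: tuple[float, float] = (0.2, 0.8),
                   y_range: tuple[float, float] = (-0.4, 0.4),
                   cols: str = 'ABCDE', rows: str = '12345') -> tuple[float, float]:
    col, row = cell[0].upper(), cell[1]
    cx = cols.index(col) / (len(cols) - 1)  # normalized column position
    cy = rows.index(row) / (len(rows) - 1)  # normalized row position
    x = x_range[0] + cx * (x_range[1] - x_range[0])
    y = y_range[0] + cy * (y_range[1] - y_range[0])
    return round(x, 3), round(y, 3)

print(cell_to_target('B3'))  # -> (0.35, 0.0)
```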
The FSM system managed transitions reliably across all states. No invalid or stuck states were recorded during testing. The inclusion of fallback states and state-locking mechanisms proved effective in avoiding race conditions during concurrent input events. A small number of user-reported delays occurred during fast state transitions, particularly when switching from autonomous to manual modes; however, these did not significantly impact usability.
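The state-locking idea can be illustrated with a plain threading.Lock that serializes transitions so concurrent input events cannot interleave; this is a simplified sketch, not the project's actual implementation.

```python
# Sketch of lock-guarded FSM transitions to avoid race conditions when
# GUI taps and state feedback arrive concurrently.
import threading

class LockedFSM:
    def __init__(self, initial: str = 'manual'):
        self.state = initial
        self._lock = threading.Lock()

    def fire(self, trigger: str, transitions: dict) -> bool:
        with self._lock:  # only one transition may run at a time
            target = transitions.get((self.state, trigger))
            if target is None:
                return False  # invalid trigger: remain in the current state
            self.state = target
            return True

fsm = LockedFSM()
ok = fsm.fire('start_task', {('manual', 'start_task'): 'task_execution'})
print(ok, fsm.state)  # -> True task_execution
```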
An example of such an FSM for a defined task is illustrated in Figure 10.

Exemplary Finite State Machine Task Execution
Users reported high satisfaction with the GUI's flexibility, with preferences varying significantly based on individual motor abilities and input methods. Two participants prioritized larger buttons, citing reduced dexterity and the need for easier touch targeting. One participant preferred the joystick-style layout, as it was more compatible with their residual head-controlled input system. All participants appreciated the ability to tailor control speed and sensitivity, emphasizing that this adaptability was key to achieving comfortable and effective interaction.
Participants highlighted the GUI's clarity, its task-focused structure, and essential safety features such as the prominently placed emergency stop button. These aspects collectively contributed to the system's usability and user confidence during operation.
Feedback from the remote sessions highlighted several key strengths of the system, as well as areas for improvement. Participants responded very positively to the semi-automated task execution features. One participant stated, “Super, super! This system opens up completely new dimensions! I would love to use it.” Another emphasized the usefulness of structured task flows, saying, “I like the idea with the tasks, the basic idea with semi-saved steps. I would want that for Jaco too.” The interface navigation was also well received. The tabbed layout of the GUI was described as logical and user-friendly, with one user noting, “I think navigation is very good, it's certainly a benefit and very logical.” The clear separation between control modes contributed to the ease of use.
In terms of manual control, several participants offered constructive suggestions for improvement. Feedback included remarks such as, “In the manual mode, the buttons should be closer together,” and “The design and orientation of the manual buttons could be optimized.” This input led to the development of multiple GUI layout options to better accommodate user preferences. The flexibility of the settings menu was another positively noted feature. One participant stated, “The settings menu offers a lot of options for making settings, which I think is very good,” validating the system's adaptability-focused design.
Overall, the feedback confirmed that the system was intuitive and empowering, particularly in its semi-automated features and fast navigation. Specific interface elements were flagged for improvement and guided subsequent design iterations.
The SUS provided valuable quantitative validation of the system's effectiveness. All seven participants—four with tetraplegia and three without—completed the SUS after their interaction with the system. The average SUS score among participants with tetraplegia was 83.3, indicating a high level of usability and satisfaction, strongly supporting the system's alignment with their needs and expectations. In contrast, the average score among non-tetraplegic participants was 60, with individual results of 50, 60, and 70. These lower scores likely reflect differing expectations rooted in conventional interaction experiences and highlight aspects of the interface, such as button layout and manual control, that are less critical for the target group but relevant for broader usability refinement.
The SUS outcomes mirrored the qualitative findings, with higher scores among users with tetraplegia reinforcing the value of the participatory design approach and the system's user-centered features, including customizable controls and semi-automated tasks. At the same time, the relatively lower scores from non-target users underscore the importance of tailoring the interface to the needs of the primary user group while striving for broader inclusivity through universal design principles wherever feasible.
This study investigated the development of a user-centered command system that integrates intuitive navigation, semi-automated task execution, and a flexible GUI tailored to users with varying levels of motor impairment. Through a mixed-methods approach—including surveys, interviews, and structured observational studies—user needs were systematically translated into a technically feasible and functionally meaningful design. The iterative nature of the process ensured that feedback loops with end users remained central throughout development, aligning with best practices in participatory design (Robertson & Simonsen, 2013).
The system demonstrates notable innovations in the field of AT, particularly for individuals with tetraplegia. The integration of a finite state machine (FSM)-based control structure with a modular, customizable GUI enabled the robotic arm to support a wide range of ADLs with minimal cognitive and physical load. In contrast to many commercially available systems that rely heavily on manual input or fixed automation (e.g., the Jaco arm by Kinova), the proposed system combines semi-automation with user agency, offering a hybrid interaction model that users perceive as more intuitive and efficient. This aligns with prior findings emphasizing the importance of balancing autonomy and control in AT design (Frennert & Östlund, 2014).
One key outcome of this research is the confirmation that user-centered design directly contributes to improved usability, satisfaction, and acceptance of assistive systems. As shown in other domains of rehabilitation engineering, technologies developed with continuous user involvement are more likely to be adopted for long-term use (Norman & Draper, 1986). Participants in this study highlighted the value of being able to adjust button sizes, switch between different input modalities (e.g., chin joystick, touchscreen, and switch-based control), and reorganize the interface layout—features made possible only by embedding adaptability as a core design principle.
Moreover, the GUI's structure was intentionally minimalist, with a clear visual hierarchy and intuitive iconography, thereby reducing both visual complexity and learning effort. This design approach is in line with recommendations from accessible interface design literature (Shneiderman, 2017), which stress clarity and consistency as essential factors for effective interaction in ATs.
Another strength of this project lies in its interprofessional development model, which involved close collaboration between engineers, healthcare professionals, and individuals with disabilities. This multidisciplinary approach ensured that technical solutions were grounded in clinical reality and practical relevance (Alvarez, Cook, & Polgar, 2022). Health professionals contributed insight into patient workflows, motor function variability, and safety considerations, while engineers translated these constraints into modular design solutions. Such collaboration not only improves the functional outcome of AT systems but also fosters professional alignment and mutual understanding between fields—a crucial factor for successful deployment in clinical settings. This close collaboration across disciplines deepened the team's shared understanding of the challenges and shaped how they were approached throughout the project.
From the user's perspective, the system was perceived as both empowering and accessible. The semi-automated control modes significantly reduced task time and physical effort, while maintaining the user's sense of control and intentionality. This is particularly important for long-term engagement and acceptance, as prior studies have shown that users tend to abandon assistive devices that either feel too restrictive or are overly complex to operate (Phillips & Zhao, 1993).
Additionally, the project contributes to broader discussions in rehabilitation technology on the integration of real-world user scenarios, such as home and clinical environments. Unlike lab-based evaluations, this study considered contextual variables like caregiver presence, task routines, and customization preferences, aligning with the concept of “ecological validity” in assistive technology research (Scherer & Glueckauf, 2005).
Overall, the findings support the notion that AT systems designed through participatory, interdisciplinary processes not only perform better technically but are also more likely to be adopted, personalized, and trusted by their users. This highlights the importance of incorporating user-centered, real-world development practices into future research and commercialization efforts in assistive robotics.
Despite promising outcomes, several limitations must be acknowledged before the proposed system can be widely deployed. A key challenge lies in the heterogeneity of pathologies among individuals with tetraplegia. This diversity complicates the development of a universally optimal control system. While customization partially mitigates this issue, future solutions will benefit from modular hardware and adaptive software that can learn and adjust to individual preferences over time.
Sample size and participant diversity also present constraints. Although 16 individuals participated in a follow-up survey and some contributed to interviews and observations, the sample remains relatively small. This limits the statistical power and generalizability of findings. Expanding future studies to include more diverse user profiles—varying in degrees of motor impairment, daily routines, and living environments—would strengthen the robustness and applicability of the results.
Furthermore, the testing environment was controlled and short-term and thus did not account for challenges arising during long-term in-home use. Factors such as user fatigue, sustained system reliability, and maintenance demands were beyond the scope of this study. These elements are critical for assessing real-world viability and long-term user satisfaction.
The robotic arm hardware used posed additional constraints. It was a high-cost, technically limited device, restricting accessibility and task flexibility. Specific limitations included inadequate grip precision, limited payload capacity, and the absence of motor brakes. These factors impacted system design and limited the range of feasible tasks—particularly those requiring fine motor control or safe handling of delicate or hazardous items (e.g., hot drinks, medication).
Lastly, while a user-centered design approach was central to development, technical and safety trade-offs were sometimes necessary. For example, certain tasks were deprioritized due to feasibility or risk concerns.
Addressing these limitations in future work will require extended field trials, collaboration with hardware developers to reduce costs and improve functionality, and ongoing efforts to balance user autonomy with safety and technical feasibility.
This project forms part of a broader initiative titled “Development of a Robotic Arm with and for People with Tetraplegia: A User-Centered Research and Development Project”.
The current system lays a strong foundation for future developments aimed at enhancing autonomy and QoL for individuals with tetraplegia. One key area for future research is the expansion of semi-automated tasks, allowing users to perform a broader range of everyday activities with reduced physical effort. Additionally, manual control could be improved through features such as dynamic acceleration adjustment. For example, enabling users to gradually increase movement speed by holding down a control button would enhance efficiency and adaptability during task execution.
A critical next step involves implementing long-term real-world trials in users' home environments, scheduled to begin in early 2025. These trials will provide essential insights into daily usability, hardware durability, and overall user satisfaction. The observations gained from these long-term deployments are expected to inform refinements in system setup, physical accessibility, interface responsiveness, and integration with other assistive devices. This will help ensure that the robotic arm system remains reliable, effective, and convenient under practical conditions.
The GUI, which currently supports high adaptability and individualized configuration, also presents several promising directions for further enhancement. One planned feature is a built-in configuration tool that allows users or caregivers to adjust layout and task hierarchies directly within the interface, without the need for developer intervention. This can be achieved through a drag-and-drop editor or a templating system embedded in the GUI, allowing users to have full control over their interface. Another area of focus is the implementation of context-aware GUI modes, where the interface dynamically adapts based on the current task phase, user behavior, or environmental cues. Such a system would streamline interaction, reduce complexity, and better align with users' needs in different scenarios.
The ability to save environment-specific positions for the robotic arm also holds significant potential. Users could store and recall configurations optimized for specific locations, such as a kitchen, workspace, or bathroom, allowing for faster and more intuitive execution of repeated tasks. When combined with a responsive interface that displays relevant controls or shortcuts based on location or task type, this feature would improve workflow efficiency and overall ease of use.
In the long term, the integration of advanced technologies, such as machine learning and artificial intelligence, offers exciting possibilities. These technologies could allow the robotic arm to learn from user behavior, anticipate task sequences, and eventually execute routine tasks autonomously. The GUI could evolve in parallel, incorporating intelligent features such as predictive button suggestions, task auto-completion prompts, or even behavior-based layout adjustments. Such developments would significantly enhance user experience while reducing cognitive load.
Future versions of the system may also explore the integration of multimodal input methods, including eye-tracking, voice control, or brain-computer interfaces (BCIs). These additions would further reduce the need for physical input, making the system more accessible to individuals with minimal or no residual motor function. An interface capable of integrating and smoothly transitioning between these modalities—based on user preference or situational demands—would represent a significant leap forward in the design of AT.
Beyond technical development, this project has broader implications for individuals with tetraplegia, engineers, and healthcare professionals. For individuals living with severe motor impairments, the system offers a pathway to increased self-determination in daily life. The ability to independently perform meaningful tasks, such as drinking, turning on devices, or scratching one's head, can restore a sense of agency and contribute to emotional well-being, social participation, and overall QoL. These outcomes align with findings in the assistive technology field, which highlight user-centered design as a driver of higher adoption rates and long-term satisfaction (Scherer & Federici, 2015).
For engineers and developers, the project underscores the importance of participatory, iterative development that is grounded in real-world user needs. The integration of finite state machines, adaptable GUI architecture, and semi-automated control strategies offers a replicable model for future assistive systems. It demonstrates how balancing technical feasibility with human-centered design leads to solutions that are both functional and personally meaningful.
Healthcare professionals, including occupational therapists, rehabilitation specialists, and clinical support staff, can also benefit from this system. It presents a tool that can extend the reach of therapy beyond clinical settings, offering users opportunities for independent practice and supporting daily functions at home. The collaborative approach taken in this project underscores the importance of interdisciplinary work, where clinical insights and engineering expertise converge to address complex accessibility challenges. By fostering this type of interprofessional exchange, the system contributes not only to better technology but also to more holistic, patient-centered care models.
In sum, this project represents a meaningful step forward in the development of assistive robotics. It demonstrates that by centering on the needs and experiences of users, while actively engaging engineers and healthcare professionals, it is possible to create systems that go beyond functionality to support empowerment, inclusion, and long-term improvements in QoL.
In conclusion, this work demonstrates the feasibility and effectiveness of a user-centered, semi-automated robotic arm system for individuals with tetraplegia. By combining technical innovation with participatory design, the system offers a highly usable and customizable platform with strong potential for real-world application. Addressing remaining challenges related to cost, hardware limitations, and scalability will be essential to transitioning from a research prototype to a deployable assistive solution. The future development of the GUI will be central to this transition, serving as the primary interface through which users interact with increasingly capable and intelligent assistive systems.
