1 Introduction

Disability is highly diverse and can manifest in different conditions of the body or of the mind. Most of these conditions make it difficult for disabled persons to engage in activities and interact with the environment, sometimes even impacting self-esteem and giving rise to a feeling of lack of independence [18]. In this context, assistive robotics aims to support people's well-being by carrying out tasks that would otherwise be performed by a caregiver or, even worse, be precluded. In particular, Socially Assistive Robots (SAR) reach this goal by enabling social interaction to achieve measurable progress in convalescence, rehabilitation, and learning, ultimately helping to maintain a positive social life [7]. SAR can change how the disabled person is perceived by others and how s/he feels in a social context, and provide structure for interactions, overall enhancing the social behavior of the subject. The assistance of people with physical impairments [1] or cognitive disorders [15] is just a part of SAR's potential applications. Social robots have been effectively deployed in the rehabilitation context [12] to promote physical and mental well-being [13, 23], and with children in a teach-through-play approach [25].

Access to regular sports competitions is often precluded for disabled people. Chess, recognized as a sport by the International Olympic Committee in 1999, is a rare exception: anyone can compete on equal terms, provided they train for the same amount of time and at the same level. Chess players with a disability are admitted to regular chess competitions, in addition to specifically dedicated tournaments, and can compete against able-bodied players thanks to ad-hoc solutions [9]. As an example, a blind person can play unassisted using a personal chessboard with tactile dots on top of the pieces, saying each move out loud to the opponent. Motion-impaired players, however, may not be able to move the pieces themselves. In this case, they have to rely on an assistant to play their moves on the official chessboard, an aspect that could negatively affect their sense of independence and self-esteem.

In this context, this project aims to develop a robotic helper to support disabled chess players in competitive games. The robot would act as an alternative to the human assistant currently involved, increasing the assisted person's feeling of independence.

2 Related Works

The concept of a robot playing chess has been around for years. Chess programs able to calculate the best moves have existed since the 1950s, but it was only in 1982 that the Novag Robot Adversary used a robotic arm to pick up and move the pieces automatically [20]. Chess is often used in board game research due to its high state-space complexity and complex rule set, which make it a challenging yet achievable problem to test the manipulation, Human-Robot Interaction (HRI), and perception capabilities of robotic systems. Most current research efforts aim to create a robotic replica of a chess player [3, 6, 14, 24]. On the other hand, the idea of a robotic assistant for a disabled player has seldom been explored [19]. Indeed, many state-of-the-art robots are programmed for demonstrative purposes only (e.g., ChessKA, KUKA Monstr [4]). The goal of these devices is to play the moves coming from a chess engine in public exhibitions, e.g., to prove the ability of artificial intelligence algorithms to beat human grandmasters in dedicated events [4]. These applications are not meant to be used in real tournaments, and thus simplify the game-tracking problem through additional sensors on the chessboard itself [14]. Conversely, other works employ vision to recognize the different pieces on the chessboard through machine learning algorithms. As an example, Del Toro and colleagues [6] designed a robotic arm that plays chess against a human opponent, tracking the pieces on the board with a convolutional neural network. Similarly, Matuszek and co-workers endowed a custom 6-DoF arm named Gambit [17] with an RGB-D in-hand camera. Thanks to the visual information, Gambit can track the occupied squares, classify pieces via a support vector machine, and detect the presence of hands in the field during move execution [17]. Piece movement recognition that does not rely on machine learning has been implemented by Chen and Wang [3]: the humanoid Rethink Robotics Baxter robot plays chess against humans, exploiting computer vision to detect chess movements and an open-source chess engine to compute the next move.

In the framework of assistive robotics, Omarsdottir and colleagues [19] created Chessmate, a robot intended as a telepresence tool for physically disabled players. This application relies on a setup very far from the standard competition chess set, using a custom chessboard to move the pieces magnetically. Moreover, Chessmate receives as input the moves of both players, lacking a system to recognize the opponent's moves.

Another way to foster inclusion through the game of chess for people with motor impairments is represented by brain-computer interfaces [5, 11, 16]. These works share the spirit of the present one, and the effort made to advance research in brain-computer interfaces is commendable. However, they suffer from the limitations typical of brain-computer interfaces, such as long setup times, the need for calibration procedures, and mental fatigue.

3 Contribution

The contribution of this paper is to integrate and adapt existing technologies to develop an application promoting inclusion through sports competition. Ultimately, the use of the robot in official competitions would allow the user to become independent from a human assistant, positively impacting his/her self-sufficiency (both actual and perceived). This insight is supported by the outcome of a survey administered to dystrophic patients to identify the requirements for an upper limb assistive exoskeleton [10]: among other activities of daily living, playing chess independently, and not only through a PC screen, emerged as one of the potential target tasks. In this view, the proposed robotic assistant for competitive chess players with a motion disability would act as a full-fledged SAR.

This project is intended to find application in a real-world context, since, to date, no similar assistive device is approved by the regulations of the chess world.

In this view, the project has been carried out in the following phases of work: (1) the key requirements for such an application are identified; (2) a prototype solution fulfilling the requirements is implemented and tested; (3) accessibility for disabled people and compliance with chess rules are taken into account at every step of the project. The described approach introduces the following novel features with respect to current solutions:

  1. The developed solution should not rely on modifications of the chessboard or pieces that mark them as “special”, but should comply with the standard chessboard and pieces;

  2. The developed solution should not enforce chess rules, i.e., it should neither prevent nor correct illegal moves, since only the player should be in charge of the game;

  3. The developed solution should be under the constant control of the user, without the help of an external person and without using vocal commands, so as not to disturb the game.

4 Functional Requirements

The present work aims at building an effective framework to use a robot as a SAR for physically disabled chess players in official chess competitions. As far as we know, to date there are no robots used in the competitive chess world as assistants for disabled people, and thus there is a lack of rules on this topic. Nevertheless, the designed robotic assistant has to adhere to the official rules of the International Chess Federation (FIDE), substituting the human assistant that is nowadays required for disabled chess players to take part in competitions [9]. The minimal hardware requirements for the robot to be employed in the proposed application are (1) a mechanical arm spanning the official chessboard workspace, and (2) an RGB-D camera. Coming to functional requirements, the robot should:

  (1) Play the moves selected by the user, interacting with the standard equipment used in competitive tournaments. Therefore, the robot has to recognize and handle any type of move (i.e., captures, en passant, castling, and promotions), place the captured pieces in a dedicated box/area, and click the clock at the end of the move;

  (2) Keep track of the game situation;

  (3) Provide an intuitive graphical interface, accessible with a device designed for motion-impaired people;

  (4) Be “blind to the rules of chess”, i.e., not check the validity of the received instructions. Even illegal moves must be played, if requested by the user, in order to recreate a real game situation in which such mistakes are penalized;

  (5) Introduce a minimal time overhead compared with a human assistant, to facilitate user acceptance.

Fig. 1

Complete experimental setup. (1) TIAGo robot, mounting a parallel gripper. (2) The chessboard with the chess pieces placed in their initial configuration. (3) The box to host captured pieces, placed on the left side of the chessboard and with the ArUco marker number 300 placed on it. (4) The chess clock, positioned to the right of the chessboard and with (5) the ArUco marker number 100 placed on TIAGo’s side button. (6) The Ottobock joystick. (7) Joystick and push-button boards. (8) A computer for user interface visualization

5 Materials

The robot employed as an assistant in the present application is a PAL Robotics TIAGo robot (Fig. 1, 1). It is composed of a mobile base, a lifting torso, a 7-degrees-of-freedom arm, and a parallel gripper as end-effector. The two parallel plates of the gripper can move independently, reaching a total span of 8 cm. The head is equipped with motors for pan and tilt motions and hosts an RGB-D camera (Orbbec Astra S). Thanks to its wide range of sensors and peripherals, the TIAGo robot has been employed in several applications in the context of assistive robotics [2, 21, 22].

The application has been tested on an official competition wooden chessboard (dimensions: 55 cm \(\times \) 55 cm, 1.5 cm high) and piece set (Fig. 1, 2). To replicate the official setup in a controlled environment, the complete experimental setup includes a wooden box (Fig. 1, 3) to collect the captured pieces and a digital chess clock (Fig. 1, 4). The robot can be controlled through an Ottobock calibratable Mini joystick (Fig. 1, 6), a highly sensitive device usually employed to drive electric wheelchairs, connected to an electronic board that is linked to the control computer (Fig. 1, 8) via a standard USB cable. A standard mouse or touchpad can alternatively be used to navigate the user interface. Additionally, a push button connected to an Arduino Nano Every board (Fig. 1, 7) is mounted over the chess clock, and two ArUco markers (binary fiducial markers used in computer vision for pose estimation), printed on cardboard, are placed over the clock and over the box (Fig. 1, 5 and 3, respectively).

Fig. 2

Graph structure of the implemented application, divided into its functional units

6 Methods

The developed application is constructed by connecting pre-existing ROS nodes deployed on the TIAGo robot with self-developed nodes written in Python, which manage the different aspects of the application, namely vision, motion, and user interface. The graph in Fig. 2 schematically shows the final node network.

6.1 Vision

The vision system includes RGB image processing of the chessboard, ArUco marker detection, and point cloud processing.

The image processing step is performed to localize the chessboard, the clock, and the box in 3D space. At the beginning of the setup preparation, TIAGo's head is oriented downwards to frame the objects on the table. At this point, the RGB images acquired by the camera are processed to localize closed contours, and the largest of the identified contours is taken as the chessboard contour. Then, the Hough line transform is exploited to extract the square centers. Clock and box localization, on the other hand, relies on the identification of the ArUco markers.
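
As a rough illustration of this step, the sketch below finds the largest closed contour and runs the Hough line transform inside it. The OpenCV calls are standard (OpenCV 4 return signatures), but the Canny/Hough thresholds, the function name, and the omitted grid-intersection step are assumptions, not the implementation deployed on TIAGo.

```python
import cv2
import numpy as np

def locate_chessboard(rgb_image):
    """Return the chessboard contour and the Hough lines detected inside it."""
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    board = max(contours, key=cv2.contourArea)  # largest closed contour = chessboard

    # Restrict the line search to the board region.
    mask = np.zeros_like(gray)
    cv2.drawContours(mask, [board], -1, 255, thickness=cv2.FILLED)
    lines = cv2.HoughLines(cv2.bitwise_and(edges, mask), 1, np.pi / 180, 120)

    # Intersecting the (roughly) horizontal and vertical lines yields the 9x9
    # corner grid, from which the 8x8 square centers are averaged (omitted here).
    return board, lines
```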

To keep track of the pieces' positions on the chessboard, Algorithm 1 is implemented. The tracking algorithm assumes that the game starts from the known, conventional configuration. The user's moves are known, as they are given as input to the robot. Therefore, a differential approach is adopted to recognize the opponent's move by looking at the changes in the chessboard configuration compared to the previous turn.

In detail, two RGB images are acquired before and after the opponent's turn (Fig. 3a and b, respectively). A homographic transform is applied to both images to obtain a bird's-eye view. The Structural Similarity Index Measure (SSIM) [26] is used to compare the two images (Fig. 3c), and the resulting similarity map is binarized to determine whether each pixel has changed with respect to the previous state (Fig. 3d). When this procedure returns exactly two cells, their status at the previous turn is checked: the cell formerly occupied by an opponent piece is identified as the start square (\(move_i\)), and the other one (either white or empty) as the destination square (\(move_f\)). In some cases, due to the perspective view of the camera, more than two squares are identified by the SSIM (Fig. 3f). To solve this ambiguity, the point clouds of the candidate cells are reconstructed to robustly assess whether they are free or occupied and, if so, by which army (Fig. 3g and h). In Algorithm 1, at line 22, the logic to determine the occupancy and color of a cell from its point cloud is omitted, as it is a trivial set of if/else statements based on point count and RGB values. It is worth underlining that an entirely point cloud-based solution has been purposely avoided, as it would significantly increase the computational cost of the algorithm, thus negatively affecting the waiting time. In the unlikely case of ambiguity after this step, the user is presented with a random candidate move.
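
A minimal sketch of this differential comparison, together with the point-cloud occupancy check it falls back on, is given below. The thresholds, the per-square pixel masks, and the point-cloud format are illustrative assumptions rather than the values used in Algorithm 1.

```python
import cv2
from skimage.metrics import structural_similarity

def changed_squares(before_bgr, after_bgr, square_masks, ssim_thr=0.6, ratio_thr=0.3):
    """Return the squares whose appearance changed between two bird's-eye views."""
    before = cv2.cvtColor(before_bgr, cv2.COLOR_BGR2GRAY)
    after = cv2.cvtColor(after_bgr, cv2.COLOR_BGR2GRAY)
    _, ssim_map = structural_similarity(before, after, full=True)
    changed = ssim_map < ssim_thr  # low similarity = the pixel has changed
    return [sq for sq, mask in square_masks.items()
            if changed[mask].mean() > ratio_thr]

def classify_cell(points_rgb, min_points=50, dark_thr=80):
    """Occupancy/colour check on a cropped cell point cloud (board plane removed).

    `points_rgb` is an (N, 3) array with the RGB values of the remaining points.
    """
    if len(points_rgb) < min_points:
        return "empty"
    return "black" if points_rgb.mean() < dark_thr else "white"
```

With exactly two changed squares the move is read off directly; otherwise `classify_cell` would be applied to the ambiguous candidates, mirroring the fallback described above.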

Fig. 3

The first row (a to d) shows the steps of the simplest case for the move detection process. The underlying move is black pawn from d7 to d6. (a) RGB image of the chessboard before the opponent has played the move (homographic transformation has been applied). (b) RGB image of the chessboard after the opponent has played the move (homographic transformation has been applied). (c) Structural Similarity Index Measure (SSIM) map, resulting from the comparison of (a) and (b). (d) Binarized SSIM map. Cell contours and cell names are added for the sake of readability. The second row (e to h) shows the additional steps in detecting more challenging moves when also the point cloud is taken into account. The underlying move is black bishop from f8 to c5 (highlighted by the red arrow). (e) RGB image of the chessboard after the opponent has played the move (homographic transformation has been applied). (f) Binarized SSIM map. Note that the binarization step returns three cells, thus the move cannot be identified with the data at hand. In this example, an ambiguity in the destination cell (either c5 or d5) is observed. (g) Point cloud of the scene after the move, reconstructed combining RGB and depth image. (h) Cropped point cloud of one of the candidate destination cells (i.e. c5) after removing the chessboard plane. In this example, the d5 cell is found to be empty, thus discarded from the destination candidates

Algorithm 1

Move identification

6.2 Motion

The mechanical task to be implemented is a pick-and-place task, which requires the motion of TIAGo's 7-DoF arm in combination with the fingers of the parallel-plate gripper. Additionally, minor movements of TIAGo's torso and head are used throughout a complete game of chess to ensure an optimal view of the chessboard.

The motions of the arm are planned using MoveIt!, an open-source robotic manipulation platform for motion planning and collision avoidance in ROS. The arm has to lift and lower the pieces over the chessboard, place captured pieces in the box, click the chess clock and, finally, move to a rest pose. Movements over the chessboard mimic the action of a human player, e.g., the knight is moved along its typical L-shaped path and the bishop along a diagonal.
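
For illustration, a single pick action could be expressed with the MoveIt! Python interface roughly as follows. The planning-group names, the hover offset, and the named gripper configuration are assumptions; the sketch also assumes the ROS node and moveit_commander have already been initialized.

```python
import copy
import moveit_commander
from geometry_msgs.msg import PoseStamped

def pick_piece(square_pose: PoseStamped, hover_offset: float = 0.10) -> bool:
    """Hover above the square, descend, close the gripper, and lift again."""
    arm = moveit_commander.MoveGroupCommander("arm_torso")   # assumed group name
    gripper = moveit_commander.MoveGroupCommander("gripper")

    hover = copy.deepcopy(square_pose)
    hover.pose.position.z += hover_offset

    for target in (hover, square_pose):
        arm.set_pose_target(target)
        if not arm.go(wait=True):
            return False                    # planning or execution failed

    gripper.set_named_target("close")       # assumed named gripper configuration
    gripper.go(wait=True)

    arm.set_pose_target(hover)              # lift the piece off the board
    return arm.go(wait=True)
```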

The standard pick-and-place task is accomplished with the gripper oriented along the vertical axis, perpendicular to the chessboard plane. However, given some limitations of TIAGo's operational workspace (i.e., the furthest rows are close to the robot's maximum reach, while the closest ones require operating close to the robot torso, thus limiting the feasible configurations), the desired gripper orientation may be unreachable. To mitigate the constraints on the robot's positioning with respect to the chessboard, the gripper orientation varies depending on the pick and place locations. Moves in the central region of the chessboard are performed with the gripper in the standard vertical orientation, while two additional gripper orientations (\(65 ^{\circ }\) and \(115 ^{\circ }\) with respect to the vertical axis) are used to pick and place pieces at the opposite ends of the board.
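
A possible way to encode this region-dependent orientation is sketched below; the row ranges, the Euler-angle convention (with 90° denoting the vertical grasp), and the helper name are assumptions made for illustration.

```python
import math
from tf.transformations import quaternion_from_euler

def grasp_orientation(row: int):
    """Choose the wrist pitch according to the board row (0 = closest to TIAGo)."""
    if row <= 1:
        pitch_deg = 65.0      # squares close to the robot torso
    elif row >= 6:
        pitch_deg = 115.0     # squares near the maximum reach
    else:
        pitch_deg = 90.0      # central region: standard vertical grasp
    # Only the pitch changes; roll and yaw stay at zero.
    return quaternion_from_euler(0.0, math.radians(pitch_deg), 0.0)
```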

Note that the vertical orientation should be considered the standard one, as it minimizes the risk of collision with the surrounding pieces. The need for special gripper configurations arises from the geometry of the hardware at hand: implementing the application on an arm with a wider workspace or a more favorable placement (e.g., a manipulator placed directly on the table) would remove the need for this workaround.

Special moves (i.e., en passant and castling) are implemented as combinations of the described elementary movements. At the current stage of development, promotions require the collaboration of the opponent to place the desired piece on the chessboard.

6.3 User Interface

The graphical user interface (GUI), developed with QtDesigner, is meant for non-expert users affected by reduced arm mobility. In this view, the GUI is configured to respond to the inputs of the joystick, in addition to standard input devices such as a mouse or a touchpad. In the initialization phase, the intuitive instructions displayed on the screen guide the operator through the preparation of the setup, and let him/her check for the correctness of the calibration procedures. At the start of the game, an instruction wizard window appears, explaining how to manage the joystick to navigate the chessboard, select pieces, and control the desired move. Then, during the game, a window displaying the chessboard with the live pieces’ disposition is shown (Fig. 4). The application also offers a tool to edit the game situation displayed by the user interface if needed (i.e., if the Algorithm 1 fails to recognize the opponent’s move), as displayed in Fig. 4 (label 5).
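
As an illustration of the square-by-square navigation driven by the joystick, a helper along the following lines could map a direction input to the next highlighted square; the direction encoding and the clamping at the board edges are assumptions.

```python
FILES = "abcdefgh"

def next_square(current: str, direction: str) -> str:
    """Move the highlighted square one step ('up', 'down', 'left', 'right'),
    clamping at the board edges, e.g. next_square('e2', 'up') -> 'e3'."""
    file_idx = FILES.index(current[0])
    rank = int(current[1])
    if direction == "left":
        file_idx = max(0, file_idx - 1)
    elif direction == "right":
        file_idx = min(7, file_idx + 1)
    elif direction == "up":
        rank = min(8, rank + 1)
    elif direction == "down":
        rank = max(1, rank - 1)
    return f"{FILES[file_idx]}{rank}"
```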

The user interface is implemented as a ROS node that communicates with the vision and motion nodes to receive the results of the initialization procedure, update the live chessboard situation, and send tasks and targets to the robot.
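
A minimal sketch of this plumbing is given below; the topic names and the use of plain string messages in coordinate notation are assumptions, since the paper does not specify the actual interfaces.

```python
import rospy
from std_msgs.msg import String

class ChessUINode:
    """Skeleton of the interface node's ROS communication."""

    def __init__(self):
        rospy.init_node("chess_ui")
        # Moves selected by the user are forwarded to the motion node...
        self.move_pub = rospy.Publisher("/assistant/user_move", String, queue_size=1)
        # ...while the vision node reports the recognized opponent moves back.
        rospy.Subscriber("/assistant/opponent_move", String, self.on_opponent_move)

    def send_user_move(self, move: str) -> None:
        """Publish a move in coordinate notation, e.g. 'e2e4'."""
        self.move_pub.publish(String(data=move))

    def on_opponent_move(self, msg: String) -> None:
        # Update the virtual chessboard shown in the Qt window; the user can
        # override the result via "Manual Mode" if the recognition is wrong.
        rospy.loginfo("Opponent played %s", msg.data)
```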

Fig. 4

User interface chessboard window. The game situation is represented on the screen. When the user moves the input device (touchpad, mouse, or joystick), the currently selected square is highlighted in light red (label 1). When selecting a move, the start and end squares are highlighted in dark red and yellow, respectively (not shown in this figure), waiting for the user to confirm. The user is constantly informed of the application status (label 2). Errors in the move recognition can be manually corrected by enabling the “Manual Mode” (label 3): clicking the button opens a pop-up window with tools to remove, add, or replace pieces on the virtual chessboard. A similar pop-up window opens in case of promotion, i.e., when a pawn reaches the last rank on the opposite side (label 4)

7 Experimental Protocol

To test the developed platform, six famous chess games (reported in Table 1) have been replayed on a standard tournament chessboard, using TIAGo as the assistant for one of the players. The test games include special moves (i.e., castling, en passant, promotion), captures in the first and last rows of the chessboard, and variable durations, to test whether the robot can properly handle them.

Table 1 The six famous games that have been replicated during the experimental tests

In the numerical analysis of the outcome measures, a conceptual distinction has been made between actions and moves. An action is defined as the sequence of movements required to perform a pick-and-place task (going over a square, lowering the gripper, picking the piece, moving to the target location, and placing the piece), while a move can be a composition of two actions (e.g., captures or castling) and always ends with the click of the clock. During the re-creation of the games, we evaluated:

  • Manipulation outcomes:

    ◦ success in picking;

    ◦ success in placing;

    ◦ success in clicking the clock after the move execution;

  • Recognition outcome: success in recognizing the opponent's move.

Lastly, the average time needed to execute a complete chess move and to recognize an opponent's move has been evaluated.

8 Results

The moves executed during the tests totaled 1029, for a total of 1303 executed actions. Overall, 133 of the 1303 executed actions presented some error. Looking at the undesired behaviors in more detail, 44 errors happened in picking, 11 in placing, and 13 in clicking the clock. In 65 cases, the gripper touched a piece on the chessboard that was not involved in the requested move. These contacts have been categorized as hard collisions (30 occurrences) if they required an operator intervention to correctly re-arrange the chessboard, or soft collisions (35 occurrences) if they did not compromise the game situation.

The opponent's move recognition system was able to correctly track the game situation in \(96\%\) of the cases.

It should be noted that not all the undesired behaviors compromise the game progress. Indeed, soft collisions do not require any intervention, and tracking errors can be corrected by the user through the user interface (as described in Sect. 6.3). Considering this, the application required manual intervention in only 89 of the 1029 executed moves, resulting in a percentage of correct and independent functioning of \(91.6\%\). An overview of the overall performance is reported in Fig. 5.

Fig. 5

Barplots reporting the application success rate. These results are relative to the executed moves. The sum of the white and grey bars accounts for independent functioning. The black part of the bar represents the errors that require the intervention of an operator

The percentage of failures over the totality of actions performed on each square is represented in the colormap in Fig. 6, where the color of each square gets darker as the frequency of failures on that square increases. To populate the map, the number of actions performed by TIAGo's arm over each square has been counted.

Fig. 6

Colormap representing the distribution of the error percentage over the performed actions for each square of the chessboard. In each cell, the absolute number of failures is reported

In terms of time, the move execution took on average \(37.93 \pm 11.50\) s. The average processing time to recognize the opponent’s move settled to \(4.88 \pm 2.78\) s.

8.1 Pilot Test

In a pilot test, TIAGo assisted a person affected by muscular dystrophy in playing a chess game against a healthy subject, with the aim of evaluating the feasibility of the proposed approach when used by a possible target user. The player used a Bluetooth mouse to interact with the screen interface, and the setup of the chess set was managed by the opponent. The game was played without any major inconvenience (i.e., hard collision). The player was able to autonomously fix the game situation from the user interface when the robot failed to properly track the opponent's moves. The game was played to the end, demonstrating the validity of the proposed approach. The main limitation identified was the dependency on the screen, seen as a restriction to direct participation. Nevertheless, the user reported being satisfied with the operation of the application and found the experience enjoyable.

9 Discussion

The objective of this work was to implement a robotic assistant to support disabled chess players in competitive games. The robot plays the moves on behalf of the player. The possible input devices (joystick, mouse, touchpad...) can be handled by a person with limited arm and hand mobility and used to select moves from a graphical interface. The opponent's moves are tracked by a vision system to update the game situation as seen by the robot and displayed in the user interface.

This work was mainly oriented to guaranteeing access to a social activity in a competitive environment, with all the specificities of the case. This aspect strongly distinguishes the presented application from other works employing robots in a chess-related environment. Despite this, the accuracy results obtained are compared with those of similar robotic “chess players” to prove the validity of the implemented system. Del Toro and co-workers [6] used a four-finger gripper specifically designed for their project, but found the placing accuracy to be strongly affected by the low-cost nature of the project and by the simple open-loop mechanical system. As a result, only 70% of the placing actions (35 out of 50) had a positive outcome. On the other hand, their machine learning algorithm for color classification and movement detection reached an impressive success rate of 99.4%. Chen and Wang [3], using a purely RGB-based (i.e., no depth information) approach to keep track of the game, registered 100% accuracy in understanding the performed moves. Though impressive, this result may have benefited from the use of a black and white chessboard and chess set, in which the two armies are well separated in the RGB color space. This condition is not always guaranteed, as materials, colors, and lighting conditions change from game to game. In addition, as for the arm motions, the authors report that the arm occasionally knocks over other pieces as it actuates, requiring human intervention to fix the chessboard situation, though no quantitative measure of this phenomenon is reported.

The main causes of failure are inaccuracies in the vision pipeline, the arm's repeatability error (reported by the robot manufacturer to be 3.5 mm in the worst-case scenario), and failures in planning or executing the movement.

The heatmap in Fig. 6 shows that the most recurrent errors happened on the far side of the chessboard. This highlights how the localization of the square centers is more accurate for the squares closer to TIAGo's camera, due to the perspective view of the chessboard. Moreover, most of the errors were pick errors, as expected, since this kind of error can be caused by small inaccuracies accumulated in previous moves, i.e., when TIAGo places a piece slightly off the center of its square. If this happens, the robot is likely to fail when the same piece has to be picked again. To reduce the impact of this problem, the computer vision pipeline has already been re-designed, leveraging the insights gathered from the first implementation (superseded, and thus not described in this paper). The new pipeline significantly increased the robustness against changes in lighting and chessboard orientation. A further improvement in segmentation and recognition performance could be achieved by integrating a bird's-eye camera into the setup. As for the arm motions, the error introduced by the arm positioning is comparable with the size of the pieces; nevertheless, the pieces' cylindrical symmetry and the gripper's large plates (4 cm) help mitigate the problem. To make the application more robust against planning and/or execution failures, recovery strategies should be implemented, enabling the robot to try again when an action fails. As an example, when the planner fails to find a valid arm trajectory, the arm could be randomly moved to a different (safe) configuration: this would provide a new seed to the trajectory planner, possibly leading to the identification of a valid path.
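
A simple retry scheme along these lines could look like the following sketch; the number of retries, the perturbation amplitude, and the helper name are assumptions, and a real implementation should also respect joint limits and check that the intermediate configuration is collision-free.

```python
import random
import moveit_commander

def go_with_recovery(arm: moveit_commander.MoveGroupCommander, target, retries: int = 3) -> bool:
    """Retry a failed motion after re-seeding the planner from a perturbed joint state."""
    for _ in range(retries):
        arm.set_pose_target(target)
        if arm.go(wait=True):
            return True
        # Move to a slightly perturbed joint configuration so that the next
        # planning attempt starts from a different seed.
        joints = arm.get_current_joint_values()
        arm.go([q + random.uniform(-0.15, 0.15) for q in joints], wait=True)
    return False
```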

As far as the execution time is concerned, the average move execution time computed during the tests is strongly related to the speed rate chosen for TIAGo (limited to 20% of the maximum speed to avoid harmful collisions during the test phase). The standard deviation of the duration of double-action moves is higher than that of single-action ones, because double-action moves can be composed of different arm motion combinations that can take longer to complete.

Even with the current execution time, the delay introduced by the robot can be deemed acceptable in the context of the long time controls typical of competitive tournaments. Nevertheless, such a long time for the robot to execute a move is likely to negatively impact the user experience, and may thus represent an obstacle to the actual deployment of the device in the real world. This limitation can be mitigated by increasing the maximum allowed velocity of the robot motors.

In terms of the economic viability of the proposed application, it must be remarked that the PAL Robotics TIAGo robot has been chosen as a test platform for availability reasons. However, the robot embeds sensors and software that have not been exploited in the present application. The proposed application can be deployed on a less expensive setup made of an RGB-D camera and a mechanical arm with a sufficient workspace, connected to a PC running the user interface. This would cut the costs related to the many unused functionalities of TIAGo, thus making the developed application more realistically deployable in a real chess competition context.

The pilot test proved the capability of the proposed system to assist a chess player. The gathered observations provided useful feedback on the current application and helped to prioritize the future development steps. In particular, the main remarks concerned the user interface. Though the GUI was appreciated for its usability, the player underlined the importance of having direct visual contact with the chessboard; in this view, vocal control would enable greater involvement in the game. As for the input device, the user asked to play with the mouse instead of the joystick to speed up (1) the setup phase (i.e., avoiding the time-consuming process of rigidly attaching the joystick to the armrest of the wheelchair), and (2) the move selection, moving freely around the chessboard rather than square by square. Nevertheless, this solution is only suitable for players with sufficient residual force and mobility.

10 Conclusions and Future Directions

In this work, a PAL Robotics TIAGo robot has been used as a development platform to implement a SAR application targeting the official chess competition context. Experimental tests resulted in a satisfactory outcome both for the performance of the pick-and-place task and for the opponent's move recognition algorithm. The percentage of moves executed without the need for an operator's intervention reached \(91.6\%\), with the majority of the errors happening on the side of the chessboard opposite to the robot. Though promising, these results imply that roughly 1 move out of 10 requires operator intervention, an error rate that is probably too high for successful deployment of the system. Indeed, the target error rate can be considered to be around 3–4\(\%\), corresponding to at most one error per average-length game.

The opponent's move recognition algorithm alone reached a total success rate of \(96\%\). Though these results could be further improved, e.g., by integrating a bird's-eye or an in-hand camera, the vision pipeline is satisfactory in terms of accuracy. In this view, it is worth remembering that these errors can be corrected through the user interface (Fig. 4, label 3).

With the proposed application, whenever the user promotes a pawn, the opponent is asked to substitute the promoted pawn with the user's desired piece. A possibility to remove this dependency on human intervention is to locate, e.g., using ArUco markers, an area outside the chessboard where the captured pieces are placed in a predefined configuration. In this way, the robot would be able to go back and retrieve one of them whenever the player promotes a pawn and asks for it to replace the pawn.

Regardless of the above, top priority should be given to the implementation of a vocal control strategy and to its compliance with the official rules. In this way, the need for a monitor displaying the GUI would be removed, allowing a more direct involvement of the player in the game and possibly improving the user experience. In this view, the reduction of the time taken to execute a move should also be prioritized, in order to ease acceptance by the final users.

The presented implementation of a robotic assistant for disabled chess players helped in the identification of new required features. In particular, the development of recovery strategies in case of arm motion failures emerged as the most impactful one. In addition, a vocal control system could be integrated as an alternative to the present graphical interface, so that the user could switch to the preferred control modality; for example, the monitor could be removed in casual, non-competitive games in favor of vocal commands, while the applicability of vocal control in the context of official tournaments should be further investigated. After the implementation of the new required features, the application will be tested with naive and relevant users to gather data about usability and user experience. In conclusion, the approach tested in this work is promising, even though further work is needed to improve robustness, execution speed, and the naturalness of the interaction to facilitate user acceptance. Ultimately, this work could open the door to the real-world application of robotic helpers in official chess tournaments.