Trust becomes a critical issue for humans working with robots, especially when robots can autonomously learn and adapt to new situations. The behaviour of such machines cannot be formally verified in advance. We propose to study how trust changes during a mixed-initiative task under varying degrees of transparency of the robot's adaptation process.
The two main research contributions are:
- The design and development of a robotic cognitive architecture that enables the robot to adapt autonomously to changes in the task environment. We instantiate the architecture on a Baxter robot participating in a mixed-initiative task in which the environment changes, requiring the robot to adapt on the job.
- The modelling and evaluation of the evolving human-robot trust relationship as the robot learns on the job.