Recently, the ability (or inability) of artificial intelligence to behave like a human being has come under repeated scrutiny: between AI boyfriends, bereavement technology and chatbots built to provide psychological support, the empathic potential of this technology is clearly being put to the test on a daily basis.
But how does artificial intelligence cope with having to apologise? Can it offer a credible, heartfelt apology, the kind that sincere repentance demands? To test this potential, the BBC series 'AI vs The Mind' built a video game in which an AI, posing as a human user, deliberately insults the players. The project then requires this fake human player to apologise to the other participants, who are flesh-and-blood beings.
The next goal of the project is to present this artificially generated apology to the real players and gather their opinions on it. Will they recognise the apology offered to them as a technological fabrication, or will they trust the words of the AI?