Report – Are the Robots Coming for our Children?
There is no doubt that Artificial Intelligence (AI) is having an increasing impact across a wide range of industries. This session explored both the opportunities for AI in children’s industries, from robots to marketing campaigns, and the risks and ethical dilemmas these present. In such an exciting and fast-growing area, how do we ensure as an industry that we don’t cross the line?
- Applications for AI across children’s media are fast emerging, from robots that support early language development to deep analysis providing audience insights
- However, there are also numerous potential risks and ethical concerns
- The panellists highlighted existing guidelines, such as those from the European Commission, and called for more research, discussion and rules at both a national and a company level to help the industry navigate this minefield effectively
The session focused on the use of AI for two very different applications in the children’s sector: firstly, AI and speech recognition used within robots or other products for children, enabling conversation and control by voice; secondly, the use of AI to analyse the linguistics of spoken or typed discussion to provide deep audience understanding.
Lucy Gill from Digills Consulting played a pre-recorded interview with Sam Spaulding, from MIT’s Personal Robots Group. Sam spoke passionately about his research, using AI-driven robots to help pre-school children develop language skills, particularly targeting children who would otherwise struggle and so start school a long way behind their peers. He spoke of compelling evidence that one-to-one interactions with AI robots make a dramatic difference at this critical early stage, in a way that would be too expensive to replicate through interventions with humans.
Martyn Farrows, from Soapbox Labs, added a list of other applications where AI, and particularly children’s voice recognition, is adding value, stating that AI has “huge potential for using speech for good.”
Ben Hookway, from Relative Insights, talked about how their company was founded on analysis of online discussions to uncover online predators in support of police surveillance. He explained how the technology that emerged from this is now being used for applications, primarily for adults, as a tool to provide deep audience insights that can support marketing. For example, they learned that those with SLR cameras refer to ‘shooting’ photos whilst other photographers refer to ‘taking’ photos, and that older women refer to ‘applying’ makeup whilst younger women ‘put on’ makeup. Such insights allow marketing language to be more authentic.
Tom Winbow, from Ralph Creative, followed this up with a range of other applications for this linguistic analysis, which has supported their campaigns for clients including Netflix. He reminded us how important insights and audience understanding are, and how, used ethically, AI can become just another tool to help us achieve this effectively.
Two pre-recorded interviews with Dr Veronica Barassi, Principal Investigator on the Child, Data, Citizen project at Goldsmiths, University of London, helped to highlight the risks inherent in using AI for applications relating to children. In particular:
- The ability to identify children from their voice
- Personal, contextual data gathered, and potentially stored, about a child from what they say
- The ability to infer additional characteristics about a child from their speech or text-based discussions
- The risk of profiling and so potentially stereotyping children
- Potential use of this data to influence children
- The somewhat unknown impacts of building relationships with robots
The panel agreed with and expanded on these points, and discussed the lines their organisations would not cross. Ben commented that they would only ever analyse verified data provided by a client (gathered in a GDPR/COPPA-compliant way), and Martyn explained that it is possible to process voice data on a device such that no voice data needs to be shared further or stored – this is how Soapbox Labs’ technology works. As Sam mentioned, it is possible to use AI ethically, but a company’s business model has a big impact – if the business model is based on sharing data (as is the case for Amazon and Google), then ensuring a child’s privacy is much more difficult.
The session concluded with a discussion of how the industry should ensure AI is used ethically for children, and how the panel would advise those working in this area to proceed. The panel were keen to see more research into the impacts on children, as well as clearer guidelines and rules specifically to protect children, both at a national/international and a company level. Martyn pointed the audience towards the European Commission’s ‘Ethics Guidelines for Trustworthy AI’.
It was clear from the discussion and audience questions that this is a contentious topic, and one that will continue to fuel debate and a range of concerns for years to come.
Child-Centred Consultant & Researcher
Dr Veronica Barassi
Goldsmiths, University of London
Principal Investigator on the Child, Data, Citizen Project