
Report – It’s Alive

Posted on: Wednesday 04 July 2018 11:31pm

With AI constantly in the news today, this session addressed fundamental questions for the children’s media industry: will AI in its various forms really work for kids of different ages? How will it impact creativity, content creation, learning and jobs? How can AI engage children? Where are the opportunities and what are the challenges?

Takeaways:

  • AI with children is at a very early stage. It simply does not factor in the distinctive style of a child’s conversation, and there are concerns around safety and age-appropriateness. The industries developing and benefiting from AI need to take a very different and more responsible attitude to AI with children.
  • What is happening so far is very advertising and commerce-driven (e.g. Alexa), so panellists suggest this is not designed with magical experiences for children in mind.
  • AI has the ability to create very creative, personal, enriching experiences for children with potential to not only inspire and entertain, but also offer great educational and even therapeutic benefits.
  • When given a chance to explore AI, children are incredibly interested and, in hackathon-style environments, show real aptitude for using AI to solve real-world problems.

Detail:

Maddie Moate’s introduction acknowledged the different levels of awareness as to what AI actually is, and also mixed perceptions of the possible benefits and threats that it might bring.

She started by asking the panel to share their view on what AI is and where it is in its development.

Lee Allen shared a little about the history of AI: there is a lot of excitement AND confusion, and we have already seen several boom-and-bust cycles since the term was first coined in the 1950s. However, the last five years have seen spectacular progress in computer vision, speech recognition, natural language processing and autonomous systems, thanks to a combination of faster computing, more data and better algorithms. Perhaps most fundamentally, tools are becoming active rather than simply passive.

Maddie then asked the panel whether we SHOULD use it with children.

Martyn Farrows stressed that if we are to apply AI to children, we will be dealing with personal data and so creating a digital footprint for each child. If we then target that child with experiences, we remove serendipity, a fundamental part of a child’s learning experience, and that deserves to be questioned.

Moreover, the magical experience children need is quite different from the current functional experience delivered through Alexa, and most current platforms such as Alexa are not thought to be age appropriate or safe for children. Still, the promise is huge: these assistants can enable better inclusion, can be educational, familiar and comforting and, unlike adults, have infinite patience and attention. However, there are many challenges to overcome, above all creating a safe, age-appropriate experience for kids.

Maddie moved the discussion onto how AI can enhance creativity with an example of how Botnik have used AI to create the ‘Lost Grimm Fairytale’.

Lydia Gregory talked about how potentially powerful AI can be as a tool, by no means replacing humans but rather enhancing our skills. Lost Grimm offers a simple creation experience with opportunities for children to add their own ideas, bringing educational benefits and sparking idea generation. In many ways AI is creating an outlet to help children be MORE creative, rather than replacing their creativity.

She also stressed that AI’s immediate opportunities will not always be ‘cool’ products – often simpler, more boring functions will be the norm, e.g. filtering highlights of Wimbledon coverage, which a machine can do in minutes in a way a human can’t, and maybe wouldn’t want to.

Maddie asked whether, and how, we should be educating our children about AI.

Elena Sinel shared the Teens in AI hackathons, which offer children a chance to explore and use AI to tackle world problems. The children are fascinated by AI, pick it up very quickly, and rapidly find ways to address very complex issues, e.g. climate change and some very challenging child and young-adult mental health problems, including how a machine should handle very personal situations where a child shares real-life problems, giving the right advice or steering the child to a real human.

Typically every child participating ends up wanting to get into AI. She emphasised how much the organisers need to take into account to make sure children are protected and not put at risk, with questions going as far as the human rights conventions protecting a child’s right to privacy.

Maddie asked whether we need to educate children to recognise that the ‘cute AI robot’ they love is actually a robot.

Martyn felt this is not an issue of engagement with the character, and in fact has parallels with the way a child falls in love with TV characters. The bigger issue is the new one of collecting data about children, which the children’s industry needs to deal with by setting ethical boundaries. Lee added that the spoken word can connect emotions and create companionship, and so has real therapeutic potential. Elena balanced this by saying that some parents may be uncomfortable with the idea of a robot developing an emotional relationship with a child, particularly one that is at an experimental stage of development.

Martyn observed that, with regard to standards in his area, toy manufacturers are already held to a higher standard: Mattel, with Barbie, has to work to different standards than Amazon does with Alexa.

 

Written by Craig Hill
