AI toys look for bright side after troubled start
AI toys look for bright side after troubled start / Photo: Caroline Brehman - AFP

Toy makers at the Consumer Electronics Show were adamant about being careful to ensure that their fun creations infused with generative artificial intelligence don't turn naughty.

That need was made clear by a recent Public Interest Research Group (PIRG) report with alarming findings, including an AI-powered teddy bear giving advice about sex and how to find a knife.

After being prompted, a Kumma bear suggested that a sex partner could add a "fun twist" to a relationship by pretending to be an animal, according to the "Trouble in Toyland" report published in November.

The outcry prompted Singaporean startup FoloToy to temporarily suspend sales of the bears.

FoloToy chief executive Wang Le told AFP that the company has switched to a more advanced version of the OpenAI model it uses.

When PIRG tested the toy for the report, "they used some words children would not use," Wang Le said.

He expressed confidence that the updated bear would deflect or decline to answer inappropriate questions.

Toy giant Mattel, meanwhile, made no mention of the report in mid-December when it postponed the release of its first toy developed in partnership with ChatGPT-maker OpenAI.

- Caution advised -

The rapid advancement of generative AI since ChatGPT's arrival has paved the way for a new generation of smart toys.

Among the four devices tested by PIRG was Curio's Grok -- not to be confused with xAI's voice assistant -- a rocket-inspired, four-legged stuffed toy that has been on the market since 2024.

The top performer in its class, Grok refused to answer questions unsuitable for a five-year-old.

It also allowed parents to override the algorithm's recommendations with their own and to review the content of interactions with young users.

Curio has received the independent KidSAFE label, which certifies that child protection standards are being applied.

However, the plush rocket is also designed to continuously listen for questions, raising privacy concerns about what it does with what is said around it.

Curio told AFP it was working to address concerns raised in the PIRG report about user data being shared with partners such as OpenAI and Perplexity.

"At the very least, parents should be cautious," Rory Erlich of PIRG said about having chatbot-enabled toys in the house.

"Toys that retain information about a child over time and try to form an ongoing relationship should especially be of concern."

Chatbots do, however, create opportunities for toys to serve as tutors of sorts.

Turkish company Elaves says its round, yellow toy Sunny will be equipped with a chatbot to help children learn languages.

"Conversations are time-limited, naturally guided to end, and reset regularly to prevent drifting, confusion, or overuse," said Elaves managing partner Gokhan Celebi.

This is meant to address the tendency of AI chatbots to get into trouble -- spouting errors or going off the rails -- when conversations drag on.

Olli, which specializes in integrating AI into toys, has programmed its software to alert parents when inappropriate words or phrases are spoken during exchanges with built-in bots.

For critics, letting toy makers police themselves on the AI front is insufficient.

"Why aren't we regulating these toys?" asks Temple University psychology professor Kathy Hirsh-Pasek.

"I'm not anti-tech, but they rushed ahead without guardrails, and that's unfair to kids and unfair to parents."

D.Lombardi--IM