By Marty Swant • November 20, 2023 • 6 min read •
So many AI discussions this year have centered on what businesses, regulators and researchers think. But what do parents and kids think?
To understand how families are grappling with generative AI, researchers at Kantar and the Family Online Safety Institute sought to find out how “habits, hopes and fears” vary across the U.S., Germany and Japan. The findings, released last week, showed that a majority of parents felt positive about their teens using generative AI even if they’re still concerned about the risks.
Germany had the highest share of positive sentiment overall (70%), followed by the U.S. (66%) and Japan (59%). Japan also had the largest share with a negative outlook (38%), versus 29% in the U.S. and 27% in Germany. To address those concerns, parents in all three countries named data transparency as a top priority, something teens in the U.S. and Germany also mentioned.
When Kantar asked how they’d tried genAI tools, a majority of parents and teens said they used them for analytical tasks. However, a higher percentage of parents in all three countries had used the tools for creative tasks, while teens were more likely to use them for “efficiency-boosting” tasks like grammar checks. Both parents and teens said they worried about the potential for job loss and misinformation. Teens also expressed worry about AI being used to create new forms of harassment and cyber-bullying.
One of the more surprising findings in Kantar’s report: Teens said their parents knew more about AI than they did. That’s especially noteworthy after nearly two decades of the social media era, when kids adopted new platforms faster than their parents. However, that might be because parents are using AI more than social media in work settings, noted Kara Sundby, senior director of Kantar’s Futures Practice. Another possibility is that parents have gone through enough digital evolutions to become more adept at “responsible learning.”
“Parents are also using this for themselves in ways that they didn’t jump on TikTok and Snap,” Sundby said. “It is more of a workhorse.”
Despite the optimism Kantar uncovered, a separate survey in the U.S. and U.K. conducted by Braze found that around half of consumers are worried brands won’t use their data responsibly, with just 16% saying they were “completely confident” or “very confident.” Only 29% said they were comfortable with brands using AI to personalize experiences for them, while 38% said they weren’t comfortable and 33% were unsure.
It’s important to understand how communities use AI systems and how they’re impacted by them, said Ashley Casovan, managing director of the AI Governance Center at the International Association of Privacy Professionals. In her prior role as managing director of the Responsible AI Institute, Casovan also helped conduct a survey of Canadians to see what they thought about AI, and found that 71% of respondents thought AI should be developed in consultation with “everyday people” while 82% thought it’s important to incorporate ethics to minimize harms. (Casovan also spent years working for the Canadian government, where she helped develop its first national policy around government use of AI.)
“There is a really strong need to understand the context of how these AI systems are being used,” Casovan said. “There does need to be literacy at the end user level, which I think you can generally say is the public, although it might stop before that.”
- It was a wild weekend for OpenAI. On Friday, cofounder Sam Altman was ousted from his role as CEO and removed from the board, while CTO Mira Murati was named interim CEO. In a blog post about the leadership changes, OpenAI also said cofounder and president Greg Brockman was stepping down from his role as board chair but staying with the company. However, hours later, Brockman announced he’d quit. Since then, various rumors and reports have swirled about what sparked the sudden changes and what might happen next. In OpenAI’s blog post, the company said Altman’s departure came after “a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”
- Major tech companies including Google, Meta, IBM and Shutterstock are adding new AI-related policies and tools to build trust, avoid risks and improve legal compliance.
- A new bipartisan bill in the U.S. Senate aims to bring more accountability and transparency to AI while also fostering innovation. The bill, called the Artificial Intelligence (AI) Research, Innovation, and Accountability Act, was filed last week with three Republican cosponsors along with three Democrats. Meanwhile, the FTC announced a new “voice cloning challenge” to raise awareness about the potential risks posed by AI.
- Major publishers including News Corp and IAC expressed that they are increasingly frustrated about generative AI companies scraping content without paying for it.
- A key executive at Stability AI resigned over copyright concerns related to how the popular AI startup trains its AI models. In an op-ed about his departure, Ed Newton-Rex, who led Stability’s audio efforts, said he felt the company was “exploiting” copyrighted content by using it without permission. (Stability AI has been the target of multiple copyright lawsuits, including one from artists and another by Getty Images.)
- Generative AI has continued to show up in various companies’ quarterly earnings reports. Last week, Getty Images, Visa, Chegg, Alibaba and others all mentioned genAI in their results presentations. Generative AI partially contributed to the 35% growth in gross profit for Tencent’s online advertising business, the company said, citing the roles of “heightened demand for video advertisements and the innovative application of generative AI tools in creating compelling ad visuals.”
Prompts and Products:
- Microsoft announced a number of AI updates last week during its Ignite event. Along with rebranding Bing Chat and Bing Chat Enterprise to become Copilot — which will be available on Dec. 1 — Microsoft also announced other updates including new support for OpenAI’s GPTs, enhanced commercial data protections, more plugins and a new Copilot Studio to help people build their own standalone low-code Copilots. Microsoft also released a new report detailing how people have been using Copilots for creativity and productivity. Finally, it announced Baidu as the latest partner for its Chat Ads API.
- Google is experimenting with new music-related generative AI tools, including a way for users to create original music clips for YouTube Shorts from text-based prompts. In a preview released last week, the company showed how it’s working with major artists — including Charlie Puth, Demi Lovato, T-Pain, John Legend and Sia — who are providing their music for the project.
- IBM announced a new governance tool for its Watson portfolio called watsonx.governance that will help AI clients detect AI risks, predict potential future concerns, and monitor for things like bias, accuracy, fairness and privacy.
- Ally Financial announced initial results of a generative AI experiment using the company’s large language model. According to Ally, using the tools reduced campaign production time by two to three weeks, while others saw a 34% time savings on various tasks.
- More agencies are inking new deals as a way to tap into the generative AI boom. Last week, Omnicom announced a new collaboration with Getty Images to integrate Getty’s new AI image generator into Omnicom’s Omni data orchestration platform. Meanwhile, Stagwell announced a new partnership with Google to infuse the tech giant’s AI into Stagwell Marketing Cloud.
Other stories from across Digiday:
- “Why the NFL released an AI-powered game with Amazon”
- “As the ‘trial of the century’ nears its end, have the scales tipped in the DoJ’s favor?”
- “Independent ad tech continues to tick along even as storm clouds gather overhead”