YouTube Kids is facing scrutiny over a surge in AI-generated content that experts warn could harm children's cognitive development, according to a report in The New York Times.
The investigation reveals that automated channels are flooding the platform with what the report calls "digital garbage": AI-generated content that can erode children's attention spans and their grasp of reality.
These AI-driven channels, often linked to one another, use programs such as ChatGPT to generate nonsensical scripts, which are then turned into distorted visuals by video generation tools. The resulting content often features characters with missing facial features, extra fingers, or melting faces, The New York Times detailed.
The sole purpose of this content is to manipulate algorithms, increase viewing time, and generate advertising revenue, with no regard for educational value, the investigation found.
Child development experts cited in the New York Times report warn that this type of content can cause "visual poisoning," blurring the line between what is and is not possible in the real world. Children may struggle to make sense of reality when, for example, they watch cartoon characters eat inedible objects or perform surreal actions.
The AI-generated content can also produce frightening images, such as distorted faces and harsh, robotic screams, potentially causing anxiety and night terrors in children under the age of five, the report stated.
Further data from the Pew Research Center highlights the extent of the problem, indicating that 40% of the videos suggested to children in the "Shorts" category are either entirely or partially AI-generated.
The average time children spend watching this "digital garbage" has increased by 25% in a single year, driven by AI-generated colors and music designed to overstimulate dopamine responses, according to the Pew Research Center.
In response to the New York Times investigation, YouTube stated that it has begun implementing a mandatory labeling system for any video produced using AI. YouTube also announced the removal of over 150,000 channels since the beginning of the year for "automatically generated, unhelpful content," Reuters reported.
The New York Times concludes that algorithmic "digital babysitters" are not neutral and are designed for profit rather than education. As automated production tools continue to evolve, human oversight remains crucial to protect the minds of the next generation from digital erosion.