Starting today, YouTube requires users to state whether the
videos they upload include altered or synthetic media, including content
generated by artificial intelligence.
For videos about sensitive topics like health, news,
elections, or finance, YouTube says it will label the videos itself.
The platform said it will ask users uploading
new videos to answer "Yes" or "No" to whether their videos
contain altered content.
Specifically, it will ask if any of the following describes
their content:
1. Makes a real person appear to say or do something they
didn't say or do.
2. Alters a recording of a real event or place.
3. Shows scenes that look realistic but didn't actually
happen.
If the user answers "Yes," YouTube will put a
label in the video description that says "Altered or synthetic
content."
The announcement comes as technology companies look to
tackle the problem of online misinformation generated by AI.
YouTube, which is owned by Google, has hosted many unlabeled
videos that are either entirely AI-generated or include AI-generated elements.
Among those uploaded since 2022 are videos spreading
fake news about Black celebrities, made using AI tools.
For example, many videos feature AI-generated audio
narration, which can be produced much faster and cheaper than human actors
reading a script.
Other videos use thumbnails containing AI-edited photos,
such as photos of celebrities' faces edited to look angry or sad.
Not all of these examples will be labeled as synthetic
content under YouTube's new rules.
For example, using AI text-to-speech technology to
create a voiceover does not by itself require a label, unless the resulting
video is intended to deceive viewers with realistic fake voices imitating
real people.
YouTube said it would first introduce altered and synthetic
content labels on its mobile apps, followed by the YouTube desktop browser and
YouTube TV in the next few weeks.
In the future, although the timing is not specified, YouTube
said it would penalize users who continue to choose not to disclose this
information.
It said it may also add labels itself when
unlabeled content could confuse or mislead people.
While YouTube has been unable to stem the tide of
AI-generated content already on its platform, its parent company, Google,
continues to roll out AI products for consumers such as the Gemini AI image
generator.
Gemini has come under fire for producing misleading
historical images that depict non-white people in scenes where they should not
be — such as in Nazi uniforms or in the US Congress in the 1800s.
In response, Google temporarily limited Gemini's ability to
create human images.