Where does AI training data come from?
A report from The New York Times revealed on Friday that OpenAI may have trained AI models on transcriptions of YouTube videos, and that Google may have been doing the same thing.
The report found that in the hunt for fresh digital data to train its newer, smarter AI systems, OpenAI researchers developed a speech-recognition tool called Whisper as a workaround: it could transcribe YouTube videos into text that could then be fed in as new training data for a more conversational, next-generation AI.
The process of developing GPT-4, the powerful AI model behind OpenAI's latest ChatGPT chatbot, drew on more than a million hours of YouTube videos transcribed by Whisper, according to The Times' sources.
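The Times did not describe OpenAI's internal tooling in any technical detail, but the speech-to-text step it reports can be illustrated with the open-source whisper package that OpenAI later released. The sketch below is a hypothetical illustration under that assumption, not OpenAI's actual pipeline; the audio file name is made up.

```python
# Hypothetical illustration of the transcription step described above,
# using the open-source "whisper" package (pip install openai-whisper).
# The file name is made up; this is not OpenAI's internal pipeline.
import whisper

# Load one of the publicly released Whisper checkpoints.
# "base" is small and fast; larger checkpoints are more accurate.
model = whisper.load_model("base")

# Transcribe a locally saved audio track into plain text.
result = model.transcribe("example_video_audio.mp3")

# The returned dict carries the full transcript under "text",
# which could then be collected into a text corpus for training.
print(result["text"])
```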
The Times reports that OpenAI employees discussed how training on YouTube transcriptions could violate YouTube's rules, but the company moved forward anyway, believing that training AI with the videos was fair use.
Knowledge of where the training data was coming from extended up to senior leadership, according to The Times, with OpenAI President Greg Brockman allegedly even helping to collect videos.
The Wall Street Journal's Joanna Stern interviewed OpenAI's CTO Mira Murati last month and asked her what data was used to train one of OpenAI's most recent products: a tool called Sora that generates videos based on text prompts.
"We used publicly available data and licensed data" Murati said. When Stern asked "So, videos on YouTube?" Murati replied, "I'm actually not sure about that."
When Stern further asked "Videos from Facebook, Instagram?" Murati stated, "You know, if they were publicly available, publicly available to use, there might be the data, but I'm not sure. I'm not confident about it."
YouTube CEO Neal Mohan said last week that if OpenAI used YouTube videos to train Sora, that would be a "clear violation" of YouTube's terms of use.
YouTube's terms of service "does not allow for things like transcripts or video bits to be downloaded," Mohan told Emily Chang, host of Bloomberg Originals.
Yet five sources told The Times that Google did the same thing as OpenAI, allegedly transcribing YouTube videos to generate new training text for its AI models in a potential violation of copyright law.
Google, which owns YouTube, told The Times that its AI is "trained on some YouTube content" that its agreements with creators allow.
Lawsuits over training AI with copyrighted material have become widespread in recent years, with authors like Paul Tremblay and Sarah Silverman alleging that their books were included in datasets used to train AI without their consent.
The lawyers for these lawsuits, Joseph Saveri and Matthew Butterick, state on their website that generative AI is just "human intelligence, repackaged and divorced from its creators."
More than 15,000 authors signed a letter last year asking big tech CEOs, including those at OpenAI, Google, Microsoft, Meta, and IBM, to obtain writers' consent before training AI on their work, and to credit and compensate them.
It's not just authors: musicians too are feeling the impact of AI. Artists like Billie Eilish and Jon Bon Jovi signed an open letter last week accusing big tech companies of using their work to train models without permission or compensation.
"These efforts are direly aimed at replacing the work of human artists with massive quantities of AI-created "sounds" and "images" that substantially dilute the royalty pools that are paid out to artists" the letter stated.
Last month, Tennessee became the first state to pass legislation protecting artists from deepfakes, or cloned and manipulated versions of their voices.