Researchers, developers, and experts criticise OpenAI’s GPT-4 release: “There is no way to validate it”
According to OpenAI, GPT-4 is bigger and better than its predecessors, GPT-3 and ChatGPT, but those claims are currently impossible to verify.
OpenAI is currently being criticised for keeping quiet about GPT-4’s training data and technical details. As early as page two of the 98-page technical report, OpenAI explains that it will not be publishing technical descriptions of its latest large multimodal language model:
“Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”
That explanation has been met with harsh criticism from several prominent researchers, developers, and experts, among them Elon Musk, who is also one of OpenAI’s co-founders.
For-profit, closed-source
Elon Musk is not the only one to point out that the “open” in the company’s name comes from OpenAI originally being founded as a “non-profit artificial intelligence research company” that was supposed to follow good research practice and publish its results, methodology, and code. The company stayed on that course for several years.
A significant part of the current criticism of the GPT-4 report is that OpenAI’s claims about the model’s capabilities cannot be verified. Even so, the AI company refers to everything from press material to the GPT-4 technical report as research. According to William Falcon, CEO of Lightning AI and creator of the open-source Python library PyTorch Lightning, that is extremely problematic:
“When an academic paper says benchmarks, it says ‘Hey, we did better than this and here’s a way for you to validate that.’ There’s no way to validate that here,” he explains to VentureBeat, elaborating that unverifiable claims would be fine coming from an ordinary company, but not from one presenting its material as research:
“That’s not a problem if you’re a company and you say, ‘My thing is 10 times faster than this.’ We’re going to take that with a grain of salt. But when you try to masquerade as research, that’s the problem.”
Thomas Wolf, co-founder of the open-source AI platform Hugging Face, also tells MIT Technology Review that the absence of actual technical details in the report makes it hard to judge how impressive GPT-4 really is. He, too, would like OpenAI to admit that it is “a fully closed company with scientific communication akin to press releases for products”.
Societal implications
It is not only the open-source community that takes issue with OpenAI’s technical report. There has also been a great deal of criticism from competitors and independent academics regarding the societal implications this secrecy could have. Emily Bender, professor of linguistics at the University of Washington, for example, calls the report “laughable”.
The criticism centres on the fact that there is no way to gain insight into the model, even though OpenAI itself acknowledges in the report that GPT-4 can produce “potentially harmful content, such as advice on planning attacks or hate speech” as well as “plausibly realistic and targeted content, including news articles, tweets, dialogue, and emails”.
On Twitter, Iason Gabriel, a former Oxford research scientist in ethics and philosophy who now works at the Google-owned AI company DeepMind, argues: “Never has it been so clear that the design, development and alignment of these technologies is a matter of central public concern.”
At the same time, he points to a passage in the technical report where OpenAI acknowledges the risk that GPT-4 could reinforce and entrench ideologies and worldviews:
“As GPT-4 and AI systems like it are adopted more widely in domains central to knowledge discovery and learning, and as use data influences the world it is trained on, AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.”
How about a new job?
At the more humorous end of the scale is Stability AI, the company behind the open-source text-to-image model Stable Diffusion. It is a commercial company and thus a competitor to OpenAI, which is behind the similar DALL·E model.
As both a competitor and an open-source advocate, Stability AI’s founder and CEO Emad Mostaque is, unsurprisingly, not thrilled about OpenAI’s latest move either.
He has, however, chosen to turn the criticism into a recruiting opportunity aimed at open-source enthusiasts at OpenAI; should you be interested in applying for a position, you can do so via actuallyopenai.com.
