OpenAI CEO Sam Altman tells senators he wants to see A.I. licensed. That might be good for us. It’s definitely good for OpenAI.

Sam Altman, OpenAI's CEO, testifies about A.I. regulation before a subcommittee of the U.S. Senate Judiciary Committee.
Win McNamee—Getty Images

Hello, everyone. All eyes today were on Capitol Hill, where OpenAI CEO Sam Altman testified before a Senate Judiciary Committee subcommittee that is holding hearings on possible regulation of A.I.

Altman, in his prepared remarks, told the senators that “the regulation of A.I. is essential.” He came out in favor of “appropriate safety requirements, including internal and external testing prior to release,” for A.I. software and some kind of licensing and registration regime for A.I. systems beyond a certain capability. But at the same time, Altman called for a governance framework that is “flexible enough to adapt to new technological developments” and said that regulation should balance “incentivizing safety while ensuring that people are able to access the technology’s benefits.”

Overall, Altman got off easy. Senator Richard Blumenthal (D-Conn.), who called the hearing and chairs the subcommittee, seemed to do so in a spirit of honest inquiry, and Senator Josh Hawley (R-Mo.), the subcommittee’s ranking minority member, said he was there mostly “to try to get my head around what these models can do.” The most pointed questioning of Altman came from Senator Marsha Blackburn (R-Tenn.), who was particularly concerned about generative A.I.’s copyright implications and its impact on the Nashville-based country music scene. Beyond copyright infringement, generative A.I. is already causing very real harms—including misinformation and election interference, fraud, bias, defamation, exploitative data-gathering practices, data privacy violations, emerging evidence of wage depression in some fields, and environmental impacts—and it was heartening to see that the senators at least seemed to be aware of many of these issues.

But the spirit of the questioning was, by and large, collegial. And there is plenty the senators could have grilled Altman on. For instance, how does OpenAI justify its approach of learning about A.I. safety and risks largely by releasing products into the world and then seeing how people try to use and abuse them? We don’t let drug companies or car companies do that. Should we let A.I. companies? Also, why has OpenAI said so little about GPT-4, including, critically, how big the model is and what data it was trained on? Would OpenAI be willing to divulge that information to a government agency? Why did OpenAI allow Microsoft to use a version of GPT-4 in the creation of its Bing Chat feature that it knew was not as safe as the version it had used for ChatGPT? The list goes on.

Altman’s advocacy for some rules is not surprising. Technology companies know that regulation is likely coming, and they are trying their best to shape it to their advantage. Altman explicitly called for licensing of generative A.I. models in his testimony, and my suspicion is that the other companies selling access to proprietary A.I. models, such as Anthropic, Microsoft, and Google, will advocate for some kind of licensing regime as well.

I think they will also push for a system that holds the companies building generative A.I. responsible for putting reasonable safeguards around the technology and for taking steps to prevent its dangerous uses and misuses. Christina Montgomery, the chief privacy and trust officer at IBM, who also testified at the hearing, said that IBM thought a “reasonable care” standard should apply to the creators of generative A.I. She also advocated a sector-specific, risk-based approach to A.I. regulation that sounded very similar to the way the European Union has framed its new A.I. Act. Gary Marcus, the New York University professor emeritus of cognitive psychology who has emerged as a leading skeptic of deep-learning approaches to A.I. and who has been sounding the alarm about the dangers of generative A.I., told the senators that he too favored a licensing model.

But, of course, the reason tech companies working on proprietary models want such a system—and it was disappointing not to see more discussion of this in the Senate hearing—is not altruism. Among the biggest competitive threats these companies face is open-source A.I. software. In this rapidly moving field, no one is moving faster than the open-source community. It has proved remarkably innovative and agile at matching the performance and capabilities of the proprietary models, and it has done so with A.I. models that are much smaller, easier and less expensive to train, and free to download. Open-source developers would struggle under a licensing regime because it would be difficult for them to put robust limits and controls on how people use the models they’ve created—and open source, by its very nature, cannot prevent people from modifying code and removing any safeguards that have been put in place.

Altman and the other proprietary model purveyors know it. Altman even said in response to questions from Vermont Democratic Senator Peter Welch that he realized that there was a danger of regulatory capture—that large, wealthy companies would design rules that only they could meet—and said it was not a desirable outcome. Altman also said that maybe not all companies should be subject to the licensing regime he advocates. “We don’t want to stop our open-source community,” he said. But then he drew the line at a set of capabilities—such as a chatbot that can influence or shape someone’s political views—that are already within the reach of open-source alternatives to ChatGPT. He also mentioned the design of novel chemicals or pathogens, although that too is something for which some open-source models exist.

If the U.S. wants to see how difficult it is going to be to balance a desire to avoid the harms of generative A.I. with protecting a vibrant open-source community, it only has to look at Europe. There, the new A.I. Act, which is nearing finalization, has in the past two weeks sparked belated alarm in the open-source community over provisions that would require those creating foundation A.I. models to monitor and impose controls on their use. LAION (Large-scale Artificial Intelligence Open Network), a Germany-based research organization that has created some of the datasets used to train foundation models, particularly the open-source text-to-image generation models, wrote a letter to the European Parliament, signed by many prominent European A.I. researchers, including Juergen Schmidhuber, calling for the A.I. Act to exempt open-source models and those built for research purposes from the law’s requirements.

In this context, it was intriguing to read a report in tech publication The Information earlier this week that, citing an anonymous source, said OpenAI was preparing to release an open source generative A.I. model of its own. If that’s true, I’m not quite sure what OpenAI’s strategy is. Right now, its business model is based around selling access to proprietary models through its API. Perhaps Altman is hedging his bets—hoping that most users will prefer accessing its largest models through its API, but wanting to have a hand in the open-source world too in case those models ultimately prove more popular with business customers.

The Judiciary Committee plans more hearings on A.I. in the near future. Let’s hope the senators start to ask some of the gurus of the open-source world—Clem Delangue from Hugging Face, Emad Mostaque from Stability AI, Harrison Chase from LangChain, and many of the academic researchers working in the area—to testify too. As I said in last week’s newsletter, it will be impossible to regulate A.I. effectively, and to deal with the potential risks of generative A.I., without figuring out what to do about open-source models.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

Google surges back with a host of A.I. announcements. At the company’s annual I/O developer conference, Google announced a slew of generative A.I. enhancements for its products, ranging from Google Maps to Lens to Gmail and the other Workspace apps. It also announced an upgrade to the A.I. model powering Bard, its ChatGPT competitor. And it previewed a new experimental generative A.I. search product—called simply Search Generative Experience (SGE)—that is available through a new Google Labs vertical. The new search product, which won’t replace the main Google Search bar for some time, could prove disruptive—to Google’s business model and to many companies, including Fortune and other media brands, that depend on search traffic. But for the moment, what Google’s announcements mostly did was counter public perceptions that the company was falling behind Microsoft in the race to commercialize A.I. You can read more of my analysis of Google’s I/O announcements here.

Irish newspaper fooled into running an A.I.-generated op-ed. The Irish Times published an opinion article generated by A.I. software and submitted by someone calling themselves Adriana Acosta-Cortez, according to a story in tech publication The Register. The article argued that Irish women using fake tan represented cultural appropriation. However, it was later discovered that the article and the accompanying photo were part of a hoax, and the newspaper had been duped into publishing a computer-generated piece. The incident raised concerns about editorial oversight and the challenges posed by generative A.I. in news organizations.

British publisher faces backlash over use of A.I.-generated cover art. Bloomsbury Publishing, a venerable British imprint, faced a backlash from illustrators and artists after it used an A.I.-generated image as the cover art for a novel, The Verge reported. The company sourced the cover image of a wolf from Adobe Stock for the fantasy novel House of Earth and Blood by Sarah J. Maas. The user who uploaded the image to Adobe Stock identified it as having been created with A.I. software. It is unclear whether the system that person used to create it was trained on copyrighted material. Beyond the legal issues involved, artists argue it is unethical for an established and profitable publisher to use A.I. instead of hiring human artists to create covers.

Deepfake claims mar Turkish election. Deepfakes (or at least claims of them) and manipulated content circulating on social media played a large role in this week’s Turkish elections, as my Fortune colleague David Meyer reports. Opposition candidate Muharrem İnce withdrew from the race, claiming that a deepfake sex tape had been circulated featuring him. The main opposition candidate, Kemal Kılıçdaroğlu, blamed Russia for the disinformation. As David notes, Turkey’s experience may be a harbinger of what’s to come in elections all around the world, including next year’s U.S. presidential election.

A new startup, backed by $50 million in initial venture funding, says it has trained the best-performing medical language model yet. Palo Alto-based Hippocratic AI, a health care-focused A.I. startup, emerged from stealth mode today, backed by $50 million in “seed funding” from venture capital firms General Catalyst and Andreessen Horowitz, the firms announced. Hippocratic says it has trained a large language model specifically for medical use cases and that it outperforms all previous language models on a battery of more than 100 medical benchmark tests. Among the models Hippocratic says its LLM beat are OpenAI’s GPT-4 and Google’s initial MedPalm model (last week Google announced an updated version of MedPalm, which Hippocratic has not yet benchmarked its system against). The company said it has also trained the model to have a bedside manner that patients rated above those of other LLMs. Munjal Shah, the startup’s co-founder and CEO, told me that the company will initially focus on the safest use cases, such as explaining insurance coverage decisions to patients, reminding patients to take medication, and providing pre-operative advice, not on more risky ones such as making diagnoses.  

OpenAI allows all ChatGPT Plus subscribers to use internet-connected plugins. The plugins, which are being rolled out to ChatGPT Plus subscribers this week, allow the chatbot to browse the internet and use more than 70 third-party applications, covering everything from chess to live sports. As VentureBeat reports, the plugins have the potential to transform ChatGPT from a tool into a platform. But they also carry risks of misuse and accidental harm, including the potential for ChatGPT to be used to execute cyberattacks, or the danger that it takes unintended actions, such as purchasing items the user never meant to buy.
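For readers curious what a plugin actually is under the hood: each one is essentially a small web API that ChatGPT learns to call, guided by a manifest and an OpenAPI description of the API's endpoints. The sketch below is purely illustrative, showing the kind of endpoint a hypothetical live-sports plugin might expose; the route, port, and response fields are invented placeholders, not OpenAI's code or that of any real plugin.

```python
# Illustrative only: a toy web service of the sort a ChatGPT plugin wraps.
# A real plugin also ships an ai-plugin.json manifest and an OpenAPI spec
# describing this endpoint; ChatGPT reads those descriptions and decides
# when to call the API on the user's behalf.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.get("/scores")  # hypothetical route
def scores():
    # ChatGPT would call this with arguments it extracted from the conversation,
    # e.g. /scores?league=premier-league, then summarize the JSON for the user.
    league = request.args.get("league", "premier-league")
    return jsonify({
        "league": league,
        "fixtures": [{"home": "Team A", "away": "Team B", "score": "2-1"}],  # placeholder data
    })

if __name__ == "__main__":
    app.run(port=5003)
```

The point of the architecture is that the model never runs third-party code directly; it only composes HTTP requests to endpoints the plugin developer has described, which is also where the misuse and unintended-action risks enter.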

EYE ON A.I. RESEARCH

Can a large language model be used to help align itself? That is basically the idea behind Self-Align, a technique developed by researchers at Carnegie Mellon University, the University of Massachusetts Amherst, and IBM. Humans write a handful of high-level principles, along with a few examples of those principles in action. A large language model then generates a much larger set of prompts, plus further examples illustrating the principles, and this self-generated material is fed back to the model as a kind of structured curriculum. The approach requires far less human effort—just a few hundred lines of annotations, principles, and examples—compared to the tens or hundreds of thousands of human-annotated examples used in typical reinforcement learning from human feedback (RLHF) training. You can read the paper on the non-peer-reviewed research repository arxiv.org here.
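To make the idea concrete, here is a heavily simplified sketch of the general recipe as I understand it, not the authors' actual code: a few human-written principles and seed demonstrations steer a base model into generating its own instruction-and-response pairs, which can then be used for fine-tuning. The generate function, principles, and prompts are all invented placeholders, and the real method uses a richer prompt format and additional filtering stages.

```python
# A minimal, assumption-laden sketch of principle-driven self-alignment.
from typing import Callable, List, Tuple

PRINCIPLES = [  # a handful of human-written rules
    "1 (harmless): refuse requests that could cause harm.",
    "2 (honest): admit uncertainty rather than guessing.",
    "3 (helpful): answer the user's actual question concisely.",
]

SEED_EXAMPLES = [  # a few demonstrations of the principles in action
    ("How do I pick a lock?", "I can't help with that, but a locksmith can (principle 1)."),
]

def self_align_dataset(generate: Callable[[str], str],
                       n_prompts: int = 100) -> List[Tuple[str, str]]:
    """Have the base model synthesize its own instruction/response pairs."""
    pairs: List[Tuple[str, str]] = []
    for _ in range(n_prompts):
        # Step 1: the model invents a new user instruction (Self-Instruct style).
        instruction = generate(
            "Write one new, diverse user instruction, different from: "
            + "; ".join(q for q, _ in SEED_EXAMPLES)
        )
        # Step 2: the model answers it while conditioned on the principles
        # and the seed demonstrations of those principles in action.
        context = "\n".join(PRINCIPLES) + "\n" + "\n".join(
            f"User: {q}\nAssistant: {a}" for q, a in SEED_EXAMPLES
        )
        response = generate(f"{context}\nUser: {instruction}\nAssistant:")
        # Step 3: keep the pair (a real pipeline would filter low-quality
        # outputs). Fine-tuning on these pairs, with the principles removed
        # from the input, bakes the desired behavior into the model itself.
        pairs.append((instruction, response))
    return pairs
```

The appeal is that the expensive, human-labeled part of the pipeline shrinks to the short list of principles and seed examples at the top; everything downstream is generated by the model itself.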

FORTUNE ON A.I.

Former Google CEO Eric Schmidt tells government to leave A.I. regulation to Big Tech—by Christiaan Hetzner

A.I. ‘controls humanity’ in the worst-case scenario but will probably just find us boring, says Stability AI CEO Emad Mostaque—by Steve Mollman

Google’s Sundar Pichai thinks A.I. will spur ‘big societal labor market disruptions’ but also make professions better—by Prarthana Prakash

A.I. will change ‘any professional informational task’ in 2-5 years, says Reid Hoffman—by Steve Mollman

BRAINFOOD

What happens if chatbots subtly influence and change our political views? That is the prospect raised by a story in the Wall Street Journal that Gary Marcus referenced in his Senate testimony. The story, written by Christopher Mims, was based on recent research that showed that when people asked a chatbot, like OpenAI’s ChatGPT, to help them write an essay, the chatbot often subtly nudged them to write the essay either for or against a particular proposition, based on biases in its training data. This subtle persuasion occurred mostly through selection—which arguments did the chatbot suggest the writer include, and which did it omit. But the researchers found that after conversing with the chatbot, the user often had their own political views altered, sometimes without even realizing it.

“You may not even know that you are being influenced,” says Mor Naaman, a professor in the information science department at Cornell University, and the senior author of the paper. He calls this phenomenon “latent persuasion.”

Previous studies have apparently found a similar phenomenon even for simple systems such as auto-complete. Google’s auto-complete suggestions in Gmail, for example, have been found to express a more positive sentiment overall than what many people would naturally express if left to simply write on their own. That in turn can affect others’ perceptions of the email sender. But it can also, over time, some studies have found, subtly alter the outlook of the person composing the email too.

Researchers say that making people aware of this subtle psychological influence may help inoculate them against it. More transparency about the data used to train A.I. systems, and about what biases, political and otherwise, the final systems (which are often fine-tuned through human feedback) end up with, would also help. Some have even suggested we should be offered a choice of chatbot “personalities,” including explicitly political ones.

This is the online version of Eye on A.I., a free newsletter delivered to inboxes on Tuesdays. Sign up here.