Around the halls: What should the regulation of generative AI look like?

We live in an age of unprecedented advances in generative artificial intelligence (AI): AI systems that can produce a wide range of content, such as text or images. The release of ChatGPT, a chatbot powered by OpenAI's GPT-3.5 large language model (LLM), in November 2022 brought generative AI into the public consciousness, and other companies like Google and Microsoft have been equally busy developing their own tools to exploit the technology. Meanwhile, these continued advances and applications of generative AI have raised important questions about how the technology will affect the labor market, how its use of training data implicates intellectual property rights, and what form government regulation of the sector should take. Last week, a congressional hearing with top industry leaders suggested an openness to AI regulation, as lawmakers are already considering how to rein in some of the potential negative consequences of generative AI and of AI more broadly. In light of these developments, scholars across the Center for Technology Innovation (CTI) have been thinking about what the regulation of generative AI should look like.

NICOL TURNER LEE (@DrTurnerLee)
Senior Fellow and Director, Center for Technology Innovation:

Generative AI regulation could start with good consumer insights

Generative AI refers to machine learning algorithms that can create new content, such as audio, code, images, text, simulations, or even video. Much recent attention has focused on chatbots, including ChatGPT, Bard, Copilot, and other more sophisticated tools that leverage LLMs to perform a variety of functions, such as gathering research for assignments, compiling legal briefs, automating repetitive office tasks, or enhancing online searches. While regulatory debates focus on the potential downsides of generative AI, including the quality of datasets, unethical applications, racial or gender bias, workforce implications, and further erosion of democratic processes through technological manipulation by bad actors, the positives include a dramatic increase in efficiency and productivity as the technology improves and simplifies certain processes and decisions, such as streamlining the processing of medical notes or helping educators teach critical-thinking skills. There will be much to discuss about the ultimate value and consequences of generative AI for society, and if Congress continues to operate at a very slow pace to regulate emerging technologies and institute a federal privacy standard, generative AI will become more technically advanced and more deeply embedded in society. But where Congress could score a quick victory on the regulatory front is to require that consumers be informed when AI-generated content is in use, and to add labeling or some type of multi-stakeholder certification process to encourage greater transparency and accountability for existing and future use cases.

Here, the European Union is already at the forefront. In its proposed AI Act, the EU requires that AI-generated content be disclosed to consumers to prevent copyright infringement, illegal content, and other illicit activities related to end users' lack of understanding of these systems. As more and more chatbots mine, analyze, and present content in ways accessible to users, their outputs are often not attributable to any one source, and despite some permissions around content use granted under the U.S. fair use doctrine, which protects copyrighted work, consumers are often left in the dark about how these results are generated and explained.

Congress should prioritize consumer protection in future regulation and work to create agile policies that can adapt to emerging consumer and societal harms, starting with immediate safeguards for users before they are left, once again, to fend for themselves as subjects of highly digitized products and services. The EU may well be onto something with the disclosure requirement, and the U.S. could further contextualize its application against existing models that do the same, including the Food and Drug Administration's (FDA) labeling guidance or what I proposed in previous research: an adaptation of the Energy Star rating system to AI. Bringing greater transparency and accountability to these systems must be central to any regulatory framework, and starting with small bites of a big apple could be a first step for policymakers.
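To make the labeling idea concrete, here is a minimal sketch, in Python, of what a machine-readable disclosure label attached to AI-generated content might contain. The AIContentLabel class and its field names are hypothetical illustrations for this piece, not a proposed standard or any existing scheme.

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class AIContentLabel:
    """Hypothetical machine-readable disclosure label for AI-generated content."""
    generator: str                       # model or tool that produced the content
    generated_at: str                    # ISO 8601 timestamp of generation
    human_reviewed: bool                 # whether a person reviewed or edited the output
    training_data_notice: str            # plain-language note on data sources, where known
    certification: Optional[str] = None  # optional multi-stakeholder certification mark

label = AIContentLabel(
    generator="example-llm-v1",
    generated_at="2023-05-01T12:00:00Z",
    human_reviewed=False,
    training_data_notice="Trained on a mix of licensed and publicly available text.",
)

# Serialize the label so it can travel alongside the content it describes.
print(json.dumps(asdict(label), indent=2))
```

An Energy Star-style certification mark of the kind proposed above would slot naturally into the optional certification field.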

NIAM YARAGHI (@niamyaraghi)
Nonresident Senior Fellow, Center for Technology Innovation:

HIPAA and Health Information Blocking Rules Revisited: Balancing Privacy and Interoperability in the Age of AI

With the emergence of sophisticated artificial intelligence (AI) advances, including large language models (LLMs) such as GPT-4 and LLM-based applications such as ChatGPT, there is an urgent need to revisit healthcare privacy protections. Fundamentally, all AI innovations use sophisticated statistical techniques to discern patterns within vast datasets using increasingly powerful yet cost-effective computational technologies. These three components (big data, advanced statistical methods, and computing resources) have not only become available recently but have also been democratized and made easily accessible at a pace unprecedented among previous technological innovations. This progression allows us to identify previously indistinguishable patterns, which creates opportunities for major advances but also for possible harm to patients.

Privacy regulations, especially HIPAA, were established to protect patient privacy, operating on the assumption that anonymized data would remain anonymous. However, given advances in AI technology, the current landscape has become riskier. It is now easier than ever to integrate various datasets from multiple sources, increasing the likelihood of accurately identifying individual patients.
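The mechanics of such a linkage attack are simple enough to sketch. The toy Python example below, using invented data, shows how joining a "de-identified" clinical dataset to an identified public one on shared quasi-identifiers (ZIP code prefix, birth year, sex) can re-attach names to diagnoses. It is a minimal illustration of the general technique, not a statement about any actual dataset.

```python
import pandas as pd

# "De-identified" hospital records: names stripped, but quasi-identifiers remain.
clinical = pd.DataFrame({
    "zip3":       ["331", "331", "606"],
    "birth_year": [1957, 1984, 1990],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["diabetes", "asthma", "hypertension"],
})

# A second, identified dataset (e.g., a voter roll or a data broker file).
public = pd.DataFrame({
    "name":       ["A. Smith", "B. Jones", "C. Lee"],
    "zip3":       ["331", "331", "606"],
    "birth_year": [1957, 1984, 1990],
    "sex":        ["F", "M", "F"],
})

# Joining on the shared quasi-identifiers re-attaches names to diagnoses.
reidentified = clinical.merge(public, on=["zip3", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```

The more auxiliary datasets that exist, and the cheaper it becomes to mine them, the more often such joins produce unique matches.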

In addition to the amplified privacy and security risk, new AI technologies have also increased the value of health data due to the enriched potential for knowledge extraction. As a result, many data providers may become more reluctant to share medical information with their competitors, further complicating health data interoperability.

Considering these heightened privacy concerns and the rising value of healthcare data, it is imperative that we introduce modern legislation to ensure that healthcare professionals continue to share their data while being shielded from the consequences of potential privacy breaches that could emerge from the widespread use of generative AI.

MARK MACCARTHY (@Mark_MacCarthy)
Nonresident Senior Fellow, Center for Technology Innovation:

Lampedusa on AI regulation

In Il Gattopardo, Giuseppe Tomasi di Lampedusa's famous novel about the Sicilian aristocracy's reaction to Italian unification in 1860, one of the central characters remarks, "If we want things to stay as they are, things will have to change."

Something like this Sicilian response may be at work in the tech industry's embrace of the seemingly inevitable regulation of AI. Three things are necessary, however, if we are not to keep things as they are.

The first and most important step is providing sufficient resources for agencies to enforce existing law. Federal Trade Commission Chair Lina Khan rightly says that AI is not exempt from current laws on consumer protection, discrimination, employment, and competition, but if regulatory agencies cannot hire technologists and bring AI cases in a period of budget austerity, current law will be a dead letter.

Second, policymakers should not be distracted by science fiction fantasies of AI programs that develop consciousness and exercise independent agency over humans, even if these metaphysical abstractions are endorsed by industry leaders. Not a penny of public money should be spent on these highly speculative diversions while scammers and industry edge-riders are seeking to use AI to break existing law.

Third, Congress should consider adopting new identification, transparency, risk assessment, and copyright protection requirements along the lines of the European Union's proposed AI Act. The National Telecommunications and Information Administration's (NTIA) request for comment on a proposed AI accountability framework and Senator Chuck Schumer's (D-NY) recently announced legislative initiative to regulate AI could be moves in that direction.

TOM WHEELER (@tewheels)
Visiting Fellow, Center for Technology Innovation:

Innovative AI requires innovative oversight

Both sides of the policy aisle, as well as digital business leaders, are now talking about the need to regulate AI. A common theme is the need for a new federal agency. However, simply cloning the model used for existing regulatory agencies is not the answer. That model, developed for overseeing an industrial economy, took advantage of a slower pace of innovation to micromanage corporate activity. It is not suited to the speed of the freewheeling AI era.

All regulation walks a tightrope between protecting the public interest and promoting innovation and investment. In the AI era, walking this tightrope means accepting that different AI applications pose different risks and identifying a plan that matches regulation to risk, while avoiding the regulatory micromanagement that stifles innovation.
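As a rough illustration of matching regulation to risk, the Python sketch below maps applications to risk tiers that carry different obligations, loosely echoing the tiered approach in the EU's proposed AI Act. The tier names, example applications, and obligations here are illustrative assumptions, not the text of any law or bill.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers, loosely modeled on the EU's risk-based approach."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "pre-deployment assessment and ongoing audits"
    LIMITED = "transparency and disclosure duties"
    MINIMAL = "no additional obligations"

# Hypothetical mapping of applications to tiers; in any real framework,
# the classification itself would be the hard, contested part.
APPLICATIONS = {
    "social-scoring system":     RiskTier.UNACCEPTABLE,
    "resume-screening model":    RiskTier.HIGH,
    "customer-service chatbot":  RiskTier.LIMITED,
    "spam filter":               RiskTier.MINIMAL,
}

def obligations_for(application: str) -> str:
    """Look up the oversight obligations attached to an application's tier."""
    tier = APPLICATIONS[application]
    return f"{application}: {tier.name} risk -> {tier.value}"

for app in APPLICATIONS:
    print(obligations_for(app))
```

The point of such a lookup is that oversight effort concentrates where risk is highest, rather than micromanaging every application identically.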

That agility begins with adopting the formula by which digital companies develop technical standards as the formula for developing behavioral standards: identify the problem; assemble a standard-setting process involving the companies, civil society, and the agency; then give final approval and enforcement authority to the agency.

Industrialization was about replacing and/or augmenting the physical power of humans. AI is about replacing and/or augmenting humans' cognitive powers. To confuse how the former has been regulated with what is needed for the latter would be to miss the opportunity for regulation to be as innovative as the technology it oversees. We need institutions for the digital age that address problems that are already obvious to everyone.

Google and Microsoft are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions published in this piece are solely those of the authors and are not influenced by any donation.

