OpenAI is on the receiving end of yet another lawsuit accusing it of infringement as a result of its alleged use of copyright-protected works to train the large language models that power its generative AI chatbot, ChatGPT. Filed in a New York federal court on September 19 by the Authors Guild and more than a dozen authors, including John Grisham and George R.R. Martin, the complaint largely mirrors the ones that have preceded it. In short: The authors claim that OpenAI fed their copyrighted works into its models in order to enable them to “generat[e] sentences, paragraphs, and even complete texts, from cover letters to novels,” thereby engaging in “wholesale” copying of their works “without permission or consideration.” 

Because OpenAI’s models can “spit out derivative works: material that is based on, mimics, summarizes, or paraphrases [others’] works … anyone [can] generate – automatically and freely (or very cheaply) – texts that they would otherwise pay writers to create,” the Authors Guild and the author plaintiffs assert. Against this background and in light of OpenAI’s alleged practice of using copyright-protected works to train its models (without obtaining authorization from the authors or paying the requisite licensing fee), the plaintiffs set out claims of direct copyright infringement, vicarious copyright infringement, and contributory copyright infringement against OpenAI Inc. and the eleven other OpenAI entities, which range from OPENAI LP and OPENAI LLC to OPENAI HOLDINGS LLC and OPENAI STARTUP FUND I LP.

In the newly filed lawsuit, the Authors Guild and the author plaintiffs make allegations about the nature of OpenAI’s workings that go beyond those in previous lawsuits against the company, including claims about the specific datasets that OpenAI uses to train its models and about the alleged impact of generative AI on authors. Among other things, the plaintiffs cite a survey of authors conducted by the Authors Guild, in which “69 percent of respondents said they consider generative AI a threat to their profession, and 90 percent said they believe that writers should be compensated for the use of their work in ‘training’ AI.” 

But one of the most interesting allegations in the complaint may be the one that references the sheer number of OpenAI entities. According to the plaintiffs, “The OpenAI Defendants are a tangled thicket of interlocking entities that generally keep from the public what the precise relationships among them are and what function each entity serves within the larger corporate structure.” 

Not necessarily remarkable on its own, the claim is striking in light of the allegations set out in another recent – but unrelated – lawsuit that a trio of artists lodged against Shein. In that case, in addition to accusing Shein of carrying out “large-scale and systematic” copyright and trademark infringement, the plaintiffs maintain that the ultra-fast fashion giant is running afoul of the Racketeer Influenced and Corrupt Organizations (“RICO”) Act in connection with its alleged infringement scheme.

It is well established that “egregious copyright infringement” – or criminal infringement – “constitutes racketeering,” thanks to Congress’ addition of criminal copyright infringement as a predicate act for RICO liability, the trio of artists claim in their lawsuit against Shein. (Under 17 U.S. Code § 506, criminal infringement occurs when the infringement is carried out: (1) “for purposes of commercial advantage or private financial gain; (2) by the reproduction or distribution, including by electronic means, during any 180-day period, of 1 or more copies … of 1 or more copyrighted works, which have a total retail value of more than $1,000; or (3) by the distribution of a work being prepared for commercial distribution, by making it available on a computer network accessible to members of the public.”)

As for the enterprise element, the plaintiffs argue that a civil RICO Act claim is appropriate because Shein’s alleged misconduct is “committed not by a single entity, but by a de-facto association of entities” that make up the larger “decentralized” structure that is Shein. Part of the purpose of the “loose and overtly decentralized amalgamation of entities” that is Shein is to “avoid disclosing basic information,” the plaintiffs argue. Specifically, Shein’s “multiplicity of entities and [outwardly] decentralization structure … aid in its efforts to avoid liability for intellectual property infringement” – partially because it makes it difficult for plaintiffs to determine “an appropriate entity” to sue. 

Despite the group’s “byzantine” structure, the plaintiffs claim that the many Shein entities are all connected, and since each of the various Shein defendants has “knowingly committed criminal copyright infringement, [and] played its role with full knowledge of the overarching criminal copyright infringement it participates in,” Shein is engaging in “multiple acts of racketeering and criminal copyright infringement.”

THE BOTTOM LINE: The cases targeting Shein and OpenAI both stem from allegedly “widespread” copying and thus are being waged primarily – or in OpenAI’s case, exclusively – on copyright infringement grounds in furtherance of the plaintiffs’ respective quests to combat alleged injuries to themselves and their businesses. Beyond that, the cases hardly make for like-for-like comparisons, especially since the Authors Guild and the author plaintiffs do not make RICO claims.

However, the Authors Guild and author plaintiffs’ passing claim about the “tangled thicket of interlocking entities” that comprise OpenAI – and that generally shield from the public the precise “relationships” among, and “functions” of, the entities at play – is intriguing in light of the rising number of RICO claims that creative plaintiffs are waging to recover damages from businesses that might not otherwise be obvious targets of such litigation. (Shein, multi-level marketing companies, and cannabis distributors come to mind here.) It will be interesting to see if other plaintiffs opt to delve further into the allegedly complex structure of OpenAI in any of the already-pending lawsuits or in any potentially impending suits.

The case is Authors Guild, et al. v. OpenAI, Inc., 1:23-cv-08292 (SDNY).

The rising adoption of artificial intelligence (“AI”) across industries (including fashion, retail, luxury, etc.) in recent years is bringing with it no shortage of lawsuits, as parties look to navigate the budding issues that these relatively new models raise for companies and creators alike. A growing number of lawsuits focus on generative AI, in particular, which refers to models that use neural networks to identify the patterns and structures within existing data in order to generate new content. Lawsuits are being waged against the developers behind some of the biggest generative AI chatbots and text-to-image generators, such as OpenAI’s ChatGPT and Stability AI’s Stable Diffusion, and in many cases, they center on how the underlying models are trained, the data that is used to do so, and the nature of the user-prompted output (which is allegedly infringing in many cases), among other things. 

In light of the onslaught of legal questions that have come about in connection with the rise of AI, we take a high-level (and chronological) look at some of the most striking lawsuits that are playing out in this space and corresponding developments …

Sept. 19, 2023: Authors Guild, et al. v. OpenAI, Inc.

The Authors Guild and more than a dozen authors, including John Grisham and George R.R. Martin, are suing an array of OpenAI entities for allegedly engaging in “a systematic course of mass-scale copyright infringement that violates the rights of all working fiction writers and their copyright holders equally, and threatens them with similar, if not identical, harm.” In the complaint that they filed with the U.S. District Court for the Southern District of New York on September 19, the plaintiffs, who are authors of “a broad array of works of fiction,” claim that they are “seeking redress for [OpenAI’s] flagrant and harmful infringements of [their] registered copyrights” by way of its “wholesale” copying of such works without permission or consideration.

Specifically, the plaintiffs claim that by way of datasets that include the texts of their books, OpenAI “fed [their] copyrighted works into its ‘large language models,’ [which are] algorithms designed to output human-seeming text responses to users’ prompts and queries,” which are “at the heart of [its] massive commercial enterprise.” Because OpenAI’s models “can spit out derivative works: material that is based on, mimics, summarizes, or paraphrases the plaintiffs’ works, and harms the market for them,” the company is endangering “fiction writers’ ability to make a living, in that the [models] allow anyone to generate – automatically and freely (or very cheaply) – texts that they would otherwise pay writers to create.”

With the foregoing in mind, the plaintiffs set out claims of direct copyright infringement, vicarious copyright infringement, and contributory copyright infringement.

Sept. 8, 2023: Chabon v. OpenAI, Inc.

Authors Michael Chabon, David Henry Hwang, Matthew Klam, Rachel Louise Snyder, and Ayelet Waldman are suing OpenAI on behalf of themselves and a class of fellow “authors holding copyrights in their published works arising from OpenAI’s clear infringement of their intellectual property.” In their September 8 complaint, which was filed with a federal court in Northern California, Chabon and co. claim that OpenAI incorporated their “copyrighted works in datasets used to train its GPT models powering its ChatGPT product.” Part of the issue, according to the plaintiffs, is that “when ChatGPT is prompted, it generates not only summaries, but in-depth analyses of the themes present in [their] copyrighted works, which is only possible if the underlying GPT model was trained using [their] works.”

The plaintiffs claim that they “did not consent to the use of their copyrighted works as training material for GPT models or for use with ChatGPT,” and that by way of its operation of ChatGPT, OpenAI “benefit[s] commercially and profit[s] handsomely from [its] unauthorized and illegal use of the plaintiffs’ copyrighted works.”

Jul. 11, 2023: J.L., C.B., K.S., et al. v. Alphabet, Inc., et al.

Google and its owner Alphabet are being sued over their alleged practice of “stealing” web-scraped data and “vast troves of private user data from [its] own products” in order to build commercial artificial intelligence (“AI”) products like its Bard chatbot. In the complaint that they filed with a California federal court on July 11, J.L., C.B., K.S., P.M., N.G., R.F., J.D., and G.R., who have opted to file anonymously, claim that “for years, Google harvested [our personal and professional information, our creative and copywritten works, our photographs, and even our emails] in secret, without notice or consent from anyone,” thereby engaging in unfair competition, negligence, invasion of privacy, and copyright infringement, among other causes of action. 

Jul. 7, 2023: Silverman, et al. v. OpenAI, Inc.

Mirroring the complaint that authors Paul Tremblay and Mona Awad filed against OpenAI on June 28, Sarah Silverman (yes, that Sarah Silverman), Christopher Golden, and Richard Kadrey (“Plaintiffs”) accuse the ChatGPT developer of direct and vicarious copyright infringement, violations of section 1202(b) of the Digital Millennium Copyright Act, unjust enrichment, violations of California’s statutory and common law unfair competition laws, and negligence in a new lawsuit. The basis of the lawsuit: “Plaintiffs and Class members are authors of books. Plaintiffs and Class members have registered copyrights in the books they published. Plaintiffs and Class members did not consent to the use of their copyrighted books as training material for ChatGPT. Nonetheless, their copyrighted materials were ingested and used to train ChatGPT.”

Jul. 7, 2023: Kadrey, et al. v. Meta Platforms, Inc.

The same trio of plaintiffs as above – Sarah Silverman, Christopher Golden, and Richard Kadrey – lodged a separate but very similar complaint against Meta Platforms in federal court in Northern California on July 7, accusing the Facebook and Instagram owner of running afoul of copyright law by way of LLaMA, a set of large language models that it created and maintains. According to the plaintiffs’ suit, “many of [their] copyrighted books” were included in a dataset assembled by a research organization called EleutherAI, which was “copied and ingested as part of training LLaMA.”

Jun. 28, 2023: Tremblay v. OpenAI, Inc.

A couple of authors are the latest to file suit against ChatGPT developer OpenAI. In the complaint that they filed with a federal court in Northern California on June 28, Paul Tremblay and Mona Awad assert that in training the large language model that powers its generative AI chatbot, ChatGPT, OpenAI made use of large amounts of data, including the text of books that they authored, without their authorization, thereby engaging in direct copyright infringement, violations of the Digital Millennium Copyright Act, and unfair competition. 

Among other things, the plaintiffs allege that OpenAI “knowingly designed ChatGPT to output portions or summaries of [their] copyrighted works without attribution,” and that the company “unfairly profit[s] from and take[s] credit for developing a commercial product based on unattributed reproductions of those stolen writings and ideas.”

Jun. 28, 2023: Plaintiffs P.M., K.S., et al. v. OpenAI LP, et al. 

More than a dozen underage individuals have filed suit against OpenAI and its partner/investor Microsoft in connection with the development and marketing of generative AI products, which allegedly involves the scraping of “vast” amounts of personal data. According to the June 28 complaint, OpenAI and the other defendants have “stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge” in furtherance of their creation and operation of the ChatGPT, Dall-E, and Vall-E programs. And they “continue to unlawfully collect and feed additional personal data from millions of unsuspecting consumers worldwide, far in excess of any reasonably authorized use, in order to continue developing and training the products.” 

The plaintiffs accuse OpenAI of violating: the Electronic Communications Privacy Act; the Computer Fraud and Abuse Act; California’s Invasion of Privacy Act and unfair competition law; Illinois’s Biometric Information Privacy Act and Consumer Fraud and Deceptive Business Practices Act; and New York General Business Law § 349, which prohibits deceptive acts and practices. Beyond that, the plaintiffs also set out negligence, invasion of privacy, intrusion upon seclusion, larceny/receipt of stolen property, conversion, unjust enrichment, and failure to warn causes of action.

UPDATED (Sept. 15, 2023): The unnamed plaintiffs moved to voluntarily dismiss their case against OpenAI and Microsoft without prejudice, which suggests that the parties reached an agreement out of court.

Jun. 5, 2023: Walters v. OpenAI LLC

And in yet another suit being waged against OpenAI, Plaintiff Mark Walters asserts that the company behind ChatGPT is on the hook for libel as a result of misinformation that it provided to a journalist in connection with his reporting on a federal civil rights lawsuit filed against Washington Attorney General Bob Ferguson and members of his staff. In particular, Walters claims that ChatGPT’s case summary (and journalist Fred Riehl’s article) stated that the lawsuit was filed against him for fraud and embezzlement. The problem with that, according to Walters’s lawsuit, is that he is “neither a plaintiff nor a defendant in the lawsuit,” and in fact, “every statement of fact” in the ChatGPT summary that pertains to him is false.

Apr. 3, 2023: Young v. NeoCortext, Inc.

“Deep fake” app Reface is at the center of a proposed class action complaint, with TV personality Kyland Young accusing the company of running afoul of California’s right of publicity law by enabling users to swap faces with famous figures – albeit without receiving authorization from those well-known individuals to use their likenesses. In the complaint that he filed in a California federal court in April, Young asserts that Reface developer NeoCortext, Inc. has “commercially exploit[ed] his and thousands of other actors, musicians, athletes, celebrities, and other well-known individuals’ names, voices, photographs, or likenesses to sell paid subscriptions to its smartphone application, Reface, without their permission.”

NeoCortext has since argued that Young’s case should be tossed out on the basis that the reality TV personality not only fails to adequately plead a right of publicity claim, but even if he could, that claim is preempted by the Copyright Act and barred by the First Amendment. 

Feb. 15, 2023: Flora, et al., v. Prisma Labs, Inc.

Prisma Labs – the company behind the AI image-generating app Lensa A.I. – was named in a proposed class action lawsuit in February, with the plaintiffs arguing that despite “collecting, possessing, storing, using, and profiting from” Lensa users’ biometric identifiers – namely, scans of their “facial geometry” – in connection with its creation of custom avatars, Prisma has failed to properly alert users about the biometric data it collects and how it will be stored/destroyed, as required by the Illinois data privacy law.

UPDATED (Aug. 8, 2023): A N.D. Cal. judge sided with Prisma Labs, granting its motion to compel arbitration in the proposed class action, despite the plaintiffs’ arguments that the arbitration provision in Lensa’s terms is unconscionable and that “because some provisions in the arbitration agreement arguably fall below JAMS’ Consumer Arbitration Minimum Standards, the arbitration provision is illusory.”

Feb. 3, 2023: Getty Images (US), Inc. v. Stability AI, Inc.

In the wake of Getty announcing that it had “commenced legal proceedings” in the High Court of Justice in London against Stability AI, Getty Images (US), Inc. filed a stateside lawsuit, accusing Stability AI of “brazen infringement of [its] intellectual property on a staggering scale.” Specifically, the photo agency argues that Stability AI has copied millions of photographs from its collection “without permission from or compensation to Getty Images, as part of its efforts to build a competing business.” 

In addition to setting out a copyright infringement cause of action and alleging that Stability AI has provided false copyright management information and/or removed or altered copyright management information, Getty accuses Stability AI of trademark infringement and dilution on the basis that “the Stable Diffusion model frequently generates output bearing a modified version of the Getty Images watermark,” thereby creating “confusion as to the source of the images and falsely implying an association with Getty Images.” And beyond that, Getty asserts that “while some of the output generated through the use of Stable Diffusion is aesthetically pleasing, other output is of much lower quality and at times ranges from the bizarre to the grotesque,” giving rise to dilution.

An original Getty Image (left) & one created by Stable Diffusion (right)

In a motion to dismiss in May, Stability AI, Inc. argued that Getty has not even attempted to make a case for jurisdiction under Delaware’s long-arm statute, as it “does not allege that any of the purportedly infringing acts regarding training Stable Diffusion occurred within Delaware.” Instead (and “although the amended complaint is vague in this regard”), Stability AI claims that Getty “appears to allege that the training took place in England and Germany,” pointing to the following language from the plaintiff’s amended complaint: “Stable Diffusion was trained . . . from Datasets prepared by non-party LAION, a German entity…” Getty also does not allege that Stability AI Ltd. “contracted to supply services or things in Delaware,” per Stability AI.

Jan. 13, 2023: Andersen, et al. v. Stability AI LTD., et al.

Stability AI was named in a copyright infringement, unfair competition, and right-of-publicity lawsuit in January 2023, along with fellow defendants DeviantArt and Midjourney. In the lawsuit, a trio of artists accuses Stability AI and co. of engaging in “blatant and enormous infringement” by using their artworks – without authorization – to enable AI-image generators, including Stable Diffusion, to create what are being characterized as “new” images but what are really “infringing derivative works.” 

The defendants have pushed back against the suit, with Stability AI arguing this spring that while Stable Diffusion was “trained on billions of images that were publicly available on the Internet … training a model does not mean copying or memorizing images for later distribution. Indeed, Stable Diffusion does not ‘store’ any images.” Meanwhile, in a filing of its own in April, DeviantArt, the developer of text-to-image generator DreamUp, urged the court to toss out the claims against it and to strike the right-of-publicity claims lodged against it, as they “largely concern the potential for DreamUp to create art,” which falls neatly within the bounds of free speech. As such, the Los Angeles-based online art (and AI) platform says that the plaintiffs’ claims should be barred by California’s anti-SLAPP statute. 

May 6, 2020: Thomson Reuters Enterprise Centre GmbH et al. v. ROSS Intelligence Inc.

In an early generative AI-centric case, Thomson Reuters alleges that ROSS copied the entirety of its Westlaw database (after having been denied a license) to use as training data for its competing generative AI-powered legal research platform. Thomson Reuters’ complaint survived a motion to dismiss in 2021. Fast forward to the summary judgment phase, and ROSS has argued, in part, that its unauthorized copying/use of the Westlaw database amounts to fair use. Specifically, ROSS claims that it took only “unprotected ideas and facts about the text” in order to train its model; that its “purpose” in doing so was to “write entirely original and new code” for its generative AI-powered search tool; and that there is no market for the allegedly infringed Westlaw content consisting of headnotes and key numbers.

*This article was initially published on June 5, and has been updated to reflect newly filed lawsuits and updates in previously-reported cases.

The rapid rise in interest in – and adoption of – artificial intelligence (“AI”) technology, including generative AI, has resulted in global demands for regulation and corresponding legislation. Microsoft, for one, has pushed for the development of “new law and regulations for highly capable AI foundation models” and the creation of a new agency in the United States to implement those new rules, as well as the establishment of licensing requirements in order for entities to operate the most powerful AI models. At the same time, Sam Altman, the CEO of ChatGPT-developer OpenAI, has called for “the creation of an agency that issues licenses for the development of large-scale A.I. models, safety regulations, and tests that A.I. models must pass before being released to the public,” among other things. 

The United States has “trailed the globe on regulations in privacy, speech, and protections for children,” the New York Times reported recently in connection with calls for AI regulation. The paper’s Cecilia Kang noted that the U.S. is “also behind on A.I. regulations” given that “lawmakers in the European Union are set to introduce rules for the technology later this year” in the form of the Regulation on Artificial Intelligence (better known as the “AI Act”). Meanwhile, China currently has “the most comprehensive suite of AI regulations in the world, including its newly released draft measures for managing generative AI.”

What the U.S. has done to date is release non-binding guidance in the form of the AI Risk Management Framework, the second draft of which was released by the National Institute of Standards and Technology in August 2022. Intended for voluntary use, the AI Risk Management Framework aims to enable companies to “address risks in the design, development, use, and evaluation of AI products, services, and systems” in light of the “rapidly evolving” AI research and development standards landscape. 

Shortly thereafter, in October 2022, the White House Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights, which addresses the development, use, and deployment of automated systems and centers on five principles intended to minimize potential harm from AI systems. (Those five principles are: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.)

Despite such lags in regulation, a growing number of new AI-focused bills coming from lawmakers at the federal level are worth keeping an eye on. With that in mind, here is a running list of key domestic legislation that industry occupants should be aware of – and we will continue to track developments for each and update accordingly … 

Sept. 12 – Protect Elections from Deceptive AI Act

Bill: Protect Elections from Deceptive AI Act (S.2770)

Introduced: Sept. 12, 2023

Introduced by/Sponsors: Sens. Amy Klobuchar (D-MN), Chris Coons (D-DE), Josh Hawley (R-MO), and Susan Collins (R-ME) 

Snapshot: The bill would amend the Federal Election Campaign Act of 1971 to prohibit the distribution of deceptive AI-generated audio, images, or video relating to federal candidates in political ads or certain issue ads. The bill would allow federal candidates targeted by materially deceptive content to have that content taken down and would enable them to seek damages in federal court.

What the sponsors are saying: “American democracy faces novel threats from deceptive content generated by artificial intelligence, and we must take action to defend our system of free and fair elections,” said Senator Coons. “Right now, we’re seeing AI used as a tool to influence our democracy. We need rules of the road in place to stop the use of fraudulent AI-generated content in campaign ads. Voters deserve nothing less than full transparency,” said Sen. Klobuchar. “This commonsense, bipartisan legislation would update our laws to prohibit these deceptive ads from being used to mislead voters no matter what party they belong to.”

Status: Sept. 12 – Read twice and referred to the Committee on Rules and Administration. 

Sept. 12 – Advisory for AI-Generated Content Act

Bill: Advisory for AI-Generated Content Act (S.2765)

Introduced: Sept. 12, 2023

Introduced by/Sponsors: Sen. Pete Ricketts (R-NE)

Snapshot: The bill would make it unlawful for an AI-generating entity to create covered AI-generated material unless such material includes a watermark that meets the standards established by the FTC. 

What the sponsors are saying:

Status: Sept. 12 – Read twice and referred to the Committee on Commerce, Science, and Transportation. 

Jul. 28 – CREATE AI Act of 2023 (House)

Bill: Creating Resources for Every American To Experiment with Artificial Intelligence Act of 2023 (H.R.5077)

Introduced: Jul. 28, 2023

Sponsors: Reps. Anna Eshoo (D-CA-16), Michael McCaul (R-TX-10), Don Beyer (D-VA-08), and Jay Obernolte (R-CA-23)

Snapshot: The CREATE AI Act would establish the National Artificial Intelligence Research Resource (NAIRR) as a shared national research infrastructure that provides AI researchers and students from diverse backgrounds with greater access to the complex resources, data, and tools needed to develop safe and trustworthy artificial intelligence.

What the sponsors are saying: “AI offers incredible possibilities for our country, but access to the high-powered computational tools needed to conduct AI research is limited to only a few large technology companies. By establishing the National Artificial Intelligence Research Resource (NAIRR), my bipartisan CREATE AI Act provides researchers from universities, nonprofits, and government with the powerful tools necessary to develop cutting-edge AI systems that are safe, ethical, transparent, and inclusive. Diversifying and expanding access to AI systems is crucial to maintain American leadership in frontier AI that will bolster our national security, enhance our economic competitiveness, and spur groundbreaking scientific research that benefits the public good,” said Rep. Eshoo.

Status: Jul. 28 – Referred to the House Committee on Science, Space, and Technology. 

Jul. 27 – Digital Consumer Protection Commission Act of 2023

Bill: Digital Consumer Protection Commission Act of 2023 (S.2597)

Introduced: Jul. 27, 2023

Sponsors: Sens. Elizabeth Warren (D-MA) and Lindsey Graham (R-SC)

Snapshot: The bill would rein in Big Tech by establishing a new commission to regulate online platforms. The commission would have concurrent jurisdiction with the FTC and DOJ, and would be responsible for overseeing and enforcing the new statutory provisions in the bill and implementing rules to promote competition, protect privacy, protect consumers, and strengthen our national security.

What the sponsors are saying: “The digital revolution provided new opportunities for promoting social interaction, starting businesses, and democratizing information. But digital advancement has a dark side. Today, a tiny number of Big Tech companies generate most of the world’s Internet traffic and effectively regulate Americans’ digital lives. Big Tech companies have far too much power — over our economy, our society, and our democracy. Tech monopolies suppress competition by buying up rivals, preferencing their own products, and charging hefty commissions to other businesses. To get ever more users and data, social media companies manipulate users to drive them to addiction. They target kids with content on self-harm, eating disorders, and bullying. And they leave consumers in the dark about how their data is collected or used, and fall prey to massive data leaks that leave us vulnerable to criminal activity, foreign interference, and disinformation,” Warren and Graham said in a joint statement.

Status: Jul. 27 – Read twice and referred to the Committee on the Judiciary.

Jul. 27 – AI Labeling Act of 2023

Bill: AI Labeling Act of 2023 (S.2691)

Introduced: Jul. 27, 2023

Sponsors: Sens. Brian Schatz (D-HI) and John Kennedy (R-LA)

Snapshot: The bill would require generative artificial intelligence (AI) systems to include a clear and conspicuous disclosure that identifies the content as AI-generated and that is permanent or unable to be easily removed by subsequent users. The bill also outlines obligations for developers and third-party licensees to implement procedures to prevent downstream use of AI systems without the required disclosure.

Status: Jul. 27 – Read twice and referred to the Committee on Commerce, Science, and Transportation. 

Jul. 27 – CREATE AI Act of 2023

Bill: CREATE AI Act of 2023 (S.2714)

Introduced: Jul. 27, 2023

Sponsors: Sens. Martin Heinrich (D-NM), Todd Young (R-IN), Cory Booker (D-NJ), and Mike Rounds (R-SD)

Snapshot: The bill would establish the National Artificial Intelligence Research Resource as a shared national research infrastructure that provides AI researchers and students from diverse backgrounds with greater access to the complex resources, data, and tools needed to develop safe and trustworthy artificial intelligence.

What the sponsors are saying: “We know that AI will be enormously consequential. If we develop and deploy this technology responsibly, it can help us augment our human creativity and make major scientific advances, while also preparing American workers for the jobs of the future. If we don’t, it could threaten our national security, intellectual property, and civil rights,” said Sen. Heinrich. “The bipartisan CREATE AI Act will help us weigh these challenges and unleash American innovation by making the tools to conduct important research on this cutting-edge technology available to the best and brightest minds in our country. It will also help us prepare the future AI workforce, not just for Silicon Valley companies, but for the many industry sectors that will be transformed by AI. By truly democratizing and expanding access to AI systems, we can maintain our nation’s competitive lead while ensuring these rapid advancements are a benefit to our society and country — not a threat.”

Status: Jul. 27 – Read twice and referred to the Committee on Commerce, Science, and Transportation.

Jun. 20 – National AI Commission Act

Bill: National AI Commission Act (H.R.4223)

Introduced: Jun. 20, 2023

Introduced by/Sponsors: Reps. Ted Lieu (D-CA), Ken Buck (R-CO), and Anna Eshoo (D-CA)

Snapshot: The legislation would create a national commission to focus on the question of regulating artificial intelligence (AI). The bipartisan, blue-ribbon commission would review the United States’ current approach to AI regulation, make recommendations on any new office or governmental structure that may be necessary, and develop a risk-based framework for AI. The group would be composed of experts from civil society, government, industry, and labor, as well as those with technical expertise, coming together to develop a comprehensive framework for AI regulation. Sen. Brian Schatz (D-HI) will be introducing companion legislation in the Senate.

What the sponsors are saying: “Artificial Intelligence is doing amazing things for our society. It can also cause significant harm if left unchecked and unregulated. Congress must not stay on the sidelines,” said Rep. Lieu. “However, we must also be humble and acknowledge that there is much we as Members of Congress don’t know about AI. That’s why our bill brings together experts in civil society, government, industry, labor and more to make recommendations on the best ways to move forward on AI regulation. Our bill forges a path toward responsible AI regulation that promotes technological progress while keeping Americans safe.”

“Artificial Intelligence holds tremendous opportunity for individuals and our economy,” said Rep. Buck. “It’s also possible that AI poses a great risk for our national security. I’m proud to lead this bipartisan piece of legislation with Rep. Lieu to ensure that Congress considers expert opinions before the government takes action in this emerging field.”

“As Co-Chair of the bipartisan Congressional Artificial Intelligence Caucus, I understand how complex the issue of artificial intelligence is. The National AI Commission Act is an important first step to bring together stakeholders and experts to better understand how we can regulate AI and what guardrails must be in place as AI become more prevalent across society,” said Rep. Eshoo.

Status: Jun. 20 – Referred to the House Committee on Science, Space, and Technology. 

Jun. 14 – A Bill to Waive Immunity Under Section 230 for Generative AI

Bill: A bill to waive immunity under Section 230 of the Communications Act for claims and charges related to generative AI (S.1993)

Introduced: Jun. 14, 2023

Introduced by/Sponsors: Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT)

Snapshot: The legislation would amend Section 230 by adding a clause that strips immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI.

What the sponsors are saying: “We can’t make the same mistakes with generative AI as we did with Big Tech on Section 230,” said Hawley. “When these new technologies harm innocent people, the companies must be held accountable. Victims deserve their day in court and this bipartisan proposal will make that a reality.”

“AI companies should be forced to take responsibility for business decisions as they’re developing products—without any Section 230 legal shield,” said Senator Blumenthal. “This legislation is the first step in our effort to write the rules of AI and establish safeguards as we enter this new era. AI platform accountability is a key principle of a framework for regulation that targets risk and protects the public.”

Status: Jun. 14 – Read twice and referred to the Committee on Commerce, Science, and Transportation.

Jun. 8 – Global Technology Leadership Act 

Bill: Global Technology Leadership Act (S.1873)

Introduced: June 8, 2023

Introduced by/Sponsors: Sens. Michael Bennet (D-CO), Todd Young (R-IN), and Mark Warner (D-VA)

Snapshot: The legislation would establish an Office of Global Competition Analysis to assess how the United States fares in key emerging technologies – such as artificial intelligence (AI) – relative to other countries to inform U.S. policy and strengthen American competitiveness.

What the sponsors are saying: “We cannot afford to lose our competitive edge in strategic technologies like semiconductors, quantum computing, and artificial intelligence to competitors like China,” said Sen. Bennet. “To defend our economic and national security and protect U.S. leadership in critical emerging technologies, we need to be able to take into account both classified and commercial information to fully assess where we stand. With that information, Congress can make smart decisions about where to invest and how to strengthen our competitiveness.”

“This legislation will better synchronize our national security community to ensure America wins the technological race against the Chinese Communist Party. There is no single federal agency evaluating American leadership in critical technologies like artificial intelligence and quantum computing, despite their significance to our national security and economic prosperity. Our bill will help fill this gap,” said Sen. Young.

“Over the last few years, the U.S. has made significant investments in key sectors like semiconductor manufacturing. But as the U.S. works to out-innovate our global competitors, it’s crucial that we have a meaningful way to track how our progress stacks up against near-peers like China. I’m proud to join this bipartisan effort to create a centralized hub that’s responsible for keeping tabs on these developments, which are critical to our economic and national security,” said Sen. Warner.

Status: Jun. 8 – Read twice and referred to the Committee on Commerce, Science, and Transportation.


This is a short excerpt from a tracker that is published exclusively for TFL Enterprise subscribers. For access to our up-to-date legislation tracker, inquire today about how to sign up for an Enterprise subscription.

Amid ongoing hearings in Washington that focus on the rise and widespread adoption of artificial intelligence (“AI”), including generative AI, and the need for legislation to address corresponding ethics, privacy, infringement, and transparency/neutrality risks, U.S. Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) have announced the launch of a bipartisan framework focused on AI. Calling it “the first tough, comprehensive legislative blueprint for real, enforceable AI protections” in the U.S., Sen. Blumenthal, who is the chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, and Hawley, who is the Subcommittee’s ranking member, say that the framework “should put us on a path to addressing the promise and peril AI portends.” 

The framework includes proposed requirements for the licensing and auditing of AI, the creation of an independent federal office to oversee the technology, liability for companies for privacy and civil rights violations, and requirements for data transparency and safety standards. Sen. Blumenthal stated in connection with the release of the framework that “hearings with industry leaders and experts [will continue],” as will “other conversations and fact finding to build a coalition of support for legislation.” 

In one show of early support, Institute for AI Policy executive director Daniel Colson stated that the AI governance framework is “a major step in the right direction for managing the risks from AI,” noting that “licensing requirements for training and deployment, liability for harms, and limitations on international transfer of software and hardware are three of the most important policy objectives for safety advocates.”

At a high level, the framework aims to … 

Establish a Licensing Regime Administered by an Independent Oversight Body: Companies developing sophisticated general-purpose AI models (e.g., GPT-4) or models used in high-risk situations (e.g., facial recognition) should be required to register with an independent oversight body. Licensing requirements should include the registration of information about AI models and be conditioned on developers maintaining risk management, pre-deployment testing, data governance, and adverse incident reporting programs. The oversight body should have the authority to conduct audits of companies seeking licenses and cooperate with other enforcers, including considering vesting concurrent enforcement authority in state Attorneys General. The entity should also monitor and report on technological developments and economic impacts of AI, such as effects on employment. Personnel must be subject to strong conflict of interest rules to mitigate capture and revolving door concerns.

Ensure Legal Accountability for Harms: Congress should ensure that AI companies can be held liable through oversight body enforcement and private rights of action when their models and systems breach privacy, violate civil rights, or otherwise cause cognizable harms. Where existing laws are insufficient to address new harms created by AI, Congress should ensure that enforcers and victims can take companies and perpetrators to court, including by clarifying that Section 230 does not apply to AI. In particular, Congress must take steps to directly prohibit harms that are already emerging from AI, such as non-consensual explicit deepfake imagery of real people, production of child sexual abuse material from generative AI, and election interference.

Defend National Security and International Competition: Congress should utilize export controls, sanctions, and other legal restrictions to limit the transfer of advanced AI models, hardware and related equipment, and other technologies to China, Russia, and other adversary nations, as well as countries engaged in gross human rights violations.

Promote Transparency: Congress should promote responsibility, due diligence, and consumer redress by requiring transparency from the companies developing and deploying AI systems. This includes: (1) developers should be required to disclose essential information about the training data, limitations, accuracy, and safety of AI models to users and companies deploying systems, including through simple, comprehensible disclosures, and to provide independent researchers access to data necessary to evaluate AI model performance; (2) users should have a right to an affirmative notice that they are interacting with an AI model or system; (3) AI system providers should be required to watermark or otherwise provide technical disclosures of AI-generated deepfakes; and (4) the new oversight body should establish a public database and reporting system so that consumers and researchers have easy access to AI model and system information, including when significant adverse incidents occur or failures in AI cause harms.

Protect Consumers and Kids: Companies deploying AI in high-risk or consequential situations should be required to implement safety brakes, including giving notice when AI is being used to make decisions – particularly adverse decisions – and giving consumers the right to a human review. Consumers should have control over how their personal data is used in AI systems, and strict limits should be imposed on generative AI involving kids.

Artificial intelligence (“AI”) is swiftly permeating all aspects of the fashion industry – from supply chain operations, including improved visibility into future demand, to the creative process, where text prompts can give rise to new designs, and even the models, which brands can develop virtually and use to showcase a wider range of sizes and styles. At first blush, AI – and in particular, generative AI, a type of AI that is capable of generating text, images, or other media using models such as the large language models that power ChatGPT – appears to be an incredibly disruptive technology that brings with it significant gains for those brands that embrace it early, but doing so is not without risk. In light of the potential for reputational, financial, and legal exposure that comes with the adoption of AI, brands must approach this technology with care, from both an ethical and business standpoint.

The main way that brands can protect themselves from disputes and other issues in relation to their use of AI is to carry out proper due diligence on the AI platforms they are working with and to ensure that any contracts with AI service providers are negotiated with the involvement of technical experts and lawyers with appropriate expertise. This includes careful attention to the relevant intellectual property (“IP”) elements. For any company, IP is likely to be at the center of its strategy to futureproof the business. At the same time, since IP is a key source of uncertainty when it comes to AI, it is also a source of potential disputes for companies making use of such technology. 

The output of generative AI systems, for example, may consist of creative content for branding, clothing designs, and images of catwalk models, and may have multiple roles within a business. Among the most critical questions when it comes to the use of generative AI platforms are whether a brand can rely on the IP in the AI-generated outputs as being owned by the brand, and whether a brand will be exposed to infringement claims by third parties if it deploys that output. This represents a very real risk, and the answer to these questions will depend on, among other things, the contractual arrangements between the parties in question and how the law evolves to deal with ownership and infringement in the context of AI-generated works. (Questions of infringement, at both the input and output stages, are currently being considered by courts.)

Beyond IP, there is a myriad of other points that companies should consider carefully before integrating AI into their business. Has sufficient due diligence been carried out on the external AI service provider and their services? What is the supplier actually agreeing to provide, and what objective, measurable standards will they be held to? How will you determine whether or not the AI is functioning properly, especially given that large language models are complex systems that typically come with little transparency as to how they work (both in terms of training and output)? Is the supplier being held to appropriate ethical and data-related standards? 

These questions are important not only from the perspective of a brand’s legal and financial exposure, but also for protecting its reputation.

It is also worth considering how disputes involving AI might best be resolved. This is an area in which confidential arbitrations, as opposed to public court proceedings, are often chosen in dispute resolution clauses, particularly in view of the confidentiality concerns around the personal data of customers, as well as the fact that suppliers are keen to keep the workings of their AI systems out of the public domain in order to avoid making this information available to their competitors. All the while, limiting the resolution of disputes to confidential arbitrations provides companies with the added benefit of reducing the scope for negative media attention. 

Often, the starting point in any commercial dispute is the contract, and the position is no different with AI. As a result, the need for companies to get these contracts right is paramount.


Lizzie Williams is a dispute resolution lawyer at Harbottle & Lewis LLP with experience in fashion, retail, and technology disputes.