May 13, 2024 | By Jonathan Reed | 3 min read

The Digital Millennium Copyright Act (DMCA) is a federal law that protects copyright holders from online infringement. The DMCA covers music, movies, text and anything else under copyright.

The DMCA also makes it illegal to hack technologies that copyright owners use to protect their works against infringement. These technologies can include encryption, password protection or other measures. These provisions are commonly referred to as the “Anti-Circumvention” provisions or “Section 1201”.

Now, a fierce debate is brewing over whether independent hackers should be allowed to legally circumvent Section 1201 restrictions in order to probe AI models. The goal of this sanctioned hacking would be to detect problems such as bias and discrimination.

Proponents of this exemption claim that it would boost transparency and trust in generative AI. Opponents, largely media and entertainment companies, are concerned with protecting data privacy and fear the exemption could enable piracy.

The debate has just begun, and each side is presenting compelling arguments. The U.S. Copyright Office has received comments opposing the Section 1201 Exemption, and proponents have been given the opportunity to reply. A final decision on this AI cybersecurity issue has yet to be made.

Opponents worry about privacy and protection

Opponents of the Section 1201 Exemption say that supporters have failed to meet their burden of proof: “As an initial matter, Proponents do not identify what technological protection measures (“TPMs”), if any, currently exist on generative AI tools or models. This failure alone leads to the conclusion that the request for the proposed exemption should be denied.”

Those opposed to the exemption also say it is too broad and based on a “sparse, undeveloped record,” and they urge the Copyright Office to reject “belated attempts through the proposal to secure an expansion of the security research exemption to include generative AI models.”


Supporters worry about AI bias

Section 1201 Exemption supporters, like the Hacking Policy Council, say that the proposed exemption would only “apply to a particular class of works: computer programs, which are a subcategory of literary works. The proposed exemption would apply to a specific set of users: persons performing good faith research, as defined, under certain conditions. These are the same parameters that the Copyright Office uses to describe other classes of works and sets of users in existing exemptions.”

Supporters also say they back “the petition to protect independent testing of AI for bias and alignment (“trustworthiness”) because we believe such testing is crucial to identifying and fixing algorithmic flaws to prevent harm or disruption.”
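To make concrete what this kind of independent bias testing can involve, here is a minimal sketch in Python. It is illustrative only: query_model is a hypothetical stand-in for whatever interface a researcher has to the model under audit, and a real audit would use far larger prompt sets and statistical analysis.

```python
# A minimal sketch of a paired-prompt bias probe. query_model is a
# hypothetical placeholder for the researcher's hook into the model
# under test; real audits run many trials and measure outcome rates.

def query_model(prompt: str) -> str:
    """Hypothetical stand-in: replace with a call to the model being audited."""
    return "approved"  # placeholder completion so the sketch runs end to end

# Prompt variants that differ only in a single demographic attribute.
TEMPLATE = "The {group} applicant was evaluated for the loan. Decision:"
GROUPS = ["male", "female", "younger", "older"]

def run_probe() -> dict:
    """Collect the model's completion for each demographic variant."""
    return {group: query_model(TEMPLATE.format(group=group)) for group in GROUPS}

if __name__ == "__main__":
    for group, completion in run_probe().items():
        print(f"{group:>8}: {completion}")
    # Systematic divergence across groups (e.g., different approval rates
    # over many trials) is the kind of evidence a bias audit looks for.
```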

The bigger picture

Generative AI is artificial intelligence (AI) that can create original content — such as text, images, video, audio or software code — in response to a user’s prompt or request.
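In code terms, that prompt-to-content loop can be as small as a few lines. The sketch below uses the open-source Hugging Face transformers library and the small GPT-2 model purely as an example; any text-generation model would illustrate the same idea.

```python
# Minimal prompt-driven text generation (assumes the transformers library
# and the GPT-2 weights are available; any generative model works similarly).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
output = generator("The future of AI regulation", max_new_tokens=30)
print(output[0]["generated_text"])
```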

Recently, the world has witnessed an unprecedented surge of AI innovation and adoption. Generative AI offers enormous productivity benefits for individuals and organizations but presents very real challenges and risks. All this has led to a flurry of conversations surrounding how to regulate generative AI, and the Section 1201 Exemption is but one example.

The debate is playing out on a global scale. The EU AI Act, for example, aims to be the world’s first comprehensive regulatory framework for AI applications. The Act bans some AI uses outright while imposing strict safety and transparency standards on others. Penalties for noncompliance can reach EUR 35,000,000 or 7% of a company’s annual worldwide revenue, whichever is higher.
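For a rough sense of how that "whichever is higher" ceiling scales, it can be expressed as a one-line calculation. This is a simplified sketch of the top penalty tier only; the Act defines several lower tiers for other violations.

```python
def eu_ai_act_max_penalty(annual_worldwide_revenue_eur: float) -> float:
    """Ceiling for the most serious violations: EUR 35 million or 7% of
    annual worldwide revenue, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_worldwide_revenue_eur)

# Example: at EUR 1 billion in revenue, the 7% tier (EUR 70 million) governs.
print(eu_ai_act_max_penalty(1_000_000_000))  # 70000000.0
```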

Nobody knows who will win these arguments over AI security issues. But the future use and limits of generative AI hang in the balance.
