Written by Brook Schaaf
In last week’s summary judgment in Thomson Reuters v. ROSS, the court ruled against the defendant, an AI company, because it copied content known as headnotes from Thomson Reuters’ Westlaw property to intentionally create a competing product.
This case stirred the cobwebs of my mind and led to a quick Perplexity search for “guy whose act was being shot out of a cannon.” This quickly turned up Zacchini, a human cannonball whose 15-second live act was filmed and broadcast in its entirety by a television station, which presumably advertised against the appropriated content. He sued.
According to the AI summary, the Supreme Court ruled in favor of Zacchini, finding that the First and Fourteenth Amendments do not immunize the media from civil liability: he had a “right of publicity” claim under state law, and broadcasting the entire act undermined the commercial value of his performance.
In recent decades, Google and others have prevailed against similarly spirited lawsuits. Courts have ruled in favor of the information aggregators largely based on public accessibility, public benefit, and the fair use doctrine, including transformative use, meaning the original content was altered enough to make it a separate product (the court rejected the transformative use defense in the Reuters case).
The mixed results of hiQ v. LinkedIn, which made it all the way to the Supreme Court and back, seemed to allow for public page scraping. In addition to the above factors, affiliate-famous lawyer Gary Kibel added that “contracts of adhesion” and “a reality of how the internet works” are also considerations.
In this same vein, Big Tech has reportedly allowed training on any and all content it can reach, which can have the effect of a DDoS attack. Unsurprisingly, OpenAI failed to deliver on a promised opt-out tool by 2025. The motivation of the AI companies to train is clear, but now so are some of the consequences. In the face of criticism, some proponents have brazenly claimed a “right to train.” This will ultimately be a matter for legislators and the courts. Over 30 additional cases are pending, including multiple publishers against Cohere and, of course, New York Times v. OpenAI.
In its summary of Authors Guild v. Google, Justia.com states that “Google’s commercial nature and profit motivation do not justify denial of fair use.” In the age of AI, I wonder whether society’s and the law’s understanding won’t swing back to something more like the 1977 Zacchini decision, which recognized that the commercial value of content may be reduced or even negated through unauthorized distribution. The likely negative consequences for affiliate monetization are obvious.
One must indeed accept reality, including the use of third-party data brokers, but when the original content can be identified, especially through regurgitation or citation, and when the old or new content has economic value, it is fair to recognize that another party with a “commercial nature” is acting on its “profit motive.”
The reference to state law in Zacchini makes me wonder if state-level legislation isn’t the solution, at least in the USA. Publishers could easily work with servers domiciled in friendly states. A possible litmus test: Does the content hold commercial relevance beyond an impression? Think subscriptions, clicks, sales, leads, etc.
If a more equitable approach is not found, publishers may feel as though they’ve been blasted out of a cannon: spectacular reach in the moment, but no control over where they land or whether they even stay intact when they do.