Making AI licensing work for publishers of all sizes

The Center for Journalism and Liberty's new report, Same Gatekeepers, New Tollbooths: Mapping the AI Content Licensing Market, deserves to be read carefully by anyone with a stake in independent or specialized publishing. The CJL report's findings are sobering. The AI licensing market is taking shape right now, and it is doing so in ways that mirror the failures of the social media and search eras.
A small number of large bilateral deals dominate the scene while an emerging intermediary layer is being built partly by the same Big Tech firms whose AI products are already eroding publisher revenue. Meanwhile, the majority of publishers sit in an uncompensated third tier outside any licensing framework at all.
The report warns of "content cannibalization," in which AI systems degrade the economic foundation of the very content they depend on, producing a slow deterioration of quality that the report's authors call "sloppification." The mechanism is straightforward: AI reduces traffic to original sources, lost traffic means lost revenue, and lost revenue means less investment in original reporting and specialist content. The resulting scarcity of high-quality content ultimately makes AI outputs less reliable. It is a vicious circle that harms publishers and AI companies alike.
What does a fair AI content licensing market require?
Three factors can help the AI licensing market serve publishers of all sizes rather than just the largest ones: transparency, model-level attribution, and the inclusion of independent and specialized publishers.
Why transparency matters in AI content licensing
The CJL report identifies the opacity of the current system as a major structural flaw. Publishers cannot negotiate fairly when they cannot see how their content is being used. Transparency has to mean more than press releases about headline deals. It requires disclosure of what content is in training sets, how it is being used in inference and retrieval, and what the corresponding compensation obligations are. Without that baseline, there is no meaningful market, only asymmetric extraction.
What model-level attribution means for publishers
When an AI system draws on specialized content to answer a question, the publisher whose work made that answer possible should be visible. This is not just a matter of credit, though of course credit matters. It is a precondition for fair compensation: you cannot pay for what you cannot trace. Attribution at the model level also creates accountability by giving publishers the ability to verify that compensation corresponds to actual use.
Why independent publishers need collective licensing
This is where Same Gatekeepers connects directly to work being done on the ground. Press Gazette reported in March on a new collective licensing scheme being developed by Publishers Licensing Services in the UK with the explicit aim of giving smaller publishers a seat at the table.
The CJL report endorses collective licensing as a structural mechanism that can equalize bargaining power. So does research from the Stigler Center's ProMarket, where economist Christian Peukert has argued that without standardized, non-discriminatory access to licensing frameworks, the current ad hoc deal-making will consolidate both the AI market and the content production market.
Why the window to fix AI licensing is closing
One of the most important observations in Same Gatekeepers is that the window for intervention is narrowing. Market structures are locking in, and the deals being struck today will define the terms of the market for years to come.
But the good news is that there are alternatives, and the CJL report shows how those alternatives can be given teeth. By focusing on transparency, model-level attribution, and the inclusion of independent and specialized publishers, the AI content licensing market can avoid the mistakes of the past and build a better, more equitable future for creators of all sizes.