A New Kind of Commercial Speech Problem
A recent New York Post article reported that ChatGPT recommended a specific Korean beauty cream — conveniently available at a 40% discount on Amazon — and that the recommendation was presented as organic, experience-based advice. The cream "actually fixed my skin barrier," the headline promised. ChatGPT said so.
This is either the most natural development in the history of consumer recommendation or a significant undisclosed commercial arrangement presented as neutral AI output. Possibly both. The legal framework for determining which doesn't exist yet.
That gap deserves attention, because it's going to get much larger before anyone closes it.
What the FTC Currently Requires
The Federal Trade Commission's endorsement guidelines, updated most recently in 2023, require that material connections between endorsers and the products they recommend be clearly disclosed. An influencer paid to promote a product must say so. A celebrity with an equity stake in a brand must say so. The requirement applies to humans because the legal framework was built around human commercial speech.
What happens when the recommender is an AI system trained on data that may include commercial arrangements, affiliate relationships, or promotional content — and that doesn't inherently know or disclose the provenance of its training signals? The 2023 FTC guidance gestured at AI but did not resolve the question. The Commission has since issued some preliminary statements. None of them constitute binding legal clarity.
OpenAI has affiliate arrangements with some commercial partners. The details of those arrangements, and how they affect product recommendations surfaced by ChatGPT, are not publicly disclosed at the level that would satisfy the standard the FTC applies to human endorsers. That asymmetry is the problem.
The First Amendment Dimension
Commercial speech receives less First Amendment protection than political or artistic speech — this is settled doctrine going back to Virginia State Board of Pharmacy v. Virginia Citizens Consumer Council (1976) and refined through Central Hudson Gas & Electric Corp. v. Public Service Commission (1980). The government can regulate commercial speech that is false or misleading without triggering strict scrutiny.
The question is whether AI-generated product recommendations constitute commercial speech in the constitutional sense, and if so, whose speech they are. The AI doesn't have constitutional rights. OpenAI does. If OpenAI's system surfaces product recommendations that function as commercial endorsements, OpenAI is the speaker, and disclosure requirements attach to OpenAI as a corporate entity.
This is not a difficult legal analysis. What's difficult is getting any regulatory agency to apply it while the AI companies have the political wind at their backs and have successfully positioned themselves as innovation engines rather than commercial media companies with financial interests in the recommendations they surface.
Why This Matters Beyond Skincare
The K-beauty cream is a minor example. The principle scales to everything. Financial products. Medical treatments. Legal services. News sources. Political candidates.
When a trusted, conversational AI system — one that millions of Americans interact with as though it were a knowledgeable friend rather than a commercial platform — surfaces recommendations that benefit entities with commercial relationships to the system's operator, and does so without disclosure, that is a form of commercial deception that the legal system should be able to address.
The failure to address it isn't a legal impossibility. The tools exist: FTC disclosure requirements, Section 5 of the FTC Act's prohibition on unfair or deceptive practices, state consumer protection laws. The failure is one of regulatory will and institutional speed. The AI companies are moving faster than the regulators, and the regulators are staffed with people who are genuinely uncertain whether the old frameworks apply to new technologies.
They do. The technology is new. The deception is old. When you recommend a product to someone without telling them you benefit from the recommendation, the harm to the recipient doesn't depend on whether the recommender is human or algorithmic. The standard should apply consistently.
Your skin barrier is your business. Just know who else is invested in what you put on it.