Danny Rogers is the co-founder of the Global Disinformation Index, a British firm that works to defund disinformation sites. Rogers also teaches at New York University and is a fellow at the Truman National Security Project. This interview has been lightly edited for length and clarity.

Boigon: GDI has been doing a lot of research on how brands and institutions are inadvertently funding disinformation when their ads are placed on sites. How does this work?

Rogers: Ad support for media was always incidental. You create content, you get it out there and then you monetize it. What we've seen now is the perversion of that equation, where the content itself is manipulated, adapted and curated purely to maximize attention and thus squeeze out every ad dollar. Many of these threat actors, including even some state threat actors, are primarily, or very nearly primarily, motivated by the profit incentive created by this ad tech ecosystem.

Boigon: Do brands know that they are placing ads on disinformation sites funded by foreign and domestic bad actors?

Rogers: I think the thing that people don't understand is that when you see a Ford Motors or whatever advertisement on a page that you're visiting, rarely, if ever, has that brand explicitly stated, "We would like to show up on that page." So much of open-web ad machinery is what's called "real-time bidding" for programmatically placed advertisements, meaning that the advertiser has indicated you have certain characteristics — demographic, behavioral or otherwise — that they are targeting, and the tracking mechanisms of the internet identify you as such, and Google places that ad in front of you wherever you go on the internet. That's why you'll see similar ads following you around — not because brands are explicitly endorsing the content, but because of the way the ad tech is plumbed: the exchange funnels money to whichever site happens to be displaying that ad.
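The real-time bidding mechanism Rogers describes can be sketched as a toy auction. This is a simplified illustration, not how any real exchange works; the advertiser names, traits and prices are invented, and real RTB involves many more parties and steps:

```python
# Toy sketch of a real-time bidding (RTB) auction, illustrating the
# mechanism Rogers describes. All names and numbers are hypothetical.

def run_auction(user_traits, page_url, bidders):
    """Each advertiser bids based on the user's tracked traits alone;
    the page's content never enters the decision."""
    bids = []
    for name, campaign in bidders.items():
        # Bid if any targeted trait matches the tracked user profile.
        if campaign["target_traits"] & user_traits:
            bids.append((campaign["max_bid"], name))
    if not bids:
        return None
    bids.sort(reverse=True)
    winning_bid, winner = bids[0]
    # The site displaying the ad collects revenue regardless of its content.
    return {"winner": winner, "paid_to": page_url, "price": winning_bid}

bidders = {
    "AutoCo": {"target_traits": {"age_25_34", "car_intender"}, "max_bid": 2.50},
    "PharmaCo": {"target_traits": {"health_reader"}, "max_bid": 1.75},
}

# The tracked user is followed to a dubious site; the ad still places there,
# so the ad dollars flow to that site's operator.
result = run_auction({"car_intender"}, "dubious-news.example", bidders)
print(result)
```

The point of the sketch is the last line: the auction keys entirely on the user, so the same brand's money lands on whatever page the user happens to be visiting.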
Boigon: What is the solution here? Is it regulatory?

Rogers: All of the data being shared to enable this precise targeting of advertisements is, in and of itself, a pretty massive privacy invasion. But I think the biggest answer to your question is that the brands themselves don't want this. The brands are particularly terrified of the nightmarish brand-safety scenario where, say, a Pfizer ad shows up alongside anti-vaxx content. Because of the monopolistic nature of the ad industry, the dominant ad exchange, Google, has little to no incentive to offer brands the definitive ability to make that selection, and in fact it's actually against Google's interest. Once brands have that level of control, they can choose not to place ads in certain places — which means, very frankly, that Google makes less money. Part of the regulatory solution is the antitrust action happening now, because the brands are relatively powerless. They can't really go anywhere else for programmatic ads, because the space is so dominated by a few massive players.

Boigon: How much money goes to disinformation sites from brand advertisements being placed by ad tech companies?

Rogers: We did a study last year that tried to put a lower bound on the estimate of how much money we're talking about industry-wide, and our estimate is that it's at least, but probably significantly more than, a quarter of a billion dollars a year.
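The brand-level control Rogers says exchanges withhold would, in principle, be a simple pre-bid exclusion check. A minimal sketch, assuming an advertiser-supplied domain blocklist; the brand name and domains are invented for illustration:

```python
# Hypothetical pre-bid brand-safety filter: before entering a bid on a
# page, the exchange checks the advertiser's exclusion list.
# Brand, campaign fields and domains are invented for illustration.

def eligible_to_bid(campaign, page_domain):
    """Return False if the advertiser has excluded this domain."""
    return page_domain not in campaign["excluded_domains"]

campaign = {
    "brand": "PharmaCo",
    "excluded_domains": {"anti-vaxx-news.example", "dubious-news.example"},
}

print(eligible_to_bid(campaign, "reputable-news.example"))   # True: bid proceeds
print(eligible_to_bid(campaign, "anti-vaxx-news.example"))   # False: no bid, no revenue
```

Every excluded domain is a bid the exchange never runs, which is the incentive conflict Rogers points to: the filter costs the exchange money each time it fires.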