Software company DoubleVerify (DV) has published a response to Adalytics’ March 28 report on general invalid traffic (GIVT), saying that Adalytics selectively highlighted evidence supporting its claims about advertisers paying for bot-served ads while ignoring contradictory data.
The 240-page report from Adalytics revealed that ads from thousands of brands were being shown to bots, despite many paying extra for services that promised to detect and stop them.
DV argues that Adalytics wrongly claimed that advertisers are billed for GIVT impressions and that pre-bid verification doesn’t work. The company said it follows Media Rating Council (MRC) and Trustworthy Accountability Group (TAG) standards to filter GIVT post-bid and remove sophisticated invalid traffic (SIVT) pre-bid.
As part of its response, DV’s Fraud Lab reviewed every cited impression in Adalytics’ report and validated that all eligible GIVT impressions were correctly flagged.
Additionally, more than 90% of the examples relied on traffic from URLScan, a bot that does not declare itself. Adalytics reportedly miscategorised it as a declared bot, even though URLScan’s CEO confirmed it is not.
Adalytics also conflated GIVT with SIVT, according to DV, thereby misrepresenting DV’s detection capabilities.
DV also said that its post-bid systems ensure advertisers aren’t charged for invalid traffic, even when pre-bid filtering isn’t in place. It also said it confirmed 100% post-bid detection of all known bots and data centres cited in the report.
DV also pointed to what it described as a pattern of inaccuracies in past Adalytics reports, including alleged misinterpretations of DV tag presence, MFA classification, and DSP code meaning.
Shailin Dhar, an industry expert cited multiple times by Adalytics, also publicly criticised the company’s methodology and called for retractions, DV reported. He said: “[Adalytics doesn’t] really understand ‘how’ ad fraud is committed or activated by motivated parties.”
DoubleVerify said in the report, “As we’ve stated before, and evidenced in this case, these appear to be manipulated findings. Results are selectively showcased that support a premise while excluding counterexamples that would undermine it.
“This creates confusion, suggesting that blocking all bots in the study pre-bid is straightforward and universally supported, which it’s not. And once again, even if URLScan is not identified pre-bid, that does not mean the publisher failed to identify and exclude it post-bid from billable counts. (To be clear, publishers are not doing anything wrong here.)”
This article was first published by Performance Marketing World.