By Alex Gardner.
It’s been well over a year since header bidding took hold of the programmatic marketplace. While much of the ongoing prattle in ad tech will continue to debate its virtues, header’s grip continues to tighten.
With recent announcements from the likes of Facebook, whose Audience Network demand has established its own path to accessing supply via the header, and the expansion into new supply channels such as video, it’s clear that header bidding is putting down deep and effective roots. Furthermore, with the addition of server-side support for exchange bidders, the limitations once imposed by web browsers are circumvented, allowing publishers to open up to ever more demand.
With these advancements, the complexities compound, intensifying the need for a clear set of standard measures. While that may seem obvious, many publishers are still evaluating results as though the waterfall reigned.
Publishers must first accept that the waterfall is dead — and with it, the simplified performance indicators by which results were formerly measured. With the opportunity to review results in a more holistic way and easily compare and contrast different partners, publishers are now responsible for determining the net contribution of their exchange bidders relative to their costs and impacts.
First, forget about impressions. The new currency in its rawest form is the bid request: the ad slot offered up to bidders, rather than the impression ultimately served. This is now the unit of trade, and its volume depends on the number of ads per page, multi-unit support, lazy loading and so on. It also forms the basis of what gets counted by the demand-side platform.
With the dial set to bid requests, we can start to understand relative performance across exchange bidders by assessing their bid rates and win rates. Bid rate is the frequency with which an exchange bidder returns non-zero bids, as a percentage of the total eligible bid requests it receives. Win rate is the percentage of an exchange bidder’s bids that carry the highest value against all sources of demand and therefore win the auction.
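As a concrete illustration, both rates can be computed from a wrapper’s raw bid log. A minimal sketch follows; the record shape, field names and bidder names are invented for the example, not taken from any particular wrapper’s reporting API:

```javascript
// Hypothetical bid log: one record per bid request sent to a bidder.
// bidCpm of 0 means the bidder passed (returned no bid).
const bidLog = [
  { bidder: 'exchangeA', bidCpm: 2.10, won: true },
  { bidder: 'exchangeA', bidCpm: 0,    won: false }, // no bid returned
  { bidder: 'exchangeA', bidCpm: 1.25, won: false }, // bid, but outbid
  { bidder: 'exchangeB', bidCpm: 0,    won: false },
  { bidder: 'exchangeB', bidCpm: 3.40, won: true },
];

function bidderStats(log, bidder) {
  const requests = log.filter(r => r.bidder === bidder);
  const bids = requests.filter(r => r.bidCpm > 0);
  const wins = bids.filter(r => r.won);
  return {
    bidRate: bids.length / requests.length, // non-zero bids / eligible requests
    winRate: wins.length / bids.length,     // winning bids / bids submitted
  };
}

const statsA = bidderStats(bidLog, 'exchangeA');
// statsA.bidRate is 2/3 (two non-zero bids out of three requests)
// statsA.winRate is 0.5 (one win out of two bids)
```

Real wrapper reporting aggregates the same ratios at scale, but the arithmetic is exactly this.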
Suffice it to say, this is an oversimplification given the myriad factors that can affect a bidder’s competitiveness, including pricing, auction dynamics and priority within the ad server. However, even in the absence of perfect parity across all bidders, close monitoring of bid and win rates remains essential.
Equally important to assessing the competitiveness of exchange bidders is weighing the cost they may impose. There are two major costs to consider: resource requirements and latency.
While the former can be a challenge to quantify, the latter can, and should, be carefully measured. Timeout thresholds, as configured by the publisher, offer another lever by which to optimize bidder performance while minimizing impact on user experience.
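In Prebid.js, one widely used header bidding wrapper (the article does not name a specific wrapper), that timeout lever is a single configuration setting; the 700ms value below is illustrative, not a recommendation:

```javascript
// Prebid.js example: cap how long the wrapper waits for bidders to respond.
// Bids that arrive after the timeout are discarded before the auction.
pbjs.que.push(function () {
  pbjs.setConfig({
    bidderTimeout: 700 // milliseconds; tune against your own latency data
  });
});
```

Tightening this value protects page speed at the cost of dropping slower bidders’ responses, so it should be tuned against measured bidder latency rather than set arbitrarily.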
Users want fast pages, and those expectations are putting major pressure on bidders to differentiate on speed and infrastructure. Publishers also have the option of moving slower bidders off the page and into a server-to-server connection. Though the thresholds publishers impose may still vary widely, exchange bidders that can’t compete in low-millisecond timeframes will see fewer and fewer bids making it to auction, and thus a lower win rate.
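To see how a timeout threshold translates directly into lost bids, here is a small sketch; the bidder names, latencies and prices are invented for illustration:

```javascript
// Simulated bidder responses: how long each took to return a bid, in ms.
const responses = [
  { bidder: 'fastExchange', latencyMs: 120, bidCpm: 1.80 },
  { bidder: 'fastExchange', latencyMs: 180, bidCpm: 2.05 },
  { bidder: 'slowExchange', latencyMs: 450, bidCpm: 2.60 },
  { bidder: 'slowExchange', latencyMs: 900, bidCpm: 3.10 },
];

// Only bids that beat the publisher's timeout ever reach the auction.
function bidsReachingAuction(responses, timeoutMs) {
  return responses.filter(r => r.latencyMs <= timeoutMs);
}

// At a 300ms timeout, slowExchange's higher-priced bids never compete:
const eligible = bidsReachingAuction(responses, 300);
// eligible holds only the two fastExchange bids
```

Note the trade-off this exposes: the slow bidder’s bids were actually the highest-priced, so an aggressive timeout can cost revenue even as it protects the user experience.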
Publishers should look at these inputs apart from the revenue they’re generating, closely review wrapper reporting stats, and ensure they’re applying this new set of KPIs across all partners. When adding a new partner, simple tools are now available to ensure you’re only engaging with those that drive meaningful value. The move server-side shifts certain conditions, but it doesn’t eliminate the need for publishers to objectively evaluate the strength and quality of their exchange partners.