Every fortnight or so we’ll bring you some technical updates that we feel you’ll find useful.
Today’s topics are a quick overview of the four standards released for feedback by IAB Tech Lab under Project Rearc, a look at the recent Stanford AI Index report, and some insights from last week’s programmatic summit in Sydney.
Standards For Responsible Addressability & Predictable Privacy
The inaugural wave of releases from Project Rearc is now out; congratulations to IAB Tech Lab and everyone who collaborated on them. The full versions are all available here, but we wanted to provide a very quick summary to get you started.

The core purpose of these new specs is to lay the foundation for addressable targeting, measurement and attribution solutions that incorporate accountability and compliance. There are four proposed standards: two platforms supporting accountability and privacy, complemented by two addressability-related releases.
Accountability Platform
These specifications propose an Accountability Platform that enables all ecosystem participants (via open, auditable data structures and standard practices) to demonstrate that they are complying with user preferences consistently and persistently. The intent is to reliably demonstrate that the digital advertising supply chain conforms to the preferences and restrictions set by users and the various digital properties they visit.
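The spec itself defines the actual data structures, but purely to illustrate what "open, auditable" can mean in practice, here is a minimal Python sketch of a hash-chained log of consent events. Every field name here is hypothetical, not taken from the release; the point is only that tamper-evident records let any participant prove, after the fact, that a preference was honoured.

```python
import hashlib
import json

def make_audit_record(prev_hash: str, event: dict) -> dict:
    """Create a tamper-evident record linked to the previous entry."""
    body = {"prev_hash": prev_hash, "event": event}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

# Hypothetical events: a user opts out, then a downstream vendor honours it.
genesis = "0" * 64
r1 = make_audit_record(genesis, {"user": "u123", "pref": "opt-out", "ts": 1})
r2 = make_audit_record(r1["hash"], {"vendor": "dsp-x", "action": "suppressed", "ts": 2})

def verify_chain(records: list, genesis_hash: str) -> bool:
    """Any edit to an earlier record breaks every later hash."""
    prev = genesis_hash
    for rec in records:
        body = {"prev_hash": rec["prev_hash"], "event": rec["event"]}
        if rec["prev_hash"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

Because each record commits to its predecessor's hash, quietly rewriting history is detectable by anyone holding the chain, which is the kind of property an accountability audit needs.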
The Global Privacy Platform
This platform would streamline technical privacy and data-protection signaling standards into a single schema and set of tools that can then adapt to regulatory and commercial market demands across all channels. The industry is obligated to ensure that the efforts of the Accountability Platform can be managed globally, as regulations vary (and continue to change) quite dramatically both locally and regionally.
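To make "a singular schema" concrete: the idea is one container that carries per-regulation sections, so downstream parties parse a single structure rather than one ad hoc signal per law. The sketch below is illustrative only; the section names and fields are our own shorthand, not the proposed wire format, which is defined in the spec itself.

```python
# Hypothetical unified privacy signal: one envelope, per-jurisdiction sections.
privacy_signal = {
    "version": 1,
    "sections": {
        "tcfeuv2": {"consent": True, "purposes": [1, 3, 4]},  # EU (GDPR / TCF)
        "uspv1":   {"opt_out_sale": "Y"},                      # California (CCPA)
    },
}

def applicable_signal(signal: dict, jurisdiction: str):
    """Return the section that applies to the user's jurisdiction, if any."""
    return signal["sections"].get(jurisdiction)
```

A receiver only ever parses the envelope; when a new regulation arrives, it becomes a new section rather than a new signaling standard.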
Best Practices for User-Enabled Identity Tokens
These establish guidelines for the encryption and use of user-provided identifiers (notably email addresses and phone numbers) in scenarios where online publishers or marketers want to offer personalised content or services tied to these user-provided IDs.
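The exact encryption and handling requirements are in the release itself, but the widely used pattern behind such tokens is normalise-then-hash, so the raw address never has to leave the publisher. A minimal sketch (the normalisation rules shown are the common convention, not necessarily the Tech Lab guidelines):

```python
import hashlib

def hashed_email_token(email: str) -> str:
    """Normalise an email address, then hash it so the raw value
    is never shared downstream. Normalisation here (trim + lowercase)
    is the common convention, not an official spec requirement."""
    normalised = email.strip().lower()
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

# Both forms of the same address yield the same token:
hashed_email_token("  Jane.Doe@Example.com ") == hashed_email_token("jane.doe@example.com")  # True
```

Consistent normalisation is the important part: without it, two parties hashing the same address differently can never match their tokens.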
Taxonomy and Data Transparency Standards
These standards (e.g. the Data Label) support seller-defined audience and context signaling, and aim to bring some consistency and structure to marketing’s usage and naming conventions for contextual and audience data, through IAB Tech Lab’s various available and updated taxonomies. The proposal is to apply anonymised audience and content taxonomy IDs and Data Transparency Standard metadata within OpenRTB to support privacy-centric addressability and first-party data monetisation.
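In OpenRTB terms, seller-defined segments travel in the bid request's data objects, with a pointer to which IAB taxonomy the segment IDs come from. The fragment below (shown as a Python dict) follows the direction of the proposal, but the specific IDs and values are made up for illustration:

```python
# Illustrative OpenRTB user-data fragment carrying seller-defined audience
# segments. "segtax" indicates which IAB taxonomy the segment IDs resolve
# against; the concrete values here are hypothetical.
bid_request_user = {
    "data": [
        {
            "name": "publisher.example",   # hypothetical: who defined the segments
            "segment": [{"id": "784"}],    # anonymised taxonomy segment ID
            "ext": {"segtax": 4},          # which taxonomy the IDs belong to
        }
    ]
}
```

Because buyers see only an anonymised taxonomy ID plus its taxonomy reference, the publisher can monetise first-party audiences without exposing who is in them.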
The feedback period for the Global Privacy Platform spec is 30 days (ending April 8th), and the comment period for the Accountability Platform and the two addressability specs is 60 days (ending May 7th).
The full sets of specs are available here, including details of the formal feedback process.
Some Insights from the recent Programmatic Summit in Sydney
Ashton Media’s Programmatic Summit last week felt symbolically important as a sense of normality gradually returns to our industry, at least in terms of networking, public presentations and constructive discussions. My personal highlights from the day are below, though my scope is slightly limited: I was stewarding one of the breakout rooms (A.K.A. ‘the fun room’) and I’m hopelessly restless at industry events, so I have no doubt neglected some key sessions.
- An emotional standing ovation in memory of Sam Smith following some kind words from Dan Robins in his opening notes, recognising Sam’s tragic passing last year.
- A reminder of Playground XYZ’s genuinely refreshing approach to capturing and measuring advertising attention through a new customised metric, Attention Time. Click here for the whitepaper, and click here for a reminder of why they won the inaugural MeasureUp Advertising Effectiveness award last year.
- An insight into an honest and genuine digital transformation story from Samsung rather dramatically labelled as their ‘first-party data power play’. We were walked through the efforts to balance the needs of both the consumer and customer – as well as the unexpected levels of creativity involved in what always feels like such a purely scientific initiative. CHE Proximity very competently provided a key agency perspective and this session was ultimately packed with genuine learnings, guidance and some refreshing honesty. For some more info see here.

- A really clear and open session on audience targeting and user authentication, featuring some local heavyweights from Matterkind, LiveRamp and OpenX. A really nice mix of genuine insights and frank honesty from a well-rounded, experienced and very competent group. Thumbs up all round.
- DFINITY Founder and Chief Scientist Dominic Williams provided a closing keynote packed with smart, refreshing and unique insights as he laid out their vision for transforming the public internet into a breakthrough computing platform that will renew the creative capacity of the web. A fascinating world-class knowledge session that left everyone honestly reeling. A high-quality close.
Stanford University’s 2021 AI Index annual report
As our industry obsesses over all things machine learning (now ubiquitously labelled AI), it’s always reassuring that someone is keeping an eye on the global trends as this space exponentially industrialises.
The AI Index is compiled by the Stanford Institute for Human-Centered Artificial Intelligence with the support of an 11-member steering committee, and with contributors from Harvard University, the OECD, the Partnership on AI, and SRI International. The full report is freely available here, and the nine key takeaways are below for you to chew over.
The report is much broader than expected, and as this is a space well worthy of regular review for its inevitable impacts on our industry, points 7 and 8 particularly grabbed my attention. The full report is stacked with interesting data points, insights and graphs, and as previously mentioned these capabilities are evolving very fast and it’s still very early days. Keep an eye on it, would be my advice…
1. AI investment in drug design and discovery increased significantly
“Drugs, Cancer, Molecular, Drug Discovery” received the greatest amount of private AI investment in 2020, with more than USD 13.8 billion, 4.5 times higher than 2019.
2. The industry shift continues
In 2019, 65% of graduating North American PhDs in AI went into industry—up from 44.4% in 2010, highlighting the greater role industry has begun to play in AI development.
3. Generative everything
AI systems can now compose text, audio, and images to a sufficiently high standard that humans have a hard time telling the difference between synthetic and non-synthetic outputs for some constrained applications of the technology.
4. AI has a diversity challenge
In 2019, 45% of new U.S. resident AI PhD graduates were white—by comparison, 2.4% were African American and 3.2% were Hispanic.
5. China overtakes the US in AI journal citations
After surpassing the US in the total number of journal publications several years ago, China now also leads in journal citations; however, the US has consistently (and significantly) more AI conference papers (which are also more heavily cited) than China over the last decade.
6. The majority of US AI PhD grads are from abroad—and they’re staying in the US
The percentage of international students among new AI PhDs in North America continued to rise in 2019, to 64.3%—a 4.3% increase from 2018. Among foreign graduates, 81.8% stayed in the United States and 8.6% have taken jobs outside the United States.
7. Surveillance technologies are fast, cheap, and increasingly ubiquitous
The technologies necessary for large-scale surveillance are rapidly maturing, with techniques for image classification, face recognition, video analysis, and voice identification all seeing significant progress in 2020.
8. AI ethics lacks benchmarks and consensus
Though a number of groups are producing a range of qualitative or normative outputs in the AI ethics domain, the field generally lacks benchmarks that can be used to measure or assess the relationship between broader societal discussions about technology development and the development of the technology itself. Furthermore, researchers and civil society view AI ethics as more important than industrial organizations.
9. AI has gained the attention of the U.S. Congress
The 116th Congress is the most AI-focused congressional session in history with the number of mentions of AI in congressional record more than triple that of the 115th Congress.