Set it and forget it

Automating Amazon’s Retail Ad Component Selection

💡 TL;DR

Problem
Retail media formats were gaining traction, but both marketers and TripleLift’s internal teams were unclear on how to optimize them—and automation was limited.

What I did
Led a two-phase research initiative combining market analysis and product testing. First, I co-authored a report mapping creative variable performance across the funnel. Then, I collaborated with Product on internal user research to automate key ad component selection.

What changed
Our findings led to automatic retail unit customization, improved visual relevance, and better publisher fit—ultimately reducing user friction and enhancing campaign performance.

Problem

Retail media is a newer, fast-growing format in ad tech.

But at TripleLift, internal usage data and client feedback revealed consistent challenges:

  • Marketers weren’t sure how to optimize retail formats for different campaign goals

  • Internal product and revenue teams lacked a clear playbook for where and how these formats worked best

  • Ad component selection (e.g., imagery, messaging, pricing) still relied on manual work, making the user experience clunky and slow

Our research aimed to answer two big questions:

  1. What makes retail ad formats perform?

  2. How can we automate smart, scalable asset selection for these formats?

Research Goals & Methods

Phase 1: Market Insights (External Report)

We began by co-authoring The Marketer’s Building Blocks for Success, a comprehensive study of creative performance across funnel stages.
Our goals were to:

  • Quantify the impact of Targeting, Formats, Imagery, and Messaging across KPIs (Unaided Awareness, Brand Awareness, Preference, and Intent)

  • Explore how Advanced Creative Technology (Computer Vision, Dynamic Templating, Placement Optimization) influences performance

  • Help marketers and internal teams prioritize the right creative variables per objective

Methods included:

  • Eye-tracking studies with post-exposure surveys (n=700)

  • Mock ad placements tested with and without creative tech enhancements

  • Data visualization of funnel-stage impact by creative variable

Phase 2: Product-Facing Research

Using what we learned in the market report, we transitioned into internal UX research to improve product design and automation.

  • Internal ad interaction analysis to determine which creative variables mattered most for engagement

  • A/B testing to understand which publisher environments drove the best performance (a simple comparison sketch follows this list)

  • Stakeholder workshops with Product and Engineering to explore automation opportunities
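
For illustration, here is a minimal sketch of how two publisher environments could be compared in such an A/B test, using a two-proportion z-test on click-through rate. The counts, environment labels, and function name are hypothetical, not study data.

# Hypothetical sketch: compare CTR for the same retail unit across two
# publisher environments. All numbers below are illustrative assumptions.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(clicks_a: int, imps_a: int,
                          clicks_b: int, imps_b: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) for CTR(A) vs CTR(B)."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))  # standard error
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Example: environment A vs environment B for one retail ad format.
z, p = two_proportion_z_test(clicks_a=540, imps_a=120_000,
                             clicks_b=455, imps_b=118_000)
print(f"z = {z:.2f}, p = {p:.4f}")

In practice a comparison like this would run per ad format and environment pair, with the winning environments feeding the routing work described under Action & Impact.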


Findings & Insights

We found that performance hinged on three key variables:

Imagery + Contextual Fit = Performance
When an ad's image matched the environment and was well-rendered (not cropped or misaligned), brand awareness jumped by +11%
Eye-tracking study insight

Dynamic Templating = Faster comprehension
Retail media units with adaptive layouts were seen as 36% more informative, increasing click and conversion likelihood
Post-exposure survey insight

Automation could reduce friction at scale
Internal testing revealed that automating the selection of current product imagery, prices, and copy fields based on inventory feeds improved user satisfaction and reduced campaign setup time

Action & Impact

We translated insights into clear product actions:

Automatic asset selection

  • Enabled ingestion of up-to-date product imagery, pricing, and descriptions from inventory feeds

  • Used templating logic to auto-generate ad layouts that fit publisher specs
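
To make this concrete, below is a minimal sketch of what feed-driven component selection and templating could look like. The feed schema, publisher spec fields, and function names are illustrative assumptions rather than the production pipeline.

# Hypothetical sketch of feed-driven asset selection for one placement.
# Field names, spec values, and the example record are assumptions.
from dataclasses import dataclass

@dataclass
class ProductRecord:
    title: str
    price: str              # already localized, e.g. "$24.99"
    description: str
    images: dict[str, str]  # aspect-ratio label -> image URL

@dataclass
class PublisherSpec:
    placement_id: str
    aspect_ratio: str       # e.g. "1:1", "16:9"
    max_title_chars: int

def select_components(product: ProductRecord, spec: PublisherSpec) -> dict:
    """Choose the image, headline, and price for one publisher placement."""
    # Prefer an image variant that matches the placement's aspect ratio to
    # avoid cropping or misalignment; fall back to any available variant.
    image = product.images.get(spec.aspect_ratio) or next(iter(product.images.values()))
    # Trim the headline to the publisher's limit rather than overflowing the layout.
    headline = product.title[: spec.max_title_chars].rstrip()
    return {
        "placement_id": spec.placement_id,
        "image_url": image,
        "headline": headline,
        "price": product.price,
    }

# Usage: build the ad payload for a square placement from one feed record.
record = ProductRecord(
    title="Stainless Steel Water Bottle, 32 oz",
    price="$24.99",
    description="Keeps drinks cold for 24 hours.",
    images={"1:1": "https://cdn.example.com/bottle_1x1.jpg",
            "16:9": "https://cdn.example.com/bottle_16x9.jpg"},
)
spec = PublisherSpec(placement_id="pub-123-square", aspect_ratio="1:1", max_title_chars=40)
print(select_components(record, spec))

The key design choice mirrors the finding above: prefer an image variant that matches the placement so the rendered unit is never cropped or misaligned.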

Publisher-environment A/B matching

  • Identified top-performing environments for different retail ad types

  • Routed ads accordingly to increase relevance
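
A minimal sketch of the routing idea, assuming per-environment performance numbers from the A/B tests; the ad types, environment names, and scores are made up for illustration.

# Hypothetical sketch: map each retail ad type to its best-observed
# publisher environment. Metric, ad types, and environments are assumptions.

# Observed performance by (ad type, publisher environment), e.g. conversion rate.
performance = {
    ("carousel", "recipe_sites"): 0.031,
    ("carousel", "news_sites"): 0.022,
    ("showcase", "recipe_sites"): 0.018,
    ("showcase", "lifestyle_sites"): 0.027,
}

def build_routing_table(perf: dict[tuple[str, str], float]) -> dict[str, str]:
    """Return a mapping from ad type to the environment where it performed best."""
    best: dict[str, tuple[str, float]] = {}
    for (ad_type, environment), score in perf.items():
        if ad_type not in best or score > best[ad_type][1]:
            best[ad_type] = (environment, score)
    return {ad_type: env for ad_type, (env, _) in best.items()}

routing = build_routing_table(performance)
print(routing)  # {'carousel': 'recipe_sites', 'showcase': 'lifestyle_sites'}

A table like this would be refreshed as new test results arrive, so routing keeps pace with observed performance.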

Retail media decision support

  • Marketing and revenue teams now use the report as a client-facing tool to explain why creative elements matter

Impact

  • Product enabled scalable automation for retail ad creation, saving time and reducing errors

  • Internal teams gained a research-backed framework to guide format, asset, and targeting choices

  • Client conversations shifted from “what do I do with this format?” to “how do I make it work even better?”

Reflection

This project taught me the value of bridging market-level insights with product-level decisions. By starting with audience performance data, we built credibility and context that made internal recommendations easier to implement. Next time, I’d bring Engineering into the research synthesis phase earlier—we had some great ideas around automation that came a bit later in the cycle.