At Target, the Only Wrong Move Is Opting Out
Most organizations right now are running some version of the same experiment: distribute access to AI tools, watch what happens, and try to figure out what that means before the next planning cycle. The results have been uneven. Some teams found immediate efficiency gains. Others circled back six months later with nothing to show for it. The difference, in most cases, had nothing to do with the tools.
Kunal Banerjee has been in e-commerce and product management for nearly two decades, with stints at Capital One, eBay, and Walmart before joining Target, where he serves as Senior Director of Product and E-commerce. His remit includes parts of Target’s e-commerce funnel, its loyalty platform, and its retail media and ad tech platform. At eTail Palm Springs 2026, he sat down with ClickZ to work through a specific question: not where AI is going, but what it actually means for the people and functions using it today.
His answer was more grounded, and more demanding, than most of what gets said on conference stages about AI.
The framing Banerjee returned to again and again was this: AI changes execution velocity, not organizational direction.
“AI is just another tool to go faster to solve that strategy,” he said. “Whether the problem is the right problem to solve still depends on the company’s planning and governance process.”
This is a meaningful distinction. A lot of the anxiety around AI adoption in large organizations centers on whether AI will disrupt existing structures, whether it will create new silos, whether teams will go off in incompatible directions. Banerjee’s view is that this concern misidentifies the source of the problem. If an organization lacks a clear planning cadence, defined KPIs, and a coherent breakdown from revenue goals to team-level objectives, AI makes the underlying chaos worse. If those structures are in place, AI slots in as an accelerant without requiring a new governance model.
“AI has not really added any new complication,” he said. “Unless you don’t have a process, in which case AI doesn’t matter. You already have a complication.”
The practical implication is that planning discipline is the prerequisite: time spent building clear goals and governance before deploying AI more broadly is not a delay but the foundation.
Not every team sees the same return. Banerjee was direct about this. For item content teams, the groups responsible for product titles, imagery, descriptions, and related content at scale, AI has already delivered substantial efficiency gains at Target. “We have multiple examples of things we’ve done which have shown amazing results, both efficiency-wise and actual dollar outcomes,” he said.
For UX and design teams, the picture is different. AI can generate initial ideas and high-level directions, but the refinement work, the step where brand principles, design systems, and audience understanding come together, still requires expert judgment. The efficiency gain is real but smaller.
The example Banerjee offered to illustrate this is instructive. As a product leader, he used to depend on design partners to produce wireframes before a project could advance. Now, he can take his brief to an LLM, point to his existing page and a few competitor references, and get conceptual directions in minutes. He then brings those to designers and asks them to apply brand principles and translate the concept into something code-ready. The expert work still happens, but it starts from a richer, faster foundation.
“Getting to an initial concept becomes faster,” he said. “Don’t try to take it all the way. You need expertise. But you can get to the initial thing much faster.”
The efficiency redistribution story, then, is not that AI replaces creative and analytical roles. It is that AI removes the upstream blocking work, compressing the time between problem identification and substantive human contribution.
One of the more honest observations in Banerjee’s conversation was about the limits of what AI can currently absorb. Organizations can encode experience: accumulated knowledge, repeatable processes, tested frameworks. This is genuinely valuable. It enables consistency across teams that previously relied on tribal knowledge concentrated in a few individuals.
But encoding judgment is different. “While I can teach a custom model to think like a really good product manager when doing certain work, there are things I don’t know how to teach,” he said. “We haven’t figured out how to train models to understand when what they’re saying or doing is wrong in the context of the discipline you’re following.”
This is the hallucination problem reframed not as a technical bug but as an organizational design challenge. AI will sometimes produce outputs that are wrong in ways that are not self-evident. Catching those outputs requires domain expertise: someone who knows that a particular result doesn’t fit the brand, the data context, or the business reality, even when the output looks plausible on the surface. That capability cannot be automated. It has to be staffed and governed.
The implication is that organizations deploying AI more broadly need to invest equally in the human layer that validates outputs, not just the systems that generate them.
“You need governance in place, and your own process, to catch that.”
On the ground, Banerjee offered a number that cuts against the prevailing sense that AI has already transformed how large organizations operate. Around 20 to 30% of people at his level of the organization are using AI on a high-frequency basis, meaning daily or near-daily across different tasks. The majority are using it for one or two specific applications, roughly once a week.
This reflects where most large enterprises actually are. The early movers tend to be the already highly adaptable, in Banerjee's observation the same people trying hardest to improve at their jobs regardless of what tools are available. The broader population is still finding its footing.
What Target has done in response is to make opting out the only unacceptable position.
“What is not allowed is to say, ‘In my role, AI doesn’t really have anything to do, so I’ll stay away from it,'” Banerjee said. “That’s the only thing that’s not allowed. Everything else: we’ll see.”
This is a posture more than a mandate. It keeps experimentation open and diffuse without requiring every team to have a defined AI strategy from day one. The learning comes from the doing.
On a few fronts, Banerjee was willing to state a clear position.
The organizations that will get the most from AI are not necessarily the ones with the most sophisticated tools. They are the ones that have done the less exciting work first: clear goals, coherent KPI frameworks, defined problem spaces, and a governance process that catches bad outputs before they drive bad decisions.
Banerjee’s career has been built on exactly that kind of structural clarity. His view of AI is that it is a powerful accelerant applied to a foundation that either supports it or doesn’t. For senior marketing leaders, the honest question is whether the underlying architecture of their planning and measurement makes AI-generated speed useful, or just faster confusion.
EVENT COVERAGE SPONSORED BY FOSPHA