Episode 74

Real Talk: AI Hype Versus Reality

Created on September 16th, 2025


Industry visionary Graham Wilkinson joins the podcast to talk about the industry’s adoption of AI, where it’s working and where it’s not. The team examines the role of AI across generative advertising, data fragmentation, breaking down silos and the genesis of creativity.


Transcript

Announcer:

Welcome to Real Talk About Marketing, an Acxiom podcast where we explore real challenges and emerging trends marketers face today. At Acxiom, we love solving problems and helping the world's leading brands realize the greatest value from data and technology.

Kyle Hollaway:

Welcome to Acxiom's Real Talk About Marketing podcast. I'm Kyle Hollaway, your host, and I'm here with my co-host Dustin Raney. As you all may know, Real Talk is focused on the convergence of AdTech and MarTech and how strategies and technologies continue to disrupt the status quo across the industry. And today we get to dive into the mega-relevant and ever-hyped topic of artificial intelligence. Dustin, just yesterday we were joking about replacing you with a trusty AI chatbot named Dusty that could answer all the questions about Acxiom products and services. But the truth is we aren't that far from that reality. We have lots to discuss, so I'm excited that we get to jump in with our esteemed guest, Graham Wilkinson, EVP, Global Head of AI and Chief Innovation Officer for Acxiom. Graham, welcome to the podcast.

Graham Wilkinson:

Thanks for having me, guys.

Kyle Hollaway:

Yeah. Now Graham, that is a heady job title. So why don't you start by giving us a snapshot of your background and how you came to your current role?

Graham Wilkinson:

And it's a heavy story. Well, heavy is probably not the right word. It's a long story. I'll try and keep it short. So originally I'm from Manchester in the north of England, and I started my career in advertising. I was in-house at a finance business, and I was actually the head of creative design there. At a young age, I'd taught myself to build websites, but also to do graphic design as well. And so I made my way through the ranks there, and then I was actually laid off during the GFC, and I went to work in an agency after that, and from there learned the ins and outs of media.

I was at a lucky point, I think, in the evolution of advertising, the emergence of digital. It was just a kind of sweet spot where I was allowed to go and play around with all these things like search and social and video ads, but then also do more traditional things. Like, I used to do a lot of direct mail, whether it was from a graphic perspective or also dealing with partners like Experian and buying lists from them. And then a really good friend of mine moved to Australia and he was like, yeah, you would love it over here. They need people like you and you should see what it's like. And I'd never even been there on vacation. My wife had been once on vacation as a child, and we're pretty pragmatic, so we were kind of, okay, let's do it. And did some job interviews.

Ended up emigrating to Australia, was there for seven years and worked for Dentsu there. I kind of led various agency functions, but then started to get more formally into the technology side of things and was leading product for the agency group out there. And then I was headhunted to come to New York to go and be the global head of product for Reprise, which is now part of Kinesso. And they kind of hadn't really had a formal global head of product before. I was tasked with, obviously, productizing services, but also bringing new technology out to the business. And that was eight years ago now. I mean, it's almost eight years to the day, funnily enough, since I moved to the US and took on that role. And two years into that, I was tapped on the shoulder and asked to join Kinesso as the lead for product innovation. Since then, I've kind of formally been leading R&D teams, innovation teams, and luckily I know enough math, enough development and enough about a lot of things to be dangerous, but equally smart enough to know to employ very smart people and work with smart people like yourselves. So funnily enough, I've worked with you two guys probably longer than most people at Acxiom, and my R&D team quite intimately.

Dustin Raney:

Love your R&D team, Graham, super bright folks for sure, just like you. And by the way, for our listeners around the Acxiom parts, and honestly around IPG, Graham is kind of becoming synonymous with AI. The R&D space is really inundated right now, Graham, and the AI space is where everyone's looking. So let's kind of start the conversation around AI at a high level. Gartner says one in five US adults now go straight to AI tools like ChatGPT to ask even the simplest questions every day, right? It's not just complex questions. How should I send this text to my wife in a way that I'm not going to get in trouble? Literally, people ask everything, and all this stuff's getting fed inside of an engine, a backend database, where natural language is kind of making use of all that. What are some other ways that you're seeing consumer behaviors change and shift, and how do you see that really impacting the work that we do on a day-to-day basis in data-driven marketing?

Graham Wilkinson:

Yeah, I mean, it's a really, really interesting question, and it's probably one of the most common questions I get from brands that we work with, but also a really hard question because it's such emergent behavior at the moment. We can in many ways go off the things that we experience and that we are doing personally, which is the easiest thing to attach to. But I still feel like we've barely scratched the surface of how this is going to change behavior for consumers. And from that little bit of scratching at the moment, you can predict... I don't know if we're really seeing it yet, but you can definitely predict shorter consumer journeys. This has been talked about really since generative AI became a thing: this idea that you no longer have to trawl through websites and search results, and you're just going to say, hey, I want to go and plan a trip, and it's going to recommend hotels and restaurants and all these different experiences.

And so when you analyze that experience, you say, well, that's just shortened something that was already relatively short compared to the pre-digital era, and now it's shortened even further. Now again, it's very much a prediction, because you could also see behavior push back against that. You could quite easily predict that too and go, well, maybe people like that experience of looking for things themselves, as much as it makes total sense for journeys to be shortened. I don't think it's done and dusted that we can go, yes, they'll definitely always be shorter. I think that we will probably see categories and certain tasks that might stay the way they are or maybe even elongate. I think this idea of discovery using AI is a really interesting one too. And that's why search is this battleground that everybody is talking about in the advertising industry: because AI looks and feels like search, but it's inherently different.

And I almost think about it as two ends of a spectrum. You've got Google, which is, let's be honest, this Rolls-Royce of a search engine that has been the top dog for a long, long time. And then you've got these platforms like ChatGPT and Perplexity that come out and think about maybe organizing data in a different way, because that's really what this is about. There's a really strong argument that the way you index information for search in Google, maybe there's not a better way of doing it than that, but people expect augmented results that are chat, because now that's the behavior that they've got. But then you've got ChatGPT and Perplexity and other platforms that say, well, maybe organizing data in things like vector stores and embedding it, maybe there's a different way of approaching it that can get to that level of accuracy, and maybe then go even further beyond that. And I think what we're seeing is these solutions appear somewhere along that spectrum, where they use maybe a traditional search technique and augment with generative AI, or they use more modern, generative, vector-embedding-based kind of search techniques and then still augment with generative AI.
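The spectrum described here, classic keyword indexing on one end and vector-store retrieval on the other, can be sketched in a few lines. This is a toy illustration only: the documents and queries are made up, and a simple bag-of-words counter stands in for a real learned embedding.

```python
from collections import Counter
from math import sqrt

# Toy corpus standing in for indexed web pages (hypothetical data).
DOCS = {
    "hotels": "boutique hotel rooms and suites in lisbon near the waterfront",
    "flights": "cheap flights and airfare deals to lisbon and porto",
    "recipes": "traditional portuguese recipes including pasteis de nata",
}

def keyword_score(query: str, doc: str) -> int:
    # Classic search: count exact keyword overlaps.
    return len(set(query.split()) & set(doc.split()))

def embed(text: str) -> Counter:
    # Stand-in "embedding": bag-of-words term frequencies. A real system
    # would use a learned dense vector (e.g. a sentence-embedding model).
    return Counter(text.split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str) -> str:
    # "Vector store" lookup: rank docs by similarity to the query embedding.
    q = embed(query)
    return max(DOCS, key=lambda k: cosine(q, embed(DOCS[k])))

print(search("plan a trip with a hotel in lisbon"))  # hotels
```

A production system would swap `embed` for a dense sentence-embedding model and keep the vectors in a vector database; the shape of the ranking logic stays the same.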

Dustin Raney:

Man, it is fascinating. I kind of want to double-click on Google for a second, because just yesterday, and this was kind of enlightening to me, because I guess I've been using ChatGPT for so long now to do what search used to do for me, I kind of was like, well, what if I go search the same thing? I went out to Google, and they gave me the option for the first time (it's probably been out there for a little bit longer than I realized) where they literally changed my entire user experience, where now the chatbot is the search engine and the entire layout of my page, my screen, looks different. So now that prompt is taking up the entire screen and giving me an in-depth response, similar to... I mean, it's Gemini, right? But are they losing out? That's kind of a risk if you ask me, because if you think about it, how they monetized search was through all of the sponsored ads and stuff like that. So the space, the real estate on the screen, is shockingly different. So what are your thoughts there?

Graham Wilkinson:

Yeah, so I think about two things as you're saying that. One, I think about how Google has this advantage because it has an ecosystem, an operating system, just like Apple has. And the reason that OpenAI have gone out and bought Jony Ive's business is that they can start to build an ecosystem, because these days it's not enough to have a platform. Users want convenience. And as much as the notion of behavior being monitored and tracked is not palatable to most people, they like the benefits of ecosystems, where when you move from one part of the Google ecosystem to another, it knows who you are, and they no longer have to keep saying, hey, this is who I am, can you please tailor the experience to me? So I think you're absolutely right: commercially, it's a really tough one for Google, and I still don't think we've seen what the new generative ad experience is.

What we're doing is we're comparing legacy digital-era ad experiences, and we're trying to squish them in with this new technology and this emergent behavior and going, oh, well, it kind of doesn't work out. I'm losing real estate on a page. How will Google make money? But I think the bit that's missing right now is: what does generative advertising look like? I start to think about all these crazy things, like why wouldn't an ad be served with some sort of a small model embedded in it in the future? And in that ad, the notion of anything being static and having to be approved as a version becomes obsolete. And actually these ads are just organic, and they morph because they're actually powered by individual models that are trafficked with the ads themselves. And then I think about things like how will the success or the engagement of an ad be judged?

Will it be judged on the depth of the conversation you get into? So could you get into a cost-per-token commercialization model, where the longer you have a conversation with a model or an agent or a series of agents, the more the brand has to pay for that? But then you could go even further and say, well, thinking about the keyword auction that Google created, maybe there's a version of that that starts to think about, I suppose, the value of words that are part of a conversation. So does salience become a really key component of the value of the conversation? If you're having lots of conversation, but there's very little salience in what is actually happening in that conversation, what value is it to you as a brand? Whereas if you've got lots of really salient words that are relevant to you, your products, your brand, maybe you should have to pay more for them. So I know I kind of digressed there, but I think the generative ad experience is the bit that's missing. That is the bit that is making this whole thing jar to us in the advertising industry. We are really looking at legacy ad formats with a new technology, and the formats haven't moved forward yet.
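As a toy illustration of the cost-per-token-with-salience idea floated above, here is one way the arithmetic could work. Every number, the crude tokenizer, and the salient-term list are hypothetical; no such pricing model is confirmed to exist anywhere.

```python
# Toy cost-per-token model with a salience multiplier (all numbers and
# the salient-term list are hypothetical, illustrating the idea above).
BASE_RATE = 0.002  # dollars per token exchanged with the ad's model
SALIENT_TERMS = {"suv", "hybrid", "financing", "test", "drive"}

def conversation_cost(transcript: str) -> float:
    tokens = transcript.split()  # crude whitespace tokenization for the sketch
    salient = sum(t.lower().strip(".,?") in SALIENT_TERMS for t in tokens)
    salience = salient / len(tokens) if tokens else 0.0
    # Brands pay more when the conversation is dense with relevant words.
    return len(tokens) * BASE_RATE * (1 + 10 * salience)

chat = "I want a hybrid SUV with good financing and maybe a test drive"
print(round(conversation_cost(chat), 4))  # 0.126
```

The `1 + 10 * salience` weighting is arbitrary; the interesting design question is exactly the one raised in the conversation, namely how much a salient token should be worth relative to small talk.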

Kyle Hollaway:

Yeah, that's really thought-provoking there, at least for me. And that aspect of, historically, honestly, the system's kind of been gamed, right? SEO, it's really just saying, oh, how do I take my answer and get it up to the top of the page? And then certainly Google and others have monetized that game, where I can pay to play and get up at the top and such. Yet now we have this technology where, honestly, I think the general consumer has, right or wrong, kind of put trust in it, like, oh, this is just a generalized answer. It's so conversational and it's aggregated and such, where you just feel like, oh, it's a cumulative answer of knowledge, not so much a curated and monetized answer. And so how those two are going to come together, like you're saying, is really an interesting question. Because then suddenly it's like, could you ask the same question and get different answers? You get different answers today anyway, but you go to Perplexity or you go to ChatGPT, and are they going to give you different answers? Are they prioritizing certain content or words, and, maybe bias is the right word, are they going to start to allow bias in based off of who's kind of sponsoring the answer? So I think that's a really challenging question, both ethical and technological.

Graham Wilkinson:

And I think if you kind of play that out, and you say, well, these models are massive generalizers, and it's widely thought that large language models are really not going to get us to artificial general intelligence, because they're just word-relationship models and intelligence is more complex than that. But if you want to deliver the equivalent of what Google has delivered for the last couple of decades, that allows you to connect into brands, into brand properties, then I think right now the thinking, which is obviously very early stage, is, oh, how do we get our content into the training data of models? But I don't think that's the long-term answer. That content is helping to make these models better generalizers, but it will never be the thing that makes them able to dig into the specificity of your brand and understand the nuances of it.

I could totally see a world, though, where the ChatGPTs and Perplexitys of the world, instead of offering to index and train on your data, maybe what they do is they route requests to your model, your brand model, and that's the equivalent of routing a request to a website. You curate your website, you put your data on it, and Google indexes it and then routes people there. And then you go, well, not every business is going to be able to create their own model. You start to go down in scale and things like that, and it gets more challenging. But Google solved for that. If you, again, go back to what happened with search: not everybody can run a massive website, not everybody can run a huge, nuanced AdWords campaign, so we'll build tools to make it easy for you to do that. So actually, we'll help you build your own little brand model: you upload your content, you let us connect to your inventory, all that sort of stuff. We'll build you your model that then connects into our system, and when a request comes through that's relevant for you, we'll route to that. And the consumer, then, I believe, should always be having a conversation with a model that comes from the brand, not the massive generalizing model, because I think that is a bleak future. I don't really want to live and be a consumer in that future, because I just think we know the risk of that is just massive homogenization.

Dustin Raney:

Graham, I think this is a good dovetail, right, this conversation here, into hype versus reality, because I think a lot of our audience, they're in the marketing field, there might be some outliers, and they're listening to this on a regular basis. You just got back from Cannes, which is where a lot of the edge stuff is getting released or demoed. So you mentioned, I think it might've been in The Current, I want to quote you, so get ready: you mentioned how many AI demos at Cannes felt more like click-through illusions than truly operational tools. What specific criteria do you use to assess whether an AI demo is genuine versus merely performative or vaporware? What are the things that you're looking at that make you say, man, this is ready to go? Where are we really?

Graham Wilkinson:

Yeah, I mean, I think the first thing I always want to establish is: is AI the thing? Is it doing the thing that you're trying to sell me, or is it just some augmentation layer on top that makes the thing conversational? It's like, I dunno, it's like using AI to make an engine sound on a Tesla because you like the sound of an engine, but actually it's something else that's powering the car. And there's nothing wrong with that. Maybe you'll get an awesome engine sound from AI, but it doesn't necessarily make the car any better at what it does. And I think there are many, many businesses that have gone down that route that say, we are an AI business, but actually all they've done, and again, just to reiterate, nothing wrong with this, is they've added this new generative AI interactive layer on top of what they offer. So I think that's the first thing: establish what it is that's doing this thing. And in some instances, there's nothing wrong with AI not doing the task that they're trying to sell you, because, as you guys know as very competent technologists and engineers, there are many things that work without it.

There are solutions out there that already provide very, very high-quality outputs without using AI. There are just some things that work, and they should absolutely stay in place. And then I think the next thing is: how connected is your system? So if I really want to, I suppose, realize the benefits of AI... and this is one of the biggest challenges with the technology right now. It's moved so rapidly, and so many startups have emerged in this space, that it's created even more fragmentation than existed in an already incredibly fragmented industry like advertising. And so many of these advantages and efficiencies that are supposed to be gained are not realized, not even coming anywhere close to being realized, because the fragmentation has just increased a bit further, and your system doesn't connect into the next system, the next handoff. I think anybody who has played around with agents, or just generally in ChatGPT (you don't even need to build agents), and just tried to deliver a task, what you very rapidly realize is that tasks are way more complex than we give them credit for.

Writing a piece of content is not just one task. We take it for granted, but we think about what we want to write about, we then maybe plan out what that might look like as a skeleton, we then start to write it, we get feedback, we iterate on it. In an agentic sense, that's probably half a dozen agents, at the very least, that are doing things. And for something to be, in my opinion, truly AI-driven, I don't want to have to take the output from my idea agent and physically copy and paste it into the input for the next agent that takes those ideas and starts to write a skeleton. And so I think it's entirely reasonable, if someone is claiming to sell you an AI solution, that that clunkiness has to go away. It has to be frictionless, and you can't claim for it to be a workflow if every step in that workflow requires you to put the thing down that you're holding, catch your breath, and then lift it back up and pass it to the next person in the workflow.
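The hand-off problem described above can be sketched as a simple pipeline in which each agent's output feeds straight into the next agent's input. The agents here are stand-in functions (a real system would wrap LLM calls); the point is that no step requires a human to copy and paste.

```python
# Minimal sketch of a frictionless agent pipeline. Each "agent" is a
# plain function; the pipeline performs the hand-offs automatically.
def ideas_agent(brief: str) -> str:
    return f"ideas for: {brief}"

def skeleton_agent(ideas: str) -> str:
    return f"outline based on ({ideas})"

def draft_agent(outline: str) -> str:
    return f"draft following {outline}"

def run_pipeline(brief: str, agents) -> str:
    result = brief
    for agent in agents:
        result = agent(result)  # automatic hand-off: no manual copy-paste
    return result

print(run_pipeline("spring campaign", [ideas_agent, skeleton_agent, draft_agent]))
```

The half-dozen agents Graham mentions for a real content task would slot into the same list; the workflow only counts as frictionless when every hand-off is this mechanical.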

So those are some of the things that I think about. I also like to ask: what is powering it? What AI are they using? And the responses are very interesting. You get some businesses that say, oh, we proudly use OpenAI, we only use OpenAI, we've got this partnership with them. Whatever it is, that's just an example, but I don't think that's a plus. I think if you view it as: the majority of these models' training data is the corpus of the internet, and the only thing that makes one unique is the data, their data, that they've either acquired or that they generate as a business. So by that, you would assume Llama models are better at creating Meta ads and Google models are better at creating Google ads, and therefore, if you advertise on those two platforms, you should at the very least have two families of models.

But we know that the reality is, for any given task situation, any one of those models could be the right model for that task. I don't think there will ever be one model to rule them all. And again, coming back to this idea of them being massive generalizers, you just don't want that in your advertising campaigns as a brand. So to me, a solid piece of AI tech has to have a curated ecosystem of models that are diverse, and it has to have the ability to select models that are appropriate for any given task that it delivers. And if it can't do that, I think it's too rigid. I think that you're just pushing everybody down this one route, and in advertising, that's definitely not where we want to go. We don't want to say to two brands that operate in the same category: yes, I understand that your advertising now looks vaguely similar, but there's nothing we can do about that because the technology uses the same model. That answer's not going to wash with anybody.
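A minimal sketch of the "curated ecosystem of models" idea: a registry of diverse models plus a selector that routes each task to an appropriate one, falling back to a generalist. The model names and task labels are entirely hypothetical.

```python
# Hypothetical registry: a curated, diverse set of models and the tasks
# each is assumed to suit (names and mappings are illustrative only).
REGISTRY = {
    "meta-ads-model":   {"tasks": {"meta_ad_copy"}},
    "google-ads-model": {"tasks": {"search_ad_copy"}},
    "general-model":    {"tasks": set()},  # fallback generalist
}

def select_model(task: str) -> str:
    # Pick the first specialist that lists the task; otherwise fall back
    # to the generalist rather than forcing one model to rule them all.
    for name, meta in REGISTRY.items():
        if task in meta["tasks"]:
            return name
    return "general-model"

print(select_model("meta_ad_copy"))   # meta-ads-model
print(select_model("blog_outline"))   # general-model
```

In practice the selection would be richer (cost, latency, eval scores per task), but the rigidity test Graham proposes is exactly whether a system has a routing layer like this at all.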

Kyle Hollaway:

As you mentioned, massive fragmentation as well as rapid innovation, right? I mean, every day there seems to be new features or aspects to how AI can be delivered. We've only been in this conversation for 18 months, and we've gone from the release of the first LLM to now agentic platforms that are able to actually execute complex workflows on behalf of a person, and in some cases without even a human in the middle anywhere, which has its own challenges. For a listener that's at an agency or in the CMO organization at a large brand, how do they get started? How do they manage this rapid increase in innovation? What would you recommend? How would you start moving in this direction?

Graham Wilkinson:

Really, really big question, and I mean a very important question, but big. And again, I don't know if there's a one-size-fits-all. I think the first thing is: how open is your organization to using AI? Because your first step might just be convincing people that

you have to get on this train, otherwise it's going to have left the station and you are just not going to be able to catch up with it. And I do use the analogy of a race a lot when I'm talking about AI, because I understand that the most common question is: what efficiency is AI driving? But ultimately we are in a race to win customers for brands. And if your focus is on, well, how do I maintain the same speed I'm at but be more efficient, then somebody else who's hungry and realizes the potential of this technology will go, well, I'm just going to go faster, and I'm going to go faster with the same amount of resources, whether they're people or not, and do more with it. So I think, first of all, there's got to be: what is the stance in the business around using AI?

And there has to be a realization that, even for brands... I think regulated industries are one thing, but I honestly believe that most of the use of AI in regulated industries is a well-trodden path, because it's not that different from ensuring that you're adhering to data privacy and security and all those things that we've been talking about for many, many, many years. I don't believe it's that different. I think where it gets kind of interesting is when you get brands that are, let's say, iconic, and their brand, the characters in their brand, and the products have been created by human creativity. And then there starts to be a little bit of preciousness about, well, we will always be a human-creative brand. And that's okay, but you can't hamstring yourself by saying, well, because we're always going to be a human-creative brand, we're not going to let AI run some of the operational aspects of our business that would just make us move faster and remove some of the barriers in place.

So that's the first part of it. I think the second is: we're an industry of silos. We love to create silos. And maybe it's unfair to say we're an industry of silos; we are a species of silos. We live in different places, we speak different languages, we have all these different preferences, and these things all equate to barriers that come up at some point. And so you then apply that to work, and you say, well, I have, and I'm going to massively simplify here, a department that analyzes audiences, a department that plans media, a department that executes media, and a department that analyzes the outcomes of it. So straight away I've got four departments. They're four skill sets, but actually they're four silos, because the people that build audiences might not, and possibly cannot, see everything that goes on in media planning. There's no option for me, Kyle, and Dustin to just plug ourselves into some system and then be aware of everything that we're doing and thinking and everything we've ever experienced in our lives. But agents can do that.

So to me, the first place we should be thinking about applying AI in our industry is breaking down silos, where we can start to reveal information that was not traditionally exposed across departments, functions, things like that within a brand, and in a privacy-compliant sense. I'm not talking about sharing information across brands; I'm talking about sharing within a company. Because it's one thing replacing human workflows with AI, but again, all you're going to get to is the status quo, just the status quo cheaper. And if we're not in this to do things better, what's the point? In my opinion, and that's just my opinion, I want to do things better and differently. So why are we not questioning the silos that exist and focusing on breaking those down? I do believe that you should be able to get better outcomes, because the knowledge can be really easily shared across agents.
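One way to picture the silo-breaking idea is a shared context store that every department agent writes to and reads from, so downstream agents see all upstream decisions. The departments and insights below are hypothetical stand-ins.

```python
# Sketch of silo-breaking via a shared context store: each department
# "agent" logs what it knows, and downstream agents read the full
# history before acting. All names and data are illustrative.
from typing import Dict, List

class SharedContext:
    def __init__(self) -> None:
        self.log: List[Dict[str, str]] = []

    def publish(self, department: str, insight: str) -> None:
        self.log.append({"from": department, "insight": insight})

    def everything_upstream(self) -> List[str]:
        # Unlike human silos, every agent can see every prior decision.
        return [f"{e['from']}: {e['insight']}" for e in self.log]

ctx = SharedContext()
ctx.publish("audience", "segment skews 25-34, mobile-first")
ctx.publish("planning", "weight budget toward short-form video")

# The media-execution agent acts with full upstream knowledge:
for line in ctx.everything_upstream():
    print(line)
```

The privacy constraint Graham notes maps onto scoping this store to a single company; nothing here is shared across brands.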

Now, I'm not saying that to negate human interaction. I think it's a different conversation where humans come into play with this, and I think we've got a whole learning journey as human beings to maybe rediscover what it means to be a human being. I think we forgot a long time ago. But that doesn't and shouldn't stop us building agentic flows that break silos down, because those are the things that mean you're going to do better advertising as a consequence of those silos coming down. It just has to be better. If somebody doing something down the chain knew everything that went into the decision-making process that came before them, why wouldn't that make a better outcome?

Dustin Raney:

Love that. Graham, with that in mind, so you talked about breaking down the business silos and some of the organizational back-office stuff. Let's talk for a second about the creative side. You're talking about what it is that makes us human, the nuance and the creativity, and in the context of an ad agency, that's one of the big services that we offer. And I think one of the big questions is how far AI is going to take over or consume that creative mindset, or literally the creative, what you visually see coming in on an ad. There's always been this promise of one-to-one that we haven't really quite lived up to, but AI actually gets us to a point where there is an actual possibility of that. But it comes with its own risk, ethical risk and bias and things like that. What's your take on how far we allow AI to get into that creative process?

Graham Wilkinson:

Yeah, again, really, really interesting territory, and one that I think has got a long way to go before it's played out. So I think, first and foremost, AI absolutely has the ability, as we know, to create beautiful imagery. But ultimately these models are, again, coming back to this idea of a large language model, built on semantic relationships between words, which is not intelligence and equally is not creativity. There is a multidimensionality to intelligence and creativity. Somebody talked to me about the example of, if I say the word apple, as a human being you might think about the food, the brand, you might think of a color, a red color, a green color. But you might have also choked on an apple as a child. So your thoughts and feelings and memories, all these dimensions, are triggered by this one word.

Large language models don't behave like that. They're looking at how often in their training data this word has been related to other words and how it's related. So just as an aside, it's interesting that with Llama and Meta, the latest Llama model was perceived as a bit of a failure. It didn't really stack up from a performance perspective against other large language models, but they are taking a different approach now, where they are trying to address this multidimensionality of intelligence so that they can get towards artificial general intelligence rather than just keep pushing tiny increments of large language model performance. The reason that's important is, I suppose, the point I'm making: right now, I think creativity in its purest sense is still entirely in our domain. There's a difference between production, which is iterating, creating versions of things.

But I heard one of the CEOs in one of our large businesses just yesterday talk about the idea that AI should absolutely be used for iteration on a great idea, but the great idea right now still kind of has to come from a human being, because there's just so much to those ideas that cannot really be recreated with any kind of generative model as it exists right now. What I do think is really interesting is we've done research into how you start to scratch the surface of teasing out creativity in something like a large language model. So we've been experimenting with adding these dynamics into large language models. We might have multiple agents interacting with each other, and inside the system itself we will essentially give them these dynamics. We have personal dynamics, interpersonal dynamics, and contextual dynamics. So a good example is, let's say we have a strategy agent and a creative agent; we would define that they have a healthy rivalry.

If you have a multi-agent system, we might say that it's a Monday morning and nobody wants to be in work, or it's raining outside. That's a contextual dynamic. And then we might just give them a personal dynamic: you didn't sleep very well last night and you're tired, or, I don't know, you had a great weekend and you're in a good mood. And honestly, when we add those types of dynamics in, the outputs fundamentally change. They become what we would all call creative, and it makes total sense, because we're asking the models to consider something that has no bearing on the task that they've been given. All they want to do is generalize to the thing that you've asked them to do, but you're forcing them to go, well, why do I feel tired, and what does tired even make me feel like? And so you might have heard me say this before, but I call it sprinkling a bit of chaos into AI. I think we live in a dynamic, chaotic world as human beings, and we take it for granted that we just deal with that, but these systems want order, and they impose order as well. So if you want to get creativity out of them, you have to impose chaos on the system and actually reduce the level of organization in there, because otherwise it will always just come to that most general, average answer.
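The "sprinkling chaos" technique described here can be sketched as prompt assembly: personal, interpersonal, and contextual dynamics, none of which bear on the task, get injected into each agent's system prompt. The dynamics and agent names below are illustrative stand-ins, not the team's actual implementation.

```python
import random

# Pools of task-irrelevant "dynamics" to inject (all hypothetical).
PERSONAL = [
    "You didn't sleep well last night and you're tired.",
    "You had a great weekend and you're in a good mood.",
]
INTERPERSONAL = {("strategy", "creative"): "You two have a healthy rivalry."}
CONTEXTUAL = ["It's a rainy Monday morning and nobody wants to be at work."]

def build_system_prompt(agent: str, counterpart: str, rng: random.Random) -> str:
    # Compose the agent's system prompt from its role plus randomly
    # sampled dynamics that have no bearing on the task itself.
    parts = [
        f"You are the {agent} agent.",
        rng.choice(PERSONAL),
        INTERPERSONAL.get((agent, counterpart), ""),
        rng.choice(CONTEXTUAL),
        "Now respond to the task below.",
    ]
    return " ".join(p for p in parts if p)

rng = random.Random(7)  # seeded so runs are reproducible
print(build_system_prompt("strategy", "creative", rng))
```

Randomly sampling the dynamics per run is what reduces the system's self-imposed order; with a fixed seed you can still reproduce any particularly creative configuration.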

Dustin Raney:

I think some people might be having an existential moment right now. That's a very fascinating response, Graham, and that addition of chaos to creativity, I think, deserves an entire podcast of its own. And yeah, that friction, I think that's dead on. We have to think about how our lives just naturally flow. A lot of people call them biorhythms or whatever. If you can model that and kind of mimic it, because some days you just don't know why, but you wake up in a bad mood.

Graham Wilkinson:

And some of this is hardcoded into us at this point in our lives.

When I heard about the Llama approach, my first thought was, well, actually, once you can recreate the multidimensionality of intelligence, then what sets one model apart from another could be something as silly as its temperament. Do I want a model that has a bad temperament, or a model that has a good temperament and sees the sunny side of life? You might want that mixture of models working on a task to get something out of it that's different from what somebody else would get. And it's really, really interesting when you go down that rabbit hole and start to think about these things that we find fairly trivial, just part of our lives; they actually are really important to how these models may well become even more powerful and useful in the future.

Kyle Hollaway:

Yeah, I love that thought. Or maybe it even scares me some. But that thought you started with, that pure iteration kind of leads to homogeneity, applies whether it's in the model or when we start thinking about personal agents now doing things on our behalf. Is there going to be an advertising agent that is advertising to personal agents? Now you've got this dual system advertising to itself. And because they're systems without that chaos you're talking about, it's likely they're going to slowly devolve into something very homogenous: what is the fastest, quickest, shortest route to an answer? Suddenly my personal agent may just be buying things because it found some lowest common denominator, which was price or whatever, and it's acting along the parameters it's been given, but it's ultimately going to lead to a place that's not effective. So how do you continue to manage that as these agents and models start to play on both sides of the equation? Right now it's really just consumers interacting with one, but when they start to interact with each other, how do you bring in that chaos or that creativity so they don't just devolve down to the lowest common denominator, the most efficient thing the two systems can do between each other? I think that's a really mind-boggling thought.

Dustin Raney:

On top of that, it makes me think about legal agency for agents, or maybe the requirement of legal entities for agents because of the legal ramifications. What if an agent steals from my agents? How do you understand the source of the action? How can you track it from a ledger and have immutable evidence when you weren't humanly involved in the transaction? Who's liable in that instance? I mean, these are all huge ethical and legal questions that I don't think are completely answered yet. Right?

Graham Wilkinson:

And I think, if anything, it exposes us as human beings to maybe even more legal risk. I think there's been less of a focus on that. Let's say a content owner is suing an AI model provider for using their content in its training materials. How is that, in many ways, any different from you as a child reading a book and then 30 or 40 years later using a concept out of that book? You're not consciously using that concept; you're just using it because it was your training data. At what point do you start to distinguish, because the process is the same? So why should an AI model company be at any more risk than a human being that went through an identical process? And therefore, by pursuing these legal cases, are we actually exposing ourselves to greater risk in the future, because these systems are built to mimic the way that we learn?

Dustin Raney:

And a lot of it, too, you have to go back. We're talking about LLMs scraping the open internet. ESPN pays a publisher; it costs them something, and that article is relevant for a certain amount of time. So it might be time-bound contracts, things like that, that have to exist. Because with your analogy, I mean, I'd not thought about that. I've read a lot of material from people that I didn't necessarily pay for, books I got from the library, and it's part of my model now, but that was a long time ago. Is it relevance? What are the metrics that the legal system has to take into consideration now in the agentic world? I think those are some fascinating things to think about.

Graham Wilkinson:

Right. And maybe it's the reason why lawyers might not lose their jobs so quickly, even though I think the legal industry has been one of those touted as being heavily disrupted by AI. Maybe the reason we need human lawyers is for them to start to think about some of these more complex AI and legal problems.

Kyle Hollaway:

Yeah, absolutely. Well, man, this conversation could probably go on forever because it's such a fascinating topic. We are running out of time, though. So Graham, since we're on AI, we've kind of gone to a standard wrap-up question that is actually AI-based, and it is: if you fed all the data about Graham into AI, what are the three words it would generalize to describe you?

Graham Wilkinson:

Yeah, I suppose so, man. Let me talk you through my thought process for this. It very much comes from how we think about advertising and the data that would actually be available, as opposed to this idea, which isn't real, that all my data could be fed in somewhere. So I was thinking, well, what data has existed on me since I was born? Obviously, one thing that I alluded to in the intro at the start was the international aspect. I think if something was analyzing my data, it would go, well, he's been in Manchester, he's been in Melbourne, Australia, he's been in New York, but even within those countries and continents he's lived in different places and traveled. So international and travel would definitely be one of those words. Another one I generalized as hobbies, the reason being I'm the sort of person who figures out the next thing I want to do, and I just go and do it.

I recently got my motorcycle license; I'd never ridden one before. I was like, right, that's what I'm going to do. But I've always been into cycling. The moment you could record and upload data with cycling, I was always doing that with any app, and then even food consumption, calories, all that sort of stuff. So I generalized it as hobbies, but all my hobbies generally involve some sort of app that I'm uploading data to, so I assume there's a tremendous amount of data on my cycling and eating habits. And then finally, just because of the job that I do and doing things like this, if you search for me on the internet, there's obviously a lot of reference and relevance to advertising, because it's been a big part of my life for half of my life. So yeah, my three words were international, hobbies, and advertising.

Dustin Raney:

Love it. Love it. That's very well thought out, and man, it's a cool question to think about. I kind of want to go back; I don't think you and I have done this for ourselves. We probably need to do that at some point. But Graham, thank you so much for being a guest on today's episode. We certainly want to have you back sometime soon. And for our listeners, we hope today's episode was informative and thought-provoking; it certainly was for Kyle and me. We would so appreciate it if you would take a few minutes and leave us a review on iTunes or Apple Podcasts, and we look forward to seeing you next time.

Announcer:

Thanks for listening to Real Talk about Marketing an Acxiom podcast. You can find all our podcasts at Acxiom.com/real talk or on your favorite podcast platform.

 

Graham Wilkinson

Global Head of AI, EVP Chief Innovation Officer

More from Graham Wilkinson Connect on LinkedIn