Episode 48

Real Talk: The Actual Impact of AI

Created at February 5th, 2024


Amazing accelerator or threat to humanity? Discussions around artificial intelligence (AI) span the full gamut. Ron Shevlin of Cornerstone Advisors joins the podcast to break down what brands can really expect as AI becomes part of the daily lives of knowledge workers.

Transcript

Kyle Hollaway:

Hello and welcome to Real Talk about Real Identity from Axiom. This podcast is devoted to important identity trends in the convergence of AdTech and MarTech. I’m Kyle Hollaway, your podcast host and I’m joined by our co-host, Dustin Raney.

Hello everyone and welcome back to Real Talk about Identity. I’m Kyle Hollaway, co-host of Real Talk here with my colleague Dustin Raney. Real Talk about Real Identity is focused on exploring the convergence and related disruption of MarTech and AdTech seen from an identity practitioner’s lens. Dustin, let’s talk about AI. After all, it’s in the news daily and is a growing area of discussion across the business world. So in anticipation of this conversation, I did a little experiment. I asked ChatGPT to write our introduction. Here’s the response, “As we navigate this digital era, a key player has emerged as a cornerstone in the advertising world, Artificial intelligence or AI. AI’s role in advertising is transformative. It’s not just about targeting ads more effectively, but about redefining how we understand and engage with consumers. AI algorithms analyze vast amounts of data to identify patterns and insights, allowing advertisers to create more personalized, relevant advertising experiences.

This is where the magic of AI meets the precision of identity resolution. So what does the future customer journey look like with AI at the helm? Imagine a world where advertising is not just a one-way communication, but a dynamic interactive experience. AI enables a better understanding of consumer behaviors, preferences, and needs leading to a highly tailored customer journey. From the moment a consumer expresses interest to the post-purchase engagement, AI ensures that every touch point is an opportunity for a meaningful connection. The future of advertising with AI promises not only greater efficiency and effectiveness, but also a shift towards a more consumer-centric approach. We’re looking at a landscape where advertising is not just about selling a product, but about creating value and relevance for the consumer. AI empowers advertisers to be more empathetic, responsive, and innovative. In this era of digital transformation, the convergence of AdTech and MarTech powered by AI is not just an opportunity, it’s a journey towards a more connected, insightful, and engaging advertising world. Let’s explore how this journey is unfolding and what it means for advertisers, marketers, and consumers alike.” Wow, how was that?

Dustin Raney:

Very eloquent, Kyle, or should I say ChatGPT. Exciting times for sure, and I’m actually still caught waiting for my first kind of Minority Report moment where an ad says my name as I’m walking by a digital display. With that said, I have to bring us back to center a little bit in that this AI-powered revolution is not without its complexities. So navigating the intricate landscape of data privacy and ethical considerations is going to be paramount. [inaudible 00:03:24] have to ensure responsible AI implementation that respects customer or consumer autonomy, that builds trust and transparency as the cornerstone of sustainable AI-driven advertising. So to help us dive further into this conversation, today we have with us Ron Shevlin, chief research officer for Cornerstone Advisors. Ron has been a consistent participant at Axiom’s annual Financial Services Summit and always brings tremendous insights with him. Ron, welcome to the show. Can you start by giving our listeners a brief snapshot of your background and what got you interested in AI?

Ron Shevlin:

Yeah, absolutely guys. Thanks a lot for having me on the podcast today. So quickly, because I’m sure people would rather hear some of the good stuff than what my background is. But I have basically been in the technology analyst world now for 26 years, was with Forrester Research for a long time. Then was with another analyst firm here in the Boston area where I lived for seven years, and then eight years ago joined Cornerstone Advisors, which is a consulting firm for the midsize bank and credit union space. Started a research practice much like the analyst firms, and so my job for 26 years has been to stay one step ahead of technology and where financial institution executives are. So that’s where you want to be. One step ahead right now is on AI and knowing what’s really going to be the impact of AI on financial services.

Kyle Hollaway:

Awesome. Excellent background there and really interested then to hear from your perspective, every day we’re hearing new hype around different types of AI or AI applications, AutoGPT, Gorilla, Worm, all of those. Can you break it down a little bit for listeners and help them understand what’s out there?

Ron Shevlin:

Yeah, I actually would think of it not as breaking it down, but building it up. There’s something that has been bugging me for the past couple of years about this space, which is that up until last year, and I don’t know when people are going to hear this, but I think it’s been pretty much almost a year to the day since ChatGPT was launched back in late November 2022. But leading up until that point, we in the industry got into a really bad habit of using the term AI as an umbrella term that relates to a lot of different types of technologies that fall under the umbrella of AI. There’s machine learning, conversational AI, robotic process automation, visioning. That’s not easy for me to say. So we won’t talk about visioning anymore. Visioning, natural language processing.

There are a lot of different types of AI technologies and we tend to use the umbrella term AI indiscriminately. Then what happened a year ago when ChatGPT launched, it also helped to popularize the term generative AI and what’s happened in the past year is that people have now substituted generative AI as the umbrella term, and that’s just plain wrong. Generative AI is a type of AI technology and it’s called generative because it generates various types of outputs. Text can be one of them just like Kyle’s output, but Kyle could have asked it to actually write a song about identity in advertising, could have asked it to develop a picture of what AI in advertising might look like, could have created code, could have created data.

That’s what generative means in generative AI, and it is just simply one form of AI technology. Now the power comes when it’s incorporated and combined with other types of AI technologies, like a conversational AI so that you can talk back and forth to a model, or maybe using it with machine learning to actually develop something that can be applied to data analysis and ongoing learning. So there’s a lot of different forms of AI that are out there. Generative AI is the one that’s probably gotten the most press over the past year thanks to ChatGPT. As you alluded to, there are other large language models out there and in fact there are probably hundreds of them already, although ChatGPT and Gorilla and maybe WormGPT as a malicious model are maybe some of the more well-known ones.

Dustin Raney:

Yeah, actually I have a use case, one use case that you mentioned there. I actually have a music studio in my house and I made the request to write a song in the style of John Mayer, and next thing, out come lyrics, verses, chorus, bridge, the full structure. Then I came back behind it and asked, “All right, now play the chords, send me the chord structure as if you’re the Jonas Brothers.” Next thing I know, I’ve got a hit song waiting to release. Not really, but it does call into question, right, the art that I’m hearing: how much of that has been created with or has leveraged this AI and has made its way into the mainstream already? Do you see that as a threat to our human existence from a creator perspective or do you see it more as a tool?

Ron Shevlin:

You’re asking some heady questions there about threat to our existence, and I think that comes from totally non-business related applications. It comes from the thought that you could create a robot, arm it with arms and tell it to go shoot people, and that’s pretty scary stuff. You can literally program it to be very destructive, and I think that’s why some folks are concerned, a broad scale wipeout type of perspective. But listen, we’ve had a lot of tools and things like that that could do that, and we’ve somehow managed to survive it. So I do think we’ll manage, but there’s also, I think, a need to bring this back down to a little bit more business applications of it. And I’m wondering, your John Mayer song, by the way, doesn’t happen to be called Your Body Is A Wonderland, does it? I’m not sure that’s going to sell there, Dustin.

So that point goes towards what I think people in the industry really need to think about. And Kyle, even your intro, let me ask you guys, was that a good intro? I think it was okay, and honestly as you were reading it, I kept thinking, it keeps saying advertising is going to do this, AI is going to do this and AI is going to do that, but it never really told me what it was that AI was going to do. And I think this is an important thing because I think with the state of a lot of these tools today, it’s an accelerator. It’s a start. I think if you guys really were going to use AI as a tool to create an intro, you probably would’ve done exactly what you did, but you probably would’ve tweaked it a little bit and said, “Hey, this is pretty good.”

When I spoke at the Axiom conference a couple of weeks, months ago at this point, I was asked, “Hey, can you talk about the impact of ChatGPT and generative AI on banking?” So I did the same thing. I started off by saying, “So I went to ChatGPT and asked it what’s the impact on banking?” And it gave me an answer, but it really missed the nuances and could have really missed kind of the insight in putting things together. And if you think about how it does what it does, it ingests an incredible amount of data, input information that it picks up from whatever sources. In the case of ChatGPT, it’s picking it up from everything that’s out on the internet already. So Kyle, your intro was really written by who knows how many other people that have asked the similar question or have addressed those questions on websites somewhere, and ChatGPT kind of pulls from that.

And because it has generative AI capabilities in it, that is the ability to generate text, it puts together something that sounds pretty damn good but wasn’t really that good. And I think this is where we’re at in 2023 going into 2024 with the tools, it’s an accelerator. It can help you get started with things. It can help pull together a lot of stuff that would’ve taken you maybe days, weeks, and perhaps even months to pull together, especially things like code, writing programming code could take weeks. Creating a webpage could take days or weeks to do, and these tools are doing it in minutes and maybe even hours at worst. So it really should be looked at as a tool to help accelerate our efforts, not, at this point and not for a good number of years, to replace us.

Dustin Raney:

I can tell you that my John Mayer song actually wasn’t that good at the end of the day, which is why our listeners probably won’t hear it. But I’ll also say that honestly Ron, one of my biggest concerns, and I’m sure that a lot of our listeners would say the same thing, is what I’m feeding into things like ChatGPT and Bard that are my own thoughts, things that I’ve created, that I’m training it on and basically giving knowledge away, not knowing whether or not that… how am I going to actually earn. So there’s this thought that I’m sharing all this information that is a competitive differentiator, my IP, that is getting shared, and how do you ever get that back? Do you feel like there’s a point at which there’s going to have to be AI on top of AI to protect copyrights, to protect corporate assets and materials from being shared just freely?

Ron Shevlin:

There probably will be at some point. It’s interesting that I don’t think ChatGPT right now can actually… you can’t feed it something and ask it, “Was this AI generated?” It doesn’t know that right now. So there’s certainly some legal protections that are going to be important for that. And I think there are some cases in front of some courts. I don’t think it’s exactly the Supreme Court, but I do think there are some cases in front of some courts today to address some of those questions. I think there’s some well-known or somewhat well-known actors and actresses and musicians who have brought some suits against ChatGPT.

I think today it’ll be really hard to predict what the results of those will be. And I also would say that I doubt ChatGPT is going to be found to be in violation of any laws because, I don’t know if you write music or not, but so much of what we hear is derivative of other things anyway. What ChatGPT and these generative AI tools are doing is deriving things that are already out there, and I think it’s going to be hard to prove that they violated any intellectual property or copyright regulations.

Kyle Hollaway:

Okay. Ron, your background and your focus, as you’ve mentioned, is on the financial services industry, so as we start thinking about some of these tools, let’s place that down into more of a business function specifically. In that space, one, what do you see as the biggest opportunity for the financial services industry to leverage this technology? And then do you see any kind of potholes or areas that are of concern for the industry?

Ron Shevlin:

Yeah, so there’s a lot of discussion and I think a lot of people focused on the opportunity to provide better financial advice and guidance. I would tell you, I don’t agree with that to be honest with you because I actually don’t think this is the biggest problem. Not that it’s not a problem, but I don’t think that AI brings a lot more to the table than our ability to provide financial advice and guidance today. Most people who need financial advice and guidance, it’s really simple. It’s really more spend less, save more, make more kind of stuff. And yes, there’s stock picking, but there’s so much unknown, and the problem is that the things that would really influence the movement in stock prices, the economy, are things that are not getting fed into the internet or a lot of databases in the first place. I mean, look at how many econometric models have been developed by high-end economic consulting firms and they always get it wrong.

Generative AI and AI tools in general aren’t going to improve that significantly or magically make it any better. So while there’s a lot of opportunity from a financial advice and guidance perspective, I actually don’t think that’s where a lot of the impact is going to be. I actually think the impact is going to be less about the impact on the customer or consumer and more on the impact of the productivity of the financial institutions themselves. And we in the industry have spent probably the better part of the last 10 years focused on digital transformation, focusing on the digitalization of a lot of large scale, high transaction volume processes like account opening or fraud detection and management and things like that, transaction processing and so forth. But that’s not where the power of AI is going to come, with fraud perhaps an exception. But there’s always been fraud and machine learning technology applied to those areas, and I shouldn’t say always, but for a good 30 years now.

So for most people in the industry, that is forever, but the opportunities are really going to be more from a knowledge worker perspective and really about productivity improvement. It’s going to help legal people create and monitor and manage changes in the contracts. It’s going to enable IT folks to develop code, debug that code and deploy new data models and programs. It’s going to enable marketing people to create new marketing campaigns in a fraction of the time it takes them to do that today. So think about the knowledge workers of the organization, and what the tools are going to do is help accelerate getting their jobs done, much more so than large-scale processes. And more so than I think trying to crack this nut on planning and advice and guidance, which just has too many unknown variables in it. And no matter how much data ChatGPT or any other large language model is pulling from, there are too many things going on that impact the results, and we’re just not going to be able to capture a lot of that.

Kyle Hollaway:

Yeah, no, I think that’s a really interesting perspective and certainly I’ve always scratched my head as well on the focus on the advice side. It seems like there’s a lot of liabilities associated with that because we still know that there are challenges with hallucinations or the fact that the generated information is actually not correct, right? And so I can just imagine future lawsuits of Granny Smith lost her life savings because some financial institution AI told her to do something with her money and it was a hallucination and it didn’t actually pan out. So yeah, I think there is a lot of question on is that really the best use of it?

I love your focus on all of the aspects of optimizing and enhancing the back office capabilities and all the inner workings within the financial institutions to just accelerate those and maybe make them easier or at least be able to get a higher throughput in some of those areas. It’ll be really interesting to see how that plays out, if there is adoption there. I did see one interesting thing, because you did mention fraud, and certainly from an identity standpoint, that’s where Dustin and I live, in this identity space. And while we’re more on the marketing side, we’ve got a lot of interaction also with the authentication and verification side of it. The thing I saw was actually an advertisement, in an article.

But it was a copy of an advertisement selling an extra prosthetic digit that bad actors can actually put on their hand, so that when there’s a picture taken of them in a fraud case, AI may think it’s a generated photo because it’s got a weird sixth finger. And that’s one way that they’re detecting AI-generated images, because basically the extremities are still not being generated accurately. So if you look at a picture, a lot of times if there’s some kind of weird angle on a hand or an extra finger or something, that’s an indicator that it’s AI generated. So I think there’s a really interesting battle that’s going to be taking place between good actors and bad actors, with generative AI and other technologies being used for malfeasance on one side, and being used to try to identify or block those efforts on the other. What are your thoughts on kind of that aspect of it?

Ron Shevlin:

Yeah, you’re bringing up a great point and there’s some really good examples of this. WormGPT, I think either Kyle you or Dustin alluded to that earlier, which, for people who don’t know, is basically a large language model that is designed to help bad actors do stuff. So one of the examples of the use of that particular model is generating messages to, let’s say, accounting managers and pressuring them to pay invoices that really don’t exist. But because the tools can both generate content, generate pictures and art and things like that, it can create pictures of invoices that mirror what the invoices of maybe their customers look like, and it’s able to fool them. And thanks to some tools like AutoGPT, which is a tool that works with ChatGPT to make the tools autonomous.

It’ll just keep going until it hits the goal and keeps pressuring, so you don’t even have to get a human in the middle of it. The other aspect to this is that I think generative AI is basically going to kill voice authentication as a means of identity assessment. One of my colleagues captured a recording of one of the partners at Cornerstone Advisors, Steve Williams, partner and president, and basically tweaked his voice into saying something completely different from what he would ever say. I don’t remember the exact application, but these are the things that I think put a lot of challenges on financial institutions from a couple of different angles.

One is that it’s not simply about figuring out how to use the tools to be more productive, but to stay on top of the developments of the technology to understand how bad actors are using it. I often joke, and I’m not sure it’s actually a joke or not, that I think some of the smartest people on the planet are fraudsters. And their ability to use data is generally a good number of steps ahead of financial institutions, and sadly way ahead of the regulators. And I think this is an area that is really going to hurt, because if the financial institutions are two or three steps behind, the regulators are close to 10 steps behind.

Dustin Raney:

Interesting times indeed. And I would like to shift a little bit now because we talked a lot about the bad guys, how it can be used in bad ways, even six fingers. Man, it’s crazy. We work in the digital and advertising marketing space around consumers. We help a lot of the world’s largest companies go find better customers, drive intelligence on knowledge of their customers, and find new customers. Obviously AI is playing a big role now and it’s going to play an even bigger role in marketing and advertising efforts and understanding human behavior over the course of the coming years. What are some ways or applications in the advertising and marketing space that you’ve found as wins? Things that brands and companies are doing today that are producing results?

Ron Shevlin:

Yeah, I think one good example of this is in direct marketing and creating marketing campaigns. I know of one bank that used ChatGPT, AutoGPT, and a couple of other plugins to help it develop a marketing campaign. They were pushing health savings accounts and basically told ChatGPT and AutoGPT, “Create a marketing campaign and raise $2 million in health savings account deposits.” Now, what the tools did autonomously to a large extent was first connect to and register itself within an email marketing program, then create and test two versions of an offer letter. And then, having been fed the data on the accounts that they were mailing, basically run a simulation. And this is where the story gets bad, because it would’ve been really cool if it wasn’t a simulation, but I’ll explain why it was just a simulation in a minute. But they basically ran through a simulation over a period of about 21 days of having the tool continuously tweak the offer letter to do a couple of things, like determine which offers were the best offers.

What were the best elements of personalization in the offer letters themselves. And then figure out what worked best in terms of the content of the direct marketing letter. Things like, should it have a time constraint on it, should it have some other components, should it mention things like tax implications of the health savings accounts. And it basically optimized, and did something in 21 days that takes most marketing departments three to four months to do in terms of testing and then implementing and then tweaking. And it probably did more tweaking and optimization than a typical marketing department would do, because of the effort involved. Now we’ll go back to why it was just a simulation and not real. I think, Kyle, you mentioned elements like hallucinations, and I like to joke that ChatGPT 3.5 and 4.0 are like dumb and dumber. There are actually some researchers that found that when it came to things like identifying prime numbers or creating mathematical programming equations, ChatGPT 4.0 did not do as well as 3.5 did.

So there are definitely some constraints. And then when you introduce the whole aspect of the potential for hallucinations, which are not like the… I’m an old Grateful Dead head, so hallucinations mean a lot of different things to me. But now what it really refers to is just incorrect output. And the reason for the incorrect output is to some extent because of the data that gets input, but also because the tool is trying to create an output to answer the question or the prompt that it’s been asked. So then it takes liberties with the actual data itself. So if you’re a bank and you’re using these tools today, or if you’re an insurance company, whatever it might be, and looking to deploy these tools for a marketing campaign, quite frankly, you’ve got to be crazy today to trust the tools, because you don’t know what they might actually put into these letters.

It might tell people that, “Oh, a health savings account will not only save you money but improve your health.” It might say things like that. So you can’t let these things run autonomously, and this is why today we really use it from a marketing perspective more as a simulation, and again, as an accelerator. Something to do some testing upfront, accelerate that aspect of it, but give it back to the humans to review, double check, and then actually fix and deploy. Which is why next time, I’m sure, Kyle, when you do an intro for your podcast, you might ask ChatGPT to write you a thing, but I bet next time you trim it down and give it a little bit of refinement to do it better. So I think that’s where we’re at today with these tools: again, I go back to productivity and acceleration, but it’s accelerating human productivity, not necessarily the overall process.
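For readers who want to picture the kind of offer-letter optimization loop Ron describes, here is a minimal, hypothetical sketch in Python. It uses an epsilon-greedy strategy over two invented offer variants with made-up response rates; none of the variant names, rates, or numbers come from the bank in Ron’s story, and the real tooling he mentions (ChatGPT plus AutoGPT plugins) would be doing far more than this.

```python
import random

def simulate_campaign(variants, true_rates, days=21, mails_per_day=100,
                      epsilon=0.2, seed=42):
    """Epsilon-greedy test loop: each simulated day, mostly mail the
    variant with the best observed response rate so far, but keep
    exploring the other variants some fraction of the time."""
    rng = random.Random(seed)
    sent = {v: 0 for v in variants}
    conversions = {v: 0 for v in variants}

    def observed_rate(v):
        return conversions[v] / sent[v] if sent[v] else 0.0

    for _day in range(days):
        for _mail in range(mails_per_day):
            if rng.random() < epsilon:
                variant = rng.choice(variants)              # explore
            else:
                variant = max(variants, key=observed_rate)  # exploit
            sent[variant] += 1
            if rng.random() < true_rates[variant]:          # simulated response
                conversions[variant] += 1
    return sent, conversions

# Invented variants and response rates, purely for illustration.
sent, conversions = simulate_campaign(
    ["time_limited", "tax_benefits"],
    {"time_limited": 0.03, "tax_benefits": 0.07},
)
```

Over the simulated 21 days the loop shifts volume toward whichever letter performs better, which is the "continuously tweak the offer" behavior Ron describes, compressed into a toy form.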

Kyle Hollaway:

Yeah, I think ultimately what you’re alluding to is governance, just continuing to leverage these but wrapping governance around them so that they’re not just considered a black box and just, “Hey, I’m just going to make it run and it’s going to go off and do its thing.” But it’s actually feeding back into a human process that can validate it, tweak it, ensure that it’s compliant or whatnot, and then sending it on down its way. And to your point, yes, like the intro, it sounds fairly eloquent, but when you really digest it, it’s missing a lot of actionable points; it’s just general observations and statements. From your experience as you’ve been in the space consulting with financial services, do you have some specific kind of success stories where you’ve seen AI being used effectively in a real-world kind of situation?

Ron Shevlin:

Yeah, absolutely. I think there’s a few that I would point to. One is an insurance company that basically developed a web page for a new product offering it was launching. The tool went ahead and identified the right keywords, search engine terms to be used on the site, identified various graphics that could be used, developed a form for input, which then connected to the backend system through some code that the tools developed. It did this all in literally two to three hours, which would’ve easily taken two to three weeks. I know of another bank that actually had it generate code to port data from one database to another, basically creating a new data model that enabled it to do stuff in literally $1,500 worth of ChatGPT time. That probably would’ve taken hundreds of thousands of dollars of consulting time and programmer time. As an aside, over the past, what five, maybe 10 years, we have seen programmer salaries shoot up.

You hear stories about programmers at Google and Silicon Valley firms making multiple hundreds of thousands of dollars a year. Guess what? That party might be over with the ability of these tools to generate code like that. I know of another bank that was doing something a little bit more mundane with social media. It was connecting to LinkedIn to find potential candidates for open positions that they were looking for, and basically used LinkedIn to start and continue conversations with people before turning it over to the humans. So again, accelerating some of the work, but doing some just really amazing things, more so than just generating music or generating code: actually conducting ongoing conversations with people like a chatbot might do, but not user- or customer-initiated, provider-initiated conversations. So a lot of different uses for it, but again, I think the key is that these are not necessarily tools that automate large scale, high volume transaction processes. It’s one-off types of things that a lot of the knowledge workers in the financial institutions do.

Dustin Raney:

It seems maybe I can make the statement that AI is only as good as the data behind it. Do you feel like that’s a true statement? And then, with that being said, are there certain players? Obviously we’re in the data business, and that’s a huge economy that data is driving. Do you feel like data providers, or companies that do what Axiom does, are going to become more essential as AI is leveraged by brands to go to market or do these use cases you’re talking about?

Ron Shevlin:

Not only is your statement true that data is very important here, but it’s probably the most important aspect of this, and nowhere is the old adage of garbage in, garbage out more important and applicable than this whole world of AI, and specifically generative AI. And this is why there are things like hallucinations or degradation in performance: first, sometimes the data is wrong. But then often, remember, models are trained on the data that goes into them, and data changes over time, which means the output of a particular model might actually be different when the data available to it has changed from what it was trained on. And when you think about it, most data changes over time, which means there’s always a problem of maintaining model currency and updating the capabilities of a particular model. And then there’s also the fact that yes, a tool like ChatGPT is incorporating everything that’s out on the internet.

That’s a huge amount of data, but a lot of that stuff is not particularly well-structured, it’s not designed to be fed into models, and this is where, from an organizational perspective, things get important. I think for a lot of financial institutions, financial services companies, except for the very smallest, look, within 10 years, maybe even five years, they will all have internal large language models that are feeding AI tools. Those that get the best benefit out of that will be those that work today to ensure that the data quality that goes into those models is good. I know of another bank whose AI innovation team wanted to go to the senior management team and ask for a three-month ban on internal emails, so that they could develop a categorization capability for tagging various pieces of information and actually tell people how to write emails internally and how to write internal reports so that they could be tagged appropriately for future model development.

Obviously that didn’t really happen, and they never really went to the management team knowing that wasn’t really going to happen. But the challenge is that you literally are changing the wheels of the bus as it’s going when it comes to data. When it comes to companies like Axiom, not only is it a data provider, but it’s probably going, if it isn’t already, to be more of a data consultant to a lot of financial institutions, to help them figure out: what is the quality of the data out there? How can it be structured and tagged to be incorporated into these models? It’s funny, I was joking with people at Axiom at the conference when I was there, and it seems like everybody feels threatened by AI. “Oh, AI is going to replace 80% of the jobs.” No way, not within the next 20 years it isn’t.

So people are afraid: “Oh, all this is going to put Axiom out of business.” No. What it actually does is create a whole lot more opportunities for companies like Axiom in the marketing services space, and in the advertising space more generally, to help clients figure out how to use these tools and technologies both internally and externally. A lot is going to change because of it, but let’s get real, the data is key. And right now the data just isn’t high enough quality to rely on these tools without a lot of human intervention, and that just creates more opportunities. But just as we still haven’t seen the full impact of the internet, which has been around for 25, 30 years now, we’re not going to see the full impact of these AI tools and technologies for another good 10, 15, probably even 20 years.

Dustin Raney:

That reminds me of a use case. Back in 2019, I attended an AI conference in Cleveland, and there was an application built on top of Watson, the machine that I think won Jeopardy, the one used for the Weather Channel and all this stuff. It basically took PowerPoint or intranet documentation inside a corporation and indexed it, so that as people typed words, all the slides that had already been built around that topic would surface.

That makes it easier for yourself or whoever to create a presentation with material that already exists without being duplicative. And if you think about how costly time is, sharing that saves millions of dollars otherwise spent recreating things that already exist, simply because you don’t have a way to organize them. So in a way, that’s a use case that touches on everything you just talked about. It still requires human interaction, you have to go tag those PowerPoints to tell the AI what to look for, but the AI then does its job of consolidating and leveraging scale to get you what you need quickly.
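The tag-then-search workflow Dustin describes can be sketched in a few lines. This is purely illustrative, not code from any real product; the class and slide names are hypothetical, and a production system would likely use embeddings rather than exact tag matches.

```python
# Minimal sketch of human-tagged slide indexing: people tag each
# slide once, then a search over those tags surfaces existing
# material as someone types. All names here are illustrative.
from collections import defaultdict


class SlideIndex:
    def __init__(self):
        # tag (lowercased) -> set of slide identifiers
        self._by_tag = defaultdict(set)

    def add_slide(self, slide_id, tags):
        """Register a slide under each human-assigned tag."""
        for tag in tags:
            self._by_tag[tag.lower()].add(slide_id)

    def search(self, words):
        """Return slides matching the typed words, best matches first."""
        hits = defaultdict(int)
        for word in words:
            for slide_id in self._by_tag.get(word.lower(), ()):
                hits[slide_id] += 1
        return sorted(hits, key=hits.get, reverse=True)


index = SlideIndex()
index.add_slide("q3-results.pptx#4", ["revenue", "forecast"])
index.add_slide("board-deck.pptx#12", ["revenue", "churn"])

# Typing "revenue churn" ranks the board deck slide first (2 tag hits).
print(index.search(["revenue", "churn"]))
```

Note the human-in-the-loop point survives even in this toy version: the index is only as good as the tags people assign, which is exactly the garbage-in, garbage-out issue Ron raises.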

Ron Shevlin:

But keep in mind the data quality aspect of this. Listen, I speak at a good number of conferences every year, and I certainly do not want my boss to hear this, but the truth of the matter is I spend an inordinate amount of my time on Google image search, looking for the right picture for the slide I’m putting together. Most of the slides in my deck have maybe one or two, maybe three words on them. It’s all about the picture; I want the picture to convey the thought I’m talking about at that point in the presentation. Now, with AI tools like DALL·E, which connects to ChatGPT and creates art and things like that, I have experimented, and I can tell you they’re just not right. There’s something about the lack of quality, the resolution of the pictures; they look phony, they don’t look real.

I know that picture of the Pope wearing a big puffy down jacket, like the one you’ve got on there, Dustin, but his is silver and a lot puffier than your Columbia. But you know what? It’s just not quite there yet in terms of quality. We’ve got a ways to go for these tools to mature. And I’ll make a quick point to give a little historical perspective on this, and you guys are way too young to remember it. Back in the early eighties when PCs were first coming out, Lotus 1-2-3, launched in January of 1983, became the killer app for PCs. It was actually not the first spreadsheet on the market, but it was the spreadsheet that gained adoption the quickest, and it’s really credited with being the killer app for PCs, because all of a sudden we didn’t need HP calculators anymore.

People were using it for database management. People were even using it for word processing. And from my perspective, ChatGPT is to 2023 what Lotus 1-2-3 was to 1983. It’s a tool that can generate some real productivity improvements, but it isn’t perfect. Back in the early ’80s, people hard-coded a lot of errors into spreadsheet formulas, which caused a lot of problems. There wasn’t much good documentation of spreadsheets, which caused more problems, and there certainly wasn’t much consistency in how the tools were used. I think there are a lot of parallels between where we were with PCs in the early ’80s and where we are with AI tools today. We’re starting to get used to them, they’re coming out in droves, we’re not sure which ones will really survive, which are good and which are bad, and the data going in isn’t that great.

And we have issues with data management. Everything we had issues with back in the early ’80s, with the introduction of PCs and all of the PC-based apps that came out, we’re having today. But look at the development and evolution of PC-based tools. Microsoft came along and became the de facto standard, and Microsoft Excel displaced Lotus 1-2-3 as the default spreadsheet. Something will probably come along in the next five to 10 years to become the standard AI tool, which is why all of the big tech firms, Salesforce, Oracle, Microsoft, Facebook, everybody, are battling this out. Apple? Actually, I haven’t seen a lot from Apple on this; I’m not sure about them just yet. But this is why the big firms are battling it out: they understand this is where the market’s going. And back in the early ’80s, the big mainframe providers like Unisys and Digital Equipment, and even IBM, weren’t developing software for the PC because they pooh-poohed it and didn’t think the market would be there. Today’s technology companies aren’t making that same mistake.

Kyle Hollaway:

Yeah, that’s great. Wow, we are getting close to time here, so I really want to bring us to a close with two questions. One, going back to a practical statement for our listeners who’ve listened to this: there are a lot of ideas, a lot of thoughts, a lot to ingest. What’s the best first step someone should take if they’re sitting in the financial services industry, or in any other industry? What’s a good first step you would recommend?

Ron Shevlin:

Whatever department or functional area they work in in their organization, sit down and define some of the problems, the nuisances. What are the things that just take us too long to do? And boy, if we could do that in a day instead of a week or a month, wouldn’t that be great? Then go do some homework on some of the tools that are out there. Yes, everybody thinks of ChatGPT as ask a question, get an answer. But there are a lot of plugins. Like I mentioned, AutoGPT is really more goal-driven: you give it a goal and it autonomously goes through and uses the tools to achieve it.

Do some homework on what some of the tools are and see how they might address the process issues you have in getting things done. Look at it from a nuisance and a problem perspective: what are the things that take you a lot of time that would be great to do in a fraction of the time it takes today? Then go do some homework to see what types of tools are available. I wouldn’t worry so much about the data integrity aspects just yet. I wouldn’t worry about the legal aspects. You’ve got to first experiment with the tools to see what’s possible.

Kyle Hollaway:

That’s something our listeners can take and go do even today, this afternoon: just look at your existing systems and processes, find those pain points, and start to ideate from there. That’s awesome. So as a final question, the usual standard question we ask on our way out: as you look forward, it’s very dynamic, there’s a lot of stuff going on, what excites you most about the next 12 months?

Ron Shevlin:

What excites me most about the next 12 months is seeing my grandchildren, who are three and one, turn four and two, because there’s a lot that changes between three and four, and between one and two. But I think what you’re probably referring to is the technology and business side, and the thing that excites me the most isn’t so much the AI we’ve been talking about. In financial services, thanks to the open banking movement, and thanks to the need to develop new capabilities and competencies, it’s the extent to which the industry will become very highly integrated. Partnerships will explode, along with a lot of the capabilities to enable them. And then there’s the whole movement toward embedded finance, which is really all about distribution channel change in financial services. We certainly won’t be at the end state 12 months from now, but I think 2024 will be a year where we see a lot of movement toward this integration and toward embedded finance.

Kyle Hollaway:

All right, thank you so much. I think it’s going to shape up to be a very interesting new year. That’s going to wrap up our session for today. Ron, thank you so much for sharing your insights; the journey with AI will certainly be transformative. I’m sure we’ll want to bring you back later next year as we see how this progresses and get your take on it. So be looking in your inbox for an invite to come back on with us.

Ron Shevlin:

Thanks.

Kyle Hollaway:

For our listeners, you can find all our Real Talk podcast episodes at axiom.com/realtalk, or you can find us on your favorite podcast platform. And with that, I want to thank you, and everyone have a great day.


Ron Shevlin

Chief Research Officer

As Cornerstone Advisors’ chief research officer, Ron Shevlin heads up the firm’s fintech research efforts and authors many of its studies. He has been a management consultant for more than 30 years, working with leading financial services, consumer products, retail, and manufacturing firms worldwide. Prior to joining Cornerstone, Shevlin was a researcher and consultant for Aite Group, Forrester Research, and KPMG. Author of the Fintech Snark Tank blog on Forbes, Shevlin is ranked among the top fintech influencers globally and is a frequent keynote speaker at banking and fintech industry events.
