Podcast
Episode 97

Creating AI Policies That Promote Innovation & Trust

Patty Parobek
VP of AI Transformation at Mod Op

Tessa Burg and Patty Parobek talk about developing AI policies that strike a balance between innovation and responsibility.


“Until AI is integrated into an end-to-end process, you’re not going to achieve the next level of implementation, adoption and transformation.”


They discuss how companies can move beyond inconsistent AI adoption and fully integrate AI solutions to enhance efficiency, creativity and growth. Patty also explains how setting clear policies and governance can actually accelerate AI adoption rather than hold it back.


“Defining what you cannot do with AI helps you be more creative about operating inside of those guardrails.”


This episode offers practical steps for marketers working through AI challenges to help them build policies that promote responsibility, inspire trust and encourage innovative use of AI across teams and customer interactions.

Topics In This Episode:

  • AI adoption trends in marketing
  • Measuring the impact of AI on efficiency and performance
  • Integrating AI into end-to-end workflows
  • Responsible AI use and governance
  • Balancing innovation with data security and compliance
  • How guardrails can accelerate creativity and productivity
  • Developing a responsible use policy for AI in marketing
  • The role of data in AI-driven personalization
  • Consumer concerns about AI bias and data privacy
  • Practical steps for creating AI policies that build trust

Watch the Live Recording

Full Episode Transcripts

Tessa Burg: Hello and welcome to another episode of “Leader Generation,” brought to you by Mod Op. I’m your host, Tessa Burg. And today I am joined once again by Patty Parobek, our VP of AI Transformation here at Mod Op. Patty, thanks for coming back on.

Patty Parobek: Absolutely. Happy to be here.

Tessa Burg: So, Patty and I collaborate on increasing the adoption of AI internally here at Mod Op, and on extending our guidelines, vision, rules and policies to the innovation we develop internally and the AI solutions we develop for clients. And we’ve noticed some trends internally and with our clients. We’ve also been reading a lot of state-of-AI-marketing reports recently, and something that’s really stood out, and I guess surprised me, is that everyone is struggling with measuring the impact their use of AI has actually had. So one of the stats, and I feel like I’m gonna get this wrong, so Patty, maybe you should say it, is that 99% of marketers are using AI. What is that stat and what study is it from?

Patty Parobek: So, this is, and we talk about the Marketing AI Institute a lot.

Tessa Burg: Yeah.

Patty Parobek: They have great research, and it is an institute dedicated to our profession and the use of AI. So you can imagine, Tessa and I are very into all of the studies, research and content that comes from them. The study they put out is their State of AI 2024 report. I think this might be their third or fourth consecutive report where they measure things like adoption and transformation at scale. So the stats that you’re talking about, and I’m reading from the report right now: when asked, “How would you best describe your personal use of AI tools?” 99% of marketers, and this is over 1,700 marketers, so a very significant sample, said they are actively using AI in some capacity. So only 1% are not using AI at all. And then when asked, “What stage of AI transformation best describes your marketing team?” only 10% cited that they’re achieving wide-scale adoption of AI while consistently increasing efficiency and performance.

Tessa Burg: Yeah. And those two metrics are the two we hear constantly that clients want to see. And we’re getting more questions about, “Hey, can you help me predict the results I’m going to get if I use AI?” And that especially comes up because these AI tools are not cheap. So if I’m going to invest in, you know, issuing a ChatGPT license to everyone, what can I expect? How can I measure that this is actually saving them time and/or helping them grow and impacting the bottom line? And it feels like these questions are coming early, but it’s really hard to even measure that. Patty, what are some of the things that people should be doing, or can be doing, to start down that measurement road?

Patty Parobek: Yeah, it’s really interesting because we talk about scaling transformation and AI transformation a lot. And most marketers, as you can tell from that study, are stuck in this “random acts of AI” phase, right? I’m doing ad hoc work, I’m trying AI tools in ad hoc ways, one output at a time. And it’s no wonder people can’t measure efficiency increases, because they have yet to apply AI to an end-to-end workflow or process. So until AI is integrated into an end-to-end process, you’re not going to achieve the next level of implementation, adoption and transformation, and you’re not gonna be able to have a sustained way of measuring any kind of significant increase in efficiency or productivity. So we can talk about things to close that gap, and of course it’s things like having a clear goal and making sure that you have access to unsiloed data and that your systems are holistically integrated, all of those process and efficiency things. But before all of that, and this is what today’s topic is all about, it all starts with guardrails and the ways to operate, and that is responsible use.
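One lightweight way to start down the measurement road Tessa asks about is to baseline how long recurring tasks take before and after AI is applied to them, then compare the value of the hours saved against tool spend. Below is a minimal sketch in Python; every task name, rate and cost is a hypothetical assumption for illustration, not a figure from the episode.

```python
# Hypothetical sketch: estimating monthly time saved and ROI from AI-assisted
# work. All task names, durations, rates and costs are illustrative assumptions.

# (task, avg minutes before AI, avg minutes with AI, times performed per month)
tasks = [
    ("draft blog post",      240, 90, 8),
    ("summarize research",    60, 15, 20),
    ("QA ad copy variants",   45, 10, 30),
]

HOURLY_COST = 75.0           # blended hourly cost of a marketer (assumption)
MONTHLY_LICENSE_COST = 30.0  # per-seat tool cost (assumption)
SEATS = 25

# Sum the per-task minutes saved, weighted by how often each task runs.
hours_saved = sum((before - after) * runs for _, before, after, runs in tasks) / 60
savings = hours_saved * HOURLY_COST
spend = MONTHLY_LICENSE_COST * SEATS

print(f"Hours saved per month: {hours_saved:.0f}")
print(f"Estimated monthly value: ${savings:,.0f} vs spend ${spend:,.0f}")
print(f"ROI multiple: {savings / spend:.1f}x")
```

Even a rough baseline like this gives teams the before/after comparison that, as Patty notes, ad hoc “random acts of AI” can never produce.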

Tessa Burg: Yeah, I think when people hear guardrails, responsible use, governance and compliance, they kind of shut down. They’re like, “Oh, you’re putting up red tape, you’re putting up a blocker to me getting access to these tools, and we should really be testing and innovating.” But then on the other side, especially on the client side, and both of us have worked on the client side, you know, at software companies, at CPG and consumer brand companies, and a place we both worked, American Greetings, there’s a lot of attention to data security. So when we talk about AI being integrated end-to-end into your data stores, processes, platforms and systems, it has to be integrated end-to-end in order for you to measure the impact on efficiency and productivity and to realize real growth. That is why responsible use paired with governance is so important. And it doesn’t necessarily have to mean that you’re not testing. It just means that you are assessing apps, that you are putting in communication plans that give everyone involved top-down visibility into how data’s coming in, going out, and what’s acting against it, and just building that trust that you’ve done your due diligence in vetting, that you understand all of the important aspects of that model, any of the fail points or points where you need a human in the loop to look at bias, and that you are able to validate that your controls are stable. So I know I just said a lot, and obviously in defense of starting with responsible use, but Patty, from the marketer standpoint, where does that resonate with you? Where does that help build your trust that as we use AI, it’s not necessarily a blocker?

Patty Parobek: Yeah, it’s one of the things that gets me fired up, this thinking that it’s going to be a blocker, because if anything, it’s an accelerant. And the classic phrase that you hear when you talk about governance and these kinds of things is that better brakes make for faster acceleration. And it’s true. ‘Cause when you think of Formula One racing, where that came from, the better the brakes were, the later and the faster around the curve you were able to brake and accelerate, and the tighter turns you were able to take. So you won more races. And defining what you cannot do helps you understand and be more creative about operating inside of those guardrails. You know, I equate it a lot to parenting because, of course, I’m a mom. If I give my kids confines of, “Well, these are the things that you can’t do in this bounce house or this trampoline park, otherwise go nuts,” they’re flying around the room and doing backflips and all those things, and guess what? They’re very safe and they don’t break any bones. So, it’s understanding the confines so that you can move faster within them and unlock creativity toward meeting your goals.

Tessa Burg: So if you are a forward-looking marketer and you are sitting on the agency or the client side right now and you’re like, “Well, our IT team or our leadership has really discouraged us from engaging with external parties, whether that be software as a service or agencies and using AI.”

Patty Parobek: Yeah.

Tessa Burg: What should they be looking for in a partner’s responsible use policy? Like what are those aspects that are included?

Patty Parobek: So that’s a loaded question, because when I think about organizations that are kind of on lockdown, telling their teams, “Don’t use it at all,” that provides a level of lack of clarity that’s going to keep people from taking advantage of anything, whereas as long as you can define a clear set of guardrails and responsible use, people can move. And we can go over what some good examples of responsible use policies are to make this more clear. But it’s never going to be an all-or-nothing type of play. Saying “don’t use anything,” first of all, isn’t going to deter usage; we know 99% of marketers are already using it. And it’s going to prevent people from opening up about their experiences and what they are doing, which is exactly what you need to hear to better craft policies around usage.

Tessa Burg: I think it’s interesting that you said that locking it down is itself throwing up guardrails. That’s where we started at Mod Op. When ChatGPT first came out, the reaction was, our responsible use policy should be that we don’t use AI, because AI is going to replace the things that we do. And a handful of people tested it and thought it was super cool, and I think people got intimidated by that, a little scared of the unknown. And what we found was that the people who were most resistant to leaning into what AI could do, as it relates to delivering higher value to clients, were the people who had not tried it yet. And the people who were the least responsible at the time, I would say, were the people we did want to use it, the kids in the bounce house going berserk. But we wanted to make sure it was being done in a way that delivered value to the client. So when we start with value to the client: what do clients care about? And that’s gonna be different for every business. For us, it’s even different for different verticals. You know, we’ve seen other studies where consumers are starting to care about how ads were created. Is this real? Did a human do this? And it’s not because they’re attached to the creative or even the creative process, it’s because consumers know that LLMs have bias in them. When any tech is trained off of the internet, or a bunch of publicly available data, or maybe even privately available data, and you don’t know what’s in that data set, there’s going to be bias. And consumers now have that lens of: Is this accurate? Is this biased? Do I know this to be true? Am I being targeted with this to reinforce something? You know, a lot of people are familiar with the way algorithms work now. So if you are at a business and you’re thinking about where to start with your responsible use policy, step one is to dig deep into what your audience understands about AI and where their concerns are. And then that will lead to where you need to build visibility and trust in a policy that you publish publicly, so that you can set the guardrails that matter for your use of AI in your business.

Patty Parobek: It is true, ‘cause if you think about your clients, if you think about your customers in any organization, you’re going to want to meet them exactly where they are to walk into this next evolution. AI is here, so there’s no avoiding it, but there is a time and place across every organization. And if you think of the traditional adoption curve, your customer base and your client base are going to be spread across that common bell curve. You’re going to have high adopters on the leading edge, leading AI integration in their own organizations and their own workflows. And you’re certainly going to have people who are resistant, people who are nervous. And when you’re trying to explain to your customers and clients that this isn’t something to be scared of, this is something to embrace, one of the best things a responsible use policy does is open that conversation up. So, I completely agree with you. I think it’s brilliant to start with your customers and clients to understand where the majority of them are on this curve, and so where we need to start the conversation. Responsible use is one of the things that can reassure your customers: we’re not using your data in any way that you wouldn’t know about or agree to. We’re not using tools that are gonna jeopardize your data or your sensitive information in any way, and we wouldn’t do that to ourselves either. And we can talk about this, and we can collaborate on how to best use these tools and systems together in an end-to-end process that isn’t just our process at Mod Op or your process at company X; it’s our collaborative process, Mod Op and company X.

Tessa Burg: Yeah, and I think something that’s really easy, and maybe not obvious, is that the data that is most useful for generating value with AI is not PII.

Patty Parobek: Right.

Tessa Burg: So a lot of people, you know, when we think about the push for no cookies, it’s because I’m feeling stalked, I’m feeling targeted, I feel like, “You know, companies and brands are just taking all my personal information.” You don’t need any PII to get a ton of value that generates highly personalized experiences at scale. So I think simply starting there and letting people know, like, “Hey, we already know a lot of the things our consumers are concerned with.” Clients are the same way: clients are very protective of their data, they’re very protective of what makes their services and products proprietary. We don’t need any of that as agencies that wanna stand up automated solutions to deliver richer, more personalized, scalable experiences. None of that is needed. And so we start at that point of, “What don’t we need?” And a lot of the things that people are concerned about, we don’t need. And if people say you do need that, then I would definitely question it.

Patty Parobek: Yeah.
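To make the no-PII point concrete, here is a minimal sketch of how a personalization prompt might be assembled only from aggregate, non-PII signals such as segment and category interest. All class names, fields and example values are hypothetical assumptions for illustration, not a description of how Mod Op builds these systems.

```python
# Hypothetical sketch of PII-free personalization: the prompt is built only
# from behavioral/contextual signals, never from names, emails or identifiers.
from dataclasses import dataclass

@dataclass
class AudienceSignals:            # all fields are non-PII by design
    segment: str                  # e.g. "value-conscious DIYers"
    recent_categories: list[str]  # browsing/purchase themes, not raw history
    region_climate: str           # coarse context, not an address
    channel: str                  # where the message will run

def build_personalization_prompt(s: AudienceSignals) -> str:
    """Compose an LLM prompt from aggregate signals only."""
    return (
        f"Write a short {s.channel} message for {s.segment} "
        f"interested in {', '.join(s.recent_categories)}, "
        f"relevant to a {s.region_climate} climate. "
        "Do not address the reader by name or reference personal data."
    )

print(build_personalization_prompt(AudienceSignals(
    segment="value-conscious DIYers",
    recent_categories=["outdoor paint", "deck sealant"],
    region_climate="humid summer",
    channel="email",
)))
```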

Tessa Burg: And the other big piece is, it doesn’t have to occur in the open. I think not everyone understands that there are licenses and there are ways that you can architect the environment to keep it behind your company’s firewalls and within your policies even when you’re working with outside vendors. So-

Patty Parobek: Yeah.

Tessa Burg: Those, to me, are the first two really easy things if you are a marketer and you have an IT team or tech team that’s hesitant: let’s start there. Let’s address the big things that we know companies are concerned about. Now, what do our guardrails look like? And then also make that visible and public to your audience.
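As an illustration of the “behind your company’s firewalls” point, the sketch below routes LLM calls through a company-controlled gateway rather than a public endpoint. The gateway URL and environment variable are hypothetical; the client usage follows the OpenAI Python SDK (v1+), which accepts a custom base_url for proxied or self-hosted deployments.

```python
# Hypothetical sketch: routing LLM calls through a company-controlled gateway
# instead of a public endpoint. The gateway URL and env var name are
# assumptions; the SDK calls themselves are standard OpenAI Python SDK v1+.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://llm-gateway.internal.example.com/v1",  # stays inside your network
    api_key=os.environ["INTERNAL_LLM_API_KEY"],              # issued by IT, not a personal key
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # whichever model your gateway exposes
    messages=[{"role": "user", "content": "Summarize our Q3 campaign brief."}],
)
print(response.choices[0].message.content)
```

A gateway like this is one way IT can log, filter and audit traffic centrally, which is exactly the kind of visibility and control Tessa describes.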

Patty Parobek: You know, something that I think could really make this very tangible to folks listening is how Mod Op went about this, like our starting journey. It was really exciting for me, coming to this organization, to see how we were really leading in things like responsible use and governance around AI and AI applications. And it makes sense based on the caliber of client that we have; we need to be. But I’m interested to hear how it started with you. How did you meet clients where they were to create the first vetting frameworks, the first responsible use policies?

Tessa Burg: Yeah. Well, it’s interesting being the CTO, because our internal staff are my clients, and I had the two halves: people who wanted licenses, wanted them now, and saw the value immediately, and the other half that was like, “I think this might actually be a danger or a risk to our clients.” So where we started was, first, let’s get an idea of what that percentage is, and then, of the people who are concerned, what’s the why? Why are they concerned? And we took those concerns and put them directly into our responsible use policy, because their resistance is, one, reasonable. It’s reasonable and completely understandable to not want to just jump in both feet first. And two, it’s real. And I think sometimes there’s a want, when you’re a very visionary, forward-looking person, to kind of be like, “Oh, no, that’s not gonna happen,” to dismiss it. But those concerns are real and definitely possible, even more so today. So, where we aligned was back to our vision. And we engaged experts that we knew had a deep understanding of our clients, our internal staff, and their concerns. At Mod Op, we have someone with over two decades of experience specifically in compliance and governance, Monica Richter. She was head of compliance and governance at S&P. And then there’s also our head of strategic consulting, because accelerating AI use is something we help clients with all the time: Jonathan Murray. He’s a former CTO at Warner Music Group, and, oh man, I’m gonna miss some, but I know he was formerly a Microsoft executive and was at the New York Times. And what we really liked about Jonathan’s consultation on responsible use is that it dealt with creative outputs. As a creative agency, that’s another really important aspect: we wanna lean into bringing creative and tech together, but we also want to make sure that credit is being given and that we are respecting who owns the IP, who owns the copyright. So, we struck that balance with the responsible use policy and created frameworks so people could responsibly test. We also educated and published our responsible use policy, so that those who were concerned could see where we were putting in protections for our clients, for our staff, and for our own internal data and IP. And then we made sure that we highlighted and gave a voice to the experts in the room, the people who had led governance and security initiatives in the past at companies that were scaling creative products. So it definitely was a journey. There was a lot of debate, and you can go to the website today and see our responsible use policy. The end result was something that was very, very simple. And I think that’s another key: it has to align to the concerns. It has to address what’s important to the people your business serves and your internal staff. But it has to be simple, it has to be easy to understand, or else no one’s going to follow it. And then on the other side, you have to be able to stand behind it and put controls in to make sure that people are following it. And if they’re not, you need the right monitoring in place to have those one-on-one conversations to continue to build trust and visibility.

Patty Parobek: So I think it’s certainly clear to me that not enough organizations are applying anywhere near the level of rigor that Mod Op has put against governance and responsible use in AI. And I don’t think it’s from a lack of desire. I think it’s probably, as you said earlier in the conversation, that it’s just not one of the first things thought of, and it’s thought of as a barrier: “I’m not gonna move fast enough.” And a lot of companies are small companies that probably don’t have the expertise or skill set to put a framework like that in place. But even if you start with, “These are the ways that we’re vetting the applications that we’re using internally, to make sure that they’re safe for us, for our data and for our customers’ data,” you already have the start of a responsible use policy. Just articulate that and have that conversation. I think a lot of organizations don’t even start with anything; they have zero things out there. And I know that ’cause we were talking about this statistic earlier today, too. The same study from the Marketing AI Institute talks about responsible use policies, responsible AI policies and ethics policies. They asked the 1,700-plus marketers, “Does your organization have an AI ethics policy and/or responsible AI principles, either public or internal facing?” And only 36% answered yes. Only 36%. So there’s a lot of room for growth. And if you start even with a conversation with your customers, with a focus group, that’s a great place to start. But if you are a client working with agency partners, these are good questions to ask your agency partners: What protocols do you have in place? What frameworks do you have in place? What policies do you have in place so I know that my data is protected?
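Patty’s suggestion, that documenting how you vet applications is already the start of a responsible use policy, can be captured as simply as a shared checklist. A minimal sketch follows; the question wording and the example tool name are hypothetical, drawn from the concerns discussed in this episode.

```python
# Hypothetical sketch: a tool-vetting checklist as the seed of a responsible
# use policy. Question wording and the example tool name are illustrative.
VETTING_CHECKLIST = [
    "Does the vendor train its models on our prompts or uploaded data?",
    "Can data retention be disabled or scoped by contract?",
    "Where is data processed and stored (regions, subprocessors)?",
    "Does the tool support SSO and role-based access?",
    "Is there a human-in-the-loop step before outputs reach clients?",
    "Who owns the IP and copyright of generated outputs?",
]

def vet_tool(name: str, answers: dict[str, bool]) -> bool:
    """A tool passes only once every checklist item has been cleared."""
    unresolved = [q for q in VETTING_CHECKLIST if not answers.get(q)]
    for q in unresolved:
        print(f"[{name}] unresolved: {q}")
    return not unresolved

# Example: a tool with one open question stays unapproved until it's resolved.
ok = vet_tool("HypotheticalCopyTool", {q: True for q in VETTING_CHECKLIST[:-1]})
print("approved" if ok else "not yet approved")
```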

Tessa Burg: Yeah, I totally agree. And we’re at the end of our time, so I would say we’ll close with this. One, if you need an example of a responsible use policy, definitely visit our website. Two, when you’re thinking about, “Okay, but how do I write my own?”: What’s your vision? What are your values? What are customers concerned about? And, I mean, use ChatGPT. Say, “I need a responsible use policy that takes these things into consideration.” Work with it, ask it questions: What else should be added? What else should I consider? And what should I be monitoring? And then work that across your leadership team. How can you get alignment from tech? How can you get alignment from the people who are closest to the customers, customer service, sales, whoever that is in your organization? ‘Cause that buy-in and alignment is so important. But also, if you haven’t done an internal survey to see where people are in their adoption cycle, or what their feelings are about AI, just having those conversations, to Patty’s point, is gonna get you a lot of great, direct, very valuable feedback. And Patty, any other next steps that people can take to get started with, I forgot your phrase, putting brakes on to go faster, or good brakes help you accelerate?

Patty Parobek: I’ll just say this ending sentiment, and that’s that in the absence of clarity, people do not move. So even giving some definition is going to actually accelerate movement more than none at all.

Tessa Burg: Yeah.

Patty Parobek: But yeah, I agree with using ChatGPT. It’ll actually give you a really solid starting point for your organization for a policy, internal and external facing.
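For readers who want to try the drafting approach Tessa and Patty describe, here is a minimal sketch of that kind of prompt, sent through the OpenAI API rather than the ChatGPT interface. The angle-bracket placeholders are meant to be replaced with your own vision, values and customer concerns; the model name is an assumption.

```python
# Hypothetical sketch: drafting a responsible use policy with an LLM, seeded
# with your vision, values and audience concerns, then iterated on from there.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Draft a responsible AI use policy for a marketing team.\n"
            "Our vision: <your vision here>.\n"
            "Our values: <your values here>.\n"
            "Top customer concerns: data privacy, bias in outputs, "
            "transparency about AI-generated creative.\n"
            "Keep it simple and easy to follow. Then list what else I should "
            "consider and what I should monitor for compliance."
        ),
    }],
)
print(draft.choices[0].message.content)
```

The follow-up questions Tessa suggests (“What else should be added? What should I be monitoring?”) become subsequent turns in the same conversation before the draft goes to your leadership team for alignment.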

Tessa Burg: Well, thank you so much for joining us again today to talk about this important subject. We know a lot of people are getting pressure to show results on, you know, how is AI saving me time? How is AI driving growth? And this is the perfect starting point. It’ll help you get alignment, it will help create visibility and trust. So, best of luck to everyone. And if you wanna hear more episodes from “Leader Generation,” you can visit us at modop.com. That’s M-O-D-O-P.com. And Patty, if anyone has questions directly for you, where can they find you?

Patty Parobek: They can find me always on LinkedIn. Look for Patty Parobek, P-A-R-O-B-E-K. I’m probably one of the few, or the only one, that exists, so you will not have trouble finding me.

Tessa Burg: Well, thanks again for coming on today.

Patty Parobek

VP of AI Transformation at Mod Op

As Vice President of AI Transformation, Patty leads Mod Op’s AI practice group, spearheading initiatives to maximize the value and scalability of AI-enabled solutions. Patty collaborates with the executive team to revolutionize creative, advertising and marketing projects for clients, while ensuring responsible AI practices. She also oversees AI training programs, identifies high-value AI use cases and measures implementation impact, providing essential feedback to Mod Op’s AI Council for continuous improvement. Patty can be reached on LinkedIn or at [email protected].


 

Getting Digital Done


For an executive guide to growth and transformation, check out our book Getting Digital Done

Buy Now On Amazon