Episode 104

Part 1: Adopting AI Safely & Strategically

Tessa Burg & Patty Parobek


“Responsible use isn’t just words on paper; it’s about actions, accountability and continuous learning.”

Tessa Burg

In part one of a two-part series, Tessa Burg focuses on a critical topic for businesses today: the responsible use of AI. With ChatGPT and other AI tools rapidly reshaping marketing and business, this episode offers a thoughtful exploration of why every organization needs a Responsible Use Policy.


“Transparency with clients is critical—they want to know how their data is being protected.”


Tessa outlines practical steps for creating frameworks that safeguard data, inspire confidence in AI experimentation and scale innovation. Listeners will learn how to address the tension between early adopters and skeptics, assess the impact of AI tools on marketing processes and implement policies that protect clients and brands.

Highlights:

  • Importance of Responsible Use Policies for AI in business
  • The risks associated with using AI tools like ChatGPT and Grammarly
  • Frameworks for safely testing, learning and scaling AI in organizations
  • Addressing internal tensions between AI adopters and skeptics
  • Key steps for creating a Responsible Use Policy
  • Examples of industry disruptions and lessons for adopting new technology
  • Training staff for responsible and effective AI usage
  • Building transparency and accountability into AI processes
  • Client expectations for data safety and visibility

Watch the Live Recording

Tessa Burg: Hello and welcome to another episode of “Leader Generation,” brought to you by Mod Op. I’m your host, Tessa Burg, and today kicks off a two-part series. We are going to dive deep into the value of and the need for responsible use policies at your business.

Tessa Burg: We have been on a journey these last couple of years since the launch of ChatGPT to make sure that we are not only leaning into how we can use AI to elevate our creative work for our clients, and into upskilling and reskilling our team and staff to ensure they have productive and fulfilling future growth within our company, but also doing so in a way that is responsible, that keeps our IP and data safe here at Mod Op and, most importantly, keeps our clients’ brands and data safe as well.

Tessa Burg: So we’re gonna talk to you a little bit about why responsible use policies are so important, how they provide a framework and the guardrails your staff needs to build confidence in testing and learning from AI, as well as scaling it within your organization, and then get into some detail. What does that framework actually look like?

Tessa Burg: So episode one is the overview, the why and the how. In episode two, we’ll go into the detail. And I am joined today by Patty Parobek to help us along; she’s our VP of AI Transformation. Patty, thanks so much for being a part of this two-part interview series.

Patty Parobek: Yeah, absolutely. I’m so happy to be here.

Tessa Burg: Awesome. So you may be surprised to learn that only 10% of organizations have a formal responsible use policy in place specifically for generative AI, according to a recent ISACA study. Honestly, it really isn’t that shocking. I think we, as people, have gotten into the habit, when we download an app, of just saying yes to whatever the T’s and C’s are without really thinking about it. A lot of us have probably never even gone to our own websites to check out what the privacy policy is.

Tessa Burg: But as marketers and marketing leaders, we are always held to very high standards in data privacy, and that’s something we’re probably all familiar with. We have worked very hard to gather leads. In consumer marketing, we have run persona studies, we’ve done surveys, we have data in Google Analytics, in CDPs and in data lakes. We have fought hard for that data, and that is why protecting it is so important. And something that gets lost is that anytime you’re using an AI app, you are taking on a level of risk. A lot of the applications and technology you use today already have AI built into them. So when you are using Copilot, when you are using ChatGPT or Grammarly, that is on your machine. And if you’re interacting with that wonderful data you’ve acquired and are held to protecting, part or all of that app may also be interacting with it and seeing it.

Tessa Burg: So it’s incredibly important that we make sure what’s important to us, what differentiates our companies, remains protected as we go forward in learning how to use AI to accelerate our business and do what’s most important: give our clients and our customers amazing experiences that increase retention and growth.

Tessa Burg: So I love this image that Patty found of a mountain. How scary is it, not just wondering what’s happening next in AI and in the industries in which we operate, but scaling a mountain? And imagine, if you had a guide, if you had guardrails as you’re climbing up this mountain, how much safer you would feel and how much more confident you would feel moving forward.

Tessa Burg: You’ve probably experienced this at your company. I know we had this internally early in our journey: a natural tension between those who are embracing AI and really leaning into exploring what it can do, and those who are hesitant. They’re not sure if it’s right for their customers. You might not be sure if your IT team would even approve some of these apps. I know a lot of our clients are going through transformation projects of their own, where they’re starting to document what processes can be automated and how to get their arms around their data. And some of us in marketing are in wait-and-see mode, because uncertainty is really the key word. But even if you have a separate project going on in your organization, it is incredibly important as marketing leaders that we get our arms around how our staff is using AI, because they are using it. They’re using it in their personal lives, which means they’re likely starting to use it for work. We wanna baseline what those uses are and understand how they’re impacting process and value so that we can start to operate differently.

Tessa Burg: I’m throwing this slide up, and since this is also a podcast episode, I’ll tell you a little bit about what’s on the visual. We have an image of a Blockbuster Video and an image of FYE. I actually worked at Camelot Music in the mall, which was then acquired by FYE, and I thought a big conglomerate coming in and taking over our little mom-and-pop music shop was a shift. Then came Napster, and Blockbuster watching how Netflix delivered. It’s very similar to what we’re going through today, like Kodak with the reusable cameras. We have to ask: did the value of the content change? Did the value of the service change? Not really. We’re still watching movies, we’re still watching TV. The value is the entertainment. There’s still a ton of creativity in what’s being produced when you think about what you’re consuming on Netflix. Same with music, same with bookstores like Borders. But what did change is the how.

Tessa Burg: How is that creativity, that service and value, being delivered? That is where we’re at today. As marketers, how we do our work, how we elevate to continue to deliver value at a strategic level, at a creative level and in delivery, is going to fundamentally transform. So we at Mod Op, whose name is short for modus operandi, the way in which something operates or works, are very excited to be leaning into redefining the way agencies deliver value for their clients. We combine technology and creativity to give our clients an unfair advantage. And we do that across all of our services, which we call strategic business units. We have over 420 specialists in 13 cities across three different countries.

Tessa Burg: In our creative services, we do everything from brand positioning and advertising to content production, social media and influencer management, packaging and brand identity, and public relations. On the technology side, which is the world I live in, we do AI implementation and custom AI app development, plus strategic consulting, which is where we work with our clients to help them get their arms around their data, ’cause we know that whatever you’re evolving to when you’re using AI is only as good as your own data, the people you have internally, the skills they’re building, and how all of that serves your vision and your customers and end users. We bring that to life through our digital experience department, which does web, app and game development. And then we help find the right customer at the right time through our media planning, data analytics and measurement teams.

Tessa Burg: So that’s a little bit about us as a company. We serve a variety of clients on the B2C and B2B side, including AWS, Black and Decker, Fender, Baja Mar, and the list goes on. We serve some of the largest brands in the world, which is why it’s so important to have a conversation about being vigilant in executing our responsible use policy. It is at the core of how we use AI in our business, and of how we build AI implementations and AI innovation internally. Responsible use is not just writing words down on a page. It also includes how you administer licenses, how you monitor the use of those licenses, and, when you’re building an agent, a GPT, software or an integration that extends the value of your AI agents and GPTs into the client environment, making sure you’re doing so in a manner that can be controlled and measured.

Tessa Burg: This doesn’t mean you stop adapting. Building things for scale is a talent; it is a skill. Doing one-offs and band-aids, just testing and trying without a framework, is pretty easy. If you’re in that place, if you’re experimenting with GPTs, just testing and learning, and you know your staff is using apps but you’re not quite sure how or where, that’s fine. Now is the perfect time to start putting in a discipline.

Tessa Burg: And like I said, in episode two we’re gonna go step by step through how to execute a discipline within your business, so you can go from just testing apps to using them responsibly and measuring the ROI on them at scale. This is what helps us live our third principle at Mod Op, which is to move relentlessly forward. And we do this with our clients. We have an internal AI-enabled platform that we use to drive all our work, and we make that platform available through extended components in our client environments. So they’re always benefiting from our continuous optimization and from the feature enhancements we build in that are specific to their visions, their values, and keeping their brands and consumer journeys safe.

Tessa Burg: So let’s talk a little bit more about what you as a client have told us. We recently sent out a survey to a select group of clients, and some of the feedback was really not surprising. Clients told us that privacy and security measures were really important to them, and they wanted more visibility into how we are executing our policies and how we are ensuring our staff is following them. They want to see that we are holding the staff accountable. They wanna know what we’re using, why, and when. They want reassurances and evidence that their data is being protected. And then if we are using generative AI to generate content, how do they know that? Where is it being used? So let’s dive a little bit deeper and walk through how we answer these questions.

Tessa Burg: So, first topic: how are you keeping my data safe? The number one reason to have a responsible use policy is data safety. Second topic: are you using AI tools for my work right now? We love making sure that clients know AI is a part of our process, just as it’s going to become a part of their process as well. We all have access to the same tools, so it’s not the use of AI that makes our agency different, and it’s not the use of AI that will make any company different from its competitors. What makes the difference is the standards and values you put around your use of it, and then how you measure success and the quality of the output, which is determined by your data, the quality and processing of that data, and the skills, strategy and creativity your staff brings to it.

Tessa Burg: So that brings us to our next big topic: how do you hold your staff accountable? Accountability means asking: have they been trained properly? Have they followed through on how to best use a tool, not just for their personal output, not just to redraft an email, but to truly automate something that has measurable value to your business and moves it closer to scale? And the last topic is: what is your AI roadmap? So let’s talk a little bit about where we start as an agency. Just to let you know, we have started to do AI audits in which we help clients baseline their technology, baseline the data they have today, get an understanding of where their staff is at, and then come together around their vision, values and overall big challenges and goals, not really related to AI, just where they see themselves going as a business. From there, we create an assessment with recommendations on what tools and technologies will help get you to your vision, along with your own custom AI roadmap with action steps and trainings that help you manage the transformation and change, so that you can differentiate, serve your people well, give them the growth and development paths they want, and serve your clients and customers in a new and exciting way. And I hope that, from hearing a little bit more about our journey, you’ll be inspired to start that journey yourself.

Tessa Burg: So at Mod Op, we are just blessed to have a lot of experts under one roof with very deep experience in compliance and governance. Jonathan Murray, our chief strategy officer and one of the co-authors of the book “Getting Digital Done,” is the former CTO of the New York Times and Warner Music Group, and he was also VP of the Public Sector and Global Accounts at Microsoft. He brings a wealth of knowledge in what it means to run responsible use, governance and compliance programs at the largest enterprise businesses. We also have a former VP of Engineering and Product Management in Derick Schaefer; he’s another member of our Mod Op Strategic Consulting unit. Monica Richter is the former head of S&P Global Ratings Data Group and Data Operations, and she also worked at Dun & Bradstreet, so over 21 years specifically in data security, compliance and governance. I myself, as well as Derick, have worked in PCI- and HIPAA-compliant environments where, again, data safety is core. And then there’s Charles McGibney, our internal IT director, whom a lot of our clients don’t know. He helps us execute the management and controls we need to monitor the use of AI within the agency while not stifling the ability to test, learn and take risk, because that’s the superpower agencies have: we can set up environments that allow us to continuously test across many different clients, but we have to make sure it’s done with our own data and with respect to the client’s wishes, visibility and data. Charlie has been just amazing in helping us set up guardrails without stifling productivity and innovation. This journey to stand up our responsible use policy, and an environment in which we can build our own innovation and AI solutions for our clients, was not linear.

Tessa Burg: In fact, it sort of ping-ponged back and forth. Two years ago, after ChatGPT launched, a responsible use policy was step number one. We stood up an AI council dedicated to identifying, executing and educating the staff on best practices around the use of AI, everything from ChatGPT to agents we’ve built ourselves for client work. We also recognized that a lot of people were using AI apps as a part of their personal lives and likely bringing them to work; we couldn’t stick our heads in the sand. So we did an entire app inventory: what are you using today? How can we help you get more value and use out of it? What can we learn from it? And we brought diverse groups of people together to assess and evaluate the value of those apps. We also looked at the technology already in our core stack. I’ve heard it said that the terms “AI company” and “AI startup” will go away, because what’s really happening is that AI is being built into everything you use. A good example is Adobe. They’re doing an excellent job. They’ve had AI in their product, and they’re elevating it up a step in a responsible way to give designers and creators those small efficiencies that mean so much, whether that’s an edit on the spot, help with versioning, or help finding assets faster. That’s AI, and it’s been going on, especially in marketing and advertising where AI is just a part of the tool, for at least a decade.

Tessa Burg: So if you have not, you need to check the terms, compliance and governance on your media platforms and your social listening tools, because that’s where AI has been most prevalent, especially anything you’re doing in search, paid search and programmatic. That was a big part of our early steps: let’s get our arms around what people are using and what’s in the governance policies and terms and conditions of our existing tech stack, and let’s start to formalize how we train and upskill our staff. Because the value of what we do, the strategic thinking, the problem solving, is not gonna change, but how we deliver that value most definitely is.

Tessa Burg: So after we got our responsible use policy in place, covering where we were using AI and where we wanted to use it, we started having client conversations. That’s something we’ll be scaling in 2025, along with our staff upskilling and development and our client transparency and communication initiatives. We also stood up the AI playground, which is simply a faster way to get access to AI apps and test them against other apps we may already be using, to get additional learning, inspire new ideas, and help ensure we’re always staying ahead on behalf of our clients in this rapidly changing landscape.

Tessa Burg: So there’s been a lot of effort these last two years, and a lot of people involved. We’ve had, oh my gosh, I think over 147 different folks be a part of the AI council, and we’re a company of 421, so that number is growing every month as people find opportunities to serve their clients in a different way. What we’ve built now is a place for people to easily pick up the framework, understand it, and use apps in an intentional manner that keeps our data and our clients’ data safe. The results have been quite amazing, and I will say they came much faster after we hired Patty to help us with adoption, because your use of AI is really only as good as your data, how well people understand its value, and how far you’re able to scale it. If you’re not scaling your use of your own innovation or apps, the value you can extract remains quite limited. Our speed of adoption went up four times; we got four times the results in a tenth of the time compared to our early days when we organized the council. The impact on speed got much bigger when we broadened beyond the council, from the original 13 members out to 147: more diverse perspectives, more eyes from different strategic business units looking at how a tool is used in creative versus media versus PR, and 8.5 use cases per tool.

Tessa Burg: So, maximizing: these tools are not free. If you’re gonna pay for a tool, and pay to store your data and for the processing that goes along with it, you wanna maximize the use of those investments and also increase your speed to innovation. We have yet to do an evaluation of a tool and have the end result be someone saying, “I think this is going to replace me.” The outcome is usually excitement: “I see more possibilities.” And even though the process starts with “I’ve always wanted to” and “I wish,” which is why we found the app in the first place, the wishlist gets bigger; it’s not about undoing the things we’re doing today. It’s still delivering client value, yes, but delivering it in a way that’s more efficient and more productive, and it has really fueled ideas from the staff: ideas for different ways to create, different ways to engage and measure. So in the last, I think it was, yeah, just three months, we’ve launched 12 prototypes of our own AI-enabled solutions that we’ll be using to get our work done better, and then also extending into client environments. So that is the background. Now that we’ve spent about 20 minutes together, I hope you understand the value of a responsible use policy and that you’re gonna take some steps right now to make sure you have your arms around all the apps your team is using, whether they come from an AI startup or not.

Tessa Burg: Look at your existing tech stack and get your arms around the terms and conditions: how are they using your data? What is their privacy policy? You might be motivated, too, to say, “Hey, you know what, adoption and scale is where we’re getting stuck. We might need to consider having an AI adoption strategist or an AI adoption lead within our marketing department.” Bring people together, get their feedback on what’s working, what’s not and why, and use that feedback to start creating your own updated processes and trainings against those processes. And last, I hope you’re starting to look forward and reimagine: hey, if we are doing our work differently, what does that mean for our end customers? What visibility are we giving them into our policies? And what do those controls need to be? That means you’re gonna have to reach out to IT. Even if they have their own thing going on, their own transformation, get a seat at that table, because no one is closer to the customers, no one is closer to the clients who care about their own data protection, than the marketing department. So update your responsible use policy to reflect the vision of your company and where you’re going, and make sure your clients and customers know that responsible use, protection of their data, protection of them personally and, if you’re B2B, protection of their brand is of the utmost importance to your company as well.

Tessa Burg: So best of luck, get going, and if you want to hear more “Leader Generation” episodes, where we talk about AI and marketing leadership, visit us at modop.com. You can find them under Resources and Leader Generation Podcast, and we’ll talk to you next time.

Tessa Burg & Patty Parobek


Tessa is the Chief Technology Officer at Mod Op and Host of the Leader Generation podcast. She has led both technology and marketing teams for 15+ years. Tessa initiated and now leads Mod Op’s AI/ML Pilot Team, AI Council and Innovation Pipeline. She started her career in IT and development before following her love for data and strategy into digital marketing. Tessa has held roles on both the consulting and client sides of the business for domestic and international brands, including American Greetings, Amazon, Nestlé, Anlene, Moen and many more. Tessa can be reached on LinkedIn or at [email protected].

As Vice President of AI Transformation, Patty leads Mod Op’s AI practice group, spearheading initiatives to maximize the value and scalability of AI-enabled solutions. Patty collaborates with the executive team to revolutionize creative, advertising and marketing projects for clients, while ensuring responsible AI practices. She also oversees AI training programs, identifies high-value AI use cases and measures implementation impact, providing essential feedback to Mod Op’s AI Council for continuous improvement. Patty can be reached on LinkedIn or at [email protected].
