Episode 101

Guardrails For Growth: Responsible AI In Business & Marketing

Monica Richter
Executive in Residence at Mod Op Strategic Consulting


Responsible AI use isn’t just an option—it’s a necessity. In this episode of Leader Generation, Monica Richter, Mod Op’s strategic advisor and former Chief Data Officer at Dun & Bradstreet, talks about how a strong AI policy helps protect brands, builds client trust and encourages innovation.


“A responsible AI policy liberates team members to innovate because they have guardrails.”


Monica uses real-world insights to explain why responsible AI matters, how to define ethical standards for your organization and the steps every leader should take to ensure AI is used wisely.

Highlights:

  • Importance of responsible AI use in business
  • Building trust and credibility with clients through responsible data use
  • Defining bias and ethics in AI models
  • Implementing effective governance and compliance practices
  • Role of responsible AI in innovation and empowerment
  • Communicating responsible AI policies within the organization
  • Benefits of a responsible AI policy for business growth and protection
  • Cultural integration of AI governance and best practices
  • Responding to AI regulations and preparing for future AI governance

Watch the Live Recording

Tessa Burg: Hello, and welcome to another episode of “Leader Generation,” brought to you by Mod Op. I’m your host, Tessa Burg, and today I am joined by Monica Richter. She’s currently an executive in residence and strategic advisor here at Mod Op and formerly the Chief Data Officer at Dun & Bradstreet and Senior Vice President of Global Data and Strategy and Operations at S&P Global Ratings. Monica, we’re super excited to have you here today to talk about a very important topic, responsible use. Thank you for joining us.

Monica: It’s great to be here, Tessa. Thank you for having me.

Tessa Burg: So I feel like some people might think governance and compliance and responsible use is not the most exciting topic. But at Mod Op we have been on a journey these last two years to make sure that responsible use is a part of the way that we operate, that we are holding people accountable for understanding the policies and living them in our innovation and in how we access AI. You have been a large part of that here at Mod Op, being our strategic advisor and the person with the deepest experience in responsible use, governance and compliance as it relates to data. So first, before we jump into the questions and see why it’s so important, tell us a little bit more about yourself and your background.

Monica: Sure, thanks, Tessa. So I spent the first part of my career in the capital markets on the analytical side as a mathematics major. I’ve been sort of an analyst, a data geek all my life, but was really intrigued when technological advancements and data advancements started to allow not only the volumes of data to expand but also the variety of data to expand, and companies really started to monetize data assets. And so I switched over to data and analytics as a profession sort of 15, probably 20 years ago, and really started to work with companies then to actually liberate the data, again, with responsible use though. So it’s always been that coupling of getting the value out of these assets but really using them in a way where you’re always making sure that the company is protected and any risks that you know of are being mitigated to the best of your ability.

Tessa Burg: Yeah, and what we have found in our own testing of AI applications is that the quality of the data is extremely important. So I love that phrase, liberating the data. And there are two stats that really jump out at me. One, that 99% of marketers, according to the Marketing AI Institute, are using AI applications, whether in their personal life or in their professional life. But another startling fact is that only 10% of businesses have a responsible use policy in place. And here we are in November, two years after the launch of ChatGPT, and that number feels way too low. Tell us a little bit about: what is a responsible use policy?

Monica: Yeah, it’s really interesting, and I’ll focus on both of the words in that: responsible and use. Because to use it responsibly, it starts to feel heavy. It is about privacy, it is about security, it is about risk mitigation across bias, hallucinations, hardware and software failures. It’s about this multitude of different risks with a very powerful technology that, to your statistic, is ubiquitous already. And how are companies managing that? So the responsible part is really important: get that governance in place, and it is both the data governance and the technical governance, so that you can lead into your model governance. And it actually doesn’t matter whether it’s basic machine learning or it’s now much more generative AI, cognitive AI and whatever will come next, which is all, for me, very exciting. Those models need to have those foundational governance practices in place so you get responsible use of AI. The responsible part I’ve discussed, but if I can move to the use part: for the companies that have responsible use policies, what they have done is allow their stakeholders, their customers as well as their internal team members, to feel liberated to innovate because they have guardrails. They’re working in an environment of empowerment because they’ve potentially already been educated in an AI literacy program like we have at Mod Op. They have a feeling of, “I know where to go with questions, I know what I can and can’t do.” And it becomes a really important part of responsible use to focus on the utility of the policy, because what you’re trying to do is get your team members really excited about this new technology and utilizing it. Their creative minds are at their best when they’re being augmented by other ideation, and that is where AI can really be helpful.

Tessa Burg: I agree. I think it is freeing and empowering when you have policies in place that give you the reassurance that you’re not going to expose or endanger your company’s brand or data, or your clients’ or customers’ brands or data. And customers are so aware now, since a lot of them are using AI in their own lives, that it’s a part of how copy, images, assets and communication get generated. But there’s still this tension where people aren’t sure that they need it. Or maybe if I just wanna use an app, I might fall into the trap of, “Well, I’m only testing it in this way on a free trial,” so maybe I don’t have to be as concerned about responsible use. Why, no matter what app you’re using, when and where, is understanding and following a responsible use policy absolutely crucial?

Monica: Well, you’ve mentioned a couple already. The first and foremost one is you do wanna think client first: how are you building trust and credibility with your clients that you are using AI, that you are in on this technological change that is so exciting, but you’re using it in a very protected way, to build that credibility and that trust with the customers? Their data, their own client data, is extremely important to them and is oftentimes already regulated across the world with data privacy regulation. And now what we’re starting to see across the globe is AI regulation in pockets, and it’s coming to the US. The EU already has a very strong responsible AI policy in place. And so as we move not only from data regulation into AI regulation, Mod Op and other companies with a responsible use policy are showcasing externally and internally, as I mentioned already with that trust with the employee base, that they’re monitoring what’s happening within the industry and staying ahead of this regulation, to make sure that they are building that trust and that credibility. You don’t want to get into the position where you are racing behind a client that is very disconcerted because their data has actually leaked, or where you have bias in your models producing business results that no one trusts, or that are actually, you know, dangerous to a business that prides itself on, you know, diversity and openness and all of the ethical areas of business. And then you have a model that’s starting to throw out results that are not aligned to your value system. So it’s hugely important that the responsible AI policy aligns with the company’s philosophies and allows it to stay protected, again, in the regulated environments that we have and with the legal environments that we need to adhere to, also with our contracts.

Tessa Burg: Yeah, you mentioned bias and ethics, and sometimes that’s tricky because our company values are different from other agencies’ values, and we have clients with different values. How do you go about defining bias and defining ethics? And then what are some ways to monitor so that everyone who’s using AI, and the leaders of those companies, understand that the output is in line with the values and the standards that have been set for your specific company?

Monica: Yeah, so there are very standard definitions of bias, with regards to making sure you are not showcasing discrimination in your data or modeling outputs. In terms of ethics, it’s making sure no model has been trained on data, or has expressed a modeling output, that leads into a space where you’re saying, “This is not appropriate for this business.” And I think model drift can happen. So I think having not only responsible AI policies in place, but also monitoring and compliance, so that you’re actually seeing what these models are doing over time, is very important as an aside. But I do think that some of it, to your point, is going to be up to an organization to decide. As an example, we’re working with a client right now, and their pricing models need to take into account certain types of risk. And there are certain categories of people that are riskier than others in terms of behaviors. And that becomes a business question: does your pricing model already have any inherent bias, and when you start to model on top of that pricing structure, are you exacerbating it? Are you getting into a place where that’s not where the business wants to go, and it’s also fundamentally, again, against the values of the company? So I think some companies are gonna have to dig deep into their value statements and into their own business model to understand what these terms mean as they progress into AI and expectations of model outputs. So I think some of it, you know, is the basic definitions of non-biased and ethical behaviors. And then another part is gonna say, well, within our business model, do we have areas that will require real experts to come in and understand what the models are doing? So it’s really interesting. Oftentimes we hear this phrase of, “We need a human in the loop.” And I do believe that, but I don’t want a teenager in the loop. I want a trained specialist in the loop. So for me it’s always important to be speaking about these very, very specific issues for the business, but with very trained specialists in the room that can say, “Okay, that works for Mod Op, that doesn’t work for Mod Op,” and that you’re actually having those robust conversations to build that credibility.
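To make that monitoring-and-compliance point concrete, here is a minimal sketch of the kind of ongoing check a team might run on model outputs: a demographic parity gap across groups (one simple bias metric among many) and a crude drift signal on model scores. The group names, thresholds and numbers are hypothetical; this illustrates the idea, not the specific tooling Monica or Mod Op uses.

```python
# Minimal bias and drift checks on model outputs.
# All group names, thresholds and numbers below are hypothetical.

from statistics import mean


def demographic_parity_gap(outcomes: dict[str, list[int]]) -> float:
    """Largest difference in positive-outcome rate between any two groups.

    `outcomes` maps a group label to a list of 0/1 model decisions.
    """
    rates = [mean(decisions) for decisions in outcomes.values()]
    return max(rates) - min(rates)


def output_drift(baseline: list[float], current: list[float]) -> float:
    """Crude drift signal: how far the mean model score has moved."""
    return abs(mean(current) - mean(baseline))


if __name__ == "__main__":
    # Hypothetical weekly snapshot of a pricing model's yes/no decisions.
    decisions = {
        "group_a": [1, 1, 0, 1, 1, 0, 1],
        "group_b": [0, 1, 0, 0, 1, 0, 0],
    }
    gap = demographic_parity_gap(decisions)
    if gap > 0.2:  # a threshold a trained specialist would set, not a default
        print(f"Bias review needed: parity gap = {gap:.2f}")

    drift = output_drift(baseline=[0.42, 0.45, 0.40], current=[0.61, 0.58, 0.63])
    if drift > 0.1:
        print(f"Drift review needed: mean score moved by {drift:.2f}")
```

In practice, both the metrics and the thresholds are exactly the business-specific choices Monica says a trained specialist, not a default setting, should be making.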

Tessa Burg: Yeah, I agree, and we’re lucky that we have you as a resource here at Mod Op as an expert. We also engaged external legal counsel who’s a specialist in creative marketing and communication environments and in AI and IP. And it’s so important that when we did those initial drafts, we started with our values at the center. Like, what do we want for our business? How do we see using AI in a way that really reflects the value we bring to our clients? One of the things, too, when we were going through that journey: I remembered having similar experiences making sure that the way we use technology was safe and truly generating value and not getting us into trouble. Tell me a little bit more about experiences you’ve had in the past where there’s been this need to sort of stop, reflect and realign, and make sure that you have the right policies and controls in place to use technology effectively and safely.

Monica: Yeah, well, there have been a multitude. I mean, if you go back to even the early days of the internet coming in, you had your naysayers that thought it was gonna be a flash in the pan, and it’s become, you know, one of the most powerful business tools and personal tools we have. I see AI the same way, and social media content creation too, especially within our book of business at Mod Op. But for me, it’s always this balance between, yes, we have to use this responsibly and legally and with complete expectations around privacy, and the excitement about how these technologies are changing our lives and enabling business to flourish. So I try not to use that word stop, Tessa, because it usually is parallel tracking. Similar to how we’ve done it in AI, you have a real strong governance set of individuals, including people externally helping us, like the lawyers you mentioned. You have a governance track, and that was where the responsible AI policy came from. And a lot of that is the warm hug: the employees feel like they’re in an organization that is taking those responsibilities, those ethical concerns, those legal concerns, those regulatory concerns very seriously and building an environment that is trusted and credible, alongside “let’s enable this super fun modeling technology that we haven’t had at our fingertips before to flourish.” And so we’ve had a real test-and-learn environment, again, surrounded by this governance, but we’ve allowed the teams to really come forward, try new apps, try new things with our clients, and really be innovating every day since this became a possibility for us. So again, I’d hesitate to say to any organization stop, but really understand that you must parallel track the good governance with the innovation.

Tessa Burg: Yeah, no, that’s a great point. And I think almost as important as having a responsible use policy is making sure that you communicate it out to your staff, to your clients and your consumers. And we have ours right on our website. I think a lot of companies are starting to publish theirs. What are some other ways, other than just putting it on a website or doing a company presentation saying, “Hey, here it is,” that you can make sure your staff understands and follows it, and validate that it’s actually being used at your company?

Monica: So I think communication is key, and communication can be initial training, ongoing training. It can be bringing in outside experts to talk in a lecture series about the importance of it. Certainly making it easy to find is a basic. But I do believe in also bringing it up conversationally in any topic. When you have exploration going, it’s always important to have almost like a flow chart of how you plan, train, develop and deploy, and in that flow chart of activity to ask: Are we adhering to our responsible AI policy? How are we feeling about governance right now? Are we putting in the right controls? And it just becomes part of that organizational vocabulary. If you have checkpoints in your process to deployment, it becomes part of the culture, to be quite honest. And I think that is both top down, so certainly our CEO is very excited about AI, continues to talk to clients about it, to the market about it, to our entire team about it, and bottom up: it’s our teams every day talking about their comfort level with the projects they’re working on in conjunction with responsible AI. And I just think it is a cultural question too. And that really gets underscored by just great communication.
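As a rough illustration of those checkpoints in the process to deployment, here is a minimal sketch in which each stage of a plan, train, develop, deploy flow carries an explicit responsible-AI sign-off before work moves on. The stage names, questions and project state are hypothetical, loosely paraphrasing the conversation rather than describing Mod Op’s actual process.

```python
# A responsible-AI checkpoint gate for a plan -> train -> develop -> deploy
# flow. Stage names, questions and sign-off state are hypothetical.

STAGES = ["plan", "train", "develop", "deploy"]

CHECKPOINTS = {
    "plan": "Does this use case adhere to our responsible AI policy?",
    "train": "Is the training data licensed, privacy-safe and reviewed for bias?",
    "develop": "Are the right controls (access, logging, human review) in place?",
    "deploy": "Has a trained specialist signed off on the model's outputs?",
}


def gate(stage: str, signed_off: bool) -> None:
    """Raise if this stage's responsible-AI checkpoint is still open."""
    if not signed_off:
        raise RuntimeError(f"Blocked at '{stage}': {CHECKPOINTS[stage]}")


if __name__ == "__main__":
    # Hypothetical project: deployment has not been reviewed yet.
    sign_offs = {"plan": True, "train": True, "develop": True, "deploy": False}
    for stage in STAGES:
        try:
            gate(stage, sign_offs[stage])
            print(f"{stage}: checkpoint cleared")
        except RuntimeError as err:
            print(err)  # the flow stops here until the question is answered
            break
```

Wiring the questions into the flow itself, rather than leaving them in a standalone policy document, is one way the checkpoints become part of the organizational vocabulary Monica describes.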

Tessa Burg: Yeah, I would say what surprises me is there are a lot of companies that are still operating under “we absolutely do not use AI for anything.” Again, with that stat that 99% of marketers are using it-

Monica: Yeah.

Tessa Burg: it feels a little naive to think that if you’re a company saying absolutely don’t use it, you don’t have people who are using it. I think at this point almost everyone is using it. And starting with a responsible use policy could be a great way to make sure that use is aligned with how your company operates and its values, and to ensure that people are aware. I hope next year at this time there is a higher number than 10% of companies with responsible use policies. And I think you’ve given some guidance as to where they can start: one, bias and ethics are not as subjective as some people think, so get expert resources involved, get those things defined and have a conversation with your staff about why that is so important. And the second most important thing is to protect your own brand. And to brands that are saying, “Oh, we’re just not using it for anything yet”: it is like the internet. The internet is there, it’s not going away.

Monica: It’s not going away.

Tessa Burg: Yeah, I was at a job once where I wanted to start doing more digital marketing. And I remember being explicitly told, “We’re not sure how deep we wanna go in that, because we’re not sure what the impact of the internet will be quite yet.” And it was like, “What?” I mean, there are these technologies that become a part of our lives, and we can ignore them, or we can put in what you referred to as the warm hug to make us feel safe and secure as we, you know, charge forward and embrace the change, and adapt and understand that this is just the next wave of the way things get done and the way we communicate and engage with each other.

Monica: Yeah, well, in the spirit of “if you’re not changing, you’re stagnating,” I mean, that’s just classic business. But when you speak about only 10%: at Mod Op Strategic Consulting, the arm that I serve most within the company, we’re seeing a lot of our clients in deep discussions about how to actually get to a published policy. So the 10% might be quite low for published policies, especially in, you know, the marketing space. What we’re seeing across the variety of clients that we service is that they’re quite far along in the conversation on responsible use and on adherence, obviously, to the regulation and any laws in place on privacy. So the conversation is at a much higher level; the volume of conversation on governance is very, very high. And what we’re seeing is that the implementation and execution, getting to, you know, having a policy on a website, is that next step. And I assume that that percentage is going to go up literally every single month over the next year, as you mentioned, because all of that good work is happening right now. And if a company’s not touching it, to be quite honest, they’re being, you know, the classic ostrich putting its head in the sand, ’cause it’s got to be done. It is here and it is now.

Tessa Burg: Yeah, and that is great to hear. And something else that’s been really exciting with our clients specifically is them coming to us and asking questions like, “How are you protecting my data? Where are you using AI?” And we’re like, “Yes, let’s have this conversation.” We’re doing some workshops with clients, and it’s very energizing to be able to show the client how the way you use AI and the innovation you’re building is all about protecting their brand while giving them that value of efficiency and more personalization. But it’s a journey. You know, I think when you hear all the AI hype, you think there’s this switch that gets flipped, but it really requires strategic planning and great guardrails to make sure that what you’re going to execute is in service of the experience for clients and consumers.

Monica: Yeah.

Tessa Burg: Then the ability to measure the effectiveness and optimize from there. And that’s all human thinking leveraging the tool. And we just hear so much about “this app can do this, that app can do that.” But if you want to get to actual scale, it requires this responsible use policy as the foundation for making something that has impact in a positive way that’s reflective of your company.

Monica: Yeah. And it’s not just apps. We see that in some companies, you know, they’re doing their own internal building too as they’re getting more savvy, and their data scientists are getting very excited about, you know, building, whether it’s small language models with smaller pools of data that are high quality and almost well-known within the organization, or using larger language models that are already external. So we’re seeing all of that activity, but what’s really exciting is that the innovation is happening. Responsible AI policies, good governance, they do not stifle that creativity, that innovation, that client-first focus, all of those very, very important aspects of keeping this journey at a speed that I have hardly seen in the past. That is the one thing about AI that, for me, you know, being in business for many decades, is very different: the speed with which we are seeing this change is so exciting. So I would urge any business, come into this governance conversation with excitement, because a responsible AI use policy will allow your team members and your clients to continue to feel that they’re working with a best-in-class company, and it allows that innovation. Clients will see the benefit if they know that they’re working with someone committed to responsible use.

Tessa Burg: I agree. And with that, we are at time for today’s conversation. Thank you, Monica, so much for joining us. And for listeners, we hope that by the end of this conversation you are excited about putting your responsible use policy in place and creating a foundation for AI use and innovation heading into 2025. If you need help with that, or you want to ask Monica more questions, you can find her on LinkedIn. We will also have the show notes on our website at modop.com. You can also learn more about our AI Edge. We have two services that help our clients get started in securing and making the most of their data, whether in AI or in strategic business: that’s our AI Accelerator. And we can also help with audits on how AI will evolve roles and positions as it continues to disrupt not just what’s possible but the way work gets done and how we operate today. That was long, but, Monica, anything to add before we close out?

Monica: No, again, just underscoring the excitement. It really is, it’s time, time to use it responsibly, but also time to let it fly.

Tessa Burg: Yes, I agree. Well-

Monica: Thank you, Tessa.

Tessa Burg: Thank you so much. And until next time, we’ll talk to you again soon.

Monica: Terrific. Thank you.

Monica Richter

Executive in Residence at Mod Op Strategic Consulting

Monica Richter is an Executive in Residence at Mod Op Strategic Consulting, a corporate and nonprofit Board Advisor, Trustee and transformation consultant. With over 25 years of leadership experience, she now specializes in advising on strategy, business pivots, data-driven growth, analytical excellence, and innovation via technology and collaboration. She thrives on change, learning and inclusive leadership.
