AI Governance: Using AI Responsibly In Marketing

Alon Yamin
CEO & Co-Founder of Copyleaks


In our latest podcast episode, we celebrated a milestone: the first anniversary of ChatGPT’s launch. Alon Yamin from Copyleaks joined Tessa Burg to discuss the impact of generative AI on marketing and technology.

We tackled some big issues: the excitement and skepticism around AI, the need for governance, plus the challenges of data security and privacy.


“AI is a game-changing technology, but it’s important to use it in a safe way, a responsible way, and mitigate the risks.”


We also got into AI accuracy and the risks of “AI hallucination,” where AI-generated information may not always be correct. And we chatted about the Biden administration’s recent executive order on AI, which aims for a safer future.

AI isn’t a fleeting trend; it’s here to stay. That’s why we must stay ahead of the game to manage its risks and potential. Tune in to join the conversation.

Highlights From This Episode:

  • Impact of AI on marketing and technology
  • Challenges and opportunities of Generative AI
  • AI governance and security
  • Privacy and data protection
  • Accuracy and reliability of AI tools
  • AI policy and regulation
  • Future trends in AI technology and compliance

Watch the Live Recording

Tessa Burg: Hello, and welcome to another episode of “Leader Generation,” brought to you by Mod Op. I’m your host, Tessa Burg, CTO here at Mod Op. Today is kind of a special day. It is November 30th, the one-year anniversary of ChatGPT’s release. And it has been a crazy year, especially for people in marketing and technology. We’ve gone through a lot of phases. Some of us were super excited, some of us were skeptical and uncertain. We got a lot of feedback that many people at the beginning were trying to avoid it and really wanted to make sure that gen AI, and AI in general, was not being used in their content or in their marketing. But one year later, I think we’re all at the stage of acceptance. Whether AI and machine learning excites you or concerns you, it’s here to stay. And we are no longer able to sit in a wait-and-see place. We’re in a place where we need to figure out how to manage it. So that is why we are so excited today to have a conversation about governance. I’m joined by Alon Yamin, CEO and co-founder of Copyleaks. Alon, thank you so much for joining us today on November 30th, the one-year anniversary of ChatGPT.

Alon Yamin: Yeah, thanks for having me, Tessa. A lot has changed in the last year, no?

Tessa Burg: Yes. I can’t believe it’s only been a year. I feel like with all of, even the process changes and conversations we’ve had to have, it’s felt much longer.

Alon Yamin: Totally. It feels like it’s been here forever, and it’s changing on a daily basis. So many new things are happening in the AI world, really every day. So I think it’s very exciting, and, yeah, I’m happy we get to speak on like this special day.

Tessa Burg: Yeah. So tell us a little bit about your background and Copyleaks. ’Cause if any company’s had a crazy year, it’s Copyleaks.

Alon Yamin: Yeah, definitely. So my background is in technology. I used to be a developer before we started working on Copyleaks, and even after we started; in the last couple of years I’ve been more on the business side of things. As a developer I was mostly dealing with text analysis, and that’s kind of what brought me to this role. Together with my co-founder Yehonatan, we created an AI text analysis platform with the idea of really being able to understand the meaning of textual content, the source of text, the distribution of text. And you can imagine, with ChatGPT and generative AI, how these kinds of new challenges came to our world. And now there are just a lot of things around governance and compliance of generative AI that we’re dealing with. So exciting times in the AI world for sure.

Tessa Burg: Yeah, I know that when ChatGPT was first released, I felt like I just jumped in and started trying it. And I think a lot of people maybe assumed that everything was secure, you know? And it wasn’t until the Samsung story broke that you felt like, “Oh my gosh, something got exposed. We gotta start paying attention.” But there was first this, “Holy crap, I can’t believe it can do this,” and then the, “Oh my gosh, my job’s gonna be replaced, and what are writers gonna do?” And some things have been normalized, but it has brought new threats. So what are some of those new threats that gen AI has introduced to our world, or kind of accelerated among ones we’re used to already?

Alon Yamin: Yeah, so I can say that, you know, now we’re one year in, and still, when we’re talking with some of our customers, companies we’re working with, some of the world’s biggest corporations, they are still struggling with generative AI adoption and how to do it in a responsible, safe way. I think the main things we’re hearing about are concerns around security and around privacy of data. So you know, you’re using ChatGPT, and ChatGPT is only as good as the input you’re giving it. So if, for example, I want ChatGPT to create a deck for me, I need to provide ChatGPT with information to create this deck with. And the problem is that a lot of people don’t realize that the information they’re sharing with the platform is not going to stay safe and secure. These platforms are training new models on that data, and the data could be exposed to other users of these platforms. So there are major, major security and privacy concerns with using these platforms. You can imagine that corporations have proprietary information that they would not want to share with third parties, and a lot of their employees are using these platforms, and they’re having a really hard time monitoring and understanding the level of usage, what’s being shared, how it’s being shared, and what the implications are for the company. So I think this is one thing that we’re seeing as a major concern. Another thing, which everyone is talking about, is the accuracy of these tools. You know, you’re using it, you’re getting responses. The responses seem reasonable, but no one really knows whether or not they’re fact-checked, whether they’re accurate or not. There is a lot of what we call AI hallucination, so inaccurate facts and data being provided by these platforms and then shared. Even on the marketing side, you want to create some marketing materials and use ChatGPT to do that, and a lot of the material that gets created is not always the most accurate. So I think this is definitely another major challenge for organizations and generally people using these types of platforms. And I think the other thing is really, how do you even safeguard your own data? So data is being used to train these models, and all these large language models and AI platforms are using data that is publicly available. A lot of the companies we’re working with have their data publicly available, and they’re not sure what is being done with it. So this is also another new challenge. And there are also major challenges around, you know, copyright and IP: being able to use these platforms, but in a safe way, so being able to integrate text or code, or whatever it is, from these platforms, and making sure that whatever you’re working with is actually not copyrighted or licensed, or that you’re using it in a safe way. So a lot of different challenges, a lot of it around accuracy, privacy, security, and IP. Those are the main areas we’re dealing with.

Tessa Burg: Yeah, and with any technology advancement, I feel like the bad actors are the fastest and most efficient at taking advantage of what people don’t know. So a lot of these mistakes happen because people don’t know what they don’t know. The one area that I think people have to be aware of: there’s a lot of hype around, oh, actors will start using agents that could have their same voice and their same tone, and, you know, that sort of productive use could benefit maybe a voice actor that could do more things. But the alternative side, just from a plain, how-does-it-impact-us-every-day-in-marketing view, is phishing attacks, people impersonating you or your company. So if you are not careful, or if your staff doesn’t understand what is IP, what is the data we want to keep private, then it does give fuel to someone else who wants to do harm, and it makes it even more challenging to protect your brand when you’re not monitoring what your employees are doing. Because it’s impossible, in less than a year, for everybody to have that level of training, that, “Oh, I fully understand what is the data and the IP and what I want to protect.” So I think, one, awareness is so important, and for marketers and any user of the technology to understand that it’s not just the cool stuff that could be…

Alon Yamin: Exactly.

Tessa Burg: It really is. Like, the tech space stuff can also be equally revealing and can cause as much damage and harm to your business as some of the things that we see in the news.

Alon Yamin: Yeah, 100%. I think, you know, this is obviously a very strong, powerful technology. You can use it for amazing things: automation, research, education, so many amazing things that you can do with this technology. But there are definitely also dark sides to it, exactly. Like, it can automate good things, and it can also automate bad things, and we always need to be aware of that. And I think this is why it’s so important to be able to identify and distinguish AI from normal human content. Moving forward, I think it’s just going to be more and more crucial to do so, because AI is getting smarter and smarter and better and better, and it’s going to be even harder to distinguish it from, you know, human content. So all these kind of bad apples will have an even more powerful tool to use. So I think it’s really up to us at this point. First of all, make sure that people are aware of all these threats, and are kind of looking for them, looking for issues, looking for AI, understanding that this is something that they have to be aware of; but also making sure that we’re creating, you know, a framework where we are able to govern these powerful technologies. There’s gotta be some sort of boundaries and limitations to this technology, otherwise it’s going to be very hard to really control it.

Tessa Burg: Yes, I agree. So in that vein of protections and safety, a few weeks ago, the Biden administration signed its first executive order on AI. And I know a lot of people may not have read it, but to summarize its spirit and key points: the executive order calls for AI to be safe and secure, and for responsible use and responsible innovation. It cited policies that would be consistent with the advancement of equity and civil rights. And its overall purpose is to protect people and to emphasize that we have to move forward and manage the risk while still leading the way in our technology development. From your perspective, why do you think this is a good first step?

Alon Yamin: I think it’s crucial to have regulation around generative AI. The technology is already out there, and it can, like you said, be used in different forms and ways, and I think it’s very important that the government took this step, even though it’s still a pretty initial one, kind of defining the issue and the steps that we need to take in order to tackle it. I think there was a lot of focus in the executive order on ways of identifying AI-generated content, and I think that’s very important. I know they talked a lot about watermarking, so basically being able to mark a document or a file, or any piece of content, in a way where it’ll be clear that it was created by AI: ChatGPT, Bard, or any other type of LLM, large language model. But I think it will be very important to expand from there and understand that different content types, so, for example, videos versus images versus text versus music, will need different types of solutions in order to make sure that we understand and identify whether they were created by AI or created by humans. I got very excited seeing this executive order. I really think that this is just the beginning. I hope that it will be followed with actual procedures and kind of like ground rules for using generative AI later on. So I definitely think it’s a step in the right direction, but we’re still far from figuring out how we’re tackling it as a whole.

Tessa Burg: Yeah, and I agree. I think there’s a need for the government to take an early position, especially within the first year, that they wanna keep people safe, ’cause it’s a real concern, and being able to call it out provides reassurance that the government also cares about what is real and what feels not real. And it’s not even that things coming out of generative AI aren’t real, but they’re recognizing that there is a risk, and so there is value in calling out and citing that risk. There were some specifics in there about how things will be audited and monitored that some people don’t feel are actually possible, and that they feel contradict the mission of keeping the US a leader in this space. What are some of your thoughts on that opinion: “Hey, we’re not sure that this order is going to be as effective as people think, or provide the security and reassurances that people are looking for, without being detrimental to the technology field itself”?

Alon Yamin: Yeah, I can definitely understand both sides. I really think that, you know, you gotta have a responsible adoption here. Of course one thing is going to take priority over another, so obviously we wanna make sure that we’re still using AI for innovation, that we can move fast, and that startups have access to these technologies. We don’t want to create barriers that allow only really big corporations with resources to have access to these types of technologies, and we don’t want to create tons of bureaucracy around it. So all of this makes 100% sense. I think there’s gotta be a balance between that and securing ourselves as well. This is, again, a very powerful technology that, in the hands of the wrong people, could present threats for humans everywhere. So I think it’s very important to have this balance. Of course innovation is important, and I believe that having this type of regulation or compliance around generative AI could actually help innovation, if it’s done in a way that is not restrictive. So I believe it’s all about balance here: making sure, on one hand, that we are responsibly using generative AI, understanding the risks, and able to identify them to a certain extent. I agree, by the way, that nothing is 100% here. I always look at it like antivirus, right? You have an antivirus on your laptop. You think you’re protected, but it’s never 100%. You’re always at 99 point something, and that’s why you have new versions of the antivirus protecting against new types of viruses. So it’s going to be, I think, very similar on the generative AI compliance and governance side of things: aspiring to get to 100%, but, you know, never actually hitting it. Nonetheless, you have to have some measures in place to get as close as you can to that 100%, while keeping the balance of pushing innovation, making sure there’s not tons of bureaucracy, and most importantly, I think, allowing startups access to this technology. You don’t want to create a situation where only big corporations and governments have access to this technology.

Tessa Burg: Yeah, I agree. And I like that idea of aspiring to 100% and, you know, starting that journey now. Like, we know that more detail will be added, that this executive order was directional, but still important. The National Institute of Standards and Technology will soon be defining what those best practices and guidelines are. But what should companies be doing now, regardless of their opinion of that executive order, to start the important work of a governance program?

Alon Yamin: Yeah, I think, you know, the first step is really understanding, as a company, what’s your policy around generative AI? What is it that you’re trying to achieve? In what areas are you interested in using generative AI, and in what areas would you like to restrict it, maybe because you have security or privacy concerns? By the way, we’re working with different companies in different markets in different areas, and the answer to that is different from one company to another. And you would be surprised that some companies don’t even have a strategy yet around generative AI, even one year after. So I think that’s really the first step: understanding what your strategy is, how you’re trying to work with these solutions, or maybe not work with them at all. We’re seeing some banks and financial institutions that just decided to completely restrict generative AI from their organization. Here I’d ask the question of how they’re actually enforcing it, which is a whole other set of challenges, but that’s also a potential approach. So I think the first step is really understanding what your policies around generative AI are. And the second step, and, again, you might be surprised, but not a lot of companies are aware of this, is how generative AI is currently being used within the organization. Organizations don’t know simply because, you know, even if they restrict ChatGPT in the browsers on company laptops, they don’t know if their employees are going on their mobile phones and getting to ChatGPT through that. So it’s very hard, in a way, to understand the usage of generative AI within the organization. So today, you know, we’re providing AI detection solutions that are able to identify exactly that, give you visibility into generative AI usage within your organization, and help you understand what your exposure level even is: where it’s being used, how it’s being used, with what type of data, what’s being uploaded. Really understanding the current situation before you’re even building a strategy moving forward. So I think these are the first two steps every organization needs to take. It doesn’t matter if it’s a small organization or a very large corporation; it’s very important to take these steps. I think the next step would be figuring out how you’re safeguarding your own content and your own data, and making sure that you don’t have any exposures on your side. There is a lot you can do there, mainly around, again, being able to identify the distribution of your content, where it’s being used, where it’s being shown, and checking for copyright infringement and plagiarism of your content. So I think this is a very important part: making sure that you’re safeguarding yourselves from the exposures that come with using these platforms. What a lot of people don’t realize is that all these platforms are working with existing data. They’re not just creating content out of thin air. So the problem is that a lot of the content that gets created with ChatGPT, with Bard, with all these platforms, is actually plagiarized content. We did internal research on a lot of content that came from ChatGPT, and we found that around 60% of the outputs of ChatGPT contained plagiarized content. And this is very problematic for anyone, but especially companies or, you know, marketers that are using this content later on. So there is a lot of exposure there.
So I think companies have to understand how generative AI is being used within their organization now, what the implications are, and whether or not they already have copyrighted or licensed material that they’re not aware of. By the way, this is also relevant for source code; that’s relevant to the Samsung story. So I think those are the main first steps for organizations: realize what’s actually happening, and approach it with a strategy.
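To make that second step concrete, here is a minimal sketch of the kind of automated check a team might run on drafts before publishing: send the text to an AI-content detection service and flag anything that scores above a threshold for human review. The endpoint, credential, response field, and threshold below are hypothetical placeholders, not the actual Copyleaks API.

```python
# Minimal sketch of a pre-publish AI-content check.
# NOTE: the endpoint, API key, and "ai_probability" response field are
# hypothetical placeholders, not the actual Copyleaks API.
import requests

DETECTION_URL = "https://api.example-detector.com/v1/detect"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # hypothetical credential

def flag_for_review(text: str, threshold: float = 0.8) -> bool:
    """Return True if the detector scores `text` at or above `threshold`
    for likely AI-generated content."""
    response = requests.post(
        DETECTION_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=30,
    )
    response.raise_for_status()
    score = response.json()["ai_probability"]  # hypothetical response field
    return score >= threshold

draft = "Marketing copy drafted with help from a generative AI tool..."
if flag_for_review(draft):
    print("Route this draft to human review before publishing.")
```

The same gate could also call a plagiarism-check service, so one workflow covers both the accuracy and the copyright exposures discussed above.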

Tessa Burg: Yeah, and I think, though governance may not be front and center for a lot of brand marketers, it is all about protecting your brand. Like, the great things you put out there need to be protected, but also, you don’t wanna unintentionally take someone else’s work, and then, again, damage your brand or, God forbid, become the next Samsung story. That would be terrible. The other piece, specifically when we talk about balance, like you made that point earlier, it is so important for marketers to realize it’s on two different fronts. It’s probably on a lot of fronts, but two are critical for marketing. One, you don’t need to use generative AI in everything to get the benefits. I understand why banks might say no, not on anything. I don’t agree with anything that’s a 100% no, but I understand it more there and in medical. You should be able to get the benefits without saying gen AI is used across everything. It’s so important when you’re coming up with those policies that you’re aligning back to benefit, return, and things that marketers do normally: How is this providing value to the end customer, how is it providing return to the company, and what does that mutual value exchange look like? The other piece, and at Mod Op we’re very much leaning into the use of not just gen AI, but AI and ML across creative, delivery, and analytics, is not relying solely on any one platform. Like, what I love about Copyleaks is, in a way, it also helps hold ChatGPT accountable. And I’m very sure that a lot of the core platforms and large businesses are going to be introducing tools and technology where it’s like, “Oh, yeah, we’ll let you know if that’s a hallucination, we’re fixing that,” or “We’ll let you know if that’s plagiarized,” and “We’ll do things that are just on your own data.” But you need something like a Copyleaks, ’cause even if you have an LLM that’s operating off your own data, and you made this point earlier, the quality matters. Are you inputting data that came out of a tool and was plagiarized? So I think it’s really important for marketers, especially brand marketers and people in communications, to remember this is about protecting the brand, about protecting your company. That doesn’t mean don’t use it, but find that balance so that you’re using it responsibly and getting the benefits.

Alon Yamin: Yeah, I think that’s exactly right. I think from a brand perspective, we’re seeing it across markets. I’ll give you some examples, even from, you know, the media world. We’re working with a lot of media publications, and everyone is using these types of technologies today. A year ago, if you were doing research on any specific subject, you went to Google. Now you go to ChatGPT, or Bard, or any of these platforms. And I think what’s important is also to authenticate yourself in the process. So we’re seeing a lot of media companies, even before they publish something, using AI detection solutions, as well as copyright infringement solutions like ours, just to make sure that they didn’t, even by mistake, forget to quote something or change something enough to make it their own. They’re authenticating themselves and protecting the brand. Again, if you now publish something that includes plagiarized content or inaccurate facts, this is going to seriously hurt your brand. And we’re seeing it in other markets too. We’re also working a lot in the education market, in the research area. And in research, obviously you want to make sure that you’re not publishing research in an academic journal or publication that includes, you know, wrong statements or plagiarized content. Same things there. Academic institutions care a lot about their reputation, and this can obviously really hurt their reputation. So we’re seeing this as a very important point really across markets. For marketers, media companies, education, it’s a very important point. I think what we’re trying to address is this: generative AI is kind of a black box where you give input, you get an output, and no one knows what’s happening in between. So what we’re really trying to do is give people some visibility, an understanding of what’s created by AI, where it was taken from, and what the implications are. That’s really our goal. And to your point that nothing needs to be 100% no or 100% yes, I totally agree. I think our goal is really that people will leverage this amazing technology. I mean, generative AI is obviously a game-changing technology, and we’re seeing it live across different markets every day, but it’s very important to also use it in a safe way, a responsible way, and mitigate the risks that come with it. So that’s kind of the angle that we’re trying to take: really pushing people to use these technologies and not to be afraid of them. Leverage them, but do it in a safe way.

Tessa Burg: Yes, I agree. So not only is today the anniversary of ChatGPT, but it’s also nearly the end of the year. December starts tomorrow, which is crazy, and 2024 is right around the corner. What are you most looking forward to? I mean, it’s going to be just as crazy as this year, so I’m interested to hear your thoughts on 2024.

Alon Yamin: Yeah, I think, you know, the fun is just starting in the AI world, really, from my perspective. This was the first year; like you said, it’s been a year and a lot has changed. People, and companies by the way, were just trying to figure out how to even use this technology, how it works, where it’s best, et cetera. I think next year will really be the year of not only realizing the potential of this technology, but also figuring out more of the compliance and regulation side of things. I think that’s definitely gonna be our focus. I believe that if we want to use this technology at high scale moving forward, and in a safe way, this is one point we have to figure out. So I think that’s gonna be a major focus for next year. There are gonna be, I’m sure, a lot of companies popping up with different solutions around compliance and governance, and there are so many different aspects of that with generative AI. So there is so much room for innovation there and new technologies, and that makes me really excited. Personally, I’m just waiting for the holidays, for the end of the year, just having these few days of rest. We’ve had a very, very busy year, so it’s always nice to have a couple days of relaxation and just get ready for next year, which I’m sure will be a busy and interesting one.

Tessa Burg: Well, thank you so much for being a guest today. This was an awesome conversation. If people have questions after hearing this episode and wanna reach you, where can they find you?

Alon Yamin: Yeah, thanks for having me, Tessa. This was really great. And yeah, please, you can find me on my LinkedIn. This is probably the best place to approach me, Alon Yamin on LinkedIn. I’d be happy to hear from everyone who’s listening.

Tessa Burg: Awesome. Thanks again, Alon. And the name of the business is Copyleaks. Put it on your roadmap for next year, and we will talk to you later. If you want to hear from more thought leaders in tech, marketing, and communications, including the impact of AI and ML on our industry, visit modop.com and click Podcast. That’s M-O-D-O-P.com. We interview thought leaders from inside and outside of Mod Op on topics in digital transformation, digital marketing and PR, and a lot of our clients are thought leaders. We are so lucky to learn and collaborate with them, like Alon and Copyleaks. So check it out and subscribe to “Leader Generation” wherever you listen to podcasts.

Alon Yamin

CEO & Co-Founder of Copyleaks

Alon is the CEO and Co-Founder of Copyleaks, a cloud platform focused on AI-powered text analysis to identify potential plagiarism, uncover AI-generated content, ensure responsible generative AI adoption, verify authenticity and ownership, and empower error-free writing. Prior to Copyleaks, Alon worked as a software developer for Unit 8200, the Israeli Intelligence Corps, where he worked on different intelligence projects, mainly for the Air Force.

“We’re providing AI detection solutions that give you visibility into generative AI usage within your organization.”

 
