Recently, I had the opportunity to help test a new AI platform at Mod Op called Brand Agent. Brand Agent is built on a large language model (LLM) that draws extensively on client data to act as an information retrieval portal when asked pointed questions.
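Mod Op hasn’t published how Brand Agent works under the hood, but the behavior described here – answering pointed questions against a specific client’s data – matches the common retrieval-augmented generation (RAG) pattern: documents are indexed, the passages most relevant to a question are retrieved, and the LLM answers grounded in that context. The sketch below is a minimal illustration of that pattern, not Brand Agent’s actual code; the embed helper is a toy stand-in for a real embedding model.

```python
# Illustrative sketch of a retrieval-augmented Q&A flow (hypothetical,
# not Brand Agent's implementation). Documents are scored against the
# question and the best matches are packed into the prompt.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy term-frequency "embedding"; a real system would use a
    # learned embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    q = embed(question)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return f"Answer using only this client context:\n{context}\n\nQ: {question}"

client_docs = [
    "Acme brand guidelines: primary color is teal; voice is friendly.",
    "Acme Q3 press coverage focused on the product relaunch.",
    "Acme audience skews 25-40, urban, design-conscious.",
]
print(build_prompt("What is Acme's brand voice?", client_docs))
# The assembled prompt would then be sent to the LLM to generate a
# grounded answer.
```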
It was an interesting experience – not just because I got an inside look at how the platform functions, but because I played a role in coordinating the testing process, gathering insights, and helping shape how we might use Brand Agent internally in the future.
Getting the Right People Involved
When we kicked off testing, one of the first steps was ensuring we had a well-rounded group of testers. Since Brand Agent is designed to support various aspects of our work, we needed input from people across different parts of the agency – PR, strategy, creative, and more. Each of these teams interacts with client data in unique ways, so it was important to have a diverse set of perspectives to get the full picture of what Brand Agent could do.
Once we identified our key testers, we made sure they understood what we were looking for in their feedback. We didn’t want the testing to feel like just another task to check off; we wanted it to be a meaningful experience that would actually help improve the platform. That meant setting clear expectations and ensuring everyone logged their findings consistently.
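For illustration, consistent logging can be as simple as every tester filling in the same fields for every finding. The schema below is a hypothetical example, not the actual template we used:

```python
# Hypothetical example of a consistent feedback entry; the fields are
# invented for illustration, not the template we actually used.
from dataclasses import dataclass, asdict
import json

@dataclass
class TestFinding:
    tester: str    # who logged it
    team: str      # PR, strategy, creative, ...
    client: str    # which client's data was loaded
    prompt: str    # the question asked of Brand Agent
    expected: str  # what the tester hoped to get back
    actual: str    # what Brand Agent returned, summarized
    severity: str  # "works", "minor", or "blocker"

finding = TestFinding(
    tester="J. Doe", team="strategy", client="Acme",
    prompt="Summarize Acme's target audience.",
    expected="Demographics plus behavioral insights",
    actual="Demographics only; no behavioral detail",
    severity="minor",
)
print(json.dumps(asdict(finding), indent=2))
```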
Watching the System in Action
One of the most fascinating parts of the process was seeing how Brand Agent handles data. Once testers selected which client they wanted to work with, we loaded the platform with relevant publicly accessible data. Seeing this in action made it clear just how much potential AI has in our industry, especially when it comes to making sense of large amounts of data quickly.
But, as with any new tool, Brand Agent needed refining. Some testers found certain functions intuitive, while others ran into roadblocks that highlighted areas for improvement. The feedback we gathered helped pinpoint what was working well and what needed to be tweaked to make Brand Agent more effective across different teams.
Turning Feedback into Action
At the end of the testing phase, I compiled all the insights into a wrap-up report. This wasn’t just about listing what people liked or didn’t like; it was about identifying patterns, surfacing key takeaways, and outlining next steps for making Brand Agent a more valuable internal tool.
One of the most valuable things we learned was how different teams approached the platform in their own ways. Creative teams, for example, were focused on brand guidelines and messaging insights, while strategists wanted a deeper dive into audience behavior. Understanding these nuances will be key in shaping how we refine Brand Agent moving forward.
Looking Ahead
Overall, this was a great experience – not just because I got to see an AI platform evolve in real time, but because it gave me a new perspective on how different teams within the agency interact with technology. The testing phase was just the beginning, and I’m excited to see how Brand Agent develops from here. There’s a lot of potential for AI to enhance the way we work, and I’m looking forward to seeing where we take Brand Agent next.