It takes less than a minute for Midjourney to generate and spit out an image. But is an image generated with AI as “creative” as one made by a skilled graphic designer? Does it matter? What do apps like Midjourney mean for the future of art and design?
In this AI app-focused blog series, we’re looking at the synergy between AI and human creativity, focusing on how generative AI tools can fuel and enhance creative processes — rather than replace them.
Next up in our AI tools series: Midjourney.
Our reviews are made possible by Mod Op’s AI Playground — an initiative designed to fast-track innovation by allowing team members to experiment with new AI tools quickly and responsibly. When a Mod Op-er finds a promising app, they can submit it. If the application passes our rigorous AI risk and compliance framework, the IT team will assist in setting up a 90-day evaluation period, during which users provide insights on the app’s value. If the application doesn’t pass our risk and compliance framework, we may still choose to move forward with testing, using the vetting framework to create very specific responsible-use guardrails. (Spoiler alert: Midjourney did not pass our team’s risk and compliance framework; more on that in a bit.)
What is Midjourney?
Midjourney is an AI tool that generates images from text descriptions. According to ChatGPT, Midjourney “works similarly to other AI-driven image generation models like DALL·E (by OpenAI) and Stable Diffusion. Users can input descriptive prompts, and Midjourney will interpret those prompts to create highly detailed and often visually stunning images.”
Other AI image generation tools include Craiyon, Adobe Firefly, Leonardo.Ai and Bing Image Creator.
How Creatives Can Use Midjourney: Q&A with Aaron Grando
I connected with Aaron Grando, the Technology Director for our award-winning creative team, to learn more about how the team is using Midjourney and where he sees opportunities for the future.
Would you mind sharing a little about how the team is using Midjourney?
First off, it’s important to note that for many projects and brands, AI-generated images from Midjourney are not appropriate to use, for several reasons.
Most importantly, we prioritize good judgement and respect for a client’s product and audience. Take entertainment brands and properties: for the artists who work on them and the fans who enjoy them, the (conscious or subconscious) value assigned to intellectual property is based on the human point of view, creativity, technique and effort required to produce the final work. So, as marketers, we respect what audiences value about these properties. And it’s clear right now that many audiences are not keen on AI-generated artwork being in the marketing toolkits for these brands.
But we have a lot of clients, and many are interested more in speed to market and the impact of the creative. Some even ask us, “Can’t you just AI that?” in cases where we might traditionally spend hours Photoshopping artwork together. For those clients, we’re happy to say “yes – we can.”
This is where our AI Responsible Use Policy kicks in. It helps us understand the outer edges of the creative sandbox where we play but gives us room to experiment and push boundaries. In practice, this means that for a lot of photo and video production, we’re doing some pre-vis work with Midjourney-generated images. Those images are generally not made public, but they help us put together sets, props, costumes, and more as we’re working on creating content that is ultimately released. We’re also increasingly finding use for Midjourney assets in a similar way to how we’ve used stock photography in the past. Which is to say, we almost never use a generated image wholesale; rather, we combine elements from those images with other visuals – sometimes photographed stock, brand assets, or other AI-generated images – to create final compositions.
Responsible use is so important. Beyond the reasons you’ve already shared, are there other reasons the team tends to reserve Midjourney for pre-production work?
Pre-pro represents the majority of our Midjourney usage at this point, for the reasons mentioned above, but also for simple practical reasons. Midjourney isn’t built into the tools we use day to day; it lives (primarily) in Discord, an app that is still a bit foreign to many creatives.
The images Midjourney generates can be amazing, but they still tend to “feel” AI-generated, which is not an impression we want to give with our work. And, at the moment, Midjourney isn’t great at reproducing images of consumer products in a way that works for marketing, where accurate depictions are important for consumer trust.
What are your top reasons for using Midjourney?
Midjourney gives teams ways to imagine very specific concepts in mostly polished, highly believable visualizations. In our testing, it produces more aesthetically pleasing images than most, if not all, of its competitors. And dare I say, it’s fun to spin the image generator roulette wheel and see what you get.
What, if anything, can you share about Midjourney and how it impacts speed and turnaround time?
That “roulette wheel” effect is real – it’s still a gamble whether usable imagery can be generated more quickly than simply using traditional tools like Photoshop to comp together artwork. When it works, it can produce great-looking photo and art assets very quickly. When it doesn’t, you may burn an hour or longer generating image after image, chasing the perfect iteration, but wind up with nothing that quite hits the mark. It’s a concern with all AI-generated content.
How do you expect creatives to leverage AI art generation tools, like Midjourney, in the future?
Short term, I’m looking forward to more integration with traditional design tools, like Figma and Photoshop – I expect that Midjourney at some point will move in that direction. Right now, the copy-and-paste workflow that it demands represents a lot of unnecessary friction.
Medium term, model-agnostic LoRAs or embeddings – chunks of data that tell an AI image generator how to perfectly generate an image of a specific person, place or thing – will start to show up among the assets exchanged between clients and agencies, similar to how 3D objects and high-resolution product photography are currently exchanged. These will let creatives generate images that convincingly integrate client goods directly into the scene, as opposed to needing to ‘shop them into place after generation.
Longer term, I expect that the ability to direct scenes in Midjourney will increase and expand beyond text prompting and region editing (which lets you regenerate sections of generated images). Midjourney has already previewed 3D environments using its technology, so it’s not a stretch to imagine that generated 3D scenes could be fed back into the machine to allow creatives to control camera positioning, object placement, lighting, visual effects, and more.
A Word of Caution about Midjourney
While the team’s exploration of Midjourney was made possible by our AI Playground, it’s important to note that the app ultimately did not pass our risk and compliance framework.
To better understand why, I reached out to Patty Parobek, Mod Op’s VP, AI Transformation, who explained the many reasons that Midjourney didn’t pass our tests:
- Lack of assurances that inputs are excluded from training data
- Unclear sourcing and licensing of training data
- Broad licensing claims over user inputs and outputs, creating potential IP and confidentiality risks
- No safeguards to prevent IP infringement in outputs
“That said, through the playground, we can use the vetting framework to create very specific responsible-use guardrails that a user must sign off on before we grant access,” explains Patty. “In this case, Midjourney is only allowed to be used internally or in ‘sandboxing’, unless we have a specific client who acknowledges the risks and says, ‘I know the risks, let’s do it anyway.’”
Interested in applying AI to your creative process, but aren’t sure where to start? Want to learn more about how our team qualifies – and disqualifies – apps? For more insights on the AI applications tested by our team, check out Mod Op’s AI Playground.