3 Rules for Using AI with Your Creative Team
In the rush to adopt AI, don't accidentally forget yourself, or your partners
For the last 3 years or so, everyone has been telling us—with ever-increasing intensity—that we need to embrace AI tools to work smarter and faster. “Don’t be left behind!” they tell us.
Fair enough. None of us wants to be left behind on the wave of a massive new technology. And I do see some exciting and helpful things coming from AI tools.
However, I want to offer 3 ground rules for using LLM tools to help with a creative project.
Give AI context on the problem and permission to ask questions before having it create something.
Much of the really interesting AI work I’ve seen comes from people who treat their AI as a thought partner. They instruct the AI to structure an approach to solving the problem and to ask clarifying questions. Then they give it background information on the problem they’re trying to solve, the challenges they need to overcome, and the perspective of the people they need to convince. Under those conditions, I’ve seen AI tools provide some remarkable help.
It should be no surprise that AI performs better under those conditions. Maybe there was a point in your own career when people treated you like a pixel pusher or a wireframe jockey and told you to create things with little to no context. Did you do your best work under those conditions? Probably not. Most designers begin to do truly excellent work when they develop the ability to ask smart questions and push back on assumptions before they start creating.
Unfortunately, most AI tools will cheerfully create whatever you ask for, not stopping to ask for context or clarification. For now, it’s up to you to prompt the AI to act like a smart and experienced partner. If you don’t, then you can expect to receive professional-looking—but fundamentally flawed—outputs.
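If you reach your AI through an API rather than a chat window, you can bake this ground rule directly into the system prompt. Here’s a minimal sketch, assuming the OpenAI Python SDK; the model name, prompt wording, and example request are illustrative assumptions, not the only way to do this.

```python
# A minimal sketch, assuming the OpenAI Python SDK ("pip install openai")
# and an OPENAI_API_KEY in the environment. The model name, prompt
# wording, and example request are all illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are an experienced design partner. Before creating anything, "
    "outline how you would approach the problem, then ask me clarifying "
    "questions about the audience, the constraints, and what success "
    "looks like. Do not produce a deliverable until I have answered."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute whatever model you use
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Help me plan a proposal deck for "
                                    "our design system migration."},
    ],
)
print(response.choices[0].message.content)
```

The same instruction works just as well pasted at the top of a chat session; the point is that most models won’t stop to ask unless you explicitly tell them to.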
You must deeply understand the problem you’re trying to solve.
If you’re going to treat AI like a thought partner, then you need to be able to clearly and effectively describe the problem you want to solve. At least for now, AI cannot infer this for you. Your AI tool may not specifically ask you for a clear problem statement, but that only means it’s making guesses and assumptions without telling you.
It may seem like clearly articulating the problem, the constraints you’re working within, and what success looks like should be an easy job, but often it is not. In fact, understanding the problem well enough to describe it to someone else can sometimes be the hardest part of a project. Think of the AI like a contractor who knows something about your industry, but certainly doesn’t know your company, your project history, or your stakeholders.
No wonder, then, that many people are tempted to skip that step and simply ask their AI tools to generate something. Write this slide. Synthesize these notes. Draft this chapter.
Unfortunately, if you haven’t taken the time to understand and describe the problem for yourself, then you may not even recognize whether your AI tool is giving you a good idea or a terrible idea. Just because it’s written with correct grammar, looks familiar, or contains the correct concepts, that doesn’t mean it will be effective!
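To make that concrete, here’s what a bare-minimum problem brief might look like, sketched as a block of context you could write down before asking an AI for anything. Every project detail below is invented for illustration; the headings are one possible structure, not a formula.

```python
# A hypothetical problem brief, written down before any AI is involved.
# Every detail and number below is invented for illustration.
PROBLEM_BRIEF = """\
Problem: Trial users abandon onboarding at step 3, and signups are flat.
Constraints: No new engineering capacity until Q3; must use the existing
design system; legal review adds two weeks to any copy change.
Stakeholders: A VP of Product who is skeptical of redesigns and wants
evidence before approving anything.
Success: A proposal the VP can approve in one meeting, with a measurable
target (for example, raise onboarding completion by 20 percent).
"""

# If you can't fill in each of these lines yourself, the AI can't
# reliably fill them in for you; it will guess without telling you.
print(PROBLEM_BRIEF)
```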
Don’t value AI tools more than your human partners.
Because of the pressure to adopt AI, I’ve noticed some people rushing to use AI tools as quickly and frequently as possible. Maybe you’re discussing an upcoming project, and somebody starts feeding everything the group says, in real time, into an LLM and asking it to rewrite your ideas more concisely. Maybe you ask someone to help you synthesize a group of user interviews, and their first step is to generate an AI summary. Maybe you’ve been asked to create a proposal for your leaders, and before anyone else can speak, someone has used an AI tool to generate a starter slide deck.
This is an anti-pattern for creative teams for two reasons.
If you’re the person whose first response is to plug everything into an AI tool, then you’re not engaging your own critical thinking skills.
By asking AI to immediately create outputs, you’re reverting to that junior version of yourself who made things without clearly framing the problem, asking smart questions, or challenging assumptions. There’s a reason you were invited to work on this project. Your team wants and needs your thoughts and perspective. Question whether this project makes sense. Question the assumptions everyone else is accepting. Insist on a clear statement of what success looks like. After you’ve engaged your own mind, there’s room to frame the problem for an AI partner.
By focusing on AI, you subtly undermine the value of your human partners.
As an analogy, think about AI tools as if they were another person physically present in the room. Have you ever been in a room where there’s one person whose opinion matters more than everyone else’s? Maybe the creative director or the VP. Whenever anyone else voices an idea, people turn to see what that high-ranking person thinks. Once that person gives their opinion, it’s hard to contradict it. In a physical room, you can actually see this happening in the posture and orientation of the people. In a dysfunctional team, you might even see the reverse, where a disliked or undervalued person is effectively ignored.
The point is that you can tell a lot about “who matters” in a room by how people face and respond to one another.
If you’re focused on using an AI tool, then you’re facing that tool and running everything past it, perhaps waiting for the AI’s reaction before you give your own. In effect, you’re treating the AI tool the same way you might treat a VP in the room. That behavior communicates to your human partners that their input is less valuable than the AI’s.
Unless you want your human partners to leave you and the AI to solve the problem on your own, you should consider this an anti-pattern for collaboration. Instead, give your human partners your full attention and engage their ideas with your own mind before feeding them to the AI.
Perhaps one day, a better interface for AI tools will allow them to engage in creative activities and offer ideas like a human partner, without requiring a human to type everything into a chat interface. Until then, don’t let the chat interface of an AI tool accidentally train you and your human partners to value the AI’s output more highly than you value one another.
What Do You Think?
I certainly have not seen everything there is to see with LLM tools. Let me know if you think there’s something I’m missing. Also, please let me know if there are other ground rules you would add to this list.
> By focusing on AI, you subtly undermine the value of your human partners.
This is a really great point. Although I worry some people might say, "My human partners *aren't* as valuable." 😅
I love your point on "Thought Partner." For me, it is an amazingly helpful Socratic Sparring Partner, a smart friend that I'm riffing with when no smart friend is around to help. This mentality adds a ton of value while preventing me from looking at it as a know-it-all or God.