Artificial Intelligence & Humanity

What I rely on AI for—and what I don’t.


I’m a big fan of AI. I use it a lot—multiple times a day across ChatGPT, Bard, and Midjourney—and I’ve come up with a few guidelines for where I do and don’t rely on it:

Overall, I rely on AI to help me design more and design faster.

I don’t rely on AI to help me design better.

I definitely don’t rely on AI to design for me or instead of me.

For better or worse, the process I learned in design school was:

  1. Come up with a ton of ideas
  2. Choose a promising set from within those ideas
  3. Riff on those ideas, often creating more through combining and varying them
  4. Converge those ideas until arriving at a promising solution (or set of solutions)

Step #1 takes forever. It always does. When I’m designing logos, my first step is to draw 100 versions. At its worst, there’s a separation between the designers who are willing to do this and the designers who aren’t. Not to mention the designers who are capable of doing this vs. the designers who aren’t.

This is where Midjourney helps. (I’ll use “Midjourney” as a placeholder for “general AI tool” in this example.) Because I’m human, I’m not as good at coming up with the quantity of ideas that a computer can in the same amount of time. Where it often takes me hours or days to draw 100 different logos, Midjourney can do that in a matter of minutes.

Midjourney sucks at step #2. Of the four images generated, how would it know which are good and which are bad? Only the prompter can decide, because they know the criteria, and there’s no way to share it yet. I imagine that’s why there’s the ability to request variations on any particular image. Why doesn’t Midjourney do variations automatically? Aside from business reasons, it doesn’t have a way to determine which of the original ideas to riff on.

Once it’s directed to riff, though, Midjourney is great at step #3. And it goes back to sucking at step #4.

What can we learn from these steps? AI is great at anything quantity-related and bad at anything quality-related.

Why is AI so bad at the quality-related part? One reason is that it works too fast. In his book Where Good Ideas Come From, media theorist Steven Johnson talks about the importance of pace in originality:

…snap judgments of intuition are rarities in the history of world-changing ideas. Most hunches that turn into important innovations unfold over much longer time frames. They start with a vague, hard-to-describe sense that there’s an interesting solution to a problem that hasn’t yet been proposed, and they linger in the shadows of the mind, sometimes for decades, assembling new connections and gaining strength… Because these slow hunches need so much time to develop, they are fragile creatures, easily lost to the more pressing needs of day-to-day issues. But that long incubation period is also their strength, because true insights require you to think something that no one has thought before in quite the same way.

Many of the planned innovations in AI seem to promise an even speedier generation process with higher quality results. But maybe what we need is a slower generation process, with room to ruminate and let ideas collide on their own. It takes me 2 days to draw 100 logos and 2 minutes for Midjourney. I used to think it was the “100 logos” part that was crucial, but lately I’m entertaining the idea that it might be the “2 days” part that’s the secret ingredient instead.

That’s not to say that speed isn’t valuable… only that when speed is deployed matters. I’m reminded of the anecdote of famed graphic designer Paula Scher drawing the logo for the new Citi—a merger between Citibank and Travelers Insurance Company—in a matter of seconds in a meeting. As she explains it:

Travelers Insurance Company had a red umbrella [in its logo]. [Citi has] a “t”. The bottom of a lowercase “t” has a little hook on the bottom. If you put an arc on the top, that’s an umbrella.

Original sketch of the Citi logo by Paula Scher, and the final rendering

When asked later how she was able to arrive at that idea so quickly, she said:

It happened in a second. How can it be that you talk to someone and it’s done in a second? But it is done in a second… it’s done in a second after 34 years. It’s done in a second, after every experience, and every movie, and after everything in my life that’s in my head.

This logo idea comes from all of the things she’s experienced in her life. I might even posit that Scher is the only one who could have come up with this logo, as no one else on the planet has the same combination of experiences that she’s had. Said differently, the combination of her own experiences is the training data for how to generate this so quickly.

Along those lines, it could be safe to assume that the right training data can lead to similar results. For example, if we could somehow feed Midjourney all of Paula Scher’s experiences, could it conceivably design logos the way she does? Could it have arrived at the Citi logo?

I say no. I’m no expert on machine learning, so please do correct me here if I’m grossly misinterpreting, but, as I understand it, diffusion—the kind of model Midjourney and other image generators like it use—works by learning the relationship between an image and the text used to describe it, at scale. So, if it finds millions of images that relate the word “cyclops” to an image of a face with only one eye as opposed to two, the likelihood is high that a prompt containing the word “cyclops” wants a one-eyed face as a result. In other words, it leans heavily on averages; the closer a result matches the average of the training data, the higher the model’s confidence that the result is “correct,” or at least desirable.
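To make the averaging point concrete, here’s a deliberately crude toy sketch—not how diffusion actually works (real models learn to denoise, not to average pixels)—of what “generation that leans on averages” looks like. The tags, vectors, and functions are all made up for illustration; the takeaway is that any outlier in the training data gets diluted toward the mean.

```python
# Toy "model": generates by averaging every training image that shares a tag.
# Images are simplified to 3-number vectors; the values are invented.

def train(examples):
    """Group training image vectors by their text tag."""
    by_tag = {}
    for tag, img in examples:
        by_tag.setdefault(tag, []).append(img)
    return by_tag

def generate(model, tag):
    """'Generate' an image for a tag: the per-pixel average of its examples."""
    imgs = model[tag]
    n = len(imgs)
    return [sum(px) / n for px in zip(*imgs)]

training_data = [
    ("cyclops", [1.0, 0.0, 0.0]),  # a typical one-eyed face
    ("cyclops", [0.9, 0.1, 0.0]),  # another typical example
    ("cyclops", [0.0, 0.0, 1.0]),  # an outlier: the unusual, "creative" one
]

model = train(training_data)
print(generate(model, "cyclops"))  # the outlier is washed out toward the average
```

The two typical examples dominate the output; the one unusual example barely registers. That’s averaging doing exactly what it’s built to do—and exactly the opposite of surfacing the outlier.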

The problem is that this is the polar opposite of what we consider creativity to be. Creativity isn’t about averages. It’s about the outliers, sometimes the one thing that’s different from all the rest. Oftentimes, the only thing we can credit for that deviation is serendipity.

Back to Johnson:

…serendipity is not just about embracing random encounters for the sheer exhilaration of it. Serendipity is built out of happy accidents, to be sure, but what makes them happy is the fact that the discovery you’ve made is meaningful to you.

To date, AI has little basis for understanding what’s meaningful to me, or you, or us. Because, as humans, we’re really bad at understanding what’s meaningful to us, and why. We laugh and cry at the oddest things. We sometimes find pleasure in the mundane, and we’re sometimes apathetic toward the pleasurable. So the only way for AI to figure out something that we haven’t is for the creation to surpass its creator.

Until AI can figure out that “happy” part, it’s just accidents. That’s not necessarily a bad thing either; the pieces of my design process that thrive on accidents—whether happy or not—are the pieces I’m happy to outsource to the machines. If I embrace the role of AI as an accident generator, then I’d gladly give it all the areas of my life where accidents have little penalty: the parts that have little to do with my humanity. I find it best said by novelist SJ Sindu:

We don’t need AI to make art. We need AI to write emails and clean the house and deliver the groceries so humans can make more art.
