NEW YORK, NY.- For years, the conventional wisdom among Silicon Valley futurists was that artificial intelligence and automation spelled doom for blue-collar workers whose jobs involved repetitive manual labor. Truck drivers, retail cashiers and warehouse workers would all lose their jobs to robots, they said, while workers in creative fields like art, entertainment and media would be safe.
Well, an unexpected thing happened recently: AI entered the creative class.
In the past few months, AI-based image generators like DALL-E 2, Midjourney and
Stable Diffusion have made it possible for anyone to create unique, hyper-realistic images just by typing a few words into a text box.
These apps, though new, are already astoundingly popular. DALL-E 2, for example, has more than 1.5 million users generating more than 2 million images every day, while Midjourney's official Discord server has more than 3 million members.
These programs use what's known as generative AI, a type of AI that was popularized several years ago with the release of text-generating tools like GPT-3 but has since expanded into images, audio and video.
It's still too early to tell whether this new wave of apps will end up costing artists and illustrators their jobs. What seems clear, though, is that these tools are already being put to use in creative industries.
Recently, I spoke to five creative-class professionals about how they're using AI-generated art in their jobs.
It Spit Back a Perfect Image.
Collin Waldoch, 29, a game designer in the New York City borough of Brooklyn, recently started using generative AI to create custom art for his online game, Twofer Goofer, which works a bit like a rhyming version of Wordle. Every day, players are given a clue, like "a set of rhythmic moves while in a half-conscious state," and are tasked with coming up with a pair of rhyming words that matches the clue. (In this case, "trance dance.")
Initially, Waldoch planned to hire human artists through gig-work platform Upwork to illustrate each day's rhyming word pair. But when he saw the cost (between $50 and $60 per image, plus time for rounds of feedback and edits), he decided to try using AI instead. He plugged word pairs into Midjourney and DreamStudio, an app based on Stable Diffusion, and tweaked the results until they looked right. Total cost: a few minutes of work, plus a few cents. (DreamStudio charges about 1 cent per image; Midjourney's standard membership costs $30 per month for unlimited images.)
"I typed in 'carrot parrot,' and it spit back a perfect image of a parrot made of carrots," he said. "That was the immediate aha moment."
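(For readers who want to peek under the hood: the kind of text-to-image call Waldoch describes can be reproduced with a few lines of Python. The sketch below is illustrative only; it assumes Hugging Face's open-source diffusers library and the publicly released runwayml/stable-diffusion-v1-5 weights, and the prompt and file name are made up. The designers quoted in this piece used hosted apps such as Midjourney and DreamStudio, not code like this.)

# Minimal text-to-image sketch using the open-source Stable Diffusion model,
# the same family of model that powers DreamStudio. Requires a GPU and the
# torch, transformers and diffusers packages.
import torch
from diffusers import StableDiffusionPipeline

# Load the publicly released v1.5 checkpoint (an assumption; any Stable
# Diffusion checkpoint works the same way).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # a GPU turns minutes of generation into seconds

# A prompt in the spirit of the "carrot parrot" example above.
image = pipe("a parrot made of carrots, studio photo").images[0]
image.save("carrot_parrot.png")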
Waldoch said he didn't feel guilty about using AI instead of hiring human artists, because human artists were too expensive to make the game worthwhile.
"We wouldn't have done this if not for AI," he said.
I Don't Feel Like It Will Take My Job Away.
Isabella Orsi, 24, an interior designer in San Francisco, recently used a generative AI app called InteriorAI to create a mock-up for a client.
The client, a tech startup, was looking to spruce up its office. Orsi uploaded photos of the client's office to InteriorAI, then applied a "cyberpunk" filter. The app produced new renderings in seconds, showing what the office's entryway would look like with colored lights, contoured furniture and a new set of shelves.
Orsi thinks that rather than replacing interior designers entirely, generative AI will help them come up with ideas during the initial phase of a project.
"I think there's an element of good design that requires the empathetic touch of a human," she said. "So I don't feel like it will take my job away. Somebody has to discern between the different renderings, and at the end of the day, I think that needs a designer."
It's Like Working With a Really Willful Concept Artist.
Patrick Clair, 40, a filmmaker in Sydney, started using AI-generated art this year to help him prepare for a presentation to a film studio.
Clair, who has worked on hit shows including Westworld, was looking for an image of a certain type of marble statue. But when he went looking on Getty Images, his usual source for concept art, he came up empty. Instead, he turned to DALL-E 2.
"I put 'marble statue' into DALL-E, and it was closer than what I could get on Getty in five minutes," Clair said.
Since then, he has used DALL-E 2 to help him generate imagery, such as an image of a Melbourne tram in a dust storm, that isn't readily available from online sources.
He predicted that rather than replacing concept artists or putting Hollywood special effects wizards out of a job, AI image generators would simply become part of every filmmakers tool kit.
"It's like working with a really willful concept artist," he said.
"Photoshop can do things that you can't do with your hands, in the same way a calculator can crunch numbers in a way that you can't in your brain, but Photoshop never surprises you," he continued. "Whereas DALL-E surprises you and comes back with things that are genuinely creative."
What If We Could Show What the Dogs Playing Poker Looked Like?
During a recent creative brainstorm, Jason Carmel, 49, an executive at New York advertising agency Wunderman Thompson, found himself wondering if AI could help.
"We had three and a half good ideas," he said of his team. "And the fourth one was just missing a visual way of describing it."
The image they wanted (a group of dogs playing poker, for an ad being pitched to a pet medicine company) would have taken an artist all day to sketch. Instead, they asked DALL-E 2 to generate it.
"We were like, what if we could show what the dogs playing poker looked like?" Carmel said.
The resulting image didn't end up going into an ad, but Carmel predicts that generative AI will become part of every ad agency's creative process. He doesn't, however, think that using AI will meaningfully speed up agencies' work or replace their art departments. He said many of the images generated by AI weren't good enough to be shown to clients, and that people who weren't experienced with these apps would probably waste a lot of time trying to formulate the right prompts.
"When I see people write about how it's going to destroy creativity, they talk about it as if it's an efficiency play," Carmel said. "And then I know that they maybe haven't played around with it that much themselves, because it's a time suck."
This Is a Sketch Tool.
Sarah Drummond, a service designer in London, started using AI-generated images a few months ago to replace the black-and-white sketches she did for her job. These were usually basic drawings that visually represented processes she was trying to design improvements for, like a group of customers lining up at a store's cash register.
Instead of spending hours creating what she called "blob drawings" by hand, Drummond, 36, now types what she wants into DALL-E 2 or Midjourney.
"All of a sudden, I can take like 15 seconds and go, 'Woman at till, standing at kiosk, black-and-white illustration,' and get something back that's really professional looking," she said.
Drummond acknowledged that AI image generators had limitations. They aren't good at more complex sketches, for example, or at creating multiple images with the same character. And like the other creative professionals, she said she didn't think AI would replace human illustrators outright.
"Would I use it for final output? No. I would hire someone to fully make what we wanted to realize," she said. "But the throwaway work that you do when you're any kind of designer, whether it's visual, architectural, urban planner: you're sketching, sketching, sketching. And so this is a sketch tool."
This article originally appeared in The New York Times.