What problems could we be solving if we weren't so intent on "automating" art?
I wrote a year or so ago about a conversation I had at an illustration conference with a representative of an artist advocacy organization:
(You can read that full illustrated piece here.)
I have been thinking about this lately. Last week, OpenAI started giving people access to its new Sora app, where they can easily generate high-quality video of almost anything. Apparently some people are enjoying it, but I can only see the dark side.
I don’t think making videos ever felt like drudgery to anyone, yet this, rather than any of the genuinely annoying, manual tasks out there, is the activity they decided to pour their resources into. The team that pushed this product out focused on automating delight instead of solving an actual problem or challenge.
Now Pandora’s box has been opened ever wider for misinformation and fabricated courtroom evidence, this time in the form of AI-generated video. I’ve already found myself questioning videos that don’t come from highly reputable news sources. How are we to judge reality going forward?
Last year, a team of prominent computer scientists wrote a vision paper about how to focus the development of AI responsibly,1 and I translated it into simple comics. The paper’s main point is that where we focus our resources, the technology will follow, so let’s focus on the right things.
Here is an excerpt of the visual adaptation:
There are so many things about AI that rub me the wrong way, but it’s here to stay in some form. Since that’s the case, I am inspired by the voices talking about how to develop it in a direction most likely to serve the public good. Unfortunately, building tools that are increasingly good at creating fake videos and images achieves the exact opposite of that goal.
I find the zeal to automate art-related activities baffling. Art is remarkable precisely because of all the tiny decisions, the layers of effort, the time spent mastering a craft; all of the elements that lead to a single creation. They are necessary steps on the path to any piece of art.
Before my junior year of high school, my violin teacher made me come to her house on a summer day to painstakingly copy the exact bowing marks from a massive photo book of Mozart’s original handwritten music and markings. I wasn’t allowed to use whatever the music publisher had already printed on the music. I had to make sure I was moving my violin bow exactly the way that Mozart had wanted. Why? Because it would sound the best that way, and the artistry of violin music was her love.
Maybe that task felt like a bit of drudgery in the moment. Actually, I know it did. I only did it because my violin teacher was a woman you did not say “no” to. But as I think about that task today, it’s clearly not the kind of task that should be automated. The time I spent sitting on the rough carpet, copying the markings from the book into my copy of Mozart’s Violin Concerto No. 3 in G Major2 was part of developing the craft I needed to play that piece.
That violin teacher passed away last week at the age of 95. I wonder what she would have thought about AI-generated music. Actually, I don’t wonder. I bet she would have hated it. She was the woman who (deviously?) scheduled her annual “violin party” with her students on Super Bowl Sunday. Instead of football, we would watch black-and-white recordings of Mischa Elman playing classical violin while we awkwardly ate our cheese and crackers. She would have had no time for any AI-generated slop.
I admire the Shaping AI paper’s goal of developing the technology intentionally, in the most responsible way possible. But it feels a bit like watching climate temperatures rise…the stakes just keep going up. At the very least, please, developers, can we just focus our resources on automating drudgery or solving an actual problem?