Ge Wang: Human Well-Being Should Be AI Creators’ Goal

Ge Wang might be one of the most playful professors at Stanford. He designed an ocarina app for iPhone; conducts SLOrk, the Stanford Laptop Orchestra, which uses speakers made from IKEA salad bowls; wrote a book titled Artful Design: Technology in Search of the Sublime; and teaches his Music and AI students to take a playful approach to creating music using computers and AI.

But Wang’s motto is actually “worry, be happy,” as in a “post-Bobby McFerrin way of looking at the world,” he says. 


In this recent conversation, he describes his path to computer music, his worries for the future as AI develops uncritically, and his hope that, as a new associate director of the Stanford Institute for Human-Centered AI, he can help HAI interrogate its premises and goals.

How did you become a professor of music and computer science?

For someone whose life is focused on artful design, my path converged in a way that was not at all designed. There was, however, a kind of gravity that pulled together a few threads in my life, including a love of music making, tinkering, coding, and teaching.

When I was 13, my parents got me an electric guitar. I didn’t ask for it, and I’ve always wondered if it was to avoid something worse, like the drums! I was at the age where I loved video games, and then I realized how much I loved making music. 

At the same time, I like to build things. When I went to Duke University in 1996 to study computer science, that was me pursuing the joy of building things. At some point, I started to wonder if I could connect my love of making music with my love of tinkering and coding, which led me to grad school at Princeton, where I studied computer music. 

Grad school was very freeing. My PhD advisor, Perry Cook, had converted a coffee mug into a musical instrument that he played like a trumpet. And I said, “Wait, this can be research?” and Perry said, “This IS research!”

I’m also a fourth-generation teacher. My dad’s a professor, both my grandparents are teachers, and my great-grandfather was a science teacher who translated many modern chemistry, physics, and biology texts into Chinese.

So, not by design, these different forces – the joy of making music, the joy of building things, and my teaching lineage – all kind of converged to bring me to this job at Stanford, where I’ve been for 16 years.

What are your concerns about where AI is taking humanity?

I think society and our educational institutions haven’t given engineers and designers the tools to deal with the social, cultural, and historical contexts in which technology is being deployed. When designers and engineers create AI and other technology tools, they are literally designing today’s world. They basically have godlike power – the power to shape the way other people work, play, live, and relate to other people. And with that power comes responsibility. I think most of us do not know how to begin to think about how to create AI that, with humility, shapes the world for the benefit of humans in their messy human, social context.

It’s simplistic to say flatly that AI is dangerous, but the potential for profound peril is ever present. The technology is developing so rapidly that we don’t have time to read the new AI papers that come out, or even the headlines about what happened today in AI. And given that speed, how can we possibly hope that AI development is being done with our well-being and our community’s well-being – never mind society’s well-being – at heart? Which to me means AI should be a source of fundamental concern for us all.

As you take on this new associate director role, what are your hopes for Stanford HAI?

HAI is an institution funded with hundreds of millions of dollars, with a lot of corporate interests and a lot of academic interests – and those two interests are growing more and more aligned, for better or for worse. What I bring to the associate director role is a desire and willingness to interrogate ourselves. If we don’t do that as an institution, then I don’t think we can in good conscience say we’re doing our best.

One of my colleagues said that one of the hallmarks of a vibrant field of inquiry is the capacity and the insistence to question its most fundamental definitions. And at HAI, that would include questioning the idea that what can be automated should be automated, which is a premise for so many people. Or that optimization should be the ultimate virtue of engineering or problem solving. Maybe instead of optimization, the goal should be human well-being. What if we build toward human and planetary flourishing instead of building toward making something more efficient?

We think we’re designing for social good with tech when most of the time we are only trying to make something slightly more convenient because that’s good for business. But have we really changed anything for the better – for cultures, communities, or individuals?

I also think we haven’t figured out what human-centered even means. And rather than human-centered, perhaps we need to be humanistic. Because in my heart of hearts, I’m not sure we as humans are worthy of being centered on this planet with other life forms! 

And if HAI doesn’t ask these questions of what we’re doing, and if we don’t interrogate ourselves compassionately but critically, then should we even be here in the form we are? If we don’t jolt the system, and if we don’t artfully overstep our boundaries now, we may lose the chance to alter the course that HAI, Stanford, and society as a whole are currently on.

In a perfect world, five years from now, where is HAI, and perhaps society in general, vis-à-vis AI?

What I hope for HAI and for society five years from now is that we have a much deeper awareness of the need to slow down and ask the most important question of all the questions we can ask, which is: How do we want to live? What do we really want with artificial intelligence?

It’s not a question of how to make things more efficient, or even more fair and unbiased. It’s a question of what we desire in the first place and how to evaluate those desires. It’s a nuanced cultural question that is going to shape how we live. Right now, we have placed too much weight on optimization and on making machines indistinguishable from humans, when what we really need is to slow down and ask the most fundamental questions.

I’m also hoping that in five years HAI has engaged in a critical thinking, education, and outreach project to encourage people to ask themselves how humans want to live with AI. HAI could become a kind of “AI research whisperer” – like a horse whisperer – that gets at what really matters and can generate the kind of discourse and political will to think more culturally, socially, aesthetically about what we’re going to do with the tools we create. Because at the end of the day, our tools create us.

Can you point to an example of a researcher slowing down and stepping away from AI for a humanistic reason?

In my graduate school application I said I wanted to build the world’s most badass algorithmic composition engine. But when I told a guitarist I wanted to build this algorithmic composition engine, he looked at me very earnestly and asked, “What’s the point?” And I realized that I didn’t know what the point was. Why would I want to make a generative music engine when making music yourself is so joyful? 

So in grad school, instead of making an algorithmic composition engine, which these days we’d probably call generative AI, I made an old-school programming language that can be used as a tool to make instruments. And that was the beginning of my views on aesthetic engineering: the value of designing a tool that offers you a new way of thinking about what you’re trying to do, and that augments human effort rather than replacing it. 

You teach a course called Music and AI. What are your goals for this course?

In my Music and AI class, I have two goals for my students. First, I want them to have a chance to play with AI, because by the time they get here, many of them have had all the creativity and play wrung out of them like water from a rag, and I feel…sad for them. Second, I want them to think about AI as broadly as possible. I don’t care what they build so long as it’s playful and thoughtful, and hopefully gets closer to our shared human values rather than further away from them.

One effect of this approach is that it helps address the feeling of AI FOMO – fear of missing out – that most of my students have. 

What is AI FOMO?

My music students come to me saying they’ve spent four years making music – even computer music – and they wonder if they should have taken deep learning classes even though they were never interested in it. 

For computer science students, AI FOMO takes the form of wondering whether the research they are working on now will even matter by the time they are done because the technology is moving so quickly.

I think one way to mitigate AI FOMO is to learn more about AI, to think about it broadly and critically, and to figure out for ourselves what makes us happy and whole – which has nothing to do with AI.

Can you give an example of how to reduce AI FOMO?

Yes. In my class, we use Wekinator, an interactive machine learning framework that lets humans train an AI model by example. You can create a new musical instrument, for instance, by training a model to produce a particular sound given a 3D gesture or the press of a button on a joystick or gamepad. Given a few input/output examples, Wekinator makes a guess as to what the sound should be for the gestures you haven’t specified. Those guesses may or may not be reasonable or interesting, but for those that are undesirable, Wekinator allows you to incrementally provide more examples.

It’s a human-in-the-loop type of machine learning. You’re teaching the model how you want it to work by acting it out yourself using your own data, which consists of physical gestures and sounds.
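
Wekinator itself is a standalone application that musicians wire up to their own controllers and synthesizers, so the sketch below is not its actual API. It is a minimal Python approximation of the same workflow, assuming scikit-learn’s k-nearest-neighbors regressor as a stand-in for Wekinator’s built-in models, with invented gesture features (controller tilt) and synth parameters (pitch and amplitude):

```python
# A minimal human-in-the-loop sketch in the spirit of Wekinator.
# NOT Wekinator's actual API: k-NN regression is a stand-in for its
# built-in models, and the gestures/sounds are invented for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

class GestureToSoundMapper:
    """Learns a mapping from gesture features to synth parameters by example."""

    def __init__(self, k=3):
        self.k = k
        self.gestures = []  # inputs, e.g. (x, y, z) tilt from a controller
        self.sounds = []    # outputs, e.g. (pitch_hz, amplitude)
        self.model = None

    def add_example(self, gesture, sound):
        """Record one demonstration: 'when I do this, play that.'"""
        self.gestures.append(gesture)
        self.sounds.append(sound)

    def train(self):
        # k can't exceed the number of examples demonstrated so far
        n_neighbors = min(self.k, len(self.gestures))
        self.model = KNeighborsRegressor(n_neighbors=n_neighbors)
        self.model.fit(np.array(self.gestures), np.array(self.sounds))

    def predict(self, gesture):
        """Guess synth parameters for a gesture the user never demonstrated."""
        return self.model.predict(np.array([gesture]))[0]

mapper = GestureToSoundMapper()
# Demonstrate a few input/output pairs by acting them out...
mapper.add_example((0.0, 0.0, 0.0), (220.0, 0.2))  # at rest: low and quiet
mapper.add_example((1.0, 0.0, 0.0), (440.0, 0.6))  # tilt right: mid register
mapper.add_example((1.0, 1.0, 0.0), (880.0, 0.9))  # tilt up too: high and loud
mapper.train()

# ...then let the model guess for a gesture you never specified.
print(mapper.predict((0.9, 0.5, 0.0)))  # interpolated from nearby examples

# If a guess sounds wrong, don't debug the model: demonstrate a correction
# and retrain. That incremental loop is the human-in-the-loop part.
mapper.add_example((0.9, 0.5, 0.0), (660.0, 0.7))
mapper.train()
```

The model here is almost beside the point; what matters is the loop in which a human demonstrates, listens, corrects, and retrains.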

As one of my students, a computer science master’s student working in AI, commented in her end-of-semester reflection, it was remarkable how students went in wildly different creative directions using this single tool. She said she learned that “AI is astoundingly competent today, but humans are astounding.” That’s an example of a student getting over AI FOMO.

In making music using computers and AI, you’ve advocated for keeping a human in the loop. Why is that essential? 

A widely accepted premise is that AI should be designed to be more and more indistinguishable from humans, à la the Turing test. It’s part of pop culture, research culture, and education culture, and for 50 years it’s been the dominant metric of progress. Think of chess, image generation, and ChatGPT. All of these involve making AI do something that humans can do. But by never questioning that premise, we fall into the Turing trap.

And to get out of that trap, we need to ask ourselves which of our muscles will atrophy as AI becomes more humanlike. For example, when you write a book, it’s during the process of writing that you figure out what you’re trying to say. And it’s that process, and the craft of working with our full selves, that will be lost when we use AI to write for us.

I’m worried that by going deeper and deeper into using humanlike AI to make everything more convenient and more automated, we convince ourselves we’re making our lives better in the long run. But it will only make things marginally easier in the short run, without beginning to address what we want from AI in the long run.
