Top AI researcher dismisses fears of AI ‘extinction’, challenging the ‘scientist hero’ narrative.

Kyunghyun Cho, a prominent AI researcher and associate professor at New York University, has expressed frustration with the current discourse on AI risk. While luminaries like Geoffrey Hinton and Yoshua Bengio recently warned of potential existential threats from the future development of artificial general intelligence (AGI) and called for regulation or a moratorium on research, Cho believes these doomsday narratives are distracting from the real issues, both positive and negative, posed by today’s artificial intelligence.

In a recent interview with VentureBeat, Cho, who is highly regarded for his seminal work on neural machine translation, which helped lead to the development of the Transformer architecture on which ChatGPT is based, expressed disappointment at the lack of concrete proposals at the recent Senate hearings on regulating AI's current harms, as well as at the lack of discussion of how to increase AI's beneficial uses.

While he respects researchers like Hinton and his former supervisor Bengio, Cho also warned against glorifying hero scientists or taking any one person's warnings as gospel, and expressed his concerns about the Effective Altruism movement that funds many of the efforts around AGI. (Editor's note: This interview has been edited for length and clarity.)

VentureBeat: You recently expressed disappointment on Twitter with the recent AI Senate hearings. Could you elaborate and share your thoughts on the AI Risk Statement signed by Geoffrey Hinton, Yoshua Bengio and others?


Kyunghyun Cho: First of all, I think there are too many letters. In general, I have never signed any of these petitions. I always tend to be a little more careful when signing something with my name. I don’t know why people just sign their names so lightly.

As for the Senate hearings, I read the entire transcript and felt a little sad. It's very clear that nuclear weapons, climate change, a potential rogue AI, obviously these can be dangerous. But there are many other harms that are actually being caused by AI right now, as well as immediate benefits that we see from AI, yet there wasn't a single concrete proposal or discussion of what we can do about the immediate benefits as well as the immediate harms of AI.

For example, I think Lindsey Graham pointed out the military use of AI. This is actually happening now. But Sam Altman could not make a single proposal on how to regulate the immediate military use of AI. At the same time, AI has the potential to optimize healthcare so that a better and more equitable healthcare system can be implemented, but none of this has actually been discussed.

I'm disappointed by a lot of this discussion of existential risk; now they even call it literal extinction. It's sucking the air out of the room.

VB: Why do you think this is so? Why is the discussion of existential risk sucking the air out of the room at the expense of more immediate harms and benefits?

Kyunghyun Cho: In a way, it's a great story: that this AGI system we create may turn out to be as good as us, or better than us. This is precisely the fascination humanity has always had, from the very beginning. The Trojan horse [something that appears harmless but is malicious] is a similar story, isn't it? It's about exaggerating things that are different from us but intelligent like us.

In my view, it is good that the general public is fascinated and excited by the scientific advances we are making. The unfortunate thing is that the scientists as well as the policymakers, the people who make decisions or create these advances, are only positively or negatively excited by these advances, not critical of them. Our job as scientists, and also as policymakers, is to be critical of many of these apparent advances, which can have both positive and negative impacts on society. But right now, AGI is kind of a magic wand that they're trying to wave to hypnotize people so that people can't be critical of what's actually happening.

VB: But what about the machine learning pioneers who are part of it? Geoffrey Hinton and Yoshua Bengio, for example, signed the AI Risk Statement. Bengio has said that he feels lost and somewhat regretful about his life's work. What do you say to that?

Kyunghyun Cho: I have immense respect for both Yoshua and Geoff, as well as Yann [LeCun]. I know them all pretty well; I've studied with them and worked with them. But the way I see it is this: obviously individuals, scientists or not, can have their own assessment of what kinds of things are more likely to happen, what kinds of things are less likely to happen, and what kinds of things are more devastating than others. Choosing a distribution over what will happen in the future, and choosing the utility function attached to each of those events, these are not like the hard sciences; there is always subjectivity there. That's fine.

But what I see as a really problematic aspect of [the repeated emphasis on] Yoshua and Geoff, especially in the media these days, is that it's a typical example of a kind of heroism in science. This is exactly the opposite of how science, and especially machine learning, has actually worked.

There has never been a single scientist who stays in their lab and, 20 years later, comes out saying, "Here's AGI." It has always been a collective effort of thousands, if not hundreds of thousands, of people around the world, over decades.

But now the hero scientist narrative is back. There's a reason they always put Geoff and Yoshua at the top of these letters. I think this is actually harmful in a way I never would have thought of. Whenever people talked about their problems with this kind of hero scientist fiction, I was like, oh well, that's a fun story. Why not?

But looking at what's happening now, I think we're seeing the downside of the hero scientist. They are all just individuals. They may have different ideas. Sure, I respect them, and I think that's how the scientific community always works. We always have dissenting opinions. But now this hero worship, combined with this AGI doomerism... I don't know, it's too much for me to follow.

VB: The other thing that strikes me as odd is that many of these petitions, like the AI Risk Statement, are funded behind the scenes by people in Effective Altruism [the Statement on AI Risk was released by the Center for AI Safety, which says it gets over 90% of its funding from Open Philanthropy, which in turn is primarily funded by Cari Tuna and Dustin Moskovitz, prominent donors to the Effective Altruism movement]. How do you feel about that?

Kyunghyun Cho: I’m not a fan of Effective Altruism (EA) in general. And I’m very aware that the EA movement is what’s actually driving this whole thing around AGI and existential risk. I think there are too many people in Silicon Valley with this kind of savior complex. They all want to save us from the inevitable doom that only they see and think only they can fix.

Along those lines, I agree with what Sara Hooker of Cohere for AI said [in your article]. These people are loud, but they're still a fringe group within society as a whole, let alone the whole machine learning community.

VB: So what's the counter-narrative to that? Would you write your own letter or release your own statement?

Kyunghyun Cho: There are things you can’t write a letter about. It would be ridiculous to write a letter saying there is absolutely no way there is a rogue AI that will turn everyone into paper clips. It would be like, what are we doing?

I am an educator by profession. I feel like what’s missing right now is exposure to the little things that are being done so that AI can benefit humanity, the little victories achieved. We need to expose the general public to this small but sure stream of successes that are being achieved here.

Because at the moment, unfortunately, the sensational stories are the ones that get read more. The idea is that either AI will kill us all or AI will cure everything, both of which are incorrect. And maybe that's not even the role of the media [to address this]. In fact, it's probably the role of AI education, say K-12, to introduce basic concepts that aren't actually complicated.

VB: So, if you were talking to your fellow AI researchers, what would you say you believe regarding the risks of AI? Would it be focused on current risks, as you described? Would you add anything about how it will evolve?

Kyunghyun Cho: I don’t really tell people about my perception of AI risk, because I know I’m just an individual. My authority is not well calibrated. I know this because I’m a researcher myself, so I tend to be very careful about talking about things that have extremely miscalibrated uncertainty, especially if it’s some kind of prediction.

What I tell AI researchers, not the senior ones, they know better, but my students or the younger researchers, is that I just do my best to show them what I work on, and what I think we should work on to give us small but tangible benefits. That's why I work on AI for healthcare and science. That's why I spend 50% of my time at [biotechnology company] Genentech, as part of the Prescient Design team working on computational antibody and drug design. I just think it's the best I can do. I'm not going to write a grand letter. I'm very bad at that.


