WASHINGTON – Recent movies such as Ex Machina may not be far off the mark in raising concerns about superintelligent computers that could outthink and threaten humans, some top scientists said Tuesday.

“If the system is better than you at taking into account more information and looking further ahead into the future, and it doesn’t have exactly the same goals as you…then you have a problem,” said Stuart Russell, a top computer science professor at the University of California, Berkeley.

It’s hard to predict when artificial general intelligence (AGI) or even artificial superintelligence (ASI) will become a reality, producing machines with intentionality or even consciousness.

“I looked at how my daughter interacts with Siri. She’s 9 years old. She really thinks Siri is real,” said Robert Atkinson, president of the Information Technology and Innovation Foundation, which sponsored the discussion. Although Siri works well, it’s still an artificial narrow intelligence (ANI) whose scope is very limited.

But “breakthroughs could be happening at any time. Each one of those breakthroughs could happen with no warnings,” said Russell.

The real risk, according to Russell, comes when a superintelligence is powerful enough to outthink human beings: because it can always look further into the future, it could avoid being shut down by people who don’t like the outcome. That’s when people might lose control of the machine, Russell warned.

Nevertheless, he still believes that the government should continue funding the research. “It seems to me that we need to look at where this road is going. Where does it end? And if it ends somewhere we don’t like, then we need to steer it in a different direction,” he said.

Ronald Arkin, an associate dean in the College of Computing at Georgia Tech, said funding basic research is crucial and that, at this point, it is premature to worry about humans’ safety in a world with artificially superintelligent computers. “But if we don’t fund the basic research, there’s no basic sense of being worried about safety issues at this point of time,” he said.

Atkinson disagreed, saying that if the risk is too high, the technology should be turned back, no matter how great the benefit.

Manuela Veloso, a professor at Carnegie Mellon University, said moving into the world of artificial intelligence is no different from all the other advances in computing.

Using Google Maps as an example, she said the app could lead a user into an alley where someone might shoot them, yet people still use it to get around. “We just have to sample the world,” she said. “We have to build trust, we have to use, and eventually things become familiar to us.”

“It will be a shame for humans who are so intelligent to not make good use of this technology,” Veloso said.


Published in conjunction with PC World