WASHINGTON — Artificial intelligence (AI) experts on Wednesday urged Congress to jump into the intimidating world of regulating AI and avoid some of the pitfalls of the past, when the government failed to rein in transformative technology. On the morning of November 8, the Senate Homeland Security and Governmental Affairs Committee hosted a hearing titled “The Philosophy of AI: Learning from history, shaping our future.”

“This is not the first time that humans have developed staggering new innovations. Such moments in history have not just made our technologies more advanced, they’ve affected our politics, influenced our culture, and changed the fabric of our society,” the chairman of the committee, Sen. Gary Peters, D-MI, said in his opening remarks.

Past waves of technological change disrupted society in different ways. Today, AI promises widespread automation, which was also a consequence of the British Industrial Revolution and, in the US, the mechanization of agriculture in the 1800s. In his testimony, Dr. Daron Acemoglu, an economist and professor at the Massachusetts Institute of Technology (MIT), said that during that era, automation ultimately created millions of jobs. His recent work, however, has found downsides to automation. “Automation accounts for more than half, and perhaps as much as three quarters, of the surge in US wage inequality,” he wrote. The difference today, he told the committee, is that AI automation aims to replace human labor rather than empower it.

“To improve human performance, we need to think beyond creating AI systems that seek to achieve artificial, general human intelligence and human parity,” said Acemoglu.

Applying a philosophical lens to regulation is part of what the government needs to do to take charge, the experts said.

“The reason we must consider the philosophy of AI is because we are at a critical juncture in history, where we are faced with a decision – either the law governs AI, or AI governs the law,” said Margaret Hu, a research and law professor at William & Mary Law School.

Expert witnesses said one prevailing narrative is that humans cannot control AI, and they urged Congress to change that narrative by stepping in to regulate how AI is created and used. However, the committee’s ranking member, Sen. Ron Johnson, R-WI, expressed doubt that the federal government would come together to address the issue.

“This is an incredibly important issue and question, and I just really don’t know whether this dysfunctional place is gonna come up with the right answers,” he said.

In the current political and socioeconomic climate, American democracy is weakened and vulnerable to the downsides and misuse of advanced technology, according to Dr. Shannon Vallor, who holds the chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute at the University of Edinburgh.

AI threatens security, privacy; exacerbates inequality

“On social media and commercial tech stages, generative-AI evangelists are now asking: what if the future is merely about humans writing down the questions, and letting something else come up with the answers?” Vallor said. “That future is an authoritarian’s paradise.”

Sen. Peters asked the witnesses whether they saw any merit in more laissez-faire approaches to AI.

“There’s a popular line of thought out there touted by many influential people that unfettered technological innovation will solve all of our problems and it’s going to lead to increased wellbeing for all. Just let it go and we should all be very happy about the end result,” he said, characterizing a point of view that resembles ideas put forward in a recent, widely circulated manifesto by prominent Silicon Valley venture capitalist Marc Andreessen.

“I completely disagree,” said Acemoglu, arguing that the advance of technology must take place in an ecosystem “where social input, democratic input and government expertise are part of setting the agenda for innovation.”

“I think this type of techno-utopianism is something that we really need to look at with an eye of skepticism, and especially in a constitutional democracy,” said Hu. “I think that it poses the problem that the ends may be not justifying the means in a constitutional democracy.”

“We solve problems often with the help of technology, but technology doesn’t solve problems,” said Vallor. “And when we start expecting technology to solve our problems for us without human wisdom and responsibility guiding it, our problems actually tend to get worse.”

Wherever there is technology, there will be a person or party able and willing to misuse it, Vallor said. Realistically, AI cannot be kept out of the hands of bad actors; a better approach is to identify and mitigate the incentives for misuse. One such incentive is the tech industry’s large-scale data mining to target digital ads and products, a practice the witnesses said drives many adverse effects on users and society.

Sen. Laphonza Butler, D-CA, raised concerns that AI would be used to target disadvantaged communities in America and increase inequity. “The reality is that this technology is already widening preexisting inequities by empowering systems that have long histories of racist and anti-activist surveillance,” said Sen. Butler.

“I think we have seen plenty of evidence that if we do nothing, the use of AI technologies will continue to disproportionately harm the most vulnerable and marginalized communities here in this country and also in other countries,” said Vallor. “When you allow large companies and investors to reap the benefits of the innovation in ways that push all the risk and cost of that process onto society, and particularly onto the most vulnerable members of society as we’re seeing today, you produce an engine of accelerating inequality and accelerating injustice.”

Another gap could open between blue-collar and white-collar workers. Sen. Johnson said he was worried that college-educated workers could lose their jobs to AI tools.

Acemoglu argued that AI is more likely to be used to replace the technical work of blue-collar workers, while college-educated workers would be more likely to use AI to assist with their tasks. For AI innovation to be used for good, he suggested deploying it to support and train blue-collar and trades workers specifically, rather than replacing their tasks altogether.

High stakes, potentially irreversible decisions

The witnesses urged the government to get involved in overseeing how AI is developed and used in the private sector. On data privacy, Hu described the issue as a “triangle of negotiation” among the technology companies that create tools, the citizens who use them, and the government that regulates the market. She raised the prospect that a constitutional amendment may be necessary to “enshrine privacy as a fundamental right.”

Acemoglu stressed that the government needs to steer the market back toward useful innovation and away from pure profit maximization. One solution could involve taxing digital advertising, which would push tech business models away from personal data collection. Policy could similarly shape the kinds of AI products being created. Acemoglu said OpenAI’s product, ChatGPT, learned from speech patterns on social media to mimic how humans talk.

“The amount of information was not important, it was sounding like humans,” he said. “This is why government regulation is important.”

Sen. Jon Ossoff, D-GA, asked what types of policy could avoid “whack-a-mole” regulation that merely patches problems as they arise, and instead lay a foundation of basic law and fundamental principles for a more comprehensive regulatory regime.

In addition to fundamental privacy rights, Hu said another avenue could be amending the Civil Rights Act of 1964 to address possible discrimination in automated systems.

Sen. Ossoff also asked Acemoglu for his views on a fiduciary approach to data protection, under which companies that hold personal data would owe legal duties of care and loyalty to the people that data describes.

“I think we just don’t know which one of these different models is going to be most appropriate for the emerging data age,” said Acemoglu.

The fiduciary model has upsides, according to Acemoglu, but also room for failure. He said Europe’s General Data Protection Regulation (GDPR) had backfired and may have ultimately handed an advantage to large companies that can bear the burden of compliance.

“If we want to have a fighting chance for an alternative, the government may need to invest in another government agency,” he said.

Acemoglu stressed that government should not stand by and treat privacy issues as an afterthought.

“We really need to institute the right sort of regulations and legislation about what rights people have to different types of data that they have created, and whether those rights are going to be exercised individually or collectively,” he said.

The Biden Administration issued an executive order on AI policy last week, which could mark the start of more significant federal AI regulation. Vice President Kamala Harris said during the signing of the executive order that America is a leader on the AI front and can catalyze global action and consensus unlike any other country.

Vallor pointed to the executive order in response to Sen. Butler.

“You see also in the executive order recently released many moves to empower and instruct federal agencies to begin to take action so that we can in a way start by making sure that government uses of AI are appropriately governed, audited, monitored, and that the powers that government has to use AI are used to increase the opportunity and equity and justice in society rather than decrease it,” she said.


Published in conjunction with Tech Policy Press.