Senate Tackles AI Oversight After Deepfake Concerns
        
        
        
        
In his opening remarks at a U.S. Senate hearing focused on the oversight of AI, Sen. Richard Blumenthal (D-Conn.) included comments that had been authored by ChatGPT and were delivered in a voice resembling his own, thanks to voice-cloning software.
The fabricated comments included the following declaration: "We've frequently witnessed the consequences of technology advancing faster than regulation can keep pace. This has led to the unrestrained exploitation of personal data, the dissemination of misinformation, and the exacerbation of societal disparities. We've observed how algorithmic biases can perpetuate discrimination and bias, and how the absence of transparency can erode public trust. This is not the future we aspire to create."
Blumenthal convened the hearing, titled "Oversight of AI: Rules for Artificial Intelligence," to explore the need for regulations to combat the misuse of technologies such as audio, image, and video deepfakes.
Blumenthal, chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, led the hearing, which featured several witnesses, including Sam Altman, CEO of OpenAI; Christina Montgomery, chief privacy and trust officer of IBM; and Gary Marcus, professor emeritus of psychology and neural science at NYU.
Altman, perhaps the single person most responsible for fears of runaway advanced AI causing a multitude of potential problems (the extermination of humanity being on the high end), made news by actually advocating for licensing and testing requirements for the development and release of AI models above a certain threshold of capabilities.
He also said the U.S. should require companies to disclose the data used to train their AI models, something OpenAI stopped doing as it rushed, along with partner Microsoft, to monetize generative AI tech and gain a competitive edge on rivals like cloud giant Google.
According to various news reports, some Altman quotes included:
  - "I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening."
  - "I do think some regulation would be quite wise on this topic. People need to know if they're talking to an AI, if content they're looking at might be generated or might not."
  - "We might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities."
  - "When Photoshop came onto the scene a long time ago, for a while people were really quite fooled by Photoshopped images and then pretty quickly developed an understanding that images were Photoshopped. This will be like that, but on steroids."
  - "As this technology advances, we understand that people are anxious about how it could change the way we live. We are too."
 
In conclusion, he said:
"This is a remarkable time to be working on AI technology. Six months ago, no one had heard of ChatGPT. Now, ChatGPT is a household name, and people are benefiting from it in important ways.
"We also understand that people are rightly anxious about AI technology. We take the risks of this technology very seriously and will continue to do so in the future. We believe that government and industry together can manage the risks so that we can all enjoy the tremendous potential."
Montgomery, meanwhile, emphasized the importance of trust and transparency in AI development and deployment, highlighting IBM's principles of responsible stewardship, data rights, and accountability. She said in conclusion:
"Mr. Chairman, and members of the subcommittee, the era of AI cannot be another era of move fast and break things. But neither do we need a six-month pause -- these systems are within our control today, as are the solutions. What we need at this pivotal moment is clear, reasonable policy and sound guardrails. These guardrails should be matched with meaningful steps by the business community to do their part. This should be an issue where Congress and the business community work together to get this right for the American people. It's what they expect, and what they deserve."
Marcus, in his testimony, didn't hold back, even taking shots at OpenAI with the company's CEO, Altman, sitting right there:
"The big tech companies' preferred plan boils down to 'trust us,'" he reportedly said. "Why should we? The sums of money at stake are mind-boggling. And missions drift. OpenAI's original mission statement proclaimed 'Our goal is to advance [AI] in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.'
"Seven years later, they are largely beholden to Microsoft, embroiled in part in an epic battle of search engines that routinely make things up -- forcing Alphabet to rush out products and deemphasize safety. Humanity has taken a back seat.
"OpenAI has also said, and I agree, 'it's important that efforts like ours submit to independent audits before releasing new systems,' but to my knowledge they have not yet submitted to such audits. They have also said 'at some point, it may be important to get independent review before starting to train future systems.' But again, they have not submitted to any such advance reviews so far."
About the Author

David Ramel is an editor and writer at Converge 360.