Who is Zico Kolter? A professor now leads OpenAI’s new safety panel, which has the authority to pause any AI release deemed unsafe

If you’re concerned about the risks AI could pose to humanity, one professor at Carnegie Mellon University now holds a crucial role in the tech world.

Zico Kolter chairs a four-person panel at OpenAI with the power to block the release of AI systems deemed unsafe. The concerns range from extremely powerful technology that could be misused to build weapons of mass destruction, to poorly designed chatbots that could harm people's mental health.

“Very much we’re not just talking about existential concerns here,” Kolter told The Associated Press. “We’re talking about the full spectrum of safety and security issues that arise when dealing with widely used AI systems.”

Kolter was appointed head of OpenAI’s Safety and Security Committee over a year ago. His role gained added importance recently when regulators in California and Delaware made his oversight a key condition for allowing OpenAI to create a new corporate structure designed to facilitate fundraising and profit-making.
