OpenAI allegedly violated SEC norms by signing agreements with employees that stopped them from reporting any non-compliance to the watchdog.

Some employees of ChatGPT-maker OpenAI have reportedly written to the US Securities and Exchange Commission (SEC) seeking a probe into certain employee agreements, which they describe as restrictive non-disclosure agreements (NDAs). These staffers-turned-whistleblowers allege that the company forced its employees to sign agreements that did not comply with SEC regulations.

“Given the well-documented potential risks posed by the irresponsible deployment of AI, we urge the commissioners to immediately approve an investigation into OpenAI’s prior NDAs, and to review current efforts apparently being undertaken by the company to ensure full compliance with SEC rules,” read the letter, shared with Reuters by the office of Senator Chuck Grassley.

The same letter alleges that OpenAI made employees sign agreements that curbed their federal rights to whistleblower compensation, and it urges the financial watchdog to impose individual penalties for each such agreement signed.

Further, the whistleblowers allege that OpenAI’s agreements restricted employees from making any disclosure to authorities without first checking with management, and that any failure to comply with these agreements would attract penalties for the staffers. The company, according to the letter, also did not create any separate or specific exemptions in its employee non-disparagement clauses for disclosing securities violations to the SEC.

An email sent to OpenAI about the letter went unanswered.

The Senator’s office also cast doubt on the practices at OpenAI. “OpenAI’s policies and practices appear to cast a chilling effect on whistleblowers’ right to speak up and receive due compensation for their protected disclosures,” the Senator was quoted as saying.
Experts in the field of AI have been warning against the use of the technology without proper guidelines and regulations.

In May, more than 150 leading artificial intelligence (AI) researchers, ethicists, and others signed an open letter calling on generative AI (genAI) companies to submit to independent evaluations of their systems to maintain basic protections against the risks of using large-scale AI.

Last April, the who’s who of the technology industry called for AI labs to stop training the most powerful systems for at least six months, citing “profound risks to society and humanity.” That open letter, which now has more than 3,100 signatories including Apple co-founder Steve Wozniak, called out San Francisco-based OpenAI’s recently announced GPT-4 algorithm in particular, saying the company should halt further development until oversight standards were in place.

OpenAI, for its part, formed a safety and security committee led by board members in May as it began work on its next large language models.