Each OpenAI release and each advance in its capabilities evokes a mix of awe and anxiety. That's evident with Sora, whose stunningly realistic AI-generated video clips have gone viral online and unsettled an industry that relies on original footage. But the company is once again being secretive about an AI system that could be used to spread misinformation. From the report: As usual, OpenAI won't talk about the all-important ingredients that went into this new tool, even as it releases it to an array of people to test before going public. Its approach should be the other way around. OpenAI needs to be more open about the data used to train Sora, and more secretive about the tool itself, given its potential to disrupt elections. OpenAI CEO Sam Altman said that red-teaming of Sora would start on Thursday, the day the tool was announced and shared with beta testers. Red-teaming is when specialists test an AI model's security by pretending to be bad actors trying to hack or exploit it; the goal is to make sure the same can't happen in the real world. When I asked OpenAI how long it would take to run these tests on Sora, a spokesperson said there was no set length. "We will take our time to assess critical areas for harm or risk," she added.
The company spent about six months testing GPT-4, its latest language model, before releasing it last year. If Sora's checks take the same amount of time, that means it could become available to the public in August, about three months before the US presidential election. OpenAI should seriously consider waiting to release it until after voters go to the polls. [...] OpenAI, meanwhile, has been frustratingly secretive about the sources it used to create Sora. When I asked the company what datasets were used to train the model, a spokesperson said the training data came from "our licensed content as well as publicly available content." She did not elaborate further.