Sora is now available to red teamers to assess harms and risks in critical areas.
Video credit: OpenAI
“We’re also giving access to a number of visual artists, designers, and filmmakers to get feedback on how to evolve the model to best serve creative professionals.
“We are collaborating with people outside of OpenAI to get feedback and sharing our research progress early to give the public a sense of what AI capabilities are on the horizon.”
The company says Sora can generate complex scenes with multiple characters, specific types of movement, and precise details of subjects and backgrounds. The model understands not only what the user asks for in a prompt, but also how those things exist in the physical world.
“The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vivid emotions. Within a single generated video, Sora can also create multiple shots that accurately preserve the characters and visual style,” says OpenAI.
Video credit: OpenAI
Safety
“We plan to take several important safety steps before making Sora available in OpenAI’s products. We are working with experts in areas such as hateful content and bias.”
OpenAI said it is also building tools to help detect misleading content, including a detection classifier that can tell when a video was generated by Sora.
“We continue to engage policymakers, educators, and artists around the world to understand their concerns and identify positive use cases for this new technology. Despite extensive research and testing, we cannot predict all the beneficial ways people will use our technology, nor all the ways people will abuse it. That’s why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time.”