On Thursday, OpenAI announced Sora, a text-to-video AI model that can generate 60-second-long photorealistic HD video from written descriptions. While it's only a research preview that we have not tested, it reportedly creates synthetic video (but not audio yet) at a fidelity and consistency greater than any text-to-video model available at the moment. It's also freaking people out.
"It was nice knowing you all. Please tell your grandchildren about my videos and the lengths we went to to actually record them," wrote Wall Street Journal tech reporter Joanna Stern on X.
"This could be the 'holy shit' moment of AI," wrote Tom Warren of The Verge.
source https://arstechnica.com/?p=2003861