Apple researchers have announced a new AI tool called “Keyframer” (PDF) that harnesses the power of large language models (LLMs) to animate static images through natural language prompts. From the report: The novel application, detailed in a new research paper published on arxiv.org, represents a major leap forward in the integration of artificial intelligence into the creative process, and it arrives amid Apple's push into creative hardware like the iPad Pro and Vision Pro. The paper, titled “Keyframer: Empowering Animation Design using Large Language Models,” explores uncharted territory in applying LLMs to the animation industry, which presents unique challenges such as how to effectively describe motion in natural language.
Imagine this: you're an animator with an idea you want to explore. You have a still image and a story to tell, but the thought of spending countless hours hunched over an iPad to bring it to life makes you cringe. Enter Keyframer. Write just a few sentences, and the image starts dancing across your screen as if the tool had read your mind. It isn't mind reading, of course; it's a large language model. Keyframer uses an LLM (GPT-4, in the researchers' study) to generate CSS animation code from a static SVG image and a prompt. “Large language models (LLMs) have the potential to impact a wide range of creative domains, but the application of LLMs to animation is underexplored and presents novel challenges such as how users might effectively describe motion in natural language,” the researchers explain.
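To make that pipeline concrete, here is a minimal, hand-written sketch of the kind of output the paper describes: a static SVG paired with CSS keyframes of the sort an LLM might return for a prompt like “make the sun pulse gently.” The shape, selector name, prompt, and timing values are all illustrative assumptions, not actual Keyframer output.

```html
<!-- Static input: a simple SVG "sun". In Keyframer's workflow, the user
     supplies an SVG like this plus a natural-language prompt; the values
     below are hypothetical, not taken from the paper. -->
<svg width="200" height="200" viewBox="0 0 200 200">
  <circle id="sun" cx="100" cy="100" r="40" fill="orange" />
</svg>

<style>
  /* The kind of CSS an LLM could generate for "make the sun pulse gently":
     scale the circle up and back down, looping forever. */
  @keyframes pulse {
    0%   { transform: scale(1); }
    50%  { transform: scale(1.2); }
    100% { transform: scale(1); }
  }
  #sun {
    transform-origin: 100px 100px; /* pivot around the circle's center */
    animation: pulse 2s ease-in-out infinite;
  }
</style>
```

Because the output is ordinary CSS applied to ordinary SVG, a user can iterate either by issuing a follow-up prompt (say, “make it pulse faster”) or by editing values like the 2s duration directly, which is the kind of refinement loop the paper emphasizes.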