Runway Research announced Gen-1 on February 6: a new AI model that uses language and images to generate new videos out of existing ones. With Gen-1, users can transfer the style of any image or text prompt to every frame of a video.

Users can also turn mockups, such as a stack of books arranged on a desk, into fully stylized and animated renders. They can mask subjects in their videos to modify them with simple text prompts, or turn untextured renders into realistic outputs by applying an input image or prompt. Finally, users can unleash the full power of Gen-1 by customizing the model for higher-fidelity results.
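To make the video-to-video idea concrete, here is a minimal conceptual sketch of applying one consistent style to every frame of an existing video. The `stylize()` function below is a toy stand-in (a simple per-channel color-statistics transfer), not Runway's Gen-1 model or API, and the video and style image are fabricated arrays for illustration only.

```python
import numpy as np


def stylize(frame: np.ndarray, style: np.ndarray) -> np.ndarray:
    """Toy stand-in for a generative model: match the frame's
    per-channel mean and standard deviation to the style image."""
    out = frame.astype(np.float64)
    for c in range(3):
        f_mean, f_std = out[..., c].mean(), out[..., c].std() + 1e-6
        s_mean, s_std = style[..., c].mean(), style[..., c].std()
        out[..., c] = (out[..., c] - f_mean) / f_std * s_std + s_mean
    return np.clip(out, 0, 255).astype(np.uint8)


# Fabricated inputs: a 10-frame "video" and a reference style image.
rng = np.random.default_rng(0)
video = rng.integers(0, 256, size=(10, 64, 64, 3), dtype=np.uint8)
style_image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

# The core loop: the same style is applied to every frame, which is
# what keeps the output video visually consistent from frame to frame.
stylized_video = np.stack([stylize(f, style_image) for f in video])
print(stylized_video.shape)  # (10, 64, 64, 3)
```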

Runway Research made its announcement of Gen-1 via Twitter.

Runway is a research company dedicated to building the multimodal AI systems that will enable new forms of creativity. Their mission is to be at the forefront of these developments, “ensuring the future of creativity is accessible, controllable and empowering for all.” According to Runway, Gen-1 represents another pivotal step towards the future of storytelling.

In less than 24 hours, Runway Research’s tweeted announcement garnered 1.6 million views, with a surge of interest in just how the breakthrough technology works. With Runway Research at the forefront of this AI innovation, it won’t be long before competitors jump on the bandwagon.
