AI Video Startup Haiper Announces Funding and Plans for AGI

London-based AI video startup Haiper has emerged from stealth mode with $13.8 million in seed funding and a platform that generates up to two seconds of HD video from text prompts or images. Founded by alumni from Google DeepMind, TikTok and various academic research labs, Haiper is built around a bespoke foundation model that aims to serve the needs of the creative community while the company pursues a path to artificial general intelligence (AGI). Haiper is offering a free trial of what is currently a web-based user interface similar to offerings from Runway and Pika.

At launch, Haiper also offers a standard-definition option that generates clips of up to four seconds. With SD, users can also control motion levels. In addition to generative video, the platform has a paint feature that enables modification of colors, textures, backgrounds and elements.

At present, Haiper does not offer a way to extend the duration of generated sequences, as Runway does, but “the company claims it plans to launch the capability soon,” reports VentureBeat.

While the quality of Haiper’s generative imagery has been well received, VentureBeat writes that based on its own early tests, “it still appears to lag behind what OpenAI has to offer with Sora,” the generative video model the San Francisco-based company debuted last month.

But Haiper is ambitious, planning to use its new funding — from Octopus Ventures, one of Europe’s largest venture capital firms — “to scale its infrastructure and improve its product, ultimately building towards an AGI capable of internalizing and reflecting human-like comprehension of the world,” according to VentureBeat.

Haiper CEO Yishu Miao and CTO Ziyu Wang both did stints at DeepMind. Miao was also previously part of TikTok’s global trust and safety team, while Wang “worked as a research scientist for both DeepMind and Google” before incorporating Haiper in 2022, per TechCrunch, which says “the pair has expertise in machine learning and started working on the problem of 3D reconstruction using neural networks,” which led to video generation, “a more fascinating problem.”

Both Miao and Wang earned PhDs in machine learning from Oxford, reports SiliconANGLE, noting the two plan to apply their machine learning expertise to the pursuit of AGI. “Our end goal is to build an AGI with full perceptual abilities, which has boundless potential to assist with creativity” and “enhance human storytelling,” said Miao.
