Alibaba Cloud has announced the open source availability of four AI models.
The cloud computing provider aims to provide creators with a range of video content generation tools, in its latest contribution to open source development.

The four models of its 14-billion (14B)-parameter Wan 2.1 series, T2V-14B, T2V-1.3B, I2V-14B-720P and I2V-14B-480P, enable video generation from text and image inputs, the company said.
The Wan 2.1 series is the first model to support text effects in Chinese and English, said Alibaba. With an overall score of 86.22 per cent, the model is at the top of the VBench leaderboard, a benchmark suite for video generative models, leading in key dimensions such as dynamic degree, spatial relationships, colour and multi-object interactions.
T2V-14B enables the creation of high-quality visuals with substantial motion dynamics, while T2V-1.3B balances quality against available compute, for example allowing a five-second video at 480p resolution to be generated on a personal laptop in around four minutes.
The I2V-14B-720P and I2V-14B-480P models provide image-to-video in addition to text-to-video generation, requiring only a single image input and a text description to generate dynamic video content.
Alibaba first released its Qwen 7B open source AI model in 2023, and its models have consistently topped the Hugging Face Open LLM leaderboards since, it said. More than 100,000 Qwen derivative models have been developed on Hugging Face, making it one of the largest AI model families in the world.