
Sand AI's New Video Model Sparks Censorship Concerns


China-based startup Sand AI has launched a powerful new video-generating model, Magi-1, which has quickly attracted attention for its high-quality output and support from influential figures like Kai-Fu Lee, founding director of Microsoft Research Asia. However, despite its technological promise, the model is also facing criticism for its aggressive censorship of politically sensitive content.

High-Quality Output, High Hardware Requirements

Magi-1 is designed to create videos by autoregressively predicting sequences of frames, allowing it to simulate motion and physical interactions with impressive realism. Sand AI claims that Magi-1 produces videos with more accurate physics than rival open-source models.

However, the model’s scale poses accessibility challenges. At 24 billion parameters, Magi-1 requires between four and eight Nvidia H100 GPUs to run effectively—hardware that is far out of reach for most individual users. As a result, Sand AI’s hosted platform is the primary way to experience the model in action.

To generate a video, users must upload a “prompt” image. But not all images are welcome.

Sand AI's Strict Filtering and Political Censorship

Testing revealed that Sand AI’s platform blocks images tied to politically sensitive topics in China. These include photos of Chinese President Xi Jinping, the Tank Man of Tiananmen Square, the Taiwanese flag, and symbols associated with Hong Kong independence. Attempts to upload these images produce error messages even when the image files are renamed, indicating that the filtering operates on the image content itself rather than on filenames.

While other Chinese platforms like MiniMax’s Hailuo AI also block similar images, Sand AI’s filters appear to be more stringent. For example, Hailuo allows images of Tiananmen Square, but Sand AI does not.

This level of censorship aligns with China’s 2023 law requiring AI models to avoid generating content that could “damage national unity” or “disrupt social harmony.” As a result, Chinese AI companies often bake political filters into their systems to stay compliant, either at the input level or through model fine-tuning.

Interestingly, while political speech is tightly controlled, Chinese video AI models have been criticized for having looser restrictions around adult content. A report by 404 Media revealed that several Chinese-made video generators lack safeguards against generating nonconsensual nudity, a stark contrast to U.S.-based models, which generally block such content by default.

The launch of Magi-1 highlights both the rapid advancements in generative AI and the trade-offs that arise in politically controlled environments. While Sand AI’s tech may push video generation forward, its censorship practices also raise important concerns about freedom of expression and the global divergence in AI governance.
