Data becomes a strategic asset in the AI era

Content creators may feel the shift most profoundly, and play a bigger role, as data becomes a strategic asset in the AI era


By Lee Shih Ta

As the global AI race heats up, it’s becoming clear that data doesn’t lose its value once large models reach the reasoning stage. On the contrary, it becomes even more critical because such models need constantly updated knowledge. The so-called “last mile” of high-quality datasets often determines a model’s ultimate performance.

That is likely why Facebook parent Meta Platforms (META.US) made a $14.3 billion strategic investment in Scale AI, a company focused on data labeling and cleaning for AI training.

Scale AI provides structured, high-quality datasets to OpenAI, Meta, Google and other tech giants by combining large-scale human labor with automated pipelines. Its data labeling process involves tagging images, text or audio with meaningful metadata — such as identifying pedestrians in a photo or marking the main point of an article. Data cleaning eliminates errors, duplicates and irrelevant material to ensure consistency and accuracy.
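For readers who want a concrete sense of what cleaning and labeling involve, the short sketch below walks through the two steps on a handful of toy text records. The Sample fields, the deduplication rule and the topic tags are illustrative assumptions made for this example, not Scale AI’s actual pipeline.

```python
# A minimal sketch of data cleaning and labeling on toy text records.
# The Sample fields, cleaning rules and topic tags are illustrative assumptions,
# not Scale AI's actual pipeline.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Sample:
    text: str
    label: Optional[str] = None  # metadata attached during the labeling step


def clean(samples: list[Sample]) -> list[Sample]:
    """Drop blank records and exact duplicates to keep the dataset consistent."""
    seen, cleaned = set(), []
    for s in samples:
        normalized = " ".join(s.text.split()).lower()
        if not normalized or normalized in seen:
            continue  # skip empty or duplicate records
        seen.add(normalized)
        cleaned.append(s)
    return cleaned


def label(samples: list[Sample]) -> list[Sample]:
    """Attach a simple topic tag; in production this is done by trained annotators."""
    for s in samples:
        s.label = "finance" if "profit" in s.text.lower() else "general"
    return samples


raw = [
    Sample("Meta invests in Scale AI"),
    Sample("Meta invests in Scale AI"),  # duplicate, removed by clean()
    Sample("Jewelry retailer reports profit drop"),
]
dataset = label(clean(raw))
print([(s.text, s.label) for s in dataset])
```

Real pipelines add many more passes — language detection, toxicity filtering, near-duplicate detection — but the basic pattern of cleaning before labeling is the same.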

Another example of the growing value of quality data is a recent licensing deal between The New York Times and Amazon (AMZN.US), which allows the newspaper’s fact-checked editorial content to be used to train AI models. The Associated Press has signed a similar agreement with OpenAI.

Though these arrangements are described as content licensing, they reflect a deeper shift: content has become data, and data has become a service. These deals highlight how media organizations are reassessing the value of their content, while AI developers continue to pursue high-quality material with growing urgency.

In contrast, the Chinese-language AI ecosystem faces unique challenges, including a shortage of publicly available data, a lack of large-scale professional annotation and the difficulty of digitizing classical and cultural texts at scale. Such obstacles complicate the development of localized large AI models.

Chinese-language materials are relatively scarce

A white paper published by Alibaba Research Institute notes that English accounts for 59.8% of all crawlable web text, while Chinese represents just 1.3%. Wikipedia, a commonly used open resource, has more than 7 million English articles but only about 1.5 million in Chinese — less than a quarter of the volume.

This imbalance creates a major disadvantage. Without sufficient publicly available Chinese material, Chinese-language large models may fall far behind their English-language counterparts in natural language understanding and text generation — potentially producing culturally mismatched output and a sense that these models have “consumed too much foreign ink.”

Chinese authorities have long recognized this gap and have taken steps to address it. Platforms such as People’s Daily and Xinhua are actively constructing curated, high-quality materials, consisting of vetted news, commentary and policy interpretation, designed to ensure alignment with official values and to support AI safety from a moral and ideological standpoint.

Initiatives like the “Cyber Research Large Language Model” further concentrate on integrating data from legal and policy documents, state media and other publications, reinforcing alignment with Chinese values.

In China, such value alignment has become a basic requirement for any domestic AI system. While China has yet to produce a company of Scale AI’s size, several local firms, including Aishu Technology, Testin, iFlytek (002230.SZ) and Haitai Ruisheng (688787.SH), are building up their capabilities in large-scale data annotation and cleaning. The Shanghai AI Lab is also developing a platform-based material processing system that draws on policy and academic resources, laying the foundation for a “Chinese version of Scale AI.”

According to market research firm IDC, the value of China’s AI training data market was estimated at $260 million in 2023, and is expected to grow to approximately $2.32 billion by 2032, representing a compound annual growth rate of 27.4%.

Ultimately, the performance of any AI model depends on the content it consumes. In the AI era, content creators — especially those in journalism — must recognize that they are no longer merely material providers. They are now an integral part of the data services supply chain.

When news stories, commentary, academic papers and cultural archives are structured, semantically labeled and integrated into AI training pipelines, their value shifts from real-time information to durable data assets. Content creators who proactively organize and annotate their materials, and pursue licensing partnerships with AI developers, may find themselves unlocking new revenue opportunities.
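As a rough illustration of what “structured and semantically labeled” can mean in practice, the hypothetical record below shows how a single news story might be packaged for an AI training pipeline. The field names and values are assumptions made for this example, not any publisher’s or AI lab’s actual schema.

```python
import json

# Hypothetical example of a news story packaged as a licensable data asset.
# Field names and values are assumptions for illustration, not a real publisher's schema.
article_record = {
    "id": "example-2025-0001",
    "type": "news",
    "language": "en",
    "headline": "Data becomes a strategic asset in the AI era",
    "body": "Full fact-checked article text goes here.",
    "topics": ["artificial intelligence", "data licensing"],  # semantic labels
    "entities": ["Meta Platforms", "Scale AI", "The New York Times"],
    "published": "2025-06-30",
    "license": {"use": "model-training", "licensee": "example-ai-lab"},
}

print(json.dumps(article_record, ensure_ascii=False, indent=2))
```

Once content carries this kind of structure and licensing metadata, it can be priced, audited and resold as a data service rather than consumed once as news.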

It’s time for content to be seen not just as narrative, but also as infrastructure.

Lee Shih Ta is an editor at Bamboo Works.

You can contact him at shihtalee@thebambooworks.com

