Runway follows Thorn's "Safety by Design for Generative AI" principles to protect children from AI misuse across the entire pipeline. Starting at model development, hash matching, child safety classifiers, and LLM-based moderation ensure the training data contains no sexual content involving minors, and red-team testing identifies vulnerabilities. After product deployment, sexual content involving children is explicitly prohibited: a multi-layer detection system scans user content, all flagged content is manually reviewed, and confirmed material is reported to the U.S. National Center for Missing and Exploited Children (516 reports submitted in 2025). The company also implements C2PA provenance signals to trace generated content and collaborates continuously with industry organizations to counter evolving threats.

Original article


Our Approach to Child Safety

May 8, 2026

by Conner McDowell


Generative AI is changing what's possible in creative tools, and with that comes a responsibility to ensure those tools can't be used against the people most vulnerable to harm. Protecting children from sexual exploitation is one of our deepest held commitments as a company.

In line with our general approach to safety, we integrate child safety considerations at every level, from model development to product launches to end user generations. This approach closely aligns with Thorn's Safety by Design for Generative AI principles, which set an industry standard for how generative AI developers can guard against the creation and spread of child sexual abuse material (CSAM), including AI-generated CSAM, and other sexual harms against children. Below is a summary of how those principles show up across our products and processes.

1. Develop: Building Models That Proactively Address Risk

Safeguarding Training Data

Safety starts well before a user ever touches our product. We take deliberate steps during model development to reduce the risk that our models can be used to generate CSAM or other sexual content involving minors. We integrate hash matching, child safety classifiers and LLM-based moderation to ensure we do not train our models on sexual content involving minors or adults.
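As a rough illustration of how such checks can be layered, the sketch below filters candidate training samples through an exact-hash blocklist and then a classifier. Every name and value here is a hypothetical stand-in, not Runway's actual stack: real pipelines use perceptual hashing against vetted known-content hash databases rather than MD5 of raw bytes.

```python
import hashlib

# Hypothetical blocklist; the entry below is just md5(b"hello"), a stand-in
# for hashes that would come from a vetted known-content database.
KNOWN_BAD_HASHES = {"5d41402abc4b2a76b9719d911017c592"}

def content_hash(data: bytes) -> str:
    """Exact-match hash; production systems use perceptual hashes
    that survive resizing and re-encoding."""
    return hashlib.md5(data).hexdigest()

def classifier_score(data: bytes) -> float:
    """Stand-in for a trained child-safety classifier returning risk in [0, 1]."""
    return 0.0  # placeholder: a real model would score the sample here

def admit_to_training_set(data: bytes, threshold: float = 0.5) -> bool:
    """Layer the checks: hash blocklist first, then classifier."""
    if content_hash(data) in KNOWN_BAD_HASHES:
        return False  # layer 1: matches known prohibited content
    if classifier_score(data) >= threshold:
        return False  # layer 2: classifier flags a likely violation
    return True  # passed both layers; LLM-based review could follow
```

The point of the layering is that cheap exact matching removes known material first, so the more expensive classifier and LLM stages only see content that passed the blocklist.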

Red Teaming and Evaluation

Before a model ships, we conduct thorough testing, to the extent legally permissible, to identify and resolve potential vulnerabilities. We conduct such testing across text, image, video and audio to mitigate the possibility that CSAM or other sexual content involving minors could be produced by a user. This continuous testing ensures that our mitigations keep pace as new models and techniques emerge and threat vectors evolve.

2. Deploy: Safeguards, Policies and Enforcement

Clearly Defining Usage Restrictions

We ensure that our strict boundary against any sexual content involving children is clear to all of our users. Our Usage Policy prohibits all "content that depicts, facilitates or promotes child sexual abuse or the sexualization of children," and clearly outlines that any violation will result in a permanent account ban, and, where appropriate, reporting to the National Center for Missing and Exploited Children (NCMEC).

Detecting CSAM & Sexual Content Involving Children

Once a model is deployed, we rely on multiple layers of detection to catch potentially harmful content and attempts to create such content. This includes scanning all user-provided content against databases of known CSAM hashes and running a CSAM-specific classifier to detect previously unknown CSAM. We also apply AI-based classifiers to identify attempts to create CSAM or other sexual content involving children.
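The multi-layer structure described above can be sketched as a generic pipeline that runs every detector and queues any hit for human review. The detector names and interfaces below are hypothetical illustrations, not Runway's actual systems:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ModerationResult:
    flagged: bool
    reasons: list[str] = field(default_factory=list)

def run_detection_layers(
    content: bytes,
    detectors: list[tuple[str, Callable[[bytes], bool]]],
) -> ModerationResult:
    """Run every detector rather than short-circuiting on the first hit,
    so the human review queue records every reason content was flagged."""
    reasons = [name for name, detect in detectors if detect(content)]
    return ModerationResult(flagged=bool(reasons), reasons=reasons)

# Illustrative layers: a known-hash lookup and a classifier, both stubbed.
layers = [
    ("known_hash_match", lambda c: False),        # stand-in hash lookup
    ("csam_classifier", lambda c: b"risk" in c),  # stand-in classifier
]
```

Running all layers independently is a deliberate choice: a hash match and a classifier hit on the same item carry different evidentiary weight for the reviewers who handle the queue.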

Reporting to NCMEC

We manually review all flagged content, and report all confirmed CSAM content to NCMEC. In 2025, we submitted a total of 516 reports to NCMEC's CyberTipline.

Deploying Content Provenance

We implement C2PA provenance signals so that content generated with our tools can be traced back to its origin. Provenance isn't a complete solution to misuse, but it gives platforms, researchers and law enforcement a meaningful signal for identifying AI-generated content.
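As context for what a provenance signal carries, the dictionary below mirrors the shape of a C2PA manifest declaring AI-generated origin. The field labels (`claim_generator`, the `c2pa.actions` assertion, the IPTC `trainedAlgorithmicMedia` source type) come from the public C2PA specification; the tool name and values are illustrative, not Runway's actual manifest.

```python
import json

# Illustrative C2PA-style manifest for a generated asset. Real manifests are
# cryptographically signed and embedded into the media file by a C2PA SDK.
manifest = {
    "claim_generator": "ExampleVideoTool/1.0",  # hypothetical generator name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC digital source type marking AI-generated media
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia"
                        ),
                    }
                ]
            },
        }
    ],
}

print(json.dumps(manifest, indent=2))
```

Because the manifest is signed and bound to the asset, a platform or investigator that extracts it can verify both that the content is AI-generated and which tool produced it.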

3. Maintain: Monitoring, Improvement and Collaboration

Monitoring and Continuous Iteration

The techniques used to create and distribute CSAM evolve quickly. We are continuously testing and iterating on our models and safeguards to ensure they keep pace, especially as we move toward more content generated in real time.

Collaborating with Industry and Civil Society

We also recognize that no single company can solve this problem alone. We are committed to engaging with organizations like Thorn and the Tech Coalition, and with peers across the industry, because the best defenses come from shared knowledge. We'll keep investing in these partnerships, and we'll keep updating our approach as the threat landscape changes and the tools to counter it evolve.

You can read more about our broader safety work on our Safety page. If you encounter content that you believe was created with our tools and raises concerns, please report it here.

