From: redpointai

Trust and safety are critical for businesses, particularly those serving large enterprise customers [00:00:00]. Ensuring trust and data security on AI platforms centers on two areas: how avatars are created and what video content is produced [00:00:09].

Safeguarding Avatar Creation

To prevent the unauthorized creation of digital avatars, a strict consent process is enforced: every avatar generated on the platform requires a recorded video consent [00:00:17]. This process is reinforced by AI that matches the consent video against the submitted footage to confirm both show the same person [00:00:23], making it nearly impossible to create an avatar without the individual’s explicit consent [00:00:29].
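The consent-matching step described above can be sketched as an identity check over face embeddings. This is a hypothetical illustration, not the platform's actual system: the embedding model, the threshold value, and the vectors below are all assumptions, and a production system would use a trained face encoder rather than raw lists of floats.

```python
import math

# Assumed operating point for "same person" -- illustrative only.
SAME_PERSON_THRESHOLD = 0.8

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two fixed-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_same_person(consent_embedding: list[float],
                   footage_embedding: list[float]) -> bool:
    """Accept the avatar request only if the consent video and the
    submitted footage embed to sufficiently similar face vectors."""
    return cosine_similarity(consent_embedding, footage_embedding) >= SAME_PERSON_THRESHOLD

# Illustrative embeddings (hypothetical): a near-duplicate passes,
# a clearly different face fails.
consent = [0.1, 0.9, 0.3]
same_face = [0.12, 0.88, 0.31]
different_face = [0.9, 0.1, -0.4]
```

In practice the threshold would be tuned on labeled verification pairs to balance false accepts (impersonation risk) against false rejects (legitimate users blocked).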

Content Moderation

Beyond AI-driven verification, a dedicated moderation team conducts human reviews of footage to ensure all content meets expected standards [00:00:35]. Platforms thus use a hybrid solution, combining AI models with human review, to ensure content adheres to policy [00:00:56].
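One common way to structure such a hybrid pipeline is confidence-based routing: the model's violation score decides clear-cut cases automatically, and ambiguous cases go to the human review queue. This is a minimal sketch under assumed thresholds, not the platform's actual policy engine.

```python
# Assumed thresholds -- illustrative values, not the platform's real ones.
AUTO_REJECT = 0.9   # model is confident the content violates policy
AUTO_APPROVE = 0.1  # model is confident the content is safe

def route(violation_score: float) -> str:
    """Route a piece of content based on a model's violation score in [0, 1].

    High-confidence decisions are automated; everything in between is
    escalated to the human moderation team.
    """
    if violation_score >= AUTO_REJECT:
        return "reject"
    if violation_score <= AUTO_APPROVE:
        return "approve"
    return "human_review"
```

Narrowing the auto-decision bands shifts more work to human reviewers but reduces the chance of an automated mistake, which matters for enterprise customers with low tolerance for moderation errors.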

The platform’s moderation policy strictly prohibits several categories of content [00:00:44].

Together, these measures help maintain AI governance and data security in enterprise environments and address broader concerns around AI safety and regulation.