The information provided herein is generated by experimental artificial intelligence and is for informational purposes only.
This summary text is fully AI-generated and may therefore contain errors or be incomplete.

Social media analytics company Graphika has reported an alarming rise in “AI undressing”: the use of artificial intelligence tools to remove clothing from images without the consent of the individuals depicted. Graphika’s research found a sharp increase in comments and posts on platforms such as Reddit and X containing referral links to websites and Telegram channels offering synthetic non-consensual intimate imagery (NCII) services.

These services use AI to generate explicit content at scale, making production easier and cheaper for the providers. Graphika warns that this trend fuels the creation of fake explicit imagery and can contribute to targeted harassment, sextortion, and the production of child sexual abuse material (CSAM).

AI has also been used to create video deepfakes using the likenesses of celebrities, raising concerns about the authenticity of media content. The Internet Watch Foundation (IWF) has discovered a significant number of AI-generated child abuse images on dark web forums and cautions that such material could overwhelm the internet.

The United Nations has labeled AI-generated media as a “serious and urgent” threat to information integrity, particularly on social media platforms. In response to these concerns, the European Parliament and Council have recently agreed on rules governing the use of AI in the European Union.
