Instagram is reportedly working on a system that can flag photos produced or modified by artificial intelligence.
In recent months, a buzz has been sparked by images of the pope in a flashy white Balenciaga quilted jacket and, more seriously, by pictures of Trump's supposed arrest, in which the former US president appeared to be manhandled by police. These images were entirely fabricated, and they illustrate the potential of artificial intelligence for good but, above all, for harm.
This is why Meta is reportedly preparing measures to curb the spread of this potentially very dangerous kind of fake news: software that detects AI-generated or manipulated content, so that its social platforms can then flag it to users.
The news, which Meta has not yet officially confirmed, was reported by Alessandro Paluzzi, who has repeatedly leaked details of upcoming Instagram features. On his Twitter profile, he posted an Instagram screenshot showing a label indicating that the content in question was created using artificial intelligence.
#Instagram is working to label the contents created or modified by #AI in order to be identified more easily 👀 pic.twitter.com/bHvvYuDpQr
— Alessandro Paluzzi (@alex193a) July 30, 2023
The goal, then, is to warn users and to provide some degree of transparency and verifiability of information, something social networks struggled with even before the boom of chatbots and artificial intelligence. Along the same lines, major tech companies recently partnered with the US government by signing the "Ensuring Safe, Secure, and Trustworthy AI" commitments.