With the rise of Generative Artificial Intelligence (AI) technology, creating fake content has become more accessible than ever. And with that, deepfakes have spread beyond the realm of sketchy websites: content created with deepfake technology can now be found on every social media platform.
So much so that organizations have started to call on governments and the European Union to take action, arguing that we cannot rely solely on Big Tech to protect us from Bad AI. Legal measures, however, often lag far behind the rapid development of new technologies.
But what if we think outside the box and use existing legal frameworks to solve the problem? This is where eIDAS 2.0 comes in. As digital identity experts such as Koen de Jong have already noted, the eIDAS 2.0 framework gives us the opportunity to be creative. And this applies to the battle against Bad AI as well: what if we extend the use case of digital identity to not only prove who we are, but also to prove what we post online?
Good AI, Bad AI
So what are we talking about when we talk about Gen AI, Good AI, and Bad AI – and what are the dangers exactly?
Generative AI refers to AI systems that can create content such as text, images, audio, and video. Examples include OpenAI’s ChatGPT for text generation and DALL-E for image generation.
‘Good AI’ refers to AI applications that are designed and implemented to benefit society, enhance human capabilities, and address social, economic, and environmental challenges. For example, think about the use of AI in healthcare for diagnostics and treatment recommendations, or the use of AI for environmental monitoring and climate change mitigation.
‘Bad AI’ refers to AI applications that are designed with malicious intent, go against human values, or otherwise cause harm, for example through cyber warfare, surveillance, and autonomous weapons.
Gen AI risks
There’s no stopping Gen AI – and it’s undeniable that Gen AI will have huge benefits for society. The fear of AI is therefore mostly unfounded, experts say. It is, however, important to be aware of the risks, your own vulnerabilities on the internet, and the steps you can take to prevent falling into Bad AI traps.
The main risk is that of misinformation and deepfakes. Generative AI can create realistic fake content that can be used to spread misinformation, fake news, or propaganda.
At the moment, over 95 percent of deepfakes are pornographic. For example, earlier this year, a sexually explicit deepfake image of Taylor Swift was circulated online. It was viewed a reported 47 million times before being taken down. And deepfakes are accessible to everyone: google ‘deepfakes’ and the first hit brings you to an online deepfake software platform that lets you create your own content.
Additionally, the ability to generate content indistinguishable from human-created works raises questions and ethical concerns about authenticity, authorship, and originality. This includes intellectual property issues as well: Generated content might infringe on existing copyrights or create unauthorized reproductions of protected material.
Another huge risk to be aware of is that of bias and discrimination. AI models can perpetuate or even exacerbate biases present in the data they are trained on, leading to unfair treatment or representation of certain groups.
How to spot a deepfake on social media
You won’t be the first person to google it. According to security company McAfee, there are ways to call out a liar. In its roadmap, McAfee tells social media users to slow down, validate the account, seek another source of information, or even call in the help of a professional fact-checker.
In addition, it’s important to look for telltale signs: typos, repetition, and a lack of style in AI-generated text; inconsistencies that appear when you zoom in on AI-generated images; and the eyes of the speaker in an AI-generated video.
Yet, Gen AI is continually evolving. Can we keep up with its changes and improvements, or is fake content becoming too real?
Digital identity: a verification tool
eIDAS 2.0 has created the framework for a digital identity wallet. This wallet serves as a secure repository for your personal information and for verifiable credentials issued by trusted entities.
Similar to the Apple wallet you might have on your phone, the digital identity wallet can include your driver’s license, your diploma, or your medical prescriptions. Through the use of decoupled technologies and advanced encryption techniques, these wallets ensure the integrity and confidentiality of user data.
With your digital identity wallet, you will be able to prove that you’re legally allowed to drive a car, or that you’re legally able to buy alcohol. But at the same time, your digital identity will allow you to securely sign legal documents, for example when opening a bank account or signing mortgage documents.
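To make the idea of verifiable credentials a little more concrete, here is a minimal, illustrative sketch of what a driving-licence credential held in such a wallet could roughly look like. The structure loosely follows the W3C Verifiable Credentials data model; the issuer, identifiers, and values are made up for illustration, and the actual EUDI Wallet credential formats differ in detail.

```python
# Illustrative sketch only: a credential as it might be stored in a digital
# identity wallet. All names and values below are hypothetical examples,
# not the real eIDAS 2.0 / EUDI Wallet specification.
driving_licence_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "DrivingLicenceCredential"],
    "issuer": "did:example:national-driving-authority",   # trusted issuing entity
    "issuanceDate": "2024-05-01T09:00:00Z",
    "credentialSubject": {
        "id": "did:example:wallet-holder-1234",            # the wallet holder
        "licenceCategories": ["B"],                         # allowed to drive a car
        "dateOfBirth": "1990-03-15",                        # basis for an "old enough" proof
    },
    # A cryptographic proof added by the issuer makes the credential verifiable
    # without the verifier needing to contact the issuer or see any extra data.
    "proof": {
        "type": "Ed25519Signature2020",
        "proofValue": "...",                                 # placeholder, not a real signature
    },
}
```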
If we can securely sign legal documents, why not extend this to signing social media posts?
Digital identity verification on social media: a solution
When we talk about identity verification on social media now, we think of the X.com blue checkmark, which anyone can receive as long as they pay for it, or LinkedIn’s gray checkmark, which forces you to scan your passport without knowing where your data might end up.
The eIDAS 2.0 solution brings us a safer, more secure, and more reliable alternative.
Whether we want to verify entire social media profiles or merely important social media posts, we can do this in a secure manner for both sides: the poster will be able to prove their identity without losing personal data to the social media platforms, and readers will be able to rely on proof, rather than personal feelings or intuition, to know whether a post is real or not.
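In principle, this works just like signing any other document: the poster signs the content with a key held in their wallet, and anyone can verify the signature against the public key bound to their verified identity. The sketch below shows that idea with a plain Ed25519 key pair using the Python cryptography library; the keys, the post, and the flow are assumptions for illustration, since a real EUDI Wallet flow would use wallet-held keys, qualified certificates, and the platform’s own APIs.

```python
# Illustrative sketch only: how signing and verifying a social media post could
# work in principle. Keys and the "post" below are hypothetical examples.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# 1. The poster's wallet holds a private key; the matching public key is bound
#    to their verified identity by a trusted issuer.
wallet_key = Ed25519PrivateKey.generate()
public_key = wallet_key.public_key()

# 2. The poster signs the exact content they publish.
post = b"Breaking: official statement from our organisation ..."
signature = wallet_key.sign(post)

# 3. The platform, or a reader's client, verifies the signature against the
#    poster's published public key. Verification fails if the content was
#    altered or the signature was produced by someone else.
try:
    public_key.verify(signature, post)
    print("Post is authentic: signed by the verified account holder.")
except InvalidSignature:
    print("Warning: signature does not match; treat this post as unverified.")
```

The reader never sees the poster’s personal data, only the proof that a verified identity signed this exact content.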
Ubiqu’s role
Want to know more about eIDAS 2.0? Read our article on Understanding eIDAS 2.0.
Ubiqu’s solutions are designed to be fully compatible with the upcoming eIDAS 2.0 standards and the European Digital Identity Wallet (EUDI Wallet) specifications. We have the certification needed to ensure compliance with all regulatory and technical frameworks.
For organizations looking to navigate the eIDAS 2.0 landscape successfully, Ubiqu organizes several webinars to discuss how government institutions can meet the specific compliance requirements.
Want to share your thoughts? Join us for an in-depth discussion about eIDAS 2.0 and its implications for governments. To register, click here.