The figures speak for themselves. According to a survey carried out by Kantar in 2023, 72% of French people are mindful of the personal information they disclose online, and 92% of respondents say they are concerned about these issues. Data protection has clearly become a major concern.
On the occasion of European Data Protection Day, we decided to explore the possible consequences of the arrival of generative AI, such as ChatGPT and its rivals, for the protection of our privacy online.
Privacy risks?
Activists have already voiced concern about the threats these new tools could pose to the confidentiality of our personal data. In a fascinating article, the Official Monetary and Financial Institutions Forum (OMFIF), a think tank focused on economic policy and public investment, shows that these technologies rely on the collection of vast databases to function.
In theory, these language models only draw on open information, including social media profiles if they are public. However, this could contravene the GDPR, the regulation that protects personal data within the European Union.
Quoted in the article, Chris Elwell-Sutton, a partner in the data, privacy and cybersecurity team at the British law firm TLT, underlines:
There is a common belief that if data is extracted from publicly available sources, it falls outside the scope of the GDPR and similar privacy regimes. This is a mistake, and potentially a very costly one. Once your personal data is stored in a filing system, it benefits from GDPR protection, regardless of its original source.
Aware of these risks, OpenAI, the company behind ChatGPT, has revised its privacy policy. In particular, it now allows users to opt out of having their data used to train its AI. The company has also taken steps to remove personally identifiable information from its training data.
The progress enabled by AI
While these new generative AIs carry risks, they could also be a boon for better protecting our information from malicious actors online, by improving the effectiveness of online security. This is what Sam King, CEO of the security group Veracode, explains to the Financial Times:
Security teams have been using AI for years to detect vulnerabilities and generate threat alerts, but generative AI takes this to another level. Now we can use technology not only to detect problems, but also to solve and ultimately prevent them.
What explains this optimism? Proponents of generative AI point out that chatbots already help human analysts detect potential threats more effectively. Models trained on these specific subjects can also be used to test and secure a company's code.
Such tools can then suggest suitable fixes or mitigation measures against the risks identified. In this scenario, AI helps better protect the personal data of Internet users stored on a given service.
However, some experts want to temper these hopes. They point out to the Financial Times that chatbots have also been developed to assist cybercriminals. Tools such as FraudGPT or WormGPT democratize access to hacking for people with limited technical skills.
What to remember:
- AI raises both fears and hopes for the protection of our personal data
- The collection of sensitive data could threaten online confidentiality
- Deploying suitable tools can also counter certain data thefts