
LinkedIn Quietly Opts Users Into Generative AI Training Without Explicit Consent

In an era increasingly defined by artificial intelligence (AI), data privacy concerns have moved to the forefront. Two social media giants, LinkedIn and Meta, the parent company of Facebook, recently made headlines that put those concerns under the spotlight.

LinkedIn has added fuel to the ongoing debate by automatically opting its users into a new program under which data from the platform is used to train and improve generative AI models, a change disclosed in its updated privacy policy.

Generative AI already underpins several LinkedIn features, most visibly its writing assistant, which suggests auto-completions for messages, skill endorsements, and other written content on the platform. In essence, members' words and language patterns are absorbed into an AI language model, which in turn makes the tool's suggestions more relevant and useful.

Following the privacy policy update, users can now decide whether to participate in this AI model training. They can prevent their data from being used in the future by toggling off "Data for Generative AI Improvement" in their account settings. However, any training their data has already contributed to remains unaffected.
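To make the prospective-only nature of the opt-out concrete, here is a minimal, purely illustrative sketch of how such a flag might gate which records enter a future training corpus. All field names (user_id, text, opted_out) are hypothetical and are not LinkedIn's actual schema or pipeline.

```python
from dataclasses import dataclass

@dataclass
class MemberRecord:
    user_id: str
    text: str          # e.g., a message or post
    opted_out: bool    # mirrors a "Data for Generative AI Improvement" toggle

def build_training_corpus(records: list[MemberRecord]) -> list[str]:
    """Keep only text from members who have not opted out.

    The filter applies going forward: text already absorbed into a
    previously trained model is unaffected, matching LinkedIn's
    stated behavior.
    """
    return [r.text for r in records if not r.opted_out]

if __name__ == "__main__":
    records = [
        MemberRecord("alice", "Excited to start a new role!", opted_out=False),
        MemberRecord("bob", "Looking for ML engineers.", opted_out=True),
    ]
    print(build_training_corpus(records))  # ['Excited to start a new role!']
```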

Addressing user concerns, LinkedIn says it deploys privacy-enhancing technologies to remove personal data from its training sets. The company also notes that it does not process data for AI training from users located in the European Union (EU), the European Economic Area (EEA), or Switzerland.
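As a rough illustration of one simple privacy-enhancing step, the sketch below redacts obvious personal identifiers from text before it would enter a training set. Real systems use far more sophisticated techniques; the regex patterns and placeholder tokens here are assumptions for demonstration only, not LinkedIn's actual method.

```python
import re

# Patterns for two common identifier types; deliberately simple.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace email addresses and phone-like numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

print(redact("Reach me at jane.doe@example.com or +1 (555) 123-4567."))
# Reach me at [EMAIL] or [PHONE].
```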

In addition to training generative AI models, LinkedIn uses machine learning for purposes such as personalization and content moderation, which also draws on user data. Members who wish to object to this processing can do so via the LinkedIn Data Processing Objection Form.

LinkedIn's news follows on the heels of a revelation from Meta, which confirmed that it has trained AI models on public (non-private) user data dating as far back as 2007, resurfacing long-standing data privacy questions.

The stark reality today is that AI shapes much of what we see, hear, and experience online. As more corporations employ AI to improve customer experiences, the central question becomes how to balance technological advancement with personal data security.

As our digital footprints grow and the boundary between public and private data blurs further, it is clear that the conversation around digital privacy is not going away. It will continue to evolve and shape how tech companies operate. LinkedIn's recent move may be just the tip of the iceberg.

As we move forward, user awareness of, and control over, personal data will become paramount. Companies will have to balance their development needs with users' rights to privacy and security. The future of AI holds immense possibilities, but the road there should not be paved with our privacy.