What dating apps can teach big tech about privacy
I have worked in data privacy for over 15 years, but nothing prepared me for the complexities AI has created for companies since it went mainstream in 2022.
Every week, a new company is in the spotlight: the criticism of Meta for using Facebook and Instagram user data for AI training, Microsoft's Recall feature (which the company was later forced to walk back), and Adobe users canceling subscriptions for fear that their artwork would feed generative models.
The question almost always comes down to whether users can opt out of handing over their data to AI systems. If they have that option, what will that experience be like? Is the data collection notice embedded in the Terms and Conditions? Is the opt-out mechanism full of dark patterns designed to trick users into accepting everything or discourage them from trying to change any detail?
Most people blindly click “accept all cookies” when browsing the internet. That's no reason to celebrate: it reveals a failure by technology companies to explain what information they collect, why they collect it, and how they use it.
The root of the problem is that, by default, consumers are almost always automatically opted in to new experiences and features, along with the data collection and use that comes with them. Even when they are given the option to decline, the process is often too confusing or obstacle-ridden to bother with.
So consumers continue to provide their data, day after day, until finally the cup overflows with public reactions and controversial headlines. This cycle isn't just bad for consumer privacy rights – it's bad for businesses and the future of AI models.
In the world of data privacy, we advocate the concept of data minimization. It is considered best practice to collect as little data as possible and to inform consumers in advance about what we collect and how we use it. Why, then, do technology companies continue to insist on default opt-in, making it the norm?
In a Wired article about Anthropic's privacy policies, I learned that the company uses user prompts and outputs to train its model, Claude. But an Anthropic spokesperson says they only use this data for training when “the user gives express permission to do so, such as clicking a thumbs-up or thumbs-down on a specific Claude output to give us feedback”.
Reading this reminded me of my previous roles as chief privacy officer at the dating app Grindr and privacy program lead at Match Group (parent company of Tinder and Hinge). It also inspired me to think about a new model for a future free of default privacy opt-ins.
Mimicking dating-app gestures – swiping left or right, or a simple thumbs up or down – at every point of data collection is a brilliant way to obtain consent, especially when collecting data for training models.
A consumer may be comfortable with their data being used to train one type of model, such as a model for disease detection, but not others. Another consumer may give up all of their health data for training but refuse to give up their creative writing data. Still another may “swipe left” through all of this.
The reality is that cutting off access to all data is not the solution for any company – and AI models need data more than ever. Researchers estimate that we could run out of data for model training within a decade. There are too many innovative use cases for AI in the world – from healthcare and finance to consumer technology and education – to let that happen.
Implementing timely, easy-to-use controls like swiping to improve how we collect consent gives users real power to decide when to grant access to their data, based on what it is and how it will be used.
Not only are these controls intuitive and scalable, they also align with consent best practices that reduce business risk – and they could result in more and better data collected to train the AI models of the future.
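The per-purpose, per-category consent described above can be sketched as a simple data structure. This is a hypothetical illustration, not any real company's implementation; all names here (ConsentLedger, record_swipe, may_use) are invented for the example. The key design choice is that the default answer is "no": data may only be used for a purpose the user has explicitly swiped right on.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentDecision:
    data_category: str   # e.g. "health", "creative_writing"
    purpose: str         # e.g. "disease_detection_model"
    granted: bool        # swipe right = True, swipe left = False
    timestamp: datetime

@dataclass
class ConsentLedger:
    decisions: list = field(default_factory=list)

    def record_swipe(self, category: str, purpose: str, granted: bool) -> None:
        """Append the user's swipe as an auditable, timestamped decision."""
        self.decisions.append(
            ConsentDecision(category, purpose, granted, datetime.now(timezone.utc))
        )

    def may_use(self, category: str, purpose: str) -> bool:
        """Opt-out by default: with no recorded swipe, there is no permission."""
        for d in reversed(self.decisions):  # the most recent decision wins
            if d.data_category == category and d.purpose == purpose:
                return d.granted
        return False

# A user comfortable sharing health data for disease detection,
# but not creative writing for a generative model:
ledger = ConsentLedger()
ledger.record_swipe("health", "disease_detection_model", granted=True)
ledger.record_swipe("creative_writing", "generative_model", granted=False)
```

Because every decision is timestamped and scoped to one category and one purpose, the same record doubles as an audit trail – the kind of documentation consent best practices call for.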
While some companies may argue that a swipe-to-consent model would result in less data collection, I disagree. An informed and empowered consumer is a loyal and engaged consumer.
For the future of artificial intelligence, this is a model I would adopt immediately.