📢 CAIDP Authors Letter in The New York Times on AI Training, Slack, and Opt-In (June 28, 2024)

In a letter published today in The New York Times, CAIDP's Marc Rotenberg, Christabel R., and Kara Solange Kelawan wrote:

"Re “A.I. Devices Want More of Our Data,” by Brian X. Chen (Tech Fix column, June 26):

💡 "Mr. Chen makes a good point about the privacy and security risks of companies collecting customer data to train artificial intelligence models. But the companies are not waiting for user permission to repurpose this data. They assume they can do this unless customers object."

🤔 "In a recent statement to Slack, we opted out of its plan to use our data and the data of our friends and colleagues who collaborate with us for training A.I. models. We cannot assume that they would want their content, created for our common projects, to be used by the company for other purposes.

➡ "We have also urged Slack to withdraw the proposed change in its business practices and adopt an opt-in model for the use of customer data for A.I. training.

🔥 "If some users wish to provide their data to Slack for A.I. training, that should be their choice. But it is simply unfair and deceptive for Slack to take user data without explicit permission, particularly after it announced, “You own and control the content within your Slack workspace.”

🔥 "We have also notified the Federal Trade Commission of our concerns."

[In 2023, the Center for AI and Digital Policy filed a detailed complaint with the Federal Trade Commission, explaining that OpenAI had violated US consumer protection law when it released ChatGPT knowing of the various risks to public safety, democratic institutions, privacy, intellectual property, and cybersecurity.]

#aigovernance #OurData https://lnkd.in/ecx-tCg2
Center for AI and Digital Policy’s Post
More Relevant Posts
-
Google's AI email experiment raises privacy concerns - Inc. #privacyconcerns

🤝 Follow us on Discord 🔜: https://lnkd.in/gt823Zd3
🤝 Follow us on WhatsApp 🔜: https://wapia.in/wabeta

❇️ Summary: Google's AI email experiment, called Smart Compose, aims to make email composition faster and more efficient by predicting what users will type next. While this feature may seem convenient, it raises concerns about privacy and data security: users may be uncomfortable with Google's AI reading and analyzing their emails to make suggestions, and it is unclear how Google will handle sensitive information. Despite the benefits of Smart Compose, users must weigh the trade-offs in terms of privacy.

Hashtags: #chatGPT #GoogleAIprivacyconcerns #emailexperimentsecurity
-
Zoom has faced criticism for changing its terms and conditions to allow training AI models on user data. Even after clarifying the terms and obtaining additional consent from users, Zoom still uses their data to train its models. I wonder how this affects users' privacy rights and what the implications are for data protection laws. I would love to hear from experts in #privacy on this issue. I guess my bottom line is that in the era of AI, #data is not the new oil, because it is much more valuable! #privacy #ai #zoom #dataprotection
How Zoom’s terms of service and practices apply to AI features | Zoom Blog
blog.zoom.us
-
Cultural Scientist | PR and Communications - love dealing with #innovation #sciencecommunication #sustainability #socialimpact #innovationmanagement #innovationresearch
🤖 »Worst case, you’ll have to find another job.«

Data privacy is always a tricky subject, since it's difficult to tell the average consumer why protecting your data (to a certain extent) is worth it. With big companies like Zoom apparently reviving or increasing their data collection efforts, whether to sell data to AI model trainers or to train their own AI models, things become even trickier. Do the benefits outweigh the risks?

Obviously, nobody should trust big private profit-driven companies to do "the right thing" and make ethically sound decisions, but once they are omnipresent (talk about a non-competitive market), you may not be able to avoid them.

Hans-Böckler-Stiftung https://lnkd.in/eSsrghFj

#dataprivacy #aimodels #artificialintelligence #dataanalytics
Zoom Is Using You to Train AI. So Will Everyone Else
https://www.rollingstone.com
-
🕵️♀️ Vendors training AI with customer data = enterprise risk ❌

Did you know tech companies have been using data about their customers' use of services for years? But what companies need to train today's ML models goes beyond your metadata: they need your information and stored content 😱

With lawsuits involving Google, Microsoft, and others leading people to question privacy and copyright, how far has "big tech" crossed the line? https://hubs.la/Q0218kVP0
Vendors Training AI With Customer Data Is an Enterprise Risk
darkreading.com
-
Day 14 of 100 #cybertechdave100daysofcyberchallenge

Today I came across this new article that I thought was interesting enough to share: https://lnkd.in/ed8CWH5P

The TL;DR is that Zoom has updated its terms of service to allow the collection of data from calls to train its AI service (which has now started rolling out). Security and other tech industry professionals are pushing back on the overbroad language of the new terms around data collection for AI training and the apparent lack of an ability to opt out of the AI data collection in advance. Zoom, for its part, assures that folks can opt out on calls or can choose not to use the AI service.

My take is that more and more companies are going to go the route of collecting data for AI training, as that's where the next big tech data collection boom is headed. Two questions I would ask Zoom about this are:

1. How is the data collected for the AI being stored, and is it anonymized?
2. Can an organization's AI training data be deleted upon request?

What do you think?
Zoom's Terms of Service Updates on AI Features Raise Privacy Concerns
secureworld.io
-
Microsoft has unveiled Bing Chat Enterprise, a new AI-powered chat tool claiming to offer better data protection for businesses. Is it the privacy solution it promises to be? Dive into the details in our latest article. #microsoft #AI #bingchatenterprise
Microsoft announces new AI tool for businesses
https://sendent.com
-
#ai | #artificialintelligence | #privacy: Slack may have been using users' chats to train its AI models. Slack faces scrutiny for using customer data without permission, sparking outrage after Corey Quinn highlighted the issue. Users are frustrated by the lack of transparency and the cumbersome opt-out process. Inconsistencies between Slack's privacy policies and its premium generative AI tools raise concerns about user control. The current opt-out approach may require policy changes to ensure greater transparency and user control over data. Read more at: https://lnkd.in/gNTGDVAG
Slack may have been using users' chats to train its AI models - ET CISO
ciso.economictimes.indiatimes.com
-
🚫👁️🗨️ Zoom AI Training: Addressing Privacy Concerns and Providing Mitigations 🕵️♂️📊

Zoom’s use of user data for AI model training has raised concerns over privacy and data usage. To ensure transparency and protect user information, consider these suggestions and mitigations:

1. ⚙️ Transparent Communication: Zoom should communicate clearly and openly with users about its AI training practices, providing insights into data usage and explaining the benefits.
2. 🔒 Opt-Out Option: Offer users the ability to opt out of AI model training, respecting their privacy preferences and giving them control over their data.
3. 🔄 Data Anonymization: Implement robust data anonymization techniques to protect user identities and ensure that sensitive information remains secure during AI training.
4. 👥 User Consent: Obtain explicit consent from users before utilizing their data for AI training purposes, ensuring compliance with privacy regulations.
5. 🛡️ Data Protection Measures: Strengthen data protection measures to safeguard user information, encrypting data both in transit and at rest.

By implementing these suggestions and mitigations, Zoom can bolster user trust, maintain data privacy, and ensure responsible AI model training practices. 📲🔒

#ZoomAI #PrivacyConcerns #UserData #AIModelTraining #DataUsage #DataPrivacy #OptOutOption #Transparency #UserConsent #DataProtection #CyberSecurity #UserPrivacyRights #AIIntegration #DataSecurity #OnlinePrivacy #DataAnonymization #DataProtectionMeasures #ProtectUserPrivacy #CyberAwareness #ResponsibleAI #DataSafeguards #UserTrust #PrivacyMitigations #SecureDataUsage #RespectPrivacyPreferences #DataEncryption #UserDataPrivacy
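To make the data anonymization suggestion above concrete, here is a minimal Python sketch of one common technique: pseudonymizing user identifiers with a keyed hash before records enter a training corpus. This is purely illustrative; the function names, the record shape, and the secret salt are hypothetical, and this is not a description of Zoom's actual pipeline.

```python
import hashlib
import hmac

# Hypothetical secret; in practice this would live in a secrets manager,
# never stored alongside the data it protects.
SECRET_SALT = b"replace-with-a-real-secret"

def pseudonymize(user_id: str) -> str:
    """Return a stable, non-reversible token for a user ID via HMAC-SHA256."""
    digest = hmac.new(SECRET_SALT, user_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

def scrub_record(record: dict) -> dict:
    """Replace the direct identifier in a training record with a pseudonym."""
    return {
        "speaker": pseudonymize(record["speaker"]),
        # The message content itself may still need separate redaction
        # (names, emails, etc.); keyed hashing only covers the identifier.
        "text": record["text"],
    }

record = {"speaker": "alice@example.com", "text": "Let's move the launch to Friday."}
scrubbed = scrub_record(record)
```

A keyed hash (rather than a plain SHA-256 of the ID) means an attacker who obtains the corpus cannot reverse the tokens by hashing guessed email addresses without also holding the secret, while the mapping stays stable enough to preserve conversational structure across a dataset.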
Zoom trains its AI model with some user data, without giving them an opt-out option
https://securityaffairs.com