Organizations are turning built-in copilots on and then back off for a few reasons: (1) copilots expose weaknesses in the security and governance controls of the underlying apps and data systems, creating huge risks of sensitive information being exposed broadly internally or even externally; (2) the underlying data needs curation for different use cases, and that curation needs to be automated; (3) answer quality suffers as a result. Teams are back to the drawing board to first automate the underlying data controls and then turn copilots back on. Copilots are bound to happen; they just need more prep and automation. Well-placed article, Isabelle Bousquette Sharon Mandell
My latest in today's print edition of The Wall Street Journal: AI work assistants are off to a slow start.

AI work assistants were designed to give businesses a relatively easy avenue into cutting-edge technology. It isn't quite turning out that way, with CIOs saying it requires a heavy internal lift to get full value from the pricey tools. "It has been more work than anticipated," said Sharon Mandell, CIO of Juniper Networks, who is testing tools from several vendors but doesn't feel ready to put any into production.

Tools like Copilot for Microsoft 365 or Gemini for Google Workspace aim to access large bodies of enterprise data, including emails, documents and spreadsheets, and deliver reliable answers to questions such as "what are our latest sales figures?" But that isn't always the case, in part because the enterprise data they are accessing isn't always up-to-date or accurate, and in part because the tools themselves are still maturing.

Mandell said if she asks a question related to 2024 data, the AI tool might deliver an answer based on 2023 data. At Cargill, an AI tool failed to correctly answer a straightforward question about who is on the company's executive team, the agricultural giant said. At Eli Lilly and Company, a tool gave incorrect answers to questions about expense policies, said Diogo Rau, the pharmaceutical firm's CIDO.

Vendors have noted the issue. "As companies started using Copilot, people started finding data that companies didn't know they had access to, or that they realized wasn't as fresh or as valuable as it could be. And then they realized, 'Oh, we've got to do more,'" said Jared Spataro, corporate vice president of AI at Work at Microsoft.

Read the full, unlocked story here for how CIOs and vendors are aiming to solve the issue: https://lnkd.in/eKKPcDjt

#tech #cio #ai #artificialintelligence
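The stale-data failure Mandell describes is, at bottom, a retrieval problem: if nothing filters outdated documents before they reach the model, the assistant will happily cite them. As a hedged sketch only (the Doc schema, the filter_fresh function, and the 365-day window are my own illustrative assumptions, not how Copilot or Gemini actually work), a pre-retrieval freshness filter might look like this:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical document record; a real enterprise index carries far richer metadata.
@dataclass
class Doc:
    doc_id: str
    text: str
    last_modified: datetime

def filter_fresh(docs: list[Doc], as_of: datetime, max_age_days: int = 365) -> list[Doc]:
    """Drop documents older than the freshness window before they reach the
    retriever, so a question about 2024 figures is not answered from a
    2023 spreadsheet."""
    cutoff = as_of - timedelta(days=max_age_days)
    return [d for d in docs if d.last_modified >= cutoff]

docs = [
    Doc("fy2023-sales", "FY2023 sales: $1.2B", datetime(2023, 1, 15, tzinfo=timezone.utc)),
    Doc("fy2024-sales", "FY2024 sales: $1.4B", datetime(2024, 11, 1, tzinfo=timezone.utc)),
]
# Only the 2024 document survives a 365-day window as of December 2024.
print([d.doc_id for d in filter_fresh(docs, as_of=datetime(2024, 12, 1, tzinfo=timezone.utc))])
# -> ['fy2024-sales']
```

The point of the sketch is that freshness has to be a first-class filter in the pipeline, not something the model is trusted to infer from the documents themselves.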
Excellent article. AI work assistants like Microsoft's Copilot and Google's Gemini, designed to streamline data management within businesses, are encountering some teething problems. CIOs, including Juniper Networks' Sharon Mandell, are finding these tools more labor-intensive to implement than expected, often providing outdated or incorrect data due to the immaturity of the technology and issues with enterprise data quality. This reality underscores the challenges businesses face when integrating cutting-edge technologies and highlights the need for improvements in data accuracy and tool capabilities. As AI continues to evolve, the journey toward fully effective AI work assistants is still underway. #AI #BusinessTechnology #DigitalTransformation
Ensuring compliant data pipelines is the need of the hour for organizations looking to operationalize LLMs for business use cases. Without the proper controls in place, AI systems are more liability than asset.
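To make "proper controls" concrete: one common pattern is to gate every document behind a sensitivity label and a user entitlement check before it can surface in an assistant's answer. A minimal sketch, assuming invented labels and a hypothetical can_surface check (not any vendor's actual API):

```python
from enum import IntEnum

# Invented sensitivity labels; real deployments would pull these from a
# classification/DLP service, not a hard-coded enum.
class Sensitivity(IntEnum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Hypothetical entitlement map; in practice sourced from the identity provider.
USER_CLEARANCE = {
    "analyst@example.com": Sensitivity.INTERNAL,
    "cfo@example.com": Sensitivity.CONFIDENTIAL,
}

def can_surface(user: str, label: Sensitivity) -> bool:
    """Gate applied before a document is allowed into an assistant's answer:
    the user must be cleared at or above the document's sensitivity."""
    return USER_CLEARANCE.get(user, Sensitivity.PUBLIC) >= label

assert can_surface("cfo@example.com", Sensitivity.CONFIDENTIAL)
assert not can_surface("analyst@example.com", Sensitivity.RESTRICTED)
```

A gate like this enforced at indexing or retrieval time is exactly the kind of control whose absence lets a copilot surface data "companies didn't know they had access to."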
Good and useful article
Well put, Rehan!
I wonder if some of the challenges come from companies like Google and Microsoft (which have likely spent millions to billions of dollars on data infrastructure) assuming that all of their AI customers have implemented similar structures, particularly around personal information, due to the regulatory requirements. When those assumptions fail, the product may not behave as anticipated. This is a key reason why transparency obligations are making their way into legal compliance requirements for AI: users have to know the assumptions that were built into the models so as to avoid risking exposure of their own confidential and restricted data.