Generative AI-enabled software development promises to boost productivity significantly. In fact, research by Harvard reveals a 43 percent increase in productivity, depending on the task and seniority of the specialist. Nevertheless, most market research on generative AI-attributed productivity improvement comes from controlled settings that don’t necessarily reflect real-world nuance.
A leading digital transformation services and product engineering company sought to capture the components of a real-world integration by helping one of its clients integrate generative AI into the work processes of 10 development teams across three workstreams, including more than 100 specialists. The practical findings from this large-scale implementation can assist organizations as they overcome adoption challenges and craft a company-wide roadmap that scales AI tools, culture and practices.
5 Common Challenges When Implementing Generative AI
- Compatibility with AI tools.
- Integration problems.
- Data privacy and security concerns.
- Specialists’ attitudes toward generative AI and resistance to change.
- Complexity of real-world project conditions.
Addressing Generative AI Adoption Challenges
Accounting for these variables when objectively measuring how new generative AI tools impact productivity can be nigh impossible on an individual level. As such, businesses should measure the change in productivity by examining the change in output for an entire team.
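The team-level comparison described above can be sketched in a few lines. This is a minimal illustration with hypothetical story-point figures, not data from the engagement: individual numbers swing sprint to sprint, but the team aggregate smooths that noise.

```python
# Hypothetical per-developer output (story points per sprint), before and
# after the generative AI rollout. Individual figures are noisy; the
# team-level total is the more reliable signal.
before = {"dev_a": 8, "dev_b": 13, "dev_c": 5, "dev_d": 11}
after = {"dev_a": 9, "dev_b": 12, "dev_c": 10, "dev_d": 14}

def team_change_pct(before, after):
    """Percent change in total team output between two measurement periods."""
    total_before = sum(before.values())
    total_after = sum(after.values())
    return 100 * (total_after - total_before) / total_before

print(f"Team output change: {team_change_pct(before, after):+.1f}%")
```

The same aggregation applies to any of the output measures discussed later; the point is that the denominator is the whole team's baseline, not one person's.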
Several challenges impede adoption, such as compatibility with AI tools and integration issues. Likewise, data privacy and security concerns with tool usage can cause problems. Even when companies successfully resolve those challenges, the two main roadblocks encountered during this large-scale generative AI implementation were specialists’ misaligned attitudes and expectations regarding AI as well as the complexities of real-world project conditions.
Before implementing generative AI, companies must navigate the attitudes and expectations of their workforce. Specialists’ negative attitudes around generative AI typically emerge when their initial expectations do not align with the outcomes concerning quality or execution time. Often, these attitudes amount to feeling that the tools should “do the work for me.” When they don’t, specialists will say, “This won’t help me,” or “I don’t have time for this.”
One approach organizations can take to encourage adoption is to survey and assess all of their specialists. These instruments allow companies to track attitudes and perceived engagement, helping establish a baseline. Business leaders can then identify subgroups with similar attitudes and tailor their coaching techniques to each.
For example, two subgroups could include people with a high versus a low self-perceived generative AI proficiency score. Companies can create individual change management strategies for these groups, providing more coaching, training and resources for those who admitted to not being highly proficient with generative AI.
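A minimal sketch of that segmentation step, assuming hypothetical survey responses and an assumed threshold of 3 on a 1-to-5 self-rated proficiency scale (both are illustrative choices, not values from the engagement):

```python
# Hypothetical survey responses: self-rated generative AI proficiency, 1-5.
responses = [
    {"name": "Ana", "proficiency": 4},
    {"name": "Ben", "proficiency": 2},
    {"name": "Chloe", "proficiency": 5},
    {"name": "Dev", "proficiency": 1},
]

THRESHOLD = 3  # assumed cut-off separating the two coaching subgroups

def segment(responses, threshold=THRESHOLD):
    """Split specialists into high- and low-proficiency coaching subgroups."""
    high = [r["name"] for r in responses if r["proficiency"] >= threshold]
    low = [r["name"] for r in responses if r["proficiency"] < threshold]
    return {"high": high, "low": low}

groups = segment(responses)
# The "low" subgroup receives the extra coaching, training and resources.
```

In practice the segmentation would use more dimensions than one score, but the mechanism is the same: group, then attach a change management strategy to each group.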
Measuring Success of the Adoption and Its Impact
Research from the Thomson Reuters Institute found that, while specialists from various industries agreed they could and should apply generative AI tools to their work, they were overwhelmingly hesitant because of a lack of technical knowledge. Such findings exemplify the need for a project roadmap that marks the start and end of the AI integration process. This roadmap must include a list of deliverables for baseline and final reporting stages and outline which metrics to measure.
Companies should also classify these metrics into objective and subjective categories. Some objective metrics used by the engineering company were velocity, throughput, average rework and code review time, code review failure and acceptance rates, and time spent on bug fixing.
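As an illustration of how a few of those objective metrics can be computed, here is a sketch over hypothetical code-review records (the field names and figures are assumptions for the example, not the company's actual schema):

```python
from statistics import mean

# Hypothetical code-review records collected at a reporting stage.
reviews = [
    {"review_hours": 2.0, "accepted": True},
    {"review_hours": 5.5, "accepted": False},
    {"review_hours": 1.5, "accepted": True},
    {"review_hours": 3.0, "accepted": True},
]

def review_metrics(reviews):
    """Average code review time plus acceptance and failure rates for one team."""
    avg_time = mean(r["review_hours"] for r in reviews)
    acceptance_rate = sum(r["accepted"] for r in reviews) / len(reviews)
    return {
        "avg_review_hours": round(avg_time, 2),
        "acceptance_rate": acceptance_rate,
        "failure_rate": 1 - acceptance_rate,
    }

print(review_metrics(reviews))
```

Computing the metrics the same way at the baseline and final reporting stages is what makes the before-and-after comparison meaningful.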
For subjective metrics, companies should use surveys that ask teams what they think about the AI tools. For example, how helpful or unhelpful are the tools? How often do you use them in your workday? Were you already familiar with these tools?
Comparing the results of objective and subjective metrics will allow businesses to find correlations. For example, there is likely a useful correlation if teams that report higher foreknowledge of the tools demonstrate faster development cycles.
Given the complexities that come with real-world projects, organizations must perform thorough data cleanup when necessary to ensure a more accurate evaluation of generative AI’s impact on productivity. For example, most workplaces rely on employees signing in and out of tasks to measure progress and productivity on software development work. This data becomes invalid if developers do not accurately report the time they spend on a project.
As a solution, some of the best data cleanup practices include eliminating unreliable data and grouping data by projects, teams, task types and sizes, etc. Also, businesses should record measurements on a routine schedule, depending on the duration of the integration project.
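The two cleanup practices named above, dropping unreliable rows and grouping what remains, can be sketched as follows. The record shape and the zero-hours reliability rule are assumptions for the example:

```python
from itertools import groupby

# Hypothetical raw time-tracking records; zero logged hours flags an
# unreliable row (e.g., a developer who never signed time to the task).
records = [
    {"team": "alpha", "task_type": "feature", "hours": 6.0},
    {"team": "alpha", "task_type": "bugfix", "hours": 0.0},  # unreliable
    {"team": "beta", "task_type": "feature", "hours": 4.5},
    {"team": "alpha", "task_type": "feature", "hours": 3.0},
]

def clean_and_group(records):
    """Drop unreliable rows, then bucket the rest by (team, task_type)."""
    reliable = [r for r in records if r["hours"] > 0]
    key = lambda r: (r["team"], r["task_type"])
    return {
        k: [r["hours"] for r in rows]
        for k, rows in groupby(sorted(reliable, key=key), key=key)
    }

buckets = clean_and_group(records)
```

Grouping before aggregating keeps comparisons like-for-like: a team's bug-fix hours are never averaged against another team's feature work.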
Other valuable metrics companies can use to track adoption progression among teams include average daily impact, perceived proficiency, performance changes, work coverage, usage of AI tools and uninterrupted workflow.
Promoting an Always-Learning and AI-Centric Culture
While managing and measuring generative AI adoption, businesses must also prioritize some additional considerations and best practices to support continuous learning and an AI-centric culture. For example, leaders should collaborate closely with their teams, encouraging individuals to share what is and isn’t working. There should also be growth and development priorities at the individual and team levels, accompanied by suitable learning paths.
If a company hires a group of new graduates, it is inappropriate to hand them a new AI solution and walk away. Rather, this company should provide adequate learning materials, coaches and reporting tools that allow the new hires to report issues with the AI solution. Then, assigned champions can highlight quick-win use cases from these new teams — namely, for code generation, task automation and artifact or issue analysis — to help inspire other teams and convince skeptical specialists of the power of generative AI.
Likewise, by establishing security guidelines and rules of engagement, leaders can empower their teams to explore and experiment with generative AI without exposing the company to risk. Teams should never paste proprietary information or code into ChatGPT. Also, if teams use GitHub Copilot, they should configure its filter for suggestions matching public code so that licensed open-source snippets are not silently introduced into their work.
Similarly, organizations should promote the agile adoption of emerging AI technologies and adherence to industry standards by constantly reviewing the landscape, educating teams and updating tools when necessary.
Real-World Examples Make the Best Guides
This real-world example practically illustrates the time, effort and careful deliberations required to go from proof of concept to a successful deployment with tangible productivity gains. Although these various adoption challenges and corresponding recommendations are not exhaustive, they highlight the extensive legwork that precedes a generative AI implementation, including the realistic expectations businesses should have when approaching a project.