With AI adoption on the rise, developers face a challenge — handling risk
- At an AI roundtable in November, developers said AI tools were playing a key role in coding.
- They said that while AI could boost productivity, stakeholders should understand its limitations.
- This article is part of "CXO AI Playbook": straight talk from business leaders on how they're testing and using AI.
At a Business Insider roundtable in November, Neeraj Verma, the head of applied AI at Nice, argued that generative AI "makes a good developer better and a worse developer worse."
He added that some companies expect employees to be able to use AI to create a webpage or HTML file and simply copy and paste solutions into their code. "Right now," he said, "they're expecting that everybody's a developer."
During the virtual event, software developers from companies including Meta, Slack, Amazon, and Slalom discussed how AI has influenced their roles and career paths.
They said that while AI could help with tasks like writing routine code and translating ideas between programming languages, foundational coding skills are necessary to use the tools effectively. Communicating these realities to nontechnical stakeholders is a primary challenge for many software developers.
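As a sketch of the routine code the panelists were describing, consider the kind of small helper an AI assistant can draft in seconds. This is a hypothetical example, not code from any company at the table; the point is that a developer still needs the fundamentals to catch the edge cases a generated draft can miss.

```python
# Hypothetical AI-drafted helper: average a list of response times.
# A generated version can easily omit the empty-list guard, which
# would raise ZeroDivisionError; spotting that is the human's job.
def average_response_time(times_ms: list[float]) -> float:
    """Return the mean of a list of response times in milliseconds."""
    if not times_ms:
        return 0.0
    return sum(times_ms) / len(times_ms)

print(average_response_time([120.0, 95.5, 130.2]))  # 115.23...
```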
Understanding limitations
Coding is just one part of a developer's job. As AI adoption surges, testing and quality assurance may become more important for verifying the accuracy of AI-generated work. The US Bureau of Labor Statistics projects that the number of software developers, quality-assurance analysts, and testers will grow by 17% in the next decade.
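One hypothetical illustration of that verification work: a short unit test that checks an AI-drafted helper against its edge cases before the code ships. The function and test below are illustrative sketches, not examples cited at the roundtable.

```python
import unittest

# Hypothetical AI-drafted function under review: shorten a string to
# max_len characters, appending "..." when it has to cut text.
def truncate(text: str, max_len: int) -> str:
    if len(text) <= max_len:
        return text
    return text[: max_len - 3] + "..."

class TruncateTest(unittest.TestCase):
    def test_short_string_unchanged(self):
        self.assertEqual(truncate("hi", 10), "hi")

    def test_long_string_stays_within_budget(self):
        result = truncate("abcdefghij", 8)
        self.assertEqual(result, "abcde...")
        self.assertLessEqual(len(result), 8)  # the guarantee QA cares about

if __name__ == "__main__":
    unittest.main()
```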
Expectations for productivity can overshadow concerns about AI ethics and security.
"Interacting with ChatGPT or Cloud AI is so easy and natural that it can be surprising how hard it is to control AI behavior," Igor Ostrovsky, a cofounder of Augment, said during the roundtable. "It is actually very difficult to, and there's a lot of risk in, trying to get AI to behave in a way that consistently gives you a delightful user experience that people expect."
Companies have faced some of these issues in recent AI launches. Microsoft's Copilot was found to have problems with oversharing and data security, though the company created internal programs to address the risk. Tech giants are investing billions of dollars in AI technology (Microsoft alone plans to spend over $100 billion on graphics processing units and data centers to power AI by 2027) but not as much in AI governance, ethics, and risk analysis.
AI integration in practice
For many developers, managing stakeholders' expectations (communicating the limits, risks, and overlooked aspects of the technology) is a challenging yet crucial part of the job.
Kesha Williams, the head of enterprise architecture and engineering at Slalom, said during the roundtable that one way to bridge this gap with stakeholders is to outline specific use cases for AI. Focusing on the technology's applications could highlight potential pitfalls while keeping an eye on the big picture.
"Good developers understand how to write good code and how good code integrates into projects," Verma said. "ChatGPT is just another tool to help write some of the code that fits into the project."
Ostrovsky predicted that the ways employees engage with AI would change over the years. In the age of rapidly evolving technology, he said, developers will need to have a "desire to adapt and learn and have the ability to solve hard problems."