
Business leaders share 5 ways they're taking AI from pilot to use case


In the business world, there are few areas that artificial intelligence hasn't touched. Many industries are rushing to adopt AI, and the technology is changing how employees collaborate and complete tasks.

Generative AI is a major buzzword for business leaders. But actually integrating AI can be a different story.

"A lot of our clients have dozens of AI pilots everywhere," Jack Azagury, the group chief executive for consulting at Accenture, said at one Workforce Innovation roundtable. "Very few have a coherent business case and a true reinvention and transformation."

How do companies move forward as the novelty of AI wears off? Business Insider's Julia Hood asked members of the Workforce Innovation board how they transitioned their AI pilots into real-world use cases. Board members shared five major ways their companies were moving AI from theory to operations.

"Before we go and tell our clients to embark on AI fully, we want to be an AI-first organization," said Anant Adya, an executive vice president, service-offering head, and head of Americas delivery at Infosys. "We want to show our clients we are using AI, whether it is in HR when it comes to driving better employee experience or when it comes to recruitment."

Members also highlighted employee training and peer-to-peer learning opportunities.

The roundtable participants were:

  • Anant Adya, an executive vice president, service-offering head, and head of Americas Delivery at Infosys.
  • Lucrecia Borgonovo, a chief talent and organizational-effectiveness officer at Mastercard.
  • Neil Murray, the CEO of Work Dynamics at JLL.
  • Justina Nixon-Saintil, a vice president and chief impact officer at IBM.
  • Marjorie Powell, a chief HR officer and senior vice president at AARP.

The following has been edited for length and clarity.


Identify early adopters, like human resources

Nixon-Saintil: Because we provide these platforms and solutions to clients, we are usually client zero. We implemented AI across our business and multiple functions, and one of the first things we did was our AskHR product, which I think answered over 94% of questions employees had.

HR employees now spend time doing higher-order work and building partnerships with business units instead of answering basic questions that a virtual assistant can handle. I think that's when you start seeing a lot of the benefits of it.

Borgonovo: HR has been leading the way in terms of embedding AI to enhance the employee experience end to end, from before you hire somebody all the way to after they leave the organization. There are tons of opportunities to improve performance and productivity and provide greater personalization.


Invest in ongoing training

Adya: There are certain AI certifications and courses that everybody has to take to be knowledgeable about AI. So we are driving education on what the impact of AI is, what gen AI is, what LLMs are, and how you look at use cases. And certainly educating everybody that it's not about job losses but about amplifying your potential to do more.

Powell: We have hands-on skill building. This past year we hosted over 20 AI workshops helping teams integrate AI into their work. We really encourage our staff to participate. We have a product we're using behind our firewall, so they can engage and play with it. We're just telling them to go ahead and try to break it, so they can give us feedback on what's working.

There was a team of people who said we want to see how you could use AI with PowerPoint or Excel. And they're finding, well, it's not so good at those things. But as it continues to grow, they'll be ready for that, and they'll know what it was able to do and what it wasn't. I think it's just making it fun, and that way it's not so scary.

Murray: Our internal large language model is now a widget on everybody's dashboard that is accessible on your landing page. Training is super important here to make people comfortable with it. Even if it's just an online module, you have to get people comfortable.

Nixon-Saintil: We've also done companywide upskilling. We had two Watsonx challenges. Watsonx is our AI data platform. This is one of the ways we've upskilled a majority of the organization. The outcome of that is there are some great ideas that employees actually ideated, and they're now implementing those ideas and solutions in different functions.

Borgonovo: Employees want to use AI, and I think they're eager to learn how to use AI to augment their jobs. For that, we built a three-tiered learning approach. One is democratizing access for everybody and building general knowledge of AI.

The second tier is much more role-specific. How do we drive new ways of working by having people in different roles embrace AI tools? Software engineering, consulting, sales, you name it. And then something we definitely want to build for the future is thinking proactively about how you re-skill people whose roles may be impacted by AI so they can become more comfortable doing high-level tasks or can shift to a different type of role that is emerging within the organization.

The other piece, where we're seeing the greatest demand internally, is knowledge management: gathering information from a lot of different sources in a very easy way.

Another job family that is very eager to get their hands on new AI technology is software engineering. We have taken a very measured approach in deploying coding assistants within the software-engineering community. This year we did a pilot with a subset of them using coding assistants. The idea is to just learn and, based on our learning, scale more broadly across our software-engineering community in 2025.

One of the really interesting learnings from this pilot was that the software engineers who were using the coding assistants probably the best were people who had received training. What we're learning is that before you start rolling out all of these technologies or AI-specific platforms for different job families, you have got to be really intentional about incorporating prompt training.
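
Borgonovo didn't spell out what that prompt training covers, but the gist is easy to sketch. The snippet below is illustrative only (the helper and example prompts are hypothetical, not a documented Mastercard practice); it contrasts the vague request an untrained user might type with the structured prompt trained engineers tend to write:

```python
# Illustrative sketch: structured prompts tend to outperform vague ones.
# The task/context/constraints/output breakdown is a common prompt-writing
# convention, not a documented practice at any company named here.

def build_structured_prompt(task: str, context: str,
                            constraints: list[str], output_format: str) -> str:
    """Assemble a coding-assistant prompt from explicit parts."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Constraints:\n{constraint_lines}\n"
        f"Output: {output_format}"
    )

vague_prompt = "fix my code"  # What an untrained user often types.

trained_prompt = build_structured_prompt(
    task="Find and fix the off-by-one error in the pagination helper below.",
    context="Python 3.11 service; the helper slices a list of orders into pages.",
    constraints=[
        "Do not change the public function signature",
        "Add a unit test that reproduces the bug",
    ],
    output_format="A unified diff plus a one-paragraph explanation.",
)

print(trained_prompt)
```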


Unlock peer-to-peer learning

Powell: We have idea pitch competitions and a year-round idea pipeline program where people can put in ideas on how to use AI and share what they've learned. It sparks a lot of peer learning and creativity on our digital-first capabilities to help us with our digital transformation.

Then we collaborate through community. We have a generative-AI community of practice. This is somewhat like how companies have employee resource groups; we have communities of practice as well. They give employees a space to share their techniques and learn from each other and stay ahead of evolving trends. They meet monthly, they have an executive sponsor, and they have all kinds of activities and learning opportunities.

Murray: As we monitored AI use and what sort of questions were being asked, we identified super users across all departments β€” so the people who were capable of developing the most evolved prompts. I suppose those prompts are now appearing in pull-down menus to help people who maybe aren't as advanced in their use of it, because prompting is a really important part of this. And so the super users are driving everybody else to show them what's possible across the organization.
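
Murray didn't describe the mechanics behind those menus, but the pattern he outlines can be sketched in a few lines. Everything below is hypothetical: vetted super-user prompts stored as named templates that a dashboard widget can render as a pull-down:

```python
# Hypothetical sketch of a shared prompt library: super users contribute
# vetted templates, and the dashboard renders the keys as a pull-down menu.

PROMPT_LIBRARY = {
    "Summarize a lease": (
        "Summarize the attached lease in five bullet points, flagging any "
        "clauses about early termination, rent escalation, or subletting."
    ),
    "Draft a client update": (
        "Draft a 150-word client update on {project}, in a neutral tone, "
        "ending with the next milestone and its date."
    ),
}

def render_menu(library: dict[str, str]) -> list[str]:
    """Return the labels a front end would show in the pull-down."""
    return sorted(library)

def fill_template(library: dict[str, str], label: str, **values: str) -> str:
    """Expand the chosen template with user-supplied values."""
    return library[label].format(**values)

print(render_menu(PROMPT_LIBRARY))
print(fill_template(PROMPT_LIBRARY, "Draft a client update", project="HQ relocation"))
```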


Find customer pain points to solve

Borgonovo: One of the use cases that drives not only knowledge management but also efficiencies is around customer support. Customer support is probably one of the areas that has been leading the way.

We have a customer onboarding process that can be very lengthy and very technical, involving hundreds of pages of documentation and reference materials. It was our first use case for a chat-based assistant, streamlining the process and creating greater efficiency and a much better customer experience.
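
Mastercard didn't share how the assistant is built, but the usual pattern behind this kind of documentation assistant, often called retrieval-augmented generation, is straightforward to sketch. The passages, helpers, and toy keyword retrieval below are all illustrative; production systems typically use vector search over embedded documents:

```python
# Illustrative retrieval-augmented generation sketch: find the passages most
# relevant to a customer's question, then ask the model to answer from them.

DOC_PASSAGES = [
    "API keys are issued within two business days of contract signature.",
    "Sandbox credentials let you test transactions before go-live.",
]

def retrieve(question: str, passages: list[str], k: int = 1) -> list[str]:
    """Rank passages by crude word overlap with the question (toy retrieval)."""
    q_words = set(question.lower().split())
    return sorted(passages, key=lambda p: -len(q_words & set(p.lower().split())))[:k]

def build_answer_prompt(question: str) -> str:
    """Assemble a prompt that grounds the model in retrieved context."""
    context = "\n".join(retrieve(question, DOC_PASSAGES))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

print(build_answer_prompt("How do I get sandbox credentials?"))
```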


Reinforce responsible leadership

Powell: We want our leaders, people leaders particularly, to guide employees to use AI effectively and responsibly. We want to make sure they're emphasizing privacy, policy, and efficiency. So we encourage managers to point the staff toward training that we offer, and we offer quite a bit of training.


The rise of the "AI engineer" and what it means for the future of tech jobs

Some software developers are transitioning to AI jobs at their companies.

  • AI is opening new career tracks for software developers who want to shift to different roles.
  • Developers at an AI roundtable said that gen AI is rapidly reshaping the tech job market.
  • This article is part of "CXO AI Playbook": straight talk from business leaders on how they're testing and using AI.

A few years ago, Kesha Williams was prepared to step away from her tech career β€” but then the AI boom brought her back.

"I've been in tech for 30 years, and before gen AI, I was ready to retire," she said. "I think I'll stay around just to see where this goes." Williams is the head of enterprise architecture and engineering at Slalom.

Williams and six other developers from companies including Amazon, Meta, and Anaconda joined Business Insider's virtual roundtable in November to discuss how AI is changing the software development landscape.

While hiring and recruitment for many tech jobs have slowed with the increased adoption of AI coding tools, developers say AI is also opening new career opportunities.

A new career path

Panelists said that the emergence of jobs focused on building AI models and features is a recent development in the industry.

"One of the biggest things I've noticed in the last two to three years is the rise of a job title called 'AI engineer,' which did not exist before, and it's kind of in between a machine-learning engineer and a traditional software engineer," Shruti Kapoor, a lead member of technical staff at Slack, said. "I'm starting to see more and more companies where 'AI engineer' is one of the top-paying jobs available for software engineers."

Salary data from Levels.fyi, an online platform that lets tech workers compare their compensation packages, shows that over the past two years entry-level AI engineers have earned 8% more than their non-AI counterparts, and senior AI engineers nearly 11% more.

Neeraj Verma, the head of applied AI at Nice, said at the roundtable that AI has enabled software engineers at his company to transition internally to AI roles. He said that over 20% of the developers at Nice have moved to AI-related positions in the past two years, with about 100 of those individuals considered experts in prompt engineering.

Verma said the company's developers are not just being supported by AI; they are actively involved in using the technology to build other AI features.

He added that many senior-level developers with strong coding abilities at the company have shown interest in moving to AI to apply their skill sets in new ways. Nice created training programs to help these employees learn the technology and make internal career shifts.

AI-specialized jobs encompass machine-learning engineers, prompt engineers, and AI researchers, among other roles. Although the skills that would be useful for each of these jobs can differ, Kapoor said that an AI engineering role does not necessarily require a specific tech background. Workers with prior experience in sectors like accounting and product management, for instance, have been able to pivot into the AI space.

Adapting to change

Just as AI is changing the software development process, developers say that the professional opportunities in AI could also be in constant flux.

"Software development will change in five years much more rapidly than anything we've seen before," Igor Ostrovsky, the cofounder of Augment, said at the roundtable. "How you architect, develop, test, and maintain software β€” that will all change, and how exactly you interact with AI will also evolve."

Researchers are already questioning the long-term potential of prompt engineering jobs, which skyrocketed in demand in 2023. They say that generative AI models could soon be trained to optimize their own prompts.

"I think prompt engineering is more of a sign that some developers have the desire to learn and are eager to figure out how to interact with artificial intelligence, but it won't necessarily be how you interact with AI in three years or five years," Ostrovsky said.

The pace of technological development means that software developers' ability to learn, adapt, and solve problems creatively will be more important than ever to stay ahead of the curve.


With AI adoption on the rise, developers face a challenge — handling risk

Software developers can be involved in communicating expectations for gen AI to stakeholders.

  • At an AI roundtable in November, developers said AI tools were playing a key role in coding.
  • They said that while AI could boost productivity, stakeholders should understand its limitations.
  • This article is part of "CXO AI Playbook": straight talk from business leaders on how they're testing and using AI.

At a Business Insider roundtable in November, Neeraj Verma, the head of applied AI at Nice, argued that generative AI "makes a good developer better and a worse developer worse."

He added that some companies expect employees to be able to use AI to create a webpage or HTML file and simply copy and paste solutions into their code. "Right now," he said, "they're expecting that everybody's a developer."

During the virtual event, software developers from companies including Meta, Slack, Amazon, and Slalom discussed how AI influenced their roles and career paths.

They said that while AI could help with tasks like writing routine code and translating ideas between programming languages, foundational coding skills are necessary to use the AI tools effectively. Communicating these realities to nontech stakeholders is a primary challenge for many software developers.

Understanding limitations

Coding is just one part of a developer's job. As AI adoption surges, testing and quality assurance may become more important for verifying the accuracy of AI-generated work. The US Bureau of Labor Statistics projects that the number of software developers, quality-assurance analysts, and testers will grow by 17% in the next decade.

Expectations for productivity can overshadow concerns about AI ethics and security.

"Interacting with ChatGPT or Cloud AI is so easy and natural that it can be surprising how hard it is to control AI behavior," Igor Ostrovsky, a cofounder of Augment, said during the roundtable. "It is actually very difficult to, and there's a lot of risk in, trying to get AI to behave in a way that consistently gives you a delightful user experience that people expect."

Companies have faced some of these issues in recent AI launches. Microsoft's Copilot was found to have problems with oversharing and data security, though the company created internal programs to address the risk. Tech giants are investing billions of dollars in AI technology (Microsoft alone plans to spend over $100 billion on graphics processing units and data centers to power AI by 2027) but not as much in AI governance, ethics, and risk analysis.

AI integration in practice

For many developers, managing stakeholders' expectations (communicating the limits, risks, and overlooked aspects of the technology) is a challenging yet crucial part of the job.

Kesha Williams, the head of enterprise architecture and engineering at Slalom, said at the roundtable that one way to bridge this conversation with stakeholders is to outline specific use cases for AI. Focusing on the technology's applications could highlight potential pitfalls while keeping an eye on the big picture.

"Good developers understand how to write good code and how good code integrates into projects," Verma said. "ChatGPT is just another tool to help write some of the code that fits into the project."

Ostrovsky predicted that the ways employees engage with AI would change over the years. In the age of rapidly evolving technology, he said, developers will need to have a "desire to adapt and learn and have the ability to solve hard problems."


AI adoption is surging — but humans still need to be in the loop, say software developers from Meta, Amazon, Nice, and more

Top Row: Greg Jennings, Aditi Mithal, Pooya Amini, and Shruti Kapoor. Bottom Row: Neeraj Verma, Kesha Williams, and Igor Ostrovsky.

This article is part of "CXO AI Playbook": straight talk from business leaders on how they're testing and using AI.

The future of software-development jobs is changing rapidly as more companies adopt AI tools that can accelerate the coding process and close experience gaps between junior- and senior-level developers.

Increased AI adoption could be part of the tech industry's "white-collar recession," which has seen slumps in hiring and recruitment over the past year. Yet integrating AI into workflows can offer developers the tools to focus on creative problem-solving and building new features.

On November 14, Business Insider convened a roundtable of software developers as part of our "CXO AI Playbook" series to learn how artificial intelligence was changing their jobs and careers. The conversation was moderated by Julia Hood and Jean Paik from BI's Special Projects team.

These developers discussed the shifts in their day-to-day tasks, which skills people would need to stay competitive in the industry, and how they navigate the expectations of stakeholders who want to stay on the cutting edge of this new technology.

Panelists said AI has boosted their productivity by helping them write and debug code, which has freed up their time for higher-order problems, such as designing software and devising integration strategies.

However, they emphasized that some of the basics of software engineering, such as learning programming languages, scaling models, and handling large-scale data, would remain important.

The roundtable participants also said developers could provide critical insight into challenges around AI ethics and governance.

The roundtable participants were:

  • Pooya Amini, software engineer, Meta.
  • Greg Jennings, head of engineering for AI, Anaconda.
  • Shruti Kapoor, lead member of technical staff, Slack.
  • Aditi Mithal, software-development engineer, Amazon Q.
  • Igor Ostrovsky, cofounder, Augment.
  • Neeraj Verma, head of applied AI, Nice.
  • Kesha Williams, head of enterprise architecture and engineering, Slalom.

The following discussion was edited for length and clarity.


Julia Hood: What has changed in your role since the popularization of gen AI?

Neeraj Verma: I think the expectations out there in the market for how developers use AI have almost a bigger impact than the AI itself. You hear about how generative AI is solving this blank-paper syndrome. Humans have this concept that if you give them a blank paper and tell them to go write something, they'll be confused forever. And generative AI is helping overcome that.

The expectation from executives now is that developers are going to be significantly faster but that some of the creative work the developers are doing is going to be taken away, which we're not necessarily seeing. We're seeing it as more of a boilerplate creation mechanism for efficiency gains.

Aditi Mithal: I joined Amazon two years ago, and I've seen how my productivity has changed. I don't have to focus on doing repetitive tasks. I can just ask Amazon Q chat to do that for me, and I can focus on more-complex problems that can actually impact our stakeholders and our clients. I can focus on higher-order problems instead of more-repetitive tasks for which the code is already out there internally.

Shruti Kapoor: One of the big things I've noticed with writing code is how open companies have become to AI tools like Cursor and Copilot and how integrated they've become into the software-development cycle. It's no longer considered a no-no to use AI tools like ChatGPT. I think two years ago when ChatGPT came out, it was a big concern that you should not be putting your code out there. But now companies have kind of embraced that within the software-development cycle.

Pooya Amini: Looking back at smartphones and Google Maps, it's hard to remember what the world looked like before these technologies. It's a similar situation with gen AI: I can't remember how I was solving the problem without it. I can focus more on actual work.

Now I use AI as a kind of assisted tool. My main focus at work is on requirement gathering, like software design. When it comes to the coding, it's going to be very quick. Previously, it could take weeks. Now it's a matter of maybe one or two days, so then I can actually focus on other stuff as AI is solving the rest for me.

Kesha Williams: In my role, it's been trying to help my team rethink their roles and see AI not as a threat but as a partner that can help boost productivity, and encouraging my team to make use of some of the new embedded AI and gen-AI tools. Really helping my team upskill and putting learning paths in place so that people can embrace AI and not be afraid of it. Many of the junior-level developers are really afraid of AI replacing them.


Hood: Are there new career tracks opening up now that weren't here before?

Verma: At Nice, we have something like 3,000 developers, and over the last, I think, 24 months, 650 of them have shifted into AI-specific roles, which was sort of unheard of before. Even out of those 650, we've got about a hundred who are experts at things like prompt engineering. Over 20% of our developers are not just developers being supported by AI but developers using AI to write features.

Kapoor: I think one of the biggest things I've noticed in the last two to three years is the rise of a job title called "AI engineer," which did not exist before, and it's kind of in between an ML engineer and a traditional software engineer. I'm starting to see more and more companies where AI engineer is one of the top-paying jobs available for software engineers. One of the cool things about this job is that you don't need an ML-engineering background, which means it's accessible to a lot more people.

Greg Jennings: For developers who are relatively new or code-literate knowledge workers, I think they can now use code to solve problems where previously they might not have. We have designers internally that are now creating full-blown interactive UIs using AI to describe what they want and then providing that to engineers. They've never been able to do that before, and it greatly accelerates the cycle.

For more-experienced developers, I think there are a huge number of things that we still have to sort out: the architectures of these solutions, how we're actually going to implement them in practice. The nature of testing is going to have to change a lot as we start to include these applications in places where they're more mission-critical.

Amini: On the other side, looking at threats that can come out of AI, new technologies and new positions can emerge as well. We don't currently have clear regulations in terms of ownership or the issues related to gen AI, so I imagine there will be more positions in terms of ethics.

Mithal: I feel like a Ph.D. is not a requirement anymore to be a software developer. If you have some foundational ML or NLP knowledge, you can target some of these ML-engineer or AI-engineer roles, which gives you a great opportunity in the market.

Williams: I'm seeing new career paths in specialized fields around ML and LLM operations. For my developers, they're able to focus more on strategy and system design and creative problem-solving, and it seems to help them move faster into architecture. System design, system architecture, and integration strategies: they have more time to do that because of AI.


Jean Paik: What skills will developers need to stay competitive?

Verma: I think a developer operating an AI system requires product-level understanding of what you're trying to build at a high level. And I think a lot of developers struggle with prompt engineering from that perspective. Having the skills to clearly articulate what you want to an LLM is a very important skill.

Williams: Developers need to understand machine-learning concepts and how AI models work, not necessarily how to build and train these models from scratch but how to use them effectively. As we're starting to use Amazon Q, I've realized that our developers are now becoming prompt engineers because you have to get that prompt right in order to get the best results from your gen-AI system.

Jennings: Understanding how to communicate with these models is very different. I almost think that it imparts a need for engineers to have a little bit more of a product lens, where a deeper understanding of the actual business problem they're trying to solve is necessary to get the most out of it. Developing evaluations that you can use to optimize those prompts, so going from prompt engineering to actually tuning the prompts in a more-automated way, is going to emerge as a more common approach.
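
Jennings didn't point to a specific tool, but the loop he describes, scoring prompt variants against a fixed test set and keeping the winner, fits in a few lines. The `call_model` stub below is a placeholder for a real LLM client, and the test set is deliberately tiny:

```python
# Illustrative prompt-evaluation loop: score each prompt variant against a
# small test set and keep the best. `call_model` stands in for a real client.

def call_model(prompt: str, question: str) -> str:
    """Stub that crudely mimics prompt sensitivity for the demo."""
    if "only the final number" in prompt:
        return "4"
    return "The answer is 4."

TEST_CASES = [("What is 2 + 2?", "4")]

PROMPT_VARIANTS = [
    "Answer the question.",
    "Answer the question with only the final number, no explanation.",
]

def score(prompt: str) -> float:
    """Fraction of test cases where the output matches the expected answer."""
    hits = sum(call_model(prompt, q).strip() == expected
               for q, expected in TEST_CASES)
    return hits / len(TEST_CASES)

best = max(PROMPT_VARIANTS, key=score)
print("Best prompt:", best)
```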

Igor Ostrovsky: Prompt engineering is really important. That's how you interact with AI systems, but this is something that's evolving very quickly. Software development will change in five years much more rapidly than anything we've seen before. How you architect, develop, test, and maintain software will all change, and how exactly you interact with AI will also evolve.

I think prompt engineering is more of a sign that some developers have the desire to learn and are eager to figure out how to interact with artificial intelligence, but it won't necessarily be how you interact with AI in three years or five years. Software developers will need this desire to adapt and learn and have the ability to solve hard problems.

Mithal: As a software developer, some of the basics won't change. You need to understand how to scale models, build scalable solutions, and handle large-scale data. When you're training an AI model, you need data to support it.

Kapoor: Knowledge of a programming language would be helpful, specifically Python or even JavaScript. Knowledge of ML or some familiarity with ML will be really helpful. Another thing is that we need to make sure our applications are a lot more fault-tolerant. That is also a skill that front-end or back-end engineers who want to transition to an AI-engineering role need to be aware of.

One of the biggest problems with prompts is that the answers can be very unpredictable and can lead to a lot of different outputs, even for the same prompt. So being able to make your application fault-tolerant is one of the biggest skills we need to apply in AI engineering.
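
Kapoor didn't name a specific technique, but a common way to get that fault tolerance is to validate every model response and retry within a bounded budget, falling back to a safe default rather than crashing. The stub, schema, and thresholds below are illustrative:

```python
# Illustrative fault-tolerant wrapper around an unpredictable LLM call:
# parse, validate, retry a bounded number of times, then fall back safely.

import json

def call_model(prompt: str) -> str:
    """Stand-in for an LLM call; real outputs vary from run to run."""
    return '{"sentiment": "positive", "confidence": 0.92}'

def classify_with_retries(prompt: str, max_attempts: int = 3) -> dict:
    for _ in range(max_attempts):
        raw = call_model(prompt)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # Malformed JSON: retry.
        if parsed.get("sentiment") in {"positive", "negative", "neutral"}:
            return parsed  # Accept only well-formed, expected output.
    return {"sentiment": "unknown", "confidence": 0.0}  # Safe fallback.

print(classify_with_retries("Classify the sentiment of: 'Great service!'"))
```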


Hood: What are the concerns and obstacles you have as AI gains momentum? How do you manage the expectations of nontech stakeholders in the organization who want to stay on the leading edge?

Ostrovsky: Part of the issue is that interacting with ChatGPT or cloud AI is so easy and natural that it can be surprising how hard it is actually to control AI behavior, where you need AI to understand constraints, have access to the right information at the right time, and understand the task.

When setting expectations with stakeholders, it is important they understand that we're working with this very advanced technology and they are realistic about the risk profile of the project.

Mithal: One is helping them understand the trade-offs. It could be security versus innovation or speed versus accuracy. The second is metrics. Is it actually improving efficiency? What is the acceptance rate for a given product? Communicating all of that to stakeholders gives them an idea of whether the product they're using is making an impact or actually helping the team become more productive.

Williams: Some of the challenges I'm seeing are mainly around ethical AI concerns, data privacy, and costly and resource-intensive models that go against budget and infrastructure constraints. On the vendor or stakeholder side, it's really more about educating our nontechnical stakeholders about the capabilities of AI and the limitations and trying to set realistic expectations.

We try to help our teams understand how AI can be applied in their specific business areas: how we can use AI in marketing or HR or legal, with real-world use cases.

Verma: Gen AI is really important, and it's so easy to use ChatGPT, but what we find is that gen AI makes a good developer better and a worse developer worse. Good developers understand how to write good code and how good code integrates into projects. ChatGPT is just another tool to help write some of the code that fits into the project. That's the big challenge that we try to make sure our executives understand, that not everybody can use this in the most effective manner.

Jennings: There are some practical governance concerns that have emerged. One is understanding the tolerance for bad responses in certain contexts. For some problems, you may be more willing to accept a bad response because you structure the interface in such a way that there's a human in the loop. If you're attempting to not have a human in the loop, that could be problematic depending on what you want the model to do. It's about the organization building better muscle, a good intuition about where these models can potentially fail and in what ways.

In addition to that, it's understanding what training data went into the model, especially as models are used more as agents and get privileged access to applications and data sources that might be pretty sensitive.
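
One common way to implement the human-in-the-loop structure Jennings describes is a confidence gate: responses the model is sure about go out automatically, and everything else queues for a person. The threshold and messages below are illustrative, not from the roundtable:

```python
# Illustrative human-in-the-loop gate: auto-send confident answers and
# route low-confidence ones to a reviewer instead.

REVIEW_THRESHOLD = 0.8  # Tune per use case based on tolerance for bad responses.

def route_response(answer: str, confidence: float) -> str:
    """Send confident answers straight through; queue the rest for review."""
    if confidence >= REVIEW_THRESHOLD:
        return f"AUTO-SEND: {answer}"
    return f"HUMAN REVIEW: {answer} (confidence {confidence:.2f})"

print(route_response("Your refund was approved.", 0.95))
print(route_response("Your account may be closed.", 0.55))
```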

Kapoor: I think one of the biggest challenges that can happen is how companies use the data that comes back from LLM models and how they're going to use it within the application. Removing the human component scares me a lot.

Verma: It's automation versus augmentation. There are a lot of cases where augmentation is the big gain. I think automation is a very small, closed case; there are very few things in the world right now that I think LLMs are ready to automate.

