
In an AI-driven world, the employer-employee relationship is poised to change

Illustration: Andrius Banelis for BI

As AI rapidly transforms workplaces, employees are on edge.

Roughly two years after ChatGPT's release sparked widespread interest in generative AI, it's becoming clear that most workers' jobs will fundamentally change, and some may disappear. An analysis by the International Monetary Fund published in January forecast that artificial intelligence would affect nearly 40% of jobs.

But the impact of AI on employment is complex and far-reaching. Some roles may become obsolete; others may be augmented or even created by AI. Workers are simultaneously experiencing anxiety, doubt, and excitement. What new skills will I need to develop? How can I stay relevant? And importantly, is my organization prepared for this AI-driven future?

Whether employees can trust their organization's leaders to navigate these opportunities is a pivotal question, said Brian Solis, the head of global innovation at ServiceNow, a cloud-based automation platform, and author of the book "Mindshift: Transform Leadership, Drive Innovation, and Reshape the Future." He said that while many executives recognize AI's promise in increasing efficiency by automating repetitive tasks, they often fail to grasp the technology's profound potential.

"Leaders talk about the new normal or the next normal, but then they natively snap back to business as usual," Solis said. "It's the leaders who explore and ask: 'What if? Who will unlock entirely new ways of working?'"

Brian Solis is the head of global innovation at ServiceNow.

Photo Courtesy of Brian Solis

Workers themselves have a responsibility to learn and grow, he added. They need to experiment with new technologies both in and outside work and challenge themselves to push beyond their comfort zones. "You need to literally rewire your brain," he said. "If you're waiting for someone to tell you what to do, you're on the wrong side of innovation."

'Workers need to be proactive'

Despite the breathless headlines about AI changing everything about the way we work, the reality is more mundane.

In a quarterly Gallup survey of American workers conducted in May, seven in 10 respondents said they never used AI in their jobs, and only one in 10 said they used it regularly. The survey used a random sample of 21,543 working adults. Among those who said they did use AI, the most common applications included generating ideas, consolidating information, and automating basic tasks.

Still, investment in AI continues to surge. A report from IDC predicted that global spending on AI, covering apps, infrastructure, and related services, would reach $632 billion by 2028, more than double the current level.

Companies are investing in AI to avoid falling behind, said Mansour Javidan, an expert in digital transformation and the executive director of the Najafi Global Mindset Institute at Arizona State's Thunderbird School of Global Management. "There's a lot of hype driven by board expectations, and that's led to a herd mentality to move quickly," he said. "No CEO is going to look bad by investing in AI right now."

Mansour Javidan is the executive director of the Najafi Global Mindset Institute.

Photo Courtesy of Mansour Javidan

Workers, meanwhile, are caught between uncertainty and anticipation. "There's a disconnect," Javidan said. "At the highest levels of the organization, there's a lot of excitement about AI. But among lower- and midlevel employees, there's a good deal of anxiety and ambiguity because there's no clear path."

But "workers mustn't rely on senior executives and hope things will turn out rosy," he said.

Javidan advises employees to seize development opportunities within their organizations and seek out online courses. Many top universities, including MIT and Stanford, provide free classes and workshops to help people build their skills. Grassroots and community-based learning groups, such as Women Defining AI, can be valuable resources.

"Workers need to be proactive and educate themselves," he said.

AI as a strategic collaborator

Beyond formal training and coursework, getting comfortable with AI requires a fundamental mindset shift, experts say.

"We were born with skills like curiosity, wonder, and imagination, but we often unlearned these in schools," Solis said. "The aim with AI should not be to generate expected answers or reinforce existing thinking but to challenge our conventions."

Solis said he uses AI as a tool for perspective taking, asking it to generate responses from the personas of the Apple founder Steve Jobs and Walt Disney. This approach helps him identify blind spots, explore alternative viewpoints, and seek inspiration. "They're my personal coaches," he said.

Molly Sands, the head of the teamwork lab at the software company Atlassian, which studies teamwork in the age of AI and distributed work, recommends viewing AI as a creative partner, not just a task-completion machine. "The people who are saving the most time and seeing the biggest benefits are those who see AI as a strategic collaborator," she said.

Molly Sands is the head of the teamwork lab at Atlassian.

Photo Courtesy of Molly Sands

This involves engaging with AI through dynamic, iterative conversations β€” much like working with a team of experts, she said. A new study by researchers at the MIT Sloan School of Management backed this up, finding that human-AI teams showed the most promise in creative tasks like generating content and imagery and translating software code.

"A lot of people use it for one or two use cases, but the growth we're going to see in the next year or two is the people who think about it more ubiquitously," Sands said. "Agents will be a key driver of this."

Her team at Atlassian, for example, has developed a custom agent designed to help employees write more clearly. Essentially, she said, workers "word-vomit" into the agent with information about their audience, context, and key details. The agent then offers up a tailored draft in the worker's voice.

"Our workdays are consumed by writing emails, creating slide decks, and other routine tasks," Sands said. "If AI can take on some of this load β€” freeing us up for creative thinking and solving meaty problems β€” the better off we'll be."

The value of soft skills

Learning how to work with AI is imperative for most workers, but it's important to recognize that human skills remain essential.

After all, said Hakan Ozcelik, a professor of management at California State University, Sacramento, the value of human workers lies in their cognitive, behavioral, and emotional abilities. "There are all sorts of skills that AI doesn't have yet, and maybe never will," he said.

"Humans are inherently social beings, constantly interacting with customers, colleagues, competitors, and their physical environment," Ozcelik said. "These interpersonal skills are invaluable assets for any organization."

Hakan Ozcelik is a professor of management at California State University, Sacramento.

Photo Courtesy of Hakan Ozcelik

While AI can process information and perform repetitive functions with speed and accuracy, it lacks the soft skills necessary for effective communication and strategic decision-making. A report by Cornerstone, a skills-development platform, said that while generative-AI-related job postings had risen 411% since 2023, demand for soft skills such as leadership, communication, and emotional intelligence outpaced demand for digital skills by a factor of 2.4 in North America and 2.9 in Europe.

This is why Ozcelik advises employees to embark on what he calls "a process of professional soul-searching." Closely analyze your daily activities to determine your unique contributions and core competencies that cannot be outsourced, he said: "Dissect your work and look at what you offer your organization in a given day or a week."

Also, identify areas where AI could offer assistance. For example, teachers may realize that while AI can handle grading for grammar and syntax, they should focus on evaluating students' ideas and nurturing creativity. Similarly, healthcare professionals can leverage AI for administrative tasks or data analysis while dedicating more quality time to patients.

In an AI-driven world, the need for human skills will not change; instead, these skills will become even more vital as workers learn to collaborate effectively with technology, Ozcelik said.

"It's about what you contribute and the value you bring," he said.

Read the original article on Business Insider

How Depop's AI image-recognition tool speeds up selling for 180,000 daily listings

A woman taking a photo of a brown tank top on a clothing hanger
Depop users can buy and sell clothing items on the platform.

Courtesy of Depop

  • Depop's new gen-AI feature creates item descriptions based on photos that users upload.
  • The tool has boosted the number of listings on the company's website and saves sellers time.
  • This article is part of "CXO AI Playbook," straight talk from business leaders on how they're testing and using AI.

Depop is an online fashion marketplace where users can buy and sell secondhand clothing, accessories, and other products. Founded in 2011, the company is headquartered in London and has 35 million registered users. It was acquired by Etsy, an online marketplace, in 2021.

Situation analysis: What problem was the company trying to solve?

Depop's business model encourages consumers to "participate in the circular economy rather than buying new," Rafe Colburn, its chief product and technology officer, told Business Insider. However, listing items to sell on the website and finding products to buy take time and effort, which he said can be a barrier to using Depop.

"By reducing that effort, we can make resale more accessible to busy people," he said.

To improve user experience, Depop has unveiled several features powered by artificial intelligence and machine learning, including pricing guidance to help sellers list items more quickly and personalized algorithms to help buyers identify trends and receive product recommendations.

In September, Depop launched a description-generation feature using image recognition and generative AI. The tool automatically creates a description for an item once sellers upload a product image to the platform.

"What we've tried to do is make it so that once people have photographed and uploaded their items, very little effort is required to complete their listing," Colburn said. He added that the AI description generator is especially useful for new sellers who aren't as familiar with listing on Depop.

Rafe Colburn is the chief product and technology officer of Depop.

Courtesy of Depop

Key staff and stakeholders

The AI description-generation feature was developed in-house by Depop's data science team, which trained large language models to create it. The team worked closely with product managers.

Colburn said that in 2022, the company moved its data science team from the engineering group to the product side of the business, which has enabled Depop to release features more quickly.

AI in action

To use the description generator, sellers upload an image of the item they want to list to the Depop platform and click a "generate description" button. Using image recognition and gen AI, the system generates a product description and populates item-attribute fields on the listing page, including category, subcategory, color, and brand.

The technology incorporates relevant hashtags and colloquial language to appeal to buyers, Colburn said. "We've done a lot of prompt engineering and fine-tuning to make sure that the tone and style of the descriptions that are generated really fit the norms of Depop," he added.

Sellers can use the generated description as is or adjust it. Even if they modify descriptions, sellers still save time compared to starting with "an empty box to work with," Colburn said.
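Colburn doesn't detail the underlying models, but the flow he describes, a product photo in and a draft description plus prefilled attribute fields out, can be sketched as a thin wrapper around a vision-capable generative model. The Python below is purely illustrative; the prompt wording, field names, and the stubbed call_vision_model function are assumptions rather than Depop's actual pipeline.

```python
# Hypothetical sketch of an image-to-listing flow like the one Colburn describes.
# The prompt, schema, and stubbed model call are assumptions; Depop's in-house
# models and prompts are not public.
import json
from dataclasses import dataclass


@dataclass
class ListingDraft:
    description: str
    category: str
    subcategory: str
    color: str
    brand: str


LISTING_PROMPT = (
    "Look at the product photo and write a short, casual, Depop-style description "
    "with a few relevant hashtags. Also identify the item's category, subcategory, "
    "color, and brand. Respond as JSON with exactly these keys: "
    "description, category, subcategory, color, brand."
)


def call_vision_model(image_bytes: bytes, prompt: str) -> str:
    """Stand-in for an image-recognition plus gen-AI backend that returns JSON text."""
    raise NotImplementedError("Plug in a real vision-capable model here.")


def draft_listing(image_bytes: bytes) -> ListingDraft:
    """Turn an uploaded product photo into a prefilled listing draft."""
    raw = call_vision_model(image_bytes, LISTING_PROMPT)
    fields = json.loads(raw)
    # The seller can still edit the description and attribute fields before publishing.
    return ListingDraft(**fields)
```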

Did it work, and how did leaders know?

Depop has about 180,000 new listings every day. Since rolling out the AI-powered description generation in September, the company has seen "a real uplift in listings created, listing time, and completeness of listings," Colburn said. However, as the tool was launched recently, a company spokesperson said that specific data was not yet available.

"Aside from the direct user benefits in terms of efficiency and listing quality, we have also really demonstrated to ourselves that users value features that use generative AI to reduce effort on their end," Colburn said.

Ultimately, Depop wants sellers to list more items, and the company's goal is to make it easier to do so, he added. Automating the process with AI means sellers can list items quicker, which Colburn said would create a more robust inventory on the platform, lead to more sales, and boost the secondhand market.

What's next?

Colburn said Depop continues to look for ways to apply AI to address users' needs.

For example, taking high-quality photos of items is another challenge for sellers. It's labor-intensive but important, as listings with multiple high-quality photos of garments are more likely to sell. He said Depop was exploring ways to make this easier and enhance image quality with AI.

For buyers, a common challenge is finding items that fit. Depop is also looking into how AI can help shoppers feel more confident that the clothing they purchase will fit, which would improve their overall satisfaction with the platform, Colburn said.

Read the original article on Business Insider

Shutterstock earned over $100 million in revenue thanks in part to its AI-powered image-generator tool

A digital camera with a big lens sits on a desk and a person edits an image on a desktop computer in the background.
Shutterstock's approach to AI integration focused on the user experience.

dusanpetkovic/Getty Images

  • Shutterstock added gen AI to its stock-content library to generate $104 million in revenue.
  • The company has partnered with tech giants including Meta, Amazon, Apple, OpenAI, and Nvidia.
  • This article is part of "CXO AI Playbook," straight talk from business leaders on how they're testing and using AI.

Shutterstock, founded in 2003 and based in New York, is a global leader in licensed digital content. It offers stock photos, videos, and music to creative professionals and enterprises.

In late 2022, Shutterstock made a strategic decision to embrace generative AI, becoming one of the first stock-content providers to integrate the tech into its platform.

Dade Orgeron, the vice president of innovation at Shutterstock, leads the company's artificial-intelligence initiatives. During his tenure, Shutterstock has transitioned from a traditional stock-content provider into one that provides several generative-AI services.

While Shutterstock's generative-AI offerings are focused on images, the company has an application programming interface for generating 3D models and plans to offer video generation.

Situation analysis: What problem was the company trying to solve?

When the first mainstream image-generation models, such as DALL-E, Stable Diffusion, and Midjourney, were released in late 2022, Shutterstock recognized generative AI's potential to disrupt its business.

"It would be silly for me to say that we didn't see generative AI as a potential threat," Orgeron said. "I think we were fortunate at the beginning to realize that it was more of an opportunity."

He said Shutterstock embraced the technology ahead of many of its customers. He recalled attending CES in 2023 and said that many creative professionals there were unaware of generative AI and the impact it could have on the industry.

Orgeron said that many industry leaders he encountered had the misconception that generative AI would "come in and take everything from everyone," a perspective he found pessimistic. Shutterstock, by contrast, recognized early that AI-powered prompting "was design," he told Business Insider.

Key staff and stakeholders

Orgeron's position as vice president of innovation made him responsible for guiding the company's generative-AI strategy and development.

However, the move toward generative AI was preceded by earlier acquisitions. Orgeron himself joined the company in 2021 as part of its acquisition of TurboSquid, a company focused on 3D assets.

Dade Orgeron is the vice president of innovation at Shutterstock.

Photo courtesy of Dade Orgeron

Shutterstock also acquired three AI companies that same year: Pattern89, Datasine, and Shotzr. While they primarily used AI for data analytics, Orgeron said the expertise Shutterstock gained from these acquisitions helped it move aggressively on generative AI.

Externally, Shutterstock established partnerships with major tech companies including Meta, Alphabet, Amazon, Apple, OpenAI, Nvidia, and Reka. For example, Shutterstock's partnership with Nvidia enabled its generative 3D service.

AI in action

Shutterstock's approach to AI integration focused on the user experience.

Orgeron said the company's debut in image generation was "probably the easiest-to-use solution at that time," with a simple web interface that made AI image generation accessible to creative professionals unfamiliar with the technology.

That stood in contrast to competitors such as Midjourney and Stable Diffusion, which, at the time Shutterstock launched its service in January 2023, had only basic user interfaces. Midjourney, for instance, was initially available only through Discord, an online chat service more often used for communication in multiplayer games.

This focus on accessibility set the stage for Shutterstock.AI, the company's dedicated AI-powered image-generation platform. While Shutterstock designed the tool's front end and integrated it into its online offerings, the images it generates rely on a combination of internally trained AI models and solutions from external partners.

Shutterstock.AI, like other image generators, lets customers request their desired image with a text prompt and then choose a specific image style, such as a watercolor painting or a photo taken with a fish-eye lens.

However, unlike many competitors, Shutterstock uses information about user interactions to decide on the most appropriate model to meet the prompt and style request. Orgeron said Shutterstock's various models provide an edge over other prominent image-generation services, which often rely on a single model.
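The article doesn't spell out how that routing works, but a simple version of style- and interaction-based model selection can be sketched in a few lines. The Python below is a hypothetical illustration; the model identifiers, routing table, and scoring rule are invented, and Shutterstock's actual logic is not public.

```python
# Hypothetical sketch of routing an image-generation request to one of several
# backend models based on the requested style and past user-interaction signals.
# Model names and thresholds are invented for illustration.

STYLE_ROUTES = {
    "watercolor": "internal-painterly-v2",
    "fisheye": "partner-photo-model",
    "photo": "partner-photo-model",
}
DEFAULT_MODEL = "internal-general-v1"


def route_request(prompt: str, style: str, engagement: dict[str, float]) -> str:
    """Pick a backend model for a prompt/style pair.

    `engagement` maps model id to a historical user-interaction score (for example,
    how often users kept images that model produced), standing in for the
    interaction data the article says informs routing.
    """
    candidate = STYLE_ROUTES.get(style.lower(), DEFAULT_MODEL)
    # Crude prompt inspection: explicitly photorealistic requests go to the photo model.
    if "photorealistic" in prompt.lower():
        candidate = "partner-photo-model"
    # If another model has clearly outperformed the chosen one, prefer it.
    best = max(engagement, key=engagement.get, default=candidate)
    if engagement.get(best, 0.0) > engagement.get(candidate, 0.0) + 0.1:
        return best
    return candidate


if __name__ == "__main__":
    scores = {"internal-painterly-v2": 0.42, "partner-photo-model": 0.38}
    print(route_request("a lighthouse at dawn", "watercolor", scores))  # internal-painterly-v2
```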

But generative AI posed risks to Shutterstock's core business and to the photographers who contribute to the company's library. To curb this, Orgeron said, all of its AI models, whether internal or from partners, are trained exclusively on Shutterstock's legally owned data. The company also established a contributor fund to compensate content creators whose work was used in the models' training.

Orgeron said initial interest in Shutterstock.AI came from individual creators and small businesses. Enterprise customers followed more cautiously, taking time to address legal concerns and establish internal AI policies before adopting the tech. However, Orgeron said, enterprise interest has accelerated as companies recognize AI's competitive advantages.

Did it work, and how did leaders know?

Paul Hennessy, the CEO of Shutterstock, said in June the company earned $104 million in annual revenue from AI licensing agreements in 2023. He also projected that this revenue could reach up to $250 million annually by 2027.

Looking ahead, Shutterstock hopes to expand AI into its video and 3D offerings. The company's generative 3D API is in beta. While it doesn't offer an AI video-generation service yet, Orgeron said Shutterstock plans to launch a service soon. "The video front is where everyone is excited right now, and we are as well," he said. "For example, we see tremendous opportunity in being able to convert imagery into videos."

The company also sees value in AI beyond revenue figures. Orgeron said Shutterstock is expanding its partnerships, which now include many of the biggest names in Silicon Valley. In some cases, partners allow Shutterstock to use their tech to build new services; in others, they license data from Shutterstock to train AI.

"We're partnered with Nvidia, with Meta, with HP. These are great companies, and we're working closely with them," he said. "It's another measure to let us know we're on the right track."

Read the original article on Business Insider

The rise of the "AI engineer" and what it means for the future of tech jobs

Three software developers sitting next to each other in a row and looking at their laptops.
Some software developers are transitioning to AI jobs at their companies.

Maskot/Getty Images

  • AI is opening new career tracks for software developers who want to shift to different roles.
  • Developers at an AI roundtable said that the tech job market is fluctuating rapidly with gen AI.
  • This article is part of "CXO AI Playbook," straight talk from business leaders on how they're testing and using AI.

A few years ago, Kesha Williams was prepared to step away from her tech career, but then the AI boom brought her back.

"I've been in tech for 30 years, and before gen AI, I was ready to retire," she said. "I think I'll stay around just to see where this goes." Williams is the head of enterprise architecture and engineering at Slalom.

Williams and six other developers from companies including Amazon, Meta, and Anaconda joined Business Insider's virtual roundtable in November to discuss how AI is changing the software-development landscape.

While hiring and recruitment in many tech jobs are dropping with the increased adoption of AI coding tools, developers say AI is also opening new career opportunities.

A new career path

Panelists said that the emergence of jobs focused on building AI models and features is a recent development in the industry.

"One of the biggest things I've noticed in the last two to three years is the rise of a job title called 'AI engineer,' which did not exist before, and it's kind of in between a machine-learning engineer and a traditional software engineer," Shruti Kapoor, a lead member of technical staff at Slack, said. "I'm starting to see more and more companies where 'AI engineer' is one of the top-paying jobs available for software engineers."

Salary data from Levels.fyi, an online platform that lets tech workers compare their compensation packages, shows that over the past two years, entry-level AI engineers have earned about 8% more than their non-AI counterparts, and senior AI engineers nearly 11% more.

Neeraj Verma, the head of applied AI at Nice, said at the roundtable that AI has enabled software engineers at his company to transition internally to AI roles. He said that over 20% of the developers at Nice have moved to AI-related positions in the past two years, with about 100 of those individuals considered experts in prompt engineering.

Verma said the company's developers are not just being supported by AI; they are actively involved in using the technology to build other AI features.

He added that many senior-level developers with strong coding abilities at the company have shown interest in moving to AI to apply their skill sets in new ways. Nice created training programs to help these employees learn the technology and make internal career shifts.

AI-specialized jobs encompass machine-learning engineers, prompt engineers, and AI researchers, among other roles. Although the skills that would be useful for each of these jobs can differ, Kapoor said that an AI engineering role does not necessarily require a specific tech background. Workers with prior experience in sectors like accounting and product management, for instance, have been able to pivot into the AI space.

Adapting to change

Just as AI is changing the software development process, developers say that the professional opportunities in AI could also be in constant flux.

"Software development will change in five years much more rapidly than anything we've seen before," Igor Ostrovsky, the cofounder of Augment, said at the roundtable. "How you architect, develop, test, and maintain software β€” that will all change, and how exactly you interact with AI will also evolve."

Researchers are already questioning the long-term potential of prompt engineering jobs, which skyrocketed in demand in 2023. They say that generative AI models could soon be trained to optimize their own prompts.

"I think prompt engineering is more of a sign that some developers have the desire to learn and are eager to figure out how to interact with artificial intelligence, but it won't necessarily be how you interact with AI in three years or five years," Ostrovsky said.

The pace of technological development means that software developers' ability to learn, adapt, and solve problems creatively will be more important than ever to stay ahead of the curve.

Read the original article on Business Insider

With AI adoption on the rise, developers face a challenge — handling risk

A computer programmer or software developer working in an office
Software developers can be involved in communicating expectations for gen AI to stakeholders.

Maskot/Getty Images

  • At an AI roundtable in November, developers said AI tools were playing a key role in coding.
  • They said that while AI could boost productivity, stakeholders should understand its limitations.
  • This article is part of "CXO AI Playbook," straight talk from business leaders on how they're testing and using AI.

At a Business Insider roundtable in November, Neeraj Verma, the head of applied AI at Nice, argued that generative AI "makes a good developer better and a worse developer worse."

He added that some companies expect employees to be able to use AI to create a webpage or HTML file and simply copy and paste solutions into their code. "Right now," he said, "they're expecting that everybody's a developer."

During the virtual event, software developers from companies such as Meta, Slack, Amazon, and Slalom discussed how AI has influenced their roles and career paths.

They said that while AI could help with tasks like writing routine code and translating ideas between programming languages, foundational coding skills are necessary to use the AI tools effectively. Communicating these realities to nontech stakeholders is a primary challenge for many software developers.

Understanding limitations

Coding is just one part of a developer's job. As AI adoption surges, testing and quality assurance may become more important for verifying the accuracy of AI-generated work. The US Bureau of Labor Statistics projects that the number of software developers, quality-assurance analysts, and testers will grow by 17% in the next decade.

Expectations for productivity can overshadow concerns about AI ethics and security.

"Interacting with ChatGPT or Cloud AI is so easy and natural that it can be surprising how hard it is to control AI behavior," Igor Ostrovsky, a cofounder of Augment, said during the roundtable. "It is actually very difficult to, and there's a lot of risk in, trying to get AI to behave in a way that consistently gives you a delightful user experience that people expect."

Companies have faced some of these issues in recent AI launches. Microsoft's Copilot was found to have problems with oversharing and data security, though the company created internal programs to address the risk. Tech giants are investing billions of dollars in AI technology (Microsoft alone plans to spend over $100 billion on graphics processing units and data centers to power AI by 2027) but not as much in AI governance, ethics, and risk analysis.

AI integration in practice

For many developers, managing stakeholders' expectations β€” communicating the limits, risks, and overlooked aspects of the technology β€” is a challenging yet crucial part of the job.

Kesha Williams, the head of enterprise architecture and engineering at Slalom, said in the roundtable that one way to bridge this conversation with stakeholders is to outline specific use cases for AI. Focusing on the technology's applications could highlight potential pitfalls while keeping an eye on the big picture.

"Good developers understand how to write good code and how good code integrates into projects," Verma said. "ChatGPT is just another tool to help write some of the code that fits into the project."

Ostrovsky predicted that the ways employees engage with AI would change over the years. In the age of rapidly evolving technology, he said, developers will need to have a "desire to adapt and learn and have the ability to solve hard problems."

Read the original article on Business Insider

Empowering a multigenerational workforce for AI

Workforce Innovation Series: Marjorie Powell on light blue background with grid
Marjorie Powell.

AARP

  • Marjorie Powell, AARP's CHRO, is a member of BI's Workforce Innovation board.
  • Powell says creating a collaborative learning environment is key to helping employees adapt to AI.
  • This article is part of "Workforce Innovation," a series exploring the forces shaping enterprise transformation.

As the chief human resources officer at AARP, Marjorie Powell devotes much of her professional energy to meeting the needs of the multigenerational workforce. These days, much of that involves navigating AI's impact to ensure every employee at the nonprofit is prepared for the technological changes shaping the workplace.

"Our goal in everything we do for our employees is to provide the resources, support, and capabilities they need to make good decisions within the company's guidelines," she said. "We take the same approach with AI."

Powell's mission extends beyond AARP's workforce. As an advocate for the 50-and-over demographic, she champions the adaptability and contributions of older workers in a tech-driven economy.

"There's an assumption that people over a certain age are not comfortable with technology, but what's overlooked is that many older people β€” particularly those at the end of the baby boomer generation β€” were at the forefront of this technological revolution," she said.

The following has been edited for length and clarity.

How did AARP handle the introduction of AI in its workforce?

We decided to use Copilot because we're already a Microsoft company. We got enough licenses to set up a working group with key people we thought would be super users. The idea was to experiment with AI tools and see how they fit into our workflows.

We wanted to learn and figure out what works and what doesn't. Then, we could make a decision about how we were going to roll it out to the company, since one, it's costly; and two, we wanted people to feel comfortable with it.

What were some of the outcomes of the working group, and how did those results shape the way AARP approached training and support?

We issued a policy, a generative AI use case approval process, and a mandatory training for all staff to complete to learn how to use gen AI in the workplace. The training focused on internal and external use and the types of information that can be shared, public versus private, and so on.

We encouraged our staff to 'Go out there and play with it.' We then surveyed them and asked, What are you using it for? What are some great use cases you've developed? How's it helping you enhance your productivity? How are you using this tool to further the AARP mission?

We also considered what existing structure we could use to encourage staff to use AI and explore the technology. We already had a structure in place called Communities of Practice: groups where employees learn and share. It's like an employee resource group (ERG), but focused on learning and development within industry, so we used this model to create an AI Community of Practice.

What are some of the 'great use cases' for AI for your HR team specifically?

We get a lot of calls and emails on simple things about AARP benefits and policies. People ask questions like, "I'm having knee surgery next month. How do I sign up for FMLA?" or "Where do I find my W-2?" or "I bought a Peloton. Is that eligible for the fitness credit?" So we started building an HR chatbot to provide that kind of information. It's much easier for employees to ask the chatbot instead of overwhelming a team member with those queries.

We're currently piloting the chatbot with 300-400 frequently asked questions and answers preloaded. It directs employees to the right information without them having to dig and helps us understand what additional information we need to include.
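Powell doesn't go into the chatbot's internals, but a pilot built on a few hundred preloaded question-and-answer pairs can be approximated as a small retrieval step over that list. The Python below is a hypothetical sketch; the matching method and the sample entries are illustrative placeholders, not AARP's implementation or policies.

```python
# Hypothetical sketch of an FAQ-style HR chatbot: a few hundred preloaded
# question/answer pairs and a retrieval step that returns the closest match.
# The entries and the similarity method are placeholders.
from difflib import SequenceMatcher

FAQ = [
    ("How do I sign up for FMLA?", "See the Leave of Absence page on the HR intranet."),
    ("Where do I find my W-2?", "W-2s are posted each January under Payroll > Tax Documents."),
    ("Is a Peloton eligible for the fitness credit?", "Check the wellness reimbursement policy and submit the receipt form."),
]


def answer(question: str) -> str:
    """Return the stored answer whose question most resembles the employee's question."""
    _, best_answer = max(
        FAQ,
        key=lambda qa: SequenceMatcher(None, question.lower(), qa[0].lower()).ratio(),
    )
    return best_answer


if __name__ == "__main__":
    print(answer("how do I get my w2"))  # Matches the W-2 entry.
```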

Many employers are using AI tools in hiring, but there are concerns about potential bias. What's your perspective on this?

We use AI for sourcing candidates. All AARP recruiters are certified to conduct Boolean searches to increase the accuracy of identifying talent with specific skill sets in the marketplace.

But when it comes to screening and interviewing, we don't use AI. We find that the technology is still very biased, specifically when it comes to age. Until the technology matures enough to minimize bias, I don't believe it's a good idea to use it without that human component of judgment.

Speaking of age, what are your thoughts on ageism in the workplace today, especially from companies hesitant to hire older workers?

Companies don't want to be the kind of organization that isn't welcoming to talent, regardless of age. Due to the economy and the rising cost of healthcare, many people in the 50-plus community are re-entering the workforce.

Many in that age group have valuable skills and experience and are eager to return. They often say, 'I don't need to be in a leadership role. Been there, done that. I just want to help and be of use.' They also naturally take on mentorship roles, as people seek their guidance. By embracing this segment of the workforce, companies can gain huge value.

What do employers misunderstand about older workers and technology?

Baby boomers were at the forefront of the technology era, and they're more comfortable with technology than many people realize. In fact, they are among the largest consumers of technology products. Tech companies really need to pay attention to this demographic.

I look at myself (I'm about to turn 60), and I was selling Commodore 64s when I was in high school. I've seen everything from floppy disks to CDs, to cassette tapes, to 8-tracks, to digital streaming and everything else. I've experienced all versions of technology, and I've adapted. I'm still willing to adapt, and I'm still learning.

Read the original article on Business Insider

AI can be a job promotion, not a job replacement, says AWS vice president

Swami Sivasubramanian in front of a blue backdrop
Swami Sivasubramanian, VP of AI and Data at Amazon Web Services, shares how AI can change the future of work.

Amazon

  • Swami Sivasubramanian is the VP of AI and Data at Amazon Web Services.
  • He says that while AI may cause short-term job displacement, it offers long-term productivity gains.
  • He suggests that workers use AI to take over mundane tasks so they can do more valuable work.

The era of generative AI has arrived, bringing both promise and caution. Many people are wondering if AI can coexist peacefully and productively alongside a broad human workforce with diverse talents, skills, and abilities.

I believe that AI isn't coming to take away jobs; it's coming to take away tasks. Eighty percent of employees' time is consumed by low-value, repetitive, time-intensive, uninteresting tasks, while only 20% is devoted to the more interesting activities that generate higher value for the organization.

What if we could flip that ratio? What if we could free humans to solve, build, and create? That's what I believe AI can do. In the process, this new AI paradigm can unlock the value of an accelerated and more fulfilling career. I strongly believe that what we invent today can lead to a profound impact on the world, changing industries and people's lives.

Marie Kondo-ing with AI

Thoughtfully deployed, generative AI can remove drudgery and help people find more meaning in their work. It can free you to work on the parts of your job that are more interesting and more valuable: the reasons you got into your profession in the first place.

I think the Marie Kondo principle applies: If the task doesn't spark joy, let AI take it from you. Our goal must be to kindle (or rekindle) our joy, to bring out curiosity and creativity, and to reimagine what's possible, now that we're no longer burdened with an assortment of mundane tasks. In a sense, AI can give each of us a job promotion rather than a job replacement.

There will be short-term displacement but it will self-correct

Of course, the wide-scale adoption of Gen AI will have impacts and implications, and it would be foolhardy to ignore them. Increased productivity and greater cost efficiencies will inevitably lead to short-term workforce displacement; contact centers with faster resolution times, for example, need fewer workers.

However, I believe that AI will also play a self-correcting role in such a macroeconomic picture. Efforts must be made to close wage disparities and potential economic or opportunity gaps. Community colleges should offer guided and hands-on training to ensure AI is accessible to the broadest areas of our workforce. Similarly, technology companies must offer low-cost or free training and certification programs to promote AI's widespread adoption and use.

I believe AI can cut the time for this upskilling process in half and foresee a world where nearly anyone can be an app builder and creator or where a junior technician can do senior-level repairs.

Decisions should still be made by humans

Whether it's conversational search, agent assistants, image creation, or other forms of intelligent automation, AI becomes a supportive foundation that translates into time: time to evaluate, investigate, strategize, and solve problems.

AI will give us access to a nearly limitless set of highly accurate, data-driven predictions. Nonetheless, decisions shouldn't be automated. They should remain the sole province of humans, who have a better understanding of tradeoffs, nuances, and strategies.

Here are some examples of how humans can work alongside AI:

  • Customer Care: Gen AI can provide agents with personalized, real-time responses and prompts based on customer questions and interactions. Agents then exercise their judgment to use them.
  • Manufacturing Design: Engineers can use AI to create digital twins to simulate interactions and their effects with far greater speed and far less expense before they decide on a design.
  • Consumer Behavior: Gen AI can predict the preferences and responses of groups or individual buyers. This will allow marketers to focus on how to optimize campaigns and offers.
  • Drug Discovery: With Gen AI, scientists can slash the time required for drug discovery, accelerate therapy development, and find vaccines that are cheaper and more accessible.
  • Media: By automating the difficult art and production aspects of games and entertainment, Gen AI frees designers to create and ideate more. It can even create personalized gaming experiences.
  • Financial Services: Gen AI can strengthen fraud detection and compliance while increasing the efficiency of loan officers making lending decisions.

By being able to focus on decisions and outcomes, we unlock new creativity that we can channel to solving bigger and harder problems. With this new era of generative AI discovery, there has never been a better time to transform businesses and work as we know it.

Dr. Swami Sivasubramanian is the Vice President of AI & Data at AWS. His team's mission is to help organizations leverage the power of AI and data to solve their most urgent business needs.

If you're an AI expert and would like to share your opinions on the impact of AI on the future of work, email Jane Zhang at [email protected].

Read the original article on Business Insider

AI adoption is surging — but humans still need to be in the loop, say software developers from Meta, Amazon, Nice, and more

Photo collage featuring headshots of Greg Jennings, Aditi Mithal, Pooya Amini, Shruti Kapoor, Neeraj Verma, Kesha Williams, Igor Ostrovsky
Top Row: Greg Jennings, Aditi Mithal, Pooya Amini, and Shruti Kapoor. Bottom Row: Neeraj Verma, Kesha Williams, and Igor Ostrovsky.

Alyssa Powell/BI

This article is part of "CXO AI Playbook," straight talk from business leaders on how they're testing and using AI.

The future of software-development jobs is changing rapidly as more companies adopt AI tools that can accelerate the coding process and close experience gaps between junior- and senior-level developers.

Increased AI adoption could be part of the tech industry's "white-collar recession," which has seen slumps in hiring and recruitment over the past year. Yet integrating AI into workflows can offer developers the tools to focus on creative problem-solving and building new features.

On November 14, Business Insider convened a roundtable of software developers as part of our "CXO AI Playbook" series to learn how artificial intelligence was changing their jobs and careers. The conversation was moderated by Julia Hood and Jean Paik from BI's Special Projects team.

These developers discussed the shifts in their day-to-day tasks, which skills people would need to stay competitive in the industry, and how they navigate the expectations of stakeholders who want to stay on the cutting edge of this new technology.

Panelists said AI has boosted their productivity by helping them write and debug code, which has freed up their time for higher-order problems, such as designing software and devising integration strategies.

However, they emphasized that some of the basics of software engineering (learning programming languages, scaling models, and handling large-scale data) would remain important.

The roundtable participants also said developers could provide critical insight into challenges around AI ethics and governance.

The roundtable participants were:

  • Pooya Amini, software engineer, Meta.
  • Greg Jennings, head of engineering for AI, Anaconda.
  • Shruti Kapoor, lead member of technical staff, Slack.
  • Aditi Mithal, software-development engineer, Amazon Q.
  • Igor Ostrovsky, cofounder, Augment.
  • Neeraj Verma, head of applied AI, Nice.
  • Kesha Williams, head of enterprise architecture and engineering, Slalom.

The following discussion was edited for length and clarity.


Julia Hood: What has changed in your role since the popularization of gen AI?

Neeraj Verma: I think the expectations that are out there in the market for developers on the use of AI are actually almost a bigger impact than the AI itself. You hear about how generative AI is sort of solving this blank-paper syndrome. Humans have this concept that if you give them a blank paper and tell them to go write something, they'll be confused forever. And generative AI is helping overcome that.

The expectation from executives now is that developers are going to be significantly faster but that some of the creative work the developers are doing is going to be taken away, which we're not necessarily seeing. We're seeing it as more of a boilerplate creation mechanism for efficiency gains.

Aditi Mithal: I joined Amazon two years ago, and I've seen how my productivity has changed. I don't have to focus on doing repetitive tasks. I can just ask Amazon Q chat to do that for me, and I can focus on more-complex problems that can actually impact our stakeholders and our clients. I can focus on higher-order problems instead of more-repetitive tasks for which the code is already out there internally.

Shruti Kapoor: One of the big things I've noticed with writing code is how open companies have become to AI tools like Cursor and Copilot and how integrated they've become into the software-development cycle. It's no longer considered a no-no to use AI tools like ChatGPT. I think two years ago when ChatGPT came out, it was a big concern that you should not be putting your code out there. But now companies have kind of embraced that within the software-development cycle.

Pooya Amini: Looking back at smartphones and Google Maps, it's hard to remember what the world looked like before these technologies. It's a similar situation with gen AI: I can't remember how I was solving problems without it. I can focus more on actual work.

Now I use AI as a kind of assisted tool. My main focus at work is on requirement gathering, like software design. When it comes to the coding, it's going to be very quick. Previously, it could take weeks. Now it's a matter of maybe one or two days, so then I can actually focus on other stuff as AI is solving the rest for me.

Kesha Williams: In my role, it's been trying to help my team rethink their roles and not see AI as a threat but more as a partner that can help boost productivity, and encouraging my team to make use of some of the new embedded AI and gen-AI tools. Really helping my team upskill and putting learning paths in place so that people can embrace AI and not be afraid of it. More of the junior-level developers are really afraid about AI replacing them.


Hood: Are there new career tracks opening up now that weren't here before?

Verma: At Nice, we have something like 3,000 developers, and over the last, I think, 24 months, 650 of them have shifted into AI-specific roles, which was sort of unheard of before. Even out of those 650, we've got about a hundred who are experts at things like prompt engineering. Over 20% of our developers are not just developers being supported by AI but developers using AI to write features.

Kapoor: I think one of the biggest things I've noticed in the last two to three years is the rise of a job title called "AI engineer," which did not exist before, and it's kind of in between an ML engineer and a traditional software engineer. I'm starting to see more and more companies where AI engineer is one of the top-paying jobs available for software engineers. One of the cool things about this job is that you don't need an ML-engineering background, which means it's accessible to a lot more people.

Greg Jennings: For developers who are relatively new or code-literate knowledge workers, I think they can now use code to solve problems where previously they might not have. We have designers internally that are now creating full-blown interactive UIs using AI to describe what they want and then providing that to engineers. They've never been able to do that before, and it greatly accelerates the cycle.

For more-experienced developers, I think there are a huge number of things that we still have to sort out: the architectures of these solutions, how we're actually going to implement them in practice. The nature of testing is going to have to change a lot as we start to include these applications in places where they're more mission-critical.

Amini: On the other side, looking at threats that can come out of AI, new technologies and new positions can emerge as well. We don't currently have clear regulations in terms of ownership or the issues related to gen AI, so I imagine there will be more positions in terms of ethics.

Mithal: I feel like a Ph.D. is not a requirement anymore to be a software developer. If you have some foundational ML, NLP knowledge, you can target some of these ML-engineer or AI-engineer roles, which gives you a great opportunity to be in the market.

Williams: I'm seeing new career paths in specialized fields around ML and LLM operations. For my developers, they're able to focus more on strategy and system design and creative problem-solving, and it seems to help them move faster into architecture. System design, system architecture, and integration strategies: they have more time to do that because of AI.


Jean Paik: What skills will developers need to stay competitive?

Verma: I think a developer operating an AI system requires product-level understanding of what you're trying to build at a high level. And I think a lot of developers struggle with prompt engineering from that perspective. Having the skills to clearly articulate what you want to an LLM is a very important skill.

Williams: Developers need to understand machine-learning concepts and how AI models work, not necessarily how to build and train these models from scratch but how to use them effectively. As we're starting to use Amazon Q, I've realized that our developers are now becoming prompt engineers because you have to get that prompt right in order to get the best results from your gen-AI system.

Jennings: Understanding how to communicate with these models is very different. I almost think that it imparts a need for engineers to have a little bit more of a product lens, where a deeper understanding of the actual business problem they're trying to solve is necessary to get the most out of it. Developing evaluations that you can use to optimize those prompts, so going from prompt engineering to actually tuning the prompts in a more-automated way, is going to emerge as a more common approach.

Igor Ostrovsky: Prompt engineering is really important. That's how you interact with AI systems, but this is something that's evolving very quickly. Software development will change in five years much more rapidly than anything we've seen before. How you architect, develop, test, and maintain software will all change, and how exactly you interact with AI will also evolve.

I think prompt engineering is more of a sign that some developers have the desire to learn and are eager to figure out how to interact with artificial intelligence, but it won't necessarily be how you interact with AI in three years or five years. Software developers will need this desire to adapt and learn and have the ability to solve hard problems.

Mithal: As a software developer, some of the basics won't change. You need to understand how to scale models, build scalable solutions, and handle large-scale data. When you're training an AI model, you need data to support it.

Kapoor: Knowledge of a programming language would be helpful, specifically Python or even JavaScript. Knowledge of ML or some familiarity with ML will be really helpful. Another thing is that we need to make sure our applications are a lot more fault-tolerant. That is also a skill that front-end or back-end engineers who want to transition to an AI-engineering role need to be aware of.

One of the biggest problems with prompts is that the answers can be very unpredictable and can lead to a lot of different outputs, even for the same prompt. So being able to make your application fault-tolerant is one of the biggest skills we need to apply in AI engineering.


Hood: What are the concerns and obstacles you have as AI gains momentum? How do you manage the expectations of nontech stakeholders in the organization who want to stay on the leading edge?

Ostrovsky: Part of the issue is that interacting with ChatGPT or cloud AI is so easy and natural that it can be surprising how hard it is actually to control AI behavior, where you need AI to understand constraints, have access to the right information at the right time, and understand the task.

When setting expectations with stakeholders, it is important they understand that we're working with this very advanced technology and they are realistic about the risk profile of the project.

Mithal: One is helping them understand the trade-offs. It could be security versus innovation or speed versus accuracy. The second is metrics. Is it actually improving the efficiency? How much is the acceptance rate for our given product? Communicating all those to the stakeholders gives them an idea of whether the product they're using is making an impact or if it's actually helping the team become more productive.

Williams: Some of the challenges I'm seeing are mainly around ethical AI concerns, data privacy, and costly and resource-intensive models that go against budget and infrastructure constraints. On the vendor or stakeholder side, it's really more about educating our nontechnical stakeholders about the capabilities of AI and the limitations and trying to set realistic expectations.

We try to help our teams understand for their specific business area how AI can be applied. So how can we use AI in marketing or HR or legal, and giving them real-world use cases.

Verma: Gen AI is really important, and it's so easy to use ChatGPT, but what we find is that gen AI makes a good developer better and a worse developer worse. Good developers understand how to write good code and how good code integrates into projects. ChatGPT is just another tool to help write some of the code that fits into the project. That's the big challenge that we try to make sure our executives understand, that not everybody can use this in the most effective manner.

Jennings: There are some practical governance concerns that have emerged. One is understanding the tolerance for bad responses in certain contexts. Some problems, you may be more willing to accept a bad response because you structure the interface in such a way that there's a human in the loop. If you're attempting to not have a human in the loop, that could be problematic depending on what you want the model to do. Just getting better muscle for the organization to have a good intuition about where these models can potentially fail and in what ways.

In addition to that, understanding what training data went into that model, especially as models are used more as agents and have privileged access to different applications and data sources that might be pretty sensitive.

Kapoor: I think one of the biggest challenges that can happen is how companies use the data that comes back from LLM models and how they're going to use it within the application. Removing the human component scares me a lot.

Verma: It's automation versus augmentation. There are a lot of cases where augmentation is the big gain. I think automation is a very small, closed case; there are very few things in the world right now that I think LLMs are ready to automate.

Read the original article on Business Insider
