โŒ

Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

Motorola Solutions says its AI-powered 911 software saves time and eases pressure on emergency response teams

18 December 2024 at 13:20
A male firefighter sits in a firetruck on the computer while a female firefighter on his left gets the truck ready to drive.
The company's AI software can improve the human element of emergency response.

LPETTET/Getty Images

  • Motorola Solutions uses AI to help address delays in 911 emergency calls and improve response times.
  • Its Vesta NXT software helps 911 call handlers gather and summarize data for quicker communication.
  • This article is part of "CXO AI Playbook" — straight talk from business leaders on how they're testing and using AI.

Motorola Solutions is a Chicago-based provider of technology and communications solutions focused on public safety and enterprise security. It has about 21,000 employees worldwide.

Situation analysis: What problem was the company trying to solve?

The National Emergency Number Association estimates that 240 million 911 calls are made in the US each year. But fragmented emergency-response systems across various agencies and organizations can lead to dangerous delays.

"You hope to never call 911, but when you do, it needs to work," Jehan Wickramasuriya, the corporate vice president of AI and platforms at Motorola Solutions, told Business Insider.

He added that call takers' jobs can be demanding and unpredictable, and they're often under intense pressure. "There can be a high level of stress if there's an active shooter or domestic disturbance," he said. "They're trying to keep a caller calm and simultaneously find out if they need medical help." Meanwhile, he said, callers may be "speaking so fast that it's difficult to understand and retain everything they say."

Headshot of Jehan Wickramasuriya
Jehan Wickramasuriya is the corporate vice president of AI and platforms at Motorola Solutions.

Motorola Solutions

Pinpointing a caller's location adds a layer of complexity. Mobile 911 calls are typically routed based on cell-tower locations rather than the caller's actual position. This requires calls to be redirected, adding several seconds to response times.

"At the end of the day it's a data problem," Wickramasuriya said, "because a lot of information needs to get transmitted in each call."

Motorola Solutions is using AI to consolidate this data in a single platform.

Key staff and stakeholders

The company structures its AI research team around specialized AI domains, such as computer vision and speech and audio processing, rather than individual product lines.

Wickramasuriya said the core AI team consisted of about 50 scientists, developers, and engineers who collaborate closely with hundreds of product managers, designers, and user-experience specialists.

Motorola Solutions also works with various cloud and technology vendors on its AI-enabled products and services.

AI in action

In June, Motorola Solutions launched Vesta NXT, software designed to help 911 call handlers manage emergency calls. It brings data from various public-safety systems onto one platform, helping the handlers gather and summarize information.

The tool uses AI to surface details including the caller's location and, for callers who have opted to share their medical profile from their phone, any underlying health conditions. It can also suggest the best entrance to a building. "That's important information for first responders," Wickramasuriya said.

The software has translation and transcription capabilities, helping English speakers and non-English speakers communicate. AI also helps call handlers manage nonemergency calls — by streamlining the reporting of issues like abandoned cars or stolen property, call handlers can focus more on critical emergencies.

Most important, AI can improve the human element of emergency response. "AI is working in the background to help the call taker attend to the person on the other end of the line," Wickramasuriya said.
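The article doesn't disclose how Vesta NXT is implemented, but the consolidation idea it describes can be sketched in a few lines. Everything below is hypothetical: the function and field names are invented, and the "headline" step is a placeholder for the AI summarization a real system would perform.

```python
# Illustrative sketch only: a toy stand-in for the kind of data consolidation
# described above. Function and field names are hypothetical, and the
# "headline" step is a placeholder for the AI model a real system would use.

def consolidate_call_data(caller_location, medical_profile, transcript):
    """Merge data from separate public-safety feeds into one record."""
    record = {
        "location": caller_location,             # device-based, not cell tower
        "medical_flags": medical_profile or [],  # shared only if caller opts in
        "transcript": transcript,
    }
    # A real system would summarize with an AI model; this toy version just
    # takes the transcript's first sentence as the headline.
    record["headline"] = transcript.split(".")[0].strip() + "."
    return record

call = consolidate_call_data(
    caller_location={"lat": 41.88, "lon": -87.63},
    medical_profile=["asthma"],
    transcript="Caller reports a kitchen fire. Two people are inside the home.",
)
print(call["headline"])  # Caller reports a kitchen fire.
```

The point of the sketch is the data shape: one record a dispatcher can read at a glance, instead of details scattered across systems.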

Did it work, and how did leaders know?

Motorola Solutions says roughly 60% of 911 call centers in the US use its call-handling software. It's transitioning existing Vesta 911 users to its new system with the AI features.

The company says these AI tools are already translating millions of minutes of audio each month and have helped lighten emergency-call handlers' workloads partly by resolving nonemergency calls and connecting callers to other resources.

Lee County was the first public-safety answering point (a call center that handles emergency calls and coordinates responses) to use Vesta NXT. Motorola Solutions said administrators there found that the AI-generated searchable transcripts and real-time summaries of 911 calls, which call handlers can share with dispatchers and first responders, saved time and alleviated stress for call handlers.

What's next?

Wickramasuriya said the company was focused on improving Vesta NXT.

He said the goal was to "expand the usefulness" of the software by integrating it more deeply into existing workflows, including by developing features that connect first responders directly with dispatchers and call takers.

Another aim, he said, is to help understaffed 911 call centers "understand their staffing needs and identify which call takers are handling high-stress situations and address stress and fatigue among call handlers."

Read the original article on Business Insider

How Depop's AI image-recognition tool speeds up selling for 180,000 daily listings

16 December 2024 at 08:58
A woman taking a photo of a brown tank top on a clothing hanger
Depop users can buy and sell clothing items on the platform.

Courtesy of Depop

  • Depop's new gen-AI feature creates item descriptions based on photos that users upload.
  • The tool has boosted the number of listings on the company's website and saves sellers time.
  • This article is part of "CXO AI Playbook" — straight talk from business leaders on how they're testing and using AI.

Depop is an online fashion marketplace where users can buy and sell secondhand clothing, accessories, and other products. Founded in 2011, the company is headquartered in London and has 35 million registered users. It was acquired by Etsy, an online marketplace, in 2021.

Situation analysis: What problem was the company trying to solve?

Depop's business model encourages consumers to "participate in the circular economy rather than buying new," Rafe Colburn, its chief product and technology officer, told Business Insider. However, listing items to sell on the website and finding products to buy take time and effort, which he said can be a barrier to using Depop.

"By reducing that effort, we can make resale more accessible to busy people," he said.

To improve user experience, Depop has unveiled several features powered by artificial intelligence and machine learning, including pricing guidance to help sellers list items more quickly and personalized algorithms to help buyers identify trends and receive product recommendations.

In September, Depop launched a description-generation feature using image recognition and generative AI. The tool automatically creates a description for an item once sellers upload a product image to the platform.

"What we've tried to do is make it so that once people have photographed and uploaded their items, very little effort is required to complete their listing," Colburn said. He added that the AI description generator is especially useful for new sellers who aren't as familiar with listing on Depop.

Headshot of Rafe Colburn
Rafe Colburn is the chief product and technology officer of Depop.

Courtesy of Depop

Key staff and stakeholders

The AI description-generation feature was developed in-house by Depop's data science team, which trained large language models to create it. The team worked closely with product managers.

Colburn said that in 2022, the company moved its data science team from the engineering group to the product side of the business, which has enabled Depop to release features more quickly.

AI in action

To use the description generator, sellers upload an image of the item they want to list to the Depop platform and click a "generate description" button. Using image recognition and gen AI, the system generates a product description and populates item-attribute fields on the listing page, including category, subcategory, color, and brand.

The technology incorporates relevant hashtags and colloquial language to appeal to buyers, Colburn said. "We've done a lot of prompt engineering and fine-tuning to make sure that the tone and style of the descriptions that are generated really fit the norms of Depop," he added.

Sellers can use the generated description as is or adjust it. Even if they modify descriptions, sellers still save time compared to starting with "an empty box to work with," Colburn said.
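Depop hasn't published its implementation, but the flow described above (photo in, attributes and a casual description out) can be illustrated with a toy sketch. The attribute-extraction step is mocked with fixed values where a real system would call an image-recognition model; all names and values are invented.

```python
# Hypothetical sketch of the flow described above (photo in, attributes and a
# description out). Attribute extraction is mocked; a real system would call
# an image-recognition model here. All names and values are invented.

def extract_attributes(image_path):
    """Stand-in for image recognition: returns fixed attributes for the demo."""
    return {"category": "Tops", "subcategory": "Tank top",
            "color": "brown", "brand": "Unbranded"}

def generate_description(attrs):
    """Turn extracted attributes into a casual, hashtag-style listing blurb."""
    tags = " ".join(f"#{v.lower().replace(' ', '')}" for v in attrs.values())
    return (f"{attrs['color'].title()} {attrs['subcategory'].lower()} "
            f"in great condition. {tags}")

attrs = extract_attributes("tank_top.jpg")
print(generate_description(attrs))
# Brown tank top in great condition. #tops #tanktop #brown #unbranded
```

The attribute fields double as the structured listing data (category, color, brand), which matches the article's note that the tool populates those fields as well as the description.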

Did it work, and how did leaders know?

Depop has about 180,000 new listings every day. Since rolling out the AI-powered description generation in September, the company has seen "a real uplift in listings created, listing time, and completeness of listings," Colburn said. However, because the tool launched so recently, a company spokesperson said specific data was not yet available.

"Aside from the direct user benefits in terms of efficiency and listing quality, we have also really demonstrated to ourselves that users value features that use generative AI to reduce effort on their end," Colburn said.

Ultimately, Depop wants sellers to list more items, and the company's goal is to make it easier to do so, he added. Automating the process with AI means sellers can list items quicker, which Colburn said would create a more robust inventory on the platform, lead to more sales, and boost the secondhand market.

What's next?

Colburn said Depop continues to look for ways to apply AI to address users' needs.

For example, taking high-quality photos of items is another challenge for sellers. It's labor-intensive but important, as listings with multiple high-quality photos of garments are more likely to sell. He said Depop was exploring ways to make this easier and enhance image quality with AI.

A challenge for buyers is sometimes finding items that fit. Depop is also looking into how AI can help shoppers feel more confident that the clothing they purchase will fit so that their overall satisfaction with the platform will be enhanced, Colburn said.

Read the original article on Business Insider

Shutterstock earned over $100 million in revenue thanks in part to its AI-powered image-generator tool

13 December 2024 at 09:09
A digital camera with a big lens sits on a desk and a person edits an image on a desktop computer in the background.
Shutterstock's approach to AI integration focused on the user experience.

dusanpetkovic/Getty Images

  • Shutterstock added gen AI to its stock-content library, helping it generate $104 million in revenue.
  • The company has partnered with tech giants including Meta, Amazon, Apple, OpenAI, and Nvidia.
  • This article is part of "CXO AI Playbook" — straight talk from business leaders on how they're testing and using AI.

Shutterstock, founded in 2003 and based in New York, is a global leader in licensed digital content. It offers stock photos, videos, and music to creative professionals and enterprises.

In late 2022, Shutterstock made a strategic decision to embrace generative AI, becoming one of the first stock-content providers to integrate the tech into its platform.

Dade Orgeron, the vice president of innovation at Shutterstock, leads the company's artificial-intelligence initiatives. During his tenure, Shutterstock has transitioned from a traditional stock-content provider into one that provides several generative-AI services.

While Shutterstock's generative-AI offerings are focused on images, the company has an application programming interface for generating 3D models and plans to offer video generation.

Situation analysis: What problem was the company trying to solve?

When the first mainstream image-generation models, such as Dall-E, Stable Diffusion, and Midjourney, were released in late 2022, Shutterstock recognized generative AI's potential to disrupt its business.

"It would be silly for me to say that we didn't see generative AI as a potential threat," Orgeron said. "I think we were fortunate at the beginning to realize that it was more of an opportunity."

He said Shutterstock embraced the technology ahead of many of its customers. He recalled attending CES in 2023 and said that many creative professionals there were unaware of generative AI and the impact it could have on the industry.

Orgeron said that many industry leaders he encountered had the misconception that generative AI would "come in and take everything from everyone," a perspective he found pessimistic. Shutterstock instead recognized early that AI-powered prompting "was design," Orgeron told Business Insider.

Key staff and stakeholders

Orgeron's position as vice president of innovation made him responsible for guiding the company's generative-AI strategy and development.

However, the move toward generative AI was preceded by earlier acquisitions. Orgeron himself joined the company in 2021 as part of its acquisition of TurboSquid, a company focused on 3D assets.

Side profile of a man with a beard wearing black glasses and a black jacket.
Dade Orgeron is the vice president of innovation at Shutterstock.

Photo courtesy of Dade Orgeron

Shutterstock also acquired three AI companies that same year: Pattern89, Datasine, and Shotzr. While they primarily used AI for data analytics, Orgeron said the expertise Shutterstock gained from these acquisitions helped it move aggressively on generative AI.

Externally, Shutterstock established partnerships with major tech companies including Meta, Alphabet, Amazon, Apple, OpenAI, Nvidia, and Reka. For example, Shutterstock's partnership with Nvidia enabled its generative 3D service.

AI in action

Shutterstock's approach to AI integration focused on the user experience.

Orgeron said the company's debut in image generation was "probably the easiest-to-use solution at that time," with a simple web interface that made AI image generation accessible to creative professionals unfamiliar with the technology.

That stood in contrast to competitors such as Midjourney and Stable Diffusion, which, at the time Shutterstock launched its service in January 2023, had more rudimentary user interfaces. Midjourney, for instance, was initially available only through Discord, an online chat service more often used to communicate in multiplayer games.

This focus on accessibility set the stage for Shutterstock.AI, the company's dedicated AI-powered image-generation platform. While Shutterstock designed the tool's front end and integrated it into its online offerings, the images it generates rely on a combination of internally trained AI models and solutions from external partners.

Shutterstock.AI, like other image generators, lets customers request their desired image with a text prompt and then choose a specific image style, such as a watercolor painting or a photo taken with a fish-eye lens.

However, unlike many competitors, Shutterstock uses information about user interactions to decide on the most appropriate model to meet the prompt and style request. Orgeron said Shutterstock's various models provide an edge over other prominent image-generation services, which often rely on a single model.
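Shutterstock hasn't described its routing logic in detail; the sketch below is just one plausible shape for a style-based model router of the kind the passage describes. The model names and routing rules are invented for the example.

```python
# Illustrative only: one plausible shape for a router that picks a backend
# image model from the requested style (and, failing that, the prompt text).
# Model names and routing rules are invented for the example.

ROUTES = {
    "watercolor": "illustration-model-v2",
    "fisheye photo": "photoreal-model-v1",
}
DEFAULT_MODEL = "general-model-v3"

def route_request(prompt, style):
    """Choose a generation model based on style, then prompt keywords."""
    if style in ROUTES:
        return ROUTES[style]
    # Fall back to a photoreal model if the prompt itself implies photography.
    if "photo" in prompt.lower():
        return "photoreal-model-v1"
    return DEFAULT_MODEL

print(route_request("a cat on a windowsill", "watercolor"))
# illustration-model-v2
```

A production router would presumably also weigh the user-interaction signals Orgeron mentions, but the single-dispatch idea is the same: different requests land on different specialized models rather than one general one.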

But generative AI posed risks to Shutterstock's core business and to the photographers who contribute to the company's library. To curb this, Orgeron said, all of its AI models, whether internal or from partners, are trained exclusively on Shutterstock's legally owned data. The company also established a contributor fund to compensate content creators whose work was used in the models' training.

Orgeron said initial interest in Shutterstock.AI came from individual creators and small businesses. Enterprise customers followed more cautiously, taking time to address legal concerns and establish internal AI policies before adopting the tech. However, Orgeron said, enterprise interest has accelerated as companies recognize AI's competitive advantages.

Did it work, and how did leaders know?

Paul Hennessy, the CEO of Shutterstock, said in June the company earned $104 million in annual revenue from AI licensing agreements in 2023. He also projected that this revenue could reach up to $250 million annually by 2027.

Looking ahead, Shutterstock hopes to expand AI into its video and 3D offerings. The company's generative 3D API is in beta. While it doesn't offer an AI video-generation service yet, Orgeron said Shutterstock plans to launch a service soon. "The video front is where everyone is excited right now, and we are as well," he said. "For example, we see tremendous opportunity in being able to convert imagery into videos."

The company also sees value in AI beyond revenue figures. Orgeron said Shutterstock is expanding its partnerships, which now include many of the biggest names in Silicon Valley. In some cases, partners allow Shutterstock to use their tech to build new services; in others, they license data from Shutterstock to train AI.

"We're partnered with Nvidia, with Meta, with HP. These are great companies, and we're working closely with them," he said. "It's another measure to let us know we're on the right track."

Read the original article on Business Insider

The rise of the "AI engineer" and what it means for the future of tech jobs

By: Jean Paik
12 December 2024 at 07:11
Three software developers sitting next to each other in a row and looking at their laptops.
Some software developers are transitioning to AI jobs at their companies.

Maskot/Getty Images

  • AI is opening new career tracks for software developers who want to shift to different roles.
  • Developers at an AI roundtable said that the tech job market is fluctuating rapidly with gen AI.
  • This article is part of "CXO AI Playbook" — straight talk from business leaders on how they're testing and using AI.

A few years ago, Kesha Williams was prepared to step away from her tech career โ€” but then the AI boom brought her back.

"I've been in tech for 30 years, and before gen AI, I was ready to retire," she said. "I think I'll stay around just to see where this goes." Williams is the head of enterprise architecture and engineering at Slalom.

Williams and six other developers from companies including Amazon, Meta, Anaconda, and more joined Business Insider's virtual roundtable in November to discuss how AI is changing the software development landscape.

While hiring and recruitment in many tech jobs are dropping with the increased adoption of AI coding tools, developers say AI is also opening new career opportunities.

A new career path

Panelists said that the emergence of jobs focused on building AI models and features is a recent development in the industry.

"One of the biggest things I've noticed in the last two to three years is the rise of a job title called 'AI engineer,' which did not exist before, and it's kind of in between a machine-learning engineer and a traditional software engineer," Shruti Kapoor, a lead member of technical staff at Slack, said. "I'm starting to see more and more companies where 'AI engineer' is one of the top-paying jobs available for software engineers."

Salary data from Levels.fyi, an online platform that allows tech workers to compare their compensation packages, shows that over the past two years, entry-level AI engineers have earned about 8% more than their non-AI counterparts, and senior AI engineers nearly 11% more.

Neeraj Verma, the head of applied AI at Nice, said at the roundtable that AI has enabled software engineers at his company to transition internally to AI roles. He said that over 20% of the developers at Nice have moved to AI-related positions in the past two years, with about 100 of those individuals considered experts in prompt engineering.

Verma said the company's developers are not just being supported by AI; they are actively involved in using the technology to build other AI features.

He added that many senior-level developers with strong coding abilities at the company have shown interest in moving to AI to apply their skill sets in new ways. Nice created training programs to help these employees learn the technology and make internal career shifts.

AI-specialized jobs encompass machine-learning engineers, prompt engineers, and AI researchers, among other roles. Although the skills that would be useful for each of these jobs can differ, Kapoor said that an AI engineering role does not necessarily require a specific tech background. Workers with prior experience in sectors like accounting and product management, for instance, have been able to pivot into the AI space.

Adapting to change

Just as AI is changing the software development process, developers say that the professional opportunities in AI could also be in constant flux.

"Software development will change in five years much more rapidly than anything we've seen before," Igor Ostrovsky, the cofounder of Augment, said at the roundtable. "How you architect, develop, test, and maintain software โ€” that will all change, and how exactly you interact with AI will also evolve."

Researchers are already questioning the long-term potential of prompt engineering jobs, which skyrocketed in demand in 2023. They say that generative AI models could soon be trained to optimize their own prompts.

"I think prompt engineering is more of a sign that some developers have the desire to learn and are eager to figure out how to interact with artificial intelligence, but it won't necessarily be how you interact with AI in three years or five years," Ostrovsky said.

The pace of technological development means that software developers' ability to learn, adapt, and solve problems creatively will be more important than ever to stay ahead of the curve.

Read the original article on Business Insider

With AI adoption on the rise, developers face a challenge — handling risk

By: Jean Paik
10 December 2024 at 10:34
A computer programmer or software developer working in an office
Software developers can be involved in communicating expectations for gen AI to stakeholders.

Maskot/Getty Images

  • At an AI roundtable in November, developers said AI tools were playing a key role in coding.
  • They said that while AI could boost productivity, stakeholders should understand its limitations.
  • This article is part of "CXO AI Playbook" — straight talk from business leaders on how they're testing and using AI.

At a Business Insider roundtable in November, Neeraj Verma, the head of applied AI at Nice, argued that generative AI "makes a good developer better and a worse developer worse."

He added that some companies expect employees to be able to use AI to create a webpage or HTML file and simply copy and paste solutions into their code. "Right now," he said, "they're expecting that everybody's a developer."

During the virtual event, software developers from companies such as Meta, Slack, Amazon, Slalom, and more discussed how AI influenced their roles and career paths.

They said that while AI could help with tasks like writing routine code and translating ideas between programming languages, foundational coding skills are necessary to use the AI tools effectively. Communicating these realities to nontech stakeholders is a primary challenge for many software developers.

Understanding limitations

Coding is just one part of a developer's job. As AI adoption surges, testing and quality assurance may become more important for verifying the accuracy of AI-generated work. The US Bureau of Labor Statistics projects that the number of software developers, quality-assurance analysts, and testers will grow by 17% in the next decade.

Expectations for productivity can overshadow concerns about AI ethics and security.

"Interacting with ChatGPT or Claude is so easy and natural that it can be surprising how hard it is to control AI behavior," Igor Ostrovsky, a cofounder of Augment, said during the roundtable. "It is actually very difficult to, and there's a lot of risk in, trying to get AI to behave in a way that consistently gives you a delightful user experience that people expect."

Companies have faced some of these issues in recent AI launches. Microsoft's Copilot was found to have problems with oversharing and data security, though the company created internal programs to address the risk. Tech giants are investing billions of dollars in AI technology — Microsoft alone plans to spend over $100 billion on graphics processing units and data centers to power AI by 2027 — but not as much in AI governance, ethics, and risk analysis.

AI integration in practice

For many developers, managing stakeholders' expectations — communicating the limits, risks, and overlooked aspects of the technology — is a challenging yet crucial part of the job.

Kesha Williams, the head of enterprise architecture and engineering at Slalom, said in the roundtable that one way to bridge this conversation with stakeholders is to outline specific use cases for AI. Focusing on the technology's applications could highlight potential pitfalls while keeping an eye on the big picture.

"Good developers understand how to write good code and how good code integrates into projects," Verma said. "ChatGPT is just another tool to help write some of the code that fits into the project."

Ostrovsky predicted that the ways employees engage with AI would change over the years. In the age of rapidly evolving technology, he said, developers will need to have a "desire to adapt and learn and have the ability to solve hard problems."

Read the original article on Business Insider

AI adoption is surging — but humans still need to be in the loop, say software developers from Meta, Amazon, Nice, and more

22 November 2024 at 09:27
Photo collage featuring headshots of Greg Jennings, Aditi Mithal, Pooya Amini, Shruti Kapoor, Neeraj Verma, Kesha Williams, Igor Ostrovsky
Top Row: Greg Jennings, Aditi Mithal, Pooya Amini, and Shruti Kapoor. Bottom Row: Neeraj Verma, Kesha Williams, and Igor Ostrovsky.

Alyssa Powell/BI

This article is part of "CXO AI Playbook" — straight talk from business leaders on how they're testing and using AI.

The future of software-development jobs is changing rapidly as more companies adopt AI tools that can accelerate the coding process and close experience gaps between junior- and senior-level developers.

Increased AI adoption could be part of the tech industry's "white-collar recession," which has seen slumps in hiring and recruitment over the past year. Yet integrating AI into workflows can offer developers the tools to focus on creative problem-solving and building new features.

On November 14, Business Insider convened a roundtable of software developers as part of our "CXO AI Playbook" series to learn how artificial intelligence was changing their jobs and careers. The conversation was moderated by Julia Hood and Jean Paik from BI's Special Projects team.

These developers discussed the shifts in their day-to-day tasks, which skills people would need to stay competitive in the industry, and how they navigate the expectations of stakeholders who want to stay on the cutting edge of this new technology.

Panelists said AI has boosted their productivity by helping them write and debug code, which has freed up their time for higher-order problems, such as designing software and devising integration strategies.

However, they emphasized that some of the basics of software engineering — learning programming languages, scaling models, and handling large-scale data — would remain important.

The roundtable participants also said developers could provide critical insight into challenges around AI ethics and governance.

The roundtable participants were:

  • Pooya Amini, software engineer, Meta.
  • Greg Jennings, head of engineering for AI, Anaconda.
  • Shruti Kapoor, lead member of technical staff, Slack.
  • Aditi Mithal, software-development engineer, Amazon Q.
  • Igor Ostrovsky, cofounder, Augment.
  • Neeraj Verma, head of applied AI, Nice.
  • Kesha Williams, head of enterprise architecture and engineering, Slalom.

The following discussion was edited for length and clarity.


Julia Hood: What has changed in your role since the popularization of gen AI?

Neeraj Verma: I think the expectations that are out there in the market for developers on the use of AI are actually almost a bigger impact than the AI itself. You hear about how generative AI is sort of solving this blank-paper syndrome. Humans have this concept that if you give them a blank paper and tell them to go write something, they'll be confused forever. And generative AI is helping overcome that.

The expectation from executives now is that developers are going to be significantly faster but that some of the creative work the developers are doing is going to be taken away — which we're not necessarily seeing. We're seeing it as more of a boilerplate creation mechanism for efficiency gains.

Aditi Mithal: I joined Amazon two years ago, and I've seen how my productivity has changed. I don't have to focus on doing repetitive tasks. I can just ask Amazon Q chat to do that for me, and I can focus on more-complex problems that can actually impact our stakeholders and our clients. I can focus on higher-order problems instead of more-repetitive tasks for which the code is already out there internally.

Shruti Kapoor: One of the big things I've noticed with writing code is how open companies have become to AI tools like Cursor and Copilot and how integrated they've become into the software-development cycle. It's no longer considered a no-no to use AI tools like ChatGPT. I think two years ago when ChatGPT came out, it was a big concern that you should not be putting your code out there. But now companies have kind of embraced that within the software-development cycle.

Pooya Amini: Looking back at smartphones and Google Maps, it's hard to remember what the world looked like before these technologies. It's a similar situation with gen AI — I can't remember how I was solving the problem without it. I can focus more on actual work.

Now I use AI as a kind of assisted tool. My main focus at work is on requirement gathering, like software design. When it comes to the coding, it's going to be very quick. Previously, it could take weeks. Now it's a matter of maybe one or two days, so then I can actually focus on other stuff as AI is solving the rest for me.

Kesha Williams: In my role, it's been trying to help my team rethink their roles and not see AI as a threat but more as a partner that can help boost productivity, and encouraging my team to make use of some of the new embedded AI and gen-AI tools. Really helping my team upskill and putting learning paths in place so that people can embrace AI and not be afraid of it. More of the junior-level developers are really afraid about AI replacing them.


Hood: Are there new career tracks opening up now that weren't here before?

Verma: At Nice, we have something like 3,000 developers, and over the last, I think, 24 months, 650 of them have shifted into AI-specific roles, which was sort of unheard of before. Even out of those 650, we've got about a hundred who are experts at things like prompt engineering. Over 20% of our developers are not just developers being supported by AI but developers using AI to write features.

Kapoor: I think one of the biggest things I've noticed in the last two to three years is the rise of a job title called "AI engineer," which did not exist before, and it's kind of in between an ML engineer and a traditional software engineer. I'm starting to see more and more companies where AI engineer is one of the top-paying jobs available for software engineers. One of the cool things about this job is that you don't need an ML-engineering background, which means it's accessible to a lot more people.

Greg Jennings: For developers who are relatively new or code-literate knowledge workers, I think they can now use code to solve problems where previously they might not have. We have designers internally who are now creating full-blown interactive UIs using AI to describe what they want and then providing that to engineers. They've never been able to do that before, and it greatly accelerates the cycle.

For more-experienced developers, I think there are a huge number of things that we still have to sort out: the architectures of these solutions, how we're actually going to implement them in practice. The nature of testing is going to have to change a lot as we start to include these applications in places where they're more mission-critical.

Amini: On the other side, looking at threats that can come out of AI, new technologies and new positions can emerge as well. We don't currently have clear regulations in terms of ownership or the issues related to gen AI, so I imagine there will be more positions in terms of ethics.

Mithal: I feel like a Ph.D. is not a requirement anymore to be a software developer. If you have some foundational ML, NLP knowledge, you can target some of these ML-engineer or AI-engineer roles, which gives you a great opportunity to be in the market.

Williams: I'm seeing new career paths in specialized fields around ML and LLM operations. For my developers, they're able to focus more on strategy and system design and creative problem-solving, and it seems to help them move faster into architecture. System design, system architecture, and integration strategies โ€” they have more time to do that because of AI.


Jean Paik: What skills will developers need to stay competitive?

Verma: I think a developer operating an AI system requires product-level understanding of what you're trying to build at a high level. And I think a lot of developers struggle with prompt engineering from that perspective. Having the skills to clearly articulate what you want to an LLM is a very important skill.

Williams: Developers need to understand machine-learning concepts and how AI models work, not necessarily how to build and train these models from scratch but how to use them effectively. As we're starting to use Amazon Q, I've realized that our developers are now becoming prompt engineers because you have to get that prompt right in order to get the best results from your gen-AI system.

Jennings: Understanding how to communicate with these models is very different. I almost think that it imparts a need for engineers to have a little bit more of a product lens, where a deeper understanding of the actual business problem they're trying to solve is necessary to get the most out of it. Developing evaluations that you can use to optimize those prompts, so going from prompt engineering to actually tuning the prompts in a more-automated way, is going to emerge as a more common approach.
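Jennings' point about moving from hand-tuned prompts to evaluation-driven tuning can be sketched in a few lines. The snippet below is illustrative only: `call_model` is a stand-in for a real LLM API, given deterministic fake behavior so the example runs without a model.

```python
# Sketch of automated prompt selection: score each candidate prompt against a
# small evaluation set and keep the best one. `call_model` is a hypothetical
# stand-in for a real LLM call, faked deterministically for illustration.

def call_model(prompt: str, item: str) -> str:
    # Fake model: a precise prompt ("digits only") succeeds; a vague one
    # just echoes the input back unchanged.
    if "digits only" in prompt:
        return "".join(ch for ch in item if ch.isdigit())
    return item

def evaluate(prompt: str, eval_set: list[tuple[str, str]]) -> float:
    """Fraction of eval cases where the model output matches the expectation."""
    hits = sum(call_model(prompt, inp) == want for inp, want in eval_set)
    return hits / len(eval_set)

def pick_best_prompt(candidates: list[str], eval_set: list[tuple[str, str]]) -> str:
    # Score every candidate prompt and return the highest-scoring one.
    return max(candidates, key=lambda p: evaluate(p, eval_set))

eval_set = [("order #123", "123"), ("ticket 42!", "42")]
candidates = ["Extract the ID.", "Return the digits only, nothing else."]
best = pick_best_prompt(candidates, eval_set)
```

The same loop works with a real model behind `call_model`; the evaluation set, not intuition, then decides which prompt ships.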

Igor Ostrovsky: Prompt engineering is really important. That's how you interact with AI systems, but this is something that's evolving very quickly. Software development will change in five years much more rapidly than anything we've seen before. How you architect, develop, test, and maintain software โ€” that will all change, and how exactly you interact with AI will also evolve.

I think prompt engineering is more of a sign that some developers have the desire to learn and are eager to figure out how to interact with artificial intelligence, but it won't necessarily be how you interact with AI in three years or five years. Software developers will need this desire to adapt and learn and have the ability to solve hard problems.

Mithal: As a software developer, some of the basics won't change. You need to understand how to scale models, build scalable solutions, and handle large-scale data. When you're training an AI model, you need data to support it.

Kapoor: Knowledge of a programming language would be helpful, specifically Python or even JavaScript. Knowledge of ML or some familiarity with ML will be really helpful. Another thing is that we need to make sure our applications are a lot more fault-tolerant. That is also a skill that front-end or back-end engineers who want to transition to an AI-engineering role need to be aware of.

One of the biggest problems with prompts is that the answers can be very unpredictable and can lead to a lot of different outputs, even for the same prompt. So being able to make your application fault-tolerant is one of the biggest skills we need to apply in AI engineering.
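Kapoor's fault-tolerance point can be made concrete with a small sketch: validate the model's output, retry on malformed responses, and fall back to a safe default rather than crashing. `flaky_model` is a stand-in for a real, nondeterministic LLM call.

```python
import json

# Sketch of fault-tolerant handling of unpredictable LLM output: parse and
# validate the response, retry on failure, and return a safe default if every
# attempt is malformed. `flaky_model` simulates a real LLM's flakiness.

def flaky_model(prompt: str, attempt: int) -> str:
    # Simulated behavior: the first response is chatty and malformed,
    # the retry returns valid JSON.
    return "Sure! Here is the JSON..." if attempt == 0 else '{"sentiment": "positive"}'

def parse_or_none(raw: str):
    """Accept the output only if it is valid JSON with the expected key."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    return data if "sentiment" in data else None

def classify(prompt: str, max_retries: int = 2, default=None):
    for attempt in range(max_retries + 1):
        result = parse_or_none(flaky_model(prompt, attempt))
        if result is not None:
            return result
    # Safe fallback: the application degrades gracefully instead of crashing.
    return default or {"sentiment": "unknown"}

out = classify("Classify the sentiment of: 'great product'")
```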


Hood: What are the concerns and obstacles you have as AI gains momentum? How do you manage the expectations of nontech stakeholders in the organization who want to stay on the leading edge?

Ostrovsky: Part of the issue is that interacting with ChatGPT or cloud AI is so easy and natural that it can be surprising how hard it is actually to control AI behavior, where you need AI to understand constraints, have access to the right information at the right time, and understand the task.

When setting expectations with stakeholders, it's important that they understand we're working with very advanced technology and that they're realistic about the risk profile of the project.

Mithal: One is helping them understand the trade-offs. It could be security versus innovation or speed versus accuracy. The second is metrics. Is it actually improving the efficiency? How much is the acceptance rate for our given product? Communicating all those to the stakeholders gives them an idea of whether the product they're using is making an impact or if it's actually helping the team become more productive.

Williams: Some of the challenges I'm seeing are mainly around ethical AI concerns, data privacy, and costly and resource-intensive models that go against budget and infrastructure constraints. On the vendor or stakeholder side, it's really more about educating our nontechnical stakeholders about the capabilities of AI and the limitations and trying to set realistic expectations.

We try to help our teams understand for their specific business area how AI can be applied. So how can we use AI in marketing or HR or legal, and giving them real-world use cases.

Verma: Gen AI is really important, and it's so easy to use ChatGPT, but what we find is that gen AI makes a good developer better and a worse developer worse. Good developers understand how to write good code and how good code integrates into projects. ChatGPT is just another tool to help write some of the code that fits into the project. That's the big challenge that we try to make sure our executives understand, that not everybody can use this in the most effective manner.

Jennings: There are some practical governance concerns that have emerged. One is understanding the tolerance for bad responses in certain contexts. Some problems, you may be more willing to accept a bad response because you structure the interface in such a way that there's a human in the loop. If you're attempting to not have a human in the loop, that could be problematic depending on what you want the model to do. Just getting better muscle for the organization to have a good intuition about where these models can potentially fail and in what ways.

In addition to that, understanding what training data went into that model, especially as models are used more as agents and have privileged access to different applications and data sources that might be pretty sensitive.

Kapoor: I think one of the biggest challenges that can happen is how companies use the data that comes back from LLM models and how they're going to use it within the application. Removing the human component scares me a lot.

Verma: It's automation versus augmentation. There are a lot of cases where augmentation is the big gain. I think automation is a very small, closed case; there are very few things I think LLMs are ready in the world right now to automate.

Read the original article on Business Insider

Siemens' AI tools are harnessing 'human-machine collaboration' to help workers solve maintenance problems

21 November 2024 at 08:25
Two people in white hard hats and yellow safety jackets stand in front of metal machinery
Siemens is helping companies in the industrial sector predict machine maintenance problems.

Gorodenkoff/Shutterstock

  • Siemens uses AI to tackle industrial challenges like safety and workforce shortages.
  • Siemens says its AI tools, such as Senseye, boost productivity and reduce costs for global clients.
  • This article is part of "CXO AI Playbook," straight talk from business leaders on how they're testing and using AI.

Siemens is a German technology company that operates in many sectors, including industry, infrastructure, transportation, and healthcare. It has about 320,000 employees worldwide.

Situation analysis: What problem was the company trying to solve?

The industrial sector faces several challenges, including security and safety regulations, environmental sustainability, and a shortage of skilled experts. Peter Koerte, Siemens' chief technology officer and chief strategy officer, said the company aims to solve many of these issues with artificial intelligence.

"What's most important for AI is that in the industrial context, it needs to be safe, it needs to be reliable, and it needs to be trustworthy," he told Business Insider. Siemens, which has been investing in AI for about 50 years, offers several industrial AI products that help manufacturers across industries, such as automotive and aerospace, to predict maintenance issues and improve worker productivity using data.

"We believe if we can take data from the real world, simulate it, understand it in the digital world, we can be much faster for our customers, and our customers can be more competitive, more resilient, and more sustainable," Koerte said.

Key staff and stakeholders

Koerte said Siemens works with a number of tech partners on its industrial AI products and services, including Google, Microsoft, Nvidia, Amazon Web Services, and Meta. The company has about 1,500 employees with AI expertise who work closely with these tech companies, and Siemens' internal product development teams are also involved.

AI in action

Siemens' industrial AI work focuses on predictive maintenance, technology to assist workers, and generative product design.

One product is Senseye Predictive Maintenance, a tool that integrates with a manufacturer's data sources and uses AI to analyze the information. The company said the platform provides insights into how well machinery, tools, and other infrastructure are running. The tech can also help predict maintenance issues, which increases productivity and helps companies speed up the adoption of technology across their businesses.

Headshot of man in a black blazer and white button-down shirt
Peter Koerte is the chief technology officer and chief strategy officer at Siemens.

Courtesy of Siemens

Recently, Siemens debuted Industrial Copilot, a generative AI-powered assistant for engineers in industrial environments. The assistant can generate code automatically, identify problems quickly, and provide advice to support engineering tasks, such as troubleshooting equipment maintenance. The company said the tool can boost "human-machine collaboration" and enable companies to address workforce shortages while staying competitive.

Koerte said that when Industrial Copilot notifies a worker of an issue with equipment or software, that employee can use verbal commands in any language to create a work order, which is automatically sent to a team in a different country to take action to solve the issue. "AI breaks down barriers and democratizes many of the technologies because we take the complexity out of them," he said.

Did it work, and how did leaders know?

Siemens found that companies using Senseye Predictive Maintenance have reduced maintenance costs by 40%, increased maintenance staff productivity by 55%, and decreased the amount of time a machine is unavailable for maintenance by 50%.

The Australian steel company BlueScope implemented the predictive maintenance platform in 2021 to minimize downtime across its plants, increase operating time, improve the rate at which it can produce products, and lower costs. Together, Senseye and BlueScope's IoT sensors can detect abnormal vibrations in equipment early, preventing maintenance problems and saving the company money.

Schaeffler Group, a German automotive and industrial supplier, augmented a production machine with Industrial Copilot. Its engineers are now able to generate code faster for programmable logic controllers, the devices that control machines in factories. Siemens said the technology is helping Schaeffler Group automate repetitive tasks, reduce errors, and free up engineers for "higher-value work."

What's next?

Koerte said Siemens continues to research and develop new use cases for AI.

The company is working on a project that feeds computer-aided design data, such as models and digital drawings, into large language models and prompts them to create products.

The project is still in the early stages of development, but Koerte said it could enable design engineers, particularly in the automotive sector, to create more product variations and produce higher-quality items faster.


AI is helping one software security company send 5 times the number of threat alerts in record time

20 November 2024 at 13:13
A person's finger types on a lit-up keyboard on their laptop.
Black Duck says its AI tool sent more than 5,200 security advisories from March to October.

d3sign/Getty Images

  • Black Duck Software uses AI to speed up sending security advisories to customers.
  • It says that with AI it can send out about five times its usual number of notifications a month.
  • This article is part of "CXO AI Playbook," straight talk from business leaders on how they're testing and using AI.

For "CXO AI Playbook," Business Insider takes a look at mini case studies about AI adoption across industries, company sizes, and technology DNA. We've asked each of the featured companies to tell us about the problems they're trying to solve with AI, who's making these decisions internally, and their vision for using AI in the future.

Black Duck Software, formerly Synopsys Software Integrity Group, offers security products and services, including security testing, audits, and risk assessments, to help companies protect their software. Black Duck is headquartered in Burlington, Massachusetts, and has about 2,000 employees.

Situation analysis: What problem was the company trying to solve?

Beth Linker, a senior director of product management for AI and static application security testing at Black Duck, said the company had been using artificial intelligence internally for several years but recently began developing the tech for its customers.

The company sends Black Duck Security Advisories, or BDSAs, to notify users that their software is at risk and potentially exploitable. Linker said this spring Black Duck started using generative AI to send BDSAs faster so that customers could act swiftly to address issues.

A woman with short hair and glasses wears a dark grey blazer and blue button-down shirt.
Beth Linker is a senior director of product management for AI and static application security testing at Black Duck.

Courtesy of Black Duck

The need for speedier BDSAs arose after the National Vulnerability Database, a government cybersecurity resource that provides information on data threats, started publishing fewer vulnerability reports because of a backlog. At the same time, Linker said, the Linux kernel project, which maintains the core of the open-source Linux operating system, began flagging more risks, significantly increasing the number of vulnerabilities it disclosed.

"The net effect was that all of a sudden you had a much larger number of vulnerabilities and less support from the National Vulnerability Database," Linker said. "This is something that was making things a lot harder for our customers because they were not able to get all the info that they were used to receiving."

Key staff and partners

Linker said Black Duck's engineering and research teams were involved in integrating gen AI with BDSAs. The system also uses some commercially available large language models.

AI in action

Linker said that accelerating BDSA delivery with gen AI was an opportunity to provide customers with a "timely and comprehensive feed of data that they need to make decisions."

To speed up BDSAs, Black Duck developed prompts that it feeds into commercial LLMs to query its internal data. The models' responses are used to compile the advisory reports. Previously, this process was done manually.

A researcher reviews each AI-produced report before it's sent to customers. "Hallucinations are a risk," Linker said, "and everything we put in front of our customers has to meet a certain standard of quality."
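The workflow Linker describes can be sketched as a simple human-in-the-loop gate: the model drafts an advisory, a researcher signs off, and nothing unapproved reaches customers. All names below (`Advisory`, `draft_advisory`) are illustrative, not Black Duck's actual API.

```python
from dataclasses import dataclass

# Minimal human-in-the-loop sketch: AI drafts the advisory, a human reviewer
# approves it, and publishing an unreviewed draft is impossible by design.

@dataclass
class Advisory:
    cve_id: str
    summary: str
    approved: bool = False

def draft_advisory(cve_id: str, internal_data: dict) -> Advisory:
    # Stand-in for the LLM step that compiles internal data into a draft.
    return Advisory(cve_id=cve_id, summary=internal_data.get(cve_id, "no data"))

def human_review(advisory: Advisory, ok: bool) -> Advisory:
    # A researcher signs off (or rejects) every draft before release.
    advisory.approved = ok
    return advisory

def publish(advisory: Advisory) -> str:
    if not advisory.approved:
        raise ValueError("unreviewed advisory must not reach customers")
    return f"BDSA {advisory.cve_id}: {advisory.summary}"

draft = draft_advisory("CVE-2024-0001", {"CVE-2024-0001": "heap overflow in parser"})
notice = publish(human_review(draft, ok=True))
```

The design choice is that the quality gate lives in `publish` itself, so scaling up the AI drafting step can't bypass the review.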

Once BDSAs are created, the research teams review the reports and provide analysis and context about the seriousness of an identified vulnerability. This helps customers make decisions about the risk: Some vulnerabilities may need immediate attention, while others are less serious and could be fixed during a planned software update.

Did it work, and how did leaders know?

Linker said that more than 5,200 BDSAs were created with AI from March to October and that the company could now send out about five times the number of notifications each month that it could send before the tech was rolled out.

"We've been able to really scale this up to meet the need," they said.

What's next?

Black Duck recently unveiled Polaris Assist, an AI-powered security assistant. This new addition to the platform will help customers' security and development teams work more efficiently. It combines the company's existing application security tools with LLMs to give automated summaries of detected vulnerabilities and suggestions for how to fix the code.

"It's still a work in progress," Linker said. Polaris Assist is in beta testing, which is likely to wrap up by the end of the year.

They added that Black Duck continues to invest in AI to serve its customers. "A lot of that boils down to how can we make application security testing and remediation easier, faster, and more scalable?" they said.


How Alaska Airlines used AI to save over 1.2 million gallons of jet fuel

20 November 2024 at 09:30
Alaska Airlines plane taking off
Alaska Airlines uses Air Space Intelligence's AI technology to help plan flight routes.

Kevin Carter/Getty Images

  • Alaska Airlines partnered with Air Space Intelligence to use an AI tool that suggests flight routes.
  • The tool, Flyways AI Platform, factors in data such as historical flight traffic and predicted weather.
  • This article is part of "CXO AI Playbook," straight talk from business leaders on how they're testing and using AI.


Coordinating airline flights seems easy on paper. Nearly all travel routes are planned months in advance, and they're designed to ensure there aren't too many aircraft flying at one time. But frequent airline delays show that this seemingly simple task can become mind-bogglingly complex.

One out of every five flights in the US is delayed by at least 15 minutes. "The fundamental problem is that when a human being sits down to plan a flight, they only have information about their one flight," Pasha Saleh, the head of corporate development at Alaska Airlines, said.

To solve that, Alaska Airlines partnered with an AI startup called Air Space Intelligence, the creator of the Flyways AI Platform, which uses artificial intelligence to suggest optimal flight routes. The partnership started three years ago and was renewed in August. Now, half the flight plans reviewed by Alaska Airlines' dispatchers include a plan suggested by Flyways.

Pasha Saleh headshot
Pasha Saleh, the head of corporate development at Alaska Airlines.

Alaska Airlines

Situation analysis: What problem was the company trying to solve?

All major airline flights are logged with the Federal Aviation Administration and generally filed at least several hours ahead of time. Most commercial passenger flights follow common routes flown on a schedule.

In theory, that means air traffic is predictable. But the reality in the air is often more hectic. Saleh said that air-traffic control is "often very tactical, not strategic."

That leads to last-minute diversions and delays that inconvenience passengers and cost Alaska money as pilots, crews, and planes sit idle.

"Airplanes are expensive assets, and you only make money when they're flying," Saleh said.

Key staff and partners

Alaska Airlines and ASI worked closely together from the start of the partnership.

Saleh met Phillip Buckendorf, the CEO of Air Space Intelligence, in 2018. Buckendorf wanted to use AI to route self-driving cars. Saleh wondered whether the idea could be applied to airlines and invited Buckendorf to visit Alaska Airlines' operations center.

"He looked at those screens expecting to see something out of 'Star Trek.' Instead, he saw something one generation removed from IBM DOS," Saleh said, referring to an operating system that was discontinued over 20 years ago. "Pretty much on the spot, he decided to pivot to airlines."

The resulting product, Flyways, was adopted by Alaska Airlines in 2021.

While Air Space Intelligence developed the Flyways AI Platform, it did so in close cooperation with the airline's stakeholders.

"Airlines are very unionized environments, so we wanted to make sure this wasn't seen as a threat to dispatchers," Saleh said. Alaska Airlines used dispatcher feedback to hone Flyways.

Flyways now works as an assistant to the airline's dispatchers, who see its options presented when creating a flight plan.

AI in action

The partnership between Alaska Airlines and Air Space Intelligence began with a learning period for both organizations.

ASI's staff shadowed the airline's dispatchers to learn how they worked, while Alaska Airlines learned more about how a machine-learning algorithm could be used to route traffic. Saleh said ASI spent about a year and a half developing the first version of the Flyways AI Platform.

Flyways trains its AI algorithm on historical flight data. At its most basic level, this includes information like a flight's scheduled departure and arrival, actual departure and arrival, and route.

However, Flyways also ingests data on less obvious variables, like restricted military airspace (including temporary restrictions, like those surrounding Air Force One) and wind speeds at cruising altitude. Even events like the Super Bowl, which causes a surge in demand and leads to airspace restrictions around the event, are considered.

Saleh said Flyways connects to multiple sources of information to acquire this data and automatically ingests it through application programming interfaces. Flyways then runs its AI model to determine the suggested route.

"Suggested" is a keyword: While Flyways uses AI to predict the best route, it's not an automated or agentic system and doesn't claim the reasoning capabilities of generative-AI services like ChatGPT.

Dispatchers see Flyways' flight plans as an option in the software interface they use to plan a flight, but a plan isn't put into use until a human dispatcher approves it.
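As a rough illustration of this suggest-but-don't-automate approach (not ASI's actual model), a route scorer might weigh flight time and winds aloft while disqualifying restricted airspace, with the winning route offered to a dispatcher rather than filed automatically. The fields and weights below are assumptions for the sake of the sketch.

```python
# Illustrative route scoring: lower score is better. A route through
# restricted airspace is disqualified outright; otherwise the score is the
# estimated flight time plus a penalty for headwind. Weights are invented.

def score_route(route: dict) -> float:
    if route["crosses_restricted"]:
        return float("inf")  # never suggest a route through restricted airspace
    return route["est_minutes"] + 0.5 * max(0, route["headwind_kts"])

def suggest_route(candidates: list[dict]) -> dict:
    # The system only *suggests* the best-scoring route; a human dispatcher
    # still reviews and approves the plan before it is used.
    return min(candidates, key=score_route)

candidates = [
    {"name": "direct", "est_minutes": 210, "headwind_kts": 40, "crosses_restricted": False},
    {"name": "northern", "est_minutes": 225, "headwind_kts": 5, "crosses_restricted": False},
    {"name": "shortcut", "est_minutes": 195, "headwind_kts": 0, "crosses_restricted": True},
]
best = suggest_route(candidates)
```

Here the nominally fastest "shortcut" is disqualified and the strong headwind makes "direct" slower in practice, so the longer "northern" routing wins, the kind of non-obvious trade-off the article says Flyways surfaces for dispatchers.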

Did it work, and how did leaders know?

Alaska Airlines' dispatchers accept 23% of Flyways' recommendations. While that might seem low, those accepted routes helped reduce Alaska Airlines' fuel consumption by more than 1.2 million gallons in 2023, according to the airline's annual sustainability report.

Reduced fuel consumption is necessary if Alaska is to reach its goal of becoming the most-fuel-efficient airline by 2025. The airline also ranks well on delays: It was the No. 2 most-on-time US airline in 2023, with some of the fewest cancellations.

Meanwhile, ASI has grown its head count from a handful of engineers to 110 employees across offices in Boston, Denver, Poland, and Washington, DC. In addition to its partnership with Alaska, the company has contracts with the US Air Force and received $34 million in Series B funding in December from Andreessen Horowitz.

Read the original article on Business Insider

โŒ
โŒ