โŒ

Reading view

There are new articles available, click to refresh the page.

A massive seaport in Texas is using an AI-powered digital replica to track ships and prepare for emergencies

A ship enters the Port of Corpus Christi.
A petroleum tanker ship that was loaded at the Port of Corpus Christi passes through the Aransas Channel.

Getty Images; Alyssa Powell/BI

  • The Port of Corpus Christi is using AI and a digital replica of its port to track moving ships.
  • It's also using a large language model that generates hypothetical incidents for training.
  • This article is part of "How AI Is Changing Everything," a series on AI adoption across industries.

The Port of Corpus Christi in Texas is among the United States' most important seaports. It's the country's third-largest port by tonnage, and it exports more US crude oil than any other domestic port. The port set a record in 2024, handling more than 200 million tons of shipments, 130 million of which were crude oil.

Coordinating a port of this size is a huge logistical undertaking. To manage this challenge, the port commissioned the development of AI-enhanced command-and-control software called the Overall Port Tactical Information System, or OPTICS.

OPTICS is built on the Unity 3D engine, which creates a 3D digital twin — a virtual replica — of the port using real-world data. That real-world data is managed by Esri's ArcGIS, which can handle the large amounts of current and historical data needed to make this project possible. The result looks a bit like Google Earth but shows up-to-date information on the port's operations.

"In the acronym OPTICS, tactical is meant in the sense of making smart business decisions informed by real-time information," Darrell Keach, the business systems manager at the Port of Corpus Christi, told Business Insider. "So, that's what we built."

OPTICS displays a ship moored in the Port of Corpus Christi.
OPTICS can show information such as a ship's name, status, size, and location while the vessel is moored at the Port of Corpus Christi.

The Port of Corpus Christi

Tracking ships with machine learning

All large commercial vessels have a transponder that broadcasts the vessel's identification, course, speed, and destination, among other information.

But it isn't a real-time system — ships report their position only intermittently, at intervals ranging from seconds to minutes.

"The frequency of updates we get from a transponder varies on a couple of factors," said Starr Long, the executive producer at The Acceleration Agency, which developed OPTICS for the Port of Corpus Christi. "When ships are at rest, we get updates about every four minutes. When they're moving, we can get updates about every two minutes."

Long said gaps could be worsened by a switch between tracking systems. Ships beyond radio range relay tracking data over satellite, then switch to radio as they come into port. The transition can extend the interval between updates to about six minutes.

Such gaps were incompatible with OPTICS' goal of creating a real-time overview of port operations. The digital replica wouldn't be very realistic if the virtual ships seemed to teleport between positions.

The Acceleration Agency used machine learning to help solve this problem. Unlike traditional vessel tracking systems, which may appear to show ships skipping between update points, OPTICS uses an AI model — trained on about a year of ship movement data from the Port of Corpus Christi — to predict a ship's position. This allows a smoother, more realistic view of port operations at any given moment.
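The article doesn't describe the model's internals, but the gap-filling task itself is easy to illustrate. The sketch below uses plain dead reckoning from the last transponder report; OPTICS' learned model presumably does something smarter with a year of historical traffic, but it fills the same role and works from the same inputs. All names and values here are illustrative.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius

def predict_position(lat, lon, speed_knots, course_deg, elapsed_s):
    """Estimate where a vessel is now, given its last transponder report.

    Plain dead reckoning: advance along the reported course at the
    reported speed. A model trained on historical port traffic would
    refine this, but the job is the same - smoothing the multi-minute
    gaps between transponder updates.
    """
    distance_m = speed_knots * 0.514444 * elapsed_s  # 1 knot = 0.514444 m/s
    bearing = math.radians(course_deg)
    dlat_rad = distance_m * math.cos(bearing) / EARTH_RADIUS_M
    dlon_rad = distance_m * math.sin(bearing) / (
        EARTH_RADIUS_M * math.cos(math.radians(lat))
    )
    return lat + math.degrees(dlat_rad), lon + math.degrees(dlon_rad)
```

Between two real reports, the display can interpolate toward the next fix instead of snapping, which is what removes the "teleporting" effect.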

a map shows the path of vessels coming in and out of a port
This OPTICS display shows a large cargo vessel navigating out of the Port of Corpus Christi with the help of two tugboats. The red markers show the ship's projected direction of movement.

The Port of Corpus Christi

Keach emphasized the safety implications of this improvement. Larger ships are "almost a thousand feet long, a hundred feet wide, and full of very flammable liquid," he said. "The margins are fairly narrow, so having as much data as possible for navigation is important."

The Port of Corpus Christi also has ambitious plans for how this system could expand its scope. Keach said OPTICS' next development cycle would hopefully include vessel-crossing predictions that could anticipate and prevent collisions.

Generative AI for emergency response training

Many of the ships entering and exiting the Port of Corpus Christi carry hazardous cargo, but the port's infrastructure also has risks. In 2020, a dredging vessel operating in the port struck a liquid propane pipeline, causing a deadly explosion.

The Port of Corpus Christi conducts emergency response exercises to prepare for events like this. As part of the deployment of OPTICS, the port wanted to create hypothetical events based on past incidents for training purposes.

But this feature could conflict with federal security requirements if it were to reproduce past events with protected criminal justice information.

To address that, The Acceleration Agency trained a large language model capable of generating situations that are similar to — but not exact reproductions of — real incidents.

"What we did was take basically a year's worth of actual security incidents from the police department, like chemical spills, trespassing, vehicle collisions, and trained an LLM to generate synthetic events based on that history," Long explained.

The OPTICS system displays hypothetical emergencies for a training exercise.
OPTICS can display a series of hypothetical incident events for testing and training purposes. These events are generated at realistic locations based on past incident data.

The Port of Corpus Christi

The use of AI-generated events, rather than real-life past events, offered another benefit that became clear during development and testing. Initially, the OPTICS software generated training sessions that were, in some ways, too realistic: they presented trainees with a historically accurate ratio of noncritical to urgent scenarios, and because genuine emergencies are relatively rare, the sessions included few of them. So Long's team had OPTICS increase the frequency of emergency events.

"We had to go back and tell it: 'No, don't do it for real. Do it much faster,'" Long said. The use of an LLM, which can process requests in natural language, simplified the creation and modification of the hypothetical events used for training.

The future of port operations

Keach said the Port of Corpus Christi's deployment of OPTICS, which started rolling out at the end of 2024, was just the start.

He said the port's investment in OPTICS was happening alongside other infrastructure investments, such as weather sensors, cameras, and a private 5G network to serve port operations.

OPTICS, which is used only by workers coordinating port traffic, might eventually aid the crews of ships coming into the port, Keach said. He added that OPTICS, equipment using augmented and virtual reality, high-tech sensors, and predictive AI could help ships navigate tough weather conditions, such as fog.

Since the port uses third-party tools — Esri's ArcGIS platform and Unity's 3D engine — as the basis of its digital twin, extending those applications to new devices would probably be less burdensome than if the port relied on proprietary tech. These technologies provide the flexibility to add data sources and incorporate additional devices. Unity, for instance, already supports a range of devices, including smartphones and AR headsets.

"The future state will drive it more into the field," said Keach.

Read the original article on Business Insider

This tech startup wants to shake up AR — and the aerospace industry is paying attention

Three collaborators viewing a 3D model of a product in an office.
Campfire, an AR/VR software development company, was founded in 2018.

Courtesy of Campfire

  • Campfire's platform lets users collaborate on 3D design files in augmented or virtual reality.
  • Collins Aerospace is using Campfire to develop aircraft equipment more efficiently.
  • This article is part of "Build IT: Connectivity," a series about tech powering better business.

Reviewing 3D models on a 2D platform like PowerPoint can make product development difficult, but AR/VR might change that.

At Collins Aerospace, a leading manufacturer of aerospace equipment, engineers and designers can now use AR/VR headsets to view CAD files in real time, letting them view and change designs far more quickly than before. Thomas Murphy, a manufacturing programs chief engineer at Collins Aerospace, told Business Insider the change is like Sears switching from a catalog to e-commerce.

To make this possible, the company has tapped a relatively young AR/VR collaboration tool called Campfire.

Jay Wright, the CEO of Campfire, sees the platform's use at companies like Collins Aerospace as just the beginning of AR/VR collaboration. Unlike many of Campfire's competitors, which often target narrow use cases, Wright hopes to make AR/VR collaboration as popular and accessible as videoconferencing platforms like Zoom.

"People can just start. They can download something for free, and then they can upgrade to a paid plan when they feel they've exhausted the features of what's free," Wright said. "Just like a Zoom, a Teams, a Slack, a Miro, a Figma. That's the exact same thing with Campfire."

Jay Wright
Jay Wright is the CEO of Campfire.

Courtesy of Campfire

Taking AR/VR collaboration mainstream

The adoption of AR/VR collaboration has been slow in part because most tools are difficult to download and use. Many lack a free trial, or if one is offered, it may only be available for a limited time. Hardware requirements, like a headset or a powerful computer, and device compatibility are additional obstacles. This can raise barriers for companies and individuals looking to explore the tech before making a full investment.

Like many AR/VR collaboration apps, Campfire is designed for 3D, real-time collaboration in an AR/VR environment. Users can load 3D files and view them at an accurate scale, zoom in and out to see components in more detail, and make alterations on the fly.

But unlike most competitors, Campfire also provides a comprehensive free tier. Under this plan, users can view up to five projects with up to five collaborators and receive 5GB of total file storage. The free tier has no time limits. Campfire also offers broad device compatibility, including Windows, Mac, Varjo headsets, and more.

"It's a model similar to other software-as-a-service, where people can download something for free," Wright said. "People can use models up to a certain size, in certain formats, and it's really good. They can see what the collaboration experience looks like, they can put their own data in."

Even with the recent release of more affordable and accessible headsets, like Meta's Quest 3, flexibility is key to the company's strategy. Wright said roughly 80% of Campfire's users log in through a computer, tablet, or phone. If collaborators on a project lack a headset, they can still use a laptop to view the perspective of a team member who's wearing the device.

Thomas Murphy
Thomas Murphy is a manufacturing programs chief engineer at Collins Aerospace.

Courtesy of Thomas Murphy

From ideation to the air

Collins Aerospace, a subsidiary of RTX, builds components for commercial and defense aviation, from navigation equipment and landing gear to passenger seats. The company began using Campfire in 2023.

Murphy told BI he sees an opportunity for the tech to reinvent the company's complicated review process. Collaborators typically view the 3D models, take detailed notes, create action items, make changes, reconvene to discuss the updates, and repeat until they finalize the product.

"We have design reviews, and we're pasting 3D models into two-dimensional PowerPoint slides and going through those cross-sectional views on a Zoom call," Murphy said.

Campfire, by contrast, allows direct and real-time collaboration. Murphy said users can view 3D CAD files that offer a much clearer representation of what a final product will look like. Collaborators can also alter the file in real time, making it possible to share iterations on the spot and experiment with new ideas.

The aerospace industry's demanding timelines make speedy collaboration particularly valuable. Murphy said that Collins Aerospace needs to move in step with major customers. "From the Boeing and Airbus perspective, they're looking for us to have the agility," he said.

While the tech has been used successfully at Collins Aerospace and companies like DataFusion and Whirlpool, Campfire could face potential adoption hurdles as tech giants remain undecided about AR/VR technology. Microsoft, for example, has largely retreated from Windows Mixed Reality and HoloLens, the holographic headset it once pitched to engineers, and Meta's Reality Labs reported a $4.2 billion loss in the first quarter of 2025.

Even so, Wright told BI that the time is right for AR/VR collaboration to go mainstream.

"Everything is not obvious until the moment that it's very obvious," he said. "The promise has been there for a long time, and it's just a matter of getting to that tipping point where you've got price, performance, and a user experience that makes it simple."


Morgan Stanley and Bank of America are focusing AI power on tools to make employees more efficient

Photo of a bank teller and a customer on an abstract AI-themed Background.

Getty Images; Karan Singh for BI

  • Some financial institutions are prioritizing internal AI tools to enhance daily operations.
  • For example, Morgan Stanley and Bank of America have trained staff to use AI with human supervision.
  • This article is part of "AI in Action," a series exploring how companies are implementing AI innovations.

The financial industry's approach to artificial intelligence reveals considerable pragmatism.

Popular notions of generative AI, guided by the explosive growth of OpenAI's ChatGPT, often center on consumer-facing chatbots. But financial institutions are leaning more heavily on internal AI tools that streamline day-to-day tasks.

This requires training programs and user-experience design that help a bank's entire organization — from relationship bankers managing high-value accounts to associates — understand the latest AI technology.

From AI classification to AI generation

Banks have long used traditional AI and machine learning techniques for various functions, such as customer service bots and decision algorithms that provide a faster-than-human response to market swings.

But modern generative AI is different from prior AI/ML methods, and it has its own strengths and weaknesses. Hari Gopalkrishnan, Bank of America's chief information officer and head of retail, preferred, small business, and wealth technology, said generative AI is a new tool that offers new capabilities, rather than a replacement for prior AI efforts.

"We have a four-layer framework that we think about with regards to AI," Gopalkrishnan told Business Insider.

The first layer is rules-based automation that takes actions based on specific conditions, like collecting and preserving data about a declined credit card transaction when one occurs. The second is analytical models, such as those used for fraud detection. The third layer is language classification, which Bank of America used to build Erica, a virtual financial assistant, in 2016.

"Our journey of Erica started off with understanding language for the purposes of classification," Gopalkrishnan said. But the company isn't generating anything with Erica, he added: "We're classifying customer questions into buckets of intents and using those intents to take customers to the right part of the app or website to help them serve themselves."

The fourth layer, of course, is generative AI.

Koren Picariello, a Morgan Stanley managing director and its head of wealth management generative AI, said Morgan Stanley took a similar path. Throughout the 2010s, the company used machine learning for several purposes, like seeking investment opportunities that meet the needs and preferences of specific clients. Many of these techniques are still used.

"Historically, I was working in analytics, data, and innovation within the wealth space. In that space, Morgan Stanley did focus on the more traditional AI/ML tools," Picariello told BI. "Then in 2022, we started a dialogue with OpenAI before they became a household name. And that began our generative-AI journey."

How banks are deploying AI

Given the history, it'd be reasonable to think banks would turn generative-AI tools into new chatbots that more or less serve as better versions of Bank of America's Erica, or as autonomous financial advisors. But the most immediate changes instead came to internal processes and tools.

Morgan Stanley's first major generative-AI tool, Morgan Stanley Assistant, was launched in September 2023 for employees such as financial advisors and support staff who help clients manage their money. Powered by OpenAI's GPT-4, it was designed to give responses grounded in the company's library of over 100,000 research reports and documents.

The second tool, Morgan Stanley Debrief, was launched in June. It helps financial advisors create, review, and summarize notes from meetings with clients.

"It's kind of like having the most informed person at Morgan Stanley sitting next to you," Picariello said. "Because any question you have, whether it was operational in nature or research in nature, what we've asked the model to do is source an answer to the user based on our internal content."

Bank of America is pursuing similar applications, including a call center tool that saves customer associates' time by transcribing customer conversations in real time, classifying the customer's needs, and generating a summary for the agent.

Keeping humans in the loop

The decision to deploy generative AI internally first, rather than externally, was in part due to generative AI's most notable weakness: hallucinations.

In generative AI, a hallucination is an inaccurate or nonsensical response to a prompt, like when Google Search's AI infamously recommended that home chefs use glue to keep cheese from sliding off a pizza.

Banks are wary of consumer-facing AI chatbots that could make similar errors about bank products and policies.

Deploying generative AI internally lessens the concern. It's not used to autonomously serve a bank's customers and clients but to assist bank employees, who have the option to accept or reject its advice or assistance.

Bank of America provides AI tools that can help relationship bankers prep for a meeting with a client, but it doesn't aim to automate the bank-client relationship, Gopalkrishnan told BI.

Picariello said Morgan Stanley takes a similar approach to using generative AI while maintaining accuracy. The company's AI-generated meeting summaries could be automatically shared with clients, but they're not. Instead, financial advisors review them before they're sent.

Training the finance workforce for AI

Bank of America and Morgan Stanley are also training bank employees on how to use generative-AI tools, though their strategies diverge.

Gopalkrishnan said Bank of America takes a top-down approach to educating senior leadership about the potential and risks of generative AI.

About two years ago, he told BI, he helped top-level staff at the bank become "well aware" of what's possible with AI. He said having the company's senior leadership briefed on generative AI's perks, as well as its limitations, was important to making informed decisions across the company.

Meanwhile, Morgan Stanley is concentrating on making the company's AI tools easy to understand.

"We've spent a lot of time thinking through the UX associated with these tools, to make them intuitive to use, and taking users through the process and cycle of working with generative AI," Picariello said. "Much of the training is built into the workflow and the user experience." For example, Morgan Stanley's tools can advise employees on how to reframe or change a prompt to yield a better response.

For now, banks are focusing their AI initiatives on identifying and automating increasingly complex and nuanced tasks within their organizations rather than developing one-off applications targeted at the customer experience.

"We try to approach problems not as a technology problem but as a business problem. And the business problem is that Bank of America employees all perform lots of tasks in the company," said Gopalkrishnan. "The opportunity is to think more holistically, to understand the tasks and find the biggest opportunities so that five and 10 years from now, we're a far more efficient organization."


Cybersecurity execs face a new battlefront: 'It takes a good-guy AI to fight a bad-guy AI'

Security cameras against blue sky

Getty Images

  • Generative-AI models often face security threats such as prompt injections and data exfiltration.
  • Cybersecurity firms are fighting fire with fire — using AI to secure LLMs — but there are costs.
  • This article is part of "How AI Is Changing Everything," a series on AI adoption across industries.

Generative artificial intelligence is a relatively new technology. Consequently, it presents new security challenges that can catch organizations off guard.

Chatbots powered by large language models are vulnerable to various novel attacks. These include prompt injections, which use specially constructed prompts to change a model's behavior, and data exfiltration, which involves prompting a model thousands, maybe millions, of times to find sensitive or valuable information.

These attacks exploit the unpredictable nature of LLMs, and they've already inflicted significant monetary pain.

"The largest security breach I'm aware of, in monetary terms, happened recently, and it was an attack against OpenAI," said Chuck Herrin, the field chief information security officer of F5, a multicloud-application and security company.

headshot of Chuck Herrin.
Chuck Herrin, F5's field chief information security officer.

F5

AI models are powerful but vulnerable

Herrin was referencing DeepSeek, an LLM from the Chinese company of the same name. DeepSeek surprised the world with the January 20 release of DeepSeek-R1, a reasoning model that ranked only a hair behind OpenAI's best models on popular AI benchmarks.

But DeepSeek users noticed some oddities in how the model performed. It often constructed its response similarly to OpenAI's ChatGPT and identified itself as a model trained by OpenAI. In the weeks that followed, OpenAI told the Financial Times it had evidence that DeepSeek had used a technique called "distillation" to train its own model by prompting ChatGPT.

OpenAI did not make that evidence public, and it's unclear whether the company will pursue the matter further.

Still, the possibility caused serious concern. Herrin said DeepSeek was accused of distilling OpenAI's models down and stealing its intellectual property. "When the news of that hit the media, it took a trillion dollars off the S&P," he said.

Alarmingly, it's well established that AI vulnerabilities like these can be exploited. LLMs are trained on large datasets and are generally designed to respond to a wide variety of user prompts.

A model doesn't typically "memorize" the data it's trained on, meaning it doesn't precisely reproduce the training data when asked (though memorization can occur; it's a key point in The New York Times' copyright infringement lawsuit against OpenAI). However, prompting a model thousands of times and analyzing the results can allow a third party to emulate the model's behavior, which is what distillation does. Techniques like this can also yield some insight into the model's training data.
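Distillation-by-API needs nothing but the teacher model's outputs. A minimal sketch of the data-collection loop, with `query_model` standing in for calls to the target model's API (the function names are illustrative):

```python
def collect_distillation_data(query_model, prompts):
    """Query a teacher model and record prompt/response pairs.

    The resulting dataset can be used to fine-tune a student model that
    imitates the teacher. No access to the teacher's weights or training
    data is required - only to its outputs, which is why securing the
    API layer matters so much.
    """
    return [{"prompt": p, "completion": query_model(p)} for p in prompts]
```

Rate limiting, anomaly detection on query patterns, and authentication are the standard countermeasures, since the loop above is indistinguishable from heavy legitimate use at the level of any single request.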

This is why you can't secure your AI without securing the application programming interface used to access the model and "the rest of the ecosystem," Herrin told Business Insider. So long as the API is available without appropriate safeguards, it can be exploited.

To make matters worse, LLMs are a "black box." Training an LLM creates a neural network that gains a general understanding of the training data and the relationships between data in it. But the process doesn't describe which specific "neurons" in an LLM's network are responsible for a specific response to a prompt.

That, in turn, means it's impossible to restrict access to specific data within an LLM in the same way an organization might protect a database.

Sanjay Kalra, the head of product management at the cloud security company Zscaler, said: "Traditionally, when you place data, you place it in a database somewhere." At some point, an organization could delete that data if it wanted to, he told BI, "but with LLM chatbots, there's no easy way to roll back information."

Headshot of Sanjay Kalra.
Sanjay Kalra, the head of product management at Zscaler.

Zscaler

The solution to AI vulnerabilities is … more AI

Cybersecurity companies are tackling this problem from many angles, but two stand out.

The first is rooted in a more traditional, methodical approach to cybersecurity.

"We already control authentication and authorization and have for a long time," Herrin said. He added that while authenticating users for an LLM "doesn't really change" compared with authenticating for other services, it remains crucial.

Kalra also stressed the importance of good security fundamentals, such as access control and logging user access. "Maybe you want a copilot that's only available for engineering folks, but that shouldn't be available for marketing, or sales, or from a particular location," he said.

But the other half of the solution is, ironically, more AI.

LLMs' "black box" nature makes them tricky to secure, as it's not clear which prompts will bypass safeguards or exfiltrate data. But the models are quite good at analyzing text and other data, and cybersecurity companies are taking advantage of that to train AI watchdogs.

These models act as an additional layer between the user and the LLM. They examine user prompts and model responses for signs that a user is trying to extract information, bypass safeguards, or otherwise subvert the model.

"It takes a good-guy AI to fight a bad-guy AI," Herrin said. "It's sort of this arms race. We're using an LLM that we purpose-built to detect these types of attacks." F5 provides services that allow clients to use this capability both when deploying their own AI model on premises and when accessing AI models in the cloud.

But this approach has its difficulties, and cost is among them. Using a security-tuned variant of a large and capable model, like OpenAI's GPT-4.1, might seem like the best path toward maximum security. However, models like GPT-4.1 are expensive, which makes the idea impractical for most situations.

"The insurance can't be more expensive than the car," Kalra said. "If I start using a large language model to protect other large language models, it's going to be cost-prohibitive. So in this case, we see what happens if you end up using small language models."

Small language models have relatively few parameters. As a result, they require less computation to train and consume less computation and memory when deployed. Popular examples include Meta's Llama 3-8B and Mistral's Ministral 3B. Kalra said Zscaler also has an AI and machine learning team that trains its own internal models.
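The cost gap is easy to ballpark from parameter counts alone: weight memory is roughly parameters times bytes per parameter. A back-of-the-envelope sketch (ignoring activations and other runtime overhead):

```python
def weight_memory_gb(params_billion, bytes_per_param=2):
    """Approximate memory needed for model weights alone.

    fp16/bf16 store 2 bytes per parameter; 4-bit quantization stores
    0.5. Activations and KV cache add more on top, so treat this as a
    lower bound.
    """
    return params_billion * bytes_per_param
```

By this rough measure, an 8-billion-parameter model needs about 16 GB for fp16 weights, while a 3-billion-parameter model needs about 6 GB, which is a large part of why small models are cheaper to run as always-on guards.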

As AI continues to evolve, organizations face an unexpected security scenario: The very technology that suffers vulnerabilities has become an essential part of the defense strategy against those weak spots. But a multilayered approach, which combines cybersecurity fundamentals with security-tuned AI models, can begin to fill the gaps in an LLM's defenses.


Shutterstock earned over $100 million in revenue thanks in part to its AI-powered image-generator tool

A digital camera with a big lens sits on a desk and a person edits an image on a desktop computer in the background.
Shutterstock's approach to AI integration focused on the user experience.

dusanpetkovic/Getty Images

  • Shutterstock added generative AI to its stock-content platform, helping it generate $104 million in revenue.
  • The company has partnered with tech giants including Meta, Amazon, Apple, OpenAI, and Nvidia.
  • This article is part of "CXO AI Playbook" — straight talk from business leaders on how they're testing and using AI.

Shutterstock, founded in 2003 and based in New York, is a global leader in licensed digital content. It offers stock photos, videos, and music to creative professionals and enterprises.

In late 2022, Shutterstock made a strategic decision to embrace generative AI, becoming one of the first stock-content providers to integrate the tech into its platform.

Dade Orgeron, the vice president of innovation at Shutterstock, leads the company's artificial-intelligence initiatives. During his tenure, Shutterstock has transitioned from a traditional stock-content provider into one that provides several generative-AI services.

While Shutterstock's generative-AI offerings are focused on images, the company has an application programming interface for generating 3D models and plans to offer video generation.

Situation analysis: What problem was the company trying to solve?

When the first mainstream image-generation models, such as Dall-E, Stable Diffusion, and Midjourney, were released in late 2022, Shutterstock recognized generative AI's potential to disrupt its business.

"It would be silly for me to say that we didn't see generative AI as a potential threat," Orgeron said. "I think we were fortunate at the beginning to realize that it was more of an opportunity."

He said Shutterstock embraced the technology ahead of many of its customers. He recalled attending CES in 2023 and said that many creative professionals there were unaware of generative AI and the impact it could have on the industry.

Orgeron said that many industry leaders he encountered had the misconception that generative AI would "come in and take everything from everyone." That perspective felt pessimistic, he added; Shutterstock recognized early that AI-powered prompting "was design," Orgeron told Business Insider.

Key staff and stakeholders

Orgeron's position as vice president of innovation made him responsible for guiding the company's generative-AI strategy and development.

However, the move toward generative AI was preceded by earlier acquisitions. Orgeron himself joined the company in 2021 as part of its acquisition of TurboSquid, a company focused on 3D assets.

Side profile of a man with a beard wearing black glasses and a black jacket.
Dade Orgeron is the vice president of innovation at Shutterstock.

Photo courtesy of Dade Orgeron

Shutterstock also acquired three AI companies that same year: Pattern89, Datasine, and Shotzr. While they primarily used AI for data analytics, Orgeron said the expertise Shutterstock gained from these acquisitions helped it move aggressively on generative AI.

Externally, Shutterstock established partnerships with major tech companies including Meta, Alphabet, Amazon, Apple, OpenAI, Nvidia, and Reka. For example, Shutterstock's partnership with Nvidia enabled its generative 3D service.

AI in action

Shutterstock's approach to AI integration focused on the user experience.

Orgeron said the company's debut in image generation was "probably the easiest-to-use solution at that time," with a simple web interface that made AI image generation accessible to creative professionals unfamiliar with the technology.

That stood in contrast to competitors such as Midjourney and Stable Diffusion, which had more rudimentary interfaces when Shutterstock launched its service in January 2023. Midjourney, for instance, was initially available only through Discord, an online chat service more often used for communication in multiplayer games.

This focus on accessibility set the stage for Shutterstock.AI, the company's dedicated AI-powered image-generation platform. While Shutterstock designed the tool's front end and integrated it into its online offerings, the images it generates rely on a combination of internally trained AI models and solutions from external partners.

Shutterstock.AI, like other image generators, lets customers request their desired image with a text prompt and then choose a specific image style, such as a watercolor painting or a photo taken with a fish-eye lens.

However, unlike many competitors, Shutterstock uses information about user interactions to decide on the most appropriate model to meet the prompt and style request. Orgeron said Shutterstock's various models provide an edge over other prominent image-generation services, which often rely on a single model.
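The article doesn't describe how Shutterstock's model selection works internally, but the idea of routing a request to one of several specialized models can be sketched as follows. All model names and the routing rule here are illustrative assumptions, not Shutterstock's actual implementation:

```python
# Hypothetical sketch of multi-model routing for an image-generation
# service: the requested style determines which model handles the prompt.
# Model names and the lookup rule are assumptions for illustration only.

STYLE_TO_MODEL = {
    "watercolor": "painterly-model",
    "fisheye-photo": "photoreal-model",
    "3d-render": "generative-3d-model",
}

def route_request(prompt: str, style: str) -> str:
    """Pick a generation model from the requested style, falling back
    to a general-purpose model for unrecognized styles."""
    return STYLE_TO_MODEL.get(style, "general-model")

def generate(prompt: str, style: str) -> dict:
    """Resolve the model for a request; a real service would now call
    the chosen model's inference API."""
    model = route_request(prompt, style)
    return {"model": model, "prompt": prompt, "style": style}

print(generate("a lighthouse at dawn", "watercolor")["model"])
```

A single-model service skips the routing step entirely, which is the contrast Orgeron draws: the router lets each style land on the model best suited to it.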

But generative AI posed risks to Shutterstock's core business and to the photographers who contribute to the company's library. To mitigate those risks, Orgeron said, all of its AI models, whether internal or from partners, are trained exclusively on Shutterstock's legally owned data. The company also established a contributor fund to compensate content creators whose work was used in the models' training.

Orgeron said initial interest in Shutterstock.AI came from individual creators and small businesses. Enterprise customers followed more cautiously, taking time to address legal concerns and establish internal AI policies before adopting the tech. However, Orgeron said, enterprise interest has accelerated as companies recognize AI's competitive advantages.

Did it work, and how did leaders know?

Paul Hennessy, the CEO of Shutterstock, said in June the company earned $104 million in annual revenue from AI licensing agreements in 2023. He also projected that this revenue could reach up to $250 million annually by 2027.

Looking ahead, Shutterstock hopes to expand AI into its video and 3D offerings. The company's generative 3D API is in beta. While it doesn't offer an AI video-generation service yet, Orgeron said Shutterstock plans to launch a service soon. "The video front is where everyone is excited right now, and we are as well," he said. "For example, we see tremendous opportunity in being able to convert imagery into videos."

The company also sees value in AI beyond revenue figures. Orgeron said Shutterstock is expanding its partnerships, which now include many of the biggest names in Silicon Valley. In some cases, partners allow Shutterstock to use their tech to build new services; in others, they license data from Shutterstock to train AI.

"We're partnered with Nvidia, with Meta, with HP. These are great companies, and we're working closely with them," he said. "It's another measure to let us know we're on the right track."

Read the original article on Business Insider

How Alaska Airlines used AI to save over 1.2 million gallons of jet fuel

Alaska Airlines plane taking off
Alaska Airlines uses Air Space Intelligence's AI technology to help plan flight routes.

Kevin Carter/Getty Images

  • Alaska Airlines partnered with Air Space Intelligence to use an AI tool that suggests flight routes.
  • The tool, Flyways AI Platform, factors in data such as historical flight traffic and predicted weather.
  • This article is part of "CXO AI Playbook" โ€” straight talk from business leaders on how they're testing and using AI.

For "CXO AI Playbook," Business Insider takes a look at mini case studies about AI adoption across industries, company sizes, and technology DNA. We've asked each of the featured companies to tell us about the problems they're trying to solve with AI, who's making these decisions internally, and their vision for using AI in the future.

Coordinating airline flights seems easy on paper. Nearly all travel routes are planned months in advance, and they're designed to ensure there aren't too many aircraft flying at one time. But frequent airline delays show that this seemingly simple task can become mind-bogglingly complex.

One out of every five flights in the US is delayed by at least 15 minutes. "The fundamental problem is that when a human being sits down to plan a flight, they only have information about their one flight," Pasha Saleh, the head of corporate development at Alaska Airlines, said.

To solve that, Alaska Airlines partnered with an AI startup called Air Space Intelligence, the creator of the Flyways AI Platform, which uses artificial intelligence to suggest optimal flight routes. The partnership started three years ago and was renewed in August. Now, half the flight plans reviewed by Alaska Airlines' dispatchers include a plan suggested by Flyways.

Pasha Saleh headshot
Pasha Saleh, the head of corporate development at Alaska Airlines.

Alaska Airlines

Situation analysis: What problem was the company trying to solve?

All major airline flights are logged with the Federal Aviation Administration and generally filed at least several hours ahead of time. Most commercial passenger flights follow common routes flown on a schedule.

In theory, that means air traffic is predictable. But the reality in the air is often more hectic. Saleh said that air-traffic control is "often very tactical, not strategic."

That leads to last-minute diversions and delays that inconvenience passengers and cost Alaska money as pilots, crews, and planes sit idle.

"Airplanes are expensive assets, and you only make money when they're flying," Saleh said.

Key staff and partners

Alaska Airlines and ASI worked closely together from the beginning of the partnership.

Saleh met Phillip Buckendorf, the CEO of Air Space Intelligence, in 2018. Buckendorf wanted to use AI to route self-driving cars. Saleh wondered whether the idea could be applied to airlines and invited Buckendorf to visit Alaska Airlines' operations center.

"He looked at those screens expecting to see something out of 'Star Trek.' Instead, he saw something one generation removed from IBM DOS," Saleh said, referring to an operating system that was discontinued over 20 years ago. "Pretty much on the spot, he decided to pivot to airlines."

The resulting product, Flyways, was adopted by Alaska Airlines in 2021.

While Air Space Intelligence developed the Flyways AI Platform, it did so in close cooperation with the airline's stakeholders.

"Airlines are very unionized environments, so we wanted to make sure this wasn't seen as a threat to dispatchers," Saleh said. Alaska Airlines used dispatcher feedback to hone Flyways.

Flyways now works as an assistant to the airline's dispatchers, who see its options presented when creating a flight plan.

AI in action

The partnership between Alaska Airlines and Air Space Intelligence began with a learning period for both organizations.

ASI's staff shadowed the airline's dispatchers to learn how they worked, while Alaska Airlines learned more about how a machine-learning algorithm could be used to route traffic. Saleh said ASI spent about 1½ years developing the first version of the Flyways AI Platform.

Flyways trains its AI algorithm on historical flight data. At its most basic level, this includes information like a flight's scheduled departure and arrival, actual departure and arrival, and route.

However, Flyways also ingests data on less obvious variables, like restricted military airspace (including temporary restrictions, like those surrounding Air Force One) and wind speeds at cruising altitude. Even events like the Super Bowl, which causes a surge in demand and leads to airspace restrictions around the event, are considered.

Saleh said Flyways connects to multiple sources of information to acquire this data and automatically ingests it through application programming interfaces. Flyways then runs its AI model to determine the suggested route.

"Suggested" is a keyword: While Flyways uses AI to predict the best route, it's not an automated or agentic system and doesn't claim the reasoning capabilities of generative-AI services like ChatGPT.

Dispatchers see Flyways' flight plans as an option in the software interface they use to plan a flight, but a plan isn't put into use until a human dispatcher approves it.
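The human-in-the-loop flow described above can be sketched in a few lines: the system scores candidate routes and a dispatcher must accept the suggestion before it becomes the plan. The data class, the scoring weights, and the function names are assumptions for illustration, not Air Space Intelligence's actual software:

```python
# Illustrative human-in-the-loop route suggestion, as the article
# describes it: the model proposes, a dispatcher disposes. The scoring
# function and all names here are hypothetical stand-ins.

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Route:
    name: str
    fuel_gallons: float
    predicted_delay_min: float

def suggest_route(candidates: List[Route]) -> Route:
    """Stand-in for the model: prefer routes that minimize a weighted
    mix of fuel burn and predicted delay."""
    return min(candidates, key=lambda r: r.fuel_gallons + 50 * r.predicted_delay_min)

def plan_flight(candidates: List[Route],
                dispatcher_approves: Callable[[Route], bool]) -> Optional[Route]:
    """The suggestion is only adopted if a human dispatcher accepts it;
    otherwise no automated plan is filed."""
    suggestion = suggest_route(candidates)
    return suggestion if dispatcher_approves(suggestion) else None

routes = [
    Route("direct", fuel_gallons=5200, predicted_delay_min=12),
    Route("northern", fuel_gallons=5000, predicted_delay_min=25),
    Route("coastal", fuel_gallons=5400, predicted_delay_min=5),
]
chosen = plan_flight(routes, dispatcher_approves=lambda r: True)
```

The approval callback is the key design point: as with Flyways, nothing becomes a flight plan until a human signs off, which also explains why a 23% acceptance rate can still be a success.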

Did it work, and how did leaders know?

Alaska Airlines' dispatchers accept 23% of Flyways' recommendations. While that might seem low, those accepted routes helped reduce Alaska Airlines' fuel consumption by more than 1.2 million gallons in 2023, according to the airline's annual sustainability report.

Reduced fuel consumption is necessary if Alaska is to reach its goal of becoming the most fuel-efficient US airline by 2025. The airline also ranks well on delays: It was the No. 2 most-on-time US airline in 2023, with some of the fewest cancellations.

Meanwhile, ASI has grown its head count from a handful of engineers to 110 employees across offices in Boston, Denver, Poland, and Washington, DC. In addition to its partnership with Alaska, the company has contracts with the US Air Force and received $34 million in Series B funding in December from Andreessen Horowitz.


โŒ