The developers said that while AI could boost productivity, stakeholders should understand its limitations.
This article is part of "CXO AI Playbook": straight talk from business leaders on how they're testing and using AI.
At a Business Insider roundtable in November, Neeraj Verma, the head of applied AI at Nice, argued that generative AI "makes a good developer better and a worse developer worse."
He added that some companies expect employees to be able to use AI to create a webpage or HTML file and simply copy and paste solutions into their code. "Right now," he said, "they're expecting that everybody's a developer."
During the virtual event, software developers from companies including Meta, Slack, Amazon, and Slalom discussed how AI has influenced their roles and career paths.
They said that while AI could help with tasks like writing routine code and translating ideas between programming languages, foundational coding skills are necessary to use the AI tools effectively. Communicating these realities to nontech stakeholders is a primary challenge for many software developers.
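To make that concrete, here is a minimal sketch of one such routine task, translating a small function between languages with OpenAI's Python client; the model name, prompt wording, and sample function are illustrative assumptions, not details from the roundtable.

```python
# A minimal sketch of the "routine code" workflow described above:
# asking a model to translate a small Python function into JavaScript
# via OpenAI's Python client. Model name and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

python_snippet = '''
def median(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2
'''

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; any chat-capable model would do
    messages=[
        {"role": "system", "content": "Translate Python code into idiomatic JavaScript."},
        {"role": "user", "content": python_snippet},
    ],
)

# The reply is a draft, not a finished artifact: a developer still has
# to read, test, and integrate it, which is where foundational skills come in.
print(response.choices[0].message.content)
```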
Understanding limitations
Coding is just one part of a developer's job. As AI adoption surges, testing and quality assurance may become more important for verifying the accuracy of AI-generated work. The US Bureau of Labor Statistics projects that the number of software developers, quality-assurance analysts, and testers will grow by 17% in the next decade.
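That QA role is easy to picture in code. Below is a minimal sketch, assuming a hypothetical is_leap_year function whose body came back from an AI assistant, of how a developer might pin down its behavior with pytest before trusting it.

```python
# A minimal sketch of verifying AI-generated code with unit tests.
# The is_leap_year function and the test cases are hypothetical examples.
import pytest

def is_leap_year(year):
    # Pretend this body was produced by an AI coding assistant.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

@pytest.mark.parametrize(
    "year, expected",
    [
        (2024, True),   # ordinary leap year
        (2023, False),  # ordinary non-leap year
        (1900, False),  # century years are not leap years...
        (2000, True),   # ...unless divisible by 400
    ],
)
def test_is_leap_year(year, expected):
    # The tests encode the rule independently of the generated code,
    # so a wrong implementation fails here instead of in production.
    assert is_leap_year(year) == expected
```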
Expectations for productivity can overshadow concerns about AI ethics and security.
"Interacting with ChatGPT or Cloud AI is so easy and natural that it can be surprising how hard it is to control AI behavior," Igor Ostrovsky, a cofounder of Augment, said during the roundtable. "It is actually very difficult to, and there's a lot of risk in, trying to get AI to behave in a way that consistently gives you a delightful user experience that people expect."
For many developers, managing stakeholders' expectations (communicating the limits, risks, and overlooked aspects of the technology) is a challenging yet crucial part of the job.
Kesha Williams, the head of enterprise architecture and engineering at Slalom, said during the roundtable that one way to frame these conversations with stakeholders is to outline specific use cases for AI. Focusing on the technology's applications can highlight potential pitfalls while keeping the big picture in view.
"Good developers understand how to write good code and how good code integrates into projects," Verma said. "ChatGPT is just another tool to help write some of the code that fits into the project."
Ostrovsky predicted that the ways employees engage with AI would change over the years. In the age of rapidly evolving technology, he said, developers will need to have a "desire to adapt and learn and have the ability to solve hard problems."