Amazon is trying to stop people using AI to cheat in job interviews, internal messages and guidelines show
- Amazon is cracking down on the use of AI tools in job interviews.
- AI tools in interviews pose ethical challenges and have sparked debate across Silicon Valley.
- Some Amazon employees consider AI tools beneficial, while others see them as dishonest.
Generative AI tools like coding assistants and "teleprompter" apps can feed candidates live answers during job interviews, giving a leg up to applicants looking for an edge.
Amazon, one of the largest employers in the world, wants to curb this growing trend.
Recent guidelines shared with Amazon's internal recruiters show that job applicants can be disqualified from the hiring process if they are found to have used an AI tool during an interview.
Amazon believes AI tools in interviews give candidates an "unfair advantage" and prevent the company from evaluating their "authentic" skills and experiences, according to the guidelines, which were obtained by Business Insider.
"To ensure a fair and transparent recruitment process, please do not use GenAl tools during your interview unless explicitly permitted," the guidelines state. "Failure to adhere to these guidelines may result in disqualification from the recruitment process."
The guidelines also instruct Amazon recruiters to share these rules with job candidates.
The crackdown highlights one of the many ethical challenges that are bubbling up from the rise of generative AI. Amazon has restricted employee use of AI tools such as ChatGPT, even as it encourages the use of internal AI apps to boost productivity. Hacking job interviews with AI has become a growing trend, prompting debate across Silicon Valley.
In a recent internal Slack conversation seen by BI, some Amazon employees debated the need to ban AI tools during job interviews when they can improve the quality of work.
"This is certainly an increasing trend, especially for tech/SDE roles," said one of the Slack messages, referring to software development engineers.
An Amazon spokesperson said the company's recruiting process "prioritizes ensuring that candidates hold a high bar."
When applicable, candidates must acknowledge that they won't use "unauthorized tools, like GenAI, to support them" during the interview, the spokesperson added in an email to BI.
Tips to identify the use of GenAI tools
The trend has become a big enough problem for Amazon that the company even shared internal tips on how to spot applicants using GenAI tools during job interviews.
The indicators, according to the guidelines, include:
- The candidate can be seen typing while being asked questions. (Note: it is not uncommon for candidates to write down or type the question asked as they prepare to answer.)
- The candidate appears to be reading their answers rather than responding naturally. This could include correcting themselves when they misread a word.
- The candidate's eyes appear to be tracking text or looking elsewhere, rather than viewing their primary display or moving naturally during conversation.
- The candidate delivers confident responses that do not clearly or directly address the question.
- The candidate reacts to the outputs of the AI tool when they appear to be incorrect or irrelevant. This often shows up as the candidate appearing distracted or confused while trying to make sense of the outputs.
While candidates are permitted to talk about how they have used generative AI applications to "achieve efficiencies" in their current or previous roles, they are strictly prohibited from using them during job interviews, the Amazon guidelines added.
A recent video from an AI company, claiming its coding assistant was used during an interview to land a job offer from Amazon, raised alarms internally, one person familiar with the matter told BI. This person asked not to be identified because they were not authorized to speak to the media.
'Mainstream' problem
This is not just an Amazon problem. Job seekers are becoming increasingly bold about using AI tools in interviews. A recent experiment found it was easy to cheat in job interviews using AI tools like ChatGPT.
In October, xAI cofounder Greg Yang wrote on X that he'd caught a job candidate cheating with Anthropic's Claude AI service.
"The candidate tried to use claude during the interview but it was way too obvious," Yang wrote.
Matthew Bidwell, a business professor at Wharton, told BI that these AI tools "definitely penetrated the mainstream, and employers are worried about it," based on conversations with students in his executive-management program.
Bidwell said it becomes a problem when employers can't detect these tools and job candidates are reluctant to admit using them.
"There's a strong risk of people using it to misrepresent their skills, and I think that is somewhat unethical," Bidwell said.
Bar Raising?
Not everyone is opposed to it. Some Silicon Valley companies are open to allowing these apps in job interviews because they already use them at work. Others are making the technical interview an open-book test but adding questions for a deeper assessment.
Some Amazon employees appear less concerned about it, too.
One person wrote in a recent Slack conversation at Amazon that their team is "studying" the possibility of providing a generative AI assistant to candidates and changing their hiring approach, according to internal messages seen by BI. Another person said that even if a candidate gets hired after using these tools, Amazon has "other mechanisms" to address those who do not meet expectations for their roles.
A third person questioned whether Amazon could benefit from this. Using generative AI may be "dishonest or unprofessional," but on the other hand, it is "raising the bar" for Amazon by improving the quality of the interview, this person argued.
"If judged solely by the outcome, it could be considered bar-raising," this person wrote.