
Latimer AI startup to launch bias detection tool for web browsers

John Pasmore, cofounder and CEO of Latimer AI

Latimer AI

  • Latimer AI plans to launch a bias detection tool as a Chrome browser extension in January.
  • The tool scores text from one to 10, with 10 being extremely biased.
  • Latimer AI hopes the product will attract new users.

Bias is in the eye of the beholder, yet it's increasingly being evaluated by AI. Latimer AI, a startup that's building AI tools on a repository of Black datasets, plans to launch a bias detection tool as a Chrome browser extension in January.

The company anticipates the product could be used by people who run official social media accounts, or anyone who wants to be mindful of their tone online, Latimer CEO John Pasmore told Business Insider.

"When we test Latimer against other applications, we take a query and score the response. So we'll score our response, we'll score ChatGPT or Claude's response, against the same query and see who scores better from a bias perspective," Pasmore said. "It's using our internal algorithm to not just score text, but then correct it."

The tool assigns a score from one through 10 to text, with 10 being extremely biased.
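
Latimer hasn't published a public API for the extension, but the score-and-correct flow Pasmore describes can be sketched as a simple client. Everything below is a hypothetical illustration of that workflow, not Latimer's actual interface: the endpoint URL, request fields, and response shape are all invented.

```python
import requests  # assumes the `requests` package is installed

# Hypothetical endpoint: Latimer has not published a public API for the
# extension, so this URL and the payload/response fields are illustrative only.
LATIMER_BIAS_API = "https://api.latimer.ai/v1/bias-score"  # hypothetical


def score_text(text: str, api_key: str) -> dict:
    """Send text to a (hypothetical) bias-scoring endpoint.

    Mirrors the behavior described in the article: a score from 1 to 10
    (10 = extremely biased) plus a suggested rewrite of the text.
    """
    response = requests.post(
        LATIMER_BIAS_API,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    # e.g. {"score": 6.8, "label": "High Bias", "fix": "..."}
    return response.json()


# Example: compare two posts' scores the way the article describes.
# result_a = score_text(post_a, api_key)
# result_b = score_text(post_b, api_key)
# print(result_a["score"], result_b["score"])
```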

Patterns of where bias is found online are already emerging from beta testing of the product.

For instance, text from an April post by Elon Musk, in which he apologized for calling Dustin Moskovitz a derogatory name, was compared to an August post from Bluesky CEO Jay Graber.

An Elon Musk post on X is analyzed for bias and scores 6.8 out of 10, or "High Bias" according to Latimer AI.

Latimer AI

Musk's post scored 6.8 out of 10, or "High Bias," while Graber's scored 3.6 out of 10, or "Low Bias."

Bluesky CEO Jay Graber's post to the platform is analyzed for bias and scores a 3.6 out of 10, or "Low Bias" according to Latimer AI.

Latimer AI

Latimer's technology proposed a "fix" to the text in Musk's post by changing it to the following: "I apologize to Dustin Moskovitz for my previous inappropriate comment. It was wrong. What I intended to express is that I find his attitude to be overly self-important. I hope we can move past this and potentially become friends in the future."

While what is deemed biased is subjective, Latimer isn't alone in trying to tackle this challenge through technology. The LA Times plans to display a "bias meter" in 2025, for instance.

Latimer hopes its bias tool will draw in more users.

"This will help us identify a different set of users who might not use a large language model, but might use a browser extension," Pasmore said.

The bias detector will launch at $1 a month, and a pro version will let users access multiple bias detection algorithms.

Read the original article on Business Insider

You might want to have your next job interview in the morning

Two women in a job interview reviewing a résumé
Scheduling a job interview in the morning could be a smart strategy.

Olga Rolenko

  • Morning interviews may yield higher scores due to interviewer bias, research shows.
  • Bias in hiring can be influenced by the time of day, affecting candidate evaluations.
  • AI tools could reduce this bias, offering fairer assessments than manual methods.

If you get to choose when to schedule a job interview, you might want to grab a coffee and go for a morning slot.

That's because some people conducting interviews tend to give higher scores to candidates they meet with earlier in the day compared with the afternoon, a startup's review of thousands of interviews found.

It's not an absolute, of course, and candidates can still kill it well after lunchtime. Yet, in a job market where employers in fields like tech have been slow to hire, even a modest advantage could make a difference, Shiran Danoch, an organizational psychologist, told Business Insider.

"Specific interviewers have a consistent tendency to be harsher or more lenient in their scores depending on the time of day," she said.

It's possible that in the morning, interviewers haven't yet been beaten down by back-to-back meetings, or are perhaps still enjoying their own first coffee, she said.

Danoch and her team noticed the morning-afternoon discrepancy while reviewing datasets on thousands of job interviews. Danoch is the CEO and founder of Informed Decisions, an artificial intelligence startup focused on helping organizations reduce bias and improve their interviewing processes.

She said the inferences on the time-of-day bias are drawn from the datasets of interviewers who use Informed Decisions tools to score candidates. The data reflected those who've done at least 20 interviews using the company's system. Danoch said that in her company's review of candidates' scores, those interviewed in the morning often received statistically significantly higher marks.
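
As an illustration of the kind of comparison behind that finding, here is a minimal sketch that tests whether morning interview scores are significantly higher than afternoon ones. The numbers are synthetic, and the approach (a one-sided Welch's t-test via SciPy) is an assumption chosen for illustration, not Informed Decisions' actual methodology.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in data: hypothetical 1-10 interviewer ratings.
# This is illustrative only, not the company's real dataset or method.
rng = np.random.default_rng(0)
morning_scores = rng.normal(loc=7.4, scale=1.0, size=200)
afternoon_scores = rng.normal(loc=7.0, scale=1.0, size=200)

# Welch's t-test (unequal variances), one-sided:
# are morning ratings significantly higher than afternoon ratings?
t_stat, p_value = stats.ttest_ind(
    morning_scores, afternoon_scores, equal_var=False, alternative="greater"
)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Morning scores are statistically significantly higher.")
```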

The good news, she said, is that when interviewers are made aware that they might be more harsh in the afternoon, they often take steps to counteract that tendency.

"In many cases, happily, we're actually seeing that the feedback that we're providing helps to reduce the bias and eventually eliminate the bias," Danoch said.

However, she said, interviewers often don't get feedback about their hiring practices, even though finding the right talent is "such a crucial part" of what hiring managers and recruiters do.

She said other researchers have identified how the time of day, and whether someone might be a morning person or an evening person, can affect decision-making processes.

An examination of more than 1,000 parole decisions in Israel found that judges were likelier to show leniency at the start of the day and after breaks. However, that favorability decreased as judges made more decisions, according to the 2011 research.

Tech could help

It's possible that if tools like artificial intelligence take on more responsibility for hiring, job seekers won't have to worry about the time of day they interview.

For all of the concerns about biases in AI, the partiality involved in more "manual" hiring, where interviewers ask open-ended questions, often leads to more bias than AI does, said Kiki Leutner, cofounder of SeeTalent.ai, a startup creating AI-run tests that simulate tasks associated with a job. She has researched the ethics of AI and of assessments in general.

Leutner told BI that it's likely that in a video interview conducted by AI, for example, a candidate might have a fairer shot at landing a job.

"You don't just have people do unstructured interviews, ask whatever questions, make whatever decisions," she said.

And, because everything is recorded, Leutner said, there is documentation of what decisions were made and on what basis. Ultimately, she said, it's then possible to take that information and correct algorithms.

"Any structured process is better in recruitment than not structuring it," Leutner said.

Humans are 'hopelessly biased'

Eric Mosley, cofounder and CEO of Workhuman, which makes tools for recognizing employee achievements, told BI that data created by humans will be biased, because humans are "hopelessly biased."

He pointed to 2016 research indicating that juvenile court judges in Louisiana doled out tougher punishments, particularly to Black youths, after the Louisiana State University football team suffered a surprise defeat.

Mosley said, however, that AI can be trained to ignore certain biases and look for others to eliminate them.

Taking that approach can help humans guard against some of their natural tendencies. To get it right, however, it's important to have safeguards around the use of AI, he said. These might include ethics teams with representatives from legal departments and HR to focus on issues of data hygiene and algorithm hygiene.

Not taking those precautions and solely relying on AI can even risk scaling humans' biases, Mosley said.

"If you basically just unleash it in a very simplistic way, it'll just replicate them. But if you go in knowing that these biases exist, then you can get through it," he said.

Danoch, from Informed Decisions, said that if people conducting interviews suspect they might be less forgiving after the morning has passed, they can take steps to counteract that.

"Before you interview in the afternoons, take a little bit longer to prepare, have a cup of coffee, refresh yourself," she said.

Read the original article on Business Insider

The 'halo effect' is compelling but can be risky for both employers and job seekers

Job seekers at a job fair are standing in a line
Certain attributes a job candidate appears to possess can outshine others.

Joe Raedle/Getty Images

  • Job seekers with prestigious schools or employers on their résumés can benefit from a "halo" effect.
  • Yet education and experience are not reliable indicators of job performance, an expert told BI.
  • Some employers are shifting focus to skills and behaviors to improve hiring outcomes.

Job seekers who are attractive, who went to the right school, or who worked at the right company can enjoy a so-called "halo effect" that outweighs other factors that often are better predictors of how well someone will perform in a role.

While those candidates may look good on paper, the halo is a problem for employers and for many other job seekers, executives told Business Insider.

Shiran Danoch saw firsthand how bias can affect hiring. Early in her career, she thought she'd found the perfect candidate for a role she was trying to fill.

Yet after Danoch's boss interviewed the woman, he called Danoch into his office.

"He said, 'Why did you bring her here? She isn't one of us,'" Danoch told BI.

It slowly occurred to Danoch that her boss's problem was with the candidate's ethnicity despite what Danoch saw as her obvious fit for the role.

There's a lot of work to do to reduce bias that unfairly hurts β€” and helps β€” candidates, said Danoch, an organizational psychologist who's the CEO and founder of Informed Decisions, an artificial intelligence startup that aims to help organizations reduce bias and improve their interviewing processes.

Danoch estimates that perhaps as many as nine in 10 hires either suffer or benefit from a bias that shapes the interviewer's perceptions of the candidate's aptitude for the role.

She said this means people who aren't a great fit could end up landing the role, and candidates who would do the job better might be sidelined.

Education and experience aren't sure bets

Danoch said analysis of thousands of interviews on the Informed Decisions interview platform, combined with findings from broader academic research, highlights that "dominant-skill" bias is a prominent risk.

"When you're interviewing a candidate, there might be one specific skill that paints your overall impression," she said. Often, Danoch said, that is "effective communication." That can mean job seekers who are strong communicators can talk their way past their weaknesses.

Another risk is being wowed by grads from top schools or those who worked at high-profile companies. Substantial bodies of research have shown that education and experience aren't good predictors of how successful someone will be in a job, she said.

Meantime, it's easy to see why a hiring manager might assume someone who'd worked at one big-name tech firm might be a good fit for another. That's not always the case, according to Alan Price, the global head of talent acquisition at Deel, a global HR company that helps employers hire abroad.

He told BI that in past roles at other companies, there was often a push to focus on Ivy League grads or people who'd worked at certain tech firms. That made it hard for candidates coming from small startups, for example, to get hired, he said.

"You'd work at Facebook. You'd work at Google. You'd go to LinkedIn. There's a merry-go-round," Price said.

Yet he said those in sales, for example, who had halo résumés by virtue of having been at top companies, weren't always the strongest contributors when it came to basic metrics like how much revenue they brought in.

"The top people weren't only coming from the big, established organizations," Price said.

Hiring for skills

To improve the quality of its hires, Price said, Deel reformatted its interviewing process to focus on behaviors and less on factors like education and experience. That's led managers to report being more satisfied with the work they were getting from new hires, he said.

Price said it's not that experience doesn't count. Instead, it's evaluated alongside factors like functional skills for doing the job, behaviors, and motivation. To gain insight into skills, Deel will often have job seekers complete assessments.

That can help root out candidates who might toss around industry buzzwords, though they might lack some abilities.

"Because you've worked here and you've worked on this problem type, my assumption is, from a halo CV perspective, you're going to be really good," he said.

Price said that because some job seekers might stay at an organization for two to three years, hiring managers could take that to mean the candidates are good at what they do.

Yet "that is a big assumption," he said.

Some employers have announced efforts to look more at abilities rather than pedigree. In some cases, this can mean waiving degree requirements.

However, David Deming, a professor of political economy at Harvard's Kennedy School, previously told BI that even as some employers do away with prerequisites that candidates for some roles have a bachelor's degree, those doing the hiring might still consider whether a candidate has one.

"Firms are wanting credit for removing a requirement, but that doesn't necessarily mean they're changing their hiring at the end of the day," he said.

Strong communicators can win out

Danoch, from Informed Decisions, said one reason strong communicators can benefit from a halo effect in interviews relates to those doing the hiring.

"Because a lot of interviewers are inexperienced in interviewing, that's what grabs them," she said, referring to a candidate's communication chops.

While such abilities are often among the soft skills many employers say they value, Danoch said being able to communicate well isn't likely to be the only attribute needed for a role. Even if communication is important, she said, it shouldn't be the sole factor for hiring.

Danoch said the halo effect can be problematic if it leads employers to hire candidates who might not be the best fit. Conversely, she said, a "shadow effect" can result in capable job seekers being discounted.

"The candidate is either all good or either all bad," Danoch said.

Do you have something to share about what you're seeing in your job search? Business Insider would like to hear from you. Email our workplace team from a nonwork device at [email protected] with your story, or ask for one of our reporter's Signal numbers.

Read the original article on Business Insider

Should broadcast media owners worry about Brendan Carr, Trump's pick to run the FCC?

Brendan Carr, Donald Trump's pick to head the Federal Communications Commission, speaking at the Conservative Political Action Conference, 2024
Brendan Carr, Donald Trump's pick to head the Federal Communications Commission, says broadcast licenses are not "sacred cows," which suggests that media companies that have them could lose them.

Celal Gunes/Anadolu via Getty Images

  • Brendan Carr, Trump's pick to run the FCC, says he'll be scrutinizing broadcast TV companies, like CBS and NBC.
  • What does that mean? Carr is vague.
  • That vagueness may be the point: It could cause broadcast TV companies to think twice before running something Carr, or Trump, doesn't like.

The next Trump administration says it wants to get rid of regulations.

But not all regulations.

Brendan Carr, Trump's choice to head the Federal Communications Commission, says he plans to scrutinize broadcast TV operators to see if they are operating in "the public interest" β€” a requirement tied to the 1934 Communications Act. If they're not, he says, they could lose their license to use the public airwaves.

What exactly does that mean? Carr isn't super-specific. And Carr, who already is an FCC commissioner, didn't mention the issue when he wrote about the FCC for Project 2025, a conservative planning document Trump allies are using to help staff the next administration. But he has been talking about it quite a bit over the last few weeks.

Shortly after Trump nominated Carr to lead the FCC, Carr announced that the agency would "enforce this public interest obligation." He brought the idea up again in a Fox News interview shortly after. On Friday, he talked about it again, via a CNBC interview.

"Look, the law is very clear. The Communications Act says you have to operate in the public interest," he said. "And if you don't, yes, one of the consequences is potentially losing your license. And of course, that's on the table. I mean, look, broadcast licenses are not sacred cows."

Asked to clarify if he meant he was going to target broadcasters he thought were too liberal, Carr said that wasn't the case, and that he wasn't trying to rein in speech.

"At the end of the day, obviously there's a statutory provision that prevents the FCC from engaging in censorship. I don't want to be the speech police. But there is something that's different about broadcasters than, say, podcasters, where you have to operate in a public interest."

Then Carr argued that all he plans on doing is enforcing existing regulations.

"I'm just saying follow the law. I mean, this law has been on the books for a long time," he said. "It's not my decision to hold broadcasters to a public interest obligation. It's Congress. And if they don't like that, then they should go to Congress to change the law."

(It's worth noting the act applies only to companies with over-the-air broadcast operations, like CBS and NBC. But all four of the big broadcast networks are part of larger media outfits. In the case of CBS and NBC, that's Paramount and Comcast, respectively.)

I've asked Carr and his office for comment and clarification about where he thinks broadcasters may have acted against the public interest.

But in the meantime, it's worth noting that he's already argued that CBS deserves scrutiny over the way its "60 Minutes" program handled an interview with Kamala Harris, which is also at the center of a lawsuit Trump filed against CBS last month. And that Carr also complained about Harris making an appearance on NBC's "Saturday Night Live" the weekend before the election.

Perhaps Carr has also criticized the way broadcasters have treated Harris or other Democrats. But I haven't seen or heard it.

All of which suggests that Carr may try using the power of his agency to affect the way broadcasters treat Trump and his allies. Even if he says that's not the case.

But none of this is super clear-cut. For instance: Carr has talked about bringing up Trump's "60 Minutes" complaint when Larry and David Ellison, who are trying to buy CBS owner Paramount, need approval to transfer the CBS broadcast license. But it's hard to imagine a Carr-led FCC actually holding up the Paramount deal, given that Larry Ellison is both a Trump supporter and good pals with Elon Musk, a Carr ally.

And it's also worth noting that Carr also has carrots available to help get broadcasters on board, in addition to sticks. Most notably: Lots of media owners are hoping that the next Trump administration will make it easier for them to consolidate, and Carr has repeatedly said he's in favor of that. So this could easily get muddy.

But all of it has the potential to cause media companies to think twice, or a third time, before airing something they think Donald Trump has a problem with. Is that what Brendan Carr wants?

Read the original article on Business Insider

Google says its new AI models can identify emotions β€” and that has experts worried

Google says its new AI model family has a curious feature: the ability to "identify" emotions. Announced on Thursday, the PaliGemma 2 family of models can analyze images, enabling the AI to generate captions and answer questions about people it "sees" in photos. "PaliGemma 2 generates detailed, contextually relevant captions for images," Google wrote in […]
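
For readers who want to try the models, captioning with the original PaliGemma works through Hugging Face's transformers library, and the PaliGemma 2 checkpoints are expected to expose the same interface. The sketch below assumes that interface and a checkpoint name from Google's release ("google/paligemma2-3b-pt-224", gated behind Google's license); treat it as an illustrative sketch rather than official sample code.

```python
# Minimal image-captioning sketch, assuming PaliGemma 2 checkpoints use the
# same transformers interface as the original PaliGemma models.
import requests
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma2-3b-pt-224"  # gated; requires license acceptance
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

url = "https://example.com/photo.jpg"  # placeholder image URL
image = Image.open(requests.get(url, stream=True).raw)

# "<image>" marks where image tokens go; "caption en" is a PaliGemma task prefix.
inputs = processor(text="<image>caption en", images=image, return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=30)

# Strip the prompt tokens before decoding the generated caption.
caption = processor.decode(
    generated[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(caption)
```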

Β© 2024 TechCrunch. All rights reserved. For personal use only.
