Ad Buyers Blast Google, Amazon, and Others After Ads Appear on Site Hosting Child Abuse Content
MLCommons, a nonprofit AI safety working group, has teamed up with AI dev platform Hugging Face to release one of the world's largest collections of public domain voice recordings for AI research. The dataset, called Unsupervised People's Speech, contains more than a million hours of audio spanning at least 89 languages. MLCommons says it was […]
OpenAI has quietly removed language endorsing "politically unbiased" AI from one of its recently published policy documents. In the original draft of its "economic blueprint" for the AI industry in the U.S., OpenAI said that AI models "should aim to be politically unbiased by default." A new draft, made available Monday, deletes that phrasing. When […]
The European Commission (EC) is planning to "energetically" advance its probe into content moderation on X (formerly Twitter), potentially ordering changes at Elon Musk's social network in the coming months, Bloomberg reported.
Since 2023, the EC has been investigating X for possible violations of the Digital Services Act (DSA). Notably, it's the Commission's first formal probe under the DSA, which requires very large online platforms to meet strict content moderation and transparency standards to ensure user safety, reduce misinformation, prevent illegal/harmful activity, and facilitate "a fair and open online platform environment."
In a letter to European lawmakers viewed by Bloomberg, EC tech commissioner Henna Virkkunen and justice chief Michael McGrath apparently confirmed that the investigation into X will end "as early as legally possible."
Bias is in the eye of the beholder, yet it's increasingly being evaluated by AI. Latimer AI, a startup that's building AI tools on a repository of Black datasets, plans to launch a bias detection tool as a Chrome browser extension in January.
The company anticipates the product could be used by people who run official social media accounts, or anyone who wants to be mindful of their tone online, Latimer CEO John Pasmore told Business Insider.
"When we test Latimer against other applications, we take a query and score the response. So we'll score our response, we'll score ChatGPT or Claude's response, against the same query and see who scores better from a bias perspective," Pasmore said. "It's using our internal algorithm to not just score text, but then correct it."
The tool assigns text a score from 1 to 10, with 10 being extremely biased.
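Latimer hasn't published the algorithm Pasmore describes, but the workflow he outlines, scoring each model's answer to the same query on a 1-to-10 bias scale and comparing the results, can be sketched roughly as follows. The score_bias heuristic, the term list, and the response texts below are hypothetical placeholders for illustration, not the company's method.

```python
# Rough illustration of the comparison workflow described above.
# score_bias() is a toy stand-in; Latimer's actual scoring algorithm is proprietary.

LOADED_TERMS = {"idiot", "stupid", "always", "never", "those people"}  # toy term list


def score_bias(text: str) -> float:
    """Toy scorer: 1 (low bias) to 10 (extremely biased), based on counting loaded terms."""
    lowered = text.lower()
    hits = sum(lowered.count(term) for term in LOADED_TERMS)
    return float(min(10, 1 + 2 * hits))


def compare_responses(query: str, responses: dict[str, str]) -> dict[str, float]:
    """Score each model's response to the same query so the results can be compared."""
    return {model: score_bias(text) for model, text in responses.items()}


# Hypothetical responses from different assistants to one query:
scores = compare_responses(
    "Summarize the debate over school funding.",
    {"latimer": "...", "chatgpt": "...", "claude": "..."},
)
least_biased = min(scores, key=scores.get)  # model whose response scored lowest
print(scores, least_biased)
```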
Patterns of where bias is found online are already emerging from beta testing of the product.
For instance, text from an April post by Elon Musk, in which he apologized for calling Dustin Moskowitz a derogatory name, was compared to an August post from Bluesky CEO Jay Graber.
Musk's post scored 6.8 out of 10, or "High Bias," while Graber's scored 3.6 out of 10, or "Low Bias."
Latimer's technology proposed a "fix" to the text in Musk's post by changing it to the following: "I apologize to Dustin Moskowitz for my previous inappropriate comment. It was wrong. What I intended to express is that I find his attitude to be overly self-important. I hope we can move past this and potentially become friends in the future."
While what is deemed biased is subjective, Latimer isn't alone in trying to tackle this challenge through technology. The LA Times plans to display a "bias meter" in 2025, for instance.
Latimer hopes its bias tool will draw in more users.
"This will help us identify a different set of users who might not use a large language model, but might use a browser extension," Pasmore said.
The bias detector will launch at $1 a month, and a pro version will let users access multiple bias detection algorithms.
If you get to choose when to schedule a job interview, you might want to grab a coffee and go for a morning slot.
That's because some people conducting interviews tend to give higher scores to candidates they meet with earlier in the day compared with the afternoon, a startup's review of thousands of interviews found.
It's not an absolute, of course, and candidates can still kill it well after lunchtime. Yet, in a job market where employers in fields like tech have been slow to hire, even a modest advantage could make a difference, Shiran Danoch, an organizational psychologist, told Business Insider.
"Specific interviewers have a consistent tendency to be harsher or more lenient in their scores depending on the time of day," she said.
It's possible that in the morning, interviewers haven't yet been beaten down by back-to-back meetings, or are perhaps still enjoying their own first coffee, she said.
Danoch and her team noticed the morning-afternoon discrepancy while reviewing datasets on thousands of job interviews. Danoch is the CEO and founder of Informed Decisions, an artificial intelligence startup focused on helping organizations reduce bias and improve their interviewing processes.
She said the inferences on the time-of-day bias are drawn from the datasets of interviewers who use Informed Decisions tools to score candidates. The data reflected those who've done at least 20 interviews using the company's system. Danoch said that in her company's review of candidates' scores, those interviewed in the morning often received higher marks, and the difference was statistically significant.
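Informed Decisions hasn't published its statistical methodology, but the kind of morning-versus-afternoon comparison described above can be sketched roughly as follows; the records, the noon cutoff, and the use of Welch's t-test are assumptions for illustration, not the company's actual analysis.

```python
# Illustrative sketch only: compare morning vs. afternoon interview scores.
# The records below are made up; the real analysis methodology isn't public.
from datetime import datetime

from scipy import stats

interviews = [  # (interview start time, interviewer's score for the candidate)
    (datetime(2024, 5, 6, 9, 30), 4.2),
    (datetime(2024, 5, 7, 10, 0), 4.5),
    (datetime(2024, 5, 6, 14, 0), 3.1),
    (datetime(2024, 5, 8, 16, 30), 3.8),
    # ... at least 20 interviews per interviewer, as in the company's data
]

morning = [score for ts, score in interviews if ts.hour < 12]
afternoon = [score for ts, score in interviews if ts.hour >= 12]

t_stat, p_value = stats.ttest_ind(morning, afternoon, equal_var=False)  # Welch's t-test
print(f"morning mean={sum(morning) / len(morning):.2f}, "
      f"afternoon mean={sum(afternoon) / len(afternoon):.2f}, p={p_value:.3f}")
```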
The good news, she said, is that when interviewers are made aware that they might be more harsh in the afternoon, they often take steps to counteract that tendency.
"In many cases, happily, we're actually seeing that the feedback that we're providing helps to reduce the bias and eventually eliminate the bias," Danoch said.
However, she said, interviewers often don't get feedback about their hiring practices, even though finding the right talent is "such a crucial part" of what hiring managers and recruiters do.
She said other researchers have identified how the time of day, and whether someone is a morning person or an evening person, can affect decision-making processes.
An examination of more than 1,000 parole decisions in Israel found that judges were likelier to show leniency at the start of the day and after breaks. However, that favorability decreased as judges made more decisions, according to the 2011 research.
It's possible that if tools like artificial intelligence take on more responsibility for hiring, job seekers won't have to worry about the time of day they interview.
For all the concerns about biases in AI, "manual" hiring in which interviewers ask open-ended questions often introduces more bias than AI does, said Kiki Leutner, cofounder of SeeTalent.ai, a startup creating AI-run tests that simulate tasks associated with a job. She has researched the ethics of AI and of assessments in general.
Leutner told BI that it's likely that in a video interview conducted by AI, for example, a candidate might have a fairer shot at landing a job.
"You don't just have people do unstructured interviews, ask whatever questions, make whatever decisions," she said.
And, because everything is recorded, Leutner said, there is documentation of what decisions were made and on what basis. Ultimately, she said, it's then possible to take that information and correct algorithms.
"Any structured process is better in recruitment than not structuring it," Leutner said.
Eric Mosley, cofounder and CEO of Workhuman, which makes tools for recognizing employee achievements, told BI that data created by humans will be biased β because humans are "hopelessly biased."
He pointed to 2016 research indicating that juvenile court judges in Louisiana doled out tougher punishments, particularly to Black youths, after the Louisiana State University football team suffered a surprise defeat.
Mosley said, however, that AI can be trained to ignore certain biases and to flag others so they can be eliminated.
Taking that approach can help humans guard against some of their natural tendencies. To get it right, however, it's important to have safeguards around the use of AI, he said. These might include ethics teams with representatives from legal departments and HR to focus on issues of data hygiene and algorithm hygiene.
Not taking those precautions and solely relying on AI can even risk scaling humans' biases, Mosley said.
"If you basically just unleash it in a very simplistic way, it'll just replicate them. But if you go in knowing that these biases exist, then you can get through it," he said.
Danoch, from Informed Decisions, said that if people conducting interviews suspect they might be less forgiving after the morning has passed, they can take steps to counteract that.
"Before you interview in the afternoons, take a little bit longer to prepare, have a cup of coffee, refresh yourself," she said.
Job seekers who are attractive, who went to the right school, or who worked at the right company can enjoy a so-called "halo effect" that outweighs other factors that often are better predictors of how well someone will perform in a role.
While such candidates look good on paper, the halo effect is a problem for employers and for many other job seekers, executives told Business Insider.
Shiran Danoch saw firsthand how bias can affect hiring. Early in her career, she thought she'd found the perfect candidate for a role she was trying to fill.
Yet after Danoch's boss interviewed the woman, he called Danoch into his office.
"He said, 'Why did you bring her here? She isn't one of us,'" Danoch told BI.
It slowly occurred to Danoch that her boss's problem was with the candidate's ethnicity despite what Danoch saw as her obvious fit for the role.
There's a lot of work to do to reduce bias that unfairly hurts β and helps β candidates, said Danoch, an organizational psychologist who's the CEO and founder of Informed Decisions, an artificial intelligence startup that aims to help organizations reduce bias and improve their interviewing processes.
Danoch estimates that perhaps as many as nine in 10 hires either suffer or benefit from a bias that shapes the interviewer's perceptions of the candidate's aptitude for the role.
She said this means people who aren't a great fit could end up landing the role, and candidates who would do the job better might be sidelined.
Danoch said analysis of thousands of interviews on the Informed Decisions interview platform, combined with findings from broader academic research, highlights that "dominant-skill" bias is a prominent risk.
"When you're interviewing a candidate, there might be one specific skill that paints your overall impression," she said. Often, Danoch said, that is "effective communication." That can mean job seekers who are strong communicators can talk their way past their weaknesses.
Another risk is being wowed by grads from top schools or those who worked at high-profile companies. Substantial bodies of research have shown that education and experience aren't good predictors of how successful someone will be in a job, she said.
Meantime, it's easy to see why a hiring manager might assume someone who'd worked at one big-name tech firm might be a good fit for another. That's not always the case, according to Alan Price, the global head of talent acquisition at Deel, a global HR company that helps employers hire abroad.
He told BI that in past roles at other companies, there was often a push to focus on Ivy League grads or people who'd worked at certain tech firms. That made it hard for candidates coming from small startups, for example, to get hired, he said.
"You'd work at Facebook. You'd work at Google. You'd go to LinkedIn. There's a merry-go-round," Price said.
Yet he said those in sales, for example, who had halo résumés by virtue of having been at top companies, weren't always the strongest contributors when it came to basic metrics like how much revenue they brought in.
"The top people weren't only coming from the big, established organizations," Price said.
To improve the quality of its hires, Price said, Deel reformatted its interviewing process to focus on behaviors and less on factors like education and experience. That's led managers to report being more satisfied with the work they were getting from new hires, he said.
Price said it's not that experience doesn't count. Instead, it's evaluated alongside factors like functional skills for doing the job, behaviors, and motivation. To gain insight into skills, Deel will often have job seekers complete assessments.
That can help root out candidates who might toss around industry buzzwords, though they might lack some abilities.
"Because you've worked here and you've worked on this problem type, my assumption is, from a halo CV perspective, you're going to be really good," he said.
Price said that because some job seekers might stay at an organization for two to three years, hiring managers could take that to mean the candidates are good at what they do.
Yet "that is a big assumption," he said.
Some employers have announced efforts to look more at abilities rather than pedigree. In some cases, this can mean waiving degree requirements.
However, David Deming, a professor of political economy at Harvard's Kennedy School, previously told BI that even as some employers do away with prerequisites that candidates for some roles have a bachelor's degree, those doing the hiring might still consider whether a candidate has one.
"Firms are wanting credit for removing a requirement, but that doesn't necessarily mean they're changing their hiring at the end of the day," he said.
Danoch, from Informed Decisions, said one reason strong communicators can benefit from a halo effect in interviews relates to those doing the hiring.
"Because a lot of interviewers are inexperienced in interviewing, that's what grabs them," she said, referring to a candidate's communication chops.
While such abilities are often among the soft skills many employers say they value, Danoch said being able to communicate well isn't likely to be the only attribute needed for a role. Even if communication is important, she said, it shouldn't be the sole factor for hiring.
Danoch said the halo effect can be problematic if it leads employers to hire candidates who might not be the best fit. Conversely, she said, a "shadow effect" can result in capable job seekers being discounted.
"The candidate is either all good or either all bad," Danoch said.
The next Trump administration says it wants to get rid of regulations.
But not all regulations.
Brendan Carr, Trump's choice to head the Federal Communications Commission, says he plans to scrutinize broadcast TV operators to see if they are operating in "the public interest" β a requirement tied to the 1934 Communications Act. If they're not, he says, they could lose their license to use the public airwaves.
What exactly does that mean? Carr isn't super-specific. And Carr, who already is an FCC commissioner, didn't mention the issue when he wrote about the FCC for Project 2025, a conservative planning document Trump allies are using to help staff the next administration. But he has been talking about it quite a bit over the last few weeks.
Shortly after Trump nominated Carr to lead the FCC, Carr announced that the agency would "enforce this public interest obligation." He brought the idea up again in a Fox News interview shortly after. On Friday, he talked about it again, via a CNBC interview.
"Look, the law is very clear. The Communications Act says you have to operate in the public interest," he said. "And if you don't, yes, one of the consequences is potentially losing your license. And of course, that's on the table. I mean, look, broadcast licenses are not sacred cows."
Asked to clarify if he meant he was going to target broadcasters he thought were too liberal, Carr said that wasn't the case, and that he wasn't trying to rein in speech.
"At the end of the day, obviously there's a statutory provision that prevents the FCC from engaging in censorship. I don't want to be the speech police. But there is something that's different about broadcasters than, say, podcasters, where you have to operate in a public interest."
Then Carr argued that all he plans on doing is enforcing existing regulations.
"I'm just saying follow the law. I mean, this law has been on the books for a long time," he said. "It's not my decision to hold broadcasters to a public interest obligation. It's Congress. And if they don't like that, then they should go to Congress to change the law."
(It's worth noting the act applies only to companies with over-the-air broadcast operations, like CBS and NBC. But all four of the big broadcast networks are part of larger media outfits. In the case of CBS and NBC, that's Paramount and Comcast, respectively.)
You can see the whole thing here:
I've asked Carr and his office for comment and clarification about where he thinks broadcasters may have acted against the public interest.
But in the meantime, it's worth noting that he's already argued that CBS deserves scrutiny over the way its "60 Minutes" program handled an interview with Kamala Harris, which is also at the center of a lawsuit Trump filed against CBS last month. And that Carr also complained about Harris making an appearance on NBC's "Saturday Night Live" the weekend before the election.
Perhaps Carr has also criticized the way broadcasters have treated Harris or other Democrats. But I haven't seen or heard it.
All of which suggests that Carr may try using the power of his agency to affect the way broadcasters treat Trump and his allies. Even if he says that's not the case.
But none of this is super clear-cut. For instance: Carr has talked about bringing up Trump's "60 Minutes" complaint when Larry and David Ellison, who are trying to buy CBS owner Paramount, need approval to transfer the CBS broadcast license. But it's hard to imagine a Carr-led FCC actually holding up the Paramount deal, given that Larry Ellison is both a Trump supporter and good pals with Elon Musk, a Carr ally.
And it's also worth noting that Carr also has carrots available to help get broadcasters on board, in addition to sticks. Most notably: Lots of media owners are hoping that the next Trump administration will make it easier for them to consolidate, and Carr has repeatedly said he's in favor of that. So this could easily get muddy.
But all of it has the potential to cause media companies to think twice, or a third time, before airing something they think Donald Trump has a problem with. Is that what Brendan Carr wants?
Google says its new AI model family has a curious feature: the ability to "identify" emotions. Announced on Thursday, the PaliGemma 2 family of models can analyze images, enabling the AI to generate captions and answer questions about people it "sees" in photos. "PaliGemma 2 generates detailed, contextually relevant captions for images," Google wrote in […]
University of Rochester physicist Ranga Dias made headlines with his controversial claims of high-temperature superconductivity, and made headlines again when the two papers reporting the breakthroughs were later retracted under suspicion of scientific misconduct, although Dias denied any wrongdoing. The university conducted a formal investigation over the past year and has now terminated Dias' employment, The Wall Street Journal reported.
"In the past year, the university completed a fair and thorough investigation, conducted by a panel of nationally and internationally known physicists, into data reliability concerns within several retracted papers in which Dias served as a senior and corresponding author," a spokesperson for the University of Rochester said in a statement to the WSJ, confirming his termination. "The final report concluded that he engaged in research misconduct while a faculty member here."
The spokesperson declined to elaborate further on the details of his departure, and Dias did not respond to the WSJ's request for comment. Dias did not have tenure, so the final decision rested with the Board of Trustees after a recommendation from university President Sarah Mangelsdorf. Mangelsdorf had called for terminating his position in an August letter to the chair and vice chair of the Board of Trustees, so the decision should not come as a surprise. Dias' lawsuit claiming that the investigation was biased was dismissed by a judge in April.
Ars has been following this story ever since Dias first burst onto the scene with reports of a high-pressure, room-temperature superconductor, published in Nature in 2020. Even as that paper was being retracted due to concerns about the validity of some of its data, Dias published a second paper in Nature claiming a similar breakthrough: a superconductor that works at high temperatures but somewhat lower pressures. Shortly afterward, that paper was retracted as well. As Ars Science Editor John Timmer reported previously:
Dias' lab was focused on high-pressure superconductivity. At extreme pressures, the orbitals where electrons hang out get distorted, which can alter the chemistry and electronic properties of materials. This can mean the formation of chemical compounds that don't exist at normal pressures, along with distinct conductivity. In a number of cases, these changes enabled superconductivity at unusually high temperatures, although still well below the freezing point of water.
Dias, however, supposedly found a combination of chemicals that would boost the transition to superconductivity to near room temperature, although only at extreme pressures. While the results were plausible, the details regarding how some of the data was processed to produce one of the paper's key graphs were lacking, and Dias didn't provide a clear explanation.
The ensuing investigation cleared Dias of misconduct for that first paper. Then came the second paper, which reported another high-temperature superconductor forming at less extreme pressures. However, potential problems soon became apparent, with many of the authors calling for its retraction, although Dias did not.