AI pervades everyday life with almost no oversight. States scramble to catch up
Date: 2025-04-12
DENVER (AP) — While artificial intelligence made headlines with ChatGPT, behind the scenes the technology has quietly pervaded everyday life — screening job resumes and rental apartment applications, and even helping determine medical care in some cases.
While a number of AI systems have been found to discriminate, tipping the scales in favor of certain races, genders or incomes, there’s scant government oversight.
Lawmakers in at least seven states are taking big legislative swings to regulate bias in artificial intelligence, filling a void left by Congress’ inaction. These proposals are some of the first steps in a decades-long discussion over balancing the benefits of this nebulous new technology with the widely documented risks.
“AI does in fact affect every part of your life whether you know it or not,” said Suresh Venkatasubramanian, a Brown University professor who co-authored the White House’s Blueprint for an AI Bill of Rights.
“Now, you wouldn’t care if they all worked fine. But they don’t.”
Success or failure will depend on lawmakers working through complex problems while negotiating with an industry worth hundreds of billions of dollars and growing at breakneck speed.
Last year, only about a dozen of the nearly 200 AI-related bills introduced in statehouses were passed into law, according to BSA The Software Alliance, which advocates on behalf of software companies.
Those bills, along with the over 400 AI-related bills being debated this year, were largely aimed at regulating smaller slices of AI. That includes nearly 200 targeting deepfakes, including proposals to bar pornographic deepfakes, like those of Taylor Swift that flooded social media. Others are trying to rein in chatbots, such as ChatGPT, to ensure they don’t cough up instructions to make a bomb, for example.
Those are separate from the seven state bills that would apply across industries to regulate AI discrimination — one of the technology’s most perverse and complex problems — being debated from California to Connecticut.
Those who study AI’s penchant to discriminate say states are already behind in establishing guardrails. The use of AI to make consequential decisions — what the bills call “automated decision tools” — is pervasive but largely hidden.
It’s estimated as many as 83% of employers use algorithms to help in hiring; that’s 99% for Fortune 500 companies, according to the Equal Employment Opportunity Commission.
Yet the majority of Americans are unaware that these tools are being used, polling from Pew Research shows, let alone whether the systems are biased.
An AI can learn bias through the data it’s trained on, typically historical data that can hold a Trojan horse of past discrimination.
Amazon scuttled its hiring algorithm project after it was found to favor male applicants nearly a decade ago. The AI was trained to assess new resumes by learning from past resumes — largely male applicants. While the algorithm didn’t know the applicants’ genders, it still downgraded resumes with the word “women’s” or that listed women’s colleges, in part because they were not represented in the historical data it learned from.
“If you are letting the AI learn from decisions that existing managers have historically made, and if those decisions have historically favored some people and disfavored others, then that’s what the technology will learn,” said Christine Webber, the attorney in a class-action lawsuit alleging that an AI system scoring rental applicants discriminated against those who were Black or Hispanic.
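The dynamic Webber describes, and the one behind Amazon’s scuttled tool, can be illustrated with a toy scoring model. The resumes, labels, and tokens below are entirely made up for illustration; real systems are far more complex, but the failure mode is the same: a model trained on biased historical decisions penalizes proxy words like “women’s” even though gender is never an input.

```python
# Toy sketch (hypothetical data) of how a text model inherits bias from
# historical hiring decisions, even with gender never given as a feature.
from collections import defaultdict

# Synthetic "historical" decisions: resumes with women-coded tokens were
# mostly rejected in the past (1 = hired, 0 = rejected).
history = [
    ("captain chess club", 1),
    ("software engineer chess club", 1),
    ("software engineer", 1),
    ("captain womens chess club", 0),
    ("womens college software engineer", 0),
]

# "Train": the per-token hire rate learned from the historical labels.
totals, hires = defaultdict(int), defaultdict(int)
for text, label in history:
    for tok in text.split():
        totals[tok] += 1
        hires[tok] += label

def score(resume: str) -> float:
    """Average learned hire-rate across the resume's known tokens."""
    toks = [t for t in resume.split() if t in totals]
    return sum(hires[t] / totals[t] for t in toks) / len(toks)

# Identical qualifications; only the proxy tokens differ.
print(score("software engineer chess club"))      # scores higher
print(score("womens college software engineer"))  # scores lower
```

Because the tokens "womens" and "college" only ever appear on rejected resumes in the training data, the model assigns them a zero hire rate and downgrades any new resume containing them, reproducing the historical pattern.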
According to court documents, one of the lawsuit’s plaintiffs, Mary Louis, a Black woman, applied to rent an apartment in Massachusetts and received a cryptic response: “The third-party service we utilize to screen all prospective tenants has denied your tenancy.”
When Louis submitted two landlord references to show she’d paid rent early or on time for 16 years, court records say, she received another reply: “Unfortunately, we do not accept appeals and cannot override the outcome of the Tenant Screening.”
That lack of transparency and accountability is, in part, what the bills are targeting, following the lead of California’s failed proposal last year — the first comprehensive attempt at regulating AI bias in the private sector.
Under the bills, companies using these automated decision tools would have to do “impact assessments,” including descriptions of how AI figures into a decision, the data collected and an analysis of the risks of discrimination, along with an explanation of the company’s safeguards. Depending on the bill, those assessments would be submitted to the state or regulators could request them.
Some of the bills would also require companies to tell customers that an AI will be used in making a decision, and allow them to opt out, with certain caveats.
Craig Albright, senior vice president of U.S. government relations at BSA, the industry lobbying group, said its members are generally in favor of some steps being proposed, such as impact assessments.
“The technology moves faster than the law, but there are actually benefits for the law catching up. Because then (companies) understand what their responsibilities are, consumers can have greater trust in the technology,” Albright said.
But it’s been a lackluster start for legislation. A bill in Washington state has already foundered in committee, and a California proposal introduced in 2023, on which many of the current proposals are modeled, also died.
California Assembly member Rebecca Bauer-Kahan has revamped her legislation that failed last year with the support of some tech companies, such as Workday and Microsoft, after dropping a requirement that companies routinely submit their impact assessments. Other states where bills are, or are expected to be, introduced are Colorado, Rhode Island, Illinois, Connecticut, Virginia and Vermont.
While these bills are a step in the right direction, said Venkatasubramanian of Brown University, the impact assessments and their ability to catch bias remain vague. Without greater access to the reports — which many of the bills limit — it’s also hard to know whether a person has been discriminated against by an AI.
A more intensive but accurate way to identify discrimination would be to require bias audits — tests to determine whether an AI is discriminating or not — and to make the results public. That’s where the industry pushes back, arguing that would expose trade secrets.
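One widely used audit metric is the disparate-impact ratio: compare selection rates between groups, and flag ratios below four-fifths (0.8), the threshold in the EEOC’s Uniform Guidelines on Employee Selection Procedures. The sketch below uses hypothetical outcome data to show how such a test works in its simplest form.

```python
# Minimal sketch of one common bias-audit metric: the disparate-impact
# ratio of selection rates between two groups. Data is hypothetical.

def selection_rate(decisions):
    """Fraction of applicants the system approved (1 = approved)."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are a common red flag (the 'four-fifths rule')."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical audit: automated decisions for two applicant groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 37.5% approved

print(disparate_impact(group_a, group_b))  # 0.5, well below 0.8
```

A real audit would also need enough data for statistical significance and access to the system’s decisions broken out by group, which is exactly the access the bills mostly do not require.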
Requirements to routinely test an AI system aren’t in most of the legislative proposals, nearly all of which still have a long road ahead. Still, it’s the start of lawmakers and voters wrestling with what’s becoming, and will remain, an ever-present technology.
“It covers everything in your life. Just by virtue of that you should care,” said Venkatasubramanian.
——-
Associated Press reporter Trân Nguyễn in Sacramento, California, contributed.