AI permeates everyday life with almost no oversight. States scramble to catch up

DENVER (AP) – While artificial intelligence has made headlines with ChatGPT, behind the scenes the technology has quietly pervaded everyday life – screening job resumes and rental apartment applications, and even helping determine medical care in some cases.

While some AI systems have been found to discriminate, tipping the scales in favor of certain races, genders or incomes, there is little government oversight.

Lawmakers in at least seven states are taking major legislative swings at bias in artificial intelligence, filling a void left by congressional inaction. These proposals are some of the first steps in a long-running debate over balancing the benefits of this new technology against its widely documented risks.

“AI really affects every part of your life whether you know it or not,” said Suresh Venkatasubramanian, a Brown University professor who co-authored the White House Blueprint for an AI Bill of Rights.

“Now, you wouldn’t care if they all worked fine. But they don’t.”

Success or failure will depend on lawmakers working through complex problems while negotiating with an industry worth hundreds of billions of dollars and growing at something like light speed.

Last year, only about a dozen of the nearly 200 AI-related bills introduced in state houses were passed into law, according to BSA The Software Alliance, which advocates on behalf of software companies.

Those bills, along with the more than 400 AI-related bills being debated this year, were mostly aimed at regulating smaller slices of AI. That includes nearly 200 targeting deepfakes, including proposals to ban pornographic deepfakes, such as those of Taylor Swift that flooded social media. Others seek to rein in chatbots, such as ChatGPT, to make sure they don’t cough up instructions to make a bomb, for example.

Those are separate from the seven state bills that would apply across industries to regulate AI discrimination — one of tech’s toughest and most complex problems — being debated from California to Connecticut.

Those who study AI’s penchant for discrimination say states are already behind in establishing guardrails. The use of AI to make consequential decisions — what the bills call “automated decision tools” — is pervasive but largely hidden.

It is estimated that up to 83% of employers use algorithms to help with hiring; that’s 99% for Fortune 500 companies, according to the Equal Employment Opportunity Commission.

But most Americans don’t know these tools are being used, a Pew Research poll shows, let alone whether the systems are biased.

AI can learn bias through the data it is trained on, typically historical data that can act as a Trojan horse, smuggling in past discrimination.

Nearly a decade ago, Amazon scuttled its hiring algorithm project after discovering it favored male applicants. The AI was trained to assess new resumes by learning from past ones – mostly from male applicants. Although the algorithm did not know applicants’ genders, it still downgraded resumes containing the word “women’s” or listing women’s colleges, in part because they were not reflected in the historical data it learned from.

“If you’re allowing the AI to learn from decisions historically made by current managers, and if those decisions have historically favored some people and disfavored others, then that’s what the technology will learn,” said Christine Webber, an attorney in a class-action lawsuit alleging that an AI system that scored rental applicants discriminated against those who were Black or Hispanic.

Court documents describe one plaintiff in the lawsuit, Mary Louis, a Black woman who applied to rent an apartment in Massachusetts and received a curt response: “Your tenancy has been denied by the third-party service we use to screen prospective tenants.”

When Louis submitted two landlord affidavits to show she had paid rent on time for 16 years, court records say, she received another response: “Unfortunately, we do not accept appeals and cannot override the tenant screening result.”

It’s that lack of transparency and accountability, in part, that the bills are targeting, following in the footsteps of last year’s failed California proposal — the first comprehensive attempt to regulate AI bias in the private sector.

Under the bills, companies using these automated decision tools would have to carry out “impact assessments,” including a description of how the AI makes a decision, the data collected and an analysis of the risks of discrimination, along with an explanation of the company’s safeguards. Depending on the bill, those assessments would be submitted to the state or could be requested by regulators.

Some of the bills would also require companies to inform customers that AI will be used to make a decision, and allow them to opt out, with certain caveats.

Craig Albright, senior vice president of US government relations at BSA, the industry lobby group, said its members favor some of the proposed steps, such as impact assessments.

“Technology moves faster than the law, but there are advantages to the law catching up. Because (companies) then understand what their responsibilities are, consumers can trust the technology more,” Albright said.

But it has been a rocky start for such legislation. A bill in Washington state has already failed in committee, and a California proposal introduced in 2023, on which many of the current proposals are modeled, also died.

California Assembly member Rebecca Bauer-Kahan has revamped her failed legislation from last year with the support of some technology companies, such as Workday and Microsoft, after dropping a requirement that companies regularly submit their impact assessments. Other states where bills have been introduced, or are expected to be, include Colorado, Rhode Island, Illinois, Connecticut, Virginia and Vermont.

While these bills are a step in the right direction, said Brown University’s Venkatasubramanian, the impact assessments and their ability to catch bias remain vague. Without greater access to the reports – which many of the bills limit – it’s also hard to know whether an AI system has discriminated against someone.

A tougher but more accurate way to identify discrimination would be to require bias audits — tests to determine whether an AI system is discriminating — and to make the results public. That’s where the industry pushes back, arguing such audits would reveal trade secrets.

Most legislative proposals lack requirements for regular testing of AI systems, and nearly all of them still have a long way to go. Still, it’s just the beginning for lawmakers and voters grappling with what is, and will remain, a fast-evolving technology.

“It touches everything in your life. That alone should make you care,” said Venkatasubramanian.

——-

Associated Press reporter Trân Nguyễn contributed from Sacramento, California.
