‘The risks posed by AI are real’: EU takes action to defeat life-ruining algorithms


It started with a single tweet in November 2019. Top tech entrepreneur David Heinemeier Hansson lashed out at Apple’s new credit card, calling it “sexist” for offering his wife a credit limit 20 times lower than his.

The allegations spread like wildfire, with Hansson pointing out that artificial intelligence – now widely used to make loan decisions – was to blame. “It doesn’t matter what individual Apple representatives intend, it matters what THE ALGORITHM they have put their trust in does. And what it does is discriminate.”

While Apple and its underwriters Goldman Sachs were ultimately cleared by US regulators of breaching fair lending rules last year, the episode reignited a broader debate about the use of AI in public and private institutions.

European Union politicians are now planning to introduce the world’s first comprehensive model of AI regulation, as institutions increasingly automate routine tasks in a bid to increase efficiency and, ultimately, to reduce costs.

This legislation, known as the Artificial Intelligence Act, will have implications beyond EU borders and, like the EU’s General Data Protection Regulation, will apply to any institution, including UK banks, that serves EU customers. “The impact of the law, once passed, cannot be overstated,” said Alexandru Circiumaru, European public policy manager at the Ada Lovelace Institute.

There is a push to introduce tough rules on the uses the EU has provisionally listed as “high risk”: AI used to screen applications for jobs, university places or social security, or – in the case of lenders – to assess the creditworthiness of potential borrowers.

EU officials hope that with additional oversight and restrictions on the type of AI models that can be used, the rules will reduce the kind of machine-based discrimination that can influence life-changing decisions, such as whether you can afford a home or a student loan.

“AI can be used to analyze your entire financial health, including spending, savings and other debts, to come up with a bigger picture,” said Sarah Kocianski, an independent fintech consultant. “If designed correctly, these systems can provide broader access to affordable credit.”

But one of the biggest dangers is unintended bias, in which algorithms end up denying loans or accounts to certain groups, including women, migrants, or people of color.

Part of the problem is that most AI models can only learn from the historical data they have been fed, which means they will learn which kinds of customers have been lent to before and which have been flagged as unreliable. “There’s a danger that they will be biased in terms of what a ‘good’ borrower looks like,” Kocianski said. “Notably, gender and ethnicity often play a part in the AI’s decision-making processes based on the data it has been taught on: factors that are in no way relevant to a person’s ability to repay a loan.”

In addition, some models are designed to be blind to so-called protected characteristics, meaning they are not supposed to consider the influence of gender, race, ethnicity or disability. But these AI models can still discriminate through their analysis of other data points, such as zip codes, which may correlate with historically disadvantaged groups that have never before applied for, obtained or repaid loans or mortgages.
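That proxy effect can be shown in a few lines. The sketch below uses entirely hypothetical data: a toy “model” that approves applicants purely on zip code, never seeing group membership, yet reproduces a group-level disparity because zip code and group are correlated.

```python
# Illustrative sketch with hypothetical data: a decision rule that only
# sees the zip code can still produce unequal outcomes across groups,
# because the zip code acts as a proxy for group membership.

# Each applicant: (zip_code, group, approved) — the "model" approved
# everyone in A1 and rejected everyone in B2, without seeing `group`.
applicants = [
    ("A1", "group_x", True),  ("A1", "group_x", True),
    ("A1", "group_x", True),  ("A1", "group_y", True),
    ("B2", "group_y", False), ("B2", "group_y", False),
    ("B2", "group_y", False), ("B2", "group_x", False),
]

def approval_rate(rows):
    """Fraction of applicants in `rows` who were approved."""
    return sum(approved for _, _, approved in rows) / len(rows)

# Group the outcomes by the protected attribute the model never saw.
by_group = {}
for row in applicants:
    by_group.setdefault(row[1], []).append(row)

for group, rows in sorted(by_group.items()):
    print(group, round(approval_rate(rows), 2))
# group_x is approved three times as often as group_y,
# even though the decision rule never used `group` directly.
```

The disparity appears because most of group_x lives in zip A1 and most of group_y in zip B2; auditing outcomes by group, not just inspecting inputs, is what reveals it.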


And in most cases, when an algorithm makes a decision, it is hard for anyone to understand how it arrived at that conclusion, resulting in what is commonly known as “black box” syndrome. This means that banks, for example, might struggle to explain what an applicant could have done differently to qualify for a loan or credit card, or whether changing an applicant’s gender from male to female could lead to a different outcome.

Circiumaru said the AI act, which could come into force at the end of 2024, would benefit tech companies that have successfully developed what he called “trustworthy AI” models that comply with the new EU rules.

Darko Matovski, chief executive and co-founder of London-based artificial intelligence startup causaLens, thinks his company is one of them.

The startup, which launched in January 2021, has already licensed its technology to companies including the asset manager Aviva and the quantitative trading firm Tibra, and says a number of retail banks are in the process of signing agreements with the company before the EU rules come into force.

The entrepreneur said causaLens offers a more advanced form of AI that avoids potential bias by accounting for and controlling for discriminatory correlations in data. “Correlation-based models learn past injustices and they just replay them in the future,” Matovski said.

He believes the proliferation of so-called causal AI models like his will lead to better outcomes for marginalized groups who may have missed out on educational and financial opportunities.

“It’s really hard to understand the extent of the damage already done, because we can’t really inspect this model,” he said. “We don’t know how many people didn’t go to college because of a broken algorithm. We don’t know how many people weren’t able to get their mortgage due to algorithm bias. We just don’t know.”

Matovski said the only way to protect against potential discrimination was to use protected characteristics such as disability, gender or race as an input, but to ensure that, whatever those specific inputs were, the decision did not change.
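The approach Matovski describes amounts to a counterfactual test: feed the protected characteristic in, flip it, and require the decision to stay the same. A minimal sketch, assuming a hypothetical model that takes an applicant as a dictionary and returns an approve/reject decision:

```python
# Minimal sketch of a counterfactual fairness check (hypothetical
# model interface): the decision must be identical for every value
# of the protected attribute, all else held equal.

def counterfactual_check(model, applicant, attribute, values):
    """Return True if the model's decision is unchanged across
    every value of the protected attribute."""
    decisions = set()
    for value in values:
        probe = dict(applicant, **{attribute: value})  # flip one field
        decisions.add(model(probe))
    return len(decisions) == 1

# A toy biased rule that (wrongly) conditions on gender, and a fair one.
biased_model = lambda a: a["income"] > 30000 and a["gender"] == "male"
fair_model = lambda a: a["income"] > 30000

applicant = {"income": 40000, "gender": "female"}
print(counterfactual_check(biased_model, applicant, "gender", ["male", "female"]))  # False
print(counterfactual_check(fair_model, applicant, "gender", ["male", "female"]))    # True
```

The check catches the biased rule because flipping gender flips the decision; note it only probes direct use of the attribute, not proxies such as zip codes, which need separate auditing.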

He said it was about ensuring that AI models reflected our current social values and avoided perpetuating any racist, ableist or misogynist decision-making from the past. “Society thinks we should treat everyone the same, regardless of gender, zip code, race. So algorithms not only have to try to do that, they have to guarantee it,” he said.


While the new EU rules are likely to be a big step forward in tackling machine-based biases, some experts, including those at the Ada Lovelace Institute, are pushing for consumers to have the right to complain and seek redress if they feel they have been put at a disadvantage.

“The risks posed by AI, especially when applied in certain specific circumstances, are real, significant and already present,” Circiumaru said.

“AI regulation should ensure that individuals are appropriately protected from harm by approving or banning uses of AI, and that they have remedies when approved AI systems malfunction or cause harm. We cannot pretend that approved AI systems will always work flawlessly and fail to prepare for instances when they do not.”

