
What does it mean for an algorithm to be “fair”?

Van der Vliet and other welfare advocates I met on my reporting trip, such as representatives of Amsterdam's Welfare Union, described what they see as the many challenges facing the roughly 35,000 people who receive benefits from the city: among them, the everyday indignities of a system in which their knowledge of their own circumstances is not consistently reflected in how the government treats them.

City welfare officials themselves acknowledge the system's flaws, which are "held together with rubber bands and staples," as Harry Bodaar, a senior policy advisor to the city who focuses on welfare fraud enforcement, told us. "And if you are at the bottom of that system, you are the first to fall through the cracks."

In short, the Participation Council didn't want Smart Check at all, even if Bodaar and others working in the department hoped it would improve the system. It's a classic example of a "wicked problem," a social or cultural issue with no single clear answer and many potential consequences.

After the story was published, I heard from Suresh Venkatasubramanian, a former tech advisor to the White House Office of Science and Technology Policy who helped write the Biden administration's Blueprint for an AI Bill of Rights (since rescinded by Trump). "We need participation early on from communities," he said, but he added that it also matters what officials do with the feedback, and whether there is "a willingness to reframe the intervention based on what people actually want."

Had the city started with a different question, what people actually want, it might have built an entirely different algorithm. As the Dutch digital rights advocate Hans de Zwart told us, "We are being seduced by technological solutions for the wrong problems … why doesn't the municipality build an algorithm that searches for people who do not apply for social assistance but are entitled to it?"

These are the fundamental kinds of questions that AI developers will need to grapple with, or they risk repeating (or ignoring) the same mistakes over and over again.

Venkatasubramanian told me he found the story affirming in that it highlighted the need for "those in charge of governing these systems" to ask hard questions, "starting with whether they should be used at all."

But he also called the story "humbling": "Even with good intentions, and a desire to benefit from all the research on responsible AI, it is still possible to build systems that are fundamentally flawed, for reasons that go well beyond the details of the system's construction."

To better understand this debate, read our full story here. And if you want more details about how we performed our bias tests after the city gave us unprecedented access to the Smart Check algorithm, check out the methodology published by Lighthouse Reports. (For any Dutch speakers out there, here is the companion story in Trouw.) Thanks to the Pulitzer Center for supporting our reporting.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
