When Algorithms Are Not Neutral
Monday, February 9, 2026
Bias in generative artificial intelligence algorithms risks producing systemic discrimination across sectors, from job recruitment to public services.
CONCRETE examples of the dangers of artificial intelligence (AI) bias continue to emerge as adoption of the technology expands across society. Ardi Sutedja, Chair of the Indonesia Cyber Security Forum, said bias in AI emerges when the algorithms or training data used are not neutral. As a result, the outputs generated tend to favor or marginalize certain groups. “The risk is not only individual but also systemic,” he said on Wednesday.
...