When Algorithms Are Not Neutral

Monday, February 9, 2026

Bias in generative artificial intelligence algorithms risks producing systemic discrimination across sectors, from job recruitment to public services.

A researcher develops an application using an AI large language model (LLM) chatbot. Shutterstock.

CONCRETE examples of the dangers of artificial intelligence (AI) bias continue to emerge as adoption of the technology expands across society. Ardi Sutedja, Chair of the Indonesia Cyber Security Forum, said bias arises when the algorithms or training data behind an AI system are not neutral, so the outputs it generates tend to favor or marginalize certain groups. “The risk is not only individual but also systemic,” he said on Wednesday.

...



Independent journalism needs public support. By subscribing to Tempo, you will contribute to our ongoing efforts to produce accurate, in-depth and reliable information. We believe that you and everyone else can make all the right decisions if you receive correct and complete information. For this reason, since its establishment on March 6, 1971, Tempo has been and will always be committed to hard-hitting investigative journalism. For the public and the Republic.
