
Five ways to make AI a greater force for good in 2021

With people spending more time online and employees working from home, algorithms and AI drew renewed attention, and their many uses came into the spotlight. Neurosymbolic AI, participatory machine learning, OpenAI, and many other developments have been widely discussed. Used properly, these could be powerful forces for good in many areas of life in 2021. The hope for 2021 is to see more of these ideas explored and adopted in earnest.

Published : Jan 17, 2021, 10:30 AM IST

Updated : Feb 16, 2021, 7:31 PM IST

MIT Technology Review, USA: Here are five hopes for AI in the coming year.

Reduce corporate influence in research

The tech giants have disproportionate control over the direction of AI research. This has shifted the field as a whole toward ever-bigger data and ever-bigger models, with several consequences. It inflates the climate impact of AI advancements, locks resource-constrained labs out of the field, and leads to lazier scientific inquiry that ignores the range of other possible approaches. As Google’s ousting of Timnit Gebru revealed, tech giants will also readily limit the ability to investigate other consequences.

But much of that corporate influence comes down to money and the lack of alternative funding. OpenAI, for example, began as a nonprofit in part to keep its research free of commercial incentives. The bet proved unsustainable, and four years later, it signed an investment deal with Microsoft.

The hope for 2021 is to see more governments step into this void and provide non-defense-related funding options for researchers. It won’t be a perfect solution, but it will be a start: governments are beholden to the public, not the bottom line.

Refocus on common-sense understanding

The overwhelming attention on bigger and badder models has overshadowed one of the central goals of AI research: to create intelligent machines that don’t just pattern-match but actually comprehend meaning. While corporate influence is a major contributor to this trend, there are other culprits as well. Research conferences and peer-reviewed publications place a heavy emphasis on achieving “state of the art” results. But the state of the art is often poorly measured by tests that can be beaten with more data and larger models.


It’s not that large-scale models could never reach a common-sense understanding. That’s still an open question. But there are other avenues of research deserving greater investment. Some experts have placed their bets on neurosymbolic AI, which combines deep learning with symbolic knowledge systems. Others are experimenting with more probabilistic techniques that use far less data, inspired by a human child’s ability to learn from very few examples.
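To make the neurosymbolic idea concrete, here is a minimal, hypothetical sketch (in Python, not drawn from any real system): a learned perception component emits symbols with confidence scores, and a hand-written symbolic rule base reasons over them. Every name and rule below is invented for illustration.

    from typing import Callable, Dict, List

    def neural_perception(image_id: str) -> Dict[str, float]:
        """Stand-in for a trained network: maps an input to symbol confidences.
        In a real system this would be, e.g., a CNN's softmax output."""
        return {"cube": 0.92, "sphere": 0.05, "metal": 0.88}

    # Symbolic knowledge, written by hand rather than learned from data:
    # an object judged to be a metal cube is inferred to be stackable.
    RULES: Dict[str, Callable[[Dict[str, float]], bool]] = {
        "stackable": lambda s: s.get("cube", 0) > 0.5 and s.get("metal", 0) > 0.5,
    }

    def reason(symbols: Dict[str, float]) -> List[str]:
        """Apply every rule to the perceived symbols; return those that fire."""
        return [name for name, rule in RULES.items() if rule(symbols)]

    print(reason(neural_perception("img_001")))  # -> ['stackable']

The appeal is that the symbolic half encodes knowledge explicitly, so the system’s conclusions can be inspected rather than inferred from millions of opaque weights.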

In 2021, I hope the field will realign its incentives to prioritize comprehension over prediction. Not only could this lead to more technically robust systems, but the improvements would have major social implications as well. The susceptibility of current deep-learning systems to being fooled, for example, undermines the safety of self-driving cars and poses dangerous possibilities for autonomous weapons. The inability of systems to distinguish between correlation and causation is also at the root of algorithmic discrimination.
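To see concretely how a model can be fooled, consider a toy version of the fast gradient sign method (Goodfellow et al., 2014), sketched below with a made-up linear model standing in for a deep network; the weights, input, and step size are all invented for illustration.

    import numpy as np

    w = np.array([1.5, -2.0, 0.5])   # hypothetical model weights
    x = np.array([0.2, 0.1, 0.4])    # a benign input with true label +1

    def score(x):
        return w @ x                 # model output; its sign gives the label

    # For this linear model, the loss gradient w.r.t. x is proportional to -w
    # when the true label is +1, so a step along sign(-w) drives the score down.
    eps = 0.3
    x_adv = x + eps * np.sign(-w)

    print(score(x))      # 0.3  -> classified correctly
    print(score(x_adv))  # -0.9 -> a small, bounded nudge flips the label

Scaled up to a full network, the same kind of nudge can flip an image classifier’s output without visibly changing the image.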

Empower marginalized researchers

If algorithms codify the values and perspectives of their creators, a broad cross-section of humanity should be present at the table when they are developed.

Google’s treatment of Gebru, one of the few prominent Black women in the industry, showed how far there still is to go. Diversity in numbers is meaningless if those individuals aren’t empowered to bring their lived experience into their work. The hope is that the flashpoint marked by Gebru’s firing turns into a critical moment of reflection for the industry.


Center the perspectives of impacted communities

There’s also another group to bring to the table. One of the most exciting trends from last year was the emergence of participatory machine learning. It’s a provocation to reinvent the process of AI development to include those who ultimately become subject to the algorithms.

In July, the first conference workshop dedicated to this approach collected a wide range of ideas about what it could look like. Suggestions included new governance procedures for soliciting community feedback; new model-auditing methods for informing and engaging the public; and proposed redesigns of AI systems to give users more control of their settings.

The hope for 2021 is to see more of these ideas explored and adopted in earnest. Facebook is already making a start: if it follows through with allowing its external oversight board to make binding changes to the platform’s content moderation policies, the governance structure could become a feedback mechanism worthy of emulation.

Codify guard rails into regulation

Thus far, grassroots efforts have led the movement to mitigate algorithmic harms and hold tech giants accountable. But it will be up to national and international regulators to set up more permanent guard rails. The good news is that lawmakers around the world have been watching and are in the midst of drafting legislation. In the US, members of Congress have already introduced bills to address facial recognition, AI bias, and deepfakes. Several of them also sent a letter to Google expressing their intent to continue pursuing this regulation.

So the last hope for 2021 is that we see some of these bills pass. It’s time we codified what we’ve learned over the past few years and moved away from the fiction of self-regulation.
___
Copyright 2021 Technology Review, Inc.
Distributed by Tribune Content Agency, LLC