Scientists Must Push AI in a Responsible Direction
For many in the research community, it has become difficult to be optimistic about the effects of artificial intelligence.
As authoritarianism rises around the world, AI-generated slop is drowning out legitimate media, while AI-generated deepfakes spread disinformation and replicate extremist messaging. In the midst of intractable conflicts, artificial intelligence is making war more precise and more deadly. AI companies exploit workers in the Global South who label their training data, and take advantage of content creators around the world by using their work without license or compensation. The industry's enormous energy requirements also strain an already destabilized climate.
Meanwhile, public investment in science, especially in the United States, appears to be increasingly redirected toward artificial intelligence at the expense of other disciplines. Big technology companies are working to consolidate their control over the AI ecosystem. In these ways and others, AI seems to make everything worse.
But this is not the whole story. The fact that AI is being used to harm humanity is no reason to give up. None of us should accept these harms as inevitable, least of all those in a position to influence science, government, and society. Scientists and engineers can push AI onto a more beneficial path. Here's how.
Academia's view of artificial intelligence
A Pew study published in April found that 56% of AI experts (authors and presenters of AI-related conference papers) expect AI to have a positive impact on society. That optimism does not extend to the scientific community as a whole, however. A 2023 survey of 232 scientists by the Center for Science, Technology and Environmental Policy Studies at Arizona State University found roughly three times as much concern as excitement about the use of generative AI in everyday life.
We have encountered this sentiment time and again. Our careers in diverse applied fields have brought us into contact with many research communities: privacy, cybersecurity, the physical sciences, drug discovery, public health, public-interest technology, and democratic innovation. Across all of these areas, we have found strongly negative feelings about AI's impacts. The sentiment is so pervasive that we are often asked to play the AI optimist, even though we spend most of our time writing about the need to reform the structures of AI development.
We understand why these audiences see AI as a destructive force, but this fatalism creates a different worry: that those with the power to shape AI's development and its impact on society will write it off as a lost cause and disengage.
Elements of a positive vision for artificial intelligence
Many have argued that transformative climate action requires charting a clear path toward positive outcomes. In the same way, while scientists and technologists should anticipate, warn about, and help mitigate the potential harms of AI, they should also highlight the ways the technology can be harnessed for good, and catalyze public action toward those ends.
There are countless ways to leverage and reshape AI to improve people’s lives, distribute rather than concentrate power, and even strengthen democratic processes. Many examples have emerged from the scientific community and deserve to be celebrated.
Some examples: AI is breaking down communication barriers across languages, including in low-resource contexts such as marginalized sign languages and indigenous African languages. It is helping policymakers integrate the views of many constituents through AI-assisted deliberation and legislative engagement. Large language models can scale one-on-one dialogues that address doubts about climate change, spreading accurate information at a critical moment. National laboratories are building AI foundation models to accelerate scientific research. And in medicine and biology, machine learning is solving scientific problems such as protein-structure prediction to aid drug discovery, work recognized with a Nobel Prize in 2024.
While each of these applications is admittedly nascent and imperfect, together they demonstrate that AI can be used to advance the public good. Scientists should embrace, support, and expand such efforts.
A call to action for scientists
In our new book, Rewiring Democracy: How Artificial Intelligence Will Change Our Politics, Government, and Citizenship, we describe four key actions for policymakers committed to steering AI toward the public good.
These apply to scientists as well. First, researchers should work to reform the AI industry so that it becomes more ethical, fair, and trustworthy. We should collectively develop ethical standards for research that develops and applies AI, and we should support and draw attention to AI developers who adhere to those standards.
Second, we should resist harmful uses of AI by documenting its negative applications and calling out inappropriate uses.
Third, we should responsibly use AI to improve people's lives and societies, harnessing its capabilities to help the communities we serve.
Finally, we must help renovate our institutions to prepare them for the impacts of AI; universities, professional associations, and democratic organizations are all vulnerable to disruption.
Scientists have a special privilege and responsibility: we are close to the technology itself, and thus well placed to influence its course. We must work to create the AI-filled world we want to live in. Technology, as historian Melvin Kranzberg observed, is "neither good nor bad; nor is it neutral." Whether the artificial intelligence we build harms or benefits society depends on the choices we make today. But we cannot create a positive future without a vision of what it should look like.
2025-10-29 13:00:00