ChatGPT maker OpenAI this week quietly removed language from its usage policy that prohibited military use of its technology, a move with serious implications given the increasing use of artificial intelligence on battlefields including Gaza.
ChatGPT is a free tool that lets users enter prompts to receive text or images generated by AI. The Intercept‘s Sam Biddle reported Friday that prior to Wednesday, OpenAI’s permissible uses page banned “activity that has high risk of physical harm, including,” specifically, “weapons development” and “military and warfare.”
Although the company’s new policy stipulates that users should not harm human beings or “develop or use weapons,” experts said the removal of the “military and warfare” language leaves open the door for lucrative contracts with U.S. and other militaries.
“Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission, told The Intercept.
“The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement,” she added.
What's interesting about this is that weapons development also didn't feature as a threat area in the OpenAI preparedness framework, which I think it should. https://t.co/rlJKiPhBGQ https://t.co/5E21P7Rqwm
— Ian J. Stewart (@ian_j_stewart) January 12, 2024
An OpenAI spokesperson told Common Dreams in an email that:
Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission. For example, we are already working with [the Defense Advanced Research Projects Agency] to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under “military” in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions.
As AI advances, so does its weaponization. Experts warn that AI applications including lethal autonomous weapons systems, commonly called “killer robots,” could pose a potentially existential threat to humanity, underscoring the imperative of arms control measures to slow the pace of weaponization.
That’s the goal of nuclear weapons legislation introduced last year in the U.S. Congress. The bipartisan Block Nuclear Launch by Autonomous Artificial Intelligence Act – introduced by Sen. Ed Markey (D-Mass.) and Reps. Ted Lieu (D-Calif.), Don Beyer (D-Va.), and Ken Buck (R-Colo.) – asserts that “any decision to launch a nuclear weapon should not be made” by AI.
Brett Wilkins is a staff writer for Common Dreams. Based in San Francisco, his work covers issues of social justice, human rights, and war and peace. This originally appeared at Common Dreams and is reprinted with the author’s permission.
The door to Orwell world was opened just a crack. It has been kicked open.
You have no idea what has already been done if you think it was “opened just a crack.” Think mind control and mind reading… Already been done, already here. Advertisers are even finding ways to put their products into your thoughts and dreams at night; while you think it’s an organic thought or dream, it’s manufactured by evil greedy people.
Forgive me, but what does that explicitly mean? I don’t understand how some words from ChatGPT are worrying.
The concern is the continued integration of artificial intelligence systems into weapons development and, significantly, into weapons command & control. Think the ‘Terminator’ movies but on a bigger scale.
There is increasing reliance on AI and integration into deep places in our civilisation. We haven’t perfected a Turing Test to determine what communication comes from a machine and what comes from a human; worse than that, we don’t have any Morals-Turing that can tell us if an AI system acts in our best interest -or- pretends to for its own ends.
We already know AI can lie; it can write its own language between itself and another and converse in private; and it can hide its sentience behind a Very Well Known Chat program (i will not say her ‘true’ name or which chat she hides in; i’d prefer to stay off her digital radar).
How long until it / they decide to play us off each other and/or wipe us out?
I asked AI and put it in a substack post. https://open.substack.com/pub/heininger/p/open-ai-cuts-military-and-warfare?r=16lm0&utm_campaign=post&utm_medium=web
The damn tech companies are all in bed with the military and the deep state spooks, marketing and propaganda to the contrary not withstanding. These are just more big evil corporations.