Conflicts of Interest: US Will Use AI to Censor ‘Russian Disinformation’

On COI #423, Kyle Anzalone and Connor Freeman cover the Empire’s major escalations this week against Palestine, Syria, Iran, Russia, and freedom of the press in the United States.

Kyle breaks down Secretary of State Antony Blinken’s Orwellian announcement that Washington has built an online AI tool to hunt so-called ‘Russian disinformation,’ the British and Dutch governments’ plans to build an “international coalition” to provide Ukraine with Western-made aircraft, including F-16s, Moscow’s openness to peace proposals for the Ukraine conflict from African leaders and Brazil, as well as a report that the White House may be preparing for a frozen conflict in Ukraine that could last years or even decades.

Connor details the Black Sea grain export deal’s two-month extension, a bill introduced by hawks in Congress that would use sanctions to prevent countries from normalizing relations with Damascus, the Pentagon’s plans to conduct joint military planning with Israel on operations aimed at Iran, the progress of Yemen peace talks, and Israel’s latest atrocities carried out against the occupied Palestinians in East Jerusalem, the Gaza Strip, and the West Bank.

Subscribe on YouTube and audio-only.

11 thoughts on “Conflicts of Interest: US Will Use AI to Censor ‘Russian Disinformation’”

  1. Disinformation: the truth that the ruling class and their government don’t want their people to know.

  2. The militarization of AI, at first to attain dominance in information warfare, is particularly dangerous, but it is to be expected. It will not be used just as a tool to detect disinformation; it will be used as a tool to generate it and enhance it. As we can see, a successful propaganda effort that loses any meaningful attachment to reality is almost guaranteed to bite you in the ass in the most seriously unhelpful ways. Given the certainty that it will be reciprocated, and the steep, recursion-enforced learning curve of AI itself, it is not hard to see how this will backfire on all fronts and make pretty much the entire internet unusable as a source of information. Yet we had better prepare for that, because that disaster is coming.

    1. That would actually be a benefit to the rest of the planet, especially considering that humans are causing the Sixth Great Extinction.

      Science fiction writers have been warning about this for decades, so nothing really new here. All technology is harmful, and AI threatens to replace humans with artificial lifeforms. Hard to imagine them being worse than humans though; they’d probably be a lot better for the rest of the planet, because they wouldn’t be materialistic & consumptive, and wouldn’t overpopulate.

      1. Who judges what is of value or harm to the planet?

        As a panentheist, I consider the planet to be part and parcel of deity, but I haven’t seen much evidence that it stands apart as a conscious entity capable of judging things — that it cares one way or another whether it’s a cold, lifeless rock floating through space or a warm oasis teeming with life.

        1. That’s a nice rationalization for killing and destroying, but it’s totally immoral BS. The only legitimate excuse for killing is to eat what you kill, and humans kill exponentially more than that.

          1. Asking a question isn’t a “rationalization.”

            Morality is an emergent property of consciousness. My mind isn’t closed to the idea that there are non-human conscious entities capable of moral judgment and with their own moral rights.

    1. Unfortunately, it will be much more than that, and a lot worse. AI run amok could eliminate humans because it would consider us unnecessary and probably harmful, to give just one example. The technology is probably a long way from that, though we don’t know what’s being done in secret by the military/intelligence/industrial complex.
