Thus, AI can be used as an effective propaganda tool, which both the corporations creating them and the governments and agencies regulating them have recognized.
Misinformation and Error
States have long refused to admit that they benefit from and use propaganda to steer and control their subjects. This is in part because they want to maintain a veneer of legitimacy as democratic governments that govern based on (rather than shape) people’s opinions. Propaganda has a bad ring to it; it’s a means of control.
However, the state’s enemies—both domestic and foreign—are said to understand the power of propaganda and do not hesitate to use it to cause chaos in our otherwise untainted democratic society. The government must save us from such manipulation, they claim. Of course, rarely does it stop at mere defense. We saw this clearly during the COVID-19 pandemic, in which the government, together with social media companies, in effect outlawed the expression of opinions that deviated from the official line (see Murthy v. Missouri).
AI is just as easy to manipulate for propaganda purposes as social media algorithms, but with the added bonus that it shapes more than people’s opinions and that users tend to trust that what the AI reports is true. As we saw in the previous article on the AI revolution, this is not a valid assumption, but it is nevertheless a widely held view.
If the AI then can be instructed to not comment on certain things that the creators (or regulators) do not want people to see or learn, then it is effectively “memory holed.” This type of “unwanted” information will not spread as people will not be exposed to it—such as showing only diverse representations of the Founding Fathers (as Google’s Gemini) or presenting, for example, only Keynesian macroeconomic truths to make it appear like there is no other perspective. People don’t know what they don’t know.
Of course, nothing says that what is presented to the user is true. In fact, the AI itself cannot distinguish truth from falsehood but only generates responses according to direction and based on whatever material it has been fed. This leaves plenty of scope for misrepresenting the truth and can make the world believe outright lies. AI, therefore, can easily be used to impose control, whether upon a state, the subjects under its rule, or even a foreign power.
The Real Threat of AI
What, then, is the real threat of AI? As we saw in the first article, large language models will not (cannot) evolve into artificial general intelligence as there is nothing about inductive sifting through large troves of (humanly) created information that will give rise to consciousness. To be frank, we haven’t even figured out what consciousness is, so to think that we will create it (or that it will somehow emerge from algorithms discovering statistical language correlations in existing texts) is quite hyperbolic. Artificial general intelligence is still hypothetical.
As we saw in the second article, there is also no economic threat from AI. It will not make humans economically superfluous and cause mass unemployment. AI is productive capital, which therefore has value to the extent that it serves consumers by contributing to the satisfaction of their wants. Misused AI is as valuable as a misused factory—it will tend toward its scrap value. However, this doesn’t mean that AI will have no impact on the economy. It will, and already has, but the impact is not as big in the short term as some fear, and it is likely bigger in the long term than we expect.
No, the real threat is AI’s impact on information. This is in part because induction is an inappropriate source of knowledge—truth and fact are not a matter of frequency or statistical probabilities. The evidence and theories of Nicolaus Copernicus and Galileo Galilei would get weeded out as improbable (false) by an AI trained on all the (best and brightest) writings on geocentrism at the time. There is no progress and no learning of new truths if we trust only historical theories and presentations of fact.