Tygers Dream of Electric Sheep
News from the Blade Runner Universe
William Blake’s prophetic poem begins:
Tyger Tyger, burning bright,
In the forests of the night;
What immortal hand or eye,
Could frame thy fearful symmetry?
The Tyger was written by Blake in 1794 as a counterpoint to his 1789 poem, The Lamb. The Tyger questions who created the mills that had sprung up in the English countryside during his lifetime. The poem is a searing critique of early capitalism, which grew out of the textile mills that emerged from the gradual enclosure of the commons and the ongoing dispossession of Britain’s cottage-industry weavers.
In Philip K. Dick’s 1968 novel, Do Androids Dream of Electric Sheep?, all of Blake’s nightmares have come to pass. The fallout from a nuclear war (no one remembers who started it, or who won) has killed most of the animals on Earth. Most humans have emigrated off-planet. The people remaining on Earth spend money and time owning electric sheep and other artificial animals, groping for familiarity in a hostile, damaged world.
The core plot, which many people will recall from the Blade Runner films, concerns a bounty hunter, Rick Deckard, searching for escaped killer androids. The themes that run through the novel play with the distinctions between original and simulation, living and inert, biological and electronic, human and android, reality and hallucination. Deckard wonders at one point in the novel whether androids dream. We know that AIs hallucinate, so why shouldn’t androids dream?
The New York Times ran an article today concerning the increasing rate of AI “hallucinations,” which AI companies are at a loss to explain:
Today’s A.I. bots are based on complex mathematical systems that learn their skills by analyzing enormous amounts of digital data. They do not — and cannot — decide what is true and what is false. Sometimes, they just make stuff up, a phenomenon some A.I. researchers call hallucinations. On one test, the hallucination rates of newer A.I. systems were as high as 79 percent.
These systems use mathematical probabilities to guess the best response, not a strict set of rules defined by human engineers. So they make a certain number of mistakes. “Despite our best efforts, they will always hallucinate,” said Amr Awadallah, the chief executive of Vectara, a start-up that builds A.I. tools for businesses, and a former Google executive. “That will never go away.”1
The article above gives an example of an AI support bot spontaneously providing incorrect licensing information to a company’s clients, prompting users to cancel their accounts. Everyone has seen AI-generated images of humans and animals with extra fingers or uncanny eyes. The ability of AI to induce human error is a known unknown.
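The mechanism the article describes can be made concrete with a toy sketch. This is not any real model’s code; the candidate answers and their probabilities are invented for illustration. The point is structural: a system that samples from a probability distribution over plausible continuations has no step anywhere that consults the truth.

```python
import random

# Invented toy distribution over candidate answers a support bot might give.
# The highest-probability answer here happens to be false -- nothing in the
# sampling procedure below can detect that.
next_answer_probs = {
    "the license allows commercial use": 0.45,   # plausible but wrong
    "the license forbids commercial use": 0.40,  # the actual policy
    "the license terms are unspecified": 0.15,
}

def sample_answer(probs, seed=None):
    """Pick an answer weighted by probability; truth is never consulted."""
    rng = random.Random(seed)
    answers = list(probs)
    weights = [probs[a] for a in answers]
    return rng.choices(answers, weights=weights, k=1)[0]

# Over many queries, the false answer is emitted most often, simply
# because the model scored it as most probable.
counts = {a: 0 for a in next_answer_probs}
for i in range(1000):
    counts[sample_answer(next_answer_probs, seed=i)] += 1
```

Making the false answer the most probable one is the pathological case the article reports: confident, fluent, and wrong, with no internal signal that anything went amiss.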
Businesses are increasingly putting their operations in the hands of AIs because AIs can process market information faster than any human. In the largest financial market in the world, the US stock market, there is no unified regulation of AI usage by financial firms at all. The existing regulations concerning computer use by financial firms deal with legacy platforms. To the extent that the SEC and CFTC are concerned with the issue, they are permitting firms to develop their own policies. But what is the remedy if AIs hallucinate and cause a market crash? Who pays? The FDIC? And what if overreliance on AI in design produces defective products?
The US Department of Defense and other militaries around the world are developing AI-controlled drones. What could go wrong? It was only a matter of time before militaries began to field autonomous weapons platforms. If anything, the war in Ukraine has shown us that the face of war has changed forever. The dominant weaponry in the Ukraine conflict is drones for surveillance and for hunting and killing enemy soldiers. Both sides of the conflict are using these technologies. Of course, the use of drones is not new, but the addition of AI capabilities is.
And what about oppression and terrorism? Governments around the world are already using AI platforms designed by companies like Palantir2 to monitor civilian populations, assist law enforcement and immigration control, and design military applications. One of the most concerning trends at Palantir is the neo-crusader rhetoric used on its recruitment page: “Palantirians deliver mission-critical outcomes for the West’s most important institutions.” At the AI Expo for National Competitiveness last October, in a truly Orwellian statement, Palantir’s CEO, Alex Karp, stated:
“The peace activists are war activists,” Karp insisted. “We are the peace activists.”3
And when it comes to terrorism: in The Ministry for the Future, Kim Stanley Robinson’s 2020 novel of environmental redemption, a terrorist group called the Children of Kali plays a key role in changing the direction of the world economy, and thus the pace of climate change. An anonymous character reminisces:
For that, drones are best. Much of the job becomes intelligence; finding the guilty, finding their moments of exposure. Not easy, but once accomplished, boom. The drones keep getting faster and faster. The guilty often have defenses, but these can often be overwhelmed by numbers. A swarm of incoming drones the size of sparrows, moving at hundreds or even thousands of meters per second—these are hard to stop. The guilty died by the dozens in those years.4
Given that AIs have no grip on reality at all, and their promoters even less, we cannot expect this age of AI to work out well for humanity. While those with economic privilege stand to see benefits, the rest of the world does not. The environmental costs of AI are staggering,5 and there is no end in sight to the sheer amount of damage post-industrial civilization is inflicting on the world.
The cheerleaders for new technologies always promise us greater freedom, but the history of technological progress shows that innovation rarely delivers more freedom and reliably enables more restrictions, imposed above all on the least fortunate. New technologies these days are hailed as “disruptive,” which just signals new forms of primitive accumulation and dispossession through destruction of the commons. Jerry Mander observes:
Given that technology was supposed to make life better, and given its apparent failure in both the social and the environmental spheres, shouldn't reason dictate that we sharply question the wild claims we have accepted about technology?6
Buddhist prophecies tell us that in the final era of this iteration of human civilization, we will experience an age of illness, followed by an age of weapons, concluding with an age of famine. Many generations of science fiction writers have already warned us about the perils of technology and the corrupting power it exerts. The AI epoch is just one more chapter in how the enchantment of technology has bewitched us with its promise of material comforts.
We cannot remain passive; there are things we must do. We need to understand that the sufferings of this age, blossoming around us, are the result of human nonvirtue and malfeasance. We need to begin there, correcting ourselves, and so begin the process of restoring the world.
I will leave you with the opening lines of Appeasing the Disturbances of the Mamos and Ḍākinīs, a ritual text revealed by Rigzin Jigme Lingpa:
When the degenerate evil age arises,
there is a storm of negative actions because of negative intentions,
and an age of illness, war, and famine dawns.
At that time the mamos and ḍākinīs are disturbed,
and it is important to make effort to mend this disturbance.
A manifesto written by Palantir’s CTO, Shyam Sankar, is a frightening read. Palantir’s mission statement is even more concerning: “Our software powers real-time, AI-driven decisions in critical government and commercial enterprises in the West, from the factory floors to the front lines.” The reader will recall that the palantirs were seeing-stones described by Tolkien in The Lord of the Rings, the use of which required great wisdom and strength; those who used them proved easily corruptible by Sauron. The wisdom of the people running this company is thus questionable.
Robinson, Kim Stanley. The Ministry for the Future: A Novel. Kindle Edition, p. 135.
Mander, Jerry. In the Absence of the Sacred: The Failure of Technology and the Survival of the Indian Nations. Sierra Club Books, 1992.

