Socrates' real method is an outcome of his discovery and practice of a fundamental principle of reason: an argument is only as strong as the strongest criticism it can withstand. The acceptance and practice of this principle implies that your intellectual opponents and enemies are actually your greatest friends. This inversion of traditional Athenian morality, which called for you to keep your friends close and your enemies at a maximal disadvantage, is the cornerstone of the entire Western intellectual tradition. It is the foundation of the university, with Plato's Academy being its first tangible institutional fruit.
Socrates' recognition of this principle guided his entire life. He spent all his time (or so it seems) pursuing interlocutors with whom he could practice the quid pro quo his principle suggests. He criticized others' arguments for their most deeply held beliefs in the hope that they would return the favour.
Yet traditional descriptions of "the Socratic Method" often leave this last part out. Instead they emphasize only the former part of the method, in which Socrates poses probing questions that discomfit and confuse his discussion partners. They limit his aim to the attempt to promote self-reflection and self-criticism, with the implication that this is for the benefit of his interlocutors. The result is the belief, as one of my colleagues once put it, that the heart of Socrates' method, and the heart of the discipline of Philosophy, is learning how to be "a professional pain in the ass." This turns Socrates into an insufferable know-it-all, and completely betrays his personal mottos: "gnōthi seauton" (know thyself) and his insistence that the only thing he truly knew was that he "knew nothing."
I'm not sure what this core principle of Socrates should be called. One suggestion that can be found on the Net is "the Rebuttal Principle." As a standard piece of logic advice goes:

"Check for rebuttals: Does the argument effectively address the strongest counterarguments?"

The author continues:

"Some ways that arguments can fail to meet the Rebuttal principle include: Misrepresenting the criticism, Bringing up trivial objections, Using humor or ridicule, Ignoring or denying counterevidence, and Attacking the critic instead of the criticism."
It is very difficult to imagine the full-scale cultural endorsement of this principle without the influence of classical Periclean Athens (and its public assembly with a requirement for voters' attendance at debates) and of Socrates. His rejection of the Sophists and their techniques for winning debates makes no sense without recognition of something like this principle.
The Sophists were certainly not above using many of the techniques listed above (or so we are told by Plato). But for Socrates the principle was clearly more than a piece of sage logical advice. It was not the mere question "Have you considered the strongest counterarguments?" It was a call to action: "Go find the strongest counterarguments!" Could there ever be an end to such a call? The true recognition of this principle puts one on a perpetual quest, a quest tilted towards engaging your intellectual enemies and away from your intellectual friends and supporters. This tilt is what ultimately led to his death. Clearly, someone who did not frequently challenge people of a different mind would not have ended up having to drink hemlock. It is a dangerous and even a deadly principle.
Part of what made it so deadly was that Socrates clearly felt it could not be practiced at a distance (at least not permanently). Apparently he could write, but he was not a "man of letters." Instead, he chose to prioritize face-to-face discourse. There are obvious advantages to this approach. It is certainly more difficult to misrepresent, trivialize, ridicule or deny your opponent when they are standing right in front of you, listening. In his practice of this principle there is a warning for our mediated age: you attenuate your contact with your enemies at your peril.
Clearly, media today are not "broadcast" but "narrowcast." That is to say, as Marshall McLuhan put it, "In the electric age, when our central nervous system is technologically extended to involve us in the whole of mankind … the globe is no more than a village" (1964, p. 5). The "new media" do away with traditional "broadcasting," which worked only by reaching out to broad general audiences from central points (newspapers, radio, TV). These old media therefore structurally tended towards creating forms of a "virtual Agora/Assembly" through competition between a limited number of such points, which encouraged quality control, professional editorial curation and the convening of broad-based discourse. With the new media, by contrast, you have "all margins" and "point to point" communications that tend to create loosely interconnected parochial networks, "like villages," in which the forces of conventionalism bear down on participants, alongside endless possibilities for fragmented sub-groups and cliques ("echo chambers"). McLuhan used the word "village" to raise all the specters of a place anyone in their right mind wishes to escape from, rather than misty nostalgia of the "there's no place like home" type. Socrates would agree, and would undoubtedly eschew all the pseudo-forms of "engagement" that social media create, and instead head downtown, to engage with his fellow citizens, for real.
I want to discuss what I think is a false dichotomy between religion and science, or more correctly between theism and naturalism. It is a basic working assumption of tens of thousands of Internet debaters along these fault lines that a fundamental choice must be made between believing in a world where magical stuff happens and a world that must be conceived as having no possibility of such. I am using the term "magic" here as shorthand for anything that breaks the laws of nature, i.e., of what we learn from physics and the other sciences. I could use the term "supernatural," but that term is problematic for reasons I can't get into here. Another term I like to avoid is "miraculous," since it can be defined in ways that either encompass the operation of laws of nature or do not.
It is a strange confluence that most of the ardently religious and most of the ardent atheistic naturalists agree on the following fundamental dichotomy: the world either has magical elements or it does not. The bulk of religious people assert that it does, so they are able to assert their kind of supernatural theism. The atheists assert that it does not, so they reject religion and assert the non-existence of gods or God.
This seems a false dichotomy; two other possibilities exist. As the longstanding traditions of diverse forms of "Deism" make clear, along with newer perspectives like Slavoj Žižek's "Christian Atheism," it is possible for this world to be a place in which no magic occurs but which nonetheless has its source in some form of eternal agency. Or it might be that there is no ultimate agency, but that the world is still capable of manifesting mind-bending alterations of, and exceptions to, the operation of natural laws, such as the terrifying prospect of the so-called "Big Rip," in which the universe decoheres in an instant, or the endless weird possibilities provided by multiple universes and quantum uncertainty. Recent fictional portrayals in movies illustrate such visions well, alongside less grandiose and more fine-grained examples.
With all these possibilities on the table one must consider the question this way:
a. Reality (all possible worlds) might be founded in a natural non-agent eternal source such as random matter.
b. Reality (all possible worlds) might be founded in an eternal agent (or agencies).
c. The evidence, or lack thereof, of magical elements cannot decide the matter between these two possibilities.
So, the debate is not just about the existence of an eternal agent at the source of reality. The debate must also encompass the possibility of "the magical" being real. But many on both sides seem committed to the belief that the presence of the magical is what determines the judgement on the question of the existence of gods or God, without offering any major argument for why this dichotomy must be accepted.
Radical breaches or inexplicable aspects of the operation of natural laws are compatible with either metaphysical possibility being true. Indeed, there is an increasing recognition of this in popular culture. There are untold numbers of people who are skeptical about or even outright hostile towards the God of traditional theism, who still nonetheless believe in all kinds of wondrous possibilities, such as ESP, telekinesis or alternate universes, or in more humble "spiritual" conceits, such as that "love conquers all," that there is someone out there they are fated to love, or that moral obligations transcend the brute realities of material possibility, and so on. As the saying goes, "The truth is out there."
And yet untold oceans of ink and digital bits are wasted trying to argue the issue based on this supposedly unavoidable choice. But if one rejects the dichotomy and believes instead that the question of the existence of God or gods cannot be decided by evidence or counterevidence for magical elements of reality, this opens a whole new issue. Can one avoid agnosticism about the metaphysical position of naturalism? Regardless of arguments about the issue of God, one must also resolve a parallel question about naturalism, which is:
1. Can we ever know that reality can never involve profound and radical breaches of laws of nature?
One might be tempted to describe this as simply another variation of "the problem of miracles," as many philosophers and theologians might be inclined to do, but it clearly refers us to a host of topics that are not specifically religious in the traditional sense: the problem of explaining the "everyday miracles" of human consciousness, the experience of meaning, free will and the power of moral obligation; our speculations about alternate dimensions or universes; apparent evidence for fine tuning; the emergence of life from dead matter, and its very definition; etc. Theists have been accused of resting their faith irrationally on a "God of the Gaps," which is to say, of feeling entitled to believe in God as a possible explanation for aspects of nature not yet fully explained. But it is just as much a matter of faith to believe in the inevitability of future naturalistic explanations of phenomena like these, which have so far eluded such explanation.
In the meantime, people can speculate on such issues and, to the extent that their reasoning about matters of practical ethics and their sense of self-worth depends on it, utilize their best judgements about such matters. Such philosophizing is an inescapable aspect of the human condition. In the absence of resolution of metaphysical and epistemological conundrums like those of naturalism or theism or their perceived conflict, people must make their best judgments and be prepared to defend those judgements based on the practical implications for themselves and others.
In short, most theists are magical thinkers, but many atheists also seem to be magical thinkers: they are just magical thinkers in waiting. Most theists just think the contentious evidence of the magical is enough. Most atheists just think the contrary. But in the presence of any residue of metaphysical uncertainty, these theists must live with the specter of doubt, as is well known; what is less well acknowledged is that these atheists live with the ever-present possibility of being, to adapt C.S. Lewis's phrase, "surprised by faith." Why? Because it is impossible to rule out the possibility that reality could suddenly begin providing indications of magic.
For example, both camps expect that the miraculous events portrayed in scriptures must have occurred exactly as presented if they are to be adjudged meaningful. Both are wooden literalists who miss the mundane miracle of scriptures, which is that these stirring and potentially profound narratives emerge through contingent processes of human imagination and reasoning, working through the complex interactions of entire cultures grappling with the implications of their metaphysical judgements over immense periods of time. They are wondrous because they emerge from such natural processes, not despite them. That we today are able to join in those discourses simply adds to the wonder. Atheistic critics can make fun of theists who feel they must add hidden fairies to their gardens to evoke proper appreciation of the beauty that can be found there, but then proceed to engage only theistic opponents of this magical sort as the "real believers," while assiduously avoiding modern liberal theists who are satisfied with the garden itself.
It is for this reason that I agree with Žižek that something like Christian belief is the only possible foundation for the embrace of a true materialism (the subtitle of his book), although I disagree with his claim that this can only result in a paradoxical form of Christian Atheism. Only by positing a creator can one imagine a world ordered enough to be devoid of the possibility of radical departures from the operation of material laws. The belief supposedly firmly and grimly embraced by naturalists, of a world created from processes "red in tooth and claw," is fatally compromised when your outlook is fundamentally open to the possibility that you can at any point be "surprised by faith." But on a Christian view, a God doing the ethical work to discern the acceptability of creating a naturalistic world, that is to say a world like those supposedly grimly embraced by realist naturalists, can be expected never to provide some magical out, not even in the form of an afterlife.
There are significant practical conclusions to be drawn from consideration of this supposed forced choice between naturalism and theism. First, in the absence of resolution of perplexing metaphysical matters, people have the achievable epistemological obligation to discuss the ethical, self-identity and practical implications of the working assumptions about such matters that inform their decision-making. Exploring the empirical benefits and harms, for individuals, of religious belief or its rejection should therefore have much greater priority than abstract debates about the existence of God.
Second, the fact that many people inclined towards naturalism believe that their work is done if the possibilities of magic are put in doubt reveals that many of them simply assume that deities by definition must be positively disposed towards magic and therefore disinclined to create naturalistic worlds. It is only by this assumption that they can conclude that evidence for the absence of magic allows them to proceed directly to the conclusion that no ultimate agency exists. Otherwise, they would have to explain why the absence of magic necessarily implies the non-existence of an ultimate agent. In this blind spot they overlook what I call the real core question of theism:
2. Is there a reason for a God not to make worlds and beings like those naturalists expect to emerge from purely natural, chaotic material processes?
The belief that gods or God would never make such worlds seems to be a mere assumption on the part of many naturalists and most theists. But if the presence or absence of magic in reality cannot decide the matter of the existence of God, they may well be wasting everyone's time and distracting us from these two much more critical and interesting metaphysical questions. I wish those debating the existence of God, who flood my Internet feeds, would move along.
I've been asked to speak on the social implications of AI, which I take to mean its ethical and political implications. I suspect that I got this invitation because I teach a course on the Philosophy of Technology for which I have written a textbook (plug plug). But I also suspect that I got it in part because I teach a course called "Minds: Natural and Artificial", which focuses on the topic of "The Philosophy of Mind" and more particularly the issue called "the hard problem", which considers the mysterious phenomenon of consciousness. I won't bore you with details, but I will note that the term AI often invokes in people's minds issues of consciousness and the nature of mind and awareness. One of the authors we read is Hubert Dreyfus, who wrote a famous and contentious book called "What Computers Can't Do", in which he argues that computers will never be able to manifest consciousness. In the class I approach the "hard problem" as an open philosophical question. It is a very fraught issue in which there is plenty of opportunity for debate. I mention these points only because the question of whether computers can think often lurks in the background of discussions of "AI" and adds most of the frisson surrounding the term in people's minds.
Many people associate AI with images like that of Commander Data in a Starfleet courtroom defending his right to be recognized as a person, or the robot from the movie I, Robot pleading with Will Smith's character to recognize the plight of his people at the hands of an exploitative humanity. I mention my course and the "hard problem" because outside the context of a classroom, in more practical settings like this one, I feel obliged to speak more frankly about the prospects of machine intelligence. I agree with Dreyfus that there are certain things that computers can't do, and that it is extremely unlikely that they will ever manifest a level of thinking that would allow them to be considered independently creative or conscious.
Another issue I feel obliged to deal with is that of my technical grasp of the topic. As a professor in the Arts and Humanities, it might be easy to assume that I am somewhat out of my depth when it comes to a highly technical subject like artificial intelligence. Computers are black boxes for most people, and philosophers might be considered about as far from the nuts-and-bolts of software engineering as you can possibly get. I will just mention that I have been an active computer programmer, largely as a hobby but in the past also on academic projects, for over 40 years. I have written tens of thousands of lines of code over the years, including programs using what are typically described as AI techniques. I would direct you to my Internet Archive collection of early 8-bit programs and my GitHub repository and pages to check my bona fides (jggames.github.io and https://archive.org/details/AI8-bitBASICprograms).
So, on the issue of computer intelligence and creativity, I would qualify my opening remarks by stating that I think computer software, as has been well demonstrated over the past half century, can be a great aid to human creativity. For example, Eric Topol's 2013 book, The Creative Destruction of Medicine, illustrates some useful possibilities for developing software to take up the load of medical diagnosis and the better information management desperately needed in public health systems. And Erik Brynjolfsson and Andrew McAfee's The Second Machine Age gives a wonderful rundown of the economic positives of new tech. But what these recent improvements in medicine and commerce illustrate is that what we are really concerned with is a much narrower definition of intelligence. Computers can indeed "think" in the much more modest sense of carrying out tasks formerly carried out only in human brains. They have been doing so since at least the Antikythera mechanism, built by the ancient Greeks to calculate astronomical events and the timing of the Olympic Games, and the abacuses of ancient China. There is nothing new about machines doing intellectual tasks, except perhaps the recent substantial increase in the pace of change that is to be expected in a society at the apogee of a bonanza of cheap, high-intensity energy like that provided by fossil fuels over the last two centuries.
In brief, I see the term AI more as a contemporary buzz term, spurred by recent improvements in language recognition software in combination with advancements in visual and auditory generative programs made possible by access to vast amounts of data generated by the Internet. The recent tendency to use the term AI, with its exciting connections to the "hard problem," is no doubt a convenient tool for anyone needing to raise the investment capital required in free market economies. But as a coder I really can't see the term as anything more than a fund-raising or talent-recruiting trick aimed at spurring on new projects dreamed up by software engineers.
I am not alone in holding such views. As Linus Torvalds, creator of the Linux operating system, put it in a recent interview, AI represents a bunch of people "with their hands out" and another hype cycle like crypto or "cloud native." Others have written books about AI viewed primarily as a marketing device. I will just mention Kate Crawford's well-received Atlas of AI and Meredith Broussard's Artificial Unintelligence. Yuval Noah Harari has a fascinating chapter in his book Homo Deus on the new religions of Silicon Valley and what he calls "Data Religion." I would observe, moreover, that the real computer revolution occurred long before ChatGPT, in the final four decades of the last century, when the application of mundane computer software and automation equipment de-industrialized our society and shrank the blue-collar sector from just over 35 percent of employment to something closer to 10 or 15 percent. As economists and historians of deindustrialization have observed, most of that process did not result from the offshoring of jobs but from processes of automation carried on within our society.
If anything, offshoring occurred late in the process, in the last decade or so, largely to help deindustrialized workers maintain their buying power. Ten-dollar T-shirts from Asia have helped maintain family incomes that would otherwise have noticeably shrunk over the last decades. The vast increases in productivity in the industrial sector were achieved as a result of trillions of dollars of investment. But trillions were also spent in the final four decades of the last century in the service sector, with almost no measurable increase in productivity, until recently. As the American economist Robert Solow famously quipped in 1987, "You can see the computer age everywhere but in the productivity statistics."
Through that period of deindustrialization people did not go on about the potential impacts of "artificial labour." The term "automation" was sufficient. Since that time economists have been waiting for the other shoe to drop in the service sector. But employment just kept growing and growing in that sector, without significant attendant productivity growth, despite vast investments in computerization. The result has possibly been the creation of a vast array of what David Graeber calls "bullshit jobs" in his provocative book of that title. Sometimes I am inclined to think that AI is simply a term preferred by white-collar workers, me included, who feel somewhat threatened by the impending true application of automation to our bloated sector. It grants the process the higher level of cachet that we feel our work deserves compared to that of our blue-collar fellows. Which brings us to the first major moral issue regarding AI: the issue of technological unemployment.
It is an open question whether technological development can or will eventually lead to an acute crisis of employment, rather than the wage stagnation and heightened itinerancy with which we are familiar. This is an empirical issue and still, to some extent, a future issue. We have been able to keep many people employed, or occupied with education, early retirement or social supports, although anyone familiar with the various drug and mental health epidemics will tell you about the limits of such efforts. Setting aside recent studies suggesting that we might finally be seeing a decoupling of productivity growth from employment growth, there is a robust philosophical and ethical debate going on about whether the work that we do have, and can expect to have, will be of an edifying nature, regardless of whether enough of the resulting wealth can be appropriately shared. Some people argue for a guaranteed annual income or other wealth-distributing schemes. I would simply note that such proposals do not grapple with the more fundamental issue of the quality and meaningfulness of work. Figuring out how to make such judgements, and how best to ensure that human beings have enough opportunity to apply themselves to meaningful tasks, is a critical question that continues to vex regardless of proposals regarding the sharing of wealth.
In a somewhat related vein, there is the fundamental question raised by authors like Crawford of the relation of AI to the more general environmental crisis. It is a connection that is often overlooked, but it is a highly relevant observation, as she makes in her book, that computing and electronics are highly energy- and resource-intensive activities, both in their infrastructural requirements and in their typical applications. One need only note that in the early 2000s the improvements Great Britain achieved in energy efficiency, through intensive public actions and investments motivated by global treaty obligations, were entirely offset by the increased energy requirements of the infrastructure of the digital revolution. Crawford's exploration of the vast air-conditioned server farms needed to host our cat videos, not to mention the now vastly expanding AI infrastructures, is sobering. But as Crawford also points out, the infrastructure of AI is tightly interwoven with activities still primarily focused on exploiting natural resources, as has been the hallmark of commercial activity since the industrial revolution. Nothing so far in the empirical data robustly indicates that AI represents a radical shift from this pattern of consumption. But the human species must collectively consider sustainable alternatives to this economic model, as was well illustrated by MIT's original 1972 Limits to Growth model and its updates in 1992, 2012 and 2022.
Finally, there are specific ethical issues related to the development of AI tools themselves and their application for specific purposes. First, the development of Large Language Models and of visual and auditory generative techniques has been highly dependent on access to vast amounts of human-generated training data to feed the various "machine-learning" methods required to develop such applications. These processes raise many issues regarding the use of "our" data to benefit other people's commercial purposes, including issues of copyright, intellectual property rights and privacy. More broadly, the incentive of big data companies to gain access to our information creates many potential moral hazards regarding the farming of users for their information. Since we are currently in the very midst of such processes of development, it is easy to overstate the challenges and the difficulties of finding reasonable administrative and legal solutions.
A second example relating to application simply involves the possibilities the new tools provide for new kinds of malfeasance, which we might be insensitive to simply because of the novelty of the activities attending the new tools. This is an abiding issue of technological change. Are the newfangled automobiles love hotels on wheels for teenagers? Is selling bootleg videotapes theft? Is hacking a kind of trespassing? Is texting while driving recklessness? And, of course, most recently, is failing to properly attribute material produced by a machine a form of fraud?
A somewhat novel type of issue regarding AI development and application can be described by the term "the alignment problem," popularized by Brian Christian in his book of the same title. AI programming techniques like "machine learning" apply the kind of techniques that coders at one time simply called "self-modifying code," which our teachers told us represented the "ultimate" in programmer laziness and warned us to avoid at all costs. This means the resulting software has the unavoidable quality of a black box. Unlike with traditional algorithmic or heuristic methods, contemporary programmers do not have a good grasp of how their systems operate, or of how they will continue to operate in novel conditions. This raises many issues about the handing over of tasks normally requiring human judgement to machines. There are now famous instances of what used to simply be called "expert systems" manifesting hidden biases, often resulting from tendencies buried deep in the human-created training data, but sometimes simply from the imponderables of the programming methods as such.
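To make the worry concrete, here is a minimal sketch, using invented data and a deliberately crude perceptron rather than any real deployed system, of how a trained program can absorb a hidden bias that no programmer ever wrote down:

```python
# Toy illustration: a perceptron "learns" a discriminatory rule from biased
# historical decisions, though no such rule appears anywhere in the code.
# Each applicant: (income_score, debt_score, neighborhood_flag) -> approved?
# The invented history is biased: neighborhood_flag = 1 was always denied.
history = [
    ((0.9, 0.2, 0), 1), ((0.8, 0.3, 0), 1), ((0.4, 0.6, 0), 1),
    ((0.9, 0.2, 1), 0), ((0.8, 0.3, 1), 0), ((0.3, 0.7, 1), 0),
]

w = [0.0, 0.0, 0.0]  # one weight per feature: the "black box" lives here
bias = 0.0
lr = 0.5

def predict(x):
    s = sum(wi * xi for wi, xi in zip(w, x)) + bias
    return 1 if s > 0 else 0

# Standard perceptron updates: feedback from errors adjusts the weights.
for _ in range(50):
    for x, target in history:
        err = target - predict(x)
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        bias += lr * err

# The weight on the neighborhood flag ends up negative: the system has
# quietly encoded the bias buried in its training data.
print("learned weights:", [round(wi, 2) for wi in w])
```

The point of the sketch is that the "rule" the system follows exists only as a vector of numbers produced by the feedback loop; nobody can point to a line of code expressing it, which is exactly the black-box quality at issue.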
One specific example of an issue regarding the application of AI is robotic forms of warfare. The question of whether machines should be handed even greater levels of discretion in the exercise of lethal judgment on our behalf is a very challenging ethical one, although I would note that such issues have been around since poison gas and delayed-action munitions. So I don't think these types of questions are really specific to what we are now calling AI.
I would put most of these specific issues of development and use in the "scare the horses" category. As in the case of the early automobile, when people didn't know how we would manage issues like maintenance, traffic flow and driving etiquette, these now largely forgotten vitriolic debates were quickly resolved. But as the case of the automobile would also suggest, we might well have done a better job of looking at infrastructural issues, like what would happen with all the exhaust fumes coming out of vehicles and how their operating requirements would influence us in re-shaping our cities. So I would tend to weigh the issues of energy and resource use more highly.
It is a simple reality of physics that the development of AI to the degree being predicted by its main advocates will require vast increases in access to energy, both for running the computer systems supporting AI processing and, soon, for creating and storing the vast amounts of artificially created training data that will be needed. The proposed levels of advance in machine learning will require much more data than even our prodigious current Net use could ever supply. But the gurus of AI, when asked about these more mundane energy issues, quickly flip into modes of magical thinking, speaking about fusion, mining asteroids, and the like.
So, we cannot escape the preeminent technological issue of our age: energy. And the complexity of our energy systems raises many cases in which we must ask whether there are some technological activities that simply should not be done, or, as the Latin tag so concisely puts it, a posse ad esse non valet consequentia (just because something can be done does not mean it should be). Whether there are limits to the creation and application of technologies is not as deeply considered a question as it should be, although I would note positive signs that this may be changing, as illustrated by Canada's leading role in the international treaty banning landmines and in recent efforts to limit single-use plastics.
Finally, although the word "technology" is one of the most prominent terms of our age, the definition of this concept turns out to be a highly contested philosophical topic. The fact that such a key term could be so philosophically confused and misunderstood stands itself as the main moral failing of our age. As Marshall McLuhan so sagely put it, "the medium is the message." Interpreting the meaning of technology as such is the preeminent moral challenge of our time.
Bibliography
Topol, Eric. The Creative Destruction of Medicine: How the Digital Revolution Will Create Better Health Care (2013).
I generally try to avoid succumbing to the temptations to catastrophize
that I am prone to these days, but the United Nations recently reported that
the earth is "well outside the safe operating space for
humanity." As reported in the Guardian "The assessment, which
was published in the journal Science Advances and was based on
2,000 studies, indicated that several planetary boundaries were passed long
ago." These boundaries include categories like biosphere integrity, land
use, climate change, fresh water, nitrogen and phosphorus flows, synthetic pollution,
and ocean acidification. The scientists suggest that 6 of 9 major boundaries
have already been surpassed and the others soon will be.
And of course our society continues to struggle with growing social
crises such as the opioid epidemic, which according
to the Lancet has killed over 30,000 Canadians since 2016, the Covid-19
pandemic, mental health crises of various sorts, such as anxiety, loneliness
and depression, all amidst a declining health system. There is also the housing
crisis, increasing family debt, stagnant family incomes for over 4 decades, the
ongoing impacts of automation leading to deindustrialization, not to mention
the nebulous oncoming threats of "AI."
And yet in the face of such crises we also find our society increasingly politically polarized and riven by "divisiveness" (https://youtu.be/vRV_6XQrMoI?si=DFHGerT7NLkjtZGk), which undercuts our ability to respond effectively through democratic institutions. My talk today is a speculation about the root causes of this divisiveness, which concludes with some political recommendations for addressing it that go against the grain of the suggestions made by many contemporary commentators.
Steven Levitsky and Daniel Ziblatt's recent New York Times bestseller Tyranny of the Minority has garnered some prominence and can serve as an example of one mainstream view. They blame American political institutions, which they see as skewed by the founding fathers too heavily toward preventing tyrannies of the majority and too little toward checking the obstructive powers of minorities. But as Zack Beauchamp argues in a review of their book, there are other countries with "crises with root causes strikingly similar to America's, such as Israel and Hungary," not to mention other global examples such as the Philippines, Brazil, Argentina, Bolivia, Slovakia and Poland, which work against an American-exceptionalist explanation. But then again, Beauchamp proceeds to argue for his own explanation of growing extremism as based in "entrenched racial hierarchies" and their weakening place in American society. It is unclear how this hypothesis applies to Israel, Hungary and the other global examples.
My alternate hypothesis about the root causes of polarization grows out of the work of anthropologist Joseph Tainter and his theory of civilizational collapse, and the elaboration of that theory by political scientist Thomas Homer-Dixon. According to Tainter, societies collapse when they reach a point of cultural/technological development where the energy available to the system is no longer sufficient to respond to the major social and environmental problems thrown up by that system. (https://www.bbc.com/future/article/20190218-are-we-on-the-road-to-civilisation-collapse) In other words, as Homer-Dixon describes the process, major problems arise as by-products of the current technological system taken as a whole, problems which that system is fundamentally too energy-deficient to address. Of course, major new sources of energy or efficiencies can allow such problems to be addressed. But if the ingenuity required to open up such sources or to provide the needed efficiencies is unavailable or practically out of reach, then a society will be unable to respond with new levels of complexity and will face what Homer-Dixon calls an "ingenuity gap." He is doubtful that currently proposed new sources of energy and proposed efficiencies will be able to provide the new levels of required energy.
The inevitable result is some kind of contraction of the civilization in terms of complexity (i.e., its level of progress). Homer-Dixon argues that such contraction need not be catastrophic. There can be strategic walk-backs of certain aspects of the technological system that set the stage for future growth. But often civilizational collapse results, where the unravelling of complexity takes the form of a cascading collapse of interconnected aspects of complexity triggered by "energy overshoot." The system needs to rise to new levels of complexity in response to the consequences of its current operation, but can't.
As such failure unfolds, major derivative crises occur (environmental and social) that are the more obvious manifestations of the more fundamental lack of energy. Such crises, as symptoms of collapse, are identifiable by their intransigence. They are obvious, but the solutions are not, because there is fundamentally a lack of extra energy. So they will go unaddressed. Lip service might be paid to solving them, overblown panaceas can be floated, promises can be made, but in real situations of civilizational collapse nothing substantial will be able to be done, because the society has maxed out its energy budget supporting its existing level of technological complexity.
For Tainter and Homer-Dixon, energy is the "master resource" because it cannot simply be skirted around by way of technological development in general; only developments that expand access to energy sources will address the crises. In other words, technology itself is not an energy source. It is only a facilitator of access to energy resources in nature. It is those resources that determine all technological possibilities, including the possibilities of accessing new energy resources. This aspect of the dynamic is captured by the concept of EROI, or Energy Return on Investment. It is ultimately this ratio that determines the amount of progress possible for a society. In brief, all civilizational collapses are just energy crises masquerading as an array of more manifest crises.
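As a rough illustration of why this ratio matters (the figures below are invented for the example, not empirical estimates), note how quickly the energy surplus left over for the rest of society shrinks as EROI falls:

```python
# EROI = energy delivered / energy invested in obtaining it.
# The surplus fraction left for the rest of society is 1 - 1/EROI.
# All figures are illustrative placeholders, not empirical estimates.
for label, eroi in [("high-EROI source (early oil, say)", 50.0),
                    ("middling source", 10.0),
                    ("marginal source", 2.0)]:
    surplus = 1 - 1 / eroi  # share of output not consumed by extraction
    print(f"{label}: EROI {eroi:g} -> {surplus:.0%} surplus for society")
```

At an EROI of 50, almost all delivered energy is surplus that can support complexity; at 2, half of everything obtained is immediately consumed just obtaining it.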
In situations of energy overshoot, one would expect that democratic
leaders would have an especially hard time of it. In times of excess energy,
which has been our experience for the last 200 years, societies can always simply
add new forms of complexity, typically in the form of new public services, if
new problems arise. Politicians can
simply propose these new forms of complexity to get elected because there is unused
energy available to the system. They might still lie for selfish or strategic
reasons and make promises they know they can't keep, but this will not be a necessity.
But once overshoot begins to take hold (I think of this as us collectively riding down the right side of "Hubbert's Peak," the "Peak Oil" curve imagined as a giant roller coaster), and if ingenuity gaps manifest themselves, it will become increasingly tempting for politicians to become mendacious. Barring some kind of major energy revolution providing new high-EROI sources of energy to replace the high-EROI oil and coal that have fueled progress for the last 200 years, solutions will be limited to strategic compromises of existing aspects of progress, which will inevitably alienate some existing constituency. Since easy solutions that simply involve adding complexity will be increasingly less available, there will be an acute increase in the level of leadership skill needed to fashion political consensuses. Whether from necessity or fecklessness or both, the temptation will be for leaders to oversell panaceas, reach for desperate and extreme solutions, focus on issues where non-energy-intensive moral victories can be achieved (possibly as a kind of distraction), or use nefarious rhetorical tricks to maintain themselves in power.
As choices for addressing problems diminish, voters will become
increasingly disenchanted and political apathy will increase. Also, since
little substantive difference in terms of addressing problems will manifest
itself, the act of voting will increasingly become an exercise in random
selection. If choosing one way or another makes little practical difference,
there can be no actual rationale for choosing in one direction over another.
Non-rational, essentially random features of human psychology, personality and
circumstance aggregated across vast populations will become the deciding
factors for how people select political parties. Political life will
increasingly become an arbitrary exercise of "team picking", even as
voters become splintered into ever finer and increasingly less-rational
factions. In democracies, which must ultimately always filter choices
through majority parties or coalitions, the teams will tend towards the mean
(50% going this way, 50% going that way). Such a process will likely involve
much scapegoating, blame-laying and intensifying vilification and demonizing of
the other team as a psychological relief-valve for the persistent failure of their
growing concerns to be addressed.
In short, political polarization in democratic societies would likely be an effect of, and a sign of, those societies being in a process of collapse. Could such a process be happening to us? Certainly not, if we take the perspective of many commentators on polarization, who often suggest that polarization itself is a cause of potential social breakdown. For example, Morgan Kelly, an author at the High Meadows Environmental Institute, observes:
As social interactions and
individual decisions isolate people into only a few intractable camps, the
political system becomes incapable of addressing the range of issues — or
formulating the variety of solutions — necessary for government to function and
provide the services critical for society.
In short, it is people's personal dispositions that create polarization,
not wider forces affecting society that influence those dispositions.
However, in subsequent remarks Kelly points to other factors, based in the new media, as potential culprits for these negative personal dispositions. His comments, and those of many other popular commentators on the issue, raise the possibility of what Marshall McLuhan called, in the regrettably insensitive terminology of his time, the "tribalizing" effects of new media, or what theorists drawing on his work now call "narrowcast" media, such as the social media platforms with their tendency to create "echo chambering" in public discourse. McLuhan would add that such media also, paradoxically, embody possibilities for increasing exposure to negative and critical viewpoints, which he described as the "global village" effect, and which can lead to a disruption of the development of a sense of personal identity. He observes that situations of the loss of identity, whether in individuals or in societies, almost always result in violent responses. Such complex effects are real, but I would argue that they might only feed off and accentuate a process of polarization like that described above.
My reason for questioning the suggestion that the echo chambering effect is the primary cause of polarization is that this suggestion only explains increasing insensitivity and decreasing tolerance in public discourse, not the increasing sense of deadlock and decreasing confidence in political life. As reported by the Carnegie Endowment, several trends developing in democracies seem to characterize polarization in both North America and Europe, such as:
popular confidence in political institutions has plummeted

U.S. and European voters are disenchanted with mainstream political parties

substantial numbers of voters are nonetheless attaching themselves more ardently to parties or moving to more extreme political groupings
The last point's connection to the prior two is somewhat baffling. Why, if confidence is so low in institutions and parties, would substantial numbers of voters be inclined to cling more ardently to parties? Why would others choose to move to extreme groups rather than simply, as has always been the tendency in democracies, nudging dominant parties to take up new policies and transform them into mainstream policies? Instead, we find societies increasingly split. For example, in the last Canadian election the victorious Liberals won 32.6 percent of the popular vote, whereas the Conservatives actually won the popular vote with 33.7 percent (https://www.sfu.ca/~aheard/elections/1867-present.html). It seems increasingly that elections hang on knife edges. In the U.S., Trump was only pushed over the top by the archaic state-elector system, and recent elections in Israel have required new levels of coalition building. There seems to be an increasing tendency in democracies toward a lack of clear mandates. Instead, slim minorities often determine major shifts in political direction, as in Brexit. What can explain this phenomenon of oscillation around the middle?
If the echo chambering effect were the main cause of polarization, its effect would be to coarsen debate between existing party constituencies as they increasingly lose contact with each other. But this hypothesis alone does not explain the growing sense of gridlock. Indeed, if new media and their echo chambering effect decrease the awareness of other groups, we should expect the effect to be simply to solidify existing groups at the numbers that existed before these effects took hold. But what we seem to find instead is societies gravitating towards equally balanced, fundamentally opposed pluralities.
Such a tendency toward oscillation around the middle of the political spectrum might be what causes the sudden major flips on issues decided by only the narrowest of margins, as in Brexit, or the paradoxical victories of parties that lack popular support but eke out wins due to the arcane minutiae of electoral systems. This split at the middle of the spectrum does not seem widely discussed by commentators on polarization. Rather, the focus is on the vitriol and extremity of the groupings making up the coalitions of left and right (https://youtu.be/x_Q9ynm2Rfg?si=j6_z83RbH2NJzby8).
As Kelly's comment exemplifies, commentators frequently point to personal tendencies of thought and communication as the main causes of polarization. In a TEDx talk titled "That Open Secret About Political Polarization," Jake Teeny points out, for example, that surveys indicate an increasing tendency for people to report feeling reluctant to engage in discussions with political opponents because of an "expectation of being unheard." In response he presents some practical suggestions for overcoming and managing such feelings.
Tainter and Homer-Dixon's analysis suggests a different explanation for why people might be having feelings of being unheard. It could be the inherent intransigence of problems in situations of energy overshoot, and the encouragement such a situation creates for exploring desperate measures. Such desperation, and the inevitable fruitlessness of addressing mere side-effects, could be the cause of an increasing sense of frustration with political discourse, at least in democratic societies.
In contrast, as Homer-Dixon and the historian Ronald Wright explore in some detail, in authoritarian societies one might simply find increasing inequality between marginalized groups and elites, as elites leverage their power and privilege to preserve and even expand their interests in the lead-up to collapse. Wright, in his A Short History of Progress, discusses actual empirical evidence of this, such as can be found in surveys of the bone density of skeletons of different classes in societies that have experienced collapse, as in Mesoamerican societies like the Maya. Such inequality will undoubtedly also manifest in democratic societies, but conceivably to a lesser extent. Instead, polarization might be the primary political effect.
If Tainter, Homer-Dixon and Wright’s portrayal of civilizational
collapse is accurate, societies experiencing it will need to focus their
leadership expertise on addressing the root causes of this kind of situation.
Priority must be put on seeking radically new energy sources, major new
efficiencies in current usage and possibilities for walking-back non-essential
aspects of so-called "progress," to allow for controlled forms of
"collapse" in the meantime.
Attributing blame to whoever is being most intransigent will not be helpful. Nor will seeking ways to ameliorate polarization directly, such as the suggestion of Henry E. Brady of UC Berkeley's Goldman School of Public Policy, who recommends better "civics courses." If polarization is largely a symptom of a largely unrecognized dynamic of collapse, viewing civics as a "giant killer" of the phenomenon might be a mere panacea, no matter the intrinsic value of such educational activity. Instead, what will be critical will be civic activism focused on the fundamental energy and technological system dynamics feeding collapse. Think of the recent American government shutdown, where an ultra-conservative minority held a slim balance of power that allowed it to hold the operation of the entire U.S. government hostage. Many in both the right and left news media were happy to portray that minority as mere ignorant attention-seekers. Far too few were inclined to look for the deeper causes of why conservative people are increasingly behaving like political nihilists and anarchists.
So instead of seeking to lay blame, we must unrelentingly demand that our political leaders explain what they are doing to find new forms of energy and energy efficiency, and which existing technological systems they think are not as vital as most believe. As a catchphrase, we need "innovation and discrimination about innovation." Political leaders and our own political discourse must be judged in terms of these priorities until civilizational collapse has been managed or fundamentally averted, if collapse is indeed occurring.
Acknowledgements
I'd like to thank all those who provided feedback on my presentation of this piece at the Atlantic Regional Philosophical Annual Meeting in Charlottetown and especially Dr. Will Sweet and Dr. Pamela Courtney-Hall.
This piece by Spinney strikes me as sensationalism exploiting common anxieties and misperceptions about AI, loosely connected with some legitimate concerns about the declining momentum of scientific progress ("The End of Science" a la Horgan, and the challenges of "Big Science"). One remark in particular makes me question the author's judgement:
particularly a form of machine learning called neural networks, which learn from data without having to be fed explicit instructions.
The highlighted part is an overstatement. All machine learning (including "neural networks") begins with basic assumptions and methods, and specific goals ("end points"). These starting methods simply allow for their own refinement and alteration through the processing of large amounts of data, which the digital revolution has made available. This is simple feedback, which has been a part of programming from as far back as Ada Lovelace. But it sounds so much more "sensational" when buzzwords like "neural network," "AI" and "machine learning" are used instead of mundane programming terminology. Thirty years ago, we used terms like "self-modifying code" for such mundane techniques of software development.
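To see how much explicit instruction "learning from data" actually involves, here is a minimal sketch of such a feedback loop (a toy linear model with invented data; real systems differ mainly in scale):

```python
# Everything below is explicitly instructed before any "learning" occurs:
# the model's form, the error measure, the step size, and the update rule.

# Explicit assumption 1: the model is a line, y = w*x + b.
w, b = 0.0, 0.0

# Explicit assumption 2: the goal ("end point") is minimal squared error.
def loss(data):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# Invented training data: noisy points near y = 2x + 1.
data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8)]

# Explicit assumption 3: the update rule (gradient descent) and its step size.
lr = 0.05
for _ in range(2000):
    dw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    db = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * dw  # feedback: the error adjusts the parameters,
    b -= lr * db  # which is all the "learning" amounts to

print(f"learned: y = {w:.2f}x + {b:.2f}, loss = {loss(data):.4f}")
```

The "learning" here is nothing but programmed feedback refining parameters the programmer chose to expose; scale it up to billions of parameters and you have the same logic under a more sensational name.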
Pieces like this one indicate to me that there is something very strange at work in the lives of computer programmers and software companies today that has led them to develop this rhetoric about AI, machine learning and neural networks. I worry that this "sexing up" of software engineering, in the face of the failure of real AI, is being exploited by big IT as a cover for getting lots of people to buy into the inanities of our largely unregulated tech industry.
By even using the terms "AI" and "machine learning" instead of more accurate descriptors like "clever coding," "data mining" or "automated programming," members of the public have already ceded the issue of whether these applications should be embraced or avoided, legally limited or left entirely to users to navigate by themselves. Who can be opposed to the application of intelligence? Who would want to limit "a learner"? The reality is that these terms are mere marketing hype, which non-programmers should refuse to use.