Friday, 15 November 2024

Socrates' Real Method

Socrates' real method is an outcome of his discovery and practice of a fundamental principle of reason: an argument is only as strong as the strongest criticism it has withstood. The acceptance and practice of this principle implies that your intellectual opponents and enemies are actually your greatest friends. This reversal of traditional Athenian morality, which called for you to keep your friends close and your enemies at a maximal disadvantage, is the cornerstone of the entire Western intellectual tradition. It is the foundation of the university, with Plato's Academy being its first tangible institutional fruit.

Socrates' recognition of this principle guided his entire life.  He spent all his time (or so we are told) pursuing interlocutors with whom he could practice the quid pro quo suggested by his principle.  He criticized others' arguments for their most deeply held beliefs in the hope that they would return the favour.

Yet, traditional descriptions of "the Socratic Method" often leave this last part out.  Instead they emphasize only the former part of the method, in which Socrates poses probing questions that discomfit and confuse his discussion partners. These descriptions limit his aim to the attempt to promote self-reflection and self-criticism in the minds of his interlocutors. The result is the belief, as one of my colleagues once put it, that the heart of Socrates' method and the discipline of Philosophy as a whole is to learn how to be "a professional pain in the ass." This turns Socrates into an insufferable know-it-all or mere provocateur, and completely betrays his personal mottos: "gnōthi sauton" (know thyself) and his avowal that the only thing he truly knew was that he knew nothing.

I'm not sure what this core principle of Socrates should be called.  One suggestion that can be found on the Net is "The Rebuttal Principle."  As a standard piece of logic advice goes:

"Check for rebuttals: Does the argument effectively address the strongest counterarguments? "

The author continues:

"Some ways that arguments can fail to meet the Rebuttal principle include: Misrepresenting the criticism, Bringing up trivial objections, Using humor or ridicule, Ignoring or denying counterevidence, and Attacking the critic instead of the criticism."

It is very difficult to imagine the full-scale cultural embrace of this principle in European intellectual traditions without the influence of classical Periclean Athens (and its public assembly with a requirement for voters' attendance at debates) and of Socrates.  His rejection of the Sophists and their techniques for winning debates makes no sense without recognition of something like this principle.

The Sophists were certainly not above (or so we are told by Plato) using many of the techniques listed above. But for Socrates the Rebuttal Principle was clearly more than a piece of sage logical advice. It was not a mere question regarding argument formation. It was a call to action: "Go find the strongest counterarguments!" Could there ever be an end to such a search?  The true recognition of this principle puts one on a perpetual quest, a quest tilted towards engaging your intellectual enemies, and away from your intellectual friends and supporters. This tilt is what ultimately led to his death. Clearly someone who did not frequently challenge people of a different mind would not have ended up having to drink hemlock. It is a dangerous and even a deadly principle.

Part of what made it so deadly was that Socrates clearly felt that it could not be practiced at a distance (at least not permanently).  Apparently he could write, but he was not a "man of letters."  Instead, he chose to prioritize face-to-face discourse.  There are obvious advantages to this approach.  It is certainly more difficult to misrepresent, trivialize, ridicule or deny your opponent when they are standing right in front of you, listening.  In his practice of this principle there is a warning for our mediated age.  You attenuate your connection to your enemies at your peril.

Clearly, media today are not "broadcast" but "narrowcast." That is to say, as Marshall McLuhan put it, "In the electric age, when our central nervous system is technologically extended to involve us in the whole of mankind … the globe is no more than a village" (1964, p. 5).  The "new media" do away with traditional "broadcasting", which only works by reaching out to broad general audiences from central points (newspapers, radio, TV).  These old media structurally tended towards creating forms of "virtual agora/assembly" through competition between a limited number of these points.  This necessity of their very structure and operation encouraged forms of quality control and professional editorial curation (the role played by "moderators" in real group discussions), which allowed for the convening of accessible but not chaotic broad-based public discourse.  Instead, with the new media you have "all margins" and "point to point" communication that tends to create only loosely interconnected parochial networks "like villages", in which the force of conventionalism can bear down on participants, while also providing endless possibilities for the creation of fragmented sub-groups and cliques ("echo chambers"). McLuhan used the word "village" to raise all the specters of "a place anyone in their right mind wishes to escape from", rather than misty nostalgia of the "there's no place like home" type.  Socrates would agree, and would undoubtedly eschew all the pseudo forms of "engagement" that social media create, and instead head downtown, to engage with his fellow citizens, for real.


Saturday, 7 September 2024

Why Ardent Atheists and Theists are Wasting Everyone's Time

I want to discuss what I think is a false dichotomy between religion and science, or more correctly between theism and naturalism. It is a basic working assumption of tens of thousands of Internet debaters along these fault lines that there is a fundamental choice that must be made between believing in a world where magical stuff happens and a world that must be conceived of as having no possibilities for such. I am using the term magic here as a shorthand for anything that can't be fully accounted for by the operation of laws of nature, i.e., by what we learn from physics and the other sciences. I could use the term “supernatural” but that term is problematic for reasons I can’t get into here. Another term I like to avoid is “miraculous,” since it can be defined in ways that either encompass the operation of laws of nature or do not.

It is a strange confluence that most of the ardent religious and most of the ardent atheistic naturalists agree on the following fundamental dichotomy: The world either has magical elements or it does not. The bulk of religious people assert that it does, so they are able to assert their kind of supernatural theism. The bulk of atheists assert that it does not, so they reject the supernatural and assert the non-existence of gods or God.

This seems a false dichotomy. Two other possibilities exist. As the longstanding traditions of diverse forms of "Deism" make clear, and as newer perspectives like Slavoj Žižek's "Christian Atheism" also suggest, it is possible for this world to be a place in which no magic occurs, but which nonetheless has as its source some form of eternal agency. Or it is possible that there is no eternal agency, but that the world is still capable of manifesting profoundly inexplicable events, such as the endless possibilities for weirdness thrown up by the vastness of multiple universes and/or quantum uncertainties. Recent fictional portrayals in movies illustrate such visions well.

If people were truly to take all these possibilities seriously, one would have to accept that one's assessment of the evidence, or lack thereof, for the reality or possible reality of magical elements cannot decide the matter between theism and atheism.

So, the debate about God cannot just be a debate about the existence or non-existence of an eternal agent as such. The debate must also include discussion of what the presence of “the magical” indicates about reality. That is to say, theists must explain why it must be assumed that God would obviously prefer to make a significantly magical world.  And atheists must explain why it is obvious that a God or gods would never make worlds like those they assume would result from blind material processes. In other words, they would have to explain in detail why this dichotomy must be accepted.

But as discussions of multiple universes and quantum uncertainty have proliferated, an increasing awareness has grown in popular culture of how inexplicable aspects of nature could be compatible with either theism or atheism being true. There are untold numbers of people who are skeptical about or even outright hostile towards the God of traditional theism, who still nonetheless believe in all kinds of wondrous "spiritual conceits" ranging from ESP, telekinesis and interacting with the dead, to more humble beliefs such as that “love conquers all,” or that there is someone out there that they are "fated" to love.  As the saying goes, “The truth is out there.”

But if one rejects the dichotomy and believes instead that the question of the existence of God or gods cannot be decided by one's assessment of the possibility of magical elements of reality, this raises a major new issue. Can we avoid agnosticism about naturalism?  Regardless of arguments about the issue of God one must also resolve this fundamental question:

1. Can we know that reality cannot involve radical departures from the normal expected operation of laws of nature?

One might be tempted to describe this as simply another variation of “the problem of miracles,” as many philosophers and theologians might be inclined to do, but it clearly refers us to a host of topics that are not specifically traditional religious ones. These include the problem of explaining the "everyday miracles" of human consciousness, the experience of meaning, free will and the power of moral obligation, as well as our speculations about alternate dimensions or universes, apparent evidence for fine tuning, the emergence of life from dead matter (and its very definition), etc. Theists have been accused of resting their faith irrationally on a “God of the Gaps”, which is to say, of feeling entitled to believe in God as an explanation for aspects of nature not yet fully understood.  But it is just as much a matter of faith to believe in the inevitability of adequate future natural explanations of phenomena like these.

In short, most theists are magical thinkers, but many, perhaps most, atheists seem to be magical thinkers too -- they are just magical thinkers in waiting.  Most theists think the contentious evidence of the magical is enough.  Most atheists think the contrary.  In the presence of any residue of uncertainty, such theists must live with the ever-present specter of doubt, as is well known; but, as is less well acknowledged, most atheists seem to live with the ever-present possibility of being, to adapt C.S. Lewis's phrase, "surprised by faith."  Why? Because it is empirically impossible to do away with the possibility for reality to manifest indications of magic.

Both groups are in their happy place when engaging with each other about their ever-shifting personal assessments of the magical. For example, they both expect that the miraculous events portrayed in scriptures must have occurred as literally presented if they are to be adjudged religiously meaningful.  As such they ignore the mundane "miracle" of the scriptures themselves, which is that these stirring and challenging narratives emerge through vastly complex processes of human imagination and reasoning, working through entire cultures grappling with the implications of their metaphysical judgements over immense periods of time. They are wondrous because they emerge from such processes, not despite these origins. That we today can also join these conversations simply adds to the wonder.  But atheist critics like Dawkins focus only on making fun of theists who feel they must "add fairies" to their gardens to evoke appreciation of the beauty that can be found there, while also arguing that so-called "liberal theists," who are satisfied with the garden itself, are not "true believers."

It is for these reasons that I agree with Žižek that something like Christian belief is the only possible foundation for the embrace of a real materialism (the subtitle of his book on Christian Atheism is "How to Be a Real Materialist"), although I disagree with his claim that this can only result in a paradoxical form of Christian atheism. Only by positing a creator can one imagine a world ordered enough to be significantly devoid of radical departures from the operation of pitiless material laws. The belief supposedly firmly embraced by naturalists in a world created from blind processes "red in tooth and claw" is fatally compromised when your outlook is fundamentally and perpetually open to the possibility of being "surprised by faith." A Christian vision, by contrast, imagines a God doing the ethical work of discerning the acceptability of creating substantially pitiless material worlds, that is to say, worlds like those supposedly grimly embraced by "realist" naturalists.

There are significant practical conclusions that can be drawn from questioning the supposed forced choice between naturalism and theism. First, in the absence of a resolution to perplexing metaphysical matters, people have the more achievable epistemological goal of discussing the ethical, self-identity and practical implications of their decision-making, drawing on their working assumptions and speculations about such matters. Exploring the empirical benefits and harms of religious beliefs, or of their rejection, for individuals should therefore have much greater priority than abstract debates about the existence of God.

Second, the fact that many people inclined towards naturalism believe that their work is done if the possibilities of magic are put in doubt reveals that many of them simply assume that deities must be disposed towards magic and are therefore barred from creating naturalistic worlds. It is only by this assumption that evidence for the absence of magic allows them to proceed directly to the conclusion that ultimate agency does not exist.  Otherwise, they would have to explain why the absence of evidence of magic necessarily implies the non-existence of an ultimate agent. In this blind spot they overlook what I call the core question of theism:

2. Is there a reason for a God not to make worlds and beings like those naturalists have expected to emerge from purely natural material processes?

The belief that gods or God would never make such worlds seems to be a mere assumption on the part of most naturalists and most theists. But if the presence or absence of magic cannot decide the matter of the existence of God or gods, both these groups may well be wasting everyone's time and distracting us from the discussion of these two much more critical and interesting metaphysical questions. I wish those debating the existence of God while ignoring these issues, who flood my Internet feeds, would move along.


Wednesday, 17 July 2024

AI Hype is a Distraction

I’ve been asked to speak on the social implications of AI, which I take to mean its ethical and political implications.  I suspect that I got this invitation because I teach a course on the Philosophy of Technology for which I have written a textbook (plug plug).  But I also suspect that I got it in part because I teach a course called “Minds: Natural and Artificial”, which focuses on the topic of “The Philosophy of Mind” and more particularly the issue called “the hard problem”, which considers the mysterious phenomenon of consciousness.  I won’t bore you with details, but I will note that the term AI often invokes in people’s minds issues of consciousness and the nature of mind and awareness.  One of the authors we read is Hubert Dreyfus who wrote a famous and contentious book called “What Computers Can’t Do” in which he argues that computers will never be able to manifest consciousness.  In the class I approach the “hard problem” as an open philosophical question.  It is a very fraught issue in which there is plenty of opportunity for debate.  I mention these points only because the question of whether computers can think often lurks in the background of discussions of “AI” and adds most of the frisson surrounding the term in people’s minds.

Many people associate AI with images like that of Commander Data in a Starfleet courtroom defending his right to be recognized as a person, or the robot from the movie I, Robot pleading with Will Smith’s character to recognize the plight of his people at the hands of an exploitative humanity.  I mention my course and the “hard problem” because outside the context of a classroom, in more practical settings like this one, I feel obliged to speak more frankly about the prospects of machine intelligence.  I agree with Dreyfus that there are certain things that computers can’t do and that it is extremely unlikely that they will ever manifest a level of thinking that would allow them to be considered independently creative or conscious.

Another issue I feel obliged to deal with is that of my technical grasp of the topic.  As a professor in the Arts and Humanities, it might be easy to assume that I am somewhat out of my depth when it comes to a highly technical subject like artificial intelligence.  Computers are black boxes for most people, and philosophers might be considered about as far from the nuts-and-bolts of software engineering as you can possibly get.  I will just mention that I have been an active computer programmer, largely as a hobby but in the past also on academic projects, for over 40 years.  I have written tens of thousands of lines of code over the years, including programs using what are typically described as AI techniques.  I would direct you to my Internet Archive collection of early 8-bit programs and my GitHub repository and pages to check my bona fides (jggames.github.io and https://archive.org/details/AI8-bitBASICprograms).

So, on the issue of computer intelligence and creativity, I would qualify my opening remarks by stating that I think computer software, as has been well demonstrated over the past half century, can be a great aid to human creativity. For example, Eric Topol's 2013 book, The Creative Destruction of Medicine, illustrates some useful possibilities for developing software to take up the load of medical diagnosis and the better information management desperately needed in public health systems.  And Erik Brynjolfsson and Andrew McAfee’s The Second Machine Age gives a wonderful rundown of the economic positives of new tech.  But what these recent improvements in medicine and commerce illustrate is that what we are really concerned with is a much narrower definition of intelligence.  Computers can indeed “think” in the much more modest sense of carrying out tasks formerly carried out only in human brains. They have been doing so since at least the Antikythera mechanism, built by the ancient Greeks to calculate astronomical events and the timing of the Olympic games, and the ancient Chinese abacus. There is nothing new about machines doing intellectual tasks except perhaps the recent substantial increases in the pace of change, which is to be expected in a society at the apogee of a bonanza of cheap high-intensity energy like that provided by fossil fuels over the last two centuries.

In brief, I see the term AI more as a contemporary buzz term, spurred by recent improvements in language recognition software in combination with advancements in visual and auditory generative programs made possible by access to vast amounts of data generated by the Internet. The recent tendency to use the term AI with its exciting connections to the “hard problem” is no doubt a convenient tool for anyone connected with the need to raise investment capital required in free market economies. But as a coder I really can’t see the term as anything more than a fund-raising or talent recruiting trick aimed at spurring on new projects dreamed up by software engineers. 

I am not alone in holding such views.  As Linus Torvalds, creator of the Linux operating system, put it in a recent interview, AI represents a bunch of people "with their hands out" and another hype cycle like crypto or "cloud native."  Others have written books about AI viewed primarily as a marketing device.  I will just mention Kate Crawford’s well-received Atlas of AI and Meredith Broussard’s Artificial Unintelligence.  Yuval Noah Harari has a fascinating chapter in his book Homo Deus on the new religions of Silicon Valley and what he calls “Data Religion.”  I would observe, rather, that the real computer revolution occurred long before ChatGPT, in the final four decades of the last century, when the application of mundane computer software and automation equipment de-industrialized our society and shrank the blue-collar sector from just over 35 percent of employment to something closer to 10 or 15 percent.  As economists and historians of deindustrialization have observed, most of that process did not result from the offshoring of jobs but from processes of automation carried on within our society.

If anything, offshoring occurred late in the process, in the last decade or so, largely helping deindustrialized workers maintain their buying power.  Ten-dollar T-shirts from Asia have helped maintain family incomes that would otherwise have noticeably shrunk over the last decades.  The vast increases in productivity in the industrial sector were achieved as the result of trillions of dollars of investment.  But trillions were also spent in the final four decades of the last century in the service sector, with almost no measurable increase in productivity, until recently.  As the American economist Robert Solow famously quipped in 1987, “You can see the computer age everywhere but in the productivity statistics."

Through that period of deindustrialization people did not go on about the potential impacts of “artificial labour.”  The term “automation” was sufficient.  Since that time economists have been waiting for the shoe to drop in the service sector.  But employment just kept growing and growing in that sector, without significant attendant productivity growth, despite vast investments in computerization.  The result has possibly been the creation of a vast array of what David Graeber calls “bullshit jobs” in his provocative book of that title.  Sometimes I am inclined to think that AI is simply a term preferred by white-collar workers, me included, who feel somewhat threatened by the impending true application of automation to our bloated sector.  It grants the process the higher level of cachet that we feel our work deserves compared to that of our blue-collar fellows.  Which brings us to the first major moral issue regarding AI: technological unemployment.

It is an open question whether technological development can or will eventually lead to an acute crisis of employment rather than the wage stagnation and heightened itinerancy with which we are familiar.  This is an empirical issue and still to some extent a future issue.  We have been able to keep many people employed, or occupied with education, early retirement or social supports, although anyone familiar with the various drug and mental health epidemics will tell you about the limits of such efforts.  Setting aside recent studies suggesting that we might finally be seeing a decoupling of productivity growth from employment growth, there is a robust philosophical and ethical debate going on about whether the work that we do have, and can expect to have, will be of an edifying nature, regardless of whether enough of the resulting wealth can be appropriately shared.  Some people argue for a guaranteed annual income or other wealth-distributing schemes.  I would simply note that such proposals do not grapple with the more fundamental issue of the quality and meaningfulness of work.  Figuring out how to make such judgements, and how best to ensure that human beings have enough opportunity to apply themselves to meaningful tasks, is a critical question that continues to vex regardless of proposals regarding the sharing of wealth.

In a somewhat related vein, there is the fundamental question raised by authors like Crawford of the relation of AI to the more general environmental crisis.  It is a connection that is often overlooked, but it is a highly relevant observation to make, as she does in her book, that computing and electronics are highly energy- and resource-intensive activities, both in their infrastructural requirements and in their typical applications.  One need only note that in the early 2000s the improvements in energy efficiency made in Great Britain through intensive public actions and investments, motivated by global treaty obligations, were entirely offset by the increases in energy required for the infrastructure of the digital revolution. Crawford’s exploration of the vast air-conditioned server farms needed to host our cat videos, not to mention the now vastly expanding AI infrastructures, is sobering.  But as Crawford also points out, the infrastructure of AI is tightly interwoven with activities still primarily focused on exploiting natural resources, as has been the hallmark of commercial activity since the industrial revolution.  Nothing so far in the empirical data robustly indicates that AI represents a radical shift from this pattern of consumption. But the human species must collectively consider sustainable alternatives to this economic model, as was well illustrated by MIT’s original 1972 Limits to Growth model and its subsequent updates in 1992, 2012 and 2022.

Finally, there are specific ethical issues related to the development of AI tools themselves and their application for specific purposes.  First, the development of Large Language Models and of visual and auditory generative techniques has been highly dependent on access to vast amounts of human-generated training data fed into the various “machine-learning” methods required to develop such applications.  These processes raise many issues regarding the use of “our” data to serve other people’s commercial purposes, including issues of copyright, intellectual property rights and privacy.  More broadly, the incentive of big data companies to gain access to our information creates many potential moral hazards regarding the farming of users for their information.  Since we are currently in the very midst of such processes of development, it is easy to overstate the challenges and the difficulties of finding reasonable administrative and legal solutions.

A second example relating to application simply involves the possibilities of the new tools to facilitate new kinds of malfeasance that we might be insensitive to simply because of the novelty of the activities attending the new tools.  This is an abiding issue of technological change. Are the newfangled automobiles love hotels on wheels for teenagers?  Is selling bootleg video tapes theft? Is hacking a kind of trespassing?  Is texting while driving recklessness?  And, of course, most recently, is not properly attributing material produced by a machine a form of fraud?

A somewhat novel type of issue regarding AI development and application can be described by the term “the alignment problem,” popularized by Brian Christian in his book of the same title.  AI programming techniques like “machine learning” apply the kind of approach that coders at one time simply called “self-modifying code,” which our teachers told us represented the “ultimate” in programmer laziness and warned us to avoid at all costs. This means the resulting software has the unavoidable quality of a black box.  Unlike with traditional algorithmic or heuristic methods, contemporary programmers don’t have a good grasp of how their systems operate and will continue to operate in novel conditions.  This raises many issues about the handing over of tasks normally requiring human judgement to machines.  There are now famous instances of what used to simply be called “expert systems” manifesting hidden biases, often resulting from tendencies buried deeply in the human-created training data, but sometimes simply from the imponderables of the programming methods as such.
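To make the black-box contrast concrete, here is a minimal sketch in Python. The loan-screening scenario, the features, the toy data and the tiny perceptron are all hypothetical illustrations of my own, not drawn from any real system: the point is only that the first function's logic can be read and audited line by line, while the second's behaviour lives in learned numeric weights that silently absorb whatever patterns, including biases, are present in the training examples.

# Hypothetical illustration: an explicit rule versus a learned "black box"

# Traditional approach: the decision logic is written out and auditable.
def rule_based_approve(income, debt):
    return income > 50_000 and debt / income < 0.4

# Machine-learning approach: a tiny perceptron whose behaviour is encoded in
# numeric weights fitted to historical examples rather than in readable rules.
def train_perceptron(examples, labels, epochs=100, lr=0.01):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(examples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Invented historical decisions (scaled income, scaled debt ratio) -> approved?
# Any bias buried in this data is absorbed into the weights invisibly.
examples = [(0.9, 0.2), (0.4, 0.5), (0.7, 0.3), (0.3, 0.6)]
labels = [1, 0, 1, 0]

weights, bias = train_perceptron(examples, labels)
print("Explicit rule says:", rule_based_approve(60_000, 15_000))
print("Learned weights:", weights, bias)  # numbers, not reasons

One can inspect and dispute the explicit rule directly; to understand the trained version one can only probe it with test cases, which is the practical sense in which such systems are "black boxes."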

One specific example of an issue regarding the application of AI is robotic forms of warfare.  The question of whether machines should be handed even greater levels of discretion regarding the exercise of lethal judgment on our behalf is a very challenging ethical one, although I would note that such issues have been around since poison gas and delayed-action munitions.  So, I don’t think these types of questions are really specific to what we are now calling AI.

I would put most of these specific issues of development and use in the “scare the horses” category.  As in the case of the early automobile, when people didn’t know how we would manage issues like maintenance, traffic flow and driving etiquette, these now largely forgotten vitriolic debates were quickly resolved.  But as the case of the automobile would also suggest, we might well have done a better job of looking at infrastructural issues, like what would happen with all the exhaust fumes coming out of vehicles and how their operating requirements would influence us in re-shaping our cities.  So, I would tend to weigh the issues of energy and resource use more highly.

It is a simple reality of physics that the development of AI to the degree being predicted by its main advocates will require vast increases in access to energy, both for running the computer systems supporting AI processing and, soon, for creating and storing the vast amounts of artificially created training data that will be needed.  The proposed levels of advances in machine learning will require much more data than even our prodigious current Net use could ever supply.  But the gurus of AI, when asked about these more mundane energy issues, quickly flip into modes of magical thinking, speaking about fusion, mining asteroids, and the like.

So, we cannot escape the preeminent technological issue of our age regarding energy.  And the complexity of our energy systems raises the question of whether there are some technological activities that simply should not be done, or, to adapt the old scholastic tag, a posse ad esse non valet consequentia: just because something can be done does not mean it will, or should, be done.  Whether there are limits to the creation and application of technologies is not as deeply considered a question as it should be, although I would note the positive signs that this may be changing, illustrated by Canada’s leading role in the international treaty banning landmines and recent efforts to limit single-use plastics.

Finally, although the word “technology” is one of the most prominent terms of our age, the definition of this concept turns out to be a highly contested philosophical topic.  The fact that such a key term could be so philosophically confused and misunderstood itself stands as the main moral failing of our age.  As Marshall McLuhan so sagely put it, “the medium is the message.”  Interpreting the meaning of technology as such is the preeminent moral challenge of our time.

Bibliography

Topol, Eric. The Creative Destruction of Medicine: How the Digital Revolution Will Create Better Health Care (2013)

Crawford, Kate. Atlas of AI (2021)

Broussard, Meredith. Artificial Unintelligence (2019)

Gerrie, James. "A Plea for the Preservation of Early BASIC Game Programs." Canadian Journal of Media Studies/Revue Canadienne d'études des médias, 18(1), 2022, pp. 90-113.

Gerrie, James. "Software Preservation Insights on the Power of BASIC" in Game Science. Digital Humanities for Games and Gaming (Disk Book). Melanie Fritsch, Stefan Höltgen, Torsten Roeder, Editors. Weimar: PolyPlay, 2023.

Graeber, David. Bullshit Jobs: A Theory (2018)

Christian, Brian. The Alignment Problem: Machine Learning and Human Values (2020)

Harari, Yuval Noah. Homo Deus (2015)