Sunday 9 January 2022

Is the Rhetoric of AI just a Cover for Big Tech?

The Guardian published an opinion piece recently:

Are we witnessing the dawn of post-theory science?

Does the advent of machine learning mean the classic methodology of hypothesise, predict and test has had its day?

by Laura Spinney

https://www.theguardian.com/technology/2022/jan/09/are-we-witnessing-the-dawn-of-post-theory-science


This piece by Spinney strikes me as sensationalism: it exploits common anxieties and misperceptions about AI, loosely connected to some legitimate concerns about the declining momentum of scientific progress ("The End of Science" à la Horgan, and the challenges of "Big Science"). One remark in particular makes me question the author's judgement:
particularly a form of machine learning called neural networks, which learn from data without having to be fed explicit instructions.
The highlighted part is overstatement. All machine learning (including "neural networks") begins with basic assumptions, methods and specific goals ("end points"). These starting methods simply allow for their own refinement and alteration through the processing of large amounts of data, which the digital revolution has made available. This is simple feedback, which has been a part of programming as far back as Ada Lovelace. But it is made to sound so much more sensational when buzzwords like "neural network", "AI" and "machine learning" are used instead of mundane programming terminology. Thirty years ago, we used terms like "self-modifying code" for such mundane techniques of software development.
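To make the point concrete, here is a minimal sketch (my own illustration, not anything from Spinney's article) of what "learning from data" amounts to in practice: the programmer supplies the model form, the goal, and the update rule explicitly, and only the parameter values are refined by the data.

```python
# A minimal sketch: "machine learning" as ordinary feedback.
# The model form (linear), the goal (squared error) and the update
# rule (gradient step) are all explicit instructions written by a
# programmer; only the value of w is adjusted by the data.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # pairs (x, y) with y = 2x

w = 0.0    # starting assumption: a linear model y = w * x
lr = 0.05  # step size, chosen by the programmer

for _ in range(200):
    # feedback loop: nudge w in the direction that reduces the error
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= lr * grad

print(round(w, 3))  # settles near 2.0, the slope hidden in the data
```

Nothing here is mysterious: it is the same feedback idea running at scale that, dressed in the vocabulary of "neurons" and "learning", gets reported as software writing itself.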

Pieces like this one indicate to me that there is something very strange at work in the lives of computer programmers and software companies today that has led them to develop this rhetoric about AI, machine learning and neural networks. I worry that this "sexing up" of software engineering, in the face of the failure of real AI, is being exploited by big IT as a cover for getting lots of people to buy into the inanities of our largely unregulated tech industry.


By even using the terms "AI" and "machine learning" instead of more accurate descriptors like "clever coding", "data mining" or "automated programming", members of the public have already ceded the issue of whether these applications should be embraced or avoided, legally limited or left entirely for users to guide themselves. Who can be opposed to the application of intelligence? Who would want to limit "a learner"? The reality is that these terms are mere marketing hype, which non-programmers should refuse to use.