AI – There’s something there, but don’t believe all the hype!

“AI” is hot on the “hype cycle”

If you haven’t been inundated with promises, marketing hype and conspiracy theories about “artificial intelligence” recently, then you’d best check your pulse immediately! It’s everywhere: it’s driving our cars, making us smarter and safer, taking over our mundane tasks… or possibly taking our jobs away?

If you’re unfamiliar with the Gartner Hype Cycle, here is what it looks like:

Gartner Hype Cycle

In my opinion, we are just about at the “peak of inflated expectations” right now, and I encourage you to take almost any claims of “AI” with an enormous grain of salt! Let me explain why…

“Artificial Intelligence”: What it is and is not

A recent article in Wired featured an interview with Kate Crawford, a USC professor and researcher at Microsoft, and author of one of the definitive books on AI, “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence”. In her opinion, “the name is deceptive: AI is neither artificial nor intelligent.” She goes on to state:

“AI is made from vast amounts of natural resources, fuel, and human labor. And it’s not intelligent in any kind of human intelligence way. It’s not able to discern things without extensive human training, and it has a completely different statistical logic for how meaning is made. Since the very beginning of AI back in 1956, we’ve made this terrible error, a sort of original sin of the field, to believe that minds are like computers and vice versa. We assume these things are an analog to human intelligence, and nothing could be further from the truth.”

Kate Crawford

There is a lot to unpack in that paragraph, but ultimately you can seize on the concept that all “AI” is the product of the data it is fed. It adheres to the GIGO concept: garbage in, garbage out. We’ve seen many examples of this as we have been ascending the steep slope from the technology trigger to the peak of inflated expectations on the hype cycle! For example, AI image generators have been found to be racially and sexually biased, because the images they were trained on were overwhelmingly of white males. Many other examples have appeared in industry press outlets, but these sources are off the beaten track for the average person compared to traditional news outlets that are constantly banging the AI drum.

Let’s call it “machine learning” instead!

I find the term “machine learning” (ML) to be a much clearer description of what is really out there versus “artificial intelligence” (AI), primarily because of the term “learning”: an ML system must first be taught something, and the validity/quality of the results produced by such a system is directly related to the quality of that input! Add to that the fact that the thing learning is a “machine”, as opposed to some sentient artificial being, and I think the context of what this really is becomes much clearer. Sure, they are just words, but let’s choose them wisely!
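To make that concrete, here is a minimal sketch of “learning from input” using a 1-nearest-neighbour rule. It’s plain Python with data I invented for illustration, not any real ML library. The algorithm is identical in both runs; only the quality of the training labels changes:

```python
# GIGO in miniature: a 1-nearest-neighbour "learner" can only be as
# good as the labelled examples it is taught with.

def nearest_label(examples, point):
    """Return the label of the training example closest to `point`."""
    return min(examples, key=lambda ex: abs(ex[0] - point))[1]

# (temperature, label) pairs -- invented for illustration
good = [(1.0, "cold"), (2.0, "cold"), (30.0, "hot"), (31.0, "hot")]
bad  = [(1.0, "hot"),  (2.0, "hot"),  (30.0, "cold"), (31.0, "cold")]  # garbage labels

print(nearest_label(good, 29.0))  # -> "hot": sensible data, sensible answer
print(nearest_label(bad, 29.0))   # -> "cold": same machine, garbage in, garbage out
```

Nothing about the “machine” changed between those two runs; only what it was taught.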

In reality, ML has been around for quite some time, and if we look for simple things, we can see how it is a part of everyday life. Here’s an example: recently, I was out of town at a conference. Around 7PM, “Siri suggestions” on my iPhone popped up a list of local restaurants in the city I was in. This isn’t magic: “Siri makes suggestions for what you might want to do next, such as call into a meeting or confirm an appointment, based on your routines and how you use your apps”. Siri knew I was out of town (based on GPS), and knows that I usually eat dinner around 7PM, so it made this “suggestion”.
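Stripped of all the engineering, the logic behind such a suggestion can be as plain as the sketch below. This is entirely my own invention, not Apple’s code; the 50-mile threshold and the learned dinner hour are made-up placeholders:

```python
from datetime import datetime

def suggest(now, miles_from_home, usual_dinner_hour=19):
    """Combine a learned routine with current context to make a suggestion."""
    out_of_town = miles_from_home > 50.0            # crude GPS-based test
    around_dinner = abs(now.hour - usual_dinner_hour) <= 1
    if out_of_town and around_dinner:
        return "Restaurants near you"
    return None                                     # nothing worth suggesting

print(suggest(datetime(2024, 5, 1, 19, 5), miles_from_home=300.0))
# -> Restaurants near you
```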

My photo library can look at faces in incoming images and attach a label for a specific person if the incoming photo matches one already stored with that label well enough. Your email client searches incoming mail and can suggest adding an event to your calendar or adding a new contact when it sees certain patterns of fields in the email body. There are tons of similar examples that we almost take for granted now!
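The photo-library case boils down to a similarity comparison. In a real app the face is first converted into a learned numerical “embedding”; the toy vectors and the 0.95 threshold below are numbers I invented just to show the shape of the idea:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Labelled face embeddings already in the library (invented numbers)
labelled = {"Alice": [0.9, 0.1, 0.3], "Bob": [0.1, 0.8, 0.4]}
incoming = [0.85, 0.15, 0.35]   # embedding of a face in a new photo

best = max(labelled, key=lambda name: cosine(labelled[name], incoming))
if cosine(labelled[best], incoming) > 0.95:   # "matches well enough"
    print(f"Suggest label: {best}")           # -> Suggest label: Alice
```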

Will AI destroy the world, or take my job away?

The short answer is no, at least not in its present form and for the foreseeable future. While there are lots of examples of AI technology with poor outcomes (I’m looking at you, Tesla!), these are generally the result of poor technology foisted onto uneducated consumers with false promises. One advertisement I’ve seen on TV really makes my blood boil: a guy driving a 7,000-pound pickup truck, towing a trailer with five ATVs loaded on it, taking his hands off the wheel while the “Super Cruise” AI technology drives for him. What could possibly go wrong? If this driver had a front-tire blowout with his hands off the wheel, I would not want to be anywhere near him! If you’ve ever had a tire blow out while driving, you know why I say this.

The media hype around AI has created this awful sense among the unwashed masses that “machines will do everything for us” in the near future. The doomsayers seize on this and tell us that AI will eventually kill us, but this is just sensationalism (and click-bait). A great article on the Electronic Frontier Foundation website, The Tech Apocalypse Panic is Driven by AI Boosters, Military Tacticians, and Movies, talks about why this is a fallacy, and once again harps on the point that “AI does what we teach it to”: bad outcomes are the result of bad training.

This article mentions a term coined by Emily Bender, a computational linguist at the University of Washington: she has called AI, and especially text-based applications like LLMs, “stochastic parrots”. These systems simply echo back the things we taught them, as “determined by random, probabilistic distribution.”
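You can build a (very small) stochastic parrot in a dozen lines. The sketch below is my own toy illustration, not Bender’s work: it counts which word follows which in a tiny training text, then “writes” by randomly sampling from that distribution. It can only ever parrot back what it was fed:

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Learn the distribution: which words follow each word, and how often?
follows = defaultdict(list)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev].append(word)        # repeated words act as sampling weights

word, output = "the", ["the"]
for _ in range(8):
    options = follows.get(word)
    if not options:                   # this word never had a successor
        break
    word = random.choice(options)     # the random, probabilistic choice
    output.append(word)

print(" ".join(output))               # e.g. "the cat ate the mat the cat sat on"
```

Every “sentence” it produces is stitched together from pairs it has already seen; there is no understanding anywhere in the loop.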

The unintended consequences of AI hype

So, even if we are content in understanding that AI can be used for good purposes, and that the training of the system is key, we also need to understand the unintended consequences of blindly implementing AI technology. Does solving this problem with AI actually benefit human productivity, or is it just the technological equivalent of a skateboard ollie?

To that end, we need to do a better job of educating users on what AI can and can’t do, so that they are not just blindly using it and accepting the answers given by the system without question. We have already seen Large Language Models (LLMs) produce hallucinations, making up quotations, citations and even events that never occurred! It reminds me of being back in engineering school, when we were first allowed to use calculators (yes, I’m dating myself!): we first had to learn to do the calculations “by hand” before using the calculator. That way, when we saw the number coming out of the calculator, we had a reasonable feel for whether it was even “in the ballpark”!

If we just blindly accept the results from LLMs, then we begin to make up alternate realities! This has already been seen in scientific research, which should be solely the domain of facts! Alarms are being sounded about the misuse of LLMs in research: see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10879553/, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10025985/, https://www.nature.com/articles/s41562-023-01744-0 and others.

Last, but certainly not least, we need to understand the environmental consequences of AI, and particularly LLMs. Behind every new LLM released into the world is an enormous server farm (or farms), consuming vast amounts of electricity and water (for cooling). Are we willing to pay for faster (sometimes wrong) answers with a significant increase in greenhouse gases? While nations strive to move to EVs, whatever gains are being made there are being offset by server farms mining cryptocurrencies and servicing AI. We are now in a new nation-state arms race to build the biggest LLMs and AI applications without regard for the consequences. As an example, Microsoft recently released its 2024 Sustainability Report, noting a 29% increase in global emissions, primarily due to “new technologies, including generative AI.”

AI: move forward with caution!

I am by no means “anti-AI”; far from it. My only reason for bringing up the subject is that I feel we need to take a deep breath and think carefully about how and when we use it. Just as with computing and the internet before it, we need to understand when a new technology will really help us, and when it may actually be counterproductive. Automating mundane tasks to free up the human mind for decision-making is a good use of technology. But “technology for technology’s sake” is never a good direction. Use AI wisely, and when it makes sense, making sure to think about unintended consequences before diving in.

We’ll all meet again at the “trough of disillusionment” in another year or so 😉

