We’re still at “The peak of inflated expectations”:
Since my original post on AI (AI – There’s something there, but don’t believe all the hype!), I have continued to follow the topic with great interest. Unfortunately, I don’t think we’ve even begun to slide down the hype curve toward reality yet, with the media and Wall Street continuing to dominate with rosy projections!

Current political and news coverage portrays AI as a new battleground for nation-state supremacy, with constant announcements of financial initiatives and posturing by companies and nations for leadership in the field. This is much like the semiconductor industry in its infancy: since AI will be a large global revenue driver and important for nations’ long-term economic growth, government initiatives in these nations are certainly warranted.
However, we’re still operating at an inflated hype level that seems greatly “unencumbered by real data”! On January 27th, tech stocks fell into a tailspin after China’s DeepSeek announcement: DeepSeek sparks AI stock selloff; Nvidia posts record market-cap loss. The fact is, because this announcement comes out of China, we know very little factually about what it all means (what chips were really used, how the model was created, etc.), but it is being treated as a “sky is falling” moment by the media and politicians. We need to know more first!
An article in Forbes, Panic Over DeepSeek Exposes AI’s Weak Foundation On Hype, goes further to describe how the hype is leading to some predicting we will soon achieve AGI (Artificial General Intelligence), whereby machines could mimic human cognitive capabilities:
Given the audacity of the claim that we’re heading toward AGI – and the fact that such a claim could never be proven false – the burden of proof falls to the claimant, who must collect evidence as wide in scope as the claim itself. Until then, the claim is subject to Hitchens’s razor: “What can be asserted without evidence can also be dismissed without evidence.”
AI acceptance and simultaneous skepticism:
An interesting observation was recently published online in the Journal of Marketing, titled Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity, and summarized by the authors in Knowing less about AI makes people more open to having it in their lives – new research. The basic premise of the authors’ research is that:
People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link.
This is an unexpected result, as most people would probably assume that the tech-savvy who understand how AI works would be the most likely to adopt it. The authors contend that this is the result of the “magical” nature of some AI:
The reason behind this link lies in how AI now performs tasks we once thought only humans could do. When AI creates a piece of art, writes a heartfelt response or plays a musical instrument, it can feel almost magical – like it’s crossing into human territory.
Of course, AI doesn’t actually possess human qualities. A chatbot might generate an empathetic response, but it doesn’t feel empathy. People with more technical knowledge about AI understand this.
Contrast this “magical thinking” with this article: Americans Are Uncomfortable with Automated Decision-Making. A Consumer Reports nationwide survey found that the majority of people are “uncomfortable” with decisions that affect their lives being made using AI:
The survey findings indicate that people are feeling disempowered by lost control over their digital footprint, and by corporations and government agencies adopting AI technology to make life-altering decisions about them.
In particular, the survey found that a majority are uncomfortable specifically with AI being used in job interview processes (resume screening, etc.), banks using AI to screen loan applications, video surveillance systems with facial recognition, and the medical field using AI for diagnosis and treatment planning. How do we reconcile these two seemingly opposing opinions of AI: the “magical thinking” versus the “don’t make decisions for me using it”?
AI projects: low success rates and backlash:
A RAND study published in August 2024, The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed, stated that:
By some estimates, more than 80 percent of AI projects fail. This is twice the already-high rate of failure in corporate information technology (IT) projects that do not involve AI.
The study outlined five key reasons for this high failure rate, citing as the primary one management’s belief that AI is some sort of “silver bullet” that will magically transform their company. Leaders fail to understand how the technology can be applied to their business, the resources required, and how long it will take. At a base level, they do not understand which problems in their organization AI is applicable to, or how to implement the technology. In addition, “many AI projects fail because the organization lacks the necessary data to adequately train an effective AI model.”
We have also begun to see some backlash against the use of AI, particularly in the publishing field. In one example, the editorial board of Elsevier’s Journal of Human Evolution resigned en masse over the use of AI to replace copy editors in the production process, without informing either the editors or the authors:
AI processing continues to be used and regularly reformats submitted manuscripts to change meaning and formatting and require extensive author and editor oversight during proof stage.
Putting aside the possibility of AI actually altering the meaning of an article, which is bad enough, this use of AI in the production process violates the journal’s own AI policies: “Authors should be informed at the time of submission how AI will be used in their work.” Authors would not want their work altered by AI in any way that might affect the validity of the actual science. This mirrors the consumer skepticism noted above about AI being used for life-altering decisions.
Bias amplification and model collapse:
In Large Language Models (LLMs), “bias amplification” occurs when these systems “amplify biases present in their training data, resulting in the generation of biased content that perpetuates societal prejudices and stereotypes. This occurs because LLMs learn from the biases embedded in the large datasets they are trained on.” This is a simple corollary of the long-standing GIGO (garbage in, garbage out) principle in computer science. Many different types of bias amplification can occur, and some of the most egregious occurrences have even made it into the public media, though only when they were easily discoverable.
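To make the feedback loop concrete, here is a minimal, purely illustrative Python sketch (my own toy, not taken from any study cited here): a “model” that simply estimates a label distribution from its training data and then, like many generative systems, slightly over-produces its most probable output. When each round’s output becomes the next round’s training data, a mild 55/45 skew snowballs:

```python
import random

random.seed(42)

def train(data):
    """'Training' here is just estimating P(label = 1) from the data."""
    return sum(data) / len(data)

def generate(p, n, sharpen=1.5):
    """Sample n labels, mildly exaggerating the majority preference.
    The sharpen > 1 exponent is an assumption standing in for a model's
    tendency to over-weight its dominant patterns."""
    q = p**sharpen / (p**sharpen + (1 - p)**sharpen)
    return [1 if random.random() < q else 0 for _ in range(n)]

# Start from training data with a mild 55/45 skew toward label 1.
data = [1] * 55 + [0] * 45
for generation in range(6):
    p = train(data)
    print(f"generation {generation}: P(label 1) = {p:.2f}")
    data = generate(p, 100)   # the next round trains on model output
```

The exact numbers are arbitrary; the point is only that a small initial bias compounds once model output is fed back in as training data, which is GIGO with a feedback loop attached.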
The phenomenon of “model collapse” is often lumped together with “bias amplification,” but really should be treated separately. Model collapse occurs when AI models degrade over time because they are trained on the output of other AI models:
The effects of model collapse — long-term poisoning of language model data sets — has been occurring since before the mainstreaming of technology such as ChatGPT. Content farms have been used for years to intentionally influence search algorithms and social networks to make changes in their valuation of content. For example, Google devalues content that appears to be farmed or low-value, and it focuses more on rewarding content from trustworthy sources such as education domains.
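The mechanism can be shown in miniature. The sketch below is an admittedly crude toy, not how production LLMs behave: the “model” is just a Gaussian fitted to data, and “inference” under-samples the distribution’s tails (a stand-in assumption for generative models losing rare content). Retraining each generation on the previous generation’s output makes the distribution visibly narrow, which is model collapse in a nutshell:

```python
import random
import statistics

random.seed(0)

def fit(samples):
    """'Train' a model: estimate the mean and standard deviation."""
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mu, sigma, n):
    """'Inference': sample the fitted model, but drop outputs from the
    tails, mimicking a generative model that rarely emits rare content."""
    out = []
    while len(out) < n:
        x = random.gauss(mu, sigma)
        if abs(x - mu) < 2 * sigma:   # tail outputs never get emitted
            out.append(x)
    return out

data = [random.gauss(0.0, 1.0) for _ in range(500)]   # generation 0: real data
for gen in range(1, 9):
    mu, sigma = fit(data)
    print(f"generation {gen}: mean = {mu:+.3f}, stdev = {sigma:.3f}")
    data = generate(mu, sigma, 500)   # retrain on the model's own output
```

With the ±2σ cutoff standing in for the loss of rare content, the fitted standard deviation shrinks by roughly 12 percent per generation, so after eight generations the “model” has forgotten most of the original distribution’s spread.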
Negative environmental effects of AI are being ignored:
What many find most disturbing about the growth of AI is the near-total lack of discussion of its environmental impact. As previously stated, AI, and LLMs in particular, require exorbitant amounts of computing and storage resources to function. This has created an insatiable demand for electricity and cooling that is unprecedented in scale, and that threatens to make AI one of the largest contributors to greenhouse gas emissions.
Those old enough to remember the Three Mile Island accident may be slightly discouraged to hear that Microsoft will be bringing the mothballed nuclear plant back online in 2028 to help power its AI efforts. In fact, the nuclear energy industry in general is forecasting large increases in demand driven by AI, and the AI companies are embracing nuclear as “clean energy” that can support their future “carbon-neutral” or “carbon-negative” publicity promises:
Nuclear power is attractive to tech companies because it provides low-carbon electricity round-the-clock, unlike solar and wind, which run intermittently unless coupled with a form of energy storage.
Places like “Data Center Alley” in Virginia are mostly powered by nonrenewable energy sources such as natural gas, and energy providers are delaying the retirement of coal power plants to keep up with the increased demands of technologies like AI. Data centers are slurping up huge amounts of freshwater from scarce aquifers, pitting local communities against data center providers in places ranging from Arizona to Spain. In Taiwan, the government chose to allocate precious water resources to chip manufacturing facilities to stay ahead of the rising demands instead of letting local farmers use it for watering their crops amid the worst drought the country has seen in more than a century.
The electrical appetite of AI is forcing AI companies to forge deals to buy exclusive access to some of the newest renewable energy projects:
For the past five years, tech companies have been on an increasingly frenzied shopping spree for renewable contracts known as power purchase agreements (PPAs), which can enable data center operators to reserve power from a wind farm or solar site before the projects have even been built.
Amazon, for example, recently signed a deal purchasing more than half of the projected power to be produced by the Moray West offshore wind farm in Scotland. Initially, the project was supposed to supply power to 1.3 million homes, but that number will be roughly cut in half by Amazon’s diversion.
Further, the electrical grid itself is becoming a bottleneck:
Yet renewables still need to run through the electricity grid, which is becoming a bottleneck—especially in Europe, as a surge of renewable producers try to connect to feed green transition demand across a multitude of sectors. “We’re going to run into energy constraints,” Meta CEO Mark Zuckerberg predicted on a podcast in April. At Davos this year, OpenAI CEO Sam Altman also warned that the status quo was not going to be able to provide AI with the power it needed to advance. “There’s no way to get there without a breakthrough,” he said at a Bloomberg event.
Even the AI companies recognize that an energy breakthrough is needed to keep up with the pace of development. This is forcing some new data centers to build their own off-grid energy supplies, and has led to moratoriums on new data center construction in some countries.
There is no plan for how to deal with the economic and environmental aspects of supplying electricity for AI, and the issue is not even being actively discussed outside of a few small circles. There is no free lunch, and sooner or later we will all pay if this development is left to continue unchecked.
The AI train is barreling forward faster than we can lay down tracks!
There is absolutely no doubt that AI will be an enormous part of our future, and that there are many positive aspects to its implementation. However, the technology is so new, and so much is unknown, that we need to stop seeing it as a panacea, and perhaps slow down until we fully understand the potential effects of each implementation.
Where there is large financial opportunity, you will find lots of “magic elixirs” being peddled, and AI is no exception. This is the “wild west” all over again, where the mantra seems to be “ready, fire, aim”, rather than understanding and planning first. It’s going to be a rocky ride to reach the “plateau of productivity” in the hype cycle.
While I’m not a huge fan of regulation, I do believe that AI is an area where regulations of some sort will be needed. At the very least, let’s require that anything AI-generated be prominently labeled as such! But the international community needs to be involved, so that this “nation-state race to AI dominance” doesn’t quickly turn into a new “cold war,” with all consequences pushed aside.
Sorry for the “doom and gloom,” but civilization itself is at stake here in the long term…