The AI industry is going to crash, destroying the world economy 💀 Will it affect 3D printing?
RECODE.AM #32
Today’s article will be a bit different than usual, because additive manufacturing will play a somewhat secondary role here. This piece is about the AI industry and the very real possibility that we may soon witness a collapse larger than anything seen before.
Damn - compared to the current problems of the major players in this sector, the Desktop Metal and Nexa3D sagas now seem almost amusing. That was a kindergarten-level bubble…
Artificial intelligence is permeating nearly every sector of the economy, from finance and marketing to medicine, logistics, and manufacturing.
In the 3D printing industry, it promises automated design, optimization of production processes, cost reduction, and faster implementation (blah, blah, blah).
At the same time, very serious doubts are growing around AI itself. More and more commentators and analysts are asking whether we are witnessing a genuine technological revolution, or rather a gigantic speculative bubble.
Possibly the largest in the history of global finance?
Recently, I came across phenomenal articles by Will Lockett, published on his Substack, which mercilessly expose the foundations of the contemporary AI boom and pose deeply uncomfortable questions about its future.
Lockett dismantles the dominant narrative according to which artificial intelligence is supposed to revolutionize everything, become the next mega-industry, and usher in a new, fully automated industrial revolution.
He shows that this story, fueled by technological and financial circles, has been used to justify massive investments and debt, effectively placing a large part of the Western economy on a single bet.
The problem is that real-world data increasingly show that these promises are not being fulfilled in practice.
Lockett cites the example of Microsoft, which since 2020 has invested hundreds of billions of dollars in AI infrastructure and in OpenAI. The flagship outcome of this spending is Copilot, an agent-based AI system integrated into Windows. Despite enormous financial outlays, the product has turned out to be a commercial failure.
Microsoft has been forced to significantly lower its sales targets, and revenues from the entire AI campaign are negligible relative to the costs incurred. The author emphasizes that this is not a matter of minor corrections, but a signal that the technology into which astronomical sums have been poured simply does not work as promised.
The source of the problem is the low usefulness of generative tools. Studies cited by Lockett show that so-called agentic AI systems fail to correctly perform even simple tasks in most cases.
Similarly, ChatGPT, OpenAI’s flagship product, suffers from a chronic problem of hallucinations, meaning the generation of incorrect, fabricated, or misleading answers.
Even the newest and most expensive models are not free from this phenomenon, which in practice means that users must spend more time verifying results than they would need to complete the task themselves.
The market consequences are obvious. Only a small fraction of users decide to pay for access to tools like ChatGPT, and even high subscription fees do not cover the costs of maintaining them.
Moreover, demand for AI in the corporate sector is not only failing to grow, but is actually declining.
Companies are abandoning pilot implementations, and studies by institutions such as MIT, BCG, and Forrester consistently show that the vast majority of AI projects deliver no measurable business value.
As a result, more companies are postponing planned spending on artificial intelligence, undermining the foundations of the entire industry, which is largely built on expectations of a corporate adoption boom.
Lockett goes even further and delves into the financial innards of OpenAI, which he describes as the epicenter of the AI bubble.
The numbers presented are devastating. Despite billions in revenue, the company generates even larger losses, and the rate at which it burns capital is unprecedented.
OpenAI is losing many times more than it earns, and forecasts indicate that annual losses could reach tens of billions of dollars. The key issue is the structure of financing, increasingly based on debt and instruments such as convertible bonds, which merely delay the moment when real costs must be reckoned with.
Lockett also analyzes in detail OpenAI’s plans for further infrastructure expansion.
The construction of gigantic data centers, worth up to a trillion dollars in total, is supposedly meant to lead to a qualitative breakthrough in AI. However, the author shows that the operational costs of such investments are enormous and vastly exceed the construction costs.
Even under very optimistic revenue assumptions, OpenAI is unable to generate an income stream that would justify such a scale of spending.
Worse still, the company itself admits in its own research that the hallucination problem is a structural feature of generative models and cannot be eliminated simply by scaling data and computing power.
At this point, a key question arises about the significance of these problems for the 3D printing market…
The additive manufacturing industry has been using computational algorithms for years for design, simulation, and process control, and AI is seen as the next step in this evolution.
Promises include automatic geometry generation, intelligent support for engineers, self-optimizing print parameters, and predictive quality control.
However, Lockett’s analysis casts a long shadow over these promises.
3D printing is an area where errors have direct, physical consequences. A faulty model, poorly chosen parameters, or an incorrect algorithmic decision can lead to scrap, failures, and in industrial applications even safety hazards.
In this context, the hallucination problem, so downplayed in the marketing of generative AI, becomes absolutely critical. A system that “often gets things wrong” may be acceptable for generating marketing copy, but it is unacceptable in engineering processes.
At the same time, it is worth noting that AI used in 3D printing does not have to, and should not, rely on the same paradigms as generative language models.
Traditional machine learning algorithms, systems based on process data, predictive models, and optimization algorithms operate in a far more controlled environment.
They are trained on specific, verified data and evaluated based on precise, measurable results.
The problem is that the hype around generative AI often leads to unreflective attempts to transplant these technologies into areas where they simply do not work.
A potential AI market collapse, as described by Lockett, could therefore have a dual impact on 3D printing.
On the one hand, there is a risk that the bursting of the bubble will lead to reduced funding for all projects labeled “AI,” including those that are sensible and engineering-driven. This could slow the development of tools that genuinely improve the quality and efficiency of additive manufacturing.
On the other hand, the collapse of unrealistic promises may clear the market of marketing noise and force the industry toward a more pragmatic approach.
If capital and attention shift away from expensive, unreliable generative models toward specialized, deterministic solutions, 3D printing could benefit.
Instead of “magical” AI that supposedly does everything for the engineer, the focus would move to tools that actually solve concrete problems: process stability, repeatability, quality control, and material optimization.
In this sense, an AI crisis need not mean a crisis for 3D printing, but rather the end of a certain illusion.
The conclusions drawn from Will Lockett’s texts are therefore a warning for the 3D printing industry, but not a verdict. A potential collapse in the generative AI market may hurt, especially where investments were based purely on hype.