Operator vs. algorithm – where does automation of process decisions in AM actually end?
RECODE.AM #39
Today, print preparation software does things that just five years ago required an experienced engineer and several hours of work. It generates intelligent supports, optimizes orientation, simulates distortion, selects parameters. You click “Prepare” and wait.
Sounds great. And that’s where the problem begins.
Let’s start with what the algorithm actually does very well.
It’s tireless. It doesn’t have a bad day. It won’t skip a step in the procedure because it just came back from lunch.
For repetitive tasks - nesting identical geometries, generating reports, validating input files - it beats humans hands down in every respect. Faster, cheaper, and without errors caused by inattention.
That’s undeniable and not worth arguing about.
But there’s a category of decisions where the algorithm starts to stumble. And worse - it often doesn’t know it has stumbled.
Let’s imagine a classic scenario. The software analyzes the geometry of an aerospace part and proposes an orientation that minimizes support volume. The calculations are correct. The thermal simulation looks fine. The algorithm is satisfied with itself.
An experienced operator looks at the same thing and sees something else. They see that in this orientation, a critical functional surface will fall into a zone where this particular machine - this one, not the theoretical one - has shown slight density deviations for months. It’s not in any database. It’s in the operator’s head, because they handled three complaints with the exact same error signature.
The algorithm doesn’t have that knowledge. And it can’t have it, because nobody ever wrote it down.
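The gap can be made concrete with a minimal sketch. Everything here is hypothetical - the zone names, the numbers, the idea of a "suspect zones" set - but it shows the structural difference: the optimizer scores orientations purely on support volume, while the operator's uncodified machine knowledge, if anyone ever wrote it down, would act as a filter in front of that same objective.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Orientation:
    name: str
    support_volume_cm3: float    # the only thing the algorithm optimizes
    critical_surface_zone: str   # where the functional surface lands in the build chamber

# Hypothetical per-machine knowledge that today exists only in the
# operator's head: zones where THIS machine has shown density deviations.
MACHINE_7_SUSPECT_ZONES = {"rear-left"}

def algorithm_pick(candidates):
    """The software's view: minimize support volume, nothing else."""
    return min(candidates, key=lambda o: o.support_volume_cm3)

def operator_pick(candidates, suspect_zones):
    """Same objective, but orientations that place the critical surface
    in a known-suspect zone are rejected first. Falls back to the full
    list if the filter would leave nothing."""
    safe = [o for o in candidates if o.critical_surface_zone not in suspect_zones]
    return min(safe or candidates, key=lambda o: o.support_volume_cm3)

candidates = [
    Orientation("A", support_volume_cm3=12.0, critical_surface_zone="rear-left"),
    Orientation("B", support_volume_cm3=15.5, critical_surface_zone="center"),
]

print(algorithm_pick(candidates).name)                          # A: least support
print(operator_pick(candidates, MACHINE_7_SUSPECT_ZONES).name)  # B: A is filtered out
```

The point is not that this filter is hard to code - it is trivially easy. The point is that its contents are nowhere in the data the software sees.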
This gets to the core of the issue. AM systems collect more and more process data, but operator knowledge is of a different kind. These aren’t numbers in a table. It’s contextual understanding - this specific machine, this batch of material, that customer who always questions surface roughness but never mentions it upfront.
This is called tacit knowledge - hidden, uncodified, existing only in human experience. And it is absolutely critical for AM production quality.
The problem is that modern software increasingly pushes the operator out of the decision-making loop. It doesn’t do this maliciously. It does it because it was designed to “simplify” and “automate.” Every new feature is another step toward a single-button workflow.
Sound familiar? It’s exactly the same pattern seen in aviation autopilots.
For years, the aviation industry systematically delegated more and more decisions to machines. Pilots became better trained interface operators and, gradually, worse aviators. The Air France 447 disaster is still studied as a textbook example of what happens when a human suddenly has to take control but has lost the intuition needed to exercise it.
AM hasn’t reached that point yet. But the direction is uncomfortably similar.
So where should the boundary lie?
Automation absolutely belongs wherever the decision is repeatable, well-defined, and its outcome can be verified without invoking contextual judgment. Nesting, basic support generation, geometry validation, reporting - let the algorithm handle those. Better than humans and without debate.
But decisions involving trade-offs - between quality and lead time, between process safety and cost, between what the model says and what experience with a specific machine suggests - those should remain in human hands. Not because the algorithm is bad, but because responsibility in such decisions needs an identifiable owner.
There is another dimension to this issue that is discussed far too rarely.
When the operator is an active participant in decision-making, they learn. They build the same tacit knowledge that a year from now will allow them to spot a problem before it causes damage. When the operator merely supervises the algorithm - clicking “approve” without understanding why - that knowledge never forms.
It evaporates from the organization with every experienced employee who leaves.
Automation that removes humans from thinking is not efficiency. It is borrowing time from the future.
Good AM software should not replace the operator. It should amplify their ability to make better decisions - faster, with more data, and with less risk of oversight. That is the fundamental difference between a tool and a substitute.
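What "tool, not substitute" could look like at the interface level is easy to sketch. This is an illustration, not a real product API: the software prepares the proposal, but sign-off is refused unless it carries an identifiable owner and a written rationale - which is exactly the tacit knowledge that otherwise evaporates.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Decision:
    proposal: str
    approved: bool
    owner: str
    rationale: str
    timestamp: str

def record_decision(proposal: str, approved: bool, owner: str, rationale: str) -> Decision:
    """Gate the one-button workflow: no anonymous, no silent approvals.
    The rationale field is what turns a click into a record the next
    operator can actually learn from."""
    if not owner.strip() or not rationale.strip():
        raise ValueError("decision needs an identifiable owner and a rationale")
    return Decision(proposal, approved, owner, rationale,
                    datetime.now(timezone.utc).isoformat())

d = record_decision(
    proposal="orientation B (+3.5 cm3 support vs. optimum)",
    approved=True,
    owner="j.kowalski",
    rationale="machine 7 shows density deviations rear-left on critical surfaces",
)
```

A plain `approve()` button is cheaper to build and faster to click. This version costs the operator thirty seconds - and leaves the organization something when that operator leaves.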
For now, the industry too often chooses the latter path.
Because it’s easier to sell.