AI makes everything bland - does it affect designing parts for AM?
RECODE.AM #33
Artificial intelligence is becoming increasingly proficient at generating text, images, and video.
Yet the more of this content we read and watch, the more clearly we notice the same pattern: smoothness devoid of character.
Content created by AI is correct, safe, and “acceptable,” but rarely vivid, risky, or truly memorable. It lacks tension, concreteness, and lived experience - things that arise from embodiment, context, and the creator’s personal involvement.
Of course, if we prepare a prompt that very clearly specifies the required traits of the text, image, or video, it is highly likely we will get them. But AI left to itself will generate something bland, politically correct, corporate, or simply boring.
This raises the question of whether the same limitations - an aversion to controversy, avoidance of extremes, and a preference for statistical averageness - also matter beyond the realm of cultural content.
Do they carry over into parametric design and the creation of 3D geometry? Will AI-driven 3D printing also become an averaged average?
Limitations of AI writing according to Alberto Romero
In a December article titled “10 Signs of AI Writing That 99% of People Miss,” Alberto Romero argued that popular methods for detecting AI-generated text are superficial and increasingly ineffective.
Detecting AI-generated content requires more than the now-standard tricks: spotting em dashes, decorative icons no one would ever bother to search for and insert manually, or familiar stylistic templates.
Romero proposes analyzing deeper layers of a text: the structure of thought, its relationship to experience, and the way the narrative is carried forward.
His central thesis is this: AI does not so much imitate bad style as it reveals a fundamental lack of rootedness in the world.
This is a very valuable observation that applies to all areas of AI generation - including 3D models. Romero points out that language models operate with language in a way detached from experience. This leads to a dominance of abstraction over concreteness.
AI texts are full of general, “important-sounding” concepts, but they are hard to translate into mental images. They do not arise from observation, but from averaged linguistic maps.
It’s like a person born blind trying to describe the color yellow.
Another major limitation is the “harmlessness” filter, which is the result of reinforcement learning from human feedback.
As a result, AI avoids sharp judgments, strange associations, and emotional language, replacing them with neutral, politically correct, corporate vocabulary. The text becomes safe, but also stripped of human emotion.
Again - for an experienced user it is not difficult to write a prompt that will generate text so racist that even the Grand Wizard of the Ku Klux Klan would feel uncomfortable reading it. The point is that to achieve this you have to “outsmart” the AI - by default, it will avoid such content on its own.
Another issue is the preference for Latinate words, which are statistically associated with authority.
This causes AI language to get stuck in a “business casual” register, without natural transitions between elevated and colloquial styles.
At the sentence level, Romero points to the phenomenon of “sensing without sensing”: AI can correctly combine sensory descriptions, but it does not understand their physical consequences, because it has no body or perspective. Once again, we return to the problem of blind people describing colors.
Similarly, personifications and metaphors often come off as awkward, because they are generated as stylistic effects rather than as the result of a genuine need for meaning.
Another important signal is structural hedging: AI builds sentences that immediately qualify their own claims, so as not to offend anyone and not to be wrong.
At the level of the entire text, this leads to a treadmill effect - many words, little progress. The text circles around the topic without moving toward a clear thesis or conclusion.
In addition, AI does not operate with subtext: everything is spelled out, explained, and closed off, because the model does not trust either the reader or itself.
AI prefers to explain a joke it has just told, just to be sure we laugh.
Although in the end it does not matter, because AI does not understand the concept of a sense of humor; it only knows what the term means.
Romero emphasizes that none of these signs is decisive on its own - AI output is becoming easier and easier to mask - but together they create a characteristic “smell” of text devoid of an internal point of view.
Do the same limitations apply to 3D design and AM?
Transferring these conclusions to the field of 3D design leads to interesting, but not obvious, insights.
At first glance, geometry does not require “character” - what matters are function, strength, and compatibility with the manufacturing process. However, in 3D printing, and especially in generative and parametric design, formal decisions are inseparably linked to a design philosophy.
AI, just as in writing, operates on averaged solutions. Generative algorithms optimize shapes according to predefined criteria, but they do not question the meaning of those criteria.
If the goal is minimum mass at a given stiffness, we will get “organic” but predictable forms - an aesthetic familiar from thousands of examples of topology optimization.
This is the equivalent of linguistic abstraction: the form is correct, but hard to remember and devoid of local, user-centered context.
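The single-criterion logic described above can be made concrete with a deliberately simple sketch. This is a toy illustration with made-up numbers and textbook beam theory, not any real AM toolchain: minimize the mass of a cantilever bracket subject to a stiffness requirement. Note how the objective never questions its own criteria - the "optimal" answer is fully determined by the constraint, with no room for the kind of contextual judgment the article discusses.

```python
# Toy example: minimum-mass cantilever at a required stiffness.
# All values are hypothetical; this only illustrates the shape of a
# single-criterion optimization objective, not a production workflow.
import math

F = 100.0      # tip load [N]
L = 0.30       # beam length [m]
E = 2.0e9      # Young's modulus [Pa], roughly a stiff printed polymer
b = 0.02       # fixed cross-section width [m]
rho = 1200.0   # material density [kg/m^3]
d_max = 0.002  # allowed tip deflection [m] (the stiffness requirement)

# Cantilever tip deflection: d = F*L^3 / (3*E*I), with I = b*h^3 / 12.
# Mass grows linearly with height h while deflection falls as 1/h^3,
# so the minimum-mass design sits exactly on the stiffness constraint:
# solve d(h) = d_max for h in closed form.
h = (4.0 * F * L**3 / (E * b * d_max)) ** (1.0 / 3.0)
mass = rho * b * h * L

print(f"optimal height: {h * 1000:.1f} mm, mass: {mass * 1000:.0f} g")
```

The point of the sketch is that once the criterion is fixed, the geometry follows mechanically; deciding whether deflection is even the right criterion - versus grip comfort, printability, or assembly feel - is exactly the part the algorithm does not do.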
The lack of lived experience also matters here. A designer who assembles a part themselves feels the resistance of a latch, knows where fingers slip and where the material should be “unpleasantly” rough.
AI does not possess this knowledge or these experiences. It can optimize a handle for stress distribution, but it will not understand that slight asymmetry improves the intuitiveness of the grip.
It is hard to expect a neural network to reflect on slippery handles, or on solutions that, after repeated use, raise blisters on the hands or leave the fingers aching.
The blandness described above therefore carries over into 3D design as well. The difference is that in engineering, it is the human who defines the problem space: the designer supplies the thesis, the goal, and the decisions, while AI left on its own remains a generator of mediocrity.
The problem begins when the algorithm replaces design thinking instead of supporting it. Then geometry - like text - starts “running on a treadmill”: correct, optimized, but without a clear reason for being the way it is.
Ultimately, blandness and risk aversion are not features of 3D printing or parametric design themselves, but the result of uncritically handing decisions over to statistical systems.
AI brings neither experience nor intention - it brings only the average. Character, both in text and in geometry, must still come from the human.