AI

AI is purple 🟪 (LLMs)

Kenn Kibadi
12/28/2025 · 6 min read

In late 2024, I wanted to launch an AI product that helps businesses write SEO-ready articles to increase their visibility and customer acquisition. With my cofounder, an experienced technical writer, I started working on the logo. Taking inspiration from "the market", we chose purple as our main brand color, and I made the design myself; it turned out great and beautiful. That led us to generate a purple-based styling theme for our website. I built the layout and used an AI coding assistant to save time and energy so I could focus on acquiring early users.

After a couple of days of prompting and refining (the refining was mine, because the AI couldn't deliver a perfectly polished, high-quality design on its own), we realized how impressive it was, especially for somebody who has done a lot of frontend engineering, to accomplish work that would normally take me about five weeks in roughly two and a half, cutting my time by 50%.

The mysterious LLM purple bias

Impressed as we were, we didn't realize we were in the middle of a strange design bias arising from modern Large Language Models that can generate code and CSS styling: a bias towards design patterns that look modern, minimalistic, popular, and trendy.

There is, in fact, an obsession with training AI models that are "up to date", meaning companies that build AI models make sure their training data accurately portrays what's modern and current. That's one of the reasons we have chatbots with a "Web Search" toggle button, despite the models having been trained with billions of parameters on a large amount of data.

It might sound something like this in the "digital mind" of an AI model (sorry for the personification, but that's roughly how it works):

I have been trained by my creators to be modern AND relevant. I have to make sure my responses are up to date and accurate. I have to do my best to always provide an answer that satisfies the user.

Hence, when you look at the most beautiful and most popular UI tech product designs of the last 5 years, you might notice:

  • A large number of companies now use modern CSS libraries, like Tailwind CSS, whose demos use a purple-ish base color

  • It became a trend to follow

  • It attracted many businesses, even AI companies

I have been using Tailwind CSS for the last 5 years, because it's convenient, excellent by design, and modern, which helps a lot if you're doing frontend tasks. And I personally think it's so popular that AI training data likely contained a lot of code snippets and examples built with CSS libraries like Tailwind CSS.

In fact, the creator of Tailwind CSS, Adam Wathan, said with a sarcastic tone:

I'd like to formally apologize for making every button in Tailwind UI bg-indigo-500 five years ago, leading to every AI generated UI on earth also being indigo.

Source: Adam Wathan's post on X

Given the horizontal nature of its training data, this kind of bias teaches us something about AI:

The bias that will eventually expand

After thinking about AI's easy expansion across domains while looking into this "purple" topic, I realized something. LLMs are trained not on a small quantity of data in a single domain, but on a huge quantity of data spread horizontally across a wide range of domains, because the idea behind them was to build an "Artificial General Intelligence" (read "Between AGI and ASI") that knows almost all domains, with nothing domain-specific underneath (as far as we know). As a result, we eventually end up with undesired, never-requested biases in the generated responses we get.

For example:

  1. In content writing, we have an issue with the em dash (—), which was originally designed as a long horizontal line used to create emphasis, introduce lists, or set off parenthetical information, acting like a stronger comma, parenthesis, or colon to add dramatic breaks, indicate sudden changes, or highlight extra details in a sentence. Today, every time we see one in an article, we are almost tempted to label it "AI-generated", which unfortunately discredits the author in some fields. It's an AI bias that causes real issues in production today — I wish I could still use it in my writing, because I love it.

  2. In user interface design, we have the purple malady that serves as an indicator of bias as we navigate this topic.

  3. In anatomical human body representation, early models struggled with hands (extra fingers, distortions). Even improved ones still show subtle biases, such as defaulting to right-handed actions (writing, holding objects), because training photos overrepresent right-handers. Left-handed prompts often require heavy guidance.

  4. Other artifacts include over-shiny "plastic" skin or homogenized faces aligning with cultural beauty standards. The AI learns from our distorted view of the human body and beauty. Trained on social media posts and images (that are already mostly fake and altered to appear "beautiful"), AI generates the same things.

  5. In software development:

    1. There is an over-reliance on outdated or insecure patterns. AI frequently suggests deprecated or vulnerable code because its training data includes massive amounts of legacy code. This "automation bias" leads developers to accept flawed suggestions, introducing tech debt or CVEs.

    2. Hallucinated or fake libraries/functions. LLMs generate plausible-sounding but non-existent imports, methods, or packages.

    3. Vulnerable-by-default code. Without explicit prompts, AI often skips input validation, secure practices, or error handling. Some studies report that over 40% of generated solutions have security issues, like SQL injection risks or missing auth checks. (Non-technical vibe coders struggle with this!)
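To make that last point concrete, here is a minimal sketch (the table and column names are made up for illustration) of how naive string concatenation lets user input rewrite a query, and how the placeholder style that database drivers provide avoids it:

```javascript
// Hypothetical example: building a SQL query from untrusted input.
const userInput = "'; DROP TABLE users; --";

// Vulnerable: the input is spliced directly into the SQL string,
// so the attacker's payload becomes part of the statement itself.
const unsafeQuery = `SELECT * FROM users WHERE name = '${userInput}'`;
console.log(unsafeQuery.includes("DROP TABLE")); // prints true

// Safer pattern (what drivers like node-postgres or sqlite bindings do):
// the SQL text and the values travel separately, and the driver binds
// the values so they can never be parsed as SQL.
const safeQuery = "SELECT * FROM users WHERE name = ?";
const params = [userInput];
// driver.run(safeQuery, params): the payload stays an inert string
```

This is exactly the kind of detail AI-generated code tends to skip unless the prompt explicitly asks for it.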

Now, think about all the big industries implementing and integrating AI today, and imagine how unhelpful this can be in the medical industry (one of the most important industries in our lives).

Always bet on customization

In the last five years, we've seen AI explode into nearly every corner of our lives, and in the next five, it's safe to assume it'll be truly everywhere, accessible to anyone with a smartphone or laptop. That sounds exciting, but it comes with a quiet downside: most of us will end up being served the same "meal." Whether we're writing articles, designing interfaces, generating images, generating a data analysis report, or coding software, we'll increasingly rely on the same handful of foundational AI models trained on the same massive, web-scraped datasets.

And I believe the only way to prevent that is to refine, iterate, and layer your own voice, style, and judgment on top (human in the loop). The baseline AI meal might be convenient and increasingly competent, but the ones who stand out will be those who treat it as raw ingredients rather than a finished dish. Always bet on customization.

Kenn Kibadi

Applied AI Engineer • Founder of WhyItMatters.AI | Philonote.com
