AI

AI and Social Media's Downfall

Kenn Kibadi
12/18/2025 · 7 min read

It’s December 2025, and we’re right in the middle of a flood of AI models, AI solutions, AI apps, and AI content-generation tools built to bring humans closer, improve communication and information sharing, expand human capabilities, and solve problems.

AI as a daily helper

Although noble-sounding, this didn’t come without consequences, some positive, others negative. I (@Kenn), as an AI and software engineer, can speak to some of the benefits I’ve been noticing:

  • There is a tool called ā€GitHub Copilotā€ from Microsoft that works as a virtual coding assistant, helping engineers fix issues, improve functions and their algorithmic efficiency, and write code faster than ever. I’ve been using it for a couple of years now. I can testify, it’s a huge time saver and a great helper.

  • There are other tools like ChatGPT from OpenAI, Grok from xAI, and Gemini from Google, just to name a few. I’ve been using them in parallel to help brainstorm project ideas.

  • I use AI daily as a product engineer; don’t even look far, I made Philonote’s business plan and business model with the help of these tools. (Read "Philonote Creator Manifesto", from Philonote, by Kenn Kibadi)

So, AI has so much impact that it’s in almost every industry and every domain of our daily life, including one of the most important of all, the one that shapes us as human beings: social interaction with one another.

Social and Professional life, an AI target

Our social life, whether digital or non-digital, is not outside of the AI impact zone.

I was born and raised in the heart and center of the big continent of Africa, the Democratic Republic of the Congo (D.R. Congo), the second largest country on the continent, where we speak 4 national languages (Lingala, Kikongo, Tshiluba, Swahili) AND one official language (French), with English probably becoming our second official language in the coming years. If you visit, you will notice that, on average, a Congolese person speaks at least 2 languages and is probably trying to learn another foreign language like English, Spanish, Russian, or Chinese (ironically, I’m currently learning Chinese, wish me luck!).

1. Language is impacted

If there is one area where AI has helped humans communicate better, it’s breaking the language barrier.

I use it daily to help me communicate better with Chinese speakers.

  • People use it for translation

  • People use it for grammar improvement

  • People use it for language learning

  • And so on

More and more, we will see AI as a language assistant for most of these tasks (I believe translation jobs are somewhat safe for now, as of December 2025, but that’s a topic for another day).

2. Online Communication is impacted

Those who work online, communicate via email, and build networks through social media platforms and forums have seen the great amount of help AI can provide them daily.

  • They use it to craft better emails, with better grammar and wording, for clarity as they communicate.

  • They use it to summarize long-form content for time-saving purposes

  • They also use it to translate, as they communicate with people from foreign lands

  • They use it to craft great marketing copy for their companies and businesses

  • They use it to brainstorm business ideas

  • They use it for market research

So, we can see how, professionally, AI is now unavoidable and necessary for those who want efficiency, time savings, and sometimes quick solutions.

BUT, despite that important usefulness, we have a problem: AI can get dangerous, not in its training, but in its excessive, greedy, and unethical use in our daily life, especially on social media.

Social media, antisocial platforms?

Initially, social media were made to bring us closer, help long-distance friends communicate daily, share life, and keep people connected. How cool is that? In 2025, people can connect almost as in real life: make calls (social calls, I mean), see each other, share news and updates, and even work together.

There is one side, the user experience we just mentioned; there is also another side, the companies behind the social media platforms, and the latter talks business, does business, and develops business from the inside.

1. Platforms that don't help anymore

"All" the social media companies have an AI engineering team (called by different names depending on the company) with a two-fold goal in mind, helping with:

(1) How to make the software sticky and keep people stuck on the platform using AI (recommendation systems)

(2) How to get businesses to efficiently target their ideal customers through advertisements
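To make goal (1) concrete, here is a minimal, hypothetical sketch of what an engagement-only ranking objective can look like (the field names and weights are my own invention for illustration, not any platform’s actual code): every term rewards stickiness, and nothing rewards accuracy or user well-being.

```python
# Hypothetical engagement-only feed ranking; fields and weights are invented.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    predicted_watch_seconds: float  # model's guess at how long the user will watch
    predicted_like_prob: float      # probability the user will like it
    predicted_share_prob: float     # probability the user will share it

def engagement_score(post: Post) -> float:
    # A weighted sum of engagement signals only; no term for truth or health.
    return (0.5 * post.predicted_watch_seconds
            + 20.0 * post.predicted_like_prob
            + 40.0 * post.predicted_share_prob)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed is simply posts sorted by expected engagement, highest first.
    return sorted(posts, key=engagement_score, reverse=True)
```

Outrage and fake news tend to score high on shares and watch time, so a purely engagement-driven objective like this one naturally surfaces them, which is exactly the dynamic described below.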

Given that, as a business, they must make the company profitable through every "ethical" means possible. And pragmatically, when it works, meaning when the platform generates revenue, the company doubles down: they work harder to maintain user retention, business partnerships, and the revenue machine that keeps them going.

With that goal in mind, just imagine how bad things can get when the user experience is focused only on the algorithmic optimization function, putting users in an addictive box without thinking about the social impact that comes with it: instead of getting connected, they get separated; they get misinformed, fed fake news and, sometimes, fake theories of life and science (!), which is literally the opposite of what social media should be doing.

Worse than that? When users get into the habit of gaming the algorithms, faking things until it works, "through the means of AI".

2. People who get greedy with social hacking

The word "hacking" here means the use of malicious practices with the intent of "growing in numbers, in views, in followers, in likes".

Humans are intelligent beings. When they see something that looks smart, they work hard to understand it; once they understand, they get creative; once they see the benefit of that creation, they decide to use it as a tool, ethically or unethically, which is no surprise given the whole history of the world: "that’s what humans are best at", they say (I don’t know if I fully agree, though).

That’s to show how things can get really bad. Content creation mixed with AI can either help social media recover its original meaning or become the reason why many people are becoming digitally "anti-social".

When AI is used as an instrument of growth hacking with content creators being careless about delivering value, you don't want your children to become part of that experiment; you don't even want your brain to be affected by it. I don't think it's good for the human brain either.

3. Blurry information

AI, I believe, is still in its genesis. Researchers are still trying to figure it out, and engineers and developers are still playing with it on the application layer: building chatbots, adding generative features to existing software, fine-tuning the models, etc.

We are all still learning it and still in the "wow" phase. So, we don’t know what the digital future will fully look like, despite the countless predictions we are making. It’s still blurry.

But something is certain: social media ain't the same anymore.

Where do we go from here?

Is there still hope for our beloved social media amid the massive AI wave? Yes, there is.

AI is here to stay. Social media is here to stay. From an engineering standpoint, if we’re still relying on LLMs, AI will get so good at faking content that it will be indistinguishable from real content. The only solution will probably be people learning a moderate daily use of social media, with platforms providing tools (like fact-checking, AI detection, or other vertical solutions) for better moderation and, maybe, creating AI-free zones for humans to feel human: keeping AI out of private interpersonal human chats, maybe having AI stay in the professional world as a powerful agentic assistant. Maybe. Maybe.

Kenn Kibadi

Applied AI Engineer • Founder of WhyItMatters.AI | Philonote.com
