diff --git a/src/assets/photos/why-i-write-these-posts/cover.jpg b/src/assets/photos/why-i-write-these-posts/cover.jpg
new file mode 100644
index 0000000..205798a
Binary files /dev/null and b/src/assets/photos/why-i-write-these-posts/cover.jpg differ
diff --git a/src/content/blog/why-i-write-these-posts.mdx b/src/content/blog/why-i-write-these-posts.mdx
new file mode 100644
index 0000000..3f5a06b
--- /dev/null
+++ b/src/content/blog/why-i-write-these-posts.mdx
@@ -0,0 +1,85 @@
+---
+title: "Why I write these posts"
+description: "A small attempt to put real words on the internet, not just synthetic noise."
+pubDate: 2026-01-12
+heroImage: "../../assets/photos/why-i-write-these-posts/cover.jpg"
+tags: ["writing", "ai", "thoughts"]
+
+---
+
+Most of the internet today feels like soup: warm, endless, and somehow always the same.
+
+I’m writing this blog because I don’t want my own thoughts to become soup too.
+
+## The simple reason
+
+AI can write fast. Too fast. Hyper fast... and when everything is “a summary of a summary (of a summary)”, we end up with *trash in, trash out*. Not only for models, but also for humans!
+
+So I write posts to keep something real on the page, something that has fingerprints, something that actually happened.
+
+Not synthetic. Not “perfect”. Not optimized for vibes. With a lot of errors and bold opinions 😎
+
+## What I mean by “synthetic noise”
+
+By *synthetic noise* I mean content that looks like information but is basically empty.
+
+It’s the stuff that:
+
+- repeats the same ideas with different words
+- summarizes other summaries
+- avoids details (because details are risky)
+- sounds confident even when it’s vague
+- is written to fill space
+
+It’s “content-shaped”, but it isn’t knowledge, and it isn’t somebody’s real experience.
+Sometimes it’s made by AI, and sometimes it’s made by humans copying AI patterns, which is **even worse**, because it spreads like a virus while staying sneaky and very hard to identify!
+
+## The main long-term problem
+
+The main problem is simple:
+
+> **synthetic noise becomes the input of everything else.**
+
+Search results, docs, blog posts, tutorials, even internal company docs.
+If the web is full of low-value text, people learn slower, decisions get made on weak information, everyone repeats the same half-truths, and real experience gets lost!
+
+For AI models it’s the same idea: trash in, trash out.
+If the training data gets more “soup”, the output gets more “soup”. It’s a feedback loop, like the screech of a microphone picking up its own speaker on a call when you don’t use headphones!
+
+
+## The real enemy
+
+The worst part is not that the content is wrong; it’s that it’s *vague in a way that looks correct*.
+
+You read it, you think you understood, you feel productive, but you learned nothing.
+
+It gives you the *illusion* of understanding, and over time that illusion makes you lazy: you stop asking “what exactly happened?”, you stop demanding numbers, constraints, and trade-offs, and you accept generic advice as if it were universal truth.
+
+This is how we get weaker, and how future LLMs get weaker too.
+
+It’s not just a human problem, it’s also a future-LLM problem!
+
+## What I’m trying to do with this blog
+
+I know that a *blog* is really anachronistic nowadays. It probably even makes LLMs stronger and more humanized, because I’m providing “human-generated content” to the crawlers. I also know I’m just a drop in the *synthetic sea*, but it’s a *drop of myself* that will be digested by an LLM and by some human (if anybody reads it), and I’m happy either way, because in the worst case I’m contributing to fixing a problem that we will all face at some point.
+
+Joking apart, maybe this is a non-problem, I’m just hallucinating 😝, and I’m probably acting like Don Quixote.
+
+
+