The beauty of Zenbleed and its walk-through

Last week (23w30) I stumbled upon a ‘great’ vulnerability, and also a great explanation of it (and I’ve read a ‘few’).

The explanation of Zenbleed is here.
I highly recommend it to anyone interested in learning some of the ‘magic’ modern CPUs perform on an ever-expanding, 40+ year old x86 instruction set.

And I’m a bit jealous of how clean and clear the walk-through is written 🙂

Old is new again (at least on the web)

I came across a comprehensive analysis of all the ‘innovations’ in full-stack development, and it brought back some fun memories 🙂

Every few years, I would find myself needing or wanting to create a website. That meant trying to use the “best” tools and frameworks for the job, whichever ones everyone was hyping at the time.

Yet, each time, I found myself gravitating back to good old WordPress, with a sprinkle of jQuery or a lightweight framework.


Well, I felt ‘stupid’ that it took so much time to set up a simple CRUD website using these supposedly “modern” tools.

The rule: a simple task needs simple tools, always 🙂
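To make the point concrete, here is a rough sketch (my own, not from any of the frameworks I tried) of how little code a basic CRUD backend actually requires, using nothing but Python’s built-in sqlite3 module:

```python
import sqlite3

# In-memory database for the sketch; a real site would use a file path.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")

# Create
cur = conn.execute("INSERT INTO posts (title) VALUES (?)", ("Hello",))
post_id = cur.lastrowid

# Read
title = conn.execute(
    "SELECT title FROM posts WHERE id = ?", (post_id,)
).fetchone()[0]

# Update
conn.execute("UPDATE posts SET title = ? WHERE id = ?", ("Hello again", post_id))

# Delete
conn.execute("DELETE FROM posts WHERE id = ?", (post_id,))
conn.commit()
```

Wrap those four statements in a handful of HTTP handlers and you have the whole “app”; everything beyond that is the framework, not the task.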

George Carlin vs. state-of-the-art AI

My current take on recent AI development is that it is getting more and more useful.

BUT, in the end, it is still glorified, if novel, statistics [1].
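My toy illustration of what “statistics” means here (a deliberately crude stand-in, nothing like a real model): predict the next word purely by counting which word most often followed it in the training text:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """For every word, count which words follow it and how often."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat ran")
```

Here `predict_next(model, "the")` returns `"cat"`, because that is simply the most common continuation in the data. Modern models are vastly more sophisticated, but the core move is the same: reproduce the frequencies of the input.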

And whenever AI learns from human input, I find this funny/insightful/dark quote from George Carlin fitting:

Or in other words, the current crop of AI cannot escape the law of large numbers [3].
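A quick numerical reminder of what the law of large numbers says: the average of many noisy samples converges to the true mean, so a model trained on the bulk of human output converges to the average of that output. A minimal simulation with fair coin flips (true mean 0.5):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def sample_mean(n):
    """Average of n fair coin flips (1 = heads); the true mean is 0.5."""
    return sum(random.randint(0, 1) for _ in range(n)) / n

small = sample_mean(10)        # can land far from 0.5
large = sample_mean(100_000)   # sits very close to 0.5
```

With 100,000 flips the sample mean is pinned near 0.5; no amount of extra sampling moves it away from the average.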

Especially in today’s world, you need your input sources vetted very, very carefully, by, of course, error-prone humans.
Even reviews for the simplest stuff you can buy cannot be trusted, as they are bought in bulk.
And as misinformation efforts run loose in the wild, it is hard to keep sources clean.

Even for purely technical domains such as programming, ChatGPT has been banned by Stack Overflow [2] due to the high percentage of answers that only look correct.

And some fun chat-bot responses from historical characters 🙂


So until AI can learn actual real-life models instead of shoveling data into hundreds of billions of (statistical, black-box) parameters with insane compute-power needs, it will remain just useful statistics.

But there is a lot of room for research into how even today’s AI works, including on much smaller-scale models.


[1] –

[2] –

[3] –