I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.
This one is about why it was a mistake to call 1024 bytes a kilobyte. It’s about a 20-minute read, so thank you very much in advance if you find the time to read it.
Feedback is very much welcome. Thank you.
Short answer: It’s because of binary.
Computers are very good at calculating with powers of two, so a lot of computer concepts use powers of two to make calculations easier.
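To make the short answer concrete, here is a minimal Python sketch (variable names are my own) of why 1024 is the number that shows up, and how it differs from the decimal 1000:

```python
# 1024 is a power of two: 2 to the 10th
assert 2 ** 10 == 1024

# One "kilobyte" under the two competing conventions:
kib = 2 ** 10   # binary (IEC) prefix, kibibyte: 1024 bytes
kb = 10 ** 3    # decimal (SI) prefix, kilobyte: 1000 bytes

# The 24-byte gap per "kilo" is the source of the confusion,
# and it grows with each prefix (mega, giga, ...).
print(kib - kb)  # 24
```

The gap compounds at larger prefixes: a "gigabyte" is 2**30 = 1 073 741 824 bytes in binary terms but only 10**9 in decimal terms, which is why a drive advertised as 1 TB shows up smaller in some operating systems.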
Edit: Oops… It’s 2^10, not 2^7
Sorry y’all… 😅
I’m confused, why this quotation? 1024 is 2^10, not 2^7
Long answer
So the problem is that our decimal number system just sucks. Should have gone with hexadecimal 😎
/Joking, if it isn’t obvious. Thank you for the explanation.