I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.
This one is about why it was a mistake to call 1024 bytes a kilobyte. It’s about a 20-minute read, so thank you very much in advance if you find the time to read it.
Feedback is very much welcome. Thank you.
Short answer: It’s because of binary.
Computers work in binary, so calculating with powers of two is especially cheap for them, and because of that a lot of computer concepts use powers of two to make calculations easier (there’s a quick sketch below that shows why).
Edit: Oops… It’s 2¹⁰, not 2⁷
Sorry y’all… 😅
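To see why binary makes powers of two cheap, here’s a minimal Python sketch (my own illustration, not from the comment above): multiplying by two is just a single bit shift.

```python
# Shifting a bit pattern left by n places multiplies it by 2^n,
# which is why power-of-two arithmetic is essentially free in binary.
for n in range(1, 11):
    print(f"2^{n} = {1 << n}")

assert 1 << 10 == 1024  # the value traditionally called a "kilobyte"
```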
I’m confused: why this quotation? 1024 is 2¹⁰, not 2⁷
Long answer
So the problem is that our decimal number system just sucks. Should have gone with hexadecimal 😎
/Joking, if it isn’t obvious. Thank you for the explanation.
I believe it’s because a computer always works with things in pairs. If you keep pairing the pairs, you eventually land on 1024, which is the closest power of two to 1000.
The logic is like this (there’s a short code sketch after the list):
2+2 = 4
4+4 = 8
8+8 = 16
16+16 = 32
32+32 = 64
64+64 = 128
128+128 = 256
256+256 = 512
512+512 = 1024
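The same pairing, written as a quick Python loop (an illustrative sketch of the argument above, nothing more):

```python
# Keep pairing the pairs until we pass 1000.
value = 2
while value < 1000:
    value += value  # 2, 4, 8, ..., 512, 1024
print(value)  # 1024: the first power of two above 1000
```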
“Kilo” means 1000 in the International System of Units (SI).
With some computer hardware it’s more convenient to use 1024 for a kilobyte, and in the early days nobody really cared that it was slightly wrong. It has to do with the way memory is physically laid out in a chip: addresses are binary, so n address lines reach exactly 2ⁿ cells, and capacities naturally come in powers of two.
These days, people do care, and the correct prefix for 1024 is “kibi” (kilo binary), as in kibibyte (KiB). There’s also mebi, gibi, tebi, exbi, etc.
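To make the two conventions concrete, here’s a small Python sketch (purely illustrative; the helper name `human` is made up) that formats the same byte count with SI prefixes (powers of 1000) and IEC binary prefixes (powers of 1024):

```python
def human(nbytes: int, binary: bool = False) -> str:
    """Format a byte count with SI (kB, MB, ...) or IEC (KiB, MiB, ...) prefixes."""
    base = 1024 if binary else 1000
    units = ["KiB", "MiB", "GiB", "TiB"] if binary else ["kB", "MB", "GB", "TB"]
    value, unit = float(nbytes), "B"
    for u in units:
        if value < base:
            break
        value /= base
        unit = u
    return f"{value:.2f} {unit}"

print(human(1_000_000_000))               # 1.00 GB    (decimal, powers of 1000)
print(human(1_000_000_000, binary=True))  # 953.67 MiB (binary, powers of 1024)
```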
It’s mostly CPUs that use 1024, and also RAM because it’s tightly coupled to the CPU. The internet, hard drives, etc. usually use 1000 because they don’t have any reason to use a weird numbering system.
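This mismatch is exactly why a drive sold as “1 TB” shows up as roughly 931 “GB” in an operating system that divides by 1024; a quick back-of-the-envelope check (illustrative Python):

```python
drive_bytes = 1_000_000_000_000  # "1 TB" as sold: 10^12 bytes
print(drive_bytes / 1024**3)     # ≈ 931.32 -> often shown as "931 GB" (really GiB)
print(drive_bytes / 1024**4)     # ≈ 0.91   -> about 0.91 TiB
```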