• 1 Post
  • 98 Comments
Joined 2 years ago
Cake day: July 5th, 2023

  • The value a thing creates is only part of whether the investment into it is worth it.

    It’s entirely possible that all of the money that is going into the AI bubble will create value that will ultimately benefit someone else, and that those who initially invested in it will have nothing to show for it.

    In the late 90’s, U.S. regulatory reform around telecom prepared everyone for an explosion of investment in hard infrastructure assets around telecommunications: cell phones were starting to become a thing, consumer internet held a ton of promise. So telecom companies started digging trenches and laying fiber, at enormous expense to themselves. Most ended up in bankruptcy, and the actual assets eventually became owned by those who later bought those assets for pennies on the dollar, in bankruptcy auctions.

    Some companies owned fiber routes that they didn’t even bother using, and in the early 2000’s there was a shitload of dark fiber scattered throughout the United States. Eventually the bandwidth needs of near universal broadband gave that old fiber some use. But the companies that built it had already collapsed.

    If today’s AI companies can’t actually turn a profit, they’re going to be forced to sell off their expensive data at some point. Maybe someone else can make money with it. But the life cycle of this tech is much shorter than the telecom infrastructure I was describing earlier, so a stale LLM might very well become worthless within years. Or it’s only a stepping stone towards a distilled model that costs a fraction to run.

    So as an investment, I’m not seeing a compelling case for AI today. Even if you agree that it will provide value, it doesn’t make sense to invest $10 to get $1 of value.



  • Physics don’t change fundamentally between 6 meters and 120 meters

    Yes, they do. The mass-to-strength ratio of structural components changes with scale. So does the thrust-to-mass ratio of a rocket and its fuel, and so does heat dissipation (governed by the ratio of surface area to mass).

    And I don’t know shit about fluid dynamics, but I’m skeptical that things scale cleanly, either.

    Scaling upward will encounter challenges not apparent at small sizes. That goes for everything from engineering bridges to buildings to cars to boats to aircraft to spacecraft.
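    The scaling argument here is the square-cube law, which is easy to sketch in Python (the factor of 20 comes from the 6 m vs. 120 m figures quoted above; the unit structure is illustrative):

```python
# Square-cube law: scale every linear dimension by a factor k.
# Load-bearing cross-sections grow as k^2, but mass grows with
# volume as k^3, so stress on the structure rises linearly with k.
def scale(k: float, length: float = 1.0) -> tuple[float, float]:
    cross_section = length**2 * k**2   # ~ strength
    volume = length**3 * k**3          # ~ mass
    return cross_section, volume

# Going from a 6 m prototype to a 120 m vehicle is k = 20:
area, vol = scale(20)
print(vol / area)  # stress per unit area rises 20x
```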


  • What’s tricky is figuring out the appropriate human baseline, since human drivers don’t necessarily report every crash.

    Also, I think it’s worth discussing whether to include in the baseline certain driver assistance technologies, like automated braking, blind spot warnings, other warnings/visualizations of surrounding objects, cars, bikes, or pedestrians, etc. Throw in other things like traction control, antilock brakes, etc.

    There are ways to make human driving safer without fully automating the driving, so it may not be appropriate to compare fully automated driving with fully manual driving. Hybrid approaches might be safer today, but we don’t have the data to actually analyze that, as far as I can tell.


  • What’s annoying, too, is that a lot of the methods that have traditionally been used for discounts (education, nonprofit, employer-based discounts) are now only applicable to the subscriptions. So if you do want to get a standalone copy and would ordinarily qualify for a discount, you can’t apply that discount to that license.




  • if we highly restrict the parameters of what information we’re looking at, we then get a possible 10 bits per second.

    Not exactly. It’s more the other way around: human behavior in response to inputs is only observed to process about 10 bits per second, so it’s fair to conclude that brains heavily restrict which parameters of the incoming information actually get used and processed.

    When you require the brain to process more information and discard less, it slows down; the observed rates are on the order of 5-40 bits per second, depending on the task.


  • I think the fundamental issue is that you’re assuming that information theory refers to entropy as uncompressed data but it’s actually referring to the amount of data assuming ideal/perfect compression.

    Um, so each character is just 0 or 1 meaning there are only two characters in the English language? You can’t reduce it like that.

    There are only 26 letters in the English alphabet, so a meaningful character space fits in fewer than 5 bits (2^5 = 32). Morse code, for example, encodes letters in fewer than 4 bits per letter on average (the most common letters use fewer bits, and the longest use 4). A typical sentence reduces down to an average of 2-3 bits per letter, plus the pauses between letters.

    And because the distribution of letters in English text is nonuniform, each letter carries less information than a fixed per-letter encoding would spend on it. You can assign codes to whole words and get really efficient that way, especially using variable-length encoding for the more common ideas or combinations.
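    To put rough numbers on that, here’s a sketch of the Shannon entropy of English letter frequencies (the frequency table is approximate, taken from commonly cited corpus statistics, so treat the exact value as illustrative):

```python
import math

# Approximate relative frequencies (%) of letters in English text
freq = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
    's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
    'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
    'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
    'q': 0.10, 'z': 0.07,
}
total = sum(freq.values())
probs = [v / total for v in freq.values()]

# Shannon entropy: H = -sum(p * log2(p))
entropy = -sum(p * math.log2(p) for p in probs)
fixed = math.log2(26)  # naive fixed-width encoding

print(round(fixed, 2))    # ~4.7 bits per letter
print(round(entropy, 2))  # ~4.2 bits per letter
```

    Letter-pair and word-level models push the figure lower still, toward the 2-3 bits per letter mentioned above.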

    If you scour the world of English text, the 15-character string of “Abraham Lincoln” will be far more common than even the 3-letter string of “xqj,” so long multi-character expressions like that convey far fewer bits of entropy than their length suggests. It might well take someone longer to memorize a truly random 10-character string (case-sensitive, with symbols and numbers) than a 100-character sentence that actually carries meaning.

    Finally, once you actually get to reading and understanding, you’re not meticulously remembering literally every character. Your brain is preprocessing some stuff and discarding details without actually consciously incorporating them into the reading. Sometimes we glide past typos. Or we make assumptions (whether correct or not). Sometimes when tasked with counting basketball passes we totally miss that there was a gorilla in the video. The actual conscious thinking discards quite a bit of the information as it is received.

    You can tell when you’re reading something that is within your own existing knowledge, and how much faster it is to read than something that is entirely new, on a totally novel subject that you have no background in. Your sense of recall is going to be less accurate with that stuff, or you’re going to significantly slow down how you read it.

    I can read a whole sentence of more than ten words (let alone ten characters) in a second, while also retaining what music I was listening to, what color the page was, how hot it was in the room, how itchy my clothes were, and how thirsty I was during that second, if I pay attention to all of those things.

    If you’re preparing to be tested on the recall of each and every one of those things, you’re going to find yourself reading a lot slower. You can read the entire reading passage but be totally unprepared for questions like “how many times did the word ‘the’ appear in the passage?” And that’s because the way you actually read and understand is going to involve discarding many, many bits of information that don’t make it past the filter your brain puts up for that task.

    For some people, memorizing the sentence “Linus Torvalds wrote the first version of the Linux kernel in 1991 while he was a student at the University of Helsinki” is trivial and can be done in a second or two. For many others, who might not have the background to know what the sentence means, they might struggle with being able to parrot back that idea without studying it for at least 10-15 seconds. And the results might be flipped for different people on another sentence, like “Brooks Nader repurposes engagement ring from ex, buys 9-carat ‘divorce ring’ amid Gleb Savchenko romance.”

    The fact is, most of what we read is already familiar in some way. That means we’re actually processing less information than we take in, discarding a huge chunk of what we perceive before it reaches what we actually think. And when we encounter things we didn’t expect, we slow down or we misremember them.

    So I can see how the 10-bit number comes into play. The paper cited various studies showing that image/object recognition tends to operate in the high 30s of bits per second, while many memorization or video game tasks involve processing in the 5-10 bit range. Our brains are just highly optimized for image and language processing, so I’d expect those tasks to perform better than other domains.


  • The problem here is that the bits of information needs to be clearly defined, otherwise we are not talking about actually quantifiable information

    here they are talking about very different types of bits

    I think everyone agrees on the definition of a bit (a binary two-value variable), but the active area of debate is which pieces of information actually matter. If information can be losslessly compressed into smaller representations of that same information, then the smaller compressed size represents the informational complexity in bits.

    The paper itself describes the information that can be recorded but ultimately discarded as not relevant: for typing, the forcefulness of each key press or duration of each key press don’t matter (but that exact same data might matter for analyzing someone playing the piano). So in terms of complexity theory, they’ve settled on 5 bits per English word and just refer to other prior papers that have attempted to quantify the information complexity of English.
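    The compression framing is easy to demonstrate. A minimal sketch with Python’s zlib (the sample strings are my own, and real entropy estimates for English use far better models than a gzip-style compressor, so this is purely illustrative):

```python
import os
import zlib

# Highly redundant English text vs. incompressible random bytes
english = b"the quick brown fox jumps over the lazy dog " * 50
noise = os.urandom(len(english))

compressed_english = len(zlib.compress(english, 9))
compressed_noise = len(zlib.compress(noise, 9))

# The redundant text collapses to a small fraction of its raw size,
# while the random bytes don't shrink at all: compressed size tracks
# information content, not character count.
print(len(english), compressed_english, compressed_noise)
```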


  • Speaking which is conveying thought, also far exceed 10 bits per second.

    A 2019 study analyzed 17 spoken languages and found that languages with lower information density (bits of information per syllable) tend to be spoken faster, such that the overall information rate is roughly the same across spoken languages: about 39 bits per second.
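    That density-speed tradeoff can be sketched with made-up numbers (these are hypothetical, chosen to illustrate the relationship, not the study’s actual per-language figures):

```python
# Hypothetical syllable rates and densities: a dense language spoken
# slowly and a light one spoken quickly land at the same bitrate.
languages = {
    "dense_slow": {"bits_per_syllable": 7.8, "syllables_per_sec": 5.0},
    "light_fast": {"bits_per_syllable": 5.0, "syllables_per_sec": 7.8},
}

for name, lang in languages.items():
    rate = lang["bits_per_syllable"] * lang["syllables_per_sec"]
    print(name, rate)  # both come out to 39.0 bits/s
```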

    Of course, it could be that the actual ideas and information in that speech are inefficiently encoded, so that the actual bits of entropy are communicated at less than 39 per second. I’m curious what the underlying Caltech paper says about language processing, since the press release describes deriving the 10 bits from studies of how people read and write (as well as studies of people playing video games or solving Rubik’s cubes). Are they including the additional overhead of processing that information into new knowledge or insights? Are they defining the entropy of human language with a higher implied compression ratio?

    EDIT: I read the preprint, available here. It purports to measure externally measurable output of human behavior. That’s an important limitation in that it’s not trying to measure internal richness in unobserved thought.

    So it analyzes people performing external tasks, including typing and speech with an assumed entropy of about 5 bits per English word. A 120 wpm typing speed therefore translates to 600 bits per minute, or 10 bits per second. A 160 wpm speaking speed translates to 13 bits/s.
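    The arithmetic is simple enough to verify (5 bits per word being the paper’s assumed figure):

```python
BITS_PER_WORD = 5  # entropy per English word assumed in the paper

def bits_per_second(words_per_minute: float) -> float:
    """Convert a words-per-minute rate into bits per second."""
    return words_per_minute * BITS_PER_WORD / 60

print(bits_per_second(120))  # typing at 120 wpm -> 10.0 bits/s
print(bits_per_second(160))  # speaking at 160 wpm -> ~13.3 bits/s
```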

    The calculated bits of information are especially interesting for the other tasks (blindfolded Rubik’s cube solving, memory contests).

    It also explicitly cited the 39 bits/s study that I linked as being within the general range, because the actual meat of the paper is analyzing how the human brain brings ~10^9 bits per second of sensory input down to ~10 bits/s, a reduction of about 8 orders of magnitude. If it turns out to be 7.5 orders of magnitude instead, that doesn’t really change the result.

    There’s also a whole section addressing criticisms of the 10 bit/s number. It argues that claims of photographic memory tend to break down into longer periods of study (e.g., a 45-minute flyover of Rome to recognize and recreate 1,000 buildings in 1,000 architectural styles translates into about 4 bits/s of memorization). It also argues that the human brain tends to trick itself into perceiving much more complexity than it actually processes (known as “subjective inflation”): a lot of perception is effectively lossy compression that fills in fake details consistent with the portions actually perceived, so the bitrates observed in other experiments might not properly categorize the bits of entropy involved in these less accurate shortcuts.
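    One way to reproduce the roughly 4 bits/s flyover figure, assuming each of the 1,000 buildings is an independent choice among about 1,000 styles (my reading of the example, not necessarily the paper’s exact model):

```python
import math

buildings = 1000
styles = 1000          # choices per building -> log2(1000) bits each
seconds = 45 * 60      # the 45-minute flyover

total_bits = buildings * math.log2(styles)
rate = total_bits / seconds
print(round(rate, 1))  # ~3.7 bits/s, i.e. about 4
```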

    I still think visual processing seems to be faster than 10, but I’m now persuaded that it’s within an order of magnitude.


  • So what IS their strategy now?

    I think they need to bet the company on regaining their previous lead in actual cutting edge fabrication of semiconductors.

    TSMC basically prints money, but the next stage is a new paradigm where TSMC doesn’t necessarily have a built-in advantage. Samsung and Intel are gunning for that top spot with their own technologies in actually manufacturing and packaging chips, hoping to leapfrog TSMC as the industry tries to scale up mass production of chips using backside power and gate all around FETs (GAAFETs).

    If Intel 18A doesn’t succeed, the company is done.



  • I actually have fairly high hopes for Intel’s 18A and the upcoming technology changes presenting competition for TSMC (including others like Samsung and the Japanese startup Rapidus). And even if it turns into a 3-way race among Asian companies, the three nations are different enough that there’s at least some strength in diversity.

    I think TSMC’s dominance over the last decade can be traced to its clear advantage in producing finFETs at scale better than anyone else. As we move on from the finFET paradigm towards GAA and backside power delivery, there are a few opportunities to leapfrog TSMC. In fact, TSMC is making such good money on its 3nm and 4nm processes that its roadmap to GAAFETs and backside power is slower than Intel’s and Samsung’s, seemingly to squeeze the very last bit out of finFETs before moving on.

    If there’s meaningful competition in the space, we might see lower prices, which could lead to greater innovation from their customers.

    Do I think it will happen? I’m not sure. But I’m hopeful, and wouldn’t be surprised if the next few process nodes show big shakeups in the race.