“The new device is built from arrays of resistive random-access memory (RRAM) cells… The team was able to combine the speed of analog computation with the accuracy normally associated with digital processing. Crucially, the chip was manufactured using a commercial production process, meaning it could potentially be mass-produced.”

Article is based on this paper: https://www.nature.com/articles/s41928-025-01477-0

  • Alexstarfire@lemmy.world · 19 days ago (edited)

    It uses 1% of the energy but is still 1000x faster than our current fastest cards? Yeah, I’m calling bullshit. It’s either a one-off, bullshit, or the next industrial revolution.

    EDIT: Also, why do articles insist on using ##x less? You can just say it uses 1% of the energy. It’s so much easier to understand.

    • CosmoNova@lemmy.world · 19 days ago

      I mean, it’s like the 10th time I’m reading about THE breakthrough in Chinese chip production on Lemmy, so let’s just say I’m not holding my breath LoL.

    • Treczoks@lemmy.world · 18 days ago

      Same here. I’m waiting to see real-life calculations done by such circuits. They won’t be able to do, e.g., a simple float addition without losing or mangling a bunch of digits.

      But maybe the analog precision is sufficient for AI, which is an imprecise matter from the start.
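
      A quick way to see both halves of that point (my own toy sketch, not from the paper): round values to an n-bit grid, as a real analog stage effectively does, and check how much of a number survives.

      ```python
      import numpy as np

      def quantize(x, bits, full_scale=1.0):
          """Snap x to the nearest of 2**bits levels on [-full_scale, full_scale]."""
          step = 2 * full_scale / 2 ** bits
          return np.round(x / step) * step

      x = 0.123456789012345
      for bits in (8, 16, 24):
          err = abs(quantize(x, bits) - x) / x
          print(f"{bits:2d}-bit stage: relative error ~ {err:.0e}")
      # ~1e-02 at 8 bits, ~1e-04 at 16 bits: ruinous for chained float
      # arithmetic, but often tolerable for neural-network weights.
      ```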

        • Treczoks@lemmy.world · 18 days ago

          No, it wouldn’t, because you cannot make it reproducible at that scale.

          Normal analog hardware, e.g. audio, tops out at about 16 bits of precision. If you go individually tuned, high-end, and expensive (studio equipment), you get maybe 24 bits. That is eons from the 52-bit mantissa precision of a double float.
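
          For scale, here is the standard back-of-envelope rule (~6.02 dB of SNR per bit for an ideal quantizer; my own numbers, not from the thread):

          ```python
          # SNR of an ideal n-bit quantizer: ~6.02*n + 1.76 dB
          for name, bits in [("consumer audio ADC", 16),
                             ("studio ADC", 24),
                             ("double-float mantissa", 52)]:
              print(f"{name:21s} {bits:2d} bits ~ {6.02 * bits + 1.76:5.0f} dB")
          # 16 -> ~98 dB, 24 -> ~146 dB, 52 -> ~315 dB: the gap to a double
          # is hundreds of dB of dynamic range, not a matter of better tuning.
          ```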

        • Limonene@lemmy.world · 18 days ago

          The maximum theoretical precision of an analog computer is limited by the charge of an electron, ~10^-19 coulombs. A normal analog computer runs at a few milliamps, for a second at most, so the maximum theoretical precision is about 1 part in 10^16, or 53 bits. This is the same as a double-precision (64-bit) float. I believe 80-bit floats are standard on desktop computers.
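
          That bound is easy to reproduce (my arithmetic, same assumptions as above: a couple of milliamps for one second):

          ```python
          import math

          e = 1.602e-19        # electron charge, coulombs
          charge = 2e-3 * 1.0  # ~2 mA for one second -> 2 mC moved

          levels = charge / e  # number of distinguishable charge packets
          print(f"~{levels:.0e} levels ~ {math.log2(levels):.0f} bits")
          # ~1e+16 levels, ~53 bits -- the figure quoted above.
          ```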

          In practice, just getting a good 24-bit ADC is expensive, and 12-bit or 16-bit ADCs are way more common. Analog computers aren’t solving anything that can’t be done faster by digitally simulating an analog computer.

            • turmacar@lemmy.world · 18 days ago

              Every operation your computer does, from displaying images on a screen to securely connecting to your bank.

              It’s an interesting advancement, and it will be neat if something comes of it down the line. The chances of it having a meaningful product in the next decade are close to zero.

            • Limonene@lemmy.world · 18 days ago

              They used to use analog computers to solve differential equations, back when every transistor was expensive (relays and tubes even more so) and clock rates were measured in kilohertz. There’s no practical purpose for them now.
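
              For what that looked like, here is a digital re-creation (my own sketch): chain two integrator stages to solve x'' = -x, which is exactly how an op-amp machine would be patched up.

              ```python
              import math

              # Two chained "integrators" solving x'' = -x (a harmonic
              # oscillator), the classic analog-computer demo, stepped digitally.
              dt, t_end = 1e-3, 10.0
              x, v = 1.0, 0.0             # initial position and velocity
              for _ in range(int(t_end / dt)):
                  v += -x * dt            # integrator 1: velocity from acceleration
                  x += v * dt             # integrator 2: position from velocity
              print(f"x(10) ~ {x:.4f}  vs exact cos(10) = {math.cos(10.0):.4f}")
              ```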

              For number theory and RSA cryptography you need even more precision: multiple machine integers are combined to get 4096-bit precision.
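
              In software that is just multi-limb integer arithmetic; Python’s built-in ints already do it (random odd modulus purely for illustration, not real key generation):

              ```python
              import secrets

              # 4096-bit modular exponentiation, the core RSA operation, exactly.
              m = secrets.randbits(4096) | 1           # random odd 4096-bit modulus
              base = secrets.randbits(4096) % m
              print(pow(base, 65537, m).bit_length())  # up to 4096 bits, no digits lost
              ```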

              If you’re asking about the 24-bit ADC, I think that’s usually high-end audio recording.

  • NutWrench@lemmy.world · 18 days ago

    Look, it’s one of those articles again: the bimonthly “China invents an earth-shattering technology breakthrough that we never hear about again.”

    “1000x faster?” Learn to lie better. Real technological improvements are almost always incremental, like “10-20% faster, bigger, stronger.” Not 1000 freaking times faster. You lie like a child. Or like Trump.

  • Quazatron@lemmy.world · 19 days ago

    This was bound to happen. Neural networks are inherently analog processes; simulating them digitally is massively expensive in terms of hardware and power.

    The digital domain is good for exact computation; analog is better for the approximate computation that neural networks require.
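
    That is the whole pitch of an RRAM crossbar. A toy model of the idea (mine, not the paper’s circuit): store weights as conductances, apply inputs as voltages, and Kirchhoff’s current law does the matrix-vector product in one physical step, with some analog noise riding along.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    G = rng.uniform(0.0, 1.0, (4, 8))   # conductances = the weight matrix
    v = rng.uniform(0.0, 1.0, 8)        # input voltages

    exact = G @ v                       # what the physics computes: I = G v
    noisy = exact * (1 + rng.normal(0.0, 0.01, 4))  # ~1% device/read noise
    print(np.abs(noisy - exact) / exact)            # percent-level error
    ```

    A network trained with that noise in mind barely notices it; a general-purpose FPU could not live with it.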

    • bulwark@lemmy.world · 19 days ago

      That’s a good point. The model weights could be voltage levels instead of digital representations. Lots of audio tech uses analog for better fidelity. I also read that there’s a startup using particle beams for lithography. Exciting times.

  • Godort@lemmy.ca · 19 days ago

    This seems like promising technology, but the figures they are providing are almost certainly fiction.

    This has all the hallmarks of a team of researchers looking to score an R&D budget.

  • Melobol@lemmy.ml · 19 days ago (edited)

    Edit: I removed a ChatGPT-generated summary that I had thought could be useful.
    Anyway, just have a good day.

    • kalkulat@lemmy.world (OP) · 19 days ago

      It was a decent summary; I was replying when you pulled it. Analog has its strengths (the first computers were analog, but electronics was much cruder 70 years ago), and it is definitely a better fit for neural nets. Bound to happen.

    • bcovertigo@lemmy.world · 19 days ago

      I appreciate that you wanted to help people even if it didn’t land how you intended. :)

    • kalkulat@lemmy.world (OP) · 19 days ago

      Nice thorough commentary. The LiveScience article did a better job of describing it for people with no background in this stuff.

      The original computers were analog. They were fast, but electronics was *so crude* at the time that it had to evolve a lot … and it has, over the last half-century.