Formerly u/CanadaPlus101 on Reddit.

  • 8 Posts
  • 4.09K Comments
Joined 3 years ago
Cake day: June 12, 2023

  • If how exactly it’s implemented matters, regardless of similarity in internal dynamics and states, and there’s an immanent tangibility to it like rain or torque, I think you’re actually talking about a soul.

    Behaviorally, analog systems are not substrate dependent. The same second-order differential equation describes RLC circuits, audio resonators, and a ball on a spring, for example.
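
    To make that concrete, here’s the standard correspondence (textbook symbols, nothing from the thread):

    ```latex
    m\ddot{x} + c\dot{x} + kx = F(t)                 % mass-spring-damper
    L\ddot{q} + R\dot{q} + \frac{1}{C}\,q = V(t)     % series RLC, q = charge
    % Both are the same form a\ddot{y} + b\dot{y} + cy = f(t):
    % mass <-> inductance, damping <-> resistance,
    % spring constant <-> 1/capacitance
    ```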

    Analog AI chips exist, FWIW.

    If you’re looking at complexity theory, I’m pretty sure all physics is in EXPTIME. That’s a strong class, which is why we haven’t solved every problem, but it’s still digital, and there are stronger classes that come up, like with Presburger arithmetic. Weird fundamentally-continuous problems exist, and there was a pretty significant result about them in theoretical quantum computer science this decade, but actual known physics is very “nice” in a lot of ways. And yes, that includes having numerical approximations to an arbitrary degree of precision.
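
    A minimal sketch of what “arbitrary degree of precision” means in practice, integrating the damped oscillator from above with classic fourth-order Runge-Kutta; every number here (parameters, step counts) is made up for illustration:

    ```python
    import math

    # Damped oscillator y'' + 2*zeta*w*y' + w*w*y = 0, as a first-order system.
    W, ZETA = 1.0, 0.1

    def deriv(state):
        y, v = state
        return (v, -2 * ZETA * W * v - W * W * y)

    def rk4_step(state, h):
        # One classic RK4 step.
        k1 = deriv(state)
        k2 = deriv((state[0] + h / 2 * k1[0], state[1] + h / 2 * k1[1]))
        k3 = deriv((state[0] + h / 2 * k2[0], state[1] + h / 2 * k2[1]))
        k4 = deriv((state[0] + h * k3[0], state[1] + h * k3[1]))
        return (state[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
                state[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

    def exact(t):
        # Closed-form underdamped solution for y(0)=1, y'(0)=0.
        wd = W * math.sqrt(1 - ZETA ** 2)
        return math.exp(-ZETA * W * t) * (
            math.cos(wd * t)
            + ZETA / math.sqrt(1 - ZETA ** 2) * math.sin(wd * t))

    for n in (10, 100, 1000):          # refining the step size...
        h, state = 10.0 / n, (1.0, 0.0)
        for _ in range(n):
            state = rk4_step(state, h)
        print(n, abs(state[0] - exact(10.0)))  # ...drives the error toward zero
    ```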

    To be clear, there’s still a lot of problems with the technology, even if it can replace a graphic designer. Your screenshot is a great example of hallucination (particularly the bit about practical situations), or of just echoing back a sentiment it was given.



  • Biological neurons are actually more digital than artificial neural nets are. They either fire at full intensity or don’t fire at all (that much, at least, is well understood). Meanwhile, a node in your LLM has an approximately continuous range of activations.
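
    A toy contrast of the two regimes; the threshold spike is the crude all-or-nothing caricature, and GELU (tanh approximation) stands in for a typical continuous LLM activation. The numbers are made up:

    ```python
    import math

    def spiking_neuron(stimulus, threshold=1.0):
        # All-or-nothing: a full spike or silence, nothing in between.
        return 1.0 if stimulus >= threshold else 0.0

    def gelu(x):
        # GELU (tanh approximation), a common LLM activation: smoothly
        # continuous, so a node can be "a little bit on".
        return 0.5 * x * (1 + math.tanh(math.sqrt(2 / math.pi)
                                        * (x + 0.044715 * x ** 3)))

    for s in (0.5, 0.99, 1.0, 1.5):
        print(s, spiking_neuron(s), round(gelu(s), 3))
    ```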

    > They’re just tracking weighted averages about what word comes next.

    That’s leaving out most of the actual complexity. There are gigabytes or terabytes of mysterious numbers playing off of each other to decide the probabilities of each word in an LLM, and it’s looking at quite a bit of previous context. A human author also has to repeatedly decide the next word to type, so that framing doesn’t really preclude much.
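
    Stripped of the gigabytes, the very last step really is just scores and a softmax, conditioned on however the context was encoded. A toy sketch with made-up weights (real vocabularies have tens of thousands of entries, and the context encoding is the hard part):

    ```python
    import math, random

    random.seed(0)
    VOCAB = ["the", "cat", "sat", "mat", "."]
    D = 8                       # toy embedding width; real models use thousands
    W = [[random.gauss(0, 1) for _ in range(len(VOCAB))] for _ in range(D)]

    def next_word_probs(context_vector):
        # One logit per vocabulary word, then a softmax to get probabilities.
        logits = [sum(context_vector[i] * W[i][j] for i in range(D))
                  for j in range(len(VOCAB))]
        m = max(logits)                       # subtract max for stability
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        return [e / z for e in exps]

    ctx = [random.gauss(0, 1) for _ in range(D)]  # stand-in for encoded context
    print({w: round(p, 3) for w, p in zip(VOCAB, next_word_probs(ctx))})
    ```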

    If you just go word-by-word or few-words-by-few-words straightforwardly, that’s called a Markov chain, and they rarely get basic grammar right.
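
    For contrast, here is essentially an entire word-level Markov chain; “training” is just counting adjacent words, and the toy corpus is made up:

    ```python
    import random
    from collections import defaultdict

    random.seed(1)
    corpus = ("the cat sat on the mat . the dog sat on the cat . "
              "the mat sat on nothing .").split()

    # "Training": count which word follows which.
    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)

    # Generation: sample the next word given only the current one.
    word, out = "the", ["the"]
    for _ in range(12):
        word = random.choice(follows[word])
        out.append(word)
    print(" ".join(out))   # locally plausible pairs, no global grammar
    ```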

    Like you said, the issue is how to do it consistently, and not buried in an infinite sea of garbage, which is what you’d get if you increased stochasticity in service of originality. It’s a design limitation.
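
    The usual stochasticity knob is temperature: divide the logits before the softmax. Low values make the model deterministic and repetitive; high values flatten the distribution toward uniform noise, i.e. the sea of garbage. A minimal sketch with toy logits:

    ```python
    import math

    def softmax_with_temperature(logits, temperature):
        # temperature -> 0:   argmax (deterministic, repetitive)
        # temperature -> inf: uniform over the vocabulary (pure noise)
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        z = sum(exps)
        return [e / z for e in exps]

    logits = [4.0, 2.0, 1.0, 0.5]            # toy next-word scores
    for t in (0.2, 1.0, 5.0):
        print(t, [round(p, 3) for p in softmax_with_temperature(logits, t)])
    ```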

    Sure, we agree on that. Where we maybe disagree is on whether humans experience the same kind of tradeoff. And then we got a bit into unrelated philosophy of mind.

    > and you can literally program an LLM inside a fax machine if you wanted to.

    Absolutely, although it’d have to be more of an SLM to fit. You don’t think the exact hardware used is important though, do you? Our own brains don’t exactly look like much.