Alt Text: an image of Agent Smith from The Matrix with the following text superimposed, “1999 was described as being the peak of human civilization in ‘The Matrix’ and I laughed because that obviously wouldn’t age well and then the next 25 years happened and I realized that yeah maybe the machines had a point.”
Unless you just died or are about to, you can’t really confidently make that statement.
There’s no technical reason to think we won’t in the next ~20-50 years. We may not, and there may turn out to be a technical reason why we can’t. But the previous big technical hurdles were the amount of compute needed and the fact that computers couldn’t handle fuzzy pattern matching. Modern AI has effectively found a way of solving the pattern-matching problem, and current large models like ChatGPT model more “neurons” than are in the human brain, let alone the power that will be available to them in 30 years.
I don’t think that’s true. Parameter counts are more akin to neural connections, and the human brain has something like 100 trillion connections.
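A rough back-of-the-envelope check of that comparison (the parameter count is a ballpark public estimate for a frontier model, and the ~100 trillion synapse figure is the commonly cited approximation, so both numbers are assumptions, not precise measurements):

```python
# Rough scale comparison: model parameters vs. human synaptic connections.
# Both figures are ballpark estimates, not exact values.
frontier_model_params_est = 1.8e12   # rumored ~1.8 trillion parameters (unconfirmed)
human_synapses_est = 1e14            # commonly cited ~100 trillion connections

ratio = human_synapses_est / frontier_model_params_est
print(f"The brain has roughly {ratio:.0f}x more connections "
      f"than the model has parameters")
```

Even on these generous assumptions, the brain comes out around two orders of magnitude ahead if you match parameters to connections rather than to neurons.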
There’s no technical reason to think we will in the next ~20-50 years, either.
there’s plenty of reason to believe that, whether we have it or not, some billionaire asshole is going to force you to believe in and respect his corporate AI as if it’s sentient (while simultaneously treating it like slave labor)
There are plenty of economic reasons to think we will, as long as it’s technically possible.
Was it? I thought it was always that we haven’t quite figured out what thinking really is.
I mean, no, not really. We know what thinking is. It’s neurons firing in your brain in varying patterns.
What we don’t know is the exact wiring of those neurons in our brain. So that’s the current challenge.
But previously, we couldn’t even effectively simulate neurons firing in a brain. AI algorithms are called that because they can effectively simulate the way that neurons fire (just using silicon), and that makes them really good at all the fuzzy pattern-matching problems that computers used to be really bad at.
So now the challenge is figuring out the wiring of our brains, and/or figuring out a way of creating intelligence that doesn’t use the wiring of our brains. Both are entirely possible now that we can experiment and build and combine simulated neurons at roughly the same scale as the human brain.
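For what it’s worth, the “simulated neuron” being talked about here is a pretty simple abstraction: a weighted sum of inputs pushed through a nonlinearity. This is a minimal sketch of that standard artificial-neuron model (the weights and inputs are arbitrary illustrative numbers), loosely inspired by firing rates rather than a biophysical simulation:

```python
import math

# A single artificial "neuron": weighted sum of inputs plus a bias,
# squashed through a sigmoid. This is the standard ML abstraction,
# loosely inspired by neuron firing -- not a biophysical model.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))  # output acts like a "firing rate"

# Example with two inputs and hand-picked weights
out = neuron([1.0, 0.5], [0.8, -0.3], bias=0.1)
print(round(out, 3))  # -> 0.679
```

Chaining billions of these together, and tuning the weights automatically, is what the large models actually do.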
Aren’t you just saying the same thing? We know it has something to do with the neurons, but we can’t figure out exactly how.
The distinction is that it’s not ‘something to do with neurons’, it’s ‘neurons firing and signalling each other’.
Like, we know the exact mechanism by which thinking happens, we just don’t know the precise wiring pattern necessary to recreate the way that we think in particular.
And previously, we couldn’t effectively simulate that mechanism with computer chips; now we can.
Other than that nobody has any idea how to go about it? The things called “AI” today are not precursors to AGI. The search for strong AI is still nowhere close to any breakthroughs.
Assuming that the path to AGI involves something akin to all the intelligence we see in nature (i.e. brains and neurons), then modern AI algorithms’ ability to simulate neurons using silicon and math is inarguably and objectively a precursor.
Machine learning, renamed “AI” with the LLM boom, does not simulate intelligence. It integrates feedback loops, which is kind of like learning, and it uses a network of nodes which kind of looks like neurons if you squint from a distance. These networks have been around for many decades, I’ve built a bunch myself in college, and they’re at their core just big parameterized functions. Current technology allows very large networks and networks of networks, but it’s still not in any way similar to brains.
There is separate research into simulating neurons and brains, but that is separate from machine learning.
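To make the “just parameterized functions” point concrete, here’s a tiny feed-forward network written out explicitly in plain Python (the weights are arbitrary examples): each layer is nothing more than multiply, add a bias, apply a simple nonlinearity, repeat.

```python
# A tiny two-layer feed-forward network, spelled out by hand to show
# it's just nested parameterized functions. Weights are arbitrary.

def relu(xs):
    # The nonlinearity: clamp negatives to zero
    return [max(0.0, v) for v in xs]

def linear(xs, weights, biases):
    # weights: one row of input weights per output unit
    return [sum(w * v for w, v in zip(row, xs)) + b
            for row, b in zip(weights, biases)]

def tiny_mlp(xs):
    hidden = relu(linear(xs, [[0.5, -0.2], [0.1, 0.3]], [0.0, 0.1]))
    return linear(hidden, [[1.0, -1.0]], [0.0])  # single output unit

print(tiny_mlp([1.0, 2.0]))
```

Whether you read that as “obviously not a brain” or as “the basic building block of one” is pretty much the whole disagreement in this thread.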
Also, we don’t actually understand how our brains work at the level where we could copy them. We understand some things and have some educated guesses on others, but overall it’s pretty much a mystery still.