Verno wrote on May 26, 2023, 09:31:
Razumen wrote on May 24, 2023, 21:04:
If an algorithm is eventually made that can "imitate" human intelligence with no tells, then there will literally be no basis for you to say it isn't intelligent. You're basically going down the route of the "No True Scotsman" fallacy.
At its core it's still inputs and outputs with parameters that we control. Imitating human intelligence does not make it intelligent. If you put AI in a situation that requires original thought and problem solving, it cannot perform the way a human can.
jdreyer wrote on May 26, 2023, 02:06:
Razumen wrote on May 24, 2023, 21:04:
jdreyer wrote on May 24, 2023, 13:44:
Then you're getting an algorithm that imitates human intelligence with no "tells," but isn't truly intelligent.
If an algorithm is eventually made that can "imitate" human intelligence with no tells, then there will literally be no basis for you to say it isn't intelligent. You're basically going down the route of the "No True Scotsman" fallacy.
A calculator calculates with no tells (mistakes). Is it intelligent?
HorrorScope wrote on May 24, 2023, 13:23:
Unrelated: Beamer stated "works for my wife when it actually listens to her"; in my house that means a complete failure. Anything I bring in like this has to be borderline flawless, or my wife is up my ass about why not. She's been a great barometer of "is this really better or needed?"
HorrorScope wrote on May 24, 2023, 13:23:
jdreyer wrote on May 23, 2023, 20:33:
It's not to say that you couldn't replicate the functioning of a meat brain in silicon: you absolutely could. We're just not there yet, not close to being there, and won't be there until well past our lifetimes.
I don't know how a tech person can say this.
You think we'll see true AGI in silicon in our lifetimes? I do not. What we're seeing now are impressive imitation algorithms, but they are completely dependent on vast amounts of preexisting data to function and incapable of inspiration or originality. I think that is much further down the road.
Razumen wrote on May 24, 2023, 01:20:
jdreyer wrote on May 23, 2023, 20:42:
And then humans showed how Go engines weren't AI. If it can be beaten again and again by a trick that any good human player would recognize, it's hardly intelligent.
Yes, that's how progress is made: identifying failures and correcting them. I guess they should just give up now because their program wasn't perfect.
I guess, if that's how you feel? As I said below, I think they will get to true general intelligence eventually, just not in our lifetimes. Then you're getting an algorithm that imitates human intelligence with no "tells," but isn't truly intelligent.
jdreyer wrote on May 23, 2023, 20:33:
It's not to say that you couldn't replicate the functioning of a meat brain in silicon: you absolutely could. We're just not there yet, and not close to being there yet and won't be there until well past our lifetimes.
jdreyer wrote on May 23, 2023, 20:42:
And then humans showed how Go engines weren't AI. If it can be beaten again and again by a trick that any good human player would recognize, it's hardly intelligent.
ZandarKoad wrote on May 22, 2023, 21:46:
"The reality is that we barely understand human intelligence and sentience. Saying that it's impossible to create one with computers, even if it's just at the level of animal intelligence, is just pure folly."
Wait, what? Wouldn't the suggestion that it is possible to create one be pure folly in this scenario? I don't follow your logic.
Sepharo wrote on May 22, 2023, 21:18:
Slashman wrote on May 22, 2023, 21:07:
I'll leave this article link here but I know it won't do any good.
Wow yeah you found the one guy... from Vice... who claimed it cheated.
Have a look at this specific section of this article:
https://en.wikipedia.org/wiki/OpenAI_Five#Comparisons_with_other_game_AI_systems
OpenAI Five observes every fourth frame, generating 20,000 moves. By comparison, chess usually ends before 40 moves, while Go ends before 150 moves.
Thus, playing Dota 2 requires making inferences based on this incomplete data, as well as predicting what their opponent could be doing at the same time. By comparison, Chess and Go are "full-information games", as they do not hide elements from the opposing player.
Without counting the perpetual aspects of the game, there are an average of ~1,000 valid actions each tick. By comparison, the average number of actions in chess is 35 and 250 in Go.
The OpenAI system observes the state of a game through developers’ bot API, as 20,000 numbers that constitute all information a human is allowed to get access to. A chess board is represented as about 70 lists, whereas a Go board has about 400 enumerations.
Remember that Go itself used to be considered unbeatable by AI because it was too advanced.
And remember that this AI that beat the world champions and 99.4% of games played against regular players of the game... did that 4 years ago.
And finally remember that this was your original claim:
When you increase complexity and there are trade offs to be made...the computer will always fail. It doesn't know what is better in a given situation...it doesn't think ahead.
No... those are exactly the things it's succeeding at.
edit: Should also be noted that the video I linked is about an event that occurred after that Vice article...
A human player has comprehensively defeated a top-ranked AI system at the board game Go, in a surprise reversal of the 2016 computer victory that was seen as a milestone in the rise of artificial intelligence.
Kellin Pelrine, an American player who is one level below the top amateur ranking, beat the machine by taking advantage of a previously unknown flaw that had been identified by another computer. But the head-to-head confrontation in which he won 14 of 15 games was undertaken without direct computer support.
The triumph, which has not previously been reported, highlighted a weakness in the best Go computer programs that is shared by most of today’s widely used AI systems, including the ChatGPT chatbot created by San Francisco-based OpenAI.
The tactics that put a human back on top on the Go board were suggested by a computer program that had probed the AI systems looking for weaknesses. The suggested plan was then ruthlessly delivered by Pelrine.
“It was surprisingly easy for us to exploit this system,” said Adam Gleave, chief executive of FAR AI, the Californian research firm that designed the program. The software played more than 1 million games against KataGo, one of the top Go-playing systems, to find a “blind spot” that a human player could take advantage of, he added.
Prez wrote on May 22, 2023, 05:16:
It's not to say that you couldn't replicate the functioning of a meat brain in silicon: you absolutely could. We're just not there yet, and not close to being there yet and won't be there until well past our lifetimes.
I have to say I'm with Platty on this one. Telling us why impressive emergent behavior in an AI isn't actually intelligence sounds reductive to me, more an argument of semantics. Our brains, as has been stated, function on what ultimately amounts to neurons firing in specific sequences. Just like the 1s and 0s of a computer, only with vastly more complexity. Admittedly I am not nearly as technical as many here, but I can't help wondering if that leads many experts to not see the forest for the trees.
Burrito of Peace wrote on May 23, 2023, 12:11:
Then you'll need to learn, intimately and in-depth, how and why it works. If you have not already, take a look at HomeAssistant. It is unarguably the best of breed when it comes to self-hosted home automation. Probably not a use case for you, but I prototyped a watering system for the gardens here for Mrs. Burrito with it. Modified some plug-ins and created two new ones to allow it to understand moisture levels, light levels, and pH levels to water when the soil moisture level hit a threshold and send a report to Mrs. Burrito with that plus current pH levels. Went on to do other things with it and I will be using it extensively on the homestead to take some repetitive tasks off of my plate.
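The watering logic Burrito describes (water when soil moisture drops below a threshold, then report moisture, light, and pH) could be sketched roughly like this. This is purely illustrative Python, not the actual HomeAssistant plug-ins he wrote; the function names, thresholds, and sensor readings are all made up:

```python
# Illustrative sketch of threshold-based watering logic, NOT the actual
# HomeAssistant plug-in described above. Names and numbers are invented.

def should_water(moisture_pct: float, threshold_pct: float = 30.0) -> bool:
    """Water only when soil moisture falls below the threshold."""
    return moisture_pct < threshold_pct

def make_report(moisture_pct: float, light_lux: float, ph: float) -> str:
    """Summarize current sensor readings for a status report."""
    return f"moisture={moisture_pct:.0f}% light={light_lux:.0f}lx pH={ph:.1f}"

# Example: 22% moisture is below a hypothetical 30% threshold,
# so watering would trigger and a report would be generated.
if should_water(22.0):
    report = make_report(22.0, 5400.0, 6.5)
```

In a real HomeAssistant setup this kind of check would live in an automation fed by the sensor entities, rather than standalone code like this.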
Beamer wrote on May 23, 2023, 11:25:
Absolutely no offense is taken, and the respect is mutual.
Also, I'm not debating the pros and cons about how AI works. Much like social media algorithms, I'm pretty firmly in the "con" phase. But my opinion hardly matters. It exists, it's out there, and my competitors are going to use it. Either I adapt, or I die.
But there are positives. You mention Home Automation. I don't want to cede control of my home to AI. I want to control what I want when I want. But I do want to move away from Amazon Echo, which has long since gotten far too verbose and keeps responding to my questions with additional questions. But it's helpful as hell to be able to turn off or dim lights with my voice, to set voice timers while cooking, and ask dumb questions either while cooking or while talking with my wife and not feeling like looking it up on the phone.
Basically any Home Automation platform will be able to add in the latter. Bing is far better than Alexa at turning queries into verbal answers. There's no reason why Home Automation won't take over a huge chunk of Alexa's capabilities in the next few years, allowing me to ditch the corporation looking to monetize what I paid for and shift to something open source and self contained in my house.
fujiJuice wrote on May 22, 2023, 23:01:
This is my primary concern. Not that it will become self-aware or anything like that, but the speed of its increasing complexity and spread.
I am worried about the spread of useless information masquerading as intelligently written articles.
Burrito of Peace wrote on May 23, 2023, 04:38:
I'm sure I would qualify in Beamer's opinion for the "old man and his rotary phone" status. I view people like Beamer as "kids who vapidly embrace shiny toys with no understanding of how they work, why they work, or the pros and cons of their existence". That's not a slight at Beamer in particular because I do respect his opinion.
...into kinetic energy. We've bolted a ton of things onto them to make them more powerful, more efficient, and less polluting, but they are no more revolutionary (heh, pun intended) than their forebears. That still doesn't make what we have a true AI in any sense.
Beamer wrote on May 22, 2023, 22:27:
Things are changing rapidly.