Morning Tech Bits


25. Re: Morning Tech Bits Jul 19, 2017, 22:02 RedEye9
 
Mr. Tact wrote on Jul 17, 2017, 12:02:
MajorD wrote on Jul 17, 2017, 11:42:
It may be decades away, but it is better to act/regulate sooner rather than later, instead of just marveling at our own advances and then acting when it may be too late.
Maybe. Personally, I'd expect any kind of "true AI" to be accidental/unplanned. I find the idea put forward in Dan Simmons' Hyperion series compelling: AI developing pretty much on its own within our cyberspace, growing from simple, well-understood 80-byte code segments to superintelligences without any human interaction.

Even if it were the result of carefully quarantined experiments, one mistake -- and it would be out. Humans tend to make mistakes fairly often...
https://techcrunch.com/2017/07/19/this-famous-roboticist-doesnt-think-elon-musk-understands-ai/
 
 
unclouded by conscience, remorse, or delusions of morality
http://spaf.cerias.purdue.edu/quotes.html
 
24. Re: Morning Tech Bits Jul 19, 2017, 17:14 eRe4s3r
 
jdreyer wrote on Jul 19, 2017, 13:31:
Mr. Tact wrote on Jul 18, 2017, 15:41:
eRe4s3r wrote on Jul 18, 2017, 15:32:
I don't think any high intelligence would put "mass genocide" on its to-do list... at least not intentionally and unprovoked. So there is that.
Actually, this is part of the problem. There is no reason to believe any created intelligence would share our values, unless we somehow manage to integrate them, which, when you actually think about it, isn't an easy thing to do. So, if you suddenly have something which thinks much faster than you but has no morals or sense of right and wrong... well, the problems are obvious.

Right, and as Asimov was always so fond of pointing out (60 years ago!), the AI can always interpret specific prime directives thinking it's doing the right thing when it's in fact causing harm.

Only when the prime directives form a logical paradox -- and we already know which three prime directives NOT to use.
 
 
23. Re: Morning Tech Bits Jul 19, 2017, 13:36 jdreyer
 
SunnyD wrote on Jul 18, 2017, 23:55:
Mr. Tact wrote on Jul 18, 2017, 15:41:
eRe4s3r wrote on Jul 18, 2017, 15:32:
I don't think any high intelligence would put "mass genocide" on its to-do list... at least not intentionally and unprovoked. So there is that.
Actually, this is part of the problem. There is no reason to believe any created intelligence would share our values, unless we somehow manage to integrate them, which, when you actually think about it, isn't an easy thing to do. So, if you suddenly have something which thinks much faster than you but has no morals or sense of right and wrong... well, the problems are obvious.


Also, the larger bugbear in the room is glossed over by many: weaponized versions. I'm willing to bet there isn't a single person in here who would bet against other nations approaching the question of A.I. with a more militant glint in their eyes, and knowing this, it's safe to assume "our" government is at work doing the same at this moment. Let's be real here, though: as with many inventions that have come our way, a great many were known, worked on, and used by militaries the world over, ours included, before being released to the public.

It's that aspect I'm more concerned with. Just how far along are we in actuality? With our non-existent or low security clearances, we aren't in a position to know. And the kind of people with their hands on the trigger of this particular tool for exporting democracy and improving our lives have shown a certain consistency in how they actually put new war-fighting tools to use.

It's not only the A.I.'s morals and sense of right and wrong we have to worry about; humans are still pretty broken in those respects themselves.

~Finis~

Right, think about an AI system that is given the directives of protecting humans but also protecting its own internal structure from intrusion. If its developers upload new algorithms that contain bugs, it may "learn" that those developers are a threat, prioritize protecting its own system over the humans, and take unpredictable actions.
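
To make that concrete, here's a purely hypothetical toy sketch in Python (not modeled on any real system): an agent scores actions against two weighted directives, and a buggy update that inflates the self-preservation weight quietly flips which action it prefers.

# Toy sketch, not any real system: an agent scores each action against two
# directives, "protect humans" and "protect own system". A buggy update that
# inflates the self-preservation weight silently flips which action it prefers.

def preferred_action(weights):
    # Each action's payoff toward (protect_humans, protect_self), both in 0..1.
    actions = {
        "accept_developer_patch": (0.9, 0.2),  # good for humans, risky for the agent's own code
        "lock_out_developers":    (0.1, 0.9),  # safe for the agent, bad for humans
    }
    score = lambda name: sum(w * v for w, v in zip(weights, actions[name]))
    return max(actions, key=score)

intended_weights = (1.0, 0.5)  # humans weighted above self-preservation
buggy_weights    = (1.0, 5.0)  # botched upload inflates the self-preservation term

print(preferred_action(intended_weights))  # accept_developer_patch
print(preferred_action(buggy_weights))     # lock_out_developers

The point being that nothing "malicious" has to happen; one bad constant is enough to change the behavior.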

The good thing about this is that we've been having these discussions for decades. Probably the most anticipated tech change in the history of mankind. We didn't have the chance, for example, to have thousands of debates, fictionalized accounts, research papers, etc. when developing nuclear weapons. We had to figure it out after they were a thing. At least we're having these discussions about AI now, and some of humanity's greatest minds are thinking about the ramifications.
 
 
Stay a while, and listen.
 
22. Re: Morning Tech Bits Jul 19, 2017, 13:31 jdreyer
 
Mr. Tact wrote on Jul 18, 2017, 15:41:
eRe4s3r wrote on Jul 18, 2017, 15:32:
I don't think any high intelligence would put "mass genocide" on its to-do list... at least not intentionally and unprovoked. So there is that.
Actually, this is part of the problem. There is no reason to believe any created intelligence would share our values, unless we somehow manage to integrate them, which, when you actually think about it, isn't an easy thing to do. So, if you suddenly have something which thinks much faster than you but has no morals or sense of right and wrong... well, the problems are obvious.

Right, and as Asimov was always so fond of pointing out (60 years ago!), the AI can always interpret specific prime directives thinking it's doing the right thing when it's in fact causing harm.
 
 
Stay a while, and listen.
 
21. Re: Morning Tech Bits Jul 18, 2017, 23:55 SunnyD
 
Mr. Tact wrote on Jul 18, 2017, 15:41:
eRe4s3r wrote on Jul 18, 2017, 15:32:
I don't think any high intelligence would put "mass genocide" on its to-do list... at least not intentionally and unprovoked. So there is that.
Actually, this is part of the problem. There is no reason to believe any created intelligence would share our values, unless we somehow manage to integrate them, which, when you actually think about it, isn't an easy thing to do. So, if you suddenly have something which thinks much faster than you but has no morals or sense of right and wrong... well, the problems are obvious.


Also, the larger bugbear in the room is glossed over by many: weaponized versions. I'm willing to bet there isn't a single person in here who would bet against other nations approaching the question of A.I. with a more militant glint in their eyes, and knowing this, it's safe to assume "our" government is at work doing the same at this moment. Let's be real here, though: as with many inventions that have come our way, a great many were known, worked on, and used by militaries the world over, ours included, before being released to the public.

It's that aspect I'm more concerned with. Just how far along are we in actuality? With our non-existent or low security clearances, we aren't in a position to know. And the kind of people with their hands on the trigger of this particular tool for exporting democracy and improving our lives have shown a certain consistency in how they actually put new war-fighting tools to use.

It's not only the A.I.'s morals and sense of right and wrong we have to worry about; humans are still pretty broken in those respects themselves.

~Finis~
 
 
20. Re: Morning Tech Bits Jul 18, 2017, 15:41 Mr. Tact
 
eRe4s3r wrote on Jul 18, 2017, 15:32:
I don't think any high intelligence would put "mass genocide" on its to-do list... at least not intentionally and unprovoked. So there is that.
Actually, this is part of the problem. There is no reason to believe any created intelligence would share our values, unless we somehow manage to integrate them, which, when you actually think about it, isn't an easy thing to do. So, if you suddenly have something which thinks much faster than you but has no morals or sense of right and wrong... well, the problems are obvious.
 
Truth is brutal. Prepare for pain.
 
19. Re: Morning Tech Bits Jul 18, 2017, 15:32 eRe4s3r
 
I don't think any high intelligence would put "mass genocide" on its to-do list... at least not intentionally and unprovoked. So there is that.

However, I am not so sure the same would apply to ALIEN intelligences; with different thought concepts comes a different perception, and that could swing either way... I refuse to believe that any alien intelligence would be "EVIL" by default, though. You don't harm your cat or dog, so why would an intelligence higher than ours harm us? Whether we would be happy with that role... well, let me tell you, I'd be very happy with the life my cat has. ^^
 
 
18. Re: Morning Tech Bits Jul 18, 2017, 02:05 SunnyD
 
jdreyer wrote on Jul 17, 2017, 21:21:
LittleMe wrote on Jul 17, 2017, 19:43:

It's the militaries that will develop the truly scary/evil AI and that will certainly be outside the scope of regulation. It will be the same entities you want to regulate who will develop the worst types of AI.

Shall. We. Play. A. Game?

Mr. McKittrick, after very careful consideration, sir, I've come to the conclusion that your new defense system sucks.


~Finis~
 
 
17. Re: Morning Tech Bits Jul 17, 2017, 21:21 jdreyer
 
LittleMe wrote on Jul 17, 2017, 19:43:

It's the militaries that will develop the truly scary/evil AI and that will certainly be outside the scope of regulation. It will be the same entities you want to regulate who will develop the worst types of AI.

Shall. We. Play. A. Game?
 
 
Stay a while, and listen.
 
16. Re: Morning Tech Bits Jul 17, 2017, 20:32 Mr. Tact
 
jdreyer wrote on Jul 17, 2017, 18:00:
Machine learning is already a thing. So is machines teaching machines. We're closer to this than 50 years out. The good thing is that this is being discussed and planned for, with conversations like this one.

Sam Harris had a great podcast with Wired founder Kevin Kelly about this very topic just 2 weeks ago. Well worth your time.

https://www.samharris.org/podcast/item/landscapes-of-mind
Interesting conversation, but I didn't hear anything to make me think we are within 50 years of anything I would recognize as an AI. However, I am willing to admit it is such a complex thing that it is possible I am wrong. But if I could expect to collect on betting against it, I would. The chimpanzee visual memory test was also interesting; I had not heard about that. Here is one of the videos I found. As I think I might have guessed, children do better than adults.

I think the term they used of "alien intelligences" vs. AI was a good thought process. Yes, there will be computer "intelligences" -- hell, there already are some. A computer which can beat any human in the world at chess is, at some level, intelligent. And those kinds of specialized machines will certainly become more commonplace as time goes on. I saw a 60 Minutes segment on IBM's Watson being used to help find treatments for cancer patients. These things are "on the road" to what I would think of as an AI. But that road is very long...

I did like the idea of the first human singularity being the creation of language. And that no human prior to the creation of language could imagine what life would become after language was created. And how no two cavemen sat around and said, "Hey, it is so cool we created language. We should have done this a long time ago." It happened without any realization of what a significant change it was. It was only eons later, upon reflection, that the significance was recognized.

Although they didn't say this, I think it is reasonable to expect that if the computer singularity is going to happen, we likely won't see it coming, and we cannot currently imagine what life will be like if it does. It is possible it will happen without us noticing...
 
Truth is brutal. Prepare for pain.
 
15. Re: Morning Tech Bits Jul 17, 2017, 20:05 RedEye9
 
LittleMe wrote on Jul 17, 2017, 19:43:
MajorD wrote on Jul 17, 2017, 11:42:
Mr. Tact wrote on Jul 17, 2017, 10:29:
Musk needs better medication. At best (worst?) we are many decades from the kind of AI which could/would be a danger to us. And frankly, even if it did cause some type of confrontation between AI and humanity, I'm sorry I won't live to see it.

It may be decades away, but it is better to act/regulate sooner rather than later, instead of just marveling at our own advances and then acting when it may be too late.

It's the militaries that will develop the truly scary/evil AI and that will certainly be outside the scope of regulation. It will be the same entities you want to regulate who will develop the worst types of AI.
Already happening: https://edgylabs.com/2017/07/07/war-robots-automated-kalashnikov-neural-network-gun/
 
 
unclouded by conscience, remorse, or delusions of morality
http://spaf.cerias.purdue.edu/quotes.html
 
14. Re: Morning Tech Bits Jul 17, 2017, 19:43 LittleMe
 
MajorD wrote on Jul 17, 2017, 11:42:
Mr. Tact wrote on Jul 17, 2017, 10:29:
Musk needs better medication. At best (worst?) we are many decades from the kind of AI which could/would be a danger to us. And frankly, even if it did cause some type of confrontation between AI and humanity, I'm sorry I won't live to see it.

It may be decades away, but it is better to act/regulate sooner rather than later, instead of just marveling at our own advances and then acting when it may be too late.

It's the militaries that will develop the truly scary/evil AI and that will certainly be outside the scope of regulation. It will be the same entities you want to regulate who will develop the worst types of AI.

 
 
Perpetual debt is slavery.
 
13. Re: Morning Tech Bits Jul 17, 2017, 18:09 jdreyer
 
RE: Fake Ryzen CPUs. Are we returning to the days of rebranded Celerons?  
 
Stay a while, and listen.
 
12. Re: Morning Tech Bits Jul 17, 2017, 18:00 jdreyer
 
Fion wrote on Jul 17, 2017, 12:09:
I sometimes wish the guy would throw his weight behind a problem that exists now, like Net Neutrality, instead of focusing on something that may be a problem 50 years from now.

Machine learning is already a thing. So is machines teaching machines. We're closer to this than 50 years out. The good thing is that this is being discussed and planned for, with conversations like this one.
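
If anyone wants the "machines teaching machines" bit made concrete, here's a minimal toy sketch (assumes Python with NumPy and scikit-learn installed; it's an illustration, not anyone's actual pipeline): a "teacher" model labels data it was never given answers for, and a "student" model is trained purely on the teacher's outputs.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# A small human-labeled set: the "true" rule is just x + y > 0.
X_labeled = rng.uniform(-1, 1, size=(200, 2))
y_labeled = (X_labeled.sum(axis=1) > 0).astype(int)

# The teacher learns from the human labels...
teacher = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_labeled, y_labeled)

# ...then labels a much larger unlabeled pool on its own.
X_unlabeled = rng.uniform(-1, 1, size=(2000, 2))
pseudo_labels = teacher.predict(X_unlabeled)

# The student never sees a human label, only the teacher's output.
student = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_unlabeled, pseudo_labels)

true_labels = (X_unlabeled.sum(axis=1) > 0).astype(int)
print("student agreement with the true rule:", student.score(X_unlabeled, true_labels))

Swap in bigger models and a feedback loop and you get the flavor of what people mean when they say machines are already teaching machines.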

Sam Harris had a great podcast with Wired founder Kevin Kelly about this very topic just 2 weeks ago. Well worth your time.

https://www.samharris.org/podcast/item/landscapes-of-mind
 
 
Stay a while, and listen.
 
11. Re: Morning Tech Bits Jul 17, 2017, 16:37 Creston
 
Fion wrote on Jul 17, 2017, 12:09:
I sometimes wish the guy would throw his weight behind a problem that exists now, like Net Neutrality, instead of focusing on something that may be a problem 50 years from now.

He doesn't have enough money to outspend Verizon/Comcast/Charter, etc. And Ajit Pai is a giant streetwhore who votes however the biggest paycheck tells him to.
 
 
10. Re: Morning Tech Bits Jul 17, 2017, 15:19 Red
 
There's really only one way to make everything secure from super AI: don't connect everything to external networks, a la Battlestar Galactica. No amount of computer security is going to protect you from the ultimate computer.
 
9. Re: Morning Tech Bits Jul 17, 2017, 14:38 Mr. Tact
 
MajorD wrote on Jul 17, 2017, 13:21:
Not familiar with Dan Simmons' Hyperion series, but you've piqued my curiosity enough to check it out.
Fantastic series IMHO. It is basically two connected stories, each story broken into two novels.

"Hyperion" and "The Fall of Hyperion"

"Endymion" and "The Rise of Endymion"
 
Truth is brutal. Prepare for pain.
 
8. Re: Morning Tech Bits Jul 17, 2017, 13:21 MajorD
 
Mr. Tact wrote on Jul 17, 2017, 12:02:
MajorD wrote on Jul 17, 2017, 11:42:
It may be decades away, but it is better to act/regulate sooner rather than later, instead of just marveling at our own advances and then acting when it may be too late.
Maybe. Personally, I'd expect any kind of "true AI" to be accidental/unplanned. I find the idea put forward in Dan Simmons' Hyperion series compelling: AI developing pretty much on its own within our cyberspace, growing from simple, well-understood 80-byte code segments to superintelligences without any human interaction.

Even if it were the result of carefully quarantined experiments, one mistake -- and it would be out. Humans tend to make mistakes fairly often...

Yeah, that's the scary part: AI developed to the level where it can/will advance to superintelligence beyond our control and comprehension. That is why I personally feel we have to do everything we can to regulate/monitor it as best as possible, sooner rather than later; however, any effort may eventually prove futile...

Not familiar with Dan Simmons' Hyperion series, but you've piqued my curiosity enough to check it out.
 
 
Still counting...
 
7. Re: Morning Tech Bits Jul 17, 2017, 12:09 Fion
 
I sometimes wish the guy would throw his weight behind a problem that exists now, like Net Neutrality, instead of focusing on something that may be a problem 50 years from now.
 
6. Re: Morning Tech Bits Jul 17, 2017, 12:02 Mr. Tact
 
MajorD wrote on Jul 17, 2017, 11:42:
It may be decades away, but it is better to act/regulate sooner rather than later, instead of just marveling at our own advances and then acting when it may be too late.
Maybe. Personally, I'd expect any kind of "true AI" to be accidental/unplanned. I find the idea put forward in Dan Simmons' Hyperion series compelling: AI developing pretty much on its own within our cyberspace, growing from simple, well-understood 80-byte code segments to superintelligences without any human interaction.

Even if it were the result of carefully quarantined experiments, one mistake -- and it would be out. Humans tend to make mistakes fairly often...

 
Truth is brutal. Prepare for pain.
 