
Old 08-27-2009, 10:30 AM   #1
easypokergonj

Join Date
Oct 2005
Posts
420
Senior Member
Default Do singularity people consider this possibility?
I hope that at some point people will stop writing code (software). If there is an AI that is as smart as a human but a million times faster, then it should have enough time to write bug-free, optimised code.
I am so looking forward to ATI hiring a robot to write some decent drivers...
easypokergonj is offline


Old 08-27-2009, 01:41 PM   #2
SodeSceriobia

:drool: I can't wait to find out what AI thinks of us. Keep or Destroy?


Old 08-27-2009, 01:54 PM   #3
Slintreeoost

It's not just scale. You can have infinite chess pieces and infinite square tiles. The real question is: can we create intelligence that is not random or meaningless?


Old 08-27-2009, 04:24 PM   #4
mv37afnr

A self-replicating/improving AI would design its replacement twice as quickly and with ten times the bugs of a team of human engineers. The Eschaton will be so riddled with viruses that it will solely dedicate its nigh-infinite processing power towards the accumulation of penis enlargement pills.


Old 08-27-2009, 05:43 PM   #5
GoblinGaga

Maybe the real question is whether human intelligence is meaningful or not, which is back to a debate from another thread a few weeks ago.


Old 08-27-2009, 05:47 PM   #6
Duaceanceksm

One of the major problems with the whole singularity business is that the brain is relatively robust while a digital computer is relatively brittle (even leaving aside the undecidability of bug removal). Brain cells die and synapses misfire all the time, but we still manage to more or less function. On the other hand, a single transient memory fault can crash a supercomputer.
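The robust-versus-brittle contrast can be made concrete with a toy sketch (all numbers invented for illustration): a value stored redundantly across many noisy units degrades gracefully when some units die, while the same value held in a single machine word is wrecked by one flipped bit.

```python
import random

random.seed(0)

# Brain-like storage: a value held as the average of many redundant,
# noisy units. Killing 10% of the "cells" barely moves the answer.
units = [10.0 + random.gauss(0, 0.1) for _ in range(1000)]
survivors = random.sample(units, 900)
robust_before = sum(units) / len(units)
robust_after = sum(survivors) / len(survivors)

# Computer-like storage: the same value in one integer, where a single
# transient bit flip is catastrophic.
value = 10
corrupted = value ^ (1 << 30)   # one flipped memory bit

print(abs(robust_after - robust_before))   # tiny drift
print(corrupted - value)                   # off by 1073741824
```

Real machines paper over this brittleness with ECC memory and checkpointing, which is exactly the point: the robustness has to be engineered in, it isn't free.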

Re the difference between human intelligence and computational intelligence, human-computer chess games are a good example of the disconnect: human chess masters look only one or two moves ahead and mostly focus on board patterns (trying to maximize the space covered by a bishop, etc.), while chess-playing supercomputers look ahead several moves, with fancy heuristics to trim the search space.
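The engine side of that disconnect can be sketched in a few lines: depth-limited minimax with alpha-beta pruning, the standard trick for trimming the search space. The game tree and leaf scores below are made up; in a real engine the leaves would be a heuristic board evaluation (material, space covered by pieces, and so on).

```python
# Toy game tree: internal nodes list their children; leaves carry a
# score standing in for a heuristic board-evaluation function.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
SCORES = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if node in SCORES:                 # leaf: heuristic evaluation
        return SCORES[node]
    best = float("-inf") if maximizing else float("inf")
    for child in TREE[node]:
        score = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:              # prune: opponent won't allow this line
            break
    return best

print(alphabeta("root", True))   # 3: branch "b" is cut off after seeing b1 = 2
```

The pruning is what lets engines look "several moves ahead": whole subtrees get discarded without ever being evaluated.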


Old 08-27-2009, 06:03 PM   #7
zCLadw3R

The whole point about a true AI is that it's conscious and can choose its own goals. We don't necessarily design it with that goal. But it's a human goal, and we'd be creating the AI in our image to some extent, so it's reasonable to assume it might end up with that goal too, even if only through self-improvement rather than by replicating superior versions of itself.


Old 08-27-2009, 06:42 PM   #8
KlaraNovikoffaZ

Being able to create one is an achievement in itself.

And it's not very useful if you don't give it access to stuff to control.

There's also the idea that rather than destroying humanity, AIs might enable us to transcend and become godlike, so it's not all doom and gloom.


Old 08-27-2009, 07:22 PM   #9
Spisivavona

The whole point about a true AI is that it's conscious and can choose its own goals.
Bzzt. The whole point about a general-purpose AI ("true" AI has no meaning) is that it has humanlike intelligence: you can provide it with natural-language goals, and it has all the tools to complete them built in.

If the AI is strictly smarter than people, and we tell it "design a general-purpose AI smarter than yourself", it will clearly succeed (if only by epsilon).

Seriously, guys, that's all there is to it. Self-design + smarter than humans = strictly increasing intelligence.

edit: and again, seriously, consciousness doesn't enter into it. No one cares about consciousness. People care about getting **** done.
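The "smarter by epsilon" induction above is easy to sketch. Assuming, hypothetically, that each generation can always design a successor smarter than itself, and that a smarter designer improves a little faster, the growth compounds; every number below is invented for illustration.

```python
# Toy model of "self-design + smarter than humans": each AI builds a
# replacement smarter than itself, if only by epsilon. Modelling the gain
# as proportional (a smarter designer improves faster) makes it compound.
def design_successor(intelligence, epsilon=0.01):
    return intelligence * (1 + epsilon)

levels = [1.0]                    # generation 0: barely superhuman
for _ in range(100):
    levels.append(design_successor(levels[-1]))

assert all(b > a for a, b in zip(levels, levels[1:]))  # strictly increasing
print(round(levels[-1], 2))       # ~2.7 after 100 generations of 1% gains
```

The sequence is strictly increasing by construction; whether it explodes or stalls depends entirely on whether the per-generation gains shrink, which is where the disagreement in this thread actually lives.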


Old 08-27-2009, 07:30 PM   #10
oscilsoda

(if only by epsilon)


Old 08-27-2009, 07:32 PM   #11
saruxanset

edit: and again, seriously, consciousness doesn't enter into it. No one cares about consciousness. People care about getting **** done.
You're assuming that all the folks who jerk off to SMAC don't qualify as "people." Not that I necessarily disagree; just pointing it out.


Old 08-28-2009, 12:19 AM   #12
sjdflghd

The processes of intelligence are very simple; it's just a question of scale, so why should there be a limit?
Didn't know intelligence was a simple matter of scale; I thought it was more complex. I guess my question is moot, then.


Old 08-28-2009, 12:26 AM   #13
Wr8dIAUk

The whole point about a true AI is that it's conscious and can choose its own goals.
Are you sure? I'm not quite convinced we humans can choose our goals.


I mean, seriously, even the most ethical and rational human would have trouble not eating a baby or two now and then if doing so somehow produced feelings that make orgasms pale in comparison.


Old 08-28-2009, 12:35 AM   #14
dabibibff

Or self-design + smarter than humans + design complexity increasing faster than intelligence = intelligence plateau.
Isn't this what I basically said in the OP?
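The plateau alternative quoted above is the same recurrence with shrinking gains. A hedged sketch with an invented "complexity drag" that halves each generation's improvement: the gains form a geometric series, so intelligence converges instead of exploding.

```python
# Toy plateau model: each generation still improves on the last, but design
# complexity grows faster than intelligence, so each improvement is half
# the size of the previous one. The series 0.5 + 0.25 + ... converges.
intelligence, gain = 1.0, 0.5
for _ in range(60):
    intelligence += gain
    gain *= 0.5                   # complexity eats half of every gain

print(round(intelligence, 6))     # ~2.0: strictly increasing, yet bounded
```

Both this and the explosion story are "strictly increasing"; the disagreement is only about whether the increments sum to infinity.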


Old 08-28-2009, 12:48 AM   #15
CtEkM8Vq

Isn't this what I basically said in the OP?
You said it all foreign-like.


Old 08-28-2009, 06:17 AM   #16
AdipexAdipex

That's basically what heroin does. Solution: don't shoot heroin, ever, nor eat a baby.
But what if taking heroin or eating babies were like sex? You could easily make a mind want something it has never tried before.


Old 08-28-2009, 08:50 PM   #17
alegsghed

But what if taking heroin or eating babies were like sex? You could easily make a mind want something it has never tried before.
I have a powerful, naturally occurring urge to have sex. Do you have a powerful, naturally occurring urge to eat babies?


Old 09-02-2009, 11:13 AM   #18
bZEUWO4F

Hera: your hypothesis relies not only on the idea that it will be hard for a general-purpose AI to design marginal improvements to its own code (implausible) but also that said AI will not scale at all with processing speed or # of processors.
AI is an app; it will scale with processing speed or number of processors, in the sense that it will at least run faster.

Tell me: if they locked you up in a room for a hundred years with a few terabytes of information detailing how your body and mind work, could you draw up the schematics for a couple of better humans? I'm not saying you couldn't. I'm just saying that a man trapped in a lamp, experiencing 100 years for every one that passes in real time, doesn't sound scary. Now, a small society of such people might be.

But when does a society of 1,000 achieve as much in a century as a society of 6-10 billion people does in a year? What I'm saying is that the first few generations of AI will probably be designed and improved by an ant-swarm of humans rather than by the first few human-level or even superhuman-level AIs.


The singularity people seem to think, or at least speak, as if the first AI will result in an intelligence explosion. They don't realize that the rise in global population, and especially the rise in the number of educated people, is already an intelligence explosion. A small population of AIs won't speed up the process any more than a few talented humans being born would. A genius-level AI is still just that: genius level.


The only advantage it has over a human genius is that we can make it so that it enjoys its job. A lot.


Old 09-02-2009, 03:09 PM   #19
Anakattawl

Rapture for nerds, indeed.

why should there be a limit?
I don't believe it's possible for humans to create, by themselves, something more intelligent.


Old 09-02-2009, 09:46 PM   #20
Futfwrca

Not necessarily.

I think the reason you believe that is that we've not yet figured out how to write intelligence into a computer. Your picture of hardware development is of an autistic savant getting faster and faster at multiplying two numbers in his head. If a regular person were magically granted the ability to keep twice as many facts in his head, manipulate twice as many pieces of information at once, and do all of this twice as fast, would you not say that he had become more intelligent?


