#22 |
> I believe that it's perfectly possible to design artificial intelligences that are superior to humans in specific ways. We can design AIs that process information faster, store more data, beat the world's best chess players, etc. Is there some fundamental quality of human intelligence that cannot theoretically be reproduced by an artificial intelligence?

With a big enough database and sufficient processor speed you can brute-force chess. As it stands, it comes down to two things:

1. An efficient pruning algorithm.
2. Perfect memory.

Both of which were designed by people in the first place. Could a computer paint? Write best-selling novels?
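Roughly, those two ingredients look like this. A toy sketch, not any real engine; the `state` interface (`key`/`moves`/`apply`/`score`/`is_terminal`) is hypothetical, just enough to show the shape of the search:

```python
# Negamax search with alpha-beta pruning (the "efficient pruning
# algorithm") plus a transposition table (the "perfect memory").
# state.score() is assumed to evaluate from the player to move.
# A real transposition table also records whether a stored value is
# exact or just a bound; that bookkeeping is omitted here for brevity.

def search(state, depth, alpha=float("-inf"), beta=float("inf"), memo=None):
    memo = {} if memo is None else memo
    key = (state.key(), depth)
    if key in memo:                       # perfect memory: reuse old work
        return memo[key]
    if depth == 0 or state.is_terminal():
        return state.score()              # static evaluation at the leaves
    best = float("-inf")
    for move in state.moves():
        value = -search(state.apply(move), depth - 1, -beta, -alpha, memo)
        best = max(best, value)
        alpha = max(alpha, value)
        if alpha >= beta:                 # prune: opponent won't allow this line
            break
    memo[key] = best
    return best
```

Both pieces were still designed by people, which is the point of the post.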
#23 |
> Why not? Both could be "brute forced."
>
> Copies, sure. Scan the image and print it out. But originals?
>
> You could have a giant database of paintings/novels rated by popular and professional opinion, and then randomly plot colors/words on canvas/paper until the result incorporates enough of the most highly rated concepts in said paintings/novels to be considered good. It would probably do for modern art, which says everything you need to know about it. And there are more advanced problem-solving techniques than "brute force."

Oh, I understand that; I'm just questioning what the algorithm to paint would be. We can't even do good word recognition or translation, let alone something altogether new.
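For what it's worth, the "rated database" scheme quoted above is easy to caricature in code. A toy sketch under made-up assumptions: the feature names and the scoring stand in for whatever ratings data you would actually have:

```python
import random

# Generate random "paintings" (here just feature sets), score each one by
# overlap with features of highly rated works, and keep the best. This is
# the blind generate-and-test loop described above, not real aesthetics.

HIGHLY_RATED = {"contrast", "symmetry", "warm_palette"}   # assumed ratings data
ALL_FEATURES = ["contrast", "symmetry", "warm_palette", "noise", "clutter"]

def score(candidate):
    return len(candidate & HIGHLY_RATED)

def brute_force_paint(trials=100_000):
    best, best_score = None, -1
    for _ in range(trials):
        candidate = frozenset(random.sample(ALL_FEATURES,
                                            random.randint(1, len(ALL_FEATURES))))
        if score(candidate) > best_score:
            best, best_score = candidate, score(candidate)
    return best, best_score

print(brute_force_paint())
```

It converges on whatever the ratings reward, which is exactly the objection: all the "taste" lives in the database.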
#25 |
He did, but I only know you can teach one to paint by numbers. Translations are complicated even for humans. (The author picked one of seven French words that mean the same thing; now the translator must pick one of five English words that mean the same thing.) Writing with a thesaurus at the front of the brain probably wouldn't help, and yet that is exactly how the smart program I mentioned actually works. Add the complication that self-aware machines would be unfathomable to humans as to interest and attention.
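The word-choice problem in that parenthetical is easy to sketch. The synonym lists here are illustrative, not a real lexicon, and this is not the "smart program" mentioned above, just the naive thesaurus approach it's being compared to:

```python
import random

# A "thesaurus at the front of the brain" translator: each French word
# maps to several near-equivalent English words, and with no context the
# program has nothing to tell it which one actually fits.

THESAURUS = {
    "petit":     ["small", "little", "tiny", "slight", "minute"],
    "minuscule": ["small", "little", "tiny", "slight", "minute"],
}

def naive_translate(word):
    # Any equivalent is "as good" as any other -- which is the problem.
    return random.choice(THESAURUS[word])
```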
#29 |
No, it's a bit more elusive than that. Intelligence is not just processing speed, or we would in fact consider Rain Men to be the smartest men on the planet. True intelligence involves, among other things:

- the ability to approach problems from different angles
- the ability to learn new concepts and integrate them with existing knowledge
- the ability to imagine unusual possibilities and question one's own assumptions
- the ability to act and learn independently

None of which computers have. IIUC, computers have a very fixed set of ideas to work with, and they don't think so much as plug input into an equation and give you the output. The equation may be very complicated, but it has to be written beforehand, and the computer can't choose to modify it. It can't "think" about the problem at all.

Now, if I'm getting you right (tell me if I'm not), you're saying we just need to find a way to have computers think, "IF this ain't making sense THEN step back and examine the problem in a new light." But how does the computer know what new light to look at it under? You'd have to program in all possible ways of looking at a problem, which is impossible -- unless you already have the problem solved, no? In which case, there's no need for the computer in the first place; it's just acting out a role you wrote for it. You need to teach a computer to imagine, to think for itself. Which is fundamentally different from what computers do. Again, IIUC. I'm definitely not a comp sci person.
#30 |
> > No, it's a bit more elusive than that. Intelligence is not just processing speed, or we would in fact consider Rain Men to be the smartest men on the planet.
>
> I specifically explained that making assertions based on current computers (WHICH I EVEN DESCRIBED AS "AUTISTIC") was NOT meaningful.
>
> > Now, if I'm getting you right (tell me if I'm not), you're saying we just need to find a way to have computers think, "IF this ain't making sense THEN step back and examine the problem in a new light." [...] Again, IIUC. I'm definitely not a comp sci person.
>
> OMFG. What the **** do you think this: "IF we've managed to write some sort of general-purpose AI THEN it becomes much more reasonable to assert that better hardware -> higher intelligence" means? This is the WHOLE POINT: that at some future time we'll have managed to create a sort of general-purpose AI, one that's at least as flexible in its capabilities as human beings. Once you get there, even pure hardware evolution is a plausible mechanism to take us toward a singularity (more properly, to take us far along a path of more-than-exponential growth in capabilities). On the other hand, there is no reason to suppose that hardware evolution is the ONLY method the AI would use to improve on its own design, unless it turns out that human beings are so universal in their intelligence that we're close to ideal in that respect.

You need to calm down. What I am contesting is the idea that we will in fact manage to create a "general-purpose AI." Intelligence is not just crunching numbers, and the genetic algorithm Lori posted (again, I'm not a computer scientist) would seem to be a sophisticated form of trial and error aided by pseudo-evolutionary processes: if solution X does not achieve the optimal result, modify it via mutation and try again, possibly combining it with other solutions that come closer to the ideal. It's still brute force, trying (almost) everything blindly. That's not at all how a human thinks, and I can't imagine how we could make a computer think that way. That doesn't seem to be how computers work in the first place, and saying computers can achieve it sounds to me like saying that a train can be made to fly like an airplane. Utterly fundamental changes would have to be made. The genetic algorithm doesn't even take a step in the direction of true thought; it's just a clever mechanism to compensate for the computer's blindness. The key steps of imagination and in-depth analysis are missing.
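Since the genetic algorithm keeps coming up, here is roughly what one looks like. A toy sketch against a made-up target -- not Lori's program, which isn't available, just the general mutate/combine/select shape described above:

```python
import random

# Score candidates, keep the ones closest to the ideal, and breed new
# candidates by crossover plus mutation. The task (match a target bit
# string) is deliberately trivial.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(candidate):
    # Higher is better: number of bits matching the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    return [1 - b if random.random() < rate else b for b in candidate]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break                                   # perfect match found
        parents = population[: pop_size // 2]       # selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolve())
```

Whether that counts as "blind trial and error" or as a step toward thought is exactly what the thread is arguing about.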
#31 |
So far this conversation has gone as follows:

Elok: Increasing a computer's speed doesn't make it more intelligent; it just makes it faster.
Me: In the future, computers might not be autistic.
Elok: But what about Rain Man? He's not that smart, because all he can do is count cards really fast.
Me: This isn't about Rain Man. This is about what happens when we build computers that are like regular people.
Elok: But we haven't built any such computers yet.
#33 |
> Elok, what makes...

But for example, what happens if you give it the instruction, "devise a procedure for constructing a Penrose triangle"? I imagine it will work forever, trying method after method, and at no point will it conclude, "you can't build a Penrose triangle; it's impossible by nature." It'll keep running on and on until you pull the plug or it breaks down, unless you include a line telling it to give up after X million failed attempts. Or you program it with data: "NOTE: Penrose triangles are impossible, as are various other head-hurting structures devised by rat-bastard mathematicians." Which isn't intelligence; it's just the programmer intervening to compensate. Whereas a human being of reasonable intellect will look at the figure and realize after a few seconds of inspection that the structure violates the laws of physics. The human mind can look at the problem itself, searching for "creative" answers; a computer's got no choice but to follow instructions, and those instructions (as well as the computer's algorithm) are limited by the intelligence of the designer and the amount of care taken. That's the real hitch: a computer only does what you tell it to do. You don't need to tell a human mind to learn. It takes initiative, learns of its own volition. In practice, I suppose it doesn't much matter, since computers are supposed to be our drudges anyway. And under normal circumstances they can achieve very good results. All I'm saying is that 'taint intelligence.
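The "give up after X million failed attempts" workaround is worth seeing spelled out. `try_construction` is a hypothetical stand-in for whatever method generator a solver might use; the point is that the cutoff encodes the programmer's judgment, not the machine's insight:

```python
MAX_ATTEMPTS = 1_000_000  # arbitrary cutoff chosen by the programmer

def try_construction(attempt):
    # Stand-in: every method fails, since a Penrose triangle is impossible.
    # The program never "realizes" this; it only runs out of attempts.
    return None

def devise_procedure():
    for attempt in range(MAX_ATTEMPTS):
        result = try_construction(attempt)
        if result is not None:
            return result
    return "gave up"  # the programmer's conclusion, smuggled in as a constant
```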
#34 |
I guess I should just ignore KH for the duration of this thread; we're having communication problems. Or maybe he's just being abusive for fun. Anyway, substantively different? Do you mean in terms of achieving the desired result? Well, you could brute-force almost anything given enough trial and error, and enough time or sufficiently fast trials. But I would not characterize that as intelligence. If you present a computer with a given task, and it has a genetic algorithm, it will normally come up with a very good solution for that task eventually.
#35 |
Elok wants to convey the idea of an intuitive leap, perhaps. If the computer can do that, or suffer doubt, or keep secrets on its own, revealing them to a select few and no others, THEN it is past the "Singularity."

KH, no one in my immediate circle (computer scientists all) can figure out how one would program your "general purpose" computer. In one sense, all computers with an operating system are general purpose. In another, this is insufficient for self-development of intuitive leaps, imagination, multithreaded test patterns, or innovative non-preprogrammed solutions.