I believe that in the quest for immortality, we will experience a zombie epidemic. The military will release the virus in the Middle East and the oil will be OURS!
The mass production of flying cars. I actually know one of the guys working on the latest flying car that's slated for mass production, and hearing him talk about it is awesome. I'm pretty sure I'll never be rich enough to own one, but it would be cool to live in a world where they exist.

I'm also seconding teleportation. Since I live in a city, I wouldn't have much use for a flying car, but man oh man would I fucking love to just zoom on over to wherever I want in the city instead of dealing with the fucking subway. Or, you know, to pop over to Paris or something.

Semi-serious question for the engineers and more science-oriented people: if teleportation were to happen in our lifetime, would they charge you per trip, and increase the price depending on the distance you're teleporting? Or would you just pay for the teleporting vessel itself and that would be the end of it? I've always been curious about that.
I've never thought that flying cars were at all practical, and I seriously doubt we will ever see them. There are too many people who can barely drive in two dimensions; add a third dimension and we're fucked.
I damn sure don't want to live forever. Imagine having to work forever. I'm only 38, and I dream of retiring already. Now, if I could live my normal life span but stop aging, sign me up.
It wouldn't be as cool as you think. Cracked.com had a good/funny article on this one. Basically, every time you want to teleport, you die. You don't get broken down into little pieces and transmitted; you get vaporized, and a copy of you gets created on the other side every time. Still wanna teleport?
I'd settle for near-instant internet bandwidth for everyone. We're getting close to it already with cell networks. I imagine that by the time I'm close to death, we'll be able to stream high-definition video on demand with zero buffering in the middle of nowhere. Combine this with smartphones and cloud computing, and it seems likely that it will be the end of PCs as we know them.
If the world wants to go Maximum Overdrive on itself, be my guest. Just make sure you invent the goddamn hoverboards before that happens. Fuck you, Robert Zemeckis. That shit's supposed to happen in FOUR YEARS!!!!!!
Well, intuition and judgment are just heuristics, shortcuts that our brain uses to make decisions faster. As I understand it, there are really two things going on here: 1) Our brain processes much more situational information than we are consciously aware of: the temperature, the setting, things happening around us. This leads to: 2) Using that information, our brain informs the decisions we need to make: "intuition" or "gut feeling". It seems like it's unique to us, because we don't consciously notice the reasons behind it. Someone with more knowledge of the field can correct me if I'm wrong, but this is very conceivably within the limits of binary (i.e. not quantum) computing. How crazy would something like (common, affordable) OCR* or speech-to-text software have seemed ten or fifteen years ago? There's a lot of work being done on this front, and I'm not going to be the one to underestimate it. And this is just extrapolating from what we can currently do. If we achieve a breakthrough like quantum computing (the likelihood of which I have no idea), there's really no telling what'll happen. *Optical character recognition: basically the ability for software to look at a scanned picture of handwriting and translate it into computer-readable text.
To the people making the argument that computers won't be able to have intuition, judgment, etc.... you guys realize that the brain is a computational system as well, right? There is no homunculus/meta-brain/soul required to make the brain do what it does. Shit, the vast majority of our decision making is automated anyway. In fact, there is increasing evidence that our self-awareness is largely just another module that evolution developed on top of the rest of the pre-existing brain; essentially, that feeling of "I" and "me" is manufactured in the brain through a good bit of effort. Our brain appears to be largely made up of modules that often compete against each other, which our self-awareness module sort of compresses into a feeling of "I am making this decision."

If you take all of this into account, making a human-like computer would actually be quite easy. One step that we are getting quite close to is the experiments where computers are programmed to fool others into thinking they are talking to a human. Once we reach the point where a computer can convince anyone that it is human, then for all practical purposes it might as well be us, but with higher computational power and data storage. Computers can already store vastly more data and compute much faster than us, so all that remains at that point is building a self-awareness module, much like nature has built for us.

So sure, you can complain all you want that it's "just a bunch of algorithms put into it by a programmer," but our brain is more or less the same thing. Our algorithms were put in our brain by good ole evolution, but even though mother nature has had a huge head start, computer scientists can work much, much faster.

Another form I've seen the argument take is that all it really takes is a computer that can crank out non-perfect replicas of itself.
In that case, a computer form of natural selection can take over, and then, given the constraints put on it (whether by us or otherwise), it will automatically develop as all organisms have done via "natural" selection.

As far as the immortality debate goes, while I'm not sure I want to live forever, I would certainly love the option of choosing when I wanted to die. The chances of that becoming possible in the next 100-150 years are pretty high, I think. If we develop the technology to upload our brains into a computer, then we're all set. Whenever you feel like you're done living, hit the power button.
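The "non-perfect replicas plus selection" idea above can be sketched as a toy genetic algorithm. To be clear, everything here is invented for illustration: the "environment" is just an arbitrary target bitstring, and the mutation rate and population size are made-up numbers. The point is only that imperfect copying plus selection is enough for the population to improve on its own:

```python
import random

TARGET = [1] * 20          # hypothetical "environment": all-ones is fittest
MUTATION_RATE = 0.05       # chance each bit flips when copied imperfectly
POP_SIZE = 50

def fitness(genome):
    """How well a genome matches the environment's constraints."""
    return sum(g == t for g, t in zip(genome, TARGET))

def replicate(genome):
    """Copy with occasional errors: the 'non-perfect replica'."""
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

def evolve(generations=200):
    population = [[random.randint(0, 1) for _ in range(len(TARGET))]
                  for _ in range(POP_SIZE)]
    for _ in range(generations):
        # selection: the fitter half survives and replicates
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        population = survivors + [replicate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "/", len(TARGET))
```

Nobody tells the population what the answer is; the constraint does all the work, which is the whole argument.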
So you get vaporized but the matter that is now vapor isn't transmitted? Just copied like a fax and it's basically an electronic signal that is interpreted at the other end to reconstruct you out of new matter?
A programmer can spend years coding in responses, but there will always be an end to those answers. As complicated as the coder tries to make it, the computer isn't creating original thought; it's picking responses from a list. And that's the difference. If all the computer scientists currently alive banded together to create as realistic a question-and-answer conversational system as possible, they still wouldn't have created artificial intelligence. AI will not be created by making a more and more complicated question-response system.
Ever played chess against a computer? While there are certain common opening moves that the computer can be taught to respond to, as well as a fairly limited set of common endgames, the middle game has like... I don't know, a gazillion possible variations. A computer's specific mid-game moves are not programmed in a question-answer format (i.e. if this configuration of pieces, then that move); instead, it considers its possible moves, your possible responses to those moves, its possible moves on the next turn, and so on, until it decides on the best move to make. Computers can also play Scrabble, where not even the opening move can be programmed in a Q&A fashion. And then there's Go, a game with so many possible permutations that the number of ways a game of Go can play out is estimated to be larger than the number of atoms in the known universe. The computer will look at a board that not only it has never seen before, but which has never been seen by its programmer either, and will figure out what move to make. Computers can already come up with answers without a list to choose from. Conversation is just a very complex game.
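The "my moves, your responses, my responses to those" search described above is the minimax algorithm, and it really does contain no lookup table of positions. Here is a minimal sketch; the two-ply game tree and its leaf scores are invented for illustration, not taken from chess:

```python
def minimax(node, maximizing):
    """Search a game tree: leaves are integer scores, internal nodes
    are dicts mapping a move name to its subtree. The value of a
    never-before-seen position is computed on the fly, not looked up."""
    if isinstance(node, int):          # leaf: the position's score
        return node
    values = [minimax(child, not maximizing) for child in node.values()]
    return max(values) if maximizing else min(values)

# A tiny invented game tree, two plies deep.
tree = {
    "move A": {"reply x": 3, "reply y": 5},   # opponent will pick 3
    "move B": {"reply x": 2, "reply y": 9},   # opponent will pick 2
}
best = max(tree, key=lambda m: minimax(tree[m], maximizing=False))
print(best)  # "move A": it guarantees a score of at least 3
```

Notice that "move B" contains the single best outcome (9), but minimax correctly avoids it because the opponent also gets to choose.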
You guys arguing brains/computers are missing the point. We don't really understand our own intelligence. You can't program a computer to act in a human way if you don't understand why we act that way first. Creativity is almost entirely outside of our understanding right now: what drives it, why creative people can tend to be very crazy, where inspiration comes from. Those are very nebulous ideas that have no programmatic equivalent. We also each have a pretty unique ability in that all of us can invent solutions to situations we've never seen before based on past experiences. Drawing parallels to machines like Deep Blue actually emphasizes how far we have to go with AI. Deep Blue was developed over more than a decade by a whole team of programmers, and all it can do is play chess. That's it. It can't draw on past experiences and suddenly apply chess logic to other tasks. The so-called creativity in the machine can't start painting. All of this, contained in a few pounds of brain, takes decades, teams of already-brilliant people, huge refrigerated supercomputers, and millions of dollars to even approximate. We aren't even close. Is it lost on everyone that it takes dozens or hundreds of brilliant people many years, collating the wisdom of dozens or hundreds of other brilliant people, to make a machine that is only marginally intelligent at a single task? Not even close.
Just to touch on this again: we are already at the point where we could recreate a 3D printer from another set of printers, at least on the mechanical side. The only downside is that it would be terrible quality and wouldn't have the machining accuracy of the current ones. Plastics (which most printers deal with) are easy; take a look into Selective Laser Sintering.
The chess program picks the best move by scoring candidate moves with an evaluation function, which is usually written by humans. It searches all possible sequences of moves up to a limited depth, since searching every combination of moves to the end of the game would be practically impossible. It is possible to feed a computer a large number of chess games and have it construct its own evaluation function by statistical methods, but that would not make it a smarter computer, just a computer running smarter human-written code. And chess is a well-defined problem compared to conversation: it has a very small set of rules and possible events, and a clearly determined, measurable outcome (win/lose/draw). You, as a human being, know when a conversation is good, but how can you define what makes it good without using concepts (like 'interesting' or 'funny') which themselves require definitions, and so on, not to mention the context of the conversation?
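To make concrete how human-written an evaluation function is: here's a toy one that only counts material. The piece values are the conventional chess ones, but the position encoding (a string of piece letters, uppercase for White, lowercase for Black) is invented for this sketch; a real engine would also hand-code mobility, king safety, pawn structure, and so on:

```python
# Conventional material values; a real engine's evaluation adds many
# more hand-tuned terms on top of this.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(position):
    """Score a position from White's point of view.
    `position` is a made-up encoding: a string of piece letters,
    uppercase for White, lowercase for Black."""
    score = 0
    for piece in position:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

# White has two extra knights, Black an extra pawn: 29 - 24 = 5.
print(evaluate("QRRNNBPqrrbpp"))
```

Every number in that function is a human judgment baked into code, which is exactly the point being made above: the program is only as "smart" as the heuristic its programmers gave it.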
What's the difference? The difference between a moron and Einstein... was it that one was smarter, or that one had a brain with better evolution-written code? See what I mean? I know that we don't fully understand how our brains work, but we do know what things we define as intelligence. We also know that our brain's functions (along with everything else in the universe) are REDUCIBLE. And things that are reducible can be described by mathematics and algorithms. Like I said, our brain can be described by algorithms; we just have to get to a point where we have a rough estimate of what they are. Finding that out is insanely hard when done from the ground up, but not nearly as hard with trial and error; hence, computers made to fool humans into thinking they aren't computers. Once computers can do that, then we are not so far off from a rough estimate of human-brain-equivalent algorithms.
I have little first-hand knowledge of computer science; I'm mostly just parroting people I have read, so I probably messed up on word choice a bit. What I mean is, everything in the universe can be described mathematically, including our brains. Each brain can be described by its own set of mathematical expressions, and there also exists a mathematical expression describing the human brain. This means that computers can be programmed to behave in exactly the same way as a brain, such that they are indistinguishable. Of course, this is really, really, really hard. Approximations through trial and error are easier, and (my main point) if we are not that far from creating a computer that is indistinguishable by all other humans, then we aren't so far from having (for practical purposes) more-intelligent-than-human computers, right? These posts, though they don't have my point in mind, should make what I'm trying to say clearer: http://lesswrong.com/lw/r0/thou_art_physics/ and http://lesswrong.com/lw/og/wrong_questions/
The difference is critical. The difference is that, no matter how powerful computers get, they are never any smarter. They are still at the limits of what humans can program. If we don't understand our own intelligence, we cannot program it into a computer. What possible reason do you have to believe that we are "not that far" away from making a program that is indistinguishable from a human?