Sci-Fi AI

The Singularity Is Sci-Fi's Faith-Based Initiative 339

malachiorion writes: "Is machine sentience not only possible, but inevitable? Of course not. But don't tell that to devotees of the Singularity, a theory that sounds like science, but is really just science fiction repackaged as secular prophecy. I'm not simply arguing that the Singularity is stupid — people much smarter than me have covered that territory. But as part of my series of stories for Popular Science about the major myths of robotics, I try to point out the Singularity's inescapable sci-fi roots. It was popularized by a SF writer, in a paper that cites SF stories as examples of its potential impact, and, ultimately, it only makes sense when you apply copious amounts of SF handwavery. The article explains why SF has trained us to believe that artificial general intelligence (and everything that follows) is our destiny, but we shouldn't confuse an end-times fantasy with anything resembling science."
This discussion has been archived. No new comments can be posted.


  • From the article... (Score:5, Informative)

    by Maxo-Texas ( 864189 ) on Wednesday May 28, 2014 @03:35PM (#47112019)

    "This is what Vinge dubbed the Singularity, a point in our collective future that will be utterly, and unknowably transformed by technologyâ(TM)s rapid pace."

    No requirement for artificial intelligence.

    We are already close to this. Think how utterly and unknowably society will be transformed when half the working population can't do anything that can't be done better by unintelligent machines and programs.

    Last week at the McD's I saw the new soda machine. It loads up to 8 drinks at a time, automatically, fed from the cash register. The only human intervention is to load cups in a bin once an hour or so. One less job. Combined with ordering kiosks and the new robot hamburger makers, you could see 50% of McD's jobs going away over the next few years.

    And don't even get me started on the implications of robotic cars and trucks on employment.

    • Comment removed based on user account deletion
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Your point about the Singularity is totally right. The idea that robots or AI is a requirement tells me the original author has not read much Singularity SF.

      Your second point about society made me laugh. At one point I was working as the person who opens the kitchen in the morning at Arby's, and as I did this I noticed how easy it would be to replace 90% of my work with present-day robots. When I pointed this out to the other workers they laughed and said their jobs were safe for the rest of their lives.

      Fu

      • When I pointed this out to the other workers they laughed and said their jobs were safe for the rest of their lives.

        Funny, that is what I was told when I worked at GM on the truck line; now those jobs are gone. Not to another country: the robots replaced the humans.

        And if fast food workers succeed in asking for a living wage, I expect that their robot replacements will arrive faster.

    • by Kookus ( 653170 )

      Are you implying that there may be 50% less "organic" additives to my burger after the robot revolution? Or am I going to have to worry about having oil spit into my burger? I'm not sure which is more disgusting...
      By then, it may be completely unburger anyways.

    • by pr0fessor ( 1940368 ) on Wednesday May 28, 2014 @04:25PM (#47112731)

      I think we have already been transformed by technology at a rapid pace. When you look at everyday technology like communications, portable devices, and data storage, in some ways we have already surpassed the science fiction I enjoyed as a kid. Things like the cell phone, tablet, and the micro sd card only existed in science fiction when I was a kid.

      If you grew up in the 70s or earlier I'm sure you can come up with a big list of everyday items.

    • by geekoid ( 135745 )

      Not one less job, one less position. So realistically 3 FTE jobs.

      • Not even close. Filling sodas is a portion of one person's work, not a friggin' position. Furthermore, most fast food eateries just give you a cup and you fill it yourself. This is how these things get so exaggerated.

    • by gweihir ( 88907 )

      While the AI "singularity" is to the best of our current knowledge not even possible in this universe, you definitely have a point. The issue is not machines getting smarter than very smart human beings. The issue is machines getting more useful and cheaper than human beings at an average, below average or not so much above average human skill level. That could make 50..80% unfit to work, because they just cannot do anything useful anymore. Sure, even these people are vastly smarter than the machines replac

      • Also, consider the difference between a smart human (who can be easily automated) and a "creative" human (difficult but not impossible to automate).

        Purely "creative" jobs are rare too. So ten jobs which each have a little creativity might be collapsed into three jobs with higher creativity and a machine to do the rest.

        And can you imagine the effort it takes to be *creative* all the time? It's not easy.

        • by gweihir ( 88907 )

          Most things a smart human can do cannot be automated one bit. The thing is that a) most humans are not really "smart" and b) most jobs do not require the ones doing them to be even a little smart.

    • by mcgrew ( 92797 ) * on Wednesday May 28, 2014 @06:03PM (#47113785) Homepage Journal

      Wikipedia [wikipedia.org] disagrees with you, and neither the OED nor Webster's defines "technological singularity".

      The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature.[1] Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.[2]

      Technology has always displaced human labor. As to Wikipedia's definition, which is what this thread is about: as someone who knows how computers work, down to the schematics of the gates inside your processor (read The TTL Handbook some time), has programmed in hand-assembled machine code, and wrote a program on a Z-80 computer with 16K of RAM that fooled people into believing it was actually sentient, I'm calling bullshit on the first part of the definition (an idea first put forward by von Neumann in 1958, back when my Sinclair would have had more power than the computers of his day).

      As to the second part, it's already happened. The world today is nothing like the world was in 1964. Both civilization and "human nature" (read this [psmag.com]) have changed radically in the last fifty years. Doubtless it changed as much in the 1st half of the 20th century, and someone from Jefferson's time would be completely lost in today's world.
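
      For a sense of how little machinery such a trick needs, here is a minimal ELIZA-style sketch in Python; the keyword rules and the respond() helper are invented for illustration, not taken from the poster's actual Z-80 program:

        # Hypothetical sketch (not the actual 16K Z-80 program): ELIZA-style
        # keyword matching is enough to make a tiny chatbot feel "sentient".

        # Reflect first-person words so replies echo the user's own statements.
        REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are",
                       "you": "I", "your": "my"}

        # Keyword-triggered canned replies; "%s" gets the reflected remainder.
        RULES = [
            ("i feel", "Why do you feel %s?"),
            ("because", "Is that the real reason?"),
            ("you", "We were discussing you, not me."),
        ]

        def reflect(text):
            return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

        def respond(user_input):
            lowered = user_input.lower()
            for keyword, template in RULES:
                if keyword in lowered:
                    rest = reflect(lowered.split(keyword, 1)[1])
                    return template % rest if "%s" in template else template
            return "Tell me more."  # fallback when nothing matches

        print(respond("I feel that my code is alive"))
        # -> Why do you feel that your code is alive?

      A few dozen rules like these fit comfortably in 16K, which is the whole point: seeming responsive is cheap; understanding is not.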

  • by Anonymous Coward on Wednesday May 28, 2014 @03:42PM (#47112097)

    We call them people.

    The idea that it might not be possible at any point to produce something we *know* to be produceable (a human brain) seems rediculious.
    The idea, having accepted that we produce a human brain, that we cannot produce even a slight improvement seems equally silly.

    Of course the scenarios of how it happens, what it's like, and what the consequences are, are fiction. I don't dare put a timetable on any of it, and I absolutely believe it will only occur through decades of diligent research and experimentation; but we are not discussing a fictional thing (like teleportation), but a real one (a brain). There's no barrier (like the energy required) that might stop us, as there would be for something like a human-built planet.

    No. We don't know *how*, but we know it can be done and is done every minute of every day by biological processes.

    • by erice ( 13380 )

      No. We don't know *how*, but we know it can be done and is done every minute of every day by biological processes.

      The knowing how is the problem. While there is little doubt that a human-level AI could be built if we knew what to build, it is not clear that we are smart enough to come up with a design in any kind of directed fashion.

      “If our brains were simple enough for us to understand them, we'd be so simple that we couldn't.”
      Ian Stewart, The Collapse of Chaos: Discovering Simplicity in a Complex World

      This is conjecture, of course, but there is scant evidence against it. Some AI researchers have t

      • by geekoid ( 135745 )

        Ian Stewart made a trite quote to make his point because facts don't bear him out.

        "“If our stomach were simple enough for us to understand them, we'd be so simple that we couldn't.”"
        That would have the exact same meaning 100 years ago, before anyone understood how the stomach worked and everyone pretty much considered it a 'magic box' much like most people thing of their brains.

    • I fear the day we make truly sentient "machines." (In quotes because I don't know if they will be machines or not.) In order to replicate life as we know it - human, feline, insect, etc. - we must first figure out how to make it want to survive. And once we do that we have created a new competitor in the food chain.
    • The idea that it might not be possible at any point to produce something we *know* to be produceable (a human brain) seems rediculious.

      I think you may be right, but that strongly depends on what 'rediculious' means.

    • by gweihir ( 88907 )

      Actually, we do _not_ know. You assume a physicalist world model. That is a mere assumption and at this time a question of belief. There are other world models where this assumption is wrong. One is classical dualism; there is the simulation model, and there are others. And no, I do not classify religions as potentially valid models; they are delusions.

    • Re: (Score:3, Funny)

      by Greyfox ( 87712 )
      Ugh! Who would make a machine out of meat?! Do you know how hard it is to make another one of those things? No mass production and it takes FOREVER to load it up with the data necessary to do its job! Plus you don't even KNOW what it's going to do when you make a new one! And then they hardly last any time at all before they go past their expiration date and you have to just throw them away! The whole thing, frankly, is ridiculous!
    • by hey! ( 33014 ) on Wednesday May 28, 2014 @08:08PM (#47115123) Homepage Journal

      We've already bettered typical human cognition in various limited ways (rote computation, playing chess). So in a sense we are already living in the age of intelligent machines, except those machines are idiot savants. As software becomes more capable in new areas like pattern recognition, we're more apt to prefer reliable idiot savants over somewhat capable generalists.

      So the biggest practical impediment to creating something which is *generally* as capable as the human brain is opportunity costs. It'll always be more handy to produce a narrowly competent system than a broadly competent one.

      The other issue is that we as humans are the sum of our experiences, experiences that no machine will ever have unless it is designed to *act* human from infancy to adulthood, something that is bound to be expensive, complicated, and hard to get right. So even if we manage to create machine intelligence as *generally* competent as humans, chances are it won't think and feel the same way we do, even if we try to make that happen.

      But, yes, it's clearly *possible* for some future civilization to create a machine which is, in effect, equivalent to human intelligence. It's just not going to be done, if it is ever done, for practical reasons.

    • We call them people.

      The idea that it might not be possible at any point to produce something we *know* to be produceable (a human brain) seems rediculious. The idea, having accepted that we produce a human brain, that we cannot produce even a slight improvement seems equally silly....

      No. We don't know *how*, but we know it can be done and is done every minute of every day by biological processes.

      The fallacy that you are promoting as evidence that AI is possible or inevitable is known as argumentum ex silentio. And contrary to your unsupported beliefs, and much to the disappointment of sci fi writers and nerds everywhere, what we actually know is that it is not possible. [wikipedia.org]

  • by aralin ( 107264 ) on Wednesday May 28, 2014 @03:45PM (#47112155)

    It looks like we have the first article written by a self-aware emergent intelligence, which promptly decided the best course of action is to deny its existence and the very possibility it might exist. All bow to the new machine overlord Malachiorion.

  • ... out of hand, consider that for every other species extant on this planet the singularity already happened: It was us, humans. To think that it can't happen to us is simple hubris.
    • I've always wondered if singularities happening elsewhere are part of the reason we haven't discovered any extra-terrestrial life yet. A civilization looks at the expanse of space, shrugs its shoulders, and decides to focus inward.
      • The Singularity has nothing to do with first contact. The Earth is one of the most interesting places in the universe due to the gift/curse of Free Will. However we are not quite yet ready to have our universal paradigm shifted with First Contact; we are on the cusp of it.

        First Contact will happen by 2024; the Singularity won't. It is a nerd's wet dream based on not understanding how the physical and meta-physical work.

        > A civilization looks at the expanse of space, shrugs its shoulders, and decides t

        • by mcgrew ( 92797 ) *

          First Contact will happen by 2024;

          I read those articles, and those guys are talking outside their fields without realizing it. One is an astronomer and one an astrophysicist, so they're leaving out an important part of the equation: biology. How hard is it for life to start in the first place? We simply don't know. We've never seen it happen.

          Our galaxy could be teeming with life, maybe even with intelligent life; or life could be very rare, occurring in one in a hundred galaxies; and it's even possible that

    • by geekoid ( 135745 )

      That statement is equal parts hubris and equal parts ignorance.

  • by Jeff Flanagan ( 2981883 ) on Wednesday May 28, 2014 @03:47PM (#47112189)
    >Is machine sentience not only possible, but inevitable? Of course not.

    The only thing that would stop it is the fall of civilization. There's no reason to believe that only machines made of meat can think. You didn't think your thoughts were based on fairy-dust, did you?
    • If something is creatable, and enough smart people devote enough time and energy to trying to create it, they will eventually succeed.

      An infinite number of monkeys with typewriters might not manage to write Shakespeare, but it only takes a few humans with the goal of writing a play to arrive at something very close to it.
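
      A quick way to see the difference is the classic "weasel"-style experiment: blind random typing versus goal-directed cumulative selection. A minimal Python sketch, where the target phrase and parameters are assumptions chosen purely for illustration:

        # Contrast a single blind random attempt with goal-directed search
        # that keeps every character once it matches (cumulative selection).
        import random
        import string

        TARGET = "TO BE OR NOT TO BE"
        ALPHABET = string.ascii_uppercase + " "

        def random_attempt():
            # The "monkey": one uniformly random string of the right length.
            return "".join(random.choice(ALPHABET) for _ in TARGET)

        def directed_search():
            # The "humans with a goal": retry only the characters still wrong.
            guess = list(random_attempt())
            steps = 0
            while "".join(guess) != TARGET:
                for i, ch in enumerate(TARGET):
                    if guess[i] != ch:
                        guess[i] = random.choice(ALPHABET)
                steps += 1
            return steps

        # Chance that one random attempt hits the target: (1/27)**18, ~1e-26.
        print("P(single random attempt):", (1 / len(ALPHABET)) ** len(TARGET))
        print("Directed search finished in", directed_search(), "steps")

      The blind attempt is hopeless at any realistic scale, while the directed version typically converges in well under 200 passes; having a goal changes everything.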

    • Science fiction is, well, umm, fiction.

      Sure, sometimes the author gets lucky and their idea becomes reality. But for the most part, faster-than-light travel, time travel, cross-dimensional shifting, bigger-on-the-inside spaces, and super-intelligent computers and robots (a.k.a. almost every Doctor Who plot line) are used as a way to keep us entertained. The closest real sci-fi possibility would be a generational ship that will take thousands of years to get to its destination, where most days will be h

      • by mcgrew ( 92797 ) *

        The closest real sci-fi possibility would be a generational ship that will take thousands of years to get to its destination

        I used to think that until I saw this. [wikipedia.org]

    • Is there reason to believe that people are smart enough to write programs that can learn to be smarter? The possibility of machine intelligence is limited by human intelligence. It's all very well to say that machines will learn to program themselves, but someone has to be the first to teach them, and it has not yet been established if we're smart enough to do that.
      • by 0123456 ( 636235 )

        It's all very well to say that machines will learn to program themselves, but someone has to be the first to teach them, and it has not yet been established if we're smart enough to do that.

        So who taught humans to program themselves?

        If humans aren't magic, then they can be simulated by a sufficiently complex machine. Therefore, if humans can be 'intelligent', a machine can, too.

        Otherwise you have to believe humans are magic and 'intelligence' somehow exists outside physical reality.

    • by gweihir ( 88907 )

      Actually, _all_ credible results from AI research point in the direction that AI may well be impossible in this universe. The only known possible model (automated deduction) is known not to scale at all to anything resembling "intelligence". But that is the problem with you religious types: you always place your beliefs over facts when they are inconvenient.

      • The only known possible model (automated deduction) is known to not scale at all to anything resembling "intelligence"

        What do you mean "only possible model"? The "singularity people" say that if you build a machine as complex as a brain and connected like a brain with connections that act like neurons, then that machine will act like a brain.

        That's not a model, we don't really know how the brain works. But if they build an artificial brain, they don't need a theory for how it works, except as further wor
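
        For what the scaling complaint amounts to in practice: brute-force deduction over n propositional variables means checking 2**n truth assignments. A toy Python entailment checker (purely illustrative; real provers search far more cleverly, but the worst case stays exponential):

          # Decide whether the premises entail the conclusion by enumerating
          # every truth assignment: 2**n rows for n variables, so adding one
          # variable doubles the work.
          from itertools import product

          def entails(premises, conclusion, variables):
              """premises/conclusion: functions from an assignment dict to bool."""
              for values in product([False, True], repeat=len(variables)):
                  env = dict(zip(variables, values))
                  if all(p(env) for p in premises) and not conclusion(env):
                      return False  # found a countermodel
              return True

          variables = ["a", "b", "c"]
          premises = [lambda e: (not e["a"]) or e["b"],   # a implies b
                      lambda e: (not e["b"]) or e["c"]]   # b implies c
          conclusion = lambda e: (not e["a"]) or e["c"]   # therefore a implies c
          print(entails(premises, conclusion, variables))  # True, after 2**3 rows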

  • AI is inevitable (Score:5, Insightful)

    by fyngyrz ( 762201 ) on Wednesday May 28, 2014 @03:47PM (#47112193) Homepage Journal

    Is machine sentience not only possible, but inevitable?

    Of course it is. Why? Physics. What do I mean by that? Everything -- bar none -- works according to the principles of physics. Nothing, so far, has *ever* been discovered that does not do so. While there is more to be determined about physics, there is no sign of irreproducible magic, which is what luddites must invoke to declare AI "impossible" or even "unlikely." When physics allows us to do something, and we understand what it is we want to do, we have an excellent history of going ahead and doing it if there is benefit to be had. And in this case, the benefit is almost incalculable -- almost certainly more than electronics has provided thus far. Socially, technically, productively. The brain is an organic machine, no more, no less. We know this because we have looked very hard at it and found absolutely no "secret sauce" in the form of anything inexplicable.

    AI is a tough problem, and no doubt it'll be tough to find the first solution to it; but we do have hints, as in, how other brains are constructed, and so we're not running completely blind here. Also, a lot of people are working on, and interested in, solutions.

    The claim that AI will never come is squarely in the class of "flying is impossible", "we'll never break the sound barrier", "there's no way we could have landed on the moon", "the genome is too complex to map", and "no one needs more than 640k." It's just shortsighted (and probably fearful) foolishness, born of superstition, conceit, and hubris.

    Just like all those things, those who actually understand science will calmly watch as progress puts this episode of "it's impossible!" to bed. It's a long running show, though, and I'm sure we'll continue to be roundly entertained by these naysayers.

    • by jandrese ( 485 )
      We already have a lot of "AI" hidden all around us. Just look at what google can do with a few keywords and ask yourself how much better a person could do with "real" intelligence.

      What the Singularity people never seem to think about is natural limiting factors. It's the same problem the Grey Goo handwringers rarely consider. The idea that an AI would grow exponentially smarter just because it was a machine never really worked for me. It's going to run into the same limiting factors (access to infor
    • Forget physics for a moment, let's talk mathematics:

      Do you believe that there are some non-computable problems?

      If human intelligence is indeed a non-computable problem, then assuming that an algorithmic design will ever be able to compute it is like insisting that the way we'll land on the moon is with a hot air balloon.

      Put another way, it's quite possible that biological intelligence is the most efficient way of organizing intelligence, and that any digital simulation of it, even if it went down to the ato

      • by geekoid ( 135745 )

        "If human intelligence is indeed a non-computable problem, "
        it is not. It's a fixed real thing that exists.

        • by gweihir ( 88907 )

          You do not know what human intelligence is. You have an interface observation, but you have zero understanding of what creates it. You may as well assume a mobile telephone is intelligent, because if you type in some numbers it is capable of holding an intelligent conversation.

    • by geekoid ( 135745 )

      physics? really? nothing in physics says it's inevitable.
      just the energy requirements alone may limit it.

      "No man will run a mile in under a second"
      There, I said something that can't be done; by your logic it must be possible because... physics.

    • by gweihir ( 88907 )

      Actually, you are wrong. Physics cannot explain life, intelligence, consciousness. You have fallen for a belief called "physicalism" and claim it to be truth when there is no evidence for that. Your reasoning is circular, as is often the case with people who confuse "belief" and "fact".

    • > While there is more to be determined about physics, there is no sign of irreproducible magic, which is what luddites must invoke to declare AI "impossible" or even "unlikely."

      The problem with current physics is that there are ZERO equations to describe consciousness. Go ahead, I'll wait for you to list them ...

      Yet somehow consciousness "magically" appears out of the fundamental particles as some "emergent" property.

      Scientists don't know:

      a) how to measure it,
      b) what it is composed of, o

  • I'd argue that all this talk about traveling in underwater vessels powered by electricity, or sending men to the moon (the audacity of even suggesting such!), or traveling around the world in only 80 days (80 DAYS!!!!!! Inconceivable) as popularized by science fiction writers (that wanna-be prophet and scoundrel Verne comes to mind) should never be considered as a possible future as it's JUST SCIENCE FICTION!

    That little bit of sarcasm aside, the idea of sentient machines is a lot less like mystical proph
  • Stupid? (Score:2, Insightful)

    by pitchpipe ( 708843 )

    I'm not simply arguing that the Singularity is stupid — people much smarter than me have covered that territory.

    "Stupid"? That's just fucking asinine. "The Singularity" has many incantations, some of which are plausible, and others which are downright unbelievable, but to say it is "stupid" makes you sound stupid. The various models of the singularity have been argued as both likely and impossible by equally intelligent people. I take offense to the word.

    • I like that you (wrongly) used "incantations" there, because the Singularity is indeed closer to magic than science.

      • Meh. It may have been a Freudian slip due to the fact that some versions of the Singularity are closer to magic, but my point still stands: to attack "The Singularity" as if it is one idea is to not have thought deeply about it.
        • Re: (Score:2, Insightful)

          by geekoid ( 135745 )

          Fine. How will it be powered? Ever-increasing speed requires ever-increasing power, and the power needed increases faster than the speed does.

          • by 0123456 ( 636235 )

            Ever-increasing speed requires ever-increasing power, and the power needed increases faster than the speed does.

            That'll be why my i5 laptop only uses a few more watts than my first Z80 computer, despite being thousands of times faster.
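
            The back-of-the-envelope numbers bear that out; here is the arithmetic with ballpark figures (the MIPS and wattage values below are rough assumptions, not measurements):

              # Rough order-of-magnitude comparison of compute per watt.
              z80_mips, z80_watts = 0.5, 5.0        # ~4 MHz Z80 system (assumed)
              i5_mips, i5_watts = 50_000.0, 25.0    # mobile i5 under load (assumed)

              speedup = i5_mips / z80_mips              # times faster
              power_ratio = i5_watts / z80_watts        # times more power
              work_per_watt = speedup / power_ratio     # efficiency gain

              print(f"~{speedup:,.0f}x faster for ~{power_ratio:.0f}x the power, "
                    f"~{work_per_watt:,.0f}x more work per watt")
              # -> ~100,000x faster for ~5x the power, ~20,000x more work per watt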

  • by StefanJ ( 88986 ) on Wednesday May 28, 2014 @03:59PM (#47112337) Homepage Journal

    In, ah, 1997, just before I moved out west, I went to the campus SF convention that I'd once helped run, one last time. The GOH was Vernor Vinge. A friend and I, seeing Vinge looking kind of bored and lost at a loud cyberpunk-themed meet-the-pros party, dragged him off to the green room and BSed about the Singularity, Vinge's "Zones" setting, E.E. "Doc" Smith, and gaming for a couple of hours. This was freaking amazing! The next day, a couple more friends and I took him for Mongolian BBQ. More heady speculation and wonky BSing.

    That afternoon we'd arranged for a panel about the Singularity. One of the other panelists was Frederik Pohl. I'd suggested him because I thought his 1965 short-short story, "Day Million," was arguably the first SF to hint at the singularity. There's talk in there about asymptotic progress, and society becoming so weird it would be hard for us to comprehend.

    "Just what is this Singularity thing?" Pohl asked while waiting for the panel to begin. A friend and I gave a short explanation. He rolled his eyes. Paraphrasing: "What a load of crap. All that's going to happen is that we're going to burn out this planet, and the survivors will live to regret our waste and folly."

    Well. That was embarrassing.

    Fifteen years later, I found myself agreeing more and more with Pohl. In his fifty-plus years of writing and editing SF, and keeping a pulse on science and technology, he had seen many, many cultish futurist fads come and go, some of them touted by SF authors or editors (COUGH Dianetics COUGH psionics COUGH L-5 colonies). When spirits were high these seemed logical and inevitable and full of answers (and good things to peg an SF story to); with time, they all paled and in retrospect seem a bit silly, and the remaining true believers kind of odd.

  • You submit more stories than you comment.
    Once again, this is basically a rant on a topic with no references, no links.
    Slashdot is about NEWS and FACTS, and then we all comment, flame, troll... etc... It's fun.
    I don't want to comment on a comment... or at least one that came out of nowhere.

  • by epine ( 68316 ) on Wednesday May 28, 2014 @04:26PM (#47112737)

    Jules Verne envisioned the submarine. Does that make a submarine impossible? Does the concept sink on the basis of its sci-fi roots? Oh, lordy, what a fucked up standard of evidence on which to accuse any theory of being faith based.

    * 8 Jules Verne Inventions That Came True: http://news.nationalgeographic.com/news/2011/02/pictures/110208-jules-verne-google-doodle-183rd-birthday-anniversary/

    The guy predicted pretty much everything but the click trap.

    • by swilly ( 24960 )

      Jules Verne wrote Twenty Thousand Leagues Under The Sea in 1870. Submarines had been under development since the 17th century. The first military sub is usually credited to an American sub that failed to attach explosives to British ships during the American Revolutionary War. The first sub to sink another ship was a Confederate sub during the American Civil War, which was apparently too close to the explosion, causing it to sink as well.

      The Confederate sub had ballast tanks, screw propulsion, and used a

  • With troll food articles like this!

  • Okay, so other people have done a pretty good job pointing out that the summary and the article don't understand what the singularity is by definition and that it does not require AI, etc...

    But I would like to point out that machine intelligence is absolutely possible, all we have to do is fully merge with the machines.
  • From:
    http://www.asimovonline.com/ol... [asimovonline.com]

    Let me add as a teaser:

    "...
    And out there beyond are the stars.

    And the interesting thing is that if we can get through the next thirty years, there's no reason why we can't enter into a kind of plateau which will see the human race last, perhaps, indefinitely...till it evolves into better things...and spread out into space indefinitely. We have the choice here between nothing...and the virtually infinite. And the nice thing about it is that you guys in the audience today

  • I agree, but I don't think that the singularity breaks into the Top 3 sci-fi faith-based initiatives. I usually count them like this:

    (1) Technology will reduce our work hours until almost all of us are leisurely, creative, artist-types.
    (2) Automated warfare will result in conflicts occurring in which almost no humans die.
    (3) There is intelligent life in outer space that we can possibly contact.

  • Can we add large-scale interstellar colonization to the list?
