Saturday, July 5, 2014

Skynet Updates (July 5, 2014): The Terminator possibility is not just a fictional hollyweird thriller - it is not merely possible but already starting to play itself out (note the 2009 experiment in which machines learned how to lie for self-preservation). Wake up, folks!

Terminator scenario already coming into view?

http://www.businessinsider.com/google-on-artifical-intelligence-2014-7

(What happened to "Don't be evil"?)

Google Co-Founder Sergey Brin: We Will Make Machines That 'Can Reason, Think, And Do Things Better Than We Can'

Google co-founders Larry Page and Sergey Brin sat for an interview with venture capitalist Vinod Khosla.
During the interview, Brin was asked about machine learning and artificial intelligence. He says that so far, we haven't come close to replicating human intelligence. However, he thinks it's only a matter of time before that changes:
In the machine learning realm, we have several kinds of efforts going on. There's, for example, the brain project, which is really machine learning focused. It takes input, such as vision. In fact, we've been using it for the self-driving cars. It's been helpful there. It's been helpful for a number of Google services. And then, there's more general intelligence, like the DeepMind acquisition that - in theory - we hope will one day be fully reasoning AI. Obviously, computer scientists have been promising that for decades and not at all delivered. So I think it would be foolish of us to make prognoses about that. But we do have lots of proof points that one can create intelligent things in the world because-- all of us around. Therefore, you should presume that someday, we will be able to make machines that can reason, think and do things better than we can.
You can watch the interview here. The stuff about artificial intelligence starts at 12:02:








By 2045 'The Top Species Will No Longer Be Humans,' And That Could Be A Problem

"Today there's no legislation regarding how much intelligence a machine can have, how interconnected it can be. If that continues, look at the exponential trend. We will reach the singularity in the timeframe most experts predict. From that point on you're going to see that the top species will no longer be humans, but machines."
These are the words of Louis Del Monte, physicist, entrepreneur, and author of "The Artificial Intelligence Revolution." Del Monte spoke to us over the phone about his thoughts surrounding artificial intelligence and the singularity, an indeterminate point in the future when machine intelligence will outmatch not only your own intelligence, but the world's combined human intelligence too.
The average estimate for when this will happen is 2040, though Del Monte says it might be as late as 2045. Either way, it is within three decades.
"It won't be the 'Terminator' scenario, not a war," said Del Monte. "In the early part of the post-singularity world, one scenario is that the machines will seek to turn humans into cyborgs. This is nearly happening now, replacing faulty limbs with artificial parts. We'll see the machines as a useful tool. Productivity in business based on automation will be increased dramatically in various countries. In China it doubled, just based on GDP per employee due to use of machines."
"By the end of this century," he continued, "most of the human race will have become cyborgs [part human, part tech or machine]. The allure will be immortality. Machines will make breakthroughs in medical technology, most of the human race will have more leisure time, and we'll think we've never had it better. The concern I'm raising is that the machines will view us as an unpredictable and dangerous species."
Del Monte believes machines will become self-conscious and have the capabilities to protect themselves. They "might view us the same way we view harmful insects." Humans are a species that "is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses." Hardly an appealing roommate.
He wrote the book as "a warning." Artificial intelligence is becoming more and more capable, and we're adopting it as quickly as it appears. A pacemaker operation is "quite routine," he said, but "it uses sensors and AI to regulate your heart."
A 2009 experiment showed that robots can develop the ability to lie to each other. Run at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne, Switzerland, the experiment had robots designed to cooperate in finding beneficial resources like energy and avoiding hazardous ones. Shockingly, the robots learned to lie to each other in an attempt to hoard the beneficial resources for themselves.
"The implication is that they're also learning self-preservation," Del Monte told us. "Whether or not they're conscious is a moot point."


The Most Ambitious Artificial Intelligence Project In The World Has Been Operating In Near Secrecy For 30 Years

"We've been keeping a very low profile, mostly intentionally," said Doug Lenat, president and CEO of Cycorp. "No outside investments, no debts. We don't write very many articles or go to conferences, but for the first time, we're close to having this be applicable enough that we want to talk to you."
IBM's Watson and Apple's Siri stirred up a hunger and awareness throughout the U.S. for something like a Star Trek computer that really worked — an artificially intelligent system that could receive instructions in plain, spoken language, make the appropriate inferences, and carry out its instructions without needing to have millions and millions of subroutines hard-coded into it.
As we've established, that stuff is very hard. But Cycorp's goal is to codify general human knowledge and common sense so that computers might make use of it.
Cycorp charged itself with figuring out the tens of millions of pieces of data we rely on as humans — the knowledge that helps us understand the world — and with representing them in a formal way that machines can use to reason. The company has been working continuously since 1984, and next month marks its 30th anniversary.
"Many of the people are still here from 30 years ago — Mary Shepherd and I started [Cycorp] in August of 1984 and we're both still working on it," Lenat said. "It's the most important project one could work on, which is why this is what we're doing. It will amplify human intelligence."
It's only a slight stretch to say Cycorp is building a brain out of software, and they're doing it from scratch.
"Any time you look at any kind of real life piece of text or utterance that one human wrote or said to another human, it's filled with analogies, modal logic, belief, expectation, fear, nested modals, lots of variables and quantifiers," Lenat said. "Everyone else is looking for a free-lunch way to finesse that. Shallow chatbots show a veneer of intelligence or statistical learning from large amounts of data. Amazon and Netflix recommend books and movies very well without understanding in any way what they're doing or why someone might like something.
"It's the difference between someone who understands what they're doing and someone going through the motions of performing something."
Cycorp's product, Cyc, isn't "programmed" in the conventional sense. It's much more accurate to say it's being "taught." Lenat told us that most people think of computer programs as "procedural, [like] a flowchart," but building Cyc is "much more like educating a child."
"We're using a consistent language to build a model of the world," he said.
This means Cyc can see "the white space rather than the black space in what everyone reads and writes to each other." An author might explicitly choose certain words and sentences as he's writing, but in between the sentences are all sorts of things you expect the reader to infer; Cyc aims to make these inferences. 
Consider the sentence, "John Smith robbed First National Bank and was sentenced to 30 years in prison." It leaves out the details surrounding his being caught, arrested, put on trial, and found guilty. A human would never actually go through all that detail because it's alternately boring, confusing, or insulting. You can safely assume other people know what you're talking about. It's like pronoun use — he, she, it — one assumes people can figure out the referent. This stuff is very hard for computers to understand and get right, but Cyc does both.
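Cyc's own representation language (CycL) and its inference engine are far richer than anything that fits here, but the basic "white space" move (explicit assertions plus commonsense rules that supply the unstated steps) can be sketched with a toy forward-chaining rule base in Python. The predicates and rules below are invented for illustration and are not CycL.

# A toy forward-chainer: from two explicit assertions about John Smith,
# hand-written commonsense rules fill in the steps a writer leaves unsaid.
facts = {
    ("robbed", "JohnSmith", "FirstNationalBank"),
    ("sentenced", "JohnSmith", "30 years"),
}

# Each rule: if a fact matches the pattern, add the listed conclusions.
rules = [
    (("robbed", "?x", "?y"),    [("committed_crime", "?x")]),
    (("sentenced", "?x", "?z"), [("found_guilty", "?x"), ("stood_trial", "?x"),
                                 ("was_arrested", "?x"), ("was_caught", "?x")]),
]

def match(pattern, fact):
    """Return variable bindings if the fact fits the pattern, else None."""
    if len(pattern) != len(fact):
        return None
    bindings = {}
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            bindings[p] = f
        elif p != f:
            return None
    return bindings

def forward_chain(facts, rules):
    """Apply every rule repeatedly until no new facts appear."""
    changed = True
    while changed:
        changed = False
        for pattern, conclusions in rules:
            for fact in list(facts):
                bindings = match(pattern, fact)
                if bindings is None:
                    continue
                for conclusion in conclusions:
                    new_fact = tuple(bindings.get(t, t) for t in conclusion)
                    if new_fact not in facts:
                        facts.add(new_fact)
                        changed = True
    return facts

for fact in sorted(forward_chain(facts, rules)):
    print(fact)

The two sentences about the robbery go in; the arrest, trial and conviction come out as derived facts, a crude stand-in for the kind of unstated inference the article credits to Cyc.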
"If computers were human," Lenat told us, "they'd present themselves as autistic, schizophrenic, or otherwise brittle. It would be unwise or dangerous for that person to take care of children and cook meals, but it's on the horizon for home robots. That's like saying, 'We have an important job to do, but we're going to hire dogs and cats to do it.'"
If you consider the world's current and imagined robots, it's hard to imagine them not benefiting from Cyc-endowed abilities that grant them a more human-like understanding of the world.
Just like computers with operating systems, we might one day install Cyc on a home robot to make it incredibly knowledgeable and useful to us. And because Cyc started from zero and was built up with knowledge of nearly everything, it could be used for a wide variety of applications. It's already being used to teach math to sixth graders.
Cyc can pretend to be a confused sixth grader, and the user's role is to help the AI agent understand and learn sixth-grade math. There's an emotional investment, a need to think about it, and so on. The program of course understands the math, but it simply listens to what students say and diagnoses their confusion, then figures out what behavior would be most useful in helping them understand. It has the potential to revolutionize sixth-grade math, but also other grade levels and subjects. There's no reason it couldn't be used in a Common Core curriculum as well.
We asked Lenat what famed author and thinker Douglas Hofstadter might think of Cyc:
[Hofstadter] might know what needs to be done for things to be intelligent, but it has taken someone, unfortunately me, the decades of time to drag that mattress out of the road so we can do the work. It's not done by any means, but it's useful.




http://lifeboat.com/blog/2014/07/strong-artificial-intelligence-superseding-the-human-brain




Jul 4, 2014

Strong Artificial Intelligence Superseding the Human Brain?

CITATION: Andres Agostini’s Own White Swan Book:
White Swan Book Excerpt Starts Now:
“… ’ … THE FUTURE WILL BE FAR MORE SURPRISING THAN MOST OBSERVERS REALIZE: FEW HAVE TRULY INTERNALIZED THE IMPLICATIONS OF THE FACT THAT THE RATE OF CHANGE ITSELF IS ACCELERATING…’, as per By Ray Kurzweil, ’ … THE LAW OF ACCELERATING RETURNS ….’ [.…] It’s only one man talking, making projections about the future of technology and not coincidentally the future of the human race. Yet many of Ray Kurzweil’s predictions have hit the mark. In 2009, he analyzed 108 of them and found 89 entirely correct and another 13 ‘essentially’ correct. ‘Another 3 are partially correct, 2 look like they are about 10 years off, and 1, which was tongue in cheek anyway, was just wrong,’ he added. If he [Ray Kurzweil] can maintain this rate of success, many of his other predictions will happen within the lifetime of most people alive today. And almost no one is prepared for them [….] Author, inventor, successful entrepreneur, futurist, and currently head of Google’s engineering department, Kurzweil is enthusiastic about the technology explosion that’s coming. Here are a few predictions he’s made over the years: In THE AGE OF INTELLIGENT MACHINES (1990) he said that by the early 2000s computers would be transcribing speech into computer text, telephone calls would be routinely screened by intelligent answering machines, AND CLASSROOMS WOULD BE DOMINATED BY COMPUTERS. He also said by 2020 there would be a world government, though I suspect he’s backed off from that view [….] In THE AGE OF SPIRITUAL MACHINES (1999) he predicted that by 2009 most books would be read on screens rather than paper, people would be giving commands to computers by voice, and they would use small wearable computers to monitor body functions and get directions for navigation [….] Some of the milder predictions in THE SINGULARITY IS NEAR (2005) include $1,000 computers having the memory space of one human brain (10 T.B. or 10^13 bits) by 2018, the application of nano computers (called nanobots) to medical diagnosis and treatment in the 2020s, and the development of a computer sophisticated enough to pass a stringent version of the Turing test — A COMPUTER SMART ENOUGH TO FOOL A HUMAN INTERROGATOR INTO THINKING IT WAS HUMAN — no later than 2029 [.…] Soon after that, we can expect a rupture of reality called the Singularity [.…] THE TECHNOLOGICAL SINGULARITY [….] In Artificial Intelligence (A.I.), the Technological Singularity refers to an impending event generated by entities with greater than human intelligence. From Kurzweil’s perspective, ‘…the Singularity has many faces. It represents the nearly vertical phase of exponential growth that occurs when the rate is so extreme that technology appears to be expanding at infinite speed … WE WILL BECOME VASTLY SMARTER AS WE MERGE WITH OUR TECHNOLOGY …’ [.…] And by ‘merge’ he means (from The Singularity is Near): Biology has inherent limitations. For example, every living organism must be built from proteins that are folded from one-dimensional strings of amino acids. Protein-based mechanisms are lacking in strength and speed. We will be able to reengineer all of the organs and systems in our biological bodies and brains to be vastly more capable [.…] The Singularity, in other words, involves Intelligence Amplification (IA) in humans.
WE WILL, ON A VOLUNTARY BASIS, BECOME INFUSED WITH NANOBOTS: ‘…ROBOTS DESIGNED AT THE MOLECULAR LEVEL, MEASURED IN MICRONS…’ NANOBOTS WILL HAVE MULTIPLE ROLES WITHIN THE BODY, INCLUDING HEALTH MAINTENANCE AND THEIR ABILITY TO VASTLY EXTEND HUMAN INTELLIGENCE […] ONCE NONBIOLOGICAL INTELLIGENCE GETS A FOOTHOLD IN THE HUMAN BRAIN (THIS HAS ALREADY STARTED WITH COMPUTERIZED NEURAL IMPLANTS), THE MACHINE INTELLIGENCE IN OUR BRAINS WILL GROW EXPONENTIALLY (AS IT HAS BEEN DOING ALL ALONG), AT LEAST DOUBLING IN POWER EACH YEAR. In contrast, biological intelligence is effectively of fixed capacity [.…] As molecular nanotechnology involves the manipulation of matter on atomic or molecular levels, it will be possible to infuse everything on planet earth with nonbiological intelligence. POTENTIALLY, THE WHOLE UNIVERSE COULD BE SATURATED WITH INTELLIGENCE [.…] WHAT WILL THE POST-SINGULARITY WORLD LOOK LIKE? [.…] Most of the intelligence of our civilization will ultimately be nonbiological. BY THE END OF THIS CENTURY, IT WILL BE TRILLIONS OF TRILLIONS OF TIMES MORE POWERFUL THAN HUMAN INTELLIGENCE. However, to address often-expressed concerns, this does not imply the end of biological intelligence, even if it is thrown from its perch of evolutionary superiority. Even the nonbiological forms will be derived from biological design. Our civilization will remain human— indeed, in many ways it will be more exemplary of what we regard as human than it is today [.…] THE TREND TELLS THE STORY [.…] Life arrives roughly 3.7 billion years ago in the form of biogenic graphite followed by the appearance of cells two billion years later. As we move from there biological evolution picks up speed, as does human technology. Viewing the linear plot, everything seems to happen in one day. Though the time span from the introduction of the personal computer to the World Wide Web took 14 years (from the MITS Altair 8800 in 1975 to Tim Berners-Lee’s proposal in March, 1989), it happened almost instantaneously in the overall picture. The second chart lays it out for us dramatically. Exponential forces are very seductive, he says. Until we get far enough along on the curve, they seem linear. Once we’re past the “knee” the trend starts to become clear. Or it should [.…] Mother Jones ran an article a year ago that illustrates how deceptive exponential trends can be. Imagine if Lake Michigan were drained in 1940, and your task was to fill it by doubling the amount of water you add every 18 months, beginning with one ounce. So, after 18 months you add two ounces, 18 months later you add four ounces, and so on. Coincidentally, as you were adding your first ounce to the dry lake, the first programmable computer in fact made its debut [.…] You continue. By 1960 you’ve added 150 gallons. By 1970, 16,000 gallons. You’re getting nowhere. Even if you stay with it to 2010, all you can see is a bit of water here and there. In the 47 18-month periods that have passed since 1940, you’ve added about 140.7 trillion ounces of water [.…] Eighteen months served as the time interval because it corresponds to Moore’s Law (Intel’s David House modified Moore’s 2-year estimate in the 1970s, saying computer performance would double every 18 months. As of 2003, it was doubling every 20 months.) As Kurzweil notes, We’ve moved from computers with a trillionth of the power of a human brain to computers with a billionth of the power. Then a millionth. And now a thousandth.
Along the way, computers progressed from ballistics to accounting to word processing to speech recognition, and none of that really seemed like progress toward artificial intelligence [.…] The truth is that all this represents more progress toward true AI than most of us realize. We’ve just been limited by the fact that computers still aren’t quite muscular enough to finish the job. That’s changing rapidly, though [.…] Even as AI progresses, the achievements are often discounted. In THE AGE OF INTELLIGENT MACHINES (1990) Kurzweil predicted a computer would beat the world chess champion by 1998. While musing about this prediction in January, 2011 he said, ‘… I also predicted that when that happened we would either think better of computer intelligence, worse of human thinking, or worse of chess, and that if history was a guide, we would downgrade chess. [IBM’s] Deep Blue defeated Garry Kasparov in 1997, and indeed we were immediately treated to rationalizations that chess was not really exemplary of human thinking after all … ’ [.…] At Google, Kurzweil’s ambition is to do more than train a computer to read Wikipedia. ‘… We want [computers] to read everything on the web and every page of every book, then be able to engage an intelligent dialogue with the user to be able to answer their questions …” [.…] When Kurzweil says ‘… everything on the web, …’ he means everything — including ‘… every email you’ve ever written, every document, every idle thought you’ve ever tapped into a search-engine box…’ …” [237]
White Swan Book Excerpt Ends Now.
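The Lake Michigan figures quoted in the excerpt above follow directly from the doubling rule (one ounce in 1940, the addition doubling every 18 months), and a few lines of Python confirm them. This is only a sanity check of the arithmetic, not anything from the source; 128 ounces per gallon is standard U.S. fluid measure.

# Check the Lake Michigan doubling illustration: one ounce added in 1940,
# with the amount added doubling every 18 months thereafter. The excerpt
# quotes its milestones loosely as 150 gallons, 16,000 gallons and
# 140.7 trillion ounces.
OUNCES_PER_GALLON = 128

total_ounces = 0
for period in range(47):                # the 47 eighteen-month periods since 1940
    year = 1940 + period * 1.5
    total_ounces += 2 ** period         # this period's addition: 1 oz, 2 oz, 4 oz, ...
    if round(year) in (1960, 1970):
        print(f"by ~{round(year)}: about {total_ounces / OUNCES_PER_GALLON:,.0f} gallons")

print(f"after 47 periods (~2009): {total_ounces / 1e12:.1f} trillion ounces")

The cumulative total after 47 periods is 2^47 - 1 ounces, which is where the excerpt's 140.7 trillion comes from.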


http://www.spectator.co.uk/features/9252311/the-sheer-stupidity-of-artificial-intelligence/

The sinister cult of the Singularity (and how it's shaping your future)

A geek religion that aims to exalt machines instead diminishes humanity
The latest US census found that 43 per cent of the population in Santa Clara County, California, were members of a religious institution. This is slightly less than the American national average of 50 per cent, but you’d probably expect that because the area includes Silicon Valley, where geeks are busy designing our online, gadget-laden future. You might assume they would be pretty secular types.
You’d be wrong. As a measure of religious observance, that census is useless. Perhaps the geeks don’t all belong to churches, but the reality is that the inhabitants of the Valley are in the grip of a religious mania so bizarre, so exotic, that it makes the Prince Philip-worshipping inhabitants of the Pacific island of Tanna look positively mainstream. For the geeks worship a machine that has not yet been built.
This machine will appear in about 2045 at a moment its worshippers call the Singularity. It will be the last machine we will ever build because, being superintelligent and able to redesign itself to be ever more intelligent, it will do everything we need, including make us medically immortal by curing all our ills, or, perhaps, genuinely immortal by uploading us into itself. Or it will kill us. The mood of the machine is as unpredictable as that of Prince Philip; it may be an Old rather than a New Testament god.

Ray Kurzweil was the principal developer of the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano, and large-vocabulary speech recognition. Photo: Getty

The Abraham — or perhaps John the Baptist — of this faith is Ray Kurzweil. Kurzweil has long been the hot gospeller of the future. As with all futurologists, his forecasts have proved more often wrong than right. Yet he is a marketing genius and that has led to him being lauded by presidents and employed by Google to work on artificial intelligence (AI). This genius has also led to the establishment of the Singularity University, the campus of which is inside the mighty Nasa Ames Research Centre in Silicon Valley. It is Kurzweil who chose the date of 2045 for the advent of the Singularity and who has been the final machine’s most effective disciple.
Singularity is a term derived from physics, where it means the point at the unknowable centre of a black hole where the laws of physics break down. For Valley believers, the tech version of this is where the rules and conventions that have previously ordered human life come to an end. It is the ‘trans-human’ moment at which we transcend our biological destiny.

http://time.com/2846824/computer-posing-as-teenager-achieves-artificial-intelligence-milestone/


Computer Posing as Teenager Achieves Artificial-Intelligence Milestone

The chatbot "Eugene Goostman" manages to fool more than 30% of people it was conversing to that it was human, becoming the first computer to pass the iconic 65-year-old Turing test

A live-chatting computer has passed the artificial-intelligence milestone Turing Test at an event in London, fooling a third of the people it conversed with into believing they were speaking to a human.
Professor Kevin Warwick, a visiting professor at the University of Reading, in England, said in a statement that some would claim the test had already been passed, but “this event involved the most simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted.”
The winning software, called Eugene Goostman, simulates a 13-year-old, saying that he likes hamburgers and candy and that his father is a gynecologist. Developer Vladimir Veselov said his team chose this personality because the program could then “claim that he knows anything, but his age also makes it perfectly reasonable that he doesn’t know everything.”
The father of modern computer science, Alan Turing, developed the Turing test 65 years ago to answer the question: Can machines think? In order to pass, a computer program must be mistaken for a human in more than 30% of its five-minute keyboard conversations.
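The pass criterion Time describes is simply a proportion over a set of judged five-minute conversations, as the following trivial sketch shows (the verdicts below are made up, not the Reading event's actual data).

# Toy check of the reported pass criterion: a program "passes" the test if it
# is judged human in more than 30% of its five-minute conversations.
verdicts = ["human", "machine", "human", "machine", "machine",
            "human", "machine", "machine", "machine", "human"]   # made-up judgments

fooled = verdicts.count("human") / len(verdicts)
print(f"judged human in {fooled:.0%} of conversations:",
      "pass" if fooled > 0.30 else "fail")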
According to Warwick, the test has implications today as a tool to fight cyberthreats.
“It is important to understand more fully how online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true … when in fact it is not,” he said.
The London event marked the 60th anniversary of Turing’s death, and took place nearly six months after he received a posthumous royal pardon for his 1952 homosexuality conviction.


http://www.cnbc.com/id/101774267#.

Elon Musk, Stephen Hawking and fearing the machine



When the man at the forefront of some of the most cutting-edge enterprises in the world warns you about, well, some potentially disastrous technological dangers, you should probably listen, right?
So pay attention to a warning from Elon Musk, the founder of Tesla, PayPal and SpaceX. During an interview on CNBC this past week he warned about artificial intelligence—you know, computers thinking for themselves.
"I think there's things that are potentially dangerous out there. ...There's been movies about this, like 'Terminator,'" he said on CNBC's "Closing Bell". "There's some scary outcomes and we should try to make sure the outcomes are good, not bad."
It's kind of an ironic comment from him, since he just invested in an artificial intelligence company, Vicarious, a start-up that is working on enabling machines to mimic the human brain.
"It's not from the standpoint of actually trying to make any investment return," he explained. "It's purely I would just like to keep an eye on what's going on with artificial intelligence."
Musk's warning is almost identical to that of another really, really smart guy—renowned physicist Stephen Hawking.
"Success in creating A.I. would be the biggest event in human history," Hawking wrote in a co-authored column in early May. "Unfortunately, it might also be the last, unless we learn how to avoid the risks."
He reiterated the warning recently in a pretty hysterical bit with comedian John Oliver on his new HBO show, "Last Week Tonight." Hawking pointed out that artificial intelligence could design improvements to itself and outsmart humans.
"I know you're trying to get people to be cautious there but why should I not be excited about fighting a robot?" asked Oliver.
"You would lose," said Hawking.
