Friday, July 4, 2014

Social media updates (July 4, 2014 – ironic items for Independence Day) -- Facebook's contempt for its customers revealed -- Facebook’s Experiment and Its CIA Roots -- Google Has Received 250,000 Article Removal Requests as Internet Censorship Takes Off in Europe

http://wallstreetonparade.com/2014/07/facebook%E2%80%99s-experiment-and-its-cia-roots/




Facebook’s Experiment and its CIA Roots

By Pam Martens and Russ Martens: July 3, 2014
Let us see if we have this straight: Facebook is a company that has been publicly traded for just over two years. It pays no dividend, so its key attraction for shareholders is the premise that it knows how to run and grow its business. Its initial public offering was one of the biggest fiascos in modern finance. Its core asset, from which its revenues flow, is the loyalty and growth of its user base – upon whom it decided to conduct secret psychological experiments, and then publish the findings.
But wait. It gets worse.
Facebook’s secret human lab rat study of a self-described “massive” 689,003 of its users – once again showing its contempt for its customers, a.k.a. lab rats – was published just last month in the Proceedings of the U.S. National Academy of Sciences under the title: “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks.” The study said the significant finding was that “emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.”
According to Facebook, this is what they did to manipulate the behavior of its unpaid and involuntary human lab rats:
“In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. This work also suggests that, in contrast to prevailing assumptions, in-person interaction and non-verbal cues are not strictly necessary for emotional contagion, and that the observation of others’ positive experiences constitutes a positive experience for people.”
Facebook has observed along with the rest of America the fallout from the revelations of secret government surveillance of its citizens. Somehow the public outcry over secret surveillance did not send a “non-verbal cue” to Facebook that there might be an outcry to revelations that it was using an algorithm to manipulate the emotional mood of its users without their knowledge or informed consent.
What if some of these users were under psychiatric care for depression? What if they had just lost their job, or their marriage, or their home, or experienced the death of a loved one? How outrageously irresponsible is it to secretly attempt to manipulate the mood of an already depressed person to a more negative state?
But wait. It gets worse.
In 1994, the CIA declassified a secret paper outlining other attempts to manipulate a person’s behavior without their knowledge. The document, “The Operational Potential of Subliminal Perception” by Richard Gafford notes the following:
“Usually the purpose is to produce behavior of which the individual is unaware. The use of subliminal perception, on the other hand, is a device to keep him unaware of the source of his stimulation. The desire here is not to keep him unaware of what he is doing, but rather to keep him unaware of why he is doing it, by masking the external cue or message with subliminal presentation and so stimulating an unrecognized motive.”
We’re also informed by the CIA that “The operational potential of other techniques for stimulating a person to take a specific controlled action without his being aware of the stimulus, or the source of stimulation, has in the past caught the attention of imaginative intelligence officers.”
And, the CIA offers some other helpful tips that Facebook may want to consider in its next human lab rat study:
“In order to develop the subliminal perception process for use as a reliable operational technique, it would be necessary a) to define the composition of a subliminal cue or message which will trigger an appropriate preexisting motive, b) to determine the limits of intensity between which this stimulus is effective but not consciously perceived, c) to determine what preexisting motive will produce the desired abnormal action and under what conditions it is operative, and d) to overcome the defenses aroused by consciousness of the action itself.”
But wait. It gets worse.
The jury is still out on whether this study had a military connection. The original press release issued by Cornell University, which was involved in the research study, indicated that the U.S. Army Research Office was one of the funders of the study. After there was a public uproar about the study itself, this correction appeared at the bottom of the press release:
“Correction: An earlier version of this story reported that the study was funded in part by the James S. McDonnell Foundation and the Army Research Office. In fact, the study received no external funding.”
While questions continue to swirl around this dubious study, one thing is not in doubt: Facebook has a unique talent for brand suicide.

http://rt.com/news/170448-facebook-study-journal-experiment/


Journal that published Facebook psych study sorry…social network not

Published time: July 04, 2014 10:20
Reuters/Robert Galbraith
The journal that published the Facebook mood-swings study regrets the way the study was conducted. Facebook issued an apology for accessing the content of 700,000 people’s pages, but the company’s second-in-command said she has no regrets.
It was concluded by the Proceedings of the National Academy of Sciences journal that the move to manipulate the content appearing on the Facebook pages of about 700,000 people without their prior consent may have violated some principles of academic research.
However, as a non-scientific, profit-driven company, Facebook wasn’t obliged to comply with scientific ethics, Inder Verma, the journal's editor-in-chief, wrote a day after the initial article.
"It is nevertheless a matter of concern that the collection of the data by Facebook may have involved practices that were not fully consistent with the principles of obtaining informed consent and allowing participants to opt out," he stated.
The "editorial expression of concern" appeared in the journal on Thursday.
It came after Facebook’s Chief Operating Officer Sheryl Sandberg apologized – although she wasn’t sorry about the experiment, she said, merely about the way it was carried out.
“This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated,” Sandberg told the Wall Street Journal.
Facebook's Chief Operating Officer (COO) Sheryl Sandberg (Reuters/Adnan Abidi)

The experiment carried out by Facebook consisted of manipulating the content that appeared in the news feed of a small part of the social network’s almost 1.3 billion users, AP reported. The study was carried out in January 2012, and aimed to prove that people’s moods could spread like an “emotional contagion” based on what they were reading.
The results were published a month ago, but global outrage began only a couple of days ago, after blogs and essays in the New York Times and The Atlantic raised questions as to whether the study was ethical.
Facebook data scientist Adam Kramer tried to explain why the study was undertaken in the first place.
“The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product. We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends' negativity might lead people to avoid visiting Facebook. We didn't clearly state our motivations in the paper. ”
Among the study’s authors was researcher Jeffrey T. Hancock of Cornell University, who has been connected to a Department of Defense-funded program on using the military to curb civil unrest. This news triggered online outrage.


UK authorities are currently probing whether the experiment broke UK data protection laws. If it did, the world’s most popular social network could face a 500,000-pound (US$857,000) fine.



http://www.infowars.com/google-has-received-250000-article-removal-requests-as-internet-censorship-takes-off-in-europe/

GOOGLE HAS RECEIVED 250,000 ARTICLE REMOVAL REQUESTS AS INTERNET CENSORSHIP TAKES OFF IN EUROPE

While the internet is an amazing tool for communication and free speech, we must also be aware of how it can be abused.
by MICHAEL KRIEGER | LIBERTY BLITZKRIEG JULY 4, 2014


In the walls of the cubicle there were three orifices. To the right of the speakwrite, a small pneumatic tube for written messages, to the left, a larger one for newspapers; and in the side wall, within easy reach of Winston’s arm, a large oblong slit protected by a wire grating. This last was for the disposal of waste paper. Similar slits existed in thousands or tens of thousands throughout the building, not only in every room but at short intervals in every corridor. For some reason they were nicknamed memory holes. When one knew that any document was due for destruction, or even when one saw a scrap of waste paper lying about, it was an automatic action to lift the flap of the nearest memory hole and drop it in, whereupon it would be whirled away on a current of warm air to the enormous furnaces which were hidden somewhere in the recesses of the building.
He who controls the past controls the future. He who controls the present controls the past.
- From George Orwell’s 1984
The reason Big Brother and his band of technocrat authoritarians spend so much time and effort erasing history in the classic novel 1984, is because they are a bunch of total criminals and they know it. Their grip on power is made so much easier if the proles are kept ignorant, confused and in the dark. This strategy is not just fiction, it is the philosophy of tyrants and authoritarians throughout history.
While the internet is an amazing tool for communication and free speech, we must also be aware of how it can be abused by those in power who wish to whitewash history. For more on this epic struggle, read the post, Networks vs. Hierarchies: Which Will Win? Niall Ferguson Weighs In. In it, Mr. Ferguson explains that the biggest threat to networks overcoming hierarchies is government technocrats gaining hold of the technological tools we now use to communicate with each other. He fears this is already happening with the NSA’s PRISM program and the complicity of all the major tech companies in the agency’s unconstitutional spying.

So it appears Orwell’s feared “memory hole” has begun to emerge in Europe. This shouldn’t be seen as a surprise considering the region’s devastating youth unemployment rate and the angst throughout its society. Censorship is gaining a foothold in the region through something known as a “right to be forgotten” ruling issued by the European Court of Justice. This ruling states that Google must essentially delete “inadequate, irrelevant or no longer relevant” data from its results when a member of the public requests it.
Of course this is incredibly vague, and who is to decide what is “no longer relevant” anyway? It seems quite subjective. This is clearly an attempt to take a tool designed to decentralize information flow (the internet) and centralize and censor it. As such, it must be resisted at all costs.
So far, we know of two major media organizations that have been informed of deleted or censored articles: the BBC and the Guardian. The BBC story is the one that has received the most attention because the content related to former Merrill Lynch CEO Stan O’Neal, who received a $161.5 million golden parachute compensation package after running the Wall Street firm into the ground and playing a key role in destroying the U.S. economy. The BBC reports that:
A blog I wrote in 2007 will no longer be findable when searching on Google in Europe.
Which means that to all intents and purposes the article has been removed from the public record, given that Google is the route to information and stories for most people.
So why has Google killed this example of my journalism?
Well it has responded to someone exercising his or her new “right to be forgotten”, following a ruling in May by the European Court of Justice that Google must delete “inadequate, irrelevant or no longer relevant” data from its results when a member of the public requests it.
Now in my blog, only one individual is named. He is Stan O’Neal, the former boss of the investment bank Merrill Lynch.
My column describes how O’Neal was forced out of Merrill after the investment bank suffered colossal losses on reckless investments it had made.
Is the data in it “inadequate, irrelevant or no longer relevant”?
Hmmm.
Most people would argue that it is highly relevant for the track record, good or bad, of a business leader to remain on the public record – especially someone widely seen as having played an important role in the worst financial crisis in living memory (Merrill went to the brink of collapse the following year, and was rescued by Bank of America).
To be fair to Google, it opposed the European court ruling.
Maybe I am a victim of teething problems. It is only a few days since the ruling has been implemented – and Google tells me that since then it has received a staggering 50,000 requests for articles to be removed from European searches.
I asked Google if I can appeal against the casting of my article into the oblivion of unsearchable internet data.
Google is getting back to me.
Since the original post, the author has provided an update:
So there have been some interesting developments in my encounter with the EU’s “Right to be Forgotten” rules.
It is now almost certain that the request for oblivion has come from someone who left a comment about the story.
So only Google searches including his or her name are now impossible.
Which means you can still find the article if you put in the name of Merrill’s ousted boss, “Stan O’Neal”.
In other words, what Google has done is not quite the assault on public-interest journalism that it might have seemed.
I disagree with his conclusion, and here is why. As is noted on this Yahoo post:
We don’t know whether it was O’Neal who asked that the link be removed. In fact, O’Neal’s name may be being dragged through the mud unnecessarily here. Peston believes it may be someone mentioned by readers in the comments section under his story about the ruling. 
He suggests that as a “Peter Dragomer” search triggers the same disclosure that a result may have been censored, that perhaps it was not O’Neal who requested the deletion. In an amazing coincidence, the person posting as “Peter Dragomer” claims to be an ex-Merrill employee.
Of course, it’s not an amazing coincidence. In fact, going forward, someone can simply post a comment below an article about a high-profile person to get the article removed, so that the subject of the article can pretend the request wasn’t his doing. In any event, someone who voluntarily leaves a comment should have zero say under this law. They went ahead and made the comment in the first place. Now you want an article removed because of a comment you made? Beyond absurd.
Now here’s the Guardian’s take:
When you Google someone from within the EU, you no longer see what the search giant thinks is the most important and relevant information about an individual. You see the most important information the target of your search is not trying to hide.
Stark evidence of this fact, the result of a European court ruling that individuals had the right to remove material about themselves from search engine results, arrived in the Guardian’s inbox this morning, in the form of an automated notification that six Guardian articles have been scrubbed from search results.
The first six articles down the memory hole – there will likely be many more as the rich and powerful look to scrub up their online images, doubtless with the help of a new wave of “reputation management” firms – are a strange bunch.
The Guardian has no form of appeal against parts of its journalism being made all but impossible for most of Europe’s 368 million to find. The strange aspect of the ruling is all the content is still there: if you click the links in this article, you can read all the “disappeared” stories on this site. No one has suggested the stories weren’t true, fair or accurate. But still they are made hard for anyone to find.
As for Google itself, it’s clearly a reluctant participant in what effectively amounts to censorship. Whether for commercial or free speech reasons (or both), it’s informing sites when their content is blocked – perhaps in the hope that they will write about it. It’s taking requests literally: only the exact pages requested for removal vanish and only when you search for them by the specified name.
But this isn’t enough. The Guardian, like the rest of the media, regularly writes about things people have done which might not be illegal but raise serious political, moral or ethical questions – tax avoidance, for example. These should not be allowed to disappear: to do so is a huge, if indirect, challenge to press freedom. The ruling has created a stopwatch on free expression – our journalism can be found only until someone asks for it to be hidden.
Publishers can and should do more to fight back. One route may be legal action. Others may be looking for search tools and engines outside the EU. Quicker than that is a direct innovation: how about any time a news outlet gets a notification, it tweets a link to the article that’s just been disappeared. Would you follow @GdnVanished?
This last idea is actually a great one. Every time an article gets censored it should be highlighted. If we could get one Twitter account to aggregate all the deleted stories (or perhaps just the high profile ones) it could make the whole censorship campaign backfire as the stories would get even more press than they would have through regular searches. Ah…the possibilities.
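A minimal sketch of how such an auto-tweeting alert could work, assuming a hypothetical removal-notification feed (the message wording, the article title shown, and the 22-character link budget for Twitter's t.co wrapping are illustrative assumptions, not any real Guardian or Twitter integration):

```python
# Illustrative sketch: turn a search-removal notification into a tweet-sized alert.
# Actual posting would go through a Twitter API client; here we only build the text.

TWEET_LIMIT = 140  # Twitter's character limit as of 2014
URL_LENGTH = 22    # Twitter wraps links via t.co, which counts as ~22 characters


def format_alert(title: str, url: str) -> str:
    """Build a tweet announcing a de-listed article, trimming the title to fit."""
    prefix = "Removed from EU search results: "
    suffix_len = 1 + URL_LENGTH  # one space plus the wrapped link
    room = TWEET_LIMIT - len(prefix) - suffix_len
    if len(title) > room:
        title = title[: room - 1] + "\u2026"  # truncate and add an ellipsis
    return f"{prefix}{title} {url}"


if __name__ == "__main__":
    # Hypothetical notification for a de-listed blog post.
    tweet = format_alert(
        "Merrill's mess",
        "http://www.bbc.co.uk/blogs/thereporters/robertpeston/2007/10/merrills_mess.html",
    )
    print(tweet)
```

An account like the proposed @GdnVanished would simply run this on every notification it receives and post the result, guaranteeing each "disappeared" story a second, louder life.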
Interestingly, due to all the controversy, a European Commission spokesman has come forth to criticize Google for removing the BBC article. You can’t make this stuff up. From the BBC:
Google’s decision to remove a BBC article from some of its search results was “not a good judgement”, a European Commission spokesman has said.
A link to an article by Robert Peston was taken down under the European court’s “right to be forgotten” ruling.
But Ryan Heath, spokesman for the European Commission’s vice-president, said he could not see a “reasonable public interest” for the action.
He said the ruling should not allow people to “Photoshop their lives”.
The BBC understands that Google is sifting through more than 250,000 web links people wanted removed.
Perhaps it wasn’t “good judgement” to issue this idiotic ruling in the first place. Just another government shit-show. As usual.


Censorship by ISP......


    THE UK’S INTERNET FILTERS BLOCK ALMOST 1 IN 5 WEBSITES

    20 percent blocked by filters.
    by JOSEPH COX | VICE JULY 4, 2014


    Almost one in five websites are blocked by the UK’s internet service providers’ filters, according to the Open Rights Group. Using an in-house developed tool, the digital rights organisation have tested the top 100,000 sites on the web and found that many of the 20 percent blocked by filters—which are intended to protect kids from inappropriate content—included innocuous, inoffensive or educational content.
    For example, a website used to sell and service Porsches is blocked by O2, while TalkTalk denies access to a feminist rights blog. Other blocked sites include the political blog Guido Fawkes, whose editor Paul Staines said: “We would really appreciate it if TalkTalk would remove us from their block list. The only people who block us are them and the Chinese government.”
    Open Rights Group have launched the tool they used to test sites, which means anyone can check if a website has been filtered and keep track of a running total of blocked sites. At the time of publication, the number of blocked sites had risen since the initial launch to over 22,500.
    As for the point of all this, Jim Killock, executive director of Open Rights Group, said: “Through the Blocked project we wanted to find out about the impact of web filters. Already, our reports are showing that almost one in five websites tested are blocked, and that the problem of overblocking seems much bigger than we thought. Different ISPs are blocking different sites and the result is that many people, from businesses to bloggers, are being affected because people can’t access their websites.”
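The running total the Blocked project keeps can be sketched as follows. The `is_blocked` probe here is a stand-in: the real tool requests each URL through every ISP's filtered connection, and its actual interface is not described in the article, so all names and the demo data below are hypothetical.

```python
# Illustrative sketch of the tallying logic behind a filter-checking tool:
# test each site against each ISP's filter and report the blocked share.

from typing import Callable, Dict, Iterable, List


def tally_blocked(
    sites: Iterable[str],
    isps: List[str],
    is_blocked: Callable[[str, str], bool],
) -> Dict[str, object]:
    """Count sites blocked by at least one ISP and report the overall share."""
    blocked_sites = []
    per_isp = {isp: 0 for isp in isps}
    total = 0
    for site in sites:
        total += 1
        hit = False
        for isp in isps:
            if is_blocked(isp, site):
                per_isp[isp] += 1
                hit = True
        if hit:
            blocked_sites.append(site)
    share = len(blocked_sites) / total if total else 0.0
    return {"blocked": blocked_sites, "per_isp": per_isp, "share": share}


if __name__ == "__main__":
    # Toy data standing in for the top-100,000 crawl described above.
    demo_rules = {("O2", "porsche-dealer.example"), ("TalkTalk", "feminist-blog.example")}
    result = tally_blocked(
        ["porsche-dealer.example", "feminist-blog.example", "news.example",
         "shop.example", "blog.example"],
        ["O2", "TalkTalk"],
        lambda isp, site: (isp, site) in demo_rules,
    )
    print(f"{result['share']:.0%} of tested sites blocked by at least one filter")
```

Run over a large enough crawl, the `share` figure is exactly the "almost one in five" statistic the Open Rights Group reports, and `per_isp` exposes the overblocking differences between providers that Killock describes.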