Sunday 16 June 2013


Google Inc has launched a small network of balloons over the Southern Hemisphere in an experiment it hopes could bring reliable Internet access to the world's most remote regions, the company said late Friday.

The pilot program, Project Loon, took off this month from New Zealand's South Island, using solar-powered, high-altitude balloons that ride the wind about 12.5 miles (20 kilometers) - twice as high as airplanes - above the ground, Google said.

Like the Internet search engine for which Google is best known, Project Loon uses algorithms to determine where the balloons need to go, then moves them into winds blowing in the desired direction, the company said.

By moving with the wind, the balloons form a network of airborne hot spots that can deliver Internet access over a broad area at speeds comparable to 3G using open radio frequency bands, Google said.

To connect to the balloon network, a special Internet antenna is attached to buildings below.

The Mountain View, Calif.-based company announced the project on its official blog and on its website, www.google.com/loon/.

The 30 balloons deployed in New Zealand this month will beam Internet to a small group of pilot testers and be used to refine the technology and shape the next phase of Project Loon, Google said.

Google did not say what it was spending on the pilot project or how much a global network of balloons might cost.

Google has also developed self-driving vehicles, which the company says could significantly increase driving safety.

Those vehicles are beginning to gain support from lawmakers in places like California, where a bill legalizing their operation on state roads was signed into law last year by Governor Jerry Brown.

Project Loon is a research and development project being developed by Google X with the mission of providing Internet access to rural and remote areas using high-altitude balloons placed in the stratosphere at an altitude of about 20 km (12 mi) to create an aerial wireless network with up to 3G-like speeds. Using wind data from the National Oceanic and Atmospheric Administration (NOAA), the balloons are maneuvered by adjusting their altitude: the system identifies a wind layer with the desired speed and direction, then moves the balloon up or down to float in that layer. People connect to the balloon network using a special Internet antenna attached to their building. The signal bounces from balloon to balloon, then to the global Internet on Earth. The balloon system is also expected to improve communication in affected regions during natural disasters. Raven Aerostar, a company that makes weather balloons for NASA, provides the high-altitude balloons used in the project. Key people involved in the project include Rich DeVaul, chief technical architect, who is also an expert on wearable technology, and Mike Cassidy, a project leader.
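The steering logic lends itself to a simple illustration. Below is a minimal Python sketch of the core idea: choosing the stratospheric wind layer whose direction best matches a desired heading. The wind data and the selection rule are invented for illustration and are not Google's actual planning code.

```python
# Hypothetical wind layers: (altitude_km, speed_m_s, direction_deg).
# Real Loon planning draws on NOAA wind models; these values are invented.
wind_layers = [
    (18.0, 12.0, 250.0),
    (19.5, 8.0, 90.0),
    (21.0, 15.0, 45.0),
]

def best_layer(desired_direction_deg, layers):
    """Pick the wind layer whose direction is closest to the desired heading."""
    def angular_error(layer):
        _, _, direction = layer
        diff = abs(direction - desired_direction_deg) % 360
        return min(diff, 360 - diff)
    return min(layers, key=angular_error)

# To drift east (90 degrees), ascend or descend to the chosen layer.
altitude_km, speed, direction = best_layer(90.0, wind_layers)
print(f"Move to {altitude_km} km: wind at {speed} m/s, heading {direction} deg")
```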

Development

The project began development in 2011 with a series of trial runs in California's Central Valley and was officially announced as a Google X project on June 14, 2013.

Pilot test

On June 16, 2013, Project Loon entered its pilot phase, in which about 30 balloons were launched from the Tekapo area on New Zealand's South Island in coordination with the Civil Aviation Authority of New Zealand. Fifty pilot testers located around Christchurch and the Canterbury Region will test whether they can connect to the aerial network using the special Internet antennas.

Equipment

The balloon envelopes are made of polyethylene plastic about 0.076 mm (0.0030 in) thick, and measure 15 m (49 ft) across and 12 m (39 ft) tall when fully inflated. A small box containing the balloon’s electronic equipment hangs underneath the inflated envelope. This box contains circuit boards that control the system, radio antennas to communicate with other balloons and with Internet antennas on the ground, and batteries to store solar power so the balloons can operate during the night. Each balloon’s electronics are powered by an array of solar panels that sit between the envelope and the hardware. In full sun, these panels produce 100 watts of power, sufficient to keep the unit running while also charging a battery for use at night. A parachute attached to the top of the envelope allows for a controlled descent and landing when a balloon is ready to be taken out of service.
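Those power figures allow a rough sanity check. The back-of-the-envelope Python sketch below uses the 100-watt panel output from the article; the electronics draw and night length are illustrative assumptions, not published Loon specifications.

```python
panel_output_w = 100       # full-sun output of the solar array (from the article)
electronics_draw_w = 40    # assumed average payload load (illustrative)
night_hours = 14           # assumed longest stratospheric night (illustrative)

energy_needed_wh = electronics_draw_w * night_hours   # 560 Wh to last the night
surplus_w = panel_output_w - electronics_draw_w       # 60 W left for charging
charge_hours = energy_needed_wh / surplus_w           # ~9.3 hours of full sun

print(f"Battery must store about {energy_needed_wh} Wh; "
      f"recharging takes roughly {charge_hours:.1f} h of full sun")
```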

The special ground stations are able to connect to the balloon-powered Internet when a balloon is within a 20 km (12 mi) radius.

Saturday 15 June 2013



Google announced on Thursday the retirement of a four-year-old plug-in designed to let users of older versions of IE (Internet Explorer) run Chrome's browser engine, declaring mission accomplished.

Chrome Frame joins numerous other discontinued Google projects, including much higher-profile departures, like Google Reader earlier this year. Support and updates for Frame will end in January 2014.

Google portrayed Frame's retirement as a positive, saying it had done its job.

"Today, most people are using modern browsers that support the majority of the latest Web technologies," said Chrome engineer Robert Shield in a post to the Chromium blog yesterday. "Better yet, the usage of legacy browsers is declining significantly and newer browsers stay up to date automatically, which means the leading edge has become mainstream."

Google launched Frame in September 2009, a year after the debut of the Chrome browser, casting it as a way to instantly boost the then-slow JavaScript speed of IE, and as an answer to the conundrum facing Web developers when designing sites and Web applications that relied on Internet standards IE didn't then support, such as HTML 5.

Chrome Frame runs in IE6, IE7 and IE8. Since Frame's launch, Microsoft has released both IE9 (2011) and IE10 (2012).

When Google shipped Frame nearly four years ago, IE6 accounted for the largest share of all copies of Internet Explorer: 35 percent. IE7 and IE8 were third and second, respectively, with shares of 27 percent and 31 percent.

As Shield noted, the mix has changed. As of May, IE6's share of all copies of the browser was 11 percent, while IE7's and IE8's were 3 percent and 41 percent. The newest browsers, IE9 and IE10, accounted for 27 percent and 17 percent.

Shield urged enterprises that now rely on Frame to switch to Chrome for Business, which offers something called "Legacy Browser Support," a Chrome add-in that automatically launches another browser -- an older version of IE, for example -- when IT-designated URLs are clicked. An optional add-on for IE then automatically switches users back to Chrome.
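The routing idea behind Legacy Browser Support is straightforward to sketch. The Python below shows the general pattern of matching a clicked URL against an IT-designated list; the list format and function names are illustrative assumptions, not Google's actual policy schema.

```python
from urllib.parse import urlparse

# Hypothetical IT-designated legacy sites; the real policy format differs.
LEGACY_HOSTS = {"intranet.example.com", "oldapp.example.com"}

def browser_for(url: str) -> str:
    """Route designated legacy sites to IE and everything else to Chrome."""
    host = urlparse(url).hostname or ""
    return "iexplore" if host in LEGACY_HOSTS else "chrome"

print(browser_for("http://intranet.example.com/reports"))  # iexplore
print(browser_for("https://www.example.org/news"))         # chrome
```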

Others have come up with similar solutions for companies still tied to older versions of IE. Browsium, a Washington State-based vendor, offers Catalyst, which acts as a traffic cop to open links in specified browsers, as well as its Ione add-on, formerly called Unibrows, which lets enterprises run legacy Web apps designed for IE6 in newer versions of Internet Explorer.

But while Shield characterized the retirement of Chrome Frame as a win -- proof of "just how far the Web has come," he said Thursday -- several Web app developers begged to differ.

"This is disappointing news," said Kathryn Fraser in a comment to Shield's post. "While you are correct in terms of the majority of personal users and their browsers, this doesn't translate to many of the health care and education enterprises that our apps are built for. We were only able to recently retire support for IE6 because of the availability of Chrome Frame."

"Chrome Frame was the only way to offer support for our modern Web app to customers who refuse to upgrade from IE7 and IE8," commented Aaron Smith. "It's very disappointing, but understandable, that Google is tossing this to the wayside. It sure leaves me in a bind."

Austin Fatheree was a lot more blunt. "Unreal. You just undid years of my work," he said in a comment late Thursday.

In an FAQ for developers, Google said that while it will stop updating Frame next January, the plug-in will continue to work after that date. "All existing Chrome Frame installs will continue to work as expected," the FAQ said.

Could Samsung's "next big thing" come from the heart of the Big Apple or Silicon Valley?

The smartphone and consumer electronics maker is close to launching an incubator space for startups that are developing software and services for phones, tablet computers and televisions.

With locations on Palo Alto's University Avenue and in New York's Chelsea neighborhood, the Samsung Accelerator is on the verge of opening its doors, and the company is already looking for its first round of early-stage companies.

"We're looking for bright ideas to build the next next big thing," says a sign that went up this week outside the accelerator's space in Palo Alto.

"We will bring together the people, power and resources to leverage the world's largest device ecosystem and launch product on a massive scale," the sign says. "And we're just getting started. Want to join us? You bring the product vision, we'll bring the rest."

The initial focus of Samsung's investment will be on startups working on communications and productivity software and services, but that could expand later.

Startup accelerators typically provide investment, office space, technical support and other resources in return for a slice of the company and some rights to its product or service.

Samsung stands to gain unique access to software and services that could help differentiate it from competitors making Android smartphones. The Android phone market is dominated by companies that spend tens or hundreds of millions of dollars on hardware development but comparatively little on exclusive software.

The company hasn't said much about the accelerator, and on Friday it declined to offer any more details on its plans.

A posting on the @samsungaccel Twitter account said "a sizable number" of applications have been received so far. It also posted a picture from inside the Palo Alto accelerator showing several desks and chairs apparently waiting for occupants.

In Palo Alto, the accelerator will be housed in the historic Varsity Theater, a well-known landmark on the city's leafy University Avenue not far from where companies such as Facebook, Pinterest and Hewlett-Packard got their starts.

The building served as a theater from 1927 until 1994, when it became a Borders bookstore. It's been vacant since Borders went out of business in September 2011 but still carries the bookstore's name. The Samsung Accelerator will be housed on the second floor, and a project is under way to convert the first floor to restaurant space.

The Silicon Valley office is due to open on July 11. A starting date for the New York office is not known.


"A small team has been working on a big idea,"

Facebook appears to have a new product up its sleeve that it will unveil next week at its headquarters in Menlo Park, California.

"A small team has been working on a big idea," said a letter sent to reporters on Friday, according to an ABC News report.

A new product will be announced at the event, slated for June 20, according to the report.

No other details were given. Facebook did not immediately respond to a request for comment on the event.

Facebook's latest product announcement came on Wednesday, when the company unveiled hashtags to let people more easily surface content around specific topics on the site.

The introduction of hashtags is just the first of a series of features the social network plans to roll out to "surface some of the interesting discussions people are having about public events, people and topics," Facebook said in that announcement.

However, whether next week's big reveal will have anything to do with surfacing content or searching across the social network in any other way is unknown.

There was no formal media event to mark the availability of hashtags.

Facebook's last big product unveiling to be held at its headquarters was for Home, a suite of software for Android-powered smartphones to put the social network front and center on those devices.

During a briefing with reporters last month, Facebook reported that Home had attracted close to 1 million downloads.

Revelations about the extent of NSA data-gathering programs have sparked a debate about government surveillance. We asked Network Computing readers to weigh in and received a variety of thoughtful responses, including fear of misuse of gathered data, questions about the efficacy of technology to protect against terrorism, and whether the actions of the NSA and other government agencies violate the principles, if not the laws, on which the country was founded.

Ward Thrasher writes:

We are sacrificing our freedoms in the name of perceived security. Much like the failed ability of facial recognition to pick out known security threats in a crowd, this snooping provides government with far, far more data than is necessary to combat terrorism or other national security threats. This snooping has failed to thwart many terrorist attacks since it was put in place, and arguably has been only partially, even tangentially, involved in stopping those which have been reported.

During the Clinton administration, the government decided national security was best handled through the application of technology, rather than deploying human assets to gather intelligence and assess threats.

Bin Laden demonstrated how easily terrorists can avoid the prying eyes (ears) of PRISM. Using the old-school sneaker net permits information to be moved without exposing it to technology-based surveillance. As PRISM's details emerge, the bad guys will simply move off the grid, rotate cell numbers frequently and take other steps to eliminate the (arguable) benefit of the PRISM program.

In exchange for this perceived blanket of security, the government now has access to behavioral patterns of folks who are not accused of criminal wrongdoing. This information can be used in any number of nefarious ways to the detriment of individuals, groups and society. An official promise to not do so is less than reassuring that this information will not be misused.

Not only does PRISM violate the constraints of the Constitution, its acceptance by society as a "necessary evil" places us on a slippery slope of ever more infringement of rights in the name of security. As Franklin is oft quoted, "They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety."

Technology must be deployed judiciously when it comes to the essential liberties guaranteed us by our Founding Fathers.

Antonio Leding sees the potential for abuse when technology moves faster than our ability to create sensible policies. He writes:

This is the exact event the Framers worried about. Part of the genius of America was that it started with the simple belief that our lives - both individually and collectively - are ours to manage as we see fit. The Framers gave us a set of starting blocks to begin from and if, over time, we decided that fundamental rights should be abridged, then the Constitution is supposed to make that process extremely difficult and force the massive debate that should occur with something so grave as that kind of change.

From a policy and legal perspective, America does not deal with rapid changes very well at all. As technology accelerates [it] enables us to completely short-circuit processes that previously could not be avoided - and these very processes are lynchpins in the American experiment. We all seem to forget an extremely important point--that humans are always flawed and that technology has the unique property by which to magnify the harmful impact of human laziness and flaws.

To those who say PRISM is necessary to protect America, the first question is, who's protecting us from PRISM??? I've searched far and wide and I've yet to find a single instance of a big brother state where those being governed led happy and meaningful lives like we have mostly been accustomed to here in the USA.

Sam Brauer questions the validity of big data analysis as a mechanism for thwarting terrorism.

To begin with, I'm not an IT guy, and I'm no whizz at analyzing big data. I'm a guy who's been part of and hung around the scientific research community for a long time (I help commercialize advanced materials) so perhaps I have a different perspective on the NSA/PRISM debate.

There's an interesting parallel that can be drawn with the debate about analyzing genomes in the biology community. The limitations of the big data approach in biology are becoming clear.

We know for certain that genetic patterns in individuals are different. But if we look at the assumptions that the NSA has made, why on earth would terrorists have different patterns in phone conversations than, say, high school gossip? There's no physical basis, and no information basis that's readily apparent.

How can the NSA know that they have a representative sample of terrorists' call patterns to analyze? If they don't know how many terrorists there are, then whoever they've got in their net might be too small a sample from which to extrapolate meaningful information. Why would the terrorists be in constant touch? Wasn't the point of the supposed Red sleeper networks to NOT communicate with agents so that they can't be identified? Why does the NSA assume that there is a constant flow of communication to terrorists that can be identified?

From my perspective, given the idiotic assumptions (and there are only synonyms that make sense here), this whole program has become an exercise in trying to find signal in the noise. At least when the physicists do it, they have a good hypothesis. The NSA doesn't. You can spend fortunes in time and money looking for signals that aren't there.

At the end of the day, if this is an example of how NSA spends its resources and our tax dollars, I think the organization's initials must stand for No Significant Accomplishments--except for impinging on all of our freedoms.

What are your thoughts about the government's broad data-gathering activities? Are they necessary? Effective? Or do they go too far? Share your comments and join the debate.

Friday 14 June 2013


Xbox One's price, $100 more than PS4, and potentially restrictive DRM hurt the console's outlook

A series of blunders by Microsoft has paved the way for Sony's PlayStation 4 console to capture the top spot in year-end holiday sales.

"In terms of the initial sales of Xbox One and the PS4 I think the hundred dollar price difference will make a difference," said Lewis Ward, research manager for gaming at IDC. "It will probably set up Sony to at least have the opportunity to emerge as the top eighth-generation console sales leader."

When Microsoft announced the price of its new console at a press event Monday there was an audible gasp from the audience of gamers and media. The US$499 price tag is higher than the $375 to $450 range that Ward predicted ahead of E3. Part of the reason for the premium is because the Xbox One comes prepackaged with a Kinect sensor. Sony's PlayStation 4 won't require the additional depth-sensing camera and microphone combination that Kinect offers.

A game developer who was waiting to try Microsoft's Xbox One, but did not want to be identified by name, said he currently owns an Xbox 360, but that, "Microsoft has completely lost me." He said that being forced to buy a Kinect sensor and pay a $100 premium is what turned him off. He said he plans to buy a PlayStation 4 when it becomes available later this year.

"I like the PS4 more and the whole $100 less helps out a little," said Jonathan Toyoda another gamer attending E3 who owns the PlayStation 3 and Xbox 360. He said that he'd put the $100 towards buying another game.

"I like the games that come along with the PS4," Paul Hsu a gamer at E3. He owns a PlayStation 3 and said he'd consider buying an Xbox One, but would likely stay loyal to Sony.

Another point of contention among gamers is the new DRM (digital rights management) that Microsoft revealed.

"It's a new layer of complexity with what happens with used game discs," Ward said.

Microsoft places much of the control over disc-based games in the hands of publishers. In the policy outlined on its website, Microsoft said, "We designed Xbox One so game publishers can enable you to trade in your games at participating retailers."

Regarding lending games to friends, the policy states, "Xbox One is designed so game publishers can enable you to give your disc-based games to your friends."

Game developer Microsoft Studios said it will enable the reselling and lending of games, but "third party publishers may opt in or out of supporting game resale."

At Sony's press event, the crowd erupted into cheers and applause when Jack Tretton, president and CEO of Sony Computer Entertainment America, announced that the company would stick to the status quo, allowing gamers to trade, sell, lend or keep disc-based games.

"When a gamer buys a PS4 disc, they have the rights to use that copy of the game," Tretton said on stage. "In addition, PlayStation 4 disc-based games don't need to be connected online to play."

"That will probably leave a little bit more money in gamers' pockets and it will remove the complexity around how many versions I canA give to my friend,'" Ward said.

The lack of DRM in Sony's plans might thrill gamers, but not necessarily game developers and publishers, he said.

"You could say Microsoft took one for the team," he said. "If they limit the resale then it may force more people to buy a new disc and at the end of the day that's how they make their money."

Sony's aim is to convert current PlayStation 3 owners to PS4 buyers, Ward said. Making inroads in North America and stealing Xbox 360 gamers or prospective Xbox One customers would be "huge," he said.

"If Sony really wants to emerge as the leader in the eighth-generation console race, they're going to have to take away momentum from Microsoft beginning this holiday season in North America."

From 2005 until the end of 2012, 250 million seventh-generation consoles were sold across Sony's PlayStation 3, Microsoft's Xbox 360 and Nintendo's Wii, according to IDC. The Wii was the runaway leader with nearly 100 million units sold, but second and third place were much closer: Microsoft shipped 76 million consoles, while Sony shipped 75 million.

Even when presented with copious email evidence, Apple VP Eddy Cue had little to say about charges of e-book price collusion.

Apple Senior Vice President Eddy Cue offered only short answers in testimony Thursday in federal court when questioned by U.S. Justice Department prosecutors trying to solidify their case that Apple, along with five of the largest book publishers, worked together to illegally set the prices of electronic books for the market.

Apple co-founder Steve Jobs, who was instrumental in the negotiations, cast an even larger shadow across the proceedings. Cue, who is senior vice president of Internet software and services, led the effort in late 2009 through 2010 to get publishers to release their titles on Apple's iBookstore reader for the soon-to-be-launched iPad, and consulted frequently with Jobs.

"Steve and I worked very closely on this," Cue said.

Asked who had decision-making authority regarding e-book pricing, Cue replied that Jobs, who died in October 2011, was the ultimate arbiter for all decisions made at Apple during that time.

The Department of Justice charges that Apple colluded with the book publishers, including Harper Collins, the Penguin Group and Simon & Schuster, to set up a new pricing model, in violation of the Sherman Antitrust Act. The agency has offered a large collection of emails and phone records that strongly suggest that publishing company CEOs, Jobs and Cue worked together to hammer out an agreement. DOJ prosecutors spent the morning Thursday asking Cue to elaborate on his emails with publishing CEOs and Jobs.

According to the Justice Department, book publishers were worried in 2009 that Amazon, then the largest retailer of electronic books, was lowering the perceived wholesale value of the books, due to its strategy of discounting books for US$9.99.

Apple, eager to get into the e-book business with the iPad, used the "Amazon threat" as a calling card for the publishers, according to the DOJ. The company presented the idea of moving the book industry from a wholesale model, where retailers buy books at the wholesale rate and then charge whatever price they want, to an agency model. Under that model, the publisher sets the price and the seller, in this case Apple, gets a fixed percentage. Apple pioneered the agency model in the electronic realm with its app store.

With the agency model, Apple could offer the publishers the ability to set the prices at whatever level they liked, such as $12.99, $14.99 or $16.99. Crucial to this deal, originally, was the stipulation that all the publishers would switch over to the agency model, and that they would switch all the retailers, including Amazon, to this model of book purchasing.
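The difference between the two models is easiest to see with numbers. In the Python arithmetic below, the $9.99 and $14.99 prices come from the article; the wholesale rate and the resulting per-copy splits are illustrative, not figures from the trial record.

```python
# Wholesale model: the retailer pays a fixed wholesale rate, then sets
# any retail price it likes, even below cost.
wholesale_rate = 13.00        # assumed wholesale price per e-book
amazon_price = 9.99           # discounted retail price from the article
amazon_margin = amazon_price - wholesale_rate        # -3.01, a loss leader

# Agency model: the publisher sets the retail price; the seller keeps
# a fixed percentage (30 percent in Apple's case).
agency_price = 14.99
apple_cut = 0.30
publisher_take = agency_price * (1 - apple_cut)      # 10.49 to the publisher
apple_take = agency_price * apple_cut                # 4.50 to Apple

print(f"Wholesale: retailer nets {amazon_margin:+.2f} per copy")
print(f"Agency: publisher gets {publisher_take:.2f}, Apple {apple_take:.2f}")
```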

Cue, who often worked in between the CEOs and Jobs himself, is the central witness in the trial at the U.S. Southern District Court of New York, with District Judge Denise Cote presiding. Cue confirmed the emails and other documentation that the DOJ had assembled to show this chain of events, but rarely elaborated in his answers to DOJ questions, even when he disagreed with the prosecutors' assertions.

When the prosecutors asked Cue if, for instance, the publisher CEOs were talking among themselves about Apple's proposed move to the agency model, Cue denied knowing anything about their discussions and denied even suspecting that they were talking among themselves. The DOJ pointed to records of more than 100 phone calls the publisher CEOs had made among themselves in the month after Apple proposed the agency model. "No one [at Apple] had knowledge that publisher meetings were happening," he said.

Cue was careful to point out that while Apple was promising that it could sell books at the publishers' preferred rate, it made no promises as to whether these preferred rates would be accepted by other retailers. At one point, Apple offered to move away from the agency model, should the publishers offer it "Most Favored Nation" (MFN) pricing, in which the publishers guaranteed that Apple could sell their e-books, with the 30 percent markup, at the lowest price offered by any retailer. The publishers balked at this proposal, however.

The DOJ also pointed to an email that one student sent to Jobs asking why Apple was raising the price of e-books, a move the student characterized as greedy. Jobs responded in an email to the student that it was the publishers, not Apple, who were raising the prices. The DOJ asserted that Apple was only interested in ensuring that it would get a 30 percent cut of the sales, which was in line with what Apple gets with its sales of other content, such as music and television shows.

Cue also downplayed any assertions that Apple was attempting to replicate what Amazon was doing with its Kindle-based e-book selling. The DOJ pointed to one email exchange between Cue and Jobs in which Jobs asked whether Amazon was selling self-published books; Cue replied that it was, and Jobs decided to consider the idea for Apple, which later included self-publishers. But when the prosecutor then asserted, from this exchange, that Jobs was basing his decisions on what Amazon was doing, Cue responded that the idea was "incorrect," without elaborating.

Cue will take the stand again later Thursday and is expected to be more forthcoming with details when questioned by Apple's chief counsel, Orin Snyder.

Facebook users: Get ready to see a lot more of the hashtag in your News Feed.

The social network announced yesterday that it is rolling out the popular feature to users over the next few weeks.

Hashtags, made famous by microblogging site Twitter and used on a number of other social sites such as Instagram, Pinterest and Tumblr, turn topics and phrases into clickable links on your personal timeline or your Page. They also make your post searchable.

"To date, there has not been a simple way to see the larger view of what's happening or what people are talking about," says Greg Lindley, product manager at Facebook. Hashtags, he says, will help bring more conversations to the forefront.

According to Facebook, hashtags will appear blue and will redirect to a search page with other posts that include the same hashtag.
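Mechanically, that amounts to matching a pattern in the post text and turning each match into a link. Here is a minimal Python sketch of the idea; Facebook's actual parsing rules and link format are not public, so the regular expression and URL below are assumptions.

```python
import re

# Assumed pattern: '#' followed by word characters.
HASHTAG = re.compile(r"#(\w+)")

post = "Watching the #NBAFinals with friends #basketball"
tags = HASHTAG.findall(post)
print(tags)  # ['NBAFinals', 'basketball']

# Each tag would then be rendered as a clickable link to a search page.
for tag in tags:
    print(f"https://www.facebook.com/hashtag/{tag.lower()}")
```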

As part of the rollout, Facebook says you will also be able to click hashtags that originated on other services, such as Instagram, which is owned by Facebook. It also plans to roll out additional features, including trending hashtags, in the near future, it says.

While hashtags are widely used on other sites, there are a couple of things you need to know about the new feature and how it does and doesn't affect your Facebook privacy.

First, adding a hashtag does not affect the privacy of your post. If your privacy settings are set to Friends, for example, only your friends can view it. Similarly, if your friends search for a hashtag that you've used in the past, your post will appear only to them, and no one else, in search results, Facebook says.

"As always, you control the audience for your posts, including those with hashtags," Lindley says.

Second, if you use a hashtag in a post you publish and you want it to be searchable to everyone, remember that your most-recent privacy setting is the one Facebook will default to for subsequent posts, unless you change it back.

For example, say your privacy settings are "Friends Only." You decide to change the privacy setting for one particular post to "Public." Your subsequent posts will be public unless you change it back to "Friends Only."

Monday 10 June 2013



When we have a question, our smartphones are there for us. They are the revolutionary personal assistants that tend to all our planning and searching needs.

But one tech developer wants to push mobile gadgets one step further to help us even more.

Benki, which launched its Kickstarter campaign Tuesday, promises to combine our home devices into one well-designed interface with three components: a smart outlet, a battery-powered camera and an open-and-close sensor for doors and windows.

Kickstarter has hosted similar smart-device projects before, but Benki's creators want your phones to turn appliances on and off, rather than just gather data and monitor usage. Benki, which is essentially a service for connecting your devices, is controlled with an app for iOS and Android, so you know what your devices are doing wherever you are.

The smart outlet can turn plugged-in appliances on and off at remote command; it also comes equipped with a temperature sensor, light sensor and microphone. So if you want to get creative, you can use Benki as a baby monitor, or for a variety of other tasks. It will also track your power usage over time.
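Since Benki had published no technical documentation at the time, here is a purely hypothetical Python sketch of what remote control of the smart outlet might look like; every endpoint, field and token below is invented.

```python
import requests  # third-party HTTP client: pip install requests

BASE = "https://api.example-benki.com/v1"           # invented endpoint
HEADERS = {"Authorization": "Bearer <your-token>"}  # invented auth scheme

def set_outlet(outlet_id: str, on: bool) -> None:
    """Ask the cloud service to switch a smart outlet on or off."""
    requests.post(f"{BASE}/outlets/{outlet_id}/state",
                  json={"on": on}, headers=HEADERS, timeout=10)

def read_temperature(outlet_id: str) -> float:
    """Read the outlet's built-in temperature sensor, in Celsius."""
    resp = requests.get(f"{BASE}/outlets/{outlet_id}/sensors",
                        headers=HEADERS, timeout=10)
    return resp.json()["temperature_c"]

set_outlet("living-room-lamp", True)
print(read_temperature("living-room-lamp"))
```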

The battery-powered camera gives you visual insight into what's happening in a given area of your home throughout the day, and the open-and-close sensor will alert you if a door, cupboard or window becomes ajar.

With a goal of $220,000, Benki has a ways to go (it raised just over $20,000 at press time), but backers will have until July 4 to make their voices heard. Those who pledge early can nab all three Benki devices if they spend $180.

What do you think? Would you use Benki in your home? Share your thoughts in the comments, below.


Confused about this week's deluge of news about the NSA's secret surveillance programs? Start here.

It began on Wednesday, with the revelation that Verizon had received a secret court order requiring it to give the National Security Agency access to the metadata of all Americans' phone calls. Within a few hours, reports confirmed it was a standard, routine, ongoing and recurring practice, one which is reauthorized every three months.

On Thursday, another secret NSA program called PRISM was revealed. The program involves some of the larger Internet companies, like Google, Facebook, Microsoft and Apple, which have all apparently collaborated with the government to wiretap their foreign users' communications.

Shortly after, another report confirmed it wasn't just Verizon — AT&T and Sprint have been receiving the same orders too.

Here's our FAQ on the two separate programs.

What is the phone call metadata collection program?

This allows the NSA to gather information on all phone calls (at least all Verizon, AT&T and Sprint calls) made in the United States. It doesn't allow the NSA to listen in on the calls. The agency doesn't receive the names of the customers. All information is anonymous.

All the spy agency gets is telephone numbers, duration of the calls, the unique serial numbers of the cellphones, and potentially the location of the call participants.
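To see why this data matters, consider what even a few records reveal. The sketch below uses an illustrative record layout in Python; the actual schema handed to the NSA has not been published.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class CallRecord:           # illustrative fields, mirroring the list above
    caller: str             # originating phone number
    callee: str             # receiving phone number
    duration_s: int         # length of the call in seconds
    imei: str               # handset serial number
    cell_tower: str         # coarse location of the caller

records = [
    CallRecord("555-0101", "555-0199", 320, "IMEI-1", "tower-42"),
    CallRecord("555-0101", "555-0199", 45,  "IMEI-1", "tower-42"),
    CallRecord("555-0101", "555-0123", 610, "IMEI-1", "tower-7"),
]

# Without hearing a word of the calls, patterns fall out immediately:
# who someone talks to most, and from roughly where.
print(Counter(r.callee for r in records).most_common(1))  # [('555-0199', 2)]
```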

This information is provided daily to the NSA by the telecom companies for periods of three months. According to reports, this has been going on for seven years. The Verizon order is similar to one first issued by the Foreign Intelligence Surveillance Court in 2006, an order which is reissued routinely every 90 days.

This was confirmed by Senators Dianne Feinstein (D-Calif.) and Saxby Chambliss (R-Ga.). "As far as I know, this is an exact three-month renewal of what has been the case for the past seven years," Feinstein said.

Why is this a big deal?

Authorities have been quick to discount the importance of this type of data. But privacy experts and advocates argue that such an extensive and continuous collection can be even more revealing than spying directly on phone calls.

"Even without intercepting the content of communications, the government can use metadata to learn our most intimate secrets – anything from whether we have a drinking problem to whether we’re gay or straight," Jay Stanley and Ben Wizner, from the American Civil Liberties Union, wrote on Reuters.

Jane Mayer, at The New Yorker, has a great piece on how much metadata can reveal, and how wrong it is to think that it's not as intrusive as tapping phone calls.

What's the legal basis for this program?

This program legally stems from Section 215 of the Patriot Act, a controversial portion of the law. The section builds on the Foreign Intelligence Surveillance Act, a law that was passed in 1978 but was significantly expanded after 9/11.

Section 215 authorizes the U.S. government to request that businesses turn over "the production of any tangible things (including books, records, papers, documents, and other items)," as long as it needs the information for an investigation "to protect against international terrorism or clandestine intelligence activities."

This provision allows the government to investigate both American citizens and non-American citizens. The only difference is that American citizens can't be investigated solely on the basis of activities protected by the First Amendment. The NSA shouldn't investigate an American just based on what books he or she reads, for example.

Authorities don't need to show probable cause. They don't even need to show grounds that the person targeted is a criminal — in other words, there's no need for a warrant, which is normally required in search and seizure cases. All the government needs to do is convince the secretive Foreign Intelligence Surveillance Court that the information sought is relevant to a terrorism investigation.

Section 215 also comes with a gag clause. Businesses that receive the order are legally prevented from disclosing its existence to anybody — including the subject of the order. That's why so few cases have been disclosed.

According to a Congressional Research Service report from 2012, Section 215 "both enlarged the scope of materials that may be sought and lowered the standard for a court to issue an order compelling their production."

What do officials say about this program?

The White House, as well as members of Congress, defended the program, arguing that it's not invading privacy since it doesn't allow authorities to listen in on the actual calls, and that it's a critical tool in the fight against terrorism.

"Everyone should just calm down and understand this isn't anything that is brand new,'" said Senate majority leader Harry Reid.

James Clapper, the Director of National Intelligence, released a statement arguing that this program is "a sensitive intelligence collection operation, on which members of Congress have been fully and repeatedly briefed. The classified program has been authorized by all three branches of the Government."

What is the NSA's Internet wiretapping program?

We have fewer details about this program, which is still shrouded in secrecy. The program, called PRISM, allows the federal government to secretly collect information on foreign users of popular Internet services provided by Google, Yahoo, Facebook, Apple and at least five more companies, according to both The Guardian and the Washington Post, which obtained sections of a classified 41-page PowerPoint presentation about the program.

Under this program, the NSA allegedly has real-time access to e-mail, chat services, videos, photos, stored data, and file transfers from the collaborating services. In other words, the NSA has permission to eavesdrop and conduct blanket surveillance on foreigners' online communications.

Pretty much all the companies named in the secret presentation have either denied their participation in the program, or even denied knowing that the program existed at all. But President Obama, Clapper and legislators have effectively confirmed its existence.

We don't know yet about the technicalities of the program. In other words, we don't know if the NSA has compelled these companies to install a backdoor in their servers (something that the companies denied), or if they access an API (a theory also denied by some companies). What the companies said, basically, is that they don't allow any unfettered open-ended access, but that they just respond to lawful requests. What that precisely entails, in this case, is anyone's guess.

Why is this a big deal?

"They quite literally can watch your ideas form as you type," an intelligence officer, who leaked the secret documents to the Washington Post, told the paper.

If that's true and the reports turn out to be accurate, this program is just like having somebody look over your shoulder while you're on your computer, all the time.

What's the legal basis for this program?

As we explained here, PRISM stems from Title VII, Section 702 of the Foreign Intelligence Surveillance Act, or FISA. It was created by the FISA Amendments Act of 2008.

Section 702 allows the NSA to acquire information on foreign targets. The provision is pretty clear on this point: it's "foreign-intelligence information concerning non-U.S. persons located outside the United States." If the program intercepts information pertaining to American citizens, that interception can only be "incidental."

Even with such precise language, it's easy to see how the program could be abused. American data can easily get trapped in this surveillance dragnet if U.S. citizens communicate with somebody outside of the country.

What do officials say about this program?

Pretty much the same thing they said about the other one: it's about fighting terrorism, it's about making Americans safe, and it's perfectly legal.

"Information collected under this program is among the most important and valuable intelligence information we collect, and is used to protect our nation from a wide variety of threats," said Clapper, in a seperate statement addressing the PRISM program and Section 702.

Obama told a press conference that PRISM "does not apply to U.S. citizens and it does not apply to people living in the United States. And again, in this instance, not only is Congress fully apprised of it, but what is also true is that the FISA Court has to authorize it."

In other words, it's all legal, and it has all been approved by the powers that be.


Facebook Founder and CEO Mark Zuckerberg is personally denying Facebook's reported involvement in the National Security Agency's secret PRISM Internet surveillance program.

From Zuckerberg's Facebook page:

"I want to respond personally to the outrageous press reports about PRISM:

Facebook is not and has never been part of any program to give the US or any other government direct access to our servers. We have never received a blanket request or court order from any government agency asking for information or metadata in bulk, like the one Verizon reportedly received. And if we did, we would fight it aggressively. We hadn't even heard of PRISM before yesterday.

When governments ask Facebook for data, we review each request carefully to make sure they always follow the correct processes and all applicable laws, and then only provide the information if it is required by law. We will continue fighting aggressively to keep your information safe and secure.

We strongly encourage all governments to be much more transparent about all programs aimed at keeping the public safe. It's the only way to protect everyone's civil liberties and create the safe and free society we all want over the long term."

"Protecting the privacy of our users and their data is a top priority for Facebook. We do not provide any government organization with direct access to Facebook servers. When Facebook is asked for data or information about specific individuals, we carefully scrutinize any such request for compliance with all applicable laws, and provide information only to the extent required by law."

The statements, however, still leave wiggle room for Facebook's participation in some kind of surveillance effort or efforts. The point about "direct access" to servers, in particular, is leading some commentators to posit the NSA program may involve third party servers.



Should you accept LinkedIn connection requests from strangers? Before deciding, make sure you understand the security and reputation risks.

Facebook, Twitter, Pinterest and the like make sense professionally for some people, but not for others. Presence on LinkedIn, on the other hand, is a no-brainer.

Although LinkedIn has gone through lots of changes lately, for the most part it is what it is: the leading social network and collaboration space for people who want to make and develop professional contacts and their own careers. What's less clear about LinkedIn is how far your network should extend. Sure, having lots of connections looks good on your profile, but is any connection a good connection? Can some connections actually hurt you?

There are two schools of thought on this issue, according to Ari Lightman, professor at Carnegie Mellon University and director of its CIO Institute. "If you're an open networker, it makes sense to connect to as many folks as possible -- that broadens your network and gives you reach which might come in handy and provide greater visibility," he said. "The other camp says if you do not know the person you should not connect with them."

Lightman said the arguments for being more selective about the people you connect with are focused on relevance and security. "More people make it more difficult to receive information that really might be of value," he said. "Another argument for no is that you open up yourself to spam from folks who want to sell you products and services. This is commonplace, and many people simply tune it out. But when there is malicious intent -- say, a phishing attempt -- then clicking on a link can load a virus onto your system. There have also been several fake connection requests infecting the unsuspecting user with virus attacks."

And as LinkedIn and other social networks soak up more and more of our personal information, people may be looking to connect to perpetrate identity theft.

"There is plenty of information that someone could mine if they wanted to try and recreate your identity, including work history," said Lightman. "Exposing your information to a wide community gives them access to lots of data about you that could be used maliciously. LinkedIn, as well as all other social networks, are trying to get this under control by allowing people to adjust their privacy settings. It comes down to the classic trade-off of openness/transparency versus risk mitigation."

In addition, indiscriminate connecting isn't just a threat to you; it could be a threat to your company and your colleagues, according to security consultant Brad Causey.

"LinkedIn is a goldmine of reconnaissance and attack opportunities," he said. "Once connected, competitors will have access to your other connections, and can often dissect the organization chart of the company. This can lead to targeted recruitment efforts, or even insight into proprietary processes. ... In addition to recon and competitor insight, spear-phishing campaigns allow fake groups or fake profiles to target specific company employees for compromise."

However, Bruce Hurwitz, president and CEO of Hurwitz Strategic Staffing, warned that refusing invitations could close off opportunities you haven't even imagined.

"I once did a search for an economist in Columbus, Ohio," said Hurwitz. "I know no one in Columbus. So I sent a message to all my first-degree connections in Columbus. I got a candidate. Now, as a recruiter, the old-fashioned way of doing business would have been for me to call financial institutions and universities to see if I could find someone. But I used LinkedIn. The person who got me the candidate was the owner of a beauty parlor. One of her customers was married to the economist. Never in a million years would I have called beauty parlors in Columbus to look for an economist. But through LinkedIn, that's how I found him. And that's the best example I can give of why you do not want to limit your network."

Jake Wengroff, founder and principal analyst with social business consulting firm JXB1, noted that this issue is not a new one. In fact, he said, it's been divisive for years, and it's an issue that people may change their minds about, depending on where they are in their careers: "Closing yourself off to people may make sense when you are happily, gainfully employed, but what happens when circumstances change?" he said.

He added that connecting to strangers improves search results on LinkedIn, especially when searching for jobs -- as connections increase, so do job listings.

That said, Wengroff and others noted, it's important to do some level of digging when you get an invitation from someone you don't know -- especially when their profile information is scant.

"When I receive an invitation from a stranger," said Wengroff, "I find the email alert in my external inbox and send a reply message with a short note saying, 'Hi, thank you for the invitation to connect on LinkedIn. How did you find me?' This helps in determining whether it's a good idea to connect with the stranger. It's been hit or miss, but I have met about a dozen such strangers who took the time to craft original messages and explain why they wanted to connect with me."

Hurwitz said he accepts all invitations to connect, with a few exceptions.

"I accept invitations from everyone," he said. "If I have a competitor who wants to connect with me, I have no problems with that. They will receive tweets and updates about what I am doing, and that's how I build my reputation in my industry. I do not, however, accept invitations from individuals with provocative photos or who are members of the 'adult film industry.' That's because I care about my reputation."

What's your cutoff for connecting on LinkedIn? Do you have to know (or know of) the person who is making the invitation, or do you throw caution to the wind to grow your network as much as possible? Please let us know in the comments section below.

Sunday 9 June 2013



Whether it's editing, watching, or backing up your videos, I give you the low-down on the free tools you can use.

Share your videos 


Almost everyone who shoots videos wants to share them, and the best way to do this is to upload those clips to YouTube. Your Google ID provides you with your own channel on the service, and you can even keep your videos private, accessible only to friends and family, up to a maximum of 50 people (note that the folks you intend to share your videos with will also need a YouTube/Google account). Some tips for uploading videos:

Use the MP4 format for smoother uploads. 


If you want to upload videos longer than 15 minutes, click on 'increase your limit' on YouTube's upload page. It's free, but you will need to provide your cell phone number for verification.

YouTube offers an easy-to-use online editor where you can edit your clips, apply effects, and add audio before uploading.

If you have a high-speed internet connection, upload full-resolution videos. YouTube will automatically create lower-res versions, which means your viewers will have the option to watch either the full-resolution or a lower-resolution version.
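If a clip isn't already in MP4, the free ffmpeg tool can re-encode it before upload. Here is a minimal Python sketch, assuming ffmpeg is installed and using example file names:

```python
import subprocess

# Re-encode to H.264 video and AAC audio in an MP4 container.
subprocess.run([
    "ffmpeg",
    "-i", "holiday_clip.avi",   # example source file
    "-c:v", "libx264",          # H.264 video codec
    "-c:a", "aac",              # AAC audio codec
    "holiday_clip.mp4",         # MP4 output ready for upload
], check=True)
```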

On the other hand, if you want a video-hosting site that allows you to share videos with more than 50 people, try Vimeo. All you have to do is add a password to your video, and then share the link and the password with your friends. It's as simple as that.

Edit your videos 


So you have an idea of how your holiday videos can be made more interesting with the help of a few cuts and edits. The only problem is that video-editing software is rather expensive. And heck, even if you had such a tool, it would probably be too tough to master. Well, not true. Here are a few programs that are not only free, but also simple to use.

Windows Movie Maker: When it comes to basic video editing, few programs beat WMM. Adding a title to your video takes a couple of clicks, and adding effects like 'sepia' or 'black-and-white' requires a minute or two. Indeed, if you are a beginner, it will only take you a few days to learn how to carry out basic editing tasks like trimming, applying filters, joining clips, etc.

And once you're satisfied with your "director's cut", you can export the final product to the MP4 format.

The program even has a few pre-defined video settings for Full HD TVs, as well as iOS and Android devices, but if you are comfortable with technicalities such as bitrates and frame rates, you can also manually play around with these settings.

VideoPad: If you're looking for greater flexibility than what Windows Movie Maker offers, then try VideoPad - a free program that offers sequence-based video editing. This means you can cut parts from various clips, rearrange them in a different order, apply effects and filters to selected parts, and overlay the clips with audio. It sounds a bit daunting, but VideoPad is simple to use. And of course, once your video is done, you can export it to any of the popular video formats.

Any Video Converter: Your buddy has just sent a few videos of your school reunion. Sadly, your tablet doesn't recognize the file format. Try Any Video Converter. Just select the clip that you want to convert and then choose the device on which you want to play it. For example, if you're going to play the video on an Android tablet, select that option from the list. Just hit convert and the program will take care of the rest. AVC also supports batch conversion which makes it easier to convert several clips at once.

Manage your DVD collection 


We all have our favourite DVDs and VCDs, but if you're looking for a way in which you can store/backup your expensive collection on a hard drive then you'd do well to keep reading...

ImgBurn: This is a free program that creates an exact copy (ISO file) of a DVD, which is then saved to your hard drive. And it's simple to use, too. Insert your DVD into the optical drive of your PC, launch ImgBurn, select 'Create image file from disc' and follow the on-screen instructions. You can later burn the resulting ISO file onto a writable DVD by using the 'Write image file to disc' option. Making an image of a DVD is the best way to preserve it, because ISO files include the disc's data in its entirety, without changing anything.
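Under the hood, "creating an image" is just a raw, sector-by-sector copy of the disc into a single file. Here is a minimal Python sketch of the idea, assuming a Linux device path; ImgBurn does the Windows equivalent, with proper error handling on top.

```python
CHUNK = 2048 * 1024  # read in multiples of the 2048-byte DVD sector size

with open("/dev/sr0", "rb") as disc, open("backup.iso", "wb") as image:
    while True:
        block = disc.read(CHUNK)
        if not block:
            break
        image.write(block)
```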

Daemon Tools Lite: This program, which is free for non-commercial use, will help you play the ISO files you have created on your computer. It basically creates a virtual optical drive on your PC through which you can 'mount' the ISO images stored on your PC to play them. Using Daemon Tools is simple: Once you have installed the program, it creates a virtual disk that shows up alongside your hard drive or other optical drives. Depending on how many drives you have in your computer, it will take the next free drive letter. For example, if you have a 'C' and a 'D' drive, it will be called 'E'. To access it, open Daemon Tools Lite. You will see the virtual drive within the program. Double-click on this drive, navigate to and select the ISO file you want to play. Mount the ISO file and it should start playing. Unmount it when you're done.

DVDFab Decrypter: The free version of this software strips DVDs and Blu-rays of copy protection and 'rips' the movie to the hard drive. Ripping films with DVDFab Decrypter is easy and just takes a few clicks. The ripped movies can then be converted into other formats such as MP4, FLV and 3GP to view on tablets or older television sets by using programs like Any Video converter.

XBMC: Now that you have ripped your DVDs, there must be a large number of folders and videos on your hard disk. Use XBMC to organize it all. The free program seeks out the movies on your PC, and then tags and sorts them into various genres by pulling this information, including album art, from websites like IMDB. XBMC is also a capable video player that can handle many video formats and codecs with ease.

Watch videos on streaming sites 


While the best way to watch an online video is in a web browser, there are a few plug-ins and extensions that actually improve your experience...

MagicActions for YouTube: This Chrome browser extension adds quite a few nifty features to your YouTube experience. It lets you play videos in a loop - and it even comes with what it calls the 'cinema mode' where the space around the video goes black so you can enjoy it without any distractions. The extension also lets you easily share videos on Facebook; play the clips in HD by default; control audio with the mouse wheel; zoom into a scene during playback; and even hide the player controls.

Install from the Chrome extension store 


Flash Video Downloader: This extension for the Firefox browser lets you download videos from quite a few websites, including YouTube, DailyMotion, FailBlog, CNET Video Reviews, CollegeHumor and Vimeo. One handy feature of this extension is that it can download clips in different formats (MP4, AVI, FLV or 3GP), and in the case of YouTube, different resolutions.

If you use Chrome, you can use a similar extension called YouTube Downloader. You won't find this tool in the Chrome store because Google doesn't condone people downloading YouTube videos. Instead, get it from the link listed below. Once this extension is installed, it creates a button that shows up under the videos on YouTube. Clicking on the button downloads the video for you.


Android malware is becoming more like Windows or Mac malware; in other words, more dangerous to users. One of the latest, a Trojan application called Obad.a, offers capabilities that rival many types of malware currently targeting Windows or Mac OS X systems, say experts.

For starters, the new malware creates an attacker-accessible backdoor on infected Android devices, can download and install additional malware, infect nearby devices with the malware -- via Wi-Fi or Bluetooth -- and receive further instructions from the attacker. For good measure, the malware also can send SMS messages to premium phone numbers, thus generating revenue for attackers or their business associates.

"At a glance, we knew this one was special," said Roman Unuchek, a security researcher at Kaspersky Lab, in a blog post citing the fact that whoever developed the malware not only built in numerous capabilities, but also carefully hid the code to make it difficult to detect or study.

"Malware writers typically try to make the codes in their creations as complicated as possible, to make life more difficult for anti-malware experts. However, it is rare to see concealment as advanced as Odad.a's in mobile malware," Unuchek said. That concealment extends to the Android user experience, as the application malware works in background mode and has no interface.

Although the malware is somewhat rare, it's reportedly being distributed in a typical way: most likely disguised as a legitimate app via "alternative app stores and fishy websites," reported Android Police.

Whoever built the malware took advantage of three different flaws in the Android operating system, or related software, to make the malware more difficult to detect or eradicate. For example, the attackers used a vulnerability in the dex2jar software -- often used by malware analysts to convert Android application package (APK) files into Java Archive (JAR) format for easier analysis -- that prevents the APK file from being successfully converted.
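
By way of illustration, an analyst's first step is often exactly that conversion, run over the sample before decompiling it. Here is a rough sketch of the step such a sample would break; the file name is hypothetical, and the -o output flag is assumed from dex2jar's usual command-line usage:

```python
import subprocess

# Hypothetical sample name; d2j-dex2jar ships with the dex2jar toolkit.
apk = "suspicious_sample.apk"

try:
    # Convert the APK's Dalvik bytecode to a JAR for inspection
    # with standard Java decompilers.
    subprocess.run(["d2j-dex2jar", apk, "-o", "sample.jar"], check=True)
except subprocess.CalledProcessError:
    # A sample exploiting the dex2jar flaw described above would make
    # this conversion fail, stalling a common analysis workflow.
    print("Conversion failed - possible anti-analysis trick")
```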

Attackers also discovered a vulnerability in the AndroidManifest.xml file specification, which provides essential information about every application to the Android operating system. Using this vulnerability, attackers were able to give the malware a file description that can't be automatically parsed by analysis tools, but which is still processed correctly by the Android operating system.

Finally, the malware's developers "also used yet another previously unknown error in the Android operating system," said Unuchek, which results in the malware being granted "extended Device Administrator privileges without appearing on the list of applications which have such privileges." From a user-interface standpoint, it also means that once the malware infects the device, a user can't revoke those privileges or even delete the application through the operating system.

Using these privileges, the malware can disable access to the device's screen for up to 10 seconds, which is likely used to conceal bad behavior, because it "typically happens after the device is connected to a free Wi-Fi network or Bluetooth is activated," said Unuchek. "With a connection established, the Trojan can copy itself and other malicious applications to other devices located nearby."

"Backdoor.AndroidOS.Obad.a looks closer to Windows malware than to other Android Trojans, in terms of its complexity and the number of unpublished vulnerabilities it exploits," Unuchek said. "This means that the complexity of Android malware programs is growing rapidly alongside their numbers."

Looking beyond Obad.a, the volume of malware that targets Android devices continues to increase. "Our count of mobile malware samples, just about exclusively for the Android OS, continues to skyrocket," said a threat report released last month by security firm McAfee, which analyzes the first three months of 2013. "Almost 30% of all mobile malware [ever recorded] appeared this quarter," it said. "Malicious spyware and targeted attacks highlighted the latest assaults on mobile phones."

Until last year, the majority of mobile malware attacks targeted users in Russia and China. But that's changing, according to McAfee's study. In recent months, for example, banking customers in Australia, Italy and Thailand were targeted with malware known as FKsite that purported to be secure online banking software. "Instead it forwards mobile transaction authorization numbers (mTANs) to attackers," said the report, referring to the one-time codes generated by some banks, which are sent via SMS to a subscriber's phone and must be used to authorize unusual or high-value transactions. Of course, such malware isn't new; the Zeus variant known as Zitmo, which debuted in 2011, targets mTANs.

Other recently discovered malware includes Smsilence.A, which is disguised as a coupon app for a popular South Korean coffee chain, but which can relay the device's phone number and forward or delete SMS messages. It only infects devices with a phone number beginning with South Korea's country code (+82).

Some mobile malware is even simpler, and recalls the Reveton ransomware scam, which tricks users into paying a fine, supposedly to the FBI, for alleged illegal activity. One Android equivalent is Fakejoboffer, which targets users in India, telling them they've won a prize but must pay a small fee to collect it. Of course, after paying the fee, they receive no prize.

Meanwhile, malware known as Ssucl.a -- a Trojan disguised as a system cleanup utility -- serves as a node in a botnet and can launch phishing attacks to retrieve Google and Dropbox log-in credentials. Closing the gap between malware designed for desktop operating systems and malware designed for mobile devices, Ssucl.a can also plant auto-run infections on any Windows system to which it is connected.


Exponential growth in the use of smart devices has led to significantly increased demand for bandwidth across 84 per cent of organisations surveyed globally, according to new research commissioned by BT and Cisco. More than half (56 per cent) of IT managers have also noticed a resulting performance decline in some applications, which negatively impacts the productivity gains promised by smart devices. Almost half (46 per cent) of workers with Wi-Fi access in their office have experienced delays logging on or accessing an application, while 39 per cent have noticed applications running more slowly than before.

The research, which surveyed attitudes towards workers' use of their own smart devices (laptops, tablets and smartphones) in 13 regions, reveals that 76 per cent believe their organisations need to take further steps to fulfill the potential productivity gains that smart devices offer. Respondents cited increased use of cloud solutions (33 per cent), greater use of specialist software (32 per cent) and greater support for smart device users (32 per cent) as what is needed to seize the opportunity.

Ubiquitous Wi-Fi access over a better network is key to the development of Bring Your Own Device (BYOD), but 45 per cent of employees still don't have wireless access to their corporate networks. Of those workers currently without Wi-Fi access in their organisation, over two thirds (68 per cent) believe it would have a positive impact on their work; for example, it would make them more efficient and productive (31 per cent), help them work more flexibly (30 per cent) and help them stay in touch (26 per cent).

The findings also indicate that network capacity is not the only challenge holding back benefits of BYOD. Despite overwhelming positivity among IT managers – 84 per cent think adopting a BYOD policy confers a competitive advantage – the research also highlights a lack of progress in adopting or articulating a consistent policy across wired, wireless and Virtual Private Network (VPN).

Trust in employees continues to play a large role in whether companies permit BYOD. Just over a quarter (26 per cent) of IT managers think that all workers understand their access requirements or permissions for their mobile devices. This figure has increased from 19 per cent in 2012, pointing to growing confidence. Yet only 26 per cent of employees who use a personal device for work recognise that this presents a risk to company security, suggesting IT managers are nervous with some justification.

Neil Sutton, VP Global Portfolio, BT Global Services, said: "With networks creaking under the demands of smart devices and more than three quarters (76 per cent) of users convinced that their organisation needs to step up to the opportunity, it's clear that enabling BYOD in its many forms is about much more than simply cool devices and a mobile contract. Organisations need to consider elements of device compatibility, security, Wi-Fi, network and application performance, with a focus on driving costs down.

"Behind every great device you need a great performing network. With the right control and connectivity you can deliver a great user experience on any device.  At BT we are working with more and more customers to understand and implement this coming of age of consumerisation and turn it to business advantage, reliably, securely and cost effectively."

Gordon Thomson, Director, Enterprise Networks, EMEAR, Cisco, said: "We implemented a BYOD model internally, starting with mobile phones in 2009, and have managed to lower our costs per employee by 25 per cent. Over the last few years, we have added 82 per cent more devices to our base with 28 per cent more users. Organisations looking to deploy a BYOD program should look at a comprehensive BYOD plan and think beyond just the device and operating system, but about the services delivered to that device, user experience and productivity gains."

Adrian Drury, practice leader, Consumer Impact IT, Ovum said: "The growth in employee smartphone and tablet ownership is changing the ways we work. Implementing a BYOD policy is about enabling employees to work more flexibly, and be more productive.

"Draconian Wi-Fi access limitations or failure to invest in sufficient Wi-Fi coverage is a fast way to ensure a poor employee experience. However, this is not a mandate for open networks. Businesses still need to ensure that network security policies are maintained, and ideally they should take an integrated approach to network access control, device management and application management."

About the research


This research was undertaken by Vanson Bourne for BT Global Services and Cisco in May 2013. 2,200 interviews were carried out with IT decision makers and office workers in medium-sized to enterprise organisations across 13 regions – UK, France, Germany, Spain, Italy, Benelux, Turkey, USA, Brazil, China, India, Singapore and UAE – and in a range of sectors – Fast Moving Consumer Goods (FMCG), finance, logistics, retail, healthcare, energy, pharmaceutical and government.

About BT


BT is one of the world's leading providers of communications services and solutions, serving customers in more than 170 countries.  Its principal activities include the provision of networked IT services globally; local, national and international telecommunications services to its customers for use at home, at work and on the move; broadband and internet products and services and converged fixed/mobile products and services.  BT consists principally of four lines of business: BT Global Services, BT Retail, BT Wholesale and Openreach.

In the year ended 31 March 2013, BT Group's revenue was £18,017m with profit before taxation of £2,501m.

British Telecommunications plc (BT) is a wholly-owned subsidiary of BT Group plc and encompasses virtually all businesses and assets of the BT Group.  BT Group plc is listed on stock exchanges in London and New York.

Saturday 8 June 2013


At the beginning of 2013, Sony made a big splash by showcasing its flagship smartphones, the Xperia Z and Xperia ZL. News reports have since suggested that Sony is working on a phablet, codenamed 'Togari'.



Now a fresh report by VR-Zone suggests that this device is likely to be named the Xperia ZU. Furthermore, Sony has already sent out invitations for an event taking place in Germany on June 25, and the website believes the phablet will be launched there.

As per previous rumours, the Sony Togari is expected to come with a 6.44-inch full-HD display with a pixel density of about 342ppi. Apart from this, it is touted to come with the X-Reality Engine and an upgraded Mobile Bravia Engine 2. The phablet is rumoured to be powered by either a Qualcomm Snapdragon 600 or 800 processor.

Earlier news reports had suggested that the device would come with a 13-megapixel camera, but the latest report suggests it will come with an 8-megapixel shooter. The switch is being attributed to a shortage of camera modules.

Just like the Sony Xperia Z, this phablet is expected to come with IP58 certification, which means it will be another waterproof and dustproof device.

The Sony Xperia ZU, if and when it launches, will be the first phablet from the Japanese company and is expected to compete head-on with the Samsung Galaxy Mega 6.3, Galaxy Note II and Huawei Ascend Mate. Asus, too, has entered the phablet arena with the FonePad Note FHD6, which sports a 6-inch 1080p Super IPS+ display along with a 1.6GHz dual-core Intel Atom Z2560 processor and 2GB of RAM.

Asus's phablet is just 9.4mm thick and comes in 16/ 32/ 64GB storage options.


The CEO of the Wi-Fi Alliance talks about Passpoint, 802.11ac, 802.11ad (WiGig), cognitive radio, and cellular/Wi-Fi convergence.

In a world without wires, where everyone has a smartphone and BYOD is the norm, Wi-Fi remains a cornerstone technology. Wi-Fi is continuously evolving, and new innovations promise to make it faster and more pervasive than ever before.

Sitting at the core of the Wi-Fi revolution is the Wi-Fi Alliance, which advances wireless technology and certifies interoperability. In an exclusive video interview with Enterprise Networking Planet, Wi-Fi Alliance CEO Edgar Figueroa detailed some of the key new innovations coming in the wireless landscape.

Passpoint


One of the new efforts that the Wi-Fi Alliance is pushing is Passpoint, an onboarding technology that automatically enables wireless devices to securely associate with an access point without user intervention.

Figueroa explained that Passpoint's authentication system uses technology that most wireless devices already have, such as SIM cards.

"It's something that mimics quite a bit what the cellular community already does to authenticate you to networks as you roam around the world," Figueroa said. "With Wi-Fi being so available everywhere, you should just carry one set of credentials, and those should be honored wherever you go."

The Passpoint solution involves elements of the IEEE 802.11u specification, which defines inter-wireless network roaming. Passpoint can also scan the spectrum for available networks and make an intelligent choice about which network to associate with, based on the user's preferences.
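
As a toy illustration of preference-based selection (this is not the actual Passpoint logic; the fields and weights below are invented for the example):

```python
# Invented example data: networks a device might discover in a scan.
networks = [
    {"ssid": "CoffeeShopOpen", "roaming_partner": False, "signal_dbm": -48},
    {"ssid": "CarrierHotspot", "roaming_partner": True, "signal_dbm": -66},
]

def preference_score(net):
    # Weight a roaming agreement above a modest signal advantage,
    # mimicking "choose the network my credentials work on".
    partner_bonus = 100 if net["roaming_partner"] else 0
    return partner_bonus + net["signal_dbm"]  # dBm is negative; closer to 0 is stronger

best = max(networks, key=preference_score)
print("Would associate with:", best["ssid"])
```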

Passpoint deployment will not require forklift hardware upgrades. Figueroa explained that Passpoint is a software solution that can be enabled with firmware updates for consumer devices and Wi-Fi access points.

802.11ac


Bandwidth has always been a milestone touchpoint in Wi-Fi's evolution. The next big marker for Wi-Fi bandwidth is now emerging with the 802.11ac standard, which promises wireless speeds of over 1 Gigabit per second.

The Wi-Fi Alliance will take its traditional role with 802.11ac deployment by helping to establish interoperability and certifications.

A number of vendors in the market have already released pre-standard 802.11ac devices, which could potentially be an issue. The same situation occurred with the rollout of 802.11n, when early pre-standard implementations caused a degree of fragmentation and lack of interoperability.

Figueroa noted that the Wi-Fi Alliance learned lessons from the rollout of 802.11n, lessons the Alliance will apply to 802.11ac.

"We were years away from having the [802.11n] standard when early products started emerging," Figueroa said. "We learned from that, and this time we will have a program that is delivered in advance of the specification being done in IEEE."

Pre-standard implementations of 802.11ac have already been announced by Cisco and Aruba, among many other vendors. Figueroa expects that once the Wi-Fi Alliance 802.11ac certification program is launched and available, the vendors that have early implementations will come back and certify their products.

Carrier Wi-Fi Convergence


Today, wireless users need to navigate on their own between cellular and Wi-Fi networks. A movement underway will change that in the future.

"There are early implementations by some service providers that do seamless handover between Wi-Fi and cellular," Figueroa said. "There is a lot of interest in this area."

Figueroa expects that in the not-too-distant future, intelligent methods will provide seamless cellular/Wi-Fi convergence that is invisible to users.

WiGig


In addition to the use of Wi-Fi for medium-range applications, there is now an effort to leverage it for short-range use.

WiGig, also known as 802.11ad, is a 60 GHz technology to enable wireless docking for consumers, as well as possible data center usage.

"60 GHz is a short range technology that is intended for in-room applications only," Figueroa explained.

Vision


Figueroa has a grand vision for Wi-Fi that's not just about replacing existing wires.

"Our vision is seamless connectivity," Figueroa said. "We want to see Wi-Fi adopted broadly, and beyond that, to take on the challenges of seamless connectivity."

One year after IPv6 Launch Day, what's happening with IPv6 adoption?

One year ago today, the Internet Society spearheaded an event dubbed World IPv6 Launch Day. One year later, has IPv6 adoption actually increased?

IPv6 Launch Day aimed to accelerate global IPv6 adoption by encouraging sites around the world to turn IPv6 on and leave it on. The Launch Day event was the Internet Society's second yearly event for IPv6, following IPv6 Day in 2011. 2013 sees no IPv6 Day event, but that doesn't mean the job of moving the world to IPv6 is done.

Phil Roberts, technology program manager at the Internet Society, told Enterprise Networking Planet that the Internet Society never planned to always hold an annual IPv6 Day event. That doesn't mean that the message is one that should stop being told, though.

"It's important for people to see that IPv6 is continuing to be deployed," Roberts said. "It's important to continue to beat the drum and let people know that IPv6 is out there and people are using it."

Roberts suggested that whenever a big network operator or site turns up its IPv6 usage, that should be noted and reported. However, he no longer thinks that it's necessary to have a single day event to evangelize IPv6 adoption.

"Events help, but they help in the context of needing to get something done," Roberts said.

The IPv6 Day event in 2011 aimed to give websites a reason and validation to try IPv6. By having a large number of sites participate, the idea was that if something went wrong, it would be noticed. The launch day event in 2012, on the other hand, marked a push to turn IPv6 on and keep it on. Roberts sees no such concrete motivation for doing a followup event at this point.

IPv6 in 2013


The big takeaway for Roberts now is the fact that one year after IPv6 Launch Day, there are now more networks and more traffic on IPv6.

The need for IPv6 continues to increase as more devices and people get onto the Internet.

The current IPv4 address space is a 32-bit system (about 4.3 billion addresses) that has already been exhausted for net new allocations. In contrast, IPv6 provides a 128-bit address space (roughly 3.4 x 10^38 addresses).

IPv4 address space was officially depleted in February of 2011, though that doesn't mean IPv4 is dead. The exhaustion simply means that Regional Internet Registries (RIRs) can no longer get any net new IPv4 address space from IANA (Internet Assigned Numbers Authority). IPv6 and IPv4 address space will coexist for many years to come.

The volume of traffic going over IPv6 in 2013 is still small, but it is growing. Google is currently reporting that 1.27 percent of its traffic comes in over IPv6.

Cisco is currently forecasting that for 2013, 3.1 percent of total IP traffic will be IPv6, up from 1.3 percent at the end of 2012. By the end of 2017, 23.9 percent of IP traffic will be IPv6.

Chicken and Egg


One of IPv6's challenges involves getting more content and networks actually running IPv6.

According to statistics posted by the Internet Society, as of today, only 12 percent of the Alexa Top 1000 websites are accessible via IPv6.
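
One crude way to test that kind of accessibility yourself is to check whether a site publishes AAAA (IPv6) DNS records. Here is a quick sketch using Python's standard library; the hostnames are examples only:

```python
import socket

def has_ipv6_address(host):
    """Return True if the host resolves to at least one IPv6 address."""
    try:
        return bool(socket.getaddrinfo(host, 80, socket.AF_INET6))
    except socket.gaierror:
        return False  # no AAAA record (or the name doesn't resolve)

for host in ["www.google.com", "example.com"]:
    status = "IPv6-capable" if has_ipv6_address(host) else "no AAAA record"
    print(host, "->", status)
```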

From a network operator perspective, Roberts noted that a number of new IPv6 networks have shown up over the last year.

"The good news is that since 2012, the number of IPv6 networks has increased quite a bit, and there are deployments from major networks around the world," Roberts said.

In particular, Roberts pointed to German telecom giant Deutsche Telekom, which in 2012 did not have a measurable IPv6 deployment. In 2013, Deutsche Telekom ranks in the top ten worldwide for IPv6 usage.

Other operators have also experienced growth in deploying IPv6 capabilities over the last year, with AT&T growing from 4 to 9 percent and Verizon Wireless growing from 7 to 31 percent.

"The good news is that almost everywhere we look, IPv6 is increasing," Roberts said. "It seems to be me that it's now at the groundswell stage, and it all looks like everything is up and to the right."

Enterprise Adoption


While carriers and some major websites are moving to IPv6, enterprise adoption is following at a slower pace.

In 2012, Roberts told Enterprise Networking Planet that the Internet Society never considered enterprises to be on the path to early adoption of IPv6. It's a view he still holds, though the landscape is beginning to change, thanks in part to the cloud.

"One of the things that is interesting in enterprises now, and a place where we could—or should—see IPv6 show up, is in all the emphasis in moving to the cloud," Roberts said. "There are a lot of cloud providers that have IPv6, and I'd really love to see enterprises making use of that."

Friday 7 June 2013


Don't discount the importance of hardware in a software defined network. Switches in particular remain crucial to network functionality.

Many within the networking space now surmise that as virtual or software-defined networking takes hold of enterprise infrastructure, hardware will diminish to mere interchangeable components of network architecture. This wouldn't necessarily be a bad thing, since software is more flexible and fungible than hardware, but it isn't going to happen. Switches, routers, and other devices aren't going to become generic boxes quietly humming away, only to be pulled and discarded when they inevitably fail. On the contrary, even if most of the actual network functionality moves to software, crucial decisions will still remain regarding the kinds of devices to be deployed and how they will be used.

In switching, for example, Dell'Oro Group, a networking and telecommunications market research firm, sees a number of significant forces affecting the market, forces that will combine to produce a surge in deployment activity followed by a steady drop-off. On the upside, network consolidation and the increasing popularity of cloud services will drive a need in the data center for substantial network overhauls to improve throughput and flexibility. Meanwhile, the increased use of mobile devices will likely cause a run on wireless LAN (WLAN) technology, which may ultimately supplant traditional switching and routing as the preferred network technology.

But even if the trusty switch can count on a long development path ahead, the top switch vendors might face a struggle. When it comes to switching and network capacity, nobody tops large web-facing enterprises like Facebook and Google. These firms, however, increasingly deploy their own turnkey hardware platforms to connect their massive data environments. Facebook has gone so far as to release its switch architecture to the public under the company's Open Compute Project, which recently announced plans to change the networking world as we know it. The Open Compute Project aims to provide an optimized, yet highly configurable, platform for high-speed, large-volume environments, something Facebook says it hasn't found in off-the-shelf devices.

Nevertheless, development of state-of-the-art switching platforms continues unabated. In fact, the ever-increasing need for density and dynamic performance offers opportunities for small networking firms to break the stranglehold of top-tier vendors like Cisco and HP. Taiwan's RubyTech, for example, is turning to advanced SoC designs by Vitesse Semiconductor for a new range of SMB/SME devices aimed at bringing carrier-class performance to the enterprise. The E-Stax-III platform, now part of RubyTech's GS-2328X and GS-2352X Stackable Layer-2+ Managed Ethernet Switch, provides less than 10 ms failover, as well as scalability to 800 Ethernet ports and a single-point management system for improved control and configuration. Applications range from videoconferencing and collaboration to VoIP and cloud-based SaaS/PaaS services.

And while large data centers are likely to embrace SDN in short order, that in itself will drive demand for large switching platforms. Huawei just introduced a new core switch, the CE12816, that pushes through a whopping 64 Tbps across more than 1,500 10 GbE ports. The system supports large virtual and cloud environments (CE stands for CloudEngine) using the company's Cluster Switch System (CSS) and Virtual System (VS) features to enable dynamic pooling of logical switches. It also supports TRILL-based Layer 2 networking on up to 500 nodes for rapid migration and flexible deployment of virtual machines.

Regardless of the type of switch or the manner in which the network uses it, one thing is certain: network technology across the board will have to accommodate data environments of a complexity that could not have been imagined a few short years ago. Most of the magic will happen in software, but it's the hardware that will set the stage.

Workers want to use their preferred file hosting services and productivity applications. How do you keep them happy while keeping your data and network secure?

Slowly but surely, bringing your own device to work is becoming mainstream. Though security concerns remain, Gartner estimated this month that by 2017, half of employers will stop providing employees with machines.

BYOD is a widely acknowledged trend. Not so widely acknowledged, but equally important to productivity and data and network security, is its lesser-known byproduct, BYOS.

BYOS as a growing force in the enterprise


Whether you expand BYOS to Bring Your Own Service or Bring Your Own Software, the acronym’s central meaning remains the same: many workers want to share and access company data without using enterprise software. To give just one example, over 2 million businesses use Dropbox, and those business users save over 600 million files on the service every workweek, according to a Dropbox spokesperson.

Workers are getting accustomed to using their own laptops and phones; it’s not surprising that they want to use their preferred file hosting services, social networks, and productivity applications as well. Having a choice is especially important to Millennials, who have been using consumer software for the majority of their lives. In the 2011 Cisco Connected World Technology Report, 56 percent of college students said that if a potential employer banned social media access, they would either find a way to get around the ban or outright reject a job offer. And that's just social media access.

It’s impossible to predict the future, but the evidence suggests that BYOS won’t be a short-lived trend. A future where employees regularly use non-enterprise services and applications could be desirable—if IT professionals navigate the security challenges with caution.

Establish clear policies


The Cheshire Cat from Alice in Wonderland said it best: if you don’t know where you’re going, any road will take you there. It’s difficult to choose the right tools for securely using non-enterprise services if you don’t have well-defined policies.

First, policymakers must identify the most sensitive data in the enterprise. The criteria for highly sensitive data vary by organization, but there are three main types: data that would give competitors a significant edge, private customer data, and data that requires protection to comply with industry-specific laws. This data should be kept off non-enterprise services entirely, or at least encrypted at the file level.
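
As a sketch of what file-level encryption can look like in practice, here is a minimal example using the third-party cryptography package's Fernet recipe (authenticated symmetric encryption). The filename is hypothetical, and real deployments also need key management, which is omitted here:

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, fetched from a key vault, not generated ad hoc
fernet = Fernet(key)

# Encrypt the file's contents before they go anywhere near a
# non-enterprise service.
with open("quarterly_forecast.xlsx", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("quarterly_forecast.xlsx.enc", "wb") as f:
    f.write(ciphertext)
```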

When it comes to less sensitive data, companies should decide which non-enterprise services and applications are secure enough and which ones pose too many risks. Like a good diet plan, this method gives employees options—but only healthy ones. To keep employees involved, companies could survey them to determine which services and applications are already popular.

Check non-enterprise services for security holes


When deciding which services to accept, it’s wise to go beyond reading reviews and ask for the results of a vulnerability scan.

Most companies that make popular file hosting services and productivity applications do a good job of describing their security features to potential customers. However, asking for concrete proof eliminates a certain amount of doubt. It also puts emphasis on the fact that your company expects network security to be taken very seriously.

In all likelihood, many companies have up-to-date reports at the ready. Even if they don't, they can easily scan their code for vulnerabilities using services like Hewlett-Packard's Fortify On Demand, which offers free, standard and premium scans. The premium scan supports more than 20 programming languages and looks for vulnerabilities such as cross-site scripting and SQL injection.

Protect sensitive data


Even after choosing non-enterprise services carefully, IT administrators should select the right tools to prevent highly sensitive data from winding up on one of them.

Both network- and endpoint-based Data Loss Prevention (DLP) solutions stop employees from sharing sensitive data, whether the attempts take place through non-enterprise email accounts, social networks, or file hosting services. Network-based solutions detect sensitive data as it gets sent out of the enterprise network; endpoint-based solutions examine user activity that takes place on laptops, tablets, servers, and so forth. Some solutions, such as RSA Data Loss Prevention, notify users when they’ve attempted to violate a company policy, thus encouraging awareness of the rules.
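
Under the hood, such tools pattern-match content before it leaves the endpoint or the network. The following toy check (illustrative only, not any vendor's product) gives the flavor of the idea:

```python
import re

# Very rough pattern for payment-card-like digit runs; real DLP
# products use far more sophisticated detection (checksums, context).
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def looks_sensitive(text):
    return bool(CARD_PATTERN.search(text))

outbound = "Customer card: 4111 1111 1111 1111, exp 09/15"
if looks_sensitive(outbound):
    print("Blocked: possible card number in outbound data")
```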

As effective as DLP solutions can be, they don’t cover everything. It’s important to ensure that only the right people can access sensitive data in the first place. Varonis’s Data Governance Suite, for example, includes two components: DatAdvantage and DataPrivilege. The former records user activity, shows which users can access which folders, and makes recommendations upon perceiving superfluous access. The latter allows data owners to review permissions and grant or revoke them where necessary. (It should be noted that on May 22, Varonis announced the launch of DatAnywhere, a Dropbox-like service for the enterprise.)

Embracing the BYOS trend might still seem risky, but it's important to remember that consumer software companies aren't the enemy. The real threat comes from those who attack enterprise networks and the data they carry, and network and data security policymakers must keep that threat in mind, treating consumer software and service providers as potential allies.

“We’re constantly working to improve systems, policies, and procedures that protect our users’ data,” the Dropbox spokesperson I spoke to wrote in an email, noting this month’s new development: single sign-on (SSO) for Dropbox for Business customers.

They have a vested interest in security, too.

How does your organization handle BYOS security? Let us know your thoughts in the comments.

In his keynote at the Design Automation Conference, Samsung president Dr. NamSung Woo spells out where the challenges lie for smarter mobile devices.

The smartphone of the future is going to require a "revolution" in thinking about technical design, and "not enough people are prepared" for the challenges coming in three years, the president of Samsung Electronics said in his keynote at the Design Automation Conference (DAC) here today.

Based on the chip and electronic design automation (EDA) in existence today, the industry has done well with smartphones so far and will roll along for the next two or three years, said Dr. NamSung ("Stephen") Woo, president of Samsung Electronics and general manager of the System LSI business. But the smartphone of the future is going to be one that has a flexible screen that can be folded like a handkerchief, plus better and new applications and high-end cameras that demand a profound design overhaul.

Putting forward a vision of the future smartphone, Woo suggested the broader semiconductor industry and the electronic-design automation industry, the audience he was addressing, and Samsung, too, need to do things a lot differently than they do today.

"We still have to use over a dozen chips to build a smartphone," said Woo. But smartphones are being asked to perform more services, such as voice command, location-based services, Web browsing and applications like gaming, as display-screen resolution and camera sensors improve dramatically. The basic engineering design challenge that relates to space, battery, low-power and heat level is soon going to hit a sort of crunch time.

That's because of the advent of "flexible display," a bendable screen that can fold up like a handkerchief and be unfolded again for use, Woo said.

Flexible screens, an innovation first shown by Samsung, are clearly a "disruptive technology," said Woo. The technology not only "requires a new look" for a smartphone, it calls for a "system on a chip" that doesn't really exist today, he pointed out. "Still what we have is a system on a dozen chips, not a system on a chip."

Meeting the challenges of the smartphone of the future will require new technologies three to five years from now, but today the industry is not yet pointed toward them. "We are not there. We are walking one way that's good so far," said Woo, but Samsung, like others, "may have to shift our direction to address this challenge."