Sunday 16 June 2013


Google Inc has launched a small network of balloons over the Southern Hemisphere in an experiment it hopes could bring reliable Internet access to the world's most remote regions, the company said late Friday.

The pilot program, Project Loon, took off this month from New Zealand's South Island, using solar-powered, high-altitude balloons that ride the wind about 12.5 miles (20 kilometers) - twice as high as airplanes - above the ground, Google said.

Like the Internet search engine for which Google is best known, Project Loon uses algorithms to determine where the balloons need to go, then moves them into winds blowing in the desired direction, the company said.

By moving with the wind, the balloons form a network of airborne hot spots that can deliver Internet access over a broad area at speeds comparable to 3G using open radio frequency bands, Google said.

To connect to the balloon network, a special Internet antenna is attached to buildings below.

The Mountain View, Calif.-based company announced the project on its official blog and on its website, www.google.com/loon/.

The 30 balloons deployed in New Zealand this month will beam Internet to a small group of pilot testers and be used to refine the technology and shape the next phase of Project Loon, Google said.

Google did not say what it was spending on the pilot project or how much a global network of balloons might cost.

Google has also developed self-driving vehicles, which the company says could significantly increase driving safety.

Those vehicles are beginning to gain support from lawmakers in places like California, where a bill legalizing their operation on state roads was signed into law last year by Governor Jerry Brown.

Project Loon is a research and development project being developed by Google X with the mission of providing Internet access to rural and remote areas using high-altitude balloons placed in the stratosphere at an altitude of about 20 km (12 mi) to create an aerial wireless network with up to 3G-like speeds. Using wind data from the National Oceanic and Atmospheric Administration (NOAA), the balloons are maneuvered by adjusting their altitude to float in a wind layer with the desired speed and direction. People connect to the balloon network using a special Internet antenna attached to their building. The signal bounces from balloon to balloon, then to the global Internet on Earth. The balloon system is also expected to improve communication in affected regions during natural disasters. Raven Aerostar, a company that makes weather balloons for NASA, provides the high-altitude balloons used in the project. Key people involved in the project include Rich DeVaul, chief technical architect, who is also an expert on wearable technology; and Mike Cassidy, a project leader.
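The altitude-steering idea described above can be sketched in a few lines. This is an illustrative toy model only: the layer data, thresholds and function names are invented for illustration, and the real Project Loon control system is not public.

```python
def angular_difference(a, b):
    """Smallest absolute difference between two compass bearings, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def pick_wind_layer(layers, desired_bearing):
    """Choose the wind layer whose direction best matches the desired bearing.

    `layers` is a list of (altitude_km, bearing_deg, speed_m_s) tuples,
    e.g. derived from NOAA forecast data. The balloon would then adjust its
    altitude to float in the chosen layer.
    """
    return min(layers, key=lambda layer: angular_difference(layer[1], desired_bearing))

# Example: three hypothetical stratospheric wind layers.
layers = [(18.0, 90.0, 8.0), (20.0, 180.0, 12.0), (22.0, 270.0, 5.0)]
altitude, bearing, speed = pick_wind_layer(layers, desired_bearing=185.0)
```

In this made-up example the balloon would climb or descend to the 20 km layer, whose 180-degree wind is closest to the desired 185-degree bearing.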

Development

The project began development in 2011 with a series of trial runs in California's Central Valley and was officially announced as a Google X project on June 14, 2013.

Pilot test

On June 16, 2013, official development on Project Loon began with a pilot experiment in which about 30 balloons were launched from the Tekapo area on New Zealand's South Island, in coordination with the Civil Aviation Authority of New Zealand. Fifty pilot testers located around Christchurch and the Canterbury Region will test whether they can connect to the aerial network using the special Internet antennas.

Equipment

The balloon envelopes are made of polyethylene plastic about 0.076 mm (0.0030 in) thick, and measure 15 m (49 ft) across and 12 m (39 ft) tall when fully inflated. A small box containing the balloon's electronic equipment hangs underneath the inflated envelope. This box contains circuit boards that control the system, radio antennae to communicate with other balloons and with Internet antennae on the ground, and batteries to store solar power so the balloons can operate during the night. Each balloon's electronics are powered by an array of solar panels that sit between the envelope and the hardware. In full sun, these panels produce 100 watts of power, sufficient to keep the unit running while also charging a battery for use at night. A parachute attached to the top of the envelope allows for a controlled descent and landing when a balloon is ready to be taken out of service.
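A back-of-the-envelope check shows why 100 W of solar output can sustain round-the-clock operation. The 100 W figure comes from the text; the daylight hours and the unit's average draw below are invented assumptions for illustration.

```python
SOLAR_OUTPUT_W = 100.0   # panel output in full sun (from the article)
DAYLIGHT_HOURS = 10.0    # assumed hours of usable sun per day
NIGHT_HOURS = 14.0       # remaining hours running from the battery
AVG_LOAD_W = 30.0        # assumed average draw of the electronics box

# Energy harvested during the day vs. energy consumed over 24 hours.
energy_generated_wh = SOLAR_OUTPUT_W * DAYLIGHT_HOURS   # 1000 Wh
energy_needed_per_day_wh = AVG_LOAD_W * 24.0            # 720 Wh

# Battery capacity needed to cover the night-time load.
battery_capacity_needed_wh = AVG_LOAD_W * NIGHT_HOURS   # 420 Wh

surplus_wh = energy_generated_wh - energy_needed_per_day_wh  # daily margin
```

Under these assumed numbers the panels harvest 1000 Wh a day against a 720 Wh daily load, leaving a 280 Wh margin to top up a roughly 420 Wh battery for the night.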

The special ground stations are able to connect to the balloon-powered Internet when the balloons are in a 20 km (12 mi) radius.

Saturday 15 June 2013



Google announced on Thursday the retirement of a four-year-old plug-in designed to let users of older versions of Internet Explorer (IE) run Chrome's browser engine, declaring its mission accomplished.

Chrome Frame joins numerous other discontinued Google projects, including much higher-profile departures, like Google Reader earlier this year. Support and updates for Frame will end in January 2014.

Google portrayed Frame's retirement as a positive, saying it had done its job.

"Today, most people are using modern browsers that support the majority of the latest Web technologies," said Chrome engineer Robert Shield in a post to the Chromium blog yesterday. "Better yet, the usage of legacy browsers is declining significantly and newer browsers stay up to date automatically, which means the leading edge has become mainstream."

Google launched Frame in September 2009, a year after the debut of the Chrome browser, casting it as a way to instantly boost the then-slow JavaScript speed of IE, and as an answer to the conundrum facing Web developers when designing sites and Web applications that relied on Internet standards IE didn't then support, such as HTML 5.

Chrome Frame runs in IE6, IE7 and IE8; since its launch, Microsoft has released both IE9 (2011) and IE10 (2012).

When Google shipped Frame nearly four years ago, IE6 accounted for the largest share of all copies of Internet Explorer: 35 percent. IE7 and IE8 were third and second, respectively, with shares of 27 percent and 31 percent.

As Shield noted, the mix has changed. As of May, IE6's share of all copies of the browser was 11 percent, while IE7's and IE8's were 3 percent and 41 percent. The newest browsers, IE9 and IE10, accounted for 27 percent and 17 percent.

Shield urged enterprises that now rely on Frame to switch to Chrome for Business, which offers something called "Legacy Browser Support," a Chrome add-in that automatically launches another browser -- an older version of IE, for example -- when IT-designated URLs are clicked. An optional add-on for IE then automatically switches users back to Chrome.
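The routing idea behind "Legacy Browser Support" can be sketched simply: IT designates URL patterns that must open in a legacy browser, and everything else stays in the modern one. The pattern list and function names below are invented for illustration; the real add-in is configured through Chrome's enterprise policies, not application code like this.

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

# Hypothetical IT-designated hosts that require the legacy browser.
LEGACY_PATTERNS = ["*.intranet.example.com", "legacy-app.example.com"]

def browser_for(url):
    """Return which browser should handle `url` under the designated list."""
    host = urlparse(url).hostname or ""
    if any(fnmatch(host, pattern) for pattern in LEGACY_PATTERNS):
        return "legacy-ie"
    return "chrome"
```

With this sketch, a link to an old intranet app would be handed off to the legacy browser, while ordinary Web links stay in Chrome, mirroring the hand-off behavior the article describes.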

Others have come up with similar solutions for companies still tied to older versions of IE. Browsium, a Washington State-based vendor, offers Catalyst, which acts as a traffic cop to open links in specified browsers, as well as its Ione add-on, formerly called Unibrows, which lets enterprises run legacy Web apps designed for IE6 in newer versions of Internet Explorer.

But while Shield characterized the retirement of Chrome Frame as a win -- proof of "just how far the Web has come," he said Thursday -- several Web app developers begged to differ.

"This is disappointing news," said Kathryn Fraser in a comment to Shield's post. "While you are correct in terms of the majority of personal users and their browsers, this doesn't translate to many of the health care and education enterprises that our apps are built for. We were only able to recently retire support for IE6 because of the availability of Chrome Frame."

"Chrome Frame was the only way to offer support for our modern Web app to customers who refuse to upgrade from IE7 and IE8," commented Aaron Smith. "It's very disappointing, but understandable, that Google is tossing this to the wayside. It sure leaves me in a bind."

Austin Fatheree was a lot more blunt. "Unreal. You just undid years of my work," he said in a comment late Thursday.

In an FAQ for developers, Google said that while it will stop updating Frame next January, the plug-in will continue to work after that date. "All existing Chrome Frame installs will continue to work as expected," the FAQ said.

Could Samsung's "next big thing" come from the heart of the Big Apple or Silicon Valley?

The smartphone and consumer electronics maker is close to launching an incubator space for startups that are developing software and services for phones, tablet computers and televisions.

With locations on Palo Alto's University Avenue and in New York's Chelsea neighborhood, the Samsung Accelerator is on the verge of opening its doors, and the company is already looking for its first round of early stage companies.

"We're looking for bright ideas to build the next next big thing," says a sign that went up this week outside the accelerator's space in Palo Alto.

"We will bring together the people, power and resources to leverage the world's largest device ecosystem and launch product on a massive scale," the sign says. "And we're just getting started. Want to join us? You bring the product vision, we'll bring the rest."

The initial focus of Samsung's investment will be on startups working on communications and productivity software and services, but that could expand later.

Startup accelerators typically provide investment, office space, technical support and other resources in return for a slice of the company and some rights to its product or service.

Samsung stands to gain unique access to software and services that could help differentiate it from competitors making Android smartphones. The Android phone market is dominated by companies that spend tens or hundreds of millions of dollars on hardware development but comparatively little on exclusive software.

The company hasn't said much about the accelerator, and on Friday it declined to offer any more details on its plans.

A posting on the @samsungaccel Twitter account said "a sizable number" of applications have been received so far. It also posted a picture from inside the Palo Alto accelerator showing several desks and chairs apparently waiting for occupants.

In Palo Alto, the accelerator will be housed in the historic Varsity Theater, a well-known landmark on the city's leafy University Avenue not far from where companies such as Facebook, Pinterest and Hewlett-Packard got their starts.

The building served as a theater from 1927 until 1994, when it became a Borders bookstore. It's been vacant since Borders went out of business in September 2011 but still carries the bookstore's name. The Samsung Accelerator will be housed on the second floor, and a project is under way to convert the first floor to restaurant space.

The Silicon Valley office is due to open on July 11. A starting date for the New York office is not known.


"A small team has been working on a big idea,"

Facebook appears to have a new product up its sleeve that it will unveil next week at its headquarters in Menlo Park, California.

"A small team has been working on a big idea," said a letter sent to reporters on Friday, according to an ABC News report.

A new product will be announced at the event, slated for June 20, according to the report.

No other details were given. Facebook did not immediately respond to a request for comment on the event.

Facebook's latest product announcement came on Wednesday, when the company unveiled hashtags to let people more easily surface content around specific topics on the site.

The introduction of hashtags is just the first of a series of features the social network plans to roll out to "surface some of the interesting discussions people are having about public events, people and topics," Facebook said in that announcement.

However, whether next week's big reveal will have anything to do with surfacing content or searching across the social network in any other way is unknown.

There was no formal media event to mark the availability of hashtags.

Facebook's last big product unveiling to be held at its headquarters was for Home, a suite of software for Android-powered smartphones to put the social network front and center on those devices.

During a briefing with reporters last month, Facebook reported that Home had attracted close to 1 million downloads.

Revelations about the extent of NSA data-gathering programs have sparked a debate about government surveillance. We asked Network Computing readers to weigh in and received a variety of thoughtful responses, including fear of misuse of gathered data, questions about the efficacy of technology to protect against terrorism, and whether the actions of the NSA and other government agencies violate the principles, if not the laws, on which the country was founded.

Ward Thrasher writes:

We are sacrificing our freedoms in the name of perceived security. Much like the failed ability of facial recognition to pick out known security threats in a crowd, this snooping provides government with far, far more data than is necessary to combat terrorism or other national security threats. This snooping has failed to thwart many terrorist attacks since it was put in place, and arguably has been only partially, even tangentially, involved in stopping those which have been reported.

During the Clinton administration, the government decided national security was best handled through the application of technology, rather than deploying human assets to gather intelligence and assess threats.

Bin Laden demonstrated how easily terrorists can avoid the prying eyes (ears) of PRISM. Using the old-school sneaker net permits information to be moved without exposing it to technology-based surveillance. As PRISM's details emerge, the bad guys will simply move off the grid, rotate cell numbers frequently and take other steps to eliminate the (arguable) benefit of the PRISM program.

In exchange for this perceived blanket of security, the government now has access to behavioral patterns of folks who are not accused of criminal wrongdoing. This information can be used in any number of nefarious ways to the detriment of individuals, groups and society. An official promise to not do so is less than reassuring that this information will not be misused.

Not only does PRISM violate the constraints of the Constitution, its acceptance by society as a "necessary evil" places us on a slippery slope of ever more infringement of rights in the name of security. As Franklin is oft quoted, "They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety."

The use of technology is something that must be deployed judiciously when it comes to the essential liberties guaranteed us by our Founding Fathers.

Antonio Leding sees the potential for abuse when technology moves faster than our ability to create sensible policies. He writes:

This is the exact event the Framers worried about. Part of the genius of America was that it started with the simple belief that our lives - both individually and collectively - are ours to manage as we see fit. The Framers gave us a set of starting blocks to begin from and if, over time, we decided that fundamental rights should be abridged, then the Constitution is supposed to make that process extremely difficult and force the massive debate that should occur with something so grave as that kind of change.

From a policy and legal perspective, America does not deal with rapid changes very well at all. As technology accelerates [it] enables us to completely short-circuit processes that previously could not be avoided - and these very processes are lynchpins in the American experiment. We all seem to forget an extremely important point--that humans are always flawed and that technology has the unique property by which to magnify the harmful impact of human laziness and flaws.

To those who say PRISM is necessary to protect America, the first question is, who's protecting us from PRISM??? I've searched far and wide and I've yet to find a single instance of a big brother state where those being governed led happy and meaningful lives like we have mostly been accustomed to here in the USA.

Sam Brauer questions the validity of big data analysis as a mechanism for thwarting terrorism.

To begin with, I'm not an IT guy, and I'm no whizz at analyzing big data. I'm a guy who's been part of and hung around the scientific research community for a long time (I help commercialize advanced materials) so perhaps I have a different perspective on the NSA/PRISM debate.

There's an interesting parallel that can be drawn with the debate about analyzing genomes in the biology community. The limitations of the big data approach in biology are becoming clear.

We know for certain that genetic patterns in individuals are different. But if we look at the assumptions that the NSA has made, why on earth would terrorists have different patterns in phone conversations than, say, high school gossip? There's no physical basis, and no information basis that's readily apparent.

How can the NSA know that they have a representative sample of terrorists' call patterns to analyze? If they don't know how many terrorists there are, then whoever they've got in their net might be too small a sample from which to extrapolate meaningful information. Why would the terrorists be in constant touch? Wasn't the point of the supposed Red sleeper networks to NOT communicate with agents so that they can't be identified? Why does the NSA assume that there is a constant flow of communication to terrorists that can be identified?

From my perspective, given the idiotic assumptions (and there are only synonyms that make sense here), this whole program has become an exercise in trying to find signal in the noise. At least when the physicists do it, they have a good hypothesis. The NSA doesn't. You can spend fortunes in time and money looking for signals that aren't there.

At the end of the day, if this is an example of how NSA spends its resources and our tax dollars, I think the organization's initials must stand for No Significant Accomplishments--except for impinging on all of our freedoms.

What are your thoughts about the government's broad data-gathering activities? Are they necessary? Effective? Or do they go too far? Share your comments and join the debate.

Friday 14 June 2013


Xbox One's price, $100 more than PS4, and potentially restrictive DRM hurt the console's outlook

A series of blunders by Microsoft has paved the way for Sony's PlayStation 4 console to capture the top spot in year-end holiday sales.

"In terms of the initial sales of Xbox One and the PS4 I think the hundred dollar price difference will make a difference," said Lewis Ward, research manager for gaming at IDC. "It will probably set up Sony to at least have the opportunity to emerge as the top eighth-generation console sales leader."

When Microsoft announced the price of its new console at a press event Monday there was an audible gasp from the audience of gamers and media. The US$499 price tag is higher than the $375 to $450 range that Ward predicted ahead of E3. Part of the reason for the premium is that the Xbox One comes prepackaged with a Kinect sensor. Sony's PlayStation 4 won't require the additional depth-sensing camera and microphone combination that Kinect offers.

A game developer who was waiting to try Microsoft's Xbox One, but did not want to be identified by name, said he currently owns an Xbox 360, but that, "Microsoft has completely lost me." He said that being forced to buy a Kinect sensor and pay a $100 premium is what turned him off. He said he plans to buy a PlayStation 4 when it becomes available later this year.

"I like the PS4 more, and the whole $100 less helps out a little," said Jonathan Toyoda, another gamer attending E3 who owns the PlayStation 3 and Xbox 360. He said that he'd put the $100 toward buying another game.

"I like the games that come along with the PS4," said Paul Hsu, a gamer at E3. He owns a PlayStation 3 and said he'd consider buying an Xbox One, but would likely stay loyal to Sony.

Another point of contention among gamers is the new DRM (digital rights management) that Microsoft revealed.

"It's a new layer of complexity with what happens with used game discs," Ward said.

Microsoft places much of the control over disc-based games in the hands of publishers. In the policy outlined on its website, Microsoft said, "We designed Xbox One so game publishers can enable you to trade in your games at participating retailers."

Regarding lending games to friends, the policy states, "Xbox One is designed so game publishers can enable you to give your disc-based games to your friends."

Game developer Microsoft Studios said it will enable the reselling and lending of games, but "third party publishers may opt in or out of supporting game resale."

At Sony's press event the crowd erupted into cheers and applause when Jack Tretton, president and CEO of Sony Computer Entertainment America, announced that the company would stick to the status quo allowing gamers to trade, sell, lend or keep disc-based games.

"When a gamer buys a PS4 disc, they have the rights to use that copy of the game," Tretton said on stage. "In addition, PlayStation 4 disc-based games don't need to be connected online to play."

"That will probably leave a little bit more money in gamers' pockets and it will remove the complexity around 'how many versions can I give to my friend,'" Ward said.

The lack of DRM in Sony's plans might thrill gamers, but not necessarily game developers and publishers, he said.

"You could say Microsoft took one for the team," he said. "If they limit the resale then it may force more people to buy a new disc and at the end of the day that's how they make their money."

Sony's aim is to convert current PlayStation 3 owners to PS4 buyers, Ward said. Making inroads in North America and stealing Xbox 360 gamers or prospective Xbox One customers would be "huge," he said.

"If Sony really wants to emerge as the leader in the eighth-generation console race, they're going to have to take away momentum from Microsoft beginning this holiday season in North America."

From 2005 until the end of 2012, 250 million seventh-generation consoles were sold between Sony's PlayStation 3, Microsoft's Xbox 360 and Nintendo's Wii, according to IDC. The Wii was the runaway leader with nearly 100 million units sold, but second and third place were much closer: Microsoft shipped 76 million consoles, while Sony shipped 75 million.

Even when presented with copious email evidence, Apple VP Eddy Cue had little to say about charges of e-book price collusion.

Apple Senior Vice President Eddy Cue offered only short answers in testimony Thursday in federal court when questioned by U.S. Justice Department prosecutors trying to solidify their case that Apple, along with five of the largest book publishers, worked together to illegally set the prices of electronic books for the market.

Apple co-founder Steve Jobs, who was instrumental in the negotiations, cast an even larger shadow across the proceedings. Cue, who is senior vice president of Internet software and services, led the effort in late 2009 through 2010 to get publishers to release their titles on Apple's iBookstore reader for the soon-to-be-launched iPad, and consulted frequently with Jobs.

"Steve and I worked very closely on this," Cue said.

Asked who had decision-making authority regarding e-book pricing, Cue replied that Jobs, who died in October 2011, was the ultimate arbiter of all decisions made at Apple during that time.

The Department of Justice charges that Apple colluded with the book publishers, including Harper Collins, the Penguin Group and Simon & Schuster, to set up a new pricing model, in violation of the Sherman Antitrust Act. The agency has offered a large collection of emails and phone records that strongly suggest that publishing company CEOs, Jobs and Cue worked together to hammer out an agreement. DOJ prosecutors spent the morning Thursday asking Cue to elaborate on his emails with publishing CEOs and Jobs.

According to the Justice Department, book publishers were worried in 2009 that Amazon, then the largest retailer of electronic books, was lowering the perceived wholesale value of the books, due to its strategy of discounting books for US$9.99.

Apple, eager to get into the e-book business with the iPad, used the "Amazon threat" as a calling card for the publishers, according to the DOJ. The company presented the idea of moving the book industry from a wholesale model, where retailers buy books at the wholesale rate and then charge whatever price they want, to an agency model. Under that model, the publisher sets the price and the seller, in this case Apple, gets a fixed percentage. Apple pioneered the agency model in the electronic realm with its app store.

With the agency model, Apple could offer the publishers the ability to set the prices at whatever level they liked, such as $12.99, $14.99 or $16.99. Crucial to this deal, originally, was the stipulation that all the publishers would switch over to the agency model, and that they would switch all the retailers, including Amazon, to this model of book purchasing.
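The economics of the two models described above can be made concrete with a small sketch. The dollar amounts below are made-up examples; the 30 percent commission is the cut the article attributes to Apple's agency model.

```python
def wholesale_split(wholesale_price, retail_price):
    """Wholesale model: publisher gets the wholesale price; the retailer
    keeps (or eats) the difference between that and its own retail price."""
    return wholesale_price, retail_price - wholesale_price

def agency_split(consumer_price, commission=0.30):
    """Agency model: the publisher sets the consumer price and the retailer
    takes a fixed commission off the top."""
    retailer_cut = consumer_price * commission
    return consumer_price - retailer_cut, retailer_cut

# Amazon-style discounting under wholesale: $9.99 retail on a book
# bought at a hypothetical $13.00 wholesale rate.
pub_w, ret_w = wholesale_split(13.00, 9.99)   # retailer sells at a loss
# Agency at a publisher-set $14.99: publisher keeps 70%, retailer 30%.
pub_a, ret_a = agency_split(14.99)
```

The sketch shows why publishers preferred agency pricing: under the wholesale model a retailer can discount below cost and erode perceived value, while under agency the publisher's chosen price stands and the split is fixed.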

Cue, who often worked in between the CEOs and Jobs himself, is the central witness in the trial at the U.S. Southern District Court of New York, with District Judge Denise Cote presiding. Cue confirmed the emails and other documentation that the DOJ had assembled to show this chain of events, but rarely elaborated in his answers to DOJ questions, even when he disagreed with the prosecutors' assertions.

When the prosecutors asked Cue if, for instance, the publisher CEOs were talking among themselves about Apple's proposed move to the agency model, Cue denied knowing anything about their discussions and would not admit even to suspecting that they were talking among themselves. The DOJ pointed to records of more than 100 phone calls the publisher CEOs had made among themselves in one month after Apple proposed the agency model. "No one [at Apple] had knowledge that publisher meetings were happening," he said.

Cue was careful to point out that while Apple was promising that it could sell books at the publishers' preferred rate, it made no promises as to whether these preferred rates would be accepted by other retailers. At one point, Apple offered to move away from the agency model, should the publishers offer it "Most Favored Nation" (MFN) pricing, in which the publishers guaranteed that Apple could sell their e-books, with the 30 percent markup, at the lowest price offered by any retailer. The publishers balked at this proposal, however.
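The MFN clause described above amounts to a price-matching rule: Apple's store price drops to the lowest price any rival retailer offers, and the commission is then taken from that matched price. The function names and figures here are illustrative only.

```python
def mfn_price(publisher_list_price, rival_prices):
    """Match the book's price down to the lowest of the publisher's
    list price and every rival retailer's price."""
    return min([publisher_list_price] + list(rival_prices))

def mfn_commission(publisher_list_price, rival_prices, commission=0.30):
    """Apple's 30 percent cut, taken from the matched (lowest) price."""
    return mfn_price(publisher_list_price, rival_prices) * commission

# Publisher wants $14.99, but one rival discounts to $9.99:
matched = mfn_price(14.99, [12.99, 9.99])   # matched down to 9.99
```

Under such a clause the discounting risk shifts to the publisher: if any retailer undercuts the list price, the publisher's per-copy revenue through Apple falls with it, which helps explain why, per the article, the publishers balked.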

The DOJ also pointed to an email that one student sent to Jobs asking why Apple was raising the price of e-books, which the student summarized as greedy. Jobs responded in an email to the student that it was the publishers, not Apple, who were raising the prices. The DOJ asserted that Apple was only interested in ensuring that it would get a 30 percent cut of the sales, which was in line with what Apple gets with its sales of other content, such as music and television shows.

Cue also downplayed any assertion that Apple was attempting to replicate what Amazon was doing with its Kindle-based e-book selling. The DOJ pointed to one email exchange between Cue and Jobs in which Jobs asked whether Amazon was selling self-published books; Cue replied that it was, and Jobs then decided to consider the idea for Apple, which later included self-publishers. But when the prosecutor asserted from this exchange that Jobs was basing his decisions on what Amazon was doing, Cue responded that the idea was "incorrect," without elaborating.

Cue will take the stand again later Thursday and is expected to be more forthcoming with details when questioned by Apple's chief counsel, Orin Snyder.