Wednesday, April 6, 2011

HTTPS is great: here's why everyone needs to use it (so we can too)

This past weekend we ran a piece from Wired that looked at the issues surrounding unencrypted HTTP traffic and wondered why all websites don't use HTTPS by default. The article puts forth an interesting premise—the wholesale encryption of all HTTP traffic—and lists a number of reasons why this hasn’t happened yet.
The only problem is that many of these issues, mostly technical in nature, are red herrings and can be easily handled with cleverness by an engineering team focused on transmitting its entire application over an encrypted channel. The real issues begin to arise, however, when your application must include assets served by third-party servers that do not support SSL. We’re going to discuss some of the issues raised by the article, correct some of the more specious arguments, explain how an organization can work with the real constraints of HTTPS, and give some insight into what we consider to be the real barriers to wholesale HTTPS encryption of the Web.

Caching

According to the Wired article, one of the things keeping SSL down is that content served over SSL cannot be cached. There's a lot of confusion about this issue, but the takeaway is that it is indeed very possible to make sure that your content transmitted over SSL is cacheable by end users.
There are a few levels of resource caching for most HTTP requests. Client-side browser caching is important and helps browsers avoid making duplicate requests for fresh content. Server-side caching (in proxies, for instance) is important for some users some of the time.
Modern browsers cache secure content like they’re supposed to, even when using SSL. They respect the various cache-control headers servers send and let Web developers minimize HTTP requests for commonly requested content. On Ars Technica, for instance, you pull down a number of stylesheets, scripts, and image files that will sit around in your local cache for a very long time. When you visit a new article, you will only need to load the text and images specific to that piece. Assuming we transmitted content over SSL, our client-side caching would still work gloriously well.
Public proxy caching, though, does not work for SSL traffic. Public proxies can’t even “read” responses as they pass through, which is kind of the point of SSL (think of a proxy as a man in the middle). ISPs in countries like Australia would historically run caching proxies to help minimize latency for commonly requested files. This practice is becoming less common, partially because global content delivery networks (CDNs) mean static files are geographically closer to their users, and partially because users spend time using sites like Facebook where pages are tailored specifically to them.
Web developers dealing with SSL need to understand the various cache-control headers along with how they instruct browsers to keep content around, and make sure to use them properly.
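To make that concrete, here is a rough sketch (not any particular site's real configuration) of a response that sets a long-lived Cache-Control header. The tiny Python WSGI app, the one-year max-age, and the placeholder asset are all just illustrative choices; the point is that the same header does its job whether the response arrives over HTTP or HTTPS.

```python
# Illustrative sketch only: a tiny WSGI app serving a static asset with a
# long-lived Cache-Control header. The browser's private cache honors this
# header regardless of whether the transport was encrypted.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    body = b"/* imagine a stylesheet or script here */"
    headers = [
        ("Content-Type", "text/css"),
        ("Content-Length", str(len(body))),
        # Let the browser keep this for a year. Over SSL, shared proxy caches
        # never see it anyway, so the end user's cache does the work.
        ("Cache-Control", "public, max-age=31536000"),
    ]
    start_response("200 OK", headers)
    return [body]

if __name__ == "__main__":
    make_server("127.0.0.1", 8000, app).serve_forever()
```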

Performance impact

SSL has a performance impact on both ends of a connection—it should be noted that Google has been spending considerable effort to improve the situation, on both the server and the client side, and is trying to gently push the whole community into joining it. The initial negotiations are somewhat intensive, and Web providers need to be aware of the extra horsepower required to encrypt requests.
Performance is only rarely the reason sites don’t use SSL, though. It’s a consideration, but only a minor one for the vast majority of site owners. In the absence of other issues, it really wouldn’t matter for most people.
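If you are curious what the handshake actually costs on your own connection, a quick and admittedly unscientific comparison is easy to run. The sketch below uses example.com purely as a stand-in host, and the absolute numbers will vary wildly with your latency to whatever server you point it at.

```python
# Quick, unscientific comparison of a bare TCP connect versus TCP plus a full
# TLS handshake to the same host. example.com is just a stand-in.
import socket
import ssl
import time

HOST = "example.com"

def tcp_connect_time():
    start = time.perf_counter()
    with socket.create_connection((HOST, 80), timeout=10):
        pass
    return time.perf_counter() - start

def tls_connect_time():
    context = ssl.create_default_context()
    start = time.perf_counter()
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HOST):
            pass
    return time.perf_counter() - start

print(f"plain TCP connect:   {tcp_connect_time() * 1000:.1f} ms")
print(f"TCP + TLS handshake: {tls_connect_time() * 1000:.1f} ms")
```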

Firesheep: how a great UI can make the Internet more secure

Sending and receiving unencrypted data is not generally a big deal, until you need to identify yourself to a website and see customized pages—Facebook, Twitter, and even Google deliver content that is unique to a signed-in user. Most sites of this nature generate a unique session token for your account and transmit it to your browser, where it’s stored as a cookie. On all subsequent requests to that domain, your browser will send the content of all your cookies—including your session cookie, which will uniquely identify you. There are a number of techniques to make this practice more secure.
This works remarkably well… until you’re sitting in your local PeetsArgobucks coffee shop on its open WiFi and your HTTP data is whizzing past other people’s heads—and past their wireless connections. It’s all easy enough to snatch out of the air—passwords, session tokens, credit cards, you name it. You are on a hubbed connection where everyone’s traffic is exposed, unencrypted, to all other users of the same network.
Until recently, most people—and most website operators—gave little thought to the prospect of someone snarfing these unique strings from plain-text connections. Most people really only considered SSL a necessity for online banking or similar “sensitive” applications.
The combination of ubiquitous WiFi and the number of services linked directly and deeply to our personal lives ensured it was only a matter of time before someone came up with a dead-simple way of grabbing your data out of thin air. Don’t be fooled though; this has always been possible. A kid with a Linux laptop running Wireshark (née Ethereal) could have done the exact same thing in 2001—the only thing holding it back was this tool’s arcane UI and übernerd operating parameters.
Firesheep is a user-friendly way of doing exactly this with a simple Firefox plug-in that anyone can operate. Each time you request a Facebook page, your browser sends its token along and your friendly coffee shop skeezebag grabs it with Firesheep. He or she then sends that token back to Facebook, successfully pretending to be you.
SSL prevents this, mostly. That is, when you make a request over SSL, the would-be interloper can’t see what’s in that request. So having an SSL option is great, right? The answer is a definitive “maybe.”
The HTTP spec defines a “Secure” flag for cookies, which instructs the browser to only send that cookie value over SSL. If sites set that flag like they’re supposed to, then yes, SSL is helping you out. Most sites don’t, however, and browsers will happily send the sensitive cookies over unencrypted HTTP. Our hypothetical skeezebag really just needs some way to trick you into opening a normal HTTP URL, maybe by e-mailing you a link to http://yourbank.com/a-picture-of-ponies-and-rainbows.gif so he can sniff the plain-text cookie off your unencrypted HTTP request, or by surreptitiously embedding a JavaScript file via some site’s XSS vulnerability.

Mixed modes

We’ve all experienced “mixed mode” warnings, with some browsers being much more annoying about them than others. "Mixed mode" means you requested a page over SSL, but some of the resources needed to fully render that page are only available over unencrypted HTTP. The page you’re looking at includes a mix of components, including images, scripts, stylesheets, and third-party assets—all of which would need to be delivered via SSL to avoid mixed-mode warnings.
Browsers will rightfully complain about mixed mode, usually by styling the SSL/HTTPS browser icons. Chrome puts a nice red X over the lock icon and strikes through the “https” text. Some browsers are even more annoying and strict. IE pops up a modal warning each time a user requests a secure page with insecure assets. Some organizations have it locked down even further, keeping IE from even requesting insecure content—resulting in badly formatted pages being displayed.
Annoyances aside, mixed mode is a real problem to be avoided. Beyond the warnings themselves, the insecure assets potentially subvert the entire purpose of encrypting traffic in the first place. A nefarious hotspot operator can not only read unencrypted traffic, he can also alter it as it crosses his network. A “secure” page that includes insecure JavaScript makes it relatively easy to hijack session tokens (again). In many cases, JavaScript in a page has access to the same cookie data the server does. The HTTP spec does define an “HttpOnly” flag for cookies that instructs browsers to keep the value out of the DOM. It’s extremely rare to see it set, though.
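For what it’s worth, setting both the Secure and HttpOnly flags takes about one line in most frameworks. A minimal sketch using Python’s standard library (the cookie name and value here are made up):

```python
# Minimal illustration of the Secure and HttpOnly cookie attributes using the
# Python standard library; the session value is made up.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "d41d8cd98f00b204e9800998ecf8427e"
cookie["session"]["secure"] = True    # only ever sent over HTTPS
cookie["session"]["httponly"] = True  # kept away from document.cookie / the DOM

# Prints something like:
#   Set-Cookie: session=d41d8cd98f00b204e9800998ecf8427e; HttpOnly; Secure
print(cookie.output())
```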

Doing SSL right

We’ve looked pretty extensively at serving Ars Technica over HTTPS in the past. Here’s what we’d need to do to make this a reality:
First, we would need to ensure that all third-party assets are served over SSL. All third-party ad providers, their back-end services, analytics tools, and useful widgets we include in the page would need to come over HTTPS. Assuming they even offer it, we would also need to be confident that they’re not letting unencrypted content sneak in. Facebook and Twitter are probably safe (but only as of the past few weeks), and Google Analytics has been fine for quite a while. Our ad network, DoubleClick, is a mixed bag. Most everything served up from the DoubleClick domain will work fine, but DoubleClick occasionally serves up vetted third-party assets (images, analytics code) which may or may not work properly over HTTPS. And even if it “works,” many of the domains this content is served from are delivered by CDNs like Akamai over a branded domain (e.g. the server’s SSL cert is for *.akamai.com, not for s0.mdn.net, which will cause most browsers to balk).
Next, we would need to make sure our sensitive cookies have both the Secure and HttpOnly flags set. Then we would need to find a CDN with SSL abilities. Our CDN works really well over HTTP, just like most other CDNs. We even have a lovely “static.arstechnica.net” branded host. CDNs that do expose HTTPS are rare (Akamai and Amazon’s CloudFront currently support it), and leave you with URLs like “static.arstechnica.net.cdndomain.com”. It would work, but we’d be sad to lose our spiffy host name and our great arrangement with CacheFly.
We would also have to stick another web server in front of Varnish. We use Varnish as a cache, which would still work fine since it would speak plain HTTP over our private network. It can’t encrypt traffic, though, so we’d need another proxy to decrypt data from readers and take Varnish responses and encrypt them.
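In practice that extra proxy would be something battle-tested like nginx, stunnel, or Pound rather than anything home-grown, but the idea is simple enough to sketch. Here is a toy illustration (GET-only, no keep-alive, hypothetical ports and certificate paths) of a TLS terminator sitting between readers and a plain-HTTP cache:

```python
# Toy sketch of a TLS-terminating proxy: decrypt requests from readers, pass
# them to a plain-HTTP cache (Varnish, say) on a private port, and encrypt the
# responses on the way back out. A real deployment would use nginx, stunnel,
# or similar instead.
import http.client
import ssl
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKEND_HOST = "127.0.0.1"  # where the plain-HTTP cache listens
BACKEND_PORT = 6081         # assumed Varnish port

class TLSTerminator(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the decrypted request to the backend over plain HTTP.
        backend = http.client.HTTPConnection(BACKEND_HOST, BACKEND_PORT)
        backend.request("GET", self.path,
                        headers={"Host": self.headers.get("Host", "")})
        response = backend.getresponse()
        body = response.read()

        # Relay the backend's response to the reader over the encrypted socket.
        self.send_response(response.status)
        for name, value in response.getheaders():
            if name.lower() not in ("connection", "transfer-encoding"):
                self.send_header(name, value)
        self.end_headers()
        self.wfile.write(body)
        backend.close()

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("server.crt", "server.key")  # your cert and key

server = HTTPServer(("0.0.0.0", 443), TLSTerminator)
server.socket = context.wrap_socket(server.socket, server_side=True)
server.serve_forever()
```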
Lastly, we would have to find a way to handle the user-embedded-content scenario. Images in comments or forums can come from any domain, and these hosts almost universally support SSL poorly or not at all. One option is to prohibit user-embedded content (which we don’t want to do); another is to proxy it through a separate HTTPS server that we control. GitHub has implemented the latter approach using a tool called camo and Amazon's CloudFront CDN. When every page is rendered, its front-end application rewrites all image links specified with the ‘http://’ protocol to be requested from its camo server, which requests and caches the image and serves it over SSL.
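We haven’t built any of this, but the rewriting half of a camo-style setup is easy to picture. A rough sketch follows, with a made-up proxy hostname and shared secret, and a regex standing in for a real HTML parser:

```python
# Rough sketch of camo-style URL rewriting (not GitHub's actual code). Every
# plain-HTTP image URL in user content gets swapped for a URL on an HTTPS
# proxy we control; an HMAC signature keeps the proxy from being an open relay.
import hashlib
import hmac
import re
from urllib.parse import quote

PROXY_HOST = "https://img-proxy.example.net"  # hypothetical camo-style proxy
SHARED_KEY = b"change-me"                     # shared with the proxy server

def proxied_url(image_url: str) -> str:
    digest = hmac.new(SHARED_KEY, image_url.encode(), hashlib.sha1).hexdigest()
    return f"{PROXY_HOST}/{digest}?url={quote(image_url, safe='')}"

def rewrite_img_tags(html: str) -> str:
    # Swap src="http://..." for the proxied HTTPS equivalent; https:// images
    # are left alone. A real implementation would use an HTML parser.
    return re.sub(
        r'src="(http://[^"]+)"',
        lambda m: f'src="{proxied_url(m.group(1))}"',
        html,
    )

print(rewrite_img_tags('<img src="http://example.com/pony.gif">'))
```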
All of this is very technically doable; the difficulty comes in getting our partners on board. Our hands are unfortunately tied by the limits of their capabilities.

Tuesday, April 5, 2011

37 Android Related Patent Disputes

The number of patent lawsuits related to the Android operating system is unprecedented. Never has an operating system had so many challenges to its intellectual property in such a short time period as the Google operating system has had in the last year.
As Florian Mueller points out, the latest Android lawsuit comes from Microsoft, which has sued Barnes & Noble, Foxconn and Inventec. That's just one to consider among the 36 others that Mueller highlights in an accompanying infographic that well conveys the complexities that come when facing such a barrage of challenges.
[Image: Android litigation infographic]
The lawsuits point to a simmering issue: the questions now being raised about Google's licensing practices.
Mueller writes that this could spell trouble for Google:
In my previous blog post I raised the question of whether Google manages intellectual property matters diligently and respects right holders. For example, the way Google habitually "launders" software published under the GPL (open source license) for its purposes baffles me. The relevant version of the GPL -- GPLv2 -- has been around since the early 1990s, and Google is the first commercial operator to believe that a program can just cut out the copyrightable parts and deprive GPL'd software of all protection. A company that treats copyright this way -- also according to Oracle's allegations -- may be similarly arrogant and reckless when patents are concerned. Google just exposes its entire ecosystem to legal risks. If it goes well, Google reaps most of the rewards. If it doesn't, others will have to pick up most of the bill.
The potential issues are too numerous to consider in their entirety, but do read up on Google's handling of GPL licenses if you get the time. Google spreads technology under these licenses throughout its entire application environment.
This is not the end but the beginning of what we expect will be an even longer trail of lawsuits that relate to the Android operating system. The outcomes will be defining for Google and will test its strength as a technology giant. Barnes & Noble's involvement is just the latest twist.
But it is also in line with the complexities that Google and other technology companies face with their media counterparts over the future of devices such as the Samsung Galaxy and the HTC line of smartphones that Apple is challenging. It's a struggle for relevance in the mobile age. The fight for a place in the market will continue to boil up patent disputes, and plenty of copyright matters, too.
But the lawsuits will also do something else. They will test the courts and how they treat patent disputes that affect millions of people. In the meantime, we'll see how many of these disputes settle before even getting to trial.

Saturday, March 12, 2011

You Can Help Japan!

If you run your own site — and we know lots of you do — you can use your pageviews and influence to help Japanese people struggling to recover from yesterday’s devastating natural disasters. All you need is a couple lines of code from the Hello Bar.
We showed off the Hello Bar a while ago; it’s a slender bar that floats at the top of your website, giving visitors a brief message and a link.
Best of all, you only have to insert the code snippet on your site once. From a convenient web dashboard, you can customize the bar with your colors and text. You can also tweak the behaviors of the bar and easily turn it on or off from the dashboard. All of this makes it incredibly easy to solicit donations for Japan now, then turn the bar off or change the message and link later, if you so desire.
For example, you might set your Hello Bar to read something like, “Japan has been hit by a devastating earthquake and tsunami. Click here to make a Red Cross donation.” Then later, when Japan is well on its way to recovery, you can change the bar to contain a message about your favorite charity instead, or simply switch the bar off for the time being.
You can set the bar to appear for a brief interval at the beginning of a website visit and hide itself afterward. If you run multiple websites, you can run multiple Hello Bars, again controlling them all from the same dashboard.
The Hello Bar comes from UK design shop digital-telepathy. If you haven’t used Hello Bar before, you’ll need a new account; just sign up with the invite code “helpjapan.”
And if you don’t feel like signing up for a new app, you can just use this code anywhere in the <body> tag of your site to display a standard donation request:

Friday, March 4, 2011

Behind Open Source.

 
Unless you're a developer, or have dabbled in programming, the whole concept of open source software may be a bit confusing, so let's start with the basics.
All software has source code behind it. This is code written by the developers in whatever programming language they chose. This code is usually compiled, eventually, into a form that your computer can understand. Your computer can run this compiled code (what you'll see as an EXE program or something similar), but you can't see any of the underlying code. The original developers still retain the original (or source) code and can do with it what they wish, including making changes or adding features. Through what is basically reverse engineering, knowledgeable users can still hack in some changes, but even they would rather have the original code to do with as they wish.
An application's source code is the property of the developers, and they can choose to keep it entirely to themselves (closed source) or to share it with the world so that others can make changes to it or include it in their own software (open source). That's not to say that anyone is free to do anything they want with open source code; most open source software is licensed to dictate how others can use it. For example, some licenses require that any software created using the source code is also released as open source, with full credit to the original developers, so improvements can go back to the community; others restrict the source's use in commercial products or the like. One notable example of open source licensing was in 2009, when Microsoft accidentally used open source code in a closed-source tool released to the public; after realizing the mistake, they had to release the entire source code of the tool according to its original licensing.
Seems simple enough, but a number of misconceptions can arise from this distinction. People may see large corporations like Microsoft or Apple as greedy because they keep most of their code to themselves and don't allow others to see, use, or improve upon it. Of course, the closed-source choice often makes sense from a business standpoint, and proponents of this model say that keeping the code a secret allows them to ultimately put more money into the product and make it better over time.
People also often see open source endeavors as being run by a few unkempt coders in their parents' basements on a budget of nothing, updating when they get a chance (if ever). While many open source projects are run by fewer than a handful of contributors, larger open source efforts like the Mozilla Foundation and the various Linux distributions clearly show that the model can work on a large scale as well. In these cases, greater understanding of the underlying code can lead to more customizations and further development without actually requiring more money.
Open source isn't necessarily right for every piece of software out there, but we do love open source. It can provide (and has provided) the world with some excellent software that anyone who knows what she's doing can change to suit her desires. In the end, the software isn't necessarily better or worse, but just different from a point of view that most users will never see.

Tuesday, March 1, 2011

Pictographic story of social news



The infographic runs from pre-historic cave paintings all the way up to social news aggregators like Paper.li, Flipboard, Pulse and Taptu, taking in the likes of marathon runners, radio and colour TV along the way.

"I'm fascinated by the social news aggregation space right now, particularly how we got to where we are today and where it might go.Who would have thought thirty years ago that the internet would go mainstream and the World Wide Web would transform content business models (and many other business models come to that) so radically?"
 
You can see the image over the page. It's available under a Creative Commons Attribution-ShareAlike 3.0 Unported License. You can read more about it on Sheldrake's blog.






Friday, February 25, 2011

Google Adds New Filtering Options to Mobile Places Search

Google has introduced several new features to its mobile search for iPhone and Android, including the ability to filter results by star rating, distance and businesses that are open.
For example, you might only look for restaurants that are open right now, within a two-mile radius of your current location, that have a rating of four or more stars.
As I’ve recently learned, searching for nearby restaurants or other points of interest on your mobile phone can be a slow and painful experience, and this feature could speed it up considerably.
Other new features include review images in search results and small design changes such as bigger buttons for viewing a map and the ability to call a business.
To try out Google’s mobile Places search, open www.google.com on your mobile browser and click on the Places link at the top of the page. The new features also work when you do a search for businesses on Google Maps on an Android device.

Wednesday, February 23, 2011

Can Wikipedia Survive the Next 10 Years?

Wikipedia is just the latest in a long line of encyclopedias. In fact, encyclopedias have been around in some form or another for 2,000 years. The oldest, Naturalis Historia, written by Pliny the Elder, is still in existence.
             
How do I know this? I looked it up on Wikipedia, of course. Is it true? Possibly.

Ten years after its founding, it’s hard to imagine what life was like before Wikipedia. When I was growing up, our family had a dusty set of encyclopedias that were at least 10 years old, which is fine if you’re looking up dinosaurs, but not so good if you want to know, for instance, who the current president of the Congo is. But though the limitations of the old encyclopedias were obvious, they were authoritative in ways that Wikipedia is not.

Like most people, I’ll take the tradeoff. I have no desire to go back to the days of printed Funk & Wagnalls. If someone had told me back in 2001 that, within a few years, there would be a comprehensive, free online encyclopedia, I wouldn’t have believed them. Why would someone do that? How?

Origins

By now, we all know the story: Two Ayn Rand devotees, Jimmy Wales and Larry Sanger, created Wikipedia in January 2001. The founding can be traced to a post by Sanger entitled “Let’s Make a Wiki” that was intended as a feature for Sanger and Wales’s other project, Nupedia. Wiki, Sanger explained in the post, was derived from “wikiwiki,” a Hawaiian word for “quick.” “What it means is a VERY open, VERY publicly editable series of web pages,” Sanger wrote in the post.

As often happens, the feature grew out of proportion with its original intent. Wales, who was originally against the idea of a Wiki, became a strong proponent of it, while Sanger, who became estranged from the project in 2002, now charges that Wales hogged the credit for the venture. (Wales could not be reached for comment. To his credit, Sanger is mentioned as a co-founder on Wikipedia’s entry about its founding.)

Rooted in open source thinking, Wikipedia contributors began penning a voluminous number of entries (the site claims there are now 17 million such articles), which began showing up in Google searches, furthering the site’s growth and notoriety. Meanwhile, a subculture developed around Wikipedia, with self-appointed guardians doing their best to make sure entries were accurate and free of vandalism. As an authoritative 2006 Atlantic article on Wikipedia noted, “A study by IBM suggests that although vandalism does occur (particularly on high-profile entries like ‘George W. Bush’), watchful members of the huge Wikipedia community usually swoop down to stop the malfeasance shortly after it begins.”
“Truth” on the Internet

All of this made Wikipedia a pretty good reference, but one that you’d be wise not to take completely at face value. Wikipedia works best as an introduction to a subject. Since the articles usually cite references, readers can investigate further whether the claims are actually true. Despite this, Wikipedia soon earned a reputation for loopy reportage, an aspect best expressed in The Onion headline “Wikipedia Celebrates 750 Years of American Independence.”

Such criticism, though, has it backwards. Wikipedia is, in the best-case scenario, an antidote for the echo chamber of the web. After all, good luck finding “truth” on the Internet. Facts may be facts, but they’re subject to so much spin that it can be hard to get a handle on what’s objectively real.

All the more reason why the idea of Wikipedia is laudable, albeit a bit impractical. Though Jimmy Wales could have made a fortune selling ads on the site, he decided to make the Wikimedia Foundation a non-profit charitable organization. But someone has to keep all those servers running and pay those 50 full-time staffers, which is why Wales appeared in a ubiquitous banner ad on Wikipedia asking for donations. The site eventually collected $16 million.

The Future

Can Wikipedia sustain itself for another 10 years? As The Economist recently pointed out, the number of Wikipedia’s English language contributors fell from 54,000 in March 2007 to 35,000 in September 2010, but here Wikipedia may be the victim of its own success. As the site gets more comprehensive, there are fewer entries that need to be written. One thing’s for sure — if Wikipedia ever does go away, it will be hard to believe it. After all, where will we go to confirm such a thing?

Tuesday, February 8, 2011

Oh Yeah, It's 404!


While surfing the Internet we usually get the unpleasant experience of hitting error pages,
but some web designers have added a bit of sunlight to this world with their quirky 404 pages.
Some of the ones I've discovered are little-known gems.
I hope you enjoy them, and as always, add your own in the comments below!

Who knew that a site for free ringtones could have such an artistic 404 page?

 
Rule #1 of the Internet: cute animals always win.

"How embarrassing" 
 
The Austin Coolers have a very sporty nature.


bit.ly
AAH angry fish!

 BligDeal
BLIGGY 
Blizzard Entertainment
If you can't think of what to say, just blame the user.



 Chillopedia
Once again, cute animals always win.




FrogsThemes
AAH, this is eye-catching!

 

Hootsuite
 It's true. Some people are like 404 pages







Iconfinder
Poor BOT


LocalFitness
Ouch. 


MoMA
The Museum of Modern Art's 404 features an image of Edward Ruscha's "OOF."




Reef Light Interactive
Damn It!
 
 
SLBJ Women's Conference
This is women's work 

 

 Spiritual But Not Religious
Saying from Soul

 

 WeAreTeachers
Only an online community of teachers could pull off this page.
Zappos
Hover over one of the options to make the penguin happy again.

 




Sunday, January 30, 2011

Europe worries Infosys

DAVOS: Infosys Technologies, India's No. 2 software exporter, sees the debt crisis in Europe as one of its top concerns.

"Although we can't do much about it, we can actually increase our footprint and get more customers," Chief Executive Kris Gopalakrishnan said at the World Economic Forum in Davos, Switzerland.

Infosys, which counts Goldman Sachs, BT and BP among its clients, plans to hire more people in the new fiscal year, increasing its staff by around 25,000 from its current 127,000.

Infosys added a net 5,311 employees in the October-December quarter of 2010: the company hired 11,067 people during the quarter, but 5,756 left the organisation in the same period, it said.

The company has also revised its revenue guidance upwards for fiscal 2010-11 for the third consecutive time, on the back of healthy double-digit growth of 24 per cent year-on-year during the third quarter (Oct-Dec).

In a regulatory filing, the IT bellwether said its consolidated revenue for the fiscal under review would be Rs 27,445 crore, projecting 21 per cent year-on-year growth, according to the Indian accounting system.


Tuesday, January 18, 2011

iPad 2 unlocked

Time for a roundup of some of the most popular and widespread iPad 2 related rumors doing the rounds.
#1 - Smaller, thinner iPad

Everything that Apple makes gets smaller and thinner, so this is a no-brainer!

#2 - Front/rear camera

Given that the current iPad has a space in the design for a camera, and given that Apple released the FaceTime video tool for the iPhone and the Mac, the next iPad is almost certain to have a forward-facing camera. What about a rear-facing camera? Dunno, not sure how anyone would be able to sensibly use one.

#3 - 7-inch iPad

Doubtful, especially given that Steve Jobs himself bad-mouthed the 7-inch market. It’s hard to see how Apple could price them competitively enough given that a full-sized iPad starts out at $500.

#4 - High-resolution 2048×1536 panel

A retina-display style panel on the iPad would fall into line with the iPhone, but the 2048×1536 number being thrown around at present is awfully high resolution - even 1080p HD video would either be windowed or need upscaling. It also takes a lot of processor power and a big battery to drive such a large screen. A screen like this is also unlikely to be cheap.

This resolution would however allow existing iPad apps to be upscaled x2 in the same way that iPhone apps are upscaled on the iPad.

Not sure which way to call this one.

#5 - Better speakers

Absolutely. The iPad’s speakers are … well, to put it mildly … plain awful.

Highly likely the next iPad will have better speakers.

#6 - Multi-core processor

Why not? The iPad could do with more power, especially if it’s going to have that high-resolution screen.

#7 - More storage

Absolutely. Next …

#8 - SD Card slot

I keep hearing rumors that the iPad will have an SD Card slot. Also, the mockups and case designs that are turning up have SD Card slots in them.

So, would Apple put an SD Card slot in the iPad?

I don’t think so, and here’s why. Apple will sell you a 16GB iPad for $500. If you want 32GB, that’ll cost you an extra $100. A 64GB version will add another $100 to the price. But what if that 16GB iPad had an SD Card slot in it? Well, then I could bump up the storage by 64GB (taking it to 80GB in total) for around $50 by buying an SD Card.

Apple might be disruptive, but it’s not disruptive to its own pricing structure.  
Thoughts? What would you like to see on the next iPad?

Monday, January 17, 2011

U.S.-Israel Tested Worm Linked to Iran's Atom Woes

WASHINGTON: Israel has tested a computer worm believed to have sabotaged Iran's nuclear centrifuges and slowed its ability to develop an atomic weapon, The New York Times reported.

In what the Times described as a joint Israeli-US effort to undermine Iran's nuclear ambitions, it said the tests of the destructive Stuxnet worm had occurred over the past two years at the heavily guarded Dimona complex in the Negev desert.  
The newspaper cited unidentified intelligence and military experts familiar with Dimona who said Israel had spun centrifuges virtually identical to those at Iran's Natanz facility, where Iranian scientists are struggling to enrich uranium.

"To check out the worm, you have to know the machines," an American expert on nuclear intelligence told the newspaper. "The reason the worm has been effective is that the Israelis tried it out."  
Western leaders suspect Iran's nuclear program is a cover to build atomic weapons, but Tehran says it is aimed only at producing electricity.

Iran's centrifuges have been plagued by breakdowns since a rapid expansion of enrichment in 2007 and 2008, and security experts have speculated its nuclear program may have been targeted in a state-backed attack using Stuxnet.

In November, Iranian President Mahmoud Ahmadinejad said that malicious software had created "problems" in some of Iran's uranium enrichment centrifuges, although he said the problems had been resolved.

The Times said the worm was the most sophisticated cyber-weapon ever deployed and appeared to have been the biggest factor in setting back Iran's nuclear march. Its sources said it caused the centrifuges to spin wildly out of control and that a fifth of them had been wiped out.

It added it was not clear the attacks were over and that some experts believed the Stuxnet code contained the seeds for more versions and assaults.

The retiring chief of Israel's Mossad intelligence agency, Meir Dagan, said recently that Iran's nuclear program had been set back and that Tehran would not be able to build an atomic bomb until at least 2015. US officials, including Secretary of State Hillary Clinton, have not disputed Dagan's view.

Neither Clinton nor Dagan mentioned Stuxnet or any other cyber-warfare possibly used against the Iranian program.

Israel has voiced alarm over a nuclear Iran and Israeli Prime Minister Benjamin Netanyahu has said only the threat of military action will prevent Iran from building a nuclear bomb.

Israel itself is widely believed to have built more than 200 atomic warheads at its Dimona reactor but it maintains an official policy of "ambiguity" over whether it is a nuclear power.

Any delays in Iran's enrichment campaign could buy more time for efforts to find a diplomatic solution to its stand-off with six world powers over the nature of its nuclear activities.  
US and Israeli officials refused to comment officially on the worm, the newspaper said.
(Courtesy: Times International)

Wednesday, January 12, 2011

Memory Is Just Now a Nanowire

IBM's ultra-dense racetrack memory is closer to commercialization.

New research brings closer a new type of computer memory that would combine the capacity of a magnetic hard disk with the speed, size, and ruggedness of flash memory.

The storage technology, called racetrack memory, was first proposed in 2004 by Stuart Parkin, a research fellow at IBM's Almaden Research Center in San Jose, California. Now a team led by Parkin has determined exactly how the bits within a racetrack memory system move under the influence of an electrical current. This knowledge will help engineers ensure that data is stored without overwriting previously stored information.
The new work also helps explain a mystery that surrounded the basic physics of the racetrack memory—whether the bits act like particles with mass, accelerating and decelerating, when moved by electric current. "To further develop racetrack memory, we need to understand the physics that makes it possible," says Parkin.
In racetrack memory, bits of information are represented by tiny magnetized sections called domain walls along the length of a nanowire. These domain walls can be pushed around—to flip a bit from "0" to "1" or vice versa—when electrical current is applied. Unlike current storage technology, racetrack memory has the potential to store bits in three dimensions, if the nanowires are embedded vertically into a silicon chip. The stored information is read magnetically.
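As a mental model only (it glosses over all of the actual spintronics), you can picture a single racetrack as a magnetic shift register: current pulses slide the whole pattern of domains along the wire past a fixed read/write element. A toy sketch of that idea in Python:

```python
# Toy model of one racetrack: the nanowire is a fixed-length sequence of
# magnetization states ("domains"), and a current pulse shifts the whole
# pattern past a read/write element at a fixed position. Purely conceptual.
from collections import deque

class Racetrack:
    def __init__(self, length, head_position=0):
        self.domains = deque([0] * length)
        self.head = head_position  # the read/write element never moves

    def pulse(self, steps=1):
        """Apply current: the entire domain pattern shifts together."""
        self.domains.rotate(steps)

    def write(self, bit):
        self.domains[self.head] = bit

    def read(self):
        return self.domains[self.head]

track = Racetrack(length=8)
for bit in (1, 0, 1, 1):
    track.write(bit)
    track.pulse()            # push the freshly written domain down the wire

readout = []
for _ in range(4):
    track.pulse(-1)          # reverse the current: the pattern marches back
    readout.append(track.read())

print(readout)               # [1, 1, 0, 1] -- the stored bits, newest first
```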
In 2008, the journal Science published a paper coauthored by Parkin that showed how multiple domain walls can traverse the length of a nanowire without being destroyed. The new work, also published in Science, specifies the velocity and acceleration of domain walls as they make their way along a nanowire when an electrical current is applied.
"There's been debate among theorists about how domain walls will respond," says Parkin. Researchers understood the motion of domain walls when they were exposed to magnetic fields, but they still had questions about how domain walls move in response to an electrical current—a crucial point because an actual memory device would use electrical current to manipulate bits. One important question was whether domain walls would behave like particles with mass, taking time to speed up and slow down.

Monday, January 10, 2011

Facebook’s Modern Messaging System: Seamless, History, And A Social Inbox

We’re here today at the St. Regis in San Francisco where Facebook is unveiling what CEO Mark Zuckerberg is calling a “modern messaging system”.
Zuckerberg recalled talking to high schoolers recently and asking them how they communicate with one another. They don’t like email. “It’s too formal,” Zuckerberg noted. So about a year ago, Facebook set out to overhaul the system. But this isn’t just about email.
“This is not an email killer. This is a messaging experience that includes email as one part of it,” Zuckerberg said. It’s all about making communication simpler. “This is the way that the future should work,” he continued.
Here are the keys to what a modern messaging system needs according to Zuckerberg:
  • seamless
  • informal
  • immediate
  • personal
  • simple
  • minimal
  • short
To do that, Facebook has created three key things: Seamless messaging, conversation history, and a social inbox. Essentially, they’ve created a way to communicate no matter what format you want to use: email, chat, SMS — they’re all included. “People should share however they want to share,” engineer Andrew Bosworth said.
All of this messaging is kept in a single social inbox. And all of your conversation history with people is kept.
Alongside the product on Facebook.com, this is going to work on their mobile applications as well. An updated iPhone app is launching shortly. It’s important that you can keep messages going while you’re on the go, Bosworth noted.
But you don’t need an app. It’s important to note that this can work with SMS too.
And yes, everyone can get an @facebook.com email address if they want. But they don’t need to get one — you can use any email address. And yes, IMAP support is coming soon too (but not just yet).
In order to make this work, “we had to completely rebuild the infrastructure that this system is built on,” Bosworth noted. People are aware of Facebook’s Cassandra system, but the new messaging infrastructure is built on HBase (working alongside the open source community again).
He said that 15 engineers have worked on this product — remarkably, this is the most that have ever worked on a single Facebook project.
Right now, this system is merging four main things: SMS, IM, email, and Facebook messages. Zuckerberg said that they’d consider other tech, like VoIP in the future. But right now this is mainly about consolidating text-based messages.
This messaging system will be rolling out pretty slowly over the next few months, Zuckerberg said.


Intel's Light Peak interconnect technology is ready


An Intel executive on Friday said that its Light Peak interconnect technology, designed to link PCs to devices like displays and external storage, is ready for implementation.
Light Peak, announced in 2009, was originally designed to use fiber optics to transmit data among systems and devices, but the initial builds will be based on copper, said David Perlmutter, executive vice president and general manager of Intel's Architecture Group, in an interview with IDG News Service at the Consumer Electronics Show in Las Vegas.
"The copper came out very good, surprisingly better than what we thought," Perlmutter said. "Optical is always a new technology which is more expensive," he added.
Perlmutter declined to comment on when devices using Light Peak would reach store shelves, saying shipment depended on device makers. Intel has in the past said that devices with Light Peak technology would start shipping in late 2010 or early this year.
For the majority of user needs today, copper is good, Perlmutter said. But data transmission is much faster over fiber optics, which will increasingly be used by vendors in Light Peak implementations.
Intel has said Light Peak technology would use light to speed up data transmission between mobile devices and products including storage, networking and audio devices. It would transfer data at bandwidths starting at 10 gigabits per second over distances of up to 100 meters. But with copper wires, the speed and range of data transmission may not be as great.
PCs today are linked to external devices using connectors like USB, but Perlmutter refused to be drawn into a debate on whether Light Peak would ultimately replace those technologies.
"USB 3.0 already has a traction in the market. I don't know if that will change," Perlmutter said.
There could be co-existence, with USB, display and networking protocols running on top of Light Peak.
"Look at [Light Peak] as a medium by which you can do things, not necessarily as one replacing the other," Perlmutter said.

Light Peak: Interesting Facts


·  If you were using Light Peak at 10Gbps, you could transfer a full-length Blu-ray movie in less than 30 seconds.
·  If all the books in the Library of Congress were digitized, they would amount to over 20 terabytes of data (a 2 with 13 zeroes after it). If you used Light Peak technology operating at 10 Gb/s, you could transfer the whole Library of Congress in less than 35 minutes.
·  If you had an MP3 player with 64GB of storage, it would only take a minute to fill it up with music using Light Peak at 10Gbps (there's a quick sanity-check script at the end of this list).
·  The optical fibers used in Light Peak have a diameter of 125 microns, about the width of a human hair.
·  With Light Peak you could have thin, flexible optical cables that are up to 100 meters long. With Light Peak you could have a PC at one end of a football field talking to a device at the other end of the field.
·  With Light Peak at 10Gbps, one could transfer close to 10 million tweets in one second.
·  Light Peak can send and receive data at 10 billion bits per second. That is a 1 with ten zeros after it. If you had $10 billion in single dollar bills and piled them on top of each other it would form a stack about 700 miles high.

·  Optical modules traditionally used for telecom and datacom are physically larger than the Light Peak optical module. For example, 120 Light Peak optical modules could fit in the area of a traditional telecom module.
·  The Light Peak optical module was designed to be lower cost than telecom optical modules through clever design and volume manufacturing. Telecom optical modules may cost more than 10 times more than Light Peak modules.
·  The first laser was invented in 1960 by Dr. Maiman. Some of his contemporaries commented that his invention was a solution looking for a problem. Today, lasers are everywhere including doctor’s offices, internet data centers and in factories for cutting thick sheets of steel. With Light Peak, you will have lasers in your everyday PC.
·  The lasers used in Light Peak are called VCSELs (Vertical Cavity Surface Emitting Lasers) and are a mere 250 microns by 250 microns in dimension. This is as wide as two human hairs.
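A couple of those transfer-time figures are easy to sanity-check with back-of-the-envelope arithmetic; the payload sizes below are common round numbers rather than Intel's exact assumptions:

```python
# Back-of-the-envelope transfer times at Light Peak's quoted 10 Gb/s. The
# payload sizes are common round figures, not Intel's exact assumptions.
LINK_GBPS = 10  # gigabits per second

def transfer_seconds(gigabytes, gbps=LINK_GBPS):
    return gigabytes * 8 / gbps  # 8 bits per byte

print(f"25 GB Blu-ray disc:  {transfer_seconds(25):.0f} s")  # roughly 20 s
print(f"64 GB media player:  {transfer_seconds(64):.0f} s")  # roughly 51 s
```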