Friday, September 05, 2003

From the Economist

The Economist speaks....
With broadband use rising and file storage becoming smarter, DVD piracy may be coming soon. How do you fight it?

American media moguls are adopting the following strategies:

1. Delete content after the user has “consumed” it.
2. Offer movies cheaply online.
3. Seek new laws.
4. Educate the young with new curricula that teach that swapping copyrighted content is wrong.
5. Go after file swappers in court.

Movielink is an online site charging $3-5 to download a movie, but the service is still “clunky”.

According to the Pew Internet & American Life Project, 65% of people who share music and video files online say they do not care whether the material is copyrighted.

In the 1980s, software companies used to fight pirates with DRM-style copy-protection technology. But they found that copy protection annoyed users, and got rid of it. The makers of Lotus 1-2-3 abandoned it after finding that they had merely created a new market for software that could defeat copy protections. Now the music industry is realising that some of the downloaders it labels as thieves are actually trying out music before they buy it, and that controlled, legal file-sharing could be a marketing tool. Viral marketing of that kind could be powerful.


Infoviz
“Killer applications”: the term was invented 25 years ago for the spreadsheet, which is why many people bought their first PC. It allowed them to build models and play with their data. With spreadsheets, “what if” scenarios could be calculated and recalculated easily: if the value in one cell was changed, the data in related cells were automatically adjusted. Users could, in effect, “converse with the data”.
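A minimal sketch, in Python, of that recalculation idea (the cell names and formulas here are invented for illustration, not taken from any real spreadsheet): change one input cell and any cell whose formula depends on it reflects the new value.

```python
# Toy spreadsheet: cells hold either literal values or formulas over other cells.
class Sheet:
    def __init__(self):
        self.values = {}      # cell name -> literal value
        self.formulas = {}    # cell name -> function of the sheet

    def set_value(self, name, value):
        self.values[name] = value

    def set_formula(self, name, fn):
        self.formulas[name] = fn

    def get(self, name):
        # Formula cells are recomputed on demand, so a change to any
        # input cell automatically shows up in the cells that use it.
        if name in self.formulas:
            return self.formulas[name](self)
        return self.values[name]

sheet = Sheet()
sheet.set_value("price", 100)
sheet.set_value("units", 50)
sheet.set_formula("revenue", lambda s: s.get("price") * s.get("units"))

print(sheet.get("revenue"))    # 5000
sheet.set_value("price", 120)  # the "what if" step: change one cell
print(sheet.get("revenue"))    # 6000 -- the dependent cell adjusts automatically
```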

Information visualisation is all about making data visible, or, more precisely, making visible the patterns that are hidden in the data. Graphic aids such as charts have done this for ages. What is new, the authors of a new book on the subject explain, “is that the evolution of computers is making possible a medium for graphics with dramatically improved rendering, real-time interactivity and cost.”


Nanotech for solving the energy crisis
The challenge is working out how to provide the world with enough energy when the population reaches 10 billion and the global energy requirement has soared from today's 14 terawatts (ie, 14m megawatts) to anything from 30 to 60 terawatts of capacity. The lack of energy, he contends, is the single biggest issue facing mankind today. Certainly, some of the world's more intractable problems, such as war and poverty, water and food shortages, disease and pollution, are connected in some way or other with energy deficiency.
Short of building nuclear power plants outside every major city, he believes that the only way out of the energy impasse is through the use of nanotechnology.

SCO - All bark, no bite?
Much as apes and humans allegedly have common ancestors, several operating systems can trace their lineage to UNIX, including Linux. The SCO chairman claims he soon found “massive and widespread violations” of Caldera's intellectual property in the Linux code.

What most bothers the open-sourcers is SCO's refusal to reveal which lines of code it considers problematic. “Here are these people who claim we are pirates but refuse to say where and how,” says Bruce Perens, an open-source evangelist. After all, he says, remedying the situation would be “trivially easy”. The Linux “community”—numberless hobby hackers—would simply converge on the code and rewrite it within hours or days.

SCO has caused enough uncertainty that technology consultancies, such as Gartner and Yankee Group, are advising clients to wait and see before adopting Linux. It has not gone unnoticed that Microsoft is one of the few companies that has actually paid SCO for a Linux licence, even though Microsoft has no use for one. Microsoft and SCO vehemently deny that they are in league, but most open-sourcers assume that the evil Redmond giant is bankrolling a mercenary.


Backing up everything ever published

How do you ensure that readers will still be able to access electronic academic journals even centuries after they have been published?

The project, called LOCKSS (short for “lots of copies keep stuff safe”), addresses a vexing problem that librarians face everywhere. Increasingly, academic journals are published online; many are not even available in print. As a result, libraries are losing the option of maintaining local collections—but are leery of discontinuing paper subscriptions.

The LOCKSS team looked long and hard at what the great libraries of the world have done over the millennia. First, they acquire copies and make them available to their local readers, while seeking to preserve them to the best of their ability. Second, if copies get lost or destroyed, libraries lend them to each other. It is these circulating collections, which in effect form a peer-to-peer network with no central authority, that LOCKSS seeks to mimic.
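A toy sketch of that idea in Python (the library names and journal title are invented, and this is not the real LOCKSS protocol): every library keeps its own copy, and one that loses a copy repairs it from any peer that still has it. An article only disappears if every copy everywhere is lost.

```python
# "Lots of copies keep stuff safe": each library holds local copies and
# can repair a lost one from its peers, with no central authority.
class Library:
    def __init__(self, name):
        self.name = name
        self.copies = {}              # journal title -> content

    def acquire(self, title, content):
        self.copies[title] = content

    def lose(self, title):
        self.copies.pop(title, None)  # disk failure, fire, cancelled subscription...

    def repair(self, title, peers):
        # Ask the other libraries (the peer-to-peer network) for a copy.
        for peer in peers:
            if title in peer.copies:
                self.copies[title] = peer.copies[title]
                return True
        return False                  # lost everywhere: the article is gone

libs = [Library(n) for n in ("Stanford", "Harvard", "Oxford")]
for lib in libs:
    lib.acquire("Journal of Example Studies, vol. 1", "<full text>")

libs[0].lose("Journal of Example Studies, vol. 1")
libs[0].repair("Journal of Example Studies, vol. 1", libs[1:])
print("Journal of Example Studies, vol. 1" in libs[0].copies)  # True
```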

Efficient Wind Power generation

Wind-powered turbines must be able to generate electricity at a cost that is competitive with fossil-fuel sources. One way to do this is to make cheaper windmills. Until now, large-scale wind turbines have faced into the wind. That makes them easier to design, but heavy: because the wind blows the turbine blades towards the supporting structure, they have to be made stiff enough to stop them bending and hitting the tower.

If the whole contraption could be turned around, and the fan placed downwind from the support pole, this problem would disappear. The blades could then be less stiff, and would therefore be lighter and up to 25% cheaper. So why, throughout history, have windmills always pointed upwind rather than downwind? The answer is that downwind turbines are tricky to design and subject to all sorts of aerodynamic interference caused by the supporting tower.

The main problem that the WTC had to solve was how to damp the vibrations caused when a blade passes through the “wind shadow” of the tower. Calculations suggest that downwind power could be generated on the site for about 3.5 cents per kilowatt-hour, ie, competitive with coal.

Bumping against the built-in speed limits of the Net
How do you make the internet go faster, other than by laying bigger, faster data pipes? There turns out to be a fundamental speed limit built into the internet's software foundations: the “transmission control protocol”, better known as TCP. The speed limit only becomes apparent at very high transmission speeds, measured in the hundreds of megabits per second (Mbps). In tests at such speeds, the efficiency of the connection was less than 30%. Why?

The problem stems from the way that TCP responds to congestion. The internet has been able to scale up from millions to billions of users over the past few years due to the simplicity of its design. Computers talk to each other in TCP using a simple rule to ensure that they make good use of available network capacity. One computer sends a chunk of data, called a packet, to another computer, and waits for an acknowledgment message, or ACK. If no ACK arrives, the sending computer assumes that the network is congested and the original packet has been lost, and scales back its transmission rate to half of the previous one. Once reliable transmission has been resumed, the sender gradually starts to increase the transmission rate, until eventually the network becomes congested again, the rate is halved, and so on. The advantage of this simple approach is that millions of computers can share a network with no need for centralised traffic control. When capacity is available, transmission speeds go up; when it is not, they go down.
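A toy model, with made-up numbers, of the additive-increase/multiplicative-decrease behaviour described above (real TCP adjusts a window of unacknowledged packets per round trip rather than a raw rate):

```python
# Toy sketch of standard TCP-style congestion control, not real TCP:
# halve the sending rate when a packet goes unacknowledged, otherwise
# nudge it up gradually. All numbers are illustrative.

def next_rate(rate, ack_received, increase_step=1.0):
    if not ack_received:          # no ACK: assume congestion and back off hard
        return rate / 2
    return rate + increase_step   # ACK arrived: probe gently for more capacity

rate = 100.0                      # arbitrary starting rate
for acked in [True, True, False, True, True, True]:
    rate = next_rate(rate, acked)
    print(rate)                   # 101.0, 102.0, 51.0, 52.0, 53.0, 54.0
```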

This approach works well on today's internet, which is a bewildering patchwork of different networks operating at different speeds. But difficulties arise when the bottlenecks in the internet are removed, as they are on the high-speed links used by scientists. The problem, says Dr Low, is that TCP reduces the transmission rate too drastically at the first sign of congestion, and only increases speed again gradually. It is, he says, akin to a driver who can see only ten metres in front of his car, and who increases speed gradually when the road seems clear, but slams on the brakes as soon as another car comes into view. “On a slow street it may work, but on a superhighway it does not,” he says.
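A rough back-of-the-envelope illustration with assumed numbers (not figures from the article): on a fast long-distance link, the number of packets in flight is huge, so winning back the half that is lost after a single congestion event, at roughly one extra packet per round trip, takes minutes.

```python
# Why halving hurts on fast links: after one loss, standard TCP grows its
# window by about one packet per round trip, so recovery is very slow.
link_speed_bps = 1_000_000_000     # assume a 1 Gbps link
rtt_s = 0.1                        # assume a 100 ms round-trip time
packet_bits = 1500 * 8             # 1500-byte packets

window_packets = link_speed_bps * rtt_s / packet_bits   # packets in flight
packets_to_regain = window_packets / 2                   # lost by one halving
recovery_time_s = packets_to_regain * rtt_s              # +1 packet per RTT

print(round(window_packets))       # ~8333 packets
print(round(recovery_time_s))      # ~417 seconds, i.e. roughly 7 minutes
```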

So Dr Low and his team have devised a tweaked version of TCP, called FAST. Like the original TCP, it is a decentralised system: each computer monitors the responses to sent packets in order to adjust transmission speed in the face of varying levels of congestion. But FAST does more than simply check to see if an ACK has arrived for each packet sent. Instead, it takes into account the delay between the packet's transmission and the arrival of the corresponding ACK from the recipient. Calculating a running average of this delay time provides far more precise information about the congestion.

Transmission speed can then be adjusted carefully, smoothly scaling back when the first signs of congestion appear, and quickly ramping up again once the congestion has eased. Using FAST, Dr Low and his colleagues were able to improve the efficiency of a 1,000 Mbps link so that it reached 95%—even in the presence of a small amount of background traffic from other users. In other words, the protocol is not just fast, but backwards compatible. Computers speaking FAST can share a network with other machines using standard TCP.
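A heavily simplified sketch of that delay-based idea (not the actual FAST update rule, and with arbitrary parameters): the sender compares the running-average round-trip time with the best it has seen, treats the difference as queueing delay building up in the network, and adjusts its rate smoothly instead of halving it.

```python
# Toy sketch of delay-based congestion control in the spirit of FAST:
# rising round-trip delay, not packet loss, is the early congestion signal.

def next_rate(rate, avg_rtt, base_rtt, gain=0.5):
    queueing = (avg_rtt - base_rtt) / base_rtt   # extra delay = queues building
    if queueing > 0.1:                            # early sign of congestion
        return rate * (1 - gain * queueing)       # scale back smoothly
    return rate * 1.05                            # little queueing: ramp up

rate, base_rtt = 100.0, 0.050                     # arbitrary starting values
for avg_rtt in [0.050, 0.051, 0.060, 0.070, 0.052]:
    rate = next_rate(rate, avg_rtt, base_rtt)
    print(round(rate, 1))                         # climbs, eases off, climbs again
```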
