The Four Myths of Modern Telecommunications


Eversheds Dinner-Discussion, July 12th 2006

The House of Lords; London, UK


By Kenneth Neil Cukier

Technology and Telecoms Correspondent, The Economist






Good evening. Let me begin by thanking Lord Foulkes and Alan Jenkins, as well as Neil Brown and Graeme Gordon of Eversheds for the opportunity to talk to you tonight.


As an American in these august halls of Parliament, I cannot help but draw references to history, to the past. And that may be an appropriate place to begin my thoughts on the “four myths of telecommunications.” To fully appreciate where we are today, let’s take a moment to consider where we were as far back as ten years ago, five years ago and as recently as just last year.


Ten years ago -- 1996: The advent of liberalization and deregulation.


Think about the industry a decade ago: France Telecom, for example, was 100% owned by the state and was run as an arm of government (it didn’t take a corporate form until 1997, and its shares listed a year later). All throughout Europe, the situation was largely the same. There was no competition. If you were a corporate customer and wanted to get a leased line, you had to pay exorbitant prices and wait weeks, possibly months, for it to be installed. There were no “customers” in telecoms -- only “rate-payers.”


At the same time, if you recall, mobile phones were considered a luxury item, and few people had them. There was a good reason why they were considered a luxury -- the prices were outrageous. Pre-paid tariffs didn’t yet exist on a large scale. SMS didn’t exist either, for the most part. And BlackBerries, for instance, were not introduced until 2000.


Five years ago -- 2001: The dot-com bubble popped.


It was the crash of the dot-com delirium, and with it, the IT and telecoms sectors. For example, a year after buying 3G wireless licenses for tens of billions of dollars, some of the operators had to shed their mobile divisions in order to remain solvent.


In America, the WorldCom accounting scandal drove the company into bankruptcy. AT&T started its collapse. And of the many billions of dollars spent on infrastructure, the vast majority had to be written off. Consider: Tyco spent over $2.5 billion laying undersea cables connecting three continents. Last year it sold the network to an Indian telecoms company called VSNL for a mere $130 million -- pennies on the pound. Today, there is so much excess capacity that it is impossible to use it all. VSNL bought the infrastructure as a matter of national economic security, since India’s economy is predicated on its ability to ship digital information around the world. Yet, although VSNL paid a pittance, the revenue it earns from the capacity doesn’t even cover the system’s operating costs.


Still, throughout all the difficulties, something else was going on: Internet traffic continued to double every year; mobile use soared. Prices dropped. And keep in mind that many of today’s staple technology trends -- from blogs and wikis to iPods -- didn’t even exist. Google was just getting known.


One year ago -- 2005: The rise of broadband.


Last year was a period of new network build-outs, a time when telecoms had clearly turned a corner. But what is most interesting about the rise in broadband is that it is not for Internet access. It’s not for voice either. It’s for all sorts of things -- the “telco” has disappeared and in its place has simply come the “communications carrier,” offering a quadruple-play of Internet, voice, mobile and even TV. Soon it will be quintuple-play, I’m sure, and on and on until journalists deplete their knowledge of Greek prefixes.


We’ve also seen consolidation. Europe’s incumbent operators are attacking each other’s markets, and using convergence as their way in -- Telefonica in Britain via O2 (which it acquired last year); France Telecom throughout Europe via Orange (which recently bundled its mobile and broadband arms). In America, AT&T (which was acquired last year by nothing less than a “Baby Bell,” SBC) has started to sell television.


The lessons are two-fold: the power of regulation and the power of the market.


Regulation and the market should not be seen as diametrically opposed forces, hostile to one another. Rather, each needs the other. The power of regulation is that it brings competition. And the power of the market is that it brings investment, as well as better services and lower prices for consumers. It is a symbiotic relationship.


Yet I draw attention to the past not to praise our accomplishments as an industry, but so that it can inform our thinking when we look to the future.


We know very little about the future of technology and telecoms, but we know this: tomorrow will be different from today. This may sound obvious, but it is hard to appreciate in practice. Often, regulation forgets this and presumes that tomorrow will look just like today. (Think of America’s 1996 Telecom Act, which barely mentioned the Internet.) We know a second thing: we cannot predict today what will happen tomorrow. And a third thing: enlightened public policy -- or at least the challenge of regulation -- is to set rules today that do not inhibit developments that may emerge in future.


But the thorny bit comes when we look closely at what is happening between past and future -- here, in the present. When I do this, I see four myths of modern telecoms.



Myth One -- Telecoms companies provide telecommunications.


No: Many sorts of companies are in the telecoms business now -- regulations must encourage this, not see it as a problem because it doesn’t fit into classic categories.


For example, consider eBay. What business is it in? It relies on the Internet as its sales channel, but it is not really an Internet company -- it’s an auction house. But is it really in the auction business? It is certainly in the software business. And last year it bought a private software company -- and paid around $3 billion for it. It is called Skype, which lets people make phone calls just like sending email -- for free, regardless of distance. So is eBay really an auction house? Well, yes -- but it’s also a communications company.


This pattern -- of companies masquerading as being in one business, but actually doing something related to telecoms -- is repeated everywhere in the technology sector. Think about Google. It serves up searches on the Web. In fact, it has become so important, and so dominant, that perhaps we should apply the “essential facilities” test to it and regulate the firm. After all, Google has become an indispensable on-ramp to the information society for millions of people. (Of course, by regulating it we might destroy the very entrepreneurial spirit that gave life to Google in the first place, and decrease the incentives for venture capital investors, et cetera).


But what is Google doing? If you read its financial statements, its biggest expense is not bandwidth or R&D but real estate. It needs two things: space for computer-server farms, and access to electricity. Why does it want all those servers? To serve up ads alongside web pages? No. Because its end-game is advertising on a much bigger scale -- Google wants to display personalised ads in real-time on broadcast television. That is where the real ad money is. Meanwhile, Yahoo is investing in programming. So should they be regulated by the EU’s “Television Without Frontiers” directive? Would that make sense?


There are other examples. The biggest computer maker is not Dell but Nokia, since mobile phones are achieving the functionality of PCs. We think of Apple as a computer maker, but with iPods it is in the digital-content business, and with iTunes, the communications sector. The point is basic: telecoms isn’t necessarily something done by telecoms companies anymore. Smart regulatory policy must take this into account.



Myth Two -- Centralising regulation is a good thing.


Not always: There is an intermediary period in which regulations will overlap, and industry will be forced to serve two masters -- probably to the satisfaction of neither.


On the surface, the idea of centralising regulations seems a good thing -- we like harmonising rules. But it often entails a transition that, if handled badly, could create problems. As an example, consider the EU’s proposal to regulate mobile roaming charges, which was unveiled today. In some respects, it is a good thing: tariffs are high.


But look at it in another way. From the point of view of the mobile operators, after paying huge sums to national regulators for 3G spectrum and binding themselves to their stringent rules, here comes a supranational regulator changing the rules of the game in mid play. This isn’t what they thought they were buying into when they paid for the spectrum -- do they deserve a rebate? Should countries be forced to return some of the auction revenue to operators because the integrity of contract has been violated?


In terms of the roaming rules, we can probably empathise with the commission, since the prices seem unnaturally high. And we can appreciate the idea of centralising rules to make them more efficient. In Viviane Reding’s review of telecoms regulations, she floated the idea of an EU-wide regulator -- a “super-regulator” to oversee the national regulatory agencies. It is a worthwhile thing to consider. But the point is that in our zeal to centralise regulations for efficiency’s sake, we sometimes overlook the transitional period it takes to get from here to there, which must be carefully handled.



Myth Three -- Regulators must preserve the openness of the Internet.


No: Let the market develop the optimal approach for the information society to advance, and if there is a market-failure or identifiable anticompetitive practices, fix that.


The idea of “openness” lies at the heart of the Internet, and has been responsible for its phenomenal growth. It is a result of technical decisions made in the 1960s and ‘70s by the small group of American techies who built the Internet. It is called the “end-to-end” principle. It means that any user can reach any other directly: the network is “neutral.”


This is a complete contrast to the telecoms system, which is a centralised network. If you want to add a new service, you have to knock on the door of the telecoms operator and ask permission. But on the Internet, the end-to-end design means that users are autonomous and can control what they do, without interference from any central organisation that may have an interest in the network doing one thing but not another.


The result is innovation. A British physicist working in Geneva can create the World Wide Web; an Israeli can develop an instant-messaging system (called ICQ) that becomes wildly popular and spreads around the world. An 18-year-old in California can create software for sharing music for free -- Napster -- and place it on the Internet, where it takes off, to the dismay of the recording labels. In short, the end-to-end principle underpins the Internet’s genius. It is like free trade: open interaction among parties, free of roadblocks.


However, the “neutrality” of the network is under attack. Telecom operators -- for the moment in America but this is starting to come to Europe, too -- are asking why they should only earn income from charging the end-user, rather than also from sites like Google or eBay that blast huge amounts of traffic across their pipes. Operators are investing billions into building new high-speed networks, and feel they should be allowed to monetise it as they see fit. In June, it became a major political issue in America, after legislation was proposed in Congress to enshrine the idea of “network neutrality” (as the politicians call it) into law.


Network neutrality is a good thing, because it leads to innovation -- telecom operators can’t act as toll booths on the information highway. Start-ups will have the same chance of competing with bigger firms, and big firms won’t face extortion from operators, who could otherwise allocate poorer bandwidth.


That said, do we really want a law to preserve the end-to-end principle? Isn’t it somewhat “un-Internet-like” to set in stone anything about the Internet, as if to say that a technology characterised by rapid change has a dimension that must never change? Moreover, isn’t it odd that we like the end-to-end concept because it resembles a free-market approach to content and services -- yet want to abrogate the role of the market in order to preserve it? If it were truly optimal, why would we need to mandate it in law? Such a rule would implicitly suggest that there is only one economic model for the Internet. For a network typified by experimentation and diversity, this is unlikely.


The point is that although the “openness” of the Internet is probably worth preserving, it makes more sense to let the market work out the best arrangement. If problems arise due to concentration of power -- which could be treated under normal antitrust rules -- we can remedy that, specifically. This seems better than creating rules for the Internet under the presumption that government knows what is best for the Internet for all time.



Myth Four -- We must apply market approaches to wireless spectrum allocation.


No: This reform does not go far enough; we need to establish a huge swath of space for unlicensed spectrum -- a spectrum commons -- to promote innovation.


All wireless devices rely on radio spectrum, from mobile phones to broadcast television -- it is a scarce resource, the oil of the digital age. But the way spectrum is allocated is inefficient: regulators cleave off big chunks and grant them to companies (in the past for free, now for huge sums of money). The conditions are that you can use it only for a specific, pre-defined purpose, and you can’t share it, trade it or sell it.


This is ludicrous. Economists know that resources are often better managed when there is an incentive, such as by granting property-rights. Otherwise, it can lead to a “tragedy of the commons.” Applying market principles is more efficient: auctioning spectrum brings money into the public till; letting firms use it as they see fit provides flexibility and an incentive to innovate; permitting resale puts it into the hands of those who most want it. We might even consider placing an annual tax on spectrum, to encourage conservation.


However, although market-based approaches are desirable, they are only a first step, not the finish line. More important is a reform that goes even farther, and whose benefits are even greater: setting aside a large amount of “unlicensed spectrum” for the free use of anyone -- a sort of “national park” for wireless communications. It will permit inexpensive, decentralised experimentation and lead to innovation, which will benefit society -- and ultimately even the companies using pricey, licensed spectrum.


The problem to overcome is less technical than it is political. Today, regulation makes presumptions about how wireless technology works that are unchanged from the days of Marconi, when radios were made of coiled copper wires and crystals. Science has evolved, but the rules have not. Modern radios are encoded in software; they have microprocessors that make them smart devices. They can pluck out signals in the airwaves, in the same way as the human ear can distinguish one conversation in a crowded stadium, surrounded by the din of thousands of voices.


In the past, “smart radios” were not possible, so regulators handed out huge slices of spectrum so that broadcasters could SHOUT THEIR MESSAGE over the interference from other signals. Today, the problem of interference, while still important, need not be such a limiting factor. So the way we hand out spectrum should change -- yet it has not. One reason is political: existing license holders -- from radio and television broadcasters to mobile-phone operators -- are loath to give up their valuable resource or to see the creation of potential competitors from “free spectrum.”


But it should be allowed. The airwaves are a public resource. Government must ensure free spectrum just as it ensures free speech. It will be a test of the integrity of our public officials if they take the courageous step in establishing a “spectrum commons.”


Always in regulatory policy, a diversity of approaches is preferable. So we need to apply market principles to spectrum allocation. But we also must create a “national park” for spectrum. We should do this on commercial grounds, to spark innovation, and on public-interest grounds, to uphold democratic principles of allowing people the free use of the airwaves that, after all, belong to them.




These four myths might have us reconsider our current telecoms regulation. Done thoughtfully, that reconsideration can lead to new business models for industry, new services for consumers -- and I hope, in deference to our hosts tonight, more work for lawyers!


If I began my remarks looking at the past -- and delayed you from dinner by looking at the present -- let me conclude with a few predictions about the future, looking out one, five and ten years from now.


One year from now, in 2007.


Roaming charges will go down throughout the EU without the need for legislation -- we’ll find that the threat of regulation alone was enough to have the same effect.


“Network neutrality” will not become law in the US or EU, but the openness of the Internet won’t be jeopardised: market forces will find an acceptable solution for everyone. However, in places where laws compel technology to conform to what legislators and regulators believe is best -- such as France’s rules mandating compatibility with iPod and iTunes music -- unforeseen consequences will prove problematic.


Five years from now, 2011.


Network connectivity will be ubiquitous in the West and in most cities in the developing world, mainly via wireless. People will be online to such an extent that it will change the very fabric of life. One of the biggest changes will come in the degree to which ordinary people go from being consumers to producers of media, armed with something as simple as a mobile phone. We got a taste of this on July 7, 2005, when individuals posted news and photos of the bombings. The BBC used the material in its reports, and the London Metropolitan Police requested it for its investigation. We are witnessing the emergence of citizen-journalism -- what I call “The Fifth Estate” -- that will shake things up in both deeply beneficial and profoundly terrible ways.


Moreover, objects that today do not have processors will have them, and be linked to a network -- from eyeglasses and shoes to medications. The great trend will be in “cross-over computing” -- objects that used to do only one thing now doing multiple things. For example, wristwatches that also send biometric information to monitor our health; doorknobs that fingerprint-authenticate us for entry; light bulb sockets that also act as smoke-detectors. Notably, this is communications, but non-human-to-non-human communications. It will eventually supersede person-to-person communications, and mark a major new business opportunity for operators and other sorts of companies.


Ten years from now, in 2016.


This is easiest to forecast. I am certain that anything I or anyone else tries to predict will be wrong.


I say this with confidence because if the preceding four myths of telecoms teach us anything, it is that we must approach the future with humility. The best we can hope for is that regulation ensures a vibrant market -- and that a robust market obviates the need for much regulation.


The point is this: the role of regulation is not to predict the future but to let it happen.


Thank you.




About the speaker:


Kenneth Neil Cukier is the technology and telecoms correspondent of The Economist. He has written extensively on global business, technology and international affairs. Some of his work is available at:




© Copyright Kenneth Neil Cukier 2006, under the Creative Commons Attribution-Noncommercial-No Derivative Works 2.5 License.