The Future of Convergence & Regulation:

Trends -- and Errors! -- from the US and EU

 

By Kenneth Neil Cukier

Business Correspondent, The Economist

 

Keidanren Kaikan Convergence Regulatory Symposium

February 26, 2008 - Tokyo, Japan

 

 

Introduction

 

Good afternoon. It is an honour to speak to you today. I moved to Japan just eight months ago, and I very much appreciate the warm hospitality I have received. But it means that my experience with Japan is limited. Yet as someone who has lived and worked in America and Europe, I can speak about how media convergence and regulation are taking shape in both places.

 

My remarks will look specifically at mistakes that American and European companies have made, and at errors in EU and US regulatory policies. I realize that things are done very differently in Japan. But the experiences of the West might be useful, and I would be happy if they help Japan learn from the mistakes made in America and Europe.

 

                                                            *        *           *              

A Story

 

I would like to begin with a story that is relevant to consider when thinking about convergence and regulation.

 

In the early 1990s, AT&T laid down cables across the United States to build a nationwide network. After digging the trenches and laying the cable, the company wanted to seal them securely, so it poured cement over the trenches. Then -- of course -- something happened: the Internet. Between 1994 and 1998, AT&T sold as much telecoms capacity as it had expected to sell in almost 20 years -- a forecast made for an era when voice calls were the main reason people used telecoms. Now demand for capacity was huge. And bandwidth technology did not stay the same, as it generally had in the world of voice calls; it improved greatly during this period.

 

However, when AT&T wanted to upgrade the cables with faster technology, it realized it had a very serious problem. The cost of cracking through the cement to reach the cables was enormous. AT&T had made a huge error. The company had not built its network in a way that made it “extensible” or “upgradeable.” In some ways, it was an understandable mistake: the company did not know at the outset how the technology would evolve. Ultimately, AT&T had to spend a huge amount of money upgrading its lines -- something that contributed to its demise when it was sold in 2005 for a small sum, $16 billion, to one of the “Baby Bells” (which has since adopted the famous name).

 

The story is important not only because of its lessons for the technology and telecoms industry, but because of what it means for policy makers. Regulation is often like this: it pours concrete on an industry. It hardens the economic structure into place. And like AT&T’s cement, regulations are hard to undo once they have set.

 

In some ways this, too, is understandable: it is impossible to know at the start how things will evolve. Applied to regulation, the lesson suggests that the solution is to design rules that are flexible or “upgradeable” -- rules that presume from the outset that the regulatory environment will change, because the technology and its uses will certainly change, and in ways we cannot predict.

 

                                                            *        *           *              

Overview


This story serves as a basis for my remarks today. Although we cannot know how things will evolve, as business leaders, policy makers, academics, economists -- and yes, even journalists -- we are forced to place our bets on the direction things may go. To lend my voice to the cacophony, my remarks will cover three themes.

 

First, I will note the errors that some American and European companies have made in their failure to accept convergence and a new regulatory environment, in two areas: telecoms and newspapers.

 

Second, I will briefly look at three technology trends -- blogs, mobile television and “Web 2.0.” Together, they highlight the underlying characteristics of media convergence: the way it allows for open, self-organizing collaboration among individuals and groups. This radically changes how businesses form and interact. It represents both a challenge and an opportunity for established players.

 

Third, I will examine two regulatory issues, and note the mistakes that were made, in part because of a failure to appreciate these technology trends. The first is the European Union’s “Audiovisual Media Services Directive” -- also known by its earlier name, “Television Without Frontiers.” The second is proposed legislation in the US on network neutrality.

 

My remarks will conclude by explaining why getting convergence and regulation right is so important, because of what lies over the horizon, namely, ubiquitous networking for machines, objects, appliances, bodies, and so on.

 

                                                            *        *           *              

I. Errors in Business

 

In America and Europe, many companies have faced difficulties due to the Internet while others have prospered. Often what distinguishes the winners from the losers is the degree to which they have embraced rather than resisted convergence, and accepted or rejected new regulatory structures.

 

The media and telecoms companies in the West found that there is a heavy price for trying to protect the status quo, even though it seems more comfortable than the uncertainty of finding new business models. Consider two cases:

 

1. Telecoms

 

Over the past decade, the telecoms industry faced new technologies and new uses -- from Internet usage, to Voice over Internet Protocol (VOIP) and mobile calls -- that threatened its business. Of these changes, Internet calling was clearly the most disruptive. This should have been obvious to operators: between 1965 and 1980, the average revenue per call minute fell from slightly over $1.50 to slightly below 50 cents, according to America’s Federal Communications Commission (FCC). That was due to deregulation, and revenue per minute then stayed roughly steady for almost a decade. The Internet dealt the final blow: from 1995 to 2005 average revenue per minute fell from around 45 cents to below 5 cents.

 

Still, by 2006 many operators remained very dependent on voice service. Verizon got almost 80% of its revenue from voice, and AT&T derived over 60%. In Europe, incumbent operators such as Deutsche Telekom, KPN and France Telecom received around 60% of their revenue from voice. The result is that their share prices -- one indication of business performance -- have shed around 50% of their value since 1999. In America, long-distance operators like MCI and Sprint basically atrophied and were acquired -- with only their Internet backbone or wireless units of any value. This state of affairs is particularly damning when one considers that spending on communications has been one of the big areas of growth among consumer categories since 1990 (up almost 30%), according to the Organization for Economic Cooperation and Development (OECD).

 

But instead of seeing Internet calling as an opportunity to forge a new commercial relationship with customers, operators considered it a threat -- and rushed to regulators for protection. They wanted to collect fees when smaller telecoms firms connected to their networks with modems for Internet traffic. They sought restrictions on Internet calls. They sought to have regulatory requirements placed on the start-ups to thwart them, such as forcing them to make their systems open to wiretapping or emergency service calls. Regulators largely resisted this.

 

All the time they spent trying to stop Internet calling was time not spent on competing. The Internet communications firm Skype launched in August 2003, and by December 2005 it had over 50 million subscribers. Today we know that VOIP is not just about cheap calls, but about new forms of real-time collaboration with voice, images, documents and the like. What American and European telecoms operators found out was that to resist change is to risk being devoured by it. The entire business had changed but the companies had not.

 

2. Newspapers

 

The same is true for media companies in Europe and America. Newspapers are struggling. In America, many large newspaper groups have been sold as their business has begun to erode. In 2006 McClatchy purchased Knight Ridder, a major chain of daily newspapers; in 2007 the billionaire investor Sam Zell bought the Tribune Company (home of the Chicago Tribune and Los Angeles Times), and Rupert Murdoch of News Corp acquired Dow Jones, which publishes the Wall Street Journal. In Europe, newspapers are struggling too. Britain’s tabloids have taken to inserting free DVDs in certain issues as a way to boost sales -- and have launched free editions to gain readers and attract advertisers. In France, three main dailies, Liberation, Le Monde and Les Echos, are all on the block for new owners.

 

Strikingly, the newspaper industry has been timid in adapting to the Web and experimenting with new business models. The Internet offers them a global audience and a chance for targeted advertising. But they resisted change. One sign of their resistance was their early disparagement of “blogs” -- the musings of ordinary people seemed to be drivel. But after a series of scandals in which blogs revealed inaccuracies in the media, and as some of the brightest minds in academia and professional life took up blogging, blogs lost that stigma. Indeed, many newspapers now publish blogs, from the New York Times to the Wall Street Journal.

 

But still the resistance to the Internet remains. A more powerful sign of newspaper groups’ unease with new models was the reaction to Google News, which aggregates content and provides links. Many people used it as their news page, bypassing the newspaper sites. Newspaper trade associations forced Google to stop the very activity that increased their brand awareness and drove traffic to their websites. A Belgian newspaper association and the French wire agency AFP actually sued Google to stop the practice.

 

Today, the dispute has been resolved, with archived stories available online -- for sale. So one can purchase a program from America’s National Public Radio (NPR) from August 1995 that quoted the economist Thomas Hazlett (who is a member of the Keidanren’s convergence and regulatory working group) for $3.95. An article from the Washington Times from 1994 costs $2.95. That is still too expensive for most people, considering how much is available online for free. The papers apparently do not see that there is more to be gained by earning revenue in ancillary ways, such as targeted advertising or affiliate referral fees, than through simple sales.

 

In this context, it should be unsurprising that soon after Mr Murdoch acquired the Wall Street Journal he suggested making its content freely available -- and around the same time, the New York Times made its paid-for sections free. What the papers began to understand was that new media sites only a few years old, like Digg.com and some blogs, get more traffic per day than major newspaper sites. They have to change their business models.

 

In America, newspaper groups benefit from regulation in many ways, particularly with regard to television and radio broadcasting licenses. In 2007 they successfully lobbied the FCC to loosen the media ownership caps slightly. The justification was that the Internet increases the pool of information available to the public. But to their credit, the media companies understand that they must compete, even if they are uncertain how and therefore reluctant to do so. They are not trying to use regulation to block new entrants -- only to ensure that they can compete themselves. By increasing their heft and selling ads across different media, they may in theory let profitable radio and TV businesses subsidize less profitable newspapers as they re-engineer their business model for the online world.

 

There is little time to lose. In the case of the Wall Street Journal, News Corp was able to swallow Dow Jones so easily in part because News Corp is performing so well -- and a major reason is its website division. News Corp bought MySpace, a social networking site and one of the most popular sites on the Internet, for around $600 million in 2005. But most newspaper publishers, and media companies in general, have not been as bold. They have been reluctant to change their businesses to adapt to convergence.

 

Lessons

 

From these examples, companies in the West have been exposed to a few lessons (though whether they have learned from them is another matter):

- Convergence is an opportunity, not a threat.

- It enables the telecoms and media sector to revitalize itself.

- It lets regulation be updated for new technology and modern business.

 

If industry and policy makers seize the opportunity of drafting new regulations:

- It offers rewards to companies that embrace it.

- It is like telecoms privatization, liberalization and deregulation in the 1980s: after short-term uncertainty it will unlock success for many players -- both old and new.

- It will do this by growing the overall pie, not just slicing it into different-sized pieces.

- It benefits the public, and enables firms to meet their mission of helping society.

 

                                                            *        *           *                              

II. Technology Trends

 

Three trends in technology -- blogging, mobile TV and Web 2.0 -- are important to consider in the framework of convergence and regulation, since they point in the direction the Internet is heading as a medium. Moreover, they expose problems with how regulations are crafted for the online world (addressed in the next section).

 

Blogging

 

Blogs are usually described as “online journals” -- but that is dangerously inaccurate, because it misses their fundamental importance. Journals are (mostly) private musings, locked on paper and stored in a desk. Blogs are more than public thoughts -- they are online conversations. They are implicitly about sharing information and exchanging views. First, then, their social nature is critical to appreciate. Second, they are no longer just text but pictures, video and audio.

 

What is happening is this: just as publishing led to literacy and then an outpouring of written works (which in the 1700s would lead to copyright laws and freedom-of-expression laws to regulate ideas, and overturn censorship by monarchies), we are witnessing a similar transformation: a new form of audiovisual literacy among the mass public -- available to a global audience instantaneously. (But the regulations still treat this sort of media as something to be licensed, as I will describe later.)

 

Until recently, blogs were considered unreliable writings by an army of unemployed, caffeinated nerds writing in their bathrobes at home. Now they are mainstream. They are actually not very new, having started in 1997. Between 2004 and 2006 the number of blog posts doubled to as much as 1.3 million a day, according to data compiled by the OECD. By now it may have doubled again (and bear in mind, some of this commentary competes with mainstream media; all the newspapers and wire agencies in the world could never publish close to 1 million dispatches a day). Today, 34% of blogs are in English (the second most popular language is Japanese, comprising 21% of blogs), according to Technorati, which compiles data on blogging. By 2006 there were more than 35 million blogs in China alone. So it is a global phenomenon.

 

Mobile Television

 

Televisions used to be big boxes with rabbit-ear antennas in the corner of the living room. There were a small handful of stations, all heavily regulated for ownership, content, and advertising amounts. The reason was that licenses were hard to come by since spectrum was then considered scarce.

 

But today the idea of television has changed completely. The first change is because of the Internet: sites like YouTube allow individuals to “broadcast” their video content. People can access almost anything, at any time. The constraint of scarce spectrum requiring a broadcast license no longer exists. The second change comes from mobile phones, which let people access this content from anywhere as well. Moreover, people can produce content in the same way, since the phone is a two-way video technology that enables sending as much as receiving. That is a radical change. However, it should not be considered “broadcasting,” for two reasons: viewers actively select the content, and the content is in most cases destined for a very narrow audience.

 

Yet while the technology has changed enormously, revisions to the regulations have not kept pace (as will be discussed later). Today there is a big gap between the two, with big consequences for the economy and public expression. Superficially, maintaining archaic regulations seems to thwart the ability of new companies to challenge media incumbents. But on a deeper level, mobile TV actually offers established media groups the chance to be even more successful -- provided they seize the opportunity to reform their businesses.

 

Web 2.0

 

The term Web 2.0 is a confusing one, because it tends to describe so many different things. Perhaps the best way to understand it is to contrast it with what came before. When the web first became a mainstream medium in the mid 1990s, it was largely a passive medium -- people were entertained by it like a television or a book. It was “interactive,” of course: people could send emails or click on a link -- but this was more like talking back, or changing the channel. It was not truly creative. The new iteration of the web is. Web 2.0 is active, not passive. (Or as one famous formulation put it: the web has gone from being a “noun” to a “verb.”)

 

A distinguishing element is social networking and open collaboration. People or processes come together on the fly to interact and do new things. So, for example, Google offers a map service with an open API -- a programming hook -- that other developers can write web software to interact with. So we can visualize other information atop a map: the location of American presidential campaign contributors (by interacting with the US Federal Election Commission’s online database), say, or real-estate listings. This is a very primitive example of Web 2.0, but it explains the idea.
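
To make the idea concrete, here is a minimal sketch of such a “mashup,” written in Python. It is purely illustrative: the feed URL and the column names are hypothetical stand-ins, not the actual FEC or Google interfaces. The point is simply that once data and maps expose open, machine-readable interfaces, a few dozen lines of code can combine them into something new.

    # A minimal mashup sketch. The feed URL and column names below are
    # hypothetical; real data sources will differ.
    import csv
    import io
    import json
    import urllib.request

    FEED_URL = "https://example.org/contributions.csv"  # hypothetical endpoint

    def fetch_rows(url):
        # Download a CSV feed and parse it into a list of dictionaries.
        with urllib.request.urlopen(url) as response:
            text = response.read().decode("utf-8")
        return list(csv.DictReader(io.StringIO(text)))

    def to_geojson(rows):
        # Convert rows with assumed "lat", "lng" and "amount" columns into
        # GeoJSON point markers, which mapping APIs can overlay on a map.
        features = []
        for row in rows:
            features.append({
                "type": "Feature",
                "geometry": {
                    "type": "Point",
                    "coordinates": [float(row["lng"]), float(row["lat"])],
                },
                "properties": {"amount": row["amount"]},
            })
        return {"type": "FeatureCollection", "features": features}

    if __name__ == "__main__":
        print(json.dumps(to_geojson(fetch_rows(FEED_URL)), indent=2))

The design point is that neither the map service nor the data source needs to know the mashup exists; open interfaces let third parties recombine them freely.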

 

Yet the most important trait is constant change. We are building this medium ourselves, as the aggregation of more than a billion individual contributions. The very term “Web 2.0” implicitly reminds us that nothing prevents -- indeed, the nature of the technology begets -- a Web 3.0, 4.0 and so on. The technology is not stable, as so many other communications media, like radio, TV and even telephones, have been. This continual metamorphosis -- or “generativity,” in the phrase of Professor Jonathan Zittrain of the Oxford Internet Institute and Harvard Law School -- is in part due to the web’s open, decentralized, bottom-up character. Nobody owns it, yet we all contribute to it.

 

Regulatory policy will have a hard time applying classic rules covering media and communications to Web 2.0. The idea of “organizations” that used to have articles of incorporation, headquarters, meeting minutes, ledgers and a bank account is being replaced by self-organizing bands of people and groups that collaborate and dissipate like the tide. We will have to design new regulatory rules -- and we will only learn what they are from experience, not before the fact.

 

                                                            *        *           *                              

III. Errors in Regulation

 

These technology trends are important to understand in the context of two regulatory pushes: the EU Audiovisual Media Services Directive of 2007, and recent proposed legislation in the US to mandate “network neutrality.” In both cases there were problems in how regulators thought through the rules, and these are useful to understand so that the errors can be avoided.

 

EU Audiovisual Media Services Directive

 

The European Union’s Audiovisual Media Services Directive was approved in December 2007 and must be transposed into national law by member states by the end of 2009. It was meant to modernize the “Television Without Frontiers” directives of 1989 and 1997, which were designed to facilitate cross-border market entry among broadcasters by harmonizing European rules. It places controls on content, programming times and advertising amounts.

 

For example, a draft liberalized the frequency of advertising breaks allowed in film programming from one every 45 minutes to one every 35 minutes. But in return, the interval for ad breaks in news and children’s shows -- the least commercially viable programming -- was extended from every 30 minutes to every 35 minutes. This is the degree to which television is regulated; and this is what constitutes reform.

 

Most importantly, the directive introduces the idea of “linear” versus “non-linear” content. There is a fuzzy, gray area between the two, but basically linear content is programming that looks and feels like regular TV from 1970: it reaches a wide audience, it is scheduled, it is for-profit, the station keeps a record of what appeared and when, and so on. “Non-linear” content, on the other hand, is material that may be intended for a small audience, produced without a commercial motive, and viewed by people on specific request (such as by clicking on a web link, as opposed to turning on the set).
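
To see how fuzzy the boundary is, consider a toy classifier -- my own illustration, not the directive’s actual legal test -- built from the criteria just described. Most real online video meets some of the criteria but not all, and so falls into the gray area:

    # A toy sketch of the linear/non-linear distinction. The criteria and
    # thresholds are illustrative only, not the directive's legal test.
    def classify(scheduled, wide_audience, for_profit, keeps_records):
        met = sum([scheduled, wide_audience, for_profit, keeps_records])
        if met == 4:
            return "linear: regulate like traditional broadcast TV"
        if met == 0:
            return "non-linear: lightly regulated on-demand content"
        return "gray area: %d of 4 criteria met -- who decides?" % met

    # A weekly video podcast: scheduled and ad-supported, but with a tiny,
    # self-selecting audience. The rule gives no clean answer.
    print(classify(scheduled=True, wide_audience=False,
                   for_profit=True, keeps_records=True))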

 

The problem with these definitions is that they are artificial, and try to reproduce the classic “licensing” model of broadcasting on a medium that has few characteristics of the traditional television model. Today anyone can be a “TV station,” just as anyone could be a “publisher” in 1455, but the new rules restrict this. It is not “broadcasting” but “narrowcasting” -- it often goes to a small audience rather than the broad public, even if it is accessible to anyone.

 

So why did the EU draft a policy that applies classic regulatory models to a new medium that does not require such control, since the original justifications for regulation -- the broad public and the scarce spectrum allocation -- do not exist on the web? The reason is that public broadcasters, feeling threatened, sought to define “linear” versus “non-linear” content with gray areas so as to place the same regulatory burdens on new online content creators. As one pan-European mobile phone company executive quipped to The Economist in October 2006: anything that increases the barrier to entry for content providers is a good thing for his firm.

 

But this is short-sighted. What many of today’s incumbents are now realizing is that instead of using regulation to block new rivals, they can use the combination of new technology and a permissive regulatory environment to launch new sorts of audiovisual programming, online and on mobile phones, that compete very successfully against the start-ups. Incumbent broadcasters in Europe did not appreciate the degree to which easing the rules would benefit them, since they have professional content, an established audience base and, most importantly, production and programming-management expertise (that is, they understand audience tastes and how to manage a large organization that caters to them).

 

But today, media groups are starting to understand that they are very well placed to benefit from convergence and new regulation -- provided that they don’t make the mistake of trying to hold it back.

 

US legislation on “network neutrality”

 

The term “network neutrality” refers to the idea that the Internet should, as far as possible, be an “open” and “neutral” platform for parties to send and receive traffic, without a middleman -- the network service provider -- acting as either a censor or a tollbooth. This “end-to-end” nature of the Internet is one of its founding principles: the network was designed to be as decentralized as possible, in contrast to the telephone network, which was highly centralized and whose usage was tightly controlled (in terms of what devices could connect to the network and the services it offered, though less so the actual content).

 

The idea of network neutrality is what underpins the amazing innovation that has taken place over the Internet, including not least the World Wide Web itself (which was invented not by Internet engineers but by a British physicist working in Geneva in 1991). Take away network neutrality, say purists, and you basically destroy the Internet.

 

For example, telecoms operators could demand that websites like Google and eBay share some of their revenue for sending so much data over their networks. Or cable companies that offer Internet access might block video sites, since these compete with their core offering -- just as telecoms firms might block Internet calling to protect their core revenue. Network operators such as AT&T, Verizon and Comcast, meanwhile, argue that they need to increase their revenue streams if they are to pay the huge costs of upgrading the networks to handle all the traffic these big and lucrative companies are sending.

 

In fact, this week the FCC is meeting at Harvard Law School to hear views on the network neutrality issue. It comes as a cable provider, Comcast, stands accused of intentionally slowing down traffic from BitTorrent, a popular peer-to-peer file-sharing system that consumes a lot of bandwidth, in part because many people use it to illegally download copyrighted material.

 

Internet companies such as Google and eBay have lobbied Congress to introduce a number of pieces of legislation to mandate network neutrality (though none has become law). One that has made the rounds in previous and current legislative sessions is the “Internet Freedom Preservation Act” submitted by Ed Markey (a Massachusetts Democrat) and Charles Pickering (a Mississippi Republican). It would oblige the FCC to institute “open-access” rules that forbid network providers from interfering with lawful content on their networks.

 

The problem is that the policy, although sensible, is unworkable: network operators have to prioritize the heavy data flows over their networks -- and the best way to solve resource-allocation issues is through economics, eg, making people pay more. Moreover, the legislation presumes that there is only one economic model for the Internet -- the one that currently exists. For a medium characterized by constant change, that is a very dubious presumption. The rules actually tend to centralize a system that is distinguished by its decentralization.
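
A toy sketch makes the tension concrete. The code below -- purely illustrative, and nothing like any operator’s real traffic-management system -- shows the basic mechanism at issue: once packets carry a priority, someone must decide who goes first, and that decision can just as easily be sold as engineered.

    # A toy packet scheduler, purely illustrative. Lower priority numbers are
    # served first; a "paid fast lane" would simply map a higher fee to a
    # lower number here.
    import heapq

    class Scheduler:
        def __init__(self):
            self._queue = []
            self._arrival = 0  # tie-breaker to preserve arrival order

        def enqueue(self, packet, priority):
            heapq.heappush(self._queue, (priority, self._arrival, packet))
            self._arrival += 1

        def dequeue(self):
            # Raises IndexError when the queue is empty.
            return heapq.heappop(self._queue)[2]

    sched = Scheduler()
    sched.enqueue("video stream from a start-up", priority=2)
    sched.enqueue("voice packet from the operator's own service", priority=0)
    sched.enqueue("ordinary web page", priority=1)

    for _ in range(3):
        print(sched.dequeue())
    # Output: the operator's own traffic first, the start-up's last.

Nothing in the mechanism itself distinguishes sensible engineering from commercial favouritism -- which is precisely what makes the legislation so hard to draft.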

 

Furthermore, the rules will surely have unintended consequences that may disturb the natural technical evolution of the network. (There were many other moments in the history of the Internet when regulation seemed necessary -- such as an Internet service provider interconnection dispute in 1994 and the controversy over “peering” among ISPs in 1998 -- but the Internet weathered those storms through the marketplace.) Indeed, the law may not even be needed if classic instruments of regulatory policy are applied, such as rules on interconnection, non-discrimination and market dominance.

 

In the end, network neutrality seems like a good thing and worthy of being preserved. It is like free trade: as little encumbrance as possible between parties to an exchange. But judging from the way that lawmakers and special-interest groups have participated in the debate, the legislation would undermine its own worthy goals, by foisting uncompromising regulations onto a medium that has often shown it evolves best when left to a minimum of control.

 

Indeed, the error here is regulatory over-reach: the central problem is the scarcity of access and transmission -- that is, the lack of a true market in network providers. That ought to be addressed in itself, rather than treating just the consequences of this inefficient market. Instead, network neutrality legislation presumes that policymakers are better than the market at establishing revenue models -- a difficult view to sustain, particularly in a case where the underlying technology and business models are continually open to change.

 

Summary

 

What unites these two regulatory errors with the three technology trends I identified earlier is the idea that the Internet we have is one we are building ourselves. It is typified by user-generated content, developed in a bottom-up, collaborative and often non-proprietary, not-for-profit way. It is giving rise to new businesses and business models that will surely challenge today’s incumbents in media and telecoms. And the experience of the West is that regulation that tries to protect one party or another risks holding everyone back.

 

                                                            *        *           *                              

Conclusion

 

Convergence brings benefits to companies provided they take the initiative. It also represents a chance to clean up the regulatory mess that has accumulated over the past century of policy covering print, radio and television, as these three media merge. Indeed, the two things go hand in hand.

 

It is important to set policy well now, considering what lies just over the horizon. While today convergence refers to the commingling of networks, services, devices and industries that were formerly separate, in future it will refer to a melding of the physical and virtual worlds through “ubiquitous networking.” This refers to the way in which we will embed networking connections in everything: machines, objects, appliances, buildings, the environment, even human bodies for medical purposes. That is convergence on a totally different scale. We need to set the rules for this iteration of convergence well now, since the really hard questions are still to come.

 

Although keeping an eye on the future is important, for the moment we need to focus on the here-and-now. Japan is well placed to do well in this new world of media, telecoms and technology convergence. Provided the right regulatory structure is in place, a media industry may flourish on many types of platforms. The country’s expertise in hardware, leadership in broadband and the creativity of its public may unleash new innovations, services and business models.

 

What brings us here today is the Japanese government’s process to revise its telecoms and broadcasting laws to fit modern communications technologies. It is a very important process, and I congratulate the Japanese government and business leaders for their foresight, dedication and goodwill. I recognize that the current regulatory structures are familiar to many companies. Changing them appears to undermine market arrangements that today seem to serve all parties well. Moreover, the changes in technology create uncertainty for companies’ businesses.

 

The mistakes made in Europe and America suggest that it would have made more sense for large companies to embrace the changes rather than resist them. In fact, the changes can actually be more rewarding for companies than sticking with the status quo. The other lesson from the experiences of American and European companies is that a failure to change -- an attempt to stay the same -- entails a large cost: the technology is not held back forever, and the global market is unfriendly to those who fail to adapt.

 

Regulations can assist these changes. But when they are used to protect some players against competition, they not only hurt the public interest -- they end up holding back the very companies that sought them in the first place. That is because convergence tends to increase the size of the overall pie, not diminish the slices of current players. But this holds only if business leaders and regulators are willing to rise to the challenge and accept the need to change, to serve the public.

 

I hope that Japan might find the experiences of the EU and US useful in its reflection process for the country’s regulatory reform.

 

Thank you.

 

# # #