If you use DSL, you’ve already lost …

I’ve been meaning to write something about this for a long time. This won’t be a perfectly thought out piece. It’s been years since this stuff was in my head, so it is what it is.

In thinking about net neutrality, I was remembering back to the early, early days of DSL. At the time, I was a system admin and jack-of-all-trades at one of the largest regional ISPs in the Puget Sound area, based in Seattle.

The thing I realized, as we were all discussing whether to get into the DSL business, was that DSL traffic was being routed over the telco’s ATM network. DSL packets on that ATM network were given no priority and could be dropped; this was in the contracts that the ISP had to sign. The data to and from ISP customers connecting via DSL was the least important data on the ATM network, and if there were any congestion, that data could be killed first.
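Here’s a minimal sketch, in Python, of the sort of drop policy I’m describing: under congestion, the lowest class of traffic gets discarded first. The class names and the queue behaviour are purely illustrative assumptions on my part, not anything taken from those actual contracts.

```python
# Hypothetical sketch of the drop policy described above: when the switch's
# queue is full, the lowest-priority (wholesale DSL) traffic is discarded
# first. Class names and capacities are illustrative, not from any contract.
from collections import deque

PRIORITY = {"voice": 0, "business_data": 1, "wholesale_dsl": 2}  # lower = more important

class CongestedQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cells = deque()

    def enqueue(self, cell):
        # cell = (service_class, payload)
        if len(self.cells) < self.capacity:
            self.cells.append(cell)
            return True
        # Queue is full: evict the lowest-priority cell already queued,
        # or reject the new one if it is itself the least important.
        worst = max(self.cells, key=lambda c: PRIORITY[c[0]])
        if PRIORITY[cell[0]] < PRIORITY[worst[0]]:
            self.cells.remove(worst)        # the wholesale DSL cell gets pushed out
            self.cells.append(cell)
            return True
        return False                        # the DSL cell is simply dropped

q = CongestedQueue(capacity=2)
q.enqueue(("wholesale_dsl", "customer packet"))
q.enqueue(("voice", "call sample"))
print(q.enqueue(("business_data", "invoice")))  # True: evicts the DSL cell
```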

I remember realizing that this was the big dirty secret. Not only could the big telco get dial-up people, with their long calls and need for decent line quality, off the wire, but by moving the customers of independent ISPs onto the ATM network, the big telco could do all kinds of prejudicial and nasty routing of that data.

So, my point is this: independent ISPs were forced by the big telcos to give up network neutrality when they started to offer DSL services back in the 90s. And individuals connecting to the Internet via DSL also silently lost network neutrality in the same moment.

Big telcos might be so cavalier about net neutrality these days because they know they’ve been winning that war for years already. They’ve always been trying to monetize the data both ways, and now that the independent regional and smaller ISPs are pretty much irrelevant … who’s left to be vigilant if not the individuals themselves?

Suddenly scope-locked on net neutrality

Om Malik posts about an article on the issue of network neutrality which appears quite good. Additionally, there’s a much better metaphor than mine here:

Via Om Malik’s Broadband Blog, “Net Neutrality Not An Optional Feature of Internet”:

“The telco and cable companies have in mind creating another type of customer not a class of service. They want suppliers to pay for the right of transit. It amounts to airlines charging Time Warner for the right of readers to take Time magazine on an airplane. It means charging Ford tolls in addition to drivers for the right of Ford cars to use highways.”

Great point about this being a way for companies to buy market results, not just network performance. This echoes my worry about the notion of “postal” charges for routing around spam filters. After all, if a company is paying a large amount of money, the mail provider’s incentive is in keeping that revenue, not in controlling the content of the messages that would otherwise be filtered.

And, here’s the thing: a source is only likely to pay to be routed around a spam filter if the content of the message is likely to be filtered in the first place. Perhaps there’s a reason it was going to be filtered, and the recipient hadn’t added the source to their whitelist or address book? It’s because that content is, or resembles, spam, right? Otherwise, why pay to be routed around the filter at all?

So, who’s the product for, then? It’s a way to get questionable content to the recipient, so the product is, in a way, a threshold, an economic barrier to questionable content. It doesn’t eliminate the questionable content, but rather gentrifies it.

The network-management, quality-of-service argument for ending network neutrality misses the fact that QoS does not work outside a private network environment, where a single entity controls usage end to end. The implementation of QoS remains limited to private networks because extending it across carrier boundaries makes the negotiation of interconnection compensation intractable.
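To illustrate what I mean, here’s a small, hypothetical Python sketch of what generally happens to priority markings at an administrative boundary: unless honoring another carrier’s classes has been negotiated (and priced), the marking just gets reset to best effort. The field names and the set of “honored classes” are my own assumptions, not any particular carrier’s practice.

```python
# Hypothetical illustration of why QoS markings stop at the network edge:
# each carrier only trusts its own priority markings, so traffic crossing
# an interconnect is typically re-marked down to best effort unless the two
# carriers have negotiated (and compensated) honoring each other's classes.
BEST_EFFORT = 0

def cross_interconnect(packet, honored_classes=frozenset()):
    """packet is a dict with a 'dscp' priority field set by the sending carrier."""
    if packet["dscp"] not in honored_classes:
        packet = {**packet, "dscp": BEST_EFFORT}  # marking is not trusted; reset it
    return packet

pkt = {"src": "voip-gateway", "dscp": 46}             # expedited forwarding at home
print(cross_interconnect(pkt))                        # {'src': 'voip-gateway', 'dscp': 0}
print(cross_interconnect(pkt, honored_classes={46}))  # honored only if negotiated
```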

Or, rather, it is intractable unless the major carriers are able to create a formal or informal cartel, in which they perhaps come as close as they can to an oligonomy. With the quickly diminishing number of carriers, there’s very likely to come a time when the battlefields for competition are informally agreed upon by the carriers instead of determined by the market.

I note with interest that the “free lunch” meme seen from AT&T’s Whitacre now appears in reported remarks from Verizon’s John Thorne as well. Sure, it could have been independent development, but it’s interesting to see both appearing to espouse such similar thoughts. These two nominal competitors have aligned, and that’s not good.

Via Washington Post, “Verizon Executive Calls for End to Google’s ‘Free Lunch’”:

“The network builders are spending a fortune constructing and maintaining the networks that Google intends to ride on with nothing but cheap servers,” Thorne told a conference marking the 10th anniversary of the Telecommunications Act of 1996. “It is enjoying a free lunch that should, by any rational account, be the lunch of the facilities providers.”

The current government is overwhelmingly aligned with large corporate and multi-national interests, so network neutrality could be in clear and present danger.

Monetize that service!

Over at Boing Boing, they offer comment on something going around the Net, “AOL/Yahoo: our email tax will make the net as good as the post office!”

AOL and Yahoo have proposed a system to charge senders a quarter of a cent for each email delivered to their customers.

I keep hearing Adam Ant singing, “Stand and deliver, your money or your life!”

This is another potential loss of network neutrality, of course. The large providers are transiting huge amounts of mail, and they could create tiers. I would expect they would develop at least a third tier of expedited delivery and interstitial-like behaviour for an even greater premium.

I think the concern over groups not being able to deliver is a little bit reactionary. I would suspect that non-paying e-mail would be treated like spam, with exceptions for contacts in one’s own address book. In the NYT article, this is made pretty much explicit: the stamp is for senders “if they want to be certain” of delivery, and the system “gives preferential treatment” to paid deliveries. This is essentially a way for a company to buy a way around spam filters.
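In other words, the delivery policy as I read it looks roughly like this little Python sketch, with three paths: a paid stamp buys guaranteed delivery, a sender in the recipient’s address book gets through, and everything else goes through the usual spam filtering. The function and field names here are mine, not AOL’s or Yahoo’s.

```python
# A minimal sketch of the proposed delivery policy as I understand it:
# paid stamp -> guaranteed inbox; known contact -> inbox;
# everything else -> the usual spam filtering. Names are illustrative only.
def route_message(msg, address_book, spam_score_fn, threshold=0.5):
    if msg.get("stamp_paid"):
        return "inbox (guaranteed, stamped)"
    if msg["sender"] in address_book:
        return "inbox (known contact)"
    return "junk" if spam_score_fn(msg) > threshold else "inbox (filtered normally)"

book = {"friend@example.org"}
score = lambda m: 0.9 if "act now" in m["body"].lower() else 0.1

print(route_message({"sender": "promo@example.com", "body": "ACT NOW!", "stamp_paid": True}, book, score))
print(route_message({"sender": "promo@example.com", "body": "ACT NOW!", "stamp_paid": False}, book, score))
```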

The danger comes not from the stamp charge itself, but from how the rest of the e-mail is treated, compared to now, once the stamp cost is put into place. Do I trust that they won’t try to push senders toward the paid tier by treating unpaid mail poorly? Not really. Do I trust that they will not treat paying senders preferentially by delivering corporate spam wrapped up in a cloak of respectability, like AOL’s old pop-up ads? Not really.

I notice that the NYT article does make an explicit connection between this topic and the broader issues of network neutrality.

Update: Boing Boing has updated their posting to better reflect the source material, and now say:

AOL and Yahoo have proposed a system to charge senders a quarter of a cent for guaranteed delivery on each email sent to their customers.

Google is the new black

Rumours mount over Google’s internet plan

Google is working on a project to create its own global internet protocol (IP) network, a private alternative to the internet controlled by the search giant, according to sources who are in commercial negotiation with the company.

There are also job postings for positions that fit with Google transitioning from a search engine into a global backbone provider.

Very interesting follow-up to other thoughts about all that dark fibre. But don’t forget the borg cubes they’ve been working on. This would be a content provider with their own backbone, and that means they are more akin to an Internet version of a cable network.

This search engine becoming a global carrier can’t help but remind me of the way that many BBS operators became Internet Service Providers in the first days of the commercial Internet, early in the 90s when the rules changed.

I don’t think many of those BBS operations managed to survive independently. I suppose, now that I think about it, AOL started that way, and they consumed CompuServe too, which was the service that was packaged with every modem for so many years, including my old C64’s 300 baud modem.

But the mom & pops all died or got bought, I suspect. Quite a few got consumed by small telecoms or conglomerated into national providers, like Olywa being purchased by ATG and Verio/NTT, or Earthlink’s acquisitions, to take just a sample. They got shoved out of the business because they couldn’t keep up with the constant changes in technology, with two versions of 56k, then ISDN, and then DSL. Then, on the other side, there were the changes in the marketplace. Once the big companies realized it was a stable market in which profits could be made, they stepped in to take that profit directly, instead of indirectly through intermediaries to whom they sold bandwidth.

When ISPs were making money, the backbone providers cried sour grapes about all the money they weren’t making, money that was being made by the providers. Now, once again, there’s a similar reaction to Google making money providing services over the backbone. This reminds me of the way that large technology players offload R&D onto the marketplace and then purchase what is successful; this is the Microsoft strategy par excellence. By externalizing the cost of risky development, most of which is likely to fail, it is possible to stabilize internal R&D on more conservative ventures; perhaps this is in part a response to investor skittishness as well.

It may just happen that content moves closer to the large players, like how CBS has pulled in episodes of Survivor onto their own website. Seems like AOL just missed the curve when they got some cable broadband. I wonder if they should buy a backbone like Google is doing?

This is an interesting pattern of networks and services contracting into and diverging from each other. I suppose, looking back, the FIDOnets and BBS networking were a bit of an expansion from when BBS services were self-contained. Then there were also cases where ISPs attempted to provide local services, like game servers, etc … Any local service was almost free in comparison to accessing something outside the local network.

When the network is tied to the service, people will desire to uncouple them. But when they are too abstracted from each other, I wonder whether people desire them more closely tied together, for ease of use. This was the dynamic that kept AOL customers tumbling off into the hands of the local ISPs, until the major telecoms figured out how they could be unfriendly enough to push the independents out while staying within their regulatory binds.

There ain’t no such thing as a free ride.

Via Broadband Reports, in “FT.com / Companies – AT&T chief warns on internet costs”:

“We have to figure out who pays for this bigger and bigger IP network,” said Mr Whitacre, who was in New York ahead of AT&T’s annual presentation to investors and analysts on Tuesday. “We have to show a return on our investments.”

“I think the content providers should be paying for the use of the network – obviously not the piece from the customer to the network, which has already been paid for by the customer in Internet access fees – but for accessing the so-called Internet cloud.”

{snip}

“Now they might pass it on to their customers who are looking at a movie, for example. But that ought to be a cost of doing business for them. They shouldn’t get on [the network] and expect a free ride.”

There ain’t no such thing as a free ride, and there never was.

First, this is, in effect, the CEO of AT&T inviting other carriers to demand payment for transiting data that originates on AT&T’s network. Every other carrier should immediately send AT&T a bill, care of Mr. Whitacre, CC’d to the shareholders.

Second, there is already no free ride. Every connection to the Internet costs someone something. Even if one were to connect at a public peering point, there’s a cost for equipment, and I’m willing to bet that Google already pays someone quite a bit for access. So, who’s Google’s upstream provider? Is that upstream provider going to allow AT&T to surcharge its customer without retaliation?

On the other hand, is this a pre-emptive strike against Google before they light up all that dark fibre? If Google used newly lit fibre to bypass much of the existing backbone, much as Internap innovated around public peering with connections negotiated directly with the carriers, doesn’t Google actually become on par with an AT&T as a fellow backbone provider? I wonder if that’s the real fear expressed here. AT&T could become irrelevant.

This strategy is one that the other carriers should encourage AT&T to follow through on, because it may well put AT&T out of business.

Third, the Internet routes around damage. If AT&T’s pipes become expensive, then other carriers will see business increase. AT&T will not see the money that Whitacre seems to think it will. The loss of network neutrality will slow adoption of services and lose AT&T customers in the long run.

On the other hand, if all the carriers adopt the same strategy, then broadband is dead. The two-way, interactive Internet is dead and becomes just another implementation of on-demand cable services. I wonder what would grow up to replace it?

Wouldn’t this be the same kind of refusal to serve customers that motivated Tacoma to bypass TCI for cable service and implement a municipal network instead?

At the very least, a loss of network neutrality would make it comparatively more profitable for a service company like Google to light up that dark fibre and start selling DSL services. That was pretty much the direction in which Earthlink seemed to be innovating.

AT&T could drop off the Internet as consumers and providers all route around those pipes.

Fourth, doesn’t AT&T have service agreements that this would contradict, over which they could be sued? If a customer buys a DSL connection with some broadband speed, and AT&T itself throttles the sites the customer desires to reach … isn’t that misleading, a hidden cost, or worse?

Fifth, I forget what my fifth point was, but … Oh, yeah, this whole “so-called Internet Cloud” thing … by analogy, that would be a toll to drive your car onto the freeway, paid directly out of your pocket, plus a charge each time you drove off the freeway, passed on to you in the cost of the goods you purchased and services you consumed at your destination. There is no such thing, really, as an “Internet Cloud”; there is just a web of interconnected private TCP/IP networks, which is the definition of the capital-“I” Internet. (And, yes, there can be more than one. This isn’t Highlander, folks.)

Sixth, if I were an investor in AT&T, I would think seriously about diversifying. This is the CEO of AT&T saying that they do not have a sustainable business model. This is old school phone company monopoly behaviour in a world that has moved on to other monopolies. I doubt “retro” was the company image AT&T wanted in the marketplace.

I wrote a strategy memo a long time ago, over a decade ago now, while I was working at a regional ISP. In that document, I talked about ideas for continued success and touched on network neutrality issues. In one paragraph, I talked about the two-tier Internet that might be around the corner, both then (the memo was last edited Aug ’98) and, apparently, now again:

One of the theories that I’d had a couple of years ago was that the Internet as we now know it was going to split into two networks under the pressure from commercialization. My theory stated that since the needs of the commercial use of the Internet are essentially one-way and not interactive, there would be developed technologies that would provide high-bandwidth to the consumer and an asymmetrical bandwidth back up the pipe. This would satisfy the extent of the commercial use of the Internet so that people could click on the “Buy Now!” button while not burdening the commercial providers with any of the abstracted technical needs of a more fully interactive network. Further, this split would leave the ISP and other symmetrically allocated services as a second-class citizen on slower networks, thus relegating the non-commercial Internet to a backwater of pokey interconnections they’d negotiated among themselves.

Out of nostalgia, I’ve attached that document to this posting: Continued Success? (rtf 24k). It’s interesting to read these things.

Moving the service to the public peering points

Via Slashdot, “Google’s Secret Plans For All That Dark Fiber?”:

“The idea is to plant one of these puppies anywhere Google owns access to fiber, basically turning the entire Internet into a giant processing and storage grid. While Google could put these containers anywhere, it makes the most sense to place them at Internet peering points, of which there are about 300 worldwide.”

Via Slashdot, there’s a comment about a Cringely article talking about something that Google is up to with both dark fiber, which I mentioned before, and what may essentially be semi-mobile processing and storage borg cubes. These borg cubes then become a platform for just about anything. Certainly, they can cache Google video and image content closer to users, who are becoming more and more capable of broadband speeds. Google could also leverage the cubes for delivery services, like what Akamai does, or have them act as active peers in BitTorrent or other sharing networks. These borg cubes could also become hosting locations for network applications or more mundane web and blog services.
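As a rough sketch of that edge-caching idea, assuming Google really does park a storage and processing container at a peering point: the first request for a piece of content hauls it across the backbone, and every later request gets served locally. The names here (“container”, “origin”) are mine, not anything from the article.

```python
# A rough sketch of edge caching at a peering point: content is fetched from
# the origin once, then served from the nearest container on later requests.
class EdgeContainer:
    def __init__(self, location):
        self.location = location
        self.cache = {}

    def get(self, url, origin):
        if url not in self.cache:
            self.cache[url] = origin[url]   # first request: haul it across the backbone
        return self.cache[url]              # later requests: served at the peering point

origin = {"video/123": b"...many megabytes..."}
seattle = EdgeContainer("Seattle peering point")
seattle.get("video/123", origin)   # fetched from the origin once
seattle.get("video/123", origin)   # now served from the edge, closer to the user
```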

Moreover, along with just about anything the imagination could conjure, these cubes could become the host of a Google VOIP service and act as the peering points for Google MuniWiFi services. (One might watch to see if they put one of these cubes at the peering point for any place where they’ve implemented their MuniWiFi.)

Instead of moving the network connection to the public peering site, Google might be moving the services to the public peering site and using the fiber as a private network.

UPDATE 7jan06: Well, looks like I was close with the comment about caching video closer to the users, but didn’t quite see that they would be going after the pay-to-play / TV downloads market.

There’s gold in them there network packets

Via Business Week, “At SBC, It’s All About ‘Scale and Scope’”:

“So there’s going to have to be some mechanism for these people who use these pipes to pay for the portion they’re using. Why should they be allowed to use my pipes?

The Internet can’t be free in that sense, because we and the cable companies have made an investment and for a Google or Yahoo! (YHOO ) or Vonage or anybody to expect to use these pipes [for] free is nuts!”

Here’s a returning argument, one that was used by MCI many years ago and that I recalled in a posting about the tussle between Cogent and Level 3.

Ars Technica responds,

“… he [CEO Edward Whitacre] leaves out the most important part: their customers. It’s SBC’s DSL customers who are paying to “use these pipes,” and the idea that certain kinds of usage are categorically different than others has a fair share of problems.”

A related argument is about the way ISPs have been asked, on occasion, to pay additional taxes to offer Internet services, such as when the city of Tacoma, WA attempted to tax ISPs. The problem is that ISPs already paid taxes and various other fees on each phone line (at that time it was all about dial-up). For example, ISPs were still paying for 911 service on every POTS line.

On Fark.com, “[Interesting] Two companies come to an agreement that will keep a large chunk of the Internet from collapsing like it did last month”

This thing between Level 3 and Cogent isn’t the first peering issue. There have been others, like when several backbone carriers threatened to start charging to transit non-local traffic. I think that was MCI, which at the time essentially was the backbone. Slightly different was the issue when some podunk ISP in Florida had munged BGP locally and the damage got propagated such that huge swaths of the Internet got routed down to some tiny ISP …
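For anyone who wasn’t following along at the time, here’s a simplified Python sketch of why that kind of BGP mistake propagates so badly, assuming the classic failure mode: a small ISP announces more-specific prefixes for address space it doesn’t actually serve, and longest-prefix-match routing then sends everyone’s traffic its way. The prefixes here are made up for illustration.

```python
# Simplified illustration of a route leak: longest-prefix match means a
# more-specific announcement wins, so a tiny ISP's bad announcement can
# attract traffic that should have gone to the real backbone.
import ipaddress

routing_table = {
    ipaddress.ip_network("203.0.113.0/24"): "big-backbone",
}

def best_route(dest, table):
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in table if addr in net]
    return table[max(matches, key=lambda n: n.prefixlen)] if matches else None

print(best_route("203.0.113.10", routing_table))   # big-backbone

# The tiny ISP leaks a more-specific announcement covering the same space...
routing_table[ipaddress.ip_network("203.0.113.0/25")] = "tiny-isp"
print(best_route("203.0.113.10", routing_table))   # tiny-isp now attracts the traffic
```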

This is what the public peering points were supposed to be for, but they were always seriously overloaded. Internap’s big entry into the market was all about private peering agreements that bypassed the public peering points in order to route more efficiently.

But here are the delayed consequences of relaxing the commercial rules about the use of the Internet, back before the big expansion in 1995. I’m not saying the change was necessarily bad, but the public and open infrastructure has become increasingly privatized and negotiated. If two companies can de-peer each other and isolate entire islands of the Internet, then the power of the protocol to route around faults is at risk. The inherent strength of the Internet to withstand disruption of specific nodes and routes is compromised. If the physical structure of peering is such that it is no longer possible to route around damage, then the whole infrastructure is at risk.

Google has been buying dark fibre, and there’s a whole lot of dark fibre. Fibre in the ground was one of the big projects during the tech boom, but it’s managed like the diamond or oil supply: supply is held back in order to manage prices and profit. So lots of fibre was put in the ground, but left dark. Now, Google has been buying it. They’ve certainly got reason, by way of massive amounts of content being served, to want their own network with private peering agreements.

However, it seems to me that dark fibre should really become the new highway system, not even more private toll-roads on an already endangered system of information transportation.