Thursday, March 31, 2011

Announcing the most important tool for Twitter users ever invented

Harritronics is very proud to announce a major breakthrough that will help anyone who has ever tried to share a URL via Twitter or an SMS message.

As you may know, these messaging systems limit users to a mere 140 characters per message. Many URLs are more than 140 characters in length, and simply cannot be shared using these tools.

Given this limitation, you would have expected others to step up to the plate, to make tiny URLs, URLs that are a bit of the size of the original, but try as I might, I have yet to find such a tool.

So we are very pleased to announce the world's first tool to allow users to share URLs with others via Twitter and SMS. It's called Paul's Really Good URL Shortener, and you can use it at


This amazing tool allows you to use shorter URLs in place of larger ones. Simply enter the URL in the box provided, and long URLs like:

http://blog.harritronics.com/2011/03/blog-housekeeping-comments-policies-and.html

become shorter, easy to use URLs like:


This powerful tool is completely free, for the entire first day of April. Once you use it, you'll never want to use anything else.

Fixing a spelling mistake

Different environments require different approaches to the software release cycle.

I've worked in environments in which programmers essentially developed on production, fixing things as they cropped up. It wasn't pretty.

I've also worked in environments in which any changes required a CRF, which would then be passed by the engineers for a quote, approved by management, fixed on development, tested on test by QA, released to stage by technical services using a form submitted by the developer, and then scheduled for release to production by technical services.

The latter system always seems great in theory, but it can be mindbogglingly bad in terms of fixing small issues or getting anything done in a reasonable period of time. One issue with it is that it makes everything expensive: time that could be spent by developers, testers, system administrators, etc, on getting things done is spent instead on meetings and bureaucracy. And all groups end up resenting one another because their procedures are preventing progress.

So what happens if you try to fix a spelling mistake in the above environment? Well, unless it's prominent, nothing. You can't. It's too expensive. The only way it's going to be done is if a developer decides to fix it while doing a lot of other scheduled work.

What's the happy medium? It all really depends on the application you're developing. If you're talking about an application that's going to be deployed by a third party, who'll expect "releases" and "bug patches" - essentially, the type of application that, once upon a time, was distributed on CDs and sold in stores - then the above isn't actually a bad model to use.

Internal projects, be they tools used by your colleagues, or the code that supports your public websites, need a somewhat different approach. Your business needs to ask itself to what extent temporary downtimes are acceptable. You need to make heavy use of backups and SCM. And you need to design a rational development process against your specific requirements.


Wednesday, March 30, 2011

On Web SQL Database and IndexedDB

One of the trickier aspects of web development is dealing with storage. When I started out, the only things you could store in the user’s web browser were cookies, which generally didn’t give you a lot of room to store anything complex.

Time moves on and so does web technology. The major, major, issue that web developers wanted addressed was the fact they could develop really beautifully advanced applications in Javascript, but could only store data by sending it over the network to a separate server. With HTML5, there was also a desire to develop applications so that they’d run off-line. Where could they store data?

There are currently three ways to store data in a web browser. Only one is a real (both de-facto and de-jure) standard: it’s called Web Storage, and most mainstream web browsers, with the exception of Internet Explorer versions 1-7, support it to a certain extent. The system is crude, allowing applications to store key/value pairs (e.g. “USERNAME=paulh”) but nothing more than that.
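
To give a flavour of how crude it is, here's a minimal sketch of the localStorage half of Web Storage (the keys and values below are made up; sessionStorage works the same way, but only lasts for the browsing session):

  // Web Storage is just string keys mapped to string values.
  localStorage.setItem("USERNAME", "paulh");

  const username = localStorage.getItem("USERNAME"); // "paulh", or null if never set
  console.log(username);

  // Anything structured has to be flattened into a string by hand.
  localStorage.setItem("prefs", JSON.stringify({ theme: "dark", fontSize: 14 }));
  const prefs = JSON.parse(localStorage.getItem("prefs") || "{}");
  console.log(prefs.theme);

  localStorage.removeItem("USERNAME");

That's the entire model: no queries, no indexes, no transactions.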

To advance beyond this, Apple came up with a system called Web SQL Database.

Web SQL Database is based on a free database product called SQLite. SQLite is a database system programmable in a large subset of the SQL programming language. The system is more freeform than most SQL databases like Oracle or PostgreSQL, and the code is much simpler and easier to integrate into existing applications, which means a lot of the popular applications you use every day actually have SQLite built into them.

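To give a sense of what that looks like from Javascript, here's a rough sketch of the Web SQL Database API (the database and table names are invented for the example; the point is the shape of the calls - open a database, then run SQL strings inside transactions):

  // Web SQL Database: a real SQL database, driven from Javascript.
  // (The API isn't in standard TypeScript typings, hence the casts.)
  const db = (window as any).openDatabase("notes", "1.0", "Notes", 2 * 1024 * 1024);

  db.transaction((tx: any) => {
    tx.executeSql("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)");
    tx.executeSql("INSERT INTO notes (body) VALUES (?)", ["Remember the milk"]);
    tx.executeSql("SELECT * FROM notes", [], (_tx: any, results: any) => {
      for (let i = 0; i < results.rows.length; i++) {
        console.log(results.rows.item(i).body);
      }
    });
  });
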
Apple’s solution is popular, it works, and it’s also been adopted by Google for Chrome and Android. However, there are detractors, notably Microsoft and Mozilla. They argue that Web SQL Database has a number of shortcomings:
  • SQL isn't really a standard. There are differences between the official standards, and virtually every implementation has non-standard extensions to deal with shortcomings in the language.
  • Web SQL Database's detractors also argue that SQL is too high level. If a programmer merely wants to insert and remove things from a table, why force them to use a complex programming language?
  • Web SQL Database is based on SQLite, but SQLite doesn't implement a specific standard, and it would be hard, it's been argued, to create a standard "SQL" for Web SQL Database.
Many detractors also complain that SQL itself has flaws, and thus building a persistent storage system based on SQL is not a good idea. This movement is called the "NoSQL" movement, and they have many reasons for being unhappy with SQL.
  • SQL is an entirely different language, unrelated to the languages it's usually embedded within.
  • It's easy for inexperienced programmers to embed security flaws in their SQL statements.
  • It's easy for inexperienced programmers to develop overly complex queries that database systems find impossible to process in a reasonable period of time.
  • In practice, SQL isn't really standardized. While it's possible to write SQL that will work under Oracle, PostgreSQL, and MySQL, it's very hard to do without a working knowledge of the quirks and flaws of all three systems.
Mozilla proposed an alternative, called IndexedDB. IndexedDB works like this (there's a rough sketch of the API after this list):
  • On the browser's back end, data is stored in a database, either using a custom technology or, more often, SQLite.
  • A standard, DOM-like, interface is provided allowing programmers to make basic queries into the database using regular Javascript functions.
  • If programmers want to use SQL, they can find someone who's written an SQL front end to the IndexedDB API and use their library.
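
For comparison with the SQL example above, here's roughly what the IndexedDB style of access looks like (the store name and record are invented, and the API was still settling down at the time of writing, so treat this as illustrative rather than definitive):

  // IndexedDB: no SQL, just object stores, keys, and asynchronous requests.
  const openRequest = indexedDB.open("notes-db", 1);

  openRequest.onupgradeneeded = () => {
    // Runs the first time (or when the version number is bumped):
    // an object store is roughly the equivalent of a table.
    openRequest.result.createObjectStore("notes", { keyPath: "id", autoIncrement: true });
  };

  openRequest.onsuccess = () => {
    const db = openRequest.result;
    const tx = db.transaction("notes", "readwrite");
    const store = tx.objectStore("notes");

    store.put({ body: "Remember the milk" });

    const readBack = store.getAll(); // every record in the store
    readBack.onsuccess = () => console.log(readBack.result);
  };
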
This proposal is controversial. Critics argue that IndexedDB is substantially worse than the system it replaces, and that many of Mozilla's expressed concerns are overblown. They argue that creating a standard based on the current iteration of SQLite would not be as hard as Mozilla claims; they argue that given a SQL database is being used at a low level anyway, the system is less efficient than Web SQL Database; they argue that Web SQL Database is, due to its support for SQL, much more powerful, given you can build complex relational queries in the system, while IndexedDB does not support relational databases.

At this point, the W3C has officially "deprecated" Web SQL Database and is promoting IndexedDB. That said, given the relative unpopularity of IndexedDB, and popularity of Web SQL Database, it seems highly unlikely that the latter will disappear as a de-facto standard. In some ways, the W3C's decisions are disappointing as they reduce the likelihood that a version of Web SQL Database that has been "fixed" will appear.

While I can see both sides of the argument, I'm firmly in the camp that sees IndexedDB as a solution that'll result in more complex, more difficult to maintain, code, that'll encourage dependencies on bloated third party libraries. I'm not seeing the upside.

Flash

One of the major controversies throughout the industry has been the desirability of Adobe Flash as a way to put rich media content on websites. Many people consider it a proprietary kludge that isn't well integrated with the web, is a resource hog, and is frequently misused, while others don't like it at all.

Yet websites keep using it. Why?

Well, in fairness, while the need to use Flash for, say, ads, has receded, Flash has been pretty much the easiest way to embed sound and/or video on webpages for a very long time now.

To deal with the fact so many people want to embed movies on webpages, the standards organizations have tried a variety of solutions over the last few years, but for the most part these standards haven't actually provided enough functionality to allow web developers to completely replace Flash.

HTML5 introduces the "video" tag (and a related audio tag) which is supposed to deal with many of the issues that the OBJECT tag before it had. Unfortunately, this tag only partially deals with the issues.

Here's why Flash is going to continue to be used for some years yet, and why the Video tag is likely to see the same fate as "Object":


1. It doesn't standardize everything

There's a major hole in the Video tag's specification - it doesn't actually even attempt to define the format of the video. Web developers are supposed to encode their videos in every format they think might be used and hope the target web browser and operating system support it.

To be fair, there are efforts to standardize on one of three formats (H.264/AAC, WebM/Vorbis, and Theora/Vorbis), but none has attracted enough support to be fairly described as a standard yet, largely due to arguments about software patents.
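
To make that concrete, here's a sketch of what "encode it every way and hope" looks like in practice - the same clip offered in all three of those formats, with the browser left to pick whichever one it can actually play (the file names are hypothetical):

  // Build a <video> element and offer the same clip in several encodings.
  const video = document.createElement("video");
  video.controls = true;

  const encodings = [
    { src: "clip.mp4",  type: 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"' }, // H.264/AAC
    { src: "clip.webm", type: 'video/webm; codecs="vp8, vorbis"' },           // WebM/Vorbis
    { src: "clip.ogv",  type: 'video/ogg; codecs="theora, vorbis"' },         // Theora/Vorbis
  ];

  for (const encoding of encodings) {
    // canPlayType() is as definite as the browser gets: "", "maybe", or "probably".
    console.log(encoding.type, "=>", video.canPlayType(encoding.type) || "no");

    const source = document.createElement("source");
    source.src = encoding.src;
    source.type = encoding.type;
    video.appendChild(source);
  }

  document.body.appendChild(video);

If none of the source entries is playable, the user gets nothing - which is exactly where a Flash fallback usually gets wedged in today.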


2. You can copy sound and video

While YouTube may be one of the most popular video sites out there, there are also a number of other sites that use Flash to deliver video, sites like Hulu and Amazon, that use it for quite another reason. These sites have business models that rely upon you only having temporary access to the video you're watching. They can't let you download the videos.

To that end, Flash implements something called DRM (Digital Restrictions Management) that has the videos encrypted while being transmitted over the Internet so that the Flash system is the only system that can decode them.

It's very hard to implement DRM in an "open" way, and the Video tag doesn't even try. So you're going to continue to see Flash used for sites that deliver non-free video for some time.


3. It can only play sound and video

The Video and Audio tags provide quite a bit of functionality over the older OBJECT tag, including the ability to be controlled using Javascript, go full screen, display buttons, etc. However, like Object, when it comes to audio or video, the system is strictly one way. Flash is used in some environments to provide two way voice, and even two way video. The audio and video tags simply aren't spec'd to do that.
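
As a quick illustration of the scripting side the new tags do cover (the element IDs here are invented; the calls are the standard media element ones, with full screen support still vendor-prefixed in some browsers):

  // Assumes the page contains <video id="player"> plus two buttons.
  const player = document.getElementById("player") as HTMLVideoElement;

  document.getElementById("playPause")!.addEventListener("click", () => {
    if (player.paused) {
      player.play();
    } else {
      player.pause();
    }
  });

  document.getElementById("fullScreen")!.addEventListener("click", () => {
    player.requestFullscreen(); // prefixed in older browsers
  });

  player.addEventListener("ended", () => console.log("Playback finished"));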


All of these mean that while HTML5's Video and Audio tags have done a lot to reduce dependence on Flash, you're going to see the need for Flash for a while.

A question that commonly comes up is: if Flash is necessary, why is Steve Jobs convinced otherwise? Why does he refuse to allow iPhone/iPad users to install it, for example?

One thing to bear in mind is that Apple has a competing framework to Flash, called QuickTime. QuickTime is often thought of as just a way to display movies, but like Flash, it's actually a complete, self-contained, scriptable multimedia framework. Like Flash, it implements specific file formats, and supports DRM, although I don't believe it's capable of being used in a two way form (but am open to being corrected on this.)

There is a pseudo-open alternative to Flash, called Silverlight. Microsoft has published the specification of Silverlight, with the aim that third parties be able to implement the technology for platforms they don't wish to support directly. Unfortunately, major components of Silverlight, notably the DRM system, are closed, and not available to non-Microsoft implementations.

Before its collapse into the Oracle black hole, Sun also attempted to create an open multimedia framework with Flash like functionality called JavaFX, but the system is currently suffering from industry mistrust of Java's current owners: while in theory the technology works on any platform that supports Java, and therefore could be used as a drop-in replacement for Flash that works virtually everywhere Flash does, in practice nobody wants to touch the technology right now.

Is there a way forward? Could Flash eventually be rendered obsolete and unnecessary?

The standards communities keep chipping away at the "need" for Flash without really addressing what it does and why it's so popular, so the way I see it, we're a while away from seeing it replaced. It shouldn't be necessary to design a "plug in" to provide the functionality Flash does, but for now, it is.

What may drive things forward in the medium term is the mobile web. Companies like Apple and Google are keen to allow web applications to access features of their devices, such as the camera and GPS system, and I suspect in the medium term, we'll see that philosophy expanded to encompass the rest of the hardware. As far as DRM goes, that's going to be more difficult to do in a standardized, open, way, but it may be that "good enough" solutions become possible with some minor fixes to the HTML5 specification.

But for now, don't be surprised that Flash is still necessary to access much of the web. It's not always the web developer's fault, we don't like Flash either, we're just using the only tools available to us!

Tuesday, March 29, 2011

Voice and LTE

So, I posted earlier about LTE, the "4G" technology that's rapidly becoming the standard for mobile communications - with one or two prominent exceptions, just about everyone who's rolling out a next generation network is rolling out LTE.

LTE is "All IP", that is, all communications between the handset and the rest of the world (data, voice, text messages, etc) go over what's essentially an Internet connection, with the lower levels of LTE providing the Internet connection. This raises an obvious question - how do you make a phone call over it?

First of all, unless you're interested in the technical side, you're not going to see a difference. You still have a keypad you dial numbers on, and when you get an incoming call you get a ring tone, as usual.

What's going on underneath is that the operator will be using something called "Voice over IP" (another subject I've written about.) This is the same type of thing that's going on if you've ever used a service like Skype, the phone dialer thing on GMail, that MagicJack thingie, or Vonage. Your voice gets encoded into a digital form, transmitted over the Internet, and then decoded and played back, in real time, either to the end user (if they're on the Internet too) or to a line on the phone network.

This is all great in theory, and when the designers of LTE sat down to figure out how their next generation system would work, they kind of assumed that the whole "Voice over IP" thing was already done and dusted: they didn't need to work on that, and could just concentrate on the Internet connection side. After all, the group standardizing LTE, called the 3GPP, had indeed released its own Voice-over-IP system some years back, called IMS (IP Multimedia Subsystem.) IMS is more than VoIP, and in theory, as it was already standardized and integrated into the GSM family of systems, of which LTE is the latest version, there was nothing more that needed to be done.

Some operators, including Verizon, disagreed. They felt IMS was too high level and didn't address low level issues necessary to make a phone call.

Let's explain that by using an example. Let's suppose that a hundred people use a single Internet connection to access the web, in, say, the space of a minute. It's very unlikely they'll all try at once, but it is likely that some seconds of that minute will have more people trying to access the Internet than at other times. If you were to graph the amount of data coming through the Internet connection then, you'd see something that looks like a mountain range, rather than a nice neat curve or a straight line.

Now, during the times a lot of people are trying to get data, the network is likely to get congested - that is, there'll not be enough capacity to support all of the data being sent. So what happens? Well, each user's connection is effectively slowed down until there's enough capacity to support everything that still needs to be done.

On the web, this doesn't matter. You can read a website regardless of whether it took five seconds to load, or six.

But voice isn't like that. The system cannot slow down one person's connection if the network becomes congested, because that would result in the conversation stalling, or chunks of audio going missing. The designers of IMS did some things to reduce the chance of this happening, by letting the quality of calls drop in the event of network congestion, but this isn't a desirable path to follow.

What Verizon and others want instead is for anyone on a voice call to be able to reserve a certain amount of data throughput, so that available capacity affects only whether you can make a call to begin with, not whether you can still make sense of what someone's telling you a few seconds into the call.

What they're working on is called Voice-over-LTE (VoLTE.) The concept builds upon IMS and allows bandwidth to be reserved for each call.

Many operators are holding off implementing voice on their LTE networks until the VoLTE work is finalized. Once this is done, it will be possible to buy any LTE phone that supports your operator's frequencies, and know it will work, just as you can with a GSM or UMTS phone today. But it also means you need to be aware that LTE phones today are only half ready. They generally rely on a separate 3G or 2G system to make and receive regular phone calls, and will not work with VoLTE once it's rolled out.

Monday, March 28, 2011

Things AT&T could do to pacify me

So AT&T is going to destroy - as in "buy the assets and customers of, with no intention of keeping anything going" - T-Mobile, and frankly what they've announced thus far is pretty bad. As you probably know from my blog entries on the subject, I'm pretty upset about it. I'm hoping the FCC or DoJ will prevent the purchase from happening, but I'm not exactly optimistic.

AT&T is very unlikely to keep me as a customer if they go ahead. Here's what they can do to change that.


1. Adopt a culture of openness

As I've said before, T-Mobile is virtually the only real open network at the moment. Systems like Android would never have happened if T-Mobile hadn't opened the door for them. AT&T needs to stop locking down its handsets. Owners of Android phones should have the same rights to manage their handsets that customers of T-Mobile have/had.


2. Don't punish customers who prefer to buy their own phones

T-Mobile recognized some time ago that some customers would rather buy their devices outright rather than buy subsidized hardware and pay through the nose while being locked to a contract. While some carriers do accept this fact, it tends to be all or nothing - you either buy into their way of working, or find a different carrier.

I'm a subscriber to T-Mobile's Even More Plus plan. I've always bought unlocked hardware, except for our most recent phones, which were still bought unsubsidized. Because I did that, I pay about $10 less per month, and I don't have to worry about contracts - which, ironically, means I'm more likely to stay with T-Mobile (no "I'm finally out of this constraining contract! I'm free!" moment...)

AT&T needs to offer the same deal.


3. Nobody likes overages. Deal with them properly.

What happens if you go over your allotted data quota? On AT&T, you get charged for it. On T-Mobile, they reduce your available bandwidth, but you can still use your phone, and you don't have to worry about an unexpected and obscene bill next month.

Guess which I prefer?


4. Get decent customer service

AT&T has a reputation, and it's not a good one. T-Mobile has always been helpful, friendly, and itching to get the right thing done. I don't think I need to say more.


5. Do the right thing

T-Mobile's customers are going to get the short end of the stick, especially those of us with 3G/"4G" phones. At the very least, replace those phones with genuine equivalents - phones that are as open, as full-featured, and as advanced as the ones that shutting down the 3G frequencies will kill. And when AT&T does this, they need to do so for free - not "We'll replace the phone for free... if you take out a 24 month contract on a new AT&T plan", but "Everything stays the same, you get a phone that works on your network, and you don't lose anything in the process."


That's what I want to see AT&T do. It's not what I expect them to even think about doing.

Android "Honeycomb" and Open Source

It's hard not to be disappointed that Google are, thus far, refusing to release the source code to their tablet operating system, Android "Honeycomb" (or Android 3.0.) Their reasons, as stated, seem dubious to me, but seem to reflect a wider issue that they're disappointed with many of the devices running their operating systems, and want to make sure the Android Tablets get off to a good start before opening things up.

Cyanogen, the force behind the popular CyanogenMod variant of Android, believes that Google are doing the right thing here, as long as the move is temporary, and Google's engineers certainly seem to consider releasing the source "getting it right".

There's been some comment that, given Android Honeycomb and Android Gingerbread (the latest version of the mobile phone variant of Android) were developed in parallel, the chances are that Honeycomb is Gingerbread with some temporary, not particularly attractive, modifications, with Google intending to merge the two into one operating system in the near future. This doesn't mean your Android phone will have a tablet user interface or be capable of running applications developed for tablet use only; it just means Android will return to being a single product.

This makes a lot of sense. iOS is developed similarly, and it would certainly explain Google's actions - leaving aside the fact they don't want Android associated with crazy third parties who'd release phones with user interfaces designed for tablets, there's the more practical issue that they don't want third parties customizing an operating system to work on a particular hardware configuration, when that version of the operating system is a dead end. Companies like Motorola, Samsung, and LG have the resources to deal with that situation, smaller organizations don't and would end up releasing hardware that's difficult for them to support (even if the community would support it anyway!)

At this point I guess we have to wait. Those who want supportable tablets need to hold out and see what comes out of Google in the next six months. Those who need something now need to remember that one key advantage of the Android platform, that it's open, simply doesn't apply to Honeycomb. That's not to say it's as closed as iOS, you can still run software of your choosing, you don't need the manufacturer's permission, but the future-proofing nature of open source is something that isn't going to help you.

Friday, March 25, 2011

Android Tablets

To give them some credit, Apple's iPad has been a roaring success, and I doubt anyone else would have pulled it off. The "tablet" concept has been around for a while, with Microsoft in particular wasting a lot of time working on the concept, with very few people wanting a tablet that ran Windows, or more specifically a desktop user interface.

Apple's main contributions to the concept were to use what it had learned from the development of the iPhone to build a user interface that was more friendly to control via a touch screen. Apple also reduced the price somewhat.

Now, sometimes I'm excited by new technologies, and sometimes I'm completely confused as to why someone would want such a thing, and I'm a little baffled when it comes to the iPad's runaway success. It's still expensive, in comparison to a Netbook, and if I can't put it in my pocket, I'd rather have a full computer like a Netbook than a stripped down, keyboardless, device like an iPad, but the reality is I'm in a minority on this one, and people love them.

The success of the iPad has caused numerous parties to want to create their own tablets. The obvious technology to use has been Android, but Google has been unhappy about manufacturers using stock Android for this, and finally released a special, tablet oriented, version of Android a month ago, called Honeycomb.

If you want to play with Honeycomb, you can try downloading the Android SDK, the latest version of which includes the Honeycomb system. The SDK features an emulator that can be set up to emulate any version of Android. However, it's slow, and the system images that come with the SDK include only the essential Android components. Features like the Android Market are missing, for example.

Honeycomb is an interesting system, it's very slick looking, and if I was in the market for a tablet I'd consider it, but it comes at a cost. For the same reason as Google is unhappy about mobile phone versions of Android appearing on tablets, it's also, reportedly, very unhappy about the possibility of a tablet-optimized version of Android appearing on a mobile phone. I suspect they're concerned, too, that Android might be seen as a cheap alternative to iPad if the first Honeycomb tablets that come out are underpowered and low cost. So unlike prior versions of Android, Google are, for now, keeping the Honeycomb variant of Android proprietary. I have to say this is disappointing, and I hope Google opens Honeycomb soon. I'm certainly not convinced by their argument - I think it's highly unlikely any major phone would be released with an operating system designed for such a large screen, especially if the price for releasing such a thing is to be prevented from having Google's own apps, such as Android Market, available for the device.

So far three Honeycomb tablets, the Motorola Xoom, Samsung Galaxy Tab 10.1, and T-Mobile G-Slate, have been announced although only the Motorola Xoom is currently available. All three are fairly expensive, with only the Xoom available in an unlocked, unsubsidized, wifi-only version, weighing in at $600. It's fairly clear that those promoting Honeycomb are promoting it as a more advanced system than iPad, rather than a lower cost alternative.

The potential success of Honeycomb is difficult to gauge right now. I'd be the first to admit, as someone who's used various touch screen devices, including Nokia's N800, to varying degrees of success, that the concept just isn't appealing to me, which makes it difficult for me to weigh the arguments. I read one prominent Apple enthusiast arguing that Apple's "Apple Store" will make a major difference in terms of whether the iPad or Android tablets will be more successful, although I'd be surprised if that's the case. More conventionally, the iPad has mindshare, and the demonstrations of the iPad 2 have been much talked about, with much of the buzz about things like, for example, its musical instrument app, reminding me somewhat of the buzz about the Wii (except I "got" the Wii buzz!) In the end, Honeycomb's success will depend in large part on whether imaginative developers are willing to rally around it, and they may be put off if the only way to get one is to spend $500+ on a device that most of us see as inferior to a $250 Netbook.

And that brings me to, well, me. I feel as a developer I should get such a thing. If nothing else, even if the iPad proves to be more successful, any tablet, Android or iPad, would give me the opportunity to test websites I've produced, etc, for that form factor and user interface. But sometimes I'm excited to try new technologies, and sometimes I resent it, and right now I'm in the latter camp when it comes to tablets. They're an expensive and clumsy way to access the web compared to the alternatives, yet this is being promoted as the future so I have to hop on board, come what may.

Oh well.

Thursday, March 24, 2011

Unix shell accounts

When the Internet was in its infancy, the way people accessed it was frequently through what was called a Unix shell account. People would dial in, or connect via their internal network (frequently not a TCP/IP network), and then run Internet clients on the remote computer. "Internet clients"?

Well, the web didn't really take off until the mid-nineties, and back then the main systems for communication on the Internet were email and various discussion forum systems. At the time, email was text only, so it was easy to log in to a remote computer, run an email client, and use it, all through a text-based "terminal emulator."

Why did we do that? Because it was easy to set up. Because it was hard to get full network stacks for personal computers, and even if you had them, running these stacks over a modem was torturous, back when the fastest modems ran at 2400bps.

The move towards connecting personal computers directly to the Internet opened up many possibilities, and the web in particular became possible and useful when we finally started doing that. Did we lose anything by moving away from shell accounts? Well, not a lot. But there were certain advantages to having one, as you had a private location, separate from your computers, reachable from anywhere.

Today I'd guess the nearest equivalent would be a VPS. Virtual Private Servers are complete servers, with their own operating systems and storage, that you can rent. The "Virtual" comes from the implementation - most are implemented as virtual computers running on a much bigger server, but for all intents and purposes you can treat each instance as a computer you own.

And you can set up shell accounts on those computers.

There are a wide variety of VPS providers out there. Right now I use Linode.com - that's not an endorsement as such, it happens to be the one I use and they seem reliable to me, but I recommend you investigate the available options.

Setting up a VPS takes a little bit of time, though not as much as you might think, but once it's set up you have a server you can connect to using an ssh client like OpenSSH, PuTTY, or, on Android, ConnectBot, from wherever you are. Email clients like PINE can be used to read email, although you'll need to set something up to collect that email. There are even a couple of web browsers that work over a text-based shell interface; confusingly, one's called Lynx and the other is called Links.

You can upload files and download them using SSH's SCP system. And you can even set up a web server if you want.

What do you get with this all set up? A private space, out there on the Internet, accessible from wherever you are.

IPv6 and internal networks

One problem with the migration to IPv6 is that many people have trouble understanding how IPv6 is different, and assume that if something existed in IPv4, it must exist in IPv6. This isn't the case. In IPv6, various things that existed in IPv4 are no longer needed or relevant, and so the approach network administrators take to solving similar problems is different.

Let's look specifically at how you'd administer your local, internal, network.


IPv4

In IPv4, the elements of a modern internal network are:
  • Every machine is allocated an IP address in the so-called private address space, typically 10.x.x.x or 192.168.x.x.
  • To connect to the Internet, a device called a transparent proxy, or a NAT gateway (the latter is more common) is provided that allows the machines to make outgoing connections, making those connections appear to come from the gateway/proxy itself. This machine is connected to both the internal network, and the Internet.
  • IP addresses are allocated using a system called DHCP: a central server maintains a list of IP addresses and computers. When a computer connects to the internal network, it asks this server for an IP address, and the server gives it one.
  • If a machine has to have a specific address, the DHCP server is programmed to give it that address, otherwise it allocates addresses dynamically, from a pool.
There are reasons for all of these features:

  • The "private address spaces" exist because IP pools aren't easy to allocate, there aren't enough public IP addresses for the number of networks.
  • The transparent proxy is used because the machines on the internal network do not have public IP addresses, and so can't communicate with the Internet directly.
  • A central DHCP server maintains a list of IP addresses because each network only has a small number compared to the number of devices in the world, machines can't allocate IP addresses for themselves because they might end up with the same address as something else on the network.
  • The central DHCP server has to maintain a list of IP addresses for certain specific machines because otherwise there's no way to give those machines a predictable IP address.
Now, let's look at how you'd administer an IPv6 network.

IPv6

In IPv6, the key elements on an internal network are:
  • Every machine is allocated a public IP address, all with the same 64 bit prefix.
  • To connect to the Internet, a simple router is used. This router also has an IP address with the 64 bit prefix mentioned earlier. Packets pass through the router to and from the Internet unchanged.
  • IP addresses are allocated using a system called NDP. In NDP, the router transmits messages across the network with basic networking information - typically, that the router is a router, what prefix is in use, and other useful configuration information. Devices configure their IP address by taking this advertised prefix and prepending it to a static number based upon their MAC address (a 48 bit number programmed into every network chip.) This becomes that device's IPv6 address. (There's a sketch of the derivation after this list.)
  • IP addresses are always static, as long as the network prefix does not change.
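
To make that concrete, here's a sketch of the derivation most devices use today (the modified EUI-64 scheme): the 48 bit MAC address is stretched into a 64 bit interface identifier, and the router's advertised prefix goes in front of it. The prefix and MAC below are made-up examples.

  // Derive an IPv6 address the way stateless autoconfiguration does:
  // prefix (from the router) + interface identifier (from the MAC address).
  function macToInterfaceId(mac: string): string {
    const bytes = mac.split(":").map(octet => parseInt(octet, 16));
    bytes[0] ^= 0x02; // flip the "universal/local" bit of the first octet
    // Insert ff:fe in the middle to stretch 48 bits to 64.
    const eui64 = [...bytes.slice(0, 3), 0xff, 0xfe, ...bytes.slice(3)];
    const groups: string[] = [];
    for (let i = 0; i < 8; i += 2) {
      groups.push(((eui64[i] << 8) | eui64[i + 1]).toString(16));
    }
    return groups.join(":");
  }

  const prefix = "2001:db8:1234:5678";   // the 64 bit prefix advertised by the router (example)
  const mac = "00:1a:2b:3c:4d:5e";       // burned into the device's network chip (example)

  console.log(`${prefix}:${macToInterfaceId(mac)}`);
  // -> 2001:db8:1234:5678:21a:2bff:fe3c:4d5e
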
Why is this different? Well:
  • There are plenty of IP addresses in IPv6, and it's been designed to be easy to obtain entire blocks of public addresses. Indeed, that's exactly what your ISP will give you: a prefix identifying a block of public IP addresses, and you're all set.
  • If everyone's using real IP addresses, there's no reason for routers to do anything other than pass packets unchanged between the Internet and your network.
  • There are so many IP addresses on each network, that devices can come preprogrammed with their own IP address (minus the prefix, of course) without any risk of two devices clashing. There's no reason to have a central server keeping track of who's been allocated what.
  • All machines, thanks to the way IP addresses are allocated, have predictable IP addresses. There's no risk a machine's IP address will unexpectedly change.
But what if I don't want to be connected to the Internet?

Private networks aren't always connected to the Internet. Experimental networks, for example, might need their own IP address ranges. Or you might have a security issue that means you absolutely must not have the network connected to anything other than specific other networks.

You have multiple options, including one official option, but one thing to understand is that you can get away with using pretty much any IP address range you want, even one in use by another party, as long as you don't plan to route the packets outside of your own network. The "official" option is unnecessary and arguably misleading, and as I'll explain below, I don't recommend using it in place of an arbitrary address.

Why would any prefix work for a network of nodes not connected to the Internet? Well, remember that each device allocates its own IP address from a combination of a public prefix and a device-specific 64 bit number. The devices on your network will, therefore, not have the same addresses as devices on any other network, even networks that use the same prefix! So the so-called "private address" ranges become unnecessary: you don't need them.

This is a key thing to understand about IPv6 addresses: the "prefix" is merely used to control the route. The other part of the IP address is almost always going to be unique, unless you deliberately override it.

OK, but what happens if you're experimenting, but you intend to eventually - once your network is all situated - hook it up to the Internet?

Well, it still doesn't matter what range you pick. Changing prefix is easy: the NDP server will merely advertise the new prefix, and everyone will pick it up. The only problem you'll run into will be with DNS - you'll have to change any DNS records you've set up manually, but that's an issue being addressed by several different parties.

That said, rather than pick one at random, there is a prefix specifically allocated for private networks. It's dubious, controversial, and I think most people who implement it don't know what they're doing, but fc00:: can be used as a private network prefix. I would strongly advise that you avoid using it. If you're asking what prefix to use, there's a very good chance you don't quite understand what you're trying to do.

Here's why. Private networks are rarely private in practice. Usually those private networks will need to communicate with one or more specific external networks from time to time, even if they're not supposed to connect to the entire world. Adopt "private" IPv6 addresses like those with fc00:: prefixes, and, just as with IPv4, you suddenly make everything much more complicated, having to create proxies with convoluted forwarded ports that may or may not work.

So what should you use for your internal, private, network whose nodes will usually only talk to one another, and the occasional, controlled, external party? You should use a real IPv6 block that's been allocated to you. They're easy to get hold of. There's no reason not to.


What's "fe80::" and why can't I connect to it?

OK, you may have noticed that your IPv6-compatible operating system always allocates itself at least one IPv6 address, with a prefix of fe80. This is called a "Link local" address, and it's actually used for control messages. The idea behind fe80:: is that when a machine starts up, it needs an IP address to use to learn about the network it's connected to - to find the NDP server, for example. It's also used for other computers on the same network to learn about your device.

While it's allocated on all devices, devices generally only enable network control messages on that address; "real" clients and servers - web browsers and web servers, FTP, etc. - are forbidden from using it. Part of the reason for this is that fe80:: may only work on a small subset of your network as a whole - one room in your office building, for example, depending on how the network has been implemented. You must have real IPv6 addresses for your network.


DHCP for IPv6

It's probably worth noting, to avoid confusion, that there is an implementation of DHCP for IPv6, but it's not quite used the same way as it is on IPv4. On IPv4, DHCP became popular as a way to allocate IP addresses, but it actually does a lot more, being capable of sending all kinds of information about the network to clients. DHCP for IPv6 is generally used to implement these additional features, although it can be used to allocate IP addresses - it's just not good practice to do so.

How can I learn more?

I'll be putting together a HOWTO in the coming days about how you can build a little IPv6 test environment. Something you can read up on in the meantime is Radvd, a popular NDP server program.

Wednesday, March 23, 2011

LTE, what it is, and why it might undo the damage AT&T hath wrought.

So having described the history of the whole mobile phone "generation" thing in my previous post, what is "LTE"?

Well, to recap:
  • The first and second generation mobile networks were primarily voice based, with the second generation being intended to fix the problems of the first.
  • The third and fourth generation mobile networks are data and voice based, with the fourth generation being intended to fix the problems of the third.
There's some overlap. Many "2G" standards supported data, but the real, usable, data is in 3G.

So, with that in mind, what's LTE and why is it a really good thing?

A brief summary of LTE

LTE is the third version of the GSM mobile communications standard. The first version is generally just called GSM, and introduced the basic concepts of a mobile phone system, notably calls (voice and data), messaging, and something called "personal mobility" - essentially the ability for a user to separate their account from their device, allowing them to switch devices at will. In GSM this is achieved using a SIM card.

The second version, called 3GSM or UMTS (often, misleadingly, known as W-CDMA, HSDPA, HSPA, or HSPA+ - these are technologies used by UMTS) added something called "packet switched data" to GSM. While there were extensions to the first version, called GPRS and EDGE, the second version of GSM was a redesign that integrated data into the heart of the system and was designed to support high data rates.

The third version of GSM is called LTE. LTE is another redesign. This time LTE itself is data only: voice services run over the Internet, and LTE is used as your Internet connection. You'll not notice this, of course; to you, an LTE phone will work just like any other phone, but the fact voice works this way leads to some general improvements in how things work.

Now, in theory, separating the services from the network they run over, and using the Internet as that network, opens up all kinds of possibilities. The biggest is that you're no longer dependent on one carrier to ensure you have coverage - use wifi if you can't get a cellular signal, for example. Carriers and phone manufacturers alike can also benefit from this architecture - it's easy to swap in and out other technologies to provide an Internet connection.

In practice, this isn't going to happen right away. Indeed, many operators are specifically avoiding using LTE for voice, seeing the technology as not quite ready for that yet.

What LTE gives you however is the following:
  • You choose the hardware - LTE, like UMTS and GSM 2G before it, uses SIM cards
  • The technology is, well, it's not future proof (nothing is), but it's extremely scalable, so it'll be some years before it's in need of replacement, unlike UMTS
  • Very high bitrates
  • Much higher reliability than UMTS
  • In theory, believe it or not, cheaper communications in the long term!

Where we're at in the US, and how it relates to the end of T-Mobile

A number of carriers are rolling out LTE. The two biggest are Verizon Wireless and AT&T. A smaller company, MetroPCS, has also started rolling out its LTE network, and others are likely to follow.

The two major companies bucking the trend are Sprint PCS and T-Mobile. T-Mobile wants LTE, but doesn't have the spectrum. If the government approves AT&T's proposal to close down T-Mobile, then that ends that issue. Sprint PCS wanted to start early, and so has heavily invested in WiMAX, LTE's only real competitor. Sprint have said publicly they'll switch over to LTE if it becomes necessary, but in all honesty, more than with any previous standards, the "All IP" nature of both networks makes it possible for the two to co-exist and even interoperate to a certain extent.

But here's the thing: Verizon Wireless is switching from CDMA2000, a system they favored because of the control it gave them over their customers, to LTE. MetroPCS is switching from CDMA2000, a system they favored because of the control it gave them over their customers, to LTE. AT&T... well, they're already a GSM shop, but they're not switching away.

In being the overwhelming choice of carriers, and being part of the only open mobile phone standard family, LTE may well undo some of the damage we're seeing from AT&T's plan to destroy T-Mobile. It should be easier than ever before for third parties to independently develop hardware for LTE networks, without needing the approval of anyone other than the people who plan to use the systems (and, of course, the FCC...) Had this not happened, had AT&T been the sole GSM provider in the US, I think things would be beyond terrible for the mobile phone industry. But the move to LTE (and hence GSM based networks) means that's not going to happen, AT&T will need to compete, and it'll have difficulty imposing its will on phone manufacturers.

We'll have to see. The next generation networks will be under the thumb of providers with no history of supporting innovation and respecting their customers, but the technology itself might change them for the better.

Still busy but...

...going to post about LTE later today. It's possibly the only positive thing that might offset the AT&T-swallowing-the-last-great-open-network-operator thing.

Stay tuned...

Tuesday, March 22, 2011

Talking about my generation

In the T-Mobile articles, I've referred to what it calls "4G" in quotes, and I thought I'd expand on that a little.

Right now there are supposedly four generations of mobile technology. It's essentially marketing, with the possible exceptions of "1G" and "2G".


First Generation Mobile Technology

There's pretty much no debate as to what 1G is in the mobile phone world - it's basic analog cellular. A digital control channel exists to set up and tear down calls, and also handle phones moving from cell to cell, but essentially when a call starts, a mobile phone is allocated a frequency to transmit and receive on, it receives and transmits basic analog radio (as in, what your FM radio can pick up), and if the signal is weak, then the phone and tower "talk to each other" to see if the phone can hop onto another tower. And that's it. It's relatively simple.

First Generation is considered "bad" for a number of reasons. The major one is that it's not very spectrum efficient and cannot be. While capacity is proportional to the amount of spectrum available and the number of cells, you can't easily increase either. Spectrum can't be increased for obvious reasons, but cells can't for a different reason - if two analog cellphones are transmitting on the same frequency and are too close to one another, there's no easy way to separate the signals - think of when you're in a car listening to the radio, and you drift into the boundary area where two stations are equally far away and broadcasting on the same frequency.

Well, that was the major, major, problem with cellular. There's an easy fix, as it happens, and that brings us on to 2G:


Second Generation Mobile Telephony

If the problem is interference, then the obvious solution is to transmit the audio in such a way that it can be separated from other audio broadcasting on the same frequency. In 2G, the audio is turned into a digital signal, which means it can be encoded in a way that makes it practical for a receiver to distinguish between it and a signal further away.

There are multiple ways of doing this and multiple standards for how you transmit and receive once you've converted the signal into a digital one. All of these have certain things in common: the audio is converted into a digital signal, the digital signal is "compressed" - that is, reduced in size by removing redundant information - and the power level of the handset's transmitter is adjusted so that its signal is no stronger than it absolutely has to be.

The US D-AMPS system used something called TDMA, dividing each frequency into little timeslots, with each phone broadcasting and receiving during an allocated time slot.

The GSM system was more sophisticated, using a combination of TDMA and something called "spread spectrum", where each GSM phone would switch frequency each time it used a time slot in a way synchronised with the tower. This it did to avoid any issue where one phone might be slightly out of sync, or be broadcasting on slightly the wrong frequency, causing problems for any phone using adjacent frequencies or timeslots. GSM was less efficient with spectrum than D-AMPS if given the same towers, but because GSM was much more resilient due to this technique, you could increase capacity very quickly just by creating more towers.

The final popular 2G system in use was cdmaOne. cdmaOne used a system called CDMA, where instead of broadcasting in narrow channels and timeslots, each "bit" - the rawest part of any digital signal - would be broadcast multiple times using a signal as wide as the available bandwidth. This is also a spread spectrum technique, and it was very efficient, much more efficient than D-AMPS or GSM given similar numbers of towers. Unfortunately, the technique is also very power hungry, as phones that use CDMA have to be constantly transmitting or receiving (and doing so over a much wider spectrum), and it also suffers from something called "breathing", where, as traffic increases, it becomes steadily more difficult to distinguish between signals at cell boundaries. Early cdmaOne adopters such as Sprint PCS became notorious for overloaded networks with staggering numbers of call drops during peak periods, because of this issue and because many networks that adopted cdmaOne did so because it was "cheap" - that is, they were under the impression all they had to do was roll out enough towers to give people coverage, and cdmaOne itself would take care of the capacity issues.

While traditional modems could kinda, sorta, work on first generation systems, 2G saw the first adoption of standard cellular data systems. GSM, whose upper levels were essentially based on the all-digital ISDN system (popular in some parts of Europe, and also used by almost every office that has multiple phone lines), had data from the beginning, with the others following later on as GSM-like features were slowly grafted on. 2G data was "circuit switched", where data connections were treated as just another phone call. GSM allowed up to 56kbps, but only by combining multiple channels (treated as making multiple phone calls at once.) The other systems were more limited.

2G systems also saw the first two way short message systems (SMS.) This, again, started with GSM, and spread to the other network systems.

Very few people would argue that these weren't the second generation systems. The only quibbles have to do with whether certain systems were more advanced than others. cdmaOne advocates believe, strongly, that the "air interface" technology - the CDMA - was so much more advanced than GSM's that it was practically a generation ahead. GSM's advocates believe, strongly, that GSM's high level ISDN-based architecture and support for functionality still to be deployed in cdmaOne or its successors makes it a generation ahead. In theory, both groups should have been pacified by what happened next. In practice...

Third Generation Networks

With both cdmaOne and GSM, the major backers of the standards involved wanted to move on, though for somewhat dubious reasons. European companies especially had issues with capacity, that ultimately could only be solved by having more spectrum allocated to them. In order to sell the authorities on allocating more spectrum, they needed to come up with justifications, and so set about supporting enhancements to GSM that would make it more efficient, and more functional.

Politically, a very influential player was a company called Qualcomm. Qualcomm was the developer of the cdmaOne system, and a company with the majority of patents on CDMA. Qualcomm tried to get European companies to adopt CDMA in some shape or form, even working with Vodafone in Britain to test a version of GSM with CDMA replacing the lower levels, and generally the response was hostile. Convinced it was the victim of a conspiracy between European manufacturers and European governments, it ran a concerted campaign to lobby the US government to support its system, and promoted the idea that its cdmaOne system was vastly ahead of GSM (something most people exposed to both systems as end users would question, especially at the time when a cdmaOne handset - with two way messaging and data yet to be released - was generally no more functional than an analog phone!)

The result of this intensive lobbying was that the GSM people devising the "next generation mobile standard" felt that they had to include CDMA in some shape or form into their system, as pretty much nobody was talking about any other technologies and politicians seemed likely to reject requests for new spectrum without it. But the politics being what they were, there was no desire to see Qualcomm have a "win", with the result that a non-Qualcomm proposal for how the CDMA should be implemented was adopted, called W-CDMA.

Qualcomm subsequently released a competing "3G" standard, an upgrade to their cdmaOne system, called CDMA2000, and as a result of these machinations, the division between GSM and cdmaOne based networks continued.

While this was going on, the ITU came up with a definition for something they called "IMT-2000", that became the commonly accepted definition of 3G. The ITU IMT-2000 definition was based on available data rates rather than anything else. The definition was so dubious that an enhancement to GSM, called EDGE, and a cordless phone standard called DECT, both qualified.

Of the two major standards, Qualcomm's CDMA2000 was clearly the inferior, except in one major area: the standard was easier to graft onto existing cdmaOne networks. CDMA2000 used less spectrum per channel (with a corresponding decrease in maximum data rate), which made it easier to use in conjunction with other network standards when spectrum was at a premium.

The GSM effort, called UMTS, was as much a leap over 2G GSM, and CDMA2000, as GSM was over 1G and cdmaOne. UMTS brought greater extensibility, the ability to have multiple connections at once (so, for example, UMTS users can make a phone call and check their email at the same time), and the body behind UMTS, the 3GPP, standardized a large number of new systems, including multimedia systems, that eventually made their way to UMTS's competitors. Ever wondered why your phone stores movies in a format called ".3gp"? It's named after the 3GPP, which decided upon that particular combination of standards.


Fourth Generation Mobile Communications

At this point in the story, we get to a turning point. UMTS in its early form turned out to be a colossal disappointment, for several reasons:
  • First, the CDMA system on which it was based was not the magic bullet CDMA advocates claimed it would be. Qualcomm distanced themselves from the UMTS version, called W-CDMA, but much of the problem was the concept, not the implementation. W-CDMA shared cdmaOne's disadvantages with power consumption and dropped calls during peak periods. CDMA was also not scalable: while the UMTS designers' decision to use large amounts of spectrum per channel had ensured it could grow to support data rates much greater (three times greater, in fact) than the theoretical highest speed supported by CDMA2000, the reality was that there was still a limit, and that limit was going to be reached fairly quickly.
  • Second, as a new system, UMTS had teething problems.
  • Third, in some ways UMTS was a prototype. Engineers had been asked to design a network for fast data and decent sounding voice calls, and had used CDMA's channel oriented architecture to put together something that would implement the two, but anyone standing back and looking at the system would immediately ask why they'd done it that way. If voice is digital, then voice is data. So why not just have one type of network, a big, high bandwidth, data network? Why complicate things more than they have to be?
  • Fourth, there were massive roll-out issues with the technology, especially in North America, where in many areas operators had to choose between continuing to support their existing network or deploying UMTS and forcing all of their customers to replace their handsets. Not surprisingly, most operators simply didn't roll out UMTS to anything close to their general coverage area.
As the technology matured, UMTS got better, but once again the various groups got together and started working on the long term evolution of the GSM standard, which ended up being called LTE. At the same time, other groups were working on their own systems. The IEEE was working on WiMAX, a high data rate wireless data system originally intended for ISPs to use. And Qualcomm, not wanting to be left behind, started work on its own system, UMB, a project it eventually abandoned due to lack of interest. (Qualcomm then threw its weight behind LTE, which is good, because it marks the end of the great rift between the experts in these kinds of things.)

All three standards discarded CDMA in favor of a system, OFDMA, that split the available spectrum into lots of tiny little channels, with devices combining as many channels as they needed to transmit data. This wasn't a new concept - it was a fairly popular design for high speed modems in the 1980s - but it was new in the radio world, and it had huge advantages over CDMA. The technology didn't suffer from the same issues as CDMA during congested periods; OFDMA is more power efficient, because handsets only have to use as much spectrum as they need rather than transmitting constantly across a large swathe of the ether; oh, and it's really scalable too - need to make it faster? Add more channels.
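To make the "add more channels" point concrete, here's a toy calculation in Python. The subcarrier spacing and channel sizes are LTE's real numbers, but everything else (a flat 64-QAM assumption, no coding, no control overhead, no MIMO) is deliberately simplified, so treat it as an illustration of how OFDMA scales rather than a real capacity figure.

    # Toy OFDMA model: peak rate grows linearly with the number of subcarriers in use.
    SUBCARRIER_SPACING_KHZ = 15   # LTE's subcarrier spacing
    SYMBOLS_PER_SECOND = 14_000   # roughly 14 OFDM symbols per 1 ms subframe
    BITS_PER_SYMBOL = 6           # 64-QAM, ignoring coding and signalling overhead

    def raw_peak_rate_mbps(subcarriers):
        return subcarriers * SYMBOLS_PER_SECOND * BITS_PER_SYMBOL / 1e6

    # Roughly the subcarrier counts of 1.4, 5, 10 and 20 MHz LTE channels.
    for n in (72, 300, 600, 1200):
        occupied_mhz = n * SUBCARRIER_SPACING_KHZ / 1000
        print(f"{n:4d} subcarriers (~{occupied_mhz:4.1f} MHz occupied): "
              f"~{raw_peak_rate_mbps(n):5.1f} Mbit/s raw")

Doubling the spectrum doubles the subcarriers, which doubles the raw rate, while a handset that only needs a trickle of data only lights up a handful of them. That's the scalability and power argument in a nutshell.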

Again, a new way to distinguish these standards was needed, this time because the technology improvements were going to result in radically faster, more powerful networks, rather than because more spectrum was needed. The ITU came up with a definition for what should come after IMT-2000, which they called IMT-Advanced. Initially, this was informally adopted as the definition of 4G.

...but not for long. Here's what happened. The de-facto definition of "A generation greater than 3G" was a network standard that:
  • Supported much, much, greater data rates than 3G
  • Was "IP-only" - ie the system didn't have separate voice and data channels, it was a single system that treated everything as data.
Who cared how it was implemented as long as it was implemented? Well, various experts looked at UMTS's implementation of CDMA technology, and decided that the system still had a lot of potential, and came up with an enhancement called HSPA+. This could run in a data only mode, and available data rates could easily hit 50-100Mbps, comparable to WiMAX and LTE.

On top of that, early versions of WiMAX and LTE didn't support data rates quite as high as the ITU definition suggested, yet both were clearly next generation networks.

So, Sprint, who was rolling out WiMAX, but a variant that wasn't IMT-Advanced, said "Screw it, this is 4G, everyone knows it's 4G, let's call it 4G!", and advertised their new service as 4G.

Then Verizon, who was rolling out LTE, but a variant that wasn't IMT-Advanced, said "Sprint is right, and you know what, everyone knows LTE is 4G, let's call it 4G!", and advertised their new service as 4G, followed closely by AT&T.

  • And then T-Mobile, who didn't have an LTE or WiMAX network, but had been implementing HSPA+ everywhere and optimizing it so it was really, really, fast, said "Screw it. If Sprint can call its non-IMT-Advanced version of WiMAX 4G, and Verizon and AT&T can call their non-IMT-Advanced version of LTE 4G, we can certainly call our just-as-fast-as-theirs HSPA+ system 4G too", and that's what they did.

And everyone got mad, because T-Mobile's "4G" network is actually, literally, the same as its 3G network - it's just some enhancements running in certain areas. And finally the word came down from the ITU that it was perfectly OK for T-Mobile to call HSPA+ "4G" because they weren't trying to define 4G at all, they were defining "IMT-Advanced", and if people wanted to equate the two, then that was up to them, but the ITU certainly wasn't going to do that kind of vulgar thing.


Where does that leave us?

In technical terms, GSM and cdmaOne are 2G standards. EDGE, CDMA2000, and UMTS are 3G. HSPA+, first generation WiMAX, and first generation LTE may or may not be 4G, but they're not 3G. And LTE and WiMAX will evolve into unambiguously 4G standards. In marketing terms, EDGE drops down to 2G, and all the "may or may not be 4G" standards become 4G. Magic!

1G exists because - well, you had to start somewhere.
2G exists because 1G had many problems, and 2G adopted digital communications as a fix.
3G exists because going digital wasn't enough, by itself, to solve the capacity problems, and carriers needed a political argument for getting more spectrum.
The official IMT-Advanced 4G concept exists because the standards created by 3G had too many problems, and 4G discards CDMA and separating voice and data in order to fix those problems.

There's a cycle here actually. I'm guessing 5G will be a spectrum grab again. And 6G will fix the problems in 5G.

In the meantime, 4G as a term has become a little meaningless. What is clear is that there's a move to an entirely different type of network, a network based upon data. That's really exciting, and that's what the generation after 3G is all about.

Monday, March 21, 2011

Update: Why the AT&T-Mobile deal is bad for T-Mobile customers

Some details have come out about AT&T's plans for T-Mobile, and one of the details is not pretty. In fact, it's very, very, bad.

When AT&T takes over T-Mobile, it will shut down T-Mobile's 3G/"4G" network because it wants to use the spectrum for its next generation network (LTE.)

AT&T's interest in T-Mobile appears to stem from the fact that T-Mobile bought a national swathe of spectrum known as AWS. T-Mobile didn't have enough spectrum to roll out 3G, so it bought the AWS spectrum and rolled out 3G (and its "4G" standard, which is an enhancement of its existing 3G system) there. All T-Mobile 3G phones are designed to work on those frequencies, and very few of them support any other frequencies in common use within the US for 3G.

AT&T wants T-Mobile because it can use that national swathe of spectrum to plug holes in its spectrum, where it's going to be unable to roll out LTE because it doesn't have spectrum in the 700MHz band available.

To put it bluntly: if you have a smartphone on T-Mobile, you WILL have to throw it away when the merger happens - or live with 2G "EDGE" speeds and reliability.

Source: TMOnews/AT&T Press conference

AT&T-Mobile - why it's a terrible thing

Yesterday brought news that the owner of T-Mobile USA, Deutsche Telekom, has decided to sell its US operations to AT&T. AT&T will take over the fourth largest network in the US, becoming the largest operator in North America. Even from a competitive standpoint, this is clearly not a great thing, but from a technical and innovation standpoint, it's probably the worst thing that could happen, short of Verizon Wireless buying T-Mobile. Here's why.

Some US mobile history

The US mobile industry wasn't really going anywhere until the US auctioned off some so-called PCS spectrum in the mid-nineties. Before that, spectrum shortages meant there could be a maximum of two operators in each location, and there were no national operators, because of the political decisions that had been made when allocating that spectrum. PCS was an opportunity to move forward: almost any entity could bid on it, as long as they were prepared to deploy a public, digital, cellular mobile phone system.

A huge number of companies bought this spectrum and started using it. The biggest, if I recall correctly, were the existing cellular and landline operators, Sprint, Omnipoint, and VoiceStream - the latter pair being entirely new companies, and Sprint being owned by the famous long distance operator but otherwise run as a new company.

From the outset, PCS operators had to choose a technology to base their networks upon. Three candidates were available. D-AMPS, or IS-136 - also known, misleadingly, as TDMA - was the immediate "successor" to the AMPS analog mobile phone system. cdmaOne, or IS-95 - also misleadingly known as CDMA - was a Qualcomm-designed alternative successor to AMPS. Both systems were designed as basic upgrades to AMPS, using a similar model. Finally there was GSM, which had an entirely different heritage. GSM is a mobile version of ISDN, and at the time it was probably the most advanced, reliable, digital mobile system in the world.

The three standards had different strengths and weaknesses.
  • For hardware cost and support (low to high), the order would probably have been GSM, D-AMPS, cdmaOne.
  • For spectrum efficiency (high to low), the order would probably have been cdmaOne, D-AMPS, GSM, although direct comparisons were always difficult between the three standards - GSM was very easy to scale, as long as building new towers wasn't a problem. Of course, very often, it is, and easy or not, it's expensive!
  • For integration with analog networks (important to existing operators), cdmaOne and D-AMPS did it, and GSM didn't.
  • In terms of user features (advanced to crappy), GSM was way ahead of the other two, which were more or less equal at the time - GSM already supported data and messaging, which came to cdmaOne much, much later, and GSM supported other features too, like the SIM card based "personal mobility" system, which has yet to be supported by any operator of cdmaOne or its successors.
That last thing was almost certainly the deciding point for many PCS operators but not in the way you'd expect. GSM offered users options. It allowed users to buy a phone from anywhere, and as long as it was a real GSM phone, and supported the operator's spectrum, they could just plug their SIM card into it, and it would work.

For many operators, perhaps even a majority of the smaller start-ups, this was definitely a feature. They could sell advanced phones with advanced features out of the gate, and their customers could take advantage of the latest technologies without them doing a thing. From their standpoint, openness was a good thing.

Others took the corporate view that more freedom for their customers meant fewer options for themselves. They wanted to control the mobile experience for their customers as much as possible.

Operators that were already in existence when PCS came into being generally split between cdmaOne and D-AMPS, usually based upon whether they were already rolling out a D-AMPS network, but the reality was that they seemed likely to make that pick anyway - these were companies with conservative outlooks, less interested in providing new services than in capturing markets and keeping people locked in.

Newer companies, such as Sprint, divided into two camps. Those that wanted to provide advanced services to their customers generally standardized on GSM. Those who wanted more flexibility when it came to marketing - especially those offering all-you-can-eat or very low cost talk plans - went with cdmaOne, where they could control what phone you used, and where things like data were considered, at a time when most Internet access was dial-up, to be a liability.

Who picked what? Today's AT&T is made up of Cingular, which itself was an alliance of several cellular networks owned by Baby Bell companies, and AT&T Wireless, which, for all intents and purposes, is not the same company as today's AT&T. AT&T has standardized on GSM, but originally almost all of its ancestors standardized on D-AMPS, and they made reluctant switches to GSM in the early part of the last decade after being pressured by equipment vendors.

Today's T-Mobile comprises almost all of the original GSM operators, including Omnipoint and VoiceStream, with the exception of BellSouth Mobility DCS (part of AT&T) and some minor companies that were swallowed by the major cdmaOne operators.

Today's Sprint PCS is... well, the original Sprint PCS. They bought Nextel, a non-PCS operator that uses a system called iDEN, which has some GSM parts but is mostly proprietary; their major network, however, uses cdmaOne, with Nextel operated almost as a separate business.

Today's Verizon is a merger of almost all the other Baby Bell-owned cellular and PCS companies. These companies standardized on cdmaOne early on, and Verizon uses it and its successors.

What does this mean?

Well, of the four, only two have a heritage in independent companies, and only one of those two made technology choices based upon giving the customer options. That company, of course, is T-Mobile.

How does that translate in practice?

T-Mobile is the only open network

Say it with me: T-Mobile is the only open network. They're the only US network with a genuine commitment to letting you bring your own equipment and letting you use it. That's not hype, it's not "true in theory, not in practice" - they really mean it. They make mistakes on this from time to time, but I've never seen them stand by those mistakes for long.

How open are they? Well, what if I were to tell you that if you want to install a custom operating system on your Android phone, one of the first places to go is... T-Mobile's own forums?

T-Mobile was, of course, Google's chosen operator for the roll-out of the first Android phone. Google was, of course, competing with Apple, trying to produce an alternative to the iPhone. While the iPhone's operating system would only be available on iPhones, Android would exist on all kinds of phones. While the iPhone would be locked to a single operator, Android phones would be available for every operator. While Apple would decide what software you're allowed to use on an iPhone, Android phones would leave the decisions up to you, the owner of the device.

These were important differentiators, reflecting the fact that the difference between the iPhone and Android was rooted in philosophy even more than technology; but it's open to question whether any of the other operators would have even accepted the first Android phones the way T-Mobile did. While they expressed an interest, Apple had justified many of its decisions concerning the iPhone by claiming they'd been made to "protect" AT&T's network. It's hard to imagine, with Android's biggest competitor so heavily crippled, that Verizon or AT&T or Sprint - none of whom had expressed any interest in openness beyond vague statements of principle - would have allowed Android onto their networks without significant changes.

T-Mobile opened the door, and that almost certainly led to a loosening once the other operators saw that Android was popular, that it wasn't causing any real problems with their networks, and that it had huge potential. The remaining three operators jumped on board the Android train a few months later.

AT&T doesn't have the right attitude

So, let's be clear here. I don't want to blame AT&T entirely for the locked down nature of the iPhone - the iPod Touch and the Wifi-only iPad are similarly locked down for no good reason. But "protecting AT&T's network" has come up numerous times as an explanation for at least some of the sealed nature of the device, and that's not an excuse that would have any traction if AT&T hadn't expressed those concerns to Apple. "Open phones" have never really been something AT&T was particularly happy about. Outside of basic J2ME functionality, most AT&T phones were locked down, and, before the iPhone, AT&T's data pricing seemed to be based upon the idea that you wouldn't use it.

Open is good

Anyone reading my blog knows my views on providing users with as much freedom as possible. Freedom doesn't just help users in terms of how much they can do, it also helps foster innovation and competition.

My view is that AT&T swallowing T-Mobile is a very bad thing, because what we're seeing is the last pro-openness mobile network in the US being, effectively, shut down. In practice, this means Android may well be the last seriously innovative technology released in the US that makes heavy use of mobile networks.

So I'm hoping that, for whatever reason, the merger falls through. I want T-Mobile to survive as an independent company. And if that's not possible, I'd rather see T-Mobile take over Verizon than AT&T take over T-Mobile.

Friday, March 18, 2011

Ten things that should be IPv6 ready but aren't

IPv6 is, unquestionably, a better system for building the Internet upon, and a migration to it can't come soon enough. Connecting to the Internet today is a mess, involving a lot of hacks, confusing configuration options, and things that ought to work but don't, simply because IPv4 was never designed for it. IPv6 makes the Internet as easy to connect to as your power supply.

But supporting IPv6 comes at a price: the system is not compatible with IPv4 in any way whatsoever. From being, effectively, an entirely different network, to requiring a slightly different approach from the software that uses it, IPv6 needs work from a large number of parties before it becomes viable.

In order for IPv6 to be adopted, hardware, software, and infrastructure needs to be ready to support it. It's perfectly possible to run IPv6 at the moment, but using it as your primary protocol is out of the question. Too much of the Internet is IPv4 only, and much of your software needs to be updated to support IPv6. The good news is that support is growing, but there's still work to do. And while I've come up with a fairly scary list below, always keep in mind the fact that you can venture into the IPv6 world while remaining connected to the IPv4 world for now. The two may be incompatible, but you're not going to be forced to choose one or the other any time soon.
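As a small illustration of that "slightly different approach" on the software side: applications can no longer assume an address is four dotted numbers. The portable pattern is to ask the resolver for every address a name has - IPv6 and IPv4 alike - and try them in turn. A minimal Python sketch (the host name is just an example):

    import socket

    def connect(host, port):
        """Try every address the resolver returns - AAAA (IPv6) and A (IPv4) records alike."""
        last_error = None
        for family, socktype, proto, _name, address in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            try:
                sock = socket.socket(family, socktype, proto)
                sock.connect(address)
                return sock          # first address that answers wins
            except OSError as error:
                last_error = error
        raise last_error

    conn = connect("www.google.com", 80)
    print("connected over", "IPv6" if conn.family == socket.AF_INET6 else "IPv4")

Code written this way doesn't care which of the two worlds it lands in, which is exactly the property you want during a transition that's going to take years.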


1. Your router/gateway

If you're going to use IPv6, then the device you hook everything up to, which in turn hooks up to your building's Internet connection, has to support IPv6 too, and most router/gateways don't. That won't stay the case - increasing numbers of barebones, cheap routers support IPv6 - but it's hard to tell which ones when you're buying them at the store.


2. Your desktop operating system

The good news is that most operating systems support IPv6 already. The bad news is that, for most, you have to enable it manually; it's not right there, ready to work the moment you're hooked up to the network.

There's little reason for this. It's extremely easy for an operating system to detect whether it's hooked up to an IPv6 network and to turn on IPv6 if it is, but some operating system vendors are wary of doing so: early attempts to "force" IPv6 on were hampered by bloody-mindedness from certain ISPs, and system administrators in large corporate environments expressed reservations about having a new networking system deployed without them specifically managing it. Still, turning the entire system off by default, so that computers don't even attempt to use an existing, already set-up IPv6 network, is very clearly overkill.


3. Your ISP

If you want to use IPv6, it would stand to reason you need an IPv6 connection from your ISP. Unfortunately, very few actually offer such a thing. Worse still, those people who want to use systems like "6to4" to get an IPv6 connection when their ISP doesn't support it often find themselves out of luck because their ISP blocks it, seeing anything but basic web service as a premium product that only large, rich, corporate entities would want. It's stupidity, but there you are.


4. Your enterprise's security systems

The MIT version of the Kerberos system is the de-facto standard for authentication outside of Windows, and last time I looked it didn't support IPv6. That's bad; what's worse is that virtually any sane post-IPv6 security system requires that computers handle their own security, with central management provided by a mix of directory services and authentication systems. What does this large blob of jargon mean?

Well, in IPv4, security is generally provided by a box called a "firewall", that filters content from the Internet to an internal network. It's a lousy approach, but has been necessary over the years because security had been grafted onto the Internet almost as an afterthought.

In IPv6, a more fine grained approach is used where each computer is responsible for its own security. Computers talk to each other using encrypted connections, via a system called IPSec, and they guarantee those connections are secure using a system called IKE. The computer's operating system filters connections, ensuring that only authorized applications make and receive authorized connections to other computers.

But some key exchange standards need Kerberos to do the authentication. And if MIT is the de-facto standard...

Now, I titled this "Your enterprise's security systems", but actually any organization, no matter how small, is going to need this issue fixed in order to move forward. There are alternatives to MIT, notably Heimdal, which does support IPv6, but the choice of which software to use is rarely made in isolation - especially when, as with Kerberos, the operating system vendor is likely to have made the choice for you.

What's the alternative to using Kerberos to securely exchange keys? DNSSEC, and that's not exactly ready either...


5. The World Wide Web

A combination of inertia and lack of support from ISPs means IPv6 sites are still rare on the web at the moment. Provided the site's ISP doesn't actively block IPv6 traffic (or 6to4), most web servers can be switched to support both IPv6 and IPv4 at the flick of a switch - or rather, the installation of some configuration options; there's a sketch of the server side of this after the list - but there are multiple steps here:
  1. The owner of the site has to want to do it.
  2. The administrator of the site has to know how to do it.
  3. The ISP used by the site to connect to the outside world has to, at the very least, not block it.
In my experience, you can't rely on any of those being true.
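On the technical side, the change really can be close to "flick of a switch" - usually just telling the server to listen on an IPv6 socket as well. The following isn't how any particular production web server configures it, just a minimal Python sketch of the underlying idea: one socket that, on most operating systems, serves IPv6 and IPv4 clients at the same time.

    import socket
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    class DualStackHTTPServer(HTTPServer):
        address_family = socket.AF_INET6   # listen on an IPv6 socket...

        def server_bind(self):
            # ...and ask the OS to accept IPv4 clients on it too (as IPv4-mapped addresses).
            self.socket.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
            super().server_bind()

    if __name__ == "__main__":
        # "::" is the IPv6 wildcard address, the equivalent of 0.0.0.0.
        DualStackHTTPServer(("::", 8080), SimpleHTTPRequestHandler).serve_forever()

The hard part, as the list above suggests, isn't the configuration; it's finding someone who wants to do it, knows how, and has an ISP that won't get in the way.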


6. Your Android phone and other mobile devices

One area where IPv6 support is being pushed fairly heavily is the mobile world, where operators are keen to migrate to technologies built on the newer standard. Unfortunately, Android isn't ready! With Google being a dabbler in IPv6, and with operators keen on making the switch, this is somewhat surprising.

On a separate note, while I'm thinking about it, and nothing to do with the topic at hand: the switch to IPv6 will be, uh, interesting for many. Those used to free tethering, for example, where you use your phone to hook your computer up to the Internet, might be a little surprised to find that the entirely different nature of IPv6, and the lack of NAT, means tethering really will be a distinct service - one that mobile networks will be keen to charge for.


7. The things on your network that aren't computers

At home I have a high definition player, a satellite box, a couple of games consoles, and probably some other stuff I can't think of right now, hooked up to my Internet connection. You might add a Roku box or a Vonage router to that mix. Not all are IPv6 ready, despite being the very things that IPv6 will make easier to connect.


8. Your employer's networks

Planning on working from home? Or conversely accessing stuff at home from work? You and your employer will need compatible networks. Mention IPv6 to the average system administrator and they'll express a range of emotions, from joy that the networks are actually going to work properly, to concern about the amount of work that still needs to be done before it can be deployed, to fear about the risks during the transition. Your office will probably not be upgrading any time soon.

Talking of which...


9. The people who are responsible for your networks

It doesn't matter who you are, whether you work at home or use an office network, whether you're down with the whole IPv6 thing or are reading this wondering what a vee six is, there are always people in the chain of responsibility for your networks that are not quite fully aware of what IPv6 is and isn't, and what it takes to support it.

IPv6 isn't on most of the networks you connect through because someone - be it a manager, a sysadmin, someone who provides infrastructure, or someone in between - has decided, rightly or wrongly, that it shouldn't be implemented yet. And no, these aren't "stupid" or "lazy" people; in many cases they're some of the smartest people around. But in order to move forward, they need to be convinced, they need to know it's an issue, and they need to be part of the team that's pushing things forward and knocking down those roadblocks.


10. YOU

OK, that might be unfair - you might very well be reading this over a network you've set up that's IPv6 ready. But if you work in the industry, ask yourself the following questions:
  1. Do you understand IPv6 as a general technology?
  2. Do you understand the different approach to security and routing IPv6 brings to the table?
  3. Have you set up IPv6 at home, using either an IPv6 tunnel broker or 6to4? (The sketch just below is a quick way to check whether it's working.)
  4. Do you have any idea what applications in your organization are capable of supporting IPv6?
You can't make the migration until you're ready.
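If you answered "no" to question 3, a quick self-test only takes a few lines. The sketch below tries to open a TCP connection over IPv6 only; I'm using ipv6.google.com because it has historically published only an IPv6 address, but substitute any IPv6-only host you trust.

    import socket

    def ipv6_reachable(host="ipv6.google.com", port=80, timeout=3.0):
        """Return True if this machine can open a TCP connection over IPv6."""
        try:
            for family, socktype, proto, _name, address in socket.getaddrinfo(
                    host, port, socket.AF_INET6, socket.SOCK_STREAM):
                with socket.socket(family, socktype, proto) as sock:
                    sock.settimeout(timeout)
                    sock.connect(address)
                    return True
        except OSError:
            pass
        return False

    print("IPv6 is working here." if ipv6_reachable() else "No IPv6 connectivity here (yet).")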

Did I miss anything? Did this suck less than my usual "Ten reasons" posts? Let me know below!

Blog housekeeping - comments policies and updates

I've made a number of changes to Paul's blog over the last week or two - in some cases to help people out, in others because I'm trying to use the blog, to a certain extent, as a tool to help me better serve my customers and employers by getting a better idea of how these things work. So bear with me, and if you see a change you really don't like, let me know.
  • Not many people leave comments, but if you do be aware that, for the moment, comments are moderated but anonymous comments are allowed. This is because I had an issue reported to me about being unable to log in (and when I experimented later, I found that even when it worked it was confusing.)
  • Comments will generally be posted, but needless to say, be polite, honest, and on-topic as comments that aren't will go into the bit bucket.
  • Advertising on this site is provided by Google. From what I can figure out, the ads seem to be mostly on-topic and I haven't seen any that were obnoxious. If you find them obtrusive or obnoxious, let me know, send me a screenshot if the latter as I have some options in terms of barring the display of some ads. If you're wondering how much money I make from them, let's just say that Google requires a minimum of $100 before they send you a check, and at the current rate I'm expecting my first check sometime around 2021...
  • This blog has generally, thus far, covered my opinions, largely on technical issues. I'm in the process of setting up a site that'll cover practical issues and solutions to specific problems, stay tuned.
Thanks for reading!



Paul

Thursday, March 17, 2011

Bye bye Buy American?

Given the economic and energy situation, I've been trying to make a concerted effort not to buy cheap crap from a certain human rights hole in Asia lately, buying "locally" produced goods (ie American) and if that fails concentrating on nearby countries or Europe.

It's not that easy, though, when everything in a particular category in the stores you go to is made elsewhere. Tuesday evening I went into Lowe's to get some electrical gear (multimeter, circuit tester, etc.) With the exception of some plastic sleeves, which were made in Canada, I came up empty for locally produced merchandise.

Yes, I'm sure if I did enough research I could have found something, but that's somewhat hard to do if you need the hardware at short notice (and driving around isn't going to help matters either.)

Happy St Patrick's Day

As an Englishman in America, I usually get asked whether I celebrate St Patrick's Day or if I hate the Irish or something. It's a little confusing, I think because most Americans have heard a somewhat... confused version from a combination of the media and the various factions, so let's explain it quickly.

The Republic of Ireland is a wonderful country full of wonderful people. I say this sincerely, I've been there multiple times, they're the most friendly people you'll ever meet, and no, my English accent was never a problem. The Republic got its independence many decades ago and while there's good reason for many Irish to feel antagonistic about Britain, the reality is the majority simply aren't. My ancestors, not me, invaded and mismanaged that country, and frankly, the majority of Britons don't see our rule of Ireland as a positive episode in British history. We're not the same people.

Americans can look at the situation as being similar, in fact, to their own. Americans and Brits have a strong kinship, yet 250 years ago the situation was fairly dire between us. Less time has passed, of course, but the world is a different place to what it was back when news took weeks to cross the oceans.

Northern Ireland... well, that's an interesting place in its own right. I've never been there, and I don't particularly want to visit. It's dominated by two groups, both natively born, and a minority within each utterly hates the other. How big are those minorities? Hard to tell without actually visiting, but it took a very long time for those who didn't hate to gain enough power to put a stop to what was going on.

The interesting part of that particular mess is that most people I've met in Britain want nothing to do with Northern Ireland, we'd like it gone, merged into Ireland, but while we might want that in the long term, a majority feels it would be inappropriate to do right now, because (a) the majority there doesn't want that and (b) it's been argued it would be a bloodbath, with some justification. But that's an argument for another time.

(Of course, this might be out of date, I haven't lived in Britain for twelve years, and it's quite possible public opinion has changed since the troubles ended.)

Anyway, that's Northern Ireland, which is a very different environment to Ireland as a whole. The bottom line: yes, we do celebrate St Patrick's Day!

Wednesday, March 16, 2011

Here comes the rain again

Technology trends come and go, as do marketing terms. Since the beginning of the commercial Internet, there's been a move towards building applications on the Internet. As time went on, companies started to sell services (directly, or indirectly via advertising) via the Internet too.

Email makes for a great example. Third parties have been offering ever-improving email services since the start of the Internet. Various models and services were offered, from "store and forward" services for individual mailboxes or entire domains, to entire hosting facilities where companies could contract out the management of their email, with their employees picking up their individual mailboxes using IMAP, POP, or, later on, web applications.

An early adopter of this type of thing was the open source community, as you might imagine. Companies like SourceForge offered "project hosting", where entire software projects could be managed online using their services to do everything from store source code to manage deadlines, bugs, and releases.

The applications have steadily become more numerous and diverse, in part because of improved web browser functionality, and also because of increased demands.

And somewhere along the line, relatively recently, some marketing genius decided to invent a new name for all of this: now it's called "the cloud".

Within the tech community, the term is derided as it's a new term for stuff everyone was already doing, but in some ways labelling these services "the cloud" is a positive thing, as doing so has focussed effort on the entire concept and helped highlight technologies that, prior to the decision to label it as a thing in its own right, might have languished. Companies as diverse as Google, Microsoft, and Amazon have put resources into researching better ways for these applications to interoperate.

"The cloud" doesn't cover one way of doing things. Services on offer over the Internet vary from fully built applications, such as Google Docs or Yahoo Mail, to CPU resources and memory.

Google, Amazon, and Oracle (via Sun) offer the ability to run your own applications remotely, in an environment in which they can demand sudden increases in CPU, memory, and/or bandwidth at a moment's notice without the need to invest in infrastructure that can support all of these resources all of the time. The technologies involved, at this point, are largely proprietary and incompatible, although all three make heavy use of Java.

At the other end of a similar spectrum are companies offering so-called VPS servers, servers that appear to be complete computers, where the benefit is that they're sitting in someone else's data center, using that data center's managed resources, rather than requiring you build your own.

At this point I'm loath to suggest that "the cloud" is something anyone but the smallest company with the simplest needs should adopt wholesale, although anyone can make use of it to provide parts of their computing infrastructure. The major issue, for me, is that the interoperability and standardization is not quite there yet. Google, for example, is keen to create an infrastructure that is integrated, but it's internally integrated - third parties can certainly write applications that integrate with Google Apps, but those applications need more work to integrate with other third party services. Custom applications either need to be hosted outside of Google or, if implemented using Google's "App Engine" - the CPU service I mentioned earlier - need to be written specifically for it.

Part of the problem, I suspect, is deliberate wheel reinvention in order to be able to distinguish between small companies who can't afford to pay much for their services, and giant enterprises. Much of the work that's been done on integrating applications, especially in areas like security, is supported by technologies generally used by enterprises rather than smaller organizations.

Over time, I would hope this will improve. But there are other hurdles likely to come up:
  • Recent actions by Apple, Twitter, Facebook, and others have made developers increasingly nervous of developing for platforms where decisions about what can and can't run are made by third parties, rather than users. Apple's "App Store" has become notorious for arbitrary bans and bizarre rules. Twitter and Facebook have a, uh, flexible approach to rulemaking and APIs that means that an app that was perfectly legitimate a year ago might not even run today, and if it does might be in violation of rules that didn't exist previously.
  • And lest anyone think cloud providers would never undermine the reputation of their own businesses in that way, Amazon recently banned Wikileaks from using its service, providing a sophistry ridden explanation for doing so. There's little doubt that Amazon's decision was based on politics - with the US government announcing a move to "cloud computing", it couldn't afford to host services that were severely critical of that government.
  • The lack of a commitment to so-called "Network neutrality", and more importantly, a lack of a clear definition of Internet access, will continue to raise questions about the reliability of the very infrastructure necessary to provide cloud services. Even wireline ISPs currently block arbitrary ports, throttle arbitrary services, and get into major, disconnection ready, arguments with peers over who pays for what. Mobile Internet access, which is growing in popularity, has even worse problems in that regard.
  • The Web "isn't quite there yet" although it's getting better. Many are pinning their hopes on a standard called HTML 5 to enhance browsers with the necessary technologies, but in my view even that will not be enough for many developers. Still, we're a resourceful lot, and used to building all kinds of workarounds for the issues we come across.
  • IPv4 is woefully inadequate for the exponential increase in Internet usage we hope to see, and IPv6 is suffering from poor adoption and ISP indifference or even opposition.
In the end I think the move towards cloud computing is a good thing, but it's going to be many years before the technologies are ready for a wholesale move, where your "office computer" is whatever machine with a web browser happens to be closest, and your data center is no longer a room in a building, but the ether outside of it.

Tuesday, March 15, 2011

10 reasons why Ubuntu is better than Windows

It's been decades (two, actually) since operating systems based upon GNU and Linux started hitting the desktop, but early on the systems developed a reputation for being friendly only to geeks and those willing to invest time and effort into setting them up and learning how to use them.

As interest grew, so did the number of people working on it, and eventually Canonical picked up the popular Debian GNU/Linux system, cleaned it up, packaged it, and started work on turning it into the operating system for everyone.

I've said a few times that I believe Ubuntu is more functional and usable than Windows. But there are caveats. Ubuntu has yet to make much of an impact in the enterprise due to its lack of working integration with enterprise networks. But, for the rest of us, it's a great system. What makes it great?


1. Better hardware support

A quick review of the system requirements for both XP and Windows 7, vs Ubuntu, shows that Windows, not Ubuntu, is the one you really can't be sure will work on the computers you own.

This is a stunning turn of events - only a few years ago, the complete opposite was true! This has happened mostly because of two things: the relative efficiency of Ubuntu, and the community of relentless developers behind it who have been determined to make sure operating systems that use the Linux kernel will work on everything they own.

To be sure, there are limitations: PCs less than a year old frequently have problems with Ubuntu, but even there, many manufacturers are making sure their machines can run Ubuntu out of the box, because it's a platform they want to offer. Typically, a customized version of the system will be released with the computer, and the customizations will make it into the next official release of the operating system.


2. No crapware

Microsoft has been busy trying to prevent manufacturers from destroying their operating systems by bundling "free" software that slows everything down and makes it a pain to use. But the closed nature of Windows and the necessity to get around it means that's never going to happen. Most of us who have a Windows computer can see, just in our notifications bar, a long list of icons for applications we don't want or need, that have the annoying habit of popping up unexpectedly. And when our computers freeze unexpectedly or crawl, we're left wondering what we have to uninstall or switch off.

Not so Ubuntu. You get a clean desktop out of the box, and a community of developers willing to make vastly better software than anything a manufacturer might bundle - from video drivers to PDF viewers. And if you're unfortunate enough to be given a machine with this garbage pre-installed, you always have the option of downloading, for free, a completely untainted copy of the operating system. You never have to worry about software you don't want.


3. Great software, at no extra charge

Ubuntu comes with the best software already integrated and packaged. From OpenOffice.org to Firefox, the FOSS communities have released some superb productivity software over the years, and you don't have to do anything to get it, which brings us to...


4. The App Store before the App Store

You remember how awesome that Android Market thing seemed to be on your Android phone? Just use it to select the software you want and it'd take care of everything else. And if you didn't want a program any more, it'd take care of getting rid of it, without any trouble?

You probably thought the Android Market was copied from Apple or something. After all, Apple has an "App Store", and everyone's always claiming that Android is a clone of iOS.

Well, big surprise: the Android Market is not based on the Apple way of doing things at all, but on the way the GNU/Linux world does things - and Apple's App Store, if anything, is a poor copy of that, not the other way around.

In Debian, the way to download and install software is through something called a package manager. The package manager keeps track of all of the resources a program needs and uses, and if you ever decide you don't want it any more, it gets rid of everything except the components you specifically asked for or still need. That system is still in Ubuntu, both in its raw Debian form, called "APT", and in a cleaned, polished version called the Ubuntu Software Center that runs on top of APT.

What's the difference between the Ubuntu way and Windows? In Windows, you're at the mercy of the application itself to manage its own installation and removal. If the authors of the application decided they just didn't want to spend the time helping users remove their software, then tough, you can't remove it cleanly. And even those applications that do include their own removal scripts very often have bugs that leave large quantities of their code behind.

To be fair, only applications approved by Canonical can be put in the Ubuntu Software Center, but to get around that there is a lower-level system called dpkg. When you download a file whose name ends in .deb, you can install the software within it just by opening the file and giving your administrator password. When it's time to remove the application, you can use the standard tools to remove it, without being at the mercy of the application's developers.

Easy. It "just works". No wonder Apple is trying to copy it!
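Because the package manager is a system service rather than something every application reinvents, it's also scriptable. Here's a minimal sketch, assuming the python-apt bindings (the apt Python module that ships with Debian and Ubuntu) are installed and the script is run with administrator rights; "gimp" is just an example package name.

    import apt   # python-apt bindings to the same machinery APT itself uses

    cache = apt.Cache()
    package = cache["gimp"]          # any package name in the archive

    if package.is_installed:
        print("Already installed:", package.installed.version)
    else:
        package.mark_install()       # APT works out the dependencies for you
        cache.commit()               # downloads and installs (needs root)

Removal is the mirror image (mark_delete() instead of mark_install()), and it's APT, not the application's author, that decides what can safely be taken away.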


5. It's friendlier


No, really, it is! Compare an Ubuntu environment to Windows in real life - spend a few days using both, and you'll see what I mean. Both operating systems give you a cleanish desktop with an integrated file manager and a way to launch applications, but should you need to change anything, from connecting to a wireless network to configuring your email, the differences can be astonishing. Here are a few ways in which Ubuntu is easier, chosen because they demonstrate the philosophy:
  • On the Windows "Start" menu, applications are organized by software vendor. On the Ubuntu "Applications" menu, applications are organized by function.
  • Configuring email in Windows involves making sure you have the right client installed. When you run it for the first time, it will generally help you set up email, but if you change anything you'll be given a maze of "settings", "preferences", "information", "properties", and other dialogs to fathom as you try to work out where to go. In Ubuntu, the Evolution email client is already installed, setting up email for the first time is a matter of clicking on the envelope and selecting "Set up mail". Need to make changes or add accounts? Go to Edit->Preferences, and there are your accounts, in one place with all the other settings. And if you can't find that, it's also in System->Preferences->EMail settings
  • The scroll bar works. No seriously! You know how Windows has a bug in it that means that if you drift too far from the scroll bar (which you're going to do if you do a lot of scrolling), it'll "snap" back to where it was? Ubuntu doesn't contain that bug! There's a lot of garbage in Windows that makes you think "What were they thinking?!"; Ubuntu doesn't hang on to ridiculous ideas just because someone did it in the past.
  • I mentioned the Ubuntu Software Center earlier. Ubuntu also handles all updates for all installed applications by itself, using a centralized "Update Manager" tool. All of the software installed using Ubuntu Software Center is supported directly, and applications you download outside of USC can also register with it to make their updates available too. It's one place for all updates - no more mysterious dialogs from Java, Adobe, and your virus scanner, popping up when you least want it.
I've never met anyone who had problems using an Ubuntu desktop, no matter what their skill level was. I can't say the same about Windows!


6. It's fast

Part of what makes Ubuntu a pleasure is that even on machines with relatively little RAM, Ubuntu is optimized for speed. Much of this has to do with the lack of anything installed or running that you really don't need. And much has to do with the community of developers who work tirelessly on optimizing the system for every possible hardware configuration they see people wanting to use.


7. It's more functional

With GNU/Linux's Unix inspired origins, software has been developed for the platform since the mid-seventies, and much of the Internet was built on the frameworks Ubuntu is based upon. With the world moving over to the Internet, you can imagine that Unix-inspired operating systems are coming into their own, and Ubuntu is benefiting from that movement.

Ubuntu also benefits from a massive community of developers who want their system to be more functional than the proprietary competition, and over the years we've seen that come to fruition, with software ranging from programs clearly inspired by proprietary designs (Evolution, for example, is a great alternative to Microsoft Outlook, and Rhythmbox is a great alternative to iTunes) to software you just will not find anywhere else, including MPlayer and the GIMP.

By itself, this mix of tools would be good, but not enough to overcome the sheer volume of applications available for Windows. Ubuntu, though, through its Unix origins, adds an extraordinary command line environment that makes it easy to perform amazingly sophisticated tricks using collections of much smaller tools. And Ubuntu has those tools. While some, like FFmpeg, have been ported to Windows, they're still more functional under Ubuntu, because they can be scripted in ways that Windows users can only dream about. There are some fantastic utilities you can throw into the mix, including a remote shell environment that I'll get to in a moment, which means you can do extraordinary things over a network merely because the command line itself is this sophisticated.
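As a small example of the kind of scripting I mean - a toy sketch, assuming FFmpeg is installed and that a folder called "videos" exists:

    # Batch-convert every .avi in a folder to .mp4 - the sort of one-off job the
    # Unix-style toolchain makes routine.
    import pathlib
    import subprocess

    for source in pathlib.Path("videos").glob("*.avi"):
        target = source.with_suffix(".mp4")
        subprocess.run(["ffmpeg", "-i", str(source), str(target)], check=True)
        print("converted", source.name, "->", target.name)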

Talking of which...


8. It's the Network

As I noted above, Unix, the platform Ubuntu is ultimately inspired by, was the system used to build the Internet. As you might imagine then, Ubuntu is a great networker. Let's cover some of the ways Ubuntu is network ready:

Ubuntu supports an amazing remote access system called ssh. ssh allows you to make connections to other machines that are encrypted and that can use authentication schemes not possible under telnet. But replacing "telnet" barely covers what ssh can do. ssh can be used as a wrapper for other services, allowing you to gain full access to a computer if you're authorized to use it. Want to transfer files? ssh will do it. Want to set up a tunnel so you can connect back to a machine hidden behind NAT? ssh will do it. Want to access that machine's desktop (via VNC)? ssh will do that too.

ssh is like a swiss army knife of network access tools. You can even use it to mount other computers' file systems (via a tool called sshfs). And unlike the Windows equivalents, it's secure - if you're an admin you can let users use it, knowing that hackers will not find it significantly easier to break into your computers because it's enabled.
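ssh is also scriptable from other languages. Here's a hedged sketch using paramiko, a third-party Python SSH library (not part of ssh itself); the host name and user are placeholders, and it assumes you already have key-based login set up:

    import paramiko   # third-party: pip install paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect("server.example.com", username="paul")   # placeholder host and user

    # Run a command on the remote machine...
    _stdin, stdout, _stderr = client.exec_command("uptime")
    print(stdout.read().decode().strip())

    # ...and pull a file back over the same encrypted connection.
    sftp = client.open_sftp()
    sftp.get("/var/log/syslog", "syslog-from-server")
    sftp.close()
    client.close()

The same channel handles the shell, the file transfer, and, if you ask for it, the tunnelling - which is exactly the swiss army knife point.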

Ubuntu also has available servers to implement almost any network protocol, and clients to test and debug virtually any network protocol. Want to implement an LDAP server? Just install it via the Ubuntu Software Center, and find some tools in the same place to manage it. Need to provide email, and don't want to build a separate server or outsource it to the cloud? IMAP servers are available to install at the click of a button. Ubuntu doesn't just support these servers, they're - in most cases - the industry standard implementations.


9. It's FREE

It might sound obvious to you, but "free" has advantages all in itself:
  • You don't have to worry about upgrading to the latest version, because that's free too. You're not on an upgrade treadmill having to hand over cash every year.
  • You don't have to worry about licence keys. If you have to re-install the operating system for any reason, you can just download a copy over the Internet, there's no need to hunt for the manufacturer's customized Windows install disks to ensure your OEM key will work.
  • The different versions of Ubuntu are based on use, not ability to pay. The Netbook version is optimized for netbooks, using a netbook sized screen. The desktop version has all the desktop tools you'd want, and the server version contains as little as possible, so that there's nothing to impede the applications that run on it. With Windows, the editions are based on ability to pay, and in some cases the operating systems are suboptimal because Microsoft needs to justify you paying more for, say, a server operating system than a desktop one. With Ubuntu, you install the best version available to you - there's no reason not to.
  • There's no need to worry about legalities. Licensing is so much easier in the free software/open source world, when you know that you have a right to install and use any software you have a copy of.

10. It's OPEN

I said this about Android, but I'll say it again: Ubuntu is so much better because it's open. It has an enormous community of developers working on making it better - faster, more powerful, and easier to use. And when Canonical wants to do something they see as better for their users than the things others in the community are doing, they can go their own way, because the technologies they're building their system upon are also open.

What other reasons can you think of for using Ubuntu over Windows? Or do you prefer Windows? Let me know below!