Right now there are supposedly four generations of mobile technology. The labels are essentially marketing, with the possible exceptions of "1G" and "2G".
First Generation Mobile Technology
There's pretty much no debate as to what 1G is in the mobile phone world - it's basic analog cellular. A digital control channel exists to set up and tear down calls, and to handle phones moving from cell to cell, but essentially, when a call starts, the phone is allocated a frequency to transmit and receive on, and the audio travels as basic analog radio (as in, the kind of signal your FM radio can pick up). If the signal gets weak, the phone and tower "talk to each other" to see if the phone can hop onto another tower. And that's it. It's relatively simple.
First Generation is considered "bad" for a number of reasons. The major one is that it's not very spectrum efficient and cannot be. Capacity is proportional to the amount of spectrum available and the number of cells, and you can't easily increase either. Spectrum can't be increased for obvious reasons, and cells can't be added freely for a different reason: if two analog cellphones are transmitting on the same frequency and are too close to one another, there's no easy way to separate the signals - think of being in a car, listening to the radio, and drifting into the boundary where two stations are equally far away and broadcasting on the same frequency.
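To make the "capacity is proportional to spectrum and cells" point concrete, here's a rough back-of-the-envelope sketch in Python. The 30 kHz channel width is roughly what analog AMPS used; the spectrum figure, reuse factor, and cell count are illustrative assumptions rather than numbers from any particular network.

```python
# Back-of-the-envelope analog cellular capacity, with illustrative numbers.
spectrum_hz = 12_500_000      # spectrum available in each direction (assumed)
channel_hz = 30_000           # one analog call occupies a ~30 kHz channel
reuse_factor = 7              # neighbouring cells must use *different* channels,
                              # to avoid the "two stations on one frequency" problem
cells = 100                   # number of cells covering the market (assumed)

channels_total = spectrum_hz // channel_hz          # channels in the whole market
channels_per_cell = channels_total // reuse_factor  # simultaneous calls per cell
capacity = channels_per_cell * cells                # calls the network can carry at once

print(f"{channels_per_cell} calls per cell, {capacity} calls across {cells} cells")
```

Notice that the reuse factor eats most of the spectrum: the only levers you have are more spectrum or more (smaller) cells, and analog interference limits how far you can push the second one.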
Well, that was the major, major problem with analog cellular. There's an easy fix, as it happens, and that brings us on to 2G:
Second Generation Mobile Telephony
If the problem is interference, then the obvious solution is to transmit the audio in such a way that it can be separated from other audio broadcast on the same frequency. In 2G, the audio is turned into a digital signal, which means it can be encoded in a way that makes it practical for a receiver to distinguish it from a weaker signal arriving from further away.
There are multiple ways of doing this, and multiple standards for how you transmit and receive once you've converted the signal into a digital one. All of them have certain things in common: the audio is converted into a digital signal; the digital signal is "compressed" - that is, reduced in size by removing redundant information; and the power level of the handset's transmitter is adjusted so that its signal is no stronger than it absolutely has to be.
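Here's a minimal sketch of the "no stronger than it absolutely has to be" part - a single step of closed-loop power control, where the tower tells the handset to nudge its transmit power up or down until the received signal just meets a target. The target level, step size, and path loss figures are illustrative assumptions, not values from any particular standard.

```python
# Closed-loop power control, heavily simplified.
TARGET_DBM = -100.0   # level the tower wants to receive (assumed)
STEP_DB = 1.0         # size of each up/down command (assumed)

def power_control_step(tx_power_dbm: float, path_loss_db: float) -> float:
    """Return the handset's new transmit power after one control command."""
    received_dbm = tx_power_dbm - path_loss_db
    if received_dbm > TARGET_DBM:
        return tx_power_dbm - STEP_DB   # louder than needed: back off
    elif received_dbm < TARGET_DBM:
        return tx_power_dbm + STEP_DB   # too quiet: turn it up
    return tx_power_dbm

# A handset walking towards the tower: path loss falls, so power ratchets down.
tx = 28.0
for path_loss in [125, 123, 121, 119, 117]:
    tx = power_control_step(tx, path_loss)
    print(f"path loss {path_loss} dB -> transmit at {tx:.0f} dBm")
```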
The US D-AMPS system used something called TDMA (Time Division Multiple Access), dividing each frequency into short timeslots, with each phone transmitting and receiving only during its allocated slot.
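A toy sketch of the TDMA idea - three phones taking turns on one carrier. Three phones per carrier is roughly the D-AMPS full-rate arrangement, but the slot structure here is simplified and the phone names are made up:

```python
# One carrier chopped into repeating timeslots; each phone owns one slot.
SLOTS_PER_FRAME = 3
phones = ["phone_A", "phone_B", "phone_C"]   # each assigned one slot on this carrier

def who_transmits(slot_number: int) -> str:
    """Return which phone owns a given slot (slots repeat, frame after frame)."""
    return phones[slot_number % SLOTS_PER_FRAME]

for slot in range(9):   # three full frames
    print(f"slot {slot}: {who_transmits(slot)} transmits")
```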
The GSM system was more sophisticated, using a combination of TDMA and something called "spread spectrum": each GSM phone would switch frequency every time it used a time slot, in a sequence synchronised with the tower. It did this to avoid any issue where one phone might be slightly out of sync, or broadcasting on slightly the wrong frequency, causing problems for phones using adjacent frequencies or timeslots. GSM was less efficient with spectrum than D-AMPS given the same towers, but because this technique made GSM much more resilient, you could increase capacity very quickly just by building more towers.
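Here's a simplified sketch of the hopping idea: both ends generate the same hop sequence, so they change carrier together, frame after frame, and any interference gets smeared across many frequencies instead of wrecking one. This uses a seeded pseudo-random generator as a stand-in - it is not the actual GSM hopping algorithm - and the carrier frequencies are illustrative.

```python
import random

allocated_carriers = [935.2, 935.4, 935.6, 935.8]   # MHz, illustrative

def hop_sequence(seed: int, frames: int) -> list[float]:
    """Both ends run this with the same seed, so they hop in lockstep."""
    rng = random.Random(seed)
    return [rng.choice(allocated_carriers) for _ in range(frames)]

tower_view = hop_sequence(seed=42, frames=6)
phone_view = hop_sequence(seed=42, frames=6)
assert tower_view == phone_view   # synchronised: both sides agree on every hop
print(tower_view)
```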
The final popular mobile system in use was cdmaOne. cdmaOne used a system called CDMA, where instead of broadcasting in narrow channels and timeslots, each "bit" - the smallest unit of any digital signal - would be broadcast multiple times using a signal as wide as the bandwidth available. This is also a spread spectrum technique, and it was very efficient, much more efficient than D-AMPS or GSM given similar numbers of towers. Unfortunately, the technique is also very power hungry, as phones that use CDMA have to transmit and receive constantly (and do so over a much wider swathe of spectrum), and it suffers from something called "breathing", where, as traffic increases, it becomes steadily harder to distinguish between signals at cell boundaries. Early cdmaOne adopters such as Sprint PCS became notorious for overloaded networks with staggering numbers of dropped calls during peak periods, partly because of this issue and partly because many networks adopted cdmaOne because it was "cheap" - that is, they were under the impression that all they had to do was roll out enough towers to give people coverage, and cdmaOne itself would take care of the capacity issues.
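To show how signals transmitted on top of each other can still be separated, here's a toy CDMA example: each phone's bit is spread by its own code, the tower receives the sum of everything on the air, and correlating against one phone's code recovers that phone's bit. Two phones and length-4 codes keep it tiny; real systems use far longer codes and have to cope with noise, timing, and varying power levels.

```python
# Each phone gets its own spreading code; the two codes here are orthogonal.
CODES = {
    "phone_A": [+1, +1, +1, +1],
    "phone_B": [+1, -1, +1, -1],
}

def spread(bit: int, code: list[int]) -> list[int]:
    """Turn one bit (+1 or -1) into a chip sequence as wide as the code."""
    return [bit * chip for chip in code]

def despread(signal: list[int], code: list[int]) -> int:
    """Correlate the combined signal with one phone's code to get its bit back."""
    correlation = sum(s * c for s, c in zip(signal, code))
    return +1 if correlation > 0 else -1

# Both phones transmit at once; the tower just sees the sum of their signals.
on_air = [a + b for a, b in zip(spread(+1, CODES["phone_A"]),
                                spread(-1, CODES["phone_B"]))]

print(despread(on_air, CODES["phone_A"]))   # +1, phone_A's bit
print(despread(on_air, CODES["phone_B"]))   # -1, phone_B's bit
```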
While traditional modems could kinda, sorta, work on first generation systems, 2G saw the first adoption of standard cellular data systems. GSM, whose upper levels were essentially based on the all-digital ISDN system (popular in some parts of Europe, and also used by almost every office with multiple phone lines), had data from the beginning, with the other systems gaining similar features later as GSM-like functionality was slowly grafted on. 2G data was "circuit switched": data connections were treated as just another phone call. GSM allowed up to 56kbps, but only by combining multiple channels (treated as making multiple phone calls at once). The other systems were more limited.
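The arithmetic behind circuit-switched data is just "timeslots held open times rate per slot". A quick sketch using the commonly quoted GSM per-slot rates (treat the exact figures as illustrative):

```python
# Circuit-switched 2G data: faster connections just mean holding more
# timeslots (i.e. more simultaneous "calls") open at once.
RATE_PER_SLOT_KBPS = 14.4   # later GSM per-slot data rate; the original was 9.6 kbps

for slots in range(1, 5):
    print(f"{slots} timeslot(s) held open -> {slots * RATE_PER_SLOT_KBPS:.1f} kbps")

# Four slots lands you in dial-up modem territory, which is roughly where the
# "up to 56kbps, but only by combining multiple channels" figure comes from.
```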
2G systems also saw the first two-way short message service (SMS). This, again, started with GSM, and spread to the other systems.
Very few people would argue that these weren't the second generation systems. The only quibbles have to do with whether certain systems were more advanced than others. cdmaOne advocates believe, strongly, that the "air interface" technology - the CDMA - was so much more advanced than GSM's that it was practically a generation ahead. GSM's advocates believe, strongly, that GSM's high level ISDN-based architecture and support for functionality still to be deployed in cdmaOne or its successors makes it a generation ahead. In theory, both groups should have been pacified by what happened next. In practice...
Third Generation Networks
With both cdmaOne and GSM, the major backers of the standards involved wanted to move on, though for somewhat dubious reasons. European companies especially had capacity issues that could ultimately only be solved by having more spectrum allocated to them. In order to sell the authorities on allocating more spectrum, they needed to come up with justifications, and so set about supporting enhancements to GSM that would make it more efficient and more functional.
Politically, a very influential player was a company called Qualcomm. Qualcomm was the developer of the cdmaOne system and held the majority of patents on CDMA. Qualcomm tried to get European companies to adopt CDMA in some shape or form, even working with Vodafone in Britain to test a version of GSM with CDMA replacing the lower levels, and generally the response was hostile. Convinced it was the victim of a conspiracy between European manufacturers and European governments, it ran a concerted campaign to lobby the US government to support its system, and promoted the idea that its cdmaOne system was vastly ahead of GSM (something most people exposed to both systems as end users would question, especially at the time, when a cdmaOne handset - with two-way messaging and data yet to be released - was generally no more functional than an analog phone!)
The result of this intensive lobbying was that the GSM people devising the "next generation mobile standard" felt that they had to include CDMA in some shape or form in their system, as pretty much nobody was talking about any other technologies and politicians seemed likely to reject requests for new spectrum without it. But the politics being what they were, there was no desire to see Qualcomm have a "win", with the result that a non-Qualcomm proposal for how the CDMA should be implemented was adopted, called W-CDMA.
Qualcomm subsequently released a competing "3G" standard, an upgrade to their cdmaOne system called CDMA2000, and as a result of these machinations, the division between GSM-based and cdmaOne-based networks continued.
While this was going on, the ITU came up with a definition for something they called "IMT-2000", which became the commonly accepted definition of 3G. The IMT-2000 definition was based on available data rates rather than on the underlying technology. The definition was so loose that an enhancement to GSM, called EDGE, and a cordless phone standard, DECT, both qualified.
Of the two major standards, Qualcomm's CDMA2000 was clearly the inferior, except in one major area: it was easier to graft over existing cdmaOne networks. CDMA2000 used less spectrum per channel (with a corresponding decrease in maximum data rate), which made it easier to use alongside other network standards when spectrum was at a premium.
The GSM effort, called UMTS, was as much a leap over 2G GSM, and CDMA2000, as GSM was over 1G and cdmaOne. UMTS brought greater extensibility, the ability to have multiple connections at once (so, for example, UMTS users can make a phone call and check their email at the same time), and the body behind UMTS, the 3GPP, standardized a large number of new systems, including multimedia systems, that eventually made their way to UMTS's competitors. Ever wondered why your phone stores movies in a format called ".3gp"? It's named after the 3GPP, which decided upon that particular combination of standards.
Fourth Generation Mobile Communications
At this point in the story, we get to a turning point. UMTS in its early form turned out to be a colossal disappointment, for several reasons:
- First, the CDMA system on which it was based was not the magic bullet CDMA advocates claimed it would be. Qualcomm distanced itself from the UMTS version, W-CDMA, but much of the problem was the concept, not the implementation. W-CDMA shared cdmaOne's disadvantages with power consumption and dropped calls during peak periods. CDMA was also not scalable: the UMTS designers' decision to use large amounts of spectrum per channel meant it could grow to support data rates much greater (three times greater, in fact) than the theoretical maximum of CDMA2000, but there was still a limit, and that limit was going to be reached fairly quickly.
- Second, as a new system, UMTS had teething problems.
- Third, in some ways UMTS was a prototype. Engineers had been asked to design a network for fast data and decent-sounding voice calls, and had used CDMA's channel-oriented architecture to put together something that delivered both, but anyone standing back and looking at the system would immediately ask why they'd done it that way. If voice is digital, then voice is data. So why not just have one type of network, a big, high-bandwidth data network? Why complicate things more than they have to be?
- Fourth, there were massive roll-out issues with the technology, especially in North America, where in many areas operators had to choose between continuing to support their existing network and deploying UMTS, which would force all of their customers to replace their handsets. Not surprisingly, most operators simply didn't roll out UMTS to anything close to their general coverage area.
As the technology matured, UMTS got better, but again the various groups got together and started working on the long term evolution of the GSM standard, which ended up being called LTE. At the same time, other groups were working on their own systems. The IEEE was working on WiMAX, a high data rate wireless data system originally intended for ISPs to use. And Qualcomm, not wanting to be left behind, started work on UMB, a project it eventually abandoned due to lack of interest. (Qualcomm then threw its weight behind LTE, which is good, because it marks the end of the great rift between the experts in these kinds of things.)
All three standards discarded CDMA in favor of a system, OFDMA, that splits the available spectrum into lots of tiny little channels, with devices combining as many channels as they need to transmit data. This wasn't a new concept - it was a fairly popular design for high-speed modems in the 1980s - but it was new in the radio world, and it had huge advantages over CDMA. OFDMA doesn't suffer from the same issues as CDMA during congested periods; it's more power efficient, because handsets only have to use as much spectrum as they need rather than transmitting constantly across a large swathe of the ether; oh, and it's really scalable too - need to make it faster? Add more channels.
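A small sketch of the OFDMA allocation idea: pre-slice the carrier into narrow subcarriers, then hand each device only as many as its data rate requires. The 15 kHz subcarrier spacing matches LTE; the per-subcarrier throughput and the devices' demands are made-up illustrative numbers.

```python
# Pre-slice one carrier into narrow subcarriers, then grant them on demand.
SUBCARRIER_HZ = 15_000
BANDWIDTH_HZ = 10_000_000        # a 10 MHz carrier (illustrative)
KBPS_PER_SUBCARRIER = 30         # assumed throughput per subcarrier

free = list(range(BANDWIDTH_HZ // SUBCARRIER_HZ))   # hundreds of tiny channels

def allocate(needed_kbps: int) -> list[int]:
    """Hand a device just enough subcarriers to cover the rate it asked for."""
    count = -(-needed_kbps // KBPS_PER_SUBCARRIER)   # ceiling division
    grant = free[:count]
    del free[:count]
    return grant

for device, demand in [("voice call", 64), ("web browsing", 2_000), ("video", 8_000)]:
    grant = allocate(demand)
    print(f"{device}: {demand} kbps -> {len(grant)} subcarriers")
print(f"{len(free)} subcarriers left for everyone else")
```

Need more speed? Grant more subcarriers (or widen the carrier), which is exactly the "add more channels" scalability the text describes.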
Again, a new way to distinguish these standards was needed, this time because the technology improvements were going to result in radically faster, more powerful networks, rather than because more spectrum was needed. The ITU came up with a definition for what should come after IMT-2000, which they called IMT-Advanced. This was informally adopted, initially, as the definition of 4G.
...but not for long. Here's what happened. The de facto definition of "a generation greater than 3G" was a network standard that:
- Supported much, much greater data rates than 3G
- Was "IP-only" - ie the system didn't have separate voice and data channels, it was a single system that treated everything as data.
Who cared how it was implemented as long as it was implemented? Well, various experts looked at UMTS's implementation of CDMA technology, decided that the system still had a lot of potential, and came up with an enhancement called HSPA+. This could run in a data-only mode, and available data rates could easily hit 50-100Mbps, comparable to WiMAX and LTE.
On top of that, early versions of WiMAX and LTE didn't support data rates quite as high as the ITU definition suggested, yet both were clearly next generation networks.
So, Sprint, who was rolling out WiMAX, but a variant that wasn't IMT-Advanced, said "Screw it, this is 4G, everyone knows it's 4G, let's call it 4G!", and advertised their new service as 4G.
Then Verizon, who was rolling out LTE, but a variant that wasn't IMT-Advanced, said "Sprint is right, and you know what, everyone knows LTE is 4G, let's call it 4G!", and advertised their new service as 4G, followed closely by AT&T.
And then T-Mobile, who didn't have an LTE or WiMAX network, but had been implementing HSPA+ everywhere and optimizing it so it was really, really, fast, said "Screw it. If Sprint can call its non-IMT-Advanced version of WiMAX 4G, and Verizon and AT&T can call their non-IMT-Advanced version of LTE 4G, we can certainly call our just-as-fast-as-theirs HSPA+ system 4G too", and that's what they did.
And everyone got mad, because T-Mobile's "4G" network is actually, literally, the same as its 3G network - it's just some enhancements running in certain areas. And finally the word came down from the ITU that it was perfectly OK for T-Mobile to call HSPA+ "4G", because the ITU wasn't trying to define 4G at all - it was defining "IMT-Advanced" - and if people wanted to equate the two, that was up to them, but the ITU certainly wasn't going to do that kind of vulgar thing.
Where does that leave us?
In technical terms, GSM and cdmaOne are 2G standards. EDGE, CDMA2000, and UMTS are 3G. HSPA+, first generation WiMAX, and first generation LTE may or may not be 4G, but they're not 3G. And LTE and WiMAX will evolve into unambiguously 4G standards. In marketing terms, EDGE drops down to 2G, and all the "may or may not be 4G" standards become 4G. Magic!
1G exists because - well, you had to start somewhere.
2G exists because 1G had many problems, and 2G adopted digital communications as a fix.
3G exists because going digital wasn't enough, by itself, to solve the capacity problems, and carriers needed a political argument for getting more spectrum.
The official IMT-Advanced 4G concept exists because the standards created for 3G had too many problems, and 4G discards both CDMA and the separation of voice and data in order to fix those problems.
There's a cycle here actually. I'm guessing 5G will be a spectrum grab again. And 6G will fix the problems in 5G.
In the meantime, 4G as a term has become a little meaningless. What is clear is that there's a move to an entirely different type of network, a network based upon data. That's really exciting, and that's what the generation after 3G is all about.