Demystifying 3G (and mobile communications)
“What’s 3G?” is a question I’ve heard many times in the past few months. With the deployment of 3G in Lebanon, many people around me started asking what it is about, what it has to offer, and why 3G?
Here’s an explanation that, albeit long, is hopefully easy to read for those who are not familiar with the field, yet still detailed enough to be interesting, offering starting points for curious, inquisitive minds; there are many keywords you can search and explore. The post runs in paragraphs that quickly trace the historical development of the technology that led to the 3rd generation of mobile telecom networks. If, while reading, you get bored or feel that things get too detailed, skip to the next paragraph.
How did it all start? 0G
With the development of radio communications technologies during WWII and the need for connectivity on the move in the post-war era, work on mobile telephony solutions was carried out and resulted in what is now commonly referred to as 0G, or pre-cellular, telephone systems.
The 0G systems were mobile radio telephones that allowed users to make calls to numbers on the fixed telephone network and vice versa. They were basically traditional radio communication sets (similar to the ones used by the police and the military) that usually connected the user to an operator, who personally took the dialing request and placed the call on the fixed network. This operator was also in charge of vocally announcing on the radio network that a call was bound from the fixed network to one of the customers, relaying it if he/she got a reply from that particular customer.
There were a number of 0G systems developed in different countries, such as the Mobile Telephone System, the Improved Mobile Telephone System, the Advanced Mobile Telephone System, Mobile Telephony System D, etc.
So, in brief: the handset was very bulky, usually fitted in a briefcase, and the system usually required a person to relay phone calls. Improvements were made to eliminate the need for an operator, but the capacity was always very limited: few channels were available for a tower covering a wide area, and each channel could carry only one customer.
1st Evolution: 1G
As communicating while mobile grew more popular in the 70s, the 0G systems quickly suffered from interference and congestion due to the limited resources of the design (few radio channels) and the lack of any multiple access scheme that would allow more users to make simultaneous calls. A remedy was needed to solve this problem and to create a service that could be easily available to a large number of customers.
Key technology in 1G: Cellular networks.
The idea was smart and simple to implement: create coverage cells, each servicing the customers in its coverage area. This made it possible to distribute the available channels over these cells and reuse them (relatively) infinitely in cells far enough from each other. The higher frequencies used in 1G, compared to 0G, made this reuse practical: higher frequencies attenuate more over the same distance, which reduces interference. So the same frequency could be reused in cells that were relatively far from each other, and this was a key factor in increasing the capacity of the system.
Handover, the technique that allows a user to remain connected to the system as he moves across the coverage areas of different cells, was also created. It was an important feature that enabled mobility while staying connected to the system.
The system was a big evolution compared to the old 0G systems; however, the handsets were still bulky. They were not easy enough to handle and carry to provide a comfortable mobile telephony experience. And as the technology grew popular, the system suffered from congestion, especially in dense urban areas.
Why congestion? The system was analog (in analog systems the voice, an acoustic pressure wave, is converted into an analogous electrical signal; the latter is transmitted over electromagnetic waves, received at the other end, and converted back, as is, into an acoustic wave by the phone’s speaker at the receiver’s end). That meant each user occupied a frequency while he was calling, and it couldn’t be used by another user; and since the number of frequencies allocated to each cell was limited, the number of users making simultaneous calls in each cell was very limited. Another limitation, on top of the bulky handsets and limited capacity, was that the handsets consumed hefty amounts of power: even with the huge batteries they carried, they still needed to be recharged frequently. Battery life was around one hour of talk time in a device that weighed around 730g; devices were slimmed down to around 300g by the late 80s and early 90s, but talk time was still around one hour per charge.
Obviously, solutions to the capacity problems, the power drainage and the huge battery were needed, and that’s what led to the 2nd generation, or 2G.
2nd Evolution: 2G
By the late 80s and early 90s, mobile communications had become an integral part of popular culture. Devices got relatively smaller and cheaper, becoming more attractive and allowing a bigger number of people to access the service. However, the nature of the 1G system could not accommodate such growth, due to the physical limitations we’ve seen in the previous paragraph, so a technical solution was needed to evolve to a better system. Many systems were developed around the world, but the most popular and widespread one was, and still is, GSM.
Key technologies in 2G: TDMA & digital signal processing (both enabled by the digitization of the system), and channels with a bigger bandwidth.
The main limitation of the 1G system came from its limited multiple access scheme: one frequency equaled one dedicated user voice channel. A new access scheme was needed to use the available frequency bands more efficiently, and that’s where TDMA (Time Division Multiple Access) came into play. Instead of separating users only by putting them on different frequencies (frequencies that were limited in number), several users were able to share the same frequency by allocating it to each user for a limited amount of time called a time slot (8 slots per frequency in GSM); thus multiple users could talk simultaneously on the same frequency.
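To make the time-slot idea concrete, here’s a toy sketch of a GSM-style TDMA schedule. The user names and frame counts are made up for illustration; real GSM framing is far more involved, but the principle is the same: each user transmits only in their own slot of each frame.

```python
# Toy sketch of GSM-style TDMA: one carrier frequency carries 8 time slots,
# each slot dedicated to one user.

SLOTS_PER_FRAME = 8  # GSM splits each carrier into 8 time slots

def slot_schedule(users, n_frames):
    """Return (frame, slot, user) triples: each user always gets the same slot."""
    schedule = []
    for frame in range(n_frames):
        # At most 8 users can share one carrier frequency.
        for slot, user in enumerate(users[:SLOTS_PER_FRAME]):
            schedule.append((frame, slot, user))
    return schedule

# Three users share one frequency, each transmitting only in their own slot.
for frame, slot, user in slot_schedule(["A", "B", "C"], n_frames=2):
    print(f"frame {frame}, slot {slot}: user {user} transmits")
```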
The digitization of the system made TDMA possible and opened the door to digital signal processing techniques, especially compression. The acoustic voice signal is first converted into an analog electrical signal; the analog signal is then sampled, quantized and converted into a bit stream (that’s how it becomes digital); then audio codecs, or vocoders, are used to reduce the bit rate (from 64 kbps down to as low as 9.6 kbps for GSM) while keeping intelligible, good voice quality. This rate reduction resulted in better usage of the allocated spectral bandwidth.
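As a quick sanity check on those numbers: the 64 kbps figure is simply telephone-grade PCM, 8 kHz sampling times 8 bits per sample. A small sketch:

```python
# Where the 64 kbps figure comes from: classic telephone-grade PCM.
sample_rate_hz = 8000   # 8 kHz sampling covers ~4 kHz voice (Nyquist)
bits_per_sample = 8     # 8-bit quantization per sample

pcm_rate_bps = sample_rate_hz * bits_per_sample
print(pcm_rate_bps)     # 64000 bps = 64 kbps

# A GSM vocoder compresses this down by a factor of roughly 7:
gsm_rate_bps = 9600
print(round(pcm_rate_bps / gsm_rate_bps, 1))  # ~6.7x compression
```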
Definition: To avoid confusion with other metaphorical definitions, when I say bandwidth I mean the spectral bandwidth, which is the width in hertz of a dedicated slice of the frequency spectrum used for a particular radio communication.
An additional benefit of the digitization of the system was channel coding: the usage of special coding techniques (block and convolutional codes for GSM) to combat errors caused by interference and fading on the wireless path.
All these techniques helped 2G systems achieve better voice quality and increase overall efficiency and capacity compared to the limited 1G.
Another feature worth mentioning is the increase of the channel bandwidth from around 30 kHz in 1G systems to around 200 kHz in 2G systems, namely GSM (although 30 kHz is largely sufficient to carry the human voice, which normally doesn’t go above 4 kHz). This increase helped achieve the same channel capacity while significantly reducing the required transmit power, and that’s mainly how batteries got smaller and talk time increased.
This change was due to a better understanding of the work of Claude Shannon, who is considered the father of “information theory”. To roughly explain how this bandwidth increase delivered this improvement, I’ll cite the equation defining the capacity of the simplistic Additive White Gaussian Noise channel (white noise is the background noise normally found in nature), which was found to be:

C = B · log₂(1 + S/N)

where C is the channel capacity in bits per second, B is the bandwidth, S is the signal power and N is the noise power.
We can quickly notice that the capacity is directly proportional to the bandwidth, whereas the signal power sits inside the log, an increasing concave function. That means the effect of the signal power S on the channel capacity is severely dampened by the log, so the huge increase in signal power needed to reach a given capacity can be replaced by a much smaller increase in bandwidth.
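To make the tradeoff concrete, here’s a small numerical sketch (the bandwidth and SNR values are chosen arbitrarily for illustration) comparing the two ways to double capacity:

```python
import math

def shannon_capacity(bandwidth_hz, snr):
    """AWGN channel capacity in bits/s: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr)

# Baseline: a 30 kHz channel at an SNR of 15 (about 11.8 dB).
c0 = shannon_capacity(30e3, 15)     # 120 kbps

# Doubling capacity via bandwidth: just double B.
c_bw = shannon_capacity(60e3, 15)   # exactly 2 * c0

# Doubling capacity via power: SNR must jump from 15 to 255 (17x more power),
# because the log forces (1 + SNR) to be squared.
c_pw = shannon_capacity(30e3, 255)  # also 2 * c0

print(c0, c_bw, c_pw)
```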
The work of Shannon unfortunately remained relatively unknown in the engineering community for a long while, for various reasons. However, I highly recommend his paper “A Mathematical Theory of Communication” if you have a good mathematical background and a taste for communication technologies; it’s one of my favorite papers for its clarity, elegance and revolutionary impact on the telecommunication technologies we have today: http://circuit.ucsd.edu/~yhk/ece287a-win08/shannon1948.pdf
And that’s basically how 2G systems provided us with higher capacity and higher quality calls, and grew popular enough to accommodate more than 3 billion users in 2011.
3rd Evolution: 3G
The development of 2G systems started in the early 80s, and their deployment began in the late 80s and early 90s, before the official commercialization of the internet in 1995. 2G systems were designed to carry voice calls efficiently, and like landline networks they were circuit switched: a dedicated physical circuit is established between the callers each time a call is made. This circuit-switched nature is very different from the interconnection system used on the internet, which doesn’t establish dedicated physical circuits between source and destination; instead, it uses data packets labeled with destination addresses that are thrown into the network and left for the transport protocols to deliver however they please, based on the addresses labeling these packets.
Due to their design as voice-carrying systems, 2G systems were not built to carry packet-switched traffic. GPRS (General Packet Radio Service) was added to GSM to offer a solution, but it remained limited in potential and capacity. Later on, EDGE (Enhanced Data rates for GSM Evolution) was developed, but it still didn’t satisfy the hunger for wireless data connectivity. By the end of the 90s, with the exponential growth of the internet and the business potential and importance it carried, a dire need appeared to bring mobility to the internet; designers and standardization bodies started releasing standards for a new system that made it possible to connect to the internet while mobile, and that’s how we evolved to 3G.
3G key technologies: DSSS (Direct sequence spread spectrum), WCDMA (Wideband Code division multiple access)
DSSS is a technology initially developed for the military, at the end of WWII and afterwards, as a means to combat jamming. The main idea is to spread the data stream to be communicated by multiplying it with a code that has a much higher rate (called the chip rate in this case); the result is that the spectral bandwidth initially needed for the communication is spread over a much larger one. This operation almost hides the signal under the noise floor. At the receiving end, the receiver knows the code used and applies the same operation to de-spread the signal: it extracts the signal from the surrounding noise while spreading the noise at the same time. This results in what is called a processing gain, which further enhances the signal-to-noise power ratio, allowing the receiver to clearly recover the message that was hidden in the noise.
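Here’s a toy sketch of the spread/de-spread operation. The 8-chip code below is made up for illustration (real systems use carefully chosen pseudo-random or orthogonal codes, and much longer ones), but it shows the mechanism: each data bit is multiplied by the whole chip sequence, and correlating with the same code at the receiver recovers the bit.

```python
# Toy DSSS sketch: spread one data bit over 8 chips, then de-spread.
code = [1, -1, 1, 1, -1, 1, -1, -1]   # spreading code; chip rate = 8x bit rate

def spread(bits, code):
    """Each data bit (+1/-1) is multiplied by the whole chip sequence."""
    return [b * c for b in bits for c in code]

def despread(chips, code):
    """Correlate received chips with the code; the sign recovers each bit."""
    n = len(code)
    bits = []
    for i in range(0, len(chips), n):
        corr = sum(ch * c for ch, c in zip(chips[i:i + n], code))
        bits.append(1 if corr > 0 else -1)  # processing gain: corr == +/- n
    return bits

tx = spread([1, -1, 1], code)   # 3 bits become 24 chips
print(despread(tx, code))       # [1, -1, 1]
```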
WCDMA applies the DSSS technique (it provides the codes with which the initial signal is multiplied for spreading) and allows multiple users to connect simultaneously.
In GSM, we’ve seen that users were separated by frequency (FDMA), with different users put on different frequencies, and those on the same frequency were separated by time (TDMA), by allocating that frequency to each user for a defined period of time (a time slot).
In 3G the game changed, and the air interface changed with it: all users can use the same frequency all the time, and they’re separated by orthogonal and pseudo-orthogonal codes attributed to them. Orthogonality means that a perfect (or perfect enough) differentiation and separation can be achieved. You can think of it as lots of people in a room talking at the same time, but each couple using a different language: only those speaking the same language understand each other, and anything else they hear is just unintelligible noise that doesn’t hamper their communication.
Example on orthogonal codes: Walsh Codes https://en.wikipedia.org/wiki/Hadamard_code
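Walsh codes are the rows of a Hadamard matrix, and their orthogonality is easy to verify: the dot product of two distinct codes is zero, which is exactly what lets users share the same frequency without interfering. A short sketch:

```python
def hadamard(n):
    """Build a 2^n x 2^n Hadamard matrix by the recursive doubling construction."""
    h = [[1]]
    for _ in range(n):
        # [H  H ]
        # [H -H]
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

H = hadamard(3)  # 8 Walsh codes of length 8

print(dot(H[1], H[2]))  # 0: different codes don't interfere
print(dot(H[1], H[1]))  # 8: a code correlates fully with itself
```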
This technique allowed better efficiency in the usage of the spectral bandwidth: there was no need to partition it into frequency bands and distribute them among cells, so the useful bandwidth available increased from 200 kHz in GSM to 5 MHz in UMTS. This increased the capacity of the system, gave a lot of room to increase data rates, and hence allowed better support for internet applications in addition to live video calls.
These are the main and most important factors behind the migration to 3G. Further developments were made to achieve even higher rates (HSPA, HSPA+), introducing additional techniques in the radio access network: new channels in the physical and transport layers, new and faster packet scheduling techniques, Hybrid Automatic Repeat Request (HARQ) to guarantee error correction with minimum retransmissions, adaptive modulation in the physical layer (going from QPSK up to 64QAM), and MIMO (Multiple Input Multiple Output) starting from Release 7 of the 3GPP standard. All these techniques aimed at increasing the spectral efficiency, loading more bits/s on each hertz of frequency and thus allowing higher, much higher data rates.
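The jump from QPSK to 64QAM alone is worth a quick calculation: the number of bits carried per symbol is log₂ of the constellation size, so 64QAM triples the raw bit rate per symbol compared to QPSK (at the cost of needing a much cleaner channel). A sketch:

```python
import math

# Bits per symbol for the modulations mentioned above: log2 of the
# constellation size.
modulations = {"QPSK": 4, "16QAM": 16, "64QAM": 64}

for name, points in modulations.items():
    bits = int(math.log2(points))
    print(f"{name}: {bits} bits/symbol")
# QPSK carries 2 bits/symbol, 64QAM carries 6: a 3x raw rate increase.
```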
So that’s, in an extremely brief way, what 3G is about and how mobile communications evolved to reach it. The system is obviously much more complicated than this; there are experts specialized in each part of it, and it’s not possible to be an expert in all its intricate details. However, I want to mention a feature of the coverage in the radio access part that affects users very much, especially in Lebanon: “cell breathing”.
In 3G, because users use the same frequency all the time, a base station (called a NodeB in 3G) constantly changes the range of the geographical area it covers. As the number of users in the cell increases, the coverage shrinks to ensure proper service to those users and to force users at the edge of the cell to switch to other nearby cells. So the main idea behind cell breathing is “load balancing”: efficiently distributing users among cells and avoiding interference between users due to differences in their transmission power.
What you need to know in Lebanon:
- Cell breathing is a feature that affects cell coverage in 3G, and since the network was forcibly launched before achieving adequate coverage, for political reasons, the network still has many dead spots. Cell breathing can make this even worse, making the reception bars go from full to none while you’re sitting on your couch. Your phone will then be forced to fall back to the GSM network through a hard-handover mechanism; this means a new connection has to be established with the GSM network, which in turn means that if you were in an active call, it will be dropped.
- Power consumption: Your battery won’t last as long in 3G mode as in 2G. Though the power per bit needed in 3G is lower than in 2G, the amount of data you transceive is significantly higher, and the signaling with the base station (registering with the NodeB, power control, etc.) is heavier, leading to more power drainage. Add to that an abnormal number of hard handovers from/to GSM, due to the reasons stated in the previous paragraph, which increase the signaling overhead, and the power consumption will be even higher than in normal 3G operation.
What you can do:
- Never force your phone to 3G-only mode; even in the best 3G networks, GSM remains the backup network and still carries a significant share of voice calls and data sessions.
- If you’re experiencing lots of dropped calls, force your phone to GSM; its coverage is more mature in Lebanon and congestion is less likely on it. Switch back to GSM/3G dual mode when you want to connect to the internet for heavy data sessions.
- Don’t forget that your phone will drain more power from your battery: charge well before you go out. Forcing the phone to GSM when you don’t need heavy data sessions can significantly increase your battery life.
- And finally keep on lobbying and pressuring to improve the 3G network.
If you have any questions don’t hesitate to drop a comment.