Wednesday, July 4, 2007
Digital Rights Management (DRM)
The use of mobile and portable devices to download and consume content raises the issue of rights management. The originator of the content, whether music, images, movies, or games, will have spent considerable time and money developing it and will want to exert some control over its future use. This is where DRM systems can be applied to limit the use, and therefore prevent misuse, of content.
There are a large number of proprietary DRM systems at present, which reflects
the growing availability of digital content over the Internet and other networks,
and is indicative of the lack of standards in this area. The Open Mobile Alliance
(OMA) has worked on enablers for DRM systems aimed at mobile handsets and
other portable devices, and many handset manufacturers now support OMA DRM
but may also support proprietary techniques as the market dictates. DRM systems
are built around a trusted entity known as a DRM agent (or client), which in the
case of a mobile network resides in the handset.
The DRM client is able to download content and the rights to use that content.
The content and rights can be kept separate or can be delivered in a combined format.
Rights are a mixture of permissions and constraints that indicate what can
be done with the associated content.
For example, content may be valid for 30 days, or until the end of the month;
the number of plays may be limited; etc. It is the responsibility of the DRM agent
to apply the rights to the content and to track its usage, so that if, for example, it
is a song that can only be played 30 times, then the content cannot be used unless
new rights are obtained.
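The bookkeeping a DRM agent performs can be illustrated with a minimal sketch. The class and method names below are purely illustrative, not OMA-defined APIs; a real agent would also protect this state against tampering.

```python
from datetime import date

class RightsObject:
    """Illustrative rights object: a bundle of constraints on use."""
    def __init__(self, max_plays=None, expiry=None):
        self.plays_remaining = max_plays   # None means unlimited plays
        self.expiry = expiry               # None means no time limit

class DRMAgent:
    """Minimal sketch of a DRM agent tracking usage against rights."""
    def __init__(self):
        self.rights = {}                   # content_id -> RightsObject

    def install_rights(self, content_id, rights):
        self.rights[content_id] = rights

    def can_play(self, content_id, today=None):
        rights = self.rights.get(content_id)
        if rights is None:
            return False                   # no rights: content unusable
        today = today or date.today()
        if rights.expiry is not None and today > rights.expiry:
            return False                   # rights have expired
        if rights.plays_remaining is not None and rights.plays_remaining <= 0:
            return False                   # play count exhausted
        return True

    def play(self, content_id, today=None):
        if not self.can_play(content_id, today):
            return False
        rights = self.rights[content_id]
        if rights.plays_remaining is not None:
            rights.plays_remaining -= 1    # track usage against the rights
        return True

agent = DRMAgent()
agent.install_rights("song.mp3", RightsObject(max_plays=2))
print(agent.play("song.mp3"), agent.play("song.mp3"), agent.play("song.mp3"))
# prints: True True False
```

Once the play count is exhausted, the content stays on the device but is unusable until new rights are downloaded and installed.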
OMA release 1 DRM included a number of basic features, which were forward
lock (preventing content forwarding), combined delivery, and separate delivery.
Separate delivery, where the content and rights are kept separate, supports something
known as super-distribution, whereby users are able to distribute content at
will, but not the rights for that content. This can be very useful where the content
has some sort of preview mode that can be used to encourage recipients to acquire
their own rights to that content.
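The separate-delivery model behind super-distribution can be sketched as follows: the protected content may be copied and forwarded freely, while full playback requires a rights object acquired by each recipient. This is only an illustration of the idea; the real OMA DRM Content Format and Rights Object Acquisition Protocol are considerably more involved.

```python
class ProtectedContent:
    """Content that can be super-distributed; payload notionally encrypted."""
    def __init__(self, content_id, payload, preview):
        self.content_id = content_id
        self.payload = payload     # usable only with a matching rights object
        self.preview = preview     # free preview to encourage purchase

def forward(content):
    # Super-distribution: the content itself may be copied at will...
    return ProtectedContent(content.content_id, content.payload, content.preview)

def render(content, user_rights):
    # ...but full playback requires rights acquired by this particular user.
    if content.content_id in user_rights:
        return content.payload
    return content.preview         # fall back to preview mode

alice_rights = {"track42"}         # Alice purchased rights for this track
bob_rights = set()                 # Bob received a forwarded copy, no rights

track = ProtectedContent("track42", "FULL SONG", "30-second clip")
copy_for_bob = forward(track)
print(render(track, alice_rights))       # prints: FULL SONG
print(render(copy_for_bob, bob_rights))  # prints: 30-second clip
```

The key point is that forwarding the content costs the rights holder nothing: every copy is an advertisement, and only a rights purchase unlocks it.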
Press-to-Talk
Press-to-Talk (PTT) service is a real-time, point-to-point and point-to-multipoint,
voice-based instant messaging application. This has proved a popular service in the
United States on the Nextel network, based on Motorola’s iDEN architecture. Many
of the features are the same as text-based IM, with buddy lists and chat rooms, etc.
Once again, the biggest potential problem affecting the widespread adoption of the
service will be compatibility between handsets and across networks.
There are, at present, four PTT solutions on the market.
1. Motorola iDEN. This is a well-established system that has a proven track
record in the United States with good performance. Motorola also supplies
a wide range of handsets to support the service. However, only Motorola
manufactures handsets for this service. iDEN does not support a presence
service.
2. Kodiak. This system uses circuit-switched connections, which results in good
performance but misses out on the efficiency of packet-based delivery. The
system is also technology-agnostic. It has been deployed on GSM, CDMA,
and analog networks. However, there is a limited range of handsets available
for this service. Kodiak offers a thin client that allows vendors to implement
the service.
3. Qualcomm QChat. This service is proprietary to Qualcomm and is supported only on cdma2000 1X. In this case, the performance of the 1X network may not be sufficient to ensure an adequate user experience. The QChat service is implemented on Qualcomm's BREW platform.
4. PTT over Cellular (PoC). The final method is a proposed standard mechanism
from Ericsson, Nokia, Motorola, and Siemens for a PTT over Cellular
(PoC) system. This has been presented to the Open Mobile Alliance (OMA),
which is developing this proposal as a standard in line with the 3G UMTS
and IMS specification from the 3GPP. It is based, generally, on existing protocols
and methods: IP and SIP. This system has the advantage that it will
run over any network. There will be products available in GSM and GPRS,
UMTS, and cdma2000. It is anticipated that it will be possible to interoperate
a PTT service across network boundaries.
Labels:
Kodiak,
Motorola iDEN,
PTT over Cellular,
Qualcomm QChat
DVB-H and UMTS
Studies have been conducted in relation to the roll-out of DVB-H and its possible integration
with cellular networks. DVB-H is backward-compatible with DVB-T, and
the two formats can be mixed in the same digital multiplex. While one multiplex can
convey six to eight DVB-T channels, it can be used for up to fifty DVB-H channels.
The DVB technology can be kept separate and used to deliver content to DVB-H-enabled terminals, or a cellular technology, such as UMTS, could be used to provide a reverse channel that allows users to browse for content and, when a selection is made, have that content delivered over a DVB-H channel (Figure 4.67).
This would mean that some degree of cooperation would be required between the
UMTS network operator and a broadcaster. The convergence of telecommunications
and entertainment services means that such cooperation is very likely.
Services and Security for Handsets
Given the range of mobile device capability in today’s market, there are many types
of service that may be available to the user. Some of these services depend on handset
capability; others rely on the services supported by the network.
Until recently, many services relied on proprietary techniques in the handset
or network, which means that services are often limited to just a certain handset
model or are only available with a particular mobile operator. The process of defining
services by organizations such as 3GPP has led to the standardization of many
services, or the development of standard environments where services can be developed,
deployed, and executed.
Described below are some of the services that can be seen on handsets that
are available today. All of these services have benefited from the process of
standardization.
Support for Location Technologies
There are a number of techniques for providing location information for handsets.
Some of these require involvement of the handset in the location process; others use data that is already collected by the handset during its normal modes of operation and do not place additional processing requirements on the handset.
The least accurate location technique uses either Timing Advance (TA) in GSM
or Cell Identity (Cell ID) in UMTS. When a mobile connects to the network (in
dedicated mode), the TA or Cell ID is known and can be reported to a location
server and used to deliver some form of location service to the user. More accurate
location information can be obtained using a triangulation process, wherein the
handset measures information from multiple base stations and uses this, along with
data obtained from the network, to calculate the true time differences of the signals
it has received.
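The Timing Advance technique can be made concrete with a little arithmetic. In GSM, TA is reported in whole bit periods (48/13 microseconds), and the signal covers the handset-to-base-station path twice (round trip), so each TA step corresponds to a distance ring roughly 554 meters wide; the sketch below is illustrative.

```python
# Converting a GSM Timing Advance (TA) value to a distance ring around
# the serving base station. TA alone gives range, not direction, hence
# only a coarse location.
C = 299_792_458          # speed of light, m/s
BIT_PERIOD = 48e-6 / 13  # GSM bit period, seconds (~3.69 us)

def ta_to_distance_m(ta):
    """Upper-bound distance from the serving base station for a TA value."""
    return (ta + 1) * C * BIT_PERIOD / 2   # /2 because TA measures round trip

def ta_ring_m(ta):
    """The ~554 m wide ring the handset must lie in for a given TA value."""
    inner = ta * C * BIT_PERIOD / 2
    return inner, ta_to_distance_m(ta)

inner, outer = ta_ring_m(0)
print(f"TA=0 places the handset between {inner:.0f} and {outer:.0f} m")
```

With TA values running from 0 to 63, this also explains the roughly 35 km radius of a standard GSM cell.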
In GSM, this technique is known as Enhanced Observed Time Difference (E-OTD); in UMTS, the equivalent process is Observed Time Difference of Arrival (OTDOA). In both cases, the handset measures the time difference of signals between base stations; it also receives information about the real time difference (either sent by broadcast messages or to one specific mobile) and calculates from these a geometric time difference. This geometric time difference is a function of the handset's location and can be reported to the network to deliver location services. The calculation requires some additional processing in the handset.
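To show how geometric time differences pin down a position, here is a toy solver: given base-station coordinates and the observed time differences relative to a reference station, it searches a 1 km grid for the point whose predicted differences best match. Real implementations use least-squares solvers with network-supplied real-time-difference corrections; this brute-force sketch only illustrates the geometry.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def locate_tdoa(stations, tdoas, step=10):
    """Grid search for the position whose predicted time differences best
    match the observed ones. stations[0] is the reference station; tdoas[i]
    is (arrival time from stations[i+1]) - (from stations[0]), in seconds."""
    C = 299_792_458                        # speed of light, m/s
    best, best_err = None, float("inf")
    for x in range(0, 1001, step):
        for y in range(0, 1001, step):
            p = (x, y)
            # Sum of squared mismatches between predicted and observed
            # path-length differences at candidate point p.
            err = sum(
                (dist(p, stations[i + 1]) - dist(p, stations[0]) - C * t) ** 2
                for i, t in enumerate(tdoas)
            )
            if err < best_err:
                best, best_err = p, err
    return best

# Synthetic check: three base stations, handset actually at (300, 400).
stations = [(0, 0), (1000, 0), (0, 1000)]
true_pos = (300, 400)
C = 299_792_458
tdoas = [(dist(true_pos, s) - dist(true_pos, stations[0])) / C
         for s in stations[1:]]
print(locate_tdoa(stations, tdoas))  # recovers a point close to (300, 400)
```

Note that at least three stations are needed: each time difference constrains the handset to a hyperbola, and two hyperbolas intersect at the position.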
The final location mechanism is potentially the most accurate, and involves integrating
a GPS receiver in the handset. When required, the handset
can take GPS measurements and report these to the location servers in the network.
To speed the satellite acquisition process, the network can broadcast assistance
data, which tells handsets in a particular area which satellites are visible. GPS-based
location has the largest impact on the mobile because of the GPS receiver and the
processing that is required to resolve position information. It is possible to combine
location techniques; thus, for example, when GPS fails to give a fix because the
handset is inside a building, the location information could be obtained from the
time-difference techniques.
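Combining location techniques is naturally expressed as a fallback chain: try the most accurate source first and fall back when it cannot produce a fix. The provider functions and their return values below are illustrative, not a real handset API.

```python
# Sketch of combining location methods: GPS first, then time-difference,
# then coarse Cell ID, each returning (lat, lon, accuracy_m) or None.

def gps_fix():
    return None                   # e.g. no fix: the handset is indoors

def tdoa_fix():
    return (51.50, -0.12, 150)    # time-difference fix, moderate accuracy

def cell_id_fix():
    return (51.51, -0.10, 2000)   # coarse, but always available in a cell

def best_location(providers):
    for provider in providers:    # ordered from most to least accurate
        fix = provider()
        if fix is not None:
            return fix
    return None                   # no method could locate the handset

print(best_location([gps_fix, tdoa_fix, cell_id_fix]))
# GPS fails indoors, so this falls through to the time-difference fix
```

The ordering encodes the trade-off described above: accuracy first, availability as the safety net.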
Mobile TV Reception
The multimedia capabilities of handsets have raised the possibility of delivering
television to mobile or handheld devices, and indeed a number of services have
been launched that allow users to download short video clips and even video ringtones
onto their phones.
However, these services use the bearers, or channels, of the 2.5G or 3G network, which in many cases may not have the bandwidth or format to support high-quality video transmission. There is also an impact on all other services: because video demands relatively high bandwidth, it limits the capacity available for everything else the network delivers.
A possible solution is the use of a technology known as Digital Video Broadcast-Handheld (DVB-H). DVB-H is a derivative of the main DVB-Terrestrial (DVB-T) format and includes a number of features targeted at delivering content to mobile devices.
To minimize power consumption in a DVB-H receiver, the information sent
to the handheld device is time-sliced. That is, it is delivered in concentrated bursts
so the receiver is not switched on all the time. This reduces power consumption by
up to 95 percent compared to DVB-T. In addition, the DVB-H standard includes
a number of features aimed at supporting user mobility, such as seamless handovers
between DVB transmitters. The signal format is able to accommodate users moving
at several hundred kilometers per hour.
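The power saving from time slicing follows directly from the receiver's duty cycle, as this back-of-the-envelope model shows. The burst and cycle durations are illustrative values, not figures from the DVB-H specification.

```python
# Time slicing in DVB-H: the receiver wakes only for its service's burst
# each cycle, so the radio's duty cycle (and hence, roughly, its power
# draw) shrinks accordingly.

def receiver_duty_cycle(burst_ms, cycle_ms, wakeup_ms=0):
    """Fraction of time the receiver front end must be powered, allowing
    for some re-synchronization time before each burst."""
    return (burst_ms + wakeup_ms) / cycle_ms

# e.g. a 200 ms burst of data every 4 seconds, with 50 ms to re-synchronize
duty = receiver_duty_cycle(burst_ms=200, cycle_ms=4000, wakeup_ms=50)
saving = 1 - duty
print(f"duty cycle {duty:.1%}, power saving vs. always-on {saving:.1%}")
# prints: duty cycle 6.2%, power saving vs. always-on 93.8%
```

Longer cycles save more power but cost more buffer memory in the receiver and delay channel changes, which is the trade-off the standard's parameters balance.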
Mobile Operating Systems
The role of an operating system (OS) within a handset or handheld device is no different from that of the OS deployed in computing terminals; the major differences in the OS between the two environments are the result of handset constraints.
The OS is responsible for a range of tasks, which include management of the
processor, memory, and devices. Processor management determines
when an application can use the central processor and how to manage the resources
when multiple processes have to operate simultaneously. Memory management
allocates memory to processes so that they do not overlap and controls the reading
and writing of data to memory locations. Additionally, the OS will look after storage
of data, perhaps on a card or even a disk, and will also manage devices, or the
input/output (I/O) capabilities. The user interface (UI) is generally considered part of the OS, although not all OSs include one; some instead allow licensees to customize the UI to their own design.
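The processor management described above can be illustrated with a toy round-robin scheduler that shares the CPU among competing tasks in fixed time slices. A real mobile OS kernel is preemptive and priority-aware, so this is only a sketch of the underlying idea.

```python
from collections import deque

def round_robin(tasks, quantum):
    """tasks: list of (name, remaining_time) pairs. Runs each task for at
    most `quantum` time units, rotating until all complete; returns the
    execution order as (name, slice_used) pairs."""
    queue = deque(tasks)
    order = []
    while queue:
        name, remaining = queue.popleft()
        slice_used = min(quantum, remaining)
        order.append((name, slice_used))      # task occupies the CPU
        if remaining > slice_used:
            queue.append((name, remaining - slice_used))  # back of the queue
    return order

# Three handset processes sharing the CPU in 2-unit slices.
print(round_robin([("ui", 3), ("radio", 5), ("sync", 2)], quantum=2))
# prints: [('ui', 2), ('radio', 2), ('sync', 2), ('ui', 1), ('radio', 2), ('radio', 1)]
```

Memory management plays the analogous role for RAM: allocating non-overlapping regions to these processes and policing reads and writes.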
Operating above the OS will in most cases be a series of applications; and to
support these, the OS will have an application programming interface (API), which
abstracts the functionality of the OS for application developers.
In the context of a mobile handset, an OS has a set of limitations placed on it
that are the result of the processor capabilities and limited memory. Therefore, the
OS in these cases needs a very small footprint, which means very efficient code
writing. There are broadly two approaches to writing an OS for mobiles. The first
is to develop an OS from the ground up, specifically for the mobile environment.
The second is to take an OS that is perhaps used in desktop devices and produce a
compact version.
One critical area for mobile OS developers is reliability. The end user of a mobile device will not tolerate system crashes and lockups. This demands not only reliability but also robustness, as the underlying connectivity between the device and the network is error-prone.
The range of handset OSs in today's market extends from completely closed, proprietary systems through to open platforms (Figure 4.50). There are many variants between these two ends of the scale, where developers are able to create content for a particular OS without having to know its technical details.
Proprietary Operating Systems
Many handset products have an OS that is proprietary, and in many cases the
details of the OS are unavailable, even to developers. However, the popularity of smart phones with comprehensive operating systems, and a recognition that content developers can generate revenue for carriers, has meant that even proprietary OSs now have a degree of openness associated with them.
For example, a proprietary OS will often include Java functionality, which
offers developers a route to content production through standardized, well-published
APIs; meanwhile, the core of the OS that drives the phone functionality
remains hidden. Handset vendors that have moved along this path will generally
offer developer platforms, software development kits (SDKs), training, documentation, and support options.
Universal Serial Bus (USB), Bluetooth, Bluetooth Profiles
The USB initiative was an attempt by the IT industry to provide a simple, standardized interface that could support many applications and was capable of plug-and-play operation. The outcome was the definition of the USB connection. There are two major versions of USB: version 1.1, which supports data rates of 1.5 Mbps (low-speed) and 12 Mbps (full-speed), was supplemented more recently by USB version 2.0, which supports 480 Mbps (high-speed).
USB is electrically and mechanically a very simple interface with data lines
and power connections that allows a USB host device to provide power to connected
devices. Although the USB standard has been modified over time to include
the possibility of USB ports on mobile phones and similar devices, many handset
vendors depart from the USB standard when it comes to the physical connection
of USB on the phone.
The standard USB connection is too large for mobile devices; and although a
small form-factor version is now in the standard, many handset vendors choose to
use a proprietary physical connection for the USB on their handset ranges. This
means that end users will require a manufacturer-specific cable to connect their
handsets to other USB devices.
Commonly, the USB port found on handsets operates at full-speed (12 Mbps), with the handset acting as a USB device; some handsets support version 1.1, whereas more recent handsets support version 2.0.
Bluetooth
Bluetooth is a radio-based connection option that aims to solve some of the issues addressed by IrDA, in particular the number of different cables that users require to interconnect the multitude of terminals they own.
To overcome some of the limitations of IrDA, Bluetooth operates in the 2.4-GHz Industrial, Scientific and Medical (ISM) band. The advantage of this band is that it is license-exempt, which means radio equipment can operate in it without users requiring operating licenses. However, to support the coexistence of many radio applications in the band, it is regulated in terms of usable power levels and spectrum parameters.
Bluetooth was developed by the telecommunications industry, so the initial focus of the standard was to provide a means for mobile handsets to interconnect with associated devices, such as headsets, PDAs, and laptop computers. However, because of its ready availability, Bluetooth is finding its way into other consumer products.
The Bluetooth radio component operates across up to 79 channels in the 2.4-GHz band and, to mitigate interference, frequency-hops around these channels at a rate of 1600 hops per second. It should be noted that the full range of Bluetooth channels is not available in all countries because of local regulatory constraints. The power classes defined for Bluetooth devices support typical ranges of up to 10 meters, although class 1 devices at 20 dBm can achieve ranges greater than 100 meters.
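The hopping scheme can be modeled with a few lines of code. Note the hedge: the real Bluetooth hop sequence is derived deterministically from the master device's clock and address via a selection kernel defined in the specification; the pseudo-random generator below merely stands in for it to illustrate the channel usage and timing.

```python
import random

# Illustrative model of Bluetooth-style frequency hopping: 79 channels
# visited at 1600 hops per second, i.e. a 625-microsecond slot per hop.
CHANNELS = 79
HOPS_PER_SECOND = 1600

def hop_sequence(seed, n_slots):
    """Generate an illustrative hop sequence; in real Bluetooth the
    sequence comes from the master's clock/address, not a PRNG."""
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(n_slots)]

slots = hop_sequence(seed=0xBEEF, n_slots=HOPS_PER_SECOND)  # one second
print(len(slots), all(0 <= c < CHANNELS for c in slots))
# prints: 1600 True
print(f"slot duration: {1 / HOPS_PER_SECOND * 1e6:.0f} microseconds")
# prints: slot duration: 625 microseconds
```

Hopping rapidly across the whole band is what lets many Bluetooth piconets, plus WLAN and microwave ovens, coexist in the crowded ISM band.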
Bluetooth Profiles
As the Bluetooth specifications were written, it became obvious that there were
numerous real-world applications for the technology and that it would be unrealistic
to include each and every one of these in the standards. Therefore, Bluetooth
is based on the concept of a series of defined profiles, where a profile specifies how
the Bluetooth protocols should operate to provide a set of functions. The profiles
can be viewed as a series of building blocks from which real applications can be
constructed. New profiles can be added to the Bluetooth standard after completing
an agreement process.
An example of the relationship between a usage model and profiles is the 3-in-1 phone. The 3-in-1 phone has three operational modes: (1) it is able to act as a normal mobile handset and access the cellular network; (2) it can use Bluetooth to access a gateway device attached to a landline (thereby acting as a cordless phone); and (3) it is able to connect directly to other handsets using Bluetooth (thereby acting as an intercom device). This usage model is based on two Bluetooth profiles: (1) the Cordless Telephony Profile (CTP) and (2) the Intercom Profile (IP).
Outside the profiles that support applications, there are two generic Bluetooth
profiles, known as the Generic Access Profile (GAP) and the Service Discovery
Application Profile (SDAP).
GAP concerns the discovery procedures that allow Bluetooth devices to find
one another, and includes functions for establishing a Bluetooth connection and
optionally adding security. SDAP is used by one Bluetooth device to discover what
services are offered by a remote Bluetooth device.
Wireless LAN (WLAN)
Although there are a number of WLAN standards on the market, the dominant series are those produced by the Institute of Electrical and Electronics Engineers (IEEE). The IEEE 802 project oversees standards for all LAN technologies, both wired and wireless, and the 802.11 working group is responsible for WLAN standards. IEEE 802.11 has defined several radio technologies for WLAN, known as 802.11b, 802.11a, and 802.11g, with 802.11n in draft form at the time of writing. These differ in terms of the data rates they support and the spectrum band in which they operate. Two of the WLAN standards are designed to operate in the 2.4-GHz ISM band, while the third, 802.11a, operates in bands around 5 GHz.
The preexistence of radar systems at 5 GHz in Europe means that 802.11a systems
cannot be deployed in this region without suitable modification. Additional
interference mitigation techniques were added by the 802.11h standard, which
adapted 802.11a radio for use under European regulations.
The term “WiFi” (wireless fidelity) is often applied to 802.11 systems. WiFi is actually
a brand name that belongs to an industry association, the WiFi Alliance, whose role
is to test interoperability of WLAN products. Any device that carries the
WiFi mark will have been tested against a baseline implementation of the standards
and has been demonstrated to operate with products from other manufacturers.
interface that could support many applications and was capable of plugand-
play operation. The outcome of this was the definition of the USB connection.
There are two major versions of USB: version 1.1, which supports
data rates of 1.5 Mbps (low-speed) and 12 Mbps (full-speed), was supplemented
recently by USB version 2.0, which supports 480 Mbps (high-speed).
USB is electrically and mechanically a very simple interface with data lines
and power connections that allows a USB host device to provide power to connected
devices. Although the USB standard has been modified over time to include
the possibility of USB ports on mobile phones and similar devices, many handset
vendors depart from the USB standard when it comes to the physical connection
of USB on the phone.
The standard USB connection is too large for mobile devices; and although a
small form-factor version is now in the standard, many handset vendors choose to
use a proprietary physical connection for the USB on their handset ranges. This
means that end users will require a manufacturer-specific cable to connect their
handsets to other USB devices.
Commonly, the USB port found on handsets operates at full-speed (12 Mbps),
with the handset acting as a USB device; some handsets support version 1.1, whereas
more recent handsets support version 2.0.
Bluetooth
Bluetooth is a radio-based connection option that aims to solve some of the issues
addressed by IrDA, in particular the number of different cables that users require
to interconnect the multitude of terminals they own.
To overcome some of the limitations of IrDA, Bluetooth operates in the
2.4-GHz Industrial, Scientific, and Medical (ISM) band. The advantage
of this band is that it is license exempt, which means radio equipment operating in
this band can do so without users requiring operating licenses. However, to support
the coexistence of many radio applications in the band, it is regulated in terms of
usable power levels and spectrum parameters.
Bluetooth was developed by the telecommunications industry, so the initial focus
of the standard was to provide a means for mobile handsets to interconnect with
associated devices, such as headsets, PDAs, and laptop computers. However, because
of its ready availability, Bluetooth is finding its way into other consumer products.
The Bluetooth radio component operates across up to 79 channels in the
2.4-GHz band and, to mitigate interference, frequency hops around these channels at
a rate of 1600 hops per second. It should be noted that the full range of Bluetooth
channels is not available in all countries because of local regulatory constraints. The
power classes defined for Bluetooth devices support typical ranges up to 10 meters,
although class 1 devices at 20 dBm can achieve ranges greater than 100 meters.
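The hop rate implies a fixed dwell time on each channel, which the sketch below works out. The 79-channel count and 1600 hops per second come from the text; the pseudo-random sequence shown is purely illustrative, since real Bluetooth derives its hopping pattern from the master device's clock and address, not from a generic random generator.

```python
import random

CHANNELS = 79     # Bluetooth channels in the 2.4-GHz ISM band
HOP_RATE = 1600   # hops per second

# Time spent on each channel before hopping to the next.
dwell_us = 1_000_000 / HOP_RATE
print(f"dwell time per hop: {dwell_us:.0f} microseconds")

# Illustrative hop sequence only - not the real Bluetooth algorithm.
rng = random.Random(42)
sequence = [rng.randrange(CHANNELS) for _ in range(8)]
print("example hop sequence:", sequence)
```

The 625-microsecond dwell time is short enough that a narrowband interferer on one channel corrupts only a small fraction of transmissions.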
Bluetooth Profiles
As the Bluetooth specifications were written, it became obvious that there were
numerous real-world applications for the technology and that it would be unrealistic
to include each and every one of these in the standards. Therefore, Bluetooth
is based on the concept of a series of defined profiles, where a profile specifies how
the Bluetooth protocols should operate to provide a set of functions. The profiles
can be viewed as a series of building blocks from which real applications can be
constructed. New profiles can be added to the Bluetooth standard after completing
an agreement process.
An example of the relationship between a usage model and profiles is the 3-
in-1 phone. The 3-in-1 phone has three operational modes: (1) it is able to act as a
normal mobile handset and access the cellular network; (2) it can use Bluetooth to
access a gateway device attached to a landline (therefore acting as a cordless phone);
and (3) it is able to connect directly to other handsets using Bluetooth (therefore
acting as an intercom device). This usage model is based on two Bluetooth profiles:
(1) the Cordless Telephony Profile (CTP) and (2) the Intercom Profile (IP).
Outside the profiles that support applications, there are two generic Bluetooth
profiles, known as the Generic Access Profile (GAP) and the Service Discovery
Application Profile (SDAP).
GAP concerns the discovery procedures that allow Bluetooth devices to find
one another, and includes functions for establishing a Bluetooth connection and
optionally adding security. SDAP is used by one Bluetooth device to discover what
services are offered by a remote Bluetooth device.
Although there are a number of WLAN standards on the market, the dominant
series are those produced by the Institute of Electrical and Electronic Engineers
(IEEE). The IEEE project 802 oversees standards for all LAN technologies,
both wired and wireless, and the working group 802.11 is responsible for WLAN
standards. IEEE 802.11 has defined three radio technologies for WLAN, known
as 802.11b, 802.11a, and 802.11g, with a fourth, 802.11n, still in draft form. These differ in terms of the
data rates they support and the spectrum band in which they operate. Two of the
WLAN standards are designed to operate in the 2.4-GHz ISM band, while the
third, 802.11a, operates in bands around 5 GHz.
The preexistence of radar systems at 5 GHz in Europe means that 802.11a systems
cannot be deployed in this region without suitable modification. Additional
interference mitigation techniques were added by the 802.11h standard, which
adapted 802.11a radio for use under European regulations.
The term “WiFi” (wireless fidelity) is often applied to 802.11 systems. WiFi is actually
a brand name that belongs to an industry association, the WiFi Alliance, whose role
is to test interoperability of WLAN products. Any device that carries the
WiFi mark will have been tested against a baseline implementation of the standards
and has been demonstrated to operate with products from other manufacturers.
Video Coding
Although reducing the number of pixels in an image reduces the bit rate of a video
signal quite dramatically, further reductions are necessary if video is to be supported
over the limited bandwidth channels of a cellular network. For most applications,
the number of frames per second can also be reduced from the standard 50
or 60 frames per second used in broadcast systems to a figure between 10 and 30
frames per second. Frame rates as low as 10 frames per second will be acceptable for
some applications, such as videoconferencing or videocalling.
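The savings from reducing resolution and frame rate can be quantified with a simple calculation. The figures below are illustrative assumptions: a PAL-like broadcast raster of 720 x 576 pixels at 50 frames per second, against QCIF (176 x 144), a common handset video size, at 10 frames per second, both at 24 bits per pixel.

```python
def raw_bitrate(width: int, height: int, fps: int, bits_per_pixel: int = 24) -> int:
    """Uncompressed video bit rate in bits per second."""
    return width * height * fps * bits_per_pixel

broadcast = raw_bitrate(720, 576, 50)   # PAL-like broadcast raster
handset = raw_bitrate(176, 144, 10)     # QCIF at 10 frames per second

print(f"broadcast: {broadcast / 1e6:.0f} Mbps")
print(f"handset:   {handset / 1e6:.1f} Mbps")
print(f"reduction factor: {broadcast / handset:.0f}x")
```

Even after an eighty-fold reduction from smaller frames and fewer of them, the raw rate of several Mbps is still far beyond a cellular channel, which is why the compression techniques described next are essential.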
Further reductions in bit rate are achieved by employing coding or compression
techniques. Although there are a number of techniques on the market for processing
video, they have similarities in terms of concept, using a combination of spatial and
temporal compression (Figure 4.36). Spatial compression techniques analyze redundancy
within a frame, produced for example by a large number of adjacent pixels all
having the same or similar levels of brightness (luminance) and color (chrominance).
This redundancy can then be removed by coding. In a similar fashion, temporal
compression looks for redundancy between adjacent frames; this is often the result
of an image background, for example, that does not change significantly between
one frame and the next. Again, this redundancy can be removed by coding.
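These two ideas can be illustrated with a toy coder: run-length coding collapses spatial redundancy within one row of pixels, and frame differencing exposes temporal redundancy between frames. Real codecs are far more sophisticated, using transforms and motion compensation, so this is only a sketch of the concept.

```python
def run_length_encode(row):
    """Spatial redundancy: collapse runs of identical pixel values."""
    out = []
    for value in row:
        if out and out[-1][0] == value:
            out[-1] = (value, out[-1][1] + 1)
        else:
            out.append((value, 1))
    return out

def frame_difference(prev, curr):
    """Temporal redundancy: only the changes between frames remain."""
    return [c - p for p, c in zip(prev, curr)]

row = [200, 200, 200, 200, 50, 50, 200, 200]
print(run_length_encode(row))            # [(200, 4), (50, 2), (200, 2)]

frame1 = [10, 10, 10, 10]
frame2 = [10, 10, 12, 10]                # only one pixel changed
print(frame_difference(frame1, frame2))  # [0, 0, 2, 0] - mostly zeros
```

The difference frame is mostly zeros, which is exactly the kind of redundancy a subsequent coding stage can remove.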
As the power of electronic processors, particularly digital signal processors
(DSPs), has improved, video codecs have been designed that are able to offer equivalent
quality to their predecessors but at reduced bit rates.
Coding for still images is based on the same spatial compression as used for
video; there is no need to apply temporal compression to a single image. However,
the different still image formats are better suited to one type of image or another.
For example, JPEG works well with black and white or color natural images (such
as photographs), whereas GIF works better for black and white images that contain
lines and blocks (such as cartoons).
There is also a distinction between coding that is lossy and coding that is lossless.
The video coding techniques described here and JPEG for still images are all
classified as lossy in that they remove information through coding that cannot be
regenerated later. GIF, on the other hand, is a lossless coding technique and consequently
does not remove information through its coding.
Video Coding Standards
A number of video coding standards exist in commercial applications and two
main groups have worked on these standards: (1) the International Organization for
Standardization (ISO) and (2) the Telecommunication Standardization Sector of the
International Telecommunication Union (ITU-T). The ISO is responsible for the Moving
Picture Experts Group (MPEG), which has produced a series of video coding systems (Figure 4.37).
The first MPEG standard, MPEG-1, was released in 1992 and was aimed at
providing acceptable, but sub-broadcast, quality video that could be used for CDs
and games. The coding was designed around a total rate of about 1.5 Mbps, the data rate of a single-speed CD drive. The audio coding
portion of the MPEG-1 standard included three coding options offering progressively
greater compression rates for the audio component. The third of these
options, MPEG-1 Layer 3 or simply MP3, has become a dominant standard for the
distribution of music and audio over the Internet.
In 1994, MPEG-2 was released, and offered improvements on the MPEG-1
standard. MPEG-2 can work at a variety of bit rates, but at 1 to 3 Mbps outperforms
the quality of MPEG-1. MPEG-2 is used for digital versatile disks (DVDs)
and digital video broadcast (DVB), and includes a range of audio coding options
such as advanced audio coding (AAC).
The most recent addition to the MPEG family, MPEG-4, was originally designed
for low bit-rate services, with channels operating below 64 kbps, but it can also be employed at
high bit rates into the Mbps range. The design intention was to provide video coding
for some of the "new" applications that were appearing, such as streaming services.
Handset Sound Capabilities
Handsets need audio coders and decoders for a variety of reasons, the most fundamental
being the encoding and decoding of human speech for telephony services.
However, with a typical handset now supporting multimedia functions, the sound
capabilities will support music, the audio that accompanies a video clip, and applications
such as games and ringtones.
Ringtones
Ringtones have become big business as users attempt to distinguish their handsets
from otherwise identical models. The ringtone represents an easy route to phone
personalization. There are numerous formats (Figure 4.28) in which to provide
ringtones, and this represents the dramatic changes in capability from the very
early phones, which could only support monophonic tones, to the devices of today,
which are able to support complex music files coded with the same techniques used
to record CDs.
Some of the ringtone formats that have appeared in the marketplace are proprietary
and are perhaps only supported by a limited range of models from one
supplier or a small number of suppliers.
Ringtone Formats
Many early handsets had very limited capability in terms of ringtone support —
the tone output was monophonic as the sound elements could only play one note at
any moment in time. The manufacturer might supply a handful of built-in
ringtones for the user to choose from, and there was no capability for downloading
new tones. Monophonic tones have a very artificial sound to them.
Increased capability brought polyphonic ringtones to handsets and, combined
with features and services that allowed users to download new tones on their
phones, the market for ringtones was created. There are a number of polyphonic
ringtone formats (see Table 4.2), including the Musical Instrument Digital Interface
(MIDI). MIDI differs from the other tone formats in that the MIDI file does
not actually contain coded music, but rather a set of instructions about the notes
to be played, the voice to be used for each note, and the duration and depth of each
note. The consequence is that MIDI files are very compact and therefore ideally
suited to ringtone downloads.
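The compactness of the event-based approach is easy to demonstrate: a note list needs only a few bytes per event, whereas the same melody as raw CD-quality audio needs tens of kilobytes per second. The byte count per event below is a rough illustrative figure, not the exact MIDI file format.

```python
# A melody as MIDI-style events: (note number, duration in seconds).
melody = [(60, 0.5), (64, 0.5), (67, 0.5), (72, 1.5)]  # 3 seconds total

BYTES_PER_EVENT = 4        # rough size of one note event (illustrative)
event_bytes = len(melody) * BYTES_PER_EVENT

# The same 3 seconds as raw CD-quality audio:
# 44,100 samples/s x 2 bytes/sample x 2 channels.
duration = sum(d for _, d in melody)
pcm_bytes = int(duration * 44_100 * 2 * 2)

print(f"event list: {event_bytes} bytes")
print(f"raw audio:  {pcm_bytes} bytes")
```

A three-second melody occupies a handful of bytes as events but over half a megabyte as raw audio, which is why MIDI suited the slow download links of early handsets so well.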
An improvement on MIDI is Scalable Polyphonic MIDI, which allows the same
content to be played on devices that differ in terms of their polyphonic capability.
A low-end phone might only have four-note polyphony while a high-end phone
might have 32-note polyphony; the same file could play on both handsets through
a process of scaling.
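The scaling process can be pictured as each voice in the file carrying a priority, with a device playing only as many simultaneous voices as its polyphony allows. This is a toy model; real Scalable Polyphonic MIDI uses channel priorities defined in the file itself.

```python
def scale_to_polyphony(notes, max_voices):
    """Keep only the highest-priority voices a device can play at once.

    notes: list of (priority, voice_name); lower number = more important.
    """
    return [name for _, name in sorted(notes)[:max_voices]]

chord = [(1, "melody"), (2, "bass"), (3, "harmony"), (4, "percussion")]
print(scale_to_polyphony(chord, 4))  # high-end handset: all four voices
print(scale_to_polyphony(chord, 2))  # low-end handset: melody and bass only
```

The low-end device drops the least important voices but still renders a recognizable tune, which is the essence of the scalable format.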
The eXtensible Music Format (XMF) was introduced to overcome the limitation
of MIDI's fixed palette of 128 instruments. XMF allows the downloading of new
sounds to replace the default MIDI sounds.
More recently, ringtones have been provided as MP3 files using the same audio
coding techniques used for music distribution. These files offer much more realistic
sound capabilities, and the availability of chart music as ringtones is evidence of the
popularity of this format.
There are a number of manufacturer-specific ringtone formats in use and also
formats devised by third parties; for example, the polyphonic Synthetic music
Mobile Application Format (SMAF) from Yamaha is supported on a range of
phones from different manufacturers.
Audio Coding
As with ringtones, handsets must be capable of supporting a wide range of audio
formats if an end user wants to decode audio from a variety of sources. There are
two distinct families of audio coder found in handsets. The first family is related
to the need to code the human voice for telephony services, although some of the
coders used are derivatives that support signals with a wider bandwidth than speech
(e.g., music). The second family of coders consists of those that comprise the audio
layer used in video coding techniques.
GSM handsets were originally built around the Full-Rate (FR) codec, which
was later supplemented by the Half-Rate (HR) codec, the Enhanced Full-Rate
(EFR) codec, and the Enhanced Half-Rate (EHR) codec (Figure 4.29). All these
coding mechanisms are built around a model of the human voice and, therefore,
while they offer good quality for speech, they are not optimized for non-speech
signals such as music.
The GSM specifications moved on to an Adaptive Multi-Rate (AMR) codec
that was also adopted as the standard by 3GPP for UMTS networks. This codec
could switch rates according to needs and conditions, but was still speech oriented.
However, recent improvements have been made to the codec, first by improving
quality, and second by extending the audio bandwidth and adding stereo capability.
Thus, the codec has evolved to support not only voice, but also high-quality
audio, including stereo music.
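The rate-switching behaviour can be sketched as a lookup: given an estimate of channel quality, pick the highest codec mode the channel can sustain, so that poorer channels spend fewer bits on speech and more on error protection. The eight narrowband AMR rates below are the standard ones; the carrier-to-interference thresholds are invented purely for illustration.

```python
# The eight AMR narrowband modes in kbps - these are the standard rates.
AMR_MODES_KBPS = [4.75, 5.15, 5.90, 6.70, 7.40, 7.95, 10.2, 12.2]

def select_mode(carrier_to_interference_db: float) -> float:
    """Pick an AMR mode for the current channel quality.

    The dB thresholds here are illustrative, not from the standard.
    """
    thresholds = [1, 4, 7, 9, 11, 13, 16, 19]
    mode = AMR_MODES_KBPS[0]
    for threshold, rate in zip(thresholds, AMR_MODES_KBPS):
        if carrier_to_interference_db >= threshold:
            mode = rate
    return mode

print(select_mode(20))   # good channel -> 12.2 kbps
print(select_mode(5))    # poor channel -> 5.15 kbps
```

In the real system the mode decision is signaled between handset and network each speech frame, but the principle is the same: trade speech rate against robustness as conditions change.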
The second family of codecs found in handsets is based on Advanced Audio
Coding (AAC), taken from the MPEG specifications. As with the AMR codec, the
AAC codec has evolved to improve quality and support stereo signals.
Display Technologies
The displays used in handsets are based on liquid crystal display (LCD) technology
that has been used in consumer products for some time. Liquid crystal materials
have some special properties that are exploited to create displays; most importantly,
the crystals have a twisted structure and the amount of twist can be altered by
applying a voltage to the crystal material.
LCD Display Structure
A basic LCD display is a sandwich of layers through which light passes.
One of these layers is the liquid crystal that is situated between two layers of glass
that contain electrical connections. By altering the signal to these connections, the
crystals can be made to alter their twist, which has an effect on the polarization of
the light and can be used to create the dark/light contrast necessary for a display.
The specific LCD technology found in many displays is super twisted nematic
(STN), which relates to the special form of liquid crystal that is used. Displays can
be characterized as being either reflective or transmissive. A reflective display relies
on incident light from the front of the display, passing through all the layers to a final
reflective layer where it is reflected back to the front of the display. It is possible to
provide front-lighting or back-lighting to reflective displays. A transmissive display
uses backlight from within the display. The use of back- or front-lighting will increase
the energy consumption of the display; and when used in handsets, the lighting has
an associated sleep circuit to switch off the light after a few seconds of user inactivity.
There are a number of variants of the twisted nematic (TN) display, although
they all generally employ the same principles of operation.
Color Displays
Adding color to a display is relatively simple. Each pixel in the display
has three separate filters associated with it: one red, one green, and one blue.
Therefore, each pixel is effectively divided into three sub-pixels. The filters can be
activated so that only light of a particular color can pass through for that pixel. The
three sub-pixels can be manipulated to create a range of colors.
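The sub-pixel idea maps directly onto the familiar RGB colour model: three intensities, one per filter, combine into a single perceived pixel colour. A minimal sketch:

```python
def pixel_color(red: int, green: int, blue: int) -> str:
    """Combine three sub-pixel intensities (0-255) into one pixel colour."""
    for level in (red, green, blue):
        if not 0 <= level <= 255:
            raise ValueError("sub-pixel intensity must be 0-255")
    return f"#{red:02x}{green:02x}{blue:02x}"

print(pixel_color(255, 0, 0))      # red sub-pixel only -> #ff0000
print(pixel_color(255, 255, 0))    # red + green -> yellow, #ffff00
print(pixel_color(128, 128, 128))  # equal levels -> mid grey, #808080
```

With 256 levels per sub-pixel this scheme yields over 16 million combinations, far more than early handset displays could actually reproduce.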
Display Types
The way pixels in a display are addressed, so that they can be switched between states,
has led to two main display technologies, the so-called passive and active displays.
In a passive display (often called STN, after the super-twisted nematic
technology it uses), the individual pixels are addressed by row and column signals, one
pixel at a time, and thus the display is relatively slow because it takes time to build
up an image pixel by pixel.
On the other hand, active displays add another component in the form of a
transparent transistor at each pixel, hence these displays are referred to as thin film
transistor (TFT). Using the active technology allows a whole row (or column) of
pixels to be addressed at once, which means that creating an image is much more
rapid than in an STN display.
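The speed difference between the two addressing schemes can be made concrete by counting addressing operations per frame. This is a toy model that assumes a fixed time per operation and ignores the real driver electronics; the 176 x 220 display size is an illustrative mid-2000s handset resolution.

```python
def refresh_steps(width: int, height: int, active: bool) -> int:
    """Addressing operations needed to refresh one frame.

    Passive (STN): one operation per pixel.
    Active (TFT):  one operation per row - a whole row is set at once.
    """
    return height if active else width * height

W, H = 176, 220  # illustrative handset display resolution
print("passive:", refresh_steps(W, H, active=False), "steps")
print("active: ", refresh_steps(W, H, active=True), "steps")
```

The active display needs two orders of magnitude fewer addressing steps per frame, which is why TFT panels can support fast-moving video where STN panels smear.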
The disadvantages of STN displays are their relatively slow operation and
issues relating to brightness and viewing angle. However, they are
cheap to manufacture and use less energy than TFT displays,
which correct the major problems of the STN format. In handsets with two
displays, where a simple display is used for phone functions and a higher specification
display is used for viewing videos and playing games, it is common to
find both technologies deployed — STN for the simple display and TFT for the
high-quality display.
Other display technologies are being developed; one in particular, the organic
LED (OLED), is receiving a lot of interest. OLEDs are emissive devices and thus
do not require a backlight; they can also be created on very thin layers of polymer
(almost like printing), and they consume much less energy than LCDs,
which makes them ideal for handsets and other power-constrained devices.
OLEDs are being demonstrated in consumer products and much research is
being conducted into these components to overcome some of their limitations, at
which point widespread deployment would become a reality. For example, the lifespan
of blue OLED elements is only a few thousand hours, which means their use
in a TV or phone display is not yet commercially possible.
Types of Mobile Terminal Memory
Mobile handsets have always required elements of memory and storage as somewhere
for the operating system code to reside and execute. In early
phones, the user may have had access to a very limited area of memory to store contacts,
as an alternative to storing these on the subscriber identity module (SIM).
The increase in volume of applications such as messaging and the ability for
modern phones to support multimedia has increased the demands for storage
capacity within the handset. A typical handset contains a range of memory or storage
elements, which typically include the SIM card, hardware memory elements,
removable cards or memory sticks, and, most recently, hard disk drives.
With users now storing pictures, videos, music, and games on their handsets,
the total storage requirement has risen very rapidly; combined with this is the
increased complexity of multimedia signal processing, placing additional demands
on memory requirements.
Handset Hardware Memory
In common with any processing platform, a mobile handset requires a number of
different memory types to support its processing sequences. There are two broad
categories: Random Access Memory (RAM), which serves as a working area and as
storage for user and application data, and memory used to store the main program
code and other applications. RAM is volatile and
will lose its contents unless power is maintained, whereas the E2PROM or Flash
memory used for the code storage is nonvolatile.
The RAM types found in mobile handsets include Static RAM (SRAM) and
Dynamic RAM (DRAM). SRAM will hold its contents as long as power is maintained
and, unlike DRAM, does not require refreshment on a regular basis. DRAM
memory stores bits of data as a charge on a capacitor; this charge must be regularly
topped up to avoid it leaking away.
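The contrast above can be sketched with a toy model: a DRAM cell as a leaking capacitor whose stored charge must be periodically topped up, versus the data being lost if refresh stops. The class, threshold, and leak rate below are illustrative assumptions, not real device parameters.

```python
# Toy model of a DRAM cell: a stored "1" is a capacitor charge that
# leaks over time and must be refreshed before it drops below the
# sense threshold. Values are illustrative, not real device figures.

SENSE_THRESHOLD = 0.5   # below this, a stored "1" reads back as "0"
LEAK_PER_TICK = 0.1     # fraction of charge lost each time step

class DramCell:
    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0

    def tick(self):
        # Charge leaks away every time step.
        self.charge = max(0.0, self.charge - LEAK_PER_TICK)

    def refresh(self):
        # Read the still-valid value and rewrite it at full charge.
        self.write(self.read())

    def read(self):
        return 1 if self.charge >= SENSE_THRESHOLD else 0

cell = DramCell()
cell.write(1)

# Refreshed every 3 ticks, the bit survives indefinitely.
for _ in range(10):
    for _ in range(3):
        cell.tick()
    cell.refresh()
print(cell.read())  # 1

# Without refresh, the charge leaks below the threshold and the bit is lost.
cell.write(1)
for _ in range(10):
    cell.tick()
print(cell.read())  # 0
```

An SRAM cell, by contrast, would simply hold its bit for as long as power is applied, with no `refresh()` step at all.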
Electronically Erasable Programmable Read-Only Memory (EEPROM or
E2PROM) is a nonvolatile memory type that can be programmed and reprogrammed
many times by means of electrical signals. E2PROMs have life cycles of
between 100,000 and 10 million write operations, although they may be read any
number of times. A limitation of E2PROM devices is that only one memory location
can be written to at any one time; this can be overcome by using Flash memory
devices, which can have a number of locations written in one operation.
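The difference can be illustrated with a short sketch. The classes below are hypothetical, not a real memory driver: an E2PROM-style part accepts one write operation per location, while a Flash-style part programs a whole page of locations in a single operation, and each operation counts against the part's limited endurance.

```python
# Illustrative sketch (not a real driver): an EEPROM-style part writes
# one location per operation, while a Flash-style part programs a whole
# page in one operation. Each operation counts against the part's
# limited number of write cycles.

class Eeprom:
    def __init__(self, size):
        self.mem = [0] * size
        self.write_ops = 0

    def write_byte(self, addr, value):
        self.mem[addr] = value
        self.write_ops += 1       # one operation per location

class Flash:
    PAGE_SIZE = 4

    def __init__(self, size):
        self.mem = [0] * size
        self.write_ops = 0

    def write_page(self, page, values):
        assert len(values) == self.PAGE_SIZE
        base = page * self.PAGE_SIZE
        self.mem[base:base + self.PAGE_SIZE] = values
        self.write_ops += 1       # one operation covers the whole page

data = [1, 2, 3, 4]

eeprom = Eeprom(8)
for i, v in enumerate(data):
    eeprom.write_byte(i, v)      # four separate write operations

flash = Flash(8)
flash.write_page(0, data)        # a single write operation

print(eeprom.write_ops, flash.write_ops)  # 4 1
```

Storing the same four bytes costs the E2PROM four of its write cycles but the Flash only one, which is one reason Flash dominates for bulk code and data storage.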
The amount of hardware memory in handsets has increased considerably in
recent years as phones have included ever more complex operating systems and
applications, and also as users have required more memory for storing personal
data and multimedia files.
At the low end, the amount of E2PROM or Flash memory required might be
16 MB with an additional 8 MB of SRAM or DRAM. In addition, even low-end
phones might offer users memory of between 5 and 20 MB.
The manufacturing requirement to get more and more memory in a handset,
while at the same time decreasing the size of the device, has led to new techniques
such as multi-chip packaging (MCP) where one physical chip package houses both
SRAM and Flash.
Memory Growth
Driven by multimedia applications, the amount of hardware memory in handsets
has increased dramatically from the relatively small amount in voice-centric handsets
to a total of between 500 MB and 1 GB in 3G devices. The memory is typically
a mixture of RAM and Flash.
Until recently there were two main types of Flash memory in production, and
both of these are encountered in handsets; they differ in regard to the type of logic gate
deployed in the memory matrix — either NOR or NAND. However, hybrid Flash
memory devices are now appearing in the marketplace that combine the best features
of the two memory types, and these may soon find their way into mobile products.
NOR Flash was developed first, and while it is a true random access memory
(any location can be addressed when necessary), it suffers from relatively slow write
and erase times and a more limited lifespan than NAND Flash, which is both faster
and cheaper.
Memory Cards
Driven by appliances such as digital cameras, a number of removable data storage
formats have been invented, some of which are also deployed in mobile phones.
These memory cards or sticks can be used to store music, pictures, videos, and
games, which might have been downloaded or, in the case of pictures and videos,
captured by the user with the handset camera.
These memory devices are intentionally small, often a few tens of millimeters
square, and weigh only a few grams. Their storage capability however might be up
to several gigabytes. Typically, these devices are based on Flash technology and are
housed within the handset in a manner similar to the SIM.
Handset Processing, Processor Architectures, Coprocessors
As handsets have evolved from simple voice-only analog phones through to complex
3G multimedia platforms, an increasing processing load has been assumed by
the devices. Processing loads and capabilities can be compared using the measurement
of “millions of instructions per second” (MIPS), and device manufacturers
will usually quote the MIPS value for processor chips.
The processing requirement for a 2G GSM phone is on the order of 10 MIPS,
with much of this processing requirement resulting from the voice coding function
(Figure 4.16). The addition of a 2.5G technology such as GPRS raises this figure
to somewhere on the order of 12 MIPS, although the processing complexity of a
2.5G phone will differ significantly across the range of device types encountered,
and may be as high as 40 MIPS.
A UMTS (W-CDMA) handset operating in a 3G network requires a total
processing capability in the region of 500 MIPS, with 40 percent of this requirement
resulting from the relative complexity of the air interface. Adding features,
particularly video processing, will increase the MIPS requirement and there is
already discussion of phones with 1000 MIPS (1 GigaMIP) requirements.
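A quick calculation, using only the figures quoted above, shows how steep this growth curve is: the air-interface share of a 3G handset's budget alone exceeds the entire processing budget of even a complex 2.5G phone.

```python
# Processing budgets quoted in the text (MIPS).
gsm_2g = 10          # 2G GSM phone
gprs_2_5g_max = 40   # upper end quoted for a 2.5G phone
umts_3g = 500        # UMTS (W-CDMA) handset total

air_interface_share = 0.40   # fraction of the 3G budget spent on the air interface

air_interface_mips = umts_3g * air_interface_share
print(air_interface_mips)                      # 200.0
print(air_interface_mips > gprs_2_5g_max)      # True: beyond any 2.5G phone
print(umts_3g / gsm_2g)                        # 50.0: fiftyfold increase over 2G
```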
An issue for handset manufacturers and chip designers has always been the
processing limits of DSP chips, which is why handsets typically consist of multiple
processors and hardware accelerators (which remove some of the repetitive tasks
from the DSP and implement these in hardware).
Processor Architectures
The division of tasks between multiple processors within a handset is very common,
and a typical architecture would include a microprocessor (or microcontroller), a
DSP, and hardware accelerators. The role of the hardware accelerator is to remove
from the DSP the more routine repetitive tasks, such as radio channel processing,
leaving the DSP free to focus on other layer 1 tasks and vocoding (implementing a
compression algorithm particular to voice).
In this architecture, the hardware accelerator can be labeled as a coprocessor,
although this is a generic term that may have other meanings in the context of
handsets. Silicon manufacturers have in some cases produced single-chip solutions
that contain the three processing elements; this is an attempt to reduce the area,
volume, and cost of these vital handset components.
The typical task distribution in today’s handsets places the emphasis on the
accelerators rather than the DSP. However, as DSP technology improves, more and
more of the total processing load could be assumed by the DSP, although of course
by that time the evolution of services may be placing even more demands on the
handset processors.
Coprocessors
The spread of processing load has led to a number of different strategies for the
use of coprocessors (Figure 4.18). For example, in a dual-mode 2G/3G phone, a
chip manufacturer might offer a main processor that is responsible for the 2G and
2.5G baseband functions and a companion chip that adds 3G baseband functions.
These two processors can then be coupled to their corresponding radio modules.
This solution is perhaps suited to a manufacturer that is looking to evolve a range
of 2G/2.5G to support 3G capability. The basic core design of the handset can be
reused and the 3G functions are added in parallel. Another possible solution is to
use one processor for all the digital baseband processing and to use a separate coprocessor
to handle specific tasks such as multimedia services.
Types of Handset and the Market
Handset Segmentation
Segmentation of mobile handsets is usually based on technology, with a correlation
between technology and price; that is, the more complex or feature-rich the device,
the more expensive it tends to be. However, more recently, and for the foreseeable
future, alternative segmentation strategies have become increasingly important for
both vendors and operators. They both seek to segment handsets and their markets
in a variety of ways to cope with market trends and aid in the differentiation of
their products to attract and retain subscribers:
Technology.
Technology is the traditional handset segment. Handsets are
divided on a technical basis, with a direct correlation to pricing so that a
handset incorporating less of the latest technology is invariably cheaper in
the marketplace than one with greater technical prowess. This can now also
be applied to features on the phone, such as the number of megapixels on a
camera or the amount of storage capacity.
Lifestyle.
This is an increasingly ubiquitous segmentation strategy, reflecting
the growing importance of the end user in the value web. This involves
matching the functionality of a handset with the specific needs of the end
user (e.g., youth or style).
Price.
This strategy is beginning to be used more, owing to the move into
developing markets where customer income is low. It is primarily based on
the cost to manufacture the handset and the customer market segment at
which it is aimed. It also includes customer purchase profiling.
Application specific. This segments the market by optimizing components of the
handset to focus on a specific application, such as games or music. It is not a
technology-hungry approach: the handset design is quite flexible, and the process is
comparatively cheaper because it eliminates many nonessential components.
Low-end phones.
The term “low-end handset” is normally used to describe those
mobile phones that support only basic voice and data services and perhaps do
not support many features such as color displays or cameras. These phones
are usually targeted at the new user and prepaid user markets. While these
phones are normally at the cheaper end of the market and have traditionally
not supported much in the way of advanced features, the effect of technology
trickle-down is beginning to be felt even in this market sector.
It is not unusual to find color screens, cameras, and even WAP-based Internet
access available on some handsets. Polyphonic ringtones are very popular across all
market sectors and a good revenue generator for the network operators; therefore, it
is in the interest of the operator to enable the downloading and playing of various
types of ringtones, even in the so-called low-end device market.
Also in this segment is the low-cost handset or ultra-low-cost handset. The
GSMA is encouraging handset vendors to address the developing market by producing
handsets at a very low price point, U.S.$40 to U.S.$50, for example. These
devices have very few features in an effort to keep manufacturing costs to the bare
minimum. Often, these devices will support only voice and text messaging and
perhaps a limited selection of preprogrammed, non-changeable ringtones. While
these handsets are designed to address the developing market to make mobile communication
more affordable to people who have little disposable income, they
may also find a place in the senior (or “gray”) market.
Low-Feature Phones
Low-feature handsets are generally geared toward more mature users and the
replacement market. The main attraction of the mid-range handset is the more
advanced feature set, in comparison to a low-end phone. Designs are more ergonomic,
with shapes and keypads that are easy to hold and handle. Displays are generally bigger than those on low-end phones and are more likely to be color. The
graphics quality and user interface are also expected to be superior, and this segment
is most likely to see the first significant showing of tri-band handsets.
Music and Entertainment
Music entertainment handsets are generally equipped with a mobile music player
for MP3 and AAC files, stereo FM radio, digital recorder, and Flash memory. This
is an extremely popular market segment, largely due to the popularity of standalone
music players such as the Apple iPod. Several handset vendors have launched
mobile phones that address the music market directly. These players are often
shipped to the subscriber with some form of access to a music downloading site
(e.g., iTunes). One challenge that faces the industry in this market sector is the issue
of copyright and digital rights management (DRM). For the service to be popular,
musical content must be easy to distribute and exchange between authorized music
sites and vendors, as well as between the consumers of the music. This, however,
is at odds with the management of the copyright holder’s rights to the content.
Industry bodies such as the Open Mobile Alliance (OMA) are working on methods
to resolve these issues.
Feature Phones
This segment encompasses devices with high-technology capabilities and a variety of
features, but without the advanced OS that would put them into the smart
phone device category. This segment has many of the attributes assigned to lesser segments
and also incorporates advanced features such as video capture and playback,
music, expandable memory slot, high-resolution screen, and megapixel camera.
Smart Phones
The smart phone is generally defined as a converged device whose primary function
is that of a phone with added advanced computing capabilities or PDA functionality
with an advanced operating system (OS). The category was born out of the
amalgamation of mobile phones that offer PDA functionality with an advanced OS
and a PDA with a WWAN connection. Smart phone devices invariably have a large
color display, are inclined to have a larger form factor, and are feature-packed.
There are two primary types of smart phone. The first are mainly rich media
devices with an advanced OS and computing functionality: essentially
mobile phones embedding a number of additional features beyond the usual
ones found on PDAs, such as MP3, camera, MMS, games, Java, or e-mail. These
devices might be used in the business and corporate markets. At the higher end
of the smart phone market, the devices tend to be PDA computers with phone capabilities,
typically manufactured by handheld device vendors such as HP, PalmOne,
and Toshiba; these devices are aimed at the business professional.
Open Service Access
The specification, design, and implementation of services in a telecommunications
network were at one time limited to a very small pool of programmers, as services
were normally written as additional software for network elements (e.g., switches).
Even the advent of Intelligent Networks (IN) did little to improve this situation,
as a service designer still needed an intimate knowledge of telecommunications
networks and protocols.
Open service access (OSA) is an attempt to provide a much more open, yet
secure, environment for service development by abstracting the network functions
into a high-level view and providing access to these functions by means of standard,
well-defined application programming interfaces (APIs) (Figure 4.12). The work
on what is now known by some as OSA was started by the Parlay Group, which
focused on the U.K. network of British Telecom (BT), which the regulator wanted
to open up to third-party service providers. The Parlay Group works on the APIs
and the model for service providers to access the network functions.
The OSA project of the 3GPP is now aligned with Parlay, and the names are
now interchangeable. The OSA consists of an API that exposes, in an abstract manner,
the service capabilities of the network. A service designer can manipulate these
service capabilities to build innovative services (e.g., a service based on the location
of a mobile device).
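The idea of exposing abstract service capabilities through an API, with a framework that authenticates and authorizes each service provider, can be sketched as follows. This is a hypothetical, simplified interface in the spirit of the OSA/Parlay model, not the standardized API itself: all class and method names are invented for illustration.

```python
# Hypothetical sketch in the spirit of OSA/Parlay: the gateway hides the
# network protocols and exposes abstract service capabilities through an
# API. Class and method names are invented for illustration only.

class OsaGateway:
    def __init__(self):
        # The framework element records which capabilities each
        # registered service provider is authorized to use.
        self._authorized = {"trusted-3rd-party": {"location", "call_control"}}

    def authenticate(self, provider_id):
        # Framework step 1: authenticate the service provider.
        if provider_id not in self._authorized:
            raise PermissionError("unknown service provider")
        return _Session(self, provider_id)

    def _invoke(self, provider_id, capability, *args):
        # Framework step 2: authorize each capability the provider uses.
        if capability not in self._authorized[provider_id]:
            raise PermissionError(f"{capability} not authorized")
        # In a real gateway this would drive SS7-based signaling toward
        # the network elements; here we just return a canned result.
        if capability == "location":
            return {"msisdn": args[0], "cell": "stub-cell-id"}

class _Session:
    def __init__(self, gateway, provider_id):
        self._gw, self._id = gateway, provider_id

    def get_location(self, msisdn):
        # The service designer sees an abstract call, not the protocol.
        return self._gw._invoke(self._id, "location", msisdn)

session = OsaGateway().authenticate("trusted-3rd-party")
print(session.get_location("+441234567890")["cell"])  # stub-cell-id
```

The service designer calls `get_location` and never sees the signaling underneath, which is precisely the abstraction the OSA model aims for.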
A major concern for network operators is the security of the network and its
information; therefore, the OSA gateway that supports service providers also includes
a framework element that is responsible for authenticating service providers and
authorizing which functions (and information) each provider is able to see. Using
standard network signaling protocols such as SS7-based protocols, the OSA gateway
is able to manipulate the network elements to achieve the functions of a particular
service, although the service designer is isolated from the detail of these
protocols. The service provider in an OSA environment could be either within the
network (i.e., the network operators themselves) or a trusted third party.
Live OSA-based mobile services are available in some networks, although the
end user will not be aware that this is how the service is provided. These services
include prepay, location services, virtual private networks (VPNs), and unified
messaging (UM).
network were at one time limited to a very small pool of programmers, as services
were normally written as additional software for network elements (e.g., switches).
Even the advent of Intelligent Networks (IN) did little to improve this situation,
as a service designer still needed an intimate knowledge of telecommunications
networks and protocols.
Open service access (OSA) is an attempt to provide a much more open, yet
secure, environment for service development by abstracting the network functions
into a high-level view and providing access to these functions by means of standard,
well-defined application programming interfaces (APIs) (Figure 4.12). The work
on what is now known by some as OSA was started by the Parlay Group, which
focused on the U.K. network of British Telecom (BT), which the regulator wanted
to open up to third-party service providers. The Parlay Group works on the APIs
and the model for service providers to access the network functions.
The OSA project of the 3GPP is now aligned with Parlay, and the names are
now interchangeable. The OSA consists of an API that exposes, in an abstract manner,
the service capabilities of the network. A service designer can manipulate these
service capabilities to build innovative services (e.g., a service based on the location
of a mobile device).
A major concern for network operators is security and the security of information;
therefore, the OSA gateway that supports service providers also includes
a framework element that is responsible for authenticating service providers and
authorizing what functions (and information) that provider is able to see. Using
standard network signaling protocols such as SS7-based protocols, the OSA gateway
is able to manipulate the network elements to achieve the functions of a particular
service, although the service designer is isolated from the detail of these
protocols. The service provider in an OSA environment could be either within the
network (i.e., the network operators themselves) or a trusted third party.
Live OSA-based mobile services are available in some networks, although the
end user will not be aware that this is how the service is provided. These services
include prepay, location services, virtual private networks (VPNs), and unified
messaging (UM).
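The arrangement described above, a framework element that authenticates a service provider and authorizes which abstract capabilities it may use, can be sketched in a few lines. This is an illustrative model only: the class and method names below are invented for the sketch and are not the actual Parlay/OSA API.

```python
# Sketch of an OSA/Parlay-style gateway: the framework authenticates service
# providers and hands out abstract service-capability objects, hiding the
# underlying SS7 signaling from the service designer. All names hypothetical.

class LocationCapability:
    """Abstract capability: the designer sees a high-level call, not MAP/SS7."""
    def __init__(self, network):
        self._network = network

    def get_location(self, msisdn):
        # A real gateway would drive SS7 signaling toward the HLR/MSC here.
        return self._network.locate(msisdn)

class Framework:
    """Authenticates providers and authorizes which capabilities they see."""
    def __init__(self, network, credentials):
        # credentials: provider_id -> (shared secret, set of allowed capabilities)
        self._network = network
        self._credentials = credentials

    def obtain_capability(self, provider_id, secret, capability):
        stored_secret, allowed = self._credentials[provider_id]
        if secret != stored_secret:
            raise PermissionError("authentication failed")
        if capability not in allowed:
            raise PermissionError("capability not authorized for this provider")
        if capability == "location":
            return LocationCapability(self._network)
        raise KeyError("unknown capability")

class FakeNetwork:
    """Stand-in for the operator network behind the gateway."""
    def locate(self, msisdn):
        return {"msisdn": msisdn, "cell_id": "310-410-0001"}

fw = Framework(FakeNetwork(), {"acme": ("s3cret", {"location"})})
loc = fw.obtain_capability("acme", "s3cret", "location")
print(loc.get_location("+447700900123")["cell_id"])  # 310-410-0001
```

The point of the design is visible even in this toy: a third party never touches the network elements directly, only the capability object that the framework chose to release to it.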
Handset and Standards Bodies
The large number of technologies incorporated in a handset is reflected in the fact
that the standards on which this technology is based emanate from a diverse range
of standards organizations and industry forums. From the perspective
of the core cellular functionality, the most significant standards bodies are the two
3G bodies, the Third Generation Partnership Project (3GPP) and the Third Generation
Partnership Project 2 (3GPP2).
3GPP is responsible for the UMTS standard and also assumes responsibility from
ETSI for the GSM standard on which UMTS is based. 3GPP2 is responsible for the
cdma2000 family of technologies, which includes the Evolution systems, 1xEV-DO
(1x Evolution Data Optimized) and 1xEV-DV (1x Evolution Data and Voice).
Both 3GPP and 3GPP2 have very similar structures, and indeed members.
The two organizations are composed of organizational partners (OPs), which are
national or regional standards organizations such as ETSI or the Telecommunications
Industry Association (TIA). It is important to note that there are cooperative
links between the two bodies as there is a lot of crossover technology that does not
rely on the underlying network standards and can therefore be transferred.
In addition to the OPs, both the 3GPP and 3GPP2 have market representation
partners (MRPs) and observers who contribute to the standards processes.
The technologies covered by the two 3G bodies represent approximately 98 percent
of the global users of mobile or cellular networks. In addition to the 3GPP and
3GPP2 standards, many other organizations have a bearing on handset design and
functionality; these include:
The Internet Engineering Task Force (IETF)
The World Wide Web Consortium (W3C)
The Open Mobile Alliance (OMA)
The International Organization for Standardization (ISO)
The range of applicable standards in a handset is indicative of the convergence
between telecommunications, Internet technology, and broadcasting and
entertainment.
Inter-Network Roaming
A critical feature of most mobile network standards has been the support of internetwork
roaming, where a subscriber to one network is able to use his phone on
another network. Roaming is both a commercial matter, requiring an agreement
between two network operators to accept roamers and to exchange billing data
in an agreed format, and a technical one, as the handset must be capable
of operating on both networks.
The GSM and UMTS standards have always included roaming as a key aspect,
and phones sold to subscribers on one network should work on other networks using
the same technology. However, the roaming user may not always have access to the
same services when roaming and indeed the look and feel of the visited network may
be somewhat different from the home network. Recently, work has been undertaken
in the standards to minimize the difference seen by roamers across networks. This
gave rise to the concept of the virtual home environment (VHE), which is based on
Intelligent Network (IN) techniques and appropriate inter-network signaling.
The situation for some roamers is eased by agreements between networks that
allow a roamer to dial his normal shortcode for, say, mailbox access, and have
the networks translate this number behind the scenes.
While this may not be based on the VHE standards, it does start to solve the issues
of look and feel.
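The behind-the-scenes translation is conceptually just a lookup keyed on the roamer's home network and the digits dialed. A minimal sketch, with entirely hypothetical network names and numbers:

```python
# Sketch of shortcode translation for roamers: the visited network maps the
# subscriber's familiar shortcode (e.g., mailbox access) to a full routable
# number in the home network. All identifiers and numbers are invented.

SHORTCODE_MAP = {
    # (home network, dialed shortcode) -> routable international number
    ("home-net-A", "121"): "+441234900121",
    ("home-net-B", "222"): "+491511900222",
}

def translate(home_network, dialed):
    """Return the routable number, or the dialed digits unchanged when no
    inter-network agreement covers this shortcode."""
    return SHORTCODE_MAP.get((home_network, dialed), dialed)

print(translate("home-net-A", "121"))  # +441234900121
print(translate("home-net-A", "999"))  # 999 (no translation agreement)
```

The fallback branch matters: where no agreement exists, the dialed digits go forward untranslated, which is exactly the inconsistent look and feel the VHE work set out to remove.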
Another factor aiding some roamers is the coalescence of the market around a
number of large global operators. Once a mobile network operator owns multiple networks,
it is clearly able to provide a consistent user experience across those networks.
Handset User Interfaces
Manufacturers often have entire teams or departments that are responsible for the
design and implementation of user interfaces (UI), as this aspect of handset design
is so critical. The complexity of handsets and the multitude of functions they contain
make the UI critical in influencing the user experience. The UI supported by a
device will depend largely on device capabilities and will range from a very simple
UI for voice-centric handsets to very complex icon-oriented UIs, maybe incorporating
touch-screen technology, for data-centric devices. The best UIs are of course
those that are largely intuitive and that make redundant the complex (and costly to
produce) user handbooks or operating manuals.
The dynamics of the market often come into conflict at the UI. The handset
manufacturer will have spent a lot of time and money designing the UI, only for the
network operator to insist in some cases that it be replaced, in whole or in part, with
an operator-specific UI. This area of UI customization is another area in which the
manufacturer must fight to preserve brand identity.
Human Interfaces
The issue of user-personalization is also influencing the UI; many end users want
in some way to personalize the UI so it gives more ready access to the features and
capabilities that they use most often. The human or user interface on handsets
consists of a number of elements, including the display, the keypad or keyboard,
the navigation keys, and any soft keys or hot keys. The precise configuration of the
UI is determined by a number of factors, such as the device type and manufacturer
preferences
Introduction to Handset Technologies
The mobile handset, in terms of its design and functionality, must satisfy three distinct
sets of requirements: those of the network operators, the end users,
and the handset vendors themselves. The requirements of the end user will often focus
on features and capabilities, and the overall ease of use (usability); naturally, the end user would like maximum functionality for minimum cost. Increasingly, the aesthetic
appeal of the handset is important as it evolves from the functional devices of
early handsets to fashion items that a user may wish to change every 12 to 18 months.
Although users often look for complex feature sets in handsets, research indicates that
a typical user frequently uses only 15 to 20 percent of the phone’s capabilities.
For a network operator the handset represents the front end of the network and
is the platform through which users gain access to the rich set of services offered.
This again means feature sets and capabilities, and the ease of use is critical. The
network operator is also very interested in handset cost because in many markets
the handset is still subsidized by the operators, and therefore they bear some of the
real handset costs.
Handset vendors are increasingly asked to provide evermore complex devices
that are smaller and lighter than their predecessors yet cost no more or even less.
The overall cost of the actual handset components is further complicated by the
issue of technology licensing. A current 3G phone will contain many noncellular
technologies, such as video codecs and picture compression algorithms, for which
a license fee is payable to third-party innovators. Another major issue for handset
vendors is brand maintenance; in the mind of the end user, the phone may become
associated with the network and not the original handset manufacturer, and the
manufacturers expend a lot of effort in keeping their brand profile high.
The requirement to satisfy the sometimes-conflicting requirements of all parties
means that the market for handsets has inevitably fragmented in terms of device
capability and form factor. The devices produced for the prepay segment are very
different from those aimed at the high end of the market.
Original Equipment and Device Manufacturers
In the area of manufacturing — and mobile handsets are no exception — the topic
of original equipment manufacturers (OEMs) will normally surface. An OEM
is a manufacturing company that produces components or sometimes
complete pieces of equipment to the specifications and design of another company,
the value-added reseller (VAR).
The VAR will either take components from one or
more OEMs and integrate these into a system, or take an entire OEM assembly and
repackage it. The level of repackaging might be as basic as relabeling products to
adding greater value (e.g., software and other elements).
Batteries
The handset battery is an essential component and is one of the largest contributors
to the total weight and volume of the final product. It represents the balance
between the competing demands of supplying energy and being as light and small
as possible.
There are many processes within the mobile handset that are relatively energy
hungry. These include the radio circuits, the processing for voice and
multimedia signals, and the display. Various techniques are deployed both within
the handset and in the specifications of the radio technology that aim at reducing
energy consumption and therefore lengthening the life of the battery.
Although battery technology has come a long way since the early days of mobile
networks, it can take up to ten years for a battery technology to proceed from concept
to being fully commercially available.
Operational and Business Support: Operational Support System (OSS)
The Requirements
Support for the operational network must be comprehensive in terms of structure
and procedures, and also in terms of systems and software tools. Broadly speaking,
the requirement can be split into three main areas:
1. Network management deals with managing the operational network, including
infrastructure, maintenance, and fault handling. The requirement is wide ranging,
allowing the complete network to be viewed in overview, or detail, with network
elements being handled remotely to keep the network running smoothly.
2. The customer service system allows for effective customer relations, covering
areas such as new orders, billing queries, and technical issues. The requirement
is easy to identify, and systems could be dedicated to this role, or be
fully integrated within a wider operational support system. In either case, information
must be available to the customer service representatives, relying on
good data interfaces between related functions (for example, customer billing
records need to be available to the customer service representative).
3. Other requirements can be grouped under the heading of business management
and include sales and marketing functions, finance, personnel, and
logistics, and a wide range of others.
The Role of CAMEL
CAMEL has been specified to allow operator-specific services (IN services) to be
provided for subscribers even while roaming abroad. The service control point
(SCP) is generally located in the home network, allowing serving network switches
to interact with it to provide the advanced services.
CAMEL is an acronym for Customized Applications for Mobile networks
Enhanced Logic, and has been specified to allow for the introduction of IN
services in mobile networks alongside both call control functions and mobility
management.
In the later phases of CAMEL, it also allows for the provision of advanced
services in support of GPRS, SMS, and supplementary services. The interfaces
required for CAMEL are standardized to a high degree, allowing network elements
from different networks to work together using the CAMEL Application
Part (CAP) message set of Signaling System Number 7 (SS7). An example CAMEL
procedure is shown in Figure 3.47 for clarity.
The aim is to allow Intelligent Network interactions with a service control point
located in the subscriber’s home network. CAMEL takes account of mobile originated,
mobile terminated, and mobile forwarding cases, and also allows for both
roaming and non-roaming cases.
Modifications at the MSC essentially give the switch the ability to recognize the
requirement for a CAMEL-based service and build the required messages for the
specified SCF. In addition, both the HLR and VLR must be modified to hold the
relevant CAMEL subscriber details.
CAMEL has been defined in different phases to allow a staged approach to
implementation. Each phase is a superset of the previous one. Not all networks will
be at the latest phase of CAMEL, or even support CAMEL at all — but in many
networks, CAMEL is used to support prepaid services.
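The decision logic at the serving MSC can be sketched as follows: CAMEL subscription information copied into the VLR tells the switch whether, and at what phase, to open a CAP dialogue toward the home-network SCP. The field and class names below are illustrative, loosely modeled on the O-CSI concept rather than taken from the specifications.

```python
# Sketch of a serving MSC using CAMEL subscription information (CSI) held in
# the VLR to decide whether to contact the home-network SCP. Hypothetical
# field names, loosely modeled on originating CSI (O-CSI).

from dataclasses import dataclass
from typing import Optional

@dataclass
class OriginatingCSI:
    scp_address: str   # global title of the home-network SCP
    service_key: int   # identifies the service logic at the SCP
    camel_phase: int   # CAMEL phase required by the subscription

@dataclass
class Subscriber:
    imsi: str
    o_csi: Optional[OriginatingCSI]  # None => no CAMEL service subscribed

def handle_originating_call(sub, msc_supported_phase):
    """Decide whether the MSC should open a CAP dialogue toward the SCP."""
    if sub.o_csi is None:
        return "route normally"
    if sub.o_csi.camel_phase > msc_supported_phase:
        # The visited network does not support the required CAMEL phase.
        return "route normally (CAMEL phase unsupported)"
    return f"CAP InitialDP -> {sub.o_csi.scp_address} (key {sub.o_csi.service_key})"

prepaid = Subscriber("234150000000001", OriginatingCSI("home-scp-gt", 11, 2))
print(handle_originating_call(prepaid, msc_supported_phase=3))
```

Because each phase is a superset of the previous one, the phase comparison is a simple integer check; what a real MSC does when the required phase is unsupported depends on the subscription's default call handling, which this sketch does not model.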
Intelligent Networks and CAMEL
Introduction to Intelligent Networks (INs)
Traditionally, switching equipment (telephone exchanges) would need upgrading
each time a new service was added to the network (Figure 3.42). In some networks,
the presence of hundreds or thousands of telephone exchanges meant that this was
a very long and labor-intensive task.
The concept of Intelligent Networks (INs) allows the service to be provided
within a central computer or service control point (Figures 3.43). Once the telephone
exchanges have been upgraded to include the IN features, no further upgrades are
required except for the updating of the tables that identify the IN triggers, such as
the 1-800- or 0800-digit string to identify the toll-free or freephone service.
These services are generally provided through service control points, which
translate the service-specific dialed numbers (an 800 number perhaps) into standard
numbers for routing to the actual destination (telephone) in the network.
The special number is the initial trigger for the telephone exchange to contact the
service control point, to translate the number, and to carry out special billing, such
as reverse charging or collect calls, or premium rate.
These early services were soon followed by further advanced services based on
the intelligence or control features within the service control point. Extra service
features such as interaction with the user allow further customization of services.
IN separates service intelligence and switching. This means that new services
can be quicker and cheaper to install, and that service creation and switching is
split into two markets, thereby increasing vendor competition.
Most IN, including GSM Phase 2+ networks, use SS7 (Signaling System Number
7) protocols to enable the switches (known as service switching points, or SSPs)
to communicate with databases known as service control points (SCPs). A standardized
set of SS7 messages known as INAP (Intelligent Network Application
Part) is used for interaction between the SSP and SCP.
The intelligent applications that control IN services are defined by the operator,
and are not themselves standardized. This means that IN offers a route to operator
differentiation, but also that in many cases the same services cannot be offered
outside the network of that operator.
Service Control Platforms
Service control platform (SCP) is the name given to a platform that is, in some way,
controlling services on a network, or in some cases, across network boundaries. One
of the main advantages in using an SCP within a network is that it significantly
reduces the upgrade costs within a network whenever a new service is introduced.
Typical services controlled by an SCP include the following.
Control of supplementary services. These may include services such as conference
calls, advice on billing, and provision of call-back services such as ring
back when free. Supplementary services are seen as key revenue generators for
network operators.
Number translation services. In most modern networks it is possible to dial tollfree,
local or premium rate services. These are, in effect, virtual numbers that
have special tariffs. Number translation services (NTS) minimize the amount
of configuration required on the network when a new service is provided.
CAMEL (Customized Applications for Mobile Networks Enhanced Logic)
INs and Mobile Networks: The Problems
In fixed networks, the IN concept can be implemented in various ways.
In each case, however, the interfaces are confined within a single network, allowing
the defined procedures and information flows to be applied on an interface
with known and defined endpoints, and within hardware belonging to the single
network operator.
Testing is therefore more straightforward as each interface can be tested independently
and, in addition, non-standard procedures and information can be used
to satisfy particular problems (even if this is just a certain way of interpreting an
ambiguous part of the standards). The IN procedures have been specified to interwork
(mainly) with call control, hence a simple relationship exists between the two.
For mobile networks, the situation is more complex because roaming scenarios
require that equipment in more than one network is involved. Considering
all the roaming agreements each network will have in place, and the rapid change of each of these networks as new hardware (switches, etc.) is added, it is
impossible to test each individual interface likely to be involved in an IN call;
specifications must therefore be interpreted in a single, standard way.
The next big complication is that IN procedures need to interwork with call
control, mobility management functions, and the additional features of mobile
networks such as the General Packet Radio Service and the Short Message
Service. Finally, any announcements should be supported by special announcement
machines, the positioning of which needs careful thought. Location in the
home network provides ease of management, but an international call leg would
be needed to play announcements to roaming subscribers. Locating the announcement
machines in serving networks leads to a higher cost of hardware and increased
management costs, but reduced cost per call.
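The placement trade-off is easy to make concrete with a toy cost model: home placement pays a small per-call international leg on every roamer announcement, while serving-network placement pays fixed hardware and management costs per partner network. Every figure below is invented purely for illustration.

```python
# Toy cost comparison for announcement-machine placement, following the
# trade-off described above. All prices and volumes are hypothetical.

def home_placement_cost(roamer_announcements, intl_leg_cost):
    # One set of machines in the home network; every roamer announcement
    # needs an international call leg back to it.
    return roamer_announcements * intl_leg_cost

def serving_placement_cost(partner_networks, hw_cost, mgmt_cost):
    # Machines (and their management) replicated in every serving network,
    # but no per-call international leg.
    return partner_networks * (hw_cost + mgmt_cost)

calls = 2_000_000  # roamer announcements per year (hypothetical)
print(home_placement_cost(calls, 0.05))            # per-leg cost of 0.05
print(serving_placement_cost(150, 20_000, 5_000))  # 150 roaming partners
```

Under these invented numbers, home placement is far cheaper; the balance shifts as roamer call volumes grow or international transit prices rise, which is why the standards leave the choice to the operator.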
Traditionally, switching equipment (telephone exchanges) would need upgrading
each time a new service was added to the network (Figure 3.42). In some networks,
the presence of hundreds or thousands of telephone exchanges meant that this was
a very long and labor-intensive task.
The concept of Intelligent Networks (INs) allows the service to be provided
within a central computer or service control point (Figures 3.43). Once the telephone
exchanges have been upgraded to include the IN features, no further upgrades are
required except for the updating of the tables that identify the IN triggers, such as
the 1-800- or 0800-digit string to identify the toll-free or freephone service.
These services are generally provided through service control points, which
translate the service-specific dialed numbers (an 800 number perhaps) into standard
numbers for routing to the actual destination (telephone) in the network.
The special number is the initial trigger for the telephone exchange to contact the
service control point, to translate the number, and to carry out special billing, such
as reverse charging or collect calls, or premium rate.
These early services were soon followed by further advanced services based on
the intelligence or control features within the service control point. Extra service
features such as interaction with the user allow further customization of services.
IN separates service intelligence and switching. This means that new services
can be quicker and cheaper to install, and that service creation and switching are
split into two markets, thereby increasing vendor competition.
Most INs, including GSM Phase 2+ networks, use SS7 (Signaling System Number
7) protocols to enable the switches (known as service switching points, or SSPs)
to communicate with databases known as service control points (SCPs). A standardized
set of SS7 messages known as INAP (Intelligent Network Application
Part) is used for interaction between the SSP and SCP.
The intelligent applications that control IN services are defined by the operator,
and are not themselves standardized. This means that IN offers a route to operator
differentiation, but also that in many cases the same services cannot be offered
outside the network of that operator.
Service Control Platforms
Service control platform (SCP) is the name given to a platform that, in some way,
controls services on a network or, in some cases, across network boundaries. One
of the main advantages in using an SCP within a network is that it significantly
reduces the upgrade costs within a network whenever a new service is introduced.
Typical SCP functions include the following:
Control of supplementary services. These may include services such as conference
calls, advice on billing, and provision of call-back services such as ring
back when free. Supplementary services are seen as key revenue generators for
network operators.
Number translation services. In most modern networks it is possible to dial toll-free,
local, or premium rate services. These are, in effect, virtual numbers that
have special tariffs. Number translation services (NTS) minimize the amount
of configuration required on the network when a new service is provided.
CAMEL (Customized Applications for Mobile Networks Enhanced Logic)
INs and Mobile Networks: The Problems
In fixed networks, the IN concept can be implemented in various ways.
In each case, however, the interfaces are confined within a single network, allowing
the defined procedures and information flows to be applied on an interface
with known and defined endpoints, and within hardware belonging to the single
network operator.
Testing is therefore more straightforward as each interface can be tested independently
and, in addition, non-standard procedures and information can be used
to address particular problems (even if this is just a certain way of interpreting an
ambiguous part of the standards). The IN procedures have been specified to interwork
(mainly) with call control, hence a simple relationship exists between the two.
For mobile networks, the situation is more complex because roaming scenarios
require that equipment in more than one network is involved. Considering
all the roaming agreements each network will have in place, and the rapid change of each of these networks as new hardware (switches, etc.) is added, it is
impossible to test each individual interface that will likely be involved in an IN call,
or to interpret specifications in anything other than a single standard way.
The next big complication is that IN procedures need to interwork with call
control, mobility management functions, and the additional features of mobile
networks such as the General Packet Radio Service and the Short Message
Service. Finally, any announcements should be supported by special announcement
machines, the positioning of which needs careful thought. Location in the
home network provides ease of management, but an international call leg would
be needed to play announcements to roaming subscribers. Locating the announcement
machines in serving networks leads to a higher cost of hardware and increased
management costs, but reduced cost per call.
BREW (Binary Runtime Environment for Wireless)
BREW is also seen as an ecosystem. It has been associated mainly with CDMA-based
systems; however, technically, it is available for use with GSM/GPRS and UMTS
networks — that is, it is mobile technology agnostic.
BREW provides a whole range of features, applications, and services that can
be accessed by the user via the BREW software on the handset. BREW application
developers are provided with the development and testing tools they need to make
the applications available to the users. Operators benefit from a flexible system that
provides a platform for a variety of advanced services and features.
The BREW distribution system (BDS) is the network-based system that distributes
the content and applications.
Messaging Platforms
Voicemail Platforms
While voicemail appears to be a very simple service to provide to customers,
the fact is that it is an extremely important platform for operators — especially in
terms of revenue generation.
First, any call that is answered by a voicemail announcement is, for billing
purposes, successfully terminated. All of the networks involved in the delivery of the call will receive
revenue. Second, on receipt of a voicemail message, many users will return the call,
generating yet further revenue for the operators.
In many cases, basic voicemail services are provided free of charge or included
in the monthly tariff. However, operators can charge a premium for premium
voicemail services, where the user is able to keep messages stored for a longer
period. Combining voicemail with calling line ID, operators can provide a service
that allows the user to call the person who left the message by pressing a single
key.
Short Message Service (SMS)
SMS allows for the exchange of short alphanumeric messages between a mobile
station and an SMS service center (SMSC). The messages can be either mobile
terminated (from SMSC to mobile) or mobile originated (from mobile to SMSC). Of
course, in most SMS interactions, the mobile originated service is followed almost
immediately by the mobile terminated service (of the same message, but to a
different user). Messages are limited to 160 characters with the GSM 7-bit default
alphabet, or 70 characters for languages that require a 16-bit alphabet.
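The character limits follow directly from the 140-octet SMS payload. A small sketch (the helper names are ours, not from any standard API):

```python
import math

# An SMS payload is 140 octets. The GSM 7-bit default alphabet packs
# characters into septets, so 140 * 8 / 7 = 160 characters; 16-bit
# alphabets (UCS-2) fit only 140 / 2 = 70.
PAYLOAD_OCTETS = 140

def max_chars(encoding="gsm7"):
    if encoding == "gsm7":
        return PAYLOAD_OCTETS * 8 // 7      # 160
    if encoding == "ucs2":
        return PAYLOAD_OCTETS // 2          # 70
    raise ValueError(encoding)

def segments_needed(n_chars, encoding="gsm7"):
    """Segments for a concatenated message; each segment loses
    6 octets of payload to the concatenation User Data Header."""
    if n_chars <= max_chars(encoding):
        return 1
    if encoding == "gsm7":
        per_segment = (PAYLOAD_OCTETS - 6) * 8 // 7   # 153 chars
    else:
        per_segment = (PAYLOAD_OCTETS - 6) // 2       # 67 chars
    return math.ceil(n_chars / per_segment)
```

So a 161-character message already costs two segments, a point that matters when SMS is billed per message.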
Mobile Terminated Point-to-Point
These messages are sent from an SM service center (Figure 3.37) to a mobile station.
Upon receipt of the message, the mobile station will return a confirmation to
the SM service center. The message may be received when the mobile is not being
used, or even while it is being used for a voice call.
Mobile Originated Point-to-Point
Here the message is from the mobile to the SM service center, and a confirmation of
receipt (not necessarily delivery) is given. The generally accepted problem of mobile
originated SMS is the inputting of the text to the phone, which can be a slow and
laborious process. One increasingly popular short-cut is to edit the message on a
computer, laptop, or PDA (personal digital assistant) and then transfer the message
to the phone. On mobile phones themselves, intelligent text recognition software is
becoming increasingly sophisticated.
Enhanced Messaging Service (EMS)
An extension of SMS, EMS allows ringtones, operator logos, and other simple visual
images and icons to be sent to compatible devices. Pictures, sounds, animation, and
text can be sent to a device in an integrated package. No modification of the SMSC
is necessary because EMS uses the User Data Header (UDH) that is already present
in SMS messages. However, EMS-compatible handsets are required. The EMS
standard is included as part of the standard 3GPP feature set.
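The UDH structure itself is simple: a length octet followed by repeated information elements, each an identifier (IEI), a length, and data (3GPP TS 23.040). A minimal parsing sketch; the function name is ours, and only the concatenation IEI (0x00) is shown, since EMS assigns further IEI values for sounds, pictures, and animations:

```python
def parse_udh(user_data: bytes):
    """Split a User Data Header into (IEI, data) information elements.
    Layout per 3GPP TS 23.040: UDHL octet, then repeated
    IEI / IE-length / IE-data triplets."""
    udhl = user_data[0]                 # length of the header that follows
    header = user_data[1:1 + udhl]
    i, elements = 0, []
    while i < len(header):
        iei, iedl = header[i], header[i + 1]
        elements.append((iei, bytes(header[i + 2:i + 2 + iedl])))
        i += 2 + iedl
    return elements

# Example: a concatenation element (IEI 0x00) with reference 0x42,
# 2 parts in total, this being part 1.
udh = bytes([0x05, 0x00, 0x03, 0x42, 0x02, 0x01])
```

Because the header rides inside the ordinary user data, an SMSC can forward it untouched, which is exactly why EMS needs no SMSC modification.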
SMS Cell Broadcast
The SMS cell broadcast service transmits the same message to all mobiles within
a particular cell (or group of cells). The message limit in this case is 93 characters,
and the mobile must be in idle mode to receive the message. No acknowledgment
of receipt is given. Cell broadcast does not generate revenue because broadcast messages,
which offer no confirmation of receipt, cannot be charged. They tend, therefore,
to be seen as a value-added feature to attract customers. Uses might include
advertising (e.g., other network features) or the broadcasting of PSTN local area
codes such that a mobile user is able to distinguish between local and long-distance
calls (tariffs may vary between the two).
The Multimedia Messaging Service (MMS)
MMS is a non-real-time service, often seen as a natural progression from the GSM
Short Message Service. Like SMS, messages can be stored before being forwarded
to the recipient when the recipient becomes available or requests the message. It
combines different networks and integrates messaging systems that already exist in
these networks, for example, SMS in GSM and so-called “Instant Messaging” via
the Internet.
MMS is designed to support either standard e-mail addresses or standard ISDN
telephone numbers; and WAP (Wireless Application Protocol) development also
provides significant support for MMS.
The user terminal operates in the Multimedia Messaging Service Environment
(MMSE). MMSE provides the service elements such as delivery, storage, and
notification, which may be located in one network or distributed across different
networks.
The basis of connectivity between the networks is provided by IP (Internet Protocol)
and its associated set of messaging protocols, enabling compatibility between
2G and 3G wireless messaging and Internet messaging.
The Virtual Home Environment (VHE) Concept
In the VHE, users are consistently presented with the same personalized
features, user interface, customization, and services — in whatever network
they are located, or terminal they may be using (assuming that capabilities in the
network and terminal exist).
In defining the VHE, it is useful to introduce the concept of the home environment.
This can be synonymous with the user’s home network and subscribed
services, but can also include other value-added service providers (VASPs), which
are accessed through this home network service provider. The home environment
provides (and controls) the personal service environment in association with the
user’s own personal profile.
The serving network describes the network to which the user is attached at
the time, and thus may be a network in which they are roaming (when traveling
abroad). In the VHE concept, this network should be invisible to the user, with services
transported seamlessly. It may be another mobile network, but could equally
also be applied to a fixed network, the Internet, etc., depending on how the users
choose to access their services at any one time.
VHE also takes into account the possibility of value-added service providers, who
may be part of neither the home nor the serving environment. For example, a banking
service may be provided directly from a bank VASP. Users should still be able to
transparently access these services whether or not they are in their home network.
Location Platforms
3G systems (including UMTS) are designed from the outset to provide accurate
location of user equipment (mobile handsets), enabling advanced location-based
services. The location information is collated and managed
by location platforms (primarily the serving mobile location center and the gateway
mobile location center). This information can then be accessed and used by the
service platform in a variety of different services.
In UMTS, the location determination can be achieved in three main ways (discussed
shortly), and may also be provided as part of GSM or GPRS.
The location information can be used by the PLMN operator, emergency services,
value-added service providers, and for lawful interception by authorized
agencies. The PLMN could use location information for a variety of purposes,
including handover optimization. Emergency services can radically improve the
overall response time if automatic mobile terminal location is provided. Value-added
services can be significantly enriched and in some cases enabled — for
example, downloading a map showing where the subscriber is and how to reach a
local address.
The quality of location information is defined in terms of horizontal accuracy
(10 to 100 meters, according to the application), vertical accuracy (up to 10 meters),
and response time (no delay, low delay, or delay-tolerant criteria). It can also be
defined by priority (e.g., emergency services have highest priority), time stamping
(vital for some applications such as lawful interception), security measures (to
ensure controlled access to user location information), and privacy.
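The quality attributes above can be pictured as a request record. The class and field names below are illustrative, not taken from the 3GPP location services specifications:

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Optional

class Priority(IntEnum):
    NORMAL = 0
    HIGH = 1
    EMERGENCY = 2            # emergency services take highest priority

@dataclass
class LocationRequest:
    """Quality attributes attached to a single location request."""
    target: str                              # subscriber identity
    horizontal_accuracy_m: int = 100         # 10-100 m, per application
    vertical_accuracy_m: Optional[int] = 10  # up to 10 m where supported
    max_delay_s: Optional[float] = None      # None = delay-tolerant
    priority: Priority = Priority.NORMAL
    timestamped: bool = True                 # vital for lawful interception

def serve_order(requests):
    """Queue requests so the highest-priority ones are served first."""
    return sorted(requests, key=lambda r: r.priority, reverse=True)
```

A location platform would also enforce the security and privacy checks before releasing any position to the requesting party.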
The Open Service Architecture (OSA) Concept
OSA defines an architecture to enable operators and third-party developers (e.g.,
value-added service providers (VASPs)) to make use of network functionality
through an open standardized interface (called an application programming interface
[API]). It provides applications with access to service capability servers, and
thus provides the “glue” between the applications and the service capabilities of the
network.
In this way, applications become independent of the network. But within the
network, features such as Intelligent Networks and CAMEL provide support for the
required services. The actual applications, however, can be executed in application
servers physically separated from the core network entities. They may be part of the
operator domain, or may be third-party applications. The OSA API is secure and
independent of vendor-specific solutions and programming languages, operating
systems, etc.
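The separation can be sketched as follows: the application programs against an abstract interface, while network-specific adapters sit behind it. Class and method names here are hypothetical, not the actual OSA/Parlay interface definitions:

```python
from abc import ABC, abstractmethod

class CallControl(ABC):
    """Abstract capability the API exposes to applications."""
    @abstractmethod
    def route_call(self, caller: str, callee: str) -> str: ...

class CamelAdapter(CallControl):
    """Maps the abstract request onto CAMEL signaling (stubbed here)."""
    def route_call(self, caller, callee):
        return f"CAMEL: connect {caller} -> {callee}"

def prepaid_app(network: CallControl, caller, callee):
    # The application never sees which network technology is underneath,
    # so the same code runs against an IN, CAMEL, or other adapter.
    return network.route_call(caller, callee)
```

Swapping `CamelAdapter` for an adapter over a different network leaves `prepaid_app` unchanged, which is the network independence the OSA concept is after.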
Software and Service Platform Requirements
There is a whole range of service platforms required in modern networks. They
include platforms for value-added services (VAS), such as voicemail, Short Message
Service (SMS), and Multimedia Messaging Service (MMS). They also include
mobility databases, including home and visitor location registers (both of which
play an important role in service provision for GSM, GPRS, and UMTS); generic
service platforms, such as service control points (used in Intelligent Networks) and
service nodes. Finally, they include content servers, which provide access to basic
and advanced information, graphics, pictures, music, and video.
With the reduction in the cost of a voice call, the service platforms are assuming
greater significance as a way of providing value-added services in an attempt to
maintain and increase the average revenue per user (ARPU). In networks that are
optimized for data, such as ADSL, GPRS, and UMTS/3G, they provide the main
mechanism to grow the number of subscribers and to increase revenue.
The user either uses these advanced data networks as a bearer to access content
and services from an independent provider, or is directed by network resources to
a point in the network where the service can be accessed (a network-provided
service platform or a third-party service provider platform).
Network operators would like their customers to access the operator-provided
content and services so that they can capture the ensuing revenue, but increasingly,
third-party service providers are seen as an integral part of the overall system. In
fact, work has been done within industry and standards bodies to provide standard
ways of connecting third-party providers to the network so they can be accessed by
customers. Examples of third-party content and services include information such
as movie schedules, restaurant opening times and locations, credit card payment for
calls, navigation system updates (traffic updates), etc.
Vendors
Provision of service platforms of whatever sort is considered big business. For some
vendors, it is seen as a secondary function, helping to make the core product (often
telephone exchanges and routers) more attractive in that an integrated solution
can be provided, with the subsequent reduction in administration, testing, and
required employee knowledge base. Some vendors are seen as specialists in the area,
relying on reputation, and above all on an excellent product (or product range).
Example equipment vendors include, but are not limited to:
Alcatel
Ericsson
LogicaCMG
Lucent
Nokia
Nortel
Telcordia
Capacity
Capacity issues for service platforms can be quite complex, with huge variations
in usage throughout the day (which can be expected and predicted), and in addition,
variations occurring as a result of specific applications. These variations in
demand could be triggered by advertising campaigns for certain content, televoting
(by standard circuit-switched call, or messaging, for example), or perhaps by world
events dictating mass network usage (New Year’s celebrations, as an example).
In all cases, the service must be effectively managed. This includes the service and
application handling itself, as well as any congestion condition. Any request for service
that is not satisfied directly results in lost revenue. In addition, if congestion is encountered,
and appropriate announcements or responses are not given to the customer,
they will be dissatisfied and reluctant to try later, again resulting in lost revenue.
One of the main requirements therefore is to balance the cost of equipment
against potential usage, taking into account lost revenue through congestion.
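One standard tool for this balancing act is the Erlang B formula, which gives the probability that a request finds all circuits (or platform ports) busy for a given offered load. A minimal sketch using the usual iterative recursion:

```python
def erlang_b(circuits: int, offered_erlangs: float) -> float:
    """Blocking probability for `circuits` servers offered
    `offered_erlangs` of traffic, via the Erlang B recursion:
    B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, circuits + 1):
        b = (offered_erlangs * b) / (n + offered_erlangs * b)
    return b

# Dimensioning question: how many ports keep blocking under 1 percent
# for 10 erlangs of offered traffic?
ports = 1
while erlang_b(ports, 10.0) > 0.01:
    ports += 1
```

Each blocked request is lost revenue, so the planner trades the cost of extra ports against the blocking probability the formula predicts.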
This is easier to achieve for some platforms than for others. For example, platforms
that primarily provide database applications (such as a home location register
in GSM) can be planned, with forecasts of subscriber growth allowing a steady
scale-up of network resources. A known history of usage patterns for the HLR on a
daily and seasonal basis can be catered for.
Other platforms are not so easy to plan. These are generally the service platforms
that support a range of services (or a single service) that are influenced by less
predictable variables. For example, a service control point (explained later) handling
a televoting service will hit congestion levels early (and remain there) if a vote that
had forecast to receive two million votes actually receives three million. Although
mechanisms do exist within the technology to handle the congestion, the lost revenue
is still a huge problem. In any case, scalability is a big factor in service platform
provision.
Upgrading the network must also be transparent to the customer, leading to
the requirement for a flexible architecture. This would allow one to view a single
functional entity in terms of access to the service, but with the functionality actually
provided by a number of physical entities (servers), as shown in Figure 3.31.
This gives the required flexibility and scalability, allowing the network to grow at a
planned rate, while ensuring that spare capacity is calculated to cater for peaks, but
not be so great that it is wasted investment.
Redundancy
Some services and applications may be more critical than others. The high revenue
services should be protected from congestion when a mass service scenario is
expected. In addition, services and applications can be very different, with widely
varying requirements.
For both of these reasons, it is usual for a network operator to implement a number
of different service and content platforms. Even when the platform is generic
(such as an Intelligent Network service control point), it would generally be the case
that different platforms would support different services. There may be a platform
for prepaid billing, one for dialed network services, one for fraud control, one for
mass calling type services, etc.
In reality, all of these chosen examples could be provided by a single platform but
common sense dictates otherwise. Having a single platform implemented for each
service would not be ideal either. First, capacity requirements may require more than
one platform, but the other compelling reason is that of resilience and redundancy.
Continued availability of any service, even under platform failure, is extremely
important; hence, it makes sense for a service to be available for use by the required
customers in at least two platforms. The platforms would ideally be sited in different
geographical locations.
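That siting rule can be checked mechanically against an inventory of platforms. The platform and service names below are invented for illustration:

```python
# Inventory: which services each platform hosts and where it sits.
platforms = {
    "scp-north-1": {"site": "north", "services": {"prepaid", "televoting"}},
    "scp-south-1": {"site": "south", "services": {"prepaid", "fraud_control"}},
    "scp-south-2": {"site": "south", "services": {"televoting", "fraud_control"}},
}

def unprotected(platforms):
    """Services not available from at least two distinct sites,
    i.e., those that would be lost if one site failed."""
    sites_by_service = {}
    for p in platforms.values():
        for svc in p["services"]:
            sites_by_service.setdefault(svc, set()).add(p["site"])
    return {svc for svc, sites in sites_by_service.items() if len(sites) < 2}
```

Here `fraud_control` runs on two platforms, but both are in the south site, so it fails the geographical redundancy test even though it survives a single platform failure.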
include platforms for value-added services (VAS), such as voicemail, Short Message
Service SMS), and Multimedia Message Service (MMS). They also include
mobility databases, including home and visitor location registers (both of which
play an important role in service provision for GSM, GPRS, and UMTS); generic
service platforms, such as service control points (used in Intelligent Networks) and
service nodes. Finally, they include content servers, which provide access to basic
and advanced information, graphics, pictures, music, and video.
With the reduction in the cost of a voice call, the service platforms are assuming
greater significance as a way of providing value-added services in an attempt to
maintain and increase the average revenue per user (ARPU). In networks that are
optimized for data, such as ADSL, GPRS, and UMTS/3G, they provide the main
mechanism to grow the number of subscribers and to increase revenue.
The user would either be using these advanced data networks as a bearer to
access content and services from an independent provider, or be directed by network
resources (network-provided service platforms) to a point in the network where the
service can be accessed (network-provided service platforms or third-party service
provider platforms).
Network operators would like their customers to access the operator-provided
content and services so that they can capture the ensuing revenue, but increasingly,
third-party service providers are seen as an integral part of the overall system. In
fact, work has been done within industry and standards bodies to provide standard
ways of connecting third-party providers to the network so they can be accessed by
customers. Examples of third-party content and services include information such
as movie schedules, restaurant opening times and locations, credit card payment for
calls, navigation system updates (traffic updates), etc.
Vendors
Provision of service platforms of whatever sort is considered big business. For some
vendors, it is seen as a secondary function, helping to make the core product (often
telephone exchanges and routers) more attractive in that an integrated solution
can be provided, with the subsequent reduction in administration, testing, and
required employee knowledge base. Some vendors are seen as specialists in the area,
relying on reputation, and above all on an excellent product (or product range).
Example equipment vendors include, but are not limited to:
Alcatel
Ericsson
LogicaCMG
Lucent
Nokia
Nortel
Telcordia
Capacity
Capacity issues for service platforms can be quite complex, with huge variations
in usage throughout the day (which can be expected and predicted), and in addition,
variations occurring as a result of specific applications. These variations in
demand could be triggered by advertising campaigns for certain content, televoting
(by standard circuit-switched call, or messaging, for example), or perhaps by world
events dictating mass network usage (New Year’s celebrations, as an example).
In all cases, the service must be effectively managed. This includes the service and
application handling itself, as well as any congestion condition. Any request for service
that is not satisfied directly results in lost revenue. In addition, if congestion is encountered,
and appropriate announcements or responses are not given to the customer,
they will be dissatisfied and reluctant to try later, again resulting in lost revenue.
One of the main requirements therefore is to balance the cost of equipment
against potential usage, taking into account lost revenue through congestion.
This is easier to achieve for some platforms than for others. For example, platforms
that primarily provide database applications (such as a home location register
in GSM) can be planned, with forecasts of subscriber growth allowing a steady
scale-up of network resources. A known history of usage patterns for the HLR on a
daily and seasonal basis can be catered for.
Other platforms are not so easy to plan. These are generally the service platforms
that support a range of services (or a single service) that are influenced by less
predictable variables. For example, a service control point (explained later) handling
a televoting service will hit congestion levels early (and remain there) if a vote that
was forecast to receive two million votes actually receives three million. Although
mechanisms do exist within the technology to handle the congestion, the lost revenue
is still a huge problem. In any case, scalability is a big factor in service platform
provision.
Upgrading the network must also be transparent to the customer, leading to
the requirement for a flexible architecture. Such an architecture presents a single
functional entity for access to the service, with the functionality actually
provided by a number of physical entities (servers), as shown in Figure 3.31.
This gives the required flexibility and scalability, allowing the network to grow at a
planned rate, while ensuring that spare capacity is calculated to cater for peaks, but
not be so great that it is wasted investment.
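The trade-off between provisioned capacity and lost revenue through congestion is classically handled with the Erlang B formula, which gives the blocking probability for a given offered traffic and number of circuits. The sketch below is illustrative only; the traffic figure and 1 percent grade of service are example values, not taken from the text.

```python
from math import factorial

def erlang_b(traffic_erlangs: float, channels: int) -> float:
    """Blocking probability for the offered traffic on a given number of channels."""
    numerator = traffic_erlangs ** channels / factorial(channels)
    denominator = sum(traffic_erlangs ** k / factorial(k)
                      for k in range(channels + 1))
    return numerator / denominator

def channels_needed(traffic_erlangs: float, target_blocking: float) -> int:
    """Smallest channel count keeping blocking at or below the target grade of service."""
    n = 1
    while erlang_b(traffic_erlangs, n) > target_blocking:
        n += 1
    return n

# Example: 50 Erlangs of forecast traffic at a 1 percent grade of service.
print(channels_needed(50.0, 0.01))
```

Planning with a forecast that turns out low (the televoting example above) means the real blocking probability can be far worse than the target, which is exactly the lost-revenue problem described.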
Redundancy
Some services and applications may be more critical than others. The high revenue
services should be protected from congestion when a mass service scenario is
expected. In addition, services and applications can be very different, with widely
varying requirements.
For both of these reasons, it is usual for a network operator to implement a number
of different service and content platforms. Even when the platform is generic
(such as an Intelligent Network service control point), it would generally be the case
that different platforms would support different services. There may be a platform
for prepaid billing, one for dialed network services, one for fraud control, one for
mass calling type services, etc.
In reality, all of these chosen examples could be provided by a single platform but
common sense dictates otherwise. Having a single platform implemented for each
service would not be ideal either. First, capacity requirements may require more than
one platform, but the other compelling reason is that of resilience and redundancy.
Continued availability of any service, even under platform failure, is extremely
important; hence, it makes sense for a service to be available for use by the required
customers in at least two platforms. The platforms would ideally be sited in different
geographical locations.
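The redundancy rule described above can be sketched as a simple assignment: every service is hosted on at least two platforms sited in different locations. The platform names and sites below are entirely made up for illustration.

```python
# Hypothetical platform inventory: platform name -> geographical site.
PLATFORMS = {
    "scp-north": "site-a",
    "scp-south": "site-b",
    "scp-west": "site-c",
}

def assign_service(service: str, platform_sites: dict) -> list:
    """Pick two platforms in different geographical locations for a service."""
    chosen = []
    sites_used = set()
    for platform, site in platform_sites.items():
        if site not in sites_used:
            chosen.append(platform)
            sites_used.add(site)
        if len(chosen) == 2:
            break
    if len(chosen) < 2:
        raise ValueError(f"cannot host {service} redundantly across sites")
    return chosen

print(assign_service("prepaid-billing", PLATFORMS))
```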
Radio Resources
Radio Resource Procedures for GSM, GPRS, and EDGE
The general purpose of radio resource procedures is to establish, maintain, and
release radio connections. To be effective, the radio resource procedures are complex
and include the cell selection/reselection and handover procedures.
Radio Resource in Idle Mode
The mobile handset continuously monitors the system information broadcasts on
the broadcast channel, interpreting the information appropriately. This information
contains parameters such as the identity of the serving cell and information on
how to access the network.
Once the cell is selected, the mobile leaves the idle state to register its location with
the network. In the absence of any other activity, it then falls back to the idle mode, to
monitor the paging channel (to identify any incoming calls), updating its location as
required (periodically, or as location area or routing area boundaries are crossed).
Radio resource in idle mode:
Monitor system information broadcast (contains required information)
Cell selection
Cell reselection
Location registration and updating
Monitor paging channel
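The idle-mode behaviour listed above amounts to a small event loop: the handset reacts to broadcast information, boundary crossings, and paging, and otherwise keeps listening. The event and action names in this sketch are illustrative, not taken from the 3GPP specifications.

```python
# Hypothetical mapping of idle-mode events to handset actions.
IDLE_MODE_ACTIONS = {
    "system_information": "evaluate cell selection/reselection criteria",
    "location_area_crossed": "perform location update",
    "periodic_timer_expired": "perform location update",
    "paging_match": "request RRC connection (incoming call/session)",
}

def idle_mode_action(event: str) -> str:
    # In the absence of any recognized event, keep monitoring the paging channel.
    return IDLE_MODE_ACTIONS.get(event, "monitor paging channel")

print(idle_mode_action("paging_match"))
```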
Radio Resource Connection Establishment
This is instigated from idle mode. The RRC connection request is sent from the
mobile to the network. This may be to make a call, transfer data, or update a location
or routing area. Once the connection is established, a confirmation is sent to
the mobile.
The mobile now moves into the connected state, sending a Radio Resource
Connection Setup Complete indication to the network on the dedicated channel that
has just been assigned.
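The three-message exchange just described can be sketched as a sequence. The message names follow the wording in the text rather than the exact PDU names in the specifications.

```python
# Illustrative message flow for radio resource connection establishment.
def rrc_establishment(cause: str) -> list:
    flow = []
    # Mobile requests a connection, giving the reason (call, data, update).
    flow.append(("MS -> Network", f"RRC Connection Request (cause={cause})"))
    # Network confirms and assigns a dedicated channel.
    flow.append(("Network -> MS", "RRC Connection Setup (dedicated channel assigned)"))
    # Mobile confirms on the newly assigned dedicated channel.
    flow.append(("MS -> Network", "RRC Connection Setup Complete (on dedicated channel)"))
    return flow

for direction, message in rrc_establishment("originating call"):
    print(direction, message)
```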
Paging
For idle mode mobiles, paging is initiated over the required location area, or routing
area for incoming calls, or for session setup in the case of packet services. The
core network node (MSC / VLR or SGSN) initiates the paging procedure.
The paging message is broadcast using Radio Resource procedures on the paging
channel that each idle mode mobile monitors within the cell(s).
Other Radio Resource Procedures
Radio resource procedures are extremely comprehensive and cover a wide range of
requirements. Some of the procedures not dealt with here include:
Power control (uplink and downlink)
Handovers (can be extremely complex in UMTS/W-CDMA)
Connection release
Packet mode procedures
Radio Resource Procedures for UMTS (including HSDPA)
UMTS (Universal Mobile Telecommunication System) procedures follow the
general requirements for GSM and GPRS, except that the radio interface and
channel structure are more complex and therefore require more (and more complex)
procedures. This is also true for HSDPA (High-Speed Downlink Packet
Access), which is a specific implementation of W-CDMA used on the UMTS
radio interface.
The mobility management (MM) and connection management (CM) procedures
are essentially very similar to those discussed above, except that more options
are generally available with UMTS because of the multimedia-capable nature of
the connections. In fact, the procedures for the GSM family of technologies are
contained within a common set of documents (available through the Third Generation
Partnership Project [3GPP] Web site at www.3GPP.org).
The documents are arranged such that all procedures relating to MM and CM
will be in a single document, irrespective of access technology.
Radio resource (RR) management is where the main differences occur between
the technologies and, in this case, the procedures for UMTS (W-CDMA) are separated
from those for GSM, GPRS, and EDGE by the use of different documents
detailing the RR procedures.
First, the radio elements are known as the Node B, which is equivalent to
the BTS in GSM, GPRS, and EDGE, and the radio network controller (RNC),
which is equivalent to the BSC. The term “radio network subsystem” (RNS)
is used in place of the base station subsystem (BSS) used in GSM, GPRS, and
EDGE.
To illustrate just two of the major differences between UMTS (W-CDMA)
and GSM, GPRS, and EDGE, two procedures relating to radio resources (RR) are
described below.
Procedure Sequences
For all cellular systems, each transaction (incoming or outgoing call, data session,
text message, etc.) carried out by the user requires a number of separate procedures
to occur in a set sequence. Each transaction will have its own unique requirements,
but the general sequence will usually be common to all transactions.
The general sequence used for the GSM family of technologies is shown in
At this stage, this sequence ignores the complex requirement for setting
up the radio resources. In reality, the radio resource procedures would occur
within the sequence shown at any point where there is a requirement to allocate,
change, or release radio connections.
Note that the standard sequence illustrates the required procedures in six identifiable
steps. This sequence is referred to as the elementary procedure. Before any
communication with the core network, there must be a radio connection in place.
The mobile must also identify the user to the network prior to any service request
being granted.
The first step in the sequence is the assignment of an appropriate channel, which
may be after a paging message has been received from the network in case of an
incoming call. This is followed by the mobile handset requesting the service that is
required, as well as the security procedures (authentication of the user, and ciphering
of the radio channel). Both security procedures are optional; although
available in most networks, there are exceptions.
These mobility management procedures are used for establishing a mobility
management (MM) connection with the core network, including the authentication
of the user. Once complete, connection management (CM) procedures are used
to set up and manage the transaction itself, including establishing outgoing or
incoming calls, setting up and managing data sessions, or sending or receiving a
text message. Once complete, all relevant channels will be released.
The elementary procedures will be initiated while a handset is in idle mode,
requiring it to move into dedicated mode for the duration of the sequence, before
falling back to idle mode once the sequence is complete. If another transaction
is required while one is in progress, the mobile uses the existing MM procedures
already in place as the basis for the new transaction. For example, no further security
procedures, or even a paging message, will be required for a mobile that receives
a text message while in the middle of a voice call. For GPRS and UMTS packet-data
operation, similar considerations apply.
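The six-step elementary procedure described above can be sketched as an ordered list, with the security steps optionally skipped as noted. The step wording is paraphrased from the text, not taken from the specifications.

```python
# The six identifiable steps of the elementary procedure, paraphrased.
ELEMENTARY_PROCEDURE = [
    "assign radio channel (possibly following a paging message)",
    "request the required service",
    "authenticate the user (optional)",
    "start ciphering on the radio channel (optional)",
    "run the transaction (call setup, data session, or text message)",
    "release all relevant channels",
]

def run_sequence(skip_security: bool = False) -> list:
    """Return the steps performed, optionally omitting the security steps."""
    steps = []
    for step in ELEMENTARY_PROCEDURE:
        if skip_security and "(optional)" in step:
            continue  # some networks do not invoke the security procedures
        steps.append(step)
    return steps

print(len(run_sequence()))      # all six steps
print(len(run_sequence(True)))  # security steps skipped
```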
GPRS Connections
To move information from content and application servers to the handset and vice
versa, a path through the network must be defined. This is achieved using the PDP
context procedure, where the handset, SGSN, and GGSN all store data relating to
this virtual connection (Figure 3.26).
The GGSN defines the gateway to the content — maybe the Internet, MMS
server, or a WAP gateway. The GGSN is identified by a special identity known as
the Access Point Name — effectively a URI (Uniform Resource Identifier, as used
for Web pages, etc.).
Because billing and other administrative functions may be based on this
GGSN, it does not make sense in most instances to change this gateway dynamically.
Hence, even when a user is roaming abroad, the GGSN is usually located in
the subscriber’s home network, and a path is defined through the serving network and an inter-PLMN backbone network (incorporating GPRS Roaming Exchanges
(GRX)), to the home network. The border gateways define the edge of the administratively
separate networks with associated firewalls.
An alternative arrangement may be that the SGSN and GGSN are located in the
same network. This would certainly be the case where the subscriber is attached via
the user’s home network, but may also be true, for example, where connection settings
in the handset for Internet access use a generic APN (access point name), shared by
different operators, allowing access to the nearest GGSN (in the serving network).
Using different procedures, the connection can be maintained. If necessary, the
PDP context information can be moved from SGSN to SGSN without having to
reestablish either the connection or the PDP context.
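The PDP context idea, as described above, is a record of the virtual connection held at the handset, SGSN, and GGSN, where the SGSN end can change as the user moves while the GGSN anchor stays fixed. The field values in this sketch are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class PdpContext:
    apn: str          # Access Point Name identifying the GGSN / gateway
    ip_address: str   # address assigned to the handset for this context
    sgsn: str         # serving SGSN (may change as the user moves)
    ggsn: str         # anchor GGSN (normally fixed, often in the home network)

def inter_sgsn_update(context: PdpContext, new_sgsn: str) -> PdpContext:
    """Move the context to a new SGSN without touching the GGSN anchor."""
    context.sgsn = new_sgsn
    return context

ctx = PdpContext(apn="internet.example", ip_address="10.0.0.1",
                 sgsn="sgsn-visited-1", ggsn="ggsn-home")
ctx = inter_sgsn_update(ctx, "sgsn-visited-2")
print(ctx.sgsn, ctx.ggsn)
```

The point of the sketch is the last line: after an inter-SGSN move, the serving end changes but the gateway (and hence billing anchor) does not.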
Making Calls
Calls to Mobile Networks
Making a call to a mobile network involves keying in the called subscriber’s
MSISDN (Mobile Subscriber ISDN Number). This number routes the call to the
called subscriber’s home network via a gateway mobile switching center (GMSC).
Here, signaling interactions with the home location register (HLR) occur. The
HLR, in turn, interacts with the visitor location register at the mobile switching
center at which the called subscriber is currently registered.
Eventually, although it takes very little time, new routing information is sent
back to the GMSC to replace the subscriber’s MSISDN number. The new information
allows the call to be routed to the network in which the called subscriber is
currently registered and, more specifically, to the serving MSC (either at home or
abroad [roaming cases]).
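The routing interaction just described can be sketched as two lookups: the GMSC queries the HLR for the serving MSC, and the HLR obtains from that MSC's VLR a temporary routable number that replaces the dialled MSISDN. All numbers and identities below are made up for illustration.

```python
# Hypothetical register contents.
HLR = {"+447700900123": "msc-visited"}                  # MSISDN -> serving MSC/VLR
VLR_ROAMING_NUMBERS = {"msc-visited": "+447700900999"}  # MSC -> roaming number

def route_incoming_call(msisdn: str) -> str:
    """Replace the dialled MSISDN with a number routable to the serving MSC."""
    serving_msc = HLR[msisdn]                 # GMSC interrogates the HLR
    msrn = VLR_ROAMING_NUMBERS[serving_msc]   # HLR obtains a roaming number from the VLR
    return msrn                               # GMSC routes the call onward with this number

print(route_incoming_call("+447700900123"))
```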
Calls from Mobile Networks
For outgoing calls from a mobile network, the serving MSC acts as a local exchange
for the subscriber that is making the call — with the call being routed directly from
the serving MSC (rather than having to route back through the subscriber’s home
network if abroad).
In all cases, however, the call is in accordance with any service information
being held in the VLR (having been previously sent from the subscriber’s HLR
when first registering at this MSC). This may include such restrictions as call barring
or advanced interactions required for prepaid subscribers.
WLAN and Mobile Networks
The existing GPRS/GSM and newer 3G networks have for a long time offered data
services to their subscribers.
The original circuit-switched data service, at a rate of 9.6 kbps, does not allow a great
deal of flexibility and severely restricts the service and content that can be offered. The
advent of GPRS has improved this matter significantly. The packet-switching service
can offer much higher rates to the user and at the same time offer resource
efficiencies to the network operator. However, the data rates are still in the range 10
to 40 kbps. Even if EDGE is deployed in the network, the data rate will only be a
maximum of hundreds of kilobits per second. The advent of 3G networks improves
on this data rate, thus allowing a greater range of services to the users.
These data rates, no matter how impressive, are simply blown away by the rates
available through WiFi hotspots, where typical data rates can be as much as
2 Mbps. The obvious difference here is that the cellular networks are providing
wide area coverage for voice and data, whereas the WiFi service is limited to a very
specific place and does not yet offer mobility services. There may be advantages for
the cellular operators in allowing some form of interworking between the WiFi
systems and their own networks. It is likely that this can be achieved in several different
ways, depending on available technologies and potential agreements between
WiFi and cellular operators.
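The practical gap between the quoted rates is easiest to see as download times. The sketch below uses the rates mentioned above (9.6 kbps circuit-switched, roughly 40 kbps GPRS, 2 Mbps WiFi) and an assumed 1 MB file size.

```python
def download_seconds(size_bytes: int, rate_kbps: float) -> float:
    """Ideal transfer time, ignoring protocol overhead and radio conditions."""
    bits = size_bytes * 8
    return bits / (rate_kbps * 1000)

FILE_SIZE = 1_000_000  # assumed 1 MB download
for name, rate_kbps in [("circuit-switched", 9.6),
                        ("GPRS", 40.0),
                        ("WiFi", 2000.0)]:
    print(f"{name}: {download_seconds(FILE_SIZE, rate_kbps):.0f} s")
```

Even this idealized calculation (real throughput is lower in all three cases) shows a two-order-of-magnitude difference between circuit-switched data and a WiFi hotspot.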
Some cellular operators have formed alliances with the WiFi service providers,
allowing the cellular subscriber access to the hotspots under a single mobile
subscription. Mobile phone manufacturers are already producing equipment that
supports the WiFi radio in multi-mode devices. This allows the phone itself to use
the WiFi hotspot for data download or even in some cases to make phone calls
using VoIP protocols.
In due course it may become common for the cellular operators themselves to
build and maintain their own WiFi infrastructure, connecting directly to the mobile
core network. Using mobile IP and a 3G core network, this offers users seamless
mobility between mobile networks and WiFi systems under a single subscription.
WiFi is illustrated as a stand-alone technology that connects
directly to the Internet, or as an integral part of the cellular operator’s network —
effectively providing alternative access. Note that IMS refers to the IP Multimedia
Subsystem that is designed to provide support for advanced multimedia services.