What is the actual maximum throughput on Gigabit Ethernet?

Examining the usable bandwidth on a Gigabit Ethernet network

Examining how much actual throughput can be achieved on a Gigabit Ethernet based network and how much this increases by using Jumbo Frames. Also covered is how this relates to the throughput of a wireless link with Gigabit Ethernet interfaces.

Gigabit Ethernet Physical Layer

On a Fibre Optic Gigabit Ethernet network (1000BaseSX, 1000BaseLX), the raw line rate is 1.25Gbps. This raw rate is higher than 1Gbps because it includes 8b/10b line coding. Line coding ensures “DC balance” of the data stream and removes long runs of consecutive 0s and 1s, which makes the physical transceivers easier to design and implement and maximises the range capability of the fibre optic transceivers. When the 8b/10b line coding is removed from the raw data stream by the Gigabit Ethernet chipset, the uncoded payload rate is exactly 1.0Gbps.
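
As a quick sanity check, a short Python calculation (purely illustrative, using only the figures above) shows how removing the 8b/10b coding overhead maps the 1.25Gbps line rate to a 1.0Gbps payload rate:

    # 8b/10b line coding: every 8 payload bits are carried in a 10-bit code group
    line_rate_gbps = 1.25                        # raw serial rate on the fibre
    payload_rate_gbps = line_rate_gbps * 8 / 10  # strip the coding overhead
    print(payload_rate_gbps)                     # 1.0 Gbps of uncoded payload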

On a copper based Gigabit Ethernet network (1000BaseT), transmission uses all four cable pairs as four lanes, transmitting simultaneously in both directions through the use of echo cancellation with adaptive equalization and five-level pulse amplitude modulation (PAM-5). The symbol rate is identical to that of 100BASE-TX (125 megabaud).

Gigabit Ethernet Net Data rate

The theoretical maximum bandwidth on a Gigabit Ethernet network is defined by a node being able to send 1 000 000 000 bits each second (bits per second, bps), that is, one billion 1s or 0s every second. A byte of data consists of 8 bits, hence the net capacity of this Gigabit link is the ability to transfer 125 000 000 bytes per second (1 000 000 000 / 8), also written as Bps, Bytes/sec or Bytes/s.

Frames, Preamble, Interframe Gap

In a real-world network, not all of the 125000000 bytes/second can be used to send data, as there are multiple layers of overhead. Data transferred over an Ethernet based network must be divided into “frames”, and the size of these frames sets the maximum number of bytes sent together. The maximum frame size for Ethernet has been 1518 bytes for the last 25 years or more.

 

Each frame incurs some overhead, both inside the frame and, less well known, also on the “outside”. Before each frame is sent, a certain combination of bits must be transmitted, called the Preamble, which signals to the receiver that a frame is coming right behind it. The preamble is 8 bytes long and is sent immediately before each and every frame.

When the main body of the frame (1518 bytes) has been transferred, the network device may want to send another one. Since full duplex links do not use the old CSMA/CD access method (used only for half duplex), devices do not have to “sense the cable” to see if it is free, which would incur a time penalty. However, the Ethernet standard defines that for full duplex transmission a certain number of idle bytes must pass before the next frame is sent onto the wire.

This is called the Interframe Gap and is 12 bytes long. So between all frames devices have to leave at least 12 bytes “empty” to give the receiver side the time needed to prepare for the next incoming frame.

This will mean that each frame actually uses:

12 bytes of Interframe Gap + 1518 bytes of frame data + 8 bytes of preamble = 1538 bytes

This means each frame actually consumes 1538 bytes of bandwidth, and since there is room to send 125000000 bytes each second, this allows 81274 frames per second (125000000 / 1538).

So on default Gigabit Ethernet we can transmit over 81000 full size frames each second. Since Gigabit Ethernet always runs full duplex, we can receive 81000 frames per second at the same time.

More detail on the overhead: for each frame, we lose 12 + 8 bytes for the Interframe Gap and Preamble, which are considered the “outside” of the frame. In addition, there is more overhead inside the frame:

Ethernet header, Frame Check Sequence

The first 14 bytes of the frame are used for the Ethernet header and the last 4 bytes contain a checksum intended to detect transfer errors. This uses the CRC32 checksum algorithm and is called the Frame Check Sequence (FCS).

The Maximum Transmission Unit, MTU

This means that we lose a total of 18 bytes in overhead: the Ethernet header at the beginning and the checksum at the end, which together form the “frame” around the data carried inside. The number of bytes left for payload is called the Maximum Transmission Unit (MTU) and is 1500 bytes on default Ethernet. The MTU is the payload that can be carried inside an Ethernet frame. It is a common misunderstanding that the MTU is the frame size; it is really only the data inside the frame.

IP Header, TCP Header, Maximum Segment Size

Just behind the Ethernet header we will most likely find the IP header. With ordinary IPv4 this header is 20 bytes long. Behind the IP header we will most likely find the TCP header, which is also 20 bytes long. The amount of data that can be transferred in each TCP segment is called the Maximum Segment Size (MSS) and is typically 1460 bytes.

So the Ethernet header and checksum plus the IP and TCP headers together add 58 bytes of overhead. Adding the Interframe Gap and the Preamble gives 20 more. So for each 1460 bytes of data sent, a minimum of 78 extra bytes handle the transfer at the different layers. All of these are important, but they do add overhead.

Efficiency using Standard Ethernet Frames

At the beginning of this article we noted the potential to send 125000000 bytes/second on Gigabit Ethernet. When each frame consumes 1538 bytes of bandwidth, that gives us 81274 frames/second (125000000 / 1538). If each frame carries a maximum of 1460 bytes of user data, this means that we could transfer 118660598 data bytes per second (81274 frames x 1460 bytes of data), i.e. around 118 MB/s.

This means that when using the default Ethernet frame size of 1518 bytes (MTU = 1500), we have an efficiency of around 94% (118660598 / 125000000), meaning that the other 6% is used by the protocols at the various layers, which we can call overhead.
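
The arithmetic above can be captured in a few lines of Python. This is only a sketch using the header sizes quoted in this article; the helper name tcp_goodput is ours, not a standard function:

    # Per-frame overhead on Gigabit Ethernet, using the values from the text
    GIG_E_BYTES_PER_SEC = 125000000   # 1Gbps expressed as bytes per second
    PREAMBLE, IFG = 8, 12             # overhead "outside" the frame
    ETH_HEADER, FCS = 14, 4           # overhead inside the frame
    IP_HEADER, TCP_HEADER = 20, 20    # IPv4 and TCP headers without options

    def tcp_goodput(mtu):
        """Frames/second, user-data bytes/second and efficiency for a given MTU."""
        wire_bytes = PREAMBLE + ETH_HEADER + mtu + FCS + IFG  # bandwidth used per frame
        mss = mtu - IP_HEADER - TCP_HEADER                    # TCP payload per frame
        frames = GIG_E_BYTES_PER_SEC // wire_bytes
        goodput = frames * mss
        return frames, goodput, goodput / GIG_E_BYTES_PER_SEC

    print(tcp_goodput(1500))   # (81274, 118660040, ~0.949) - about 118 MB/s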

Efficiency using Jumbo Frames

If supported by the connected equipment, enabling so-called Jumbo Frames on all equipment in the chain can potentially increase the actual bandwidth available for our data. Let us look at that now:

A commonly used MTU value for Jumbo Frames is 9000. First we add the overhead for the Ethernet header and FCS (14 + 4 bytes), the Preamble (8 bytes) and the Interframe Gap (12 bytes). This makes the frame consume 9038 bytes of bandwidth, and from the total of 125000000 bytes available to send each second we get 13830 jumbo frames (125000000 / 9038). That is far fewer frames than the 81000 normal sized frames, but each frame carries much more data, which reduces the relative network overhead.

(There are also other types of overhead, including CPU time in hosts and processing work done in network interface cards, switches and routers, but in this article we will only look at the bandwidth usage.)

If we remove the overhead for the Interframe Gap, Ethernet FCS, TCP, IP, Ethernet header and the Preamble, we end up with 8960 bytes of data inside each TCP segment. This means the Maximum Segment Size (MSS) is 8960 bytes, much larger than the default 1460 bytes. An MSS of 8960 multiplied by 13830 frames gives 123916800 bytes per second for user data.

This gives a really good efficiency of around 99% (123916800 / 125000000). So by increasing the frame size we have almost five percent more bandwidth available for data, compared to about 94% for the default frame size.
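
Running the same tcp_goodput helper sketched in the previous section with an MTU of 9000 reproduces the jumbo frame figures:

    print(tcp_goodput(9000))   # (13830, 123916800, ~0.991) - about 99% efficiency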

Wireless Links with Gigabit Ethernet Interfaces

Note that for wireless links such as Microwave, Radio, Millimeter Wave or Free Space Optics, the airside interface often uses different coding and modulation than the network-side interface. This difference is usually due to limitations in the amount of RF spectrum available from the regulatory body and channel planning (for example, a 40MHz, 56MHz, 60MHz, 80MHz or even 112MHz channel), the modulation used (for example, up to 256QAM or 1024QAM), which affects both transmit power and receiver sensitivity, aggregation features such as MIMO or XPIC, and, especially for longer links, the corresponding Link Budget between the two ends, which includes the antenna gain at both sides plus any losses caused by transmission waveguides, connectors and atmospheric fade effects. This airside interface may therefore impose a lower capacity on the end-to-end wireless link even if the network interfaces at each end are connected at the 1Gbps Gigabit Ethernet rate.
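
To make the link budget idea concrete, here is a minimal Python sketch. The formula is the standard free-space path loss equation; the example figures (transmit power, antenna gains, frequency and distance) are purely hypothetical and not taken from any particular product:

    import math

    def free_space_path_loss_db(freq_ghz, dist_km):
        # Free-space path loss with frequency in GHz and distance in km
        return 92.45 + 20 * math.log10(freq_ghz) + 20 * math.log10(dist_km)

    def received_level_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, freq_ghz, dist_km, losses_db=0.0):
        # Received level = Tx power + antenna gains - path loss - feeder/misc losses
        return (tx_dbm + tx_gain_dbi + rx_gain_dbi
                - free_space_path_loss_db(freq_ghz, dist_km) - losses_db)

    # Hypothetical E-band example: 15dBm Tx, 43dBi antennas, 73.5GHz, 2km path
    print(round(received_level_dbm(15, 43, 43, 73.5, 2.0, losses_db=1.0), 1))

Comparing the resulting clear-air received level against the receiver sensitivity for a given modulation gives the fade margin, which determines how much rain fade the link can tolerate.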

Transparent Wireless Links

Note that only some wireless technologies, such as Free Space Optics (FSO), are capable of fully transparent transmission using the exact same modulation used on fibre optic networks, so the full 1.25Gbps line rate, along with the complete packet structure, is maintained exactly. The advantages of transparent transmission are that throughput is easily predicted and latency is the lowest possible, as transmission is generally one bit at a time.

Conclusion

Default Gigabit Ethernet has a potential throughput of over 81000 frames per second and therefore a high throughput for actual data (about 118 MB/s), giving an efficiency of around 94%, or 940Mbps. For networking equipment that supports Jumbo Frames, increasing the MTU to 9000 can deliver even more data on the same bandwidth link, up to around 123 MB/s, thanks to the reduced overhead from sending a smaller number of larger frames. Jumbo Frames can therefore potentially offer around 99% of the theoretical Gigabit Ethernet bandwidth to carry data, which means roughly 990Mbps of capacity.

For Further Information

For further information on Applications and Solutions of Gigabit Wireless products please Contact Us


IEEE 802.11ay wireless technology: Next-gen 60GHz WiFi

A new standard for 60GHz Wi-Fi goes beyond 802.11ad wireless speed & range

A new standard for high speed multi-gigabit WiFi is emerging.  Though products based on the IEEE 802.11ad (WiGig) standard have really only begun rolling out, an effort to deliver an enhancement called IEEE 802.11ay that promises to deliver faster and longer range Wi-Fi networks is gaining steam.

The up-coming 802.11ay is an enhancement of 802.11ad in the unlicensed 60 GHz millimeter wave band of spectrum, and should be a natural upgrade. The upgrade will offer significant speed and range improvements.

IEEE 802.11ay 60GHz networking

Technical Summary

802.11ay is an amendment within the IEEE 802.11 family of WLAN standards. It will operate at 60 GHz, with a transmission rate of 20–40 Gbit/s and an extended transmission distance of 300–500 meters. It is also likely to include mechanisms for channel bonding and MU-MIMO. It is expected to be released in 2017. 802.11ay will not be an entirely new type of WLAN in the IEEE 802.11 set, but simply an improvement on 802.11ad.

Where 802.11ad uses a maximum channel bandwidth of 2.16 GHz, 802.11ay bonds four of those channels together for a maximum bandwidth of 8.64 GHz. MIMO is also added, with a maximum of 4 streams. The link rate per stream is 44Gbit/s; with four streams this goes up to 176Gbit/s. Higher order modulation is also added, probably up to 256-QAM. 802.11ay applications could include replacement of Ethernet and other cables within offices or homes, and providing backhaul connectivity outdoors for service providers.
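
The headline figures above follow from simple multiplication of the numbers quoted in this article:

    channel_ghz = 2.16        # one 802.11ad/ay channel
    bonded = 4                # channels bonded together in 802.11ay
    per_stream_gbps = 44      # link rate per spatial stream
    streams = 4               # maximum MIMO streams

    print(channel_ghz * bonded)        # 8.64 GHz of bonded bandwidth
    print(per_stream_gbps * streams)   # 176 Gbit/s aggregate link rate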

What is the difference between ad and ay?

The 802.11ad standard was published in 2012 and the technology gives devices access to the unlicensed and relatively unclogged 60 GHz millimeter wave spectrum band for multimedia streaming, VR headset connectivity, computer-to-monitor wireless links and other apps that don’t require more than say 30 or 40 feet of unimpeded space. It has been adopted by chipmakers as well as vendors of routers, access points and other devices. The Wi-Fi Alliance runs a WiGig certification program for vendors, and the early 11ad gear on the market most commonly supports data transfer rates of 4.6Gbps – way faster than 802.11n and 11ac, but more limited in range and unable to penetrate solid objects.

The backwards compatible 802.11ay amendment to 802.11ad is designed to boost speeds several-fold. That initially would amount to a transmission rate of 20 to 30Gbps and a range of 33 to 100 feet with 11ay-to-11ay device setups, but once channel bonding, MIMO and other capabilities are exploited, you could be getting closer to 200Gbps and reaching distances approaching 1,000 feet, according to industry players.

11ay, as the specs are being developed, “is really allowing for a wider range of products than you’d get with ad, which has one set of data rates that everyone supports… ay has a lot more parameters to play with in channel bonding, MIMO and features at the MAC level to allow a far greater range of performance and products” according to one chipset vendor.

Other up-coming Fast WiFi standards: 802.11ax

IEEE 802.11ay 60GHz networking

Users should not confuse 802.11ay with 802.11ax, which will work in the 2.4 and 5 GHz bands. The lower frequency bands used by 11ax will penetrate walls; 11ay will not.

What will 802.11ay be used for?

It remains to be seen how soon the high speeds of 11ay will really be needed for internal uses, as 802.11ac — including Wave 2 products — are already pretty robust. But experts say that if 11ad doesn’t quite do it for you given its distance limitations, “11ay will finally be the technology that would let you snip that Ethernet cord – you no longer have to run Ethernet cables to everyone’s desk… there’s enough wireless bandwidth in ay.”

Many are enthusiastic about 802.11ay’s potential as a fixed point-to-point or point-to-multipoint outdoor backhaul technology, especially in light of scaled-back fiber rollout plans by providers like Google and Verizon in the face of the extraordinary costs associated with such implementations. “I’m more bullish on using ad & ay for backhaul (instead of mesh) in the case of campus & city networks — provided that it has a useful range,” according to one industry expert.

But it’s possible that 802.11ay could find a role in internal mesh and backbone networks as well as for other uses such as providing connectivity to VR headsets, supporting server backups and handling cloud applications that require low latency. “I believe that eventually, there will be enterprise applications for this – but it’s probably a few years into the future, given that we will have 802.11ax fairly soon and because there’s still a lot of 5 GHz band available for that (and ac).”

When will 802.11ay become reality?

The 802.11ay task group had its initial meeting in 2015, and the spec only hit the Draft 0.1 stage in January. It is expected to reach Draft 1.0 by July 2017, according to the IEEE task group. If that mark is hit, expect pre-standard 11ay products to start rolling out within a year of that time.

Who is behind 802.11ay?

The IEEE task force leading the 11ay work includes representatives from major equipment and chipset vendors.  The group states its goal as this: “Task Group ay is expected to develop an amendment that defines standardized modifications to both the IEEE 802.11 physical layers (PHY) and the IEEE 802.11 medium access control layer (MAC) that enables at least one mode of operation capable of supporting a maximum throughput of at least 20 gigabits per second (measured at the MAC data service access point), while maintaining or improving the power efficiency per station. This amendment also defines operations for license-exempt bands above 45 GHz while ensuring backward compatibility and coexistence with legacy directional multi-gigabit stations (defined by IEEE 802.11ad-2012 amendment) operating in the same band.”

For Further Information

Please Contact Us

IEEE 802.11ax: The new standard for Wi-Fi

The new standard 802.11ax for Wi-Fi goes beyond 802.11ac wireless

A new standard for high speed multi-gigabit WiFi is emerging.  Current WiFi products use chips based on the IEEE 802.11a, 802.11b, 802.11g, 802.11n and 802.11ac standards; an effort to deliver an enhancement called IEEE 802.11ax promises to deliver faster and longer range Wi-Fi networks.

The up-coming 802.11ax is an enhancement of 802.11ac in the unlicensed 2.4 and 5GHz bands of spectrum, and should be a natural upgrade. The upgrade will offer significant speed and range improvements.

IEEE 802.11ax Wireless Networking

Technical Summary

IEEE 802.11ax is a type of WLAN in the IEEE 802.11 family. It is designed to improve overall spectral efficiency, especially in dense deployment scenarios. It is still at an early stage of development, but is predicted to have a top speed of around 10 Gb/s. It works in the 2.4 and/or 5 GHz bands, and in addition to MIMO and MU-MIMO it introduces the OFDMA technique to improve spectral efficiency, as well as higher order 1024-QAM modulation support for better throughput. Though the nominal data rate is only around 37% higher compared with 802.11ac, the new amendment will allow a 4X increase in user throughput thanks to more efficient spectrum usage. It is due to be publicly released in 2019.

Modulation and coding schemes for a single spatial stream

MCS     Modulation   Coding   Data rate (in Mb/s)
index   type         rate     20 MHz           40 MHz           80 MHz           160 MHz
                              1600ns  800ns    1600ns  800ns    1600ns  800ns    1600ns  800ns
                              GI      GI       GI      GI       GI      GI       GI      GI
0       BPSK         1/2      4       4        8       9        17      18       34      36
1       QPSK         1/2      16      17       33      34       68      72       136     144
2       QPSK         3/4      24      26       49      52       102     108      204     216
3       16-QAM       1/2      33      34       65      69       136     144      272     282
4       16-QAM       3/4      49      52       98      103      204     216      408     432
5       64-QAM       2/3      65      69       130     138      272     288      544     576
6       64-QAM       3/4      73      77       146     155      306     324      613     649
7       64-QAM       5/6      81      86       163     172      340     360      681     721
8       256-QAM      3/4      98      103      195     207      408     432      817     865
9       256-QAM      5/6      108     115      217     229      453     480      907     961
10      1024-QAM     3/4      122     129      244     258      510     540      1021    1081
11      1024-QAM     5/6      135     143      271     287      567     600      1134    1201

Technical improvements

The 802.11ax amendment will bring several key improvements over 802.11ac. 802.11ax addresses frequency bands between 1 GHz and 6 GHz; therefore, unlike 802.11ac, 802.11ax will also operate in the unlicensed 2.4 GHz band. To meet the goal of supporting dense 802.11 deployments, a number of new features have been approved.

Other up-coming Fast WiFi standards: 802.11ay

IEEE 802.11ax Wireless Networking

Users should not confuse 802.11ax with 802.11ay, which will work in the 60GHz bands.  The lower frequency bands 1-6GHz for 11ax will penetrate walls.  11ay will not.

What will 802.11ax be used for?

802.11ax is an upgrade for existing 802.11a, 802.11b, 802.11g, 802.11n and 802.11ac networks. Many are enthusiastic about 802.11ax’s potential as a fixed point-to-point or point-to-multipoint outdoor backhaul technology, especially in light of scaled-back fiber rollout plans by providers like Google and Verizon in the face of the extraordinary costs associated with such implementations. 11ax will therefore find applications outdoors as well as indoors.

Who is behind 802.11ax?

The IEEE task force leading the 11ax work includes representatives from major equipment and chipsets vendors.
In 2012 and 2013, IEEE 802.11 received various submissions in its Standing Committee (SC) Wireless Next Generation (WNG) looking at issues with IEEE 802.11ac and potential solutions for future WLANs.  Immediately after the publication of IEEE 802.11ac in March 2013, the IEEE 802.11 Working Group (WG) established the Study Group (SG) High Efficiency WLAN (HEW).

For Further Information

Please Contact Us

4G to 5G Roadmap

What is 5G?

5G Mobile Networks

5G radio access technology will be a key component of the Networked Society. It will address high traffic growth and increasing demand for high-bandwidth connectivity. It will also support massive numbers of connected devices and meet the real-time, high-reliability communication needs of mission-critical applications. 5G will provide wireless connectivity for a wide range of new applications and use cases, including wearables, smart homes, traffic safety/control, critical infrastructure, industry processes and very-high-speed media delivery. As a result, it will also accelerate the development of the Internet of Things. ITU Members including key industry players, industry forums, national and regional standards development organizations, regulators, network operators, equipment manufacturers as well as academia and research institutions together with Member States, gathered as the working group responsible for IMT systems, and completed a cycle of studies on the key performance requirements of 5G technologies for IMT-2020.

The Aim of 5G

The overall aim of 5G is to provide ubiquitous connectivity for any kind of device and any kind of application that may benefit from being connected. 5G networks will not be based on one specific radio-access technology. Rather, 5G is a portfolio of access and connectivity solutions addressing the demands and requirements of mobile communication beyond 2020.

CableFree 5G Technology

The specification of 5G will include the development of a new flexible air interface, NX, which will be directed to extreme mobile broadband deployments. NX will also target high-bandwidth and high-traffic-usage scenarios, as well as new scenarios that involve mission-critical and realtime communications with extreme requirements in terms of latency and reliability.

In parallel, the development of Narrow-Band IoT (NB-IoT) in 3GPP is expected to support massive machine connectivity in wide area applications. NB-IoT will most likely be deployed in bands below 2GHz and will provide high capacity and deep coverage for enormous numbers of connected devices.

Ensuring interoperability with past generations of mobile communications has been a key principle of the ICT industry since the development of GSM and later wireless technologies within the 3GPP family of standards.

4G to 5G Evolution

In a similar manner, LTE will evolve in a way that recognizes its role in providing excellent coverage for mobile users, and 5G networks will incorporate LTE access (based on Orthogonal Frequency Division Multiplexing (OFDM)) along with new air interfaces in a transparent manner toward both the service layer and users. Around 2020, much of the available wireless coverage will continue to be provided by LTE, and it is important that operators with deployed 4G networks have the opportunity to transition some – or all – of their spectrum to newer wireless access technologies.

For operators with limited spectrum resources, the possibility of introducing 5G capabilities in an interoperable way – thereby allowing legacy devices to continue to be served on a compatible carrier – is highly beneficial and, in some cases, even vital. At the same time, the evolution of LTE to a point where it is a full member of the 5G family of air interfaces is essential, especially since initial deployment of new air interfaces may not operate in the same bands. The 5G network will enable dual-connectivity between LTE operating within bands below 6GHz and the NX air interface in bands within the range 6GHz to 100GHz. NX should also allow for user-plane aggregation, i.e. joint delivery of data via LTE and NX component carriers. This article explains the key requirements and capabilities of 5G, along with its technology components and spectrum needs.

In order to enable connectivity for a very wide range of applications with new characteristics and requirements, the capabilities of 5G wireless access must extend far beyond those of previous generations of mobile communication. These capabilities will include massive system capacity, very high data rates everywhere, very low latency, ultra-high reliability and availability, very low device cost and energy consumption, and energy-efficient networks.

MASSIVE SYSTEM CAPACITY

Traffic demands for mobile-communication systems are predicted to increase dramatically.  To support this traffic in an affordable way, 5G networks must deliver data with much lower cost per bit compared with the networks of today. Furthermore, the increase in data consumption will result in an increased energy footprint from networks. 5G must therefore consume significantly lower energy per delivered bit than current cellular networks. The exponential increase in connected devices, such as the deployment of billions of wirelessly connected sensors, actuators and similar devices for massive machine connectivity, will place demands on the network to support new paradigms in device and connectivity management that do not compromise security. Each device will generate or consume very small amounts of data, to the extent that they will individually, or even jointly, have limited impact on the overall traffic volume. However, the sheer number of connected devices seriously challenges the ability of the network to provision signaling and manage connections.

VERY HIGH DATA RATES EVERYWHERE

LTE Roadmap 4G to 5G

Every generation of mobile communication has been associated with higher data rates compared with the previous generation. In the past, much of the focus has been on the peak data rate that can be supported by a wireless-access technology under ideal conditions. However, a more important capability is the data rate that can actually be provided under real-life conditions in different scenarios.

  • 5G should support data rates exceeding 10Gbps in specific scenarios such as indoor and dense outdoor environments.
  • Data rates of several 100Mbps should generally be achievable in urban and suburban environments.
  • Data rates of at least 10Mbps should be accessible almost everywhere, including sparsely populated rural areas in both developed and developing countries.

VERY LOW LATENCY

Very low latency will be driven by the need to support new applications. Some envisioned 5G use cases, such as traffic safety and control of critical infrastructure and industry processes, may require much lower latency compared with what is possible with the mobile-communication systems of today. To support such latency-critical applications, 5G should allow for an application end-to-end latency of 1ms or less, although application-level framing requirements and codec limitations for media may lead to higher latencies in practice. Many services will distribute computational capacity and storage close to the air interface. This will create new capabilities for real-time communication and will allow ultra-high service reliability in a variety of scenarios, ranging from entertainment to industrial process control.

ULTRA-HIGH RELIABILITY AND AVAILABILITY

5G Wireless Technologies

In addition to very low latency, 5G should also enable connectivity with ultra-high reliability and ultra-high availability. For critical services, such as control of critical infrastructure and traffic safety, connectivity with certain characteristics, such as a specific maximum latency, should not merely be ‘typically available.’ Rather, loss of connectivity and deviation from quality of service requirements must be extremely rare. For example, some industrial applications might need to guarantee successful packet delivery within 1 ms with a probability higher than 99.9999 percent.

VERY LOW DEVICE COST AND ENERGY CONSUMPTION

Low-cost, low-energy mobile devices have been a key market requirement since the early days of mobile communication. However, to enable the vision of billions of wirelessly connected sensors, actuators and similar devices, a further step has to be taken in terms of device cost and energy consumption. It should be possible for 5G devices to be available at very low cost and with a battery life of several years without recharging.

ENERGY-EFFICIENT NETWORKS

While device energy consumption has always been prioritized, energy efficiency on the network side has recently emerged as an additional KPI, for three main reasons:

  • Energy efficiency is an important component in reducing operational cost, as well as a driver for better dimensioned nodes, leading to lower total cost of ownership.
  • Energy efficiency enables off-grid network deployments that rely on medium-sized solar panels as power supplies, thereby enabling wireless connectivity to reach even the most remote areas.
  • Energy efficiency is essential to realizing operators’ ambition of providing wireless access in a sustainable and more resource-efficient way.

The importance of these factors will increase further in the 5G era, and energy efficiency will therefore be an important requirement in the design of 5G wireless access.

For More Information

Please Contact Us for more information on 5G and IMT-2020

Gigabit LTE and 5G:

The Roadmap to Gigabit LTE and 5G:

Two questions: “Do I need Gigabit LTE?” and “Will mobile networks support these new speeds?” The short answer to both is a resounding “Yes.”

Do I need Gigabit LTE?

There’s a common misconception that we need to address right away. Some people think that extreme speeds are only realized in ideal lab conditions, so they’re not relevant in the real world. Their argument is that current LTE devices and networks already support peak speeds of 300 Mbps or 600 Mbps, but actual speeds are lower. It follows, then, that there’s already “enough headroom” in the networks and thus the faster speeds are irrelevant.

Nothing could be further from the truth.

Here’s the thing. Gigabit LTE — and every other LTE innovation we’ve helped commercialize in the past few years — directly contributes to improving the real-world speeds that you’ll experience.

Gigabit LTE provides more consistent Internet speeds as compared to previous generations of LTE. In an extensive network simulation conducted by Qualcomm Technologies, we placed LTE devices of varying capabilities from Cat 4 to Cat 16 (the Gigabit LTE category) in the same network. The average throughput achieved by a GB LTE device was comfortably above 100 Mbps. Depending on traffic type, the average throughput could be much higher. That’s compared to around 65 Mbps for Cat 6 devices, the current baseline for many LTE devices and networks.

And these simulation results bear out in the real world. At the Sydney event, one analyst who tried the first Gigabit LTE device reached 360 Mbps in a speed test. A real device on a live network in the middle of a very crowded tourist area — that’s the power of Gigabit LTE.

The constituent technologies that make Gigabit LTE possible — carrier aggregation, 4×4 MIMO, and 256-QAM — are engineered to allow the network to allocate many more network resources to your device simultaneously than you would get with an older LTE device. Or, alternatively, allocate fewer resources to you without diminishing the speed.

There’s an additional benefit as well. A Gigabit LTE device has four antennas in order to support 4×4 MIMO, giving it a hidden edge. In good signal conditions, you can get four streams of data that increase your speed, as compared to two streams with conventional LTE. In weak signal conditions, the additional antennas act like additional “ears” that are designed to help your Gigabit LTE device lock on to the signal from the tower, which can yield up to 70 percent faster speeds. Think about how slow LTE speeds can get in weak signal conditions. Wouldn’t this speed bump help quite a bit? A real-world study of this on T-Mobile’s network – using the Samsung Galaxy S7, which is capable of 4×4 MIMO – confirms this.

Additionally, with Gigabit LTE devices, you should be able to finish your downloads much faster, with fewer resources from the network. This can improve the capacity of the network and allow it to serve other users sooner. Not only do you enjoy faster speeds, but other people connected to the same cell tower get faster speeds as well, even if they don’t have a Gigabit LTE device.

So yes, you do need Gigabit LTE. It can improve your average, real-world speeds, give you better speeds in weak signal conditions, and allow other people to enjoy faster speeds too.

Will mobile networks support these new speeds?

Here, again, the answer is “Yes.”

Fifteen mobile operators in 11 countries intend to launch or trial Gigabit LTE in 2017. They include: T-Mobile, Sprint, and AT&T in the U.S.; EE, T-Mobile Germany, Vodafone, and Telefonica in Europe; and NTT DoCoMo, SoftBank, KDDI, and SingTel in Asia.

And, of course, Telstra’s Gigabit LTE network is already live. We expect many more to come online over the next few years. It’s important to remember that many people are hanging on to their devices for longer. So even if on day one your network doesn’t support GB LTE, there’s a good chance it may over the lifetime of your phone.

2017 will be the year of Gigabit LTE. And with the right device, power users can enjoy next-gen experiences sooner than we expected.

10Gbps MMW Links installed for Safe City Applications

CableFree 10Gbps MMW links have been installed for Safe City applications

Using the latest 10Gbps Millimeter Wave wireless technology, the links connect Safe City customer sites with a full 10Gbps (10Gig-E) full duplex capacity, with no compression or slow-down.

10Gbps MMW Links installed in the Middle East
10Gbps MMW Links installed for Safe City Applications

CableFree has pioneered high speed 10 Gigabit Millimeter Wave (MMW) technology to connect sites where fibre optics are unavailable, too slow to provision, too expensive or at risk of damage. In busy cities, fibre optics is usually installed in ducts underground which are prone to disruption when digging or building works take place.

This client had already installed fibre optics for major CCTV backbones around the city. However, 3rd party building works disrupted the ducts severing the fibres, causing major outage in the network and loss of CCTV coverage – putting citizens at risk.

10Gbps MMW Links installed in the Middle East
10Gbps MMW Links installed for Safe City Applications

CableFree 10Gbps Millimeter Wave links offer an ideal alternative to fragile fibre optics: the radio units are installed on sites owned by the customer, bringing the full network under user control and management. The units are typically mounted on building rooftops well away from street-level disruption, which are easy to access, secure and defend. MMW wireless links can be installed in hours, not weeks, and at a tiny fraction of the cost of trenches and ducts for fibre optics.

Reliable operating distances of 5-8km depending on climatic region are ideal for city-scale networks. A full range of planning tools allows users to predict performance prior to purchase or installation. The E-band (70-80GHz) frequencies are available in many countries under a “light license” and are uncongested, with narrow “pencil beams” allowing dense re-use of the spectrum with no interference between links or users. The narrow beams also make such links inherently secure, with proprietary signals and encoding.

ACM Automatic Coding Modulation for 10Gbps MMW Links

For long links, the Adaptive Coding and Modulation feature enables the MMW link to dynamically adjust modulation in high rainfall conditions to ensure link uptime, capacity and range are maximised. For shorter links and long links in low rainfall regions, the links retain 10Gbps at all times.

10Gbps MMW links are a movable asset: if the network requirements change, or different sites require connecting, the links can be moved to the new sites immediately, retaining all the investment in infrastructure. For Special Events and Disaster Recovery, temporary links can be deployed using generator or alternative “off grid” (Solar + Battery) power if no AC power is available on sites. The units can be mounted on tripods or stationary vehicles as required for rapid deployment.

ACM Automatic Coding Modulation for 10Gbps MMW Links

For mobile operators, advanced features such as IEEE 1588v2, SyncE and management are included which make CableFree MMW ideal for RAN backhaul for 4G & 5G networks. CableFree 10Gbps MMW is upgradable to 20Gbps and 40Gbps with “stacking” giving the very highest throughput in the wireless industry, comparable to fibre optic backbone networks.

For more information please visit the CableFree website or contact our expert team:

www.cablefree.net/10g

Comparison of QAM Modulation Types

Comparing QAM Modulation

Comparison between 8, 16, 32, 64, 128, and 256 QAM types of Quadrature Amplitude Modulation

Gigabit Wireless Networks commonly use QAM modulation to achieve high data rate transmission.  So what is QAM?

Introducing Quadrature Amplitude Modulation

QAM, Quadrature Amplitude Modulation, is widely used in many digital radio communications and data communications applications. A variety of forms of QAM are available, and some of the more common forms include 16, 32, 64, 128 and 256 QAM. Here the figures refer to the number of points on the constellation, i.e. the number of distinct states that can exist.

The various flavours of QAM may be used when data rates beyond those offered by 8-PSK are required by a radio communications system. This is because QAM achieves a greater distance between adjacent points in the I-Q plane by distributing the points more evenly, so the points on the constellation are more distinct and data errors are reduced. However, to transmit more bits per symbol while keeping the energy of the constellation the same, the points on the constellation must be placed closer together, and the transmission becomes more susceptible to noise. This results in a higher bit error rate than for the lower order QAM variants. In this way there is a balance between obtaining higher data rates and maintaining an acceptable bit error rate for any radio communications system.

Applications

QAM is used in many radio communications and data delivery applications. However, some specific variants of QAM are used in particular applications and standards.

For domestic broadcast applications for example, 64 and 256 QAM are often used in digital cable television and cable modem applications. In the UK, 16 and 64 QAM are currently used for digital terrestrial television using DVB – Digital Video Broadcasting. In the US, 64 and 256 QAM are the mandated modulation schemes for digital cable as standardised by the SCTE in the standard ANSI/SCTE 07 2000.

In addition to this, variants of QAM are also used for many wireless and cellular technology applications.

Constellation diagrams

The constellation diagrams show the different positions for the states within different forms of QAM, quadrature amplitude modulation. As the order of the modulation increases, so does the number of points on the QAM constellation diagram.

The diagrams below show constellation diagrams for a variety of formats of modulation:

Constellation diagrams: BPSK, 16QAM, 32QAM, 64QAM

Bits per symbol

The advantage of using QAM is that it is a higher order form of modulation and as a result it is able to carry more bits of information per symbol. By selecting a higher order format, the data rate of a link can be increased.

The table below gives a summary of the bit rates of different forms of QAM and PSK.

MODULATION   BITS PER SYMBOL   SYMBOL RATE
BPSK         1                 1 x bit rate
QPSK         2                 1/2 bit rate
8PSK         3                 1/3 bit rate
16QAM        4                 1/4 bit rate
32QAM        5                 1/5 bit rate
64QAM        6                 1/6 bit rate
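
The table follows directly from the constellation size: bits per symbol is log2 of the number of constellation points, and for a fixed bit rate the symbol rate scales with the reciprocal. A short illustrative Python sketch:

    import math

    for name, points in [("BPSK", 2), ("QPSK", 4), ("8PSK", 8),
                         ("16QAM", 16), ("32QAM", 32), ("64QAM", 64)]:
        bits = int(math.log2(points))
        print(f"{name}: {bits} bits/symbol, symbol rate = 1/{bits} x bit rate")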

QAM noise margin

While higher order modulation rates are able to offer much faster data rates and higher levels of spectral efficiency for the radio communications system, this comes at a price. The higher order modulation schemes are considerably less resilient to noise and interference.

As a result of this, many radio communications systems now use dynamic adaptive modulation techniques. They sense the channel conditions and adapt the modulation scheme to obtain the highest data rate for the given conditions. As signal to noise ratios decrease errors will increase along with re-sends of the data, thereby slowing throughput. By reverting to a lower order modulation scheme the link can be made more reliable with fewer data errors and re-sends.
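
A minimal sketch of how such adaptive modulation selection might look in code; the SNR thresholds below are illustrative assumptions only, not values from any particular radio:

    # Highest-order scheme first; (name, bits per symbol, assumed minimum SNR in dB)
    SCHEMES = [("256QAM", 8, 28), ("64QAM", 6, 22), ("16QAM", 4, 16),
               ("QPSK", 2, 9), ("BPSK", 1, 4)]

    def select_modulation(snr_db):
        """Pick the most efficient scheme whose (assumed) SNR requirement is met."""
        for name, bits, min_snr in SCHEMES:
            if snr_db >= min_snr:
                return name, bits
        return None   # link cannot be sustained at this SNR

    print(select_modulation(25.0))   # ('64QAM', 6)
    print(select_modulation(7.0))    # ('BPSK', 1)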

For Further Information

Please Contact Us

LTE Advanced and Gigabit Speeds for 4G

LTE: the roadmap ahead to Gigabit Speeds

Gigabit 4G LTE Technology

LTE enabled smartphones made up only 1 in 4 new devices shipped back in 2013.  By 2015, that percentage share soared to over half of all new smartphones shipped globally.  In fact, LTE is arguably the most successful generational wireless technology having just been commercialized in late 2009 and evolved to capture the majority of the market for new handsets today.  Previous 3G technologies took over a decade to achieve what LTE has done in just 6 years.

The surge in LTE adoption, not coincidentally, parallels the growth of the smartphone, creating a symbiotic relationship that propelled massive adoption of wireless broadband and smartphone use.  The order of magnitude improvement in network latency provided by LTE wireless connectivity, coupled with the rapid growth in digital content and the readily available computing power within everyone’s reach, created a rich tapestry of mobile opportunities.

CableFree LTE CPE Indoor Desktop Router with WiFi and VOIP

As the global LTE network deployments enter a new phase of network enhancements, the industry is now turning to enhanced wireless technologies to evolve the speed and capacity to keep up with consumer demand for ever faster downloads, video streams and mobile applications.  The first stage of LTE network improvement revolved around the use of carrier aggregation which is a method of combining disparate spectrum holdings to create a larger data pipe.  This development tracked with the evolution of LTE from single carrier Cat-3 devices to dual carrier Cat-4 and Cat-6 devices.  Further development of carrier aggregation extended the concept to include 3 carrier aggregation specified by Cat-9 LTE standards which brought the maximum throughput speed to 450mbps in the downlink.

However, in order for the industry to evolve further and keep up with the insatiable demand for mobile broadband, LTE Advanced will require further improvements.  The next step in the evolution of LTE relies on LTE Advanced.  This new set of technologies is destined to improve LTE speed to and past the gigabit-per-second barrier.  To this end, IHS will be delivering a series of LTE Advanced Insights to further explore the key enabling technologies to get us to that gigabit per second barrier. This article is the first of this series.

Critical Areas of Exploration

Operators typically ask critical questions including but not limited to:

  • What is higher order modulation and how does radio signaling enhancement lead to faster wireless broadband?
  • How can advanced antenna designs be incorporated into existing smartphone form factors and what are the physical challenges involved in doing so?
  • What are the opportunities to leverage additional spectrum use especially in the unlicensed portions of 3.5GHz and 5GHz?  What are the advantages as well as the challenges of doing so?
  • How can the industry take learnings from the 3G to 4G transition and build on the foundations of LTE moving into 5G?

QAM: Higher Order Modulation to Break Through Gigabit per Second Barrier

Higher order modulation schemes have been used throughout 3G technologies and are now enabling the increased bandwidth coming with 4G LTE Advanced.   As WCDMA evolved into HSPA and HSPA+ in the 3G era, higher order modulations of 16QAM and 64QAM replaced the older QPSK modulation schemes to improve throughput data rates and enable mobile broadband services to take off.  Fundamentally, sophisticated signal processing such as 64QAM is used in wireless networks to improve the spectral efficiency of communications by packing as many bits as possible into each transmission.  The bits per symbol carried by the 16QAM modulation scheme is 4, while higher order 64QAM yields 6 bits per symbol, a 50% improvement.  Extending this concept, LTE Advanced will use 256QAM modulation from Category 11 onwards, which is expected to provide a 33% improvement in spectral efficiency over 64QAM on the same LTE stream by increasing the bits per symbol from 6 to 8.

Modulation Level (QAM)   Bits per Symbol   Incremental Efficiency Gain
16QAM                    4                 -
64QAM                    6                 50%
256QAM                   8                 33%

Table 1 – Modulation Levels and Corresponding Efficiency Gains
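
The incremental efficiency gain column is simply the ratio of bits per symbol between successive modulation orders, for example:

    import math

    bits = [int(math.log2(m)) for m in (16, 64, 256)]   # 4, 6, 8 bits per symbol
    for prev, curr in zip(bits, bits[1:]):
        print(f"{curr} vs {prev} bits/symbol: +{round((curr / prev - 1) * 100)}%")
    # 6 vs 4 bits -> +50%, 8 vs 6 bits -> +33%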

While higher order modulations equate to greater spectral efficiencies, achieving higher order signaling within the framework of wireless networks remains a significant challenge.  Real-world applications of higher order modulations are difficult to implement network wide, as the more sophisticated signaling schemes are inherently less resilient to noise and interference.  In normal deployments of macro cellular coverage, network operators employ adaptive modulation techniques to detect signal channel conditions and adjust modulation schemes accordingly.  For example, if the wireless user is closer to the center of the macro cell area, the network will negotiate the signaling scheme to best take advantage of the wireless fidelity and communicate using the most efficient modulation scheme available.  However, if the conditions are deemed inadequate, for example at a cell site coverage edge, the network may resort to lower orders of modulation signaling in order to achieve higher reliability of connections.

LTE Category   Carrier Aggregation   MIMO   Spatial Streams   Modulation   Max. Throughput
Cat-4          2x10MHz               2×2    4                 64QAM        150mbps
Cat-6          2x20MHz               2×2    4                 64QAM        300mbps
Cat-9          3x20MHz               2×2    6                 64QAM        450mbps
Cat-11         3x20MHz               2×2    6                 256QAM       600mbps
Cat-16         3x20MHz               4×4    10                256QAM       1000mbps

Table 2 – LTE Categories and Corresponding Throughput Gains
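
The peak rates in Table 2 can be roughly reconstructed by assuming about 75Mbps per 20MHz of bandwidth per spatial layer at 64QAM and scaling by bits per symbol for other modulations. This scaling rule is our own approximation for illustration, not a 3GPP formula:

    def peak_rate_mbps(layer_mhz, qam_bits):
        """layer_mhz = sum over carriers of (carrier bandwidth in MHz x MIMO layers)."""
        return layer_mhz * 3.75 * (qam_bits / 6.0)   # ~3.75 Mbps per MHz per layer at 64QAM

    print(peak_rate_mbps(2*10*2, 6))             # Cat-4  ~150
    print(peak_rate_mbps(2*20*2, 6))             # Cat-6  ~300
    print(peak_rate_mbps(3*20*2, 6))             # Cat-9  ~450
    print(peak_rate_mbps(3*20*2, 8))             # Cat-11 ~600
    print(peak_rate_mbps(2*20*4 + 1*20*2, 8))    # Cat-16 ~1000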

LTE Advanced Base Station

The limitation of higher order modulation described in the previous paragraph was a hallmark of 3G networks.  However, as LTE deployments begin to rely on a more 5G-like heterogeneous network architecture leveraging augmented network equipment such as small cells, the use of higher order modulation becomes more practical as the distance between the LTE antenna and the mobile device is reduced.  Yet again, with a challenging transmission medium such as over-the-air wireless, obstacles still exist.  Even with the most optimized heterogeneous networks, issues such as site-to-site signal interference can negate many of the benefits of small cells. Therefore, network operators, with help from their equipment vendors, are working on network optimization software to accommodate these interferences.  In the end, any network, even one designed for Cat-11 LTE and above, will not be able to cover all of its mobile subscribers with the highest efficiency signaling.  In actual deployments, only a portion of the devices within a coverage area will be at 256QAM, while the majority of other devices will fall back to a lower modulation scheme such as 64QAM or 16QAM.  Additional challenges also exist in carrier-aggregated LTE connections, where 2 or 3 carriers are aggregated to form a wider virtual channel: depending on the frequencies used and the placement of the cell towers associated with that specific spectrum, not all of the aggregated radio signals can be adapted to higher order modulation.  Therefore, reaching the theoretical maximum throughput data rates using higher order modulation will be particularly difficult in actual network deployments.

With these real-world deployment limitations on the handset side in mind, higher order modulation schemes have still been shown to be a net benefit for LTE networks.  In trial tests, even a small fraction of users in a coverage cell using 256QAM has been shown to improve network capacity performance.  As devices with 256QAM enter and exit the network faster and more efficiently, wireless capacity is freed up to serve non-256QAM devices on the network.  Overall, enabling higher order modulation on an LTE network presents a cost-advantageous proposition to network carriers, as the upgrade is primarily a software based solution.  Going to 256QAM gives network carriers immediate benefits without the significant hardware changes typically associated with other LTE Advanced features such as adding additional carriers or scaling MIMO antennas.

Evolving LTE Advanced to Gigabit Speeds

Putting the Pieces Together:

In order to achieve gigabit speeds in LTE Advanced, higher order modulation is one tool in a vast toolbox of technologies the industry can use to propel 4G LTE further while the market waits for consensus around next generation 5G networks.  Building on top of the carrier aggregation technology discussed earlier in this series, the implementation of higher order modulation in Cat-11 LTE increased the maximum theoretical throughput to 600mbps using the same 3-carrier aggregated spectrum as dictated by LTE Cat-9.  This 33% improvement from 450mbps is directly attributed to the improved bits-per-symbol efficiency described in this article.

QAM Modulation, MIMO, CA and Spectrum

So what else is required to take LTE Advanced to gigabit speeds? Looking at 3GPP Release 12, Cat-16 LTE can reach a theoretical gigabit per second speed using the following technologies in concert:

  • 256QAM modulation
  • 4×4 MIMO with 10 spatial streams (2 high frequency carriers with 4 layers each and 1 low frequency carrier at 2 layers)
  • Multi-carrier aggregation (3x20MHz or greater)
  • Use of additional spectrum such as LTE over unlicensed frequencies

What’s needed? Chipset advances

LTE Advanced Category 6 CPE Devices

Currently, while the bulk of LTE smartphones sold today still use Cat-6 modems, modem manufacturers are working fast to prepare the electronic component ecosystem with very capable LTE modems that can take advantage of the huge potential and headroom of evolved LTE.  Qualcomm in particular has recently announced its X16 modem chipset, which has been designed to take advantage of LTE Cat-16.  The company claims that volume shipment of X16 modem devices will begin in H2 2016.  Other modem makers have not yet announced a Cat-16 capable LTE modem, but the next iteration of LTE Advanced will clearly be on their roadmaps. Meanwhile, wireless infrastructure equipment manufacturers such as Ericsson are lining up technologies to achieve Cat-16 network deployments.  Therefore, technically, commercial gigabit speed LTE Advanced Pro networks and devices can be realized in early 2017.

For Further Information

Please Contact Us for more information on our exciting range of solutions using LTE technology

5G – 5th Generation Mobile Wireless Networks

5G Mobile Wireless Technology

Preliminary details and information about the wireless technology being developed for 5th generation or 5G mobile wireless or cellular telecommunications systems

5G Mobile Networks
5G Mobile Wireless Technology

With the 4G telecommunications systems now starting to be deployed, eyes are looking towards the development of 5th generation or 5G technology and services.

Although the deployment of any wireless or cellular system takes many years, development of the 5G technology systems is being investigated. The new 5G technologies will need to be chosen, developed and perfected to enable timely and reliable deployment.

The new 5th generation, 5G technology for cellular systems will probably start to come to fruition around 2020 with deployment following on afterwards.

5G mobile systems status

The current status of the 5G technology for cellular systems is very much in the early development stages. Many companies are looking into the technologies that could be used to become part of the system. In addition, a number of universities have set up 5G research units focused on developing the technologies for 5G.

In addition, the standards bodies, particularly 3GPP, are aware of the development but are not yet actively planning the 5G systems.

Many of the technologies to be used for 5G will start to appear in the systems used for 4G and then as the new 5G cellular system starts to formulate in a more concrete manner, they will be incorporated into the new 5G cellular system.

The major issue with 5G technology is that there is such an enormously wide variation in the requirements, from superfast downloads to the small data requirements of IoT, that no single system will be able to meet all of these needs. Accordingly, a layered approach is likely to be adopted. As one commentator stated: 5G is not just a mobile technology. It is ubiquitous access to high & low data rate services.

5G cellular systems overview

5G Wireless Technologies

As the different generations of cellular telecommunications have evolved, each one has brought its own improvements. The same will be true of 5G technology.

  • First generation, 1G:   These phones were analogue and were the first mobile or cellular phones to be used. Although revolutionary in their time they offered very low levels of spectrum efficiency and security.
  • Second generation, 2G:   These were based around digital technology and offered much better spectrum efficiency, security and new features such as text messages and low data rate communications.
  • Third generation, 3G:   The aim of this technology was to provide high speed data. The original technology was enhanced to allow data up to 14 Mbps and more.
  • Fourth generation, 4G:   This was an all-IP based technology capable of providing data rates up to 1 Gbps.

Any new 5th generation, 5G cellular technology needs to provide significant gains over previous systems to provide an adequate business case for mobile operators to invest in any new system.

Facilities that might be seen with 5G technology include far better levels of connectivity and coverage. The term World Wide Wireless Web, or WWWW is being coined for this.

For 5G technology to be able to achieve this, new methods of connecting will be required as one of the main drawbacks with previous generations is lack of coverage, dropped calls and low performance at cell edges. 5G technology will need to address this.

5G specifications

Although the standards bodies have not yet defined the parameters needed to meet a 5G performance level, other organisations have set their own aims, which may eventually influence the final specifications.

Typical parameters for a 5G standard may include:

SUGGESTED 5G WIRELESS PERFORMANCE
PARAMETER             SUGGESTED PERFORMANCE
Network capacity      10 000 times capacity of current network
Peak data rate        10 Gbps
Cell edge data rate   100 Mbps
Latency               < 1 ms

These are some of the ideas being put forwards for a 5G standard, but they are not accepted by any official bodies yet.

Current research

There are several key areas that are being investigated by research organisations. These include:

  • Millimeter-Wave technologies:   Using frequencies much higher in the frequency spectrum opens up more spectrum and also provides the possibility of much wider channel bandwidths – possibly 1 – 2 GHz. However this poses new challenges for handset development, where maximum frequencies of around 2 GHz and bandwidths of 10 – 20 MHz are currently in use. For 5G, frequencies above 50GHz are being considered, and this will present some real challenges in terms of circuit design and technology, and also in the way the system is used, as these frequencies do not travel as far and are absorbed almost completely by obstacles.
  • Future PHY / MAC:   The new physical layer and MAC presents many new interesting possibilities in a number of areas:
    • Waveforms:   One key area of interest is that of the new waveforms that may be seen. OFDM has been used very successfully in 4G LTE as well as a number of other high data rate systems, but it does have some limitations in some circumstances. Formats being proposed include: GFDM, Generalised Frequency Division Multiplexing, as well as FBMC, Filter Bank Multi-Carrier, UFMC, Universal Filtered MultiCarrier. Each has its own advantages and limitations and it is possible that adaptive schemes may be employed, utilising different waveforms adaptively for the 5G mobile systems as the requirements dictate. This provides considerably more flexibility for 5G mobile communications. Read more about 5G waveforms
    • Multiple Access Schemes:   Again a variety of new access schemes are being investigated for 5G technology. Techniques including OFDMA, SCMA, NOMA, PDMA, MUSA and IDMA have all been mentioned. Read more about 5G multiple access schemes
    • Modulation:   Whilst PSK and QAM have provided excellent performance in terms of spectral efficiency, resilience and capacity, the major drawback is that of a high peak to average power ratio. Modulation schemes like APSK could provide advantages in some circumstances. Read more about 5G modulation schemes
  • Duplex methods:   There are several candidate forms of duplex being considered. Current systems use either frequency division duplex (FDD) or time division duplex (TDD). New possibilities are opening up for 5G, including flexible duplex, where the time or frequencies allocated are varied according to the load in either direction, and a new scheme called division free duplex or single channel full duplex. This scheme for 5G would enable simultaneous transmission and reception on the same channel. Read more about 5G full duplex
  • Massive MIMO:   Although MIMO is being used in many applications from LTE to Wi-Fi, the number of antennas is fairly limited. Using microwave frequencies makes many tens of antennas on a single piece of equipment a real possibility, because of the smaller antenna sizes and spacings in terms of a wavelength.
  • Dense networks:   Reducing the size of cells provides a much more effective overall use of the available spectrum. Techniques are required to ensure that small cells in the macro-network, including those deployed as femtocells, can operate satisfactorily.

Other 5G concepts

5G Mobile Networks

There are many new concepts that are being investigated and developed for the new 5th generation mobile system. Some of these include:

  • Pervasive networks:   This technology being considered for 5G cellular systems is one where a user can be concurrently connected to several wireless access technologies and seamlessly move between them.
  • Group cooperative relay:   This is a technique that is being considered to make the high data rates available over a wider area of the cell. Currently data rates fall towards the cell edge where interference levels are higher and signal levels lower.
  • Cognitive radio technology:   If cognitive radio technology was used for 5th generation, 5G cellular systems, then it would enable the user equipment / handset to look at the radio landscape in which it is located and choose the optimum radio access network, modulation scheme and other parameters to configure itself to gain the best connection and optimum performance.
  • Wireless mesh networking and dynamic ad-hoc networking:   With the variety of different access schemes it will be possible to link to others nearby to provide ad-hoc wireless networks for much speedier data flows.
  • Smart antennas:   Another major element of any 5G cellular system will be that of smart antennas. Using these it will be possible to alter the beam direction to enable more direct communications and limit interference and increase overall cell capacity.

There are many new techniques and technologies that will be used in the new 5G cellular or mobile telecommunications system. These new 5G technologies are still being developed and the overall standards have not yet been defined. However, as the required technologies develop, they will be incorporated into the new system, which will be defined by the standards bodies over the coming years.

For Further Information

For more information please Contact Us

Gigabit Wireless: CableFree MMW links deployed in the UAE

Gigabit Wireless Metro Networks: CableFree MMW links deployed in the UAE

CableFree MMW Link in UAE - night

CableFree has deployed Gigabit Wireless MMW links for Public Safety networks in the UAE with regional partner CDN (Computer Data Networks). For this project a number of 1Gbps MMW links have been implemented to upgrade and extend existing network infrastructure for Safe City applications.

CableFree Millimeter Wave (MMW) links offer up to 10Gbps Full Duplex capacity and are proven to operate well in the harsh climate and conditions in regions such as the UAE, including recent record summer temperatures. CableFree worked closely with CDN to ensure high uptime and availability throughout the network.

CableFree MMW Link in UAE

CableFree MMW is a proven and robust high speed technology for Line of Sight links.  High frequency microwave signals between 60 and 90GHz have “pencil beam” properties that avoid any interference and enable dense deployment in busy urban areas.
Applications for Millimeter Wave include 4G/LTE Mobile Backhaul, Safe Cities, Government, Corporate CCTV and ISP backbones.
Distances up to 5-15km can be deployed reliably: CableFree provide a full range of planning tools to enable customers to plan for high availability even in high rainfall regions.

CableFree MMW Link in UAE

CableFree MMW links are ideal for implementing wireless networks in many regions and can upgrade existing congested unlicensed and licensed microwave links, and extend the reach of fibre optic cabling.  The links are rapid to deploy within hours and can provide permanent, temporary or disaster-recovery scenarios, including resilient backup to fragile fibre optic cables and leased lines.

For more information on Millimeter Wave and Wireless Metro Networks please contact the CableFree team:
sales@cablefree.net