Upgrade your Geodesy FSO Free Space Optic Laser Link

How to Upgrade your Geodesy FSO Laser Link


Why upgrade your Geodesy FSO link?

Many users own FSO links, including Geodesy / LaserBit units, which are now old and sometimes problematic. Often, users require higher reliability, uptime, capacity or distance than their older FSO laser links can provide.

The Need for Reliability and High Availability

Modern IP networks demand higher capacity and uptime. FSO links are installed outdoors, often in harsh conditions, where they age faster than indoor IT equipment such as switches and routers, which sit in air-conditioned environments. Modern carrier-class wireless equipment is designed for all-outdoor use, including harsh environments, and can ensure ultra-high availability and reliability in practical use.

Alternatives to Geodesy and FSO

There are many alternatives available, including carrier-class FSO from other vendors, MMW links with 10Gbps+ capacity, microwave links and MIMO radio. These have different characteristics, capabilities and price points. Modern links can offer up to 40Gbps capacity, while at the low end MIMO radios offer lower price points than FSO for sites where budgets are tight.

If the customer requires a direct replacement FSO link, there are relatively few FSO vendors currently available with reliable shipping products.

Other FSO vendors currently offering carrier grade FSO:

Geodesy – LaserBit – FSO Laser Links – Free Space Optic laser links – Manufacturer information

Established in 1996, Geodesy (formerly LaserBit) provides license-free optical communications at the speed of light. With products capable of sending up to 1 Gbps of data full duplex, GeoDesy offers reliable, fibre-like connections without the need for expensive physical fibre.

GeoDesy – LaserBit – Manufacturer information

  • Geodesy (formerly LaserBit in Hungary) is a manufacturer of FSO bridges, with a claimed 20,000+ lasers installed
  • Geodesy claim 15 years' experience of building wireless bridges
  • Geodesy claim a risk-free 100% satisfaction guarantee on all laser products
  • Affordable solutions costing from £2,995 installed
  • Built for line of sight (LOS) operation, with ranges up to 5km
  • Ultra-secure connections: narrow beams of light are immune to RF packet sniffers
  • Reliable operation with five-nines (99.999%) availability
  • Licence-free operation using FSO technology

Upgrading from GeoDesy FSO AT Series

Geodesy state that the Auto Tracking (AT) series is an 8th-generation series that maintains precise beam alignment, even when environmental factors cause movement of the device. The AT series is also the most recommended solution in the GeoDesy range.

  • Beam Tracking System
  • Gigabit Ethernet connectivity up to 2500m
  • Full duplex connectivity
  • Secure and error-free data transmission
  • Built-in automatic failover
  • License free operation

Upgrading from GeoDesy FSO AF Series

Geodesy state that the AF series is a 5th-generation design, offering laser transmission using a unique modulation technique that ensures error-free data transfer over distances up to 1000 meters.

  • Point to point communications up to 1 Gbps
  • Wireless Ethernet range up to 1000m
  • Error-free data transfer
  • Secure data transmission
  • Built-in automatic failover
  • 99.999% availability

Upgrading from a GeoDesy FSO PX Series

Geodesy state that the 5th-generation PX series offers speeds from 100 Mbps to 1 Gbps and connectivity ranges up to 5000 meters, and is suited to installations on solid structures for budget-constrained projects.

  • Point to point communications up to 1 Gbps
  • Wireless Ethernet range up to 5000m
  • Full duplex connectivity
  • Secure data transmission
  • Built-in automatic failover
  • Licence free operation

Disclaimer

The technical specifications listed above are those advertised by the manufacturer. No warranty is made as to the accuracy of this information, which may vary widely in practical installations. Many vendors are known to exaggerate or mis-state the capabilities of the equipment they offer.

For More Information on Wireless Upgrades

If you would like more information on upgrading a GeoDesy AT/AF/PX wireless solution, please Contact Us and our experienced team of wireless experts will be delighted to assist.

What is the actual maximum throughput on Gigabit Ethernet?

Examining the usable bandwidth on a Gigabit Ethernet network

This article examines how much actual throughput can be achieved on a Gigabit Ethernet based network, and how much this increases when using Jumbo Frames. Also covered is how this relates to the throughput of a wireless link with Gigabit Ethernet interfaces.

Gigabit Ethernet Physical Layer

On a Fibre Optic Gigabit Ethernet network (1000BaseSX, 1000BaseLX), the raw line rate is 1.25Gbps. This raw data rate is chosen to accommodate 8b10b line coding. Line coding ensures "DC balance" of the data stream and removes long runs of consecutive 0s and 1s, which makes the physical transceivers easier to design and implement and maximises the range capability of the fibre optic transceivers. When the 8b10b line coding is removed from the raw data stream by the Gigabit Ethernet chipset, this leaves an uncoded payload of exactly 1.0Gbps.

On a copper based Gigabit Ethernet Network (1000BaseT),  transmission uses four lanes over all four cable pairs for simultaneous transmission in both directions through the use of echo cancellation with adaptive equalization and five-level pulse amplitude modulation (PAM-5). The symbol rate is identical to that of 100BASE-TX (125 megabaud).
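
As a quick sanity check, both physical layers arrive at the same 1.0Gbps uncoded payload rate. A minimal sketch in Python, using the figures from the two paragraphs above:

    # Gigabit Ethernet physical-layer payload rates (figures from the text above)

    # Fibre (1000BaseSX/LX): 1.25Gbaud raw line rate with 8b10b coding,
    # where every 10 coded bits carry 8 payload bits
    fibre_payload_bps = 1.25e9 * 8 / 10

    # Copper (1000BaseT): 125Mbaud on each of 4 pairs, with PAM-5 carrying
    # 2 payload bits per pair per symbol (the extra level supports coding)
    copper_payload_bps = 125e6 * 4 * 2

    print(fibre_payload_bps)   # 1000000000.0 -> 1.0Gbps
    print(copper_payload_bps)  # 1000000000.0 -> 1.0Gbps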

Gigabit Ethernet Net Data rate

The theoretical maximum bandwidth on a Gigabit Ethernet network is defined by a node being able to send 1 000 000 000 bits each second (bits per second, bps), that is one billion 1s or 0s every second. A byte of data consists of 8 bits, hence the net capacity of this Gigabit link is the capability to transfer 125 000 000 bytes per second (1000000000 / 8), also termed Bytes/sec or Bytes/s.

Frames, Preamble, Interframe Gap

In a real-world network, not all of the 125000000 bytes/second can be used to send data, as there are multiple layers of overhead. Data transferred over an Ethernet based network must be divided into "frames". The size of these frames regulates the maximum number of bytes to send together. The maximum frame size for Ethernet has been 1518 bytes for the last 25 years or more.

Gigabit Ethernet Frame Preamble

Each frame incurs some overhead, both inside the frame and, less well known, also on the "outside". Before each frame is sent, a certain combination of bits must be transmitted, called the Preamble, which signals to the receiver that a frame is coming right behind it. The preamble is 8 bytes and is sent just before each and every frame.

When the main body of the frame (1518 bytes) has been transferred, the network devices want to send another one. Since we are not using the old CSMA/CD access method (used only for half duplex), devices do not have to "sense the cable" to see if it is free, which would incur a time penalty; however, the Ethernet standard defines that for full duplex transmission there has to be a certain amount of idle bytes before the next frame is sent onto the wire.

Gigabit Ethernet Interframe Gap

This is called the Interframe Gap and is 12 bytes long. So between all frames devices have to leave at least 12 bytes “empty” to give the receiver side the time needed to prepare for the next incoming frame.

This will mean that each frame actually uses:

12 empty bytes of Interframe Gap + 1518 bytes of frame data + 8 bytes of preamble = 1538

This means that each frame actually consumes 1538 bytes of bandwidth, and if we remember that there are "time slots" for sending 125000000 bytes each second, this allows space for 81274 frames per second (125000000 / 1538).

So on default Gigabit Ethernet we can transmit over 81000 full size frames each second. Since Gigabit Ethernet always runs at full duplex, we can simultaneously receive another 81000 frames per second.
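
This arithmetic is easy to reproduce; a minimal sketch:

    # Bandwidth cost of one maximum-size Ethernet frame "on the wire"
    BYTES_PER_SECOND = 1_000_000_000 // 8   # 125,000,000 bytes/s

    INTERFRAME_GAP = 12    # idle bytes between frames
    PREAMBLE = 8           # sent before every frame
    MAX_FRAME = 1518       # maximum standard Ethernet frame

    wire_bytes = INTERFRAME_GAP + PREAMBLE + MAX_FRAME   # 1538
    frames_per_second = BYTES_PER_SECOND // wire_bytes

    print(wire_bytes, frames_per_second)   # 1538 81274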

More detail on the overhead: for each frame, we lose 12 + 8 bytes for the Interframe Gap and Preamble, which is considered the "outside" of the frame. In addition, there is more overhead inside the frame:


Ethernet header, Frame Check Sequence

The first 14 bytes of the frame are used for the Ethernet header, and the last 4 bytes contain a checksum to detect transfer errors. This uses the CRC32 checksum algorithm and is called the Frame Check Sequence (FCS).

The Maximum Transmission Unit, MTU

This means that we lose a total of 18 bytes in overhead for the Ethernet header at the beginning and the checksum at the end. The number of bytes left is called the Maximum Transmission Unit (MTU) and is 1500 bytes on default Ethernet. The MTU is the payload that can be carried inside an Ethernet frame. It is a common misunderstanding that the MTU is the frame size; it is really only the data inside the frame.


IP Header, TCP Header, Maximum Segment Size

Just behind the Ethernet header we will most likely find the IP header. For ordinary IPv4 this header is 20 bytes long. Behind the IP header we will most likely find the TCP header, which has the same length of 20 bytes. The amount of data that can be transferred in each TCP segment is called the Maximum Segment Size (MSS) and is typically 1460 bytes.

So the Ethernet header and checksum plus the IP and TCP headers together add 58 bytes of overhead. Adding the Interframe Gap and the Preamble gives 20 more. So for each 1460 bytes of data sent, there is a minimum of 78 extra bytes handling the transfer at different layers. All of these are very important, but they do add overhead at the same time.

Efficiency using Standard Ethernet Frames

At the beginning of this article we noted the potential to send 125000000 bytes/second on Gigabit Ethernet. Since each frame consumes 1538 bytes of bandwidth, that gives us 81274 frames/second (125000000 / 1538). If each frame carries a maximum of 1460 bytes of user data, this means that we can transfer 118660040 data bytes per second (81274 frames x 1460 bytes of data), i.e. around 118 MB/s.

This means that when using the default Ethernet frame size of 1518 bytes (MTU = 1500), we have an efficiency of around 94% (118660040 / 125000000), with the other 6% used by the protocols at various layers, which we can call overhead.

Efficiency using Jumbo Frames

If supported by the connected equipment, enabling so-called Jumbo Frames on all equipment in the chain gives a potential increase in the share of bandwidth actually used for our data. Let us look at that now:

A commonly used MTU value for Jumbo Frames is 9000. First we add the overhead for Ethernet (14+4 bytes), Preamble (8 bytes) and Interframe Gap (12 bytes). This makes the frame consume 9038 bytes of bandwidth, and from the total of 125000000 bytes available to send each second we get 13830 jumbo frames (125000000 / 9038). That is far fewer frames than the 81000 normal sized frames, but we can carry more data inside each frame and thereby reduce the network overhead.

(There are also other types of overhead: including CPU time in hosts, processing work done at network interface cards, switches and routers, but in this article we will only look at the bandwidth usage.)

If we remove the overhead for the Interframe Gap, Ethernet CRC, TCP, IP, Ethernet header and the Preamble, we end up with 8960 bytes of data inside each TCP segment. This means that the Maximum Segment Size (MSS) is 8960 bytes, a lot larger than the default 1460 bytes. An MSS of 8960 multiplied by 13830 (the number of frames) gives 123916800 bytes for user data.

This gives a really great efficiency of 99% (123916800 / 125000000). So by increasing the frame size we have almost five percent more bandwidth available for data, compared to about 94% for the default frame size.
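
Putting the whole overhead stack together, the short sketch below reproduces both efficiency figures:

    # Usable TCP goodput on Gigabit Ethernet for a given MTU
    LINE_BYTES_PER_SEC = 125_000_000
    IFG, PREAMBLE = 12, 8        # overhead outside the frame
    ETH_HDR, FCS = 14, 4         # Ethernet header and checksum
    IP_HDR, TCP_HDR = 20, 20     # IPv4 and TCP headers

    def goodput(mtu):
        mss = mtu - IP_HDR - TCP_HDR                  # TCP payload per frame
        wire = IFG + PREAMBLE + ETH_HDR + mtu + FCS   # bandwidth per frame
        frames = LINE_BYTES_PER_SEC // wire
        return mss * frames

    for mtu in (1500, 9000):
        g = goodput(mtu)
        print(mtu, g, f"{g / LINE_BYTES_PER_SEC:.1%}")
    # 1500 118660040 94.9%   (~118 MB/s)
    # 9000 123916800 99.1%   (~123 MB/s)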

Wireless Links with Gigabit Ethernet Interfaces

Note that for wireless links such as Microwave, Radio, Millimeter Wave or Free Space Optics, the airside interface often uses different coding and modulation than the network-side interface. This difference is often due to limitations in the amount of RF spectrum available from the regulatory body and channel planning (for example, a 40MHz, 56MHz, 60MHz, 80MHz or even 112MHz channel), the modulation used (for example, up to 256QAM or 1024QAM), which affects both transmit power and receiver sensitivity, and aggregation features such as MIMO or XPIC. Especially for longer links, the corresponding Link Budget between the two ends also matters: it includes the antenna gain at both sides, plus any losses caused by transmission waveguides and connectors, plus atmospheric fade effects. The airside interface may therefore impose a lower capacity on the "end to end" wireless link even if the network interfaces at each end are connected at the 1Gbps Gigabit Ethernet rate.
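
To make the link budget idea concrete, here is a minimal sketch assuming simple free-space path loss; the frequency, antenna gains and losses are illustrative values only, not taken from any particular product:

    import math

    def fspl_db(freq_ghz, dist_km):
        # Free-space path loss: 92.45 + 20log10(f/GHz) + 20log10(d/km)
        return 92.45 + 20 * math.log10(freq_ghz) + 20 * math.log10(dist_km)

    def received_level_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi,
                           freq_ghz, dist_km, misc_loss_db=0.0):
        # RSL = Tx power + antenna gains - path loss - waveguide/connector losses
        return (tx_dbm + tx_gain_dbi + rx_gain_dbi
                - fspl_db(freq_ghz, dist_km) - misc_loss_db)

    # Hypothetical 23GHz link over 5km with ~40dBi antennas at each end
    rsl = received_level_dbm(tx_dbm=20, tx_gain_dbi=40, rx_gain_dbi=40,
                             freq_ghz=23, dist_km=5, misc_loss_db=3)
    print(f"FSPL {fspl_db(23, 5):.1f} dB, RSL {rsl:.1f} dBm")
    # FSPL 133.7 dB, RSL -36.7 dBm: the receiver sensitivity at the chosen
    # modulation (e.g. 256QAM) must sit below this level, with margin
    # left over for rain and atmospheric fade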

Transparent Wireless Links

Note that only some wireless technologies, such as Free Space Optics (FSO), are capable of fully transparent transmission using the exact same modulation used on fibre optic networks, so the full 1.25Gbps line rate, along with all packet structure, is maintained exactly. The advantages of transparent transmission are that throughput is easily predicted and that latency is the lowest possible, as transmission is generally one bit at a time.
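
One way to see the latency advantage: a store-and-forward device must buffer a whole frame before resending it, while a bit-transparent link adds only propagation delay plus a few bit times. A rough illustration:

    # Store-and-forward vs near-bit-transparent latency for one full-size frame
    LINE_RATE_BPS = 1.0e9          # uncoded Gigabit Ethernet payload rate

    frame_bits = 1518 * 8
    print(f"{frame_bits / LINE_RATE_BPS * 1e6:.2f} us per frame")  # 12.14 us
    print(f"{1 / LINE_RATE_BPS * 1e9:.2f} ns per bit")             # 1.00 ns
    # A transparent FSO hop forwards bits as they arrive, avoiding the
    # roughly 12us of per-hop frame buffering a store-and-forward device adds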

Conclusion

Default Gigabit Ethernet has a potential throughput of around 81000 frames per second, and therefore a high throughput for actual data (about 118 MB/s), giving an efficiency of 94%, or 940Mbps. For networking equipment where Jumbo Frames are supported, increasing the MTU to 9000 can deliver even more data on the same link, up to 123 MB/s, thanks to the decreased overhead from using a lower number of frames. Jumbo Frames can therefore potentially offer 99% of the theoretical Gigabit Ethernet bandwidth to carry data, which means 990Mbps capacity.

For Further Information

For further information on Applications and Solutions of Gigabit Wireless products please Contact Us


IEEE 802.11ay wireless technology: Next-gen 60GHz WiFi

A new standard for 60GHz Wi-Fi goes beyond 802.11ad wireless speed & range

A new standard for high speed multi-gigabit WiFi is emerging. Though products based on the IEEE 802.11ad (WiGig) standard have only recently begun rolling out, an effort to deliver an enhancement called IEEE 802.11ay, which promises faster and longer range Wi-Fi networks, is gaining steam.

The up-coming 802.11ay is an enhancement of 802.11ad in the unlicensed 60 GHz millimeter wave band of spectrum, and should be a natural upgrade, offering significant speed and range improvements.


Technical Summary

802.11ay is an amendment within the IEEE 802.11 set of WLAN standards. It will operate at 60 GHz, with a transmission rate of 20–40 Gbit/s and an extended transmission distance of 300–500 meters. It is also likely to include mechanisms for channel bonding and MU-MIMO. It is expected to be released in 2017. 802.11ay will not be a new type of WLAN in the IEEE 802.11 set, but simply an improvement on 802.11ad.

Where 802.11ad uses a maximum of 2.16 GHz of bandwidth, 802.11ay bonds four of those channels together for a maximum bandwidth of 8.64 GHz. MIMO is also added, with a maximum of 4 streams. The link rate per stream is 44 Gbit/s; with four streams this goes up to 176 Gbit/s. Higher order modulation is also added, probably up to 256-QAM. 802.11ay applications could include replacing Ethernet and other cables within offices or homes, and providing backhaul connectivity outdoors for service providers.
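
The headline figures follow directly from the channel bonding and per-stream link rate quoted above; a quick check:

    # 802.11ay headline numbers from the figures quoted above
    channel_ghz = 2.16                 # one 802.11ad/ay channel
    print(4 * channel_ghz)             # 8.64 GHz maximum bonded bandwidth

    per_stream_gbps = 44               # quoted link rate per spatial stream
    print(4 * per_stream_gbps)         # 176 Gbit/s with four MIMO streams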

What is the difference between ad and ay?

The 802.11ad standard was published in 2012 and the technology gives devices access to the unlicensed and relatively unclogged 60 GHz millimeter wave spectrum band for multimedia streaming, VR headset connectivity, computer-to-monitor wireless links and other apps that don’t require more than say 30 or 40 feet of unimpeded space. It has been adopted by chipmakers as well as vendors of routers, access points and other devices. The Wi-Fi Alliance runs a WiGig certification program for vendors, and the early 11ad gear on the market most commonly supports data transfer rates of 4.6Gbps – way faster than 802.11n and 11ac, but more limited in range and unable to penetrate solid objects.

The backwards compatible 802.11ay amendment to 802.11ad is designed to boost speeds several-fold. That initially would amount to a transmission rate of 20 to 30Gbps and a range of 33 to 100 feet with 11ay-to-11ay device setups, but once channel bonding, MIMO and other capabilities are exploited, you could be getting closer to 200Gbps and reaching distances approaching 1,000 feet, according to industry players.

11ay, as the specs are being developed, “is really allowing for a wider range of products than you’d get with ad, which has one set of data rates that everyone supports… ay has a lot more parameters to play with in channel bonding, MIMO and features at the MAC level to allow a far greater range of performance and products” according to one chipset vendor.

Other up-coming Fast WiFi standards: 802.11ax


Users should not confuse 802.11ay with 802.11ax, which will work in the 2.4 and 5 GHz bands. The lower frequency bands for 11ax will penetrate walls; 11ay will not.

What will 802.11ay be used for?

It remains to be seen how soon the high speeds of 11ay will really be needed for internal uses, as 802.11ac — including Wave 2 products — are already pretty robust. But experts say that if 11ad doesn’t quite do it for you given its distance limitations, “11ay will finally be the technology that would let you snip that Ethernet cord – you no longer have to run Ethernet cables to everyone’s desk… there’s enough wireless bandwidth in ay.”

Many are enthusiastic about 802.11ay’s potential as a fixed point-to-point or point-to-multipoint outdoor backhaul technology, especially in light of scaled back fiber rollout plans by providers like Google and Verizon in the face of the extraordinary costs associated with such implementations. “I’m more bullish on using ad & ay for backhaul (instead of mesh) in the case of campus & city networks — provided that it has a useful range,” according to one industry expert.

But it’s possible that 802.11ay could find a role in internal mesh and backbone networks as well as for other uses such as providing connectivity to VR headsets, supporting server backups and handling cloud applications that require low latency. “I believe that eventually, there will be enterprise applications for this – but it’s probably a few years into the future, given that we will have 802.11ax fairly soon & because there’s still a lot of 5 GHz band available for that (and ac).”

When will 802.11ay become reality?

The 802.11ay task group had its initial meeting in 2015, and the spec only hit the Draft 0.1 stage in January 2017. It is expected to reach Draft 1.0 by July 2017, according to the IEEE task group. If that mark is hit, expect pre-standard 11ay products to start rolling out within a year of that time.

Who is behind 802.11ay?

The IEEE task force leading the 11ay work includes representatives from major equipment and chipsets vendors.  The group states its goal as this: “Task Group ay is expected to develop an amendment that defines standardized modifications to both the IEEE 802.11 physical layers (PHY) and the IEEE 802.11 medium access control layer (MAC) that enables at least one mode of operation capable of supporting a maximum throughput of at least 20 gigabits per second (measured at the MAC data service access point), while maintaining or improving the power efficiency per station. This amendment also defines operations for license-exempt bands above 45 GHz while ensuring backward compatibility and coexistence with legacy directional multi-gigabit stations (defined by IEEE 802.11ad-2012 amendment) operating in the same band.”

For Further Information

Please Contact Us

IEEE 802.11ax: The new standard for Wi-Fi

The new standard 802.11ax for Wi-Fi goes beyond 802.11ac wireless

A new standard for high speed multi-gigabit WiFi is emerging. Current WiFi products use chips based on the IEEE 802.11a, 802.11b, 802.11g, 802.11n and 802.11ac standards; an effort is now under way to deliver an enhancement called IEEE 802.11ax that promises faster and longer range Wi-Fi networks.

The up-coming 802.11ax is an enhancement of 802.11ac in the unlicensed 2.4 and 5GHz bands of spectrum, and should be a natural upgrade, offering significant speed and range improvements.


Technical Summary

IEEE 802.11ax is a type of WLAN in the IEEE 802.11 family. It is designed to improve overall spectral efficiency, especially in dense deployment scenarios. It is still at an early stage of development, but is predicted to have a top speed of around 10 Gb/s. It works in 2.4 and/or 5 GHz, and in addition to MIMO and MU-MIMO it introduces the OFDMA technique to improve spectral efficiency, as well as higher order 1024-QAM modulation support for better throughput. Though the nominal data rate is just 37% higher compared with 802.11ac, the new amendment will allow a 4X increase in user throughput thanks to more efficient spectrum usage. It is due to be publicly released in 2019.

Modulation and coding schemes for a single spatial stream (data rates in Mb/s; GI = guard interval):

MCS    Modulation   Coding   20 MHz           40 MHz           80 MHz           160 MHz
index               rate     1600ns   800ns   1600ns   800ns   1600ns   800ns   1600ns   800ns
                             GI       GI      GI       GI      GI       GI      GI       GI
0      BPSK         1/2      4        4       8        9       17       18      34       36
1      QPSK         1/2      16       17      33       34      68       72      136      144
2      QPSK         3/4      24       26      49       52      102      108     204      216
3      16-QAM       1/2      33       34      65       69      136      144     272      288
4      16-QAM       3/4      49       52      98       103     204      216     408      432
5      64-QAM       2/3      65       69      130      138     272      288     544      576
6      64-QAM       3/4      73       77      146      155     306      324     613      649
7      64-QAM       5/6      81       86      163      172     340      360     681      721
8      256-QAM      3/4      98       103     195      207     408      432     817      865
9      256-QAM      5/6      108      115     217      229     453      480     907      961
10     1024-QAM     3/4      122      129     244      258     510      540     1021     1081
11     1024-QAM     5/6      135      143     271      287     567      600     1134     1201
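
Most rows of the table can be reproduced from the 802.11ax OFDM numerology: data subcarriers x bits per subcarrier x coding rate, divided by the 12.8us symbol time plus the guard interval. A sketch, assuming the usual 802.11ax data-subcarrier counts (234/468/980/1960 for 20/40/80/160 MHz); the MCS0 row appears to assume dual-carrier modulation, which halves the rate:

    # Per-stream 802.11ax data rate from OFDM parameters
    DATA_SUBCARRIERS = {20: 234, 40: 468, 80: 980, 160: 1960}
    SYMBOL_US = 12.8                 # OFDM symbol duration, excluding GI

    def rate_mbps(bw_mhz, bits_per_subcarrier, coding_rate, gi_us):
        bits_per_symbol = DATA_SUBCARRIERS[bw_mhz] * bits_per_subcarrier * coding_rate
        return bits_per_symbol / (SYMBOL_US + gi_us)   # Mbit/s

    # MCS11: 1024-QAM (10 bits per subcarrier) at coding rate 5/6
    for bw in (20, 40, 80, 160):
        print(bw, round(rate_mbps(bw, 10, 5/6, 1.6)),
                  round(rate_mbps(bw, 10, 5/6, 0.8)))
    # 20 135 143 / 40 271 287 / 80 567 600 / 160 1134 1201 (matches the table)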

Technical improvements

The 802.11ax amendment will bring several key improvements over 802.11ac. 802.11ax addresses frequency bands between 1 GHz and 6 GHz. Therefore, unlike 802.11ac, 802.11ax will also operate in the unlicensed 2.4 GHz band. A number of features have been approved to meet the goal of supporting dense 802.11 deployments.

Other up-coming Fast WiFi standards: 802.11ay


Users should not confuse 802.11ax with 802.11ay, which will work in the 60GHz band. The lower frequency bands (1-6GHz) used by 11ax will penetrate walls; 11ay will not.

What will 802.11ax be used for?

802.11ax is an upgrade for existing 802.11a, 802.11b, 802.11g, 802.11n and 802.11ac networks. Many are enthusiastic about 802.11ax’s potential as a fixed point-to-point or point-to-multipoint outdoor backhaul technology, especially in light of scaled back fiber rollout plans by providers like Google and Verizon in the face of the extraordinary costs associated with such implementations. 11ax will therefore find applications outdoors as well as indoors.

Who is behind 802.11ax?

The IEEE task force leading the 11ax work includes representatives from major equipment and chipsets vendors.
In 2012 and 2013, IEEE 802.11 received various submissions in its Standing Committee (SC) Wireless Next Generation (WNG) looking at issues with IEEE 802.11ac and potential solutions for future WLANs. Immediately after the publication of IEEE 802.11ac in March 2013, the IEEE 802.11 Working Group (WG) established the Study Group (SG) High Efficiency WLAN (HEW).

For Further Information

Please Contact Us

4G to 5G Roadmap

What is 5G?

5G Mobile Networks

5G radio access technology will be a key component of the Networked Society. It will address high traffic growth and increasing demand for high-bandwidth connectivity. It will also support massive numbers of connected devices and meet the real-time, high-reliability communication needs of mission-critical applications. 5G will provide wireless connectivity for a wide range of new applications and use cases, including wearables, smart homes, traffic safety/control, critical infrastructure, industry processes and very-high-speed media delivery. As a result, it will also accelerate the development of the Internet of Things. ITU Members including key industry players, industry forums, national and regional standards development organizations, regulators, network operators, equipment manufacturers as well as academia and research institutions together with Member States, gathered as the working group responsible for IMT systems, and completed a cycle of studies on the key performance requirements of 5G technologies for IMT-2020.

The Aim of 5G

The overall aim of 5G is to provide ubiquitous connectivity for any kind of device and any kind of application that may benefit from being connected. 5G networks will not be based on one specific radio-access technology. Rather, 5G is a portfolio of access and connectivity solutions addressing the demands and requirements of mobile communication beyond 2020.

CableFree 5G Technology

The specification of 5G will include the development of a new flexible air interface, NX, directed at extreme mobile broadband deployments. NX will also target high-bandwidth and high-traffic-usage scenarios, as well as new scenarios involving mission-critical and real-time communications with extreme requirements in terms of latency and reliability.

In parallel, the development of Narrow-Band IoT (NB-IoT) in 3GPP is expected to support massive machine connectivity in wide area applications. NB-IoT will most likely be deployed in bands below 2GHz and will provide high capacity and deep coverage for enormous numbers of connected devices.

Ensuring interoperability with past generations of mobile communications has been a key principle of the ICT industry since the development of GSM and later wireless technologies within the 3GPP family of standards.

4G to 5G Evolution

In a similar manner, LTE will evolve in a way that recognizes its role in providing excellent coverage for mobile users, and 5G networks will incorporate LTE access (based on Orthogonal Frequency Division Multiplexing (OFDM)) along with new air interfaces in a transparent manner toward both the service layer and users. Around 2020, much of the available wireless coverage will continue to be provided by LTE, and it is important that operators with deployed 4G networks have the opportunity to transition some – or all – of their spectrum to newer wireless access technologies.

For operators with limited spectrum resources, the possibility of introducing 5G capabilities in an interoperable way – thereby allowing legacy devices to continue to be served on a compatible carrier – is highly beneficial and, in some cases, even vital. At the same time, the evolution of LTE to a point where it is a full member of the 5G family of air interfaces is essential, especially since initial deployment of new air interfaces may not operate in the same bands. The 5G network will enable dual-connectivity between LTE operating within bands below 6GHz and the NX air interface in bands within the range 6GHz to 100GHz. NX should also allow for user-plane aggregation, i.e. joint delivery of data via LTE and NX component carriers. This article explains the key requirements and capabilities of 5G, along with its technology components and spectrum needs.

In order to enable connectivity for a very wide range of applications with new characteristics and requirements, the capabilities of 5G wireless access must extend far beyond those of previous generations of mobile communication. These capabilities will include massive system capacity, very high data rates everywhere, very low latency, ultra-high reliability and availability, very low device cost and energy consumption, and energy-efficient networks.

MASSIVE SYSTEM CAPACITY

Traffic demands for mobile-communication systems are predicted to increase dramatically.  To support this traffic in an affordable way, 5G networks must deliver data with much lower cost per bit compared with the networks of today. Furthermore, the increase in data consumption will result in an increased energy footprint from networks. 5G must therefore consume significantly lower energy per delivered bit than current cellular networks. The exponential increase in connected devices, such as the deployment of billions of wirelessly connected sensors, actuators and similar devices for massive machine connectivity, will place demands on the network to support new paradigms in device and connectivity management that do not compromise security. Each device will generate or consume very small amounts of data, to the extent that they will individually, or even jointly, have limited impact on the overall traffic volume. However, the sheer number of connected devices seriously challenges the ability of the network to provision signaling and manage connections.

VERY HIGH DATA RATES EVERYWHERE

LTE Roadmap 4G to 5G

Every generation of mobile communication has been associated with higher data rates compared with the previous generation. In the past, much of the focus has been on the peak data rate that can be supported by a wireless-access technology under ideal conditions. However, a more important capability is the data rate that can actually be provided under real-life conditions in different scenarios.

  • 5G should support data rates exceeding 10Gbps in specific scenarios such as indoor and dense outdoor environments.
  • Data rates of several 100Mbps should generally be achievable in urban and suburban environments.
  • Data rates of at least 10Mbps should be accessible almost everywhere, including sparsely populated rural areas in both developed and developing countries.

VERY LOW LATENCY

Very low latency will be driven by the need to support new applications. Some envisioned 5G use cases, such as traffic safety and control of critical infrastructure and industry processes, may require much lower latency compared with what is possible with the mobile-communication systems of today. To support such latency-critical applications, 5G should allow for an application end-to-end latency of 1ms or less, although application-level framing requirements and codec limitations for media may lead to higher latencies in practice. Many services will distribute computational capacity and storage close to the air interface. This will create new capabilities for real-time communication and will allow ultra-high service reliability in a variety of scenarios, ranging from entertainment to industrial process control.

ULTRA-HIGH RELIABILITY AND AVAILABILITY

5G Wireless Technologies

In addition to very low latency, 5G should also enable connectivity with ultra-high reliability and ultra-high availability. For critical services, such as control of critical infrastructure and traffic safety, connectivity with certain characteristics, such as a specific maximum latency, should not merely be ‘typically available.’ Rather, loss of connectivity and deviation from quality of service requirements must be extremely rare. For example, some industrial applications might need to guarantee successful packet delivery within 1 ms with a probability higher than 99.9999 percent.

VERY LOW DEVICE COST AND ENERGY CONSUMPTION

Low-cost, low-energy mobile devices have been a key market requirement since the early days of mobile communication. However, to enable the vision of billions of wirelessly connected sensors, actuators and similar devices, a further step has to be taken in terms of device cost and energy consumption. It should be possible for 5G devices to be available at very low cost and with a battery life of several years without recharging.

ENERGY-EFFICIENT NETWORKS

While device energy consumption has always been prioritized, energy efficiency on the network side has recently emerged as an additional KPI, for three main reasons:

  • Energy efficiency is an important component in reducing operational cost, as well as a driver for better dimensioned nodes, leading to lower total cost of ownership.
  • Energy efficiency enables off-grid network deployments that rely on medium-sized solar panels as power supplies, thereby enabling wireless connectivity to reach even the most remote areas.
  • Energy efficiency is essential to realizing operators’ ambition of providing wireless access in a sustainable and more resource-efficient way.

The importance of these factors will increase further in the 5G era, and energy efficiency will therefore be an important requirement in the design of 5G wireless access.

For More Information

Please Contact Us for more information on 5G and IMT-2020

Gigabit LTE and 5G:

The Roadmap to Gigabit LTE and 5G:

Two questions: “Do I need Gigabit LTE?” and “Will mobile networks support these new speeds?” The short answer to both is a resounding “Yes.”

Do I need Gigabit LTE?

There’s a common misconception that we need to address right away. Some people think that extreme speeds are only realized in ideal lab conditions, so they’re not relevant in the real world. Their argument is that current LTE devices and networks already support peak speeds of 300 Mbps or 600 Mbps, but actual speeds are lower. It follows, then, that there’s already “enough headroom” in the networks and thus the faster speeds are irrelevant.

Nothing could be further from the truth.

Here’s the thing. Gigabit LTE — and every other LTE innovation we’ve helped commercialize in the past few years — directly contributes to improving the real-world speeds that you’ll experience.

Gigabit LTE provides more consistent Internet speeds as compared to previous generations of LTE. In an extensive network simulation conducted by Qualcomm Technologies, we placed LTE devices of varying capabilities from Cat 4 to Cat 16 (the Gigabit LTE category) in the same network. The average throughput achieved by a GB LTE device was comfortably above 100 Mbps. Depending on traffic type, the average throughput could be much higher. That’s compared to around 65 Mbps for Cat 6 devices, the current baseline for many LTE devices and networks.

And these simulation results bear out in the real world. At the Sydney event, one analyst who tried the first Gigabit LTE device reached 360 Mbps in a speed test. A real device on a live network in the middle of a very crowded tourist area — that’s the power of Gigabit LTE.

The constituent technologies that make Gigabit LTE possible — carrier aggregation, 4×4 MIMO, and 256-QAM — are engineered to allow the network to allocate many more network resources to your device simultaneously than you would get with an older LTE device. Or, alternatively, allocate fewer resources to you without diminishing the speed.

There’s an additional benefit as well. A Gigabit LTE device has four antennas in order to support 4×4 MIMO, giving it a hidden edge. In good signal conditions, you can get four streams of data that increase your speed, as compared to two streams with conventional LTE. In weak signal conditions, the additional antennas act like additional “ears” that are designed to help your Gigabit LTE device lock on to the signal from the tower, which can yield up to 70 percent faster speeds. Think about how slow LTE speeds can get in weak signal conditions. Wouldn’t this speed bump help quite a bit? A real-world study of this on T-Mobile’s network – using the Samsung Galaxy S7, which is capable of 4×4 MIMO – confirms this.

Additionally, with Gigabit LTE devices, you should be able to finish your downloads much faster, with fewer resources from the network. This can improve the capacity of the network and allow it to serve other users sooner. Not only do you enjoy faster speeds, but other people connected to the same cell tower get faster speeds as well, even if they don’t have a Gigabit LTE device.

So yes, you do need Gigabit LTE. It can improve your average, real-world speeds, give you better speeds in weak signal conditions, and allow other people to enjoy faster speeds too.

Will mobile networks support these new speeds?

Here, again, the answer is “Yes.”

Fifteen mobile operators in 11 countries intend to launch or trial Gigabit LTE in 2017. They include: T-Mobile, Sprint, and AT&T in the U.S.; EE, T-Mobile Germany, Vodafone, and Telefonica in Europe; and NTT DoCoMo, SoftBank, KDDI, and SingTel in Asia.

And, of course, Telstra’s Gigabit LTE network is already live. We expect many more to come online over the next few years. It’s important to remember that many people are hanging on to their devices for longer. So even if on day one your network doesn’t support GB LTE, there’s a good chance it may over the lifetime of your phone.

2017 will be the year of Gigabit LTE. And with the right device, power users can enjoy next-gen experiences sooner than we expected.

Comparison of QAM Modulation Types

Comparing QAM Modulation

Comparison between 8, 16, 32, 64, 128, and 256 QAM types of Quadrature Amplitude Modulation

Gigabit Wireless Networks commonly use QAM modulation to achieve high data rate transmission.  So what is QAM?

Introducing Quadrature Amplitude Modulation

QAM, Quadrature Amplitude Modulation, is widely used in many digital radio communications and data communications applications. A variety of forms of QAM are available; some of the more common forms include 16, 32, 64, 128 and 256 QAM. Here the figures refer to the number of points on the constellation, i.e. the number of distinct states that can exist.

The various flavours of QAM may be used when data rates beyond those offered by 8-PSK are required by a radio communications system. This is because QAM achieves a greater distance between adjacent points in the I-Q plane by distributing the points more evenly; the points on the constellation are more distinct and data errors are reduced. While it is possible to transmit more bits per symbol, if the energy of the constellation is to remain the same, the points on the constellation must be closer together and the transmission becomes more susceptible to noise. This results in a higher bit error rate than for the lower order QAM variants. In this way there is a balance between obtaining higher data rates and maintaining an acceptable bit error rate for any radio communications system.

Applications

QAM is used in many radio communications and data delivery applications. However, some specific variants of QAM are used in particular applications and standards.

For domestic broadcast applications for example, 64 and 256 QAM are often used in digital cable television and cable modem applications. In the UK, 16 and 64 QAM are currently used for digital terrestrial television using DVB – Digital Video Broadcasting. In the US, 64 and 256 QAM are the mandated modulation schemes for digital cable as standardised by the SCTE in the standard ANSI/SCTE 07 2000.

In addition to this, variants of QAM are also used for many wireless and cellular technology applications.

Constellation diagrams

The constellation diagrams show the different positions for the states within different forms of QAM, quadrature amplitude modulation. As the order of the modulation increases, so does the number of points on the QAM constellation diagram.

The diagrams below show constellation diagrams for a variety of formats of modulation:

(Constellation diagrams shown: BPSK, 16-QAM, 32-QAM and 64-QAM.)

Bits per symbol

The advantage of using QAM is that it is a higher order form of modulation and as a result it is able to carry more bits of information per symbol. By selecting a higher order format, the data rate of a link can be increased.

The table below gives a summary of the bit rates of different forms of QAM and PSK.

MODULATION   BITS PER SYMBOL   SYMBOL RATE
BPSK         1                 1 x bit rate
QPSK         2                 1/2 bit rate
8PSK         3                 1/3 bit rate
16QAM        4                 1/4 bit rate
32QAM        5                 1/5 bit rate
64QAM        6                 1/6 bit rate
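
The bits-per-symbol column is simply log2 of the number of constellation states; a minimal sketch reproducing the table:

    import math

    # Bits per symbol for PSK/QAM: log2 of the number of constellation states
    for name, states in [("BPSK", 2), ("QPSK", 4), ("8PSK", 8),
                         ("16QAM", 16), ("32QAM", 32), ("64QAM", 64)]:
        bits = int(math.log2(states))
        print(f"{name:6} {bits} bits/symbol, symbol rate = 1/{bits} x bit rate")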

QAM noise margin

While higher order modulation rates are able to offer much faster data rates and higher levels of spectral efficiency for the radio communications system, this comes at a price. The higher order modulation schemes are considerably less resilient to noise and interference.

As a result of this, many radio communications systems now use dynamic adaptive modulation techniques. They sense the channel conditions and adapt the modulation scheme to obtain the highest data rate for the given conditions. As signal to noise ratios decrease, errors will increase, along with re-sends of the data, thereby slowing throughput. By reverting to a lower order modulation scheme, the link can be made more reliable, with fewer data errors and re-sends.
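
A minimal sketch of the adaptive modulation idea; the SNR thresholds here are hypothetical, as real radios use vendor-specific switching points tied to target error rates:

    # Hypothetical adaptive modulation: step down the order as SNR falls.
    # Thresholds are illustrative only, not taken from any specific radio.
    MODULATION_TABLE = [       # (minimum SNR in dB, scheme, bits per symbol)
        (28, "256QAM", 8),
        (22, "64QAM", 6),
        (16, "16QAM", 4),
        (10, "QPSK", 2),
        (0,  "BPSK", 1),
    ]

    def select_modulation(snr_db):
        for min_snr, scheme, bits in MODULATION_TABLE:
            if snr_db >= min_snr:
                return scheme, bits
        return "no link", 0

    print(select_modulation(30))   # ('256QAM', 8)
    print(select_modulation(18))   # ('16QAM', 4): fewer errors, lower rate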

For Further Information

Please Contact Us

LTE Advanced and Gigabit Speeds for 4G

LTE: the roadmap ahead to Gigabit Speeds

Gigabit 4G LTE Technology

LTE-enabled smartphones made up only 1 in 4 new devices shipped back in 2013. By 2015, that share had soared to over half of all new smartphones shipped globally. In fact, LTE is arguably the most successful generational wireless technology, having been commercialized only in late 2009 and evolved to capture the majority of the market for new handsets today. Previous 3G technologies took over a decade to achieve what LTE has done in just 6 years.

The surge in LTE adoption, not coincidentally, parallels the growth of the smartphone, creating a symbiotic relationship that propelled massive adoption of wireless broadband and smartphone use. The order-of-magnitude improvement in network latency provided by LTE wireless connectivity, coupled with the rapid growth in digital content and the readily available computing power within everyone's reach, created a rich tapestry of mobile opportunities.

CableFree LTE CPE Indoor Desktop Router with WiFi and VOIP

As global LTE network deployments enter a new phase of network enhancements, the industry is now turning to enhanced wireless technologies to evolve the speed and capacity needed to keep up with consumer demand for ever faster downloads, video streams and mobile applications. The first stage of LTE network improvement revolved around the use of carrier aggregation, a method of combining disparate spectrum holdings to create a larger data pipe. This development tracked with the evolution of LTE from single carrier Cat-3 devices to dual carrier Cat-4 and Cat-6 devices. Further development of carrier aggregation extended the concept to 3-carrier aggregation, specified by the Cat-9 LTE standard, which brought the maximum throughput to 450Mbps in the downlink.

However, in order for the industry to evolve further and keep up with the insatiable demand for mobile broadband, LTE Advanced will require further improvements. The next step in the evolution of LTE relies on LTE Advanced: a new set of technologies destined to push LTE speeds to and past the gigabit-per-second barrier. To this end, IHS will be delivering a series of LTE Advanced Insights to further explore the key enabling technologies for reaching gigabit-per-second speeds. This article is the first of the series.

Critical Areas of Exploration

Operators typically ask critical questions including but not limited to:

  • What is higher order modulation and how does radio signaling enhancement lead to faster wireless broadband?
  • How can advanced antenna designs be incorporated into existing smartphone form factors and what are the physical challenges involved in doing so?
  • What are the opportunities to leverage additional spectrum use especially in the unlicensed portions of 3.5GHz and 5GHz?  What are the advantages as well as the challenges of doing so?
  • How can the industry take learnings from the 3G to 4G transition and build on the foundations of LTE moving into 5G?

QAM: Higher Order Modulation to Break Through Gigabit per Second Barrier

Higher order modulation schemes have been used throughout 3G technologies and are now enabling the increased bandwidth coming to 4G LTE Advanced. As WCDMA evolved into HSPA and HSPA+ in the 3G era, higher order modulations of 16QAM and 64QAM replaced the older QPSK modulation schemes to improve throughput data rates, enabling mobile broadband services to take off. Fundamentally, sophisticated signal processing such as 64QAM is used in wireless networks to improve the spectral efficiency of communications by packing as many bits as possible into each transmission. 16QAM carries 4 bits per symbol, while the higher order 64QAM yields 6 bits per symbol, a 50% improvement. Extending this concept, LTE Advanced will use 256QAM modulation from Category 11 onwards, which is expected to provide a 33% improvement in spectral efficiency over 64QAM on the same LTE stream by increasing the bits per symbol from 6 to 8.

Modulation Level (QAM)   Bits per Symbol   Incremental Efficiency Gain
16QAM                    4                 -
64QAM                    6                 50%
256QAM                   8                 33%

Table 1 – Modulation Levels and Corresponding Efficiency Gains
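
The incremental gains in Table 1 follow directly from the bits-per-symbol ratios; a quick check:

    # Incremental spectral-efficiency gain between QAM orders (Table 1)
    bits = {"16QAM": 4, "64QAM": 6, "256QAM": 8}

    print((bits["64QAM"] - bits["16QAM"]) / bits["16QAM"])    # 0.5   -> 50%
    print((bits["256QAM"] - bits["64QAM"]) / bits["64QAM"])   # 0.333 -> 33%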

While higher order modulations equate to greater spectral efficiencies, achieving higher order signaling within the framework of wireless networks remains a significant challenge. Real world applications of higher order modulations are difficult to implement network wide, as the more sophisticated signaling schemes are inherently less resilient to noise and interference. In normal deployments of macro cellular coverage, network operators employ adaptive modulation techniques to detect signal channel conditions and adjust modulation schemes accordingly. For example, if the wireless user is closer to the center of the macro cell area, the network will negotiate the signaling scheme to best take advantage of the signal quality and communicate using the most efficient modulation scheme available. However, if conditions are deemed inadequate, for example at the cell coverage edge, the network may resort to lower orders of modulation signaling in order to achieve higher reliability of connections.

LTE Category   Carrier Aggregation   MIMO   Spatial Streams   Modulation   Max. Throughput
Cat-4          2x10MHz               2x2    4                 64QAM        150Mbps
Cat-6          2x20MHz               2x2    4                 64QAM        300Mbps
Cat-9          3x20MHz               2x2    6                 64QAM        450Mbps
Cat-11         3x20MHz               2x2    6                 256QAM       600Mbps
Cat-16         3x20MHz               4x4    10                256QAM       1000Mbps

Table 2 – LTE Categories and Corresponding Throughput Gains
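
The throughput column in Table 2 can be approximated with a common rule of thumb of roughly 75Mbps per 20MHz spatial layer at 64QAM, scaled by the bits per symbol of the modulation in use. A sketch of the approximation (not the exact 3GPP transport-block arithmetic):

    # Rough LTE peak rate: total 20MHz spatial layers x ~75Mbps at 64QAM,
    # scaled by modulation (64QAM, 6 bits per symbol, is the baseline)
    PER_LAYER_64QAM_MBPS = 75

    def peak_mbps(total_streams, bits_per_symbol):
        return total_streams * PER_LAYER_64QAM_MBPS * bits_per_symbol / 6

    print(peak_mbps(4, 6))    # Cat-6:  300.0
    print(peak_mbps(6, 6))    # Cat-9:  450.0
    print(peak_mbps(6, 8))    # Cat-11: 600.0
    print(peak_mbps(10, 8))   # Cat-16: 1000.0
    # Cat-4 uses 2x10MHz carriers, so its four streams are half-width layers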

LTE Advanced Base Station

The limitation of higher order modulation described in the previous paragraph was a hallmark of 3G networks. However, as LTE deployments begin to rely on a more 5G-like heterogeneous network architecture leveraging augmented network equipment such as small cells, the use of higher order modulation becomes more practical as the distance between the LTE antenna and the mobile device is reduced. Yet again, with a challenging transmission medium such as over-the-air wireless, obstacles still exist. Even with the most optimized heterogeneous networks, issues such as site-to-site signal interference can negate much of the benefit of small cells. Therefore network operators, with help from their equipment vendors, are working on network optimization software to accommodate these interferences. In the end, any network, even one designed for Cat-11 LTE and above, will not be able to cover all of its mobile subscribers with the highest efficiency signaling. In actual deployment, only a portion of the devices within a coverage area will be at 256QAM, while the majority of other devices fall back to a lower modulation scheme such as 64QAM or 16QAM. Additional challenges exist in carrier aggregated LTE connections, whereby 2 or 3 carriers are aggregated to form a wider virtual channel: depending on the frequencies used and the placement of the cell towers associated with those specific bands, not all of the aggregated radio signals can be adapted to higher order modulation. Therefore, reaching the theoretical maximum throughput data rates using higher order modulation will be particularly difficult in actual network deployments.

With these real-world deployment limitations on the handset side in mind, higher order modulation schemes have still been shown to be a net benefit for LTE networks. In trial tests, it has been shown that even a small fraction of users in a coverage cell using 256QAM creates improvements in network capacity. As devices with 256QAM enter and exit the network faster and more efficiently, capacity is freed up to serve non-256QAM devices on the network. Overall, enabling higher order modulation on an LTE network presents a cost-advantageous proposition to network carriers, as the upgrade is primarily software based. Going to 256QAM gives network carriers immediate benefits without the significant hardware changes that are typically associated with other LTE Advanced features, such as adding additional carriers or scaling MIMO antennas.

Evolving LTE Advanced to Gigabit Speeds

Putting the Pieces Together:

In order to achieve gigabit speeds in LTE Advanced, higher order modulation is one tool in a vast toolbox of technologies the industry can use to propel 4G LTE further while the market waits for consensus around next generation 5G networks. Building on top of the carrier aggregation technology discussed earlier in this series, the implementation of higher order modulation in Cat-11 LTE increased the maximum theoretical throughput to 600Mbps using the same 3-carrier aggregated spectrum as dictated by LTE Cat-9. This 33% improvement from 450Mbps is directly attributable to the improved bits-per-symbol efficiency described in this article.

QAM Modulation, MIMO, CA and Spectrum

So what else is required to take LTE Advanced to gigabit speeds? If we look at 3GPP Release 12, Cat-16 LTE can reach a theoretical gigabit per second using the following technologies in concert:

  • 256QAM modulation
  • 4×4 MIMO with 10 spatial streams (2 high frequency carriers with 4 layers each and 1 low frequency carrier at 2 layers)
  • Multi-carrier aggregation (3x20MHz or greater)
  • Use of additional spectrum such as LTE over unlicensed frequencies

What’s needed? Chipset advances

LTE Advanced Category 6 CPE Devices

Currently, while the bulk of LTE smartphones sold today still use Cat-6 modems, modem manufacturers are working fast to prepare the electronic component ecosystem with highly capable LTE modems that can take advantage of the huge potential and headroom of evolved LTE. Qualcomm in particular has recently announced its X16 modem chipset, which has been designed to take advantage of LTE Cat-16. The company claims that volume shipment of X16 modem devices will begin in H2 2016. Other modem makers have not yet announced a Cat-16 capable LTE modem, but the next iteration of LTE Advanced will clearly be on their roadmaps. Meanwhile, wireless infrastructure equipment manufacturers such as Ericsson are lining up technologies to achieve Cat-16 network deployments. Therefore, technically, commercial gigabit speed LTE Advanced Pro networks and devices can be realized in early 2017.

For Further Information

Please Contact Us for more information on our exciting range of solutions using LTE technology

Time to fix “Slow Internet Blues”: choose a CableFree Wireless Network instead!

Researchers agree that slow internet can stress you out

CableFree Solves Slow Internet

You’re not the only one who gets frustrated when videos buffer too much and too often. Ericsson found that the stress caused by trying to load videos on a slow mobile connection is comparable to the stress you feel while watching a horror movie. The Swedish company discovered that when it conducted an experiment called “The Stress of Streaming Delays.” Sure, Ericsson did it to show brands how slow internet affects them, and it’s true it only had 30 subjects. But we don’t think anyone would disagree that having to endure several seconds to minutes of buffering is frustrating.

CableFree Solves Slow Internet Problems

Researchers measured the subjects’ brain activity, pulse and heart rate while they performed tasks on a phone, and found that video streaming delays increase heart rate by 38 percent. They also found that a two-second buffering period can double stress levels. When the researchers observed the subjects who were subjected to longer delays (around six seconds), though, they saw their stress levels rise, then fall. The participants showed signs of resignation, including eye movements that indicated distraction — they were already giving up.

We’ll bet that’s a feeling you know only too well. Why wait around for downloads and buffering on slow Internet? Choose a CableFree wireless network and get into the fast lane with capacities up to 10Gbps!

www.cablefree.net  or visit our Facebook Page

Via: New York Mag Source: Ericsson (1), (2)

 

The Real Cost of Fiber Cuts: How to solve using Gigabit Wireless

Fiber Cuts – The Real Cost – How to solve using Gigabit Wireless

Often you can’t avoid fiber cuts: they happen on public land or under public streets, outside your control. The vast majority of corporate LAN connections, cable, Internet and LTE backhaul runs over fiber optic cable. In one report, CNN stated that about 99 percent of all international communications occur over undersea cabling. Alan Mauldin, research director at U.S.-based research firm Telegeography, noted that while some major cabling projects can come with high price tags, fiber optics is considered more robust and more cost-effective than common wireless alternatives like satellite.

But while fiber optic cabling is traditionally seen as the safer option, that may be a misconception. When installed correctly, fiber optics is the “perfect” medium, transmitting Gigabits of data without interruption. However, any disruption to the fragile fiber causes data outages which can take days or weeks to locate and repair. According to data from the Federal Communications Commission, about a quarter of all network outages that happened between 1993 and 2001 were from cables being cut. Regardless of how the fiber cut occurred, such outages can be particularly damaging.

How easy is it to repair a fiber cut?

Fiber is not a “self healing” medium: skilled teams with specialist fiber-splicing and terminating equipment are required to repair a broken fiber connection. Most data communication engineers do not have this equipment or training in using it. Fiber repair is a specialist business, and getting trained people and splicing equipment to site costs time and money. Factoring the anticipated cost of a fiber repair into a budget for downtime and lost productivity for corporates – and missed uptime SLAs for service providers – is a serious issue for business continuity planning. For rural areas, access to sites can be limited, with some locations cut off by poor weather, and islands sometimes reachable only by infrequent sea or air services.

Common causes of fiber cut outages

As these instances show, there are many different ways in which fiber optic cabling can be disrupted:

By vandalism – This type of fiber cut outage has been worryingly common of late. According to CNN, there have been 11 separate incidents involving the cutting of fiber optic cable in the Bay Area since July 2015. The FBI noted that there have been more than 12 in the region since January, and that it’s been hard to stop in part because there is so much critical cabling in the area and because cables are typically clearly marked, The Wall Street Journal reported. Authorities noted that these incidents show no sign of slowing down either, as they don’t have a clear suspect(s) or motive at this time. The Journal also noted that some instances of fiber optic-related downtime are not due to vandalism, but rather someone trying to steal metal.
By accident – This is perhaps one of the most common causes of fiber cuts, but the results are just as damaging. In one example, a 75-year-old woman in the country of Georgia was digging in a field when she accidentally severed a fiber optic cable, according to an article in The Guardian. As a result of the mishap, close to 90 percent of Armenia and parts of Azerbaijan and Georgia were completely without Internet for more than five hours.

By force of nature – Tornadoes, hurricanes, earthquakes and other major natural disasters all have the potential to cut or entirely destroy fiber optic cabling. Other seemingly more benign forces of nature can also cripple connectivity: Level 3 reported that 28 percent of all damage it sustained to its infrastructure in 2010 was caused by squirrels.

Calculating the impact of a fiber outage

In some of these fiber cut outage incidents, the fallout can be relatively minor. A cut that occurs in the middle of the night on a redundant line can be easy enough to deal with, with service providers sometimes able to reroute traffic in the interim. Unfortunately, however, such incidents often lead to much bigger problems for end users. For example, a cut fiber optic cable in northern Arizona in April caused many thousands of people and businesses to go about 15 hours without telephone and Internet service. This meant many shops had to either close or resort to manual tracking, and personal Internet usage ground to a halt, The Associated Press reported. More importantly, 911 emergency communications were disrupted in the incident.

It’s not just a hassle for end users, as cut fiber can severely impact public health when emergency services like police departments, fire stations and EMTs can’t take and receive calls. Plus, such incidents are very costly for service providers, forced to repair expensive infrastructure. They can also lead to canceled service, as customers become irate at service providers for failing to provide reliable connectivity at all times.

What’s a solution to fiber cut outages?

One easy way to avoid the problems related to cut fiber is to not have fiber at all, and instead pursue a wireless alternative to dark fiber. For example, after a cable snafu caused residents of Washington state’s San Juan Islands to go without telephone, Internet and cell service for 10 days in 2013, CenturyLink installed a wireless mobile backhaul option there, according to The AP.

By opting for a solution like a Gigabit Wireless Microwave, MMW, Free Space Optics or MIMO OFDM Radio, service providers gain a wireless alternative to cabling that is just as robust and fast as fiber. With the Gigabit Wireless link in place, cut fiber optic cabling is less disruptive to end users and ISPs.

Where do I find out more information on solving fiber cut issues?

For more information please contact us