Comparison between 8, 16, 32, 64, 128, and 256 QAM types of Quadrature Amplitude Modulation
Gigabit Wireless Networks commonly use QAM modulation to achieve high data rate transmission. So what is QAM?
Introducing Quadrature Amplitude Modulation
QAM, quadrature amplitude modulation, is widely used in many digital radio communications and data communications applications. A variety of forms of QAM are available, and some of the more common forms include 16, 32, 64, 128 and 256 QAM. Here the figures refer to the number of points on the constellation, i.e. the number of distinct states that can exist.
The various flavours of QAM may be used when data rates beyond those offered by 8-PSK are required by a radio communications system. This is because QAM achieves a greater distance between adjacent points in the I-Q plane by distributing the points more evenly; the points on the constellation are more distinct, and data errors are reduced. While it is possible to transmit more bits per symbol, if the energy of the constellation is to remain the same, the points on the constellation must be closer together and the transmission becomes more susceptible to noise. This results in a higher bit error rate than for the lower-order QAM variants. There is therefore a balance between obtaining higher data rates and maintaining an acceptable bit error rate for any radio communications system.
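The trade-off above can be seen directly by computing the minimum spacing of constellation points once the average symbol energy is held constant. A minimal Python sketch, assuming ideal square constellations normalised to unit average energy:

```python
import math
from itertools import product

def square_qam(m):
    """Points of a square m-QAM constellation (m a power of 4),
    normalised to unit average symbol energy."""
    side = int(math.isqrt(m))
    levels = range(-(side - 1), side, 2)           # ..., -3, -1, 1, 3, ...
    points = [complex(i, q) for i, q in product(levels, levels)]
    energy = sum(abs(p) ** 2 for p in points) / m  # average symbol energy
    scale = math.sqrt(energy)
    return [p / scale for p in points]

def min_distance(points):
    """Smallest Euclidean distance between any two constellation points."""
    return min(abs(a - b) for i, a in enumerate(points)
                          for b in points[i + 1:])

for m in (16, 64, 256):
    print(f"{m}-QAM: d_min = {min_distance(square_qam(m)):.3f}")
```

At equal transmit energy the minimum distance shrinks from about 0.63 for 16QAM to about 0.15 for 256QAM, which is why the higher-order formats are so much more sensitive to noise.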
QAM is used in many radio communications and data delivery applications. However, some specific variants of QAM are used in particular applications and standards.
For domestic broadcast applications for example, 64 and 256 QAM are often used in digital cable television and cable modem applications. In the UK, 16 and 64 QAM are currently used for digital terrestrial television using DVB – Digital Video Broadcasting. In the US, 64 and 256 QAM are the mandated modulation schemes for digital cable as standardised by the SCTE in the standard ANSI/SCTE 07 2000.
In addition to this, variants of QAM are also used for many wireless and cellular technology applications.
The constellation diagrams show the different positions for the states within different forms of QAM, quadrature amplitude modulation. As the order of the modulation increases, so does the number of points on the QAM constellation diagram.
The diagrams below show constellation diagrams for a variety of formats of modulation:
Bits per symbol
The advantage of using QAM is that it is a higher order form of modulation and as a result it is able to carry more bits of information per symbol. By selecting a higher order format, the data rate of a link can be increased.
The table below gives a summary of the bit rates of different forms of QAM and PSK.
MODULATION    BITS PER SYMBOL    SYMBOL RATE
BPSK          1                  1 x bit rate
QPSK          2                  1/2 bit rate
8PSK          3                  1/3 bit rate
16QAM         4                  1/4 bit rate
32QAM         5                  1/5 bit rate
64QAM         6                  1/6 bit rate
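The relationship in the table is simply that an M-point constellation carries log2(M) bits per symbol, so the symbol rate is the bit rate divided by that figure. A short sketch:

```python
import math

MODULATIONS = [("BPSK", 2), ("QPSK", 4), ("8PSK", 8),
               ("16QAM", 16), ("32QAM", 32), ("64QAM", 64)]

def bits_per_symbol(m):
    """Number of bits carried by each symbol of an M-point constellation."""
    return int(math.log2(m))

for name, m in MODULATIONS:
    b = bits_per_symbol(m)
    print(f"{name:>6}: {b} bits/symbol, symbol rate = 1/{b} x bit rate")
```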
QAM noise margin
While higher order modulation rates are able to offer much faster data rates and higher levels of spectral efficiency for the radio communications system, this comes at a price. The higher order modulation schemes are considerably less resilient to noise and interference.
As a result of this, many radio communications systems now use dynamic adaptive modulation techniques. They sense the channel conditions and adapt the modulation scheme to obtain the highest data rate for the given conditions. As signal to noise ratios decrease errors will increase along with re-sends of the data, thereby slowing throughput. By reverting to a lower order modulation scheme the link can be made more reliable with fewer data errors and re-sends.
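The adaptive modulation logic described above amounts to a threshold lookup on measured signal quality. A minimal sketch; the SNR thresholds here are purely illustrative assumptions, not values from any standard:

```python
# Illustrative SNR thresholds (dB) for switching modulation; real systems
# derive these from measured channel quality and target error rates.
SCHEMES = [(22.0, "64QAM"), (16.0, "16QAM"), (9.0, "QPSK"), (0.0, "BPSK")]

def select_modulation(snr_db):
    """Pick the highest-order scheme whose SNR threshold is met."""
    for threshold, scheme in SCHEMES:
        if snr_db >= threshold:
            return scheme
    return None  # link unusable below the lowest threshold

print(select_modulation(25.0))  # strong signal: highest-order scheme
print(select_modulation(12.0))  # moderate signal: fall back to QPSK
```

As the signal-to-noise ratio degrades, the link steps down to a lower-order scheme, trading data rate for reliability exactly as the text describes.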
LTE-enabled smartphones made up only 1 in 4 new devices shipped back in 2013. By 2015, that share had soared to over half of all new smartphones shipped globally. LTE is arguably the most successful generational wireless technology: commercialized only in late 2009, it has evolved to capture the majority of the market for new handsets today. Previous 3G technologies took over a decade to achieve what LTE has done in just six years.
The surge in LTE adoption, not coincidentally, parallels the growth of the smartphone, creating a symbiotic relationship that propelled massive adoption of wireless broadband and smartphone use. The order-of-magnitude improvement in network latency provided by LTE connectivity, coupled with the rapid growth in digital content and the computing power now within everyone's reach, created a rich tapestry of mobile opportunities.
As global LTE network deployments enter a new phase of network enhancement, the industry is now turning to enhanced wireless technologies to evolve the speed and capacity needed to keep up with consumer demand for ever faster downloads, video streams and mobile applications. The first stage of LTE network improvement revolved around carrier aggregation, a method of combining disparate spectrum holdings to create a larger data pipe. This development tracked the evolution of LTE from single-carrier Cat-3 devices to dual-carrier Cat-4 and Cat-6 devices. Further development of carrier aggregation extended the concept to 3-carrier aggregation, specified by the Cat-9 LTE standard, which brought the maximum downlink throughput to 450Mbps.
However, in order for the industry to evolve further and keep up with the insatiable demand for mobile broadband, LTE will require further improvements. The next step in the evolution of LTE relies on LTE Advanced, a new set of technologies destined to push LTE speeds to and past the gigabit-per-second barrier. To this end, IHS will be delivering a series of LTE Advanced Insights to explore the key enabling technologies that will get us past that barrier. This article is the first of the series.
Critical Areas of Exploration
Operators typically ask critical questions including but not limited to:
What is higher order modulation and how does radio signaling enhancement lead to faster wireless broadband?
How can advanced antenna designs be incorporated into existing smartphone form factors and what are the physical challenges involved in doing so?
What are the opportunities to leverage additional spectrum use especially in the unlicensed portions of 3.5GHz and 5GHz? What are the advantages as well as the challenges of doing so?
How can the industry take learnings from the 3G to 4G transition and build on the foundations of LTE moving into 5G?
QAM: Higher Order Modulation to Break Through Gigabit per Second Barrier
Higher order modulation schemes have been used throughout 3G technologies and are now enabling the increased bandwidth coming into 4G LTE Advanced. As WCDMA evolved into HSPA and HSPA+ in the 3G era, higher order modulations of 16QAM and 64QAM replaced the older QPSK modulation scheme, improving the throughput data rates that enabled mobile broadband services to take off. Fundamentally, sophisticated signaling schemes such as 64QAM are used in wireless networks to improve the spectral efficiency of communications by packing as many bits as possible into each transmission. A 16QAM modulation scheme carries 4 bits per symbol, while higher order 64QAM yields 6 bits per symbol, a 50% improvement. Extending this concept, LTE Advanced will use 256QAM modulation from Category 11 onwards, which is expected to provide a 33% improvement in spectral efficiency over 64QAM on the same LTE stream by increasing the bits per symbol from 6 to 8.
Modulation Level (QAM)    Bits per Symbol    Incremental Efficiency Gain
16                        4                  -
64                        6                  50%
256                       8                  33%
Table 1 – Modulation Levels and Corresponding Efficiency Gains
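The incremental gains quoted above follow directly from the bits-per-symbol figures; a quick check:

```python
import math

def efficiency_gain(m_low, m_high):
    """Percentage gain in bits per symbol moving from m_low-QAM to m_high-QAM."""
    low, high = math.log2(m_low), math.log2(m_high)
    return 100 * (high - low) / low

print(f"16QAM -> 64QAM:  +{efficiency_gain(16, 64):.0f}%")   # 4 -> 6 bits
print(f"64QAM -> 256QAM: +{efficiency_gain(64, 256):.1f}%")  # 6 -> 8 bits
```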
While higher order modulations equate to greater spectral efficiencies, within the framework of wireless networks achieving higher order signaling remains a significant challenge. Real-world applications of higher order modulations are difficult to implement network-wide, as the more sophisticated signaling schemes are inherently less resilient to noise and interference. In normal deployments of macro cellular coverage, network operators employ adaptive modulation techniques to detect signal channel conditions and adjust modulation schemes accordingly. For example, if the wireless user is closer to the center of the macro cell area, the network will negotiate the signaling scheme to best take advantage of the channel quality and communicate using the most efficient modulation scheme available. However, if conditions are deemed inadequate, for example at a cell coverage edge, the network may resort to lower orders of modulation signaling in order to achieve a more reliable connection.
Table 2 – LTE Categories and Corresponding Throughput Gains
The limitation of higher order modulation described in the previous paragraph was a hallmark of 3G networks. However, as LTE deployments begin to rely on a more 5G-like heterogeneous network architecture leveraging augmented network equipment such as small cells, the use of higher order modulation becomes more practical as the distance between the LTE antenna and the mobile device is reduced. Yet even then, with a transmission medium as challenging as over-the-air wireless, obstacles still exist. Even in the most optimized heterogeneous networks, issues such as site-to-site signal interference can negate much of the benefit of small cells. Therefore network operators, with help from their equipment vendors, are working on network optimization software to accommodate these interferences.

In the end, no network, even one designed for Cat-11 LTE and above, will be able to cover all of its mobile subscribers with the highest-efficiency signaling. In actual deployment, only a portion of the devices within a coverage area will be at 256QAM, while the majority of other devices will fall back to a lower modulation scheme such as 64QAM or 16QAM. Additional challenges exist in carrier-aggregated LTE connections, whereby 2 or 3 carriers can be aggregated to form a wider virtual channel: depending on the frequencies used and the placement of the cell towers associated with that specific spectrum, not all of the aggregated radio signals can be adapted to signal in higher order modulation. Therefore, reaching the theoretical maximum throughput data rates using higher order modulation will be particularly difficult in actual network deployments.
Even with these real-world deployment limitations on the handset side in mind, higher order modulation schemes have been shown to be a net benefit for LTE networks. In trial tests, even a small fraction of users in a coverage cell using 256QAM creates improvements in network capacity. As devices with 256QAM enter and exit the network faster and more efficiently, wireless capacity is freed up to serve non-256QAM devices. Overall, enabling higher order modulation on an LTE network presents a cost-advantageous proposition to network carriers, as the upgrade is primarily a software-based solution. Going to 256QAM gives carriers immediate benefits without the significant hardware changes typically associated with other LTE Advanced features such as adding additional carriers or scaling MIMO antennas.
Evolving LTE Advanced to Gigabit Speeds
Putting the Pieces Together:
In order to achieve gigabit speeds in LTE Advanced, higher order modulation is one tool in a vast toolbox of technologies the industry can use to propel 4G LTE further as the market waits for consensus around next-generation 5G networks. By building on top of the carrier aggregation technology discussed earlier in this series, the implementation of higher order modulation in Cat-11 LTE increased the maximum theoretical throughput to 600Mbps using the same 3-carrier aggregated spectrum as dictated by LTE Cat-9. This 33% improvement from 450Mbps is directly attributed to the improved bits-per-symbol efficiency described in this article.
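The arithmetic behind that figure is straightforward: the same aggregated spectrum carries 8 bits per symbol instead of 6.

```python
cat9_peak_mbps = 450             # 3x20MHz carrier aggregation with 64QAM
bits_64qam, bits_256qam = 6, 8

# Cat-11 keeps the same aggregated spectrum but carries 8 bits per symbol
cat11_peak_mbps = cat9_peak_mbps * bits_256qam / bits_64qam
print(f"Cat-11 peak: {cat11_peak_mbps:.0f} Mbps")  # 600 Mbps, +33% over Cat-9
```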
QAM Modulation, MIMO, CA and Spectrum
So what else is required to take LTE Advanced to gigabit speeds? Looking at 3GPP Release 12, Cat-16 LTE can reach a theoretical gigabit-per-second speed using the following combination of technologies in concert:
4×4 MIMO with 10 spatial streams (2 high frequency carriers with 4 layers each and 1 low frequency carrier at 2 layers)
Multi-carrier aggregation (3x20MHz or greater)
Use of additional spectrum such as LTE over unlicensed frequencies
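A rough back-of-envelope shows why this combination lands in gigabit territory. The sketch below assumes, purely for illustration, that all three carriers are idealised 20MHz LTE carriers and ignores control and coding overhead, which in practice reduce the raw figure toward the ~1Gbps mark:

```python
# Idealised numbers; real peak rates depend on control overhead,
# code rate and the exact carrier/layer mix.
subcarriers = 1200      # 100 resource blocks x 12 subcarriers (20MHz LTE)
symbols_per_ms = 14     # OFDM symbols per 1 ms subframe (normal cyclic prefix)
bits_per_symbol = 8     # 256QAM
layers = 10             # total spatial streams across the aggregated carriers

# bits per 1 ms subframe, divided by 1000 to convert bits/ms into Mbps
raw_mbps = subcarriers * symbols_per_ms * bits_per_symbol * layers / 1000
print(f"raw physical-layer rate ~ {raw_mbps:.0f} Mbps")
```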
What’s needed? Chipset advances
Currently, while the bulk of LTE smartphones sold today are still on Cat-6 modems, modem manufacturers are working fast to prepare the component ecosystem with highly capable LTE modems that can take advantage of the huge headroom of evolved LTE. Qualcomm in particular has recently announced its X16 modem chipset, designed to take advantage of LTE Cat-16. The company claims that volume shipment of X16 modem devices will begin in H2 2016. Other modem makers have not yet announced a Cat-16-capable LTE modem, but the next iteration of LTE Advanced will clearly be on their roadmaps. Meanwhile, wireless infrastructure equipment manufacturers such as Ericsson are lining up technologies to achieve Cat-16 network deployments. Technically, therefore, commercial gigabit-speed LTE Advanced Pro networks and devices could be realized in early 2017.
For Further Information
Please Contact Us for more information on our exciting range of solutions using LTE technology
Preliminary details and information about the wireless technology being developed for 5th generation or 5G mobile wireless or cellular telecommunications systems
With the 4G telecommunications systems now starting to be deployed, eyes are looking towards the development of 5th generation or 5G technology and services.
Although the deployment of any wireless or cellular system takes many years, development of 5G technology systems is already being investigated. The new 5G technologies will need to be chosen, developed and perfected to enable timely and reliable deployment.
The new 5th generation, 5G technology for cellular systems will probably start to come to fruition around 2020 with deployment following on afterwards.
5G mobile systems status
The 5G technology for cellular systems is very much in the early stages of development. Many companies are looking into the technologies that could become part of the system, and a number of universities have set up 5G research units focussed on developing the technologies for 5G.
In addition, the standards bodies, particularly 3GPP, are aware of the development but are not yet actively planning the 5G systems.
Many of the technologies to be used for 5G will start to appear in the systems used for 4G and then as the new 5G cellular system starts to formulate in a more concrete manner, they will be incorporated into the new 5G cellular system.
The major issue with 5G technology is that there is such an enormously wide variation in the requirements, from superfast downloads to the small data needs of IoT, that no single system will be able to meet them all. Accordingly, a layered approach is likely to be adopted. As one commentator stated: 5G is not just a mobile technology; it is ubiquitous access to high and low data rate services.
5G cellular systems overview
As the different generations of cellular telecommunications have evolved, each one has brought its own improvements. The same will be true of 5G technology.
First generation, 1G: These phones were analogue and were the first mobile or cellular phones to be used. Although revolutionary in their time they offered very low levels of spectrum efficiency and security.
Second generation, 2G: These were based around digital technology and offered much better spectrum efficiency, security and new features such as text messages and low data rate communications.
Third generation, 3G: The aim of this technology was to provide high speed data. The original technology was enhanced to allow data up to 14 Mbps and more.
Fourth generation, 4G: This was an all-IP based technology capable of providing data rates up to 1 Gbps.
Any new 5th generation, 5G cellular technology needs to provide significant gains over previous systems to provide an adequate business case for mobile operators to invest in any new system.
Facilities that might be seen with 5G technology include far better levels of connectivity and coverage. The term World Wide Wireless Web, or WWWW, is being coined for this.
For 5G technology to be able to achieve this, new methods of connecting will be required as one of the main drawbacks with previous generations is lack of coverage, dropped calls and low performance at cell edges. 5G technology will need to address this.
Although the standards bodies have not yet defined the parameters needed to meet a 5G performance level, other organisations have set their own aims, which may eventually influence the final specifications.
Typical parameters for a 5G standard may include:
SUGGESTED 5G WIRELESS PERFORMANCE
Network capacity       10 000 times capacity of current network
Peak data rate         -
Cell edge data rate    -
Latency                < 1 ms
These are some of the ideas being put forwards for a 5G standard, but they are not accepted by any official bodies yet.
There are several key areas that are being investigated by research organisations. These include:
Millimeter-Wave technologies: Using frequencies much higher in the frequency spectrum opens up more spectrum and also provides the possibility of much wider channel bandwidths, possibly 1 - 2 GHz. However, this poses new challenges for handset development, where maximum frequencies of around 2 GHz and bandwidths of 10 - 20 MHz are currently in use. For 5G, frequencies above 50GHz are being considered, and these will present some real challenges in terms of circuit design, technology, and the way the system is used, as these frequencies do not travel as far and are absorbed almost completely by obstacles.
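The range penalty of moving up in frequency can be illustrated with the free-space path loss (Friis) formula; note this models only spreading loss, not the additional atmospheric and obstacle absorption mentioned above:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (Friis formula)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Compare a current cellular band against a candidate mmWave band over 1 km
for f_ghz in (2, 60):
    print(f"{f_ghz:>2}GHz over 1 km: {fspl_db(1000, f_ghz * 1e9):.1f} dB")
```

The 60GHz link suffers roughly 30 dB (a factor of 1000 in power) more free-space loss than a 2GHz link over the same distance, before absorption effects are even considered.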
Future PHY / MAC: The new physical layer and MAC presents many new interesting possibilities in a number of areas:
Waveforms: One key area of interest is that of the new waveforms that may be seen. OFDM has been used very successfully in 4G LTE as well as a number of other high data rate systems, but it does have limitations in some circumstances. Formats being proposed include GFDM, Generalised Frequency Division Multiplexing; FBMC, Filter Bank Multi-Carrier; and UFMC, Universal Filtered Multi-Carrier. Each has its own advantages and limitations, and it is possible that adaptive schemes may be employed, utilising different waveforms as the requirements dictate. This provides considerably more flexibility for 5G mobile communications.
Multiple Access Schemes: Again, a variety of new access schemes are being investigated for 5G technology. Techniques including OFDMA, SCMA, NOMA, PDMA, MUSA and IDMA have all been mentioned.
Modulation: Whilst PSK and QAM have provided excellent performance in terms of spectral efficiency, resilience and capacity, their major drawback is a high peak-to-average power ratio. Modulation schemes like APSK could provide advantages in some circumstances.
Duplex methods: There are several candidate forms of duplex being considered. Current systems use either frequency division duplex, FDD, or time division duplex, TDD. New possibilities are opening up for 5G, including flexible duplex, where the time or frequencies allocated are variable according to the load in either direction, and a new scheme called division-free duplex or single-channel full duplex, which would enable simultaneous transmission and reception on the same channel.
Massive MIMO: Although MIMO is being used in many applications from LTE to Wi-Fi, the number of antennas is fairly limited. Moving to microwave frequencies opens up the possibility of using many tens of antennas on a single item of equipment, because of the antenna sizes and spacings in terms of a wavelength.
Dense networks: Reducing the size of cells makes much more effective use of the available spectrum overall. Techniques are required to ensure that small cells deployed in the macro-network as femtocells can operate satisfactorily.
Other 5G concepts
There are many new concepts that are being investigated and developed for the new 5th generation mobile system. Some of these include:
Pervasive networks: This technology, being considered for 5G cellular systems, would allow a user to be concurrently connected to several wireless access technologies and seamlessly move between them.
Group cooperative relay: This is a technique that is being considered to make the high data rates available over a wider area of the cell. Currently data rates fall towards the cell edge where interference levels are higher and signal levels lower.
Cognitive radio technology: If cognitive radio technology were used for 5th generation, 5G cellular systems, it would enable the user equipment or handset to look at the radio landscape in which it is located and choose the optimum radio access network, modulation scheme and other parameters to configure itself for the best connection and optimum performance.
Wireless mesh networking and dynamic ad-hoc networking: With the variety of different access schemes it will be possible to link to others nearby to provide ad-hoc wireless networks for much speedier data flows.
Smart antennas: Another major element of any 5G cellular system will be that of smart antennas. Using these it will be possible to alter the beam direction to enable more direct communications and limit interference and increase overall cell capacity.
There are many new techniques and technologies that will be used in the new 5G cellular or mobile telecommunications system. These new 5G technologies are still being developed and the overall standards have not yet been defined. However, as the required technologies develop, they will be incorporated into the new system, which will be defined by the standards bodies over the coming years.
Gigabit Wireless Metro Networks: CableFree MMW links deployed in the UAE
CableFree has deployed Gigabit Wireless MMW links for Public Safety networks in the UAE with regional partner CDN (Computer Data Networks). For this project a number of 1Gbps MMW links have been implemented to upgrade and extend existing network infrastructure for Safe City applications.
CableFree Millimeter Wave (MMW) links offer up to 10Gbps Full Duplex capacity and are proven to operate well in the harsh climate and conditions of regions such as the UAE, including recent record summer temperatures. CableFree worked closely with CDN to ensure high uptime and availability throughout the network.
CableFree MMW is a proven and robust high-speed technology for Line of Sight links. High-frequency microwave signals between 60 and 90GHz have "pencil beam" properties that minimise interference and enable dense deployment in busy urban areas.
Applications for Millimeter Wave include 4G/LTE Mobile Backhaul, Safe Cities, Government, Corporate CCTV and ISP backbones.
Distances of 5-15km can be deployed reliably: CableFree provides a full range of planning tools to enable customers to plan for high availability even in high-rainfall regions.
CableFree MMW links are ideal for implementing wireless networks in many regions and can upgrade existing congested unlicensed and licensed microwave links, and extend the reach of fibre optic cabling. The links are rapid to deploy within hours and can provide permanent, temporary or disaster-recovery scenarios, including resilient backup to fragile fibre optic cables and leased lines.
For more information on Millimeter Wave and Wireless Metro Networks please contact the CableFree team: email@example.com
Researchers agree that slow internet can stress you out
You’re not the only one who gets frustrated when videos buffer too much and too often. Ericsson found that the stress caused by trying to load videos on a slow mobile connection is comparable to the stress you feel while watching a horror movie. The Swedish company discovered this when it conducted an experiment called “The Stress of Streaming Delays.” Sure, Ericsson did it to show brands how slow internet affects them, and it’s true the experiment had only 30 subjects. But we don’t think anyone would disagree that having to endure several seconds to minutes of buffering is frustrating.
Researchers measured the subjects’ brain, pulse and heart activity while they performed tasks on a phone, and found that video streaming delays increase heart rate by 38 percent. They also found that a two-second buffering period can double stress levels. When the researchers observed subjects who endured longer delays (around six seconds), though, they saw stress levels rise, then fall. The participants showed signs of resignation, including eye movements indicating distraction: they were already giving up.
We’ll bet that’s a feeling you only know too well. Why wait around for downloads and buffering on Slow Internet? Choose a CableFree Wireless network and get into the fast lane with capacities up to 10Gbps!
On Dec. 16 2013, Ofcom—the UK telecom regulator—announced a new approach for the use of E-band wireless communications in the United Kingdom.
To summarize, the new approach, which is available for licensing after Dec. 17, 2013, splits the band into two segments. Ofcom will coordinate the lower segment of 2GHz, while the upper segment of 2.5GHz will remain self-coordinated as per the prior policy.
The segment Ofcom coordinates will follow the usual regulatory processes for all the other fixed-link bands it oversees. Moreover, Ofcom has already updated all the relevant documents and forms to accommodate E-band. While wireless vendors would have preferred the larger portion of spectrum to have been granted to the Ofcom-coordinated process, we welcome this new arrangement because it offers operators greater security and peace of mind, in terms of protection from interference, than the previous all-self-coordinated spectrum regime.
Latest E-Band regulation by Ofcom
For a more detailed look at the new E-band arrangement, Figure 1 shows the Ofcom-coordinated section sitting in the lower half of both the 71-76GHz and 81-86GHz bands thus allowing for the deployment of FDD systems in line with ECC/REC(05)07.
Figure 1: Segmented Plan for Mixed Management Approach (click on figures to enlarge)
In terms of channelization within the Ofcom-coordinated block, the regulator announced that it would permit 8 x 250MHz channels, 4 x 500MHz channels, 1 x 750MHz channel and 1 x 1000MHz channel as per ECC/REC(05)07. Ofcom also stated that 62.5MHz and 125MHz channels will be implemented as soon as the relevant technical standards, etc., from ETSI are published. Figure 2 shows the Ofcom channel plan:
Figure 2: Ofcom Permitted E-band Channelizations
Regarding equipment requirements, Ofcom stated that it will allow equipment that meets the appropriate sections of EN 302 217-2-2 and EN 302 217-4-2. This includes the antenna classes (e.g., classes 2-4) that will allow the deployment of solutions with flat panel antennas. We welcome this approach and hope that other regulators currently considering opening up and/or revising their rules for E-band, notably the FCC in terms of antenna requirements, adopt similar approaches.
The license fees for the self-coordinated segment remain at £50 per link per annum, whereas in the Ofcom-coordinated segment the fees are bandwidth-based, as reflected in Figure 3:
Notwithstanding the current fees consultation process that Ofcom is undertaking, these “interim fees” will remain in place for five years, after which time the results of the fees review may mean that they will be amended.
Figure 3: Ofcom Bandwidth-based Fees
Also, as a result of responses received during the consultation process, within the self-coordinated block Ofcom will now require the following additional information for the self-coordination database: antenna polarization (horizontal, vertical or dual), ETSI Spectrum Efficiency Class, and whether the link is TDD or FDD.
Fiber Cuts – The Real Cost – How to solve using Gigabit Wireless
Often you can’t avoid fiber cuts: they happen on public land or under public streets, outside your control. The vast majority of corporate LAN connections, cable, Internet and LTE backhaul are carried over fiber optic cable. In one report, CNN stated that about 99 percent of all international communications travel over undersea cabling. Alan Mauldin, research director at U.S.-based research firm Telegeography, noted that while some major cabling projects can come with high price tags, fiber optics is considered more robust and more cost-effective than common wireless alternatives like satellite.
But while fiber optic cabling is traditionally seen as the safer option, that may be a misconception. When installed correctly, fiber optics is the “perfect” medium, transmitting gigabits of data without interruption. However, any disruption to the fragile fiber causes data outages which can take days or weeks to locate and repair. According to data from the Federal Communications Commission, about a quarter of all network outages between 1993 and 2001 were caused by cables being cut. Regardless of how a fiber cut occurs, such outages can be particularly damaging.
How easy is it to repair a fiber cut?
Fiber is not a “self-healing” medium: skilled teams with specialist fiber-splicing and terminating equipment are required to repair a broken fiber connection. Most data communication engineers do not have this equipment or the training to use it. Fiber repair is a specialist business, and getting trained people and splicing equipment to site costs time and money. Factoring the anticipated cost of a fiber repair into a budget for downtime and lost productivity for corporates, and for missed uptime SLAs for service providers, is a serious issue for business continuity planning. In rural areas, access to sites can be limited, with some locations cut off by poor weather, and islands sometimes reachable only by infrequent sea or air services.
Fiber cuts happen in several ways:

By vandalism – This type of fiber cut outage has been worryingly common of late. According to CNN, there have been 11 separate incidents involving the cutting of fiber optic cable in the Bay Area since July 2015. The FBI noted that there have been more than 12 in the region since January, and that the problem has been hard to stop, in part because there is so much critical cabling in the area and because cables are typically clearly marked, The Wall Street Journal reported. Authorities noted that these incidents show no sign of slowing down, as they have no clear suspects or motive at this time. The Journal also noted that some instances of fiber-optic-related downtime are due not to vandalism but to someone trying to steal metal.
By accident – This is perhaps the most common cause of fiber cuts, but no less damaging. In one example, a 75-year-old woman in the country of Georgia was digging in a field when she accidentally severed a fiber optic cable, according to an article in The Guardian. As a result of the mishap, close to 90 percent of Armenia and parts of Azerbaijan and Georgia were completely without Internet for more than five hours.
By force of nature – Tornadoes, hurricanes, earthquakes and other major natural disasters all have the potential to cut or entirely destroy fiber optic cabling. Other seemingly more benign forces of nature can also cripple connectivity: Level 3 reported that 28 percent of all damage its infrastructure sustained in 2010 was caused by squirrels.
Calculating the impact of a fiber outage
In some of these fiber cut outage incidents, the fallout can be relatively minor. A cut that occurs in the middle of the night on a redundant line can be easy enough to deal with, with service providers sometimes able to reroute traffic in the interim. Unfortunately, however, such incidents often lead to much bigger problems for end users. For example, a cut fiber optic cable in northern Arizona in April left many thousands of people and businesses without telephone and Internet service for about 15 hours. This meant many shops had to either close or resort to manual tracking, and personal Internet usage ground to a halt, The Associated Press reported. More importantly, 911 emergency communications were disrupted in the incident.
It’s not just a hassle for end users: cut fiber can severely impact public safety when emergency services like police departments, fire stations and EMTs can’t make and receive calls. Such incidents are also very costly for service providers, who are forced to repair expensive infrastructure. They can also lead to cancelled service, as customers become irate at providers for failing to deliver reliable connectivity at all times.
What’s a solution to fiber cut outages?
One easy way to avoid the problems related to cut fiber is to avoid relying on fiber at all and instead pursue a wireless alternative. For example, after a cable failure caused residents of Washington state’s San Juan Islands to go without telephone, Internet and cell service for 10 days in 2013, CenturyLink installed a wireless mobile backhaul option there, according to The AP.
By opting for a Gigabit Wireless solution such as Microwave, Millimeter Wave (MMW), Free Space Optics or MIMO OFDM radio, service providers gain a wireless alternative to cabling that can match fiber for robustness and speed. With a Gigabit Wireless link in place, cut fiber optic cabling is far less disruptive to end users and ISPs.
Where do I find out more information on solving fiber cut issues?
FSO (Free Space Optics, Laser, Optical Wireless) Guide
Free Space Optics (FSO) communications, also called Optical Wireless (OW) or Infrared Laser communications, refers to the transmission of modulated visible or infrared (IR) beams through the atmosphere to obtain optical communications. Like fibre, Free Space Optics (FSO) uses lasers to transmit data, but instead of enclosing the data stream in a glass fibre, it is transmitted through the air. Free Space Optics (FSO) works on the same basic principle as infrared television remote controls, wireless keyboards and IrDA ports on laptops and cellular phones.
History of Free Space Optics (FSO)
The engineering maturity of Free Space Optics (FSO) is often underestimated, due to a misunderstanding of how long Free Space Optics (FSO) systems have been under development. Historically, Free Space Optics (FSO) or optical wireless communications was first demonstrated by Alexander Graham Bell in the late nineteenth century, shortly after his demonstration of the telephone. Bell’s Free Space Optics (FSO) experiment converted voice sounds into electrical signals and transmitted them through free air space along a beam of light for a distance of some 600 feet. Calling his experimental device the “photophone,” Bell considered this optical technology – and not the telephone – his pre-eminent invention, because it did not require wires for transmission.
Although Bell’s photophone never became a commercial reality, it demonstrated the basic principle of optical communications. Essentially all of the engineering of today’s Free Space Optics (FSO) or free space optical communications systems was done over the past 40 years or so, mostly for defense applications. By addressing the principal engineering challenges of Free Space Optics (FSO), this aerospace/defence activity established a strong foundation upon which today’s commercial laser-based Free Space Optics (FSO) systems are based.
How Free Space Optics (FSO) Works
Free Space Optics (FSO) transmits invisible, eye-safe light beams from one “telescope” to another using low-power lasers in the near-infrared spectrum. The laser light is focused onto highly sensitive photon detector receivers. These telescopic lenses collect the photon stream and recover the digital data it carries – a mix of Internet messages, video images, radio signals or computer files. Commercially available systems offer capacities in the range of 100 Mbps to 2.5 Gbps, and demonstration systems have reported data rates as high as 160 Gbps.
Free Space Optics (FSO) systems can function over distances of several kilometres. As long as there is a clear line of sight between the source and the destination, and enough transmitter power, Free Space Optics (FSO) communication is possible.
FSO: Wireless Links at the Speed of Light
Unlike radio and microwave systems, Free Space Optics (FSO) is an optical technology: no spectrum licensing or frequency coordination with other users is required, interference from or to other systems or equipment is not a concern, and the point-to-point laser signal is extremely difficult to intercept, and therefore secure. Data rates comparable to optical fibre transmission can be carried by Free Space Optics (FSO) systems with very low error rates, while the extremely narrow laser beam widths ensure that there is almost no practical limit to the number of separate Free Space Optics (FSO) links that can be installed in a given location.
How Free Space Optics (FSO) benefits you
FSO is free from licensing and regulation which translates into ease, speed and low cost of deployment. Since Free Space Optics (FSO) transceivers can transmit and receive through windows, it is possible to mount Free Space Optics (FSO) systems inside buildings, reducing the need to compete for roof space, simplifying wiring and cabling, and permitting Free Space Optics (FSO) equipment to operate in a very favourable environment. The only essential requirement for Free Space Optics (FSO) or optical wireless transmission is line of sight between the two ends of the link.
For Metro Area Network (MAN) providers, the last mile – or even the last few feet – can be the most daunting. Free Space Optics (FSO) networks can close this gap and give new customers access to high-speed MANs. Providers can also take advantage of the reduced risk of installing a Free Space Optics (FSO) network, which can later be redeployed.
The Market. Why FSO? Breaking the Bandwidth Bottleneck
Why FSO? The global telecommunications network has seen massive expansion over the last few years. First came the tremendous growth of the optical fiber long-haul, wide-area network (WAN), followed by a more recent emphasis on metropolitan area networks (MANs). Meanwhile, local area networks (LANs) and gigabit Ethernet ports are being deployed with a comparable growth rate. In order for this tremendous network capacity to be exploited, and for the users to be able to utilize the broad array of new services becoming available, network designers must provide a flexible and cost-effective means for the users to access the telecommunications network. Presently, however, most local loop network connections are limited to 1.5 Mbps (a T1 line). As a consequence, there is a strong need for a high-bandwidth bridge (the “last mile” or “first mile”) between the LANs and the MANs or WANs.
A recent New York Times article reported that more than 100 million miles of optical fibre was laid around the world in the last two years, as carriers reacted to the Internet phenomenon and end users’ insatiable demand for bandwidth. The sheer scale of connecting whole communities, cities and regions to that fiber optic cable or “backbone” is something not many players understood well. Despite the huge investment in trenching and optical cable, most of the fibre remains unlit, 80 to 90% of office, commercial and industrial buildings are not connected to fibre, and transport prices are dropping dramatically.
Free Space Optics (FSO) systems represent one of the most promising approaches for addressing the emerging broadband access market and its “last mile” bottleneck. Free Space Optics (FSO) systems offer many features, principal among them being low start-up and operational costs, rapid deployment, and high fiber-like bandwidths due to the optical nature of the technology.
Broadband Bandwidth Alternatives
Access technologies in general use today include telco-provisioned copper wire, wireless Internet access, broadband RF/microwave, coaxial cable and direct optical fiber connections (fiber to the building; fiber to the home). Telco/PTT telephone networks are still trapped in the old Time Division Multiplex (TDM) based network infrastructure that rations bandwidth to the customer in increments of 1.5 Mbps (T-1) or 2.048 Mbps (E-1). DSL penetration rates have been throttled by slow deployment and the pricing strategies of the PTTs. Cable modem access has had more success in residential markets, but suffers from security and capacity problems, and is generally conditional on the user subscribing to a package of cable TV channels. Wireless Internet access is still slow, and the tiny screen renders it of little appeal for web browsing.
Broadband RF/microwave systems have severe limitations and are losing favor. The radio spectrum is a scarce and expensive licensed commodity, sold or leased to the highest bidder, or on a first-come first-served basis, and all too often, simply unavailable due to congestion. As building owners have realized the value of their roof space, the price of roof rights has risen sharply. Furthermore, radio equipment is not inexpensive, the maximum data rates achievable with RF systems are low compared to optical fiber, and communications channels are insecure and subject to interference from and to other systems (a major constraint on the use of radio systems).
Free Space Optics (FSO) Advantages
Free space optical (FSO) systems offer a flexible networking solution that delivers on the promise of broadband. Only free space optics or Free Space Optics (FSO) provides the essential combination of qualities required to bring the traffic to the optical fiber backbone – virtually unlimited bandwidth, low cost, and ease and speed of deployment. Freedom from licensing and regulation translates into ease, speed and low cost of deployment. Since Free Space Optics (FSO) optical wireless transceivers can transmit and receive through windows, it is possible to mount Free Space Optics (FSO) systems inside buildings, reducing the need to compete for roof space, simplifying wiring and cabling, and permitting the equipment to operate in a very favorable environment. The only essential requirement for Free Space Optics (FSO) is line of sight between the two ends of the link.
Security and Free Space Optics (FSO)
The common perception of wireless is that it offers less security than wireline connections. In fact, Free Space Optics (FSO) is far more secure than RF or other wireless-based transmission technologies for several reasons:
Free Space Optics (FSO) laser beams cannot be detected with spectrum analyzers or RF meters
Free Space Optics (FSO) laser transmissions are optical and travel along a line of sight path that cannot be intercepted easily. It requires a matching Free Space Optics (FSO) transceiver carefully aligned to complete the transmission. Interception is very difficult and extremely unlikely
The laser beams generated by Free Space Optics (FSO) systems are narrow and invisible, making them harder to find and even harder to intercept and crack
Data can be transmitted over an encrypted connection adding to the degree of security available in Free Space Optics (FSO) network transmissions.
Free Space Optics (FSO) Challenges
The advantages of free space optical wireless or Free Space Optics (FSO) do not come without some cost. When light is transmitted through optical fiber, transmission integrity is quite predictable – barring unforeseen events such as backhoes or animal interference. When light is transmitted through the air, as with Free Space Optics (FSO) optical wireless systems, it must contend with a complex and not always quantifiable subject – the atmosphere.
Attenuation, Fog and Free Space Optics (FSO)
Fog substantially attenuates visible radiation, and it has a similar effect on the near-infrared wavelengths that are employed in Free Space Optics (FSO) systems. The effect of fog on Free Space Optics (FSO) optical wireless radiation is entirely analogous to the attenuation – and fades – suffered by RF wireless systems due to rainfall. As with rain attenuation in RF wireless, fog attenuation is not a “show-stopper” for Free Space Optics (FSO) optical wireless, because the optical link can be engineered such that, for a large fraction of the time, an acceptable power will be received even in the presence of heavy fog. Free Space Optics (FSO) optical wireless-based communication systems can be enhanced to yield even greater availabilities.
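As a rough illustration of how such a link can be engineered, the sketch below computes a simple link margin. All the figures – transmit power, receiver sensitivity, system losses and link length – are illustrative assumptions, not the specifications of any particular product:

```python
# Illustrative FSO link-budget sketch; all figures are assumptions,
# not the specifications of any real product.
tx_power_dbm = 10.0         # transmitter optical power
rx_sensitivity_dbm = -40.0  # receiver sensitivity
system_losses_db = 6.0      # optics, pointing and geometric losses (assumed)
link_km = 1.0               # link length

def link_margin_db(fog_atten_db_per_km):
    """Margin above receiver sensitivity for a given fog attenuation."""
    received_dbm = tx_power_dbm - system_losses_db - fog_atten_db_per_km * link_km
    return received_dbm - rx_sensitivity_dbm

print(link_margin_db(0))    # clear air: 44 dB of margin
print(link_margin_db(30))   # moderate fog: link still closes with 14 dB spare
print(link_margin_db(100))  # very dense fog: margin is exhausted
```

The clear-air margin of tens of dB is what allows the link to ride through heavy fog for a large fraction of the time.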
Free Space Optics (FSO) and Physical Obstructions
Free Space Optics (FSO) products which have widely spaced redundant transmitters and large receive optics all but eliminate interference concerns from objects such as birds. On a typical day, an object covering 98% of the receive aperture and all but one transmitter will not cause a Free Space Optics (FSO) link to drop out. Thus birds are unlikely to have any impact on Free Space Optics (FSO) transmission.
Free Space Optics (FSO) Pointing Stability – Building Sway, Tower Movement
Only wide-beamwidth, fixed-pointed Free Space Optics (FSO) systems are capable of handling the vast majority of movement found in deployments on buildings. ‘Wide beam’ here means a divergence of more than 5 milliradians; narrow-beam systems (1–2 mrad) are unreliable without a tracking system, requiring manual re-alignment on a regular basis due to building movement.
The combination of effective beam divergence and a well matched receive Field-of-View (FOV) provide for an extremely robust fixed pointed Free Space Optics (FSO) system suitable for most deployments. Fixed-pointed Free Space Optics (FSO) systems are generally preferred over actively-tracked Free Space Optics (FSO) systems due to their lower cost.
Free Space Optics (FSO) and Scintillation
Performance of many Free Space Optics (FSO) optical wireless systems is adversely affected by scintillation on bright sunny days, and the effects are typically reflected in BER statistics. Some optical wireless products have a unique combination of large aperture receiver, widely spaced transmitters, finely tuned receive filtering, and automatic gain control characteristics. In addition, certain optical wireless systems also apply a clock recovery phase-lock-loop time constant that all but eliminates the effects of atmospheric scintillation and jitter transference.
Solar Interference and Free Space Optics (FSO)
Solar interference in Free Space Optics (FSO) free space optical systems can be combated in two ways. First, an optical narrowband filter preceding the receive detector filters out all but the wavelength actually used for intersystem communications. Second, to handle off-axis solar energy, sophisticated spatial filters have been implemented in CableFree systems, allowing them to operate unaffected by solar interference that is more than 1 degree off-axis.
Free Space Optics (FSO) Reliability
Employing an adaptive laser power scheme (Automatic Transmit Power Control, or ATPC) to dynamically adjust the laser power in response to weather conditions improves the reliability of Free Space Optics (FSO) optical wireless systems. In clear weather the transmit power is greatly reduced, enhancing the laser lifetime by operating the laser under very low-stress conditions. In severe weather, the laser power is increased as needed to maintain the optical link – then decreased again as the weather clears. A TEC controller that maintains the temperature of the laser transmitter diodes in the optimum region maximises reliability and lifetime while preserving power output, allowing the FSO optical wireless system to operate more efficiently and reliably at higher power levels.
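The control loop behind such an ATPC scheme can be sketched in a few lines. This is a minimal illustration with assumed thresholds, step size and power limits, not the algorithm of any specific product:

```python
def atpc_step(power_dbm, rx_margin_db,
              target_margin_db=10.0, step_db=1.0,
              min_dbm=-5.0, max_dbm=13.0):
    """One ATPC control-loop iteration: raise laser power when the receive
    margin drops below target, lower it when the weather clears, and clamp
    to the laser's safe operating range (all limits here are assumptions)."""
    if rx_margin_db < target_margin_db:
        power_dbm += step_db
    elif rx_margin_db > target_margin_db + step_db:
        power_dbm -= step_db
    return max(min_dbm, min(max_dbm, power_dbm))

print(atpc_step(0.0, 4.0))    # fog rolls in: power steps up to 1.0 dBm
print(atpc_step(5.0, 20.0))   # weather clears: power steps down to 4.0 dBm
print(atpc_step(13.0, 2.0))   # already at maximum: stays clamped at 13.0 dBm
```

Running this step periodically against the measured receive margin keeps the laser lightly stressed in clear weather while reserving full power for fog.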
For more information on Free Space Optics, please Contact Us
What is OFDM? (Orthogonal Frequency Division Multiplexing)
OFDM: Orthogonal Frequency Division Multiplexing, is a form of signal modulation that divides a high data rate stream across many slowly modulated, narrowband, closely spaced subcarriers, and in this way is less sensitive to frequency selective fading.
Orthogonal Frequency Division Multiplexing or OFDM is a modulation format that is being used for many of the latest wireless and telecommunications standards.
OFDM has been adopted in the Wi-Fi arena, where it is used by standards such as 802.11a, 802.11n and 802.11ac. It has also been chosen for the cellular telecommunications standard LTE / LTE-A, and in addition it has been adopted by other standards such as WiMAX and many more.
Orthogonal frequency division multiplexing has also been adopted for a number of broadcast standards, from DAB Digital Radio to the Digital Video Broadcast standards, DVB. It has also been adopted for other broadcast systems, including Digital Radio Mondiale, used for the long, medium and short wave bands.
Although OFDM, orthogonal frequency division multiplexing is more complicated than earlier forms of signal format, it provides some distinct advantages in terms of data transmission, especially where high data rates are needed along with relatively wide bandwidths.
What is OFDM? – The concept
OFDM is a form of multicarrier modulation. An OFDM signal consists of a number of closely spaced modulated carriers. When modulation of any form – voice, data, etc. – is applied to a carrier, sidebands spread out on either side. A receiver must be able to receive the whole signal to successfully demodulate the data. As a result, when signals are transmitted close to one another, they must be spaced so that the receiver can separate them with a filter, and there must be a guard band between them. This is not the case with OFDM. Although the sidebands from each carrier overlap, they can still be received without the interference that might be expected because they are orthogonal to one another. This is achieved by making the carrier spacing equal to the reciprocal of the symbol period.
Traditional view of receiving signals carrying modulation
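The orthogonality condition can be checked numerically: with the carrier spacing equal to the reciprocal of the symbol period, every subcarrier completes a whole number of cycles per symbol, so the correlation between any two different subcarriers over one symbol period is zero. A short pure-Python sketch (carrier indices and sample count chosen for illustration):

```python
import cmath

N = 64  # samples per OFDM symbol

def subcarrier(k, n):
    # sample n of subcarrier k; spacing 1/T gives k whole cycles per symbol
    return cmath.exp(2j * cmath.pi * k * n / N)

def correlate(k1, k2):
    # inner product of two subcarriers over one symbol period
    return sum(subcarrier(k1, n) * subcarrier(k2, n).conjugate()
               for n in range(N)) / N

print(abs(correlate(3, 3)))  # same carrier: correlation is 1
print(abs(correlate(3, 5)))  # different carriers: correlation ~0 (orthogonal)
```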
To see how OFDM works, it is necessary to look at the receiver. This acts as a bank of demodulators, translating each carrier down to DC. The resulting signal is integrated over the symbol period to regenerate the data from that carrier. Each demodulator also sees the other carriers, but because the carrier spacing is equal to the reciprocal of the symbol period, the other carriers complete a whole number of cycles in the symbol period and their contributions sum to zero – in other words, they add no interference.
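In practice this bank of modulators and demodulators is implemented as an inverse DFT at the transmitter and a DFT at the receiver (real systems use the FFT; a naive DFT is shown here for clarity). The sketch below shows illustrative QPSK data symbols surviving the round trip intact:

```python
import cmath

def idft(X):
    # transmitter: place one data symbol on each subcarrier
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def dft(x):
    # receiver: bank of correlators, one per subcarrier
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

# QPSK data on 8 subcarriers (illustrative values)
symbols = [1+1j, 1-1j, -1+1j, -1-1j, 1+1j, -1-1j, 1-1j, -1+1j]
time_domain = idft(symbols)   # one OFDM symbol in the time domain
recovered = dft(time_domain)  # demodulation recovers the original data
print(all(abs(a - b) < 1e-9 for a, b in zip(recovered, symbols)))  # True
```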
One requirement of the OFDM transmitting and receiving systems is that they must be linear. Any non-linearity will cause interference between the carriers as a result of inter-modulation distortion. This will introduce unwanted signals that would cause interference and impair the orthogonality of the transmission.
In terms of equipment, the high peak-to-average power ratio of multi-carrier systems such as OFDM requires the RF final amplifier on the output of the transmitter to handle the peaks while the average power is much lower, and this leads to inefficiency. In some systems the peaks are limited. Although this clipping introduces distortion that results in a higher level of data errors, the system can rely on error correction to remove them.
Data on OFDM
The data to be transmitted on an OFDM signal is spread across the carriers of the signal, each carrier taking part of the payload. This reduces the data rate carried by each individual carrier. The lower data rate has the advantage that interference from reflections is much less critical. Resistance to reflections is further improved by adding a guard interval into the system: the data is only sampled once the signal is stable, so late-arriving delayed copies of the signal cannot alter the timing and phase of the sampled symbol.
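The usual realisation of this guard interval is the cyclic prefix: the tail of each time-domain symbol is copied to its front, so reflections arriving during the prefix do not corrupt the portion that is actually sampled. A minimal sketch (list slicing stands in for real signal processing):

```python
def add_cyclic_prefix(symbol, cp_len):
    # copy the last cp_len samples to the front as a guard interval
    return symbol[-cp_len:] + symbol

def remove_cyclic_prefix(rx, cp_len):
    # receiver discards the guard interval before demodulation
    return rx[cp_len:]

sym = list(range(8))   # stand-in for one time-domain OFDM symbol
tx = add_cyclic_prefix(sym, 2)
print(tx)                                  # [6, 7, 0, 1, 2, 3, 4, 5, 6, 7]
print(remove_cyclic_prefix(tx, 2) == sym)  # True
```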
The distribution of the data across a large number of carriers in the OFDM signal has some further advantages. Nulls caused by multi-path effects or interference on a given frequency affect only a small number of the carriers; the remaining ones are received correctly. By using error-coding techniques – which do mean adding further data to the transmitted signal – much or all of the corrupted data can be reconstructed within the receiver. This is possible because the error correction code is transmitted in a different part of the signal.
OFDM advantages & disadvantages
OFDM has been used in many high data rate wireless systems because of the many advantages it provides.
Immunity to selective fading: One of the main advantages of OFDM is that it is more resistant to frequency selective fading than single carrier systems, because it divides the overall channel into multiple narrowband sub-channels that each experience approximately flat fading.
Resilience to interference: Interference appearing on a channel may be bandwidth limited and in this way will not affect all the sub-channels. This means that not all the data is lost.
Spectrum efficiency: Using close-spaced overlapping sub-carriers, a significant OFDM advantage is that it makes efficient use of the available spectrum.
Resilient to ISI: Another advantage of OFDM is that it is very resilient to inter-symbol and inter-frame interference. This results from the low data rate on each of the sub-channels.
Resilient to narrow-band effects: Using adequate channel coding and interleaving it is possible to recover symbols lost due to the frequency selectivity of the channel and narrow band interference. Not all the data is lost.
Simpler channel equalisation: One of the issues with CDMA systems was the complexity of the channel equalisation, which had to be applied across the whole channel. An advantage of OFDM is that, by using multiple sub-channels, channel equalisation becomes much simpler.
Whilst OFDM has been widely used, there are still a few disadvantages to its use which need to be addressed when considering its use.
High peak to average power ratio: An OFDM signal has a noise-like amplitude variation and a relatively large dynamic range, or peak-to-average power ratio. This impacts RF amplifier efficiency: the amplifiers need to be linear and accommodate the large amplitude variations, so they cannot operate at a high efficiency level.
Sensitive to carrier offset and drift: Another disadvantage of OFDM is that it is sensitive to carrier frequency offset and drift; single carrier systems are less sensitive.
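The peak-to-average problem noted above is easy to demonstrate: summing many independently modulated subcarriers occasionally produces large peaks. The sketch below estimates the PAPR of one randomly generated 64-carrier QPSK OFDM symbol (all parameters are illustrative):

```python
import cmath, math, random

random.seed(1)
N = 64
# random QPSK data, one symbol per subcarrier
data = [random.choice([1+1j, 1-1j, -1+1j, -1-1j]) for _ in range(N)]

# naive inverse DFT builds the time-domain OFDM symbol
x = [sum(data[k] * cmath.exp(2j * cmath.pi * k * n / N)
         for k in range(N)) / N for n in range(N)]

powers = [abs(v) ** 2 for v in x]
papr_db = 10 * math.log10(max(powers) / (sum(powers) / N))
print(round(papr_db, 1))  # typically several dB for 64 carriers
```

A single-carrier signal with constant-envelope modulation would have a PAPR near 0 dB; the multi-carrier sum is what forces the amplifier back-off.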
There are several other variants of OFDM for which the initials are seen in the technical literature. These follow the basic format for OFDM, but have additional attributes or variations:
COFDM: Coded Orthogonal frequency division multiplexing. A form of OFDM where error correction coding is incorporated into the signal.
Flash OFDM: This is a variant of OFDM that was developed by Flarion and it is a fast hopped form of OFDM. It uses multiple tones and fast hopping to spread signals over a given spectrum band.
OFDMA: Orthogonal frequency division multiple access. A scheme used to provide a multiple access capability for applications such as cellular telecommunications when using OFDM technologies.
VOFDM: Vector OFDM. This form of OFDM uses the concept of MIMO technology and was developed by Cisco Systems. MIMO stands for Multiple Input Multiple Output: multiple antennas are used to transmit and receive the signals so that multi-path effects can be exploited to enhance signal reception and improve the transmission speeds that can be supported.
WOFDM: Wideband OFDM. The concept of this form of OFDM is that it uses a degree of spacing between the channels that is large enough that any frequency errors between transmitter and receiver do not affect the performance. It is particularly applicable to Wi-Fi systems.
Each of these forms of OFDM utilise the same basic concept of using close spaced orthogonal carriers each carrying low data rate signals. During the demodulation phase the data is then combined to provide the complete signal.
OFDM, orthogonal frequency division multiplexing has gained a significant presence in the wireless market place. The combination of high data capacity, high spectral efficiency, and its resilience to interference as a result of multi-path effects means that it is ideal for the high data applications that have become a major factor in today’s communications scene.
For more information on wireless technology and OFDM, please Contact Us
Leased Line Alternatives and Resilience using Wireless
Leased lines are often very expensive to install, with high operating costs. Unless you already have fibre connected to your building, you will need to get fibre installed from the nearest point of presence (PoP); this involves digging a trench and laying an armoured glass fibre cable between the two locations.
The cost of these civil works can be over £100 a metre in city locations. Urban areas require road closures, permits, traffic control, and repair bills for existing infrastructure such as drains. The capital expenditure required for a wireless bridge is therefore a fraction of the cost, saving time and budget.