E-band MMW Licensing in the UK by OFCOM


On Dec. 16, 2013, Ofcom, the UK telecom regulator, announced a new approach for the use of E-band wireless communications in the United Kingdom.

To summarize, the new approach, available for licensing from Dec. 17, 2013, splits the band into two segments: Ofcom will coordinate the lower 2GHz segment, while the upper 2.5GHz segment remains self-coordinated as under the prior policy.

The segment Ofcom coordinates will follow the usual regulatory processes for all the other fixed link bands it oversees, and Ofcom has already updated the relevant documents and forms to accommodate E-band. While wireless vendors would have preferred the larger portion of spectrum to be granted to the Ofcom-coordinated process, we welcome the new arrangement because it offers operators greater security and peace of mind, in terms of protection from interference, than the previous all-self-coordinated spectrum regime provided.


For a more detailed look at the new E-band arrangement, Figure 1 shows the Ofcom-coordinated section sitting in the lower half of both the 71-76GHz and 81-86GHz bands, thus allowing for the deployment of FDD systems in line with ECC/REC(05)07.


Figure 1: Segmented Plan for Mixed Management Approach

In terms of channelization within the Ofcom-coordinated block, the regulator announced that it would permit 8 x 250MHz channels, 4 x 500MHz channels, 1 x 750MHz channel and 1 x 1000MHz channel as per ECC/REC(05)07. Ofcom also stated that 62.5MHz and 125MHz channels will be implemented as soon as the relevant ETSI technical standards are published. Figure 2 shows the Ofcom channel plan:


Figure 2: Ofcom Permitted E-band Channelizations
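As a rough sanity check, the permitted plans above can be tallied against the 2GHz coordinated block (a sketch using the figures quoted in this article, not an Ofcom tool):

```python
# The permitted channel plans, checked against the 2GHz Ofcom-coordinated
# block (figures as quoted above; illustrative only).
plans = {
    "8 x 250 MHz": (8, 250),
    "4 x 500 MHz": (4, 500),
    "1 x 750 MHz": (1, 750),
    "1 x 1000 MHz": (1, 1000),
}
BLOCK_MHZ = 2000  # lower 2GHz segment in each of the 71-76GHz and 81-86GHz bands

for name, (count, width) in plans.items():
    used = count * width
    print(f"{name}: {used} MHz used, {BLOCK_MHZ - used} MHz spare")
```

Note that only the 250MHz and 500MHz plans fill the whole coordinated block; the single-channel plans leave spectrum spare for other links.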

Regarding equipment requirements, Ofcom stated that it will allow equipment that meets the appropriate sections of EN 302 217-2-2 and EN 302 217-4-2. This includes the antenna classes (e.g., classes 2-4) that allow the deployment of solutions with flat panel antennas. We welcome this approach and hope that other regulators currently considering opening up and/or revising their E-band rules – notably the FCC, in terms of antenna requirements – adopt similar approaches.

The license fees for the self-coordinated segment remain at £50 per link per annum, whereas in the Ofcom-coordinated segment the fees are bandwidth-based, as reflected in Figure 3:

Notwithstanding the fees consultation that Ofcom is currently undertaking, these “interim fees” will remain in place for five years, after which the results of the fees review may lead to their amendment.

Figure 3: Ofcom Bandwidth-based Fees

In response to feedback received during the consultation process, Ofcom will now require the following additional information for the self-coordination database within the self-coordinated block: antenna polarization (horizontal, vertical or dual), ETSI Spectrum Efficiency Class, and whether the link is TDD or FDD.



The Real Cost of Fiber Cuts: How to solve using Gigabit Wireless


Often you can’t avoid fiber cuts: they happen on public land or under public streets, outside your control. The vast majority of corporate LAN connections, cable, Internet and LTE backhaul are carried over fiber optic cable. In one report CNN stated that about 99 percent of all international communications occur over undersea cabling. Alan Mauldin, research director at U.S.-based research firm Telegeography, noted that while some major cabling projects can come with high price tags, fiber optics is considered more robust and more cost-effective than common wireless alternatives like satellite.

But while fiber optic cabling is traditionally seen as the safer option, that may be a misconception. When installed correctly, fiber optics is the “perfect” medium, transmitting Gigabits of data without interruption. However, any disruption to the fragile fiber causes data outages which can take days or weeks to locate and repair. According to data from the Federal Communications Commission, about a quarter of all network outages between 1993 and 2001 were caused by cables being cut. Regardless of how the fiber cut occurred, such outages can be particularly damaging.

How easy is it to repair a fiber cut?

Fiber is not a “self-healing” medium: skilled teams with specialist fiber-splicing and terminating equipment are required to repair a broken fiber connection. Most data communication engineers have neither this equipment nor the training to use it. Fiber repair is a specialist business, and getting trained people and splicing equipment to site costs time and money. The anticipated cost of a fiber repair should be factored into budgets for downtime and lost productivity for corporates – and for missed uptime SLAs for Service Providers – and into business continuity planning. In rural areas, access to sites can be limited, with some locations cut off by poor weather, and islands sometimes reachable only by infrequent sea or air services.

Common causes of fiber cut outages

As these instances show, there are many different ways in which fiber optic cabling can be disrupted:

By vandalism – This type of fiber cut outage has been worryingly common of late. According to CNN, there have been 11 separate incidents involving the cutting of fiber optic cable in the Bay Area since July 2015. The FBI noted that there have been more than 12 in the region since January, and that it has been hard to stop, in part because there is so much critical cabling in the area and because cables are typically clearly marked, The Wall Street Journal reported. Authorities noted that these incidents show no sign of slowing down either, as they have no clear suspect or motive at this time. The Journal also noted that some instances of fiber optic-related downtime are due not to vandalism, but to someone trying to steal metal.
By accident – This is perhaps one of the most common causes of fiber cuts, but accidental cuts are just as damaging. In one example reported by The Guardian, a 75-year-old woman in the country of Georgia was digging in a field when she accidentally severed a fiber optic cable. As a result of the mishap, close to 90 percent of Armenia and parts of Azerbaijan and Georgia were completely without Internet for more than five hours.

By force of nature – Tornadoes, hurricanes, earthquakes and other major natural disasters all have the potential to cut or entirely destroy fiber optic cabling. Other seemingly more benign forces of nature can also cripple connectivity, as Level 3 reported that 28 percent of all damage it sustained to its infrastructure in 2010 was caused by squirrels.

Calculating the impact of a fiber outage

In some of these fiber cut outage incidents, the fallout can be relatively minor. A cut that occurs in the middle of the night on a redundant line can be easy enough to deal with, with service providers sometimes able to reroute traffic in the interim. Unfortunately, however, such incidents often lead to much bigger problems for end users. For example, a cut fiber optic cable in northern Arizona in April left many thousands of people and businesses without telephone and Internet service for about 15 hours. This meant many shops had to either close or resort to manual tracking, and personal Internet usage ground to a halt, The Associated Press reported. More importantly, 911 emergency communications were disrupted in the incident.

It’s not just a hassle for end users, as cut fiber can severely impact public health when emergency services like police departments, fire stations and EMTs can’t take and receive calls. Plus, such incidents are very costly for service providers, forced to repair expensive infrastructure. They can also lead to canceled service, as customers become irate at service providers for failing to provide reliable connectivity at all times.
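To make the cost argument concrete, a simple estimate can combine lost revenue, the specialist repair callout and any SLA penalties (a sketch with entirely hypothetical figures; real costs vary widely):

```python
def outage_cost(hours_down, revenue_per_hour, repair_cost, sla_penalty=0.0):
    """Estimate the total cost of a fiber-cut outage (all inputs hypothetical)."""
    return hours_down * revenue_per_hour + repair_cost + sla_penalty

# Example: a 15-hour outage (like the Arizona incident above), with made-up
# figures for lost revenue, specialist splicing callout and SLA penalties
cost = outage_cost(hours_down=15, revenue_per_hour=2_000.0,
                   repair_cost=25_000.0, sla_penalty=10_000.0)
print(f"Estimated outage cost: £{cost:,.0f}")  # £65,000
```

Even with modest per-hour figures, the repair and penalty terms dominate quickly, which is why business continuity plans should price outage duration explicitly.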

What’s a solution to fiber cut outages?

One easy way to avoid the problems related to cut fiber is to not have fiber at all and instead pursue a wireless dark fiber alternative. For example, after a cable snafu caused residents of Washington state’s San Juan Islands to go without telephone, Internet and cell service for 10 days in 2013, CenturyLink installed a wireless mobile backhaul option there, according to The AP.

By opting for a Gigabit Wireless solution such as Microwave, MMW, Free Space Optics or MIMO OFDM radio, service providers gain a wireless alternative to cabling that is just as robust and fast as fiber. With a Gigabit Wireless link in place, cut fiber optic cabling is far less disruptive to end users and ISPs.

Where do I find out more about solving fiber cut issues?

For more information please contact us

FSO Guide – Free Space Optics, Optical Wireless


Free Space Optics (FSO) communications, also called Optical Wireless (OW) or Infrared Laser, refers to the transmission of modulated visible or infrared (IR) beams through the atmosphere to obtain optical communications. Like fibre, Free Space Optics (FSO) uses lasers to transmit data, but instead of enclosing the data stream in a glass fibre, it is transmitted through the air. Free Space Optics (FSO) works on the same basic principle as Infrared television remote controls, wireless keyboards or IRDA ports on laptops or cellular phones.

History of Free Space Optics (FSO)
The engineering maturity of Free Space Optics (FSO) is often underestimated, due to a misunderstanding of how long Free Space Optics (FSO) systems have been under development. Historically, Free Space Optics (FSO) or optical wireless communications was first demonstrated by Alexander Graham Bell in the late nineteenth century, shortly after his demonstration of the telephone. Bell’s Free Space Optics (FSO) experiment modulated voice sounds onto a beam of light and transmitted them through free air space between receivers over a distance of some 600 feet. Calling his experimental device the “photophone,” Bell considered this optical technology – and not the telephone – his pre-eminent invention because it did not require wires for transmission.

Although Bell’s photophone never became a commercial reality, it demonstrated the basic principle of optical communications. Essentially all of the engineering of today’s Free Space Optics (FSO) or free space optical communications systems was done over the past 40 years or so, mostly for defense applications. By addressing the principal engineering challenges of Free Space Optics (FSO), this aerospace/defence activity established a strong foundation upon which today’s commercial laser-based Free Space Optics (FSO) systems are based.

How Free Space Optics (FSO) Works
Free Space Optics (FSO) transmits invisible, eye-safe light beams from one “telescope” to another using low-power infrared lasers operating at frequencies of hundreds of terahertz. The beams of light in Free Space Optics (FSO) systems are transmitted by laser light focused on highly sensitive photon detector receivers. These receivers use telescopic lenses to collect the photon stream and recover digital data containing a mix of Internet messages, video images, radio signals or computer files. Commercially available systems offer capacities in the range of 100 Mbps to 2.5 Gbps, and demonstration systems have reported data rates as high as 160 Gbps.

Free Space Optics (FSO) systems can function over distances of several kilometres. As long as there is a clear line of sight between the source and the destination, and enough transmitter power, Free Space Optics (FSO) communication is possible.
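The “enough transmitter power” condition can be expressed as a simplified link budget, where the dominant terms are geometric spreading of the beam and atmospheric attenuation. The sketch below uses an assumed textbook-style model, not a vendor formula, and all parameter values are illustrative:

```python
import math

def fso_link_margin_db(tx_power_dbm, rx_sensitivity_dbm, range_m,
                       beam_divergence_mrad, rx_aperture_m,
                       atmos_loss_db_per_km, system_loss_db=3.0):
    """Simplified FSO link-budget sketch (assumed model, illustrative values).

    Geometric loss assumes the beam spreads to a spot of diameter
    ~ divergence * range, of which the receive aperture collects a fraction.
    """
    spot_diameter_m = beam_divergence_mrad * 1e-3 * range_m
    geo_loss_db = 20 * math.log10(max(spot_diameter_m / rx_aperture_m, 1.0))
    atmos_loss_db = atmos_loss_db_per_km * range_m / 1000.0
    rx_power_dbm = tx_power_dbm - geo_loss_db - atmos_loss_db - system_loss_db
    return rx_power_dbm - rx_sensitivity_dbm

# Example: 1 km link, 2 mrad beam, 20 cm receive aperture, clear air (~0.2 dB/km)
margin = fso_link_margin_db(tx_power_dbm=10.0, rx_sensitivity_dbm=-30.0,
                            range_m=1000.0, beam_divergence_mrad=2.0,
                            rx_aperture_m=0.2, atmos_loss_db_per_km=0.2)
print(f"Link margin: {margin:.1f} dB")
```

A positive margin of this kind is what is consumed by fog, scintillation and misalignment, which the sections below discuss.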

FSO: Wireless Links at the Speed of Light
Unlike radio and microwave systems, Free Space Optics (FSO) is an optical technology: no spectrum licensing or frequency coordination with other users is required, interference from or to other systems or equipment is not a concern, and the point-to-point laser signal is extremely difficult to intercept, and therefore secure. Data rates comparable to optical fibre transmission can be carried by Free Space Optics (FSO) systems with very low error rates, while the extremely narrow laser beam widths mean there is almost no practical limit to the number of separate Free Space Optics (FSO) links that can be installed in a given location.

How Free Space Optics (FSO) benefits you
FSO is free from licensing and regulation which translates into ease, speed and low cost of deployment. Since Free Space Optics (FSO) transceivers can transmit and receive through windows, it is possible to mount Free Space Optics (FSO) systems inside buildings, reducing the need to compete for roof space, simplifying wiring and cabling, and permitting Free Space Optics (FSO) equipment to operate in a very favourable environment. The only essential requirement for Free Space Optics (FSO) or optical wireless transmission is line of sight between the two ends of the link.

For Metro Area Network (MAN) providers the last mile – or even the last few feet – can be the most daunting. Free Space Optics (FSO) networks can close this gap and allow new customers access to high-speed MANs. Providers can also take advantage of the reduced risk of installing a Free Space Optics (FSO) network, which can later be redeployed.

The Market. Why FSO? Breaking the Bandwidth Bottleneck
Why FSO? The global telecommunications network has seen massive expansion over the last few years. First came the tremendous growth of the optical fiber long-haul, wide-area network (WAN), followed by a more recent emphasis on metropolitan area networks (MANs). Meanwhile, local area networks (LANs) and gigabit Ethernet ports are being deployed with a comparable growth rate. In order for this tremendous network capacity to be exploited, and for the users to be able to utilize the broad array of new services becoming available, network designers must provide a flexible and cost-effective means for the users to access the telecommunications network. Presently, however, most local loop network connections are limited to 1.5 Mbps (a T1 line). As a consequence, there is a strong need for a high-bandwidth bridge (the “last mile” or “first mile”) between the LANs and the MANs or WANs.


A recent New York Times article reported that more than 100 million miles of optical fibre were laid around the world in the last two years, as carriers reacted to the Internet phenomenon and end users’ insatiable demand for bandwidth. The sheer scale of connecting whole communities, cities and regions to that fiber optic cable or “backbone” is something not many players understood well. Despite the huge investment in trenching and optical cable, most of the fibre remains unlit, 80 to 90% of office, commercial and industrial buildings are not connected to fibre, and transport prices are dropping dramatically.

Free Space Optics (FSO) systems represent one of the most promising approaches for addressing the emerging broadband access market and its “last mile” bottleneck. Free Space Optics (FSO) systems offer many features, principal among them being low start-up and operational costs, rapid deployment, and high fiber-like bandwidths due to the optical nature of the technology.

Broadband Bandwidth Alternatives
Access technologies in general use today include telco-provisioned copper wire, wireless Internet access, broadband RF/microwave, coaxial cable and direct optical fiber connections (fiber to the building; fiber to the home). Telco/PTT telephone networks are still trapped in the old Time Division Multiplex (TDM) based network infrastructure that rations bandwidth to the customer in increments of 1.5 Mbps (T-1) or 2.048 Mbps (E-1). DSL penetration rates have been throttled by slow deployment and the pricing strategies of the PTTs. Cable modem access has had more success in residential markets, but suffers from security and capacity problems, and is generally conditional on the user subscribing to a package of cable TV channels. Wireless Internet access is still slow, and the tiny screen renders it of little appeal for web browsing.

Broadband RF/microwave systems have severe limitations and are losing favor. The radio spectrum is a scarce and expensive licensed commodity, sold or leased to the highest bidder, or on a first-come first-served basis, and all too often, simply unavailable due to congestion. As building owners have realized the value of their roof space, the price of roof rights has risen sharply. Furthermore, radio equipment is not inexpensive, the maximum data rates achievable with RF systems are low compared to optical fiber, and communications channels are insecure and subject to interference from and to other systems (a major constraint on the use of radio systems).

Free Space Optics (FSO) Advantages
Free Space Optics (FSO) systems offer a flexible networking solution that delivers on the promise of broadband. Only Free Space Optics (FSO) provides the essential combination of qualities required to bring traffic to the optical fiber backbone – virtually unlimited bandwidth, low cost, and ease and speed of deployment. Freedom from licensing and regulation translates into ease, speed and low cost of deployment. Since Free Space Optics (FSO) optical wireless transceivers can transmit and receive through windows, it is possible to mount Free Space Optics (FSO) systems inside buildings, reducing the need to compete for roof space, simplifying wiring and cabling, and permitting the equipment to operate in a very favorable environment. The only essential for Free Space Optics (FSO) is line of sight between the two ends of the link.

Security and Free Space Optics (FSO)
The common perception of wireless is that it offers less security than wireline connections. In fact, Free Space Optics (FSO) is far more secure than RF or other wireless-based transmission technologies for several reasons:
  • Free Space Optics (FSO) laser beams cannot be detected with spectrum analyzers or RF meters
  • Free Space Optics (FSO) laser transmissions are optical and travel along a line-of-sight path that cannot be intercepted easily. Completing the transmission requires a matching, carefully aligned Free Space Optics (FSO) transceiver; interception is very difficult and extremely unlikely
  • The laser beams generated by Free Space Optics (FSO) systems are narrow and invisible, making them harder to find and even harder to intercept and crack
  • Data can be transmitted over an encrypted connection, adding to the degree of security available in Free Space Optics (FSO) network transmissions.
Free Space Optics (FSO) Challenges
The advantages of free space optical wireless or Free Space Optics (FSO) do not come without some cost. When light is transmitted through optical fiber, transmission integrity is quite predictable – barring unforeseen events such as backhoes or animal interference. When light is transmitted through the air, as with Free Space Optics (FSO) optical wireless systems, it must contend with a complex and not always quantifiable medium – the atmosphere.

Attenuation, Fog and Free Space Optics (FSO)
Fog substantially attenuates visible radiation, and it has a similar effect on the near-infrared wavelengths that are employed in Free Space Optics (FSO) systems. Note that the effect of fog on Free Space Optics (FSO) optical wireless radiation is entirely analogous to the attenuation – and fades – suffered by RF wireless systems due to rainfall. As with rain attenuation in RF wireless, fog attenuation is not a “show-stopper” for Free Space Optics (FSO) optical wireless, because the optical link can be engineered such that, for a large fraction of the time, an acceptable power will be received even in the presence of heavy fog. Free Space Optics (FSO) optical wireless-based communication systems can be enhanced to yield even greater availabilities.

Fog is a major source of attenuation of FSO (Free Space Optics) infrared signals
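The link engineering mentioned above starts from an estimate of fog attenuation. A common visibility-based approximation is the Kim model; the sketch below uses coefficients from the literature, not from this article, and all inputs are illustrative:

```python
def fog_attenuation_db_per_km(visibility_km, wavelength_nm=1550.0):
    """Estimate atmospheric attenuation from visibility (Kim model sketch)."""
    v = visibility_km
    if v > 50:
        q = 1.6
    elif v > 6:
        q = 1.3
    elif v > 1:
        q = 0.16 * v + 0.34
    elif v > 0.5:
        q = v - 0.5
    else:
        q = 0.0
    return (3.91 / v) * (wavelength_nm / 550.0) ** (-q)

# Dense fog (200 m visibility) vs clear air (20 km visibility) at 1550 nm
print(fog_attenuation_db_per_km(0.2))   # roughly 20 dB/km in dense fog
print(fog_attenuation_db_per_km(20.0))  # well under 1 dB/km in clear air
```

The gap between the two figures is exactly the link margin a designer must budget for, which is why shorter hops yield higher availability.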

Free Space Optics (FSO) and Physical Obstructions
Free Space Optics (FSO) products which have widely spaced redundant transmitters and large receive optics all but eliminate interference concerns from objects such as birds. On a typical day, an object covering 98% of the receive aperture and blocking all but one transmitter will not cause a Free Space Optics (FSO) link to drop out. Thus birds are unlikely to have any impact on Free Space Optics (FSO) transmission.

Free Space Optics (FSO) Pointing Stability – Building Sway, Tower Movement
Only wide-beamwidth, fixed-pointed Free Space Optics (FSO) systems are capable of handling the vast majority of movement found in deployments on buildings. ‘Wide beam’ means more than 5 milliradians; narrow-beam systems (1-2 mrad) are not reliable without a tracking system, requiring manual re-alignment on a regular basis due to building movement.
The combination of effective beam divergence and a well matched receive Field-of-View (FOV) provide for an extremely robust fixed pointed Free Space Optics (FSO) system suitable for most deployments. Fixed-pointed Free Space Optics (FSO) systems are generally preferred over actively-tracked Free Space Optics (FSO) systems due to their lower cost.
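The effect of beam divergence can be illustrated with simple geometry: the spot diameter at the far end grows roughly linearly with range, so a wide beam leaves a large margin for building sway. The 5 mrad and 1-2 mrad figures come from the text above; the 10 cm transmit aperture is an assumed value:

```python
def beam_spot_diameter_m(divergence_mrad, range_m, tx_aperture_m=0.1):
    """Approximate far-field spot size: transmit aperture plus divergence * range.

    A simple geometric sketch; tx_aperture_m (10 cm) is an assumed value.
    """
    return tx_aperture_m + divergence_mrad * 1e-3 * range_m

# A 'wide' 5 mrad beam vs a narrow 1 mrad beam over a 500 m link
wide = beam_spot_diameter_m(5.0, 500.0)     # 2.6 m spot
narrow = beam_spot_diameter_m(1.0, 500.0)   # 0.6 m spot
print(wide, narrow)
```

A 2.6 m spot gives a fixed-pointed head over a metre of lateral tolerance at the far end, which is why beams wider than 5 mrad ride out normal building sway while a 0.6 m spot does not.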

Free Space Optics (FSO) and Scintillation
Performance of many Free Space Optics (FSO) optical wireless systems is adversely affected by scintillation on bright sunny days, the effects of which typically show up in BER statistics. Some optical wireless products combine a large-aperture receiver, widely spaced transmitters, finely tuned receive filtering, and automatic gain control. In addition, certain optical wireless systems apply a clock-recovery phase-locked-loop time constant that all but eliminates the effects of atmospheric scintillation and jitter transference.

Solar Interference and Free Space Optics (FSO)
Solar interference in Free Space Optics (FSO) free space optical systems can be combated in two ways. First, an optical narrowband filter preceding the receive detector filters out all but the wavelength actually used for intersystem communications. Second, to handle off-axis solar energy, sophisticated spatial filters have been implemented in CableFree systems, allowing them to operate unaffected by solar interference that is more than 1 degree off-axis.

Free Space Optics (FSO) Reliability
Employing an adaptive laser power (Automatic Transmit Power Control, or ATPC) scheme to dynamically adjust the laser power in response to weather conditions improves the reliability of Free Space Optics (FSO) optical wireless systems. In clear weather the transmit power is greatly reduced, enhancing laser lifetime by operating the laser under very low-stress conditions. In severe weather, the laser power is increased as needed to maintain the optical link, then decreased again as the weather clears. A TEC controller that holds the laser transmitter diodes in their optimum temperature region maximizes reliability and lifetime, allowing the FSO optical wireless system to operate efficiently and reliably at higher power levels.
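A minimal sketch of such an ATPC loop might look like the following; the thresholds, step sizes and power limits are illustrative assumptions, not a description of any vendor's actual algorithm:

```python
def atpc_step(margin_db, power_dbm, target_margin_db=10.0,
              step_db=1.0, min_dbm=-5.0, max_dbm=13.0):
    """One iteration of a simple ATPC loop (illustrative, not a vendor algorithm).

    Raises laser power when the measured link margin falls below the target
    (worsening weather) and lowers it when the margin is comfortably above
    the target (clear weather), reducing stress on the laser.
    """
    if margin_db < target_margin_db:
        power_dbm += step_db
    elif margin_db > target_margin_db + 2.0:
        power_dbm -= step_db
    return min(max(power_dbm, min_dbm), max_dbm)

# Fog rolls in: the margin collapses and the power steps up each iteration
p = 0.0
for margin in [15.0, 8.0, 5.0, 3.0]:
    p = atpc_step(margin, p)
print(p)  # 2.0 after four iterations
```

The dead-band between the raise and lower thresholds prevents the loop from oscillating in steady conditions.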

For more information on Free Space Optics, please Contact Us


OFDM (Orthogonal Frequency Division Multiplexing)

What is OFDM?   (Orthogonal Frequency Division Multiplexing)

OFDM, Orthogonal Frequency Division Multiplexing, is a form of signal modulation that divides a high-data-rate stream across many slowly modulated, narrowband, closely spaced subcarriers, and in this way is less sensitive to frequency-selective fading.

Orthogonal Frequency Division Multiplexing or OFDM is a modulation format that is being used for many of the latest wireless and telecommunications standards.

OFDM has been adopted in the Wi-Fi arena by standards such as 802.11a, 802.11n and 802.11ac. It has also been chosen for the cellular telecommunications standard LTE / LTE-A, and in addition it has been adopted by other standards such as WiMAX and many more.

Orthogonal frequency division multiplexing has also been adopted for a number of broadcast standards from DAB Digital Radio to the Digital Video Broadcast standards, DVB. It has also been adopted for other broadcast systems as well, including Digital Radio Mondiale, used for the long, medium and short wave bands.

Although OFDM, orthogonal frequency division multiplexing is more complicated than earlier forms of signal format, it provides some distinct advantages in terms of data transmission, especially where high data rates are needed along with relatively wide bandwidths.

What is OFDM? – The concept

OFDM is a form of multicarrier modulation. An OFDM signal consists of a number of closely spaced modulated carriers. When modulation of any form – voice, data, etc. – is applied to a carrier, sidebands spread out on either side. A receiver must be able to receive the whole signal to successfully demodulate the data. As a result, when signals are transmitted close to one another they must be spaced so that the receiver can separate them using a filter, and there must be a guard band between them. This is not the case with OFDM. Although the sidebands from each carrier overlap, they can still be received without the interference that might be expected because they are orthogonal to each other. This is achieved by having the carrier spacing equal to the reciprocal of the symbol period.

OFDM Signals

Traditional view of receiving signals carrying modulation

To see how OFDM works, it is necessary to look at the receiver. This acts as a bank of demodulators, translating each carrier down to DC. The resulting signal is integrated over the symbol period to regenerate the data from that carrier. Each demodulator also sees the other carriers, but because the carrier spacing is equal to the reciprocal of the symbol period, each of them has a whole number of cycles in the symbol period and their contributions sum to zero – in other words, there is no interference contribution.
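This zero-interference property can be checked numerically: with the carrier spacing equal to the reciprocal of the symbol period, correlating two different subcarriers over one symbol gives zero. A small sketch; the sample count and carrier indices are arbitrary choices:

```python
import cmath

# Carrier spacing equal to 1/T makes subcarriers orthogonal over one symbol.
T = 1.0          # symbol period (arbitrary units)
N = 1000         # samples per symbol
spacing = 1.0 / T

def subcarrier(k, n):
    # sample n of complex subcarrier k over one symbol period
    return cmath.exp(2j * cmath.pi * k * spacing * (n / N) * T)

def correlation(k, l):
    # normalized correlation of subcarriers k and l over the symbol
    return sum(subcarrier(k, n) * subcarrier(l, n).conjugate()
               for n in range(N)) / N

print(abs(correlation(3, 3)))   # 1.0 (same carrier)
print(abs(correlation(3, 4)))   # ~0  (orthogonal neighbours)
```

Any non-integer spacing would leave a residual correlation, which is exactly the inter-carrier interference that carrier-frequency offset causes (see the disadvantages section below).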

OFDM Spectrum

One requirement of the OFDM transmitting and receiving systems is that they must be linear. Any non-linearity will cause interference between the carriers as a result of inter-modulation distortion. This will introduce unwanted signals that would cause interference and impair the orthogonality of the transmission.

In terms of the equipment to be used, the high peak-to-average ratio of multi-carrier systems such as OFDM requires the RF final amplifier at the transmitter output to handle the peaks while the average power is much lower, which leads to inefficiency. In some systems the peaks are limited; although this introduces distortion that results in a higher level of data errors, the system can rely on error correction to remove them.

Data on OFDM

The data to be transmitted on an OFDM signal is spread across the carriers of the signal, each carrier taking part of the payload, which reduces the data rate taken by each carrier. The lower data rate has the advantage that interference from reflections is much less critical. This is helped by adding a guard time, or guard interval, to the system, which ensures that the data is only sampled when the signal is stable and no new delayed signals arrive that would alter the timing and phase of the signal.

OFDM Guard Interval
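The guard interval is commonly implemented as a cyclic prefix. The sketch below (an assumed OFDM-style cyclic prefix with illustrative parameters) shows a delayed echo arriving within the guard interval being removed by a one-tap equalizer after the DFT:

```python
import cmath

# Guard-interval sketch: an OFDM symbol with a cyclic prefix survives a
# delayed echo, because an echo arriving within the guard interval leaves
# each subcarrier intact up to a complex gain that a one-tap equalizer removes.
N, CP = 8, 2

def dft(x, sign):
    # direct DFT (sign=-1) or unscaled inverse DFT (sign=+1)
    return [sum(x[n] * cmath.exp(sign * 2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

data = [1, -1, 1, 1, -1, 1, -1, -1]             # BPSK on 8 subcarriers
time_dom = [v / N for v in dft(data, +1)]       # IDFT to the time domain
tx = time_dom[-CP:] + time_dom                  # prepend the cyclic prefix

# Two-path channel: direct ray plus an echo delayed by one sample
y = [tx[n] + 0.5 * (tx[n - 1] if n > 0 else 0) for n in range(len(tx))]

rx = y[CP:CP + N]                               # receiver strips the guard
R = dft(rx, -1)
H = [1 + 0.5 * cmath.exp(-2j * cmath.pi * k / N) for k in range(N)]
recovered = [round((R[k] / H[k]).real) for k in range(N)]
print(recovered)                                # [1, -1, 1, 1, -1, 1, -1, -1]
```

Because the prefix makes the channel's action circular, the echo shows up only as a per-subcarrier gain H[k], and dividing by it restores the data exactly.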

The distribution of the data across a large number of carriers in the OFDM signal has some further advantages. Nulls caused by multi-path effects or interference on a given frequency affect only a small number of the carriers, the remaining ones being received correctly. By using error-coding techniques – which add further data to the transmitted signal – many or all of the corrupted data can be reconstructed within the receiver. This is possible because the error-correction code is transmitted in a different part of the signal.

OFDM advantages & disadvantages

OFDM advantages

OFDM has been used in many high data rate wireless systems because of the many advantages it provides.

  • Immunity to selective fading:   One of the main advantages of OFDM is that it is more resistant to frequency selective fading than single carrier systems because it divides the overall channel into multiple narrowband signals that are affected individually as flat fading sub-channels.
  • Resilience to interference:   Interference appearing on a channel may be bandwidth limited and in this way will not affect all the sub-channels. This means that not all the data is lost.
  • Spectrum efficiency:   Using close-spaced overlapping sub-carriers, a significant OFDM advantage is that it makes efficient use of the available spectrum.
  • Resilient to ISI:   Another advantage of OFDM is that it is very resilient to inter-symbol and inter-frame interference. This results from the low data rate on each of the sub-channels.
  • Resilient to narrow-band effects:   Using adequate channel coding and interleaving it is possible to recover symbols lost due to the frequency selectivity of the channel and narrow band interference. Not all the data is lost.
  • Simpler channel equalisation:   One of the issues with CDMA systems was the complexity of the channel equalisation which had to be applied across the whole channel. An advantage of OFDM is that using multiple sub-channels, the channel equalization becomes much simpler.

OFDM disadvantages

Whilst OFDM has been widely used, there are still a few disadvantages to its use which need to be addressed when considering its use.

  • High peak to average power ratio:   An OFDM signal has a noise-like amplitude variation with a relatively large dynamic range, or peak to average power ratio. This impacts RF amplifier efficiency: the amplifiers need to be linear and accommodate the large amplitude variations, which means they cannot operate at a high efficiency level.
  • Sensitive to carrier offset and drift:   Another disadvantage of OFDM is that it is sensitive to carrier frequency offset and drift. Single carrier systems are less sensitive.
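The peak-to-average power ratio can be seen directly by generating an OFDM symbol from unit-magnitude subcarriers with random phases (subcarrier count and seed are illustrative choices):

```python
import cmath
import math
import random

# PAPR sketch: a 64-subcarrier OFDM symbol shows peaks well above its
# average power, which the RF amplifier must accommodate linearly.
random.seed(1)
N = 64
X = [cmath.exp(2j * cmath.pi * random.random()) for _ in range(N)]

# Time-domain OFDM symbol via a direct IDFT
x = [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
     for n in range(N)]

powers = [abs(v) ** 2 for v in x]
papr_db = 10 * math.log10(max(powers) / (sum(powers) / len(powers)))
print(f"PAPR: {papr_db:.1f} dB")
```

A single-carrier constant-envelope signal would sit near 0 dB; the several-dB figure here is the headroom the OFDM power amplifier must leave, which is the efficiency cost described above.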

OFDM variants

There are several other variants of OFDM for which the initials are seen in the technical literature. These follow the basic format for OFDM, but have additional attributes or variations:

  • COFDM:   Coded Orthogonal frequency division multiplexing. A form of OFDM where error correction coding is incorporated into the signal.
  • Flash OFDM:   This is a variant of OFDM that was developed by Flarion and it is a fast hopped form of OFDM. It uses multiple tones and fast hopping to spread signals over a given spectrum band.
  • OFDMA:   Orthogonal frequency division multiple access. A scheme used to provide a multiple access capability for applications such as cellular telecommunications when using OFDM technologies.
  • VOFDM:   Vector OFDM. This form of OFDM uses the concept of MIMO technology and was developed by Cisco Systems. MIMO stands for Multiple Input Multiple Output, and it uses multiple antennas to transmit and receive the signals so that multi-path effects can be utilised to enhance signal reception and improve the transmission speeds that can be supported.
  • WOFDM:   Wideband OFDM. The concept of this form of OFDM is that it uses a degree of spacing between the channels that is large enough that any frequency errors between transmitter and receiver do not affect the performance. It is particularly applicable to Wi-Fi systems.

Each of these forms of OFDM utilises the same basic concept of closely spaced orthogonal carriers, each carrying a low data rate signal. During demodulation the data is combined to provide the complete signal.

OFDM, orthogonal frequency division multiplexing has gained a significant presence in the wireless market place. The combination of high data capacity, high spectral efficiency, and its resilience to interference as a result of multi-path effects means that it is ideal for the high data applications that have become a major factor in today’s communications scene.

For more information on wireless technology and OFDM, please Contact Us

Gigabit Leased Line Replacement

Leased Line Alternatives and Resilience using Wireless

Leased lines are often very expensive to install, with high operating costs. Unless you already have fibre connected to your building, you will need fibre installed from the nearest point of presence (PoP); this involves digging a trench and laying an armoured glass fibre cable between the two locations.

Leased Line installations are expensive and slow to install
Digging and installing leased lines in Urban areas can be expensive, slow and disruptive

The cost of these civil works can be over £100 a metre in city locations. Urban areas require road closures, permits, traffic control, and repair bills for existing infrastructure such as drains. The capital expenditure required for a wireless bridge is therefore a fraction of the cost, saving time and budget.

With a CableFree Wireless Network we can offer:

  • Gigabit Full Duplex connectivity
  • Multi-mile connectivity
  • 5 Nines or higher availability (99.999+% uptime)
  • Gigabit Links from under £8,000 fully installed
  • Rapid deployment
  • Fully supported and licenced

We offer a complete suite of wireless link technologies, including:

  • Microwave
  • Millimeter Wave (E-band and V-band)
  • Free Space Optics (FSO)
  • AC MIMO OFDM Radio

to match exact requirements, ensuring the most cost-effective solution for our customers.

Please Contact Us for more information on Leased Line Replacement options

Introduction to Millimeter Wave Technology

Introduction to Millimeter Wave Technology

CableFree Millimeter Wave MMW Link
CableFree Millimeter Wave (MMW) Link

Millimeter Wave technology, also known as MMW or Millimetre Wave, is being rapidly adopted by users ranging from enterprise data centres to individual consumers with smartphones. With all of these users requiring higher bandwidth, the demand for new technologies that deliver higher data transmission rates is greater than ever before.

A wide range of technologies exist for the delivery of high throughput, with fibre optic cable considered the gold standard. However, fibre is not unrivalled, especially once economic factors are considered. Millimeter wave wireless technology offers the potential to deliver bandwidth comparable to that of fibre optics, but without the logistical and financial drawbacks of fibre deployments.

Millimeter waves occupy the RF spectrum between 30GHz and 300GHz, with wavelengths between 1 and 10 millimetres. In wireless networking and communications, however, the name Millimeter Wave generally refers to a few select bands of radio frequencies around 38, 60 and, more recently, the high-potential 70 and 80 GHz bands that have been assigned for public use for wireless networking and communications.

Commercial Millimeter Wave (MMW) links from CableFree feature high performance, reliable, high capacity wireless networking with latest generation features.

MM Wave Spectrum

Millimeter Wave MMW Spectrum
Millimeter Wave MMW Spectrum

In the UK, three frequency bands have been allocated for commercial Millimeter Wave usage, as follows:

57 – 66GHz: The 60GHz Millimeter Wave band, or V-Band, is governed by OFCOM. The large amount of signal absorption by atmospheric oxygen and tight regulations make this band best suited to short-range Point-to-Point and Point-to-Multipoint Millimetre Wave solutions. Between 57 – 64GHz the band is licensed and regulated, while from 64 – 66GHz it is unlicensed and self-coordinated.

71 – 76GHz and 81 – 86GHz: The 70GHz and 80GHz Millimeter Wave bands, or E-Bands, are governed by OFCOM for licensed operation only and are regarded as the bands best suited to Point-to-Point and Point-to-Multipoint Millimeter Wave networking and communication transmission. Each band has 5GHz of spectrum available, which together totals more than all other assigned frequency bands added together. Each 5GHz range can act as a single contiguous wireless transmission channel, allowing very efficient use of the whole band; this results in throughput speeds from 1 to 3 Gbps whilst using only simple modulation techniques such as OOK (On-Off Keying) or BPSK (Binary Phase Shift Keying). These speeds are substantially higher than those achieved at lower frequencies using much more complex, higher orders of modulation, so even higher throughput should be achievable with Millimetre Wave devices once the same advanced techniques are applied. It should be only a matter of time before market demand brings these to the forefront.

In the US, an additional band is available alongside those above:

92 – 95GHz: The 94GHz Millimeter Wave band, or W-Band, is governed by FCC Part 15 for unlicensed operation, but only for indoor usage. It may also be used for outdoor Point-to-Point applications under the FCC Part 101 regulations, but because the range between 94 – 94.1GHz is excluded, the band is less spectrally efficient than the others.

The 71-76, 81-86 and 92-95 GHz bands are also used for point-to-point high-bandwidth communication links. These frequencies, as opposed to the 60 GHz band, do not suffer from the effects of oxygen absorption, but require a transmitting license in the US from the Federal Communications Commission (FCC). There are plans for 10 Gbit/s links using these frequencies as well. In the case of the 92–95 GHz band, a small 100 MHz range has been reserved for space-borne radios, limiting transmissions in this band to rates of under a few gigabits per second.

The band is essentially undeveloped and available for use in a broad range of new products and services, including high-speed, point-to-point wireless local area networks and broadband Internet access. WirelessHD is another recent technology that operates near the 60 GHz range. Highly directional, “pencil-beam” signal characteristics permit different systems to operate close to one another without causing interference. Potential applications include radar systems with very high resolution.

The upcoming Wi-Fi standard IEEE 802.11ad will run on the 60 GHz (V band) spectrum with data transfer rates of up to 7 Gbit/s.

Uses of the millimeter wave bands include point-to-point communications, intersatellite links, and point-to-multipoint communications.

Because of their shorter wavelengths, these bands permit the use of smaller antennas than would be required for similar circumstances in the lower bands, to achieve the same high directivity and high gain. The immediate consequence of this high directivity, coupled with the high free space loss at these frequencies, is the possibility of more efficient use of the spectrum for point-to-multipoint applications. Since a greater number of highly directive antennas can be placed in a given area than less directive antennas, the net result is higher reuse of the spectrum, and a higher density of users, compared to lower frequencies. Furthermore, because more voice channels or broadband information can be carried at higher frequencies, this spectrum could potentially be used as a replacement for or supplement to fiber optics.
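The free-space loss mentioned above follows the standard free-space path loss formula, FSPL = 20·log10(4πdf/c). A short sketch comparing an E-band frequency with a lower microwave band over the same path (the 1 km distance and the two frequencies are illustrative choices):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss, 20*log10(4*pi*d*f/c), in dB."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

for f_ghz in (20, 70):
    print(f"{f_ghz} GHz over 1 km: {fspl_db(1000, f_ghz * 1e9):.1f} dB")
```

The extra loss at 70 GHz (about 11 dB versus 20 GHz) is recovered by the higher gain a fixed-size antenna achieves at the shorter wavelength, which is why small, highly directive antennas are practical in these bands.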


Bandwidth & Scalable Capacity

CableFree Millimeter Wave MMW Link
CableFree Millimeter Wave MMW Link

The main benefit that Millimeter Wave technology has over lower RF frequencies is the 5GHz of spectral bandwidth available in each of the E-Band ranges, which currently supports speeds of 1.25Gbps Full Duplex, with throughput of up to 10Gbps Full Duplex possible. As market demand increases and better modulation techniques are implemented, the spectral efficiency of the equipment will improve, allowing it to meet the higher capacity demands of future networks.
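To see why 5GHz of contiguous spectrum leaves so much headroom, the Shannon limit C = B·log2(1 + SNR) can be evaluated for a full E-band segment. The SNR values below are arbitrary illustrations, not link measurements:

```python
import math

bandwidth_hz = 5e9                     # one full E-band segment
for snr_db in (10, 20, 30):            # assumed example SNRs
    snr = 10 ** (snr_db / 10)          # convert dB to linear ratio
    capacity_gbps = bandwidth_hz * math.log2(1 + snr) / 1e9
    print(f"SNR {snr_db:2d} dB -> Shannon limit {capacity_gbps:.1f} Gbps")
```

Even at modest SNR the theoretical ceiling is tens of Gbps, which is why better modulation is expected to lift real-world throughput well beyond today's 1.25Gbps.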

Whereas low-frequency microwave signals have a wide beamwidth, which limits reuse of the same frequencies within a local geographic area, Millimeter Wave signals are transmitted in very narrow, focused beams, allowing multiple deployments in close proximity on the same frequency ranges. A 70GHz signal allows a deployment density of around 15 times that of a 20GHz signal, making Millimeter Wave ideal for Point-to-Point Mesh, Ring and dense Hub & Spoke network topologies, where cross-signal interference would quickly become a significant limiting factor for lower frequency signals.

Propagation & Signal Attenuation

In general, Millimeter Wave links can reach up to 10km, depending on factors such as equipment specifications and environmental conditions. The propagation properties of Millimeter Waves are much like those of the other popular wireless networking frequencies in that they are most significantly affected by air moisture; atmospheric oxygen is also a large factor in the 60GHz band, but almost negligible in the other ranges, at under 0.2 dB per km.

Water vapour attenuates the signal by up to 3dB/km at high humidity levels, and clouds and fog act in a very similar way, depending on the density and amount of droplets in the air. These losses are relatively low and only become a major factor on links of 5km or more.

Effect                           Signal Loss (dB/km)
Oxygen absorption at sea level   0.22
Humidity of 100% at 30°C         1.8
Heavy fog of 50m visibility      3.2
Heavy rain shower at 25mm/hr     10.7

At the 70 and 80GHz bands, water in the form of rain plays the most significant role in signal attenuation, as it does for lower frequency signals too. The rate of rainfall, measured in mm/hour, is the determining factor: the harder it rains, the greater the signal loss. Propagation loss is also directly proportional to distance, so if the distance between transmitter and receiver is doubled, the loss in dB doubles too. Rainfall therefore has a strong effect on availability (discussed below); even so, successful links can be set up in areas with occasional heavy downpours.

Rainfall Type    Rain Rate      Signal Loss (dB/km)
Light Shower     1 mm/hour      0.9
Normal Rain      4 mm/hour      2.6
Heavy Burst      25 mm/hour     10.7
Intense Storm    50 mm/hour     18.4
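The per-kilometre losses above feed directly into a link budget. The sketch below is a minimal illustration in which the transmit power, antenna gains and receiver sensitivity are invented example figures, not specifications of any real product:

```python
# All values are illustrative assumptions, not equipment specifications
tx_power_dbm = 15.0
antenna_gain_dbi = 43.0        # per end, assumed for a small E-band dish
fspl_db = 129.4                # free-space loss, ~70 GHz over 1 km
rain_db_per_km = 10.7          # heavy burst at 25 mm/hour (table above)
distance_km = 1.0
rx_sensitivity_dbm = -60.0

rx_power_dbm = (tx_power_dbm + 2 * antenna_gain_dbi
                - fspl_db - rain_db_per_km * distance_km)
fade_margin_db = rx_power_dbm - rx_sensitivity_dbm
print(f"Received power {rx_power_dbm:.1f} dBm, fade margin {fade_margin_db:.1f} dB")
```

As long as the fade margin stays positive at the worst expected rain rate, the link rides out the downpour; this margin is what determines the availability figures discussed below.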


The reliability of a Millimeter Wave wireless network relies on the same principles as any other: in particular, the distance of operation, the radio’s link margin (a function of transmit power, receiver sensitivity and beam divergence), and factors such as redundancy paths. A link may be heavily affected by a period of intense rainfall, but if it has a large enough margin it will not suffer an outage.

The reliability of a network is called its availability and is measured as the percentage of time that the network is functioning; for example, an availability of 99.999% over a year equates to just over 5 minutes of downtime. The ITU (International Telecommunication Union) has carried out much research collecting rainfall data from metropolitan areas around the world and assessing how it affects Millimeter Wave transmissions. The table below shows, for a widely available Millimeter Wave link, the expected link range at 99.999% availability and the availability of a 2km link in several global cities.

Location      Link Range (km, at 99.999% Availability)   Availability (2 km link)
London        1.65                                       99.998%
Milan         1.35                                       99.994%
New York      1.25                                       99.991%
Los Angeles   1.75                                       99.998%
Sydney        1.20                                       99.99%
Riyadh        2.85                                       > 99.999%
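The availability percentages in the table convert directly into expected downtime. A small helper, assuming only a 365.25-day average year:

```python
def downtime_minutes_per_year(availability_pct: float) -> float:
    """Expected annual downtime for a given availability percentage."""
    minutes_per_year = 365.25 * 24 * 60
    return (1 - availability_pct / 100) * minutes_per_year

for pct in (99.999, 99.99, 99.9):
    print(f"{pct}% availability -> {downtime_minutes_per_year(pct):.1f} min/year")
```

Five nines corresponds to roughly five minutes of outage per year, which is why shortening the link, as in the range column above, is the usual way to reach it in rainy climates.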

Security is also a consideration with wireless transmissions, but thanks to Millimeter Wave’s inherently narrow beam widths (“pencil beams”) of about 0.36° radius with a 2ft antenna, together with generally lower peak transmit powers than at lower frequencies, the technology has a low probability of intercept and detection, which is vital for the transfer of confidential material.

Free Space Optics Technology

Introduction to Free Space Optics

CableFree FSO - Free Space Optics
CableFree Free Space Optics

FSO is a line-of-sight wireless communication technology that uses invisible beams of light to provide high-speed wireless connections that can send and receive voice, video, and data. Today, FSO technology – pioneered and championed by CableFree’s optical wireless offerings – has enabled the development of a new category of outdoor wireless products that can transmit voice, data, and video at bandwidths up to 1.25 Gbps. Free Space Optics connectivity requires no expensive fibre-optic cable and removes the need to secure spectrum licences, unlike radio frequency (RF) solutions. FSO technology simply requires light: the concept is similar to optical transmission over fibre-optic cables, the only difference being the medium. Light travels through air faster than it does through glass, so it is fair to describe FSO as optical communications at the speed of light.

History of Free Space Optics

Optical communications, in various forms, have been used for thousands of years. The Ancient Greeks used a coded alphabetic system of signalling with torches developed by Cleoxenus, Democleitus and Polybius. In the modern era, semaphores and wireless solar telegraphs called heliographs were developed, using coded signals to communicate with their recipients. In 1880 Alexander Graham Bell and his assistant Charles Sumner Tainter created the Photophone, at Bell’s newly established Volta Laboratory in Washington, DC. Bell considered it his most important invention. The device allowed for the transmission of sound on a beam of light. On June 3, 1880, Bell conducted the world’s first wireless telephone transmission between two buildings, some 213 meters (700 feet) apart.  Its first practical use came in military communication systems many decades later, first for optical telegraphy. German colonial troops used Heliograph telegraphy transmitters during the 1904/05 Herero Genocide in German South-West Africa (today’s Namibia) as did British, French, US or Ottoman signals.

During the trench warfare of World War I when wire communications were often cut, German signals used three types of optical Morse transmitters called Blinkgerät, the intermediate type for distances of up to 4 km (2.5 miles) at daylight and of up to 8 km (5 miles) at night, using red filters for undetected communications. Optical telephone communications were tested at the end of the war, but not introduced at troop level. In addition, special blinkgeräts were used for communication with airplanes, balloons, and tanks, with varying success. A major technological step was to replace the Morse code by modulating optical waves in speech transmission. Carl Zeiss Jena developed the Lichtsprechgerät 80/80 (literal translation: optical speaking device) that the German army used in their World War II anti-aircraft defense units, or in bunkers at the Atlantic Wall.

The invention of lasers in the 1960s revolutionized free space optics. Military organizations were particularly interested and boosted their development. However the technology lost market momentum when the installation of optical fiber networks for civilian uses was at its peak.

FSO vendor CableFree has extensive experience in this area: CableFree developed some of the world’s first successful commercial FSO links, with world-first achievements including

  • World’s first commercial 622Mbps wireless link:  1997
  • World’s first commercial Gigabit Ethernet 1.25Gbps wireless link:  1999

While fibre-optic communications have gained worldwide acceptance in the telecommunications industry, FSO communications is still considered relatively new. CableFree Free Space Optical technology from Wireless Excellence enables bandwidth transmission capabilities similar to fibre optics, using similar optical transmitters and receivers, and even enables WDM-like technologies to operate through free space.

How Free Space Optics / Laser Communications Work

CableFree Free Space Optics at London 2012 Olympics

The concept behind FSO technology is very simple. It’s based on connectivity between FSO-based optical wireless units, each consisting of an optical transceiver with a transmitter and a receiver to provide full-duplex (bi-directional) capability. Each optical wireless unit uses an optical source, plus a lens or telescope that transmits light through the atmosphere to another lens receiving the information. At this point, the receiving lens or telescope connects to a high-sensitivity receiver via optical fibre. This Free Space Optics technology approach has a number of advantages:

  • Requires no RF spectrum licensing
  • Is easily upgradeable, and its open interfaces support equipment from a variety of vendors, which helps enterprises and service providers protect their investment in embedded telecommunications infrastructures
  • Requires no security software upgrades
  • Is immune to radio frequency interference or saturation
  • Can be deployed behind windows, eliminating the need for costly rooftop rights

Choosing Free Space Optics or Radio Frequency Wireless

CableFree FSO links in Cairo, Egypt

Optical wireless, using FSO technology, is an outdoor wireless product category that provides the speed of fibre with the flexibility of wireless. It enables optical transmission at speeds of up to 1.25 Gbps and, in the future, will be capable of speeds of 10 Gbps using WDM. This is not possible with any fixed wireless or RF technology. Optical wireless also eliminates the need to buy expensive spectrum (it requires no FCC or municipal license approvals worldwide), which further distinguishes it from fixed wireless technologies. Moreover, FSO technology’s narrow beam is typically two meters wide versus 20 meters and more for traditional, and even newer, radio-based technologies such as millimeter-wave radio. Optical wireless products’ similarities with conventional wired optical solutions enable the seamless integration of access networks with optical core networks and help realize the vision of an all-optical network.

Free Space Technology in Communication Networks

CableFree FSO used in CCTV Networks

Free-space optics (FSO) technology has several applications in communications networks where a connectivity gap exists between two or more points. FSO technology delivers cost-effective optical wireless connectivity and a faster return on investment (ROI) for Enterprises and Mobile Carriers. With the ever-increasing demand for greater bandwidth from Enterprise and Mobile Carrier subscribers comes a critical need for FSO-based products that balance throughput, distance and availability. Over the last few years, customer deployments of FSO-based products have grown. Here are some of the primary network uses:

CableFree FSO Network

Enterprise

Because of the scalability and flexibility of FSO technology, optical wireless products can be deployed in many enterprise applications including building-to-building connectivity, disaster recovery, network redundancy and temporary connectivity for applications such as data, voice and data, video services, medical imaging, CAD and engineering services, and fixed-line carrier bypass.

Mobile Carrier Backhaul

FSO - Free Space Optics Installation

Free Space Optics is a valuable tool in Mobile Carrier Backhaul: FSO technology and optical wireless products can be deployed to provide traditional PDH 16xE1/T1, STM-1 and STM-4, and modern IP Gigabit Ethernet backhaul connectivity, including for Greenfield mobile networks.

Front-Haul: Mobile Carrier Base Station “Hoteling”

Free Space Optics CPRI Front-Haul for 4G Networks

FSO-based products can be used to expand Mobile Carrier network footprints through base station “hoteling” using the CPRI interface. Free Space Optics with CPRI enables “front-haul” networks in which remote radio heads can be separated by up to 2km from the base station, with a 1.22Gbps “native” CPRI link between them.

Low Latency Networks

CableFree Free Space Optics

Free Space Optics is an inherently low latency technology, with effectively no delay between packets being transmitted and received at the other end beyond the line-of-sight propagation delay. Light travels through air approximately 40% faster than through fibre optics, giving customers an immediate reduction in latency compared to fibre over the same path. In addition, fibre optic installations are almost never laid in a straight line: given the realities of building layout, street ducts and the requirement to use existing telecom infrastructure, the fibre run can be 100% longer than the direct line-of-sight path between the two end points. Hence FSO is popular in low latency applications such as High Frequency Trading.

Low Latency Wireless Networks for High Frequency Trading

Low Latency Wireless Networks for High Frequency Trading

CableFree Low Latency speed-of-light-wall-street-high-frequency-trading
Microwave Millimeter Wave and FSO used in Low Latency High Frequency Trading (HFT)

The need for speed:  Best Practices for Building Ultra-Low Latency Microwave Networks.

Achieving the lowest end-to-end Ultra Low Latency with the highest possible reliability and network stability requires not only a wireless transmission platform with cutting-edge low latency performance, but also the experience and expertise needed to design, deploy, support and operate a Millimeter Wave, Free Space Optics or Microwave transmission network.

In High Frequency Trading (HFT) applications where computers can make millions of decisions in fractions of a second, receiving data even a single millisecond sooner can equate to a distinct advantage and generate significant profits.  This is called Low Latency or Ultra Low Latency networking.

Reducing Latency is critical - MW, MMW, FSO
Reducing Latency is critical – MW, MMW, FSO

According to Information Week Magazine¹: “A one (1) millisecond advantage in trading applications can be worth $100 million a year to a major brokerage firm”. Electronic trading currently makes up between 60% and 70% of daily volume on the New York Stock Exchange. Tabb Group, a research firm, estimated that high-frequency traders generated about $21 billion in 2008.

Financial trading centres with HF traders can be in locations separated by long distances, for example Chicago and New York. Data communications between these locations commonly run over leased circuits on fibre optic networks. But if the data were carried over microwave radio links between the same two locations, it would arrive several milliseconds earlier. Why?

Millimeter Wave, MMW and Microwave are the best for Ultra Low Latency

Faster Propagation

Millimeter Wave, Microwave and Free Space Optics can ensure Lowest Latency
Millimeter Wave, Microwave and Free Space Optics can massively reduce network latency

Optical, Millimeter Wave and Microwave signals travel through the air about 40% faster than light through optical fiber. Latency in a data communications circuit, or the time difference between sending a request for data and receiving the reply, will consequently be longer over a fibre optic circuit than a microwave circuit of the exact same length.

Latency is largely a function of the speed of light, which is 299,792,458 meters/second in a vacuum. Microwave signals travel through the air at approximately this speed, giving a latency of approximately 5.4 microseconds for every mile of path length. Light travelling in optical fibre has a latency of 8.01 microseconds for every mile of cable, due to the refractive index of the glass. When data has to travel over 1400 miles from Chicago to New York and back again, the latency difference due to the communications medium alone is more than 3.5 milliseconds.
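The per-mile figures above make the Chicago to New York comparison a one-line calculation; a quick sketch using the numbers quoted in the text (the 1400-mile round trip is the approximate figure given above):

```python
AIR_US_PER_MILE = 5.4      # microwave through air, microseconds per mile
FIBRE_US_PER_MILE = 8.01   # light in optical fibre, microseconds per mile

round_trip_miles = 1400    # Chicago to New York and back, approximate
advantage_ms = (FIBRE_US_PER_MILE - AIR_US_PER_MILE) * round_trip_miles / 1000
print(f"Medium advantage over {round_trip_miles} miles: {advantage_ms:.2f} ms")
```

This is the difference from the transmission medium alone; straighter microwave routes widen the gap further.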
Straighter Routes

Microwave networks have shorter routes, reducing the total network distance and consequently further improving latency. Microwave links can overcome topographical obstacles such as rivers, mountains and highways, while optical networks in many cases have to go around them or follow existing roads or bridges. In general, signals over fibre networks have to travel farther and thus take longer to reach their destination.

Total latency in any network also includes latency from data queuing delay, processing delay through gateways, network design, equipment configuration and extra distance due to circuitous routes.

Overall, CableFree Millimeter Wave (MMW), Free Space Optics (FSO) and Microwave networks offer a better solution for Ultra Low Latency applications such as HFT than fibre optic equipment, thanks to a combination of a faster transmission medium and simple geometry: the shortest distance between two points is a straight line.

802.11ac Technology: Gigabit Wireless

802.11ac Introduction

Wi-Fi has become such an amazingly successful technology because it has continuously advanced while remaining backwards compatible. Every few years since the 802.11b amendment was ratified, the industry has released successive amendments increasing Wi-Fi data rates and capabilities, yet even the latest Wi-Fi systems can interoperate with equipment built to the original 1999 standard. This article explains the latest advance in Wi-Fi, 802.11ac, which provides the next step forward in performance.

The current state of the art in Wi-Fi is known as Wi-Fi CERTIFIED n, or 802.11n. In the four years since the Wi-Fi Alliance introduced its initial certification, this technology has become hugely popular. According to IHS iSuppli, 802.11n now accounts for over two-thirds of Wi-Fi chipset shipments, and is on track to take over completely from 802.11a/b/g in mainstream applications before the end of 2012.

802.11ac Chipset Forecast

802.11n has become popular because it improves performance. The five-fold increase in bandwidth, along with improved reliability from multi-antenna MIMO techniques, has delivered a better user experience. In fact, a 2007 Burton Group report entitled “The end of Ethernet” accurately predicted a future where Wi-Fi will take over from wired Ethernet as the primary edge connection for corporate networks.

802.11ac Market Forecast

802.11ac technology fundamentals

The current generation of 802.11ac Wave 1 products, certified by the Wi-Fi Alliance since mid-2013, deliver a three-fold increase in performance. This is driven by a doubling of channel bandwidth to 80 MHz, the addition of a more efficient 256-QAM encoding technique, and explicit transmit beamforming to improve signal quality.

The 802.11ac project title succinctly reads “Enhancements for Very High Throughput for operation in bands below 6 GHz”. There are more details in the scope paragraph: this amendment defines standardized modifications to both the 802.11 physical layers (PHY) and the 802.11 Medium Access Control layer (MAC) that enable modes of operation capable of supporting:

  • A maximum multi-station (STA) throughput (measured at the MAC data service access point) of at least 1 Gbps, and a maximum single link throughput (measured at the MAC data service access point) of at least 500 Mbps.
  • Operation at carrier frequencies below 6 GHz, excluding 2.4-GHz operation, while ensuring backward compatibility and coexistence with legacy IEEE 802.11 devices in the 5-GHz unlicensed band.

It’s clear that the goal is to continue the thrust of 802.11n to extend rates and throughput. To simplify the task, 802.11ac is restricted to below 6 GHz, and in practice to 5-6 GHz, as it applies only to the 5-GHz bands. The important new technologies in 802.11ac should be considered extensions of the physical layer wireless techniques pioneered in 802.11n, notably using multiple antennas at the transmitter and receiver to exploit multiple input/multiple output (MIMO) for parallel delivery of multiple spatial streams.

Most of the features extend the limits of 802.11n, adding more antennas, more spatial streams, wider RF channels and higher-level coding. New mechanisms are also defined, notably multi-user MIMO, where an access point (AP) transmits simultaneously to multiple clients.
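The headline Wave 1 rates follow directly from these parameters. A back-of-the-envelope sketch using the standard 802.11ac figures for an 80 MHz channel (234 data subcarriers, 256-QAM, rate-5/6 coding, 3.6 µs symbol with the short guard interval):

```python
data_subcarriers = 234       # data tones in an 80 MHz 802.11ac channel
bits_per_subcarrier = 8      # 256-QAM carries 8 bits per tone
coding_rate = 5 / 6          # highest 802.11ac coding rate
symbol_time_us = 3.6         # 3.2 us symbol + 0.4 us short guard interval

bits_per_symbol = data_subcarriers * bits_per_subcarrier * coding_rate
per_stream_mbps = bits_per_symbol / symbol_time_us
print(f"Per stream: {per_stream_mbps:.1f} Mbps; "
      f"three streams: {3 * per_stream_mbps:.0f} Mbps")
```

Three spatial streams give the familiar 1.3 Gbps Wave 1 PHY rate, which is the source of the three-fold improvement over 450 Mbps 802.11n products.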

802.11ac Usage Models

Usage models

While consumer and residential applications were the initial drivers for the development of 802.11ac, it has become critical to address the needs of the #GenMobile workforce in today’s enterprise networks. 802.11ac will open up new possibilities:

802.11ac Video Bandwidth

• The amount of bandwidth in a cell will increase, allowing a single AP to serve the same number of clients with greater per-client throughput. Even though 802.11n throughput routinely exceeds 100 Mbps per client, some corporate use cases such as server connections require higher bandwidth, and 802.11ac will further reduce the number of corner cases where IT goes wired-because-we-must rather than wireless-where-we-can.

• Alternatively, a single AP will be capable of serving more clients with the same throughput. This is typically important in dense-client scenarios such as lecture theaters and conference centers, where huge numbers of clients must be served. Consider a company event where employees can follow along with live video, audio and slide feeds whether they are seated in the back of the auditorium or at their desks.

Wider RF channel bandwidths

This is so simple that it may be disappointing to a technology enthusiast. But it is clear that doubling the RF channel bandwidth allows twice the data throughput, representing a significant improvement. The 40-MHz channel of 802.11n is extended to 80- and 160-MHz in 802.11ac. There are practical obstacles to using these wider channels, but now that they are defined, equipment will be developed to use them. The details:
• 80-MHz and 160-MHz channel bandwidths are defined
• 80 MHz mandatory, 160 MHz optional
• 80-MHz channels are two adjacent 40-MHz channels but with tones (subchannels) in the middle filled in.
• 160-MHz channels are defined as two 80-MHz channels. The two 80-MHz channels may be contiguous or non-contiguous.
Enterprises will be able to use 80 MHz channels, but the future optional 160 MHz support will only be usable in home environments: only one 160 MHz channel (or two, if DFS is enabled) is available when designing an enterprise deployment, whereas a deployment plan can leverage up to five 80 MHz channels.
More spatial streams 802.11n defines up to four spatial streams, although there are
to date few chips and APs using more than three streams.
802.11ac retains support of three spatial streams in todays products but allows for future support of up to eight spatial streams. There will be a number of consequences. A divergence between chips and equipment for APs (with four+antennas) and clients (typically with < four antennas) will occur due to cost, physical size and power constraints.
APs will grow by adding antennas, while clients will become more capable by implementing multiple spatial streams and beamforming features behind a smaller number of antennas.
This divergence will create opportunities for multi-user MIMO, where a high-capacity AP can communicate with multiple, lower-throughput clients simultaneously. Today's 802.11ac products support three spatial streams, and the next wave is expected to extend this to four. Clients are not expected to implement four spatial streams (with four antennas), so the extra streams are most likely to deliver benefits when combined with future MU-MIMO support.
Multi-user MIMO (MU-MIMO)
Thus far, all 802.11 communication has been point-to-point (one-to-one) or broadcast (one-to-all). With 802.11ac, a new feature allows an AP to transmit different streams to several
targeted clients simultaneously. This is a good way to make use of the expected surplus of antennas at APs over clients, and it requires beamforming techniques to steer signal maxima over the desired clients while minimizing the interference caused at other clients.
For example, if an AP wishes to use MU-MIMO for clients A and B simultaneously, it will beamform the transmission for A so it presents a maximum at A but a minimum at B, and vice
versa for the transmission for B. There are some new terms associated with this:
• Space Division Multiple Access (SDMA): A term for streams not separated by frequency or time, but instead resolved in space like 802.11n-style MIMO.
• Downlink MU-MIMO, where the AP transmits simultaneously to multiple receiving devices, is an optional mode.
MU-MIMO doesn’t increase the performance an individual user will see, but it allows the network to increase its utilization by transmitting to multiple clients simultaneously in the downstream direction from the AP. MU-MIMO is expected to become available as part of the future 802.11ac Wave 2 products, but adoption is likely to be delayed: clients with Wave 2 radios are needed in order to see the benefits of MU-MIMO or four spatial streams, and it will take time for a large number of such clients to become available and deployed.
Modulation and coding
As semiconductor radios become ever-more accurate, and digital processing ever-more powerful, 802.11ac continues to exploit the limits of modulation and coding techniques, this
time with the leap from 64-quadrature amplitude modulation (QAM) to 256-QAM.
• 256-QAM, rate 3/4 and 5/6 are added as optional modes. For the basic case of one spatial stream in a 20 MHz channel, this extends the previous highest rate of 802.11n from 65 Mbps (long guard interval) to 78 Mbps and 86.7 Mbps respectively, a 20% and 33% improvement. (Note that 802.11ac does not offer every rate option for every MIMO combination).
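The arithmetic behind those figures can be checked directly. A minimal sketch, using the 20 MHz OFDM numerology (52 data subcarriers, 4.0 µs symbol with the long guard interval); the function name is illustrative:

```python
# PHY rate = data subcarriers x modulation bits x coding rate / symbol time.
# Figures are for one spatial stream in a 20 MHz channel, long guard interval.
DATA_SUBCARRIERS_20MHZ = 52
SYMBOL_TIME_S = 4.0e-6  # long guard interval

def rate_mbps(mod_bits, coding_rate):
    bits_per_symbol = DATA_SUBCARRIERS_20MHZ * mod_bits * coding_rate
    return bits_per_symbol / SYMBOL_TIME_S / 1e6

highest_11n = rate_mbps(6, 5/6)   # 64-QAM, rate 5/6: ~65 Mbps
mcs8 = rate_mbps(8, 3/4)          # 256-QAM, rate 3/4: ~78 Mbps (+20%)
mcs9 = rate_mbps(8, 5/6)          # 256-QAM, rate 5/6: ~86.7 Mbps (+33%)
```

Moving from 6 bits per subcarrier (64-QAM) to 8 bits (256-QAM) is where the entire gain comes from; the symbol time and subcarrier count are unchanged.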

Other elements/features

Below is a summary of additional elements and features.
• Single sounding and feedback method for beamforming (vs. multiple in 11n). This should enable inter-vendor beamforming to work with 802.11ac devices; the diversity of optional feedback formats in 802.11n resulted in differing implementations and stifled adoption.
• MAC modifications (mostly to adapt to above changes)
• Coexistence mechanisms for 20-, 40-, 80- and 160-MHz channels, 11ac and 11a/n devices. Extensions of 802.11n techniques to ensure that an 802.11ac device is a good neighbor to older 802.11a/n equipment.
• Non-HT duplicate mode duplicates a 20-MHz non-HT (non-802.11n) transmission in four adjacent 20-MHz channels or two sets of four adjacent 20-MHz channels.
Sometimes termed quadruplicate and octuplicate mode.

Bandwidth and throughput figures

Whenever there's a new 802.11 standard, most IT organizations want to know "How fast?" With 802.11ac the answer is quite complicated, because there are many options, and some types of devices, such as smartphones, will be restricted to a fraction of the theoretical full speed because of practical limits of space, cost and power consumption. The tables below offer some useful figures.
The basic set of rates is now known as MCS 0-9. MCS 0-7 are equivalent to the 802.11n rates: the first two columns of the table start at 6.5 Mbps for the long guard interval and 7.2 Mbps for the short guard interval, and up to 65 Mbps and 72.2 Mbps respectively the rates are identical to 802.11n. The MCS 8 and MCS 9 rates are new, enabled by advances in chip technology. MCS 9 is not applicable to all channel-width/spatial-stream combinations.

802.11ac Theoretical Link Rates

802.11ac Data Rates

The table shows how simple multiplication can generate all other rates, up to nearly 7 Gbps. But bear in mind that the conditions required for the highest rates – 160-MHz channels, eight spatial streams – are not likely to be implemented in any chipsets due to design complexity,
power requirements and limited frequency available for use.
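That "simple multiplication" can be sketched directly. The data-subcarrier counts per channel width (52/108/234/468) and the 3.6 µs short-GI symbol follow the 802.11ac numerology; the function name is illustrative:

```python
# Every entry in the rate table is the single-stream rate for a given width,
# scaled by the number of spatial streams.
DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}
SHORT_GI_SYMBOL_S = 3.6e-6

def rate_gbps(width_mhz, n_streams, mod_bits=8, coding_rate=5/6):
    bits = DATA_SUBCARRIERS[width_mhz] * mod_bits * coding_rate * n_streams
    return bits / SHORT_GI_SYMBOL_S / 1e9

top = rate_gbps(160, 8)  # MCS 9, 160 MHz, eight streams, short GI: ~6.93 Gbps
```

The top-right corner of the table (MCS 9, 160 MHz, eight streams, short guard interval) works out to roughly 6.93 Gbps, the "nearly 7 Gbps" figure quoted above.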
Now is the time to move ahead with 802.11ac Wave 1 products, which deliver 3X the performance of the prior 802.11n generation. Future 802.11ac Wave 2 products are expected in a few years but will provide only a marginal further performance increase, so if your network is experiencing performance bottlenecks or an overload in client density, now is the time to look towards deploying 802.11ac.

PHY Layer Enhancements

PHY enhancements, beamforming and more

The IEEE 802.11ac amendment is defined for frequencies below 6 GHz. In practice this means it is restricted to 5 GHz, as the 2.4-GHz band is not wide enough for useful operation: indeed, 2.4 GHz is specifically excluded from the 802.11ac amendment's scope, while backwards compatibility with older 802.11 (802.11a and 802.11n) devices at 5 GHz is required. Meanwhile the IEEE is also targeting the 60-GHz band (57-63 GHz) with the 802.11ad amendment.

Summary of PHY enhancements

This table is from the IEEE 802.11ac draft rather than the Wi-Fi Alliance. Vendors will follow the latter's guidance on mandatory and optional features, but the table represents a good preview of the Wi-Fi Alliance's probable classification.

802.11ac Mandatory and Optional Features

Channel width

It is a fundamental rule of wireless communication that more spectrum enables higher throughput, and it is no surprise that the 802.11ac task group has chosen to expand the
channel width from 40 MHz in 802.11n to 80 and 160 MHz.
This allows a pro-rata increase in effective data rates. However, since the spectrum allocated for Wi-Fi is limited, it has been necessary to allow for channels to be split across non-contiguous spectrum. The diagram below shows how the available 5-GHz bands are used for various channel widths.

802.11ac Channel Widths

In the United States, Wi-Fi uses three blocks of spectrum between 5 and 6 GHz. The U-NII 1 band is restricted to indoor operations, the U-NII 2 and U-NII 2 extended bands are for indoor and outdoor operations, and the U-NII 3/ISM band is intended for outdoor bridge products and may be used for indoor WLANs as well.
All channelization is based on the 20-MHz channels used in earlier 802.11 standards, and the same channel numbering scheme is used. Since channel numbers are defined every 5
MHz, an increment of four for the channel number indicates adjacent 20 MHz channels.
The band from Channel 36 (center frequency 5,180 MHz) to Channel 48 (5,240 MHz) is known as U-NII 1, while channels 52 (5,260 MHz) to 64 (5,320 MHz) comprise U-NII 2. Both are available for Wi-Fi, and they can be used for two 80-MHz channels or a single 160-MHz channel. Since the U-NII 1 and 2 bands have different FCC rules for antennas and transmit power, the more restrictive rule would apply to a 160-MHz channel spanning both bands.
The band from Channel 100 (center frequency 5,500 MHz) to Channel 144 (5,720 MHz), known as U-NII 2 extended or U-NII-2 Worldwide, is a little wider, and since Channel 144 is now allowed for 802.11ac, it can support three 80-MHz channels or one contiguous 160-MHz channel.
The U-NII 3 band, from Channel 149 (center frequency 5,745 MHz) to Channel 165 (5,825 MHz) allows one 80-MHz channel but no contiguous 160-MHz channel. This band is not widely
available outside the U.S.
Because it is difficult to find 160 MHz of contiguous spectrum, 802.11ac allows two non-contiguous 80-MHz channels to be used together as a 160-MHz channel. For example, channels
36-48 and 116-128 comprise a viable 160-MHz channel, sometimes referred to as 80+80 MHz. But each of the underlying 80-MHz channels must be contiguous.
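The channel arithmetic above is easy to verify. A small sketch, using the standard 5 GHz numbering rule (center frequency = 5000 + 5 × channel number); the helper names are illustrative:

```python
# 5 GHz channel numbering: channels are defined every 5 MHz, so adjacent
# 20 MHz channels differ by 4 in channel number.
def center_mhz(channel):
    return 5000 + 5 * channel

def is_contiguous_80(channels):
    # four adjacent 20 MHz channels (numbers 4 apart) form one 80 MHz channel
    return len(channels) == 4 and all(b - a == 4 for a, b in zip(channels, channels[1:]))

lower = [36, 40, 44, 48]      # U-NII 1
upper = [116, 120, 124, 128]  # a non-adjacent upper block

assert center_mhz(36) == 5180 and center_mhz(48) == 5240
# each 80 MHz half is contiguous, so together they form a valid 80+80 pair
assert is_contiguous_80(lower) and is_contiguous_80(upper)
```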

When considering channels in the 5-GHz band, there are two practical restrictions. A large part of the band is covered by regulatory requirements for radar avoidance, to prevent interference with prior users of the band, primarily weather and military radars. The industry response to these requirements was 802.11h, including dynamic frequency selection (DFS) and transmit power control (TPC). The latter is not normally required at the power levels used by Wi-Fi, but
equipment using channels from 5,250 to 5,725 MHz must be certified for DFS.

A WLAN that needs to support the minority of non-DFS devices will not be able to use these channels. Over time, the number of non-DFS devices will decline and this will become
a less significant restriction: The Wi-Fi Alliance has some work under way with the goal of decreasing the number of non-DFS 5-GHz devices.
After some incidents where non-compliant outdoor point-to-point Wi-Fi links were shown to interfere with airport weather radars, the FCC and other national regulators tightened the rules and placed a temporary moratorium on the band from 5,600 to 5,650 MHz. This is not currently
available, even to DFS equipment.
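The two regulatory constraints just described can be summarized in a short sketch. This is a simplification for illustration; the exact band edges and conditions belong to the FCC rules, not the 802.11ac amendment:

```python
# Simplified U.S. 5 GHz constraints: DFS certification is needed from 5,250
# to 5,725 MHz, and the 5,600-5,650 MHz weather-radar band is under a
# moratorium and not currently available even to DFS equipment.
def regulatory_flags(center_mhz):
    dfs_required = 5250 <= center_mhz <= 5725
    moratorium = 5600 <= center_mhz <= 5650
    return dfs_required, moratorium

assert regulatory_flags(5180) == (False, False)  # channel 36: no DFS needed
assert regulatory_flags(5300) == (True, False)   # channel 60: DFS required
assert regulatory_flags(5620) == (True, True)    # inside the moratorium band
```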
In terms of usable bandwidth, the increase in channel width delivers slightly more than pro-rata because the ratios of pilot and DC tones to subcarriers decrease. The diagram shows that moving from 20 to 40 and 80 MHz increases usable subcarriers by 108/52 (x2.07) and 234/52 (x4.50) respectively over the 20-MHz 802.11n standard. The 160-MHz channel is always treated as two 80-MHz channels for subcarrier assignment, whether contiguous or not.
The Wi-Fi Alliance will certify devices to a selected subset of 802.11ac criteria, and we don’t yet know the details of that subset but the current IEEE amendment states that 80-MHz channel capability is required, while 160-MHz channels are optional.

OFDM Subcarriers for 802.11ac

Review of MIMO techniques

Since 802.11ac realizes most of its gains by extending techniques that were pioneered in 802.11n, it is appropriate to briefly cover these techniques.
The breakthrough technology of 802.11n, achieving its most dramatic improvements in data rate, was the use of MIMO (multiple input/multiple output) spatial division multiplexing.
SDM requires MIMO, specifically the transmitting and receiving stations must each have multiple RF chains with multiple antennas – it does not work where either station has only a single antenna chain. Each antenna is connected to its own RF chain for transmit and receive. The baseband processing on the transmit side can synthesize different signals to send to each antenna, while at the receiver the signals from different antennas can be decoded individually.
Although practical systems transmit in both directions, this explanation is simplified by showing only one direction of transmission.

802.11ac MIMO Antennas

Under normal, line of sight conditions, the receiving antennas all hear the same signal from the transmitter. Even if the receiver uses sophisticated techniques to separate the signals heard at antennas 1 and 2, it is left with the same data. If the transmitter attempts to send different signals to antennas A and B, those signals will arrive simultaneously at the receiver, and will effectively interfere with each other.
There is no way under these conditions to better the performance of a non-MIMO system: one might as well use only one antenna at each station. If noise or interference affects the signals unevenly, MRC or STBC techniques can restore the link to a clear-channel line-of-sight condition, but in the absence of multipath only one stream can be supported, and the upper bound on performance is that of a clear-channel single stream.
However, if there is sufficient RF distortion and especially multipath in the path, receiving antennas will see different signals from each transmit antenna. The transmit antenna radiates a signal over a broad arc, scattering and reflecting off various objects in the surrounding area.

Each reflection entails a loss of signal power and a phase shift, and the longer the reflected path, the more delay is introduced relative to a line-of-sight signal. In the past, multipath was the enemy of radio systems, as the receiver saw a dominant signal (usually line of sight), and all the multipath signals tend to interfere with this dominant signal, effectively acting as noise or interference and reducing the overall throughput of the system.
To understand how MIMO works, first consider the signal each receive antenna sees in a multipath environment. In the diagram above, antenna 1 receives signals from the transmitter’s antenna A (two paths) and antenna B. If the signal from antenna B is the highest-power, the receiver can choose to decode that signal.
Meanwhile, if it finds that the transmitter's antenna A gives a good signal at antenna 2, it can decode that signal. If the transmitter understands this, it can send different data streams on the B-1 and A-2 paths simultaneously, knowing each will be received with little interference from the other, and hence double the system's throughput. If MIMO seems a difficult concept, remember this: multipath (reflected RF between transmitter and receiver) is normally the enemy of performance, but with MIMO it can be used constructively. Line of sight normally gives the best performance, but with MIMO it provides just baseline data rates.
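The separation of streams can be shown in a few lines of linear algebra. A minimal sketch with illustrative channel values: under rich multipath the 2x2 channel matrix is full rank and the receiver can invert it; under pure line of sight the rows are identical and the streams cannot be separated.

```python
import numpy as np

x = np.array([1 + 1j, -1 + 1j])       # two symbols, one per spatial stream

H_multipath = np.array([[0.9, 0.3],
                        [0.2, 0.8]])  # full rank: distinct reflected paths
H_los = np.array([[1.0, 1.0],
                  [1.0, 1.0]])        # rank 1: both antennas hear the same mix

y = H_multipath @ x                   # what the two receive antennas hear
x_hat = np.linalg.solve(H_multipath, y)  # receiver separates the streams
assert np.allclose(x_hat, x)

assert np.linalg.matrix_rank(H_los) == 1  # only one stream is supportable
```

Real receivers must also cope with noise, so in practice the channel must be well-conditioned, not merely invertible; this toy example ignores noise entirely.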
The diagrams below show the different techniques that can be used with MIMO in an 802.11n and 802.11ac system, when the client has a multiple antennas or a single antenna. In the following section we will briefly explain each technique.

802.11ac transmit-receive beamforming

802.11ac transmit-receive for single chain client

Cyclic shift diversity (CSD)

Sometimes called cyclic delay diversity (CDD), CSD is applied at the transmitter when the same spatial stream is used to drive multiple antennas. It is necessary because closely spaced antennas act as beamforming arrays, and without deliberate phase spacing it is possible to inadvertently create signal maxima and minima over receive antennas due to interference patterns.
This is avoided by giving each transmit antenna’s signal a large phase shift relative to the others. CSD also avoids inadvertent power peaks and keeps the transmitted power envelope even. It is a form of transmit diversity – for a single-antenna receiver, the chance of being in a local null for
all transmit antennas simultaneously is much less than with a single transmit antenna – so the probability of signal dropouts is reduced.
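The effect is easy to demonstrate in the frequency domain: a cyclic time shift of s samples is a phase ramp across the OFDM subcarriers. A toy sketch (values are illustrative, not from the standard) with two antennas whose paths have equal gain but opposite phase, the worst case for unintentional nulling:

```python
import numpy as np

N = 64                       # OFDM subcarriers
X = np.ones(N)               # the same symbol driven to both antennas
k = np.arange(N)

def received_power(shift_samples):
    # cyclic shift of s samples == phase ramp exp(-j*2*pi*k*s/N) per subcarrier
    x2 = X * np.exp(-2j * np.pi * k * shift_samples / N)
    y = 1.0 * X + (-1.0) * x2   # paths: h1 = +1, h2 = -1 on every subcarrier
    return np.mean(np.abs(y) ** 2)

assert received_power(0) < 1e-12   # no CSD: every subcarrier is nulled
assert received_power(2) > 1.0     # with CSD the nulls no longer line up
```

Without the shift, the destructive interference hits every subcarrier at once; with it, only a few subcarriers fall in a null while the average power is preserved.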

Transmit beamforming

While CSD is blind, unresponsive to actual channel or client conditions, TxBF in 802.11ac requires explicit feedback from the beamformee on the current channel state,
returned to the beamformer and used to weight the signals to each antenna.
If the correct weightings of amplitude and phase are chosen, the signal strength at the receive antennas is maximized in a local peak, which maximizes SNR and hence the sustainable link rate. TxBF can be thought of as directing a beam on a particular receive antenna, but there is no flashlight-like focused beam for 802.11n or 802.11ac devices, as one might expect from a high-gain directional antenna: the broader pattern is likely to be a patchwork rather than a beam.
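For a single receive antenna, the optimal weighting is well known: each transmit antenna uses the conjugate of its channel coefficient so all paths add coherently. A minimal sketch with illustrative channel values (this is the classic matched/maximum-ratio weighting, used here to illustrate the principle rather than the exact 802.11ac computation):

```python
import numpy as np

h = np.array([0.9 * np.exp(1j * 0.4),
              0.7 * np.exp(-1j * 2.1),
              0.5 * np.exp(1j * 1.3)])  # per-antenna channel to one receiver

w_blind = np.ones(3) / np.sqrt(3)       # equal power, no channel knowledge
w_txbf = h.conj() / np.linalg.norm(h)   # conjugate (matched) weights

p_blind = abs(h @ w_blind) ** 2
p_txbf = abs(h @ w_txbf) ** 2           # = ||h||^2, the coherent maximum

assert np.isclose(p_txbf, np.linalg.norm(h) ** 2)
assert p_txbf >= p_blind                # beamforming never does worse
```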
Spatial division multiplexing (space division multiple access)
SDM was first introduced with 802.11n, and the term SDMA is used now that we have multi-user MIMO (MU-MIMO) in 802.11ac. SDM exploits multipath, where more than one independent RF path exists between a pair of devices. In its simplest form, the transmitter divides the data stream into two spatial streams and directs each spatial stream to a different antenna.
Experience with 802.11n has shown that SDM-friendly multipath is present surprisingly often in indoor WLANs. While transmit pre-weighting can improve SDM, current 802.11 chips use implicit feedback and match spatial streams to antennas with a simple algorithm, rather than taking explicit feedback from the receiver into account.

Space-time block coding

STBC is a technique where a pair of transmit antennas is used to transmit a known sequence of variants of the original OFDM symbol. If the receiver knows the sequence, it can use probabilistic methods to correct decoding errors, improving effective SNR for a given channel. STBC can be used where the transmitting device has more antennas than the receiver.
Although it is a powerful technique on paper, STBC is only just appearing in the newer 802.11n chipsets.

Maximal ratio combining

Where multiple receive antennas see the same spatial stream, their signals can be intelligently combined to improve the effective SNR. This is MRC, and it is employed where the number of receive antennas is greater than the number of spatial streams. MRC requires no coordination between transmitter and receiver; it is an internal technique used by the receiver. Most current 802.11n chips use MRC.
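The textbook result is that with conjugate weighting the combined SNR equals the sum of the per-branch SNRs. A short sketch with illustrative channel values and equal noise power per branch:

```python
import numpy as np

h = np.array([0.9, 0.4 * np.exp(1j * 1.1), 0.6 * np.exp(-1j * 0.5)])
noise_var = 0.1                       # equal noise power per receive branch

w = h.conj()                          # MRC weights: conjugate of the channel
signal_power = abs(w @ h) ** 2        # coherent signal after combining
noise_power = np.sum(np.abs(w) ** 2) * noise_var
snr_mrc = signal_power / noise_power

snr_per_branch = np.abs(h) ** 2 / noise_var
assert np.isclose(snr_mrc, snr_per_branch.sum())  # branch SNRs add up
assert snr_mrc > snr_per_branch.max()             # beats any single antenna
```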

More spatial streams

Where 802.11n specified up to four spatial streams for MIMO, 802.11ac extends this to eight streams. The technique is unchanged, but the matrices for calculations become larger, as do the access points – there can be no more spatial streams than the number of transmitting or receiving antennas (whichever is smaller), so full 8SS performance will only be possible where both devices have eight antennas.
Without innovative antenna designs, this probably precludes handheld devices, but access points, set top boxes and the like will certainly be able to use multiple streams.
As with wider channels, adding spatial streams increases throughput proportionally. Assuming multipath conditions are favorable, two streams offer double the throughput of a single stream, and eight streams increase throughput eight-fold.

Beamforming and channel state information

Sounding frames were introduced in 802.11n for use with MIMO and beamforming. The concept is quite simple: a transmitter sends a known pattern of RF symbols from each antenna, allowing the receiver to construct a matrix of how each receive antenna hears each transmit antenna.
This information is then sent back to the transmitter, allowing it to invert the matrix and use the optimum amplitude-phase settings for best reception. With a single antenna receiver, this results in a local maximum for SNR, for effective beamforming.
Sounding frames are important for several MIMO techniques, as they enable ‘channel state information’ (CSI) at the transmitter. CSI (or CSI-T) is a very important concept in MIMO, and it is worth a few lines of explanation.
The most important MIMO technique of 802.11n is spatial division multiplexing (SDM), a technique where the receiver needs to know how its receive antennas hear the various transmit signals from the transmitter.
For example, if the receiver knows that it hears the transmitter’s antenna A signal at 100% power on its antenna 1, and at 20% power on its antenna 2, it can subtract the 20% signal at antenna 2 and recover other signals with that antenna.
This is relatively easy because each frame starts with a preamble that isolates transmit signals from each antenna in turn. By analyzing the reception of the long training fields (LTFs) in the preamble of each frame, the receiver builds a model for the state of the channel at that instant, a model that it then uses for subsequent symbols in the frame. The received LTFs provide channel state information at the receiver (CSI-R).
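The per-subcarrier estimation step is conceptually just a division: the receiver divides what it heard by the known training symbol. A noiseless toy sketch (sizes and values are illustrative, not the actual LTF sequence):

```python
import numpy as np

n_subcarriers = 8                     # toy size for illustration
# known +/-1 training pattern standing in for the LTF
ltf = np.where(np.arange(n_subcarriers) % 2 == 0, 1.0, -1.0)

rng = np.random.default_rng(0)
h_true = rng.normal(size=n_subcarriers) + 1j * rng.normal(size=n_subcarriers)

rx = h_true * ltf                     # noiseless reception of the training symbol
h_est = rx / ltf                      # CSI-R: one channel estimate per subcarrier

assert np.allclose(h_est, h_true)
```

With noise, the division yields a noisy estimate, which is why real receivers average over the LTF repetitions in the preamble.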

802.11ac Implicit and Explicit Feedback for Beamforming

Receiver CSI is very useful, but we can do better. If the transmitter knows how its signals are received by its target in sufficient detail, it can pre-code the signal to each antenna to achieve the very best throughput and lowest error rate the channel will support.
In 802.11ac this is used for beamforming, where multiple antennas are used to beam a signal onto the receiver's antenna, and also for DL MU-MIMO, where it sets up transmissions to steer local maxima to the desired client and minima to the other clients.
CSI at the transmitter is much more powerful than CSI at the receiver, but more difficult to achieve. This is because a large amount of information must be fed back across the wireless
medium, and the transmitter and receiver must agree on the data and format of such feedback.
The full matrix would indicate amplitude and phase for each transmit antenna, receive antenna, and each OFDM subcarrier in the RF channel – a large amount of data.
Therefore various shortcuts have been developed so a smaller amount of information can be fed back without compromising beamforming accuracy.
802.11n includes two methods for achieving CSI at the transmitter. Implicit beamforming allowed the receiver, or beamformee, to send a sounding frame back to the beamformer. The beamformer, on receiving the sounding frame, processed it and used the information under the assumption that the RF channel is reciprocal – knowing how transmit antenna A’s signal is received at antenna B, implies that antenna B’s transmissions would be received at antenna A in the same way.

This is a good assumption for wireless channels, but it cannot include onboard hardware components. In this case the path from B's transmit chain to A's receive chain is measured, but when A transmits, the effects of its own transmit chain and B's receive chain (calibration differences and nonlinearities) cannot be measured. Thus, while implicit CSI feedback for beamforming is relatively easy to obtain, it is not very accurate.
In 802.11ac, implicit feedback is dropped in favor of explicit feedback. Here the beamformer transmits a sounding frame and the beamformee analyses how it receives the frame, compresses the results to a manageable size and transmits them back to the beamformer. This provides accurate channel state information, but requires a protocol for coordination.
Sounding frames in 802.11ac
802.11n included three options for beamforming feedback, and manufacturers have not been able to agree and implement a common set. In practice, some current 802.11n devices will successfully beamform when both ends of the connection include common chipsets, but
beamforming with explicit feedback is not generally a feature of current 802.11n equipment.
To avoid this situation, only one feedback mechanism, explicit feedback with the compressed V matrix, is specified in 802.11ac. The full sounding sequence comprises a set of special sounding frames sent by the transmitter (either the beamformer or the access point in the case of DL MU-MIMO), and a set of compressed V matrix frames returned by the beamformee. Because multiple clients are involved in MU-MIMO, a special protocol ensures they answer with feedback frames in sequence following the sounding frame.

802.11ac Single User Beamforming

In 802.11ac, the protocol for generating CSI at the transmitter relies on sounding or null data packet (NDP) frames, together with announcement frames and response frames.
First, the beamformer sends a null data packet announcement (NDPA) frame identifying the intended recipients and the format of the forthcoming sounding frame. This is followed by the sounding NDP itself, and the beamformee then responds with a beamforming report frame.
The NDPA and NDP frames are quite simple. The NDPA identifies which stations should listen to the subsequent sounding frame, along with the dimensions of that frame depending on the number of antennas and spatial streams in use. The sounding frame itself is just a null data packet: It is the preamble with its LTFs that is of importance. The processing and construction of the beamforming report, however, is complicated.

802.11ac Downlink Multi-User MIMO

The beamformee measures the RF channel characteristics, then processes and returns the measurements as a compressed steering matrix to the beamformer. The calculations consist of a number of steps that are performed per-OFDM subcarrier.
First, a matrix of the received signals is constructed, with magnitude and phase for each antenna combination (transmit and receive). Next, successive matrix rotation operations (Givens rotations) transform it into the form of matrix required by the transmitter.
Finally the parameters (angles) used in the matrix operations are assembled, along with some other power and phase figures, and the compressed matrix is returned to the beamformer.
Even with this compression, a beamforming report can range from less than 1 KB to greater than 20 KB, as it contains information per-subcarrier for each space-time stream and depends on the number of spatial streams and transmit antennas in use.
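A rough size estimate can be sketched. The angle count below follows the compressed-beamforming format (for an Nr x Nc V matrix, 2 × Σ(Nr − i) for i = 1..min(Nc, Nr − 1) angles); the bits per angle and subcarrier grouping vary with configuration, so the figures are illustrative only:

```python
# Rough size of a compressed beamforming report.
def n_angles(nr, nc):
    # phi + psi angle count for an nr x nc compressed V matrix
    return 2 * sum(nr - i for i in range(1, min(nc, nr - 1) + 1))

def report_bytes(n_subcarriers, nr, nc, bits_per_angle=6, grouping=1):
    # grouping > 1 reports one matrix per group of subcarriers
    tones = n_subcarriers // grouping
    return tones * n_angles(nr, nc) * bits_per_angle // 8

small = report_bytes(52, 2, 1)    # 20 MHz, 2x1: well under 1 KB
large = report_bytes(468, 8, 4)   # 160 MHz, 8x4: tens of kilobytes
assert small < 1024 and large > 10 * 1024
```

The span from a two-antenna, single-stream report to an eight-antenna, four-stream report covers the "less than 1 KB to greater than 20 KB" range quoted above.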

802.11ac Beamforming Compressed V-Matrix

The compressed V matrix is chosen for 802.11ac for several reasons:
• It is a predefined 802.11n technique, and it distributes computation among the receivers rather than placing the burden on the transmitter.
• It is simple enough that the matrix algebra can be completed quickly for immediate feedback to the beamformer.
• It provides considerable data compression for the beamforming report.
• Where conditions are favorable, the calculation can be short-cut to further reduce the matrix size.
Its accuracy is limited by the 'quantization' of the angles returned: with fewer bits per angle, the report frame shrinks but precision is lost. The parameters used in 802.11ac represent a compromise, allowing most of the theoretical beamforming gains to be realized with considerable savings in computation and feedback bandwidth.
Thus 802.11ac, by standardizing and enforcing compliance with the sounding sequence and the format of the compressed V matrix feedback frame, will enable widespread adoption of beamforming and DL MU-MIMO, as well as potentially enabling better MIMO SDM performance.
MAC Layer Enhancements

Multi-user MIMO, modulation and MAC enhancements

Multi-user MIMO

Some of the most significant throughput gains of 802.11ac come from multi-user MIMO (MU-MIMO). This exploits the same phenomenon of spatial division multiplexing (SDM) used in 802.11n, where multiple antennas send separate streams of data independently, although the transmissions occupy the same time and frequency space. The MU-MIMO technique in 802.11ac is also referred to as space division multiple access (SDMA).
MU-MIMO proposes that, instead of considering multiple spatial streams between a given pair of devices, we should be able to use spatial diversity to send multiple data streams
between several devices at a given instant. The difficulty lies in coordinating between the various devices in a network – how do you discover which pairs of antennas or devices
support diverse paths, and how does a device know that another is transmitting so it can safely transmit to its partner at the same instant?

802.11ac Downlink Multi-User MIMO

802.11ac solves these problems by simplifying them. It assumes that access points (APs) are different from client devices in that they are less space-, power-, and even price-constrained, so they are likely to have more transmitting antennas than client devices.
Therefore, since the number of spatially diverse paths depends on the number of antennas, and the number of opportunities depends on the amount of traffic buffered for transmission, the AP is allowed to transmit to several clients simultaneously should it find an opportunity to do so.
For example, a six-antenna AP could simultaneously transmit three spatial streams each to two client devices – provided conditions were favorable, of course. That means that the transmissions to one client device should not cause excessive interference at the other client and the usual MIMO SDM conditions should prevail where the streams between a given
pair of devices are isolated.
This downlink MU-MIMO (DL MU-MIMO) is the only configuration supported in 802.11ac. It precludes some other forms such as uplink MU-MIMO. Only one AP or client can transmit at any instant, and while the AP can transmit to multiple clients simultaneously, clients can only transmit to the AP one by one.

802.11ac Downlink Multi-User MIMO Transmission Options

There is no uplink MU-MIMO, in part because it requires a more complicated protocol and in part because it would not be very useful: all traffic in Wi-Fi (apart from DLS) goes to or from the AP, and we usually expect clients to consume more data than they generate.
The AP is also in a good position to monitor traffic for different clients and identify opportunities to exercise DL MU-MIMO. By matching the frames in its transmit buffers to
the known simultaneous paths to its clients, the AP can make sure that it uses all opportunities for SDMA.
In 802.11ac, DL MU-MIMO only works with beamforming feedback, where the AP sends a sounding (null data packet) frame and clients report how they hear the sounding frame
in the explicit beamforming feedback frame. This is because MU-MIMO introduces a new dimension.
While single-user MIMO is only concerned with how one client receives the AP signal, MU-MIMO throughput is limited by the interference caused when a signal aimed at one client bleeds over to another client.

802.11ac Downlink Multi-User MIMO Disallowed Transmission Options

To counteract this effect, the AP calculates how much of the signal aimed at client A will be received at client B and/or client C, and uses beamforming techniques to steer a null onto the other clients, so they can successfully receive their own signals.
MU-MIMO throughput is very sensitive to this self-interference, so the beamforming feedback frame for MU-MIMO has higher precision for the matrix angles and also includes SNR information, to improve accuracy and allow interference to be minimized.
Thus the data reported allows the AP to calculate the SDMA possibilities for different client groups, and the required steering matrices. This calculation is not part of the standard, but it is complex and there are several possible algorithms.
Precoding algorithms for beamforming and DL MU-MIMO

The most accurate way of precoding for MU-MIMO is known as dirty paper coding (DPC). An elegant theorem with an intuitive conclusion, DPC states that if the interference state of the RF channel is known exactly, there is a precoding profile that allows maximum data transfer through that channel, no matter what the pattern of interference may be.

802.11ac Downlink Multi-User MIMO Nulling Interference

The analogy is to take a sheet of dirty paper, and write on it in such a way that the writing can be read. If the exact pattern of dirt is known, the writing can be made to stand out against it without the reader needing to know about the pattern.
Similarly, if a transmitter has exact CSI, it can calculate DPC and achieve the theoretical maximum channel throughput without the receiver knowing CSI.
Unfortunately DPC is a non-linear technique, which makes it difficult to apply in practice. Similar results, often nearly as good, can be achieved by approximating with linear techniques such as maximal likelihood transmission and zero-forcing.
The former concentrates on steering signal maxima onto the intended receiver’s antenna while the latter steers nulls or zeros to the other recipients of the MU-MIMO transmission, allowing them to decode their desired signals with minimum interference.
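Zero-forcing can be shown in a few lines of linear algebra. A minimal sketch, using the standard ZF precoder (the pseudo-inverse of the multi-user channel, with one row per client); the channel values are randomly generated for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_users = 4, 2
# multi-user downlink channel: one row per client, one column per AP antenna
H = rng.normal(size=(n_users, n_tx)) + 1j * rng.normal(size=(n_users, n_tx))

# zero-forcing precoder: W = H^H (H H^H)^-1
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
effective = H @ W   # what each client hears of each precoded stream

# diagonal entries: each client's own stream; off-diagonal: the steered nulls
assert np.allclose(effective, np.eye(n_users), atol=1e-10)
```

In this idealized (perfect-CSI, noiseless) case the nulls are exact; in practice quantized feedback and channel aging leave residual self-interference, which is why the MU-MIMO feedback uses higher-precision angles.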
Further complicating the DL MU-MIMO precoding algorithm, the transmitter must choose which measure of throughput to maximize. With a single user, maximum data rate under a given error rate constraint would be the usual parameter, but with multiple users it is possible to weight each user’s throughput in the algorithm.
Most systems just sum throughput over all users with equal weighting, but this can result in favoring high-rate connections at the expense of lower-rate clients, which may be undesirable, especially when quality of service (QoS) is considered.
Scheduling DL MU-MIMO multiple-transmit opportunities

When the precoding matrices are known, and good multi-user groups identified, frames buffered for transmission must be grouped to ensure optimal throughput.
The matching process becomes quite complicated, as the QoS enhancements originally from 802.11e require the AP to maintain four transmit buffer queues, one for each access category of traffic.

802.11ac User Frame Selection and Precoding

802.11ac takes this into consideration, explicitly allowing the AP to pull forward the transmission of lower-priority traffic, if a transmit opportunity (TXOP) was legitimately won for the primary frame to be transmitted. The traffic bundled with the primary frame may jump the queue and get transmitted before higher-priority frames, but these frames don’t suffer, as they would not have been able to use the TXOP with the primary frame.
For an example of the power of properly-scheduled DL MU-MIMO, consider an AP with eight antennas serving a client with only one antenna.
Normally, only a single stream will be practicable, and while some of the extra antennas on the AP can be used to improve the SNR (with beamforming, STBC, and MRC), much of the potential from the AP’s extra antennas will be wasted.
But this effect can be mitigated by MU-MIMO: now the AP can serve up to eight such clients in the same time interval.
MU-MIMO and techniques with similar goals, such as orthogonal frequency division multiple access (OFDMA) – where different clients utilize non-overlapping subsets of OFDM subcarriers – have already been explored in cellular networks, but the focus there has been on enabling simultaneous transmissions from several clients to the same base station. In 802.11ac, DL MU-MIMO allows the AP to transmit simultaneously to a number of clients.
The significant constraint on this technique is that the total number of spatial streams supported must not exceed the number of antennas transmitting from the AP, and the standard adds several further constraints: no more than four clients can be targeted simultaneously, no client can use more than four streams, and all streams in a DL MU-MIMO transmission must use the same MCS.

Modulation and rates

The 802.11ac amendment continues to extend the complexity of its modulation techniques. Building on the rates up to 64-state quadrature amplitude modulation (QAM) of 802.11n, it now extends to 256-QAM. This means that each RF symbol represents one of 256 possible combinations of amplitude (the signal power) and phase (a shift from the phase of the reference signal).
The diagram below illustrates how this complicates the task of encoding and decoding each symbol – there’s very little room for error, as the receiver has to discriminate between 16 possible amplitude levels and 16 phase-shift increments – but it increases the amount of information each symbol represents from 6 to 8 bits when comparing the top 802.11ac rate to 802.11n (before the 5/6 coding rate is applied, which is the same in both examples).

802.11ac Modulation Constellation Diagrams

While the 256-QAM 5/6 modulation provides a higher raw-data top speed, the table of available PHY rates is very long, as with 802.11n, to account for various other options.
The key determinants of PHY data rate are:
1. Channel width. We discussed this above. 802.11ac has options for 20 MHz, 40 MHz, 80 MHz and 160 MHz.
2. Modulation and coding. All the earlier options are still available, and are used if SNR is too low to sustain the highest rates. But in the MCS table, the canon of 802.11n is extended to add 256-QAM options with coding of 3/4 and 5/6.
3. Guard interval. Unchanged from 802.11n, the long guard interval of 800 nsec is mandatory while the short guard interval of 400 nsec is an available option. The guard interval is the pause between transmitted RF symbols; it prevents multipath reflections of one symbol from arriving late and interfering with the next symbol.
Since light travels at about 0.3 meter/nsec, a guard interval of 400 nsec would work where the path taken by the longest reflection is no more than 120m longer than the shortest (often the direct) path. Experience with 802.11n shows that the 400 nsec option is generally safe to use for enterprise WLANs.
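These three determinants combine multiplicatively. A quick sketch of the PHY rate formula, using the data-subcarrier counts and symbol timings defined in the standard (52 per 20 MHz, 3.2-µs symbol plus guard), reproduces the familiar headline rates:

```python
# 802.11ac PHY data-rate formula:
# rate = data subcarriers x bits/subcarrier x coding rate x streams / symbol time
# Subcarrier counts per channel width are from the standard.

DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}

def phy_rate_mbps(width_mhz, bits_per_symbol, coding, streams=1, short_gi=True):
    """PHY rate in Mbps; symbol time is 3.2 us plus a 0.4 or 0.8 us guard."""
    symbol_us = 3.6 if short_gi else 4.0
    return DATA_SUBCARRIERS[width_mhz] * bits_per_symbol * coding * streams / symbol_us

# 802.11n top single-stream 20-MHz rate: 64-QAM (6 bits), coding 5/6 -> 72.2 Mbps
print(phy_rate_mbps(20, 6, 5/6))
# 802.11ac MCS8 at 20 MHz: 256-QAM (8 bits), coding 3/4 -> 86.7 Mbps
# (MCS9, 256-QAM 5/6, is not defined for a single stream at 20 MHz)
print(phy_rate_mbps(20, 8, 3/4))
# 802.11ac MCS9 at 160 MHz, single stream -> 866.7 Mbps
print(phy_rate_mbps(160, 8, 5/6))
```

Multiplying by up to eight spatial streams gives the theoretical 6.93-Gbps ceiling of the amendment.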

802.11ac Selected Data Rates in Mbps

Increased coding density, in terms of bits per second per hertz of spectrum, comes at a price: the required signal level for good reception increases with the complexity of modulation and the channel bandwidth.
The graph below shows, for instance, that whereas -64 dBm was sufficient for the top rate (72 Mbps) of 802.11n in a 20-MHz channel, the requirement rises to -59 dBm for the top rate (86 Mbps) of 802.11ac, single-stream in a 20-MHz channel, and to -49 dBm for the top rate (866 Mbps) in a 160-MHz channel.

802.11ac Required Receiver Sensitivity

Adjacent channel interference requirements also become more difficult to meet with the higher rates of 802.11ac. This trend was apparent with 802.11n, where using adjacent channels noticeably affects the SNR, and the 256-QAM 5/6 rate requires some 8 dB more adjacent channel isolation than the equivalent case for 802.11n.
Modulation in 802.11ac is simplified compared with the original 802.11n, because equal modulation is now assumed (where multiple streams are used, they all have the same MCS modulation). It was theoretically possible in 802.11n for each spatial stream of a multistream transmission to use a different modulation, allowing some streams to use lower order modulation schemes depending on the SNR of the path. But unequal modulation was not included in Wi-Fi Alliance certifications, and current 802.11n devices don’t support it, so it was dropped for 802.11ac.
Both the binary convolutional code (BCC) and low-density parity check (LDPC) methods of forward-error correction are defined for the new rates, as for 802.11n rates. The former is mandatory, while the latter is optional. Although it is a relatively new technique, LDPC offers an improvement of around 2 dB over BCC at a packet error rate of 10^-2 for 1,000-byte packets.
This worthwhile improvement can make the difference between moving to the next-higher order modulation rate (on the graph above), or alternatively, at the same modulation rate it can significantly reduce error packets.
MAC changes
There are few MAC changes in 802.11ac, which primarily introduces a faster PHY layer. But improvements are made in a number of areas.
Frame aggregation, A-MPDU, A-MSDU
A client (or AP) must contend for the medium (a transmit opportunity on the air) with every frame it wishes to transmit. This results in contention, collisions on the medium and back-off delays that waste time that could be used to send traffic. 802.11n introduced mechanisms to aggregate frames and thus reduce the number of contention events.
Many tests have shown the effectiveness of reducing contention events in prior 802.11 standards. For instance, in 802.11g, a given configuration can send 26 Mbps of data using 1,500-byte frames, but when the frame length is reduced to 256 bytes, generating 6x the number of frames, throughput drops to 12 Mbps.

With MAC-layer aggregation, a station with a number of frames to send can opt to combine them into an aggregate frame (MAC MPDU). The resulting frame contains less header overhead than would be the case without aggregating, and because fewer, larger frames are sent, the contention time on the wireless medium is reduced.
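The saving can be modelled simply: every medium access pays a roughly fixed airtime cost (contention, preamble, interframe spaces, acknowledgement) regardless of how much data it carries. The sketch below uses assumed, illustrative numbers – a 300-Mbps PHY rate and 120 µs of per-access overhead – not measured values, but it shows the shape of the gain.

```python
# Why aggregation helps: fixed per-access overhead is amortized over
# more payload. The overhead_us and rate figures are illustrative
# assumptions, not measurements.

def effective_throughput_mbps(phy_rate_mbps, payload_bytes, frames_per_txop,
                              overhead_us=120):
    """Payload bits delivered per access divided by total airtime."""
    data_us = frames_per_txop * payload_bytes * 8 / phy_rate_mbps
    return frames_per_txop * payload_bytes * 8 / (overhead_us + data_us)

# One 1,500-byte frame per access vs. 32 frames aggregated per access:
print(effective_throughput_mbps(300, 1500, 1))    # 75.0  -> overhead dominates
print(effective_throughput_mbps(300, 1500, 32))   # ~274  -> close to PHY rate
```

The higher the PHY rate, the shorter the data portion and the larger the relative cost of each access, which is why aggregation matters even more in 802.11ac than it did in 802.11n.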
Two different mechanisms are provided for aggregation, known as Aggregated MSDU (A-MSDU) and Aggregated MPDU (A-MPDU).

802.11ac MAC Frame Aggregation

In the A-MSDU format, multiple frames from higher layers are combined and processed by the MAC layer as a single entity.
Each original frame becomes a subframe within the aggregated MAC frame. Thus this method must be used for frames with the same source and destination, and only MSDUs of the same priority (access class, as in 802.11e) can be aggregated.
An alternative method, A-MPDU format, allows concatenation of MPDUs into an aggregate MAC frame. Each individual MPDU is encrypted and decrypted separately, and is separated by an A-MPDU delimiter which is modified for 802.11ac to allow for longer frames.
A-MPDU must be used with the block-acknowledgement function introduced in 802.11n, which allows a single ack frame to cover a range of received data frames. It is particularly useful for streaming video and other high-speed transmissions, but when a frame is corrupted or lost, there will be a delay before a non-acknowledgement is received and re-transmission can be accomplished. This is rarely a problem with broadcast video, where re-transmission is often not feasible anyway given the time constraints of the media, but it may be problematic for other real-time applications.
In 802.11ac the A-MSDU limit is raised from 7,935 to 11,426 B, and the maximum A-MPDU size from 65,535 to 1,048,576 B. In the short term, the practical constraint on PPDUs is likely to be a 5.484-msec limit on time-on-the-air: at 300 Mbps, a 200-KB A-MPDU would take the maximum 5.484 msec on the air.
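The worked example is easy to verify: at moderate rates the 5.484-ms airtime cap, not the 1-MB A-MPDU field limit, bounds the aggregate size.

```python
# Largest aggregate that fits in one maximum-length (5.484 ms) PPDU.

MAX_PPDU_MS = 5.484

def max_ampdu_bytes(phy_rate_mbps):
    """Bytes transferable in 5.484 ms at the given PHY rate."""
    return phy_rate_mbps * 1e6 * MAX_PPDU_MS * 1e-3 / 8

print(max_ampdu_bytes(300) / 1024)   # ~200 KB at 300 Mbps, as in the text
print(max_ampdu_bytes(6900) / 1024)  # only at multi-gigabit rates does the
                                     # 1-MB A-MPDU field limit bind first
```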
It is possible to combine the techniques, combining a number of MSDUs and A-MSDUs in an A-MPDU. Theoretical studies have shown that this improves performance over either technique used alone. However, most practical implementations to date concentrate on A-MPDU, which
performs well in the presence of errors due to its selective retransmission ability.
Encryption and the GCMP option
A new encryption protocol, known as Galois Counter Mode Protocol (GCMP), is being introduced for new, high-rate 802.11 applications. GCMP is defined as an option in 802.11ad, the 60-GHz-band amendment, and this forms the basis for its inclusion in the 802.11 baseline (in the next roll-up revision of 802.11) and its availability for 802.11ac.

GCMP is a good addition to the standard because it has better performance than CCMP, the current encryption protocol. Both protocols are block encryption ciphers that offer confidentiality, so hackers cannot decrypt the data; authentication, to ensure it comes from the authenticated peer; integrity, so tampering with the message is detected; and replay protection, so that old or doctored messages retransmitted by a hacker are rejected by the recipient. Both use 128-bit keys and generate the same 24-bytes-per-frame packet format and overhead.
But GCMP requires only one pass to encrypt a given data block, and can encrypt and decrypt blocks in parallel. This improves on CCMP where two sets of calculations are required to encrypt a block, and each data block in a session must be processed in sequence, as the result of one block is used as an input to the next. This means GCMP is better suited to very high-rate data encryption and decryption.
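The parallelism comes from counter mode itself. The toy sketch below is not real GCMP (which uses AES and adds a Galois-field authentication tag); it stands in a hash for the block cipher purely to show the structural point: each block's keystream depends only on (key, nonce, counter), never on the previous ciphertext block, so blocks can be processed in any order or concurrently.

```python
# Toy counter-mode cipher (illustration only, NOT real GCMP/AES).
import hashlib

def keystream_block(key, nonce, counter):
    """Stand-in for one block-cipher invocation: hash of key||nonce||counter."""
    return hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()

def ctr_crypt(key, nonce, data, block=32):
    """XOR each block with its independently derived keystream.

    No block's result feeds into the next, so this loop could run in
    parallel - unlike CBC-style chaining, where block n needs block n-1.
    """
    out = bytearray()
    for i in range(0, len(data), block):
        ks = keystream_block(key, nonce, i // block)
        out += bytes(a ^ b for a, b in zip(data[i:i + block], ks))
    return bytes(out)

key, nonce = b"k" * 16, b"n" * 12
msg = b"parallel-friendly encryption!" * 3
ct = ctr_crypt(key, nonce, msg)
assert ctr_crypt(key, nonce, ct) == msg   # the same operation decrypts
```

CCMP's CBC-MAC component is what forces sequential processing: the authentication tag over block n cannot be computed until block n-1 is done.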
GCMP is expected to be phased in over several years. Silicon will need to be redesigned, for both clients and APs, so CCMP and GCMP will overlap in practical networks for a long while.
There has been speculation that GCMP will be required as data speeds increase and CCMP implementations may not be able to keep up, but whether that point is reached at 10 Gbps
(reference 802.11-10/0438r2) or earlier is not clear today. It is possible that GCMP will never be required for 802.11, and that we will never see practical implementations, but it is established as a new option if required.
Power-save enhancements
Many 802.11 devices are still battery-powered, and although other components of a smartphone, notably the display, still tax the battery much more than the Wi-Fi subsystem, power-saving additions are still worthwhile.
The new feature is known as VHT TXOP power save. It allows a client to switch off its radio circuit after it has seen the AP indicate that a transmit opportunity (TXOP) is intended for
another client.
This should be relatively uncontroversial, except that a TXOP can cover several frames, so the AP must ensure that, having allowed a client to doze at the beginning of a TXOP, it does not then transmit a frame for that client. Similarly, if a TXOP is truncated by the AP, it must remember that certain clients will still be dozing and not send new frames to them.
To allow clients to quickly identify if a frame is addressed to them, a new field called partial association ID (partial AID) or Group ID for MU-MIMO is added to the preamble. If the partial AID field is not its own address, the client can doze for the remainder of the TXOP.
One reason to introduce VHT TXOP power save is that the frames are getting longer. 802.11ac has extended frame lengths and now allows for frames approaching 8 KB in length, and aggregated frames (A-MPDU) to 1 MB. Some of this is accounted for by the increased rates, so time on the medium will not be extended pro-rata, but video and large file transfers, two of the more important use cases, drive large numbers of long frames (possibly aggregated as A-MSDU or A-MPDU frames at the Wi-Fi layer) so it may well be worthwhile switching off a radio while large numbers of frames are being delivered to other clients.
The other major power-saving feature of 802.11ac is its high data rates. Power consumption in 802.11 is heavily dependent on the time spent transmitting data, and the higher the rate, the shorter the transmission burst. The time spent receiving frames is also reduced by high rates, but not so significantly.
Other features, like beamforming contribute to higher rates by increasing the SNR at the receiver for any given scenario, so they can also be said to contribute to better battery life.
And general silicon advances in feature miniaturization and power-save techniques will all be adopted in new chips implementing 802.11ac.
Extended basic service set load element
802.11 already defines a load element that allows the AP to advertise its current load in the beacon and probe responses. The load element includes the number of clients on the AP, and also a measure of channel utilization. This is useful for client-initiated load balancing. When a client sees a number of APs, it can choose to associate with one with fewer clients or lower channel utilization, as that AP may offer better performance.
It also offers a form of soft call admissions control: if an application can signal its bandwidth requirements to the Wi-Fi chip, it can avoid associating with APs with insufficient bandwidth.

MU-MIMO introduces another dimension to AP load. It is not sufficient to indicate channel utilization, so an extended load element includes information about the number of multiuser-capable clients, the underutilization of spatial streams in its primary channel, as well as utilization in wider 40, 80 and 160-MHz channels, if applicable.
An 802.11ac client, reading the extended load element, can make a more informed decision about which AP to choose for association.
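A client's selection logic can be sketched as below. The dictionary fields are simplified stand-ins for the load element's contents, not the exact IEEE field names, and the capacity heuristic is an assumption for illustration.

```python
# Sketch of client-initiated load balancing from advertised AP load.
# Field names and the spare-capacity heuristic are hypothetical
# simplifications of the (extended) BSS load element.

def choose_ap(aps, need_mbps):
    """Prefer the AP with the most spare capacity that can meet the need."""
    def spare(ap):
        return ap["capacity_mbps"] * (1 - ap["channel_utilization"])
    candidates = [ap for ap in aps if spare(ap) >= need_mbps]
    return max(candidates, key=spare)["ssid"] if candidates else None

aps = [
    {"ssid": "ap-1", "capacity_mbps": 400, "channel_utilization": 0.80},
    {"ssid": "ap-2", "capacity_mbps": 400, "channel_utilization": 0.30},
]
print(choose_ap(aps, need_mbps=50))  # ap-2: lower utilization, more headroom
```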
Co-existence and backwards compatibility
Because 802.11ac includes new, higher-speed techniques, its transmissions are by definition not decodable by older 802.11 equipment. But it is important that an 802.11ac AP, adjacent
to older APs, is a good neighbour.
802.11ac has a number of features for co-existence, but the main one is an extension of an 802.11n technique: a multipart RF header that uses 802.11a and 802.11n modulation.
Non-802.11ac equipment can read these headers and identify that the channel will be occupied for a given time, and therefore can avoid transmitting simultaneously with the very high throughput frame.
Although 802.11n defines a greenfield mode for non-backwards-compatible operation, it has never been implemented in practical networks and all 802.11ac APs are expected to run in mixed mode.
The main differences between 802.11n and 802.11ac are the new, wider channels used. If an 802.11ac device started transmitting in 80 MHz, older 802.11 stations in the vicinity would not be able to recognize the transmissions or decode them. Adding an 802.11n-like preamble solves this problem. But the stipulation that 802.11ac operates only in the 5-GHz band, not at 2.4 GHz, makes it easier, as only 802.11a and 802.11n need to be accounted for as legacy, not 802.11b.

802.11ac VHT Preamble Format

The 802.11ac preamble includes a number of training fields. It starts with L-STF, L-LTF and L-SIG, respectively the legacy short training field, long training field and signal field.
To allow for a wide channel, for instance 80 MHz, overlaying a neighbouring 20-MHz channel, it is necessary to transmit training fields in all possible channels. But with the wonders of OFDM, this can be done simultaneously in the same time slot so the frame does not become over-lengthy.
The L-STF and L-LTF allow the receiver to synchronize with the signal, so the rest of it can be correctly decoded. The final part of the legacy preamble, the SIG, includes information on the length of the frame. This is the part that allows legacy stations to set their network allocation vector (NAV), part of the existing medium access protocol.
Following the legacy preamble is the very high throughput (VHT) preamble. This again consists of STF, LTF and SIG sequences, but modulated in the particular channel being used by the AP.
The VHT-SIG-A field includes the channel bandwidth, number of spatial streams, MCS information (for single-user MIMO) and other data for use in demodulating the frame. This field
is transmitted as 20-MHz symbols, replicated over all underlying 20-MHz channels.
The VHT-STF field is used so the receiver can normalize the OFDM subcarriers in the subsequent transmission. To allow for non-contiguous 160-MHz channels, the field is repeated
in each 80-MHz channel.
VHT-LTF fields are next, one per spatial stream to be used for transmission. LTF fields allow the receiver to calculate the multipath characteristics of the channel and apply them to the MIMO algorithm.
Finally, a second signal field, VHT-SIG-B, is transmitted. This includes the length of the frame and more information about the distribution of spatial streams if MU-MIMO is to be used.
There are various references in the IEEE document to “apply phase rotation for each 20-MHz sub-band”. This is a technique to avoid a high peak power in the transmitter.
By rotating the phase per sub-band, the peak power output is reduced. The technique is already used in 802.11n 40-MHz channels.
When an AP is configured for 802.11ac and hence using an 80 or 160-MHz channel, it can act as an AP in 20-MHz channels by using non-HT duplicate mode. This allows it to transmit the same frame on several channels simultaneously.
Protection, dynamic bandwidth and channelization

When an 80-MHz 802.11ac network operates in the neighbourhood of an older AP, or a network that’s only using a 20-MHz or 40-MHz channel, it must avoid transmitting simultaneously with a station in the neighbouring network.
How can this be achieved without permanently reducing its channel bandwidth? The answer is in three parts. First, how can a station (AP or client) that wants to operate at 80 MHz warn older stations to stay off the air while it is transmitting in 802.11ac mode, which they can’t decode? Then, how does the 802.11ac station know that the full channel is clear of other stations’ transmissions? And finally, how can bandwidth usage be optimized if, for instance, an older station is transmitting in just 20 MHz of the 80-MHz 802.11ac channel?

802.11ac Dynamic Bandwidth Operation

Sending a warning to other stations to stay off the air is achieved by RTS frames. The 802.11ac station sends out multiple parallel RTS protection frames in each 20 MHz of its 80-MHz channel, at rates an 802.11a or n client can understand.
The multiple RTS frames use duplicate, quadruplicate or octuplicate transmission. Before sending RTS, it performs clear channel assessment (CCA) to make sure it can’t hear any transmissions in progress. On receiving the RTS frame, older stations know how long to wait for the 802.11ac transmission.
Next, the recipient runs a clear channel assessment in each of the 20-MHz channels. The RTS frame format is extended so the originator can indicate its channel options, and the recipient replies with a CTS response indicating whether it hears transmissions in progress from any neighbouring network. If not, the originator transmits the data frame using the full bandwidth – 80 MHz in this case.
However, if the recipient does find transmissions in progress on any secondary channel, it can still respond with CTS, indicating which channels containing the primary are clear (20 MHz or 40 MHz); the originator can then send its transmission using only the usable part of the 80-MHz channel.
This may force a reduction in channel from 80 MHz to 40 or even 20 MHz, but the frame will be transmitted using air-time that would otherwise be unused. This feature is called dynamic bandwidth operation.
The alternative to dynamic bandwidth operation is static bandwidth operation. If this is used, the recipient has only one choice to make. If the whole channel – 80 MHz in this case – is clear, it proceeds with CTS, but if any part of the channel is busy, it does not respond and the originator must start again with a new RTS frame.

802.11ac Dynamic Bandwidth and Channelisation Examples

Dynamic bandwidth optimization is constrained by 802.11ac’s definitions of primary and secondary channels. For each channel, such as an 80-MHz channel, one 20-MHz channel (sub-channel) is designated as primary. This is carried through from 802.11n, and in networks with a mix of 802.11ac and older clients, all management frames are transmitted in this channel so all clients can receive them.
The other half of the 40-MHz channel containing the primary is the secondary 20-MHz channel, and the 40 MHz of the wide channel that does not contain the primary 20-MHz channel is the secondary 40-MHz channel. Data transmissions can use the primary 20-MHz channel, the 40-MHz channel including the primary 20-MHz channel, or the full 80-MHz channel, but no other channel combinations.
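The primary/secondary rule reduces dynamic bandwidth selection to a short decision: widen outwards from the primary only as far as the secondary channels are idle. A minimal sketch:

```python
# Dynamic bandwidth rule for an 80-MHz channel: a transmission may use
# the primary 20, the 40 MHz containing the primary, or the full 80 MHz,
# depending on CCA results for the secondary channels.

def usable_bandwidth_mhz(secondary20_busy, secondary40_busy):
    """Widest permitted channel given the busy state of the secondaries."""
    if secondary20_busy:
        return 20           # only the primary 20-MHz channel
    if secondary40_busy:
        return 40           # primary 40 MHz (primary 20 + secondary 20)
    return 80               # full channel

print(usable_bandwidth_mhz(False, False))  # 80
print(usable_bandwidth_mhz(False, True))   # 40
print(usable_bandwidth_mhz(True, False))   # 20
```

Note the asymmetry: a busy secondary 20 forces the whole transmission down to 20 MHz even if the secondary 40 is idle, because a 40-MHz transmission must include the primary 20-MHz channel.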
Finally, the introduction of wideband channels, especially the 80 + 80-MHz channels, requires some changes to the channel switch announcement (CSA) frame. CSA is used by an AP to inform its associated clients when it is about to switch channels after radar has been detected in the current channel: it was first introduced in 802.11h as part of DFS.
Otherwise, the operation of DFS is unchanged with 802.11ac.

802.11ad and fast session transfer
802.11ac is not the only very high throughput (VHT) protocol making its way through the IEEE 802.11 standards process.
The 802.11ad task group is just finishing its work, scheduled for completion in December 2012.
802.11ad uses the 60-GHz band, a globally-available spectrum. The standard includes four defined channels of 2.16 GHz, from 57-66 GHz. Only three can be used in the U.S.
but the fourth is available in most other countries. Because of the very large channel width, PHY rates are defined up to 4.6 Gbps for single carrier (SC) and 7 Gbps for OFDM modulation.

While 802.11ad is indeed very high throughput, it is also short-range. Generally we expect a range of about 10 meters, and even that will require beamforming with high-gain (13 dB+) antennas. The use of high-gain antennas and beamforming requires a node discovery protocol.
Since some nodes won’t be able to hear each other with an omni antenna pattern, but high-gain antennas are directional, the idea is that each node in turn sweeps through different sectors with its antenna, pointing a beam on different arcs until it has swept a complete circle.
Once two nodes have discovered each other in this way, they can optimize their beamforming parameters in a fine-tuning mode. These techniques are interesting because they may be applicable, eventually, to 802.11ac if beamforming is used to extend range.
At both the PHY and MAC layers, 802.11ad is very different from other 802.11 standards. This is because different techniques are applicable for 60 GHz, and also because the standard has its origins in the WiGig industry group.
However, the standard is careful to use the same higher-level architecture as 802.11, to maintain the 802.11 user experience, including the concept of an AP and basic service set (BSS), authentication and security. This enables a feature of 802.11ad that directly affects 802.11ac called fast session transfer (FST) or multiband operation. FST allows a pair of devices carrying a session to switch the connection seamlessly from a 60-GHz (802.11ad) link to an 802.11ac link at 5 GHz and vice versa.

802.11ac to 802.11ad Fast Session Transfer

There are several options in FST, depending on whether the interfaces have the same MAC address and common MAC management layers for the two links, in which case the switch can be completely transparent, or different MAC interfaces and addresses, which are more complicated (non-transparent) and slower. Also, some devices will be able to maintain simultaneous links in the two bands while others will not.
FST is important because it allows home networks to be built from a combination of 802.11ac and 802.11ad devices.
Short-range, high-rate communication across rooms will be handled by 60-GHz links, but if there are marginal conditions, the switch to 5-GHz is fast, and handled by lower protocol layers.
More complex networks can use a tunnel mode where packets from one type of connection can be forwarded on a second link. Consumer electronics manufacturers are implementing both 802.11ac and 802.11ad to enable fully wireless home multimedia networks.
History and timeline
Most observers agree that the 802.11ac amendment has, thus far, moved faster and more smoothly than the original 802.11n. This is partly due to the evolutionary nature of the amendment: it essentially uses the same techniques as 802.11n, extending them rather than synthesizing the whole MIMO structure from scratch. The IEEE also made a conscious decision to change the process.

For 802.11ac, the initial document was a framework spec, listing an outline for each feature and building up detail feature by feature. This avoided the extra-curricular activities of 802.11n,
where companies formed ad-hoc alliances and sought to deliver fully-formed specification documents to the IEEE task group as the initial proposal.
The result was that voting members whose proposals were down-selected and who were not part of the winning consortium tended to view the whole proposal as alien, resulting in continued opposition all the way to the sponsor ballot stage. The new format has allowed more of the specification to be written from consensus, and this should continue to pay off in a smoother passage through ratification.

802.11ac Mandatory and Optional Features

Regulatory limitations

Thus far, Wi-Fi has done an excellent job of creating an effectively global standard. A PC or other client device can move from continent to continent and receive consistent service, as far as the consumer is concerned. Below the surface, there are national differences concerning allowed channels and power levels, but these are accommodated in the 802.11 standard and are not significant enough to affect performance.
However, 802.11ac uses the 5-GHz spectrum, which is not quite unified globally, and as the channel width increases to 80 and 160 MHz, differences between national regulations will become more important.

802.11n experience

Over the four to five years since 802.11n devices became commercially available, we have learned a good deal about MIMO and technology adoption that can help predict how
802.11ac may roll out.
The most significant revelation is that MIMO SDM works widely and effectively, at least for indoor wireless. Even where there is a good line of sight, there seems to be sufficient
multipath that multi-stream connections offer good throughput gains nearly all the time.
Secondly, 40-MHz channels are very useful in the 5-GHz band. Most current enterprise WLANs use 20-MHz channels at 2.4 GHz and 40-MHz channels at 5 GHz with dual-radio APs. The only exception is that with very high user or device density, higher overall throughput is achieved by load-balancing clients across many 20-MHz channels rather than a smaller number of 40-MHz channels.
The next significant success is MAC frame aggregation, A-MPDU. The ability to contend once to get on the air, then send multiple frames back to back, is very helpful for high-rate traffic, chiefly video, which is usually responsible for high bandwidth utilization. Where high loads are due to medium-rate traffic from many clients, rather than high-rate traffic from just a few clients, A-MPDU is less effective, but the latter is the more prevalent case.
Several 802.11n features have not yet been widely deployed. The most disappointing is beamforming. While several chip vendors implemented implicit beamforming, most gains from it are only realized with accurate receiver feedback, and while it is in the standard, explicit beamforming between different vendors’ equipment is not yet a reality.
802.11ac streamlines the explicit beamforming section, removing many options, and requires explicit feedback for MU-MIMO, and we hope this will spur vendors’ implementation plans.
PCO is another feature that hasn’t been implemented, but it seems the various compatibility and coexistence mechanisms are quite adequate for mixed-mode operation of 802.11n with older clients and in the presence of older networks.
A third technique is space-time block coding (STBC). Again, this is modified for MU-MIMO in 802.11ac and may see wider implementation as a result.
For several years, APs and PCs were dual-band, while consumer devices like gaming platforms, barcode scanners and smartphones were 2.4 GHz-only. This has changed over the past year, as dual-band APs for residential use and devices such as high-end smartphones are becoming more common. Tablet devices are already nearly all dual-band. This bodes well for 802.11ac, as a pool of 5-GHz devices already exists.

802.11ac Performance Improvement

In its development and adoption cycle, 802.11n has quickly become the industry standard for enterprise and consumer equipment. Nearly all 802.11 equipment now uses 802.11n silicon, a sign that chip vendors are putting all their development efforts into 802.11n.
Even single-antenna, highly cost-sensitive devices like smartphones now use 802.11n, because innovations in low-power operation and large-scale production make them cost-competitive with older technology.

802.11ac deployment

It is worth taking some time to consider how 802.11ac may affect the Wi-Fi market over the next few years. No doubt there will be similarities to the 802.11n roll-out, but also differences.
To begin with, it is better to think of 802.11ac as a set of tools that can be used individually or in combination, depending on the situation, rather than a monolithic feature. It gives us significant initial improvements, but also a number of dimensions that won’t be implemented for a while, and we may never see a single product that has 160-MHz channels or eight antennas. But that doesn’t detract from the standard’s value.
Silicon vendors are already shipping dual-band chips with 802.11ac at 5 GHz and 802.11n for 2.4 GHz. It is clear that they will move development of new features – power-saving, SoC integration, new production processes – to 802.11ac, and in a few years these will become more cost-effective for equipment vendors.
802.11ac will become the mainstream Wi-Fi technology, but there is likely to be a wider spectrum of chip options for residential and enterprise use and between client devices and APs.
80-MHz channels should be widely used in residential networks. The home Wi-Fi environment tends to revolve around a single AP, with relatively little high-power interference from neighbouring networks, so the low number of 80-MHz channels shouldn’t be an issue.
In enterprise networks, the five available 80-MHz channels, of which three require DFS, should be sufficient for overlapping APs to provide contiguous coverage. Three-channel plans have been used in the 2.4-GHz band for years, although some networks will have reasons to prefer a higher number of smaller-width channels. Although widespread adoption of 160-MHz channels is unlikely, special applications that use this option will likely emerge.
We can also count the antennas. The most significant leap for 802.11n was to MIMO with two or three driven antennas and two spatial streams. This happened right at the beginning, with the first wave of 802.11n equipment. Subsequent progress was slower.
Most enterprise APs today have three antennas supporting two or three spatial streams, although 802.11n extends to four antennas and four streams. While the standards provide
step increases in capability, implementation is slower and more gradual.
When considering the number of driven antennas and spatial streams afforded by 802.11ac, it is unlikely we will see the maximum figures in mainstream equipment for quite a while, as they
translate immediately into increased complexity, cost, size and power consumption. But when new applications demand higher performance, the standards will be ready.
The obvious new bandwidth-hungry application is residential video. Driving uncompressed or lightly compressed TV signals over wireless rather than cables is within the reach of 802.11ac, and depending on the relative success of 802.11ad at 60 GHz, it may prove to be an enormous market for the technology.
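A back-of-envelope calculation shows why this is within reach: an uncompressed 1080p60 stream at 24 bits per pixel needs roughly 3 Gbit/s, and even light compression brings it comfortably within 802.11ac rates (the 4:1 ratio below is purely illustrative):

```python
def video_bitrate_gbps(width, height, fps, bits_per_pixel, compression=1.0):
    """Raw video bitrate in Gbit/s, optionally reduced by a compression ratio."""
    return width * height * fps * bits_per_pixel / compression / 1e9

uncompressed = video_bitrate_gbps(1920, 1080, 60, 24)            # ~2.99 Gbit/s
light = video_bitrate_gbps(1920, 1080, 60, 24, compression=4.0)  # ~0.75 Gbit/s
print(uncompressed, light)
```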
Even without a significant new application area, existing uses and users of 802.11n require more speed. Enterprises, schools and universities, conference centres and hotels are seeing more high-bandwidth demand, especially for video and in high-density areas. Here the MU-MIMO features will allow a single AP to serve many clients, and we may see super-size APs with many more antennas, developed especially for such areas.
Explicit beamforming is the one significant feature of 802.11n that did not live up to its promise. This is widely thought to be due to the breadth of implementation options and the preference of each chip developer for their own algorithm, but regardless of the reason there is hope that the simpler standard in 802.11ac will drive stronger adoption.


802.11ac takes all the techniques the Wi-Fi industry has learned up to 802.11n, and extends them. It is relatively uncontroversial to say that in a few years, Wi-Fi will be synonymous with 802.11ac, or whichever name the Wi-Fi Alliance chooses for it.

The significant improvements are from wider channels, higher-rate modulation and higher-level MIMO, all evolutionary except the MU-MIMO option, but together they offer a top speed that is 10 times that of 802.11n.
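That factor of 10 can be checked from the standard OFDM rate formula: data subcarriers × bits per subcarrier × coding rate × spatial streams, divided by the symbol time. The figures below are the published maxima for each amendment:

```python
def phy_rate_mbps(n_streams, n_data_subcarriers, bits_per_sc, coding_rate,
                  symbol_us=3.6):
    """Peak PHY rate in Mbit/s; 3.6 us is the OFDM symbol with short guard interval."""
    return n_streams * n_data_subcarriers * bits_per_sc * coding_rate / symbol_us

# 802.11n maximum: 4 streams, 40-MHz channel (108 data subcarriers), 64-QAM, r=5/6
rate_11n = phy_rate_mbps(4, 108, 6, 5 / 6)    # 600 Mbit/s
# 802.11ac maximum: 8 streams, 160-MHz channel (468 data subcarriers), 256-QAM, r=5/6
rate_11ac = phy_rate_mbps(8, 468, 8, 5 / 6)   # ~6933 Mbit/s
print(rate_11n, rate_11ac, rate_11ac / rate_11n)
```

The ratio comes out at about 11.6, so "10 times" is, if anything, slightly conservative at the top end.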
At this stage it is difficult to see a single device using all the options in the standard, but that is not the point, as Wi-Fi is branching in different directions and no doubt there will be applications for all of these new options.
The Wi-Fi Alliance is under-reported in this paper because its work takes place a little later than the IEEE’s, selecting parts of the standard and developing from them an interoperability certification.
But it plays a crucial role, as developers will build equipment to the eventual Wi-Fi Alliance certification rather than the IEEE standard. In the same way as 802.11n certification rolled out in two phases, 802.11ac will generate at least two Wi-Fi Alliance certifications over time.
In residential settings, we expect 802.11ac to accelerate the home multimedia network, as it will have the bandwidth to support multiple simultaneous video streams. We expect to see TV monitors fitted with Wi-Fi connections, along with many other home media devices.
Features that improve SNR, chiefly beamforming, should extend the range of 802.11ac Wi-Fi and reduce coverage dead spots. It is difficult to quantify these improvements, but they could yield as much as 30% greater useful range.
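As a rough sanity check on that figure, an SNR gain converts into extra range through the path-loss exponent: if received power falls off as distance to the power n, range scales as 10^(gain_dB / (10·n)). The 3.5 dB gain and n = 3 below are illustrative assumptions for a typical indoor environment, not measured values:

```python
def range_gain(snr_gain_db, path_loss_exponent):
    """Multiplicative range increase from an SNR gain, assuming received
    power falls off as distance ** (-path_loss_exponent)."""
    return 10 ** (snr_gain_db / (10 * path_loss_exponent))

# A ~3.5 dB beamforming gain with an indoor path-loss exponent of 3
print(range_gain(3.5, 3.0))  # ~1.31, i.e. roughly 30% more range
```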
In enterprise networks, the higher rates and increased capacity of 802.11ac will break down the last remaining barriers to the all-wireless office. There should be enough capacity in an 802.11ac WLAN that users see equivalent performance to wired Ethernet.
We are already seeing applications such as wireless display projection from PCs to monitors and displays becoming popular in niches such as education, and with the increase in rates from 802.11n to 802.11ac that is bound to continue.
While beamforming will extend range in enterprises as in residential WLANs, the higher user density and slowly upgrading device base means it is unlikely AP distances will be increased substantially, except in specific cases.
Is 802.11ac the last word in Wi-Fi, at least at the physical layer? There is certainly a case for saying that it pushes most parameters to the limit – channel bandwidth, modulation, number of antennas and spatial streams, beamforming. There is some opportunity in MU-MIMO but it is difficult to see where significant improvements can be made in existing spectrum without some new invention. Nevertheless, 802.11ac provides plenty of runway. It will be several years
before chips and devices catch up with all the features in the standard, and by that time there will no doubt be many new developments signaling where the next wave of innovation should be directed.

Planning a Gigabit Wireless Network


Planning a Gigabit Wireless network requires careful consideration to ensure a reliable, high-performance network and the choice of appropriate technologies.
Some topics include:

Gigabit Wireless Technology

Site Survey

  • Does Line of Sight (LOS) exist?
  • Desktop Survey / feasibility check
  • Physical Survey
  • RF / Spectral Survey
  • Distances required to cover

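For the LOS check in particular, a desktop survey usually includes estimating first-Fresnel-zone clearance along the path. A minimal sketch (the frequency and distances below are hypothetical examples):

```python
import math

def fresnel_radius_m(d1_km, d2_km, freq_ghz):
    """Radius of the first Fresnel zone in metres at a point d1_km and
    d2_km from the two ends of the link, for a frequency in GHz."""
    total = d1_km + d2_km
    return 17.32 * math.sqrt(d1_km * d2_km / (total * freq_ghz))

# Midpoint clearance for a 1-km link at 5.8 GHz
print(fresnel_radius_m(0.5, 0.5, 5.8))  # ~3.6 m; keep at least ~60% of this clear
```

A path with visual line of sight can still underperform if obstacles intrude into this zone, which is why the physical survey matters as well as the desktop check.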
Choice of Technology:

Our expert team has over 18 years' experience in planning and deploying Gigabit Wireless Networks in over 65 countries, including indoor and outdoor wireless networks. Our team will be delighted to assist with all aspects of design, planning and deployment.

For more information or questions on Technologies for Gigabit Wireless Networking please Contact Us