Examining the usable bandwidth on a Gigabit Ethernet network
Examining how much actual throughput can be achieved on a Gigabit Ethernet based network and how much this increases by using Jumbo Frames. Also covered is how this relates to the throughput of a wireless link with Gigabit Ethernet interfaces.
Gigabit Ethernet Physical Layer
On a Fibre Optic Gigabit Ethernet Network (1000BaseSX, 1000BaseLX), the raw line rate is 1.25Gbps. This raw data rate is chosen to accommodate 8b10b Line Coding. Line coding is used to ensure “DC balance” of the data stream and to remove long runs of consecutive 0s and 1s, which makes the physical transceivers easier to design and implement and maximises the range capability of the fibre optic transceivers. When the 8b10b line coding is removed from the raw data stream by the Gigabit Ethernet chipset, this leaves an uncoded payload of exactly 1.0Gbps.
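As a quick check of the arithmetic, here is a minimal Python sketch (using only the figures from the paragraph above) of how the 8b10b coding overhead turns the 1.25Gbps line rate into a 1.0Gbps uncoded payload:

    # 8b10b line coding: every 8 payload bits are sent as 10 line bits,
    # so only 8/10 of the raw line rate is available for uncoded payload.
    raw_line_rate_bps = 1_250_000_000   # 1.25 Gbps raw rate on 1000BaseSX/LX fibre
    payload_rate_bps = raw_line_rate_bps * 8 / 10

    print(f"Uncoded payload rate: {payload_rate_bps / 1e9:.2f} Gbps")  # 1.00 Gbps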
On a copper based Gigabit Ethernet Network (1000BaseT), all four cable pairs are used simultaneously in both directions, through the use of echo cancellation with adaptive equalization and five-level pulse amplitude modulation (PAM-5). The symbol rate is identical to that of 100BASE-TX (125 megabaud).
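A rough sketch of how 1000BaseT reaches 1Gbps, assuming the commonly cited figure of 2 data bits carried per PAM-5 symbol per pair (the remaining code space being used for error correction, which is not stated in the paragraph above):

    # 1000BaseT: 4 pairs used simultaneously, 125 Mbaud per pair,
    # each PAM-5 symbol carrying 2 bits of user data per pair.
    pairs = 4
    symbol_rate_baud = 125_000_000
    data_bits_per_symbol = 2

    net_rate_bps = pairs * symbol_rate_baud * data_bits_per_symbol
    print(f"{net_rate_bps / 1e9:.1f} Gbps")  # 1.0 Gbps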
Gigabit Ethernet Net Data rate
The theoretical maximum bandwidth on a Gigabit Ethernet network is defined by a node being able to send 1 000 000 000 bits each second (bits per second, bps), that is one billion 1s or 0s every second. A byte of data consists of 8 bits, hence the net capacity of this Gigabit link is the capability to transfer 125 000 000 bytes per second (1 000 000 000 / 8), also termed Bps, bytes/sec or bytes/s.
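The bits-to-bytes conversion as a one-line sketch:

    bits_per_second = 1_000_000_000      # 1 Gbps
    bytes_per_second = bits_per_second // 8
    print(bytes_per_second)              # 125000000 bytes/s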
Frames, Preamble, Interframe Gap
In a real-world network, not all of the 125000000 bytes/second can be used to send data as there are multiple layers of overhead. Data transferred over an Ethernet based network must be divided into “frames”. The size of these frames regulates the maximum number of bytes sent together. The maximum frame size for Ethernet has been 1518 bytes for the last 25 years or more.
Each frame will cause some overhead, both inside the frame and, less well known, also on the “outside”. Before each frame is sent, a certain combination of bits must be transmitted, called the Preamble, which basically signals to the receiver that a frame is coming right behind it. The preamble is 8 bytes and is sent just before each and every frame.
When the main body of the frame (1518 bytes) has been transferred, the network devices want to send another one. Since we are not using the old CSMA/CD access method (used only for half duplex), the devices do not have to “sense the cable” to see if it is free – which would incur a time penalty – but the Ethernet standard defines that for full duplex transmissions there has to be a certain number of idle bytes before the next frame is sent onto the wire.
This is called the Interframe Gap and is 12 bytes long. So between all frames devices have to leave at least 12 bytes “empty” to give the receiver side the time needed to prepare for the next incoming frame.
This will mean that each frame actually uses:
12 empty bytes of Interframe Gap + 1518 bytes of frame data + 8 bytes of preamble = 1538 bytes
This means that each frame actually consumes 1538 bytes of bandwidth, and if we remember that there is room for sending 125000000 bytes each second, this allows space for 81274 frames per second (125000000 / 1538).
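The per-frame bandwidth cost and the resulting frame rate can be reproduced with a short sketch using the figures above:

    # Bandwidth consumed per maximum-size frame on the wire
    interframe_gap = 12    # bytes of idle time between frames
    preamble = 8           # bytes sent before every frame
    frame_size = 1518      # maximum standard Ethernet frame

    bytes_per_frame_on_wire = interframe_gap + preamble + frame_size   # 1538
    link_bytes_per_second = 125_000_000

    frames_per_second = link_bytes_per_second // bytes_per_frame_on_wire
    print(bytes_per_frame_on_wire, frames_per_second)   # 1538 81274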
So on default Gigabit Ethernet we can transmit over 81000 full size frames each second. Since Gigabit Ethernet always runs at full duplex, we can at the same time receive 81000 frames.
More detail on the overhead: for each frame, we lose 12 + 8 bytes used for the Interframe Gap and Preamble, which are considered the “outside” of the frame. In addition, there is more overhead inside the frame:
Ethernet header, Frame Check Sequence
The first 14 bytes of the frame are used for the Ethernet header and the last 4 bytes contain a checksum used to detect transfer errors. This uses the CRC32 checksum algorithm and is called the Frame Check Sequence (FCS).
The Maximum Transmission Unit, MTU
This means that we lose a total of 18 bytes in overhead: the Ethernet header at the beginning and the checksum at the end, which together form a “frame” around the data carried inside. The number of bytes left is called the Maximum Transmission Unit (MTU) and is 1500 bytes on default Ethernet. The MTU is the payload that can be carried inside an Ethernet frame. It is a common misunderstanding that the MTU is the frame size, but it is really only the data inside the frame.
IP Header, TCP Header, Maximum Segment Size
Just behind the Ethernet header we will most likely find the IP header. For ordinary IPv4 this header is 20 bytes long. Behind the IP header we will most likely find the TCP header, which has the same length of 20 bytes. The amount of data that can be transferred in each TCP segment is called the Maximum Segment Size (MSS) and is typically 1460 bytes.
So the Ethernet header and checksum plus the IP and TCP headers together add 58 bytes of overhead. Adding the Interframe Gap and the Preamble gives 20 more. So for each 1460 bytes of data sent we have a minimum of 78 extra bytes handling the transfer at different layers. All of these are very important, but they do cause overhead at the same time.
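Putting the layers together, a small sketch of the per-frame overhead and the resulting payload, using the sizes given above:

    # Per-frame overhead at the different layers
    eth_header = 14        # destination MAC, source MAC, EtherType
    eth_fcs = 4            # CRC32 Frame Check Sequence
    ip_header = 20         # IPv4, no options
    tcp_header = 20        # TCP, no options
    preamble = 8
    interframe_gap = 12

    frame_size = 1518
    mtu = frame_size - eth_header - eth_fcs            # 1500 bytes
    mss = mtu - ip_header - tcp_header                 # 1460 bytes
    overhead = frame_size + preamble + interframe_gap - mss   # 78 bytes
    print(mtu, mss, overhead)   # 1500 1460 78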
Efficiency using Standard Ethernet Frames
At the beginning of this article we noted the potential to send 125000000 bytes/second on Gigabit Ethernet. When each frame consumes 1538 bytes of bandwidth, that gave us 81274 frames/second (125000000 / 1538). If each frame carries a maximum of 1460 bytes of user data, this means that we can transfer 118660598 data bytes per second (125000000 / 1538 × 1460), i.e. around 118 MB/s.
This means that when using the default Ethernet frame size of 1518 bytes (MTU = 1500) we have an efficiency of around 95% (118660598 / 125000000), meaning that the remaining 5% or so is used by the protocols at the various layers, which we could call overhead.
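The efficiency figure can be reproduced like this (a sketch using the exact, unrounded frame rate):

    link_bytes_per_second = 125_000_000
    wire_bytes_per_frame = 1538     # IFG + preamble + 1518-byte frame
    mss = 1460                      # user data per frame

    user_bytes_per_second = link_bytes_per_second / wire_bytes_per_frame * mss
    efficiency = user_bytes_per_second / link_bytes_per_second
    print(f"{user_bytes_per_second:,.0f} bytes/s, efficiency {efficiency:.1%}")
    # ~118,660,598 bytes/s, efficiency ~94.9%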
Efficiency using Jumbo Frames
If supported by the connected equipment – enabling so-called Jumbo Frames on all equipment in the chain – we can potentially increase the share of the bandwidth actually used for our data. Let us look at that now:
A commonly used MTU value for Jumbo Frames is 9000. First we have to add the overhead for the Ethernet header and FCS (14 + 4 bytes), the Preamble (8 bytes) and the Interframe Gap (12 bytes). This makes each frame consume 9038 bytes of bandwidth, and from the total of 125000000 bytes available to send each second we get 13830 jumbo frames (125000000 / 9038). So a lot fewer frames than the 81000 normal sized frames, but we will be able to carry more data inside each frame and thereby reduce the network overhead.
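The same per-frame arithmetic with an MTU of 9000, as a sketch:

    mtu = 9000
    wire_bytes_per_frame = mtu + 14 + 4 + 8 + 12    # header, FCS, preamble, IFG = 9038
    link_bytes_per_second = 125_000_000

    jumbo_frames_per_second = link_bytes_per_second // wire_bytes_per_frame
    print(wire_bytes_per_frame, jumbo_frames_per_second)   # 9038 13830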
(There are also other types of overhead, including CPU time in the hosts and processing work done in network interface cards, switches and routers, but in this article we will only look at the bandwidth usage.)
If we remove the overhead for the Interframe Gap, Ethernet FCS, TCP, IP, Ethernet header and Preamble, we end up with 8960 bytes of data inside each TCP segment. This means that the Maximum Segment Size, the MSS, is 8960 bytes, a lot larger than the default 1460 bytes. An MSS of 8960 multiplied by 13830 (the number of frames) gives 123916800 bytes of user data per second.
This gives us a really great efficiency of just over 99% (123916800 / 125000000). So by increasing the frame size we gain roughly four percentage points more bandwidth for data, compared to about 95% for the default frame size.
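Wrapping both cases in one small comparison, a sketch using the same figures as above:

    def efficiency(mtu, link_bytes_per_second=125_000_000):
        """User-data throughput and efficiency for a given Ethernet MTU."""
        wire_bytes = mtu + 14 + 4 + 8 + 12        # header, FCS, preamble, IFG
        mss = mtu - 20 - 20                       # minus IPv4 and TCP headers
        user_bytes = link_bytes_per_second / wire_bytes * mss
        return user_bytes, user_bytes / link_bytes_per_second

    for mtu in (1500, 9000):
        user_bytes, eff = efficiency(mtu)
        print(f"MTU {mtu}: {user_bytes / 1e6:.1f} MB/s, {eff:.1%}")
    # MTU 1500: ~118.7 MB/s, ~94.9%
    # MTU 9000: ~123.9 MB/s, ~99.1%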
Wireless Links with Gigabit Ethernet Interfaces
Note that for Wireless links such as Microwave, Radio, Millimeter Wave or Free Space Optics, the airside interface often uses different coding and modulation than the network side interface. This difference is often due to limitations in the amount of RF spectrum available from the regulatory body and channel planning (for example, a 40MHz, 56MHz, 60MHz, 80MHz or even 112MHz channel), the modulation used (for example, up to 256QAM or 1024QAM), which affects both transmit power and receiver sensitivity, and aggregation features such as MIMO or XPIC. Especially for longer links, the corresponding Link Budget between the two ends also matters: it includes the Antenna Gain at both sides, plus any losses caused by transmission waveguides and connectors, plus atmospheric fade effects. This airside interface may therefore impose a lower capacity for the “end to end” wireless link even if the network interfaces at each end are connected at the 1Gbps Gigabit Ethernet rate.
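To illustrate the link budget point, here is a generic, simplified sketch; the frequency, distance, gains, losses and fade margin below are made-up example values, not figures for any particular product or link:

    import math

    # Very simplified point-to-point link budget sketch (all values hypothetical).
    freq_hz = 23e9          # example 23 GHz microwave link
    distance_m = 10_000     # 10 km path
    tx_power_dbm = 20.0
    antenna_gain_dbi = 38.0 # per end
    feeder_losses_db = 2.0  # waveguide/connector losses, total
    fade_margin_db = 25.0   # allowance for rain/atmospheric fading

    # Free-space path loss: FSPL(dB) = 20*log10(4*pi*d*f/c)
    fspl_db = 20 * math.log10(4 * math.pi * distance_m * freq_hz / 3e8)

    rx_level_dbm = (tx_power_dbm + 2 * antenna_gain_dbi
                    - feeder_losses_db - fspl_db - fade_margin_db)
    print(f"FSPL: {fspl_db:.1f} dB, faded receive level: {rx_level_dbm:.1f} dBm")
    # The receive level must stay above the receiver sensitivity required by the
    # chosen modulation (e.g. 256QAM needs a stronger signal than QPSK), which is
    # what ultimately limits the airside capacity of the link.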
Transparent Wireless Links
Note that only some wireless technologies, such as Free Space Optics (FSO), are capable of fully transparent transmission using the exact same modulation as on Fibre Optic networks, so the full 1.25Gbps line rate, along with the entire packet structure, is maintained exactly. The advantages of transparent transmission are that throughput is easily predicted and that latency is the lowest possible, as transmission is generally one bit at a time.
Conclusion
Default Gigabit Ethernet has a potential frame throughput of over 81000 frames per second and therefore a high throughput for actual data (about 118 MB/s), giving an efficiency of about 95%, or roughly 950Mbps. For networking equipment where Jumbo Frames are supported, increasing the MTU to 9000 can deliver even more data over the same link, up to about 123 MB/s, thanks to the decreased overhead from using a lower number of frames. Jumbo Frames can therefore potentially offer around 99% of the theoretical Gigabit Ethernet bandwidth to carry data, which means about 990Mbps capacity.
For Further Information
For further information on Applications and Solutions of Gigabit Wireless products please Contact Us