Abstract

802.11-based wireless mesh networks (WMNs) used as last mile solutions frequently become bottlenecks in the overall Internet communication structure. The lack of end-to-end capacity on routes also affects vertical traffic coming from or flowing towards external networks, such as the Internet. The presented approach aims to increase overall network performance by exploiting channel diversity and to additionally favor vertical traffic. To achieve this, we first propose a general system that modifies an existing mesh node architecture, in order to enable more efficient resource management and to enhance the restricted transmission capacity of standard WMNs. The parallel use of nonoverlapping channels, based on a multiradio node, marks the starting point. The system addresses channel assignment, traffic analysis, and fast layer 2 forwarding. The impact of a novel Multihop Radio Resource Management process is then discussed as a relevant component of this new system architecture. The process combines per-hop priority queuing and load balancing in a novel way. Its design, development, and evaluation are presented in this paper: capacity in WMNs was significantly increased, Quality-of-Service parameters were improved, and a more efficient use of multiple radios was reached. The proposed process was validated using a simulation approach.

1. Introduction

Often, last mile networks become bottlenecks in the Internet delivery chain, since they have to fulfill increasing user demands, in terms of Quality-of-Service (QoS) guarantees. This work focuses on last mile wireless networks, which can be direct user-to-user networks, wireless backbones (e.g., a public, city-wide network), or both, in a hybrid form [1].

Wireless mesh network (WMN) technology is mostly used to create economic and flexible backbones. Planners of wireless consumer and industrial networks have seen the various advantages and diverse applications of WMNs and have begun to adapt the technology to market-ready solutions. Nevertheless, broad acceptance is still missing, mainly due to the fact that WMNs are mostly based on single interface (IF) nodes [2]. An 802.11-based WMN naturally suffers from the known risks of adverse channel conditions on the Physical (PHY) layer, such as fading or distortion effects in non-line-of-sight situations. Such effects ultimately turn the pure throughput of an 802.11 Wireless LAN (WLAN) IF into a highly conditional parameter.

But there are other significant, more ad hoc-specific factors which may drastically restrict the transmission capacity of WLAN-based WMNs. 802.11g does not support full duplex communication [3], which causes rapid performance and capacity degradation on multihop routes [4, 5]. Although 802.11g is outdated in most high-performance setups, it is still commonly used in mesh installations based on commodity hardware, for example, in rural scenarios [6–8]. Also, 802.11 Medium Access Control (MAC) is designed for shared channel access [9] and is partly based on random timers, making consistent packet forwarding unreliable [10]. Route segments which are shared by multiple flows may be prone to congestion and unfair traffic treatment [11]. Finally, links of routes which are separated on layer 3 might still interfere within the same layer 2 collision domain [11].

In addition to the various interference types, which should always be considered in an ad hoc network [12], traffic in WMNs is often heterogeneous. Users with growing and wider external service demands create mostly vertical traffic in WMNs [13]. This leads to congestion [14] near those mesh nodes which serve as traffic gateways (GW) to outside networks, or the Internet. Vertical traffic is not protected on those routes and has the same priority as intramesh traffic. These basic limitations cause standard single-channel WMNs to have a limited transmission capacity.

To solve these issues, at first an improvement to a cross-layer mesh node architecture is proposed. It considers key functions and processes, in order to enhance the transmission capacity in WMNs. A multi-interface node offers a suitable basis for this intention, by exploiting its access to multiple orthogonal WLAN channels [15]. The novel cross-layer architecture combines and adapts methods of distributed Channel Assignment (CA) and traffic analysis and engineering, which have proven to be efficient as independent solutions. Then, a Multihop Radio Resource Management process becomes a relevant requirement within this modified architecture. This process aims to exploit capacity through packet scheduling (PS) modes and to support the protection of vertical traffic. Therefore, its development and a related proof-of-concept have priority in this work and represent the key contribution.

2. Related Work

Several papers with the objective of exploiting IF resources in a typical 802.11-based WMN are reviewed in this section. To the best of our knowledge, none so far offers an integral or systemic answer to the described spectrum of problems. However, some ideas are considered milestones by the authors towards future effective solutions.

An important definition is found in [3], which states that a carefully designed resource allocation strategy, which matches the node’s availability of radios to the desired network behavior, is a crucial success factor. Mainly, this requires us to introduce a distributed or centralized CA scheme and subsequently a load balancing (LB) mechanism.

Before network parameters are optimized, basic 1-hop connectivity needs to be guaranteed. The CA approach in [16] focuses on this aspect. The centralized CA protocol of Robitzsch et al. [17] facilitates an autonomously controlled entrance of a node into a WMN, considering Adjacent- and Intercarrier Interference (A/I-CI). Less interference may lead to reduced energy consumption in node batteries [18], which benefits mobility-oriented setups. Most CA approaches do not distinguish between orthogonal and overlapping channels. In contrast, the CA approach in [19] explicitly foresees an optimization for partially overlapping channels.

Receiver-Based Channel Assignment (RCA) schemes [20, 21] are straightforward, proactive, topology-aware, and easy to implement. Negotiation-based Channel Assignment (NCA) schemes perform CA on demand and allow interference-free transmissions in most cases [20]. But their reactive nature makes them more suitable for MAC layer approaches, where the channel is negotiated framewise. Still, with simple RCA and NCA schemes, there is no consideration of 2-hop neighbors, the next-hop type, or the assignment of multiple radios per neighbor.

After CA, the next conceptual stage to exploit channel diversity often involves an unmanaged, non-LB related solution, based on additional radios. A common approach in a WMN backbone is to deploy edge nodes with two separate radios, in order to grant interference-free access to their local clients, at best using separate bands (2.4 GHz and 5 GHz, e.g., in [22]). A next stage is the use of two or more radios within the backbone itself, to minimize intraflow interference. Within Fraunhofer’s Wireless Back-Haul (WiBACK) architecture [3], simply two 802.11 radios are deployed, with a gap of at least 60 MHz between two 20 MHz channels. This avoids a throughput decrease at each hop [23, 24]. In [25], full-duplex communication is achieved with a dual-radio scheme.

Managed scheduling is the next essential and logical step. If sufficient interface resources are available between two adjacent nodes, bundling is able to improve resource utilization beyond CA measures [26]. Furthermore, channel bundling can be used to reduce signaling overhead [26]. Also, the allocation of channels to a single bundle group reduces computational cost, because when “all the channels in the same bundle are either available or busy simultaneously, a secondary user can sense each bundle of channels instead of each channel individually” [26].

Kim and Ko [27] describe a virtual interface (VI) which sits upon and controls multiple WLAN MACs. Within the VI, the IF with the best link quality is chosen for transmission, on a per-packet basis. Their approach segregates low-performing interfaces in a bonded set of IFs. This may waste capacity in certain constellations. CA is not included in the approach at all, which causes additional configuration effort for the user. A neighbor table is maintained, to hold information on the interface availability and link states in the neighborhood. To signal a node’s associated IF addresses, a modification of the Address Resolution Protocol (ARP) is used.

Hu confirms that establishing channel diversity (by merely having single-channel links) is not enough; this diversity must be actively utilized, in order to improve capacity. In his work [28], a system model is described, which uses multiple radios for parallel transmission between nodes. Again, a VI with a virtual MAC address is used. In his simulator testbed, two kinds of Transmit- (TX-) oriented scheduling algorithms are tested. Although entirely different in their behavior, both consider hop-to-hop scheduling. Hu justifies this decision with the varying nature of the wireless medium, which makes multihop/flow-coordinated scheduling too complex.

The paper of Prabhavat et al. [29] is considered highly useful, as it provides a comprehensive review of existing load distribution models. They claim that skewness between routes is a major issue in multipath LB. With hop-to-hop (single-path) load balancing, skewness is of minor importance.

A key concept is the abstraction of resources for the sake of simplicity, compatibility, and modularity. Adding a cross-layer design brings substantial benefits [30]. The CARMEN architecture [31] introduces an abstraction layer, which hides particularities of each access technology. An open virtual layer is also deployed in [32]. Bundling within the virtual layer is not applied. Like many other Multi-interface/Multichannel (MIMC) approaches, the group aims to optimize throughput and end-to-end delay as QoS parameters.

A virtual layer/interface is essential for MIMC WMNs which shall be compatible with different mesh protocols and metrics. A VI can be used to gather and reorganize different types of performance-critical cross-layer input. A well designed VI is further able to provide a usable platform to combine different measures, in order to improve capacity and support heterogeneous traffic.

3. Technical Background

This section outlines a proposed cross-layer node scheme. This supporting node architecture is required, in order to host the core processes which are later described in Section 4.

In mesh backbones, limited multihop capacity and traffic unfairness have a negative influence on transmissions, particularly on those which flow to and from gateways. This work’s focus lies on the enhancement of transmission capacity in the mesh and on the optimization of these vertical flows. A node cannot determine the final route of a packet; therefore, the necessity to enhance the performance of every single next-hop link was identified. The proposed modified node architecture incorporates the combined use of various radios. A prior step was the adaptation and assembly of standard schemes and components in a custom manner. The considered standard technologies include mesh routing, QoS and traffic engineering (TE), routing topology analysis, priority queuing, and load balancing. The latter two components are described in detail in the Multihop Radio Resource Management (MHRRM) process in Section 4.

3.1. General System Overview

The proposed modifications to a standard mesh node architecture are shown in Figure 1, which contains the basic components of the envisioned MIMC node, grouped into four blocks.

Figure 1 distinguishes between standard/legacy components, supporting components (treated in the current section), and core MHRRM components. Characters in curly brackets refer to the relationships among the different components and to the type of exchanged information.

The Mesh Routing Protocol in layer 3 sits above Channel Assignment (which is envisioned as an interchangeable protocol in the system) and the three remaining component blocks, which are nested in a middle-layer (2.5) solution. In the following, the details of each component block that completes the system are described.

3.2. Mesh Routing Protocol

The internal host system/operating system provides local IP/MAC addresses and the routing table, as well as frame transmit- and channel switch-access for each radio. It was intended to tightly integrate the Mesh Routing Protocol into the system architecture, mainly because it already provides performance-critical routing information through the protocol-specific routing metric. The authors recommend the deployment of the Optimized Link-State Routing (OLSR) [33] protocol, but any Mesh Routing Protocol which proactively maintains link states can be used. OLSR also provides connectivity and address information (MAC/IP of 1-hop neighbors) of the 1-hop topology. From the latter source, especially the identification of gateway nodes (Gateway Identifier) is relevant, since they inject or receive vertical flows. With OLSR, the IP address of the local radio with the smallest index serves as the main IP and, at the same time, as the node’s identity in layer 3. The expected transmission time (ETT) metric [34] is recommended to be used with OLSR, since it introduces bandwidth-related link quality awareness. Proactive probing is performed with all radios by OLSR.
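For reference, the ETT value of a link combines the ETX probing result with a bandwidth estimate; following the definition in [34], it can be written as

\[ \mathrm{ETT} = \mathrm{ETX} \times \frac{S}{B}, \]

where S denotes the probe packet size and B the measured link bandwidth, so that lower ETT values indicate better links.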

3.3. Traffic Analysis and Classification per Packet

Traffic Analysis and Classification (TAC) analyzes packets which enter or are created in the mesh network and thus pass the middle-layer module for the first time. A flow is identified via a five-tuple (Source (SRC)/Destination (DST) IP address and port, plus the transport protocol, Transmission Control Protocol (TCP) or User Datagram Protocol (UDP)). A class code can be derived by a hash function which processes DiffServ Code Point (DSCP) encodings [35]. For the presented approach, five DSCP ranges have been summarized and mapped to five class codes; the network operator can determine further custom classes, alter the hashing, or process alternative input, such as the users’ preferred bandwidth [36]. The class code later determines the chosen priority queue per packet. Identified flows are stored in a Class Flow Table, provided by TAC to MHRRM. The table is proposed in order to facilitate QoS processing in the subsequent data plane components in MHRRM. Traffic classes can be freely defined, independent of the original DiffServ assignation. Even an arbitrary prioritization scheme contrary to the DiffServ priority order can be designed by the mesh network operator. The Class Flow Table then offers traffic engineering capabilities based on packet priorities, which may influence queuing and other QoS-control measures, such as bandwidth shaping or packet dropping.
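A minimal sketch of the described classification step is given below. The concrete DSCP ranges, the five class codes, and all identifiers are illustrative assumptions, since the paper leaves the actual mapping to the network operator.

# Illustrative sketch of Traffic Analysis and Classification (TAC).
# The DSCP-to-class mapping and all names are hypothetical examples;
# the mesh operator defines the real ranges and class codes.
from collections import namedtuple

FiveTuple = namedtuple("FiveTuple", "src_ip src_port dst_ip dst_port proto")

# Example mapping of DSCP ranges to five internal class codes (assumption).
DSCP_CLASS_RANGES = [
    (46, 63, 1),   # e.g., EF and above -> highest horizontal class
    (32, 45, 2),
    (16, 31, 3),
    (8, 15, 4),
    (0, 7, 5),     # best effort -> lowest class
]

def class_code_from_dscp(dscp):
    """Hash-like lookup of the internal class code for a DSCP value."""
    for low, high, code in DSCP_CLASS_RANGES:
        if low <= dscp <= high:
            return code
    return 5  # default: best effort

class ClassFlowTable:
    """Stores identified flows and their class codes for use by MHRRM."""
    def __init__(self):
        self._flows = {}
    def classify(self, pkt):
        flow = FiveTuple(pkt["src_ip"], pkt["src_port"],
                         pkt["dst_ip"], pkt["dst_port"], pkt["proto"])
        if flow not in self._flows:
            self._flows[flow] = class_code_from_dscp(pkt["dscp"])
        return self._flows[flow]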

3.4. Channel Assignment Protocol

CA is considered an external component in Figure 1, for the sake of having a modular architecture. CA is a necessary step to achieve a sensible utilization of resources. A set of requirements on the chosen CA protocol was defined, as well as a static CA output. The expected minimum information in this output is gathered in the Expected CA Table in Table 1.

Thus, Table 1 depicts a recommendation to the system to assign quantities of radios and channels to neighbors and may be further evaluated outside of the CA component in Figure 1. Essential requirements on the CA protocol itself are as follows (a data structure sketch is given after this list).
(i) Assure 1-hop connectivity (topology preservation).
(ii) Radios can operate on channels exclusively used with a single neighbor/next-hop or on channels shared with several neighbors.
(iii) If a gateway is present in the 1- or 2-hop neighborhood, the next-hop to the GW (or leading to it) is prioritized in the assignment phase; it receives more radios and better channels.
(iv) If no GW is present, channels/radios are distributed equally among neighbors.
(v) There is a possibility to configure a designated Control Channel (CC) [37].
(vi) CA Signaling between distributed CA Protocol instances in the 1-hop neighborhood guarantees a synchronized channel switch/handshake between neighbors. CA Signaling shall avoid packet loss due to unsynchronized switches to channels without neighbor connectivity. A suitable framework for CA protocols (including signaling approaches) for the OMNeT++ simulation environment can be found in [38].
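A possible plain-data representation of one Expected CA Table row is sketched below. The field names are assumptions; only the minimum information content (radios and channels per neighbor, gateway awareness) follows from the requirements above.

# Hypothetical representation of one Expected CA Table row (Table 1).
# Field names are assumptions; the paper only fixes the minimum content.
from dataclasses import dataclass
from typing import List

@dataclass
class ExpectedCAEntry:
    neighbor_id: str            # 1-hop neighbor (e.g., its main IP)
    assigned_radios: List[int]  # indices of local radios assigned to this neighbor
    channels: List[int]         # orthogonal channels recommended for these radios
    leads_to_gw: bool = False   # True if this next-hop is (or leads to) a gateway

# Example: a GW-facing neighbor receives more radios/better channels (req. iii).
expected_ca_table = [
    ExpectedCAEntry("10.0.0.36", assigned_radios=[0, 1], channels=[0, 2], leads_to_gw=True),
    ExpectedCAEntry("10.0.0.14", assigned_radios=[2], channels=[4]),
]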

Within CA, a Channel Switching Cost (CSC) Check [39] as a prior step and a predefined cost threshold should be considered, to avoid channels being switched too often or an unsuitable channel map being used, which would limit connectivity or transmission capacity. In particular, those hops that are attached or close to gateways (identified via the Gateway Identifier) frequently carry heavy load in a WMN [14]. Deployed channels in these topology edges shall have a higher CSC, to avoid a temporary outage and to enable a more stable, inert channel map around GWs.
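The following sketch illustrates the assumed logic of such a threshold check: a switch proposed by CA is only executed if its estimated benefit outweighs a cost value, which is raised for links close to a gateway. The function, its parameters, and the penalty factor are illustrative assumptions.

# Illustrative sketch of a Channel Switching Cost (CSC) check (assumed logic):
# a channel switch is only executed if its estimated benefit outweighs a
# threshold, which is raised for links close to a gateway.
def allow_channel_switch(expected_gain, base_csc, hops_to_gw, gw_penalty=1.5):
    """Return True if the switch should be performed.

    expected_gain: estimated capacity/interference benefit of the new channel.
    base_csc:      baseline switching cost (outage, resynchronization, ...).
    hops_to_gw:    distance of this link to the nearest gateway.
    gw_penalty:    extra cost factor for GW-near links (assumption), keeping
                   the channel map around gateways more stable/inert.
    """
    csc = base_csc * (gw_penalty if hops_to_gw <= 1 else 1.0)
    return expected_gain > csc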

3.5. Traffic Engineering

Traffic engineering favors GW flows over horizontal traffic and paves the way for a faster processing of such flows in the Packet Commutation component. The TE component block foresees a Multiprotocol Label Switching- (MPLS-) inspired [40] TE Next-Hop Label (NHL) for selected packets. It can either be added in a separate custom header between the IP and MAC headers or included in fields of the existing sublayer 3 headers. Only packets of vertical flows receive a NHL at the ingress mesh router. Gateways label all packets forwarded into the mesh cloud, whereas regular mesh routers label only vertical flows. The label is removed by the TE component block at the egress router/GW, or at the mesh DST node. The label allows a fast layer 2 forwarding of selected packets. In the ideal case, a packet is forwarded in intermediate nodes with complete transparency to the network layer. The sublayer 3 forwarding chain is depicted in Figure 2, including the required push, swap, and pop label operations.

In intermediate nodes, TE only performs the label swap operation and Packet Commutation, as shown in Figure 2. Other packets (those of horizontal flows) receive classic layer 3 forwarding (longest-prefix-match lookup in the routing table), which is potentially more time-consuming, depending on the actual routing table size [41]. To determine the affiliation to a GW flow in a non-GW node, the packet’s DST IP must coincide with a mesh-external IP address. MPLS-like traffic steering via fixed Label Switched Paths (LSP) is not desired, as the concept of a predefined chain of routers does not conform to the philosophy of ad hoc routing (“hop-to-hop” principle).

With OLSR, GW nodes typically broadcast Host and Network Association (HNA) [33] messages, which allow us to determine the mesh-internal IP address of a gateway node. Information about which mesh nodes function as gateways is provided via the Gateway Identifier. This input is required to generate labels to all GWs in the topology. Labels are proactively generated by each node and maintained in the Extended Commutation Table, depicted in Table 2.

In Table 2, b is a bundle (of radios), h is its index, m is the current number of registered 1-hop neighbors, and IP DST refers to an IP (v4) address of a mesh-internal GW flow endpoint (with an index o). l is the number of registered GW flow endpoints in the WMN. There is a unique bundle index per 1-hop neighbor, which is provided to TE by MHRRM in Figure 1. The out-label is unique within an out-bundle (i.e., per next-hop). The next-hop specification for a DST is taken directly from the routing table, which is filled by the Mesh Routing Protocol. As with MPLS, Label Signaling is required, in order to guarantee a flawless swapping operation. A node can signal its out-labels either via an LDP-like [40] derivative or by including labels in proactive signaling messages generated by the Mesh Routing Protocol.
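The sketch below illustrates how an intermediate node could perform the label swap described above; all structures and names are illustrative assumptions derived from the fields listed for Table 2, not the exact implementation.

# Hypothetical sketch of the Extended Commutation Table and the swap step
# performed in intermediate nodes (field names assumed from Table 2).
from dataclasses import dataclass

@dataclass
class CommutationEntry:
    in_label: int      # label carried by the incoming packet
    out_label: int     # label expected by the next-hop (unique per out-bundle)
    out_bundle: int    # bundle index h of the next-hop neighbor
    gw_dst_ip: str     # GW flow endpoint this entry belongs to

class ExtendedCommutationTable:
    def __init__(self):
        self._by_in_label = {}
    def add(self, entry):
        self._by_in_label[entry.in_label] = entry
    def swap(self, in_label):
        """Return (out_label, out_bundle) for fast layer 2 forwarding,
        or None if the packet must fall back to layer 3 routing."""
        entry = self._by_in_label.get(in_label)
        if entry is None:
            return None
        return entry.out_label, entry.out_bundle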

4. Multihop Radio Resource Management

This section describes the technical core contribution of this work. Whether or not selective forwarding/Packet Commutation based on the Extended Commutation Table is applied influences the swiftness of the per-packet next-hop selection. The subsequent treatment within MHRRM consists of enqueuing and scheduling within a bundle. Both subprocesses represent the core operations of MHRRM and allow a mesh operator to fully exploit the given radio resources and channels. This increases QoS-relevant performance parameters in the network, such as end-to-end throughput and delay. Figure 3 visualizes both subprocesses with the help of three representative packets, which enter a node (left side of the figure).

The first encapsulated packet belongs to a GW flow and is fast-forwarded. The second packet belongs to a horizontal flow. The third packet is also forwarded between two non-GW mesh nodes, but it additionally contains a valid class code/DSCP value.

MHRRM also defines bundles and has direct control over aggregated capacities. The envisioned resource management scheme is depicted in Figure 4.

Different neighbors receive separate bundle indices. A radio unit refers to a physical WLAN radio. This single resource unit can be a radio which is tuned to an exclusive or to a shared channel. Thus, a physical radio can be assigned to multiple bundles at the same time. The same condition applies to a radio in Figure 3. A Bundle Management Table (BMT), which defines the bundles per neighbor, is maintained within MHRRM. The table is primarily based on the Expected Channel Assignment Table input (see Section 3.4). Additionally, the BMT stores statistical parameters for load balancing.

4.1. Priority Queuing

A corresponding queue is chosen in the Priority Queuing component in Figure 1, based on Table 3. Class codes/DSCP ranges are mapped to queues in this table.

First, the internal traffic class code is determined on a per-packet basis, based on the Class Flow Table provided by TAC (see Section 3.3). There is one queue set per bundle/neighbor. Queues in Table 3 are listed in descending order of priority. The priority of GW traffic exceeds any other internal traffic class specification. The remaining queues are reserved for horizontal flows which match a Class Flow Table entry. If multiple flows are directed towards the same next-hop, the queue set manipulates their packet sending order according to the detected priorities. This manipulation becomes more effective with multiple hops and mainly favors vertical traffic. Thus, GW and DiffServ packets experience a faster queue removal and have a higher chance of immediate forwarding when competing with flows of lower priorities. Queuing further enhances the subsequent multiradio packet scheduling: if Priority Queuing is combined with link-state sensitive packet scheduling, the best radios are offered to the most important packets/queues. A single set of queues is characterized by the following main parameters (a dequeuing sketch follows the list):
(i) fixed, tunable number of queues;
(ii) tunable queue length (in packets);
(iii) tail drop principle [42] within each queue;
(iv) PFIFO principle [43] within each queue;
(v) dequeuing policy based on Weighted Fair Queuing (WFQ) [44, 45];
(vi) fixed, manually chosen weight w per queue.
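A minimal sketch of such a queue set is given below. The concrete weights and the probabilistic weight-based removal are illustrative assumptions of one possible WFQ-style realization, not the exact algorithm used in the evaluation.

# Illustrative sketch of one queue set per bundle/neighbor with tail drop,
# PFIFO queues, and weight-based dequeuing (assumed WFQ-style approximation).
import random
from collections import deque

class PriorityQueueSet:
    def __init__(self, weights, max_len=8):
        # weights: per-queue weights, index 0 = highest priority (GW traffic)
        self.queues = [deque() for _ in weights]
        self.weights = weights
        self.max_len = max_len            # tunable queue length in packets
    def enqueue(self, packet, queue_index):
        q = self.queues[queue_index]
        if len(q) >= self.max_len:
            return False                  # tail drop: newest packet discarded
        q.append(packet)                  # PFIFO within each queue
        return True
    def dequeue(self):
        # Pick a nonempty queue with probability proportional to its weight.
        candidates = [(q, w) for q, w in zip(self.queues, self.weights) if q]
        if not candidates:
            return None
        total = sum(w for _, w in candidates)
        r, acc = random.uniform(0, total), 0.0
        for q, w in candidates:
            acc += w
            if r <= acc:
                return q.popleft()
        return candidates[-1][0].popleft()

# Example: 70% removal probability for the GW queue, 10% for three others.
qset = PriorityQueueSet(weights=[0.7, 0.1, 0.1, 0.1], max_len=5)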

4.2. Packet Scheduling

After a packet was dequeued, it is finally scheduled to be sent via one of the available radios in a bundle. A configurable set of load balancing modes is offered to a mesh operator. These modes are explained in the following. The mode parameter reflects two basic network response profiles: capacity- and stability-oriented networks. Both types tackle the pervasive issue of performance degradation due to single-radio hop-to-hop interference in mesh transmissions. Also, both profiles exploit channel diversity, but for different reasons. The first category uses channel resources in parallel, whereas the second may maintain extra resources as backup options. A mode is applied network-wide.

With the Weighted Fair Scheduling (WFS) mode, radios with the best quality shall bear the majority of packets. WFS calculates a TX probability per radio, based on its estimated link cost, to determine the link’s usage frequency. Thus, WFS integrates well with link-state routing. It supports the forwarding of vertical flows when combined with Priority Queuing. WFS also permits a fair treatment of interfaces with underperforming links, to prevent their starvation. The WFS mode is based on ETT, as it is the more accurate, QoS-related metric, which is commonly available with OLSR. Applied to system parameters, the sending probability is calculated with

\[ p_i = \frac{1/c_i}{\sum_{j=1}^{n} 1/c_j}, \]

where \(p_i\) is the TX probability of the radio with index \(i\), \(c_i\) is the link state/metric value (ETT) of radio \(i\), and \(n\) is the number of radios in a bundle \(b\).
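The sketch below computes these probabilities and draws a radio accordingly; as above, it assumes that the probability of a radio is inversely proportional to its ETT value.

# Sketch of WFS packet scheduling within a bundle: TX probabilities are
# inversely proportional to each radio's ETT metric (assumed weighting).
import random

def wfs_probabilities(ett_per_radio):
    """ett_per_radio: list of ETT metric values c_i of the radios in a bundle."""
    inverse = [1.0 / c for c in ett_per_radio]
    total = sum(inverse)
    return [v / total for v in inverse]

def wfs_pick_radio(ett_per_radio):
    probs = wfs_probabilities(ett_per_radio)
    r, acc = random.random(), 0.0
    for index, p in enumerate(probs):
        acc += p
        if r <= acc:
            return index
    return len(probs) - 1

# Example: three radios with ETT values of 2 ms, 4 ms, and 8 ms receive
# roughly 57%, 29%, and 14% of the packets, respectively.
print(wfs_probabilities([2.0, 4.0, 8.0]))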

The Round-Robin (RR) packet scheduling mode simply foresees that, for n radios in a bundle b, each radio will transmit 1/n of the incoming packets. The RR mode and the previously discussed WFS mode aim to exploit the capacity of multiple radios.
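A round-robin scheduler over a bundle can be sketched in a few lines; the cycling over radio indices below is one illustrative realization.

# Minimal sketch of Round-Robin scheduling over the n radios of a bundle:
# each radio transmits 1/n of the incoming packets in turn.
from itertools import cycle

class RoundRobinScheduler:
    def __init__(self, radio_indices):
        self._next_radio = cycle(radio_indices)
    def pick_radio(self, packet=None):
        # the packet is ignored: RR distributes load evenly, not adaptively
        return next(self._next_radio)

rr = RoundRobinScheduler([0, 1, 2])      # a bundle with three radios
assert [rr.pick_radio() for _ in range(6)] == [0, 1, 2, 0, 1, 2]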

The third mode describes an extended version of the simple Round-Robin mode. From a given set of n radios in a bundle b, a fixed number f of Fallback (FB) radios is reserved. This mode focuses on network stability. Extra resources in the form of WLAN IFs are used redundantly and on demand, while the exploitation of the available channel spectrum is still possible. This is suitable to create robust setups such as emergency networks, where performance is a secondary goal and reliable communication has top priority [46]. Fallback radios are used in case one or more of the currently used radios fail. When f = 0, standard RR is applied. When f = n - 1, single interface transmission is applied on this link, while the reserved radios remain inactive. f is specified by the user, so the operator has full control over the degree of WLAN hardware utilization. This is also relevant for fully mobile mesh nodes, where energy consumption is a limiting factor [18]. A fallback threshold rate per radio is maintained in the Bundle Management Table; exceeding it triggers the replacement of an active radio. As a starting point, the MAC frame loss rate between layers 1 and 2 serves as this rate. Other advanced cross-layer information related to the channel’s quality may be used [36].
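The sketch below illustrates this fallback behavior under the stated assumptions: f radios are kept in reserve, the remaining radios are served round-robin, and an active radio is replaced once its frame loss rate exceeds the per-radio threshold. Names and the default threshold value are hypothetical.

# Illustrative sketch of the Extended Round-Robin (fallback) mode:
# n - f radios are active and used round-robin; a reserved fallback radio
# replaces an active one once its loss rate exceeds the threshold.
class ExtendedRoundRobin:
    def __init__(self, radio_indices, num_fallback, loss_threshold=0.2):
        self.active = list(radio_indices[: len(radio_indices) - num_fallback])
        self.fallback = list(radio_indices[len(radio_indices) - num_fallback:])
        self.loss_threshold = loss_threshold   # assumed default value
        self._turn = 0
    def pick_radio(self, loss_rates):
        """loss_rates: dict radio_index -> current MAC frame loss rate."""
        # Replace degraded active radios with fallback radios, if available.
        for i, radio in enumerate(list(self.active)):
            if loss_rates.get(radio, 0.0) > self.loss_threshold and self.fallback:
                self.active[i] = self.fallback.pop(0)
                self.fallback.append(radio)    # degraded radio becomes a spare
        radio = self.active[self._turn % len(self.active)]
        self._turn += 1
        return radio

# Example: 3 radios, 1 reserved as fallback; radio 1 degrades and is replaced.
sched = ExtendedRoundRobin([0, 1, 2], num_fallback=1)
print(sched.pick_radio({0: 0.01, 1: 0.35, 2: 0.02}))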

5. Measurements

Measurements were conducted within an OMNeT++ (OMNeT++ Network Simulation Library and Framework, https://omnetpp.org/) simulation environment. A general goal is to validate the bundle definition capabilities of the proposed architecture. Subsequently, the impact of the MHRRM process on transmissions in MIMC mesh networks shall be evaluated. Channel assignment is statically configured and class codes are manually set for each packet generator.

5.1. Scenarios

Three scenarios have been selected to evaluate MHRRM functionalities. Nodes have multiple 802.11g radios, using typical PHY and MAC layer settings. They are aligned in a grid or in a chain formation and have a fixed distance of 140 m between each other. Based on the available parameterization in the INETMANET framework (INETMANET Framework for OMNEST/OMNeT++ 4.x (based on INET Framework), build “4116c0c371”, https://github.com/inetmanet/inetmanet), a free-space radio environment was modelled. Circles in the following figures indicate the resulting minimum reception range; every packet received beyond this range is considered noise. All 20 MHz 802.11a/b/g channels are considered orthogonal [38] in INETMANET. Also, contrary to the limitation of the standard, the used INETMANET build enables an arbitrary number of available orthogonal channels for 802.11g. The channel indices used in this work do not correspond to the standardized 802.11g numeric channel numbers [9].

To show the impact of multiple radios on WMN capacity, the scenario in Figure 5 was designed.

The total simulation time is 55 s per run. Figure 5 depicts a mesh grid with 37 nodes, in which node number 36 functions as a GW. Here, all nodes bear the same number of radios (1, 2, 4, or 6), which are tuned to the same common set of channels 0 to 5. OLSR is combined with the ETX [34] metric. RR mode is used network-wide. The 37 nodes have to cope with a single, or up to 7, active GW flows. A representative set of flows has been defined for various hop distances. Multiple instances of a File Transfer Protocol (FTP) application initiate a download from the GW. One after another (in different simulations), the nodes 14, 9, 26, 0, 5, 35, and 30 become active (flow indices 1–7). The simulation shall confirm that the hop distance plays a major role in performance degradation and that an increasing number of radios per node raises the overall capacity.

In the scenario in Figure 6, the impact of different packet scheduling modes is tested.

The shown chain between nodes 0 and 4 may as well represent a partial route towards a GW from the grid in Figure 5, but with a different traffic situation. A UDP-based stream from node 0 to node 4 with a TX bandwidth of 5 Mbit/s (datagram sizes are set to 512 B/1.5 kB, start at 30 s; total simulation time: 100 s) is locally congested by two background streams from node 5 to node 6 (start at 40 s) and from node 7 to node 8 (start at 60 s) (1 kB datagram size at 3 Mbit/s UDP TX for both streams). Nodes 5 to 8 have only one WLAN IF at their disposal. Congestion is caused on the single channels 0 (nodes 5 to 6) and 1 (nodes 7 to 8), while the nodes on the chain from 0 to 4 have 3 radios at their disposal, tuned to channels 0, 1, and 2. The two background flows interrupt the main flow from node 0 to node 4 at different start times, to create a more heterogeneous traffic environment. The influence of all three scheduling modes included in the MHRRM process on network performance is evaluated. It is expected that the WFS mode will offer the best multihop performance, because link-state-based load balancing in MHRRM is supposed to avoid the congested channels 0 and 1. ETT is used as the routing metric.

The impact of different queue weighting schemes on parallel flows is tested with the last scenario shown in Figure 7. The total simulation time is 130 s.

Again a chain setup was chosen, which could be part of a larger WMN with multiple routes (see Figure 5). All nodes deploy one or two radios on channels 0 and 1 here; in the dual-radio case, the RR mode is used. UDP streams 1, 2, and 3 run from their respective senders to destinations 1, 2, and 3. Datagram sizes for the three streams vary between 1 kB and 1.5 kB. Streams 1 and 3 have a TX bandwidth of 1 Mbit/s, while node 6 transmits stream 2 with 2 Mbit/s. All streams are forced to share parts of the chain constellation between node 0 and nodes 4–6. At first, an uneven weight distribution scheme shall grant a 70% queue removal probability to packets which belong to flow 2. The remaining share of 30% is granted evenly to the streams towards destinations 1 and 3 and to broadcast traffic. The second weighting scheme includes an even removal probability for all traffic. The queue size in packets is varied as a parameter. It is expected that stream 2, which may represent a GW flow, is favored in terms of end-to-end delay when the uneven weight scheme is used.

5.2. Results and Evaluation

Figures 8, 9, and 10 show the throughput results of flows 1, 2, and 3 in the grid setup from Figure 5.

Throughput levels and thus network capacity levels of flows 1, 2, and 3 rise notably with an increasing number of radios. But the sole usage of RR PS cannot solve intraroute fairness issues for flows 4 to 7, because all channels in the network are loaded evenly with RR, not adaptively. Our data suggests that flows 4–7 underperform in terms of TCP throughput (the supplementary throughput graphs of flows 4–7 are provided on request), despite the availability of extra radios. Still, if the hop count to GWs can be kept short (1–3 hops) in a WMN, RR becomes an attractive yet simple scheme to improve throughput in proportion to the number of equipped radios, as shown with flows 1, 2, and 3.

Now, the remaining PS modes shall be evaluated. Figure 11 suggests that the WFS scheduling enables the highest mean throughput levels (for the entire simulation period) in the next scenario in Figure 6.

This is due to the advantage that, if ETT-based probing results drop on a single link in a bundle, adaptive PS with WFS assigns less load to it. WFS allows the multihop streams to exchange more packets in total, even if their route is selectively congested. Thus, the mode is recommended to facilitate the coexistence of different cross-flows suffering from interroute interference, which could be applied to vertical traffic. Also, weighted-fair-based scheduling is useful when a rerouting option is considered too costly by OLSR. Available capacities on the reputedly bad next-hop can still be exploited with WFS. The load shift within a bundle is not as radical as with the Extended RR mode, with which an absolute switch of an interface is forced.

Before the actual scheduling process, queues enable measures to improve performance of disadvantaged flows from the scenario depicted in Figure 7. The end-to-end delay is a crucial QoS parameter and can be selectively improved over other parallel flows, by applying per-hop queues. Figures 12 and 13 demonstrate these improvements.

In Figure 13, delay levels of flow 2 were reduced with a queue weighting scheme in its favor (70% removal probability). The effect visibly takes place when the queue size (in packets) is chosen between 3 and 8, depending on the available capacity on the route (dual-radio nodes allow a smaller queue size). The prioritization resolves unfairness due to intraroute interference and varying hop counts. Still, in this specific scenario there exists a trade-off between the artificial delay (introduced with a larger queue) and the effectiveness of prioritization. Also, in some cases, flow 2 shows slightly longer delays than flow 1 with the balanced scheme (in Figure 12), which does not match the flows’ respective hop distances to their DSTs. This effect occurs due to heavy traffic congestion in this particular scenario. Measurements have shown that a queue length of 15 packets or more is not feasible here. The authors strongly suggest adapting queue parameters (size, weights) to the specific QoS demands of each mesh setup. Queuing is independent of bundling and has the potential to even favor streams on single-channel paths, given a customized prioritization scheme.

6. Conclusions

Standard, single-channel wireless mesh networks suffer from multihop and interference limitations and from the issue that vertical traffic cannot be properly protected. Several isolated solutions to exploit multiradio nodes, as well as QoS alternatives, can be applied. However, each method applied in isolation may offer a discrete and independent benefit, but cannot ultimately solve the true issues of limited transmission capacity in WMNs.

A proposal, in general terms, for an improvement of the architecture of an 802.11-based mesh node is presented in this work for the first time. The standard node architecture is extended by combining different components, such as link-state routing information, Channel Assignment, and sublayer 3 traffic engineering, in a novel way. The resulting system is built around a custom middle-layer module, which processes cross-layer information. Considering the idea of matching a node’s availability of radios with the desired network behavior as a milestone of the proposed system, the Multihop Radio Resource Management component, which is nested in the previously described cross-layer architecture, was prioritized in its design, development, and testing. The complementary concepts of combining available radios in bundles, managing them through a virtual interface, using priority queue processing and DiffServ to further protect vertical flows against horizontal traffic, and implementing specific scheduling/load balancing modes within each bundle (drawing on information from proactive routing protocols) were then applied from a systemic point of view. The Multihop Radio Resource Management enables a better use of nonoverlapping channels on multiple radios and is the key contribution of this work.

Selected scenarios have shown that, due to the novel combination of these techniques, the capacity in the WMN was significantly increased and a more efficient use of multiple radios was reached. The link-quality-sensitive WFS mode is tailored to proactive, link-state mesh routing and copes with interference on congested channels. Tunable queue parameters provide a toolset to facilitate QoS policies, which lead to selective end-to-end delay improvements on multihop paths.

At the same time, the paper also offers an overview of current trends and the state of the art of those components which can be sensibly combined in multiradio mesh node architectures. Furthermore, the work allows planners of practical mesh installations to select among various methods to enhance capacity and to quickly evaluate their impact within the simulated scenarios.

Future work will focus on the complete development and evaluation of the integral system and other more specific tasks, such as testing the impact of channel switches during run-time and the impact of layer 2 labeling and commutation techniques, among others.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.