Abstract

The management of packet filtering and processing rules in firewalls and security gateways has become commonplace in increasingly complex networks. On the one hand, there is a need to maintain the logic of high level policies, which requires administrators to implement and update a large number of filtering rules while keeping them conflict-free, that is, avoiding security inconsistencies. On the other hand, traffic-adaptive optimization of large rule lists is useful for general purpose computers used as filtering devices, without specially designed hardware, to face growing link speeds and to harden filtering devices against DoS and DDoS attacks. Our work joins the two issues in an innovative way and defines a traffic-adaptive algorithm, named Adaptive Conflict-Free Optimization (ACO), to find conflict-free optimized rule sets, relying on information gathered from traffic logs. The proposed approach suits current technology architectures and exploits available features, like traffic log databases, to minimize the impact of ACO deployment on the packet filtering devices. We demonstrate the benefit brought by the proposed algorithm through measurements on a test bed made up of real-life, commercial packet filtering devices.

1. Introduction

A key challenge of secure systems is the management of security policies, from high level ones down to the platform specific implementation. Security policies define constraints, limitations, and authorizations on data handling and communications. The growth of communication link speeds brings forward a need for improved performance of packet filtering devices, such as firewalls and secure Virtual Private Network (S-VPN) gateways. To improve performance while maintaining consistency, network security policies should be tailored according to the network traffic. We specifically address computer-based packet filtering devices that do not use specialized hardware filters (e.g., based on FPGAs) and refer to the widely deployed sequential rule list model, which underlies the most common computer-based filtering devices currently in use.

The process of inspecting incoming packets and looking up the policy rule set for a match often results in CPU overload and packet delay or even loss. As a matter of fact, rule lists do not exceed a few hundred active rules in well-maintained, operational packet filtering devices. Packets that match rules near the top of the list require a small computation time compared to those that require scanning the whole rule set. The processing load per packet becomes increasingly concerning as the input line speed increases and as packet filtering functions are assigned to a larger number of inexpensive, relatively simple devices. Having packets that scan deep into the list is not so unlikely; for example, undesired or unpredicted traffic is typically dealt with only by the final “deny all” rule.

In this paper, we pursue saving of CPU power by shaping the rule set onto the network traffic impacting the device. The idea is to give high priority to rules intercepting a large fraction of current traffic. Algorithms aiming at packet filter processing time improvements are presented in [1–6]. The nontriviality of the optimization procedures is due to dependencies among rules, which put constraints on rule reordering. Disregarding such dependencies can introduce inconsistencies in the policies implemented by the rule set of the devices. As reported in a number of works [7–11], conflicts among rules can cause holes in security, which are often hard to detect.

We develop an algorithm to solve the rule set optimization problem, under the constraint that the reordered rule set be conflict-free. Leveraging this approach, already proposed, for example, in [5], we extend the optimization algorithm with the extraction of new rules from the “deny all” rule, in order to improve packet processing time further by capturing undesired packet flows that do not match any of the existing rules. The new rules are inserted in the rule set so as to maintain the optimization of the processing load with respect to the current traffic mix. The overall optimization procedure is named Adaptive Conflict-Free Optimization (ACO). Our test results show that the extraction of rules from the “deny all” rule, as done in ACO, can improve the CPU performance of packet filtering devices and reduce the impact of DoS (Denial of Service) and DDoS (Distributed Denial of Service) attacks.

We outline an adaptive procedure to automatically launch ACO according to the traffic profile measured at device interfaces, aiming at striking a balance between device configuration updates and obtainable performance gains in a time-varying environment, where the traffic mix changes over time. Information on the network traffic mix is retrieved from log files collected by packet filtering devices. Using log files directly from the packet filtering device allows us to define an adaptation algorithm that can be used for different kinds of filtering, that is, whatever the fields exploited by rules are (e.g., header based or based on application level payload strings).

Our aim is to show how relatively simple means can yield a performance improvement without deeply affecting the hardware and software of currently deployed devices, especially in access networks, where devices are numerous and based on relatively cheap, off-the-shelf machines.

The paper is organized as follows. In Section 2 we describe related work. In Section 3 we introduce the operational scenario and the software tools we have realized to run ACO. A detailed description of the ACO algorithm is provided in Section 4. Section 5 outlines the algorithm that launches ACO adaptively, according to the measured traffic mix. In Section 6 we describe experimental results based on a laboratory test bed aiming at measuring ACO performance improvement and effectiveness against DoS attacks. Finally, we give some concluding remarks in Section 7.

2. Related Work
In [12] the Policy Core Information Model (PCIM) is described as an object-oriented model for representing policy information, developed as an extension to the Common Information Model (CIM) activity within the Distributed Management Task Force (DMTF: http://www.dmtf.org/). The definitions of policy and policy rule presented in PCIM and its extension in RFC-3198 [13] provided Basile and Lioy [14] with the starting point to refine these concepts into a form suitable for a formal approach. Hari et al. [7] aim at detecting whether firewall rules are correlated with each other, while in [8, 9] a set of techniques and algorithms is defined to detect all policy conflicts. Along this line, [10] and [11] provide an automatic conflict resolution algorithm for a single firewall and a tuning algorithm for multiple cooperating firewalls, respectively.

In parallel, great emphasis has been placed on how to optimize the performance of packet filtering devices. The recent review in [15] offers a systematic comparison of traffic-aware approaches to rule-based traffic filtering in security devices. In [16] a simple algorithm based on rule reordering is presented. This work describes rule dependencies using Directed Acyclic Graphs (DAGs), yet it does not provide a methodology to build the DAG of a given device. In addition, the proposed algorithm is infeasible in real environments with large rule sets and complex graphs. Frameworks and methodologies to inspect and analyze both multidimensional firewall rules and traffic log information are proposed in [13]. In [1, 2] the optimization tool uses current traffic characteristics to define rule set ordering so as to minimize the operational cost of the firewall. Four schemes are used to achieve this goal (hot caching, total reordering, default proxy, and online adaptation). In [3] an adaptive firewall optimization framework, named OPTWALL, is proposed; it is built to reflect the current traffic pattern into rule sets. A limitation of [13] is that it defines neither when the update process must be started nor the weight parameters used in rule size estimation. The approach proposed in [4] optimizes performance by rule reordering, but how to create the necessary statistics for rule weight estimation, as well as how to find dependency relations between rules, is not defined. In [5] an algorithm to optimize firewall performance is presented; it orders the rules according to their weights and considers two factors to determine the weight of a rule: rule frequency and recency, which reflect the number and time of rule matchings, respectively. They present two types of update: performance-based triggered update and time-based periodic update. We adopt a similar approach, also accounting for the further optimization brought about by breaking up the default “deny all” rule.
Reference [6] presents a process of managing firewall policy rules, consisting of anomaly detection, generalization, and policy update using Association Rule Mining and frequency-based techniques. However, complex distributed networks with multiple firewalls and log acquisition are not contemplated. TCAM-based fast packet classification is proposed in [17]. However, TCAMs are expensive and power hungry, as pointed out, for example, in [18]. Efficient packet classification by means of specially designed software is tackled in [19]. In [18], after a wide review of many alternatives, Lim et al. propose and analyze the Boundary Cutting algorithm. It leads to a decision tree data structure that can be optimized to yield good search complexity even in very large rule lists (on the order of 100,000 rules). A heuristic approach is explored in [20], by looking for a compromise between memory-efficient trie data structures and search-efficient decision trees. Detection of specific packets is considered in [21], where a randomized algorithm is proposed: the emphasis there is placed on isolating specifically targeted packets from the mass of the wire traffic. Even though raw search performance can be greatly improved by decision trees, complexity, power consumption, and cost often call for simpler realizations of packet filtering devices. So, adapting the rule list to the current traffic load remains a valid concept. Following that concept, an approach similar to ours, yet based on a more complex algorithm than the one we have developed, is defined in [22]. In [23, 24] different traffic-aware packet classification algorithms are defined, without specifically considering the traffic-adaptive optimization obtained by extracting detailed rules from the “deny all” rule. The rejection of massive undesired traffic is addressed in [25]. Their approach can be seen as complementary to the one proposed here, based on the extraction of new rules from the “deny all” rule.

A third relevant and correlated issue concerns the impact of rule extraction from the “deny all” rule. The few works on this topic [1, 4] do not demonstrate whether and in which cases this action benefits CPU processing time. Moreover, those works do not detail how many rules should be extracted and according to which priority order. We give an extraction algorithm coupled with rule set optimization and show that it can help relieve the effect of Denial of Service (DoS) attacks on packet filtering devices. DoS attacks attempt to exhaust or disable access to resources at the victim: network bandwidth, computing power, or operating system data structures. In flooding attacks, one or more attackers send streams of packets aimed at overwhelming link bandwidth or computing resources at the victim [26]. This type of attack, defined in [27], can be really dangerous because it can also be mounted by using many unaware sources of attack (Distributed DoS), thus reaching huge diffusion and volume, as shown in [28], where a three-week analysis of a network is reported that found more than 12,000 DoS attacks. In particular, we focus our attention on a flooding attack towards a firewall, aiming at making the packet filtering device collapse by means of a huge quantity of messages matching the “deny all” rule.

Current packet filtering technologies exploit traffic-adaptive mechanisms, such as caching of access list lookups [29]. In particular, the device stores a hash table whose entries match active packet flows and point at the corresponding rule/action of the rule set (cache association). This allows scanning the rule set only for the first packet of each active flow. Although this method is adaptive to network traffic, its efficiency decreases as the size of the hash table grows. Moreover, this approach is ineffective with a large number of different undesired packet flows.

Finally, we briefly mention different research directions on packet filtering devices. High speed packet filtering by means of specialized and optimized hardware is a prolific topic; for example, some recent works address the use of FPGAs (e.g., [30–33]). These works focus on optimized hardware design or matching rule search techniques that can be conveniently implemented with FPGAs. Instead, in this work we assume a general purpose computer server is used to run the filtering machine, which is typical of access network devices. Another approach focuses on defining an efficient compiler to produce an optimized implementation of a high level policy list, to minimize match search complexity (e.g., see [34, 35]). These works focus on optimization of the code implementing the filtering machine for the given list of rules, while our approach aims at optimally adapting the ordering of the rule list to the currently observed traffic mix. These can be seen as complementary points of view.

3. System Architecture

3.1. Definitions and Notation

We assume that security policies are translated into an ordered list of predicates of the form $C \Rightarrow A$, where $C$ is a condition and $A$ is an action. We refer to predicates implementing security policies as rules. For security gateways and packet filtering devices, actions that can be carried out on a packet are allow or deny (in IPSec gateways a third possible action is protect, for packets belonging to an activated security association needing to be encrypted and/or protected for authentication and integrity check). The condition of a rule is obtained as the logical AND of a number of conditions of the type: “selector value from packet header/payload belongs to a given interval or set/matches the given string.” For example, the classic implementation of network level packet filtering devices considers five selectors:
(1) protocol type, whose values can be represented by eight-bit integers, that is, range between 0 and 255;
(2) source and destination IP address, whose values can be represented in dotted decimal notation and correspond to integers ranging from 0 up to $2^{32}-1$ (for IPv4);
(3) source and destination port, whose values can be represented by sixteen-bit integers, that is, range between 0 and 65535.
A condition is specified by giving an interval of values for each selector; that is, a condition can be viewed as an interval contained in the five-dimensional, finite lattice space defined by

$$S = [0, 255] \times [0, 2^{32}-1]^2 \times [0, 65535]^2.$$

Different selectors could be considered, possibly involving header fields belonging to layers other than the network layer, for example, the application layer, or using strings taken from the packet payload. For example, a URL can be used in the rule condition. The basic structure of the list as a sequence of rules does not change though. In the end, the predicates reduce to text strings or to numeric intervals. The selected fields of each packet are checked against the predicates to verify whether they correspond to the string value or are comprised within the interval range.

Given a rule set organized as an ordered list, each packet delivered to the packet filtering device interfaces is checked against each rule, following the rule ordering, until the first matching rule is found. Then, the action of the matching rule is applied. The last rule is usually a “deny all,” that is, a rule with wild-cards for each condition field. The “deny all” discards any packet that has not matched any previous rule, so it implements the principle that anything which is not explicitly allowed must be denied. We assume there is always a “deny all” at the bottom of the rule list.
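The first-match semantics just described can be sketched in a few lines of Python. This is a toy illustration; the dictionary-of-intervals encoding of conditions and the example rules are assumptions of ours, not the paper's implementation.

```python
# Minimal sketch of first-match evaluation over an ordered rule list.
# A condition maps each of the five selectors to an inclusive (low, high)
# interval; wild-cards are full-range intervals.

FULL = {"proto": (0, 255), "src": (0, 2**32 - 1), "dst": (0, 2**32 - 1),
        "sport": (0, 65535), "dport": (0, 65535)}

def matches(condition, packet):
    """True if every selector value of the packet lies in the rule's interval."""
    return all(lo <= packet[s] <= hi for s, (lo, hi) in condition.items())

def first_match(rules, packet):
    """Scan the list in order; return the first matching rule's action and
    its depth (the number of rule tests, i.e., the per-packet cost)."""
    for depth, (condition, action) in enumerate(rules, start=1):
        if matches(condition, packet):
            return action, depth
    raise AssertionError("unreachable: the last rule is a full wild-card")

rules = [
    ({**FULL, "proto": (6, 6), "dport": (80, 80)}, "allow"),  # e.g., HTTP
    ({**FULL, "proto": (17, 17)}, "deny"),                    # e.g., all UDP
    (dict(FULL), "deny"),                                     # "deny all"
]
```

A packet that matches no explicit rule pays the full list length: with the three-rule list above, an ICMP packet is tested against all three rules before being denied by the final wild-card rule.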

The processing cost per packet is proportional to the depth of the matching rule. Hence, it can be reduced by reordering the rules according to the fraction of the input load that matches each rule, under the constraint of maintaining the dependencies among rules. The adaptation algorithm of a tagged device is triggered only when the analysis of the overall hit ratios of rules of that device points out that a significant shift of the aggregated traffic mix through the tagged device has taken place. The traffic mix is monitored through the logs produced by the device itself, as detailed in the ensuing subsection.

3.2. Networking Scenario

The considered scenario is made up of packet filtering security devices deployed in a managed network. Network Management Systems (NMS) allow administrators to handle device configurations (rule lists) and to monitor packets flowing through devices using log messages collected and stored by the packet filtering device.

The overall architecture of the automated and adaptive policy management system that we have built up is depicted in Figure 1. The complete system comprises a policy conflict resolution tool, a log management infrastructure, and a tool that, based on log messages collected from all devices in the network, estimates rule matching ratios and triggers automatically and adaptively the rule set optimization process based on traffic statistics. The focus of this paper is on the optimization and adaptation part of the entire project.

All packet filtering devices, such as firewalls and security gateways, are set up to collect and send a log message reporting on packets they allow or deny as a normal part of their operations. We exploit this feature for ACO. The analysis of log messages allows us to figure out:
(i) the real-time traffic profile, without using further devices such as network agents;
(ii) how many rules are active and how many packets match each rule.

A monitoring infrastructure is developed in order to collect and store log information into a log database (LogDB). In our testbed, logs collected from devices are sent using the “syslog” standard [36, 37]. Any other format could be used as well, provided it is “spoken” by both the device and the LogDB host. Figure 2 shows example data stored in LogDB. In particular, consider the following.
(i) IP address is retrieved from the “syslog” packet. It identifies a device interface on the network.
(ii) Device type specifies the rule list type; a device could be configured with both FW and IPSec access lists (this is an optional field).
(iii) Rule rank is the offset of the rule reported by the log with respect to the top of the list the rule belongs to.
(iv) Count is the number of packets that match that rule.
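Records of the kind described above can be aggregated into per-rule hit counts, which is the raw input ACO needs. The following sketch is illustrative only; the record layout is an assumption mirroring the fields listed above, not the actual LogDB schema.

```python
# Illustrative aggregation of LogDB-style records into per-rule hit counts.
# Each record carries the device interface IP, the rule rank, and a packet
# count, as in the example fields described in the text.
from collections import defaultdict

records = [
    {"ip": "10.0.0.1", "rank": 1, "count": 120},
    {"ip": "10.0.0.1", "rank": 3, "count": 30},
    {"ip": "10.0.0.1", "rank": 1, "count": 50},
]

def hit_counts(records):
    """Sum packet counts per (device IP, rule rank) pair."""
    hits = defaultdict(int)
    for r in records:
        hits[(r["ip"], r["rank"])] += r["count"]
    return dict(hits)
```

Dividing each per-rule count by the device total yields the rule weights used by the optimization algorithm of Section 4.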

The optimization tool box in Figure 1 contains the ACO algorithm. It retrieves the IP addresses of device interfaces to the networks and the device rule set from the DCDB. For each device, ACO retrieves rule hit numbers from LogDB. Then it calculates rule weights and hence rule costs. These are the input parameters to the optimization algorithm (see Section 4).

Log centralization is the typical architecture used in current corporate and telco networks. Our architecture aims at showing how to exploit the log data collection of the NMS also to improve the efficiency of packet filtering devices. Log reporting and rule list updating are normally implemented functions, and a LogDB is available in most networks independently of ACO. ACO exploits those functions for its own purposes, namely, to enhance packet filtering efficiency and to harden devices against DoS attacks.

Packet filtering devices of the managed network are monitored and the ACO algorithm is started when at least one of the following events occurs:
(i) the rule set is modified by the administrator (rule insertion, modification, or removal);
(ii) the network traffic changes, that is, a new flow starts, or an existing flow varies its bit rate or terminates.

The first criterion is motivated mainly by the need to check policy consistency, the second one by performance optimization adapting to traffic. We outline an algorithm for ACO automation in Section 5 specifically for this second situation. That is the part referred to by “Intelligent Decision Support System” in Figure 1.

4. Adaptive Conflict-Free Optimization (ACO) Algorithm Description

Let $R = (R_1, R_2, \dots, R_n, R_{n+1})$ be the ordered, conflict-free rule list provided as input to ACO; $n$ is the number of rules, besides the last rule, $R_{n+1}$, which is assumed to be “deny all.” ACO aims at minimizing packet processing times, under the constraint of maintaining a conflict-free rule list. For a detailed discussion and formalization of security policy conflicts in a rule list see [7, 8, 10]. It suffices to say that, for a conflict-free list, any couple of rules in the list must be either disjoint or in an inclusive matching relation. Rules $R_i$ and $R_j$ are disjoint if no packet can match both of them; the relative positions of $R_i$ and $R_j$ in the list are unconstrained. Rule $R_i$ is inclusive matching to $R_j$, denoted as $R_i \subset R_j$, if any packet matching $R_i$ also matches $R_j$ but the converse does not hold; moreover, the actions associated with $R_i$ and $R_j$ must be different. For the list to be conflict-free, rule $R_i$ must precede rule $R_j$ (more specific rule first). The relevant point for ACO is that whenever two rules, say $R_i$ and $R_j$, have nondisjoint domains, that is, there exists at least one packet that matches both of them, those two rules are said to be dependent and their ordering must be preserved as given in the input rule list.
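The two pairwise relations can be checked mechanically once rule conditions are encoded as per-selector intervals. The encoding below is an illustrative assumption of ours, not the paper's formalism.

```python
# Sketch of the pairwise relations behind conflict-freedom, with rule
# conditions encoded as dicts mapping selector names to inclusive intervals.

def overlap(a, b):
    """True if inclusive intervals a = (lo, hi) and b = (lo, hi) intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

def contains(outer, inner):
    """True if interval `outer` contains interval `inner`."""
    return outer[0] <= inner[0] and inner[1] <= outer[1]

def disjoint(c1, c2):
    """Rules are disjoint iff the intervals of at least one selector do not
    overlap, so that no packet can match both conditions."""
    return any(not overlap(c1[s], c2[s]) for s in c1)

def inclusive(c1, c2):
    """c1 inclusively matches c2 iff every packet matching c1 also matches
    c2, and the two conditions differ."""
    return all(contains(c2[s], c1[s]) for s in c1) and c1 != c2
```

Any nondisjoint pair of rules is dependent in the sense defined above, and its relative order must be preserved by the reordering.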

The optimization process defines a new rule list $R'$, which includes the rules $R_1, \dots, R_n$, possibly reordered, and $R_{n+1}$ (“deny all”) as the last rule. Further optimization is discussed in Sections 4.1 and 4.2, by merging into the rule list also rules extracted from $R_{n+1}$. The optimized list $R'$ must be equivalent to $R$ from the point of view of security policy implementation. Formally, for each given packet entering the device interfaces, the action performed by the device under $R$ and under $R'$ must be the same.

In the following, the subscript of rules refers to their rank in the original rule list. Let $p_i$ denote the rank of $R_i$ in the (possibly reordered) list, $1 \le p_i \le n+1$ (since the “deny all” is always the last rule, it is $p_{n+1} = n+1$; moreover, in the original rule list it is $p_i = i$). The rank is proportional to the processing cost of matching $R_i$; that is, for every packet matching $R_i$, $p_i$ tests are required to check all rules until $R_i$ is hit. The weight of $R_i$ is $w_i = h_i/h$, where $h_i$ is the number of packets hitting $R_i$ and $h$ is the overall number of packets received by the considered packet filtering device. The quantities $h_i$ and $h$, $i = 1, \dots, n+1$, are obtained by collecting the device logs over an observation time interval. The discussion of how to adapt the weights over time is given in Section 5.

The cost of $R_i$ is therefore $c_i = w_i p_i$. The overall cost of the list for a given rule ranking $\mathbf{p} = (p_1, \dots, p_{n+1})$ (any feasible ranking is a permutation of the integer set $\{1, \dots, n+1\}$) is

$$C(\mathbf{p}) = \sum_{i=1}^{n+1} w_i p_i.$$

ACO output is a rule set that minimizes the packet processing cost, $\min_{\mathbf{p}} C(\mathbf{p})$, under the constraint that the reordered list be conflict-free and equivalent to $R$; that is, if $R_i$ and $R_j$ ($i < j$) are dependent, it must be $p_i < p_j$.

We can state the constraint in a way useful to the optimization algorithm by resorting to a pseudo-tree data structure describing the relationships among the rules, referred to as the Device Pseudo-Tree (DPT) associated with the given rule list. An implicit definition of the DPT goes as follows: rule $R_i$ is a child of rule $R_j$ if and only if $R_i \subset R_j$ and there does not exist any rule $R_k$, $k \ne i, j$, such that $R_i \subset R_k \subset R_j$. Rules belonging to a conflict-free rule list, apart from the “deny all” rule, can be arranged in separate trees (possibly a single one) making up the DPT [10]. In each tree of the DPT there is a root node, which represents a rule that includes all the rules in the tree, and there are one or more leaves, which represent the most specific rules in the tree. Given the DPT associated with $R$, the constraint is checked by just requiring that no rule be assigned a rank smaller than its child rule(s); that is, scanning the list from top to bottom we must find any parent rule after its own descendant rules (i.e., the rules of the subtree rooted at the considered rule). Obviously, rules associated with disjoint subtrees of the DPT can be placed in any relative order.
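As an illustration of the constraint, consider the following greedy reordering sketch: at each step, the highest-weight rule among those whose children (more specific rules) have all been placed is appended to the list. This respects the DPT constraint, but it is a simplified heuristic under an assumed encoding of weights and child relations, not the exact ACO procedure detailed in Appendix A.

```python
# Greedy sketch of DPT-constrained reordering. `children[r]` lists the
# rules that are children of r in the DPT, i.e., more specific rules that
# must precede r in the output list. The "deny all" rule is assumed to be
# kept last, outside this reordering.

def reorder(weights, children):
    """weights: dict rule -> traffic weight; children: dict rule -> child rules.
    Returns an ordering in which every rule follows all of its descendants."""
    placed, order = set(), []
    pending = set(weights)
    while pending:
        # Rules whose children are all placed can legally be listed next.
        ready = [r for r in pending
                 if all(c in placed for c in children.get(r, []))]
        best = max(ready, key=lambda r: weights[r])  # heaviest eligible rule
        order.append(best)
        placed.add(best)
        pending.remove(best)
    return order

def list_cost(order, weights):
    """Sum of weight * rank over the ordered list (ranks start at 1)."""
    return sum(weights[r] * rank for rank, r in enumerate(order, start=1))
```

For instance, with weights A = 0.1, B = 0.5, C = 0.4 and B a child of A, the greedy order is B, C, A: C jumps ahead of A because A must wait for its descendant B, while C belongs to a disjoint subtree.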

The detailed steps of the ACO algorithm are described in Appendix A. A full blown example of the procedure is developed in Appendix B.

4.1. Extracting Rules from Deny All

If a high rate undesired flow matches the “deny all” rule, it can be convenient to extract a specific rule for that flow and place it at the optimum rank in the rule list. Extracted rules are always disjoint from all others in the rule set, so they do not cause additional conflicts and can be placed anywhere in the rule list. However, the inclusion of extracted rules does not necessarily improve performance from the processing load point of view.

In this section we define an algorithm for rule extraction from the “deny all” rule. It starts by identifying the minimum set of rules that covers the space of the denied traffic. As outlined in Section 3.1, the condition of a rule corresponds to an interval of the five-dimensional lattice $S$ described by the selector values specified in the rule condition. We denote the interval associated with rule $R_i$ with $I_i$.

Let $T$ be the set of indices of rules that are roots of the trees forming the DPT. The only rule more general than any $R_i$, $i \in T$, is the “deny all” rule. So, for the nonredundancy of the rule list, the action associated with $R_i$, $i \in T$, is necessarily allow. Then, the subspace $A$ comprising all allowed flows is given by

$$A = \bigcup_{i \in T} I_i.$$

Let us define $D$ as the complementary space of $A$ in $S$; namely, $D = S \setminus A$. We are interested in the minimum partition of $D$ into intervals; that is, $D = \bigcup_{j=1}^{m} E_j$, where the intervals $E_j$ are pairwise disjoint. This partition is not unique and can be obtained efficiently, for example, by using the same techniques as in ARC/PARC (Adaptive Resolution Classifier/Pruning ARC) min-max neurofuzzy classifiers [38].

Once the $m$ intervals of the partition of $D$ are found, we obtain the list of extractable rules, $X = (X_1, \dots, X_m)$, to insert in the optimized rule set in order to achieve a reduction of the device processing load. Only those rules from $X$ that lead to a significant processing effort saving are included in $R'$. This depends on the weight of $X_j$, that is, the fraction of packets matching $X_j$ during the observation interval.

For each $X_j$ in $X$, the rule weight in the observed time interval can be computed as $u_j = f_j w_{n+1}$, where $f_j$ is the share of packets blocked by the “deny all” rule that match $X_j$ and $w_{n+1}$ is the “deny all” weight. We assume that the numbering of the extracted rules is arranged so that they are listed in order of decreasing weights; that is, it is $u_1 \ge u_2 \ge \cdots \ge u_m$. Let $X = (X_1, \dots, X_m)$ be the new list.

4.2. Inserting Rules Extracted from the Deny All Rule

This phase consists of the insertion into $R'$ of rules $X_j$ ($j = 1, \dots, m$), taken from $X$. Thanks to the all-disjoint relations among the rules in $X$ and between these rules and the ones in $R'$, the extracted rules can be inserted in any position of $R'$ without generating conflicts.

Given the rule list $R'$, let its cost be $C' = \sum_{i=1}^{n+1} w_i p_i$. When $X_j$ is inserted into $R'$ with rank $q$, the packets matching $X_j$ are filtered at depth $q$ instead of at the bottom of the list, while every rule of rank $q$ or higher is pushed down one position; the following cost is obtained:

$$C(q) = C' + u_j\,q + \sum_{i:\,p_i \ge q} w_i - u_j\,(n+2), \qquad (7)$$

where $n+2$ is the rank of the “deny all” rule after the insertion.

Equation (7) shows that $C(q)$ is a decreasing function of the weight $u_j$ for a given value of $q$. So, to reap the maximum gain (cost reduction), insertion should start from the extracted rule with the biggest weight. Once the optimum insertion location for this rule is found, the second biggest weight extracted rule can be considered and so on. By virtue of the ordering of $X$, the insertion algorithm starts by considering $X_1$ and finds a value for $q$, that is, the rank of $X_1$ in $R'$, which minimizes the overall rule list cost. To achieve this goal we perform an exhaustive search. If the obtained minimum cost is less than the cost of the original list $R'$, then $R'$ is updated by adding the extracted rule $X_1$. The algorithm stores the updated list and its overall cost, and then it goes on evaluating the insertion of $X_2$ and so on, until it evaluates the insertion of all $m$ rules of $X$.

As a result of the insertion of extracted rules, we obtain expanded rule lists $R'_k$, of length $n+1+k$ (including the “deny all” rule), where the added rules $X_1, \dots, X_k$ have been assigned positions $q_1, \dots, q_k$, respectively, $k = 1, \dots, m$. The corresponding costs are denoted as $C_k$; by extension, we set also $C_0 = C'$. Since the benefit brought by the insertion of $X_k$ grows with $u_k$ and the rules of $X$ are ordered by decreasing weights, the sequence of obtained costs is unimodal; that is, it has a unique minimum, say for index $k^*$. Then $k^*$ is the optimum number of rules to be extracted from the “deny all” and it can be found at a cost linear with $m$.
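The exhaustive rank search of the insertion step can be sketched as follows. This is an illustrative implementation under our own encoding: the weight list is given in list order with the “deny all” last, and its weight is assumed to have already been reduced by the extracted rule's weight $u$.

```python
# Sketch of the insertion step: exhaustive search of the rank q that
# minimizes the list cost when an extracted rule of weight u is inserted.
# Extracted rules are disjoint from all others, so every rank is feasible;
# only the "deny all" rule must remain last.

def cost_after_insert(weights, u, q):
    """weights: per-rule weights in list order ("deny all" last, already
    reduced by u). Inserting at rank q (1-based) contributes u*q and pushes
    every rule of rank >= q down one position."""
    cost = u * q
    for rank, w in enumerate(weights, start=1):
        cost += w * (rank + 1 if rank >= q else rank)
    return cost

def best_insert_rank(weights, u):
    """Exhaustively try every rank up to (and including) the position just
    before the shifted "deny all" rule."""
    candidates = range(1, len(weights) + 1)
    return min(candidates, key=lambda q: cost_after_insert(weights, u, q))
```

For a light extracted rule, the best rank tends toward the bottom of the list, since placing it higher penalizes every rule it displaces; heavier extracted rules earn positions closer to the top.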

5. Traffic Driven Adaptation of ACO

The traffic mix at the input of a packet filtering device changes over time, so that each rule is matched by a varying number of packets as new traffic flows start or running ones terminate. The changing traffic mix impacts ACO since the weights $w_i$ of the rule list cost function defined in Section 4 are just the fractions of packets matching rule $R_i$.

To address this issue we follow the same approach developed, for example, in [5, 39]. We define an adaptive, event-driven mechanism to trigger the running of ACO, including the rule extraction from “deny all.” The key elements of our proposed mechanism are (i) collection of device log information; (ii) statistical testing based on log data, to estimate traffic mix variation over time; (iii) extraction of rules from the “deny all” rule, provided the cost of the added rules is more than compensated by the processing gain.

The logic of ACO adaptation is as follows. Let $t_0$ be the last time that the rule list of the tagged device has been updated. Logs are collected from the packet filtering device, so that the management system can track the number of packets $h_i$ matching rule $R_i$ ($i = 1, \dots, n+1$) and the overall number of packets $h$ arrived at the device over a time interval of duration $\Delta$. The collection time $\Delta$ is defined so that enough logs are accumulated to evaluate a statistically reliable estimate of the packet traffic fractions matching each rule; that is, $\hat{w}_i = h_i/h$, $i = 1, \dots, n+1$. The weight vector $\hat{\mathbf{w}}$ estimated at time $t_0 + \Delta$ is compared to the previous one, which has been used to optimize the rule list at time $t_0$. We test the hypothesis that the two weight vectors are drawn from the same probability distribution, by using the Chi Square test. If the hypothesis is inconsistent with the data (i.e., there is statistically reliable evidence that the input traffic mix has changed), a new optimization of the rule list is run, by taking the new weights equal to $\hat{w}_i$. Otherwise, a new collection period starts and the whole process repeats all over again.
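The hypothesis test on the two weight vectors can be sketched as follows. This is a minimal illustration of ours: the Pearson chi-square statistic compares the newly observed per-rule counts against counts expected under the previous weights, and the fixed critical value (the standard chi-square 95th percentile for 3 degrees of freedom) and the four-rule example are assumptions, not the paper's configuration.

```python
# Sketch of the traffic-shift test: Pearson chi-square statistic of the
# observed per-rule hit counts against the counts expected if the previous
# weight vector still held.

def chi_square_stat(observed, prev_weights):
    """observed: per-rule packet counts in the new interval;
    prev_weights: per-rule traffic fractions from the previous interval."""
    total = sum(observed)
    expected = [w * total for w in prev_weights]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)

def traffic_mix_changed(observed, prev_weights, critical=7.815):
    # 7.815 is the chi-square 95th percentile for 3 degrees of freedom
    # (4 rules minus 1), used here purely as an illustrative threshold.
    return chi_square_stat(observed, prev_weights) > critical
```

When the statistic exceeds the critical value, the hypothesis of an unchanged traffic mix is rejected and the rule list reoptimization is triggered.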

The automation algorithm is run individually for each device. The processing can be centralized in a network management system, by downloading logs accumulated by the filtering devices and storing them into the LogDB. Algorithm 1 summarizes the steps carried out by the ACO Decision Support System (ACO-DSS) to adapt the rule list according to the filtered traffic mix. The ACO-DSS samples the LogDB, to check whether the number of packets listed in the collected logs for the considered device in the $k$th sampling interval of duration $\Delta$ is larger than a threshold value $h_{\min}$. If that is the case, the Chi Square statistical test is performed. If the test detects that the traffic mix has changed, ACO is run, including extraction of rules from “deny all.” The performance gain of the resulting optimized list is assessed and compared with a threshold. The new list is implemented if the performance gain is big enough.

(1) for k ← 1, 2, … do
(2)  if the logs of the kth sampling interval are available for device dev then
(3)    n_i ← logs matching rule R_i for device dev
(4)    n ← logs collected for device dev
(5)    if n ≥ N_min then
(6)      w_i ← n_i/n, i = 1, …, N
(7)      X² ← Σ_{i=1}^{N} (n_i − n·m_i/m)² / (n·m_i/m)
(8)      if X² > χ²_{1−α,N−1} then
(9)        m_i ← n_i, i = 1, …, N
(10)       Extract rules from “deny all”
(11)       Optimize rule list of device dev
(12)       Evaluate percentage Cost Reduction pCR
(13)       if pCR ≥ pCR_min then
(14)         Upload optimized rule list into device dev
(15)       end if
(16)     end if
(17)    end if
(18)    discard the logs of the kth interval
(19)  end if
(20) end for
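The loop above can be rendered as a compact Python sketch. This is not the authors' implementation: the `optimize` and `upload` helpers are hypothetical placeholders, the Gaussian quantiles are tabulated for two common significance levels, and the chi-square threshold uses Fisher's large-sample approximation rather than exact quantiles.

```python
import math

def chi_square_stat(curr, prev):
    """Pearson statistic comparing the current per-rule log counts with the
    baseline counts the rule list is currently optimized for."""
    n, m = sum(curr), sum(prev)
    return sum((c - n * p / m) ** 2 / (n * p / m)
               for c, p in zip(curr, prev) if p > 0)

def chi2_quantile(alpha, k):
    """Fisher's large-k approximation of the (1 - alpha) chi-square quantile."""
    z = {0.05: 1.645, 0.01: 2.326}[alpha]   # tabulated Gaussian quantiles
    return 0.5 * (z + math.sqrt(2 * k - 1)) ** 2

def aco_dss_step(curr, prev, n_min, alpha, pcr_min, optimize, upload):
    """One sampling interval of the ACO-DSS loop (sketch of Algorithm 1).
    `optimize` returns (new_list, pCR); `upload` pushes the list to the device.
    Returns the baseline counts to use in the next interval."""
    if sum(curr) < n_min:                   # not enough logs: keep collecting
        return prev
    if chi_square_stat(curr, prev) <= chi2_quantile(alpha, len(curr) - 1):
        return prev                         # traffic mix deemed unchanged
    new_list, pcr = optimize(curr)          # extraction + reordering
    if pcr >= pcr_min:
        upload(new_list)
    return curr                             # new baseline weights
```

A full deployment would wrap `aco_dss_step` in a timer loop fed by the LogDB, one instance per managed device.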

The parameters N_min and the sampling time T can be dimensioned based on the following guidelines. Let us consider the kth sampling interval, drop the subscript for simplicity, and let p be the probability that a packet belongs to a given flow. The unbiased, asymptotically consistent estimator of p is p̂ = f/n, where f is the number of packets belonging to that flow out of the n logs collected in the considered interval. The relative root mean square error of this estimator is √((1−p)/(np)). This can be made less than a given error ε (we set ε = 0.01) by taking n·p bigger than 1/ε² (10000 in our case). Accurate estimates of traffic flow rates are required especially for the largest flows, those that have the biggest impact on the processing resources of the device, so that their filtering can be optimized most profitably. Let η be the fraction of the device maximum throughput R_max such that we want accurate estimates for those flows offering at least η·R_max. Then, we should set T so that n ≈ R_max·T satisfies n·η ≥ 1/ε². For example, let R_max be on the order of 10^5 packets/s, as in Section 6, and let η = 0.05; that is, we aim at estimating accurately those flows whose rate is equal to or bigger than 5% of the device throughput. Then it must be n ≥ 1/(ε²η) = 2·10^5 logs (this is how the threshold N_min can be dimensioned), whence T is on the order of a couple of seconds. Even if the rate of the input packet flow is two orders of magnitude less than in the example above, the requirement on T would still be in the order of some hundred seconds. The fine tuning of T should be carried out in the specific networking environment where the packet filtering device is deployed. This issue is further discussed at the end of this section.
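The dimensioning guideline above can be captured in a few lines of Python; the throughput values used in the comments are illustrative, matching the order of magnitude discussed in the text.

```python
import math

def min_logs(eps, eta):
    """Smallest sample size n such that flows carrying at least a fraction
    eta of the traffic are estimated with relative RMS error below eps,
    using sqrt((1 - p)/(n p)) <= sqrt(1/(n p)) <= eps for p >= eta."""
    return math.ceil(1.0 / (eps ** 2 * eta))

def min_collection_time(eps, eta, rate):
    """Collection interval T (seconds) needed at an input rate of
    `rate` packets/s to accumulate min_logs(eps, eta) log entries."""
    return min_logs(eps, eta) / rate

# eps = 1% relative error, eta = 5% of the device throughput:
# about 2e5 logs are needed. At 1e5 packets/s a couple of seconds
# suffice; at 1e3 packets/s some hundred seconds are needed.
print(min_logs(0.01, 0.05))                    # ≈ 200000 logs
print(min_collection_time(0.01, 0.05, 1e3))    # ≈ 200 s
```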

The decision about traffic mix change exploits the Chi Square test (CST), to determine whether the current sample weight vector belongs to the same probability distribution as the previous one (see also [39]). The choice of the significance level α, namely, the probability of false positive errors, is guided also by the observation that false positive errors are more critical than false negative errors. As a matter of fact, the latter imply that a real shift of the traffic mix is overlooked: in that case all device rule lists stay the same, so they might turn out to be nonoptimized against the current traffic mix. In case of a false positive error, device rule lists would be updated erroneously, since a traffic mix change is estimated whereas no actual change has occurred. The choice of the level α depends on the error cost weighting of specific applications. We set α = 0.05.

Let n_i be the number of logs matching rule R_i in the current observation interval and m_i the number of logs of the sample for which the rule list is currently optimized. Setting n = n_1 + ⋯ + n_N and m = m_1 + ⋯ + m_N, the test variable is

X² = Σ_{i=1}^{N} (n_i − n·m_i/m)² / (n·m_i/m).

The null hypothesis is that the outcomes n_i are drawn from the same probability distribution as the m_i, i = 1, …, N. Under the null hypothesis, the test variable X² is asymptotically distributed as a Chi Square with N−1 degrees of freedom for large sizes of the collected log sample. Hence X² is compared with the Pearson threshold for the Chi Square test; namely, χ²_{1−α,N−1}, the (1−α)-quantile of the Chi Square random variable with N−1 degrees of freedom. For a large number k of degrees of freedom, it is χ²_{1−α,k} ≈ (z_{1−α} + √(2k−1))²/2, where z_{1−α} is the (1−α)-quantile of the standard Gaussian random variable. For α = 0.05 it is z_{0.95} ≈ 1.645.

If X² ≤ χ²_{1−α,N−1}, the null hypothesis is accepted and the traffic mix is deemed unchanged. Then, the logs gathered in the last observation interval are discarded. In case the traffic mix is estimated to have changed, ACO is run, including the extraction of rules from “deny all.” The amount of obtained performance improvement does not necessarily justify the upload of the new rule lists: they are sent to the devices only if there is enough performance improvement to be gained. This is realized by means of the threshold pCR_min, expressing the minimum percentage cost reduction (costRed%) that triggers upload of the new configuration into the DCDB and hence to the filtering devices. The choice of pCR_min is a trade-off between the benefits of the optimization and the costs of the configuration upload. These costs may be of different types, for example, unavailability of the device for a certain period of time (reset on upload), security issues, or reduced device redundancy. Note also that the benefits of the optimization may vary depending on the network devices and traffic, which is why pCR_min should be chosen according to the specific scenario in which ACO is deployed.

A critical point for ACO automation to be feasible is the expected time scale of traffic mix variation, which depends on the specific networking context. We address specific examples in the next section, where the traffic mix changes because of a DoS attack that abruptly introduces new packet flows in an attempt to saturate the input capacity of the packet filtering device.

As an example of “ordinary” traffic variation over time (not affected by DoS attacks), we show in Figure 3 two traffic measurements taken from a tier-1 level Italian ISP operational public network (input and output traffic profiles are plotted on the positive and negative ordinates, resp.). The top graph reports the http/https traffic impacting a web portal of a major company. The traffic profile refers to a single IP address/port number (80) and is plotted in units of packets/s. The bottom graph shows UDP traffic impacting an authoritative DNS server (unique IP address/port number 53).

In both cases, it is apparent that significant changes of the volume of traffic of each flow occur over a time scale in the order of hours. This provides the opportunity to relax the requirement on the observation time interval to collect a reliable statistical sample of logs and determine when a significant change occurs. It also relaxes the computational power requirements to run ACO.

6. Performance Evaluation of ACO

We carried out an experimental evaluation of the benefits of rule set optimization and rule extraction from “deny all.” We set up a test-bed, outlined in Figure 4 and consisting of three Fast Ethernet subnets (physical link capacity: 100 Mbps). Two of them, net1 and net2, are connected by a single packet filtering device Amtec SAS 1000, referred to simply as “filtering device” in the following. The device rules are configured so that only traffic between net1 and net2 is allowed. Attacking flows originate from net3 and all of them match the “deny all” rule; hence they have the maximum possible processing cost.

The filtering device used in the experiments runs many security functions (e.g., detection of known attacks, activity logging), which makes the test-bed a close picture of a real operational environment, yet forbids simple mathematical modeling of CPU activity. Therefore, we run black-box tests and take the packet loss ratio and the packet throughput of a tagged flow through the filtering device as key performance indicators.

In Section 6.1 we discuss tests aimed at evaluating benefits of rule set optimization on processing performance of a packet filtering device. Section 6.2 deals with performance improvement by means of rule extraction from “deny all,” specifically benefits in rejecting Denial of Service attacks.

6.1. Effect of Rule Cost Optimization

To evaluate the packet filtering device performance improvement obtainable as a function of the position of rules inside the list, we generated a UDP flow from net1 to net2 and measured its carried rate (throughput) as the rank of the rule matching that flow varies. In Figure 5 we plot the throughput gain as a function of the processing cost reduction, as the matching rule rank is decreased from the bottom of the list up to rank 1. Three different values of the inbound packet rate are considered. In all three cases, IP packets are 64 bytes long and the inbound packet rate ranges between 3200 and 3428 packets/s. (Some dispersion of the numerical results of the experiments is due to the well-known burstiness of traffic generation by means of IPERF [40].)

The results in Figure 5 show that the percentage throughput improvement grows with the packet rate. This is a useful feature of ACO, since the demand for lowering the processing cost arises precisely when the traffic intensity increases. Conversely, the lower the inbound packet rate, the smaller the optimization benefit.

6.2. DoS Rejection Capability via Extraction of Rules from “Deny All”

ACO can help relieve the effect of DoS and DDoS attacks on packet filtering devices. A Denial of Service attack (possibly a Distributed DoS) aims at overloading the CPU of the device by throwing at it a large amount of traffic, consisting of flows not envisaged in the policy design. These flows are discarded by virtue of the “deny all” rule, but this requires the entire list to be checked before a decision is taken on each packet. Even cache based accelerators can be ineffective, if a large number of different, undesired flows are thrown against a filtering device. That is not difficult to obtain, for example, by randomly changing the source port, destination port, protocol type, or source address fields. ACO rule extraction from “deny all” can provide aggregated rules able to match the undesired traffic. Those rules can be merged into the rule list by the optimization procedure, thus accounting for their weight in terms of matched packets.

ACO cannot be the only defence against DoS/DDoS attacks, especially when the inbound link is saturated by anomalous traffic. In that case only the provider can definitely remove the effect of DoS/DDoS, by disconnecting the malicious sources of traffic. Nevertheless, we show that ACO is effective in detecting and reacting to DoS/DDoS attacks, by relieving CPU load and protecting legitimate traffic.

Because of the limited number of associations that can be created and their single flow nature, cache based acceleration of processing works best with static traffic patterns. If a big surge of traffic made up of a large number of different and varying flows hits the filtering device, cache association is essentially ineffective. Extraction of rules from “deny all” as carried out in ACO aims at addressing this problem so as to complement the cache acceleration mechanism, by minimizing the time needed to match a packet. This is obtained by extracting maximally aggregated deny rules from the “deny all” and bringing them as close to the top of the rule set as dictated by the fraction of the inbound traffic hitting that rule.
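To give the flavor of extracting maximally aggregated deny rules, a simplified sketch over a single selector field (not the authors' full multi-field procedure): the complement of the allowed destination ports can be covered by a handful of contiguous ranges, each backing one aggregated deny rule that can then be ranked by its traffic weight.

```python
def complement_ranges(allowed, lo=0, hi=65535):
    """Given the set of destination ports allowed by the policy, return the
    maximal contiguous port ranges covering the complementary space.
    Each returned (a, b) range could back one aggregated deny rule."""
    denied = []
    start = lo
    for p in sorted(set(allowed)):
        if p > start:
            denied.append((start, p - 1))
        start = p + 1
    if start <= hi:
        denied.append((start, hi))
    return denied

# Policy allows only DNS (53) and HTTP (80): three deny ranges suffice
# to match every other destination port.
print(complement_ranges({53, 80}))   # → [(0, 52), (54, 79), (81, 65535)]
```

A small number of such aggregated rules covers an arbitrarily large number of randomized attack flows, which is precisely why per-flow cache associations fall short here.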

The effectiveness of ACO is measured from a user point of view, as suggested in [41, 42], by injecting into the security device an allowed flow and measuring its degradation under the DoS attack. The types of legitimate traffic considered in our test-bed are TCP and UDP flows, as in [43], and FTP transactions. To measure network performance we take the following key performance indicators:
(i) long-term average net throughput for TCP and UDP;
(ii) average file transfer speed (in Mbit/s) for FTP.

For each type of legitimate traffic we vary the DoS attacking flow bit rate from 1 Mbit/s up to 35 Mbit/s. According to a worst-case scenario, we set the attacking flow packet size to 40 bytes, so that the attacking flow packet rate ranges from 3124 packets/s for a bit rate of 1 Mbit/s up to 110655 packets/s for a bit rate of 35 Mbit/s.

Results are shown in Figures 6, 7, and 8 for TCP, UDP, and FTP traffic, respectively. Each experiment consists of launching a legitimate flow. All legitimate flows are set so that the filtering device processes them without any packet loss in case of no DoS attack. Performance worsening is only due to the onset of the attacking flow, which starts at a given time after the beginning of the experiment. Later on, ACO is run (the numerical values of these times are chosen to ease graph display; the reaction time of the automated ACO algorithm is in the order of seconds; see Section 5): a rule that captures the DoS flow is extracted from the “deny all” and the overall rule list is optimized as described in Section 4. The experiment run is then stopped.

When the attack starts, the performance of the legitimate flow degrades abruptly. After the extraction performed by ACO, it improves, in some cases getting back to the value observed prior to the attack. The legitimate flows react in different ways, according to the functionality of each protocol. For example, Figure 6 shows that TCP suffers a major throughput loss even under a relatively mild attack (3124 pkts/s), due to the TCP congestion window shrinking on packet loss detection. After ACO extraction of a rule filtering the attacking flow and optimization of the rule list, the device can process packets faster, thus reducing loss events and allowing TCP to attain a higher sending rate. The UDP case is completely different (Figure 7), since there is no closed loop congestion control mechanism and no datagram retransmission. In this case ACO extraction turns out to bring about a major performance improvement. The extraction phase of ACO is quite effective against the DoS attack in the FTP case too, as shown in Figure 8.

For each legitimate traffic type and for each attack packet rate, we calculate the percentage improvement (PI) of the relevant performance indicator due to rule extraction from the “deny all” rule. The PI of a given performance indicator is defined as

PI = 100 · (V_after − V_before)/V_before,

where V_before is the value of the performance indicator before ACO execution and V_after its value after ACO execution.

The two average values are taken over 200 s time intervals: V_before is the average of the performance indicator over the 200 s interval preceding ACO execution, whereas V_after is computed by averaging the performance indicator over the 200 s interval following it. In the setup of these experiments, we force the execution of ACO at a fixed time, chosen to let a stable regime be reached both before and after ACO execution.
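The PI definition and the window averaging amount to the following one-liner; the sample values are illustrative, not measured data.

```python
def percentage_improvement(before_samples, after_samples):
    """PI of a performance indicator: relative gain (in %) of its average
    over the window after ACO execution vs. the window before it."""
    v_before = sum(before_samples) / len(before_samples)
    v_after = sum(after_samples) / len(after_samples)
    return 100.0 * (v_after - v_before) / v_before

# FTP download speed (Mbit/s) sampled under attack, before and after
# ACO rule extraction (illustrative numbers, not measured data):
print(round(percentage_improvement([2.0, 2.2, 1.8], [3.6, 3.4, 3.2]), 1))  # → 70.0
```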

In Figure 9 the PI of the average download speed is plotted for FTP as a function of the attacking flow packet rate. The other cases are qualitatively similar to this one. For DoS attacks at packet rates lower than a critical value, the obtained PI is very low, so in those cases ACO rule extraction is not really needed. For bigger values of the attack flow packet rate the PI grows, reaching a maximum and then decreasing somewhat, still hovering around 60%. Even under a heavy attack, performing ACO rule extraction and optimization allows users to download a file via FTP more than twice as fast as with a nonoptimized rule list.

ACO can also be exploited against DDoS attacks, since the rules extracted from the “deny all” include aggregates of flows: they are actually the most general rules that cover the selector parameter subspace complementary to the subspace of allowed flows. So, a small set of rules can deal with all possible DDoS flows. When DDoS attack flows, possibly generated from different sources, match a single extracted rule, the distributed attack is faced by ACO just as if it were a DoS attack from a single source. To demonstrate this robustness of our approach, we performed an experiment keeping the test methodology and network scenario the same as before, except that three different attacking flows are generated in net3, originating from three different PCs. The attacking flows are such that a single extracted rule matches all of them. For space reasons we do not show all DDoS test results, but just the PI for every legitimate flow (Table 1). Even against powerful attacks with large aggregate bit rates, the provision of ACO rule extraction and optimization reaps a performance gain of up to about 60% (FTP case) with respect to the degradation due to the attack.

7. Conclusions

This work focuses on optimization techniques for packet filtering devices such as firewalls and security gateways. The basis of our proposal is the reduction of the packet processing cost relying on traffic observed on the network. Our tool collects traffic information by means of logs sent by the managed devices and exploits them to reorder the device rule set. Furthermore, it creates new rules, extracted from the “deny all” rule, to match input traffic flows that are not captured by other rules. This last feature can be useful against DoS/DDoS attacks.

We have implemented ACO in an experimental test-bed and measured its effect. Results point out that rule reordering entails a tangible improvement of the packet filtering device processing performance. We have also tested the anti-DoS functionality of the ACO extraction phase, measuring the attacks' impact on legitimate traffic, and we have demonstrated that, for attacks with packet rates higher than a critical value, extracting rules from “deny all” allows legitimate users under attack to attain a performance improvement between 30% and 60% in most cases.

Appendices

A. ACO Procedure Details

The ACO target can be stated as follows: find a ranking that minimizes the rule list cost in (2), under the constraint that rule R_i precedes rule R_j whenever R_i ≺ R_j in the DPT defined in Section 4. With reference to the DPT, we write R_i ≺ R_j if R_i belongs to the subtree rooted in R_j.

Let L_0 be the conflict-free rule list given as ACO input. We place the “deny all” rule on top of the DPT associated with L_0 as a common root, thus formally making this structure a tree, which we refer to as Rule Tree (RT).

We define a reduction process of RT, starting from the leaf rules and reducing subtrees into single nodes, with an associated ordered partial rule list. The idea is that we start out with “atomic” lists made up of the single rule associated with each node of the DPT; then we visit the DPT starting from the leaves up to the root and we merge partial rule lists associated with visited nodes into a bigger list, with minimum cost and verifying the constraint. This procedure breaks up the problem of finding the optimal ordering of all rules into solving subproblems, consisting of merging two optimized lists into a bigger, still optimized one, until we end up with a list including all rules.

Let r(v) denote the rule associated with node v of RT. Let M be the list merging function, whose inputs are (optimized) rule lists and whose output is a unique optimized ordered list encompassing all rules appearing in the input lists. The steps of the reduction algorithm are as follows.
(1) Initialization. Set h = 0, let T_0 = RT, and associate the atomic list L(v) = (r(v)) with each node v.
(2) Leaf Node Reduction. Take all leaf nodes v_1, …, v_q that are children of the same parent node u in the tree T_h, remove them, and label u with the ordered rule list M(L(u), L(v_1), …, L(v_q)); repeat until all (original) leaves of T_h are removed.
(3) Stop Condition. Let T_{h+1} be the residual tree after the reduction in step (2). If the depth of T_{h+1} is greater than 1, replace h with h+1 and go back to step (2).
Since the depth of the tree is reduced by one at each step, the algorithm terminates in a finite number of steps, say H. The ordered list associated with the unique node of T_H is the optimized rule list. The list merging function M is applied to disjoint rule lists and yields a list whose length is the sum of the input list lengths. For more than two input lists it is defined recursively by M(L_1, …, L_q) = M2(M(L_1, …, L_{q−1}), L_q), so it suffices to specify the merge function M2(A, B) for two lists. The function M2 merges the minimum cost lists A and B into a single minimum cost list satisfying the DPT constraint, provided that both A and B separately satisfy the same constraint.

The algorithm to form C = M2(A, B) can be described inductively. Let A = (a_1, …, a_s) and B = (b_1, …, b_t) be two minimum cost lists, satisfying the DPT constraints. Let also W_A(i) denote the sum of the weights of rules a_i, …, a_s, for i = 1, …, s, and let W_B(j) denote the sum of the weights of rules b_j, …, b_t, for j = 1, …, t. Finally, let V be the matrix defined implicitly by the recurrence V(i, j) = min{V(i+1, j) + W_B(j), V(i, j+1) + W_A(i)}, with V(s+1, j) = V(i, t+1) = 0, for i = 1, …, s and j = 1, …, t. Then consider the following.
(1) Set i = 1, j = 1, and C = () (empty list).
(2) If V(i+1, j) + W_B(j) ≤ V(i, j+1) + W_A(i), append a_i to C and replace i with i+1; else append b_j to C and replace j with j+1. If i ≤ s and j ≤ t, repeat step (2).
(3) If i > s, append b_j, …, b_t to C, and then stop.
(4) If j > t, append a_i, …, a_s to C, and then stop.

In the following it is proved that this algorithm provides the optimum (minimum cost) list if the two merged lists A and B are each separately optimized. Since A and B are each optimized, their orderings minimize their respective costs:

C(A) = Σ_{i=1}^{s} i·w(a_i),   C(B) = Σ_{j=1}^{t} j·w(b_j).

Let us now consider the merged list C = M2(A, B). Let β_i be the number of rules of B that are placed between a_i and a_{i+1}, i = 1, …, s−1; let β_0 be the number of rules of B that are placed before a_1; let β_s be the number of rules of B that are placed after a_s. Similarly, let α_j be the number of rules of A that are placed between b_j and b_{j+1}, j = 1, …, t−1; let α_0 be the number of rules of A that are placed before b_1, and let α_t be the number of rules of A that are placed after b_t. Note that β_0 + ⋯ + β_s = t and α_0 + ⋯ + α_t = s. Then the cost of C can be expressed as

C(C) = C(A) + C(B) + Σ_{i=1}^{s} w(a_i)·(β_0 + ⋯ + β_{i−1}) + Σ_{j=1}^{t} w(b_j)·(α_0 + ⋯ + α_{j−1}).   (A.3)
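The cost decomposition can be checked numerically: for any interleaving that preserves the internal order of the two lists, the direct rank-times-weight cost equals C(A) + C(B) plus the cross terms, where each rule's rank grows by the number of rules of the other list placed before it. A small sketch with illustrative weights:

```python
def list_cost(weights):
    """Cost of an ordered rule list: sum over rules of rank (1-based) x weight."""
    return sum(k * w for k, w in enumerate(weights, start=1))

def interleave(a, b, mask):
    """Merge lists a and b according to mask (True = take next from a),
    preserving the internal order of each list."""
    out, i, j = [], 0, 0
    for from_a in mask:
        if from_a:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out

def merged_cost_decomposed(a, b, mask):
    """Cost of the merged list computed as C(A) + C(B) plus cross terms:
    each rule's rank grows by the number of rules of the other list
    placed before it."""
    cost, i, j = list_cost(a) + list_cost(b), 0, 0
    for from_a in mask:
        if from_a:
            cost += a[i] * j; i += 1   # j rules of B precede this rule of A
        else:
            cost += b[j] * i; j += 1   # i rules of A precede this rule of B
    return cost
```

Both computations agree on every order-preserving interleaving, which is what makes the minimization depend only on the cross terms.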

Given the ordered, conflict-free, and optimized lists A and B, the overall cost in (A.3) points out that the minimization only depends on the two last sums (the incremental cost due to merging). This merging cost can be rewritten in a different way. Let χ_k be 1 if the kth element of C belongs to A and 0 otherwise, for k = 1, …, s+t; let also i_k = 1 + Σ_{l=1}^{k−1} χ_l and j_k = 1 + Σ_{l=1}^{k−1} (1 − χ_l) denote the indices of the top not yet placed elements of A and B, respectively, just before the kth element of C is chosen. Then, the merging cost in (A.3) can be rewritten as follows:

Σ_{k=1}^{s+t} [χ_k·W_B(j_k) + (1 − χ_k)·W_A(i_k)],   (A.4)

where the i_k's and j_k's result from the ordering of the merged list C.

The merging of the two lists is done by preserving the relative order of rules belonging to the same original list, because of the conflict-free constraint. Then, the minimization of the merging cost can be done by selecting at each step k which element to pick for the kth position of C from the current top elements of A_k and B_k, in such a way that the sum on the right-hand side of (A.4) is minimized; A_k and B_k denote the lists obtained from A and B by deleting the elements already inserted in the first k−1 positions of C. The selection process is initialized with A_1 = A and B_1 = B.

This problem can be restated as finding the minimum cost route from origin to destination nodes in the graph of Figure 10.

The state (i, j) of the graph refers to the (partial) lists A_i and B_j, the top elements of the two lists being a_i and b_j, respectively. From state (i, j) a transition can be triggered to (i+1, j) or to (i, j+1), in case a_i or b_j is selected, respectively. It is intended that the state components i = s+1 and j = t+1 represent the end of the corresponding list. The labels on the graph arcs are coded as x/c, where x denotes the popped up element and c is the arc cost.

The origin node is (1, 1) and the destination node is (s+1, t+1). The minimum cost route can be found, for example, by using a Bellman-Ford like approach, starting from the destination node. In general, let the minimum merging cost starting from state (i, j) be V(i, j). Then, from the graph structure it is easy to check that

V(i, j) = min{V(i+1, j) + W_B(j), V(i, j+1) + W_A(i)},   V(s+1, j) = V(i, t+1) = 0.

Starting from state (1, 1), if V(i+1, j) + W_B(j) ≤ V(i, j+1) + W_A(i), then a_i is selected; otherwise b_j is selected. The number of states is (s+1)(t+1), so the complexity of the algorithm grows linearly with s and with t.
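The backward recursion can be sketched in Python as follows; memoization stands in for the Bellman-Ford sweep, and the weights are illustrative.

```python
from functools import lru_cache

def merge_min_cost(a, b):
    """Merge two weight lists, preserving each list's internal order, so that
    the total cost (sum of rank x weight) of the merged list is minimal:
    backward recursion V(i, j) over the state graph of Figure 10."""
    s, t = len(a), len(b)
    # Residual weights: WA[i] = weight of a[i:], WB[j] = weight of b[j:].
    WA = [sum(a[i:]) for i in range(s + 1)]
    WB = [sum(b[j:]) for j in range(t + 1)]

    @lru_cache(maxsize=None)
    def V(i, j):
        """Minimum residual merging (cross) cost from state (i, j)."""
        if i == s or j == t:
            return 0.0
        return min(V(i + 1, j) + WB[j],   # place a[i]: it precedes all of b[j:]
                   V(i, j + 1) + WA[i])   # place b[j]: it precedes all of a[i:]

    # Follow the argmin path from the origin state (0, 0).
    out, i, j = [], 0, 0
    while i < s and j < t:
        if V(i + 1, j) + WB[j] <= V(i, j + 1) + WA[i]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:]); out.extend(b[j:])
    return out
```

Each state has two outgoing arcs, so filling the (s+1)×(t+1) value table and walking the argmin path takes O(s·t) work.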

B. Example of ACO Application

We develop a full-blown example of the application of ACO.

An example of conflict-free rule list that can be fed as input to ACO is given in Table 2.

Reordering must respect rule dependencies to avoid introducing conflicts. For example, if a rule of Table 2 is brought to the top of the list because of its large cost, that may create a conflict with a preceding rule whose condition is included in the condition of the moved rule and whose action is opposite.

The DPT for the rule list in Table 2 is depicted in Figure 11. The “deny all” rule has been put on top of the DPT, as it is the most general rule.

The DPT of Figure 11 is used to optimize the rule list of Table 2 with the weights shown in the last column of Table 2. Figure 12 shows the optimization process in four steps (from left to right, from top to bottom), yielding the final ordered, conflict-free, and optimized list, whose overall cost is 17.4% lower than the cost of the initial list.
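The cost computation behind such a percentage can be sketched as follows; the weights are illustrative and are not those of Table 2, and the simple descending-weight reordering ignores the DPT constraints for brevity.

```python
def list_cost(weights):
    """Processing cost of a rule list: a packet matching the rule at rank k
    costs k lookups, so the mean per-packet cost is sum of rank x weight."""
    return sum(k * w for k, w in enumerate(weights, start=1))

# Illustrative weights (fractions of matched traffic), not those of Table 2.
initial = [0.05, 0.10, 0.40, 0.15, 0.30]
reordered = sorted(initial, reverse=True)   # ignores DPT constraints
reduction = 100 * (1 - list_cost(reordered) / list_cost(initial))
print(f"{reduction:.1f}% cost reduction")
```

With the DPT constraints enforced, as in the paper, the achievable reduction is generally smaller than this unconstrained bound.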

As an example of how a list of rules extracted from “deny all” can be created, we refer again to the rule list in Table 2, whose DPT is shown in Figure 11. Table 3 illustrates the set of rules extractable from the “deny all” rule of the list in Table 2 and the associated normalized weights; the rules with nonzero weight form the list of candidates for extraction.

If we apply “deny all” rule extraction to the rule list of Table 2, by using the extracted rule set of Table 3, it turns out that there is no cost reduction. This is because rule extraction has a useful impact only if the number of packets matching “deny all” is a significant fraction of the overall packets dealt with by the filtering device. In the example of Table 2, the “deny all” traffic accounts for just 10%. As another example, let us assume that the weight of “deny all” is increased while the other weights stay the same, except that they are scaled to make the sum of all weights equal to 1. The new weights are denoted with a tilde and are shown in Table 4. The last column of Table 4 reports the difference between the cost of the original rule list and the cost of the list with the extracted rule inserted at rank k, the weight of the extracted rule being the one reported in the first line of Table 3. The most convenient rank for the extracted rule is easily found as the intersection point between the straight lines corresponding to the first and fifth rules, and the cost of the resulting list is lower than the cost of the list with no extracted rule.
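The rank trade-off for an extracted rule can be explored numerically; the weights below are illustrative, not those of Table 4, and DPT constraints are ignored for brevity.

```python
def cost_with_insertion(weights, w_new, k):
    """Cost of a rule list after inserting a rule of weight w_new at rank k
    (1-based); rules at ranks >= k shift down by one position."""
    new = weights[:k - 1] + [w_new] + weights[k - 1:]
    return sum(r * w for r, w in enumerate(new, start=1))

# Illustrative weights: a heavy "deny all" flow captured by the new rule.
weights = [0.20, 0.15, 0.10, 0.05]   # existing rules
w_new = 0.50                          # fraction matched by the extracted rule
costs = {k: cost_with_insertion(weights, w_new, k) for k in range(1, 6)}
best_rank = min(costs, key=costs.get)
print(best_rank)   # → 1: a rule heavier than all others belongs at the top
```

When the extracted rule carries only a small fraction of the traffic, the minimum moves toward the bottom of the list, which is exactly why extraction pays off only under a sizable “deny all” load.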

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.