Stephen Reese

This post demonstrates how to mirror interfaces on a virtual private server (VPS) in a cloud environment, e.g. a virtual machine (VM) on a hypervisor where you do not have access to the network or virtualization infrastructure and therefore cannot use a network TAP or SPAN port. The technique forwards packets to a collection point for aggregation and/or analysis. One scenario is monitoring network traffic for security threats with a central security stack running tools such as Snort, Suricata and/or Bro IDS. Example cloud providers are Linode, Digital Ocean and AWS.

While a single network interface will work and is used in our examples, the client node being monitored should ideally have two network interfaces: one for production traffic and a second for sending mirrored traffic to your collection node, e.g. your cloud-based security stack or wherever you want to store the packet captures. This is for performance reasons, as mirroring essentially doubles the traffic on a single interface. You also need to be cognizant of how much data you send to the aggregation point (collection node), as it can become saturated if traffic from too many client nodes exceeds the collection node's interface capacity. For example, sending traffic from 20 client nodes with 1 Gb/s interfaces to one capture node with a 10 Gb/s interface will obviously drop packets, depending on how much traffic is being forwarded from the clients. Note that many providers do offer greater bandwidth internally, e.g. 1 Gb/s public interfaces but 10+ Gb/s internally. Another mitigation is to shape the traffic on the client nodes using tc or something similar in order to minimize this. You must also consider either encrypting the tunnel, for example with IPsec, or using a trusted transport network. We do not address the security or performance implications in this post, only the implementation.
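
As a rough illustration of the shaping idea, here is a minimal sketch assuming the client's tunnel interface is named vxlan42, as in the later examples, and that roughly 200 Mbit/s is an acceptable ceiling for mirrored traffic; adjust the rate, burst and interface for your environment.

# Token bucket filter on the client tunnel interface (illustrative values)
tc qdisc add dev vxlan42 root tbf rate 200mbit burst 256kbit latency 400ms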

We will provide three examples using IPTables and two using tc (Traffic Control), over both VXLAN and GRE tunnels. The examples are performed on Ubuntu 16.04 hosts in AWS. From my experiments, I found VXLAN (example four) to be quite useful in that I did not have to specify remote endpoints on the collection node. This allows multiple clients to forward traffic over multiple tunnels to one collection node interface, which makes capture and analysis easy. GRE tunnels are point-to-point, which makes capture and aggregation difficult for many client nodes since each tunnel requires its own interface. If you are aware of a workaround for this, please let me know.

The first example is the easiest to configure but has a caveat: because of how IPTables mirrors the packets, MAC addresses will appear to come from the client tunnel interface rather than the actual source interface. This may be fine for one-off usage, but in a large deployment you will likely want the hardware address of the interface the traffic is actually traversing for analysis and traceability, rather than having to track which virtual interface is associated with which client node.

Create the VXLAN tunnel on the collection node. VXLAN is used in this example, but we will provide a second IPTables example where GRE is used

ip link add name vxlan42 type vxlan id 42 dev eth0 local 172.31.108.76 dstport 4789
ip address add 172.20.100.10/24 dev vxlan42
ip link set up vxlan42

Create VXLAN tunnel on client to collection node

ip link add name vxlan42 type vxlan id 42 dev eth0 local 172.31.102.153 remote 172.31.108.76 dstport 4789
ip address add 172.20.100.1/24 dev vxlan42
ip link set up vxlan42

Use IPTables on client node to forward traffic over tunnel to the collection node

iptables -I PREROUTING -t mangle -j TEE --gateway 172.20.100.10
iptables -I POSTROUTING -t mangle -j TEE --gateway 172.20.100.10
iptables -A POSTROUTING -t mangle -p tcp --tcp-flags SYN,RST SYN -o vxlan42 -j TCPMSS --clamp-mss-to-pmtu

On the collection node you will now see all of the traffic traversing eth0 on the client node using a tool such as tcpdump, e.g. tcpdump -i vxlan42 -en. You can also filter with IPTables on the client node to reduce the traffic sent to the collection node, e.g. only send the traffic you care about storing or analyzing.
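
As a minimal sketch of that filtering, assuming you only care about HTTP on TCP port 80 (the port is purely illustrative), the catch-all TEE rules could be narrowed to something like:

# Mirror only traffic to or from TCP port 80 instead of everything
iptables -I PREROUTING -t mangle -p tcp --dport 80 -j TEE --gateway 172.20.100.10
iptables -I POSTROUTING -t mangle -p tcp --sport 80 -j TEE --gateway 172.20.100.10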

The second example uses a gretap GRE tunnel, but we have to establish a point-to-point link, which requires multiple interfaces on the collection node if we want to support multiple client nodes. As you can imagine, if you had ten client nodes you were trying to capture from, you would need to listen on ten interfaces, which is not a great solution for security monitoring. gretap allows us to maintain the MAC header over a GRE tunnel, but in this example we are still using IPTables to forward traffic over the tunnel, so the MAC header is still associated with the tunnel rather than the actual interface, as discussed in the first example.

Create GRE tunnel on collection node

ip link add tun0 type gretap local 172.31.108.76 remote 172.31.102.153
ip link set tun0 up
ip addr add 172.20.100.10/24 dev tun0

Create tunnel on client to collection node

ip link add tun0 type gretap local 172.31.102.153 remote 172.31.108.76
ip link set tun0 up
ip addr add 172.20.100.2/24 dev tun0

Use IPTables on client node to forward traffic over tunnel to the collection node

iptables -I PREROUTING -t mangle -j TEE --gateway 172.20.100.10
iptables -I POSTROUTING -t mangle -j TEE --gateway 172.20.100.10
iptables -A POSTROUTING -t mangle -p tcp --tcp-flags SYN,RST SYN -o tun0 -j TCPMSS --clamp-mss-to-pmtu
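
As in the first example, you should now see the mirrored traffic on the collection node, this time on the gretap interface:

tcpdump -i tun0 -en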

The third example uses an ip tunnel GRE point-to-point link, which requires multiple interfaces on the collection node if we want to support multiple client nodes, just as in the gretap example above. I am including it because some folks may not care about the MAC header, and leaving it out may provide a small performance improvement since the overall packet size is reduced.

Create GRE tunnel on collection node

modprobe ip_gre
lsmod | grep ip_gre
ip tunnel add tun0 mode gre local 172.31.108.76 remote 172.31.102.153 ttl 255
ip link set tun0 up
ip addr add 172.20.100.10/24 dev tun0

Create tunnel on client to collection node

modprobe ip_gre
lsmod | grep ip_gre
ip tunnel add tun0 mode gre local 172.31.102.153 remote 172.31.108.76 ttl 255
ip link set tun0 up
ip addr add 172.20.100.2/24 dev tun0

Use IPTables on client node to forward traffic over tunnel to the collection node

iptables -I PREROUTING -t mangle -j TEE --gateway 172.20.100.10
iptables -I POSTROUTING -t mangle -j TEE --gateway 172.20.100.10
iptables -A POSTROUTING -t mangle -p tcp --tcp-flags SYN,RST SYN -o tun0 -j TCPMSS --clamp-mss-to-pmtu

The fourth example uses tc to capture and forward traffic. tc offers a very rich set of tools for managing and manipulating the transmission of packets, and we can forward the packets or flows of our choice over the tunnel to the analysis node. In researching how to set up remote sensors in cloud computing environments, I learned that tc will not readily forward egress traffic over a tunnel interface. The solution is to mirror the traffic we care about to the loopback adapter, then mirror the loopback ingress traffic to the tunnel, so we are able to see both the ingress and egress packets on our collection node. The use of tc also allows us to maintain the original MAC header, whereas IPTables did not. For this example we again start with VXLAN, which allows us to send multiple client tunnels to one interface on our collection node, a win for easily aggregating and analyzing traffic from multiple client nodes on one collection node.

Capture node

ip link add name vxlan42 type vxlan id 42 dev eth0 local 172.31.108.76 dstport 4789
ip address add 172.20.100.10/24 dev vxlan42
ip link set up vxlan42

Sending node

ip link add name vxlan42 type vxlan id 42 dev eth0 local 172.31.102.153 remote 172.31.108.76 dstport 4789
ip address add 172.20.100.2/24 dev vxlan42
ip link set up vxlan42

Send ingress traffic to tunnel

tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: \
    protocol all \
    u32 match u8 0 0 \
    action mirred egress mirror dev vxlan42

Since loops are easy to create in the egress qdiscs, we push egress traffic to loopback and then mirror it from loopback to the tunnel

tc qdisc add dev eth0 handle 1: root prio
tc filter add dev eth0 parent 1: \
    protocol all \
    u32 match u8 0 0 \
    action mirred egress mirror dev lo

Mirror all loopback traffic to the tunnel

tc qdisc add dev lo ingress
tc filter add dev lo parent ffff: \
    protocol all u32 \
    match u8 0 0 \
    action mirred egress mirror dev vxlan42

Then drop VXLAN traffic so we do not see it again on the collection node

tc filter add dev lo parent ffff: \
    protocol ip u32 \
    match ip dst 172.31.108.76/32 \
    match ip dport 4789 0xffff \
    action drop
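
On the collection node, the mirrored packets again arrive on the single VXLAN interface, and since tc performed the mirroring they should carry the client's original MAC addresses:

tcpdump -i vxlan42 -en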

The fifth and last example uses gretap along with tc. Because tc rather than IPTables forwards the traffic here, the original MAC header is preserved over the GRE tunnel; the caveat is that gretap is still a point-to-point link, so the collection node needs an interface per client node as in the second example.

Create GRE tunnel on collection node

ip link add tun0 type gretap local 172.31.108.76 remote 172.31.102.153
ip link set tun0 up
ip addr add 172.20.100.10/24 dev tun0

Create tunnel on client to collection node

ip link add tun0 type gretap local 172.31.102.153 remote 172.31.108.76
ip link set tun0 up
ip addr add 172.20.100.2/24 dev tun0

Send ingress traffic to tunnel

tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: \
    protocol all \
    u32 match u8 0 0 \
    action mirred egress mirror dev tun0

Since loops are easy to create in the egress qdiscs, we push egress traffic to loopback and then mirror it from loopback to the tunnel

tc qdisc add dev eth0 handle 1: root prio
tc filter add dev eth0 parent 1: \
    protocol all \
    u32 match u8 0 0 \
    action mirred egress mirror dev lo

Mirror all loopback traffic to the tunnel

tc qdisc add dev lo ingress
tc filter add dev lo parent ffff: \
    protocol all u32 \
    match u8 0 0 \
    action mirred egress mirror dev tun0

Then drop GRE traffic so we do not see it again on the collection node

tc filter add dev lo parent ffff: \
    protocol ip u32 \
    match ip dst 172.31.108.76/32 \
    match ip protocol 0x2f 0xff \
    action drop
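
If you need to back the mirroring out of a client node, a minimal cleanup sketch (assuming the interface names used above) is to remove the qdiscs and delete the tunnel:

# Remove the tc mirroring configuration and the tunnel on the client node
tc qdisc del dev eth0 ingress
tc qdisc del dev eth0 root
tc qdisc del dev lo ingress
ip link del tun0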

There you have it. Please leave a comment if you have any questions.

