Force IPv6 NS source to be global instead of link

In summary: how can I force an interface to use its global IPv6 address, rather than its link-local address, as the source of neighbor solicitation (NS) messages?

Background: Many VPS providers do not route an IPv6 /64 to the VPS but merely assign an on-link block out of a /48. The gateway is a global IPv6 address inside the /48 but outside the /64. The gateway answers NS messages sent from a global source address, but not NS messages sent from a link-local source (nor RS messages from any source). This is fine as long as the global traffic originates on the VPS itself. However, when the traffic originates inside an LXC container on the [KVM] VPS, the link-local address is used as the source of the NS to the gateway and neighbor discovery fails.

Underlying problem (real-life example):

LXC origin -> lxcbr0 -> eth0 -> VPS provider gateway -> Internet

Starting with eth0, the basic KVM VPS setup looks like this:

# ip a show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:16:3c:a8:db:1b brd ff:ff:ff:ff:ff:ff
    inet6 2a01:8888:1:5555::4444/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3cff:fea8:db1b/64 scope link
       valid_lft forever preferred_lft forever
# ip -6 r
2a01:8888:1::1 dev eth0 metric 1024 pref medium
2a01:8888:1:5555::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via 2a01:8888:1::1 dev eth0 metric 1024 pref medium
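
For completeness, the state above boils down to roughly these commands (my reconstruction of what the provider image does; the key part is the on-link host route to the off-subnet gateway):

ip -6 addr add 2a01:8888:1:5555::4444/64 dev eth0
# host route that makes the off-subnet gateway reachable on-link
ip -6 route add 2a01:8888:1::1 dev eth0
ip -6 route add default via 2a01:8888:1::1 dev eth0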

This works in the following sense:

# ping ipv6.google.com -c 3
PING ipv6.google.com (2a00:1450:400e:810::200e): 56 data bytes
64 bytes from 2a00:1450:400e:810::200e: seq=0 ttl=117 time=36.298 ms
64 bytes from 2a00:1450:400e:810::200e: seq=1 ttl=117 time=33.580 ms
64 bytes from 2a00:1450:400e:810::200e: seq=2 ttl=117 time=33.112 ms

--- ipv6.google.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 33.112/34.330/36.298 ms
# ndisc6 -s 2a01:8888:1:5555::4444 2a01:8888:1::1 eth0
Soliciting 2a01:8888:1::1 (2a01:8888:1::1) on eth0...
Target link-layer address: 3C:61:04:A4:1F:7C
 from 2a01:8888:1::1
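
For contrast, sourcing the same solicitation from eth0's link-local address should show the gateway behaviour described above, i.e. it simply times out (sketch only, output not pasted here):

# same solicitation, but forced to use the link-local source address
ndisc6 -s fe80::216:3cff:fea8:db1b 2a01:8888:1::1 eth0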

Now LXC has the following lxcbr0 setup:

# ip a show dev lxcbr0
3: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:a7:17:58 brd ff:ff:ff:ff:ff:ff
    inet6 2a01:8888:1:5555:216::ffff/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fea7:1758/64 scope link
       valid_lft forever preferred_lft forever

and the interface inside the container (a veth pair attached to lxcbr0) looks like this:

# ip a show eth0
2: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:2f:82:a7 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 2a01:8888:1:5555::1238/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe2f:82a7/64 scope link
       valid_lft forever preferred_lft forever
# ip -6 r
2a01:8888:1:5555::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via fe80::216:3eff:fea7:1758 dev eth0 metric 1024 pref medium
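
For reference, the container's network section looks roughly like this (reconstructed with LXC 3.x key names; the addresses are the ones shown above):

# veth attached to lxcbr0, static global address, link-local gateway
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:2f:82:a7
lxc.net.0.ipv6.address = 2a01:8888:1:5555::1238/64
lxc.net.0.ipv6.gateway = fe80::216:3eff:fea7:1758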

However, this does not work:

# ping ipv6.google.com -c 3
PING 2a00:1450:400e:810::200e (2a00:1450:400e:810::200e): 56 data bytes

--- 2a00:1450:400e:810::200e ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss

Cause of the problem

If I run tcpdump on the host, I can see that when the traffic originates in the LXC container, the NS message to the gateway is sourced from eth0's link-local address:

# tcpdump ip6
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
08:41:55.229077 IP6 fe80::216:3cff:fea8:db1b > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2a01:8888:1::1, length 32
08:41:56.305550 IP6 fe80::216:3cff:fea8:db1b > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2a01:8888:1::1, length 32
08:41:57.345548 IP6 fe80::216:3cff:fea8:db1b > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2a01:8888:1::1, length 32
# ip -6 n show dev eth0
2a01:8888:1::1  router FAILED
...

In contrast, the same tcpdump when the ping is performed outside the LXC container:

# tcpdump ip6
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
10:06:33.926756 IP6 2a01:8888:1:5555::4444 > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2a01:8888:1::1, length 32
10:06:33.929886 IP6 2a01:8888:1::1 > 2a01:8888:1:5555::4444: ICMP6, neighbor advertisement, tgt is 2a01:8888:1::1, length 32
10:06:33.929917 IP6 2a01:8888:1:5555::4444 > ams17s12-in-x0e.1e100.net: ICMP6, echo request, id 52543, seq 0, length 64
10:06:33.962581 IP6 ams17s12-in-x0e.1e100.net > 2a01:8888:1:5555::4444: ICMP6, echo reply, id 52543, seq 0, length 64
10:06:34.926947 IP6 2a01:8888:1:5555::4444 > ams17s12-in-x0e.1e100.net: ICMP6, echo request, id 52543, seq 1, length 64
10:06:34.959682 IP6 ams17s12-in-x0e.1e100.net > 2a01:8888:1:5555::4444: ICMP6, echo reply, id 52543, seq 1, length 64
10:06:35.927166 IP6 2a01:8888:1:5555::4444 > ams17s12-in-x0e.1e100.net: ICMP6, echo request, id 52543, seq 2, length 64
10:06:35.959820 IP6 ams17s12-in-x0e.1e100.net > 2a01:8888:1:5555::4444: ICMP6, echo reply, id 52543, seq 2, length 64
# ip -6 n show dev eth0
2a01:8888:1::1 dev eth0 lladdr 3c:61:04:a4:1f:7c router REACHABLE
...
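
If I read RFC 4861 §7.2.2 correctly, the NS is sourced from the triggering packet's source address only when that address is assigned to the outgoing interface, and otherwise falls back to (here) the link-local address. Since the host is merely forwarding the container's packets, the triggering source is the container's address, which is not configured on eth0 (easy to check):

# the container's address is not on eth0, so it cannot be used as NS source ...
ip -6 addr show dev eth0 to 2a01:8888:1:5555::1238/128
# ... whereas the host's own global address is
ip -6 addr show dev eth0 to 2a01:8888:1:5555::4444/128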

The provider does not supply a link-local gateway address. I tried using lxcbr0's global address as the gateway inside the LXC container (see the sketch below), but it made no difference. This configuration seems to be prevalent among providers, so asking them to change or fix their architecture is unlikely to lead to a solution.
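
That attempt amounted to roughly the following inside the container (reconstructed command; only the address comes from the lxcbr0 output above):

# point the container's default route at lxcbr0's global address instead of its link-local one
ip -6 route replace default via 2a01:8888:1:5555:216::ffff dev eth0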

How can I solve the underlying problem of IPv6 connectivity from within the LXC container while still using veth? Suggestions very welcome.
