Direct access to K3s containers on localhost

k3s can create a compact Kubernetes cluster for development.
With a few extra tweaks, you can access its containers directly from the host's plain network.

  • Static route to the service network via flannel
    • k3s uses the default subnet 10.43.0.0/16 for services, and 10.42.0.1 as the default flannel gateway on the host
  • DNS config including kube-dns
    • k3s runs kube-dns on 10.43.0.10 by default

You can refer to the k3s Network configuration documentation for details.
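
You can check the flannel side of these defaults on the host; for example, assuming a default install where the bridge is named cni0:

ip -4 addr show cni0

On a default single-node install, the address on the bridge should be 10.42.0.1.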

Static route to services

You can add an ephemeral static route like the following:

# sudo ip route add 10.43.0.0/16 via 10.42.0.1 [dev cni0]

dev cni0 names the default virtual network device. The exact device depends on your configuration, but in many cases it is detected automatically, so this part can be omitted.
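
If you are unsure which device to use, or want to verify the route afterwards, the following commands can help (the output varies by environment):

ip route get 10.42.0.1
ip route show 10.43.0.0/16

ip route get reports the device that reaches the flannel gateway; ip route show should list the new route once it has been added.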

How to make the route persistent at startup depends on the distro. An example using NetworkManager is as follows:

# nmcli connection modify cni0 ipv4.routes "10.43.0.0/16 10.42.0.1"

This command creates /etc/NetworkManager/system-connections/cni0.nmconnection so the route is applied on later boots.
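
To verify the stored route in the connection profile (assuming the profile is named cni0 as above):

nmcli connection show cni0 | grep ipv4.routes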

You can confirm connectivity to the containers with telnet 10.43.0.10 53. Ctrl-d exits telnet.
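
If dig is installed, you can also query kube-dns directly; the built-in kubernetes service exists in every cluster:

dig @10.43.0.10 kubernetes.default.svc.cluster.local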

Failing route

Because k3s tends to start after the network setup process has finished, the cni0 device may not exist yet when the route is configured, so the static route can occasionally fail at boot.
Running ip route add after boot is a more stable solution.
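
One possible workaround is a small systemd oneshot unit that runs after k3s, saved for example as /etc/systemd/system/k3s-service-route.service. This is only a sketch; the unit name, the ip path, and the sleep delay are assumptions you may need to adjust:

[Unit]
Description=Static route to the k3s service network
After=k3s.service
Wants=k3s.service

[Service]
Type=oneshot
# crude delay: cni0 may appear slightly after k3s.service reports started
ExecStartPre=/bin/sleep 10
# "replace" instead of "add" keeps the command idempotent across re-runs
ExecStart=/usr/sbin/ip route replace 10.43.0.0/16 via 10.42.0.1 dev cni0

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now k3s-service-route.service.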

Adding kube-dns to resolv.conf

Service hostnames need to be resolved by kube-dns, which runs on 10.43.0.10.
In theory, you can set this up in /etc/resolv.conf like the following:

nameserver 10.43.0.10
nameserver <IP of default nameserver>
search default.svc.cluster.local svc.cluster.local cluster.local

You can confirm the actual kube-dns address with kubectl get svc -n kube-system.
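
For example, to print only the ClusterIP of kube-dns (this should print 10.43.0.10 on a default install):

kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'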

But the way you should actually set the DNS config depends on the distro.
You need to consult the networking documentation provided by each distribution.

systemd-resolved

Ubuntu uses NetworkManager for its networking layer, and some configurations use systemd-resolved as the DNS backend.
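
You can check whether systemd-resolved is actually in the resolving path on your machine, for example:

ls -l /etc/resolv.conf
resolvectl status

If /etc/resolv.conf points at /run/systemd/resolve/stub-resolv.conf, systemd-resolved is in use.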

systemd-resolved treats .local domains as mDNS targets, so it conflicts with the Kubernetes .cluster.local domain.

To bypass the systemd-resolved proxy, configure /etc/systemd/resolved.conf as follows:

[Resolve]
DNS=10.43.0.10 <IP of default nameserver>
Domains=default.svc.cluster.local svc.cluster.local cluster.local
DNSStubListener=no

DNSStubListener=no changes the symlink chain behind /etc/resolv.conf.
Now it points /etc/resolv.conf -> /run/systemd/resolve/stub-resolv.conf -> /run/systemd/resolve/resolv.conf.

To apply the changes immediately, run systemctl restart systemd-resolved.service as root.
It rewrites the configs under /run/systemd/resolve/ automatically.
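
After the restart, you can confirm that the rewritten file includes kube-dns, for example:

grep nameserver /run/systemd/resolve/resolv.conf
resolvectl dns

Both should list 10.43.0.10 among the servers.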

The Domains field is converted into the search field of resolv.conf, so you can connect to some-svc without its FQDN some-svc.default.svc.cluster.local.
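
As a quick check, both the short name and the FQDN should resolve to the same ClusterIP once the search list is in place (some-svc here is a placeholder for a Service in the default namespace):

getent hosts some-svc
getent hosts some-svc.default.svc.cluster.local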

Network security on each container

With the route and DNS in place, you can access containers directly from localhost.
But some software has its own network security layer.

For example, PostgreSQL needs an entry in pg_hba.conf allowing access from 192.168.x.x.
This accessing-host IP comes from the plain LAN settings.
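
A matching pg_hba.conf entry could look like the following; the subnet and the auth method are examples, so adjust them to your LAN and security policy:

host    all    all    192.168.0.0/16    scram-sha-256

PostgreSQL also needs listen_addresses in postgresql.conf to accept non-local connections, though official container images typically set this already.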

⁋ Nov 5, 2022 ↻ Nov 7, 2024
中馬崇尋
Chuma Takahiro