PureLB consists of two components that work together with the Kubernetes API server:

  • Allocator. A single-replica Deployment that watches for LoadBalancer Services and allocates IP addresses from configured pools.

  • LBNodeAgent. A DaemonSet running on every node that configures Linux networking to announce allocated addresses. When BGP is enabled, each LBNodeAgent pod includes a k8gobgp sidecar that advertises routes to upstream routers.

Kubernetes kube-proxy is not part of PureLB but plays an important role: once PureLB attracts external traffic to a node, kube-proxy’s nftables rules forward it to the correct pods within the cluster.

Think of PureLB as attracting packets to allocated addresses, with kube-proxy forwarding those packets within the cluster via the pod network.
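Concretely, the only object a user creates is an ordinary Service of type LoadBalancer; PureLB and kube-proxy react to it. A minimal manifest might look like this (the name, selector, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web               # illustrative name
spec:
  type: LoadBalancer      # this is what the Allocator watches for
  selector:
    app: web              # assumed pod label
  ports:
  - port: 80              # external port on the allocated address
    targetPort: 8080      # assumed container port
```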

Data Flow

```mermaid
graph LR
    User["User creates<br/>LoadBalancer Service"] --> API["Kubernetes<br/>API Server"]
    API --> Alloc["Allocator<br/>(watches Services)"]
    Alloc -->|"Allocates IP<br/>from ServiceGroup"| API
    API --> LBN["LBNodeAgent<br/>(watches Services)"]
    LBN -->|"Configures<br/>Linux networking"| Node["Node interfaces<br/>(eth0 / kube-lb0)"]
    API --> KP["kube-proxy<br/>(watches Services)"]
    KP -->|"Configures<br/>nftables rules"| NFT["Packet forwarding<br/>to pods"]
```

Step by Step

  1. A user creates a Service with type: LoadBalancer.
  2. The Allocator sees the new Service, selects a ServiceGroup pool, and allocates an IP address. It writes the address to the Service’s .status.loadBalancer.ingress and sets the purelb.io/allocated-by annotation.
  3. The LBNodeAgents see the updated Service. For local addresses, the election system picks a single winner node; for remote addresses, all nodes participate.
  4. The winning node(s) configure Linux networking via netlink: the address is added to the physical interface (local addresses) or to the kube-lb0 dummy interface (remote addresses), and a Gratuitous ARP is optionally sent so neighbors learn the new address.
  5. kube-proxy independently sees the Service and configures nftables rules to forward traffic arriving at the LoadBalancer address to the correct backend pods.

Custom Resource Definitions

PureLB uses CRDs for all configuration:

| CRD | API Group | Purpose |
|---|---|---|
| ServiceGroup | purelb.io/v2 | Defines IP address pools (local, remote, or Netbox) |
| LBNodeAgent | purelb.io/v2 | Configures node agent behavior (interfaces, GARP, address lifetime) |
| BGPConfiguration | bgp.purelb.io/v1 | Configures k8gobgp BGP peering (when BGP is enabled) |
| BGPNodeStatus | bgp.purelb.io/v1 | Per-node BGP status (written by k8gobgp, read-only) |

Namespace

All PureLB components run in the purelb-system namespace. ServiceGroups and LBNodeAgents are namespaced resources; we recommend placing them in purelb-system for simplicity, but they can be created in other namespaces if your RBAC model requires it.

Security

| Component | Runs As | Capabilities | Network |
|---|---|---|---|
| Allocator | Non-root (UID 65534), read-only filesystem | None | Cluster-internal only |
| LBNodeAgent | Root (required for netlink) | NET_ADMIN, NET_RAW | Host network |
| k8gobgp | Root (required for BGP port 179) | NET_ADMIN, NET_BIND_SERVICE, NET_RAW | Host network |
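The LBNodeAgent row above maps onto a pod spec along these lines; this is a sketch of the requirements, not the manifest PureLB ships:

```yaml
spec:
  hostNetwork: true                 # host network, per the table above
  containers:
  - name: lbnodeagent
    securityContext:
      runAsUser: 0                  # root, required for netlink operations
      capabilities:
        add: ["NET_ADMIN", "NET_RAW"]
```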