## Service Stuck in Pending

The Service has `type: LoadBalancer` but no external IP.

Quick diagnosis:

```
kubectl purelb inspect <namespace>/<service-name>
```

Common causes:

- **No ServiceGroup exists.** Check `kubectl get servicegroups -n purelb-system`. PureLB needs at least one ServiceGroup.
- **Pool exhausted.** Check `kubectl purelb pools`. If a pool shows 0 free addresses, allocate a larger range in the ServiceGroup.
- **Allocator not running.** Check `kubectl get pods -n purelb-system`. The allocator pod must be `Running`.
- **Wrong service-group annotation.** The `purelb.io/service-group` annotation must match an existing ServiceGroup name.
- **Requested IP outside pool.** If using `purelb.io/addresses`, verify the address is within the ServiceGroup's pool range.
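For reference, a minimal local-pool ServiceGroup looks roughly like this. The addresses are examples, and the field layout follows PureLB's `purelb.io/v1` ServiceGroup schema as commonly documented, so verify it against the CRD installed in your cluster:

```yaml
apiVersion: purelb.io/v1
kind: ServiceGroup
metadata:
  name: default              # referenced by the purelb.io/service-group annotation
  namespace: purelb-system
spec:
  local:
    v4pool:
      subnet: 192.168.1.0/24               # example subnet reachable from the nodes
      pool: 192.168.1.100-192.168.1.200    # allocatable range; widen this if the pool is exhausted
      aggregation: default
```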
## Address Not Reachable (Local Pool)

The Service has an external IP but clients cannot reach it.

Check which node is announcing:

```
kubectl describe svc <name> | grep announcing
```

Verify the address is on the node's interface:

```
# SSH to the announcing node, or use the plugin:
kubectl purelb ip addr show <interface>
```

Check GARP:

```
# If GARP is enabled, verify packets are being sent:
kubectl purelb inspect <namespace>/<service>
```

Common causes:

- **Switch ARP cache stale.** After failover, the switch may still send traffic to the old node. GARP should update this, but some switches are slow. Check the `purelb_lbnodeagent_garp_sent_total` and `purelb_lbnodeagent_garp_errors_total` metrics.
- **ARP settings not configured.** Without `arp_ignore=1` and `arp_announce=2`, other nodes may respond to ARP for the VIP. See Prerequisites.
- **No pods running.** Local pools use `externalTrafficPolicy: Cluster`, so all nodes forward traffic. If traffic still doesn't reach pods, check that kube-proxy is healthy.
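The ARP settings above can be made persistent with a sysctl drop-in; a minimal example (the file path is illustrative, the sysctl keys are standard Linux):

```
# /etc/sysctl.d/90-purelb-arp.conf  (example path; apply with `sysctl --system`)
# Reply to ARP only for addresses configured on the receiving interface:
net.ipv4.conf.all.arp_ignore = 1
# Use the outgoing interface's own address as the ARP source:
net.ipv4.conf.all.arp_announce = 2
```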
## Address Not Reachable (Remote Pool)

The Service has an external IP on `kube-lb0` but clients cannot reach it.

Check the address is on `kube-lb0`:

```
kubectl purelb ip addr show kube-lb0
```

Check BGP sessions:

```
kubectl purelb bgp sessions --check
```

Check the route pipeline:

```
kubectl purelb bgp dataplane --check
```

Common causes:

- **BGP session not established.** Verify port 179 is open, ASN values are correct, and the upstream router is configured for peering.
- **`netlinkImport` not configured.** Without `netlinkImport.enabled: true` and `interfaceList: ["kube-lb0"]` in the BGPConfiguration, k8gobgp won't advertise any routes.
- **Upstream router not accepting routes.** Some routers reject `/32` routes by default. Check the router's BGP RIB.
- **ECMP not enabled.** The upstream router must have ECMP enabled to use multiple next-hops.
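Putting the fields mentioned in this guide together, a BGPConfiguration might look like the following sketch. Only `peerAsn`, `netlinkImport.enabled`, `interfaceList`, the `${NODE_IP}` router-ID substitution, and the `default` name come from this guide; the other field names (`localAsn`, `peerAddress`, `routerId`) are illustrative placeholders, so check them against your CRD schema:

```yaml
apiVersion: purelb.io/v1
kind: BGPConfiguration
metadata:
  name: default               # k8gobgp looks for a CR named "default"
  namespace: purelb-system
spec:
  localAsn: 64512             # illustrative field name
  peerAsn: 64513              # must match the upstream router's configured ASN
  peerAddress: 10.0.0.1       # illustrative field name
  routerId: ${NODE_IP}        # avoids router-ID conflicts across nodes
  netlinkImport:
    enabled: true             # advertise routes learned from netlink
    interfaceList: ["kube-lb0"]
```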
## Election Issues

Check election health:

```
kubectl purelb election --check
```

Check Leases directly:

```
kubectl get leases -n purelb-system -l app=purelb
```

Common causes:

- **Node not participating.** If a node's Lease is missing or expired, check the lbnodeagent pod logs on that node.
- **Subnet not covered.** The address's subnet must be present in at least one node's Lease annotations. Add the interface to the LBNodeAgent's `interfaces` field if needed.
- **Frequent winner changes.** Check `purelb_election_winner_changes_total`. Frequent changes may indicate Lease renewal failures (check `purelb_election_lease_renewal_failures_total`).
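Conceptually, hash-based election lets every participating node compute the same winner for an address without coordinating beyond the Lease list. The following is a toy sketch of that idea only, not PureLB's actual election code:

```python
import hashlib

def elect_winner(nodes, address):
    """Deterministically pick the node that announces an address by
    hashing node+address and taking the smallest digest. Every node
    running this over the same inputs converges on the same winner.
    Conceptual sketch only, not PureLB's actual implementation."""
    return min(nodes,
               key=lambda node: hashlib.sha256(f"{node}/{address}".encode()).hexdigest())

nodes = ["node-a", "node-b", "node-c"]
winner = elect_winner(nodes, "192.168.1.100")
# If the winner's Lease expires, the survivors converge on the same successor:
failover = elect_winner([n for n in nodes if n != winner], "192.168.1.100")
print(winner, "->", failover)
```

Because the result depends only on the node list and the address, a node joining or leaving moves only the addresses whose hash order it affects.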
## BGP Sessions Not Establishing

```
kubectl purelb bgp sessions --check
```

Common causes:

- **Port 179 blocked.** Ensure the firewall allows TCP 179 between the nodes and the upstream router.
- **ASN mismatch.** The `peerAsn` in the BGPConfiguration must match the upstream router's configured ASN.
- **Router ID conflict.** If multiple nodes auto-detect the same router ID, set it explicitly or use `${NODE_IP}`.
- **No BGPConfiguration CR.** k8gobgp needs a `BGPConfiguration` CR named `default` in `purelb-system`.
## Configuration Validation

Run a comprehensive configuration check:

```
kubectl purelb validate --strict
```

This checks for:

- Overlapping address pools across ServiceGroups
- Subnets with no nodes (unreachable pools)
- Missing BGP configuration for remote pools
- LBNodeAgent configuration consistency
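The first of these checks is easy to reproduce by hand. A sketch using the standard `ipaddress` module (the pool CIDRs are made up; this illustrates the kind of check the validator performs, not PureLB's actual code):

```python
import ipaddress
from itertools import combinations

def find_overlaps(pools):
    """pools maps ServiceGroup name -> subnet CIDR string.
    Returns pairs of groups whose subnets overlap."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in pools.items()}
    return [(a, b) for (a, na), (b, nb) in combinations(nets.items(), 2)
            if na.overlaps(nb)]

print(find_overlaps({
    "default": "192.168.1.0/24",
    "staging": "192.168.1.128/25",  # sits inside default's subnet
    "remote":  "10.0.100.0/24",
}))  # -> [('default', 'staging')]
```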
## Linux Networking Tools

PureLB uses standard Linux networking, so you can observe its work with standard tools:

```
# Show addresses on all interfaces
ip addr show

# Show addresses on the dummy interface
ip addr show dev kube-lb0

# Show the routing table
ip route show

# Show the neighbor (ARP/NDP) table
ip neigh show
```

To run these inside an lbnodeagent pod:

```
kubectl purelb ip addr show
kubectl purelb ip route show
```

## Logging
PureLB uses two log levels:
- Info – Normal operational messages (address allocation, election changes, BGP state transitions).
- Debug – Code-level troubleshooting detail (netlink calls, election hash values, GARP packet traces).
To check logs:

```
kubectl logs -n purelb-system deployment/allocator
kubectl logs -n purelb-system daemonset/lbnodeagent -c lbnodeagent
kubectl logs -n purelb-system daemonset/lbnodeagent -c k8gobgp
```