Routing to Kubernetes services
Address cluster services through a tunnel and the Apoxy edge.
This guide connects a Kubernetes cluster to the Apoxy edge using a tunnel and routes external traffic to services running inside the cluster. This is useful when your services aren't directly reachable from the internet — the tunnel provides the connectivity, and the Gateway API controls what gets exposed.
Prerequisites
- The Apoxy CLI installed and authenticated.
- A Kubernetes cluster with the Apoxy controller installed (`apoxy k8s install`).
- A service running in your cluster that you want to expose.
How it fits together
The data path looks like this:
- External request arrives at your Apoxy domain.
- The default Gateway's HTTPRoute forwards to a Backend.
- The Backend's FQDN encodes both the Kubernetes service address and the tunnel to reach it through. Apoxy's backplane resolves `my-svc.default.svc.cluster.local.my-cluster.tun.apoxy.net` by forwarding `my-svc.default.svc.cluster.local` to your cluster's DNS via the tunnel.
- Your cluster's kube-dns resolves the service, and traffic flows to the pod.
The tun.apoxy.net FQDNs are internal to Apoxy's routing layer — they aren't reachable from the public internet. They're routing addresses that the backplane uses to reach services through the tunnel's overlay network.
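The FQDN composition described above can be sketched as plain string manipulation. This is an illustrative model of how the address encodes both pieces of routing information, not Apoxy's actual implementation:

```python
# Illustrative sketch: a tunnel FQDN is the Kubernetes service DNS name,
# followed by the tunnel name, followed by the Apoxy tunnel suffix.

TUNNEL_SUFFIX = ".tun.apoxy.net"

def tunnel_fqdn(service_dns: str, tunnel: str) -> str:
    """Compose a tunnel FQDN from a service DNS name and a tunnel name."""
    return f"{service_dns}.{tunnel}{TUNNEL_SUFFIX}"

def split_tunnel_fqdn(fqdn: str) -> tuple[str, str]:
    """Recover (service_dns, tunnel) from a tunnel FQDN."""
    if not fqdn.endswith(TUNNEL_SUFFIX):
        raise ValueError(f"not a tunnel FQDN: {fqdn}")
    prefix = fqdn[: -len(TUNNEL_SUFFIX)]
    # The tunnel name is the last label before the suffix; everything
    # before it is the in-cluster service DNS name.
    service_dns, _, tunnel = prefix.rpartition(".")
    return service_dns, tunnel

fqdn = tunnel_fqdn("my-svc.default.svc.cluster.local", "my-cluster")
# -> my-svc.default.svc.cluster.local.my-cluster.tun.apoxy.net
```

The backplane effectively performs the reverse split: it peels off the tunnel name to pick the overlay path, then forwards the remaining service DNS name to your cluster's resolver.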
The Apoxy controller can mirror your cluster's Gateway and route resources to the Apoxy control plane, so you can manage routing with kubectl in your cluster.
Install the controller
If you haven't already, install the Apoxy controller in your cluster:
```shell
apoxy k8s install --context my-cluster --mirror gateway
```

This installs the controller that watches Gateway API resources in your cluster and mirrors them to Apoxy.
Create a tunnel from your cluster
The Apoxy controller automatically creates a TunnelNode for your cluster. Verify it exists:
```shell
apoxy tunnel list
```

You should see a TunnelNode corresponding to your cluster. Note its name — you'll use it in the Backend FQDN.
Create the Backend
Each Kubernetes service you want to expose gets its own Backend. The Backend's endpoint FQDN combines the service's Kubernetes DNS name with the tunnel name:
```yaml
apiVersion: core.apoxy.dev/v1alpha2
kind: Backend
metadata:
  name: my-svc-backend
spec:
  endpoints:
  - fqdn: my-svc.default.svc.cluster.local.my-cluster.tun.apoxy.net
```

Replace `my-svc.default.svc.cluster.local` with the full Kubernetes DNS name of your service, and `my-cluster` with your tunnel name from the previous step.
The mirror controller mirrors Gateway and HTTPRoute resources but not Backends. Always create Backends directly in Apoxy:
```shell
apoxy apply -f backend.yaml
```

Define the route
Create an HTTPRoute that forwards to your Backend through the default Gateway:
```yaml
apiVersion: gateway.apoxy.dev/v1
kind: HTTPRoute
metadata:
  name: my-service-route
spec:
  parentRefs:
  - name: default
  hostnames:
  - myapp.your-org.apoxy.app
  rules:
  - backendRefs:
    - kind: Backend
      name: my-svc-backend
      port: 8080
```

The `port` in the backendRef is required: it specifies which port Envoy connects to on the resolved address and is not inferred from Kubernetes DNS. Use the port your service actually listens on.
If you're using the Apoxy controller with --mirror gateway, you can apply the HTTPRoute in your cluster with kubectl and it will be mirrored to Apoxy:
```shell
kubectl apply -f route.yaml
```

Or apply everything directly to Apoxy:
```shell
apoxy apply -f route.yaml
```

Verify
Check the default Gateway:
```shell
apoxy gateway get default
```

Then test:
```shell
curl http://myapp.your-org.apoxy.app/
```

Traffic flows from the Apoxy edge, through the Backend and tunnel, into your cluster's DNS, and to your Kubernetes service.
Multiple services
Each service you want to expose gets its own Backend with a distinct FQDN. The Backends can all route through the same tunnel — only the Kubernetes service address changes:
```yaml
apiVersion: core.apoxy.dev/v1alpha2
kind: Backend
metadata:
  name: api-backend
spec:
  endpoints:
  - fqdn: api-svc.production.svc.cluster.local.my-cluster.tun.apoxy.net
---
apiVersion: core.apoxy.dev/v1alpha2
kind: Backend
metadata:
  name: dashboard-backend
spec:
  endpoints:
  - fqdn: dashboard.production.svc.cluster.local.my-cluster.tun.apoxy.net
```

Then create separate HTTPRoutes for each, all referencing the default Gateway:
```yaml
apiVersion: gateway.apoxy.dev/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
  - name: default
  hostnames:
  - api.your-org.apoxy.app
  rules:
  - backendRefs:
    - kind: Backend
      name: api-backend
      port: 8080
---
apiVersion: gateway.apoxy.dev/v1
kind: HTTPRoute
metadata:
  name: dashboard-route
spec:
  parentRefs:
  - name: default
  hostnames:
  - dashboard.your-org.apoxy.app
  rules:
  - backendRefs:
    - kind: Backend
      name: dashboard-backend
      port: 3000
```

The tunnel handles the connectivity. The Backends encode which service to reach. The routes control what's exposed and where.