In the container era, services don’t live on fixed servers with stable IP addresses. They live in pods, containers, and tasks that are created, destroyed, and rescheduled constantly. A service that existed at 10.0.5.20 five minutes ago might now be at 10.0.8.14 — or spread across twelve instances.
This ephemeral world needs a way to find services by name. Unsurprisingly, the answer is DNS — reimagined for a world where everything is in motion.
Docker’s Embedded DNS
Docker was one of the first container platforms to use DNS for service discovery. When you create a user-defined Docker network, Docker runs an embedded DNS server at 127.0.0.11 inside each container.
# Create a network and two containers
docker network create mynet
docker run -d --name web --network mynet nginx
docker run -it --network mynet alpine sh
# Inside the alpine container:
nslookup web
# Server: 127.0.0.11
# Name: web
# Address: 172.18.0.2
Containers on the same user-defined network can resolve each other by container name. Docker’s DNS server handles these queries internally and forwards everything else to the host’s configured resolvers.
Docker Compose Service Discovery
Docker Compose extends this pattern. Each service in a docker-compose.yml gets a DNS name matching its service key:
services:
  web:
    image: nginx
  api:
    image: myapp
    environment:
      DATABASE_URL: postgres://db:5432/myapp
  db:
    image: postgres
The api service can connect to the database using just db as the hostname. Docker’s DNS resolves db to the current IP of the database container. If the container is recreated (different IP), DNS automatically reflects the change.
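The connection string above is an ordinary URL whose host part is simply the Compose service key. A quick sketch with Python's standard library (using the exact URL from the example) makes that concrete:

```python
from urllib.parse import urlsplit

# The connection string the api service receives via its environment.
url = urlsplit("postgres://db:5432/myapp")

# The hostname is just the Compose service name; Docker's embedded
# DNS at 127.0.0.11 resolves it to the db container's current IP.
print(url.hostname)  # → db
print(url.port)      # → 5432
print(url.path)      # → /myapp
```

Nothing in the application needs to know the container's IP — the name `db` is stable even when the container behind it is recreated.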
This works for basic deployments, but Docker’s DNS lacks the sophistication needed for production orchestration. Enter Kubernetes.
Kubernetes DNS: The Foundation of Service Discovery
DNS is fundamental to Kubernetes — it’s how pods find services, and how services find each other. The Kubernetes DNS specification is defined in the official documentation and implemented by CoreDNS, which has been the default cluster DNS since Kubernetes 1.13.
How Kubernetes DNS Works
Every Kubernetes Service gets a DNS name following a predictable pattern:
<service-name>.<namespace>.svc.cluster.local
For example, a service called backend in the production namespace gets:
backend.production.svc.cluster.local
When a pod queries this name, CoreDNS returns the service’s ClusterIP — a virtual IP that kube-proxy uses to load balance across the service’s pods.
Pod → DNS query: backend.production.svc.cluster.local
CoreDNS → Response: 10.96.45.12 (ClusterIP)
kube-proxy → Load balances to one of the backend pods
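The naming pattern is mechanical enough to express as a tiny helper. This is a sketch for intuition (the function name is our own, not a Kubernetes API), and it assumes the common default cluster domain of cluster.local, which is configurable per cluster:

```python
def service_fqdn(service: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    """Build the DNS name Kubernetes assigns to a Service,
    following the <service>.<namespace>.svc.<cluster-domain> pattern."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("backend", "production"))
# → backend.production.svc.cluster.local
```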
DNS Records Kubernetes Creates
Kubernetes automatically creates several types of DNS records:
A/AAAA records for Services:
# ClusterIP service
my-service.my-ns.svc.cluster.local → 10.96.0.15
# Headless service (ClusterIP: None) — returns pod IPs directly
my-headless.my-ns.svc.cluster.local → 10.244.1.5, 10.244.2.8, 10.244.3.2
SRV records for named ports:
_http._tcp.my-service.my-ns.svc.cluster.local → SRV 0 33 80 my-service.my-ns.svc.cluster.local
A records for individual pods (headless services):
10-244-1-5.my-headless.my-ns.svc.cluster.local → 10.244.1.5
A records for StatefulSet pods:
web-0.my-headless.my-ns.svc.cluster.local → 10.244.1.5
web-1.my-headless.my-ns.svc.cluster.local → 10.244.2.8
StatefulSet DNS is particularly important. Unlike Deployments where pods are interchangeable, StatefulSet pods have stable identities. web-0 is always web-0, even if rescheduled to a different node with a different IP. DNS provides the stable name that maps to the current IP.
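Both record shapes above are derived mechanically from the pod's identity. A sketch of that derivation (helper names are our own, and the cluster.local domain is assumed):

```python
def pod_ip_record(ip: str, service: str, namespace: str) -> str:
    """Headless-service A record name for a pod: dots in the
    pod IP become dashes (10.244.1.5 → 10-244-1-5)."""
    return f"{ip.replace('.', '-')}.{service}.{namespace}.svc.cluster.local"

def statefulset_pod_dns(name: str, ordinal: int,
                        service: str, namespace: str) -> str:
    """Stable DNS name for a StatefulSet pod: <name>-<ordinal>
    prefixed to the headless service's domain."""
    return f"{name}-{ordinal}.{service}.{namespace}.svc.cluster.local"

print(pod_ip_record("10.244.1.5", "my-headless", "my-ns"))
# → 10-244-1-5.my-headless.my-ns.svc.cluster.local
print(statefulset_pod_dns("web", 0, "my-headless", "my-ns"))
# → web-0.my-headless.my-ns.svc.cluster.local
```

Note the asymmetry: the IP-based name changes when the pod is rescheduled, while the StatefulSet name stays fixed — which is exactly why stateful workloads address peers by ordinal name.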
DNS Search Domains
Kubernetes configures each pod’s /etc/resolv.conf with search domains that enable short-name resolution:
nameserver 10.96.0.10
search my-ns.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
This means a pod in the my-ns namespace can reach services using progressively shorter names:
backend.production.svc.cluster.local ← fully qualified
backend.production.svc ← omit cluster domain
backend.production ← omit svc
backend ← same namespace only
The ndots:5 option is crucial. It means any name with fewer than 5 dots is treated as a relative name: the resolver appends each search domain in turn before trying the name as-is. This is why backend resolves — it has 0 dots (fewer than 5), so the resolver appends my-ns.svc.cluster.local and queries backend.my-ns.svc.cluster.local.
The downside: external names like api.github.com (2 dots, less than 5) also get search domains appended first, resulting in wasted queries. Some teams tune ndots down to 2 or 3 to reduce unnecessary DNS traffic.
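This behavior can be simulated. Given an ndots value and the search list from the resolv.conf above, the following sketch lists the candidate names a pod's resolver tries, in order (a simplification of glibc's actual logic, written for illustration):

```python
def candidate_queries(name, search, ndots=5):
    """Return the order in which a glibc-style resolver tries names.

    Names ending in '.' are absolute and tried as-is. Otherwise, if
    the name has fewer than `ndots` dots, each search domain is
    appended first and the bare name is tried last; with >= ndots
    dots, the bare name is tried first.
    """
    if name.endswith("."):
        return [name]
    expanded = [f"{name}.{domain}" for domain in search]
    if name.count(".") >= ndots:
        return [name] + expanded
    return expanded + [name]

search = ["my-ns.svc.cluster.local", "svc.cluster.local", "cluster.local"]

# Short cluster name: a search domain is tried first, so "backend"
# resolves on the very first query.
print(candidate_queries("backend", search))

# External name with only 2 dots: three wasted cluster queries
# happen before the real lookup.
print(candidate_queries("api.github.com", search))
```

Running this shows api.github.com generating four queries instead of one — the overhead that motivates tuning ndots down, or using fully qualified names with a trailing dot (api.github.com.) to skip the search list entirely.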
CoreDNS: The Engine Behind Kubernetes DNS
CoreDNS is a flexible, plugin-based DNS server written in Go. It replaced kube-dns as the default Kubernetes DNS in version 1.13 and has become the standard DNS server for cloud-native environments.
Architecture
CoreDNS is configured via a Corefile — a simple configuration file that chains plugins:
.:53 {
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf {
        max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}
Each plugin in the chain handles a specific function:
- kubernetes — watches the Kubernetes API for Services and Pods, responds to cluster DNS queries
- forward — proxies non-cluster queries to upstream resolvers
- cache — caches responses (reducing load on upstream servers and the Kubernetes API)
- prometheus — exposes metrics for monitoring
- health/ready — liveness and readiness probes
Why CoreDNS Replaced kube-dns
The older kube-dns was a combination of three containers (dnsmasq, kube-dns, sidecar) bolted together. CoreDNS replaced this with a single binary that’s:
- Simpler — one process instead of three
- More extensible — plugin architecture makes it easy to add features
- More maintainable — written in Go, actively maintained by the CNCF
- More secure — no dependency on dnsmasq (which had its own CVE history)
Pod DNS Policies
Kubernetes gives you control over how individual pods resolve DNS through dnsPolicy:
- ClusterFirst (default) — queries go to CoreDNS, which handles cluster names and forwards external queries upstream
- Default — the pod inherits the node’s DNS configuration (/etc/resolv.conf from the host)
- None — no DNS configuration is set; you must provide dnsConfig manually
- ClusterFirstWithHostNet — like ClusterFirst, but for pods using host networking
You can also customize DNS per-pod with dnsConfig:
apiVersion: v1
kind: Pod
metadata:
  name: custom-dns
spec:
  containers:
  - name: app
    image: myapp
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 10.96.0.10
    - 8.8.8.8
    searches:
    - my-ns.svc.cluster.local
    - svc.cluster.local
    options:
    - name: ndots
      value: "2"
This is useful for pods that need to resolve both cluster services and external names with specific resolver configurations.
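The dnsConfig above maps almost line-for-line onto the resolv.conf that kubelet writes into the pod. A rough sketch of that rendering, for intuition (simplified, and the function name is our own):

```python
def render_resolv_conf(nameservers, searches, options):
    """Render a dnsConfig-style structure into resolv.conf lines,
    roughly as kubelet does for a pod with dnsPolicy: None."""
    lines = [f"nameserver {ns}" for ns in nameservers]
    if searches:
        lines.append("search " + " ".join(searches))
    if options:
        opts = " ".join(
            f"{o['name']}:{o['value']}" if "value" in o else o["name"]
            for o in options
        )
        lines.append("options " + opts)
    return "\n".join(lines)

print(render_resolv_conf(
    nameservers=["10.96.0.10", "8.8.8.8"],
    searches=["my-ns.svc.cluster.local", "svc.cluster.local"],
    options=[{"name": "ndots", "value": "2"}],
))
```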
Service Mesh DNS: Consul and Istio
Service meshes add another layer of DNS-based service discovery on top of Kubernetes:
HashiCorp Consul
Consul provides its own DNS interface (typically on port 8600) where services register and are discoverable:
web.service.consul → Service "web" in the local datacenter
web.service.dc2.consul → Service "web" in datacenter "dc2"
Consul’s DNS is particularly powerful for multi-datacenter service discovery — something Kubernetes DNS doesn’t natively support. A service in one cluster can resolve services in another via Consul’s federated DNS.
Istio
Istio, one of the most widely deployed service meshes, relies on Kubernetes DNS for service discovery but enhances it at the proxy layer. Envoy sidecars intercept outbound traffic and apply routing rules, retries, and load balancing — effectively making DNS resolution smarter without changing DNS itself.
Istio also introduces ServiceEntry resources that allow mesh services to discover external services through DNS:
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.external-service.com
  resolution: DNS
Key Takeaways
- Docker provides embedded DNS (127.0.0.11) for container-to-container discovery on user-defined networks
- Kubernetes DNS follows the pattern <service>.<namespace>.svc.cluster.local with automatic record creation for Services and Pods
- CoreDNS is the default Kubernetes DNS server — plugin-based, extensible, and CNCF-maintained
- Search domains and ndots enable short-name resolution but can cause excess DNS queries for external names
- Headless services return pod IPs directly, enabling client-side load balancing and StatefulSet addressing
- Service meshes (Consul, Istio) extend DNS-based discovery across clusters and add intelligent routing at the proxy layer
- DNS in containers is the same protocol from 1983 — just adapted for an ephemeral, orchestrated world