Patch Prometheus Operator `externalUrl`
I continue to learn “tricks” with Prometheus and (kube-prometheus ⊃ Prometheus Operator).
For too long, I’ve lived with Alertmanager alerts whose “View in Alertmanager” and “Source” hyperlinks don’t work because the URLs default to values that aren’t reachable from outside the cluster.
The solution, when running either Prometheus or Alertmanager (the flag is the same for both), is to use:
--web.external-url="${EXTERNAL_URL}"
Conveniently, when running kube-prometheus, there are Prometheus and Alertmanager CRDs for configuring each component, and both include the field:
externalUrl: "${EXTERNAL_URL}"
The question is: How to configure this value?
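Since the Operator reconciles the CRD into the pod’s --web.external-url flag, one answer is a patch. A minimal sketch, assuming the kube-prometheus defaults of an Alertmanager named main in the monitoring namespace, and a hypothetical external URL:
EXTERNAL_URL="https://alertmanager.example.com"

# Patch the Alertmanager CRD; the Operator propagates the value
# to the pod's --web.external-url flag
kubectl patch alertmanager/main \
  --namespace=monitoring \
  --type=merge \
  --patch="{\"spec\":{\"externalUrl\":\"${EXTERNAL_URL}\"}}"
The equivalent patch against the prometheus resource works the same way.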
Kubernetes override public DNS name
How can I rewrite some publicly resolvable foo.example.com to an in-cluster service?
kubectl run curl \
--stdin --tty --rm \
--image=radial/busyboxplus:curl
nslookup foo.example.com
As expected, it’s unable to resolve the name with nslookup or reach it with curl:
Server: 10.152.183.10
Address 1: 10.152.183.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'foo.example.com'
curl http://foo.example.com:8080/healthy
curl: (6) Couldn't resolve host 'foo.example.com'
Exit the curl pod so that DNS can be re-resolved when we retry.
If the cluster uses CoreDNS:
kubectl get deployment \
--selector=k8s-app=kube-dns \
--namespace=kube-system \
--output=name
deployment.apps/coredns
Let’s create an in-cluster service to act as the target:
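A minimal sketch of the shape of the solution, with hypothetical names (foo, nginx) and assuming the default namespace:
# Hypothetical in-cluster target: a Deployment exposed as Service "foo"
kubectl create deployment foo --image=nginx
kubectl expose deployment foo --port=8080 --target-port=80

# Then add a rewrite rule to the Corefile so that the public name
# resolves to the in-cluster Service, e.g.:
#   rewrite name foo.example.com foo.default.svc.cluster.local
kubectl edit configmap/coredns --namespace=kube-system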
List cluster's images
The Kubernetes documentation provides an example of this, but I prefer:
FILTER="{range .items[*].spec['initContainers', 'containers'][*]}{.image}{'\n'}{end}"
CONTEXT="..."
kubectl get pods \
--all-namespaces \
--output=jsonpath="${FILTER}" \
--context="${CONTEXT}" \
| sort | uniq -c
Ingress contains no valid backends
I’m using MicroK8s with the new observability addon, which uses Helm to install kube-prometheus.
This results in various resources, including several Services:
kubectl get services \
--namespace=observability
NAME                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
alertmanager-operated                      ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP
kube-prom-stack-kube-prome-alertmanager   ClusterIP   10.152.183.201   <none>        9093/TCP
kube-prom-stack-kube-prome-operator       ClusterIP   10.152.183.44    <none>        443/TCP
kube-prom-stack-kube-prome-prometheus     ClusterIP   10.152.183.206   <none>        9090/TCP
kube-prom-stack-kube-state-metrics        ClusterIP   10.152.183.126   <none>        8080/TCP
prometheus-operated                        ClusterIP   None             <none>        9090/TCP
I’m using the Tailscale Kubernetes Operator to expose MicroK8s services to my tailnet using Ingress.
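A sketch of such an Ingress, pointing the tailscale class at the Prometheus Service listed above (the tailnet hostname is an assumption):
kubectl apply --filename=- <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus
  namespace: observability
spec:
  ingressClassName: tailscale
  defaultBackend:
    service:
      name: kube-prom-stack-kube-prome-prometheus
      port:
        number: 9090
  tls:
  - hosts:
    # Tailnet machine name (an assumption)
    - prometheus
EOF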
Golang Kubernetes JSONPath
I’ve been spending some time learning Akri.
One proposal was to develop a webhook handler to check the YAML of Akri’s Configurations (CRDs). Configurations are used to describe Akri Brokers. They combine a Protocol reference (e.g. zeroconf) with a Kubernetes PodSpec (one or more containers), one of which references (using `.resources.limits.{{PLACEHOLDER}}`) the Akri device to be bound to the broker.
In order to validate the Configuration, one of Akri’s developers proposed using JSONPath as a way to ‘query’ Kubernetes configuration files. This is a clever suggestion.
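kubectl exposes the same JSONPath dialect, so the flavor of such a query can be sketched from the shell (the query is illustrative, not Akri’s actual validation):
# List each container alongside its resource limits, where an Akri
# placeholder (e.g. an akri.sh/... key) would appear
kubectl get pods \
  --output=jsonpath="{range .items[*].spec.containers[*]}{.name}{'\t'}{.resources.limits}{'\n'}{end}"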
Kubernetes patching
Chatting with a developer about a question on Stack Overflow showed me an interesting use of imagePullSecrets that I’d not seen before: the container registry secret can be added to the default service account. This then enables e.g. kubectl run ... (which runs as the default service account) to access the private registry. Previously, I’d resorted to creating Deployments that include imagePullSecrets to circumvent this challenge.
So, I have a secret:
kubectl get secret/ghcr --output=yaml
Yields:
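With that secret in hand, attaching it to the default service account is a one-line patch; a minimal sketch, assuming the default namespace:
# Add the registry secret to the default service account so that ad hoc
# pods (e.g. kubectl run ...) can pull from the private registry
kubectl patch serviceaccount/default \
  --type=merge \
  --patch='{"imagePullSecrets":[{"name":"ghcr"}]}'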