GitHub Actions workflow
Some important changes to GitHub Actions workflows:
golangci-lint and its builder (golangci-lint-action v8.0.0)
The Go extension for Visual Studio Code supports golangci-lint, albeit through the “Go” debug console. An advantage of accessing it in Visual Studio Code is that its recommendations hyperlink to the code.
I’ve added golangci-lint to most of my repos’ GitHub Actions workflows. Belatedly, I realized it should run before, not in parallel with, e.g. the container build job.
Golang error handling
In Golang, it is common to see errors returned as:
errors.New(msg)
And:
fmt.Errorf(msg)
fmt.Errorf("...", msg, foo, bar)
If there are nested errors, these can be wrapped into a new error using the %w formatting directive:
if err != nil {
	msg := "something went wrong"
	return ..., fmt.Errorf("%s: %w", msg, err)
}
It is good practice to create custom error types.
Not only can this improve readability, but it also gives callers something to branch on: with errors.Is and errors.As, the error’s type (rather than its message string) can drive the handling logic.
Patch Prometheus Operator `externalUrl`
I continue to learn “tricks” with Prometheus and (kube-prometheus ⊃ Prometheus Operator).
For too long, I’ve lived with Alertmanager alerts including “View in Alertmanager” and “Source” hyperlinks that don’t work because they defaulted incorrectly.
The solution when running either Prometheus or Alertmanager (same for both) is to use the flag:
--web.external-url="${EXTERNAL_URL}"
Conveniently, when running kube-prometheus, there are Prometheus and Alertmanager CRDs for configuring each component, and these include the field:
externalUrl: "${EXTERNAL_URL}"
The question is: How to configure this value?
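For example, a sketch of where the field sits on the Alertmanager CRD (the URL is hypothetical; `main` and `monitoring` are kube-prometheus’s default name and namespace):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: main
  namespace: monitoring
spec:
  externalUrl: "https://alertmanager.example.com"
```

The Prometheus CRD accepts the same `externalUrl` field in its `spec`.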
Kubernetes override public DNS name
How can I rewrite some publicly resolvable foo.example.com to an in-cluster service?
kubectl run curl \
  --stdin --tty --rm \
  --image=radial/busyboxplus:curl
nslookup foo.example.com
As expected, it’s unable to resolve (or curl) the name:
Server: 10.152.183.10
Address 1: 10.152.183.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'foo.example.com'
curl http://foo.example.com:8080/healthy
curl: (6) Couldn't resolve host 'foo.example.com'
Exit the curl pod so that the DNS may be refreshed.
If the cluster uses CoreDNS:
kubectl get deployment \
  --selector=k8s-app=kube-dns \
  --namespace=kube-system \
  --output=name
deployment.apps/coredns
Let’s create an in-cluster service to act as the target:
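A minimal stand-in (the name, namespace, and port here are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: foo
  namespace: default
spec:
  selector:
    app: foo
  ports:
    - name: http
      port: 8080
      targetPort: 8080
```

With a target in place, CoreDNS’s rewrite plugin can map the public name onto it, e.g. adding `rewrite name foo.example.com foo.default.svc.cluster.local` to the Corefile (held in the coredns ConfigMap in kube-system).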
Prometheus Recording Rules
Prometheus Recording Rules are well-documented, but I’d not previously used this feature.
The simple(st?) prometheus.yml:
global:

rule_files:
  - rules.yaml

scrape_configs:
  - job_name: prometheus-server
    static_configs:
      - targets:
          - localhost:9090
NOTE Prometheus doesn’t barf if `rules.yaml` does not exist. Corollary: if it can’t find `rules.yaml`, the `/rules` endpoint will be empty.
I wanted something that was guaranteed to be available: prometheus_http_requests_total
When I started using Prometheus, I was flummoxed when I encountered colons (`:`) in metric names, but learned that these are used (uniquely) for defining recording rules.
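A minimal `rules.yaml` using that metric (the record name is my own, following the usual `level:metric:operations` convention):

```yaml
groups:
  - name: example
    rules:
      - record: job:prometheus_http_requests_total:sum
        expr: sum by (job) (prometheus_http_requests_total)
```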
Rust Tonic `serde::Serialize`'ing types including Google's WKTs
I’m increasingly using Rust in addition to Golang to write code. tonic is really excellent, but I’d been struggling to couple its generated types with serde_json because, by default, tonic doesn’t generate types with serde::Serialize annotations.
For what follows, the Protobuf sources are in Google’s googleapis repo, specifically google.firestore.v1.
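One way to get `Serialize` onto the generated types (a sketch, not necessarily the post’s exact approach; the paths are placeholders, and older tonic-build releases name the final call `compile` rather than `compile_protos`):

```rust
// build.rs: a sketch, assuming tonic-build and a googleapis checkout alongside.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    tonic_build::configure()
        // Generate Rust for Google's well-known types locally, so the
        // attribute below is applied to them too (rather than using the
        // prost-types versions, which lack serde derives).
        .compile_well_known_types(true)
        // Ask prost to add serde::Serialize to every generated type.
        .type_attribute(".", "#[derive(serde::Serialize)]")
        .compile_protos(
            &["googleapis/google/firestore/v1/firestore.proto"],
            &["googleapis"],
        )?;
    Ok(())
}
```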
(JSON) serializing generated types
For example, I’d like to do the following:
use google::firestore::v1::ListenRequest;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    ...
    let rqst = ListenRequest { ... };
    let json_string = serde_json::to_string(&rqst)?;
    dbg!(json_string);
    Ok(())
}
Yields:
Convert GitHub Actions workflows to multi-platform
I have multiple GitHub Actions workflows that build AMD64 images.
Thanks to help from Oğuzhan Yılmaz, who converted crtsh-exporter to multi-platform builds, I now have a template for the changes. For other repos, revise:
- `build.yml` (or equivalent)
- `Dockerfile`s
GitHub Actions workflow
Add QEMU step:
- name: QEMU
  uses: docker/setup-qemu-action@v3
Replace:
- name: docker build && docker push
  id: docker-build-push
  with:
    context: .
    file: ./Dockerfile
    build-args: |
      VERSION=${{ env.VERSION }}
      COMMIT=${{ github.sha }}
    tags: ...
    push: true
With:
- name: Buildx Multi-platform Linux Docker Images
  id: docker-build-push-multi-platform
  uses: docker/build-push-action@v6
  with:
    context: .
    platforms: linux/amd64,linux/arm/v7,linux/arm64
    file: ./Dockerfile
    build-args: |
      VERSION=${{ env.VERSION }}
      COMMIT=${{ github.sha }}
    tags: ...
    push: true
Tweak:
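For Go images, the usual Dockerfile change (a sketch; the base images, module layout, and output path here are hypothetical) is to build on the native platform and cross-compile for each target:

```dockerfile
# Build on the native (build) platform, cross-compile for the target.
FROM --platform=$BUILDPLATFORM golang:1.22 AS build

# Buildx provides TARGETOS/TARGETARCH for each entry in `platforms`.
ARG TARGETOS
ARG TARGETARCH

WORKDIR /src
COPY . .

RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} \
    go build -o /server .

FROM gcr.io/distroless/static
COPY --from=build /server /server
ENTRYPOINT ["/server"]
```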
Am I permitted?
gcloud includes gcloud iam roles describe
So you can enumerate a role’s (ROLE) permissions using:
ROLE="..."
gcloud iam roles describe ${ROLE}
But, you generally want to know whether the role includes specific permissions (PERM).
Customarily, you’d think you can gcloud ... --flatten=... --filter=... but gcloud only provides --filter on list methods (not describe). However, there is a filter projection:
ROLE="..."
PERM="..."
FORMAT="value(includedPermissions.filter(\"${PERM}\"))"
gcloud iam roles describe ${ROLE} \
  --format="${FORMAT}"
Alternatively, it’s slightly more UNIX-y to have tools (such as gcloud) produce JSON or YAML and then use a JSON (e.g. jq) or YAML (e.g. yq) processor:
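A sketch of the jq route, using a stand-in file (the role name and permissions below are hypothetical; `role.json` stands in for the output of `gcloud iam roles describe "${ROLE}" --format=json`):

```shell
# Stand-in for: gcloud iam roles describe "${ROLE}" --format=json > role.json
cat <<'EOF' > role.json
{
  "name": "roles/example.viewer",
  "includedPermissions": [
    "storage.objects.get",
    "storage.objects.list"
  ]
}
EOF

PERM="storage.objects.get"

# Print the permission only if the role includes it.
jq --raw-output --arg perm "${PERM}" \
  '.includedPermissions[] | select(. == $perm)' \
  role.json
```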
For which repos is Dependabot paused?
Configuration complexity aside, another challenge I have with GitHub’s (otherwise very useful) Dependabot tool is that, when I receive multiple PRs each updating a single Go module, my preference is to combine the updates myself into one PR. A downside of this approach is that Dependabot gets pissed off and pauses updates on repos where I do this repeatedly.
In which repos is Dependabot enabled (this check can be avoided) but paused?
List cluster's images
Kubernetes documentation provides this example but I prefer:
FILTER="{range .items[*].spec['initContainers', 'containers'][*]}{.image}{'\n'}{end}"
CONTEXT="..."
kubectl get pods \
  --all-namespaces \
  --output=jsonpath="${FILTER}" \
  --context="${CONTEXT}" \
  | sort | uniq -c
