Tag: Docker
GitHub Actions workflow
Some important changes to GitHub Actions workflows:
golangci-lint and its builder (golangci-lint-action v8.0.0)
The Go extension for Visual Studio Code supports golangci-lint albeit through the “Go” debug console. An advantage of accessing it in Visual Studio Code is that recommendations hyperlink to the code.
I’ve added golangci-lint to most of my repos’ GitHub Actions workflows. Belatedly, I realized it should run before, not in parallel with, e.g. the container builder.
Golang, Containers and private repos
A smörgåsbord of guidance involving Golang modules, private repos and containers. Everything herein is documented elsewhere (I’ll provide links) but I wanted to consolidate the information primarily for my own benefit.
GOPRIVATE
Using private modules adds complexity because builders need to be able to access private modules. Customarily, as you’re hacking away, you’ll likely not encounter issues but, when you write a Dockerfile or develop some CI, you’ll encounter something of the form:
Don't ignore the (hidden) ignore files
Don’t forget to add appropriate ignore files…
.dockerignore when using Docker
.gitignore when using git
.gcloudignore when using Google Cloud Platform
This week, I’ve been bitten twice by not using these.
They’re hidden files and so, unfortunately, they’re easier to forget.
.dockerignore
docker build ...
Without .dockerignore
Sending build context to Docker daemon 229.9MB
Because, even though Rust’s cargo creates a useful .gitignore, it doesn’t create .dockerignore and, as soon as you create ./target, you’re going to take up (likely unnecessary) build context space:
ZeroConf
sudo systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
Loaded: loaded (/lib/systemd/system/avahi-daemon.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-11-20 09:26:13 PST; 14min ago
TriggeredBy: ● avahi-daemon.socket
Main PID: 1039 (avahi-daemon)
Status: "avahi-daemon 0.7 starting up."
Tasks: 2 (limit: 38333)
Memory: 2.3M
CGroup: /system.slice/avahi-daemon.service
├─1039 avahi-daemon: running [hades-canyon.local]
└─1098 avahi-daemon: chroot helper
avahi-browse --all
+ wlp6s0 IPv4 googlerpc-1 _googlerpc._tcp local
+ wlp6s0 IPv4 googlerpc _googlerpc._tcp local
+ enp5s0 IPv4 googlerpc-1 _googlerpc._tcp local
+ enp5s0 IPv4 googlerpc _googlerpc._tcp local
+ wlp6s0 IPv4 Google-Home-Mini-... _googlecast._tcp local
+ wlp6s0 IPv4 Google-Home-Mini-... _googlecast._tcp local
+ enp5s0 IPv4 Google-Home-Mini-... _googlecast._tcp local
+ wlp6s0 IPv4 [GUID] _googlezone._tcp local
+ enp5s0 IPv4 [GUID] _googlezone._tcp local
+ enp5s0 IPv4 [GUID] _googlezone._tcp local
Tag: Github
GitHub Actions workflow
Some important changes to GitHub Actions workflows:
golangci-lint and its builder (golangci-lint-action v8.0.0)
The Go extension for Visual Studio Code supports golangci-lint albeit through the “Go” debug console. An advantage of accessing it in Visual Studio Code is that recommendations hyperlink to the code.
I’ve added golangci-lint to most of my repos’ GitHub Actions workflows. Belatedly, I realized it should run before, not in parallel with, e.g. the container builder.
Convert GitHub Actions workflows to multi-platform
I have multiple GitHub Actions workflows that build AMD64 images.
Thanks to help from Oğuzhan Yılmaz, who converted crtsh-exporter to multi-platform builds, I now have a template for the changes needed in other repos; revise:
build.yml (or equivalent)
Dockerfiles
GitHub Actions workflow
Add QEMU step:
- name: QEMU
  uses: docker/setup-qemu-action@v3
Replace:
- name: docker build && docker push
  id: docker-build-push
  with:
    context: .
    file: ./Dockerfile
    build-args: |
      VERSION=${{ env.VERSION }}
      COMMIT=${{ github.sha }}
    tags: ...
    push: true
With:
- name: Buildx Multi-platform Linux Docker Images
  id: docker-build-push-multi-platform
  uses: docker/build-push-action@v6
  with:
    context: .
    platforms: linux/amd64,linux/arm/v7,linux/arm64
    file: ./Dockerfile
    build-args: |
      VERSION=${{ env.VERSION }}
      COMMIT=${{ github.sha }}
    tags: ...
    push: true
Tweak:
For which repos is Dependabot paused?
Configuration complexity aside, another challenge I have with GitHub’s (otherwise very useful) Dependabot tool is that, when I receive multiple PRs each updating a single Go module, my preference is to combine the updates myself into one PR. A downside of this approach is that Dependabot gets pissed off and pauses updates on repos where I do this repeatedly.
In which repos is Dependabot enabled (this check can be avoided) but paused?
Recreating Go Module 'commit' paths
I’m spending some time trying to better understand best practices for Golang Modules, particularly when used with GitHub’s Dependabot.
One practice I’m pursuing is to not (GitHub) Release Go Modules in separate repos that form part of a single application. In my scenario, I chose not to use a monorepo and so I have an application smeared across multiple repos. See GitHub help with dependency management.
If a Module is not Released (on GitHub) then the go tooling versions the repo as:
Golang, Containers and private repos
A smörgåsbord of guidance involving Golang modules, private repos and containers. Everything herein is documented elsewhere (I’ll provide links) but I wanted to consolidate the information primarily for my own benefit.
GOPRIVATE
Using private modules adds complexity because builders need to be able to access private modules. Customarily, as you’re hacking away, you’ll likely not encounter issues but, when you write a Dockerfile or develop some CI, you’ll encounter something of the form:
GitHub Actions Env & Commits
Environment
There are several ways to consume environment variables in GitHub Actions but I was unsure how to set environment variables. I have a Go build that uses -ldflags "-X main.OSVersion=${VERSION} -X main.GitCommit=${COMMIT}" to set variables in the binary that can be e.g. exported via Prometheus.
The GitCommit is straightforward as it’s one of GitHub Actions’ provided values and is just ${{ github.sha }}.
I wanted to set Version to be the value of uname --kernel-release. Since the build is using ubuntu-latest, this is easy to get but, how to set?
Don't ignore the (hidden) ignore files
Don’t forget to add appropriate ignore files…
.dockerignore when using Docker
.gitignore when using git
.gcloudignore when using Google Cloud Platform
This week, I’ve been bitten twice by not using these.
They’re hidden files and so, unfortunately, they’re easier to forget.
.dockerignore
docker build ...
Without .dockerignore
Sending build context to Docker daemon 229.9MB
Because, even though Rust’s cargo creates a useful .gitignore, it doesn’t create .dockerignore and, as soon as you create ./target, you’re going to take up (likely unnecessary) build context space:
GitHub Actions' Strategy Matrix
Yesterday, I was introduced to a useful feature of GitHub Actions which we’ll refer to as strategy matrix. I’m more familiar with Google Cloud Build but, to my knowledge, Cloud Build does not provide this feature.
The challenge is in providing an iterator for steps in e.g. a CI/CD platform.
Below is a summarized version of what I had. My (self-created) problem was that I had 4 container images to build, but the Dockerfile names didn’t exactly match the desired repository names. I had e.g. grpc.broker for the Dockerfile name and I wanted e.g. grpc-broker. The principle though is more general than my challenge of naming things. The YAML below describes the same step multiple times and what I would like to do is range over some set of values.
Tag: Github-Actions
GitHub Actions workflow
Some important changes to GitHub Actions workflows:
golangci-lint and its builder (golangci-lint-action v8.0.0)
The Go extension for Visual Studio Code supports golangci-lint albeit through the “Go” debug console. An advantage of accessing it in Visual Studio Code is that recommendations hyperlink to the code.
I’ve added golangci-lint to most of my repos’ GitHub Actions workflows. Belatedly, I realized it should run before, not in parallel with, e.g. the container builder.
Convert GitHub Actions workflows to multi-platform
I have multiple GitHub Actions workflows that build AMD64 images.
Thanks to help from Oğuzhan Yılmaz, who converted crtsh-exporter to multi-platform builds, I now have a template for the changes needed in other repos; revise:
build.yml (or equivalent)
Dockerfiles
GitHub Actions workflow
Add QEMU step:
- name: QEMU
  uses: docker/setup-qemu-action@v3
Replace:
- name: docker build && docker push
  id: docker-build-push
  with:
    context: .
    file: ./Dockerfile
    build-args: |
      VERSION=${{ env.VERSION }}
      COMMIT=${{ github.sha }}
    tags: ...
    push: true
With:
- name: Buildx Multi-platform Linux Docker Images
  id: docker-build-push-multi-platform
  uses: docker/build-push-action@v6
  with:
    context: .
    platforms: linux/amd64,linux/arm/v7,linux/arm64
    file: ./Dockerfile
    build-args: |
      VERSION=${{ env.VERSION }}
      COMMIT=${{ github.sha }}
    tags: ...
    push: true
Tweak:
Golang, Containers and private repos
A smörgåsbord of guidance involving Golang modules, private repos and containers. Everything herein is documented elsewhere (I’ll provide links) but I wanted to consolidate the information primarily for my own benefit.
GOPRIVATE
Using private modules adds complexity because builders need to be able to access private modules. Customarily, as you’re hacking away, you’ll likely not encounter issues but, when you write a Dockerfile or develop some CI, you’ll encounter something of the form:
GitHub Actions Env & Commits
Environment
There are several ways to consume environment variables in GitHub Actions but I was unsure how to set environment variables. I have a Go build that uses -ldflags "-X main.OSVersion=${VERSION} -X main.GitCommit=${COMMIT}" to set variables in the binary that can be e.g. exported via Prometheus.
The GitCommit is straightforward as it’s one of GitHub Actions’ provided values and is just ${{ github.sha }}.
I wanted to set Version to be the value of uname --kernel-release. Since the build is using ubuntu-latest, this is easy to get but, how to set?
GitHub Actions' Strategy Matrix
Yesterday, I was introduced to a useful feature of GitHub Actions which we’ll refer to as strategy matrix. I’m more familiar with Google Cloud Build but, to my knowledge, Cloud Build does not provide this feature.
The challenge is in providing an iterator for steps in e.g. a CI/CD platform.
Below is a summarized version of what I had. My (self-created) problem was that I had 4 container images to build, but the Dockerfile names didn’t exactly match the desired repository names. I had e.g. grpc.broker for the Dockerfile name and I wanted e.g. grpc-broker. The principle though is more general than my challenge of naming things. The YAML below describes the same step multiple times and what I would like to do is range over some set of values.
Tag: Golangci-Lint
GitHub Actions workflow
Some important changes to GitHub Actions workflows:
golangci-lint and its builder (golangci-lint-action v8.0.0)
The Go extension for Visual Studio Code supports golangci-lint albeit through the “Go” debug console. An advantage of accessing it in Visual Studio Code is that recommendations hyperlink to the code.
I’ve added golangci-lint to most of my repos’ GitHub Actions workflows. Belatedly, I realized it should run before, not in parallel with, e.g. the container builder.
Tag: Golang
Golang error handling
In Golang, it is common to see errors returned as:
errors.New(msg)
And:
fmt.Errorf(msg)
fmt.Errorf("...",msg,foo,bar)
If there are nested errors, these can be wrapped into a new error using the %w formatting directive:
if err != nil {
	msg := "something went wrong"
	return ..., fmt.Errorf("%s: %w", msg, err)
}
It is good practice to create custom error types.
Not only can this improve readability but it enables a mechanism where the error type can be used to clarify code.
Recreating Go Module 'commit' paths
I’m spending some time trying to better understand best practices for Golang Modules, particularly when used with GitHub’s Dependabot.
One practice I’m pursuing is to not (GitHub) Release Go Modules in separate repos that form part of a single application. In my scenario, I chose not to use a monorepo and so I have an application smeared across multiple repos. See GitHub help with dependency management.
If a Module is not Released (on GitHub) then the go tooling versions the repo as:
Golang, Containers and private repos
A smörgåsbord of guidance involving Golang modules, private repos and containers. Everything herein is documented elsewhere (I’ll provide links) but I wanted to consolidate the information primarily for my own benefit.
GOPRIVATE
Using private modules adds complexity because builders need to be able to access private modules. Customarily, as you’re hacking away, you’ll likely not encounter issues but, when you write a Dockerfile or develop some CI, you’ll encounter something of the form:
Golang Kubernetes JSONPath
I’ve been spending some time learning Akri.
One proposal was to develop a webhook handler to check the YAML of Akri’s Configurations (CRDs). Configurations are used to describe Akri Brokers. They combine a Protocol reference (e.g. zeroconf) with a Kubernetes PodSpec (one or more containers), one of which references (using .resources.limits.{{PLACEHOLDER}}) the Akri device to be bound to the broker.
In order to validate the Configuration, one of Akri’s developers proposed using JSONPath as a way to ‘query’ Kubernetes configuration files. This is a clever suggestion.
gRPC Healthchecking in Rust
Golang
Go provides an implementation grpc_health_v1 of the gRPC Health-checking Protocol proto.
This is easily implemented:
package main
import (
	"log"
	"net"

	pb "github.com/DazWilkin/.../protos"
	"google.golang.org/grpc"
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)
func main() {
...
serverOpts := []grpc.ServerOption{}
grpcServer := grpc.NewServer(serverOpts...)
// Register the pb service
pb.RegisterSomeServer(grpcServer, NewServer())
// Register the healthpb service
healthpb.RegisterHealthServer(grpcServer, health.NewServer())
listen, err := net.Listen("tcp", *grpcEndpoint)
if err != nil {
log.Fatal(err)
}
log.Printf("[main] Starting gRPC Listener [%s]\n", *grpcEndpoint)
log.Fatal(grpcServer.Serve(listen))
}
Because it’s gRPC, you need an implementation of the proto for the client too; one is provided: grpc-health-probe:
Tag: Alertmanager
Patch Prometheus Operator `externalUrl`
I continue to learn “tricks” with Prometheus and (kube-prometheus ⊃ Prometheus Operator).
For too long, I’ve lived with Alertmanager alerts including “View in Alertmanager” and “Source” hyperlinks that don’t work because they defaulted incorrectly.
The solution when running either Prometheus or Alertmanager (same for both) is to use the flag:
--web.external-url="${EXTERNAL_URL}"
Conveniently, when running kube-prometheus, there are Prometheus and Alertmanager CRDs for configuring Prometheus and Alertmanager and these include the field:
externalUrl: "${EXTERNAL_URL}"
The question is: How to configure this value?
Pushover w/ AlertManager
I’m using Pushover’s (generous) 30-day trial. IIUC, thereafter (for personal use) the app is $5 for a perpetual license. That seems very reasonable to me.
I find Prometheus’ documentation “light”. Everything’s there but the docs feel oriented to the power|frequent user. I use Prometheus infrequently and struggle to understand the docs.
The AlertManager configuration for Pushover is ok but I struggled to understand the reference to (Golang) templates:
# Notification title.
[ title: <tmpl_string> | default = '{{ template "pushover.default.title" . }}' ]
# Notification message.
[ message: <tmpl_string> | default = '{{ template "pushover.default.message" . }}' ]
# A supplementary URL shown alongside the message.
[ url: <tmpl_string> | default = '{{ template "pushover.default.url" . }}' ]
I’m familiar with Go’s templating but I was unclear how to interpret these configuration references. As I thought about it, I assumed these must reference default templates (shipped with AlertManager) and found these here:
Tag: Kube-Prometheus
Patch Prometheus Operator `externalUrl`
I continue to learn “tricks” with Prometheus and (kube-prometheus ⊃ Prometheus Operator).
For too long, I’ve lived with Alertmanager alerts including “View in Alertmanager” and “Source” hyperlinks that don’t work because they defaulted incorrectly.
The solution when running either Prometheus or Alertmanager (same for both) is to use the flag:
--web.external-url="${EXTERNAL_URL}"
Conveniently, when running kube-prometheus, there are Prometheus and Alertmanager CRDs for configuring Prometheus and Alertmanager and these include the field:
externalUrl: "${EXTERNAL_URL}"
The question is: How to configure this value?
Tag: Kubernetes
Patch Prometheus Operator `externalUrl`
I continue to learn “tricks” with Prometheus and (kube-prometheus ⊃ Prometheus Operator).
For too long, I’ve lived with Alertmanager alerts including “View in Alertmanager” and “Source” hyperlinks that don’t work because they defaulted incorrectly.
The solution when running either Prometheus or Alertmanager (same for both) is to use the flag:
--web.external-url="${EXTERNAL_URL}"
Conveniently, when running kube-prometheus, there are Prometheus and Alertmanager CRDs for configuring Prometheus and Alertmanager and these include the field:
externalUrl: "${EXTERNAL_URL}"
The question is: How to configure this value?
Kubernetes override public DNS name
How can I rewrite some publicly resolvable foo.example.com to an in-cluster service?
kubectl run curl \
--stdin --tty --rm \
--image=radial/busyboxplus:curl
nslookup foo.example.com
As expected, it’s unable to resolve|curl:
Server: 10.152.183.10
Address 1: 10.152.183.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'foo.example.com'
curl http://foo.example.com:8080/healthy
curl: (6) Couldn't resolve host 'foo.example.com'
Exit the curl pod so that the DNS may be refreshed.
If the cluster uses CoreDNS:
kubectl get deployment \
--selector=k8s-app=kube-dns \
--namespace=kube-system \
--output=name
deployment.apps/coredns
Let’s create an in-cluster service to act as the target:
List cluster's images
Kubernetes documentation provides this example but I prefer:
FILTER="{range .items[*].spec['initContainers', 'containers'][*]}{.image}{'\n'}{end}"
CONTEXT="..."
kubectl get pods \
--all-namespaces \
--output=jsonpath="${FILTER}" \
--context="${CONTEXT}" \
| sort | uniq -c
Ingress contains no valid backends
Using MicroK8s with the new observability addon which uses Helm to install kube-prometheus.
This results in various Resources including several Services:
kubectl get services \
--namespace=observability
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP
kube-prom-stack-kube-prome-alertmanager ClusterIP 10.152.183.201 <none> 9093/TCP
kube-prom-stack-kube-prome-operator ClusterIP 10.152.183.44 <none> 443/TCP
kube-prom-stack-kube-prome-prometheus ClusterIP 10.152.183.206 <none> 9090/TCP
kube-prom-stack-kube-state-metrics ClusterIP 10.152.183.126 <none> 8080/TCP
prometheus-operated ClusterIP None <none> 9090/TCP
I’m using Tailscale Kubernetes Operator to expose MicroK8s services using Ingress to my tailnet.
Golang Kubernetes JSONPath
I’ve been spending some time learning Akri.
One proposal was to develop a webhook handler to check the YAML of Akri’s Configurations (CRDs). Configurations are used to describe Akri Brokers. They combine a Protocol reference (e.g. zeroconf) with a Kubernetes PodSpec (one or more containers), one of which references (using .resources.limits.{{PLACEHOLDER}}) the Akri device to be bound to the broker.
In order to validate the Configuration, one of Akri’s developers proposed using JSONPath as a way to ‘query’ Kubernetes configuration files. This is a clever suggestion.
Kubernetes patching
Chatting with a developer about a question on Stack Overflow revealed an interesting use of imagePullSecrets that I’d not seen before. The container registry secret can be added to the default service account. This then enables e.g. kubectl run ... (which runs as the default service account) to access the private registry. Previously, I’ve resorted to creating Deployments that include imagePullSecrets to circumvent this challenge.
So, I have a secret:
kubectl get secret/ghcr --output=yaml
Yields:
Tag: Prometheus
Patch Prometheus Operator `externalUrl`
I continue to learn “tricks” with Prometheus and (kube-prometheus ⊃ Prometheus Operator).
For too long, I’ve lived with Alertmanager alerts including “View in Alertmanager” and “Source” hyperlinks that don’t work because they defaulted incorrectly.
The solution when running either Prometheus or Alertmanager (same for both) is to use the flag:
--web.external-url="${EXTERNAL_URL}"
Conveniently, when running kube-prometheus, there are Prometheus and Alertmanager CRDs for configuring Prometheus and Alertmanager and these include the field:
externalUrl: "${EXTERNAL_URL}"
The question is: How to configure this value?
Prometheus Recording Rules
Prometheus Recording Rules are well-documented but I’d not previously used this feature.
The simple(st?) prometheus.yml:
global:
rule_files:
  - rules.yaml
scrape_configs:
  - job_name: prometheus-server
    static_configs:
      - targets:
          - localhost:9090
NOTE Prometheus doesn’t barf if the rules.yaml does not exist. Corollary: if it can’t find rules.yaml, the /rules endpoint will be empty.
I wanted something that was guaranteed to be available: prometheus_http_requests_total
When I started using Prometheus, I was flummoxed when I encountered colons (:) in metric names but learned that these are used (uniquely) for defining recording rules
Pushover w/ AlertManager
I’m using Pushover’s (generous) 30-day trial. IIUC, thereafter (for personal use) the app is $5 for a perpetual license. That seems very reasonable to me.
I find Prometheus’ documentation “light”. Everything’s there but the docs feel oriented to the power|frequent user. I use Prometheus infrequently and struggle to understand the docs.
The AlertManager configuration for Pushover is ok but I struggled to understand the reference to (Golang) templates:
# Notification title.
[ title: <tmpl_string> | default = '{{ template "pushover.default.title" . }}' ]
# Notification message.
[ message: <tmpl_string> | default = '{{ template "pushover.default.message" . }}' ]
# A supplementary URL shown alongside the message.
[ url: <tmpl_string> | default = '{{ template "pushover.default.url" . }}' ]
I’m familiar with Go’s templating but I was unclear how to interpret these configuration references. As I thought about it, I assumed these must reference default templates (shipped with AlertManager) and found these here:
Tag: Prometheus Operator
Patch Prometheus Operator `externalUrl`
I continue to learn “tricks” with Prometheus and (kube-prometheus ⊃ Prometheus Operator).
For too long, I’ve lived with Alertmanager alerts including “View in Alertmanager” and “Source” hyperlinks that don’t work because they defaulted incorrectly.
The solution when running either Prometheus or Alertmanager (same for both) is to use the flag:
--web.external-url="${EXTERNAL_URL}"
Conveniently, when running kube-prometheus, there are Prometheus and Alertmanager CRDs for configuring Prometheus and Alertmanager and these include the field:
externalUrl: "${EXTERNAL_URL}"
The question is: How to configure this value?
Ingress contains no valid backends
Using MicroK8s with the new observability addon which uses Helm to install kube-prometheus.
This results in various Resources including several Services:
kubectl get services \
--namespace=observability
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP
kube-prom-stack-kube-prome-alertmanager ClusterIP 10.152.183.201 <none> 9093/TCP
kube-prom-stack-kube-prome-operator ClusterIP 10.152.183.44 <none> 443/TCP
kube-prom-stack-kube-prome-prometheus ClusterIP 10.152.183.206 <none> 9090/TCP
kube-prom-stack-kube-state-metrics ClusterIP 10.152.183.126 <none> 8080/TCP
prometheus-operated ClusterIP None <none> 9090/TCP
I’m using Tailscale Kubernetes Operator to expose MicroK8s services using Ingress to my tailnet.
Tag: CoreDNS
Kubernetes override public DNS name
How can I rewrite some publicly resolvable foo.example.com to an in-cluster service?
kubectl run curl \
--stdin --tty --rm \
--image=radial/busyboxplus:curl
nslookup foo.example.com
As expected, it’s unable to resolve|curl:
Server: 10.152.183.10
Address 1: 10.152.183.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'foo.example.com'
curl http://foo.example.com:8080/healthy
curl: (6) Couldn't resolve host 'foo.example.com'
Exit the curl pod so that the DNS may be refreshed.
If the cluster uses CoreDNS:
kubectl get deployment \
--selector=k8s-app=kube-dns \
--namespace=kube-system \
--output=name
deployment.apps/coredns
Let’s create an in-cluster service to act as the target:
Tag: Firestore
Rust Tonic `serde::Serialize`'ing types including Google's WKTs
I’m increasingly using Rust in addition to Golang to write code. tonic is really excellent but I’d been struggling with coupling its generated types with serde_json because, by default, tonic doesn’t generate types with serde::Serialize annotations.
For what follows, the Protobuf sources are in Google’s googleapis repo, specifically google.firestore.v1.
(JSON) serializing generated types
For example, I’d like to do the following:
use google::firestore::v1::ListenRequest;
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    ...
    let rqst = ListenRequest { ... };
    let json_string = serde_json::to_string(&rqst)?;
    dbg!(json_string);
    Ok(())
}
Yields:
Tag: Google
Rust Tonic `serde::Serialize`'ing types including Google's WKTs
I’m increasingly using Rust in addition to Golang to write code. tonic is really excellent but I’d been struggling with coupling its generated types with serde_json because, by default, tonic doesn’t generate types with serde::Serialize annotations.
For what follows, the Protobuf sources are in Google’s googleapis repo, specifically google.firestore.v1.
(JSON) serializing generated types
For example, I’d like to do the following:
use google::firestore::v1::ListenRequest;
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    ...
    let rqst = ListenRequest { ... };
    let json_string = serde_json::to_string(&rqst)?;
    dbg!(json_string);
    Ok(())
}
Yields:
Tag: Grpc
Rust Tonic `serde::Serialize`'ing types including Google's WKTs
I’m increasingly using Rust in addition to Golang to write code. tonic is really excellent but I’d been struggling with coupling its generated types with serde_json because, by default, tonic doesn’t generate types with serde::Serialize annotations.
For what follows, the Protobuf sources are in Google’s googleapis repo, specifically google.firestore.v1.
(JSON) serializing generated types
For example, I’d like to do the following:
use google::firestore::v1::ListenRequest;
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    ...
    let rqst = ListenRequest { ... };
    let json_string = serde_json::to_string(&rqst)?;
    dbg!(json_string);
    Ok(())
}
Yields:
rust-analyzer and tonic
Solution: https://github.com/rust-analyzer/rust-analyzer/issues/5799
References: https://jen20.dev/post/completion-of-generated-code-in-intellij-rust/
build.rs:
fn main() -> Result<(), Box<dyn std::error::Error>> {
// gRPC Healthcheck
tonic_build::compile_protos("proto/grpc_health_v1.proto")?;
Ok(())
}
But, because this compiles the proto(s) at build time, the imports aren’t available to Visual Studio Code and rust-analyzer
pub mod grpc_health_v1 {
tonic::include_proto!("grpc.health.v1");
}
// These imports would be unavailable and error
use grpc_health_v1::{
health_check_response::ServingStatus,
health_server::{Health, HealthServer},
HealthCheckRequest, HealthCheckResponse,
};
However,
"rust-analyzer.cargo.loadOutDirsFromCheck": true,
Using tonic
Microsoft’s akri uses tonic to provide gRPC.
For the gRPC Health-checking Protocol proto:
./proto/grpc_health_v1.proto:
syntax = "proto3";
package grpc.health.v1;
service Health {
rpc Check(HealthCheckRequest) returns (HealthCheckResponse);
rpc Watch(HealthCheckRequest) returns (stream HealthCheckResponse);
}
message HealthCheckRequest {
string service = 1;
}
message HealthCheckResponse {
enum ServingStatus {
UNKNOWN = 0;
SERVING = 1;
NOT_SERVING = 2;
SERVICE_UNKNOWN = 3; // Used only by the Watch method.
}
ServingStatus status = 1;
}
Tonic supports compiling protos using build.rs:
fn main() -> Result<(), Box<dyn std::error::Error>> {
// compile refers to the path
tonic_build::compile_protos("proto/grpc_health_v1.proto")?;
Ok(())
}
And then `use`ing these:
gRPC Healthchecking in Rust
Golang
Go provides an implementation grpc_health_v1 of the gRPC Health-checking Protocol proto.
This is easily implemented:
package main
import (
	"log"
	"net"

	pb "github.com/DazWilkin/.../protos"
	"google.golang.org/grpc"
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)
func main() {
...
serverOpts := []grpc.ServerOption{}
grpcServer := grpc.NewServer(serverOpts...)
// Register the pb service
pb.RegisterSomeServer(grpcServer, NewServer())
// Register the healthpb service
healthpb.RegisterHealthServer(grpcServer, health.NewServer())
listen, err := net.Listen("tcp", *grpcEndpoint)
if err != nil {
log.Fatal(err)
}
log.Printf("[main] Starting gRPC Listener [%s]\n", *grpcEndpoint)
log.Fatal(grpcServer.Serve(listen))
}
Because it’s gRPC, you need an implementation of the proto for the client too; one is provided: grpc-health-probe:
Tag: Dependabot
For which repos is Dependabot paused?
Configuration complexity aside, another challenge I have with GitHub’s (otherwise very useful) Dependabot tool is that, when I receive multiple PRs each updating a single Go module, my preference is to combine the updates myself into one PR. A downside of this approach is that Dependabot gets pissed off and pauses updates on repos where I do this repeatedly.
In which repos is Dependabot enabled (this check can be avoided) but paused?
Tag: Kubectl
List cluster's images
Kubernetes documentation provides this example but I prefer:
FILTER="{range .items[*].spec['initContainers', 'containers'][*]}{.image}{'\n'}{end}"
CONTEXT="..."
kubectl get pods \
--all-namespaces \
--output=jsonpath="${FILTER}" \
--context="${CONTEXT}" \
| sort | uniq -c
kubectl patch'ing keys containing forward slash
I wanted to use kubectl to (JSON) patch an Ingress. The value needing patching is an annotation tailscale.com/funnel.
TL;DR The Stack Overflow answer has the solution: replace / with ~1
With apologies for using YAML instead of JSON:
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
tailscale.com/funnel: "true"
The solution becomes:
VALUE="true" # Or "false"
# Pretty-printed for clarity
PATCH="
[
{
'op':'replace',
'path':'/metadata/annotations/tailscale.com~1funnel',
'value':'${VALUE}'
}
]"
kubectl patch ingress/${INGRESS} \
--namespace=${NAMESPACE} \
--context=${CONTEXT} \
--type=json \
--patch="${PATCH}"
`kubectl` auth changes in GKE v1.25
I was prompted by a question on Stack Overflow, “How to remove warning in kubectl with gcp auth plugin?”, to try this new mechanism for myself. It’s described by Google in the post Here’s what to know about changes to kubectl authentication coming in GKE v1.25.
One question I’d not considered is: how is the change manifested? Thinking about it, I realized it’s probably evident in the users section of kubectl config. A long time ago, I wrote a blog post Kubernetes Engine: kubectl config that explains how kubectl leverages (!) gcloud to get an access token for GKE.
`kubectl get events`
NAMESPACE=test
kubectl create namespace ${NAMESPACE}
kubectl create deployment kuard \
--image=gcr.io/kuar-demo/kuard-amd64:blue \
--port=8080 \
--namespace=${NAMESPACE}
Typically, I’d then use kubectl describe pod to check the Events section for any issues:
kubectl describe pod \
--selector=app=kuard \
--namespace=test
But, from the Pod’s name (without the pod/ prefix), you can:
NAME=$(\
kubectl get pod \
--selector=app=kuard \
--namespace=test \
--output=name) && \
NAME=${NAME#pod/} && \
echo ${NAME}
kubectl get events \
--field-selector=involvedObject.name=${NAME} \
--namespace=${NAMESPACE} \
--output=jsonpath='{range .items[*]}{.message}{"\n"}{end}'
NOTE
range‘ing over the items permits adding newlines (\n) after each entry. Using {.items[*].message} yields a less manageable result.
Tag: Gcloud
`gcloud auth application-default unset-quota-project`
gcloud auth application-default includes a sub-command gcloud auth application-default set-quota-project
Unlike the similar gcloud config set and gcloud config unset pair, there’s no unset-quota-project.
However, it’s straightforward to undo set-quota-project because its primary side effect is to update ${HOME}/.config/gcloud/application_default_credentials.json
gcloud auth application-default set-quota-project ${PROJECT} \
--log-http
Request:
uri: https://cloudresourcemanager.googleapis.com/v1/projects/{PROJECT}:testIamPermissions?alt=json
method: POST
== headers start ==
b'accept': b'application/json'
b'authorization': --- Token Redacted ---
b'content-type': b'application/json'
b'x-goog-user-project': b'{PROJECT}'
== headers end ==
== body start ==
{"permissions": ["serviceusage.services.use"]}
== body end ==
Response:
Protect against accidental GCP Project Deletion
I’d forgotten about this feature but it’s a good way of protecting Google Cloud Platform (GCP) projects against accidental (== user error) deletion.
Google documents it in Protecting projects from accidental deletion
I’d forgotten that I’d applied it to a key project and then had to Google the above to recall how it works.
PROJECTS=$(gcloud projects list --format="value(projectId)")
for PROJECT in ${PROJECTS}
do
gcloud alpha resource-manager liens list \
--project=${PROJECT}
done
Simply:
gcloud container get-server-config
gcloud container clusters create is a complex command. The --cluster-version flag, often combined with --release-channel in order to have Google maintain the master and node versions, takes values that are provided by gcloud container get-server-config.
The available Kubernetes versions differ by region and by zone:
gcloud container get-server-config \
--project=${PROJECT} \
--zone=us-west2-c
Yields:
channels:
- channel: RAPID
defaultVersion: 1.21.3-gke.2001
validVersions:
- 1.21.4-gke.301
- 1.21.3-gke.2001
Whereas:
gcloud container get-server-config \
--project=${PROJECT} \
--region=us-west2
Yields:
channels:
- channel: RAPID
defaultVersion: 1.21.4-gke.301
validVersions:
- 1.21.4-gke.1801
- 1.21.4-gke.301
I assume the divergence results from a controlled rollout of Kubernetes versions across regions and zones.
Comma-separated list of GCP Projects
PROJECTS=$(\
gcloud projects list \
--format='csv[no-heading,terminator=","](projectId)') && \
PROJECTS="${PROJECTS%,}" && \
echo ${PROJECTS}
From:
gcloud projects list
PROJECT_ID NAME PROJECT_NUMBER
foo foo 1234567890123
bar bar 1234567890123
baz baz 1234567890123
To:
foo,bar,baz
Which you may find useful when you need to pass a list of Project IDs to some command-line flag.
Don't ignore the (hidden) ignore files
Don’t forget to add appropriate ignore files…
.dockerignore when using Docker
.gitignore when using git
.gcloudignore when using Google Cloud Platform
This week, I’ve been bitten twice by not using these.
They’re hidden files and so, unfortunately, they’re easier to forget.
.dockerignore
docker build ...
Without .dockerignore
Sending build context to Docker daemon 229.9MB
Because, even though Rust’s cargo creates a useful .gitignore, it doesn’t create .dockerignore and, as soon as you create ./target, you’re going to take up (likely unnecessary) build context space:
Tag: Gcp
`gcloud auth application-default unset-quota-project`
gcloud auth application-default includes a sub-command gcloud auth application-default set-quota-project
Unlike the similar gcloud config set and gcloud config unset pair, there’s no unset-quota-project.
However, it’s straightforward to undo set-quota-project because its primary side effect is to update ${HOME}/.config/gcloud/application_default_credentials.json
gcloud auth application-default set-quota-project ${PROJECT} \
--log-http
Request:
uri: https://cloudresourcemanager.googleapis.com/v1/projects/{PROJECT}:testIamPermissions?alt=json
method: POST
== headers start ==
b'accept': b'application/json'
b'authorization': --- Token Redacted ---
b'content-type': b'application/json'
b'x-goog-user-project': b'{PROJECT}'
== headers end ==
== body start ==
{"permissions": ["serviceusage.services.use"]}
== body end ==
Response:
`kubectl` auth changes in GKE v1.25
I was prompted by a question on Stack Overflow, “How to remove warning in kubectl with gcp auth plugin?”, to try this new mechanism for myself. It’s described by Google in the post Here’s what to know about changes to kubectl authentication coming in GKE v1.25.
One question I’d not considered is: how is the change manifested? Thinking about it, I realized it’s probably evident in the users section of kubectl config. A long time ago, I wrote a blog post Kubernetes Engine: kubectl config that explains how kubectl leverages (!) gcloud to get an access token for GKE.
Protect against accidental GCP Project Deletion
I’d forgotten about this feature but it’s a good way of protecting Google Cloud Platform (GCP) projects against accidental (== user error) deletion.
Google documents it in Protecting projects from accidental deletion
I’d forgotten that I’d applied it to a key project and then had to Google the above to recall how it works.
PROJECTS=$(gcloud projects list --format="value(projectId)")
for PROJECT in ${PROJECTS}
do
gcloud alpha resource-manager liens list \
--project=${PROJECT}
done
Simply:
Comma-separated list of GCP Projects
PROJECTS=$(\
gcloud projects list \
--format='csv[no-heading,terminator=","](projectId)') && \
PROJECTS="${PROJECTS%,}" && \
echo ${PROJECTS}
From:
gcloud projects list
PROJECT_ID NAME PROJECT_NUMBER
foo foo 1234567890123
bar bar 1234567890123
baz baz 1234567890123
To:
foo,bar,baz
Which you may find useful when you need to pass a list of Project IDs to some command-line flag.
Tag: Protoc
Visual Studio Code workspace-specific settings and proto path
I use Pbkit for Protobuf support. There are other extensions available.
I’d experienced problems with the tool when (correctly) import‘ing protobufs under a proto_path and learned that there’s a solution, and also that it’s possible to use workspace-specific settings.
With a ${workspaceFolder}/protos folder containing:
protos
└── greet
├── v1
│ └── greet.proto
└── v2
└── greet.proto
And, in which protos/greet/v2/greet.proto contains:
import "greet/v1/greet.proto";
I had been receiving import and reference errors until I read the Stack Overflow answer.
Tag: Visual-Studio-Code
Visual Studio Code workspace-specific settings and proto path
I use Pbkit for Protobuf support. There are other extensions available.
I’d experienced problems with the tool when (correctly) import‘ing protobufs under a proto_path and learned that there’s a solution, and also that it’s possible to use workspace-specific settings.
With a ${workspaceFolder}/protos folder containing:
protos
└── greet
├── v1
│ └── greet.proto
└── v2
└── greet.proto
And, in which protos/greet/v2/greet.proto contains:
import "greet/v1/greet.proto";
I had been receiving import and reference errors until I read the Stack Overflow answer.
rust-analyzer and tonic
Solution: https://github.com/rust-analyzer/rust-analyzer/issues/5799
References: https://jen20.dev/post/completion-of-generated-code-in-intellij-rust/
build.rs:
fn main() -> Result<(), Box<dyn std::error::Error>> {
// gRPC Healthcheck
tonic_build::compile_protos("proto/grpc_health_v1.proto")?;
Ok(())
}
But, because this compiles the proto(s) at build time, the imports aren’t available to Visual Studio Code and rust-analyzer
pub mod grpc_health_v1 {
tonic::include_proto!("grpc.health.v1");
}
// These imports would be unavailable and error
use grpc_health_v1::{
health_check_response::ServingStatus,
health_server::{Health, HealthServer},
HealthCheckRequest, HealthCheckResponse,
};
However,
"rust-analyzer.cargo.loadOutDirsFromCheck": true,
Tag: Microk8s
Ingress contains no valid backends
Using MicroK8s with the new observability addon which uses Helm to install kube-prometheus.
This results in various Resources, including several Services:
kubectl get services \
--namespace=observability
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP
kube-prom-stack-kube-prome-alertmanager ClusterIP 10.152.183.201 <none> 9093/TCP
kube-prom-stack-kube-prome-operator ClusterIP 10.152.183.44 <none> 443/TCP
kube-prom-stack-kube-prome-prometheus ClusterIP 10.152.183.206 <none> 9090/TCP
kube-prom-stack-kube-state-metrics ClusterIP 10.152.183.126 <none> 8080/TCP
prometheus-operated ClusterIP None <none> 9090/TCP
I’m using Tailscale Kubernetes Operator to expose MicroK8s services using Ingress to my tailnet.
ctr and crictl
Developing with Akri, it’s useful to be able to purge container images because images are pulled by tag rather than hash; once a tag is cached, a changed image behind that tag won’t be re-pulled.
The way to enumerate images used by MicroK8s is with either ctr or crictl. I’m unfamiliar with both of these but, here’s what I know:
MicroK8s
MicroK8s leverages both technologies.
Both require sudo
ctr is a sub-command of microk8s and uses --address for the socket
Tag: Tailscale
Ingress contains no valid backends
Using MicroK8s with the new observability addon which uses Helm to install kube-prometheus.
This results in various Resources, including several Services:
kubectl get services \
--namespace=observability
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP
kube-prom-stack-kube-prome-alertmanager ClusterIP 10.152.183.201 <none> 9093/TCP
kube-prom-stack-kube-prome-operator ClusterIP 10.152.183.44 <none> 443/TCP
kube-prom-stack-kube-prome-prometheus ClusterIP 10.152.183.206 <none> 9090/TCP
kube-prom-stack-kube-state-metrics ClusterIP 10.152.183.126 <none> 8080/TCP
prometheus-operated ClusterIP None <none> 9090/TCP
I’m using Tailscale Kubernetes Operator to expose MicroK8s services using Ingress to my tailnet.
Tag: Cargo
Rust dependencies
I was having an issue with k8s-openapi complaining:
None of the v1_* features are enabled on the k8s-openapi crate.
The k8s-openapi crate requires a feature to be enabled to indicate which version of Kubernetes it should support.
If you’re using k8s-openapi in a binary crate, enable the feature corresponding to the minimum version of API server that you want to support. In case your binary crate does not directly depend on k8s-openapi, add a dependency on k8s-openapi and enable the corresponding feature in it.
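Concretely, the fix is a feature flag in Cargo.toml. The feature name tracks the Kubernetes minor version, so both the crate version and feature below are illustrative rather than prescriptive:

```toml
[dependencies]
# Enable exactly one version feature matching your minimum API server;
# "0.17" and "v1_26" here are illustrative examples.
k8s-openapi = { version = "0.17", features = ["v1_26"] }
```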
Tag: Rust
Rust dependencies
I was having an issue with k8s-openapi complaining:
None of the v1_* features are enabled on the k8s-openapi crate.
The k8s-openapi crate requires a feature to be enabled to indicate which version of Kubernetes it should support.
If you’re using k8s-openapi in a binary crate, enable the feature corresponding to the minimum version of API server that you want to support. In case your binary crate does not directly depend on k8s-openapi, add a dependency on k8s-openapi and enable the corresponding feature in it.
Don't ignore the (hidden) ignore files
Don’t forget to add appropriate ignore files…
.dockerignore when using Docker
.gitignore when using git
.gcloudignore when using Google Cloud Platform
This week, I’ve been bitten twice by not using these.
They’re hidden files and so, unfortunately, they’re easier to forget.
.dockerignore
docker build ...
Without .dockerignore
Sending build context to Docker daemon 229.9MB
Because, even though Rust’s cargo creates a useful .gitignore, it doesn’t create .dockerignore and, as soon as you create ./target, you’re going to take up (likely unnecessary) build context space:
rust-analyzer and tonic
Solution: https://github.com/rust-analyzer/rust-analyzer/issues/5799
References: https://jen20.dev/post/completion-of-generated-code-in-intellij-rust/
build.rs:
fn main() -> Result<(), Box<dyn std::error::Error>> {
// gRPC Healthcheck
tonic_build::compile_protos("proto/grpc_health_v1.proto")?;
Ok(())
}
But, because this compiles the proto(s) at build time, the imports aren’t available to Visual Studio Code and rust-analyzer
pub mod grpc_health_v1 {
tonic::include_proto!("grpc.health.v1");
}
// These imports would be unavailable and error
use grpc_health_v1::{
health_check_response::ServingStatus,
health_server::{Health, HealthServer},
HealthCheckRequest, HealthCheckResponse,
};
However,
"rust-analyzer.cargo.loadOutDirsFromCheck": true,
Using tonic
Microsoft’s akri uses tonic to provide gRPC.
For the gRPC Health-checking Protocol proto:
./proto/grpc_health_v1.proto:
syntax = "proto3";
package grpc.health.v1;
service Health {
rpc Check(HealthCheckRequest) returns (HealthCheckResponse);
rpc Watch(HealthCheckRequest) returns (stream HealthCheckResponse);
}
message HealthCheckRequest {
string service = 1;
}
message HealthCheckResponse {
enum ServingStatus {
UNKNOWN = 0;
SERVING = 1;
NOT_SERVING = 2;
SERVICE_UNKNOWN = 3; // Used only by the Watch method.
}
ServingStatus status = 1;
}
Tonic supports compiling protos using build.rs:
fn main() -> Result<(), Box<dyn std::error::Error>> {
// compile refers to the path
tonic_build::compile_protos("proto/grpc_health_v1.proto")?;
Ok(())
}
And then use‘ing these:
gRPC Healthchecking in Rust
Golang
Go provides an implementation grpc_health_v1 of the gRPC Health-checking Protocol proto.
This is easily implemented:
package main
import (
"log"
"net"

pb "github.com/DazWilkin/.../protos"
"google.golang.org/grpc"
"google.golang.org/grpc/health"
healthpb "google.golang.org/grpc/health/grpc_health_v1"
)
func main() {
...
serverOpts := []grpc.ServerOption{}
grpcServer := grpc.NewServer(serverOpts...)
// Register the pb service
pb.RegisterSomeServer(grpcServer, NewServer())
// Register the healthpb service
healthpb.RegisterHealthServer(grpcServer, health.NewServer())
listen, err := net.Listen("tcp", *grpcEndpoint)
if err != nil {
log.Fatal(err)
}
log.Printf("[main] Starting gRPC Listener [%s]\n", *grpcEndpoint)
log.Fatal(grpcServer.Serve(listen))
}
Because it’s gRPC, the client also needs an implementation of the proto; one is provided: grpc-health-probe:
Tag: Gke
`kubectl` auth changes in GKE v1.25
I was prompted by a question on Stack Overflow, “How to remove warning in kubectl with gcp auth plugin?”, to try this new mechanism for myself. It’s described by Google in the post Here’s what to know about changes to kubectl authentication coming in GKE v1.25.
One question I’d not considered is: how is the change manifested? Thinking about it, I realized it’s probably evident in the users section of kubectl config. A long time ago, I wrote a blog post Kubernetes Engine: kubectl config that explains how kubectl leverages (!) gcloud to get an access token for GKE.
gcloud container get-server-config
gcloud container clusters create is a complex command. The --cluster-version flag, often combined with --release-channel in order to have Google maintain the master and node versions, takes values that are provided by gcloud container get-server-config.
The available Kubernetes versions differ by region and by zone:
gcloud container get-server-config \
--project=${PROJECT} \
--zone=us-west2-c
Yields:
channels:
- channel: RAPID
defaultVersion: 1.21.3-gke.2001
validVersions:
- 1.21.4-gke.301
- 1.21.3-gke.2001
Whereas:
gcloud container get-server-config \
--project=${PROJECT} \
--region=us-west2
Yields:
channels:
- channel: RAPID
defaultVersion: 1.21.4-gke.301
validVersions:
- 1.21.4-gke.1801
- 1.21.4-gke.301
I assume the divergence results from a controlled rollout of Kubernetes versions across regions and zones.
Tag: Bash
`kubectl get events`
NAMESPACE=test
kubectl create namespace ${NAMESPACE}
kubectl create deployment kuard \
--image=gcr.io/kuar-demo/kuard-amd64:blue \
--port=8080 \
--namespace=${NAMESPACE}
Typically, I’d then use kubectl describe pod to check the Events section for any issues:
kubectl describe pod \
--selector=app=kuard \
--namespace=test
But, from the Pod’s name (without the pod/ prefix), you can:
NAME=$(\
kubectl get pod \
--selector=app=kuard \
--namespace=test \
--output=name) && \
NAME=${NAME#pod/} && \
echo ${NAME}
kubectl get events \
--field-selector=involvedObject.name=${NAME} \
--namespace=${NAMESPACE} \
--output=jsonpath='{range .items[*]}{.message}{"\n"}{end}'
NOTE
range‘ing over the items permits adding newlines (\n) after each entry. Using {.items[*].message} yields a less manageable result.
bash loops with multi-statement conditions
I wanted to pause a script until the status of 2 Cloud Run services became ready. In this case, I wanted to check for the status condition RoutesReady to be True.
My initial attempt is ghastly:
while [[ "True" != $(gcloud run services describe ...) && "True" != $(gcloud run services describe ...) ]]
do
...
done
NOTE In actuality, it was worse than that because I had
--project and --region flags and pumped the result through jq
I found an interesting comment by jonathan-leffler on this Stack Overflow answer suggesting that the condition in bash loops can actually be an (arbitrarily?) complex sequence of commands as long as the final statement returns true. Thanks, Jonathan!
Tag: Go
Recreating Go Module 'commit' paths
I’m spending some time trying to better understand best practices for Golang Modules and when used with GitHub’s Dependabot.
One practice I’m pursuing is to not (GitHub) Release Go Modules in separate repos that form part of a single application. In my scenario, I chose to not use a monorepo and so I have an application smeared across multiple repos. See GitHub help with dependency management
If a Module is not Released (on GitHub) then the go tooling versions the repo as:
Checking Go Modules direct dependencies for updates
I continue to be “challenged” grep’ing my Golang GitHub repos to check for module updates. I feel my code aging in cold storage and it bugs me.
Golang issues 40364: enable listing direct dependency updates includes this useful solution which filters indirect dependencies and returns a list of direct dependency updates (although presumably only 0→1 for major versions):
go list -f '{{if not .Indirect}}{{.}}{{end}}' -u -m all
I’ll take what I can get.
Tag: Modules
Recreating Go Module 'commit' paths
I’m spending some time trying to better understand best practices for Golang Modules and when used with GitHub’s Dependabot.
One practice I’m pursuing is to not (GitHub) Release Go Modules in separate repos that form part of a single application. In my scenario, I chose to not use a monorepo and so I have an application smeared across multiple repos. See GitHub help with dependency management
If a Module is not Released (on GitHub) then the go tooling versions the repo as:
Checking Go Modules direct dependencies for updates
I continue to be “challenged” grep’ing my Golang GitHub repos to check for module updates. I feel my code aging in cold storage and it bugs me.
Golang issues 40364: enable listing direct dependency updates includes this useful solution which filters indirect dependencies and returns a list of direct dependency updates (although presumably only 0→1 for major versions):
go list -f '{{if not .Indirect}}{{.}}{{end}}' -u -m all
I’ll take what I can get.
Golang, Containers and private repos
A smörgåsbord of guidance involving Golang modules, private repos and containers. Everything herein is documented elsewhere (I’ll provide links) but I wanted to consolidate the information primarily for my own benefit.
GOPRIVATE
Using private modules adds complexity because builders need to be able to access private modules. Customarily, as you’re hacking away, you’ll likely not encounter issues but, when you write a Dockerfile or develop some CI, you’ll encounter something of the form:
Tag: REST
Recreating Go Module 'commit' paths
I’m spending some time trying to better understand best practices for Golang Modules and when used with GitHub’s Dependabot.
One practice I’m pursuing is to not (GitHub) Release Go Modules in separate repos that form part of a single application. In my scenario, I chose to not use a monorepo and so I have an application smeared across multiple repos. See GitHub help with dependency management
If a Module is not Released (on GitHub) then the go tooling versions the repo as:
Tag: Gcr
`gcloud container images list` formatting
gcloud container images list --project=${PROJECT} does warn (!) but it’s easy to miss that it only includes results for gcr.io and not any other subdomain (e.g. us.gcr.io):
gcloud container images list \
--project=${PROJECT}
NAME
gcr.io/${PROJECT}/endpoints-runtime-serverless
Only listing images in gcr.io/${PROJECT}. Use --repository to list images in other repositories.
gcloud container images list \
--repository=us.gcr.io/${PROJECT}
NAME
us.gcr.io/${PROJECT}/foo
us.gcr.io/${PROJECT}/bar
us.gcr.io/${PROJECT}/baz
NOTE Because the repository is explicitly
us.gcr.io/${PROJECT}, it does not include the previous endpoints-runtime-serverless because that image is in gcr.io.
Tag: Image
`gcloud container images list` formatting
gcloud container images list --project=${PROJECT} does warn (!) but it’s easy to miss that it only includes results for gcr.io and not any other subdomain (e.g. us.gcr.io):
gcloud container images list \
--project=${PROJECT}
NAME
gcr.io/${PROJECT}/endpoints-runtime-serverless
Only listing images in gcr.io/${PROJECT}. Use --repository to list images in other repositories.
gcloud container images list \
--repository=us.gcr.io/${PROJECT}
NAME
us.gcr.io/${PROJECT}/foo
us.gcr.io/${PROJECT}/bar
us.gcr.io/${PROJECT}/baz
NOTE Because the repository is explicitly
us.gcr.io/${PROJECT}, it does not include the previous endpoints-runtime-serverless because that image is in gcr.io.
Tag: Repository
`gcloud container images list` formatting
gcloud container images list --project=${PROJECT} does warn (!) but it’s easy to miss that it only includes results for gcr.io and not any other subdomain (e.g. us.gcr.io):
gcloud container images list \
--project=${PROJECT}
NAME
gcr.io/${PROJECT}/endpoints-runtime-serverless
Only listing images in gcr.io/${PROJECT}. Use --repository to list images in other repositories.
gcloud container images list \
--repository=us.gcr.io/${PROJECT}
NAME
us.gcr.io/${PROJECT}/foo
us.gcr.io/${PROJECT}/bar
us.gcr.io/${PROJECT}/baz
NOTE Because the repository is explicitly
us.gcr.io/${PROJECT}, it does not include the previous endpoints-runtime-serverless because that image is in gcr.io.
Tag: Dependencies
Checking Go Modules direct dependencies for updates
I continue to be “challenged” grep’ing my Golang GitHub repos to check for module updates. I feel my code aging in cold storage and it bugs me.
Golang issues 40364: enable listing direct dependency updates includes this useful solution which filters indirect dependencies and returns a list of direct dependency updates (although presumably only 0→1 for major versions):
go list -f '{{if not .Indirect}}{{.}}{{end}}' -u -m all
I’ll take what I can get.
Tag: Projects
Comma-separated list of GCP Projects
PROJECTS=$(\
gcloud projects list \
--format='csv[no-heading,terminator=","](projectId)') && \
PROJECTS="${PROJECTS%,}" && \
echo ${PROJECTS}
From:
gcloud projects list
PROJECT_ID NAME PROJECT_NUMBER
foo foo 1234567890123
bar bar 1234567890123
baz baz 1234567890123
To:
foo,bar,baz
Which you may find useful when you need to pass a list of Project IDs to some command-line flag.
Tag: Container
Golang, Containers and private repos
A smörgåsbord of guidance involving Golang modules, private repos and containers. Everything herein is documented elsewhere (I’ll provide links) but I wanted to consolidate the information primarily for my own benefit.
GOPRIVATE
Using private modules adds complexity because builders need to be able to access private modules. Customarily, as you’re hacking away, you’ll likely not encounter issues but, when you write a Dockerfile or develop some CI, you’ll encounter something of the form:
Tag: Dockerfile
Golang, Containers and private repos
A smörgåsbord of guidance involving Golang modules, private repos and containers. Everything herein is documented elsewhere (I’ll provide links) but I wanted to consolidate the information primarily for my own benefit.
GOPRIVATE
Using private modules adds complexity because builders need to be able to access private modules. Customarily, as you’re hacking away, you’ll likely not encounter issues but, when you write a Dockerfile or develop some CI, you’ll encounter something of the form:
Tag: Pushover
Pushover w/ AlertManager
I’m using Pushover’s (generous) 30-day trial. IIUC thereafter (for personal use) the app’s $5 for a perpetual license. That seems very reasonable to me.
I find Prometheus’ documentation “light”. Everything’s there but the docs feel oriented to the power|frequent user. I use Prometheus infrequently and struggle to understand the docs.
The AlertManager configuration for Pushover is ok but I struggled to understand the reference to (Golang) templates:
# Notification title.
[ title: <tmpl_string> | default = '{{ template "pushover.default.title" . }}' ]
# Notification message.
[ message: <tmpl_string> | default = '{{ template "pushover.default.message" . }}' ]
# A supplementary URL shown alongside the message.
[ url: <tmpl_string> | default = '{{ template "pushover.default.url" . }}' ]
I’m familiar with Go’s templating but I was unclear how to interpret these configuration references. As I thought about it, I assumed these must reference default templates (shipped with AlertManager) and found them here:
Tag: Chrome
[Chrome] Service Workers
I was frustrated to discover a phantom service bound to port 8080.
If I browsed localhost:8080 in Chrome, I received a mostly blank screen that recalled when I’d used the WebThings gateway:

And the process responded to /metrics requests too:

I was flummoxed because I was not expecting this process to be running, could not find (and so could not kill it) and became concerned that it was something more nefarious:
Tag: Service-Workers
[Chrome] Service Workers
I was frustrated to discover a phantom service bound to port 8080.
If I browsed localhost:8080 in Chrome, I received a mostly blank screen that recalled when I’d used the WebThings gateway:

And the process responded to /metrics requests too:

I was flummoxed because I was not expecting this process to be running, could not find (and so could not kill it) and became concerned that it was something more nefarious:
Tag: Webthings
[Chrome] Service Workers
I was frustrated to discover a phantom service bound to port 8080.
If I browsed localhost:8080 in Chrome, I received a mostly blank screen that recalled when I’d used the WebThings gateway:

And the process responded to /metrics requests too:

I was flummoxed because I was not expecting this process to be running, could not find (and so could not kill it) and became concerned that it was something more nefarious:
Tag: Code
Visual Studio Code env vars
This is documented but, for some reason, I always forget it.
Visual Studio Code is awesome and, in particular when debugging, it’s useful to set environment variables.
I write a lot of Google Cloud code and so I’m frequently wanting:
launch.json:
{
"version": "0.2.0",
"configurations": [
{
...
"env": {
"GOOGLE_APPLICATION_CREDENTIALS": "...",
"PROJECT": "...",
}
}
]
}
And, through a combo of forgetfulness and laziness, I tend to duplicate the values from the host environment as strings in these files.
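The documented fix (the bit I keep forgetting) is VS Code’s variable substitution: ${env:NAME} resolves from the environment of the process that launched VS Code, so launch.json needn’t hard-code the values. Assuming PROJECT and GOOGLE_APPLICATION_CREDENTIALS are exported in that shell:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "env": {
        "GOOGLE_APPLICATION_CREDENTIALS": "${env:GOOGLE_APPLICATION_CREDENTIALS}",
        "PROJECT": "${env:PROJECT}"
      }
    }
  ]
}
```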
Tag: Env
Visual Studio Code env vars
This is documented but, for some reason, I always forget it.
Visual Studio Code is awesome and, in particular when debugging, it’s useful to set environment variables.
I write a lot of Google Cloud code and so I’m frequently wanting:
launch.json:
{
"version": "0.2.0",
"configurations": [
{
...
"env": {
"GOOGLE_APPLICATION_CREDENTIALS": "...",
"PROJECT": "...",
}
}
]
}
And, through a combo of forgetfulness and laziness, I tend to duplicate the values from the host environment as strings in these files.
Tag: Environment
Visual Studio Code env vars
This is documented but, for some reason, I always forget it.
Visual Studio Code is awesome and, in particular when debugging, it’s useful to set environment variables.
I write a lot of Google Cloud code and so I’m frequently wanting:
launch.json:
{
"version": "0.2.0",
"configurations": [
{
...
"env": {
"GOOGLE_APPLICATION_CREDENTIALS": "...",
"PROJECT": "...",
}
}
]
}
And, through a combo of forgetfulness and laziness, I tend to duplicate the values from the host environment as strings in these files.
Tag: Akri
Golang Kubernetes JSONPath
I’ve been spending some time learning Akri.
One proposal was to develop a webhook handler to check the YAML of Akri’s Configurations (CRDs). Configurations are used to describe Akri Brokers. They combine a Protocol reference (e.g. zeroconf) with a Kubernetes PodSpec (one or more containers), one of which references (using .resources.limits.{{PLACEHOLDER}}) the Akri device to be bound to the broker.
In order to validate the Configuration, one of Akri’s developers proposed using JSONPath as a way to ‘query’ Kubernetes configuration files. This is a clever suggestion.
Tag: Jsonpath
Golang Kubernetes JSONPath
I’ve been spending some time learning Akri.
One proposal was to develop a webhook handler to check the YAML of Akri’s Configurations (CRDs). Configurations are used to describe Akri Brokers. They combine a Protocol reference (e.g. zeroconf) with a Kubernetes PodSpec (one or more containers), one of which references (using .resources.limits.{{PLACEHOLDER}}) the Akri device to be bound to the broker.
In order to validate the Configuration, one of Akri’s developers proposed using JSONPath as a way to ‘query’ Kubernetes configuration files. This is a clever suggestion.
Tag: Imagepullsecrets
Kubernetes patching
Chatting with a developer about a question on Stack Overflow showed me an interesting use of imagePullSecrets that I’d not seen before. The container registry secret can be added to the default service account. This then enables e.g. kubectl run ... (which runs as the default service account) to access the private registry. Previously, I’ve resorted to creating Deployments that include imagePullSecrets to circumvent this challenge.
So, I have a secret:
kubectl get secret/ghcr --output=yaml
Yields:
Tag: Patch
Kubernetes patching
Chatting with a developer about a question on Stack Overflow showed me an interesting use of imagePullSecrets that I’d not seen before. The container registry secret can be added to the default service account. This then enables e.g. kubectl run ... (which runs as the default service account) to access the private registry. Previously, I’ve resorted to creating Deployments that include imagePullSecrets to circumvent this challenge.
So, I have a secret:
kubectl get secret/ghcr --output=yaml
Yields:
Tag: Secret
Kubernetes patching
Chatting with a developer about a question on Stack Overflow showed me an interesting use of imagePullSecrets that I’d not seen before. The container registry secret can be added to the default service account. This then enables e.g. kubectl run ... (which runs as the default service account) to access the private registry. Previously, I’ve resorted to creating Deployments that include imagePullSecrets to circumvent this challenge.
So, I have a secret:
kubectl get secret/ghcr --output=yaml
Yields:
Tag: Serviceaccount
Kubernetes patching
Chatting with a developer about a question on Stack Overflow showed me an interesting use of imagePullSecrets that I’d not seen before. The container registry secret can be added to the default service account. This then enables e.g. kubectl run ... (which runs as the default service account) to access the private registry. Previously, I’ve resorted to creating Deployments that include imagePullSecrets to circumvent this challenge.
So, I have a secret:
kubectl get secret/ghcr --output=yaml
Yields:
Tag: Avahi-Browse
ZeroConf
sudo systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
Loaded: loaded (/lib/systemd/system/avahi-daemon.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-11-20 09:26:13 PST; 14min ago
TriggeredBy: ● avahi-daemon.socket
Main PID: 1039 (avahi-daemon)
Status: "avahi-daemon 0.7 starting up."
Tasks: 2 (limit: 38333)
Memory: 2.3M
CGroup: /system.slice/avahi-daemon.service
├─1039 avahi-daemon: running [hades-canyon.local]
└─1098 avahi-daemon: chroot helper
avahi-browse --all
+ wlp6s0 IPv4 googlerpc-1 _googlerpc._tcp local
+ wlp6s0 IPv4 googlerpc _googlerpc._tcp local
+ enp5s0 IPv4 googlerpc-1 _googlerpc._tcp local
+ enp5s0 IPv4 googlerpc _googlerpc._tcp local
+ wlp6s0 IPv4 Google-Home-Mini-... _googlecast._tcp local
+ wlp6s0 IPv4 Google-Home-Mini-... _googlecast._tcp local
+ enp5s0 IPv4 Google-Home-Mini-... _googlecast._tcp local
+ wlp6s0 IPv4 [GUID] _googlezone._tcp local
+ enp5s0 IPv4 [GUID] _googlezone._tcp local
+ enp5s0 IPv4 [GUID] _googlezone._tcp local
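To go beyond browsing and resolve a specific service type to hostnames, addresses and ports, avahi-browse can resolve and then terminate once the initial results are in:

```shell
# Resolve instances of the Google Cast service type,
# exiting after the first batch of results
avahi-browse --resolve --terminate _googlecast._tcp
```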
Tag: Crictl
ctr and crictl
Developing with Akri, it’s useful to be able to purge container images because, once cached, images are pulled by tag rather than by hash and so upstream changes aren’t picked up.
Images used by MicroK8s can be enumerated with either ctr or crictl. I’m unfamiliar with both of these but here’s what I know:
MicroK8s
MicroK8s leverages both technologies.
Both require sudo
ctr is a sub-command of microk8s and uses --address for the socket
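A sketch of both approaches; the containerd socket path shown is the usual location for the MicroK8s snap, and the image reference is a placeholder:

```shell
# List and remove images via MicroK8s' bundled ctr
sudo microk8s ctr images ls
sudo microk8s ctr images rm <image-ref>

# Or via crictl, pointing at MicroK8s' containerd socket
sudo crictl \
  --runtime-endpoint=unix:///var/snap/microk8s/common/run/containerd.sock \
  images
sudo crictl \
  --runtime-endpoint=unix:///var/snap/microk8s/common/run/containerd.sock \
  rmi <image-ref>
```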
Tag: Rust-Analyzer
rust-analyzer and tonic
Solution: https://github.com/rust-analyzer/rust-analyzer/issues/5799
References: https://jen20.dev/post/completion-of-generated-code-in-intellij-rust/
build.rs:
fn main() -> Result<(), Box<dyn std::error::Error>> {
// gRPC Healthcheck
tonic_build::compile_protos("proto/grpc_health_v1.proto")?;
Ok(())
}
But, because this compiles the proto(s) at build time, the generated imports aren’t available to Visual Studio Code and rust-analyzer:
pub mod grpc_health_v1 {
tonic::include_proto!("grpc.health.v1");
}
// These imports would be unavailable and error
use grpc_health_v1::{
health_check_response::ServingStatus,
health_server::{Health, HealthServer},
HealthCheckRequest, HealthCheckResponse,
};
However, adding the following to Visual Studio Code’s settings.json resolves this:
"rust-analyzer.cargo.loadOutDirsFromCheck": true,
Using tonic
Microsoft’s akri uses tonic to provide gRPC.
For the gRPC Health-checking Protocol proto:
./proto/grpc_health_v1.proto:
syntax = "proto3";
package grpc.health.v1;
service Health {
rpc Check(HealthCheckRequest) returns (HealthCheckResponse);
rpc Watch(HealthCheckRequest) returns (stream HealthCheckResponse);
}
message HealthCheckRequest {
string service = 1;
}
message HealthCheckResponse {
enum ServingStatus {
UNKNOWN = 0;
SERVING = 1;
NOT_SERVING = 2;
SERVICE_UNKNOWN = 3; // Used only by the Watch method.
}
ServingStatus status = 1;
}
Tonic supports compiling protos using build.rs:
fn main() -> Result<(), Box<dyn std::error::Error>> {
// The proto path is relative to the crate root
tonic_build::compile_protos("proto/grpc_health_v1.proto")?;
Ok(())
}
And then use-ing the generated types:
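A minimal sketch, mirroring the include_proto pattern from the rust-analyzer post above; the module path grpc.health.v1 matches the proto’s package, and the type names come from the code tonic_build generates:

```rust
// Include the code generated at build time by tonic_build
pub mod grpc_health_v1 {
    tonic::include_proto!("grpc.health.v1");
}

// The generated types can then be imported like any other module
use grpc_health_v1::{
    health_check_response::ServingStatus,
    health_server::{Health, HealthServer},
    HealthCheckRequest, HealthCheckResponse,
};
```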
gRPC Healthchecking in Rust
Golang
Go provides an implementation, grpc_health_v1, of the gRPC Health-checking Protocol proto.
This is easily implemented:
package main
import (
	"log"
	"net"

	pb "github.com/DazWilkin/.../protos"

	"google.golang.org/grpc"
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)
func main() {
...
serverOpts := []grpc.ServerOption{}
grpcServer := grpc.NewServer(serverOpts...)
// Register the pb service
pb.RegisterSomeServer(grpcServer, NewServer())
// Register the healthpb service
healthpb.RegisterHealthServer(grpcServer, health.NewServer())
listen, err := net.Listen("tcp", *grpcEndpoint)
if err != nil {
log.Fatal(err)
}
log.Printf("[main] Starting gRPC Listener [%s]\n", *grpcEndpoint)
log.Fatal(grpcServer.Serve(listen))
}
Because it’s gRPC, the client also needs an implementation of the proto; one is provided, too: grpc-health-probe.
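A sketch of probing the server above; the address assumes the server’s gRPC endpoint is localhost:50051:

```shell
# Install the probe
go install github.com/grpc-ecosystem/grpc-health-probe@latest

# Check overall server health (omit -service to check the server itself)
grpc-health-probe -addr=localhost:50051
```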
