Golang Kubernetes JSONPath
I’ve been spending some time learning Akri.
One proposal was to develop a webhook handler to validate the YAML of Akri’s Configurations (CRDs). Configurations are used to describe Akri Brokers. They combine a protocol reference (e.g. zeroconf) with a Kubernetes PodSpec (one or more containers), one of which references (using .resources.limits.{{PLACEHOLDER}}) the Akri device to be bound to the broker.
In order to validate the Configuration, one of Akri’s developers proposed using JSONPath as a way to ‘query’ Kubernetes configuration files. This is a clever suggestion.
Don't ignore the (hidden) ignore files
Don’t forget to add appropriate ignore files…
.dockerignore when using Docker
.gitignore when using git
.gcloudignore when using Google Cloud Platform
This week, I’ve been bitten twice by not using these.
Because they’re hidden files, they’re unfortunately all the easier to forget.
.dockerignore
docker build ...
Without .dockerignore
Sending build context to Docker daemon 229.9MB
Even though Rust’s cargo creates a useful .gitignore, it doesn’t create a .dockerignore. So, as soon as cargo creates ./target, you’re going to take up (likely unnecessary) build context space.
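A .dockerignore alongside the Dockerfile keeps compiled artifacts and VCS metadata out of the build context; for a Rust project, something like:

```
target/
.git/
```

With this in place, docker build sends only the source tree to the daemon, not hundreds of megabytes of build output.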
GitHub Actions' Strategy Matrix
Yesterday, I was introduced to a useful feature of GitHub Actions which we’ll refer to as strategy matrix. I’m more familiar with Google Cloud Build but, to my knowledge, Cloud Build does not provide this feature.
The challenge is in providing an iterator for steps in e.g. a CI/CD platform.
Below is a summarized version of what I had. My (self-created) problem was that I had 4 container images to build, but the Dockerfile names didn’t exactly match the desired repository names: I had e.g. grpc.broker for the Dockerfile name and wanted e.g. grpc-broker. The principle, though, is more general than my naming challenge. The YAML below describes the same step multiple times; what I would like to do is range over some set of values.
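The repetition can be collapsed into a single job that ranges over a strategy matrix, with include pairing each Dockerfile with its repository name. A sketch: only grpc.broker/grpc-broker comes from my project; the second pair, registry path and file locations are placeholders:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - dockerfile: grpc.broker
            repository: grpc-broker
          - dockerfile: http.broker
            repository: http-broker
    steps:
      - uses: actions/checkout@v2
      - name: Build ${{ matrix.repository }}
        run: |
          docker build \
          --tag=ghcr.io/example/${{ matrix.repository }} \
          --file=./${{ matrix.dockerfile }} \
          .
```

GitHub Actions runs the job once per matrix entry, substituting ${{ matrix.* }} in each run.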
Kubernetes patching
Chatting with a developer about a question on Stack Overflow showed me an interesting use of imagePullSecrets that I’d not seen before. The container registry secret can be added to the default service account. This then enables e.g. kubectl run ... (which runs as the default service account) to access the private registry. Previously, I’ve resorted to creating Deployments that include imagePullSecrets to circumvent this challenge.
So, I have a secret:
kubectl get secret/ghcr --output=yaml
Yields:
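With the secret in place, it can be attached to the default service account so that pods running as that account can pull from the private registry. A sketch, assuming the secret is named ghcr as above:

```shell
kubectl patch serviceaccount default \
--patch '{"imagePullSecrets":[{"name":"ghcr"}]}'
```

kubectl get serviceaccount/default --output=yaml should then show the imagePullSecrets entry.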
ZeroConf
sudo systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
Loaded: loaded (/lib/systemd/system/avahi-daemon.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-11-20 09:26:13 PST; 14min ago
TriggeredBy: ● avahi-daemon.socket
Main PID: 1039 (avahi-daemon)
Status: "avahi-daemon 0.7 starting up."
Tasks: 2 (limit: 38333)
Memory: 2.3M
CGroup: /system.slice/avahi-daemon.service
├─1039 avahi-daemon: running [hades-canyon.local]
└─1098 avahi-daemon: chroot helper
avahi-browse --all
+ wlp6s0 IPv4 googlerpc-1 _googlerpc._tcp local
+ wlp6s0 IPv4 googlerpc _googlerpc._tcp local
+ enp5s0 IPv4 googlerpc-1 _googlerpc._tcp local
+ enp5s0 IPv4 googlerpc _googlerpc._tcp local
+ wlp6s0 IPv4 Google-Home-Mini-... _googlecast._tcp local
+ wlp6s0 IPv4 Google-Home-Mini-... _googlecast._tcp local
+ enp5s0 IPv4 Google-Home-Mini-... _googlecast._tcp local
+ wlp6s0 IPv4 [GUID] _googlezone._tcp local
+ enp5s0 IPv4 [GUID] _googlezone._tcp local
+ enp5s0 IPv4 [GUID] _googlezone._tcp local
ctr and crictl
Developing with Akri, it’s useful to be able to purge container images: once an image is cached, it is referenced by tag rather than by hash, so a changed image published under the same tag won’t be re-pulled.
Images used by MicroK8s can be enumerated with either ctr or crictl. I’m unfamiliar with both of these but, here’s what I know:
MicroK8s
MicroK8s leverages both technologies.
Both require sudo
ctr is a sub-command of microk8s and uses --address for the socket
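Putting that together, enumerating (and purging) images looks something like the following. The containerd socket path is my assumption of MicroK8s’ default; replace <image-ref> with an image from the list:

```shell
# Enumerate images via ctr (a microk8s sub-command)
sudo microk8s ctr images ls

# Enumerate images via crictl, pointing --runtime-endpoint at MicroK8s' containerd socket
sudo crictl --runtime-endpoint=unix:///var/snap/microk8s/common/run/containerd.sock images

# Purge a cached image so the next deployment pulls it afresh
sudo microk8s ctr images rm <image-ref>
```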
rust-analyzer and tonic
Solution: https://github.com/rust-analyzer/rust-analyzer/issues/5799
References: https://jen20.dev/post/completion-of-generated-code-in-intellij-rust/
build.rs:
fn main() -> Result<(), Box<dyn std::error::Error>> {
// gRPC Healthcheck
tonic_build::compile_protos("proto/grpc_health_v1.proto")?;
Ok(())
}
But, because this compiles the proto(s) at build time (into OUT_DIR), the generated modules aren’t visible to Visual Studio Code and rust-analyzer:
pub mod grpc_health_v1 {
tonic::include_proto!("grpc.health.v1");
}
// These imports would be unavailable and error
use grpc_health_v1::{
health_check_response::ServingStatus,
health_server::{Health, HealthServer},
HealthCheckRequest, HealthCheckResponse,
};
However, adding the following to rust-analyzer’s settings resolves this:
"rust-analyzer.cargo.loadOutDirsFromCheck": true,
Using tonic
Microsoft’s Akri uses tonic to provide gRPC.
For the gRPC Health-checking Protocol proto:
./proto/grpc_health_v1.proto:
syntax = "proto3";
package grpc.health.v1;
service Health {
rpc Check(HealthCheckRequest) returns (HealthCheckResponse);
rpc Watch(HealthCheckRequest) returns (stream HealthCheckResponse);
}
message HealthCheckRequest {
string service = 1;
}
message HealthCheckResponse {
enum ServingStatus {
UNKNOWN = 0;
SERVING = 1;
NOT_SERVING = 2;
SERVICE_UNKNOWN = 3; // Used only by the Watch method.
}
ServingStatus status = 1;
}
Tonic supports compiling protos using build.rs:
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Compile the proto at the given path (relative to the crate root)
tonic_build::compile_protos("proto/grpc_health_v1.proto")?;
Ok(())
}
The generated code is then pulled in with tonic’s include_proto! macro, as shown in the previous section.
gRPC Healthchecking in Rust
Golang
Go provides an implementation, grpc_health_v1, of the gRPC Health Checking Protocol proto.
This is easily implemented:
package main
import (
	"log"
	"net"

	pb "github.com/DazWilkin/.../protos"
	"google.golang.org/grpc"
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)
func main() {
...
serverOpts := []grpc.ServerOption{}
grpcServer := grpc.NewServer(serverOpts...)
// Register the pb service
pb.RegisterSomeServer(grpcServer, NewServer())
// Register the healthpb service
healthpb.RegisterHealthServer(grpcServer, health.NewServer())
listen, err := net.Listen("tcp", *grpcEndpoint)
if err != nil {
log.Fatal(err)
}
log.Printf("[main] Starting gRPC Listener [%s]\n", *grpcEndpoint)
log.Fatal(grpcServer.Serve(listen))
}
Because it’s gRPC, the client also needs an implementation of the proto; grpc-health-probe provides one:
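grpc-health-probe is a standalone client binary that issues a Check request against the server’s endpoint. A sketch, where the address is whatever *grpcEndpoint resolves to:

```shell
grpc_health_probe -addr=localhost:50051
```

A healthy service reports SERVING.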