
libre.sh

libre.sh is a platform to manage many instances of different applications at scale.

Use Cases

The use cases directory lists things we try to achieve with libre.sh.

Glossary

  • Application: a web application that is usable by an end user (for instance: HedgeDoc, Discourse, …).
  • Object Store (S3 API “standard”): an HTTP API to store and retrieve objects.
  • PITR: Point In Time Recovery.

Personas

Cluster Operator

A Cluster Operator is a System Administrator or Site Reliability Engineer who transforms raw machines (physical or virtual) into a production Kubernetes cluster. This person typically has root access on the servers and on the Kubernetes API.

Application Operator

An Application Operator is less technical than a Cluster Operator and doesn't necessarily understand the command line interface. But through a friendly user interface, this person is able to manipulate high-level objects that represent the application.

End User

A user who interacts only with an application.

Architecture decision records

Systems

libre.sh runtime

A collection of controllers and services that are required to deploy application instances.

libre.sh runtime manager

The controller in charge of installing/configuring/upgrading the runtime.

Development

Requirements

  • at least 16GB of RAM
  • nix-shell installed
  • enough file watchers (see below)
  • allow binding ports 80 and 443 for non-privileged users (see below)

Quickstart

# Increase the number of file watchers:
# https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files
sudo sed -i -n -e '/^fs\.inotify\.max_user_watches/!p' -e '$afs.inotify.max_user_watches = 1048576' /etc/sysctl.conf
sudo sed -i -n -e '/^fs\.inotify\.max_user_instances/!p' -e '$afs.inotify.max_user_instances = 1024' /etc/sysctl.conf

# https://discuss.linuxcontainers.org/t/error-with-docker-inside-lxc-container/922/5
sudo sed -i -n -e '/^kernel\.keys\.maxkeys/!p' -e '$akernel.keys.maxkeys = 5000' /etc/sysctl.conf
# or, to change the setting for the current session only:
sudo sysctl -w 'kernel.keys.maxkeys = 5000'


# Allow unprivileged users to bind ports as low as 80 instead of the standard 1024.
# Kind needs to bind ports 80 and 443 when running with rootless podman.
sudo sed -i -n -e '/^net\.ipv4\.ip_unprivileged_port_start/!p' -e '$anet.ipv4.ip_unprivileged_port_start = 80' /etc/sysctl.conf
# or, to change the setting for the current session only:
sudo sysctl -w 'net.ipv4.ip_unprivileged_port_start = 80'

# Immediately apply the kernel configuration changes (or reboot if you prefer)
sudo sysctl --system
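
To confirm the new values are active, you can read the same keys back:

```shell
# Read back the sysctl keys set above to confirm they are active
sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances
sysctl kernel.keys.maxkeys net.ipv4.ip_unprivileged_port_start
```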

Enter the shell:

nix-shell

Create a Kubernetes cluster and start deploying libre.sh using Tilt:

kind create cluster --config kind-config.yaml
tilt up

Install the mkcert root CA in your system trust store (one-time setup, persists across cluster recreations):

mkcert -install

or on Archlinux:

sudo cp "$(mkcert -CAROOT)/rootCA.pem" /etc/ca-certificates/trust-source/anchors/mkcert-rootCA.pem
sudo trust extract-compat

If needed, relaunch each job that is still failing; in the end, the lsh-operator job should pass.

Then we can start using libre.sh.

# Install flux (doc: https://fluxcd.io/flux/installation/)
kubectl apply -f https://github.com/fluxcd/flux2/releases/latest/download/install.yaml

# Add Helm repositories
kubectl create -f ./cluster/repositories/flux-ks.yml

# Add libre.sh's GitRepository resource
kubectl create -f ./cluster/libresh-cluster.yml

# Check the `repositories` kustomization reconciliation status:
flux -n libresh-system get kustomizations

Then, we can install other tools (observability, etc.). Note that you shouldn't install ingress-nginx or cert-manager using Flux (as stated below) if you provisioned them with Tilt: you may encounter conflicts.
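
To check whether Tilt already provisioned those components before enabling the Flux variants, you can look for running pods (the namespace names below are assumptions based on the upstream defaults of each project):

```shell
# If these namespaces contain running pods, the components are already
# provisioned (e.g. by Tilt) and should not be installed again via Flux
kubectl get pods -n cert-manager
kubectl get pods -n ingress-nginx
```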

MinIO CLI access

To interact with the local MinIO instance from your host using mc or mcli:

export MINIO_USER=$(kubectl get secret -n libresh-system minio -o jsonpath="{.data.rootUser}" | base64 -d)
export MINIO_PASS=$(kubectl get secret -n libresh-system minio -o jsonpath="{.data.rootPassword}" | base64 -d)
mc alias set dev https://s3.dev.localhost $MINIO_USER $MINIO_PASS
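
Once the alias is set, a quick sanity check (this assumes the mkcert root CA is trusted on your host so TLS verification succeeds):

```shell
# List buckets on the local MinIO instance to verify the alias works
mc ls dev
```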

Cleaning up

When you're done, you can cleanup your environment.

Deleting the cluster:

kind delete cluster --name libresh-dev

Uninstalling CA:

mkcert -uninstall

Minimal install

kubectl create ns libresh-system

kubectl create -f -  << EOF
apiVersion: v1
kind: Secret
metadata:
  name: cluster-settings
  namespace: libresh-system
type: Opaque
stringData:
  CLUSTER_DOMAIN: my-cluster.my-domain.fr
  CLUSTER_EMAIL: admin@my-domain.fr
  CLUSTER_NAME: my-cluster
  DEFAULT_CLUSTERISSUER: letsencrypt
  # BUCKET_PROVIDER: data
EOF
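
You can verify the secret was created with the expected keys:

```shell
# Show the keys stored in the cluster-settings secret
kubectl -n libresh-system get secret cluster-settings -o jsonpath='{.data}'
```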

# Install flux (doc: https://fluxcd.io/flux/installation/)
kubectl apply -f https://github.com/fluxcd/flux2/releases/latest/download/install.yaml

# Add libre.sh's GitRepository resource
kubectl create -f ./cluster/libresh-cluster.yml

# Add Helm repositories
kubectl create -f ./cluster/repositories/flux-ks.yaml
kubectl create -f ./cluster/priorityclasses/flux-ks.yml
kubectl create -f ./cluster/components/networking/cert-manager/flux-ks.yml      # only if not using Tilt
kubectl create -f ./cluster/components/networking/ingress-nginx/flux-ks.yml     # only if not using Tilt

# Add the Zalando postgres operator, or only its CRD (the CRD is needed by the libre.sh operator)
kubectl create -f ./cluster/components/databases/postgres-zalando/flux-ks.yaml  # only if not using Tilt
# or, install the CRD only:
kubectl create -f https://raw.githubusercontent.com/zalando/postgres-operator/refs/heads/master/manifests/postgresql.crd.yaml

Then, you can use libre.sh objects. There are some examples in the config/samples directory.
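
As a sketch, applying one of those samples looks like this (the sample filename is a placeholder; list config/samples to see the actual files):

```shell
# See which sample manifests are available
ls ./config/samples
# Apply one of them (placeholder filename, pick a real one from the listing)
kubectl apply -f ./config/samples/<some-sample>.yaml
```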

Deploy CertManager ClusterIssuer

kubectl apply -f ./cluster/components/networking/cert-manager-issuers/self-signed.yaml

Deploy MinIO Tenant

# deploy minio operator
cd ./cluster/components/objectstore/minio/
kubectl create -f ./flux-ks.yml
cd tenant-example
cp ./config-example.env ./config.env
vi ./config.env
kubectl create ns minio
kubectl -n minio create secret generic --from-file=./config.env prod-storage-configuration
# deploy the minio tenant - this part is given as a rough example: read and modify carefully, but you get the idea
export CLUSTER_DOMAIN=my-cluster.my-domain.fr
envsubst < ./tenant-example.yaml > ./tenant.yaml
vi ./tenant.yaml
kubectl apply -f ./tenant.yaml
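
You can then watch the tenant pods come up in the minio namespace created above:

```shell
# Watch the MinIO tenant pods until they are Running
kubectl -n minio get pods -w
```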

Configure libresh-system

kubectl create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: libresh-config
  namespace: libresh-system
type: Opaque
stringData:
  object-storage.yml: |
    apiVersion: objectstorage.libre.sh/v1alpha1
    kind: ObjectStorageConfig
    mapping:
      data: my-s3
      pitr: my-s3
    providers:
    - name: my-s3
      host: CHANGE_ME
      insecure: false
      accessKey: CHANGE_ME
      secretKey: CHANGE_ME
  mailbox.yml: |
      apiVersion: config.libre.sh/v1alpha1
      kind: MailboxConfig
      spec:
        providers: []
  keycloak.yml: |
    default: ""
    providers: []
EOF

make install
IMG=registry.libre.sh/operator:v1.0.0-alpha.1 make deploy

Deploy observability

We'll deploy components in this order:

  • Loki
  • Thanos
  • Prometheus (with alertmanager)
  • Promtail
  • Grafana
# Install flux (doc: https://fluxcd.io/flux/installation/)
kubectl apply -f https://github.com/fluxcd/flux2/releases/latest/download/install.yaml

# Add Helm repositories
kubectl create -f ./cluster/repositories/flux-ks.yml

# Add libre.sh's GitRepository resource
kubectl create -f ./cluster/libresh-cluster.yml

# Create loki admin credentials and install Loki (https://grafana.com/oss/loki/, using flux)
kubectl create secret -n libresh-system generic loki.admin.creds --from-literal=username=admin --from-literal=password=$(tr -dc '[:alnum:]' < /dev/urandom | head -c 40)
kubectl apply -f ./cluster/components/observability/loki/flux-ks.yaml

# Install prometheus CRDs, needed by Thanos and Prometheus
kubectl apply -f ./cluster/crds/prometheus-crds/flux-ks.yaml

# Install Thanos (https://thanos.io/, using flux)
kubectl apply -f ./cluster/components/observability/thanos/flux-ks.yaml

# Install Prometheus (https://prometheus.io, via flux)
kubectl apply -f ./cluster/components/observability/prometheus-stack/flux-ks.yaml

# Install promtail (https://grafana.com/docs/loki/latest/send-data/promtail/, via flux)
kubectl apply -f ./cluster/components/observability/promtail/flux-ks.yaml

# Create grafana admin credentials, and install grafana (https://grafana.com/grafana/, via flux)
kubectl create secret -n libresh-system generic grafana.admin.creds --from-literal=username=admin --from-literal=password=$(tr -dc '[:alnum:]' < /dev/urandom | head -c 40)
kubectl apply -f ./cluster/components/observability/grafana/flux-ks.yaml

You can use the following command to watch flux reconciliation progress:

flux get kustomizations -n libresh-system

Grafana is available at grafana.my-cluster.my-domain.fr. Grafana credentials can be found using scripts/lsh-get-secrets.sh.
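
Alternatively, since the grafana.admin.creds secret was created above, the credentials can be read back directly:

```shell
# Read the Grafana admin credentials created earlier
kubectl -n libresh-system get secret grafana.admin.creds -o jsonpath='{.data.username}' | base64 -d; echo
kubectl -n libresh-system get secret grafana.admin.creds -o jsonpath='{.data.password}' | base64 -d; echo
```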

Deploy velero backups

kubectl apply -f https://raw.githubusercontent.com/vmware-tanzu/helm-charts/main/charts/velero/crds/schedules.yaml
kubectl apply -f ./cluster/components/backups/velero/flux-ks.yaml
# to sync creds of the bucket:
flux -n libresh-system suspend ks velero
flux -n libresh-system resume ks velero

Upgrade

Renovate bot runs regularly; it creates merge requests against the main branch.

Currently, a human has to accept them.

Once you are happy with a state of the main branch, you can tag a release.

Then, to update your cluster, you just need to edit the tag in the GitRepository resource:

kubectl -n libresh-system edit gitrepositories libresh-cluster

This will update all components managed by libre.sh.
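
Instead of an interactive edit, the tag can also be set non-interactively with a patch (the tag value below is a placeholder; spec.ref.tag is the standard Flux GitRepository field):

```shell
# Point the GitRepository at a release tag (v1.2.3 is a placeholder)
kubectl -n libresh-system patch gitrepository libresh-cluster \
  --type merge -p '{"spec":{"ref":{"tag":"v1.2.3"}}}'
```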

Update libre.sh operator on configured cluster

If you use tilt, a new build of the operator will be triggered and deployed automatically.

If you prefer the manual way, build a new image (via make docker-build), install the CRDs (via make install), and deploy (via make deploy):

# Build a docker image which will be deployed as `lsh-operator`
make docker-build && make install && make deploy

Use a local cache for container images

See docs/development/using-local-caches.md.

Add personal customizations/tools to your dev env using nix-shell

You can obtain a shell with the libre.sh tooling (shell.nix) plus your personal tools/customizations by using a local.nix file (there is an example: local.nix.example). Running nix-shell local.nix gives you both the libre.sh tooling and your own.

Known issues

MountVolume.SetUp failed for volume "config" : secret "libresh-config" not found

The libresh-config secret is created in the libresh-system namespace by a Tilt job named Minio Config.

Ensure the following builds are successful: cert-manager and uncategorized; then trigger Minio Config.
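
To check whether the secret exists once the job has run:

```shell
# The error disappears once this secret is present
kubectl -n libresh-system get secret libresh-config
```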