2024-04-03 22:04:13 +02:00
parent 7e68609006
commit 0b373d31db
142 changed files with 7334 additions and 0 deletions

kubernetes/helm.md
# Helm
## Repository Management
COMMAND | DESCRIPTION
---|---
`helm repo list` | List Helm repositories
`helm repo update` | Update list of Helm charts from repositories
## Chart Management
COMMAND | DESCRIPTION
---|---
`helm search` | List all charts available in the configured repositories
`helm search <CHARTNAME>` | Search for a chart
`helm ls` | List all installed Helm charts
`helm ls --deleted` | List all deleted Helm charts
`helm ls --all` | List installed and deleted Helm charts
`helm inspect values <REPO>/<CHART>` | Inspect the variables in a chart
## Install/Delete Helm Charts
COMMAND | DESCRIPTION
---|---
`helm install --name <NAME> <REPO>/<CHART>` | Install a Helm chart
`helm install --name <NAME> --values <VALUES.YML> <REPO>/<CHART>` | Install a Helm chart and override variables
`helm status <NAME>` | Show status of Helm chart being installed
`helm delete --purge <NAME>` | Delete a Helm chart
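As a sketch of the override workflow (chart and value names below are illustrative), a custom values file might look like this:

```yaml
# values.yml -- hypothetical overrides for a wordpress chart
replicaCount: 2
service:
  type: LoadBalancer
```

Installed with `helm install --name my-wordpress --values values.yml stable/wordpress`, then checked with `helm status my-wordpress`. Note that these tables use Helm 2 syntax: in Helm 3 the `--name` flag is dropped (`helm install my-wordpress stable/wordpress`) and `helm delete --purge` becomes `helm uninstall`.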
## Upgrading Helm Charts
COMMAND | DESCRIPTION
---|---
`helm get values <NAME>` | Return the variables for a release
`helm upgrade --values <VALUES.YML> <NAME> <REPO>/<CHART>` | Upgrade the chart or variables in a release
`helm history <NAME>` | List release numbers
`helm rollback <NAME> 1` | Rollback to a previous release number
## Creating Helm Charts
COMMAND | DESCRIPTION
---|---
`helm create <NAME>` | Create a blank chart
`helm lint <NAME>` | Lint the chart
`helm package <NAME>` | Package the chart into a versioned archive (e.g. `<NAME>-0.1.0.tgz`)
`helm dependency update` | Install chart dependencies
## Chart Folder Structure
```
wordpress/
Chart.yaml # A YAML file containing information about the chart
LICENSE # OPTIONAL: A plain text file containing the license for the chart
README.md # OPTIONAL: A human-readable README file
requirements.yaml # OPTIONAL: A YAML file listing dependencies for the chart
values.yaml # The default configuration values for this chart
charts/ # A directory containing any charts upon which this chart depends.
templates/ # A directory of templates that, when combined with values,
# will generate valid Kubernetes manifest files.
templates/NOTES.txt # OPTIONAL: A plain text file containing short usage notes
```
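A minimal `Chart.yaml` for the structure above might look like this (all values illustrative; `apiVersion: v1` matches the Helm 2 era commands on this page, Helm 3 charts use `apiVersion: v2`):

```yaml
# Chart.yaml -- minimal chart metadata (illustrative values)
apiVersion: v1
name: wordpress
version: 0.1.0
description: A Helm chart for deploying WordPress
```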

kubernetes/k3s.md
# K3S
Lightweight [Kubernetes](kubernetes/kubernetes.md). Production ready, easy to install, half the memory, all in a binary less than 100 MB.
Project Homepage: [K3s.io](https://www.k3s.io/)
Documentation: [K3s Documentation](https://docs.k3s.io/)
---
## Installation
To install k3s, you can follow different approaches like setting up k3s with an **external database**, **embedded database**, or as a **single node**.
### K3s with external DB
Set up an HA K3s cluster backed by an external datastore such as MySQL, PostgreSQL, or etcd.
#### Install Database
Install [MariaDB](databases/mariadb.md).
#### Install Servers
```bash
curl -sfL https://get.k3s.io | sh -s - server \
--token=YOUR-SECRET \
--datastore-endpoint='mysql://user:pass@tcp(ipaddress:3306)/dbname' \
--node-taint CriticalAddonsOnly=true:NoExecute \
--tls-san your-dns-name --tls-san your-lb-ip-address
```
#### Node-Taint
By default, server nodes will be schedulable and thus your workloads can get launched on them. If you wish to have a dedicated control plane where no user workloads will run, you can use taints. The node-taint parameter will allow you to configure nodes with taints, for example `--node-taint CriticalAddonsOnly=true:NoExecute`.
#### SSL Certificates
To avoid certificate errors in such a configuration, you should install the server with the `--tls-san YOUR_IP_OR_HOSTNAME_HERE` option. This option adds an additional hostname or IP as a Subject Alternative Name in the TLS cert, and it can be specified multiple times if you would like to access via both the IP and the hostname.
#### Get a registered Address
TODO: WIP
#### Install Agents
TODO: WIP
```bash
curl -sfL https://get.k3s.io | sh -s - agent \
--server https://your-lb-ip-address:6443 \
--token YOUR-SECRET
```
### K3s with embedded DB
Set up an HA K3s cluster that leverages a built-in distributed database.
TODO: WIP
#### Install first Server
TODO: WIP
```bash
curl -sfL https://get.k3s.io | sh -s - server \
--token=YOUR-SECRET \
--tls-san your-dns-name --tls-san your-lb-ip-address \
--cluster-init
```
As described above, install the server with the `--tls-san YOUR_IP_OR_HOSTNAME_HERE` option to avoid certificate errors; it can be specified multiple times.
#### Install additional Servers
TODO: WIP
```bash
curl -sfL https://get.k3s.io | sh -s - server \
--token=YOUR-SECRET \
--tls-san your-dns-name --tls-san your-lb-ip-address \
--server https://IP-OF-THE-FIRST-SERVER:6443
```
The `--cluster-init` flag initializes an HA cluster with an embedded etcd database. Fault tolerance requires an odd number of nodes (a minimum of three) to function.
Total Number of nodes | Failed Node Tolerance
---|---
1|0
2|0
3|1
4|1
5|2
6|2
...|...
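The tolerance column follows from etcd's quorum rule: a cluster of `n` members needs a majority of `floor(n/2) + 1` members alive, so it tolerates `floor((n-1)/2)` failures. A quick sketch of the arithmetic:

```shell
# etcd quorum math: an n-member cluster tolerates floor((n-1)/2) failed members
fault_tolerance() {
  echo $(( ($1 - 1) / 2 ))
}

fault_tolerance 3   # prints 1
fault_tolerance 5   # prints 2
```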
#### Get a registered Address
To achieve a highly available setup, you also need to load balance incoming connections between the server nodes.
TODO: WIP
#### Install Agents
You can also add additional agent nodes (without the server role) to this cluster.
```bash
curl -sfL https://get.k3s.io | sh -s - agent \
--server https://your-lb-ip-address:6443 \
--token YOUR-SECRET
```
### K3s single node
Set up K3s as a single node installation.
TODO: WIP
---
## Manage K3S
### Management on Server Nodes
`k3s kubectl`
### Download Kube Config
`/etc/rancher/k3s/k3s.yaml`
## Database Backups
### etcd snapshots
Stored in `/var/lib/rancher/k3s/server/db/snapshots`.

kubernetes/k9s.md
# K9s
K9s is a command-line interface that makes managing [Kubernetes Clusters](kubernetes/kubernetes.md) easier.
Core features of k9s include:
- Editing of resource manifests
- Shell into a Pod / Container
- Manage multiple Kubernetes clusters using one tool
More information and current releases of k9s can be found in the [GitHub repository](https://github.com/derailed/k9s).
---
## Installation
### On Linux
#### Find and download the latest release
Check the release page [here](https://github.com/derailed/k9s/releases) and search for the
fitting package type (e.g. Linux_x86_64). Copy the link to the archive of your choice.
Download and unpack the archive like in this example:
```bash
wget https://github.com/derailed/k9s/releases/download/v0.26.6/k9s_Linux_x86_64.tar.gz
tar -xvf k9s_Linux_x86_64.tar.gz
```
#### Install k9s
```bash
sudo install -o root -g root -m 0755 k9s /usr/local/bin/k9s
```
---
## Commands
### Cluster selection
As soon as you've started k9s, you can use a number of commands to interact with your selected
cluster (the context you have selected in your current shell environment).
You can change the cluster you want to work with at any time by typing `:context`. A list of
available cluster configurations appears; select the cluster to connect to with the
arrow keys and confirm the context to be used by pressing enter.
### General command structure
**Menu**
You can switch between the resource types shown using a text menu. Press `:`
to bring up this menu, then type the resource type you want to switch to
(e.g. `pod`, `services`, ...) and press the enter key to finish the command.
**Selection**
Selections are made with the arrow keys. To confirm your selection or to show more information,
use the enter key again. For instance, you can select a pod with the arrow keys and type enter
to "drill down" in that pod and view the running containers in it.
**Filter and searches**
In nearly every screen of k9s, you can apply filters or search for something (e.g. in the log output
of a pod). This can be done by pressing `/` followed by the search / filter term. Press enter to apply
the filter / search.
In some screens there are also shortcuts for namespace filters bound to the number keys, where `0`
always shows all namespaces.
### Useful shortcuts and commands
| Command     | Comment                                                                          | Comparable kubectl command                                                |
|-------------|--------------------------------------------------------------------------------|---------------------------------------------------------------------------|
| `:pod` | Switches to the pod screen, where you can see all pods on the current cluster. | `kubectl get pods --all-namespaces` |
| `:services` | Switches to the service screen, where you can see all services. | `kubectl get services --all-namespaces` |
| `ctrl`+`d` | Delete a resource. | `kubectl delete <resource> -n <namespace>` |
| `ctrl`+`k` | Kill a resource (no confirmation) | |
| `s`         | On the Pod screen, opens a shell into the selected pod.                          | `kubectl exec -n <namespace> <pod_name> -c <container_name> -- /bin/bash` |
| `l` | Show the log output of a pod. | `kubectl logs -n <namespace> <pod_name>` |

kubernetes/kind.md
# Kind
Using the Kind project, you are able to easily deploy a Kubernetes cluster on top of Docker as Docker containers. Kind will spawn separate containers which show up as the Kubernetes nodes. In this documentation, you can find some examples, as well as a link to an Ansible playbook which can do the cluster creation / deletion for you. This document only describes the basics of Kind. To find more detailed information, you can check the [official Kind documentation](https://kind.sigs.k8s.io/docs/user/quick-start/).
Kind is ideal to use in a local development environment or even during a build pipeline run.
## Installation on Linux
Since Kind deploys Docker containers, it needs to have a Container engine (like Docker) installed.
Installing Kind can be done by downloading the latest available release / binary for your platform:
```bash
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.16.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
```
## Cluster management
### Cluster creation
You have to provide a configuration file which tells Kind how you want your Kubernetes cluster to be deployed. Find an example configuration file below:
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: testcluster
# 1 control plane node and 2 workers
nodes:
# the control plane node config
- role: control-plane
# the two workers
- role: worker
- role: worker
```
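If workloads on the cluster need to be reachable from the host (e.g. an ingress controller), the same config can be extended with `extraPortMappings`; a sketch (the port numbers are arbitrary):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: testcluster
nodes:
- role: control-plane
  extraPortMappings:
  # forward host port 8080 to container port 80 on this node
  - containerPort: 80
    hostPort: 8080
    protocol: TCP
- role: worker
- role: worker
```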
Create the cluster by the following command:
```bash
kind create cluster --config kind-cluster-config.yaml
Creating cluster "testcluster" ...
Ensuring node image (kindest/node:v1.25.2)
Preparing nodes
Writing configuration
Starting control-plane
Installing CNI
Installing StorageClass
Joining worker nodes
Set kubectl context to "kind-testcluster"
You can now use your cluster with:
kubectl cluster-info --context kind-testcluster
Not sure what to do next? Check out https://kind.sigs.k8s.io/docs/user/quick-start/
```
Checking the running Docker containers, you can see the following:
```bash
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ac14d8c7a3c9 kindest/node:v1.25.2 "/usr/local/bin/entr..." 2 minutes ago Up About a minute testcluster-worker2
096dd4bf1718 kindest/node:v1.25.2 "/usr/local/bin/entr..." 2 minutes ago Up About a minute 127.0.0.1:42319->6443/tcp testcluster-control-plane
e1ae2d701394 kindest/node:v1.25.2 "/usr/local/bin/entr..." 2 minutes ago Up About a minute testcluster-worker
```
### Interacting with your cluster
You may have multiple Kind clusters deployed at the same time. To get a list of running clusters, you can use the following command:
```bash
kind get clusters
kind
kind-2
```
After cluster creation, the Kubernetes context is set automatically to the newly created cluster. In order to set the currently used kubeconfig, you may use some tooling like [kubectx](https://github.com/ahmetb/kubectx). You may also set the current context used by `kubectl` with the `--context` option, which refers to the Kind cluster name.
### Cluster deletion
To delete a Kind cluster, you can use the following command. Kind will also delete the kubeconfig entry of the deleted cluster, so you don't need to do this yourself.
```bash
kind delete cluster -n testcluster
Deleting cluster "testcluster" ...
```
## Further information
More examples and tutorials regarding Kind can be found in the link list below:
- Creating an Ansible playbook to manage Kind cluster: [Lightweight Kubernetes cluster using Kind and Ansible](https://thedatabaseme.de/2022/04/22/lightweight-kubernetes-cluster-using-kind-and-ansible/)

kubernetes/kubectl.md
# Kubectl
Kubectl is a command line tool for communicating with a [Kubernetes Cluster](kubernetes/kubernetes.md)'s control plane, using the Kubernetes API.
Documentation: [Kubectl Reference](https://kubernetes.io/docs/reference/kubectl/)
---
## Installation
### On Windows (PowerShell)
Install Kubectl with [chocolatey](tools/chocolatey.md):
```
choco install kubernetes-cli
```
### On Linux
> [!INFO] Installing on WSL2
> On WSL2 it's recommended to install Docker Desktop [[docker-desktop]], which automatically comes with kubectl.
#### Download the latest release
```bash
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
```
#### Install kubectl
```bash
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
```
### On macOS
Install Kubectl with [homebrew](tools/homebrew.md):
```zsh
brew install kubernetes-cli
```
---
## Config Management
### Multiple Config Files
### On Windows (PowerShell)
```powershell
$env:KUBECONFIG = "$HOME/.kube/prod-k8s-clcreative-kubeconfig.yaml;$HOME/.kube/infra-home-kube-prod-1.yml;$HOME/.kube/infra-home-kube-demo-1.yml;$HOME/.kube/infra-cloud-kube-prod-1.yml"
```
### On Linux
```bash
export KUBECONFIG=~/.kube/kube-config-1.yml:~/.kube/kube-config-2.yml
```
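Like `PATH`, `KUBECONFIG` is simply a colon-separated list of file paths, which `kubectl` merges at runtime; a quick sketch of the composition (the file names are illustrative, and the files don't have to exist for the variable itself):

```shell
# KUBECONFIG is a colon-separated path list; kubectl merges the entries in order
export KUBECONFIG="$HOME/.kube/kube-config-1.yml:$HOME/.kube/kube-config-2.yml"

# show one entry per line
echo "$KUBECONFIG" | tr ':' '\n'
```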
Managing multiple config files manually can quickly become cumbersome. Below you can find a handy script which you can add to your shell rc file (e.g. `.bashrc` or `.zshrc`). The script automatically adds all found kubeconfigs to the `KUBECONFIG` environment variable.
The script was copied from [here](https://medium.com/@alexgued3s/multiple-kubeconfigs-no-problem-f6be646fc07d).
```bash
# If there's already a kubeconfig file in ~/.kube/config, import it and all its contexts
DEFAULT_KUBECONFIG_FILE="$HOME/.kube/config"
if test -f "${DEFAULT_KUBECONFIG_FILE}"; then
  export KUBECONFIG="$DEFAULT_KUBECONFIG_FILE"
fi

# Your additional kubeconfig files should be inside ~/.kube/config-files
ADD_KUBECONFIG_FILES="$HOME/.kube/config-files"
mkdir -p "${ADD_KUBECONFIG_FILES}"

OIFS="$IFS"
IFS=$'\n'
# note: the -name tests are grouped so -type f applies to both extensions
for kubeconfigFile in $(find "${ADD_KUBECONFIG_FILES}" -type f \( -name "*.yml" -o -name "*.yaml" \))
do
  export KUBECONFIG="$kubeconfigFile:$KUBECONFIG"
done
IFS="$OIFS"
```
Another helpful tool that makes changing and selecting the cluster context easier is
`kubectx`. You can download `kubectx` [here](https://github.com/ahmetb/kubectx).
:warning: The above script conflicts with `kubectx`, because `kubectx` can only work with one
kubeconfig file listed in the `KUBECONFIG` env var. If you want to use both, add the following
lines to your rc file.
```bash
# now we merge all configs to one
kubectl config view --merge --flatten > $HOME/.kube/merged-config
export KUBECONFIG="$HOME/.kube/merged-config"
```
---
## Commands
### Networking
Connect containers using Kubernetes internal DNS system:
`<service-name>.<namespace>.svc.cluster.local`
Troubleshoot Networking with a netshoot toolkit Container:
`kubectl run tmp-shell --rm -i --tty --image nicolaka/netshoot -- /bin/bash`
### Containers
Restart Deployments (Stops and Restarts all Pods):
`kubectl scale deploy <deployment> --replicas=0`
`kubectl scale deploy <deployment> --replicas=1`
Executing Commands on Pods:
`kubectl exec -it <PODNAME> -- <COMMAND>`
`kubectl exec -it generic-pod -- /bin/bash`
### Config and Cluster Management
COMMAND | DESCRIPTION
---|---
`kubectl cluster-info` | Display endpoint information about the master and services in the cluster
`kubectl config view` | Get the configuration of the cluster
### Resource Management
COMMAND | DESCRIPTION
---|---
`kubectl get all --all-namespaces` | List all resources in the entire Cluster
`kubectl delete <RESOURCE> <RESOURCENAME> --grace-period=0 --force` | Try to force the deletion of the resource
---
## List of Kubernetes Resources "Short Names"
Short Name | Long Name
---|---
`csr`|`certificatesigningrequests`
`cs`|`componentstatuses`
`cm`|`configmaps`
`ds`|`daemonsets`
`deploy`|`deployments`
`ep`|`endpoints`
`ev`|`events`
`hpa`|`horizontalpodautoscalers`
`ing`|`ingresses`
`limits`|`limitranges`
`ns`|`namespaces`
`no`|`nodes`
`pvc`|`persistentvolumeclaims`
`pv`|`persistentvolumes`
`po`|`pods`
`pdb`|`poddisruptionbudgets`
`psp`|`podsecuritypolicies`
`rs`|`replicasets`
`rc`|`replicationcontrollers`
`quota`|`resourcequotas`
`sa`|`serviceaccounts`
`svc`|`services`
---
## Logs and Troubleshooting
...
### Logs
...
### MySQL
`kubectl run -it --rm --image=mysql:5.7 --restart=Never mysql-client -- mysql -u USERNAME -h HOSTNAME -p`
### Networking
`kubectl run -it --rm --image=nicolaka/netshoot netshoot -- /bin/bash`
---
## Resources stuck in Terminating state
...
```sh
(
NAMESPACE=longhorn-demo-1
# expose the API server without auth on 127.0.0.1:8001
kubectl proxy &
# strip the finalizers from the namespace spec, then PUT it back via the finalize endpoint
kubectl get namespace $NAMESPACE -o json | jq '.spec = {"finalizers":[]}' > temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
)
```

# Kubernetes DNS
## DNS for Services and Pods
Kubernetes creates DNS records for Services and Pods. You can contact Services with consistent DNS names instead of IP addresses.
```
your-service.your-namespace.svc.cluster.local
```
Any Pods exposed by a Service have the following DNS resolution available:
```
your-pod.your-service.your-namespace.svc.cluster.local
```
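These names are plain string patterns, so they can be assembled anywhere; a minimal sketch (the service and namespace names are illustrative):

```shell
# assemble the cluster-internal DNS name of a Service
service="my-service"
namespace="default"
fqdn="${service}.${namespace}.svc.cluster.local"
echo "$fqdn"   # prints my-service.default.svc.cluster.local
```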
---
## Custom DNS Settings
### Edit coredns config map
Add an entry to the `Corefile: |` section of the `configmap/coredns` in the **kube-system** namespace.
```yml
.:53 {
# ...
}
import /etc/coredns/custom/*.server
```
### Add new config map
Example for local DNS server using the **clcreative.home** zone.
```yml
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns-custom
namespace: kube-system
data:
clcreative.server: |
clcreative.home:53 {
forward . 10.20.0.10
}
```

kubernetes/kubernetes.md
# Kubernetes
**Kubernetes** is an open-source container orchestration platform that automates the deployment, scaling, and management of applications in a containerized environment. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).