This commit is contained in:
2024-04-03 22:04:13 +02:00
parent 7e68609006
commit 0b373d31db
142 changed files with 7334 additions and 0 deletions

154
apps/argocd.md Normal file

@ -0,0 +1,154 @@
# Argo CD
**Argo CD** is a declarative, GitOps continuous delivery tool for **[Kubernetes](kubernetes/kubernetes.md)**. It follows the GitOps pattern: application definitions, configurations, and environments are declarative and version controlled, while application deployment and lifecycle management are automated, auditable, and easy to understand.
Documentation & Project Homepage: [Argo CD Docs](https://argo-cd.readthedocs.io/en/stable/)
---
## Installation
1. Install Argo CD on a **[Kubernetes](kubernetes/kubernetes.md) Cluster**, using **[kubectl](kubernetes/kubectl)**.
```bash
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
2. Add a **[Traefik](traefik/traefik.md) IngressRoute**.
```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: argocd-server
  namespace: argocd
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`argocd.example.com`)
      priority: 10
      services:
        - name: argocd-server
          port: 80
    - kind: Rule
      match: Host(`argocd.example.com`) && Headers(`Content-Type`, `application/grpc`)
      priority: 11
      services:
        - name: argocd-server
          port: 80
          scheme: h2c
  tls:
    certResolver: default
```
3. Disable internal TLS
Add the `--insecure` flag to the `argocd-server` command in the argocd-server deployment, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap.
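A minimal sketch of the ConfigMap approach with `kubectl`, assuming the default `argocd` namespace (restarting the deployment picks up the change):
```bash
kubectl -n argocd patch configmap argocd-cmd-params-cm \
  --type merge -p '{"data":{"server.insecure":"true"}}'
kubectl -n argocd rollout restart deployment argocd-server
```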
---
## Get the admin password
For Argo CD v1.8 and earlier, the initial password is set to the name of the server pod. For Argo CD v1.9 and later, the initial password is stored in a secret named `argocd-initial-admin-secret`.
```bash
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
```
---
## Configuration
### Add private GitHub Repositories
1. Create a GitHub token: https://github.com/settings/tokens
2. Add a new repository in ArgoCD via **[kubectl](kubernetes/kubectl)** or the GUI
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: repo-private-1
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  url: https://github.com/xcad2k/private-repo
  password: <github-token>
  username: not-used
```
3. Verify the new repository is connected (see the check below)
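A quick way to check, assuming the `argocd` CLI is installed and logged in (alternatively, look under **Settings → Repositories** in the GUI):
```bash
argocd repo list
```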
---
### Declarative Application and ApplicationSet
Apart from using the WebUI to add managed apps to ArgoCD, you can configure `Application`
and `ApplicationSet` resources. This lets you define not only ArgoCD itself as code, but also
which applications it should manage.
With apps defined as YAML via an `Application`, you can, for example, deploy the app within the
CI/CD pipeline that deploys your Argo instance.
There are two types of resources: `Application` and `ApplicationSet`. The main difference is
that an `ApplicationSet` supports so-called inline generators, which allow you to template your
`Application` definition. If you manage multiple clusters with ArgoCD and want an `Application`
deployed with cluster-specific parameters, use an `ApplicationSet`.
Below you find an example of an `Application` and an `ApplicationSet`.
**Application:**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  destination:
    namespace: default
    server: 'https://kubernetes.default.svc'
  source:
    path: kustomize-guestbook
    repoURL: 'https://github.com/argoproj/argocd-example-apps'
    targetRevision: HEAD
  project: default
  syncPolicy:
    automated:
      prune: false
      selfHeal: false
```
**ApplicationSet:**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  generators:
    - clusters: {} # This is a generator, specifically, a cluster generator.
  template:
    # This is a template Argo CD Application, but with support for parameter substitution.
    metadata:
      name: '{{name}}-guestbook'
    spec:
      project: "default"
      source:
        repoURL: https://github.com/argoproj/argocd-example-apps/
        targetRevision: HEAD
        path: kustomize-guestbook
      destination:
        server: '{{server}}'
        namespace: default
```
## Further information
More examples and tutorials regarding ArgoCD can be found in the link list below:
- Basic tutorial for installation and configuration: [Let loose the squid - Deploy ArgoCD the declarative way](https://thedatabaseme.de/2022/06/05/let-loose-the-squid-deploy-argocd-the-declarative-way/)
- Writing ArgoCD Plugins: [ArgoCD Custom Plugins](https://dev.to/tylerauerbeck/argocd-custom-plugins-creating-a-custom-plugin-to-process-openshift-templates-4p5m)

175
apps/bind9.md Normal file

@ -0,0 +1,175 @@
# BIND9
BIND9 (Berkeley Internet Name Domain version 9) is an open-source [[DNS]] (Domain Name System) software system. It is the most widely used DNS server software on the Internet and is maintained by the Internet Systems Consortium (ISC). BIND9 provides a robust and scalable platform for resolving domain names into IP addresses and vice versa, as well as supporting advanced DNS features such as [[DNSSEC]] (DNS Security Extensions), dynamic updates, and incremental zone transfers. BIND9 runs on a variety of operating systems, including [[Linux]], [[Unix]], and [[Windows]], and is highly configurable and extensible through the use of plugins and modules.
Project Homepage: https://www.isc.org/bind/
---
## Installation
ISC provides executables for Windows and packages for [Ubuntu](linux/distros/ubuntu.md), [CentOS](linux/distros/centos.md), [Fedora](linux/distros/fedora.md) and [Debian](linux/distros/debian.md) (BIND 9 ESV, Stable, and Development versions). Most operating systems also offer BIND 9 packages for their users. These may be built with a different set of defaults than the standard BIND 9 distribution, and some of them add a version number of their own that does not map exactly to the BIND 9 version.
### Ubuntu Linux
BIND9 is available in the Main repository. No additional repository needs to be enabled for BIND9.
```sh
sudo apt install bind9
```
### Ubuntu Docker
As part of the [Long Term Supported OCI Images](https://ubuntu.com/security/docker-images), Canonical offers BIND9 as a hardened and maintained [Docker](docker/docker.md) image.
```sh
docker run -d --name bind9-container -e TZ=UTC -p 30053:53 ubuntu/bind9:9.18-22.04_beta
```
---
## Configuration
BIND 9 uses a single configuration file called `named.conf`, which is typically located in either `/etc/bind`, `/etc/namedb` or `/usr/local/etc/namedb`.
The `named.conf` consists of `logging` and `options` blocks, and `category`, `channel`, `directory`, `file` and `severity` statements.
### Named Config
```conf
options {
    ...
};

zone "domain.tld" {
    type primary;
    file "domain.tld";
};
```
### Zone File
Depending on the functionality of the system, one or more `zone` files are required.
```conf
; base zone file for domain.tld
$TTL 2d    ; default TTL for zone
$ORIGIN domain.tld. ; base domain-name
; Start of Authority RR defining the key characteristics of the zone (domain)
@         IN      SOA     ns1.domain.tld. hostmaster.domain.tld. (
                                2022121200 ; serial number
                                12h        ; refresh
                                15m        ; update retry
                                3w         ; expiry
                                2h         ; minimum
                                )
; name server RR for the domain
          IN      NS      ns1.domain.tld.
; mail server RRs for the zone (domain)
        3w IN      MX  10  mail.domain.tld.
; domain hosts includes NS and MX records defined above
; plus any others required
; for instance a user query for the A RR of joe.domain.tld will
; return the IPv4 address 192.168.254.6 from this zone file
ns1       IN      A       192.168.254.2
mail      IN      A       192.168.254.4
joe       IN      A       192.168.254.6
www       IN      A       192.168.254.7
```
#### SOA (Start of Authority)
A start of authority record is a type of resource record in the Domain Name System ([DNS](networking/dns.md)) containing administrative information about the zone, especially regarding zone transfers. The SOA record format is specified in RFC 1035.
```conf
@         IN      SOA     ns1.domain.tld. hostmaster.domain.tld. (
                                2022121200 ; serial number
                                12h        ; refresh
                                15m        ; update retry
                                3w         ; expiry
                                2h         ; minimum
                                )
```
---
## Forwarders
DNS forwarders are servers that resolve DNS queries on behalf of another DNS server.
To configure bind9 as a forwarding DNS server, you need to add a `forwarders` clause inside the `options` block. The `forwarders` clause specifies a list of IP addresses of other DNS servers that bind9 will forward queries to.
```conf
options {
    // ... other options ...
    forwarders {
        8.8.8.8; // Google Public DNS
        1.1.1.1; // Cloudflare DNS
    };
};
```
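To test that forwarding works, you can query the server directly; this assumes `dig` (from `bind9-dnsutils`) is installed and BIND is listening on localhost:
```sh
dig @127.0.0.1 www.example.com A +short
```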
---
## Access Control
To configure permissions in BIND9, you can use the `acl` statement to define access control lists, and then use the `allow-query` and `allow-transfer` statements to specify which hosts or networks are allowed to query or transfer zones.
```conf
acl "trusted" {
192.168.1.0/24;
localhost;
};
options {
// ...
allow-query { any; };
allow-transfer { "trusted"; };
// ...
};
zone "example.com" {
// ...
allow-query { "trusted"; };
// ...
};
```
In this example, we define an ACL called `trusted` that includes the 192.168.1.0/24 network and the local host. We then specify that hosts in this ACL are allowed to transfer zones, and that any host is allowed to query.
For the `example.com` zone, we specify that only hosts in the `trusted` ACL are allowed to query.
You can also use other ACL-related statements, such as `allow-recursion` and `allow-update`, to further control access to your DNS server.
---
## Dynamic Updates
Dynamic updates in BIND allow for the modification of DNS records in real-time without having to manually edit zone files.
### Secure DNS updates with TSIG Key
A TSIG (Transaction SIGnature) key is a shared secret key used to authenticate dynamic DNS updates between a DNS client and server. It provides a way to securely sign and verify DNS messages exchanged during dynamic updates.
To create a TSIG key for use with dynamic updates, the `tsig-keygen` command can be used.
```
tsig-keygen -a hmac-sha256
```
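The command prints a `key` block similar to the following (secret shortened), which you add to `named.conf`; unless you pass a name, the key is called `tsig-key`:
```conf
key "tsig-key" {
    algorithm hmac-sha256;
    secret "base64-secret==";
};
```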
To allow updates signed with this key, reference it in the zone's `allow-update` statement in `named.conf`. For example:
```
zone "example.com" {
type master;
file "example.com.zone";
allow-update { key "tsig-key"; };
};
```
In this example, the `allow-update` statement uses the TSIG key to allow authenticated updates to the "example.com" zone.
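Clients can then submit signed updates with `nsupdate`; a minimal sketch, assuming the generated key block was saved to `tsig.key` and the primary server is 192.168.254.2:
```sh
nsupdate -k tsig.key <<'EOF'
server 192.168.254.2
zone example.com
update add host1.example.com. 300 A 192.168.1.50
send
EOF
```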

83
apps/cert-manager.md Normal file

@ -0,0 +1,83 @@
# Cert-Manager
Cert-manager adds [certificates](misc/ssl-certs) and certificate issuers as resource types in [Kubernetes Clusters](kubernetes/kubernetes.md), and simplifies the process of obtaining, renewing and using those [certificates](misc/ssl-certs).
Documentation & Project Homepage: [Cert-Manager Docs](https://cert-manager.io/docs/)
---
## Self-Signed Certificates
### Upload existing CA.key and CA.crt files (Option 1)
1. Create a self-signed CA by generating a ca.key (private key) and a ca.crt (certificate)
(ca.key)
```bash
openssl genrsa -out ca.key 4096
```
(ca.crt)
```bash
openssl req -new -x509 -sha256 -days 365 -key ca.key -out ca.crt
```
2. Convert the files to one-line base64-encoded strings (the `-w 0` flag requires GNU `base64`, as found on Linux)
```bash
cat ca.key | base64 -w 0
cat ca.crt | base64 -w 0
```
3. Create a new ssl secret object using the strings
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ssl-issuer-secret
  # (Optional) Metadata
  # ---
  # namespace: your-namespace
type: Opaque
data:
  tls.crt: <base64-encoded-string>
  tls.key: <base64-encoded-string>
```
4. Create a new ClusterIssuer or Issuer object by using the ssl secret
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
  # (Optional) Metadata
  # ---
  # namespace: your-namespace
spec:
  ca:
    secretName: ssl-issuer-secret
```
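With the issuer in place, a `Certificate` resource can request a TLS secret from it; a minimal sketch (name, namespace, and DNS name are placeholders):
```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls
  namespace: default
spec:
  secretName: example-tls
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
  dnsNames:
    - example.internal
```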
### Create CA through Cert-manager (Option 2)
Create a new ClusterIssuer or Issuer object using the `selfSigned` attribute.
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: root-issuer
spec:
  selfSigned: {}
```
---
## Troubleshooting
### Common Errors
**DNS Record not yet propagated**
The error `Waiting for DNS-01 challenge propagation: DNS record for "your-dns-record" not yet propagated.` might occur in the `challenge` object. Cert-Manager creates a TXT record at the DNS provider and checks whether the record exists before issuing the certificate. In a split-DNS environment this can be a problem when internal DNS servers can't resolve the TXT record on the cloud DNS. You can use the `extraArgs` `--dns01-recursive-nameservers-only` and `--dns01-recursive-nameservers=8.8.8.8:53,1.1.1.1:53` to specify the DNS resolvers used for the challenge.
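If cert-manager is installed via its Helm chart, these flags can be passed through the chart's `extraArgs` value; a sketch, adjust to your setup:
```yaml
# values.yaml for the cert-manager Helm chart
extraArgs:
  - --dns01-recursive-nameservers-only
  - --dns01-recursive-nameservers=8.8.8.8:53,1.1.1.1:53
```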
**No solver found**
The error `Failed to determine a valid solver configuration for the set of domains on the Order: no configured challenge solvers can be used for this challenge` might occur in the `order` object when no solver can be found for the DNS hostname. Make sure your solvers have a correct `dnsZones` configuration that matches the DNS hostname's zone.


@ -0,0 +1,85 @@
# Cloudflare Tunnel
##### Protect your web servers from direct attack
From the moment an application is deployed, developers and IT spend time locking it down: configuring ACLs, rotating IP addresses, and using clunky solutions like GRE tunnels.
There's a simpler and more secure way to protect your applications and web servers from direct attacks: Cloudflare Tunnel.
Ensure your server is safe, no matter where it's running: public cloud, private cloud, Kubernetes cluster, or even a Mac mini under your TV.
### Doing everything via the CLI
Install the Cloudflare Tunnel service (`cloudflared`).
In this example, the installation is done on an Ubuntu machine.
```
wget -q https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb && sudo dpkg -i cloudflared-linux-amd64.deb
```
Running the following command prints a URL. Open it and log in to Cloudflare.
```
cloudflared tunnel login
```
Once Cloudflare is connected, a `cert.pem` is downloaded.
Make a note of its location.
Create the tunnel.
Replace `<NAME>` with the name you want for the tunnel.
```
cloudflared tunnel create <NAME>
# Take a note where your tunnel credentials are saved.
```
Create a configuration file in the `.cloudflared` directory.
```
nano /home/$USER/.cloudflared/config.yaml
```
Set the following lines (use your own tunnel ID; the credentials JSON file was created in the previous step).
```
tunnel: Your-Tunnel-Id
credentials-file: /home/$USER/.cloudflared/1d4537b6-67b9-4c75-a022-ce805acd5c0a.json # the JSON file from the previous step
```
Add your first site, e.g. `example.com`.
```
cloudflared tunnel route dns <name of the tunnel> <example.com>
```
Create the ingress rules.
Add them to the `config.yaml` file in your `.cloudflared` directory.
```
ingress:
  - hostname: example.com
    service: http://internalip:80
  - hostname: sub.example.com
    service: http://internalip:88
  - service: http_status:404 # this is required as a 'catch-all'
```
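Optionally, you can let `cloudflared` validate the ingress rules in the config file before starting the tunnel:
```
cloudflared tunnel ingress validate
```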
Start the tunnel.
```
cloudflared tunnel run <name of your tunnel>
```
Install a service so the tunnel runs automatically.
```
cloudflared service install
```
Start and enable the service.
```
systemctl enable --now cloudflared
```

7
apps/grafana.md Normal file

@ -0,0 +1,7 @@
# Grafana
Operational dashboards for your data here, there, or anywhere
Project Homepage: [Grafana Homepage](https://grafana.com)
Documentation: [Grafana Docs](https://grafana.com/docs/)
---

92
apps/kasm.md Normal file

@ -0,0 +1,92 @@
# KASM Workspaces
Streaming containerized apps and desktops to end-users. The Workspaces platform provides enterprise-class orchestration, data loss prevention, and web streaming technology to enable the delivery of containerized workloads to your browser.
---
## Add self-signed SSL Certificates
...
1. Stop the kasm services
```
sudo /opt/kasm/bin/stop
```
2. Replace `kasm_nginx.crt` and `kasm_nginx.key` files
```
sudo cp <your_cert> /opt/kasm/current/certs/kasm_nginx.crt
sudo cp <your_key> /opt/kasm/current/certs/kasm_nginx.key
```
3. Start the Kasm Services
```
sudo /opt/kasm/bin/start
```
---
## Custom Images
...
Registry
```
https://index.docker.io/v1/
```
...
### Add Images in KASM
> [!attention]
> You need to pass in a "tag" in the Docker Image. Otherwise kasm won't pull and start the image correctly.
### Docker Run Config
**Example**
```
{
  "cap_add": ["NET_ADMIN"],
  "devices": ["/dev/net/tun", "/dev/net/tun"],
  "sysctls": {"net.ipv6.conf.all.disable_ipv6": "0"}
}
```
---
## Troubleshooting
...
### KASM Agent
...
### Database
...
```
sudo docker exec -it kasm_db psql -U kasmapp -d kasm
```
### Delete invalid users from user_groups table
...
1. Check table for invalid entries
```
kasm=# select * from user_groups;
user_group_id | user_id | group_id
--------------------------------------+--------------------------------------+--------------------------------------
07c54672-739f-42d8-befc-bb2ba29fa22d | 71899524-5b31-41ac-a359-1aa8a008b831 | 68d557ac-4cac-42cc-a9f3-1c7c853de0f3
e291f1f7-86be-490f-9f9b-3a520d4d1dfa | 71899524-5b31-41ac-a359-1aa8a008b831 | b578d8e9-5585-430b-a70b-9935e8acaaa3
07b6f450-2bf5-48c0-9c5e-3443ad962fcb | | 68d557ac-4cac-42cc-a9f3-1c7c853de0f3
8c4c7242-b2b5-4a7a-89d3-e46d24456e5c | | b578d8e9-5585-430b-a70b-9935e8acaaa3
```
2. Delete invalid entries from the table:
```postgresql
delete from user_groups where user_id is null;
```
3. Verify table
```
kasm=# select * from user_groups;
user_group_id | user_id | group_id
--------------------------------------+--------------------------------------+--------------------------------------
07c54672-739f-42d8-befc-bb2ba29fa22d | 71899524-5b31-41ac-a359-1aa8a008b831 | 68d557ac-4cac-42cc-a9f3-1c7c853de0f3
e291f1f7-86be-490f-9f9b-3a520d4d1dfa | 71899524-5b31-41ac-a359-1aa8a008b831 | b578d8e9-5585-430b-a70b-9935e8acaaa3
(2 rows)
```

21
apps/longhorn.md Normal file

@ -0,0 +1,21 @@
# Longhorn
Longhorn is a lightweight, reliable and easy-to-use distributed block storage system for [Kubernetes](kubernetes/kubernetes.md).
Project Homepage: [Longhorn Homepage](https://longhorn.io)
Documentation: [Longhorn Docs](https://longhorn.io/docs/)
---
## Installation
You can install Longhorn via [Helm](tools/helm.md). To customize values, follow the [Chart Default Values](https://github.com/longhorn/longhorn/blob/master/chart/values.yaml)
```shell
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
```
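Assuming the chart was installed into the `longhorn-system` namespace as above, you can watch the pods come up with:
```shell
kubectl -n longhorn-system get pods
```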
---

91
apps/nginx.md Normal file

@ -0,0 +1,91 @@
# Nginx
Open source web and application server.
Project Homepage: [Nginx Homepage](https://www.nginx.com/)
Documentation: [Nginx Unit Docs](https://unit.nginx.org/)
---
## Basic configuration arguments and examples
Logging and debugging:
```nginx
error_log <file> <loglevel>
error_log logs/error.log;
error_log logs/debug.log debug;
error_log logs/error.log notice;
```
basic listening ports:
```nginx
listen <port> <options>
listen 80;
listen 443 ssl http2;
listen 443 http3 reuseport; # this is experimental!
```
header modifications:
```nginx
add_header <header> <values>
add_header Alt-svc '$http3=":<port>"; ma=<value>'; # this is experimental!

ssl_certificate / ssl_certificate_key
ssl_certificate cert.pem;
ssl_certificate_key cert.key;

server_name <domains>
server_name domain1.com *.domain1.com;

root <folder>
root /var/www/html/domain1;

index <file>
index index.php;

location <url> {
}

location / {
    root /var/www/html/domain1;
    index index.html index.htm;
}

location / {
    try_files $uri $uri/ /index.php$is_args$args;
}

location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    include fastcgi_params;
}

location ~ /\.ht {
    deny all;
}

location = /favicon.ico {
    log_not_found off;
    access_log off;
}

location = /robots.txt {
    log_not_found off;
    access_log off;
    allow all;
}

location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
    expires max;
    log_not_found off;
}
```
## Reverse Proxy
### Show Client's real IP
```nginx
server {
    server_name example.com;

    location / {
        proxy_pass http://localhost:4000;
        # Show client's real IP behind a proxy
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

51
apps/passbolt.md Normal file

@ -0,0 +1,51 @@
# Passbolt
Passbolt is a free and open-source password manager built for collaboration. Secure, flexible, and automation ready. Trusted by 10,000 organizations, including Fortune 500 companies, newspapers, governments and defence forces.
Project Homepage: https://passbolt.com/
---
## Set Up
### Create admin user
```sh
docker-compose exec passbolt su -m -c "/usr/share/php/passbolt/bin/cake \
passbolt register_user \
-u <your_email> \
-f <first_name> \
-l <last_name>\
-r admin" -s /bin/sh www-data
```
### Backup options
Back up the database container. Change `database-container` to the name of your Passbolt database container, and adjust the backup location:
```
docker exec -i database-container bash -c \
'mysqldump -u${MYSQL_USER} -p${MYSQL_PASSWORD} ${MYSQL_DATABASE}' \
> /path/to/backup.sql
```
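To restore, the dump can be piped back into the database container (a sketch, assuming the same container name and environment variables):
```
docker exec -i database-container bash -c \
'mysql -u${MYSQL_USER} -p${MYSQL_PASSWORD} ${MYSQL_DATABASE}' \
< /path/to/backup.sql
```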
### Backup server public and private keys
Change `passbolt-container` to the name of your Passbolt container, and adjust the backup location:
```
docker cp passbolt-container:/etc/passbolt/gpg/serverkey_private.asc \
/path/to/backup/serverkey_private.asc
docker cp passbolt-container:/etc/passbolt/gpg/serverkey.asc \
/path/to/backup/serverkey.asc
```
### Backup the avatars
```
docker exec -i passbolt-container \
tar cvfzp - -C /usr/share/php/passbolt/ webroot/img/avatar \
> passbolt-avatars.tar.gz
```

91
apps/portainer.md Normal file

@ -0,0 +1,91 @@
# Portainer
Easily deploy, configure and secure containers in minutes on [Docker](docker/docker.md), [Kubernetes](kubernetes/kubernetes.md), Swarm and Nomad in any cloud, datacenter or device.
Project Homepage: [Portainer](https://www.portainer.io)
Documentation: [Portainer Docs](http://documentation.portainer.io)
## Installation
There are two installation options: [Portainer CE](https://docs.portainer.io/start/install-ce/server/docker) (Community Edition) and [Portainer BE](https://docs.portainer.io/start/install/server/docker) (Business Edition). Up to three nodes of BE can be requested at no cost; documentation outlining the feature differences between BE and CE can be found [here](https://docs.portainer.io/).
>**Requirements:**
>*[Docker](../docker/docker.md)*, *[Docker Swarm](../docker/docker-swarm.md)*, or *[Kubernetes](../kubernetes/kubernetes.md)* must be installed to run Portainer. *[Docker Compose](../docker/docker-compose.md)* is also recommended but not required.

The examples below focus on installing Community Edition (CE) on Linux, but Windows and Windows Container Service installation instructions (with very little, if any, difference) can also be accessed from the hyperlinks above.
## Deploy Portainer CE in Docker on Linux
Create the volume that Portainer Server will use to store its database.
```shell
docker volume create portainer_data
```
Download and install the Portainer Server container.
```shell
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
```
Check whether the Portainer Server container has started by running:
```shell
docker ps
```
Log into your Portainer Server in a web browser at URL `https://your-server-address:9443`.
## Deploy Portainer CE in Docker Swarm on Linux
Retrieve the stack YML manifest.
```shell
curl -L https://downloads.portainer.io/ce2-19/portainer-agent-stack.yml -o portainer-agent-stack.yml
```
Use the downloaded YML manifest to deploy your stack.
```shell
docker stack deploy -c portainer-agent-stack.yml portainer
```
Check whether the Portainer Server container has started by running:
```shell
docker ps
```
Log into your Portainer Server in a web browser at URL `https://your-server-address:9443`.
## Add environments to Portainer
Various protocols can be used for [Portainer node monitoring](https://docs.portainer.io/admin/environments/add/docker) including:
- Portainer Agent (running as a container on the client)
- URL/IP address
- Socket
The method that requires least configuration, and least additional external accessibility, is the Portainer Agent.
Running a docker command on the client machine will install the Portainer Agent -- the appropriate docker command can be obtained in Portainer by:
1. Clicking "Environments" in the left side navigation pane
2. Clicking the blue button labeled "+Add Environment" in the top-right corner
3. Clicking the appropriate environment type (Docker Standalone, Docker Swarm, K8S, etc.)
4. Copying the docker string in the middle of the window and executing it on the client (an example is shown below)
5. Entering a name and IP (ending in :9001) in the fields below and clicking "Connect"
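For reference, the generated command for a Docker Standalone environment typically looks like the following (the image tag and port may differ):
```shell
docker run -d -p 9001:9001 --name portainer_agent --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker/volumes:/var/lib/docker/volumes portainer/agent:latest
```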
## Updating Portainer (host)
In this example the container name for Portainer is `portainer`; change this if necessary for your installation.
```shell
docker stop portainer && docker rm portainer && docker pull portainer/portainer-ce:latest && docker run -d -p 8000:8000 -p 9443:9443 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
```
## Updating Portainer Agent (clients)
```shell
docker stop portainer_agent && docker rm portainer_agent && docker pull portainer/agent:latest && docker run -d -p 9001:9001 --name=portainer_agent --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker/volumes:/var/lib/docker/volumes portainer/agent:latest
```

0
apps/prometheus.md Normal file

13
apps/rancher.md Normal file

@ -0,0 +1,13 @@
# Rancher
Rancher, the open-source multi-cluster orchestration platform, lets operations teams deploy, manage and secure enterprise [Kubernetes](kubernetes/kubernetes.md).
Project Homepage: [Rancher Homepage](https://www.rancher.com)
---
## Remove Installation
```
kubectl delete validatingwebhookconfiguration rancher.cattle.io
kubectl delete mutatingwebhookconfiguration rancher.cattle.io
```

6
apps/tailscale.md Normal file

@ -0,0 +1,6 @@
# Tailscale
Tailscale is a zero-config [VPN](networking/vpn.md) for building secure networks, powered by [WireGuard](networking/wireguard.md). Install on any device in minutes. Remote access from any network or physical location.
Project Homepage: https://tailscale.com
---


@ -0,0 +1,64 @@
# Teleport Assist
**'Teleport Assist'** is an artificial intelligence feature that utilizes facts about your infrastructure to help answer questions, generate command line scripts, and help you perform routine tasks on target nodes. At the moment, only SSH and bash are supported. Support for SQL, the AWS API, and Kubernetes is planned for the near future.
> **'Teleport Assist'** is currently experimental, available starting from Teleport v12.4 for Teleport Community Edition.
## Prerequisites
- You will need an active OpenAI account with GPT-4 API access as Teleport Assist relies on OpenAI services.
## Configuration
Copy the GPT-4 API key into the file `/etc/teleport/openai_key`, and set read-only permissions and change the file owner to the user that the Teleport Proxy Service uses by running the following commands:
```sh
chmod 400 /etc/teleport/openai_key
chown teleport:teleport /etc/teleport/openai_key
```
To enable Teleport Assist, you need to provide your OpenAI API key. On each Proxy and Auth Service host, perform the following actions.
If the host is running the Auth Service, add the following section:
```yaml
auth_service:
  assist:
    openai:
      api_token_path: /etc/teleport/openai_key
```
If the host is running the Proxy Service, add the following section:
```yaml
proxy_service:
  assist:
    openai:
      api_token_path: /etc/teleport/openai_key
```
Restart Teleport for the changes to take effect.
Make sure that your Teleport user has the `assistant` permission. By default, users with built-in `access` and `editor` roles have this permission. You can also add it to a custom role. Here is an example:
```yaml
kind: role
version: v6
metadata:
  name: assist
spec:
  allow:
    rules:
      - resources:
          - assistant
        verbs:
          - list
          - create
          - read
          - update
          - delete
```
## Usage
Now that you have Teleport Assist enabled, you can start using it by clicking the **'Assist'** button in the Teleport UI.


@ -0,0 +1,52 @@
# Teleport App Service
The **'Teleport App Service'** is a secure and convenient way to access internal applications from anywhere. It uses Teleport's built-in IAM system to authenticate users, and allows users to access applications from a web browser or command-line client. The **'Teleport App Service'** can be scaled to support numerous users and applications.
## Requirements
> To securely access applications, you need to obtain a valid [SSL/TLS certificate](../../misc/ssl-certs.md) for Teleport, and its application subdomains.
### Example: wildcard certificate in [Traefik](../traefik/traefik.md)
```yaml
labels:
- "traefik.http.routers.teleport.rule=HostRegexp(`teleport.your-domain`, `{subhost:[a-z]+}.teleport.your-domain`)"
- "traefik.http.routers.teleport.tls.domains[0].main=teleport.your-domain"
- "traefik.http.routers.teleport.tls.domains[0].sans=*.teleport.your-domain"
```
## Configuration
The following snippet shows the full YAML configuration of an Application Service appearing in the `teleport.yaml` configuration file:
```yaml
app_service:
  enabled: yes
  apps:
    - name: "grafana"
      description: "This is an internal Grafana instance"
      uri: "http://localhost:3000"
      public_addr: "grafana.teleport.example.com" # (optional)
      insecure_skip_verify: false # (optional) don't verify certificate
```
## Usage
To access a configured application in the Teleport UI, you can either:
- Go to the **Applications** tab and click the **Launch** button for the application that you want to access.
- Enter the subdomain of the application in your web browser, e.g. `https://grafana.teleport.example.com`.
### Relevant CLI commands
List the available applications:
```sh
tsh apps ls
```
Retrieve a short-lived X.509 certificate for CLI application access:
```sh
tsh apps login grafana
```


@ -0,0 +1,50 @@
# Teleport Configuration
In order to avoid breaking existing configurations, Teleport's configuration is versioned. The newer configuration version is `v3`. If a `version` is not specified in the configuration file, `v1` is assumed.
## Instance-wide settings
### Log Settings
```yaml
teleport:
  log:
    output: stderr
    severity: INFO
    format:
      output: text
```
## Proxy Service
```yaml
proxy_service:
  enabled: "yes"
  web_listen_addr: 0.0.0.0:3080
  # -- (Optional) when using reverse proxy
  # public_addr: ['your-server-url:443']
  https_keypairs: []
  acme: {}
  # -- (Optional) ACME
  # acme:
  #   enabled: "yes"
  #   email: your-email-address
```
## Auth Service
```yaml
auth_service:
  enabled: "yes"
  listen_addr: 0.0.0.0:3025
  proxy_listener_mode: multiplex
  cluster_name: your-server-url
```
## Additional Services Configuration
- [SSH Service](teleport-ssh)
- [Kubernetes Service](teleport-kubernetes)
- [Application Service](teleport-appservice)
- [Databases Service](teleport-databases)
- [Remote Desktop Service](teleport-remotedesktop)


@ -0,0 +1,3 @@
# Teleport Databases Service
WIP


@ -0,0 +1,3 @@
# Teleport Installation Guidelines
WIP


@ -0,0 +1,3 @@
# Teleport Kubernetes Service
WIP


@ -0,0 +1,3 @@
# Teleport Passwordless Auth
WIP


@ -0,0 +1,3 @@
# Remote Desktop Service
WIP


@ -0,0 +1,3 @@
# Teleport SSH Service
WIP

24
apps/teleport/teleport.md Normal file

@ -0,0 +1,24 @@
# Teleport
DevOps teams use **'Teleport'** to access [SSH](../../networking/ssh.md) and Windows servers, [Kubernetes](../../kubernetes/kubernetes.md), databases, AWS Console, and web applications. **'Teleport'** prevents phishing by moving away from static credentials towards ephemeral certificates backed by biometrics and hardware identity, and stops attacker pivots with the [Zero Trust design](../../misc/zerotrust.md).
Project homepage: [Teleport](https://goteleport.com/)
Documentation: [Teleport Docs](https://goteleport.com/docs/)
## Installation
[Teleport Installation Guidelines](teleport-installation)
## Configuration
[Teleport General Configuration Guidelines](teleport-configuration)
## Features
- [SSH Service](teleport-ssh)
- [Kubernetes Service](teleport-kubernetes)
- [Databases Service](teleport-databases)
- [Remote Desktop Service](teleport-remotedesktop)
- [Application Service](teleport-appservice)
- [Passwordless Auth](teleport-passwordless)
- [AI Assist](teleport-aiassist)

194
apps/traefik/traefik.md Normal file

@ -0,0 +1,194 @@
# Traefik
Traefik is an open-source Edge Router for [Docker](docker/docker.md) and [Kubernetes](kubernetes/kubernetes.md) that makes publishing your services a fun and easy experience. It receives requests on behalf of your system and finds out which components are responsible for handling them.
---
## Installation
### Docker
TODO: WIP
### Kubernetes
You can install Traefik via [Helm](tools/helm.md).
```sh
helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik
```
---
## Dashboard and API
WIP
---
## EntryPoints
WIP
### HTTP Redirection
WIP
```yaml
entryPoints:
  web:
    address: :80
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https
```
### HTTPS
WIP
```yaml
entryPoints:
  websecure:
    address: :443
```
---
## Routers
**traefik.http.routers.router.entrypoints**
Specifies the Entrypoint for the Router. Setting this to `traefik.http.routers.router.entrypoints: websecure` will expose the Container on the `websecure` entrypoint.
*When using websecure, you should enable `traefik.http.routers.router.tls` as well.*
**traefik.http.routers.router.rule**
Specify the Rules for the Router.
*This is an example for an FQDN: Host(`subdomain.your-domain`)*
**traefik.http.routers.router.tls**
Will enable TLS protocol on the router.
**traefik.http.routers.router.tls.certresolver**
Specifies the Certificate Resolver on the Router.
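A combined sketch of these labels on a Docker container (the router name `myapp`, hostname, and resolver name are placeholders):
```yml
- "traefik.enable=true"
- "traefik.http.routers.myapp.entrypoints=websecure"
- "traefik.http.routers.myapp.rule=Host(`myapp.your-domain`)"
- "traefik.http.routers.myapp.tls=true"
- "traefik.http.routers.myapp.tls.certresolver=yourresolver"
```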
### PathPrefix and StripPrefix
WIP
```yml
- "traefik.enable=true"
- "traefik.http.routers.nginx-test.entrypoints=websecure"
- "traefik.http.routers.nginx-test.tls=true"
- "traefik.http.routers.nginx-test.rule=PathPrefix(`/nginx-test/`)"
- "traefik.http.routers.nginx-test.middlewares=nginx-test"
- "traefik.http.middlewares.nginx-test.stripprefix.prefixes=/nginx-test"
```
Add an `/api` prefix to any requests to `myapidomain.com`.
Example:
- Request -> `myapidomain.com`
- Traefik translates this to `myapidomain.com/api` without the client seeing it
```yml
- "traefik.enable=true"
- "traefik.http.routers.myapp-secure-api.tls=true"
- "traefik.http.routers.myapp-secure-api.rule=Host(`myapidomain.com`)"
- "traefik.http.routers.myapp-secure-api.middlewares=add-api"
# Middleware
- "traefik.http.middlewares.add-api.addPrefix.prefix=/api"
```
---
## CertificatesResolvers
WIP
### dnsChallenge
DNS providers such as `cloudflare`, `digitalocean`, `civo`, and more are supported. To get a full list of supported providers, look up the [Traefik ACME Documentation](https://doc.traefik.io/traefik/https/acme/).
```yaml
certificatesResolvers:
  yourresolver:
    acme:
      email: "your-mail-address"
      dnsChallenge:
        provider: your-dns-provider
        resolvers:
          - "your-dns-resolver-ip-addr:53"
```
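Most providers read their API credentials from environment variables; a sketch for the `cloudflare` provider with Traefik running in Docker Compose (variable names differ per provider, see the ACME documentation):
```yml
environment:
  - CF_API_EMAIL=your-mail-address
  - CF_API_KEY=your-cloudflare-global-api-key
```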
---
## ServersTransport
### InsecureSkipVerify
If you want to skip the TLS verification from **Traefik** to your **Servers**, you can add the following section to your `traefik.yml` config file.
```yaml
serversTransport:
  insecureSkipVerify: true
```
---
## TLS Settings
Define TLS Settings in Traefik.
### defaultCertificates
```yaml
tls:
  stores:
    default:
      defaultCertificate:
        certFile: /your-traefik-cert.crt
        keyFile: /your-traefik-key.key
```
### options
Define TLS options like disabling the insecure TLS 1.0 and TLS 1.1 protocols.
```yaml
tls:
  options:
    default:
      minVersion: VersionTLS12
```
---
## Providers
WIP
### File
WIP
```yaml
providers:
  file:
```
### Docker
With `exposedByDefault: false`, Traefik won't expose any containers by default. Setting the label `traefik.enable=true` on a container will expose it.
```yaml
providers:
  docker:
    exposedByDefault: false
```
### Kubernetes
WIP
---
## Ingress
WIP
---
## Log
WIP
```yaml
log:
  level: ERROR
```
---
## Global
WIP
```yaml
global:
  checkNewVersion: true
  sendAnonymousUsage: false
```