This commit is contained in:
Hubert Cornet 2024-04-03 22:04:13 +02:00
parent 7e68609006
commit 0b373d31db
142 changed files with 7334 additions and 0 deletions

View File

@ -0,0 +1,19 @@
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -subj "/C=FR/ST=NORD/L=ROUBAIX/O=Tips-Of-Mine/OU=IT/CN=tips-of-mine.local" -key ca-key.pem -sha256 -out ca.pem
openssl genrsa -out server-key.pem 4096
openssl req -sha256 -new -subj "/C=FR/ST=NORD/L=ROUBAIX/O=Tips-Of-Mine/OU=IT/CN=tips-of-mine.local" -key server-key.pem -out server.csr
cat > v3-server.cnf <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1=tips-of-mine.local
DNS.2=tips-of-mine
DNS.3=hostname
IP.1=127.0.0.1
IP.2=@IP
EOF
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile v3-server.cnf
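# Optionally verify the signed server certificate against the CA
# (assumes ca.pem and server-cert.pem are in the current directory):
openssl verify -CAfile ca.pem server-cert.pem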

View File

@ -0,0 +1,20 @@
openssl genrsa -out key.pem 4096
openssl req -subj "/CN=client" -new -key key.pem -out client.csr
cat > v3-client.cnf <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = clientAuth
subjectAltName = @alt_names
[alt_names]
DNS.1=tips-of-mine.local
DNS.2=tips-of-mine
DNS.3=hostname
IP.1=127.0.0.1
IP.2=@IP
EOF
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile v3-client.cnf
chmod -v 0400 ca-key.pem key.pem server-key.pem
chmod -v 0444 ca.pem server-cert.pem cert.pem

View File

@ -0,0 +1,25 @@
mkdir -p /etc/docker/certs.d/example.com:2376
cp ca.pem server-cert.pem server-key.pem /etc/docker/certs.d/example.com:2376
nano /lib/systemd/system/docker.service
-> Remove '-H fd://' from 'ExecStart'
# Create /etc/docker/daemon.json
tee /etc/docker/daemon.json << EOL
{
  "tlsverify": true,
  "tlscacert": "/etc/docker/certs.d/example.com:2376/ca.pem",
  "tlscert": "/etc/docker/certs.d/example.com:2376/server-cert.pem",
  "tlskey": "/etc/docker/certs.d/example.com:2376/server-key.pem",
  "hosts": ["fd://", "0.0.0.0:2376"]
}
EOL
# Reload and restart
systemctl daemon-reload
systemctl restart docker
# Test client connection from another server
# copy ca.pem, cert.pem and key.pem to another machine
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem --host=example.com:2376 version

154
apps/argocd.md Normal file
View File

@ -0,0 +1,154 @@
# Argo CD
**Argo CD** is a declarative, GitOps continuous delivery tool for **[Kubernetes](kubernetes/kubernetes.md)**. With Argo CD, application definitions, configurations, and environments are declarative and version-controlled, and application deployment and lifecycle management are automated, auditable, and easy to understand.
Documentation & Project Homepage: [Argo CD Docs](https://argo-cd.readthedocs.io/en/stable/)
---
## Installation
1. Install Argo CD on a **[Kubernetes](kubernetes/kubernetes.md)** cluster, using **[kubectl](kubernetes/kubectl)**.
```bash
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
2. Add a **[Traefik](traefik/traefik.md)** IngressRoute.
```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: argocd-server
  namespace: argocd
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`argocd.example.com`)
      priority: 10
      services:
        - name: argocd-server
          port: 80
    - kind: Rule
      match: Host(`argocd.example.com`) && Headers(`Content-Type`, `application/grpc`)
      priority: 11
      services:
        - name: argocd-server
          port: 80
          scheme: h2c
  tls:
    certResolver: default
```
3. Disable internal TLS
Add the `--insecure` flag to the `argocd-server` command of the argocd-server deployment, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap.
---
## Get the admin password
For Argo CD v1.8 and earlier, the initial password is set to the name of the server pod; for Argo CD v1.9 and later, the initial password is available from a secret named `argocd-initial-admin-secret`.
```bash
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
```
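With the password in hand, you can also log in via the `argocd` CLI; a minimal sketch, assuming the CLI is installed and the hostname from the IngressRoute above:
```bash
argocd login argocd.example.com --username admin
```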
---
## Configuration
### Add private GitHub Repositories
1. Create a github token: https://github.com/settings/tokens
2. Add the new repository in ArgoCD via **[kubectl](kubernetes/kubectl)** or the GUI
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: repo-private-1
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  url: https://github.com/xcad2k/private-repo
  password: <github-token>
  username: not-used
```
3. Verify the new repository is connected
---
### Declarative Application and ApplicationSet
Apart from using the WebUI to add managed apps to ArgoCD, you can configure `Application`
and `ApplicationSet` resources. This enables you to define not only ArgoCD and your apps
as code, but also the definition of which applications you want to manage.
With apps defined as YAML via an `Application`, you can e.g. deploy the app within a CI/CD
pipeline that deploys your Argo instance.
There are two types of resources: `Application` and `ApplicationSet`. The main difference is
that an `ApplicationSet` supports so-called inline generators, which allow you to template your
Application definition. If you manage multiple clusters with ArgoCD and you want to get an
`Application` deployed with cluster-specific parameters, you want to use an `ApplicationSet`.
Below, you find an example for an `Application` and an `ApplicationSet`.
**Application:**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  destination:
    namespace: default
    server: 'https://kubernetes.default.svc'
  source:
    path: kustomize-guestbook
    repoURL: 'https://github.com/argoproj/argocd-example-apps'
    targetRevision: HEAD
  project: default
  syncPolicy:
    automated:
      prune: false
      selfHeal: false
```
**ApplicationSet:**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  generators:
    - clusters: {} # This is a generator, specifically, a cluster generator.
  template:
    # This is a template Argo CD Application, but with support for parameter substitution.
    metadata:
      name: '{{name}}-guestbook'
    spec:
      project: "default"
      source:
        repoURL: https://github.com/argoproj/argocd-example-apps/
        targetRevision: HEAD
        path: kustomize-guestbook
      destination:
        server: '{{server}}'
        namespace: default
```
## Further information
More examples and tutorials regarding ArgoCD can be found in the link list below:
- Basic tutorial for installation and configuration: [Let loose the squid - Deploy ArgoCD the declarative way](https://thedatabaseme.de/2022/06/05/let-loose-the-squid-deploy-argocd-the-declarative-way/)
- Writing ArgoCD Plugins: [ArgoCD Custom Plugins](https://dev.to/tylerauerbeck/argocd-custom-plugins-creating-a-custom-plugin-to-process-openshift-templates-4p5m)

175
apps/bind9.md Normal file
View File

@ -0,0 +1,175 @@
# BIND9
BIND9 (Berkeley Internet Name Domain version 9) is an open-source [[DNS]] (Domain Name System) software system. It is the most widely used DNS server software on the Internet and is maintained by the Internet Systems Consortium (ISC). BIND9 provides a robust and scalable platform for resolving domain names into IP addresses and vice versa, as well as supporting advanced DNS features such as [[DNSSEC]] (DNS Security Extensions), dynamic updates, and incremental zone transfers. BIND9 runs on a variety of operating systems, including [[Linux]], [[Unix]], and [[Windows]], and is highly configurable and extensible through the use of plugins and modules.
Project Homepage: https://www.isc.org/bind/
---
## Installation
ISC provides executables for Windows and packages for [Ubuntu](linux/distros/ubuntu.md), [CentOS](linux/distros/centos.md), [Fedora](linux/distros/fedora.md) and [Debian](linux/distros/debian.md) (BIND 9 ESV, BIND 9 Stable, and the BIND 9 Development version). Most operating systems also offer BIND 9 packages for their users. These may be built with a different set of defaults than the standard BIND 9 distribution, and some of them add a version number of their own that does not map exactly to the BIND 9 version.
### Ubuntu Linux
BIND9 is available in the Main repository. No additional repository needs to be enabled for BIND9.
```sh
sudo apt install bind9
```
### Ubuntu Docker
As part of the [Long Term Supported OCI Images](https://ubuntu.com/security/docker-images), Canonical offers Bind9 as a hardened and maintained [Docker](docker/docker.md) image.
```sh
docker run -d --name bind9-container -e TZ=UTC -p 30053:53 ubuntu/bind9:9.18-22.04_beta
```
---
## Configuration
BIND 9 uses a single configuration file called `named.conf`, which is typically located in either `/etc/bind`, `/etc/namedb` or `/usr/local/etc/namedb`.
The `named.conf` consists of `logging` and `options` blocks, and `category`, `channel`, `directory`, `file` and `severity` statements.
### Named Config
```conf
options {
    ...
};

zone "domain.tld" {
    type primary;
    file "domain.tld";
};
```
### Zone File
Depending on the functionality of the system, one or more `zone` files are required.
```conf
; base zone file for domain.tld
$TTL 2d             ; default TTL for zone
$ORIGIN domain.tld. ; base domain-name

; Start of Authority RR defining the key characteristics of the zone (domain)
@        IN  SOA  ns1.domain.tld. hostmaster.domain.tld. (
                    2022121200 ; serial number
                    12h        ; refresh
                    15m        ; update retry
                    3w         ; expiry
                    2h         ; minimum
                    )

; name server RR for the domain
         IN  NS   ns1.domain.tld.

; mail server RRs for the zone (domain)
     3w  IN  MX   10 mail.domain.tld.

; domain hosts includes NS and MX records defined above
; plus any others required
; for instance a user query for the A RR of joe.domain.tld will
; return the IPv4 address 192.168.254.6 from this zone file
ns1      IN  A    192.168.254.2
mail     IN  A    192.168.254.4
joe      IN  A    192.168.254.6
www      IN  A    192.168.254.7
```
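Before (re)loading BIND, the configuration and zone file can be validated; a quick check, assuming Debian-style paths (`/etc/bind/named.conf` and the zone file stored as `/etc/bind/domain.tld`):
```sh
named-checkconf /etc/bind/named.conf
named-checkzone domain.tld /etc/bind/domain.tld
```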
#### SOA (Start of Authority)
A start of authority record is a type of resource record in the Domain Name System ([DNS](networking/dns.md)) containing administrative information about the zone, especially regarding zone transfers. The SOA record format is specified in RFC 1035.
```conf
@  IN  SOA  ns1.domain.tld. hostmaster.domain.tld. (
             2022121200 ; serial number
             12h        ; refresh
             15m        ; update retry
             3w         ; expiry
             2h         ; minimum
             )
```
---
## Forwarders
DNS forwarders are servers that resolve DNS queries on behalf of another DNS server.
To configure bind9 as a forwarding DNS server, you need to add a `forwarders` clause inside the `options` block. The `forwarders` clause specifies a list of IP addresses of other DNS servers that bind9 will forward queries to.
```conf
options {
    // ... other options ...
    forwarders {
        8.8.8.8; // Google Public DNS
        1.1.1.1; // Cloudflare DNS
    };
};
```
---
## Access Control
To configure permissions in BIND9, you can use the `acl` statement to define access control lists, and then use the `allow-query` and `allow-transfer` statements to specify which hosts or networks are allowed to query or transfer zones.
```conf
acl "trusted" {
192.168.1.0/24;
localhost;
};
options {
// ...
allow-query { any; };
allow-transfer { "trusted"; };
// ...
};
zone "example.com" {
// ...
allow-query { "trusted"; };
// ...
};
```
In this example, we define an ACL called `trusted` that includes the 192.168.1.0/24 network and the local host. We then specify that hosts in this ACL are allowed to transfer zones, and that any host is allowed to query.
For the `example.com` zone, we specify that only hosts in the `trusted` ACL are allowed to query.
You can also use other ACL features, such as `allow-recursion` and `allow-update`, to further control access to your DNS server.
---
## Dynamic Updates
Dynamic updates in BIND allow for the modification of DNS records in real-time without having to manually edit zone files.
### Secure DNS updates with TSIG Key
A TSIG (Transaction SIGnature) key is a shared secret key used to authenticate dynamic DNS updates between a DNS client and server. It provides a way to securely sign and verify DNS messages exchanged during dynamic updates.
To create a TSIG key for use with dynamic updates, the `tsig-keygen` command can be used.
```
tsig-keygen -a hmac-sha256
```
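The command prints a `key` block similar to the one below (the default key name is `tsig-key` if none is given; the secret shown here is a placeholder). Add this block to your `named.conf`:
```
key "tsig-key" {
    algorithm hmac-sha256;
    secret "<generated-base64-secret>";
};
```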
To add the TSIG key to the zone configuration, the `key` statement must be added to the `allow-update` statement in the `named.conf` file. For example:
```
zone "example.com" {
type master;
file "example.com.zone";
allow-update { key "tsig-key"; };
};
```
In this example, the "allow-update" statement now uses the TSIG key, to allow updates to the "example.com" zone.

83
apps/cert-manager.md Normal file
View File

@ -0,0 +1,83 @@
# Cert-Manager
Cert-manager adds [certificates](misc/ssl-certs) and certificate issuers as resource types in [Kubernetes Clusters](kubernetes/kubernetes.md), and simplifies the process of obtaining, renewing and using those [certificates](misc/ssl-certs).
Documentation & Project Homepage: [Cert-Manager Docs](https://cert-manager.io/docs/)
---
## Self-Signed Certificates
### Upload existing CA.key and CA.crt files (Option 1)
1. Create a self-signed CA creating a ca.key (private-key) and ca.crt (certificate)
(ca.key)
```bash
openssl genrsa -out ca.key 4096
```
(ca.crt)
```bash
openssl req -new -x509 -sha256 -days 365 -key ca.key -out ca.crt
```
2. Convert the files to a single-line base64-encoded string (the `-w 0` option requires the GNU/Linux base64 tool)
```bash
cat ca.key | base64 -w 0
cat ca.crt | base64 -w 0
```
3. Create a new ssl secret object using the strings
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ssl-issuer-secret
  # (Optional) Metadata
  # ---
  # namespace: your-namespace
type: Opaque
data:
  tls.crt: <base64-encoded-string>
  tls.key: <base64-encoded-string>
```
4. Create a new ClusterIssuer or Issuer object by using the ssl secret
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
  # (Optional) Metadata
  # ---
  # namespace: your-namespace
spec:
  ca:
    secretName: ssl-issuer-secret
```
### Create CA through Cert-manager (Option 2)
Create a new ClusterIssuer or Issuer object by using the selfSigned Attribute.
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: root-issuer
spec:
  selfSigned: {}
```
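With either issuer in place, a `Certificate` resource can request a TLS secret from it. A minimal sketch, where the names and the DNS name `example.com` are placeholders:
```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls
  namespace: default
spec:
  secretName: example-tls
  dnsNames:
    - example.com
  issuerRef:
    name: root-issuer
    kind: ClusterIssuer
```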
---
## Troubleshooting
### Common Errors
**DNS Record not yet propagated**
The error, `Waiting for DNS-01 challenge propagation: DNS record for "your-dns-record" not yet propagated.`, might occur in the `challenge` object. Cert-Manager creates a TXT Record on the DNS provider and checks whether the record exists before issuing the certificate. In a split-dns environment, this could be a problem when internal DNS Servers can't resolve the TXT Record on the Cloud DNS. You can use the `extraArgs` `--dns01-recursive-nameservers-only`, and `--dns01-recursive-nameservers=8.8.8.8:53,1.1.1.1:53`, to specify the DNS Resolvers used for the challenge.
**No solver found**
The error, `Failed to determine a valid solver configuration for the set of domains on the Order: no configured challenge solvers can be used for this challenge` might occur in the `order` object when no solver can be found for the DNS hostname. Make sure your solvers have a correct `dnsZones` configured that matches the DNS hostname's zone.

View File

@ -0,0 +1,85 @@
## Cloudflare Tunnel
##### Protect your web servers from direct attack
From the moment an application is deployed, developers and IT spend time locking it down: configuring ACLs, rotating IP addresses, and using clunky solutions like GRE tunnels.
There's a simpler and more secure way to protect your applications and web servers from direct attacks: Cloudflare Tunnel.
Ensure your server is safe, no matter where it's running: public cloud, private cloud, Kubernetes cluster, or even a Mac mini under your TV.
### I do everything in the CLI
Install the Cloudflare Tunnel service.
In this case, the install is done on an Ubuntu machine.
```
wget -q https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb && sudo dpkg -i cloudflared-linux-amd64.deb
```
When you run the following command, you get a URL. Log in to Cloudflare:
```
cloudflared tunnel login
```
When Cloudflare is connected, you get a `cert.pem`.
Make a note of its location.
Create the tunnel.
For `<NAME>`, fill in the name that you want for the tunnel:
```
cloudflared tunnel create <NAME>
# Take a note where your tunnel credentials are saved.
```
Create a configuration file in the `.cloudflared` directory:
```
nano /home/$USER/.cloudflared/config.yaml
```
Set the following lines:
```
tunnel: Your-Tunnel-Id
credentials-file: /home/$USER/.cloudflared/1d4537b6-67b9-4c75-a022-ce805acd5c0a.json
# Use the JSON credentials file from the previous step.
```
Add your first site, for example `example.com`:
```
cloudflared tunnel route dns <name of the tunnel> <example.com>
```
Create the ingress rules.
Add them to the config file in your `.cloudflared` directory:
```
ingress:
  - hostname: example.com
    service: http://internalip:80
  - hostname: sub.example.com
    service: http://internalip:88
  - service: http_status:404 # this is required as a 'catch-all'
```
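Before starting the tunnel, you can check the ingress rules in the config file for errors:
```
cloudflared tunnel ingress validate
```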
Start the tunnel:
```
cloudflared tunnel run <name of your tunnel>
```
Install a service so the tunnel runs automatically:
```
cloudflared service install
```
Start and enable the service:
```
systemctl enable --now cloudflared
```

7
apps/grafana.md Normal file
View File

@ -0,0 +1,7 @@
# Grafana
Operational dashboards for your data here, there, or anywhere
Project Homepage: [Grafana Homepage](https://grafana.com)
Documentation: [Grafana Docs](https://grafana.com/docs/)
---

92
apps/kasm.md Normal file
View File

@ -0,0 +1,92 @@
# KASM Workspaces
Streaming containerized apps and desktops to end-users. The Workspaces platform provides enterprise-class orchestration, data loss prevention, and web streaming technology to enable the delivery of containerized workloads to your browser.
---
## Add self-signed SSL Certificates
...
1. Stop the kasm services
```
sudo /opt/kasm/bin/stop
```
2. Replace `kasm_nginx.crt` and `kasm_nginx.key` files
```
sudo cp <your_cert> /opt/kasm/current/certs/kasm_nginx.crt
sudo cp <your_key> /opt/kasm/current/certs/kasm_nginx.key
```
3. Start the Kasm Services
```
sudo /opt/kasm/bin/start
```
---
## Custom Images
...
Registry
```
https://index.docker.io/v1/
```
...
### Add Images in KASM
> [!attention]
> You need to pass in a "tag" in the Docker Image. Otherwise kasm won't pull and start the image correctly.
### Docker Run Config
**Example**
```
{
"cap_add":["NET_ADMIN"],
"devices":["dev/net/tun","/dev/net/tun"],
"sysctls":{"net.ipv6.conf.all.disable_ipv6":"0"}
}
```
---
## Troubleshooting
...
### KASM Agent
...
### Database
...
```
sudo docker exec -it kasm_db psql -U kasmapp -d kasm
```
### Delete invalid users from user_groups table
...
1. Check table for invalid entries
```
kasm=# select * from user_groups;
user_group_id | user_id | group_id
--------------------------------------+--------------------------------------+--------------------------------------
07c54672-739f-42d8-befc-bb2ba29fa22d | 71899524-5b31-41ac-a359-1aa8a008b831 | 68d557ac-4cac-42cc-a9f3-1c7c853de0f3
e291f1f7-86be-490f-9f9b-3a520d4d1dfa | 71899524-5b31-41ac-a359-1aa8a008b831 | b578d8e9-5585-430b-a70b-9935e8acaaa3
07b6f450-2bf5-48c0-9c5e-3443ad962fcb | | 68d557ac-4cac-42cc-a9f3-1c7c853de0f3
8c4c7242-b2b5-4a7a-89d3-e46d24456e5c | | b578d8e9-5585-430b-a70b-9935e8acaaa3
```
2. Delete invalid entries from the table:
```postgresql
delete from user_groups where user_id is null;
```
3. Verify table
```
kasm=# select * from user_groups;
user_group_id | user_id | group_id
--------------------------------------+--------------------------------------+--------------------------------------
07c54672-739f-42d8-befc-bb2ba29fa22d | 71899524-5b31-41ac-a359-1aa8a008b831 | 68d557ac-4cac-42cc-a9f3-1c7c853de0f3
e291f1f7-86be-490f-9f9b-3a520d4d1dfa | 71899524-5b31-41ac-a359-1aa8a008b831 | b578d8e9-5585-430b-a70b-9935e8acaaa3
(2 rows)
```

21
apps/longhorn.md Normal file
View File

@ -0,0 +1,21 @@
# Longhorn
Longhorn is a lightweight, reliable and easy-to-use distributed block storage system for [Kubernetes](kubernetes/kubernetes.md).
Project Homepage: [Longhorn Homepage](https://longhorn.io)
Documentation: [Longhorn Docs](https://longhorn.io/docs/)
---
## Installation
You can install Longhorn via [Helm](tools/helm.md). To customize values, follow the [Chart Default Values](https://github.com/longhorn/longhorn/blob/master/chart/values.yaml)
```shell
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
```
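Longhorn's components are deployed into the `longhorn-system` namespace; you can watch the rollout with:
```shell
kubectl -n longhorn-system get pods
```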
---

91
apps/nginx.md Normal file
View File

@ -0,0 +1,91 @@
# Nginx
Open source web and application server.
Project Homepage: [Nginx Homepage](https://www.nginx.com/)
Documentation: [Nginx Unit Docs](https://unit.nginx.org/)
---
## Basic configuration arguments and examples
Logging and debugging:
```nginx
error_log <file> <loglevel>
error_log logs/error.log;
error_log logs/debug.log debug;
error_log logs/error.log notice;
```
basic listening ports:
```nginx
listen <port> <options>
listen 80;
listen 443 ssl http2;
listen 443 http3 reuseport;  # this is experimental!
```
header modifications:
```nginx
add_header <header> <values>
add_header Alt-svc '$http3=":<port>"; ma=<value>';  # this is experimental!

ssl_certificate / ssl_certificate_key
ssl_certificate cert.pem;
ssl_certificate_key cert.key;

server_name <domains>
server_name domain1.com *.domain1.com;

root <folder>
root /var/www/html/domain1;

index <file>
index index.php;

location <url> {
}

location / {
    root /var/www/html;
    index index.html index.htm;
}

location / {
    try_files $uri $uri/ /index.php$is_args$args;
}

location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    include fastcgi_params;
}

location ~ /\.ht {
    deny all;
}

location = /favicon.ico {
    log_not_found off;
    access_log off;
}

location = /robots.txt {
    log_not_found off;
    access_log off;
    allow all;
}

location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
    expires max;
    log_not_found off;
}
```
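After changing the configuration, test it and reload Nginx (assuming a systemd-managed service):
```sh
sudo nginx -t
sudo systemctl reload nginx
```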
## Reverse Proxy
### Show Client's real IP
```nginx
server {
    server_name example.com;

    location / {
        proxy_pass http://localhost:4000;
        # Show client's real IP behind a proxy
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

51
apps/passbolt.md Normal file
View File

@ -0,0 +1,51 @@
# Passbolt
Passbolt is a free and open-source password manager built for collaboration. Secure, flexible, and automation ready. Trusted by 10,000 organizations, including Fortune 500 companies, newspapers, governments and defence forces.
Project Homepage: https://passbolt.com/
---
## Set Up
### Create admin user
```sh
docker-compose exec passbolt su -m -c "/usr/share/php/passbolt/bin/cake \
passbolt register_user \
-u <your_email> \
-f <first_name> \
-l <last_name> \
-r admin" -s /bin/sh www-data
```
### Backup options
Backup the database container.
Change `database-container` to the name of your Passbolt database container,
and adjust the backup location.
```
docker exec -i database-container bash -c \
'mysqldump -u${MYSQL_USER} -p${MYSQL_PASSWORD} ${MYSQL_DATABASE}' \
> /path/to/backup.sql
```
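To restore, the dump can be piped back in the other direction; a sketch, assuming the same container name and environment variables:
```
docker exec -i database-container bash -c \
  'mysql -u${MYSQL_USER} -p${MYSQL_PASSWORD} ${MYSQL_DATABASE}' \
  < /path/to/backup.sql
```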
### Backup server public and private keys
Change `passbolt-container` to the name of your Passbolt container,
and adjust the backup location.
```
docker cp passbolt-container:/etc/passbolt/gpg/serverkey_private.asc \
/path/to/backup/serverkey_private.asc
docker cp passbolt-container:/etc/passbolt/gpg/serverkey.asc \
/path/to/backup/serverkey.asc
```
### Backup the avatars
```
docker exec -i passbolt-container \
tar cvfzp - -C /usr/share/php/passbolt/ webroot/img/avatar \
> passbolt-avatars.tar.gz
```

91
apps/portainer.md Normal file
View File

@ -0,0 +1,91 @@
# Portainer
Easily deploy, configure and secure containers in minutes on [Docker](docker/docker.md), [Kubernetes](kubernetes/kubernetes.md), Swarm and Nomad in any cloud, datacenter or device.
Project Homepage: [Portainer](https://www.portainer.io)
Documentation: [Portainer Docs](http://documentation.portainer.io)
## Installation
There are two installation options: [Portainer CE](https://docs.portainer.io/start/install-ce/server/docker) (Community Edition) and [Portainer BE](https://docs.portainer.io/start/install/server/docker) (Business Edition). Up to three nodes of BE can be requested at no cost; documentation outlining the feature differences between BE and CE is available [here](https://docs.portainer.io/).
>**Requirements:**
*[Docker](../docker/docker.md)*, *[Docker Swarm](../docker/docker-swarm.md)*, or *[Kubernetes](../kubernetes/kubernetes.md)* must be installed to run Portainer. *[Docker Compose](../docker/docker-compose.md)* is also recommended but not required.
The examples below focus on installing Community Edition (CE) on Linux but Windows and Windows Container Service installation instructions (very little, if any, difference) can also be accessed from the hyperlinks above.
## Deploy Portainer CE in Docker on Linux
Create the volume that Portainer Server will use to store its database.
```shell
docker volume create portainer_data
```
Download and install the Portainer Server container.
```shell
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
```
Check to see whether the Portainer Server container has started by running:
```shell
docker ps
```
Log into your Portainer Server in a web browser at URL `https://your-server-address:9443`.
## Deploy Portainer CE in Docker Swarm on Linux
Retrieve the stack YML manifest.
```shell
curl -L https://downloads.portainer.io/ce2-19/portainer-agent-stack.yml -o portainer-agent-stack.yml
```
Use the downloaded YML manifest to deploy your stack.
```shell
docker stack deploy -c portainer-agent-stack.yml portainer
```
Check to see whether the Portainer Server container has started by running:
```shell
docker ps
```
Log into your Portainer Server in a web browser at URL `https://your-server-address:9443`.
## Add environments to Portainer
Various protocols can be used for [Portainer node monitoring](https://docs.portainer.io/admin/environments/add/docker) including:
- Portainer Agent (running as a container on the client)
- URL/IP address
- Socket
The method that requires the least configuration, and the least additional external accessibility, is the Portainer Agent.
Running a docker command on the client machine will install the Portainer agent -- the appropriate docker command can be obtained in Portainer by:
1. Clicking "Environments" in the left side navigation pane
2. Click the blue button labled "+Add Environment" in the top-right corner
3. Click the appropriate environment (Docker Standalone, Docker Swarm, K8S, etc.)
4. Copy the docker string in the middle of the window and execute on the client
5. Enter a name and IP (ending in :9001) in the fields below and click "Connect"
## Updating Portainer (host)
In this example the container name for Portainer is "portainer"; change this if necessary for your installation.
```shell
docker stop portainer && docker rm portainer && docker pull portainer/portainer-ce:latest && docker run -d -p 8000:8000 -p 9443:9443 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
```
## Updating Portainer Agent (clients)
```shell
docker stop portainer_agent && docker rm portainer_agent && docker pull portainer/agent:latest && docker run -d -p 9001:9001 --name=portainer_agent --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker/volumes:/var/lib/docker/volumes portainer/agent:latest
```

0
apps/prometheus.md Normal file
View File

13
apps/rancher.md Normal file
View File

@ -0,0 +1,13 @@
# Rancher
Rancher, the open-source multi-cluster orchestration platform, lets operations teams deploy, manage and secure enterprise [Kubernetes](kubernetes/kubernetes.md).
Project Homepage: [Rancher Homepage](https://www.rancher.com)
---
## Remove Installation
```
kubectl delete validatingwebhookconfiguration rancher.cattle.io
kubectl delete mutatingwebhookconfiguration rancher.cattle.io
```

6
apps/tailscale.md Normal file
View File

@ -0,0 +1,6 @@
# Tailscale
Tailscale is a zero config [VPN](networking/vpn.md) for building secure networks, powered by [wireguard](networking/wireguard.md). Install on any device in minutes. Remote access from any network or physical location.
Project Homepage: https://tailscale.com
---

View File

@ -0,0 +1,64 @@
# Teleport Assist
**'Teleport Assist'** is an artificial intelligence feature that utilizes facts about your infrastructure to help answer questions, generate command-line scripts, and help you perform routine tasks on target nodes. At the moment, only SSH and Bash are supported. Support for SQL, the AWS API, and Kubernetes is planned for the near future.
> **'Teleport Assist'** is currently experimental, available starting from Teleport v12.4 for Teleport Community Edition.
## Prerequisites
- You will need an active OpenAI account with GPT-4 API access as Teleport Assist relies on OpenAI services.
## Configuration
Copy the GPT-4 API key into the file `/etc/teleport/openai_key`, and set read-only permissions and change the file owner to the user that the Teleport Proxy Service uses by running the following commands:
```sh
chmod 400 /etc/teleport/openai_key
chown teleport:teleport /etc/teleport/openai_key
```
To enable Teleport Assist, you need to provide your OpenAI API key. On each Proxy and Auth Service host, perform the following actions.
If the host is running the Auth Service, add the following section:
```yaml
auth_service:
  assist:
    openai:
      api_token_path: /etc/teleport/openai_key
```
If the host is running the Proxy Service, add the following section:
```yaml
proxy_service:
  assist:
    openai:
      api_token_path: /etc/teleport/openai_key
```
Restart Teleport for the changes to take effect.
Make sure that your Teleport user has the `assistant` permission. By default, users with built-in `access` and `editor` roles have this permission. You can also add it to a custom role. Here is an example:
```yaml
kind: role
version: v6
metadata:
  name: assist
spec:
  allow:
    rules:
      - resources:
          - assistant
        verbs:
          - list
          - create
          - read
          - update
          - delete
```
## Usage
Now that you have Teleport Assist enabled, you can start using it by clicking on the **'Assist'** button in the Teleport UI.

View File

@ -0,0 +1,52 @@
# Teleport App Service
The **'Teleport App Service'** is a secure and convenient way to access internal applications from anywhere. It uses Teleport's built-in IAM system to authenticate users, and allows users to access applications from a web browser or command-line client. The **'Teleport App Service'** can be scaled to support numerous users and applications.
## Requirements
> To securely access applications, you need to obtain a valid [SSL/TLS certificate](../../misc/ssl-certs.md) for Teleport and its application subdomains.
### Example: wildcard certificate in [Traefik](../traefik/traefik.md)
```yaml
labels:
  - "traefik.http.routers.teleport.rule=HostRegexp(`teleport.your-domain`, `{subhost:[a-z]+}.teleport.your-domain`)"
  - "traefik.http.routers.teleport.tls.domains[0].main=teleport.your-domain"
  - "traefik.http.routers.teleport.tls.domains[0].sans=*.teleport.your-domain"
```
## Configuration
The following snippet shows the full YAML configuration of an Application Service appearing in the `teleport.yaml` configuration file:
```yaml
app_service:
  enabled: yes
  apps:
    - name: "grafana"
      description: "This is an internal Grafana instance"
      uri: "http://localhost:3000"
      public_addr: "grafana.teleport.example.com" # (optional)
      insecure_skip_verify: false # (optional) don't verify the certificate
```
## Usage
To access a configured application in the Teleport UI, you can either:
- Go to the **Applications** tab and click the **Launch** button for the application that you want to access.
- Enter the subdomain of the application in your web browser, e.g. `https://grafana.teleport.example.com`.
### Relevant CLI commands
List the available applications:
```sh
tsh apps ls
```
Retrieve a short-lived X.509 certificate for CLI application access:
```sh
tsh apps login grafana
```

View File

@ -0,0 +1,50 @@
# Teleport Configuration
In order to avoid breaking existing configurations, Teleport's configuration is versioned. The newer configuration version is `v3`. If a `version` is not specified in the configuration file, `v1` is assumed.
## Instance-wide settings
### Log Settings
```yaml
teleport:
  log:
    output: stderr
    severity: INFO
    format:
      output: text
```
## Proxy Service
```yaml
proxy_service:
  enabled: "yes"
  web_listen_addr: 0.0.0.0:3080
  # -- (Optional) when using reverse proxy
  # public_addr: ['your-server-url:443']
  https_keypairs: []
  acme: {}
  # -- (Optional) ACME
  # acme:
  #   enabled: "yes"
  #   email: your-email-address
```
## Auth Service
```yaml
auth_service:
  enabled: "yes"
  listen_addr: 0.0.0.0:3025
  proxy_listener_mode: multiplex
  cluster_name: your-server-url
```
## Additional Services Configuration
- [SSH Service](teleport-ssh)
- [Kubernetes Service](teleport-kubernetes)
- [Application Service](teleport-appservice)
- [Databases Service](teleport-databases)
- [Remote Desktop Service](teleport-remotedesktop)

View File

@ -0,0 +1,3 @@
# Teleport Databases Service
WIP

View File

@ -0,0 +1,3 @@
# Teleport Installation Guidelines
WIP

View File

@ -0,0 +1,3 @@
# Teleport Kubernetes Service
WIP

View File

@ -0,0 +1,3 @@
# Teleport Passwordless Auth
WIP

View File

@ -0,0 +1,3 @@
# Remote Desktop Service
WIP

View File

@ -0,0 +1,3 @@
# Teleport SSH Service
WIP

24
apps/teleport/teleport.md Normal file
View File

@ -0,0 +1,24 @@
# Teleport
DevOps teams use **'Teleport'** to access [SSH](../../networking/ssh.md) and Windows servers, [Kubernetes](../../kubernetes/kubernetes.md), databases, AWS Console, and web applications. **'Teleport'** prevents phishing by moving away from static credentials towards ephemeral certificates backed by biometrics and hardware identity, and stops attacker pivots with the [Zero Trust design](../../misc/zerotrust.md).
Project homepage: [Teleport](https://goteleport.com/)
Documentation: [Teleport Docs](https://goteleport.com/docs/)
## Installation
[Teleport Installation Guidelines](teleport-installation)
## Configuration
[Teleport General Configuration Guidelines](teleport-configuration)
## Features
- [SSH Service](teleport-ssh)
- [Kubernetes Service](teleport-kubernetes)
- [Databases Service](teleport-databases)
- [Remote Desktop Service](teleport-remotedesktop)
- [Application Service](teleport-appservice)
- [Passwordless Auth](teleport-passwordless)
- [AI Assist](teleport-aiassist)

194
apps/traefik/traefik.md Normal file
View File

@ -0,0 +1,194 @@
# Traefik
Traefik is an open-source Edge Router for [Docker](docker/docker.md) and [Kubernetes](kubernetes/kubernetes.md) that makes publishing your services a fun and easy experience. It receives requests on behalf of your system and finds out which components are responsible for handling them.
---
## Installation
### Docker
TODO: WIP
### Kubernetes
You can install Traefik via [Helm](tools/helm.md).
```sh
helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik
```
---
## Dashboard and API
WIP
---
## EntryPoints
WIP
### HTTP Redirection
WIP
```yaml
entryPoints:
  web:
    address: :80
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https
```
### HTTPS
WIP
```yaml
entryPoints:
  websecure:
    address: :443
```
---
## Routers
**traefik.http.routers.router.entrypoints**
Specifies the Entrypoint for the Router. Setting this to `traefik.http.routers.router.entrypoints: websecure` will expose the Container on the `websecure` entrypoint.
*When using websecure, you should enable `traefik.http.routers.router.tls` as well.*
**traefik.http.routers.router.rule**
Specify the Rules for the Router.
*This is an example for an FQDN: Host(`subdomain.your-domain`)*
**traefik.http.routers.router.tls**
Will enable TLS protocol on the router.
**traefik.http.routers.router.tls.certresolver**
Specifies the Certificate Resolver on the Router.
### PathPrefix and StripPrefix
WIP
```yml
- "traefik.enable=true"
- "traefik.http.routers.nginx-test.entrypoints=websecure"
- "traefik.http.routers.nginx-test.tls=true"
- "traefik.http.routers.nginx-test.rule=PathPrefix(`/nginx-test/`)"
- "traefik.http.routers.nginx-test.middlewares=nginx-test"
- "traefik.http.middlewares.nginx-test.stripprefix.prefixes=/nginx-test"
```
Add an `/api` prefix to any requests to `myapidomain.com`.
Example:
- Request -> `myapidomain.com`
- Traefik translates this to `myapidomain.com/api` without the requester seeing it
```yml
- "traefik.enable=true"
- "traefik.http.routers.myapp-secure-api.tls=true"
- "traefik.http.routers.myapp-secure-api.rule=Host(`myapidomain.com`)"
- "traefik.http.routers.myapp-secure-api.middlewares=add-api"
# Middleware
- "traefik.http.middlewares.add-api.addPrefix.prefix=/api"
```
---
## CertificatesResolvers
WIP
### dnsChallenge
DNS Providers such as `cloudflare`, `digitalocean`, `civo`, and more. To get a full list of supported providers, look up the [Traefik ACME Documentation](https://doc.traefik.io/traefik/https/acme/).
```yaml
certificatesResolvers:
  yourresolver:
    acme:
      email: "your-mail-address"
      dnsChallenge:
        provider: your-dns-provider
        resolvers:
          - "your-dns-resolver-ip-addr:53"
```
---
## ServersTransport
### InsecureSkipVerify
If you want to skip the TLS verification from **Traefik** to your **Servers**, you can add the following section to your `traefik.yml` config file.
```yaml
serversTransport:
  insecureSkipVerify: true
```
---
## TLS Settings
Define TLS Settings in Traefik.
### defaultCertificates
```yaml
tls:
  stores:
    default:
      defaultCertificate:
        certFile: /your-traefik-cert.crt
        keyFile: /your-traefik-key.key
```
### options
Define TLS Options like disabling insecure TLS1.0 and TLS 1.1.
```yaml
tls:
  options:
    default:
      minVersion: VersionTLS12
```
---
## Providers
WIP
### File
WIP
```yaml
providers:
  file:
```
### Docker
With `exposedByDefault: false`, Traefik won't automatically expose any containers by default. Setting `traefik.enable: true` will expose the container.
```yaml
providers:
  docker:
    exposedByDefault: false
```
### Kubernetes
WIP
---
## Ingress
WIP
---
## Log
WIP
```yaml
log:
  level: ERROR
```
---
## Global
WIP
```yaml
global:
  checkNewVersion: true
  sendAnonymousUsage: false
```

3
cloud/azure/azure Normal file
View File

@ -0,0 +1,3 @@
# Microsoft Azure
Microsoft Azure is Microsoft's public cloud computing platform, offering compute, storage, networking, databases, and many other managed services.

33
cloud/azure/azure-cli Normal file
View File

@ -0,0 +1,33 @@
# Microsoft Azure CLI
## Install Azure CLI
### Install on Windows
Download and install the [Azure CLI](https://aka.ms/installazurecliwindows) for Windows.
### Install on macOS
Run the following commands to install the Azure CLI via Homebrew.
```bash
brew update && brew install azure-cli
```
### Install on Linux
Run the following command to install the Azure CLI on Debian/Ubuntu-based Linux distributions.
```bash
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
```
## Login to Azure
Run the following command to log in to Azure.
```bash
az login
```
The command will open a browser window and prompt you to log in to Azure. Once you have logged in, you can close the browser window and return to the command line.

61
cloud/civo.md Normal file
View File

@ -0,0 +1,61 @@
# Civo
Homepage: [Civo Kubernetes - Fast, Simple, Managed Kubernetes Service - Civo.com](https://www.civo.com/)
Documentation: [Documentation - Civo.com](https://www.civo.com/docs)
Terraform Registry: [Terraform Registry](https://registry.terraform.io/providers/civo/civo/latest)
---
## Civo CLI
Civo CLI is a tool to manage your Civo account from the terminal. Civo CLI is built with Go and distributed as binary files, available for multiple operating systems and downloadable from https://github.com/civo/cli/releases.
### Authentication
In order to use the command-line tool, you will need to authenticate yourself to the Civo API using a special key. You can find an automatically-generated API key or regenerate a new key at [https://www.civo.com/api](https://www.civo.com/api).
### Create Instances
You can create an instance by running `civo instance create` with a hostname parameter, as well as any options you provide.
**Example:**
```
civo instance create --hostname=<your-hostname> --sshkey=<your-ssh-key-name> --initialuser=xcad --size=g3.xsmall --diskimage=921fcb64-8abf-4a51-8823-027d9d75c1d4
```
**Parameters:**
PARAMETER | LONG VERSION | DESCRIPTION
---|---|---
`-t` | `--diskimage` | the instance's disk image (from the 'civo diskimage ls' command)
`-l` | `--firewall` | the instance's firewall (you can use the name or the ID)
`-s` | `--hostname` | the instance's hostname
`-u` | `--initialuser` | the instance's initial user
`-r` | `--network` | the instance's network (you can use the name or the ID)
`-p` | `--publicip` | this should be either "none" or "create" (default "create")
`-i` | `--size` | the instance's size (from the 'civo instance size' command)
`-k` | `--sshkey` | the instance's SSH key (you can use the name or the ID)
`-g` | `--tags` | the instance's tags
`-w` | `--wait` | wait until the instance is ready
**Instance Sizes:**
ID|SIZE|TYPE|CPU|MEMORY (MB)|SSD (GB)
---|---|---|---|---|---
g3.xsmall|ExtraSmall|Instance|1|1024|25
g3.small|Small|Instance|1|2048|25
g3.medium|Medium|Instance|2|4096|50
g3.large|Large|Instance|4|8192|100
g3.xlarge|ExtraLarge|Instance|6|16384|150
g3.2xlarge|2XLarge|Instance|8|32768|200
g3.k3s.xsmall|ExtraSmall|Kubernetes|1|1024|15
g3.k3s.small|Small|Kubernetes|1|2048|15
g3.k3s.medium|Medium|Kubernetes|2|4096|15
g3.k3s.large|Large|Kubernetes|4|8192|15
g3.k3s.xlarge|ExtraLarge|Kubernetes|6|16384|15
g3.k3s.2xlarge|2XLarge|Kubernetes|8|32768|15
**Diskimages:**
ID | NAME
---|---
`9ffb043e-37d8-4b71-80ed-81227564944f` | centos-7
`e1a83a29-d35b-433b-b1cb-4baade48c81a` | debian-10
`67a75d21-3726-4152-8fc9-dcdb51b6e39e` | debian-9
`880d37ca-372e-4d33-91bd-3122cf56614b` | ubuntu-bionic
`921fcb64-8abf-4a51-8823-027d9d75c1d4` | ubuntu-focal

2
cloud/digitalocean.md Normal file
View File

@ -0,0 +1,2 @@
# DigitalOcean

View File

@ -0,0 +1,56 @@
# Microsoft365 EMail Protection
Email authentication (also known as email validation) is a group of standards that tries to stop email messages from forged senders (also known as spoofing). Microsoft 365 uses the following standards to verify inbound email:
- SPF (Sender Policy Framework)
- DKIM (DomainKeys Identified Mail)
- DMARC (Domain-based Message Authentication, Reporting, and Conformance)
## Prerequisites
WIP
## Set up SPF to help prevent spoofing
WIP
## Use DKIM to validate outbound email sent from your custom domain
### Publish two CNAME records for your custom domain in DNS
For each domain for which you want to add a DKIM signature in DNS, you need to publish two CNAME records.
```txt
Name:   selector1._domainkey
Target: selector1-yourdomain-com._domainkey.yourdomaincom.onmicrosoft.com
TTL:    3600

Name:   selector2._domainkey
Target: selector2-yourdomain-com._domainkey.yourdomaincom.onmicrosoft.com
TTL:    3600
```
### To enable DKIM signing for your custom domain in the Microsoft 365 Defender portal
Once you have published the CNAME records in DNS, you are ready to enable DKIM signing through Microsoft 365. You can do this either through the Microsoft 365 admin center or by using PowerShell.
1. In the Microsoft 365 Defender portal at [https://security.microsoft.com](https://security.microsoft.com), go to Email & Collaboration > Policies & Rules > Threat policies > Email Authentication Settings (in the Rules section) > DKIM. To go directly to the DKIM page, use [https://security.microsoft.com/dkimv2](https://security.microsoft.com/dkimv2).
2. On the DKIM page, select the domain by clicking on the name.
3. In the details flyout that appears, change the Sign messages for this domain with DKIM signatures setting to Enabled (Toggle on.)
When you're finished, click Rotate DKIM keys.
4. Repeat these steps for each custom domain.
5. If you are configuring DKIM for the first time and see the error 'No DKIM keys saved for this domain', you will have to use Windows PowerShell to enable DKIM signing, as explained in the next step.
#### (Optional) To enable DKIM signing for your custom domain by using PowerShell
1. Connect to Exchange Online PowerShell.
2. Use the following syntax:
`Set-DkimSigningConfig -Identity your-domain -Enabled $true`
your-domain is the name of the custom domain that you want to enable DKIM signing for.
This example enables DKIM signing for the domain contoso.com:
`Set-DkimSigningConfig -Identity contoso.com -Enabled $true`
### To Confirm DKIM signing is configured properly for Microsoft 365
WIP
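A quick check from Exchange Online PowerShell; a sketch, assuming you are already connected and `contoso.com` stands in for your domain:
```powershell
# Show the DKIM signing state and the expected CNAME targets for the domain
Get-DkimSigningConfig -Identity contoso.com | Format-List Name, Enabled, Status, Selector1CNAME, Selector2CNAME
```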

View File

@ -0,0 +1,134 @@
# What is Cloud-Computing?
- A model that enables businesses to acquire resources for their IT infrastructure needs on demand
- Cloud resources: Servers, storage, databases, networks, software applications, and so on
- Ensures the instantaneous availability of resources with lower cost and operational overhead
## Benefits
1. Cost Savings
- Helps you reduce capital investment
- Reduces hardware and software procurement, which further eliminates the need for power and cooling systems
- Provides cloud services on demand and you are only charged when you use the service
2. Data Loss Prevention
- Cloud computing allows you to store your organization's valuable data in the cloud rather than in your own data center storage hardware
- The cloud provider's data storage solutions typically offer better access, redundancy, and availability than enterprise data centers
- These solutions help prevent data loss through malfunction, viruses, user errors, or theft
3. Scalability
- Cloud computing enables you to increase or decrease IT infra resources according to your business needs
- Both manual and automatic scaling options are available with most cloud providers
4. Flexibility
- IT organizations traditionally focus on various responsibilities, from procuring, hosting, and maintaining IT infra to customer support and security
- Because these services are made available as managed services by the cloud provider, organizations can focus on their actual business and not IT management issues
5. Security
- Cloud providers offer data protection services like data encryption and policy-based user management, making cloud security equivalent to conventional systems
6. Data Analytics
- Cloud computing technology generally includes analytics and reporting, which helps track usage
- This feature allows you to identify areas of improvement, meet your business goals, and increase organizational efficiency
7. Collaboration
- Cloud computing allows users from different geographic locations to work as a team and collaborate easily and effectively
- This speeds delivery of applications to market
8. Data Recovery
- Cloud computing provides features and technologies that help companies recover data lost during natural disasters, power outages, and other unforeseen emergencies.
9. Mobile Access
- Cloud applications can provide mobile access to corporate resources
- This feature is beneficial for employees and customers, allowing them to access a cloud application from anywhere
## Use Cases
- Faster Testing and Deployment
- Remote Working
- Cloud Communication
# What is a Cloud Application?
A cloud application is a software program that runs in the cloud and is accessed remotely over the network.
It has all the functionality of a non-cloud-based application, with the added advantage of being delivered over the network.
# Cloud Economics
Cloud computing reduces capital expenditures (CapEx) by eliminating the need to run and maintain your own infrastructure.
Your costs shift to operating expenses (OpEx), which are generally lower as you only pay for the resources you consume.
# Operational Efficiencies
- Reduces Capital Expenses
- Reduces Staffing Costs
- Improves Productivity
# What is a Distributed System?
A distributed computing system consists of multiple independent software components. These independent software components are located on different systems that communicate in such a way that they appear as a single system to the end user.
Note: Cloud computing is based on the distributed systems model.
### Types of Distributed Systems
1. Peer-to-Peer -> In the peer-to-peer architectural model, responsibilities are uniformly distributed among machines in the system.
2. Client-Server -> In the client-server model, data on the server is accessed by clients.
3. Three-tier -> The three-tier architectural model enables information about the client to be stored in the middle tier.
4. N-tier -> The n-tier architecture allows an application or server to forward requests to additional enterprise services on the network.
# Centralized vs. Distributed Systems
# Workloads
- The amount of work allocated to a defined computing task at any given time
- An isolated computing task that is executed independently without any support from external programs or applications
### Edge Computing :
- Is a distributed computing model that brings compute and storage workloads closer to the user
- Decreases latency and saves bandwidth
- Processes information close to the edge and decentralizes a network
### Workloads in distributed systems :
- Are distributed among the available IT resources based on the utilization of each resource
- Uses an algorithm that consumes runtime logic and distributes the workload evenly among the available IT resources
# Bare Metal Server
- A physical server assigned to a single tenant
- Can be modified based on the need for performance and security
- Isolates resources from other tenants and provides security to your business
- Can be configured for different cloud setups
# Cloud Implementations
# Types of Cloud
## Public Clouds
- Public Clouds are environments where network infra and resources are made accessible to the public
- Resources are partitioned and distributed amongst multiple customers or tenants
## Private Clouds
- Private Clouds environments are privately owned and hosted by an enterprise.
- Resources are generally made accessible to a private organization and their customers and partners
- Managed private clouds are deployed and fully managed by a third-party, reducing the IT staffing needs for the enterprise
- Dedicated private clouds are hosted on a public or private cloud to serve a particular department within an enterprise
## Hybrid Clouds
- Hybrid Clouds are cloud environments that appear as a single cloud although they are built from multiple clouds (connected through LANs, WANs, VPNs and/or APIs)
- Offer flexibility in deployment options by enabling workloads to move between private and public clouds based on computing needs
## Multi Clouds
- Multiclouds are cloud environments that offer more than one cloud service from more than one public cloud service provider
- Resources are deployed across different cloud availability zones and regions
- All Hybrid Clouds are Multi Clouds
# Top Public Cloud Providers
## Microsoft Azure
## Amazon Web Services
## Google Cloud
## Other Public Cloud Providers
## Cloud Connectors

70
databases/mariadb.md Normal file
View File

@ -0,0 +1,70 @@
# MariaDB Cheat-Sheet
MariaDB Server is one of the most popular open source relational databases. It's made by the original developers of MySQL and guaranteed to stay open source. It is part of most cloud offerings and the default in most Linux distributions.
It is built upon the values of performance, stability, and openness, and MariaDB Foundation ensures contributions will be accepted on technical merit. Recent new functionality includes advanced clustering with Galera Cluster 4, compatibility features with Oracle Database and Temporal Data Tables, allowing one to query the data as it stood at any point in the past.
Project Homepage: [MariaDB](https://mariadb.org/)
Documentation: [MariaDB Docs](https://mariadb.org/documentation/)
---
## Installation
### Install MariaDB on Debian/Ubuntu/Mint/Zorin/forks
```bash
sudo apt update
sudo apt install -y mariadb-server mycli --install-recommends
sudo mysql_secure_installation
```
### Install MariaDB on RHEL/Fedora/CentOS/Alma/Rocky
```bash
sudo dnf update
sudo dnf install -y mariadb-server mycli
sudo mysql_secure_installation
```
### Install MariaDB on Arch/Manjaro/Arco/forks
```bash
sudo pacman -Syyu
sudo pacman -S mariadb-server mycli --noconfirm
sudo mysql_secure_installation
```
### Deploy MariaDB in Docker
- [https://hub.docker.com/_/mariadb](https://hub.docker.com/_/mariadb)
- [https://docs.linuxserver.io/images/docker-mariadb/](https://docs.linuxserver.io/images/docker-mariadb/)
### Deploy MariaDB in Kubernetes
- [https://mariadb.org/start-mariadb-in-k8s/](https://mariadb.org/start-mariadb-in-k8s/)
- [https://kubedb.com/kubernetes/databases/run-and-manage-mariadb-on-kubernetes/](https://kubedb.com/kubernetes/databases/run-and-manage-mariadb-on-kubernetes/)
## Access Database from outside
Open `/etc/mysql/mariadb.conf.d/50-server.cnf` and change the line containing `bind-address` to `bind-address = 0.0.0.0`.
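Afterwards, restart the service for the change to take effect (the service name is `mariadb` on most distributions):
```bash
sudo systemctl restart mariadb
```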
---
## Create Administrative User
Access the MySQL command line by entering `mysql -u root -p` in the shell followed by the Database `root` password.
Create a new user `newuser` for the host `localhost` with a new `password`:
```sql
CREATE USER 'newuser'@'localhost' IDENTIFIED BY 'password';
```
Grant all permissions to the new user
```sql
GRANT ALL PRIVILEGES ON * . * TO 'newuser'@'localhost';
```
Update permissions
```sql
FLUSH PRIVILEGES;
```

88
databases/mysql.md Normal file
View File

@ -0,0 +1,88 @@
# MySQL Community Edition (CE) Cheat-Sheet
MySQL Community Edition is the freely downloadable version of the world's most popular open source database.
Project Homepage: [MySQL Community Edition](https://www.mysql.com/products/community/)
Documentation: [MySQL Docs](https://dev.mysql.com/doc/)
>_Editor's note:_ The MariaDB Project was forked from MySQL when [Sun Microsystems](https://en.wikipedia.org/wiki/Sun_Microsystems)' intellectual property [was acquired](https://en.wikipedia.org/wiki/Acquisition_of_Sun_Microsystems_by_Oracle_Corporation) by [Oracle](https://en.wikipedia.org/wiki/Oracle_Corporation). MariaDB still shares enormous inter-compatibility with MySQL functions and software interoperability, and performance at most scales is arguably indiscernible from MySQL CE. In this writer's opinion, it is **not beneficial** in most cases to favor Oracle's monetized option over the GPL's MariaDB alternative.
---
## Installation
### Install MySQL on Debian/Ubuntu/Mint/Zorin/Deb forks
>[!warning]
> Common Debian repositories don't populate MySQL CE and require a vendor-provided repository available [here](https://dev.mysql.com/downloads/repo/apt/). The repo file in this example is current as of 2024-01-20.
```bash
sudo apt update
sudo apt install -y lsb-release gnupg
wget https://dev.mysql.com/get/mysql-apt-config_0.8.29-1_all.deb
sudo dpkg -i mysql-apt-config_0.8.29-1_all.deb
sudo mkdir /var/lib/mysql
sudo apt update
sudo apt install -y mysql-community-server mysql-common mycli --install-recommends
sudo mysql_secure_installation
```
### Install MySQL on RHEL/Fedora/CentOS/Alma/Rocky
```bash
sudo dnf update
sudo dnf install -y mysql-server mysql-common mycli
sudo mysql_secure_installation
```
### Install MySQL on Arch/Manjaro/Arco/ Arch forks
>[!warning]
> Common Arch repositories don't populate MySQL CE and the vendor doesn't provide one. The packages are available in the [AUR](https://aur.archlinux.org/) but this is \***not recommended**\* for a production environment!!
##### Enable the AUR (if not already available)
>[!notice]
>_This **must** be done by a non-root user!_
```shell
git clone https://aur.archlinux.org/yay.git
cd yay
makepkg -si
```
##### Proceed with installation from the AUR
```bash
yay -Syyu
yay -S mysql mysql-utilities mycli --noconfirm
sudo mysql_secure_installation
```
### Deploy MySQL in Docker
- [https://hub.docker.com/_/mysql/](https://hub.docker.com/_/mysql/)
- [https://dev.mysql.com/doc/mysql-installation-excerpt/8.0/en/docker-mysql-getting-started.html](https://dev.mysql.com/doc/mysql-installation-excerpt/8.0/en/docker-mysql-getting-started.html)
### Deploy MySQL in Kubernetes
- [https://dev.mysql.com/doc/mysql-operator/en/](https://dev.mysql.com/doc/mysql-operator/en/)
- [https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/](https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/)
---
## Create Administrative User
Access the MySQL command line by entering `mysql -u root -p` in the shell followed by the Database `root` password.
Create a new user `newuser` for the host `localhost` with a new `password`:
```sql
CREATE USER 'newuser'@'localhost' IDENTIFIED BY 'password';
```
Grant all permissions to the new user:
```sql
GRANT ALL PRIVILEGES ON *.* TO 'newuser'@'localhost';
```
Reload the privilege tables to apply the changes:
```sql
FLUSH PRIVILEGES;
```
206
databases/postgres.md Normal file
View File
@ -0,0 +1,206 @@
# PostgreSQL Cheat-Sheet
[PostgreSQL](https://www.postgresql.org/), also known as Postgres, is a free and open-source relational database management system. PostgreSQL features transactions with Atomicity, Consistency, Isolation, Durability (ACID) properties, automatically updatable views, materialized views, triggers, foreign keys, and stored procedures. It is designed to handle a range of workloads, from single machines to data warehouses or web services with many concurrent users.
---
## Installation
### Install PostgreSQL on Debian/Ubuntu/Mint/Zorin/forks
```bash
sudo apt update
sudo apt install -y postgresql postgresql-contrib postgresql-client
sudo systemctl status postgresql.service
```
### Install PostgreSQL on RHEL/Fedora/CentOS/Alma/Rocky
```bash
sudo dnf update
sudo dnf install -y postgresql-server
```
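On RHEL-family systems the data directory is not initialized automatically. A typical first start looks like this (a minimal sketch, assuming the `postgresql-setup` helper shipped with the distribution package):

```bash
sudo postgresql-setup --initdb
sudo systemctl enable --now postgresql
```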
### Install PostgreSQL on Arch/Manjaro/Arco/forks
```bash
sudo pacman -Syyu
sudo pacman -S postgresql --noconfirm
```
### Deploy PostgreSQL in Docker
- [https://hub.docker.com/_/postgres/](https://hub.docker.com/_/postgres/)
- [https://github.com/postgres/postgres](https://github.com/postgres/postgres)
### Deploy PostgreSQL on Kubernetes with Zalando Postgres Operator
Postgres is probably the most common database on cloud platforms and on Kubernetes environments. There are several so-called "Kubernetes Operators" which handle the deployment of Postgres clusters for you. One of them is the [Postgres Operator by Zalando](https://github.com/zalando/postgres-operator).
You can find some tutorials regarding deployment of the operator and how to work with it, in the link list below:
- [Deploy Zalando Postgres Operator on your Kubernetes cluster](https://thedatabaseme.de/2022/03/13/keep-the-elefants-in-line-deploy-zalando-operator-on-your-kubernetes-cluster/)
- [Configure Zalando Postgres Operator Backup with WAL-G](https://thedatabaseme.de/2022/03/26/backup-to-s3-configure-zalando-postgres-operator-backup-with-wal-g/)
- [Configure Zalando Postgres Operator Restore with WAL-G](https://thedatabaseme.de/2022/05/03/restore-and-clone-from-s3-configure-zalando-postgres-operator-restore-with-wal-g/)
---
## Connecting to Postgres
### Connect to local Postgres instance
A local connection (from the database server) can be done by the following command:
```sh
sudo -u postgres psql
```
### Connect to remote Postgres instance
Note that you first have to install the `postgresql-client` package (`postgresql` via Homebrew on macOS) on the client machine. A connection from a remote host can be made with the following command:
```sh
psql -h {pg_host} -U {username} -d {database} -p {port}
```
## Set password for postgres database user
The password for the `postgres` database user can be set with the psql shortcut `\password` or by `alter user postgres password 'Supersecret'`. A connection using the `postgres` user is still not possible from the "outside" due to the default settings in the `pg_hba.conf`.
### Update pg_hba.conf to allow postgres user connections with password
In order to allow connections of the `postgres` database user not using OS user authentication, you have to update the `pg_hba.conf` which can be found under `/etc/postgresql/12/main/pg_hba.conf`.
```sh
sudo vi /etc/postgresql/12/main/pg_hba.conf
...
local all postgres peer
...
```
Change the last column of the above line to `md5`.
```sh
local all postgres md5
```
A restart is required in order to apply the new configuration:
```sh
sudo systemctl restart postgresql
```
Now a connection from outside the database host is possible e.g.
```sh
psql -U postgres -d postgres -h databasehostname
```
## Creation of additional database users
A database user can be created by the following command:
```sql
create user myuser with encrypted password 'Supersecret';
CREATE ROLE
postgres=# \du
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
myuser | | {}
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
```
## Creation of additional databases
You can create new Postgres databases within an instance. To do so, use the `psql` command to log in (see above).
```sql
CREATE DATABASE dbname OWNER myuser;
CREATE DATABASE
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
dbname | myuser | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
```
You can leave out the `OWNER` clause; when doing so, the current user becomes the owner of the newly created database.
To change the owner of an existing database later, you can use the following command:
```sql
postgres=# alter database dbname owner to myuser;
ALTER DATABASE
```
## Backup and Restore
There are nearly endless combinations of tools and parameters to back up Postgres databases. Below you can find some examples using the Postgres built-in tools `pg_dump`, `pg_basebackup` and `pg_restore`.
### pg_dump / pg_dumpall
Using `pg_dump` or `pg_dumpall` enables you to export one or more PostgreSQL databases into a (SQL) script file or a custom archive file.
#### pg_dump
Using the `--create` option will include the SQL commands in the dump script that will create the database before importing it later. The `-Z 9` option in this example compresses the SQL script with the highest available compression rate (`0-9`).
```bash
pg_dump -h vmdocker -U awx -d awx --create -Z 9 -f /tmp/awx_dump.sql.gz
```
The following command creates a custom archive file from a database specified with `-d`. To export data in custom format, you have to specify the `-F c` option. Custom-format dumps have the benefit that they are compressed by default.
```bash
pg_dump -h {pg_host} -U {username} -d {database} -F c -f /pg_dump/dumpfile.dmp
```
Custom format files can only be restored by `pg_restore` (see below). A SQL dump can be restored by using `psql`.
```bash
psql -d newdb -f db.sql
```
A complete guide of `pg_dump` from the official documentation can be found [here](https://www.postgresql.org/docs/current/app-pgdump.html).
#### pg_dumpall
A full dump of all databases of a Postgres instance can be done with `pg_dumpall`. It will also include user creation information.
In contrast to `pg_dump`, you cannot choose between different output formats; `pg_dumpall` will always create a SQL script as output. Therefore,
you don't need `pg_restore` for restoring a "full" dump. Only `psql` is needed (see below).
```bash
pg_dumpall -h {pg_host} -U postgres > database.out
```
If you use password authentication it will ask for a password each time. It is convenient to have a `~/.pgpass` file or `PGPASSWORD` environment variable set.
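A `~/.pgpass` file contains one entry per line in the format `hostname:port:database:username:password` (`*` acts as a wildcard per field) and must not be readable by other users. A minimal sketch, reusing the example credentials from above:

```sh
echo '{pg_host}:5432:*:postgres:Supersecret' >> ~/.pgpass
chmod 600 ~/.pgpass
```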
So importing a full dump is really easy by the following `psql` command:
```bash
psql -h {pg_host} -f database.out -U postgres
```
A complete guide of `pg_dumpall` from the official documentation can be found [here](https://www.postgresql.org/docs/current/app-pg-dumpall.html).
### pg_restore
`pg_restore` can be used to restore custom file dumps created by `pg_dump`.
The following command will create the database (which has been dumped before).
```bash
pg_restore -h {pg_host} -U {pg_user} -d postgres --create -F c /tmp/db.dmp -v
```
A complete guide of `pg_restore` from the official documentation can be found [here](https://www.postgresql.org/docs/current/app-pgrestore.html).
12
databases/sqlite.md Normal file
View File
@ -0,0 +1,12 @@
# SQLite Cheat-Sheet
SQLite is a relational database contained in a C library. In contrast to many other databases, SQLite is not a client-server database engine. Rather, it's embedded into an end program.
SQLite generally follows the [PostgreSQL](databases/postgres.md) syntax but does not enforce type checking.
You can open a SQLite Database with `sqlite3 <filename>` directly.
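A quick illustration of the loose typing (the table and values below are made up for the example):

```sql
CREATE TABLE demo (id INTEGER, name TEXT);
INSERT INTO demo VALUES (1, 'alice');   -- matches the declared types
INSERT INTO demo VALUES ('two', 42);    -- also accepted: SQLite uses type affinity, not strict typing
SELECT * FROM demo;
```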
---
## Commands
| COMMAND | DESCRIPTION |
| --- | --- |
| `.help` | Show all commands |
| `.databases` | Show all existing databases |
| `.tables` | Show all tables |
| `.backup` | Back up the current database |
| `.quit` | Exit |
113
docker/docker-compose.md Normal file
View File
@ -0,0 +1,113 @@
# Docker-Compose
...
## Networking
By default, Docker-Compose creates a new network for the given compose file. You can change this behavior by defining custom networks in your compose file.
### Create and assign custom network
...
*Example:*
```yaml
networks:
custom-network:
services:
app:
networks:
- custom-network
```
### Use existing networks
If you want to use an existing Docker network for your compose files, you can add the `external: true` parameter in your compose file.
*Example:*
```yaml
networks:
existing-network:
external: true
```
## Volumes
Volumes are data storage objects that Docker containers can use for persistent storage.
### Create and map static volume(s)
```yaml
volumes:
my-volume:
services:
app:
volumes:
- my-volume:/path-in-container
```
These volumes are stored in `/var/lib/docker/volumes`.
### Create volume that is a CIFS mount to external share
```yaml
# Variables that will need to be changed:
# <PUID> - User id for folder/file permissions
# <PGID> - Group id for folder/file permissions
# <PATH_TO_CONFIG> - Path where Unmanic will store config files
# <PATH_TO_ENCODE_CACHE> - Cache path for in-progress encoding tasks
# <REMOTE_IP> - Remote IP address of CIFS mount
# <PATH_TO_LIBRARY> - Path in remote machine to be mounted as your library
# <USERNAME> - Remote mount username
# <PASSWORD> - Remote mount password
#
---
version: '2.4'
services:
app:
container_name: app_name
image: repo/app:tag
ports:
- 1234:1234
environment:
- PUID=<PUID>
- PGID=<PGID>
volumes:
- cifs_mount:/path-in-container
volumes:
cifs_mount:
driver: local
driver_opts:
type: cifs
device: //<REMOTE_IP>/<PATH_TO_LIBRARY>
o: "username=<USERNAME>,password=<PASSWORD>,vers=3.0,uid=<PUID>,gid=<PGID>"
```
## Environment Variables
Environment variables can be defined in the `environment` section of a service in a Docker Compose file.
### Define environment variables
```yaml
services:
app:
environment:
- ENV_VAR=value
```
### Interpolate environment variables
| Variable | Description |
| --- | --- |
| `${ENV_VAR}` | Value of `ENV_VAR` |
| `${ENV_VAR:-default}` | Value of `ENV_VAR` if set and non-empty, otherwise `default`|
| `${ENV_VAR-default}` | Value of `ENV_VAR` if set, otherwise `default`|
| `${ENV_VAR:?error}` | Value of `ENV_VAR` if set and non-empty, otherwise exit with `error` |
| `${ENV_VAR?error}` | Value of `ENV_VAR` if set, otherwise exit with `error` |
| `${ENV_VAR:+replacement}` | `replacement` if `ENV_VAR` is set and non-empty, otherwise empty |
| `${ENV_VAR+replacement}` | `replacement` if `ENV_VAR` is set, otherwise empty |
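A service definition might combine several of these forms; `TAG` and `DB_PASSWORD` below are made-up variable names:

```yaml
services:
  app:
    image: "repo/app:${TAG:-latest}"   # falls back to "latest" if TAG is unset or empty
    environment:
      - DB_PASSWORD=${DB_PASSWORD:?DB_PASSWORD must be set}   # aborts with an error if unset
```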
15
docker/docker-desktop.md Normal file
View File
@ -0,0 +1,15 @@
# Docker Desktop
Docker Desktop is a software application that enables developers to build, package, and run applications using Docker containers on their local machines. It provides an easy-to-use graphical interface and includes the necessary tools and components for managing Docker containers, such as the Docker engine, images, and networking capabilities.
## Installing Docker Desktop
Docker Desktop is available for Windows and macOS. You can download the installer from the [Docker website](https://www.docker.com/products/docker-desktop).
## Troubleshooting
The `com.docker.diagnose check` command can be used to run a diagnostic check on Docker Desktop for Mac.
```sh
/Applications/Docker.app/Contents/MacOS/com.docker.diagnose check
```
90
docker/docker-file.md Normal file
View File
@ -0,0 +1,90 @@
# Dockerfile
## What is a Dockerfile?
Docker builds images automatically by reading the instructions from a Dockerfile which is a text file that contains all commands, in order, needed to build a given image. A Dockerfile adheres to a specific format and set of instructions which you can find at [Dockerfile reference](https://docs.docker.com/engine/reference/builder/).
A Docker image consists of read-only layers each of which represents a Dockerfile instruction. The layers are stacked and each one is a delta of the changes from the previous layer.
```dockerfile
# syntax=docker/dockerfile:1
FROM ubuntu:22.04
COPY . /app
RUN make /app
CMD python /app/app.py
```
In the example above, each instruction creates one layer:
* FROM creates a layer from the ubuntu:22.04 Docker image.
* COPY adds files from your Docker client's current directory.
* RUN builds your application with make.
* CMD specifies what command to run within the container.
## BuildKit
BuildKit supports loading frontends dynamically from container images. To use an external Dockerfile frontend, the first line of your Dockerfile needs to set the syntax directive pointing to the specific image you want to use:
```
# syntax=docker/dockerfile:1
```
## Here-Documents (new feature in 1.4)
```dockerfile
RUN <<EOF
apt-get update
apt-get upgrade -y
apt-get install -y ...
EOF
```
## Example running a multi-line script
```dockerfile
# syntax=docker/dockerfile:1
FROM debian
RUN <<EOT bash
set -ex
apt-get update
apt-get install -y vim
EOT
```
## Multi-Stage Dockerfile Example (SpringBoot)
```dockerfile
# syntax=docker/dockerfile:1
#Start with a base image containing Maven & Java runtime
FROM maven:3.9.6-eclipse-temurin-21-alpine AS build
# Information about who maintains the image (MAINTAINER is deprecated in favor of LABEL)
LABEL maintainer="John Doe"
ENV HOME=/home/app
COPY src /home/app/src
COPY pom.xml $HOME
RUN mkdir -p /root/.m2 \
&& mkdir /root/.m2/repository
# Subsequent runs will use local dependencies and execute much faster. (https://www.baeldung.com/ops/docker-cache-maven-dependencies)
RUN --mount=type=cache,target=/root/.m2 mvn -f $HOME/pom.xml clean package -DskipTests
#
# Package stage
#
FROM eclipse-temurin:21.0.2_13-jre-alpine
VOLUME /tmp
# Alpine provides BusyBox addgroup/adduser instead of groupadd/useradd
RUN addgroup -g 2000 mygroup && adduser -D -u 2000 -G mygroup -h /home/myuser myuser
# Add the application's jar to the container (ENV values don't carry across stages, so use the explicit path)
COPY --from=build /home/app/target/demo-0.0.1-SNAPSHOT.jar /usr/local/lib/demo.jar
USER myuser
EXPOSE 8080
#execute the application
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/usr/local/lib/demo.jar"]
```
148
docker/docker.md Normal file
View File
@ -0,0 +1,148 @@
# Docker
Docker is a containerization platform that encapsulates an application and its dependencies into a container, ensuring consistent operation across different computing environments. It leverages OS-level virtualization to deliver software in packages called containers, providing isolation and resource efficiency, and facilitating CI/CD practices by streamlining deployment and scaling.
## Installation
Docker can be installed on different operating systems. For local workstations, Docker Desktop is the recommended installation. For servers, Docker Engine is the recommended installation.
### Docker Desktop
Docker Desktop is a software application that enables developers to build, package, and run applications using Docker containers on their local machines. It provides an easy-to-use graphical interface and includes the necessary tools and components for managing Docker containers, such as the Docker engine, images, and networking capabilities.
For more information, see [Docker Desktop](docker-desktop.md)
### Install Docker Engine
One click installation script:
```sh
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```
Run docker as non root user:
```sh
sudo groupadd docker
sudo usermod -aG docker $USER
```
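The group change only applies to new login sessions. Either log out and back in, or activate it for the current shell:

```sh
newgrp docker
```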
For more information, see [Install Docker Engine](https://docs.docker.com/engine/install/)
## Using Docker
### Running Containers
| COMMAND | DESCRIPTION |
| --- | --- |
| `docker run <image>` | Start a new container from an image |
| `docker run -it <image>` | Start a new container in interactive mode |
| `docker create <image>` | Create a new container |
| `docker start <container>` | Start a container |
| `docker stop <container>` | Graceful stop a container |
| `docker kill <container>` | Kill (SIGKILL) a container |
| `docker restart <container>` | Graceful stop and restart a container |
| `docker pause <container>` | Suspend a container |
| `docker unpause <container>` | Resume a container |
| `docker rm <container>` | Destroy a container |
### Container Bulk Management
| COMMAND | DESCRIPTION |
| --- | --- |
| `docker stop $(docker ps -q)` | To stop all the running containers |
| `docker stop $(docker ps -a -q)` | To stop all the stopped and running containers |
| `docker kill $(docker ps -q)` | To kill all the running containers |
| `docker kill $(docker ps -a -q)` | To kill all the stopped and running containers |
| `docker restart $(docker ps -q)` | To restart all running containers |
| `docker restart $(docker ps -a -q)` | To restart all the stopped and running containers |
| `docker rm $(docker ps -q)` | To destroy all running containers |
| `docker rm $(docker ps -a -q)` | To destroy all the stopped and running containers |
| `docker pause $(docker ps -q)` | To pause all running containers |
| `docker pause $(docker ps -a -q)` | To pause all the stopped and running containers |
| `docker start $(docker ps -q)` | To start all running containers |
| `docker start $(docker ps -a -q)` | To start all the stopped and running containers |
| `docker rm -vf $(docker ps -a -q)` | To delete all containers including its volumes use |
| `docker rmi -f $(docker images -a -q)` | To delete all the images |
| `docker system prune` | To delete stopped containers, dangling images, unused networks and build cache |
| `docker system prune -a` | To additionally delete all unused images, not just dangling ones |
| `docker system prune --volumes` | To additionally delete unused volumes |
### Inspect Containers
| COMMAND | DESCRIPTION |
| --- | --- |
| `docker ps` | List running containers |
| `docker ps --all` | List all containers, including stopped |
| `docker logs <container>` | Show a container's output |
| `docker logs -f <container>` | Follow a container's output |
| `docker top <container>` | List the processes running in a container |
| `docker diff <container>` | Show the differences with the image (modified files) |
| `docker inspect <container>` | Show information about a container (JSON formatted) |
### Executing Commands
| COMMAND | DESCRIPTION |
| --- | --- |
| `docker attach <container>` | Attach to a container |
| `docker cp <container>:<container-path> <host-path>` | Copy files from the container |
| `docker cp <host-path> <container>:<container-path>` | Copy files into the container |
| `docker export <container>` | Export the content of the container (tar archive) |
| `docker exec <container>` | Run a command inside a container |
| `docker exec -it <container> /bin/bash` | Open an interactive shell inside a container (there is no bash in some images, use /bin/sh) |
| `docker wait <container>` | Wait until the container terminates and return the exit code |
### Images
| COMMAND | DESCRIPTION |
| --- | --- |
| `docker image ls` | List all local images |
| `docker history <image>` | Show the image history |
| `docker inspect <image>` | Show information (json formatted) |
| `docker tag <image> <tag>` | Tag an image |
| `docker commit <container> <image>` | Create an image (from a container) |
| `docker import <url>` | Create an image (from a tarball) |
| `docker rmi <image>` | Delete images |
| `docker pull <user>/<repository>:<tag>` | Pull an image from a registry |
| `docker push <user>/<repository>:<tag>` | Push an image to a registry |
| `docker search <term>` | Search an image on the official registry |
| `docker login` | Login to a registry |
| `docker logout` | Logout from a registry |
| `docker save <user>/<repository>:<tag>` | Export an image/repo as a tarball |
| `docker load` | Load images from a tarball |
### Volumes
| COMMAND | DESCRIPTION |
| --- | --- |
| `docker volume ls` | List all volumes |
| `docker volume create <volume>` | Create a volume |
| `docker volume inspect <volume>` | Show information (json formatted) |
| `docker volume rm <volume>` | Destroy a volume |
| `docker volume ls --filter="dangling=true"` | List all dangling volumes (not referenced by any container) |
| `docker volume prune` | Delete all volumes (not referenced by any container) |
### Backup a container
Back up Docker data from inside container volumes and package it in a tarball archive.
`docker run --rm --volumes-from <container> -v $(pwd):/backup busybox tar cvfz /backup/backup.tar <container-path>`
An automated backup can be done also by this [Ansible playbook](https://github.com/thedatabaseme/docker_backup).
The output is also a (compressed) tar. The playbook can also manage the backup retention.
So older backups will get deleted automatically.
To also create and back up the container configuration itself, you can use `docker-replay` for that. If you lose
the entire container, you can recreate it with the export from `docker-replay`.
A more detailed tutorial on how to use docker-replay can be found [here](https://thedatabaseme.de/2022/03/18/shorty-generate-docker-run-commands-using-docker-replay/).
### Restore container from backup
Restore the volume with a tarball archive.
`docker run --rm --volumes-from <container> -v $(pwd):/backup busybox sh -c "cd <container-path> && tar xvf /backup/backup.tar --strip 1"`
## Troubleshooting
### Networking
`docker run --name netshoot --rm -it nicolaka/netshoot /bin/bash`
48
hardware/aspm.md Normal file
View File
@ -0,0 +1,48 @@
# Active State Power Management
Active State Power Management (ASPM) is a power management technology used in computer systems to control the power state of various hardware components, such as graphics cards, network adapters, and storage controllers, reducing power consumption and improving energy efficiency.
ASPM is typically enabled by default in modern computer systems. However, it can be manually enabled or disabled through the system BIOS or UEFI firmware settings. The specific steps to access and modify these settings will vary depending on the manufacturer and model of the system.
---
## How to use ASPM
If you suspect that ASPM is causing issues on your system, you can try disabling it temporarily to see if the issue goes away. To do this, you can add the `pcie_aspm=off` kernel parameter to your boot options.
1. Edit the GRUB configuration file `/etc/default/grub` with a text editor such as `nano` or `vi`.
```
$ sudo nano /etc/default/grub
```
2. Find the line that starts with `GRUB_CMDLINE_LINUX_DEFAULT` and add the `pcie_aspm=off` kernel parameter to the existing parameters between the quotes. For example:
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pcie_aspm=off"
```
3. Save the file and exit the text editor.
4. Update the GRUB configuration by running the following command:
```
$ sudo update-grub
```
5. Reboot your system for the changes to take effect.
After disabling ASPM, you can use the `lspci -vv` command to check if the ASPM settings have been disabled for your devices. If you still experience issues, you may need to investigate further or seek assistance from a qualified technician.
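For example, you can filter the verbose output for the ASPM status lines (assuming `lspci` from the pciutils package; root privileges may be needed to read the full capability details):

```sh
sudo lspci -vv | grep -i aspm
```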
---
## ASPM Force
> **Warning:** Forcing ASPM can cause system instability or decreased performance if not done correctly. You should only force ASPM if you have a specific reason to do so and have thoroughly tested the system to ensure that it is stable and performing as expected.
That being said, if you have a specific reason to force ASPM and have thoroughly tested the system, you can use the following steps to force ASPM on a Linux system:
Add the `pcie_aspm=force` kernel parameter to the existing parameters between the quotes. For example:
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pcie_aspm=force"
```
20
infra/proxmox-api.md Normal file
View File
@ -0,0 +1,20 @@
# Proxmox API Authentication
## Create an API Token on Proxmox
To create a new API Token for your `user` in Proxmox, follow these steps:
1. Open the Proxmox Web UI and navigate to the **'Datacenter'** in the **'Server View'** menu.
2. Select **'API Token'** in the **'Permissions'** menu.
3. Click on **'Add'**.
4. Select the `user`, and add a **'Token ID'**.
5. (Optional) Disable `Privilege Separation`, and set an `Expire` date.
6. Click on **'Add'**.
## Check the API Token
To test the API Token, you can use the following command:
```sh
curl -H "Authorization: PVEAPIToken=root@pam!monitoring=aaaaaaaaa-bbb-cccc-dddd-ef0123456789" https://your-proxmox-url:8006/api2/json/
```
315
infra/proxmox-certificate-management.md Normal file
View File
@ -0,0 +1,315 @@
# Proxmox Certificate Management
## Certificates for Intra-Cluster Communication
Each Proxmox VE cluster creates by default its own (self-signed) Certificate
Authority (CA) and generates a certificate for each node which gets signed by
the aforementioned CA. These certificates are used for encrypted communication
with the cluster's pveproxy service and the Shell/Console feature if SPICE is used.
The CA certificate and key are stored in the Proxmox Cluster File System (pmxcfs).
## Certificates for API and Web GUI
The REST API and web GUI are provided by the pveproxy service, which runs on each node.
You have the following options for the certificate used by pveproxy:
* By default the node-specific certificate in `/etc/pve/nodes/NODENAME/pve-ssl.pem` is used. This certificate is signed by the cluster CA and therefore not automatically trusted by browsers and operating systems.
* Use an externally provided certificate (e.g. signed by a commercial CA).
* Use ACME (Let's Encrypt) to get a trusted certificate with automatic renewal, this is also integrated in the Proxmox VE API and web interface.
For options 2 and 3 the file `/etc/pve/local/pveproxy-ssl.pem` (and
`/etc/pve/local/pveproxy-ssl.key`, which needs to be without password) is used.
Keep in mind that `/etc/pve/local` is a node specific symlink to
`/etc/pve/nodes/NODENAME`.
Certificates are managed with the Proxmox VE Node management command
(see the `pvenode(1)` manpage).
Do not replace or manually modify the automatically generated node
certificate files in `/etc/pve/local/pve-ssl.pem` and
`/etc/pve/local/pve-ssl.key` or the cluster CA files in
`/etc/pve/pve-root-ca.pem` and `/etc/pve/priv/pve-root-ca.key`.
## Upload Custom Certificate
If you already have a certificate which you want to use for a Proxmox VE node
you can upload that certificate simply over the web interface. Note that the
certificates key file, if provided, mustn't be password protected.
## Trusted certificates via Let's Encrypt (ACME)
Proxmox VE includes an implementation of the Automatic Certificate
Management Environment ACME protocol, allowing Proxmox VE admins to
use an ACME provider like Let's Encrypt for easy setup of TLS certificates
which are accepted and trusted on modern operating systems and web browsers
out of the box.
Currently, the two ACME endpoints implemented are the
Let's Encrypt (LE) production and its staging
environment. Our ACME client supports validation of http-01 challenges using
a built-in web server and validation of dns-01 challenges using a DNS plugin
supporting all the DNS API endpoints `acme.sh` does.
### ACME Account
You need to register an ACME account per cluster with the endpoint you want to
use. The email address used for that account will serve as contact point for
renewal-due or similar notifications from the ACME endpoint.
You can register and deactivate ACME accounts over the web interface
`Datacenter -> ACME` or using the `pvenode` command line tool.
```shell
pvenode acme account register account-name mail@example.com
```
Because of rate-limits you should use LE staging for experiments or if you use
ACME for the first time.
### ACME Plugins
The ACME plugins task is to provide automatic verification that you, and thus
the Proxmox VE cluster under your operation, are the real owner of a domain. This is
the basis building block for automatic certificate management.
The ACME protocol specifies different types of challenges, for example the
http-01 where a web server provides a file with a certain content to prove
that it controls a domain. Sometimes this isn't possible, either because of
technical limitations or if the address of a record to is not reachable from
the public internet. The dns-01 challenge can be used in these cases. This
challenge is fulfilled by creating a certain DNS record in the domain's zone.
Proxmox VE supports both of those challenge types out of the box, you can configure
plugins either over the web interface under `Datacenter -> ACME`, or using the
`pvenode acme plugin add` command.
ACME Plugin configurations are stored in `/etc/pve/priv/acme/plugins.cfg`.
A plugin is available for all nodes in the cluster.
### Node Domains
Each domain is node specific. You can add new or manage existing domain entries
under `Node -> Certificates`, or using the `pvenode config` command.
After configuring the desired domain(s) for a node and ensuring that the
desired ACME account is selected, you can order your new certificate over the
web-interface. On success the interface will reload after 10 seconds.
Renewal will happen automatically.
## ACME HTTP Challenge Plugin
There is always an implicitly configured standalone plugin for validating
http-01 challenges via the built-in webserver spawned on port 80.
The name `standalone` means that it can provide the validation on its
own, without any third-party service. So, this plugin also works for cluster
nodes.
There are a few prerequisites to use it for certificate management with Let's
Encrypt's ACME.
* You have to accept the ToS of Let's Encrypt to register an account.
* Port 80 of the node needs to be reachable from the internet.
* There must be no other listener on port 80.
* The requested (sub)domain needs to resolve to a public IP of the Node.
## ACME DNS API Challenge Plugin
On systems where external access for validation via the http-01 method is
not possible or desired, it is possible to use the dns-01 validation method.
This validation method requires a DNS server that allows provisioning of TXT
records via an API.
### Configuring ACME DNS APIs for validation
Proxmox VE re-uses the DNS plugins developed for the
[acme.sh project](https://github.com/acmesh-official/acme.sh). Please
refer to its documentation for details on configuration of specific APIs.
The easiest way to configure a new plugin with the DNS API is using the web
interface (`Datacenter -> ACME`).
Choose DNS as challenge type. Then you can select your API provider, enter
the credential data to access your account over their API.
See the `acme.sh`
[How to use DNS API](https://github.com/acmesh-official/acme.sh/wiki/dnsapi#how-to-use-dns-api)
wiki for more detailed information about getting API credentials for your
provider.
As there are many DNS providers and API endpoints Proxmox VE automatically generates
the form for the credentials for some providers. For the others you will see a
bigger text area; simply copy all the credential KEY=VALUE pairs in there.
## DNS Validation through CNAME Alias
A special alias mode can be used to handle the validation on a different
domain/DNS server, in case your primary/real DNS does not support provisioning
via an API. Manually set up a permanent CNAME record for
`_acme-challenge.domain1.example` pointing to `_acme-challenge.domain2.example`
and set the alias property in the Proxmox VE node configuration file to
`domain2.example` to allow the DNS server of `domain2.example` to validate all
challenges for `domain1.example`.
### Combination of Plugins
Combining http-01 and dns-01 validation is possible in case your node is
reachable via multiple domains with different requirements / DNS provisioning
capabilities. Mixing DNS APIs from multiple providers or instances is also
possible by specifying different plugin instances per domain.
Accessing the same service over multiple domains increases complexity and
should be avoided if possible.
## Automatic renewal of ACME certificates
If a node has been successfully configured with an ACME-provided certificate
(either via `pvenode` or via the GUI), the certificate will be automatically
renewed by the `pve-daily-update.service`. Currently, renewal will be attempted
if the certificate has expired already, or will expire in the next 30 days.
## ACME Examples with pvenode
*Example*: Sample `pvenode` invocation for using Let's Encrypt certificates
```sh
root@proxmox:~# pvenode acme account register default mail@example.invalid
Directory endpoints:
0) Let's Encrypt V2 (https://acme-v02.api.letsencrypt.org/directory)
1) Let's Encrypt V2 Staging (https://acme-staging-v02.api.letsencrypt.org/directory)
2) Custom
Enter selection: 1
Terms of Service: https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf
Do you agree to the above terms? [y|N]y
...
Task OK
root@proxmox:~# pvenode config set --acme domains=example.invalid
root@proxmox:~# pvenode acme cert order
Loading ACME account details
Placing ACME order
...
Status is 'valid'!
All domains validated!
...
Downloading certificate
Setting pveproxy certificate and key
Restarting pveproxy
Task OK
```
*Example*: Setting up the OVH API for validating a domain
The account registration steps are the same no matter which plugins are
used, and are not repeated here.
OVH_AK and OVH_AS need to be obtained from OVH according to the OVH
API documentation.
First you need to get all information so you and Proxmox VE can access the API.
```sh
root@proxmox:~# cat /path/to/api-token
OVH_AK=XXXXXXXXXXXXXXXX
OVH_AS=YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
root@proxmox:~# source /path/to/api-token
root@proxmox:~# curl -XPOST -H"X-Ovh-Application: $OVH_AK" -H "Content-type: application/json" \
https://eu.api.ovh.com/1.0/auth/credential -d '{
"accessRules": [
{"method": "GET","path": "/auth/time"},
{"method": "GET","path": "/domain"},
{"method": "GET","path": "/domain/zone/*"},
{"method": "GET","path": "/domain/zone/*/record"},
{"method": "POST","path": "/domain/zone/*/record"},
{"method": "POST","path": "/domain/zone/*/refresh"},
{"method": "PUT","path": "/domain/zone/*/record/"},
{"method": "DELETE","path": "/domain/zone/*/record/*"}
]
}'
{"consumerKey":"ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ","state":"pendingValidation","validationUrl":"https://eu.api.ovh.com/auth/?credentialToken=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"}
(open validation URL and follow instructions to link Application Key with account/Consumer Key)
root@proxmox:~# echo "OVH_CK=ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ" >> /path/to/api-token
```
Now you can setup the ACME plugin:
```sh
root@proxmox:~# pvenode acme plugin add dns example_plugin --api ovh --data /path/to/api_token
root@proxmox:~# pvenode acme plugin config example_plugin
┌────────┬──────────────────────────────────────────┐
│ key │ value │
╞════════╪══════════════════════════════════════════╡
│ api │ ovh │
├────────┼──────────────────────────────────────────┤
│ data │ OVH_AK=XXXXXXXXXXXXXXXX │
│ │ OVH_AS=YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY │
│ │ OVH_CK=ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ │
├────────┼──────────────────────────────────────────┤
│ digest │ 867fcf556363ca1bea866863093fcab83edf47a1 │
├────────┼──────────────────────────────────────────┤
│ plugin │ example_plugin │
├────────┼──────────────────────────────────────────┤
│ type │ dns │
└────────┴──────────────────────────────────────────┘
```
At last you can configure the domain you want to get certificates for and
place the certificate order for it:
```sh
root@proxmox:~# pvenode config set -acmedomain0 example.proxmox.com,plugin=example_plugin
root@proxmox:~# pvenode acme cert order
Loading ACME account details
Placing ACME order
Order URL: https://acme-staging-v02.api.letsencrypt.org/acme/order/11111111/22222222
Getting authorization details from 'https://acme-staging-v02.api.letsencrypt.org/acme/authz-v3/33333333'
The validation for example.proxmox.com is pending!
[Wed Apr 22 09:25:30 CEST 2020] Using OVH endpoint: ovh-eu
[Wed Apr 22 09:25:30 CEST 2020] Checking authentication
[Wed Apr 22 09:25:30 CEST 2020] Consumer key is ok.
[Wed Apr 22 09:25:31 CEST 2020] Adding record
[Wed Apr 22 09:25:32 CEST 2020] Added, sleep 10 seconds.
Add TXT record: _acme-challenge.example.proxmox.com
Triggering validation
Sleeping for 5 seconds
Status is 'valid'!
[Wed Apr 22 09:25:48 CEST 2020] Using OVH endpoint: ovh-eu
[Wed Apr 22 09:25:48 CEST 2020] Checking authentication
[Wed Apr 22 09:25:48 CEST 2020] Consumer key is ok.
Remove TXT record: _acme-challenge.example.proxmox.com
All domains validated!
Creating CSR
Checking order status
Order is ready, finalizing order
valid!
Downloading certificate
Setting pveproxy certificate and key
Restarting pveproxy
Task OK
```
### Example: Switching from the staging to the regular ACME directory
Changing the ACME directory for an account is unsupported, but as Proxmox VE
supports more than one account you can just create a new one with the
production (trusted) ACME directory as endpoint. You can also deactivate the
staging account and recreate it.
*Example*: Changing the default ACME account from the staging to the production directory using `pvenode`
```sh
root@proxmox:~# pvenode acme account deactivate default
Renaming account file from '/etc/pve/priv/acme/default' to '/etc/pve/priv/acme/_deactivated_default_4'
Task OK
root@proxmox:~# pvenode acme account register default example@proxmox.com
Directory endpoints:
0) Let's Encrypt V2 (https://acme-v02.api.letsencrypt.org/directory)
1) Let's Encrypt V2 Staging (https://acme-staging-v02.api.letsencrypt.org/directory)
2) Custom
Enter selection: 0
Terms of Service: https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf
Do you agree to the above terms? [y|N]y
...
Task OK
```
View File
@ -0,0 +1,66 @@
# Proxmox Terraform Integration
You can use [Terraform](tools/terraform.md) to automate certain tasks on [Proxmox](infra/proxmox.md). This allows you to manage virtual machines and lxc containers with infrastructure-as-code. We're using the third-party plugin [telmate/terraform-provider-proxmox](https://github.com/Telmate/terraform-provider-proxmox).
## Authenticate to Proxmox
### Create an API Token on Proxmox
To create a new API Token for your `user` in Proxmox, follow the steps described in [Proxmox API Authentication](proxmox-api.md).
### Add Provider config to Terraform
```hcl
terraform {
required_version = ">= 0.13.0"
required_providers {
proxmox = {
source = "telmate/proxmox"
version = ">=2.9.14"
}
}
}
```
```hcl
variable "PROXMOX_URL" {
type = string
}
variable "PROXMOX_USER" {
type = string
}
variable "PROXMOX_TOKEN" {
type = string
sensitive = true
}
provider "proxmox" {
pm_api_url = var.PROXMOX_URL
pm_api_token_id = var.PROXMOX_USER
pm_api_token_secret = var.PROXMOX_TOKEN
pm_tls_insecure = false
}
```
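With the provider configured, a minimal VM resource might look like the following sketch. The attribute values (node, template and VM names) are assumptions to adapt to your environment:

```hcl
resource "proxmox_vm_qemu" "example" {
  name        = "srv-example-1"       # hypothetical VM name
  target_node = "pve-node-1"          # hypothetical Proxmox node
  clone       = "debian-12-template"  # hypothetical VM template to clone
  cores       = 2
  memory      = 2048
}
```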
## Templates
WIP
## Useful commands
### Import existing virtual machines to Terraform
Existing virtual machines can be imported into the Terraform state file with the following command. Make sure you have created a corresponding **Resource** in the **Terraform File**.
```sh
terraform import <resourcetype.resourcename> <id>
```
In the telmate/terraform-provider-proxmox, the id needs to be set according to `<node>/<type>/<vmid>`, like in the following example.
```sh
terraform import proxmox_vm_qemu.srv-prod-1 prx-prod-1/proxmox_vm_qemu/102
```
189
infra/proxmox.md Normal file
View File
@ -0,0 +1,189 @@
# Proxmox Cheat-Sheet
Proxmox Virtual Environment (Proxmox VE or PVE) is open-source software for hyper-converged infrastructure. It is a hosted hypervisor that can run operating systems including Linux and Windows on x64 hardware. It is a Debian-based Linux distribution with a modified Ubuntu LTS kernel and allows deployment and management of virtual machines and containers. Proxmox VE includes a web-based management interface and command-line tools, and provides a REST API for third-party tools. Two types of virtualization are supported: container-based with LXC (starting from version 4.0, replacing OpenVZ used in versions up to 3.4), and full virtualization with KVM.
Proxmox VE is licensed under the GNU Affero General Public License, version 3.
Repository: [https://git.proxmox.com](https://git.proxmox.com)
Website: [https://pve.proxmox.com](https://pve.proxmox.com)
## VM Management
| Command | Command Description |
|---|---|
| `qm list` | list VMs |
| `qm create VM_ID` | Create or restore a virtual machine. |
| `qm start VM_ID` | Start a VM |
| `qm suspend VM_ID` | Suspend virtual machine. |
| `qm shutdown VM_ID` | Shutdown a VM |
| `qm reboot VM_ID` | Reboot a VM |
| `qm reset VM_ID` | Reset a VM |
| `qm stop VM_ID` | Stop a VM |
| `qm destroy VM_ID` | Destroy the VM and all used/owned volumes. |
| `qm monitor VM_ID` | Enter Qemu Monitor interface. |
| `qm pending VM_ID` | Get the virtual machine configuration with both current and pending values. |
| `qm sendkey VM_ID YOUR_KEY_EVENT [OPTIONS]` | Send key event to virtual machine. |
| `qm showcmd VM_ID [OPTIONS]` | Show command line used to start the VM (debug info). |
| `qm unlock VM_ID` | Unlock the VM |
| `qm clone VM_ID NEW_VM_ID` | Clone a VM |
| `qm migrate VM_ID TARGET_NODE` | Migrate a VM |
| `qm status VM_ID` | Show VM status |
| `qm cleanup VM_ID CLEAN_SHUTDOWN GUEST_REQUESTED` | Clean up resources for a VM |
| `qm template VM_ID [OPTIONS]` | Create a Template |
| `qm set VM_ID [OPTIONS]` | Set virtual machine options (synchronous API) |
### Cloudinit
| Command | Command Description |
|---|---|
| `qm cloudinit dump VM_ID VM_TYPE` | Get automatically generated cloudinit config. |
| `qm cloudinit pending VM_ID` | Get the cloudinit configuration with both current and pending values. |
| `qm cloudinit update VM_ID` | Regenerate and change cloudinit config drive. |
### Disk
| Command | Command Description |
|---|---|
| `qm disk import VM_ID TARGET_SOURCE TARGET_STORAGE` | Import an external disk image as an unused disk in a VM. |
| `qm disk move VM_ID VM_DISK [STORAGE] [OPTIONS]` | Move volume to different storage or to a different VM. |
| `qm disk rescan [OPTIONS]` | Rescan all storages and update disk sizes and unused disk images. |
| `qm disk resize VM_ID VM_DISK SIZE [OPTIONS]` | Extend volume size. |
| `qm disk unlink VM_ID --IDLIST STRING [OPTIONS]` | Unlink/delete disk images. |
| `qm rescan` | Rescan volumes. |
### Snapshot
| Command | Command Description |
|---|---|
| `qm listsnapshot VM_ID` | List all snapshots. |
| `qm snapshot VM_ID SNAPNAME` | Snapshot a VM. |
| `qm delsnapshot VM_ID SNAPNAME` | Delete a snapshot. |
| `qm rollback VM_ID SNAPNAME` | Rollback a snapshot. |
| `qm terminal VM_ID [OPTIONS]` | Open a terminal using a serial device. |
| `qm vncproxy VM_ID` | Proxy VM VNC traffic to stdin/stdout. |
### Misc
| Command | Command Description |
|---|---|
| `qm guest cmd VM_ID COMMAND` | Execute Qemu Guest Agent commands. |
| `qm guest exec VM_ID [EXTRA-ARGS] [OPTIONS]` | Executes the given command via the guest agent. |
| `qm guest exec-status VM_ID PID` | Gets the status of the given pid started by the guest-agent. |
| `qm guest passwd VM_ID USERNAME [OPTIONS]` | Sets the password for the given user to the given password. |
### PV, VG, LV Management
| Command | Command Description |
|---|---|
| `pvcreate DISK-DEVICE-NAME` | Create a PV |
| `pvremove DISK-DEVICE-NAME` | Remove a PV |
| `pvs` | List all PVs |
| `vgcreate VG-NAME DISK-DEVICE-NAME` | Create a VG |
| `vgremove VG-NAME` | Remove a VG |
| `vgs` | List all VGs |
| `lvcreate -L LV-SIZE -n LV-NAME VG-NAME` | Create a LV |
| `lvremove VG-NAME/LV-NAME` | Remove a LV |
| `lvs` | List all LVs |
### Storage Management
| Command | Command Description |
|---|---|
| `pvesm add TYPE STORAGE [OPTIONS]` | Create a new storage |
| `pvesm alloc STORAGE your-vm-id FILENAME SIZE [OPTIONS]` | Allocate disk images |
| `pvesm free VOLUME [OPTIONS]` | Delete volume |
| `pvesm remove STORAGE` | Delete storage configuration |
| `pvesm list STORAGE [OPTIONS]` | List storage content |
| `pvesm lvmscan` | An alias for pvesm scan lvm |
| `pvesm lvmthinscan` | An alias for pvesm scan lvmthin |
| `pvesm scan lvm` | List local LVM volume groups |
| `pvesm scan lvmthin VG` | List local LVM Thin Pools |
| `pvesm status [OPTIONS]` | Get status for all datastores |
### Template Management
| Command | Command Description |
|---|---|
| `pveam available` | List all templates |
| `pveam list STORAGE` | List all templates |
| `pveam download STORAGE TEMPLATE` | Download appliance templates |
| `pveam remove TEMPLATE-PATH` | Remove a template |
| `pveam update` | Update Container Template Database |
## Certificate Management
See the [Proxmox Certificate Management](proxmox-certificate-management.md) cheat sheet.
## Container Management
| Command | Command Description |
|---|---|
| `pct list` | List containers |
| `pct create YOUR-VM-ID OSTEMPLATE [OPTIONS]` | Create or restore a container |
| `pct start YOUR-VM-ID [OPTIONS]` | Start the container |
| `pct clone YOUR-VM-ID NEW-VM-ID [OPTIONS]` | Create a container clone/copy |
| `pct suspend YOUR-VM-ID` | Suspend the container. This is experimental. |
| `pct resume YOUR-VM-ID` | Resume the container |
| `pct stop YOUR-VM-ID [OPTIONS]` | Stop the container. This will abruptly stop all processes running in the container. |
| `pct shutdown YOUR-VM-ID [OPTIONS]` | Shutdown the container. This will trigger a clean shutdown of the container. |
| `pct destroy YOUR-VM-ID [OPTIONS]` | Destroy the container (also deletes all used files) |
| `pct status YOUR-VM-ID [OPTIONS]` | Show CT status |
| `pct migrate YOUR-VM-ID TARGET [OPTIONS]` | Migrate the container to another node. Creates a new migration task. |
| `pct config YOUR-VM-ID [OPTIONS]` | Get container configuration |
| `pct cpusets` | Print the list of assigned CPU sets |
| `pct pending YOUR-VM-ID` | Get container configuration, including pending changes |
| `pct reboot YOUR-VM-ID [OPTIONS]` | Reboot the container by shutting it down and starting it again. Applies pending changes. |
| `pct restore YOUR-VM-ID OSTEMPLATE [OPTIONS]` | Create or restore a container |
| `pct set YOUR-VM-ID [OPTIONS]` | Set container options |
| `pct template YOUR-VM-ID` | Create a Template |
| `pct unlock YOUR-VM-ID` | Unlock the VM |
### Container Disks
| Command | Command Description |
|---|---|
| `pct df YOUR-VM-ID` | Get the container's current disk usage |
| `pct fsck YOUR-VM-ID [OPTIONS]` | Run a filesystem check (fsck) on a container volume |
| `pct fstrim YOUR-VM-ID [OPTIONS]` | Run fstrim on a chosen CT and its mountpoints |
| `pct mount YOUR-VM-ID` | Mount the container's filesystem on the host |
| `pct move-volume YOUR-VM-ID VOLUME [STORAGE] [TARGET-VMID] [TARGET-VOLUME] [OPTIONS]` | Move a rootfs-/mp-volume to a different storage or to a different container |
| `pct unmount YOUR-VM-ID` | Unmount the container's filesystem |
| `pct resize YOUR-VM-ID YOUR-VM-DISK SIZE [OPTIONS]` | Resize a container mount point |
| `pct rescan [OPTIONS]` | Rescan all storages and update disk sizes and unused disk images |
| `pct enter YOUR-VM-ID` | Connect to container |
| `pct console YOUR-VM-ID [OPTIONS]` | Launch a console for the specified container |
| `pct exec YOUR-VM-ID [EXTRA-ARGS]` | Launch a command inside the specified container |
| `pct pull YOUR-VM-ID PATH DESTINATION [OPTIONS]` | Copy a file from the container to the local system |
| `pct push YOUR-VM-ID FILE DESTINATION [OPTIONS]` | Copy a local file to the container |
## Web GUI
```shell
# Restart web GUI
service pveproxy restart
```
## Resize Disk
### Increase disk size
Increase the disk size in the GUI or with the following command:
```shell
qm resize 100 virtio0 +5G
```
### Decrease disk size
Before decreasing disk sizes in Proxmox, you should take a backup!
1. Convert qcow2 to raw: `qemu-img convert vm-100.qcow2 vm-100.raw`
2. Shrink the disk `qemu-img resize -f raw vm-100.raw 10G`
3. Convert back to qcow2 `qemu-img convert -p -O qcow2 vm-100.raw vm-100.qcow2`
## Further information
More examples and tutorials regarding Proxmox can be found in the link list below:
- Ansible playbook that automates Linux VM updates running on Proxmox (including snapshots): [TheDatabaseMe - update_proxmox_vm](https://github.com/thedatabaseme/update_proxmox_vm)
- Manage Proxmox VM templates with Packer: [Use Packer to build Proxmox images](https://thedatabaseme.de/2022/10/16/what-a-golden-boy-use-packer-to-build-proxmox-images/)
1
infra/sophos-xg.md Normal file
View File
@ -0,0 +1 @@
# Sophos XG
14
infra/truenas-scale.md Normal file
View File
@ -0,0 +1,14 @@
# TrueNAS Scale
WIP
---
## ACME
WIP
1. Create DNS Credentials
2. Create Signing Request
3. Configure email address for your current user (in case of root, info)
4. Create ACME Cert
5. Switch Admin Cert
---
65
infra/zfs.md Normal file
View File
@ -0,0 +1,65 @@
# ZFS
WIP
Reference: [Oracle Solaris ZFS Administration Guide](https://docs.oracle.com/cd/E19253-01/819-5461/index.html)
---
## Storage Pools
WIP
### Stripe
ZFS dynamically stripes data across all top-level virtual devices. The decision about where to place data is done at write time, so no fixed-width stripes are created at allocation time.
When new virtual devices are added to a pool, ZFS gradually allocates data to the new device in order to maintain performance and disk space allocation policies. Each virtual device can also be a mirror or a RAID-Z device that contains other disk devices or files. This configuration gives you flexibility in controlling the fault characteristics of your pool.
Although ZFS supports combining different types of virtual devices within the same pool, avoid this practice. For example, you can create a pool with a two-way mirror and a three-way RAID-Z configuration. However, your fault tolerance is as good as your worst virtual device, RAID-Z in this case. A best practice is to use top-level virtual devices of the same type with the same redundancy level in each device.
### Mirror
A mirrored storage pool configuration requires at least two disks, preferably on separate controllers. Many disks can be used in a mirrored configuration. In addition, you can create more than one mirror in each pool.
### Striped Mirror
Data is dynamically striped across both mirrors, with data being redundant between each disk appropriately.
Currently, the following operations are supported in a ZFS mirrored configuration:
- Adding another set of disks for an additional top-level virtual device (vdev) to an existing mirrored configuration.
- Attaching additional disks to an existing mirrored configuration. Or, attaching additional disks to a non-replicated configuration to create a mirrored configuration.
- Replacing a disk or disks in an existing mirrored configuration as long as the replacement disks are greater than or equal to the size of the device to be replaced.
- Detaching a disk in a mirrored configuration as long as the remaining devices provide adequate redundancy for the configuration.
- Splitting a mirrored configuration by detaching one of the disks to create a new, identical pool.
### RAID-Z
In addition to a mirrored storage pool configuration, **ZFS provides a RAID-Z configuration with either single-, double-, or triple-parity fault tolerance**. Single-parity RAID-Z (raidz or raidz1) is similar to RAID-5. Double-parity RAID-Z (raidz2) is similar to RAID-6.
A RAID-Z configuration with N disks of size X with P parity disks can hold approximately `(N-P)*X` bytes and can withstand P device(s) failing before data integrity is compromised. You need at least two disks for a single-parity RAID-Z configuration and at least three disks for a double-parity RAID-Z configuration. For example, if you have three disks in a single-parity RAID-Z configuration, parity data occupies disk space equal to one of the three disks. Otherwise, no special hardware is required to create a RAID-Z configuration.
If you are creating a RAID-Z configuration with many disks, consider splitting the disks into multiple groupings. For example, a RAID-Z configuration with 14 disks is better split into two 7-disk groupings. **RAID-Z configurations with single-digit groupings of disks should perform better.**
---
## Scrubbing
The simplest way to check data integrity is to initiate an explicit scrubbing of all data within the pool. This operation traverses all the data in the pool once and verifies that all blocks can be read. Scrubbing proceeds as fast as the devices allow, though the priority of any I/O remains below that of normal operations. This operation might negatively impact performance, though the pool's data should remain usable and nearly as responsive while the scrubbing occurs.
**Scrub ZFS Pool:**
```bash
zpool scrub POOLNAME
```
**Example:**
```bash
zpool status -v store
pool: store
state: ONLINE
scan: scrub in progress since Fri Nov 4 06:43:51 2022
317G scanned at 52.9G/s, 1.09M issued at 186K/s, 3.41T total
0B repaired, 0.00% done, no estimated completion time
```
---
## Resilvering
When a device is replaced, a resilvering operation is initiated to move data from the good copies to the new device. This action is a form of disk scrubbing. Therefore, only one such action can occur at a given time in the pool. If a scrubbing operation is in progress, a resilvering operation suspends the current scrubbing and restarts it after the resilvering is completed.
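Resilvering is typically triggered by replacing a device; a minimal sketch with placeholder pool and device names:

```bash
# replace a failed disk; ZFS starts resilvering onto the new device automatically
zpool replace POOLNAME /dev/sdX /dev/sdY
# watch the resilver progress
zpool status -v POOLNAME
```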
56
kubernetes/helm.md Normal file
View File
@ -0,0 +1,56 @@
# Helm
## Repository Management
COMMAND | DESCRIPTION
---|---
`helm repo list` | List Helm repositories
`helm repo update` | Update list of Helm charts from repositories
## Chart Management
COMMAND | DESCRIPTION
---|---
`helm search` | Search for charts in configured repositories
`helm search <CHARTNAME>` | Search for a chart
`helm ls` | List all installed Helm charts
`helm ls --deleted` | List all deleted Helm charts
`helm ls --all` | List installed and deleted Helm charts
`helm inspect values <REPO>/<CHART>` | Inspect the variables in a chart
## Install/Delete Helm Charts
COMMAND | DESCRIPTION
---|---
`helm install --name <NAME> <REPO>/<CHART>` | Install a Helm chart
`helm install --name <NAME> --values <VALUES.YML> <REPO>/<CHART>` | Install a Helm chart and override variables
`helm status <NAME>` | Show status of Helm chart being installed
`helm delete --purge <NAME>` | Delete a Helm chart
## Upgrading Helm Charts
COMMAND | DESCRIPTION
---|---
`helm get values <NAME>` | Return the variables for a release
`helm upgrade --values <VALUES.YML> <NAME> <REPO>/<CHART>` | Upgrade the chart or variables in a release
`helm history <NAME>` | List release numbers
`helm rollback <NAME> 1` | Rollback to a previous release number
## Creating Helm Charts
COMMAND | DESCRIPTION
---|---
`helm create <NAME>` | Create a blank chart
`helm lint <NAME>` | Lint the chart
`helm package <NAME>` | Package the chart into foo.tgz
`helm dependency update` | Install chart dependencies
## Chart Folder Structure
```
wordpress/
Chart.yaml # A YAML file containing information about the chart
LICENSE # OPTIONAL: A plain text file containing the license for the chart
README.md # OPTIONAL: A human-readable README file
requirements.yaml # OPTIONAL: A YAML file listing dependencies for the chart
values.yaml # The default configuration values for this chart
charts/ # A directory containing any charts upon which this chart depends.
templates/ # A directory of templates that, when combined with values,
# will generate valid Kubernetes manifest files.
templates/NOTES.txt # OPTIONAL: A plain text file containing short usage notes
```
130
kubernetes/k3s.md Normal file
View File
@ -0,0 +1,130 @@
# K3S
Lightweight [Kubernetes](kubernetes/kubernetes.md). Production ready, easy to install, half the memory, all in a binary less than 100 MB.
Project Homepage: [K3s.io](https://www.k3s.io/)
Documentation: [K3s Documentation](https://docs.k3s.io/)
---
## Installation
To install k3s, you can follow different approaches like setting up k3s with an **external database**, **embedded database**, or as a **single node**.
### K3s with external DB
Set up an HA K3s cluster backed by an external datastore such as MySQL, PostgreSQL, or etcd.
#### Install Database
Install [MariaDB](databases/mariadb.md).
#### Install Servers
```bash
curl -sfL https://get.k3s.io | sh -s - server \
--token=YOUR-SECRET \
--datastore-endpoint='mysql://user:pass@tcp(ipaddress:3306)/dbname' \
--node-taint CriticalAddonsOnly=true:NoExecute \
--tls-san your-dns-name --tls-san your-lb-ip-address
```
#### Node-Taint
By default, server nodes will be schedulable and thus your workloads can get launched on them. If you wish to have a dedicated control plane where no user workloads will run, you can use taints. The node-taint parameter will allow you to configure nodes with taints, for example `--node-taint CriticalAddonsOnly=true:NoExecute`.
#### SSL Certificates
To avoid certificate errors in such a configuration, you should install the server with the `--tls-san YOUR_IP_OR_HOSTNAME_HERE` option. This option adds an additional hostname or IP as a Subject Alternative Name in the TLS cert, and it can be specified multiple times if you would like to access via both the IP and the hostname.
#### Get a registered Address
TODO: WIP
#### Install Agents
TODO: WIP
```bash
curl -sfL https://get.k3s.io | sh -s - agent \
--server https://your-lb-ip-address:6443 \
--token YOUR-SECRET
```
### K3s with embedded DB
Set up an HA K3s cluster that leverages a built-in distributed database.
TODO: WIP
#### Install first Server
TODO: WIP
```bash
curl -sfL https://get.k3s.io | sh -s - server \
--token=YOUR-SECRET \
--tls-san your-dns-name --tls-san your-lb-ip-address \
--cluster-init
```
As with the external-datastore setup, install the server with the `--tls-san YOUR_IP_OR_HOSTNAME_HERE` option to avoid certificate errors; it can be specified multiple times if you would like to access via both the IP and the hostname.
#### Install additional Servers
TODO: WIP
```bash
curl -sfL https://get.k3s.io | sh -s - server \
--token=YOUR-SECRET \
--tls-san your-dns-name --tls-san your-lb-ip-address \
--server https://IP-OF-THE-FIRST-SERVER:6443
```
The `--cluster-init` flag initializes an HA cluster with an embedded etcd database. Fault tolerance requires an odd number of server nodes, with a minimum of three.
Total Number of nodes | Failed Node Tolerance
---|---
1|0
2|0
3|1
4|1
5|2
6|2
...|...
#### Get a registered Address
To achieve a highly available scenario, you also need to load-balance incoming connections between the server nodes.
TODO: WIP
#### Install Agents
You can also add additional agent nodes (without the server role) to this cluster.
```bash
curl -sfL https://get.k3s.io | sh -s - agent \
--server https://your-lb-ip-address:6443 \
--token YOUR-SECRET
```
### K3s single node
Set up K3s as a single node installation.
TODO: WIP
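Until this section is completed: running the installer script without additional flags sets up a single-node server with the embedded SQLite datastore (per the upstream quick start):
```bash
curl -sfL https://get.k3s.io | sh -
```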
---
## Manage K3S
### Management on Server Nodes
The k3s binary bundles kubectl, so on a server node you can run:
`k3s kubectl`
### Download Kube Config
The kubeconfig is stored on the server at:
`/etc/rancher/k3s/k3s.yaml`
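A hedged sketch for using this kubeconfig from a remote workstation (hostnames, users, and paths are placeholders):
```bash
# copy the kubeconfig from the server
scp user@your-server:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s.yaml

# the file points at https://127.0.0.1:6443, so replace the loopback address
sed -i 's/127.0.0.1/your-server-ip/' ~/.kube/k3s.yaml

export KUBECONFIG=~/.kube/k3s.yaml
kubectl get nodes
```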
## Database Backups
### etcd snapshots
Stored in `/var/lib/rancher/k3s/server/db/snapshots`.
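On a server node running with the embedded etcd, an on-demand snapshot can be taken like this (a sketch; the snapshot name is arbitrary):
```bash
k3s etcd-snapshot save --name pre-upgrade
```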
kubernetes/k9s.md
# K9s
K9s is a command-line interface that makes it easier to manage [Kubernetes Clusters](kubernetes/kubernetes.md).
Core features of k9s include:
- Editing of resource manifests
- Shell into a Pod / Container
- Manage multiple Kubernetes clusters using one tool
More information and current releases of k9s can be found on its [Github repository](https://github.com/derailed/k9s).
---
## Installation
### On Linux
#### Find and download the latest release
Check the release page [here](https://github.com/derailed/k9s/releases) and look for the
package matching your platform (e.g. Linux_x86_64). Copy the link to the archive of your choice.
Download and unpack the archive like in this example:
```bash
wget https://github.com/derailed/k9s/releases/download/v0.26.6/k9s_Linux_x86_64.tar.gz
tar -xvf k9s_Linux_x86_64.tar.gz
```
#### Install k9s
```bash
sudo install -o root -g root -m 0755 k9s /usr/local/bin/k9s
```
---
## Commands
### Cluster selection
As soon as you've started k9s, you can use a bunch of commands to interact with your selected
cluster (which is the context you have selected in your current shell environment).
You can change the cluster you want to work with at any time by typing `:context`. A list of
available cluster configurations appears; select the cluster to connect to with the
arrow keys and confirm the context to be used by pressing enter.
### General command structure
**Menu**
You can switch between resource types to show using a text menu selection. You need to press `:`
to bring up this menu. Then you can type the resource type you want to switch to
(e.g. `pod`, `services`...). Press the enter key to finish the command.
**Selection**
Selections are made with the arrow keys. To confirm your selection or to show more information,
use the enter key again. For instance, you can select a pod with the arrow keys and type enter
to "drill down" in that pod and view the running containers in it.
**Filter and searches**
In nearly every screen of k9s, you can apply filters or search for something (e.g. in the log output
of a pod). This can be done by pressing `/` followed by the search / filter term. Press enter to apply
the filter / search.
In some screens, there are also shortcuts for namespace filters bound to the number keys, where `0`
always shows all namespaces.
### Useful shortcuts and commands
| Command | Comment | Comparable kubectl command |
|-------------|--------------------------------------------------------------------------------|---------------------------------------------------------------------------|
| `:pod` | Switches to the pod screen, where you can see all pods on the current cluster. | `kubectl get pods --all-namespaces` |
| `:services` | Switches to the service screen, where you can see all services. | `kubectl get services --all-namespaces` |
| `ctrl`+`d` | Delete a resource. | `kubectl delete <resource> -n <namespace>` |
| `ctrl`+`k` | Kill a resource (no confirmation) | |
| `s` | When on the Pod screen, opens a shell into the selected pod. | `kubectl exec -n <namespace> <pod_name> -c <container_name> -- /bin/bash` |
| `l` | Show the log output of a pod. | `kubectl logs -n <namespace> <pod_name>` |
kubernetes/kind.md
# Kind
Using the Kind project, you can easily deploy a Kubernetes cluster on top of Docker as Docker containers. Kind will spawn separate containers which are shown as the Kubernetes nodes. In this documentation, you can find some examples, as well as a link to an Ansible playbook which can do the cluster creation / deletion for you. This document only describes the basics of Kind. To find more detailed information, you can check the [official Kind documentation](https://kind.sigs.k8s.io/docs/user/quick-start/).
Kind is ideal to use in a local development environment or even during a build pipeline run.
## Installation on Linux
Since Kind deploys Docker containers, it needs a container engine (like Docker) installed.
Installing Kind can be done by downloading the latest available release / binary for your platform:
```bash
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.16.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
```
## Cluster management
### Cluster creation
You have to provide a configuration file which tells Kind how you want your Kubernetes cluster to be deployed. Find an example configuration file below:
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: testcluster
# 1 control plane node and 2 workers
nodes:
# the control plane node config
- role: control-plane
# the two workers
- role: worker
- role: worker
```
Create the cluster with the following command:
```bash
kind create cluster --config kind-cluster-config.yaml
Creating cluster "testcluster" ...
Ensuring node image (kindest/node:v1.25.2)
Preparing nodes
Writing configuration
Starting control-plane
Installing CNI
Installing StorageClass
Joining worker nodes
Set kubectl context to "kind-testcluster"
You can now use your cluster with:
kubectl cluster-info --context kind-testcluster
Not sure what to do next? Check out https://kind.sigs.k8s.io/docs/user/quick-start/
```
Checking the running Docker containers, you can see the following:
```bash
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ac14d8c7a3c9 kindest/node:v1.25.2 "/usr/local/bin/entr..." 2 minutes ago Up About a minute testcluster-worker2
096dd4bf1718 kindest/node:v1.25.2 "/usr/local/bin/entr..." 2 minutes ago Up About a minute 127.0.0.1:42319->6443/tcp testcluster-control-plane
e1ae2d701394 kindest/node:v1.25.2 "/usr/local/bin/entr..." 2 minutes ago Up About a minute testcluster-worker
```
### Interacting with your cluster
You may have multiple Kind clusters deployed at the same time. To get a list of running clusters, you can use the following command:
```bash
kind get clusters
kind
kind-2
```
After cluster creation, the Kubernetes context is set automatically to the newly created cluster. In order to set the currently used kubeconfig, you may use some tooling like [kubectx](https://github.com/ahmetb/kubectx). You may also set the current context used by `kubectl` with the `--context` option, which refers to the Kind cluster name.
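If the kubeconfig entry for a Kind cluster is ever lost or overwritten, it can be re-exported (a sketch using the cluster name from the example above):
```bash
kind export kubeconfig --name testcluster
```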
### Cluster deletion
To delete a Kind cluster, you can use the following command. Kind will also delete the kubeconfig entry of the deleted cluster, so you don't need to do this on your own.
```bash
kind delete cluster -n testcluster
Deleting cluster "testcluster" ...
```
## Further information
More examples and tutorials regarding Kind can be found in the link list below:
- Creating an Ansible playbook to manage Kind cluster: [Lightweight Kubernetes cluster using Kind and Ansible](https://thedatabaseme.de/2022/04/22/lightweight-kubernetes-cluster-using-kind-and-ansible/)
kubernetes/kubectl.md
# Kubectl
Kubectl is a command line tool for communicating with a [Kubernetes Cluster](kubernetes/kubernetes.md)'s control plane, using the Kubernetes API.
Documentation: [Kubectl Reference](https://kubernetes.io/docs/reference/kubectl/)
---
## Installation
### On Windows (PowerShell)
Install Kubectl with [chocolatey](tools/chocolatey.md):
```
choco install kubernetes-cli
```
### On Linux
> [!INFO] Installing on WSL2
> On WSL2 it's recommended to install Docker Desktop [[docker-desktop]], which automatically comes with kubectl.
#### Download the latest release
```bash
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
```
#### Install kubectl
```bash
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
```
### On macOS
Install Kubectl with [homebrew](tools/homebrew.md):
```zsh
brew install kubernetes-cli
```
---
## Config Management
### Multiple Config Files
### On Windows (PowerShell)
```powershell
$env:KUBECONFIG = "$HOME/.kube/prod-k8s-clcreative-kubeconfig.yaml;$HOME/.kube/infra-home-kube-prod-1.yml;$HOME/.kube/infra-home-kube-demo-1.yml;$HOME/.kube/infra-cloud-kube-prod-1.yml"
```
### On Linux
```bash
export KUBECONFIG=~/.kube/kube-config-1.yml:~/.kube/kube-config-2.yml
```
Managing multiple config files manually can become tedious. Below you can find a handy script, which you can add to your shell rc file (e.g. `.bashrc` or `.zshrc`). The script will automatically add all found kubeconfigs to the `KUBECONFIG` environment variable.
The script was copied from [here](https://medium.com/@alexgued3s/multiple-kubeconfigs-no-problem-f6be646fc07d)
```bash
# If there's already a kubeconfig file in ~/.kube/config it will import that too and all the contexts
DEFAULT_KUBECONFIG_FILE="$HOME/.kube/config"
if test -f "${DEFAULT_KUBECONFIG_FILE}"
then
  export KUBECONFIG="$DEFAULT_KUBECONFIG_FILE"
fi

# Your additional kubeconfig files should be inside ~/.kube/config-files
ADD_KUBECONFIG_FILES="$HOME/.kube/config-files"
mkdir -p "${ADD_KUBECONFIG_FILES}"

OIFS="$IFS"
IFS=$'\n'
for kubeconfigFile in $(find "${ADD_KUBECONFIG_FILES}" -type f \( -name "*.yml" -o -name "*.yaml" \))
do
  export KUBECONFIG="$kubeconfigFile:$KUBECONFIG"
done
IFS="$OIFS"
```
Another helpful tool that makes changing and selecting the cluster context easier is
`kubectx`. You can download `kubectx` [here](https://github.com/ahmetb/kubectx).
:warning: The above script conflicts with kubectx, because kubectx can only work with one
kubeconfig file listed in the `KUBECONFIG` env var. If you want to use both, add the following
lines to your rc file.
```bash
# now we merge all configs to one
kubectl config view --merge --flatten > $HOME/.kube/merged-config
export KUBECONFIG="$HOME/.kube/merged-config"
```
---
## Commands
### Networking
Connect containers using Kubernetes internal DNS system:
`<service-name>.<namespace>.svc.cluster.local`
Troubleshoot Networking with a netshoot toolkit Container:
`kubectl run tmp-shell --rm -i --tty --image nicolaka/netshoot -- /bin/bash`
### Containers
Restart Deployments (Stops and Restarts all Pods):
`kubectl scale deploy <deployment> --replicas=0`
`kubectl scale deploy <deployment> --replicas=1`
Executing Commands on Pods:
`kubectl exec -it <PODNAME> -- <COMMAND>`
`kubectl exec -it generic-pod -- /bin/bash`
### Config and Cluster Management
COMMAND | DESCRIPTION
---|---
`kubectl cluster-info` | Display endpoint information about the master and services in the cluster
`kubectl config view` | Get the configuration of the cluster
### Resource Management
COMMAND | DESCRIPTION
---|---
`kubectl get all --all-namespaces` | List all resources in the entire Cluster
`kubectl delete <RESOURCE> <RESOURCENAME> --grace-period=0 --force` | Try to force the deletion of the resource
---
## List of Kubernetes Resources "Short Names"
Short Name | Long Name
---|---
`csr`|`certificatesigningrequests`
`cs`|`componentstatuses`
`cm`|`configmaps`
`ds`|`daemonsets`
`deploy`|`deployments`
`ep`|`endpoints`
`ev`|`events`
`hpa`|`horizontalpodautoscalers`
`ing`|`ingresses`
`limits`|`limitranges`
`ns`|`namespaces`
`no`|`nodes`
`pvc`|`persistentvolumeclaims`
`pv`|`persistentvolumes`
`po`|`pods`
`pdb`|`poddisruptionbudgets`
`psp`|`podsecuritypolicies`
`rs`|`replicasets`
`rc`|`replicationcontrollers`
`quota`|`resourcequotas`
`sa`|`serviceaccounts`
`svc`|`services`
---
## Logs and Troubleshooting
...
### Logs
...
### MySQL
`kubectl run -it --rm --image=mysql:5.7 --restart=Never mysql-client -- mysql -u USERNAME -h HOSTNAME -p`
### Networking
`kubectl run -it --rm --image=nicolaka/netshoot netshoot -- /bin/bash`
---
## Resources stuck in Terminating state
...
```sh
(
NAMESPACE=longhorn-demo-1
kubectl proxy &
kubectl get namespace $NAMESPACE -o json | jq '.spec = {"finalizers":[]}' > temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
)
```
# Kubernetes DNS
## DNS for Services and Pods
Kubernetes creates DNS records for Services and Pods. You can contact Services with consistent DNS names instead of IP addresses.
```
your-service.your-namespace.svc.cluster.local
```
Any Pods exposed by a Service have the following DNS resolution available:
```
your-pod.your-service.your-namespace.svc.cluster.local
```
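One way to verify these records is a throwaway pod with DNS tooling (a sketch mirroring the `kubectl run` pattern used elsewhere in this collection; image and names are arbitrary):
```bash
kubectl run -it --rm --restart=Never --image=busybox:1.36 dns-test -- nslookup your-service.your-namespace.svc.cluster.local
```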
---
## Custom DNS Settings
### Edit coredns config map
Add an entry to the `Corefile: |` section of the `configmap/coredns` in the **kube-system** namespace.
```yml
.:53 {
# ...
}
import /etc/coredns/custom/*.server
```
### Add new config map
Example for local DNS server using the **clcreative.home** zone.
```yml
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns-custom
namespace: kube-system
data:
clcreative.server: |
clcreative.home:53 {
forward . 10.20.0.10
}
```
kubernetes/kubernetes.md
# Kubernetes
**Kubernetes** is an open-source container orchestration platform that automates the deployment, scaling, and management of applications in a containerized environment. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).
linux/arp.md
# ARP in Linux
The **arp** command in Linux allows you to view and modify the [ARP table](../networking/arp-protocol.md), which contains information about devices on the local network. It can be used to view the IP addresses and MAC addresses of devices on the network, and to add or remove entries from the [ARP table](../networking/arp-protocol.md).
| Command | Description |
| --- | --- |
| `arp` | View the ARP table |
| `arp -a` | View the ARP table |
| `arp -n` | View the ARP table (don't resolve names) |
| `arp -d <ip-address>` | Delete an entry from the ARP table |
| `arp -s <ip-address> <mac-address>` | Add an entry to the ARP table |
linux/awk.md
# AWK
AWK (awk) is a domain-specific language designed for text processing and typically used as a data extraction and reporting tool. Similar to the **[Sed](sed)** and **[Grep](grep)** commands, it is a filter, and is a standard feature of most Unix-like operating systems, like **[Linux](linux)**.
## Usage
### Unix/Linux
```bash
awk '/pattern/ {print "$1"}' # standard Unix shells
```
### DOS/Win
```bash
awk '/pattern/ {print "$1"}' # compiled with DJGPP, Cygwin
awk "/pattern/ {print \"$1\"}" # GnuWin32, UnxUtils, Mingw
```
Note that the DJGPP compilation (for DOS or Windows-32) permits an awk
script to follow Unix quoting syntax `'/like/ {"this"}'`. HOWEVER, if the
command interpreter is `CMD.EXE` or `COMMAND.COM`, single quotes will not
protect the redirection arrows `(<, >)` nor do they protect pipes `(|)`.
These are special symbols which require "double quotes" to protect them
from interpretation as operating system directives. If the command
interpreter is bash, ksh, zsh or another Unix shell, then single and double
quotes will follow the standard Unix usage.
Users of MS-DOS or Microsoft Windows must remember that the percent
sign `(%)` is used to indicate environment variables, so this symbol must
be doubled `(%%)` to yield a single percent sign visible to awk.
To conserve space, use `'1'` instead of `'{print}'` to print each line.
Either one will work.
## Handy one-line Awk scripts
### File Spacing
```bash
# double space a file
awk '1;{print ""}'
awk 'BEGIN{ORS="\n\n"};1'
# double space a file which already has blank lines in it. Output file
# should contain no more than one blank line between lines of text.
# NOTE: On Unix systems, DOS lines which have only CRLF (\r\n) are
# often treated as non-blank, and thus 'NF' alone will return TRUE.
awk 'NF{print $0 "\n"}'
# triple space a file
awk '1;{print "\n"}'
```
### Numbering and Calculations
```bash
# precede each line by its line number FOR THAT FILE (left alignment).
# Using a tab (\t) instead of space will preserve margins.
awk '{print FNR "\t" $0}' files*
# precede each line by its line number FOR ALL FILES TOGETHER, with tab.
awk '{print NR "\t" $0}' files*
# number each line of a file (number on left, right-aligned)
# Double the percent signs if typing from the DOS command prompt.
awk '{printf("%5d : %s\n", NR,$0)}'
# number each line of file, but only print numbers if line is not blank
# Remember caveats about Unix treatment of \r (mentioned above)
awk 'NF{$0=++a " :" $0};1'
awk '{print (NF? ++a " :" :"") $0}'
# count lines (emulates "wc -l")
awk 'END{print NR}'
# print the sums of the fields of every line
awk '{s=0; for (i=1; i<=NF; i++) s=s+$i; print s}'
# add all fields in all lines and print the sum
awk '{for (i=1; i<=NF; i++) s=s+$i}; END{print s}'
# print every line after replacing each field with its absolute value
awk '{for (i=1; i<=NF; i++) if ($i < 0) $i = -$i; print }'
awk '{for (i=1; i<=NF; i++) $i = ($i < 0) ? -$i : $i; print }'
# print the total number of fields ("words") in all lines
awk '{ total = total + NF }; END {print total}' file
# print the total number of lines that contain "Beth"
awk '/Beth/{n++}; END {print n+0}' file
# print the largest first field and the line that contains it
# Intended for finding the longest string in field #1
awk '$1 > max {max=$1; maxline=$0}; END{ print max, maxline}'
# print the number of fields in each line, followed by the line
awk '{ print NF ":" $0 } '
# print the last field of each line
awk '{ print $NF }'
# print the last field of the last line
awk '{ field = $NF }; END{ print field }'
# print every line with more than 4 fields
awk 'NF > 4'
# print every line where the value of the last field is > 4
awk '$NF > 4'
```
### String Creation
```bash
# create a string of a specific length (e.g., generate 513 spaces)
awk 'BEGIN{while (a++<513) s=s " "; print s}'
# insert a string of specific length at a certain character position
# Example: insert 49 spaces after column #6 of each input line.
gawk --re-interval 'BEGIN{while(a++<49)s=s " "};{sub(/^.{6}/,"&" s)};1'
```
### Array Creation
```bash
# These next 2 entries are not one-line scripts, but the technique
# is so handy that it merits inclusion here.
# create an array named "month", indexed by numbers, so that month[1]
# is 'Jan', month[2] is 'Feb', month[3] is 'Mar' and so on.
split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", month, " ")
# create an array named "mdigit", indexed by strings, so that
# mdigit["Jan"] is 1, mdigit["Feb"] is 2, etc. Requires "month" array
for (i=1; i<=12; i++) mdigit[month[i]] = i
```
### Text Conversion and Substitution
```bash
# IN UNIX ENVIRONMENT: convert DOS newlines (CR/LF) to Unix format
awk '{sub(/\r$/,"")};1' # assumes EACH line ends with Ctrl-M
# IN UNIX ENVIRONMENT: convert Unix newlines (LF) to DOS format
awk '{sub(/$/,"\r")};1'
# IN DOS ENVIRONMENT: convert Unix newlines (LF) to DOS format
awk 1
# IN DOS ENVIRONMENT: convert DOS newlines (CR/LF) to Unix format
# Cannot be done with DOS versions of awk, other than gawk:
gawk -v BINMODE="w" '1' infile >outfile
# Use "tr" instead.
tr -d \r <infile >outfile # GNU tr version 1.22 or higher
# delete leading whitespace (spaces, tabs) from front of each line
# aligns all text flush left
awk '{sub(/^[ \t]+/, "")};1'
# delete trailing whitespace (spaces, tabs) from end of each line
awk '{sub(/[ \t]+$/, "")};1'
# delete BOTH leading and trailing whitespace from each line
awk '{gsub(/^[ \t]+|[ \t]+$/,"")};1'
awk '{$1=$1};1' # also removes extra space between fields
# insert 5 blank spaces at beginning of each line (make page offset)
awk '{sub(/^/, " ")};1'
# align all text flush right on a 79-column width
awk '{printf "%79s\n", $0}' file*
# center all text on a 79-character width
awk '{l=length();s=int((79-l)/2); printf "%"(s+l)"s\n",$0}' file*
# substitute (find and replace) "foo" with "bar" on each line
awk '{sub(/foo/,"bar")}; 1' # replace only 1st instance
gawk '{$0=gensub(/foo/,"bar",4)}; 1' # replace only 4th instance
awk '{gsub(/foo/,"bar")}; 1' # replace ALL instances in a line
# substitute "foo" with "bar" ONLY for lines which contain "baz"
awk '/baz/{gsub(/foo/, "bar")}; 1'
# substitute "foo" with "bar" EXCEPT for lines which contain "baz"
awk '!/baz/{gsub(/foo/, "bar")}; 1'
# change "scarlet" or "ruby" or "puce" to "red"
awk '{gsub(/scarlet|ruby|puce/, "red")}; 1'
# reverse order of lines (emulates "tac")
awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--] }' file*
# if a line ends with a backslash, append the next line to it (fails if
# there are multiple lines ending with backslash...)
awk '/\\$/ {sub(/\\$/,""); getline t; print $0 t; next}; 1' file*
# print and sort the login names of all users
awk -F ":" '{print $1 | "sort" }' /etc/passwd
# print the first 2 fields, in opposite order, of every line
awk '{print $2, $1}' file
# switch the first 2 fields of every line
awk '{temp = $1; $1 = $2; $2 = temp}' file
# print every line, deleting the second field of that line
awk '{ $2 = ""; print }'
# print in reverse order the fields of every line
awk '{for (i=NF; i>0; i--) printf("%s ",$i);print ""}' file
# concatenate every 5 lines of input, using a comma separator
# between fields
awk 'ORS=NR%5?",":"\n"' file
```
### Selective Printing of Certain Lines
```bash
# print first 10 lines of file (emulates behavior of "head")
awk 'NR < 11'
# print first line of file (emulates "head -1")
awk 'NR>1{exit};1'
# print the last 2 lines of a file (emulates "tail -2")
awk '{y=x "\n" $0; x=$0};END{print y}'
# print the last line of a file (emulates "tail -1")
awk 'END{print}'
# print only lines which match regular expression (emulates "grep")
awk '/regex/'
# print only lines which do NOT match regex (emulates "grep -v")
awk '!/regex/'
# print any line where field #5 is equal to "abc123"
awk '$5 == "abc123"'
# print only those lines where field #5 is NOT equal to "abc123"
# This will also print lines which have less than 5 fields.
awk '$5 != "abc123"'
awk '!($5 == "abc123")'
# matching a field against a regular expression
awk '$7 ~ /^[a-f]/' # print line if field #7 matches regex
awk '$7 !~ /^[a-f]/' # print line if field #7 does NOT match regex
# print the line immediately before a regex, but not the line
# containing the regex
awk '/regex/{print x};{x=$0}'
awk '/regex/{print (NR==1 ? "match on line 1" : x)};{x=$0}'
# print the line immediately after a regex, but not the line
# containing the regex
awk '/regex/{getline;print}'
# grep for AAA and BBB and CCC (in any order on the same line)
awk '/AAA/ && /BBB/ && /CCC/'
# grep for AAA and BBB and CCC (in that order)
awk '/AAA.*BBB.*CCC/'
# print only lines of 65 characters or longer
awk 'length > 64'
# print only lines of less than 65 characters
awk 'length < 65'
# print section of file from regular expression to end of file
awk '/regex/,0'
awk '/regex/,EOF'
# print section of file based on line numbers (lines 8-12, inclusive)
awk 'NR==8,NR==12'
# print line number 52
awk 'NR==52'
awk 'NR==52 {print;exit}' # more efficient on large files
# print section of file between two regular expressions (inclusive)
awk '/Iowa/,/Montana/' # case sensitive
```
### Selective Deletion of Certain Lines
```bash
# delete ALL blank lines from a file (same as "grep '.' ")
awk NF
awk '/./'
# remove duplicate, consecutive lines (emulates "uniq")
awk 'a !~ $0; {a=$0}'
# remove duplicate, nonconsecutive lines
awk '!a[$0]++' # most concise script
awk '!($0 in a){a[$0];print}' # most efficient script
```
## References
For additional syntax instructions, including the way to apply editing
commands from a disk file instead of the command line, consult:
"sed & awk, 2nd Edition," by Dale Dougherty and Arnold Robbins
(O'Reilly, 1997)
"UNIX Text Processing," by Dale Dougherty and Tim O'Reilly (Hayden
Books, 1987)
"GAWK: Effective awk Programming," 3d edition, by Arnold D. Robbins
(O'Reilly, 2003) or at http://www.gnu.org/software/gawk/manual/
To fully exploit the power of awk, one must understand "regular
expressions." For detailed discussion of regular expressions, see
"Mastering Regular Expressions, 3d edition" by Jeffrey Friedl (O'Reilly,
2006).
The info and manual ("man") pages on Unix systems may be helpful (try
"man awk", "man nawk", "man gawk", "man regexp", or the section on
regular expressions in "man ed").
USE OF '\t' IN awk SCRIPTS: For clarity in documentation, I have used
'\t' to indicate a tab character (0x09) in the scripts. All versions of
awk should recognize this abbreviation.
linux/cron.md
# Cron
A CRON expression is a string consisting of six fields that each define a specific unit of time (this six-field variant includes a seconds field; classic Unix cron uses five fields).
They are written in the following format:
```
{second} {minute} {hour} {day} {month} {day of the week}
```
---
## Values
The following values are allowed within each date/time unit placeholder.
| Field | Allowed Values | Description |
|---|---|---|
| {second} | 0-59 | Trigger every {second} second(s) |
| {minute} | 0-59 | Trigger every {minute} minute(s) |
| {hour} | 0-23 | Trigger every {hour} hour(s) |
| {day} | 1-31 | Trigger every {day} day(s) of month |
| {month} | 1-12 | Trigger every {month} month(s) |
| {day of week} | 0-6 (SUN-SAT) | Trigger on specific {day of week}, where 0 = Sunday |
---
## Special Characters
Additionally you can also use the following special characters to build more advanced expressions:
| Special Character | Description |
|---|---|
| `*` | Trigger on tick of every time unit |
| `,` | List separator |
| `-` | Specifies a range |
| `/` | Defines an increment |
---
## Examples
`0 * * * * *` - Executes every minute
`0 0 * * * *` - Executes every hour
`0 0 0 * * *` - Executes every day
`0 0 0 1 * *` - Executes on the first day of every month
`0 0 0 1 1 *` - Executes on first day of Jan each year
`0 30 20 * * SAT` - Executes at 08:30pm every Saturday
`0 30 20 * * 6` - Executes at 08:30pm every Saturday
`0 */5 * * * *` - Executes every five minutes
`0 0 8-10/1 * * *` - Executes every hour between 8am and 10am
linux/distros/centos.md
# CentOS
CentOS (from Community Enterprise Operating System; also known as CentOS Linux) is a Linux distribution that provides a free and open-source community-supported computing platform, functionally compatible with its upstream source, Red Hat Enterprise Linux (RHEL). In 2014, CentOS announced its official joining with Red Hat, while staying independent from RHEL, under a new CentOS governing board.
linux/distros/debian.md
# Debian
Debian, also known as Debian GNU/Linux, is a [Linux](../linux.md) distribution composed of free and open-source software, developed by the community-supported Debian Project. The Debian Stable branch is the most popular edition for personal computers and servers. Debian is also the basis for many other distributions, most notably [Ubuntu](ubuntu.md).
linux/distros/fedora.md
# Fedora
Fedora Linux is a Linux distribution developed by the Fedora Project. Fedora contains software distributed under various free and open-source licenses and aims to be on the leading edge of open-source technologies. Fedora is the upstream source for Red Hat Enterprise Linux.
Since the release of Fedora 35, six different editions are made available tailored to personal computer, server, cloud computing, container and Internet of Things installations. A new version of Fedora Linux is released every six months.
Project Homepage: [Home - Fedora](https://getfedora.org/en/)
Documentation: [Fedora Documentation](https://docs.fedoraproject.org/en-US/docs/)
---
## Post Install Steps
### 1- Enable Caching in dnf Package Manager
Caching is enabled to speed up dnf.
Edit dnf configuration:
```shell
sudo nano /etc/dnf/dnf.conf
```
Add these lines at the end:
```shell
# Added for speed:
fastestmirror=True
# Change to 10 if you have a fast internet connection
max_parallel_downloads=5
# When pressing enter, the default answer is yes
defaultyes=True
# Keep downloaded packages in the cache
keepcache=True
```
To clean dnf cache periodically:
```shell
sudo dnf clean dbcache
#or
sudo dnf clean all
```
for more configuration options: [DNF Configuration Reference](https://dnf.readthedocs.io/en/latest/conf_ref.html)
### 2- System Update
Run the following command:
```shell
sudo dnf update
```
### 3- Enable RPM Fusion
RPM Fusion **provides software that the Fedora Project or Red Hat doesn't want to ship**. That software is provided as precompiled RPMs for all current Fedora versions and current Red Hat Enterprise Linux or clones versions; you can use the RPM Fusion repositories with tools like yum and PackageKit.
Installing both free and non-free RPM Fusion:
```shell
sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
```
### AppStream metadata
AppStream metadata enables users to install packages using GNOME Software / KDE Discover. Please note that these are a subset of all packages, since the metadata are only generated for GUI packages.
The following command will install the required packages:
```shell
sudo dnf groupupdate core
```
### 4- Adding Flatpak
Flatpak, formerly known as xdg-app, is a utility for software deployment and package management for Linux. It is advertised as offering a sandbox environment in which users can run application software in isolation from the rest of the system.
Flatpak is installed by default on Fedora Workstation, Fedora Silverblue, and Fedora Kinoite. To get started, all you need to do is enable **Flathub**, which is the best way to get Flatpak apps. Just download and install the [Flathub repository file](https://flathub.org/repo/flathub.flatpakrepo)
The above links should work on the default GNOME and KDE Fedora installations, but if they fail for some reason you can manually add the Flathub remote by running:
```shell
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```
### 5- Change Hostname
Run the following command:
```shell
sudo hostnamectl set-hostname <your-hostname>
```
### 6- Add Multimedia Codecs
Run the following commands:
```shell
sudo dnf groupupdate multimedia --setop="install_weak_deps=False" --exclude=PackageKit-gstreamer-plugin
sudo dnf groupupdate sound-and-video
```
### 7- Make it More Customizable
Open GNOME software installer and install the following:
- GNOME Tweaks
- Extensions
Consider the following GNOME Extensions:
- Vitals
- ArcMenu
- Custom Hot Corners - Extended
- Dash to Panel
- Sound Input & Output Device Chooser
- OpenWeather
- Impatience
- Screenshot Tool
- Tiling Assistant
- Extension List
- Clipboard Indicator
linux/distros/ubuntu.md
# Ubuntu
Ubuntu is a Linux distribution based on Debian and composed mostly of free and open-source software. Ubuntu is officially released in three editions: Desktop, Server, and Core for Internet of Things devices and robots. Ubuntu is a popular operating system for cloud computing, with support for OpenStack.
## How to enable sudo without a password for a user
Open a Terminal window and type:
```
sudo visudo
```
In the bottom of the file, add the following line:
```
$USER ALL=(ALL) NOPASSWD: ALL
```
Where `$USER` is your username on your system. Save and close the sudoers file (if you haven't changed your default terminal editor, press Ctrl + X to exit `nano` and it'll prompt you to save).
---
## Networking
In Ubuntu, networking can be managed using various tools and utilities, including the following:
1. **NetworkManager**: NetworkManager is a system service that manages network connections and devices. It provides a graphical user interface (GUI) for configuring network settings, as well as a command-line interface (CLI) for advanced configuration. NetworkManager is the default network management tool in Ubuntu.
2. **Netplan**: [Netplan](../netplan) is a command-line utility for configuring network interfaces in modern versions of Ubuntu. It uses YAML configuration files to describe network interfaces, IP addresses, routes, and other network-related parameters. [Netplan](../netplan) generates the corresponding configuration files for the underlying network configuration subsystem, such as systemd-networkd or NetworkManager.
3. **ifupdown**: ifupdown is a traditional command-line tool for managing network interfaces in Ubuntu. It uses configuration files located in the /etc/network/ directory to configure network interfaces, IP addresses, routes, and other network-related parameters.
To manage networking in **Ubuntu**, you can use one or more of these tools depending on your needs and preferences. For example, you can use the NetworkManager GUI to configure basic network settings and use [Netplan](../netplan) or ifupdown for advanced configuration. You can also use the command-line tools to automate network configuration tasks or to configure networking on headless servers.
# Environment Variables in Linux
linux/etherwake.md
# Etherwake
Etherwake is a command-line utility for sending [Wake-on-LAN (WoL)](../networking/wakeonlan.md) magic packets to wake up a device over a network connection. It allows you to wake up a device by specifying its MAC address as an argument, and it sends the magic packet to the broadcast address of the network interface that is specified.
Here's an example of how to use Etherwake to wake up a device with a specific MAC address:
```sh
sudo etherwake -i eth0 00:11:22:33:44:55
```
In this example, `sudo` is used to run the command with administrative privileges, `-i eth0` specifies the network interface to use (in this case, `eth0`), and `00:11:22:33:44:55` is the MAC address of the device to wake up.
The command sends a Wake-on-LAN magic packet to the broadcast address of the `eth0` network interface, which should wake up the device with the specified MAC address if Wake-on-LAN is enabled on the device.
Note that the exact syntax and options for `etherwake` may vary depending on your operating system and version of the utility. You can usually find more information and examples in the `etherwake` manual page (`man etherwake`).
linux/ethtool.md
# Ethtool
**Ethtool** is a command-line utility used in [Linux](../linux/linux.md) systems to query and control network interface settings. It provides information about the network interface cards (NICs) installed on a system, such as link status, driver information, speed, duplex mode, and more, and allows you to modify certain settings of a network interface.
## Using Ethtool to view network interface information
To view general information about a specific network interface (e.g., eth0), use the following command:
```sh
ethtool interface_name
```
To display driver information for a specific interface, you can use the `-i` option followed by the interface name:
```sh
ethtool -i interface_name
```
## Using Ethtool to change network interface settings
**Ethtool** allows you to modify certain settings of a network interface. For example, you can manually set the speed and duplex mode, enable or disable features like [Wake-on-LAN](../networking/wakeonlan.md) or [autonegotiation](../networking/autonegotiation.md), configure flow control settings, and adjust ring buffer sizes.
### Manually set the speed and duplex mode of a network interface
To manually set the speed and duplex mode of a network interface (e.g., eth0) to a specific value, use the following command:
```sh
ethtool -s interface_name speed interface_speed duplex interface_duplex
```
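For example, forcing `eth0` to 1 Gbit/s full duplex might look like this (a sketch; forced settings usually require autonegotiation to be turned off):
```sh
sudo ethtool -s eth0 speed 1000 duplex full autoneg off
```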
If you want to enable or disable autonegotiation on a specific interface, you can use the following command:
```sh
ethtool -s interface_name autoneg on
ethtool -s interface_name autoneg off
```
### Enable Wake On LAN (WoL) on the network adapter
Use the following command to check if your network interface supports Wake On LAN (WoL):
```sh
sudo ethtool interface_name | grep "Wake-on"
```
If the output shows "Wake-on: d", it means that Wake On LAN (WoL) is disabled.
To enable Wake On LAN (WoL), use the following command:
```sh
sudo ethtool -s interface_name wol g
```
### Make the Wake On LAN (WoL) setting persistent across reboots
To make the Wake On LAN (WoL) setting persistent across reboots, add the following line to the `/etc/network/interfaces` file:
```sh
post-up /usr/sbin/ethtool -s interface_name wol g
```
linux/grep.md
# Grep
Grep is a command-line utility for searching plain-text data sets for lines that match a regular expression. Its name comes from the ed command g/re/p (globally search for a regular expression and print matching lines), which has the same effect. Grep was originally developed for the Unix operating system, but is now available for all Unix-like systems such as **Linux ([[linux]])**, and some others such as OS-9.
linux/iptables.md
# IPTables
Iptables is a user-space utility program that allows a system administrator to configure the IP packet filter rules of the **Linux ([[linux]])** kernel firewall, implemented as different Netfilter modules. The filters are organized in different tables, which contain chains of rules for how to treat network traffic packets. Different kernel modules and programs are currently used for different protocols; iptables applies to IPv4, ip6tables to IPv6, arptables to ARP, and ebtables to Ethernet frames.
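As a brief illustration of the table/chain model (example rules only, not a complete firewall):
```bash
# append a rule to the INPUT chain of the default filter table: allow incoming SSH
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# list the current rules with line numbers
sudo iptables -L -n --line-numbers
```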
linux/linux.md
linux/lspci.md
# LSPCI
**Lspci** is a command-line utility in Linux and Unix operating systems that is used to display information about all the [PCI](../hardware/pci.md) (Peripheral Component Interconnect) buses and devices connected to the system. It provides detailed information about the hardware components, including their vendor and device IDs, subsystems, and other attributes. The **lspci** command is often used for diagnosing hardware-related issues and identifying the specific hardware components installed in a system.
---
## How to use LSPCI
Here is an example of using the **lspci** command in a Linux terminal:
```sh
$ lspci
00:00.0 Host bridge: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers (rev 07)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) (rev 07)
00:02.0 VGA compatible controller: Intel Corporation UHD Graphics 630 (Desktop)
00:08.0 System peripheral: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model
00:14.0 USB controller: Intel Corporation 200 Series/Z370 Chipset Family USB 3.0 xHCI Controller
00:14.2 Signal processing controller: Intel Corporation 200 Series PCH Thermal Subsystem
...
```
This output shows information about various hardware components in the system, including the vendor and device IDs, the device type, and the revision number.
### Show details about devices
To show detailed information about a specific device using **lspci**, you can specify the device's bus address using the `lspci -s` option.
```sh
$ lspci -s 00:02.0
00:02.0 VGA compatible controller: Intel Corporation UHD Graphics 630 (Desktop) (rev 02)
Subsystem: ASRock Incorporation Device 3977
Flags: bus master, fast devsel, latency 0, IRQ 131
Memory at a0000000 (64-bit, non-prefetchable) [size=16M]
Memory at 90000000 (64-bit, prefetchable) [size=256M]
I/O ports at 5000 [size=64]
[virtual] Expansion ROM at 000c0000 [disabled] [size=128K]
Capabilities: <access denied>
Kernel driver in use: i915
Kernel modules: i915
```
This output shows detailed information about the VGA compatible controller, including its subsystem, memory addresses, I/O ports, and kernel driver.
### Verbose output of lspci
The `-v` (verbose) and `-vv` (very verbose) parameters in **lspci** are used to increase the level of detail in the output.
- The `-v` option provides additional information about the devices, including the vendor and device IDs, subsystem IDs, and more.
- The `-vv` option provides even more detailed information, including the device's capabilities, IRQ settings, and ASPM (Active State Power Management) settings.
For example, to show the [ASPM](../hardware/aspm.md) settings for the [PCI Express](../hardware/pci-express.md) device with bus address 00:1c.0, you can run the following command:
```sh
$ lspci -s 00:1c.0 -vv | grep -i aspm
ASPM L1 Enabled; L0s Enabled
```
---
## Most useful commands
| Command | Description |
| ------- | ----------- |
| `lspci` | List all PCI devices in the system. |
| `lspci -v` | List all PCI devices in the system with verbose output, including vendor and device IDs, subsystem IDs, and more. |
| `lspci -vv` | List all PCI devices in the system with very verbose output, including device capabilities, IRQ settings, and ASPM (Active State Power Management) settings. |
| `lspci -s <bus_address>` | Display information for a specific PCI device with the specified bus address. |
| `lspci -k` | Show kernel driver in use for each device. |
| `lspci -n` | Show numeric IDs for vendor and device instead of names. |
| `lspci -nn` | Show numeric IDs for vendor, device, subsystem vendor, and subsystem device instead of names. |
| `lspci -t` | Display a tree-like diagram of the PCI bus hierarchy. |
| `lspci -D` | Always show PCI domain numbers. |
| `lspci -H1` | Use direct hardware access via Intel configuration mechanism 1. |
| `lspci -x` | Show hex dump of the PCI configuration space for each device. |
linux/lvm2.md
# LVM2 (Logical Volume Manager 2)
**LVM2 (Logical Volume Manager 2)** is a utility for managing disk storage in [Linux](linux.md). It allows you to manage disk space efficiently by abstracting physical storage devices into logical volumes. It provides features like volume resizing, snapshotting, and striping, making it flexible and scalable for various storage needs.
1. Physical Volume: Represents the physical storage devices (e.g., hard drives, SSDs) that are part of the storage pool managed by LVM.
2. Volume Group: Combines multiple physical volumes into a unified storage pool, enabling easy management and allocation of logical volumes.
3. Logical Volume: Serves as a virtual disk that can be used for various purposes, such as creating partitions, mounting file systems, or even setting up RAID configurations.
4. File System: Represents the data organization and access methods used to store and retrieve data on a logical volume. Common file systems include EXT4, XFS, and Btrfs.
## Physical Volume (PV)
A Physical Volume (PV) in LVM is a physical storage device or partition used by LVM. It is a building block for creating Volume Groups and Logical Volumes, allowing you to manage storage efficiently. The following command creates a PV on the given devices; you can do multiple at a time.
```bash
sudo pvcreate /dev/Device /dev/Device2
```
The `pvdisplay` command provides a detailed output for each physical volume. It displays physical properties like size, extents, volume group, and so on in a fixed format.
```bash
sudo pvdisplay
```
The `pvscan` command scans all physical volumes (PVs) on the system and displays information about them.
```bash
sudo pvscan
```
Moves the allocated physical extents from one physical volume to another. Useful when you need to redistribute space between physical volumes in a volume group. If interrupted by a crash or power failure, the move can be restarted and finished without problems.
```bash
sudo pvmove /dev/source_device /dev/target_device
```
## Volume Group (VG)
A Volume Group (VG) in LVM is a collection of one or more Physical Volumes (PVs) combined into a single storage pool. It allows flexible and efficient management of disk space, enabling easy allocation to Logical Volumes (LVs) as needed.
Creates a volume group with a specified name.
```bash
sudo vgcreate Volume_Name /dev/Device1 /dev/Device2 ...
```
The `vgdisplay` command displays volume group properties (such as size, extents, number of physical volumes, and so on) in a fixed form.
```bash
sudo vgdisplay
```
The `vgs` command provides volume group information in a configurable form, displaying one line per volume group. The `vgs` command provides a great deal of format control, and is useful for scripting.
```bash
sudo vgs
```
## Logical Volume (LV)
A logical volume in LVM is a flexible virtual partition that separates storage management from physical disks. This creates a logical volume out of the Volume Group with the specified name and size (5GB).
```bash
sudo lvcreate -n Volume -L 5g Group
```
Extends the logical volume by all the available free space in the volume group. You can also extend it to a fixed size by omitting the `+`.
```bash
sudo lvextend -l +100%FREE Group/Volume
```
The same as above, but in the other direction (shrinking the volume).
```bash
sudo lvreduce -L -5g Group/Volume
```
This is how you rename a logical volume.
```bash
sudo lvrename /dev/Group/old_LV_name new_LV_name
```
This removes a logical volume. Use this command with extreme caution, as it will permanently delete the data on the logical volume.
```bash
sudo lvremove /dev/Group/Volume
```
The `lvs` command provides logical volume information in a configurable form, displaying one line per logical volume. The `lvs` command provides a great deal of format control, and is useful for scripting.
```bash
sudo lvs
```
The `lvdisplay` command displays logical volume properties (such as size, layout, and mapping) in a fixed format.
```bash
sudo lvdisplay
```
### File System
After extending a logical volume, use this command to expand the file system to use the new space.
```bash
sudo resize2fs /dev/Group/Volume
```
## Snapshots
Snapshots in LVM are copies of a logical volume at a specific time, useful for backups and data recovery. This creates a snapshot named "snap" with 5GB. Snapshots store only the changes made since their creation and are independent of the original volume.
```bash
sudo lvcreate -s -n snap -L 5g Group/Volume
```
Merges the snapshot with the original volume. Useful after a faulty update; requires a reboot.
```bash
sudo lvconvert --merge Group/snap
```
## Cache
This creates a cache logical volume with the "writethrough" cache mode using 100% of the free space. Caching improves disk read/write performance. Writethrough ensures that any data written will be stored both in the cache and on the origin LV. The loss of a device associated with the cache in this case would not mean the loss of any data. A second cache mode is "writeback". Writeback delays writing data blocks from the cache back to the origin LV.
```bash
sudo lvcreate --type cache --cachemode writethrough -l 100%FREE -n root_cachepool MyVolGroup/rootvol /dev/fastdisk
```
This removes the cache from the specified logical volume.
```bash
sudo lvconvert --uncache MyVolGroup/rootvol
```
## RAID
> LVM Is Using md Under the Hood
The configuration below will still use "md" behind the scenes. It just saves you the trouble of using "mdadm".
#### RAID 0
RAID 0 is a data storage configuration that stripes data across multiple drives to improve performance, but offers no data redundancy, meaning a single drive failure can result in data loss. This creates a RAID 0 logical volume with a specified name using 100% of the free space in the volume group, with the specified stripe size (2GB).
```bash
sudo lvcreate -i Stripes -I 2G -l 100%FREE -n Volume Group
```
#### RAID 1
RAID 1 is a data storage configuration that mirrors data across multiple drives for data redundancy, providing fault tolerance in case of drive failure but without the performance improvement of RAID 0. This creates a RAID 1 logical volume with a specified name using 100% of the free space in the volume group. The `--nosync` option skips initial sync.
```bash
sudo lvcreate --mirrors 1 --type raid1 -l 100%FREE --nosync -n Volume VGName
```
#### RAID 5
RAID 5 is a data storage configuration that combines striping and parity to provide both performance improvement and data redundancy, allowing for continued data access even if a single drive fails. This creates a RAID 5 logical volume with a specified name using 100% of the free space in the volume group. RAID 5 offers both data striping and parity for fault tolerance.
```bash
sudo lvcreate --type raid5 -l 100%FREE -n LVName VGName
```
linux/mount.md
# Mount
In **Linux ([[linux]])**, `mount` is the command used to attach file systems. Before a user can access a file on a Unix-like machine, the file system on the device which contains the file needs to be mounted with the mount command. Frequently, mount is used for SD cards, USB storage, DVDs, and other removable storage devices.
---
List mount points (the argument is optional)
```
findmnt [<device/directory>]
```
Unmount
```
umount <device/directory>
```
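Mount a device manually (example; the device and mount point are placeholders)
```
mount /dev/sdb1 /mnt
```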
linux/netplan.md
# Netplan
Netplan is a utility for configuring network interfaces in modern versions of [Ubuntu](distros/ubuntu.md) and other Linux distributions. Netplan generates the corresponding configuration files for the underlying network configuration subsystem, such as systemd-networkd or NetworkManager.
---
## How to use
Netplan uses YAML configuration files in the `/etc/netplan/` to describe network interfaces, IP addresses, routes, and other network-related parameters.
**Example: `/etc/netplan/01-netcfg.yaml`**
```yaml
network:
version: 2
renderer: networkd
ethernets:
enp0s3:
dhcp4: true
```
This configuration sets up a DHCP client for the `enp0s3` Ethernet interface, using the systemd-networkd renderer.
To apply this configuration, run the following command.
```sh
sudo netplan apply
```
You can also test a new Netplan configuration without applying it permanently. This is useful if you want to try out a new network configuration without disrupting your current network connection.
```sh
sudo netplan try
```
---
## Static IP addresses
To define a static IP address in Netplan, you can use the addresses key in the configuration file for the relevant network interface. Here's an example configuration file that sets a static IP address of 192.168.1.10 with a netmask of 24 bits for the `enp0s3` Ethernet interface:
```yaml
network:
version: 2
renderer: networkd
ethernets:
enp0s3:
addresses:
- 192.168.1.10/24
gateway4: 192.168.1.1
nameservers:
addresses: [8.8.8.8, 8.8.4.4]
```
In this configuration, the addresses key sets the static IP address and netmask for the `enp0s3` interface. The gateway4 key sets the default gateway, and the nameservers key sets the DNS servers.
---
## VLANs
**Example 1: Simple VLAN configuration**
```yaml
network:
version: 2
renderer: networkd
ethernets:
enp0s3:
dhcp4: true
vlans:
vlan10:
id: 10
link: enp0s3
dhcp4: true
```
In this configuration, the `enp0s3` Ethernet interface is configured to use DHCP to obtain an IP address. A VLAN with ID 10 is also configured on the `enp0s3` interface, and DHCP is enabled for this VLAN as well. The link key specifies that the VLAN is associated with the `enp0s3` interface.
**Example 2: Advanced VLAN configuration**
```yaml
network:
version: 2
renderer: networkd
ethernets:
enp0s3:
dhcp4: true
vlans:
vlan10:
id: 10
link: enp0s3
addresses:
- 192.168.10.2/24
routes:
- to: 0.0.0.0/0
via: 192.168.10.1
nameservers:
addresses: [8.8.8.8, 8.8.4.4]
```
In this configuration, a VLAN with ID 10 is configured on the `enp0s3` interface, and a static IP address of `192.168.10.2` with a netmask of `24` bits is assigned to the VLAN interface. The routes key specifies a default route via the gateway at `192.168.10.1`. The nameservers key sets the DNS servers to `8.8.8.8` and `8.8.4.4`.
## Bridges and Bonding
Bridging and bonding are two techniques used to combine multiple network interfaces into a single logical interface.
### Bonding
Bonding involves combining two or more physical interfaces into a single logical interface, called a bond interface. The bond interface acts like a single network interface, providing higher bandwidth and redundancy. Bonding is often used in high-performance computing environments, where multiple network interfaces are required to handle the high volume of network traffic.
Example 1: Bonding configuration
```yaml
network:
version: 2
renderer: networkd
ethernets:
enp0s3:
dhcp4: true
enp0s4:
dhcp4: true
bonds:
bond0:
interfaces:
- enp0s3
- enp0s4
dhcp4: true
parameters:
mode: active-backup
```
In this configuration, two Ethernet interfaces (enp0s3 and enp0s4) are configured with DHCP to obtain IP addresses. A bond interface (bond0) is also configured, which combines the two Ethernet interfaces into a single logical interface. The interfaces key specifies the physical interfaces to include in the bond, and the mode key specifies the bonding mode (in this case, active-backup).
### Bridging
Bridging involves creating a bridge interface that connects two or more physical interfaces. The bridge interface acts like a virtual switch, allowing devices connected to any of the physical interfaces to communicate with each other as if they were on the same network segment. Bridging is often used to connect two separate network segments or to provide redundancy in case one physical interface fails.
Example 2: Bridging configuration
```yaml
network:
version: 2
renderer: networkd
ethernets:
enp0s3:
dhcp4: true
enp0s4:
dhcp4: true
bridges:
br0:
interfaces:
- enp0s3
- enp0s4
dhcp4: true
```
In this configuration, two Ethernet interfaces (enp0s3 and enp0s4) are configured with DHCP to obtain IP addresses. A bridge interface (br0) is also configured, which combines the two Ethernet interfaces into a single logical interface. The interfaces key specifies the physical interfaces to include in the bridge.
linux/nfs.md
# NFS
Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems (Sun), available in **Linux ([[linux]])**, allowing a user on a client computer to access files over a computer network much like local storage is accessed. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call (ONC RPC) system. NFS is an open IETF standard defined in a Request for Comments (RFC), allowing anyone to implement the protocol.
---
## Install NFS
Install NFS Client on Ubuntu
```bash
sudo apt -y update
sudo apt -y install nfs-common
```
## Client Configuration
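A minimal sketch for mounting an export from the client side (server address and paths are placeholders):
```bash
sudo mount -t nfs 192.168.1.1:/srv/nfs /mnt
# or persistently via /etc/fstab:
# 192.168.1.1:/srv/nfs  /mnt  nfs  defaults  0 0
```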
## Server Configuration
### Configuration
*TEMP EXAMPLE*:
`/srv/nfs 192.168.1.2(rw,sync,no_root_squash,subtree_check)`
### root rw permissions
Note the **root_squash** export option. This option is set by default and must be disabled if not wanted.
*Fix:* enable `no_root_squash` in the `/etc/exports` file and reload the exports with `sudo exportfs -ra`

8
linux/sed.md Normal file
View File

@ -0,0 +1,8 @@
# SED Cheat-Sheet
**sed** ("stream editor") is a **Linux ([[linux]])**, and Unix utility that parses and transforms text, using a simple, compact programming language.
TMP
Replace a pattern (first match on each line):
```bash
sed -i 's/Steven/Kate/' file
```
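A few more common invocations (file names and patterns are placeholders):
```bash
sed -i 's/Steven/Kate/g' file   # replace ALL occurrences on each line, not just the first
sed -n '5,10p' file             # print only lines 5 to 10
sed -i '/^#/d' file             # delete all lines starting with '#'
```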

28
linux/ufw.md Normal file
View File

@ -0,0 +1,28 @@
# UFW (uncomplicated firewall)
UFW (uncomplicated firewall) is a firewall configuration tool for **Linux ([[linux]])** that runs on top of IPTables ([[iptables]]), included by default within Ubuntu distributions. It provides a streamlined interface for configuring common firewall use cases via the command line.
## Enable UFW
To check if ufw is enabled, run:
```bash
sudo ufw status
```
To enable UFW on your system, run:
```bash
sudo ufw enable
```
If for some reason you need to disable UFW, you can do so with the following command:
```bash
sudo ufw disable
```
## Block an IP Address/Subnet
```bash
sudo ufw deny from 203.0.113.0/24
```
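Allow rules use the same syntax; for example (ports and subnets are illustrative):
```bash
sudo ufw allow 22/tcp
sudo ufw allow from 192.168.1.0/24 to any port 443 proto tcp
```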

8
linux/user.md Normal file
View File

@ -0,0 +1,8 @@
# User Management
COMMAND | DESCRIPTION
---|---
`sudo adduser username` | Create a new user
`sudo userdel username` | Delete a user
`sudo usermod -aG groupname username` | Add a user to group
`sudo deluser username groupname` | Remove a user from a group
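A typical sequence when onboarding a new account (names are placeholders):
```bash
sudo adduser alice              # create the user
sudo usermod -aG sudo alice     # grant sudo via group membership
groups alice                    # verify group membership
```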

10
macos/chrome-on-macos.md Normal file
View File

@ -0,0 +1,10 @@
# Google Chrome
## Advanced
### No "Proceed Anyway" Option
Chrome on macOS won't show a "Proceed Anyway" option on `NET::ERR_CERT_INVALID` errors for invalid SSL certificates. Use the secret passphrase as a workaround:
1. Make sure the Chrome window is focused (click somewhere on the page)
2. Just type in `thisisunsafe`

View File

@ -0,0 +1,14 @@
# Docker on macOS Silicon
## Installation
Install Docker Desktop for Mac (Apple Silicon build).
## Platform
```bash
# run an image built for 32-bit ARM (image name is a placeholder)
docker run --platform linux/arm/v7 <image>
# run an x86_64 image under emulation
docker run --platform linux/amd64 <image>
```
## Shared volumes
Shared volumes can be configured in the Docker Desktop Settings `Docker -> Preferences... -> Resources -> File Sharing`

225
macos/macos-shortcuts.md Normal file
View File

@ -0,0 +1,225 @@
# MacOS keyboard shortcuts
# Table of contents
- [MacOS keyboard shortcuts](#macos-keyboard-shortcuts)
- [Table of contents](#table-of-contents)
- [Mac Keyboard Modifier keys](#mac-keyboard-modifier-keys)
- [Cut, copy, paste, and other common shortcuts](#cut-copy-paste-and-other-common-shortcuts)
- [Sleep, log out, and shut down shortcuts](#sleep-log-out-and-shut-down-shortcuts)
- [Finder and system shortcuts](#finder-and-system-shortcuts)
- [Document shortcuts](#document-shortcuts)
## Mac Keyboard Modifier keys
| key | description |
| :--- | :------------- |
| ⌘    | Command / Cmd  |
| ⌃    | Control / Ctrl |
| ⌥    | Option / Alt   |
| ⇧ | Shift |
| ⇪ | Caps Lock |
| Fn | Function key |
_On Windows keyboards, use the Alt key instead of ⌥ (Option), and the Windows logo key instead of ⌘ (Command)._
## Cut, copy, paste, and other common shortcuts
| keys | description |
| :------------ | :------------------------------------------------------------------------------------------------------------ |
| ⌘ + x | Cut the selected item and copy it to the Clipboard |
| ⌘ + c | Copy the selected item to the Clipboard. This also works for files in the Finder |
| ⌘ + v | Paste the contents of the Clipboard into the current document or app. This also works for files in the Finder |
| ⌘ + z         | Undo the previous command                                                                                       |
| ⇧ + ⌘ + z | Redo, reversing the undo command |
| ⌘ + a | Select All items |
| ⌘ + f | Find items in a document or open a Find window |
| ⌘ + g | Find Again: Find the next occurrence of the item previously found |
| ⇧ + ⌘ + G | Find the previous occurrence |
| ⌘ + h | Hide the windows of the front app |
| ⌥ + ⌘ + h | View the front app but hide all other apps |
| ⌘ + m | Minimize the front window to the Dock |
| ⌥ + ⌘ + m | Minimize all windows of the front app |
| ⌘ + o | Open the selected item, or open a dialog to select a file to open |
| ⌘ + p | Print the current document |
| ⌘ + s | Save the current document |
| ⌘ + t | Open a new tab |
| ⌘ + w | Close the front window |
| ⌥ + ⌘ + w | Close all windows of the app |
| ⌥ + ⌘ + Esc | Force quit an app |
| ⌘ + Space | Show or hide the Spotlight search field |
| ⌘ + ⌥ + Space | Perform a Spotlight search from a Finder window |
| ⌃ + ⌘ + Space | Show the Character Viewer, from which you can choose emoji and other symbols |
| ⌃ + ⌘ + f | Use the app in full screen, if supported by the app |
| Space | Use Quick Look to preview the selected item |
| ⌘ + Tab | Switch to the next most recently used app among your open apps |
| ⇧ + ⌘ + 5 | In macOS Mojave or later, take a screenshot or make a screen recording |
| ⇧ + ⌘ + 3 | Take whole display screenshot |
| ⇧ + ⌘ + 4 | Take custom screenshot |
| ⇧ + ⌘ + n | Create a new folder in the Finder |
| ⌘ + , | Open preferences for the front app |
## Sleep, log out, and shut down shortcuts
_* You might need to press and hold some of these shortcuts for slightly longer than other shortcuts. This helps you to avoid using them unintentionally._
| keys | description |
| :---------------- | :---------------------------------------------------------------------------------- |
| Power button | Press to turn on your Mac or wake it from sleep |
| Power button | Press and hold for 1.5 seconds to put your Mac to sleep |
| Power button | Press and continue holding to force your Mac to turn off |
| ⌥ + ⌘ + Power | Put your Mac to sleep |
| ⌥ + ⌘ + Eject | Put your Mac to sleep |
| ⌃ + ⇧ + Power | Put your displays to sleep |
| ⌃ + ⇧ + Eject | Put your displays to sleep |
| ⌃ + Power | Display a dialog asking whether you want to restart, sleep, or shut down |
| ⌃ + Eject | Display a dialog asking whether you want to restart, sleep, or shut down |
| ⌃ + ⌘ + Power | Force your Mac to restart, without prompting to save any open and unsaved documents |
| ⌃ + ⌘ + Eject | Quit all apps, then restart your Mac |
| ⌃ + ⌥ + ⌘ + Power | Quit all apps, then shut down your Mac |
| ⌃ + ⌥ + ⌘ + Eject | Quit all apps, then shut down your Mac                                               |
| ⌃ + ⌘ + q | Immediately lock your screen |
| ⇧ + ⌘ + q | Log out of your macOS user account. You will be asked to confirm |
| ⌥ + ⇧ + ⌘ + q | Log out immediately without confirming |
## Finder and system shortcuts
| keys | description |
| :------------------------------ | :----------------------------------------------------------------------------------------- |
| ⌘ + d | Duplicate the selected files |
| ⌘ + e | Eject the selected disk or volume |
| ⌘ + f | Start a Spotlight search in the Finder window |
| ⌘ + i | Show the Get Info window for a selected file |
| ⌘ + r | (1) When an alias is selected in the Finder: show the original file for the selected alias |
| ⌘ + r | (2) In some apps, such as Calendar or Safari, refresh or reload the page |
| ⌘ + r | (3) In Software Update preferences, check for software updates again |
| ⇧ + ⌘ + c | Open the Computer window |
| ⇧ + ⌘ + d | Open the desktop folder |
| ⇧ + ⌘ + f | Open the Recents window, showing all of the files you viewed or changed recently |
| ⇧ + ⌘ + g | Open a Go to Folder window |
| ⇧ + ⌘ + h | Open the Home folder of the current macOS user account |
| ⇧ + ⌘ + i | Open iCloud Drive |
| ⇧ + ⌘ + k | Open the Network window |
| ⌥ + ⌘ + l | Open the Downloads folder |
| ⇧ + ⌘ + n | Create a new folder |
| ⇧ + ⌘ + o | Open the Documents folder |
| ⇧ + ⌘ + p | Show or hide the Preview pane in Finder windows |
| ⇧ + ⌘ + r | Open the AirDrop window |
| ⇧ + ⌘ + t | Show or hide the tab bar in Finder windows |
| ⌃ + ⇧ + ⌘ + t | Add selected Finder item to the Dock (OS X Mavericks or later) |
| ⇧ + ⌘ + u | Open the Utilities folder |
| ⌥ + ⌘ + d | Show or hide the Dock |
| ⌃ + ⌘ + t | Add the selected item to the sidebar (OS X Mavericks or later) |
| ⌥ + ⌘ + p | Hide or show the path bar in Finder windows |
| ⌥ + ⌘ + s | Hide or show the Sidebar in Finder windows |
| ⌘ + / | Hide or show the status bar in Finder windows |
| ⌘ + j | Show View Options |
| ⌘ + k | Open the Connect to Server window |
| ⌃ + ⌘ + a | Make an alias of the selected item |
| ⌘ + n | Open a new Finder window |
| ⌥ + ⌘ + n | Create a new Smart Folder |
| ⌘ + t | Show or hide the tab bar when a single tab is open in the current Finder window |
| ⌥ + ⌘ + t | Show or hide the toolbar when a single tab is open in the current Finder window |
| ⌥ + ⌘ + v | Move the files in the Clipboard from their original location to the current location |
| ⌘ + y | Use Quick Look to preview the selected files |
| ⌥ + ⌘ + y                       | View a Quick Look slideshow of the selected files                                            |
| ⌘ + 1 | View the items in the Finder window as icons |
| ⌘ + 2 | View the items in a Finder window as a list |
| ⌘ + 3 | View the items in a Finder window in columns |
| ⌘ + 4 | View the items in a Finder window in a gallery |
| ⌘ + [ | Go to the previous folder |
| ⌘ + ] | Go to the next folder |
| ⌘ + ↑ | Open the folder that contains the current folder |
| ⌃ + ⌘ + ↑ | Open the folder that contains the current folder in a new window |
| ⌘ + ↓ | Open the selected item |
| → | Open the selected folder. This works only when in list view |
| ← | Close the selected folder. This works only when in list view |
| ⌘ + Delete | Move the selected item to the Trash |
| ⇧ + ⌘ + Delete | Empty the Trash |
| ⌥ + ⇧ + ⌘ + Delete | Empty the Trash without confirmation dialog |
| ⌘ + Brightness Down | Turn video mirroring on or off when your Mac is connected to more than one display |
| ⌥ + Brightness Up | Open Displays preferences. This works with either Brightness key |
| ⌃ + Brightness Up/Down | Adjust brightness of your external display, if supported by your display |
| ⌥ + ⇧ + Brightness Up/Down | Adjust display brightness in smaller steps |
| ⌃ + ⌥ + ⇧ + Brightness Up/Down | Adjust external display brightness in smaller steps, if supported by display |
| ⌥ + Mission Control | Open Mission Control preferences |
| ⌃ + Mission Control | Show the desktop |
| ⌃ + ↓ | Show all windows of the front app |
| ⌥ + Volume Up | Open Sound preferences. This works with any of the volume keys |
| ⌥ + ⇧ + Volume up/Down | Adjust the sound volume in smaller steps |
| ⌥ + Brightness Up | Open Keyboard preferences. This works with either Keyboard Brightness key |
| ⌥ + ⇧ + Brightness Up/Down | Adjust the keyboard brightness in smaller steps |
| ⌥ + double-clicking | Open the item in a separate window, then close the original window |
| ⌘ + double-clicking | Open a folder in a separate tab or window |
| ⌘ + dragging to another volume | Move the dragged item to the other volume, instead of copying it |
| ⌥ + dragging | Copy the dragged item. The pointer changes while you drag the item |
| ⌥ + ⌘ + while dragging | Make an alias of the dragged item. The pointer changes while you drag the item |
| ⌥ + click a disclosure triangle | Open all folders within the selected folder. This works only when in list view |
| ⌘ + click a window title | See the folders that contain the current folder |
## Document shortcuts
_*The behavior of these shortcuts may vary with the app you're using_
| keys | description |
| :------------ | :--------------------------------------------------------------------------------------------------------------------------------- |
| ⌘ + b | Boldface the selected text, or turn boldfacing on or off |
| ⌘ + i | Italicize the selected text, or turn italics on or off |
| ⌘ + k | Add a web link |
| ⌘ + u | Underline the selected text, or turn underlining on or off |
| ⌘ + t | Show or hide the Fonts window |
| ⌘ + d | Select the Desktop folder from within an Open dialog or Save dialog |
| ⌃ + ⌘ + d | Show or hide the definition of the selected word |
| ⇧ + ⌘ + : | Display the Spelling and Grammar window |
| ⌘ + ; | Find misspelled words in the document |
| ⌥ + Delete | Delete the word to the left of the insertion point |
| ⌃ + h | Delete the character to the left of the insertion point. Or use Delete |
| ⌃ + d | Delete the character to the right of the insertion point |
| Fn + Delete | Forward delete on keyboards that don't have a Forward Delete key |
| ⌃ + k | Delete the text between the insertion point and the end of the line or paragraph |
| Fn + ↑ | Page Up: Scroll up one page |
| Fn + ↓ | Page Down: Scroll down one page. |
| Fn + ← | Home: Scroll to the beginning of a document. |
| Fn + → | End: Scroll to the end of a document. |
| ⌘ + ↑ | Move the insertion point to the beginning of the document |
| ⌘ + ↓ | Move the insertion point to the end of the document. |
| ⌘ + ← | Move the insertion point to the beginning of the current line. |
| ⌘ + → | Move the insertion point to the end of the current line. |
| ⌥ + ← | Move the insertion point to the beginning of the previous word |
| ⌥ + → | Move the insertion point to the end of the next word |
| ⇧ + ⌘ + ↑ | Select the text between the insertion point and the beginning of the document |
| ⇧ + ⌘ + ↓ | Select the text between the insertion point and the end of the document |
| ⇧ + ⌘ + ← | Select the text between the insertion point and the beginning of the current line |
| ⇧ + ⌘ + → | Select the text between the insertion point and the end of the current line |
| ⇧ + ↑ | Extend text selection to the nearest character at the same horizontal location on the line above |
| ⇧ + ↓ | Extend text selection to the nearest character at the same horizontal location on the line below |
| ⇧ + ← | Extend text selection one character to the left |
| ⇧ + → | Extend text selection one character to the right |
| ⌥ + ⇧ + ↑ | Extend text selection to the beginning of the current paragraph, then to the beginning of the following paragraph if pressed again |
| ⌥ + ⇧ + ↓ | Extend text selection to the end of the current paragraph, then to the end of the following paragraph if pressed again |
| ⌥ + ⇧ + ← | Extend text selection to the beginning of the current word, then to the beginning of the following word if pressed again |
| ⌥ + ⇧ + → | Extend text selection to the end of the current word, then to the end of the following word if pressed again |
| ⌃ + a | Move to the beginning of the line or paragraph |
| ⌃ + e | Move to the end of a line or paragraph |
| ⌃ + f | Move one character forward |
| ⌃ + b | Move one character backward |
| ⌃ + l | Center the cursor or selection in the visible area |
| ⌃ + p | Move up one line |
| ⌃ + n | Move down one line |
| ⌃ + o | Insert a new line after the insertion point |
| ⌃ + t | Swap the character behind the insertion point with the character in front of the insertion point |
| ⌘ + { | Left align |
| ⌘ + } | Right align |
| ⇧ + ⌘ + \| | Center align |
| ⌥ + ⌘ + f | Go to the search field |
| ⌥ + ⌘ + t | Show or hide a toolbar in the app |
| ⌥ + ⌘ + c | Copy Style: Copy the formatting settings of the selected item to the Clipboard |
| ⌥ + ⌘ + v | Paste Style: Apply the copied style to the selected item |
| ⌥ + ⇧ + ⌘ + v | Paste and Match Style: Apply the style of the surrounding content to the item pasted within that content |
| ⌥ + ⌘ + i | Show or hide the inspector window |
| ⇧ + ⌘ + p | Page setup: Display a window for selecting document settings |
| ⇧ + ⌘ + s | Display the Save As dialog, or duplicate the current document |
| ⇧ + ⌘ + (-) | Decrease the size of the selected item |
| ⇧ + ⌘ + (+) | Increase the size of the selected item |
| ⌘ + =         | Increase the size of the selected item (same as ⇧ + ⌘ + (+))                                                                        |
| ⇧ + ⌘ + ? | Open the Help menu |

View File

@ -0,0 +1,89 @@
# VSCode shortcuts
- [MacOS vscode keyboard shortcuts](https://code.visualstudio.com/shortcuts/keyboard-shortcuts-macos.pdf)
- [Howto vscode custom shortcuts](https://code.visualstudio.com/docs/getstarted/keybindings)
- [Learn vscode keyboard shortcuts](https://blog.logrocket.com/learn-these-keyboard-shortcuts-to-become-a-vs-code-ninja/)
## Side Menu
| shortcut | description |
| :-------- | :------------------- |
| ⌘ + B | Hide show side menu |
| ⌘ + ⇧ + E | Explorer window |
| ⌘ + ⇧ + F | Find window |
| ⌘ + ⇧ + J | Find in files window |
| ⌃ + ⇧ + G | Git window |
| ⌘ + ⇧ + D | Debug window |
| ⌘ + ⇧ + X | Extension window |
## Multi-Cursor Editing
| shortcut | description |
| :-------- | :-------------------------------------------- |
| ⌘ + ⌥ + ↓ | add a new cursor below |
| ⌥ + Click | add a new cursor at the mouse click |
| ⌘ + ⇧ + L | add new cursor behind all instances of a word |
## Split editor
| shortcut | description |
| :------- | :---------- |
| ⌘ + \ | split |
## Split Window focusing
| shortcut | description |
| :------- | :------------------------------------ |
| ⌘ + 0 | explorer panel |
| ⌘ + 1    | focus 1st split window                |
| ⌘ + 2    | focus 2nd split window                |
| ⌃ + ~    | terminal window                       |
| ⌃ + Tab  | switch between tabs                   |
| ⌘ + ~ | switch between VS code editor windows |
## IntelliSense
| shortcut | description |
| :-------- | :--------------------- |
| ⌃ + Space | to invoke IntelliSense |
## Line Action
| shortcut | description |
| :-------- | :----------------------------- |
| ⇧ + ⌥ + ↓ | copy the line and insert below |
| ⇧ + ⌥ + ↑ | copy the line and insert above |
| ⌥ + ↓ | move entire line below |
| ⌥ + ↑ | move entire line above |
| ⌘ + ⇧ + K | delete entire line |
## Rename Refactoring
| shortcut | description |
| :--------------------------------- | :----------------------------------- |
| F2 (Fn + F2) | Rename Symbol in the current project |
| Right Mouse Click -> Rename Symbol | Rename Symbol in the current project |
## Formatting
| shortcut | description |
| :------------ | :--------------------- |
| ⇧ + ⌥ + F | format entire document |
| ⌘ + K and ⌘ + F | format selected text   |
## Transform selected
| shortcut | description |
| :------------ | :------------------------------ |
| ⌃ + ⇧ + ⌥ + L | transform selected to lower     |
| ⌃ + ⇧ + ⌥ + U | transform selected to upper     |
| ⌃ + ⇧ + ⌥ + S | transform selected to snake     |
| ⌃ + ⇧ + ⌥ + T | transform selected to titlecase |
## Code Folding
| shortcut | description |
| :---------- | :------------ |
| ⌘ + ⌥ + [ | fold |
| ⌘ + ⌥ + ] | unfold |
| ⌘ + K and ⌘ + 0 | fold all      |
| ⌘ + K and ⌘ + J | unfold all    |
| ⌘ + K and ⌘ + 1 | fold 1 level  |
| ⌘ + K and ⌘ + 2 | fold 2 levels |
| ⌘ + K and ⌘ + 5 | fold 5 levels |
## Errors and Warnings
| shortcut | description |
| :------- | :--------------------- |
| F8 | navigate across errors |

262
misc/color-codes.md Normal file
View File

@ -0,0 +1,262 @@
# 256 Color Codes Cheat-Sheet
Colors 0-15 are Xterm system colors.
| Xterm Number | Xterm Name | HEX | RGB | HSL |
| ------------ | ----------------- | --------- | ---------------- | ----------------- |
| 0 | Black (SYSTEM) | `#000000` | rgb(0,0,0) | hsl(0,0%,0%) |
| 1 | Maroon (SYSTEM) | `#800000` | rgb(128,0,0) | hsl(0,100%,25%) |
| 2 | Green (SYSTEM) | `#008000` | rgb(0,128,0) | hsl(120,100%,25%) |
| 3 | Olive (SYSTEM) | `#808000` | rgb(128,128,0) | hsl(60,100%,25%) |
| 4 | Navy (SYSTEM) | `#000080` | rgb(0,0,128) | hsl(240,100%,25%) |
| 5 | Purple (SYSTEM) | `#800080` | rgb(128,0,128) | hsl(300,100%,25%) |
| 6 | Teal (SYSTEM) | `#008080` | rgb(0,128,128) | hsl(180,100%,25%) |
| 7 | Silver (SYSTEM) | `#c0c0c0` | rgb(192,192,192) | hsl(0,0%,75%) |
| 8 | Grey (SYSTEM) | `#808080` | rgb(128,128,128) | hsl(0,0%,50%) |
| 9 | Red (SYSTEM) | `#ff0000` | rgb(255,0,0) | hsl(0,100%,50%) |
| 10 | Lime (SYSTEM) | `#00ff00` | rgb(0,255,0) | hsl(120,100%,50%) |
| 11 | Yellow (SYSTEM) | `#ffff00` | rgb(255,255,0) | hsl(60,100%,50%) |
| 12 | Blue (SYSTEM) | `#0000ff` | rgb(0,0,255) | hsl(240,100%,50%) |
| 13 | Fuchsia (SYSTEM) | `#ff00ff` | rgb(255,0,255) | hsl(300,100%,50%) |
| 14 | Aqua (SYSTEM) | `#00ffff` | rgb(0,255,255) | hsl(180,100%,50%) |
| 15 | White (SYSTEM) | `#ffffff` | rgb(255,255,255) | hsl(0,0%,100%) |
| 16 | Grey0 | `#000000` | rgb(0,0,0) | hsl(0,0%,0%) |
| 17 | NavyBlue | `#00005f` | rgb(0,0,95) | hsl(240,100%,18%) |
| 18 | DarkBlue | `#000087` | rgb(0,0,135) | hsl(240,100%,26%) |
| 19 | Blue3 | `#0000af` | rgb(0,0,175) | hsl(240,100%,34%) |
| 20 | Blue3 | `#0000d7` | rgb(0,0,215) | hsl(240,100%,42%) |
| 21 | Blue1 | `#0000ff` | rgb(0,0,255) | hsl(240,100%,50%) |
| 22 | DarkGreen | `#005f00` | rgb(0,95,0) | hsl(120,100%,18%) |
| 23 | DeepSkyBlue4 | `#005f5f` | rgb(0,95,95) | hsl(180,100%,18%) |
| 24           | DeepSkyBlue4      | `#005f87` | rgb(0,95,135)    | hsl(197,100%,26%) |
| 25           | DeepSkyBlue4      | `#005faf` | rgb(0,95,175)    | hsl(207,100%,34%) |
| 26           | DodgerBlue3       | `#005fd7` | rgb(0,95,215)    | hsl(213,100%,42%) |
| 27           | DodgerBlue2       | `#005fff` | rgb(0,95,255)    | hsl(217,100%,50%) |
| 28 | Green4 | `#008700` | rgb(0,135,0) | hsl(120,100%,26%) |
| 29           | SpringGreen4      | `#00875f` | rgb(0,135,95)    | hsl(162,100%,26%) |
| 30 | Turquoise4 | `#008787` | rgb(0,135,135) | hsl(180,100%,26%) |
| 31           | DeepSkyBlue3      | `#0087af` | rgb(0,135,175)   | hsl(193,100%,34%) |
| 32           | DeepSkyBlue3      | `#0087d7` | rgb(0,135,215)   | hsl(202,100%,42%) |
| 33           | DodgerBlue1       | `#0087ff` | rgb(0,135,255)   | hsl(208,100%,50%) |
| 34 | Green3 | `#00af00` | rgb(0,175,0) | hsl(120,100%,34%) |
| 35           | SpringGreen3      | `#00af5f` | rgb(0,175,95)    | hsl(152,100%,34%) |
| 36           | DarkCyan          | `#00af87` | rgb(0,175,135)   | hsl(166,100%,34%) |
| 37 | LightSeaGreen | `#00afaf` | rgb(0,175,175) | hsl(180,100%,34%) |
| 38           | DeepSkyBlue2      | `#00afd7` | rgb(0,175,215)   | hsl(191,100%,42%) |
| 39           | DeepSkyBlue1      | `#00afff` | rgb(0,175,255)   | hsl(198,100%,50%) |
| 40 | Green3 | `#00d700` | rgb(0,215,0) | hsl(120,100%,42%) |
| 41           | SpringGreen3      | `#00d75f` | rgb(0,215,95)    | hsl(146,100%,42%) |
| 42           | SpringGreen2      | `#00d787` | rgb(0,215,135)   | hsl(157,100%,42%) |
| 43           | Cyan3             | `#00d7af` | rgb(0,215,175)   | hsl(168,100%,42%) |
| 44 | DarkTurquoise | `#00d7d7` | rgb(0,215,215) | hsl(180,100%,42%) |
| 45           | Turquoise2        | `#00d7ff` | rgb(0,215,255)   | hsl(189,100%,50%) |
| 46 | Green1 | `#00ff00` | rgb(0,255,0) | hsl(120,100%,50%) |
| 47           | SpringGreen2      | `#00ff5f` | rgb(0,255,95)    | hsl(142,100%,50%) |
| 48           | SpringGreen1      | `#00ff87` | rgb(0,255,135)   | hsl(151,100%,50%) |
| 49           | MediumSpringGreen | `#00ffaf` | rgb(0,255,175)   | hsl(161,100%,50%) |
| 50           | Cyan2             | `#00ffd7` | rgb(0,255,215)   | hsl(170,100%,50%) |
| 51 | Cyan1 | `#00ffff` | rgb(0,255,255) | hsl(180,100%,50%) |
| 52 | DarkRed | `#5f0000` | rgb(95,0,0) | hsl(0,100%,18%) |
| 53 | DeepPink4 | `#5f005f` | rgb(95,0,95) | hsl(300,100%,18%) |
| 54           | Purple4           | `#5f0087` | rgb(95,0,135)    | hsl(282,100%,26%) |
| 55           | Purple4           | `#5f00af` | rgb(95,0,175)    | hsl(272,100%,34%) |
| 56           | Purple3           | `#5f00d7` | rgb(95,0,215)    | hsl(266,100%,42%) |
| 57           | BlueViolet        | `#5f00ff` | rgb(95,0,255)    | hsl(262,100%,50%) |
| 58 | Orange4 | `#5f5f00` | rgb(95,95,0) | hsl(60,100%,18%) |
| 59 | Grey37 | `#5f5f5f` | rgb(95,95,95) | hsl(0,0%,37%) |
| 60 | MediumPurple4 | `#5f5f87` | rgb(95,95,135) | hsl(240,17%,45%) |
| 61 | SlateBlue3 | `#5f5faf` | rgb(95,95,175) | hsl(240,33%,52%) |
| 62 | SlateBlue3 | `#5f5fd7` | rgb(95,95,215) | hsl(240,60%,60%) |
| 63 | RoyalBlue1 | `#5f5fff` | rgb(95,95,255) | hsl(240,100%,68%) |
| 64           | Chartreuse4       | `#5f8700` | rgb(95,135,0)    | hsl(77,100%,26%)  |
| 65 | DarkSeaGreen4 | `#5f875f` | rgb(95,135,95) | hsl(120,17%,45%) |
| 66 | PaleTurquoise4 | `#5f8787` | rgb(95,135,135) | hsl(180,17%,45%) |
| 67 | SteelBlue | `#5f87af` | rgb(95,135,175) | hsl(210,33%,52%) |
| 68 | SteelBlue3 | `#5f87d7` | rgb(95,135,215) | hsl(220,60%,60%) |
| 69 | CornflowerBlue | `#5f87ff` | rgb(95,135,255) | hsl(225,100%,68%) |
| 70           | Chartreuse3       | `#5faf00` | rgb(95,175,0)    | hsl(87,100%,34%)  |
| 71 | DarkSeaGreen4 | `#5faf5f` | rgb(95,175,95) | hsl(120,33%,52%) |
| 72 | CadetBlue | `#5faf87` | rgb(95,175,135) | hsl(150,33%,52%) |
| 73 | CadetBlue | `#5fafaf` | rgb(95,175,175) | hsl(180,33%,52%) |
| 74 | SkyBlue3 | `#5fafd7` | rgb(95,175,215) | hsl(200,60%,60%) |
| 75 | SteelBlue1 | `#5fafff` | rgb(95,175,255) | hsl(210,100%,68%) |
| 76           | Chartreuse3       | `#5fd700` | rgb(95,215,0)    | hsl(93,100%,42%)  |
| 77 | PaleGreen3 | `#5fd75f` | rgb(95,215,95) | hsl(120,60%,60%) |
| 78 | SeaGreen3 | `#5fd787` | rgb(95,215,135) | hsl(140,60%,60%) |
| 79 | Aquamarine3 | `#5fd7af` | rgb(95,215,175) | hsl(160,60%,60%) |
| 80 | MediumTurquoise | `#5fd7d7` | rgb(95,215,215) | hsl(180,60%,60%) |
| 81 | SteelBlue1 | `#5fd7ff` | rgb(95,215,255) | hsl(195,100%,68%) |
| 82           | Chartreuse2       | `#5fff00` | rgb(95,255,0)    | hsl(97,100%,50%)  |
| 83 | SeaGreen2 | `#5fff5f` | rgb(95,255,95) | hsl(120,100%,68%) |
| 84 | SeaGreen1 | `#5fff87` | rgb(95,255,135) | hsl(135,100%,68%) |
| 85 | SeaGreen1 | `#5fffaf` | rgb(95,255,175) | hsl(150,100%,68%) |
| 86 | Aquamarine1 | `#5fffd7` | rgb(95,255,215) | hsl(165,100%,68%) |
| 87 | DarkSlateGray2 | `#5fffff` | rgb(95,255,255) | hsl(180,100%,68%) |
| 88 | DarkRed | `#870000` | rgb(135,0,0) | hsl(0,100%,26%) |
| 89           | DeepPink4         | `#87005f` | rgb(135,0,95)    | hsl(317,100%,26%) |
| 90 | DarkMagenta | `#870087` | rgb(135,0,135) | hsl(300,100%,26%) |
| 91           | DarkMagenta       | `#8700af` | rgb(135,0,175)   | hsl(286,100%,34%) |
| 92           | DarkViolet        | `#8700d7` | rgb(135,0,215)   | hsl(277,100%,42%) |
| 93           | Purple            | `#8700ff` | rgb(135,0,255)   | hsl(271,100%,50%) |
| 94           | Orange4           | `#875f00` | rgb(135,95,0)    | hsl(42,100%,26%)  |
| 95 | LightPink4 | `#875f5f` | rgb(135,95,95) | hsl(0,17%,45%) |
| 96 | Plum4 | `#875f87` | rgb(135,95,135) | hsl(300,17%,45%) |
| 97 | MediumPurple3 | `#875faf` | rgb(135,95,175) | hsl(270,33%,52%) |
| 98 | MediumPurple3 | `#875fd7` | rgb(135,95,215) | hsl(260,60%,60%) |
| 99 | SlateBlue1 | `#875fff` | rgb(135,95,255) | hsl(255,100%,68%) |
| 100 | Yellow4 | `#878700` | rgb(135,135,0) | hsl(60,100%,26%) |
| 101 | Wheat4 | `#87875f` | rgb(135,135,95) | hsl(60,17%,45%) |
| 102 | Grey53 | `#878787` | rgb(135,135,135) | hsl(0,0%,52%) |
| 103 | LightSlateGrey | `#8787af` | rgb(135,135,175) | hsl(240,20%,60%) |
| 104 | MediumPurple | `#8787d7` | rgb(135,135,215) | hsl(240,50%,68%) |
| 105 | LightSlateBlue | `#8787ff` | rgb(135,135,255) | hsl(240,100%,76%) |
| 106          | Yellow4           | `#87af00` | rgb(135,175,0)   | hsl(73,100%,34%)  |
| 107 | DarkOliveGreen3 | `#87af5f` | rgb(135,175,95) | hsl(90,33%,52%) |
| 108 | DarkSeaGreen | `#87af87` | rgb(135,175,135) | hsl(120,20%,60%) |
| 109 | LightSkyBlue3 | `#87afaf` | rgb(135,175,175) | hsl(180,20%,60%) |
| 110 | LightSkyBlue3 | `#87afd7` | rgb(135,175,215) | hsl(210,50%,68%) |
| 111 | SkyBlue2 | `#87afff` | rgb(135,175,255) | hsl(220,100%,76%) |
| 112          | Chartreuse2       | `#87d700` | rgb(135,215,0)   | hsl(82,100%,42%)  |
| 113 | DarkOliveGreen3 | `#87d75f` | rgb(135,215,95) | hsl(100,60%,60%) |
| 114 | PaleGreen3 | `#87d787` | rgb(135,215,135) | hsl(120,50%,68%) |
| 115 | DarkSeaGreen3 | `#87d7af` | rgb(135,215,175) | hsl(150,50%,68%) |
| 116 | DarkSlateGray3 | `#87d7d7` | rgb(135,215,215) | hsl(180,50%,68%) |
| 117 | SkyBlue1 | `#87d7ff` | rgb(135,215,255) | hsl(200,100%,76%) |
| 118          | Chartreuse1       | `#87ff00` | rgb(135,255,0)   | hsl(88,100%,50%)  |
| 119 | LightGreen | `#87ff5f` | rgb(135,255,95) | hsl(105,100%,68%) |
| 120 | LightGreen | `#87ff87` | rgb(135,255,135) | hsl(120,100%,76%) |
| 121 | PaleGreen1 | `#87ffaf` | rgb(135,255,175) | hsl(140,100%,76%) |
| 122 | Aquamarine1 | `#87ffd7` | rgb(135,255,215) | hsl(160,100%,76%) |
| 123 | DarkSlateGray1 | `#87ffff` | rgb(135,255,255) | hsl(180,100%,76%) |
| 124 | Red3 | `#af0000` | rgb(175,0,0) | hsl(0,100%,34%) |
| 125          | DeepPink4         | `#af005f` | rgb(175,0,95)    | hsl(327,100%,34%) |
| 126          | MediumVioletRed   | `#af0087` | rgb(175,0,135)   | hsl(313,100%,34%) |
| 127 | Magenta3 | `#af00af` | rgb(175,0,175) | hsl(300,100%,34%) |
| 128          | DarkViolet        | `#af00d7` | rgb(175,0,215)   | hsl(288,100%,42%) |
| 129          | Purple            | `#af00ff` | rgb(175,0,255)   | hsl(281,100%,50%) |
| 130          | DarkOrange3       | `#af5f00` | rgb(175,95,0)    | hsl(32,100%,34%)  |
| 131 | IndianRed | `#af5f5f` | rgb(175,95,95) | hsl(0,33%,52%) |
| 132 | HotPink3 | `#af5f87` | rgb(175,95,135) | hsl(330,33%,52%) |
| 133 | MediumOrchid3 | `#af5faf` | rgb(175,95,175) | hsl(300,33%,52%) |
| 134 | MediumOrchid | `#af5fd7` | rgb(175,95,215) | hsl(280,60%,60%) |
| 135 | MediumPurple2 | `#af5fff` | rgb(175,95,255) | hsl(270,100%,68%) |
| 136          | DarkGoldenrod     | `#af8700` | rgb(175,135,0)   | hsl(46,100%,34%)  |
| 137 | LightSalmon3 | `#af875f` | rgb(175,135,95) | hsl(30,33%,52%) |
| 138 | RosyBrown | `#af8787` | rgb(175,135,135) | hsl(0,20%,60%) |
| 139 | Grey63 | `#af87af` | rgb(175,135,175) | hsl(300,20%,60%) |
| 140 | MediumPurple2 | `#af87d7` | rgb(175,135,215) | hsl(270,50%,68%) |
| 141 | MediumPurple1 | `#af87ff` | rgb(175,135,255) | hsl(260,100%,76%) |
| 142 | Gold3 | `#afaf00` | rgb(175,175,0) | hsl(60,100%,34%) |
| 143 | DarkKhaki | `#afaf5f` | rgb(175,175,95) | hsl(60,33%,52%) |
| 144 | NavajoWhite3 | `#afaf87` | rgb(175,175,135) | hsl(60,20%,60%) |
| 145 | Grey69 | `#afafaf` | rgb(175,175,175) | hsl(0,0%,68%) |
| 146 | LightSteelBlue3 | `#afafd7` | rgb(175,175,215) | hsl(240,33%,76%) |
| 147 | LightSteelBlue | `#afafff` | rgb(175,175,255) | hsl(240,100%,84%) |
| 148          | Yellow3           | `#afd700` | rgb(175,215,0)   | hsl(71,100%,42%)  |
| 149 | DarkOliveGreen3 | `#afd75f` | rgb(175,215,95) | hsl(80,60%,60%) |
| 150 | DarkSeaGreen3 | `#afd787` | rgb(175,215,135) | hsl(90,50%,68%) |
| 151 | DarkSeaGreen2 | `#afd7af` | rgb(175,215,175) | hsl(120,33%,76%) |
| 152 | LightCyan3 | `#afd7d7` | rgb(175,215,215) | hsl(180,33%,76%) |
| 153 | LightSkyBlue1 | `#afd7ff` | rgb(175,215,255) | hsl(210,100%,84%) |
| 154          | GreenYellow       | `#afff00` | rgb(175,255,0)   | hsl(78,100%,50%)  |
| 155 | DarkOliveGreen2 | `#afff5f` | rgb(175,255,95) | hsl(90,100%,68%) |
| 156 | PaleGreen1 | `#afff87` | rgb(175,255,135) | hsl(100,100%,76%) |
| 157 | DarkSeaGreen2 | `#afffaf` | rgb(175,255,175) | hsl(120,100%,84%) |
| 158 | DarkSeaGreen1 | `#afffd7` | rgb(175,255,215) | hsl(150,100%,84%) |
| 159 | PaleTurquoise1 | `#afffff` | rgb(175,255,255) | hsl(180,100%,84%) |
| 160 | Red3 | `#d70000` | rgb(215,0,0) | hsl(0,100%,42%) |
| 161          | DeepPink3         | `#d7005f` | rgb(215,0,95)    | hsl(333,100%,42%) |
| 162          | DeepPink3         | `#d70087` | rgb(215,0,135)   | hsl(322,100%,42%) |
| 163          | Magenta3          | `#d700af` | rgb(215,0,175)   | hsl(311,100%,42%) |
| 164 | Magenta3 | `#d700d7` | rgb(215,0,215) | hsl(300,100%,42%) |
| 165          | Magenta2          | `#d700ff` | rgb(215,0,255)   | hsl(290,100%,50%) |
| 166          | DarkOrange3       | `#d75f00` | rgb(215,95,0)    | hsl(26,100%,42%)  |
| 167 | IndianRed | `#d75f5f` | rgb(215,95,95) | hsl(0,60%,60%) |
| 168 | HotPink3 | `#d75f87` | rgb(215,95,135) | hsl(340,60%,60%) |
| 169 | HotPink2 | `#d75faf` | rgb(215,95,175) | hsl(320,60%,60%) |
| 170 | Orchid | `#d75fd7` | rgb(215,95,215) | hsl(300,60%,60%) |
| 171 | MediumOrchid1 | `#d75fff` | rgb(215,95,255) | hsl(285,100%,68%) |
| 172          | Orange3           | `#d78700` | rgb(215,135,0)   | hsl(37,100%,42%)  |
| 173 | LightSalmon3 | `#d7875f` | rgb(215,135,95) | hsl(20,60%,60%) |
| 174 | LightPink3 | `#d78787` | rgb(215,135,135) | hsl(0,50%,68%) |
| 175 | Pink3 | `#d787af` | rgb(215,135,175) | hsl(330,50%,68%) |
| 176 | Plum3 | `#d787d7` | rgb(215,135,215) | hsl(300,50%,68%) |
| 177 | Violet | `#d787ff` | rgb(215,135,255) | hsl(280,100%,76%) |
| 178          | Gold3             | `#d7af00` | rgb(215,175,0)   | hsl(48,100%,42%)  |
| 179 | LightGoldenrod3 | `#d7af5f` | rgb(215,175,95) | hsl(40,60%,60%) |
| 180 | Tan | `#d7af87` | rgb(215,175,135) | hsl(30,50%,68%) |
| 181 | MistyRose3 | `#d7afaf` | rgb(215,175,175) | hsl(0,33%,76%) |
| 182 | Thistle3 | `#d7afd7` | rgb(215,175,215) | hsl(300,33%,76%) |
| 183 | Plum2 | `#d7afff` | rgb(215,175,255) | hsl(270,100%,84%) |
| 184 | Yellow3 | `#d7d700` | rgb(215,215,0) | hsl(60,100%,42%) |
| 185 | Khaki3 | `#d7d75f` | rgb(215,215,95) | hsl(60,60%,60%) |
| 186 | LightGoldenrod2 | `#d7d787` | rgb(215,215,135) | hsl(60,50%,68%) |
| 187 | LightYellow3 | `#d7d7af` | rgb(215,215,175) | hsl(60,33%,76%) |
| 188 | Grey84 | `#d7d7d7` | rgb(215,215,215) | hsl(0,0%,84%) |
| 189 | LightSteelBlue1 | `#d7d7ff` | rgb(215,215,255) | hsl(240,100%,92%) |
| 190          | Yellow2           | `#d7ff00` | rgb(215,255,0)   | hsl(69,100%,50%)  |
| 191 | DarkOliveGreen1 | `#d7ff5f` | rgb(215,255,95) | hsl(75,100%,68%) |
| 192 | DarkOliveGreen1 | `#d7ff87` | rgb(215,255,135) | hsl(80,100%,76%) |
| 193 | DarkSeaGreen1 | `#d7ffaf` | rgb(215,255,175) | hsl(90,100%,84%) |
| 194 | Honeydew2 | `#d7ffd7` | rgb(215,255,215) | hsl(120,100%,92%) |
| 195 | LightCyan1 | `#d7ffff` | rgb(215,255,255) | hsl(180,100%,92%) |
| 196 | Red1 | `#ff0000` | rgb(255,0,0) | hsl(0,100%,50%) |
| 197          | DeepPink2         | `#ff005f` | rgb(255,0,95)    | hsl(337,100%,50%) |
| 198          | DeepPink1         | `#ff0087` | rgb(255,0,135)   | hsl(328,100%,50%) |
| 199          | DeepPink1         | `#ff00af` | rgb(255,0,175)   | hsl(318,100%,50%) |
| 200          | Magenta2          | `#ff00d7` | rgb(255,0,215)   | hsl(309,100%,50%) |
| 201 | Magenta1 | `#ff00ff` | rgb(255,0,255) | hsl(300,100%,50%) |
| 202          | OrangeRed1        | `#ff5f00` | rgb(255,95,0)    | hsl(22,100%,50%)  |
| 203 | IndianRed1 | `#ff5f5f` | rgb(255,95,95) | hsl(0,100%,68%) |
| 204 | IndianRed1 | `#ff5f87` | rgb(255,95,135) | hsl(345,100%,68%) |
| 205 | HotPink | `#ff5faf` | rgb(255,95,175) | hsl(330,100%,68%) |
| 206 | HotPink | `#ff5fd7` | rgb(255,95,215) | hsl(315,100%,68%) |
| 207 | MediumOrchid1 | `#ff5fff` | rgb(255,95,255) | hsl(300,100%,68%) |
| 208          | DarkOrange        | `#ff8700` | rgb(255,135,0)   | hsl(31,100%,50%)  |
| 209 | Salmon1 | `#ff875f` | rgb(255,135,95) | hsl(15,100%,68%) |
| 210 | LightCoral | `#ff8787` | rgb(255,135,135) | hsl(0,100%,76%) |
| 211 | PaleVioletRed1 | `#ff87af` | rgb(255,135,175) | hsl(340,100%,76%) |
| 212 | Orchid2 | `#ff87d7` | rgb(255,135,215) | hsl(320,100%,76%) |
| 213 | Orchid1 | `#ff87ff` | rgb(255,135,255) | hsl(300,100%,76%) |
| 214          | Orange1           | `#ffaf00` | rgb(255,175,0)   | hsl(41,100%,50%)  |
| 215 | SandyBrown | `#ffaf5f` | rgb(255,175,95) | hsl(30,100%,68%) |
| 216 | LightSalmon1 | `#ffaf87` | rgb(255,175,135) | hsl(20,100%,76%) |
| 217 | LightPink1 | `#ffafaf` | rgb(255,175,175) | hsl(0,100%,84%) |
| 218 | Pink1 | `#ffafd7` | rgb(255,175,215) | hsl(330,100%,84%) |
| 219 | Plum1 | `#ffafff` | rgb(255,175,255) | hsl(300,100%,84%) |
| 220          | Gold1             | `#ffd700` | rgb(255,215,0)   | hsl(50,100%,50%)  |
| 221 | LightGoldenrod2 | `#ffd75f` | rgb(255,215,95) | hsl(45,100%,68%) |
| 222 | LightGoldenrod2 | `#ffd787` | rgb(255,215,135) | hsl(40,100%,76%) |
| 223 | NavajoWhite1 | `#ffd7af` | rgb(255,215,175) | hsl(30,100%,84%) |
| 224 | MistyRose1 | `#ffd7d7` | rgb(255,215,215) | hsl(0,100%,92%) |
| 225 | Thistle1 | `#ffd7ff` | rgb(255,215,255) | hsl(300,100%,92%) |
| 226 | Yellow1 | `#ffff00` | rgb(255,255,0) | hsl(60,100%,50%) |
| 227 | LightGoldenrod1 | `#ffff5f` | rgb(255,255,95) | hsl(60,100%,68%) |
| 228 | Khaki1 | `#ffff87` | rgb(255,255,135) | hsl(60,100%,76%) |
| 229 | Wheat1 | `#ffffaf` | rgb(255,255,175) | hsl(60,100%,84%) |
| 230 | Cornsilk1 | `#ffffd7` | rgb(255,255,215) | hsl(60,100%,92%) |
| 231 | Grey100 | `#ffffff` | rgb(255,255,255) | hsl(0,0%,100%) |
| 232 | Grey3 | `#080808` | rgb(8,8,8) | hsl(0,0%,3%) |
| 233 | Grey7 | `#121212` | rgb(18,18,18) | hsl(0,0%,7%) |
| 234 | Grey11 | `#1c1c1c` | rgb(28,28,28) | hsl(0,0%,10%) |
| 235 | Grey15 | `#262626` | rgb(38,38,38) | hsl(0,0%,14%) |
| 236 | Grey19 | `#303030` | rgb(48,48,48) | hsl(0,0%,18%) |
| 237 | Grey23 | `#3a3a3a` | rgb(58,58,58) | hsl(0,0%,22%) |
| 238 | Grey27 | `#444444` | rgb(68,68,68) | hsl(0,0%,26%) |
| 239 | Grey30 | `#4e4e4e` | rgb(78,78,78) | hsl(0,0%,30%) |
| 240 | Grey35 | `#585858` | rgb(88,88,88) | hsl(0,0%,34%) |
| 241 | Grey39 | `#626262` | rgb(98,98,98) | hsl(0,0%,37%) |
| 242 | Grey42 | `#6c6c6c` | rgb(108,108,108) | hsl(0,0%,40%) |
| 243 | Grey46 | `#767676` | rgb(118,118,118) | hsl(0,0%,46%) |
| 244 | Grey50 | `#808080` | rgb(128,128,128) | hsl(0,0%,50%) |
| 245 | Grey54 | `#8a8a8a` | rgb(138,138,138) | hsl(0,0%,54%) |
| 246 | Grey58 | `#949494` | rgb(148,148,148) | hsl(0,0%,58%) |
| 247 | Grey62 | `#9e9e9e` | rgb(158,158,158) | hsl(0,0%,61%) |
| 248 | Grey66 | `#a8a8a8` | rgb(168,168,168) | hsl(0,0%,65%) |
| 249 | Grey70 | `#b2b2b2` | rgb(178,178,178) | hsl(0,0%,69%) |
| 250 | Grey74 | `#bcbcbc` | rgb(188,188,188) | hsl(0,0%,73%) |
| 251 | Grey78 | `#c6c6c6` | rgb(198,198,198) | hsl(0,0%,77%) |
| 252 | Grey82 | `#d0d0d0` | rgb(208,208,208) | hsl(0,0%,81%) |
| 253 | Grey85 | `#dadada` | rgb(218,218,218) | hsl(0,0%,85%) |
| 254 | Grey89 | `#e4e4e4` | rgb(228,228,228) | hsl(0,0%,89%) |
| 255 | Grey93 | `#eeeeee` | rgb(238,238,238) | hsl(0,0%,93%) |

25
misc/github-actions.md Normal file
View File

@ -0,0 +1,25 @@
# GitHub Actions
Automate, customize, and execute your software development workflows right in your repository with GitHub Actions. You can discover, create, and share actions to perform any job you'd like, including CI/CD, and combine actions in a completely customized workflow.
---
## Workflows
A workflow is a configurable automated process, defined in a YAML file in your repository (under `.github/workflows/`), that runs one or more jobs when triggered.
---
## Events
An event is a specific activity in a repository that triggers a workflow run, such as a push, a pull request being opened, or a schedule.
---
## Jobs
A job is a set of steps that execute on the same runner. Steps run shell commands or actions; jobs run in parallel by default but can be made to depend on each other.
---
## Actions
An action is a reusable application for the GitHub Actions platform that performs a frequently repeated task, such as checking out your repository or setting up a toolchain.
---
## Runners
A runner is a server that runs your workflows when they're triggered. Each runner can run a single job at a time. GitHub provides Ubuntu Linux, Microsoft Windows, and macOS runners to run your workflows; each workflow run executes in a fresh, newly-provisioned virtual machine.
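A minimal workflow sketch tying these concepts together (the file path, job, and step contents are illustrative):
```yaml
# .github/workflows/ci.yml
name: CI
on: [push]                       # the event that triggers the workflow
jobs:
  build:                         # a single job...
    runs-on: ubuntu-latest       # ...executed on a GitHub-hosted runner
    steps:
      - uses: actions/checkout@v4                 # an action
      - run: echo "Hello from GitHub Actions"     # a plain shell step
```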

9
misc/github.md Normal file
View File

@ -0,0 +1,9 @@
# GitHub
GitHub is an Internet hosting service for software development and version control using Git ([[git]]), plus access control, bug tracking, software feature requests, task management, continuous integration, and wikis for every project.
---
## GitHub Actions
Automate, customize, and execute your software development workflows right in your repository with GitHub Actions ([[repos/cheat-sheets/misc/github-actions]]). You can discover, create, and share actions to perform any job you'd like, including CI/CD, and combine actions in a completely customized workflow.

78
misc/http-status-codes.md Normal file
View File

@ -0,0 +1,78 @@
# HTTP Status Codes
## Categories
- **1XX** status codes: Informational Requests
- **2XX** status codes: Successful Requests
- **3XX** status codes: Redirects
- **4XX** status codes: Client Errors
- **5XX** status codes: Server Errors
## Complete List
| Code | Name | Description |
| ---- | ---------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 100 | Continue | Everything so far is OK and the client should continue with the request, or ignore it if it is already finished. |
| 101 | Switching Protocols | The client has asked the server to change protocols and the server has agreed to do so. |
| 102 | Processing | The server has received and is processing the request, but that it does not have a final response yet. |
| 103 | Early Hints | Used to return some response headers before final HTTP message. |
| 200 | OK | Successful request. |
| 201 | Created | The server acknowledged the created resource. |
| 202 | Accepted | The client's request has been received but the server is still processing it. |
| 203 | Non-Authoritative Information | The returned information is not exactly as provided by the origin server, but collected from a local or third-party copy (e.g. modified by a proxy). |
| 204 | No Content | There is no content to send for this request. |
| 205 | Reset Content | Tells the user agent to reset the document which sent this request. |
| 206 | Partial Content | This response code is used when the range-header is sent from the client to request only part of a resource. |
| 207 | Multi-Status | Conveys information about multiple resources, for situations where multiple status codes might be appropriate. |
| 208 | Already Reported | The members of a DAV binding have already been enumerated in a preceding part of the multi-status response. |
| 226 | IM Used | IM is a specific extension of the HTTP protocol. The extension allows a HTTP server to send diffs (changes) of resources to clients. |
| 300 | Multiple Choices | The request has more than one possible response. The user agent should choose one. |
| 301 | Moved Permanently | The URL of the requested resource has been changed permanently. The new URL is given in the response. |
| 302 | Found | This response code means that the URI of the requested resource has been changed temporarily. |
| 303 | See Other | The server sent this response to direct the client to get the requested resource at another URI with a GET request. |
| 304 | Not Modified | It tells the client that the response has not been modified, so the client can continue to use the same cached version of the response. |
| 305 | Use Proxy | Defined in a previous version of the HTTP specification to indicate that a requested response must be accessed by a proxy. (discontinued) |
| 307 | Temporary Redirect | The server sends this response to direct the client to get the requested resource at another URI with same method that was used in the prior request. |
| 308 | Permanent Redirect | This means that the resource is now permanently located at another URI, specified by the Location: HTTP Response header. |
| 400 | Bad Request | The server could not understand the request |
| 401 | Unauthorized | The client didn't authenticate himself. |
| 402 | Payment Required | This response code is reserved for future use. The initial aim for creating this code was using it for digital payment systems, however this status code is used very rarely and no standard convention exists. |
| 403 | Forbidden | The client does not have access rights to the content |
| 404 | Not Found | The server can not find the requested resource |
| 405 | Method Not Allowed | The request method is known by the server but is not supported by the target resource |
| 406 | Not Acceptable | The response doesn't conform to the criteria given by the client. |
| 407 | Proxy Authentication Required | This is similar to 401 Unauthorized but authentication is needed to be done by a proxy. |
| 408 | Request Timeout | This response is sent on an idle connection by some servers, even without any previous request by the client. |
| 409 | Conflict | This response is sent when a request conflicts with the current state of the server. |
| 410 | Gone | This response is sent when the requested content has been permanently deleted from server, with no forwarding address. |
| 411 | Length Required | Server rejected the request because the Content-Length header field is not defined and the server requires it. |
| 412 | Precondition Failed | The client has indicated preconditions in its headers which the server does not meet. |
| 413 | Payload Too Large | Request entity is larger than limits defined by server. |
| 414 | Request-URI Too Long | The URI requested by the client is longer than the server is willing to interpret. |
| 415 | Unsupported Media Type | The media format is not supported by the server. |
| 416 | Requested Range Not Satisfiable | The range specified by the Range header field in the request cannot be fulfilled. |
| 417 | Expectation Failed | The expectation indicated by the Expect request header field cannot be met by the server. |
| 418 | I'm a teapot | The server refuses the attempt to brew coffee with a teapot. |
| 421 | Misdirected Request | The request was directed at a server that is not able to produce a response. |
| 422 | Unprocessable Entity | The request was well-formed but was unable to be followed due to semantic errors. |
| 423 | Locked | The resource that is being accessed is locked. |
| 424 | Failed Dependency | The request failed due to failure of a previous request. |
| 426 | Upgrade Required | The server refuses to perform the request using the current protocol but might be willing to do so after the client upgrades to a different protocol. |
| 428 | Precondition Required | This response is intended to prevent the 'lost update' problem, where a client GETs a resource's state, modifies it and PUTs it back to the server, when meanwhile a third party has modified the state on the server, leading to a conflict. |
| 429 | Too Many Requests | The user has sent too many requests in a given amount of time |
| 431 | Request Header Fields Too Large | The server can't process the request because its header fields are too large. |
| 444 | Connection Closed Without Response | The connection opened, but no data was written. |
| 451 | Unavailable For Legal Reasons | The user agent requested a resource that cannot legally be provided (such as a web page censored by a government) |
| 499 | Client Closed Request | The client closed the connection while the server was still processing the request. |
| 500 | Internal Server Error | The server has encountered a situation it does not know how to handle. |
| 501 | Not Implemented | The request method is not supported by the server and cannot be handled. |
| 502 | Bad Gateway | This error response means that the server, while working as a gateway to get a response needed to handle the request, got an invalid response. |
| 503 | Service Unavailable | The server is not ready to handle the request. |
| 504 | Gateway Timeout | This error response is given when the server is acting as a gateway and cannot get a response in time. |
| 505 | HTTP Version Not Supported | The HTTP version used in the request is not supported by the server. |
| 506 | Variant Also Negotiates | The chosen variant resource is configured to engage in transparent content negotiation itself, and is therefore not a proper endpoint in the negotiation process. |
| 507 | Insufficient Storage | The method could not be performed on the resource because the server is unable to store the representation needed to successfully complete the request. |
| 508 | Loop Detected | The server detected an infinite loop while processing the request. |
| 510 | Not Extended | Further extensions to the request are required for the server to fulfill it. |
| 511 | Network Authentication Required | Indicates that the client needs to authenticate to gain network access. |
| 599 | Network Connect Timeout Error | The connection timed out due to an overloaded server, a hardware error, or an infrastructure error. |
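To quickly check which status code a server returns (URL is a placeholder):
```bash
curl -s -o /dev/null -w "%{http_code}\n" https://example.com
```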

120
misc/markdown.md Normal file
View File

@ -0,0 +1,120 @@
# Markdown
Markdown is a text-to-HTML conversion tool for web writers. Markdown allows you to write using an easy-to-read, easy-to-write plain text format, then convert it to structurally valid XHTML (or HTML).
Documentation: [Markdown Docs](https://daringfireball.net/projects/markdown/)
RFC: [RFC 7763](https://www.rfc-editor.org/rfc/rfc7763)
GitHub Documentation: [Writing Markdown on GitHub](https://docs.github.com/en/get-started/writing-on-github)
---
## Cheat-Sheet
### Headings
```markdown
# Heading 1
## Heading 2
### Heading 3
#### Heading 4
##### Heading 5
###### Heading 6
```
Here is a heading: `# Heading`. **Don't do this:** `#Heading` (the space after `#` is required).
### Emphasis
```markdown
Emphasis, aka italics, with *asterisks* or _underscores_.
Strong emphasis, aka bold, with **asterisks** or __underscores__.
Combined emphasis with **asterisks and _underscores_**.
Strikethrough uses two tildes. ~~Scratch this.~~
```
### Line Breaks
```markdown
First line with two spaces after.
And the next line.
```
### Lists
#### Ordered Lists
```markdown
1. First item
2. Second item
3. Third item
```
#### Unordered Lists
```markdown
- First item
- Second item
- Third item
```
### Links
```markdown
Link with text: [link-text](https://www.google.com)
```
### Images
```markdown
Image with alt text: ![alt-text](https://camo.githubusercontent.com/4d89cd791580bfb19080f8b0844ba7e1235aa4becc3f43dfd708a769e257d8de/68747470733a2f2f636e642d70726f642d312e73332e75732d776573742d3030342e6261636b626c617a6562322e636f6d2f6e65772d62616e6e6572342d7363616c65642d666f722d6769746875622e6a7067)
Image without alt text: ![](https://camo.githubusercontent.com/4d89cd791580bfb19080f8b0844ba7e1235aa4becc3f43dfd708a769e257d8de/68747470733a2f2f636e642d70726f642d312e73332e75732d776573742d3030342e6261636b626c617a6562322e636f6d2f6e65772d62616e6e6572342d7363616c65642d666f722d6769746875622e6a7067)
```
### Code Blocks
#### Inline Code Block
```markdown
Inline `code` has `back-ticks around` it.
```
#### Blocks of Code
<pre>
```javascript
var s = "JavaScript syntax highlighting";
alert(s);
```
```python
s = "Python syntax highlighting"
print s
```
```
No language indicated, so no syntax highlighting.
But let's throw in a <b>tag</b>.
```
</pre>
### Tables
There must be at least 3 dashes separating each header cell.
The outer pipes (|) are optional, and you don't need to make the raw Markdown line up prettily.
```markdown
| Heading 1 | Heading 2 | Heading 3 |
|---|---|---|
| col1 | col2 | col3 |
| col1 | col2 | col3 |
```
### Task list
To create a task list, start each line with a hyphen, a space, and square brackets containing a space (`- [ ]`), then add the text for the task.
To check the task off, replace the space between the brackets with an `x`.
```markdown
- [x] Write the post
- [ ] Update the website
- [ ] Contact the user
```
## Reference
Link: [markdown guide](https://www.markdownguide.org/cheat-sheet)

121
misc/ssl-certs.md Normal file
View File

@ -0,0 +1,121 @@
# SSL/TLS Certificates
X.509 is an ITU standard defining the format of public key certificates. X.509 are used in TLS/SSL, which is the basis for HTTPS. An X.509 certificate binds an identity to a public key using a digital signature. A certificate contains an identity (hostname, organization, etc.) and a public key (RSA, DSA, ECDSA, ed25519, etc.), and is either signed by a Certificate Authority or is Self-Signed.
## Self-Signed Certificates
### Generate CA
1. Generate RSA
```bash
openssl genrsa -aes256 -out ca-key.pem 4096
```
2. Generate a public CA Cert
```bash
openssl req -new -x509 -sha256 -days 365 -key ca-key.pem -out ca.pem
```
### Optional Stage: View Certificate's Content
```bash
openssl x509 -in ca.pem -text
openssl x509 -in ca.pem -purpose -noout -text
```
### Generate Certificate
1. Create a RSA key
```bash
openssl genrsa -out cert-key.pem 4096
```
2. Create a Certificate Signing Request (CSR)
```bash
openssl req -new -sha256 -subj "/CN=yourcn" -key cert-key.pem -out cert.csr
```
3. Create an `extfile` with all the alternative names
```bash
echo "subjectAltName=DNS:your-dns.record,IP:257.10.10.1" >> extfile.cnf
```
```bash
# optional
echo extendedKeyUsage = serverAuth >> extfile.cnf
```
4. Create the certificate
```bash
openssl x509 -req -sha256 -days 365 -in cert.csr -CA ca.pem -CAkey ca-key.pem -out cert.pem -extfile extfile.cnf -CAcreateserial
```
## Certificate Formats
X.509 Certificates exist in Base64 Formats **PEM (.pem, .crt, .ca-bundle)**, **PKCS#7 (.p7b, p7s)** and Binary Formats **DER (.der, .cer)**, **PKCS#12 (.pfx, p12)**.
### Convert Certs
COMMAND | CONVERSION
---|---
`openssl x509 -outform der -in cert.pem -out cert.der` | PEM to DER
`openssl x509 -inform der -in cert.der -out cert.pem` | DER to PEM
`openssl pkcs12 -in cert.pfx -out cert.pem -nodes` | PFX to PEM
## Verify Certificates
`openssl verify -CAfile ca.pem -verbose cert.pem`
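To inspect the certificate a live server actually presents (hostname is a placeholder):
```bash
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates
```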
## Install the CA Cert as a trusted root CA
### On Debian & Derivatives
- Move the CA certificate (`ca.pem`) into `/usr/local/share/ca-certificates/ca.crt`.
- Update the Cert Store with:
```bash
sudo update-ca-certificates
```
Refer to the documentation [here](https://wiki.debian.org/Self-Signed_Certificate) and [here.](https://manpages.debian.org/buster/ca-certificates/update-ca-certificates.8.en.html)
### On Fedora
- Move the CA certificate (`ca.pem`) to `/etc/pki/ca-trust/source/anchors/ca.pem` or `/usr/share/pki/ca-trust-source/anchors/ca.pem`
- Now run (with sudo if necessary):
```bash
update-ca-trust
```
Refer to the documentation [here.](https://docs.fedoraproject.org/en-US/quick-docs/using-shared-system-certificates/)
### On Arch
System-wide Arch (p11-kit)
(From arch wiki)
- Run (As root)
```bash
trust anchor --store myCA.crt
```
- The certificate will be written to /etc/ca-certificates/trust-source/myCA.p11-kit and the "legacy" directories automatically updated.
- If you get "no configured writable location" or a similar error, import the CA manually:
- Copy the certificate to the /etc/ca-certificates/trust-source/anchors directory.
- and then
```bash
update-ca-trust
```
See the wiki page [here](https://wiki.archlinux.org/title/User:Grawity/Adding_a_trusted_CA_certificate)
### On Windows
Assuming the path to your generated CA certificate as `C:\ca.pem`, run:
```powershell
Import-Certificate -FilePath "C:\ca.pem" -CertStoreLocation Cert:\LocalMachine\Root
```
- Set `-CertStoreLocation` to `Cert:\CurrentUser\Root` in case you want to trust certificates only for the logged in user.
OR
In Command Prompt, run:
```sh
certutil.exe -addstore root C:\ca.pem
```
- `certutil.exe` is a built-in tool (classic `System32` one) and adds a system-wide trust anchor.
### On Android
The exact steps vary device-to-device, but here is a generalised guide:
1. Open Phone Settings
2. Locate `Encryption and Credentials` section. It is generally found under `Settings > Security > Encryption and Credentials`
3. Choose `Install a certificate`
4. Choose `CA Certificate`
5. Locate the certificate file `ca.pem` on your SD Card/Internal Storage using the file manager.
6. Select to load it.
7. Done!

10
misc/ssl-security.md Normal file
View File

@ -0,0 +1,10 @@
# SSL Security Cheat-Sheet
... TBD
## TLS Version and Ciphers
Scanning for TLS Version and supported Ciphers: `nmap --script ssl-enum-ciphers <target>`
Tool | Link | Description
---|---|---
Qualys SSL Labs | https://www.ssllabs.com/projects/index.html | SSL Security Tools by Qualys

9
misc/zerotrust.md Normal file
View File

@ -0,0 +1,9 @@
# Zero Trust
Zero Trust is a security concept and framework that assumes no level of trust by default for any user, device, or network component, regardless of whether they are inside or outside the organization's network perimeter. It emphasizes the need for strict access controls and authentication measures at all levels, such as verifying user identity, device security posture, and continuously monitoring user activities. Zero Trust aims to enhance overall security posture and protect against internal and external threats by minimizing potential attack surfaces and adopting a more granular and dynamic approach to security.
## Zero Trust Network Access (ZTNA)
Zero Trust Network Access (ZTNA) is a specific implementation of Zero Trust principles that focuses on providing secure access to internal resources and applications for authorized users, regardless of their location or the network they are connecting from.
[Learn more](../networking/zerotrust-networkaccess.md)

View File

@ -0,0 +1,19 @@
# ARP Protocol
The Address Resolution Protocol (ARP) is a communication protocol used for discovering the link layer address, such as a MAC address, associated with a given internet layer address, typically an IPv4 address. This mapping is necessary because the data link and network layer addresses of a device can be different, and ARP provides a way to translate between them.
ARP operates within the Internet Protocol Suite's networking layer, and is used by network devices to map an IP address to a physical address, such as an Ethernet address. ARP is used for communication within a network segment (layer 2), while the Internet Protocol is used for communication across network segments (layer 3).
ARP is a stateless protocol, meaning that each request is independent of the previous request in the same session. ARP is also a broadcast protocol, meaning that it is used for one-to-all communication within a network.
## ARP Request
An ARP request is a message broadcast by a device to all other devices on the local network segment to discover the MAC address that belongs to a given IP address. The request contains the sender's own IP and MAC addresses, plus the target IP address being resolved. The device that owns the requested IP address responds with an ARP reply.
## ARP Reply
An ARP reply is the message a device sends back to provide its MAC address. It contains the responder's IP and MAC addresses, and is sent unicast, directly to the device that issued the ARP request.
## ARP Table
An ARP table (or ARP cache) stores the IP-to-MAC address mappings a device has learned. Before sending a frame, the device consults the table to look up the destination MAC address, issuing a new ARP request only when no entry exists for the target IP address.
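On a Linux host, the ARP table can be inspected and managed with:
```bash
ip neigh show             # list the current ARP/neighbor cache
arp -a                    # legacy equivalent (net-tools package)
sudo ip neigh flush all   # clear the cache
```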

View File

@ -0,0 +1,7 @@
# Autonegotiation
**Autonegotiation** is a feature in Ethernet networking that allows two connected devices to automatically negotiate and establish the best possible parameters for communication. When two devices with **autonegotiation** capability are connected, they exchange information about their supported capabilities, such as link speed (e.g., 10 Mbps, 100 Mbps, 1 Gbps), duplex mode (e.g., half-duplex or full-duplex), and flow control.
Based on this information, the devices negotiate and agree upon the highest mutually supported settings for optimal communication.
**Autonegotiation** helps ensure compatibility and optimal performance between network devices. It eliminates the need for manual configuration and allows network devices to adapt to different speeds and duplex modes based on the capabilities of the connected devices.
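To check what was actually negotiated on a Linux host (a sketch; `eth0` is a placeholder interface name):
```sh
# Print link parameters; look for the "Speed", "Duplex" and
# "Auto-negotiation" fields in the output
ethtool eth0
```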

View File

@ -0,0 +1,13 @@
# DNSSEC
DNSSEC (DNS Security Extensions) is a set of security extensions to the [[DNS]] (Domain Name System) protocol that provides authentication and integrity checking for DNS data. DNSSEC uses digital signatures to ensure that DNS responses have not been modified in transit and that they come from an authorized source.
With DNSSEC, each zone in the DNS hierarchy is signed with a private key, and the corresponding public key is published in the DNS. When a DNS resolver receives a DNS response, it can use the public key to verify the digital signature and ensure that the response has not been tampered with. If the signature is valid, the resolver can be confident that the response is authentic and has not been modified in transit.
DNSSEC provides several benefits, including:
- Data integrity: DNSSEC ensures that DNS responses have not been modified in transit, preventing DNS spoofing and other types of attacks that rely on DNS data tampering.
- Authentication: DNSSEC allows DNS resolvers to authenticate the source of DNS responses, providing an additional layer of security against DNS cache poisoning and other attacks.
- Trust hierarchy: DNSSEC allows for the creation of a trust hierarchy in the DNS, with each zone in the hierarchy being responsible for signing its own data and delegating trust to its child zones.
DNSSEC is supported by most modern DNS servers and resolvers, and is becoming increasingly important as a tool for securing the Internet's infrastructure.
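Validation can be observed with standard BIND utilities (a sketch; `example.com` stands in for any signed zone):
```sh
# Request DNSSEC records; a validating resolver sets the 'ad'
# (authenticated data) flag in the response header
dig +dnssec example.com A
# delv validates the chain of trust itself and reports the result
delv example.com A
```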

View File

@ -0,0 +1,71 @@
# Mail Server DNS Records Cheat-Sheet
If you want to run a mail server on the public internet, you need to set up your [DNS Records](networking/dns-record-types.md) correctly. While some [DNS Records](networking/dns-record-types.md) are necessary to send and receive emails, others are recommended to build a good reputation.
## Required Mail Server DNS Records
### A Record
An A record that resolves to the public IP address of your mail server. A separate record is needed whenever your web server has a different IP address than your mail server.
**Recommended Settings Example:**
Type | Host | Points to | TTL
---|---|---|---
`A`|`mail`|`your-mail-servers-ipv4`|`1 hour`
### MX Record
The MX record is important when you want to receive emails. It tells everyone which host to contact for mail delivery.
If you have multiple mail servers that need to be load-balanced, use the same **priority**. Lower numbers are tried first; higher numbers can be used for backup servers.
**Recommended Settings:**
Type | Host | Points to | Priority | TTL
---|---|---|---|---
`MX`|`@`|`mail.your-domain`|`0`|`1 hour`
### RDNS or PTR Record
The reverse DNS record, also called PTR (Pointer Resource Record), is important when you want to send mail. Almost all mail servers check the RDNS record as a simple anti-spam measure. An RDNS lookup is like a normal DNS query, just backward: it resolves an IP address to a hostname.
>Your RDNS record is not configured on your DNS server; instead, it's configured at the hosting provider that assigned your public IP address.
## (Optional but recommended) DNS Records
### SPF Record
The SPF (Sender Policy Framework) record is a TXT record on your DNS server that specifies which hosts are allowed to send mail for a given domain. When a mail server receives a message that appears to come from your domain, it can check whether the message is valid. Some mail servers reject mail if they can't validate that it comes from an authorized mail server.
**Recommended Settings:**
Type | Host | TXT Value | TTL
---|---|---|---
`TXT`|`@`|`v=spf1 ip4:your-mail-servers-ipv4 -all`|`1 hour`
### DKIM Record
DKIM (Domain Keys Identified Mail) allows the receiving mail server to check that an email was indeed sent by the owner of that domain. The sending mail server adds a digital signature to every mail that is sent. This signature is added as a header and secured with encryption. These signatures are not visible to the end-user.
>If you want to add DKIM to your mail server, you first need to create a private and public keypair.
We use the tool [OpenSSL](tools/openssl.md) to generate a DKIM private and public keypair.
```sh
# Generate a 2048-bit RSA private key for signing outgoing mail
openssl genrsa -out dkim_private.pem 2048
# Print the matching public key base64-encoded for the DKIM TXT record
openssl rsa -in dkim_private.pem -pubout -outform der 2>/dev/null | openssl base64 -A
```
**Recommended Settings:**
Type | Host | TXT Value | TTL
---|---|---|---
`TXT`|`dkim._domainkey`|`v=DKIM1;k=rsa;p=public-dkim-key`|`1 hour`
### DMARC Record
DMARC (Domain-based Message Authentication, Reporting, and Conformance) extends your existing SPF and DKIM records. It makes sure that the sender's emails are protected by SPF and DKIM and tells the receiving mail server what to do if these checks fail.
**Recommended Settings:**
Type | Host | TXT Value | TTL
---|---|---|---
`TXT`|`_dmarc`|`v=DMARC1;p=quarantine`|`1 hour`
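Once published, the records can be checked from any machine (a sketch; substitute your own domain and DKIM selector):
```sh
# Check the published SPF, DKIM, and DMARC records
dig TXT example.com +short
dig TXT dkim._domainkey.example.com +short
dig TXT _dmarc.example.com +short
```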
## (Optional) DNS Records
### Autoconfiguration DNS Records
If you're using mail clients like Outlook or Thunderbird on your computer or mobile devices, they offer an "autoconfiguration" mechanism, also called "autodiscover". That means you just need to enter your email address and password, and the mail client tries to resolve the mail server addresses, ports, and encryption settings for IMAP and SMTP. You can achieve this by adding the SRV DNS records defined in the [RFC 6186 standard](https://tools.ietf.org/html/rfc6186), plus some vendor-specific records used by Outlook clients; see the zone-file sketch below.
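A minimal set of RFC 6186 service records in zone-file syntax (hostnames and ports are examples for a typical setup):
```txt
; IMAP over TLS, message submission, and POP3 over TLS
_imaps._tcp       SRV 0 1 993 mail.example.com.
_submission._tcp  SRV 0 1 587 mail.example.com.
_pop3s._tcp       SRV 0 1 995 mail.example.com.
```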

View File

@ -0,0 +1,39 @@
# DNS Record Types
[[DNS]] (Domain Name System) record types are used to store different types of information about a domain name in the DNS database.
## Most common types of DNS Records
| Type | Description |
| ----- | -------------------------------------------------------------------------------------------------------------- |
| A | The record that holds the IP address of a domain. |
| AAAA | The record that contains the IPv6 address for a domain (as opposed to A records, which list the IPv4 address). |
| CNAME | Forwards one domain or subdomain to another domain, does NOT provide an IP address. |
| MX | Directs mail to an email server. |
| TXT | Lets an admin store text notes in the record. These records are often used for email security. |
| NS | Stores the name server for a DNS entry. |
| SOA | Stores admin information about a domain. |
| SRV | Specifies a port for specific services. |
| PTR | Provides a domain name in reverse-lookups. |
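Most of these can be inspected with `dig` (a sketch; `example.com` is a placeholder):
```sh
# Query individual record types for a domain
dig A example.com +short
dig MX example.com +short
dig TXT example.com +short
```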
## Less commonly used DNS Records
| Type | Description |
| -------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| APL      | The address prefix list is an experimental record that specifies lists of address ranges. |
| AFSDB    | This record is used for clients of the Andrew File System (AFS) developed by Carnegie Mellon. The AFSDB record functions to find other AFS cells. |
| CAA      | This is the certification authority authorization record; it allows domain owners to state which certificate authorities can issue certificates for that domain. If no CAA record exists, then anyone can issue a certificate for the domain. These records are also inherited by subdomains. |
| DNSKEY | The DNS Key Record contains a public key used to verify Domain Name System Security Extension (DNSSEC) signatures. |
| CDNSKEY | This is a child copy of the DNSKEY record, meant to be transferred to a parent. |
| CERT | The certificate record stores public key certificates. |
| DHCID    | The DHCP identifier record stores info for the Dynamic Host Configuration Protocol (DHCP), a standardized network protocol used on IP networks. |
| DNAME | The delegation name record creates a domain alias, just like CNAME, but this alias will redirect all subdomains as well. For instance if the owner of example.com bought the domain website.net and gave it a DNAME record that points to example.com, then that pointer would also extend to blog.website.net and any other subdomains. |
| HIP | This record uses Host identity protocol, a way to separate the roles of an IP address; this record is used most often in mobile computing. |
| IPSECKEY | The IPSEC key record works with the Internet Protocol Security (IPSEC), an end-to-end security protocol framework and part of the Internet Protocol Suite (TCP/IP). |
| LOC | The location record contains geographical information for a domain in the form of longitude and latitude coordinates. |
| NAPTR | The name authority pointer record can be combined with an SRV record to dynamically create URIs to point to based on a regular expression. |
| NSEC     | The next secure record is part of DNSSEC, and it's used to prove that a requested DNS resource record does not exist. |
| RRSIG | The resource record signature is a record to store digital signatures used to authenticate records in accordance with DNSSEC. |
| RP | This is the responsible person record and it stores the email address of the person responsible for the domain. |
| SSHFP    | This record stores SSH public key fingerprints; SSH stands for Secure Shell, a cryptographic networking protocol for secure communication over an insecure network. |

70
networking/dns/dns.md Normal file
View File

@ -0,0 +1,70 @@
# DNS
DNS (Domain Name System) is a hierarchical distributed naming system used to translate human-readable domain names, such as `www.example.com`, into [[IP]] (Internet Protocol) addresses, such as 192.0.2.1, that computers use to identify each other on the Internet. DNS allows users to access websites and other Internet resources using easy-to-remember domain names instead of having to remember the numerical IP addresses that correspond to them.
## How DNS works
DNS operates using a client-server architecture. When a user types a domain name into their web browser, the browser sends a DNS query to a DNS resolver, which is typically provided by the user's Internet Service Provider (ISP). The resolver then queries a series of other DNS servers, working its way up the DNS hierarchy until it receives a response containing the IP address associated with the requested domain name.
DNS is organized into a hierarchical structure of domains, with the root domain at the top of the hierarchy. Each domain is divided into subdomains, with each level of the hierarchy separated by a dot (e.g., example.com is a subdomain of the com top-level domain). Each domain is managed by a domain name registrar, which is responsible for assigning domain names and IP addresses to organizations and individuals. DNS also supports advanced features such as [DNSSEC](dns-dnssec.md) (DNS Security Extensions), which provides authentication and integrity checking for DNS queries and responses.
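You can watch this hierarchy being walked with a standard tool (a sketch; `example.com` is a placeholder):
```sh
# Follow the delegation chain from the root servers down to the
# zone's authoritative name server
dig +trace example.com
```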
## DNS Record Types
DNS records are an essential part of the DNS system, as they contain the information needed to translate domain names into IP addresses and vice versa. Each DNS record contains a specific type of information about a domain name, such as its IP address, mail exchange server, or authoritative name servers.
There are many different types of DNS records, each with a specific format and purpose. Some of the most commonly used DNS record types include A, AAAA, CNAME, MX, NS, PTR, SOA, SRV, and TXT records. Each record type has a specific format and purpose, and is used to provide different types of information about a domain name.
Here's a [[dns-record-types|List of all DNS Record Types]].
## Encryption
Ever since DNS was created in 1987, it has been largely unencrypted. Everyone between your device and the resolver is able to snoop on or even modify your DNS queries and responses.
Unencrypted **DNS** traditionally travels over [UDP](../networking/udp.md) port 53, so anyone who can observe traffic on that port can read the queries and answers in the payload.
Encrypting DNS makes it much harder for snoopers to look into your **DNS** messages, or to corrupt them in transit.
Two standardized mechanisms exist to secure the **DNS** transport between you and the resolver, [DNS over TLS](dns-dot.md), and [DNS queries over HTTPS](dns-doh.md).
Both are based on Transport Layer Security ([TLS](../networking/tls.md)) which is also used to secure communication between you and a website using [HTTPS](../networking/https.md).
As both DoT and DoH are relatively new, they are not universally deployed yet.
### DNS over HTTPS
DNS over HTTPS, or DoH, is an alternative to DoT. With DoH, DNS queries and responses are encrypted, but they are sent via the HTTP or HTTP/2 protocols instead of directly over UDP.
Like DoT, DoH ensures that attackers can't forge or alter DNS traffic. From a network administrator's perspective, DoH traffic looks like any other HTTPS traffic, e.g. normal user-driven interactions with websites and web apps.
```txt
┌─────────────────┐ ──┐
│ HTTP Protocol   │   │ encrypted
├─────────────────┤ ├── traffic
│ TLS Protocol    │   │ via HTTPS
├─────────────────┤ ──┘
│ TCP Protocol │
│ (Port 443) │
├─────────────────┤
│ IP Protocol │
└─────────────────┘
GET/POST
url/dns-request?dns-...
```
### DNS over TLS
DNS over TLS, or DoT, is a standard for encrypting DNS queries to keep them secure and private. DoT uses the same security protocol, TLS, that HTTPS websites use to encrypt and authenticate communications. (TLS is also known as "SSL.") DoT wraps DNS messages in a TLS session carried over TCP and uses its own well-known port, 853.
```txt
┌─────────────────┐ ──┐
│ DNS Protocol    │   │ encrypted
├─────────────────┤ ├── traffic
│ TLS Protocol    │   │ via TLS
├─────────────────┤ ──┘
│ TCP Protocol    │
│ (Port 853)      │
├─────────────────┤
│ IP Protocol │
└─────────────────┘
```
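Both transports can be exercised from the command line (a sketch; `kdig` ships with knot-dnsutils, and `1.1.1.1` / `cloudflare-dns.com` are example resolvers):
```sh
# DNS over TLS: kdig wraps the query in a TLS session on port 853
kdig @1.1.1.1 +tls example.com A

# DNS over HTTPS: query Cloudflare's JSON API over plain HTTPS
curl -s -H 'accept: application/dns-json' \
  'https://cloudflare-dns.com/dns-query?name=example.com&type=A'
```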

View File

@ -0,0 +1,26 @@
# EHLO Response Codes
**EHLO Response Codes** are used by an **SMTP server** in response to an **EHLO command** issued by an **SMTP client**.
> Please note that the specific **EHLO response codes** you see will depend on the **SMTP server software**, its version, and its configuration. The table below includes some commonly encountered **EHLO response codes**, but it may not cover every possible code or extension.
| EHLO Response Code | Description |
| --- | --- |
| 250 | Requested mail action okay, completed |
| 250-PIPELINING | Server supports command pipelining |
| 250-SIZE `<value>` | Server specifies maximum message size |
| 250-ETRN | Server supports the ETRN extension |
| 250-ENHANCEDSTATUSCODES | Server uses enhanced status codes |
| 250-8BITMIME | Server supports the 8BITMIME extension |
| 250-DSN | Server supports delivery status notifications (DSN) |
| 250-STARTTLS | Server supports TLS encryption |
| 250-AUTH `<authentication_types>` | Server specifies supported authentication types |
| 250-DELIVERBY | Server supports the DELIVERBY extension |
| 250-RSET | Server supports the RSET command |
| 250-HELP | Server provides help information |
| 250-BINARYMIME | Server supports binary MIME (Multipurpose Internet Mail Extensions) |
| 250-CHUNKING | Server supports chunking for message transmission |
| 250-EXPN | Server supports the EXPN command |
| 250-VRFY | Server supports the VRFY command |
| 250-X-EXPS `<extension>` | Server supports an additional extension |
| 250 X-LINK2STATE | Server provides link-related state information |
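What a typical exchange looks like (a hypothetical session; hostnames and the advertised extension set are placeholders):
```txt
C: EHLO client.example.com
S: 250-mail.example.com Hello client.example.com
S: 250-PIPELINING
S: 250-SIZE 10240000
S: 250-STARTTLS
S: 250-ENHANCEDSTATUSCODES
S: 250 HELP
```
Note that every line of a multiline reply except the last uses `250-`; the final line uses `250` followed by a space.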

View File

@ -0,0 +1,3 @@
# IMAP (Internet Message Access Protocol)
**IMAP** is a protocol for retrieving mail from a **mail server**. Unlike POP3, IMAP keeps messages on the server and synchronizes their state across multiple clients. It typically listens on port 143 (with STARTTLS) or 993 (implicit TLS).

View File

@ -0,0 +1,3 @@
# POP3 (Post Office Protocol version 3)
**POP3** is a protocol for retrieving mail from a **mail server**. A POP3 client typically downloads messages to a single device and, by default, removes them from the server. It typically listens on port 110 (with STARTTLS) or 995 (implicit TLS).

Some files were not shown because too many files have changed in this diff.