2024-04-03 22:04:13 +02:00
parent 7e68609006
commit 0b373d31db
142 changed files with 7334 additions and 0 deletions

`infra/proxmox-api.md`
# Proxmox API Authentication
## Create an API Token on Proxmox
To create a new API Token for your `user` in Proxmox, follow these steps (a CLI alternative is sketched after the list):
1. Open the Proxmox Web UI and navigate to the **'Datacenter'** in the **'Server View'** menu.
2. Select **'API Token'** in the **'Permissions'** menu.
3. Click on **'Add'**.
4. Select the `user`, and add a **'Token ID'**.
5. (Optional) Disable `Privilege Separation`, and set an `Expire` date.
6. Click on **'Add'**.
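Alternatively, a token can be created on the CLI with `pveum`. A minimal sketch, assuming the user `root@pam` and the token name `monitoring` used in the example below:
```sh
# CLI sketch (assumed user and token ID); prints the token secret exactly once
pveum user token add root@pam monitoring --privsep 0
```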
## Check the API Token
To test the API Token, you can use the following command:
```sh
curl -H "Authorization: PVEAPIToken=root@pam!monitoring=aaaaaaaaa-bbb-cccc-dddd-ef0123456789" https://your-proxmox-url:8006/api2/json/
```
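The base path only confirms that authentication works. To verify that the token can actually read cluster resources, you could, for example, query the `nodes` endpoint with the same placeholder token:
```sh
# same placeholder token as above; returns the node list if the token has sufficient permissions
curl -H "Authorization: PVEAPIToken=root@pam!monitoring=aaaaaaaaa-bbb-cccc-dddd-ef0123456789" \
  https://your-proxmox-url:8006/api2/json/nodes
```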

`infra/proxmox-certificate-management.md`
# Proxmox Certificate Management
## Certificates for Intra-Cluster Communication
Each Proxmox VE cluster creates by default its own (self-signed) Certificate
Authority (CA) and generates a certificate for each node which gets signed by
the aforementioned CA. These certificates are used for encrypted communication
with the cluster's pveproxy service and the Shell/Console feature if SPICE is used.
The CA certificate and key are stored in the Proxmox Cluster File System (pmxcfs).
## Certificates for API and Web GUI
The REST API and web GUI are provided by the pveproxy service, which runs on each node.
You have the following options for the certificate used by pveproxy:
* By default the node-specific certificate in `/etc/pve/nodes/NODENAME/pve-ssl.pem` is used. This certificate is signed by the cluster CA and therefore not automatically trusted by browsers and operating systems.
* Use an externally provided certificate (e.g. signed by a commercial CA).
* Use ACME (Let's Encrypt) to get a trusted certificate with automatic renewal; this is also integrated into the Proxmox VE API and web interface.
For the last two options, the file `/etc/pve/local/pveproxy-ssl.pem` (and
`/etc/pve/local/pveproxy-ssl.key`, which must not be password protected) is used.
Keep in mind that `/etc/pve/local` is a node specific symlink to
`/etc/pve/nodes/NODENAME`.
Certificates are managed with the Proxmox VE Node management command
(see the `pvenode(1)` manpage).
Do not replace or manually modify the automatically generated node
certificate files in `/etc/pve/local/pve-ssl.pem` and
`/etc/pve/local/pve-ssl.key` or the cluster CA files in
`/etc/pve/pve-root-ca.pem` and `/etc/pve/priv/pve-root-ca.key`.
## Upload Custom Certificate
If you already have a certificate which you want to use for a Proxmox VE node,
you can simply upload that certificate over the web interface. Note that the
certificate's key file, if provided, must not be password protected.
## Trusted certificates via Let's Encrypt (ACME)
Proxmox VE includes an implementation of the Automatic Certificate
Management Environment (ACME) protocol, allowing Proxmox VE admins to
use an ACME provider like Let's Encrypt for easy setup of TLS certificates
which are accepted and trusted on modern operating systems and web browsers
out of the box.
Currently, the two ACME endpoints implemented are the
Let's Encrypt (LE) production and its staging
environment. Our ACME client supports validation of http-01 challenges using
a built-in web server and validation of dns-01 challenges using a DNS plugin
supporting all the DNS API endpoints `acme.sh` does.
### ACME Account
You need to register an ACME account per cluster with the endpoint you want to
use. The email address used for that account will serve as contact point for
renewal-due or similar notifications from the ACME endpoint.
You can register and deactivate ACME accounts over the web interface
`Datacenter -> ACME` or using the `pvenode` command line tool.
```shell
pvenode acme account register account-name mail@example.com
```
Because of rate limits, you should use the LE staging environment for experiments or
when using ACME for the first time.
### ACME Plugins
The ACME plugin's task is to provide automatic verification that you, and thus
the Proxmox VE cluster under your operation, are the real owner of a domain. This is
the basic building block for automatic certificate management.
The ACME protocol specifies different types of challenges, for example the
http-01 where a web server provides a file with a certain content to prove
that it controls a domain. Sometimes this isn't possible, either because of
technical limitations or because the address of a record is not reachable from
the public internet. The dns-01 challenge can be used in these cases. This
challenge is fulfilled by creating a certain DNS record in the domain's zone.
Proxmox VE supports both of those challenge types out of the box; you can configure
plugins either over the web interface under `Datacenter -> ACME`, or using the
`pvenode acme plugin add` command.
ACME Plugin configurations are stored in `/etc/pve/priv/acme/plugins.cfg`.
A plugin is available for all nodes in the cluster.
### Node Domains
Each domain is node specific. You can add new or manage existing domain entries
under Node -> Certificates, or using the `pvenode config` command.
After configuring the desired domain(s) for a node and ensuring that the
desired ACME account is selected, you can order your new certificate over the
web interface. On success, the interface will reload after 10 seconds.
Renewal will happen automatically.
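As a minimal CLI sketch (the hostname is an assumption), a domain can be configured and the certificate ordered with `pvenode`:
```sh
# assumed hostname; configure the ACME domain for this node and order the certificate
pvenode config set --acme domains=pve1.example.com
pvenode acme cert order
```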
## ACME HTTP Challenge Plugin
There is always an implicitly configured standalone plugin for validating
http-01 challenges via the built-in webserver spawned on port 80.
The name `standalone` means that it can provide the validation on its
own, without any third-party service. So this plugin also works for cluster
nodes.
There are a few prerequisites to use it for certificate management with Let's
Encrypt's ACME.
* You have to accept the ToS of Let's Encrypt to register an account.
* Port 80 of the node needs to be reachable from the internet.
* There must be no other listener on port 80 (a quick check is sketched below).
* The requested (sub)domain needs to resolve to a public IP of the node.
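A quick local pre-flight sketch for the port and DNS prerequisites (the hostname is an assumption):
```sh
# nothing else may listen on port 80, and the (sub)domain must resolve to this node's public IP
ss -tlnp 'sport = :80'
dig +short pve1.example.com
```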
## ACME DNS API Challenge Plugin
On systems where external access for validation via the http-01 method is
not possible or desired, it is possible to use the dns-01 validation method.
This validation method requires a DNS server that allows provisioning of TXT
records via an API.
### Configuring ACME DNS APIs for validation
Proxmox VE re-uses the DNS plugins developed for the
[acme.sh project](https://github.com/acmesh-official/acme.sh). Please
refer to its documentation for details on configuration of specific APIs.
The easiest way to configure a new plugin with the DNS API is using the web
interface (`Datacenter -> ACME`).
Choose DNS as the challenge type. Then you can select your API provider and enter
the credential data to access your account over their API.
See the `acme.sh`
[How to use DNS API](https://github.com/acmesh-official/acme.sh/wiki/dnsapi#how-to-use-dns-api)
wiki for more detailed information about getting API credentials for your
provider.
As there are many DNS providers and API endpoints, Proxmox VE automatically generates
the form for the credentials for some providers. For the others you will see a
larger text area; simply copy all the credential KEY=VALUE pairs into it.
### DNS Validation through CNAME Alias
A special alias mode can be used to handle the validation on a different
domain/DNS server, in case your primary/real DNS does not support provisioning
via an API. Manually set up a permanent CNAME record for
`_acme-challenge.domain1.example` pointing to `_acme-challenge.domain2.example`
and set the alias property in the Proxmox VE node configuration file to
`domain2.example` to allow the DNS server of `domain2.example` to validate all
challenges for `domain1.example`.
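A minimal sketch of this setup, using the example domains above (the plugin name is an assumption):
```sh
# DNS zone of domain1.example (zone-file syntax):
#   _acme-challenge.domain1.example. IN CNAME _acme-challenge.domain2.example.
# node configuration: delegate validation to the DNS plugin handling domain2.example
pvenode config set --acmedomain0 domain1.example,plugin=example_plugin,alias=domain2.example
```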
### Combination of Plugins
Combining http-01 and dns-01 validation is possible in case your node is
reachable via multiple domains with different requirements / DNS provisioning
capabilities. Mixing DNS APIs from multiple providers or instances is also
possible by specifying different plugin instances per domain.
Accessing the same service over multiple domains increases complexity and
should be avoided if possible.
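A sketch of such a mixed setup (domains and the plugin instance are assumptions): the first domain uses the built-in standalone http-01 plugin, the second a DNS API plugin instance.
```sh
# acmedomain0 without a plugin falls back to the standalone http-01 plugin;
# acmedomain1 is validated via the dns-01 plugin instance 'example_plugin'
pvenode config set --acmedomain0 pve1.example.com --acmedomain1 pve1.internal.example,plugin=example_plugin
```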
## Automatic renewal of ACME certificates
If a node has been successfully configured with an ACME-provided certificate
(either via `pvenode` or via the GUI), the certificate will be automatically
renewed by the `pve-daily-update.service`. Currently, renewal will be attempted
if the certificate has expired already, or will expire in the next 30 days.
## ACME Examples with pvenode
*Example*: Sample `pvenode` invocation for using Let's Encrypt certificates
```sh
root@proxmox:~# pvenode acme account register default mail@example.invalid
Directory endpoints:
0) Let's Encrypt V2 (https://acme-v02.api.letsencrypt.org/directory)
1) Let's Encrypt V2 Staging (https://acme-staging-v02.api.letsencrypt.org/directory)
2) Custom
Enter selection: 1
Terms of Service: https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf
Do you agree to the above terms? [y|N]y
...
Task OK
root@proxmox:~# pvenode config set --acme domains=example.invalid
root@proxmox:~# pvenode acme cert order
Loading ACME account details
Placing ACME order
...
Status is 'valid'!
All domains validated!
...
Downloading certificate
Setting pveproxy certificate and key
Restarting pveproxy
Task OK
```
*Example*: Setting up the OVH API for validating a domain
The account registration steps are the same no matter which plugins are
used, and are not repeated here.
`OVH_AK` and `OVH_AS` need to be obtained from OVH according to the OVH
API documentation.
First you need to gather all the information required so that you and Proxmox VE can access the API.
```sh
root@proxmox:~# cat /path/to/api-token
OVH_AK=XXXXXXXXXXXXXXXX
OVH_AS=YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
root@proxmox:~# source /path/to/api-token
root@proxmox:~# curl -XPOST -H"X-Ovh-Application: $OVH_AK" -H "Content-type: application/json" \
https://eu.api.ovh.com/1.0/auth/credential -d '{
"accessRules": [
{"method": "GET","path": "/auth/time"},
{"method": "GET","path": "/domain"},
{"method": "GET","path": "/domain/zone/*"},
{"method": "GET","path": "/domain/zone/*/record"},
{"method": "POST","path": "/domain/zone/*/record"},
{"method": "POST","path": "/domain/zone/*/refresh"},
{"method": "PUT","path": "/domain/zone/*/record/"},
{"method": "DELETE","path": "/domain/zone/*/record/*"}
]
}'
{"consumerKey":"ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ","state":"pendingValidation","validationUrl":"https://eu.api.ovh.com/auth/?credentialToken=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"}
(open validation URL and follow instructions to link Application Key with account/Consumer Key)
root@proxmox:~# echo "OVH_CK=ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ" >> /path/to/api-token
```
Now you can set up the ACME plugin:
```sh
root@proxmox:~# pvenode acme plugin add dns example_plugin --api ovh --data /path/to/api-token
root@proxmox:~# pvenode acme plugin config example_plugin
┌────────┬──────────────────────────────────────────┐
│ key    │ value                                    │
╞════════╪══════════════════════════════════════════╡
│ api    │ ovh                                      │
├────────┼──────────────────────────────────────────┤
│ data   │ OVH_AK=XXXXXXXXXXXXXXXX                  │
│        │ OVH_AS=YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY  │
│        │ OVH_CK=ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ  │
├────────┼──────────────────────────────────────────┤
│ digest │ 867fcf556363ca1bea866863093fcab83edf47a1 │
├────────┼──────────────────────────────────────────┤
│ plugin │ example_plugin                           │
├────────┼──────────────────────────────────────────┤
│ type   │ dns                                      │
└────────┴──────────────────────────────────────────┘
```
Finally, you can configure the domain you want to get certificates for and
place the certificate order for it:
```sh
root@proxmox:~# pvenode config set -acmedomain0 example.proxmox.com,plugin=example_plugin
root@proxmox:~# pvenode acme cert order
Loading ACME account details
Placing ACME order
Order URL: https://acme-staging-v02.api.letsencrypt.org/acme/order/11111111/22222222
Getting authorization details from 'https://acme-staging-v02.api.letsencrypt.org/acme/authz-v3/33333333'
The validation for example.proxmox.com is pending!
[Wed Apr 22 09:25:30 CEST 2020] Using OVH endpoint: ovh-eu
[Wed Apr 22 09:25:30 CEST 2020] Checking authentication
[Wed Apr 22 09:25:30 CEST 2020] Consumer key is ok.
[Wed Apr 22 09:25:31 CEST 2020] Adding record
[Wed Apr 22 09:25:32 CEST 2020] Added, sleep 10 seconds.
Add TXT record: _acme-challenge.example.proxmox.com
Triggering validation
Sleeping for 5 seconds
Status is 'valid'!
[Wed Apr 22 09:25:48 CEST 2020] Using OVH endpoint: ovh-eu
[Wed Apr 22 09:25:48 CEST 2020] Checking authentication
[Wed Apr 22 09:25:48 CEST 2020] Consumer key is ok.
Remove TXT record: _acme-challenge.example.proxmox.com
All domains validated!
Creating CSR
Checking order status
Order is ready, finalizing order
valid!
Downloading certificate
Setting pveproxy certificate and key
Restarting pveproxy
Task OK
```
### Example: Switching from the staging to the regular ACME directory
Changing the ACME directory for an account is unsupported, but as Proxmox VE
supports more than one account you can just create a new one with the
production (trusted) ACME directory as endpoint. You can also deactivate the
staging account and recreate it.
*Example*: Changing the default ACME account from the staging to the regular directory using `pvenode`
```sh
root@proxmox:~# pvenode acme account deactivate default
Renaming account file from '/etc/pve/priv/acme/default' to '/etc/pve/priv/acme/_deactivated_default_4'
Task OK
root@proxmox:~# pvenode acme account register default example@proxmox.com
Directory endpoints:
0) Let's Encrypt V2 (https://acme-v02.api.letsencrypt.org/directory)
1) Let's Encrypt V2 Staging (https://acme-staging-v02.api.letsencrypt.org/directory)
2) Custom
Enter selection: 0
Terms of Service: https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf
Do you agree to the above terms? [y|N]y
...
Task OK
```

# Proxmox Terraform Integration
You can use [Terraform](tools/terraform.md) to automate certain tasks on [Proxmox](infra/proxmox.md). This allows you to manage virtual machines and LXC containers with infrastructure as code. We're using the third-party plugin [telmate/terraform-provider-proxmox](https://github.com/Telmate/terraform-provider-proxmox).
## Authenticate to Proxmox
### Create an API Token on Proxmox
To create a new API Token for your `user` in Proxmox, follow the steps described in [Proxmox API Authentication](proxmox-api.md).
### Add Provider config to Terraform
```hcl
terraform {
required_version = ">= 0.13.0"
required_providers {
proxmox = {
source = "telmate/proxmox"
version = ">=2.9.14"
}
}
}
```
```hcl
variable "PROXMOX_URL" {
type = string
}
variable "PROXMOX_USER" {
type = string
}
variable "PROXMOX_TOKEN" {
type = string
sensitive = true
}
provider "proxmox" {
pm_api_url = var.PROXMOX_URL
pm_api_token_id = var.PROXMOX_USER
pm_api_token_secret = var.PROXMOX_TOKEN
pm_tls_insecure = false
}
```
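One way to supply these variables is via `TF_VAR_`-prefixed environment variables, which Terraform picks up automatically. The values below are placeholders reusing the API token from [Proxmox API Authentication](proxmox-api.md):
```sh
# placeholder values; the token ID/secret come from the API token created earlier
export TF_VAR_PROXMOX_URL="https://your-proxmox-url:8006/api2/json"
export TF_VAR_PROXMOX_USER="root@pam!monitoring"
export TF_VAR_PROXMOX_TOKEN="aaaaaaaaa-bbb-cccc-dddd-ef0123456789"
terraform init
terraform plan
```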
## Templates
WIP
## Useful commands
### Import existing virtual machines to Terraform
Existing virtual machines can be imported into the Terraform state with the following command. Make sure you have created a corresponding resource in your Terraform files (see the sketch after the examples below).
```sh
terraform import <resourcetype.resourcename> <id>
```
With telmate/terraform-provider-proxmox, the ID needs to follow the pattern `<node>/<type>/<vmid>`, as in the following example.
```sh
terraform import proxmox_vm_qemu.srv-prod-1 prx-prod-1/proxmox_vm_qemu/102
```
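For the import to have a target, a matching resource block must already exist. A hypothetical minimal sketch for the example above (the attribute values are assumptions):
```hcl
# hypothetical resource matching the import above; name, node and VM ID are assumptions
resource "proxmox_vm_qemu" "srv-prod-1" {
  name        = "srv-prod-1"
  target_node = "prx-prod-1"
  vmid        = 102
}
```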

`infra/proxmox.md`
# Proxmox Cheat-Sheet
Proxmox Virtual Environment (Proxmox VE or PVE) is an open-source, hyper-converged infrastructure platform. It is a hosted hypervisor that can run operating systems, including Linux and Windows, on x64 hardware. It is a Debian-based Linux distribution with a modified Ubuntu LTS kernel and allows the deployment and management of virtual machines and containers. Proxmox VE includes a web-based management interface and command-line tools, and provides a REST API for third-party tools. Two types of virtualization are supported: container-based virtualization with LXC (since version 4.0, replacing OpenVZ, which was used up to version 3.4) and full virtualization with KVM.
Proxmox VE is licensed under the GNU Affero General Public License, version 3.
Repository: [https://git.proxmox.com](https://git.proxmox.com)
Website: [https://pve.proxmox.com](https://pve.proxmox.com)
## VM Management
| Command | Command Description |
|---|---|
| `qm list` | list VMs |
| `qm create VM_ID` | Create or restore a virtual machine. |
| `qm start VM_ID` | Start a VM |
| `qm suspend VM_ID` | Suspend virtual machine. |
| `qm shutdown VM_ID` | Shutdown a VM |
| `qm reboot VM_ID` | Reboot a VM |
| `qm reset VM_ID` | Reset a VM |
| `qm stop VM_ID` | Stop a VM |
| `qm destroy VM_ID` | Destroy the VM and all used/owned volumes. |
| `qm monitor VM_ID` | Enter Qemu Monitor interface. |
| `qm pending VM_ID` | Get the virtual machine configuration with both current and pending values. |
| `qm sendkey VM_ID YOUR_KEY_EVENT [OPTIONS]` | Send key event to virtual machine. |
| `qm showcmd VM_ID [OPTIONS]` | Show command line used to start the VM (debug info). |
| `qm unlock VM_ID` | Unlock the VM |
| `qm clone VM_ID NEW_VM_ID` | Clone a VM |
| `qm migrate VM_ID TARGET_NODE` | Migrate a VM |
| `qm status VM_ID` | Show VM status |
| `qm cleanup VM_ID CLEAN_SHUTDOWN GUEST_REQUESTED` | Clean up resources for a VM |
| `qm template VM_ID [OPTIONS]` | Create a Template |
| `qm set VM_ID [OPTIONS]` | Set virtual machine options (synchronous API) |
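A short example combining some of these commands (VM IDs and options are assumptions):
```shell
# assumed VM IDs and options
qm create 100 --name demo-vm --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0
qm start 100
qm clone 100 101 --name demo-clone
qm status 101
```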
### Cloudinit
| Command | Command Description |
|---|---|
| `qm cloudinit dump VM_ID VM_TYPE` | Get automatically generated cloudinit config. |
| `qm cloudinit pending VM_ID` | Get the cloudinit configuration with both current and pending values. |
| `qm cloudinit update VM_ID` | Regenerate and change cloudinit config drive. |
### Disk
| Command | Command Description |
|---|---|
| `qm disk import VM_ID SOURCE TARGET_STORAGE` | Import an external disk image as an unused disk in a VM. |
| `qm disk move VM_ID VM_DISK [STORAGE] [OPTIONS]` | Move volume to different storage or to a different VM. |
| `qm disk rescan [OPTIONS]` | Rescan all storages and update disk sizes and unused disk images. |
| `qm disk resize VM_ID VM_DISK SIZE [OPTIONS]` | Extend volume size. |
| `qm disk unlink VM_ID --IDLIST STRING [OPTIONS]` | Unlink/delete disk images. |
| `qm rescan` | Rescan volumes. |
### Snapshot
| Command | Command Description |
|---|---|
| `qm listsnapshot VM_ID` | List all snapshots. |
| `qm snapshot VM_ID SNAPNAME` | Snapshot a VM. |
| `qm delsnapshot VM_ID SNAPNAME` | Delete a snapshot. |
| `qm rollback VM_ID SNAPNAME` | Rollback a snapshot. |
| `qm terminal VM_ID [OPTIONS]` | Open a terminal using a serial device. |
| `qm vncproxy VM_ID` | Proxy VM VNC traffic to stdin/stdout. |
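For example, a snapshot taken before an upgrade can be rolled back later (VM ID and snapshot name are assumptions):
```shell
# assumed VM ID and snapshot name
qm snapshot 100 pre-upgrade
qm listsnapshot 100
qm rollback 100 pre-upgrade
```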
### Misc
| Command | Command Description |
|---|---|
| `qm guest cmd VM_ID COMMAND` | Execute Qemu Guest Agent commands. |
| `qm guest exec VM_ID [EXTRA-ARGS] [OPTIONS]` | Executes the given command via the guest agent. |
| `qm guest exec-status VM_ID PID` | Gets the status of the given pid started by the guest-agent. |
| `qm guest passwd VM_ID USERNAME [OPTIONS]` | Sets the password for the given user to the given password. |
### PV, VG, LV Management
| Command | Command Description |
|---|---|
| `pvcreate DISK-DEVICE-NAME` | Create a PV |
| `pvremove DISK-DEVICE-NAME` | Remove a PV |
| `pvs` | List all PVs |
| `vgcreate VG-NAME DISK-DEVICE-NAME` | Create a VG |
| `vgremove VG-NAME` | Remove a VG |
| `vgs` | List all VGs |
| `lvcreate -L LV-SIZE -n LV-NAME VG-NAME` | Create a LV |
| `lvremove VG-NAME/LV-NAME` | Remove a LV |
| `lvs` | List all LVs |
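A typical sequence for preparing a new disk for VM storage (device and volume names are assumptions):
```shell
# assumed disk device and volume names
pvcreate /dev/sdb
vgcreate vmdata /dev/sdb
lvcreate -L 100G -n vm-store vmdata
lvs
```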
### Storage Management
| Command | Command Description |
|---|---|
| `pvesm add TYPE STORAGE [OPTIONS]` | Create a new storage |
| `pvesm alloc STORAGE your-vm-id FILENAME SIZE [OPTIONS]` | Allocate disk images |
| `pvesm free VOLUME [OPTIONS]` | Delete volume |
| `pvesm remove STORAGE` | Delete storage configuration |
| `pvesm list STORAGE [OPTIONS]` | List storage content |
| `pvesm lvmscan` | An alias for pvesm scan lvm |
| `pvesm lvmthinscan` | An alias for pvesm scan lvmthin |
| `pvesm scan lvm` | List local LVM volume groups |
| `pvesm scan lvmthin VG` | List local LVM Thin Pools |
| `pvesm status [OPTIONS]` | Get status for all datastores |
### Template Management
| Command | Command Description |
|---|---|
| `pveam available` | List all available appliance templates |
| `pveam list STORAGE` | List templates downloaded to a storage |
| `pveam download STORAGE TEMPLATE` | Download appliance templates |
| `pveam remove TEMPLATE-PATH` | Remove a template |
| `pveam update` | Update Container Template Database |
## Certificate Management
See the [Proxmox Certificate Management](proxmox-certificate-management.md) cheat sheet.
## Container Management
| Command | Command Description |
|---|---|
| `pct list` | List containers |
| `pct create YOUR-VM-ID OSTEMPLATE [OPTIONS]` | Create or restore a container |
| `pct start YOUR-VM-ID [OPTIONS]` | Start the container |
| `pct clone YOUR-VM-ID NEW-VM-ID [OPTIONS]` | Create a container clone/copy |
| `pct suspend YOUR-VM-ID` | Suspend the container. This is experimental. |
| `pct resume YOUR-VM-ID` | Resume the container |
| `pct stop YOUR-VM-ID [OPTIONS]` | Stop the container. This will abruptly stop all processes running in the container. |
| `pct shutdown YOUR-VM-ID [OPTIONS]` | Shutdown the container. This will trigger a clean shutdown of the container. |
| `pct destroy YOUR-VM-ID [OPTIONS]` | Destroy the container (also deletes all used files) |
| `pct status YOUR-VM-ID [OPTIONS]` | Show CT status |
| `pct migrate YOUR-VM-ID TARGET [OPTIONS]` | Migrate the container to another node. Creates a new migration task. |
| `pct config YOUR-VM-ID [OPTIONS]` | Get container configuration |
| `pct cpusets` | Print the list of assigned CPU sets |
| `pct pending YOUR-VM-ID` | Get container configuration, including pending changes |
| `pct reboot YOUR-VM-ID [OPTIONS]` | Reboot the container by shutting it down and starting it again. Applies pending changes. |
| `pct restore YOUR-VM-ID OSTEMPLATE [OPTIONS]` | Create or restore a container |
| `pct set YOUR-VM-ID [OPTIONS]` | Set container options |
| `pct template YOUR-VM-ID` | Create a Template |
| `pct unlock YOUR-VM-ID` | Unlock the VM |
### Container Disks
| Command | Command Description |
|---|---|
| `pct df YOUR-VM-ID` | Get the container's current disk usage |
| `pct fsck YOUR-VM-ID [OPTIONS]` | Run a filesystem check (fsck) on a container volume |
| `pct fstrim YOUR-VM-ID [OPTIONS]` | Run fstrim on a chosen CT and its mountpoints |
| `pct mount YOUR-VM-ID` | Mount the container's filesystem on the host |
| `pct move-volume YOUR-VM-ID VOLUME [STORAGE] [TARGET-VMID] [TARGET-VOLUME] [OPTIONS]` | Move a rootfs-/mp-volume to a different storage or to a different container |
| `pct unmount YOUR-VM-ID` | Unmount the container's filesystem |
| `pct resize YOUR-VM-ID YOUR-VM-DISK SIZE [OPTIONS]` | Resize a container mount point |
| `pct rescan [OPTIONS]` | Rescan all storages and update disk sizes and unused disk images |
| `pct enter YOUR-VM-ID` | Connect to container |
| `pct console YOUR-VM-ID [OPTIONS]` | Launch a console for the specified container |
| `pct exec YOUR-VM-ID [EXTRA-ARGS]` | Launch a command inside the specified container |
| `pct pull YOUR-VM-ID PATH DESTINATION [OPTIONS]` | Copy a file from the container to the local system |
| `pct push YOUR-VM-ID FILE DESTINATION [OPTIONS]` | Copy a local file to the container |
## Web GUI
```shell
# Restart web GUI
service pveproxy restart
```
## Resize Disk
### Increase disk size
Increase disk size in the GUI or with the following command
```shell
qm resize 100 virtio0 +5G
```
### Decrease disk size
Before decreasing disk sizes in Proxmox, you should take a backup!
1. Convert qcow2 to raw: `qemu-img convert vm-100.qcow2 vm-100.raw`
2. Shrink the disk: `qemu-img resize -f raw vm-100.raw 10G`
3. Convert back to qcow2: `qemu-img convert -p -O qcow2 vm-100.raw vm-100.qcow2`
## Further information
More examples and tutorials regarding Proxmox can be found in the link list below:
- Ansible playbook that automates Linux VM updates running on Proxmox (including snapshots): [TheDatabaseMe - update_proxmox_vm](https://github.com/thedatabaseme/update_proxmox_vm)
- Manage Proxmox VM templates with Packer: [Use Packer to build Proxmox images](https://thedatabaseme.de/2022/10/16/what-a-golden-boy-use-packer-to-build-proxmox-images/)

`infra/sophos-xg.md`
# Sophos XG

`infra/truenas-scale.md`
# TrueNAS Scale
WIP
---
## ACME
WIP
1. Create DNS Credentials
2. Create Signing Request
3. Configure email address for your current user (in case of root, info)
4. Create ACME Cert
5. Switch Admin Cert
---

`infra/zfs.md`
# ZFS
WIP
Reference: [Oracle Solaris ZFS Administration Guide](https://docs.oracle.com/cd/E19253-01/819-5461/index.html)
---
## Storage Pools
WIP
### Stripe
ZFS dynamically stripes data across all top-level virtual devices. The decision about where to place data is done at write time, so no fixed-width stripes are created at allocation time.
When new virtual devices are added to a pool, ZFS gradually allocates data to the new device in order to maintain performance and disk space allocation policies. Each virtual device can also be a mirror or a RAID-Z device that contains other disk devices or files. This configuration gives you flexibility in controlling the fault characteristics of your pool.
Although ZFS supports combining different types of virtual devices within the same pool, avoid this practice. For example, you can create a pool with a two-way mirror and a three-way RAID-Z configuration. However, your fault tolerance is as good as your worst virtual device, RAID-Z in this case. A best practice is to use top-level virtual devices of the same type with the same redundancy level in each device.
### Mirror
A mirrored storage pool configuration requires at least two disks, preferably on separate controllers. Many disks can be used in a mirrored configuration. In addition, you can create more than one mirror in each pool.
### Striped Mirror
Data is dynamically striped across both mirrors, with data being redundant between each disk appropriately.
Currently, the following operations are supported in a ZFS mirrored configuration:
- Adding another set of disks for an additional top-level virtual device (vdev) to an existing mirrored configuration.
- Attaching additional disks to an existing mirrored configuration. Or, attaching additional disks to a non-replicated configuration to create a mirrored configuration.
- Replacing a disk or disks in an existing mirrored configuration as long as the replacement disks are greater than or equal to the size of the device to be replaced.
- Detaching a disk in a mirrored configuration as long as the remaining devices provide adequate redundancy for the configuration.
- Splitting a mirrored configuration by detaching one of the disks to create a new, identical pool.
### RAID-Z
In addition to a mirrored storage pool configuration, **ZFS provides a RAID-Z configuration with either single-, double-, or triple-parity fault tolerance**. Single-parity RAID-Z (raidz or raidz1) is similar to RAID-5. Double-parity RAID-Z (raidz2) is similar to RAID-6.
A RAID-Z configuration with N disks of size X with P parity disks can hold approximately `(N-P)*X` bytes and can withstand P device(s) failing before data integrity is compromised. You need at least two disks for a single-parity RAID-Z configuration and at least three disks for a double-parity RAID-Z configuration. For example, if you have three disks in a single-parity RAID-Z configuration, parity data occupies disk space equal to one of the three disks. Otherwise, no special hardware is required to create a RAID-Z configuration.
If you are creating a RAID-Z configuration with many disks, consider splitting the disks into multiple groupings. For example, a RAID-Z configuration with 14 disks is better split into two 7-disk groupings. **RAID-Z configurations with single-digit groupings of disks should perform better.**
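A hedged sketch of creating the pool layouts described above (pool and device names are assumptions):
```bash
# device names are placeholders
zpool create tank mirror /dev/sda /dev/sdb              # two-way mirror
zpool create tank2 raidz /dev/sdc /dev/sdd /dev/sde     # single-parity RAID-Z (raidz1)
zpool status
```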
---
## Scrubbing
The simplest way to check data integrity is to initiate an explicit scrubbing of all data within the pool. This operation traverses all the data in the pool once and verifies that all blocks can be read. Scrubbing proceeds as fast as the devices allow, though the priority of any I/O remains below that of normal operations. This operation might negatively impact performance, though the pool's data should remain usable and nearly as responsive while the scrubbing occurs.
**Scrub ZFS Pool:**
```bash
zpool scrub POOLNAME
```
**Example:**
```bash
zpool status -v POOLNAME
  pool: store
 state: ONLINE
  scan: scrub in progress since Fri Nov 4 06:43:51 2022
        317G scanned at 52.9G/s, 1.09M issued at 186K/s, 3.41T total
        0B repaired, 0.00% done, no estimated completion time
```
---
## Resilvering
When a device is replaced, a resilvering operation is initiated to move data from the good copies to the new device. This action is a form of disk scrubbing. Therefore, only one such action can occur at a given time in the pool. If a scrubbing operation is in progress, a resilvering operation suspends the current scrubbing and restarts it after the resilvering is completed.
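A sketch of replacing a failed device, which triggers the resilvering described above (device names are assumptions):
```bash
# replace a failed device; resilvering starts automatically
zpool replace POOLNAME /dev/sdd /dev/sdf
zpool status -v POOLNAME
```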