
Proxmox does not boot anymore after update from 8 to 9 - black screen

Problem

After I updated my Asus/Intel NUC from Proxmox v8.4.14 to 9.1.6, it did not boot anymore (black screen). I had followed the instructions at https://pve.proxmox.com/wiki/Upgrade_from_8_to_9 and used the upgrade checker tool:
  1. Ran the upgrade checker pve8to9
    --> all green ✅
  2. Completed the update from 8.4.14 to 9.1.6 following the guide
    --> GUI & CLI showed the new version ✅
    --> VMs and containers continued to run normally ✅
  3. Rebooted
    --> does not boot anymore ❌
  4. A BIOS update of the Asus/Intel NUC did not help ❌


Solution

I found Dustin Rue's awesome blog entry: https://dustinrue.com/2025/12/recovering-from-a-failed-proxmox-upgrade/

1. Boot a live Linux from a USB stick (in my case Linux Mint)

2. In the live Linux terminal, run the following commands. The device names /dev/sda2 (EFI system partition) and /dev/mapper/pve-root match this system's default Proxmox LVM layout; see the note after this block on how to verify yours:

# mount the Proxmox root LV and the EFI system partition (adjust device names to your system)
mount /dev/mapper/pve-root /mnt
mount /dev/sda2 /mnt/boot/efi
# bind-mount the pseudo filesystems needed inside the chroot
mount --bind /dev /mnt/dev
mount --bind /sys /mnt/sys
mount --bind /proc /mnt/proc
mount --bind /dev/pts /mnt/dev/pts
# switch into the installed system
chroot /mnt
# inside the chroot: expose EFI variables, then reinstall GRUB for UEFI
mount -t efivarfs efivarfs /sys/firmware/efi/efivars
grub-install --target x86_64-efi --no-floppy --bootloader-id proxmox /dev/sda
grub-install --target x86_64-efi --no-floppy --bootloader-id proxmox --removable /dev/sda
# keep the removable EFI fallback path updated on future package upgrades
echo 'grub-efi-amd64 grub2/force_efi_extra_removable boolean true' | debconf-set-selections -v -u
apt install --reinstall grub-efi-amd64
update-initramfs -u -k all
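
Note: on other systems the root LV and the EFI system partition may be named differently. A quick way to check from the live system before mounting (a minimal sketch):

# list block devices with filesystem types; the ESP is the small vfat partition
lsblk -f
# activate LVM volume groups so the Proxmox root LV appears under /dev/mapper
vgchange -ay
lvs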

Once this finished, I exited the chroot and unmounted everything:

exit
umount /mnt/dev/pts
umount /mnt/proc
umount /mnt/dev
umount /mnt/sys/firmware/efi/efivars
umount /mnt/sys
umount /mnt/boot/efi
umount /mnt
reboot

Example

root@mint:~#
root@mint:~#
root@mint:~# mount /dev/mapper/pve-root /mnt
root@mint:~# mount /dev/sda2 /mnt/boot/efi/
root@mint:~# mount --bind /dev /mnt/dev
root@mint:~# mount --bind /sys /mnt/sys
root@mint:~# mount --bind /proc /mnt/proc
root@mint:~# mount --bind /dev/pts /mnt/dev/pts
root@mint:~#
root@mint:~# chroot /mnt
root@mint:/#
root@mint:/# mount -t efivarfs efivarfs /sys/firmware/efi/efivars/
root@mint:/#
root@mint:/# grub-install --target x86_64-efi --no-floppy --bootloader-id proxmox /dev/sda
Installing for x86_64-efi platform.
Installation finished. No error reported.
root@mint:/#
root@mint:/# grub-install --target x86_64-efi --no-floppy --bootloader-id proxmox --removable /dev/sda
Installing for x86_64-efi platform.
Installation finished. No error reported.
root@mint:/#
root@mint:/#
root@mint:/# echo 'grub-efi-amd64 grub2/force_efi_extra_removable boolean true' | debconf-set-selections -v -u
info: Trying to set 'grub2/force_efi_extra_removable' [boolean] to 'true'
info: Loading answer for 'grub2/force_efi_extra_removable'
root@mint:/#
root@mint:/# apt install --reinstall grub-efi-amd64
The following packages were automatically installed and are no longer required:
  bsdmainutils                    libdav1d6               libllvm15               libpython3.9-stdlib  perl-modules-5.32                   python3-soupsieve
  gcc-12-base                     libdns-export1110       libmpdec3               librav1e0            perl-modules-5.36                   python3-talloc
  libabsl20220623                 libdrm-nouveau2         libnsl-dev              libsubid4            proxmox-kernel-6.8.12-1-pve-signed  python3-tempita
  libapt-pkg6.0                   libdrm-radeon1          libnumber-compare-perl  libsvtav1enc1        pve-kernel-5.15                     python3-tz
  libavif15                       libfile-find-rule-perl  libopts25               libtext-glob-perl    pve-kernel-5.15.126-1-pve           python3-waitress
  libboost-context1.74.0          libflac12               libperl5.32             libthrift-0.17.0     pve-kernel-5.15.158-2-pve           python3-webtest
  libboost-coroutine1.74.0        libfmt9                 libperl5.36             libtiff5             python3-bs4                         python3.11
  libboost-filesystem1.74.0       libglusterd0            libprocps8              libtirpc-dev         python3-jaraco.classes              python3.11-minimal
  libboost-iostreams1.74.0        libicu67                libprotobuf23           liburing1            python3-ldb                         python3.9
  libboost-program-options1.74.0  libicu72                libpython3.11           libwebp6             python3-paste                       python3.9-minimal
  libboost-thread1.74.0           libisc-export1105       libpython3.11-minimal   libx265-199          python3-pastedeploy                 sgml-base
  libbpf0                         libjs-sencha-touch      libpython3.11-stdlib    libxcb-dri2-0        python3-pastedeploy-tpl             telnet
  libcbor0                        libldap-2.5-0           libpython3.9            libzpool5linux       python3-pytz                        usrmerge
  libcbor0.8                      libleveldb1d            libpython3.9-minimal    lua-lpeg             python3-singledispatch
Use 'apt autoremove' to remove them.

Installing:
  grub-efi-amd64

REMOVING:
  grub-pc

Summary:
  Upgrading: 0, Installing: 1, Removing: 1, Not Upgrading: 0
  Download size: 46.7 kB
  Freed space: 349 kB

Continue? [Y/n]
Get:1 http://download.proxmox.com/debian/pve trixie/pve-no-subscription amd64 grub-efi-amd64 amd64 2.12-9+pmx2 [46.7 kB]
Fetched 46.7 kB in 0s (398 kB/s)          
Preconfiguring packages ...
(Reading database ... 119801 files and directories currently installed.)
Removing grub-pc (2.12-9+pmx2) ...
Selecting previously unselected package grub-efi-amd64.
(Reading database ... 119792 files and directories currently installed.)
Preparing to unpack .../grub-efi-amd64_2.12-9+pmx2_amd64.deb ...
Unpacking grub-efi-amd64 (2.12-9+pmx2) ...
Setting up grub-efi-amd64 (2.12-9+pmx2) ...
Installing for x86_64-efi platform.
File descriptor 3 (pipe:[43286]) leaked on vgs invocation. Parent PID 3806: grub-install.real
File descriptor 3 (pipe:[43286]) leaked on vgs invocation. Parent PID 3806: grub-install.real
Installation finished. No error reported.
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.17.13-1-pve
Found initrd image: /boot/initrd.img-6.17.13-1-pve
Found linux image: /boot/vmlinuz-6.8.12-17-pve
Found initrd image: /boot/initrd.img-6.8.12-17-pve
Found linux image: /boot/vmlinuz-6.8.12-1-pve
Found initrd image: /boot/initrd.img-6.8.12-1-pve
Found linux image: /boot/vmlinuz-5.15.158-2-pve
Found initrd image: /boot/initrd.img-5.15.158-2-pve
Found linux image: /boot/vmlinuz-5.15.126-1-pve
Found initrd image: /boot/initrd.img-5.15.126-1-pve
Found linux image: /boot/vmlinuz-5.4.203-1-pve
Found initrd image: /boot/initrd.img-5.4.203-1-pve
Found linux image: /boot/vmlinuz-5.4.73-1-pve
Found initrd image: /boot/initrd.img-5.4.73-1-pve
Found memtest86+ 64bit EFI image: /boot/memtest86+x64.efi
Found memtest86+ 32bit EFI image: /boot/memtest86+ia32.efi
Found memtest86+ 64bit image: /boot/memtest86+x64.bin
Found memtest86+ 32bit image: /boot/memtest86+ia32.bin
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
Adding boot menu entry for UEFI Firmware Settings ...
done
Processing triggers for man-db (2.13.1-1) ...
root@mint:/#
root@mint:/#
root@mint:/# update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-6.17.13-1-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-6.8.12-17-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-6.8.12-1-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-5.15.158-2-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-5.15.126-1-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-5.4.203-1-pve
W: zstd compression (CONFIG_RD_ZSTD) not supported by kernel, using gzip
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-5.4.73-1-pve
W: zstd compression (CONFIG_RD_ZSTD) not supported by kernel, using gzip
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
root@mint:/#
root@mint:/#
root@mint:/#
root@mint:/# exit
exit
root@mint:~#
root@mint:~# ls /mnt
bin  boot  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@mint:~#
root@mint:~# umount /mnt/dev/pts
root@mint:~# umount /mnt/proc
root@mint:~# umount /mnt/dev
root@mint:~# umount /mnt/sys/firmware/efi/efivars
root@mint:~# umount /mnt/sys
root@mint:~# umount /mnt/boot/efi
root@mint:~# umount /mnt/
root@mint:~#
root@mint:~# reboot



Splunk Version 9.4.4 shows error while starting - VM CPU Flags are missing

Problem 

When you update Splunk to e.g. version 9.4.4, you may get this error while starting Splunk:

Migrating to:
VERSION=9.4.4
BUILD=f627d88b766b
PRODUCT=splunk
PLATFORM=Linux-x86_64

********** BEGIN PREVIEW OF CONFIGURATION FILE MIGRATION **********

-> Currently configured KVSTore database path="/opt/splunk/var/lib/splunk/kvstore"
CPU Vendor: GenuineIntel
CPU Family: 15
CPU Model: 6
CPU Brand: \x
AVX Support: No
SSE4.2 Support: No
AES-NI Support: No

-> isSupportedArchitecture=0
-> isKVstoreDisabled=0
-> isKVstoreDatabaseFolderExist=0
-> isKVstoreDiagnosticsFolderExist=0
-> isKVstoreVersionFileFolderExist=1
-> isKVstoreVersionFileFolderEmpty=0
-> isKVstoreVersionFileMatched=1
-> isKVstoreVersionFromBsonMatched=0
-> isSupportedArchitecture=0
* Active KVStore version upgrade precheck FAILED!
  -- This check is to ensure that KVStore version 4.2 been in use.
  -- In order to fix this failed check, re-install the previous Splunk version, and follow the KVStore upgrade documentation: https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/MigrateKVstore#Upgrade_KV_store_server_to_version_4.2 .
Some upgrade prechecks failed!
ERROR while running splunk-preinstall.
 

Cause

This might be related to the CPU features AVX, SSE4.2 and AES-NI missing in the Splunk VM; they are required by the new KV store MongoDB version introduced in Splunk version 9.4: https://help.splunk.com/en/splunk-enterprise/administer/admin-manual/9.4/administer-the-app-key-value-store/upgrade-the-kv-store-server-version#Upgrade_the_KV_store_server_version

You can check inside your VM (empty output means the flags are missing):
splnonroot@devubu22h102:/opt/splunk$
splnonroot@devubu22h102:/opt/splunk$
splnonroot@devubu22h102:/opt/splunk$ grep -o -w 'sse4_2\|avx\|aes' /proc/cpuinfo | sort -u
splnonroot@devubu22h102:/opt/splunk$

Solution 

In your VM hypervisor (VMware ESXi, Microsoft Hyper-V, Proxmox, etc.), give the Splunk VMs the necessary CPU flags/features.

Example for Proxmox:

  1. Check inside your VM: grep -o -w 'sse4_2\|avx\|aes' /proc/cpuinfo | sort -u
  2. Edit /etc/pve/qemu-server/*Your VM ID*.conf and add cpu: host, so that all host CPU flags are forwarded to the VM (or set it from the CLI, see the sketch after this list)
  3. Reboot the VM
  4. Check inside your VM again: grep -o -w 'sse4_2\|avx\|aes' /proc/cpuinfo | sort -u
  5. Start Splunk
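
Alternatively, the CPU type can be set with the Proxmox CLI instead of editing the config file by hand; a minimal sketch, assuming VM ID 101 (replace with your VM ID):

# forward all host CPU flags to the guest
qm set 101 --cpu host
# restart the guest so the new CPU type takes effect
qm reboot 101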

Before the change, the Proxmox VM config has no cpu line, so these flags are not passed to the guest:

root@proxmox1:~#
root@proxmox1:~#
root@proxmox1:~# cat /etc/pve/qemu-server/*Your VM ID*.conf
boot: order=scsi0;ide2;net0
cores: 2
ide2: local:iso/ubuntu-22.04-live-server-amd64_2.iso,media=cdrom
memory: 8192
name: devubu22h102
net0: virtio=CA:*redacted*:CC,bridge=vmbr0,firewall=1
[...]

[snapshot-pre-splunkupdate]
boot: order=scsi0;ide2;net0
cores: 2
ide2: local:iso/ubuntu-22.04-live-server-amd64_2.iso,media=cdrom
memory: 8192
name: devubu22h102
net0: virtio=CA:*redacted*:CC,bridge=vmbr0,firewall=1
[...]
root@proxmox1:~#
root@proxmox1:~# 

After the change, the VM config contains cpu: host:

1. The edited VM config:
root@proxmox1:~#
root@proxmox1:~#
root@proxmox1:~# cat /etc/pve/qemu-server/*Your VM ID*.conf
boot: order=scsi0;ide2;net0
cores: 2
cpu: host
ide2: local:iso/ubuntu-22.04-live-server-amd64_2.iso,media=cdrom
memory: 8192
name: devubu22h102
net0: virtio=CA:*redacted*:CC,bridge=vmbr0,firewall=1
[...]

[snapshot-pre-splunkupdate]
boot: order=scsi0;ide2;net0
cores: 2
cpu: host
ide2: local:iso/ubuntu-22.04-live-server-amd64_2.iso,media=cdrom
memory: 8192
name: devubu22h102
net0: virtio=CA:*redacted*:CC,bridge=vmbr0,firewall=1
[...]
root@proxmox1:~#
root@proxmox1:~#


2. Reboot the VM

3. Then inside your VM the CPU flags are visible:
splnonroot@devubu22h102:/opt/splunk$
splnonroot@devubu22h102:/opt/splunk$ grep -o -w 'sse4_2\|avx\|aes' /proc/cpuinfo | sort -u
aes
avx
sse4_2
splnonroot@devubu22h102:/opt/splunk$
splnonroot@devubu22h102:/opt/splunk$ 


4. Start Splunk again

Update Nginx Proxy Manager Docker container guide

Commands

  1. Back up your container
  2. Check which version your Nginx Proxy Manager is currently running (the version banner is printed when you exec into the container):
    docker exec -it nginx_app_1 /bin/bash

  3. Check which docker containers are currently running
    docker ps

  4. Stop the Nginx Proxy Manager application and database containers:
    docker stop nginx_app_1
    docker stop nginx_db_1


  5. Pull the latest (or a specific) version of the image:
    docker pull jc21/nginx-proxy-manager:latest

  6. Start the containers again (a sketch of a typical nginx.yml follows after this list):
    docker-compose -f nginx.yml up -d

  7. Check the logs of the containers
    docker logs --follow nginx_app_1

  8. Check which version your Nginx Proxy Manager is running now:
    docker exec -it nginx_app_1 /bin/bash

  9. Check your monitoring solution & test your applications
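
The nginx.yml compose file itself is not shown in this guide. A minimal sketch of a typical Nginx Proxy Manager setup that would produce the container names nginx_app_1 and nginx_db_1 above (project folder named nginx, services app and db; all credentials below are placeholder assumptions):

version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    environment:
      DB_MYSQL_HOST: 'db'
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: 'npm'        # placeholder - use your real credentials
      DB_MYSQL_PASSWORD: 'npm'    # placeholder
      DB_MYSQL_NAME: 'npm'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    depends_on:
      - db
  db:
    image: 'jc21/mariadb-aria:latest'
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'npm'  # placeholder
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'npm'
      MYSQL_PASSWORD: 'npm'
    volumes:
      - ./mysql:/var/lib/mysql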


Example

user@container-nginx:~#
user@container-nginx:~# docker ps
CONTAINER ID   IMAGE                      COMMAND             CREATED         STATUS         PORTS                                                                                  NAMES
1a74a14bc3ab   9c3f57826a5d               "/init"             18 days ago   Up 4 minutes   0.0.0.0:80-81->80-81/tcp, :::80-81->80-81/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   nginx_app_1
02372069f98d   jc21/mariadb-aria:latest   "/scripts/run.sh"   18 days ago   Up 4 minutes   3306/tcp                                                                               nginx_db_1
user@container-nginx:~#
user@container-nginx:~#
user@container-nginx:~# docker stop nginx_app_1
nginx_app_1
user@container-nginx:~# docker stop nginx_db_1
nginx_db_1
user@container-nginx:~#
user@container-nginx:~# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
user@container-nginx:~#
user@container-nginx:~# docker pull jc21/nginx-proxy-manager:latest
latest: Pulling from jc21/nginx-proxy-manager
7cf63256a31a: Pull complete
191fb0319d69: Pull complete
9ace5189354c: Pull complete
e4db5efc926a: Pull complete
[...]
be35f3c3bf02: Pull complete
Digest: sha256:e5eecad9bf040f1e7ddc9db6bbc812d690503aa119005e3aa0c24803746b49ea
Status: Downloaded newer image for jc21/nginx-proxy-manager:latest
docker.io/jc21/nginx-proxy-manager:latest
user@container-nginx:~#
user@container-nginx:~#
user@container-nginx:~# ls -lah
total 676K
[...]
-rw-r--r--  1 user    user    607K May 14 02:55 cron-auto-update.log
drwxr-xr-x  7 user    user    4.0K Nov  5  2023 data
drwxr-xr-x  8 user    user    4.0K May 14 19:07 letsencrypt
drwxr-xr-x  5 postfix crontab 4.0K May 14 19:14 mysql
-rw-r--r--  1 user    user    1.1K Aug 11  2024 nginx.yml
user@container-nginx:~#
user@container-nginx:~#
user@container-nginx:~#
user@container-nginx:~#
user@container-nginx:~# docker-compose -f nginx.yml up -d
Starting nginx_db_1 ... done
Recreating nginx_app_1 ... done
user@container-nginx:~#
user@container-nginx:~#
user@container-nginx:~# docker exec -it nginx_app_1 /bin/bash
 _   _       _            ____                      __  __
| \ | | __ _(_)_ __ __  _|  _ \ _ __ _____  ___   _|  \/  | __ _ _ __   __ _  __ _  ___ _ __
|  \| |/ _` | | '_ \\ \/ / |_) | '__/ _ \ \/ / | | | |\/| |/ _` | '_ \ / _` |/ _` |/ _ \ '__|
| |\  | (_| | | | | |>  <|  __/| | | (_) >  <| |_| | |  | | (_| | | | | (_| | (_| |  __/ |
|_| \_|\__, |_|_| |_/_/\_\_|   |_|  \___/_/\_\\__, |_|  |_|\__,_|_| |_|\__,_|\__, |\___|_|
       |___/                                  |___/                          |___/
Version 2.12.3 (c5a319c) 2025-03-12 00:21:07 UTC, OpenResty 1.27.1.1, debian 12 (bookworm), Certbot certbot 3.2.0
Base: debian:bookworm-slim, linux/amd64
Certbot: nginxproxymanager/nginx-full:latest, linux/amd64
Node: nginxproxymanager/nginx-full:certbot, linux/amd64

[yp@docker-9a056abb3b01:/app]#

 

This also works fine if the Docker containers run inside an LXC container. It should also work fine with Podman instead of Docker.

New Proxmox VM does not boot

When adding a new VM (in this example the Nextcloud appliance VM from https://www.hanssonit.se/nextcloud-vm/) to an old version of Proxmox like version 6 (Debian 10), the VM might not boot and instead stays stuck showing:

Booting from Hard Disk ...

Solution

  1. Update your Proxmox system, e.g. from version 6 (Debian 10 - "buster") to Proxmox version 7 (Debian 11 - "bullseye"), see https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
  2. Import the VM again and start it

Update Proxmox 6.4.x to 7.x

Updating a Proxmox system from version 6.4.x to 7.x using https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0:

Proxmox VE 6.x is based on Debian 10.x which is called “buster”.
Proxmox VE 7.x is based on Debian 11.x which is called “bullseye”.

  1. Make sure you have a backup of all VMs, Containers, Proxmox itself etc.
  2. Login via SSH/CLI
  3. Check your sources.list file; it should look like this:

    cat /etc/apt/sources.list

    deb http://deb.debian.org/debian bullseye main contrib
    deb http://deb.debian.org/debian bullseye-updates main contrib
    # security updates
    deb http://security.debian.org bullseye-security main contrib

    You may use sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list to update "buster" to "bullseye".

  4. Check the enterprise repository:

    cat /etc/apt/sources.list.d/pve-enterprise.list

    When running Proxmox VE 7.x with No-Subscription use:

    deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription

    When running Proxmox VE 7.x with a subscription use:

    deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise

  5. Check Proxmox version using:

    pveversion -v


  6. Run the pve6to7 script (the full output is shown in the example below):

    pve6to7


  7. Run the pve6to7 script again with the parameter --full (the full output is shown in the example below):

    pve6to7 --full

  8. Update your package lists:

    apt update

  9. Now upgrade the packages:

    apt dist-upgrade

  10. Reboot to activate the new kernel.
  11. Check the Proxmox version again and compare the output (all packages should have equal or higher version numbers):

    pveversion -v



Example:

root@prxmx024a:~#
root@prxmx024a:~# pve6to7
= CHECKING VERSION INFORMATION FOR PVE PACKAGES =

Checking for package updates..
PASS: all packages uptodate

Checking proxmox-ve package version..
PASS: proxmox-ve package has version >= 6.4-1

Checking running kernel version..
PASS: expected running kernel '5.4.203-1-pve'.

= CHECKING CLUSTER HEALTH/SETTINGS =

SKIP: standalone node.

= CHECKING HYPER-CONVERGED CEPH STATUS =

SKIP: no hyper-converged ceph setup detected!

= CHECKING CONFIGURED STORAGES =

PASS: storage 'local' enabled and active.
PASS: storage 'local-lvm' enabled and active.
PASS: storage 'storageusbhdd01' enabled and active.

= MISCELLANEOUS CHECKS =

INFO: Checking common daemon services..
PASS: systemd unit 'pveproxy.service' is in state 'active'
PASS: systemd unit 'pvedaemon.service' is in state 'active'
PASS: systemd unit 'pvestatd.service' is in state 'active'
INFO: Checking for running guests..
PASS: no running guest detected.
INFO: Checking if the local node's hostname 'proxmox1' is resolvable..
INFO: Checking if resolved IP is configured on local node..
PASS: Resolved node IP '192.168.2.106' configured and active on single interface.
INFO: Checking backup retention settings..
INFO: storage 'local' - no backup retention settings defined - by default, PVE 7.x will no longer keep only the last backup, but all backups
PASS: no problems found.
INFO: checking CIFS credential location..
PASS: no CIFS credentials at outdated location found.
INFO: Checking custom roles for pool permissions..
INFO: Checking node and guest description/note length..
PASS: All node config descriptions fit in the new limit of 64 KiB
PASS: All guest config descriptions fit in the new limit of 8 KiB
INFO: Checking container configs for deprecated lxc.cgroup entries
PASS: No legacy 'lxc.cgroup' keys found.
INFO: Checking storage content type configuration..
PASS: no problems found
INFO: Checking if the suite for the Debian security repository is correct..
INFO: Make sure to change the suite of the Debian security repository from 'buster/updates' to 'bullseye-security' - in /etc/apt/sources.list:6
SKIP: NOTE: Expensive checks, like CT cgroupv2 compat, not performed without '--full' parameter

= SUMMARY =

TOTAL:    20
PASSED:   17
SKIPPED:  3
WARNINGS: 0
FAILURES: 0
root@prxmx024a:~#
root@prxmx024a:~#
root@prxmx024a:~#
root@prxmx024a:~# pve6to7 --full
= CHECKING VERSION INFORMATION FOR PVE PACKAGES =

Checking for package updates..
PASS: all packages uptodate

Checking proxmox-ve package version..
PASS: proxmox-ve package has version >= 6.4-1

Checking running kernel version..
PASS: expected running kernel '5.4.203-1-pve'.

= CHECKING CLUSTER HEALTH/SETTINGS =

SKIP: standalone node.

= CHECKING HYPER-CONVERGED CEPH STATUS =

SKIP: no hyper-converged ceph setup detected!

= CHECKING CONFIGURED STORAGES =

PASS: storage 'local' enabled and active.
PASS: storage 'local-lvm' enabled and active.
PASS: storage 'storageusbhdd01' enabled and active.

= MISCELLANEOUS CHECKS =

INFO: Checking common daemon services..
PASS: systemd unit 'pveproxy.service' is in state 'active'
PASS: systemd unit 'pvedaemon.service' is in state 'active'
PASS: systemd unit 'pvestatd.service' is in state 'active'
INFO: Checking for running guests..
PASS: no running guest detected.
INFO: Checking if the local node's hostname 'proxmox1' is resolvable..
INFO: Checking if resolved IP is configured on local node..
PASS: Resolved node IP '192.168.2.106' configured and active on single interface.
INFO: Checking backup retention settings..
INFO: storage 'local' - no backup retention settings defined - by default, PVE 7.x will no longer keep only the last backup, but all backups
PASS: no problems found.
INFO: checking CIFS credential location..
PASS: no CIFS credentials at outdated location found.
INFO: Checking custom roles for pool permissions..
INFO: Checking node and guest description/note length..
PASS: All node config descriptions fit in the new limit of 64 KiB
PASS: All guest config descriptions fit in the new limit of 8 KiB
INFO: Checking container configs for deprecated lxc.cgroup entries
PASS: No legacy 'lxc.cgroup' keys found.
INFO: Checking storage content type configuration..
PASS: no problems found
INFO: Checking if the suite for the Debian security repository is correct..
INFO: Make sure to change the suite of the Debian security repository from 'buster/updates' to 'bullseye-security' - in /etc/apt/sources.list:6
SKIP: No containers on node detected.

= SUMMARY =

TOTAL:    20
PASSED:   17
SKIPPED:  3
WARNINGS: 0
FAILURES: 0
root@prxmx024a:~#
root@prxmx024a:~# cat /etc/apt/sources.list
deb http://deb.debian.org/debian buster main contrib

deb http://deb.debian.org/debian buster-updates main contrib

# security updates
deb http://security.debian.org buster/updates main contrib

root@prxmx024a:~#
root@prxmx024a:~#
root@prxmx024a:~# sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list
root@prxmx024a:~#
root@prxmx024a:~# cat /etc/apt/sources.list
deb http://deb.debian.org/debian bullseye main contrib

deb http://deb.debian.org/debian bullseye-updates main contrib

# security updates
deb http://security.debian.org bullseye-security main contrib

root@prxmx024a:~#
root@prxmx024a:~# cat /etc/apt/sources.list.d/pve-enterprise.list
deb http://download.proxmox.com/debian/pve buster pve-no-subscription
#deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise
root@prxmx024a:~#
root@prxmx024a:~# vi /etc/apt/sources.list.d/pve-enterprise.list
root@prxmx024a:~#
root@prxmx024a:~# cat /etc/apt/sources.list.d/pve-enterprise.list
deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription
#deb http://download.proxmox.com/debian/pve buster pve-no-subscription
#deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise
root@prxmx024a:~#
root@prxmx024a:~#
root@prxmx024a:~#
root@prxmx024a:~# apt update
Hit:1 http://deb.debian.org/debian bullseye InRelease
Get:2 http://download.proxmox.com/debian/pve bullseye InRelease [2,768 B]
Get:3 http://deb.debian.org/debian bullseye-updates InRelease [44.1 kB]
Hit:4 http://security.debian.org bullseye-security InRelease
Get:5 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 Packages [427 kB]
Fetched 474 kB in 0s (1,022 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
582 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@prxmx024a:~#
root@prxmx024a:~# apt list --upgradable
[...]

root@prxmx024a:~# apt dist-upgrade
[...]
root@prxmx024a:~# reboot

Proxmox update error "Sub-process /usr/share/proxmox-ve/pve-apt-hook returned an error code (1)"

Problem

During a Proxmox update (e.g. from Proxmox version 6 to 7) you receive the following error:

[...]
100% [608 zstd 34.1 kB/630 kB 5%] 1,337 kB/s 0s
100% [Working] 1,337 kB/s 0s

Fetched 255 MB in 2min 30s (1,702 kB/s)
W: (pve-apt-hook) !! WARNING !!
W: (pve-apt-hook) You are attempting to remove the meta-package 'proxmox-ve'!
W: (pve-apt-hook)
W: (pve-apt-hook) If you really want to permanently remove 'proxmox-ve' from your system, run the following command
W: (pve-apt-hook) touch '/please-remove-proxmox-ve'
W: (pve-apt-hook) run apt purge proxmox-ve to remove the meta-package
W: (pve-apt-hook) and repeat your apt invocation.
W: (pve-apt-hook)
W: (pve-apt-hook) If you are unsure why 'proxmox-ve' would be removed, please verify
W: (pve-apt-hook) - your APT repository settings
W: (pve-apt-hook) - that you are using 'apt full-upgrade' to upgrade your system
E: Sub-process /usr/share/proxmox-ve/pve-apt-hook returned an error code (1)
E: Failure running script /usr/share/proxmox-ve/pve-apt-hook
root@proxmox1:~#
root@proxmox1:~#

Solution

Proxmox VE 6.x is based on Debian 10.x which is called “buster”.
Proxmox VE 7.x is based on Debian 11.x which is called “bullseye”.

  1. Check if your /etc/apt/sources.list file and the repo files under /etc/apt/sources.list.d/ (e.g. pve-enterprise.list) still contain "buster" (Proxmox version 6) and replace it with "bullseye", e.g. with sed -i -e 's/buster/bullseye/g' /etc/apt/sources.list.d/pve-install-repo.list (see the sketch after this list)
  2. Run apt update again
  3. Run apt dist-upgrade again
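
To quickly find any remaining "buster" entries across all APT source files, a one-liner like this helps (a minimal sketch):

grep -rn buster /etc/apt/sources.list /etc/apt/sources.list.d/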

Update Proxmox 6.x to latest 6.4

Updating a Proxmox 6.x system to the latest 6.4 using the guide https://pve.proxmox.com/wiki/Downloads#Update_a_running_Proxmox_Virtual_Environment_6.x_to_latest_6.4:

Proxmox VE 6.x is based on Debian 10.x which is called “buster”.

  1. Make sure you have a backup of all VMs, Containers, Proxmox itself etc.
  2. Login via SSH/CLI
  3. Check your sources.list file; it should look like this:

    cat /etc/apt/sources.list

    deb http://deb.debian.org/debian buster main contrib
    deb http://deb.debian.org/debian buster-updates main contrib
    # security updates
    deb http://security.debian.org buster/updates main contrib

  4. Check the enterprise repository:

    cat /etc/apt/sources.list.d/pve-enterprise.list

    When running Proxmox VE 6.x with No-Subscription use:

    deb http://download.proxmox.com/debian/pve buster pve-no-subscription

    When running Proxmox VE 6.x with a subscription use:

    deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise

  5. Check Proxmox version using:

    pveversion -v


  6. Update your package lists:

    apt update

    If you get any errors, your sources.list (or your system or network) has a problem.
  7. Now upgrade the packages:

    apt dist-upgrade

  8. Reboot to activate the new kernel.
  9. Check the Proxmox version again and compare the output (all packages should have equal or higher version numbers):

    pveversion -v 


Example:

root@prxmx053b:~#
root@prxmx053b:~# cat /etc/apt/sources.list
deb http://deb.debian.org/debian buster main contrib
deb http://deb.debian.org/debian buster-updates main contrib
# security updates
deb http://security.debian.org buster/updates main contrib
root@prxmx053b:~#
root@prxmx053b:~#
root@prxmx053b:~# apt update
Hit:1 http://security.debian.org buster/updates InRelease
Hit:2 http://download.proxmox.com/debian/pve buster InRelease
Hit:3 http://deb.debian.org/debian buster InRelease
Hit:4 http://deb.debian.org/debian buster-updates InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
242 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@prxmx053b:~#
root@prxmx053b:~#
root@prxmx053b:~# apt dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
[...]
root@prxmx053b:~#
root@prxmx053b:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.73-1-pve)
pve-manager: 6.4-15 (running version: 6.4-15/af7986e6)
pve-kernel-5.4: 6.4-20
pve-kernel-helper: 6.4-20
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+deb10u1
corosync: 3.1.5-pve2~bpo10+1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve2~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-5
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.14-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-2
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-8
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.7-pve1
root@prxmx053b:~#
root@prxmx053b:~# reboot


Slow USB3 data transfer speed on Debian with Proxmox

Issue

Recently I saw a Debian Proxmox system with a USB3 HDD mounted at /mnt/usbhdd01/. The HDD write speed should have been ~100 MB/s and the USB 3 link speed is 5 Gbit/s. However, the data transfer speed was very slow, about 5-6 kB/s. A short speed check showed the issue:

root@proxmox1:~#
root@proxmox1:~# dd if=/dev/zero of=/mnt/usbhdd01/test03012021-2036uhr.img bs=1024 count=10000
^C220+0 records in
220+0 records out
225280 bytes (225 kB, 220 KiB) copied, 35.8676 s, 6.3 kB/s 😕
root@proxmox1:~#

Solution

After some troubleshooting I found the cause: /etc/fstab mounted the disk with the options auto,nofail,sync,users,rw. The sync option forces every write to be flushed to disk synchronously, which is extremely slow for small writes:

# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=1234-4567 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee /mnt/usbhdd01 ext4 auto,nofail,sync,users,rw 0 0

After changing that to defaults and re-mounting it, speed was fast again:

# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=1234-4567 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee /mnt/usbhdd01 ext4 defaults 0 0
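
To apply the changed options without a reboot, the filesystem can be remounted; a minimal sketch, assuming the mount point from this example:

# re-read the options for this mount point from /etc/fstab
mount -o remount /mnt/usbhdd01
# or unmount and mount again:
# umount /mnt/usbhdd01 && mount /mnt/usbhdd01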

Short speedtest:

root@proxmox1:~# dd if=/dev/zero of=/mnt/usbhdd01/test03012021-2110uhr.img bs=1024 count=2000000
2000000+0 records in
2000000+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 20.3455 s, 101 MB/s 😊

Increase disk and ZFS pool of a Nextcloud VM running on Proxmox

To increase the data disk of your Nextcloud VM running on Proxmox, do the following:

  1. Make sure no disk snapshot is active, or delete them.
  2. Shut down the VM.
  3. Check the current size of your Nextcloud VM's data disk using lvs on the Proxmox hypervisor:

    root@proxmox1:~#
    root@proxmox1:~# lvs
      LV            VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      data          pve twi-aotz-- <3.49t             0.78   0.28
      root          pve -wi-ao---- 96.00g
      swap          pve -wi-ao----  8.00g
      vm-100-disk-0 pve Vwi-a-tz-- 40.00g data        9.99
      vm-100-disk-1 pve Vwi-a-tz-- 40.00g data        0.06
      vm-101-disk-0 pve Vwi-a-tz-- 40.00g data        58.01
      vm-101-disk-1 pve Vwi-a-tz-- 40.00g data        1.60   <-- This is my nextcloud data disk
    root@proxmox1:~#
  4. In my case this disk is attached as scsi1 to the VM (visible under the VM's Hardware -> disks).
  5. Increase the disk size using qm resize <vm-id> <scsi-id> <size>, for example qm resize 101 scsi1 +100G:

    root@proxmox1:~#
    root@proxmox1:~# qm resize 101 scsi1 +3210G
      Size of logical volume pve/vm-101-disk-1 changed from 40.00 GiB (10240 extents) to 3.17 TiB (832000 extents).
      Logical volume pve/vm-101-disk-1 successfully resized.
    root@proxmox1:~#
    root@proxmox1:~# lvs
      LV            VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      data          pve twi-aotz-- <3.49t             0.78   0.28
      root          pve -wi-ao---- 96.00g
      swap          pve -wi-ao----  8.00g
      vm-100-disk-0 pve Vwi-a-tz-- 40.00g data        9.99
      vm-100-disk-1 pve Vwi-a-tz-- 40.00g data        0.06
      vm-101-disk-0 pve Vwi-a-tz-- 40.00g data        58.01
      vm-101-disk-1 pve Vwi-a-tz--  3.17t data        0.02
    root@proxmox1:~#
  6. Start your VM.
  7. Check the zpool size using zpool list
  8. Check the /mnt/ncdata size using df -h
  9. Read the new partition size using parted -l with the answer "fix" for the adjustment
  10. You can delete the buffer partition 9 using parted /dev/sdb rm 9
  11. Extend the first partition using to 100% of the available size parted /dev/sdb resizepart 1 100%
  12. Use zpool export zpool export ncdata 
  13. Import zpool again zpool import -d /dev ncdata
  14. Set zpool online zpool online -e ncdata sdb
  15. zpool online -e ncdata /dev/sdb you can adjust the partition to the correct size
  16. Check the new zpool size using zpool list
  17. Check the new /mnt/ncdata size using df -h
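
Condensed, the in-guest part (steps 9 to 16) looks like this; a sketch assuming the data disk is /dev/sdb and the pool is named ncdata, as in the example below:

parted -l                           # answer "Fix" to adjust the GPT to the new disk size
parted /dev/sdb rm 9                # remove the small ZFS buffer partition
parted /dev/sdb resizepart 1 100%   # grow partition 1 to the full disk
zpool export ncdata                 # export ...
zpool import -d /dev ncdata         # ... and re-import the pool
zpool online -e ncdata /dev/sdb     # expand the pool onto the grown partition
zpool list                          # verify the new pool size
df -h                               # verify the new /mnt/ncdata size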

Example with Nextcloud 20 on Ubuntu 20.04:

root@nextcloud:~#
root@nextcloud:~# zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
ncdata  39.5G  46.0M  39.5G        -     3.13T     0%     0%  1.00x    ONLINE  -
root@nextcloud:~#
root@nextcloud:~# df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               3.9G     0  3.9G   0% /dev
tmpfs                              797M  1.2M  796M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   39G  5.5G   32G  15% /
tmpfs                              3.9G  8.0K  3.9G   1% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda2                          976M  198M  712M  22% /boot
/dev/loop0                          55M   55M     0 100% /snap/core18/1705
/dev/loop1                          56M   56M     0 100% /snap/core18/1932
/dev/loop2                          61M   61M     0 100% /snap/core20/634
/dev/loop3                          70M   70M     0 100% /snap/lxd/18520
/dev/loop4                          62M   62M     0 100% /snap/core20/875
/dev/loop5                          72M   72M     0 100% /snap/lxd/18546
/dev/loop6                          31M   31M     0 100% /snap/snapd/9721
/dev/loop7                          32M   32M     0 100% /snap/snapd/10492
ncdata                              39G   19M   39G   1% /mnt/ncdata
tmpfs                              797M     0  797M   0% /run/user/1000
root@nextcloud:~#
root@nextcloud:~# parted -l
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2097kB  1049kB                     bios_grub
 2      2097kB  1076MB  1074MB  ext4
 3      1076MB  42.9GB  41.9GB


Warning: Not all of the space available to /dev/sdb appears to be used, you can
fix the GPT to use all of the space (an extra 6731857920 blocks) or continue
with the current setting?
Fix/Ignore? Fix
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sdb: 3490GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name                  Flags
 1      1049kB  42.9GB  42.9GB  zfs          zfs-4172ff7a9f945112
 9      42.9GB  42.9GB  8389kB


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 41.9GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  41.9GB  41.9GB  ext4


root@nextcloud:~#
root@nextcloud:~# parted /dev/sdb rm 9
Information: You may need to update /etc/fstab.

root@nextcloud:~#
root@nextcloud:~# parted /dev/sdb resizepart 1 100%
Information: You may need to update /etc/fstab.

root@nextcloud:~#
root@nextcloud:~# zpool export ncdata
root@nextcloud:~#
root@nextcloud:~# zpool import -d /dev ncdata
root@nextcloud:~#
root@nextcloud:~# zpool online -e ncdata sdb
root@nextcloud:~#
root@nextcloud:~# zpool online -e ncdata /dev/sdb
root@nextcloud:~#
root@nextcloud:~# zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
ncdata  3.17T  46.1M  3.17T        -         -     0%     0%  1.00x    ONLINE  -
root@nextcloud:~#
root@nextcloud:~#
root@nextcloud:~#  df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               3.9G     0  3.9G   0% /dev
tmpfs                              797M  1.2M  796M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   39G  5.5G   32G  15% /
tmpfs                              3.9G  8.0K  3.9G   1% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda2                          976M  198M  712M  22% /boot
/dev/loop0                          55M   55M     0 100% /snap/core18/1705
/dev/loop1                          56M   56M     0 100% /snap/core18/1932
/dev/loop2                          61M   61M     0 100% /snap/core20/634
/dev/loop3                          70M   70M     0 100% /snap/lxd/18520
/dev/loop4                          62M   62M     0 100% /snap/core20/875
/dev/loop5                          72M   72M     0 100% /snap/lxd/18546
/dev/loop6                          31M   31M     0 100% /snap/snapd/9721
/dev/loop7                          32M   32M     0 100% /snap/snapd/10492
tmpfs                              797M     0  797M   0% /run/user/1000
ncdata                             3.1T   19M  3.1T   1% /mnt/ncdata
root@nextcloud:~#

Import Nextcloud VM OVA to Proxmox VE

How to import a Nextcloud VM OVA image file to a Proxmox VE server:

1. Unpack the OVA file (it is a tar archive):

root@proxmox1:/var/lib/vz/images#
root@proxmox1:/var/lib/vz/images# tar -xvf Nextcloud_VM_v20_www.hanssonit.se.ova
Nextcloud_VM_www.hanssonit.se.ovf
Nextcloud_VM_www.hanssonit.se.mf
Nextcloud_VM_www.hanssonit.se-disk1.vmdk
Nextcloud_VM_www.hanssonit.se-disk2.vmdk
root@proxmox1:/var/lib/vz/images#
root@proxmox1:/var/lib/vz/images# ls -lah
total 3.6G
drwxr-xr-x 2 root root 4.0K Dec  6 14:33 .
drwxr-xr-x 5 root root 4.0K Dec  6 14:00 ..
-rw-r--r-- 1 root root 1.8G Dec  6 14:26 Nextcloud_VM_v20_www.hanssonit.se.ova
-rw-r--r-- 1   64   64 1.8G Oct 28 21:37 Nextcloud_VM_www.hanssonit.se-disk1.vmdk
-rw-r--r-- 1   64   64  17M Oct 28 21:38 Nextcloud_VM_www.hanssonit.se-disk2.vmdk
-rw-r--r-- 1   64   64  338 Oct 28 21:06 Nextcloud_VM_www.hanssonit.se.mf
-rw-r--r-- 1   64   64 7.4K Oct 28 21:06 Nextcloud_VM_www.hanssonit.se.ovf
root@proxmox1:/var/lib/vz/images#

2. Import the extracted OVF using qm importovf:
qm importovf <vm-id> <file.ovf> <storage>
My VM ID was 101 and the target storage local-lvm, so it looked like this:

root@proxmox1:/var/lib/vz/images#
root@proxmox1:/var/lib/vz/images# qm importovf 101 Nextcloud_VM_www.hanssonit.se.ovf local-lvm
  Logical volume "vm-101-disk-0" created.
transferred: 0 bytes remaining: 42949672960 bytes total: 42949672960 bytes progression: 0.00 %
transferred: 429496729 bytes remaining: 42520176231 bytes total: 42949672960 bytes progression: 1.00 %
transferred: 858993459 bytes remaining: 42090679501 bytes total: 42949672960 bytes progression: 2.00 %
[...]
transferred: 42949672960 bytes remaining: 0 bytes total: 42949672960 bytes progression: 100.00 %
Logical volume "vm-101-disk-1" created.
transferred: 0 bytes remaining: 42949672960 bytes total: 42949672960 bytes progression: 0.00 %
[...]
transferred: 42949672960 bytes remaining: 0 bytes total: 42949672960 bytes progression: 100.00 %
root@proxmox1:/var/lib/vz/images#
root@proxmox1:/var/lib/vz/images#

3. I had to add a NIC in the VM's Hardware tab.
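
Alternatively, the NIC can be added from the CLI; a minimal sketch, assuming VM ID 101 and the default bridge vmbr0:

qm set 101 --net0 virtio,bridge=vmbr0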


Intel NUC 10th gen running VMware ESXi 7.0

Due to growing data I had to add more storage, so I bought a new Intel NUC (10th generation) running VMware ESXi 7.0. Really helpful for the setup is again virten.net, which provides all the necessary information.

Simply install ESXi on the NUC from a USB stick; I used Rufus to create it. For the ESXi image, follow the steps from virten.net to create an ESXi 7.0 image with a network interface card driver that works for the Intel NUC 10th gen (otherwise a "No Network Adapters" error is shown).

Start PowerShell (with admin rights) and type in:

# add the VMware online depot and download the standard 7.0 image profile as an offline bundle
Add-EsxSoftwareDepot https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
Export-ESXImageProfile -ImageProfile "ESXi-7.0.0-15843807-standard" -ExportToBundle -filepath ESXi-7.0.0-15843807-standard.zip
Remove-EsxSoftwareDepot https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
# work from the local bundles: the standard image plus the NE1000 NIC driver bundle
Add-EsxSoftwareDepot .\ESXi-7.0.0-15843807-standard.zip
Add-EsxSoftwareDepot .\ESXi670-NE1000-32543355-offline_bundle-15486963.zip
# clone the profile and swap in the NE1000 driver that works on the NUC 10th gen
New-EsxImageProfile -CloneProfile "ESXi-7.0.0-15843807-standard" -name "ESXi-7.0.0-15843807-NUC" -Vendor "virten.net"
Remove-EsxSoftwarePackage -ImageProfile "ESXi-7.0.0-15843807-NUC" -SoftwarePackage "ne1000"
Add-EsxSoftwarePackage -ImageProfile "ESXi-7.0.0-15843807-NUC" -SoftwarePackage "ne1000 0.8.4-3vmw.670.3.99.32543355"
# export the customized profile as an ISO (for the USB stick) and as an offline bundle
Export-ESXImageProfile -ImageProfile "ESXi-7.0.0-15843807-NUC" -ExportToIso -filepath ESXi-7.0.0-15843807-NUC.iso
Export-ESXImageProfile -ImageProfile "ESXi-7.0.0-15843807-NUC" -ExportToBundle -filepath ESXi-7.0.0-15843807-NUC.zip

If there is an issue "about_Execution_Policies" (https:/go.microsoft.com/fwlink/?LinkID=135170), like:

+ Import-Module VMware.ImageBuilder
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : SecurityError: (:) [Import-Module], PSSecurityException
    + FullyQualifiedErrorId : UnauthorizedAccess,Microsoft.PowerShell.Commands.ImportModuleCommand
 

then you can use the following temporary workaround:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned

❗Warning! This is a possible security issue (see MS documentation). Set this setting back to default after creating the image using:

Set-ExecutionPolicy -ExecutionPolicy Default 

Update 04.01.2021: After having problems with large file transfers from and to the ESXi host and from and to VMs running on it, I reinstalled ESXi 6.7u3 on the NUC. The problems continued: large file transfers via SCP, SFTP or HTTPS were always corrupted or broke off, no matter which application or operating system. So I decided to switch to Proxmox. Proxmox and the VMs on it work fine and have no issues.

Monitor UniFi WLAN Access Point with PRTG with SNMPv3 Auth+Encrypted

This is a tiny guide on how to monitor your UniFi wireless access point, in this case a UniFi U7 Pro, via SNMPv3 with AES encryption and SHA authentication...