
Azure Linux Ubuntu not fully upgraded

Running apt-get update && apt-get upgrade -y on your Ubuntu VM in Azure sometimes does not upgrade all packages; some are kept back:

root@hostname6:~#
root@hostname6:~#
root@hostname6:~# apt-get update && apt-get upgrade -y
Hit:1 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Hit:2 http://azure.archive.ubuntu.com/ubuntu jammy InRelease
Hit:3 http://azure.archive.ubuntu.com/ubuntu jammy-updates InRelease
Hit:4 http://azure.archive.ubuntu.com/ubuntu jammy-backports InRelease
Hit:5 http://azure.archive.ubuntu.com/ubuntu jammy-security InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following packages have been kept back:
libnss-systemd libpam-systemd libsystemd0 libudev1 linux-azure linux-cloud-tools-azure linux-headers-azure linux-image-azure linux-tools-azure
systemd systemd-sysv udev
0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
root@hostname6:~#


Solution

sudo apt-get install aptitude -y
sudo aptitude safe-upgrade

aptitude safe-upgrade
Upgrades installed packages to their most recent version. Installed packages will not be removed unless they are unused [...] Packages which are not currently installed may be installed to resolve dependencies unless the --no-new-installs command-line option is supplied.


Example

root@hostname6:~#
root@hostname6:~# sudo aptitude safe-upgrade
Resolving dependencies...
The following NEW packages will be installed:
linux-azure-6.8-cloud-tools-6.8.0-1041{a} linux-azure-6.8-headers-6.8.0-1041{a} linux-azure-6.8-tools-6.8.0-1041{a}
linux-cloud-tools-6.8.0-1041-azure{a} linux-headers-6.8.0-1041-azure{a} linux-image-6.8.0-1041-azure{a} linux-modules-6.8.0-1041-azure{a}
linux-tools-6.8.0-1041-azure{a}
The following packages will be upgraded:
libnss-systemd libpam-systemd libsystemd0 libudev1 linux-azure linux-cloud-tools-azure linux-headers-azure linux-image-azure linux-tools-azure
systemd systemd-sysv udev
12 packages upgraded, 8 newly installed, 0 to remove and 0 not upgraded.
Need to get 67.2 MB of archives. After unpacking 269 MB will be used.
Do you want to continue? [Y/n/?]

[..]
Current status: 0 (-12) upgradable.
root@hostname6:~#
root@hostname6:~# apt-get update
Hit:1 http://azure.archive.ubuntu.com/ubuntu jammy InRelease
Hit:2 http://azure.archive.ubuntu.com/ubuntu jammy-updates InRelease
Hit:3 http://azure.archive.ubuntu.com/ubuntu jammy-backports InRelease
Hit:4 http://azure.archive.ubuntu.com/ubuntu jammy-security InRelease
Hit:5 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Reading package lists... Done
root@hostname6:~#
root@hostname6:~# apt-get upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@hostname6:~#
root@hostname6:~#
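If you prefer to stay with plain apt instead of installing aptitude, the kept-back packages can usually be installed by allowing apt to pull in the new dependency packages (here: the new linux-azure kernel packages). A sketch of that alternative:

```shell
# Allow apt-get upgrade to install NEW packages (e.g. a new kernel
# meta package) instead of keeping their dependents back:
sudo apt-get update
sudo apt-get --with-new-pkgs upgrade -y

# Equivalent on newer apt; full-upgrade may additionally REMOVE
# packages to resolve conflicts, so review its plan before confirming:
sudo apt full-upgrade
```

Both variants need root and touch the running system, so try them on a test VM first.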

Disable Transparent Huge Pages with a custom systemd service

Some applications (e.g. Splunk) require Transparent Huge Pages (THP) to be disabled.

When your systems run in a managed environment (e.g. a public cloud like Microsoft Azure) with cloud images such as "0001-com-ubuntu-server-jammy" or "0001-com-ubuntu-confidential-vm-jammy" (both Ubuntu 22.04 LTS), you may not be able to disable THP via a GRUB edit, because the cloud image ignores those edits. A possible solution is a custom systemd service:

Example: Disable Transparent Huge Pages once (not reboot-persistent ❌):

user@devazubu227:~$
user@devazubu227:~$
user@devazubu227:~$ sudo cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
user@devazubu227:~$
user@devazubu227:~$
user@devazubu227:~$ echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
never
user@devazubu227:~$
user@devazubu227:~$ sudo cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
user@devazubu227:~$



Example: Disable Transparent Huge Pages with a custom systemd service (reboot-persistent ✅):


Commands:

sudo tee /etc/systemd/system/disable-thp.service > /dev/null <<EOF
[Unit]
Description=Disable Transparent Huge Pages
After=network.target

[Service]
Type=simple
ExecStart=/bin/bash -c "echo never > /sys/kernel/mm/transparent_hugepage/enabled && echo never > /sys/kernel/mm/transparent_hugepage/defrag"

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reexec
sudo systemctl daemon-reload
sudo systemctl enable disable-thp
sudo systemctl start disable-thp
sudo systemctl status disable-thp
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag



Example

user@devazubu227:~$
user@devazubu227:~$
user@devazubu227:~$ sudo tee /etc/systemd/system/disable-thp.service > /dev/null <<EOF
[Unit]
Description=Disable Transparent Huge Pages
After=network.target

[Service]
Type=simple
ExecStart=/bin/bash -c "echo never > /sys/kernel/mm/transparent_hugepage/enabled && echo never > /sys/kernel/mm/transparent_hugepage/defrag"

[Install]
WantedBy=multi-user.target
EOF
user@devazubu227:~$
user@devazubu227:~$ cat /etc/systemd/system/disable-thp.service
[Unit]
Description=Disable Transparent Huge Pages
After=network.target

[Service]
Type=simple
ExecStart=/bin/bash -c "echo never > /sys/kernel/mm/transparent_hugepage/enabled && echo never > /sys/kernel/mm/transparent_hugepage/defrag"

[Install]
WantedBy=multi-user.target
user@devazubu227:~$
user@devazubu227:~$
user@devazubu227:~$ sudo systemctl daemon-reexec
user@devazubu227:~$ sudo systemctl daemon-reload
user@devazubu227:~$ sudo systemctl enable disable-thp
Created symlink /etc/systemd/system/multi-user.target.wants/disable-thp.service → /etc/systemd/system/disable-thp.service.
user@devazubu227:~$ sudo systemctl start disable-thp

user@devazubu227:~$ sudo systemctl status disable-thp
○ disable-thp.service - Disable Transparent Huge Pages
Loaded: loaded (/etc/systemd/system/disable-thp.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Thu 2025-08-07 11:01:42 UTC; 11s ago
Process: 4394 ExecStart=/bin/bash -c echo never > /sys/kernel/mm/transparent_hugepage/enabled && echo never > /sys/kernel/mm/transparent_hugepage/defra>
Main PID: 4394 (code=exited, status=0/SUCCESS)
CPU: 2ms

Aug 07 11:01:42 devazubu227 systemd[1]: Started Disable Transparent Huge Pages.
Aug 07 11:01:42 devazubu227 systemd[1]: disable-thp.service: Deactivated successfully.
user@devazubu227:~$
user@devazubu227:~$
user@devazubu227:~$ cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
user@devazubu227:~$
user@devazubu227:~$ cat /sys/kernel/mm/transparent_hugepage/defrag
always defer defer+madvise madvise [never]
user@devazubu227:~$
user@devazubu227:~$
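As an alternative to the custom service, the same sysfs writes can be done at boot with a systemd-tmpfiles drop-in; a w entry writes its argument into the target file. A sketch (not taken from the setup above, and untested on these cloud images):

```shell
# Disable THP at boot via systemd-tmpfiles instead of a custom service.
# Format: w <path> <mode> <uid> <gid> <age> <argument-to-write>
sudo tee /etc/tmpfiles.d/disable-thp.conf > /dev/null <<EOF
w /sys/kernel/mm/transparent_hugepage/enabled - - - - never
w /sys/kernel/mm/transparent_hugepage/defrag - - - - never
EOF

# Apply immediately without a reboot:
sudo systemd-tmpfiles --create --prefix=/sys/kernel/mm/transparent_hugepage
```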

Nextcloud VM backup and restore scripts

To move a Nextcloud VM installation from one VM to another (in my case first from nextcloud-vm@ubuntu16 to nextcloud-vm@ubuntu18, and later from nextcloud-vm@ubuntu18 to nextcloud-vm@ubuntu20), I successfully used the following two scripts, which I can highly recommend:

Download the scripts to /root/:

sudo -i
cd ~
wget https://codeberg.org/DecaTec/Nextcloud-Backup-Restore/raw/branch/master/NextcloudBackup.sh
wget https://codeberg.org/DecaTec/Nextcloud-Backup-Restore/raw/branch/master/NextcloudRestore.sh

Secure the scripts:

chown root NextcloudBackup.sh
chown root NextcloudRestore.sh
chmod 700 NextcloudBackup.sh
chmod 700 NextcloudRestore.sh

Execute the scripts:

./NextcloudBackup.sh
./NextcloudRestore.sh 20201223_223941
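To take backups regularly, the backup script can be scheduled via cron. A minimal sketch, assuming the script stays in /root/ as downloaded above (the log file path is my own choice, not from the scripts):

```shell
# Append a nightly 02:30 backup job to root's crontab, keeping a log.
( sudo crontab -l 2>/dev/null; \
  echo '30 2 * * * /root/NextcloudBackup.sh > /var/log/nextcloud-backup.log 2>&1' ) | sudo crontab -
```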

To mount an SMB/CIFS share (e.g. as a backup target):

sudo mkdir /mnt/cifsdir
sudo mount -t cifs -o user=YourSMBUser,password=YourVeryLongPassSentence //192.168.0.10/somedir /mnt/cifsdir
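To make such a CIFS mount reboot-persistent, the credentials can go into a root-only file referenced from /etc/fstab. A sketch reusing the hypothetical user, password, and paths from the example above:

```shell
# Keep the SMB credentials out of /etc/fstab, readable by root only.
sudo tee /root/.smbcredentials > /dev/null <<EOF
username=YourSMBUser
password=YourVeryLongPassSentence
EOF
sudo chmod 600 /root/.smbcredentials

# Persistent mount entry using the credentials file.
echo '//192.168.0.10/somedir /mnt/cifsdir cifs credentials=/root/.smbcredentials 0 0' | sudo tee -a /etc/fstab
sudo mount -a
```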
Increase disk and ZFS pool of a Nextcloud VM running on Proxmox

To increase the data disk of your Nextcloud VM running on Proxmox, do the following:

  1. Make sure no disk snapshots exist, or delete them.
  2. Shut down the VM.
  3. Check the current size of the Nextcloud VM's data disk using lvs on your Proxmox hypervisor:

    root@proxmox1:~#

    root@proxmox1:~# lvs

      LV            VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert

      data          pve twi-aotz-- <3.49t             0.78   0.28

      root          pve -wi-ao---- 96.00g

      swap          pve -wi-ao----  8.00g

      vm-100-disk-0 pve Vwi-a-tz-- 40.00g data        9.99

      vm-100-disk-1 pve Vwi-a-tz-- 40.00g data        0.06

      vm-101-disk-0 pve Vwi-a-tz-- 40.00g data        58.01

      vm-101-disk-1 pve Vwi-a-tz-- 40.00g data        1.60 <-- This is my nextcloud data disk

    root@proxmox1:~#

    root@proxmox1:~#
     
  4. In my case this disk is attached to the VM as scsi1 (see the VM's Hardware tab in Proxmox).
  5. Increase the disk size using qm resize <vm-id> <scsi-id> <size>, for example qm resize 101 scsi1 +100G:

    root@proxmox1:~#

    root@proxmox1:~# qm resize 101 scsi1 +3210G

      Size of logical volume pve/vm-101-disk-1 changed from 40.00 GiB (10240 extents) to 3.17 TiB (832000 extents).

      Logical volume pve/vm-101-disk-1 successfully resized.

    root@proxmox1:~#

    root@proxmox1:~# lvs

      LV            VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert

      data          pve twi-aotz-- <3.49t             0.78   0.28

      root          pve -wi-ao---- 96.00g

      swap          pve -wi-ao----  8.00g

      vm-100-disk-0 pve Vwi-a-tz-- 40.00g data        9.99

      vm-100-disk-1 pve Vwi-a-tz-- 40.00g data        0.06

      vm-101-disk-0 pve Vwi-a-tz-- 40.00g data        58.01

      vm-101-disk-1 pve Vwi-a-tz--  3.17t data        0.02

    root@proxmox1:~#

    root@proxmox1:~#

    Proxmox virtual hardware disk resized
     
  6. Start your VM.
  7. Check the zpool size using zpool list
  8. Check the /mnt/ncdata size using df -h
  9. Re-read the partition table using parted -l and answer "Fix" when asked to adjust the GPT to the new disk size
  10. Delete the small buffer partition 9 using parted /dev/sdb rm 9
  11. Extend the first partition to 100% of the available size using parted /dev/sdb resizepart 1 100%
  12. Export the zpool using zpool export ncdata
  13. Import the zpool again using zpool import -d /dev ncdata
  14. Expand the pool to the new device size using zpool online -e ncdata sdb (the full device path zpool online -e ncdata /dev/sdb works as well)
  15. Check the new zpool size using zpool list
  16. Check the new /mnt/ncdata size using df -h
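The in-guest steps above (after the Proxmox resize and VM start) can be condensed into the following sketch. Run it as root inside the VM, and double-check that /dev/sdb and the pool name ncdata match your system first, since these commands modify the partition table:

```shell
parted /dev/sdb rm 9                # remove the small buffer partition
parted /dev/sdb resizepart 1 100%   # grow partition 1 to the whole disk
zpool export ncdata                 # release the pool ...
zpool import -d /dev ncdata         # ... and re-import it
zpool online -e ncdata sdb          # expand the pool onto the new space
zpool list                          # verify the new pool size
df -h /mnt/ncdata                   # verify the new filesystem size
```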

Example with nextcloud 20 on Ubuntu 20.04:

root@nextcloud:~#
root@nextcloud:~# zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
ncdata  39.5G  46.0M  39.5G        -     3.13T     0%     0%  1.00x    ONLINE  -
root@nextcloud:~#
root@nextcloud:~# df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               3.9G     0  3.9G   0% /dev
tmpfs                              797M  1.2M  796M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   39G  5.5G   32G  15% /
tmpfs                              3.9G  8.0K  3.9G   1% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda2                          976M  198M  712M  22% /boot
/dev/loop0                          55M   55M     0 100% /snap/core18/1705
/dev/loop1                          56M   56M     0 100% /snap/core18/1932
/dev/loop2                          61M   61M     0 100% /snap/core20/634
/dev/loop3                          70M   70M     0 100% /snap/lxd/18520
/dev/loop4                          62M   62M     0 100% /snap/core20/875
/dev/loop5                          72M   72M     0 100% /snap/lxd/18546
/dev/loop6                          31M   31M     0 100% /snap/snapd/9721
/dev/loop7                          32M   32M     0 100% /snap/snapd/10492
ncdata                              39G   19M   39G   1% /mnt/ncdata
tmpfs                              797M     0  797M   0% /run/user/1000
root@nextcloud:~#
root@nextcloud:~# parted -l
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2097kB  1049kB                     bios_grub
 2      2097kB  1076MB  1074MB  ext4
 3      1076MB  42.9GB  41.9GB


Warning: Not all of the space available to /dev/sdb appears to be used, you can
fix the GPT to use all of the space (an extra 6731857920 blocks) or continue
with the current setting?
Fix/Ignore? Fix
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sdb: 3490GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name                  Flags
 1      1049kB  42.9GB  42.9GB  zfs          zfs-4172ff7a9f945112
 9      42.9GB  42.9GB  8389kB


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 41.9GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  41.9GB  41.9GB  ext4


root@nextcloud:~#
root@nextcloud:~# parted /dev/sdb rm 9
Information: You may need to update /etc/fstab.

root@nextcloud:~#
root@nextcloud:~# parted /dev/sdb resizepart 1 100%
Information: You may need to update /etc/fstab.

root@nextcloud:~#
root@nextcloud:~# zpool export ncdata
root@nextcloud:~#
root@nextcloud:~# zpool import -d /dev ncdata
root@nextcloud:~#
root@nextcloud:~# zpool online -e ncdata sdb
root@nextcloud:~#
root@nextcloud:~# zpool online -e ncdata /dev/sdb
root@nextcloud:~#
root@nextcloud:~# zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
ncdata  3.17T  46.1M  3.17T        -         -     0%     0%  1.00x    ONLINE  -
root@nextcloud:~#
root@nextcloud:~#
root@nextcloud:~#  df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               3.9G     0  3.9G   0% /dev
tmpfs                              797M  1.2M  796M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   39G  5.5G   32G  15% /
tmpfs                              3.9G  8.0K  3.9G   1% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda2                          976M  198M  712M  22% /boot
/dev/loop0                          55M   55M     0 100% /snap/core18/1705
/dev/loop1                          56M   56M     0 100% /snap/core18/1932
/dev/loop2                          61M   61M     0 100% /snap/core20/634
/dev/loop3                          70M   70M     0 100% /snap/lxd/18520
/dev/loop4                          62M   62M     0 100% /snap/core20/875
/dev/loop5                          72M   72M     0 100% /snap/lxd/18546
/dev/loop6                          31M   31M     0 100% /snap/snapd/9721
/dev/loop7                          32M   32M     0 100% /snap/snapd/10492
tmpfs                              797M     0  797M   0% /run/user/1000
ncdata                             3.1T   19M  3.1T   1% /mnt/ncdata
root@nextcloud:~#

Nextcloud VM updater shows permissions error

When trying to update your Nextcloud VM using the GUI updater, the following error might be shown:

Nextcloud updater fails Check for write permissions

  • Check for write permissions

    The following places can not be written to:
    • /var/www/nextcloud/updater/../cron.php
    • /var/www/nextcloud/updater/../version.php
    • /var/www/nextcloud/updater/../console.php
    • /var/www/nextcloud/updater/../public.php
    • /var/www/nextcloud/updater/../robots.txt
    • /var/www/nextcloud/updater/../status.php
    • /var/www/nextcloud/updater/../.htaccess
    • /var/www/nextcloud/updater/../COPYING
    • /var/www/nextcloud/updater/../occ
    • /var/www/nextcloud/updater/../remote.php
    • /var/www/nextcloud/updater/../index.php
    • /var/www/nextcloud/updater/../index.html
    • /var/www/nextcloud/updater/../AUTHORS
    • /var/www/nextcloud/updater/../.user.ini

That might be due to the usage of the "set strong permissions" script, which sets ownership to root:www-data instead of www-data:www-data. You can check that using:

root@lin:~#
root@lin:~# ll /var/www/nextcloud/
total 172
drwxr-x--- 14 root     www-data  4096 Sep 21 14:07 ./
drwxr-xr-x  4 root     root      4096 Sep 21 14:07 ../
drwxr-x--- 41 root     www-data  4096 Sep  9 13:44 3rdparty/
drwxr-x--- 46 www-data www-data  4096 Sep 21 14:07 apps/
-rw-r-----  1 root     www-data 16522 Sep  9 13:41 AUTHORS
drwxr-x---  2 www-data www-data  4096 Sep 21 14:07 config/
-rw-r-----  1 root     www-data  3967 Sep  9 13:41 console.php
-rw-r-----  1 root     www-data 34520 Sep  9 13:41 COPYING
drwxr-x--- 23 root     www-data  4096 Sep  9 13:44 core/
-rw-r-----  1 root     www-data  5140 Sep  9 13:41 cron.php
drwxr-x---  2 root     www-data  4096 Sep 21 14:07 data/
-rw-r--r--  1 root     www-data  4400 Sep 21 14:08 .htaccess
-rw-r-----  1 root     www-data   156 Sep  9 13:41 index.html
-rw-r-----  1 root     www-data  2960 Sep  9 13:41 index.php
drwxr-x---  6 root     www-data  4096 Sep  9 13:41 lib/
-rwxr-x--x  1 root     www-data   283 Sep  9 13:41 occ*
drwxr-x---  2 root     www-data  4096 Sep  9 13:41 ocm-provider/
drwxr-x---  2 root     www-data  4096 Sep  9 13:41 ocs/
drwxr-x---  2 root     www-data  4096 Sep  9 13:41 ocs-provider/
-rw-r-----  1 root     www-data  3102 Sep  9 13:41 public.php
-rw-r-----  1 root     www-data  5332 Sep  9 13:41 remote.php
drwxr-x---  4 root     www-data  4096 Sep  9 13:41 resources/
-rw-r-----  1 root     www-data    26 Sep  9 13:41 robots.txt
-rw-r-----  1 root     www-data  2379 Sep  9 13:41 status.php
drwxr-x---  3 www-data www-data  4096 Sep  9 13:41 themes/
drwxr-x---  2 www-data www-data  4096 Sep  9 13:42 updater/
-rw-r-----  1 root     www-data   101 Sep  9 13:41 .user.ini
-rw-r-----  1 root     www-data   362 Sep  9 13:44 version.php
root@lin:~#
root@lin:~#

Cause and solution

The GUI updater is blocked because with the stricter root:www-data ownership the web server user cannot write the listed files. In the Nextcloud VM, updates are handled by a dedicated script instead: https://github.com/nextcloud/vm/blob/master/nextcloud_update.sh

👉Instead of using the GUI updater, run sudo bash /var/scripts/update.sh.
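To see up front which files the GUI updater would complain about, you can test writability from the web server user's point of view; a sketch using GNU find's ! -writable test:

```shell
# List top-level Nextcloud files/directories that www-data cannot write.
sudo -u www-data find /var/www/nextcloud -maxdepth 1 ! -writable
```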


Migrate nextcloud v15 with mariadb database to nextcloud v16 with postgresql

If you are running Nextcloud v15 with MariaDB and want to upgrade to Nextcloud v16, you have to migrate the database from MariaDB to PostgreSQL.

This can be done using the following commands, which I adjusted for Ubuntu 16.04; the original is from: https://www.techandme.se/we-migrated-to-postgresql/
 

#!/bin/bash

## Convert to PostgreSQL ##
# Tested on Ubuntu Server 16.04
# Make sure you have a full backup of your nextcloud installation


# Make sure only root can run our script
if [[ $EUID -ne 0 ]]; then
 echo "This script must be run as root, please type sudo -i and run it again." 1>&2
 exit 1
fi

service apache2 stop

. <(curl -sL https://raw.githubusercontent.com/nextcloud/vm/master/lib.sh)

NCUSER=pgsql_user_nextcloud

# Generate a random password for the new PostgreSQL user
PGDB_PASS=$(openssl rand -base64 24)

# Install PostgreSQL
apt update
check_command apt install postgresql-9.5

# Create DB
cd /tmp || exit
sudo -u postgres psql <<END
CREATE USER $NCUSER WITH PASSWORD '$PGDB_PASS';
CREATE DATABASE nextcloud_db WITH OWNER $NCUSER TEMPLATE template0 ENCODING 'UTF8';
END
check_command service postgresql restart

# Convert DB
sudo -u www-data php /var/www/nextcloud/occ db:convert-type --all-apps --password "$PGDB_PASS" pgsql $NCUSER 127.0.0.1 nextcloud_db
sudo -u www-data php /var/www/nextcloud/occ maintenance:repair

# Remove MySQL / MariaDB
read -p "Are you sure you want to remove MySQL?" -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]
then
    apt clean
    apt update
    dpkg -r mariadb-client-10.2
    dpkg -r mariadb-server-10.2
    dpkg -r libmysqlclient20:i386
    dpkg -r libmysqlclient20:amd64
    dpkg -r libmysqlclient18:amd64
    dpkg -r mysql
    apt purge mysql\* libmysql\* libmariadb\*
    apt autoremove -y
    rm -R /var/lib/mysql /var/lib/mysql-files /var/lib/mysql-keyring /var/mysql-upgrade /etc/mysql
fi

# Remove mysql.utf8mb4
if grep -q "mysql.utf8mb4" /var/www/nextcloud/config/config.php
then
sed -i "s|'mysql.utf8mb4' => true,||g" /var/www/nextcloud/config/config.php
sed -i '/^\s*$/d' /var/www/nextcloud/config/config.php
fi

# Show password
echo "Your new PostgreSQL password is: $PGDB_PASS. It's also written in your Nextcloud config.php file."

# Start Apache2
echo "Apache will start in 30 seconds... Press CTRL+C to abort."
sleep 30
service apache2 start

# Fetch the correct update script
if [ -f "$SCRIPTS"/update.sh ]
then
 rm "$SCRIPTS"/update.sh
 wget https://raw.githubusercontent.com/nextcloud/vm/master/static/update.sh -P "$SCRIPTS"
 chmod +x "$SCRIPTS"/update.sh
fi

exit
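After the migration you can verify that Nextcloud really uses PostgreSQL now; occ config:system:get reads a single value from config.php:

```shell
# Should print "pgsql" after a successful conversion.
sudo -u www-data php /var/www/nextcloud/occ config:system:get dbtype
```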

Monitor UniFi WLAN Access Point with PRTG with SNMPv3 Auth+Encrypted

This is a tiny guide on how to monitor your UniFi wireless access point, in this case a UniFi U7 Pro, via SNMPv3 with AES encryption and SHA authentication...