Showing posts with label Troubleshooting. Show all posts

Proxmox Update 8 to 9 does not boot anymore - black screen

Problem

After I updated my Asus/Intel NUC from Proxmox v8.4.14 to 9.1.6, it did not boot anymore, even though I had followed the instructions at https://pve.proxmox.com/wiki/Upgrade_from_8_to_9 and used the upgrade checker tool:
  1. Update checker with pve8to9 
    --> all green ✅
  2. Updated from 8.4.14 to 9.1.6 following this guide
    --> GUI & CLI showed new version ✅
    --> VMs and Containers continued to run normally ✅
  3. Reboot 
    --> does not boot anymore ❌
  4. BIOS Update of the Asus/Intel NUC did not help ❌


Solution

I found Dustin Rue's excellent blog entry: https://dustinrue.com/2025/12/recovering-from-a-failed-proxmox-upgrade/

1. Boot a live Linux from a USB stick (in my case Linux Mint)

2. In the live Linux terminal, run the following commands:

mount /dev/mapper/pve-root /mnt
mount /dev/sda2 /mnt/boot/efi
mount --bind /dev /mnt/dev
mount --bind /sys /mnt/sys
mount --bind /proc /mnt/proc
mount --bind /dev/pts /mnt/dev/pts
chroot /mnt
mount -t efivarfs efivarfs /sys/firmware/efi/efivars
grub-install --target x86_64-efi --no-floppy --bootloader-id proxmox /dev/sda
grub-install --target x86_64-efi --no-floppy --bootloader-id proxmox --removable /dev/sda
echo 'grub-efi-amd64 grub2/force_efi_extra_removable boolean true' | debconf-set-selections -v -u
apt install --reinstall grub-efi-amd64
update-initramfs -u -k all
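The device names in these commands (/dev/mapper/pve-root, /dev/sda2) are from my machine and vary per install. The ESP can usually be spotted by its vfat filesystem; the sketch below parses a hypothetical lsblk output, on the live system you would run lsblk -o NAME,FSTYPE directly:

```shell
# Sample lsblk output (hypothetical); the ESP is the vfat partition.
cat > /tmp/lsblk_sample <<'EOF'
sda
sda1
sda2 vfat
sda3 LVM2_member
EOF
# Print the device path of every vfat partition.
awk '$2 == "vfat" {print "/dev/" $1}' /tmp/lsblk_sample
# /dev/sda2
```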

Once this finished, I unmounted everything and rebooted:

umount /mnt/dev/pts
umount /mnt/proc
umount /mnt/dev
umount /mnt/sys/firmware/efi/efivars
umount /mnt/sys
umount /mnt/boot/efi
umount /mnt/
reboot
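Before the final reboot it is worth confirming that nothing is still mounted below /mnt. A sketch against a sample copy of /proc/mounts (hypothetical contents); on the live system you would grep /proc/mounts itself:

```shell
# Sample /proc/mounts content (hypothetical) with two leftover mounts.
cat > /tmp/mounts_sample <<'EOF'
/dev/mapper/pve-root /mnt ext4 rw 0 0
efivarfs /mnt/sys/firmware/efi/efivars efivarfs rw 0 0
EOF
# Count filesystems still mounted under /mnt; on the live system:
#   grep ' /mnt' /proc/mounts
grep -c ' /mnt' /tmp/mounts_sample
# 2  -> two filesystems still mounted, unmount them before rebooting
```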

Example

root@mint:~#
root@mint:~#
root@mint:~# mount /dev/mapper/pve-root /mnt
root@mint:~# mount /dev/sda2 /mnt/boot/efi/
root@mint:~# mount --bind /dev /mnt/dev
root@mint:~# mount --bind /sys /mnt/sys
root@mint:~# mount --bind /proc /mnt/proc
root@mint:~# mount --bind /dev/pts /mnt/dev/pts
root@mint:~#
root@mint:~# chroot /mnt
root@mint:/#
root@mint:/# mount -t efivarfs efivarfs /sys/firmware/efi/efivars/
root@mint:/#
root@mint:/# grub-install --target x86_64-efi --no-floppy --bootloader-id proxmox /dev/sda
Installing for x86_64-efi platform.
Installation finished. No error reported.
root@mint:/#
root@mint:/# grub-install --target x86_64-efi --no-floppy --bootloader-id proxmox --removable /dev/sda
Installing for x86_64-efi platform.
Installation finished. No error reported.
root@mint:/#
root@mint:/#
root@mint:/# echo 'grub-efi-amd64 grub2/force_efi_extra_removable boolean true' | debconf-set-selections -v -u
info: Trying to set 'grub2/force_efi_extra_removable' [boolean] to 'true'
info: Loading answer for 'grub2/force_efi_extra_removable'
root@mint:/#
root@mint:/# apt install --reinstall grub-efi-amd64
The following packages were automatically installed and are no longer required:
  bsdmainutils                    libdav1d6               libllvm15               libpython3.9-stdlib  perl-modules-5.32                   python3-soupsieve
  gcc-12-base                     libdns-export1110       libmpdec3               librav1e0            perl-modules-5.36                   python3-talloc
  libabsl20220623                 libdrm-nouveau2         libnsl-dev              libsubid4            proxmox-kernel-6.8.12-1-pve-signed  python3-tempita
  libapt-pkg6.0                   libdrm-radeon1          libnumber-compare-perl  libsvtav1enc1        pve-kernel-5.15                     python3-tz
  libavif15                       libfile-find-rule-perl  libopts25               libtext-glob-perl    pve-kernel-5.15.126-1-pve           python3-waitress
  libboost-context1.74.0          libflac12               libperl5.32             libthrift-0.17.0     pve-kernel-5.15.158-2-pve           python3-webtest
  libboost-coroutine1.74.0        libfmt9                 libperl5.36             libtiff5             python3-bs4                         python3.11
  libboost-filesystem1.74.0       libglusterd0            libprocps8              libtirpc-dev         python3-jaraco.classes              python3.11-minimal
  libboost-iostreams1.74.0        libicu67                libprotobuf23           liburing1            python3-ldb                         python3.9
  libboost-program-options1.74.0  libicu72                libpython3.11           libwebp6             python3-paste                       python3.9-minimal
  libboost-thread1.74.0           libisc-export1105       libpython3.11-minimal   libx265-199          python3-pastedeploy                 sgml-base
  libbpf0                         libjs-sencha-touch      libpython3.11-stdlib    libxcb-dri2-0        python3-pastedeploy-tpl             telnet
  libcbor0                        libldap-2.5-0           libpython3.9            libzpool5linux       python3-pytz                        usrmerge
  libcbor0.8                      libleveldb1d            libpython3.9-minimal    lua-lpeg             python3-singledispatch
Use 'apt autoremove' to remove them.

Installing:
  grub-efi-amd64

REMOVING:
  grub-pc

Summary:
  Upgrading: 0, Installing: 1, Removing: 1, Not Upgrading: 0
  Download size: 46.7 kB
  Freed space: 349 kB

Continue? [Y/n]
Get:1 http://download.proxmox.com/debian/pve trixie/pve-no-subscription amd64 grub-efi-amd64 amd64 2.12-9+pmx2 [46.7 kB]
Fetched 46.7 kB in 0s (398 kB/s)          
Preconfiguring packages ...
(Reading database ... 119801 files and directories currently installed.)
Removing grub-pc (2.12-9+pmx2) ...
Selecting previously unselected package grub-efi-amd64.
(Reading database ... 119792 files and directories currently installed.)
Preparing to unpack .../grub-efi-amd64_2.12-9+pmx2_amd64.deb ...
Unpacking grub-efi-amd64 (2.12-9+pmx2) ...
Setting up grub-efi-amd64 (2.12-9+pmx2) ...
Installing for x86_64-efi platform.
File descriptor 3 (pipe:[43286]) leaked on vgs invocation. Parent PID 3806: grub-install.real
File descriptor 3 (pipe:[43286]) leaked on vgs invocation. Parent PID 3806: grub-install.real
Installation finished. No error reported.
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.17.13-1-pve
Found initrd image: /boot/initrd.img-6.17.13-1-pve
Found linux image: /boot/vmlinuz-6.8.12-17-pve
Found initrd image: /boot/initrd.img-6.8.12-17-pve
Found linux image: /boot/vmlinuz-6.8.12-1-pve
Found initrd image: /boot/initrd.img-6.8.12-1-pve
Found linux image: /boot/vmlinuz-5.15.158-2-pve
Found initrd image: /boot/initrd.img-5.15.158-2-pve
Found linux image: /boot/vmlinuz-5.15.126-1-pve
Found initrd image: /boot/initrd.img-5.15.126-1-pve
Found linux image: /boot/vmlinuz-5.4.203-1-pve
Found initrd image: /boot/initrd.img-5.4.203-1-pve
Found linux image: /boot/vmlinuz-5.4.73-1-pve
Found initrd image: /boot/initrd.img-5.4.73-1-pve
Found memtest86+ 64bit EFI image: /boot/memtest86+x64.efi
Found memtest86+ 32bit EFI image: /boot/memtest86+ia32.efi
Found memtest86+ 64bit image: /boot/memtest86+x64.bin
Found memtest86+ 32bit image: /boot/memtest86+ia32.bin
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
Adding boot menu entry for UEFI Firmware Settings ...
done
Processing triggers for man-db (2.13.1-1) ...
root@mint:/#
root@mint:/#
root@mint:/# 
root@mint:/# 
root@mint:/# 
root@mint:/# 
root@mint:/# update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-6.17.13-1-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-6.8.12-17-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-6.8.12-1-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-5.15.158-2-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-5.15.126-1-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-5.4.203-1-pve
W: zstd compression (CONFIG_RD_ZSTD) not supported by kernel, using gzip
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-5.4.73-1-pve
W: zstd compression (CONFIG_RD_ZSTD) not supported by kernel, using gzip
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
root@mint:/#
root@mint:/#
root@mint:/#
root@mint:/# exit
exit
root@mint:~#
root@mint:~# ls /mnt
bin  boot  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@mint:~#
root@mint:~# umount /mnt/dev/pts
root@mint:~# umount /mnt/proc
root@mint:~# umount /mnt/dev
root@mint:~# umount /mnt/sys/firmware/efi/efivars
root@mint:~# umount /mnt/sys
root@mint:~# umount /mnt/boot/efi
root@mint:~# umount /mnt/
root@mint:~#
root@mint:~# reboot



Azure Ubuntu VM not fully upgraded

Using apt-get update && apt-get upgrade -y on your Ubuntu VM in Azure sometimes does not upgrade all packages:

root@hostname6:~#
root@hostname6:~#
root@hostname6:~# apt-get update && apt-get upgrade -y
Hit:1 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Hit:2 http://azure.archive.ubuntu.com/ubuntu jammy InRelease
Hit:3 http://azure.archive.ubuntu.com/ubuntu jammy-updates InRelease
Hit:4 http://azure.archive.ubuntu.com/ubuntu jammy-backports InRelease
Hit:5 http://azure.archive.ubuntu.com/ubuntu jammy-security InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following packages have been kept back:
libnss-systemd libpam-systemd libsystemd0 libudev1 linux-azure linux-cloud-tools-azure linux-headers-azure linux-image-azure linux-tools-azure
systemd systemd-sysv udev
0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
root@hostname6:~#


Solution

sudo apt-get install aptitude -y
sudo aptitude safe-upgrade

aptitude safe-upgrade
Upgrades installed packages to their most recent version. Installed packages will not be removed unless they are unused [...] Packages which are not currently installed may be installed to resolve dependencies unless the --no-new-installs command-line option is supplied.
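aptitude is one way to resolve this; plain apt can do it too. A sketch that pulls the kept-back package names out of captured apt output (the file and package list below are made up) so they can be passed to apt-get install explicitly:

```shell
# Hypothetical capture of the apt-get upgrade run shown above.
cat > /tmp/apt_output.txt <<'EOF'
The following packages have been kept back:
  libsystemd0 libudev1 linux-azure
0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
EOF
# Extract the lines between "kept back:" and the summary line.
kept=$(sed -n '/kept back:/,/^[0-9]/{/kept back:/d;/^[0-9]/d;p}' /tmp/apt_output.txt | xargs)
echo "$kept"
# libsystemd0 libudev1 linux-azure
# These can then be upgraded explicitly:  sudo apt-get install $kept
# or via: sudo apt-get dist-upgrade   (which, like aptitude safe-upgrade,
# may install new packages to resolve the held-back dependencies)
```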


Example

root@hostname6:~#
root@hostname6:~# sudo aptitude safe-upgrade
Resolving dependencies...
The following NEW packages will be installed:
linux-azure-6.8-cloud-tools-6.8.0-1041{a} linux-azure-6.8-headers-6.8.0-1041{a} linux-azure-6.8-tools-6.8.0-1041{a}
linux-cloud-tools-6.8.0-1041-azure{a} linux-headers-6.8.0-1041-azure{a} linux-image-6.8.0-1041-azure{a} linux-modules-6.8.0-1041-azure{a}
linux-tools-6.8.0-1041-azure{a}
The following packages will be upgraded:
libnss-systemd libpam-systemd libsystemd0 libudev1 linux-azure linux-cloud-tools-azure linux-headers-azure linux-image-azure linux-tools-azure
systemd systemd-sysv udev
12 packages upgraded, 8 newly installed, 0 to remove and 0 not upgraded.
Need to get 67.2 MB of archives. After unpacking 269 MB will be used.
Do you want to continue? [Y/n/?]

[..]
Current status: 0 (-12) upgradable.
root@hostname6:~#
root@hostname6:~# apt-get update
Hit:1 http://azure.archive.ubuntu.com/ubuntu jammy InRelease
Hit:2 http://azure.archive.ubuntu.com/ubuntu jammy-updates InRelease
Hit:3 http://azure.archive.ubuntu.com/ubuntu jammy-backports InRelease
Hit:4 http://azure.archive.ubuntu.com/ubuntu jammy-security InRelease
Hit:5 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Reading package lists... Done
root@hostname6:~#
root@hostname6:~# apt-get upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@hostname6:~#
root@hostname6:~#

Splunk Version 9.4.4 shows error while starting - VM CPU Flags are missing

Problem 

When you update Splunk to e.g. version 9.4.4, you may get this error while starting Splunk:

Migrating to:
VERSION=9.4.4
BUILD=f627d88b766b
PRODUCT=splunk
PLATFORM=Linux-x86_64

********** BEGIN PREVIEW OF CONFIGURATION FILE MIGRATION **********

-> Currently configured KVSTore database path="/opt/splunk/var/lib/splunk/kvstore"
CPU Vendor: GenuineIntel
CPU Family: 15
CPU Model: 6
CPU Brand: \x
AVX Support: No
SSE4.2 Support: No
AES-NI Support: No

-> isSupportedArchitecture=0
-> isKVstoreDisabled=0
-> isKVstoreDatabaseFolderExist=0
-> isKVstoreDiagnosticsFolderExist=0
-> isKVstoreVersionFileFolderExist=1
-> isKVstoreVersionFileFolderEmpty=0
-> isKVstoreVersionFileMatched=1
-> isKVstoreVersionFromBsonMatched=0
-> isSupportedArchitecture=0
* Active KVStore version upgrade precheck FAILED!
  -- This check is to ensure that KVStore version 4.2 been in use.
  -- In order to fix this failed check, re-install the previous Splunk version, and follow the KVStore upgrade documentation: https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/MigrateKVstore#Upgrade_KV_store_server_to_version_4.2 .
Some upgrade prechecks failed!
ERROR while running splunk-preinstall.
 

Cause

This might be caused by the CPU features AVX, SSE4.2 and AES-NI missing from the Splunk VM. They are required by the new KV store MongoDB version introduced in Splunk 9.4: https://help.splunk.com/en/splunk-enterprise/administer/admin-manual/9.4/administer-the-app-key-value-store/upgrade-the-kv-store-server-version#Upgrade_the_KV_store_server_version

You can check inside your VM using:
splnonroot@devubu22h102:/opt/splunk$
splnonroot@devubu22h102:/opt/splunk$
splnonroot@devubu22h102:/opt/splunk$ grep -o -w 'sse4_2\|avx\|aes' /proc/cpuinfo | sort -u
splnonroot@devubu22h102:/opt/splunk$

Solution 

In your VM hypervisor (VMware ESXi, Microsoft Hyper-V, Proxmox, etc.), pass the necessary CPU flags/features through to the Splunk VMs.

Example for Proxmox:

  1. Check inside your VM: grep -o -w 'sse4_2\|avx\|aes' /proc/cpuinfo | sort -u
  2. Edit /etc/pve/qemu-server/*Your VM ID*.conf and add cpu: host - this forwards all host CPU hardware flags to the VM
  3. Reboot the VM
  4. Check inside your VM again: grep -o -w 'sse4_2\|avx\|aes' /proc/cpuinfo | sort -u
  5. Start Splunk
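The check from steps 1 and 4, demonstrated against a made-up /proc/cpuinfo flags line so the expected output is visible:

```shell
# Hypothetical flags line as it would appear in /proc/cpuinfo after cpu: host.
cat > /tmp/cpuinfo_sample <<'EOF'
flags : fpu vme de pse tsc msr sse4_2 avx aes hypervisor
EOF
# Same check as inside the VM, -w matches whole words, -o prints each match.
grep -o -w 'sse4_2\|avx\|aes' /tmp/cpuinfo_sample | sort -u
# aes
# avx
# sse4_2
# On the Proxmox host, instead of editing the .conf file by hand, the CPU type
# can usually also be set via the qm CLI (verify against your Proxmox version):
#   qm set <vmid> --cpu host
```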

Before - Proxmox VM config without the CPU features:

root@proxmox1:~#
root@proxmox1:~#
root@proxmox1:~# cat /etc/pve/qemu-server/*Your VM ID*.conf
boot: order=scsi0;ide2;net0
cores: 2
ide2: local:iso/ubuntu-22.04-live-server-amd64_2.iso,media=cdrom
memory: 8192
name: devubu22h102
net0: virtio=CA:*redacted*:CC,bridge=vmbr0,firewall=1
[...]

[snapshot-pre-splunkupdate]
boot: order=scsi0;ide2;net0
cores: 2
ide2: local:iso/ubuntu-22.04-live-server-amd64_2.iso,media=cdrom
memory: 8192
name: devubu22h102
net0: virtio=CA:*redacted*:CC,bridge=vmbr0,firewall=1
[...]
root@proxmox1:~#
root@proxmox1:~# 

After - Proxmox VM config with the CPU features (cpu: host):


1.
root@proxmox1:~#
root@proxmox1:~#
root@proxmox1:~# cat /etc/pve/qemu-server/*Your VM ID*.conf
boot: order=scsi0;ide2;net0
cores: 2
cpu: host
ide2: local:iso/ubuntu-22.04-live-server-amd64_2.iso,media=cdrom
memory: 8192
name: devubu22h102
net0: virtio=CA:*redacted*:CC,bridge=vmbr0,firewall=1
[...]

[snapshot-pre-splunkupdate]
boot: order=scsi0;ide2;net0
cores: 2
cpu: host
ide2: local:iso/ubuntu-22.04-live-server-amd64_2.iso,media=cdrom
memory: 8192
name: devubu22h102
net0: virtio=CA:*redacted*:CC,bridge=vmbr0,firewall=1
[...]
root@proxmox1:~#
root@proxmox1:~#


2. Reboot the VM

3. Then inside your VM the CPU flags are visible:
splnonroot@devubu22h102:/opt/splunk$
splnonroot@devubu22h102:/opt/splunk$ grep -o -w 'sse4_2\|avx\|aes' /proc/cpuinfo | sort -u
aes
avx
sse4_2
splnonroot@devubu22h102:/opt/splunk$
splnonroot@devubu22h102:/opt/splunk$ 


4. Start Splunk again

Splunk SearchHead Cluster Artifact Proxying - Splunk internally sharing cached search results

When the same search is run twice in a Splunk cluster, does it use a cache for the results or search the data a second time?

A Splunk search head search artifact is the results and metadata from a completed Splunk search job (see: https://docs.splunk.com/Splexicon:Searchartifact)

So an artifact is a completed search, which is cached for 10 minutes.

In a search head cluster the search artifacts are replicated. However, this takes a few seconds. What happens if a search is run again on another search head and the artifact isn't replicated to that search head yet?

The search head captain, which steers those requests, uses artifact proxying: the artifact is proxied from the search head which has already completed the search to the other search head.

See also: https://docs.splunk.com/Documentation/Splunk/9.4.0/DistSearch/SHCarchitecture#How_the_cluster_handles_search_artifacts

Example

  • 15 May 2025 11:21:01am - User1 starts the search "index=abc sourcetype=def" @04.May 2025 05:00:00am to 06:00:00am on SH1
  • 15 May 2025 11:21:03am - The search "index=abc sourcetype=def" @04.May 2025 05:00:00am to 06:00:00am on SH1 is complete
  • 15 May 2025 11:21:13am - User2 runs the same search "index=abc sourcetype=def" @04.May 2025 05:00:00am to 06:00:00am on SH2
  • The search head cluster captain will proxy the search artifact (search results) from SH1 to SH2, so the search doesn't have to run a second time

SPL query for Splunk proxied artifacts

index=_internal host IN (searchhead01*,searchhead02*,searchhead03*) sourcetype=splunkd_access uri_path="/services/search/jobs*" isProxyRequest=true | stats count by method host file



Splunk alert for buckets which are not correctly replicated

The following is a Splunk savedsearch/alert which finds buckets that are not correctly replicated to all indexers.

Example

For example, if you have a multisite cluster with 2 sites and each site should contain 2 copies of a bucket:

splunk_server_clustering_available_sites: "site1,site2"
splunk_server_clustering_site_replication_factor: 'origin:1, site1:2, site2:2, total:4'
splunk_server_clustering_site_search_factor: 'origin:1, site1:2, site2:2, total:4'


Then the following SPL or savedsearch/alert might help identify if multiple buckets of an index are only replicated once:

| dbinspect index=* ```<-- show all buckets of all indexes ``` 
|search NOT state=hot ```<-- only warm & cold buckets ``` 
|eventstats count by bucketId  ```<-- list all bucket-ids only once, count how often they occur ``` 
|search count<2 ```<-- filter for all buckets that occur only once and are not replicated 4 times ``` 
|stats count by index ```<-- show all indexes that have buckets which were replicated only once ``` 
|search count>10 ```<-- show all indexes that have more than 10 buckets which were replicated only once```
``` All buckets should be replicated 4 times according to the search/replication factor of the Splunk multisite cluster. This alert shows if there are indexes with over 10 buckets that are only present once instead of being replicated on 4 indexers``` 


Screenshot:

[Screenshot: dbinspect results showing Splunk buckets that are replicated only once]

Explanation of the screenshot:

[Annotated screenshot of the dbinspect results]


Edge browser internal debug tools - example network traffic

Microsoft Edge has some internal debug tools, listed at edge://edge-urls/

Example usage net-export - debugging network traffic

  1. edge://net-export/ 


  2. Start & set file location (edge-net-export-log.json)
  3. Reproduce the issue in a new tab
  4. Stop recording
  5. The recording can be viewed via: https://netlog-viewer.appspot.com/#import
    → See Privacy: https://chromium.googlesource.com/catapult/+/master/netlog_viewer/
    "This app loads NetLog files generated by Chromium's chrome://net-export. Log data is processed and visualized entirely on the client side (your browser). Data is never uploaded to a remote endpoint."
  6. Select and load the file
  7. For example, Proxy/ProxyPAC Configuration: https://netlog-viewer.appspot.com/#proxy


  8. For example, detailed Event Timeline: https://netlog-viewer.appspot.com/#events


  9. For example, detailed DNS Events:


  10. Detailed Socket overview: https://netlog-viewer.appspot.com/#sockets
  11. Detailed HTTP/2 overview: https://netlog-viewer.appspot.com/#http2
  12. Detailed QUIC overview: https://netlog-viewer.appspot.com/#quic

Update Nextron Aurora lite EDR Agent

To manually update Nextron's Aurora Lite EDR agent, follow these steps: https://aurora-agent-manual.nextron-systems.com/en/latest/usage/upgrade-and-updates.html

  1. Download Aurora Lite files & license: https://www.nextron-systems.com/aurora/
  2. Unzip the files into a folder
  3. Copy the license file into that folder
  4. Start a PowerShell with Admin rights
  5. Execute aurora-agent-util.exe upgrade --restart-service

Example

PS C:\Users\clw11c493\Downloads\aurora-agent-lite-win-pack_v1.2.1>
PS C:\Users\clw11c493\Downloads\aurora-agent-lite-win-pack_v1.2.1> aurora-agent-util.exe upgrade --restart-service
Aug 10 19:30:37 clw11c493 AURORA: Info MODULE: Aurora-Agent MESSAGE: License file found OWNER: some@address.com VALID: true VALID_FROM: 2024/04/15 VALID_TO: 2025/02/21
Aug 10 19:30:37 clw11c493 AURORA: Info MODULE: Aurora-Agent MESSAGE: Checking for new version PRODUCT: aurora-agent-lite-win
Aug 10 19:31:08 clw11c493 AURORA: Info MODULE: Aurora-Agent MESSAGE: Stopped installed Aurora Agent service
Aug 10 19:31:08 clw11c493 AURORA: Info MODULE: Aurora-Agent MESSAGE: Installing downloaded package INSTALL_PATH: C:\Program Files\Aurora-Agent
Aug 10 19:31:13 clw11c493 AURORA: Info MODULE: Aurora-Agent MESSAGE: Started installed Aurora Agent service
Aug 10 19:31:13 clw11c493 AURORA: Info MODULE: Aurora-Agent MESSAGE: Updated Aurora Agent NEW: 1.2.1 OLD: 1.1.5
PS C:\Users\clw11c493\Downloads\aurora-agent-lite-win-pack_v1.2.1>
PS C:\Users\clw11c493\Downloads\aurora-agent-lite-win-pack_v1.2.1>  

 

To debug Aurora, you can use aurora-agent-64.exe --debug

Nextron Aurora EDR agent shows \Pr Error

Problem

When starting Nextron's Aurora EDR Lite agent, the program shows the following error message:

PS C:\Program Files\Aurora-Agent> aurora-agent-64.exe --dashboard
      ___                                  __    _ __
     /   | __  ___________  _________ _   / /   (_) /____
    / /| |/ / / / ___/ __ \/ ___/ __ `/  / /   / / __/ _ \
   / ___ / /_/ / /  / /_/ / /  / /_/ /  / /___/ / /_/  __/
  /_/  |_\__,_/_/   \____/_/   \__,_/  /_____/_/\__/\___/


  Aurora Agent Lite Version 1.2.1 (9da9fbf29275c), Signature Revision 2024/08/10-134221 (Sigma r2024-07-17-29-gace902b68)
  (C) Nextron Systems GmbH, 2022

Aug 10 19:51:16 clw11c493 AURORA: Error MODULE: EventDistributor MESSAGE: Could not parse process exclude ERROR: error parsing regexp: invalid character class range: `\Pr` LINE: error parsing regexp: invalid character class range: `\Pr`
Aug 10 19:51:16 clw11c493 AURORA: Error MODULE: EventDistributor MESSAGE: Could not parse process exclude ERROR: error parsing regexp: invalid character class range: `\Pr` LINE: error parsing regexp: invalid character class range: `\Pr`
Aug 10 19:51:16 clw11c493 AURORA: Error MODULE: EventDistributor MESSAGE: Could not parse process exclude ERROR: error parsing regexp: invalid character class range: `\Pr` LINE: error parsing regexp: invalid character class range: `\Pr`


Solution

Your process-excludes.cfg configuration (C:\Program Files\Aurora-Agent\config\process-excludes.cfg) probably has a missing escape "\" in a process path (Aurora matches those process paths using regular expressions):

Wrong:
^"C:\Program Files (x86)\

Correct:
^"C:\\Program Files (x86)\\
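In a regex, a bare "\P" starts an escape sequence, which is why the unescaped path breaks Aurora's regex parser; doubling the backslash makes it match a literal backslash. A sketch using grep -E as a stand-in for the agent's regex engine (the path is made up):

```shell
# A hypothetical process path as it would appear in an event.
path='"C:\Program Files (x86)\Vendor\tool.exe"'
# The corrected pattern with escaped backslashes (and parentheses) matches:
printf '%s\n' "$path" | grep -cE '^"C:\\Program Files \(x86\)\\'
# 1  -> one matching line; the unescaped form would be rejected as an
#       invalid escape sequence instead
```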
 

Nextcloud shows error "Data directory and your files are probably accessible from the Internet"

Starting with Nextcloud v29, the error "Data directory and your files are probably accessible from the Internet" is shown.

[Screenshot: Nextcloud error "Data directory and your files are probably accessible from the Internet"]

 

Cause

root@prdanc2049:/var/www/nextcloud/config# pwd
/var/www/nextcloud/config
root@prdanc2049:/var/www/nextcloud/config#
root@prdanc2049:/var/www/nextcloud/config# cat config.php
<?php
$CONFIG = array (
  'passwordsalt' => 'Redacted',
  'secret' => 'Redacted',
  'trusted_domains' =>
  array (
    0 => 'localhost',
    1 => '10.68.127.123',
    2 => 'nextcloud',

    3 => 'mypublic.domain.com',
  ),
  'datadirectory' => '/mnt/ncdata',
  'dbtype' => 'pgsql',
[...]

Solution

Remove the IP addresses, "localhost" and "nextcloud" from the trusted_domains in /var/www/nextcloud/config/config.php:

root@prdanc2049:/var/www/nextcloud/config# pwd
/var/www/nextcloud/config
root@prdanc2049:/var/www/nextcloud/config#
root@prdanc2049:/var/www/nextcloud/config# cat config.php
<?php
$CONFIG = array (
  'passwordsalt' => 'Redacted',
  'secret' => 'Redacted',
  'trusted_domains' =>
  array (
    0 => 'mypublic.domain.com',
  ),
  'datadirectory' => '/mnt/ncdata',
  'dbtype' => 'pgsql',
[...]


Information regarding trusted_domains in the config.php: https://docs.nextcloud.com/server/stable/admin_manual/configuration_server/config_sample_php_parameters.html#trusted-domains

Your list of trusted domains that users can log into. Specifying trusted domains prevents host header poisoning. Do not remove this, as it performs necessary security checks.

You can specify:

  • the exact hostname of your host or virtual host, e.g. demo.example.org.

  • the exact hostname with permitted port, e.g. demo.example.org:443. This disallows all other ports on this host

  • use * as a wildcard, e.g. ubos-raspberry-pi*.local will allow ubos-raspberry-pi.local and ubos-raspberry-pi-2.local

  • the IP address with or without permitted port, e.g. [2001:db8::1]:8080. Using TLS certificates where commonName=<IP address> is deprecated.

Fix Nextcloud missing database indexes

 

[Screenshot: Nextcloud security warning - database indexes missing]

The Nextcloud administration page shows the following warning:

The database is missing some indexes. Due to the fact that adding indexes on big tables could take some time they were not added automatically. By running "occ db:add-missing-indices" those missing indexes could be added manually while the instance keeps running. Once the indexes are added queries to those tables are usually much faster. Missing optional index "mail_messages_msgid_idx" in table "mail_messages". Missing optional index "fs_storage_path_prefix" in table "filecache".

Solution

Login to your Nextcloud system and use the command "sudo -u www-data php /var/www/nextcloud/occ db:add-missing-indices" to fix it.
 
root@nextcloud:~#
root@nextcloud:~# sudo -u www-data php /var/www/nextcloud/occ db:add-missing-indices
Adding additional mail_messages_msgid_idx index to the oc_mail_messages table, this can take some time...
oc_mail_messages table updated successfully.
Adding additional fs_storage_path_prefix index to the oc_filecache table, this can take some time...
oc_filecache table updated successfully.
root@nextcloud:~#


Nextcloud behind nginx reverse proxy error on iPhone and iPad

When publishing a Nextcloud website through an nginx reverse proxy, you might get the error ERR_CONNECTION_CLOSED on Apple iPhones (iOS) and iPads (iPadOS) in all browsers, e.g. Safari or Chrome.

Solution

Add the following line to the nginx reverse proxy configuration:
proxy_hide_header Upgrade;
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header 

By default, nginx does not pass the header fields “Date”, “Server”, “X-Pad”, and “X-Accel-...” from the response of a proxied server to a client. The proxy_hide_header directive sets additional fields that will not be passed. If, on the contrary, the passing of fields needs to be permitted, the proxy_pass_header directive can be used.

Syntax: proxy_hide_header field;
Default:
Context: http, server, location
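For a plain (non-GUI) nginx setup, the directive goes into the proxied server/location block. A minimal sketch; the hostname and upstream address are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name nextcloud.example.com;      # placeholder FQDN

    location / {
        proxy_pass http://10.0.0.10:80;     # placeholder Nextcloud backend
        proxy_set_header Host $host;
        proxy_hide_header Upgrade;          # the fix: do not pass "Upgrade" to clients
    }
}
```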

Nginx GUI configuration:

  1. Login to your Nginx Proxy Manager.
  2. Open the 3 dots settings menu of the NextCloud host and select “Edit”
  3. In the tab menu at the top of the window that has just opened select “Advanced” and insert the following in the “Custom Nginx Configuration” box:
    proxy_hide_header Upgrade;
  4. Click "save". 

 

Source: https://help.nextcloud.com/t/nextcloud-behind-nginx-proxy-manager-and-safari-ios-macos-no-access/142234/13

New proxmox VM does not boot

When adding a new VM (in this example the Nextcloud appliance VM from https://www.hanssonit.se/nextcloud-vm/) to an old Proxmox version like 6 (Debian 10), the VM might not boot and stay stuck showing Booting from Hard Disk ...

[Screenshot: VM console stuck at "Booting from Hard Disk ..."]

Solution

 

  1. Update your Proxmox system, e.g. from version 6 (Debian 10 - "buster") to Proxmox version 7 (Debian 11 - "bullseye"), see https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
  2. Import the VM again and start it


 

Proxmox update error "Sub-process /usr/share/proxmox-ve/pve-apt-hook returned an error code (1)"

Problem

During a Proxmox update (e.g. from Proxmox version 6 to 7) you receive the following error:

[...]
100% [608 zstd 34.1 kB/630 kB 5%] 1,337 kB/s 0s
100% [Working] 1,337 kB/s 0s

Fetched 255 MB in 2min 30s (1,702 kB/s)
W: (pve-apt-hook) !! WARNING !!
W: (pve-apt-hook) You are attempting to remove the meta-package 'proxmox-ve'!
W: (pve-apt-hook)
W: (pve-apt-hook) If you really want to permanently remove 'proxmox-ve' from your system, run the following command
W: (pve-apt-hook) touch '/please-remove-proxmox-ve'
W: (pve-apt-hook) run apt purge proxmox-ve to remove the meta-package
W: (pve-apt-hook) and repeat your apt invocation.
W: (pve-apt-hook)
W: (pve-apt-hook) If you are unsure why 'proxmox-ve' would be removed, please verify
W: (pve-apt-hook) - your APT repository settings
W: (pve-apt-hook) - that you are using 'apt full-upgrade' to upgrade your system
E: Sub-process /usr/share/proxmox-ve/pve-apt-hook returned an error code (1)
E: Failure running script /usr/share/proxmox-ve/pve-apt-hook
root@proxmox1:~#
root@proxmox1:~# 

 

Solution

 
Proxmox VE 6.x is based on Debian 10.x which is called “buster”.
Proxmox VE 7.x is based on Debian 11.x which is called “bullseye”.  

  1. Check if your /etc/apt/sources.list file and the repository files under /etc/apt/sources.list.d/ (e.g. pve-enterprise.list) still have "buster" (Proxmox version 6) in them and replace it with "bullseye", e.g. with sed -i -e 's/buster/bullseye/g' /etc/apt/sources.list.d/pve-enterprise.list
  2. Run apt update again
  3. Run apt dist-upgrade again
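The replacement from step 1 can be tried safely on a scratch copy first. A sketch; the file contents below are a typical example, on the real system the files live under /etc/apt/:

```shell
# Scratch copy of a typical Proxmox 6 repository file (hypothetical path).
cat > /tmp/pve-enterprise.list <<'EOF'
deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise
EOF
# Replace every occurrence of the old codename in place.
sed -i -e 's/buster/bullseye/g' /tmp/pve-enterprise.list
cat /tmp/pve-enterprise.list
# deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise
```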

Confluence behind LoadBalancer with another domain results in XSRF error

If you have an Atlassian Confluence instance that is published by a load balancer or reverse proxy using another domain, you might run into an XSRF error.

Example

Confluence FQDN: somehostname.domain.tld
LoadBalancer Confluence FQDN: confluence.domain.tld

Some actions like uploading your profile picture (https://confluence.domain.tld/users/profile/editmyprofilepicture.action) do not work. You'll receive a generic error from the Confluence page (see the red box in the screenshot below). If you check the HTTP header response, you'll see XSRF check failed. It is caused by Confluence's cross-site request forgery (CSRF) protection.

Confluence XSRF Error

Solution

Edit Confluence's server.xml and add the FQDN of the load balancer or reverse proxy.
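A minimal sketch of what the Tomcat connector in Confluence's conf/server.xml can look like with the proxy attributes set. Only proxyName, proxyPort and scheme are the point here; the other attribute values are illustrative and should match your existing connector, and confluence.domain.tld is the load balancer FQDN from the example above.

```xml
<!-- Sketch: Tomcat connector in <confluence-install>/conf/server.xml.
     proxyName/proxyPort tell Tomcat which external hostname/port clients use,
     so Confluence's XSRF check accepts requests arriving via the load balancer. -->
<Connector port="8090" connectionTimeout="20000" redirectPort="8443"
           maxThreads="48" minSpareThreads="10"
           enableLookups="false" acceptCount="10" URIEncoding="UTF-8"
           scheme="https" secure="true"
           proxyName="confluence.domain.tld" proxyPort="443"/>
```

Restart Confluence after changing server.xml for the new connector settings to take effect.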

More information can be found here: https://confluence.atlassian.com/kb/cross-site-request-forgery-csrf-protection-changes-in-atlassian-rest-779294918.html



Overview of public interfaces for SOC/IT-Security staff

In case of an IT-security incident or emergency, or if a new critical vulnerability (like log4j in December 2021) arises, it is good to be prepared, so you can quickly answer questions like:

  • "Are we affected?"
  • "Do we use this technology?"
  • "Where do we use this vulnerable protocol?"
  • "To whom is the attack surface exposed?"
  • "Are there mitigations in place?"
  • "Is it exploitable without authentication in our setup?"
  • "Where is the best place to apply a first mitigation?"
  • etc.
An overview like the following can and will be helpful for your IT-security staff or your Security Operations Center (SOC):

System | Internet Facing | Protocol | Authentication | Security | Used Products/Vendors | Logs sent to SIEM | Contact Person | Known Weaknesses
Websites | Yes, exposed to all public IP addresses | HTTPS (TCP:443) & HTTP (TCP:80, HTTP 301 redirect to HTTPS) | None | Web Application Firewall | F5 BigIP LoadBalancer WAF & Apache container on OpenShift | Yes | Link to CMDB | Websites may contain 3rd-party code, SBOM see CMDB
Managed File Transfer | Yes, but limited to dedicated public IP addresses of partners | HTTPS (TCP:443) | HTTPS tokens | Web Application Firewall | F5 BigIP LoadBalancer WAF, IPSwitch | Yes | Link to CMDB | Runs on a VM as appliance, OS might not be hardened by the vendor
Citrix | Yes, exposed to all public IP addresses | HTTPS (TCP:443) | MFA | NetScaler WAF | Citrix Systems + Okta MFA | Yes | Link to CMDB | NetScaler WAF ruleset might be out of date
Mailserver | Yes, exposed to all public IP addresses | SMTP (TCP:25) | None | AntiSpam mail gateway & AV sandbox | Cisco E-Mail Security | Yes | Link to CMDB | Mail gateways run on hardware, might not be hardened by the vendor
SSLVPN S2E | Yes, exposed to all public IP addresses | HTTPS (TCP:443) | Mutual TLS cert-based + MFA | Azure DDoS | FortiGate SSLVPN Azure VM + Okta MFA | Yes | Link to CMDB | Possible FortiGate FortiOS SSLVPN vulnerabilities
M365 ActiveSync | Yes, exposed to all public IP addresses | HTTPS (TCP:443) | Mutual TLS cert-based | Azure DDoS | Microsoft 365 + Intune | Yes | Link to CMDB | Not part of own vulnerability scanner
VPN S2S | Yes, but limited to dedicated public IP addresses of partners | IPsec (UDP:500 & UDP:4500 & ESP) | IPsec IKEv2 cert-based auth | Azure DDoS | FortiGate SSLVPN Azure VM | | Link to CMDB | -
DMARC SaaS | Yes, exposed to all public IP addresses | DNS (UDP:53), HTTP (TCP:80), HTTPS (TCP:443), SMTP (TCP:25) | None | - | dmarcadvisor.com SaaS | No | Link to CMDB | Not part of own vulnerability scanner
DNS Server | Yes, but limited to dedicated public IP addresses of partners | DNS (UDP:53 & TCP:53) | None | Azure Network Security Groups | RHEL Bind | Yes | Link to CMDB | -
ISP Routers | Yes, but limited to dedicated public IP addresses of ISP routers | BGP (TCP:179), BFD, Ping (ICMP:0/8) | BGP MD5 auth | - | Extreme Networks XOS | Yes | Link to CMDB |
etc. | etc. | etc. | etc. | etc. | etc. | etc. | etc. | etc.

 

Of course you can add many more columns, such as:

  • "SBOM technologies used" (for example: RHEL, Apache Tomcat, OpenSSL, log4j, Puppet, Ansible, Splunk Universal Forwarder, AppDynamics, ...)
  • Direct links to your firewall management system, WAF or SIEM
  • "Is it part of our vulnerability scanner?"
  • "Is the vulnerability scanner scanning it authenticated?"
  • "Is the system/application hardened?"
  • and so on :-)
In case of an IT-security emergency this list will help you sort out the first steps to mitigate and fix the issue on the publicly exposed interfaces (towards the internet or business partners). However, this is only one of many necessary steps - always "assume breach" and make sure an attacker controlling a client or server is still unable to spread (unnoticed) through your company's (cloud) network.

Fix blocked LDAP user in a GitLab container using the GitLab Rails console

If you are running GitLab in a docker container and you are using a directory service, for example Active Directory with LDAPS for authentication, you might face the challenge that when a user is moved to another AD group, or the AD group used as user filter is deleted, GitLab marks the user as "blocked".

Unblock the LDAP user in GitLab

  1. Connect to the docker host server
  2. Open the GitLab Rails console using docker exec -it <container-name> gitlab-rails console -e production
  3. Find the user using user = User.find_by_email("someone@e-mail")
  4. Check the user's state using user.state
  5. Unblock the user using user.state = "active"
  6. Save using user.save
  7. Exit

Example:

prdrhel8180:/ #
prdrhel8180:/ # docker exec -it gitlab gitlab-rails console -e production
--------------------------------------------------------------------------------
Ruby: ruby 2.7.5p203 (2021-11-24 revision f69aeb8314) [x86_64-linux]
GitLab: 15.3.1-ee (518311979e3) EE
GitLab Shell: 14.10.0
PostgreSQL: 12.10
------------------------------------------------------------[ booted in 37.73s ]
Loading production environment (Rails 6.1.6.1)
irb(main):001:0> user = User.find_by_email("someone@e-mail")
=> nil
irb(main):002:0> user = User.find_by_email("someone@e-mail.com")
=> #<User id:55 @someone>
irb(main):003:0> user.state
=> "ldap_blocked"
irb(main):004:0> user.state = "active"
=> "active"
irb(main):005:0> user.save
=> true
irb(main):006:0> exit
prdrhel8180:/ #
prdrhel8180:/ #

Fix the LDAP user filter

If the user was blocked due to a deleted AD group that was used as LDAP user filter, then you have to fix GitLab's LDAP configuration for Active Directory. GitLab logs this in /var/log/gitlab/gitlab-rails/application.log as:

2023-02-02T01:30:18.098Z: LDAP account "cn=lastname\, firstname,ou=deleted-users,ou=someou,dc=internal,dc=domain,dc=local" does not exist anymore, blocking GitLab user "Lastname, Firstname" (firstname.lastname@domain.local)

prdrhel8180:/ #
prdrhel8180:/ # docker exec -it gitlab cat /etc/gitlab/gitlab.rb
gitlab_rails['ldap_servers'] = YAML.load <<-EOS
  someldap:
    label: 'LDAP'
    host: 'some-ldaps-vip.internal.domain.local'
    port: 636
    uid: 'sAMAccountName'
    [...]
    user_filter: '(|(memberOf=CN=SomeGroup,OU=Groups,OU=SomeOU,DC=internal,DC=domain,DC=local)(memberOf=CN=SomeGroup2,OU=Groups2,OU=SomeOU2,DC=internal,DC=domain,DC=local))'
prdrhel8180:/ #
prdrhel8180:/ #

The user_filter has to be adjusted to the new AD group, which includes the blocked user(s).
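A sketch of the adjusted configuration in /etc/gitlab/gitlab.rb. The group CN=SomeNewGroup is a hypothetical stand-in for whichever AD group now contains the affected users; everything else matches the excerpt above.

```ruby
# /etc/gitlab/gitlab.rb (sketch) -- point the user_filter at the AD group
# that replaced the deleted one. CN=SomeNewGroup is a hypothetical name.
gitlab_rails['ldap_servers'] = YAML.load <<-EOS
  someldap:
    label: 'LDAP'
    host: 'some-ldaps-vip.internal.domain.local'
    port: 636
    uid: 'sAMAccountName'
    user_filter: '(|(memberOf=CN=SomeNewGroup,OU=Groups,OU=SomeOU,DC=internal,DC=domain,DC=local)(memberOf=CN=SomeGroup2,OU=Groups2,OU=SomeOU2,DC=internal,DC=domain,DC=local))'
EOS
```

Apply the change with docker exec -it gitlab gitlab-ctl reconfigure, then unblock the user as shown above.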

Microsoft Windows Defender AntiVirus Performance analysis

When you suspect Microsoft Defender Antivirus to be a bottleneck for your Windows performance, you can use Microsoft's Defender Antivirus performance analyzer. It helps with the on-premises Windows Defender Antivirus as well as with the cloud solution Microsoft Defender for Endpoint (Defender ATP).

https://docs.microsoft.com/en-us/microsoft-365/security/defender-endpoint/tune-performance-defender-antivirus?view=o365-worldwide

Especially on developer systems with an IDE, Microsoft Defender Antivirus can have a significant performance impact due to the many temporary files, which are not digitally signed but contain executable code. Microsoft's Defender Antivirus performance analyzer can help you detect:

  • Files with long antivirus scan times
  • Processes with long antivirus scan times
  • File extensions with long antivirus scan times 

Running defender antivirus performance analyzer

  1. Run PowerShell (Admin)
  2. Use the PowerShell command New-MpPerformanceRecording -RecordTo how2itsec-analyze-microsoft-antivirus.etl
  3. Repeat your performance issue, e.g. building your software or opening a program
  4. Press Enter to stop the trace

Defender Antivirus performance analysis etl

Analysis of the trace 

You can analyze your results using the Get-MpPerformanceReport cmdlet with one or more of the following parameters:
Get-MpPerformanceReport [-Path] <String>
    [-TopScans <Int32>]
    [-TopFiles <Int32>
        [-TopScansPerFile <Int32>]
        [-TopProcessesPerFile <Int32>
            [-TopScansPerProcessPerFile <Int32>]
        ]
    ]
    [-TopExtensions <Int32>
        [-TopScansPerExtension <Int32>]
        [-TopProcessesPerExtension <Int32>
            [-TopScansPerProcessPerExtension <Int32>]
        ]
        [-TopFilesPerExtension <Int32>
            [-TopScansPerFilePerExtension <Int32>]
        ]
    ]
    [-TopProcesses <Int32>
        [-TopScansPerProcess <Int32>]
        [-TopExtensionsPerProcess <Int32>
            [-TopScansPerExtensionPerProcess <Int32>]
        ]
        [-TopFilesPerProcess <Int32>
            [-TopScansPerFilePerProcess <Int32>]
        ]
    ]
    [-MinDuration <String>]
    [-Raw]

Example Analysis

Get-MpPerformanceReport -Path .\how2itsec-analyze-microsoft-antivirus.etl -TopFiles 10

(Screenshot: Get-MpPerformanceReport Defender analysis 1)

Get-MpPerformanceReport -Path .\how2itsec-analyze-microsoft-antivirus.etl -TopFiles 10 -TopScansPerFile 3

(Screenshot: Get-MpPerformanceReport Defender analysis - files, scans per file)

Get-MpPerformanceReport -Path .\how2itsec-analyze-microsoft-antivirus.etl -TopExtensions:10 -TopProcesses:10 -TopScans:10

(Screenshot: Get-MpPerformanceReport Defender analysis 2 - top processes, top scans per file)

Get-MpPerformanceReport -Path .\how2itsec-analyze-microsoft-antivirus.etl -TopScans:100 -MinDuration:100ms

(Screenshot: Get-MpPerformanceReport Defender analysis 3 - processes, scan duration)

Get-MpPerformanceReport -Path .\how2itsec-analyze-microsoft-antivirus.etl -TopScans:100 -MinDuration:500ms -Raw | ConvertTo-Json

(Screenshot: Debug Windows Defender Antivirus performance JSON)

Optimize performance 

Based on your analysis results you can carefully set exclusions or adjust parameters in Windows Defender or Defender for Endpoint (Defender ATP) in order to boost performance.

Nextcloud Security & setup warning php-imagick svg support

When running Nextcloud, you might find the following error in the administration overview: "Module php-imagick in this instance has no SVG support. For better compatibility it is recommended to install it."


In order to fix this warning, install the libmagickcore-6.q16-6-extra package with the following command:

apt-get install libmagickcore-6.q16-6-extra

Nextcloud repairing missing indexes in database

When your Nextcloud installation is showing an error like the following, you can use "occ db:add-missing-indices" to repair it:

Nextcloud database is missing indexes occ db:add-missing-indices

"Security & setup warnings

It's important for the security and performance of your instance that everything is configured correctly. To help you with that we are doing some automatic checks. Please see the linked documentation for more information.

There are some warnings regarding your setup.
  • The database is missing some indexes. Due to the fact that adding indexes on big tables could take some time they were not added automatically. By running "occ db:add-missing-indices" those missing indexes could be added manually while the instance keeps running. Once the indexes are added queries to those tables are usually much faster.

    Missing index "fs_size" in table "oc_filecache".

How to fix it:

privusrA17@nextcloud042:~#
privusrA17@nextcloud042:~# cd /var/www/nextcloud; sudo -u www-data php ./occ db:add-missing-indices
Check indices of the share table.
Check indices of the filecache table.
Adding additional size index to the filecache table, this can take some time...
Filecache table updated successfully.
Check indices of the twofactor_providers table.
Check indices of the login_flow_v2 table.
Check indices of the whats_new table.
Check indices of the cards table.
Check indices of the cards_properties table.
Check indices of the calendarobjects_props table.
Check indices of the schedulingobjects table.
Check indices of the oc_properties table.
privusrA17@nextcloud042:/var/www/nextcloud#
privusrA17@nextcloud042:/var/www/nextcloud#


Windows VMs have issues resolving DNS names, run into network timeouts or packet loss

Problem

Windows VMs (VMware vSphere) have issues when trying to resolve DNS names and run into network timeouts or packet loss on other protocols, too.

For example, running a simple PowerShell script shows the issue (change *YourFQDN* to your FQDN and '*DNS-Server-IP*' to your DNS server IP address):
 

1..1000 | Foreach-Object -Process {
    [pscustomobject]@{
        Try         = $_
        ElapsedTime = (Measure-Command -Expression {
                Resolve-DnsName -DnsOnly -QuickTimeout -NoHostsFile -Name '*YourFQDN*' -Server '*DNS-Server-IP*'
            }).TotalMilliseconds -as [int]
    }
} |
    Group-Object -Property 'ElapsedTime' |
    Sort-Object -Property 'Count'

PowerShell DNS query test script

Of 1000 DNS queries, 541 were answered within 2 ms
Of 1000 DNS queries, 243 were answered within 1 ms
Of 1000 DNS queries, 57 were answered within 3 ms
Of 1000 DNS queries, 153 were not answered (timeout >1000 ms)

Debug logs of vnetWFP show the event "DEBUG: ALEInspectInjectComplete : Packet injection status is : c000021b".

Solution

Update your VMware Tools 11.x with Guest Introspection driver to version 11.2.6 and reboot your VM, or uninstall the Guest Introspection driver. We first suspected VMware NSX-T or VMware Carbon Black EDR, but it was neither - it was the NSX Guest Introspection driver.

Root Cause: Packet drop is seen due to intermittent failure reported by the Microsoft WFP packet injection API.

https://kb.vmware.com/s/article/79185

After the update or removal of the driver the issues were gone:

PowerShell DNS query test script after vmware tools update

Of 1000 DNS queries, 985 were answered within 1 ms
Of 1000 DNS queries, 10 were answered within 2 ms
Of 1000 DNS queries, 3 were answered within 3 ms
Of 1000 DNS queries, 1 was answered within 4 ms
Of 1000 DNS queries, 1 was answered within 35 ms
Of 1000 DNS queries, 0 timed out.

Monitor UniFi WLAN Access Point with PRTG with SNMPv3 Auth+Encrypted

This is a tiny guide on how to monitor your UniFi wireless access point, in this case a UniFi U7 Pro, with SNMPv3 with AES encryption and SHA authentication...