Splunk Version 9.4.4 shows error while starting - VM CPU Flags are missing

Problem 

You update Splunk to e.g. version 9.4.4 and get the following error while starting it:

Migrating to:
VERSION=9.4.4
BUILD=f627d88b766b
PRODUCT=splunk
PLATFORM=Linux-x86_64

********** BEGIN PREVIEW OF CONFIGURATION FILE MIGRATION **********

-> Currently configured KVSTore database path="/opt/splunk/var/lib/splunk/kvstore"
CPU Vendor: GenuineIntel
CPU Family: 15
CPU Model: 6
CPU Brand: \x
AVX Support: No
SSE4.2 Support: No
AES-NI Support: No

-> isSupportedArchitecture=0
-> isKVstoreDisabled=0
-> isKVstoreDatabaseFolderExist=0
-> isKVstoreDiagnosticsFolderExist=0
-> isKVstoreVersionFileFolderExist=1
-> isKVstoreVersionFileFolderEmpty=0
-> isKVstoreVersionFileMatched=1
-> isKVstoreVersionFromBsonMatched=0
-> isSupportedArchitecture=0
* Active KVStore version upgrade precheck FAILED!
  -- This check is to ensure that KVStore version 4.2 been in use.
  -- In order to fix this failed check, re-install the previous Splunk version, and follow the KVStore upgrade documentation: https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/MigrateKVstore#Upgrade_KV_store_server_to_version_4.2 .
Some upgrade prechecks failed!
ERROR while running splunk-preinstall.
 

Cause

This can be caused by the CPU features AVX, SSE4.2 and AES-NI missing from the Splunk VM. They are required by the new KV store MongoDB version introduced in Splunk 9.4: https://help.splunk.com/en/splunk-enterprise/administer/admin-manual/9.4/administer-the-app-key-value-store/upgrade-the-kv-store-server-version#Upgrade_the_KV_store_server_version

You can check inside your VM with the command below - no output means the flags are missing:
splnonroot@devubu22h102:/opt/splunk$
splnonroot@devubu22h102:/opt/splunk$
splnonroot@devubu22h102:/opt/splunk$ grep -o -w 'sse4_2\|avx\|aes' /proc/cpuinfo | sort -u
splnonroot@devubu22h102:/opt/splunk$
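
If you prefer a single pass/fail check, here is a minimal sketch (assuming the x86_64 flag names as they appear in /proc/cpuinfo):

  # Counts the distinct matching flags; all three must be present for the Splunk 9.4+ KV store
  [ "$(grep -o -w 'sse4_2\|avx\|aes' /proc/cpuinfo | sort -u | wc -l)" -eq 3 ] \
    && echo "OK: aes, avx and sse4_2 are present" \
    || echo "MISSING: expose the CPU flags in your hypervisor"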

Solution 

In your VM hypervisor (VMware ESXi, Microsoft Hyper-V, Proxmox, etc.), expose the necessary CPU flags/features to the Splunk VMs.

Example for proxmox:

  1. Check inside your VM: grep -o -w 'sse4_2\|avx\|aes' /proc/cpuinfo | sort -u
  2. Edit /etc/pve/qemu-server/*Your VM ID*.conf and set "cpu: host", so that all of the host's CPU hardware flags are forwarded to the VM (a CLI alternative is sketched after this list)
  3. Reboot the VM 
  4. Check inside your VM again: grep -o -w 'sse4_2\|avx\|aes' /proc/cpuinfo | sort -u
  5. Start Splunk 
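
Instead of editing the config file by hand, the same change can be made with Proxmox's qm CLI; a minimal sketch, assuming the VM ID is 102:

  # On the Proxmox host: set the CPU type to "host", then verify the config line
  qm set 102 --cpu host
  qm config 102 | grep ^cpu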

Before - the Proxmox VM config without the CPU flags:

root@proxmox1:~#
root@proxmox1:~#
root@proxmox1:~# cat /etc/pve/qemu-server/*Your VM ID*.conf
boot: order=scsi0;ide2;net0
cores: 2
ide2: local:iso/ubuntu-22.04-live-server-amd64_2.iso,media=cdrom
memory: 8192
name: devubu22h102
net0: virtio=CA:*redacted*:CC,bridge=vmbr0,firewall=1
[...]

[snapshot-pre-splunkupdate]
boot: order=scsi0;ide2;net0
cores: 2
ide2: local:iso/ubuntu-22.04-live-server-amd64_2.iso,media=cdrom
memory: 8192
name: devubu22h102
net0: virtio=CA:*redacted*:CC,bridge=vmbr0,firewall=1
[...]
root@proxmox1:~#
root@proxmox1:~# 

After - the Proxmox VM config with the CPU flags:

1. Edit /etc/pve/qemu-server/*Your VM ID*.conf and add "cpu: host":
root@proxmox1:~#
root@proxmox1:~#
root@proxmox1:~# cat /etc/pve/qemu-server/*Your VM ID*.conf
boot: order=scsi0;ide2;net0
cores: 2
cpu: host
ide2: local:iso/ubuntu-22.04-live-server-amd64_2.iso,media=cdrom
memory: 8192
name: devubu22h102
net0: virtio=CA:*redacted*:CC,bridge=vmbr0,firewall=1
[...]

[snapshot-pre-splunkupdate]
boot: order=scsi0;ide2;net0
cores: 2
cpu: host
ide2: local:iso/ubuntu-22.04-live-server-amd64_2.iso,media=cdrom
memory: 8192
name: devubu22h102
net0: virtio=CA:*redacted*:CC,bridge=vmbr0,firewall=1
[...]
root@proxmox1:~#
root@proxmox1:~#


2. Reboot the VM

3. The CPU flags are now visible inside your VM:
splnonroot@devubu22h102:/opt/splunk$
splnonroot@devubu22h102:/opt/splunk$ grep -o -w 'sse4_2\|avx\|aes' /proc/cpuinfo | sort -u
aes
avx
sse4_2
splnonroot@devubu22h102:/opt/splunk$
splnonroot@devubu22h102:/opt/splunk$ 


4. Start Splunk again
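
Once Splunk is running again, you can also verify that the KV store came up healthy; a sketch assuming the default install path and placeholder admin credentials:

  /opt/splunk/bin/splunk start
  # After startup, the KV store status should report "ready"
  /opt/splunk/bin/splunk show kvstore-status -auth admin:yourpassword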

Update Nginx Proxy Manager Docker container guide

Commands

  1. Back up your container (a backup sketch is shown after this list)
  2. Check which version of Nginx Proxy Manager is currently running - the version banner is printed when you exec into the container:
    docker exec -it nginx_app_1 /bin/bash

  3. Check which Docker containers are currently running:
    docker ps

  4. Stop the Nginx Proxy Manager application and database containers:
    docker stop nginx_app_1
    docker stop nginx_db_1

  5. Pull the latest (or a specific) version of the image:
    docker pull jc21/nginx-proxy-manager:latest

  6. Start the containers (a sketch of a matching nginx.yml is shown after this list):
    docker-compose -f nginx.yml up -d

  7. Check the logs of the containers:
    docker logs --follow nginx_app_1

  8. Check which version is running now, the same way as in step 2:
    docker exec -it nginx_app_1 /bin/bash

  9. Check your monitoring solution & test your applications
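
For step 1, a minimal backup sketch, assuming the compose file and the data, letsencrypt and mysql directories live in the current directory (as in the example below) and that the containers are stopped during the copy:

  docker stop nginx_app_1 nginx_db_1
  # The mysql directory is not owned by your user (see the ls output below), so this may need root
  tar czf npm-backup-$(date +%F).tar.gz nginx.yml data letsencrypt mysql
  docker start nginx_db_1 nginx_app_1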
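
For step 6, the guide assumes a compose file named nginx.yml; a minimal sketch of what it might look like, based on the upstream jc21 example (the service names app and db produce the container names nginx_app_1 and nginx_db_1 when the compose project is called nginx; the DB credentials are placeholders):

  version: '3'
  services:
    app:
      image: 'jc21/nginx-proxy-manager:latest'
      restart: unless-stopped
      ports:
        - '80:80'
        - '81:81'
        - '443:443'
      environment:
        DB_MYSQL_HOST: "db"
        DB_MYSQL_PORT: 3306
        DB_MYSQL_USER: "npm"
        DB_MYSQL_PASSWORD: "changeme"
        DB_MYSQL_NAME: "npm"
      volumes:
        - ./data:/app/data
        - ./letsencrypt:/etc/letsencrypt
    db:
      image: 'jc21/mariadb-aria:latest'
      restart: unless-stopped
      environment:
        MYSQL_ROOT_PASSWORD: 'changeme'
        MYSQL_DATABASE: 'npm'
        MYSQL_USER: 'npm'
        MYSQL_PASSWORD: 'changeme'
      volumes:
        - ./mysql:/var/lib/mysql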


Example

user@container-nginx:~#
user@container-nginx:~# docker ps
CONTAINER ID   IMAGE                      COMMAND             CREATED         STATUS         PORTS                                                                                  NAMES
1a74a14bc3ab   9c3f57826a5d               "/init"             18 days ago   Up 4 minutes   0.0.0.0:80-81->80-81/tcp, :::80-81->80-81/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   nginx_app_1
02372069f98d   jc21/mariadb-aria:latest   "/scripts/run.sh"   18 days ago   Up 4 minutes   3306/tcp                                                                               nginx_db_1
user@container-nginx:~#
user@container-nginx:~#
user@container-nginx:~# docker stop nginx_app_1
nginx_app_1
user@container-nginx:~# docker stop nginx_db_1
nginx_db_1
user@container-nginx:~#
user@container-nginx:~# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
user@container-nginx:~#
user@container-nginx:~# docker pull jc21/nginx-proxy-manager:latest
latest: Pulling from jc21/nginx-proxy-manager
7cf63256a31a: Pull complete
191fb0319d69: Pull complete
9ace5189354c: Pull complete
e4db5efc926a: Pull complete
[...]
be35f3c3bf02: Pull complete
Digest: sha256:e5eecad9bf040f1e7ddc9db6bbc812d690503aa119005e3aa0c24803746b49ea
Status: Downloaded newer image for jc21/nginx-proxy-manager:latest
docker.io/jc21/nginx-proxy-manager:latest
user@container-nginx:~#
user@container-nginx:~#
user@container-nginx:~# ls -lah
total 676K
[...]
-rw-r--r--  1 user    user    607K May 14 02:55 cron-auto-update.log
drwxr-xr-x  7 user    user    4.0K Nov  5  2023 data
drwxr-xr-x  8 user    user    4.0K May 14 19:07 letsencrypt
drwxr-xr-x  5 postfix crontab 4.0K May 14 19:14 mysql
-rw-r--r--  1 user    user    1.1K Aug 11  2024 nginx.yml
user@container-nginx:~#
user@container-nginx:~#
user@container-nginx:~#
user@container-nginx:~#
user@container-nginx:~# docker-compose -f nginx.yml up -d
Starting nginx_db_1 ... done
Recreating nginx_app_1 ... done
user@container-nginx:~#
user@container-nginx:~#
user@container-nginx:~# docker exec -it nginx_app_1 /bin/bash
 _   _       _            ____                      __  __
| \ | | __ _(_)_ __ __  _|  _ \ _ __ _____  ___   _|  \/  | __ _ _ __   __ _  __ _  ___ _ __
|  \| |/ _` | | '_ \\ \/ / |_) | '__/ _ \ \/ / | | | |\/| |/ _` | '_ \ / _` |/ _` |/ _ \ '__|
| |\  | (_| | | | | |>  <|  __/| | | (_) >  <| |_| | |  | | (_| | | | | (_| | (_| |  __/ |
|_| \_|\__, |_|_| |_/_/\_\_|   |_|  \___/_/\_\\__, |_|  |_|\__,_|_| |_|\__,_|\__, |\___|_|
       |___/                                  |___/                          |___/
Version 2.12.3 (c5a319c) 2025-03-12 00:21:07 UTC, OpenResty 1.27.1.1, debian 12 (bookworm), Certbot certbot 3.2.0
Base: debian:bookworm-slim, linux/amd64
Certbot: nginxproxymanager/nginx-full:latest, linux/amd64
Node: nginxproxymanager/nginx-full:certbot, linux/amd64

[yp@docker-9a056abb3b01:/app]#

 

This also works fine if the Docker container runs inside an LXC container. It should also work with Podman instead of Docker; see the sketch below.
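
With Podman the commands map almost one-to-one; a sketch, assuming podman-compose is installed and the same nginx.yml is used:

  podman ps
  podman stop nginx_app_1 nginx_db_1
  podman pull docker.io/jc21/nginx-proxy-manager:latest
  podman-compose -f nginx.yml up -d
  podman logs --follow nginx_app_1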

Ansible Remote Shell Examples

To execute remote commands on, or get access to, a remote server using Ansible, you can do: source =prdeu4spl002  destination = prdus1ans105  aut...