How To Install Proxmox VE 7 on Debian 11 (Bullseye)


Virtualization is the foundation of cloud computing, as it allows for more efficient use of physical computer hardware. In virtualization, a software layer creates an abstraction over a computer's hardware elements (processors, memory, storage, network and more) so they can be divided into multiple virtual machines (VMs). Proxmox Virtual Environment (VE) is a virtualization solution based on the Debian Linux distribution with a modified LTS kernel. It enables you to deploy and manage both virtual machines and containers, with unified storage for better efficiency.

In this guide, we will cover a step-by-step installation of the Proxmox VE 7 virtualization software on a Debian 11 (Bullseye) Linux system. It’s recommended to deploy a Proxmox VE server from the bare-metal ISO installer, but sometimes you need to deploy it on an already running Debian 11 (Bullseye) server instead.

Setup Pre-requisites

For the installation of Proxmox VE 7 on Debian 11 (Bullseye), the following requirements need to be met:

  • A running instance of Debian 11 (Bullseye)
  • A 64-bit processor with support for the Intel 64 or AMD64 CPU extensions
  • Access to the Debian server terminal as root, or as a standard user with sudo privileges
  • Internet access on the server
  • Enough hardware resources to virtualize other operating systems

If you need to install the operating system first, we have a separate guide on installing Debian 11 (Bullseye).

With all the requirements satisfied, proceed with the installation of Proxmox VE 7 on Debian 11 (Bullseye) with the steps discussed in the next sections.

For Proxmox VE 6, check out: How To Install Proxmox VE 6 on Debian 10 (Buster)

Step 1: Update Debian OS

Ensure your Debian 11 (Bullseye) operating system is upgraded.

sudo apt -y update && sudo apt -y upgrade

Once the upgrade process is complete, reboot the server if needed.

[ -f /var/run/reboot-required ] && sudo reboot -f

Step 2: Set Proxmox Server hostname

Let’s set a static hostname on the server:

$ sudo hostnamectl set-hostname proxmox7node01 --static

Replace proxmox7node01 with the correct hostname you’re setting on your system.

Get the IP address of the primary interface:

$ ip ad
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:ef:22:c5 brd ff:ff:ff:ff:ff:ff
    inet brd scope global dynamic noprefixroute enp1s0
       valid_lft 1982sec preferred_lft 1982sec
    inet6 fe80::5054:ff:feef:22c5/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

Update the /etc/hosts file with the hostname and its matching IP address, for local name resolution without a DNS server:

$ sudo vim /etc/hosts

Add a line mapping the server's IP address to the hostname, for example:

<server-ip> proxmox7node01

Log out and back in to start using the new hostname:

$ logout

Test that the configured hostname resolves to the correct IP address using the hostname command:

$ hostname --ip-address
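Proxmox services expect the node's hostname to resolve to the server's LAN address rather than a loopback address such as 127.0.1.1 (a common Debian default in /etc/hosts). Below is a minimal, hedged sketch of such a check, assuming getent and awk are available:

```shell
# Check what the hostname resolves to; getent consults /etc/hosts first.
resolved=$(getent hosts "$(hostname)" | awk '{print $1}' | head -n1)
case "$resolved" in
  127.*|::1) echo "WARNING: hostname resolves to loopback ($resolved); fix /etc/hosts" ;;
  "")        echo "WARNING: hostname does not resolve; add it to /etc/hosts" ;;
  *)         echo "OK: $(hostname) resolves to $resolved" ;;
esac
```

If you see the loopback warning, remove or correct the 127.0.1.1 line in /etc/hosts before installing Proxmox VE.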

Step 3: Add the Proxmox VE repository

The Proxmox server packages are distributed in an APT repository. Add the repository to your Debian 11 system by running the commands below:

echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" | sudo tee /etc/apt/sources.list.d/pve-install-repo.list

Then download and install the repository's GPG package signing key:

wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg
sudo mv proxmox-release-bullseye.gpg /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
sudo chmod +r /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
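Before running apt update, it can be worth confirming that the key actually landed in place. A minimal sanity-check sketch (the path matches the mv command above):

```shell
# The key file should exist, be non-empty, and be world-readable for apt.
key=/etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
if [ -s "$key" ] && [ -r "$key" ]; then
  echo "key present ($(stat -c%s "$key") bytes)"
else
  echo "key missing or unreadable"
fi
```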

Update your APT sources list

$ sudo apt update
Hit:1 http://deb.debian.org/debian bullseye InRelease
Hit:2 http://deb.debian.org/debian bullseye-updates InRelease
Hit:3 http://security.debian.org/debian-security bullseye-security InRelease
Get:4 http://download.proxmox.com/debian/pve bullseye InRelease [3053 B]
Hit:5 http://deb.debian.org/debian bullseye-backports InRelease
Get:6 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 Packages [186 kB]
Fetched 189 kB in 0s (435 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
1 package can be upgraded. Run 'apt list --upgradable' to see it.

You can see we have an upgrade available after adding the repo. Let’s run the system upgrade command:

$ sudo apt full-upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 82.0 kB of archives.
After this operation, 2048 B disk space will be freed.
Do you want to continue? [Y/n] y
Get:1 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 ifupdown amd64 0.8.36+pve1 [82.0 kB]
Fetched 82.0 kB in 0s (2558 kB/s)
Reading changelogs... Done
(Reading database ... 137105 files and directories currently installed.)
Preparing to unpack .../ifupdown_0.8.36+pve1_amd64.deb ...
Unpacking ifupdown (0.8.36+pve1) over (0.8.36) ...
Setting up ifupdown (0.8.36+pve1) ...
Processing triggers for man-db (2.9.4-2) ...

Adding Proxmox VE Ceph Repository:

This is Proxmox VE’s main Ceph repository and holds the Ceph packages for production use. You can also use this repository to update only the Ceph client.

echo "deb http://download.proxmox.com/debian/ceph-pacific bullseye main" | sudo tee /etc/apt/sources.list.d/ceph.list

Step 4: Install Proxmox VE 7 packages

With the repository added, we can now install Proxmox VE packages on Debian 11 (Bullseye) system:

sudo apt update
sudo apt install proxmox-ve postfix open-iscsi

The installation time will depend on other variables such as internet connectivity and hard disk write speed:

The following packages will be REMOVED:
  firmware-linux-free ifupdown
The following NEW packages will be installed:
  attr bridge-utils ceph-common ceph-fuse cifs-utils corosync criu cstream curl dmeventd dtach ebtables faketime fonts-font-awesome fonts-glyphicons-halflings genisoimage glusterfs-client
  glusterfs-common gnutls-bin hdparm ibverbs-providers idn ifupdown2 ipset keyutils libaio1 libanyevent-http-perl libanyevent-perl libappconfig-perl libapt-pkg-perl libasync-interrupt-perl
  libauthen-pam-perl libbabeltrace1 libboost-context1.74.0 libboost-coroutine1.74.0 libboost-program-options1.74.0 libbytes-random-secure-perl libcephfs2 libcfg7 libcmap4 libcommon-sense-perl
  libconvert-asn1-perl libcorosync-common4 libcpg4 libcrypt-openssl-bignum-perl libcrypt-openssl-random-perl libcrypt-openssl-rsa-perl libcrypt-random-seed-perl libcrypt-ssleay-perl libdbi1
  libdevel-cycle-perl libdevmapper-event1.02.1 libdigest-bubblebabble-perl libdigest-hmac-perl libev-perl libfaketime libfdt1 libfile-chdir-perl libfile-readbackwards-perl libfilesys-df-perl
  libgfapi0 libgfchangelog0 libgfrpc0 libgfxdr0 libglusterd0 libglusterfs0 libgnutls-dane0 libgnutlsxx28 libgoogle-perftools4 libgssapi-perl libguard-perl libibverbs1 libinih1 libio-multiplex-perl
  libipset13 libiscsi7 libisns0 libjemalloc2 libjs-bootstrap libjs-extjs libjs-jquery libjs-qrcodejs libjson-perl libjson-xs-perl libknet1 libleveldb1d liblinux-inotify2-perl liblvm2cmd2.03
  libmath-random-isaac-perl libmath-random-isaac-xs-perl libmime-base32-perl libnet-dns-perl libnet-dns-sec-perl libnet-ip-perl libnet-ldap-perl libnet-libidn-perl libnet1 libnetaddr-ip-perl
  libnetfilter-log1 libnfsidmap2 libnozzle1 libnvpair3linux liboath0 libopeniscsiusr libopts25 libposix-strptime-perl libproxmox-acme-perl libproxmox-acme-plugins libproxmox-backup-qemu0
  libpve-access-control libpve-apiclient-perl libpve-cluster-api-perl libpve-cluster-perl libpve-common-perl libpve-guest-common-perl libpve-http-server-perl libpve-rs-perl libpve-storage-perl
  libpve-u2f-server-perl libqb100 libqrencode4 libquorum5 librados2 librados2-perl libradosstriper1 librbd1 librdmacm1 librrd8 librrds-perl libsdl1.2debian libsocket6-perl libspice-server1
  libstatgrab10 libstring-shellquote-perl libtcmalloc-minimal4 libtemplate-perl libterm-readline-gnu-perl libtpms0 libtypes-serialiser-perl libu2f-server0 libunbound8 liburcu6 libusbredirparser1
  libuuid-perl libuutil3linux libvotequorum8 libxml-libxml-perl libxml-namespacesupport-perl libxml-sax-base-perl libxml-sax-expat-perl libxml-sax-perl libyaml-libyaml-perl libzfs4linux
  libzpool5linux lvm2 lxc-pve lxcfs lzop nfs-common novnc-pve numactl open-iscsi postfix powermgmt-base proxmox-archive-keyring proxmox-backup-client proxmox-backup-file-restore
  proxmox-backup-restore-image proxmox-mini-journalreader proxmox-ve proxmox-widget-toolkit pve-cluster pve-container pve-docs pve-edk2-firmware pve-firewall pve-firmware pve-ha-manager pve-i18n
  pve-kernel-5.13 pve-kernel-5.13.19-2-pve pve-kernel-helper pve-lxc-syscalld pve-manager pve-qemu-kvm pve-xtermjs python3-ceph-argparse python3-cephfs python3-cffi-backend python3-cryptography
  python3-gpg python3-jwt python3-prettytable python3-protobuf python3-rados python3-rbd python3-samba python3-tdb qemu-server qrencode rpcbind rrdcached rsync samba-common samba-common-bin
  samba-dsdb-modules smartmontools smbclient socat spiceterm sqlite3 ssl-cert swtpm swtpm-libs swtpm-tools thin-provisioning-tools uidmap vncterm xfsprogs xsltproc zfs-zed zfsutils-linux zstd
0 upgraded, 223 newly installed, 2 to remove and 0 not upgraded.
Need to get 302 MB of archives.
After this operation, 1780 MB of additional disk space will be used.
Do you want to continue? [Y/n] y

If you have a mail server in your network, you should configure Postfix as a satellite system; your existing mail server will then be the ‘relay host’ which routes the emails sent by the Proxmox server to the end recipient. If you don’t know what to enter here, choose local only.
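If you prefer the installation to run unattended, the Postfix questions can be answered in advance with debconf preseeding. A hedged sketch of the selections follows; the mail name is a hypothetical example you should replace, and the keys are the standard debconf questions of the Debian postfix package. Save them to a file and load it with `sudo debconf-set-selections < selections.txt` before running the apt install command:

```
postfix postfix/main_mailer_type select Local only
postfix postfix/mailname string proxmox7node01.example.com
```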


Confirm system mail name / update accordingly.


Confirm the installation completes without any errors:

Created symlink /etc/systemd/system/ → /lib/systemd/system/pvedaemon.service.
Created symlink /etc/systemd/system/ → /lib/systemd/system/pveproxy.service.
Created symlink /etc/systemd/system/ → /lib/systemd/system/spiceproxy.service.
Created symlink /etc/systemd/system/ → /lib/systemd/system/pvestatd.service.
Created symlink /etc/systemd/system/ → /lib/systemd/system/pvebanner.service.
Created symlink /etc/systemd/system/ → /lib/systemd/system/pvescheduler.service.
Created symlink /etc/systemd/system/ → /lib/systemd/system/pve-daily-update.timer.
Created symlink /etc/systemd/system/ → /lib/systemd/system/pvenetcommit.service.
Created symlink /etc/systemd/system/pve-manager.service → /lib/systemd/system/pve-guests.service.
Created symlink /etc/systemd/system/ → /lib/systemd/system/pve-guests.service.
Backing up lvm.conf before setting pve-manager specific settings..
'/etc/lvm/lvm.conf' -> '/etc/lvm/lvm.conf.bak'
Setting 'global_filter' in /etc/lvm/lvm.conf to prevent zvols from being scanned:
global_filter=["a|.*|"] => global_filter=["r|/dev/zd.*|"]
Setting up proxmox-ve (7.1-1) ...
Processing triggers for mailcap (3.69) ...
Processing triggers for fontconfig (2.13.1-4.2) ...
Processing triggers for desktop-file-utils (0.26-1) ...
Processing triggers for initramfs-tools (0.140) ...
update-initramfs: Generating /boot/initrd.img-5.13.19-2-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
Processing triggers for hicolor-icon-theme (0.17-2) ...
Processing triggers for gnome-menus (3.36.0-1) ...
Processing triggers for libc-bin (2.31-13+deb11u2) ...
Processing triggers for rsyslog (8.2102.0-2) ...
Processing triggers for man-db (2.9.4-2) ...
Processing triggers for proxmox-backup-file-restore (2.1.2-1) ...
Updating file-restore initramfs...
11292 blocks
Processing triggers for pve-ha-manager (3.3-1) ...

Reboot your Debian system after installation to boot with Proxmox VE kernel.

sudo systemctl reboot

Check that port 8006 is bound to the Proxmox VE proxy service:

$ ss -tunelp | grep 8006
tcp   LISTEN 0      4096                *:8006             *:*    uid:33 ino:25414 sk:18 cgroup:/system.slice/pveproxy.service v6only:0 <->
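As a quick end-to-end check, you can also probe the port with bash's built-in /dev/tcp redirection. This is a sketch; run it on the server itself, or swap 127.0.0.1 for the server's IP to test from elsewhere:

```shell
# Probe the Proxmox web UI port using bash's /dev/tcp pseudo-device;
# no extra tools are needed beyond bash and coreutils.
host=127.0.0.1; port=8006
if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
  echo "pveproxy reachable on $host:$port"
else
  echo "nothing listening on $host:$port"
fi
```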

Step 5: Access Proxmox VE web interface

From your workstation, connect to the Proxmox VE admin web console at https://your-ip-address:8006.


Select “PAM Authentication” and authenticate with the server’s root user password to access the Proxmox VE dashboard.



Once logged in, create a Linux Bridge called vmbr0.


Add the first network interface to be used by the bridge being created.
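The same bridge can also be defined directly in /etc/network/interfaces (ifupdown2 syntax, which Proxmox installs). A sketch assuming the interface is enp1s0 as in the output earlier, with placeholder addressing you must replace with your own:

```
auto lo
iface lo inet loopback

iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
```

Apply the change with `sudo ifreload -a` (provided by ifupdown2) or by rebooting.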


A private bridge using NAT is covered in a separate article.

The official Proxmox documentation has more guides on advanced configuration and Proxmox VE administration.

