How to Combine Multiple Proxmox Nodes into One Cluster: Step-by-Step Setup, Migration, and Safe Consolidation
“Combining several Proxmox nodes into one” usually means one of two things:
- Clustering: you keep multiple servers, but you manage them as one logical datacenter (single pane of glass), with shared configuration and features like migration. Proxmox explicitly supports running as a single node or assembling multiple nodes into a cluster.
- Consolidation: you move all VMs/containers onto one physical node and retire the rest (for maintenance, cost reduction, or hardware refresh).
This article covers both—because in real operations, you often cluster first (for easy migration), then consolidate safely.
Why Combine Proxmox Nodes?
A Proxmox cluster gives you:
- Centralized management: you can connect to any node and manage the whole cluster, because cluster-wide configuration is shared through the Proxmox cluster filesystem (pmxcfs).
- Simpler workload movement: with the right storage setup, you can migrate guests between nodes.
- A foundation for HA (High Availability): HA generally requires quorum and is commonly designed around 3+ voting members for reliability.
Core Concepts (Short, but Critical)
Corosync + Quorum: why clusters “freeze” when something fails
Proxmox clustering relies on a quorum-based approach for consistency. If a node loses quorum, cluster-managed configuration (/etc/pve) becomes read-only on that node to avoid split-brain scenarios.
What that means operationally: in small or unstable clusters, losing a node can break management actions until quorum is restored.
Two-node clusters are special (and often misunderstood)
A 2-node cluster is fragile: if one node is down, the remaining node may not have quorum by default, and some operations become restricted. Community guidance commonly discourages 2-node “production” designs unless you add a tie-breaker.
The usual fix: add a QDevice/QNetd-based witness vote (or a third node). The Proxmox docs describe QDevice as providing votes to help the cluster reach quorum.
Pre-Flight Checklist (Do This Before You Create a Cluster)
Treat this like a change plan. Clustering is not hard, but sloppy prerequisites create the majority of failures.
1) Align versions and identities
- Keep nodes on the same Proxmox VE major version where practical (best practice in most cluster designs).
- Ensure each node has a unique hostname and stable name resolution (DNS or a consistent /etc/hosts); a quick check is sketched below.
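A quick pre-flight check, using placeholder node names (nodeA/nodeB) and addresses you should replace with your own:
# on every node: confirm the installed Proxmox VE version and package set
pveversion -v
# confirm each node has a unique hostname and that names resolve consistently
hostnamectl
getent hosts nodeA nodeB
# if you rely on /etc/hosts, keep one consistent entry per node on every host, e.g.:
#   10.0.0.11  nodeA.example.lan  nodeA
#   10.0.0.12  nodeB.example.lan  nodeB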
2) Time sync is non-negotiable
Configure NTP/chrony on all nodes. Clock drift in clustered systems is a reliable way to create intermittent, hard-to-debug issues.
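For example, assuming chrony is the NTP client in use (the default on recent Proxmox VE releases), verification takes a minute per node:
# on every node: confirm the clock reports as synchronized
timedatectl status
# check chrony's sources and current offset
chronyc sources -v
chronyc tracking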
3) Cluster network quality matters
Proxmox documentation and community guidance emphasize low-latency, reliable (LAN-grade) links for the corosync cluster network. You will see operational instability if you try to stretch a cluster across high-latency links.
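A simple sanity check is to measure latency and loss on the intended corosync network before clustering, and to use corosync's own link reporting afterwards (10.0.0.12 is a placeholder for the other node's cluster address):
# before clustering: latency/loss on the dedicated cluster network
ping -c 50 -i 0.2 10.0.0.12
# after clustering: per-link corosync status on each node
corosync-cfgtool -s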
4) Plan storage up front
Your migration options depend on storage:
- Shared storage (NFS/iSCSI/Ceph, etc.) is the classic path for fast or live migration.
- Local storage can still work, but you’ll rely more on replication, storage migration, or backup/restore workflows.
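To see what each node can already reach, list the configured storages (the names in the output are whatever you defined under Datacenter → Storage):
# list configured storages, their types, and whether they are active on this node
pvesm status
# storage definitions, including any node restrictions, live in the cluster config
cat /etc/pve/storage.cfg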
Step-by-Step: Create One Proxmox Cluster from Multiple Nodes
Below is the most common and maintainable pattern: pick one node as the first cluster member, then join other nodes to it.
Step 1 — Create the cluster on the first node
From the shell on the node you want as the initial cluster member:
pvecm create <clustername>
The pvecm tool is the standard CLI for creating a cluster and managing membership.
(You can also do this from the Web UI under the Datacenter cluster section, but the CLI is easier to document and audit.)
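For example, assuming the first node's dedicated cluster NIC sits at 10.0.0.11 (cluster name and address are placeholders):
# create the cluster and bind corosync link 0 to the cluster network
pvecm create homelab --link0 10.0.0.11
# the node should now report as a quorate single-member cluster
pvecm status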
Step 2 — Join each additional node to the cluster
On each node you want to add (run this on the joining node):
pvecm add <IP-or-hostname-of-an-existing-cluster-member>
This command “adds the current node to an existing cluster” (i.e., you run it on the node being added).
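For example, joining a node whose cluster-network address is 10.0.0.12 to the existing member at 10.0.0.11 (both addresses are placeholders; the join prompts for the existing member's root password and asks you to confirm its certificate fingerprint):
# run on the node being added
pvecm add 10.0.0.11 --link0 10.0.0.12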
Step 3 — Verify cluster membership and quorum
On any cluster node:
pvecm status
pvecm nodes
These are the standard status and membership checks provided by the pvecm tooling.
What you want to see: all nodes listed, and the cluster in a healthy (quorate) state.
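If you want this as a scripted check in a runbook, filtering the same output is enough (the exact layout varies slightly between corosync versions):
# the quorum section should report "Quorate: Yes" and the expected vote count
pvecm status | grep -iE 'quorate|votes'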
Migration After Clustering: Move VMs/Containers Between Nodes
Clustering is the control plane. The next question is workload mobility.
Option A — Live migration (best case)
A major benefit of shared storage is that nodes can live-migrate running guests with minimal/no downtime because the VM disks are accessible from both nodes.
Operationally, you can migrate from the GUI (right-click VM → Migrate) or via CLI, depending on your environment and policies.
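The CLI equivalents look like this (VM ID 100, container ID 200, and target nodeB are placeholders; --online assumes the VM's disks are on storage both nodes can reach):
# live-migrate a running VM to another cluster node
qm migrate 100 nodeB --online
# containers cannot be live-migrated; restart mode stops, moves, and restarts the CT
pct migrate 200 nodeB --restart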
Option B — Local storage migration (common in homelabs and cost-sensitive setups)
If VMs live on local disks, you generally have three practical approaches:
- Backup and restore to the target node (most predictable).
- Storage replication (notably when using ZFS replication patterns).
- Offline migration (shutdown VM, copy/move disks, then start on target).
Proxmox’s “Storage Replication” feature is explicitly described as replicating guest volumes to another node to reduce migration time and add redundancy for local-storage guests.
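A sketch of the first two approaches (VM ID, node, storage, and schedule values are placeholders; the replication path assumes the guest's disks live on ZFS):
# approach 1: backup on the source node...
vzdump 100 --storage backups --mode snapshot
# ...then restore on the target node (the actual dump filename will differ)
qmrestore /mnt/pve/backups/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-zfs

# approach 2: ZFS storage replication of guest 100 to nodeB every 15 minutes
pvesr create-local-job 100-0 nodeB --schedule "*/15"
pvesr status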
Consolidation Scenario: Move Everything onto One Node and Retire the Others
If your end goal is “multiple nodes become one physical server,” do this in two phases:
Phase 1 — Consolidate guests onto the target node
- Pick your “final” node (capacity, storage, NICs, power redundancy).
- Migrate or restore all VMs/containers to that node (a bulk-migration loop is sketched after this list).
- Confirm:
- guests boot cleanly
- networking is correct
- storage performance is acceptable
- backups run successfully
Only then proceed.
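If there are many guests, a sequential loop keeps Phase 1 manageable (nodeA is the placeholder target; this assumes shared storage or replication is already in place, and it deliberately migrates one guest at a time so you can watch each move):
# run on a source node: migrate every VM it still hosts to nodeA
for vmid in $(qm list | awk 'NR>1 {print $1}'); do
  qm migrate "$vmid" nodeA --online
done
# same idea for containers, in restart mode
for ctid in $(pct list | awk 'NR>1 {print $1}'); do
  pct migrate "$ctid" nodeA --restart
done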
Phase 2 — Remove the extra nodes from the cluster safely
Node removal is a sensitive operation because quorum can be affected.
Best practice pattern:
- Power down (or otherwise fully stop) the node you plan to remove.
- Run pvecm delnode <node-name> from a remaining healthy cluster member.
The Proxmox community (including staff responses) repeatedly points back to pvecm delnode as the correct removal mechanism and emphasizes sequencing (power down first, then remove).
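A minimal removal sequence, assuming nodeC is the node being retired (run the pvecm commands from a node that stays in the cluster):
# 1) nodeC is already empty of guests and has been powered down
# 2) confirm it shows as offline, then remove it from the membership
pvecm nodes
pvecm delnode nodeC
# 3) verify the remaining membership and quorum state
pvecm status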
Important Warning: A “Single-Node Cluster” Is Usually a Bad End State
After you remove nodes, you may be left with a cluster configuration on only one node. That is not the same as a clean standalone node.
Why this matters:
- Cluster services still expect quorum logic.
- Cluster-managed configuration (/etc/pve) can become read-only when quorum expectations aren’t met.
The “pvecm expected” workaround (use with caution)
pvecm expected <expected> tells corosync to use a new expected-votes value.
Community guidance often describes pvecm expected 1 as a temporary measure to regain control when you need to disband or repair a cluster that is not quorate.
However: running a cluster long-term with forced expected votes is generally not the same as designing for quorum. If you truly want “one node only,” the cleanest operational posture is usually to run Proxmox as a standalone host (not as a one-node cluster).
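If you do go back to a standalone host, the Proxmox documentation describes a procedure for separating a node that stops the cluster services and removes the corosync configuration. The sketch below follows that pattern; verify it against the pvecm documentation for your PVE version and back up /etc/pve before running it:
# stop cluster services and start the cluster filesystem in local mode
systemctl stop pve-cluster corosync
pmxcfs -l
# remove the corosync configuration so the node comes back as standalone
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster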
Better alternative for small clusters: add a QDevice witness
If you must run with only two physical nodes (or anticipate frequent “one node down” situations), adding a QDevice vote is a widely recommended pattern because it helps break ties and maintain quorum safely.
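The setup is small. The witness at 10.0.0.20 is a placeholder; it only needs to be a reachable Debian-based host (the setup step connects to it over SSH as root):
# on every cluster node
apt install corosync-qdevice
# on the external witness host
apt install corosync-qnetd
# then, from any one cluster node
pvecm qdevice setup 10.0.0.20
# afterwards, pvecm status should show the additional Qdevice vote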
Troubleshooting Checklist (What Usually Goes Wrong)
1) “Waiting for quorum” or cluster not ready
This typically indicates quorum is not achieved or corosync communication is unhealthy. Common root causes:
- latency/jitter on the cluster network
- misconfigured addresses/hostnames
- firewall rules blocking required traffic
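Useful first diagnostics for that situation, all standard tooling run on the affected node:
# quorum and membership view
pvecm status
# per-link corosync health
corosync-cfgtool -s
# recent corosync / cluster filesystem logs
journalctl -u corosync -u pve-cluster --since "-1h"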
2) You can’t edit cluster config because it’s read-only
This behavior is the quorum protection working as designed: the cluster filesystem (/etc/pve) goes read-only when the node does not have quorum.
3) Two-node cluster surprises
If one node is down, the other may lose quorum—this is expected behavior without a witness vote.
Practical “Best Practice” Architecture Patterns
Pattern A — Minimum viable production cluster (3 voting members)
- 3 Proxmox nodes (or 2 nodes + QDevice)
- reliable low-latency cluster network
- shared storage (or well-planned replication + backup)
Proxmox’s own wiki guidance commonly points to three nodes as the baseline for reliable quorum, especially if HA is a goal.
Pattern B — Lab / budget cluster (2 nodes + QDevice)
- 2 hypervisor nodes
- 1 small witness (QDevice on a lightweight Debian host, mini PC, or similar)
This is a common compromise for home labs or edge deployments.
Pattern C — Consolidate and run standalone
If the strategic direction is “one box only,” do not overcomplicate:
- consolidate guests
- keep strong backups
- consider a second box only for backup or cold standby
FAQ
Can I “merge” two existing Proxmox clusters into one?
Not directly in the sense of combining two independent cluster control planes. The typical operational approach is:
- pick the target cluster
- migrate workloads (often via backup/restore if clusters are separate)
- retire the source cluster
Do I need shared storage to migrate?
For live migration without downtime, shared storage is the most straightforward approach because both nodes access the same VM disks.
Without shared storage, you can still move workloads, but expect more offline steps or replication/restore workflows.
Is it safe to run a 2-node cluster without QDevice?
It can function, but quorum behavior will bite you at inconvenient times (maintenance, outages). Community and design guidance strongly encourages a third vote (node or QDevice) for operational stability.
Bottom Line
To “combine multiple Proxmox nodes into one,” the most robust method is to build a cluster using pvecm create and pvecm add, verify quorum, then use migration (ideally with shared storage) to place workloads where you want them.
If your real goal is consolidation onto one physical host, do it in two phases: migrate everything first, then remove nodes safely with pvecm delnode, while respecting quorum design (prefer 3 votes via nodes or QDevice).
If you tell me your exact scenario (number of nodes, storage type—ZFS/NFS/Ceph/local LVM, and whether you want HA), I can adapt this into a more prescriptive runbook tailored to your topology.