We are excited to announce that our latest software version 8.2 for Proxmox Virtual Environment is now available for download. This release is based on Debian 12.5 "Bookworm", but uses a newer Linux kernel 6.8 and ships QEMU 8.1, LXC 6.0, Ceph 18.2, and ZFS 2.2.
We have an import wizard to migrate VMware ESXi guests to Proxmox VE. The integrated VM importer is presented as a storage plugin for native integration into the API and web-based user interface. You can use this to import the VM as a whole, with most of the original configuration settings mapped to Proxmox VE's configuration model.
With the new ‘proxmox-auto-install-assistant’ tool you can fully automate the setup process on bare-metal, rapidly deploying Proxmox VE hosts without the need for manual access to the systems.
Proxmox VE 8.2 comes packed with new features and highlights.
As always, we have included countless bugfixes and improvements in many places; see the release notes for all details.
Release notes
https://pve.proxmox.com/wiki/Roadmap
Press release
https://www.proxmox.com/en/news/press-releases/
Video tutorial
https://www.proxmox.com/en/services/videos/proxmox-virtual-environment/whats-new-in-proxmox-ve-8-2
Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
https://enterprise.proxmox.com/iso
Documentation
https://pve.proxmox.com/pve-docs
Community Forum
https://forum.proxmox.com
Bugtracker
https://bugzilla.proxmox.com
Source code
https://git.proxmox.com
We want to thank everyone who has contributed to this release, whether it's through code contributions, bug reports, or simply using and providing feedback on the software. As always, we welcome any feedback or bug reports you may have. Thanks again for your support, and happy virtualization!
FAQ
Q: Can I upgrade the latest Proxmox VE 7 to 8 with apt?
A: Yes, please follow the upgrade instructions on https://pve.proxmox.com/wiki/Upgrade_from_7_to_8
Q: Can I upgrade an 8.0 installation to the stable 8.2 via apt?
A: Yes, upgrading from 8.0 to 8.2 is possible via apt or the GUI.
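For example, a typical minor-version upgrade on each node (assuming the Proxmox VE 8 repositories are already configured) is:
  apt update
  apt full-upgrade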
Q: Can I install Proxmox VE 8.2 on top of Debian 12 "Bookworm"?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm
Q: Can I upgrade my Proxmox VE 7.4 cluster with Ceph Pacific to Proxmox VE 8.2 and to Ceph Reef?
A: This is a three-step process. First, you have to upgrade Ceph from Pacific to Quincy, and afterwards you can upgrade Proxmox VE from 7.4 to 8.2. As soon as you run Proxmox VE 8.2, you can upgrade Ceph to Reef. There are a lot of improvements and changes, so please follow the upgrade documentation exactly:
https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy
https://pve.proxmox.com/wiki/Upgrade_from_7_to_8
https://pve.proxmox.com/wiki/Ceph_Quincy_to_Reef
Q: Where can I get more information about feature updates?
A: Check the roadmap, forum, the mailing list, and/or subscribe to our newsletter.
Ceph package sources:
http://download.proxmox.com/debian/ceph-reef/dists/bookworm/no-subscription/binary-amd64/
https://download.ceph.com/debian-reef/pool/main/c/ceph/
https://docs.ceph.com/en/latest/install/get-packages/ (official upstream Ceph package sources)
//
Ceph Pacific to Quincy
Introduction
This article explains how to upgrade Ceph from Pacific to Quincy (17.2.0 or higher) on Proxmox VE 7.2 and newer 7.x releases.
Important Release Notes
Filestore OSDs are deprecated. Before you proceed, destroy your Filestore OSDs and recreate them to be Bluestore OSDs one by one.
The support for LevelDB has been dropped in Quincy. Bluestore OSDs should always be using RocksDB, but old monitors that were set up prior to Luminous (v12) could still be using LevelDB. Verify this on your Ceph monitor hosts, for example with the check shown below; the result should be "rocksdb". If it is not, destroy and recreate that monitor.
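A minimal way to check the key-value backend of a local monitor, assuming the default monitor data path under /var/lib/ceph/mon/:
  cat /var/lib/ceph/mon/*/kv_backend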
The device_health_metrics pool has been renamed to .mgr. It is now used as a common store for all ceph-mgr modules. After upgrading to Quincy, the device_health_metrics pool will be renamed to .mgr on existing clusters.
A health warning is now reported if the require-osd-release flag is not set to the appropriate release after a cluster upgrade.
For more information, see Release Notes
Assumption
We assume that all nodes are on the latest Proxmox VE 7.2 (or higher) version and Ceph is on version Pacific (16.2.9-pve1 or higher). If not, see the Ceph Octopus to Pacific upgrade guide.
Read the Known Issues section to avoid encountering them, for example when performing steps not described in this guide.
Note: While in theory it is possible to upgrade from Ceph Octopus to Quincy directly, we highly recommend upgrading to Pacific first.
The cluster must be healthy and working!
Enable msgrv2 Protocol and Update Ceph Configuration
If you did not already do so when you upgraded to Nautilus, Octopus or Pacific, you must enable the new v2 network protocol. Issue the following command:
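The standard Ceph command for this (run once, on any node in the cluster) is:
  ceph mon enable-msgr2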
This will instruct all monitors that bind to the old default port 6789 for the legacy v1 protocol to also bind to the new 3300 v2 protocol port. To see whether all monitors have been updated, run the command shown below and verify that each monitor has both a v2: and a v1: address listed.
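The monitor map can be dumped with the standard command:
  ceph mon dump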
Preparation on each Ceph Cluster Node
Change the current Ceph repositories from Pacific to Quincy.
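For example, assuming the Ceph repository is configured in /etc/apt/sources.list.d/ceph.list, the entry can be switched with:
  sed -i 's/pacific/quincy/' /etc/apt/sources.list.d/ceph.list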
Your /etc/apt/sources.list.d/ceph.list should now look like this:
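On Proxmox VE 7 (Debian 11 "Bullseye") the entry would presumably be:
  deb http://download.proxmox.com/debian/ceph-quincy bullseye main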
Set the 'noout' Flag
Set the noout flag for the duration of the upgrade (optional, but recommended):
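For example:
  ceph osd set noout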
Or via the GUI in the OSD tab (Manage Global Flags).
Upgrade on each Ceph Cluster Node
Upgrade all your nodes with the following commands or by installing the latest updates via the GUI. It will upgrade Ceph on your node to Quincy.
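The usual apt commands for a full node upgrade are:
  apt update
  apt full-upgrade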
After the update, your setup will still be running the old Pacific binaries.
Restart the Monitor Daemon
Note: You can use the web-interface or the command-line to restart ceph services.
After upgrading all cluster nodes, you have to restart the monitor on each node where a monitor runs.
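A typical way to do this is via the monitor's systemd target:
  systemctl restart ceph-mon.target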
Once all monitors are up, verify that the monitor upgrade is complete. Look for the Quincy string in the mon map; the command shown below should report a minimum monitor release of Quincy (17).
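Assuming grep is available, a common check is the following, with the expected output shown as a comment:
  ceph mon dump | grep min_mon_release
  # min_mon_release 17 (quincy)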
If it does not, this implies that one or more monitors haven’t been upgraded and restarted, and/or that the quorum doesn't include all monitors.
Restart the Manager Daemons on all Nodes
If the managers did not automatically restart with the monitors, restart them now on all nodes
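For example, via the manager's systemd target:
  systemctl restart ceph-mgr.target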
Verify that the ceph-mgr daemons are running by checking ceph -s
Restart the OSD Daemon on all Nodes
Restart all OSDs. Only restart OSDs on one node at a time to avoid loss of data redundancy. To restart all OSDs on a node, run the following command:
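For example, via the OSD systemd target:
  systemctl restart ceph-osd.target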
Wait after each restart and periodically check the status of the cluster with the command shown below. It should be in HEALTH_OK, or in HEALTH_WARN showing only the noout flag warning.
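For instance:
  ceph status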
Once all OSDs are running with the latest versions, a health warning can appear stating that the require-osd-release flag is not yet set to Quincy.
Disallow pre-Quincy OSDs and Enable all new Quincy-only Functionality
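To clear that warning and enable the Quincy-only functionality, the flag is typically set once (on any node) with:
  ceph osd require-osd-release quincy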
Upgrade all CephFS MDS Daemons
For each CephFS file system, apply the following steps:
Disable standby_replay
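Assuming <fs_name> is a placeholder for the file system name (as listed by ceph fs ls):
  ceph fs set <fs_name> allow_standby_replay false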
Reduce the number of ranks to 1 (if you plan to restore it later, first take note of the original number of MDS daemons):
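Again with <fs_name> as a placeholder:
  ceph fs set <fs_name> max_mds 1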
With a rank higher than 1 you will see more than one MDS active for that Ceph FS.
Wait for the cluster to deactivate any non-zero ranks by periodically checking the status of Ceph:
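For example:
  ceph status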
The number of active MDS should go down to the number of file systems you have
Alternatively, check in the CephFS panel in the GUI that each Ceph filesystem has only one active MDS
Take all standby MDS daemons offline on the appropriate hosts with:
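One way to do this, assuming the host in question runs only standby MDS daemons, is to stop the whole MDS target on that host:
  systemctl stop ceph-mds.target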
Confirm that only one MDS is online and is on rank 0 for your FS:
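For example, by looking at the fs map section of the cluster status output:
  ceph status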
Upgrade the last remaining MDS daemon by restarting the daemon:
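Typically via the MDS systemd target on the host running that daemon:
  systemctl restart ceph-mds.target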
Restart all standby MDS daemons that were taken offline:
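For example, on each host where standby daemons were stopped:
  systemctl start ceph-mds.target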
Restore the original value of max_mds for the volume:
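With <fs_name> and <original_max_mds> as placeholders for your values:
  ceph fs set <fs_name> max_mds <original_max_mds>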
Unset the 'noout' Flag
Once the upgrade process is finished, don't forget to unset the noout flag.
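For example:
  ceph osd unset noout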
Or via the GUI in the OSD tab (Manage Global Flags).
Notes
When restarting a MGR, log lines containing "has missing NOTIFY_TYPES member" can be ignored
Known Issues
Guest images are stored on pool device_health_metrics
If the guest images are stored in the "device_health_metrics" pool, they will be broken after the upgrade!
To avoid the issue, create a new Ceph Pool with the "Add Storage" option enabled. Then use the "Disk Action -> Move Storage" for VMs or "Volume Actions -> Move Storage" for containers to move the guest images away from the "device_health_metrics" pool before you upgrade to Quincy.
//
Ceph Quincy to Reef
Introduction
This article explains how to upgrade Ceph from Quincy (17.2+) to Reef (18.2+) on Proxmox VE 8.
Important Release Notes
Note: Filestore OSDs are deprecated. Before you proceed, destroy your Filestore OSDs and recreate them to be Bluestore OSDs one by one.
A health warning is now reported if the require-osd-release flag is not set to the appropriate release after a cluster upgrade.
For more information, see Release Notes
Assumption
We assume that all nodes are on the latest Proxmox VE 8.0 (or higher) version and Ceph is on version Quincy (17.2.6-pve1+3 or higher). If not, see the Ceph Pacific to Quincy upgrade guide.
Note: While in theory it is possible to upgrade from the older Ceph Pacific (16.2+) to Reef (18.2+) release directly, we do not provide builds of Ceph Pacific for Proxmox VE 8, making this impossible.
The cluster must be healthy and working!
Note: All commands starting with ceph need to be run only once; it does not matter on which node in the Ceph cluster they are run.
Enable msgrv2 protocol and update Ceph configuration
If you did not already do so when you upgraded to Nautilus, Octopus or Pacific, you must enable the new v2 network protocol. Issue the following command:
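The standard Ceph command for this (run once, on any node) is:
  ceph mon enable-msgr2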
This will instruct all monitors that bind to the old default port 6789 for the legacy v1 protocol to also bind to the new 3300 v2 protocol port. To see whether all monitors have been updated, run the command shown below and verify that each monitor has both a v2: and a v1: address listed.
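As before, the monitor map can be dumped with:
  ceph mon dump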
Preparation on each Ceph cluster node
Change the current Ceph repositories from Quincy to Reef.
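For example, again assuming the repository is configured in /etc/apt/sources.list.d/ceph.list:
  sed -i 's/quincy/reef/' /etc/apt/sources.list.d/ceph.list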
Note that the main repository does not exist anymore; it is now split into a public no-subscription repository and an enterprise repository recommended for production use. The latter is accessible with any Proxmox VE subscription.
Your /etc/apt/sources.list.d/ceph.list should now look like this:
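With the enterprise repository, the entry would presumably look like this:
  deb https://enterprise.proxmox.com/debian/ceph-reef bookworm enterprise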
Note, with Proxmox VE 8 we introduced an enterprise repository for Ceph, which is accessible with a valid Proxmox VE subscription. If you do not have a valid subscription, you can use the publicly available no-subscription or test repositories, for example:
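A likely no-subscription entry (matching the repository URL listed earlier in this post):
  deb http://download.proxmox.com/debian/ceph-reef bookworm no-subscription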
Set the 'noout' flag
Set the noout flag for the duration of the upgrade (optional, but recommended):
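For example:
  ceph osd set noout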
Or via the GUI in the OSD tab (Manage Global Flags).
Upgrade on each Ceph cluster node
Upgrade all your nodes with the following commands or by installing the latest updates via the GUI. It will upgrade Ceph on your node to Reef.
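As in the Quincy upgrade, the usual apt commands are:
  apt update
  apt full-upgrade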
After the update, your setup will still be running the old Ceph Quincy (17.2) binaries.
Restart the monitor daemon
Note: You can use the web-interface or the command-line to restart ceph services.
After upgrading all cluster nodes, you have to restart the monitor on each node where a monitor runs.
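A typical way to do this is via the monitor's systemd target:
  systemctl restart ceph-mon.target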
Do so one node at a time. Wait after each restart and periodically check the status of the cluster with the command shown below. It should be in HEALTH_OK, or in HEALTH_WARN showing only the noout flag warning.
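For instance:
  ceph status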
Once all monitors are up, verify that the monitor upgrade is complete. Look for the Reef string in the mon map; the command shown below should report a minimum monitor release of Reef (18).
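Assuming grep is available, with the expected output shown as a comment:
  ceph mon dump | grep min_mon_release
  # min_mon_release 18 (reef)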
If it does not, this implies that one or more monitors haven’t been upgraded and restarted, and/or that the quorum doesn't include all monitors.
Restart the manager daemons on all nodes
If the managers did not automatically restart with the monitors, restart them now on all nodes
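For example, via the manager's systemd target:
  systemctl restart ceph-mgr.target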
Verify that the ceph-mgr daemons are running by checking ceph -s
Restart the OSD daemon on all nodes
Restart all OSDs. Only restart OSDs on one node at a time to avoid loss of data redundancy. To restart all OSDs on a node, run the following command:
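For example, via the OSD systemd target:
  systemctl restart ceph-osd.target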
Wait after each restart and periodically check the status of the cluster with the command shown below. It should be in HEALTH_OK, or in HEALTH_WARN showing only the noout flag warning.
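For instance:
  ceph status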
Once all OSDs are running with the latest versions, a health warning can appear stating that the require-osd-release flag is not yet set to Reef.
Disallow pre-Reef OSDs and enable all new Reef-only functionality
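To clear that warning and enable the Reef-only functionality, the flag is typically set once (on any node) with:
  ceph osd require-osd-release reef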
Upgrade all CephFS MDS daemons
For each CephFS file system you need to apply the following steps. Please note that you can list the file systems with ceph fs ls
or check the web UI under Node -> Ceph -> CephFS.
Disable standby_replay:
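Assuming <fs_name> is a placeholder for the file system name (as listed by ceph fs ls):
  ceph fs set <fs_name> allow_standby_replay false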
If you have increased the ranks (maximum number of MDS instances active for a single CephFS instance) for some CephFS instances, you must reduce all instances to a single rank (set max_mds to 1) before you continue. Please note that if you plan to restore the rank later, first take note of the original number of MDS daemons.
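Again with <fs_name> as a placeholder:
  ceph fs set <fs_name> max_mds 1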
Wait for the cluster to deactivate any extra active MDS (ranks) by periodically checking the status of Ceph:
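For example:
  ceph status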
The number of active MDS should go down to the number of file systems you have, i.e., only one active MDS for each file system.
Alternatively, check in the MDS list in the CephFS panel on the web UI that each Ceph filesystem has only one active MDS
Stop all standby MDS daemons.
You can do so via either the CephFS panel on the web UI, or alternatively, by using the following CLI command (for a single ID):
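Assuming <mds-id> is the ID of the standby MDS daemon (on Proxmox VE this is usually the node's hostname):
  systemctl stop ceph-mds@<mds-id>.service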
Confirm that only one MDS is online and is on rank 0 for your FS:
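For example, by looking at the fs map section of the cluster status output:
  ceph status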
Upgrade all remaining (active) MDS daemons and restart the standby ones in one go by restarting the whole systemd MDS-target via CLI:
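For example:
  systemctl restart ceph-mds.target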
If you had a higher rank set, you can now restore the original rank value (max_mds) for the file system instance again:
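With <fs_name> and <original_max_mds> as placeholders for your values:
  ceph fs set <fs_name> max_mds <original_max_mds>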
Unset the 'noout' flag
Once the upgrade process is finished, don't forget to unset the noout flag.
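For example:
  ceph osd unset noout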
Or via the GUI in the OSD tab (Manage Global Flags).
Notes
When restarting a MGR, log lines containing "has missing NOTIFY_TYPES member" can be ignored