Oracle DR VM - Migration Between Proxmox Nodes
Purpose: How to migrate VM 109 between Proxmox nodes while maintaining backup access
Scenario: Move VM from pveelite (10.0.20.202) to pvemini (10.0.20.201) or vice versa
📋 OVERVIEW
Current Setup:
- VM 109 runs on pveelite (10.0.20.202)
- Backups stored on pveelite: /mnt/pve/oracle-backups
- VM has mount point: qm set 109 -mp0 /mnt/pve/oracle-backups
- Mount appears in Windows as F:\ (E:\ already used)
Challenge:
- Mount points are node-local - the path /mnt/pve/oracle-backups exists only on pveelite
- If you migrate the VM to pvemini, the mount point breaks
Solution:
- Create same directory structure on destination node
- Sync backups between nodes
- Mount point works identically on new node
🔄 MIGRATION PROCEDURE
PRE-MIGRATION CHECKLIST
- VM 109 is powered OFF (skip if you plan an online migration - see Step 3, Option A)
- You have root SSH access to both Proxmox nodes
- You know which node you're migrating TO
- Backups are current (check the timestamp - see the command below)
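To confirm the backups are current, a quick look at the newest backup pieces on the source node (paths as used throughout this guide):
# From any machine with SSH access, list the five most recent backup pieces
ssh root@10.0.20.202 "ls -lt /mnt/pve/oracle-backups/ROA/autobackup/*.bkp | head -5"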
STEP 1: Prepare Destination Node (pvemini)
On pvemini (10.0.20.201):
ssh root@10.0.20.201
# Create identical directory structure
mkdir -p /mnt/pve/oracle-backups/ROA/autobackup
chmod 755 /mnt/pve/oracle-backups
chmod 755 /mnt/pve/oracle-backups/ROA
chmod 755 /mnt/pve/oracle-backups/ROA/autobackup
# Verify structure
ls -la /mnt/pve/oracle-backups/ROA/autobackup
STEP 2: Sync Backups from Source to Destination
Option A: Full Sync (first time migration)
# On pvemini, sync all backups from pveelite
rsync -avz --progress \
root@10.0.20.202:/mnt/pve/oracle-backups/ \
/mnt/pve/oracle-backups/
# This copies all backup files (~15 GB, takes 2-3 minutes on 1Gbps network)
Option B: Incremental Sync (if you already synced before)
# On pvemini, sync only new/changed files
rsync -avz --progress --update \
root@10.0.20.202:/mnt/pve/oracle-backups/ \
/mnt/pve/oracle-backups/
# Much faster - only copies new backups
Verify sync:
# Check file count matches
ssh root@10.0.20.202 "ls /mnt/pve/oracle-backups/ROA/autobackup/*.bkp | wc -l"
ls /mnt/pve/oracle-backups/ROA/autobackup/*.bkp | wc -l
# Should be same number
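File counts alone won't catch a truncated transfer. A checksum dry run (standard rsync flags) lists any file that still differs - an empty file list means the copies match byte-for-byte:
# On pvemini - no files listed means a clean sync
rsync -avc --dry-run root@10.0.20.202:/mnt/pve/oracle-backups/ /mnt/pve/oracle-backups/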
STEP 3: Migrate VM via Proxmox
Option A: Online Migration (VM stays running)
# From Proxmox CLI on source node (pveelite):
qm migrate 109 pvemini --online
# This uses live migration - VM doesn't stop
# Takes 5-10 minutes depending on RAM/disk
Option B: Offline Migration (VM must be stopped)
# Stop VM first
qm stop 109
# Migrate
qm migrate 109 pvemini
# Faster than online, but requires downtime
Option C: Via Proxmox Web UI
1. Select VM 109 on pveelite
2. Click "Migrate"
3. Select target node: pvemini
4. Choose migration type: online or offline
5. Click "Migrate"
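Whichever option you use, you can confirm where the VM now lives from any node in the cluster:
# Lists all VMs with their current node - VM 109 should show the destination
pvesh get /cluster/resources --type vm | grep 109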
STEP 4: Verify Mount Point After Migration
After migration completes:
# On pvemini, check VM config includes mount point
qm config 109 | grep mp0
# Expected output:
# mp0: /mnt/pve/oracle-backups,mp=/mnt/oracle-backups
# If missing, add it:
qm set 109 -mp0 /mnt/pve/oracle-backups,mp=/mnt/oracle-backups
STEP 5: Start VM and Verify Access
# Start VM on new node
qm start 109
# Wait for boot
sleep 180
# Check mount in Windows
ssh -p 22122 romfast@10.0.20.37 "Get-PSDrive F"
# Should show F:\ with Used/Free space
# Verify backup files accessible
ssh -p 22122 romfast@10.0.20.37 "Get-ChildItem F:\ROA\autobackup\*.bkp | Measure-Object"
# Should show backup file count
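If the fixed 3-minute sleep is too blunt, a small poll loop (same SSH details as above) waits only as long as the guest actually needs to boot:
# Poll until Windows answers on SSH, then continue
until ssh -p 22122 -o ConnectTimeout=5 romfast@10.0.20.37 "hostname" 2>/dev/null; do
  sleep 15
done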
STEP 6: Update PRIMARY Transfer Scripts
On PRIMARY (10.0.20.36):
Backup transfer scripts need to know which node to send to.
Option A: Update scripts to point to new node
# Edit transfer scripts
cd D:\rman_backup
# Find and replace in transfer scripts:
# BEFORE:
$DRHost = "10.0.20.202" # pveelite
# AFTER:
$DRHost = "10.0.20.201" # pvemini
Option B: Use DNS/hostname (RECOMMENDED)
# In transfer scripts, use hostname instead of IP:
$DRHost = "pvedr" # DNS name
# Then update DNS to point to active node:
# pvedr → 10.0.20.201 (currently pvemini)
# When you migrate back, just update DNS
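If you don't run internal DNS, a hosts-file entry on PRIMARY gives the same indirection (assumes PRIMARY resolves pvedr via its local hosts file, which is not part of the original setup):
# Append to C:\Windows\System32\drivers\etc\hosts on PRIMARY:
10.0.20.201    pvedr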
🔄 ONGOING SYNC STRATEGY
If VM Stays on New Node Long-Term
Setup automated sync from PRIMARY → new node:
Just update transfer scripts as in Step 6 above. Backups will now go directly to pvemini.
Old backups on pveelite:
- Can be deleted after verification
- Or kept as additional backup copy (recommended)
# On pveelite, cleanup old backups after 7 days
find /mnt/pve/oracle-backups/ROA/autobackup -name "*.bkp" -mtime +7 -delete
If You Migrate VM Back and Forth
Scenario: VM moves between nodes frequently
Solution 1: Sync in both directions
# Cronjob on pveelite (every 6 hours) - --update keeps an older copy from overwriting a newer one
0 */6 * * * rsync -az --update root@10.0.20.201:/mnt/pve/oracle-backups/ /mnt/pve/oracle-backups/
# Cronjob on pvemini (every 6 hours)
0 */6 * * * rsync -az --update root@10.0.20.202:/mnt/pve/oracle-backups/ /mnt/pve/oracle-backups/
Solution 2: Shared Storage (NFS/CIFS)
Use Proxmox shared storage instead of local paths:
- Setup NFS server on one node
- Both nodes mount same NFS share
- /mnt/pve/oracle-backups points to shared storage
- VM migration doesn't require backup sync
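A minimal sketch of the NFS route, assuming the share is exported from a separate path on pveelite (hypothetically /srv/oracle-backups) or from a NAS. Proxmox mounts NFS storage at /mnt/pve/<storage-id>, so a storage ID of oracle-backups reproduces the existing path on every node:
# On the NFS server (pveelite in this sketch)
apt install -y nfs-kernel-server
echo '/srv/oracle-backups 10.0.20.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra
# Once, from any cluster node - registers the share cluster-wide
pvesm add nfs oracle-backups --server 10.0.20.202 --export /srv/oracle-backups --content backup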
📊 MIGRATION CHECKLIST
Before Migration:
- VM 109 is stopped (or prepared for online migration)
- Destination node has the directory: /mnt/pve/oracle-backups/ROA/autobackup
- Backups synced to destination node (rsync completed)
- You have tested restore recently (weekly test passed)
During Migration:
- VM migration initiated (online or offline)
- Migration progress monitored (no errors)
- Migration completed successfully
After Migration:
- VM 109 shows as running on new node
- Mount point configured: qm config 109 | grep mp0
- VM started successfully
- F:\ drive accessible in Windows: Get-PSDrive F
- Backup files visible: Get-ChildItem F:\ROA\autobackup\*.bkp
- PRIMARY transfer scripts updated (point to new node IP)
- Test restore completed successfully
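For the last checklist item, RMAN can prove the backups on F:\ are restorable without touching the datafiles - a minimal smoke test, assuming the DR instance is at least mounted and its controlfile knows these backup pieces:
# On the DR VM, in a command prompt:
rman target /
RMAN> RESTORE DATABASE VALIDATE;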
⚠️ TROUBLESHOOTING
Mount Point Not Visible in VM After Migration
Symptom: F:\ drive missing in Windows after migration
Solution:
# On new node, verify mount point config
qm config 109 | grep mp0
# If missing, add it
qm set 109 -mp0 /mnt/pve/oracle-backups,mp=/mnt/oracle-backups
# Restart VM
qm stop 109
qm start 109
Backup Files Not Accessible
Symptom: F:\ exists but shows as empty
Cause: Backups not synced to new node
Solution:
# Re-sync backups from old node
rsync -avz root@10.0.20.202:/mnt/pve/oracle-backups/ /mnt/pve/oracle-backups/
# Verify files exist
ls -lh /mnt/pve/oracle-backups/ROA/autobackup/*.bkp
PRIMARY Still Sending to Old Node
Symptom: New backups not appearing on new node
Cause: Transfer scripts still point to old node IP
Solution:
Update $DRHost in transfer scripts on PRIMARY (see Step 6)
🎯 MIGRATION TIMELINE
| Task | Duration | Downtime |
|---|---|---|
| Prepare destination node | 5 min | None |
| Sync backups (full, ~15GB) | 3 min | None |
| Migrate VM (offline) | 5 min | 5 min |
| Verify and start VM | 3 min | 3 min |
| Update PRIMARY scripts | 2 min | None |
| Total | 18 min | 8 min |
With online migration: 0 minutes downtime (VM keeps running during migration)
📞 QUICK REFERENCE
Current Setup:
- Source node: pveelite (10.0.20.202)
- Destination node: pvemini (10.0.20.201)
- VM: 109 (oracle-dr-windows)
- Backup path: /mnt/pve/oracle-backups
- Windows mount: F:\ (not E:\ - already used)
Key Commands:
# Sync backups
rsync -avz root@SOURCE:/mnt/pve/oracle-backups/ /mnt/pve/oracle-backups/
# Migrate VM
qm migrate 109 DESTINATION --online
# Check mount
qm config 109 | grep mp0
# Add mount if missing
qm set 109 -mp0 /mnt/pve/oracle-backups,mp=/mnt/oracle-backups
Generated: 2025-10-09
Version: 1.0
Status: Ready for use
See Also: DR_UPGRADE_TO_CUMULATIVE_PLAN.md