Migrating from Hyper-V to Proxmox VE: a process log

Notes taken while tinkering, so they may be a bit messy, with plenty of errors and omissions.

Back up the OpenWrt configuration

  • Took screenshots of all the relevant settings as a backup

Back up the environments on the Hyper-V server

  • NextCloud and WordPress data and databases backed up and synced
  • rsync -av --delete /my-data/www/html/ root@192.168.33.131:/data-backup
  • Configuration directories backed up (a tar sketch follows this list)
  • Backed up /etc/apache2
  • Backed up /etc/mysql
  • Backed up /etc/lsyncd
  • Backed up /etc/php/7.4
  • Cron jobs backed up
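For reference, a minimal sketch of how those configuration directories and the root crontab could be archived in one pass before the migration (the archive name and destination are my own choice, not from the original notes):

# archive the config directories listed above
tar -czvf /root/hyperv-etc-backup-$(date +%Y%m%d).tar.gz \
    /etc/apache2 /etc/mysql /etc/lsyncd /etc/php/7.4
# dump the root crontab alongside it
crontab -l > /root/crontab-root.bak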

Change the test host's IP address

  • To make way for the main server, this host has to give up its current IP address
  • Edit the config file: vi /etc/network/interfaces
  • Changed from 192.168.33.3 to 192.168.33.87; the change took effect and the host is reachable at the new address (a config sketch follows below).
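A rough sketch of what the static stanza in /etc/network/interfaces looks like after the change; the interface name eth0 and the gateway address are assumptions on my part:

# /etc/network/interfaces (sketch, assuming the NIC is eth0)
auto eth0
iface eth0 inet static
    address 192.168.33.87/24
    gateway 192.168.33.1

Followed by systemctl restart networking (or an ifdown/ifup of the interface) to apply it.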

What to do with 400+ GB of video files sitting on an NTFS partition?

  • Copied part of them off for archiving and discarded the rest that I no longer cared about.

The real work begins

Started right on schedule, around 21:30 on October 28, 2021.

Shut down the Windows Server 2019 host.

Then swapped a hardware router in to replace OpenWrt, so the network kept working.

Modify the PCIe x1 slot on the B85 motherboard

That is, cut open the closed end of the PCIe x1 slot so that a PCIe x16 graphics card can be inserted.

The most surprising thing about this B85 board paired with the E3-1231 v3 is that it boots and reaches the OS just fine with no graphics card installed at all!

My other box, a Z68 with an E3-1230 v2, flatly refuses to boot without a graphics card.

The point of modifying the x1 slot is to make GPU passthrough convenient later if I ever want to play with it. It only runs at x1 speed, but the goal of passing a GPU through isn't gaming anyway.

When I'm not doing GPU passthrough, I simply pull the card and run headless, which at least saves a little power.

  • Wasted quite a bit of time trying to cut the slot open with a utility knife; the plastic ASUS uses for its PCIe slots is ridiculously hard.
  • Only managed a small notch that way, so I finally reached for a soldering iron and melted the end of the x1 slot completely open.
  • The PCIe x16 graphics card slid right in, and the machine booted and displayed normally. Perfect!

The time is now around 10:30 PM.

Time to make a Ghost backup of Windows Server 2019. The C: drive holds a bit over 40 GB, and maximum-compression mode is really slow.

  • Make a backup just in case: if PVE goes badly wrong, I can still roll back to Hyper-V quickly.
  • The Windows Server 2019 backup disk will not be attached to PVE for now; another disk that already has the data synced onto it will be attached instead.
  • Once PVE is up and the NextCloud data has been restored to the SSD, I'll consider reformatting the old backup disk as LVM-thin.
  • While waiting for the Ghost backup, the UPS came to mind: how do I hook it up to PVE? Another headache (a rough NUT sketch follows below).
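I did not actually set this up during the migration, but the usual route on a Debian-based host such as PVE is Network UPS Tools (NUT). A rough standalone sketch, assuming a USB-connected UPS; the UPS name, user and password below are placeholders:

apt install nut

# /etc/nut/nut.conf
MODE=standalone

# /etc/nut/ups.conf
[myups]
    driver = usbhid-ups
    port = auto

# /etc/nut/upsd.users
[monuser]
    password = secret
    upsmon master

# /etc/nut/upsmon.conf
MONITOR myups@localhost 1 monuser secret master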

Windows Server 2019 system-drive backup finished at 23:20

  • Copied the 25 GB Ghost image of the C: drive to a portable disk; 30 MB/s is painfully slow. What a junk portable hard drive!
  • What about the Win7 VM that used to run on Hyper-V, with quite a few downloaded movies inside? Throw it away?
  • Torn. Migrating from Windows to Linux means different filesystems. Annoying.

Now on to installing PVE

PVE installation started at 23:38

PVE installation finished at 0:00 on October 29, 2021

Configuring PVE

The most basic configuration

  • vi editor settings: /etc/vim/vimrc.tiny
  • Switch the APT sources to the Tsinghua mirror: vi /etc/apt/sources.list
  • Disable the PVE enterprise repository: vi /etc/apt/sources.list.d/pve-enterprise.list
  • Point the CT template source at the Tsinghua mirror: /usr/share/perl5/PVE/APLInfo.pm (see the sketch after this list)
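For the CT template source, the approach I believe the Tsinghua mirror documents is a simple search-and-replace in APLInfo.pm; back the file up first (the mirror URL layout here is my assumption, check it against the mirror's help page):

cp /usr/share/perl5/PVE/APLInfo.pm /usr/share/perl5/PVE/APLInfo.pm.backup
sed -i 's|http://download.proxmox.com|https://mirrors.tuna.tsinghua.edu.cn/proxmox|g' /usr/share/perl5/PVE/APLInfo.pm
# restart the daemon (or reboot) so the new template index URL takes effect
systemctl restart pvedaemon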

Update the system

apt update
apt upgrade
reboot

Add the SSD as LVM-thin storage

  • Since the SSD was previously used under Windows, wipe it straight from the PVE web UI: Disks ⇢ Wipe Disk

Set up the SSD

root@fzpve:~# fdisk -l
Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: CT1000MX500SSD1 
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 119.24 GiB, 128035676160 bytes, 250069680 sectors
Disk model: SanDisk SDSSDHP1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: A91B586C-3EDC-4722-81A6-525F1F2A13FF

Device       Start       End   Sectors   Size Type
/dev/sda1       34      2047      2014  1007K BIOS boot
/dev/sda2     2048   1050623   1048576   512M EFI System
/dev/sda3  1050624 250069646 249019023 118.7G Linux LVM


Disk /dev/mapper/pve-swap: 4 GiB, 4294967296 bytes, 8388608 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-root: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

# Create the physical volume (PV)
pvcreate /dev/sdb

root@fzpve:~# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.

# Create the volume group (VG)
vgcreate vgssd /dev/sdb

root@fzpve:~# vgcreate vgssd /dev/sdb
  Volume group "vgssd" successfully created

# Check the available space on the SSD
root@fzpve:~# pvs
  PV         VG    Fmt  Attr PSize    PFree  
  /dev/sda3  pve   lvm2 a--  <118.74g   4.74g
  /dev/sdb   vgssd lvm2 a--   931.51g 931.51g

# Create the thin pool; allocate 920 GB and leave the remainder as PFree space
# lvcreate -L 100G -n <poolNAME> <VGNAME>
# lvconvert --type thin-pool <VGNAME>/<poolNAME>
lvcreate -L 920G -n ssdpool vgssd
lvconvert --type thin-pool vgssd/ssdpool

root@fzpve:~# lvcreate -L 920G -n ssdpool vgssd
  Logical volume "ssdpool" created.
root@fzpve:~# lvconvert --type thin-pool vgssd/ssdpool
  Thin pool volume with chunk size 512.00 KiB can address at most 126.50 TiB of data.
  WARNING: Pool zeroing and 512.00 KiB large chunk size slows down thin provisioning.
  WARNING: Consider disabling zeroing (-Zn) or using smaller chunk size (<512.00 KiB).
  WARNING: Converting vgssd/ssdpool to thin pool's data volume with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert vgssd/ssdpool? [y/n]: y
  Converted vgssd/ssdpool to thin pool.

Grow the tmeta volume:

# Check the free space in the volume groups
root@fzpve:~# vgs
  VG    #PV #LV #SN Attr   VSize    VFree  
  pve     1   3   0 wz--n- <118.74g   4.74g
  vgssd   1   1   0 wz--n-  931.51g <11.29g
# Check the [<poolNAME>_tmeta] size
root@fzpve:~# lvs -a
  LV              VG    Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve   twi-a-tz-- <88.00g             0.00   1.61                            
  [data_tdata]    pve   Twi-ao---- <88.00g                                                    
  [data_tmeta]    pve   ewi-ao----   1.00g                                                    
  [lvol0_pmspare] pve   ewi-------   1.00g                                                    
  root            pve   -wi-ao----  20.00g                                                    
  swap            pve   -wi-ao----   4.00g                                                    
  [lvol0_pmspare] vgssd ewi------- 116.00m                                                    
  ssdpool         vgssd twi-a-tz-- 920.00g             0.00   10.42                           
  [ssdpool_tdata] vgssd Twi-ao---- 920.00g                                                    
  [ssdpool_tmeta] vgssd ewi-ao---- 116.00m    

# Grow the [<poolNAME>_tmeta] volume
# lvextend --poolmetadatasize +900M <VGNAME>/<poolNAME>
lvextend --poolmetadatasize +140M vgssd/ssdpool

root@fzpve:~# lvextend --poolmetadatasize +140M vgssd/ssdpool
  Size of logical volume vgssd/ssdpool_tmeta changed from 116.00 MiB (29 extents) to 256.00 MiB (64 extents).
  Logical volume vgssd/ssdpool_tmeta successfully resized.
# Check the [<poolNAME>_tmeta] size again
root@fzpve:~# lvs -a
  LV              VG    Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve   twi-a-tz-- <88.00g             0.00   1.61                            
  [data_tdata]    pve   Twi-ao---- <88.00g                                                    
  [data_tmeta]    pve   ewi-ao----   1.00g                                                    
  [lvol0_pmspare] pve   ewi-------   1.00g                                                    
  root            pve   -wi-ao----  20.00g                                                    
  swap            pve   -wi-ao----   4.00g                                                    
  [lvol0_pmspare] vgssd ewi------- 256.00m                                                    
  ssdpool         vgssd twi-a-tz-- 920.00g             0.00   6.45                            
  [ssdpool_tdata] vgssd Twi-ao---- 920.00g                                                    
  [ssdpool_tmeta] vgssd ewi-ao---- 256.00m 

Add the storage to PVE:

# Use the thin pool created above
# pvesm add lvmthin <STORAGE_ID> --vgname <VGNAME> --thinpool <poolNAME>
pvesm add lvmthin ssd-data --vgname vgssd --thinpool ssdpool

# Check the storage status
pvesm status

root@fzpve:~# pvesm status
Name             Type     Status           Total            Used       Available        %
local             dir     active        20466256         2617908        16783388   12.79%
local-lvm     lvmthin     active        92270592               0        92270592    0.00%
ssd-data      lvmthin     active       964689920               0       964689920    0.00%

Hardware passthrough configuration

The time is now 00:55.

Hardware passthrough needs the following configuration steps:

  • Edit the GRUB config
  • Update GRUB
  • Load the kernel modules
  • Refresh the initramfs
  • Reboot the host
  • Verify that IOMMU is enabled

Edit the GRUB config:

   vi /etc/default/grub
   # Change GRUB_CMDLINE_LINUX_DEFAULT="quiet" as follows (for Intel CPUs)
   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

Update GRUB:

update-grub

root@fzpve:~# update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.11.22-4-pve
Found initrd image: /boot/initrd.img-5.11.22-4-pve
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
Adding boot menu entry for EFI firmware configuration
done

Load the kernel modules:

   vi /etc/modules
   # Add the following lines
   vfio
   vfio_iommu_type1
   vfio_pci
   vfio_virqfd

Refresh the initramfs:

update-initramfs -u -k all

root@fzpve:~# update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-5.11.22-4-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.

Then reboot the host:

reboot

Verify that IOMMU is enabled:

dmesg | grep 'remapping'

root@fzpve:~# dmesg | grep 'remapping'
[    0.152366] DMAR-IR: Enabled IRQ remapping in xapic mode
[    0.152367] x2apic: IRQ remapping doesn't support X2APIC mode
  • If the command prints a line like DMAR-IR: Enabled IRQ remapping in xapic mode (or x2apic mode), interrupt remapping is enabled.
  • Then run find /sys/kernel/iommu_groups/ -type l; seeing a long list of IOMMU groups confirms that passthrough is available.
root@fzpve:~# find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/7/devices/0000:00:1c.0
/sys/kernel/iommu_groups/5/devices/0000:00:1a.0
/sys/kernel/iommu_groups/3/devices/0000:00:16.0
/sys/kernel/iommu_groups/11/devices/0000:02:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1
/sys/kernel/iommu_groups/8/devices/0000:00:1c.1
/sys/kernel/iommu_groups/6/devices/0000:00:1b.0
/sys/kernel/iommu_groups/4/devices/0000:00:19.0
/sys/kernel/iommu_groups/12/devices/0000:03:00.0
/sys/kernel/iommu_groups/12/devices/0000:03:00.1
/sys/kernel/iommu_groups/2/devices/0000:00:14.0
/sys/kernel/iommu_groups/10/devices/0000:00:1f.2
/sys/kernel/iommu_groups/10/devices/0000:00:1f.0
/sys/kernel/iommu_groups/10/devices/0000:00:1f.3
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/9/devices/0000:00:1d.0

Passthrough configuration done. The time is now 01:09.

Deploy OpenWrt in a VM

First create a VM. The provisional hardware configuration is as follows (an equivalent CLI sketch follows this list):

  • Memory: 512 MB
  • CPU: 2 cores [host,flags=+aes]
  • BIOS: OVMF (UEFI)
  • Display: Standard VGA (switch it to none once installation and configuration are finished)
  • Machine: q35 (latest)
  • SCSI controller: VirtIO SCSI
  • Hard disk: add any 1 GB placeholder; the real disk will be imported from the img image, and this placeholder gets deleted later.
  • Network device (net0): add any NIC for now; it will be deleted later and replaced by the passthrough NIC.
  • EFI disk: required, and must not be deleted.
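I created the VM in the web UI, but roughly the same hardware profile can be expressed on the command line with qm. A sketch assuming vmid 100 and local-lvm storage; the 1 GB scsi0 disk and the virtio NIC are the throwaway placeholders described above:

qm create 100 \
  --name OpenWrt \
  --memory 512 \
  --cores 2 --cpu host,flags=+aes \
  --bios ovmf --machine q35 \
  --scsihw virtio-scsi-pci \
  --efidisk0 local-lvm:1 \
  --scsi0 local-lvm:1 \
  --net0 virtio,bridge=vmbr0 \
  --vga std \
  --ostype l26 --onboot 1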

Then download OpenWrt

# Download it directly on the PVE host
wget https://downloads.openwrt.org/releases/21.02.1/targets/x86/64/openwrt-21.02.1-x86-64-generic-ext4-combined-efi.img.gz

root@fzpve:~# sha256sum openwrt-21.02.1-x86-64-generic-ext4-combined-efi.img.gz > openwrt-21.02.1-x86-64-generic-ext4-combined-efi.img.gz.sha256sum
root@fzpve:~# cat openwrt-21.02.1-x86-64-generic-ext4-combined-efi.img.gz.sha256sum 
005ab565649e10fec4220ee6cc96f2b35a89c01abf2a6d79ccc990b2f39b476a  openwrt-21.02.1-x86-64-generic-ext4-combined-efi.img.gz
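# Optional sanity check (my own addition, not in the original notes): verify against the
# checksum list published next to the image instead of just recording a local hash.
wget https://downloads.openwrt.org/releases/21.02.1/targets/x86/64/sha256sums
sha256sum --ignore-missing -c sha256sums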

# Decompress (note the release number: it should match the 21.02.1 file downloaded above)
gzip -d openwrt-21.02.1-x86-64-generic-ext4-combined-efi.img.gz

# Import the img file directly as a VM virtual disk; adjust the vmid to match your setup.
# qm importdisk <vmid> <source> <storage> [OPTIONS]
qm importdisk 100 /root/openwrt-21.02.1-x86-64-generic-ext4-combined-efi.img local-lvm

root@fzpve:~# qm importdisk 100 /root/openwrt-21.02.1-x86-64-generic-ext4-combined-efi.img local-lvm
importing disk '/root/openwrt-21.02.1-x86-64-generic-ext4-combined-efi.img' to VM 100 ...
  Rounding up size to full physical extent 124.00 MiB
  Logical volume "vm-100-disk-2" created.
transferred 0.0 B of 120.5 MiB (0.00%)
transferred 2.0 MiB of 120.5 MiB (1.66%)
transferred 4.0 MiB of 120.5 MiB (3.32%)
transferred 6.0 MiB of 120.5 MiB (4.98%)
transferred 50.0 MiB of 120.5 MiB (41.48%)
......
transferred 104.0 MiB of 120.5 MiB (86.29%)
transferred 106.0 MiB of 120.5 MiB (87.94%)
transferred 108.0 MiB of 120.5 MiB (89.60%)
transferred 110.0 MiB of 120.5 MiB (91.26%)
transferred 112.0 MiB of 120.5 MiB (92.92%)
transferred 114.0 MiB of 120.5 MiB (94.58%)
transferred 120.5 MiB of 120.5 MiB (100.00%)
Successfully imported disk as 'unused0:local-lvm:vm-100-disk-2'

Finally, edit the VM configuration

  • Hardware
  • Delete the NIC created with the VM and add a PCI device: pass one NIC port through as LAN first; the WAN port is added after installation is done.
  • Delete the placeholder disk created with the VM and attach the virtual disk imported by the command above.
  • Leave the rest at the defaults; check the configuration and move on.
  • Options
  • Start at boot: Yes
  • OS type: Linux 5.x – 2.6 Kernel
  • Boot order: scsi0 (make sure this is the imported OpenWrt disk)
  • Leave the remaining options at the defaults.

Pass 82576 port 0 through as the LAN port

Upload gparted-live-1.0.0-3-amd64.iso to resize the OpenWrt partitions

The official OpenWrt image ships with a root partition of only about 100 MB, so enlarging it makes it much more comfortable to use.

Resizing done; shut down the VM.

Remove the CD-ROM drive.

Back in Options, set the boot order to scsi0.

Start the VM and test OpenWrt.

I found that when passing through the dual-port 82576 NIC, both ports end up passed to OpenWrt regardless of whether I pick 0 or 1; there is no way to pass through just one port.

Maybe I missed some passthrough configuration step. Judging by the IOMMU listing above, though, both functions of the card (0000:03:00.0 and 0000:03:00.1) sit in group 12, and devices in the same IOMMU group can only be passed through together. Either way it's good enough, so I won't dwell on it for now.

The time is now 2:12.

Power off, install the hard drives, and copy the data!!

The time is now 3:08.

Found a lot of errors in the PVE syslog

Oct 29 03:00:41 fzpve pmxcfs[1065]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-storage/fzpve/local-lvm: /var/lib/rrdcached/db/pve2-storage/fzpve/local-lvm: illegal attempt to update using time 1635447641 when last update time is 1635465615 (minimum one second step)
Oct 29 03:00:51 fzpve pmxcfs[1065]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-node/fzpve: /var/lib/rrdcached/db/pve2-node/fzpve: illegal attempt to update using time 1635447651 when last update time is 1635465615 (minimum one second step)
Oct 29 03:00:51 fzpve pmxcfs[1065]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-storage/fzpve/local-lvm: /var/lib/rrdcached/db/pve2-storage/fzpve/local-lvm: illegal attempt to update using time 1635447651 when last update time is 1635465615 (minimum one second step)
Oct 29 03:00:51 fzpve pmxcfs[1065]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-storage/fzpve/local: /var/lib/rrdcached/db/pve2-storage/fzpve/local: illegal attempt to update using time 1635447651 when last update time is 1635465615 (minimum one second step)
Oct 29 03:01:00 fzpve systemd[1]: Starting Proxmox VE replication runner...
Oct 29 03:01:00 fzpve systemd[1]: pvesr.service: Succeeded.
Oct 29 03:01:00 fzpve systemd[1]: Finished Proxmox VE replication runner.
Oct 29 03:01:01 fzpve pmxcfs[1065]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-node/fzpve: /var/lib/rrdcached/db/pve2-node/fzpve: illegal attempt to update using time 1635447661 when last update time is 1635465615 (minimum one second step)
Oct 29 03:01:02 fzpve pmxcfs[1065]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-storage/fzpve/local: /var/lib/rrdcached/db/pve2-storage/fzpve/local: illegal attempt to update using time 1635447661 when last update time is 1635465615 (minimum one second step)
Oct 29 03:01:02 fzpve pmxcfs[1065]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-storage/fzpve/local-lvm: /var/lib/rrdcached/db/pve2-storage/fzpve/local-lvm: illegal attempt to update using time 1635447661 when last update time is 1635465615 (minimum one second step)
Oct 29 03:01:11 fzpve pmxcfs[1065]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-node/fzpve: /var/lib/rrdcached/db/pve2-node/fzpve: illegal attempt to update using time 1635447671 when last update time is 1635465615 (minimum one second step)
Oct 29 03:01:31 fzpve pmxcfs[1065]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-storage/fzpve/local: opening '/var/lib/rrdcached/db/pve2-storage/fzpve/local': No such file or directory
Oct 29 03:01:31 fzpve pmxcfs[1065]: [status] notice: RRD create error /var/lib/rrdcached/db/pve2-storage/fzpve/ssd-data: Cannot create temporary file
Oct 29 03:01:31 fzpve pmxcfs[1065]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-storage/fzpve/ssd-data: opening '/var/lib/rrdcached/db/pve2-storage/fzpve/ssd-data': No such file or directory
Oct 29 03:01:31 fzpve pmxcfs[1065]: [status] notice: RRD create error /var/lib/rrdcached/db/pve2-storage/fzpve/local-lvm: Cannot create temporary file
Oct 29 03:01:31 fzpve pmxcfs[1065]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-storage/fzpve/local-lvm: opening '/var/lib/rrdcached/db/pve2-storage/fzpve/local-lvm': No such file or directory
Oct 29 03:01:37 fzpve systemd[1]: Starting LSB: start or stop rrdcached...
Oct 29 03:01:37 fzpve rrdcached[7353]: rrdcached started.
Oct 29 03:01:37 fzpve systemd[1]: Started LSB: start or stop rrdcached.
Oct 29 03:01:45 fzpve systemd[1]: Stopping The Proxmox VE cluster filesystem...
Oct 29 03:01:45 fzpve pmxcfs[1065]: [main] notice: teardown filesystem
Oct 29 03:01:46 fzpve systemd[4074]: etc-pve.mount: Succeeded.
Oct 29 03:01:46 fzpve systemd[1]: etc-pve.mount: Succeeded.
Oct 29 03:01:47 fzpve pmxcfs[1065]: [main] notice: exit proxmox configuration filesystem (0)
Oct 29 03:01:47 fzpve systemd[1]: pve-cluster.service: Succeeded.
Oct 29 03:01:47 fzpve systemd[1]: Stopped The Proxmox VE cluster filesystem.
Oct 29 03:01:47 fzpve systemd[1]: pve-cluster.service: Consumed 2.237s CPU time.
Oct 29 03:01:47 fzpve systemd[1]: Starting The Proxmox VE cluster filesystem...
Oct 29 03:01:48 fzpve systemd[1]: Started The Proxmox VE cluster filesystem.
Oct 29 03:01:48 fzpve systemd[1]: Condition check resulted in Corosync Cluster Engine being skipped.
Oct 29 03:02:00 fzpve systemd[1]: Starting Proxmox VE replication runner...
Oct 29 03:02:00 fzpve systemd[1]: pvesr.service: Succeeded.
Oct 29 03:02:00 fzpve systemd[1]: Finished Proxmox VE replication runner.

After running the commands below, the errors finally stopped. (The RRD errors come from rrdcached refusing updates whose timestamps are earlier than the last recorded one; the system clock had apparently jumped backwards, so the stale RRD database is moved aside and rebuilt.)

# Wrong directory on the first attempt; this approach failed
root@fzpve:~# cd /var/lib/rrdcached/
root@fzpve:/var/lib/rrdcached# systemctl stop rrdcached
root@fzpve:/var/lib/rrdcached# mv rrdcached rrdcached.bck
mv: cannot stat 'rrdcached': No such file or directory
# This is the correct way
root@fzpve:/var/lib/rrdcached# cd /var/lib/
root@fzpve:/var/lib# systemctl stop rrdcached
root@fzpve:/var/lib# mv rrdcached rrdcached.bck
root@fzpve:/var/lib# systemctl start rrdcached
root@fzpve:/var/lib# systemctl restart pve-cluster

Switch the OpenWrt package feeds to the Tsinghua mirror:

# openwrt
#src/gz openwrt_core https://downloads.openwrt.org/releases/21.02.1/targets/x86/64/packages
#src/gz openwrt_base https://downloads.openwrt.org/releases/21.02.1/packages/x86_64/base
#src/gz openwrt_luci https://downloads.openwrt.org/releases/21.02.1/packages/x86_64/luci
#src/gz openwrt_packages https://downloads.openwrt.org/releases/21.02.1/packages/x86_64/packages
#src/gz openwrt_routing https://downloads.openwrt.org/releases/21.02.1/packages/x86_64/routing
#src/gz openwrt_telephony https://downloads.openwrt.org/releases/21.02.1/packages/x86_64/telephony

# tsinghua
src/gz openwrt_core https://mirrors.tuna.tsinghua.edu.cn/openwrt/releases/21.02.1/targets/x86/64/packages
src/gz openwrt_base https://mirrors.tuna.tsinghua.edu.cn/openwrt/releases/21.02.1/packages/x86_64/base
src/gz openwrt_luci https://mirrors.tuna.tsinghua.edu.cn/openwrt/releases/21.02.1/packages/x86_64/luci
src/gz openwrt_packages https://mirrors.tuna.tsinghua.edu.cn/openwrt/releases/21.02.1/packages/x86_64/packages
src/gz openwrt_routing https://mirrors.tuna.tsinghua.edu.cn/openwrt/releases/21.02.1/packages/x86_64/routing
src/gz openwrt_telephony https://mirrors.tuna.tsinghua.edu.cn/openwrt/releases/21.02.1/packages/x86_64/telephony

Prepare to download a CT template

Only a Debian 10 template was listed, though.

Run the following command to update the template index:

pveam update

After that, Debian 11 shows up normally.

The CT is installed and the basic configuration is done; the AMP stack is not installed yet. (A rough CLI sketch of the template download and container creation follows below.)
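The template download and container creation were done in the web UI; an equivalent CLI sketch is below. The template filename reflects what I believe was current at the time, and the settings mirror the container config shown later, so treat it as illustrative rather than a transcript:

pveam available --section system | grep debian-11
pveam download local debian-11-standard_11.0-1_amd64.tar.gz
pct create 101 local:vztmpl/debian-11-standard_11.0-1_amd64.tar.gz \
  --hostname Debian11-NC \
  --cores 4 --memory 5120 --swap 2048 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.33.5/24,gw=192.168.33.1,firewall=1 \
  --features nesting=1 \
  --unprivileged 1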

It's already 3:50.

So tired!

Shut down, wait a few minutes, then power back on and check whether the PVE log still shows errors.

Finally normal: no more error messages in syslog.

Mount the backup disk into the CT container and copy the data

Add the WD 2 TB disk as PVE storage

pvesm add lvmthin wd-backup --vgname vgwd2t --thinpool databackup

Add a mount point to the LXC container

It has to be added by editing the config file, because I'd forgotten the size of the virtual disk, or maybe I just got the command wrong!

arch: amd64
cores: 4
features: nesting=1
hostname: Debian11-NC
memory: 5120
mp0: wd-backup:vm-100-disk-0,mp=/data-backup,replicate=0,ro=1,size=1000G
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.33.1,hwaddr=0A:FD:83:CE:3B:5A,ip=192.168.33.5/24,type=veth
ostype: debian
rootfs: local-lvm:vm-101-disk-0,size=8G
swap: 2048
unprivileged: 1
# The line to add: unused0: <STORAGE_ID>:<virtual disk>
unused0: local-lvm:vm-101-disk-1 

Then attach the disk from the web UI management page, and the virtual disk's capacity is detected automatically.

The resulting mount point capacity is puzzling: the mount point shows 850G while the virtual disk reports 912.68G used.

The SSD pool is 920G, so doesn't that roughly add up?

The time is now 5:00.

Install rsync; it isn't present by default.

Run the data sync:

rsync -av --delete /data-backup/ /web-data/www/

Data sync complete.

Configuring DDNS on OpenWrt is a bit involved.

It needs a custom script.

Path of the Aliyun script:

/usr/lib/ddns/update_aliyun_com.sh

Too tired; I'll leave this for later. The time is now 7:00.

October 29, 2021, 17:33

Continuing to study the pct management commands for LXC containers, the goal being to manage LXC from the command line and so automate snapshots.

When snapshotting, I only want to cover the LXC rootfs and the backup HDD; the SSD should not be part of the snapshot.
So before the snapshot the SSD mount point has to be removed, then the snapshot command runs, and once it's done the SSD gets added back.

Pay special attention: never delete an unused disk from the container's Resources page in the web UI!

That operation deletes the virtual disk itself!!!

The pct management commands can be explored through the built-in help

# Enter a container from the PVE host
# pct enter <vmid>
pct enter 103
# type exit to leave the container

# General help for pct
pct help

# Help for a specific subcommand
pct help set

The pct commands I need

  • The commands needed to implement automatic snapshots (a combined script sketch follows these examples):
# Shut down a container
pct shutdown <vmid> [OPTIONS]

root@fzpve:~# pct help shutdown
USAGE: pct shutdown <vmid> [OPTIONS]

  Shutdown the container. This will trigger a clean shutdown of the
  container, see lxc-stop(1) for details.

  <vmid>     <integer> (1 - N)

             The (unique) ID of the VM.

  -forceStop <boolean>   (default=0)

             Make sure the Container stops.

  -timeout   <integer> (0 - N)   (default=60)

             Wait maximal timeout seconds.

# Example: shut down container 101
pct shutdown 101

# Remove a container mount point mp[0,1,2,...]
pct set <vmid> -delete mp[n]

root@fzpve:~# pct help set
USAGE: pct set <vmid> [OPTIONS]

  Set container options.

  <vmid>     <integer> (1 - N)

             The (unique) ID of the VM.

  ......

  -delete    <string>

             A list of settings you want to delete.

  ......

  -mp[n]     [volume=]<volume> ,mp=<Path> [,acl=<1|0>] [,backup=<1|0>]
             [,mountoptions=<opt[;opt...]>] [,quota=<1|0>]
             [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>]
             [,size=<DiskSize>]

             Use volume as container mount point. Use the special syntax
             STORAGE_ID:SIZE_IN_GiB to allocate a new volume.

  -unused[n] [volume=]<volume>

             Reference to unused volumes. This is used internally, and
             should not be modified manually.


# Example: remove mount point mp0. Note down mp0's options from the config file first (e.g. mp=/data-backup,replicate=0,backup=0) so it can be re-added later.
pct set 101 -delete mp0

# Testing the unused[n] option: only the command below successfully added the disk back
pct set 101 -unused0 wd-backup:vm-100-disk-0 -mp0 wd-backup:vm-100-disk-0,mp=/data-backu
### But the -unused0 wd-backup:vm-100-disk-0 part of that command is unnecessary.

# This command alone is enough to re-add the deleted mount point
pct set 101 -mp0 wd-backup:vm-100-disk-0,mp=/data-backup

# Take a container snapshot
pct snapshot <vmid> <snapname> [OPTIONS]

root@fzpve:~# pct help snapshot
USAGE: pct snapshot <vmid> <snapname> [OPTIONS]

  Snapshot a container.

  <vmid>     <integer> (1 - N)

             The (unique) ID of the VM.

  <snapname> <string>

             The name of the snapshot.

  -description <string>

             A textual description or comment.

# Example: snapshot container 101
pct snapshot 101 debian_snap20211029

# After the snapshot, of course, re-add the mount point that was just removed
# pct set <vmid> -mp[n] <STORAGE_ID>:<VM-DISK-NAME>,mp=<Path>,backup=<1|0>,replicate=<1|0>
pct set 101 -mp0 wd-backup:vm-100-disk-0,mp=/data-backup,backup=0,replicate=0

# Start the container
# pct start <vmid>
pct start 100
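Putting those pieces together, a minimal sketch of the automatic-snapshot flow described above. The vmid, mount-point options and snapshot naming are taken from the examples; error handling is deliberately left out:

#!/bin/bash
# Detach the mount point that should stay out of the snapshot, snapshot the
# container, then re-attach the mount point and start the container again.
VMID=101
SNAP="debian_snap$(date +%Y%m%d)"

pct shutdown "$VMID" -timeout 120
pct set "$VMID" -delete mp0
pct snapshot "$VMID" "$SNAP"
pct set "$VMID" -mp0 wd-backup:vm-100-disk-0,mp=/data-backup,backup=0,replicate=0
pct start "$VMID"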

The time is now 20:20.

Configured one LXC container as a template and made a backup of it.

Then made two full clones based on that template, one for NextCloud and one for OnlyOffice.

Time to deploy certificates; first find out what PVE itself uses

  • PVE apparently has an integrated ACME client?
  • But what is this built-in ACME thing anyway? I have no idea how to use it.

Install acme.sh the same old way as before

  • Install it on PVE, then sync the certificates out to each LXC
# Install acme.sh on PVE
curl  https://get.acme.sh | sh
## The install failed, and a retry gave the same error (gtar not found, archive not in gzip format); it seems PVE will have to use its integrated ACME after all
root@fzpve:~# curl  https://get.acme.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   937    0   937    0     0    795      0 --:--:--  0:00:01 --:--:--   795
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  203k  100  203k    0     0    808      0  0:04:18  0:04:18 --:--:-- 67051
[Fri 29 Oct 2021 09:33:46 PM CST] Installing from online archive.
[Fri 29 Oct 2021 09:33:46 PM CST] Downloading https://github.com/acmesh-official/acme.sh/archive/master.tar.gz
[Fri 29 Oct 2021 09:37:56 PM CST] Extracting master.tar.gz

gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
sh: 6676: gtar: not found
[Fri 29 Oct 2021 09:37:56 PM CST] Extraction error.

PVE's integrated ACME apparently requires registering an account, and it looks a bit fiddly

So let's try installing acme.sh inside the LXC container instead

The LXC has no curl, so install it

apt install curl

Then install acme.sh

root@Debian11-Cloud:~# curl  https://get.acme.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   937    0   937    0     0    264      0 --:--:--  0:00:03 --:--:--   264
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  203k  100  203k    0     0   174k      0  0:00:01  0:00:01 --:--:--  174k
[Fri 29 Oct 2021 10:56:08 PM CST] Installing from online archive.
[Fri 29 Oct 2021 10:56:08 PM CST] Downloading https://github.com/acmesh-official/acme.sh/archive/master.tar.gz
[Fri 29 Oct 2021 10:56:12 PM CST] Extracting master.tar.gz
[Fri 29 Oct 2021 10:56:12 PM CST] It is recommended to install socat first.
[Fri 29 Oct 2021 10:56:12 PM CST] We use socat for standalone server if you use standalone mode.
[Fri 29 Oct 2021 10:56:12 PM CST] If you don't use standalone mode, just ignore this warning.
[Fri 29 Oct 2021 10:56:12 PM CST] Installing to /root/.acme.sh
[Fri 29 Oct 2021 10:56:12 PM CST] Installed to /root/.acme.sh/acme.sh
[Fri 29 Oct 2021 10:56:12 PM CST] Installing alias to '/root/.bashrc'
[Fri 29 Oct 2021 10:56:12 PM CST] OK, Close and reopen your terminal to start using acme.sh
[Fri 29 Oct 2021 10:56:12 PM CST] Installing cron job
no crontab for root
no crontab for root
[Fri 29 Oct 2021 10:56:12 PM CST] Good, bash is found, so change the shebang to use bash as preferred.
[Fri 29 Oct 2021 10:56:13 PM CST] OK
[Fri 29 Oct 2021 10:56:13 PM CST] Install success!

Configure the API key

export Ali_Key="L************S"
export Ali_Secret="O***************************F"

root@Debian11-Cloud:~# export Ali_Key="L************S"
root@Debian11-Cloud:~# export Ali_Secret="O**********************F"

Request the certificate

Requesting a wildcard certificate.

/root/.acme.sh/acme.sh --issue --dns dns_ali -d sgtfz.top -d *.sgtfz.top
### This also fails to issue a certificate; an account has to be registered first.
root@Debian11-Cloud:~# /root/.acme.sh/acme.sh --issue --dns dns_ali -d sgtfz.top -d *.sgtfz.top
[Fri 29 Oct 2021 11:04:00 PM CST] Using CA: https://acme.zerossl.com/v2/DV90
[Fri 29 Oct 2021 11:04:00 PM CST] Create account key ok.
[Fri 29 Oct 2021 11:04:00 PM CST] No EAB credentials found for ZeroSSL, let's get one
[Fri 29 Oct 2021 11:04:00 PM CST] acme.sh is using ZeroSSL as default CA now.
[Fri 29 Oct 2021 11:04:00 PM CST] Please update your account with an email address first.
[Fri 29 Oct 2021 11:04:00 PM CST] acme.sh --register-account -m my@example.com
[Fri 29 Oct 2021 11:04:00 PM CST] See: https://github.com/acmesh-official/acme.sh/wiki/ZeroSSL.com-CA
[Fri 29 Oct 2021 11:04:00 PM CST] Please add '--debug' or '--log' to check more details.
[Fri 29 Oct 2021 11:04:00 PM CST] See: https://github.com/acmesh-official/acme.sh/wiki/How-to-debug-acme.sh

### Register a ZeroSSL account
/root/.acme.sh/acme.sh --register-account -m sgtdjfz@live.cn --server zerossl
### Registration done
root@Debian11-Cloud:~# /root/.acme.sh/acme.sh --register-account -m sgtdjfz@live.cn --server zerossl
[Fri 29 Oct 2021 11:34:08 PM CST] No EAB credentials found for ZeroSSL, let's get one
[Fri 29 Oct 2021 11:34:09 PM CST] Registering account: https://acme.zerossl.com/v2/DV90
[Fri 29 Oct 2021 11:34:13 PM CST] Registered
[Fri 29 Oct 2021 11:34:13 PM CST] ACCOUNT_THUMBPRINT='QGg******************_cPkL6QhfI'

## Set the default CA to ZeroSSL
/root/.acme.sh/acme.sh --set-default-ca  --server zerossl

## Request the certificate again... damn, I used the wrong domain!
/root/.acme.sh/acme.sh --issue --dns dns_ali -d sgtfz.cn -d *.sgtfz.cn

## Oops!
## Does this mean I have to sort out a public certificate for sgtfz.cn again? Ignoring it for now.

# Carry on and request the certificate for sgtfz.top!!
/root/.acme.sh/acme.sh --issue --dns dns_ali -d sgtfz.top -d *.sgtfz.top

-----END CERTIFICATE-----
[Fri 29 Oct 2021 11:47:14 PM CST] Your cert is in: /root/.acme.sh/sgtfz.top/sgtfz.top.cer
[Fri 29 Oct 2021 11:47:14 PM CST] Your cert key is in: /root/.acme.sh/sgtfz.top/sgtfz.top.key
[Fri 29 Oct 2021 11:47:14 PM CST] The intermediate CA cert is in: /root/.acme.sh/sgtfz.top/ca.cer
[Fri 29 Oct 2021 11:47:14 PM CST] And the full chain certs is there: /root/.acme.sh/sgtfz.top/fullchain.cer

## Remove the mistakenly issued sgtfz.cn certificate locally
/root/.acme.sh/acme.sh --remove -d sgtfz.cn -d *.sgtfz.cn

root@Debian11-Cloud:~# /root/.acme.sh/acme.sh --remove -d sgtfz.cn -d *.sgtfz.cn
[Fri 29 Oct 2021 11:53:50 PM CST] sgtfz.cn is removed, the key and cert files are in /root/.acme.sh/sgtfz.cn
[Fri 29 Oct 2021 11:53:50 PM CST] You can remove them by yourself.

# Delete the downloaded sgtfz.cn certificate directory
rm -r /root/.acme.sh/sgtfz.cn

# Enable acme.sh auto-upgrade
root@Debian11-Cloud:~# /root/.acme.sh/acme.sh --upgrade  --auto-upgrade
[Sat 30 Oct 2021 12:14:49 AM CST] Already uptodate!
[Sat 30 Oct 2021 12:14:49 AM CST] Upgrade success!

October 30, 2021, 00:10

Time to set up the AMP (Apache + MariaDB + PHP) stack

#Install the following packages for the NextCloud environment, adjusting to your own needs
apt install apache2;\
apt install php-fpm;\
apt install php-apcu php-bcmath php-curl php-gd php-gmp php-intl;\
apt install php-imagick php-mbstring php-mysql php-redis php-xml php-zip;\
apt install libmagickcore-6.q16-6-extra;\
apt install ffmpeg;\
apt install redis-server;\
apt install mariadb-server mariadb-client;

Configure the MariaDB database

Initialize and secure the database:

mysql_secure_installation

Enter current password for root (enter for none): # just press Enter
Switch to unix_socket authentication [Y/n] y
Change the root password? [Y/n] n # no root password needed, since we use unix_socket password-free login
Remove anonymous users? [Y/n] y
Disallow root login remotely? [Y/n] y
Remove test database and access to it? [Y/n] y
Reload privilege tables now? [Y/n] y

Some basic database configuration

#Log in to the database; as root this is password-free
mysql

#Check the character sets
show variables like '%character%';
# Session log
root@Debian11-Cloud:~# mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 52
Server version: 10.5.12-MariaDB-0+deb11u1 Debian 11

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show variables like '%character%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | utf8                       |
| character_set_connection | utf8                       |
| character_set_database   | utf8mb4                    |
| character_set_filesystem | binary                     |
| character_set_results    | utf8                       |
| character_set_server     | utf8mb4                    |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.001 sec)

#Change the character set
cp /etc/mysql/mariadb.conf.d/50-client.cnf{,.backup};\
vi /etc/mysql/mariadb.conf.d/50-client.cnf
[client]
default-character-set = utf8mb4

cp /etc/mysql/mariadb.conf.d/50-mysql-clients.cnf{,.backup};\
vi /etc/mysql/mariadb.conf.d/50-mysql-clients.cnf
[mysql]
default-character-set = utf8mb4

# Check the character sets again
root@Debian11-Cloud:~# mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 53
Server version: 10.5.12-MariaDB-0+deb11u1 Debian 11

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show variables like '%character%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | utf8mb4                    |
| character_set_connection | utf8mb4                    |
| character_set_database   | utf8mb4                    |
| character_set_filesystem | binary                     |
| character_set_results    | utf8mb4                    |
| character_set_server     | utf8mb4                    |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.001 sec)


#Some database tuning for nextcloud
cp /etc/mysql/mariadb.conf.d/50-server.cnf{,.backup};\
vi /etc/mysql/mariadb.conf.d/50-server.cnf
[mysqld]
innodb_buffer_pool_size=1G
innodb_io_capacity=4000
# Add this option when only a single server uses the database; if it is present but commented out, uncomment it.
skip-external-locking

Configure Apache

# Enable the modules needed for PHP-FPM
a2enmod proxy_fcgi setenvif
a2enmod mpm_event
a2enconf php-fpm
# Session log
root@Debian11-Cloud:~# a2enmod proxy_fcgi setenvif
Considering dependency proxy for proxy_fcgi:
Module proxy already enabled
Module proxy_fcgi already enabled
Module setenvif already enabled
root@Debian11-Cloud:~# a2enmod mpm_event
Considering conflict mpm_worker for mpm_event:
Considering conflict mpm_prefork for mpm_event:
Module mpm_event already enabled
# What is this error about? Ignore it for now (the conf snippet is actually named after the PHP version)
root@Debian11-Cloud:~# a2enconf php-fpm
ERROR: Conf php-fpm does not exist!
# The correct invocation
a2enconf php7.4-fpm

#Enable the rewrite module
a2enmod rewrite

root@Debian11-Cloud:~# a2enmod rewrite
Enabling module rewrite.
To activate the new configuration, you need to run:
  systemctl restart apache2
root@Debian11-Cloud:~# systemctl restart apache2

# Enable SSL and HTTP/2
a2enmod ssl
a2enmod http2

root@Debian11-Cloud:~# a2enmod ssl
Considering dependency setenvif for ssl:
Module setenvif already enabled
Considering dependency mime for ssl:
Module mime already enabled
Considering dependency socache_shmcb for ssl:
Enabling module socache_shmcb.
Enabling module ssl.
See /usr/share/doc/apache2/README.Debian.gz on how to configure SSL and create self-signed certificates.
To activate the new configuration, you need to run:
  systemctl restart apache2
root@Debian11-Cloud:~# a2enmod http2
Enabling module http2.
To activate the new configuration, you need to run:
  systemctl restart apache2


# Hide the Apache version number

cp /etc/apache2/conf-available/security.conf{,.backup};\
vi /etc/apache2/conf-available/security.conf
# Hide the Apache version number.
ServerTokens Prod
# i.e. change ServerTokens OS to ServerTokens Prod
# Change ServerSignature from the default On to Off
ServerSignature off

Configure PHP

# Edit php.ini for FPM
cp /etc/php/7.4/fpm/php.ini{,.backup};\
vi /etc/php/7.4/fpm/php.ini

# Roughly the following settings
memory_limit = 1024M
post_max_size = 8G
upload_max_filesize = 8G
max_execution_time = 3600
max_input_time = 3600
; Hide the PHP version number; already off by default on Debian
expose_php = Off
# OPcache settings
opcache.enable=1
opcache.enable_cli=1
opcache.memory_consumption=128
opcache.interned_strings_buffer=16
opcache.max_accelerated_files=10000
opcache.save_comments=1
# Redis session settings
redis.session.locking_enabled=1
redis.session.lock_retries=-1
redis.session.lock_wait_time=10000
;apc.enable_cli=1 must also be enabled in the CLI php.ini, otherwise nextcloud's occ command will not work
apc.enable_cli=1

# Edit the CLI php.ini
cp /etc/php/7.4/cli/php.ini{,.backup};\
vi /etc/php/7.4/cli/php.ini
# Add at the end of the file
apc.enable_cli=1


# Edit the FPM pool config www.conf
cp /etc/php/7.4/fpm/pool.d/www.conf{,.backup};\
vi /etc/php/7.4/fpm/pool.d/www.conf

pm = dynamic
pm.max_children = 18
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 16

The AMP configuration is more or less done. Damn, I'm tired.

Create the databases

The time is now 1:15.

root@Debian11-Cloud:~# mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 54
Server version: 10.5.12-MariaDB-0+deb11u1 Debian 11

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
# Check how the users authenticate
# Normally the root user has no password set (shown as invalid); only a local root login can get in password-free via unix_socket.
MariaDB [(none)]> select Host,User,Password,plugin from mysql.user;
+-----------+-------------+----------+-----------------------+
| Host      | User        | Password | plugin                |
+-----------+-------------+----------+-----------------------+
| localhost | mariadb.sys |          | mysql_native_password |
| localhost | root        | invalid  | mysql_native_password |
| localhost | mysql       | invalid  | mysql_native_password |
+-----------+-------------+----------+-----------------------+
3 rows in set (0.001 sec)

#Create the databases
#Create the database users and passwords

#Create the databases
create database wordpress;
create database nextcloud;

#Create the users and passwords
create user 'c***p'@'localhost' identified by 'S**密码**l';
create user 'c***c'@'localhost' identified by 'L**密码**l';

#Grant privileges
grant all on wordpress.* to 'c***p'@'localhost';
grant all on nextcloud.* to 'c***c'@'localhost';

#Reload the privilege tables
flush privileges;

#List the databases
show databases;

#List the database users
select host,user from mysql.user;

#Create the databases
#Create the database users and passwords
## Session log
MariaDB [(none)]> create database wordpress;
Query OK, 1 row affected (0.000 sec)

MariaDB [(none)]> create database nextcloud;
Query OK, 1 row affected (0.000 sec)

MariaDB [(none)]> create user 'c***p'@'localhost' identified by 'S**密码**l';
Query OK, 0 rows affected (0.005 sec)

MariaDB [(none)]> create user 'c***c'@'localhost' identified by 'L**密码**l';
Query OK, 0 rows affected (0.005 sec)

MariaDB [(none)]> grant all on wordpress.* to 'c***p'@'localhost';
Query OK, 0 rows affected (0.005 sec)

MariaDB [(none)]> grant all on nextcloud.* to 'c***c'@'localhost';
Query OK, 0 rows affected (0.009 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| nextcloud          |
| performance_schema |
| wordpress          |
+--------------------+
5 rows in set (0.000 sec)

MariaDB [(none)]> select host,user from mysql.user;
+-----------+-------------+
| Host      | User        |
+-----------+-------------+
| localhost | c***c       |
| localhost | c***p       |
| localhost | mariadb.sys |
| localhost | mysql       |
| localhost | root        |
+-----------+-------------+
5 rows in set (0.000 sec)

MariaDB [(none)]> select Host,User,Password,plugin from mysql.user;
+-----------+-------------+-------------------------------------------+-----------------------+
| Host      | User        | Password                                  | plugin                |
+-----------+-------------+-------------------------------------------+-----------------------+
| localhost | mariadb.sys |                                           | mysql_native_password |
| localhost | root        | invalid                                   | mysql_native_password |
| localhost | mysql       | invalid                                   | mysql_native_password |
| localhost | c***p       | *D47204E111DE8AA0063727E425BE318FC1CCFF1E | mysql_native_password |
| localhost | c***c       | *89E56EC22FF7C9B9CF3B5D66745C42537E6E2D66 | mysql_native_password |
+-----------+-------------+-------------------------------------------+-----------------------+
5 rows in set (0.001 sec)

Mount the disk into the PCT container

The time is now 1:38.

# Attach the disk to the container from the command line; went very smoothly
pct set 103 -mp0 ssd-data:vm-101-disk-0,mp=/site-data

# Start the container
pct start 103

Deploy the websites

The time is now 1:48.

# First install the certificate into the apache2 directory

# Create a directory to hold the certificates
mkdir /etc/apache2/cert

# Install the certificate; the same files work for apache and nginx.
# nginx uses key.pem and chain.pem
# apache uses cert.pem, key.pem and chain.pem
acme.sh --install-cert -d sgtfz.top \
--cert-file      /etc/apache2/cert/sgtfztop_cert.pem  \
--key-file       /etc/apache2/cert/sgtfztop_key.pem  \
--fullchain-file /etc/apache2/cert/sgtfztop_chain.pem \
--reloadcmd     "service apache2 force-reload"
## Session log
root@Debian11-Cloud:~# /root/.acme.sh/acme.sh --install-cert -d sgtfz.top \
--cert-file      /etc/apache2/cert/sgtfztop_cert.pem  \
--key-file       /etc/apache2/cert/sgtfztop_key.pem  \
--fullchain-file /etc/apache2/cert/sgtfztop_chain.pem \
--reloadcmd     "service apache2 force-reload"
[Sat 30 Oct 2021 01:56:04 AM CST] Installing cert to: /etc/apache2/cert/sgtfztop_cert.pem
[Sat 30 Oct 2021 01:56:04 AM CST] Installing key to: /etc/apache2/cert/sgtfztop_key.pem
[Sat 30 Oct 2021 01:56:04 AM CST] Installing full chain to: /etc/apache2/cert/sgtfztop_chain.pem
[Sat 30 Oct 2021 01:56:04 AM CST] Run reload cmd: service apache2 force-reload
[Sat 30 Oct 2021 01:56:04 AM CST] Reload success

Create the Apache site configuration files

## Site config files live under
## /etc/apache2/sites-available/

#Create the NextCloud site config
vi /etc/apache2/sites-available/nextcloud.conf

<VirtualHost 192.168.33.8:80>
  # nextcloud document root
  DocumentRoot /site-data/www/nextcloud/
  # domain used by nextcloud
  ServerName  cloud.sgtfz.top
  # rules for the nextcloud document root
  <Directory /site-data/www/nextcloud/>
    Require all granted
    AllowOverride All
    Options FollowSymLinks MultiViews
    <IfModule mod_dav.c>
      Dav off
    </IfModule>
  </Directory>
</VirtualHost>

<VirtualHost *:443>
  # enable HTTP/2
  Protocols h2 h2c http/1.1
  # nextcloud document root
  DocumentRoot "/site-data/www/nextcloud/"
  # domain used by nextcloud
  ServerName  cloud.sgtfz.top:443

  # logs
  ErrorLog ${APACHE_LOG_DIR}/error.log
  CustomLog ${APACHE_LOG_DIR}/access.log combined

  # Enable/Disable SSL for this virtual host.
  SSLEngine on

  # certificate (public key)
  SSLCertificateFile /etc/apache2/cert/sgtfztop_cert.pem
  # private key
  SSLCertificateKeyFile /etc/apache2/cert/sgtfztop_key.pem
  # certificate chain
  SSLCertificateChainFile /etc/apache2/cert/sgtfztop_chain.pem

  SSLUseStapling on
  SSLStaplingReturnResponderErrors off
  SSLStaplingResponderTimeout 5

  #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
  <FilesMatch "\.(cgi|shtml|phtml|php)$">
      SSLOptions +StdEnvVars
  </FilesMatch>
  <Directory /usr/lib/cgi-bin>
      SSLOptions +StdEnvVars
  </Directory>
</VirtualHost>

#Create the WordPress site config
vi /etc/apache2/sites-available/wordpress.conf
<VirtualHost *:80>
  # wordpress document root
  DocumentRoot /site-data/www/wordpress/
  # domain used by wordpress
  ServerName  sgtfz.top
  # rules for the wordpress document root
  <Directory /site-data/www/wordpress/>
    Options FollowSymLinks
    AllowOverride All
    Require all granted
  </Directory>
</VirtualHost>

<VirtualHost *:443>
  # enable HTTP/2
  Protocols h2 h2c http/1.1

  # wordpress document root
  DocumentRoot "/site-data/www/wordpress/"
  # domain used by wordpress
  ServerName  sgtfz.top:443

  # logs
  ErrorLog ${APACHE_LOG_DIR}/error.log
  CustomLog ${APACHE_LOG_DIR}/access.log combined

  # Enable/Disable SSL for this virtual host.
  SSLEngine on

  # certificate (public key)
  SSLCertificateFile /etc/apache2/cert/sgtfztop_cert.pem
  # private key
  SSLCertificateKeyFile /etc/apache2/cert/sgtfztop_key.pem
  # certificate chain
  SSLCertificateChainFile /etc/apache2/cert/sgtfztop_chain.pem

  SSLUseStapling on
  SSLStaplingReturnResponderErrors off
  SSLStaplingResponderTimeout 5

  #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
  <FilesMatch "\.(cgi|shtml|phtml|php)$">
      SSLOptions +StdEnvVars
  </FilesMatch>
  <Directory /usr/lib/cgi-bin>
      SSLOptions +StdEnvVars
  </Directory>
</VirtualHost>

# Global site-directory config file
# placed under /etc/apache2/conf-available/
vi /etc/apache2/conf-available/site-data.conf

ServerName localhost:80
# site-data DocumentRoot
<Directory "/site-data/www">
    Options FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>

Then enable the site configurations

a2enconf site-data
a2ensite wordpress
a2ensite nextcloud
# Session log
root@Debian11-Cloud:~# a2enconf site-data
Enabling conf site-data.
To activate the new configuration, you need to run:
  systemctl reload apache2
root@Debian11-Cloud:~# a2ensite wordpress
Enabling site wordpress.
To activate the new configuration, you need to run:
  systemctl reload apache2
root@Debian11-Cloud:~# a2ensite nextcloud
Enabling site nextcloud.
To activate the new configuration, you need to run:
  systemctl reload apache2

Stop apache2

systemctl stop apache2

Upload and import the NextCloud and WordPress databases

  • For that we first need a regular user to upload the files with

Create the user

adduser sgtfz

# Session log: go through the wizard leaving the personal details blank; at "Is the information correct?" just answer y.
root@Debian11-Cloud:~# adduser sgtfz
Adding user `sgtfz' ...
Adding new group `sgtfz' (1000) ...
Adding new user `sgtfz' (1000) with group `sgtfz' ...
Creating home directory `/home/sgtfz' ...
Copying files from `/etc/skel' ...
New password: 
Retype new password: 
passwd: password updated successfully
Changing the user information for sgtfz
Enter the new value, or press ENTER for the default
        Full Name []: 
        Room Number []: 
        Work Phone []: 
        Home Phone []: 
        Other []: 
Is the information correct? [Y/n] y

# Damn it, somewhere along the way the password never got saved, and now the regular user's password is forgotten
# Reset the password for user sgtfz
passwd sgtfz
# Done. Much better!!

# Log in to the database
mysql

# Select the database
use nextcloud;
# Import the nextcloud database dump
source /home/sgtfz/nextcloud_202110281906.sql;
# Continue with the wordpress database
use wordpress;
source /home/sgtfz/wordpress_202110281906.sql;
# Both imports completed without a hitch

Edit the database user and password in the WordPress and NextCloud config files

# Edit the wordpress config file wp-config.php
vi /site-data/www/wordpress/wp-config.php

# The nextcloud config also needs the data directory path updated, along with the database user and password
vi /site-data/www/nextcloud/config/config.php
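For reference, the entries that typically need touching look roughly like this; the values below are placeholders (and the datadirectory path is my assumption), not my real settings:

# wp-config.php (excerpt)
define( 'DB_NAME', 'wordpress' );
define( 'DB_USER', 'c***p' );
define( 'DB_PASSWORD', '********' );
define( 'DB_HOST', 'localhost' );

# nextcloud config/config.php (excerpt)
'dbname' => 'nextcloud',
'dbuser' => 'c***c',
'dbpassword' => '********',
'dbhost' => 'localhost',
'datadirectory' => '/site-data/www/nextcloud/data',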

Reboot the LXC and test the websites

reboot

Ran into Apache failing to start

The error log is as follows:

[Sat Oct 30 03:39:56.096910 2021] [ssl:emerg] [pid 661:tid 139811129806144] AH01958: SSLStapling: no stapling cache available
[Sat Oct 30 03:39:56.096942 2021] [ssl:emerg] [pid 661:tid 139811129806144] AH02311: Fatal error initialising mod_ssl, exiting. See /var/log/apache2/error.log for more information
AH00016: Configuration Failed

Disabling the following site configs lets Apache start.

But after checking both site config files carefully I couldn't find anything wrong, so where is the problem?

a2dissite wordpress
a2dissite nextcloud

Apache startup failure solved

If you go a long time without tinkering you really do forget things you've been through before, which is why even the best memory is no match for writing it down properly.

  • The Apache error log pointed at something related to the certificate configuration; the file I had missed checking and configuring was
  • this one: /etc/apache2/mods-enabled/ssl.conf
root@Debian11-Cloud:~# cat /etc/apache2/mods-enabled/ssl.conf
<IfModule mod_ssl.c>

        # Pseudo Random Number Generator (PRNG):
        # Configure one or more sources to seed the PRNG of the SSL library.
        # The seed data should be of good random quality.
        # WARNING! On some platforms /dev/random blocks if not enough entropy
        # is available. This means you then cannot use the /dev/random device
        # because it would lead to very long connection times (as long as
        # it requires to make more entropy available). But usually those
        # platforms additionally provide a /dev/urandom device which doesn't
        # block. So, if available, use this one instead. Read the mod_ssl User
        # Manual for more details.
        #
        SSLRandomSeed startup builtin
        SSLRandomSeed startup file:/dev/urandom 512
        SSLRandomSeed connect builtin
        SSLRandomSeed connect file:/dev/urandom 512

        ##
        ##  SSL Global Context
        ##
        ##  All SSL configuration in this context applies both to
        ##  the main server and all SSL-enabled virtual hosts.
        ##

        #
        #   Some MIME-types for downloading Certificates and CRLs
        #
        AddType application/x-x509-ca-cert .crt
        AddType application/x-pkcs7-crl .crl

        #   Pass Phrase Dialog:
        #   Configure the pass phrase gathering process.
        #   The filtering dialog program (`builtin' is a internal
        #   terminal dialog) has to provide the pass phrase on stdout.
        SSLPassPhraseDialog  exec:/usr/share/apache2/ask-for-passphrase

        #   Inter-Process Session Cache:
        #   Configure the SSL Session Cache: First the mechanism
        #   to use and second the expiring timeout (in seconds).
        #   (The mechanism dbm has known memory leaks and should not be used).
        #SSLSessionCache                 dbm:${APACHE_RUN_DIR}/ssl_scache
        SSLSessionCache         shmcb:${APACHE_RUN_DIR}/ssl_scache(512000)
        SSLSessionCacheTimeout  300

        #   Semaphore:
        #   Configure the path to the mutual exclusion semaphore the
        #   SSL engine uses internally for inter-process synchronization.
        #   (Disabled by default, the global Mutex directive consolidates by default
        #   this)
        #Mutex file:${APACHE_LOCK_DIR}/ssl_mutex ssl-cache


        #   SSL Cipher Suite:
        #   List the ciphers that the client is permitted to negotiate. See the
        #   ciphers(1) man page from the openssl package for list of all available
        #   options.
        #   Enable only secure ciphers:
        #SSLCipherSuite HIGH:!aNULL
        #   sgtfz 20211030: note this!
        SSLCipherSuite ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384

        # SSL server cipher order preference:
        # Use server priorities for cipher algorithm choice.
        # Clients may prefer lower grade encryption.  You should enable this
        # option if you want to enforce stronger encryption, and can afford
        # the CPU cost, and did not override SSLCipherSuite in a way that puts
        # insecure ciphers first.
        # Default: Off
        #SSLHonorCipherOrder on
        # sgtfz 20211030: note this!
        SSLHonorCipherOrder off
        SSLSessionTickets off

        #   The protocols to enable.
        #   Available values: all, SSLv3, TLSv1, TLSv1.1, TLSv1.2
        #   SSL v2  is no longer supported
        #SSLProtocol all -SSLv3
        #   sgtfz 20211030
        SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1

        #   Allow insecure renegotiation with clients which do not yet support the
        #   secure renegotiation protocol. Default: Off
        #SSLInsecureRenegotiation on

        #   Whether to forbid non-SNI clients to access name based virtual hosts.
        #   Default: Off
        #SSLStrictSNIVHostCheck On

        #   ssl_stapling_cache
        #   sgtfz - 20211030: note this! (this SSLStaplingCache line was the missing piece behind the "no stapling cache available" error)
        SSLStaplingCache shmcb:${APACHE_RUN_DIR}/ssl_stapling_cache(128000)

</IfModule>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

Configure the NextCloud background jobs

crontab -u www-data -e
# run every 10 minutes
*/10 * * * * php -f /site-data/www/nextcloud/cron.php
# configuration completed without issue
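If NextCloud is still set to AJAX background jobs, it can be switched to cron mode with occ; a one-liner assuming the install path above (use su instead of sudo if sudo is not installed in the container):

sudo -u www-data php /site-data/www/nextcloud/occ background:cron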

Another sleepless night

The time is now 5:13.

For this kind of tinkering, it's only at night, with nobody around to interrupt, that my thinking stays clear.

Overall, though, the whole process went very smoothly, and Proxmox VE really does give you more to play with!

It runs very comfortably on this host (B85 + E3-1231 v3), noticeably better than on my oldest machine (Z68 + E3-1230 v2).

Perhaps that's because the ASUS B85M (mATX) board is simply better than the MSI Z68 (ATX) one.

On the MSI Z68, PVE prints a few ACPI Error messages during boot.

The ASUS B85M has no such problem.

On the ASUS B85M, the same OpenWrt VM also uses slightly fewer resources, perhaps because all three OpenWrt ports are hardware NICs passed straight through.

Fixing a Windows Terminal SSH connection error

Partway through all this, connecting to a server over SSH from Windows Terminal threw an error.

The SSH error when connecting to the server looks like this:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:Y+VRt0Y***********************************l4TLM.
Please contact your system administrator.
Add correct host key in C:\\Users\\sgtfz/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in C:\\Users\\sgtfz/.ssh/known_hosts:1
ECDSA host key for 192.168.33.8 has changed and you have requested strict checking.
Host key verification failed.

The fix

Open the current user's directory:

  • C:\Users\sgtfz\.ssh
  • Edit the known_hosts file
# Delete the whole line for whichever IP address can no longer be reached over SSH.
192.168.33.8 ecdsa-sha2-nistp256 AAAAE2VjZHNh****uerLgqcnkItfIAsuyD9*****cP8ipTxIj2Cj6lw=
192.168.33.131 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoY*************f54zj69xxmXTmXunQvUxmkM=
192.168.33.6 ecdsa-sha2-nistp256 AAAAE2V***uerLgqcnkItfIAsuyD9+fhI/JtZ/dXacP8ipTxIj2Cj6lw=
192.168.33.1 ssh-rsa AAAAB3N*************************************E+k0diaOcB
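Instead of editing the file by hand, the stale entry can also be removed with ssh-keygen, which works the same way with the OpenSSH client used by Windows Terminal:

ssh-keygen -R 192.168.33.8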
