- チラシの裏にでも書いてろ な!: mdadmでRAID構築(まとめ)
- Tips/Linux/mdadm - 福岡大学奥村研究室 - okkun-lab Pukiwiki!
- サーバー設定メモ/mdadmのRAID再構築 - トヤヲ.ネット
- Yet Another Diary: mdadmで/dev/md2を/dev/md0にリネームしてみた
- Debian_etch:RAID-1:Shrink (shrinking the / partition of a running Linux system; only really feasible because it is RAID)
- wikipedia:Nested_RAID_levels
- Linux Software RAID10 - HelloKitty68
- RAID10 in Linux MD driver
- RAID10構築手順の記録 - 夜間飛行
According to dmesg, sdc and sdd were detected on one SATA controller and sde and sdf on another. To make each RAID1 pair span two SATA controllers, pair sdc with sde and sdd with sdf.
# mdadm --create /dev/md0 -v --raid-devices=4 --level=raid10 /dev/sdc1 /dev/sde1 /dev/sdd1 /dev/sdf1
For reference: the order of the partition arguments determines the layout (here the 1st/2nd and 3rd/4th devices each form a RAID1 pair).
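A quick way to double-check the resulting pairing (a sketch using the device names from this example):
# cat /proc/mdstat
# mdadm --detail /dev/md0
mdadm --detail reports the layout (near=2 by default) and lists the member devices in order; with near=2, copies land on adjacent devices, so sdc1/sde1 and sdd1/sdf1 are the mirrored pairs.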
- Linux software RAID - HelloKitty68
- metadata
- metadata 1.2 — Linux RAID
- can metadata type be changed for an active array?
- Superblock - Linux Raid Wiki
- RAID superblock formats - Linux Raid Wiki
Sub-Version / Superblock Position on Device
- 1.0: at the end of the device
- 1.1: at the beginning of the device
- 1.2: 4K from the beginning of the device
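To check which format and position an existing member actually uses (a sketch; substitute the real member device, and note that field names differ slightly between metadata versions):
# mdadm --examine /dev/sdc1 | grep -E 'Version|Super Offset'
# mdadm --detail /dev/md0 | grep Version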
- check/ failure / recover
- "Write hole" phenomenon in RAID5, RAID6, RAID1, and other arrays.
- Mdadm checkarray - Thomas-Krenn-Wiki
- raid - RAID6 scrubbing mismatch repair? - Unix & Linux Stack Exchange
- software raid - Linux - Repairing bad blocks on a RAID1 array with GPT - Unix & Linux Stack Exchange
- linux kernel - md raid5: translate md internal sector numbers to offsets - Unix & Linux Stack Exchange
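The kernel's scrubbing can also be driven directly through sysfs, independent of any distribution wrapper script (a sketch; md0 is an example array name):
# echo check > /sys/block/md0/md/sync_action
# cat /proc/mdstat                      <- progress is shown like a resync
# cat /sys/block/md0/md/mismatch_cnt    <- non-zero means inconsistent stripes were found
Writing "repair" instead of "check" makes md rewrite mismatched copies instead of just counting them.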
boot loader
lilo
- When lilo is selected during OS installation (e.g. because the filesystem is XFS), lilo is apparently installed to the MBR of every mirror disk by default (Ubuntu 8.04 Hardy)
- attachment:ubuntu-lilo-raid1.png
- /etc/lilo.conf (snip)
# Specifies the boot device.  This is where Lilo installs its boot
# block.  It can be either a partition, or the raw device, in which
# case it installs in the MBR, and will overwrite the current MBR.
#
boot=/dev/md1

# This option may be needed for some software RAID installs.
#
raid-extra-boot=mbr-only

# Specifies the location of the map file
#
map=/boot/map

# Specifies the number of deciseconds (0.1 seconds) LILO should
# wait before booting the first image.
#
delay=20

#
# Boot up Linux by default.
#
default=Linux

image=/vmlinuz
	label=Linux
	read-only
#	restricted
#	alias=1
	append="root=/dev/md1 "
	initrd=/initrd.img

image=/vmlinuz.old
	label=LinuxOLD
	read-only
	optional
#	restricted
#	alias=2
	append="root=/dev/md1 "
	initrd=/initrd.img.old
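As I read it, raid-extra-boot=mbr-only is what makes lilo write its boot block into the MBR of every member disk; after editing lilo.conf the boot blocks have to be rewritten once (a sketch; -v only adds verbose output):
# lilo -v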
- At least the first 10 MB of each disk is identical:
- sudo dd if=/dev/sda of=sda count=10240 bs=1024
- sudo dd if=/dev/sda of=sdb count=10240 bs=1024
- sha1sum sd?
642aace4e9239c85a011c4f6d643786b78e8e454  sda
642aace4e9239c85a011c4f6d643786b78e8e454  sdb
- Not sure what this looks like with RAID5
GRUB
- By default GRUB is installed only to the MBR of the first mirror, so GRUB also has to be installed to the MBR of the second one
- On GRUB commands
- According to this source, for RAID setups the "setup" command may be better than the "install" command used in most how-tos?
- The procedure most commonly found online
- df /boot
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md1               1450192    605532    771572  44% /
- grep md1 /proc/mdstat
md1 : active raid1 sda2[0] sdb1[2](S) sdc2[1]
- sudo grub
grub> device (hd0) /dev/sdc
grub> root (hd0,1)
grub> install /boot/grub/stage1 (hd0) /boot/grub/stage2 p /boot/grub/grub.conf
- This may be better? (untested)
grub> root (hd2,1)
grub> setup (hd2)
Ubuntu
- As with lilo, GRUB appears to be installed to the second MBR automatically as well, so a manual GRUB install may be unnecessary?
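A rough way to verify this is to look for the GRUB signature string in the first sector of each disk (a sketch; the exact strings depend on the GRUB version):
# dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep -i grub
# dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep -i grub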
- /boot/grub/menu.lst (snip)
## default num
# Set the default entry to the entry number NUM. Numbering starts from 0, and
# the entry number 0 is the default if the command is not used.
#
# You can specify 'saved' instead of a number. In this case, the default entry
# is the entry saved with the command 'savedefault'.
# WARNING: If you are using dmraid do not use 'savedefault' or your
# array will desync and will not let you boot your system.
default		0

## timeout sec
# Set a timeout, in SEC seconds, before automatically booting the default entry
# (normally the first entry defined).
timeout		3

## hiddenmenu
# Hides the menu by default (press ESC to see the menu)
hiddenmenu

title		Ubuntu 8.04.2, kernel 2.6.24-24-server
root		(hd0,1)
kernel		/boot/vmlinuz-2.6.24-24-server root=/dev/md1 ro quiet splash
initrd		/boot/initrd.img-2.6.24-24-server
Debian
- (ヅ) Debian squeeze + RAID1 + GRUB2 (grub-pc)
Run dpkg-reconfigure grub-pc.
GRUB install devices:

  [*] /dev/sda (500107 MB; WDC_WD5000AAKB-00H8A0)
  [*] /dev/sdb (500107 MB; WDC_WD5000AAKB-00H8A0)
  [ ] /dev/md0 (51538 MB; eggplant:0)
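The same result should be achievable without the debconf dialog by running grub-install against each member disk (a sketch for GRUB2/grub-pc; adjust the device names):
# grub-install /dev/sda
# grub-install /dev/sdb
# update-grub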
mdadm
Ubuntu
- Ubuntu 8.10 技術概要 | Ubuntu Japanese Team #起動時のデグレードRAID設定
- /etc/initramfs-tools/conf.d/mdadm
# BOOT_DEGRADED:
# Do you want to boot your system if a RAID providing your root filesystem
# becomes degraded?
#
# Running a system with a degraded RAID could result in permanent data loss
# if it suffers another hardware fault.
#
# However, you might answer "yes" if this system is a server, expected to
# tolerate hardware faults and boot unattended.
BOOT_DEGRADED=false
- Editing this file directly with vi etc. does not seem to take effect; the setting has to be changed via dpkg-reconfigure mdadm as shown below.
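Presumably this is because the value is read from the copy embedded in the initramfs, which dpkg-reconfigure regenerates; if the file is edited by hand, rebuilding the initramfs may be enough, but this is untested here:
# update-initramfs -u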
- /etc/mdadm/mdadm.conf
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
- /etc/default/mdadm
# AUTOCHECK:
#   should mdadm run periodic redundancy checks over your arrays? See
#   /etc/cron.d/mdadm.
AUTOCHECK=true

# START_DAEMON:
#   should mdadm start the MD monitoring daemon during boot?
START_DAEMON=true
- /etc/cron.d/mdadm
# By default, run at 01:06 on every Sunday, but do nothing unless the day of
# the month is less than or equal to 7. Thus, only run on the first Sunday of
# each month. crontab(5) sucks, unfortunately, in this regard; therefore this
# hack (see #380425).
6 1 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet
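The script the cron job calls can also be run by hand to start or cancel a check outside the monthly schedule (a sketch):
# /usr/share/mdadm/checkarray --all
# /usr/share/mdadm/checkarray --cancel --all
Progress is visible in /proc/mdstat while the check runs.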
dpkg-reconfigure mdadm
- The option shown in [ ... ] is the default value
- sudo dpkg-reconfigure mdadm
- /etc/default/mdadm AUTOCHECK
Configuring mdadm

 If your kernel supports it (>> 2.6.14), mdadm can periodically check the
 redundancy of your MD arrays (RAIDs). This may be a resource-intensive
 process, depending on your setup, but it could help prevent rare cases
 of data loss. Note that this is a read-only check unless errors are
 found; if errors are found, mdadm will try to correct them, which may
 result in write access to the media.

 The default, if turned on, is to run the checks on the first Sunday of
 every month at 01:06 o'clock.

 Should mdadm run monthly redundancy checks of the MD arrays?

   [<Yes>]    <No>
- /etc/default/mdadm START_DAEMON
Configuring mdadm

 The MD (RAID) monitor daemon sends email notifications in response to
 important MD events (such as a disk failure). You probably want to
 enable it.

 Do you want to start the MD monitoring daemon?

   [<Yes>]    <No>
- /etc/mdadm/mdadm.conf MAILADDR
Configuring mdadm

 Please enter the email address of the user who should get the email
 notification for important MD events.

 Recipient for email notifications:

   root_________________________________________________________________

   <Ok>
- /etc/initramfs-tools/conf.d/mdadm BOOT_DEGRADED
Configuring mdadm

 If your root filesystem is on a RAID, and a disk is missing at boot, it
 can either boot with the degraded array, or hold the system at a
 recovery shell.

 Running a system with a degraded RAID could result in permanent data
 loss if it suffers another hardware fault.

 If you do not have access to the server console to use the recovery
 shell, you might answer "yes" to enable the system to boot unattended.

 Do you want to boot your system if your RAID becomes degraded?

   <Yes>    [<No>]
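Besides the permanent BOOT_DEGRADED setting, Ubuntu's initramfs is supposed to honour a one-off kernel parameter for the same purpose (not verified here); it can be appended to the kernel line in the boot loader for a single boot:
bootdegraded=true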
- apply settings... (when lilo is in use)
 * Stopping MD monitoring service mdadm --monitor                      [ OK ]
Generating array device nodes... done.
Removing any system startup links for /etc/init.d/mdadm-raid ...
update-initramfs: Generating /boot/initrd.img-2.6.24-24-server
Warning: LBA32 addressing assumed
Added Linux *
Added LinuxOLD
The Master boot record of /dev/sda has been updated.
Warning: /dev/sdb is not on the first disk
The Master boot record of /dev/sdb has been updated.
2 warnings were issued.
 * Starting MD monitoring service mdadm --monitor                      [ OK ]
mount
duplicate uuid on mount
- /dev/md2 (XFS) -> RAID1 /dev/sda2, /dev/sdb2
- on Knoppix
# mount /dev/sda2 /mnt/sda -r -t xfs
# mount /dev/sdb2 /mnt/sdb -r -t xfs
mount: wrong fs type, bad option, bad superblock on /dev/sdb2,
       missing codepage or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

# dmesg | tail
XFS: Filesystem sdb2 has duplicate UUID - can't mount
- => Re: duplicate uuid on mount
There is also the "nouuid" option to mount:
    mount -t xfs -o nouuid /dev/sdc5 /mnt/tmp
This tells the kernel to ignore duplicate UUID.
# mount /dev/sdb2 /mnt/sdb -r -t xfs -o nouuid
# mount
/dev/sda2 on /media/sda type xfs (ro,ikeep,noquota)
/dev/sdb2 on /media/sdb type xfs (ro,nouuid,noquota)
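If one of the copies needs to stay mountable on its own without -o nouuid, xfs_admin can write a fresh UUID to it (a sketch; clearly not something to do to a member of a still-active mirror, and the filesystem must be unmounted):
# xfs_admin -U generate /dev/sdb2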
Attachments (1)
- ubuntu-lilo-raid1.png (6.2 KB) - added by mitty 16 years ago.