[[PageOutline]]
[[TitleIndex(TipAndDoc/RAID,format=group)]]

 * [http://rdt17.blogspot.com/2010/05/mdadmraid.html チラシの裏にでも書いてろ な!: Building a RAID with mdadm (summary)]
 * [http://okkun-lab.rd.fukuoka-u.ac.jp/wiki/?Tips%2FLinux%2Fmdadm Tips/Linux/mdadm - Okumura Lab, Fukuoka University - okkun-lab Pukiwiki!]
 * [http://toyao.net/xoops/modules/xpwiki/?%E3%82%B5%E3%83%BC%E3%83%90%E3%83%BC%E8%A8%AD%E5%AE%9A%E3%83%A1%E3%83%A2%2Fmdadm%E3%81%AERAID%E5%86%8D%E6%A7%8B%E7%AF%89 Server setup notes: rebuilding a RAID with mdadm - toyao.net]
 * [http://yasu-2.blogspot.com/2009/08/mdadmdevmd2devmd0.html Yet Another Diary: renaming /dev/md2 to /dev/md0 with mdadm]
 * [http://now.ohah.net/setu/wiki?Debian_etch%3ARAID-1%3AShrink Debian_etch:RAID-1:Shrink] Shrinking the / partition of a running Linux system. Very much a RAID-only trick.
 * wikipedia:Nested_RAID_levels
 * [http://hellokitty68.main.jp/wiki/Linux_Software_RAID10 Linux Software RAID10 - HelloKitty68]
 * [http://neil.brown.name/blog/20040827225440 RAID10 in Linux MD driver]
 * [http://d.hatena.ne.jp/BlueSkyDetector/20080608/1212936771 A record of a RAID10 build procedure - 夜間飛行]
 > Looking at dmesg, sdc and sdd were detected on one SATA controller, and sde and sdf on another.
 > Build the RAID so that each RAID1 mirror spans the SATA controllers: pair sdc with sde, and sdd with sdf.
 > {{{
 > # mdadm --create /dev/md0 -v --raid-devices=4 --level=raid10 /dev/sdc1 /dev/sde1 /dev/sdd1 /dev/sdf1
 > }}}
 > Note: the order of the partition arguments determines the layout (the 1st/2nd and the 3rd/4th devices each form a RAID1 pair).
 * metadata
   * [http://www.spinics.net/lists/raid/msg27926.html metadata 1.2 — Linux RAID]
   * [http://www.issociate.de/board/post/493796/can_metadata_type_be_changed_for_an_active_array?.html can metadata type be changed for an active array?]
   * [https://raid.wiki.kernel.org/index.php/RAID_superblock_formats RAID superblock formats - Linux Raid Wiki]
 * [http://www.raid-recovery-guide.com/raid5-write-hole.aspx "Write hole" phenomenon in RAID5, RAID6, RAID1, and other arrays.]

= boot loader =

== lilo ==

 * When lilo is chosen at OS install time (e.g. because the filesystem is XFS), it appears to be installed into the MBR of every mirror disk by default (Ubuntu 8.04 Hardy)
   * attachment:ubuntu-lilo-raid1.png
 * /etc/lilo.conf (snip)
{{{
# Specifies the boot device. This is where Lilo installs its boot
# block. It can be either a partition, or the raw device, in which
# case it installs in the MBR, and will overwrite the current MBR.
#
boot=/dev/md1

# This option may be needed for some software RAID installs.
#
raid-extra-boot=mbr-only

# Specifies the location of the map file
#
map=/boot/map

# Specifies the number of deciseconds (0.1 seconds) LILO should
# wait before booting the first image.
#
delay=20

#
# Boot up Linux by default.
#
default=Linux

image=/vmlinuz
	label=Linux
	read-only
#	restricted
#	alias=1
	append="root=/dev/md1 "
	initrd=/initrd.img

image=/vmlinuz.old
	label=LinuxOLD
	read-only
	optional
#	restricted
#	alias=2
	append="root=/dev/md1 "
	initrd=/initrd.img.old
}}}
 * At least the first 10 MB of each disk are identical (a scripted form of this check follows the list):
   1. sudo dd if=/dev/sda of=sda count=10240 bs=1024
   1. sudo dd if=/dev/sdb of=sdb count=10240 bs=1024
   1. sha1sum sd?
{{{
642aace4e9239c85a011c4f6d643786b78e8e454  sda
642aace4e9239c85a011c4f6d643786b78e8e454  sdb
}}}
 * How this behaves with RAID5 is unknown
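 * A minimal script form of the same check. The device list is an assumption (the /dev/sda and /dev/sdb of the example above); it hashes the leading 10 MB of each disk directly instead of writing image files:
{{{
#!/bin/sh
# Compare the first 10 MB of each RAID1 member disk to confirm that the
# boot loader wrote identical data to all of them.
# The device list is an assumption; substitute your own members.
for d in /dev/sda /dev/sdb; do
    printf '%s: ' "$d"
    # 10240 blocks of 1 KB = 10 MB, hashed without a temporary file
    sudo dd if="$d" count=10240 bs=1024 2>/dev/null | sha1sum
done
}}}
 If the hashes match, the MBR and everything else in the first 10 MB is identical on both disks.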
== GRUB ==

 * By default GRUB is installed only into the first mirror disk's MBR, so it must also be installed into the second disk's MBR
 * [http://www.miraclelinux.com/technet/faq/data/00080.html Miracle Linux: notes on configuring the boot disk as software RAID 1 (mirroring)]
 * [http://www.naney.org/diki/d/2004-01-05-RAID.html nDiki: setting up software RAID1 boot with GRUB on Debian (2004-01-05)]
 * [http://d.hatena.ne.jp/hogem/20061018/1161187177 How to boot from the second HDD in a software RAID1 - うまい棒blog]
 * [http://shukukei.com/2009/03/raid1grub.html Installing grub onto a software RAID1 (notes) - 夙慧.com]
 * About GRUB commands
   * [http://www.h5.dion.ne.jp/~simochu/grub/grub.html Introduction to GNU GRUB]
   * [http://www.geocities.co.jp/SiliconValley-Bay/3897/grub/grub.html Using grub]
   * [http://grub.enbug.org/CommandList CommandList - GRUB Wiki]
   * According to this page, the "setup" command may be better for RAID than the "install" command that most how-tos treat as the de facto standard?
 * The procedure most commonly found online:
   1. df /boot
{{{
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md1               1450192    605532    771572  44% /
}}}
   1. grep md1 /proc/mdstat
{{{
md1 : active raid1 sda2[0] sdb1[2](S) sdc2[1]
}}}
   1. sudo grub
{{{
grub> device (hd0) /dev/sdc
grub> root (hd0,1)
grub> install /boot/grub/stage1 (hd0) /boot/grub/stage2 p /boot/grub/grub.conf
}}}
 * This way may be better? (unverified)
{{{
grub> root (hd2,1)
grub> setup (hd2)
}}}

=== Ubuntu ===

 * As with lilo, GRUB appears to be written to the second disk's MBR automatically; a manual GRUB install may be unnecessary?
 * /boot/grub/menu.lst (snip)
{{{
## default num
# Set the default entry to the entry number NUM. Numbering starts from 0, and
# the entry number 0 is the default if the command is not used.
#
# You can specify 'saved' instead of a number. In this case, the default entry
# is the entry saved with the command 'savedefault'.
# WARNING: If you are using dmraid do not use 'savedefault' or your
# array will desync and will not let you boot your system.
default		0

## timeout sec
# Set a timeout, in SEC seconds, before automatically booting the default entry
# (normally the first entry defined).
timeout		3

## hiddenmenu
# Hides the menu by default (press ESC to see the menu)
hiddenmenu

title		Ubuntu 8.04.2, kernel 2.6.24-24-server
root		(hd0,1)
kernel		/boot/vmlinuz-2.6.24-24-server root=/dev/md1 ro quiet splash
initrd		/boot/initrd.img-2.6.24-24-server
}}}

= mdadm =

== Ubuntu ==

 * [http://www.ubuntulinux.jp/getubuntu/releasenotes/810overview Ubuntu 8.10 Technical Overview | Ubuntu Japanese Team] (the section on degraded-RAID settings at boot)
 * /etc/initramfs-tools/conf.d/mdadm
{{{
# BOOT_DEGRADED:
# Do you want to boot your system if a RAID providing your root filesystem
# becomes degraded?
#
# Running a system with a degraded RAID could result in permanent data loss
# if it suffers another hardware fault.
#
# However, you might answer "yes" if this system is a server, expected to
# tolerate hardware faults and boot unattended.
BOOT_DEGRADED=false
}}}
   * Editing this file directly with vi etc. does not seem to take effect; the setting has to be changed through dpkg-reconfigure mdadm as described below.
 * /etc/mdadm/mdadm.conf
{{{
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
}}}
 * /etc/default/mdadm
{{{
# AUTOCHECK:
#   should mdadm run periodic redundancy checks over your arrays? See
#   /etc/cron.d/mdadm.
AUTOCHECK=true

# START_DAEMON:
#   should mdadm start the MD monitoring daemon during boot?
START_DAEMON=true
}}}
 * /etc/cron.d/mdadm
{{{
# By default, run at 01:06 on every Sunday, but do nothing unless the day of
# the month is less than or equal to 7. Thus, only run on the first Sunday of
# each month. crontab(5) sucks, unfortunately, in this regard; therefore this
# hack (see #380425).
6 1 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet
}}}
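 * The same redundancy check can also be started by hand, outside of cron. A minimal sketch; the array name /dev/md1 is an assumption taken from the examples above:
{{{
# Kick off a redundancy check on all arrays (the same Debian/Ubuntu
# wrapper script the cron job calls)
sudo /usr/share/mdadm/checkarray --all

# Equivalently, use the kernel's sysfs interface for a single array
echo check | sudo tee /sys/block/md1/md/sync_action

# Watch the check/resync progress
cat /proc/mdstat
}}}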
=== dpkg-reconfigure mdadm ===

 * The value shown in [ ... ] is the default
 * sudo dpkg-reconfigure mdadm
   1. /etc/default/mdadm AUTOCHECK
{{{
┌──────────────────────────┤ Configuring mdadm ├──────────────────────────
│
│ If your kernel supports it (>> 2.6.14), mdadm can periodically check the
│ redundancy of your MD arrays (RAIDs). This may be a resource-intensive
│ process, depending on your setup, but it could help prevent rare cases
│ of data loss. Note that this is a read-only check unless errors are
│ found; if errors are found, mdadm will try to correct them, which may
│ result in write access to the media.
│
│ The default, if turned on, is to run the checks on the first Sunday of
│ every month at 01:06 o'clock.
│
│ Should mdadm run monthly redundancy checks of the MD arrays?
│
│                                   []
│
└──────────────────────────────────────────────────────────────────────────
}}}
   1. /etc/default/mdadm START_DAEMON
{{{
┌──────────────────────────┤ Configuring mdadm ├──────────────────────────
│
│ The MD (RAID) monitor daemon sends email notifications in response to
│ important MD events (such as a disk failure). You probably want to
│ enable it.
│
│ Do you want to start the MD monitoring daemon?
│
│                                   []
│
└──────────────────────────────────────────────────────────────────────────
}}}
   1. /etc/mdadm/mdadm.conf MAILADDR
{{{
┌──────────────────────────┤ Configuring mdadm ├──────────────────────────
│
│ Please enter the email address of the user who should get the email
│ notification for important MD events.
│
│ Recipient for email notifications:
│
│ root_________________________________________________________________
│
└──────────────────────────────────────────────────────────────────────────
}}}
   1. /etc/initramfs-tools/conf.d/mdadm BOOT_DEGRADED
{{{
┌──────────────────────────┤ Configuring mdadm ├──────────────────────────
│
│ If your root filesystem is on a RAID, and a disk is missing at boot, it
│ can either boot with the degraded array, or hold the system at a
│ recovery shell.
│
│ Running a system with a degraded RAID could result in permanent data
│ loss if it suffers another hardware fault.
│
│ If you do not have access to the server console to use the recovery
│ shell, you might answer "yes" to enable the system to boot unattended.
│
│ Do you want to boot your system if your RAID becomes degraded?
│
│                                   []
│
└──────────────────────────────────────────────────────────────────────────
}}}
 * apply settings... (when lilo is in use)
{{{
 * Stopping MD monitoring service mdadm --monitor                    [ OK ]
Generating array device nodes... done.
Removing any system startup links for /etc/init.d/mdadm-raid ...
update-initramfs: Generating /boot/initrd.img-2.6.24-24-server
Warning: LBA32 addressing assumed
Added Linux *
Added LinuxOLD
The Master boot record of /dev/sda has been updated.
Warning: /dev/sdb is not on the first disk
The Master boot record of /dev/sdb has been updated.
2 warnings were issued.
 * Starting MD monitoring service mdadm --monitor                    [ OK ]
}}}
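 * To verify that the monitor daemon actually delivers mail to the MAILADDR configured above, and to see how a degraded array is handled, the array can be exercised deliberately. A sketch under the assumption that /dev/md1 with member /dev/sdb2 matches your setup; note that the --fail route genuinely degrades the array, so only use it on a healthy, redundant one:
{{{
# Send a TestMessage alert for every array; verifies mail delivery
# without touching the disks
sudo mdadm --monitor --scan --oneshot --test

# Deliberately degrade the array; mdadm --monitor should send a Fail alert
sudo mdadm /dev/md1 --fail /dev/sdb2
sudo mdadm /dev/md1 --remove /dev/sdb2
cat /proc/mdstat                  # array now degraded, e.g. [U_]

# Re-add the member and let it resync
sudo mdadm /dev/md1 --add /dev/sdb2
cat /proc/mdstat                  # shows recovery progress
}}}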
= mount =

== duplicate uuid on mount ==

 * /dev/md2 (XFS) -> RAID1 /dev/sda2, /dev/sdb2
 * on Knoppix
{{{
# mount /dev/sda2 /mnt/sda -r -t xfs
# mount /dev/sdb2 /mnt/sdb -r -t xfs
mount: wrong fs type, bad option, bad superblock on /dev/sdb2,
       missing codepage or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

# dmesg | tail
XFS: Filesystem sdb2 has duplicate UUID - can't mount
}}}
 * => [http://oss.sgi.com/archives/xfs/2004-10/msg00322.html Re: duplicate uuid on mount]
{{{
There is also the "nouuid" option to mount:

	mount -t xfs -o nouuid /dev/sdc5 /mnt/tmp

This tells the kernel to ignore duplicate UUID.
}}}
{{{
# mount /dev/sdb2 /mnt/sdb -r -t xfs -o nouuid
# mount
/dev/sda2 on /media/sda type xfs (ro,ikeep,noquota)
/dev/sdb2 on /media/sdb type xfs (ro,nouuid,noquota)
}}}
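 * The nouuid option is for inspecting the members individually. A sketch of the usual alternative on a rescue system, assuming the same /dev/md2 array as above: assemble the RAID1 first, so only one filesystem (and one UUID) is visible:
{{{
# Assemble the array from its members instead of mounting them directly
sudo mdadm --assemble /dev/md2 /dev/sda2 /dev/sdb2

# Or let mdadm locate all members via their superblocks
sudo mdadm --assemble --scan

# The md device now mounts like any single XFS filesystem
sudo mount /dev/md2 /mnt -r -t xfs
}}}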