
boot loader

lilo

  • If lilo was selected at OS install time (for example because the filesystem is XFS), lilo appears to be installed by default into the MBR of every mirrored disk (Ubuntu 8.04 Hardy); see the re-run sketch at the end of this list
    • attachment:ubuntu-lilo-raid1.png
    • /etc/lilo.conf (snip)
      # Specifies the boot device.  This is where Lilo installs its boot
      # block.  It can be either a partition, or the raw device, in which
      # case it installs in the MBR, and will overwrite the current MBR.
      #
      boot=/dev/md1
      
      # This option may be needed for some software RAID installs.
      #
      raid-extra-boot=mbr-only
      
      # Specifies the location of the map file
      #
      map=/boot/map
      
      # Specifies the number of deciseconds (0.1 seconds) LILO should
      # wait before booting the first image.
      #
      delay=20
      
      #
      # Boot up Linux by default.
      #
      default=Linux
      
      image=/vmlinuz
              label=Linux
              read-only
      #       restricted
      #       alias=1
              append="root=/dev/md1  "
              initrd=/initrd.img
      
      image=/vmlinuz.old
              label=LinuxOLD
              read-only
              optional
      #       restricted
      #       alias=2
              append="root=/dev/md1  "
              initrd=/initrd.img.old
      
  • At least the first 10 MB of the two disks are identical
    1. sudo dd if=/dev/sda of=sda count=10240 bs=1024
    2. sudo dd if=/dev/sdb of=sdb count=10240 bs=1024
    3. sha1sum sd?
      642aace4e9239c85a011c4f6d643786b78e8e454  sda
      642aace4e9239c85a011c4f6d643786b78e8e454  sdb
      
  • How this behaves with RAID5 is unknown
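  • Untested sketch: with boot=/dev/md1 and raid-extra-boot=mbr-only as in the lilo.conf above, re-running lilo after a configuration change or kernel update should rewrite the boot block in the MBR of every RAID1 member.
    # -v makes lilo report which devices it writes to
    sudo lilo -v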

GRUB

  • About GRUB commands
  • According to this information, for RAID it may be better to use the "setup" command rather than the "install" command that has become the de facto standard in GRUB how-tos?
    • The procedure most commonly found on the net
      1. df /boot
        Filesystem           1K-blocks      Used Available Use% Mounted on
        /dev/md1               1450192    605532    771572  44% /
        
      2. grep md1 /proc/mdstat
        md1 : active raid1 sda2[0] sdb1[2](S) sdc2[1]
        
      3. sudo grub
        grub> device (hd0) /dev/sdc
        grub> root (hd0,1)
        grub> install /boot/grub/stage1 (hd0) /boot/grub/stage2 p /boot/grub/grub.conf
        
      • This may be the better way? (untested; see the MBR comparison sketch below)
        grub> root (hd2,1)
        grub> setup (hd2)
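
  • Untested verification sketch: after GRUB has been written to the second RAID member, the 446-byte boot-code area of its MBR should normally match the first disk's (assuming identically partitioned mirrors), so it can be compared much like the lilo check above.
    sudo dd if=/dev/sda of=mbr-sda bs=446 count=1
    sudo dd if=/dev/sdc of=mbr-sdc bs=446 count=1
    sha1sum mbr-sda mbr-sdc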
        

Ubuntu

  • As with lilo, GRUB appears to be set up automatically in the second disk's MBR as well, so a manual GRUB install may be unnecessary? (A quick check is sketched after the menu.lst excerpt.)
    • /boot/grub/menu.lst (snip)
      ## default num
      # Set the default entry to the entry number NUM. Numbering starts from 0, and
      # the entry number 0 is the default if the command is not used.
      #
      # You can specify 'saved' instead of a number. In this case, the default entry
      # is the entry saved with the command 'savedefault'.
      # WARNING: If you are using dmraid do not use 'savedefault' or your
      # array will desync and will not let you boot your system.
      default         0
      
      ## timeout sec
      # Set a timeout, in SEC seconds, before automatically booting the default entry
      # (normally the first entry defined).
      timeout         3
      
      ## hiddenmenu
      # Hides the menu by default (press ESC to see the menu)
      hiddenmenu
      
      title           Ubuntu 8.04.2, kernel 2.6.24-24-server
      root            (hd0,1)
      kernel          /boot/vmlinuz-2.6.24-24-server root=/dev/md1 ro quiet splash
      initrd          /boot/initrd.img-2.6.24-24-server
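
  • Quick check sketch: GRUB legacy's stage1 embeds the string "GRUB" in the boot sector, so a rough way to see whether each disk's MBR actually contains GRUB is:
    sudo dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep GRUB
    sudo dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep GRUB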
      

Debian

  • (ヅ) Debian squeeze + RAID1 + GRUB2 (grub-pc)

    Run dpkg-reconfigure grub-pc. A manual grub-install equivalent is sketched after the dialog below.

    │ GRUB install devices:                                │
    │                                                      │
    │    [*] /dev/sda (500107 MB; WDC_WD5000AAKB-00H8A0)   │
    │    [*] /dev/sdb (500107 MB; WDC_WD5000AAKB-00H8A0)   │
    │    [ ] /dev/md0 (51538 MB; eggplant:0)               │
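
    A manual equivalent (sketch): selecting devices in the dialog effectively makes grub-pc run grub-install on each of them, roughly:

      # as root: write GRUB2 to each component disk's MBR
      grub-install /dev/sda
      grub-install /dev/sdb
      # regenerate /boot/grub/grub.cfg
      update-grub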
    

mdadm

Ubuntu

  • /etc/initramfs-tools/conf.d/mdadm
    # BOOT_DEGRADED:
    # Do you want to boot your system if a RAID providing your root filesystem
    # becomes degraded?
    #
    # Running a system with a degraded RAID could result in permanent data loss
    # if it suffers another hardware fault.
    #
    # However, you might answer "yes" if this system is a server, expected to
    # tolerate hardware faults and boot unattended.
    
    BOOT_DEGRADED=false
    
    • Editing this file directly with vi or the like does not seem to take effect, presumably because the value is only picked up when the initramfs is regenerated; set it via dpkg-reconfigure mdadm as described below.
  • /etc/mdadm/mdadm.conf
    # instruct the monitoring daemon where to send mail alerts
    MAILADDR root
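
    • Test sketch: mdadm itself can send a one-off test alert to this address to confirm that mail delivery works.
      sudo mdadm --monitor --scan --oneshot --test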
    
  • /etc/default/mdadm
    # AUTOCHECK:
    #   should mdadm run periodic redundancy checks over your arrays? See
    #   /etc/cron.d/mdadm.
    AUTOCHECK=true
    
    # START_DAEMON:
    #   should mdadm start the MD monitoring daemon during boot?
    START_DAEMON=true
    
    • /etc/cron.d/mdadm
      # By default, run at 01:06 on every Sunday, but do nothing unless the day of
      # the month is less than or equal to 7. Thus, only run on the first Sunday of
      # each month. crontab(5) sucks, unfortunately, in this regard; therefore this
      # hack (see #380425).
      6 1 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet
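
    • Manual run sketch: the same redundancy check can be started outside cron; progress is visible in /proc/mdstat.
      sudo /usr/share/mdadm/checkarray --all
      cat /proc/mdstat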
      

dpkg-reconfigure mdadm

  • [ ... ] marks the default choice
  • sudo dpkg-reconfigure mdadm
    1. /etc/default/mdadm AUTOCHECK
       ┌───────────────────────────┤ Configuring mdadm ├
       │                                                                           │
       │ If your kernel supports it (>> 2.6.14), mdadm can periodically check the  │
       │ redundancy of your MD arrays (RAIDs). This may be a resource-intensive    │
       │ process, depending on your setup, but it could help prevent rare cases    │
       │ of data loss. Note that this is a read-only check unless errors are       │
       │ found; if errors are found, mdadm will try to correct them, which may     │
       │ result in write access to the media.                                      │
       │                                                                           │
       │ The default, if turned on, is to run the checks on the first Sunday of    │
       │ every month at 01:06 o'clock.                                             │
       │                                                                           │
       │ Should mdadm run monthly redundancy checks of the MD arrays?              │
       │                                                                           │
       │                   [<Yes>]                      <No>                       │
       │                                                                           │
       └──────────────────────────────────────
      
    2. /etc/default/mdadm START_DAEMON
        ┌──────────────────────────┤ Configuring mdadm ├
        │                                                                         │
        │ The MD (RAID) monitor daemon sends email notifications in response to   │
        │ important MD events (such as a disk failure). You probably want to      │
        │ enable it.                                                              │
        │                                                                         │
        │ Do you want to start the MD monitoring daemon?                          │
        │                                                                         │
        │                   [<Yes>]                      <No>                     │
        │                                                                         │
        └──────────────────────────────────────
      
    3. /etc/mdadm/mdadm.conf MAILADDR
         ┌─────────────────────────┤ Configuring mdadm ├─
         │ Please enter the email address of the user who should get the email   │
         │ notification for important MD events.                                 │
         │                                                                       │
         │ Recipient for email notifications:                                    │
         │                                                                       │
         │ root_________________________________________________________________ │
         │                                                                       │
         │                                <Ok>                                   │
         │                                                                       │
         └─────────────────────────────────────
      
    4. /etc/initramfs-tools/conf.d/mdadm BOOT_DEGRADED
       ┌───────────────────────────┤ Configuring mdadm ├
       │                                                                           │
       │ If your root filesystem is on a RAID, and a disk is missing at boot, it   │
       │ can either boot with the degraded array, or hold the system at a          │
       │ recovery shell.                                                           │
       │                                                                           │
       │ Running a system with a degraded RAID could result in permanent data      │
       │ loss if it suffers another hardware fault.                                │
       │                                                                           │
       │ If you do not have access to the server console to use the recovery       │
       │ shell, you might answer "yes" to enable the system to boot unattended.    │
       │                                                                           │
       │ Do you want to boot your system if your RAID becomes degraded?            │
       │                                                                           │
       │                    <Yes>                      [<No>]                      │
       │                                                                           │
       └──────────────────────────────────────
      
    • Applying the settings... (when lilo is in use)
       * Stopping MD monitoring service mdadm --monitor                        [ OK ]
      Generating array device nodes... done.
       Removing any system startup links for /etc/init.d/mdadm-raid ...
      update-initramfs: Generating /boot/initrd.img-2.6.24-24-server
      Warning: LBA32 addressing assumed
      Added Linux *
      Added LinuxOLD
      The Master boot record of  /dev/sda  has been updated.
      Warning: /dev/sdb is not on the first disk
      The Master boot record of  /dev/sdb  has been updated.
      2 warnings were issued.
       * Starting MD monitoring service mdadm --monitor                        [ OK ]
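
    • Verification sketch: the values chosen in the dialogs should end up in the files quoted in the previous section.
      grep -E '^(AUTOCHECK|START_DAEMON)' /etc/default/mdadm
      grep '^MAILADDR' /etc/mdadm/mdadm.conf
      grep '^BOOT_DEGRADED' /etc/initramfs-tools/conf.d/mdadm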
      

mount

duplicate uuid on mount

  • /dev/md2 (XFS) -> RAID1 /dev/sda2, /dev/sdb2
  • on Knoppix
    # mount /dev/sda2 /mnt/sda -r -t xfs
    # mount /dev/sdb2 /mnt/sdb -r -t xfs
    mount: wrong fs type, bad option, bad superblock on /dev/sdb2,
           missing codepage or other error
           In some cases useful info is found in syslog - try
           dmesg | tail  or so
    # dmesg | tail
    XFS: Filesystem sdb2 has duplicate UUID - can't mount
    
  • => Re: duplicate uuid on mount
    There is also the "nouuid" option to mount:
    mount -t xfs -o nouuid /dev/sdc5 /mnt/tmp
    This tells the kernel to ignore duplicate UUID.
    
    # mount /dev/sdb2 /mnt/sdb -r -t xfs -o nouuid
    # mount
    /dev/sda2 on /media/sda type xfs (ro,ikeep,noquota)
    /dev/sdb2 on /media/sdb type xfs (ro,nouuid,noquota)
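
  • Sketch: blkid should show that both RAID1 members carry the same filesystem UUID, which is why the second mount needs -o nouuid.
    sudo blkid /dev/sda2 /dev/sdb2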
    