Hi,
I'm having trouble with a RAID array managed through mdadm. Unfortunately I know nothing about mdadm, so I wanted to ask for advice here.
The array has 6 disks in total and was mounted in Ubuntu as data storage. One day Ubuntu froze, and after a reboot the array was no longer available.
During boot I got the message: "/mnt/pool is not present or not available", with a choice of S to continue starting Ubuntu or M to mount it manually. When I let Ubuntu boot, the pool was of course nowhere to be found. When I opened palimpsest, the array was visible in the list of Multi-disk devices, but its status was "Not running, partially assembled". I read through various forums; some people wrote that it helped them to stop the array via palimpsest (even in the "Not running" state it could be stopped) and then start it again. Unfortunately that did not help me, and when I try to start it I get:
metadata format 01.02 unknown, ignored.
mdadm: cannot open device /dev/sdg1: Device or resource busy
mdadm: /dev/sdg1 has no superblock - assembly aborted
Now I can run mdadm --detail /dev/md2 (until I stopped the array, it just ended with a message that /dev/md2 was apparently unavailable):
mdadm: metadata format 01.02 unknown, ignored.
/dev/md2:
Version : 01.02
Creation Time : Sun Aug 8 22:50:41 2010
Raid Level : raid6
Used Dev Size : 1953511424 (1863.01 GiB 2000.40 GB)
Raid Devices : 6
Total Devices : 4
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Tue Jan 1 18:39:38 2013
State : active, degraded, Not Started
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Name : :pool1
UUID : 52544974:24687624:d1188992:95f07e6b
Events : 541811
    Number   Major   Minor   RaidDevice   State
       0       8      113        0        active sync   /dev/sdh1
       1       8       65        1        active sync   /dev/sde1
       2       8       97        2        active sync   /dev/sdg1
       3       8       81        3        active sync   /dev/sdf1
       4       0        0        4        removed
       5       0        0        5        removed
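Before attempting any repair, it is worth gathering read-only information first. Judging purely from the sizes in /proc/partitions further down, /dev/sda1 and /dev/sdb1 match the four active members, so they are presumably the two "removed" disks; that is an assumption and should be verified. A read-only inspection might look like this sketch:

```shell
# Read-only inspection -- none of this writes to the disks.
# Dump the superblock of every candidate member and compare the
# Events counters: members whose counters (nearly) match can safely
# be re-assembled together. /dev/sda1 and /dev/sdb1 are ASSUMED to
# be the two missing members (same size in /proc/partitions).
for d in /dev/sda1 /dev/sdb1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1; do
  echo "== $d =="
  mdadm --examine "$d" | grep -E 'Events|Update Time|Array State|Device Role'
done
```

Saving the full `mdadm --examine` output of every member somewhere off the array is cheap insurance before any recovery attempt.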
cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=3ac84499:ba962435:4d0ac48f:0dedaf16
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=744161aa:ba92c60d:ae5af280:697c2a6e
ARRAY /dev/md2 level=raid6 num-devices=6 metadata=01.02 name=:pool1 UUID=52544974:24687624:d1188992:95f07e6b
# This file was auto-generated on Fri, 30 Jul 2010 22:52:29 +0100
# by mkconf $Id$
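Incidentally, the repeated "mdadm: metadata format 01.02 unknown, ignored." warning most likely comes from the `metadata=01.02` token in this auto-generated mdadm.conf: that spelling was written by older mdadm releases, and current mdadm only recognizes `1.2`. The warning itself is harmless, but editing the ARRAY line removes the noise; a corrected line would presumably read:

```
ARRAY /dev/md2 level=raid6 num-devices=6 metadata=1.2 name=:pool1 UUID=52544974:24687624:d1188992:95f07e6b
```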
cat /proc/mdstat:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : inactive sdh1[0] sdf1[3] sdg1[2] sde1[1]
7814047460 blocks super 1.2
md1 : active raid1 sdd2[1] sdc2[0]
7833536 blocks [2/2] [UU]
md0 : active raid1 sdd1[1] sdc1[0]
50780096 blocks [2/2] [UU]
unused devices: <none>
cat /proc/partitions:
major minor #blocks name
8 0 1953514584 sda
8 1 1953512001 sda1
8 16 1953514584 sdb
8 17 1953512001 sdb1
8 32 58615704 sdc
8 33 50780160 sdc1
8 34 7833600 sdc2
9 0 50780096 md0
9 1 7833536 md1
8 48 58615704 sdd
8 49 50780160 sdd1
8 50 7833600 sdd2
8 64 1953514584 sde
8 65 1953512001 sde1
8 80 1953514584 sdf
8 81 1953512001 sdf1
8 96 1953514584 sdg
8 97 1953512001 sdg1
8 112 1953514584 sdh
8 113 1953512001 sdh1
I can see all the disks in palimpsest; their SMART status is either Healthy or reports a few bad sectors. So I don't know whether the problem could be related to bad sectors on some of the disks... The array holds fairly important data that I would really hate to lose, which is why I'm not attempting any recovery experiments on my own.
I'll be grateful for any advice, thanks.
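For what it's worth, a cautious next step might look like the sketch below. It assumes the four members shown as "active sync" (/dev/sde1, /dev/sdf1, /dev/sdg1, /dev/sdh1) are healthy; RAID6 can run with two members missing. The "Device or resource busy" error on /dev/sdg1 is expected while the half-assembled, inactive md2 still holds the partitions, so the array has to be stopped before any reassembly. This is only a sketch, not a definitive procedure, and `mdadm --create` should never be used on an array being recovered, as it overwrites the superblocks.

```shell
# Release the member partitions held by the inactive array:
mdadm --stop /dev/md2

# Try a plain assemble with the four known-good members; --run asks
# mdadm to start the array even though two of six devices are missing:
mdadm --assemble --run /dev/md2 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1

# Only if that refuses due to mismatched Events counters, consider
# --force (it rewrites event counters, so save `mdadm --examine`
# output for every member beforehand):
# mdadm --assemble --run --force /dev/md2 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1

# Mount read-only first and check the data before trusting the array
# with writes again:
mount -o ro /dev/md2 /mnt/pool
```

If the assemble succeeds and the data checks out, the two missing disks can later be re-added one at a time with `mdadm --add` and left to rebuild.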