Author Topic: OperationModeI12 degrade mode  (Read 14023 times)

Scott_Clark

  • Calf
  • *
  • Posts: 3
OperationModeI12 degrade mode
« on: August 06, 2013, 03:15:55 PM »
TS-QVH8.0TL/R6 (this unit is almost a year old)

There were some temporary files on the server that I couldn't delete (I think because a client computer crashed), so I restarted the NAS.

After the restart, the "Info" light is on and the message "OperationModeI12 degrade mode" is shown on the digital display.  Through the web interface, everything looks normal except for "Notify Information: degrade_mode_info" in the lower left of the browser window.  Checking through some other posts, I thought it might be related to a hard disk, but I don't see anything.

Below is part of the system log since the restart.

What should I be looking for?

Thanks,
Scott



Aug  6 08:11:55 CPU10 fanctld.sh: temp0=77 temp1=79 temp2=41 temp3=33 temp4=32
Aug  6 08:12:25 CPU10 fanctld.sh: fan0=fast->1923
Aug  6 08:15:01 CPU10 linkstation: This platform doesn't support any mtd.
Aug  6 08:20:01 CPU10 linkstation: This platform doesn't support any mtd.
Aug  6 08:25:01 CPU10 linkstation: This platform doesn't support any mtd.
Aug  6 08:30:01 CPU10 linkstation: This platform doesn't support any mtd.
Aug  6 08:35:02 CPU10 linkstation: This platform doesn't support any mtd.
Aug  6 08:40:01 CPU10 linkstation: This platform doesn't support any mtd.
Aug  6 08:42:26 CPU10 fanctld.sh: temp0=79 temp1=79 temp2=42 temp3=34 temp4=33
Aug  6 08:42:56 CPU10 fanctld.sh: fan0=fast->1917
Aug  6 08:45:01 CPU10 linkstation: This platform doesn't support any mtd.
Aug  6 08:50:01 CPU10 linkstation: This platform doesn't support any mtd.
Aug  6 08:52:21 CPU10 liblvm: DEV_CANDIDATE=/dev/md10 : ARRAY_NO=1
Aug  6 08:52:21 CPU10 liblvm: LibLvmShowDev_list>/dev/md10<>3840621568<><><>;
Aug  6 08:53:35 CPU10 linkstation: This platform doesn't support any mtd.
Aug  6 08:53:38 CPU10 twonky: Media Server script is begun. type=stop
Aug  6 08:53:38 CPU10 twonky: stop Media Server
Aug  6 08:53:39 CPU10 bittorrent: WaitBtDie is finished.
Aug  6 08:53:39 CPU10 linkstation: DWARF : Killing all dwarves
Aug  6 08:53:40 CPU10 errormon[2851]: exit.: sig=15
Aug  6 08:53:46 CPU10 kernelmon: cmd=lanact 0 half eth0
Aug  6 08:53:48 CPU10 linkstation: cron.sh : logrotate.status is fine.
Aug  6 08:57:30 CPU10 linkstation: Started logchkd
Aug  6 08:57:30 CPU10 linkstation: Started inetd
Aug  6 08:57:30 CPU10 linkstation: Started errormon
Aug  6 08:57:30 CPU10 linkstation: Started kernelmon
Aug  6 08:57:30 CPU10 kernelmon: cmd=SATA 0 plugged
Aug  6 08:57:31 CPU10 kernelmon: cmd=SATA 1 plugged
Aug  6 08:57:31 CPU10 kernelmon: cmd=SATA 2 plugged
Aug  6 08:57:31 CPU10 kernelmon: cmd=SATA 3 plugged
Aug  6 08:57:32 CPU10 kernelmon: cmd=SATA 4 unplugged
Aug  6 08:57:32 CPU10 kernelmon: cmd=SATA 5 unplugged
Aug  6 08:57:32 CPU10 kernelmon: cmd=SATA 6 unplugged
Aug  6 08:57:33 CPU10 kernelmon: cmd=SATA 7 unplugged
Aug  6 08:57:33 CPU10 fanctld.sh: temp0=79 temp1=79 temp2=41 temp3=34 temp4=29
Aug  6 08:57:33 CPU10 kernelmon: cmd=lanact 1000 full eth0
Aug  6 08:57:42 CPU10 start_data_array.sh: disk1 does not include System MD.
Aug  6 08:58:02 CPU10 start_data_array.sh: IS_CHECK_ARRAY_STAUS 0
Aug  6 08:58:02 CPU10 hdd_raid_syncspeed.sh: Adding EDP md devices
Aug  6 08:58:02 CPU10 hdd_raid_syncspeed.sh: USERLAND_MD=md10 md11 md12 md20 md21 md22 md101 md102 md103 md104
Aug  6 08:58:02 CPU10 hdd_raid_syncspeed.sh: /dev/md11 (raid1) sync speed max is setted to 3000
Aug  6 08:58:02 CPU10 hdd_raid_syncspeed.sh: /dev/md12 (raid1) sync speed max is setted to 3000
Aug  6 08:58:03 CPU10 libsys: DISK_DISKINFO_STAT[1] is updating to array1 from degrade
Aug  6 08:58:03 CPU10 libsys: DISK_DISKINFO_STAT[2] is updating to array1 from normal
Aug  6 08:58:03 CPU10 libsys: DISK_DISKINFO_STAT[3] is updating to array1 from normal
Aug  6 08:58:03 CPU10 libsys: DISK_DISKINFO_STAT[4] is updating to array1 from normal
Aug  6 08:58:03 CPU10 libsys: ARRAY_DEV[1]=md10
Aug  6 08:58:03 CPU10 libsys: ARRAY_STAT[1]=raid0
Aug  6 08:58:03 CPU10 libsys: ARRAY_DISKINFO_STAT[1]=
Aug  6 08:58:03 CPU10 libsys: ARRAY_DEV_DISKINFO_STAT[1]=
Aug  6 08:58:03 CPU10 libsys: ARRAY_DEV[2]=md20
Aug  6 08:58:03 CPU10 libsys: ARRAY_STAT[2]=
Aug  6 08:58:03 CPU10 libsys: ARRAY_DISKINFO_STAT[2]=
Aug  6 08:58:03 CPU10 libsys: ARRAY_DEV_DISKINFO_STAT[2]=
Aug  6 08:58:03 CPU10 libsys: DISK_DEV[1]=disk1_6
Aug  6 08:58:03 CPU10 libsys: DISK_STAT[1]=
Aug  6 08:58:03 CPU10 libsys: DISK_DISKINFO_STAT[1]=array1
Aug  6 08:58:03 CPU10 libsys: DISK_SIGNATUREINFO_STAT[1]=array1
Aug  6 08:58:03 CPU10 libsys: DISK_DEV_DISKINFO_STAT[1]=
Aug  6 08:58:03 CPU10 libsys: DISK_DEV[2]=disk2_6
Aug  6 08:58:03 CPU10 libsys: DISK_STAT[2]=
Aug  6 08:58:03 CPU10 libsys: DISK_DISKINFO_STAT[2]=array1
Aug  6 08:58:03 CPU10 libsys: DISK_SIGNATUREINFO_STAT[2]=array1
Aug  6 08:58:03 CPU10 libsys: DISK_DEV_DISKINFO_STAT[2]=
Aug  6 08:58:03 CPU10 libsys: DISK_DEV[3]=disk3_6
Aug  6 08:58:03 CPU10 libsys: DISK_STAT[3]=
Aug  6 08:58:03 CPU10 libsys: DISK_DISKINFO_STAT[3]=array1
Aug  6 08:58:03 CPU10 libsys: DISK_SIGNATUREINFO_STAT[3]=array1
Aug  6 08:58:03 CPU10 libsys: DISK_DEV_DISKINFO_STAT[3]=
Aug  6 08:58:03 CPU10 libsys: DISK_DEV[4]=disk4_6
Aug  6 08:58:03 CPU10 libsys: DISK_STAT[4]=
Aug  6 08:58:03 CPU10 libsys: DISK_DISKINFO_STAT[4]=array1
Aug  6 08:58:03 CPU10 libsys: DISK_SIGNATUREINFO_STAT[4]=array1
Aug  6 08:58:03 CPU10 libsys: DISK_DEV_DISKINFO_STAT[4]=
Aug  6 08:58:03 CPU10 libsys: array1 is raid0 : Checking sub volumes.
Aug  6 08:58:03 CPU10 libsys: array1 is using md11
Aug  6 08:58:03 CPU10 libsys: array1 is using md12
Aug  6 08:58:03 CPU10 libsys: (md_slaves_num=2)
Aug  6 08:58:03 CPU10 libsys: array1 have a md slaves. (md_slaves_num=2)
Aug  6 08:58:03 CPU10 libsys: md11 is raid1
Aug  6 08:58:03 CPU10 libsys: array1 is using sdb6
Aug  6 08:58:03 CPU10 libsys: sdb6 -> disk2
Aug  6 08:58:03 CPU10 libsys: md12 is raid1
Aug  6 08:58:03 CPU10 libsys: array1 is using sdc6
Aug  6 08:58:03 CPU10 libsys: array1 is using sdd6
Aug  6 08:58:03 CPU10 fanctld.sh: fan0=fast->1923
Aug  6 08:58:03 CPU10 libsys: sdc6 -> disk3
Aug  6 08:58:03 CPU10 libsys: sdd6 -> disk4
Aug  6 08:58:03 CPU10 libsys: Upper layer md is raid0
Aug  6 08:58:03 CPU10 libsys: Lower layer md is raid1
Aug  6 08:58:03 CPU10 libsys: array2 is off.
Aug  6 08:58:03 CPU10 libsys: diskinfo status is BAD!(first boot?) rebuilding diskinfo.
Aug  6 08:58:03 CPU10 libsys:  ----- diskinfo -----
Aug  6 08:58:03 CPU10 libsys: array1=raid10
Aug  6 08:58:03 CPU10 libsys: array1_dev=
Aug  6 08:58:03 CPU10 libsys: array2=off
Aug  6 08:58:03 CPU10 libsys: array2_dev=
Aug  6 08:58:03 CPU10 libsys: disk1=array1
Aug  6 08:58:03 CPU10 libsys: disk1_dev=
Aug  6 08:58:03 CPU10 libsys: disk2=array1
Aug  6 08:58:03 CPU10 libsys: disk2_dev=
Aug  6 08:58:03 CPU10 libsys: disk3=array1
Aug  6 08:58:03 CPU10 libsys: disk3_dev=
Aug  6 08:58:03 CPU10 libsys: disk4=array1
Aug  6 08:58:03 CPU10 libsys: disk4_dev=
Aug  6 08:58:03 CPU10 libsys: usb_disk1=0bc25071NA0LKENE
Aug  6 08:58:03 CPU10 libsys: usb_disk2=
Aug  6 08:58:04 CPU10 libsys: usb_disk3=
Aug  6 08:58:04 CPU10 libsys: usb_disk4=
Aug  6 08:58:04 CPU10 start_data_array.sh:  -- Mount local disks --
Aug  6 08:58:04 CPU10 start_data_array.sh: array1 is raid10 : try to mounting...
Aug  6 08:58:04 CPU10 start_data_array.sh: array1 is not encrypted
Aug  6 08:58:04 CPU10 start_data_array.sh: mounting /dev/md10 to /mnt/array1
Aug  6 08:58:05 CPU10 start_data_array.sh: Success to mount.
Aug  6 08:58:05 CPU10 start_data_array.sh:  -- checkRaidStatus 1 /dev/md10 --
Aug  6 08:58:06 CPU10 start_data_array.sh: array1 is using md11
Aug  6 08:58:07 CPU10 start_data_array.sh: array1 is using md12
Aug  6 08:58:07 CPU10 start_data_array.sh: checkRaidStatusDetail : md11 md10
Aug  6 08:58:07 CPU10 start_data_array.sh: checkRaidStatusDetail : md12 md10
Aug  6 08:58:07 CPU10 start_data_array.sh: array2 is off : skipping to mount ...
Aug  6 08:58:07 CPU10 start_data_array.sh: disk1 is array1 : skipping to mount ...
Aug  6 08:58:07 CPU10 start_data_array.sh: disk2 is array1 : skipping to mount ...
Aug  6 08:58:07 CPU10 start_data_array.sh: disk3 is array1 : skipping to mount ...
Aug  6 08:58:07 CPU10 start_data_array.sh: disk4 is array1 : skipping to mount ...
Aug  6 08:58:08 CPU10 liblvm: DEV_CANDIDATE=/dev/md10 : ARRAY_NO=1
Aug  6 08:58:08 CPU10 liblvm: LibLvmShowDev_list>/dev/md10<>3840621568<><><>;
Aug  6 08:58:08 CPU10 start_data_array.sh: LVM mount routine.
Aug  6 08:58:08 CPU10 start_data_array.sh: LVM Skipping /dev/lvm_disk1 - not exist.
Aug  6 08:58:08 CPU10 start_data_array.sh: LVM Skipping /dev/lvm_disk2 - not exist.
Aug  6 08:58:08 CPU10 start_data_array.sh: LVM Skipping /dev/lvm_disk3 - not exist.
Aug  6 08:58:08 CPU10 start_data_array.sh: LVM Skipping /dev/lvm_disk4 - not exist.
Aug  6 08:58:08 CPU10 start_data_array.sh: LVM Skipping /dev/lvm_array1 - not exist.
Aug  6 08:58:08 CPU10 start_data_array.sh: LVM Skipping /dev/lvm_array2 - not exist.
Aug  6 08:58:08 CPU10 start_data_array.sh: recover_shareinfo_sub : checking disk1
Aug  6 08:58:08 CPU10 start_data_array.sh: recover_shareinfo_sub : checking disk2
Aug  6 08:58:08 CPU10 start_data_array.sh: recover_shareinfo_sub : checking disk3
Aug  6 08:58:08 CPU10 start_data_array.sh: recover_shareinfo_sub : checking disk4
Aug  6 08:58:08 CPU10 start_data_array.sh: recover_shareinfo_sub : checking array1
Aug  6 08:58:08 CPU10 start_data_array.sh: Checking /mnt/array1/spool
Aug  6 08:58:08 CPU10 start_data_array.sh: Checking /mnt/array1/Active_Jobs
Aug  6 08:58:08 CPU10 start_data_array.sh: Checking /mnt/array1/cnc
Aug  6 08:58:08 CPU10 start_data_array.sh: Checking /mnt/array1/Engineering
Aug  6 08:58:08 CPU10 start_data_array.sh: Checking /mnt/array1/Installation
Aug  6 08:58:08 CPU10 start_data_array.sh: Checking /mnt/array1/iso
Aug  6 08:58:08 CPU10 start_data_array.sh: Checking /mnt/array1/job_archive
Aug  6 08:58:08 CPU10 start_data_array.sh: Checking /mnt/array1/Marketing
Aug  6 08:58:08 CPU10 start_data_array.sh: Checking /mnt/array1/New_POs
Aug  6 08:58:08 CPU10 start_data_array.sh: Checking /mnt/array1/nick
Aug  6 08:58:08 CPU10 start_data_array.sh: Checking /mnt/array1/POs
Aug  6 08:58:08 CPU10 start_data_array.sh: Checking /mnt/array1/Quotes_Jeff
Aug  6 08:58:08 CPU10 start_data_array.sh: Checking /mnt/array1/RedBook
Aug  6 08:58:08 CPU10 start_data_array.sh: Checking /mnt/array1/Ricoh_Amy
Aug  6 08:58:08 CPU10 start_data_array.sh: Checking /mnt/array1/Ricoh_Debbie
Aug  6 08:58:08 CPU10 start_data_array.sh: Checking /mnt/array1/Ricoh_Jeff
Aug  6 08:58:08 CPU10 start_data_array.sh: Checking /mnt/array1/Ricoh_ScottC
Aug  6 08:58:09 CPU10 start_data_array.sh: Checking /mnt/array1/Ricoh_Share
Aug  6 08:58:09 CPU10 start_data_array.sh: Checking /mnt/array1/SW_Projects
Aug  6 08:58:09 CPU10 start_data_array.sh: Checking /mnt/array1/SW_Tools
Aug  6 08:58:09 CPU10 start_data_array.sh: Checking /mnt/array1/SystemBackup
Aug  6 08:58:09 CPU10 start_data_array.sh: Checking /mnt/array1/PDJM
Aug  6 08:58:09 CPU10 start_data_array.sh: Checking /mnt/array1/Transfer_Amy
Aug  6 08:58:09 CPU10 start_data_array.sh: Checking /mnt/array1/ShopData
Aug  6 08:58:09 CPU10 start_data_array.sh: Checking /mnt/array1/Ricoh_CS
Aug  6 08:58:09 CPU10 start_data_array.sh: Checking /mnt/array1/Ricoh_DL
Aug  6 08:58:09 CPU10 start_data_array.sh: Checking /mnt/array1/Ricoh_BL
Aug  6 08:58:09 CPU10 start_data_array.sh: Checking /mnt/array1/Ricoh_CJ
Aug  6 08:58:09 CPU10 start_data_array.sh: recover_shareinfo_sub : checking array2
Aug  6 08:58:10 CPU10 liblvm: DEV_CANDIDATE=/dev/md10 : ARRAY_NO=1
Aug  6 08:58:10 CPU10 liblvm: LibLvmShowDev_list>/dev/md10<>3840621568<><><>;
Aug  6 08:58:10 CPU10 handle-mdadm-events: NewArray, /dev/md12,
Aug  6 08:58:11 CPU10 handle-mdadm-events: NewArray, /dev/md11,
Aug  6 08:58:11 CPU10 handle-mdadm-events: NewArray, /dev/md2,
Aug  6 08:58:11 CPU10 handle-mdadm-events: NewArray, /dev/md1,
Aug  6 08:58:12 CPU10 handle-mdadm-events: NewArray, /dev/md0,
Aug  6 08:58:12 CPU10 handle-mdadm-events: DegradedArray, /dev/md0,
Aug  6 08:58:12 CPU10 handle-mdadm-events: DegradedArray, /dev/md1,
Aug  6 08:58:13 CPU10 handle-mdadm-events: DegradedArray, /dev/md2,
Aug  6 08:58:14 CPU10 handle-mdadm-events: DegradedArray, /dev/md11,
Aug  6 08:58:26 CPU10 linkstation: cron.sh : logrotate.status is fine.
Aug  6 08:58:29 CPU10 S10_fo_server.sh: Busy status is no problem.(1<>0)
Aug  6 08:58:29 CPU10 S10_fo_server.sh: fo_confirm.pid check OK.
Aug  6 08:58:33 CPU10 twonky: Media Server script is begun. type=start
Aug  6 08:58:33 CPU10 S40B_update_notifications.sh: deleting old settings...
Aug  6 08:58:33 CPU10 S40B_update_notifications.sh: deleting old settings...
Aug  6 08:58:33 CPU10 twonky: Media Server setting is off
Aug  6 08:58:33 CPU10 S40B_update_notifications.sh: deleting old settings...
Aug  6 08:58:33 CPU10 S40B_update_notifications.sh: checking and registering to cron...
Aug  6 08:58:36 CPU10 splx_buffalo.sh: Service is setted to off
Aug  6 08:58:36 CPU10 root: linkstation
Aug  6 08:58:36 CPU10 S40B_update_notifications.sh: starting f/w cheking process ...
Aug  6 08:58:37 CPU10 S40B_update_notifications.sh: step1 result=0
Aug  6 08:58:37 CPU10 S40B_update_notifications.sh: version_is_latest
Aug  6 08:58:37 CPU10 S40B_update_notifications.sh: deleting from lcd...
Aug  6 08:58:42 CPU10 hdd_raid_syncspeed.sh: /dev/md0 sync speed max is setted to 50000
Aug  6 08:58:42 CPU10 hdd_raid_syncspeed.sh: /dev/md1 sync speed max is setted to 50000
Aug  6 08:58:42 CPU10 hdd_raid_syncspeed.sh: /dev/md2 sync speed max is setted to 50000
Aug  6 08:58:42 CPU10 hdd_raid_syncspeed.sh: Adding EDP md devices
Aug  6 08:58:42 CPU10 hdd_raid_syncspeed.sh: USERLAND_MD=md10 md11 md12 md20 md21 md22 md101 md102 md103 md104
Aug  6 08:58:42 CPU10 hdd_raid_syncspeed.sh: /dev/md11 (raid1) sync speed max is setted to 30000
Aug  6 08:58:42 CPU10 hdd_raid_syncspeed.sh: /dev/md12 (raid1) sync speed max is setted to 30000
Aug  6 08:58:43 CPU10 linkstation: #[miconapl.mcon_get_version] mcon_version=TS-QVL    Ver1.5
Aug  6 08:58:44 CPU10 errormon[3065]: Information situation detected! OperationModeI12DEGRADE MODE
Aug  6 08:58:50 CPU10 diskmon: lcd_error_man.sh array1_raid_error on diag_on buzzer_on(old error code)
Aug  6 08:58:50 CPU10 diskmon: lcd_error_man.sh array2_raid_error off(old error code)
Aug  6 08:58:53 CPU10 pmcd_exec.sh: Don't exist auto_power node!
Aug  6 08:58:53 CPU10 linkstation: 1.15-1.38 2013/03/13 21:27:24 started!
Aug  6 08:58:53 CPU10 buffalo_consoled.sh: It's not a console environment.
Aug  6 09:00:01 CPU10 linkstation: This platform doesn't support any mtd.
Aug  6 09:04:04 CPU10 kernelmon: cmd=micon_interrupts
Aug  6 09:04:08 CPU10 last message repeated 5 times
Aug  6 09:04:35 CPU10 liblvm: DEV_CANDIDATE=/dev/md10 : ARRAY_NO=1
Aug  6 09:04:35 CPU10 liblvm: LibLvmShowDev_list>/dev/md10<>3840621568<><><>;
Aug  6 09:05:01 CPU10 linkstation: This platform doesn't support any mtd.
Aug  6 09:10:01 CPU10 linkstation: This platform doesn't support any mtd.
Aug  6 09:15:01 CPU10 linkstation: This platform doesn't support any mtd.

davo

  • Really Big Bull
  • VIP
  • *
  • Posts: 6149
Re: OperationModeI12 degrade mode
« Reply #1 on: August 07, 2013, 08:38:32 AM »
What does the NAS Navigator software report? Are there any errors being reported in the System - Storage section?
PM me for TFTP / Boot Images / Recovery files  LSRecovery.exe file.
Having network issues? Drop me an email: info@interwebnetworks.com and we will get it fixed!

Eastmarch

  • 1500 Lb Water Buffalo
  • Administrator
  • *****
  • Posts: 339
Re: OperationModeI12 degrade mode
« Reply #2 on: August 07, 2013, 01:05:53 PM »
Also, scroll through the LCD screens with the 'Display' button and see what it says. If you have a bad hard drive, it will say something like 'Drive X is BROKEN', which would account for the degraded array.

It's important to handle this promptly; another drive failure could result in data loss.

In the System->Storage area, you should be able to click on Array 1 and see a list of drives and their status. If one shows 'error', I think that will pin down what the issue is.
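
If you can get a shell on the unit (SSH is not enabled on stock firmware, so treat this as a rough sketch for anyone who has opened up root access by other means), the md layer itself will tell you which member dropped out. The device names below are only examples taken from the log above; adjust them to whatever /proc/mdstat actually lists on your box:

  # Show all md arrays; a degraded mirror shows [U_] instead of [UU]
  cat /proc/mdstat
  # List the member slots of a suspect array; a 'removed' or 'faulty' slot points at the bad disk
  mdadm --detail /dev/md11
  # Inspect the RAID superblock on the partition that should belong to that array
  mdadm --examine /dev/sdb6

That should corroborate whatever the web interface shows for Array 1.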
**A single copy of data, even on a RAID array, is NOT a backup! Hard drive failure is not a question of IF, but WHEN! Don't take my word for it, take Google's!**

Ken_Williams

  • Calf
  • *
  • Posts: 1
Re: OperationModeI12 degrade mode
« Reply #3 on: October 21, 2013, 08:53:00 AM »
Was there ever a resolution to this issue?  I am getting the same messages.


bpolackoff

  • Calf
  • *
  • Posts: 1
Re: OperationModeI12 degrade mode
« Reply #4 on: November 13, 2013, 02:43:10 PM »
Anyone have any updates on this? I too am running into this now and am not sure where else to look or what else to do.  All my drives are blinking green like normal (from disk activity), but I am seeing the exact symptoms noted above.  Any help is greatly appreciated.

davo

  • Really Big Bull
  • VIP
  • *
  • Posts: 6149
Re: OperationModeI12 degrade mode
« Reply #5 on: November 14, 2013, 06:10:15 AM »
There was never a solution because the information Eastmarch and I asked for was never supplied.
PM me for TFTP / Boot Images / Recovery files  LSRecovery.exe file.
Having network issues? Drop me an email: info@interwebnetworks.com and we will get it fixed!

stranz

  • Calf
  • *
  • Posts: 1
Re: OperationModeI12 degrade mode
« Reply #6 on: January 17, 2014, 09:50:45 AM »
Hi,

I got the I12 error on the LCD display of my TS-QVH8.0TL/R6 with a RAID 5 array.
There is no further info on the display when pressing the Display button.
In the box, all disks show a green LED.
NAS Navigator shows a yellow '?'.
The web interface, under System / Storage / RAID, shows 'Error' on Array 1.

The firmware is up to date.

How can I find out which disk is damaged, or what the problem is...
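
If I can get a root shell on the unit (I understand that is not available on the stock firmware without extra steps), I was thinking of checking each disk's SMART status with something like the following, assuming smartctl is even present on the box:

  # Hypothetical per-disk health check; device names are guesses for a 4-bay unit
  for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
      echo "=== $d ==="
      smartctl -H -A "$d"   # -H prints the overall health verdict, -A the SMART attribute table
  done

But I would rather know the intended way to identify the failed disk.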

Thanks - S.