
Hello All,

I am using a NAS (network attached storage) product that runs a customized 2.6 Linux kernel.
I've put two physical HDDs in my NAS box and configured them as RAID1 (mirror). The software tool I am using to configure it has a degraded mode management (DMM) option, which lets you choose how long the array is allowed to keep running in degraded mode.

Now, after configuring the RAID1, I pulled one of the physical drives out while DMM was set to immediate. As expected, the DMM kicked in and the RAID was stopped immediately. But if I am doing continuous I/O (using the iozone program or any simple disk I/O utility) on the shares created on a volume of that RAID, I instead get the message:

md: md0 still in use.

and the RAID continues to run in degraded mode. That is, the output of

mdadm -D /dev/md0

shows the array status as degraded, and the GUI (which I use to manage the NAS product) also shows it as degraded, while it should actually have been stopped immediately.

So I dug into the code and found that the message comes from the MD driver function do_md_stop() defined in md.c; after printing this message the driver returns EBUSY to the kernel. But I am still unclear about the relation between the DMM and the md driver. More importantly, if I want to track how that message came about, i.e. which process or condition drove the driver to that execution point, can I use something like getpid() inside the driver code, recompile it, and figure out what happened that way, or is there a better approach?
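
For example, I was thinking of adding a debug printk along these lines where the "still in use" message is printed in do_md_stop() in drivers/md/md.c. This is only a sketch based on a generic 2.6 md.c (my vendor's customized source may differ slightly), and as far as I understand there is no getpid() in kernel code; the calling task is reached through the current macro instead:

    /* Sketch only: fragment inside do_md_stop() in drivers/md/md.c.
     * The surrounding "active" check is paraphrased from a stock 2.6
     * kernel and may look different in a customized source tree. */
    if (atomic_read(&mddev->active) > 2) {
            printk(KERN_INFO "md: %s still in use.\n", mdname(mddev));

            /* Debug: which task reached this point, and via what path? */
            printk(KERN_INFO "md: do_md_stop() called by %s (pid %d)\n",
                   current->comm, current->pid);
            dump_stack();   /* dumps the kernel call chain to dmesg */

            return -EBUSY;
    }

Would that be a reasonable way to find out which process is still holding /dev/md0 open, or is there a cleaner method?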