So I recently had a catastrophe using the Mega cloud service with my Linux Mint desktop, and my whole home directory got wiped out. Everything was deleted about an hour into syncing my 151 GB home directory to the cloud, so needless to say most of my data never made it there. When it got deleted from the cloud, the sync client then deleted it all from my desktop too, so I lost everything. Upon further investigation I found the small amount of data that did get synced sitting in the cloud's Rubbish Bin directory, and then found the rest of my files (the majority of the 151 GB home directory) in a hidden folder the cloud service set up in my home directory, named .debris. What I want to do is use rsync to move all the files from /home/garrett/.debris to /home/garrett/.
Here's the thing. Both directories (/home/garrett/ and /home/garrett/.debris/) contain the same top-level directories (Music, Videos, Pictures, etc.), but /home/garrett/ holds only a minority of the files within those top-level directories, just the few that got synced before the sync was interrupted, while /home/garrett/.debris/ holds the majority, the ones that got deleted before the sync finished. So I want rsync to move all the files from /home/garrett/.debris/ to /home/garrett/, but I don't want files and directories under /home/garrett/ to be overwritten by the files and directories in /home/garrett/.debris. Instead, rsync should be recursive and only move a directory from the source if it's not already in the destination, and if a given directory is already in the destination, it should look inside that directory for other files and directories that are in the source but not in the destination.

rsync [?options?] /home/garrett/.debris/* /home/garrett/

I'm sure I'll need a recursive option here, but after looking at the rsync man page I'm pretty lost.

Also, since there are 151 GB, give or take, to move here, should I be using the * wildcard, or would it be better to write a bash script that loops the rsync command? I'm worried the number of files I have might be too many arguments for rsync to handle.


To be cautious, use the dry-run option first:

rsync --verbose --recursive --ignore-existing --dry-run <src> <dst>
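
If the dry run output looks right, run the same command again without --dry-run. Keep in mind that rsync copies rather than moves; if you add --remove-source-files it will delete each file from .debris after it has been successfully transferred (the now-empty directories are left behind, but those are safe to remove by hand afterwards).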

You can also use a cp command with --no-clobber (add --recursive so it descends into the directories; note that cp copies rather than moves, so you temporarily need disk space for both copies):
cp --recursive --no-clobber <src> <dst>
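
As for the * wildcard question: you don't need it. Point rsync at the source directory itself with a trailing slash and it does the recursion internally, so the shell never has to expand the file list (and hidden files don't get skipped):

rsync --verbose --recursive --ignore-existing --dry-run /home/garrett/.debris/ /home/garrett/

The argument-count limit only applies to what the shell expands on the command line, not to the number of files rsync itself walks.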

You can also run this Python script:

#!/usr/bin/env python
# -*-coding: utf8-*-
'''Move files and directories from ~/.debris back into the home
directory without overwriting anything that already exists there.
'''
from __future__ import (absolute_import, division,
                        print_function, unicode_literals)
import os
from os.path import join as pjoin
import shutil

DEBRIS = '/home/garrett/.debris'
HOME = '/home/garrett'

def home(path):
    """Map a path under DEBRIS to the corresponding path under HOME."""
    return path.replace(DEBRIS, HOME, 1)

def main():
    for dirpath, dirnames, filenames in os.walk(DEBRIS):
        todo = []
        for d in dirnames:
            src = pjoin(dirpath, d)
            dst = home(src)
            if os.path.exists(dst):
                # Directory already exists in HOME: don't move it,
                # but descend into it to merge its contents.
                todo.append(d)
            else:
                try:
                    shutil.move(src, dst)
                except (OSError, shutil.Error):
                    print('Could not move directory {}'.format(src))
        # Prune the walk: only recurse into directories we did not move.
        dirnames[:] = todo
        for f in filenames:
            src = pjoin(dirpath, f)
            dst = home(src)
            if not os.path.exists(dst):
                try:
                    shutil.move(src, dst)
                except (OSError, shutil.Error):
                    print('Could not move file {}'.format(src))

if __name__ == '__main__':
    main()
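
To try it, save the script under any name (recover_debris.py below is just an example, not anything the script requires) and run it as your normal user so file ownership stays intact:

python recover_debris.py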

Oh, and one other criterion: the files that were being transferred when the sync stopped and everything got deleted are no doubt corrupted, but that's probably not many files, if any. Is there an option for stitching two files with the same name back together if one or both of them are incomplete due to a transfer error? Thanks again.
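
As far as I know rsync has nothing that can merge two partial copies of the same file; --append can resume a transfer that was cut off, but it simply assumes the data already at the destination is correct. What you can do is get a list of same-name files whose contents differ between the two trees with a checksum dry run, and then inspect those by hand:

rsync --checksum --itemize-changes --dry-run /home/garrett/.debris/ /home/garrett/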

Gribouillis, will that Python script stop every time it runs into a problem, or will it keep going and simply print those error messages to the screen?

The Python script will keep going and print an error message whenever it runs into a problem. The only effect is that the files and directories that could not be moved stay where they are in the .debris directory.

I seem to have recovered my lost data, but oddly I now have roughly 7 GB more in my home directory than before everything got wiped out. Does anyone have any ideas on how I could have ended up with an extra 7 GB of data when I was only recovering lost data and deleting the duplicates?

Temporary files?

Any idea how I can find and identify these temporary files?
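
A rough way to look, as a sketch (the patterns below are just common temp-file suffixes, adjust them to whatever your applications use):

find /home/garrett -name '*~' -o -name '*.tmp' -o -name '*.part'

And to see where the extra space actually went, compare directory sizes:

du -sh /home/garrett/*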
