I am running some makeshift tar backups, because our tape drive is broken.

The target location of the tar file is a mounted samba share on another server on the network.

Once the tar file reaches 2 GB, I get the following error.

backupjob.bash: line 79: 25891 Broken pipe

This only happens if I run the tar command on the server it is being backed up FROM. If I run the tar command on the server it is being backed up TO (which grabs the files to tar off of a mounted samba share on the main server), then I don't get this error. I don't want to have to do it that way, though, because everything to be backed up has to go over the network uncompressed.
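For context, line 79 of backupjob.bash is essentially just tar writing its output onto the samba mount. This is only a hypothetical illustration with made-up paths and mount point, not the exact line from the script:

# Hypothetical illustration only -- the real line 79 and its paths differ
tar cvf - /home /etc | gzip > /mnt/backup0/nightly.tar.gz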


What seems kind of odd about this to me is that my cp/mv/rm commands are all old versions that can't handle large files (I think 2GB is the limit), while my tar command does not have the 2GB limit. So whenever I have had to delete a really large file, I have always done it from my Windows PC through samba, which is able to delete and move large files.

So it still seems odd to me (considering samba is how I delete 2GB+ files) that I am getting this error, seemingly just because it's trying to access a 2GB+ file over samba.

Does anyone have any idea why this is happening?

I have done some tests, and if I even try to copy a file bigger than 2GB to the other server through the mounted samba share, I get the following error when it reaches 2GB:

Filesize limit exceeded (core dumped)

However, I have drives mapped to both servers on my Windows PC, and I can drag and copy a 5GB file from one server to the other with no problems at all.

So the problem does not seem to be the samba share itself, but rather how it is mounted.

Any ideas?

Think I found the problem.

I had created the mounts in question using Webmin, and it apparently used "mount.smbfs" instead of "smbmount".

I am about to remount with smbmount, and hopefully it works.
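Roughly what I'm planning to run, with the share path, mount point, and credentials as placeholders:

# Hypothetical remount using smbmount instead of mount.smbfs
umount /mnt/backup0
smbmount //fileserver/backup /mnt/backup0 -o username=backupuser,password=secret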

Nope, that didn't solve the problem. It seems my smbmount command is limited to 2GB. I have kernel 2.4.21-27.0.2; can I get an smbmount that supports 2GB+ files without updating the kernel?

Try the lfs option when you mount the smbfs; it worked for me:

mount -t smbfs //10.1.1.5/SHARE /mnt/backup0 -o username=your_user,password=your_pw,lfs

Thanks for the idea. This didn't work though. :( I think it might have something to do with my mv and cp commands being old versions that don't support large files.

Your problem might have something to do with samba itself. Try mounting the samba share with ,lfs at the end of your -o string.

That being said, it sounds like both your servers are *NIX based, so why use samba at all?

Read up on rsync.
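For example, something along these lines, with the host name and paths as placeholders:

# Hypothetical example: mirror a directory from the main server over ssh
rsync -av -e ssh main_server:/data/ /backups/main_server/data/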

Thanks for the suggestions.

I did actually just try the ,lfs option, and it still did not work.

I am familiar with rsync. We use it to sync our main server (server 1) with our "backup" server (server 2), for use in case our main server fails.

What I need in this case, though, is to be able to mount a directory from another server (server 3) on the main server (server 1), so that we can easily copy files between them from the command line and in shell scripts. I suppose I could use NFS if needed, but I already have samba set up, because we need to be able to access the files on any of the servers from our Windows workstations. So I figured it would just be easiest to mount the samba shares that already exist.
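If I did end up going the NFS route, I assume it would boil down to something like this (export path and mount point are made up for illustration):

# On server 3: export the directory, e.g. a line like this in /etc/exports
#   /export/backups  server1(rw,sync)
# On server 1: mount it
mount -t nfs server3:/export/backups /mnt/backup0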

The strange thing is, I have mapped drives to shares on both servers, and I can "drag and drop" 2GB+ files from server to server -- so it's not the samba share itself; it seems to be how it's mounted.

It's not a huge deal really. We never need to copy 2GB+ files in a script, and rarely need to at all, and the few times we do, I can just drag and drop from Windows.

I was just hoping I could find out what is wrong with the mount (i.e. do I need a new version of smbmount; am I mounting it wrong; etc), mainly just so I could understand mounting samba shares on linux a little better.

The problem is that if you're using two Linux or *NIX boxes, Samba is nasty, because it's basically translating from *nix fs -> Samba's kludge of the MS SMB protocol -> *nix fs. A lot can get lost in that translation.

It's a moving target trying to hit another moving target (Microsoft's SMB protocol), which changes with every release of MS operating systems and their service packs. "DOS ain't done 'til Lotus won't run." I don't know if you've come across that quote, but it applies to Samba, SMB, and Microsoft's stance on the two. If you don't need Samba to interface with Windows computers, don't use it -- that's probably the best advice I can give.

SMB is a really chatty protocol too. The latest version requires something on the order of 1500 packets (I think the number tridge gave was 1506) to delete a file!


Getting back to the subject at hand, try this:

tar -M -L 2000000 -cv -f archive1 -f archive2 [...] -f archiveN /dir/to/backup

This will make tar switch archives after 2000000*1024 bytes (roughly 2GB). Just make sure you have enough '-f archiveN' arguments defined to cover the size of the backup, or it will prompt you in a very cryptic way to mount a new tape (and will overwrite the one it just finished if it doesn't like your answer :) ).

A drawback is that you can't use compression when spanning archives.

If you're good at writing shell, or have time on your hands, you can use the -F option to point to a script that will automatically 'change the tape' for you.
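A minimal sketch of such a script, assuming a GNU tar new enough to export TAR_ARCHIVE and TAR_VOLUME to the info script (check 'info tar' for your version -- older tars may not set these, in which case you'd have to hard-code the archive name):

#!/bin/sh
# change-vol.sh -- hypothetical -F script for multi-volume tar.
# TAR_VOLUME is the volume tar is about to start, so the volume just
# finished is TAR_VOLUME - 1. Move it aside so the next volume can
# reuse the same -f filename.
mv "$TAR_ARCHIVE" "$TAR_ARCHIVE.$((TAR_VOLUME - 1))"

Then you only need a single -f on the command line, something like:

tar -cvM -L 2000000 -F ./change-vol.sh -f /mnt/backup0/backup.tar /dir/to/backup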

Thanks for the info! :)

One possibility is that (some) 2.4 kernels could be limited to 2GB file sizes. And/or some 2.4 filesystems could be limited to 2GB file sizes.

Another could be that you have a process limit. Try 'ulimit -a' and see what the max file size is. If it's limited, 'ulimit -f unlimited' should bring you to the kernel's (and/or filesystem's) limit.
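For example, near the top of backupjob.bash you could drop in something like this (just a sketch):

# Sketch: make sure the shell isn't capping file sizes before tar runs
ulimit -a | grep 'file size'    # e.g. "file size (blocks, -f) unlimited"
ulimit -f unlimited             # lift any per-process cap for this script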

As mentioned before, I really don't think it can be a kernel limit or anything like that, because I can drag and drop 2GB+ files all day from a Windows computer with the samba shares mapped as drives.

I did try 'ulimit -a' and it said unlimited for filesize.

I noticed ulimit seems to be built into the bash shell (I had to switch to bash to run it).

Would these limits apply only to things running in a bash shell? If so, is there another command to check the same thing for csh (which is what we use)?

The equivalent command in csh is 'limit' and will likely show 'unlimited' file size as well.
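In csh/tcsh that would look something like this (again, just a sketch):

# csh/tcsh equivalents of the ulimit check above
limit filesize      # show the current file size limit
unlimit filesize    # remove the limit, if one turns out to be set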

Hmmm. It doesn't sound like the filesystem is the problem, since remote systems can read and write 2GB+ files. It almost sounds like your 2.4 samba cannot write 2GB+ files, though it can read them just fine.

Here's an experiment to try to narrow down the problem. Install nc(1) on both servers, if needed. It might be called netcat on some older systems. Also be aware that some versions of nc have different usage syntaxes. On your 2.4 system:

tar cvf - splitting_large_tree | nc -l -p 1021

This will pipe tar's output to netcat listening on port 1021. On your main server:

nc 2.4_server 1021 > 2.4_server.tar

This will connect to 2.4_server's port 1021 and dump the tar output to a file.

This will mostly limit the scope of the problem to 2.4_server's tar command.

If the tar succeeds, then the problem is likely the version of samba. If the tar fails, try (on 2.4_server):

dd if=/dev/zero bs=1024k count=4096 | nc -l -p 1021

and on the main server:

nc 2.4_server 1021 | dd of=/dev/null bs=1024k

This *will* exercise your network. You could even try these on 2.4_server by itself:

dd if=/dev/zero bs=1024k count=4096 | nc -l -p 1021 &
nc localhost 1021 >/dev/null

These experiments probably won't solve your problem, but they should help you narrow it down. Hmmm. Using netcat could be an effective workaround, now that I think about it.
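A rough sketch of that workaround, with the host name, port, and paths as placeholders -- start the receiver on the backup server, then run the sender on the main server:

# On the backup server: listen and write the incoming stream to a file
nc -l -p 1021 > /backups/main_server.tar.gz

# On the main server: tar (with compression) straight over the network,
# bypassing the samba mount entirely
tar czvf - /dir/to/backup | nc backup_server 1021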
