Hi guys, I'm fairly new to C++, though I have several years of experience with C#.

I've made a program with XAML and C++. Part of the program's job is to download several hundred 10 MB files from an FTP patch server. I've been using the .NET FtpWebRequest for this task, and it works OK. However...

I tested downloading the files with FileZilla, which is written in C++, and I got almost double the speed my .NET program did. This is no insignificant difference, either (3 Mbps vs. 1.5 Mbps).

I've concluded that unmanaged C++ is significantly faster than C# (well, I knew that already...) and that I should move the FTP code into a separate C++ executable... but I'm *really* lost as to how to do this.

I do know what the program needs to do:

  1. Receive 3 arguments, Host, Remote Path, and Local Path.
  2. Connect to FTP server anonymously
  3. Download said file, writing progress to stdout.
  4. Exit when the transfer completes.

I've looked at several C++ FTP libraries, but I can't seem to get a handle on anything. Can one of you help?

One last thing: I'd start the C++ exe from C#, supplying the arguments, then read its stdout to display progress in the UI.

You are not going to get a Mbps-scale speedup just by changing from C# to C++. How many times did you run this test? What was the state of your network when you ran these tests?

Getting double the transfer rate is indicative of a network problem, not a code problem.

[NOTE]: It could also be a compression setting. I'm not familiar enough with the FTP protocol to know whether there is an option to compress data before transfer; you should check it out and see if either of the two applications is doing things differently.


I've downloaded files both ways quite a bit; I'd estimate somewhere around 500 each. The file size is always 10 MB, and both FileZilla and my program report transferring the same number of bytes in total, so it would seem the compression is the same for both.

Also, I run one right after the other, i.e. I use my program, then FileZilla, so the network conditions shouldn't change between tests.


You can also try the WebClient.DownloadFile() method.
If you're going to use this functionality a lot, you could just compile it into a DLL and link it into your other projects as needed.

The file size is always 10 MB, and both FileZilla and my program report transferring the same number of bytes in total, so it would seem the compression is the same for both.

I'm not talking about the files being compressed on your machine; rather, I mean as part of the FTP protocol. It might be possible to request that FTP compress your file before transfer, uncompress it upon receipt and then deliver it to your application. This would mean fewer bytes over the wire while the file still appears full-size on your machine.

If you wouldn't mind using Qt, I suppose you could try:
http://doc.qt.nokia.com/latest/qftp.html

One word of advice: on Windows, I would definitely recommend using the Qt installer.

Alternatively, there are other projects:
http://sourceforge.net/projects/poco/

As for the command-line arguments:
as you probably already know, char *argv[] contains the command-line arguments, but the command line gets split wherever there's a space, and the shell has all sorts of other ancient behavior (the '>' character redirects the output, etc.), so you might want to bone up on that aspect first.

I'm not talking about the files being compressed on your machine; rather, I mean as part of the FTP protocol. It might be possible to request that FTP compress your file before transfer, uncompress it upon receipt and then deliver it to your application. This would mean fewer bytes over the wire while the file still appears full-size on your machine.

He said he measured the bytes transferred somehow, but I don't see how he could legitimately know that from someone else's client.

@pseudorandom21

At a cursory glance it seems FTP supports a 'compressed' mode. My suggestion is that this could account for the difference in transfer times. My comment was made independent of whatever measurement technique was used. For all I know it was file_size / transfer_time.

If the measurement was on the actual network stream, then it is not a code issue but a network issue. However, as stated earlier, these tests were run in succession, so that seems (on the surface) unlikely.

In either case, I seriously doubt that this is a 'speed of code' issue.
