Hello, I'm having a problem receiving information with Winsock. I have two programs written: one is a server capable of accepting connections from multiple clients, and the other is a simple single-socket client.

The server handles all the clients with an array of a user-defined type that holds a boolean flag and a SOCKET; the array is updated at run time as each user connects.

After a successful connection is made, both the client and server are put into non-blocking mode using ioctlsocket().
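
For reference, the non-blocking switch on each side is just ioctlsocket() with FIONBIO, roughly like this (setNonBlocking is just a helper name for the sake of the example):

#include <winsock2.h>

// put a socket into non-blocking mode; returns false on failure
bool setNonBlocking(SOCKET s)
{
    u_long mode = 1;                       // 1 = non-blocking, 0 = blocking
    return ioctlsocket(s, FIONBIO, &mode) != SOCKET_ERROR;
}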

Client send code:

string strSend = "all this packet information!!";
int sz = strSend.size();
send(m_socket, strSend.c_str(), sz, 0);

Server receive code:

// check incoming packets
for (int c = 0; c < nSize; c++) {

    char recvbuff[1024];
    int bytesRecv = 0;

    // each socket
    bytesRecv = recv(nClients[c].m_socket, recvbuff, sizeof(recvbuff), 0);

    if (bytesRecv > 0) {
        // recvbuff isn't null-terminated, so build the string from the byte count
        string output(recvbuff, bytesRecv);
        cout << output << endl;
    }
}

return;

My receive buffer is 1024 bytes, but whenever I send messages more than about 10 characters long, they tend to stack up. As you can see, I cout << output << endl; so that each incoming message should be on its own line. My result is this:

1234567890
1234567890
1234567890
12345678901234567890
12345678901234567890
1234567890
1234567890
12345678901234567890
1234567890
1234567890
1234567890
12345678901234567890

If my message is, say, around 6 characters, I never see this kind of stacking. The larger the message, the more often they stack. When I use "all this packet information!!", the result is:

all this packet information!!all this packet information!!all this packet information!!all this packet information!!all this packet information!!all this packet information!!all this packet information!!all this packet information!!all this packet information!!all this packet information!!all this packet information!!all this packet information!!all this packet information!!all this packet information!!

It just streams until it hits 1024 bytes, then repeats. Help!


So you think giving my program a 1 ms delay will correct this issue indefinitely? Hmm, thanks for the feedback. Is there some function that actually checks whether a socket has new incoming data ready? I read about the select() function at some point, but never really got into it.

If I follow the example you gave me, I should wait on a 1 ms timer (I already have a GetTickCount() timer built into my program to run a check-disconnects routine every minute or so) and run my main loop (except for this check-incoming function) in the meantime?

Thanks for the quick and timely reply, helps a lot to know I'm not alone out here.

Hey,

No problem. I had this problem once, and setting a delay actually helped.
My server used to spawn a thread for every incoming request and send data based on some authentication, using send().
When receiving, the first bytes are the actual size of the data to follow, so I know how long I should keep listening. Then in my receiving loop I had to add some delay until I got everything.
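
Roughly what I mean, as a sketch (recvAll/recvMessage and the 4-byte network-order length are just my naming here, and it assumes a blocking socket or that you've already checked the socket is ready):

#include <winsock2.h>
#include <string>

// keep calling recv() until exactly len bytes have arrived (0/error means give up)
bool recvAll(SOCKET s, char *buf, int len)
{
    int got = 0;
    while (got < len) {
        int n = recv(s, buf + got, len - got, 0);
        if (n <= 0)
            return false;              // 0 = connection closed, SOCKET_ERROR = error
        got += n;
    }
    return true;
}

// read one length-prefixed message: 4-byte length in network order, then that many bytes
bool recvMessage(SOCKET s, std::string &msg)
{
    u_long netLen = 0;
    if (!recvAll(s, reinterpret_cast<char*>(&netLen), sizeof(netLen)))
        return false;

    int len = static_cast<int>(ntohl(netLen));
    msg.assign(len, '\0');
    return len == 0 || recvAll(s, &msg[0], len);
}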

One more thing: in your cout you need to flush, to make sure the string is displayed right away.

Alright. I find it kind of unreliable to base my receiving loop on a 1 ms timer; I figured Winsock would have something more solid. I'll try it, thanks!

Also, do you know anything about using select() for this kind of thing?

Read beej very carefully
http://beej.us/guide/bgnet/output/htmlsingle/bgnet.html
It has an example of how to use select.
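
The basic shape with a zero timeout is roughly this (untested sketch, readable() is a made-up helper, adapt the names to your own code):

#include <winsock2.h>

// poll a single socket for readability without blocking (zero timeout)
bool readable(SOCKET s)
{
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(s, &readSet);

    timeval tv;
    tv.tv_sec = 0;
    tv.tv_usec = 0;                       // zero timeout: return immediately

    // Winsock ignores the first parameter to select()
    int n = select(0, &readSet, 0, 0, &tv);
    return n > 0 && FD_ISSET(s, &readSet);
}

The FD_ISSET() check after select() returns is what actually tells you the socket is ready.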

Some points to note
> send(m_socket, strSend.c_str(), sz, 0);
Both send() and recv() can fragment the message. There is no guarantee that if you ask to send 20 bytes that it will happen in a single call. Even if the send() is a single call, the recv() can still be fragmented.
Making the whole thing non-blocking just makes this much more likely.
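
If you need every byte to go out, you end up wrapping send() in a loop, roughly like this (sketch only, sendAll is a made-up helper; on a non-blocking socket you also have to handle WSAEWOULDBLOCK):

#include <winsock2.h>

// keep calling send() until the whole buffer has gone out (or an error occurs)
bool sendAll(SOCKET s, const char *buf, int len)
{
    int sent = 0;
    while (sent < len) {
        int n = send(s, buf + sent, len - sent, 0);
        if (n == SOCKET_ERROR)
            return false;              // may be WSAEWOULDBLOCK on a non-blocking socket
        sent += n;
    }
    return true;
}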

> So you think giving my program a 1 ms delay will correct this issue indefinitely?
Not at all.
Some points to ponder.
1. The sleep duration is a MINIMUM value only, so sleeping for 1 second and returning an hour later would be in spec.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dllproc/base/sleep.asp
"Suspends the execution of the current thread for at least the specified interval."

2. The OS time base is usually 10 or 20 ms. Very short sleep periods get rounded up to the base OS scheduling interval.
http://www.geisswerks.com/ryan/FAQS/timing.html
Next OS upgrade, you could be looking for a different answer.

3. The actual amount you sleep will vary according to system load at the time. I mean, if your timing becomes very finely balanced, then all sorts of weirdness could result, like
- my program runs fine during the day and crashes in the evening (sensitive to network load)
- my program is OK in debug, but crashes in release mode(*) (sensitive to code performance)
- my program works if I'm playing minesweeper (sensitive to system load)


(*) The much more likely scenario of course is a bug in the code.


I've tried using a manual timer delay with GetTickCount(), and I've also tried using select(). When I make a loop and only check incoming data every 1 ms (using GetTickCount()), I just get a big chunk of clumped-up data every 1 ms or whatever interval I set. select() isn't having any effect. Perhaps the information is getting clumped up when I send it? I suppose I'll test around with that now.

No luck.

Sending loop:

FD_SET setWrite;
FD_ZERO(&setWrite);
FD_SET(m_socket, &setWrite);
int n = m_socket + 1;
if (select(n, 0, &setWrite, 0, 0) != SOCKET_ERROR) {
    string strSend = "all this packet information!!";
    int sz = strSend.size();
    send(m_socket, strSend.c_str(), sz, 0);
}

Receiving Loop:

FD_SET setReader;
FD_ZERO(&setReader);
FD_SET(nClients[c].m_socket, &setReader);
int n = nClients[c].m_socket + 1;
if (select(n, &setReader, 0, 0, 0) != SOCKET_ERROR) {

    char recvbuff[1024];
    int bytesRecv = 0;

    // each socket
    bytesRecv = recv(nClients[c].m_socket, recvbuff, sizeof(recvbuff), 0);

    if (bytesRecv > 0) {
        // build the string from the byte count; recvbuff isn't null-terminated
        string output(recvbuff, bytesRecv);
        cout << output << endl;
    }

}

select() doesn't seem to have any effect. Do any of you happen to have working network code that fits into a loop like this and doesn't stack messages, that I could look at? Thanks for all the help.

Stacking is not something unexpected or erroneous in the socket API. TCP/IP only guarantees that the data will reach the destination in the order it was sent; it may arrive in one read or over several. The problem here is that you are using a 1024-byte buffer: if the network I/O buffer at the receiving end has enough data to fill it, it fills it with that data. The only way you can eliminate this is by using what you know about the properties of the data being sent. If you know the length, you can traverse the buffer using that length, or use a buffer of exactly the required length. But even then, if there isn't sufficient data yet, you may get a partially filled buffer in one read and the rest of the data in the next. If you don't know the length, then you will have to use something like a line-termination character when sending character data. This is the most reliable.
The best way to handle this problem is to use the 1024-byte buffer just for collecting data and to process that data before displaying it.
Something like coding another loop that finds the EOL character in the buffer, displays the data up to that point, and then searches for the next EOL...
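
In rough outline it could look like this (just a sketch; pumpClient and the per-client pending string are made-up names, and it assumes the sender appends '\n' to every message):

#include <winsock2.h>
#include <iostream>
#include <string>

// collect whatever recv() returns into a per-client string, then print complete lines
void pumpClient(SOCKET s, std::string &pending)
{
    char recvbuff[1024];
    int bytesRecv = recv(s, recvbuff, sizeof(recvbuff), 0);
    if (bytesRecv <= 0)
        return;                                  // nothing new, or closed/error

    pending.append(recvbuff, bytesRecv);         // stash the raw bytes

    // peel off one complete message per '\n'
    std::string::size_type eol;
    while ((eol = pending.find('\n')) != std::string::npos) {
        std::cout << pending.substr(0, eol) << std::endl;
        pending.erase(0, eol + 1);
    }
}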

That's not a bad idea at all: send the EOL termination character and completely ignore the fact that packets may stack. However, I used a 1 ms sleep on my client that was sending data to my server, and every message was separated. My guess is that my client was sending too fast for send(), and the receiving side was fine. I'm not sure what my limits are, but I may have to 'pad' my messages with a beginning and ending character to recognize the entirety of a message. Thanks for all the responses.

Hello,

I actually have the same problem. I try to send data every 10 ms (is that too fast?) and I try to receive the data, but I get the same effect you described at the beginning of the thread.

My question now is: could you tell us how you solved this problem, e.g. with a code snippet?

Thank you and regards

I didn't really solve the problem. I haven't found a good solution to this, although putting a slight delay on your sending function does seem to help; this is dependent on the speed of the program, though.

Can someone give me an example of a working Winsock program that can send and receive data? I'm on Microsoft Visual Studio 2005 and it says 1>.\Menu.cpp(126) : error C2065: 'm_socket' : undeclared identifier when I try to send data like this:

FD_SET setWrite;
FD_ZERO(&setWrite);
FD_SET(m_socket, &setWrite);
int n = m_socket + 1;
if (select(n, 0, &setWrite, 0, 0) != SOCKET_ERROR) {
    string strSend = "all this packet information!!";
    int sz = strSend.size();
    send(m_socket, strSend.c_str(), sz, 0);
}

> If you don't know the length, then you will have to use something like a line-termination character when sending character data. This is the most reliable.
> The best way to handle this problem is to use the 1024-byte buffer just for collecting data and to process that data before displaying it.

Can you tell me the exact code to access and read the data in that buffer, i.e. the TCP buffer? You can also mail me if you want to.
My email is <snip>
