edit
I just realized that I'm on a single-core machine at the moment (stupid work machine), so theoretically the server isn't actually executing on the CPU 50% of the time (while the client is running), so packets would be lost, right? Or is there something else I'm missing altogether? I'll post back once I've had a chance to run this code on my quad-core at home.

/edit

Hello everybody.

I'm using Datagrams to create a UDP client/server relationship. This is my first real networking code, with the exception of a little bit of TCP a while back. This server eventually needs to support about 20 players at a tick rate of about 66, to integrate with a 3D application I'm making with LWJGL.

Currently, when I blast my server with 500 packets from localhost in a while (true) loop, it loses about 50% of them. My source code for the thread that's responsible for listening for all incoming packets is below. I believe the issue stems from the fact that I close the socket after each received packet and create a brand new one. However, if I don't do this, the server constantly thinks there's a new packet and dies trying to read a null packet.

I'm pretty sure my fundamental understanding may be incorrect. I followed the tutorial here http://systembash.com/content/a-simple-java-udp-server-and-udp-client/ but obviously only for the general information, not the exact code or anything.

Could the 50% packet loss just be down to the fact that I'm sending 500 packets from a while (true) loop with no delay, and it's unreasonable to expect a server to keep up with that? Thanks for the advice, and sorry if this is really noob; I'm still trying to learn Java networking.

Would starting multiple listeners help? (Theoretically I should have as many as I have cores, right? But could two listeners both get the same packet at the same time?)
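For reference, the client side of my test is basically just this (a rough sketch, not my exact code; it assumes the server is listening on localhost port 9876 like the code below):

package serverCode;

import java.net.*;

public class TestClient
{
    public static void main(String[] args) throws Exception
    {
        // one socket for the whole test; the OS picks a free local port
        DatagramSocket clientSocket = new DatagramSocket();
        InetAddress serverAddress = InetAddress.getByName("localhost");

        // blast 500 packets at the server with no delay between sends
        for (int i = 0; i < 500; i++)
        {
            byte[] sendData = ("test packet " + i).getBytes();
            DatagramPacket sendPacket = new DatagramPacket(sendData, sendData.length, serverAddress, 9876);
            clientSocket.send(sendPacket);
        }
        clientSocket.close();
    }
}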

package serverCode;

import java.io.*;
import java.net.*;

public class ServerListenThread extends Thread
{
    public void run()
    {
        int testrecieved = 0;
        System.out.println("ServerListenThread is now running");

        while (true)
        {

            try
            {
                System.out.println("Server waiting for a packet to come in");
                DatagramSocket serverSocket = new DatagramSocket(9876);
                byte[] receiveData = new byte[1024];
                DatagramPacket receivePacket = new DatagramPacket(receiveData, receiveData.length);
                serverSocket.receive(receivePacket);
                String recievedPacket = new String(receivePacket.getData(), 0, receivePacket.getLength());
                testrecieved++;
                System.out.println("Total Packets Received so far: " + testrecieved);
                System.out.println("Server packet received: " + recievedPacket + " .. from: " + receivePacket.getAddress());
                serverSocket.close();
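                // this close (plus the new DatagramSocket at the top of the loop)
                // is the part I suspect is causing the packet loss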
                //here we would send that string straight over to the threaded parser, don't want to waste any time
                //even then with only 1 listener...
            }
            catch (SocketException e)
            {
                e.printStackTrace();
            }
            catch (IOException e)
            {
                e.printStackTrace();
            }

        }
    }
}

Edited by Fedhell:

I do have another question while I have you all here, though. My application (both client and server are set up like this) currently has one thread for receiving UDP packets, one thread for processing the packets (so the listener thread can go straight back to listening), and when I need to send packets, I just send them from a non-threaded method call. Is that the best way of doing this? Will I have concurrency issues by using threads for processing packets if two threads try to modify the same data? How would you structure the application with concurrency issues in mind?
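For example, would handing the packets from the listener to the parser through something like a BlockingQueue (instead of what I do now) be the right way to keep two threads off the same data? A rough sketch of what I mean, with made-up names:

package serverCode;

import java.util.concurrent.*;

public class PacketQueue
{
    // the listener thread puts received strings in; the parser thread takes them out.
    // LinkedBlockingQueue does its own locking internally, so the two threads
    // never modify shared data directly.
    private final BlockingQueue<String> packets = new LinkedBlockingQueue<String>();

    // called from the listener thread right after a packet arrives
    public void enqueue(String packet) throws InterruptedException
    {
        packets.put(packet);
    }

    // the parser thread loops on this; take() blocks until a packet is available
    public String nextPacket() throws InterruptedException
    {
        return packets.take();
    }
}

The listener would call enqueue() and the parser would loop on nextPacket(), so neither thread ever touches the other's data directly.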

I believe that the issue stems from the fact that i close the socket after each received packet, and create a brand new one.

Surely you must be right about that. I can't imagine a good reason for closing the socket after each packet. You're practically asking to lose packets and apparently you are getting what you ask for.

However if i dont do this, the server constantly thinks theres a new packet, and dies trying to read a null packet.

What exactly do you mean by this? Is this just another way of saying that receive blocks the loop from continuing until a packet arrives? But that is exactly what you should want it to do, and closing and opening the socket repeatedly wouldn't change that.

Bguild, you don't even realize how awesome you are. In trying to explain why the server constantly died, I revisited the code and found the issue. It definitely was stupid to close the socket after every packet. The server is running much more smoothly now. Marked as solved, and the correct solution, with 0 packet loss on the server, is below in case anybody Googles for it.

package serverCode;

import java.io.*;
import java.net.*;

public class ServerListenThread extends Thread
{
    public void run()
    {
        int testrecieved = 0;

        System.out.println("ServerListenThread is now running");
        try
        {
            DatagramSocket serverSocket = new DatagramSocket(9876);
            byte[] receiveData = new byte[1024];
            DatagramPacket receivePacket = new DatagramPacket(receiveData, receiveData.length);
            while (true)
            {

                System.out.println("Server waiting for a packet to come in");


                serverSocket.receive(receivePacket);
                String recievedPacket = new String(receivePacket.getData());
                testrecieved++;
                System.out.println("Total Packets Recieved so far: " + testrecieved);

                // hand the packet straight to a parser thread so this loop
                // can get right back to receive()
                ServerPacketParser parse = new ServerPacketParser(recievedPacket);
                parse.start();
            }

        }
        catch (SocketException e)
        {
            e.printStackTrace();
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }

    }
}
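One more note in case anybody still sees drops under heavy bursts: I haven't measured this myself, but you can also ask the OS for a bigger socket receive buffer right after creating the socket in the try block, so more datagrams can queue up while the parser threads are busy. The OS is free to clamp whatever size you request:

            DatagramSocket serverSocket = new DatagramSocket(9876);
            // ask for a ~1 MB OS-level receive buffer (the OS may clamp this value)
            serverSocket.setReceiveBufferSize(1024 * 1024);
            System.out.println("Receive buffer is now: " + serverSocket.getReceiveBufferSize());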