When I have a simple server-client program, how do I get the client to send the server a time request, have the server respond, and have the client display the offset and the delay? The communication should be in NTP. I think that means there are 4 timestamps, and 2 stay on the server and the other 2 will be sent to the client. The calculations don't matter right now, I am just curious how I would send and receive those requests.


If you've already established the connection, you can just use the existing socket to send messages between the two machines. What that means, specifically, depends on the client/server setup.

If you have a command-oriented setup where one machine issues commands to the other (or there is some other well-known set of packet structures), you just add a new command for passing the time.

If you only have a raw connection then you need to develop some sort of structure on top of that connection to allow each machine to specifically construct (and recognize) these special packets. This usually follows the definition of some struct of your design. For example, you might serialize the following structure:

struct two_time_t {
   long sec1, usec1, sec2, usec2;   /* two timestamps: seconds + microseconds each */
};

And add a header that declares the following data as that type of data.

There are many variations on that theme - look around and pick whichever best suits your needs.
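
To make the serialization part concrete, here's a rough sketch of packing that struct into a byte buffer in network byte order, with a one-byte message-type header that I made up (treating each field as 32 bits on the wire):

#include <arpa/inet.h>   /* htonl */
#include <string.h>      /* memcpy */
#include <stdint.h>
#include <stddef.h>

struct two_time_t {      /* same struct as above */
   long sec1, usec1, sec2, usec2;
};

#define MSG_TWO_TIME 0x01   /* made-up message-type header byte */

/* Pack a two_time_t into buf: 1 header byte + 4 x 32-bit fields,
   all in network byte order. buf must hold at least 17 bytes.
   Returns the number of bytes written. */
size_t pack_two_time(unsigned char *buf, const struct two_time_t *t)
{
    uint32_t fields[4] = {
        htonl((uint32_t)t->sec1), htonl((uint32_t)t->usec1),
        htonl((uint32_t)t->sec2), htonl((uint32_t)t->usec2)
    };
    buf[0] = MSG_TWO_TIME;              /* header: tells the peer what follows */
    memcpy(buf + 1, fields, sizeof fields);
    return 1 + sizeof fields;
}

The receiving side checks the header byte and does the reverse with ntohl().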

There's a connection, and there's also communication between the server and the client. It packs the data and sends it.
So you mean just add a new command? I don't see how this would get the offset and the delay. I'm still clueless, sadly.

Sorry, I can't edit the post.

From what I understand, I could use clock_gettime() to get the timestamps for NTP and then use them to calculate the offset and delay?

I was addressing your final comment:

The calculations don't matter right now, I am just curious how I would send and receive those requests.

If what you want is how to get fine-grain time resolution on your machine - that is platform (and sometimes hardware) specific.

There are kernel build options (TSclock / RADclock) and hardware options (IEEE 1588) that can get you most of the way there for what I think you are looking for. Or are these incompatible with what you are trying to do?

Thanks for your response.

So I have a server and a client. The client sends the server a request, the server responds, and the client displays the offset and the delay.

Since it should be NTP, you need to have 4 timestamps: 2 taken on the server and 2 taken on the client. I guess the client just calculates it according to these formulas:

http://en.wikipedia.org/wiki/Network_Time_Protocol

So the offset would be ((t1 - t0) + (t2 - t3)) / 2

The response from the server should, from what I understand, contain the timestamps for t2 and t3 in seconds and nanoseconds.

So I have to have a function on the server that gets me t2 and t3. The question is which function does that and gives me these two timestamps.

I guess that should do it:

struct timespec {
        time_t   tv_sec;        /* seconds */
        long     tv_nsec;       /* nanoseconds */
};

So these would basically be my t2 and t3, which I then send to the client. Then the client needs t0 and t1. I don't know how I would do that.
The client needs to send a request, but I actually don't know what kind of request. Just a request to get the values? But that wouldn't make much sense for calculating the offset.
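
Maybe something like this on the server side, just a rough sketch to show where I'd take the two timestamps (I'm assuming clock_gettime() is the right call; the names and the reply format are made up, and I'd still have to pack the timespecs properly instead of copying them raw):

#include <time.h>
#include <string.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Rough sketch: answer one request with the two server-side timestamps.
   sockfd is an already-bound UDP socket. */
void answer_time_request(int sockfd)
{
    struct timespec t_recv, t_send;          /* my "t2" and "t3" */
    unsigned char request[64];
    unsigned char reply[2 * sizeof(struct timespec)];
    struct sockaddr_in client_addr;
    socklen_t client_len = sizeof(client_addr);

    if (recvfrom(sockfd, request, sizeof(request), 0,
                 (struct sockaddr *)&client_addr, &client_len) < 0)
        return;
    clock_gettime(CLOCK_REALTIME, &t_recv);  /* stamp when the request arrived */

    memcpy(reply, &t_recv, sizeof(t_recv));  /* first timestamp into the reply */

    clock_gettime(CLOCK_REALTIME, &t_send);  /* stamp right before sending */
    memcpy(reply + sizeof(t_recv), &t_send, sizeof(t_send));

    sendto(sockfd, reply, sizeof(reply), 0,
           (struct sockaddr *)&client_addr, client_len);
}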

First, if you are looking to do nanosecond time synchronization you will need a hardware solution. An expensive one at that.

Putting that aside, I'm not sure what you are confused about in the algorithm described on the Wikipedia page. In pseudocode it looks something like:

[client] makeRequest (t0 <- getTime)
   ...
[server] getRequest (t1 <- getTime)
[server] sendResponse (t2 <- getTime)
   ...
[client] getResponse (t3 <- getTime)
[client] roundTripTime <- (t3 - t0) - (t2 - t1)
[client] offset <- ((t1 - t0) + (t2 - t3))/2
[client] Use offset to adjust clock to match server clock

The offset calculation assumes a symmetric link between client and server. It is insufficient to just calculate this one time - you need to sample many times to get a more accurate value.

The t0 <- getTime could be replaced in code with something like:

struct timeval t0;
gettimeofday (&t0, NULL);

/* OR */

struct timespec t0;
clock_gettime (CLOCK_REALTIME, &t0);
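
Once the client has all four timestamps, the rest is just the two formulas above. A minimal sketch of that arithmetic, assuming struct timespec values and converting to double seconds (fine for illustration, though doubles lose some precision at the nanosecond level):

#include <time.h>

/* Convert a timespec to seconds as a double. */
static double ts_to_sec(struct timespec ts)
{
    return (double)ts.tv_sec + (double)ts.tv_nsec / 1e9;
}

/* t0, t3 taken on the client; t1, t2 taken on the server. */
static void ntp_math(struct timespec t0, struct timespec t1,
                     struct timespec t2, struct timespec t3,
                     double *delay, double *offset)
{
    double d0 = ts_to_sec(t0), d1 = ts_to_sec(t1);
    double d2 = ts_to_sec(t2), d3 = ts_to_sec(t3);

    *delay  = (d3 - d0) - (d2 - d1);          /* round-trip time */
    *offset = ((d1 - d0) + (d2 - d3)) / 2.0;  /* estimated clock offset */
}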

Here is my attempt, based on a client for calculating the greatest common divisor:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <time.h>
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>

void packData(unsigned char *buffer, unsigned int a, unsigned int b);
void unpackData(const unsigned char *buffer2, unsigned int *result);

int main(int argc, char *argv[])
{
    int sockfd;
    struct sockaddr_in their_addr; // connector's address information
    struct hostent *he;
    int numbytes;
    int serverPort;
    int a = 0;
    int b = 0;
    struct timespec time_a = {0}, time_b = {0};   /* t0 and t3: request send / reply receive times */

    printf("UDP client example\n\n");

    if (argc != 5) {
        fprintf(stderr,"Usage: udpClient serverName serverPort int1 int2\n");
        exit(1);
    }

    serverPort = atoi(argv[2]);
    a = atoi(argv[3]);
    b = atoi(argv[4]); 

    //Resolve hostname to IP address
    if ((he= gethostbyname(argv[1])) == NULL) {  // get the host info
        herror("gethostbyname");
        exit(1);
    }

    //Create Socket


    if ((sockfd= socket(PF_INET, SOCK_DGRAM, 0)) == -1) {
        fputs("Unable to create socket.\n", stderr);
        return EXIT_FAILURE;
    }


    //setup transport address
    their_addr.sin_family = AF_INET;
    their_addr.sin_port = htons(serverPort);
    their_addr.sin_addr = *((struct in_addr*)he->h_addr);
    memset(their_addr.sin_zero, 0, sizeof(their_addr.sin_zero));

    //Pack Data

    unsigned char buffer[4];
    packData(buffer, a, b);
    unsigned char buffer2[4];

    //Send Data
    int bytes_send;
    int bytes_recv;
    unsigned int result;
    socklen_t their_addr_length = sizeof(their_addr);
    clock_gettime(CLOCK_REALTIME, &time_a);   /* t0: taken just before the request goes out */
    if((bytes_send= sendto(sockfd, buffer, sizeof(buffer), 0, (struct sockaddr*)&their_addr, sizeof(their_addr))) == -1) {
        fputs("Unable to send data.\n", stderr);
        return EXIT_FAILURE;
    }
    else {
        printf("Send UDP Data\n"
               "Bytes: %u\n"
               "First Number: %hu\n"
               "Second Number: %hu\n\n",
               bytes_send, a, b);
    if (bytes_recv = recvfrom(sockfd, buffer2, sizeof(buffer2), 0, (struct sockaddr*)&their_addr, &their_addr_length)) {
        result = unpackData(buffer2, &result);  
            printf("Result: %d\n", result);
        }
    }


    /* Round-trip delay (t3 - t0) in seconds. The real offset also needs the
       server's two timestamps out of the reply. */
    double delay = (time_b.tv_sec - time_a.tv_sec)
                 + (time_b.tv_nsec - time_a.tv_nsec) / 1e9;
    printf("Round-trip delay: %f seconds\n", delay);

    //Close Socket

    if (close(sockfd) != 0) {
        fputs("Unable to close socket.\n", stderr);
        return EXIT_FAILURE;
    }

    return 0;
}

void packData(unsigned char *buffer, unsigned int a, unsigned int b) {
    /* ******************************************************************
      pack the two 16-bit values into the buffer, high byte first
    ******************************************************************* */
    buffer[0] = (0xff00 & a) >> 8;
    buffer[1] = 0x00ff & a;
    buffer[2] = (0xff00 & b) >> 8;
    buffer[3] = 0x00ff & b;
}

void unpackData(const unsigned char *buffer2, unsigned int *result) {
    /* unpack the 16-bit result from the first two bytes of the buffer */
    *result = (buffer2[0] << 8) | buffer2[1];
}

That's all I can come up with right now. If I figure out how exactly to do the calculation and change that, would this be ok for the client?
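
I guess for unpacking the two server timestamps out of the reply I'd also need something along these lines (just a guess at the reply format - four 32-bit fields: sec1, nsec1, sec2, nsec2, in network byte order):

#include <arpa/inet.h>   /* ntohl */
#include <string.h>      /* memcpy */
#include <stdint.h>
#include <time.h>

/* Guessing the reply layout: sec1, nsec1, sec2, nsec2 as 32-bit fields. */
void unpackTimestamps(const unsigned char *buf,
                      struct timespec *t1, struct timespec *t2)
{
    uint32_t fields[4];
    memcpy(fields, buf, sizeof fields);
    t1->tv_sec  = (time_t)ntohl(fields[0]);
    t1->tv_nsec = (long)ntohl(fields[1]);
    t2->tv_sec  = (time_t)ntohl(fields[2]);
    t2->tv_nsec = (long)ntohl(fields[3]);
}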

P.S. I can't make parts of the code bold, so I marked the important timing-related parts with comments instead.
