Hi there,

What I am trying to do is implement my own sequence number program.

I have a sequence number, i.e. 0001, and then I try to convert this to a binary string in byte format of 4 bytes: 00 01 00 00

And then I am appending this in the middle of a byte payload that I have created, which is just a group of numbers, i.e. choose 12345678, which gets converted to something like 32 33 34 35 etc. in byte format.

I then want to extract the sequence number portion, which is the last four bytes of the 8-byte payload. This is where I am stuck and not sure how to progress.
I.e. my payload looks like 32 33 34 35 00 01 00 00
and I want to extract 00 01 00 00 from this and then convert it back to my original integer.

I am trying to use substr, but I am not just trying to extract data from an ordinary string, rather from a string of bytes. For now I am just extracting the sequence number part of the last calculated sequence number, to try and get it working.

Here is my code, any help would be great, Many thanks!

#include <stdio.h>
#include <iostream>
#include <string>
using std::string;

#include "hex.h"

unsigned char temp[4];
unsigned int seq;
bool flag = true;
unsigned int currentseq;
unsigned int previousseq = 0;

int testing = 2;

int main()
{
    byte payload[8] = {"1234567"};

    for (int i = 0; i < 10; i++)
    {
        if (flag == true)
        {
            seq = 0000;
            flag = false;
        }

        seq = seq++;

        printf("Sender sequence number = %0004x", seq);
        printf("\n");

        // BYTE CONVERSION
        for (int i = 0; i < 4; i++)
            temp[i] = '\0';
        memcpy(temp, &seq, 4);
        printf("TEMP = %02x %02x %02x %02x \n", temp[0], temp[1], temp[2], temp[3]);

        payload[4] = temp[0];
        payload[5] = temp[1];
        payload[6] = temp[2];
        payload[7] = temp[3];
        printf("payload[4] =  %02x \n", temp[0]);
        printf("\n");
    }

    // this is where my problems are; am trying to extract my sequence number in byte format from the binary string, but not sure how
    string str = payload;
    const char *p = str.substr(2, 4).c_str();
    std::cout << "\n" << p << "\n";
}


C++ doesn't have a byte type. Do you mean unsigned char? And hex.h is something you wrote yourself, correct? So you are trying to extract the "0001" from the string using substr, but that doesn't work. But why doesn't it work? Is it that you need the 0, 0, 0, and 1 stored as integers and you are getting the ASCII values for 0, 0, 0, and 1 instead?

Hi, thanks for the reply,

Yeah I think I am probably trying to extract from a string of unsigned chars.

Basically all the data from the packets, such as payload, header lengths etc., is appended to the payload in one long binary string. The specific data is then extracted using substr.

I want to use a 4 byte (32 bit) sequence number. When I convert my sequence number, of say 1, to byte format (unsigned char etc?) I use the following code:

unsigned char temp[4];
for (int i = 0; i < 4; i++)
    temp[i] = '\0';
memcpy(temp, &seq, 4);

which gives me 01000000 for number 1 which is 4 bytes (they give 2 characters per byte for some reason)...etc...

I then append this to my payload and then want to extract the sequence number from the correct place in this string, and then convert it back to an integer for comparison purposes, i.e. this would take place at a receiver.

But having problems using substr for this. Because the only examples of substr I have seen extract from a string str = "hello" etc...and not from a binary string of unsigned chars.

Thanks.
P.S. Yeah, hex.h was put in by myself, but I am not sure it is being used in this code; it was just copied from another header.

An alternative method I was using to convert the integer into a binary string was this
(I think the byte type comes from a library somewhere I have):

i = 4; // seq num length
byte *p;
p = new byte;
memcpy(p, &seq, i);

Maybe it is during this initial conversion that I am doing something wrong; rather than just using &seq in memcpy maybe I should be doing something else?

>which gives me 01000000 for number 1 which is 4 bytes (they give 2 characters per byte for some reason)...etc...

Who are they? Is this a function somewhere? This looks like the big-endian (or is it little endian?) representation of 1 in memory, where the byte order is reversed. The fact that you are given 2 characters per byte (what "gives" you this, by the way?) suggests to me that there is some function that you pass a number from 0 to 255, say 47, and you are returned the following two character string:

2F

which is the string representation of the hexadecimal representation of 47.

This earlier quote:

>And then I am appending this in the middle of a byte payload that I have created, which is just a group of numbers. ie choose 12345678 which got converted to something like 32 33 34 35 etc for byte format..

suggests that 1 is being converted to "32", 2 is converted to "33", 3 is converted to "34", etc.

The ASCII value of '1' is 0x31, the ASCII value of '2' is 0x32, the ASCII value of '3' is 0x33, so it appears like you are dealing with ASCII values but are one off.

>I have a sequence number i.e 0001 and then I try to convert this to a binary string in byte format of 4 bytes: 00 01 00 00

Why is this not 00 00 00 01?


I'm not sure we are using the same definition of "binary string". In your definition, does the number 1 get stored as one byte of 0x1, as one byte of 0x31 (ASCII representation of '1'), or as two bytes (one byte of 0x30, the ASCII representation of '0', followed by one byte of 0x31, the ASCII representation of '1')? Do you ever have bytes that are not in the range of 0x30 through 0x39 or 0x41 through 0x46 (ASCII representations of '0' through '9' and 'A' through 'F')?


I have been given some code for new packet header processing, and all the data is appended to the payload as one long string. I am trying to add a sequence number to it, and am just testing it out in a single program first.

These are some methods that are used for conversion to the format representing the data as strings of bytes (or whatever the correct definition is?)

i = this->_payload.length();
byte *p;
p = new byte;
memcpy(p, this->_payload.c_str(), i);

int ULE_SID = 4444;
int uli_netval = htonl(ULE_SID); // convert host to network byte order for integers
unsigned char temp[4];
for (int i = 0; i < 4; i++)
    temp[i] = '\0';
memcpy(temp, &uli_netval, 4);

I have tried adapting these methods to convert my seq number into a binary string of 4 bytes.

For some reason, for say seq num 0001, I always get 01 00 00 00 from those methods.

I then want to append my 4-byte seq num to a payload (which is a string of data, i.e. header length in the first 4 bytes, seq num in the next 4 bytes). I have just created an imaginary one of 12345678:
byte payload[8] = {"1234567"}; // but maybe that is the wrong way to go about it?

I don't really know what is going on with the bytes and how they are represented etc., so sorry I am not much help with providing the necessary info. Will just have to go back to my supervisor at uni for further help and to find out exactly how the payload data is stored. I know it's a string, and the seq num should be represented over 4 bytes, so I guess it is stored over 4 bytes as unsigned chars.

Thanks for the responses, will check out that link in a bit.

Sorry for not making my questions clear.

I now know it is a string of chars that I want to append my 4 byte sequence number to.

I then want to extract the sequence number from that string of chars.

At the moment am just stuck on how to extract from a string of chars rather than a normal string. Any example code will be great. Thanks.


Give this a try. Basically you have four integers from 0 to 255 (i.e. bytes) that are stored as unsigned chars (C++'s equivalent of a byte). I have it so the unsigned chars are converted to chars, then added to your string. The last four characters are then taken off of the back of the payload string, converted back from chars to unsigned chars, then converted to integers so they can be displayed. The final cout loop is there to prove that they are truly extracted when they are later pulled out using substr. The typedef at the top defines byte as unsigned char. The CHAR_MAX/CHAR_MIN check at the start of main makes sure that a char is represented as one byte by the compiler. If it's not, this won't work. I think (not 100% positive) that some compilers set aside more than one byte for a char.

typedef unsigned char byte;
#include <iostream>
#include <string>   // std::string
#include <climits>  // CHAR_MAX, CHAR_MIN
#include <cstdlib>  // exit
using namespace std;

int main ()
{
    if (CHAR_MAX - CHAR_MIN != 255)
    {
        cout << "char does not take one byte of storage.  Exiting." << endl;
        exit(1);
    }
    
    string payload = "Hello world";
    byte sequence[4];
    sequence[0] = 134;
    sequence[1] = 6;
    sequence[2] = 200;
    sequence[3] = 234;
    
    for (int i = 0; i < 4; i++)
    {
        char aChar = sequence[i] + CHAR_MIN;
        payload = payload + aChar;
        sequence[i] = 0;  // zero out sequence array
    }
    
    string charSequence = payload.substr(payload.length() - 4, 4);
    
    for (int i = 0; i < 4; i++)
    {
        sequence[i] = charSequence[i] - CHAR_MIN;
        int number = sequence[i];
        cout << "Sequence byte " << i << " of 4 = " << number << endl;
    }
    
    return 0;
}

>If it's not, this won't work. I think (not 100% positive) that some compilers set aside more than one byte for a char.

I think you're right on that one. I read something interesting about why chars have different sizes and that some operating systems (if not compilers) will interpret chars differently.

>I think you're right on that one.
It depends on what you mean by "char". If you're talking about the char data type, it's always equivalent to one byte in C++, for whatever the definition of a byte is. I think Vernon is assuming that a byte always equates to an 8-bit entity, which is not true.

On the other hand, if you're talking about a logical character, such as with Unicode, then a single character could very well be represented by more than one octet[1].

[1] Which is the correct term for an 8-bit entity.

>I think you're right on that one.
It depends on what you mean by "char". If you're talking about the char data type, it's always equivalent to one byte in C++, for whatever the definition of a byte is. I think Vernon is assuming that a byte always equates to an 8-bit entity, which is not true.

On the other hand, if you're talking about a logical character, such as with Unicode, then a single character could very well be represented by more than one octet[1].

[1] Which is the correct term for an 8-bit entity.

Yeah, I was referring to the difference between Unicode and ANSI chars (wchar_t and char).

What about the datatype __int8, is that guaranteed to be 8-bit? Because it seems to act exactly like a char, and it would be a bit misleading otherwise.

>Yeah I was referring to the difference between Unicode and ANSI chars ( wchar_t and char).
For the record, wchar_t doesn't imply Unicode. It's a generic wide character type that can be used by any number of character sets and encoding schemes that meet the requirements. And technically, char is only required to handle the basic character set which essentially consists of the lower half of ASCII (but not requiring the same values). Anything else to fill up the remaining characters is implementation-dependent.

In practice, correct string and character handling beyond ASCII is terribly complex because there are so many different sets of rules that need to be taken into account.

>What about the datatype __int8, is that guaranteed to be 8-bit?
That really depends on your compiler, as __int8 isn't a standard type.


Yes that does work! Thanks so much for that, that is exactly what I was trying to do!

I have changed the code slightly to start adapting it for my use (see below). I have set the sequence number to 8 as a test, and the output reads:

Sequence byte 0 of 4 = 8
Sequence byte 1 of 4 = 0
Sequence byte 2 of 4 = 0
Sequence byte 3 of 4 = 0

What I want to do is read that entire 4-byte sequence (i.e. 8 0 0 0) and then convert that back to an integer (8) again.

How would I go about that? Thanks again.

typedef unsigned char byte;
#include <iostream>
using namespace std;

int main ()
{
    if (CHAR_MAX - CHAR_MIN != 255)
    {
        cout << "char does not take one byte of storage. Exiting." << endl;
        exit(1);
    }

    string payload = "Hello world";
    byte sequence[4];

    unsigned int seq = 8;

    for (int i = 0; i < 4; i++)
        sequence[i] = '\0';
    memcpy(sequence, &seq, 4);

    for (int i = 0; i < 4; i++)
    {
        char aChar = sequence[i] + CHAR_MIN;
        payload = payload + aChar;
        sequence[i] = 0; // zero out sequence array
    }

    string charSequence = payload.substr(payload.length() - 4, 4);

    cout << "\t NUMBER = " << g;
    cout << "\n";

    for (int i = 0; i < 4; i++)
    {
        sequence[i] = charSequence[i] - CHAR_MIN;
        int number = sequence[i];
        cout << "Sequence byte " << i << " of 4 = " << number << endl;
    }

    return 0;
}


Given that you are storing your least significant byte as your left-most byte, the following change could work:

typedef unsigned char byte;
#include <iostream>
#include <cmath>
#include <string>
#include <climits>  // CHAR_MAX, CHAR_MIN
#include <cstdlib>  // exit

using namespace std;

int main ()
{
    if (CHAR_MAX - CHAR_MIN != 255)
    {
        cout << "char does not take one byte of storage.  Exiting." << endl;
        exit(1);
    }
    
    string payload = "Hello world";
    byte sequence[4];
    sequence[0] = 8;
    sequence[1] = 0;
    sequence[2] = 0;
    sequence[3] = 0;
    
    for (int i = 0; i < 4; i++)
    {
        char aChar = sequence[i] + CHAR_MIN;
        payload = payload + aChar;
        sequence[i] = 0;  // zero out sequence array
    }
    
    string charSequence = payload.substr(payload.length() - 4, 4);

    int sequenceNumber = 0;    
    for (int i = 0; i < 4; i++)
    {
        sequence[i] = charSequence[i] - CHAR_MIN;
        int number = sequence[i];
        sequenceNumber = sequenceNumber + number * pow(256.0, i);
        cout << "Sequence byte " << i << " of 4 = " << number << endl;
    }
    
    cout << "sequenceNumber = " << sequenceNumber << endl;
      
    return 0;
}

What is the range of sequence numbers? If they can go above 2^31 - 1, this could overflow, since 2^31 - 1 is the highest a signed int can go on a 32-bit system. You may want to change it to an unsigned int to go higher. Basically this code just calculates a number one base-256 digit at a time from the individual digits (sequence). Be sure you are consistent with how you want to store your sequence bytes (i.e. whether sequence[0] is the least significant byte or the most significant byte). You'll have to change this code if the significance order of the sequence bytes changes.

Is it possible to somehow use this line to obtain the sequence number:

int g = *(reinterpret_cast<int*>(&sequence));

I put the above after defining:

sequence[0] = 8;
sequence[1] = 0;
sequence[2] = 0;
sequence[3] = 0;

and it printed the correct integer of 8.

So is it possible to perform a similar command at the other end?


I'm not sure how large my seq nums will go up to yet, but it will be fine for now for the initial demonstration purposes of just testing the concepts.

I got this error when trying to use the pow command:

vernon.cpp:84: error: no matching function for call to ‘pow(int, int, int&)’
/usr/include/bits/mathcalls.h:154: note: candidates are: double pow(double, double)
/usr/include/c++/4.2/cmath:373: note: long double std::pow(long double, int)
/usr/include/c++/4.2/cmath:369: note: float std::pow(float, int)
/usr/include/c++/4.2/cmath:365: note: double std::pow(double, int)
/usr/include/c++/4.2/cmath:361: note: long double std::pow(long double, long double)
/usr/include/c++/4.2/cmath:357: note: float std::pow(float, float)


This seems to work:

int g = *(reinterpret_cast<int*>(&sequence));

but it may well be susceptible to problems on a little endian versus big endian machine. Not positive on this, but if it was me, I wouldn't risk it without researching this a little further to make sure it will work on all architectures. You could use the htonl and ntohl commands possibly to confirm endianness and write some code from there to guarantee portability. I think I saw those commands in your code earlier. Are you familiar with them and the concept of little versus big endian? I don't know for sure if that would be necessary, but better safe than sorry.

Regarding this command:

pow(int, int, int&)

I don't know the code you used, but my code uses this:

pow(256.0, i)

which is of the form:

pow(double, int)

not

pow(int, int, int&)

Yeah, that command does work after all, did not realise that.

I don't know anything about endianness etc., so I probably should look into that, as some of my code will eventually be ported onto a different machine.

I have got my seq number emulation program doing exactly what I want now, so thank you for your help with it.

The one last thing I would like to do is just print out the full payload of the "string + sequence number", purely for demonstration and visibility purposes.

So I can get something like "helloworld08000000" or whatever it will look like, just so it can be seen where the seq num is extracted from.

Thanks again.


I'd write a function that takes a byte/unsigned char and returns a two-character string, where the two characters represent your two hexadecimal digits:

string HexRepresentation(byte aByte)
// takes a byte (0 to 255) and returns a two digit string
{
     // code
}

Call it from here:

for (int i = 0; i < 4; i++)  
{  
      sequence[i] = charSequence[i] - CHAR_MIN;  
      cout << HexRepresentation(sequence[i]);
}

Hi again!

I am now at the stage of adapting my code to the code used in this system in the labs.

In the labs the payload is a string of unsigned chars, so I don't think it likes the following line:

string charSequence = payload.substr(payload.length() - 24, 4);
(the position of the sequence number in the payload changed for use with the new code)

It comes up with conversion errors from unsigned char to char etc.

I am now trying to rewrite the code so I no longer need the charSequence string and can just directly use substr to put the data into an unsigned char array.

I tried this:

for (int i = 0; i < 4; i++)
  sequence[i] = payload.substr(payload.length() - i, 1);

and I get this error:

seqnum.cpp: In function ‘int main()’:
seqnum.cpp:127: error: cannot convert ‘std::basic_string<char, std::char_traits<char>, std::allocator<char> >’ to ‘byte’ in assignment

Any more help with this would be great. Thanks again.
