hi,

when I try string.size() on accented words, the result is bigger than it is "supposed" to be, as accented characters count as 2 size units instead of one.

how can I count them as one?

cheers

Please post example code.

the code is irrelevant, I think, as my question applies to any code calling this function, but here it is:

#include <iostream>
#include <string>

using namespace std;

void geraMenu(string titulo, string versao)
{
	int numEstrelas = titulo.size() + 40;
	string linha = string(numEstrelas,'*');
	string meiaLinha = string(19,'*');
	cout << linha << endl << linha << endl;
	cout << meiaLinha << " " << titulo << " " << meiaLinha << endl;
	cout << linha << endl << linha << endl;
	cout << string(5,'*') << " " <<  versao << " " << string(numEstrelas - 7 - versao.size(),'*') << endl;
}

the output with "PROJECTO GESTÃO" as 'titulo':

********************************************************
********************************************************
******************* PROJECTO GESTÃO *******************
********************************************************
********************************************************
***** beta 1 *******************************************

the output with "PROJECTO GESTAO" as 'titulo':

*******************************************************
*******************************************************
******************* PROJECTO GESTAO *******************
*******************************************************
*******************************************************
***** beta 1 ******************************************

As you can see, the '*' aren't aligned in the first case, because I use 'Ã' instead of 'A' in the word "GESTÃO".
In the first case titulo.size() gives 16 and in the second case 15 (the correct number of characters).

What I want to know is how I can count the right number of letters, whether they are accented or not.

Are you compiling for UNICODE ?

I'm sorry, but I'm a beginner C++ programmer, to such a "noob" level that I don't know what you're talking about :P
All I can do is tell you that I'm using the Eclipse SDK under Ubuntu and show you this picture of the properties menu: http://img366.imageshack.us/img366/9537/help1yh9.png

I have no issue on Windows XP using Dev-C++, but with Ubuntu (under a VMware environment) I do.

#include <iostream>
#include <string>

using namespace std;

int main()
{
    string a = "Ã";
    string b = "A";

    //cout << a << " " << b << endl;
    cout << a.length();
    cout << "\n";
    cout << b.length();

    cin.get();
}

Output in ubuntu

user@ubuntu804desktop:~$ g++ -Wall pedantic.cc
user@ubuntu804desktop:~$ ./a.out
2
1

Output in windowsxp using dev-cpp

1
1
Nice test :)

I don't have Ubuntu, but Microsoft VC++ 2008 Express reports 1 for the program that jamthwee posted. The compiler stored -61 in that byte. crappy *nix :)

>>I'm sorry, but I'm a beginner C++ programmer to such a "noob" level I don't know what you're talking about

UNICODE is a standard way to use non-English languages in computer programs. The standard UNICODE character type is wchar_t, not char. Under MS-Windows wchar_t is defined to be unsigned short (16 bits), while in *nix (the last time I heard) it is unsigned long (32 bits). This is because many languages, such as Chinese, use graphic symbols which can only be accommodated by wchar_t. In order to compile for UNICODE you have to set specific flags in the makefile -- I have no clue what those flags are for your compiler.

[edit]Considering jamthwee's test I would not bother with the UNICODE described above. It appears to be a compiler issue.[/edit]

Yes, it is an encoding issue. I suspect that it comes from the way your editor is saving the text file.

There are several ways to 'encode', or store, character data.

There is the old char-sized ASCII encoding, but that is limited to only 7-bit ASCII characters and any system-dependent character codes above 127. Microsoft calls this "ANSI" and the exact selection of extended characters depends on your output code page. Obviously, this is not very convenient for languages using anything but straight-up Roman characters.

Then came (eventually) Unicode, which handles all language graphemes. (This doesn't mean it is complete --additions are still being made, but most industrialized nations can express their native language[s] with Unicode.)

There are several ways to store Unicode: three of which are of interest to us.

UTF-8 uses our venerable char. Only those graphemes that need more than one byte use more than one byte.

UTF-16/UCS-2 uses variable-width characters, like UTF-8, but the smallest element is a 16-bit word instead of a byte. The fixed-width UCS-2 form is considered deprecated, but UTF-16 is still very much in use.

UTF-32/UCS-4 simply stores every character in a 32-bit word. This is how GCC treats Unicode (wchar_t) values. As such, modern Linux systems in general are moving toward the exclusive use of this encoding.


So, now that you've had the lecture, on to the point: your text editor is using UTF-8, which you will recall is variable-width. I don't have Portuguese installed, but I do have Spanish, so I hope you'll forgive the language choice in the examples. The file I've encoded is

Espanol
Español

"ANSI" (Microsoft's way), produces the following byte sequence
(escapes are either C-style or HEX, and the code page is Notepad's default)

E   s   p   a   n   o   l   \r  \n
E   s   p   a   \F1 o   l   \r  \n

UTF-16 produces
(Notepad's "Unicode" option; notice the byte-order mark at the beginning)

\FF \FE
E   \0  s   \0  p   \0  a   \0  n   \0  o   \0  l   \0  \r  \0  \n  \0
E   \0  s   \0  p   \0  a   \0  \F1 \0  o   \0  l   \0  \r  \0  \n  \0

UTF-8 produces
(I removed Notepad's weird BOM prefix)

E   s   p   a   n   o   l   \r  \n
E   s   p   a   \C3 \B1 o   l   \r  \n

Notice how the second line is a different length than the first, due to the two-byte code for 'ñ'.

You are using UTF-8. And you have found UTF-8's limitation: you can't use any of the standard C or C++ string length functions on a UTF-8 string. You must either roll your own or use a library of some kind. Here is one using the STL:

#include <algorithm>
#include <functional>
#include <string>

std::size_t UTF8_length( const std::string& s )
  {
  return std::count_if(
           s.begin(),
           s.end(),
           std::bind2nd( std::less <char> (), 0x80 )
           );
  }

Hope this helps.

Very good post


Thank you very much for your help in explaining this problem to me! It's now very clear why it happens.

The only problem is that the code you gave me for helping me count characters does not work :(

here's my code:

#include <iostream>
using std::cout;
using std::cin;
using std::endl;

#include <string>
using std::string;

#include <fstream>
using std::ifstream;

#include <algorithm>

#include <functional>

std::size_t UTF8_length(const string& s )
{
	return std::count_if(s.begin(),s.end(),std::bind2nd(std::less <char> (), 0x80));
}

void geraMenu(const string& titulo,const string& versao)
{
	int numEstrelas = titulo.size() + 40;
	string linha = string(numEstrelas,'*');
	string meiaLinha = string(19,'*');
	cout << linha << endl << linha << endl;
	cout << meiaLinha << " " << titulo << " " << meiaLinha << endl;
	cout << linha << endl << linha << endl;
	cout << string(5,'*') << " " <<  versao << " " << string(numEstrelas - 7 - versao.size(),'*') << endl;
	
	// UTF8_length tests
	cout << titulo.size() << endl;
	cout << UTF8_length(titulo) << endl;
	cout << UTF8_length("coco") << endl;
	cout << UTF8_length("cocó") << endl;
}

here's the output when I call geraMenu("PROJECTO GESTÃO", "beta 1"):

********************************************************
********************************************************
******************* PROJECTO GESTÃO *******************
********************************************************
********************************************************
***** beta 1 *******************************************
16
0
0
0

I can't see where the problem is, as I've no idea what this [ std::bind2nd(std::less <char> (), 0x80) ] means.

thanks!

Argh! I'm so sorry! (Recent med changes have made my brain work worse than usual...)

I forgot a couple of things:

  1. force proper type comparison
  2. non-ASCII characters

This will work. (I tested it to be sure!)

#include <algorithm>
#include <ciso646>
#include <functional>
#include <string>

struct UTF8_ischar
  {
  bool operator () ( unsigned char c ) const 
    {
    return (c < 0x80) or (c >= 0xC0);
    }
  };

std::size_t UTF8_length( const std::string& s )
  {
  return std::count_if( s.begin(), s.end(), UTF8_ischar() );
  }

The above is an optimized version of

std::size_t UTF8_length( const std::string& s )
  {
  return std::count_if(
           s.begin(),
           s.end(),
           std::bind2nd( std::less <unsigned char> (), 0x80 )
           )
       + std::count_if(
           s.begin(),
           s.end(),
           std::bind2nd( std::greater_equal <unsigned char> (), 0xC0 )
           );
  }

Don't worry too much about the weird stuff. You'll learn about it soon enough. It is just C++'s way of giving the user simple lambdas.

Essentially it says "count every character that has the msb == 0 or the two msbs == 11", which are the UTF-8 prefix codes that begin each character sequence.

Sorry again! :$
Have fun now!

Argh! I'm so sorry!

Oh! Don't be sorry at all, you've been very kind to explain all this to me...

The new code works perfectly! Thank you for your precious help!

Best regards,
André
