I have a need to output a large array for a CNC machine I have built. The attached code includes comments which explain my problem.

Thank you for your consideration.

using far pointers you are still limited by 64K memory regions; it's just that the region can be outside your current CS or DS, and you can have many such regions. pointer arithmetic on far pointers does not modify the segment portion of the pointer, only its offset (modulo 64K arithmetic).

huge pointers are normalized every time they are modified; pointer arithmetic is *very* slow, but arrays of size >64k can be safely accessed. changing line 18 to

char huge* I ;

should work.

Also, delete I; must be delete [] I;

If you're processing the file in a linear fashion, is there any real need to store the whole file in memory at the same time?

Thank you Vijayan121 for your time. I tried your suggestion of declaring the pointer as huge but this did not overcome the problem. I would be interested if you have successfully run the program with the pointer declared as huge, as this may indicate that I have a problem with running my compiler.

I changed delete I to delete [] I (and acknowledge that it is good practice to always include the brackets) but this did not overcome the problem. I need to output in real time an array larger than 64K, as 64K is just 304 mm of movement of the machine whereas I need at least 2000 mm.

...I would be interested if you have successfully run the program with the pointer declared as huge...

no, i have not run the program. what i posted was based on (somewhat foggy) memory of the days when intel tortured programmers with segmented addressing and memory models. i do not have the compiler; so i cannot test it.

the following suggestions are also based on guesswork:
a. verify that you are using a library which returns *huge* pointers for things like operator new / malloc. perhaps the compiler ships with different libraries, one for each memory model. one way to check this is to verify that for the huge pointer you try to use (of the form segment:offset), segment is non-zero and not equal to your DS
b. try malloc/free instead of new/delete.
c. try using an array (declared as huge) with a static storage duration.

do you really have to use this compiler? (the real question is: do you really have to execute your code under the segmented addressing mode of intel?)

So how is your CNC machine going to know whether you stored the whole thing in one massive array, or output them individually?

> 64k is just 304 mm of movement of the machine whereas I need at least 2000 mm.
Like who cares that it travels 300mm, appears to pause for a moment, then moves another 300mm. Your PC is so fast that you might not even notice.

Does your CNC machine store the whole program first, then only does things when you send a 'go' command? If so, there is absolutely no reason I can see to complicate the sending of the data by trying to create large arrays on your restricted OS.

> I have run the program on Pentium1/dos5, Pentium1/dos6.2
> and P4/XP machines.
If XP is a real choice, then get a better compiler. Any modern 32-bit compiler suitable for XP would have no trouble at all dealing with a couple of MB of array size.

Since you're using C++, maybe create a small class to do some work for you, by providing some kind of simulation of a large array?

#include <iostream>
using namespace std;

const int BLOCK_SIZE = 50000;

class bigA {
    char  **m_array;
    size_t  m_size;
    size_t  m_blocks;
public:
    bigA( size_t size );
    ~bigA( void );
    char & operator [] ( unsigned int i );
};

bigA::bigA ( size_t size ) {
    m_size = size;
    m_blocks = size / BLOCK_SIZE;
    if ( size % BLOCK_SIZE != 0 ) m_blocks++;
    m_array = new char*[m_blocks];
    for ( size_t i = 0 ; i < m_blocks ; i++ ) {
        m_array[i] = new char[BLOCK_SIZE];
    }
}

bigA::~bigA ( void ) {
    for ( size_t i = 0 ; i < m_blocks ; i++ ) {
        delete [] m_array[i];
    }
    delete [] m_array;
}

char & bigA::operator [] ( unsigned int i ) {
    if ( i < m_size ) {
        size_t  row = i / BLOCK_SIZE;
        size_t  col = i % BLOCK_SIZE;
        return m_array[row][col];
    }
    // panic now, out of bounds
    static char dummy = 0;
    return dummy;
}

int main ( ) {
    bigA foo(123456);
    foo[1234] = 'a';
    cout << foo[1234] << endl;
    return 0;
}
I will follow up on your suggestions and come back again later Vijayan121.

In the meantime I would be grateful if there is a kind person out there able to compile and run the program on a more current compiler.

Also I would value advice on which replacement compiler I should acquire. I would want an IDE, and the machine's computer interface board and real-time requirements limit me to using DOS.

...compile and run the program on a more current compiler

runs without any problems on microsoft vc++ 8.0 and gcc 3.4.x (need to change headers iostream.h => iostream etc.)

Also I would value advice on which replacement compiler I should acquire. I would want an IDE, and the machine's computer interface board and real-time requirements limit me to using DOS.

if you have to run under dos, this is reported to be a workable solution: http://www.delorie.com/djgpp/doc/ug/intro/what-is-djgpp.html & http://www.delorie.com/djgpp/v2faq/ it supports 32 bit addressing; so array sizes are not an issue if you have sufficient ram. see: http://www.delorie.com/djgpp/doc/ug/basics/32bit.html there is an ide for djgpp: http://www.rhide.com/ djgpp is particularly popular among game programmers.

note: Salem's idea (sending a large array in smaller chunks of <64k) would be a simple and elegant solution; you should not rule it out without investigating the possibility.


I will answer the respondents and pose a question at the end.

Thank you for your contribution Salem of breaking up the large array into smaller chunks. The machine starts on receiving char I[0]. It demands I[1] whilst moving according to I[0], and so on to the end of the array. This arrangement plus DOS puts the computer under the control of the machine and satisfies the real-time and safety considerations of the machine.

Pausing every 300 mm of movement would slow or stall the machine and cause the bit to overheat unless mitigated. However, the time between each output of I[x] is 100 microseconds when movement is at maximum speed, so there is time to progressively manipulate arrays between movement steps.

Thank you Vijayan121 for confirming that the program can run under another (more modern!) compiler and your suggestion of an alternative to the compiler I have.

Should you Salem or Vijayan121 decide to visit Wellington, New Zealand in the future please email me.

What was puzzling me when I originally posted my problem was that I interpreted the Turbo C++ V3.0 documentation to mean that I could do as follows:

Invoke the Huge Memory Model and declare the array to be local for it to be compiled into dynamic memory. I understood the size of the array would, under these circumstances, be limited to the amount of available heap, not just one segment.

If there is some venerable sage who can confirm this is the case and/or point out the error of my ways I would indeed be grateful.


are you trying to run your program from within the turbo c++ ide? in this case, the amount of heap memory that you have is very limited. the memory layout would be something like this (wwww < xxxx < yyyy < zzzz < a000) :

0000:0000 interrupt vector
0040:0000 dos low memory
wwww:0000 IDE code, data, heap, stack.
xxxx:0000 your program code.
yyyy:0000 your program data & stack.
zzzz:0000 your program heap.
A000:0000 end of heap (start of graphic memory).
"640 Kb is more than enough for anybody" - Bill Gates

with the changes ( char huge* I, delete[] I ), have you tried getting out of the ide and running the program from the shell (command.com)? even if it may not work from within the ide, it should work once you kick the ide out. you would then have more memory for your program.

this would take care of the code that you posted; you should have enough heap size for an array of 100KB. if you want array sizes of a few MB, you would need to use a dos extender (like DPMI) which allows access to the protected mode memory on your machine. i do not know if turbo C++ ships with a dos extender or would be able to use one if it is present. however, since you say "64K gives me 304 mm of movement of the machine whereas I need at least 2000 mm.", you may be able to manage everything in real mode memory itself. (you need 64*2000/304 KB which should be available.) and for larger movements, you may be able to use Salem's suggestion.

Problem solved - thanks to Vijayan121's suggestion. My compiler does not correctly run operator new - it would appear to have been shipped that way. However it correctly runs the alternative, farcalloc().

Thanks again Vijayan121 and Salem. I have learned a lot including the need to replace my compiler.