Hi, I've found a strange problem on 64-bit Ubuntu that limits the amount of data you can read in a single call to the C function read(), even on a 64-bit platform.

The following program won't read more than about 2.1 gig in one call.

#include <stdio.h>
#include <stdlib.h>
#include <err.h>
#include <fcntl.h>
#include <limits.h>
#include <sysexits.h>
#include <unistd.h>
#include <sys/stat.h>

// Get the size of a file in bytes.
size_t fsize(const char *fname) {
  struct stat st;
  if (stat(fname, &st) != 0)
    err(EX_NOINPUT, "stat %s", fname);
  return st.st_size;
}

int main() {
  const char *infile = "bigfile.dat";
  int fd;
  size_t bytes_expected = fsize(infile);
  ssize_t bytes_read;   // read() returns ssize_t, which may be -1 on error
  char *data;

  printf("\nLONG_MAX:%ld\n", LONG_MAX);

  if ((fd = open(infile, O_RDONLY)) < 0)
    err(EX_NOINPUT, "%s", infile);

  if ((data = malloc(bytes_expected)) == NULL)
    err(EX_OSERR, "data malloc");

  bytes_read = read(fd, data, bytes_expected);

  if (bytes_read < 0 || (size_t)bytes_read != bytes_expected)
    err(EX_DATAERR, "Read only %zd of %zu bytes", bytes_read, bytes_expected);

  /* ... operate on data ... */

  free(data);

  exit(EX_OK);
}

./a.out
LONG_MAX:9223372036854775807
a.out: Read only 2147479552 of 2163946253 bytes: Success

(The ": Success" at the end of the output comes from err(), which appends strerror(errno); a short read isn't an error, so errno is still 0.)

According to man 2 read, the maximum count is limited by SSIZE_MAX, which is defined in /usr/include/bits/posix1_lim.h as

# define SSIZE_MAX LONG_MAX

And LONG_MAX is defined in /usr/include/limits.h as
# if __WORDSIZE == 64
# define LONG_MAX 9223372036854775807L
# else
# define LONG_MAX 2147483647L
# endif
# define LONG_MIN (-LONG_MAX - 1L)
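
So on a 64-bit build, SSIZE_MAX should be 2^63 - 1, nowhere near the 2.1 gig where the read stops. A trivial check (my own snippet, not from the man page):

#include <limits.h>
#include <stdio.h>

int main(void) {
    // On 64-bit (LP64) Linux, SSIZE_MAX == LONG_MAX == 2^63 - 1,
    // so the documented limit is far above 2.1 GB.
    printf("SSIZE_MAX = %ld\n", (long)SSIZE_MAX);
    return 0;
}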

Either this is a bug in the Ubuntu build system,
or my build system is broken.

Can anyone with a 64-bit machine try running the above program?

By the way, the binary is a genuine 64-bit executable:

readelf -h ./a.out
ELF Header:
Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
Class: ELF64
Data: 2's complement, little endian
Version: 1 (current)
OS/ABI: UNIX - System V
ABI Version: 0
Type: EXEC (Executable file)
Machine: Advanced Micro Devices X86-64
Version: 0x1
Entry point address: 0x400750
Start of program headers: 64 (bytes into file)
Start of section headers: 5312 (bytes into file)
Flags: 0x0
Size of this header: 64 (bytes)
Size of program headers: 56 (bytes)
Number of program headers: 9
Size of section headers: 64 (bytes)
Number of section headers: 37
Section header string table index: 34

ldd ./a.out
linux-vdso.so.1 => (0x00007fff689ff000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00007ffee433e000)
libm.so.6 => /lib/libm.so.6 (0x00007ffee40ba000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00007ffee3ea3000)
libc.so.6 => /lib/libc.so.6 (0x00007ffee3b34000)
/lib64/ld-linux-x86-64.so.2 (0x00007ffee464e000)


All 3 Replies

const char *infile = "bigfile.dat";
size_t bytes_expected = fsize(infile);

Don't you think fsize would work better on a FILE * rather than a char *?
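
Presumably something like the following is meant: a sketch that fstat()s the descriptor behind an already-opened stream instead of re-resolving the path (fsize_fp is a made-up name, not from the thread):

#include <err.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sysexits.h>

// Hypothetical variant of fsize() that takes a FILE * instead of a path.
size_t fsize_fp(FILE *fp) {
    struct stat st;
    if (fstat(fileno(fp), &st) != 0)
        err(EX_OSERR, "fstat");
    return (size_t)st.st_size;
}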

> Don't you think fsize would work better on a FILE * rather than a char *?

The last time I checked, fsize was a Windows-specific function.
I'm using Linux.

The first function in the code sample I supplied is a function called fsize
that just returns the number of bytes in the file.

Thanks for all your replies,
but it seems that it's not possible to use a single POSIX read() call to read more than about 2.1 gig.
This is according to the uberpenguin Linus himself:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=e28cc715

At some point I guess it was possible, but the kernel has regressed to only allowing smaller chunks to be read per call.

From fs/read_write.c in the kernel source tree:

/*
 * rw_verify_area doesn't like huge counts. We limit
 * them to something that fits in "int" so that others
 * won't have to do range checks all the time.
 */
#define MAX_RW_COUNT (INT_MAX & PAGE_CACHE_MASK)

int rw_verify_area(int read_write, struct file *file, loff_t *ppos, size_t count);
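
With the usual 4 KiB pages, PAGE_CACHE_MASK is ~4095, so MAX_RW_COUNT works out to exactly the short-read count the program reported. A quick check (my own arithmetic, assuming a 4096-byte page size):

#include <limits.h>
#include <stdio.h>

int main(void) {
    // Mirrors the kernel's MAX_RW_COUNT for 4 KiB pages:
    // INT_MAX rounded down to a page boundary.
    long max_rw = INT_MAX & ~4095L;
    printf("%ld\n", max_rw);   // prints 2147479552, matching the output above
    return 0;
}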

This was rather annoying; it would have been nice if this limit had been documented somewhere outside the kernel source and the kernel mailing list.
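
For anyone else hitting this: the usual workaround is to loop over read() until everything has been consumed. A minimal sketch, reusing fd, data, and bytes_expected from the program above (my code, error handling kept simple):

// Read bytes_expected bytes in chunks, since a single read() on Linux
// is capped at MAX_RW_COUNT (INT_MAX rounded down to a page boundary).
size_t total = 0;
while (total < bytes_expected) {
    ssize_t n = read(fd, data + total, bytes_expected - total);
    if (n < 0)
        err(EX_DATAERR, "read failed after %zu bytes", total);
    if (n == 0)   // unexpected end of file
        errx(EX_DATAERR, "short file: got %zu of %zu bytes", total, bytes_expected);
    total += (size_t)n;
}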
