In PHP I can use the pack function to format an integer value as a 16-bit WAV sample. It gets the endianness right and it works. How would one go about creating raw WAV data in C?

function cram ($integer, $type = 's', $length = '*')
{
	return pack($type.$length, $integer);
}

I'm currently using SDL_mixer, and this rough function gets me a playable sawtooth-like sound:

static Mix_Chunk sample_buffer;

Mix_Chunk *make_wav () {
  sample_buffer.allocated = 1;
  sample_buffer.alen = 80000;
  sample_buffer.volume = 128;
  sample_buffer.abuf = malloc(sample_buffer.alen);
  int i, up, v, min, max, incr;
  up = 1;
  v = 1;
  min = 0;
  max = 30000;
  incr = 100;
  for (i = 0; i < sample_buffer.alen; i++) {
    if (up)
      v = v + incr;
    else
      v = v - incr;
    if (v > max) {
      up = 0;
      v = max;
    }
    if (v < min) {
      up = 1;
      v = min;
    }
    sample_buffer.abuf[i] = v;
    printf("%d\n", v);
  }
  return &sample_buffer;
}

However, as you can see, I'm just throwing values into the data buffer, and I'm not getting properly packed stereo WAV samples. I can't find information on the Internet about WAV programming in C. Any help would be GREATLY appreciated. I'm lost here.

Well, the first thing you need to research is the WAV file format.

You'll also need to be familiar with the bitwise operators
<< >> & | ^
in order to pack information at the bit level.
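For reference, the canonical 44-byte PCM WAV header can be sketched like this. This is just an illustrative sketch; the helper names (`put_le16`, `put_le32`, `write_wav_header`) are my own, not a standard API:

```c
#include <stdio.h>
#include <stdint.h>

/* Write a 16-bit value as 2 little-endian bytes. */
static void put_le16(FILE *f, uint16_t v) {
    fputc(v & 0xFF, f);
    fputc((v >> 8) & 0xFF, f);
}

/* Write a 32-bit value as 4 little-endian bytes. */
static void put_le32(FILE *f, uint32_t v) {
    fputc(v & 0xFF, f);
    fputc((v >> 8) & 0xFF, f);
    fputc((v >> 16) & 0xFF, f);
    fputc((v >> 24) & 0xFF, f);
}

/* Write a canonical 44-byte PCM WAV header; data_size is the
   number of bytes of raw sample data that will follow it. */
void write_wav_header(FILE *f, uint32_t sample_rate,
                      uint16_t channels, uint16_t bits_per_sample,
                      uint32_t data_size) {
    uint16_t block_align = channels * bits_per_sample / 8;
    fwrite("RIFF", 1, 4, f);
    put_le32(f, 36 + data_size);            /* total size minus 8 */
    fwrite("WAVE", 1, 4, f);
    fwrite("fmt ", 1, 4, f);
    put_le32(f, 16);                        /* fmt chunk size for PCM */
    put_le16(f, 1);                         /* audio format 1 = PCM */
    put_le16(f, channels);
    put_le32(f, sample_rate);
    put_le32(f, sample_rate * block_align); /* byte rate */
    put_le16(f, block_align);
    put_le16(f, bits_per_sample);
    fwrite("data", 1, 4, f);
    put_le32(f, data_size);
}
```

Every multi-byte field in the header is little-endian, which is why the same byte-masking tricks discussed below apply to the header as well as the samples.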

For 16-bit stereo audio, each sample is 16 bits. Samples are ordered left, right, left, right, etc. Each sample is stored least significant byte first. Does that mean 00000000 00000000 == -32768, 11111111 00000000 == 0, and 11111111 11111111 == 32767?

If so, isn't that just a short int in C? Would this work, or does [] not address one byte? Or does C not store integers in LSB format?

sample_buffer.abuf[i] = (short int) lvalue;
i = i + 2;
sample_buffer.abuf[i] = (short int) rvalue;
i = i + 2;

> Each sample is stored least significant byte first so does that mean 00000000 00000000 == -32768,
> 11111111 00000000 == 0, and 11111111 11111111 == 32767?
Nope,
0 would be 00000000 00000000
255 would be 11111111 00000000 (LSB=0xFF, MSB=0x00)
256 would be 00000000 00000001 (LSB=0x00, MSB=0x01)

> If so, isn't that just a short int in C?
Perhaps, but C doesn't assume a short is exactly 16 bits.

> Or does C not store integers in LSB format?
Perhaps, but C doesn't assume you're using a little endian machine.

If you are using 16-bit shorts on a little endian machine, and happy that is the only kind of machine your code will run on, then you can make life a bit easier.

But if either of those things isn't true, then you need to assemble the byte stream one byte at a time, say

buff[i++] = sample & 0xFF;
buff[i++] = (sample >> 8 ) & 0xFF;

> Nope,
> 0 would be 00000000 00000000
> 255 would be 11111111 00000000 (LSB=0xFF, MSB=0x00)
> 256 would be 00000000 00000001 (LSB=0x00, MSB=0x01)

I'm confused because, 2 ^ 16 = 65536, so 65536 should be the highest number possible with 16 bits. How are you supposed to count across multiple bytes?

> > If so, isn't that just a short int in C?
> Perhaps, but C doesn't assume a short is exactly 16 bits.
>
> > Or does C not store integers in LSB format?
> Perhaps, but C doesn't assume you're using a little endian machine.

How is one supposed to use bitwise operators if it is unknown how C is storing the bits?

> I'm confused because, 2 ^ 16 = 65536, so 65536 should be the highest number possible with 16 bits.
No, 65535 is, which would be 0xFFFF or 11111111 11111111

> 255 would be 11111111 00000000 (LSB=0xFF, MSB=0x00)
> 256 would be 00000000 00000001 (LSB=0x00, MSB=0x01)
What if I wrote them out as
255 would be 00000000 11111111 (MSB=0x00, LSB=0xFF)
256 would be 00000001 00000000 (MSB=0x01, LSB=0x00)
It's the same information, written out in BE order.

> How is one supposed to use bitwise operators if it is unknown how C is storing the bits?
Because at that level, you're not aware of the endian issue.

> buff[i++] = (sample >> 8 ) & 0xFF;
If sample is say 0x1234, then the result in the appropriate buff location will always be 0x12, no matter what the endian format of the machine is.

You are attempting to write to a data structure which is endian-specific, so you need to store all your bytes in that order.

But I've already shown you how to extract the LSB and MSB, what more do you want?

Either it's

buff[i++] = sample & 0xFF;
buff[i++] = (sample >> 8) & 0xFF;

Or it's

buff[i++] = (sample >> 8) & 0xFF;
buff[i++] = sample & 0xFF;

(sample >> 8) & 0xFF will always get you bits 8 to 15 of an integer, no matter what endian format the machine uses. The only thing which changes is the order you write them to your buffer.
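Putting the two cases together for 16-bit stereo, a loop along these lines (a sketch; the function name and layout are my own, assuming little-endian WAV byte order) interleaves left and right samples into a byte buffer:

```c
#include <stdlib.h>

/* Interleave n pairs of 16-bit left/right samples into a freshly
   allocated little-endian byte buffer of 4*n bytes.
   Caller is responsible for freeing the result. */
unsigned char *pack_stereo16(const short *left, const short *right, int n) {
    unsigned char *buff = malloc(4 * n);
    int i = 0, s;
    if (buff == NULL)
        return NULL;
    for (s = 0; s < n; s++) {
        buff[i++] = left[s] & 0xFF;          /* left  LSB */
        buff[i++] = (left[s] >> 8) & 0xFF;   /* left  MSB */
        buff[i++] = right[s] & 0xFF;         /* right LSB */
        buff[i++] = (right[s] >> 8) & 0xFF;  /* right MSB */
    }
    return buff;
}
```

A big-endian target would only need the LSB/MSB lines swapped; the masking itself never changes.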

I'm trying to understand what's going on here.

buff[i++] = sample & 0xFF;

This should copy all the bits from sample that match 0xFF. However, I don't know what it would do if sample was a different size than 0xFF. Would it copy the matching and then ignore the rest? Or would it copy the extra bits afterward? Or would it copy the extra bits afterward but pad them 0 because they don't match corresponding & bits? I also don't understand how this extracts the LSB or the MSB.

sample1 = 0; // 0x0000; waveform = -32767
sample2 = 32767; // 0x7fff; waveform = 0
sample3 = 65535; // 0xffff; waveform = 32765
buff1 = sample1 & 0xFF;
buff2 = sample2 & 0xFF;
buff3 = sample3 & 0xFF;

So, assuming the extra bits are ignored, this should result in:

00000000 00000000 (sample1 0x0000)
  11111111 (0xff)
& xxxxxxxx
  --------
  00000000 (buff1)

  01111111 11111111 (sample2 0x7fff)
  11111111 (0xff)
& x&&&&&&&
  --------
  01111111 (buff2)

  11111111 11111111 (sample3 0xffff)
  11111111 (0xff)
& &&&&&&&&
  --------
  11111111 (buff3)

Maybe I'm just dense, but I fail to see the pattern. How is 0xFF helping us out here? It's nice to know that "(sample >> 8) & 0xFF" gets bits 8 to 15, but I don't understand the mechanics of how.

> sample & 0xFF;
Gets bits 0 to 7

As would
(sample >> 0) & 0xFF; // bits 0 to 7
which would stylistically match say
(sample >> 8) & 0xFF; // bits 8 to 15

You know how to get both bytes of your sample, all that remains is to put them into your buffer in the right order.

> So, assuming the extra bits are ignored, this should result in:
No, 0x7fff & 0xff is 0xff, NOT 0x7f

Examples

1100100101001001
        11111111
----------------------
0000000001001001
buff gets 01001001

1101110110110011
        11111111
----------------------
0000000010110011
buff gets 10110011

> > sample & 0xFF;
> Gets bits 0 to 7
>
> As would
> (sample >> 0) & 0xFF; // bits 0 to 7
> which would stylistically match say
> (sample >> 8) & 0xFF; // bits 8 to 15
>
> You know how to get both bytes of your sample, all that remains is to put them into your buffer in the right order.
>
> > So, assuming the extra bits are ignored, this should result in:
> No, 0x7fff & 0xff is 0xff, NOT 0x7f

I adapted some code to create this function. It seems to work for packing a long into a little-endian buffer of b bytes (the caller has to free the result):

void *longtolittleendian (long l, int b) {
  char *le = malloc(b);
  int i;
  for (i = 0; i < b; i++) {
    le[i] = (l >> (i * 8)) & 0xff;
  }
  return le;
}

I'm able to write a working wav file. Thanks very much for your help. I think I'll have to play around with this stuff a bit.
