Hey guys,
I am currently implementing the audio system of GCC4 for my OS X port. I will use OpenAL for sound playback, but I am a bit unsure about an implementation detail.
OpenAL has the concept of sources and buffers. A buffer holds sound data, while one or more sources can play back the sound from that buffer. It seems to me that DirectSound only has the concept of buffers, so in GCC4 creating a new playable object involves creating a new buffer. If I transferred that concept to OpenAL, I would store an OpenAL buffer together with an OpenAL source and treat them as a single object.
The problem with that is the amount of memory used, because for every sound I want to play I would have to create another buffer, even if I already had a buffer with the corresponding sound data. So my current idea is to drop the AudioBuffer and instead use something like an AudioSource, which is created by an audio manager. This manager would hold a list of buffers identified by the name of the file from which they were initialized. This way the same sound data only gets loaded into memory once, and the sources would get a handle to the buffer they want to play back. My problem with this design is that it seems to duplicate functionality from the resource cache (avoiding double loading), and somehow I feel uneasy about that. Any ideas or comments on that?
Thanks a lot!