Hi, I'm in the process of creating a program that will allow any client connected to a simple server to voice chat. To start small, I only want the server to play the sound it is receiving from the clients -- the server will not be sending voice to any of the clients (just to clear that up).
I have already written two small programs to get my feet wet working with audio, and I'll include them below.
Records a .wav
# record a wave file
import pyaudio
import wave
import sys
CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100
RECORD_SECONDS = 5
WAVE_OUTPUT_FILENAME = "output.wav"
p = pyaudio.PyAudio()
# open stream
stream = p.open(format = FORMAT,
                channels = CHANNELS,
                rate = RATE,
                input = True,
                frames_per_buffer = CHUNK)
print "Recording for", RECORD_SECONDS, "seconds..."
frames = []
for i in range(0, (RATE / CHUNK) * RECORD_SECONDS):
    data = stream.read(CHUNK)   # one CHUNK of samples from the microphone
    frames.append(data)
print "Finished Recording."
stream.close()
p.terminate()
# write data to WAVE file
data = ''.join(frames)
wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
wf.setnchannels(CHANNELS)
wf.setsampwidth(p.get_sample_size(FORMAT))
wf.setframerate(RATE)
wf.writeframes(data)
wf.close()
Plays a .wav
# play a wave file
import pyaudio
import wave
import sys
CHUNK = 1024
WAVE_INPUT_FILENAME = "output.wav"
wavefile = ""
try:
wavefile = wave.open(WAVE_INPUT_FILENAME, 'rb')
print "Successfully opened", WAVE_INPUT_FILENAME
except:
print "Could not open", WAVE_INPUT_FILENAME
sys.exit()
print wavefile.getnchannels(), "audio channels."
print wavefile.getsampwidth(), "sample width in bytes."
print wavefile.getframerate(), "sampling frequency."
print wavefile.getnframes(), "audio frames."
print wavefile.getcomptype(), "compression type."
print wavefile.tell(), "current file pointer position."
p = pyaudio.PyAudio()
# open stream
stream = p.open(format = p.get_format_from_width(wavefile.getsampwidth()),
                channels = wavefile.getnchannels(),
                rate = wavefile.getframerate(),
                output = True)
# read data
data = wavefile.readframes(CHUNK)
# play stream
while data != '':
    stream.write(data)              # send the chunk to the output device
    data = wavefile.readframes(CHUNK)
stream.close()
p.terminate()
I also have experience with client-server applications. Unless I'm missing something, is it really going to be as simple as modifying the "record a .wav" program so that, instead of writing the data to a .wav file, it sends each chunk through a socket to the server?
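For example, this is roughly what I'm picturing for the client side (just a rough sketch on my part; HOST and PORT are placeholder values I made up, and I'm assuming a plain TCP connection):

# client sketch: record from the mic and send each chunk to the server
import socket
import pyaudio

CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100
HOST = 'localhost'   # placeholder -- the server's address
PORT = 50007         # placeholder port

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))

p = pyaudio.PyAudio()
stream = p.open(format = FORMAT,
                channels = CHANNELS,
                rate = RATE,
                input = True,
                frames_per_buffer = CHUNK)

try:
    while True:
        data = stream.read(CHUNK)   # grab a chunk from the microphone
        s.sendall(data)             # send it to the server instead of writing it to a file
except KeyboardInterrupt:
    pass

stream.close()
p.terminate()
s.close()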
Then all I'd have to do is have the server receive that sound data in an infinite loop and feed it to code essentially identical to the "play a .wav" program, and it should play the sound, right?
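On the server side I'm imagining something along these lines (again just a sketch, using the same made-up port, handling a single client, and assuming the received bytes can be written straight to an output stream):

# server sketch: accept one client and play whatever audio arrives
import socket
import pyaudio

CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100
PORT = 50007   # placeholder port, must match the client

p = pyaudio.PyAudio()
stream = p.open(format = FORMAT,
                channels = CHANNELS,
                rate = RATE,
                output = True,
                frames_per_buffer = CHUNK)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('', PORT))
server.listen(1)
conn, addr = server.accept()
print "Client connected from", addr

while True:
    data = conn.recv(CHUNK * 2)   # paInt16 mono = 2 bytes per sample
    if not data:
        break                     # client disconnected
    # in a real program I'd probably buffer to whole samples first,
    # but for the sketch just play whatever arrives
    stream.write(data)

conn.close()
stream.close()
p.terminate()
server.close()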
Last question
Also, is that how programs like Ventrilo are made?
Thanks for any help on whichever of these points you can address.