Some Ramblings on Reproducing Video with Audio Equipment
--------------------------------------------------------

I too have been thinking in the background about a way to do the "vinyl on video" effect cheaply. I'm basing it around the $90 EZ-KIT Lite: it has a DSP plus audio inputs and outputs. I'm actually starting in the "turn sound into video" direction. For starters, I'd simply like to generate the h and v sync signals, then use the audio input to make shades-of-gray lines on the TV. Once I've gotten that far, I'll be happy.

The kit can easily generate an interrupt at about 33KHz. I think you mentioned elsewhere that the ~15.7KHz h-sync frequency is critical to getting an NTSC signal. So I'm hoping to use every other interrupt and that the resulting 16.5KHz will be close enough to work :) If that doesn't work, I'll switch to a real-time loop (YES! Using a 33MHz RISC computer for ONE SINGLE task) that handles the video, generating the sync signals 100% accurately and using the interrupts only as timing references.

Generally, I guess an audio signal of 44,100 samples per second delivers 44100/30 samples per video frame. That's 1470 samples per frame. If it were a square frame, it would be approximately a 36x36 grid. Not too great, but it could be fun anyway. Btw, that's with a pretty high frame rate; for experimental purposes it might be better to trade frame rate for resolution! Oops, I forgot Shannon's theorem. Heh heh heh, but this is good enough for government work.

Generating the h-sync and v-sync signals, I believe, would be trivial... If my friend is right, the swing of the video signal is 1V, so I'm hoping to use an 8-bit R-2R D/A converter.

Fwiw, I looked into using the $200 Analog Devices wavelet codec evaluation board to do video, since it has its own video digitizing and generation chips!
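The napkin math above is easy to check. A quick sketch (the 44.1KHz and 30fps figures are from the post; note the square root actually comes out slightly above the conservative 36x36 estimate, which leaves some margin for sync overhead):

```python
# Back-of-the-envelope: how many audio samples per video frame,
# and how big a square "pixel grid" that buys you.
import math

SAMPLE_RATE = 44100   # CD-quality mono, samples per second
FRAME_RATE = 30       # NTSC-ish frames per second

samples_per_frame = SAMPLE_RATE // FRAME_RATE
grid_side = int(math.sqrt(samples_per_frame))

print(samples_per_frame)  # 1470 samples available per frame
print(grid_side)          # 38 pixels on a side if the frame were square
```

Halving the frame rate to 15fps doubles the samples per frame (2940), which is about a 54x54 grid — the resolution-for-framerate trade mentioned above.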
The things I realized with that are: the main compression from wavelets comes from RLE-encoding strings of 0's in the higher-frequency components... which means a digital protocol must be employed to communicate data back and forth. Not the hardest problem in the world to solve; however, it's a showstopper for this project, because this project uses non-digital storage (such as audio tape, or sampling a CD through a codec). That problem is significant because I would like to generate audio tapes and then reconstruct video from them reasonably reliably.

But, you say, modems do this all the time? Well, the answer is that modems have a terrific deal of intelligence at both ends of the transmission, including two-way line calibration, which is all way above the complexity of this project — PLUS it requires two-way protocols. I'd simply like to stream video off of a CD, taking the hits for non-ideal sampling errors, because after all, you can't tell a CD player or tape player "Please back up and replay that sector". So much for the massive digital compression.

I still want to use the VideoLab because it has a video digitizer and very-high-quality video generation. Another option is to *better* understand wavelets: digitize video using the VideoLab board, transmit *uncompressed* video digitally to an EZ-KIT Lite, which uses its audio codecs to produce the sound. Record that sound, play it back into the EZ-KIT, which digitizes it and transmits a digital stream to the VideoLab, which generates the video. This is definitely feasible. Only, I don't understand wavelets well enough, nor audio modulation (incl. noise/distortion) well enough, to figure out what kind of audio signals to generate. What I do know is that the codec works with 43 bands of frequency-domain data. Actually, if I learned correctly, wavelets are not completely frequency-domain, but partly time-domain. Anyway, there are 43 "bins" of data, roughly equal in bandwidth, to be transmitted.
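To illustrate why the zero-run compression forces a digital channel, here's a toy run-length encoder of my own invention (NOT the board's actual bitstream format — just the general idea of collapsing zero runs in high-frequency coefficients):

```python
# Toy run-length coder for the long zero runs typical of
# high-frequency wavelet coefficients. Each run of zeros becomes
# a (0, run_length) pair; nonzero coefficients pass through.

def rle_encode(coeffs):
    """Collapse runs of zeros into (0, run_length) pairs."""
    out = []
    i = 0
    while i < len(coeffs):
        if coeffs[i] == 0:
            run = 0
            while i < len(coeffs) and coeffs[i] == 0:
                run += 1
                i += 1
            out.append((0, run))
        else:
            out.append(coeffs[i])
            i += 1
    return out

def rle_decode(encoded):
    """Expand (0, run_length) pairs back into zero runs."""
    out = []
    for item in encoded:
        if isinstance(item, tuple):
            out.extend([0] * item[1])
        else:
            out.append(item)
    return out

coeffs = [9, 0, 0, 0, 0, 0, 3, 0, 0, 7]
packed = rle_encode(coeffs)
print(packed)  # [9, (0, 5), 3, (0, 2), 7]
assert rle_decode(packed) == coeffs
```

The showstopper is visible right in the format: if analog tape noise corrupts a single run length, every coefficient after it lands in the wrong place — which is exactly why this stream needs an error-free digital protocol, not a cassette.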
What I propose doing is using my bandwidth estimate for audio tape (about 36x36 pixels) and sending only enough bins to fill 44.1KHz (minus Shannon's awful constraints) worth of data to the EZ-KIT. The EZ-KIT would then turn that data into audio somehow. How? I don't know! Hahahahaha, this is the greatest part! 100MHz wireless Ethernet engineers are layering and intertwining dozens of modulation techniques to solve a similar problem: making the data as recoverable as possible in the face of noise and other kinds of distortion.

When I am struck by a problem I know next to nothing about, I sometimes try to come up with the most naive possible approach, just to have a basis for comparison with other potential solutions. So here's my most naive technique: I could divide the audio bandwidth into the same number of bins as the number of wavelet bands I've determined to represent the signal. Then I could amplitude modulate (AM) the signals and combine them together into a most likely displeasurable-sounding audio signal, which could be recorded on any normal piece of equipment: boomboxes, SoundBlasters, portable memo recorders, microcassette recorders, CD-Rs, delay units, samplers, and more!

One of the more interesting things about this technique is that lower audio frequencies represent lower frequencies in the video domain. So running the sound through an equalizer and adjusting the bass sliders will make the picture appear smoother!

Reconstruction: to get a picture out of an audio cassette, the signal would be run through a number of band-pass filters. Each frequency band would then be individually demodulated, digitally encoded, and sent over a serial link to the VideoLab. One problem with this whole tactic is the sync signal: the cassettes or CDs must contain some kind of indication of when a video frame starts. Perhaps this could be stored in an extra, lower-frequency bin?

-N

> The barrier to doing this is the sampling rate of any of your audio
> equipment.
> Sound cards, etc., are going to do some filtering to chop off
> high-frequency information -- unfortunately, most of your picture info is
> high frequency (from an audio perspective).
>
> We know that a mono audio signal at CD quality is sampled at 44.1KHz, ie,
> 44,100 samples per second (each sample recorded with a resolution of 16
> bits). Something called the Nyquist frequency means that the highest
> frequency signal we can represent at CD audio quality (without compression
> techniques) is around 22K. If your audio card is sampling at 22K, we can
> represent frequencies up to around 11K.
>
> A plain old crappy TV video signal is ~30fps, with each frame a fixed
> vertical resolution of 525 lines; this gives you ~15K lines drawn per
> second on NTSC (National Television Standards Committee, ie US TV) signals.
>
> So we're already close, just analyzing lines per second, to the limit of
> what our sampling frequency can hold. It's too bad we can't flip those 16
> bits of sample "on their side" and capture 16x the samples with depth 1
> to get b&w images!
>
> Btw, an NTSC signal's color information is encoded as modulation on a base
> carrier of about 3.58MHz. This will get stripped off right off the bat
> anyway, or plain ignored.
>
> Anyway, hypothetically we might get 2 or 3 pixels per line using a
> top-notch 96KHz audio card... feh!
>
> I'd be very curious, though, to see if you could get anything at all -- does
> the noise you see correlate at all to the input signal? If you have the
> ability, you might want to try just recording the luminance info and
> stripping the chroma (color)...
>
> My instinct is that this won't ever work, though, because of the lack of
> resolution to capture the horizontal and vertical retrace flags, which
> have to be specific lengths -- the horizontal ones are only 5us (that's
> microseconds), which implies a sampling frequency of (I think) 2MHz...
>
> God knows how the PixelVision cameras work...
> but I suspect they did not
> record in anything like conventional NTSC, and maybe then converted back
> to output to TV?
>
> Ponder...
>
> Significantly, the data rate itself is not such a huge limitation if we
> can be content with black-and-white (NOT grayscale) images. If we could
> drop to a 100 x 100 x 30fps b&w image, my napkin suggests we should be
> able to record this on anything that will hold audio data, including audio
> tape -- or get close, anyway. Or we could get a few values of gray and drop
> the frame rate. And that's without even basic compression like run-length
> encoding, which might make sense if we're encoding scan lines
> anyway. It would work even nicer, maybe, if we smeared some vaseline on the
> lens... :)
>
> Of course, to do this we have to have equipment that will sample faster but
> shallower -- a normal audio tape deck won't do it, but perhaps you could
> build a widget that would convert time to bit depth and back and let
> you do it. My money says that's what a PixelVision is doing.
>
> Any takers? :)
>
> PS: I'm no engineer, so those that aren't, don't shoot me. I just learned a
> little working here: www.spcontrols.com. Just enough to be argumentative
> and dangerous.
>
> ghede@well.com
> http://www.quietamerican.org
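Coming back to the naive subband-AM idea from earlier in the thread, here's a minimal sketch of what the modulation side could look like. Everything numeric here is an assumption for illustration (8 bins instead of the codec's 43, a guessed 1-18KHz usable band) — not anything the EZ-KIT or VideoLab actually does:

```python
# Naive subband AM: give each "wavelet bin" its own carrier
# frequency, amplitude-modulate each bin's value onto its carrier,
# and sum everything into one (probably unpleasant) audio signal.
# All parameters are illustrative assumptions, not hardware specs.
import math

SAMPLE_RATE = 44100
NUM_BINS = 8                        # the real codec has 43 bins
LOW_HZ, HIGH_HZ = 1000.0, 18000.0   # assumed usable audio band

# Spread the carriers evenly across the usable band, one per bin.
carriers = [LOW_HZ + i * (HIGH_HZ - LOW_HZ) / (NUM_BINS - 1)
            for i in range(NUM_BINS)]

def modulate(bin_values, num_samples):
    """AM-modulate one value per bin onto its carrier and sum.

    bin_values: NUM_BINS floats in [0, 1], held constant over this
    chunk (one 'pixel' worth of data per bin).
    """
    signal = []
    for n in range(num_samples):
        t = n / SAMPLE_RATE
        s = sum(v * math.sin(2 * math.pi * f * t)
                for v, f in zip(bin_values, carriers))
        signal.append(s / NUM_BINS)   # normalize the sum into [-1, 1]
    return signal

chunk = modulate([0.9, 0.1, 0.5, 0.0, 0.7, 0.3, 1.0, 0.2], 64)
print(len(chunk), max(abs(x) for x in chunk) <= 1.0)  # 64 True
```

Recovery would be the mirror image described in the post: band-pass around each carrier, envelope-detect each band, re-digitize. The frame-sync problem could fit this scheme as a dedicated pilot tone below LOW_HZ — the "extra, lower-frequency bin" suggested above.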