Description
Existing music visualizers in media players (G-Force, AVS, Milkdrop) only weakly link what we hear to the animation they generate, because their output is only loosely grounded in physics and in the primordial human correspondence between sound and perception. Some examples: beat = crash/bump, violin = continuous motion, guitar riff = decelerating displacement, loud = big/fast, quiet = small/slow. Our hypothesis is that reflecting these correspondences in the generated animation feels better, because the visuals then seem 'more right'.
Task
1. Define sound input parameters (e.g. bpm/frequency, volume, duration, attenuation), 2D graphics animation primitives (simple shapes, movements, transformations, fade-in/out), and a mapping file format so that an artist can create correspondences based on simple (e.g. trigonometric) functions; a possible format is sketched after this list.
2. Implement both sound parsing (either reading events from a MIDI file or frequency-analyzing audio) and rendering of the animation primitives controlled by the mapped correspondences; a minimal analysis sketch also follows below.
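As a rough sketch of what such a mapping file and its evaluation might look like (the JSON layout, property names, and expression variables below are illustrative assumptions, not a fixed specification), each entry could bind an animation property of a primitive to a simple expression over time and the current sound parameters:

```python
import json
import math

# Hypothetical mapping file (format and names are assumptions for illustration):
# each animation property of a primitive is an expression over time t (seconds)
# and the current sound parameters (here: volume, bpm, t_since_beat).
MAPPING_JSON = """
{
  "circle": {
    "radius":  "10 + 40 * volume",
    "x_pos":   "320 + 100 * sin(2 * pi * t * bpm / 60)",
    "opacity": "volume * exp(-2 * t_since_beat)"
  }
}
"""

# Only whitelisted math names are visible to the expressions.
SAFE_NAMES = {name: getattr(math, name) for name in ("sin", "cos", "exp", "pi")}

def evaluate(expr, **params):
    """Evaluate one mapping expression with math functions and sound parameters only."""
    return eval(expr, {"__builtins__": {}}, {**SAFE_NAMES, **params})

def animation_state(mapping, t, sound):
    """Compute the current value of every mapped animation property."""
    return {
        primitive: {prop: evaluate(expr, t=t, **sound) for prop, expr in props.items()}
        for primitive, props in mapping.items()
    }

if __name__ == "__main__":
    mapping = json.loads(MAPPING_JSON)
    # Sound parameters as they might arrive from MIDI events or audio analysis.
    sound = {"volume": 0.8, "bpm": 120, "t_since_beat": 0.1}
    print(animation_state(mapping, t=0.25, sound=sound))
```

Keeping the expressions to plain arithmetic and trigonometric functions keeps the format artist-editable while still allowing beat-locked oscillation and post-beat decay.

For the audio-analysis side of task 2, a minimal frequency-analysis sketch (assuming NumPy and a fixed-size frame of samples; a MIDI-based variant would instead read note-on/off events) could extract a loudness value and a dominant frequency per frame:

```python
import numpy as np

def analyze_frame(samples, sample_rate):
    """Extract simple sound parameters (loudness, dominant frequency) from one frame."""
    volume = float(np.sqrt(np.mean(samples ** 2)))              # RMS loudness
    spectrum = np.abs(np.fft.rfft(samples))                     # magnitude spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    dominant = float(freqs[np.argmax(spectrum)])                # strongest frequency (Hz)
    return {"volume": volume, "frequency": dominant}

if __name__ == "__main__":
    sr = 44100
    t = np.arange(sr // 10) / sr                                # 100 ms frame
    frame = 0.5 * np.sin(2 * np.pi * 440 * t)                   # synthetic 440 Hz tone
    print(analyze_frame(frame, sr))                             # roughly 440 Hz, RMS ~0.35
```

These per-frame parameters (together with beat detection to supply bpm) would then be fed into the mapping evaluator above on every rendered frame.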