Sound Renderer

A visual renderer with sound simulation for the SIGGRAPH 2022 Labs Session.

We (Jin Xutong, Lang Qinglin, and I) built a visual and sound renderer used in the SIGGRAPH 2022 Labs Session for NeuralSound [Jin et al. 2022]. When a primitive is clicked, its modal analysis is triggered and the resulting vibration is written into the audio buffer, so users hear the sound the object makes. NeuralSound precomputes the modal data and acoustic maps for the given material and model very quickly: in hundreds of milliseconds it reaches an accuracy that traditional methods need several seconds to achieve.
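For intuition, here is a minimal sketch of the modal synthesis step that fills the audio buffer, assuming the precomputed modal data arrive as per-mode frequencies, dampings, and excitation gains. The names and the audio pipeline below are illustrative only, not the project's actual API:

```python
# Hypothetical sketch: each mode is an exponentially decaying sinusoid whose
# parameters come from the precomputed modal analysis; the click excitation
# sets the per-mode gains.
import numpy as np

SAMPLE_RATE = 44100  # Hz

def synthesize_modal_sound(freqs, dampings, gains, duration=1.0):
    """Fill an audio buffer with a sum of damped sinusoids.

    freqs    -- modal frequencies in Hz
    dampings -- per-mode damping coefficients (1/s)
    gains    -- per-mode excitation amplitudes (e.g. from the click impulse)
    """
    t = np.arange(int(duration * SAMPLE_RATE)) / SAMPLE_RATE
    buffer = np.zeros_like(t)
    for f, d, g in zip(freqs, dampings, gains):
        buffer += g * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    # Normalize to avoid clipping before handing the buffer to the audio device.
    peak = np.max(np.abs(buffer))
    return buffer / peak if peak > 0 else buffer

# Example: three modes of a hypothetical object excited by a click.
audio = synthesize_modal_sound(
    freqs=[440.0, 1200.0, 2750.0],
    dampings=[6.0, 12.0, 25.0],
    gains=[1.0, 0.5, 0.3],
)
```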

I’m sorry that I could only find a low-quality recording; it may be updated when I have spare time to record a new one (and please contact me if you really want it :-)).