Speaker: Alan Chalmers (University of Bristol)

The computer graphics industry, and in particular those involved with films, games and virtual reality, continues to demand ever more realistic multi-sensory computer-generated environments. In addition, there is an ever-increasing desire for multi-user networked interaction. Despite the ready availability of modern high-performance graphics cards, the complexity of the scenes being modelled, the need for interaction and the high fidelity required of the images and sound mean that synthesising such scenes in a reasonable time, let alone in real time, is still simply not possible on a single computer. Two approaches, however, appear to offer a way of achieving high-fidelity virtual environments in real time: Parallel Processing and Visual Perception. In Parallel Processing, a number of computers work together to render a single image; this appears to provide almost unlimited performance, but enabling many processors to work together efficiently is a significant challenge. Visual Perception, on the other hand, takes into account that it is a human who will ultimately be looking at the resulting images, and while the human eye is good, it is not that good. Exploiting knowledge of the human visual system can save significant rendering time by simply not computing those parts of a scene which the viewer will fail to notice. This talk will consider how parallel processing and visual perception may be combined to achieve perceptual realism in real time. The application considered for this approach is the high-fidelity reconstruction of archaeological sites.
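Purely as an illustration of how the two ideas might combine, the minimal Python sketch below splits an image into independent tiles rendered by a pool of workers (the parallel-processing part) and uses a crude foveation weight to cut the per-pixel sample count away from an assumed gaze point (the perceptual part). All names and parameters here (render_tile, foveation_weight, GAZE, the sample counts) are hypothetical stand-ins, not the speaker's actual system.

```python
# Illustrative sketch only: tile-parallel rendering combined with
# perception-driven selective sampling. The "shading" is a random
# placeholder; a real renderer would trace rays here.

import math
import random
from concurrent.futures import ProcessPoolExecutor

WIDTH, HEIGHT = 64, 64
TILE = 16
GAZE = (32, 32)      # assumed fixation point in pixel coordinates
MAX_SAMPLES = 64     # samples per pixel at the fovea
MIN_SAMPLES = 2      # floor for the far periphery

def foveation_weight(x, y):
    """Crude stand-in for a perceptual model: visual acuity falls off
    with distance from the fixation point."""
    d = math.hypot(x - GAZE[0], y - GAZE[1])
    return 1.0 / (1.0 + 0.08 * d)

def shade(x, y):
    """Placeholder for one ray-traced sample."""
    return random.random()

def render_tile(origin):
    """Render one tile; fewer samples are spent where the viewer is
    unlikely to notice the difference."""
    x0, y0 = origin
    pixels = {}
    for y in range(y0, min(y0 + TILE, HEIGHT)):
        for x in range(x0, min(x0 + TILE, WIDTH)):
            n = max(MIN_SAMPLES, int(MAX_SAMPLES * foveation_weight(x, y)))
            pixels[(x, y)] = sum(shade(x, y) for _ in range(n)) / n
    return pixels

if __name__ == "__main__":
    tiles = [(x, y) for y in range(0, HEIGHT, TILE)
                    for x in range(0, WIDTH, TILE)]
    image = {}
    # Tiles are independent, so they parallelise trivially across workers.
    with ProcessPoolExecutor() as pool:
        for pixels in pool.map(render_tile, tiles):
            image.update(pixels)
    print(f"rendered {len(image)} pixels across {len(tiles)} tiles")
```

In practice the weight would come from a validated perceptual model, and the scheduler would need load balancing, since foveal tiles cost far more than peripheral ones; that tension is exactly the challenge the abstract points to.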

Details

Duration: 45+15