Multi-Core Platforms for Beamforming and Wave Field Synthesis

Dimitris Theodoropoulos*, Georgi Kuzmanov, Georgi Gaydadjiev

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

25 Citations (Scopus)


Immersive-Audio technologies are widely used to build experimental and commercial audio systems. However, most of them are based on standard PCs, which introduce performance limitations and excessive power consumption. To address these drawbacks, we explore the implementation prospects of two Immersive-Audio technologies: beamforming (BF) and wave field synthesis (WFS). We target two popular multi-core platforms, namely graphics processing units (GPUs) and field-programmable gate arrays (FPGAs). We identify the most computationally intensive parts of both applications and employ the CUDA environment to map them onto a Quadro FX1700, a GeForce 8600GT, a GTX275, and a GTX460 GPU. Furthermore, we design custom multi-core hardware accelerators for both algorithms and map them onto Virtex-6 FPGAs. Both GPU and FPGA implementations are compared against OpenMP-annotated software running on a Core2 Duo at 3.0 GHz. Experimental results suggest that mid-range GPUs process data as fast as the Core2 Duo for BF, and approximately two times faster for WFS. However, high-end GPU and FPGA solutions provide an order of magnitude better performance for BF, and approximately two orders of magnitude better performance for WFS, than the Core2 Duo. Ultimately, single-chip GPU and FPGA implementations can provide more power-efficient solutions, since they can drive more complex microphone and loudspeaker setups than PC-based approaches.
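The abstract does not spell out the BF kernel itself; as a rough illustration of the computation being accelerated, the sketch below implements classic delay-and-sum beamforming in plain Python. All names (`delay_and_sum`, the integer-sample delays) are hypothetical and not taken from the paper; the per-microphone delay-weight-accumulate loop is the kind of data-parallel inner loop that maps naturally onto GPU threads or FPGA processing elements.

```python
# Hypothetical delay-and-sum beamformer sketch (not the paper's code).
# Each microphone signal is delayed by an integer number of samples,
# weighted, and accumulated into a single output channel.

def delay_and_sum(mic_signals, delays, weights=None):
    """mic_signals: list of equal-length sample lists, one per microphone.
    delays: per-microphone integer sample delays that align the wavefront.
    Returns the beamformed output signal as a list of floats."""
    n_mics = len(mic_signals)
    n_samples = len(mic_signals[0])
    if weights is None:
        weights = [1.0 / n_mics] * n_mics  # uniform weighting
    out = [0.0] * n_samples
    for m in range(n_mics):
        d, w = delays[m], weights[m]
        for t in range(n_samples):
            if 0 <= t - d < n_samples:
                out[t] += w * mic_signals[m][t - d]
    return out

# A pulse reaches the second microphone one sample later than the first;
# delaying the first microphone by one sample realigns the two copies.
mics = [[0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0]]
print(delay_and_sum(mics, delays=[1, 0]))  # pulse coherently summed at t = 2
```

WFS has a structurally similar kernel (each loudspeaker signal is a delayed, weighted copy of the source signal), which is why both algorithms parallelize well across GPU cores and FPGA accelerator lanes.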

Original language: English
Article number: 5661853
Pages (from-to): 235-245
Number of pages: 11
Journal: IEEE Transactions on Multimedia
Issue number: 2
Publication status: Published - Apr 2011
Externally published: Yes


  • Audio systems
  • Digital signal processors
  • Reconfigurable architectures
