The parallelizability of an algorithm is nowadays a highly desirable property, as computer hardware is becoming increasingly parallel. In this paper, a formulation of the particle filtering algorithm suitable for parallel or distributed computing is proposed. A series expansion is fitted to the posterior probability density function represented by the particle set. The global information expressed by the particles can in this way be compressed into a few informative coefficients that can be efficiently communicated between the local processing units. Experiments on a shared-memory multicore processor using up to 8 cores show that linear speedup in the number of cores used is achieved.
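The key idea in the abstract — compressing a particle set into a few series-expansion coefficients that combine across processing units with a single cheap reduction — can be sketched as follows. This is an illustrative example only: the paper's actual basis and filtering recursion are not given here, so a truncated Fourier expansion on [0, 1) is assumed, with uniform particle weights and a synthetic particle cloud. Because each coefficient is a weighted sum over particles, local coefficients computed from disjoint particle subsets sum exactly to the global ones, which is what makes the communication cost independent of the particle count.

```python
import numpy as np

def local_coeffs(x, w, K=8):
    """Fourier coefficients of the weighted empirical density on [0, 1)
    from a local particle subset: c_k = sum_i w_i * exp(-2*pi*i*k*x_i).
    Only K complex numbers need to be communicated, however many
    particles the subset contains."""
    k = np.arange(K)
    return (w[:, None] * np.exp(-2j * np.pi * k[None, :] * x[:, None])).sum(axis=0)

rng = np.random.default_rng(0)
x = rng.beta(2.0, 5.0, size=10_000)   # synthetic particles (hypothetical posterior)
w = np.full(x.size, 1.0 / x.size)     # uniform normalized weights

# Split the particles across four "workers"; each computes its coefficients
# locally, and one reduction (a sum) recovers the global coefficients exactly.
parts = np.array_split(np.arange(x.size), 4)
c_global = sum(local_coeffs(x[p], w[p]) for p in parts)

# The reduced result matches a single-process computation over all particles,
# and c_0 equals the total weight (= 1 for normalized weights).
assert np.allclose(c_global, local_coeffs(x, w))
```

Each worker here transmits only `K` complex coefficients rather than its full particle subset, which is the compression the abstract refers to; the choice of basis and truncation level `K` trades approximation quality against communication cost.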