Department of Information Technology

Methods in Image Data Analysis

The MIDA group focuses on the development of general methods for image data analysis. Our aim is to devise generally applicable methods that work well independently of the particular application and type of images used. We therefore strive for robust methods that perform well under varying conditions. Since we also aim for practically useful methods, we essentially always collaborate with other groups, including Quantitative Microscopy and MedIP - Medical Image Processing.

MIDA_Group.jpg

Combining Shape and Intensity Information

AffineSpots.gif


Similarity (or distance) measures between images are fundamental components of image analysis and are used in many tasks, such as

  • template matching,
  • image registration,
  • classification,
  • objective functions for training various types of neural networks.

We study measures that combine image intensity and spatial information efficiently, and aim to demonstrate that they lead to practical, robust, high-performance methods for these and other common tasks.
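As a toy illustration of the idea (not the group's actual measure), a distance between two grayscale images can mix spatial and intensity differences: each pixel p of one image is matched to the pixel q of the other that minimizes a weighted sum of the spatial distance ||p - q|| and the intensity difference, and the minima are averaged. The function name and the trade-off parameter `lam` are illustrative assumptions:

```python
import numpy as np

def combined_distance(img_a, img_b, lam=1.0):
    """Toy distance between two grayscale images on the same grid,
    mixing spatial and intensity information: each pixel p of img_a is
    matched to the pixel q of img_b minimizing
        ||p - q|| + lam * |img_a(p) - img_b(q)|,
    and the per-pixel minima are averaged.  Naive O(n^2) sketch."""
    h, w = img_a.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    va = np.asarray(img_a, dtype=float).ravel()
    vb = np.asarray(img_b, dtype=float).ravel()
    # all pairwise spatial distances between pixel positions
    d_spatial = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # combined cost: spatial distance plus weighted intensity difference
    cost = d_spatial + lam * np.abs(va[:, None] - vb[None, :])
    return float(cost.min(axis=1).mean())
```

A pure intensity measure (e.g. sum of squared differences) ignores how far misplaced structures are, while a pure spatial measure ignores contrast; combining both makes the measure degrade gracefully under small misalignments, which is what makes such measures attractive for registration and template matching.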

Precise Image-Based Measurements through Irregular Sampling

irregular_MM_rockpile.png


Operations within mathematical morphology depend strongly on the sampling grid and therefore, in general, produce a result different from that of the corresponding continuous-domain operation. Ideally, image-based measurements are sampling invariant; the morphological operators, however, are not, because:

  • The output depends on local suprema/infima, but local extrema are very likely to fall between sampling points.
  • The operators produce lines along which the derivative is discontinuous, thereby introducing unbounded frequencies; the result is not band-limited and therefore cannot be represented exactly under the classical sampling theorem.
  • The structuring element is limited by the sampling grid.

To tackle these issues we will use irregular sampling to capture local maxima and minima, and to increase the sampling density in areas where the derivative is discontinuous. A further benefit of moving mathematical morphology to irregularly sampled data is that the morphological operators can then be applied directly to inherently irregular data, e.g. point clouds, without resampling and interpolation.
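On irregular samples, a flat dilation has a natural definition: each sample takes the supremum of the values within a ball-shaped structuring element centred on it (erosion is the dual, with the infimum). A minimal sketch, using a naive O(n²) neighbour search purely for illustration:

```python
import numpy as np

def dilate_point_cloud(points, values, radius):
    """Flat morphological dilation on irregularly sampled data:
    each sample takes the supremum of the values of all samples
    within a Euclidean ball of the given radius (a disk/ball-shaped
    structuring element).  Naive O(n^2) neighbour search, for
    illustration only; a spatial index would be used in practice."""
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    # pairwise distances between sample positions
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    out = np.empty_like(values)
    for i in range(len(points)):
        out[i] = values[d[i] <= radius].max()  # supremum over the ball
    return out
```

Replacing `max` by `min` gives the erosion; openings and closings follow by composition. Note that no resampling onto a grid is needed at any point, which is exactly the benefit described above.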

Variational methods for image enhancement, restoration and segmentation

  • Denoising
  • Deconvolution/deblurring
  • Super resolution reconstruction
  • Segmentation
  • Defuzzification

A common approach to solving the ill-posed inverse problem of image restoration is to formulate it as an energy minimization problem. A priori knowledge is typically included through a regularization term. Total variation is among the most popular regularizers, due to its simplicity and generally good performance. We study and develop energy minimization based methods for enhancing images degraded by blur and different types of noise - Gaussian, Poisson, and mixed Poisson-Gaussian.
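The classical instance of this formulation is the Rudin-Osher-Fatemi (ROF) model, E(u) = ½‖u − f‖² + λ·TV(u). A minimal sketch of gradient descent on a smoothed version of this energy (the smoothing parameter `eps` avoids division by zero in the TV gradient; step size and iteration count are illustrative, and dedicated solvers such as Chambolle's algorithm would be used in practice):

```python
import numpy as np

def tv_denoise(f, lam=0.1, step=0.1, iters=200, eps=1e-6):
    """Gradient descent on the smoothed ROF energy
        E(u) = 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps),
    using forward-difference gradients, the matching backward-difference
    divergence, and periodic boundaries.  A minimal sketch."""
    u = f.astype(float).copy()
    for _ in range(iters):
        # forward differences (periodic boundaries via roll)
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        # divergence: (negative) adjoint of the forward-difference gradient
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # energy gradient: data term minus lam times divergence of
        # the normalized image gradient
        u -= step * ((u - f) - lam * div)
    return u
```

The data term ½‖u − f‖² corresponds to Gaussian noise; for Poisson or mixed Poisson-Gaussian noise it is replaced by the appropriate negative log-likelihood, while the regularizer stays the same.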

588d1e54b1abb.png

The Coverage Model

The coverage model provides a framework for representing continuous objects present in digital images as spatial fuzzy subsets. Assigned membership values indicate to what extent image elements are covered by the imaged objects. This model can be used to improve information extraction from digital images and to reduce problems originating from limited spatial resolution. We have developed a number of image segmentation methods that result in coverage representations. Given a coverage segmentation, features such as area, perimeter, diameter, etc. can be estimated with greatly increased precision compared to what is possible with classic binary approaches.
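The area estimate, for instance, is simply the sum of the pixel memberships. The sketch below illustrates the precision gain on a synthetic example; `disk_coverage` is a hypothetical stand-in for a real coverage segmentation, generating memberships by supersampling a disk:

```python
import numpy as np

def coverage_area(coverage):
    """Area estimate from a coverage representation: the sum of pixel
    memberships (the fraction of each pixel covered by the object)."""
    return float(np.sum(coverage))

def disk_coverage(size, cx, cy, r, ss=16):
    """Synthesize per-pixel coverage values of a disk by supersampling
    each pixel ss x ss (a stand-in for a coverage segmentation)."""
    t = (np.arange(size * ss) + 0.5) / ss      # subpixel sample positions
    X, Y = np.meshgrid(t, t)
    inside = (X - cx) ** 2 + (Y - cy) ** 2 <= r ** 2
    # fraction of subsamples inside the disk, per pixel
    return inside.reshape(size, ss, size, ss).mean(axis=(1, 3))
```

For a disk of radius 5 the true area is 25π ≈ 78.54; summing the coverage values recovers this to within a small fraction of a pixel, whereas counting thresholded binary pixels, `(coverage > 0.5).sum()`, is typically off by the order of a pixel or more, since all partial-coverage information along the boundary is discarded.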

CoverageEdge.png

Multi-layer object representations for texture analysis

MultiscaleDetection.png


Texture features such as local binary patterns have been shown to provide information complementary to that of plain intensity images in learning algorithms. We investigate methods for fusing texture and intensity sources, as well as the problems arising from the fact that many texture descriptors are unordered sets and require suitable (dis-)similarity measures in order to be processed by, for example, convolutional neural networks. We develop strategies to integrate more complex texture features into learning methods and evaluate their performance on various biomedical images. Such hybrid object representations show promising results in detection and segmentation in high-resolution transmission electron microscopy (TEM) images, taking us one step closer to the automation of pathological diagnostics.
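For readers unfamiliar with the descriptor: the basic 8-neighbour local binary pattern thresholds each pixel's neighbours against the centre value and packs the results into an 8-bit code; histograms of these codes then serve as the (unordered) texture feature. A minimal sketch (production code would use e.g. `skimage.feature.local_binary_pattern`, which also offers rotation-invariant variants):

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour local binary pattern codes for the interior
    pixels of a grayscale image: each neighbour that is >= the centre
    value contributes one bit to an 8-bit code."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                      # centre pixels (interior)
    code = np.zeros(c.shape, dtype=int)
    # neighbour offsets, clockwise from the top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        code |= (neigh >= c).astype(int) << bit
    return code
```

The code image is typically reduced to a histogram over the 256 patterns; comparing two textures then requires a histogram (dis-)similarity measure, which is exactly the kind of non-vector input that standard convolutional networks do not consume directly.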

Interpretation of classification behaviour of deep neural network models

Interp_of_NNs.png


Deep convolutional neural networks demonstrate state-of-the-art performance in many image analysis tasks; their opacity, however, makes it hard to infer how they arrive at a decision. We aim at detection of oral cancer at an early stage, where a reliable algorithm is particularly important. In our workflow, trained deep convolutional neural networks are used to classify cytological images as normal or abnormal. We examine methods that can improve understanding of the deep learning classification properties and enable interpretation of data classification. Furthermore, we would like to increase understanding of the premalignant state by exploring and visualizing which parts of cell images are considered most important for the task.
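One simple, model-agnostic way to visualize which image regions drive a classification is occlusion sensitivity: slide a constant patch over the image and record how much the class score drops. The sketch below is generic; `score_fn` is a placeholder for a trained network's class-probability output, not the group's specific method:

```python
import numpy as np

def occlusion_map(img, score_fn, patch=4, fill=0.0):
    """Occlusion-sensitivity sketch: slide a patch of constant value
    over a 2D image and record the drop in the classifier's score for
    the predicted class.  Large drops mark regions the model relies on.
    `score_fn` stands in for a trained network's scalar class score."""
    base = score_fn(img)
    h, w = img.shape
    heat = np.zeros((h - patch + 1, w - patch + 1))
    for y in range(heat.shape[0]):
        for x in range(heat.shape[1]):
            occluded = img.copy()
            occluded[y:y + patch, x:x + patch] = fill  # hide this region
            heat[y, x] = base - score_fn(occluded)     # score drop
    return heat
```

For cell images, high values in the resulting heat map indicate, for example, whether the network attends to the nucleus or to cytoplasmic texture, which is the kind of evidence needed to build trust in a diagnostic pipeline.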

Robust learning of geometric equivariances

The project builds on and extends recent work on geometric deep learning, and aims to combine it with manifold learning to produce truly learned equivariances, without the need for engineered solutions, and to maximize the benefits of shared weights (parameters to learn). Decreasing the number of parameters to learn leads to increased performance, generalizability, and reliability (robustness) of the network. An additional gain is a reduced risk that augmented data incorporates artefacts not present in the original data. A typical example is textured data, where the interpolation performed during augmentation by rotation and scaling unavoidably alters the original texture and may lead to unreliable results. Reliable texture-based classification is, on the other hand, of high importance in many biomedical applications.
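Equivariance of an operation F to a transform T means F(T(x)) = T(F(x)). A tiny numerical check of this property, using simple filters as stand-ins for network layers (the filters and names below are illustrative, not the project's models):

```python
import numpy as np

def is_equivariant(op, img, transform):
    """Empirically check whether op commutes with transform, i.e.
    op(transform(img)) == transform(op(img)) - the defining property
    of an equivariant layer."""
    return np.allclose(op(transform(img)), transform(op(img)))

def box3(img):
    """3x3 box filter with periodic boundaries: its kernel is symmetric
    under 90-degree rotation, so it is equivariant to np.rot90."""
    return sum(np.roll(np.roll(img, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

def hblur(img):
    """Horizontal-only blur: an anisotropic kernel, hence NOT
    equivariant to 90-degree rotation."""
    return (np.roll(img, 1, 1) + img + np.roll(img, -1, 1)) / 3.0
```

A layer that passes such checks for a whole transformation group needs no rotation/scaling augmentation for that group, which is precisely how learned equivariance avoids the interpolation artefacts described above.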

equivariance.png
Updated  2019-11-03 01:08:46 by Joakim Lindblad.