See all upcoming seminars in LäsIT and on the seminar web pages of the PhD student seminars, TDB, Vi2, Theory and Applications Seminars (TAS) @ UpMARC, the Department of Mathematics, and the Stockholm Logic Seminar.

Disputation | PhD defense
9 June
Virginia Grande Castro: That's How We Role! A Framework for Role Modeling in Computing and Engineering Education
Location: ÅNG 101195, Time: 13:15

Subtitle: A Focus on the Who, What, How, and Why


Opponent: Professor Alison Clear, Eastern Institute of Technology
Main Supervisor: Professor Mats Daniels, Uppsala University


Role model is a term used in everyday language and in the literature on education, particularly on diversity, equity, inclusion, and access, in connection with topics such as motivation and inspiration. However, as a loosely defined concept, role model is understood and used in different ways. This shows the need for a shared vocabulary and structure to scaffold nuanced reflections and discussions on the who, what, how, and why of role modeling.

This thesis describes the development of a framework for role modeling in computing and engineering education. It is focused on the role model’s perspective and is of particular use for educators as role models for students, although it can be used for others in this context. Educators were interviewed and surveyed, and the analysis comprised a phenomenographic approach, thematic coding analysis, argumentation, descriptive statistics, and group comparisons.

The framework includes the dimensions of awareness and intention of role modeling. All educators are potential role models, regardless of whether we are aware of what we are role modeling and whether we intend for it to be emulated. What can be modeled is presented as achievements and aspects. As lenses for reflecting on which of these a teacher should role model, we apply virtue ethics, care ethics, and ethics of freedom.

Context and norms matter in role modeling, for example in who is seen as a role model, as we argue drawing on research on identity and on the history of computing. We provide examples of how and why educators role model (or do not role model) care, emotions, and professional competencies outside the norms of the disciplines.

This thesis broadens how we understand and discuss role modeling in research and practice, including what can be modeled and obstacles to it. Practical examples (including reflection prompts) of how to use the framework are included for educators and other practitioners.

Half-time seminar
12 June
Ivy Weber: High order Hermite Finite Element methods for wave equations
Location: ÅNG 101142, Time: 10:15

To solve second-order wave equations directly, the application of a high order finite element method (FEM) using Hermite interpolation polynomials is investigated, and comparisons to high order FEM using Lagrange interpolation polynomials are made. A study in one spatial dimension shows that Hermite FEM has improved time-step stability compared to Lagrange FEM, reducing the minimum number of time-steps required to solve up to a specified time while maintaining stability. A second study demonstrates numerically that Hermite FEM has the expected level of accuracy, and proposes a method for maintaining this accuracy over time without increasing the number of mass matrix solves required. Finally, ongoing work to use Hermite FEM in the cutFEM framework is discussed.
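The distinguishing feature of Hermite elements is that the degrees of freedom include derivative values at the nodes, not just function values as in Lagrange elements. As a minimal one-dimensional illustration (a textbook sketch, unrelated to the thesis code), the cubic Hermite shape functions on a reference element interpolate both the value and the first derivative at each endpoint:

```python
import numpy as np

# Cubic Hermite shape functions on the reference element [0, 1].
# Unlike Lagrange elements, the degrees of freedom are the values AND
# the first derivatives at the element endpoints, which is what gives
# Hermite FEM its extra smoothness across element boundaries.
def hermite_shape_functions(x):
    """Return the four cubic Hermite basis functions evaluated at x."""
    x = np.asarray(x, dtype=float)
    h00 = 2 * x**3 - 3 * x**2 + 1   # value at left node
    h10 = x**3 - 2 * x**2 + x       # derivative at left node
    h01 = -2 * x**3 + 3 * x**2      # value at right node
    h11 = x**3 - x**2               # derivative at right node
    return h00, h10, h01, h11

def hermite_interpolate(u0, du0, u1, du1, x):
    """Interpolate on [0, 1] from endpoint values and derivatives."""
    h00, h10, h01, h11 = hermite_shape_functions(x)
    return u0 * h00 + du0 * h10 + u1 * h01 + du1 * h11

# Interpolating u(x) = sin(pi x) from its endpoint data reproduces the
# endpoint values and derivatives exactly; the interior error is bounded
# by the usual O(h^4) cubic Hermite estimate.
x = np.linspace(0.0, 1.0, 11)
u = hermite_interpolate(np.sin(0.0), np.pi * np.cos(0.0),
                        np.sin(np.pi), np.pi * np.cos(np.pi), x)
```

In an actual Hermite FEM discretization these shape functions are assembled element by element, with the shared nodal value and derivative enforcing C1 continuity across element boundaries.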

Licentiatseminarium | Licentiate seminar
12 June
Håkan Runvik: Modeling and Estimation of Impulsive Biomedical Systems
Location: ÅNG 101130, Time: 13:00

External reviewer: Professor Torsten Wik, Chalmers
Advisor: Maria Kjellsson
Supervisor: Professor Alexander Medvedev, Uppsala University
Dynamical systems are often expressed in either continuous or discrete time. Some biomedical processes are however more suitably modeled as impulsive systems, which combine continuous dynamics with abrupt changes of the state of the system. This thesis concerns two such systems: the pharmacokinetics of the anti-Parkinson’s drug levodopa, and testosterone regulation in the human male. Despite the differences between these systems, they can be modeled in similar ways. Modeling entails not only the model, but also the methods used to estimate its parameters. Impulsive dynamics can enable simpler representations compared with using continuous dynamics alone, but may also complicate the estimation procedure, since standard techniques often cannot be used. The contributions of this thesis are therefore both in model development and parameter estimation.
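To make the notion of an impulsive system concrete, here is a minimal sketch (with a made-up first-order elimination rate and invented impulse times and weights, purely for illustration): between impulses the state decays continuously as x'(t) = -a x(t), and each impulse at time t_k adds its weight instantaneously.

```python
import numpy as np

# Minimal sketch of an impulsive linear system: continuous first-order
# decay x'(t) = -a * x(t) between impulses, plus an instantaneous jump
# of size d_k at each impulse time t_k. The closed-form solution is a
# superposition of decaying exponentials started at the impulse times.
def simulate_impulsive(a, impulse_times, weights, t_grid):
    """Evaluate x(t) = sum_k d_k * exp(-a * (t - t_k)) for t >= t_k."""
    x = np.zeros_like(t_grid, dtype=float)
    for t_k, d_k in zip(impulse_times, weights):
        active = t_grid >= t_k
        x[active] += d_k * np.exp(-a * (t_grid[active] - t_k))
    return x

# Illustrative (invented) parameters: elimination rate 0.5, three
# impulses of decreasing weight.
t = np.linspace(0.0, 10.0, 1001)
x = simulate_impulsive(a=0.5, impulse_times=[1.0, 4.0, 7.0],
                       weights=[2.0, 1.5, 1.0], t_grid=t)
```

The estimation problems treated in the thesis run in the opposite direction: given noisy measurements of x(t), recover the impulse times, the weights, and the elimination rate a, which is what makes the problem hard.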

Model development is the topic of Paper I. It presents a model of the multi-peaking phenomenon in levodopa pharmacokinetics, which is manifested as secondary peaks in the blood concentration profile of the drug. The remaining papers focus on estimation, in a setup where a sequence of impulses is fed to a linear plant, whose output is measured. Two estimation techniques are considered. The first is presented in Paper II and uses a Laguerre domain representation to estimate the timing and weights of the impulses. The second combines estimation of the impulsive input with estimation of the plant parameters, which represent the elimination rates of testosterone-regulating hormones. This problem is particularly challenging since increasing the estimated elimination rates and the number of impulses generally improves the model fit, but only models with sparse input signals are practically useful. Paper III addresses this issue through a novel regularization method. The uncertainties in model and measurements encountered when working with clinical hormone data add another layer of complexity to the problem; methods for handling such issues are described in Paper IV.

Disputation | PhD defense
12 June
Elisabeth Wetzer: Representation Learning and Information Fusion, Applications in Biomedical Image Processing
Location: Polhemsalen, ÅNG 10134, Time: 9:15

Opponent: Professor Fred Hamprecht, Heidelberg University
Supervisor: Professor Natasa Sladoje, Uppsala University

In recent years, Machine Learning and in particular Deep Learning have excelled at object recognition and classification tasks in computer vision. As these methods learn task-relevant features directly from the data, a key aspect of this remarkable success is the amount of data on which they are trained. Biomedical applications face the problem that the amount of training data is limited; in particular, labels and annotations are usually scarce and expensive to obtain, as they require biological or medical expertise. One way to overcome this issue is to use additional knowledge about the data at hand. This guidance can come from expert knowledge, which puts the focus on specific, relevant characteristics in the images, or from geometric priors, which can be used to exploit the spatial relationships in the images. This thesis presents machine learning methods for visual data that exploit such additional information and build upon classic image processing techniques, combining the strengths of both model- and learning-based approaches.

The thesis comprises five papers with applications in digital pathology. Two of them study the use and fusion of texture features within convolutional neural networks for image classification tasks. The other three papers study rotationally equivariant representation learning and show that learned, shared representations of multimodal images can be used for multimodal image registration and cross-modality image retrieval.

Disputation | PhD defense
14 June
Natalia Calvo Barajas: Exploring Multidimensional Trust: Shaping Child-Robot Creative Collaborations in Education
Location: ITC 10134, Time: 13:15

Opponent: Professor Kerstin Fischer, University of Southern Denmark
Supervisor: Professor Ginevra Castellano, Uppsala University

Disputation | PhD defense
14 June
David Widmann: Reliable Uncertainty Quantification in Statistical Learning
Location: Häggsalen (10132), Ångströmlaboratoriet, Time: 9:15

Opponent: Dino Sejdinovic, University of Adelaide
Supervisor: Fredrik Lindsten, Linköping University

Mathematical models are powerful yet simplified abstractions used to study, explain, and predict the behavior of systems of interest. This thesis is concerned with their latter application as predictive models. Predictions of such models are often inherently uncertain, as exemplified in weather forecasting and experienced with epidemiological models during the COVID-19 pandemic. Missing information, such as incomplete atmospheric data, and the very nature of models as approximations ("all models are wrong") imply that predictions are at most approximately correct.

Probabilistic models alleviate this issue by reporting not a single point prediction ("rain"/"no rain") but a probability distribution of all possible outcomes ("80% probability of rain"), representing the uncertainty of a prediction, with the intention to be able to mark predictions as more or less trustworthy. However, simply reporting a probabilistic prediction does not guarantee that the uncertainty estimates are reliable. Calibrated models ensure that the uncertainty expressed by the predictions is consistent with the prediction task and hence the predictions are neither under- nor overconfident. Calibration is important in particular in safety-critical applications such as medical diagnostics and autonomous driving where it is crucial to be able to distinguish between uncertain and trustworthy predictions. Mathematical models do not necessarily possess this property, and in particular complex machine learning models are susceptible to reporting overconfident predictions.
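As a simple illustration of what a calibration measure computes, the sketch below implements the standard binned expected calibration error (ECE) for binary predictions. This is a textbook measure, not the kernel-based calibration measures contributed by the thesis (whose accompanying software is written in Julia); Python is used here only for illustration.

```python
import numpy as np

# Binned expected calibration error (ECE) for binary predictions:
# group predictions into equal-width probability bins and average the
# gap between mean predicted probability (confidence) and observed
# frequency of the positive outcome (accuracy), weighted by bin size.
def expected_calibration_error(probs, labels, n_bins=10):
    """Weighted average of |confidence - accuracy| over probability bins."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (probs >= lo) & (probs < hi) if hi < 1.0 else (probs >= lo)
        if in_bin.any():
            gap = abs(probs[in_bin].mean() - labels[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# A calibrated "80% probability of rain" forecaster: among days with
# predicted probability 0.8, it actually rains on 8 out of 10.
probs = np.array([0.8] * 10)
labels = np.array([1] * 8 + [0] * 2)
```

A calibrated forecaster yields an ECE near zero, while the same predictions paired with all-dry outcomes would yield an ECE of 0.8, an overconfident model in the sense described above.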

The main contributions of this thesis are new statistical methods for analyzing the calibration of a model, consisting of calibration measures, their estimators, and statistical hypothesis tests based on them. These methods are presented in the five scientific papers in the second part of the thesis. In the first part, the reader is introduced to probabilistic predictive models, the analysis of calibration, and the positive definite kernels that form the basis of the proposed calibration measures. The contributed tools for calibration analysis cover in principle any predictive model and are applied specifically to classification models with an arbitrary number of classes, models for regression problems, and models arising from Bayesian inference. This generality is motivated by the need for more detailed calibration analysis of today's increasingly complex models. To simplify the use of the statistical methods, a collection of software packages for calibration analysis written in the Julia programming language is made publicly available and supplemented with interfaces to the Python and R programming languages.

Thesis in DiVA

See also the list of all upcoming seminars.

Internal seminars. Lecturers may be either internal or external.