Using Deep Reinforcement Learning to Teach a Machine a World Understanding
Viviane Clay is working on several techniques to teach a machine a semantic world understanding in a largely unsupervised fashion.
Supervisors: Peter König, Kai-Uwe Kühnberger, Gordon Pipa
Complexity, Self-Organization and Emergence in a Multi-Agent System through Microcosm Simulation
Julius simulates microcosms (multi-agent systems representing populations of simple organisms) to analyze the system properties of self-organization, emergence, and complexity.
Supervisors: Gordon Pipa, Elia Bruni
The neural mechanics of lifelong learners
Daniel researches why learning algorithms and artificial networks fail in continual learning scenarios. He takes inspiration from properties of biological brains to identify mechanisms and inductive biases that may enable artificial neural networks to become successful continual learners.
Supervisors: Tim C. Kietzmann, Peter König
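The description does not name specific mechanisms; one widely cited brain-inspired inductive bias from the continual-learning literature is elastic weight consolidation (EWC; Kirkpatrick et al., 2017), which anchors parameters that were important for earlier tasks. The sketch below illustrates only the EWC penalty term; it is not this project's method, and all values are hypothetical.

    # Illustration only, not the project's method: the elastic weight consolidation
    # (EWC) penalty anchors each parameter theta_i to its task-A value theta_A_i with
    # a strength given by its diagonal Fisher information F_i:
    #     L(theta) = L_B(theta) + (lam / 2) * sum_i F_i * (theta_i - theta_A_i)**2
    import numpy as np

    def ewc_penalty(theta, theta_A, fisher_diag, lam=1.0):
        """Quadratic penalty discouraging changes to parameters that were
        important (high Fisher information) for the previously learned task A."""
        theta, theta_A, fisher_diag = map(np.asarray, (theta, theta_A, fisher_diag))
        return 0.5 * lam * np.sum(fisher_diag * (theta - theta_A) ** 2)

    # Hypothetical toy values: dimension 0 mattered a lot for task A, dimension 1 did not.
    theta_A = np.array([1.0, -0.5])
    fisher = np.array([5.0, 0.1])
    # A small shift on the important dimension costs more than a large shift on the
    # unimportant one.
    print(ewc_penalty([1.2, 0.5], theta_A, fisher))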
Cortical spike synchrony as a measure of contour uniformity
The goal of Viktoria's project is to develop a spiking model of brain area V1 that exhibits synchronous neuronal activity in response to visual stimuli with specific geometrical properties.
Supervisor: Gordon Pipa
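The description does not specify the neuron model. As a generic illustration of stimulus-driven spike synchrony, the sketch below (with assumed parameter values, not the project's actual V1 model) simulates two leaky integrate-and-fire neurons that share a common input and therefore tend to fire at nearly the same times.

    # Minimal sketch (not the project's model): two leaky integrate-and-fire neurons
    # driven by a shared input tend to spike synchronously, illustrating the kind of
    # stimulus-driven synchrony a spiking V1 model aims to capture.
    import numpy as np

    rng = np.random.default_rng(0)

    dt, T = 1e-3, 2.0                      # time step (s), total duration (s)
    tau, v_rest, v_th, v_reset = 0.02, 0.0, 1.0, 0.0
    n_steps = int(T / dt)

    shared = 1.2 + 0.5 * rng.standard_normal(n_steps)   # common drive for both neurons
    v = np.zeros(2)                                      # membrane potentials
    spikes = [[], []]

    for t in range(n_steps):
        private = 0.2 * rng.standard_normal(2)           # independent noise per neuron
        I = shared[t] + private
        v += dt / tau * (-(v - v_rest) + I)              # leaky integration (Euler step)
        for i in range(2):
            if v[i] >= v_th:                             # threshold crossing -> spike
                spikes[i].append(t * dt)
                v[i] = v_reset

    # Crude synchrony measure: fraction of neuron-0 spikes with a neuron-1 spike
    # within +/- 5 ms.
    s0, s1 = np.array(spikes[0]), np.array(spikes[1])
    if len(s0) and len(s1):
        sync = np.mean([np.min(np.abs(s1 - t)) < 5e-3 for t in s0])
        print(f"{len(s0)} / {len(s1)} spikes, synchrony index: {sync:.2f}")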
At the Interface of Signalling Theory and Deep Learning for Disentangled Speech Representation Learning
In this project, Yusuf Brima investigates how effective disentangled speech representations can be learned via self-supervised deep learning.
Supervisors: Gunther Heidemann, Simone Pika
Graph-theoretical analysis of eye tracking data recorded in complex VR cities to investigate spatial navigation
Eye tracking data recorded in virtual reality with freedom of movement require new analysis approaches. In this project, Jasmin L. Walter proposes a new method to quantify characteristics of visual behavior by applying graph-theoretical measures to eye tracking data. Using this methodology, she investigates visual behavior during free exploration of a virtual city and assesses global spatial navigation characteristics.
Supervisor: Peter König
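To illustrate the general idea of graph-theoretical eye tracking analysis, the sketch below builds a gaze graph from a hypothetical fixation sequence, with gazed-at buildings as nodes and direct gaze transitions as edges, and computes a few standard measures with networkx. The data format and graph construction are assumptions for illustration, not necessarily the project's actual pipeline.

    # Minimal sketch (assumed data format): build a "gaze graph" whose nodes are
    # gazed-at buildings and whose edges connect buildings looked at in direct
    # succession, then compute standard graph-theoretical measures.
    import networkx as nx

    # Hypothetical fixation sequence: the building hit by gaze, in temporal order.
    fixation_targets = ["townhall", "cafe", "townhall", "church", "cafe", "station", "church"]

    G = nx.Graph()
    for a, b in zip(fixation_targets, fixation_targets[1:]):
        if a != b:                      # ignore repeated fixations on the same building
            G.add_edge(a, b)

    # Graph-theoretical measures characterizing the visual behavior globally.
    print("node degree:        ", dict(G.degree()))
    print("betweenness:        ", nx.betweenness_centrality(G))
    print("average clustering: ", nx.average_clustering(G))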
Adapting the rational speech act framework for visual interaction
In this project, Jasmin L. Walter investigates whether the rational speech act framework also applies to non-verbal, more specifically visual, interaction and communication.
Supervisors: Peter König, Gordon Pipa, Tim Kietzmann
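For readers unfamiliar with the framework, the sketch below implements the standard rational speech act recursion (literal listener, pragmatic speaker, pragmatic listener) for a toy reference game. The lexicon and rationality parameter are illustrative choices; the non-verbal, visual adaptation studied in this project is not modeled here.

    # The rational speech act (RSA) framework models communication as recursive
    # Bayesian reasoning between a speaker and a listener. Standard reference-game
    # formulation with a hypothetical toy lexicon (not the project's adaptation).
    import numpy as np

    # Rows: utterances, columns: referents {blue square, blue circle, green square}.
    # A 1 means the utterance is literally true of the referent.
    lexicon = np.array([
        [1, 1, 0],   # "blue"
        [0, 0, 1],   # "green"
        [1, 0, 1],   # "square"
        [0, 1, 0],   # "circle"
    ], dtype=float)
    prior = np.ones(3) / 3       # uniform prior over referents
    alpha = 4.0                  # speaker rationality parameter

    def normalize(m, axis):
        return m / m.sum(axis=axis, keepdims=True)

    L0 = normalize(lexicon * prior, axis=1)                   # literal listener P(r | u)
    S1 = normalize(np.exp(alpha * np.log(L0 + 1e-12)).T, 1)   # pragmatic speaker P(u | r)
    L1 = normalize(S1.T * prior, axis=1)                      # pragmatic listener P(r | u)

    # Pragmatic strengthening: hearing "blue", the pragmatic listener favors the
    # blue square, the referent for which no more specific label exists.
    print("L1('blue') =", np.round(L1[0], 2))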
The role of language and pragmatics in higher-level cognition: forming abstract concepts in social interaction
Using iterative, agent-based computational modeling, Kristina Kobrock aims to answer the research question: “Under which circumstances do more and more abstract concepts evolve?”
Supervisors: Nicole Gotzner, Elia Bruni
Effects of Humanoid Agents on free spatial navigation patterns
Tracy's project analyzes the impact of the presence of humanoid agents on people's ability to remember locations inside a virtual city.
Supervisors: Gordon Pipa, Peter König, Sabine König
Towards a better understanding of visual information sampling in the brain: neural correlates and deep neural network models of the exploration-exploitation dilemma
To investigate which aspects of the continuously changing neural signatures are predictive of human fixation durations, Philip Sulewski records neural activity using magnetoencephalography (MEG) combined with eye tracking while subjects visually explore natural scenes.
Supervisors: Tim C. Kietzmann, Peter König
Improving the Signal Quality of a Mobile EEG Device with Deep Learning
Laura is working on the DreamMachine, a low-cost mobile EEG device developed in the NI research group. Her goal is to improve the signal quality and spatial resolution of the device by applying different deep learning architectures.
Supervisor: Gordon Pipa
Self-organised grammar learning with a plastic recurrent network
A major goal of Sophie Lehfeldt's PhD project is to train a recurrent neural network to learn grammatical structures, as found in natural language, in a self-organised fashion.
Supervisors: Gordon Pipa, Jutta Mueller
Project Westdrive: a large-scale VR foundation for immersive experiments on human-computer interaction
Would you trust a robot to drive your car? Maximilian Wächter's goal is to gain insights into human trust-building behavior and ultimately to lower reservations regarding this technology. To this end, he developed a large-scale, highly realistic VR simulation with AI-controlled cars as an eye-tracking experiment.
Supervisors: Peter König, Gordon Pipa
Language emergence in artificial agents
Xenia Ohmer develops computational models of language learning and emergence in artificial agents. First, she uses these models to gain insights into the role of pragmatic reasoning in human language learning; second, she tries to integrate pragmatic reasoning mechanisms into artificial agents designed for language learning or communication.
Supervisors: Michael Franke, Peter König
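As a minimal illustration of language emergence, the sketch below implements a Lewis signaling game in which a sender and a receiver learn a shared code purely from communicative success via simple reinforcement. This is a standard toy setting with assumed sizes and learning rule, not Xenia Ohmer's actual models, which additionally involve pragmatic reasoning.

    # Lewis signaling game with simple reinforcement: a sender maps states to
    # messages, a receiver maps messages to actions, and both reinforce whatever
    # pairing led to success, so a shared code emerges.
    import numpy as np

    rng = np.random.default_rng(0)
    n_states = n_messages = n_actions = 3

    sender_w = np.ones((n_states, n_messages))    # propensities: state -> message
    receiver_w = np.ones((n_messages, n_actions)) # propensities: message -> action

    def sample(weights):
        p = weights / weights.sum()
        return rng.choice(len(p), p=p)

    for episode in range(5000):
        state = rng.integers(n_states)
        msg = sample(sender_w[state])
        act = sample(receiver_w[msg])
        if act == state:                          # success -> reinforce the used pair
            sender_w[state, msg] += 1.0
            receiver_w[msg, act] += 1.0

    # After training, each state should map to a (mostly) unique message.
    print("sender policy:  ", np.round(sender_w / sender_w.sum(1, keepdims=True), 2))
    print("receiver policy:", np.round(receiver_w / receiver_w.sum(1, keepdims=True), 2))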
Probabilistic modeling of rational communication with conditionals
Britta Grusdt studies the interpretation of the little word “if”, as it is an excellent showcase of the context-dependence of language understanding and logical reasoning.
Supervisors: Michael Franke, Mingya Liu
VR environment to study context dependent visual perception
How does attention influence our visual perception depending on the task? To answer this, Marc Vidal de Palol uses a novel experimental methodology that combines VR, eye tracking, and EEG.
Supervisors: Gordon Pipa, Peter König
Incorporating motion into PeriNet - a computational model for central and peripheral vision
This project helps to advance our understanding of the human visual system and to develop efficient, biologically plausible end-to-end computational models of vision. With the PeriNet computational model, Hristofor Lukanov addresses the split between central and peripheral vision.
Supervisors: Gordon Pipa, Peter König
The semantics, pragmatics, and acquisition of polarity items
Juliane Schwab studies positive and negative polarity items in natural language. Her project contributes to our understanding of the processing and learning mechanisms at the interface of syntax, semantics, and pragmatics.
Supervisors: Mingya Liu, Jutta Mueller
Semi-supervised Conceptors and Conceptor Logic
Conceptors were introduced by H. Jaeger in 2014 as a mathematical formalism for deriving and manipulating internal representations of concepts in neural networks and reintroducing them into the network dynamics. Georg Schroeter further explores the theoretical foundations and possible applications of conceptors to both recurrent and feed-forward neural network architectures.
Supervisors: Kai-Uwe Kühnberger, Gordon Pipa
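For readers new to the formalism, the sketch below computes a conceptor matrix C = R (R + aperture^-2 * I)^-1 from the correlation matrix R of reservoir states collected while a small random network is driven by a sine pattern, following Jaeger's definition. The network size, driving pattern, and aperture value are illustrative choices, not taken from the project.

    # Minimal conceptor sketch with illustrative sizes and aperture.
    import numpy as np

    rng = np.random.default_rng(1)
    N, T, aperture = 50, 500, 10.0

    W = rng.standard_normal((N, N)) / np.sqrt(N)         # recurrent reservoir weights
    W_in = rng.standard_normal(N)                        # input weights
    pattern = np.sin(2 * np.pi * np.arange(T) / 20)      # driving signal

    # Drive the reservoir and collect its states.
    x = np.zeros(N)
    states = []
    for t in range(T):
        x = np.tanh(W @ x + W_in * pattern[t])
        states.append(x.copy())
    X = np.array(states).T                               # shape (N, T)

    # Conceptor: C = R (R + aperture^-2 I)^-1, with R the state correlation matrix.
    R = X @ X.T / T
    C = R @ np.linalg.inv(R + aperture**-2 * np.eye(N))

    # One of Jaeger's conceptor logic operations: NOT C = I - C.
    NOT_C = np.eye(N) - C

    # The singular values of C lie in [0, 1): a soft projection onto the state
    # directions excited by the driving pattern.
    print(np.round(np.linalg.svd(C, compute_uv=False)[:8], 2))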