International Seminar on Signal Processing

Lecturer: Wolfgang Utschick and Josef A. Nossek

In Cooperation with:
Prof. Markus Rupp (Institute of Telecommunications, TU Vienna)
Prof. Armin Wittneben (Communication Technology Laboratory, ETH Zürich)

Organization: Hela Jedda

Target Audience: Master EI

Language: English

Offered in: Summer Term

Description

The Institute of Telecommunications at TU Vienna (Prof. Rupp, Prof. Mecklenbräuker and Prof. Görtz), the Communication Technology Lab at ETH Zürich (Prof. Wittneben) and the Methods of Signal Processing Group at Technische Universität München (Prof. Utschick) are organizing an international seminar on selected topics in signal processing and communications for students from the participating institutions during the summer term 2018. The students (4-6 from each department) are offered potential topics by the corresponding supervisors, collect the required literature, work through the topic, summarize it in a two-page abstract and finally give a scientific talk. The talks are held in Zürich, Vienna and Munich. The travel and accommodation costs are covered by the organizers.

Application

Please send your informal application with an up-to-date transcript of records by April 1, 2018 to hellings@tum.de.

Scheduled Dates (Summer Term 2018)

25 May 2018: ETH Zürich
1 June 2018: TU Wien
22 June 2018: TU München

Topics Offered in Summer Term 2018

Transforming Per-Antenna Power Constraints to Sum Power Constraints

(supervised by Andreas Barthelme)

Precoding at the transmitter is essential to obtain high data rates in the downlink direction of today's communication systems. The precoder design itself can be cast as an optimization problem, where the objective and constraints can differ depending on the design criterion. Independent of this, the optimization of the precoders is always subject to a certain power budget. Considering a sum power constraint, i.e., a limit for the total power consumption of the system, the arising optimization problems may be solved efficiently. However, in actual communication systems, each RF chain has its own power amplifier, which leads to so called per-antenna power constraints. Solving per-antenna power constrained optimization problems typically turns out to be more complex than under a sum power constraint.
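
To illustrate the difference (in generic notation that is not taken from the seminar literature), let T_1, ..., T_K denote the precoders of K users at a transmitter with N antennas and unit-variance data symbols. A sum power constraint limits the total radiated power, whereas per-antenna power constraints bound the power of each antenna element separately:

    sum_k tr( T_k T_k^H ) <= P_total                        (sum power constraint)
    [ sum_k T_k T_k^H ]_{n,n} <= P_n,   n = 1, ..., N       (per-antenna power constraints)

The second formulation imposes N coupled constraints on the diagonal entries of the transmit covariance matrix, which is one reason why such problems are typically harder to solve.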

Recently, a new method has been proposed that allows the transformation of certain per-antenna power constrained problems to sum power constrained problems by the help of Lagrange duality. This approach actually leads to an efficient solution of the ZF precoding problem with generalized inverses that has been formulated by Wiesel et al. ten years ago. In the seminar, the student shall present this new transformation technique and discuss its applicability.

Full Duplex in MG.fast DSL Systems

(supervised by Andreas Barthelme and Michael Newinger)

As the standardization of the 4th generation of broadband access networks, so-called G.fast, is almost finished, the development of the next DSL generation has started. Apart from increasing the usable frequency bandwidth even further, the application of full duplex transmission is currently being discussed. By transmitting in both the uplink and downlink directions at the same time and frequency, the achievable data rates could potentially increase by a factor of 2 compared to half duplex systems (TDD or FDD). However, self-interference, i.e., the interference a device causes to itself by its own transmitted signal, and near-end crosstalk at the users (CPEs), i.e., receiving the transmit signals of other users, are major problems.
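
As a rough illustration (a generic signal model, not taken from a specific MG.fast proposal), the signal received in the downstream direction by user k in a full duplex system can be written as

    y_k = h_k x_k^DS + h_k^SI x_k^US + sum_{j != k} g_{k,j} x_j^US + n_k

where the first term is the desired downstream signal, the second term is the self-interference caused by user k's own upstream transmit signal, the sum collects the near-end crosstalk from the upstream signals of the other users, and n_k is noise.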

In this seminar talk, the unique properties of full duplex transmission in the context of MG.fast DSL systems shall be analyzed. Furthermore, the student shall assess the quality of the proposed optimization strategies.

Generative Adversarial Nets

(supervised by Oliver De Candido)

In recent years, the promise of deep learning methods to efficiently represent rich and complicated data structures has been at the forefront of research. However, the focus has primarily been on deep discriminative models, which typically map a high-dimensional input data stream to a specific class label (e.g., labelling objects in an image). The gains of these discriminative models have been built upon the success of the back-propagation (back-prop) and dropout algorithms. Deep generative models, on the other hand, have had less success, due to the difficulty of approximating the probabilistic computations that arise in maximum likelihood estimation.

The framework behind deep Generative Adversarial Nets (GANs) is a novel concept for estimating generative models by simultaneously training two deep networks that are pitted against each other. On one side, a generative model G is trained to capture the input data distribution; on the other side, a discriminative model D is trained to differentiate between true inputs and inputs generated by G. This framework is equivalent to a two-player minimax game, in which G tries to maximize the probability that D makes a mistake, and D tries to minimize this probability.
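
In the notation of the original GAN formulation, this game can be written as the minimax problem

    min_G max_D V(D, G) = E_{x ~ p_data}[ log D(x) ] + E_{z ~ p_z}[ log(1 - D(G(z))) ]

where D(x) denotes the probability that x stems from the data rather than from the generator, and z is a noise vector from which G generates samples.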

In this seminar talk, the topic of deep generative models will be studied, and the framework behind GANs will be presented and discussed.

Learning the MMSE Channel Estimator

(supervised by Christoph Hellings)

Accurate channel estimation is a major challenge in the next generation of wireless communication networks, e.g., in cellular massive MIMO or millimeter-wave networks. In order to fully exploit the potential of large antenna arrays in future systems, good channel estimates are particularly important in setups with many antennas and low signal-to-noise ratios (SNRs). To obtain such estimates without massive training overhead, it is necessary to exploit some form of low-dimensional structure in the channels. However, this leads to complicated stochastic models, in which the minimum mean square error (MMSE) estimates of the channel cannot be calculated in closed form. One of the recently proposed solutions to this problem is a low-complexity estimator that is based on methods from the field of machine learning. In particular, a convolutional neural network learns the particularities of the considered channel models in an offline training phase and can then compute low-complexity channel estimates online based on pilot data. The seminar talk should give an overview of the basic ideas behind this novel approach.
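
As a point of reference (a standard special case, not the method treated in the talk): if the channel h and the noise in the observation y = h + n are zero-mean jointly Gaussian with channel covariance matrix C_h and noise covariance sigma^2 I, the MMSE estimate is the linear filter

    h_MMSE = E[ h | y ] = C_h ( C_h + sigma^2 I )^{-1} y.

If C_h itself depends on random parameters of the propagation environment, however, the conditional mean becomes a nonlinear function of y without a closed-form expression; the learning-based estimator mentioned above can be understood as a low-complexity approximation of such a mapping.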

Distributed Optimization with the Alternating Direction Method of Multipliers

(supervised by Matthias Hotz)

With the increasing availability of huge data sets, very large-scale optimization problems are encountered, e.g., in machine learning, signal processing, and operations research. They may benefit from or even necessitate methods that enable a distributed solution. In this work, the popular and versatile approach known as the alternating direction method of multipliers (ADMM) shall be studied and reviewed. Based on two tutorial papers on the subject, ADMM is motivated, described and illustrated by an example. Furthermore, to attain a deeper understanding of this iterative method, the proof of convergence is outlined.
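
For reference, the scaled-form ADMM iteration for a problem of the form minimize f(x) + g(z) subject to Ax + Bz = c (a generic template; the notation is not taken from the seminar literature) consists of three steps:

    x^{k+1} = argmin_x  f(x) + (rho/2) || A x + B z^k - c + u^k ||_2^2
    z^{k+1} = argmin_z  g(z) + (rho/2) || A x^{k+1} + B z - c + u^k ||_2^2
    u^{k+1} = u^k + A x^{k+1} + B z^{k+1} - c

Here u is the scaled dual variable and rho > 0 a penalty parameter. Since each step only involves one block of variables, the x- and z-updates can often be carried out in parallel across subproblems, which is what makes the method attractive for distributed optimization.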

The Backpropagation Algorithm as a Powerful Tool in Deep Neural Networks

(supervised by Hela Jedda)

Reverse-mode differentiation, also called backpropagation, was introduced in the 1970s. However, it received little attention until its importance for training neural networks was highlighted in 1986. The backpropagation algorithm computes gradients much faster than earlier approaches, which makes it a powerful tool in deep learning problems.

In this seminar, the student will gain insights into three fundamental topics:

  • The backpropagation algorithm
  • An introduction to neural networks and deep learning
  • The application of the backpropagation algorithm in neural networks (see the sketch below)
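
As a minimal illustration of the idea (a hypothetical toy example in Python, not part of the seminar material), backpropagation applies the chain rule layer by layer, so that all weight gradients are obtained from a single backward pass:

    # Hypothetical toy example of backpropagation for a network with one hidden
    # layer, sigmoid activations and squared-error loss (illustrative only).
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([0.5, -1.0, 2.0])      # one training input
    t = np.array([1.0, 0.0])            # corresponding target

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 3))        # hidden-layer weights
    W2 = rng.normal(size=(2, 4))        # output-layer weights

    # forward pass
    z1 = W1 @ x
    a1 = sigmoid(z1)
    z2 = W2 @ a1
    a2 = sigmoid(z2)

    # backward pass: propagate the error through the layers via the chain rule
    delta2 = (a2 - t) * a2 * (1 - a2)          # error at the output layer
    delta1 = (W2.T @ delta2) * a1 * (1 - a1)   # error at the hidden layer

    # gradients of the loss with respect to the weights
    grad_W2 = np.outer(delta2, a1)
    grad_W1 = np.outer(delta1, x)

    # one gradient-descent step
    step_size = 0.1
    W2 -= step_size * grad_W2
    W1 -= step_size * grad_W1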

Unfolding Iterative Algorithms as a Way of Applying Deep Learning to Sparse Linear Inverse Problems

(supervised by Michael Koller)

In sparse linear inverse problems, the goal is to recover a vector x from measurements y = Ax, where the matrix A is known and where x is assumed to have only few non-zero elements. There exists a variety of iterative algorithms that try to approximate the solution x. Examples include iterative soft-thresholding (IST) and approximate message passing (AMP). Using a technique called unfolding, these algorithms can be transformed into learning-based algorithms, leading to the learned iterative soft-thresholding algorithm (LISTA) and learned approximate message passing (LAMP).
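
As a small illustration (a generic Python sketch with assumed parameter names, not code from the referenced papers), each IST iteration alternates a gradient step on the data-fit term ||y - Ax||^2 with a soft-thresholding step that promotes sparsity:

    # Minimal sketch of the iterative soft-thresholding algorithm (IST/ISTA);
    # all parameter names and default values are illustrative assumptions.
    import numpy as np

    def soft_threshold(v, tau):
        # shrink each entry towards zero by tau
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def ista(A, y, lam=0.1, num_iter=100):
        L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(num_iter):
            # gradient step on ||y - A x||^2 followed by soft thresholding
            x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
        return x

In LISTA and LAMP, the fixed quantities of such an iteration (the matrices applied to x and y as well as the thresholds) are replaced by parameters that are learned from training pairs (y, x).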

The goal of the seminar talk is to give a brief introduction to sparse linear inverse problems, to explain the main idea of unfolding, and to illustrate it via LISTA and LAMP.

A Gradient-based Algorithm for Stochastic Optimization

(supervised by Michael Newinger)

Gradient-based stochastic optimization algorithms are a key method for machine learning problems such as training regression models or artificial neural networks. Considering the large data sets and high-dimensional parameter spaces associated with these kinds of problems, fast and efficient optimization algorithms are of utmost importance. In this seminar talk, "Adam", a robust and highly efficient state-of-the-art gradient descent algorithm, will be discussed. It relies on adaptive estimates of the first and second moments of the gradient of the stochastic objective function, which are utilized for a sophisticated step-size control. Experimental evaluations will demonstrate the remarkable performance of this method for convex as well as non-convex problems.
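
For orientation, the update rules proposed in the original Adam paper take the following form (g_t denotes the stochastic gradient at step t):

    m_t = beta_1 m_{t-1} + (1 - beta_1) g_t                   (first-moment estimate)
    v_t = beta_2 v_{t-1} + (1 - beta_2) g_t^2                 (second-moment estimate, element-wise square)
    m_hat_t = m_t / (1 - beta_1^t),  v_hat_t = v_t / (1 - beta_2^t)   (bias correction)
    theta_t = theta_{t-1} - alpha m_hat_t / ( sqrt(v_hat_t) + epsilon )

The bias-corrected moment estimates yield an individual, adaptive step size for every parameter, which is the basis of the step-size control mentioned above.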

Random Forests Classifier

(supervised by Christoph Stöckle)

In machine learning, random forests are a popular learning method for classification and other tasks. A random forest consists of decision trees, each of which is a classifier that maps an input to one of the possible classes. Although a decision tree has advantageous properties compared to other learning methods, e.g., robustness to irrelevant features and invariance under scaling of features, it has the disadvantage that it tends to overfit the training data during the training phase, which makes the classification of unseen data inaccurate. In order to overcome this problem while preserving the advantageous properties of decision trees, random forests do not rely on a single decision tree but on an ensemble of decision trees for carrying out the classification task. Each of the decision trees forming the random forest is constructed during the training phase from the given training set based on a randomly generated vector. The random forest generated in this way maps an input to a class by choosing the class that most of its decision trees have voted for.
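
The following minimal sketch (using scikit-learn with an illustrative data set and placeholder parameters, not material from the seminar) shows the basic workflow: many trees are grown on randomized versions of the training data and the final prediction is a majority vote.

    # Minimal random forest example with scikit-learn (illustrative only).
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # 100 decision trees, each grown on a bootstrap sample of the training data
    # and with a random subset of features considered at every split
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X_train, y_train)

    # prediction = majority vote over the individual trees
    print(forest.score(X_test, y_test))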

The goal of this seminar talk is to explain this basic idea of random forests in detail and give an overview of different variants having this basic idea in common, e.g., bagging and random split selection.

Topics Offered in Summer Term 2017

To give you an idea of what possible topics can look like, you can find the list of last year's topics below.