Agility in Project Management – What does it really mean?

U-residence, green room

Speaker: Luc Peirlinckx, Capgemini Belgium - Diegem

Registration for this event is free and includes catering.

Profile of Luc Peirlinckx:

Luc graduated from the VUB, Faculty of Engineering, in 1988. He gained R&D experience as a post-doctoral researcher and lecturer at the Free University of Brussels (VUB, dept. ELEC) and as R&D coordinator at Tele Atlas.

He is experienced in managing projects & programs (PMBOK) in the In-Car Navigation & Location Based Services (Tele Atlas, TomTom), utility and supply chain markets, and in the public sector. He also has business development & customer support expertise (support of bids & contracts) for these markets.

Luc also gained experience in best practices for requirements management, release and sprint planning and implementation, and product delivery based on the principles of the Agile framework.

Luc was part of the Tele Atlas – TomTom organisation for about 13 years, where he held four key roles: Team Manager, Product Line Manager, Program Manager, and Manager of the Engineering Quality Management System and processes.

Luc started at Capgemini in May 2012, focusing on bid and engagement management. During this period, he managed bids in various market segments and technologies and was responsible for several engagements, for example: Coca Cola (risk management business process), Janssen Pharmaceutical companies of J&J (mobile and cloud technology), FOD Social Security (IBM Curam), bpost (Drupal web platform), and Johnson Controls International (Salesforce, ServiceMax EUR rollout).

Lecture abstract:

In this session, Luc Peirlinckx will share key activities, insights and trends regarding the role and responsibilities of a project/program manager.

Both the complexity of IT technology and of business requirements has clearly increased. Hence, project and program managers face higher expectations on the key project dimensions, especially budget, scope and timing flexibility. But what does this really mean?

Luc will share with you what a typical week looks like for him.

Several topics will be highlighted:

  • The project manager role on delivery projects;
  • Risk, change and stakeholder management;
  • Trends and expectations;
  • International mobility and its impact on delivery processes.

System identification with input and output uncertainties using kernel-based methods

Building K, 7th floor

Speaker: Giulio Bottegal, KUL

Recent developments in system identification have brought attention to regularized kernel-based methods. Using the Bayesian interpretation of kernel-based methods, in this talk I will discuss several ways to extend this paradigm to system identification problems with input or output uncertainties. Examples of this type of problem include blind system identification, Hammerstein systems, errors-in-variables, quantized output data, and identification with outliers. An application to dynamic network identification is also discussed.
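
As a minimal illustration of the regularized kernel-based paradigm (a sketch, not code from the talk), the snippet below estimates an FIR impulse response as the posterior mean under a Gaussian process prior with a TC (tuned/correlated) kernel. The kernel decay, noise variance and model order are assumed values.

```python
import numpy as np

def tc_kernel(n, lam=0.8):
    # TC ("tuned/correlated") kernel: K[i, j] = lam**max(i, j),
    # encoding an exponentially decaying, stable impulse response.
    idx = np.arange(n)
    return lam ** np.maximum.outer(idx, idx)

def identify_fir(u, y, n=20, lam=0.8, sigma2=0.1):
    # Regularized FIR estimate: posterior mean under the prior
    # g ~ N(0, K) with output noise variance sigma2.
    N = len(y)
    Phi = np.zeros((N, n))              # Toeplitz regression matrix
    for i in range(N):
        for j in range(min(i + 1, n)):
            Phi[i, j] = u[i - j]
    K = tc_kernel(n, lam)
    # g_hat = K Phi^T (Phi K Phi^T + sigma2 I)^{-1} y
    G = Phi @ K @ Phi.T + sigma2 * np.eye(N)
    return K @ Phi.T @ np.linalg.solve(G, y)

rng = np.random.default_rng(0)
g_true = 0.7 ** np.arange(20)           # true impulse response
u = rng.standard_normal(200)
y = np.convolve(u, g_true)[:200] + 0.1 * rng.standard_normal(200)
g_hat = identify_fir(u, y)
```

The prior replaces an explicit model-order selection: the kernel hyperparameters (here fixed, in practice tuned by marginal likelihood) control the bias/variance trade-off.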

Target/Source Localization by an Imperfect Network of Binary Sensors

Dept. ELEC, Building K, 7th floor

Speaker: Er-Wei Bai, University of Iowa

Target/source localization is a parameter (location) estimation problem. In this talk, target/source localization and tracking by a network of imperfect and primitive binary sensors will be discussed. Binary sensors are inexpensive and can be deployed in large numbers. However, cheap binary sensors are imperfect and subject to various uncertainties. In this talk, a detailed analysis, mathematical modeling of imperfect binary sensors, and convergence results will be presented. Numerical algorithms for source localization and tracking are proposed, along with convergence and asymptotic normality results. Furthermore, a distributed way to calculate the globally optimal estimate using only local information provided by the binary sensors is derived.
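
As a toy illustration of the setting (not the algorithms of the talk), the sketch below localizes a source from imperfect binary detections by taking the centroid of the sensors that fire. The detection radius and flip probabilities are assumed values, and the centroid is a crude, biased estimator that only concentrates near the source for dense deployments.

```python
import numpy as np

def localize(sensors, readings):
    # Crude estimate: centroid of the sensors reporting a detection.
    # False alarms and missed detections bias this estimate; the
    # talk's algorithms model these imperfections explicitly.
    return sensors[readings == 1].mean(axis=0)

rng = np.random.default_rng(1)
source = np.array([0.3, -0.2])
sensors = rng.uniform(-1, 1, size=(2000, 2))    # known positions
dist = np.linalg.norm(sensors - source, axis=1)
# Imperfect binary sensor: 90% detection inside radius 0.4,
# 5% false-alarm rate outside (assumed numbers).
p_detect = np.where(dist < 0.4, 0.9, 0.05)
readings = (rng.uniform(size=2000) < p_detect).astype(int)
est = localize(sensors, readings)
```

Each sensor transmits a single bit, which is what makes large, cheap deployments attractive despite the per-sensor unreliability.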

Nonlinear system identification in structural health monitoring

Dept. ELEC, Building K, 7th floor

Speaker: Marc Rébillat, Associate Professor DYSCO team, PIMM laboratory, Arts et Métiers – CNRS – CNAM, Paris, France

The process of implementing a damage monitoring strategy for aerospace, civil and mechanical engineering infrastructure is referred to as structural health monitoring (SHM) and implies a sensor network that monitors the behavior of the structure on-line. An SHM process potentially allows for an optimal use of the monitored structure, minimized downtime, and the avoidance of catastrophic failures. The SHM process classically relies on four sequential steps: damage detection, localization, classification, and quantification. The key idea underlying this seminar is that structural damage may produce nonlinear dynamical signatures that are not yet used in SHM, despite the fact that they can significantly enhance the monitoring process. We thus propose to monitor structural damage by identifying its nonlinear signature on the basis of a representation of the structure as a cascade of Hammerstein models. This model is estimated at very low computational cost by means of the Exponential Sine Sweep Method. It will be shown that, on the basis of this richer dynamical representation of the structure, SHM algorithms dedicated to damage detection, classification and quantification can be derived. This will be illustrated in aeronautic and civil engineering contexts, using experimental as well as numerical data.
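
The Exponential Sine Sweep Method mentioned above excites the structure with a sweep whose instantaneous frequency grows exponentially; convolving the sweep with its inverse filter (the time-reversed sweep attenuated by 6 dB/octave) yields approximately a band-limited impulse, which is what makes the deconvolution cheap. A minimal sketch, with assumed sampling rate and band:

```python
import numpy as np

fs = 2000           # sampling rate in Hz (assumed)
T = 1.0             # sweep duration in s
f1, f2 = 20.0, 500.0
t = np.arange(int(fs * T)) / fs
L = T / np.log(f2 / f1)
# Exponential sine sweep: instantaneous frequency f1 * exp(t / L)
sweep = np.sin(2 * np.pi * f1 * L * (np.exp(t / L) - 1.0))
# Inverse filter: time-reversed sweep with an exponentially decaying
# envelope (-6 dB/octave), so that sweep * inv ~ band-limited impulse
inv = sweep[::-1] * np.exp(-t / L)
delta_approx = np.convolve(sweep, inv)
peak = int(np.argmax(np.abs(delta_approx)))
```

For a nonlinear system, the same deconvolution separates the impulse responses of the successive harmonics in time, which is what the cascade-of-Hammerstein-models identification exploits.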

Data perturbation methods for hypothesis testing in nonlinear estimation problems

7th floor, building K

Speaker: Sandor Kolumbán (Technische Universiteit Eindhoven)


When an estimation problem is faced, point estimates alone are in most cases not satisfactory; reliability certificates are also required. For non-linear estimation problems, these certificates are usually derived from the asymptotic distribution of the estimate as the sample count tends to infinity.

Combining non-linear estimation problems with finite sample counts results in faulty uncertainty estimates. We present a family of randomized hypothesis testing methods, called data perturbation (DP) methods, that allow hypothesis testing with an exact confidence level for practically any model structure.

When carefully constructed, DP hypothesis tests result in well-structured confidence regions. We are going to see an 'appropriate' DP method that works well in general. It results in connected and bounded confidence regions for linear regression problems if the joint distribution of the noise is invariant under a subgroup of the unitary group (i.i.d. noise, componentwise symmetric noise, conditionally uniform noise).

The algorithm is going to be illustrated using a simple non-linear estimation example.

The structure of the confidence regions depends on an intimate relationship between the assumed noise characteristics, model structure and external input. To illustrate this we are going to investigate the power function of the proposed test. As we will see, if the input of the problem does not satisfy certain excitation conditions then the test will reduce to a simple coin toss.
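
To convey the flavor of such DP methods (a sketch under stated assumptions, not necessarily the talk's exact construction), the test below checks a parameter hypothesis for linear regression by ranking a correlation statistic of the nominal residuals among sign-perturbed copies. Under independent noise that is symmetric about zero, sign flips leave the joint noise distribution invariant, so under H0 the m+1 statistics are exchangeable and the rank is uniform, giving an exact confidence level.

```python
import numpy as np

def dp_test(X, y, theta0, m=99, rng=None):
    # Randomized test of H0: theta = theta0, assuming independent
    # noise symmetric about zero (invariance under sign flips).
    rng = np.random.default_rng() if rng is None else rng
    e0 = y - X @ theta0                       # residuals under H0
    stat = lambda e: np.linalg.norm(X.T @ e)  # correlation statistic
    s0 = stat(e0)
    # Rank s0 among statistics of m sign-perturbed residual vectors.
    count = sum(stat(rng.choice([-1.0, 1.0], size=len(y)) * e0) >= s0
                for _ in range(m))
    return (count + 1) / (m + 1)              # exact randomized p-value

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 2))
theta = np.array([1.0, -0.5])
y = X @ theta + rng.standard_normal(50)
p_true = dp_test(X, y, theta, rng=rng)        # H0 true
p_false = dp_test(X, y, theta + 2.0, rng=rng) # H0 grossly false
```

Inverting the test (collecting all theta0 that are not rejected) yields the confidence regions whose structure the talk analyses.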

Objective diagnosis of brain diseases with EEG

7th floor, building K

Speaker: Jorne Laton (VUB)


Using features extracted from time signals in three EEG channels, we were able to detect schizophrenia patients with almost 85% accuracy. The second step consisted of a network analysis using all channels, in which we correlated all channels with each other. There, we found that correlations involving temporal electrodes showed significant differences between schizophrenia patients and healthy controls, but only in auditory and not in visual tasks. In the third step, we performed source localisation; features extracted from time signals in different reconstructed sources were used to detect schizophrenia patients, with 74% accuracy so far. Last year, we also started an MEG study on multiple sclerosis, in which we will perform analyses similar to those with EEG. We measured 45 people last year and plan to continue in September.

In cooperation with another research group, we performed an EEG frequency analysis on two types of dementia: Alzheimer’s disease and frontotemporal lobe disorder. Here, we found that the frequency of the dominant peak detected in a broad alpha interval (5 - 15 Hz) was significantly different between these groups. In another cooperation, involving electroconvulsive therapy, we compared EEGs taken before the first and after the sixth treatment and found significant differences in the theta-band amplitudes.
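
As a minimal sketch of how a dominant peak in a broad alpha interval can be located from a single channel (the sampling rate, record length and synthetic signal are assumptions, not the study's data):

```python
import numpy as np

def dominant_peak(x, fs, band=(5.0, 15.0)):
    # Frequency of the largest periodogram peak within the band,
    # estimated from a single EEG channel sampled at fs Hz.
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(psd[mask])]

fs = 250.0                                  # assumed sampling rate
t = np.arange(int(10 * fs)) / fs            # 10 s record
rng = np.random.default_rng(3)
# Synthetic channel: 10.2 Hz alpha rhythm plus broadband noise
x = np.sin(2 * np.pi * 10.2 * t) + 0.5 * rng.standard_normal(len(t))
f_peak = dominant_peak(x, fs)
```

With a 10 s window the frequency resolution is 0.1 Hz, which is on the order of the group differences such analyses look for.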

Sparse interpolation, exponential analysis, Padé approximation and tensor decomposition

7th floor, building K

Speaker: Annie Cuyt (Universiteit Antwerpen)


A mathematical model is called t-sparse if it is a combination of only t generating elements. In sparse interpolation, the aim is to determine both the support of the sparse linear combination and the scalar coefficients in the representation from a small or minimal amount of data samples. Sparse techniques solve the problem from a number of samples proportional to the number t of terms in the representation, rather than to the number of available data points or generating elements. Sparse representations reduce complexity in several ways: data collection, algorithmic complexity, and model complexity.

We indicate the connections between sparse interpolation, generalized eigenvalue computation, exponential analysis, rational approximation, orthogonal polynomials and tensor decomposition. In the past few years, insight gained from the computer algebra community, combined with methods developed by the numerical analysis community, has led to significant progress in several very practical and real-life signal processing applications. We make use of tools such as the singular value decomposition and various convergence results for Padé approximants to regularize an otherwise ill-posed inverse problem. Classical resolution limitations in signal processing with respect to frequency and decay rates are overcome.

In the illustrations we particularly focus on multi-exponential models representing signals which decay exponentially with time. These models appear, for instance, in transient detection, motor fault diagnosis, electrophysiology, magnetic resonance and infrared spectroscopy, vibration analysis, fluorescence lifetime imaging, music signal processing, dynamic spectrum management such as in cognitive radio, nuclear science, and so on. Through the connection with tensor decomposition we can even have an impact on big data analytics.
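
To make the connection between sparse interpolation, exponential analysis and generalized eigenvalues concrete, here is a minimal Prony-type sketch for a 2-sparse exponential model: the nodes are the eigenvalues of a shifted Hankel pencil, and the coefficients follow from a Vandermonde least-squares fit. The data are synthetic and exact (no noise), so the SVD-based regularization mentioned above is not needed here.

```python
import numpy as np

# 2-sparse exponential model: f_k = sum_j c_j * z_j**k
z_true = np.array([0.9, 0.5 * np.exp(1j * 0.3)])
c_true = np.array([2.0, 1.0 + 0.5j])
f = np.array([np.sum(c_true * z_true**k) for k in range(8)])

t = 2  # number of terms, assumed known here
# Shifted Hankel pencil: the eigenvalues of H0^{-1} H1 are the
# nodes z_j (the generalized eigenvalue step of Prony's method).
H0 = np.array([[f[i + j] for j in range(t)] for i in range(t)])
H1 = np.array([[f[i + j + 1] for j in range(t)] for i in range(t)])
z_est = np.linalg.eigvals(np.linalg.solve(H0, H1))
# Coefficients from a Vandermonde least-squares fit.
V = np.vander(z_est, N=len(f), increasing=True).T
c_est, *_ = np.linalg.lstsq(V, f, rcond=None)
```

Note that only 2t = 4 samples are strictly needed, in line with the sample count proportional to t rather than to the data length.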

PDF version: annie_cuyt_2016_seminar.pdf

Splitting envelopes and quasi-Newton methods for nonsmooth composite problems

7th floor, building K

Speaker: Lorenzo Stella (KU Leuven)

Nonsmooth optimization problems in composite form are very frequent in several fields of science and engineering. For example, regularization methods in machine learning and in signal and image processing often have this form, as do constrained optimization problems arising in optimal control. Operator splitting algorithms, such as forward-backward splitting (also known as the proximal gradient method) or Douglas-Rachford splitting, are nowadays very well studied and understood. Like any first-order method, though, they are efficient only at computing low- to medium-accuracy solutions, and they suffer greatly from ill conditioning of the problem at hand. In this talk I will describe an algorithmic scheme that operates on a smooth surrogate of the original nonsmooth objective, which we call the forward-backward envelope function, allowing classical line-search methods for differentiable objectives to be extended to nonsmooth problems.
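
As a minimal sketch of forward-backward splitting (the plain proximal gradient method, not the envelope-based scheme of the talk), applied to an l1-regularized least-squares problem; the problem sizes and regularization weight are assumed values.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def forward_backward(A, b, lam, n_iter=2000):
    # Proximal gradient iterations for
    #   minimize 0.5 * ||A x - b||^2 + lam * ||x||_1
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # forward (gradient) step
        x = soft_threshold(x - step * grad,  # backward (proximal) step
                           step * lam)
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = [1.0, -1.0]                # sparse ground truth
b = A @ x_true
x_hat = forward_backward(A, b, lam=0.01)
```

The fixed step 1/L is what makes this a first-order method with the slow, conditioning-sensitive behavior described above; the forward-backward envelope is precisely a device to apply faster line-search and quasi-Newton machinery to the same problem.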
