Gonçalo Mordido

I am a postdoctoral fellow at Mila and Polytechnique Montreal working with Prof. Sarath Chandar and Prof. François Leduc-Primeau on efficient deep learning. Previously, I worked on diversification, compression, and evaluation methods for generative adversarial networks.

My main focus is on building sustainable and energy-efficient machine learning systems. I am also interested in efficiently improving the fairness, generalization, and convergence of deep neural networks.

Email  /  CV  /  Google Scholar  /  GitHub

News
Selected publications
Deep learning on a healthy data diet: Finding important examples for fairness
Abdelrahman Zayed, Prasanna Parthasarathi, Gonçalo Mordido, Hamid Palangi, Samira Shabanian, Sarath Chandar
AAAI 2023

Data augmentation can reduce gender bias by adding counterfactual examples to the training set. We show that some of the augmented examples are unimportant or even harmful to fairness, and we prune both factual and counterfactual examples to mitigate gender bias.
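A minimal sketch of the pruning idea, not the paper's exact procedure: the gradient-norm scoring below and the keep_fraction value are illustrative assumptions, standing in for whatever importance measure is used to rank factual and counterfactual examples before training.

import torch
import torch.nn.functional as F

def gradient_norm_scores(model, examples):
    # Hypothetical importance score: norm of the loss gradient for each (input, label) pair.
    scores = []
    for x, y in examples:
        model.zero_grad()
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        grad_norm = sum(p.grad.norm() ** 2 for p in model.parameters() if p.grad is not None) ** 0.5
        scores.append(grad_norm.item())
    return scores

def prune(examples, scores, keep_fraction=0.5):
    # Keep only the highest-scoring fraction of the examples.
    k = int(len(examples) * keep_fraction)
    ranked = sorted(zip(scores, range(len(examples))), reverse=True)
    return [examples[i] for _, i in ranked[:k]]

# Both the factual and the counterfactual (augmented) sets are pruned before training:
# train_set = prune(factual, gradient_norm_scores(model, factual)) + \
#             prune(counterfactual, gradient_norm_scores(model, counterfactual))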

Sharpness-aware training for accurate inference on noisy DNN accelerators
Gonçalo Mordido, Sarath Chandar, François Leduc-Primeau
CoLLAs 2022 workshop, in submission

We show that sharpness-aware training, which optimizes for both the loss value and the loss sharpness, significantly improves robustness to noisy hardware at inference time while also increasing DNN performance.
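For context, here is a minimal sketch of a sharpness-aware (SAM-style) update, which the training scheme builds on; it is not the paper's exact setup, and the rho value is an illustrative assumption. Gradients are first used to perturb the weights toward a nearby high-loss point, and the actual update uses the gradients computed there.

import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    # Step 1: gradient at the current weights, then climb to a nearby point
    # w + e(w), where e(w) has norm rho along the gradient direction.
    loss_fn(model(x), y).backward()
    grad_norm = torch.norm(torch.stack([p.grad.norm() for p in model.parameters() if p.grad is not None]))
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)
    optimizer.zero_grad()

    # Step 2: gradient at the perturbed weights, undo the perturbation,
    # then take the optimizer step with the sharpness-aware gradients.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()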

Improving meta-learning generalization with activation-based early-stopping
Simon Guiroy, Christopher Pal, Gonçalo Mordido, Sarath Chandar
CoLLAs 2022

We propose activation-based early-stopping (ABE), an alternative to using validation-based early-stopping for meta-learning. Specifically, we analyze the evolution of the neural activations at each hidden layer and early-stop when target activation trajectories diverge from source activation trajectories.
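A heavily simplified sketch of the idea, under assumed details (the activation summary, the stopping rule, and the patience value are illustrative, not the paper's exact criterion): track a per-layer summary of hidden activations on source and target data after each epoch, and stop once the gap between the two keeps growing.

import torch

def layer_activation_summary(model, loader, layers):
    # Mean activation per chosen (name, module) pair, captured with forward hooks.
    sums, counts, handles = {}, {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            sums[name] = sums.get(name, 0.0) + output.detach().mean().item()
            counts[name] = counts.get(name, 0) + 1
        return hook

    for name, module in layers:
        handles.append(module.register_forward_hook(make_hook(name)))
    with torch.no_grad():
        for x, _ in loader:
            model(x)
    for h in handles:
        h.remove()
    return torch.tensor([sums[n] / counts[n] for n, _ in layers])

def should_stop(gap_history, patience=3):
    # Stop when the source/target activation gap has grown for `patience` consecutive epochs.
    if len(gap_history) <= patience:
        return False
    recent = gap_history[-(patience + 1):]
    return all(recent[i + 1] > recent[i] for i in range(patience))

# Per epoch:
# gap = (layer_activation_summary(model, target_loader, layers)
#        - layer_activation_summary(model, source_loader, layers)).norm().item()
# gap_history.append(gap); if should_stop(gap_history): break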

Compressing 1D time-channel separable convolutions using sparse random ternary matrices
Gonçalo Mordido, Matthijs Van Keirsbilck, Alexander Keller
INTERSPEECH 2021

We demonstrate that 1x1-convolutions in 1D time-channel separable convolutions may be replaced by constant, sparse random ternary matrices with weights in {−1, 0, +1}. Such layers do not perform any multiplications, do not require training, and may be generated on the chip during computation without any memory access.
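A minimal sketch of the replacement: a 1x1 (pointwise) convolution over channels is just a matrix multiply, so its weight matrix can be swapped for a fixed, sparse random ternary matrix in {−1, 0, +1} that is regenerated from a seed rather than stored. The sparsity level and seed below are illustrative choices, not the paper's configuration.

import torch
import torch.nn as nn

class RandomTernaryPointwiseConv(nn.Module):
    def __init__(self, in_channels, out_channels, sparsity=0.9, seed=0):
        super().__init__()
        g = torch.Generator().manual_seed(seed)  # reproducible, so the matrix can be regenerated on chip
        signs = torch.randint(0, 2, (out_channels, in_channels), generator=g) * 2 - 1
        mask = (torch.rand(out_channels, in_channels, generator=g) > sparsity).float()
        # Fixed ternary weights: never trained; inference needs only additions and subtractions.
        self.register_buffer("weight", (signs * mask).float())

    def forward(self, x):  # x: (batch, in_channels, time)
        return torch.einsum("oc,bct->bot", self.weight, x)

# layer = RandomTernaryPointwiseConv(256, 256)
# y = layer(torch.randn(8, 256, 100))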

Mark-Evaluate: Assessing language generation using population estimation methods
Gonçalo Mordido, Christoph Meinel
COLING 2020

We propose a family of metrics to assess language generation derived from population estimation methods. Specifically, we propose three novel metrics that separately assess the evaluation set in terms of quality and diversity. Our metrics are sensitive to drops in quality and diversity and show a high correlation to human evaluation.
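An illustrative toy of the underlying population-estimation idea only, not the paper's actual metrics: a Petersen-style capture-recapture estimate in which reference embeddings are "marked" and a generated embedding counts as a "recapture" when it falls within the k-nearest-neighbour radius of a reference point. The function names and k are assumptions for the sketch.

import torch

def knn_radius(points, k=4):
    # Distance from each point to its k-th nearest neighbour (excluding itself).
    d = torch.cdist(points, points)
    return d.topk(k + 1, largest=False).values[:, -1]

def petersen_estimate(reference, generated, k=4):
    radius = knn_radius(reference, k)                     # "marking" phase
    d = torch.cdist(generated, reference)
    recaptured = (d <= radius.unsqueeze(0)).any(dim=1).sum().item()
    marked, captured = reference.shape[0], generated.shape[0]
    return marked * captured / max(recaptured, 1)         # Petersen estimator M*C/R

# ref, gen = torch.randn(500, 768), torch.randn(500, 768)
# Under this toy view, an estimate close to len(ref) suggests the generated set
# matches the reference population well; large deviations signal quality or diversity drops.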

microbatchGAN: Stimulating diversity with multi-adversarial discrimination
Gonçalo Mordido, Haojin Yang, Christoph Meinel
WACV 2020

We propose to tackle the mode collapse problem in generative adversarial networks (GANs) by using multiple discriminators and assigning a different portion of each minibatch, called microbatch, to each discriminator.
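A minimal routing sketch with simplified losses, not the exact multi-adversarial objective from the paper; it assumes each discriminator outputs a single logit per sample. The generated minibatch is split into one microbatch per discriminator, and each discriminator is only trained on its own slice.

import torch

def split_into_microbatches(batch, num_discriminators):
    # Assign a different, roughly equal portion of the minibatch to each discriminator.
    return torch.chunk(batch, num_discriminators, dim=0)

def discriminator_losses(discriminators, real_batch, fake_batch):
    bce = torch.nn.BCEWithLogitsLoss()
    micro_real = split_into_microbatches(real_batch, len(discriminators))
    micro_fake = split_into_microbatches(fake_batch, len(discriminators))
    losses = []
    for d, real, fake in zip(discriminators, micro_real, micro_fake):
        real_loss = bce(d(real), torch.ones(real.shape[0], 1))
        fake_loss = bce(d(fake.detach()), torch.zeros(fake.shape[0], 1))
        losses.append(real_loss + fake_loss)
    return losses  # one loss per discriminator; the generator is then updated against all of them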

