PhD student, Quebec Artificial Intelligence Institute.
Research interests: self-supervised learning, generative models, few-shot learning, inverse graphics
Contact: christopher.beckham(at)mila(dot)quebec

About me

I am a PhD candidate at MILA and Polytechnique Montreal, advised by Prof. Christopher Pal. Previously, I completed my BCMS (Hons) at The University of Waikato, under the supervision of Prof. Eibe Frank. My research interests are in model-based optimisation, energy-based models, generative models, and adversarial learning.

I also maintain a [blog](/blog), where I mostly write about things pertaining to my research interests. I consider some of these articles to fall under the umbrella of 'alternative' publishing: work that is less appreciated or cared for in 'mainstream' academia, such as tutorials, proof-of-concept work, or reproductions of existing papers. (It is also nice to be able to write in prose without having to be overly formal.)

Selected papers

Below is a selection of papers I have published. For a full list, please see my Google Scholar.

Pre-prints

arXiv Lim, J. H., Kovachki, N. B., Baptista, R., **Beckham, C.**, Azizzadenesheli, K., Kossaifi, J., ... & Anandkumar, A. (2023). _Score-based diffusion models in function space._ arXiv preprint arXiv:2302.07400.
tldr: Most generative models are defined on a finite number of dimensions, and this is also true for diffusion models. A mathematical framework is derived for diffusion models in function space (infinite dimensions), which can give rise to models that are resolution invariant.
arXiv **Beckham, C.**, Piche, A., Vazquez, D., & Pal, C. (2022). _Towards good validation metrics for generative models in offline model-based optimisation._ arXiv preprint arXiv:2211.10747.
tldr: In model-based optimisation, online evaluation is expensive, so it is desirable to have cheap-to-compute validation metrics that are also well correlated with the real oracle. These metrics can be used to select for better models while mitigating over-reliance on online evaluation.
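
To make the idea concrete, here is a minimal sketch of that kind of model selection, with made-up proxy and oracle scores standing in for real evaluations:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical scores for 20 model checkpoints: `proxy` is the
# cheap-to-compute validation metric, `oracle` is the expensive
# online evaluation we would rather not run often.
proxy = rng.normal(size=20)
oracle = proxy + rng.normal(scale=0.5, size=20)  # noisy but correlated

# A good validation metric ranks checkpoints similarly to the oracle,
# i.e. it has a high Spearman (rank) correlation with it.
rho, pval = spearmanr(proxy, oracle)
print(f"Spearman rho = {rho:.3f} (p = {pval:.3g})")

# Model selection: keep the checkpoint the proxy metric likes best.
best = int(np.argmax(proxy))
print(f"Selected checkpoint {best}, oracle score {oracle[best]:.3f}")
```
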

Conferences

CoLLAs 2022 **Beckham, C.**, Laradji, I. H., Rodriguez, P., Vazquez, D., Nowrouzezahrai, D., & Pal, C. (2022, November). _Overcoming challenges in leveraging GANs for few-shot data augmentation._ In Conference on Lifelong Learning Agents (pp. 255-280). PMLR. [`[`paper`]`](https://arxiv.org/abs/2203.16662) [`[`code`]`](https://github.com/christopher-beckham/challenges-few-shot-gans) [`[`video`]`](https://www.youtube.com/watch?v=uBe5gOz4CSk)
tldr: We explore GAN-based few-shot data augmentation to improve classification performance on highly underrepresented classes, and do this in a principled and rigorous manner. We find that some of the difficulty in this task can be mitigated through a simple semi-supervised modification.
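
The augmentation step itself is simple. Here is a minimal sketch, where `generator` stands in for a trained few-shot GAN generator (a hypothetical module, not the API from our codebase):

```python
import torch

def augment_minority_class(x_real, generator, latent_dim, n_synth):
    """Pad an underrepresented class with GAN samples (a sketch).

    `generator` is a hypothetical generator network for the minority
    class, mapping latent codes to images. The augmented batch can then
    be fed to a downstream classifier as if it were real data.
    """
    z = torch.randn(n_synth, latent_dim)  # sample latent codes
    with torch.no_grad():
        x_synth = generator(z)
    return torch.cat([x_real, x_synth], dim=0)
```
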
tldr: We leverage mixing functions and adversarial learning to perform representation learning in the bottleneck of a deterministic autoencoder. This can be used as a generative model to produce novel examples (by mixing the latent codes of known examples) while avoiding some of the shortcomings commonly observed in variational autoencoder (VAE) models.
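
The core mixing operation is tiny; below is a minimal sketch (hypothetical `encoder`/`decoder` modules, with the adversarial part that makes decoded mixes look realistic omitted):

```python
import torch

def mix_and_decode(encoder, decoder, x1, x2):
    """Decode a convex combination of two latent codes (a sketch).

    The full method also trains a discriminator so that decodings of
    mixed codes are indistinguishable from real data; only the mixing
    itself is shown here.
    """
    z1, z2 = encoder(x1), encoder(x2)
    # Per-example mixing coefficient, broadcast over latent dimensions.
    alpha = torch.rand(z1.size(0), *([1] * (z1.dim() - 1)))
    z_mix = alpha * z1 + (1 - alpha) * z2  # mixup in the bottleneck
    return decoder(z_mix)
```
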
ICML 2019 Verma, V., Lamb, A., **Beckham, C.**, Najafi, A., Mitliagkas, I., Lopez-Paz, D., & Bengio, Y. (2019, May). _Manifold mixup: Better representations by interpolating hidden states._ In International Conference on Machine Learning (pp. 6438-6447). [`[`paper`]`](http://proceedings.mlr.press/v97/verma19a.html) [`[`code`]`](https://github.com/vikasverma1077/manifold_mixup)
tldr: Perform mixup interpolation in the hidden layers of a classifier, mixing pairs of hidden states. This can be seen as performing implicit data augmentation by leveraging features internally learned by the classifier. Competitive accuracies are achieved with respect to the original mixup and other baselines.
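
For a classifier written as an `nn.Sequential`, the idea looks roughly like this (a sketch, not the reference implementation):

```python
import torch

def manifold_mixup_forward(layers, x1, y1, x2, y2, alpha=2.0):
    """One forward pass with manifold mixup (a minimal sketch).

    `layers` is an nn.Sequential classifier; hidden states are mixed at
    a randomly chosen depth, and the (one-hot) labels are mixed with
    the same coefficient to form soft training targets.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    k = torch.randint(len(layers), (1,)).item()  # depth to mix at
    h1, h2 = x1, x2
    for i, layer in enumerate(layers):
        if i == k:
            h1 = lam * h1 + (1 - lam) * h2  # mix hidden states
        h1 = layer(h1)
        if i < k:
            h2 = layer(h2)  # second stream only needed up to depth k
    y_mix = lam * y1 + (1 - lam) * y2  # mix labels identically
    return h1, y_mix
```
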
NeurIPS 2018 Moniz, J. R. A.†, **Beckham, C.**†, Rajotte, S., Honari, S., & Pal, C. (2018). _Unsupervised depth estimation, 3D face rotation and replacement._ In Advances in Neural Information Processing Systems (pp. 9736-9746). († = equal authorship) [`[`paper`]`](https://papers.nips.cc/paper/8181-unsupervised-depth-estimation-3d-face-rotation-and-replacement) [`[`code`]`](https://github.com/joelmoniz/DepthNets) [`[`video`]`](https://www.youtube.com/watch?v=h_brJWd7nNg)
tldr: We perform unsupervised depth estimation by conditioning on the source and target keypoints of an object to predict depth, which is subsequently used to parameterise an affine transformation from the source pose to the target pose. The inferred depths correlate well with the ground truth depths, and we demonstrate this via face transformations.
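
As an illustration of the geometric step, here is a small numpy sketch (not the paper's exact formulation) of fitting an affine map from depth-augmented source keypoints to target keypoints by least squares:

```python
import numpy as np

def fit_affine(src_xy, depth, tgt_xy):
    """Fit a 2x4 affine transform mapping (x, y, z, 1) -> (x', y').

    `src_xy`: (N, 2) source keypoints, `depth`: (N,) predicted depths,
    `tgt_xy`: (N, 2) target keypoints. The point of predicting depth is
    that the source-to-target warp then has a closed-form
    least-squares solution.
    """
    N = src_xy.shape[0]
    A = np.hstack([src_xy, depth[:, None], np.ones((N, 1))])  # (N, 4)
    # Solve A @ M.T ~= tgt_xy in the least-squares sense.
    M, *_ = np.linalg.lstsq(A, tgt_xy, rcond=None)
    return M.T  # (2, 4)
```
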
NeurIPS 2017 Racah, E., **Beckham, C.**, Maharaj, T., Kahou, S. E., Prabhat, M., & Pal, C. (2017). _ExtremeWeather: A large-scale climate dataset for semi-supervised detection, localization, and understanding of extreme weather events._ In Advances in Neural Information Processing Systems (pp. 3402-3413). [`[`paper`]`](https://papers.nips.cc/paper/6932-extremeweather-a-large-scale-climate-dataset-for-semi-supervised-detection-localization-and-understanding-of-extreme-weather-events) [`[`dataset`]`](https://extremeweatherdataset.github.io/)
tldr: We propose a high-resolution time series dataset of climate simulations with ground truth labels. We demonstrate experiments on this dataset in the form of bounding box prediction.
ICML 2017 **Beckham, C.**, & Pal, C. (2017, July). _Unimodal probability distributions for deep ordinal classification._ In International Conference on Machine Learning (pp. 411-419). [`[`paper`]`](http://proceedings.mlr.press/v70/beckham17a.html)
tldr: Unimodal distributions are a natural way to fit probability distributions for many ordinal classification tasks. We use deep nets to parameterise binomial and Poisson distributions with optional temperature scaling to control distribution variance.
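
For the binomial case the whole trick fits in a few lines; a minimal sketch, assuming one logit per example and classes indexed 0..K-1:

```python
import torch
from torch.distributions import Binomial

def unimodal_ordinal_probs(logit_p, n_classes, tau=1.0):
    """Binomial-parameterised class probabilities (a minimal sketch).

    A network emits a single logit per example; a sigmoid turns it into
    the binomial success probability p, which induces a unimodal
    distribution over the K ordinal classes. `tau` is an optional
    temperature on the log-probabilities to control variance.
    """
    p = torch.sigmoid(logit_p)                  # (B,)
    k = torch.arange(n_classes).float()         # class indices
    dist = Binomial(total_count=n_classes - 1, probs=p[:, None])
    log_pmf = dist.log_prob(k[None, :])         # (B, K)
    return torch.softmax(log_pmf / tau, dim=1)  # temperature-scaled
```
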

Journals

PatRec 2023 **Beckham, C.**, Weiss, M., Golemo, F., Honari, S., Nowrouzezahrai, D., & Pal, C. (2023). _Visual question answering from another perspective: CLEVR Mental Rotation Tests._ Pattern Recognition, 136, 109209.
tldr: Fuse mental rotation tests with CLEVR visual question answering (VQA). The goal is to answer a question from a random viewpoint that is not the same as the viewpoint given to the VQA model. We also propose the use of contrastive learning to learn a 2D-to-3D volumetric encoder, without camera extrinsics.
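
A generic version of that contrastive objective, treating embeddings of the same scene under two viewpoints as positives (a sketch, not our exact loss):

```python
import torch
import torch.nn.functional as F

def multiview_infonce(z_a, z_b, temperature=0.1):
    """InfoNCE between two viewpoints of the same scenes (a sketch).

    `z_a` and `z_b` are (B, D) embeddings of the same batch of scenes
    rendered from two different camera viewpoints; matching rows are
    positives and all other rows in the batch are negatives.
    """
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature  # (B, B) cosine similarities
    targets = torch.arange(z_a.size(0))   # positives on the diagonal
    return F.cross_entropy(logits, targets)
```
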
MIA 2022 Vorontsov, E., Molchanov, P., Gazda, M., **Beckham, C.**, Kautz, J., & Kadoury, S. (2022). _Towards annotation-efficient segmentation via image-to-image translation._ Medical Image Analysis, 82, 102624.
tldr: Ground truth segmentation maps are laborious and costly to obtain. By leveraging weaker labels, i.e. whether an image is of a sick or healthy patient, we can use image-to-image translation techniques (translating between the sick and healthy domains) to improve the performance of the segmentation network.
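
As a rough illustration of why translation helps localisation (with a hypothetical `sick_to_healthy` translator, not the paper's actual pipeline): wherever the translator had to change the image is, roughly, where the pathology is.

```python
import torch

def residual_lesion_prior(x_sick, sick_to_healthy):
    """Turn a sick->healthy translator into a weak segmentation prior.

    `x_sick` is a (B, C, H, W) batch and `sick_to_healthy` a
    hypothetical image-to-image translation network trained only from
    image-level sick/healthy labels.
    """
    with torch.no_grad():
        x_healthy = sick_to_healthy(x_sick)
    # The translation residual highlights regions that had to change.
    residual = (x_sick - x_healthy).abs().mean(dim=1, keepdim=True)
    # Normalise to [0, 1] so it can serve as a soft mask / prior.
    peak = residual.amax(dim=(2, 3), keepdim=True).clamp_min(1e-8)
    return residual / peak
```
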
KBS 2019 Lang, S., Bravo-Marquez, F., **Beckham, C.**, Hall, M., & Frank, E. (2019). _WekaDeeplearning4j: A deep learning package for WEKA based on DeepLearning4j._ Knowledge-Based Systems, 178, 48-50. [`[`paper`]`](https://felipebravom.com/publications/WDL4J_KBS2019.pdf) [`[`code`]`](https://github.com/Waikato/wekaDeeplearning4j/)
tldr: Users can now train and tinker with deep neural networks within WEKA and its graphical user interface.

Workshops

ML4H @ NeurIPS 2016 **Beckham, C.**, & Pal, C. (2016). _A simple squared-error reformulation for ordinal classification._ arXiv preprint arXiv:1612.00775.
tldr: Treat ordinal classification as a regression problem, but still maintain a discrete probability distribution over classes by computing the regression output as the expectation of the integer labels under that distribution.
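
In PyTorch the loss is only a few lines; a minimal sketch:

```python
import torch
import torch.nn.functional as F

def expected_label_mse(logits, y):
    """Squared-error ordinal loss over a softmax distribution (sketch).

    A full distribution over the K classes is kept, but the regression
    target is compared against the expectation of the integer labels
    0..K-1 under that distribution, so predictions respect the class
    ordering.
    """
    probs = F.softmax(logits, dim=1)          # (B, K)
    k = torch.arange(logits.size(1)).float()  # integer labels
    y_hat = (probs * k).sum(dim=1)            # E[k] per example
    return F.mse_loss(y_hat, y.float())
```
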

Reviewing

- NeurIPS (2019, 2020, 2022), ICLR (2020, 2021), ICCV (2021), SIGGRAPH Asia (2020)