Outline of Research Interests

This page outlines current research in my lab, along with a subset of relevant publications.

Short Summary: My current research interests are in the probabilistic and statistical aspects of machine learning, with a focus on: (i) theory and applications of constrained probabilistic inference & variational inference; (ii) development and analysis of consistent and statistically efficient estimators that optimize a given loss (or utility) function; (iii) large scale implementation of these estimators. My applied work is primarily in neuroimaging, with a focus on: (i) encoding and decoding models for fMRI; (ii) estimators for (time-varying) brain networks; (iii) network statistics and embeddings, among other topics.

Classification with Complex Metrics

Real-world applications of classification to complex decision problems have led to the design of a wide range of evaluation metrics, many of which are non-decomposable, e.g., AUC and the F-measure. This is in contrast to decomposable metrics such as accuracy, which are defined as empirical averages. Non-decomposability is the primary source of difficulty in both theoretical analysis and the design of efficient algorithms. We study binary and multilabel classification from first principles, and derive novel, efficient, and statistically consistent algorithms that deliver improved empirical performance.
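
As a toy illustration of non-decomposability (illustrative only, not code from the papers below): accuracy is an empirical average of per-example scores, while the F-measure is a nonlinear function of aggregate counts, so no per-example score averages to it.

    import numpy as np

    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

    # Accuracy decomposes: it is the mean of per-example 0/1 scores.
    accuracy = (y_true == y_pred).mean()

    # F1 does not decompose: it is a ratio of aggregate counts, so no
    # per-example score has F1 as its empirical average.
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    f1 = 2 * tp / (2 * tp + fp + fn)

    print(f"accuracy = {accuracy:.3f}, F1 = {f1:.3f}")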

Optimal classification with multivariate losses
Nagarajan Natarajan, Oluwasanmi Koyejo, Pradeep K Ravikumar, and Inderjit S Dhillon
International Conference on Machine Learning (ICML), 2016
[url] [prelim. arXiv]
Consistent multilabel classification
Oluwasanmi Koyejo*, Nagarajan Natarajan*, Pradeep Ravikumar, and Inderjit Dhillon
Advances in Neural Information Processing Systems (NIPS) 28, 2015
[url]

Large Scale and Structured Probabilistic Inference

Data in scientific and commercial disciplines are increasingly characterized by high dimensions and relatively few samples. In such cases, a-priori knowledge gleaned from expertise and experimental evidence is invaluable for recovering meaningful models. In particular, knowledge of restricted degrees of freedom, such as sparsity or low rank, has become an important design paradigm, enabling the recovery of parsimonious and interpretable results, and improving storage and prediction efficiency for high dimensional problems. In Bayesian models, this structure is determined by the prior distribution. We are developing a variety of variational inference techniques that lead to scalable and accurate inference, particularly for high dimensional structured problems. We are also developing new techniques that capture structure in the inference rather than in the prior, and have shown that this leads to improved efficiency and performance in several cases.
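
A toy sketch of structure in the inference rather than the prior (a simplified stand-in for the information-projection machinery in the papers below; the parameter choices are hypothetical): fit a standard Gaussian posterior for linear regression, then restrict the approximate posterior to a k-sparse support.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic high dimensional regression with a sparse ground truth.
    n, d, k = 50, 200, 5
    w_true = np.zeros(d)
    w_true[:k] = rng.choice([-2.0, 2.0], size=k)
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)

    # Standard Bayesian linear regression with an isotropic Gaussian prior:
    # the posterior over weights is Gaussian with the mean below.
    alpha, sigma2 = 1.0, 0.01
    mu = np.linalg.solve(alpha * np.eye(d) + X.T @ X / sigma2,
                         X.T @ y / sigma2)  # dense posterior mean

    # Structure in the inference rather than the prior: restrict the
    # approximate posterior to a k-sparse support (chosen here by posterior
    # mean magnitude) and refit the Gaussian posterior on that support.
    support = np.sort(np.argsort(-np.abs(mu))[:k])
    Xs = X[:, support]
    mu_s = np.linalg.solve(alpha * np.eye(k) + Xs.T @ Xs / sigma2,
                           Xs.T @ y / sigma2)

    print("selected support:", support, "(true support: 0..4)")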

Information Projection and Approximate Inference for Structured Sparse Variables
Rajiv Khanna, Joydeep Ghosh, Russell Poldrack, and Oluwasanmi Koyejo
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), 2017
[prelim. arXiv]
On prior distributions and approximate inference for structured variables
Oluwasanmi Koyejo, Rajiv Khanna, Joydeep Ghosh, and Russell A Poldrack
Advances in Neural Information Processing Systems (NIPS) 27, 2014
[url]

Learning with Aggregated Data

Existing work in spatio-temporal data analysis invariably assumes that data are available as individual measurements with localized estimates. However, for many applications in econometrics, financial forecasting, and climate science, data are often only available as aggregates. Aggregation presents severe mathematical challenges to learning and inference, and naive application of standard techniques is susceptible to the ecological fallacy. We have shown that in some cases this aggregation has only a mild effect. For other cases, we are developing a variety of tools that enable provably accurate predictive modeling with aggregated data, while avoiding unnecessary and error-prone reconstruction of individual-level data.
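
A minimal sketch of the setting (hypothetical data, not the methods of the papers below): because averaging is linear, least squares on group-aggregated features and responses remains a consistent estimator, with no reconstruction of individual responses.

    import numpy as np

    rng = np.random.default_rng(1)

    # Individual-level covariates, but responses observed only as
    # group-level averages (e.g., regions or reporting periods).
    n, d, n_groups = 600, 10, 30
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n)
    groups = np.repeat(np.arange(n_groups), n // n_groups)

    # Only the per-group means of X and y are available to the learner.
    X_agg = np.stack([X[groups == g].mean(axis=0) for g in range(n_groups)])
    y_agg = np.array([y[groups == g].mean() for g in range(n_groups)])

    # Averaging is linear, so E[y_agg] = X_agg @ w_true: least squares on
    # the aggregates estimates w without reconstructing individual data.
    w_hat, *_ = np.linalg.lstsq(X_agg, y_agg, rcond=None)
    print("parameter error:", np.linalg.norm(w_hat - w_true))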

Frequency Domain Predictive Modeling with Aggregated Data
Avradeep Bhowmik, Joydeep Ghosh, Oluwasanmi Koyejo
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), 2017
Sparse parameter recovery from aggregated data
Avradeep Bhowmik, Joydeep Ghosh, and Oluwasanmi Koyejo
International Conference on Machine Learning (ICML), 2016
[url]

Modeling Graph Data & Graph Statistics

Unlike vectors, graphs are not easily manipulated and compared using standard Euclidean techniques. This is particularly important in neuroscience applications, where brain graph estimates are susceptible to noise and alignment issues. We are exploring novel approaches to data exploration and predictive modeling for graph data. We are especially interested in graphon-based techniques that compactly capture asymptotic graph statistics and associated graph distances. We primarily use these tools to enable scientific analysis of, and predictive modeling from, brain networks.
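
A small illustrative sketch (not from the papers below): edge and triangle densities are examples of the asymptotic statistics a graphon summarizes, and they furnish a simple vector of features for comparing networks.

    import numpy as np

    def edge_density(A):
        """Fraction of ordered node pairs that are connected."""
        n = A.shape[0]
        return A.sum() / (n * (n - 1))

    def triangle_density(A):
        """Homomorphism-style triangle density: closed 3-walks over n^3."""
        n = A.shape[0]
        return np.trace(A @ A @ A) / n**3

    rng = np.random.default_rng(2)
    n, p = 200, 0.2
    # Erdos-Renyi graph, i.e., the constant graphon w(x, y) = p, whose
    # limiting edge density is p and triangle density is p^3.
    A = np.triu(rng.random((n, n)) < p, 1).astype(float)
    A = A + A.T  # symmetric, no self-loops

    print("edge density:", edge_density(A), "(limit:", p, ")")
    print("triangle density:", triangle_density(A), "(limit:", p**3, ")")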

The Dynamics of Functional Brain Networks: Integrated Network States during Cognitive Task Performance
J.M. Shine, P.G. Bissett, P.T. Bell, O. Koyejo, J.H. Balsters, K.J. Gorgolewski, C.A. Moodie, and R.A. Poldrack
Neuron, 2016
[url] [arXiv] [code]
Temporal metastates are associated with differential patterns of time-resolved connectivity, network topology, and attention
James M. Shine, Oluwasanmi Koyejo, and Russell A. Poldrack
Proceedings of the National Academy of Sciences (PNAS), 2016
[url] [arXiv] [code]

Learning with Spatio-temporal Data

Spatio-temporal data are ubiquitous in science and engineering applications. We are pursuing a variety of techniques for modeling such datasets, focusing so far on applications to brain imaging time series. Our methods include: (i) extensions of Gaussian processes that jointly capture spatial and temporal smoothness; (ii) structured decomposition models; (iii) generative deep network models (variational autoencoders and generative adversarial networks).
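
A minimal sketch of item (i), assuming separable squared-exponential kernels (an illustrative simplification, not the models in the papers below): a Kronecker product of temporal and spatial kernels yields a Gaussian process that is smooth jointly in space and time.

    import numpy as np

    def rbf(x, lengthscale):
        """Squared-exponential kernel matrix for 1-D inputs."""
        d = x[:, None] - x[None, :]
        return np.exp(-0.5 * (d / lengthscale) ** 2)

    rng = np.random.default_rng(3)
    t = np.linspace(0, 10, 40)   # time points
    s = np.linspace(0, 1, 15)    # 1-D "spatial" locations

    K_t = rbf(t, lengthscale=1.0)   # temporal smoothness
    K_s = rbf(s, lengthscale=0.2)   # spatial smoothness

    # Separable covariance over the space-time grid: K = K_t kron K_s.
    # A sample from this GP prior is smooth along both axes.
    K = np.kron(K_t, K_s) + 1e-6 * np.eye(len(t) * len(s))
    L = np.linalg.cholesky(K)
    field = (L @ rng.normal(size=len(t) * len(s))).reshape(len(t), len(s))
    print("sampled space-time field:", field.shape)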

A simple and provable algorithm for sparse CCA
Megasthenis Asteris, Anastasios Kyrillidis, Oluwasanmi Koyejo, and Russell A Poldrack
International Conference on Machine Learning (ICML), 2016
[url] [code]
A constrained matrix-variate Gaussian process for transposable data
Oluwasanmi Koyejo, Cheng Lee, and Joydeep Ghosh
Machine Learning, 2014
[url] [arXiv]

Scaling up Machine Learning Systems

In collaboration with systems and infrastructure domain experts, we are developing new architectures for scaling up machine learning systems. Novel challenges include fault tolerance and the predictability of resource requirements. Our approach to system design exploits the inherent noisiness and other natural statistical properties of machine learning problems.
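
A toy illustration of the statistical slack involved (not one of our system designs): stochastic gradient descent still converges when a fraction of updates is lost, as under simulated worker failures, because each surviving minibatch gradient remains unbiased.

    import numpy as np

    rng = np.random.default_rng(4)

    # Least-squares objective; stochastic gradients from random minibatches.
    n, d = 1000, 20
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n)

    def sgd(drop_prob, steps=2000, lr=0.01, batch=32):
        """SGD where each update is lost with probability drop_prob."""
        w = np.zeros(d)
        for _ in range(steps):
            if rng.random() < drop_prob:
                continue  # simulate a failed or dropped worker update
            idx = rng.integers(0, n, size=batch)
            grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch
            w -= lr * grad
        return np.linalg.norm(w - w_true)

    print("error with no drops: ", sgd(0.0))
    print("error with 30% drops:", sgd(0.3))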

Selected Publications In Preparation

Interpretable Machine Learning

As machine learning methods have become ubiquitous in human decision making, their transparency and interpretability have grown in importance. Interpretability is particularly important in domains where decisions can have significant consequences. Examples abound where interpretable models reveal important but surprising patterns in the data that complex models obscure. We are currently studying exemplar-based interpretable modeling. This is motivated by studies of human reasoning which suggest that the use of examples (prototypes) is fundamental to the development of effective strategies for tactical decision-making. We are also exploring the application of structured sparsity and attention (with deep neural networks) to enable interpretability.
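
A minimal sketch in the spirit of the prototype-selection step of the MMD-critic paper below (simplified, not the paper's implementation): greedily choose exemplars whose kernel mean embedding best matches that of the data.

    import numpy as np

    def rbf_kernel(X, Y, gamma=0.5):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def greedy_prototypes(X, m, gamma=0.5):
        """Greedily pick m points minimizing MMD^2 between X and the picks."""
        K = rbf_kernel(X, X, gamma)
        data_term = K.mean()  # constant in the selection
        chosen = []
        for _ in range(m):
            best, best_obj = None, np.inf
            for c in range(len(X)):
                if c in chosen:
                    continue
                S = chosen + [c]
                cross = K[:, S].mean()           # E_x E_s k(x, s)
                within = K[np.ix_(S, S)].mean()  # E_s E_s' k(s, s')
                obj = data_term - 2 * cross + within  # MMD^2 estimate
                if obj < best_obj:
                    best, best_obj = c, obj
            chosen.append(best)
        return chosen

    rng = np.random.default_rng(5)
    # Two well-separated clusters; good prototypes should cover both.
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
    print("prototype indices:", greedy_prototypes(X, m=4))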

Examples are not Enough, Learn to Criticize! Criticism for Interpretable Machine Learning
Been Kim*, Rajiv Khanna* and Oluwasanmi Koyejo*
Advances in Neural Information Processing Systems (NIPS) 29, 2016 (Oral)
[url] [code]
Sparse dependent Bayesian structure learning
Anqi Wu, Mijung Park, Oluwasanmi Koyejo, and Jonathan Pillow
Advances in Neural Information Processing Systems (NIPS) 27, 2014
[url]

Sequential Decision Making & Interactive Processes

Human decision making is complex, and may depend on many different exogenous factors as well as the task at hand. Indeed, biases such as anchoring, confirmation, framing, the base rate fallacy, primacy, and recency, among others, are well documented in the literature. We consider such biases as features of an interactive process that impacts human decision-making. Two major challenges in dealing with human-machine interactions are exponential complexity and transience. Processes with these characteristics are widespread, e.g., in decision support systems from online advertising to education and healthcare. We are exploring data-driven techniques to infer and model their role in human-machine interaction, towards improving sequential decision making.

Selected Publications In Preparation

Learning to Rank

Determining priorities over a set of items or choices, ranking items based on preferences, or scoring them so that a desired preference order is maintained are fundamental activities in both science and industry. We are developing novel scalable techniques for learning to rank that elegantly manage the combinatorial complexity of the ranking space via monotone retargeting. We have explored both standard ranking and collaborative ranking, and are currently investigating extensions to adaptive and online scenarios.
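
A small sketch of the monotone retargeting idea (simplified from the UAI paper below; the standardization step is a stand-in for the paper's constraints that rule out degenerate solutions): alternate between fitting scores to the current targets and replacing the targets with their best order-preserving transform via isotonic regression.

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    rng = np.random.default_rng(6)

    # Items with features; latent relevance defines the target ordering.
    n, d = 200, 5
    X = rng.normal(size=(n, d))
    relevance = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
    order = np.argsort(relevance)  # indices from least to most relevant

    # Initial targets: the ranks themselves, standardized.
    targets = np.argsort(order).astype(float)
    targets = (targets - targets.mean()) / targets.std()

    w = np.zeros(d)
    for _ in range(20):
        # (1) Fit scores to the current targets by least squares.
        w, *_ = np.linalg.lstsq(X, targets, rcond=None)
        scores = X @ w
        # (2) Retarget: project the scores onto the set of target vectors
        # that are monotone in the desired order (isotonic regression).
        iso = IsotonicRegression()
        targets[order] = iso.fit_transform(np.arange(n), scores[order])
        # Standardize to avoid the degenerate constant solution.
        targets = (targets - targets.mean()) / targets.std()

    # Fraction of item pairs ordered consistently with the true relevance.
    s = X @ w
    agree = (s[:, None] - s[None, :]) * (relevance[:, None] - relevance[None, :]) > 0
    print("pairwise agreement:", agree.mean())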

Preference Completion from Partial Rankings
Suriya Gunasekar, Oluwasanmi Koyejo, and Joydeep Ghosh
Advances in Neural Information Processing Systems (NIPS) 29, 2016
[url] [code]
Learning to rank with Bregman divergences and monotone retargeting
Sreangsu Acharyya*, Oluwasanmi Koyejo*, and Joydeep Ghosh
Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence (UAI), 2012
[pdf] [arXiv]