Interested in working with me?

Preliminaries

Please read this note for prospective students and review my research page. This should give you a good idea of expectations for machine learning research in my lab.

Postdocs

I'm considering postdocs as part of the Illinois Institute for Data Science and Dynamical Systems (IDS2) and the Illinois Future Faculty Fellows program.

Graduate students applying to Illinois

I apologize that I am no longer able to respond to individual inquiries. Please submit your application to Illinois (CS, ECE, Statistics). You may indicate your interest in working with me as part of your application to ensure that I read it.

Current Illinois students

You may contact me by email. Please include a resume with any relevant background and a brief note on how our interests might overlap. For some research topics, experience and/or an interest in neuroscience is a plus, but not required.
Undergraduate Students: I will consider advising your senior thesis if our interests are aligned (see my research page). You may also consider contacting Ph.D. students in my lab.
Graduate Students: I am happy to chat about research if our interests are closely aligned.

Spring 2021

We have several research and software positions. Please contact me (see above) if interested.

Surgical planning in virtual reality (Software engineering internship, joint with Brad Sutton and OSF Healthcare)

We seek a software engineering intern to work on biomedical image translation and data processing. Specific tasks include simplifying DICOM extraction from PACS, translating images between PACS DICOM and Segmented DICOM, and working with various segmentation programs (Mimics and other auto-segmentation tools) around the DICOM-OBJ standard. Expected background includes strong software development skills; experience with biomedical image processing is a plus, but not required. We anticipate that the role will run for up to one year, at 10 hours per week and $20/hour.
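As a rough illustration of the kind of data handling involved (not a specification of the internship deliverables), here is a minimal sketch that reads a DICOM file with the pydicom library and applies the standard rescale tags; the file name below is a hypothetical placeholder.

```python
# Minimal sketch: load a DICOM file and return its pixel data as a NumPy array.
# The file name is a hypothetical placeholder; real PACS extraction and
# Segmented DICOM / OBJ conversion involve considerably more than this.
import numpy as np
import pydicom

def load_dicom_pixels(path):
    """Read a DICOM file and return rescaled pixel values."""
    ds = pydicom.dcmread(path)                    # parse headers and pixel data
    pixels = ds.pixel_array.astype(np.float32)    # raw stored values
    # Apply rescale slope/intercept if present (common for CT data).
    slope = float(getattr(ds, "RescaleSlope", 1.0))
    intercept = float(getattr(ds, "RescaleIntercept", 0.0))
    return pixels * slope + intercept

if __name__ == "__main__":
    img = load_dicom_pixels("example.dcm")        # hypothetical file name
    print(img.shape, img.min(), img.max())
```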

Cancer imaging and diagnosis (Undergrad hourly or Graduate RA, joint with Wes Tansey at Memorial Sloan Kettering Cancer Center)

Cancer imaging data represents a wealth of information for diagnosing and treating patients. Standard H&E slides of biopsies provide single-cell resolution of tumor tissue, enabling fine-grained inspection and tumor classification by histopathologists. Advances in multiplex immunofluorescence enhance this by overlaying markers for various immune-related proteins to further refine tumor characterization. The richness of these data suggests that automated techniques leveraging recent advances in deep learning may be able to assist clinicians with much of this work: speeding up tumor characterization, improving cell typing, and providing outcome predictions and prognoses. Unfortunately, current work in cancer imaging is labor-intensive, requiring massive labeling effort from histopathologists to build training corpora and manual annotation of immunofluorescence markers by bioinformaticians. This high time burden limits the size of the training corpora that can be gathered and reduces the power of deep learning models. We propose to overcome these challenges along two lines:
(i) Combining deep computer vision models with Bayesian spatial modeling to maximize predictive power on cancer imaging datasets.
(ii) Developing a spatially-aware active learning framework for tumor segmentation and cell typing to minimize pathologists' labeling burden.
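To make the second idea more concrete, below is a minimal, generic uncertainty-sampling loop, a standard active learning baseline, sketched on toy data. The features, model, and "oracle" labels are placeholders, and the spatially-aware component described above is not implemented here; this is only meant to show how querying the least certain examples can reduce the number of labels a pathologist would need to provide.

```python
# Generic active learning sketch (uncertainty sampling) on toy data.
# Everything here is a placeholder standing in for per-cell image features,
# cell-type labels, and a pathologist acting as the labeling oracle.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: 1000 "cells" with 16 features and a binary cell-type label.
X = rng.normal(size=(1000, 16))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

labeled = list(rng.choice(len(X), size=20, replace=False))
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for round_ in range(5):
    model.fit(X[labeled], y[labeled])
    # Query the points the model is least certain about (probability near 0.5).
    probs = model.predict_proba(X[unlabeled])[:, 1]
    uncertainty = -np.abs(probs - 0.5)
    query = [unlabeled[i] for i in np.argsort(uncertainty)[-10:]]
    # In practice a pathologist would label these; the true labels stand in here.
    labeled.extend(query)
    unlabeled = [i for i in unlabeled if i not in query]
    print(f"round {round_}: {len(labeled)} labels, "
          f"accuracy {model.score(X, y):.3f}")
```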