Schedule

Wednesday, March 8, 2023 (Japan Standard Time)

Tuesday, March 7, 2023 (EST/PST)

Session 1: Recent Advances in Medical Science

(9:00-10:00, JST)

9:00 - 9:30

19:00-19:30, EST

16:00-16:30, PST

Learning Disentangled Representations for T Cell Receptor Design

Tianxiao Li

Abstract: The interaction between T cell receptors (TCRs) and peptide antigens is critical for human immune responses. Designing antigen-specific TCRs therefore represents an important step in adoptive immunotherapy. As experimental procedures for TCR engineering are expensive, leveraging machine learning methods to learn and optimize informative TCR representations, and thereby computationally generate antigen-specific candidate TCRs, has recently attracted a surge of interest. In particular, learning representations that isolate salient (or explanatory) factors to explicitly capture the interplay between TCR and antigen is crucial for model explainability, training sample efficiency, and effective conditional TCR modification, and is thus highly desirable. In this work, we propose a novel strategy to attain this goal. Our proposed autoencoder model generates disentangled embeddings in which different sets of dimensions correspond to generic TCR sequence backbones and antigen binding-related patterns, respectively. The resulting disentangled embedding space not only improves the interpretability of the model but also enables one-pass optimization of TCR sequences conditioned on antigen binding properties. By modifying the binding-related parts of the embedding, our model can generate TCR sequences with enhanced binding affinity while maintaining the backbone of the template TCR. It outperforms several baseline methods by generating more valid TCR sequences with higher binding affinity to the given antigen. Promisingly, the functional embeddings can also serve as a signature for distinguishing the peptide-binding properties of TCR sequences, which can further benefit applications such as classification and clustering of TCR specificity.
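As a purely hypothetical illustration (not the speaker's implementation), the split latent space described above can be sketched with a toy linear autoencoder in NumPy: the first dimensions of the code stand in for the generic backbone and the remainder for binding-related patterns, and a one-pass conditional edit touches only the binding part. All sizes and weights here are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_BACKBONE, D_BINDING = 20, 6, 2   # toy dimensions, not from the talk
W_enc = rng.normal(size=(D_IN, D_BACKBONE + D_BINDING)) * 0.1
W_dec = rng.normal(size=(D_BACKBONE + D_BINDING, D_IN)) * 0.1

def encode(x):
    """Map a toy real-valued TCR feature vector to a latent code, split
    into a backbone part and a binding-related part."""
    z = x @ W_enc
    return z[:D_BACKBONE], z[D_BACKBONE:]

def decode(z_backbone, z_binding):
    """Reconstruct features from the concatenated latent code."""
    return np.concatenate([z_backbone, z_binding]) @ W_dec

# One-pass conditional modification: keep the backbone code fixed and
# move only the binding-related dimensions toward a target direction.
x = rng.normal(size=D_IN)
z_bb, z_bind = encode(x)
target_direction = rng.normal(size=D_BINDING)
x_new = decode(z_bb, z_bind + 0.5 * target_direction)
print(x_new.shape)
```

In the actual model the encoder and decoder are learned neural networks over sequences and the disentanglement is enforced during training; the sketch only shows how a split embedding enables editing one factor while holding the other fixed.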


9:30 - 10:00

19:30-20:00, EST

16:30-17:00, PST

Uses of Synthetic Data in Machine Learning for Healthcare

Dr. Luyao Shi

Panel: Challenges of ChatGPT

(10:00-10:50, JST)

Panelists: Dr. Alexander Fabbri, Dr. Yu Su, Michihiro Yasunaga, Vanessa Yan

Moderator: Irene Li

Session 2: Recent Advances in Natural Language Processing

(10:50-11:50, JST)

10:50 - 11:20

20:50-21:20, EST

17:50-18:20, PST

Revisiting Summarization Evaluation: A Novel Benchmark and a Study on Opinion Summarization

Dr. Alexander Fabbri

Abstract: In this talk, I will present two recent works on text summarization. In the first, we examine human evaluation protocols for summarization and develop the ROSE benchmark, consisting of over 22k summary-level annotations of summary salience over state-of-the-art systems on three datasets, collected using our proposed Atomic Content Unit (ACU) protocol. We compare our protocol against existing protocols and point to potential problems in evaluating LLMs with them. In the second, we examine pipelines for opinion summarization based on GPT-3, focusing in particular on the need for a more targeted evaluation of LLMs. We examine several proxy metrics for faithfulness, factuality, and genericity to better automatically evaluate such models.
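To give a rough, hypothetical flavor of ACU-style scoring (the real protocol relies on human judgments of whether each unit is conveyed, not string matching), a system summary can be scored as the fraction of annotated atomic content units it covers:

```python
def acu_score(summary: str, acus: list[str]) -> float:
    """Toy ACU-style score: fraction of atomic content units whose words
    all appear in the summary. A stand-in for human unit matching."""
    words = set(summary.lower().split())
    matched = sum(1 for acu in acus if set(acu.lower().split()) <= words)
    return matched / len(acus) if acus else 0.0

units = ["the bill passed", "the vote was close"]
print(acu_score("The bill passed and the vote was close", units))  # 1.0
```

Averaging such unit-level scores over many summaries gives a summary-level salience annotation of the kind the benchmark aggregates.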


11:20 - 11:50

21:20-21:50, EST

18:20-18:50, PST

Grounding Language Models to Real-World Environments

Dr. Yu Su

Abstract: The rapid development of language models (LMs), especially the recent release of ChatGPT, seems to point us toward a future where natural language serves as a universal device, powered by LMs, for automated problem solving and for interacting with the (computing) world. However, a key missing piece in realizing this future is the connection between LMs and real-world environments, including both digital environments (e.g., databases, knowledge bases, software, and websites) and physical environments (e.g., robots that follow language instructions). In such environments, instead of generating free-form text, LMs need to generate environment-dependent formal programs to achieve the desired effects specified by users' language commands. In this talk, I will discuss the unique challenges of this exciting research frontier as well as recent developments that point toward promising solutions: 1) Pangu, a generic neurosymbolic framework for grounded language understanding, which features a symbolic agent and a neural LM working in concert; and 2) LLM-Planner, which leverages large language models such as GPT-3 for robot planning to interact with physical environments. The talk will conclude with a discussion of promising future directions.


Session 3: Recent Advances in Machine Learning

(11:50-14:00, JST, Tentative)

11:50 - 12:20

21:50-22:20, EST

18:50-19:20, PST

Natural Language Processing for Neuroscience Database Mining

Yujie Qiao

Abstract: Recent decades have seen massive development of data-based models of neurons. Motivated by the lack of a large-scale database for neuroscience researchers to share those models, ModelDB was founded to help researchers discover models of interest, collecting over 1,700 published models to date. Nonetheless, not much progress has been made on information extraction from ModelDB. We propose using SPECTER and LinkBERT, two recent language models for embedding scientific documents, to map the inter-document relatedness of 1,568 models using each model's title and abstract. We then combine this representation with a Gaussian Naïve Bayes classifier to predict each model's metadata, in the hope that discovering patterns in metadata and the biological information it implies can benefit the broader neuroscience community.
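A minimal sketch of the classification stage, assuming the SPECTER/LinkBERT embeddings have already been computed as fixed-length vectors (the data, dimensions, and this from-scratch Gaussian Naive Bayes are illustrative, not the speaker's code):

```python
import numpy as np

class GaussianNB:
    """Tiny Gaussian Naive Bayes: per-class, per-dimension mean/variance."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.logprior = np.log(np.array([np.mean(y == c) for c in self.classes]))
        return self

    def predict(self, X):
        # Log-likelihood of each sample under each class Gaussian, plus prior.
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[np.argmax(ll + self.logprior, axis=1)]

# Toy "document embeddings" for two well-separated metadata classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(3, 1, (20, 8))])
y = np.array([0] * 20 + [1] * 20)
clf = GaussianNB().fit(X, y)
print(clf.predict(X[:2]), clf.predict(X[-2:]))
```

In the described pipeline, X would hold one embedding per ModelDB entry (from its title and abstract) and y a metadata field to be predicted.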


12:20 - 12:50

22:20-22:50, EST

19:20-19:50, PST

Predicting Plant Biodiversity at Scale Using Machine Learning and Open Science

Lauren Gillespie

Abstract: From a warming planet and more extreme weather patterns to deforestation, worldwide plant biodiversity is increasingly at risk of extinction. Despite the threats facing plant biodiversity globally, we still lack models that can effectively monitor and predict plant communities and biodiversity at high spatial and temporal resolution. Here, I showcase a deep convolutional neural network that can predict plant species presence from high-resolution remote sensing imagery paired with publicly available species occurrences. This model successfully captures the distribution of both broad- and narrow-ranged plant species in California and recapitulates expected spatial and temporal macroecological trends in a wide variety of plant communities across the state. Furthermore, this model can be used as an efficient feature extractor for several relevant downstream spatial mapping tasks, outperforming both a state-of-the-art unsupervised contrastive learning approach for cropland mapping and the official National Park Service vegetation map of Redwood National and State Parks. This work showcases how deep learning paired with open data can be used to automate plant biodiversity monitoring at scale.

12:50 - 13:20

22:50-23:20, EST

19:50-20:20, PST

Learning Protein Representations via Complete 3D Graph Networks

Haoran Liu

Abstract: Learning effective representations of proteins is crucial to a variety of tasks in biology such as predicting protein function or interaction. We consider representation learning for proteins with 3D structures. We build 3D graphs based on protein structures and develop graph networks to learn their representations. Depending on the levels of detail that we wish to capture, protein representations can be computed at different levels, the amino acid, backbone, or all-atom levels. Importantly, there exist hierarchical relations among different levels. In this work, we propose to develop a novel hierarchical graph network, known as ProNet, to capture the relations. Our ProNet is very flexible and can be used to compute protein representations at different levels of granularity. By treating each amino acid as a node in graph modeling as well as harnessing the inherent hierarchies, our ProNet is more effective and efficient than existing methods. We also show that, given a base 3D graph network that is complete, our ProNet representations are also complete at all levels. Experimental results show that ProNet outperforms recent methods on most datasets. In addition, results indicate that different downstream tasks may require representations at different levels.


13:20 - 13:50

23:20-23:50, EST

20:20-20:50, PST

Self-Adaptive Training: Bridging Supervised and Self-Supervised Learning

Lang Huang 

Abstract: In this talk, I will delve into Self-Adaptive Training, a novel, unified training algorithm that harnesses model predictions to dynamically enhance the training process for deep neural networks in both supervised and self-supervised learning. Our research investigates the impact of various forms of data corruption, such as random noise and adversarial examples, on the training dynamics of deep networks. Our findings demonstrate that model predictions can effectively amplify valuable information within the data, even in the absence of labels. This highlights the benefits of incorporating model predictions into the training process, improving both the generalization of deep networks under noisy conditions and representation learning in self-supervised models. Furthermore, our analysis sheds light on understanding deep learning more broadly, including explaining the double-descent phenomenon in empirical risk minimization and resolving limitations in state-of-the-art self-supervised algorithms.
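The core idea of letting model predictions steer training can be sketched (hypothetically; this is a simplified caricature, not the speaker's exact algorithm) as an exponential moving average that blends the original, possibly noisy label with the model's current prediction, so that confidently and consistently predicted examples gradually override corrupted labels:

```python
import numpy as np

def update_targets(targets, predictions, momentum=0.9):
    """Self-adaptive-style target update: move each training target toward
    the model's current prediction. `momentum` here is a toy choice."""
    return momentum * targets + (1.0 - momentum) * predictions

# Toy example: a one-hot, possibly mislabeled target drifting toward a
# confident model prediction over repeated updates.
target = np.array([1.0, 0.0, 0.0])        # noisy label says class 0
prediction = np.array([0.05, 0.9, 0.05])  # model consistently believes class 1
for _ in range(10):
    target = update_targets(target, prediction)
print(target.argmax())  # the soft target now follows the model's belief
```

Because both the label and the prediction are probability distributions, the blended target remains a valid distribution after every update, which is what makes it usable as a training signal.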