Stanford AI Lab Papers and Talks at NeurIPS 2021

Exciting Stanford AI Lab Research and Presentations from NeurIPS 2021

Introduction:

Welcome to the Thirty-Fifth Conference on Neural Information Processing Systems (NeurIPS) 2021! This year, the conference is being hosted virtually from December 6th to 14th. We are thrilled to share the exciting work from SAIL (Stanford Artificial Intelligence Laboratory) that will be presented at the main conference, as well as at the Datasets and Benchmarks track and various workshops.

Our SAIL community members are also co-organizers of several workshops happening on December 13-14, so make sure to check them out! You’ll find links to papers, videos, and blogs below.

From improving the compositionality of neural networks to reverse engineering recurrent neural networks, our researchers have been exploring various topics in the field of AI and deep learning. We also have work on scene generation, combining different models, emergent communication, data pruning, exploration through learned language abstraction, and much more.

If you’re interested in any of these exciting projects, feel free to reach out to the contact authors and workshop organizers directly. They would be happy to provide more information about the work happening at Stanford.

Stay tuned for more updates from NeurIPS 2021 and keep exploring the cutting-edge research in the field of neural information processing systems!

Full Article: Exciting Stanford AI Lab Research and Presentations from NeurIPS 2021

The Thirty-Fifth Conference on Neural Information Processing Systems (NeurIPS) 2021 is set to be held virtually from December 6th to 14th. The conference provides a platform for researchers and practitioners to present their latest work. In this article, we highlight some of the exciting papers and workshops that will be presented at NeurIPS 2021 by the Stanford Artificial Intelligence Laboratory (SAIL) community.

Main Conference

Improving Compositionality of Neural Networks by Decoding Representations to Inputs
Authors: Mike Wu, Noah Goodman, Stefano Ermon
Contact: wumike@stanford.edu
Keywords: generative models, compositionality, decoder


Reverse engineering recurrent neural networks with Jacobian switching linear dynamical systems
Authors: Jimmy T.H. Smith, Scott W. Linderman, David Sussillo
Contact: jsmith14@stanford.edu
Keywords: recurrent neural networks, switching linear dynamical systems, interpretability, fixed points

Compositional Transformers for Scene Generation
Authors: Drew A. Hudson, C. Lawrence Zitnick
Contact: dorarad@cs.stanford.edu
Keywords: GANs, transformers, compositionality, scene synthesis

Combining Recurrent, Convolutional, and Continuous-time Models with Linear State Space Layers
Authors: Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, Chris Ré
Contact: albertgu@stanford.edu
Keywords: recurrent neural networks, rnn, continuous models, state space, long-range dependencies, sequence modeling

Emergent Communication of Generalizations
Authors: Jesse Mu, Noah Goodman
Contact: muj@stanford.edu
Keywords: emergent communication, multi-agent communication, language grounding, compositionality

Deep Learning on a Data Diet: Finding Important Examples Early in Training
Authors: Mansheej Paul, Surya Ganguli, Gintare Karolina Dziugaite
Contact: mansheej@stanford.edu
Keywords: data pruning

ELLA: Exploration through Learned Language Abstraction
Authors: Suvir Mirchandani, Siddharth Karamcheti, Dorsa Sadigh
Contact: suvir@cs.stanford.edu
Keywords: instruction following, reward shaping, reinforcement learning

CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation
Authors: Yusuke Tashiro, Jiaming Song, Yang Song, Stefano Ermon
Contact: ytashiro@stanford.edu
Keywords: score-based generative modeling, time series imputation

Confidence-Aware Imitation Learning from Demonstrations with Varying Optimality
Authors: Songyuan Zhang, Zhangjie Cao, Dorsa Sadigh, Yanan Sui
Contact: szhang21@mit.edu
Keywords: imitation learning, learning from demonstration, learning from suboptimal demonstrations

Explaining heterogeneity in medial entorhinal cortex with task-driven neural networks
Authors: Aran Nayebi, Alexander Attinger, Malcolm G. Campbell, Kiah Hardcastle, Isabel I.C. Low, Caitlin S. Mallory, Gabriel C. Mel, Ben Sorscher, Alex H. Williams, Surya Ganguli, Lisa M. Giocomo, Daniel L.K. Yamins
Contact: anayebi@stanford.edu
Award nominations: Spotlight Presentation
Keywords: neural coding, medial entorhinal cortex, grid cells, biologically-inspired navigation, path integration, recurrent neural networks

On the theory of reinforcement learning with once-per-episode feedback
Authors: Niladri Chatterji, Aldo Pacchiano, Peter Bartlett, Michael Jordan
Contact: niladri@cs.stanford.edu
Keywords: theoretical reinforcement learning, binary rewards, non-Markovian rewards

HyperSPNs: Compact and Expressive Probabilistic Circuits
Authors: Andy Shih, Dorsa Sadigh, Stefano Ermon
Contact: andyshih@stanford.edu
Keywords: generative models, tractable probabilistic models, sum product networks, probabilistic circuits

COMBO: Conservative Offline Model-Based Policy Optimization
Authors: Tianhe Yu*, Aviral Kumar*, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn
Contact: tianheyu@stanford.edu
Keywords: offline reinforcement learning, model-based reinforcement learning, deep reinforcement learning


Conservative Data Sharing for Multi-Task Offline Reinforcement Learning
Authors: Tianhe Yu*, Aviral Kumar*, Yevgen Chebotar, Karol Hausman, Sergey Levine, Chelsea Finn
Contact: tianheyu@stanford.edu
Keywords: offline reinforcement learning, multi-task reinforcement learning, deep reinforcement learning

Autonomous Reinforcement Learning via Subgoal Curricula
Authors: Archit Sharma, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn
Contact: architsh@stanford.edu
Keywords: reinforcement learning, curriculum, autonomous learning, reset-free reinforcement learning

Lossy Compression for Lossless Prediction
Authors: Yann Dubois, Benjamin Bloem-Reddy, Karen Ullrich, Chris J. Maddison
Contact: yanndubs@stanford.edu
Award nominations: Spotlight Presentation
Keywords: compression, invariances, information theory, machine learning, self-supervised learning

Capturing implicit hierarchical structure in 3D biomedical images with self-supervised hyperbolic representations
Authors: Joy Hsu, Jeffrey Gu, Gong-Her Wu, Wah Chiu, Serena Yeung
Contact: joycj@stanford.edu
Keywords: hyperbolic representations, hierarchical structure, biomedical

Estimating High Order Gradients of the Data Distribution by Denoising
Authors: Chenlin Meng, Yang Song, Wenzhe Li, Stefano Ermon
Contact: chenlin@stanford.edu
Keywords: score matching, Langevin dynamics, denoising, generative modeling

Universal Off-Policy Evaluation
Authors: Yash Chandak, Scott Niekum, Bruno Castro da Silva, Erik Learned-Miller, Emma Brunskill, Philip Thomas
Contact: ychandak@cs.umass.edu
Keywords: metrics, risk, distribution, CDF, off-policy evaluation, OPE, reinforcement learning, counterfactuals, high-confidence bounds, confidence intervals

Evidential Softmax for Sparse Multimodal Distributions in Deep Generative Models
Authors: Phil Chen, Masha Itkina, Ransalu Senanayake, Mykel J. Kochenderfer
Contact: philhc@stanford.edu
Keywords: deep learning or neural networks, sparsity and feature selection, variational inference, (application) natural language and text processing

Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss
Authors: Jeff Z. HaoChen, Colin Wei, Adrien Gaidon, Tengyu Ma
Contact: jhaochen@stanford.edu
Keywords: deep learning theory, unsupervised learning theory, representation learning theory

Provable Model-based Nonlinear Bandit and Reinforcement Learning

Summary: Exciting Stanford AI Lab Research and Presentations from NeurIPS 2021

The thirty-fifth Conference on Neural Information Processing Systems (NeurIPS) 2021 will be held virtually from December 6th to 14th. Stanford Artificial Intelligence Laboratory (SAIL) is excited to share the research being presented by its members at the main conference, Datasets and Benchmarks track, and various workshops. The work covers a wide range of topics such as improving the compositionality of neural networks, reverse engineering recurrent neural networks, scene generation using compositional transformers, and more. Links to papers, videos, and blogs are provided for further exploration. Additionally, SAIL members are co-organizing several workshops, offering more opportunities to delve into cutting-edge research.


Frequently Asked Questions:

Q1: What is artificial intelligence (AI)?

A1: Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. It involves the development of computer systems capable of performing tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and problem-solving.

Q2: How is artificial intelligence used in everyday life?

A2: Artificial intelligence has various applications in everyday life, such as virtual assistants (e.g., Siri, Alexa), recommendation systems (e.g., product or movie recommendations), autonomous vehicles, spam filtering, fraud detection, and personalized advertising. AI algorithms are also utilized in healthcare for disease diagnosis, in finance for investment analysis, and in manufacturing for optimizing production processes.

Q3: What are the different types of artificial intelligence?

A3: Artificial intelligence can be broadly categorized into three types: narrow AI, general AI, and superintelligent AI. Narrow AI focuses on one specific task or area, such as image recognition or natural language processing. General AI aims to exhibit human-level intelligence and be capable of performing any cognitive task that a human can do. Superintelligent AI refers to AI systems that surpass human intelligence across all domains.

Q4: What are the potential benefits and risks associated with artificial intelligence?

A4: Artificial intelligence offers numerous benefits, including increased efficiency, improved accuracy, automation of repetitive tasks, enhanced decision-making capabilities, and innovations across industries. It also carries risks, such as job displacement, ethical concerns (e.g., biased decision-making), privacy invasion, and the possibility that AI systems gain autonomy and cause harm if not properly controlled.

Q5: Is artificial intelligence a threat to humanity?

A5: Whether artificial intelligence poses a threat to humanity is a topic of ongoing debate. While AI can have both positive and negative consequences, many experts argue that as long as it is developed responsibly, with appropriate regulations and ethical considerations in place, it can greatly benefit society and improve our quality of life. Striking a balance between innovation and responsible development is crucial to avoiding potential risks.