SDMIA Fall Symposium: Invited Speakers

Craig Boutilier

Google

Title: Large-scale MDPs in Practice: Opportunities and Challenges
Abstract:
Markov decision processes (MDPs) have been very well studied in AI over the past 20 years and offer great promise as a model for sophisticated decision making. However, the practical application of MDPs and reinforcement learning (RL), in particular of AI-based approaches, has been somewhat limited. Indeed, the use of MDPs and RL in AI applications pales in comparison to the wide-ranging applications of machine learning across a variety of industrial sectors.

In this talk, I'll discuss:

  • a sample of areas of direct industrial relevance where MDPs and RL have great promise;
  • some speculation as to why ML methods in these areas have succeeded, while the application of sequential decision-making techniques has faltered;
  • how we can bridge that gap, including: techniques for leveraging existing large-scale ML methods for modeling MDPs; the tension between model-based and model-free methods; and time permitting, some thoughts on solution methods for such models at industrial scale.
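For readers less familiar with MDPs, the following minimal Python sketch runs value iteration on a made-up two-state problem; the states, rewards, transition probabilities, and discount factor are illustrative assumptions, not material from the talk.

# Minimal value iteration for a toy two-state MDP (all numbers invented for illustration).
# P[s][a] is a list of (next_state, probability) pairs; R[s][a] is the immediate reward.
P = {0: {'stay': [(0, 1.0)], 'go': [(1, 0.9), (0, 0.1)]},
     1: {'stay': [(1, 1.0)], 'go': [(0, 0.9), (1, 0.1)]}}
R = {0: {'stay': 0.0, 'go': 1.0},
     1: {'stay': 2.0, 'go': 0.0}}
gamma = 0.95                     # discount factor
V = {s: 0.0 for s in P}          # value function, initialised to zero

def q(s, a, V):
    # One-step lookahead: expected return of taking action a in state s.
    return R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])

for _ in range(1000):            # repeated Bellman backups until (approximately) converged
    V = {s: max(q(s, a, V) for a in P[s]) for s in P}

policy = {s: max(P[s], key=lambda a: q(s, a, V)) for s in P}
print(V, policy)

The gap the abstract points to is that this enumeration of states and actions does not survive contact with industrial-scale problems, where models must instead be represented and solved with the kind of large-scale ML machinery mentioned above.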

Emma Brunskill

CMU

Title: Quickly Learning to Make Good Decisions
Abstract:
A fundamental goal of artificial intelligence is to create agents that learn to make good decisions as they interact with a stochastic environment. Some of the most exciting and valuable potential applications involve systems that interact directly with humans, such as intelligent tutoring systems or medical interfaces. In these cases, sample efficiency is critical, since each decision, good or bad, affects a real person. I will describe our research on tackling this challenge, as well as its relevance to improving educational tools.

Alan Fern

Oregon State

Title: TBD
Abstract:
TBD

Mykel Kochenderfer

Stanford

Title: Decision Theoretic Planning for Air Traffic Applications
Abstract:
Every large aircraft in the world is equipped with a collision avoidance system that alerts pilots to potential conflicts with other aircraft and recommends maneuvers to avoid collision. Due to the potentially catastrophic consequences of errors in its operation, the complex decision-making rules underlying the system have received considerable scrutiny over the past few decades. Recently, the international safety community has been working to standardize a new system for worldwide deployment that is derived from a partially observable Markov decision process (POMDP) formulation. This talk will discuss the process taken to develop the system and to build confidence in its safe operation. In addition, several other applications of POMDPs to air traffic problems will be outlined.
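As a generic illustration of the POMDP machinery mentioned above (and emphatically not the deployed avoidance logic), the following Python sketch performs a discrete Bayesian belief update over a hidden intruder state; the states, transition probabilities, and observation model are invented for the example, and the dependence on the chosen action is omitted for brevity.

# Generic discrete belief update for a POMDP (illustrative only; not the certified avoidance logic).
# belief[s] is the probability of hidden state s; T[s][s2] the transition probability
# (action dependence omitted for brevity); O[s2][o] the likelihood of observation o in state s2.
def belief_update(belief, T, O, obs):
    predicted = {s2: sum(belief[s] * T[s][s2] for s in belief) for s2 in T}
    unnorm = {s2: O[s2][obs] * predicted[s2] for s2 in predicted}
    total = sum(unnorm.values())
    return {s2: w / total for s2, w in unnorm.items()}

# Toy encounter model: is the intruder flying level or climbing?
T = {'level':    {'level': 0.95, 'climbing': 0.05},
     'climbing': {'level': 0.10, 'climbing': 0.90}}
O = {'level':    {'low_rate': 0.8, 'high_rate': 0.2},
     'climbing': {'low_rate': 0.3, 'high_rate': 0.7}}

belief = {'level': 0.5, 'climbing': 0.5}
belief = belief_update(belief, T, O, 'high_rate')
print(belief)   # posterior mass shifts toward 'climbing'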

Milind Tambe

USC

Title: TBD
Abstract:
TBD

Jason Williams

Microsoft Research

Title: Decision-theoretic control in dialog systems: recent progress and opportunities for research
Abstract:
Dialog systems interact with a person using natural language to help them achieve some goal. Dialog systems are now a part of daily life, with commercial systems including Microsoft Cortana, Apple Siri, Amazon Echo, Google Now, Facebook M, in-car systems, and many others. Because dialog is a sequential process, and because computers' ability to understand human language is error-prone, it has long been an important application for sequential decision making under uncertainty. In this talk, I will first present the dialog system problem through the lens of decision making under uncertainty. I'll then survey recent work which has tailored methods for state tracking and action selection from the general machine learning literature to the dialog problem. Finally, I'll discuss open problems and current opportunities for research.
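To make the state tracking and action selection mentioned above a little more concrete, here is a toy Python sketch, not drawn from any of the systems listed: a tracker that maintains a distribution over candidate slot values from noisy understanding results, plus a simple threshold policy that decides whether to act, confirm, or re-prompt. All names, numbers, and thresholds are illustrative assumptions.

# Toy dialog state tracker and action selector (illustrative assumptions; not any production system).
# Each turn, the understanding component returns noisy {value: confidence} hypotheses for a slot;
# the tracker keeps a distribution over candidate slot values and a threshold policy picks the action.
def track(belief, hypotheses, noise_floor=0.1):
    scores = {v: belief.get(v, 0.01) * (hypotheses.get(v, 0.0) + noise_floor) for v in belief}
    total = sum(scores.values())
    return {v: s / total for v, s in scores.items()}

def select_action(belief, confirm_threshold=0.8, commit_threshold=0.95):
    value, prob = max(belief.items(), key=lambda kv: kv[1])
    if prob >= commit_threshold:
        return 'execute(%s)' % value     # confident enough to act on the top hypothesis
    if prob >= confirm_threshold:
        return 'confirm(%s)' % value     # explicitly confirm before acting
    return 'ask_again()'                 # too uncertain: re-prompt the user

belief = {'boston': 0.5, 'austin': 0.5}
belief = track(belief, {'boston': 0.6, 'austin': 0.3})   # one noisy recognition result
print(belief, select_action(belief))

The hand-set thresholds here stand in for the learned, decision-theoretic policies that the talk surveys.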

Shlomo Zilberstein

UMass Amherst

Title: TBD
Abstract:
TBD
