sdmia_invited_speakers [2015/11/13 12:07] matthijs
[[http://web.engr.oregonstate.edu/~afern/|Oregon State]]
  
Title: **Kinder and Gentler Teaching Modes for Human-Assisted Policy Learning**\\
Abstract:\\
This talk considers the problem of teaching action policies to computers for sequential decision making. The vast majority of policy learning algorithms offer human teachers little flexibility in how policies are taught. In particular, one of two learning modes is typically considered: 1) Imitation learning, where the teacher demonstrates explicit action sequences to the learner, and 2) Reinforcement learning, where the teacher designs a reward function for the learner to autonomously optimize via practice. This is in sharp contrast to how humans teach other humans, where many other learning modes are commonly used besides imitation and practice. The talk will highlight some of our recent work on broadening the available learning modes for computer policy learners, with the eventual aim of allowing humans to teach computers more naturally and efficiently. In addition, we will sketch some of the challenges in this research direction for both policy learning and more general planning systems.
  
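The contrast the abstract draws between the two standard teaching modes can be sketched on a toy problem. This is an invented illustration, not the speaker's system: the 5-state chain world, the reward function, and all function names here are assumptions made up for this example.

```python
# Toy contrast of the two standard teaching modes on a 5-state chain,
# where the learner starts at state 0 and the goal is state 4.
# Actions: +1 (move right) or -1 (move left).
from collections import defaultdict
import random

STATES, GOAL, ACTIONS = range(5), 4, (+1, -1)

def step(s, a):
    """Deterministic chain dynamics, clipped to [0, 4]."""
    return min(max(s + a, 0), 4)

# Mode 1: imitation learning -- the teacher demonstrates explicit
# (state, action) pairs and the learner memorizes them as a policy table.
def imitate(demos):
    return {s: a for s, a in demos}

# Mode 2: reinforcement learning -- the teacher only designs a reward
# function; the learner optimizes it autonomously via Q-learning practice.
def practice(reward, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    q = defaultdict(float)
    rng = random.Random(0)
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            if rng.random() < eps:                      # explore
                a = rng.choice(ACTIONS)
            else:                                       # exploit
                a = max(ACTIONS, key=lambda b: q[s, b])
            s2 = step(s, a)
            q[s, a] += alpha * (reward(s2) + gamma * max(q[s2, b] for b in ACTIONS) - q[s, a])
            s = s2
    return {s: max(ACTIONS, key=lambda b: q[s, b]) for s in STATES}

# Same target behavior, two very different teacher workloads:
demo_policy = imitate([(s, +1) for s in STATES])            # teacher shows actions
rl_policy = practice(lambda s: 1.0 if s == GOAL else 0.0)   # teacher shows reward
```

Both modes recover the "always move right" policy here, but the teacher's interface differs sharply: demonstrations in one case, reward design in the other. The talk's premise is that many intermediate modes between these two extremes are left unexplored.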
=== Mykel Kochenderfer ===