
Manuel Baltieri - Araya Inc., Japan
Title: On the interplay between goals and action-orientedness
Abstract: Action-oriented models are a construction posited to capture, at least in part, notions of embodiment within the probabilistic frameworks used to model agents acting under varying degrees of uncertainty.
While of interest, their definition remains quite informal: works in philosophy of mind struggle to settle on clear requirements, while simulations in artificial life and reinforcement learning focus on specific, mostly idiosyncratic examples that do not necessarily capture more general patterns.
In this talk I will discuss work in progress in which I attempt to formally define a class of action-oriented models using bisimulation, a construction from theoretical computer science that provides different ways to compress problems, or tasks, formulated as (PO)MDPs. Some of these compressions come with specific accounts of an agent's actions, policies and goals, and in this light I believe bisimulations may provide a formal understanding of action-orientedness.
Furthermore, while bisimulations are typically hard to compute exactly, various definitions of them can be approximated and implemented in modern (deep) reinforcement learning frameworks. There we already find several examples of agents that solve tasks by compressing a problem down to only its "task-relevant" information, which makes bisimulation, I would argue, not only an appealing theoretical construction but also a useful definition with direct practical applications.
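To make the compression idea concrete, here is a minimal sketch of exact bisimulation via partition refinement on a toy MDP. The MDP, all names, and the specific partition-refinement formulation are illustrative assumptions, not taken from the talk: two states are kept in the same block iff they agree on immediate rewards and on transition mass into every current block, for every action.

```python
from itertools import groupby

# Toy MDP (illustrative): states 0..3, actions 'a' and 'b'.
# P[s][a] = {next_state: probability}, R[s][a] = immediate reward.
P = {
    0: {'a': {1: 1.0}, 'b': {2: 1.0}},
    1: {'a': {1: 1.0}, 'b': {1: 1.0}},
    2: {'a': {2: 1.0}, 'b': {2: 1.0}},
    3: {'a': {2: 1.0}, 'b': {1: 1.0}},
}
R = {
    0: {'a': 0.0, 'b': 0.0},
    1: {'a': 1.0, 'b': 1.0},
    2: {'a': 1.0, 'b': 1.0},
    3: {'a': 0.0, 'b': 0.0},
}

def bisimulation_partition(states, actions, P, R):
    """Coarsest bisimulation partition by iterative refinement."""
    def signature(s, blocks):
        # Per action: (reward, transition mass into each current block).
        return tuple(
            (R[s][a], tuple(sum(P[s][a].get(t, 0.0) for t in b) for b in blocks))
            for a in actions
        )

    blocks = [tuple(states)]  # start with a single block
    while True:
        new_blocks = []
        for b in blocks:
            key = lambda s: signature(s, blocks)
            # Split the block wherever signatures differ.
            for _, grp in groupby(sorted(b, key=key), key=key):
                new_blocks.append(tuple(grp))
        if len(new_blocks) == len(blocks):  # no block was split: fixed point
            return new_blocks
        blocks = new_blocks

blocks = bisimulation_partition([0, 1, 2, 3], ['a', 'b'], P, R)
print(sorted(sorted(b) for b in blocks))  # states 1,2 merge, and then 0,3 do
```

In this toy problem states 1 and 2 are behaviourally identical self-loops, and once they are merged states 0 and 3 become indistinguishable as well, so the four-state task compresses to two "task-relevant" states. Deep RL methods approximate this idea with learned (and often approximate) bisimulation metrics rather than exact partitions.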
