Explainable AI for Human-Robot Collaboration Prof. Brad Hayes - - PowerPoint PPT Presentation



slide-1
SLIDE 1

Collaborative Artificial Intelligence and Robotics Lab

  • Prof. Brad Hayes

Bradley.Hayes@Colorado.edu http://www.cairo-lab.com/

@hayesbh

http://bradhayes.info

Explainable AI for Human-Robot Collaboration

slide-2
SLIDE 2

Research Themes

  • Learning from Demonstration
  • Explainable AI
  • Intelligent Tutoring
  • Shared-Environment Human-Robot Collaboration
  • Life-Long Learning
  • Modeling of Human Behavior

Learning to model the world we interact in

slide-3
SLIDE 3
slide-4
SLIDE 4

Human-in-the-loop artificial intelligence enables robot workers to make human collaborators safer, more effective, and more efficient.

Collaborative Human-Robot Interaction

slide-5
SLIDE 5

So let’s jump right in!


slide-6
SLIDE 6

Robot Co-workers

Cages are being replaced by algorithms, sensors, and HRI

slide-7
SLIDE 7

Robot Co-workers

slide-8
SLIDE 8

Robots are the future!

…but it’s really hard to make them do what we want.

slide-9
SLIDE 9

Task Execution

slide-10
SLIDE 10

Robotics is Hard Nobody knows everything Even worse: HRI is multi-disciplinary

slide-11
SLIDE 11

Task Execution

slide-12
SLIDE 12

Markov Model Chart

Do we have control over the state transitions? (Are we picking which actions are executed?)
Are the states completely observable?

                      Observable: YES    Observable: NO
Control: NO           Markov Chain       HMM
Control: YES          MDP                POMDP
slide-13
SLIDE 13

Difficulty of Human-Robot Collaboration

Axes: communication cost (No Communication / General Communication / Free Communication) × observability (Full Observability / Collective Observability / Partial Observability / Unobservability). Models placed on the chart include open-loop control, MMDP, MDP, POMDP, DEC-MDP, and DEC-POMDP, with difficulty increasing as observability and communication decrease.

Human-Robot Collaboration

Chart credit: Maayan Roth, “Markov Model for Multi-Agent Coordination”

slide-14
SLIDE 14

Collaborative Task Execution

slide-15
SLIDE 15

Collaborating During Task Execution

Yikes :(

slide-16
SLIDE 16

Collaborating During Task Execution Understanding Task Structure Modeling Human Behavior

slide-17
SLIDE 17

Sample Problem

slide-18
SLIDE 18

Sample Problem Terminology

  • A state is a representation of the world.
  • An action is something that transitions you from one state to another (it can also be a self-transition!).
  • A transition function T(s, a, s’) provides the probability that a particular action a taken in a particular state s will bring the system to state s’.
  • A reward function R(s, a) provides the value of taking a particular action a in state s.
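The terminology above can be sketched as a minimal, self-contained example. The two-state pick-and-place domain, state and action names, and all probabilities/rewards are invented for illustration:

```python
states = ["idle", "holding_part"]
actions = ["pick", "wait"]

# T[s][a][s'] = probability that action a taken in state s brings the system to s'
T = {
    "idle": {
        "pick": {"idle": 0.1, "holding_part": 0.9},  # pick can fail
        "wait": {"idle": 1.0, "holding_part": 0.0},  # self-transition
    },
    "holding_part": {
        "pick": {"idle": 0.0, "holding_part": 1.0},
        "wait": {"idle": 0.0, "holding_part": 1.0},
    },
}

# R[s][a] = value of taking action a in state s
R = {
    "idle": {"pick": 1.0, "wait": 0.0},
    "holding_part": {"pick": 0.0, "wait": 0.5},
}

def reward(s, a):
    # sanity check: each T(s, a, ·) must be a probability distribution
    assert abs(sum(T[s][a].values()) - 1.0) < 1e-9
    return R[s][a]

print(reward("idle", "pick"))  # 1.0
```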

slide-19
SLIDE 19

Sample Policy

π : S → A

slide-20
SLIDE 20

State Representation is Critical

slide-21
SLIDE 21

Motion Planning & Optimal Control

Optimal Control: finding the best control policy for a desired goal.

Closed-Loop Solutions: u = u(x)
“Global Method”: gives the action at all states. Very expensive to compute.

Open-Loop Solutions: u = u(t)
“Local Method”: gives the action at relevant states. Usable in high dimensions.

slide-22
SLIDE 22

Trajectory Optimization:

Problem Statement

  • Trajectory ξ : t ∈ [0, T] → C (maps time to configurations)
  • Objective Functional U : Ξ → ℝ+ (maps trajectories to scalars)
  • The objective U encodes traits we want our path to have
  • Path length
  • Efficiency
  • Obstacle avoidance
  • Legibility
  • Uncertainty reduction
  • Human comfort

Goal: Leverage the benefits of randomized sampling with asymptotic optimality

Set of possible trajectories

slide-23
SLIDE 23

Problem Specification: Spaces

World Space W (ℝ³): a robot pose in World Space is a set of points.
Configuration Space C (dimension = #DoF): a robot pose is a single point in Configuration Space.
Trajectory Space Ξ (∞-dimensional): a trajectory through Configuration Space (a set of points in C) is a single point in Trajectory Space.

slide-24
SLIDE 24

Problem Specification: Optimization

Trajectory Optimization seeks to find an optimal trajectory ξ*:

ξ* = argmin_{ξ ∈ Ξ} U[ξ]
s.t. ξ(0) = q_s, ξ(T) = q_g, …(any other constraints we want)

slide-25
SLIDE 25

Problem Specification: Optimization

Want to optimize ξ to a global minimum of our objective U => Usually too hard! Instead, optimize ξ to a local minimum of our objective U => If the solution is bad, resample ξ and try again
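The resample-and-retry strategy above can be sketched with a toy discretized problem. The 1D path, the smoothness-plus-obstacle objective, and the coordinate-descent optimizer are all invented stand-ins; real trajectory optimizers work in configuration space with gradient methods:

```python
import random

def objective(path):
    # smoothness term + penalty for passing near an invented obstacle at y = 2
    smooth = sum((path[i + 1] - path[i]) ** 2 for i in range(len(path) - 1))
    obstacle = sum(max(0.0, 1.0 - abs(y - 2.0)) for y in path)
    return smooth + 5.0 * obstacle

def local_optimize(path, step=0.05, iters=200):
    # naive coordinate descent to a local minimum; endpoints stay fixed
    path = list(path)
    for _ in range(iters):
        for i in range(1, len(path) - 1):
            for dy in (-step, step):
                trial = list(path)
                trial[i] += dy
                if objective(trial) < objective(path):
                    path = trial
    return path

random.seed(0)
start, goal, n = 0.0, 4.0, 9
best = None
for _ in range(5):  # if a local minimum is bad, resample and try again
    init = [start] + [random.uniform(-1, 5) for _ in range(n - 2)] + [goal]
    cand = local_optimize(init)
    if best is None or objective(cand) < objective(best):
        best = cand
print(best[0], best[-1])  # 0.0 4.0 (boundary constraints preserved)
```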

slide-26
SLIDE 26

Donald Michie’s criteria for Machine Learning (ML)

Weak criterion:
ML occurs whenever a system generates an updated basis, building on sample data, for improving its performance on subsequent data.

Strong criterion:
Weak criterion + ability of the system to communicate internal updates in explicit symbolic form.

Ultra-strong criterion:
Strong criterion + communication of updates must be operationally effective (i.e. the user is required to understand updates, and consequences should be drawn from them).

slide-27
SLIDE 27

Where is this?

slide-28
SLIDE 28

Relating Different Types of Systems

(Figure: three factory diagrams, each computing f(x) = y on inputs such as halogen lights, fixtures, and car chassis, with internals of varying transparency: Opaque, Comprehensible, Interpretable.)

slide-29
SLIDE 29

Let’s Make a Furniture-Building Collaborative Robot

slide-30
SLIDE 30

Let’s unpack this problem…

slide-31
SLIDE 31

Consider the following challenge

  • How do we get a robot to write for us?
  • What’s the best way to encode the actions the robot has to perform?
  • How can we teach the robot to draw a single letter properly?

slide-32
SLIDE 32

Painstakingly Program Each Motion

  • We can code each motion one at a time, giving the motors set amounts to move at each step of the process
  • This is brittle! What if the robot isn’t in the exact same spot as it was when we programmed it?

slide-33
SLIDE 33

Add hand-written rules and logic!

We can create a bunch of rules to make it more robust to variations in the environment!

What if I miss a rule?

slide-34
SLIDE 34

Learning from Demonstration

slide-35
SLIDE 35

Keyframe Demonstration / Trajectory Demonstration / Hybrid Demonstration

B. Akgun, M. Cakmak, J.W. Yoo, A.L. Thomaz, “Trajectories and keyframes for kinesthetic teaching: A human-robot interaction perspective”

slide-36
SLIDE 36

Learning to Draw “P” from Examples:

1. Continuous trajectories in 2D
2. Data converted to keyframes
3. Clustering of keyframes and the sequential pose distributions
4. Learned model trajectory

slide-37
SLIDE 37

Dealing with variations in speed

We can turn trajectories into sequences of letters (Comparisons are a lot easier this way!)

slide-38
SLIDE 38

Did the robot capture my intent?

slide-39
SLIDE 39

Robust Robot Learning from Demonstration and Skill Repair Using Conceptual Constraints

[IROS 18]

slide-40
SLIDE 40

Skills learned from demonstrations can be brittle due to the limited information content provided by trajectory demonstrations. For example, a learned skill may only execute correctly for the specific environment or objects used during demonstration. Learning implied constraints (e.g., cups need to be carried upright) from demonstrations can require a prohibitively large number of trajectories.

slide-41
SLIDE 41

Trajectory Demonstration

Intrinsically precise behavior specification. Narrow coverage of skill per example.

(Figure: path from start S to goal G around an obstacle; no way to know if this path is okay or not!)

Narration

“Pick up the glass of water” “Move it in an arc over the table to the bowl” “But don’t carry it over the laptop if it is full” “Also make sure that your gripper stays closed” “But not tight enough to break the glass”

Difficult to provide precise details. Can easily specify broadly applicable concepts.

Key Insights

slide-42
SLIDE 42

Concept Constrained Learning from Demonstration

CC-LfD Algorithm Augments Keyframe-based LfD by incorporating narrated high level constraints into keyframe models.

Increase Skill Robustness

Improves execution under conditions not seen during training

Reduce Training Requirements

Learns more flexible, generalizable representations with less data

Increase Resilience to Poor Training

Avoids skill failures even when trained with sub-optimal demonstrations

Improve and Repair Existing Skills

Enables one-shot skill repair to improve existing skills with a single new example.

Conceptual Constraint: a physically grounded or abstract behavioral restriction encoded as a Boolean function.

CC-LfD Allows You To:

slide-43
SLIDE 43

CC-LfD :: Algorithm Overview

1. Record w/ Narration, & Align
2. Cluster & Model Keyframes
3. Rejection Sampling
4. Remodel & Reconstruct
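The rejection-sampling step can be sketched as follows. The Gaussian keyframe model, the tilt threshold, and the "upright" constraint are illustrative assumptions standing in for the learned keyframe distributions and narrated Boolean constraints of CC-LfD, not the paper's exact formulation:

```python
import random

random.seed(1)

def sample_keyframe():
    # stand-in for sampling a pose from a learned keyframe distribution
    return {"tilt_deg": random.gauss(0.0, 30.0)}

def upright(pose):
    # a conceptual constraint encoded as a Boolean function (assumed threshold)
    return abs(pose["tilt_deg"]) < 15.0

constraints = [upright]

def constrained_samples(n):
    # keep only samples satisfying every narrated constraint
    kept = []
    while len(kept) < n:
        pose = sample_keyframe()
        if all(c(pose) for c in constraints):
            kept.append(pose)
    return kept

poses = constrained_samples(50)
print(all(upright(p) for p in poses))  # True
```

The keyframe model would then be re-fit on the surviving samples (the "Remodel & Reconstruct" step).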

slide-44
SLIDE 44

Unconstrained Skill Reconstruction from Keyframed Trajectories Skill Reconstruction from Keyframed Trajectories with CC-LfD Narration

slide-45
SLIDE 45

One-shot Skill Repair

Broken Skill Fixed Skill

slide-46
SLIDE 46

“POURING TASK” ROBOT PERFORMANCE AND ONE-SHOT SKILL REPAIR

(Figure: percentage of successful attempts vs. number of training demonstrations provided, including 3 poor baseline demonstrations, for Traditional LfD vs. CC-LfD.)

Just ONE narrated example fixes the skill with CC-LfD! After three noisy (but valid) examples, the robot cannot perform the task at all. More data doesn’t always help!

slide-47
SLIDE 47

Moving forward to Joint Task Execution (Teamwork)

slide-48
SLIDE 48

Leader / Follower Equal Partners

Teaming Paradigms

slide-49
SLIDE 49

How can we enable collaborative robots that may lack either authority or capability to provide utility to their co-workers?

slide-50
SLIDE 50

Supportive Behaviors

Actions that facilitate more rapidly satisfiable or less difficult task solutions.
slide-51
SLIDE 51

Hierarchical Task Structure: IKEA Chair

Assemble Chair
  • Orient Rear Frame: Get Frame; Place Frame in Workspace
  • Attach Supports
      • Attach Left Support: Get Peg; Place Peg(Left Frame); Get Support; Place Support(Left Frame); Add Left Support HW: Get Nut; Place Nut(Left Support); Get Bolt; Place Bolt(Left Rear Frame); Screw Bolt(Left Rear Frame)
      • Attach Right Support: Get Peg; Place Peg(Right Frame); Get Support; Place Support(Right Frame); Add Right Support HW: Get Nut; Place Nut(Right Support); Get Bolt; Place Bolt(Right Rear Frame); Screw Bolt(Right Rear Frame)
  • Add Seat: Get Seat; Place Seat
  • Attach Front Frame
      • Place Pegs: Place Left Peg (Get Peg; Place Peg(Left Support)); Place Right Peg (Get Peg; Place Peg(Right Support))
      • Mount: Get Front Frame; Place Front Frame(Supports)
slide-52
SLIDE 52

Collaborative robots need to recognize human activities

  • Nearly all collaboration models depend on some form of activity recognition
  • Collaboration imposes real-time constraints on classifier performance and tolerance to partial trajectories

slide-53
SLIDE 53

Interpretable Models for Fast Activity Recognition and Anomaly Explanation During Collaborative Robotics Tasks

[ICRA 17]

slide-54
SLIDE 54

Common Activity Classifier Pipeline

Training: Feature Extraction → Keyframe Clustering (usually KNN) → Point-to-Keyframe Classifier (usually SVM) → HMM trained on keyframe sequences

Testing: Feature Extraction → Keyframe Classification → HMM Likelihood Evaluation (Forward Algorithm) → choose the model with the greatest posterior probability

  • P. Koniusz, A. Cherian, and F. Porikli, “Tensor representations via kernel linearization for action recognition from 3D skeletons.”
  • I. Gori, J. Aggarwal, L. Matthies, and M. Ryoo, “Multitype activity recognition in robot-centric scenarios.”
  • E. Cippitelli, S. Gasparrini, E. Gambi, and S. Spinsante, “A human activity recognition system using skeleton data from RGBD sensors.”
  • L. Xia, C. Chen, and J. Aggarwal, “View invariant human action recognition using histograms of 3D joints.”
slide-55
SLIDE 55

Rapid Activity Prediction Through Object-oriented Regression (RAPTOR)

Feature Extraction Temporal Segmentation Feature-wise Segmentation Local Model Training Ensemble Weight Learning

A highly parallel ensemble classifier that is resilient to temporal variations

slide-56
SLIDE 56

Activity Model Training Pipeline

Feature Extraction Temporal Segmentation Feature-wise Segmentation Local Model Training Ensemble Weight Learning

Kinect Skeletal Joints VICON Markers [Timestep x Feature] Matrix Learned Feature Extractor

slide-57
SLIDE 57

Activity Model Training Pipeline

Feature Extraction Temporal Segmentation Feature-wise Segmentation Local Model Training Ensemble Weight Learning

Time Displacement 12 sec 0 sec

slide-58
SLIDE 58

Activity Model Training Pipeline

Feature Extraction Temporal Segmentation Feature-wise Segmentation Local Model Training Ensemble Weight Learning

Time Displacement 100% 0%

slide-59
SLIDE 59

Activity Model Training Pipeline

Feature Extraction Temporal Segmentation Feature-wise Segmentation Local Model Training Ensemble Weight Learning

Time Displacement 100% 0% Two Temporal Segment Parameters: Width and Stride

slide-60
SLIDE 60

Activity Model Training Pipeline

Feature Extraction Temporal Segmentation Feature-wise Segmentation Local Model Training Ensemble Weight Learning

Time Displacement 100% 0% {Width=0.2 , Stride=1.} 1 2 3 4 5

slide-61
SLIDE 61

Activity Model Training Pipeline

Feature Extraction Temporal Segmentation Feature-wise Segmentation Local Model Training Ensemble Weight Learning

Time Displacement 100% 0% {Width=0.2 , Stride=.5} 1 3 5 7 9 2 4 6 8
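The width/stride parameterization on the two slides above can be sketched directly: segments are fractional windows over the trajectory's normalized duration, with stride given as a fraction of the width (so stride = 0.5 makes consecutive windows overlap by half). The helper below is an illustrative sketch, not RAPTOR's implementation:

```python
def temporal_segments(width, stride):
    """Yield (start, end) fractions of the trajectory duration covering [0, 1]."""
    segments, start, step = [], 0.0, width * stride
    while start + width <= 1.0 + 1e-9:
        segments.append((round(start, 3), round(start + width, 3)))
        start += step
    return segments

print(len(temporal_segments(0.2, 1.0)))  # 5  (non-overlapping, as on the slide)
print(len(temporal_segments(0.2, 0.5)))  # 9  (half-overlapping, as on the slide)
```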

slide-62
SLIDE 62

Activity Model Training Pipeline

Feature Extraction Temporal Segmentation Feature-wise Segmentation Local Model Training Ensemble Weight Learning

Object Map: dictionary that maps IDs to sets of column indices, e.g., {“Hands”: [0, 1, 2, 5, 6, 7]}

slide-63
SLIDE 63

Activity Model Training Pipeline

Feature Extraction Temporal Segmentation Feature-wise Segmentation Local Model Training Ensemble Weight Learning

Within each temporal segment:

  • Isolate columns of each demonstration trajectory according to the (pre-defined) object map
  • Create a local model for each object
slide-64
SLIDE 64

Activity Model Training Pipeline

Feature Extraction Temporal Segmentation Feature-wise Segmentation Local Model Training Ensemble Weight Learning

Within each temporal-object segment:

  • Ignore temporal information for each data point
  • Treat as a general pattern recognition problem
  • Model the resulting distribution using a GMM

Result: an activity classifier ensemble across objects and time!
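A dependency-free sketch of the per-segment modeling step: a single Gaussian per temporal-object segment stands in for the GMM, scored by log-likelihood of a new point with temporal ordering ignored. The hand-displacement values and class names are invented:

```python
import math

def fit_gaussian(points):
    # fit mean/variance to the segment's points, ignoring their temporal order
    n = len(points)
    mean = sum(points) / n
    var = sum((p - mean) ** 2 for p in points) / n + 1e-6  # variance floor
    return mean, var

def log_likelihood(model, x):
    mean, var = model
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

# invented hand-displacement values inside one temporal segment
carry_model = fit_gaussian([0.9, 1.1, 1.0, 0.95])
walk_model = fit_gaussian([0.1, 0.2, 0.15, 0.1])

x = 1.05  # a new observation from the same segment
print(log_likelihood(carry_model, x) > log_likelihood(walk_model, x))  # True
```

An ensemble is then one such local model per (temporal segment, object) pair.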

slide-65
SLIDE 65

Activity Model Training Pipeline

Feature Extraction Temporal Segmentation Feature-wise Segmentation Local Model Training Ensemble Weight Learning

(Figure: object GMMs across temporal segments 1-4, each initially weighted 1.0.)

Need to find the most discriminative Object GMMs per time segment

slide-66
SLIDE 66

Activity Model Training Pipeline

Feature Extraction Temporal Segmentation Feature-wise Segmentation Local Model Training Ensemble Weight Learning

Need to find the most discriminative Object GMMs per time segment: use a Random Forest classifier.

slide-67
SLIDE 67

Activity Model Training Pipeline

Feature Extraction Temporal Segmentation Feature-wise Segmentation Local Model Training Ensemble Weight Learning

Need to find the most discriminative Object GMMs per time segment: train a Random Forest classifier on likelihood-vector trajectories from target-class and off-target-class demonstrations.

slide-68
SLIDE 68

Activity Model Training Pipeline

Feature Extraction Temporal Segmentation Feature-wise Segmentation Local Model Training Ensemble Weight Learning

  • Choose top-N most discriminative features from the Random Forest classifier
  • Weight each GMM proportional to its discriminative power

(Figure: object GMMs across temporal segments, re-weighted by discriminative power.)

slide-69
SLIDE 69

Activity Model Training Pipeline

Feature Extraction Temporal Segmentation Feature-wise Segmentation Local Model Training Ensemble Weight Learning

Result: trained, highly parallel ensemble learner with temporal/object-specific sensitivity

  • Choose the top-N most discriminative object-based classifiers
  • Weight each object proportionally to its discriminative power
slide-70
SLIDE 70

Results: Three Datasets

  • UTKinect: publicly available benchmark (Kinect joints)
  • Automotive Final Assembly: dynamic-actor industrial manufacturing task (joint positions)
  • Sealant Application: static-actor industrial manufacturing task (joint positions)

slide-71
SLIDE 71

Recognition Results: UTKinect-Action3D

slide-72
SLIDE 72

Results: Online Prediction

slide-73
SLIDE 73

Interpretability: Explaining Classifications

Asking a “carry” classifier about a “walk” trajectory: “In the middle and end of the trajectory, the left hand and right hand features were very poorly matched to my template.”

Key Insight:

  • Apply outlier detection methods across internal activity classifiers
  • Use outliers (or lack thereof) to explain issues across time and objects
slide-74
SLIDE 74

Support task network

Associating supportive behaviors with subgoals

Explicitly learned from demonstration during task execution. The support policy can be propagated to higher-level task nodes.

Hayes & Scassellati, “Online Development of Assistive Robot Behaviors for Collaborative Manipulation and Human-Robot Teamwork”, Machine Learning for Interactive Systems, AAAI 2014

Supportive Behaviors by Demonstration

slide-75
SLIDE 75

Context-sensitive Supportive Behavior Policies

slide-76
SLIDE 76

Supportive Behaviors by Demonstration

Issues

  • Only learns before deployment
  • Fixed behavior, reactive-only during execution
  • Difficult to generalize across tasks

What happens if you’re not the one programming the support policy?

slide-77
SLIDE 77

Learning from Demonstration Breaks Down in Team Scenarios!

Traditional LfD is optimal if the reference demonstrations are “expert” demonstrations… but execution happens in isolation!

Expert demonstrations are not always the most effective teaching strategy. Sometimes it’s better to learn the landscape of the problem than to see optimal demonstrations. Properly crafted ‘imperfect’ demonstrations can better communicate information about the objective.

Leading to one all-important question…

slide-78
SLIDE 78

Can we do better than learning from examples?

Demonstration-based Methods: the human figures out how and when the robot can be helpful. Quickly enables useful, helpful actions. Does not scale with task count! Requires a human expert.

Goal-driven Methods: the robot figures out how and when it can be helpful. Allows novel behaviors to be discovered. Enables deeper task comprehension and action understanding.

slide-79
SLIDE 79
slide-80
SLIDE 80

Autonomously Generating Supportive Behaviors:

A Task and Motion Planning Approach

Perspective Taking Symbolic planning Motion planning Autonomously Generated Supportive Behaviors

slide-81
SLIDE 81
Supportive Behavior Pipeline: Intuition

1. Propose alternative environments: change one thing about the environment
2. Evaluate if they facilitate the leader’s task/motion planning: simulate policy execution(s) from the leader’s perspective
3. Compute the cost of creating the target environment: simulate the support agent’s plan execution
4. Choose the environment that maximizes [benefit - cost]: execute the supportive behavior plan

slide-82
SLIDE 82

Plan Evaluation

Choose the support policy (ξ ∈ Ξ) that minimizes the expected execution cost of the leader’s policy (π ∈ Π) to solve the TAMP problem T from the current state (sc)

  • Cost estimate must account for
  • Resource conflicts (shared utilization/demand)
  • Spatial constraints (support agent’s avoidance of lead)
slide-83
SLIDE 83

Plan Evaluation

Choose the support policy (ξ ∈ Ξ) that minimizes the expected execution cost of the leader’s policy (π ∈ Π) to solve the TAMP problem T from the current state (sc)

  • Cost estimate must account for
  • Resource conflicts (shared utilization/demand)
  • Spatial constraints (support agent’s avoidance of lead)

Weighting function makes a big difference!

slide-84
SLIDE 84

Weighting functions: Uniform, Greedy

Greedy: only the best-known solution is worth planning against (weight 1 on the minimum-duration plan).

Uniform: consider all known solutions equally likely and important.

slide-85
SLIDE 85

Weighting functions: Uniform

slide-86
SLIDE 86

Weighting functions: Optimality-Proportional

Weight plans proportional to their cost vs. the best-known solution:

plan weight = (best-known plan duration / plan duration)^p, e.g. p = 2
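The optimality-proportional weighting above can be sketched in a few lines. The duration values are invented; the (best / duration)^p form with p = 2 follows the slide:

```python
def plan_weight(duration, best_duration, p=2):
    # near-optimal plans dominate as p grows
    return (best_duration / duration) ** p

durations = [10.0, 12.0, 20.0]  # invented plan durations
best = min(durations)
weights = [round(plan_weight(d, best), 3) for d in durations]
print(weights)  # [1.0, 0.694, 0.25]
```

The best-known plan always gets weight 1.0, and slower plans decay quadratically.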

slide-87
SLIDE 87

Weighting functions: Error Mitigation

Plans more optimal than some cutoff ε are treated normally, per f(π). Suboptimal plans are negatively weighted (scaled by α·w_π), encouraging active mitigation behavior from the supportive robot. α < 1 / max_π w_π is a normalization term to avoid harm due to plan overlap.

slide-88
SLIDE 88

Weighting functions: Error Mitigation

slide-89
SLIDE 89

Limitations

  • Short forward lookahead (<10 seconds)
  • Sampling problem is incredibly difficult
  • Pushes some of the same problems that LfD has into the sampling mechanism
  • A priori knowledge of human policy space is necessary
  • This is coordination, not planning!
slide-90
SLIDE 90

The Promise of Collaborative Robots

slide-91
SLIDE 91

The Reality of Mismatched Expectations

slide-92
SLIDE 92
slide-93
SLIDE 93

Shared Expectations are Critical for Teamwork

In close human-robot collaboration…

  • Human must be able to plan around

expected robot behaviors

  • Understanding failure modes and policies is central to ensuring safe interaction and managing risk

Fluent teaming requires communication…

  • When there’s no prior knowledge
  • When expectations are violated
  • When there is joint action
slide-94
SLIDE 94

Establishing Shared Expectations

Approaches span short-term to long-term horizons: Collaborative Planning [Milliez et al. 2016], State Disambiguation [Wang et al. 2016], Role-based Feedback [St. Clair et al. 2016], Coordination Graphs [Kalech 2010], Policy Dictation [Johnson et al. 2006], Legible Motion [Dragan et al. 2013], Hierarchical Task Models [Hayes et al. 2016], Cross-training [Nikolaidis et al. 2013]

slide-95
SLIDE 95

Semantics for Policy Transfer

Under what conditions will you drop the bar?

slide-96
SLIDE 96

Semantics for Policy Transfer

Under what conditions will you drop the bar?

slide-97
SLIDE 97

Semantics for Policy Transfer

I will drop the bar when the world is in the blue region of state space:

slide-98
SLIDE 98

Semantics for Policy Transfer

slide-99
SLIDE 99

Semantics for Policy Transfer

slide-100
SLIDE 100

I will drop the bar when the world is in the blue region of state space: raw state vectors such as [12.4827, 5.12893, 1.12419, 1, 3.62242, -40.241, …], [15, 7.125, 1.12419, 1, -8.1219, -40, …], [12.4827, 8.51422, 1.12419, 1, 3.62242, -40.241, …], …

slide-101
SLIDE 101

I will drop the bar when the world is in the blue region of state space: [12.4827, 5.12893, 1.12419, 1, 3.62242, -40.241, …], [15, 7.125, 1.12419, 1, -8.1219, -40, …], [12.4827, 8.51422, 1.12419, 1, 3.62242, -40.241, …], …

State space is too obscure to directly articulate

slide-102
SLIDE 102

State of the Art

int *detect_gear = &INPUT1;
int *gear_x = &INPUT2;

if (*detect_gear == 1 && *gear_x <= 10 && *gear_x >= 8) {
    pick_gear(gear_x);
}

???

slide-103
SLIDE 103

Reasonable question: “Why didn’t you inspect the gear?” Interpretable answer: “My camera didn’t see a gear. I inspect the gear when it is less than 0.3m from the conveyor belt center and it has been placed by the gantry.”

Natural Interaction

Fault Diagnosis / Policy Explanation / Root Cause Analysis

slide-104
SLIDE 104

Making Control Systems More Interpretable

Approach:

1. Attach a smart debugger to monitor controller execution
2. Build a graphical model from observations
3. Use specialized algorithms to map queries to state regions
4. Collect relevant state region attributes
5. Minimally summarize relevant state regions with attributes
6. Communicate the query response

Phases: Model Building → Query Analysis → Response Generation

slide-105
SLIDE 105

Concept Representations

Concept library: generic state classifiers mapped to semantic templates that identify whether a state fulfills a given criterion. A set of Boolean classifiers: State → {True, False}

  • Spatial concepts (e.g., “A is on top of B”): on_top(A,B)
  • Domain-specific concepts (e.g., “Widget paint is drying”)
  • Agent-specific concepts (e.g., “Camera is powered”): camera_powered
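A concept library can be sketched as Boolean classifiers over a state dictionary, each paired with a semantic template. The state fields and thresholds below are invented for illustration:

```python
concepts = {
    "on_top(A,B)": (
        # A is above B and roughly aligned with it (assumed 5 cm tolerance)
        lambda s: s["A_z"] > s["B_z"] and abs(s["A_x"] - s["B_x"]) < 0.05,
        "{A} is on top of {B}",
    ),
    "camera_powered": (
        lambda s: s["camera_power"] > 0.0,
        "the camera is powered",
    ),
}

state = {"A_z": 0.4, "B_z": 0.1, "A_x": 0.0, "B_x": 0.02, "camera_power": 1.0}

# evaluate every classifier: State -> {True, False}
true_concepts = [name for name, (test, _) in concepts.items() if test(state)]
print(true_concepts)  # ['on_top(A,B)', 'camera_powered']
```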

slide-106
SLIDE 106

Relevant Question Templates

When will you do {action}?

slide-107
SLIDE 107

Relevant Question Templates

Why didn’t you do {action}?

slide-108
SLIDE 108

Relevant Question Templates

What will you do when {conditions}?

slide-109
SLIDE 109

Language Mapping: Model to Response

Recall: the concept library provides a dictionary of classifiers, e.g. on_top(A,B) and camera_powered, that cover state regions

slide-110
SLIDE 110

Using Concepts to Describe State Regions

We perform state-to-language mapping by applying a Boolean algebra over the space of concepts. This reduces concept selection to a set cover problem over state regions. Disjunctive normal form (DNF) formulae enable coverage over arbitrary geometric state space regions via intersections and unions of concepts. Templates provide a mapping from DNF → natural language.
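The set-cover reduction can be sketched with a greedy heuristic. Here state regions are sets of discrete state IDs, and both the IDs and the coverage sets are invented; the real system covers geometric regions of continuous state space:

```python
def greedy_cover(target, concept_regions):
    # greedily pick concepts whose regions together cover the target states
    remaining, chosen = set(target), []
    while remaining:
        name, region = max(concept_regions.items(),
                           key=lambda kv: len(remaining & kv[1]))
        if not remaining & region:
            break  # uncoverable remainder
        chosen.append(name)
        remaining -= region
    return chosen

concept_regions = {
    "detected_gear": {1, 2, 3},
    "at(conveyor_belt)": {2, 3, 4},
    "camera_powered": {1, 2, 3, 4, 5},
}
print(greedy_cover({2, 3}, concept_regions))  # ['detected_gear']
```

Intersections of chosen concepts form the conjunctive terms of the DNF formula fed into the language templates.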

slide-111
SLIDE 111

Query Response Process

Query: “When do you inspect the gear?”

1. Find states where {inspect(gear)} is the most likely action
2. Find a concept mapping that covers the indicated states: detected_gear ∧ at(conveyor_belt)
3. Convert to natural language: “I’ll inspect the gear when I’ve detected a gear and I’m at the conveyor belt.”

slide-112
SLIDE 112

Interpretable and comprehensible systems are lacking the ability to formulate their line of reasoning using human-understandable features of input data. Interpretable and comprehensible models enable explanations of decisions, but do not yield explanations themselves!

How else can we establish shared expectations and verify that intent was captured? Have the robot use its model to teach a human!

Explainable AI Needs Reasoning!

slide-113
SLIDE 113

Improving Human-Robot Collaboration through Autonomous Explanation-based Reward Coaching

[HRI 19] Nominated for Best Technical Paper Award

slide-114
SLIDE 114

We spend a lot of time making robots good at things

We’re pretty good at this transition We’re less good at this transition

But how do we use this to make others proficient too?

slide-115
SLIDE 115

Learning from experience can be expensive

slide-116
SLIDE 116

Motivating Questions

How do we turn a capable robot into a competent instructor?

Can a robot use its own understanding of the world to figure out yours? Given this understanding, can it issue corrective guidance you’ll follow? Can we do all of this within a general framework?

slide-117
SLIDE 117

Key Assumption: Humans are goal directed, generally rational agents

Confusing behavior indicates a difference in domain understanding

Unexpected policy → different reward function

slide-118
SLIDE 118

Humans are agents maximizing their expected reward

slide-119
SLIDE 119

Reward Augmentation and Repair through Explanation

Optimal Path Human Path

100

slide-120
SLIDE 120

Coaching as Partially Observable Markov Decision Process

Belief State Of Human Observation by coach Action by coach Reward

An action can be a task-specific physical action or a reward-repair-specific social action. The robot is coaching while collaborating.

slide-121
SLIDE 121

RARE: An Intuition

Estimate the collaborator’s reward function by figuring out which policy they’re following, assuming policies are optimal w.r.t. the reward function that produced them.

Track belief over reward functions, using latent Boolean state variables to indicate the collaborator’s knowledge about a particular reward.

slide-122
SLIDE 122

State Augmentation to Extend Belief and Action Space

Compound State Vector:

World variables Comprehension variables

slide-123
SLIDE 123

Repairing a Domain Misunderstanding

Extend robot’s action space Include communicative actions for revealing reward components.

slide-124
SLIDE 124

100 100 10

Reward Augmentation through Repair and Explanation

slide-125
SLIDE 125

100 100 10

Reward Augmentation through Repair and Explanation

slide-126
SLIDE 126

100 100 10

Reward Augmentation through Repair and Explanation

slide-127
SLIDE 127

100 100 10

Need to communicate this reward before they finish!

Reward Augmentation through Repair and Explanation

slide-128
SLIDE 128

100 100 10

Need to communicate this reward before they finish!

Option 1: “If you do that you won’t get the best reward”

Indicate suboptimality of an action to encourage exploration

Reward Augmentation through Repair and Explanation

slide-129
SLIDE 129

100 100 10

Need to communicate this reward before they finish!

Option 2: “If you do that you won’t get the best reward. There’s a better reward in the top right corner.”

Justify the advice by providing a description of the reward’s location

Reward Augmentation through Repair and Explanation

slide-130
SLIDE 130

100

Reward Augmentation through Repair and Explanation

slide-131
SLIDE 131

User Study

slide-132
SLIDE 132

Realtime Color Sudoku:

Each player gets 3 rows to fill: near to far, right to left. There are no turns: play whenever you’re ready

A really hard game for humans

slide-133
SLIDE 133

Realtime Color Sudoku: The Rules

No color may appear twice on the same row No color may border itself

slide-134
SLIDE 134

Between-subjects experiment (n=26)

Justification: Players about to make a mistake were told about the reward it inferred they were missing. Control: Players about to make a mistake were told that they cannot make that move or they’ll fail the game. No Interruption: Players completed the game without mistakes.

slide-135
SLIDE 135

Subjective Hypotheses

H1: Participants will find the robot more helpful and useful when it explains why a failure may occur H2: Participants will find the robot to be more intelligent when providing justification for its advice H3: Participants will find the robot more sociable when it provides justifications for its failure mitigation

slide-136
SLIDE 136

H1: Participants will find the robot more helpful and useful when it explains why a failure may occur

Subjective Results: Helpfulness

p < 0.05 p < 0.05

slide-137
SLIDE 137

Subjective Results: Intelligence

H2: Participants will find the robot to be more intelligent when it provides justification for its advice

N.S. p < 0.05 N.S. p<0.08

slide-138
SLIDE 138

Subjective Hypotheses

H1: Participants will find the robot more helpful and useful when it explains why a failure may occur H2: Participants will find the robot to be more intelligent when it coaches them H3: Participants will find the robot more sociable when it provides justifications for its failure mitigation

slide-139
SLIDE 139


Objective Hypothesis

H1: Participants will complete the game faster when provided with justification

But we couldn’t test it.

Because most participants didn’t even listen to the control condition’s advice without justification.

Game Completion Rate: Justification: 80%, Control: 20%

slide-140
SLIDE 140

Issues and Future Work

Comprehension variables for each reward causes the state space to explode combinatorially… but rewards are rarely independent! Justification matters… but why?

slide-141
SLIDE 141

Summary

We developed… the Reward Augmentation and Repair through Explanation (RARE) framework for using a competent agent to coach others.

We evaluated… a challenging collaborative cognitive game with a human and robot.

We found… Control condition: hardly anyone followed the robot’s advice! Justification condition: nearly everyone followed the robot’s advice!

We showed… RARE makes robots more useful, helpful, and intelligent coaches. Justification is essential for effective knowledge transfer!

slide-142
SLIDE 142

Justification vs. Control

Control: skeptical of Sawyer for not giving justification

“Sawyer wasn’t forceful enough and was not giving me the reasons why the move was wrong. So I couldn’t trust him”
“Response looked like hard coded and I did not find the reason to think that Sawyer was addressing me”
“I did not believe it as it did not give details regarding the wrong step”

Justification: more positive user experience

“He was … telling me why my move was not right even though it was the right move. I was able to trust him easily when he gave the reasons”
“I learnt to think of moves ahead when Sawyer helped me once with the game.”
“Sawyer’s input made me question my understanding of the game”

slide-143
SLIDE 143

Collaborative Artificial Intelligence and Robotics Lab

  • Prof. Brad Hayes

Bradley.Hayes@Colorado.edu http://www.cairo-lab.com/

@hayesbh

http://bradhayes.info

Explainable AI for Human-Robot Teaming