ZURICH R-USER GROUP: REINFORCEMENT LEARNING USING R (STATWORX GmbH, PowerPoint presentation)



SLIDE 1

ZURICH R-USER GROUP REINFORCEMENT LEARNING USING R

STATWORX GmbH Sebastian Heinz, CEO Oliver Guggenbühl, Consultant Zürich, 18th June 2019

SLIDE 2

AGENDA

R-Users Zurich meetup

18.06.19

1. COMPANY PROFILE
2. INTRODUCTION TO REINFORCEMENT LEARNING
3. THEORETICAL OVERVIEW
4. IMPLEMENTATION IN R
5. SUPER MARIO AI USE CASE
6. QUESTIONS

SLIDE 3


STATWORX COMPANY PROFILE

  • Information
  • Services
  • Project approach

SLIDE 4

STATWORX

Facts and figures


COMPANY PROFILE

STATWORX is a consulting company for data science, machine learning, and AI located in Frankfurt, Vienna, and Zurich. We support our customers in the development and implementation of data science and machine learning projects as well as data-driven products.

FOUNDED: 2011 | OFFICES: 3 | EMPLOYEES: 40

DATA SCIENCE PROJECTS: 200+ | INDUSTRY CUSTOMERS: 50+ | DATA ACADEMY PARTICIPANTS: 1000+


TOOL STACK & PARTNERS CLIENT EXCERPT

SLIDE 5

END-2-END DATA CONSULTING

We support our customers along the whole process of data driven decision making


DATA STRATEGY
Ideation, communication and steering of data products and strategies; defining the company's overall data strategy.

DATA ACADEMY
Planning and execution of customer-specific data science trainings; knowledge transfer and expertise building.

DATA ENGINEERING
Design and implementation of data pipelines, storages and architectures; building data pipelines and putting models into production.

DATA SCIENCE
Development, training and evaluation of machine learning and AI models; building predictive models that drive and create value.

DATA OPERATIONS
Implementation and operation of machine learning models; efficient operation of models and products.

SLIDE 6

INTRODUCTION REINFORCEMENT LEARNING

  • Where is Reinforcement Learning being used?
  • A brief history of Data Science
  • What distinguishes Reinforcement Learning from Supervised & Unsupervised Learning?

SLIDE 7

INTRODUCTION

Reinforcement Learning is currently one of the hottest ML topics

SLIDE 8

A BRIEF HISTORY OF DATA SCIENCE

The history of Data Science and AI

1957  Rosenblatt Perceptron
1962  Data Science vs. Statistics (Tukey)
1987  First NIPS Conference
1989  Reinforcement Learning
1997  RNN Networks
1998  CNN Networks
1999  First Nvidia GPU
2006  Amazon AWS
2010  Kaggle Platform
2012  Deep Learning
2014  Google AlphaGo
2015  Google AI TensorFlow
2018  OpenAI's DotA-2

Eras: AI "Hope" → AI "Winter" → AI "Rise"

SLIDE 9

MACHINE LEARNING OVERVIEW

Machine Learning Applications

SUPERVISED LEARNING: Regression, Classification
UNSUPERVISED LEARNING: Clustering, Anomaly Detection
REINFORCEMENT LEARNING: Dynamic Environments (Agent ↔ Environment)

SLIDE 10

INTRODUCTION

What is Reinforcement Learning?

"Instead of relying on a set of (labelled or unlabelled) training data, Reinforcement Learning relies on being able to monitor the response to the actions taken by the agent."

SLIDE 11

THEORY REINFORCEMENT LEARNING

  • How does Reinforcement Learning work?
  • The Gridworld problem as an example

SLIDE 12

THEORY

How does Reinforcement Learning work?

[Diagram: Agent ↔ Environment]
SLIDE 13

THEORY

How does Reinforcement Learning work?

[Diagram: the Agent performs action a_t on the Environment, given state s_t]
SLIDE 14

THEORY

How does Reinforcement Learning work?

[Diagram: the Agent performs action a_t in state s_t; the Environment returns reward r_t and transitions to state s_t+1 with reward r_t+1]
SLIDE 15

THEORY

How does Reinforcement Learning work?

[Diagram: the Agent performs action a_t in state s_t; the Environment returns reward r_t and transitions to state s_t+1 with reward r_t+1]

The agent tries to maximize its reward by choosing appropriate actions at a given state of the environment.
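This interaction loop can be sketched in a few lines of R. The environment below is a hypothetical toy (a one-dimensional walk with a step cost of -1, terminating at state 5); the function and variable names are illustrative, not part of any package.

```r
# Minimal sketch of the agent-environment loop (toy example).
# Hypothetical environment: states 1..5, reaching state 5 ends the
# episode; every step yields reward -1.
env_step <- function(state, action) {        # action: +1 (right) or -1 (left)
  next_state <- min(max(state + action, 1), 5)
  list(state = next_state, reward = -1, done = next_state == 5)
}

set.seed(42)
state <- 1
total_reward <- 0
repeat {
  action <- sample(c(-1, 1), 1)              # a random policy, for illustration
  res <- env_step(state, action)             # environment returns s_t+1 and r_t
  total_reward <- total_reward + res$reward  # agent accumulates reward
  state <- res$state
  if (res$done) break
}
```

A learning agent would replace the random `sample()` call with a policy that uses the observed rewards, which is exactly what Q-learning on the next slides does.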

SLIDE 16

EXAMPLE: GRIDWORLD

Reinforcement Learning Use Case

[Gridworld diagram: starting field and goal field on a 4×4 grid]


SLIDE 18

EXAMPLE: GRIDWORLD

Reinforcement Learning Use Case


Ideal return: -6 (6 steps to complete the episode)

SLIDE 19

Q-LEARNING

Determining the optimal policy for an environment


Q(s, a) = r + γ · max_a' Q(s', a')

where r is the immediate reward and γ · max_a' Q(s', a') is the discounted future reward.

Q-Value
  • represents the maximum possible reward at the end of the game for action a in state s
  • does so for each possible action a for the current state s

Immediate reward
  • the immediate reward of a possible action taken, as defined by the environment
  • there might be no immediate rewards, but only delayed rewards

Future reward
  • the maximum possible future reward after transitioning to the next state s' by choosing action a

Discount factor
  • steers the rate of considering future rewards
  • small values promote decisions that generate immediate reward
  • larger values favor decisions that generate future reward

SLIDE 20

Q-LEARNING

Determining the optimal policy for an environment


Assuming that the discount factor γ = 0.9 and the final reward r = 1:

Q(14, right) = 1 + 0.9 · 0 = 1

(State 14 lies directly left of the goal state 15, so moving right ends the episode and there is no future reward.)
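This value can be recovered with a small tabular Q-learning sketch in base R, assuming the same 4×4 gridworld (states 0-15, goal state 15), a reward of 1 on reaching the goal, and γ = 0.9. The behaviour policy here is purely random, which suffices because Q-learning is off-policy; all names are illustrative.

```r
set.seed(1)
actions <- c("up", "down", "left", "right")
gamma <- 0.9; alpha <- 0.5

# Deterministic transitions on the 4x4 grid (states 0..15, goal state 15)
step <- function(s, a) {
  row <- s %/% 4; col <- s %% 4
  if (a == "up")    row <- max(row - 1, 0)
  if (a == "down")  row <- min(row + 1, 3)
  if (a == "left")  col <- max(col - 1, 0)
  if (a == "right") col <- min(col + 1, 3)
  s2 <- row * 4 + col
  list(s = s2, r = as.numeric(s2 == 15), done = s2 == 15)
}

Q <- matrix(0, nrow = 16, ncol = 4, dimnames = list(NULL, actions))
for (episode in 1:500) {
  s <- 0
  repeat {
    a <- sample(actions, 1)            # random behaviour policy
    out <- step(s, a)
    # Q-learning update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max Q(s',.) - Q(s,a))
    target <- out$r + gamma * max(Q[out$s + 1, ]) * (!out$done)
    Q[s + 1, a] <- Q[s + 1, a] + alpha * (target - Q[s + 1, a])
    s <- out$s
    if (out$done) break
  }
}
round(Q[15, "right"], 2)   # Q(14, right): moving right from state 14 reaches the goal -> 1
```

After enough visits the estimate converges to the value computed above, since the target for the terminal transition is exactly 1.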

SLIDE 21

IMPLEMENTATION IN R: THE reinforcelearn PACKAGE


  • The reinforcelearn package
  • Live demo
  • Advanced functionalities

SLIDE 22

THE reinforcelearn PACKAGE

Reinforcement Learning implementation in R

library(reinforcelearn)

# Create an environment
env <- makeEnvironment()

# Create an agent
agent <- makeAgent()

# Let the agent interact with the environment
interact(env, agent)

reinforcelearn package: The reinforcelearn package offers simple tools to create environments and agents and to let them interact.

SLIDE 23

THE reinforcelearn PACKAGE

Reinforcement Learning implementation in R

library(reinforcelearn)

# Create a 4x4 gridworld environment
env <- makeEnvironment("gridworld", shape = c(4, 4),
                       goal.states = 15, initial.state = 0)

Gridworld in reinforcelearn: The Gridworld environment can be easily created with only a few lines of code:

SLIDE 24

THE reinforcelearn PACKAGE

Reinforcement Learning implementation in R

library(reinforcelearn)

# Set up the agent
policy <- makePolicy("epsilon.greedy", epsilon = 0.1)
val.fun <- makeValueFunction("table")
algorithm <- makeAlgorithm("qlearning")

# Create the agent
agent <- makeAgent(policy = policy, val.fun = val.fun, algorithm = algorithm)

Gridworld in reinforcelearn: The agent consists of several parts:

  • The policy defines the type of decision rules.
  • The value function determines how the current state of the agent is to be evaluated.
  • The algorithm determines how the optimal policy is to be found and learnt.

SLIDE 25

THE reinforcelearn PACKAGE

Reinforcement Learning implementation in R

library(reinforcelearn)

# Let the agent interact with the environment
interact(env, agent, n.episodes = 500, visualize = TRUE, learn = TRUE)

Interaction: Once the environment and agent have been created, we can let them interact with the reinforcelearn::interact() function.

SLIDE 26

LIVE DEMO

Reinforcement Learning in action

SLIDE 27

OPENAI GYMS

Advanced functionalities with OpenAI gyms

library(reinforcelearn)
library(reticulate)

# Create an environment from an OpenAI gym
env <- makeEnvironment("gym", gym.name = "SpaceInvaders-v0")

Using OpenAI gyms in reinforcelearn: reinforcelearn allows for easy access to gym environments created by OpenAI.

SLIDE 28

NEURAL NETWORKS

Advanced functionalities with Keras

library(reinforcelearn)
library(keras)

env <- makeEnvironment("gridworld", shape = c(4, 4), goal.states = 15)

# Define a keras model as the value function approximator
model <- keras_model_sequential() %>%
  layer_dense(units = 4, input_shape = 1, activation = "linear") %>%
  compile(optimizer = optimizer_sgd(lr = 0.1), loss = "mae")

policy <- makePolicy("epsilon.greedy", epsilon = 0.2)
algorithm <- makeAlgorithm("qlearning")
val.fun <- makeValueFunction("neural.network", model = model)
agent <- makeAgent(policy, val.fun, algorithm)

interact(env, agent, n.episodes = 100)

Using neural networks in reinforcelearn: reinforcelearn allows for easy integration of neural networks built in keras into your value function.

SLIDE 29

USE CASE DEMONSTRATION: SUPER MARIO BROS. AI


  • Overview
  • States, Actions & Rewards
  • Training and Results

SLIDE 30

GYM-SUPER-MARIO-BROS

There is a great gym for Super Mario Bros (NES)


GYM-SUPER-MARIO-BROS: An OpenAI Gym environment for Super Mario Bros. & Super Mario Bros. 2 (Lost Levels) on the Nintendo Entertainment System (NES) using the nes-py emulator.

GAME MODES: Standard, Downsample, Pixel, Rectangle

SLIDE 31


GAME STATES

Game states are defined as 4-D tensors containing the last four game states as grayscale matrices

[Figure: a single game screen is an {h × w × 3} RGB pixel tensor; it is converted to grayscale, and the last four frames are stacked into an {h × w × 4} grayscale pixel tensor]

"For video game environments, it is important to provide stacked subsequent game frames to the agent. Otherwise, it would not be possible to detect any "movement" on the screen. Furthermore, we use only every nth frame."

Mario Pro-Tip
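The preprocessing described above can be sketched in base R. The 84 × 84 shape matches the model slide later in the deck; the function names are assumptions, and the grayscale weights are the common luminance convention, not necessarily what the emulator wrapper uses.

```r
# Sketch of the state preprocessing: convert each RGB frame to grayscale
# and keep a rolling stack of the last four frames as an h x w x 4 tensor.
h <- 84; w <- 84

to_gray <- function(frame_rgb) {
  # luminance-weighted average of the RGB channels (a common convention)
  0.299 * frame_rgb[, , 1] + 0.587 * frame_rgb[, , 2] + 0.114 * frame_rgb[, , 3]
}

push_frame <- function(stack, frame_rgb) {
  # drop the oldest grayscale frame, append the newest
  array(c(stack[, , 2:4], to_gray(frame_rgb)), dim = c(h, w, 4))
}

# usage: start from an empty stack and push incoming frames
stack <- array(0, dim = c(h, w, 4))
frame <- array(runif(h * w * 3), dim = c(h, w, 3))   # a dummy RGB frame
stack <- push_frame(stack, frame)
dim(stack)   # 84 84 4
```

In a real pipeline `push_frame` would be called on every nth emulator frame, as the pro-tip above suggests.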

SLIDE 32

"During training, controller actions (and their combinations) are mapped to integers, usually limited to game-relevant actions. This is done by using wrapper functions around the environment."

ACTIONS

Mario's action space is mapped to integers


CONTROL CROSS: Used to move Mario in 8 different directions: right, right-down, down, down-left, left, left-up, up, up-right.

START / SELECT: Usually used to pause / start / exit the game.

BUTTONS: Used for jumping (A) and running (holding B).

Mario Pro-Tip

SLIDE 33

REWARDS

The game's reward is composed of three different components


r = v + c + d

(reward = velocity + clock + death)

VELOCITY
  • v: the difference in the agent's x position between states
  • v = x1 - x0
  • x0 is the x position before the step
  • x1 is the x position after the step
  • moving right ⇔ v > 0
  • moving left ⇔ v < 0
  • not moving ⇔ v = 0

CLOCK
  • c: the difference in the game clock between frames
  • the clock penalty encourages the agent to be fast
  • c = c0 - c1
  • c0 is the clock reading before the step
  • c1 is the clock reading after the step
  • no clock tick ⇔ c = 0
  • clock tick ⇔ c < 0

DEATH
  • d: a death penalty that penalizes the agent for dying in a state
  • this penalty encourages the agent to avoid death
  • alive ⇔ d = 0
  • dead ⇔ d = -15

"The reward function assumes the objective of the game is to move as far right as possible (increase the agent's x value), as fast as possible (decrease time), without dying."

Mario Pro-Tip
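The composition r = v + c + d can be written down directly. The helper below is an illustrative sketch (argument names assumed), with the death penalty of -15 taken from the slide.

```r
# Sketch of the Super Mario reward: r = v + c + d (names are illustrative)
mario_reward <- function(x0, x1, c0, c1, died) {
  v <- x1 - x0                 # velocity: positive when moving right
  c <- c0 - c1                 # clock difference between frames
  d <- if (died) -15 else 0    # death penalty
  v + c + d
}

mario_reward(x0 = 10, x1 = 12, c0 = 400, c1 = 400, died = FALSE)  # -> 2
mario_reward(x0 = 12, x1 = 12, c0 = 400, c1 = 400, died = TRUE)   # -> -15
```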

SLIDE 34

MODEL

We are using a deep CNN to approximate the Q-value function of our agent

[Architecture diagram: 84 × 84 × 4 input tensor → 3×3 kernels as feature detectors → 2×2 pooling → dense & output layers]

INPUT
We use the last 4 game frames as the input tensor to the neural network.

2D CONVOLUTIONS
Convolutional layers extract relevant information from the game screens.

POOLED FEATURE MAPS
Pooling compresses information and shrinks the dimensionality of the problem.

DENSE & OUTPUT
The network approximates the Q-values by comparing the predicted values with the targets.
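One way such a network might be written with the keras R package is sketched below. Only the 84 × 84 × 4 input shape comes from the slides; the filter counts, layer sizes, and the assumed size of the (wrapped) action space are illustrative.

```r
library(keras)

n_actions <- 8   # assumed size of the wrapped action space

# Assumed CNN sketch: convolution + pooling for feature extraction,
# then a dense head that outputs one Q-value per action
model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(84, 84, 4)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 256, activation = "relu") %>%
  layer_dense(units = n_actions, activation = "linear") %>%   # Q-value per action
  compile(optimizer = optimizer_adam(), loss = "mse")
```

The linear output layer is deliberate: Q-values are unbounded regression targets, so no softmax is applied.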

SLIDE 35

MODEL

A cool screenshot of our model architecture in TensorBoard :)

SLIDE 36

TRAINING PIPELINE

Overview of the actual training process of the agent

R-Users Zurich meetup 18.06.19

"Experience replay is needed because subsequent game states are highly correlated, which violates the i.i.d. assumption of stochastic gradient descent, which is used for updating the online network's weights."

Mario Pro-Tip

[Training pipeline diagram: the Environment emits the current state and reward; the Online Network performs an ε-greedy action; each transition (st-1, at-1, st, rt) is stored in the Experience Replay Buffer; batches of transitions are fed to the Online Network (Q-value prediction) and the Target Network ("true" Q-values) to compute the loss; the loss updates the Online Network's weights, which are periodically synced to the Target Network]
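The replay buffer itself is easy to sketch in base R (an illustrative closure-based ring buffer, not reinforcelearn API): transitions are stored and sampled uniformly at random, which breaks the correlation between subsequent states.

```r
# Illustrative experience replay buffer (names are assumptions).
make_buffer <- function(capacity = 1000) {
  buf <- vector("list", capacity); n <- 0; pos <- 0
  store <- function(transition) {
    pos <<- pos %% capacity + 1        # overwrite the oldest slot when full
    buf[[pos]] <<- transition
    n <<- min(n + 1, capacity)
  }
  sample_batch <- function(batch_size) {
    buf[sample.int(n, batch_size, replace = TRUE)]   # uniform random sample
  }
  list(store = store, sample_batch = sample_batch, size = function() n)
}

# usage: store transitions (s_prev, a, s, r), then sample a training batch
buffer <- make_buffer(capacity = 100)
for (t in 1:10) {
  buffer$store(list(s_prev = t - 1, a = sample(0:3, 1), s = t, r = -1))
}
batch <- buffer$sample_batch(4)   # a batch of 4 random transitions
```

In the pipeline above, the sampled batch is what gets fed to the online and target networks for the loss computation.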

SLIDE 37

RESULTS

Watch the Super Mario Agent in action as it masters the first level of the game

SLIDE 38


LIVE DEMO

Learning to play Super Mario using R and reinforcelearn (well, kind of) :)

SLIDE 39


KEY FINDINGS

What we have learned

1. Reinforcement learning is a very promising area of ML / AI research.
2. R is still quite limited in the area of modern reinforcement learning methods.
3. The reinforcelearn package is not up to date when it comes to network policies.
4. reticulate is a way to play around with RL for the moment, but not the final solution.
5. If you are considering diving deeper into RL, take a look at Python ;)
SLIDE 40

WANT TO LEARN MORE?

Join one of our next training events!


Data Science Bootcamp

A 5-day introduction to the field of data science and machine learning.
July 1st – July 5th | STATWORX Office Zurich, Switzerland | German language | www.statworx.com/ch/academy

Deep Learning Bootcamp

A 5-day introduction to the field of deep learning with Keras and TensorFlow.
November 4th – November 8th | STATWORX Office Frankfurt, Germany | German language | www.statworx.com/ch/academy

Data University

A 2-day immersive learning experience with 24 sessions to choose from.
October 9th – October 10th | Goethe University Frankfurt, Germany | German language | www.data-university.de

SLIDE 41

Q&A and Discussion

SLIDE 42

THANK YOU FOR YOUR ATTENTION.

CONTACT
Sebastian Heinz, CEO
sebastian.heinz@statworx.com
www.statworx.com

STATWORX | Wöhlerstr. 8-10 | 60323 Frankfurt am Main