Using TensorFlow.js to Automate the Chrome Dinosaur Game

An introduction to neural networks using TensorFlow.js

In this blog post, we’ll learn how to automate the Chrome Dinosaur Game using neural networks with TensorFlow.js. If you haven’t played it before, it’s a side-scrolling game available offline (for when Chrome or your Internet connection drops) where you control a 2D dinosaur that has to jump and duck to avoid obstacles. Give it a shot here:

In the simplest terms, a neural network is a computer system modeled after the workings of the human brain. The brain consists of many neurons that work in conjunction to make decisions.

Each connection has a weight assigned to it, and each neuron a bias; it’s the optimization of these parameters that’s termed learning.

In recent years, Google has developed a library called TensorFlow, which has not only improved the performance of machine learning and deep learning algorithms but has also made it easier for developers around the world to reap the benefits of artificial intelligence.

After the TensorFlow library for Python, Google released TensorFlow Lite for Android, and now the much-awaited TensorFlow.js is here. TF.js is a JavaScript library that brings machine learning capabilities to both the browser and the backend (Node.js).

TensorFlow.js has two primary APIs—the Core API, which consists of the basic mathematical functions, optimizers, and more for enthusiasts who like to build their models from scratch; and the Layers API, which is a high-level API built on TensorFlow.js Core, enabling users to build, train, and execute deep learning models directly in browsers.

Installing TensorFlow.js

For this blog, we’ll be using the Layers API. To set up TensorFlow.js, all we need to do is install it via npm:
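Assuming you’re starting from a project that already has a package.json, the install is a single command:

```shell
# install the TensorFlow.js package from npm
npm install @tensorflow/tfjs
```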

To use TensorFlow.js, we need to build our project using a tool like webpack, parcel, or rollup.

These are all build tools with their own pros and cons: Parcel works well for small projects, while Rollup is comparatively new. All of them internally build dependency graphs that map every module your project needs and generate one or more bundles. For a comparison between these tools, you can check out this article.

The Chrome Dino Game

We’ll be trying to create an AI system that’s able to play this game like any human would.

Our first task was to simulate the Chrome Dinosaur game (this is where GitHub helped :P). For this, we found an open-source repo containing the complete code of the Chrome Dino game, and we reformatted it to make the code more readable and organized.

Let’s work through it step by step 🙂

The main class that contains the dinosaurs and the events related to playing the game is Runner (defined in Runner.js). It’s designed to let us use multiple dinos simultaneously (we’ll use this feature in the next blog post).

We’ve also added three events to this class:

  1. onCrash
  2. onReset
  3. onRunning

These are the three primary events into which the game can be divided. The onCrash method is called when the dino crashes, onReset is called after onCrash to reset the game, and onRunning is called at every instance of movement to decide whether the dino should jump or not.

You can check the reference code here:

The src/game folder contains the files used to replicate the Chrome Dino game. runner.js is exported from here so that nn.js, where the main artificial intelligence code is written, can access the above-mentioned functions. The index.html file creates HTML divs, where we’ll inject the game and add the scripts.

Brilliant!! Hope you’re with me so far :p Let’s see how to set up our project.

Setting up

The first step is to define imports.

We’re using babel-polyfill, which lets us use the full set of ES6 features beyond syntax changes, including new built-in objects like WeakMap.

Now we can import the TensorFlow.js library as the tf object. We’ll also import the canvas width and canvas height (to use for feature scaling) and, of course, the Runner class.

import 'babel-polyfill';
import * as tf from '@tensorflow/tfjs';
import { CANVAS_WIDTH, CANVAS_HEIGHT } from './game/constants';
import { Runner } from './game';

After that, we can initialize the Runner instance as null.

let runner = null;

We’ve created a function named setup that will be called after the DOM content is loaded.

function setup() {
  // setup code here

In the setup function, we initialize the runner instance with a DINO_COUNT of 1 and assign functions to onReset, onCrash, and onRunning events. As mentioned earlier, the Runner class is designed so we can have multiple dinos play the game simultaneously. Here, DINO_COUNT signifies the number of dinos we want to run the current simulation with.

runner = new Runner('.game', {
  DINO_COUNT: 1,
  onReset: handleReset,
  onCrash: handleCrash,
  onRunning: handleRunning
});

Assign the runner object to window.runner for global access and call the runner.init() function, which starts the game.

window.runner = runner;
runner.init();

Seems like we are “set up” and ready to go 🙂

Handling Resets

We create a variable named firstTime, which tells us whether the Dino game is being played for the first time or the current game is a reset. This way, we can use the same function with an if condition inside it to handle resets.

let firstTime = true;

The handleReset method takes an array of dinos as an argument. The Runner class creates an array of dinosaurs that can be used to play the game with multiple dinosaurs. For part one of this series, we’re using only one dino.

function handleReset(dinos) {
  // handles resetting and initialization of the game

Since we’re only using one dino, let’s just take the 0th element of the dinos array.

const dino = dinos[0];

If it’s the first time this function is called, we initialize the model in the dino.model object. We create the model using tf.sequential(), which returns a sequential model. Then we’ll add two layers to it.

The neural net takes three inputs: the parameters that define the state of the dino, i.e. the speed of the game, the width of the oncoming obstacle, and its distance from our dino.

Therefore, the first layer has an input shape of [3], so it accepts a 2D tensor such as [[1, 1, 0]] to account for the three inputs. The activation function we’ve used is the basic sigmoid, and the layer outputs six units to the next layer.

if (firstTime) {
  firstTime = false;
  dino.model = tf.sequential();
  // first hidden layer: 3 inputs -> 6 units, sigmoid activation
  dino.model.add(tf.layers.dense({ inputShape: [3], units: 6, activation: 'sigmoid' }));

This is the output layer, with six inputs coming from the previous hidden layer. The activation function is again sigmoid. What do you think? How many units do we need in the output layer now? It will be two units, right? One for the dino to jump ([0, 1]) and one for it to not jump ([1, 0]).

  // output layer: 2 units, one per action
  dino.model.add(tf.layers.dense({ units: 2, activation: 'sigmoid' }));

We finally compile the model using the meanSquaredError loss function and the adam optimizer with a learning rate of 0.1. Feel free to play around with this learning rate 🙂

  dino.model.compile({
    loss: 'meanSquaredError',
    optimizer: tf.train.adam(0.1)
  });

We’ve also created an object on the dino that keeps our training set as two arrays, named inputs and labels:

  dino.training = {
    inputs: [],
    labels: []
  };

Now, if this isn’t the first time reset has been called, we’ll train our neural network using the fit function of TensorFlow models. This function takes two tensors: the first argument is the input tensor, with the shape of the first layer’s input, and the second is the corresponding label tensor, with the shape specified in the model’s last layer. We’ll use the tensor2d function of the TensorFlow.js API to convert our plain 2D arrays to tensors.

else {
  dino.model.fit(
    tf.tensor2d(dino.training.inputs),
    tf.tensor2d(dino.training.labels)
  );
}
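To make those shapes concrete, here’s what the plain arrays look like before tf.tensor2d converts them (the numbers are made up for illustration):

```javascript
// Each input row is one feature-scaled state vector (3 numbers),
// and each label row is the target action for that state.
const inputs = [
  [0.5, 0.10, 0.5],  // a state where the dino crashed while jumping
  [0.2, 0.05, 0.6]   // a state where the dino crashed while running
];
const labels = [
  [1, 0],            // ...so it should NOT have jumped here
  [0, 1]             // ...so it SHOULD have jumped here
];
// tf.tensor2d(inputs) has shape [2, 3]; tf.tensor2d(labels) has shape [2, 2]
```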

Awesome, we’ve created our model and written code for training it.
So now…

It’s Prediction Time!

The prediction part of our model will obviously be used in handleRunning, as that’s where we’ll decide what to do next.

The handleRunning method takes dino and state as arguments. The state is the current condition of the Runner: it contains the distance to the next obstacle, its width, and the speed of the game. The method returns a promise that’s resolved with the action the dino is required to take.
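For reference, here’s a minimal sketch of what a convertStateToVector helper can look like: it squashes each component of the state into the [0, 1] range using the canvas width (the state field names and the speed divisor here are illustrative assumptions, not necessarily the repo’s exact code):

```javascript
// Normally imported from './game/constants'; hard-coded here for the sketch.
const CANVAS_WIDTH = 600;

// Feature scaling: map the raw game state to a vector of values in [0, 1].
function convertStateToVector(state) {
  if (!state) {
    return [0, 0, 0];
  }
  return [
    state.obstacleX / CANVAS_WIDTH,      // distance to the next obstacle
    state.obstacleWidth / CANVAS_WIDTH,  // width of the obstacle
    state.speed / 100                    // current speed of the game
  ];
}
```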

function handleRunning(dino, state) {
  return new Promise((resolve) => {

In the callback for the promise, we’ll give one argument to the arrow function: the resolve callback. If the dino is currently not jumping, we’ll predict the next action using the model.predict method. We first call the convertStateToVector method, which takes a state object as input and returns a feature-scaled vector, then call tf.tensor2d to convert this array to a tensor, and finally call predict on it.

if (!dino.jumping) {
  // whenever the dino is not jumping, decide whether it needs to jump or not
  let action = 0; // variable for the action: 1 for jump, 0 for not
  // call model.predict on the state vector after converting it to a tensor2d object
  const prediction = dino.model.predict(tf.tensor2d([convertStateToVector(state)]));

  // the predict function returns a tensor; we get the data in a promise as result
  // and based on the result decide the action

The model.predict method returns an object. That object has a data method that returns a promise. The then function of that promise takes a callback with a result as an argument. It’s this result that contains the prediction in the form of a simple array.

Since we defined [0,1] as the jump output, we compare result[1] and result[0]: if result[1] is greater than result[0], the dino should jump; otherwise, the dino should keep running.
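That comparison can be isolated into a tiny helper (a sketch for clarity; the blog’s code does it inline):

```javascript
// Maps the model's two-element output to an action:
// result = [score for "don't jump", score for "jump"] -> 1 to jump, 0 to keep running
function predictionToAction(result) {
  return result[1] > result[0] ? 1 : 0;
}
```

For example, `predictionToAction([0.3, 0.7])` returns 1 (jump), while `predictionToAction([0.8, 0.2])` returns 0 (keep running).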

If it chooses to jump, we’ll set the action as 1 and set the lastJumpingState as a current state, as we should choose to jump at this state.

  const predictionPromise = prediction.data();
  predictionPromise.then((result) => {
    // converting the prediction to an action
    if (result[1] > result[0]) {
      // we want to jump
      action = 1;
      // set last jumping state to current state
      dino.lastJumpingState = state;

If it chooses not to jump, we’ll set the lastRunningState to the current state, as we chose to run at this state.

    } else {
      // set last running state to current state
      dino.lastRunningState = state;
    }

In the end, we’ll resolve the promise with the action required of the dino (0 if it wasn’t jumping, 1 if it predicts a jump):

    resolve(action);
  });
If the dino was already jumping in the current state, we resolve with the code for running (0):

} else {
  resolve(0);
}
Phew! We’ve finally decided how we’re going to act. Now let’s handle failures, i.e. the crash. This is also where we’ll create our training data.

Handling the Crashing Dino

Collecting training data

The handleCrash function checks to see if the dino was jumping at the time of the crash or not, and on that basis it selects which state to add to the training set.

function handleCrash(dino) {
  let input = null;
  let label = null;
  // check whether the dino was jumping at the time of the crash
  if (dino.jumping) {
    // should not jump next time
    // convert state object to array

If the dino was jumping at this time and it crashed, it means that it shouldn’t have jumped. Therefore, we’ll save the last jumping state to the input and label corresponding to the not-jump output.

Similarly, if we find that it didn’t jump and crashed, it means it should have jumped. So we’ll take the last running state and add the label corresponding to the jump, then push this new input and label to the training set.

    input = convertStateToVector(dino.lastJumpingState);
    label = [1, 0];
  } else {
    // should jump next time
    // convert state object to array
    input = convertStateToVector(dino.lastRunningState);
    label = [0, 1];
  }
  // push the new input to the training set
  dino.training.inputs.push(input);
  // push the label to labels
  dino.training.labels.push(label);
}

Finally, we’ll run npm start in the directory to launch webpack-dev-server, and the project can be viewed at http://localhost:8080.

Conclusion and Next Steps

Okay, so in this post, we’ve automated the Chrome Dino game using neural networks. In the following articles, we’ll be using genetic algorithms in conjunction with neural networks. We’ll also try automating the game using only genetic algorithms.

By the end of this series, we’ll draw comparisons between the performance of all three automation strategies.

Thanks to Pratyush Goel for working hard and helping with this blog post series.

If you liked the article, please clap your heart out. Tip — Your 50 claps will make my day!

Want to know more about me? You can check out my course on Web Development here.

Please also share on Facebook and Twitter. If you’d like to get updates, follow me on Twitter and Medium. If anything is not clear or you want to point out something, please comment down below.

Discuss this post on Hacker News and Reddit.

