End-to-End Image Recognition With Open Source Data — Part 2: Model Deployment with Plotly Dash and Heroku

data4help
10 min read · May 5, 2021

A beginner’s guide to deployment with Dash: Deploying a model that classifies paintings from the Metropolitan Museum of Art in New York

In Part 1 of this series, we showed how we collected and analyzed open-source image data from the Metropolitan Museum of Art and built an image classifier that predicts a painting’s country of origin. In this post, we go one step further and show how to take this trained model and bring it to life with an interactive front-end dashboard.

This post is intended as a tutorial for Data Scientists interested in deploying their models and in learning the very basics of Plotly Dash and Heroku.

How to Deploy?

The first step in deployment is deciding how and where to deploy your app. You may be asking yourself what options are available and how your predictions are best presented. There are many possibilities.

One of the most straightforward options is to deploy your model as a simple web app. This means that your model becomes a web service that can take requests. You can think of a request as an order for a prediction from your model, given the same input features used to train the model originally. The web service runs these input features through the model and returns the prediction as the output. The beauty of web services is that the output can be consumed by virtually any other application, including applications written in other languages. This makes simple web services great for interacting with other systems. We walk through how to deploy your model as a simple web app here.
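As an illustration, a minimal version of such a web service could be sketched with Flask; the route, the model file name, and the input format here are all assumptions, not our actual code:

```python
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Load a previously trained model (file name is hypothetical).
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # The caller sends the input features as JSON,
    # e.g. {"features": [[0.3, 1.7, 5.2]]}
    features = request.get_json()["features"]
    prediction = model.predict(features)
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run()
```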

But what if we don’t care too much about our predictions being easily consumed by other apps? What if we are more concerned with them being easily consumed by humans? Image recognition in particular is a highly visual area of machine learning. In this case, it often makes sense not simply to return the prediction as a string of text from a web service, but rather to display it along with the corresponding image.

To do this, we need to build a front-end visual application. For this, we will use Plotly Dash, a popular framework for building front-end applications and dashboards in Python. In essence, Dash allows you to write HTML and CSS code in Python. These two languages are commonly used when creating websites, both to add elements like buttons and scroll bars and to control the colors and fonts of the page.

The Architecture of our App

Before diving into the code, it’s important to think about how we want our front-end application to look and how our users will interact with it.

Model Input

The first step is to decide how input images will be fed into our model. We considered several options. For example, a user could be given the option to upload a new, unseen painting. However, this would require more work on the part of the user, who would first need to find and download an image and then upload it into the app. Our main goal was for users to be able to interact with the model and see predictions right away, so we began to think about other ways users could generate an image for the model to predict on.

Ultimately, we decided it would be best to simply let users generate a random image from the Met’s collection and see the prediction based on that.

Another option could have been making use of drop-down selectors to allow users to select specific paintings, or to select paintings from a given culture.

Model Output

Once we had decided on a random image generator to supply the input image for the model, we had to decide how the output prediction should be displayed.

We decided to simply display the prediction underneath the image, with a change in the color of the prediction label to indicate if the prediction was right or not: the prediction label should show up red if incorrect, and green if correct.

Another idea would have been to show the relative likelihoods of each culture label predicted by the model, rather than simply the label with the highest likelihood (final prediction). This would have been especially interesting to look at for the paintings that the model predicted incorrectly.

The final step in designing our app architecture is to sketch out how we would like our simple app to look, so that we know where to put each of the components. This rough outline is shown below.

Rough sketch of the layout of the app

Building the App

Now that we have decided on the inputs and outputs of our app, meaning what it should show and how users of the app should interact with it, it’s time to start building it using Plotly Dash.

The very first step in building our Dash app is to initialize the app. The app runs whenever the Python script containing its code (conveniently called app.py) is executed. When the app runs, it is served on a local port; the port address is shown in the terminal after running the app.py script on the command line. Clicking this local link launches a browser page showing the app. The simple starter code for this step is shown below. It’s that easy to start an app!

Starter script for building your first Dash app
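A minimal sketch of such a starter script, using the dash-bootstrap-components library we rely on later (the import style reflects Dash as of early 2021):

```python
import dash
import dash_bootstrap_components as dbc
import dash_html_components as html

# Initialize the app with a Bootstrap theme.
app = dash.Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])

# All visible components of the app will be added to this children list.
app.layout = html.Div(children=[])

if __name__ == "__main__":
    # debug=True reloads the app automatically whenever the code changes.
    app.run_server(debug=True)
```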

In the app.run_server() call, we set debug=True so that our app updates automatically whenever we make changes.

At first, the app is just a blank webpage. To build it out, we add Dash components to this blank page in the order in which we want them to appear.

All of the components of the app will live in the “children” list of the app.layout. To build out the additional features of the app, we simply add Dash components to this children list. Dash components exist for almost anything you would like to add to your app, including interactive charts and dropdown menus, but also for simpler elements like a text block.

To decide which components to add where, we refer to the rough app sketch we drew in the previous section. The first step is to add a title. This component is easy to add, since it only contains text. To add it, we create a new row on our app page using the Dash Bootstrap Components row and column. Dash Bootstrap Components provides additional components beyond the basic set and is based on the popular Bootstrap CSS framework. These column and row elements help us with spacing the components of our app.

The code for this first component, as well as the similar description portion underneath the title, is shown below.
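A sketch of what this code might look like; the title text, description text, and style values are placeholders:

```python
app.layout = html.Div(children=[
    # Title row: one row containing one column with an H1 heading.
    dbc.Row(dbc.Col(html.H1(
        "Met Paintings Classifier",
        style={"textAlign": "center"},
    ))),
    # Description row underneath the title, in normal text.
    dbc.Row(dbc.Col(html.Font(
        "Generate a random painting from the Met's collection and see "
        "whether the model predicts its culture correctly.",
        style={"textAlign": "center"},
    ))),
])
```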

We start by adding a row, then a column, for each of our elements. Finally, we add an HTML component specifying the type of text we would like. The title uses the largest heading size offered in HTML, H1. The description uses normal text, via the Font component here.

We then use keyword-argument dictionaries to specify things like colors and text alignment. The output of this code is shown below.

The first text components of our basic Dash app

At this point, we aren’t too worried about how the fonts of the app look and are mainly just concerned with functionality and getting the necessary pieces in place.

Working with App Callbacks

The next step is to create the random image generator button. This step is a bit more complicated than the static text components we created in the last step, since we want the button to actually activate some functionality, namely rendering an image.

To do this, we need to use an important Dash feature called @app.callback(). This feature is a Python decorator, which you can tell from the @ symbol at its beginning. A decorator is placed right before a function to add extra functionality to it. In this case, the decorator connects any function we write to the components of our Dash app. It uses Dash Input() and Output() components to make the inputs and outputs of any function we write updatable by other components in the app. For example, when our “random image generator” button is clicked, we can use this click action as an input to the function, telling it to run. The output from each button click can then be returned to other parts of the app. The code below shows how this works for the “random image generator” button. It uses a simple CSV made from our training data, which contains two columns for each painting: one with its culture label, and one with the hyperlink to its corresponding image on the Met’s servers.
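A sketch of how this could look; the component ids, CSV file name, and column names are all hypothetical:

```python
import pandas as pd
from dash.dependencies import Input, Output

# CSV built from the training data: one row per painting, with its culture
# label and the hyperlink to its image on the Met's servers.
paintings = pd.read_csv("paintings.csv")  # columns: "culture", "image_url"

app.layout = html.Div(children=[
    # ... the title and description rows from above ...
    dbc.Button("Generate random image", id="generate_button", n_clicks=0),
    # Hidden components that simply store the callback outputs.
    html.Div(id="image_culture", style={"display": "none"}),
    html.Div(id="random_image", style={"display": "none"}),
])

@app.callback(
    [Output("image_culture", "children"),
     Output("random_image", "children")],
    [Input("generate_button", "n_clicks")],
)
def generate_random_image(n_clicks):
    # Each button click samples one painting at random and returns
    # its culture label and its image URL.
    row = paintings.sample(1).iloc[0]
    return row["culture"], row["image_url"]
```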

Since we don’t want the image_culture to be displayed immediately, but rather just saved, we return it as a component with its keyword arguments set not to display. We do the same for the other output, called random_image. This is because it is actually just the URL link to the image, not the image itself. We will need to pass it into another Dash component in order to fully render the image.

This next step is added to the code below. We also add a component for printing out the painting’s correct culture above the image.
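A sketch of these additions, continuing with the hypothetical ids from above; the layout would also gain an html.H4(id="culture_label") and an html.Img(id="painting"):

```python
import base64

import requests

# Callback 1: update the visible culture label whenever a new image is drawn.
@app.callback(Output("culture_label", "children"),
              [Input("image_culture", "children")])
def show_culture(culture):
    return "Culture: {}".format(culture)

# Callback 2: fetch the image data behind the URL and render it by
# embedding it as a base64-encoded data URI in the html.Img component.
@app.callback(Output("painting", "src"),
              [Input("random_image", "children")])
def render_image(url):
    image_data = requests.get(url).content
    encoded = base64.b64encode(image_data).decode()
    return "data:image/jpeg;base64," + encoded
```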

Note that we use two additional callbacks here: one that simply generates a label with the new image’s culture every time the button is clicked, and one that uses the image’s URL to fetch the image data and render it in the app. The new output of the app after these updates is shown below.

The “random image generator” button now generates and displays a random painting from the Met’s collection.

Incorporating the ML Model

We now have the input to the model sorted: our app allows the user to generate a random image from the Met’s collection, like we wanted, and it also displays the image.

The next step is to use our model to make a prediction on this new data. To do this, we will load in our saved model from the model training stage discussed in Part 1 of this series, and use its .predict() method to generate predictions for the newly generated images.

We also have to be sure to apply the same pre-processing steps to these new images as we did when training the model.

First, we load the model that we pickled after training. We also set some important variables, namely the height and width of the image expected by the model.
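A sketch of this step, reusing the hypothetical component ids from above (the layout would also gain an html.H4(id="prediction_label")); the model file name, image dimensions, and label list are all assumptions that would need to match Part 1:

```python
import pickle

import cv2
import numpy as np
import requests
from dash.dependencies import Input, Output

# Load the classifier we pickled after training (file name is illustrative).
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

# Target image size used during training (values are assumptions).
IMG_HEIGHT, IMG_WIDTH = 224, 224

# Hypothetical culture labels, in the order the model outputs them.
CULTURE_LABELS = ["American", "British", "French", "Italian", "Japanese"]

def preprocess(image_bytes):
    # Decode the raw bytes, resize to the training dimensions, and rescale
    # pixel values to [0, 1], mirroring the training pre-processing.
    array = np.frombuffer(image_bytes, dtype=np.uint8)
    image = cv2.imdecode(array, cv2.IMREAD_COLOR)
    image = cv2.resize(image, (IMG_WIDTH, IMG_HEIGHT)) / 255.0
    return image.reshape(1, IMG_HEIGHT, IMG_WIDTH, 3)

@app.callback(Output("prediction_label", "children"),
              [Input("random_image", "children")])
def predict_culture(url):
    # Run the newly generated image through the model and return the
    # label with the highest predicted likelihood.
    image_bytes = requests.get(url).content
    probabilities = model.predict(preprocess(image_bytes))
    return CULTURE_LABELS[int(np.argmax(probabilities))]
```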

Our updated app (shown below) now displays the model’s prediction underneath the image.

Note in the code how we can specify the changing color of the culture label by simply including it as a variable that is updated with a conditional check.
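In sketch form (the variable names here are hypothetical), that conditional might look like this:

```python
# The label turns green for a correct prediction, red otherwise.
color = "green" if predicted_culture == true_culture else "red"
prediction_label = html.H4("Prediction: " + predicted_culture,
                           style={"color": color})
```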

Our app now does everything we want it to do: it generates a random image from the collection for the user, renders the image, and returns a color-coded prediction. The final step is to improve the look and feel of the app and fix the alignment. To do this, we adjust the width and justification of the various components. Next, we add a CSS style sheet to change the fonts.

And voilà! The app is complete:

The final app, showing a mis-classified painting.

Deploying the App with Heroku

Now that our app is working well locally and looks how we want it to, it’s time to show it to the world.

To do this, we use Heroku, a cloud platform that offers a free tier for hosting applications. Following their easy-to-follow instructions, we were able to deploy the app. These steps involve adding a requirements file and a Procfile and pushing your code base to Heroku.
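For a Dash app, the Procfile typically points gunicorn at the Flask server underlying the app; a common setup (assuming the script is named app.py) looks like this:

```python
# In app.py: expose the Flask server underlying the Dash app so that
# gunicorn (listed in requirements.txt) can serve it on Heroku.
server = app.server

# The Procfile then consists of a single line:
# web: gunicorn app:server
```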

We did run into a few small bugs and issues when pushing our code to Heroku, however. We got an error saying the “slug size” was too large, which essentially means that our code and dependencies were too big to deploy on Heroku’s free tier. The standard TensorFlow package is a very large dependency, in part because it bundles GPU support, and since Heroku currently doesn’t offer GPUs anyway, we switched to the slimmer tensorflow-cpu package. This in turn required switching our pre-processing library, OpenCV (cv2), to the headless variant opencv-python-headless. These changes decreased the size of our project enough to deploy it for free on Heroku. Just something to keep in mind if you plan to deploy your own image recognition project on Heroku.
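In requirements.txt, the swap looks roughly like this (the version pins are illustrative):

```
# requirements.txt: slimmer replacements for tensorflow and opencv-python
tensorflow-cpu==2.4.1
opencv-python-headless==4.5.1.48
```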

Now that the app is live on Heroku, you can try it out here: Try out the App!

