From AI Model to Production in Azure

Problem Description (courtesy of DataDriven.com):

When a patient has a CT scan taken, a special device uses X-rays to take measurements from a variety of angles which are then computationally reconstructed into a 3D matrix of intensity values. Each layer of the matrix shows one very thin “slice” of the patient’s body.

This data is saved in an industry-standard format known as DICOM, which stores the image matrix in a fixed binary layout and then wraps this data with a huge variety of metadata tags.

Some of these fields (e.g. hardware manufacturer, device serial number, voltage) are usually correct because they are automatically read from hardware and software settings.

The problem is that many important fields must be added manually by the technician and are therefore subject to human error factors like confusion, fatigue, loss of situational awareness, and simple typos.

A doctor scrutinising image data will usually be able to detect incorrect metadata, but in an era when more and more diagnoses are being carried out by computers it is becoming increasingly important that patient record data is as accurate as possible.

This is where Artificial Intelligence comes in. We want to improve the error checking for one single but incredibly important value: a field known as Image Orientation (Patient), which indicates the 3D orientation of the patient’s body in the image.
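For illustration, this is how that tag can be inspected with the pydicom library (not part of the pipeline below; the filename is a placeholder):

import pydicom

# read a DICOM file and inspect the Image Orientation (Patient) tag (0020,0037)
ds = pydicom.dcmread("slice.dcm")   # "slice.dcm" is a hypothetical filename
print(ds.ImageOrientationPatient)   # e.g. [1, 0, 0, 0, 1, 0] for a standard axial slice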

For this challenge we’re given 20,000 CT scan images, sized 64×64 pixels and correctly labelled for training. The basic premise is that, given an image, the AI model needs to predict the correct orientation, as illustrated below. The red arrow shows the location of the spine, which our AI model needs to find in order to work out the image orientation.

[Image: patient orientation diagram, with the spine marked by a red arrow]

We’ll use TensorFlow and Keras to build and train an AI model in Python, and validate it against another 20,000 unlabelled images. The pipeline I used had three parts, but the core is shown in Python below and achieved 99.98% accuracy on the validation set. The second and third parts (not shown) pushed this to 100%, landing me a #6 ranking on the leaderboard. A preview of the 20,000 sample training images is shown below.

[Image: preview of the sample training images]

Our model in Python:

import pickle

import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from keras.callbacks import ModelCheckpoint
from sklearn.model_selection import train_test_split

# data, labels, mlb, InputShape, Classes, BatchSize and Epochs come from the
# data-loading step (sketched a little further down)

# hold out 15% of the labelled images for validation
(x_train, x_test, y_train, y_test) = train_test_split(data, labels, test_size=0.15, random_state=42)

# construct our model: two convolutional layers, max pooling, then a dense head
model = Sequential()
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu', input_shape=InputShape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(Classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])

# checkpoint the weights whenever the training loss improves
checkpoint = ModelCheckpoint("model.h5", monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]

# start training
model.fit(x_train, y_train, batch_size=BatchSize, epochs=Epochs, verbose=1, validation_data=(x_test, y_test), callbacks=callbacks_list)
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

# save the model and label binarizer to disk
model.save('capstone.model')
f = open('capstone.pickle', "wb")
f.write(pickle.dumps(mlb))
f.close()


I split the sample images into four folders according to their labels, using ZERO, ONE, TWO and THREE as the class names. So, given a test image, the model performs a prediction and returns one of those class labels. A minimal sketch of the folder-based loading step is shown below.
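The loading code wasn’t part of the core shown above, so here is a minimal sketch of how data, labels and the binarizer mlb might be produced, assuming the four class folders sit under a training directory (the paths and the hyper-parameter values are my own assumptions, not the original pipeline’s):

import os
import cv2
import numpy as np
from sklearn.preprocessing import LabelBinarizer

data, class_names = [], []
for label in ("ZERO", "ONE", "TWO", "THREE"):
    folder = os.path.join("training", label)                # hypothetical folder layout
    for filename in os.listdir(folder):
        image = cv2.imread(os.path.join(folder, filename))  # 64x64 pixels, 3 channels
        data.append(image.astype("float") / 255.0)          # same scaling the API uses
        class_names.append(label)

data = np.array(data)
InputShape = data.shape[1:]     # (64, 64, 3)

# one-hot encode the four class labels; mlb.classes_ preserves the label order
mlb = LabelBinarizer()
labels = mlb.fit_transform(class_names)
Classes = len(mlb.classes_)     # 4

BatchSize, Epochs = 32, 25      # assumed values, not the originals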

First things first, we construct our model and start the training; on my dual-K80 GPU server this took about an hour. The model is checkpointed at various stages, and once we are happy with the accuracy we save the resulting model and label binarizer (capstone.model and capstone.pickle in the code).

To deploy this as an API in Azure we’ll create a new web app with default Azure settings. Once deployed, add the Python 3.6 extension, switch to console mode and use pip to install the additional libraries we need, including Flask, OpenCV, TensorFlow and Keras; the commands might look like the ones below. Then modify the web.config to look like the one shown after that. Note that our Python server script will be named run_keras_server.py.
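For reference, the console commands might look something like this (the exact package list is an assumption on my part; the python.exe path comes from the Python extension, as referenced in web.config):

D:\home\Python364x64\python.exe -m pip install --upgrade pip
D:\home\Python364x64\python.exe -m pip install flask opencv-python tensorflow keras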

<configuration>
  <appSettings>
    <add key="PYTHONPATH" value="D:\home\site\wwwroot"/>
    <add key="WSGI_HANDLER" value="run_keras_server.app"/>
    <add key="WSGI_LOG" value="D:\home\LogFiles\wfastcgi.log"/>
  </appSettings>
  <system.webServer>
    <handlers>
      <add name="PythonHandler" path="*" verb="*" modules="FastCgiModule" scriptProcessor="D:\home\Python364x64\python.exe|D:\home\Python364x64\wfastcgi.py" resourceType="Unspecified" requireAccess="Script"/>
    </handlers>
  </system.webServer>
</configuration>


Our Python run_keras_server.py script:

import numpy as np
from keras.preprocessing.image import img_to_array
from keras.models import load_model
import cv2
import flask
import pickle

app = flask.Flask(__name__)

# load the trained model and label binarizer once, at startup
model = load_model("capstone.model")
mlb = pickle.loads(open("capstone.pickle", "rb").read())

# human-readable descriptions for the four class labels
DESCRIPTIONS = {
    "ZERO": "Spine at bottom, patient facing up.",
    "ONE": "Spine at right, patient facing left.",
    "TWO": "Spine at top, patient facing down.",
    "THREE": "Spine at left, patient facing right.",
}

def _grab_image(stream):
    # decode the uploaded bytes into an OpenCV BGR image
    data = stream.read()
    image = np.asarray(bytearray(data), dtype="uint8")
    return cv2.imdecode(image, cv2.IMREAD_COLOR)

@app.route("/predict", methods=["POST"])
def predict():
    data = {"success": False, "label": "None"}

    if flask.request.files.get("image"):
        image = _grab_image(stream=flask.request.files["image"])

        # normalise and add a batch dimension, matching the training pipeline
        image = image.astype("float") / 255.0
        image = img_to_array(image)
        image = np.expand_dims(image, axis=0)

        # predict and take the most probable class
        proba = model.predict(image)[0]
        label = mlb.classes_[np.argmax(proba)]

        data["label"] = DESCRIPTIONS.get(label, label)
        data["success"] = True

    return flask.jsonify(data)

if __name__ == "__main__":
    app.run()


Using your FTP tool of choice, upload the run_keras_server.py script, along with capstone.model and capstone.pickle, into the D:\home\site\wwwroot folder. Restart the web app from within Azure.

We can test our API using Postman, a couple of lines of Python (shown first), or the C# program below, which takes a sample image and performs a prediction.
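The Flask route expects the file in a multipart form field named image, so a quick sanity check with the requests library (a hypothetical test snippet, reusing the demo URL from the C# client) looks like this:

import requests

# post a sample image to the /predict endpoint as multipart form data
with open("20.png", "rb") as f:
    response = requests.post("http://mywebappdemo.azurewebsites.net/predict",
                             files={"image": f})
print(response.json())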

using System;
using System.Net.Http;
using System.Threading.Tasks;

namespace CallPythonAPI
{
    class Program
    {
        private static readonly HttpClient client = new HttpClient();

        static void Main(string[] args)
        {
            string responsePayload = Upload().GetAwaiter().GetResult();
            Console.WriteLine(responsePayload);
        }

        private static async Task<string> Upload()
        {
            // POST the sample image to the /predict endpoint as multipart form data
            var request = new HttpRequestMessage(HttpMethod.Post, "http://mywebappdemo.azurewebsites.net/predict");
            var content = new MultipartFormDataContent();
            byte[] byteArray = System.IO.File.ReadAllBytes("20.png");
            content.Add(new ByteArrayContent(byteArray), "image", "20.png");
            request.Content = content;
            var response = await client.SendAsync(request);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}


Our sample image looks like this:

[Image: sample CT slice, 20.png]

Running the prediction on this image yields the following result:

[Image: JSON response returned by the /predict endpoint]
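Based on the predict() route above, the response is a small JSON document of this shape (the exact label text depends on the predicted class):

{
  "label": "Spine at bottom, patient facing up.",
  "success": true
}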

That’s it. We can incorporate the API call into a web site, a desktop client app or even a Raspberry Pi device, since all the heavy lifting is done on the server side.
