Creating a Simple ML Server from Scratch with CherryPy
Apr 17, 2023
- The full code for this post can be found here.
In a previous post on AI/ML architectures, we defined a way to think about building out the various parts of an AI/ML workflow for teams of different sizes over time, using cute names from Greek mythology. One part of that post highlighted the concept of an "Epimetheus Endpoint," and specifically how one could go about building one from scratch. That section seemed to deserve a more robust treatment.
To that end, in this blog post we will walk through creating a simple Machine Learning API server using CherryPy, a Python web framework. We will assume that you have a trained machine learning model saved as a model.joblib file using scikit-learn, and a list of columns saved as model_columns.joblib.
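If you don't already have those artifacts, here is a minimal sketch of how they might be produced. The CSV file, target column, and feature names are hypothetical placeholders, not part of the code for this post:

import joblib
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical training data; substitute your own dataset.
df = pd.read_csv('train.csv')
X = pd.get_dummies(df.drop(columns=['target']))  # one-hot encode categoricals
y = df['target']

lr = LogisticRegression()
lr.fit(X, y)

joblib.dump(lr, 'model.joblib')                       # the trained model
joblib.dump(list(X.columns), 'model_columns.joblib')  # column order for reindexing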
Let’s dive into the code.
1. Import Necessary Libraries
First, we need to import the required libraries for our server: CherryPy, joblib, pandas, and json. Part of the objective here is to use as few dependencies as possible, to avoid Python dependency hell. So while a framework like Flask may be the typical choice for building APIs in Python, we're instead going to use CherryPy's native built-in tools.
import cherrypy
import joblib
import json
import pandas as pd
2. Load the Trained Model and Columns
Load the previously trained machine learning model and the corresponding columns into memory using joblib. The reason we load the columns is that when a user posts a JSON payload to the API, the fields of that JSON may not match the fields within model.joblib, which would cause an error. To prevent this, we reindex the original JSON payload from the user to match the model.joblib fields before the inference is run.
lr = joblib.load('/home/app/model/model.joblib')
model_columns = joblib.load("/home/app/model/model_columns.joblib")
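To make the reindexing idea concrete, here is a standalone sketch, with made-up columns, of what happens when an incoming payload is missing a dummy column the model was trained on:

import pandas as pd

# Columns the model was trained on (made up for illustration)
model_columns = ['age', 'color_blue', 'color_red']

# This payload only produces 'color_red' after get_dummies
query = pd.get_dummies(pd.DataFrame([{'age': 42, 'color': 'red'}]))
print(query.columns.tolist())   # ['age', 'color_red']

# Reindexing fills the missing 'color_blue' with 0 and fixes the column order
query = query.reindex(columns=model_columns, fill_value=0)
print(query.columns.tolist())   # ['age', 'color_blue', 'color_red']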
3. Define the Server and API Classes
Create two classes: Server() and API(). The Server() class will have a simple index() method that returns a simple "Hello World" to test if the server is working. The API() class will have an exposed = True attribute and a query() method that will handle incoming requests and return predictions.
class Server(object):
    @cherrypy.expose
    def index(self):
        return "Hello World"
4. Implement the query() method in the API class
The query() method in the API() class will process incoming JSON data, convert categorical variables into dummy variables, reindex the columns, use the trained model to make predictions, and return the predictions as a JSON response. Note that we use pd.get_dummies and the DataFrame.reindex() method with model_columns to prevent a mismatch, as discussed in point (2) above, prior to making the inference (prediction).
class API(object):
    exposed = True

    @cherrypy.expose
    @cherrypy.tools.json_in()
    @cherrypy.tools.json_out()
    def query(self, *args, **kwargs):
        # json_in() has already deserialized the request body for us
        data = cherrypy.request.json
        # One-hot encode categoricals, then align to the training columns
        query = pd.get_dummies(pd.DataFrame(data))
        query = query.reindex(columns=model_columns, fill_value=0)
        prediction = list(lr.predict(query))
        cherrypy.response.headers['Content-Type'] = 'application/json'
        # json_out() serializes the return value, so plain Python ints suffice
        return {"prediction": [int(x) for x in prediction]}
5. Define the main() Function and Server Configuration
In the main() function, define the global server configuration, server configuration for the Server() class, and the API configuration for the API() class. Then, mount the Server() and API() classes, start the CherryPy engine, and block the engine.
Of particular interest in how this was done below is that the server configuration, server_config, is separated from the API configuration, api_config. The reason this was done was "separation of concerns," meaning we are separating out different applications into units with minimal overlap between the units. Within this application, the Server object was used to define a web service, which is different from an API in that it is really designed to accept HTTP requests at a URL for the purposes of viewing in a browser, so it doesn't really have a complex configuration. The API object, in contrast, requires request-dispatch and response-header configuration to function.
What both applications share is:
- They both serve at the host and port shown.
- They both are mounted to the CherryPy server overall.
- They both have to be started.
All of these universal traits are reflected in the global_config and the actions at the bottom of the main() function.
def main():
    global_config = {
        'server.socket_host': '0.0.0.0',
        'server.socket_port': 8889,
    }
    server_config = {
        '/': {
            'tools.sessions.on': True,
            'tools.sessions.timeout': 3600,
        }
    }
    api_config = {
        '/api': {
            'request.dispatch': cherrypy.dispatch.MethodDispatcher(),
            'tools.sessions.on': True,
            'tools.response_headers.on': True,
            'tools.response_headers.headers': [('Content-Type', 'application/json')]
        }
    }
    cherrypy.config.update(global_config)
    cherrypy.tree.mount(Server(), '/', server_config)
    cherrypy.tree.mount(API(), '/api', api_config)
    cherrypy.engine.start()
    cherrypy.engine.block()
6. Run the Server
Finally, run the main() function when the script is executed directly, which is fairly self-explanatory.
if __name__ == "__main__":
    main()
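Once the server is running, the query endpoint can be exercised using only the standard library, in keeping with the minimal-dependencies goal. The field names in the payload below are hypothetical and would need to match your own model's training columns:

# Minimal client-side test using only the standard library.
# The payload fields are placeholders; use your model's actual features.
import json
import urllib.request

payload = json.dumps([{'age': 42, 'color': 'red'}]).encode('utf-8')
req = urllib.request.Request(
    'http://localhost:8889/api/query',
    data=payload,
    headers={'Content-Type': 'application/json'},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))   # e.g. {"prediction": [0]}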
Miscellaneous Concerns and Wrapping Up
- All of the above was wrapped up into a directory structure which allows anyone to build and run this with a Dockerfile, so that there should be fewer problems with dependency issues, regardless of what base machine it is run on.
- This API is not protected; it's just a bare API with no authentication or authorization, as simple as possible. If this were used in anything close to a production environment, the necessary level of security would obviously have to be built in (see the sketch just below for one possible starting point).
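For instance, CherryPy ships with a built-in basic-auth tool that could be switched on through the API config. A minimal sketch, where the credentials are obviously placeholders and a real deployment would use properly managed secrets:

# Sketch: HTTP basic auth via CherryPy's built-in auth_basic tool.
import cherrypy
from cherrypy.lib import auth_basic

# Placeholder credentials for illustration only
USERS = {'mluser': 'supersecret'}

api_config = {
    '/api': {
        'request.dispatch': cherrypy.dispatch.MethodDispatcher(),
        'tools.auth_basic.on': True,
        'tools.auth_basic.realm': 'localhost',
        'tools.auth_basic.checkpassword': auth_basic.checkpassword_dict(USERS),
    }
}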
Just briefly looking at our directory structure:
0_buildimageandtag
1_run
Dockerfile
app/
cookies.txt
curltest.sh
docker-compose.yaml
launch/
posttest.sh
requirements.in
requirements.txt
- Using Docker, the project can be built and the app launched with 0_buildimageandtag and 1_run.
- app/ contains all of the code necessary to run the application, except the requirements.
- launch/ is used to hold a couple of different commands which are referenced in the Dockerfile: the entrypoint is a bash script which points to another start bash script; both get copied into the container and are used to launch the CherryPy application.
- curltest.sh and posttest.sh are used to test the Server and to POST data, respectively, with pre-formed data.
- Output from the API request is stored at app/output.
app:
api.py model output
app/model:
model.joblib model_columns.joblib
app/output:
prediction_output.json
launch:
entrypoint start
- A usage guide is provided in the application's README.md.
Future Concerns
- What we did not cover here was anything involving what was referred to in a previous blog post as the Zeus Zonal, the Nemesis Normalization, or the Odysseus Orchestration.
- One of the next logical steps, assuming the Zeus Zonal is already somehow covered and more than one model might be used, would be figuring out a way to build out a simple Odysseus Orchestration: either a text file that keeps track of available models that can be queried as part of the API, or, more advanced than that, a database of said models that can be queried. A rough sketch of the first option follows below.
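As a taste of what that first option might look like, here is a hypothetical sketch of a registry read from a plain JSON file and exposed through an extra endpoint. The registry path and file layout are invented for illustration:

# Hypothetical sketch: a tiny model registry backed by a JSON text file.
import json

import cherrypy

# Invented path and schema, e.g. {"default": "model.joblib"}
REGISTRY_PATH = '/home/app/model/registry.json'

class ModelRegistry(object):
    @cherrypy.expose
    @cherrypy.tools.json_out()
    def models(self):
        # List the model names currently available for querying
        with open(REGISTRY_PATH) as f:
            registry = json.load(f)
        return {"available_models": sorted(registry.keys())}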