Currently, our Model API can be accessed via:
http://109.235.70.27:8501/v1/models/model
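As a sketch, these are the two REST endpoints TensorFlow Serving exposes for a served model: the status URL above (a GET returns version/state metadata for every loaded version) and the `:predict` URL (a POST with a JSON body like `{"instances": [...]}` returns predictions). Host, port, and model name are taken from the note above.

```python
# Sketch: building the TensorFlow Serving REST URLs for our Model API.
HOST = "109.235.70.27"
PORT = 8501
MODEL_NAME = "model"

def status_url(host=HOST, port=PORT, name=MODEL_NAME):
    # GET here -> version/state metadata for every loaded model version.
    return f"http://{host}:{port}/v1/models/{name}"

def predict_url(host=HOST, port=PORT, name=MODEL_NAME):
    # POST a JSON body like {"instances": [...]} here for predictions.
    return f"http://{host}:{port}/v1/models/{name}:predict"

print(status_url())   # http://109.235.70.27:8501/v1/models/model
print(predict_url())  # http://109.235.70.27:8501/v1/models/model:predict
```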
We deploy these models to a Docker container running at 109.235.70.27.
Instead of deploying trained models manually, we want to automate this process.
The good news is that all models are versioned, so TensorFlow Serving will always serve the latest model version.
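To illustrate the versioning behavior: TensorFlow Serving expects each model directory to contain numbered version subdirectories (e.g. `model/1/`, `model/2/`) and picks the highest number. A minimal sketch of that selection logic (directory names here are hypothetical):

```python
import os
import tempfile

def latest_version(model_dir):
    """Return the highest-numbered version subdirectory, mirroring how
    TensorFlow Serving decides which model version to serve."""
    versions = [
        int(d) for d in os.listdir(model_dir)
        if d.isdigit() and os.path.isdir(os.path.join(model_dir, d))
    ]
    if not versions:
        raise ValueError(f"no version directories found in {model_dir}")
    return max(versions)

# Example layout: <root>/1, <root>/2, <root>/10
with tempfile.TemporaryDirectory() as root:
    for v in ("1", "2", "10"):
        os.makedirs(os.path.join(root, v))
    print(latest_version(root))  # prints 10 (numeric order, not lexical)
```

Note the numeric comparison: version 10 beats version 2, which lexical string ordering would get wrong.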
Some questions we have to address:
- Where are these trained models stored?
- How often do new models have to be pushed into production?
- How are they pushed to TensorFlow Serving? (Currently, they are mounted as volumes into the Docker container.)
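For reference, the current volume-mount deployment roughly corresponds to the standard `tensorflow/serving` Docker invocation below. The host path `/opt/models/model` is an assumption; substitute wherever the versioned model directories actually live.

```shell
# Sketch of the current setup: serve the versioned model directory by
# mounting it into the official tensorflow/serving container.
# /opt/models/model is an assumed host path containing 1/, 2/, ... subdirs.
docker run -d --rm \
  -p 8501:8501 \
  -v /opt/models/model:/models/model \
  -e MODEL_NAME=model \
  tensorflow/serving
```

Because Serving watches the mounted directory, copying a new version subdirectory (e.g. `3/`) into the host path is enough for it to pick up and serve the new version, which is the hook an automated pipeline could use.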