
Deploying Deep Learning Models with Model Server

Too Long; Didn't Read

A model server is a web server that hosts a deep learning model and exposes it over standard network protocols, so it can be accessed from any device connected to the same network. In this writeup, we explore the part of deployment that deals with hosting a deep learning model and making it available across the web for inference: the model server. Our examples work with images and cover two ways of communicating with the server: a REST API (request-response) and a gRPC API. We will first learn how to build our own model server, and then explore the Triton Inference Server (by Nvidia).
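
To make the idea concrete, here is a minimal sketch of a home-grown REST model server built with FastAPI. It assumes a TorchScript image classifier saved as model.pt; the /predict endpoint name and the preprocessing pipeline are illustrative assumptions, not part of the original article.

```python
# Minimal sketch of a REST model server (assumes a TorchScript
# classifier saved as "model.pt"; endpoint name is illustrative).
import io

import torch
from fastapi import FastAPI, File, UploadFile
from PIL import Image
from torchvision import transforms

app = FastAPI()

# Load the model once at startup so every request reuses it.
model = torch.jit.load("model.pt")
model.eval()

# ImageNet-style preprocessing (an assumption about the model).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    # Decode the uploaded bytes into an RGB image.
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        logits = model(batch)
    # Return the top class index; a real service would map it to a label.
    return {"class_id": int(logits.argmax(dim=1).item())}
```

With the server running (for example via `uvicorn server:app`), any client on the network can send an image and receive a prediction; the file name below is hypothetical:

```python
import requests

with open("cat.jpg", "rb") as f:
    resp = requests.post("http://localhost:8000/predict", files={"file": f})
print(resp.json())  # e.g. {"class_id": 281}
```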


By Wobot Intelligence Inc (@wobotai)

Wobot.ai is a Video Intelligence Platform that enables businesses to do more with their existing camera systems.

