DLR

DLR is a compact, common runtime for deep learning models and decision tree models compiled by AWS SageMaker Neo, TVM, or Treelite. DLR uses the TVM runtime, Treelite runtime, NVIDIA TensorRT™, and can include other hardware-specific runtimes. DLR provides unified Python/C++ APIs for loading and running compiled models on various devices. DLR currently supports platforms from Intel, NVIDIA, and ARM, with support for Xilinx, Cadence, and Qualcomm coming soon.

Installation

On x86_64 CPU targets running Linux, you can install the latest release of the DLR package via

pip install dlr

For installation of DLR on GPU targets or non-x86 edge devices, please refer to Releases for prebuilt binaries, or Installing DLR for building DLR from source.

Usage

import dlr
import numpy as np

# Load model.
# /path/to/model is a directory containing the compiled model artifacts (.so, .params, .json)
model = dlr.DLRModel('/path/to/model', 'cpu', 0)

# Prepare some input data. The shape and dtype must match what the model was
# compiled for; vision models typically expect float32 NCHW input.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference.
y = model.run(x)
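For models with more than one input, `run` also accepts a dictionary keyed by input name. A minimal sketch, assuming a single input named `data` (the actual names can be queried from the model):

```
# Feed inputs by name instead of positionally. The input name 'data' is an
# assumption; use whatever get_input_names() reports for your model.
print(model.get_input_names())   # e.g. ['data']
y = model.run({'data': x})
```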

Release compatibility with different versions of TVM

Each release of DLR is capable of executing models compiled with the same corresponding release of neo-ai/tvm. For example, if you used the release-1.2.0 branch of neo-ai/tvm to compile your model, then you should use the release-1.2.0 branch of neo-ai/neo-ai-dlr to execute the compiled model. Please see DLR Releases for more information.
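Before deploying, you can confirm that the installed DLR release matches the neo-ai/tvm branch used to compile the model. A minimal sketch, assuming the package exposes the standard `__version__` attribute:

```
import dlr

# Should print a version matching the neo-ai/tvm release branch used for
# compilation, e.g. 1.2.0 for models built from release-1.2.0.
print(dlr.__version__)
```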

Documentation

For instructions on using DLR, please refer to Amazon SageMaker Neo – Train Your Machine Learning Models Once, Run Them Anywhere.

Also check out the API documentation.

Call Home Feature

You acknowledge and agree that DLR collects the following metrics to help improve its performance. By default, Amazon will collect and store the following information from your device:

record_type: <enum, internal record status, such as model_loaded, model_>, 
arch: <string, platform architecture, e.g. 64bit>, 
osname: <string, platform OS name, e.g. Linux>, 
uuid: <string, one-way non-identifiable hashed MAC address, e.g. 8fb35b79f7c7aa2f86afbcb231b1ba6e>, 
dist: <string, distribution of OS, e.g. Ubuntu 16.04 xenial>, 
machine: <string, returns the machine type, e.g. x86_64 or i386>, 
model: <string, one-way non-identifiable hashed model name, e.g. 36f613e00f707dbe53a64b1d9625ae7d> 

If you wish to opt-out of this data collection feature, please follow the steps below:

1. Disable through code
  ``` 
  from dlr.counter.phone_home import PhoneHome
  PhoneHome.disable_feature()
  ```
2. Or, create a config file named ccm_config.json inside your DLR target directory path, i.e. python3.6/site-packages/dlr/counter/ccm_config.json, and add the following content to it (see the sketch after this list): ```{ "enable_phone_home" : false }```
3. Restart the DLR application.
4. Validate that this feature is disabled by verifying that this notification is no longer displayed, or programmatically with the following command:
    ```
    from dlr.counter.phone_home import PhoneHome 
    PhoneHome.is_enabled()  # returns False if disabled
    ```
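As an alternative to creating the config file by hand in step 2, the sketch below locates the installed dlr package and writes the opt-out setting. The counter/ccm_config.json path layout is assumed from the step above, not guaranteed:

```
import json
import os

import dlr

# Write {"enable_phone_home": false} next to dlr's counter module so the
# setting is picked up the next time DLR starts.
config_path = os.path.join(os.path.dirname(dlr.__file__), "counter", "ccm_config.json")
with open(config_path, "w") as f:
    json.dump({"enable_phone_home": False}, f)
```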

Examples

We have prepared several examples demonstrating how to use the DLR API on different platforms.

License

This library is licensed under the Apache License Version 2.0.
