Code it
This is a curated resource for programmers and software architects. It is regularly updated with articles, hacks, how-tos, examples, and code.
Curated by nrip

ONNX Standard And Its Significance For Data Scientists


ONNX is an open format built to represent machine learning models. ONNX defines a common set of operators - the building blocks of machine learning and deep learning models - and a common file format to enable AI developers to use models with a variety of frameworks, tools, runtimes, and compilers.
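As a rough illustration of that common file format, here is a minimal sketch (not from the original post) that uses the onnx Python package to load a saved model, validate it against the specification, and print its operator graph. The file name "model.onnx" is a placeholder.

```python
# Minimal sketch: inspect an ONNX model file.
# "model.onnx" is a placeholder path for any exported ONNX model.
import onnx

model = onnx.load("model.onnx")                  # parse the ONNX protobuf file
onnx.checker.check_model(model)                  # validate it against the ONNX spec
print(onnx.helper.printable_graph(model.graph))  # list the operators in the graph
```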

 

It was introduced in September 2017 by Microsoft and Facebook.

 

ONNX decouples models from specific frameworks and hardware architectures. It has quickly emerged as the de facto standard for portability and interoperability between deep learning frameworks.

 

Before ONNX, data scientists found it difficult to choose from the range of available AI frameworks, since the decision tended to lock a project into that framework.

 

Developers may prefer a certain framework at the outset of a project, during the research and development stage, but may require a completely different set of features for production. As a result, organizations are often forced to resort to creative and frequently cumbersome workarounds, including translating models by hand.

 

The ONNX standard aims to bridge this gap and enable AI developers to switch between frameworks to suit the project’s current stage. Frameworks currently supported by ONNX include Caffe, Caffe2, Microsoft Cognitive Toolkit, MXNet, and PyTorch. ONNX also offers connectors for other standard libraries and frameworks.

 

Two use cases where ONNX has been successfully adopted:

  • TensorRT: NVIDIA’s platform for high-performance deep learning inference. It uses ONNX to support a wide range of deep learning frameworks.
  • Qualcomm Snapdragon NPE: The Qualcomm Neural Processing Engine (NPE) SDK adds support for neural network evaluation to mobile devices. While the NPE directly supports only the Caffe, Caffe2, and TensorFlow frameworks, the ONNX format indirectly extends that support to a wider range of frameworks.

 

The ONNX standard helps by allowing a model to be trained in the preferred framework and then run anywhere, including in the cloud. Models from frameworks including TensorFlow, PyTorch, Keras, MATLAB, and SparkML can be exported or converted to the standard ONNX format. Once a model is in the ONNX format, it can run on a variety of platforms and devices.
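As an example of that export step, here is a hedged sketch using PyTorch’s built-in ONNX exporter. The model (an untrained torchvision ResNet-18), the input shape, and the output file name are placeholder assumptions, not details from the post.

```python
# Illustrative sketch: export a PyTorch model to the ONNX format.
# The model, input shape, and file name below are placeholders.
import torch
import torchvision

model = torchvision.models.resnet18().eval()  # any torch.nn.Module works here
dummy_input = torch.randn(1, 3, 224, 224)     # example input used to trace the graph

torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",          # output file in ONNX format
    input_names=["input"],
    output_names=["output"],
    opset_version=12,
)
```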

 

ONNX Runtime is the inference engine for deploying ONNX models to production. Its features include the following (a minimal usage sketch appears after the list):

  • It is written in C++ and provides C, Python, C#, and Java APIs for use in various environments.
  • It can be used on both cloud and edge, and works equally well on Linux, Windows, and macOS.
  • ONNX Runtime supports both deep neural networks (DNNs) and traditional machine learning models. It integrates with accelerators on different hardware platforms, such as NVIDIA GPUs, Intel processors, and DirectML on Windows.
  • ONNX Runtime offers extensive production-grade optimisation, testing, and other improvements.
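Here is that minimal usage sketch with the onnxruntime Python package. It is an assumption-laden example, not code from the post; the file name and the "input" tensor name are carried over from the export sketch above.

```python
# Minimal ONNX Runtime inference sketch; file and input names are assumptions
# carried over from the PyTorch export example above.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("resnet18.onnx", providers=["CPUExecutionProvider"])
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy input batch
outputs = session.run(None, {"input": x})               # None = return all model outputs
print(outputs[0].shape)
```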

 

Access the original, unedited post at https://analyticsindiamag.com/onnx-standard-and-its-significance-for-data-scientists/

 

Access the ONNX website at https://onnx.ai/

 

Start using ONNX: access the GitHub repo at https://github.com/onnx/onnx

 


How to Train and Test an AI Language Translation System

In the previous article, we built a deep learning-based model for automatic translation from English to Russian. In this article, we’ll train and test this model.
 
Here we'll create a Keras tokenizer that will build an internal vocabulary out of the words found in the parallel corpus, use a Jupyter notebook to train and test our model, and try running our model with self-attention enabled.
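For a feel for that tokenization step, here is a small, hedged sketch of the Keras tokenizer API; the two-sentence corpus is a stand-in for the article's parallel corpus, not its actual data.

```python
# Illustrative sketch of building a vocabulary with the Keras tokenizer.
# The tiny corpus below is a placeholder for the article's parallel corpus.
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

english_sentences = ["how are you", "i am fine"]

tokenizer = Tokenizer()                            # builds an internal word index
tokenizer.fit_on_texts(english_sentences)          # learn the vocabulary from the corpus
sequences = tokenizer.texts_to_sequences(english_sentences)
padded = pad_sequences(sequences, padding="post")  # pad to a uniform length

print(tokenizer.word_index)  # e.g. {'how': 1, 'are': 2, 'you': 3, ...}
print(padded)
```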
 
 

Building AI Language Translation with TensorFlow and Keras


Google Translate works so well, it often seems like magic. But it’s not magic — it’s deep learning!

 

In this series of articles, we’ll show you how to use deep learning to create an automatic translation system. The series can be viewed as a step-by-step tutorial that helps you understand and build a neural machine translation system.

 

This series assumes that you are familiar with the concepts of machine learning: model training, supervised learning, neural networks, as well as artificial neurons, layers, and backpropagation.

 

In the previous article, we installed all the tools required to develop an automatic translation system, and defined the development workflow. In this article, we’ll go ahead and build our AI language translation system.

 

We’ll need to write very few lines of code because, for most of the logic, we’ll use Keras-based pre-formatted templates.
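To give a rough idea of what such a Keras model can look like, here is a hedged encoder-decoder sketch. The vocabulary sizes, hidden dimension, and layer choices are illustrative assumptions, not the article's actual template.

```python
# Illustrative encoder-decoder sketch in Keras, not the article's exact code.
# Vocabulary sizes and the hidden dimension below are assumed values.
from tensorflow.keras import layers, Model

num_encoder_tokens = 8000    # assumed source (English) vocabulary size
num_decoder_tokens = 12000   # assumed target (Russian) vocabulary size
latent_dim = 256             # assumed hidden state size

# Encoder: embed source tokens and keep only the final LSTM state.
encoder_inputs = layers.Input(shape=(None,))
enc = layers.Embedding(num_encoder_tokens, latent_dim)(encoder_inputs)
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(enc)

# Decoder: start from the encoder state and predict target tokens step by step.
decoder_inputs = layers.Input(shape=(None,))
dec = layers.Embedding(num_decoder_tokens, latent_dim)(decoder_inputs)
dec = layers.LSTM(latent_dim, return_sequences=True)(dec, initial_state=[state_h, state_c])
decoder_outputs = layers.Dense(num_decoder_tokens, activation="softmax")(dec)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy")
model.summary()
```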

If you'd like to see the final code we end up with, it's available in this Python notebook.

 

Read the article with the code at https://www.codeproject.com/Articles/5299748/Building-AI-Language-Translation-with-TensorFlow-a

 


Tools for Building AI Language Translation Systems


Google Translate works so well, it often seems like magic. But it’s not magic — it’s deep learning!

 

In this series of articles, we’ll show you how to use deep learning to create an automatic translation system. The series can be viewed as a step-by-step tutorial that helps you understand and build a neural machine translation system.

 

This series assumes that you are familiar with the concepts of machine learning: model training, supervised learning, neural networks, as well as artificial neurons, layers, and backpropagation.

 

In this article, we’ll examine the tools we'll need to use to build an AI language translator.

 

Multiple frameworks provide APIs for deep learning (DL). The TensorFlow + Keras combination is by far the most popular, but competing frameworks, such as PyTorch, Caffe, and Theano, are also widely used.

 

These frameworks often take a black-box approach to neural networks (NNs): they perform most of their "magic" without requiring you to code the NN logic yourself. There are other ways to build NNs as well, for instance with deep learning compilers.

 

The following table lists the versions of the Python modules we’ll use. Each module can be pinned to a specific version by appending ==[version] to the package name in a pip command, for instance: "pip install tensorflow==2.0".

 

The code we're writing should work on any operating system, but note that we're using Python 3, so make sure you have it installed. If your system has both Python 2 and Python 3 installed, you'll need to run pip3 instead of pip in the install commands.

 

 

Module       Version
TensorFlow   2.3.1
Keras        2.1.0
numpy        1.18.1
pandas       1.1.3
word2vec     0.11.1
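A quick way to confirm your environment matches the table is to print the installed versions from Python. This is a small sketch assuming the modules above are installed; the word2vec package is left out because it does not reliably expose a version attribute.

```python
# Sanity check: print the installed versions of the pinned modules.
# (word2vec is omitted; it does not reliably expose __version__.)
import tensorflow, keras, numpy, pandas

for module in (tensorflow, keras, numpy, pandas):
    print(module.__name__, module.__version__)
```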

 

Read the entire article for the full list of instructions at https://www.codeproject.com/Articles/5299747/Tools-for-Building-AI-Language-Translation-Systems

 
