Code it
This is a curated resource for programmers and software architects. It is regularly updated with articles, hacks, how-tos, examples, and code.
Curated by nrip
Scooped by nrip

Why Python hasn't taken off on mobile, or in the browser - according to its creator


Python creator Guido van Rossum reveals the strengths and weaknesses of one of the world's most popular programming languages.

 

Mobile app development is one of the key growth fields in which Python hasn't gained any traction, despite the language dominating machine learning, with libraries like NumPy and Google's TensorFlow, as well as the automation of backend services.

 

Python isn't exactly boxed into high-end hardware, but that's where it has gravitated, and it has been left out of mobile and the browser, even though it is popular on the backend of those services, he said.

 

Why? Python simply guzzles too much memory and energy from hardware, he said. For similar reasons, he said, Python probably doesn't have a future in the browser, despite WebAssembly, a standard that is helping developers build more powerful applications for the web.

 

Mobile app development in Python is a "bit of a sore point", said van Rossum in a recent video Q&A for Microsoft Reactor.  

 

"It would be nice if mobile apps could be written in Python. There are actually a few people working on that but CPython has 30 years of history where it's been built for an environment that is a workstation, a desktop or a server and it expects that kind of environment and the users expect that kind of environment," he said.

 

"The people who have managed to cross-compile CPython to run on an Android tablet or even on iOS, they find that it eats up a lot of resources," he said. "Compared to what the mobile operating systems expect, Python is big and slow. It uses a lot of battery charge, so if you're coding in Python you would probably very quickly run down your battery and quickly run out of memory," he said.

 

"Python is a pretty popular language [at the backend]. At Google I worked on projects that were sort of built on Python, although most Google stuff wasn't. At Dropbox, the whole Dropbox server is built on Python. On the other hand, if you look at what runs in the browser, that's the world of JavaScript and unless it translates to JavaScript, you can't run it," van Rossum said. 

 

"I don't mind so much different languages have to have different goals i mean nobody is asking Rust when you can write Rust in the browser; at least that wouldn't seem a useful sort of target for Rust either. Python should focus on the application areas where it's good and for the web that's the backend and for scientific data processing."

 

Watch the Microsoft Reactor Q&A with him at https://www.youtube.com/watch?v=aYbNh3NS7jA

 

 

Read the original article at https://www.zdnet.com/article/python-programming-why-it-hasnt-taken-off-in-the-browser-or-mobile-according-to-its-creator/

 

 

Scooped by nrip

Humans are leading AI systems astray because we can't agree on labeling


Is it a bird? Is it a plane? Asking for a friend's machine-learning code

 

Top datasets used to train AI models and benchmark how the technology has progressed over time are riddled with labeling errors, a study shows.

 

Data is a vital resource in teaching machines how to complete specific tasks, whether that's identifying different species of plants or automatically generating captions. Most neural networks are spoon-fed lots and lots of annotated samples before they can learn common patterns in data.

 

But these labels aren’t always correct; training machines using error-prone datasets can decrease their performance or accuracy. In the aforementioned study, led by MIT, analysts combed through ten popular datasets that have been cited more than 100,000 times in academic papers and found that on average 3.4 per cent of the samples are wrongly labelled.
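To make that performance claim concrete, here is a minimal, hypothetical sketch (not taken from the study) that flips a growing fraction of training labels on scikit-learn's digits dataset and measures how test accuracy suffers; the dataset, the logistic-regression model, and the noise rates are illustrative choices only.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

def accuracy_with_label_noise(noise_rate):
    # Flip `noise_rate` of the training labels to a guaranteed-wrong class, then evaluate on clean test data.
    y_noisy = y_train.copy()
    idx = rng.choice(len(y_noisy), size=int(noise_rate * len(y_noisy)), replace=False)
    y_noisy[idx] = (y_noisy[idx] + rng.integers(1, 10, size=len(idx))) % 10
    model = LogisticRegression(max_iter=5000).fit(X_train, y_noisy)
    return model.score(X_test, y_test)

for rate in (0.0, 0.034, 0.2, 0.5):  # 0.034 mirrors the study's average error rate
    print(f"label noise {rate:.1%}: test accuracy {accuracy_with_label_noise(rate):.3f}")

At a few per cent of flipped labels the drop is usually modest; at higher noise rates it becomes severe, which is the broader risk the study highlights for benchmarks built on noisy labels.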

 

The datasets they looked at range from photographs in ImageNet and sounds in AudioSet to reviews scraped from Amazon and sketches in QuickDraw.

 

Examples of the mistakes compiled by the researchers show that some are clear blunders, such as a drawing of a light bulb tagged as a crocodile; others, however, are not so obvious. Should a picture of a bucket of baseballs be labeled ‘baseballs’ or ‘bucket’?

 

What would happen if a self-driving car were trained on a dataset with frequent label errors that mislabel a three-way intersection as a four-way intersection?

 

Read more at https://www.theregister.com/2021/04/01/mit_ai_accuracy/

 

 

Scooped by nrip

ONNX Standard And Its Significance For Data Scientists


ONNX is an open format built to represent machine learning models. ONNX defines a common set of operators - the building blocks of machine learning and deep learning models - and a common file format to enable AI developers to use models with a variety of frameworks, tools, runtimes, and compilers.
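As a quick, hypothetical illustration of that common file format, the onnx Python package can load a model file, validate it against the spec, and list the operators in its graph; the file name model.onnx below is a placeholder, not a file from the article.

import onnx

model = onnx.load("model.onnx")                      # parse the ONNX protobuf file
onnx.checker.check_model(model)                      # validate the model against the ONNX spec
print(onnx.helper.printable_graph(model.graph))      # human-readable view of the graph
print({node.op_type for node in model.graph.node})   # the set of operators the model uses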

 

It was introduced in September 2017 by Microsoft and Facebook.

 

ONNX breaks the dependence between frameworks and hardware architectures. It has very quickly emerged as the default standard for portability and interoperability between deep learning frameworks.

 

Before ONNX, data scientists found it difficult to choose from the range of AI frameworks available.

 

Developers may prefer a certain framework at the outset of the project, during the research and development stage, but may require a completely different set of features for production. Thus organizations are forced to resort to creative and often cumbersome workarounds, including translating models by hand.

 

The ONNX standard aims to bridge this gap and enable AI developers to switch between frameworks based on the project’s current stage. The frameworks currently supported by ONNX include Caffe, Caffe2, Microsoft Cognitive Toolkit, MXNet, and PyTorch. ONNX also offers connectors for other standard libraries and frameworks.

 

Two use cases where ONNX has been successfully adopted include:

  • TensorRT: NVIDIA’s platform for high performance deep learning inference. It utilises ONNX to support a wide range of deep learning frameworks.
  • Qualcomm Snapdragon NPE: The Qualcomm neural processing engine (NPE) SDK adds support for neural network evaluation to mobile devices. While NPE directly supports only Caffe, Caffe 2 and TensorFlow frameworks, ONNX format helps in indirectly supporting a wider range of frameworks.

 

The ONNX standard helps by allowing a model to be trained in the preferred framework and then run anywhere on the cloud. Models from frameworks including TensorFlow, PyTorch, Keras, MATLAB, and SparkML can be exported and converted to the standard ONNX format. Once a model is in the ONNX format, it can run on different platforms and devices.
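As one example of that export step, a PyTorch model can be saved to the ONNX format roughly as sketched below; the tiny model, input shape, and file name are placeholders rather than anything from the post.

import torch
import torch.nn as nn

# Any trained torch.nn.Module will do; this placeholder model is untrained.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

dummy_input = torch.randn(1, 4)   # example input that fixes the exported graph's shapes
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",                 # the resulting standard ONNX file
    input_names=["input"],
    output_names=["output"],
)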

 

ONNX Runtime is the inference engine for deploying ONNX models to production; a brief usage sketch follows the feature list below. The features include:

  • It is written in C++ and has C, Python, C#, and Java APIs to be used in various environments.
  • It can be used on both cloud and edge and works equally well on Linux, Windows, and Mac.
  • ONNX Runtime supports DNN and traditional machine learning. It can integrate with accelerators on different hardware platforms such as NVIDIA GPUs, Intel processors, and DirectML on Windows.
  • ONNX Runtime offers extensive production-grade optimisation, testing, and other improvements.
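As referenced above, a minimal usage sketch with the onnxruntime Python package might look like the following, assuming the placeholder model.onnx exported earlier; the input name and shape are illustrative.

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")       # loads the model with the default (CPU) provider
input_name = session.get_inputs()[0].name          # "input" if exported as in the sketch above
sample = np.random.randn(1, 4).astype(np.float32)  # shape must match the exported model
outputs = session.run(None, {input_name: sample})  # None -> return all model outputs
print(outputs[0])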

 

Access the original, unedited post at https://analyticsindiamag.com/onnx-standard-and-its-significance-for-data-scientists/

 

Access the ONNX website at https://onnx.ai/

 

Start using ONNX: access the GitHub repo at https://github.com/onnx/onnx

 

Scooped by nrip

How to build your own Neural Network from scratch in Python

Most introductory texts on Neural Networks bring up brain analogies when describing them. Without delving into brain analogies, I find it easier to simply describe Neural Networks as a mathematical function that maps a given input to a desired output.

Neural Networks consist of the following components:

 

  • An input layer, x
  • An arbitrary amount of hidden layers
  • An output layer, ŷ
  • A set of weights and biases between each layer, W and b
  • A choice of activation function for each hidden layer, σ. In this tutorial, we’ll use a Sigmoid activation function.

 

The diagram in the original article shows the architecture of a 2-layer Neural Network (note that the input layer is typically excluded when counting the number of layers in a Neural Network).
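The full implementation is in the article itself; the sketch below is a condensed version in the same spirit (two layers, Sigmoid activations, a sum-of-squares loss), not the article's exact code. The hidden-layer size, the XOR-like toy data, and the fixed 1,500-iteration loop are illustrative choices, and biases are omitted for brevity.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(s):
    # derivative of the Sigmoid, expressed in terms of its output s = sigmoid(x)
    return s * (1.0 - s)

class NeuralNetwork:
    def __init__(self, x, y, hidden_units=4):
        self.input = x
        self.weights1 = np.random.rand(x.shape[1], hidden_units)  # W between input and hidden layer
        self.weights2 = np.random.rand(hidden_units, 1)           # W between hidden and output layer
        self.y = y
        self.output = np.zeros(y.shape)

    def feedforward(self):
        self.layer1 = sigmoid(np.dot(self.input, self.weights1))
        self.output = sigmoid(np.dot(self.layer1, self.weights2))

    def backprop(self):
        # descent direction (negative gradient) of the sum-of-squares loss, via the chain rule
        error = 2 * (self.y - self.output) * sigmoid_derivative(self.output)
        d_weights2 = np.dot(self.layer1.T, error)
        d_weights1 = np.dot(self.input.T, np.dot(error, self.weights2.T) * sigmoid_derivative(self.layer1))
        self.weights1 += d_weights1   # gradient-descent step with a step size of 1
        self.weights2 += d_weights2

if __name__ == "__main__":
    X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    nn = NeuralNetwork(X, y)
    for _ in range(1500):
        nn.feedforward()
        nn.backprop()
    print(nn.output)  # predictions should approach [0, 1, 1, 0]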

 

Read the rest of this article, with code examples, at https://towardsdatascience.com/how-to-build-your-own-neural-network-from-scratch-in-python-68998a08e4f6

 
