Cloudflare, the network that underpins a sizeable chunk of the Internet, and GPU and chip giant NVIDIA have announced a collaboration aimed at deploying artificial intelligence (AI) at the edge.
The ‘AI at the edge’ partnership looks to combine Cloudflare’s edge network with NVIDIA’s GPU computing. The result will be a ‘massive platform on which developers can deploy applications that use pre-trained or custom machine learning models in seconds.’
The companies outline the rationale for the partnership thus:
“Today’s applications use AI for a variety of tasks from translating text on webpages to object recognition in images, making machine learning models a critical part of application development,” the companies wrote. “Users expect this functionality to be fast and reliable, while developers want to keep proprietary machine learning models reliable and secure.
“Cloudflare is seamlessly solving for their security, performance, and reliability needs while NVIDIA provides developers with a broad range of AI-powered application frameworks.”
These frameworks include Jarvis, for natural language processing; Clara, for healthcare and life sciences; and Morpheus, for cybersecurity.
Developers will be able to leverage the TensorFlow platform to build and test machine learning models, before deploying them globally onto Cloudflare’s edge network. The partnership will build upon Cloudflare’s current internal machine learning needs, such as business intelligence and bot detection, and package it for any developer to use.
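The build-then-deploy workflow described above can be sketched in TensorFlow. This is a minimal, illustrative example, not code from either company: the model architecture, file path, and smoke-test data are all assumptions, standing in for whatever model a developer would actually train before pushing it to the edge.

```python
# Minimal sketch of the "build and test, then package for deployment"
# workflow. The tiny model below is purely illustrative.
import numpy as np
import tensorflow as tf

# Build a small classifier (layer sizes are arbitrary placeholders).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Smoke-test the model on dummy data before exporting it.
x = np.random.rand(2, 4).astype("float32")
probs = model.predict(x, verbose=0)
assert probs.shape == (2, 3)

# Save the trained model as a portable artifact -- the kind of file a
# developer would then upload to an edge platform for serving.
model.save("/tmp/edge_model.keras")
```

The edge platform itself would handle serving the saved artifact; the developer's job ends at producing a tested, exportable model.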
“Cloudflare Workers is one of the fastest and most widely adopted edge computing products with security built into its DNA,” said Matthew Prince, co-founder and CEO of Cloudflare in a statement. “Now, working with NVIDIA, we will be bringing developers powerful artificial intelligence tools to build the applications that will power the future.”
Machine learning is becoming an increasingly compelling workload to move to the network edge. The theory, as Cloudflare and NVIDIA put it, is that machine learning models can be served with millisecond latency. Previously, such models were deployed on expensive centralised servers, or via cloud services that limited them to ‘regions’.
Other companies are taking a different approach to machine learning. Speaking to this publication in February, Zach Shelby, CEO of Edge Impulse, noted that, while a relative newcomer to ML, his expertise in embedded engineering and the corresponding growth of the Internet of Things (IoT) led to a light-bulb moment: techniques from the cloud ML world could be harnessed at microcontroller scale.
“Almost all algorithms today are hand-coded – trial and error,” he told Edge Computing News. “They’re using data as a kind of testing facility, but not as a way to really drive what’s being designed. What machine learning does is flip the entire equation on its head, and now you have the ability to use data to drive design. That’s powerful.”
Want to learn more about topics like this from thought leaders in the space? Find out more about the Edge Computing Expo, a brand new, innovative event and conference exploring the edge computing ecosystem.