
Fujitsu Announces Improved GPU Memory Efficiency for Deep Neural Networks

Oct. 13, 2016
Fujitsu announces development of a GPU memory technology that will enable more layers in a deep neural network without compromising training speed.

Deep neural network (DNN) technologies have become an advanced tool for computers to identify the content of images, decipher audio recordings, and analyze other complex inputs. A DNN consists of many layers of interconnected nodes. Each node processes a piece of the content it receives and generates interpretations that are sent to nodes in the next layer for further processing, and this continues layer by layer through the network.
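To make that flow concrete, here is a minimal sketch of such a forward pass, written by us in NumPy rather than taken from Fujitsu; the fully connected layers, their sizes, and the ReLU activation are all illustrative assumptions.

```python
import numpy as np

def forward(x, weights):
    """Send an input through the network layer by layer. Each layer
    transforms the interpretations it receives and hands the result
    to the nodes of the next layer."""
    for W in weights:
        x = np.maximum(0, W @ x)  # ReLU: keep only positive responses
    return x

# Illustrative network: a 784-value input mapped through two hidden
# layers of 256 nodes each down to 10 output values.
rng = np.random.default_rng(0)
sizes = [784, 256, 256, 10]
weights = [rng.standard_normal((n_out, n_in)) * 0.01
           for n_in, n_out in zip(sizes, sizes[1:])]
output = forward(rng.standard_normal(784), weights)
```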

After an input has been processed through the network, the output is compared to the desired output and the computer calculates an error. This error is fed back through the network so that each node's interpretation can be reweighted: based on the error, some interpretations count more heavily toward the final output than others. The process may repeat for thousands of iterations until the input is interpreted with minimal error. This is how the machine learns.
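That feedback loop can also be sketched in code. The snippet below is an illustrative NumPy implementation of the idea, again ours rather than Fujitsu's; it redefines a small forward pass so it stands alone, and the learning rate and network sizes are arbitrary.

```python
import numpy as np

def forward(x, weights):
    """Forward pass that keeps every layer's output, since the
    error feedback below needs them."""
    acts = [x]
    for W in weights:
        acts.append(np.maximum(0, W @ acts[-1]))
    return acts

def train_step(x, target, weights, lr=0.01):
    """One iteration: compare the output to the desired output,
    feed the error back layer by layer, and reweight connections."""
    acts = forward(x, weights)
    err = acts[-1] - target                 # the error reading
    for i in reversed(range(len(weights))):
        grad = np.outer(err, acts[i])       # each weight's share of the error
        err = (weights[i].T @ err) * (acts[i] > 0)  # pass error to layer below
        weights[i] -= lr * grad             # reweight this layer's interpretations
    return float(np.sum((acts[-1] - target) ** 2))

rng = np.random.default_rng(0)
sizes = [784, 256, 10]
weights = [rng.standard_normal((n_out, n_in)) * 0.01
           for n_in, n_out in zip(sizes, sizes[1:])]
x, target = rng.standard_normal(784), np.eye(10)[3]

for _ in range(1000):   # iterate until the error is small
    loss = train_step(x, target, weights)
```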

Graphics processing units (GPUs) are generally used to train DNNs because their massively parallel architecture lets them process all of the nodes in a layer at once, whereas a central processing unit (CPU) is built for serial processing and works through data a few operations at a time. The trade-off is memory: the GPU must remember the weights and intermediate data associated with every layer so that errors can be fed back through the network. As layers are added, that stored data grows, and once it no longer fits in the GPU's onboard memory, data must be shuttled to and from slower host memory, reducing processing speed.
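A back-of-the-envelope calculation shows how quickly depth strains memory. Every number below (batch size, layer widths, 32-bit values) is an illustrative assumption, not a Fujitsu figure:

```python
def training_memory_bytes(layer_sizes, batch=256, bytes_per_value=4):
    """Rough training footprint: the weights, an equal-sized set of
    weight errors, and the per-layer data that must be remembered
    until the error has been fed back through the network."""
    weight_values = sum(n * o for n, o in zip(layer_sizes, layer_sizes[1:]))
    data_values = batch * sum(layer_sizes)  # one set of stored data per layer
    return (2 * weight_values + data_values) * bytes_per_value

shallow = [4096] * 8    # 8 layers of 4,096 nodes each
deep = [4096] * 64      # 64 layers of 4,096 nodes each

for name, net in (("shallow", shallow), ("deep", deep)):
    print(f"{name}: {training_memory_bytes(net) / 2**30:.1f} GiB")
```

On these assumptions the deep network needs roughly nine times the memory of the shallow one, enough that it can outgrow the onboard memory of a typical GPU.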

With the introduction of a new memory system, Fujitsu announces development of GPU technology that enables more layers in a DNN without compromising its speed. Adding more layers improves the overall accuracy and learning capacity of the network. At each layer, the GPU compares the weights of nodal connections to a “weight error” calculated at the end of each iteration and simultaneously compares the data stored at that layer to the “data error” calculated by the GPU. By subtracting the errors from the existing weights and data, the GPU can then delete the excess error data stored at each layer. This frees up memory so that the GPU can operate faster, storing only the data that is necessary.
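Fujitsu has not published implementation details, so the following is only a schematic sketch of the bookkeeping described above; the function name, list structure, and direct subtraction are all our assumptions.

```python
import numpy as np

def fold_in_errors(weights, layer_data, weight_errors, data_errors):
    """Hypothetical sketch of the reuse idea: subtract each layer's
    computed errors from its stored weights and data, then drop the
    error buffers so the memory they held can serve other layers."""
    for i in range(len(weights)):
        weights[i] -= weight_errors[i]   # fold the weight error in
        layer_data[i] -= data_errors[i]  # fold the data error in
        weight_errors[i] = None          # buffers no longer needed:
        data_errors[i] = None            # their memory can be reused

# Toy usage with a single 4-node layer of random values
rng = np.random.default_rng(0)
W, D = [rng.standard_normal((4, 4))], [rng.standard_normal(4)]
We, De = [rng.standard_normal((4, 4))], [rng.standard_normal(4)]
fold_in_errors(W, D, We, De)
```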

Diagram courtesy of Fujitsu.

The new memory system was tested in Caffe, an open-source deep-learning framework, using AlexNet and VGGNet, two network architectures common in DNN research. Fujitsu reports that the new system reduced memory usage by 40%, nearly doubling the learning capacity of a network trained on a single GPU. The company plans to release the technology in March 2017 for use in its Human Centric AI Zinrai platform.
