[PDF] SyNERGY: An energy measurement and prediction framework for Convolutional Neural Networks on Jetson TX1 | Semantic Scholar (2024)

Figures and Tables from this paper

  • figure 1
  • table 1
  • figure 2
  • table 2
  • figure 3
  • figure 4
  • table 4
  • figure 5
  • table 5
  • figure 6

Topics

Single Instruction Multiple Data, Caffe, TensorFlow, Deep Convolutional Neural Networks, Graphical Processing Units, Torch, Convolutional Neural Network, Neural Network, Multiply-accumulate, Hardware Resources

29 Citations

Energy-Efficient Use of an Embedded Heterogeneous SoC for the Inference of CNNs
    Agathe Archet, Nicolas Ventroux, Nicolas Gac, François Orieux

    Computer Science, Engineering

    2023 26th Euromicro Conference on Digital System…

  • 2023

This paper studies deep neural network design and inference options for each accelerator in heterogeneous systems-on-chip, and forms guidelines for making the best use of the computing and energy-efficiency capabilities published by manufacturers with the default TensorRT mapping.

Profiling Energy Consumption of Deep Neural Networks on NVIDIA Jetson Nano
    Stephan Holly, Alexander Wendt, Martin Lechner

    Computer Science, Engineering

    2020 11th International Green and Sustainable…

  • 2020

This work provides a measurement base for power estimation on NVIDIA Jetson devices, and analyzes the effects of different CPU and GPU settings on power consumption, latency, and energy for complete DNNs as well as for individual layers.

  • Cited by 14
Evaluating and analyzing the energy efficiency of CNN inference on high‐performance GPU
    Chunrong Yao, Wantao Liu, Wei Jiang

    Computer Science, Engineering

    Concurrency and Computation: Practice and Experience

  • 2021

This paper conducts a comprehensive study on the model‐level and layer‐level energy efficiency of popular CNN models and proposes a revenue model to allow an optimal trade‐off between energy efficiency and latency.

  • Cited by 17
Energy Predictive Models for Convolutional Neural Networks on Mobile Platforms
    Crefeda Faviola Rodrigues, G. Riley, M. Luján

    Computer Science, Engineering

    ArXiv

  • 2020

This work provides a comprehensive analysis of building regression-based predictive models for deep learning on mobile devices, based on empirical measurements gathered from the SyNERGY framework, and shows that simple layer-type features achieve a model complexity of 4 to 32 times less for convolutional layer predictions for a similar accuracy compared to predictive models using more complex features adopted by previous approaches.
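The regression approach summarized above, predicting per-layer energy from simple features such as multiply-accumulate (MAC) counts, can be sketched as a least-squares fit. The MAC counts and energy readings below are invented placeholders for illustration, not values from the SyNERGY measurements:

```python
import numpy as np

# Hypothetical per-convolutional-layer features: MAC count (millions)
# and measured energy (millijoules). Values are invented for illustration.
macs = np.array([105.0, 223.0, 149.0, 112.0, 74.0])
energy_mj = np.array([52.0, 110.0, 75.0, 57.0, 38.0])

# Fit energy ~ a * MACs + b with ordinary least squares.
A = np.column_stack([macs, np.ones_like(macs)])
(a, b), *_ = np.linalg.lstsq(A, energy_mj, rcond=None)

def predict_energy(mac_millions: float) -> float:
    """Predict a layer's energy (mJ) from its MAC count (millions)."""
    return a * mac_millions + b

print(round(predict_energy(100.0), 1))
```

A single MAC-count feature like this is the "simple layer-type feature" end of the spectrum; richer models would add per-layer attributes (filter sizes, input dimensions) at the cost of model complexity.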

Measuring the Energy Consumption and Efficiency of Deep Neural Networks: An Empirical Analysis and Design Recommendations
    Charles Edison Tripp, J. Perr-Sauer, Erik A. Bensen

    Computer Science, Engineering

    ArXiv

  • 2024

This work introduces the BUTTER-E dataset, an augmentation to the BUTTER Empirical Deep Learning dataset, containing energy consumption and performance data from 63,527 individual experimental runs spanning 30,582 distinct configurations, and proposes a straightforward and effective energy model that accounts for network size, computing, and memory hierarchy.

Characterizing the Deployment of Deep Neural Networks on Commercial Edge Devices
    Ramyad Hadidi, Jiashen Cao, Yilun Xie, Bahar Asgari, T. Krishna, Hyesoon Kim

    Computer Science, Engineering

    2019 IEEE International Symposium on Workload…

  • 2019

This paper characterizes several commercial edge devices on popular frameworks using well-known convolution neural networks (CNNs), a type of DNN, and analyzes the impact of frameworks, their software stack, and their implemented optimizations on the final performance.

  • Cited by 90
  • Highly Influenced
Compute and Energy Consumption Trends in Deep Learning Inference
    Radosvet Desislavov, Fernando Martínez-Plumed, José Hernández-Orallo

    Computer Science, Engineering

    Sustainable Computing: Informatics and Systems

  • 2023
A Transistor Operations Model for Deep Learning Energy Consumption Scaling Law
    Chen Li, A. Tsourdos, Weisi Guo

    Engineering, Computer Science

    IEEE Transactions on Artificial Intelligence

  • 2024

This article is the first to develop a bottom-up transistor operations (TOs) approach to expose the role of nonlinear activation functions and neural network structure and statistically model the energy scaling laws as opposed to absolute consumption values.

EnergySense: A Fine-Grained Energy Analysis Framework for DNN Processing with Low-Power Ubiquitous Sensors
    Jiaju Ren, Zhiwen Yu, Tao Xing, Helei Cui, Yaxing Chen, Bin Guo

    Computer Science, Engineering

    2023 IEEE Smart World Congress (SWC)

  • 2023

This paper reports the design and implementation of EnergySense, an energy-efficient scheduling framework that improves the computing power of low-power ubiquitous sensors and extends the life cycle of feature-map extraction.

Multi-band sub-GHz technology recognition on NVIDIA’s Jetson Nano
    Jaron Fontaine, A. Shahid, Robbe Elsas, Amina Seferagić, I. Moerman, E. D. Poorter

    Computer Science, Engineering

    2020 IEEE 92nd Vehicular Technology Conference…

  • 2020

This work proposes a deep learning solution using convolutional neural networks, cheap software-defined radios, and efficient embedded platforms such as NVIDIA's Jetson Nano to enable smart spectrum management without the need for expensive, power-hungry hardware.

  • Cited by 10

...

...

21 References

Fine-grained energy profiling for deep convolutional neural networks on the Jetson TX1
    Crefeda Faviola Rodrigues, G. Riley, M. Luján

    Computer Science, Engineering

    2017 IEEE International Symposium on Workload…

  • 2017

This work presents a novel evaluation framework for measuring energy and performance of deep neural networks using ARM's Streamline Performance Analyser integrated with standard deep learning frameworks such as Caffe and cuDNNv5.

Performance Analysis of GPU-Based Convolutional Neural Networks
    Xiaqing Li, Guangyan Zhang, H. Howie Huang, Zhufan Wang, Weimin Zheng

    Computer Science

    2016 45th International Conference on Parallel…

  • 2016

This paper conducts a comprehensive comparison of these convolutional neural network implementations over a wide range of parameter configurations, investigates potential performance bottlenecks, and points out a number of opportunities for further optimization.

  • Cited by 116
Designing Energy-Efficient Convolutional Neural Networks Using Energy-Aware Pruning
    Tien-Ju Yang, Yu-hsin Chen, V. Sze

    Computer Science, Engineering

    2017 IEEE Conference on Computer Vision and…

  • 2017

This work proposes an energy-aware pruning algorithm for CNNs that directly uses the energy consumption of a CNN to guide the pruning process, and shows that reducing the number of target classes in AlexNet greatly decreases the number of weights but has a limited impact on energy consumption.
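The core idea of energy-aware pruning, directing pruning effort at the layers estimated to consume the most energy rather than the ones with the most weights, might be sketched as follows. The per-layer MAC counts and energy-per-MAC figures are invented placeholders, not the paper's measured values:

```python
# Hypothetical per-layer stats: (name, MACs in millions, mJ per million MACs).
layers = [("conv1", 105.0, 0.6), ("conv2", 223.0, 0.5), ("conv3", 149.0, 0.7)]

def estimated_energy(macs: float, mj_per_mmac: float) -> float:
    """Estimate a layer's energy cost from its workload and per-MAC cost."""
    return macs * mj_per_mmac

# Rank layers by estimated energy, most expensive first, so pruning targets
# the layers where weight removal saves the most energy.
order = sorted(layers, key=lambda l: estimated_energy(l[1], l[2]), reverse=True)
print([name for name, *_ in order])  # -> ['conv2', 'conv3', 'conv1']
```

Note that under this ordering conv3 outranks conv1 despite having fewer MACs per weight budget than conv2, because its assumed per-MAC energy is higher; that decoupling of weight count from energy cost is the point of the approach.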

Caffe: Convolutional Architecture for Fast Feature Embedding
    Yangqing Jia, Evan Shelhamer, Trevor Darrell

    Computer Science

    ACM Multimedia

  • 2014

Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.

Fathom: reference workloads for modern deep learning methods
    Robert Adolf, Saketh Rama, Brandon Reagen, Gu-Yeon Wei, D. Brooks

    Computer Science, Engineering

    2016 IEEE International Symposium on Workload…

  • 2016

This paper assembles Fathom: a collection of eight archetypal deep learning workloads, ranging from the familiar deep convolutional neural network of Krizhevsky et al., to the more exotic memory networks from Facebook's AI research group, and focuses on understanding the fundamental performance characteristics of each model.

An Analysis of Deep Neural Network Models for Practical Applications
    A. Canziani, Adam Paszke, E. Culurciello

    Computer Science

    ArXiv

  • 2016

This work presents a comprehensive analysis of metrics important in practical applications (accuracy, memory footprint, parameter count, operation count, inference time, and power consumption), providing a compelling set of information to help design and engineer efficient DNNs.

  • Cited by 1,088
  • Highly Influential
Embedded Deep Neural Network Processing: Algorithmic and Processor Techniques Bring Deep Learning to IoT and Edge Devices
    M. Verhelst, Bert Moons

    Computer Science, Engineering

    IEEE Solid-State Circuits Magazine

  • 2017

Running powerful but large deep neural networks within power budgets in the milliwatt or even microwatt range requires a significant improvement in processing energy efficiency.

  • Cited by 115
  • Highly Influential
BenchIP: Benchmarking Intelligence Processors
    Jinhua Tao, Zidong Du, Tianshi Chen

    Computer Science, Engineering

    Journal of Computer Science and Technology

  • 2018

This paper proposes BenchIP, a benchmark suite and benchmarking methodology for intelligence processors that is utilized for evaluating various hardware platforms, including CPUs, GPUs, and accelerators and will be open-sourced soon.

Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding
    Song Han, Huizi Mao, W. Dally

    Computer Science, Engineering

    ICLR

  • 2016

This work introduces "deep compression", a three-stage pipeline of pruning, trained quantization, and Huffman coding that works together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy.
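The first two stages of the pipeline can be sketched in a few lines of NumPy. This is an illustrative sketch only: it uses simple uniform binning in place of the paper's k-means weight sharing, and the threshold and layer size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 1.0, size=(64, 64))  # a toy layer's weight matrix

# Stage 1 (pruning): zero out weights below a magnitude threshold,
# here keeping only the top 10% of weights by absolute value.
threshold = np.quantile(np.abs(weights), 0.9)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

# Stage 2 (quantization): map surviving weights to a small shared codebook,
# so each weight can be stored as a 4-bit index instead of a float.
nonzero = pruned[pruned != 0.0]
edges = np.linspace(nonzero.min(), nonzero.max(), 17)  # 16 bins
codes = np.clip(np.digitize(nonzero, edges) - 1, 0, 15)
codebook = np.array([nonzero[codes == c].mean() if (codes == c).any() else 0.0
                     for c in range(16)])

sparsity = 1.0 - np.count_nonzero(pruned) / pruned.size
print(round(sparsity, 2))
```

Stage 3 (Huffman coding) would then entropy-code the index stream, exploiting the skewed distribution of codebook indices; in the paper the three stages compound to the reported 35x to 49x storage reduction.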

Exploring the Design Space of Deep Convolutional Neural Networks at Large Scale
    F. Iandola

    Computer Science, Engineering

    ArXiv

  • 2016

This dissertation develops an effective methodology that enables systematic exploration of the design space of CNNs and discovery of the "right" CNN architectures to meet the needs of practical applications.

  • Cited by 16
  • Highly Influential

...

...

