Showing posts with label Computing. Show all posts

Oct 31, 2023

[paper] Analog System Synthesis for Reconfigurable Computing

Afolabi Ige, Linhao Yang, Hang Yang, Jennifer Hasler, and Cong Hao
Analog System High-Level Synthesis for Energy-Efficient Reconfigurable Computing
J. Low Power Electron. Appl. 2023, 13, 58. 
DOI: 10.3390/jlpea13040058

* Electrical and Computer Engineering (ECE), Georgia Institute of Technology (USA)

Abstract: The design of analog computing systems requires significant human resources and domain expertise due to the lack of automation tools for these highly energy-efficient, high-performance computing nodes. This work presents the first automated tool flow from a high-level representation to a reconfigurable physical device. The tool begins with a high-level algorithmic description, in either our custom Python framework or the XCOS GUI, and compiles and optimizes the computation for integration into an Integrated Circuit (IC) design or a Field Programmable Analog Array (FPAA). The tool is demonstrated on an energy-efficient embedded speech classifier benchmark, automatically generating either a GDSII layout or an FPAA switch list.

Figure: The analog synthesis tool flow to generate a design on a large-scale Field Programmable Analog Array (FPAA) or an Application-Specific Integrated Circuit (ASIC). A single user-supplied high-level description goes through multiple lowering steps to reach the targeted output, either GDSII or a switch list. For targeting an FPAA, a design can be specified either through the GUI in XCOS (a pre-existing flow) or through the new text-based Python flow. Users construct circuits and systems using class objects provided in the Python cell library, which mirror the palette browser in the XCOS library, and the description is then lowered into Verilog syntax. The FPAA path lowers to a BLIF netlist, feeding our pre-existing flow that compiles a switch list to target the FPAA. For targeting an ASIC, users perform similar steps to construct a system from Python objects, with cells made available in the provided library. Those Python objects are converted to a Verilog netlist before being fed to the layout synthesis modules, which handle placement and global routing. These serve as inputs to the open-source detailed router (TritonRoute), which converts the route guides into paths; the routed paths are merged with the placement file to create the final output layout file.
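The Python path in the figure can be illustrated with a toy sketch: class objects standing in for cell-library blocks are assembled into a system and lowered to a Verilog-style netlist. All class names, cell names, and the API below are hypothetical stand-ins for illustration, not the paper's actual framework.

```python
# Hypothetical sketch of a text-based cell-library flow: Python objects
# describing an analog system are lowered to a Verilog-style netlist string.
# Class names, cell names ("c4_bpf", "amp_detect"), and ports are illustrative.

class Cell:
    """A generic analog cell with named ports, mirroring a palette-browser block."""
    def __init__(self, name, ports):
        self.name = name
        self.ports = ports          # ordered port names, e.g. ["in", "out"]
        self.connections = {}       # port name -> net name

    def connect(self, port, net):
        self.connections[port] = net

class System:
    """Collects cell instances and emits a flat Verilog-style netlist."""
    def __init__(self, top):
        self.top = top
        self.cells = []

    def add(self, cell):
        self.cells.append(cell)
        return cell

    def to_verilog(self):
        lines = [f"module {self.top}();"]
        for i, c in enumerate(self.cells):
            conns = ", ".join(f".{p}({c.connections.get(p, '')})" for p in c.ports)
            lines.append(f"  {c.name} u{i} ({conns});")
        lines.append("endmodule")
        return "\n".join(lines)

# Build a two-stage chain: a band-pass filter feeding an amplitude detector,
# the kind of front-end stage a speech classifier might use.
sys_ = System("speech_frontend")
bpf = sys_.add(Cell("c4_bpf", ["in", "out"]))
det = sys_.add(Cell("amp_detect", ["in", "out"]))
bpf.connect("in", "mic");  bpf.connect("out", "n1")
det.connect("in", "n1");   det.connect("out", "feat0")
print(sys_.to_verilog())
```

From a netlist in this form, a backend (FPAA switch-list compiler or ASIC place-and-route) can take over, as the figure describes.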

Funding: Partial funding for the development of this effort came from NSF (2212179).

Jul 19, 2023

[paper] Artificial Synapse

Md. Hasan Raza Ansari, Udaya Mohanan Kannan, and Nazek El-Atab
Silicon Nanowire Charge Trapping Memory for Energy-Efficient Neuromorphic Computing
IEEE Transactions on Nanotechnology (2023)
DOI: 10.1109/TNANO.2023.3296673

SAMA Labs, CEMSE Division, KAUST, Thuwal 23955-6900, Saudi Arabia
Department of Electronic Engineering, Gachon University, Seongnam 13120, Korea

Abstract: This work highlights the use of the floating-body effect and the charge-trapping/de-trapping phenomenon of a silicon-nanowire (Si-nanowire) charge-trapping memory as an artificial synapse for neuromorphic computing applications. Charge trapping/de-trapping in the nitride layer produces long-term potentiation (LTP)/depression (LTD). The accumulation of holes in the potential well achieves short-term potentiation (STP) and controls the transition from STP to LTP. The transition from STP to LTP is also analyzed through gate-length scaling and a high-κ material (Al2O3) for the blocking oxide. Furthermore, the conductance values of the device are used for system-level simulation. System-level hardware parameters of a convolutional neural network (CNN) for inference applications are evaluated and compared to a static random-access memory (SRAM) device and a charge-trapping memory. The results confirm that the Si-nanowire transistor, with better gate controllability, has a high retention time for LTP states, consumes low power, and achieves better accuracy (91.27%). These results make the device suitable for low-power neuromorphic applications.


FIG: Schematic representation of biological and Si-nanowire charge trapping memory as an artificial synapse
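The LTP/LTD behaviour the abstract describes can be mimicked with a generic nonlinear conductance-update model of the kind commonly used to map device measurements into system-level simulators. The parameter values below are illustrative placeholders, not data from the Si-nanowire device in the paper.

```python
# Generic nonlinear weight-update model for an artificial synapse: each
# potentiation (LTP) pulse increases conductance with a decaying step size,
# each depression (LTD) pulse decreases it symmetrically. All parameter
# values are assumed placeholders, not fitted to any measured device.
import math

G_MIN, G_MAX = 0.1e-6, 1.0e-6   # conductance bounds in siemens (assumed)
ALPHA, BETA = 0.05e-6, 3.0      # step size and nonlinearity factor (assumed)

def ltp_pulse(g):
    """One potentiation pulse: steps shrink as g approaches G_MAX."""
    step = ALPHA * math.exp(-BETA * (g - G_MIN) / (G_MAX - G_MIN))
    return min(G_MAX, g + step)

def ltd_pulse(g):
    """One depression pulse: steps shrink as g approaches G_MIN."""
    step = ALPHA * math.exp(-BETA * (G_MAX - g) / (G_MAX - G_MIN))
    return max(G_MIN, g - step)

g = G_MIN
ltp_curve = []
for _ in range(50):             # 50 potentiation pulses
    g = ltp_pulse(g)
    ltp_curve.append(g)
for _ in range(50):             # 50 depression pulses
    g = ltd_pulse(g)
```

Sweeping such curves for different nonlinearity factors is how simulators estimate the accuracy impact of a device's LTP/LTD asymmetry.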

Mar 10, 2021

[Workshop] Brain Inspired Computing; March 24, 2021

Workshop "Brain Inspired Computing"
March 24, 2021
Under the aegis of the UKIERI and SPARC schemes
Jointly Organized by
Dept. of EEE, The University of Sheffield (UK) 
Dept. of ECE, Indian Institute of Technology, Roorkee (IN)
and H2020 Project INFET

Why Brain Inspired Computing? Modern computers are based on the von Neumann architecture, in which computation and storage are physically separated. It has been shown that, for many computing tasks, most of the energy and time is consumed by data movement rather than computation. One promising solution is the brain-inspired architecture, which leverages the distributed computing of neurons with localized storage in synapses. 
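The data-movement claim can be made concrete with a back-of-envelope calculation using the widely cited 45 nm energy figures from Horowitz's ISSCC 2014 keynote; exact values vary by technology node and are used here only to show the orders of magnitude involved.

```python
# Back-of-envelope illustration of the von Neumann data-movement bottleneck.
# Energy figures are the commonly cited 45 nm estimates (Horowitz, ISSCC 2014);
# they are approximate and node-dependent.
E_FP_ADD_PJ = 0.9      # ~energy of a 32-bit floating-point add, in pJ
E_DRAM_PJ   = 640.0    # ~energy of a 32-bit read from off-chip DRAM, in pJ

ratio = E_DRAM_PJ / E_FP_ADD_PJ
print(f"Fetching one operand from DRAM costs roughly {ratio:.0f}x a 32-bit add")
```

This several-hundred-fold gap is why architectures that compute where the data is stored, as synapses do, can be so much more energy-efficient.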
Registration Link Google Form

Coordinators:
Dr. Merlyne De Souza; PI, UKIERI, EEE Dept, University of Sheffield (UK)
Dr. Sanjeev Manhas; PI, SPARC, ECE Dept, IIT Roorkee (IN)

Schedule Details (UK times)
• 11:00-11:05am “Welcome note by organisers”, Merlyne De Souza, University of Sheffield (UK)
• 11:05-11:30am “Algorithm-circuits-device co-design for neuromorphic edge intelligence”, Melika Payvand, ETH Zurich (CH)
• 11:30am-12:00pm “Adapting communication delays between neurons: A new type of brain plasticity”, Renaud Jolivet, University of Geneva (CH)
• 12:00-12:30pm “Self-adaptive and defect tolerant in-memory analogue computing with memristors”, Can Li, University of Hong Kong (HK)
• 12:30-1:00pm “In situ learning using intrinsic memristor variability”, Damien Querlioz, Université Paris-Saclay (FR)
• 1:00-1:30pm “In-memory hyperdimensional computing”, Abbas Rahimi, IBM Zurich (CH)
• 1:30-2:00pm “Introduction to NeuroSim: A Benchmark Tool for Compute-in-Memory Accelerator”, Shimeng Yu, Georgia Tech (USA)

Jun 2, 2020

[paper] In-memory hyperdimensional computing

In-memory hyperdimensional computing
Geethan Karunaratne, Manuel Le Gallo, Giovanni Cherubini, Luca Benini, Abbas Rahimi
and Abu Sebastian 
Nature Electronics (2020)
DOI: 10.1038/s41928-020-0410-3

Abstract: Hyperdimensional computing is an emerging computational framework that takes inspiration from attributes of neuronal circuits including hyperdimensionality, fully distributed holographic representation and (pseudo)randomness. When employed for machine learning tasks, such as learning and classification, the framework involves manipulation and comparison of large patterns within memory. A key attribute of hyperdimensional computing is its robustness to the imperfections associated with the computational substrates on which it is implemented. It is therefore particularly amenable to emerging non-von Neumann approaches such as in-memory computing, where the physical attributes of nanoscale memristive devices are exploited to perform computation. Here, we report a complete in-memory hyperdimensional computing system in which all operations are implemented on two memristive crossbar engines together with peripheral digital complementary metal–oxide–semiconductor (CMOS) circuits. Our approach can achieve a near-optimum trade-off between design complexity and classification accuracy based on three prototypical hyperdimensional computing-related learning tasks: language classification, news classification and hand gesture recognition from electromyography signals. Experiments using 760,000 phase-change memory devices performing analog in-memory computing achieve comparable accuracies to software implementations.
Fig.: The concept of in-memory hyperdimensional computing.
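The "manipulation and comparison of large patterns within memory" that the abstract describes rests on three core operations of binary hyperdimensional computing: binding (componentwise XOR), bundling (bitwise majority), and similarity search (Hamming distance). The toy sketch below illustrates them in software; the dimension and the role/filler names are illustrative, and this is a software analogue, not the paper's memristive implementation.

```python
# Minimal sketch of binary hyperdimensional computing: binding (XOR),
# bundling (bitwise majority, ties broken toward 0), and similarity by
# Hamming distance. Dimension and data are toy-sized and illustrative.
import random

D = 10_000
random.seed(0)

def rand_hv():
    """A random dense binary hypervector (the 'item memory' entries)."""
    return [random.randint(0, 1) for _ in range(D)]

def bind(a, b):
    """XOR binding: associates two hypervectors; XOR is its own inverse."""
    return [x ^ y for x, y in zip(a, b)]

def bundle(hvs):
    """Bitwise majority: superposes a set of hypervectors."""
    return [1 if sum(bits) * 2 > len(hvs) else 0 for bits in zip(*hvs)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Item memory: random hypervectors for two roles and two fillers.
role_a, role_b = rand_hv(), rand_hv()
apple, pear = rand_hv(), rand_hv()

# Encode the record {role_a: apple, role_b: pear}, then query role_a.
record = bundle([bind(role_a, apple), bind(role_b, pear)])
query = bind(record, role_a)   # unbinding recovers a noisy copy of 'apple'

# The noisy query is much closer to 'apple' than to 'pear'.
assert hamming(query, apple) < hamming(query, pear)
```

In the in-memory realization, the same associative lookup is performed by the crossbar hardware: the item/associative memories live in the phase-change devices, and the distance computation happens in place.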

Acknowledgements: This work was supported in part by the European Research Council through the European Union’s Horizon 2020 Research and Innovation Programme under grant no. 682675 and in part by the European Union’s Horizon 2020 Research and Innovation Programme through the project MNEMOSENE under grant no. 780215.


Oct 13, 2014

Wearable Sensing and Computing

 Wearable Sensing and Computing 
 05.11.2014 - 06.11.2014
 EPFL Lausanne (CH)

COURSE OBJECTIVES
The course's main objective is to present and discuss in detail the latest advances in low-power sensing technology, energy harvesting, and their heterogeneous integration for wearable smart-system applications. Technological roadmaps of performance and future evolution will be presented. Low-power wireless communications are discussed from the point of view of existing standards and the challenges of reducing the energy per communicated bit. Another objective is to detail key future applications for wearable sensing and computing, with emphasis on: (1) medical diagnostics, monitoring, and prevention and (2) sports, fitness, and activity monitoring. We analyze the benefits of autonomous smart-system technology from many points of view, including that of the individual, the physician, health-care management, and society in general. We provide a rationale for the role of such technology as a component of the care cycle and the changes it can induce by reinforcing preventive strategies.

AGENDA on-line
Day 1 (09:00 – 17:00):

  • Introduction to wearable technology and energy efficient functions for autonomous smart systems
  • Energy efficient computing technologies and their importance for wearable applications
  • Wearable low power sensor technology trends
  • Wearable low power communications technologies
  • Wearable energy harvesting technology trends

Day 2 (09:00 – 17:00):

  • Heterogeneous integration: solutions, roadmaps and trends for wearables
  • Context-driven embodiments by wearable systems and related applications and services
  • Market Trends for Mobile and Wearable Technology
  • Wearable autonomous smart systems: Applications to Medical Diagnostics, Monitoring and Prevention Paradigms using Feedback Loops
  • Wearable Technology – Sports, Fitness and Activity Monitoring Applications
Course registration on-line