Artificial Intelligence


Submitted by Style Pass, 2024-06-05 12:00:14

Artificial Intelligence ("AI") is deployed in applications ranging from noise cancellation to image recognition. AI-based products often carry remarkably high hardware and electricity costs, putting them out of reach of consumer devices and small-scale edge electronics. Inspired by biological brains, artificial neural networks are modeled with mathematical formulae and functions. However, brains (i.e., analog systems) deal with continuous values along a spectrum (e.g., variations in voltage) rather than being restricted to the binary on/off states of digital hardware; this continuous nature of analog logic allows for a smoother and more efficient representation of data. Because present-day computers are almost exclusively digital, they emulate analog-based AI algorithms in a space-inefficient and slow manner: a single analog value must be encoded as many binary digits on digital hardware. In addition, general-purpose computer processors treat otherwise-parallelizable AI algorithms as step-by-step sequential logic. So, in my research, I have explored ways of improving AI performance on currently available mainstream digital hardware. A family of digital circuitry known as Programmable Logic Devices ("PLDs") can be customized down to the specific parameters of a trained neural network, thereby ensuring data-tailored computation and algorithmic parallelism. Furthermore, a subgroup of PLDs, the Field-Programmable Gate Arrays ("FPGAs"), is dynamically re-configurable: FPGAs are reusable, and subsequent customized designs can be swapped in while deployed in the field. As a proof of concept, I have implemented a sample 8x8-pixel handwritten-digit-recognizing neural network in a low-cost Xilinx Artix-7 FPGA, using VHDL-2008 (a hardware description language standardized by the U.S. DoD and IEEE).
Compared to software-emulated implementations, power consumption and execution speed greatly improved; ultimately, this hardware-accelerated approach bridges the inherent mismatch between current AI algorithms and the general-purpose digital hardware they run on.
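To make the accelerated computation concrete, the forward pass of such a network reduces to per-neuron multiply-accumulate operations followed by an activation function; on an FPGA, each neuron's dot product can be computed in parallel rather than sequentially. The sketch below is a minimal software model of that computation, not the project's actual code: the 64-input/10-output shapes match an 8x8 digit recognizer, but the random weights and the sigmoid activation are illustrative assumptions (the real parameters live in the project's ./data folder).

```python
import numpy as np

def forward(image, weights, biases):
    """One fully connected layer: a multiply-accumulate per neuron,
    then a sigmoid activation. A CPU evaluates these dot products one
    step at a time; an FPGA can evaluate every neuron concurrently."""
    z = weights @ image + biases          # 10 parallel dot products
    return 1.0 / (1.0 + np.exp(-z))       # sigmoid activation

# Hypothetical data: an 8x8 grayscale image flattened to 64 inputs,
# mapped to 10 output scores (one per digit). Weights are random
# stand-ins for a trained network's parameters.
rng = np.random.default_rng(0)
image = rng.random(64)
weights = rng.normal(size=(10, 64))
biases = rng.normal(size=10)

scores = forward(image, weights, biases)
print(int(np.argmax(scores)))             # index of the highest-scoring digit
```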

Although the abstract specifically talks about an image-recognizing neural network, I endeavoured to generalize Innervator: in practice, it can implement any number of neurons and layers, in any possible application (e.g., speech recognition), not just imagery. In the ./data folder, you will find the weight and bias parameters used during Innervator's synthesis. Because the implementation of VHDL's std.textio library is badly broken across most synthesis tools, I was limited to reading only std_logic_vectors from files; as a result, weights and biases had to be pre-formatted in a fixed-point representation. (More information is available in file_parser.vhd.)
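Pre-formatting a real-valued weight as fixed-point means scaling it by 2^(fraction bits), rounding, and emitting the two's-complement bit pattern. The helper below is a hedged sketch of that conversion; the 4-integer/4-fraction split is only an example, since Innervator's actual bit widths are a synthesis-time choice not stated here.

```python
def to_fixed_point(value, int_bits=4, frac_bits=4):
    """Encode a real number as a two's-complement fixed-point bit string,
    suitable for reading back as a std_logic_vector. The 4.4 format here
    is an assumed example width, not Innervator's actual configuration."""
    total_bits = int_bits + frac_bits
    scaled = round(value * (1 << frac_bits))          # scale by 2^frac_bits
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    scaled = max(lo, min(hi, scaled))                 # saturate on overflow
    # Mask to the word width so negatives wrap to two's complement.
    return format(scaled & ((1 << total_bits) - 1), f'0{total_bits}b')

print(to_fixed_point(0.75))   # 0.75 * 16 = 12  -> "00001100"
print(to_fixed_point(-1.5))   # -1.5 * 16 = -24 -> "11101000"
```

One line per parameter in this format can then be consumed by a simple std_logic_vector file reader, sidestepping the unreliable parts of std.textio.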
