Showing posts with label LLM.

Feb 15, 2026

[paper] From RTL to Prompt Coding

Lukas Krupp∗, Matthew Venn†, and Norbert Wehn∗
From RTL to Prompt Coding: Empowering the Next Generation of Chip Designers through LLMs
arXiv:2601.13815v1 [cs.AR] 20 Jan 2026

∗RPTU University of Kaiserslautern-Landau, Kaiserslautern, Germany
†Tiny Tapeout

Abstract: This paper presents an LLM-based learning platform for chip design education, aiming to make chip design accessible to beginners without overwhelming them with technical complexity. It represents the first educational platform that assists learners holistically across both frontend and backend design. The proposed approach integrates an LLM-based chat agent into a browser-based workflow built upon the Tiny Tapeout ecosystem. The workflow guides users from an initial design idea through RTL code generation to a tapeout-ready chip. To evaluate the concept, a case study was conducted with 18 high-school students. Within a 90-minute session they developed eight functional VGA chip designs in a 130 nm technology. Despite having no prior experience in chip design, all groups successfully implemented tapeout-ready projects. The results demonstrate the feasibility and educational impact of LLM-assisted chip design, highlighting its potential to attract and inspire early learners and significantly broaden the target audience for the field.

Fig: Overview of the proposed idea-to-GDSII learning workflow, integrating the LLM-based chat agent for RTL implementation, the VGA simulation tool, and the GitHub-driven backend flow.
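
To give a concrete sense of the RTL such a workflow targets, below is a minimal Verilog sketch of a Tiny Tapeout-style VGA pattern generator. The tt_um_* port list follows the publicly documented Tiny Tapeout user-project template, but the module name, the output pin mapping, and the 640x480 @ 60 Hz timing constants are assumptions made for this sketch, not code from the paper.

// Illustrative Tiny Tapeout-style VGA pattern generator (not from the paper).
// Port list assumes the standard Tiny Tapeout user-project template;
// timing assumes 640x480 @ 60 Hz with a ~25 MHz pixel clock.
module tt_um_vga_example (
    input  wire [7:0] ui_in,    // dedicated inputs (unused here)
    output wire [7:0] uo_out,   // dedicated outputs: sync + RGB (hypothetical mapping)
    input  wire [7:0] uio_in,   // bidirectional IOs (unused)
    output wire [7:0] uio_out,
    output wire [7:0] uio_oe,
    input  wire       ena,      // design enable
    input  wire       clk,      // assumed ~25 MHz pixel clock
    input  wire       rst_n     // active-low reset
);
    reg [9:0] hcount, vcount;   // 0..799 and 0..524 for 640x480 timing

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            hcount <= 10'd0;
            vcount <= 10'd0;
        end else if (hcount == 10'd799) begin
            hcount <= 10'd0;
            vcount <= (vcount == 10'd524) ? 10'd0 : vcount + 10'd1;
        end else begin
            hcount <= hcount + 10'd1;
        end
    end

    wire hsync   = ~(hcount >= 10'd656 && hcount < 10'd752); // active-low sync
    wire vsync   = ~(vcount >= 10'd490 && vcount < 10'd492);
    wire visible = (hcount < 10'd640) && (vcount < 10'd480);
    wire [2:0] rgb = visible ? hcount[7:5] : 3'b000;          // simple stripe pattern

    assign uo_out  = {2'b00, rgb, vsync, hsync, 1'b0};        // hypothetical pin mapping
    assign uio_out = 8'h00;
    assign uio_oe  = 8'h00;                                   // bidirectional pins unused

    wire _unused = &{ena, ui_in, uio_in, 1'b0};               // silence lint warnings
endmodule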

Acknowledgments: This work was funded by the German Federal Ministry of Research, Technology and Space (BMFTR) as part of the “Chipdesign Germany” project under grant number 16ME0890.

May 26, 2023

[paper] Chip-Chat

Jason Blocklove, Siddharth Garg, Ramesh Karri, and Hammond Pearce^
Chip-Chat: Challenges and Opportunities in Conversational Hardware Design
arXiv:2305.13243 [cs.LG] 22 May 2023

New York University, NY USA
^University of New South Wales Sydney, Australia

Abstract: Modern hardware design starts with specifications provided in natural language. These are then translated by hardware engineers into appropriate Hardware Description Languages (HDLs) such as Verilog before synthesizing circuit elements. Automating this translation could reduce sources of human error from the engineering process. But, it is only recently that artificial intelligence (AI) has demonstrated capabilities for machine-based end-to-end design translations. Commercially available instruction-tuned Large Language Models (LLMs) such as OpenAI’s ChatGPT and Google’s Bard claim to be able to produce code in a variety of programming languages; but studies examining them for hardware are still lacking. In this work, we thus explore the challenges faced and opportunities presented when leveraging these recent advances in LLMs for hardware design. Using a suite of 8 representative benchmarks, we examined the capabilities and limitations of the state of the art conversational LLMs when producing Verilog for functional and verification purposes. Given that the LLMs performed best when used interactively, we then performed a longer, fully conversational case study where a hardware engineer co-designed a novel 8-bit accumulator-based microprocessor architecture. We sent the benchmarks and processor to tapeout in a Skywater 130nm shuttle, meaning that these ‘Chip-Chats’ resulted in what we believe to be the world’s first wholly-AI-written HDL for tapeout.

Fig: Processor synthesis information - (a) Components; (b) final processor GDS render by KLayout, with I/O ports on the left side and grid lines = 0.001 um.

Opportunities: Still, when human feedback is provided to the more capable ChatGPT-4 model, or when it is used to co-design, the language model seems to act as a ‘force multiplier’, allowing for rapid design space exploration and iteration. In general, ChatGPT-4 could produce functionally correct code, which could free up designer time when implementing common modules. Potential future work could involve a larger user study to investigate this potential, as well as the development of conversational LLMs specific to hardware design to improve upon the results.
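
As a rough illustration of the kind of RTL such a conversation produces, below is a minimal Verilog sketch of an 8-bit accumulator-style datapath. The module name, opcode encoding, and 16-byte data memory are assumptions made for this sketch; they do not reproduce the processor co-designed in the paper.

// Illustrative 8-bit accumulator-style datapath (not the paper's processor).
// Opcodes, widths, and the 16-byte data memory are assumptions for this sketch.
module accumulator_core (
    input  wire       clk,
    input  wire       rst_n,
    input  wire [1:0] opcode,   // 00=LOAD, 01=ADD, 10=STORE, 11=NOP (hypothetical encoding)
    input  wire [3:0] addr,     // data-memory address
    output wire [7:0] acc_out
);
    reg [7:0] acc;
    reg [7:0] mem [0:15];       // tiny data memory

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            acc <= 8'd0;
        end else begin
            case (opcode)
                2'b00:   acc <= mem[addr];         // LOAD
                2'b01:   acc <= acc + mem[addr];   // ADD
                2'b10:   mem[addr] <= acc;         // STORE
                default: ;                         // NOP
            endcase
        end
    end

    assign acc_out = acc;
endmodule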