Optimization of Forksheet and Nanosheet Transistors Through Parametric Simulations and Machine Learning for High-Performance Semiconductor Applications


Abstract

This study investigates the scalability and performance of nanosheet (NS) and forksheet (FS) transistors through LTspice simulations and a machine learning (ML)-based Python program for modeling and analysis. Key performance metrics, including rise time, fall time, propagation delay, and voltage stability, were evaluated across a range of transistor lengths and widths. The results show that smaller transistor dimensions significantly improve switching speeds while maintaining stable voltage characteristics. NS transistors outperformed FS transistors in switching performance and exhibited lower modeling errors, especially in compact configurations; however, FS transistors demonstrated slightly better voltage stability at larger dimensions, making them more suitable for applications that prioritize reliability. The integration of ML models enabled accurate predictions of transistor behavior, streamlining the design process and overcoming the inefficiencies of traditional simulation methods. Notably, the system achieved a 94% prediction accuracy, highlighting the potential of ML in optimizing transistor design. These findings provide valuable insights into optimizing NS and FS transistor designs for advanced semiconductor applications, balancing performance and stability.
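The abstract describes the ML-based Python program only at a high level. As a minimal sketch of what such a surrogate model might look like, assuming scikit-learn and synthetic stand-in data (the paper's actual features, model choice, and simulation dataset are not given here), one could regress a switching metric such as propagation delay on device geometry:

```python
# Hypothetical sketch of an ML surrogate for LTspice sweep results.
# All feature names, ranges, and the target relationship below are
# illustrative stand-ins, not the authors' actual data or model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Features: gate length (nm), sheet width (nm), and a device-type flag
# (0 = nanosheet, 1 = forksheet), mimicking a parametric sweep.
X = np.column_stack([
    rng.uniform(10, 50, 500),   # gate length, nm
    rng.uniform(20, 100, 500),  # sheet width, nm
    rng.integers(0, 2, 500),    # device type flag
])

# Synthetic stand-in target: propagation delay (ps), with smaller
# devices switching faster, plus noise to emulate simulation spread.
y = 0.4 * X[:, 0] + 0.1 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0, 1, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A tree-based regressor is a common choice for small tabular sweeps.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Check how well the surrogate reproduces held-out simulated behavior.
print("R^2 on held-out sweeps:", r2_score(y_test, model.predict(X_test)))
```

A surrogate like this replaces repeated circuit simulations with near-instant predictions once trained; the abstract's 94% figure would correspond to whatever accuracy metric the authors adopted, which is not defined in the text above.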
