The AGI News

Researchers Fine-Tune LLMs to Reduce Vulnerabilities in Auto-Completed Smart Contract Code

September 19, 2023

Researchers from the Norwegian University of Science and Technology, Nanjing, have presented a pioneering approach to addressing vulnerabilities in auto-completed smart contract code. Their focus was primarily on Ethereum blockchain smart contracts, given the stringent security requirements these digital contracts carry.

Auto-completing code, while immensely beneficial in the software development process, has its drawbacks. Recent studies have shown that a significant proportion of such synthesized code is insecure: one analysis of auto-completed Python and C programs found that approximately 40% of the generated programs contained vulnerabilities.

The researchers’ methodology, dubbed “vulnerability-constrained decoding,” attempts to diminish the generation of vulnerable code. The approach relies on a curated dataset of previously identified vulnerable code lines. Using this data, the technique fine-tunes a state-of-the-art large language model (LLM) to recognize and avoid these vulnerabilities during the auto-completion phase.
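
The article does not reproduce the algorithm, but the core idea of vulnerability-constrained decoding can be sketched as a filter over candidate completions: lines matching a curated set of known-vulnerable patterns are suppressed before the highest-scoring completion is chosen. Everything below (the pattern list, function names, and scores) is illustrative, not taken from the paper:

```python
import re

# Toy stand-in for a curated set of known-vulnerable Solidity line patterns.
VULNERABLE_PATTERNS = [
    re.compile(r"\.call\.value\("),  # re-entrancy-prone low-level call
    re.compile(r"tx\.origin"),       # tx.origin-based authentication
]

def is_vulnerable(line: str) -> bool:
    """Return True if a candidate line matches a known-vulnerable pattern."""
    return any(p.search(line) for p in VULNERABLE_PATTERNS)

def constrained_decode(candidates: list) -> str:
    """Pick the highest-scoring candidate line that is not flagged as vulnerable.

    `candidates` is a list of (line, model_score) pairs, as a decoder would
    produce at each step; flagged lines are filtered out before selection.
    """
    safe = [(line, s) for line, s in candidates if not is_vulnerable(line)]
    if not safe:  # fall back if every candidate is flagged
        safe = candidates
    return max(safe, key=lambda ls: ls[1])[0]

# Example: the top-scoring completion is vulnerable, so the decoder
# falls back to the best safe alternative.
candidates = [
    ("msg.sender.call.value(amount)();", 0.9),
    ("payable(msg.sender).transfer(amount);", 0.7),
]
print(constrained_decode(candidates))
```

A real implementation would operate on token probabilities inside the LLM’s decoding loop rather than on whole candidate lines, but the filtering principle is the same.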

A notable feature of the team’s research is the efficiency of the model’s fine-tuning process. Traditional methods, which often involve re-training these complex models, can take upwards of a week even on powerful computational resources. Remarkably, the team’s approach streamlined this process, completing it in roughly an hour without sacrificing efficacy.
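
The article does not say how the hour-scale fine-tuning was achieved. One common way to cut fine-tuning time by orders of magnitude is a parameter-efficient update such as LoRA, in which a frozen weight matrix W is augmented with a trainable low-rank correction B·A, so only a small fraction of parameters is updated. This is offered purely as a plausible illustration of why such speedups are possible, not as the paper’s actual method. A NumPy sketch of the arithmetic:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 512, 8                            # model dimension vs. low-rank bottleneck
W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection (zero-init)

def forward(x):
    """Adapted layer: frozen W plus the low-rank update B @ A."""
    return x @ (W + B @ A).T

# Trainable parameters shrink from d*d to 2*d*r:
full, lora = d * d, 2 * d * r
print(f"trainable params: {lora} vs {full} ({lora/full:.1%})")
# → trainable params: 8192 vs 262144 (3.1%)
```

Because B is initialized to zero, the adapted layer starts out identical to the frozen model, and training only has to move the small B and A matrices.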

In subsequent evaluations, the modified model showed a marked drop in the susceptibility of generated code to known weaknesses: in tests on Ethereum smart contracts, it reduced vulnerabilities by 30%.

Such advancements in secure code generation are timely and critical, given the escalating importance of digital security in today’s tech-driven landscape. The team’s research contributes a valuable methodology to the field and lays the groundwork for future studies aimed at further enhancing the security of generated code.

Acknowledging the potential and significance of their findings, the researchers have indicated plans to refine their model further. They also aim to explore the broader applicability of their approach across different technological domains, working toward a safer and more robust coding environment. Check the full paper here.


© 2023 AGI News All Rights Reserved.

Contact: community@superagi.com
