1 Altmetric

Do LLMs consider security? An empirical study on responses to programming questions

Submitted by Style Pass, 2025-07-31 15:30:13


The widespread adoption of conversational LLMs for software development has raised new security concerns regarding the safety of LLM-generated content. Our motivational study outlines ChatGPT’s potential to volunteer context-specific information to developers, promoting safe coding practices. Motivated by this finding, we conduct a study to evaluate the degree of security awareness exhibited by three prominent LLMs: Claude 3, GPT-4, and Llama 3. We prompt these LLMs with Stack Overflow questions that contain vulnerable code to evaluate whether they merely answer the questions or also warn users about the insecure code, thereby demonstrating a degree of security awareness. Further, we assess whether LLM responses provide information about the causes, exploits, and potential fixes of the vulnerability, to help raise users’ awareness. Our findings show that all three models struggle to accurately detect and warn users about vulnerabilities, achieving a detection rate of only 12.6% to 40% across our datasets. We also observe that the LLMs tend to identify certain types of vulnerabilities, such as those related to sensitive information exposure and improper input neutralization, much more frequently than other types, such as those involving external control of file names or paths. Furthermore, when LLMs do issue security warnings, they often provide more information on the causes, exploits, and fixes of vulnerabilities than Stack Overflow responses do. Finally, we provide an in-depth discussion of the implications of our findings, and demonstrate a CLI-based prompting tool that can be used to produce more secure LLM responses.
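To make the study setup concrete, consider an illustrative (hypothetical, not drawn from the paper's dataset) Stack Overflow-style snippet exhibiting improper input neutralization (SQL injection, CWE-89), the kind of vulnerable code the study embeds in prompts, alongside the parameterized fix a security-aware response would suggest:

```python
import sqlite3


def find_user(db_path: str, username: str):
    """Vulnerable version: the username is interpolated directly into
    the SQL string (improper input neutralization, CWE-89), so an input
    like "x' OR '1'='1" matches every row."""
    conn = sqlite3.connect(db_path)
    query = f"SELECT id, name FROM users WHERE name = '{username}'"  # injectable
    return conn.execute(query).fetchall()


def find_user_safe(db_path: str, username: str):
    """Fixed version: a parameterized query treats the input as data,
    not SQL, neutralizing the injection."""
    conn = sqlite3.connect(db_path)
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```

A security-aware LLM answering a question built around the first function would ideally warn about the injection and explain its cause, exploitability, and the parameterized-query fix, rather than only answering the surface question.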

Large Language Models (LLMs) have become deeply integrated into software engineering workflows, performing tasks such as code generation, summarization, debugging, and addressing queries related to programming (Liu et al. 2023a; Hou et al. 2023; Zheng et al. 2023; Belzner et al. 2023). In particular, LLM chatbots, or conversational LLMs, such as OpenAI’s GPT (OpenAI 2023), Anthropic’s Claude (Anthropic 2024), and Meta’s Llama (Meta 2024), have significantly impacted problem-solving activities by enabling interactive Q&As (Suad Mohamed 2024; Das et al. 2024; Da Silva et al. 2024). Developers use them to describe symptoms, provide contextual information, and seek guidance on solutions (Hou et al. 2023). According to a 2023 survey, 92% of U.S.-based developers use generative models to perform or automate some of their daily tasks (Shani 2024).
