PROJECT 3
Still don’t trust me? Building Trustworthy AI Code Generation
Project Leaders: Souti Chattopadhyay, Assistant Professor of Computer Science & Lars Lindemann, Assistant Professor of Computer Science
Abstract: A significant part of software development is building and updating software by adapting existing code. Developers start with a goal in mind (e.g., adding features to the application or resolving an error) and decompose that goal into smaller steps describing how to change the code to behave in a specific way. Recent advances in machine learning and artificial intelligence enable high-performance AI-assisted code generation that matches the user-described intention. AI coding assistance tools can generate code from descriptions written in natural language, and these tools have profoundly impacted programming ability and accessibility, especially for non-experts and novice programmers. However, it is well known that AI-generated code suggestions are untrustworthy: despite the reported advances and impressive demonstrations, the trusted integration of large language models into software engineering remains challenging and is still one of the main bottlenecks. Therefore, this proposal aims to improve users' trust in AI-generated programming solutions. We propose a framework that decomposes an intended user behavior into a sequence of code suggestions along with a measure of trust that dynamically adapts to the user's evolving intention. This measure of trust is computed from past code suggestions using a statistical tool called conformal prediction. Our approach combines statistical tools with program analysis techniques to produce confidence measures that indicate the usefulness of each code suggestion to the user.
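To make the conformal-prediction idea concrete, the sketch below shows how split conformal prediction could derive a trust threshold from past code suggestions. All names and the scoring scheme are hypothetical illustrations, not the proposal's actual method: we simply assume each past suggestion has a numeric nonconformity score (lower means the suggestion matched the user's intent better).

```python
# Minimal sketch of split conformal prediction for a trust measure.
# Assumption: each past (calibration) suggestion has a nonconformity
# score; a new suggestion is "trusted" when its score falls below the
# conformal quantile of those calibration scores.
import math

def conformal_threshold(calibration_scores, alpha=0.1):
    """Threshold such that, under exchangeability, a new score stays
    below it with probability at least 1 - alpha."""
    n = len(calibration_scores)
    # Conformal quantile rank: ceil((n + 1) * (1 - alpha)).
    rank = math.ceil((n + 1) * (1 - alpha))
    ordered = sorted(calibration_scores)
    # If the rank exceeds n, no finite threshold gives the guarantee.
    return ordered[rank - 1] if rank <= n else float("inf")

def trust_suggestion(score, threshold):
    """Trust a new suggestion iff its nonconformity score is low enough."""
    return score <= threshold

# Example: nonconformity scores from 19 past suggestions.
past_scores = [0.05 * i for i in range(1, 20)]  # 0.05, 0.10, ..., 0.95
t = conformal_threshold(past_scores, alpha=0.1)
print(t, trust_suggestion(0.4, t), trust_suggestion(0.99, t))
```

As the abstract notes, the trust measure would adapt dynamically: recomputing the threshold as new suggestions and user feedback arrive lets the guarantee track the user's evolving intention.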
PROJECT LEADERS
Souti Chattopadhyay
Lars Lindemann