Abstract: Recent advances in large language models (LLMs) have significantly enhanced their ability to understand both natural language and code, driving their use in tasks such as natural language-to-code (NL2Code) and code summarization. However, LLMs are prone to hallucination: outputs that stray from the intended meaning. Detecting hallucinations in code summarization is especially difficult due to the complex interplay between programming and natural languages. We introduce a first-of-its-kind dataset of $\sim$10K samples, curated specifically for hallucination detection in code summarization. We further propose a novel Entity Tracing Framework (ETF) that (a) uses static program analysis to identify code entities in the program and (b) uses LLMs to map these entities and their intents to the generated code summary and verify them. Our experimental analysis demonstrates the effectiveness of the framework, achieving an F1 score of 0.73. By grounding entities, this approach provides an interpretable method for detecting hallucinations and evaluating summary accuracy.
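To make the two-stage pipeline concrete, the sketch below illustrates the entity-grounding idea: static analysis (here, Python's `ast` module) extracts code entities, and a verification step flags summary mentions with no counterpart in the code. The function names and the exact-match verifier are our own illustrative stand-ins, not the paper's implementation; ETF uses an LLM for the mapping and verification step, which handles paraphrased mentions that exact matching cannot.

```python
import ast
import re

def extract_entities(source: str) -> set[str]:
    """Collect function, class, argument, and variable names from the AST."""
    entities: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            entities.add(node.name)
        elif isinstance(node, ast.arg):
            entities.add(node.arg)
        elif isinstance(node, ast.Name):
            entities.add(node.id)
    return entities

def ungrounded_mentions(summary: str, entities: set[str]) -> list[str]:
    """Return backticked identifiers in the summary that never occur in the code.
    Exact token matching is only a stand-in for the LLM-based mapping step."""
    mentions = set(re.findall(r"`([A-Za-z_]\w*)`", summary))
    return sorted(mentions - entities)

code = "def moving_average(values, window):\n    return sum(values[-window:]) / window\n"
summary = "The function `moving_average` calls `normalize` on the input `values`."
print(ungrounded_mentions(summary, extract_entities(code)))  # ['normalize']
```

In this toy example, `normalize` is flagged because the summary attributes a call to the code that the program never makes, which is exactly the kind of entity-level hallucination the framework targets.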
Abstract: There has been an unprecedented surge in the number of service providers offering machine learning prediction APIs for tasks such as image classification and language translation, thereby monetizing the underlying data and trained models. Typically, a data owner (the API provider) develops a model, often over proprietary data, and leverages the infrastructure services of a cloud vendor to host the model and serve API requests. Clearly, this arrangement assumes complete trust between the API provider and the cloud vendor. A malicious or buggy cloud vendor, however, may copy the APIs and offer an identical service, under-report model usage metrics, or unfairly discriminate between API providers by offering them only a nominal share of the revenue. In this work, we present the design of a blockchain-based, decentralized, trustless API marketplace that enables all stakeholders in the API ecosystem to audit the behavior of the parties without having to trust a single centralized entity. In particular, our system divides an AI model into multiple pieces and deploys them across multiple cloud vendors, who then collaboratively execute the APIs. Our design ensures that cloud vendors cannot collude with each other to steal the combined model, while individual cloud vendors and clients cannot repudiate their inputs or model executions.
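As a minimal illustration of the partitioned-execution idea, the sketch below splits a small multilayer perceptron layer-wise across hypothetical vendor shards, so that no single vendor ever holds the full model. The class and function names and the NumPy setup are our own assumptions; the paper's actual design additionally relies on blockchain commitments for auditability, collusion resistance, and non-repudiation, none of which is modeled here.

```python
# Illustrative sketch only: each "vendor" holds one layer of a small MLP and
# the client chains their outputs. The on-chain audit/commitment protocol
# described in the abstract is out of scope for this toy example.
import numpy as np

class VendorShard:
    """One cloud vendor: holds a single layer's weights, never the full model."""
    def __init__(self, weights: np.ndarray, bias: np.ndarray):
        self._w, self._b = weights, bias

    def execute(self, activations: np.ndarray) -> np.ndarray:
        # A vendor sees only intermediate activations, not the raw request
        # or the other vendors' parameters.
        return np.maximum(activations @ self._w + self._b, 0.0)  # ReLU layer

rng = np.random.default_rng(0)
layer_dims = [(8, 16), (16, 16), (16, 4)]  # three shards, three vendors
vendors = [VendorShard(rng.normal(size=d), np.zeros(d[1])) for d in layer_dims]

def call_api(x: np.ndarray) -> np.ndarray:
    """Client-side API call: route the request through every vendor in turn."""
    for vendor in vendors:
        x = vendor.execute(x)
    return x

print(call_api(rng.normal(size=(1, 8))).shape)  # (1, 4)
```

Because each shard is useless in isolation, stealing the service requires every vendor's piece, which is the property the marketplace's collusion-resistance guarantees are meant to protect.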