Abstract: Large Language Models (LLMs) have demonstrated exceptional abilities across a broad range of language-related tasks, including generating solutions to complex reasoning problems. An effective technique for enhancing LLM performance is in-context learning, which encourages step-by-step reasoning by including explanatory exemplars that guide the model's responses. However, selecting appropriate exemplars poses a challenge, as each dataset demands a distinct set of exemplars for the LLM to learn effectively and perform well on the test set. Existing studies often rely on uncertainty- or diversity-based strategies to select exemplars for annotation and to improve model learning. These studies typically adopt a non-adaptive approach, selecting the entire set of exemplars at once. We argue that this non-adaptive strategy can yield a set of exemplars with high redundancy in the knowledge they cover, ultimately reducing their overall informativeness. To address this limitation, we propose \textsc{Adaptive-Prompt}, a novel method that adaptively selects exemplars by leveraging model feedback from previously chosen exemplars. Experimental results show that \textsc{Adaptive-Prompt} significantly enhances LLM performance across a variety of reasoning tasks.
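As a sketch, the adaptive loop described above can be written as follows; the notation ($E_t$ for the exemplar set after $t$ rounds, $U$ for the unlabeled candidate pool, and $u(x \mid E)$ for the model's uncertainty on candidate $x$ when prompted with exemplar set $E$) is ours and is not fixed by the abstract:
\[
x_{t+1} = \arg\max_{x \in U \setminus E_t} u\!\left(x \mid E_t\right),
\qquad
E_{t+1} = E_t \cup \{x_{t+1}\}.
\]
Each newly annotated exemplar is thus chosen where the model, prompted with the exemplars selected so far, remains most uncertain, which is what distinguishes this procedure from non-adaptive selection of the whole set in one shot.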
Abstract: Submodular function optimization has numerous applications in machine learning and data analysis, including data summarization, which aims to identify a concise and diverse set of data points from a large dataset. When data items carry sensitive attributes such as race or gender, it is important to employ fairness-aware algorithms to prevent biases that could lead to unequal representation of different groups. With this in mind, we investigate the problem of maximizing a monotone submodular function subject to group fairness constraints. Unlike previous studies in this area, we allow randomized solutions: the objective is to compute a distribution over feasible sets such that the expected number of items selected from each group lies between given lower and upper thresholds, ensuring that each group's representation remains balanced in the long term. Here a set is feasible if its size does not exceed a given constant $b$. We develop a series of approximation algorithms for this problem.
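Formally, the randomized variant described above can be stated as the following program; the symbols ($\mathcal{D}$ for the distribution, $V_c$ for the items in group $c$, and $\ell_c$, $u_c$ for that group's lower and upper thresholds) are our own shorthand for the quantities named in the abstract:
\[
\max_{\mathcal{D}} \; \mathbb{E}_{S \sim \mathcal{D}}\!\left[f(S)\right]
\quad \text{s.t.} \quad
\ell_c \le \mathbb{E}_{S \sim \mathcal{D}}\!\left[\,|S \cap V_c|\,\right] \le u_c \;\; \text{for every group } c,
\]
where $f$ is the monotone submodular objective and every set $S$ in the support of $\mathcal{D}$ satisfies $|S| \le b$. Allowing a distribution rather than a single deterministic set is what lets the group constraints hold in expectation, i.e., "in the long term" as stated above.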