Abstract: In this work we study the problem of Stochastic Budgeted Multi-round Submodular Maximization (SBMSm), in which we aim to maximize, over multiple rounds, the sum of the values of a monotone and submodular objective function, where these values depend on the realization of stochastic events and the total number of observations we can make across all rounds is limited by a given budget. This problem extends, and generalizes to multi-round settings, well-studied problems such as (adaptive) influence maximization and stochastic probing. We first show that whenever a certain single-round optimization problem can be solved optimally in polynomial time, there is a polynomial-time dynamic programming algorithm that returns the same solution as the optimal algorithm, which can adaptively choose both which observations to make and in which round to make them. Unfortunately, this dynamic programming approach does not extend to the case in which the single-round optimization problem cannot be solved efficiently (even when it can be approximated within an arbitrarily small constant). In this case, however, we are able to provide a simple greedy algorithm for the problem. It guarantees a $(1/2-\epsilon)$-approximation of the optimal value, even though it allocates the budget to rounds non-adaptively.
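To make the greedy idea concrete, here is a minimal Python sketch of greedy marginal-gain selection combined with a non-adaptive (even) split of the total budget over rounds. The objective functions `fs`, the ground sets `grounds`, and the even split are illustrative assumptions for this sketch, not the paper's exact procedure or analysis.

```python
# Sketch: greedy selection per round with a non-adaptive budget split.
# Assumes monotone submodular objectives given as set functions (illustrative only).
from typing import Callable, FrozenSet, Hashable, List, Set

def greedy_round(f: Callable[[FrozenSet], float],
                 ground: Set[Hashable],
                 k: int) -> Set[Hashable]:
    """Classic greedy: repeatedly add the element with the largest marginal gain."""
    chosen: Set[Hashable] = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        base = f(frozenset(chosen))
        for e in ground - chosen:
            gain = f(frozenset(chosen | {e})) - base
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no remaining element improves the value
            break
        chosen.add(best)
    return chosen

def multi_round_greedy(fs: List[Callable[[FrozenSet], float]],
                       grounds: List[Set[Hashable]],
                       budget: int) -> List[Set[Hashable]]:
    """Non-adaptive allocation: split the budget evenly over rounds,
    then run the single-round greedy independently in each round."""
    per_round = budget // len(fs)
    return [greedy_round(f, g, per_round) for f, g in zip(fs, grounds)]

# Toy usage: a coverage-style objective over two rounds with a total budget of 4.
sets = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
f = lambda S: float(len(set().union(*(sets[e] for e in S))) if S else 0)
print(multi_round_greedy([f, f], [set(sets), set(sets)], budget=4))
```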
Abstract: We focus on the scenario in which messages in favour of and/or against one or multiple candidates are spread through a social network in order to affect the votes of the receivers. Several results are known in the literature for the case in which the manipulator can seed the network by buying influencers. In this paper, instead, we assume the set of influencers and their messages to be given, and we ask whether a manipulator (e.g., the platform) can alter the outcome of the election by adding or removing edges in the social network. We study a wide range of cases, distinguished by the number of candidates and by the kind of messages spread over the network. We provide a positive result, showing that, except for trivial cases, manipulation is not affordable: the optimization problem is hard even if the manipulator has an unlimited budget (i.e., he can add or remove as many edges as desired). Furthermore, we prove that our hardness results still hold in a reoptimization variant, in which the manipulator already knows an optimal solution to the problem and needs to compute a new one once a local modification occurs (e.g., in bandit scenarios where estimates related to the random variables change over time).
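To illustrate the kind of edge manipulation being studied, the following Python toy sketch brute-forces single edge additions/removals under two simplifying assumptions that are not taken from the paper: messages spread deterministically along reachable paths, and the candidate whose messages reach the most voters wins (ties broken lexicographically). The hardness results above imply that exhaustive search of this kind does not scale to the general problem.

```python
# Toy sketch of checking single-edge manipulations.
# The deterministic "reachability" spread and the "most supporters wins" rule
# are simplifying assumptions for illustration only, not the paper's models.
from collections import deque
from typing import Dict, Iterable, List, Set, Tuple

Edge = Tuple[str, str]

def reached(edges: Set[Edge], sources: Iterable[str]) -> Set[str]:
    """Nodes reachable from the given source nodes via directed edges (BFS)."""
    adj: Dict[str, Set[str]] = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
    seen: Set[str] = set(sources)
    queue = deque(seen)
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def winner(edges: Set[Edge],
           endorsers: Dict[str, Set[str]],  # candidate -> influencers backing them
           voters: Set[str]) -> str:
    """Assumed rule: the candidate whose messages reach the most voters wins."""
    support = {c: len(reached(edges, seeds) & voters)
               for c, seeds in endorsers.items()}
    return max(sorted(support), key=support.get)

def single_edge_moves(edges: Set[Edge],
                      candidate_moves: Iterable[Edge],
                      endorsers: Dict[str, Set[str]],
                      voters: Set[str],
                      target: str) -> List[Edge]:
    """Brute force: which single edge additions/removals make `target` the winner?"""
    moves = []
    for e in candidate_moves:
        modified = edges - {e} if e in edges else edges | {e}
        if winner(modified, endorsers, voters) == target:
            moves.append(e)
    return moves
```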