Abstract:Dynamic pricing is a promising strategy to address the challenges of smart charging, as traditional time-of-use (ToU) rates and stationary pricing (SP) do not react dynamically to changes in operating conditions, reducing revenue for charging station (CS) vendors and affecting grid stability. Previous studies evaluated single objectives or linear combinations of objectives for EV CS pricing, which simplifies away the trade-offs and preferences among objectives. We develop a novel formulation of the dynamic pricing problem that addresses multiple conflicting objectives efficiently, instead of focusing on a single objective or metric as in earlier works. We find optimal trade-offs, i.e., Pareto solutions, efficiently using the Non-dominated Sorting Genetic Algorithms NSGA-II and NSGA-III. A dynamic pricing model quantifies the relationship between demand and price while simultaneously solving multiple conflicting objectives, such as revenue, quality of service (QoS), and peak-to-average ratio (PAR). No single method can comprehensively address all of these aspects of dynamic pricing. We therefore present a three-part dynamic pricing approach that combines a Bayesian model, multi-objective optimization, and multi-criteria decision-making (MCDM) based on pseudo-weight vectors. To address the research gap in CS pricing, our method selects solutions using the revenue, QoS, and PAR metrics simultaneously. Real-world data from two charging sites in California validates our approach.
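As a rough illustration of the optimization and decision-making stages described above, the sketch below runs NSGA-II on a toy three-objective pricing problem and then picks a compromise Pareto solution with pseudo-weight vectors. The objective functions, bounds, population size, and preference weights are placeholders, not the paper's actual model; it assumes the pymoo library is available.

    # Minimal sketch (not the paper's model): NSGA-II over toy revenue/QoS/PAR
    # objectives, then pseudo-weight-based selection of one Pareto solution.
    import numpy as np
    from pymoo.algorithms.moo.nsga2 import NSGA2
    from pymoo.core.problem import ElementwiseProblem
    from pymoo.optimize import minimize

    class ToyPricingProblem(ElementwiseProblem):
        def __init__(self):
            super().__init__(n_var=24, n_obj=3, xl=0.1, xu=1.0)  # 24 hourly prices

        def _evaluate(self, x, out, *args, **kwargs):
            demand = 10.0 - 5.0 * x                      # placeholder price-demand curve
            revenue = np.sum(x * demand)
            served = np.mean(np.minimum(demand, 8.0))    # crude QoS proxy: demand actually served
            par = np.max(demand) / np.mean(demand)       # peak-to-average ratio
            out["F"] = [-revenue, -served, par]          # pymoo minimizes every objective

    res = minimize(ToyPricingProblem(), NSGA2(pop_size=50), ("n_gen", 100), seed=1)

    # Pseudo-weight selection (Deb): normalized distance from the worst value per
    # objective, then pick the Pareto point closest to an assumed preference vector.
    F = res.F
    norm = (F.max(axis=0) - F) / (F.max(axis=0) - F.min(axis=0) + 1e-12)
    pw = norm / norm.sum(axis=1, keepdims=True)
    preference = np.array([0.5, 0.3, 0.2])               # assumed decision-maker weights
    chosen_prices = res.X[np.argmin(np.linalg.norm(pw - preference, axis=1))]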
Abstract:We develop an iterative differentially private algorithm for client selection in federated settings. We consider a federated network in which clients coordinate with a central server to complete a task; however, at each time step the clients decide whether or not to participate based on their preferences -- local computation and probabilistic intent. The algorithm does not require client-to-client information exchange. The developed algorithm provides near-optimal values to the clients over long-term average participation with a certain differential privacy guarantee. Finally, we present experimental results to demonstrate the algorithm's efficacy.
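The sketch below is only an illustration of how a client's per-round participation intent can be privatized before the server sees it, here via randomized response; it is not the paper's algorithm, and the epsilon value and selection rule are assumptions.

    # Illustrative sketch only (not the paper's algorithm): each client reports its
    # participation intent through randomized response, giving per-round epsilon-DP.
    import math, random

    def privatize_intent(wants_to_participate: bool, epsilon: float) -> bool:
        """Report the true intent with probability e^eps / (e^eps + 1)."""
        p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
        return wants_to_participate if random.random() < p_truth else not wants_to_participate

    def select_clients(intents, epsilon=1.0):
        """Server selects clients based only on the privatized responses."""
        return [cid for cid, intent in intents.items() if privatize_intent(intent, epsilon)]

    # Example round: clients decide locally from their own preferences.
    intents = {0: True, 1: False, 2: True}
    print(select_clients(intents, epsilon=0.5))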
Abstract:The rapid spread of fake news is a serious problem calling for AI solutions. We employ a deep learning-based automated detector, a three-level hierarchical attention network (3HAN), for fast, accurate detection of fake news. 3HAN has three levels, one each for words, sentences, and the headline, and constructs a news vector, an effective representation of an input news article, by processing the article in a hierarchical bottom-up manner. The headline is known to be a distinguishing feature of fake news, and, furthermore, relatively few words and sentences in an article are more important than the rest. 3HAN gives differential importance to parts of an article through its three layers of attention. In experiments on a large real-world dataset, we observe the effectiveness of 3HAN with an accuracy of 96.77%. Unlike some other deep learning models, 3HAN provides an interpretable output through the attention weights given to different parts of an article, which can be visualized as a heatmap to enable further manual fact-checking.
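A minimal sketch of one attention level of such a hierarchy is shown below (additive word-level attention over bidirectional GRU states); the layer sizes and exact formulation are illustrative assumptions, not the published 3HAN configuration, and the full model stacks word-, sentence-, and headline-level layers.

    # Sketch of a single attention level (words -> sentence vector). Sizes are illustrative.
    import torch
    import torch.nn as nn

    class AttentionLevel(nn.Module):
        def __init__(self, emb_dim=100, hid=50):
            super().__init__()
            self.gru = nn.GRU(emb_dim, hid, bidirectional=True, batch_first=True)
            self.proj = nn.Linear(2 * hid, 2 * hid)
            self.context = nn.Parameter(torch.randn(2 * hid))  # learned context vector

        def forward(self, x):                        # x: (batch, seq_len, emb_dim)
            h, _ = self.gru(x)                       # (batch, seq_len, 2*hid)
            u = torch.tanh(self.proj(h))
            alpha = torch.softmax(u @ self.context, dim=1)      # attention weights
            return (alpha.unsqueeze(-1) * h).sum(dim=1), alpha  # weighted sum, weights

    words = torch.randn(2, 30, 100)                  # 2 sentences, 30 word embeddings each
    sent_vec, attn = AttentionLevel()(words)         # attn can be visualized as a heatmap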
Abstract:We study the complexity of reductions for weighted reachability in parametric Markov decision processes. That is, we say a state p is never worse than q if, for all valuations of the polynomial indeterminates, the maximal expected weight reachable from p is at least the same value from q. In terms of computational complexity, we establish that determining whether p is never worse than q is coETR-complete. On the positive side, we give a polynomial-time algorithm to compute the equivalence classes of the order we study for Markov chains. Additionally, we describe and implement two inference rules to under-approximate the never-worse relation and empirically show that this under-approximation can be used as an efficient preprocessing step for the analysis of large Markov decision processes.
Abstract:This paper proposes a Robust Gradient Classification Framework (RGCF) for Byzantine fault tolerance in distributed stochastic gradient descent. The framework consists of a pattern-recognition filter that we train to classify individual gradients as Byzantine using their direction alone. This filter is robust to an arbitrary number of Byzantine workers in both convex and non-convex optimisation settings, a significant improvement over prior work, which is robust to Byzantine faults only when no more than 50% of the workers are Byzantine. Our solution does not require an estimate of the number of Byzantine workers; its running time does not depend on the number of workers, so it scales to training instances with a large number of workers without a loss in performance. We validate our solution by training convolutional neural networks on the MNIST dataset in the presence of Byzantine workers.
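The sketch below illustrates the core idea of filtering worker gradients by direction alone: gradients are normalized to unit vectors and a small learned filter flags suspected Byzantine updates before aggregation. The filter architecture, threshold, and (omitted) training procedure are placeholders, not the published RGCF.

    # Direction-only gradient filtering sketch; architecture and training are placeholders.
    import torch
    import torch.nn as nn

    class DirectionFilter(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, grads):                        # grads: (workers, dim)
            directions = grads / (grads.norm(dim=1, keepdim=True) + 1e-12)
            return torch.sigmoid(self.net(directions)).squeeze(-1)  # P(Byzantine) per worker

    def robust_aggregate(grads, filt, threshold=0.5):
        keep = filt(grads) < threshold                   # drop suspected Byzantine workers
        return grads[keep].mean(dim=0) if keep.any() else torch.zeros(grads.shape[1])

    grads = torch.randn(8, 1000)                         # 8 workers, flattened gradients
    aggregated = robust_aggregate(grads, DirectionFilter(1000))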
Abstract:Developing a framework for the locomotion of a six-legged robot, or hexapod, is a complex task with extensive hardware and computational requirements. In this paper, we present a bio-inspired framework for the locomotion of a hexapod. Our locomotion model draws inspiration from the structure of a cockroach, with its fairly simple central nervous system, making our model computationally inexpensive and its control mechanisms simpler. We consider the limb morphology of a hexapod, the corresponding central pattern generators for its limbs, and the inter-limb coordination required to generate appropriate patterns in its limbs. We also design two experiments to validate our locomotion model. Our first experiment models the predator-prey dynamics between a cockroach and its predator. Our second experiment uses a reinforcement learning-based algorithm, putting forward a realization of our locomotion model. These experiments suggest that the model can help realize practical hexapod robot designs.
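As a rough sketch of the central pattern generator idea mentioned above, the code below couples six phase oscillators so that the legs settle into an alternating tripod pattern; the frequencies, coupling strength, and gait offsets are illustrative assumptions rather than the paper's parameters.

    # CPG sketch: coupled phase oscillators converging to an alternating tripod gait.
    import numpy as np

    def cpg_step(phases, dt=0.01, freq=2.0, k=1.0):
        # Legs 0, 2, 4 form one tripod; legs 1, 3, 5 the other (half-cycle offset).
        offsets = np.array([0.0, np.pi, 0.0, np.pi, 0.0, np.pi])
        coupling = np.zeros_like(phases)
        for i in range(6):
            for j in range(6):
                coupling[i] += k * np.sin(phases[j] - phases[i] - (offsets[j] - offsets[i]))
        return phases + dt * (2 * np.pi * freq + coupling)

    phases = np.random.uniform(0, 2 * np.pi, 6)
    for _ in range(1000):
        phases = cpg_step(phases)
    swing = np.sin(phases) > 0        # which legs are currently in swing vs. stance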
Abstract:In the present day, more than 3.8 billion people around the world actively use social media. The effectiveness of social media in facilitating quick and easy sharing of information has attracted brands and advertisers who wish to use the platform to market products via the influencers in the network. Influencers, owing to their massive popularity, provide a huge potential customer base, generating higher returns on investment in a very short period. However, it is not straightforward to decide which influencers should be selected for an advertising campaign to generate maximum returns with minimum investment. In this work, we present an agent-based model (ABM) that can simulate the dynamics of influencer advertising campaigns in a variety of scenarios and can help discover the best influencer marketing strategy. Our system is a probabilistic graph-based model that incorporates real-world factors such as customers' interest in a product, customer behavior, willingness to pay, a brand's investment cap, influencers' engagement with influence diffusion, and the nature of the product being advertised, viz. luxury and non-luxury.
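The toy sketch below shows the general shape of such a probabilistic graph-based simulation: influence spreads from selected influencers along edges with some probability, and reached customers buy with another probability. The graph generator, probabilities, pricing, and influencer-selection rule are all placeholders, not the ABM described in the work (assumes the networkx library).

    # Toy influence-diffusion sketch on a synthetic graph; all parameters are placeholders.
    import random
    import networkx as nx

    def simulate_campaign(graph, influencers, p_influence=0.1, p_buy=0.3, price=10.0):
        reached, frontier = set(influencers), list(influencers)
        while frontier:
            node = frontier.pop()
            for nbr in graph.neighbors(node):
                if nbr not in reached and random.random() < p_influence:
                    reached.add(nbr)
                    frontier.append(nbr)
        buyers = sum(random.random() < p_buy for _ in reached)
        return buyers * price                    # revenue before influencer fees

    g = nx.barabasi_albert_graph(1000, 3)
    top = sorted(g.nodes, key=g.degree, reverse=True)[:5]   # pick 5 highest-degree nodes as "influencers"
    print(simulate_campaign(g, top))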
Abstract:The "Thinking, Fast and Slow" paradigm of Kahneman proposes that we use two different styles of thinking -- a fast and intuitive System 1 for certain tasks, along with a slower but more analytical System 2 for others. While the idea of using this two-system style of thinking is gaining popularity in AI and robotics, our work considers how to interleave the two styles of decision-making, i.e., how System 1 and System 2 should be used together. For this, we propose a novel and general framework which includes a new System 0 to oversee Systems 1 and 2. At every point when a decision needs to be made, System 0 evaluates the situation and quickly hands over the decision-making process to either System 1 or System 2. We evaluate such a framework on a modified version of the classic Pac-Man game, with an already-trained RL algorithm for System 1, a Monte-Carlo tree search for System 2, and several different possible strategies for System 0. As expected, arbitrary switches between Systems 1 and 2 do not work, but certain strategies do well. With System 0, an agent is able to perform better than one that uses only System 1 or System 2.
Abstract:Neuromorphic computing describes the use of VLSI systems to mimic neuro-biological architectures and is regarded as a promising alternative to the traditional von Neumann architecture. Any new computing architecture needs a system that can perform floating-point arithmetic. In this paper, we describe a neuromorphic system that performs IEEE 754-compliant floating-point multiplication. The complex process of multiplication is divided into smaller sub-tasks performed by the components Exponent Adder, Bias Subtractor, Mantissa Multiplier, and Sign OF/UF. We study the effect of the number of neurons per bit on accuracy and bit error rate, and estimate the optimal number of neurons needed for each component.
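The same decomposition can be written in conventional code, which may help clarify the sub-tasks named above for IEEE 754 single precision: the exponents are added and the bias subtracted, the mantissas (with their implicit leading 1) are multiplied, and the sign bits are XOR-ed. Rounding and full overflow/underflow handling are omitted in this sketch, and it is not the neuromorphic implementation itself.

    # Conventional sketch of the sub-tasks: exponent add + bias subtract, mantissa
    # multiply, sign XOR. Rounding and overflow/underflow handling are omitted.
    import struct

    def fields(x):
        bits = struct.unpack(">I", struct.pack(">f", x))[0]
        return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF  # sign, exponent, mantissa

    def fp32_multiply(a, b):
        sa, ea, ma = fields(a)
        sb, eb, mb = fields(b)
        sign = sa ^ sb                                   # sign component
        exponent = ea + eb - 127                         # Exponent Adder + Bias Subtractor
        product = (ma | 1 << 23) * (mb | 1 << 23)        # Mantissa Multiplier (implicit 1)
        if product >> 47:                                # renormalize if product >= 2.0
            product >>= 1
            exponent += 1
        mantissa = (product >> 23) & 0x7FFFFF            # truncate (no rounding here)
        bits = sign << 31 | (exponent & 0xFF) << 23 | mantissa
        return struct.unpack(">f", struct.pack(">I", bits))[0]

    print(fp32_multiply(1.5, 2.5))                       # 3.75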
Abstract:Modeling social interactions based on individual behavior has always been an area of interest, but prior literature generally presumes rational behavior. Thus, such models may fail to capture the effects of the biases humans are susceptible to. This work presents a method to model egocentric bias, the real-life tendency to weigh one's own opinion heavily when presented with multiple opinions. We use a symmetric distribution centered at an agent's own opinion, as opposed to the Bounded Confidence (BC) model used in prior work. We consider a game of iterated interactions in which an agent cooperates based on its opinion of an opponent. Our model also includes the concept of domain-based self-doubt, which varies depending on whether an interaction succeeds. An increase in doubt makes an agent reduce its egocentricity in subsequent interactions, thus enabling the agent to learn reactively. The agent system is modeled with factions that do not have a single leader, to overcome some of the issues associated with leader-follower factions. We find that agents belonging to factions perform better than individual agents. We observe that an intermediate level of egocentricity helps an agent perform at its best, which concurs with conventional wisdom that neither overconfidence nor low self-esteem is beneficial.
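The sketch below illustrates the weighting idea in the simplest possible form: peer opinions are weighted by a symmetric (here Gaussian) kernel centered at the agent's own opinion, and higher self-doubt widens the kernel, reducing egocentricity. The kernel shape, width rule, and parameters are illustrative assumptions, not the paper's exact model.

    # Egocentric opinion update sketch: symmetric kernel centered at own opinion,
    # widened by self-doubt. All parameters are illustrative.
    import numpy as np

    def updated_opinion(own, peers, doubt, base_width=0.1):
        width = base_width * (1.0 + doubt)                 # more doubt -> wider acceptance
        weights = np.exp(-((peers - own) ** 2) / (2 * width ** 2))
        weights = np.append(weights, 1.0)                  # own opinion gets full weight
        opinions = np.append(peers, own)
        return float(np.average(opinions, weights=weights))

    print(updated_opinion(own=0.2, peers=np.array([0.1, 0.5, 0.9]), doubt=0.3))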