Visual Abductive Reasoning (VAR) is an emerging vision-language (VL) task in which a model must retrieve or generate a plausible textual hypothesis from a visual input (an image or part of an image) via backward reasoning grounded in prior knowledge or commonsense. Unlike conventional VL retrieval or captioning tasks, where the entities mentioned in the text appear in the image, in abductive inference the relevant facts are not directly visible in the input image. Moreover, the inferences are causally tied to regional visual hints and vary with them. Existing works highlight local visual regions against the global background with specific prompt-tuning techniques (e.g., Colorful Prompt Tuning) on top of foundation models such as CLIP. However, these methods patchify the "regional hints" and the "global context" uniformly at the same granularity and may therefore lose fine-grained visual details that are crucial for abductive reasoning. To tackle this, we propose a simple yet effective Regional Prompt Tuning, which encodes the "regional visual hints" and the "global context" separately at fine-grained and coarse-grained levels. Specifically, our model explicitly upsamples and then patchifies the local hints to obtain fine-grained regional prompts, which are concatenated with coarse-grained contextual tokens from the whole image. We further equip the model with a new Dual-Contrastive Loss that, during training, regresses the visual feature simultaneously toward the features of the factual description (a.k.a. clue text) and the plausible hypothesis (abductive inference text). Extensive experiments on the Sherlock dataset demonstrate that our fully fine-tuned RGP/RGPs with the Dual-Contrastive Loss significantly outperform previous SOTAs, ranking 1st among all submissions on the abductive reasoning leaderboard under all metrics (e.g., P@1$_{i \to t}$: RGPs 38.78 vs. CPT-CLIP 33.44, higher is better). We will open-source our code for further research.
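To make the two ideas above concrete, here is a minimal PyTorch sketch of (i) fine-grained regional prompts obtained by upsampling and patchifying the regional hint before concatenating it with coarse-grained global tokens, and (ii) a dual contrastive objective that pulls the visual feature toward both the clue text and the hypothesis text. This is an illustrative sketch, not our released implementation: the `PatchEmbed` module, the box format, the temperature, and the equal weighting of the two contrastive terms are assumptions for exposition.

```python
import torch
import torch.nn.functional as F


class PatchEmbed(torch.nn.Module):
    """ViT-style patchifier: non-overlapping conv projection, flattened to tokens."""

    def __init__(self, patch_size: int = 16, dim: int = 768):
        super().__init__()
        self.proj = torch.nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (B, 3, H, W) -> (B, num_tokens, dim)
        return self.proj(x).flatten(2).transpose(1, 2)


def regional_prompt_tokens(image, box, patch_embed, input_res: int = 224):
    """Sketch of Regional Prompt Tuning token construction (assumed box = (x0, y0, x1, y1)).

    The regional hint is cropped, explicitly upsampled to the encoder's full input
    resolution, and patchified into fine-grained regional prompts; these are
    concatenated with coarse-grained contextual tokens from the whole image.
    """
    x0, y0, x1, y1 = box
    region = image[:, :, y0:y1, x0:x1]                       # crop the regional hint
    region = F.interpolate(region, size=(input_res, input_res),
                           mode="bicubic", align_corners=False)  # explicit upsampling
    regional_tokens = patch_embed(region)                    # fine-grained regional prompts
    global_tokens = patch_embed(image)                       # coarse-grained global context
    return torch.cat([regional_tokens, global_tokens], dim=1)


def dual_contrastive_loss(img_feat, clue_feat, hyp_feat, temperature: float = 0.07):
    """Sketch of a Dual-Contrastive Loss: two symmetric InfoNCE terms that regress
    the visual feature toward the clue-text feature and the hypothesis-text feature,
    using the other items in the batch as negatives."""
    img = F.normalize(img_feat, dim=-1)
    clue = F.normalize(clue_feat, dim=-1)
    hyp = F.normalize(hyp_feat, dim=-1)
    labels = torch.arange(img.size(0), device=img.device)

    def nce(a, b):
        logits = a @ b.t() / temperature
        return 0.5 * (F.cross_entropy(logits, labels) +
                      F.cross_entropy(logits.t(), labels))

    return nce(img, clue) + nce(img, hyp)
```

In this sketch the fine and coarse token streams share one patch embedder, so the only extra cost is the additional regional tokens; how the two streams are fused inside the transformer, and how the two loss terms are weighted, are design choices of the full model rather than properties of this example.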