In current state-of-the-art commercial first-person shooter games, computer-controlled bots, also known as non-player characters, can often be easily distinguished from human-controlled players. Tell-tale signs include failed navigation, "sixth sense" knowledge of human players' whereabouts, and deterministic, scripted behaviors. We propose, however, that one of the strongest indicators of non-humanlike behavior in these games lies in the bot's weapon-shooting capability. Consistently perfect accuracy and "locking on" to opponents in its visual field from any distance are capabilities characteristic of bots but not of human players. Traditionally, the bot is handicapped in some way, either with a timed reaction delay or with a random perturbation to its aim, neither of which allows it to adapt or improve its technique over time. We hypothesize that enabling the bot to learn the skill of shooting through trial and error, in the same way a human player learns, will lead to greater variation in gameplay and produce less predictable non-player characters. This paper describes a reinforcement learning mechanism that adapts the bot's shooting over time, driven by a dynamic reward signal derived from the amount of damage inflicted on opponents.
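For concreteness, a damage-driven update of this kind could, under a tabular Q-learning formulation (an illustrative assumption rather than the exact algorithm presented in this paper), be written as

\[
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \Big[ r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \Big], \qquad r_t \propto \text{damage inflicted at step } t,
\]

where \(s_t\) is the bot's observation of the target (e.g. relative distance and bearing), \(a_t\) is the selected aiming action, \(\alpha\) is the learning rate and \(\gamma\) is the discount factor.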