SLAM allows a robot to continuously perceive its surrounding environment and localize itself within it. However, its high computational complexity limits the practical use of SLAM on resource-constrained computing platforms. We propose a resource-efficient FPGA-based accelerator and apply it to two major SLAM methods: particle filter-based and graph-based SLAM. Considering their algorithmic characteristics, we compare their performance in terms of latency, throughput gain, and memory consumption, and confirm that the accelerator removes the computational bottleneck in both methods without compromising accuracy.