Optical Intra-oral Scanners (IOS) are widely used in digital dentistry, providing high-resolution, three-dimensional (3D) geometric information of dental crowns and the gingiva. Accurate 3D tooth segmentation, which aims to precisely delineate the tooth and gingiva instances in an IOS scan, plays a critical role in a variety of dental applications. However, previous methods are error-prone at complicated tooth-tooth and tooth-gingiva boundaries, often generalize poorly across patients, and have not had their clinical applicability verified on large-scale datasets. In this paper, we propose a novel method based on 3D transformer architectures, evaluated on a large-scale, high-resolution 3D IOS dataset. Our method, termed TFormer, captures both local and global dependencies among different teeth to distinguish the various tooth types, which exhibit divergent anatomical structures and confusing boundaries. Moreover, we design a geometry-guided loss based on a novel point-curvature measure to exploit boundary geometric features, which refines boundary predictions for more accurate and smoother segmentation. We further employ a multi-task learning scheme, in which an additional teeth-gingiva segmentation head is introduced to improve performance. Extensive experiments on a large-scale dataset of 16,000 IOS scans, to the best of our knowledge the largest IOS dataset reported to date, demonstrate that TFormer surpasses existing state-of-the-art baselines by a large margin, and a clinical applicability test verifies its utility in real-world scenarios.
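The abstract does not specify the form of the geometry-guided loss. As a rough illustration of the general idea only, the sketch below shows a curvature-weighted cross-entropy term in PyTorch, where points with high estimated local curvature (typically near tooth-tooth and tooth-gingiva boundaries) receive larger weights. The curvature proxy, the weighting scheme, and all function and parameter names are assumptions for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch of a curvature-weighted segmentation loss. This is NOT
# the paper's geometry-guided loss, only an illustration of up-weighting
# high-curvature boundary points during training.
import torch
import torch.nn.functional as F


def approximate_point_curvature(points: torch.Tensor, k: int = 16) -> torch.Tensor:
    """Per-point curvature proxy via local PCA over k nearest neighbors.

    points: (N, 3) point or mesh-cell-centroid coordinates.
    Returns: (N,) surface-variation values (larger near sharp boundaries).
    """
    dists = torch.cdist(points, points)                         # (N, N) pairwise distances
    knn_idx = dists.topk(k + 1, largest=False).indices[:, 1:]   # drop the point itself
    neighbors = points[knn_idx]                                  # (N, k, 3)
    centered = neighbors - neighbors.mean(dim=1, keepdim=True)
    cov = centered.transpose(1, 2) @ centered / k                # (N, 3, 3) local covariance
    eigvals = torch.linalg.eigvalsh(cov)                         # ascending eigenvalues
    # Surface variation: smallest eigenvalue over the sum, in [0, 1/3].
    return eigvals[:, 0] / (eigvals.sum(dim=1) + 1e-8)


def curvature_weighted_ce(logits: torch.Tensor,
                          labels: torch.Tensor,
                          points: torch.Tensor,
                          boundary_weight: float = 4.0) -> torch.Tensor:
    """Cross-entropy where high-curvature (likely boundary) points get larger weight."""
    curvature = approximate_point_curvature(points)              # (N,)
    curvature = curvature / (curvature.max() + 1e-8)             # normalize to [0, 1]
    weights = 1.0 + boundary_weight * curvature                  # emphasize boundary regions
    per_point_ce = F.cross_entropy(logits, labels, reduction="none")
    return (weights * per_point_ce).mean()


if __name__ == "__main__":
    N, num_classes = 1024, 17                                    # e.g., 16 teeth + gingiva
    pts = torch.rand(N, 3)
    logits = torch.randn(N, num_classes)
    labels = torch.randint(0, num_classes, (N,))
    print(curvature_weighted_ce(logits, labels, pts))
```

In this kind of scheme, the boundary emphasis factor (here `boundary_weight`) trades off boundary sharpness against overall region accuracy; the actual loss and curvature definition used by TFormer are described in the method section of the paper.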