3D motion tracking is a critical task in many computer vision applications. Unsupervised markerless 3D motion tracking systems determine the most relevant object on the screen and then track it by continuously estimating its projection features (center and area) from the edge image and a point inside the relevant object's projection (the inner point), until tracking fails. Existing reliable techniques for estimating object projection features are based on ray-casting or grid-filling from the inner point. These techniques assume that the edge image is accurate. However, in real-world scenarios, edge miscalculations may arise from low contrast between the target object and its surroundings, or from motion blur caused by low frame rates or fast-moving target objects. In this paper, we propose a barrier extension to casting-based techniques that mitigates the effect of such edge miscalculations.
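To make the inner-point casting idea concrete, the following is a minimal sketch of ray-casting feature estimation over a binary edge image, under assumed names and parameters (`raycast_features`, `n_rays`, the shoelace area formula); it is an illustration of the general technique, not the paper's implementation. Note that a missing edge pixel lets a ray escape the object, which is the kind of miscalculation the proposed barrier extension is meant to mitigate.

```python
import numpy as np

def raycast_features(edge_img, inner_pt, n_rays=64, max_dist=None):
    """Estimate the projection center and area of the object containing
    inner_pt by casting rays over a binary edge image (illustrative sketch)."""
    h, w = edge_img.shape
    if max_dist is None:
        max_dist = int(np.hypot(h, w))  # longest possible ray inside the image
    cx, cy = inner_pt
    hits = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dx, dy = np.cos(theta), np.sin(theta)
        for t in range(1, max_dist):
            x = int(round(cx + t * dx))
            y = int(round(cy + t * dy))
            if not (0 <= x < w and 0 <= y < h):
                break  # ray escaped the image without meeting an edge
            if edge_img[y, x]:
                hits.append((x, y))  # first edge pixel = boundary sample
                break
    hits = np.asarray(hits, dtype=float)
    # Boundary samples are ordered by angle, so they form a simple polygon.
    center = hits.mean(axis=0)
    xs, ys = hits[:, 0], hits[:, 1]
    # Shoelace formula approximates the enclosed projection area.
    area = 0.5 * abs(np.dot(xs, np.roll(ys, -1)) - np.dot(ys, np.roll(xs, -1)))
    return center, area
```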