Precise localization is a core capability of an autonomous vehicle and a prerequisite for motion planning and execution. Well-established localization approaches such as Kalman and particle filters require a probabilistic observation model that computes the likelihood of a measurement given a system state vector, usually the vehicle pose, and a map. Higher localization precision may be achieved by developing a more sophisticated observation model that accounts for various measurement error sources. At the same time, the model must remain simple enough to be computed in real time. This paper proposes an observation model for visually detected linear features. Examples of such features include, but are not limited to, road markings and road boundaries. The proposed observation model captures two core detection error sources: shift error and angular error. It also accounts for the probability of false-positive detection. The structure of the proposed model allows the measurement error to be precomputed and incorporated directly into the map, which is represented as a multichannel digital image. Precomputing the measurement error and storing the map as an image speeds up the observation likelihood computation and, in turn, the whole localization system. Experimental evaluation on a real autonomous vehicle demonstrates that the proposed model enables precise and reliable localization in a variety of scenarios.
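As a rough illustration of the precomputed-likelihood idea summarized above, the sketch below shows how a multichannel likelihood image can reduce observation likelihood evaluation to a few image lookups per pose hypothesis. This is not the paper's implementation: the function name, parameters such as `p_false_positive`, and the channel layout (channel 0 holding a per-cell feature likelihood with shift and angular errors already folded in) are all assumptions made for illustration.

```python
import numpy as np

def observation_log_likelihood(pose, feature_points, likelihood_map, resolution, origin,
                               p_false_positive=0.1, floor=1e-6):
    """Log-likelihood of detected linear-feature points for one pose hypothesis.

    pose            -- (x, y, yaw) hypothesis in the map frame
    feature_points  -- (N, 2) points sampled along a detected feature, vehicle frame
    likelihood_map  -- (H, W, C) multichannel image; channel 0 is assumed to hold the
                       precomputed likelihood of observing a feature point in that cell
    resolution      -- map resolution in metres per pixel
    origin          -- (x, y) world coordinates of the map's pixel (0, 0)
    """
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])

    # Transform detected points from the vehicle frame into the map frame.
    world = feature_points @ R.T + np.array([x, y])

    # Convert world coordinates to pixel indices.
    cols = np.round((world[:, 0] - origin[0]) / resolution).astype(int)
    rows = np.round((world[:, 1] - origin[1]) / resolution).astype(int)

    h, w = likelihood_map.shape[:2]
    inside = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)

    # Points falling outside the map are treated as false positives.
    log_lik = np.full(len(world), np.log(p_false_positive * floor))
    cell_lik = likelihood_map[rows[inside], cols[inside], 0]

    # Mix the precomputed per-cell likelihood with a false-positive floor so that a
    # single spurious detection cannot zero out the whole hypothesis.
    log_lik[inside] = np.log((1.0 - p_false_positive) * np.maximum(cell_lik, floor)
                             + p_false_positive * floor)
    return log_lik.sum()
```

In a particle-filter setting, a function along these lines would be evaluated once per particle; because the error model is already baked into the image channels at precomputation time, each evaluation reduces to vectorized coordinate transforms and array indexing.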