Feature weighting algorithms address a problem of great importance in machine learning: finding a relevance measure for the features of a given domain. This relevance is primarily used for feature selection, since feature weighting can be seen as a generalization of it, but it is also useful for better understanding a problem's domain or for guiding an inductor in its learning process. The Relief family of algorithms has proven very effective at this task. Other feature weighting methods are reviewed to provide context, and the existing extensions to the original algorithm are then explained. One of Relief's known issues is that its weight estimates degrade when redundant features are present. A novel theoretical definition of redundancy level is given in order to guide the work towards an extension of the algorithm that is more robust against redundancy. A new extension is presented that aims to improve the algorithm's performance. Experiments comparing this new extension against the existing ones on a set of artificial and real datasets showed that, in certain cases, it improves the accuracy of the weight estimates.
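
For context, the core idea of the original Relief algorithm (Kira and Rendell, 1992) can be summarized as follows: for each sampled instance, the weight of every feature is updated by contrasting that instance with its nearest neighbour of the same class (the "hit") and its nearest neighbour of the opposite class (the "miss"). Features that separate the miss gain weight; features that separate the hit lose it. The sketch below is a minimal illustration under simplifying assumptions (binary classification, numeric features scaled to [0, 1], Manhattan distance); the function and parameter names are illustrative, not taken from this work.

```python
import numpy as np

def relief(X, y, n_iter=None, rng=None):
    """Illustrative sketch of basic Relief weight estimation.

    Assumes a binary classification task with numeric features
    scaled to [0, 1]. Returns one relevance weight per feature.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    n_iter = n_iter or n
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        # Manhattan distances from the sampled instance to all others.
        dists = np.abs(X - X[i]).sum(axis=1)
        dists[i] = np.inf  # exclude the instance itself
        same = (y == y[i])
        same[i] = False
        hit = np.argmin(np.where(same, dists, np.inf))   # nearest same-class
        miss = np.argmin(np.where(~same, dists, np.inf))  # nearest other-class
        # Reward features that differ on the miss, penalize those
        # that differ on the hit.
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n_iter
    return w
```

Because the hit/miss contrast considers each feature independently, two perfectly redundant copies of a relevant feature both receive high weights, which is the degradation the proposed extension targets.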