Experimental evaluation is an integral part of the algorithm design process. Publicly available benchmark instances are widely used to evaluate methods in SAT solving, and their attributes are crucial for interpreting results and for designing algorithm portfolios. Our specification of a benchmark instance identifier considerably simplifies capturing the interrelation between benchmark instances and their attributes. Building on this identifier, our tool increases the availability of both by providing means to manage and retrieve benchmark instances by their attributes and vice versa. In this way, it facilitates the design and analysis of SAT experiments and the exchange of results.