Textures can be used to describe the appearance of objects in a wide range of fine-grained domains. Textures are localized, and one can often refer to their properties independently of the object's identity. Moreover, there is a rich vocabulary for describing textures, spanning properties such as color, pattern, structure, periodicity, and stochasticity. Motivated by this, we study the effectiveness of large-scale language and vision models (e.g., CLIP) at recognizing texture attributes in natural images. We first conduct a systematic study of CLIP on texture datasets, finding that it covers a wide range of texture terms. CLIP can also handle compositional phrases that combine color and pattern terms (e.g., red dots or yellow stripes). We then show how these attributes enable zero-shot fine-grained categorization on existing datasets.
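To make the zero-shot setup concrete, the sketch below shows one common way to score texture-attribute phrases against an image with CLIP: encode each phrase via a prompt template, encode the image, and rank attributes by cosine similarity. This is a minimal illustration, not the paper's exact protocol; the model variant, prompt template, attribute list, and image path are all illustrative assumptions.

```python
# Minimal sketch of zero-shot texture-attribute recognition with CLIP.
# The prompt template, attribute list, and image path below are assumed
# for illustration, not taken from the paper's experimental setup.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Texture attribute phrases, including compositional color + pattern terms.
attributes = ["red dots", "yellow stripes", "honeycombed", "cracked", "woven"]
prompts = [f"a photo of a {a} texture" for a in attributes]  # assumed template

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize so the dot product equals cosine similarity.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    scores = (image_features @ text_features.T).squeeze(0)

# Rank attributes by similarity to the image.
for attr, score in sorted(zip(attributes, scores.tolist()),
                          key=lambda x: -x[1]):
    print(f"{attr}: {score:.3f}")
```

The same scoring scheme extends to attribute-based zero-shot categorization: if each fine-grained category is described by a set of attribute phrases, an image can be assigned to the category whose phrases score highest on average.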