This paper presents a framework for predicting part affordances of objects from unseen categories, with application to robot manipulation. The framework generates affordance maps for novel objects in an image via region-based affordance segmentation. Earlier work boosted accuracy by exploiting category priors and jointly optimizing detection and segmentation, but this limited its ability to generalize to unknown categories. This work instead integrates a category-agnostic region proposal network that proposes instance regions of an image across categories. A self-attention mechanism, trained to interpret each proposal, learns to capture rich contextual dependencies within the region. To further guide affordance learning in the absence of category priors, an auxiliary task of object attribute inference improves local feature learning. Experimental results show that the trained deep network architecture achieves state-of-the-art performance on affordance segmentation of novel objects and outperforms several baselines. An ablation study quantifies the effectiveness and contribution of each proposed component. Experiments demonstrate the use of affordance detection on novel objects for downstream vision tasks and for manipulation.
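To make the described architecture concrete, below is a minimal PyTorch sketch of a per-region head that combines the three ingredients the abstract names: self-attention over features pooled from a category-agnostic region proposal stage, an affordance segmentation branch, and an auxiliary attribute-inference branch. All module names, layer sizes, and the numbers of affordance and attribute classes are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn as nn

class RegionAffordanceHead(nn.Module):
    """Illustrative per-region head: self-attention over pooled ROI features,
    followed by an affordance mask branch and an auxiliary attribute branch.
    Sizes and class counts are assumptions, not taken from the paper."""

    def __init__(self, in_channels=256, num_affordances=7, num_attributes=14):
        super().__init__()
        # Self-attention over the spatial positions of each ROI feature map,
        # intended to capture contextual dependencies within the region.
        self.attn = nn.MultiheadAttention(
            embed_dim=in_channels, num_heads=8, batch_first=True
        )
        # Affordance segmentation branch: upsamples and predicts
        # per-pixel affordance logits for the region.
        self.mask_head = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(in_channels, in_channels, 2, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, num_affordances, 1),
        )
        # Auxiliary branch: region-level attribute logits, providing the
        # extra supervision that stands in for missing category priors.
        self.attr_head = nn.Linear(in_channels, num_attributes)

    def forward(self, roi_feats):
        # roi_feats: (num_rois, C, H, W), e.g. pooled by ROIAlign from
        # proposals of a category-agnostic RPN.
        n, c, h, w = roi_feats.shape
        tokens = roi_feats.flatten(2).transpose(1, 2)       # (n, H*W, C)
        ctx, _ = self.attn(tokens, tokens, tokens)          # within-region self-attention
        ctx = ctx.transpose(1, 2).reshape(n, c, h, w)
        affordance_logits = self.mask_head(ctx)             # (n, A, 2H, 2W)
        attr_logits = self.attr_head(ctx.mean(dim=(2, 3)))  # (n, num_attributes)
        return affordance_logits, attr_logits

# Example: four 14x14 ROI feature maps from a proposal + pooling stage.
rois = torch.randn(4, 256, 14, 14)
head = RegionAffordanceHead()
masks, attrs = head(rois)
print(masks.shape, attrs.shape)  # torch.Size([4, 7, 28, 28]) torch.Size([4, 14])
```

In a sketch like this, the mask and attribute branches would be trained jointly, with the attribute loss acting as the auxiliary signal that guides local feature learning when no category labels are available.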