This paper presents a novel streaming end-to-end target-speaker speech recognition system that addresses two critical limitations of existing approaches: the handling of noisy enrollment utterances and the requirement for a specific enrollment phrase. It proposes a robust Target-Speaker Recurrent Neural Network Transducer (TS-RNNT) with dual attention mechanisms for contextual biasing and overlapping-enrollment processing. The model incorporates a text decoder and an attention mechanism specifically designed to extract the relevant speaker characteristics from noisy, overlapping enrollment audio. Experimental results on a synthesized dataset demonstrate the model's resilience: it maintains a Word Error Rate (WER) of 16.44% even with overlapping enrollment at a 5 dB Signal-to-Interference Ratio (SIR), whereas conventional approaches degrade to WERs above 75% under the same conditions. This performance improvement, coupled with the model's semi-text-dependent enrollment capability, represents a substantial step toward more practical and versatile voice-controlled devices.
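To make the enrollment-conditioning idea concrete, the following is a minimal sketch (not the authors' implementation) of attention pooling over a noisy enrollment utterance to obtain a target-speaker embedding that biases the recognizer's frame-level features. The module names, dimensions, and the additive-bias fusion are illustrative assumptions.

```python
# Illustrative sketch only: attention-pool enrollment frames into a speaker
# embedding, then add a projection of it to each mixture frame before encoding.
import torch
import torch.nn as nn


class EnrollmentAttentionPool(nn.Module):
    """Attention-pools enrollment frames into a single speaker embedding."""

    def __init__(self, feat_dim: int = 256, attn_dim: int = 128):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(feat_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )

    def forward(self, enroll_frames: torch.Tensor) -> torch.Tensor:
        # enroll_frames: (batch, time, feat_dim) features of the enrollment audio
        weights = torch.softmax(self.score(enroll_frames), dim=1)  # (batch, time, 1)
        return (weights * enroll_frames).sum(dim=1)                # (batch, feat_dim)


class SpeakerBiasedEncoderLayer(nn.Module):
    """Adds a projected speaker embedding to every mixture frame, then encodes."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.pool = EnrollmentAttentionPool(feat_dim)
        self.proj = nn.Linear(feat_dim, feat_dim)
        self.rnn = nn.LSTM(feat_dim, feat_dim, batch_first=True)

    def forward(self, mixture: torch.Tensor, enrollment: torch.Tensor) -> torch.Tensor:
        spk = self.pool(enrollment)                     # (batch, feat_dim)
        biased = mixture + self.proj(spk).unsqueeze(1)  # broadcast bias over time
        out, _ = self.rnn(biased)
        return out


if __name__ == "__main__":
    layer = SpeakerBiasedEncoderLayer()
    mixture_feats = torch.randn(2, 200, 256)    # overlapped input speech features
    enrollment_feats = torch.randn(2, 80, 256)  # (possibly noisy) enrollment features
    print(layer(mixture_feats, enrollment_feats).shape)  # torch.Size([2, 200, 256])
```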