This study examines whether labeling AI as "trustworthy" or "reliable" influences user perceptions and acceptance of automotive AI technologies. In a one-way between-subjects design, 478 online participants were presented with guidelines describing either trustworthy or reliable AI. Participants then evaluated three vignette scenarios and completed a questionnaire based on a modified Technology Acceptance Model, measuring constructs such as perceived ease of use, human-like trust, and overall attitude. Although labeling AI as "trustworthy" did not significantly affect judgments of the specific scenarios, it increased perceived ease of use and human-like trust, particularly benevolence, suggesting a positive effect on perceived usability and an anthropomorphic effect on user perceptions. The study offers insight into how specific labels can shape attitudes toward AI technology.
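The abstract does not report the analysis procedure; purely as a hedged illustration of the kind of between-subjects comparison such a design implies, the sketch below contrasts the two label conditions on a single TAM-style rating (e.g., perceived ease of use) using Welch's t-test. The variable names, group sizes, and simulated ratings are hypothetical and not taken from the study.

```python
# Illustrative sketch only: comparing two label conditions
# ("trustworthy" vs. "reliable") on one TAM-style rating.
# Data are simulated; the study's actual analysis may differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 1-7 Likert-style ratings of perceived ease of use,
# one score per participant in each condition (roughly splitting N = 478).
trustworthy = rng.normal(loc=5.2, scale=1.0, size=240).clip(1, 7)
reliable = rng.normal(loc=4.9, scale=1.0, size=238).clip(1, 7)

# Welch's t-test (unequal variances assumed) comparing the two groups.
t_stat, p_value = stats.ttest_ind(trustworthy, reliable, equal_var=False)

# Cohen's d (pooled SD) as a simple effect-size estimate.
pooled_sd = np.sqrt((trustworthy.var(ddof=1) + reliable.var(ddof=1)) / 2)
cohens_d = (trustworthy.mean() - reliable.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```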