Recent studies have proposed using large language models (LLMs) for sentence embeddings. However, most existing LLMs adopt an autoregressive architecture that primarily captures forward dependencies while neglecting backward dependencies, which previous work has shown to be important for improving sentence embeddings. To address this issue, we first present quantitative evidence that LLMs learn backward dependencies only to a limited extent. We then propose a novel approach, the Dependency-Enhanced Large Language Model (DeeLM), to improve sentence embeddings. Specifically, we identify a turning point in LLMs: beyond certain layers, performance drops significantly on semantic textual similarity (STS), a key task for evaluating sentence embeddings. We therefore take the layers after the turning point and make them bidirectional, allowing them to learn backward dependencies. Extensive experiments demonstrate that DeeLM outperforms baselines and achieves state-of-the-art performance across various STS tasks.
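
To make the core idea concrete, below is a minimal sketch, not the authors' implementation: it assumes a toy transformer stack built from `torch.nn.TransformerEncoderLayer`, a hypothetical `turning_point` layer index, and mean pooling to obtain a sentence embedding. Layers before the turning point keep a causal mask (forward dependencies only), while layers after it drop the mask and attend bidirectionally, so they can also model backward dependencies.

```python
# Minimal sketch (assumed setup, not the paper's code): a toy decoder-style
# stack where layers beyond a hypothetical "turning point" drop the causal
# mask and attend bidirectionally, capturing backward dependencies.
import torch
import torch.nn as nn


class MixedDirectionStack(nn.Module):
    def __init__(self, num_layers=12, turning_point=8, d_model=256, n_heads=4):
        super().__init__()
        self.turning_point = turning_point  # hypothetical layer index
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(num_layers)
        )

    def forward(self, x):  # x: (batch, seq_len, d_model)
        seq_len = x.size(1)
        # Upper-triangular -inf mask blocks attention to future tokens (causal).
        causal_mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=x.device),
            diagonal=1,
        )
        for i, layer in enumerate(self.layers):
            if i < self.turning_point:
                x = layer(x, src_mask=causal_mask)  # forward dependencies only
            else:
                x = layer(x)  # bidirectional: backward dependencies as well
        return x


# Usage sketch: mean-pool final hidden states as the sentence embedding.
model = MixedDirectionStack()
hidden = model(torch.randn(2, 16, 256))
sentence_emb = hidden.mean(dim=1)  # shape: (2, 256)
```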