Abstract: Legal multi-label classification is a critical task for organizing and accessing the vast amount of legal documentation. Despite its importance, it faces challenges such as the complexity of legal language, intricate label dependencies, and significant label imbalance. In this paper, we propose Legal-LLM, a novel approach that leverages the instruction-following capabilities of Large Language Models (LLMs) through fine-tuning. We reframe the multi-label classification task as a structured generation problem, instructing the LLM to directly output the relevant legal categories for a given document. We evaluate our method on two benchmark datasets, POSTURE50K and EURLEX57K, using micro-F1 and macro-F1 scores. Our experimental results demonstrate that Legal-LLM outperforms a range of strong baseline models, including traditional methods and other Transformer-based approaches. Furthermore, ablation studies and human evaluations validate the effectiveness of our approach, particularly in handling label imbalance and generating relevant and accurate legal labels.
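The sketch below illustrates, in schematic form, the reframing this abstract describes: multi-label classification cast as instruction-following generation, with the generated text mapped back onto a closed label set and scored with micro-F1 and macro-F1. It is a minimal illustration, not the authors' pipeline; the label subset, prompt wording, and answer format are assumptions made for the example.

```python
# Minimal sketch (not the Legal-LLM prompt or training setup): frame multi-label
# legal classification as structured generation, parse the generated label list,
# and score it with micro/macro F1. Labels and documents here are hypothetical.
from sklearn.metrics import f1_score
from sklearn.preprocessing import MultiLabelBinarizer

LABELS = ["Motion to Dismiss", "Summary Judgment", "Class Certification"]  # illustrative subset

def build_instruction(document: str) -> str:
    """Turn a document into an instruction prompt asking for a label list."""
    return (
        "Identify all applicable legal categories for the document below.\n"
        f"Allowed categories: {', '.join(LABELS)}\n"
        "Answer with a semicolon-separated list.\n\n"
        f"Document: {document}"
    )

def parse_labels(generated: str) -> list[str]:
    """Map the model's free-text answer back onto the closed label set."""
    return [lab for lab in LABELS if lab.lower() in generated.lower()]

# Toy evaluation with stand-ins for the fine-tuned model's generations.
gold = [["Motion to Dismiss"], ["Summary Judgment", "Class Certification"]]
generated_texts = ["Motion to Dismiss", "Summary Judgment"]  # second answer misses a label
pred = [parse_labels(t) for t in generated_texts]

mlb = MultiLabelBinarizer(classes=LABELS)
y_true, y_pred = mlb.fit_transform(gold), mlb.transform(pred)
print("micro-F1:", f1_score(y_true, y_pred, average="micro", zero_division=0))
print("macro-F1:", f1_score(y_true, y_pred, average="macro", zero_division=0))
```

Macro-F1 weights every label equally, which is why it is the more sensitive of the two scores to the rare-label (imbalance) behavior the abstract emphasizes.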
Abstract: Text-to-image generation has witnessed significant advancements with the integration of Large Vision-Language Models (LVLMs), yet challenges remain in aligning complex textual descriptions with high-quality, visually coherent images. This paper introduces the Vision-Language Aligned Diffusion (VLAD) model, a generative framework that addresses these challenges through a dual-stream strategy combining semantic alignment and hierarchical diffusion. VLAD utilizes a Contextual Composition Module (CCM) to decompose textual prompts into global and local representations, ensuring precise alignment with visual features. Furthermore, it incorporates a multi-stage diffusion process with hierarchical guidance to generate high-fidelity images. Experiments conducted on the MARIO-Eval and INNOVATOR-Eval benchmarks demonstrate that VLAD significantly outperforms state-of-the-art methods in terms of image quality, semantic alignment, and text rendering accuracy. Human evaluations further validate the superior performance of VLAD, making it a promising approach for text-to-image generation in complex scenarios.
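To make the dual-stream idea concrete, the toy sketch below conditions an early, coarse denoising stage on a global prompt embedding and a later refinement stage on local, phrase-level embeddings. It is only a schematic under stated assumptions: the hash-seeded text encoder, module sizes, update rule, and two-stage split are all placeholders and do not reflect the VLAD architecture or its CCM.

```python
# Toy sketch of global/local conditioning with staged denoising; NOT the VLAD model.
# All components (embed, ToyDenoiser, step sizes, stage lengths) are illustrative.
import torch
import torch.nn as nn

def embed(text: str, dim: int = 64) -> torch.Tensor:
    """Stand-in text encoder: deterministic pseudo-embedding from a hash seed."""
    g = torch.Generator().manual_seed(abs(hash(text)) % (2**31))
    return torch.randn(dim, generator=g)

class ToyDenoiser(nn.Module):
    """Tiny MLP standing in for a conditioned denoising network."""
    def __init__(self, img_dim: int = 128, cond_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(img_dim + cond_dim, 256), nn.ReLU(),
                                 nn.Linear(256, img_dim))

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, cond], dim=-1))

prompt = "a red kite flying over a snowy mountain village"
global_cond = embed(prompt)                                                  # global stream
local_conds = [embed(p) for p in ["a red kite", "snowy mountain village"]]   # local stream

denoiser = ToyDenoiser()
x = torch.randn(128)                      # start from noise
for _ in range(8):                        # stage 1: coarse layout guided by the global stream
    x = x - 0.1 * denoiser(x, global_cond)
for cond in local_conds:                  # stage 2: refinement guided by phrase-level cues
    for _ in range(4):
        x = x - 0.1 * denoiser(x, cond)
print("refined latent norm:", x.norm().item())
```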
Abstract: The integration of artificial intelligence into agricultural practices, specifically through Consultation on Intelligent Agricultural Machinery Management (CIAMM), has the potential to revolutionize efficiency and sustainability in farming. This paper introduces a novel approach that leverages large language models (LLMs), particularly GPT-4, combined with multi-round prompt engineering to enhance decision-making processes in agricultural machinery management. We systematically developed and refined prompts to guide the LLMs in generating precise and contextually relevant outputs. Our approach was evaluated using a manually curated dataset drawn from various online sources, and performance was assessed with accuracy and GPT-4 scores. Comparative experiments were conducted using the Llama-2-70B, ChatGPT, and GPT-4 models, alongside baseline and state-of-the-art methods such as Chain of Thought (CoT) and Thread of Thought (ThoT). The results demonstrate that our method significantly outperforms these approaches, achieving higher accuracy and relevance in generated responses. This paper highlights the potential of advanced prompt engineering techniques in improving the robustness and applicability of AI in agricultural contexts.
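The following sketch shows the general shape of a multi-round prompting loop like the one this abstract describes: a first round drafts a recommendation, and subsequent rounds feed the draft back with an explicit refinement instruction. It is a minimal illustration, not the paper's prompts; `call_llm` is a hypothetical stand-in for a real chat-completion call to GPT-4 or Llama-2-70B, and the refinement criteria are assumed for the example.

```python
# Minimal sketch of multi-round prompt refinement; not the CIAMM prompt set.
# `call_llm` is a placeholder for a real LLM API call and returns canned text here.
def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion request to GPT-4 / Llama-2-70B."""
    return f"[model response to: {prompt[:60]}...]"

def multi_round_consult(question: str, rounds: int = 2) -> str:
    """Round 1 drafts an answer; later rounds refine it against explicit criteria."""
    draft = call_llm(
        "You are an agricultural machinery management consultant.\n"
        f"Question: {question}\nGive a concise recommendation."
    )
    for _ in range(rounds - 1):
        draft = call_llm(
            "Refine the recommendation below. State machine settings, maintenance "
            "intervals, and safety constraints explicitly, and flag missing context.\n"
            f"Question: {question}\nDraft: {draft}"
        )
    return draft

print(multi_round_consult(
    "How often should a combine harvester's air filter be serviced during a dry harvest season?"
))
```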