Abstract: Garbage production and littering are persistent global issues that pose significant environmental challenges. Despite large-scale efforts to manage waste through collection and sorting, existing approaches remain inefficient, leading to inadequate recycling and disposal. Developing advanced AI-based systems therefore offers a less labor-intensive approach for addressing the growing waste problem more effectively. Such models can be applied to sorting systems or to waste-collection robots that may be produced in the future. AI models have become significantly more capable of identifying objects through object detection. This paper reviews the implementation of AI models for classifying trash through object detection, focusing specifically on training and testing YOLOv5. The study demonstrates how YOLOv5 can effectively identify various types of waste, including plastic, paper, glass, metal, cardboard, and biodegradables.
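As a minimal sketch of the kind of detection pipeline the abstract describes, the snippet below loads a YOLOv5 model via PyTorch Hub and runs inference on a single image. The checkpoint path `best.pt` (assumed to be fine-tuned on the six waste classes) and the image filename are illustrative assumptions, not artifacts from the paper.

```python
import torch

# Load a YOLOv5 checkpoint via PyTorch Hub. The path to `best.pt`
# (assumed fine-tuned on the six waste classes) is hypothetical.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

# Run detection on a single image (filename is illustrative).
results = model("waste_sample.jpg")

# Each row is one detection: box coordinates, confidence, and class name
# (e.g. plastic, paper, glass, metal, cardboard, biodegradable).
detections = results.pandas().xyxy[0]
print(detections[["name", "confidence"]])
```

In a sorting system, the per-detection class names would drive the actuator routing each item to the correct bin.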
Abstract: Large Language Models (LLMs), originally shown to excel at various text comprehension tasks, have also remarkably been shown to tackle table comprehension tasks without task-specific training. While previous research has explored LLM capabilities on tabular datasets, our study assesses the influence of $\textit{in-context learning}$, $\textit{model scale}$, $\textit{instruction tuning}$, and $\textit{domain biases}$ on Tabular Question Answering (TQA). We evaluate the robustness of LLMs on the Wikipedia-based $\textbf{WTQ}$ and financial-report-based $\textbf{TAT-QA}$ TQA datasets, focusing on their ability to interpret tabular data robustly under various augmentations and perturbations. Our findings indicate that instructions significantly enhance performance, with recent models such as Llama3 exhibiting greater robustness than earlier versions. However, data contamination and practical reliability issues persist, especially with WTQ. We highlight the need for improved methodologies, including structure-aware self-attention mechanisms and better handling of domain-specific tabular data, to develop more reliable LLMs for table comprehension.
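To make the robustness evaluation concrete, the sketch below shows one way a table can be serialized into an in-context prompt and perturbed by shuffling its rows, an answer-preserving transformation under which a robust model should not change its answer. The pipe-delimited serialization format, the example table, and the prompt wording are assumptions for illustration, not the paper's exact protocol.

```python
import random

def serialize_table(header, rows):
    """Flatten a table into a pipe-delimited string for an LLM prompt.
    (The serialization format is an assumption, not the paper's.)"""
    lines = [" | ".join(header)]
    lines += [" | ".join(str(cell) for cell in row) for row in rows]
    return "\n".join(lines)

def shuffle_rows(rows, seed=0):
    """Row-shuffle perturbation: answer-preserving, so a robust model
    should answer identically before and after."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    return rows

header = ["City", "Population"]
rows = [["Berlin", "3.7M"], ["Paris", "2.1M"], ["Madrid", "3.2M"]]
question = "Which city has the largest population?"

prompt = (
    "Answer the question using only the table below.\n\n"
    + serialize_table(header, shuffle_rows(rows))
    + f"\n\nQuestion: {question}\nAnswer:"
)
print(prompt)  # would be passed to the LLM under evaluation
```

Comparing accuracy on the original and perturbed prompts across model scales and instruction-tuned variants gives the kind of robustness measurement the study reports.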