
Vipin Chaudhary

Demystifying Hybrid Thinking: Can LLMs Truly Switch Between Think and No-Think?

Oct 14, 2025

Don't Pass$\mathtt{@}k$: A Bayesian Framework for Large Language Model Evaluation

Oct 05, 2025

LABELING COPILOT: A Deep Research Agent for Automated Data Curation in Computer Vision

Sep 26, 2025

$K^4$: Online Log Anomaly Detection Via Unsupervised Typicality Learning

Jul 26, 2025

AutoL2S: Auto Long-Short Reasoning for Efficient Large Language Models

May 28, 2025

Grammars of Formal Uncertainty: When to Trust LLMs in Automated Reasoning Tasks

May 26, 2025

100-LongBench: Are de facto Long-Context Benchmarks Literally Evaluating Long-Context Ability?

May 25, 2025

Longer Context, Deeper Thinking: Uncovering the Role of Long-Context Ability in Reasoning

May 22, 2025

SELF: Self-Extend the Context Length With Logistic Growth Function

May 22, 2025

70% Size, 100% Accuracy: Lossless LLM Compression for Efficient GPU Inference via Dynamic-Length Float

Apr 15, 2025