Kalyan Veeramachaneni

Large language models can be zero-shot anomaly detectors for time series?

May 23, 2024

LLMs for XAI: Future Directions for Explaining Explanations

May 09, 2024

Single Word Change is All You Need: Designing Attacks and Defenses for Text Classifiers

Jan 30, 2024

Pyreal: A Framework for Interpretable ML Explanations

Dec 20, 2023

Lessons from Usable ML Deployments and Application to Wind Turbine Monitoring

Dec 05, 2023

Making the End-User a Priority in Benchmarking: OrionBench for Unsupervised Time Series Anomaly Detection

Oct 26, 2023

AER: Auto-Encoder with Regression for Time Series Anomaly Detection

Dec 27, 2022

Sequential Models in the Synthetic Data Vault

Jul 28, 2022

Sintel: A Machine Learning Framework to Extract Insights from Signals

Apr 19, 2022

The Need for Interpretable Features: Motivation and Taxonomy

Feb 23, 2022