Teen-Different posted an update 17 days ago
Revisiting LIME (Local Interpretable Model-agnostic Explanations) in the age of foundation models feels archaic, but the intuition holds: accuracy isn't a proxy for understanding.



I ran an experiment pitting CNNs against Transformers using a custom SLIC perturbation pipeline to see what they were actually looking at.
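
The pipeline itself isn't shown in the post, but the LIME-style loop it describes is simple: segment the image into superpixels, randomly mask subsets of them, query the model on each perturbed image, and fit a linear surrogate to see which regions drive the score. Here's a minimal numpy-only sketch of that loop; a toy grid segmentation stands in for SLIC, and the function names (`grid_segments`, `perturb`, `lime_weights`) are mine, not from the original pipeline.

```python
import numpy as np

def grid_segments(img, n=4):
    """Toy stand-in for SLIC: split an HxW image into an n x n grid of superpixels."""
    h, w = img.shape[:2]
    rows = np.minimum(np.arange(h) * n // h, n - 1)
    cols = np.minimum(np.arange(w) * n // w, n - 1)
    return rows[:, None] * n + cols[None, :]

def perturb(img, segments, mask, fill=0.0):
    """Gray out every superpixel whose entry in `mask` is 0."""
    out = img.copy()
    keep = np.isin(segments, np.flatnonzero(mask))
    out[~keep] = fill
    return out

def lime_weights(img, segments, predict, n_samples=200, seed=0):
    """Fit a least-squares linear surrogate: which superpixels drive the score?"""
    rng = np.random.default_rng(seed)
    k = int(segments.max()) + 1
    Z = rng.integers(0, 2, size=(n_samples, k))        # random on/off masks
    y = np.array([predict(perturb(img, segments, z)) for z in Z])
    # Linear fit with an intercept column; coefficients = per-superpixel importance
    w, *_ = np.linalg.lstsq(np.c_[Z, np.ones(n_samples)], y, rcond=None)
    return w[:-1]
```

With a toy "model" whose score depends only on one corner of the image, that corner's superpixel gets the dominant weight, which is exactly the kind of background shortcut the results below describe, just made visible.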



The results tell one story: models are lazy students.

• ViT didn’t see a "Jeep"; it recognized a "muddy road" and used a dataset shortcut to guess the vehicle.

• EfficientNet hallucinated a "toaster" just because it saw a white counter.



High confidence based on background noise is a liability. If you aren't visually auditing your decision boundaries, you're just hoping for the best.



Full breakdown of the "Clever Hans" effect below. 👇

https://teendifferent.substack.com/p/your-features-arent-what-you-think