copy and paste this google map to your website or blog!
Press copy button and paste into your blog or website.
(Please switch to 'HTML' mode when posting into your blog. Examples: WordPress Example, Blogger Example)
Evaluating the Robustness of Neural Networks: An Extreme Value. . . Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness The proposed CLEVER score is attack-agnostic and is computationally feasible for large neural networks
Counterfactual Debiasing for Fact Verification 579 In this paper, we have proposed a novel counter- factual framework CLEVER for debiasing fact- checking models Unlike existing works, CLEVER is augmentation-free and mitigates biases on infer- ence stage In CLEVER, the claim-evidence fusion model and the claim-only model are independently trained to capture the corresponding information
Leaving the barn door open for Clever Hans: Simple features predict. . . This phenomenon, widely known in human and animal experiments, is often referred to as the 'Clever Hans' effect, where tasks are solved using spurious cues, often involving much simpler processes than those putatively assessed Previous research suggests that language models can exhibit this behaviour as well
EVALUATING THE ROBUSTNESS OF NEURAL NET : A E VALUE THEORY APPROACH te the CLEVER scores for the same set of images and attack targets To the best of our knowledge, CLEVER is the first attack-independent robustness score that is capable of handling the large networks studied in this paper, so we directly r `2 and `1 norms, and Figure 4 visualizes the results for `1 norm Similarly, Table 2 comp
Ignore Previous Prompt: Attack Techniques For Language Models Abstract Transformer-based large language models (LLMs) provide a powerful foundation for natural language tasks in large-scale customer-facing applications However, studies that explore their vulnerabilities emerging from malicious user interac-tion are scarce By proposing PROMPTINJECT, a prosaic alignment framework for mask-based iterative adversarial prompt composition, we examine how GPT
Learnable Representative Coefficient Image Denoiser for. . . Fully characterizing the spatial-spectral priors of hyperspectral images (HSIs) is crucial for HSI denoising tasks Recently, HSI denoising models based on representative coefficient images (RCIs) under the spectral low-rank decomposition framework have garnered significant attention due to their clever utilization of spatial-spectral information in HSI at a low cost However, current methods
LLaVA-OneVision: Easy Visual Task Transfer | OpenReview We present LLaVA-OneVision, a family of open large multimodal models (LMMs) developed by consolidating our insights into data, models, and visual representations in the LLaVA-NeXT blog series Our
Weakly-Supervised Affordance Grounding Guided by Part-Level. . . In this work, we focus on the task of weakly supervised affordance grounding, where a model is trained to identify affordance regions on objects using human-object interaction images and egocentric
Submissions | OpenReview Leaving the barn door open for Clever Hans: Simple features predict LLM benchmark answers Lorenzo Pacchiardi, Marko Tesic, Lucy G Cheke, Jose Hernandez-Orallo 27 Sept 2024 (modified: 05 Feb 2025) Submitted to ICLR 2025 Readers: Everyone