CLEVER: A Curated Benchmark for Formally Verified Code Generation TL;DR: We introduce CLEVER, a hand-curated benchmark for verified code generation in Lean. It requires full formal specs and proofs. No few-shot method solves all stages, making it a strong testbed for synthesis and formal reasoning.
Clever: A Curated Benchmark for Formally Verified Code Generation We introduce CLEVER, the first curated benchmark for evaluating the generation of specifications and formally verified code in Lean. The benchmark comprises 161 programming problems; it evaluates both formal specification generation and implementation synthesis from natural language, requiring formal correctness proofs for both.
CLEVER: A Curated Benchmark for Formally Verified Code Generation This paper introduces CLEVER, a benchmark dataset designed to evaluate LLMs on formally verified code generation. It consists of 161 carefully crafted Lean specifications derived from programming problems in the existing HumanEval dataset.
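As an illustration of what such a task involves, here is a minimal hypothetical example in Lean 4 (not drawn from the benchmark itself): an implementation, a formal specification, and a machine-checked correctness proof. The names `addOne` and `addOne_spec` are invented for this sketch.

```lean
-- Hypothetical CLEVER-style task: implement a function and prove it
-- satisfies a formal specification.
def addOne (n : Nat) : Nat := n + 1

-- Specification: the result is strictly greater than the input.
theorem addOne_spec (n : Nat) : addOne n > n :=
  Nat.lt_succ_self n
```

In the benchmark's setting, a model would have to produce both the specification and the implementation from a natural-language description, then discharge the proof obligation.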
The Clever Hans Mirage: A Comprehensive Survey on Spurious... This survey on spurious correlations uses the Clever Hans metaphor to motivate the problem, formalizes a group-based setup g = (y, a) with core metrics (worst-group, average-group, bias-conflicting), and explains why models latch onto shortcuts (simplicity bias, training dynamics).
Counterfactual Debiasing for Fact Verification In this paper, we have proposed a novel counterfactual framework, CLEVER, for debiasing fact-checking models. Unlike existing works, CLEVER is augmentation-free and mitigates biases at the inference stage. In CLEVER, the claim-evidence fusion model and the claim-only model are independently trained to capture the corresponding information.
STAIR: Improving Safety Alignment with Introspective Reasoning One common approach is training models to refuse unsafe queries, but this strategy can be vulnerable to clever prompts, often referred to as jailbreak attacks, which can trick the AI into providing harmful responses. Our method, STAIR (SafeTy Alignment with Introspective Reasoning), guides models to think more carefully before responding.
Evaluating the Robustness of Neural Networks: An Extreme Value... Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness. The proposed CLEVER score is attack-agnostic and is computationally feasible for large neural networks.
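As an illustration only, the extreme-value idea behind a CLEVER-style score can be sketched as follows. Everything here is an assumption for a toy example, not the authors' implementation: a simple differentiable function stands in for a network, gradient norms are sampled in batches around an input, and a reverse Weibull fit to the batch maxima estimates the local Lipschitz constant that bounds how far the margin can shrink.

```python
import numpy as np
from scipy.stats import weibull_max

# Toy function f(x) = sin(w . x); its gradient norm is |cos(w . x)| * ||w||,
# so the true local Lipschitz constant is ||w|| = 5 (illustrative setup).
rng = np.random.default_rng(0)
w = np.array([3.0, 4.0])

def grad_norm(x):
    # ||grad f(x)|| for f(x) = sin(w . x)
    return abs(np.cos(w @ x)) * np.linalg.norm(w)

# Sample gradient norms near the input in batches; keep each batch maximum.
batch_max = [max(grad_norm(rng.uniform(-1.0, 1.0, 2)) for _ in range(50))
             for _ in range(20)]

# Extreme value theory: batch maxima of a bounded quantity follow a reverse
# Weibull distribution; its location parameter estimates the Lipschitz bound.
shape, loc, scale = weibull_max.fit(batch_max)
lipschitz_est = loc

margin = 2.0  # assumed classifier margin at this input (hypothetical)
clever_score = margin / lipschitz_est  # larger score -> more robust input
```

The attack-agnostic character of the score comes from this construction: it bounds the perturbation needed to cross the decision boundary without ever running a specific attack.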
KnowTrace: Explicit Knowledge Tracing for Structured... "This paper introduces a clever incorporation of knowledge graph operation for structured RAG" (Reviewer ifaQ); "The proposed method is straightforward, intuitive, and easy to implement"; "It is innovative that the paper leverages the structured nature of reasoning paths to filter and refine generated trajectories for model training."
On the Planning Abilities of Large Language Models: A Critical... While, as we mentioned earlier, there can be thorny "clever hans" issues about humans prompting LLMs, an automated verifier mechanically backprompting the LLM doesn't suffer from these. We tested this setup on a subset of the failed instances in the one-shot natural language prompt configuration using GPT-4, given its larger context window.