- CLEVER: A Curated Benchmark for Formally Verified Code Generation
TL;DR: We introduce CLEVER, a hand-curated benchmark for verified code generation in Lean. It requires full formal specifications and proofs. No few-shot method solves all stages, making it a strong testbed for synthesis and formal reasoning.
- Clever: A Curated Benchmark for Formally Verified Code Generation
We introduce CLEVER, the first curated benchmark for evaluating the generation of specifications and formally verified code in Lean. The benchmark comprises 161 programming problems; it evaluates both formal specification generation and implementation synthesis from natural language, requiring formal correctness proofs for both.
- Evaluating the Robustness of Neural Networks: An Extreme Value. . .
Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness. The proposed CLEVER score is attack-agnostic and computationally feasible for large neural networks.
- EvoTest: Evolutionary Test-Time Learning for Self-Improving Agentic . . .
A fundamental limitation of current AI agents is their inability to learn complex skills on the fly at test time, often behaving like “clever but clueless interns” in novel environments. This severely limits their practical utility. To systematically measure and drive progress on this challenge, we first introduce the Jericho Test-Time Learning (J-TTL) benchmark. J-TTL is a new evaluation
- STAIR: Improving Safety Alignment with Introspective Reasoning
One common approach is training models to refuse unsafe queries, but this strategy can be vulnerable to clever prompts, often referred to as jailbreak attacks, which can trick the AI into providing harmful responses. Our method, STAIR (SafeTy Alignment with Introspective Reasoning), guides models to think more carefully before responding.
- Contrastive Learning Via Equivariant Representation - OpenReview
In this paper, we revisit the roles of augmentation strategies and equivariance in improving CL's efficacy. We propose CLeVER (Contrastive Learning Via Equivariant Representation), a novel equivariant contrastive learning framework compatible with augmentation strategies of arbitrary complexity for various mainstream CL backbone models.
- Dual-Model Defense: Safeguarding Diffusion Models from Membership . . .
Membership inference and memorization are key challenges with diffusion models. Mitigating such vulnerabilities is hence an important topic. The idea of using an ensemble of models is clever.
- La RoSA: Enhancing LLM Efficiency via Layerwise Rotated Sparse. . .
We use a clever technique that involves rotating the data within each layer of the model, making it easier to identify and keep only the most important parts for processing. This ensures that the model remains fast and efficient without losing much accuracy.