1. LLM Sycophancy: The Risk of Vulnerable Misguidance in AI Medical Advice (giskard.ai) | 2 points | alexcombessie | 3mo ago | 0 comments
2. Agentic Tool Extraction: Multi-turn attacks that expose AI agents (giskard.ai) | 1 point | alexcombessie | 3mo ago | 0 comments
3. LMEval: An Open Source Framework for Cross-Model Evaluation (opensource.googleblog.com) | 2 points | alexcombessie | 10mo ago | 0 comments
4. Show HN: Open-Source Evaluation and Testing for Computer Vision Models (github.com) | 3 points | alexcombessie | 1y ago | 0 comments
6. AI Systems Security: Top Tools for Preventing Prompt Injection (sahbichaieb.com) | 2 points | alexcombessie | 1y ago | 0 comments
7. Scanning LLM app vulnerabilities: Quickstart (docs.giskard.ai) | 1 point | alexcombessie | 1y ago | 0 comments
9. Show HN: Automatic generation of LLM guardrails with NeMo and Giskard (docs.giskard.ai) | 1 point | alexcombessie | 1y ago | 0 comments
11. Open-source AI projects selected by GitHub accelerator (github.blog) | 10 points | alexcombessie | 1y ago | 1 comment
12. Show HN: Open-Source RAG Evaluation Toolkit (docs.giskard.ai) | 6 points | alexcombessie | 1y ago | 0 comments
14. Open-Source Quality Management for AI Models (kdnuggets.com) | 2 points | alexcombessie | 2y ago | 0 comments