GLM-4.7-Flash: 30B MoE model achieves 59.2% on SWE-bench, runs on 24GB GPUs | Better HN