prompt-optimizer/docs/workspace/compare-evaluation-analysis/real-api-samples/basic-system-compare/response.json
linshen 2cdd095c2b docs(workspace): consolidate compare evaluation specs and acceptance evidence
- fold earlier planning notes into a single current-spec and archived history structure
- keep manual acceptance steps and real API samples aligned with the refactored analysis/result/compare model
- retain supporting workspace notes needed to review version-selection and evaluation behavior changes
2026-03-18 09:35:44 +08:00


{
  "type": "compare",
  "score": {
    "overall": 65,
    "dimensions": [
      {
        "key": "goalAchievementRobustness",
        "label": "Goal achievement robustness",
        "score": 40
      },
      {
        "key": "outputQualityCeiling",
        "label": "Output quality ceiling",
        "score": 70
      },
      {
        "key": "promptPatternQuality",
        "label": "Prompt pattern quality",
        "score": 30
      },
      {
        "key": "crossSnapshotRobustness",
        "label": "Cross-snapshot robustness",
        "score": 20
      },
      {
        "key": "workspaceTransferability",
        "label": "Transferability to the workspace",
        "score": 80
      }
    ]
  },
  "improvements": [
    "Make the role (e.g. customer-service assistant), task steps (classifying the question type), and output format explicit in the prompt to improve output structure and goal attainment.",
    "Add examples or more concrete constraints to guide the model toward responses that better fit the scenario.",
    "Standardize prompt conventions across models to improve the consistency and reliability of cross-model responses."
  ],
  "summary": "Snapshot B performs better because it adds an explicit role, task steps, and output format, whereas snapshot A's prompt is too vague, producing low-quality, unstructured output.",
  "patchPlan": [],
  "metadata": {
    "model": "dashscope",
    "timestamp": 1773732666570,
    "duration": 8418
  }
}
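For readers consuming this sample programmatically, the payload shape can be sketched as a TypeScript typing. This is inferred from the sample alone, not taken from the project's source: the interface names and the `meanDimensionScore` helper are illustrative, and `patchPlan` is typed loosely because it is empty here. Note that `overall` (65) is not the arithmetic mean of the dimension scores (48), so the aggregation is presumably weighted or assigned by the evaluating model.

```typescript
// Hypothetical typing inferred from the sample above; not an official schema.
interface DimensionScore {
  key: string;   // e.g. "goalAchievementRobustness"
  label: string; // human-readable dimension name
  score: number; // 0-100
}

interface CompareEvaluation {
  type: "compare";
  score: {
    overall: number;
    dimensions: DimensionScore[];
  };
  improvements: string[];
  summary: string;
  patchPlan: unknown[]; // empty in this sample, so the element shape is unknown
  metadata: {
    model: string;     // provider identifier, e.g. "dashscope"
    timestamp: number; // epoch milliseconds
    duration: number;  // milliseconds
  };
}

// Illustrative helper: the unweighted mean of the dimension scores,
// useful for sanity-checking how far `overall` deviates from it.
function meanDimensionScore(e: CompareEvaluation): number {
  const sum = e.score.dimensions.reduce((acc, d) => acc + d.score, 0);
  return sum / e.score.dimensions.length;
}
```

For the sample above, `meanDimensionScore` returns (40 + 70 + 30 + 20 + 80) / 5 = 48, versus the reported `overall` of 65.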