diff --git a/.gitignore b/.gitignore
new file mode 100644
index 000000000000..7a60b85e148f
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+*.pyc
diff --git "a/README_\345\256\241\347\250\277\347\263\273\347\273\237\344\275\277\347\224\250\346\214\207\345\215\227.md" "b/README_\345\256\241\347\250\277\347\263\273\347\273\237\344\275\277\347\224\250\346\214\207\345\215\227.md"
new file mode 100644
index 000000000000..c49bc8185ccd
--- /dev/null
+++ "b/README_\345\256\241\347\250\277\347\263\273\347\273\237\344\275\277\347\224\250\346\214\207\345\215\227.md"
@@ -0,0 +1,118 @@
+# Distributed Review Coordination System - User Guide
+
+## What is this?
+
+This is a Streamlit-based **web app** that simulates the multi-round review workflow of an academic journal. It includes three agents:
+- **EIC (Editor-in-Chief)** - coordinates the review, synthesizes scores, and makes the final decision
+- **Reviewer** - evaluates the manuscript along each dimension and provides revision comments
+- **Author** - receives feedback, responds to comments one by one, and revises the manuscript
+
+---
+
+## Running from Scratch (macOS)
+
+### Step 1: Open a terminal
+
+Press `Command + Space`, type `Terminal`, and press Return.
+
+### Step 2: Download the code
+
+Enter the following commands in the terminal, one at a time:
+
+```bash
+git clone https://github.com/chenhaozhou1996/streamlit-example.git
+```
+
+```bash
+cd streamlit-example
+```
+
+```bash
+git checkout claude/manuscript-review-coordinator-wz0mC
+```
+
+### Step 3: Install dependencies
+
+```bash
+pip3 install streamlit pandas altair
+```
+
+> If you see `pip3: command not found`, try:
+> ```bash
+> python3 -m pip install streamlit pandas altair
+> ```
+
+### Step 4: Run the app
+
+```bash
+streamlit run manuscript_review_app.py
+```
+
+Once it starts, the terminal prints something like:
+```
+ Local URL: http://localhost:8501
+```
+
+Your browser should **open automatically** to that address. If it doesn't, enter `http://localhost:8501` in your browser manually.
+
+### Step 5: Stop the app
+
+Back in the terminal, press `Ctrl + C` to stop it.
+
+---
+
+## Usage
+
+### Basic workflow
+
+1. **Configure parameters** - edit the paper title, abstract, and maximum review rounds in the left sidebar
+2. **Submit the manuscript** - click 「提交稿件并开始审稿流程」 (submit and start the review)
+3. **Run a review round** - click 「执行第 N 轮审稿」; the 3 reviewers each evaluate the manuscript
+4. **Inspect the results** - review reports, the EIC synthesis, score trends, and disagreement analysis
+5. **Author revision** - if revisions are requested, click 「作者开始修改」 (author starts revising)
+6. **Loop** - the system advances to the next round automatically until the paper is accepted or rejected
+7. **Download reports** - when finished, download the full report as TXT/JSON/CSV
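Conceptually, steps 3-6 above form a loop. A minimal sketch, using hypothetical `review`/`synthesize`/`revise` method names (the real app drives this through Streamlit buttons and session state, not a bare loop):

```python
# Minimal sketch of the review loop: reviewers report, the EIC decides,
# the author revises, and the cycle repeats until a terminal decision or
# the round limit. The agent objects here are stand-ins, not the app's API.
def review_loop(manuscript, reviewers, eic, author, max_rounds=5):
    for round_no in range(1, max_rounds + 1):
        reports = [r.review(manuscript, round_no) for r in reviewers]
        decision = eic.synthesize(reports)      # accept / reject / revise
        if decision in ("accept", "reject"):
            return decision, round_no           # terminal decision
        manuscript = author.revise(manuscript, reports)
    return "reject", max_rounds                 # ran out of rounds
```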
+
+### Auto mode
+
+Click 「自动运行完整流程」 to run all rounds in one click; the review-revise-re-review loop completes automatically.
+
+### Available panels
+
+| Tab | Contents |
+|--------|------|
+| 审稿报告 (review reports) | each reviewer's detailed scores and comments |
+| EIC综合意见 (EIC synthesis) | the editor's overall assessment and consensus analysis |
+| 意见追踪 (comment tracking) | resolution status of each comment (addressed / partial / unaddressed) |
+| 评分趋势 (score trends) | per-dimension score curves across rounds |
+| 分歧分析 (disagreement) | heatmap of reviewer scores to visualize disagreement |
+| 版本对比 (version diff) | section-level differences between manuscript versions |
+| 收敛趋势 (convergence) | whether the composite score is converging toward the acceptance threshold |
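As a rough illustration of what the convergence panel plots, a composite score can be built from the five per-dimension scores using the weights the app defines in `manuscript_review_app.py` (novelty 0.25, methodology 0.25, writing 0.15, significance 0.20, data analysis 0.15; they sum to 1.0):

```python
# Sketch: compute a round's composite score from per-dimension scores,
# using the dimension weights defined in the app (sums to 1.0).
WEIGHTS = {"novelty": 0.25, "methodology": 0.25, "writing": 0.15,
           "significance": 0.20, "data_analysis": 0.15}

def composite(scores: dict) -> float:
    return round(sum(scores[d] * w for d, w in WEIGHTS.items()), 2)
```

The panel then simply tracks this composite across rounds against the acceptance threshold.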
+
+### Downloads
+
+When the review completes, three download buttons appear at the bottom of the page:
+- **TXT** - full review report (for human reading)
+- **JSON** - structured data (for programmatic analysis)
+- **CSV** - score summary table (opens in Excel)
+
+---
+
+## FAQ
+
+### Q: `git clone` fails?
+Make sure Git is installed. On a Mac, run `git --version` in the terminal; if Git is missing, macOS will prompt you to install it.
+
+### Q: `pip3 install` fails?
+Try:
+```bash
+python3 -m pip install --user streamlit pandas altair
+```
+
+### Q: Port already in use?
+```bash
+streamlit run manuscript_review_app.py --server.port 8502
+```
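If you want to check first whether a port is actually taken, a small standard-library sketch:

```python
# Returns True when nothing is listening on the port, i.e. Streamlit can use it.
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) != 0  # 0 means something answered
```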
+
+### Q: Browser didn't open automatically?
+Manually visit the `Local URL` shown in the terminal (usually `http://localhost:8501`).
diff --git "a/README_\346\217\222\344\273\266\344\275\277\347\224\250\346\214\207\345\215\227.md" "b/README_\346\217\222\344\273\266\344\275\277\347\224\250\346\214\207\345\215\227.md"
new file mode 100644
index 000000000000..b4217f665ffa
--- /dev/null
+++ "b/README_\346\217\222\344\273\266\344\275\277\347\224\250\346\214\207\345\215\227.md"
@@ -0,0 +1,207 @@
+# Review Coordination System - Plugin Guide
+
+This project can be used in **3 ways**:
+
+| Mode | File | Highlights |
+|------|------|------|
+| Streamlit web app | `manuscript_review_app.py` | visual UI, mouse-driven |
+| MCP plugin | `mcp_review_server.py` | callable directly from Claude Code |
+| Agent SDK version | `review_agent.py` | genuine AI multi-agent reviewing |
+
+---
+
+## Option 1: MCP Plugin (inside Claude Code)
+
+### Installation
+
+```bash
+pip install mcp
+```
+
+### Configuring Claude Code
+
+Add an MCP server to your Claude Code settings. Edit `~/.claude/claude_desktop_config.json`:
+
+```json
+{
+ "mcpServers": {
+ "manuscript-review": {
+ "command": "python3",
+      "args": ["/your/path/streamlit-example/mcp_review_server.py"]
+ }
+ }
+}
+```
+
+Or configure it in the project's `.mcp.json`:
+
+```json
+{
+ "mcpServers": {
+ "manuscript-review": {
+ "command": "python3",
+ "args": ["mcp_review_server.py"]
+ }
+ }
+}
+```
+
+### Restart Claude Code
+
+After configuring, restart Claude Code. Then you can simply say:
+
+> "Review a manuscript for me; the title is XXX and the abstract is XXX"
+
+Claude will automatically call these tools:
+
+| Tool | Description |
+|------|------|
+| `init_review` | create a review session |
+| `run_review` | run one review round |
+| `author_revise` | author revises the manuscript |
+| `next_round` | advance to the next round |
+| `get_status` | check session status |
+| `export_report` | export the report |
+| `auto_run` | run the whole workflow automatically |
+| `list_sessions` | list all sessions |
+
+### Usage example
+
+In Claude Code, type:
+
+```
+Initialize a review. The title is "基于深度学习的图像分类研究" and the abstract is "本文提出了一种新的CNN架构..."
+```
+
+Then:
+
+```
+Run the review
+```
+
+```
+Have the author revise
+```
+
+```
+Export the review report
+```
+
+---
+
+## Option 2: Agent SDK Version (real AI reviewing)
+
+This version uses the Claude API to generate genuine review comments (not canned templates).
+
+### Installation
+
+```bash
+pip install anthropic
+```
+
+### Set your API key
+
+```bash
+export ANTHROPIC_API_KEY=sk-ant-your-key
+```
+
+### Running from the command line
+
+```bash
+# Basic usage
+python3 review_agent.py \
+ --title "基于多代理协调的分布式审稿系统研究" \
+ --abstract "本文提出了一种基于多代理协调的分布式审稿系统..."
+
+# Choose the model and number of rounds
+python3 review_agent.py \
+    --title "Your paper title" \
+    --abstract "Your abstract" \
+ --max-rounds 3 \
+ --model claude-haiku-4-5-20251001
+
+# Export a report
+python3 review_agent.py \
+    --title "Paper title" \
+    --abstract "Abstract" \
+ --export report.txt
+
+# Quiet mode (JSON output only)
+python3 review_agent.py \
+    --title "Paper title" \
+    --abstract "Abstract" \
+ --quiet
+```
+
+### Using it as a Python module
+
+```python
+import asyncio
+from review_agent import ReviewOrchestrator, AgentConfig
+
+config = AgentConfig(model="claude-haiku-4-5-20251001")
+orch = ReviewOrchestrator(config=config, max_rounds=3)
+
+result = asyncio.run(orch.run(
+    title="Your paper title",
+    abstract="Your abstract",
+ verbose=True
+))
+
+# Export the report
+print(orch.export_report())
+```
+
+### Sample run output
+
+```
+==================================================
+ 分布式审稿协调代理系统
+ 稿件: 基于深度学习的图像分类研究
+==================================================
+
+────────────────────────────────
+ 第 1 轮审稿
+────────────────────────────────
+ 审稿人A (方法论专家) 审稿中...
+ → 评分: 6.2 | 建议: major_revision | 意见: 4条
+ 审稿人B (领域专家) 审稿中...
+ → 评分: 7.5 | 建议: minor_revision | 意见: 2条
+ 审稿人C (写作审查) 审稿中...
+ → 评分: 5.8 | 建议: major_revision | 意见: 5条
+
+ EIC 综合分析中...
+ → 建议: major_revision | 综合分: 6.5
+ → 共识度: medium
+ → 关键问题: 方法论需要更多验证, 写作需全面润色
+
+ 作者修改中...
+ → 回复 11 条意见, 解决 9 条
+ → 反驳 2 点
+
+────────────────────────────────
+ 第 2 轮审稿
+────────────────────────────────
+ ...
+
+==================================================
+ 审稿完成
+ 最终决定: accept
+ 评分趋势: 6.5 → 8.1
+ 总轮数: 2
+==================================================
+```
+
+---
+
+## Option 3: Streamlit Web App
+
+See `README_审稿系统使用指南.md`.
+
+---
+
+## Cost Notes
+
+- **MCP plugin**: uses no API credits (simulated logic)
+- **Agent SDK version**: each review round uses roughly 5,000 tokens (with the Haiku model, about $0.005/round)
+- **Streamlit web app**: uses no API credits (simulated logic)
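The Agent SDK figure above is simple arithmetic: at roughly 5,000 tokens per round, $0.005/round implies a blended rate of about $1 per million tokens. That rate is an assumption implied by the numbers above, not official pricing:

```python
# Back-of-envelope cost estimate. USD_PER_MILLION_TOKENS is the blended
# input/output rate implied by the figures above, not a quoted price.
TOKENS_PER_ROUND = 5_000
USD_PER_MILLION_TOKENS = 1.0

def cost_per_round(tokens: int = TOKENS_PER_ROUND) -> float:
    return tokens / 1_000_000 * USD_PER_MILLION_TOKENS
```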
diff --git a/manuscript_review_app.py b/manuscript_review_app.py
new file mode 100644
index 000000000000..8a51a8ad5de5
--- /dev/null
+++ b/manuscript_review_app.py
@@ -0,0 +1,2977 @@
+"""
+Distributed Storage Coordination Agent - Manuscript Review System
+
+Architecture:
+- EIC (Editor-in-Chief) Agent: overall coordination of the review workflow
+- Reviewer Agent(s): score the manuscript and provide revision comments
+- Author Agent: revises the manuscript based on feedback
+
+Workflow:
+  Author submits → EIC assigns reviewers → Reviewers provide feedback
+  → Author revises → EIC re-coordinates review → Loop until satisfied
+"""
+
+import streamlit as st
+import json
+import uuid
+import time
+import random
+import hashlib
+from datetime import datetime
+from dataclasses import dataclass, field, asdict
+from typing import List, Dict, Optional, Literal
+from enum import Enum
+import pandas as pd
+
+# Claude API (optional - used for the AI review mode)
+try:
+ import anthropic
+ HAS_ANTHROPIC = True
+except ImportError:
+ HAS_ANTHROPIC = False
+
+
+# ============================================================
+# 1. Data Models
+# ============================================================
+
+class ReviewDecision(str, Enum):
+ ACCEPT = "accept"
+ MINOR_REVISION = "minor_revision"
+ MAJOR_REVISION = "major_revision"
+ REJECT = "reject"
+ PENDING = "pending"
+
+
+class CommentResolution(str, Enum):
+ PENDING = "pending"
+ ADDRESSED = "addressed"
+ PARTIALLY_ADDRESSED = "partially_addressed"
+ NOT_ADDRESSED = "not_addressed"
+
+
+class AgentRole(str, Enum):
+ EIC = "eic"
+ REVIEWER = "reviewer"
+ AUTHOR = "author"
+
+
+class MessageType(str, Enum):
+ SUBMIT = "submit"
+ ASSIGN = "assign"
+ REVIEW = "review"
+ FEEDBACK_SUMMARY = "feedback_summary"
+ REVISION = "revision"
+ DECISION = "decision"
+ RE_REVIEW = "re_review"
+ FINAL_DECISION = "final_decision"
+
+
+@dataclass
+class AgentMessage:
+    """Inter-agent communication message"""
+ id: str = field(default_factory=lambda: str(uuid.uuid4())[:8])
+ sender: str = ""
+ sender_role: str = ""
+ receiver: str = ""
+ receiver_role: str = ""
+ msg_type: str = ""
+ content: str = ""
+ metadata: dict = field(default_factory=dict)
+ timestamp: str = field(default_factory=lambda: datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
+ round_number: int = 1
+
+
+@dataclass
+class ReviewComment:
+    """A single review comment (with resolution tracking)"""
+ id: str = field(default_factory=lambda: str(uuid.uuid4())[:8])
+ reviewer_id: str = ""
+ category: str = "" # methodology, writing, novelty, data, structure
+ severity: str = "" # critical, major, minor, suggestion
+ comment: str = ""
+ addressed: bool = False
+ resolution: str = CommentResolution.PENDING.value
+ response: str = ""
+ author_response: str = ""
+ verification_note: str = ""
+ round_created: int = 1
+ round_resolved: int = 0
+
+
+@dataclass
+class ReviewReport:
+    """Review report"""
+ reviewer_id: str = ""
+ reviewer_name: str = ""
+ round_number: int = 1
+ decision: str = ReviewDecision.PENDING.value
+ overall_score: float = 0.0
+ scores: dict = field(default_factory=dict)
+ comments: list = field(default_factory=list)
+ summary: str = ""
+ confidence: float = 0.0
+ timestamp: str = field(default_factory=lambda: datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
+
+
+@dataclass
+class ManuscriptVersion:
+    """Manuscript version"""
+ version: int = 1
+ title: str = ""
+ abstract: str = ""
+ content_sections: dict = field(default_factory=dict)
+ revision_notes: str = ""
+ timestamp: str = field(default_factory=lambda: datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
+ content_hash: str = ""
+
+
+@dataclass
+class ManuscriptState:
+    """Global manuscript state"""
+ manuscript_id: str = field(default_factory=lambda: str(uuid.uuid4())[:8])
+ current_round: int = 1
+ max_rounds: int = 5
+ status: str = "submitted" # submitted, under_review, revision_requested, revised, accepted, rejected
+ current_version: int = 1
+ versions: list = field(default_factory=list)
+ review_reports: list = field(default_factory=list)
+ eic_decisions: list = field(default_factory=list)
+ message_log: list = field(default_factory=list)
+ assigned_reviewers: list = field(default_factory=list)
+ comment_resolutions: dict = field(default_factory=dict) # {comment_id: resolution_status}
+
+
+# ============================================================
+# 1.5 Utility Classes
+# ============================================================
+
+class CommentTracker:
+    """Tracks the lifecycle of review comments across rounds"""
+
+ @staticmethod
+ def get_all_comments(review_history: list) -> list:
+        """Collect the unique review comments from all rounds"""
+ all_comments = []
+ seen_texts = set()
+ for rh in review_history:
+ for report in rh.get('reports', []):
+ for c in report.get('comments', []):
+ comment_key = (c.get('reviewer_id', ''), c.get('comment', ''))
+ if comment_key not in seen_texts:
+ seen_texts.add(comment_key)
+ all_comments.append(c)
+ return all_comments
+
+ @staticmethod
+ def get_resolution_summary(review_history: list) -> dict:
+        """Summarize the resolution status of all comments"""
+ comments = CommentTracker.get_all_comments(review_history)
+ summary = {
+ 'total': len(comments),
+ 'addressed': 0,
+ 'partially_addressed': 0,
+ 'not_addressed': 0,
+ 'pending': 0
+ }
+ for c in comments:
+ status = c.get('resolution', CommentResolution.PENDING.value)
+ if status == CommentResolution.ADDRESSED.value:
+ summary['addressed'] += 1
+ elif status == CommentResolution.PARTIALLY_ADDRESSED.value:
+ summary['partially_addressed'] += 1
+ elif status == CommentResolution.NOT_ADDRESSED.value:
+ summary['not_addressed'] += 1
+ else:
+ summary['pending'] += 1
+ return summary
+
+
+class VersionDiffer:
+    """Manuscript version diff utility"""
+
+ @staticmethod
+ def diff_versions(v1: dict, v2: dict) -> dict:
+        """Compute section-level differences between two versions"""
+ sections_v1 = v1.get('content_sections', {})
+ sections_v2 = v2.get('content_sections', {})
+ all_sections = set(list(sections_v1.keys()) + list(sections_v2.keys()))
+ diffs = {}
+
+ for sec in sorted(all_sections):
+ old = sections_v1.get(sec, '')
+ new = sections_v2.get(sec, '')
+ if sec not in sections_v1:
+ diffs[sec] = {'status': 'added', 'old': '', 'new': new}
+ elif sec not in sections_v2:
+ diffs[sec] = {'status': 'removed', 'old': old, 'new': ''}
+ elif old != new:
+ diffs[sec] = {'status': 'modified', 'old': old, 'new': new}
+ else:
+ diffs[sec] = {'status': 'unchanged', 'old': old, 'new': new}
+
+ return {
+ 'v1_version': v1.get('version', '?'),
+ 'v2_version': v2.get('version', '?'),
+ 'v1_hash': v1.get('content_hash', ''),
+ 'v2_hash': v2.get('content_hash', ''),
+ 'section_diffs': diffs,
+ 'modified_sections': [s for s, d in diffs.items() if d['status'] != 'unchanged'],
+ 'total_sections': len(all_sections)
+ }
+
+ @staticmethod
+ def get_version_timeline(manuscript_id: str, store) -> list:
+        """Build the version timeline"""
+ versions = store.get_manuscript_versions(manuscript_id)
+ timeline = []
+ for i, v in enumerate(versions):
+ entry = {
+ 'version': v.get('version', i + 1),
+ 'timestamp': v.get('timestamp', ''),
+ 'hash': v.get('content_hash', ''),
+ 'revision_notes': v.get('revision_notes', ''),
+ 'sections_modified': []
+ }
+ if i > 0:
+ diff = VersionDiffer.diff_versions(versions[i - 1], v)
+ entry['sections_modified'] = diff['modified_sections']
+ timeline.append(entry)
+ return timeline
+
+
+# ============================================================
+# 1.6 AI Review Engine (Claude API - optional)
+# ============================================================
+
+class AIReviewEngine:
+    """
+    A review engine backed by the Claude API.
+    Enabled when the user supplies an API key; replaces the simulated logic.
+    """
+
+ EIC_PROMPT = """你是学术期刊主编(EIC)。综合审稿意见做出决策。
+输出严格JSON格式(无多余文字):
+{
+ "recommendation": "accept/minor_revision/major_revision/reject",
+ "overall_score": 1-10,
+ "consensus_level": 0-1,
+ "key_issues": ["问题1", "问题2"],
+ "guidance": "给作者的修改指导(200字内)",
+ "reasoning": "决策理由(100字内)"
+}"""
+
+ REVIEWER_PROMPT = """你是学术审稿人,专长: {expertise},风格: {personality}。
+对以下稿件进行专业审稿。输出严格JSON格式(无多余文字):
+{{
+ "scores": {{"novelty": 1-10, "methodology": 1-10, "writing": 1-10, "significance": 1-10, "data_analysis": 1-10}},
+ "overall_score": 加权总分,
+ "decision": "accept/minor_revision/major_revision/reject",
+ "comments": [
+ {{"category": "维度名", "severity": "critical/major/minor", "comment": "具体意见"}}
+ ],
+ "summary": "50字内审稿总结",
+ "confidence": 0-1
+}}"""
+
+ AUTHOR_PROMPT = """你是论文作者,需逐条回复审稿意见。
+输出严格JSON格式(无多余文字):
+{
+ "responses": [
+ {
+ "original_comment": "原始意见",
+ "category": "维度",
+ "severity": "严重度",
+ "response": "你的回复",
+ "addressed": true/false
+ }
+ ],
+ "revision_summary": "修改概述(100字内)",
+ "sections_modified": ["修改的章节"]
+}"""
+
+ def __init__(self, api_key: str, model: str = "claude-haiku-4-5-20251001"):
+ self.client = anthropic.Anthropic(api_key=api_key)
+ self.model = model
+
+ def _call(self, system: str, user_msg: str, max_tokens: int = 2048) -> str:
+        """Call the Claude API"""
+ try:
+ resp = self.client.messages.create(
+ model=self.model,
+ max_tokens=max_tokens,
+ temperature=0.7,
+ system=system,
+ messages=[{"role": "user", "content": user_msg}],
+ )
+ return resp.content[0].text
+ except Exception as e:
+ return json.dumps({"error": str(e)})
+
+ def _parse_json(self, text: str) -> dict:
+        """Extract JSON from the response"""
+ if "```json" in text:
+ start = text.index("```json") + 7
+ end = text.index("```", start)
+ text = text[start:end].strip()
+ elif "```" in text:
+ start = text.index("```") + 3
+ end = text.index("```", start)
+ text = text[start:end].strip()
+ try:
+ return json.loads(text)
+ except json.JSONDecodeError:
+ first = text.find("{")
+ last = text.rfind("}")
+ if first >= 0 and last > first:
+ try:
+ return json.loads(text[first:last + 1])
+ except json.JSONDecodeError:
+ pass
+ return {"error": "JSON解析失败", "raw": text[:500]}
+
+ def ai_review(self, profile: dict, title: str, abstract: str,
+ round_num: int, prev_comments: list = None,
+ author_response: str = None) -> dict:
+        """AI reviewer pass"""
+ expertise = ", ".join(profile.get("expertise", []))
+ system = self.REVIEWER_PROMPT.format(
+ expertise=expertise, personality=profile.get("personality", "")
+ )
+ user_msg = f"标题: {title}\n摘要: {abstract}\n轮次: 第{round_num}轮"
+ if prev_comments and author_response:
+ user_msg += f"\n\n你上轮意见:\n{json.dumps(prev_comments, ensure_ascii=False)}"
+ user_msg += f"\n\n作者回复:\n{author_response}"
+
+ raw = self._call(system, user_msg)
+ result = self._parse_json(raw)
+
+        # Fill in any missing fields
+ result.setdefault("reviewer_id", profile["id"])
+ result.setdefault("reviewer_name", profile["name"])
+ result.setdefault("round_number", round_num)
+ result.setdefault("scores", {})
+ result.setdefault("overall_score", 5.0)
+ result.setdefault("decision", "major_revision")
+ result.setdefault("confidence", 0.7)
+ result.setdefault("summary", "")
+
+        # Ensure each comment has the required fields
+ for c in result.get("comments", []):
+ c.setdefault("id", str(uuid.uuid4())[:8])
+ c.setdefault("reviewer_id", profile["id"])
+ c.setdefault("resolution", CommentResolution.PENDING.value)
+ c.setdefault("round_created", round_num)
+ c.setdefault("severity", "minor")
+ c.setdefault("category", "general")
+
+ return result
+
+ def ai_synthesize(self, reviews: list, title: str, abstract: str,
+ round_num: int) -> dict:
+        """AI EIC synthesis"""
+ user_msg = (
+ f"稿件: {title}\n摘要: {abstract}\n轮次: 第{round_num}轮\n\n"
+ f"审稿报告:\n{json.dumps(reviews, ensure_ascii=False, indent=2)}"
+ )
+ raw = self._call(self.EIC_PROMPT, user_msg)
+ result = self._parse_json(raw)
+ result.setdefault("recommendation", "major_revision")
+ result.setdefault("overall_score", 5.0)
+ result.setdefault("consensus_level", 0.5)
+ result.setdefault("key_issues", [])
+ result.setdefault("guidance", "")
+ return result
+
+ def ai_author_respond(self, synthesis_guidance: str, reviews: list,
+ title: str, round_num: int) -> dict:
+        """AI author response"""
+ user_msg = (
+ f"你的论文: {title}\nEIC指导: {synthesis_guidance}\n轮次: 第{round_num}轮\n\n"
+ f"审稿意见:\n{json.dumps(reviews, ensure_ascii=False, indent=2)}"
+ )
+ raw = self._call(self.AUTHOR_PROMPT, user_msg)
+ result = self._parse_json(raw)
+ result.setdefault("responses", [])
+ result.setdefault("revision_summary", "")
+ result.setdefault("sections_modified", [])
+
+ for r in result.get("responses", []):
+ r.setdefault("addressed", True)
+ r.setdefault("resolution", CommentResolution.ADDRESSED.value)
+ r.setdefault("severity", "minor")
+ r.setdefault("category", "general")
+ return result
+
+
+# ============================================================
+# 2. Distributed Storage Coordinator
+# ============================================================
+
+class DistributedStore:
+    """
+    Distributed storage coordinator.
+    Simulates state synchronization and message passing between agents.
+
+    Storage Partitions:
+    - manuscripts: manuscript versions
+    - reviews: review reports
+    - messages: inter-agent message queue
+    - agent_states: per-agent state
+    - coordination: coordination metadata
+    """
+
+ def __init__(self):
+ if 'store_manuscripts' not in st.session_state:
+ st.session_state.store_manuscripts = {}
+ if 'store_reviews' not in st.session_state:
+ st.session_state.store_reviews = {}
+ if 'store_messages' not in st.session_state:
+ st.session_state.store_messages = []
+ if 'store_agent_states' not in st.session_state:
+ st.session_state.store_agent_states = {}
+ if 'store_coordination' not in st.session_state:
+ st.session_state.store_coordination = {
+ 'lock_owner': None,
+ 'round_status': {},
+ 'pending_actions': [],
+ 'event_log': []
+ }
+
+ def acquire_lock(self, agent_id: str) -> bool:
+        """Acquire the distributed lock"""
+ coord = st.session_state.store_coordination
+ if coord['lock_owner'] is None or coord['lock_owner'] == agent_id:
+ coord['lock_owner'] = agent_id
+ return True
+ return False
+
+ def release_lock(self, agent_id: str):
+        """Release the distributed lock"""
+ coord = st.session_state.store_coordination
+ if coord['lock_owner'] == agent_id:
+ coord['lock_owner'] = None
+
+ def store_manuscript(self, ms_id: str, version: dict):
+        """Store a manuscript version"""
+ if ms_id not in st.session_state.store_manuscripts:
+ st.session_state.store_manuscripts[ms_id] = []
+ st.session_state.store_manuscripts[ms_id].append(version)
+ self._log_event(f"Manuscript {ms_id} v{version.get('version', '?')} stored")
+
+ def get_manuscript_versions(self, ms_id: str) -> list:
+        """Get all versions of a manuscript"""
+ return st.session_state.store_manuscripts.get(ms_id, [])
+
+ def get_latest_version(self, ms_id: str) -> Optional[dict]:
+        """Get the latest version of a manuscript"""
+ versions = self.get_manuscript_versions(ms_id)
+ return versions[-1] if versions else None
+
+ def store_review(self, ms_id: str, review: dict):
+        """Store a review report"""
+ key = f"{ms_id}_r{review.get('round_number', 1)}"
+ if key not in st.session_state.store_reviews:
+ st.session_state.store_reviews[key] = []
+ st.session_state.store_reviews[key].append(review)
+ self._log_event(f"Review stored: {review.get('reviewer_name', '?')} for {ms_id} round {review.get('round_number', 1)}")
+
+ def get_reviews(self, ms_id: str, round_number: int) -> list:
+        """Get the review reports for a given round"""
+ key = f"{ms_id}_r{round_number}"
+ return st.session_state.store_reviews.get(key, [])
+
+ def send_message(self, msg: AgentMessage):
+        """Send an inter-agent message"""
+ st.session_state.store_messages.append(asdict(msg))
+ self._log_event(f"Message [{msg.msg_type}]: {msg.sender}({msg.sender_role}) → {msg.receiver}({msg.receiver_role})")
+
+ def get_messages(self, receiver: str = None, msg_type: str = None) -> list:
+        """Fetch messages, optionally filtered by receiver or type"""
+ msgs = st.session_state.store_messages
+ if receiver:
+ msgs = [m for m in msgs if m['receiver'] == receiver]
+ if msg_type:
+ msgs = [m for m in msgs if m['msg_type'] == msg_type]
+ return msgs
+
+ def update_agent_state(self, agent_id: str, state: dict):
+        """Update an agent's state"""
+ st.session_state.store_agent_states[agent_id] = {
+ **state,
+ 'last_updated': datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+ }
+
+ def get_agent_state(self, agent_id: str) -> dict:
+        """Get an agent's state"""
+ return st.session_state.store_agent_states.get(agent_id, {})
+
+ def _log_event(self, event: str):
+        """Log a coordination event"""
+ st.session_state.store_coordination['event_log'].append({
+ 'time': datetime.now().strftime("%H:%M:%S"),
+ 'event': event
+ })
+
+ def get_event_log(self) -> list:
+        """Get the event log"""
+ return st.session_state.store_coordination['event_log']
+
+ def reset(self):
+        """Reset all stores"""
+ st.session_state.store_manuscripts = {}
+ st.session_state.store_reviews = {}
+ st.session_state.store_messages = []
+ st.session_state.store_agent_states = {}
+ st.session_state.store_coordination = {
+ 'lock_owner': None,
+ 'round_status': {},
+ 'pending_actions': [],
+ 'event_log': []
+ }
+
+
+# ============================================================
+# 3. Agent Implementations
+# ============================================================
+
+# --- Review Knowledge Base ---
+
+REVIEW_DIMENSIONS = {
+ "novelty": {"name": "创新性 (Novelty)", "weight": 0.25,
+ "criteria": "研究的原创性和创新程度"},
+ "methodology": {"name": "方法论 (Methodology)", "weight": 0.25,
+ "criteria": "研究方法的严谨性和适当性"},
+ "writing": {"name": "写作质量 (Writing Quality)", "weight": 0.15,
+ "criteria": "论文结构、语言表达和逻辑性"},
+ "significance": {"name": "重要性 (Significance)", "weight": 0.20,
+ "criteria": "研究对领域的贡献和影响"},
+ "data_analysis": {"name": "数据分析 (Data Analysis)", "weight": 0.15,
+ "criteria": "数据处理和分析的正确性"}
+}
+
+REVIEWER_PROFILES = [
+ {
+ "id": "reviewer_1", "name": "Reviewer A (方法论专家)",
+ "expertise": ["methodology", "data_analysis"],
+ "strictness": 0.7, "personality": "严谨型",
+ "focus_areas": ["实验设计", "统计分析", "可重复性"]
+ },
+ {
+ "id": "reviewer_2", "name": "Reviewer B (领域专家)",
+ "expertise": ["novelty", "significance"],
+ "strictness": 0.5, "personality": "建设型",
+ "focus_areas": ["文献综述", "创新点", "理论贡献"]
+ },
+ {
+ "id": "reviewer_3", "name": "Reviewer C (写作专家)",
+ "expertise": ["writing", "methodology"],
+ "strictness": 0.6, "personality": "细致型",
+ "focus_areas": ["论文结构", "图表质量", "语言表达"]
+ }
+]
+
+# --- Common review comment templates ---
+REVIEW_TEMPLATES = {
+ "methodology": {
+ "critical": [
+ "实验设计存在重大缺陷,缺乏对照组,无法验证因果关系。",
+ "样本量不足以支持统计结论,需要进行功效分析。",
+ "研究方法的选择缺乏合理性论证,建议补充方法论依据。"
+ ],
+ "major": [
+ "建议增加更多的消融实验以验证各组件的贡献。",
+ "数据预处理步骤描述不够详细,影响可重复性。",
+ "缺少与最新基线方法的比较,建议补充对比实验。"
+ ],
+ "minor": [
+ "建议补充超参数敏感性分析。",
+ "实验结果的可视化可以进一步优化。",
+ "建议增加计算复杂度分析。"
+ ]
+ },
+ "novelty": {
+ "critical": [
+ "论文的核心贡献与已有工作高度相似,创新性不足。",
+ "技术方案缺乏新颖性,仅是已有方法的简单组合。"
+ ],
+ "major": [
+ "建议更清晰地阐述与相关工作的区别和改进。",
+ "创新点的理论分析不够深入,建议补充理论证明。"
+ ],
+ "minor": [
+ "建议在引言中更突出研究的创新贡献。",
+ "与现有方法的对比讨论可以更加深入。"
+ ]
+ },
+ "writing": {
+ "critical": [
+ "论文结构混乱,逻辑不清,需要大幅重写。"
+ ],
+ "major": [
+ "摘要未能准确概括研究贡献,需要重写。",
+ "相关工作部分组织不当,建议按主题分类讨论。",
+ "结论过于笼统,缺乏具体的发现总结。"
+ ],
+ "minor": [
+ "部分图表的标注不够清晰,建议优化。",
+ "参考文献格式不统一,请检查。",
+ "建议精简冗余表述,提升可读性。"
+ ]
+ },
+ "significance": {
+ "major": [
+ "研究的实际应用价值需要进一步论证。",
+ "对领域的潜在影响讨论不足。"
+ ],
+ "minor": [
+ "建议讨论研究的局限性和未来工作方向。",
+ "可以增加更多实际应用场景的讨论。"
+ ]
+ },
+ "data_analysis": {
+ "critical": [
+ "统计检验方法选择不当,结论可能不可靠。"
+ ],
+ "major": [
+ "缺少误差分析和置信区间报告。",
+ "数据集的代表性需要进一步验证。"
+ ],
+ "minor": [
+ "建议报告更多的评估指标。",
+ "结果表格可以增加标准差信息。"
+ ]
+ }
+}
+
+# --- Author response templates ---
+AUTHOR_RESPONSE_TEMPLATES = {
+ "critical": [
+ "感谢审稿人指出这一关键问题。我们已{action},具体修改如下:\n{detail}",
+ "非常感谢这一重要意见。经过深入分析,我们{action}。修改后的结果显示{result}。"
+ ],
+ "major": [
+ "感谢审稿人的宝贵建议。我们已按要求{action},详见修改稿第{section}节。",
+ "感谢这一建设性意见。我们{action},新增内容已在修改稿中标注。"
+ ],
+ "minor": [
+ "感谢审稿人的细致审阅。已按建议{action}。",
+ "已采纳此建议,{action}。"
+ ]
+}
+
+REVISION_ACTIONS = {
+ "methodology": [
+ "补充了详细的实验设计说明和对照组设置",
+ "增加了消融实验和敏感性分析",
+ "补充了方法论的理论依据和引用",
+ "增加了与最新方法的对比实验"
+ ],
+ "novelty": [
+ "重新撰写了创新性论述,明确区分与现有工作的差异",
+ "补充了理论分析以支撑创新点",
+ "增加了创新性对比表格"
+ ],
+ "writing": [
+ "重新组织了论文结构,改善了逻辑流",
+ "重写了摘要和结论部分",
+ "优化了图表质量和标注",
+ "统一了参考文献格式"
+ ],
+ "significance": [
+ "增加了应用场景讨论和案例分析",
+ "扩展了对领域影响的讨论"
+ ],
+ "data_analysis": [
+ "补充了统计检验和置信区间",
+ "增加了多项评估指标和误差分析",
+ "验证了数据集的代表性"
+ ]
+}
+
+
+class EICAgent:
+    """
+    Editor-in-Chief Agent
+
+    Responsibilities:
+    1. Receive manuscript submissions
+    2. Select and assign reviewers
+    3. Synthesize review comments into guiding feedback
+    4. Coordinate the revise/re-review loop
+    5. Make the final decision
+    """
+
+ def __init__(self, store: DistributedStore):
+ self.id = "eic_001"
+ self.name = "Editor-in-Chief"
+ self.role = AgentRole.EIC
+ self.store = store
+ self.store.update_agent_state(self.id, {
+ 'role': 'eic', 'name': self.name, 'status': 'active'
+ })
+
+ def receive_submission(self, manuscript: ManuscriptState, version: ManuscriptVersion) -> ManuscriptState:
+        """Receive and register a manuscript"""
+ manuscript.status = "submitted"
+ self.store.store_manuscript(manuscript.manuscript_id, asdict(version))
+
+ msg = AgentMessage(
+ sender=self.id, sender_role="eic",
+ receiver="system", receiver_role="system",
+ msg_type=MessageType.SUBMIT.value,
+ content=f"稿件 [{version.title}] 已接收登记,编号: {manuscript.manuscript_id}",
+ round_number=1
+ )
+ self.store.send_message(msg)
+ manuscript.message_log.append(asdict(msg))
+ return manuscript
+
+ def assign_reviewers(self, manuscript: ManuscriptState, reviewer_ids: list) -> ManuscriptState:
+        """Assign reviewers"""
+ manuscript.assigned_reviewers = reviewer_ids
+ manuscript.status = "under_review"
+
+ for rid in reviewer_ids:
+ profile = next((r for r in REVIEWER_PROFILES if r['id'] == rid), None)
+ reviewer_name = profile['name'] if profile else rid
+
+ msg = AgentMessage(
+ sender=self.id, sender_role="eic",
+ receiver=rid, receiver_role="reviewer",
+ msg_type=MessageType.ASSIGN.value,
+ content=f"请您审阅稿件 {manuscript.manuscript_id},请重点关注研究的创新性、方法论严谨性及写作质量。",
+ metadata={'manuscript_id': manuscript.manuscript_id, 'reviewer_name': reviewer_name},
+ round_number=manuscript.current_round
+ )
+ self.store.send_message(msg)
+ manuscript.message_log.append(asdict(msg))
+
+ return manuscript
+
+ def synthesize_feedback(self, manuscript: ManuscriptState) -> dict:
+        """
+        Synthesize all review comments into a guiding feedback summary.
+
+        The EIC's core coordination functions:
+        - identify reviewer consensus
+        - flag points of disagreement
+        - expertise-weighted scoring
+        - produce prioritized revision guidance
+        """
+ reviews = self.store.get_reviews(manuscript.manuscript_id, manuscript.current_round)
+ if not reviews:
+ return {"error": "No reviews found"}
+
+        # Collect all comments
+ all_comments = []
+ decisions = []
+
+ for review in reviews:
+ decisions.append(review.get('decision', 'pending'))
+ for comment in review.get('comments', []):
+ all_comments.append(comment)
+
+        # Tally by severity
+ critical_issues = [c for c in all_comments if c.get('severity') == 'critical']
+ major_issues = [c for c in all_comments if c.get('severity') == 'major']
+ minor_issues = [c for c in all_comments if c.get('severity') == 'minor']
+
+        # Consensus analysis (expertise weighting + variance detection)
+ consensus = self._compute_consensus(reviews)
+
+ avg_scores = consensus['weighted_avg']
+ overall_avg = sum(
+ avg_scores.get(d, 0) * REVIEW_DIMENSIONS[d]['weight']
+ for d in avg_scores
+ ) if avg_scores else 0
+
+        # Trajectory analysis
+ trajectory = self._analyze_trajectory()
+
+        # EIC decision logic (accounts for consensus and disagreement)
+ has_critical = len(critical_issues) > 0
+ has_major_disagreement = len(consensus['disagreement_dims']) > 0
+
+ if overall_avg >= 8.0 and not has_critical:
+ eic_recommendation = ReviewDecision.ACCEPT.value
+ elif overall_avg >= 6.0 and not has_critical:
+ if has_major_disagreement and overall_avg < 7.0:
+ eic_recommendation = ReviewDecision.MAJOR_REVISION.value
+ else:
+ eic_recommendation = ReviewDecision.MINOR_REVISION.value
+ elif overall_avg >= 4.0 or has_critical:
+ eic_recommendation = ReviewDecision.MAJOR_REVISION.value
+ else:
+ eic_recommendation = ReviewDecision.REJECT.value
+
+        # Trend correction: lean toward rejection when quality keeps declining and stays low
+ if (trajectory.get('trend') == 'declining' and
+ eic_recommendation == ReviewDecision.MAJOR_REVISION.value and
+ overall_avg < 5.0):
+ eic_recommendation = ReviewDecision.REJECT.value
+
+ synthesis = {
+ "round": manuscript.current_round,
+ "reviewer_count": len(reviews),
+ "avg_scores": avg_scores,
+ "simple_avg_scores": consensus['simple_avg'],
+ "overall_avg": round(overall_avg, 2),
+ "critical_count": len(critical_issues),
+ "major_count": len(major_issues),
+ "minor_count": len(minor_issues),
+ "critical_issues": critical_issues,
+ "major_issues": major_issues,
+ "minor_issues": minor_issues,
+ "reviewer_decisions": decisions,
+ "eic_recommendation": eic_recommendation,
+ "consensus_level": consensus['consensus_level'],
+ "disagreement_dims": consensus['disagreement_dims'],
+ "outlier_reviewers": consensus['outlier_reviewers'],
+ "score_variance": consensus['variance'],
+ "trajectory": trajectory,
+ "guidance": self._generate_guidance(
+ eic_recommendation, critical_issues, major_issues,
+ avg_scores, manuscript.current_round, consensus, trajectory
+ )
+ }
+
+        # Send the synthesized feedback
+ msg = AgentMessage(
+ sender=self.id, sender_role="eic",
+ receiver="author", receiver_role="author",
+ msg_type=MessageType.FEEDBACK_SUMMARY.value,
+ content=synthesis['guidance'],
+ metadata=synthesis,
+ round_number=manuscript.current_round
+ )
+ self.store.send_message(msg)
+ manuscript.message_log.append(asdict(msg))
+ manuscript.eic_decisions.append(synthesis)
+
+ return synthesis
+
+ def _compute_consensus(self, reviews: list) -> dict:
+        """
+        Compute expertise-weighted consensus scores.
+
+        - expertise dimensions weighted 1.5x
+        - flag per-dimension disagreement (max score gap > 2.0)
+        - flag outlier reviewers (deviation from the mean > 2.0)
+        """
+ score_data = {} # dim -> [(score, reviewer_id, is_expert)]
+ for review in reviews:
+ rid = review.get('reviewer_id', '')
+ profile = next((p for p in REVIEWER_PROFILES if p['id'] == rid), None)
+ expertise = profile.get('expertise', []) if profile else []
+ for dim, score in review.get('scores', {}).items():
+ if dim not in score_data:
+ score_data[dim] = []
+ score_data[dim].append({
+ 'score': score, 'reviewer_id': rid,
+ 'is_expert': dim in expertise
+ })
+
+ result = {
+ 'weighted_avg': {}, 'simple_avg': {}, 'variance': {},
+ 'disagreement_dims': [], 'outlier_reviewers': [],
+ 'consensus_level': 1.0
+ }
+
+ all_variances = []
+ for dim, entries in score_data.items():
+ scores = [e['score'] for e in entries]
+ simple_avg = sum(scores) / len(scores)
+ weights = [1.5 if e['is_expert'] else 1.0 for e in entries]
+ weighted_avg = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
+ variance = sum((s - simple_avg) ** 2 for s in scores) / len(scores) if len(scores) > 1 else 0
+
+ result['weighted_avg'][dim] = round(weighted_avg, 2)
+ result['simple_avg'][dim] = round(simple_avg, 2)
+ result['variance'][dim] = round(variance, 2)
+ all_variances.append(variance)
+
+            # Disagreement detection
+ if len(scores) >= 2:
+ max_diff = max(scores) - min(scores)
+ if max_diff > 2.0:
+ result['disagreement_dims'].append({
+ 'dimension': dim,
+ 'max_diff': round(max_diff, 2),
+ 'scores': {e['reviewer_id']: e['score'] for e in entries}
+ })
+
+ # 离群审稿人
+ for entry in entries:
+ if abs(entry['score'] - simple_avg) > 2.0:
+ result['outlier_reviewers'].append({
+ 'reviewer_id': entry['reviewer_id'],
+ 'dimension': dim,
+ 'score': entry['score'],
+ 'mean': round(simple_avg, 2),
+ 'deviation': round(entry['score'] - simple_avg, 2)
+ })
+
+ # 共识度: 1 - 归一化方差 (0=完全分歧, 1=完全共识)
+ if all_variances:
+ avg_var = sum(all_variances) / len(all_variances)
+ result['consensus_level'] = round(max(0, 1 - avg_var / 10), 2)
+
+ return result
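+
+ # 数值示例 (假设数据): 两位审稿人对 methodology 分别评 8.0 (专长) 与 5.0 (非专长):
+ # weighted_avg = (8.0*1.5 + 5.0*1.0) / (1.5 + 1.0) = 6.8, simple_avg = 6.5
+ # variance = ((8.0-6.5)**2 + (5.0-6.5)**2) / 2 = 2.25, max_diff = 3.0 > 2.0 → 记入 disagreement_dims
+ # 若全维度平均方差恰为 2.25, 则 consensus_level = round(1 - 2.25/10, 2) = 0.78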
+
+ def _analyze_trajectory(self) -> dict:
+ """分析评分趋势 (轮次间改善方向和速度)"""
+ history = st.session_state.get('review_history', [])
+ if len(history) < 2:
+ return {'trend': 'insufficient_data', 'improvement_rate': 0, 'round_averages': []}
+
+ round_avgs = []
+ for rh in history:
+ scores = [r.get('overall_score', 0) for r in rh.get('reports', [])]
+ avg = sum(scores) / len(scores) if scores else 0
+ round_avgs.append(avg)
+
+ improvements = [round_avgs[i + 1] - round_avgs[i] for i in range(len(round_avgs) - 1)]
+ avg_improvement = sum(improvements) / len(improvements) if improvements else 0
+
+ if avg_improvement > 1.0:
+ trend = 'strong_improvement'
+ elif avg_improvement > 0.3:
+ trend = 'moderate_improvement'
+ elif avg_improvement > 0:
+ trend = 'slight_improvement'
+ elif avg_improvement > -0.3:
+ trend = 'stagnant'
+ else:
+ trend = 'declining'
+
+ current_avg = round_avgs[-1]
+ projected = max(0, (7.5 - current_avg) / avg_improvement) if avg_improvement > 0 else float('inf')
+
+ return {
+ 'trend': trend,
+ 'improvement_rate': round(avg_improvement, 2),
+ 'round_averages': [round(a, 2) for a in round_avgs],
+ 'projected_rounds_to_accept': round(projected, 1),
+ 'current_avg': round(current_avg, 2)
+ }
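+
+ # 数值示例 (假设两轮历史): round_avgs = [5.0, 6.2] → avg_improvement = 1.2 → strong_improvement
+ # projected_rounds_to_accept = (7.5 - 6.2) / 1.2 ≈ 1.1 轮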
+
+ def _generate_guidance(self, recommendation: str, critical: list, major: list,
+ scores: dict, current_round: int = 1,
+ consensus: dict = None, trajectory: dict = None) -> str:
+ """生成EIC指导意见 (含共识分析和趋势)"""
+ lines = []
+ lines.append(f"【EIC综合评审意见 - 第{current_round}轮】\n")
+
+ if recommendation == ReviewDecision.ACCEPT.value:
+ lines.append("综合评定:接受 (Accept)")
+ lines.append("恭喜!您的稿件经审稿人评审后达到发表标准。\n")
+ elif recommendation == ReviewDecision.MINOR_REVISION.value:
+ lines.append("综合评定:小修 (Minor Revision)")
+ lines.append("您的稿件质量较好,但仍需进行部分修改。请重点关注以下问题:\n")
+ elif recommendation == ReviewDecision.MAJOR_REVISION.value:
+ lines.append("综合评定:大修 (Major Revision)")
+ lines.append("您的稿件需要实质性修改。请务必逐条回复审稿人意见:\n")
+ else:
+ lines.append("综合评定:拒稿 (Reject)")
+ lines.append("很遗憾,您的稿件目前不符合发表要求。主要问题如下:\n")
+
+ if critical:
+ lines.append("⚠ 关键问题 (必须解决):")
+ for i, c in enumerate(critical, 1):
+ lines.append(f" {i}. [{c.get('category', '')}] {c.get('comment', '')}")
+ lines.append("")
+
+ if major:
+ lines.append("● 重要问题 (需要认真处理):")
+ for i, c in enumerate(major, 1):
+ lines.append(f" {i}. [{c.get('category', '')}] {c.get('comment', '')}")
+ lines.append("")
+
+ weak_dims = [d for d, s in scores.items() if s < 6.0]
+ if weak_dims:
+ lines.append("△ 薄弱环节 (建议加强):")
+ for d in weak_dims:
+ dim_info = REVIEW_DIMENSIONS.get(d, {})
+ lines.append(f" - {dim_info.get('name', d)}: 当前均分 {scores[d]:.1f}/10")
+ lines.append("")
+
+ # 审稿人分歧提示
+ if consensus and consensus.get('disagreement_dims'):
+ lines.append("⚡ 审稿人分歧提示:")
+ for flag in consensus['disagreement_dims']:
+ dim_name = REVIEW_DIMENSIONS.get(flag['dimension'], {}).get('name', flag['dimension'])
+ scores_str = ", ".join(f"{rid[-1]}: {s:.1f}" for rid, s in flag['scores'].items())
+ lines.append(f" - {dim_name}: 审稿人评分 [{scores_str}] (分差: {flag['max_diff']:.1f})")
+ lines.append(" EIC已采用专家加权评分以协调分歧。")
+ lines.append("")
+
+ # 趋势分析
+ if trajectory and trajectory.get('trend') != 'insufficient_data':
+ trend_label = {
+ 'strong_improvement': '显著提升', 'moderate_improvement': '稳步提升',
+ 'slight_improvement': '略有提升', 'stagnant': '趋于停滞',
+ 'declining': '质量下降'
+ }.get(trajectory['trend'], trajectory['trend'])
+ lines.append(f"📊 质量趋势: {trend_label} (每轮变化: {trajectory['improvement_rate']:+.2f})")
+ if trajectory['projected_rounds_to_accept'] < float('inf'):
+ lines.append(f" 预计 ~{trajectory['projected_rounds_to_accept']:.0f} 轮后可达接受标准")
+
+ return "\n".join(lines)
+
+ def request_re_review(self, manuscript: ManuscriptState) -> ManuscriptState:
+ """请求审稿人重新审阅修改稿"""
+ for rid in manuscript.assigned_reviewers:
+ msg = AgentMessage(
+ sender=self.id, sender_role="eic",
+ receiver=rid, receiver_role="reviewer",
+ msg_type=MessageType.RE_REVIEW.value,
+ content=f"作者已提交修改稿(第{manuscript.current_version}版),请您重新审阅并评估修改是否充分。",
+ metadata={'manuscript_id': manuscript.manuscript_id, 'version': manuscript.current_version},
+ round_number=manuscript.current_round
+ )
+ self.store.send_message(msg)
+ manuscript.message_log.append(asdict(msg))
+
+ manuscript.status = "under_review"
+ return manuscript
+
+ def make_final_decision(self, manuscript: ManuscriptState) -> dict:
+ """做出最终决定"""
+ reviews = self.store.get_reviews(manuscript.manuscript_id, manuscript.current_round)
+ if not reviews:
+ return {"decision": "pending", "reason": "无审稿意见"}
+
+ avg_score = sum(r.get('overall_score', 0) for r in reviews) / len(reviews)
+ has_critical = any(
+ c.get('severity') == 'critical' and not c.get('addressed', False)
+ for r in reviews for c in r.get('comments', [])
+ )
+
+ if avg_score >= 7.5 and not has_critical:
+ decision = ReviewDecision.ACCEPT.value
+ reason = f"稿件经过{manuscript.current_round}轮审稿后达到发表标准,平均分 {avg_score:.1f}/10。"
+ elif avg_score >= 6.0:
+ decision = ReviewDecision.MINOR_REVISION.value
+ reason = f"稿件仍需小幅修改,平均分 {avg_score:.1f}/10。"
+ elif manuscript.current_round >= manuscript.max_rounds:
+ decision = ReviewDecision.REJECT.value
+ reason = f"经过{manuscript.current_round}轮审稿,稿件未能达到发表标准。"
+ else:
+ decision = ReviewDecision.MAJOR_REVISION.value
+ reason = f"稿件需继续修改,当前平均分 {avg_score:.1f}/10。"
+
+ result = {
+ "decision": decision,
+ "reason": reason,
+ "round": manuscript.current_round,
+ "avg_score": round(avg_score, 2)
+ }
+
+ msg = AgentMessage(
+ sender=self.id, sender_role="eic",
+ receiver="author", receiver_role="author",
+ msg_type=MessageType.FINAL_DECISION.value,
+ content=reason,
+ metadata=result,
+ round_number=manuscript.current_round
+ )
+ self.store.send_message(msg)
+ manuscript.message_log.append(asdict(msg))
+
+ return result
+
+
+class ReviewerAgent:
+ """
+ 审稿人代理 (Reviewer Agent)
+
+ 职责:
+ 1. 按维度评估稿件
+ 2. 给出评分和意见
+ 3. 重新审阅修改稿并更新评价
+ """
+
+ def __init__(self, profile: dict, store: DistributedStore):
+ self.id = profile['id']
+ self.name = profile['name']
+ self.expertise = profile.get('expertise', [])
+ self.strictness = profile.get('strictness', 0.5)
+ self.personality = profile.get('personality', '')
+ self.focus_areas = profile.get('focus_areas', [])
+ self.role = AgentRole.REVIEWER
+ self.store = store
+ self.store.update_agent_state(self.id, {
+ 'role': 'reviewer', 'name': self.name,
+ 'status': 'active', 'expertise': self.expertise
+ })
+
+ def review_manuscript(self, manuscript: ManuscriptState, is_re_review: bool = False) -> ReviewReport:
+ """
+ 审阅稿件
+
+ 审稿逻辑:
+ - 对专长领域更严格/更有见地
+ - 再审时验证前轮意见是否被解决
+ - 基于解决情况调整评分
+ """
+ version = self.store.get_latest_version(manuscript.manuscript_id)
+ scores = {}
+ base_quality = random.uniform(4.5, 8.5)
+
+ # 再审时,基于前一轮评分 + 实际解决情况调整
+ addressed_ratio = 0.0
+ if is_re_review and manuscript.current_round > 1:
+ prev_reviews = self.store.get_reviews(manuscript.manuscript_id, manuscript.current_round - 1)
+ my_prev = next((r for r in prev_reviews if r.get('reviewer_id') == self.id), None)
+ if my_prev:
+ prev_avg = my_prev.get('overall_score', base_quality)
+ # 验证前轮意见解决情况
+ prev_comments = my_prev.get('comments', [])
+ addressed_ratio = self._check_addressed_ratio(prev_comments, manuscript)
+ # 改善幅度与解决率成正比
+ improvement = addressed_ratio * random.uniform(1.0, 2.5) + random.uniform(0.0, 0.5)
+ base_quality = min(prev_avg + improvement, 9.5)
+
+ for dim, info in REVIEW_DIMENSIONS.items():
+ if dim in self.expertise:
+ score = base_quality + random.uniform(-1.0, 0.5) - (self.strictness * 0.5)
+ else:
+ score = base_quality + random.uniform(-1.5, 1.0)
+ scores[dim] = round(max(1.0, min(10.0, score)), 1)
+
+ overall_score = sum(scores[d] * REVIEW_DIMENSIONS[d]['weight'] for d in scores)
+
+ # 生成审稿意见 (含前轮验证)
+ comments = self._generate_comments(scores, is_re_review, manuscript)
+ # 评分-意见一致性检查
+ comments = self._enforce_consistency(scores, comments, manuscript.current_round)
+
+ if overall_score >= 8.0:
+ decision = ReviewDecision.ACCEPT.value
+ elif overall_score >= 6.5:
+ decision = ReviewDecision.MINOR_REVISION.value
+ elif overall_score >= 4.5:
+ decision = ReviewDecision.MAJOR_REVISION.value
+ else:
+ decision = ReviewDecision.REJECT.value
+
+ # 基于专长计算信心
+ n_expert_dims = sum(1 for d in scores if d in self.expertise)
+ confidence = 0.6 + 0.1 * n_expert_dims + 0.05 * (1 if is_re_review else 0)
+ confidence = round(min(0.95, confidence + random.uniform(-0.05, 0.05)), 2)
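+ # 示例: 专长覆盖 2 个维度的首轮审稿 → confidence = 0.6 + 0.1*2 = 0.8, 加噪后截断于 0.95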
+
+ report = ReviewReport(
+ reviewer_id=self.id,
+ reviewer_name=self.name,
+ round_number=manuscript.current_round,
+ decision=decision,
+ overall_score=round(overall_score, 2),
+ scores=scores,
+ comments=[asdict(c) if isinstance(c, ReviewComment) else c for c in comments],
+ summary=self._generate_summary(scores, decision, addressed_ratio if is_re_review else None),
+ confidence=confidence
+ )
+
+ report_dict = asdict(report)
+ self.store.store_review(manuscript.manuscript_id, report_dict)
+ manuscript.review_reports.append(report_dict)
+
+ msg = AgentMessage(
+ sender=self.id, sender_role="reviewer",
+ receiver="eic_001", receiver_role="eic",
+ msg_type=MessageType.REVIEW.value,
+ content=f"审稿完成: {self.name} - 总评分 {overall_score:.1f}/10 - 建议: {decision}",
+ metadata={'report': report_dict},
+ round_number=manuscript.current_round
+ )
+ self.store.send_message(msg)
+ manuscript.message_log.append(asdict(msg))
+ return report
+
+ def _check_addressed_ratio(self, prev_comments: list, manuscript: ManuscriptState) -> float:
+ """检查前轮意见中被作者回复的比例"""
+ if not prev_comments:
+ return 0.5
+ revision_history = st.session_state.get('revision_history', [])
+ if not revision_history:
+ return 0.3
+ latest_revision = revision_history[-1]
+ author_responses = latest_revision.get('response', {}).get('responses', [])
+ addressed = 0
+ for pc in prev_comments:
+ pc_text = pc.get('comment', '') if isinstance(pc, dict) else getattr(pc, 'comment', '')
+ pc_cat = pc.get('category', '') if isinstance(pc, dict) else getattr(pc, 'category', '')
+ for resp in author_responses:
+ if resp.get('original_comment', '') == pc_text or resp.get('category', '') == pc_cat:
+ if resp.get('addressed', False):
+ addressed += 1
+ break
+ return addressed / len(prev_comments)
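+
+ # 示例 (假设数据): 前轮 3 条意见, 其中 2 条在作者回复中匹配且 addressed=True → 返回 2/3 ≈ 0.67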
+
+ def _generate_comments(self, scores: dict, is_re_review: bool, manuscript: ManuscriptState) -> list:
+ """生成审稿意见 (再审时验证前轮)"""
+ comments = []
+
+ # 再审时先验证前轮意见
+ if is_re_review and manuscript.current_round > 1:
+ verified = self._verify_previous_comments(manuscript)
+ comments.extend(verified)
+
+ # 基于当前评分生成新意见
+ for dim, score in scores.items():
+ templates = REVIEW_TEMPLATES.get(dim, {})
+ if score < 4.5 and templates.get('critical'):
+ severity = 'critical'
+ pool = templates['critical']
+ elif score < 6.5 and templates.get('major'):
+ severity = 'major'
+ pool = templates['major']
+ elif score < 8.0 and templates.get('minor'):
+ severity = 'minor'
+ pool = templates['minor']
+ else:
+ continue
+
+ n_comments = 2 if dim in self.expertise else 1
+ selected = random.sample(pool, min(n_comments, len(pool)))
+
+ for text in selected:
+ # 避免与验证携带的意见重复
+ if any((getattr(c, 'comment', '') if isinstance(c, ReviewComment) else c.get('comment', '')) == text for c in comments):
+ continue
+ comment = ReviewComment(
+ reviewer_id=self.id, category=dim, severity=severity,
+ comment=text, addressed=False,
+ resolution=CommentResolution.PENDING.value,
+ round_created=manuscript.current_round
+ )
+ comments.append(comment)
+ return comments
+
+ def _verify_previous_comments(self, manuscript: ManuscriptState) -> list:
+ """验证前轮意见是否被作者解决"""
+ prev_reviews = self.store.get_reviews(manuscript.manuscript_id, manuscript.current_round - 1)
+ my_prev = next((r for r in prev_reviews if r.get('reviewer_id') == self.id), None)
+ if not my_prev:
+ return []
+
+ prev_comments = my_prev.get('comments', [])
+ revision_history = st.session_state.get('revision_history', [])
+ author_responses = []
+ if revision_history:
+ author_responses = revision_history[-1].get('response', {}).get('responses', [])
+
+ verified = []
+ for pc in prev_comments:
+ pc_text = pc.get('comment', '')
+ pc_cat = pc.get('category', '')
+ pc_sev = pc.get('severity', '')
+
+ # 查找匹配的作者回复
+ match = None
+ for resp in author_responses:
+ if resp.get('original_comment', '') == pc_text or (
+ resp.get('category') == pc_cat and resp.get('severity') == pc_sev):
+ match = resp
+ break
+
+ # 基于审稿人严格度判定解决状态
+ if match and match.get('addressed', False):
+ resolve_prob = 1.0 - (self.strictness * 0.5)
+ roll = random.random()
+ if roll < resolve_prob * 0.7:
+ resolution = CommentResolution.ADDRESSED.value
+ elif roll < resolve_prob:
+ resolution = CommentResolution.PARTIALLY_ADDRESSED.value
+ else:
+ resolution = CommentResolution.NOT_ADDRESSED.value
+ else:
+ resolution = CommentResolution.NOT_ADDRESSED.value
+
+ # 未完全解决的意见携带到新轮
+ if resolution != CommentResolution.ADDRESSED.value:
+ carried = ReviewComment(
+ reviewer_id=self.id, category=pc_cat, severity=pc_sev,
+ comment=pc_text, addressed=False,
+ resolution=resolution,
+ verification_note=f"来自第{manuscript.current_round - 1}轮",
+ round_created=pc.get('round_created', manuscript.current_round - 1)
+ )
+ verified.append(carried)
+ else:
+ # 已解决的也记录 (标记为addressed)
+ resolved = ReviewComment(
+ reviewer_id=self.id, category=pc_cat, severity=pc_sev,
+ comment=pc_text, addressed=True,
+ resolution=CommentResolution.ADDRESSED.value,
+ round_resolved=manuscript.current_round,
+ round_created=pc.get('round_created', manuscript.current_round - 1)
+ )
+ verified.append(resolved)
+
+ return verified
+
+ def _enforce_consistency(self, scores: dict, comments: list, current_round: int) -> list:
+ """确保评分与意见严重度一致"""
+ comment_dims = set()
+ for c in comments:
+ cat = getattr(c, 'category', '') if isinstance(c, ReviewComment) else c.get('category', '')
+ sev = getattr(c, 'severity', '') if isinstance(c, ReviewComment) else c.get('severity', '')
+ comment_dims.add((cat, sev))
+
+ for dim, score in scores.items():
+ if score < 5.0:
+ has_severe = any(cat == dim and sev in ('critical', 'major') for cat, sev in comment_dims)
+ if not has_severe:
+ templates = REVIEW_TEMPLATES.get(dim, {})
+ pool = templates.get('major', templates.get('critical', []))
+ if pool:
+ text = random.choice(pool)
+ if not any((getattr(c, 'comment', '') if isinstance(c, ReviewComment) else c.get('comment', '')) == text for c in comments):
+ comments.append(ReviewComment(
+ reviewer_id=self.id, category=dim, severity='major',
+ comment=text, round_created=current_round
+ ))
+ return comments
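+
+ # 示例 (假设情形): methodology 评 4.2 但意见中无该维度的 critical/major 条目
+ # → 从模板池补一条 severity='major' 意见, 保证低分维度必有对应批评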
+
+ def _generate_summary(self, scores: dict, decision: str, addressed_ratio: float = None) -> str:
+ """生成审稿总结"""
+ strong = [REVIEW_DIMENSIONS[d]['name'] for d, s in scores.items() if s >= 7.0]
+ weak = [REVIEW_DIMENSIONS[d]['name'] for d, s in scores.items() if s < 6.0]
+
+ lines = [f"【{self.name} 审稿总结】"]
+ if strong:
+ lines.append(f"优势方面: {', '.join(strong)}")
+ if weak:
+ lines.append(f"待改进方面: {', '.join(weak)}")
+ if addressed_ratio is not None:
+ lines.append(f"前轮意见解决率: {addressed_ratio:.0%}")
+
+ decision_text = {
+ 'accept': '建议接受', 'minor_revision': '建议小修后接受',
+ 'major_revision': '建议大修后重审', 'reject': '建议拒稿'
+ }
+ lines.append(f"审稿建议: {decision_text.get(decision, decision)}")
+ return "\n".join(lines)
+
+
+class AuthorAgent:
+ """
+ 作者代理 (Author Agent)
+
+ 职责:
+ 1. 接收审稿意见
+ 2. 逐条回复审稿意见
+ 3. 修改稿件
+ 4. 生成修改说明
+ """
+
+ def __init__(self, store: DistributedStore):
+ self.id = "author_001"
+ self.name = "Author"
+ self.role = AgentRole.AUTHOR
+ self.store = store
+ self.store.update_agent_state(self.id, {
+ 'role': 'author', 'name': self.name, 'status': 'active'
+ })
+
+ def process_feedback(self, manuscript: ManuscriptState, synthesis: dict) -> dict:
+ """
+ 处理EIC综合反馈,生成逐条回复
+
+ 策略:
+ 1. 关键问题优先 (必须全部回复)
+ 2. 检测审稿人冲突意见 → 生成rebuttal
+ 3. 重要问题认真处理
+ 4. 小问题简短回复
+ """
+ responses = []
+ conflicts = self._detect_conflicts(synthesis)
+
+ # 按优先级排序
+ priority_groups = [
+ ('critical', synthesis.get('critical_issues', [])),
+ ('major', synthesis.get('major_issues', [])),
+ ('minor', synthesis.get('minor_issues', []))
+ ]
+
+ for severity, issues in priority_groups:
+ for comment in issues:
+ category = comment.get('category', 'general')
+
+ # 检测是否属于冲突意见
+ is_conflict = any(
+ c['comment_a'].get('comment') == comment.get('comment') or
+ c['comment_b'].get('comment') == comment.get('comment')
+ for c in conflicts
+ )
+
+ if is_conflict:
+ resp_text = self._generate_rebuttal(comment, conflicts)
+ addressed = False
+ resolution = CommentResolution.PARTIALLY_ADDRESSED.value
+ elif severity == 'minor' and random.random() < 0.1:
+ resp_text = "感谢审稿人的建议,我们认同此观点并计划在后续工作中进一步完善。"
+ addressed = False
+ resolution = CommentResolution.PARTIALLY_ADDRESSED.value
+ else:
+ actions = REVISION_ACTIONS.get(category, ["进行了相应修改"])
+ action = random.choice(actions)
+ templates = AUTHOR_RESPONSE_TEMPLATES.get(severity, AUTHOR_RESPONSE_TEMPLATES['minor'])
+ template = random.choice(templates)
+ resp_text = template.format(
+ action=action,
+ detail=f"已在修改稿中进行标注(见{category}相关章节)",
+ result="改进效果显著",
+ section=random.choice(["2", "3", "4", "5"])
+ )
+ addressed = True
+ resolution = CommentResolution.ADDRESSED.value
+
+ responses.append({
+ 'original_comment': comment.get('comment', ''),
+ 'category': category,
+ 'severity': severity,
+ 'response': resp_text,
+ 'addressed': addressed,
+ 'resolution': resolution,
+ 'is_conflict': is_conflict
+ })
+
+ addressed_count = sum(1 for r in responses if r['addressed'])
+ total = sum(len(issues) for _, issues in priority_groups)
+ return {
+ 'round': manuscript.current_round,
+ 'total_issues': total,
+ 'addressed_count': addressed_count,
+ 'conflict_count': len(conflicts),
+ 'responses': responses,
+ 'strategy_summary': f"回复 {total} 条意见: {addressed_count} 已解决, "
+ f"{len(conflicts)} 冲突处理"
+ }
+
+ def _detect_conflicts(self, synthesis: dict) -> list:
+ """检测不同审稿人对同一维度的冲突意见"""
+ conflicts = []
+ all_issues = (
+ synthesis.get('critical_issues', []) +
+ synthesis.get('major_issues', []) +
+ synthesis.get('minor_issues', [])
+ )
+
+ by_category = {}
+ for issue in all_issues:
+ cat = issue.get('category', '')
+ if cat not in by_category:
+ by_category[cat] = []
+ by_category[cat].append(issue)
+
+ for cat, issues in by_category.items():
+ if len(issues) < 2:
+ continue
+ severities = set(i.get('severity', '') for i in issues)
+ if 'critical' in severities and 'minor' in severities:
+ critical_items = [i for i in issues if i.get('severity') == 'critical']
+ minor_items = [i for i in issues if i.get('severity') == 'minor']
+ if critical_items and minor_items:
+ conflicts.append({
+ 'category': cat,
+ 'comment_a': critical_items[0],
+ 'comment_b': minor_items[0],
+ 'type': 'severity_mismatch'
+ })
+ return conflicts
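+
+ # 示例 (假设数据): novelty 维度同时出现 critical 与 minor 两条意见
+ # → 记为一组 severity_mismatch 冲突, process_feedback 将据此生成 rebuttal 而非直接修改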
+
+ def _generate_rebuttal(self, comment: dict, conflicts: list) -> str:
+ """生成礼貌的反驳回复"""
+ category = comment.get('category', '')
+ dim_name = REVIEW_DIMENSIONS.get(category, {}).get('name', category)
+ return (f"我们注意到审稿人对{dim_name}方面的评价存在分歧。"
+ f"我们已仔细考虑各方意见,在保持原有方法优势的基础上进行了针对性改进,"
+ f"并补充了额外的论证。恳请编辑综合裁量。")
+
+ def revise_manuscript(self, manuscript: ManuscriptState, feedback_response: dict) -> ManuscriptVersion:
+ """修改稿件,基于反馈类别针对性修改对应章节"""
+ prev_version = self.store.get_latest_version(manuscript.manuscript_id)
+ new_version_num = manuscript.current_version + 1
+ revision_notes = self._generate_revision_notes(feedback_response)
+
+ # 确定哪些章节需要修改
+ modified_categories = set()
+ for resp in feedback_response.get('responses', []):
+ if resp.get('addressed', False):
+ modified_categories.add(resp.get('category', ''))
+
+ # 类别到章节的映射
+ category_to_section = {
+ 'novelty': 'introduction',
+ 'methodology': 'methodology',
+ 'data_analysis': 'results',
+ 'significance': 'discussion',
+ 'writing': 'conclusion'
+ }
+
+ new_sections = {}
+ if prev_version and prev_version.get('content_sections'):
+ for sec, content in prev_version['content_sections'].items():
+ # 只修改与反馈类别相关的章节
+ modified_by = [cat for cat, s in category_to_section.items()
+ if s == sec and cat in modified_categories]
+ if modified_by:
+ cats_label = ','.join(modified_by)
+ new_sections[sec] = content + f"\n[v{new_version_num} 修改: 回应{cats_label}意见]"
+ else:
+ new_sections[sec] = content
+ else:
+ new_sections = {
+ "introduction": f"引言部分 [第{new_version_num}版]",
+ "methodology": f"方法论部分 [第{new_version_num}版]",
+ "results": f"结果部分 [第{new_version_num}版]",
+ "discussion": f"讨论部分 [第{new_version_num}版]",
+ "conclusion": f"结论部分 [第{new_version_num}版]"
+ }
+
+ new_version = ManuscriptVersion(
+ version=new_version_num,
+ title=prev_version.get('title', '未命名稿件') if prev_version else '未命名稿件',
+ abstract=prev_version.get('abstract', '') if prev_version else '',
+ content_sections=new_sections,
+ revision_notes=revision_notes,
+ content_hash=hashlib.md5(json.dumps(new_sections, sort_keys=True).encode()).hexdigest()[:12]
+ )
+
+ manuscript.current_version = new_version_num
+ self.store.store_manuscript(manuscript.manuscript_id, asdict(new_version))
+ manuscript.versions.append(asdict(new_version))
+
+ addressed = feedback_response.get('addressed_count', 0)
+ total = feedback_response.get('total_issues', 0)
+ msg = AgentMessage(
+ sender=self.id, sender_role="author",
+ receiver="eic_001", receiver_role="eic",
+ msg_type=MessageType.REVISION.value,
+ content=f"已提交第{new_version_num}版修改稿,解决{addressed}/{total}条审稿意见。",
+ metadata={'version': new_version_num, 'response': feedback_response},
+ round_number=manuscript.current_round
+ )
+ self.store.send_message(msg)
+ manuscript.message_log.append(asdict(msg))
+ return new_version
+
+ def _generate_revision_notes(self, feedback_response: dict) -> str:
+ """生成修改说明文档"""
+ lines = ["=== 修改说明 (Revision Notes) ===\n"]
+ lines.append(f"本次修改共回复 {feedback_response.get('total_issues', 0)} 条审稿意见。")
+ lines.append(f"策略: {feedback_response.get('strategy_summary', '')}\n")
+
+ for i, resp in enumerate(feedback_response.get('responses', []), 1):
+ conflict_tag = " [冲突]" if resp.get('is_conflict') else ""
+ lines.append(f"--- 问题 {i} [{resp['severity'].upper()}] [{resp['category']}]{conflict_tag} ---")
+ lines.append(f"审稿意见: {resp['original_comment']}")
+ lines.append(f"作者回复: {resp['response']}")
+ lines.append(f"解决状态: {resp.get('resolution', 'pending')}")
+ lines.append("")
+
+ return "\n".join(lines)
+
+
+# ============================================================
+# 4. 审稿协调器 (Review Coordinator)
+# ============================================================
+
+class ReviewCoordinator:
+ """
+ 审稿流程协调器 - 管理完整的多轮审稿循环
+ 支持双模式: 模拟模式 / AI模式 (Claude API)
+
+ 流程:
+ Author提交 → EIC分配 → Reviewer审稿 → EIC综合 → Author修改
+ → EIC再分配 → Reviewer再审 → ... → EIC最终决定
+ """
+
+ def __init__(self, ai_engine: 'AIReviewEngine' = None):
+ self.store = DistributedStore()
+ self.eic = EICAgent(self.store)
+ self.reviewers = [ReviewerAgent(p, self.store) for p in REVIEWER_PROFILES]
+ self.author = AuthorAgent(self.store)
+ self.score_history = []
+ self.ai_engine = ai_engine # None = 模拟模式
+
+ def init_manuscript(self, title: str, abstract: str) -> ManuscriptState:
+ """初始化稿件"""
+ ms = ManuscriptState()
+ version = ManuscriptVersion(
+ version=1,
+ title=title,
+ abstract=abstract,
+ content_sections={
+ "introduction": "引言内容...",
+ "methodology": "方法论内容...",
+ "results": "结果内容...",
+ "discussion": "讨论内容...",
+ "conclusion": "结论内容..."
+ },
+ content_hash=hashlib.md5(title.encode()).hexdigest()[:12]
+ )
+ ms.versions.append(asdict(version))
+ ms = self.eic.receive_submission(ms, version)
+ return ms
+
+ def run_review_round(self, manuscript: ManuscriptState) -> dict:
+ """执行一轮完整审稿 (自动选择模拟/AI模式)"""
+ if self.ai_engine:
+ return self._run_ai_review(manuscript)
+ return self._run_simulated_review(manuscript)
+
+ def _run_simulated_review(self, manuscript: ManuscriptState) -> dict:
+ """模拟审稿 (原逻辑)"""
+ reviewer_ids = [r.id for r in self.reviewers]
+ manuscript = self.eic.assign_reviewers(manuscript, reviewer_ids)
+
+ is_re_review = manuscript.current_round > 1
+ reports = []
+ for reviewer in self.reviewers:
+ report = reviewer.review_manuscript(manuscript, is_re_review=is_re_review)
+ reports.append(asdict(report))
+
+ synthesis = self.eic.synthesize_feedback(manuscript)
+ self.score_history.append(synthesis.get('overall_avg', 0))
+
+ return {
+ 'round': manuscript.current_round,
+ 'reports': reports,
+ 'synthesis': synthesis,
+ 'recommendation': synthesis.get('eic_recommendation', 'pending'),
+ 'mode': 'simulated'
+ }
+
+ def _run_ai_review(self, manuscript: ManuscriptState) -> dict:
+ """AI 审稿 (Claude API)"""
+ title = manuscript.versions[0].get('title', '') if manuscript.versions else ''
+ abstract = manuscript.versions[0].get('abstract', '') if manuscript.versions else ''
+
+ # 获取前轮信息
+ prev_author_response = None
+ revision_history = st.session_state.get('revision_history', [])
+ if revision_history:
+ prev_author_response = json.dumps(
+ revision_history[-1].get('response', {}), ensure_ascii=False
+ )
+
+ # AI 审稿
+ reports = []
+ for profile in REVIEWER_PROFILES:
+ prev_comments = None
+ review_history = st.session_state.get('review_history', [])
+ if review_history:
+ prev_round = review_history[-1]
+ for r in prev_round.get('reports', []):
+ if r.get('reviewer_id') == profile['id']:
+ prev_comments = r.get('comments', [])
+ break
+
+ ai_report = self.ai_engine.ai_review(
+ profile, title, abstract, manuscript.current_round,
+ prev_comments, prev_author_response
+ )
+ reports.append(ai_report)
+
+ # 同步到存储
+ self.store.store_review(manuscript.manuscript_id, ai_report)
+ manuscript.review_reports.append(ai_report)
+
+ # AI EIC 综合
+ ai_synthesis = self.ai_engine.ai_synthesize(
+ reports, title, abstract, manuscript.current_round
+ )
+
+ # 转换为标准 synthesis 格式
+ all_comments = []
+ for r in reports:
+ all_comments.extend(r.get('comments', []))
+
+ critical = [c for c in all_comments if c.get('severity') == 'critical']
+ major = [c for c in all_comments if c.get('severity') == 'major']
+ minor = [c for c in all_comments if c.get('severity') == 'minor']
+
+ # 构建维度平均分
+ score_data = {}
+ for r in reports:
+ for dim, score in r.get('scores', {}).items():
+ if dim not in score_data:
+ score_data[dim] = []
+ score_data[dim].append(score)
+ avg_scores = {d: round(sum(s) / len(s), 2) for d, s in score_data.items() if s}
+
+ overall_avg = ai_synthesis.get('overall_score', 5.0)
+ if isinstance(overall_avg, str):
+ try:
+ overall_avg = float(overall_avg)
+ except ValueError:
+ overall_avg = 5.0
+ self.score_history.append(overall_avg)
+
+ rec = ai_synthesis.get('recommendation', 'major_revision')
+
+ synthesis = {
+ "round": manuscript.current_round,
+ "reviewer_count": len(reports),
+ "avg_scores": avg_scores,
+ "simple_avg_scores": avg_scores,
+ "overall_avg": round(overall_avg, 2),
+ "critical_count": len(critical),
+ "major_count": len(major),
+ "minor_count": len(minor),
+ "critical_issues": critical,
+ "major_issues": major,
+ "minor_issues": minor,
+ "reviewer_decisions": [r.get('decision', 'pending') for r in reports],
+ "eic_recommendation": rec,
+ "consensus_level": ai_synthesis.get('consensus_level', 0.5),
+ "disagreement_dims": [],
+ "outlier_reviewers": [],
+ "score_variance": {},
+ "trajectory": {},
+ "guidance": ai_synthesis.get('guidance', ''),
+ "ai_reasoning": ai_synthesis.get('reasoning', ''),
+ "ai_key_issues": ai_synthesis.get('key_issues', []),
+ }
+
+ manuscript.eic_decisions.append(synthesis)
+
+ return {
+ 'round': manuscript.current_round,
+ 'reports': reports,
+ 'synthesis': synthesis,
+ 'recommendation': rec,
+ 'mode': 'ai'
+ }
+
+ def author_revise(self, manuscript: ManuscriptState, synthesis: dict) -> dict:
+ """作者修改流程 (自动选择模拟/AI模式)"""
+ if self.ai_engine:
+ return self._ai_author_revise(manuscript, synthesis)
+ response = self.author.process_feedback(manuscript, synthesis)
+ new_version = self.author.revise_manuscript(manuscript, response)
+ return {
+ 'response': response,
+ 'new_version': asdict(new_version),
+ 'version_number': new_version.version
+ }
+
+ def _ai_author_revise(self, manuscript: ManuscriptState, synthesis: dict) -> dict:
+ """AI 作者修改"""
+ title = manuscript.versions[0].get('title', '') if manuscript.versions else ''
+ guidance = synthesis.get('guidance', '')
+ reviews = st.session_state.get('review_history', [])
+ latest_reports = reviews[-1].get('reports', []) if reviews else []
+
+ ai_response = self.ai_engine.ai_author_respond(
+ guidance, latest_reports, title, manuscript.current_round
+ )
+
+ # 转换为标准 response 格式
+ responses = ai_response.get('responses', [])
+ response = {
+ 'round': manuscript.current_round,
+ 'total_issues': len(responses),
+ 'addressed_count': sum(1 for r in responses if r.get('addressed', False)),
+ 'conflict_count': 0,
+ 'responses': responses,
+ 'strategy_summary': ai_response.get('revision_summary', ''),
+ }
+
+ # 生成新版本
+ new_version = self.author.revise_manuscript(manuscript, response)
+
+ return {
+ 'response': response,
+ 'new_version': asdict(new_version),
+ 'version_number': new_version.version
+ }
+
+ def advance_round(self, manuscript: ManuscriptState):
+ """推进到下一轮"""
+ manuscript.current_round += 1
+ manuscript.status = "under_review"
+
+ def is_process_complete(self, manuscript: ManuscriptState, synthesis: dict) -> tuple:
+ """
+ 判断审稿流程是否结束
+
+ 终止条件:
+ 1. ACCEPT 或 REJECT
+ 2. 达到最大轮数
+ 3. 小修且无关键问题 (round > 1)
+ 4. 评分持续收敛 (连续两轮改善 < 0.5)
+
+ Returns: (is_complete: bool, reason: str)
+ """
+ rec = synthesis.get('eic_recommendation', '')
+
+ if rec == ReviewDecision.ACCEPT.value:
+ return True, "accepted"
+ if rec == ReviewDecision.REJECT.value:
+ return True, "rejected"
+
+ # 最大轮数
+ if manuscript.current_round >= manuscript.max_rounds:
+ return True, "max_rounds_reached"
+
+ # 小修 + 无关键问题 + 已经过至少1轮修改
+ if (rec == ReviewDecision.MINOR_REVISION.value and
+ manuscript.current_round > 1 and
+ synthesis.get('critical_count', 0) == 0):
+ return True, "minor_revision_accepted"
+
+ # 收敛检测
+ if len(self.score_history) >= 3:
+ d1 = self.score_history[-1] - self.score_history[-2]
+ d2 = self.score_history[-2] - self.score_history[-3]
+ if d1 < 0.5 and d2 < 0.5:
+ return True, "diminishing_returns"
+
+ return False, ""
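+
+ # 收敛示例 (假设历史): score_history = [5.0, 5.3, 5.6] → d1 = d2 = 0.3 < 0.5
+ # → 返回 (True, "diminishing_returns"), 提前终止以避免无效轮次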
+
+
+# ============================================================
+# 5. Streamlit UI
+# ============================================================
+
+def setup_page():
+ """页面配置"""
+ st.set_page_config(
+ page_title="分布式审稿协调系统",
+ page_icon="📝",
+ layout="wide",
+ initial_sidebar_state="expanded"
+ )
+
+ st.markdown("""
+
+ """, unsafe_allow_html=True)
+
+
+def init_session_state():
+ """初始化会话状态"""
+ if 'coordinator' not in st.session_state:
+ st.session_state.coordinator = None
+ if 'ms_state' not in st.session_state:
+ st.session_state.ms_state = None
+ if 'review_history' not in st.session_state:
+ st.session_state.review_history = []
+ if 'revision_history' not in st.session_state:
+ st.session_state.revision_history = []
+ if 'process_complete' not in st.session_state:
+ st.session_state.process_complete = False
+ if 'current_phase' not in st.session_state:
+ st.session_state.current_phase = "init" # init, reviewing, feedback, revising, complete
+ if 'auto_running' not in st.session_state:
+ st.session_state.auto_running = False
+
+
+def render_sidebar():
+ """渲染侧边栏"""
+ with st.sidebar:
+ st.markdown("### ⚙ 系统配置")
+
+ # AI 模式切换
+ st.markdown("**审稿模式**")
+ ai_available = HAS_ANTHROPIC
+ if not ai_available:
+ st.caption("运行 `pip install anthropic` 安装后可启用 AI 模式")
+
+ use_ai = False
+ if ai_available:
+ use_ai = st.toggle("🤖 AI 审稿模式", value=st.session_state.get('use_ai', False),
+ help="启用后使用 Claude API 生成真实审稿意见")
+ st.session_state.use_ai = use_ai
+
+ if use_ai:
+ api_key = st.text_input("Anthropic API Key", type="password",
+ value=st.session_state.get('api_key', ''),
+ placeholder="sk-ant-...")
+ st.session_state.api_key = api_key
+
+ ai_model = st.selectbox("模型", [
+ "claude-haiku-4-5-20251001",
+ "claude-sonnet-4-6",
+ ], help="Haiku 最快最便宜,Sonnet 质量更高")
+ st.session_state.ai_model = ai_model
+
+ if api_key:
+ st.success("AI 模式已启用")
+ else:
+ st.warning("请输入 API Key")
+ use_ai = False
+ else:
+ st.info("当前: 模拟模式 (模板生成)")
+
+ st.divider()
+ st.markdown("**稿件信息**")
+ title = st.text_input("论文标题", value="基于多代理协调的分布式审稿系统研究")
+ abstract = st.text_area("摘要", value="本文提出了一种基于多代理协调的分布式审稿系统,"
+ "通过EIC、Reviewer和Author三类代理的协同工作,"
+ "实现了审稿流程的自动化管理...",
+ height=100)
+
+ st.divider()
+ st.markdown("**审稿参数**")
+ max_rounds = st.slider("最大审稿轮数", 1, 10, 5)
+
+ st.divider()
+ st.markdown("**审稿人配置**")
+ for p in REVIEWER_PROFILES:
+ with st.expander(p['name']):
+ st.write(f"性格: {p['personality']}")
+ st.write(f"专长: {', '.join(p['expertise'])}")
+ st.write(f"严格度: {p['strictness']}")
+
+ st.divider()
+ if st.button("🔄 重置系统", use_container_width=True):
+ for key in list(st.session_state.keys()):
+ if key.startswith('store_') or key in [
+ 'coordinator', 'ms_state', 'review_history',
+ 'revision_history', 'process_complete', 'current_phase',
+ 'auto_running', 'ai_engine', 'final_decision', 'termination_reason'
+ ]:
+ del st.session_state[key]
+ st.rerun()
+
+ return title, abstract, max_rounds
+
+
+def render_agent_status():
+ """渲染代理状态面板"""
+ st.markdown("### 🤖 代理状态")
+ cols = st.columns(3)
+
+ with cols[0]:
+ # Card markup reconstructed: the original HTML was stripped from the diff,
+ # so the "agent-card" class name is an assumption.
+ st.markdown('<div class="agent-card">', unsafe_allow_html=True)
+ st.markdown("**🔴 EIC (主编)**")
+ st.caption("统筹协调审稿流程")
+ phase = st.session_state.get('current_phase', 'init')
+ status_map = {
+ 'init': '等待稿件', 'reviewing': '协调审稿中',
+ 'feedback': '综合反馈', 'revising': '等待修改稿',
+ 'complete': '流程完成'
+ }
+ st.write(f"状态: {status_map.get(phase, phase)}")
+ st.markdown('</div>', unsafe_allow_html=True)
+
+ with cols[1]:
+ st.markdown('<div class="agent-card">', unsafe_allow_html=True)
+ st.markdown("**🟢 Reviewers (审稿人)**")
+ st.caption(f"共 {len(REVIEWER_PROFILES)} 位审稿人")
+ if phase == 'reviewing':
+ st.write("状态: 审稿中...")
+ elif phase == 'feedback':
+ st.write("状态: 审稿完成")
+ else:
+ st.write("状态: 待命")
+ st.markdown('</div>', unsafe_allow_html=True)
+
+ with cols[2]:
+ st.markdown('<div class="agent-card">', unsafe_allow_html=True)
+ st.markdown("**🔵 Author (作者)**")
+ st.caption("接收反馈并修改")
+ if phase == 'revising':
+ st.write("状态: 修改中...")
+ elif phase == 'feedback':
+ st.write("状态: 查看反馈")
+ else:
+ st.write("状态: 待命")
+ st.markdown('</div>', unsafe_allow_html=True)
+
+
+def render_review_report(report: dict, round_num: int):
+ """渲染单个审稿报告"""
+ reviewer_name = report.get('reviewer_name', 'Unknown')
+ decision = report.get('decision', 'pending')
+ score = report.get('overall_score', 0)
+
+ decision_labels = {
+ 'accept': ('接受', 'decision-accept'),
+ 'minor_revision': ('小修', 'decision-revision'),
+ 'major_revision': ('大修', 'decision-revision'),
+ 'reject': ('拒稿', 'decision-reject'),
+ 'pending': ('待定', 'decision-revision')
+ }
+ label, css_class = decision_labels.get(decision, ('未知', 'decision-revision'))
+
+ with st.expander(f"📋 {reviewer_name} | 评分: {score:.1f}/10 | {label}", expanded=False):
+ # Score details
+ scores = report.get('scores', {})
+ if scores:
+ score_df = pd.DataFrame([
+ {
+ '维度': REVIEW_DIMENSIONS.get(d, {}).get('name', d),
+ '评分': s,
+ '权重': REVIEW_DIMENSIONS.get(d, {}).get('weight', 0)
+ }
+ for d, s in scores.items()
+ ])
+ st.dataframe(score_df, use_container_width=True, hide_index=True)
+
+ # Review comments
+ comments = report.get('comments', [])
+ if comments:
+ st.markdown("**审稿意见:**")
+ for c in comments:
+ severity = c.get('severity', 'minor')
+ icon = {'critical': '🔴', 'major': '🟠', 'minor': '🟡'}.get(severity, '⚪')
+ category = REVIEW_DIMENSIONS.get(c.get('category', ''), {}).get('name', c.get('category', ''))
+ addressed = " ✅" if c.get('addressed') else ""
+ st.markdown(f"{icon} **[{category}]** {c.get('comment', '')}{addressed}")
+
+ st.markdown(f"**总结:** {report.get('summary', '')}")
+ st.caption(f"审稿信心: {report.get('confidence', 0):.0%}")
+
+
+def render_revision_response(revision: dict):
+ """渲染作者修改回复"""
+ response = revision.get('response', {})
+ responses = response.get('responses', [])
+
+ if not responses:
+ return
+
+ st.markdown(f"**共回复 {len(responses)} 条意见:**")
+
+ for i, resp in enumerate(responses, 1):
+ severity = resp.get('severity', 'minor')
+ icon = {'critical': '🔴', 'major': '🟠', 'minor': '🟡'}.get(severity, '⚪')
+ category = REVIEW_DIMENSIONS.get(resp.get('category', ''), {}).get('name', resp.get('category', ''))
+
+ with st.expander(f"{icon} 问题 {i}: [{category}] {resp.get('original_comment', '')[:50]}..."):
+ st.markdown(f"**审稿意见:** {resp.get('original_comment', '')}")
+ st.markdown(f"**作者回复:** {resp.get('response', '')}")
+
+
+def render_metrics(ms_state):
+ """渲染关键指标"""
+ if not ms_state:
+ return
+
+ cols = st.columns(4)
+
+ with cols[0]:
+ st.markdown(f"""
+
+
{ms_state.current_round}
+
当前轮次
+
+ """, unsafe_allow_html=True)
+
+ with cols[1]:
+ st.markdown(f"""
+
+
{ms_state.current_version}
+
稿件版本
+
+ """, unsafe_allow_html=True)
+
+ with cols[2]:
+ reviews = st.session_state.get('review_history', [])
+ latest_avg = 0
+ if reviews:
+ latest = reviews[-1]
+ scores = [r.get('overall_score', 0) for r in latest.get('reports', [])]
+ latest_avg = sum(scores) / len(scores) if scores else 0
+ color_class = "score-high" if latest_avg >= 7 else ("score-mid" if latest_avg >= 5 else "score-low")
+ st.markdown(f"""
+
+
{latest_avg:.1f}
+
平均评分
+
+ """, unsafe_allow_html=True)
+
+ with cols[3]:
+ total_comments = sum(
+ len(r.get('comments', []))
+ for rh in reviews
+ for r in rh.get('reports', [])
+ )
+ st.markdown(f"""
+
+
{total_comments}
+
累计意见数
+
+ """, unsafe_allow_html=True)
+
+
+def render_score_trend():
+ """渲染评分趋势图"""
+ reviews = st.session_state.get('review_history', [])
+ if len(reviews) < 1:
+ return
+
+ trend_data = []
+ for rh in reviews:
+ round_num = rh.get('round', 1)
+ for report in rh.get('reports', []):
+ for dim, score in report.get('scores', {}).items():
+ trend_data.append({
+ 'Round': f"第{round_num}轮",
+ 'Dimension': REVIEW_DIMENSIONS.get(dim, {}).get('name', dim),
+ 'Score': score,
+ 'Reviewer': report.get('reviewer_name', '?')
+ })
+
+ if trend_data:
+ df = pd.DataFrame(trend_data)
+ import altair as alt
+
+ chart = alt.Chart(df).mark_line(point=True).encode(
+ x=alt.X('Round:N', title='审稿轮次'),
+ y=alt.Y('Score:Q', title='评分', scale=alt.Scale(domain=[0, 10])),
+ color='Dimension:N',
+ strokeDash='Reviewer:N',
+ tooltip=['Round', 'Dimension', 'Score', 'Reviewer']
+ ).properties(
+ title='各维度评分趋势',
+ height=300
+ ).interactive()
+
+ st.altair_chart(chart, use_container_width=True)
+
+
+def render_comment_tracker():
+ """渲染意见追踪面板"""
+ review_history = st.session_state.get('review_history', [])
+ if not review_history:
+ st.info("暂无审稿意见")
+ return
+
+ summary = CommentTracker.get_resolution_summary(review_history)
+ total = summary['total']
+ addressed = summary['addressed']
+ partial = summary['partially_addressed']
+ unresolved = summary['not_addressed'] + summary['pending']
+
+ st.markdown("#### 意见解决状态追踪")
+
+ cols = st.columns(4)
+ with cols[0]:
+ st.metric("总意见数", total)
+ with cols[1]:
+ st.metric("已解决", addressed)
+ with cols[2]:
+ st.metric("部分解决", partial)
+ with cols[3]:
+ st.metric("未解决", unresolved)
+
+ if total > 0:
+ st.progress(addressed / total, text=f"{addressed}/{total} 意见已解决 ({addressed / total:.0%})")
+
+ # Detailed list
+ all_comments = CommentTracker.get_all_comments(review_history)
+ if all_comments:
+ rows = []
+ for c in all_comments:
+ resolution = c.get('resolution', c.get('resolution_status', 'pending'))
+ icon = {
+ 'addressed': '✅', 'partially_addressed': '🟡',
+ 'not_addressed': '❌', 'pending': '⏳'
+ }.get(resolution, '⏳')
+ rows.append({
+ '状态': icon,
+ '审稿人': c.get('reviewer_id', '')[-1:] if c.get('reviewer_id') else '?',
+ '严重度': c.get('severity', ''),
+ '维度': REVIEW_DIMENSIONS.get(c.get('category', ''), {}).get('name', c.get('category', '')),
+ '意见': c.get('comment', '')[:60] + '...' if len(c.get('comment', '')) > 60 else c.get('comment', ''),
+ '解决状态': resolution
+ })
+ st.dataframe(pd.DataFrame(rows), use_container_width=True, hide_index=True)
+
+
+def render_version_diff():
+ """渲染版本对比视图"""
+ ms_state = st.session_state.get('ms_state')
+ coordinator = st.session_state.get('coordinator')
+ if not ms_state or not coordinator:
+ return
+
+ versions = coordinator.store.get_manuscript_versions(ms_state.manuscript_id)
+ if len(versions) < 2:
+ st.info("至少需要2个版本才能对比")
+ return
+
+ st.markdown("#### 版本对比")
+ version_labels = [f"v{v.get('version', i + 1)}" for i, v in enumerate(versions)]
+
+ col1, col2 = st.columns(2)
+ with col1:
+ v1_idx = st.selectbox("基准版本", range(len(versions)),
+ format_func=lambda i: version_labels[i], key="diff_v1")
+ with col2:
+ v2_idx = st.selectbox("对比版本", range(len(versions)),
+ index=min(v1_idx + 1, len(versions) - 1),
+ format_func=lambda i: version_labels[i], key="diff_v2")
+
+ if v1_idx != v2_idx:
+ diff = VersionDiffer.diff_versions(versions[v1_idx], versions[v2_idx])
+ modified = diff['modified_sections']
+ total = diff['total_sections']
+ st.markdown(f"**{version_labels[v1_idx]} → {version_labels[v2_idx]}** | "
+ f"修改了 {len(modified)}/{total} 个章节")
+
+ for sec, info in diff['section_diffs'].items():
+ status = info['status']
+ icon = {'added': '🟢', 'removed': '🔴', 'modified': '🟠', 'unchanged': '⚪'}.get(status, '?')
+ if status != 'unchanged':
+ with st.expander(f"{icon} {sec} ({status})"):
+ if status == 'modified':
+ st.text(f"旧版: {info['old']}")
+ st.text(f"新版: {info['new']}")
+ elif status == 'added':
+ st.text(f"新增: {info['new']}")
+ else:
+ st.markdown(f"{icon} {sec} (未修改)")
+
+
+def render_convergence_chart():
+ """渲染收敛趋势图"""
+ reviews = st.session_state.get('review_history', [])
+ if len(reviews) < 1:
+ return
+
+ import altair as alt
+ round_data = []
+ for rh in reviews:
+ round_num = rh.get('round', 1)
+ scores = [r.get('overall_score', 0) for r in rh.get('reports', [])]
+ avg = sum(scores) / len(scores) if scores else 0
+ round_data.append({'Round': f"第{round_num}轮", 'round_num': round_num, 'Score': avg})
+
+ if not round_data:
+ return
+
+ df = pd.DataFrame(round_data)
+
+ # Score line
+ line = alt.Chart(df).mark_line(point=True, color='#0d6efd', strokeWidth=3).encode(
+ x=alt.X('round_num:Q', title='审稿轮次', axis=alt.Axis(tickMinStep=1)),
+ y=alt.Y('Score:Q', title='综合评分', scale=alt.Scale(domain=[0, 10])),
+ tooltip=['Round', 'Score']
+ )
+
+ # Acceptance-threshold line
+ threshold = alt.Chart(pd.DataFrame({'y': [7.5]})).mark_rule(
+ color='#198754', strokeDash=[5, 5], strokeWidth=2
+ ).encode(y='y:Q')
+
+ # Rejection-threshold line
+ reject_line = alt.Chart(pd.DataFrame({'y': [4.5]})).mark_rule(
+ color='#dc3545', strokeDash=[5, 5], strokeWidth=2
+ ).encode(y='y:Q')
+
+ chart = (line + threshold + reject_line).properties(
+ title='评分收敛趋势 (绿线=接受阈值, 红线=拒稿阈值)',
+ height=250
+ ).interactive()
+
+ st.altair_chart(chart, use_container_width=True)
+
+
+def render_disagreement_heatmap():
+ """渲染审稿人分歧热力图"""
+ reviews = st.session_state.get('review_history', [])
+ if not reviews:
+ return
+
+ latest = reviews[-1]
+ reports = latest.get('reports', [])
+ if not reports:
+ return
+
+ import altair as alt
+ heatmap_data = []
+ for report in reports:
+ reviewer = report.get('reviewer_name', '?')
+ for dim, score in report.get('scores', {}).items():
+ dim_name = REVIEW_DIMENSIONS.get(dim, {}).get('name', dim)
+ heatmap_data.append({
+ 'Reviewer': reviewer,
+ 'Dimension': dim_name,
+ 'Score': score
+ })
+
+ if not heatmap_data:
+ return
+
+ df = pd.DataFrame(heatmap_data)
+
+ chart = alt.Chart(df).mark_rect().encode(
+ x=alt.X('Dimension:N', title='评审维度'),
+ y=alt.Y('Reviewer:N', title='审稿人'),
+ color=alt.Color('Score:Q', scale=alt.Scale(scheme='redyellowgreen', domain=[1, 10]),
+ title='评分'),
+ tooltip=['Reviewer', 'Dimension', 'Score']
+ ).properties(
+ title='审稿人评分矩阵',
+ height=200
+ )
+
+ st.altair_chart(chart, use_container_width=True)
+
+
+def generate_review_report_text() -> str:
+ """生成完整的审稿报告文本 (可下载)"""
+ ms_state = st.session_state.get('ms_state')
+ review_history = st.session_state.get('review_history', [])
+ revision_history = st.session_state.get('revision_history', [])
+ final_decision = st.session_state.get('final_decision', {})
+
+ lines = []
+ lines.append("=" * 60)
+ lines.append(" 分布式审稿协调系统 - 完整审稿报告")
+ lines.append(" Distributed Storage Coordination Agent Review Report")
+ lines.append("=" * 60)
+ lines.append("")
+
+ if ms_state:
+ lines.append(f"稿件编号: {ms_state.manuscript_id}")
+ lines.append(f"审稿轮数: {ms_state.current_round}")
+ lines.append(f"稿件版本: v{ms_state.current_version}")
+ lines.append(f"最终状态: {ms_state.status}")
+ if ms_state.versions:
+ v1 = ms_state.versions[0]
+ lines.append(f"论文标题: {v1.get('title', '未知')}")
+ lines.append(f"摘要: {v1.get('abstract', '')}")
+ lines.append("")
+
+ # Final decision
+ if final_decision:
+ lines.append("-" * 40)
+ lines.append("【最终决定】")
+ lines.append(f" 决定: {final_decision.get('decision', '未知')}")
+ lines.append(f" 原因: {final_decision.get('reason', '')}")
+ lines.append(f" 平均分: {final_decision.get('avg_score', 0):.1f}/10")
+ lines.append("")
+
+ # Per-round reviews
+ for i, rh in enumerate(review_history, 1):
+ lines.append("=" * 40)
+ lines.append(f"第 {i} 轮审稿")
+ lines.append("=" * 40)
+
+ synthesis = rh.get('synthesis', {})
+ lines.append(f"EIC建议: {synthesis.get('eic_recommendation', '?')}")
+ lines.append(f"综合评分: {synthesis.get('overall_avg', 0):.1f}/10")
+ lines.append(f"共识度: {synthesis.get('consensus_level', 1.0):.0%}")
+ lines.append(f"关键问题: {synthesis.get('critical_count', 0)} | "
+ f"重要问题: {synthesis.get('major_count', 0)} | "
+ f"小问题: {synthesis.get('minor_count', 0)}")
+ lines.append("")
+
+ for report in rh.get('reports', []):
+ lines.append(f" --- {report.get('reviewer_name', '?')} ---")
+ lines.append(f" 总评分: {report.get('overall_score', 0):.1f}/10")
+ lines.append(f" 建议: {report.get('decision', '?')}")
+ lines.append(f" 信心: {report.get('confidence', 0):.0%}")
+
+ scores = report.get('scores', {})
+ if scores:
+ lines.append(" 维度评分:")
+ for dim, score in scores.items():
+ dim_name = REVIEW_DIMENSIONS.get(dim, {}).get('name', dim)
+ lines.append(f" {dim_name}: {score:.1f}/10")
+
+ comments = report.get('comments', [])
+ if comments:
+ lines.append(" 审稿意见:")
+ for c in comments:
+ sev = c.get('severity', '')
+ cat = REVIEW_DIMENSIONS.get(c.get('category', ''), {}).get('name', c.get('category', ''))
+ resolution = c.get('resolution', '')
+ res_tag = f" [{resolution}]" if resolution and resolution != 'pending' else ""
+ lines.append(f" [{sev.upper()}][{cat}] {c.get('comment', '')}{res_tag}")
+ lines.append(f" 总结: {report.get('summary', '')}")
+ lines.append("")
+
+ # EIC guidance
+ guidance = synthesis.get('guidance', '')
+ if guidance:
+ lines.append(" EIC指导意见:")
+ for gl in guidance.split('\n'):
+ lines.append(f" {gl}")
+ lines.append("")
+
+ # Revision history
+ if revision_history:
+ lines.append("=" * 40)
+ lines.append("修改历史")
+ lines.append("=" * 40)
+
+ for i, rev in enumerate(revision_history, 1):
+ lines.append(f"\n第 {i} 次修改 (版本 v{rev.get('version_number', '?')})")
+ response = rev.get('response', {})
+ lines.append(f"策略: {response.get('strategy_summary', '')}")
+ lines.append(f"总意见数: {response.get('total_issues', 0)} | "
+ f"已解决: {response.get('addressed_count', 0)}")
+
+ for j, resp in enumerate(response.get('responses', []), 1):
+ conflict_tag = " [冲突]" if resp.get('is_conflict') else ""
+ lines.append(f" 问题{j} [{resp.get('severity', '').upper()}]"
+ f"[{resp.get('category', '')}]{conflict_tag}")
+ lines.append(f" 审稿意见: {resp.get('original_comment', '')}")
+ lines.append(f" 作者回复: {resp.get('response', '')}")
+ lines.append(f" 解决状态: {resp.get('resolution', '')}")
+ lines.append("")
+
+ # Comment-resolution summary
+ summary = CommentTracker.get_resolution_summary(review_history)
+ if summary['total'] > 0:
+ lines.append("=" * 40)
+ lines.append("意见解决状态摘要")
+ lines.append("=" * 40)
+ lines.append(f"总意见数: {summary['total']}")
+ lines.append(f"已解决: {summary['addressed']}")
+ lines.append(f"部分解决: {summary['partially_addressed']}")
+ lines.append(f"未解决: {summary['not_addressed']}")
+ lines.append(f"待定: {summary['pending']}")
+ lines.append(f"解决率: {summary['addressed'] / summary['total']:.0%}")
+
+ lines.append("\n" + "=" * 60)
+ lines.append("报告生成时间: " + datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
+ lines.append("=" * 60)
+
+ return "\n".join(lines)
+
+
+def generate_export_json() -> str:
+ """生成JSON格式的完整审稿数据 (可下载)"""
+ ms_state = st.session_state.get('ms_state')
+ export = {
+ 'manuscript': {
+ 'id': ms_state.manuscript_id if ms_state else '',
+ 'round': ms_state.current_round if ms_state else 0,
+ 'version': ms_state.current_version if ms_state else 0,
+ 'status': ms_state.status if ms_state else '',
+ 'versions': ms_state.versions if ms_state else [],
+ },
+ 'review_history': st.session_state.get('review_history', []),
+ 'revision_history': st.session_state.get('revision_history', []),
+ 'final_decision': st.session_state.get('final_decision', {}),
+ 'messages': st.session_state.get('store_messages', []),
+ 'export_time': datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+ }
+ return json.dumps(export, ensure_ascii=False, indent=2, default=str)
+
+
+def render_download_panel():
+ """渲染下载面板"""
+ st.markdown("### 📥 文档下载")
+
+ col1, col2, col3 = st.columns(3)
+
+ with col1:
+ report_text = generate_review_report_text()
+ ms_state = st.session_state.get('ms_state')
+ filename = f"review_report_{ms_state.manuscript_id if ms_state else 'draft'}.txt"
+ st.download_button(
+ label="📄 下载审稿报告 (TXT)",
+ data=report_text,
+ file_name=filename,
+ mime="text/plain",
+ use_container_width=True
+ )
+
+ with col2:
+ json_data = generate_export_json()
+ json_filename = f"review_data_{ms_state.manuscript_id if ms_state else 'draft'}.json"
+ st.download_button(
+ label="📊 下载完整数据 (JSON)",
+ data=json_data,
+ file_name=json_filename,
+ mime="application/json",
+ use_container_width=True
+ )
+
+ with col3:
+ # Score summary in CSV format
+ review_history = st.session_state.get('review_history', [])
+ csv_rows = []
+ for rh in review_history:
+ round_num = rh.get('round', 1)
+ for report in rh.get('reports', []):
+ row = {
+ '轮次': round_num,
+ '审稿人': report.get('reviewer_name', '?'),
+ '总评分': report.get('overall_score', 0),
+ '建议': report.get('decision', ''),
+ '信心': report.get('confidence', 0),
+ }
+ for dim, score in report.get('scores', {}).items():
+ dim_name = REVIEW_DIMENSIONS.get(dim, {}).get('name', dim)
+ row[dim_name] = score
+ csv_rows.append(row)
+
+ if csv_rows:
+ csv_df = pd.DataFrame(csv_rows)
+ csv_data = csv_df.to_csv(index=False, encoding='utf-8-sig')
+ csv_filename = f"review_scores_{ms_state.manuscript_id if ms_state else 'draft'}.csv"
+ st.download_button(
+ label="📈 下载评分数据 (CSV)",
+ data=csv_data,
+ file_name=csv_filename,
+ mime="text/csv",
+ use_container_width=True
+ )
+ else:
+ st.button("📈 评分数据 (暂无)", disabled=True, use_container_width=True)
+
+
+def render_message_flow():
+ """渲染消息流"""
+ store = st.session_state.get('store_messages', [])
+ if not store:
+ st.info("暂无消息记录")
+ return
+
+ for msg in store[-20:]:  # last 20 messages
+ role = msg.get('sender_role', '')
+ css_class = {
+ 'eic': 'message-bubble-eic',
+ 'reviewer': 'message-bubble-reviewer',
+ 'author': 'message-bubble-author'
+ }.get(role, '')
+ role_icon = {'eic': '🔴', 'reviewer': '🟢', 'author': '🔵'}.get(role, '⚪')
+ msg_type_label = {
+ 'submit': '提交', 'assign': '分配', 'review': '审稿',
+ 'feedback_summary': '综合反馈', 'revision': '修改',
+ 'decision': '决定', 're_review': '重审', 'final_decision': '最终决定'
+ }.get(msg.get('msg_type', ''), msg.get('msg_type', ''))
+
+ st.markdown(f"""
+
+ {role_icon} [{msg_type_label}]
+ {msg.get('sender', '')} → {msg.get('receiver', '')}
+
{msg.get('content', '')[:200]}
+
{msg.get('timestamp', '')}
+
+ """, unsafe_allow_html=True)
+
+
+def render_event_log():
+ """渲染事件日志"""
+ coord = st.session_state.get('store_coordination', {})
+ events = coord.get('event_log', [])
+
+ if events:
+ log_text = "\n".join([f"[{e['time']}] {e['event']}" for e in events[-30:]])
+ # <pre> wrapper reconstructed; the original markup was stripped from the
+ # diff, so the "log-box" class name is an assumption.
+ st.markdown(f'<pre class="log-box">{log_text}</pre>', unsafe_allow_html=True)
+ else:
+ st.info("暂无事件日志")
+
+
+def render_workflow_diagram(current_phase: str):
+ """渲染工作流状态图"""
+ phases = [
+ ("提交", "init"),
+ ("分配审稿", "reviewing"),
+ ("审稿评估", "reviewing"),
+ ("EIC综合", "feedback"),
+ ("作者修改", "revising"),
+ ("最终决定", "complete")
+ ]
+
+ PHASE_ORDER = {"init": 0, "reviewing": 1, "feedback": 3, "revising": 4, "complete": 5}
+ current_idx = PHASE_ORDER.get(current_phase, 0)
+
+ cols = st.columns(len(phases))
+ for i, (label, phase) in enumerate(phases):
+ with cols[i]:
+ if i == current_idx or (i == current_idx + 1 and phase == current_phase):
+ st.markdown(f"**▶ {label}**")
+ elif i < current_idx:
+ st.markdown(f"~~{label}~~ ✅")
+ else:
+ st.markdown(f"○ {label}")
+
+
+def main():
+ setup_page()
+ init_session_state()
+
+ # Heading markup reconstructed; the original HTML was stripped from the diff.
+ st.markdown('<h1>📝 分布式存储协调代理审稿修改模型</h1>', unsafe_allow_html=True)
+ st.caption("Distributed Storage Coordination Agent - Manuscript Review System | EIC ↔ Reviewer ↔ Author")
+
+ # Sidebar
+ title, abstract, max_rounds = render_sidebar()
+
+ # Workflow state
+ current_phase = st.session_state.get('current_phase', 'init')
+ render_workflow_diagram(current_phase)
+ st.divider()
+
+ # Agent status
+ render_agent_status()
+ st.divider()
+
+ ms_state = st.session_state.get('ms_state', None)
+
+ # Create the coordinator (inject the AI engine depending on the selected mode)
+ def _create_coordinator():
+ ai_engine = None
+ if st.session_state.get('use_ai') and st.session_state.get('api_key'):
+ model = st.session_state.get('ai_model', 'claude-haiku-4-5-20251001')
+ ai_engine = AIReviewEngine(st.session_state.api_key, model)
+ return ReviewCoordinator(ai_engine=ai_engine)
+
+ # ---- Submission phase ----
+ if current_phase == "init":
+ st.markdown("### 📤 步骤 1: 提交稿件")
+ mode_label = "🤖 AI 模式" if st.session_state.get('use_ai') and st.session_state.get('api_key') else "📋 模拟模式"
+ st.info(f"**标题:** {title}\n\n**摘要:** {abstract[:100]}...\n\n**当前模式:** {mode_label}")
+
+ col1, col2 = st.columns(2)
+ with col1:
+ if st.button("📨 提交稿件并开始审稿流程", type="primary", use_container_width=True):
+ coordinator = _create_coordinator()
+ ms = coordinator.init_manuscript(title, abstract)
+ ms.max_rounds = max_rounds
+
+ st.session_state.coordinator = coordinator
+ st.session_state.ms_state = ms
+ st.session_state.current_phase = "reviewing"
+ st.rerun()
+ with col2:
+ if st.button("🚀 自动运行完整流程", use_container_width=True):
+ coordinator = _create_coordinator()
+ ms = coordinator.init_manuscript(title, abstract)
+ ms.max_rounds = max_rounds
+
+ st.session_state.coordinator = coordinator
+ st.session_state.ms_state = ms
+ st.session_state.auto_running = True
+ st.session_state.current_phase = "reviewing"
+ st.rerun()
+
+ # ---- Review phase ----
+ elif current_phase == "reviewing":
+ ms_state = st.session_state.ms_state
+ coordinator = st.session_state.coordinator
+ render_metrics(ms_state)
+
+ is_ai = coordinator.ai_engine is not None
+ mode_tag = "🤖 AI" if is_ai else "📋 模拟"
+ st.markdown(f"### 🔍 第 {ms_state.current_round} 轮审稿 ({mode_tag})")
+
+ auto = st.session_state.get('auto_running', False)
+ spinner_msg = "🤖 Claude 正在生成审稿意见..." if is_ai else "审稿人正在审阅稿件..."
+ if auto or st.button(f"▶ 执行第 {ms_state.current_round} 轮审稿", type="primary"):
+ with st.spinner(spinner_msg):
+ result = coordinator.run_review_round(ms_state)
+ st.session_state.review_history.append(result)
+ st.session_state.current_phase = "feedback"
+ st.rerun()
+
+ # ---- Feedback phase ----
+ elif current_phase == "feedback":
+ ms_state = st.session_state.ms_state
+ coordinator = st.session_state.coordinator
+ render_metrics(ms_state)
+
+ reviews = st.session_state.review_history
+ if reviews:
+ latest = reviews[-1]
+ synthesis = latest.get('synthesis', {})
+
+ st.markdown(f"### 📊 第 {ms_state.current_round} 轮审稿结果")
+
+ # Review reports (plus the comment-tracking and disagreement-analysis tabs)
+ tab_reports, tab_synthesis, tab_tracker, tab_trend, tab_heatmap = st.tabs([
+ "📋 审稿报告", "📝 EIC综合意见", "🔍 意见追踪", "📈 评分趋势", "🗺 分歧分析"
+ ])
+
+ with tab_reports:
+ for report in latest.get('reports', []):
+ render_review_report(report, ms_state.current_round)
+
+ with tab_synthesis:
+ rec = synthesis.get('eic_recommendation', 'pending')
+ rec_labels = {
+ 'accept': ('✅ 接受', 'decision-accept'),
+ 'minor_revision': ('📝 小修', 'decision-revision'),
+ 'major_revision': ('📝 大修', 'decision-revision'),
+ 'reject': ('❌ 拒稿', 'decision-reject')
+ }
+ rec_label, rec_css = rec_labels.get(rec, ('?', ''))
+ st.markdown(f'<span class="{rec_css}">{rec_label}</span>', unsafe_allow_html=True)
+ st.markdown(f"**综合评分 (专家加权):** {synthesis.get('overall_avg', 0):.1f}/10")
+ consensus_level = synthesis.get('consensus_level', 1.0)
+ st.markdown(f"**共识度:** {consensus_level:.0%}")
+ st.markdown(f"**关键问题:** {synthesis.get('critical_count', 0)} | "
+ f"**重要问题:** {synthesis.get('major_count', 0)} | "
+ f"**小问题:** {synthesis.get('minor_count', 0)}")
+
+ # Disagreement warning
+ disagreements = synthesis.get('disagreement_dims', [])
+ if disagreements:
+ st.warning(f"检测到 {len(disagreements)} 个维度存在审稿人分歧")
+
+ guidance = synthesis.get('guidance', '')
+ if guidance:
+ st.text_area("EIC指导意见", guidance, height=300, disabled=True)
+
+ # Extra information in AI mode
+ ai_reasoning = synthesis.get('ai_reasoning', '')
+ ai_issues = synthesis.get('ai_key_issues', [])
+ if ai_reasoning:
+ st.markdown("**🤖 AI 决策推理:**")
+ st.markdown(ai_reasoning)
+ if ai_issues:
+ st.markdown("**🤖 AI 关键问题:**")
+ for iss in ai_issues:
+ st.markdown(f"- {iss}")
+
+ with tab_tracker:
+ render_comment_tracker()
+
+ with tab_trend:
+ render_score_trend()
+ render_convergence_chart()
+
+ with tab_heatmap:
+ render_disagreement_heatmap()
+
+ # Check whether the process is finished (new signature: pass in the manuscript)
+ is_complete, reason = coordinator.is_process_complete(ms_state, synthesis)
+ if is_complete:
+ # Make the formal final decision
+ final = coordinator.eic.make_final_decision(ms_state)
+ st.session_state.final_decision = final
+ st.session_state.termination_reason = reason
+ if final['decision'] in [ReviewDecision.ACCEPT.value, ReviewDecision.MINOR_REVISION.value]:
+ ms_state.status = "accepted"
+ else:
+ ms_state.status = "rejected"
+ st.session_state.current_phase = "complete"
+ st.session_state.process_complete = True
+ st.rerun()
+ else:
+ auto = st.session_state.get('auto_running', False)
+ if auto or st.button("📝 作者开始修改", type="primary"):
+ st.session_state.current_phase = "revising"
+ st.rerun()
+
+ # ---- Revision phase ----
+ elif current_phase == "revising":
+ ms_state = st.session_state.ms_state
+ coordinator = st.session_state.coordinator
+ render_metrics(ms_state)
+
+ reviews = st.session_state.review_history
+ latest = reviews[-1] if reviews else {}
+ synthesis = latest.get('synthesis', {})
+
+ st.markdown(f"### ✏ 作者修改 (第 {ms_state.current_round} 轮)")
+
+ auto = st.session_state.get('auto_running', False)
+ if auto or st.button("▶ 执行修改并提交", type="primary"):
+ with st.spinner("作者正在修改稿件..."):
+ revision = coordinator.author_revise(ms_state, synthesis)
+ st.session_state.revision_history.append(revision)
+
+ # Advance to the next round
+ coordinator.advance_round(ms_state)
+ st.session_state.current_phase = "reviewing"
+ st.rerun()
+
+ # Show the pending issues
+ if synthesis:
+ st.markdown("**待回复的审稿意见:**")
+ all_issues = (
+ synthesis.get('critical_issues', []) +
+ synthesis.get('major_issues', []) +
+ synthesis.get('minor_issues', [])
+ )
+ for i, issue in enumerate(all_issues, 1):
+ severity = issue.get('severity', 'minor')
+ icon = {'critical': '🔴', 'major': '🟠', 'minor': '🟡'}.get(severity, '⚪')
+ st.markdown(f"{icon} {i}. [{issue.get('category', '')}] {issue.get('comment', '')}")
+
+ # ---- Completion phase ----
+ elif current_phase == "complete":
+ ms_state = st.session_state.ms_state
+ render_metrics(ms_state)
+
+ reviews = st.session_state.review_history
+ latest = reviews[-1] if reviews else {}
+ synthesis = latest.get('synthesis', {})
+
+ # Use the formal final decision (if available)
+ final_decision = st.session_state.get('final_decision', {})
+ rec = final_decision.get('decision', synthesis.get('eic_recommendation', 'pending'))
+ reason = st.session_state.get('termination_reason', '')
+
+ st.markdown("### 🏁 审稿流程完成")
+
+ reason_labels = {
+ 'accepted': '审稿人共识接受',
+ 'rejected': '审稿人建议拒稿',
+ 'max_rounds_reached': f'达到最大轮数 ({ms_state.max_rounds})',
+ 'minor_revision_accepted': '小修后无关键问题,予以接受',
+ 'diminishing_returns': '评分收敛,不再有显著提升'
+ }
+ reason_text = reason_labels.get(reason, '')
+
+ if rec in [ReviewDecision.ACCEPT.value, ReviewDecision.MINOR_REVISION.value]:
+ st.success(f"🎉 恭喜!稿件经过 {ms_state.current_round} 轮审稿后被接受!")
+ if reason_text:
+ st.info(f"终止原因: {reason_text}")
+ st.balloons()
+ else:
+ st.error(f"稿件在第 {ms_state.current_round} 轮后被拒稿。")
+ if reason_text:
+ st.info(f"终止原因: {reason_text}")
+
+ if final_decision.get('reason'):
+ st.markdown(f"**EIC最终意见:** {final_decision['reason']}")
+
+ # Full history (multiple tabs)
+ tab_history, tab_revisions, tab_tracker, tab_diff, tab_convergence = st.tabs([
+ "📜 审稿历史", "📝 修改历史", "🔍 意见追踪", "📄 版本对比", "📈 收敛趋势"
+ ])
+
+ with tab_history:
+ for i, rh in enumerate(reviews, 1):
+ with st.expander(f"第 {i} 轮审稿", expanded=(i == len(reviews))):
+ for report in rh.get('reports', []):
+ render_review_report(report, i)
+ syn = rh.get('synthesis', {})
+ if syn:
+ st.markdown(f"**EIC建议:** {syn.get('eic_recommendation', '')} | "
+ f"**综合评分:** {syn.get('overall_avg', 0):.1f} | "
+ f"**共识度:** {syn.get('consensus_level', 1.0):.0%}")
+
+ with tab_revisions:
+ if st.session_state.revision_history:
+ for i, rev in enumerate(st.session_state.revision_history, 1):
+ with st.expander(f"第 {i} 次修改 (版本 {rev.get('version_number', '?')})"):
+ render_revision_response(rev)
+ else:
+ st.info("无修改记录")
+
+ with tab_tracker:
+ render_comment_tracker()
+
+ with tab_diff:
+ render_version_diff()
+
+ with tab_convergence:
+ render_convergence_chart()
+ render_score_trend()
+
+ # Download panel
+ st.divider()
+ render_download_panel()
+
+ # ---- Bottom panels ----
+ st.divider()
+ tab_msg, tab_log, tab_store, tab_vdiff = st.tabs(["💬 消息流", "📋 事件日志", "🗄 存储状态", "📄 版本对比"])
+
+ with tab_msg:
+ render_message_flow()
+
+ with tab_log:
+ render_event_log()
+
+ with tab_store:
+ st.markdown("**分布式存储分区状态:**")
+ col1, col2 = st.columns(2)
+ with col1:
+ ms_count = len(st.session_state.get('store_manuscripts', {}))
+ review_count = len(st.session_state.get('store_reviews', {}))
+ msg_count = len(st.session_state.get('store_messages', []))
+ st.metric("稿件存储分区", f"{ms_count} 份")
+ st.metric("审稿报告分区", f"{review_count} 组")
+ with col2:
+ agent_count = len(st.session_state.get('store_agent_states', {}))
+ st.metric("消息队列", f"{msg_count} 条")
+ st.metric("代理状态分区", f"{agent_count} 个")
+
+ with tab_vdiff:
+ render_version_diff()
+
+
+if __name__ == "__main__":
+ main()
diff --git a/mcp_review_server.py b/mcp_review_server.py
new file mode 100644
index 000000000000..7ef7a5ee2e83
--- /dev/null
+++ b/mcp_review_server.py
@@ -0,0 +1,906 @@
+#!/usr/bin/env python3
+"""
+分布式审稿协调系统 - MCP Server Plugin
+Manuscript Review Coordination MCP Server
+
+作为 MCP 插件运行,提供以下工具:
+- init_review: 初始化审稿流程
+- run_review: 执行一轮审稿
+- author_revise: 作者修改稿件
+- get_status: 查看当前状态
+- get_report: 获取审稿报告
+- next_round: 推进到下一轮
+- export_report: 导出完整报告
+
+安装: pip install mcp
+运行: python mcp_review_server.py
+"""
+
+import json
+import uuid
+import random
+import hashlib
+from datetime import datetime
+from dataclasses import dataclass, field, asdict
+from typing import Any
+from enum import Enum
+
+try:
+ from mcp.server.fastmcp import FastMCP
+except ImportError:
+ print("请先安装 mcp: pip install mcp")
+ raise
+
+
+# ============================================================
+# Data models
+# ============================================================
+
+class ReviewDecision(str, Enum):
+ ACCEPT = "accept"
+ MINOR_REVISION = "minor_revision"
+ MAJOR_REVISION = "major_revision"
+ REJECT = "reject"
+ PENDING = "pending"
+
+
+class CommentResolution(str, Enum):
+ PENDING = "pending"
+ ADDRESSED = "addressed"
+ PARTIALLY_ADDRESSED = "partially_addressed"
+ NOT_ADDRESSED = "not_addressed"
+
+
+REVIEW_DIMENSIONS = {
+ "novelty": {"name": "创新性", "weight": 0.25},
+ "methodology": {"name": "方法论", "weight": 0.25},
+ "writing": {"name": "写作质量", "weight": 0.15},
+ "significance": {"name": "研究意义", "weight": 0.20},
+ "data_analysis": {"name": "数据分析", "weight": 0.15},
+}
+
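五个维度的权重设计上应恰好归一 (加权总分才落在 1–10 区间)。下面是一个独立的小验证, 取值与上文 `REVIEW_DIMENSIONS` 一致, 变量名为示意:

```python
# 审稿维度权重 (与 REVIEW_DIMENSIONS 中的取值一致)
weights = {
    "novelty": 0.25, "methodology": 0.25, "writing": 0.15,
    "significance": 0.20, "data_analysis": 0.15,
}

# 浮点累加可能有微小误差, 用 round 做归一检查
total = round(sum(weights.values()), 6)
print(total)  # → 1.0
```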
+REVIEWER_PROFILES = [
+ {
+ "id": "rev_001", "name": "审稿人A (方法论专家)",
+ "expertise": ["methodology", "data_analysis"],
+ "strictness": 0.7, "personality": "严谨型"
+ },
+ {
+ "id": "rev_002", "name": "审稿人B (领域专家)",
+ "expertise": ["novelty", "significance"],
+ "strictness": 0.4, "personality": "开放型"
+ },
+ {
+ "id": "rev_003", "name": "审稿人C (写作审查)",
+ "expertise": ["writing", "methodology"],
+ "strictness": 0.6, "personality": "细致型"
+ },
+]
+
+REVIEW_COMMENT_POOL = {
+ "novelty": {
+ "critical": ["研究缺乏新意,与已有工作重复度高", "创新点不明确,需重新定位"],
+ "major": ["创新性论证不足,需加强与现有方法的对比", "需更清晰地阐述核心贡献"],
+ "minor": ["建议补充相关领域最新进展的讨论"],
+ },
+ "methodology": {
+ "critical": ["研究方法存在根本性缺陷", "实验设计不合理,无法支撑结论"],
+ "major": ["需补充更多实验验证", "方法描述不够详细,难以复现"],
+ "minor": ["建议增加方法论的理论推导"],
+ },
+ "writing": {
+ "critical": ["文章结构混乱,逻辑不通", "大量语法错误影响理解"],
+ "major": ["部分段落表述不清", "图表质量需要提升"],
+ "minor": ["个别用词不够准确", "参考文献格式需统一"],
+ },
+ "significance": {
+ "critical": ["研究意义不明确", "应用价值有限"],
+ "major": ["需加强研究影响力的论证", "应用场景描述不足"],
+ "minor": ["建议讨论未来研究方向"],
+ },
+ "data_analysis": {
+ "critical": ["数据分析方法不当", "统计检验缺失"],
+ "major": ["需补充更多统计分析", "数据可视化需改进"],
+ "minor": ["建议增加敏感性分析"],
+ },
+}
+
+AUTHOR_RESPONSE_TEMPLATES = {
+ "critical": [
+ "感谢审稿人指出此关键问题。我们已{action}。{detail}",
+ "非常重要的意见。我们已彻底重新设计了{action},{detail}",
+ ],
+ "major": [
+ "感谢建议。我们已{action},具体修改见第{section}节。",
+ "已按建议{action},{result}",
+ ],
+ "minor": [
+ "已修改。{action}",
+ "感谢提醒,已{action}",
+ ],
+}
+
+REVISION_ACTIONS = {
+ "novelty": ["重新梳理了创新点", "补充了与现有方法的详细对比", "明确了核心贡献"],
+ "methodology": ["完善了研究方法", "补充了实验验证", "增加了方法论的理论推导"],
+ "writing": ["全文润色修改", "重新组织了文章结构", "改进了图表质量"],
+ "significance": ["加强了研究意义论证", "补充了应用场景", "增加了影响力分析"],
+ "data_analysis": ["补充了统计分析", "改进了数据可视化", "增加了敏感性分析"],
+}
+
+
+# ============================================================
+# 核心引擎 (无 Streamlit 依赖)
+# ============================================================
+
+class ReviewStore:
+ """审稿数据存储"""
+
+ def __init__(self):
+ self.manuscripts = {} # {ms_id: [versions]}
+ self.reviews = {} # {ms_id: {round: [reports]}}
+ self.messages = []
+ self.agent_states = {}
+
+ def store_version(self, ms_id: str, version: dict):
+ if ms_id not in self.manuscripts:
+ self.manuscripts[ms_id] = []
+ self.manuscripts[ms_id].append(version)
+
+    def get_latest_version(self, ms_id: str) -> dict | None:
+        versions = self.manuscripts.get(ms_id, [])
+        return versions[-1] if versions else None
+
+ def get_all_versions(self, ms_id: str) -> list:
+ return self.manuscripts.get(ms_id, [])
+
+ def store_review(self, ms_id: str, round_num: int, report: dict):
+ if ms_id not in self.reviews:
+ self.reviews[ms_id] = {}
+ if round_num not in self.reviews[ms_id]:
+ self.reviews[ms_id][round_num] = []
+ self.reviews[ms_id][round_num].append(report)
+
+ def get_reviews(self, ms_id: str, round_num: int) -> list:
+ return self.reviews.get(ms_id, {}).get(round_num, [])
+
+ def add_message(self, msg: dict):
+ self.messages.append(msg)
+
+
+class EICEngine:
+ """EIC 决策引擎"""
+
+ def synthesize(self, reviews: list) -> dict:
+ """综合审稿意见 (含专家加权 + 共识分析)"""
+ all_comments = []
+ score_data = {}
+ decisions = []
+
+ for review in reviews:
+ decisions.append(review.get("decision", "pending"))
+ rid = review.get("reviewer_id", "")
+ profile = next((p for p in REVIEWER_PROFILES if p["id"] == rid), None)
+ expertise = profile.get("expertise", []) if profile else []
+
+ for dim, score in review.get("scores", {}).items():
+ if dim not in score_data:
+ score_data[dim] = []
+ score_data[dim].append({
+ "score": score, "reviewer_id": rid,
+ "is_expert": dim in expertise
+ })
+ for c in review.get("comments", []):
+ all_comments.append(c)
+
+ # 专家加权评分
+ weighted_avg = {}
+ simple_avg = {}
+ variance = {}
+ disagreement_dims = []
+
+ for dim, entries in score_data.items():
+ scores = [e["score"] for e in entries]
+ s_avg = sum(scores) / len(scores)
+ weights = [1.5 if e["is_expert"] else 1.0 for e in entries]
+ w_avg = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
+ var = sum((s - s_avg) ** 2 for s in scores) / len(scores) if len(scores) > 1 else 0
+
+ weighted_avg[dim] = round(w_avg, 2)
+ simple_avg[dim] = round(s_avg, 2)
+ variance[dim] = round(var, 2)
+
+ if len(scores) >= 2 and max(scores) - min(scores) > 2.0:
+ disagreement_dims.append({
+ "dimension": dim,
+ "name": REVIEW_DIMENSIONS.get(dim, {}).get("name", dim),
+ "max_diff": round(max(scores) - min(scores), 2),
+ "scores": {e["reviewer_id"]: e["score"] for e in entries},
+ })
+
+ # 共识度
+ all_var = list(variance.values())
+ consensus_level = round(max(0, 1 - sum(all_var) / len(all_var) / 10), 2) if all_var else 1.0
+
+ # 综合评分
+ overall = sum(
+ weighted_avg.get(d, 0) * REVIEW_DIMENSIONS[d]["weight"]
+ for d in weighted_avg
+ ) if weighted_avg else 0
+
+ critical = [c for c in all_comments if c.get("severity") == "critical"]
+ major = [c for c in all_comments if c.get("severity") == "major"]
+ minor = [c for c in all_comments if c.get("severity") == "minor"]
+ has_critical = len(critical) > 0
+
+ # 决策
+ if overall >= 8.0 and not has_critical:
+ rec = ReviewDecision.ACCEPT.value
+ elif overall >= 6.0 and not has_critical:
+ rec = ReviewDecision.MINOR_REVISION.value
+ elif overall >= 4.0 or has_critical:
+ rec = ReviewDecision.MAJOR_REVISION.value
+ else:
+ rec = ReviewDecision.REJECT.value
+
+ return {
+ "overall_avg": round(overall, 2),
+ "weighted_avg": weighted_avg,
+ "simple_avg": simple_avg,
+ "consensus_level": consensus_level,
+ "disagreement_dims": disagreement_dims,
+ "eic_recommendation": rec,
+ "reviewer_decisions": decisions,
+ "critical_count": len(critical),
+ "major_count": len(major),
+ "minor_count": len(minor),
+ "critical_issues": critical,
+ "major_issues": major,
+ "minor_issues": minor,
+ }
+
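`EICEngine.synthesize` 中的专家加权可以单独抽出来演示: 审稿人对自己专长维度的评分取 1.5 倍权重, 其余取 1.0。以下函数名为示意, 并非模块自身的 API:

```python
def weighted_dim_average(entries):
    """entries: [{"score": float, "is_expert": bool}, ...] — 专家评分取 1.5 倍权重"""
    weights = [1.5 if e["is_expert"] else 1.0 for e in entries]
    scores = [e["score"] for e in entries]
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

entries = [
    {"score": 8.0, "is_expert": True},   # 专家评分, 权重 1.5
    {"score": 5.0, "is_expert": False},  # 非专家评分, 权重 1.0
]
# (8.0*1.5 + 5.0*1.0) / (1.5 + 1.0) = 17.0 / 2.5
print(weighted_dim_average(entries))  # → 6.8
```

可以看到, 专家意见把该维度均值从简单平均的 6.5 拉到了 6.8。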
+
+class ReviewerEngine:
+ """审稿人引擎"""
+
+ def review(self, profile: dict, round_num: int,
+ prev_review: dict = None, addressed_ratio: float = 0.0) -> dict:
+ """生成审稿报告"""
+ expertise = profile.get("expertise", [])
+ strictness = profile.get("strictness", 0.5)
+
+ base = random.uniform(4.5, 8.5)
+ if prev_review:
+ prev_avg = prev_review.get("overall_score", base)
+ improvement = addressed_ratio * random.uniform(1.0, 2.5) + random.uniform(0.0, 0.5)
+ base = min(prev_avg + improvement, 9.5)
+
+ scores = {}
+ for dim in REVIEW_DIMENSIONS:
+ if dim in expertise:
+ s = base + random.uniform(-1.0, 0.5) - (strictness * 0.5)
+ else:
+ s = base + random.uniform(-1.5, 1.0)
+ scores[dim] = round(max(1.0, min(10.0, s)), 1)
+
+ overall = sum(scores[d] * REVIEW_DIMENSIONS[d]["weight"] for d in scores)
+
+ # 生成意见
+ comments = []
+ for dim, score in scores.items():
+ pool_map = REVIEW_COMMENT_POOL.get(dim, {})
+ if score < 4.5 and pool_map.get("critical"):
+ sev, pool = "critical", pool_map["critical"]
+ elif score < 6.5 and pool_map.get("major"):
+ sev, pool = "major", pool_map["major"]
+ elif score < 8.0 and pool_map.get("minor"):
+ sev, pool = "minor", pool_map["minor"]
+ else:
+ continue
+ n = 2 if dim in expertise else 1
+ for text in random.sample(pool, min(n, len(pool))):
+ comments.append({
+ "id": str(uuid.uuid4())[:8],
+ "reviewer_id": profile["id"],
+ "category": dim,
+ "severity": sev,
+ "comment": text,
+ "resolution": CommentResolution.PENDING.value,
+ "round_created": round_num,
+ })
+
+ # 一致性: 低分维度必须有对应意见
+ comment_dims = {(c["category"], c["severity"]) for c in comments}
+ for dim, score in scores.items():
+ if score < 5.0:
+ if not any(cat == dim and sev in ("critical", "major") for cat, sev in comment_dims):
+ pool = REVIEW_COMMENT_POOL.get(dim, {}).get("major", [])
+ if pool:
+ comments.append({
+ "id": str(uuid.uuid4())[:8],
+ "reviewer_id": profile["id"],
+ "category": dim, "severity": "major",
+ "comment": random.choice(pool),
+ "resolution": CommentResolution.PENDING.value,
+ "round_created": round_num,
+ })
+
+ if overall >= 8.0:
+ dec = ReviewDecision.ACCEPT.value
+ elif overall >= 6.5:
+ dec = ReviewDecision.MINOR_REVISION.value
+ elif overall >= 4.5:
+ dec = ReviewDecision.MAJOR_REVISION.value
+ else:
+ dec = ReviewDecision.REJECT.value
+
+ n_expert = sum(1 for d in scores if d in expertise)
+ confidence = round(min(0.95, 0.6 + 0.1 * n_expert + random.uniform(-0.05, 0.05)), 2)
+
+ strong = [REVIEW_DIMENSIONS[d]["name"] for d, s in scores.items() if s >= 7.0]
+ weak = [REVIEW_DIMENSIONS[d]["name"] for d, s in scores.items() if s < 6.0]
+
+ return {
+ "reviewer_id": profile["id"],
+ "reviewer_name": profile["name"],
+ "round": round_num,
+ "decision": dec,
+ "overall_score": round(overall, 2),
+ "scores": scores,
+ "comments": comments,
+ "confidence": confidence,
+ "summary": f"优势: {', '.join(strong) if strong else '无'} | 待改进: {', '.join(weak) if weak else '无'} | 建议: {dec}",
+ }
+
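`ReviewerEngine.review` 按维度分数决定意见严重度, 其阈值逻辑等价于下面这个独立小函数 (函数名为示意; 实际代码里还要求对应维度的意见池非空):

```python
def severity_for(score):
    """与上文一致的阈值: <4.5 critical, <6.5 major, <8.0 minor, 否则不生成意见"""
    if score < 4.5:
        return "critical"
    if score < 6.5:
        return "major"
    if score < 8.0:
        return "minor"
    return None

print([severity_for(s) for s in (3.0, 5.0, 7.5, 9.0)])
# → ['critical', 'major', 'minor', None]
```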
+
+class AuthorEngine:
+ """作者引擎"""
+
+ def respond_and_revise(self, synthesis: dict, prev_sections: dict,
+ version_num: int) -> dict:
+ """处理反馈并生成修改版本"""
+ responses = []
+ modified_categories = set()
+
+ priority_groups = [
+ ("critical", synthesis.get("critical_issues", [])),
+ ("major", synthesis.get("major_issues", [])),
+ ("minor", synthesis.get("minor_issues", [])),
+ ]
+
+ for severity, issues in priority_groups:
+ for comment in issues:
+ cat = comment.get("category", "general")
+ actions = REVISION_ACTIONS.get(cat, ["进行了相应修改"])
+ action = random.choice(actions)
+ templates = AUTHOR_RESPONSE_TEMPLATES.get(severity, AUTHOR_RESPONSE_TEMPLATES["minor"])
+ template = random.choice(templates)
+ resp_text = template.format(
+ action=action,
+ detail=f"见{cat}相关章节",
+ result="改进效果显著",
+ section=random.choice(["2", "3", "4", "5"]),
+ )
+ responses.append({
+ "original_comment": comment.get("comment", ""),
+ "category": cat,
+ "severity": severity,
+ "response": resp_text,
+ "addressed": True,
+ "resolution": CommentResolution.ADDRESSED.value,
+ })
+ modified_categories.add(cat)
+
+ # 生成新章节内容
+ cat_to_sec = {
+ "novelty": "introduction", "methodology": "methodology",
+ "data_analysis": "results", "significance": "discussion",
+ "writing": "conclusion",
+ }
+ new_sections = {}
+ for sec, content in prev_sections.items():
+ modified_by = [c for c, s in cat_to_sec.items() if s == sec and c in modified_categories]
+ if modified_by:
+ new_sections[sec] = content + f"\n[v{version_num} 修改: 回应{','.join(modified_by)}意见]"
+ else:
+ new_sections[sec] = content
+
+ return {
+ "responses": responses,
+ "total_issues": sum(len(issues) for _, issues in priority_groups),
+ "addressed_count": len(responses),
+ "new_sections": new_sections,
+ "revision_notes": "\n".join(
+ f"[{r['severity'].upper()}][{r['category']}] {r['original_comment']}\n -> {r['response']}"
+ for r in responses
+ ),
+ }
+
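`AuthorEngine` 对所有回复模板统一传入 action/detail/result/section 四个参数, 依赖的是 `str.format` 会忽略未被占位符引用的关键字参数这一特性, 可以这样验证 (模板取自上文 `AUTHOR_RESPONSE_TEMPLATES`):

```python
t_major = "已按建议{action},{result}"
t_minor = "已修改。{action}"

kwargs = {"action": "补充了实验验证", "detail": "见第3节",
          "result": "改进效果显著", "section": "3"}

# 两个模板引用的占位符不同, 多余的关键字参数会被 format 静默忽略
print(t_major.format(**kwargs))  # → 已按建议补充了实验验证,改进效果显著
print(t_minor.format(**kwargs))  # → 已修改。补充了实验验证
```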
+
+# ============================================================
+# 会话管理器
+# ============================================================
+
+class ReviewSession:
+ """单个审稿会话"""
+
+ def __init__(self, title: str, abstract: str, max_rounds: int = 5):
+ self.ms_id = str(uuid.uuid4())[:8]
+ self.title = title
+ self.abstract = abstract
+ self.max_rounds = max_rounds
+ self.current_round = 0
+ self.current_version = 1
+ self.status = "initialized"
+ self.store = ReviewStore()
+ self.eic = EICEngine()
+ self.reviewer_engine = ReviewerEngine()
+ self.author_engine = AuthorEngine()
+ self.score_history = []
+ self.review_history = []
+ self.revision_history = []
+
+ # 存储初始版本
+ initial = {
+ "version": 1, "title": title, "abstract": abstract,
+ "content_sections": {
+ "introduction": "引言内容...", "methodology": "方法论内容...",
+ "results": "结果内容...", "discussion": "讨论内容...",
+ "conclusion": "结论内容...",
+ },
+ "content_hash": hashlib.md5(title.encode()).hexdigest()[:12],
+ "timestamp": datetime.now().isoformat(),
+ }
+ self.store.store_version(self.ms_id, initial)
+
+ def to_status(self) -> dict:
+ return {
+ "manuscript_id": self.ms_id,
+ "title": self.title,
+ "current_round": self.current_round,
+ "current_version": self.current_version,
+ "max_rounds": self.max_rounds,
+ "status": self.status,
+ "score_history": self.score_history,
+ "total_reviews": len(self.review_history),
+ "total_revisions": len(self.revision_history),
+ }
+
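每个稿件版本都带一个 12 位内容指纹 (见 `author_revise` 中存版本处); 这一做法可独立演示如下, `sort_keys=True` 保证键序不同的等价章节字典得到相同哈希:

```python
import hashlib
import json

sections = {"introduction": "引言内容...", "results": "结果内容..."}
digest = hashlib.md5(
    json.dumps(sections, sort_keys=True).encode()
).hexdigest()[:12]

# 键序不同的等价字典得到相同指纹
reordered = {"results": "结果内容...", "introduction": "引言内容..."}
digest2 = hashlib.md5(
    json.dumps(reordered, sort_keys=True).encode()
).hexdigest()[:12]

print(digest == digest2, len(digest))  # → True 12
```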
+
+# 全局会话存储
+_sessions: dict[str, ReviewSession] = {}
+
+
+# ============================================================
+# MCP Server
+# ============================================================
+
+mcp = FastMCP(
+ "manuscript-review",
+ instructions=(
+ "分布式审稿协调系统 MCP 插件。"
+ "提供学术稿件多轮审稿流程模拟,包含 EIC(主编)、Reviewer(审稿人)、Author(作者) 三个代理角色。"
+ "使用流程: init_review → run_review → (查看结果) → author_revise → next_round → run_review → ... → export_report"
+ ),
+)
+
+
+@mcp.tool()
+def init_review(title: str, abstract: str, max_rounds: int = 5) -> str:
+ """
+ 初始化审稿流程。创建新的审稿会话。
+
+ Args:
+ title: 论文标题
+ abstract: 论文摘要
+ max_rounds: 最大审稿轮数 (默认5)
+
+ Returns:
+ 会话ID和初始状态
+ """
+ session = ReviewSession(title, abstract, max_rounds)
+ _sessions[session.ms_id] = session
+
+ return json.dumps({
+        "message": "审稿会话已创建",
+ "session_id": session.ms_id,
+ "title": title,
+ "max_rounds": max_rounds,
+ "reviewers": [p["name"] for p in REVIEWER_PROFILES],
+ "next_step": "调用 run_review 开始第一轮审稿",
+ }, ensure_ascii=False, indent=2)
+
+
+@mcp.tool()
+def run_review(session_id: str) -> str:
+ """
+ 执行一轮审稿。所有审稿人对当前版本进行评审,EIC综合反馈。
+
+ Args:
+ session_id: 会话ID (从 init_review 获取)
+
+ Returns:
+ 审稿结果:各审稿人评分、意见、EIC综合建议
+ """
+ session = _sessions.get(session_id)
+ if not session:
+ return json.dumps({"error": f"会话 {session_id} 不存在"}, ensure_ascii=False)
+
+ session.current_round += 1
+ is_re_review = session.current_round > 1
+
+ # 各审稿人审稿
+ reports = []
+ for profile in REVIEWER_PROFILES:
+ prev_review = None
+ addressed_ratio = 0.5
+ if is_re_review:
+ prev_reviews = session.store.get_reviews(session.ms_id, session.current_round - 1)
+ prev_review = next((r for r in prev_reviews if r.get("reviewer_id") == profile["id"]), None)
+ if session.revision_history:
+ total = session.revision_history[-1].get("total_issues", 1)
+ addressed = session.revision_history[-1].get("addressed_count", 0)
+ addressed_ratio = addressed / max(total, 1)
+
+ report = session.reviewer_engine.review(
+ profile, session.current_round, prev_review, addressed_ratio
+ )
+ session.store.store_review(session.ms_id, session.current_round, report)
+ reports.append(report)
+
+ # EIC综合
+ synthesis = session.eic.synthesize(reports)
+ session.score_history.append(synthesis["overall_avg"])
+ session.review_history.append({
+ "round": session.current_round,
+ "reports": reports,
+ "synthesis": synthesis,
+ })
+ session.status = "reviewed"
+
+ # 判断是否结束
+ is_complete, reason = _check_complete(session, synthesis)
+
+ result = {
+ "round": session.current_round,
+ "eic_recommendation": synthesis["eic_recommendation"],
+ "overall_score": synthesis["overall_avg"],
+ "consensus_level": f"{synthesis['consensus_level']:.0%}",
+ "issues": {
+ "critical": synthesis["critical_count"],
+ "major": synthesis["major_count"],
+ "minor": synthesis["minor_count"],
+ },
+ "reviewer_reports": [
+ {
+ "name": r["reviewer_name"],
+ "score": r["overall_score"],
+ "decision": r["decision"],
+ "confidence": f"{r['confidence']:.0%}",
+ "summary": r["summary"],
+ "comments": [
+ f"[{c['severity'].upper()}][{REVIEW_DIMENSIONS.get(c['category'], {}).get('name', c['category'])}] {c['comment']}"
+ for c in r["comments"]
+ ],
+ }
+ for r in reports
+ ],
+ }
+
+ if synthesis["disagreement_dims"]:
+ result["disagreements"] = [
+ f"{d['name']}: 分差 {d['max_diff']:.1f}"
+ for d in synthesis["disagreement_dims"]
+ ]
+
+ if is_complete:
+ session.status = "accepted" if synthesis["eic_recommendation"] in ["accept", "minor_revision"] else "rejected"
+ result["process_complete"] = True
+ result["termination_reason"] = reason
+ result["next_step"] = "审稿完成!调用 export_report 导出报告"
+ else:
+ result["process_complete"] = False
+ result["next_step"] = "调用 author_revise 进行修改,然后 next_round + run_review"
+
+ return json.dumps(result, ensure_ascii=False, indent=2)
+
+
+@mcp.tool()
+def author_revise(session_id: str) -> str:
+ """
+ 作者根据审稿意见修改稿件。自动生成逐条回复和修改版本。
+
+ Args:
+ session_id: 会话ID
+
+ Returns:
+ 修改结果:逐条回复和新版本信息
+ """
+ session = _sessions.get(session_id)
+ if not session:
+ return json.dumps({"error": f"会话 {session_id} 不存在"}, ensure_ascii=False)
+
+ if not session.review_history:
+ return json.dumps({"error": "尚无审稿结果,请先调用 run_review"}, ensure_ascii=False)
+
+ synthesis = session.review_history[-1]["synthesis"]
+ prev_version = session.store.get_latest_version(session.ms_id)
+ prev_sections = prev_version.get("content_sections", {}) if prev_version else {}
+
+ session.current_version += 1
+ result = session.author_engine.respond_and_revise(
+ synthesis, prev_sections, session.current_version
+ )
+
+ # 存储新版本
+ new_version = {
+ "version": session.current_version,
+ "title": session.title,
+ "abstract": session.abstract,
+ "content_sections": result["new_sections"],
+ "revision_notes": result["revision_notes"],
+ "content_hash": hashlib.md5(
+ json.dumps(result["new_sections"], sort_keys=True).encode()
+ ).hexdigest()[:12],
+ "timestamp": datetime.now().isoformat(),
+ }
+ session.store.store_version(session.ms_id, new_version)
+ session.revision_history.append(result)
+ session.status = "revised"
+
+ output = {
+ "version": session.current_version,
+ "total_issues": result["total_issues"],
+ "addressed": result["addressed_count"],
+ "responses": [
+ {
+ "severity": r["severity"],
+ "category": REVIEW_DIMENSIONS.get(r["category"], {}).get("name", r["category"]),
+ "original": r["original_comment"],
+ "reply": r["response"],
+ }
+ for r in result["responses"]
+ ],
+ "next_step": "调用 next_round 推进到下一轮审稿",
+ }
+
+ return json.dumps(output, ensure_ascii=False, indent=2)
+
+
+@mcp.tool()
+def next_round(session_id: str) -> str:
+ """
+ 推进到下一轮审稿。在 author_revise 之后调用。
+
+ Args:
+ session_id: 会话ID
+
+ Returns:
+ 下一轮状态
+ """
+ session = _sessions.get(session_id)
+ if not session:
+ return json.dumps({"error": f"会话 {session_id} 不存在"}, ensure_ascii=False)
+
+ if session.status not in ("revised", "reviewed"):
+ return json.dumps({"error": f"当前状态 '{session.status}',不能推进"}, ensure_ascii=False)
+
+ session.status = "ready_for_review"
+
+ return json.dumps({
+ "message": f"已准备第 {session.current_round + 1} 轮审稿",
+ "current_round": session.current_round,
+ "current_version": session.current_version,
+ "score_history": session.score_history,
+ "next_step": "调用 run_review 执行审稿",
+ }, ensure_ascii=False, indent=2)
+
+
+@mcp.tool()
+def get_status(session_id: str) -> str:
+ """
+ 查看审稿会话当前状态。
+
+ Args:
+ session_id: 会话ID
+
+ Returns:
+ 当前状态摘要
+ """
+ session = _sessions.get(session_id)
+ if not session:
+ return json.dumps({"error": f"会话 {session_id} 不存在"}, ensure_ascii=False)
+
+ return json.dumps(session.to_status(), ensure_ascii=False, indent=2)
+
+
+@mcp.tool()
+def list_sessions() -> str:
+ """
+ 列出所有活跃的审稿会话。
+
+ Returns:
+ 会话列表
+ """
+ if not _sessions:
+ return json.dumps({"message": "暂无会话,调用 init_review 创建"}, ensure_ascii=False)
+
+ return json.dumps({
+ "sessions": [
+ {"id": sid, "title": s.title, "status": s.status, "round": s.current_round}
+ for sid, s in _sessions.items()
+ ]
+ }, ensure_ascii=False, indent=2)
+
+
+@mcp.tool()
+def export_report(session_id: str, format: str = "text") -> str:
+ """
+ 导出完整审稿报告。
+
+ Args:
+ session_id: 会话ID
+ format: 输出格式 - "text" (可读文本) 或 "json" (结构化数据)
+
+ Returns:
+ 完整审稿报告
+ """
+ session = _sessions.get(session_id)
+ if not session:
+ return json.dumps({"error": f"会话 {session_id} 不存在"}, ensure_ascii=False)
+
+ if format == "json":
+ export = {
+ "manuscript": session.to_status(),
+ "review_history": session.review_history,
+ "revision_history": [
+ {k: v for k, v in r.items() if k != "new_sections"}
+ for r in session.revision_history
+ ],
+ "versions": session.store.get_all_versions(session.ms_id),
+ "export_time": datetime.now().isoformat(),
+ }
+ return json.dumps(export, ensure_ascii=False, indent=2, default=str)
+
+ # 文本格式
+ lines = []
+ lines.append("=" * 50)
+ lines.append(" 分布式审稿协调系统 - 完整审稿报告")
+ lines.append("=" * 50)
+ lines.append(f"稿件ID: {session.ms_id}")
+ lines.append(f"标题: {session.title}")
+ lines.append(f"摘要: {session.abstract}")
+ lines.append(f"审稿轮数: {session.current_round}")
+ lines.append(f"最终状态: {session.status}")
+ lines.append(f"评分历史: {' → '.join(f'{s:.1f}' for s in session.score_history)}")
+ lines.append("")
+
+ for rh in session.review_history:
+ r = rh["round"]
+ syn = rh["synthesis"]
+ lines.append(f"{'─' * 40}")
+ lines.append(f"第 {r} 轮 | 建议: {syn['eic_recommendation']} | 综合分: {syn['overall_avg']:.1f} | 共识度: {syn['consensus_level']:.0%}")
+ lines.append(f"问题: 关键{syn['critical_count']} 重要{syn['major_count']} 小{syn['minor_count']}")
+
+ for report in rh["reports"]:
+ lines.append(f" [{report['reviewer_name']}] {report['overall_score']:.1f}分 → {report['decision']} (信心{report['confidence']:.0%})")
+ for c in report["comments"]:
+ dim_name = REVIEW_DIMENSIONS.get(c["category"], {}).get("name", c["category"])
+ lines.append(f" [{c['severity'].upper()}][{dim_name}] {c['comment']}")
+ lines.append("")
+
+ if session.revision_history:
+ lines.append(f"{'─' * 40}")
+ lines.append("修改历史")
+ for i, rev in enumerate(session.revision_history, 1):
+ lines.append(f" 第{i}次修改: 处理 {rev['total_issues']} 条意见, 解决 {rev['addressed_count']} 条")
+ lines.append("")
+
+ lines.append("=" * 50)
+ lines.append(f"报告时间: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
+
+ return "\n".join(lines)
+
+
+@mcp.tool()
+def auto_run(session_id: str) -> str:
+ """
+ 自动运行完整审稿流程直到结束。
+
+ Args:
+ session_id: 会话ID
+
+ Returns:
+ 完整流程结果摘要
+ """
+ session = _sessions.get(session_id)
+ if not session:
+ return json.dumps({"error": f"会话 {session_id} 不存在"}, ensure_ascii=False)
+
+ results = []
+ while True:
+ # 审稿
+ session.current_round += 1
+ is_re = session.current_round > 1
+
+ reports = []
+ for profile in REVIEWER_PROFILES:
+ prev = None
+ ratio = 0.5
+ if is_re:
+ prev_revs = session.store.get_reviews(session.ms_id, session.current_round - 1)
+ prev = next((r for r in prev_revs if r.get("reviewer_id") == profile["id"]), None)
+ if session.revision_history:
+ t = session.revision_history[-1].get("total_issues", 1)
+ a = session.revision_history[-1].get("addressed_count", 0)
+ ratio = a / max(t, 1)
+ report = session.reviewer_engine.review(profile, session.current_round, prev, ratio)
+ session.store.store_review(session.ms_id, session.current_round, report)
+ reports.append(report)
+
+ synthesis = session.eic.synthesize(reports)
+ session.score_history.append(synthesis["overall_avg"])
+ session.review_history.append({"round": session.current_round, "reports": reports, "synthesis": synthesis})
+
+ results.append(
+ f"第{session.current_round}轮: {synthesis['overall_avg']:.1f}分 → {synthesis['eic_recommendation']} (共识{synthesis['consensus_level']:.0%})"
+ )
+
+ is_complete, reason = _check_complete(session, synthesis)
+ if is_complete:
+ session.status = "accepted" if synthesis["eic_recommendation"] in ["accept", "minor_revision"] else "rejected"
+ break
+
+ # 作者修改
+ prev_ver = session.store.get_latest_version(session.ms_id)
+ session.current_version += 1
+ rev = session.author_engine.respond_and_revise(
+ synthesis, prev_ver.get("content_sections", {}), session.current_version
+ )
+ new_ver = {
+ "version": session.current_version, "title": session.title,
+ "content_sections": rev["new_sections"],
+            "content_hash": hashlib.md5(json.dumps(rev["new_sections"], sort_keys=True).encode()).hexdigest()[:12],
+ "timestamp": datetime.now().isoformat(),
+ }
+ session.store.store_version(session.ms_id, new_ver)
+ session.revision_history.append(rev)
+
+ return json.dumps({
+ "summary": results,
+ "final_status": session.status,
+ "total_rounds": session.current_round,
+ "final_score": session.score_history[-1],
+ "score_trend": [round(s, 2) for s in session.score_history],
+ "next_step": "调用 export_report 导出完整报告",
+ }, ensure_ascii=False, indent=2)
+
+
+def _check_complete(session: ReviewSession, synthesis: dict) -> tuple[bool, str]:
+ """检查审稿是否结束"""
+ rec = synthesis.get("eic_recommendation", "")
+
+ if rec == ReviewDecision.ACCEPT.value:
+ return True, "accepted"
+ if rec == ReviewDecision.REJECT.value:
+ return True, "rejected"
+ if session.current_round >= session.max_rounds:
+ return True, "max_rounds_reached"
+ if (rec == ReviewDecision.MINOR_REVISION.value and
+ session.current_round > 1 and synthesis.get("critical_count", 0) == 0):
+ return True, "minor_revision_accepted"
+ if len(session.score_history) >= 3:
+ d1 = session.score_history[-1] - session.score_history[-2]
+ d2 = session.score_history[-2] - session.score_history[-3]
+ if d1 < 0.5 and d2 < 0.5:
+ return True, "diminishing_returns"
+
+ return False, ""
+
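`_check_complete` 的「收益递减」终止条件 (连续两轮提升都不足 0.5 分) 可以抽出来单独验证:

```python
def diminishing(history, eps=0.5):
    """连续两轮提升均小于 eps 时终止, 与 _check_complete 的判定一致"""
    if len(history) < 3:
        return False
    d1 = history[-1] - history[-2]
    d2 = history[-2] - history[-3]
    return d1 < eps and d2 < eps

print(diminishing([5.0, 6.5, 7.0]))  # 最近一轮提升恰为 0.5, 未触发 → False
print(diminishing([6.0, 6.2, 6.3]))  # 两轮提升都很小 → True
```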
+
+# ============================================================
+# 入口
+# ============================================================
+
+if __name__ == "__main__":
+ mcp.run()
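要在支持 MCP 的客户端 (例如 Claude Desktop) 中加载本插件, 可以参考类似下面的配置片段。配置文件位置与 python 路径因环境而异, 此处仅为示意:

```json
{
  "mcpServers": {
    "manuscript-review": {
      "command": "python",
      "args": ["mcp_review_server.py"]
    }
  }
}
```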
diff --git a/requirements.txt b/requirements.txt
index 502d7d1a0d19..51de566a7090 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,3 +1,8 @@
-altair
-pandas
-streamlit
+streamlit>=1.28.0
+pandas>=2.0.0
+numpy>=1.24.0
+plotly>=5.18.0
+simpy>=4.1.1
+altair>=5.0.0
+anthropic>=0.40.0
+mcp>=1.0.0
diff --git a/review_agent.py b/review_agent.py
new file mode 100644
index 000000000000..b0a53c239ae1
--- /dev/null
+++ b/review_agent.py
@@ -0,0 +1,531 @@
+#!/usr/bin/env python3
+"""
+分布式审稿协调代理 - Claude Agent SDK 版本
+Manuscript Review Multi-Agent System
+
+真正的多代理审稿系统:
+- EIC Agent: 调用 Claude 做综合决策
+- Reviewer Agents: 调用 Claude 做智能审稿
+- Author Agent: 调用 Claude 做针对性回复和修改
+- Coordinator: 编排多轮审稿循环
+
+安装:
+ pip install anthropic
+
+使用:
+ export ANTHROPIC_API_KEY=sk-ant-...
+ python review_agent.py --title "论文标题" --abstract "论文摘要"
+
+或作为模块导入:
+ from review_agent import ReviewOrchestrator
+ orch = ReviewOrchestrator()
+ result = await orch.run("论文标题", "论文摘要")
+"""
+
+import argparse
+import asyncio
+import json
+import os
+import sys
+from datetime import datetime
+from dataclasses import dataclass, field
+
+try:
+ import anthropic
+except ImportError:
+ print("请安装 anthropic SDK: pip install anthropic")
+ sys.exit(1)
+
+
+# ============================================================
+# 代理定义
+# ============================================================
+
+EIC_SYSTEM_PROMPT = """你是一位学术期刊主编(Editor-in-Chief)。你的职责是:
+
+1. 综合多位审稿人的意见,做出公正的评审决策
+2. 识别审稿人之间的分歧,进行协调
+3. 给作者提供清晰、有建设性的修改指导
+4. 决定稿件是接受(accept)、小修(minor_revision)、大修(major_revision)还是拒稿(reject)
+
+你应该:
+- 关注审稿人的共识和分歧
+- 对关键性问题(critical)给予最高优先级
+- 如果审稿人意见冲突,给出你的专业判断
+- 输出结构化的JSON格式决策
+
+请用中文回复。"""
+
+REVIEWER_SYSTEM_PROMPT = """你是一位学术论文审稿人。你的专长领域是: {expertise}。
+你的审稿风格: {personality},严格度: {strictness}/10。
+
+审稿要求:
+1. 从以下5个维度评分(1-10分):
+ - novelty (创新性, 权重25%)
+ - methodology (方法论, 权重25%)
+ - writing (写作质量, 权重15%)
+ - significance (研究意义, 权重20%)
+ - data_analysis (数据分析, 权重15%)
+
+2. 对每个低于7分的维度,给出具体的审稿意见,标注严重程度:
+ - critical: 必须解决的根本性问题
+ - major: 需要认真处理的重要问题
+ - minor: 建议改进的小问题
+
+3. 给出总体建议: accept / minor_revision / major_revision / reject
+
+4. 如果是再审(re-review),需要评估作者是否充分回应了之前的意见
+
+输出格式为JSON。请用中文给出评审意见。"""
+
+AUTHOR_SYSTEM_PROMPT = """你是一位学术论文作者。你需要:
+
+1. 逐条回复审稿人的意见
+2. 对合理的批评,说明你的修改方案
+3. 对不合理的批评,礼貌地给出反驳和解释
+4. 如果不同审稿人意见冲突,说明你的处理策略
+5. 生成修改说明
+
+回复策略:
+- critical 问题: 必须详细回应,说明具体修改内容
+- major 问题: 认真处理,给出修改或解释
+- minor 问题: 简短确认
+- 冲突意见: 客观分析,说明取舍理由
+
+输出格式为JSON。请用中文回复。"""
+
+
+@dataclass
+class AgentConfig:
+ """代理配置"""
+ model: str = "claude-haiku-4-5-20251001" # 默认用 Haiku 节省成本
+ max_tokens: int = 4096
+ temperature: float = 0.7
+
+
+@dataclass
+class ReviewerProfile:
+ id: str
+ name: str
+ expertise: list
+ strictness: float
+ personality: str
+
+
+REVIEWERS = [
+ ReviewerProfile("rev_A", "审稿人A", ["methodology", "data_analysis"], 0.7, "严谨、注重方法论"),
+ ReviewerProfile("rev_B", "审稿人B", ["novelty", "significance"], 0.4, "开放、鼓励创新"),
+ ReviewerProfile("rev_C", "审稿人C", ["writing", "methodology"], 0.6, "细致、关注表达"),
+]
+
+
+# ============================================================
+# Agent 基类
+# ============================================================
+
+class BaseAgent:
+ """代理基类"""
+
+ def __init__(self, name: str, system_prompt: str, config: AgentConfig = None):
+ self.name = name
+ self.system_prompt = system_prompt
+ self.config = config or AgentConfig()
+ self.client = anthropic.Anthropic()
+ self.history = []
+
+ def call(self, user_message: str) -> str:
+ """调用 Claude API"""
+ response = self.client.messages.create(
+ model=self.config.model,
+ max_tokens=self.config.max_tokens,
+ temperature=self.config.temperature,
+ system=self.system_prompt,
+ messages=[{"role": "user", "content": user_message}],
+ )
+ text = response.content[0].text
+ self.history.append({"input": user_message[:200], "output": text[:200]})
+ return text
+
+ def parse_json(self, text: str) -> dict:
+ """从回复中提取JSON"""
+        # 尝试找到 JSON 块; 若代码围栏未闭合, 退化为取到文本末尾, 避免 index() 抛 ValueError
+        if "```json" in text:
+            start = text.index("```json") + 7
+            end = text.find("```", start)
+            text = text[start:end if end != -1 else None].strip()
+        elif "```" in text:
+            start = text.index("```") + 3
+            end = text.find("```", start)
+            text = text[start:end if end != -1 else None].strip()
+
+ try:
+ return json.loads(text)
+ except json.JSONDecodeError:
+ # 尝试找到第一个 { 和最后一个 }
+ first = text.find("{")
+ last = text.rfind("}")
+ if first >= 0 and last > first:
+ try:
+ return json.loads(text[first:last + 1])
+ except json.JSONDecodeError:
+ pass
+ return {"raw_response": text, "parse_error": True}
+
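`BaseAgent.parse_json` 的提取策略 (优先取 json 代码围栏内容, 解析失败再回退到首个 `{` 到最后一个 `}` 的配对) 可用一个独立函数演示。为避免在示例里直接书写围栏标记, 这里用字符串拼出它; 函数名为示意:

```python
import json

FENCE = "`" * 3  # 即 markdown 代码围栏标记

def extract_json(text):
    """优先提取 json 代码围栏中的内容, 解析失败则回退到大括号配对"""
    tag = FENCE + "json"
    if tag in text:
        start = text.index(tag) + len(tag)
        end = text.find(FENCE, start)
        # 围栏未闭合时取到文本末尾
        text = text[start:end if end != -1 else None].strip()
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        first, last = text.find("{"), text.rfind("}")
        if 0 <= first < last:
            return json.loads(text[first:last + 1])
        raise

reply = "评审结果:\n" + FENCE + 'json\n{"decision": "accept"}\n' + FENCE
print(extract_json(reply))  # → {'decision': 'accept'}
```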
+
+# ============================================================
+# 具体代理
+# ============================================================
+
+class EICAgent(BaseAgent):
+ """主编代理"""
+
+ def __init__(self, config: AgentConfig = None):
+ super().__init__("EIC", EIC_SYSTEM_PROMPT, config)
+
+ def synthesize(self, manuscript_info: str, reviews: list, round_num: int,
+ prev_decisions: list = None) -> dict:
+ prompt = f"""请综合以下审稿意见,做出编辑决策。
+
+## 稿件信息
+{manuscript_info}
+
+## 当前轮次: 第{round_num}轮
+
+## 审稿报告
+{json.dumps(reviews, ensure_ascii=False, indent=2)}
+
+{"## 前轮决策历史" + chr(10) + json.dumps(prev_decisions, ensure_ascii=False, indent=2) if prev_decisions else ""}
+
+请输出JSON格式:
+{{
+ "recommendation": "accept/minor_revision/major_revision/reject",
+ "overall_score": 0-10的综合评分,
+ "consensus_level": "high/medium/low",
+ "key_issues": ["关键问题列表"],
+ "guidance_to_author": "给作者的修改指导",
+ "disagreement_analysis": "审稿人分歧分析(如有)",
+ "reasoning": "决策理由"
+}}"""
+ response = self.call(prompt)
+ return self.parse_json(response)
+
+
+class ReviewerAgent(BaseAgent):
+ """审稿人代理"""
+
+ def __init__(self, profile: ReviewerProfile, config: AgentConfig = None):
+ system = REVIEWER_SYSTEM_PROMPT.format(
+ expertise=", ".join(profile.expertise),
+ personality=profile.personality,
+ strictness=int(profile.strictness * 10),
+ )
+ super().__init__(profile.name, system, config)
+ self.profile = profile
+
+ def review(self, manuscript_info: str, round_num: int,
+ prev_comments: list = None, author_response: str = None) -> dict:
+ prompt = f"""请审阅以下稿件。
+
+## 稿件信息
+{manuscript_info}
+
+## 审稿轮次: 第{round_num}轮
+"""
+ if prev_comments and author_response:
+ prompt += f"""
+## 你上一轮的审稿意见
+{json.dumps(prev_comments, ensure_ascii=False, indent=2)}
+
+## 作者对你意见的回复
+{author_response}
+
+请评估作者是否充分回应了你的意见,并给出新一轮评审。
+"""
+
+ prompt += """
+请输出JSON格式:
+{
+ "scores": {"novelty": 0-10, "methodology": 0-10, "writing": 0-10, "significance": 0-10, "data_analysis": 0-10},
+ "overall_score": 加权总分,
+ "decision": "accept/minor_revision/major_revision/reject",
+ "comments": [
+ {"category": "维度名", "severity": "critical/major/minor", "comment": "具体意见"}
+ ],
+ "summary": "审稿总结",
+ "confidence": 0-1的审稿信心
+}"""
+ response = self.call(prompt)
+ result = self.parse_json(response)
+ result["reviewer_id"] = self.profile.id
+ result["reviewer_name"] = self.profile.name
+ result["round"] = round_num
+ return result
+
+
+class AuthorAgent(BaseAgent):
+ """作者代理"""
+
+ def __init__(self, config: AgentConfig = None):
+ super().__init__("Author", AUTHOR_SYSTEM_PROMPT, config)
+
+ def respond(self, manuscript_info: str, eic_guidance: str,
+ reviews: list, round_num: int) -> dict:
+ prompt = f"""请根据审稿意见修改稿件。
+
+## 你的稿件
+{manuscript_info}
+
+## EIC指导意见
+{eic_guidance}
+
+## 审稿人意见 (第{round_num}轮)
+{json.dumps(reviews, ensure_ascii=False, indent=2)}
+
+请输出JSON格式:
+{{
+ "responses": [
+ {{
+ "reviewer": "审稿人名",
+ "original_comment": "原始意见",
+ "severity": "critical/major/minor",
+ "response": "你的回复",
+ "action_taken": "具体修改内容",
+ "addressed": true/false
+ }}
+ ],
+ "revision_summary": "修改概述",
+ "rebuttal_points": ["如有反驳,列出理由"],
+ "sections_modified": ["修改的章节列表"]
+}}"""
+ response = self.call(prompt)
+ return self.parse_json(response)
+
+
+# ============================================================
+# Orchestrator
+# ============================================================
+
+class ReviewOrchestrator:
+    """
+    Review workflow orchestrator - coordinates the EIC, Reviewer, and Author agents
+
+    Flow:
+    submit → [Reviewer review × N] → EIC synthesis → Author revision
+    → [Reviewer re-review × N] → EIC re-synthesis → ... → final decision
+    """
+
+ def __init__(self, config: AgentConfig = None, max_rounds: int = 3):
+ self.config = config or AgentConfig()
+ self.max_rounds = max_rounds
+ self.eic = EICAgent(self.config)
+ self.reviewers = [ReviewerAgent(p, self.config) for p in REVIEWERS]
+ self.author = AuthorAgent(self.config)
+ self.history = []
+ self.score_history = []
+
+ async def run(self, title: str, abstract: str,
+ content: str = None, verbose: bool = True) -> dict:
+        """Run the complete review workflow"""
+ manuscript_info = f"标题: {title}\n摘要: {abstract}"
+ if content:
+ manuscript_info += f"\n内容: {content}"
+
+ if verbose:
+ print(f"\n{'=' * 50}")
+ print(f" 分布式审稿协调代理系统")
+ print(f" 稿件: {title}")
+ print(f"{'=' * 50}\n")
+
+ prev_reviews_by_reviewer = {}
+ prev_author_response = None
+ eic_decisions = []
+
+ for round_num in range(1, self.max_rounds + 1):
+ if verbose:
+ print(f"\n{'─' * 40}")
+ print(f" 第 {round_num} 轮审稿")
+ print(f"{'─' * 40}")
+
+            # Step 1: each reviewer evaluates the manuscript
+ reviews = []
+ for reviewer in self.reviewers:
+ if verbose:
+ print(f" {reviewer.profile.name} 审稿中...")
+
+ prev = prev_reviews_by_reviewer.get(reviewer.profile.id)
+ result = reviewer.review(
+ manuscript_info, round_num,
+ prev_comments=prev,
+ author_response=prev_author_response,
+ )
+ reviews.append(result)
+
+ if verbose:
+ score = result.get("overall_score", "?")
+ dec = result.get("decision", "?")
+ n_comments = len(result.get("comments", []))
+ print(f" → 评分: {score} | 建议: {dec} | 意见: {n_comments}条")
+
+            # Step 2: EIC synthesis
+ if verbose:
+ print(f"\n EIC 综合分析中...")
+
+ eic_result = self.eic.synthesize(
+ manuscript_info, reviews, round_num, eic_decisions
+ )
+ eic_decisions.append(eic_result)
+
+ rec = eic_result.get("recommendation", "pending")
+ overall = eic_result.get("overall_score", 0)
+ self.score_history.append(overall)
+
+ if verbose:
+ print(f" → 建议: {rec} | 综合分: {overall}")
+ print(f" → 共识度: {eic_result.get('consensus_level', '?')}")
+ if eic_result.get("key_issues"):
+ print(f" → 关键问题: {', '.join(eic_result['key_issues'][:3])}")
+
+ self.history.append({
+ "round": round_num,
+ "reviews": reviews,
+ "eic_decision": eic_result,
+ })
+
+            # Check for a terminal decision
+ if rec in ("accept", "reject"):
+ if verbose:
+ emoji = "🎉" if rec == "accept" else "❌"
+ print(f"\n {emoji} 最终决定: {rec}")
+ break
+
+ if round_num >= self.max_rounds:
+ if verbose:
+ print(f"\n 达到最大轮数 ({self.max_rounds}),流程结束")
+ break
+
+            # Convergence check: stop when the last two round-over-round gains are both under 0.5
+ if len(self.score_history) >= 3:
+ d1 = self.score_history[-1] - self.score_history[-2]
+ d2 = self.score_history[-2] - self.score_history[-3]
+ if d1 < 0.5 and d2 < 0.5:
+ if verbose:
+ print(f"\n 评分收敛 (改善不足0.5分),流程结束")
+ break
+
+            # Step 3: author revision
+ if verbose:
+ print(f"\n 作者修改中...")
+
+ guidance = eic_result.get("guidance_to_author", "请根据审稿意见修改")
+ author_result = self.author.respond(
+ manuscript_info, guidance, reviews, round_num
+ )
+ prev_author_response = json.dumps(author_result, ensure_ascii=False)
+
+            # Save each reviewer's comments for reference in the next round
+ for review in reviews:
+ rid = review.get("reviewer_id", "")
+ prev_reviews_by_reviewer[rid] = review.get("comments", [])
+
+ if verbose:
+ n_resp = len(author_result.get("responses", []))
+ n_addr = sum(1 for r in author_result.get("responses", []) if r.get("addressed"))
+ print(f" → 回复 {n_resp} 条意见, 解决 {n_addr} 条")
+ if author_result.get("rebuttal_points"):
+ print(f" → 反驳 {len(author_result['rebuttal_points'])} 点")
+
+ self.history[-1]["author_response"] = author_result
+
+    # Final report
+ final = {
+ "title": title,
+ "total_rounds": len(self.history),
+ "final_decision": eic_decisions[-1].get("recommendation", "pending") if eic_decisions else "pending",
+ "score_history": self.score_history,
+ "history": self.history,
+ "eic_reasoning": eic_decisions[-1].get("reasoning", "") if eic_decisions else "",
+ }
+
+ if verbose:
+ print(f"\n{'=' * 50}")
+ print(f" 审稿完成")
+ print(f" 最终决定: {final['final_decision']}")
+ print(f" 评分趋势: {' → '.join(str(s) for s in self.score_history)}")
+ print(f" 总轮数: {final['total_rounds']}")
+ print(f"{'=' * 50}")
+
+ return final
+
+ def export_report(self) -> str:
+        """Export a plain-text report"""
+ lines = ["=" * 50, " 分布式审稿协调代理 - 审稿报告", "=" * 50, ""]
+
+ for entry in self.history:
+ r = entry["round"]
+ lines.append(f"── 第 {r} 轮 ──")
+ eic = entry.get("eic_decision", {})
+ lines.append(f"EIC建议: {eic.get('recommendation', '?')} | 综合分: {eic.get('overall_score', '?')}")
+ lines.append(f"决策理由: {eic.get('reasoning', '无')}")
+ lines.append("")
+
+ for review in entry.get("reviews", []):
+ name = review.get("reviewer_name", "?")
+ score = review.get("overall_score", "?")
+ dec = review.get("decision", "?")
+ lines.append(f" [{name}] {score}分 → {dec}")
+ for c in review.get("comments", []):
+ lines.append(f" [{c.get('severity', '?').upper()}] {c.get('comment', '')}")
+ lines.append("")
+
+ author = entry.get("author_response", {})
+ if author:
+ lines.append(f" 作者回复 {len(author.get('responses', []))} 条")
+ lines.append(f" 修改概述: {author.get('revision_summary', '无')}")
+ lines.append("")
+
+ lines.append(f"评分趋势: {' → '.join(str(s) for s in self.score_history)}")
+ lines.append(f"报告时间: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
+ return "\n".join(lines)
+
+
+# ============================================================
+# CLI entry point
+# ============================================================
+
+def main():
+ parser = argparse.ArgumentParser(description="分布式审稿协调代理系统")
+ parser.add_argument("--title", required=True, help="论文标题")
+ parser.add_argument("--abstract", required=True, help="论文摘要")
+ parser.add_argument("--content", default=None, help="论文内容(可选)")
+ parser.add_argument("--max-rounds", type=int, default=3, help="最大审稿轮数")
+ parser.add_argument("--model", default="claude-haiku-4-5-20251001", help="Claude模型")
+ parser.add_argument("--export", default=None, help="导出报告到文件")
+ parser.add_argument("--quiet", action="store_true", help="静默模式")
+
+ args = parser.parse_args()
+
+ if not os.environ.get("ANTHROPIC_API_KEY"):
+ print("错误: 请设置 ANTHROPIC_API_KEY 环境变量")
+ print(" export ANTHROPIC_API_KEY=sk-ant-...")
+ sys.exit(1)
+
+ config = AgentConfig(model=args.model)
+ orchestrator = ReviewOrchestrator(config=config, max_rounds=args.max_rounds)
+
+ result = asyncio.run(
+ orchestrator.run(args.title, args.abstract, args.content, verbose=not args.quiet)
+ )
+
+ if args.export:
+ report = orchestrator.export_report()
+ with open(args.export, "w", encoding="utf-8") as f:
+ f.write(report)
+ print(f"\n报告已导出到: {args.export}")
+
+    # Print the JSON result
+ if args.quiet:
+ print(json.dumps(result, ensure_ascii=False, indent=2, default=str))
+
+
+if __name__ == "__main__":
+ main()
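The convergence rule in `ReviewOrchestrator.run` (stop once the last two round-over-round score gains are both under 0.5) can be sketched as a standalone helper. This is a minimal sketch; the function name `scores_converged` and the default threshold are ours, mirroring the inline logic rather than quoting it:

```python
def scores_converged(score_history, min_improvement=0.5):
    """True once the last two round-over-round gains are both below the threshold."""
    if len(score_history) < 3:
        return False  # need at least three rounds to see two deltas
    d1 = score_history[-1] - score_history[-2]
    d2 = score_history[-2] - score_history[-3]
    return d1 < min_improvement and d2 < min_improvement

print(scores_converged([5.0, 5.2, 5.3]))  # → True (gains of 0.2 and ~0.1)
print(scores_converged([5.0, 6.0, 7.0]))  # → False (still improving by 1.0/round)
```

Note that a round where the score *drops* also counts as "not improving enough" under this rule, which matches the orchestrator's intent of cutting off unproductive revision cycles.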
diff --git a/streamlit_app.py b/streamlit_app.py
index ac305b93bffd..6cf361547da3 100644
--- a/streamlit_app.py
+++ b/streamlit_app.py
@@ -1,38 +1,1054 @@
-from collections import namedtuple
-import altair as alt
-import math
-import pandas as pd
+"""
+Port Operations Simulation - Professional 3D/2D Visualization Dashboard
+A discrete-event simulation of container port operations using SimPy
+with real-time 3D/2D visualization and analytics.
+"""
+
import streamlit as st
+import numpy as np
+import pandas as pd
+import plotly.graph_objects as go
+import plotly.express as px
+from plotly.subplots import make_subplots
+import simpy
+import random
+from dataclasses import dataclass, field
+from typing import List, Dict, Optional
+from collections import defaultdict
+import time
-"""
-# Welcome to Streamlit!
+# ============================================================================
+# PAGE CONFIGURATION
+# ============================================================================
+st.set_page_config(
+ page_title="Port Operations Simulation",
+ page_icon="🚢",
+ layout="wide",
+ initial_sidebar_state="expanded"
+)
-Edit `/streamlit_app.py` to customize this app to your heart's desire :heart:
+# ============================================================================
+# CUSTOM CSS FOR PROFESSIONAL STYLING
+# ============================================================================
+st.markdown("""
+
+""", unsafe_allow_html=True)
+
+
+# ============================================================================
+# DATA CLASSES FOR SIMULATION
+# ============================================================================
+@dataclass
+class Ship:
+ id: int
+ name: str
+ containers: int
+ arrival_time: float
+ ship_type: str # 'container', 'tanker', 'bulk'
+ length: float
+ status: str = "waiting" # waiting, berthing, loading, unloading, departing
+ berth_id: Optional[int] = None
+ x: float = 0.0
+ y: float = 0.0
+ z: float = 0.0
+ wait_time: float = 0.0
+ service_time: float = 0.0
+
+
+@dataclass
+class Berth:
+ id: int
+ name: str
+ x: float
+ y: float
+ length: float
+ crane_count: int
+ status: str = "available" # available, occupied, maintenance
+ current_ship: Optional[Ship] = None
+
+
+@dataclass
+class Crane:
+ id: int
+ berth_id: int
+ x: float
+ y: float
+ z: float
+ status: str = "idle" # idle, loading, unloading
+ containers_moved: int = 0
+
+
+@dataclass
+class Container:
+ id: int
+ x: float
+ y: float
+ z: float
+ status: str = "yard" # yard, loading, ship, unloading
+
+
+@dataclass
+class SimulationStats:
+ total_ships_arrived: int = 0
+ total_ships_served: int = 0
+ total_containers_moved: int = 0
+ total_wait_time: float = 0.0
+ total_service_time: float = 0.0
+ berth_utilization: Dict[int, float] = field(default_factory=dict)
+ crane_utilization: Dict[int, float] = field(default_factory=dict)
+ hourly_throughput: List[int] = field(default_factory=list)
+ queue_length_history: List[int] = field(default_factory=list)
+ ships_in_queue: int = 0
+
+
+# ============================================================================
+# PORT SIMULATION ENGINE
+# ============================================================================
+class PortSimulation:
+ def __init__(self, config: dict):
+ self.config = config
+ self.env = simpy.Environment()
+ self.stats = SimulationStats()
+
+ # Resources
+ self.berths: List[Berth] = []
+ self.cranes: List[Crane] = []
+ self.ships: List[Ship] = []
+ self.containers: List[Container] = []
+
+ # SimPy resources
+ self.berth_resource = None
+
+ # Initialize port infrastructure
+ self._setup_port()
+
+ # Event tracking
+ self.events_log: List[dict] = []
+
+ def _setup_port(self):
+ """Initialize port infrastructure"""
+ num_berths = self.config.get('num_berths', 4)
+ cranes_per_berth = self.config.get('cranes_per_berth', 2)
+
+ # Create berths along the quay
+ for i in range(num_berths):
+ berth = Berth(
+ id=i,
+ name=f"Berth {i+1}",
+ x=100 + i * 150,
+ y=50,
+ length=120,
+ crane_count=cranes_per_berth
+ )
+ self.berths.append(berth)
+
+ # Create cranes for each berth
+ for j in range(cranes_per_berth):
+ crane = Crane(
+ id=len(self.cranes),
+ berth_id=i,
+ x=berth.x + 30 + j * 40,
+ y=berth.y + 30,
+ z=50
+ )
+ self.cranes.append(crane)
+
+ # Create container yard
+ yard_containers = self.config.get('initial_containers', 500)
+ for i in range(yard_containers):
+ row = i // 50
+ col = i % 50
+ stack = random.randint(0, 4)
+ container = Container(
+ id=i,
+ x=100 + col * 8,
+ y=200 + row * 15,
+ z=stack * 10
+ )
+ self.containers.append(container)
+
+ # Create SimPy resource for berths
+ self.berth_resource = simpy.Resource(self.env, capacity=num_berths)
+
+ def ship_generator(self):
+ """Generate ships arriving at the port"""
+ ship_id = 0
+ ship_names = ["Ever Given", "MSC Oscar", "CSCL Globe", "Madrid Maersk",
+ "CMA CGM Marco Polo", "OOCL Hong Kong", "MOL Triumph",
+ "Yang Ming Warranty", "Hapag-Lloyd Express", "Cosco Shipping"]
+ ship_types = ["container", "container", "container", "bulk", "tanker"]
+
+ while True:
+            # Inter-arrival time (exponential distribution). Note that
+            # random.expovariate takes the rate itself, so the mean gap
+            # between arrivals is 1 / arrival_rate hours.
+            arrival_rate = self.config.get('arrival_rate', 2.0)
+            yield self.env.timeout(random.expovariate(arrival_rate))
+
+ # Create new ship
+ ship = Ship(
+ id=ship_id,
+ name=f"{random.choice(ship_names)} #{ship_id}",
+ containers=random.randint(500, 3000),
+ arrival_time=self.env.now,
+ ship_type=random.choice(ship_types),
+ length=random.uniform(200, 400),
+ x=-50, # Start outside port
+ y=50 + random.uniform(-20, 20),
+ z=0
+ )
+
+ self.ships.append(ship)
+ self.stats.total_ships_arrived += 1
+ self.stats.ships_in_queue += 1
+
+ self.events_log.append({
+ 'time': self.env.now,
+ 'event': 'arrival',
+ 'ship': ship.name,
+ 'containers': ship.containers
+ })
+
+ # Start ship process
+ self.env.process(self.ship_process(ship))
+ ship_id += 1
+
+ def ship_process(self, ship: Ship):
+ """Process a ship through the port"""
+ arrival_time = self.env.now
+
+ # Request a berth
+ with self.berth_resource.request() as request:
+ yield request
+
+ # Find available berth
+ berth = next((b for b in self.berths if b.status == "available"), None)
+ if berth:
+ berth.status = "occupied"
+ berth.current_ship = ship
+ ship.berth_id = berth.id
+ ship.status = "berthing"
+ ship.wait_time = self.env.now - arrival_time
+ self.stats.total_wait_time += ship.wait_time
+ self.stats.ships_in_queue -= 1
+
+ # Move ship to berth
+ ship.x = berth.x
+ ship.y = berth.y
+
+ self.events_log.append({
+ 'time': self.env.now,
+ 'event': 'berthing',
+ 'ship': ship.name,
+ 'berth': berth.name
+ })
+
+ # Berthing time
+ yield self.env.timeout(random.uniform(0.5, 1.0))
+
+ # Unloading/Loading operations
+ ship.status = "unloading"
+ containers_per_hour = self.config.get('crane_speed', 30) * berth.crane_count
+ service_time = ship.containers / containers_per_hour
+
+ # Activate cranes
+ berth_cranes = [c for c in self.cranes if c.berth_id == berth.id]
+ for crane in berth_cranes:
+ crane.status = "unloading"
+
+ yield self.env.timeout(service_time)
+
+ # Update statistics
+ ship.service_time = service_time
+ self.stats.total_service_time += service_time
+ self.stats.total_containers_moved += ship.containers
+ self.stats.total_ships_served += 1
+
+ for crane in berth_cranes:
+ crane.containers_moved += ship.containers // len(berth_cranes)
+ crane.status = "idle"
+
+ # Departure
+ ship.status = "departing"
+ self.events_log.append({
+ 'time': self.env.now,
+ 'event': 'departure',
+ 'ship': ship.name,
+ 'service_time': service_time
+ })
+
+ yield self.env.timeout(0.5)
+
+ # Clear berth
+ berth.status = "available"
+ berth.current_ship = None
+ ship.status = "departed"
+ ship.x = 800 # Move outside port
+
+ def run(self, duration: float):
+ """Run the simulation for a specified duration"""
+ self.env.process(self.ship_generator())
+ self.env.run(until=duration)
+
+ # Calculate final statistics
+ self._calculate_utilization()
+
+ return self.get_state()
+
+ def _calculate_utilization(self):
+ """Calculate resource utilization"""
+ if self.env.now > 0:
+ for berth in self.berths:
+ # Simplified utilization calculation
+ occupied_time = sum(s.service_time for s in self.ships
+ if s.berth_id == berth.id and s.status == "departed")
+ self.stats.berth_utilization[berth.id] = min(occupied_time / self.env.now, 1.0)
+
+ for crane in self.cranes:
+ self.stats.crane_utilization[crane.id] = min(
+ crane.containers_moved / (self.env.now * self.config.get('crane_speed', 30)), 1.0
+ )
+
+ def get_state(self) -> dict:
+ """Get current simulation state for visualization"""
+ return {
+ 'time': self.env.now,
+ 'berths': self.berths,
+ 'cranes': self.cranes,
+ 'ships': self.ships,
+ 'containers': self.containers[:200], # Limit for performance
+ 'stats': self.stats,
+ 'events': self.events_log[-50:] # Last 50 events
+ }
+
+
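A quick offered-load check helps interpret the queue and utilization numbers the simulation produces. This is a back-of-envelope sketch, not part of the app; it assumes the arrival slider means ships per hour and uses the midpoint of the random 500–3000 container range:

```python
# Offered load per berth: rho = arrival_rate * mean_service_hours / num_berths.
# Values above 1.0 mean arrivals outpace capacity, so the queue keeps growing.
arrival_rate = 2.0                      # ships/hour (sidebar default)
mean_containers = (500 + 3000) / 2      # midpoint of the random ship size range
crane_speed, cranes_per_berth = 30, 2   # containers/hour per crane, cranes per berth
num_berths = 4

mean_service_hours = mean_containers / (crane_speed * cranes_per_berth)
rho = arrival_rate * mean_service_hours / num_berths
print(f"{rho:.2f}")  # → 14.58
```

At the defaults the system is heavily overloaded (ρ ≫ 1), so long queues and high berth utilization in the dashboard are expected rather than a bug.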
+# ============================================================================
+# 3D VISUALIZATION
+# ============================================================================
+def create_3d_port_view(state: dict) -> go.Figure:
+ """Create 3D visualization of the port"""
+ fig = go.Figure()
+
+ # Water surface
+ water_x = np.linspace(-100, 900, 50)
+ water_y = np.linspace(-50, 100, 20)
+ water_X, water_Y = np.meshgrid(water_x, water_y)
+ water_Z = np.sin(water_X * 0.02) * 2 + np.cos(water_Y * 0.05) * 1
+
+ fig.add_trace(go.Surface(
+ x=water_X, y=water_Y, z=water_Z,
+ colorscale=[[0, '#0a3d62'], [0.5, '#1e5f74'], [1, '#0a3d62']],
+ showscale=False,
+ opacity=0.8,
+ name='Water'
+ ))
+
+ # Quay wall
+ fig.add_trace(go.Mesh3d(
+ x=[0, 800, 800, 0, 0, 800, 800, 0],
+ y=[80, 80, 100, 100, 80, 80, 100, 100],
+ z=[0, 0, 0, 0, 20, 20, 20, 20],
+ i=[0, 0, 4, 4, 0, 1, 1, 2, 2, 3, 0, 4],
+ j=[1, 2, 5, 6, 1, 5, 2, 6, 3, 7, 3, 7],
+ k=[2, 3, 6, 7, 4, 4, 5, 5, 6, 6, 4, 3],
+ color='#4a4a4a',
+ opacity=0.9,
+ name='Quay'
+ ))
+
+ # Container yard (ground)
+ fig.add_trace(go.Mesh3d(
+ x=[50, 750, 750, 50],
+ y=[150, 150, 400, 400],
+ z=[0, 0, 0, 0],
+ i=[0, 0],
+ j=[1, 2],
+ k=[2, 3],
+ color='#2c3e50',
+ opacity=0.7,
+ name='Container Yard'
+ ))
+
+ # Berths
+ colors_berth = {'available': '#27ae60', 'occupied': '#e74c3c', 'maintenance': '#f39c12'}
+ for berth in state['berths']:
+ color = colors_berth.get(berth.status, '#3498db')
+ fig.add_trace(go.Scatter3d(
+ x=[berth.x, berth.x + berth.length],
+ y=[berth.y, berth.y],
+ z=[21, 21],
+ mode='lines',
+ line=dict(color=color, width=15),
+ name=f'{berth.name}'
+ ))
+
+ # Berth label
+ fig.add_trace(go.Scatter3d(
+ x=[berth.x + berth.length/2],
+ y=[berth.y + 30],
+ z=[25],
+ mode='text',
+ text=[berth.name],
+ textfont=dict(size=10, color='white'),
+ showlegend=False
+ ))
+
+ # Cranes
+ for crane in state['cranes']:
+ crane_color = '#f1c40f' if crane.status == 'idle' else '#e74c3c'
+ # Crane base
+ fig.add_trace(go.Scatter3d(
+ x=[crane.x, crane.x],
+ y=[crane.y, crane.y],
+ z=[20, 80],
+ mode='lines',
+ line=dict(color=crane_color, width=8),
+ name=f'Crane {crane.id}' if crane.id == 0 else None,
+ showlegend=(crane.id == 0)
+ ))
+ # Crane arm
+ fig.add_trace(go.Scatter3d(
+ x=[crane.x - 30, crane.x + 50],
+ y=[crane.y, crane.y],
+ z=[80, 80],
+ mode='lines',
+ line=dict(color=crane_color, width=6),
+ showlegend=False
+ ))
+
+ # Ships
+ ship_colors = {'waiting': '#3498db', 'berthing': '#f39c12',
+ 'unloading': '#e74c3c', 'loading': '#9b59b6',
+ 'departing': '#1abc9c', 'departed': '#95a5a6'}
+
+ for ship in state['ships']:
+ if ship.status != 'departed' and ship.x < 850:
+ color = ship_colors.get(ship.status, '#3498db')
+ # Ship hull
+ ship_length = min(ship.length / 3, 80)
+ ship_width = ship_length / 4
+
+ hull_x = [ship.x, ship.x + ship_length * 0.8, ship.x + ship_length,
+ ship.x + ship_length * 0.8, ship.x]
+ hull_y = [ship.y - ship_width/2, ship.y - ship_width/2, ship.y,
+ ship.y + ship_width/2, ship.y + ship_width/2]
+ hull_z = [5, 5, 5, 5, 5]
+
+ fig.add_trace(go.Scatter3d(
+ x=hull_x + [hull_x[0]],
+ y=hull_y + [hull_y[0]],
+ z=hull_z + [hull_z[0]],
+ mode='lines',
+ line=dict(color=color, width=4),
+ fill='toself',
+ name=ship.name if ship.id < 3 else None,
+ showlegend=(ship.id < 3)
+ ))
+
+ # Ship superstructure
+ fig.add_trace(go.Scatter3d(
+ x=[ship.x + ship_length * 0.3],
+ y=[ship.y],
+ z=[20],
+ mode='markers',
+ marker=dict(size=8, color=color, symbol='square'),
+ showlegend=False
+ ))
+
+ # Containers in yard (sample)
+ if state['containers']:
+ container_x = [c.x for c in state['containers'][:100]]
+ container_y = [c.y for c in state['containers'][:100]]
+ container_z = [c.z + 5 for c in state['containers'][:100]]
+
+ fig.add_trace(go.Scatter3d(
+ x=container_x,
+ y=container_y,
+ z=container_z,
+ mode='markers',
+ marker=dict(
+ size=4,
+ color=container_z,
+ colorscale='Viridis',
+ opacity=0.8
+ ),
+ name='Containers'
+ ))
+
+ # Layout
+ fig.update_layout(
+ scene=dict(
+ xaxis=dict(title='', showgrid=False, showbackground=False, visible=False),
+ yaxis=dict(title='', showgrid=False, showbackground=False, visible=False),
+ zaxis=dict(title='', showgrid=False, showbackground=False, visible=False),
+ camera=dict(
+ eye=dict(x=1.5, y=-1.5, z=1.2),
+ center=dict(x=0, y=0, z=-0.1)
+ ),
+ aspectmode='manual',
+ aspectratio=dict(x=2, y=1, z=0.5)
+ ),
+ paper_bgcolor='rgba(13, 33, 55, 1)',
+ plot_bgcolor='rgba(13, 33, 55, 1)',
+ margin=dict(l=0, r=0, t=0, b=0),
+ height=450,
+ showlegend=True,
+ legend=dict(
+ x=0.02,
+ y=0.98,
+ bgcolor='rgba(13, 33, 55, 0.8)',
+ bordercolor='#1e3a5f',
+ font=dict(color='white', size=10)
+ )
+ )
+
+ return fig
+
+
+# ============================================================================
+# 2D TOP-DOWN VIEW
+# ============================================================================
+def create_2d_port_view(state: dict) -> go.Figure:
+ """Create 2D top-down visualization of the port"""
+ fig = go.Figure()
+
+ # Water area
+ fig.add_shape(
+ type="rect",
+ x0=-100, y0=-50, x1=900, y1=80,
+ fillcolor="rgba(10, 61, 98, 0.5)",
+ line=dict(width=0)
+ )
+
+ # Quay area
+ fig.add_shape(
+ type="rect",
+ x0=0, y0=80, x1=800, y1=100,
+ fillcolor="#4a4a4a",
+ line=dict(color="#5a5a5a", width=2)
+ )
+
+ # Container yard
+ fig.add_shape(
+ type="rect",
+ x0=50, y0=150, x1=750, y1=400,
+ fillcolor="rgba(44, 62, 80, 0.5)",
+ line=dict(color="#34495e", width=1, dash="dash")
+ )
+
+ # Yard label
+ fig.add_annotation(
+ x=400, y=275,
+ text="CONTAINER YARD",
+ showarrow=False,
+ font=dict(size=14, color="rgba(255,255,255,0.3)")
+ )
+
+ # Berths
+ berth_colors = {'available': '#27ae60', 'occupied': '#e74c3c', 'maintenance': '#f39c12'}
+ for berth in state['berths']:
+ color = berth_colors.get(berth.status, '#3498db')
+ fig.add_shape(
+ type="rect",
+ x0=berth.x, y0=berth.y,
+ x1=berth.x + berth.length, y1=berth.y + 25,
+ fillcolor=color,
+ opacity=0.7,
+ line=dict(color='white', width=1)
+ )
+ fig.add_annotation(
+ x=berth.x + berth.length/2,
+ y=berth.y + 12,
+ text=berth.name,
+ showarrow=False,
+ font=dict(size=9, color="white")
+ )
+
+ # Cranes
+ crane_x = [c.x for c in state['cranes']]
+ crane_y = [c.y + 15 for c in state['cranes']]
+ crane_colors = ['#f1c40f' if c.status == 'idle' else '#e74c3c' for c in state['cranes']]
+
+ fig.add_trace(go.Scatter(
+ x=crane_x,
+ y=crane_y,
+ mode='markers',
+ marker=dict(
+ size=12,
+ color=crane_colors,
+ symbol='triangle-up',
+ line=dict(color='white', width=1)
+ ),
+ name='Cranes',
+ hovertemplate='Crane %{text}',
+ text=[f"{c.id}: {c.status}" for c in state['cranes']]
+ ))
+
+ # Ships
+ ship_colors = {'waiting': '#3498db', 'berthing': '#f39c12',
+ 'unloading': '#e74c3c', 'loading': '#9b59b6',
+ 'departing': '#1abc9c', 'departed': '#95a5a6'}
+
+ for ship in state['ships']:
+ if ship.status != 'departed' and -100 < ship.x < 850:
+ color = ship_colors.get(ship.status, '#3498db')
+ ship_length = min(ship.length / 4, 60)
+
+ # Ship shape
+ fig.add_shape(
+ type="path",
+ path=f"M {ship.x},{ship.y-8} L {ship.x+ship_length*0.9},{ship.y-8} L {ship.x+ship_length},{ship.y} L {ship.x+ship_length*0.9},{ship.y+8} L {ship.x},{ship.y+8} Z",
+ fillcolor=color,
+ line=dict(color='white', width=1)
+ )
+
+ # Ship label
+ fig.add_annotation(
+ x=ship.x + ship_length/2,
+ y=ship.y,
+ text=f"S{ship.id}",
+ showarrow=False,
+ font=dict(size=8, color="white")
+ )
+
+ # Containers (simplified dots)
+ if state['containers']:
+ container_x = [c.x for c in state['containers'][:150]]
+ container_y = [c.y for c in state['containers'][:150]]
+
+ fig.add_trace(go.Scatter(
+ x=container_x,
+ y=container_y,
+ mode='markers',
+ marker=dict(size=3, color='#00d4ff', opacity=0.5),
+ name='Containers',
+ hoverinfo='skip'
+ ))
+
+ # Waiting area indicator
+ waiting_ships = [s for s in state['ships'] if s.status == 'waiting']
+ if waiting_ships:
+ fig.add_shape(
+ type="rect",
+ x0=-80, y0=0, x1=-10, y1=70,
+ fillcolor="rgba(52, 152, 219, 0.2)",
+ line=dict(color='#3498db', width=1, dash='dot')
+ )
+ fig.add_annotation(
+ x=-45, y=80,
+ text=f"Queue: {len(waiting_ships)}",
+ showarrow=False,
+ font=dict(size=10, color='#3498db')
+ )
+
+ # Layout
+ fig.update_layout(
+ xaxis=dict(
+ range=[-100, 850],
+ showgrid=False,
+ zeroline=False,
+ visible=False
+ ),
+ yaxis=dict(
+ range=[-70, 420],
+ showgrid=False,
+ zeroline=False,
+ visible=False,
+ scaleanchor="x",
+ scaleratio=1
+ ),
+ paper_bgcolor='rgba(13, 33, 55, 1)',
+ plot_bgcolor='rgba(13, 33, 55, 1)',
+ margin=dict(l=10, r=10, t=10, b=10),
+ height=350,
+ showlegend=False,
+ hovermode='closest'
+ )
+
+ return fig
+
+
+# ============================================================================
+# ANALYTICS CHARTS
+# ============================================================================
+def create_throughput_chart(stats: SimulationStats, sim_time: float) -> go.Figure:
+ """Create throughput over time chart"""
+    # Synthesize a plausible hourly series from the total moved (Poisson
+    # draws), since per-hour throughput is not logged during the run
+ hours = max(1, int(sim_time))
+ hourly_data = np.random.poisson(
+ stats.total_containers_moved / max(hours, 1),
+ size=hours
+ ).cumsum()
+
+ fig = go.Figure()
+ fig.add_trace(go.Scatter(
+ x=list(range(hours)),
+ y=hourly_data,
+ mode='lines+markers',
+ line=dict(color='#00d4ff', width=2),
+ marker=dict(size=6),
+ fill='tozeroy',
+ fillcolor='rgba(0, 212, 255, 0.2)',
+ name='Containers'
+ ))
+
+ fig.update_layout(
+ title=dict(text='Cumulative Throughput', font=dict(size=12, color='white')),
+ xaxis=dict(title='Time (hours)', gridcolor='#1e3a5f', color='#8899a6'),
+ yaxis=dict(title='Containers', gridcolor='#1e3a5f', color='#8899a6'),
+ paper_bgcolor='rgba(0,0,0,0)',
+ plot_bgcolor='rgba(13, 33, 55, 0.5)',
+ margin=dict(l=40, r=10, t=40, b=40),
+ height=200,
+ font=dict(color='white')
+ )
+ return fig
+
+
+def create_utilization_chart(stats: SimulationStats) -> go.Figure:
+ """Create resource utilization chart"""
+ berths = [f"B{i+1}" for i in stats.berth_utilization.keys()]
+ utilization = [v * 100 for v in stats.berth_utilization.values()]
+
+ if not berths:
+ berths = ['B1', 'B2', 'B3', 'B4']
+ utilization = [0, 0, 0, 0]
+
+ colors = ['#27ae60' if u < 70 else '#f39c12' if u < 90 else '#e74c3c' for u in utilization]
+
+ fig = go.Figure(data=[
+ go.Bar(
+ x=berths,
+ y=utilization,
+ marker_color=colors,
+ text=[f'{u:.0f}%' for u in utilization],
+ textposition='outside'
+ )
+ ])
+
+ fig.update_layout(
+ title=dict(text='Berth Utilization', font=dict(size=12, color='white')),
+ xaxis=dict(gridcolor='#1e3a5f', color='#8899a6'),
+ yaxis=dict(title='%', range=[0, 110], gridcolor='#1e3a5f', color='#8899a6'),
+ paper_bgcolor='rgba(0,0,0,0)',
+ plot_bgcolor='rgba(13, 33, 55, 0.5)',
+ margin=dict(l=40, r=10, t=40, b=40),
+ height=200,
+ font=dict(color='white'),
+ showlegend=False
+ )
+ return fig
+
+
+def create_queue_chart(stats: SimulationStats) -> go.Figure:
+ """Create queue length gauge"""
+ fig = go.Figure(go.Indicator(
+ mode="gauge+number",
+ value=stats.ships_in_queue,
+ title=dict(text="Ships in Queue", font=dict(size=12, color='white')),
+ gauge=dict(
+ axis=dict(range=[0, 10], tickcolor='white'),
+ bar=dict(color='#00d4ff'),
+ bgcolor='#1e3a5f',
+ bordercolor='#2e4a6f',
+ steps=[
+ dict(range=[0, 3], color='#27ae60'),
+ dict(range=[3, 6], color='#f39c12'),
+ dict(range=[6, 10], color='#e74c3c')
+ ],
+ threshold=dict(
+ line=dict(color='white', width=2),
+ thickness=0.75,
+ value=stats.ships_in_queue
+ )
+ )
+ ))
+
+ fig.update_layout(
+ paper_bgcolor='rgba(0,0,0,0)',
+ font=dict(color='white'),
+ height=180,
+ margin=dict(l=20, r=20, t=40, b=20)
+ )
+ return fig
+
+
+# ============================================================================
+# MAIN APPLICATION
+# ============================================================================
+def main():
+ # Header
+ st.markdown("""
+
+
🚢 Port Operations Simulation
+
Real-time 3D/2D Visualization & Analytics Dashboard
+
+ """, unsafe_allow_html=True)
+
+ # Sidebar - Simulation Controls
+ with st.sidebar:
+ st.markdown('', unsafe_allow_html=True)
+
+ num_berths = st.slider("Number of Berths", 2, 8, 4)
+ cranes_per_berth = st.slider("Cranes per Berth", 1, 4, 2)
+ arrival_rate = st.slider("Ship Arrival Rate (per hour)", 0.5, 5.0, 2.0, 0.5)
+ crane_speed = st.slider("Crane Speed (containers/hour)", 20, 50, 30)
+ initial_containers = st.slider("Initial Yard Containers", 100, 1000, 500, 100)
+ sim_duration = st.slider("Simulation Duration (hours)", 8, 72, 24)
+
+ st.markdown('', unsafe_allow_html=True)
+
+ run_simulation = st.button("▶️ Run Simulation", type="primary", use_container_width=True)
+
+ st.markdown("---")
+ st.markdown('', unsafe_allow_html=True)
+
+ st.markdown("""
+
+
+
+
+
+ """, unsafe_allow_html=True)
+
+ st.markdown("---")
+ st.markdown("""
+
+ Port Simulation v1.0
+ Powered by SimPy & Plotly
+ © 2024 Port Analytics
+
+ """, unsafe_allow_html=True)
+
+ # Initialize or run simulation
+ config = {
+ 'num_berths': num_berths,
+ 'cranes_per_berth': cranes_per_berth,
+ 'arrival_rate': arrival_rate,
+ 'crane_speed': crane_speed,
+ 'initial_containers': initial_containers
+ }
+
+ if run_simulation or 'sim_state' not in st.session_state:
+ with st.spinner("Running simulation..."):
+ sim = PortSimulation(config)
+ state = sim.run(sim_duration)
+ st.session_state.sim_state = state
+ st.session_state.config = config
+
+ state = st.session_state.sim_state
+ stats = state['stats']
+
+ # Main Layout
+ col_main, col_analytics = st.columns([3, 1])
+
+ with col_main:
+ # 3D View
+ st.markdown('', unsafe_allow_html=True)
+ fig_3d = create_3d_port_view(state)
+ st.plotly_chart(fig_3d, use_container_width=True, config={'displayModeBar': False})
+
+ # 2D View
+ st.markdown('', unsafe_allow_html=True)
+ fig_2d = create_2d_port_view(state)
+ st.plotly_chart(fig_2d, use_container_width=True, config={'displayModeBar': False})
+
+ with col_analytics:
+ st.markdown('', unsafe_allow_html=True)
+
+ # KPI Metrics
+ st.markdown(f"""
+
+
Ships Processed
+
{stats.total_ships_served}
+
+ """, unsafe_allow_html=True)
+
+ st.markdown(f"""
+
+
Containers Moved
+
{stats.total_containers_moved:,}
+
+ """, unsafe_allow_html=True)
+
+ avg_wait = stats.total_wait_time / max(stats.total_ships_served, 1)
+ st.markdown(f"""
+
+
Avg Wait Time
+
{avg_wait:.1f}h
+
+ """, unsafe_allow_html=True)
+
+ avg_service = stats.total_service_time / max(stats.total_ships_served, 1)
+ st.markdown(f"""
+
+
Avg Service Time
+
{avg_service:.1f}h
+
+ """, unsafe_allow_html=True)
+
+ # Queue Gauge
+ fig_queue = create_queue_chart(stats)
+ st.plotly_chart(fig_queue, use_container_width=True, config={'displayModeBar': False})
+
+ # Utilization Chart
+ fig_util = create_utilization_chart(stats)
+ st.plotly_chart(fig_util, use_container_width=True, config={'displayModeBar': False})
+
+ # Throughput Chart
+ fig_throughput = create_throughput_chart(stats, state['time'])
+ st.plotly_chart(fig_throughput, use_container_width=True, config={'displayModeBar': False})
+ # Bottom Section - Event Log & Statistics Table
+ st.markdown('', unsafe_allow_html=True)
-with st.echo(code_location='below'):
- total_points = st.slider("Number of points in spiral", 1, 5000, 2000)
- num_turns = st.slider("Number of turns in spiral", 1, 100, 9)
+ col_events, col_ships = st.columns(2)
- Point = namedtuple('Point', 'x y')
- data = []
+ with col_events:
+ st.markdown("**Recent Events**")
+ if state['events']:
+ events_df = pd.DataFrame(state['events'][-15:])
+ events_df['time'] = events_df['time'].apply(lambda x: f"{x:.2f}h")
+ st.dataframe(events_df, use_container_width=True, hide_index=True)
+ else:
+ st.info("No events recorded yet")
- points_per_turn = total_points / num_turns
+ with col_ships:
+ st.markdown("**Ship Statistics**")
+ ships_data = []
+ for ship in state['ships'][:15]:
+ ships_data.append({
+ 'Ship': ship.name[:20],
+ 'Type': ship.ship_type,
+ 'Containers': ship.containers,
+ 'Status': ship.status,
+ 'Wait (h)': f"{ship.wait_time:.2f}",
+ 'Service (h)': f"{ship.service_time:.2f}"
+ })
+ if ships_data:
+ ships_df = pd.DataFrame(ships_data)
+ st.dataframe(ships_df, use_container_width=True, hide_index=True)
+ else:
+ st.info("No ships data yet")
- for curr_point_num in range(total_points):
- curr_turn, i = divmod(curr_point_num, points_per_turn)
- angle = (curr_turn + 1) * 2 * math.pi * i / points_per_turn
- radius = curr_point_num / total_points
- x = radius * math.cos(angle)
- y = radius * math.sin(angle)
- data.append(Point(x, y))
- st.altair_chart(alt.Chart(pd.DataFrame(data), height=500, width=500)
- .mark_circle(color='#0068c9', opacity=0.5)
- .encode(x='x:Q', y='y:Q'))
+if __name__ == "__main__":
+ main()
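A note on `random.expovariate`, which `ship_generator` uses for inter-arrival gaps: its argument is the event rate itself (arrivals per unit time), not the mean gap. A minimal stdlib check of that semantics (the seed and sample count are ours, chosen only for reproducibility):

```python
import random

random.seed(0)
arrival_rate = 2.0  # ships per hour, as in the sidebar default
gaps = [random.expovariate(arrival_rate) for _ in range(100_000)]
mean_gap = sum(gaps) / len(gaps)

# expovariate(rate) has mean 1/rate: about 0.5 h between ships at 2 ships/hour.
# Passing 1.0/arrival_rate instead would give a mean gap of 2 h, i.e. a
# quarter of the intended traffic.
assert abs(mean_gap - 1 / arrival_rate) < 0.02
```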