Interoperability
Bring your agents.
Blueprint Forge is the bridge.
Blueprint Forge drives UE5 — not your LLM. Use Claude, Ludus, Aura, FlopAI, VibeUE, or any mix. We tested them on real work so you don't have to.
Last evaluated 2026-04-21. Historical timeline below.
How it works
Blueprint Forge exposes agent_chat: a tool that lets any orchestrator send messages to agent panels inside the UE5 editor, read their replies, and wait for specific completion signals.
v1 drives native Slate panels directly (VibeUE and Aura work today). v2 — in progress — adds SWebBrowser JavaScript injection for web-view-based panels (FlopAI, Ludus plugin, BPF Manny).
MCP-based agents (Ludus, Claude Code) connect through BPF's MCP bridge at localhost:4000
and can invoke any of Blueprint Forge's 1,200+ tools.
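To make the bridge concrete, here is a minimal sketch of what a tool invocation against it might look like, assuming a JSON-RPC 2.0 `tools/call` request over HTTP. The `/mcp` path, the transport, and the `agent_chat` argument names are all assumptions for illustration; check the bridge's own documentation for the actual schema.

```python
import json
import urllib.request

BRIDGE_URL = "http://localhost:4000/mcp"  # hypothetical path on the BPF bridge

def build_tool_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 MCP-style tools/call payload."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

def call_bridge(tool_name, arguments):
    """POST the payload to the bridge and return the decoded reply."""
    payload = json.dumps(build_tool_call(tool_name, arguments)).encode()
    req = urllib.request.Request(
        BRIDGE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a running bridge; argument names are illustrative):
# call_bridge("agent_chat", {"verb": "send", "panel": "Aura", "message": "hello"})
```

The payload builder is separated from the transport so the same request shape can be reused over whatever transport the bridge actually exposes.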
Agent Compatibility Matrix
Overall grade is our latest aggregate across project RAG, engine Q&A, tool invocation, and workflow integration. See methodology →
| Agent | Grade | Best at | Connection | Status |
|---|---|---|---|---|
| Aura Inspector / Project RAG | A+ | Deep project RAG + asset graph introspection via get_asset_graph | agent_chat v1 (native Slate panel) | ✓ Direct |
| Ludus MCP Engine Q&A + Diagnostic | A | UE engine source Q&A and AnimInstance diagnostic investigation | MCP tool (LudusChat Lite + Full modes) | ✓ Direct |
| FlopAI Project RAG (paste-based today) | A | Best aggregate project-RAG quality in tested workflows | Manual paste via chat panel; agent_chat v2 web-view injection in progress | ⏳ Partial |
| VibeUE UE5 Engine Reference | B | UE5 engine reference; project-context capabilities still maturing | agent_chat v1 (native Slate panel) | ✓ Direct |
| Claude Code Multi-surface Orchestrator | A+ | Cross-surface diagnosis, planning, orchestration across other agents | MCP bridge (BPF as MCP server) | ✓ Direct |
Grades reflect performance in Blueprint Forge's workflows. Your results may vary. Vendors may request re-evaluation via Support.
The agent_chat tool
A single BPF tool that lets any orchestrator drive any supported in-editor agent panel.
send
Send a message to a named agent panel.
read_tab
Read the latest reply + full transcript from a panel.
send_and_wait
Send, then wait for a completion signal (pattern match or token).
Also ships: dump_types (introspect panel widgets), discover (list supported agents).
v2 (in progress) adds SWebBrowser JavaScript injection for web-view-based agents.
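The three verbs compose naturally: send_and_wait is just send followed by polling read_tab until a completion signal appears. A minimal sketch of that pattern, assuming each verb is reachable through some `call_tool(verb, **kwargs)` transport (the parameter names here are illustrative, not BPF's actual schema):

```python
import re
import time

class AgentChatClient:
    """Thin wrapper over an agent_chat-style tool interface.

    `call_tool(verb, **kwargs)` is any transport that reaches the
    editor-side panels (e.g. the MCP bridge); its exact shape is assumed.
    """

    def __init__(self, call_tool):
        self.call_tool = call_tool

    def send(self, panel, message):
        return self.call_tool("send", panel=panel, message=message)

    def read_tab(self, panel):
        return self.call_tool("read_tab", panel=panel)

    def send_and_wait(self, panel, message, done_pattern, timeout=60.0, poll=0.5):
        """Send, then poll the transcript until `done_pattern` matches."""
        self.send(panel, message)
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            transcript = self.read_tab(panel)
            if re.search(done_pattern, transcript):
                return transcript
            time.sleep(poll)
        raise TimeoutError(f"no completion signal from {panel!r}")
```

Pattern-match completion (rather than a fixed sentinel) is what lets one loop serve panels with different reply formats.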
Case study · April 2026
Agent Orchestra: Hoverbike Mount
Three sessions. Five agents. One working mount cycle. The hoverbike mount pipeline was produced by Claude Code orchestrating Aura (inspector), Ludus (diagnostic), BlueprintForge (tool host), and BlenderForge (animation pipeline). Final positional error: 0.0 cm.
Phase A — Approach
Character walks toward hoverbike.
Phase B — Trigger
Overlap fires; OpenVehicleDoor chain begins.
Phase C — Motion warp
SkewWarp drives pelvis to seat.
Phase D — Settle
Montage completes; pose blends in.
Phase E — Idle
Seated; ready for input.
Phase F — Confirmed
Wide shot. 0.0 cm positional error.
Live PIE capture — April 19–20, 2026. No post-processing.
Per-phase agent attribution
- Phase A–B: Claude Code + BF `node_create` + `pin_connect` authored the overlap → OpenVehicleDoor chain.
- Phase C: BlenderForge `batch_fbx_tools.UE5_FBX_PRESET` produced the mount animation; BF `af_montage_create` wired the montage with the SkewWarp notify.
- Diagnostic (AnimInstance=null blocker): Ludus MCP ruled out 5 common causes; Aura identified and removed the broken Control Rig node on ABP_Manny.
- Phase D–F: Claude Code + BF `scs_modify_component` moved MountPosition from ground to seat; BF `capture_viewport` produced the verification frames shown above.
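As a rough illustration of the Phase A–B step, the overlap → OpenVehicleDoor chain reduces to a pair of node creations and one pin connection. The tool names come from this page, but every parameter name below is a guess, not BPF's documented schema:

```python
def author_overlap_chain(call_tool, blueprint="BP_Hoverbike"):
    """Sketch: create an overlap event node, create an OpenVehicleDoor
    node, and wire their exec pins. `call_tool(name, **kwargs)` is any
    transport to BPF; blueprint and parameter names are hypothetical.
    """
    overlap = call_tool("node_create", blueprint=blueprint,
                        node_type="OnComponentBeginOverlap")
    door = call_tool("node_create", blueprint=blueprint,
                     node_type="OpenVehicleDoor")
    call_tool("pin_connect", blueprint=blueprint,
              from_node=overlap, from_pin="exec",
              to_node=door, to_pin="exec")
    return overlap, door
```

Keeping the transport as an injected callable means the same chain-authoring logic works whether the calls go through the MCP bridge or any other connection.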
Methodology
Every grade on this page comes from actual sessions against real UE5 projects. No theoretical benchmarks, no vendor-supplied metrics.
Grade scale
- A+
- Breakthrough — first-try success, verified clean, pipeline formalized into protocol
- A
- Excellent — first-try success, minor cleanup only
- A-
- Strong — first-try success with one small caveat
- B+
- Solid — minor iteration needed, fully understood
- B
- Good with gaps — success after 1-2 iterations; root cause understood
- B-
- Uneven — works but required multiple iterations
- C+
- Usable — significant gaps but functional for primary case
- C
- Usable with caveats — success but required fallback or many iterations
- C-
- Marginal — mostly works but unreliable
- D
- Experimental — partial success, known issues deferred
- F
- Failed — regression or blocker introduced; had to revert
Test contexts
`engine_qa` · `project_rag` · `tool_invocation` · `workflow_integration` · `diagnostic`
Update cadence
Re-evaluated per major release or when capability changes materially
Grades reflect performance on Blueprint Forge's workflows against UE5 projects. Your results may vary. Vendors can request re-evaluation via the support form.
Evaluation history
Each entry is a dated evaluation. Grades evolve as agents and our tests improve.
2026-04-20
Aura
Deep project RAG verified against hoverbike mount refactor chain; asset graph introspection surfaced canonical pipeline deviations correctly.
⚠ Wrap multi-item delegations with post-delegation verification (SOP-A). Aura ships results but may skip verification — probe before accepting 'complete'.
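One way to encode that probe-before-accepting habit in an orchestrator: wrap the delegation so a 'complete' claim is only trusted after an independent check passes. A minimal sketch (the SOP-A name comes from this page; the function shapes are assumptions):

```python
def delegate_with_verification(delegate, verify, max_retries=2):
    """Run a delegated task, then independently verify the result.

    `delegate()` returns the agent's claimed result; `verify(result)`
    returns True only when an independent probe (e.g. re-reading the
    asset state) confirms the claim. Unverified results trigger a retry.
    """
    for _ in range(max_retries + 1):
        result = delegate()
        if verify(result):
            return result
    raise RuntimeError("delegated result never passed verification")
```

The verifier should use a different surface than the delegated agent (a state probe, a viewport capture) so the check is genuinely independent.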
2026-04-20
Ludus MCP
Three-test smoke pass: project RAG on OpenVehicleDoor chain (A), 3D model generation (A), UE source Q&A on PhysWalking (A). Diagnostic grade A earned on AnimInstance=null investigation ruling out 5 common causes correctly.
2026-04-20
FlopAI
Aggregate A across three axes. Strongest project-RAG agent evaluated to date. Currently requires manual paste because BPF's agent_chat v1 does not inject into web-view-based panels; v2 will close this.
2026-04-20
VibeUE
Engine-reference answers strong. Project-context quality trails other agents in our test set. Tool-invocation reliability improving.
2026-04-20
Claude Code
Narrowed 3-session mount blocker chain to root cause faster than any single-surface agent when combining BF state probe + BlenderForge sample + runtime screenshots in one turn.
Want a new agent evaluated?
Open a suggestion or request a specific workflow test.