Agentic Development Guide
1. The Core Concept: “Agents, Not Just Prompts”
Most people use AI by typing a prompt and getting text back. Agentic Development is different. It means giving an AI a goal and access to tools, then letting it figure out the steps.
Reframed for Amina Games:
- Prompting: “Write a C++ function for a fireball.”
- Agentic: “Here is the codebase. Create a fireball ability that scales with the player’s level, update the header files, and write a unit test to verify it deals damage.”
The difference is autonomy. An agent reads context, plans steps, uses tools, and self-corrects. You give it the what and it figures out the how.
Why This Matters for a 2-Person Studio
The math is simple: a team of 2 people cannot ship a fighting game with 4 characters, rollback netcode, and cross-platform support in 6 months using traditional methods. But if each person has 3-5 AI agents handling the grunt work (data entry, boilerplate code, asset generation, balance testing), the effective team size is 8-12.
90% of game developers are already using AI in workflows (Google Cloud, August 2025). We’re not experimenting — we’re catching up.
2. Recommended Tool Stack for UE5
A. Coding & Blueprints (The “Developer” Agents)
These tools live inside or alongside Unreal Engine and help build the actual game logic.
1. CodeGPT (UE5-Specific AI Copilot)
- What it does: Connects LLMs directly to VS Code with a dedicated Unreal Engine 5 agent that understands GAS, Game Modes, Pawns, Controllers, Blueprint patterns, Nanite, Lumen, Chaos Physics, and Niagara VFX.
- Why we need it: Generic ChatGPT messes up UE5 macros (UPROPERTY, UFUNCTION, UCLASS). CodeGPT’s UE5 agent was trained on engine-specific patterns.
- Cost: Free tier (30 interactions/mo) or BYOK (bring your own key) with a Claude or OpenAI API key.
- Links:
- Website: https://www.codegpt.co/
- VS Code Extension: https://marketplace.visualstudio.com/items?itemName=DanielSanMedium.dscodegpt
- UE5 Agent: https://www.codegpt.co/agents/unreal-engine-v5
- Action: Install the CodeGPT extension in VS Code. Set up BYOK with a Claude or OpenAI API key for unlimited use.
2. GitHub Copilot (General Code Completion)
- What it does: Inline code completions and chat. Free tier gives 2,000 suggestions/month.
- UE5 Plugin: The UnrealCopilot plugin integrates GitHub models directly into the UE5 Editor for Python-based editor automation.
- Cost: Free tier available. Pro: $10/month.
- Action: Sign up at https://github.com/features/copilot. Install in VS Code alongside CodeGPT.
3. Cursor IDE (For Heavy Refactoring)
- What it does: VS Code fork with AI-first features — multi-file editing (Composer mode), full codebase indexing, agent mode for autonomous tasks.
- When to use: When the codebase gets large enough that you need AI to understand relationships across many files.
- Cost: Free tier available. Pro: $20/month.
- Link: https://www.cursor.com/
4. Unreal Engine 5 Python API
- What it does: Write Python scripts that control the Unreal Editor — create assets, modify properties, batch-import files, automate repetitive tasks.
- Why we need it: This is the “hands” of our custom agents. The LLM generates the plan, and Python executes it inside the engine.
- Docs: https://dev.epicgames.com/documentation/en-us/unreal-engine/scripting-the-unreal-editor-using-python
- Best Community Guide: https://ryandowlingsoka.com/unreal/python-in-unreal/
- Example Scripts: https://github.com/mamoniem/UnrealEditorPythonScripts
How to Enable (Required — Do This First):
- Open your UE5 project -> Edit > Plugins
- Navigate to the Scripting section
- Check Python Editor Script Plugin -> Enable
- Restart the editor
- Access the Python console: Window > Developer Tools > Output Log > Python tab
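Once the plugin is enabled, a quick smoke test in the Python console confirms that scripting works. A minimal sketch, using only the stock unreal module (adjust the /Game path to taste):

```python
import unreal

# List every asset under /Game and print the first few paths.
asset_paths = unreal.EditorAssetLibrary.list_assets("/Game", recursive=True)
unreal.log(f"Found {len(asset_paths)} assets under /Game")
for path in asset_paths[:10]:
    unreal.log(path)
```

If asset paths show up in the Output Log, the editor is ready to be driven by the agent scripts described below.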
B. Content Creation (The “Artist” Agents)
Tools to generate assets so we don’t start with grey cubes.
1. Tripo AI (Best for Characters)
- What it does: Turns text prompts into 3D models with quad topology (critical for animation deformation). Auto-rigging built in. Native UE5 plugin.
- Free tier: 300 credits/month.
- Link: https://www.tripo3d.ai/
Example Prompt for a Fighting Game Character:
```
Full body 3D character, muscular male, Japanese street fighter aesthetic,
white gi with torn sleeves, black belt, bare feet, fighting stance,
medium poly, game-ready, PBR textures
```
2. Meshy (Best for Props & Stages)
- What it does: Text-to-3D generation with PBR textures. Unique “Text-to-Texture” feature applies AI textures to existing models.
- Free tier: 200 credits/month (~4-5 models).
- Link: https://www.meshy.ai/
Example Prompt for a Stage:
```
Japanese dojo interior, wooden training floor, hanging scrolls,
bamboo practice swords on wall rack, warm lighting, game environment asset
```
3. DeepMotion (AI Motion Capture from Video)
- What it does: Upload a video of yourself doing a move (punch, kick, throw) and it converts to a rigged 3D animation. No suits, no markers.
- For Fighting Games: Record martial arts moves in your living room and have them in the game in minutes.
- Free tier: 60 seconds of animation/month.
- UE5 Auto-Retarget: https://www.deepmotion.com/post/deepmotion-ue5-animation-retargeting-just-got-easier
- SayMotion (Text-to-Animation): Type “jumping roundhouse kick with spin” and get an FBX file.
- Link: https://www.deepmotion.com/
4. Mixamo (Pre-Made Animation Library)
- What it does: Thousands of free, royalty-free animations (walk, run, punch, kick, idle) plus auto-rigging of any humanoid model.
- Cost: Completely free. Requires free Adobe ID.
- Link: https://www.mixamo.com/
- Use Case: Standard locomotion and basic attack animations. Use DeepMotion for custom fighting moves.
C. Balancing & Testing (The “QA” Agents)
Technologies to help with the “math” of a fighting game.
1. Reinforcement Learning (RL) / Imitation Learning
- Concept: Train AI agents to play the game. They play thousands of matches overnight. Output: win-rate matrix per character matchup.
- Tool: Unreal’s Learning Agents plugin (free, included with UE5 5.3+).
- How to enable: Edit > Plugins > Learning Agents -> Enable. Add LearningAgents to your module’s DependencyModuleNames.
- Tutorial: https://dev.epicgames.com/community/learning/tutorials/8OWY/unreal-engine-learning-agents-introduction-5-3
- Hugging Face Guide: https://huggingface.co/learn/deep-rl-course/en/unitbonus3/learning-agents
2. Python-Based Balance Testing (Standalone)
For faster iteration before the full UE5 RL setup, use Stable-Baselines3 with a lightweight Python simulation:
```python
# pip install stable-baselines3 gymnasium numpy
from stable_baselines3 import PPO

# See Section 6 for the full FightingGameEnv implementation
# Train an agent for 500k timesteps, then evaluate win rates
model = PPO("MlpPolicy", env, learning_rate=3e-4, verbose=1)
model.learn(total_timesteps=500_000)
```
Research:
- IEEE: “Mastering Fighting Game Using Deep RL With Self-play” — https://ieeexplore.ieee.org/document/9231639
- DareFightingICE AI Competition — https://www.ice.ci.ritsumei.ac.jp/~ftgaic/index-4.html
3. Practical “Agentic MVP” Steps
For the agentic MVP (Character Move Data system), here is how we build it using an agentic workflow.
Goal: Create a system where an AI defines the stats and frame data for a fighting move, validates it against balance rules, and automatically creates UE5 DataAssets.
Step 1: The “Brain” (LLM) Creates the Data
We give a structured prompt to Claude/GPT with explicit constraints and output format:
System Prompt:
```
You are an expert fighting game systems designer with 15 years of
experience designing frame data for competitive 2D fighting games.
You understand the relationship between startup frames, advantage on
block, cancel windows, and competitive balance.

You design moves that:
- Feel satisfying and distinct per archetype
- Have clear risk/reward tradeoffs
- Follow standard conventions (light < medium < heavy in damage/startup)
- Create interesting neutral, pressure, and combo situations

You ALWAYS output valid JSON matching the exact schema provided.
```
User Prompt (Grappler Example):
```
Design a complete moveset for:

Character ID: titan
Archetype: grappler
Description: A massive, slow wrestler. Excels at close range with
devastating command grabs and armored approaches. Weak to zoning and
rushdown pressure. Highest health, slowest walk speed.

Generate exactly:
- 6 normal moves (LP, MP, HP, LK, MK, HK)
- 3 special moves with motion inputs (236, 623, 214, etc.)
- 1 super move (costs 1 meter bar)
- 1 command grab

Ensure light normals: 3-5 frame startup.
Mediums: 6-9 frame startup.
Heavies: 10-16 frame startup.
Specials: 8-20 frames depending on reward.
Super: invincible frames 1-7, 5+ frame startup.

Return JSON with: character_id, archetype, moves array.
Each move: move_id, display_name, command, move_type, damage, chip_damage,
startup_frames, active_frames, recovery_frames, on_hit_advantage,
on_block_advantage, hitstun, blockstun, cancel_options, properties, hitbox.
```
Output (JSON):
{ "character_id": "titan", "archetype": "grappler", "moves": [ { "move_id": "standing_light_punch", "display_name": "Jab", "command": "LP", "move_type": "normal", "damage": 30, "chip_damage": 0, "startup_frames": 5, "active_frames": 3, "recovery_frames": 8, "on_hit_advantage": 3, "on_block_advantage": 1, "hitstun": 12, "blockstun": 10, "cancel_options": ["normal", "special"], "properties": [], "hitbox": {"width": 40, "height": 30, "offset_x": 60, "offset_y": 0} }, { "move_id": "titan_buster", "display_name": "Titan Buster", "command": "360P", "move_type": "command_grab", "damage": 200, "chip_damage": 0, "startup_frames": 2, "active_frames": 3, "recovery_frames": 40, "on_hit_advantage": 0, "on_block_advantage": -40, "hitstun": 0, "blockstun": 0, "cancel_options": [], "properties": [], "hitbox": {"width": 60, "height": 80, "offset_x": 40, "offset_y": 0} } ]}Step 2: The “Validator” (Deterministic Rules) Checks It
The JSON passes through a deterministic balance rules engine (Python, no AI). This catches mistakes:
```python
class BalanceRules:
    UNIVERSAL = {
        "max_damage_single_move": 300,   # No one-shot kills
        "min_startup_frames_super": 3,   # Supers must be reactable
        "max_plus_on_block": 5,          # Nothing more than +5
        "light_normal_max_startup": 5,   # Lights must be fast
    }

    ARCHETYPE_RULES = {
        "grappler": {
            "health_range": (1100, 1250),
            "command_grab_required": True,
            "max_projectile_moves": 0,
        },
        "rushdown": {
            "health_range": (850, 950),
            "min_plus_on_block_moves": 2,
        },
        "zoner": {
            "min_projectile_moves": 1,
            "max_walk_speed": 4.0,
        },
        "shoto": {
            "requires_dp": True,
            "requires_fireball": True,
        },
    }
```
If violations are found, the JSON goes back to the LLM with specific fix instructions. This loop runs up to 3 times.
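An illustrative sketch of that generate-validate-fix loop, under the assumption that the generation, validation, and fix steps are passed in as callables (the function names here are hypothetical, not from the actual pipeline):

```python
MAX_ATTEMPTS = 3

def generate_valid_moveset(generate_fn, validate_fn, fix_fn):
    """Run the LLM, check it against BalanceRules, and re-prompt up to 3 times."""
    moveset = generate_fn()
    violations = []
    for attempt in range(MAX_ATTEMPTS):
        violations = validate_fn(moveset)   # deterministic BalanceRules checks, no AI
        if not violations:
            return moveset                  # passed every rule
        # Send the exact violations back so the LLM knows what to change
        moveset = fix_fn(moveset, violations)
    raise ValueError(f"Moveset still invalid after {MAX_ATTEMPTS} attempts: {violations}")
```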
Step 3: The “Hands” (Python Script) Builds It in UE5
The validated JSON feeds into a Python script that runs inside UE5:
"""Run inside UE5's Python console:Window > Developer Tools > Output Log > Python tab"""import unrealimport json
# Load the validated JSONwith open("/path/to/titan_moveset.json") as f: data = json.load(f)
DEST_PATH = f"/Game/Data/Characters/{data['character_id']}/Moves"
# Ensure destination folder existsif not unreal.EditorAssetLibrary.does_directory_exist(DEST_PATH): unreal.EditorAssetLibrary.make_directory(DEST_PATH)
asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
for move in data["moves"]: # Create DataAsset (requires UFighterMoveData C++ class) asset = asset_tools.create_asset( move["move_id"], DEST_PATH, unreal.load_class(None, "/Script/AminaArena.FighterMoveData"), unreal.DataAssetFactory() )
# Set properties asset.set_editor_property("MoveName", move["display_name"]) asset.set_editor_property("Damage", move["damage"]) asset.set_editor_property("StartupFrames", move["startup_frames"]) asset.set_editor_property("ActiveFrames", move["active_frames"]) asset.set_editor_property("RecoveryFrames", move["recovery_frames"]) asset.set_editor_property("OnHitAdvantage", move["on_hit_advantage"]) asset.set_editor_property("OnBlockAdvantage", move["on_block_advantage"])
unreal.EditorAssetLibrary.save_loaded_asset(asset) unreal.log(f"Created: {move['move_id']}")Step 4: Result
Run one command -> all moves exist as DataAssets in the Content Browser -> ready for Blueprint references. No manual clicking through 50 menus.
Full pipeline implementation (400+ lines with Pydantic validation, Claude structured outputs, and UE5 export): see Resource Library or ask Chris to set up the move_generation_agent.py script.
4. More Prompt Examples for Game Development
A. Generating a Complete Character Archetype
```
Design a new fighting game character archetype for a 2D competitive fighter.

Requirements:
- Name and visual concept (2-3 sentences)
- Archetype tag (shoto, grappler, zoner, rushdown, puppet, setup)
- Health value (800-1200)
- Walk speed (1-10 scale, 5 = average)
- 3 defining gameplay traits
- 1 unique mechanic that differentiates them from the roster
- Strengths (2-3)
- Weaknesses (2-3)
- Ideal matchup spread: who they beat, who beats them, and why

Output as structured JSON.
```
B. Generating UE5 C++ Boilerplate (GAS Ability)
```
Write a UE5 C++ class for a GAS GameplayAbility called "UGA_HeavyKick".

Requirements:
- Inherits from UGameplayAbility
- Uses UPROPERTY for: Damage (float), StartupFrames (int32), ActiveFrames (int32), RecoveryFrames (int32)
- Override ActivateAbility() and EndAbility()
- In ActivateAbility: play a montage, apply a GameplayEffect for damage on hit confirm, and set the cooldown
- Use UFUNCTION(BlueprintCallable) for a CommitAbility check
- Include the .h and .cpp files
- Follow Epic's coding standard (prefix U for UObject, F for structs, E for enums)
```
C. Game Design Document from High-Level Description
```
Create a detailed Game Design Document (GDD) section for the following:

Game: "Amina Arena" -- a 2D competitive fighting game with AI-driven opponents
Section: Core Combat System

Cover these points:
1. Input system (8-way directional + 4 attack buttons)
2. Frame data model (startup, active, recovery, advantage)
3. Combo system (cancel hierarchy: normal > special > super)
4. Meter system (gain on hit/block/whiff, spend on supers and EX moves)
5. Defensive options (block, throw tech, burst, pushblock)
6. Training mode features (frame data display, hitbox visualization, combo recording/playback)

Format as a professional GDD with numbered sections, bullet points,
and technical specifications where appropriate.
```
D. Balance Spreadsheet Generation
```
Generate a frame data spreadsheet for a fighting game character named
"Ryu" (shoto archetype). Include all normals (standing, crouching, jumping)
and 3 specials (fireball, uppercut, hurricane kick).

For each move, include columns:
- Move Name, Command, Startup, Active, Recovery, On Hit, On Block, Damage, Chip, Cancel Options, Properties (low/overhead/armor/invincible)

Output as a markdown table. Ensure:
- Light attacks: 3-6f startup, +1 to +3 on hit
- Medium attacks: 7-10f startup, +3 to +6 on hit
- Heavy attacks: 11-16f startup, knockdown on hit
- DP (uppercut): invincible frames 1-5, very negative on block (-20+)
- Fireball: 12-15f startup, +2 on block at max range
```
E. Batch Asset Import Script
```
Write a UE5 Python script that:
1. Scans a folder (/tmp/imports/) for all .fbx files
2. For each file, imports it into /Game/Characters/Imports/
3. Sets the skeletal mesh to use our standard skeleton (/Game/Characters/Shared/SK_FighterBase)
4. Creates a physics asset automatically
5. Logs success/failure for each file
6. Shows a progress bar using unreal.ScopedSlowTask
```
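For reference, a minimal sketch of the core import loop such a script could produce. Skeleton assignment and physics-asset creation need additional unreal.FbxImportUI options and are left out here; the paths are the ones from the prompt:

```python
import unreal
from pathlib import Path

SOURCE_DIR = "/tmp/imports"
DEST_PATH = "/Game/Characters/Imports"

fbx_files = sorted(Path(SOURCE_DIR).glob("*.fbx"))
asset_tools = unreal.AssetToolsHelpers.get_asset_tools()

with unreal.ScopedSlowTask(len(fbx_files), "Importing FBX files") as slow_task:
    slow_task.make_dialog(True)
    for fbx in fbx_files:
        slow_task.enter_progress_frame(1, f"Importing {fbx.name}")
        task = unreal.AssetImportTask()
        task.filename = str(fbx)
        task.destination_path = DEST_PATH
        task.automated = True   # no import dialogs
        task.save = True
        asset_tools.import_asset_tasks([task])
        # imported_object_paths is empty if the import failed
        if task.get_editor_property("imported_object_paths"):
            unreal.log(f"Imported: {fbx.name}")
        else:
            unreal.log_warning(f"Failed: {fbx.name}")
```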
5. Setting Up the Agent Pipeline (Step-by-Step)
Prerequisites
```
# Python dependencies
pip install anthropic pydantic stable-baselines3 gymnasium numpy

# Set API key
export ANTHROPIC_API_KEY="sk-ant-..."
```
File Structure
```
aminagames/
  agents/
    move_generation_agent.py   # Main pipeline
    balance_rules.py           # Deterministic validation
    schemas.py                 # Pydantic models (FighterMove, Hitbox, etc.)
    ue5_export.py              # Generates UE5 Python scripts
  output/
    titan_moveset.json         # Generated data
    create_titan_moves.py      # UE5 automation script
```
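For orientation, a minimal sketch of what schemas.py might contain, assuming Pydantic v2 and the field names from the JSON example in Section 3 (the exact bounds are illustrative):

```python
from pydantic import BaseModel, Field

class Hitbox(BaseModel):
    width: int
    height: int
    offset_x: int
    offset_y: int

class FighterMove(BaseModel):
    move_id: str
    display_name: str
    command: str
    move_type: str                      # "normal", "special", "super", "command_grab"
    damage: int = Field(ge=0, le=300)   # mirrors max_damage_single_move
    chip_damage: int = Field(ge=0)
    startup_frames: int = Field(ge=1)
    active_frames: int = Field(ge=1)
    recovery_frames: int = Field(ge=0)
    on_hit_advantage: int
    on_block_advantage: int
    hitstun: int
    blockstun: int
    cancel_options: list[str] = []
    properties: list[str] = []
    hitbox: Hitbox

class Moveset(BaseModel):
    character_id: str
    archetype: str
    moves: list[FighterMove]

# Moveset.model_validate_json(raw_llm_output) raises a clear error if the JSON
# is malformed, so bad data never reaches the UE5 export step.
```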
Running the Pipeline
```
# Generate a grappler moveset
python agents/move_generation_agent.py --character titan --archetype grappler

# Output:
# === STEP 1: Generating moveset ===
# Generated 11 moves
# === STEP 2: Validating against balance rules ===
# All balance checks passed!
# === STEP 3: Exporting for UE5 ===
# JSON: output/titan_moveset.json
# UE5 Script: output/create_titan_moves.py
# Total moves: 11
```
Then copy output/create_titan_moves.py into UE5 and run from the Python console.
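The full move_generation_agent.py is a 400+ line pipeline (see Section 3). As a rough sketch, its core generation call with the anthropic SDK might look like this; SYSTEM_PROMPT and build_user_prompt() are placeholders standing in for the prompts shown in Section 3:

```python
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def generate_moveset(character_id: str, archetype: str) -> dict:
    """Step 1 of the pipeline: ask the LLM for a moveset as JSON."""
    response = client.messages.create(
        model="claude-sonnet-4-5",      # swap providers here if needed
        max_tokens=4096,
        system=SYSTEM_PROMPT,           # the systems-designer prompt from Section 3
        messages=[{"role": "user",
                   "content": build_user_prompt(character_id, archetype)}],
    )
    # Parse, then validate with the Pydantic models from schemas.py
    return json.loads(response.content[0].text)
```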
6. RL Balance Testing (Detailed)
Concept
Train an RL agent as Player 1, let it play 1,000+ matches against a random/trained Player 2. Measure win rates. If Character A wins 90% against Character B, something is unbalanced.
Simplified Fighting Game Environment
```python
import gymnasium as gym
import numpy as np
import json


class FightingGameEnv(gym.Env):
    """
    Lightweight fighting game simulation for balance testing.
    NOT the real game -- just the math (frame data + health).
    """

    def __init__(self, p1_moves_path, p2_moves_path):
        with open(p1_moves_path) as f:
            self.p1_moves = json.load(f)["moves"]
        with open(p2_moves_path) as f:
            self.p2_moves = json.load(f)["moves"]

        self.num_moves = len(self.p1_moves)
        # Actions: each move + block + forward + backward
        self.action_space = gym.spaces.Discrete(self.num_moves + 3)
        # Observation: [p1_health, p2_health, distance, p1_meter, p2_meter, p1_state, p2_state]
        self.observation_space = gym.spaces.Box(
            low=0, high=1200, shape=(7,), dtype=np.float32
        )

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.p1_health = 1000
        self.p2_health = 1000
        self.distance = 300
        self.p1_meter = 0
        self.p2_meter = 0
        self.frame_count = 0
        return self._get_obs(), {}

    def step(self, action):
        reward = 0
        terminated = False
        truncated = False
        self.frame_count += 1

        if action < self.num_moves:
            move = self.p1_moves[action]
            # Simplified: if in range, deal damage
            if self.distance < move.get("hitbox", {}).get("width", 50) + 100:
                self.p2_health -= move["damage"]
                reward += move["damage"] * 0.1
                self.p1_meter += move.get("meter_gain_on_hit", 10)

        # Simple P2 AI (random baseline)
        p2_action = self.action_space.sample()
        # ... mirror logic for P2 ...

        # Win conditions
        if self.p2_health <= 0:
            reward += 100
            terminated = True
        elif self.p1_health <= 0:
            reward -= 100
            terminated = True
        elif self.frame_count >= 5400:  # 90s at 60fps
            truncated = True

        return self._get_obs(), reward, terminated, truncated, {}

    def _get_obs(self):
        return np.array([
            self.p1_health, self.p2_health, self.distance,
            self.p1_meter, self.p2_meter, 0, 0
        ], dtype=np.float32)
```
Generating a Matchup Chart
```python
import numpy as np
from pathlib import Path
from stable_baselines3 import PPO


def generate_matchup_chart(roster_dir):
    """Train agents for each character pair, produce NxN win-rate matrix."""
    move_files = sorted(Path(roster_dir).glob("*.json"))
    n = len(move_files)
    chart = np.zeros((n, n))

    for i, p1 in enumerate(move_files):
        for j, p2 in enumerate(move_files):
            if i == j:
                chart[i][j] = 0.5  # Mirror match
            else:
                env = FightingGameEnv(str(p1), str(p2))
                model = PPO("MlpPolicy", env, verbose=0)
                model.learn(total_timesteps=500_000)

                # Evaluate over 1000 matches
                wins = 0
                for _ in range(1000):
                    obs, _ = env.reset()
                    done = False
                    while not done:
                        action, _ = model.predict(obs, deterministic=True)
                        obs, _, term, trunc, _ = env.step(action)
                        done = term or trunc
                    if env.p2_health <= 0:
                        wins += 1
                chart[i][j] = wins / 1000

    return chart


# Run overnight:
# chart = generate_matchup_chart("output/roster/")
# Result: "Titan vs Ryu: 45% win rate" -- balanced
#         "Titan vs Zoner: 85% win rate" -- nerf Titan or buff Zoner
```
7. 3D Asset Pipeline: Text to Playable Character
The Real 2026 Workflow
1. TEXT PROMPT: “muscular warrior, cyberpunk armor, fighting stance”
2. AI GENERATION: Tripo AI / Meshy generates the mesh (~20 seconds)
3. AUTO-RIG: Tripo built-in or Mixamo auto-rig
4. MANUAL CLEANUP: Blender retopology to 15k-30k polys, UV cleanup, optimize
5. IMPORT TO UE5: FBX export, Python batch script
6. ANIMATE: DeepMotion (custom) or Mixamo (library)
Prompt Tips for 3D Generation
For Characters (use Tripo AI):
```
Full body 3D character, [body type], [cultural aesthetic],
[clothing details], [accessories], [pose: fighting stance/idle/action],
medium poly, game-ready, PBR textures
```
For Stages (use Meshy):
```
[Environment type] interior/exterior, [key props],
[lighting mood], game environment asset, [art style reference]
```
For Weapons/Accessories:
```
[Object name], [material: metallic/organic/energy], [details],
[size reference], game-ready prop, PBR textures
```
Post-Generation Cleanup (Non-Negotiable)
Even in 2026, AI meshes need work before production:
- Retopology: AI outputs 50k-100k+ polys. Fighting game characters need 15k-30k.
- UV Cleanup: Seams need manual adjustment for proper texture flow.
- Rig Verification: Auto-rigs need manual weight painting for extreme fighting poses.
- LODs: Nanite handles static meshes, but skeletal meshes still need manual LODs.
8. Motion Capture Workflow for Fighting Moves
DeepMotion: Video to Animation
- Record — Film yourself doing the fighting move (phone camera, any angle)
- Upload — https://www.deepmotion.com/animate-3d
- Configure — Enable hand tracking + face tracking
- Process — AI extracts 3D skeletal animation (no suits/markers)
- Download — Export as FBX
- Import — Load into UE5
- Retarget — Auto-retargets to your fighter skeleton
Text-to-Animation Alternative (SayMotion):
Type "jumping roundhouse kick with spin" -> get an FBX file -> import to UE5.
Timing Alignment
After importing, adjust animation timing in UE5 Sequencer to match your frame data. If a move has 6 startup frames, the animation’s hit impact should land on frame 6 at 60fps.
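The conversion is simple arithmetic; an illustrative helper (not tied to any UE5 API) for turning frame data into animation timing at 60 fps:

```python
FPS = 60

def impact_time_seconds(startup_frames: int) -> float:
    """Time into the animation where the hit should connect (6 frames -> 0.1s)."""
    return startup_frames / FPS

def play_rate_for(source_impact_seconds: float, startup_frames: int) -> float:
    """Play-rate multiplier so a captured animation's impact lands on the target frame.
    Example: impact at 0.25s in the source clip, 6-frame startup target -> rate 2.5."""
    return source_impact_seconds / impact_time_seconds(startup_frames)
```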
RADiCAL: Real-Time Preview
For blocking out fight choreography in real-time:
- https://radicalmotion.com/unreal
- Uses standard webcam
- Full-body + fingers + facial expressions
- Live stream into UE5 via Live Link
- Useful for directors to test choreography before capture
9. Implementation Priorities
| Priority | Task | Time | Cost |
|---|---|---|---|
| 1 | Enable Python Editor Script Plugin in UE5 | 5 min | Free |
| 2 | Install CodeGPT + Copilot Free in VS Code | 10 min | Free |
| 3 | Build UFighterMoveData C++ DataAsset class | 2 hrs | Free |
| 4 | Set up Move Generation Agent (Python + Claude API) | 4 hrs | ~$5/mo API |
| 5 | Test DeepMotion for recording fight animations | 1 hr | Free tier |
| 6 | Generate placeholder characters with Tripo AI | 2 hrs | Free tier |
| 7 | Set up RL balance testing | 1 week | Free (CPU) |
10. Risk Mitigation
“Hallucination” — AI Generates Non-Existent Functions
- Problem: LLMs sometimes generate code using UE5 functions that don’t exist.
- Mitigation: Always review agent output before committing. Treat agents like a junior developer: trust but verify.
- Better Mitigation: Use structured outputs (Pydantic schemas) to force the LLM to return valid data. The schema acts as a contract — if the output doesn’t match, it fails before reaching the engine.
“Drift” — AI Gradually Breaks Balance
- Problem: If the AI generates 50 moves, small biases compound. One character becomes overpowered.
- Mitigation: The deterministic balance rules engine catches this. It runs AFTER every AI generation and rejects invalid data. The rules are hand-written, not AI-generated.
“Garbage In” — Low-Quality 3D Assets
- Problem: AI-generated meshes look good in screenshots but have terrible topology for animation.
- Mitigation: ALWAYS retopologize in Blender before production use. Budget 30 minutes per character for cleanup.
“Vendor Lock-in” — API Goes Down or Changes Pricing
- Problem: If Claude/GPT raises prices or changes its API, our pipeline breaks.
- Mitigation: All agent configs use a standard interface. Swap the LLM provider by changing one line (self.model = "claude-sonnet-4-5" -> self.model = "gpt-4o"). LangChain abstracts this further. A minimal sketch of such an interface is shown below.
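A minimal sketch of what that standard interface could look like, assuming the anthropic and openai Python SDKs; the class and method names here are illustrative, not from any existing framework:

```python
class LLMClient:
    """Thin wrapper so the rest of the pipeline never touches a vendor SDK directly."""

    def __init__(self, provider: str = "anthropic", model: str = "claude-sonnet-4-5"):
        self.provider = provider
        self.model = model  # change model alongside provider when swapping vendors

    def generate(self, system: str, user: str, max_tokens: int = 4096) -> str:
        if self.provider == "anthropic":
            import anthropic
            client = anthropic.Anthropic()
            response = client.messages.create(
                model=self.model, max_tokens=max_tokens,
                system=system, messages=[{"role": "user", "content": user}],
            )
            return response.content[0].text
        if self.provider == "openai":
            from openai import OpenAI
            client = OpenAI()
            response = client.chat.completions.create(
                model=self.model, max_tokens=max_tokens,
                messages=[{"role": "system", "content": system},
                          {"role": "user", "content": user}],
            )
            return response.choices[0].message.content
        raise ValueError(f"Unknown provider: {self.provider}")
```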
11. Key Resources
For the complete resource library with every link, book, subreddit, grant, and tool referenced in this guide, see Resource Library.
Key quick links:
- GAS Community Docs: https://github.com/tranek/GASDocumentation
- UE5 Python Scripting: https://dev.epicgames.com/documentation/en-us/unreal-engine/scripting-the-unreal-editor-using-python
- GGPO Rollback SDK: https://www.ggpo.net/
- LangChain Academy: https://academy.langchain.com/courses/intro-to-langgraph
- Claude API Structured Outputs: https://platform.claude.com/docs/en/build-with-claude/structured-outputs
- Infil’s FGC Glossary: https://glossary.infil.net/
This is a living document. Update it as tools evolve, new techniques emerge, or the pipeline matures.