Agentic Development Guide

1. The Core Concept: “Agents, Not Just Prompts”

Most people use AI by typing a prompt and getting text back. Agentic Development is different. It means giving an AI a goal and access to tools, then letting it figure out the steps.

Reframed for Amina Games:

  • Prompting: “Write a C++ function for a fireball.”
  • Agentic: “Here is the codebase. Create a fireball ability that scales with the player’s level, update the header files, and write a unit test to verify it deals damage.”

The difference is autonomy. An agent reads context, plans steps, uses tools, and self-corrects. You give it the what and it figures out the how.
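
That loop of reading context, planning, acting, and observing can be sketched in a few lines of Python. This is a toy illustration, not any particular framework; `llm_plan` and the `TOOLS` table are invented stand-ins for a real LLM call and real tool integrations:

```python
# Toy sketch of an agent loop: a planner (standing in for an LLM) picks the
# next tool, the loop runs it and records the result until the goal is met.
# `llm_plan` and TOOLS are invented for this illustration.

def llm_plan(goal, history):
    """Fake planner -- a real agent asks an LLM what to do next."""
    done_actions = {step[0] for step in history}
    if "write_code" not in done_actions:
        return ("write_code", "fireball ability")
    if "run_tests" not in done_actions:
        return ("run_tests", "fireball tests")
    return ("done", None)

TOOLS = {
    "write_code": lambda task: f"wrote {task}",
    "run_tests": lambda task: "all tests passed",
}

def run_agent(goal, max_steps=10):
    history = []
    for _ in range(max_steps):
        action, arg = llm_plan(goal, history)
        if action == "done":
            break
        result = TOOLS[action](arg)       # act with a tool
        history.append((action, result))  # observe, feed back into planning
    return history

print(run_agent("create a fireball ability"))
# [('write_code', 'wrote fireball ability'), ('run_tests', 'all tests passed')]
```

The `max_steps` cap matters in real agents too: it is what stops a confused model from looping forever.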

The math is simple: a team of 2 people cannot ship a fighting game with 4 characters, rollback netcode, and cross-platform support in 6 months using traditional methods. But if each person has 3-5 AI agents handling the grunt work (data entry, boilerplate code, asset generation, balance testing), the effective team size is 8-12.

90% of game developers are already using AI in workflows (Google Cloud, August 2025). We’re not experimenting — we’re catching up.


2. The Tool Stack

A. Coding & Blueprints (The “Developer” Agents)

These tools live inside or alongside Unreal Engine and help build the actual game logic.

1. CodeGPT (UE5-Specific Coding Agent)

  • What it does: Connects LLMs directly to VS Code with a dedicated Unreal Engine 5 agent that understands GAS, Game Modes, Pawns, Controllers, Blueprint patterns, Nanite, Lumen, Chaos Physics, and Niagara VFX.
  • Why we need it: Generic ChatGPT messes up UE5 macros (UPROPERTY, UFUNCTION, UCLASS). CodeGPT’s UE5 agent was trained on engine-specific patterns.
  • Cost: Free tier (30 interactions/mo), or bring your own key (BYOK) for unlimited use.
  • Action: Install the CodeGPT extension in VS Code. Set up BYOK with a Claude or OpenAI API key.

2. GitHub Copilot (General Code Completion)

  • What it does: Inline code completions and chat. Free tier gives 2,000 suggestions/month.
  • UE5 Plugin: The UnrealCopilot plugin integrates GitHub models directly into the UE5 Editor for Python-based editor automation.
  • Cost: Free tier available. Pro: $10/month.
  • Action: Sign up at https://github.com/features/copilot. Install in VS Code alongside CodeGPT.

3. Cursor (AI-First Editor)

  • What it does: VS Code fork with AI-first features — multi-file editing (Composer mode), full codebase indexing, and an agent mode for autonomous tasks.
  • When to use: When the codebase gets large enough that you need AI to understand relationships across many files.
  • Cost: Free tier available. Pro: $20/month.
  • Link: https://www.cursor.com/

How to Enable UE5 Python Scripting (Required — Do This First):

  1. Open your UE5 Project -> Edit > Plugins
  2. Navigate to Scripting section
  3. Check Python Editor Script Plugin -> Enable
  4. Restart the editor
  5. Access Python console: Window > Developer Tools > Output Log > Python tab

B. Content Creation (The “Artist” Agents)

Tools to generate assets so we don’t start with grey cubes.

1. Tripo AI (Text-to-3D with Auto-Rigging)

  • What it does: Turns text prompts into 3D models with quad topology (critical for animation deformation). Auto-rigging built in. Native UE5 plugin.
  • Free tier: 300 credits/month.
  • Link: https://www.tripo3d.ai/

Example Prompt for a Fighting Game Character:

Full body 3D character, muscular male, Japanese street fighter aesthetic,
white gi with torn sleeves, black belt, bare feet, fighting stance,
medium poly, game-ready, PBR textures

2. Meshy (Text-to-3D and Text-to-Texture)

  • What it does: Text-to-3D generation with PBR textures. Unique “Text-to-Texture” feature applies AI textures to existing models.
  • Free tier: 200 credits/month (~4-5 models).
  • Link: https://www.meshy.ai/

Example Prompt for a Stage:

Japanese dojo interior, wooden training floor, hanging scrolls,
bamboo practice swords on wall rack, warm lighting, game environment asset

3. Mixamo (Free Animation Library)
  • What it does: Thousands of free, royalty-free animations (walk, run, punch, kick, idle) plus auto-rigging of any humanoid model.
  • Cost: Completely free. Requires free Adobe ID.
  • Link: https://www.mixamo.com/
  • Use Case: Standard locomotion and basic attack animations. Use DeepMotion for custom fighting moves.

C. Balancing & Testing (The “QA” Agents)

Technologies to help with the “math” of a fighting game.

1. Reinforcement Learning (RL) / Imitation Learning

2. Python-Based Balance Testing (Standalone)

For faster iteration before the full UE5 RL setup, use Stable-Baselines3 with a lightweight Python simulation:

# pip install stable-baselines3 gymnasium numpy
from stable_baselines3 import PPO
# env = FightingGameEnv(...)  -- see Section 6 for the full implementation
# Train for 500k timesteps, then evaluate win rate
model = PPO("MlpPolicy", env, learning_rate=3e-4, verbose=1)
model.learn(total_timesteps=500_000)

3. The Agentic MVP: Character Move Data System

For the agentic MVP (Character Move Data system), here is how we build it using an agentic workflow.

Goal: Create a system where an AI defines the stats and frame data for a fighting move, validates it against balance rules, and automatically creates UE5 DataAssets.

Step 1: The “Brain” (LLM) Creates the Data

We give a structured prompt to Claude/GPT with explicit constraints and output format:

System Prompt:

You are an expert fighting game systems designer with 15 years of
experience designing frame data for competitive 2D fighting games.
You understand the relationship between startup frames, advantage on
block, cancel windows, and competitive balance.
You design moves that:
- Feel satisfying and distinct per archetype
- Have clear risk/reward tradeoffs
- Follow standard conventions (light < medium < heavy in damage/startup)
- Create interesting neutral, pressure, and combo situations
You ALWAYS output valid JSON matching the exact schema provided.

User Prompt (Grappler Example):

Design a complete moveset for:
Character ID: titan
Archetype: grappler
Description: A massive, slow wrestler. Excels at close range with
devastating command grabs and armored approaches. Weak to zoning and
rushdown pressure. Highest health, slowest walk speed.
Generate exactly:
- 6 normal moves (LP, MP, HP, LK, MK, HK)
- 3 special moves with motion inputs (236, 623, 214, etc.)
- 1 super move (costs 1 meter bar)
- 1 command grab
Ensure light normals: 3-5 frame startup.
Mediums: 6-9 frame startup.
Heavies: 10-16 frame startup.
Specials: 8-20 frames depending on reward.
Super: invincible frames 1-7, 5+ frame startup.
Return JSON with: character_id, archetype, moves array.
Each move: move_id, display_name, command, move_type, damage, chip_damage,
startup_frames, active_frames, recovery_frames, on_hit_advantage,
on_block_advantage, hitstun, blockstun, cancel_options, properties, hitbox.

Output (JSON):

{
  "character_id": "titan",
  "archetype": "grappler",
  "moves": [
    {
      "move_id": "standing_light_punch",
      "display_name": "Jab",
      "command": "LP",
      "move_type": "normal",
      "damage": 30,
      "chip_damage": 0,
      "startup_frames": 5,
      "active_frames": 3,
      "recovery_frames": 8,
      "on_hit_advantage": 3,
      "on_block_advantage": 1,
      "hitstun": 12,
      "blockstun": 10,
      "cancel_options": ["normal", "special"],
      "properties": [],
      "hitbox": {"width": 40, "height": 30, "offset_x": 60, "offset_y": 0}
    },
    {
      "move_id": "titan_buster",
      "display_name": "Titan Buster",
      "command": "360P",
      "move_type": "command_grab",
      "damage": 200,
      "chip_damage": 0,
      "startup_frames": 2,
      "active_frames": 3,
      "recovery_frames": 40,
      "on_hit_advantage": 0,
      "on_block_advantage": -40,
      "hitstun": 0,
      "blockstun": 0,
      "cancel_options": [],
      "properties": [],
      "hitbox": {"width": 60, "height": 80, "offset_x": 40, "offset_y": 0}
    }
  ]
}
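
These prompts can be sent programmatically. Below is a sketch using the Anthropic Python SDK; the abbreviated prompt strings, the `max_tokens` value, and the model name are assumptions, and the real pipeline lives in move_generation_agent.py:

```python
# Sketch of sending the system/user prompts above through the Anthropic SDK.
# Prompt strings are abbreviated; model name and max_tokens are assumptions.
import json
import os

SYSTEM_PROMPT = "You are an expert fighting game systems designer ..."  # full text above
USER_PROMPT = "Design a complete moveset for:\nCharacter ID: titan\n..."  # full text above

def build_request(system_prompt, user_prompt, model="claude-sonnet-4-5"):
    """Assemble the request payload; separated out so it is easy to test."""
    return {
        "model": model,
        "max_tokens": 4096,
        "system": system_prompt,
        "messages": [{"role": "user", "content": user_prompt}],
    }

# Only hit the API if a key is configured
if os.environ.get("ANTHROPIC_API_KEY"):
    import anthropic
    client = anthropic.Anthropic()
    resp = client.messages.create(**build_request(SYSTEM_PROMPT, USER_PROMPT))
    moveset = json.loads(resp.content[0].text)  # the JSON shown above
    print(f"Generated {len(moveset['moves'])} moves")
```

Keeping the request assembly in a plain function makes the prompt testable without burning API credits.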

Step 2: The “Validator” (Deterministic Rules) Checks It

The JSON passes through a deterministic balance rules engine (Python, no AI). This catches mistakes:

class BalanceRules:
    UNIVERSAL = {
        "max_damage_single_move": 300,   # No one-shot kills
        "min_startup_frames_super": 3,   # Supers must be reactable
        "max_plus_on_block": 5,          # Nothing more than +5
        "light_normal_max_startup": 5,   # Lights must be fast
    }
    ARCHETYPE_RULES = {
        "grappler": {
            "health_range": (1100, 1250),
            "command_grab_required": True,
            "max_projectile_moves": 0,
        },
        "rushdown": {
            "health_range": (850, 950),
            "min_plus_on_block_moves": 2,
        },
        "zoner": {
            "min_projectile_moves": 1,
            "max_walk_speed": 4.0,
        },
        "shoto": {
            "requires_dp": True,
            "requires_fireball": True,
        },
    }

If violations are found, the JSON goes back to the LLM with specific fix instructions. This loop runs up to 3 times.
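
That generate-validate-fix loop can be sketched as follows. The rule values mirror BalanceRules.UNIVERSAL above; `regenerate_with_feedback` is an invented stand-in for the LLM call that applies the fix instructions:

```python
# Sketch of the generate -> validate -> fix loop. Rule values mirror
# BalanceRules.UNIVERSAL; `regenerate_with_feedback` stands in for the LLM
# call that applies fix instructions and is invented for this example.

UNIVERSAL = {
    "max_damage_single_move": 300,
    "light_normal_max_startup": 5,
}

def find_violations(moveset):
    """Deterministic checks -- no AI involved."""
    violations = []
    for move in moveset["moves"]:
        if move["damage"] > UNIVERSAL["max_damage_single_move"]:
            violations.append(f"{move['move_id']}: damage {move['damage']} > 300")
        if move["command"] in ("LP", "LK") and \
                move["startup_frames"] > UNIVERSAL["light_normal_max_startup"]:
            violations.append(f"{move['move_id']}: light normal too slow")
    return violations

def validate_with_retries(moveset, regenerate_with_feedback, max_attempts=3):
    for attempt in range(max_attempts + 1):
        violations = find_violations(moveset)
        if not violations:
            return moveset  # passed every check
        if attempt == max_attempts:
            raise ValueError(f"still invalid after {max_attempts} fix attempts: {violations}")
        # Hand the specific violations back to the LLM for a targeted fix
        moveset = regenerate_with_feedback(moveset, violations)
```

Raising after the retry budget is exhausted keeps a stubbornly-broken generation from ever reaching the UE5 export step.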

Step 3: The “Hands” (Python Script) Builds It in UE5

The validated JSON feeds into a Python script that runs inside UE5:

"""
Run inside UE5's Python console:
Window > Developer Tools > Output Log > Python tab
"""
import unreal
import json
# Load the validated JSON
with open("/path/to/titan_moveset.json") as f:
data = json.load(f)
DEST_PATH = f"/Game/Data/Characters/{data['character_id']}/Moves"
# Ensure destination folder exists
if not unreal.EditorAssetLibrary.does_directory_exist(DEST_PATH):
unreal.EditorAssetLibrary.make_directory(DEST_PATH)
asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
for move in data["moves"]:
# Create DataAsset (requires UFighterMoveData C++ class)
asset = asset_tools.create_asset(
move["move_id"],
DEST_PATH,
unreal.load_class(None, "/Script/AminaArena.FighterMoveData"),
unreal.DataAssetFactory()
)
# Set properties
asset.set_editor_property("MoveName", move["display_name"])
asset.set_editor_property("Damage", move["damage"])
asset.set_editor_property("StartupFrames", move["startup_frames"])
asset.set_editor_property("ActiveFrames", move["active_frames"])
asset.set_editor_property("RecoveryFrames", move["recovery_frames"])
asset.set_editor_property("OnHitAdvantage", move["on_hit_advantage"])
asset.set_editor_property("OnBlockAdvantage", move["on_block_advantage"])
unreal.EditorAssetLibrary.save_loaded_asset(asset)
unreal.log(f"Created: {move['move_id']}")

Run one command -> all moves exist as DataAssets in the Content Browser -> ready for Blueprint references. No manual clicking through 50 menus.

Full pipeline implementation (400+ lines with Pydantic validation, Claude structured outputs, and UE5 export): see Resource Library or ask Chris to set up the move_generation_agent.py script.


4. More Prompt Examples for Game Development

A. Generating a Complete Character Archetype

Design a new fighting game character archetype for a 2D competitive fighter.
Requirements:
- Name and visual concept (2-3 sentences)
- Archetype tag (shoto, grappler, zoner, rushdown, puppet, setup)
- Health value (800-1200)
- Walk speed (1-10 scale, 5 = average)
- 3 defining gameplay traits
- 1 unique mechanic that differentiates them from the roster
- Strengths (2-3)
- Weaknesses (2-3)
- Ideal matchup spread: who they beat, who beats them, and why
Output as structured JSON.
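
Before that JSON reaches the engine it should be schema-checked. The real pipeline uses Pydantic models (schemas.py); here is a dependency-free sketch of the same idea with a stdlib dataclass, with the ranges taken from the prompt above and illustrative field names:

```python
# Schema-checking the archetype JSON before it touches the engine. The real
# pipeline uses Pydantic (schemas.py); this sketch uses a stdlib dataclass.
# Ranges come from the prompt above; field names are illustrative.
from dataclasses import dataclass, field

VALID_ARCHETYPES = {"shoto", "grappler", "zoner", "rushdown", "puppet", "setup"}

@dataclass
class CharacterConcept:
    name: str
    archetype: str
    health: int        # 800-1200 per the prompt
    walk_speed: float  # 1-10 scale, 5 = average
    traits: list = field(default_factory=list)

    def __post_init__(self):
        if self.archetype not in VALID_ARCHETYPES:
            raise ValueError(f"unknown archetype: {self.archetype}")
        if not 800 <= self.health <= 1200:
            raise ValueError(f"health {self.health} outside 800-1200")
        if not 1 <= self.walk_speed <= 10:
            raise ValueError(f"walk speed {self.walk_speed} outside 1-10")

# Valid data passes; out-of-range data fails loudly before reaching UE5
titan = CharacterConcept("Titan", "grappler", 1150, 2.5, ["armor", "command grabs"])
```

In the real pipeline a Pydantic model gives the same guarantee plus automatic JSON parsing.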

B. Generating UE5 C++ Boilerplate (GAS Ability)

Write a UE5 C++ class for a GAS GameplayAbility called "UGA_HeavyKick".
Requirements:
- Inherits from UGameplayAbility
- Uses UPROPERTY for: Damage (float), StartupFrames (int32),
ActiveFrames (int32), RecoveryFrames (int32)
- Override ActivateAbility() and EndAbility()
- In ActivateAbility: play a montage, apply a GameplayEffect for damage
on hit confirm, and set the cooldown
- Use UFUNCTION(BlueprintCallable) for a CommitAbility check
- Include the .h and .cpp files
- Follow Epic's coding standard (prefix U for UObject, F for structs,
E for enums)

C. Game Design Document from High-Level Description

Create a detailed Game Design Document (GDD) section for the following:
Game: "Amina Arena" -- a 2D competitive fighting game with AI-driven opponents
Section: Core Combat System
Cover these points:
1. Input system (8-way directional + 4 attack buttons)
2. Frame data model (startup, active, recovery, advantage)
3. Combo system (cancel hierarchy: normal > special > super)
4. Meter system (gain on hit/block/whiff, spend on supers and EX moves)
5. Defensive options (block, throw tech, burst, pushblock)
6. Training mode features (frame data display, hitbox visualization,
combo recording/playback)
Format as a professional GDD with numbered sections, bullet points,
and technical specifications where appropriate.

D. Generating a Frame Data Spreadsheet

Generate a frame data spreadsheet for a fighting game character named
"Ryu" (shoto archetype). Include all normals (standing, crouching, jumping)
and 3 specials (fireball, uppercut, hurricane kick).
For each move, include columns:
- Move Name, Command, Startup, Active, Recovery, On Hit, On Block,
Damage, Chip, Cancel Options, Properties (low/overhead/armor/invincible)
Output as a markdown table. Ensure:
- Light attacks: 3-6f startup, +1 to +3 on hit
- Medium attacks: 7-10f startup, +3 to +6 on hit
- Heavy attacks: 11-16f startup, knockdown on hit
- DP (uppercut): invincible frames 1-5, very negative on block (-20+)
- Fireball: 12-15f startup, +2 on block at max range

E. Generating a UE5 Batch-Import Script

Write a UE5 Python script that:
1. Scans a folder (/tmp/imports/) for all .fbx files
2. For each file, imports it into /Game/Characters/Imports/
3. Sets the skeletal mesh to use our standard skeleton
(/Game/Characters/Shared/SK_FighterBase)
4. Creates a physics asset automatically
5. Logs success/failure for each file
6. Shows a progress bar using unreal.ScopedSlowTask

5. Setting Up the Agent Pipeline (Step-by-Step)

# Python dependencies
pip install anthropic pydantic stable-baselines3 gymnasium numpy
# Set API key
export ANTHROPIC_API_KEY="sk-ant-..."
aminagames/
  agents/
    move_generation_agent.py   # Main pipeline
    balance_rules.py           # Deterministic validation
    schemas.py                 # Pydantic models (FighterMove, Hitbox, etc.)
    ue5_export.py              # Generates UE5 Python scripts
  output/
    titan_moveset.json         # Generated data
    create_titan_moves.py      # UE5 automation script

# Generate a grappler moveset
python agents/move_generation_agent.py --character titan --archetype grappler
# Output:
# === STEP 1: Generating moveset ===
# Generated 11 moves
# === STEP 2: Validating against balance rules ===
# All balance checks passed!
# === STEP 3: Exporting for UE5 ===
# JSON: output/titan_moveset.json
# UE5 Script: output/create_titan_moves.py
# Total moves: 11

Then copy output/create_titan_moves.py into UE5 and run from the Python console.


6. RL Balance Testing with a Lightweight Simulation

Train an RL agent as Player 1, then let it play 1,000+ matches against a random or trained Player 2. Measure win rates. If Character A wins 90% of matches against Character B, something is unbalanced.

import gymnasium as gym
import numpy as np
import json

class FightingGameEnv(gym.Env):
    """
    Lightweight fighting game simulation for balance testing.
    NOT the real game -- just the math (frame data + health).
    """
    def __init__(self, p1_moves_path, p2_moves_path):
        super().__init__()
        with open(p1_moves_path) as f:
            self.p1_moves = json.load(f)["moves"]
        with open(p2_moves_path) as f:
            self.p2_moves = json.load(f)["moves"]
        self.num_moves = len(self.p1_moves)
        # Actions: each move + block + forward + backward
        self.action_space = gym.spaces.Discrete(self.num_moves + 3)
        # Observation: [p1_health, p2_health, distance, p1_meter, p2_meter, p1_state, p2_state]
        self.observation_space = gym.spaces.Box(
            low=0, high=1200, shape=(7,), dtype=np.float32
        )

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.p1_health = 1000
        self.p2_health = 1000
        self.distance = 300
        self.p1_meter = 0
        self.p2_meter = 0
        self.frame_count = 0
        return self._get_obs(), {}

    def step(self, action):
        reward = 0
        terminated = False
        truncated = False
        self.frame_count += 1
        if action < self.num_moves:
            move = self.p1_moves[action]
            # Simplified: if in range, deal damage
            if self.distance < move.get("hitbox", {}).get("width", 50) + 100:
                self.p2_health -= move["damage"]
                reward += move["damage"] * 0.1
                self.p1_meter += move.get("meter_gain_on_hit", 10)
        # Simple P2 AI (random baseline)
        p2_action = self.action_space.sample()
        # ... mirror logic for P2 ...
        # Win conditions
        if self.p2_health <= 0:
            reward += 100
            terminated = True
        elif self.p1_health <= 0:
            reward -= 100
            terminated = True
        elif self.frame_count >= 5400:  # 90s at 60fps
            truncated = True
        return self._get_obs(), reward, terminated, truncated, {}

    def _get_obs(self):
        return np.array([
            self.p1_health, self.p2_health, self.distance,
            self.p1_meter, self.p2_meter, 0, 0
        ], dtype=np.float32)
from stable_baselines3 import PPO
from pathlib import Path

def generate_matchup_chart(roster_dir):
    """Train agents for each character pair, produce an NxN win-rate matrix."""
    move_files = sorted(Path(roster_dir).glob("*.json"))
    n = len(move_files)
    chart = np.zeros((n, n))
    for i, p1 in enumerate(move_files):
        for j, p2 in enumerate(move_files):
            if i == j:
                chart[i][j] = 0.5  # Mirror match
            else:
                env = FightingGameEnv(str(p1), str(p2))
                model = PPO("MlpPolicy", env, verbose=0)
                model.learn(total_timesteps=500_000)
                # Evaluate over 1000 matches
                wins = 0
                for _ in range(1000):
                    obs, _ = env.reset()
                    done = False
                    while not done:
                        action, _ = model.predict(obs, deterministic=True)
                        obs, _, term, trunc, _ = env.step(action)
                        done = term or trunc
                    if env.p2_health <= 0:
                        wins += 1
                chart[i][j] = wins / 1000
    return chart

# Run overnight:
# chart = generate_matchup_chart("output/roster/")
# Result: "Titan vs Ryu: 45% win rate" -- balanced
#         "Titan vs Zoner: 85% win rate" -- nerf Titan or buff Zoner

7. 3D Asset Pipeline: Text to Playable Character

The pipeline, step by step:

  1. Text prompt — e.g. “muscular warrior, cyberpunk armor, fighting stance”
  2. AI generation — Tripo AI / Meshy generates the mesh (~20 seconds)
  3. Auto-rig — Tripo’s built-in rigger or Mixamo auto-rig
  4. Manual cleanup — Blender: retopology, UV cleanup, optimize to 15k-30k polys
  5. Import to UE5 — FBX export, Python batch script
  6. Animate — DeepMotion (custom moves) or Mixamo (library)

For Characters (use Tripo AI):

Full body 3D character, [body type], [cultural aesthetic],
[clothing details], [accessories], [pose: fighting stance/idle/action],
medium poly, game-ready, PBR textures

For Stages (use Meshy):

[Environment type] interior/exterior, [key props],
[lighting mood], game environment asset, [art style reference]

For Weapons/Accessories:

[Object name], [material: metallic/organic/energy], [details],
[size reference], game-ready prop, PBR textures

Even in 2026, AI meshes need work before production:

  1. Retopology: AI outputs 50k-100k+ polys. Fighting game characters need 15k-30k.
  2. UV Cleanup: Seams need manual adjustment for proper texture flow.
  3. Rig Verification: Auto-rigs need manual weight painting for extreme fighting poses.
  4. LODs: Nanite handles static meshes, but skeletal meshes still need manual LODs.

8. Motion Capture Workflow for Fighting Moves

  1. Record — Film yourself doing the fighting move (phone camera, any angle)
  2. Upload — Go to https://www.deepmotion.com/animate-3d
  3. Configure — Enable hand tracking + face tracking
  4. Process — AI extracts 3D skeletal animation (no suits/markers)
  5. Download — Export as FBX
  6. Import — Load into UE5
  7. Retarget — Auto-retargets to your fighter skeleton

Text-to-Animation Alternative (SayMotion): Type "jumping roundhouse kick with spin" -> get an FBX file -> import to UE5.

After importing, adjust animation timing in UE5 Sequencer to match your frame data. If a move has 6 startup frames, the animation’s hit impact should land on frame 6 at 60fps.
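
The frame-to-time arithmetic is worth pinning down in a helper so animators and designers agree on it (a trivial sketch; 60fps is the assumed tick rate, and these helper names are our own, not a UE5 API):

```python
# Convert frame data to Sequencer time. Assumes a 60fps tick rate; the helper
# names are our own convention, not a UE5 API.

FPS = 60

def frame_to_seconds(frame, fps=FPS):
    return frame / fps

def impact_time(startup_frames, fps=FPS):
    """When the hit-impact animation key should land."""
    return frame_to_seconds(startup_frames, fps)

def move_duration(startup, active, recovery, fps=FPS):
    """Total animation length implied by the frame data."""
    return (startup + active + recovery) / fps

print(impact_time(6))          # 0.1 -- a 6f startup hits at 0.1 s
print(move_duration(6, 3, 9))  # 0.3 -- 18 total frames
```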

For blocking out fight choreography in real-time:

  • https://radicalmotion.com/unreal
  • Uses standard webcam
  • Full-body + fingers + facial expressions
  • Live stream into UE5 via Live Link
  • Useful for directors to test choreography before capture

| Priority | Task | Time | Cost |
| --- | --- | --- | --- |
| 1 | Enable Python Editor Script Plugin in UE5 | 5 min | Free |
| 2 | Install CodeGPT + Copilot Free in VS Code | 10 min | Free |
| 3 | Build UFighterMoveData C++ DataAsset class | 2 hrs | Free |
| 4 | Set up Move Generation Agent (Python + Claude API) | 4 hrs | ~$5/mo API |
| 5 | Test DeepMotion for recording fight animations | 1 hr | Free tier |
| 6 | Generate placeholder characters with Tripo AI | 2 hrs | Free tier |
| 7 | Set up RL balance testing | 1 week | Free (CPU) |

“Hallucination” — AI Generates Non-Existent Functions

  • Problem: LLMs sometimes generate code using UE5 functions that don’t exist.
  • Mitigation: Always review agent output before committing. Treat agents like a junior developer: trust but verify.
  • Better Mitigation: Use structured outputs (Pydantic schemas) to force the LLM to return valid data. The schema acts as a contract — if the output doesn’t match, it fails before reaching the engine.

“Drift” — AI Gradually Breaks Balance

  • Problem: If the AI generates 50 moves, small biases compound. One character becomes overpowered.
  • Mitigation: The deterministic balance rules engine catches this. It runs AFTER every AI generation and rejects invalid data. The rules are hand-written, not AI-generated.

“Garbage In” — Low-Quality 3D Assets

  • Problem: AI-generated meshes look good in screenshots but have terrible topology for animation.
  • Mitigation: ALWAYS retopologize in Blender before production use. Budget 30 minutes per character for cleanup.

“Vendor Lock-in” — API Goes Down or Changes Pricing

  • Problem: If Claude/GPT raises prices or changes API, our pipeline breaks.
  • Mitigation: All agent configs use a standard interface. Swap the LLM provider by changing one line (self.model = "claude-sonnet-4-5" -> self.model = "gpt-4o"). LangChain abstracts this further.
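
The one-line swap works because agents never call a vendor SDK directly. A minimal sketch of that wrapper pattern (the class and method names are our own convention, not a library API; the private methods return canned strings where the real calls would go):

```python
# Minimal provider-abstraction wrapper: agents call `complete`, and switching
# vendors is one line (the model string). Names are our own convention; the
# private methods return canned strings instead of making real API calls.

class LLMClient:
    def __init__(self, model="claude-sonnet-4-5"):
        self.model = model  # swap provider by changing this one line

    def complete(self, system, prompt):
        if self.model.startswith("claude"):
            return self._call_anthropic(system, prompt)
        return self._call_openai(system, prompt)

    def _call_anthropic(self, system, prompt):
        # real pipeline: anthropic.Anthropic().messages.create(...)
        return f"[anthropic:{self.model}] response"

    def _call_openai(self, system, prompt):
        # real pipeline: openai.OpenAI().chat.completions.create(...)
        return f"[openai:{self.model}] response"

print(LLMClient("gpt-4o").complete("system", "hello"))
# [openai:gpt-4o] response
```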

For the complete resource library with every link, book, subreddit, grant, and tool referenced in this guide, see Resource Library.


This is a living document. Update it as tools evolve, new techniques emerge, or the pipeline matures.