Internal Guide · Dev Team

Use AI to build faster, not blindly.

A practical handbook for DOTB developers. Learn when, how, and why to use AI in your engineering workflow — with real examples from our SugarCRM stack.

3
Tools to master
1
Core framework
12+
Prompt templates
5
Failure patterns
The core idea: AI won't replace your thinking. It replaces your typing and Googling. Your job is to think clearly, prompt precisely, and always verify what comes back. Accelerate thinking — don't skip it.
01 · Context

Why we need this

These are the real friction points slowing down our team. AI doesn't solve all of them — but it removes most of the repetitive, time-consuming parts.

Legacy debugging
Hours lost reading SugarCRM customizations written 3+ years ago with no documentation.
Repetitive boilerplate
CRUD endpoints, validation logic, API wrappers — all written from scratch every time.
Slow onboarding
New devs spend weeks just understanding the system before contributing.
Missing documentation
Business logic lives in developer heads, not in files. Knowledge is lost when people leave.
We spend more time figuring things out than actually building. AI compresses the figuring-out phase dramatically.

02 · Core Framework

Prompt → Plan → Execute

Every AI-assisted task should follow this three-phase flow. Skipping phases — especially Plan — is how things go wrong. This framework applies equally to a 10-minute bug fix and a 2-week feature.

Phase 01
Prompt
  • Define the problem clearly
  • Include context + constraints
  • State what NOT to change
  • Include real code/errors
  • Choose the right tool
Phase 02
Plan
  • Ask for approaches first
  • Evaluate trade-offs
  • Correct AI's assumptions
  • Split into phases
  • Align before coding
Phase 03
Execute
  • Generate small chunks
  • Review each piece
  • Integrate carefully
  • Test on staging first
  • Verify before shipping
Rule: Never let AI jump straight to Execute. Always force it through Plan first — even for small tasks. Ask: "Before writing code, give me 2–3 approaches with trade-offs."

03 · Workflow

AI in your daily dev flow

Seven concrete steps — from reading a ticket to shipping code. Each step shows what to actually do and what to say to the AI.

1
Understand the requirement

Before writing a single line, paste the ticket/requirement into Claude or ChatGPT and ask it to summarize, surface edge cases, and flag ambiguities. This is a 2-minute step that saves hours of rework.

Tool: Claude or ChatGPT
"Here is a feature requirement: [paste ticket content]
Do the following:
1. Summarize what I need to build in 3 sentences
2. List any ambiguities or missing information
3. List potential edge cases I should handle
4. Ask me the 2 most important clarifying questions"
→ Tip: If AI doesn't ask you clarifying questions, it's making assumptions. Prompt it to.
2
Explore the codebase

Use Cursor (which sees your actual files) to map unfamiliar code. This replaces 30–60 minutes of manual reading with a 5-minute conversation. Start broad, then drill into what matters.

Tool: Cursor (with codebase context)
"Explain what the [ModuleName] module does. Cover: what data it stores, key functions, any hooks attached to it, and relationships to other modules. Keep it under 200 words."
Then follow-up:
"Now trace the flow when a user saves a record. Show me which files are called, in order."
→ Tip: Always start with "explain" before asking for changes. Understanding first, coding second.
3
Plan before coding

This is the most skipped and most important step. Get AI to propose approaches — not write code. You evaluate and choose. Then agree on phases before execution starts.

Tool: Claude (best for multi-step planning)
"I need to implement [feature]. Before writing any code:
1. Give me 2–3 implementation approaches with trade-offs
2. Flag any risks specific to SugarCRM's override system
3. Recommend an approach and explain why
4. Break the recommended approach into implementation phases
5. List what assumptions you're making
Do NOT write code yet."
→ Tip: After getting the plan, say "What assumptions are you making?" Correct them before executing.
4
Generate code — in small chunks

Once you've agreed on a plan, execute phase by phase. Never ask AI to generate a full feature in one shot. Small chunks = you stay in control.

Tool: Cursor
"Implement phase 1 only: [describe phase 1].
Constraints:
- Only modify [specific file/function]
- Do NOT change the function signature
- Do NOT touch any other files
- Add a comment explaining any non-obvious logic
After writing the code, tell me:
- What you changed and why
- What you did NOT touch
- Any edge cases you didn't handle"
→ Tip: The "tell me what you changed" follow-up is critical. AI sometimes modifies more than asked.
5
Review and refine

After generating code, use AI to review it — but from a critical angle, not a confirming one. Ask it to attack its own output. This surfaces issues faster than manual review.

Tool: Claude or ChatGPT
"Here is the code you just generated: [paste code]
Now review it critically. Answer:
1. What could go wrong with this in production?
2. Are there any edge cases not handled?
3. Any hardcoded values that should be configurable?
4. Any security concerns?
5. Would this work correctly on 10k+ rows of data?"
→ Tip: If AI says "looks good", push harder: "What would a senior engineer flag in a PR review?"
6
Write tests

AI is excellent at generating test scaffolding. Give it your function and ask for edge-case-focused tests — not just happy path. AI-written code especially needs tests because it can be subtly wrong.

Tool: Cursor or Claude
"Write unit tests for this function: [paste function]
Include tests for:
1. Happy path (normal expected input)
2. Empty/null input
3. Edge case: [describe your specific edge case]
4. Large data volume (simulate 10k rows)
5. Invalid input that should be rejected
Use [your testing framework] syntax."
→ Tip: Run the tests. If AI-generated tests all pass immediately without any debugging, they may be too shallow.
7
Generate documentation

Don't ship without docs. AI can generate onboarding-quality documentation from code in minutes. This is how we fix the "knowledge lives in heads" problem over time.

Tool: Claude
"Generate developer documentation for this module/function: [paste code]
Format it as:
- What it does (2 sentences, plain English)
- Key inputs and outputs
- Important business rules it enforces
- Things a new developer should NOT change without senior review
- One usage example
Keep it under 300 words. A junior dev should understand it."
→ Tip: Add "what would a new developer misunderstand about this?" to catch hidden assumptions.

04 · Prompt Library

Copy-paste prompt templates

Prompts organized by task type. The DOTB prompts are SugarCRM/PHP/SQL-specific; the Generic prompts work for any project.

SugarCRM · Map an unfamiliar custom module
I'm working on a SugarCRM customization. Explore the [ModuleName] module and give me:
1. What it does and what data it stores
2. Any hooks or logic hooks attached to it
3. Its relationships to other modules
4. Parts of the code that look fragile, non-standard, or likely to break
Do not suggest changes yet. I just need to understand it first.
Why this works: Separating "understand" from "change" prevents AI from jumping to solutions before you have full context.
PHP · Fix a bug with strict scope constraints
I have a bug in this PHP function inside a SugarCRM customization: [paste code]
Error: [paste error message or describe the symptom]
Rules:
- Only modify code inside this function
- Do NOT change the function signature
- Do NOT touch any files other than this one
- This runs on every record save, so keep performance in mind
Give me the fix and explain: what caused the bug, what you changed, and what you did NOT touch.
Why this works: Explicit constraints ("do not touch X") prevent the scope creep that causes AI to break working code while fixing something else.
SQL · Optimize a slow query safely
This SQL query runs inside our SugarCRM report module. It's slow on large datasets (80k+ rows in production): [paste query]
Investigate and tell me:
1. What's likely causing the slowness (missing index, full scan, N+1, etc.)
2. Two optimization options — one safe/minimal change, one more aggressive
3. The SQL for EXPLAIN or EXPLAIN ANALYZE I should run to confirm the bottleneck
4. Any risks I should check before applying changes in production
Do not rewrite the full query yet. Just diagnose and propose.
Why this works: Asking for diagnosis before a rewrite means you understand the problem before changing anything. AI-generated SQL must always be verified with EXPLAIN at production data volumes.
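A minimal sketch of what step 3 looks like in practice — the table and column names below are invented for illustration; the EXPLAIN prefix and the output columns are the point:

```sql
-- Hypothetical slow report query: ask MySQL how it would execute it.
EXPLAIN
SELECT account_id, SUM(amount)
FROM report_lines
WHERE date_created >= '2024-01-01'
GROUP BY account_id;

-- Key columns in the output:
--   type = ALL   -> full table scan (usually the problem)
--   key  = NULL  -> no index used
--   rows         -> estimated rows the engine will examine
-- MySQL 8.0+ also supports EXPLAIN ANALYZE <query>, which actually
-- runs the query and reports real timings per step.
```

Plain EXPLAIN is safe to run in production because it only plans the query; EXPLAIN ANALYZE executes it, so run that one on staging first.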
Generic · Structured error diagnosis
I'm getting this error in [language/framework]: [paste exact error message]
Here is the relevant code: [paste code]
Context:
- This broke after: [what changed recently]
- Expected behavior: [what should happen]
- Actual behavior: [what is happening instead]
Give me:
1. The root cause of the error
2. The minimal fix
3. Any other code that might need updating as a result
4. How to test that the fix worked
Why this works: Providing the "before" context (what changed recently) gives AI the signal it needs to separate cause from symptom.
Generic · Trace a confusing function
I don't understand what this function does or why it behaves this way: [paste code]
Walk me through it line by line. For each meaningful block, explain:
- What it does
- Why it might be written this way
- Any side effects or dependencies I should know about
End with a one-sentence summary of the function's purpose.
Why this works: Line-by-line explanation forces AI to be precise rather than give a vague summary.
Review · Senior engineer PR review simulation
Review this code as if you were a senior engineer doing a PR review. Be direct and critical — don't soften feedback. [paste code]
Check for:
1. Logic errors or incorrect assumptions
2. Missing edge case handling
3. Security issues (injection, auth, data exposure)
4. Performance concerns (especially at scale)
5. Readability and maintainability
6. Anything you'd block the PR over
Rate severity: BLOCK / COMMENT / NITPICK for each issue.
Why this works: "Be critical" gives AI permission to flag issues it might otherwise soften. The severity rating makes the output actionable.
Docs · Onboarding doc from code
Generate a developer onboarding document for this module or feature: [paste code or describe the feature]
Structure it as:
## What it does (2–3 sentences, plain English, no jargon)
## Key files and their roles
## Business rules it enforces (list the logic that reflects real business decisions)
## Safe to modify vs. do not touch (what a new dev can change vs. what needs senior review)
## Common mistakes to avoid
## One example of how to use it
Keep the whole doc under 400 words.
Why this works: The "safe to modify vs. do not touch" section is what junior devs need most and what AI can infer from code patterns.
Planning · Feature planning — force plan before code
I need to implement: [describe the feature]
Stack: [language, framework, database]
Constraints: [time, team size, things that cannot change]
Before writing any code:
1. Give me 2–3 implementation approaches with pros and cons
2. Recommend one and explain your reasoning
3. Break the recommended approach into clear phases (input → output for each phase)
4. List the assumptions you're making about my system
5. Ask me the 2 most important questions before we proceed
Do NOT write code yet.
Why this works: This is the foundational Prompt → Plan template. It forces collaborative planning and exposes AI's assumptions before you're committed to a direction.
Generic · Refactor with safe scope
Refactor this code for readability and maintainability. Do NOT change behavior. [paste code]
Rules:
- Only modify what's necessary for clarity
- Keep the same function signatures
- Add a comment above each change explaining why you made it
After refactoring, give me a diff summary:
- What you changed
- What you deliberately left unchanged and why
Why this works: The diff summary at the end is the most important part — it reveals if AI changed things you didn't ask it to.

05 · Case Study

Real examples, step by step

These are worked examples of Prompt → Plan → Execute applied to real DOTB problems. Each one walks through the full flow — including what the AI was actually told at each phase.

Case Study 01 · Performance
Optimizing the Report module for large datasets
Prompt
Problem: Report feature crashes and times out on datasets with 10k+ rows. Users can't export large reports.
Tool: Claude
"Give me an overview of the Report workflow, then explore the code to find the bottleneck that makes it slow on huge datasets. Do not suggest fixes yet — just help me understand what's happening."
Plan
AI identified 3 core problems. We validated them against the actual code before agreeing on a plan.
Core problems AI found:
1. Generic field formatting on every row (1s/row × 10k rows = hours)
2. Grand total + detail queries run simultaneously
3. No pagination — all data sent to client at once → browser crash
Our response: confirmed all 3 against the codebase. Then:
"Propose a phased plan. Phase 1 = grand total / detail split. Phase 2 = lazy load detail. Phase 3 = pagination. Define input/output for each phase."
Execute
Phase 1 only. One function at a time. Reviewed each output before moving on.
Tool: Cursor
"Implement phase 1 only: separate the grand total query from the detail query. The detail query should only run when a user clicks 'expand.' Do NOT touch any other files. Do NOT change the existing API contract. After writing, tell me exactly what you changed."
Verify
Used the full verification checklist + asked AI to self-review before testing on staging.
→ Key lesson: Spending 20 minutes in the Plan phase (understanding root causes) saved us from building the wrong solution. The first instinct was to add caching — the actual fix was query separation.
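The phase-1 split can be sketched as two queries — a cheap aggregate that always runs, and a detail query deferred until the user expands a group. Table and column names here are invented for illustration, not taken from the actual Report module:

```sql
-- Always runs: one summary row per group, cheap even at 10k+ rows.
SELECT status, COUNT(*) AS row_count, SUM(amount) AS grand_total
FROM report_lines
GROUP BY status;

-- Runs only when the user clicks "expand", and only for that group.
SELECT id, account_id, amount, date_created
FROM report_lines
WHERE status = 'open'
ORDER BY date_created DESC
LIMIT 100;
```

The aggregate touches every row but returns a handful; the expensive per-row work (detail formatting, transfer to the client) is deferred until someone actually asks for it.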
Case Study 02 · Onboarding
A new developer maps the Account module in 30 minutes
Prompt
Context: New junior dev needs to add a field to the Account module. Has never touched SugarCRM before.
Tool: Cursor (with codebase open)
"I'm new to this codebase. Explain the Account module to me. What does it store, what hooks are on it, and what would a beginner most likely break by accident?"
Plan
AI explained the module structure, flagged 2 fragile hooks, and suggested the safest place to add the new field.
"Given that structure, where is the safest place to add a custom text field called 'contract_ref'? Show me which file to edit and what not to touch."
Execute
Junior dev made the change with AI guidance. Senior reviewed the output in 5 minutes instead of pair-programming for 2 hours.
→ Key lesson: AI as an onboarding tool compresses the "exploration phase" dramatically. The senior's time went from 2 hours of hand-holding to a 5-minute review.
Case Study 03 · Generic
Planning a paginated API endpoint from scratch
Prompt
Generic scenario: Any stack. Build a paginated list endpoint with filtering.
Tool: Claude
"I need to build a paginated list API endpoint with filtering. Stack: PHP, MySQL, REST API. Constraints: must support cursor-based pagination (not offset), filtering by date range and status. Before writing code: give me 2 approaches, list your assumptions, and ask me what you need to know."
Plan
Claude proposed two keyset-style cursor approaches, explained the trade-offs versus offset pagination on large datasets, and asked about sort requirements. The plan was agreed before a line of code was written.
Execute
Generated in 3 phases: query layer → pagination logic → API response format. Each reviewed before the next was generated.
→ Key lesson: The constraint "cursor-based, not offset" was in the prompt. Without it, AI would likely have defaulted to LIMIT/OFFSET, which degrades badly on large datasets — the database still reads and discards every skipped row. Constraints in the prompt = better output.
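A sketch of the difference, against a hypothetical invoices table (names invented for illustration). Offset pagination scans past every skipped row; keyset pagination seeks straight to the cursor position:

```sql
-- Offset pagination: MySQL still reads and discards the first 80,000 rows.
SELECT id, status, date_created
FROM invoices
WHERE status = 'open'
ORDER BY date_created DESC, id DESC
LIMIT 50 OFFSET 80000;

-- Keyset (cursor) pagination: seek past the last row the client already has.
-- :last_date and :last_id come from the final row of the previous page.
SELECT id, status, date_created
FROM invoices
WHERE status = 'open'
  AND (date_created, id) < (:last_date, :last_id)
ORDER BY date_created DESC, id DESC
LIMIT 50;
```

With a composite index on (status, date_created, id) the keyset query stays fast at any page depth. The row-value comparison works on recent MySQL; on older versions you may need the expanded `date_created < :last_date OR (date_created = :last_date AND id < :last_id)` form — verify with EXPLAIN either way.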

06 · Rules

Do & Don't

These rules are specific to DOTB's stack. The SugarCRM ones are the most important — generic AI advice won't tell you these.

✓ Do
Tell AI your SugarCRM version and which custom/ overrides exist
Force Plan phase before any code generation
Paste real errors, real code, real table schemas — not descriptions
Say explicitly what NOT to change in every prompt
Run EXPLAIN on every AI-generated SQL before production
Use AI to review its own output with a critical angle
Generate one function / one phase at a time
Test on staging. Always. AI doesn't know your data volume.
✗ Don't
Trust Sugar suggestions without checking the custom/ folder first
Paste credentials, API tokens, or customer data into any AI tool
Ask AI to refactor a whole file — it will change things you didn't ask
Let AI decide on business logic — it doesn't know your client's rules
Ship code you can't explain in a PR review
Assume AI's output is up-to-date — it has a knowledge cutoff
Skip tests because "AI wrote it, it's probably fine"
Use the same tool for everything — match tool to task type

07 · Tool Glossary

Which tool for which task

We use three tools: Cursor, Claude, and ChatGPT. Each has a primary job. Using the wrong tool for a task gives worse results — here's the map.

Cursor
Primary use → Coding
VS Code with AI built in. It reads your actual files, so it knows your codebase. Best when the task requires file context — debugging, refactoring, exploring modules.
codebase-aware · debugging · refactor · generate
Claude
Primary use → Planning & Docs
Strongest for multi-step reasoning, long-context analysis, and documentation. Use it to plan before you go to Cursor to code. Longest context window of the three.
planning · long context · docs · analysis
ChatGPT · Free / $20
Primary use → Brainstorming
Good for exploring options, explaining concepts quickly, and early-stage thinking. Widely familiar. Use it to think through approaches before committing to a tool with your actual code.
brainstorm · concepts · quick questions
Recommended workflow: ChatGPT to explore an idea → Claude to plan the implementation → Cursor to write and refine the code.

08 · Checklist

Verification checklist

Run through this before shipping anything AI helped build. Tick off items as you go. If you can't answer "yes" to every item — don't ship.


Before prompting
I defined the task: input, expected output, and constraints
No sensitive data, credentials, or customer PII in my prompt
SugarCRM: check for DB connection strings, API keys in code snippets
I've told AI my SugarCRM version and which customizations are active
I've stated explicitly what should NOT be changed
During planning
I got a plan with trade-offs before any code was generated
I asked "what assumptions are you making?" and corrected wrong ones
Plan doesn't conflict with any existing SugarCRM overrides
After code generation
I read every line — I can explain what it does without looking at AI's answer
I asked "what else did you change?" and confirmed scope wasn't exceeded
No hardcoded values (credentials, magic numbers, environment-specific strings)
All SQL has been run through EXPLAIN / EXPLAIN ANALYZE
Never trust AI-generated SQL at scale until you verify the query plan
I've run the critical review prompt: "what could go wrong in production?"
Before committing
Tested on local or staging — not just on dev with 200 rows
Edge cases tested: null input, empty results, large volume, invalid types
Tests written or updated for the changed logic
Any business logic or auth change reviewed by a senior
PR description notes what AI assisted with
Transparency helps reviewers know where to look carefully
Documentation updated or generated for the new/changed logic

09 · Failure Patterns

When AI makes things worse

These are real failure patterns — not hypothetical. Understanding them helps you recognize the warning signs before shipping broken code.

⚠ AI suggested standard Sugar logic — we had a custom override
Asked Cursor to fix a save error in a custom module. It suggested the standard SugarCRM approach — which was already overridden in our custom/Extension folder. Applying it silently broke the override and caused records to save incorrectly for 2 hours before anyone noticed.
→ Lesson: Always tell AI which customizations exist before it touches anything Sugar-related. Check custom/ folder before applying any Sugar suggestion.
⚠ "Clean up this function" became a full file refactor
Asked to "clean up this function." AI rewrote 3 functions, renamed variables, and changed a return type. The code looked cleaner but broke 2 callers in other files that weren't checked. It passed code review and went to prod before the break was found.
→ Lesson: Scope every prompt explicitly. "Only modify lines 45–72." Always follow up: "What else did you change outside what I asked?"
⚠ AI's SQL was correct on dev, catastrophic on prod
Generated a working SQL query for the report module. Tested on dev with 200 rows — fast. Deployed to production with 80k rows — full table scan, timeout, service degraded for 30 minutes. AI had no idea about production data volume.
→ Lesson: Always run EXPLAIN on AI-generated SQL. Tell AI the expected row count upfront so it designs indexes into the query.
⚠ AI confidently cited a SugarCRM API from 2 versions ago
Asked ChatGPT how to use a specific SugarCRM API endpoint. Got a detailed, confident answer — for a release two major versions older than ours. The method signature had changed. Spent 45 minutes debugging why the code wouldn't run before discovering the API had been updated.
→ Lesson: AI has a knowledge cutoff and doesn't know your exact Sugar version. Always verify API usage against your actual codebase or official docs.
⚠ Great plan — but AI skipped asking critical questions
Got a detailed 5-phase implementation plan from Claude. Followed it faithfully. Phase 3 assumed the grand total query was already separated from the detail query — it wasn't in our system. Had to redo phases 3 and 4 after discovering this mid-execution.
→ Lesson: After every plan, ask explicitly: "What assumptions are you making about my system?" Correct them before you start executing anything.

10 · Keyword Index

Terms to know & where to learn more

Keywords and concepts from this handbook — with plain-English definitions and research links. Use these to go deeper on any topic.

Prompt Engineering
The practice of structuring AI inputs to get better, more reliable outputs. Most of this handbook is applied prompt engineering.
Context Window
The maximum amount of text an AI can "see" at once. Larger = it can read more of your codebase. Claude has one of the largest available.
Hallucination
When AI generates confident-sounding but incorrect information. A major risk when using AI for APIs, version-specific features, or domain-specific logic.
RAG (Retrieval-Augmented Generation)
A technique where AI retrieves relevant documents before answering. This is how Cursor works — it retrieves relevant files from your codebase and adds them to the prompt context.
Few-Shot Prompting
Giving AI 1–3 examples of the format or style you want before asking for the real output. Dramatically improves consistency of AI-generated code.
Chain-of-Thought
Asking AI to "think step by step" before giving an answer. Adding this phrase to prompts consistently improves output quality on complex tasks.
Knowledge Cutoff
The date after which an AI model has no training data. It won't know about library updates, API changes, or new Sugar versions released after that date.
Agentic AI / AI Agents
AI that can take multi-step actions autonomously — running commands, reading files, and iterating. Cursor's "agent mode" is an example. Higher power, higher risk of scope creep.
EXPLAIN / Query Plan
SQL command that shows how the database executes a query — whether it's doing a full scan, using indexes, etc. Run this on every AI-generated SQL before production.
SugarCRM Logic Hooks
PHP event callbacks that run before/after Sugar operations (save, delete, etc.). AI often doesn't know about your custom hooks — always tell it which ones exist.
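For orientation, a hook registration looks roughly like this — a minimal sketch in which the module, file, class, and method names are invented for illustration (the contract_ref field echoes the onboarding case study):

```php
<?php
// custom/modules/Accounts/logic_hooks.php  (illustrative path)
// Each entry: [sort order, label, file to include, class, method]
$hook_version = 1;
$hook_array['before_save'][] = [
    1,                                              // sort order among before_save hooks
    'Validate contract_ref',                        // human-readable label
    'custom/modules/Accounts/ContractRefHook.php',  // file containing the class
    'ContractRefHook',                              // class name
    'validateContractRef',                          // method called on every save
];
```

This is exactly the kind of customization AI can't see unless you tell it: a plausible-looking "standard Sugar" fix can silently conflict with a before_save hook like this one.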
Cursor Composer / Agent
Cursor's multi-file, multi-step editing mode. Can make changes across many files at once. Powerful but requires careful scoping — it can change more than you expect.
System Prompt / Custom Instructions
Persistent context you give to an AI before every conversation. Use this to tell Claude or ChatGPT about your stack, team conventions, and constraints once — not in every message.
Quick Reference

One-line reminders

Core rule
Accelerate thinking. Don't skip it.
On prompting
Real code. Real errors. No paraphrasing.
On planning
Get 2–3 approaches before any code.
On executing
One function. One phase. Review each.
On SQL
EXPLAIN before production. Always.
On shipping
If you can't explain it, don't ship it.
On SugarCRM
Check custom/ before applying any Sugar suggestion.
On security
No credentials. No PII. No exceptions.