How AI chat analysis works
A technical, marketing-free explanation of what happens between uploading a chat and getting the report: what the AI can detect, what it can't, and where and why it errs, so you can give the reports the right weight.
Any tool that uses AI for something important should be transparent about how it does it. If you're going to make decisions (even small ones) based on what ChatAnalyzer says, you deserve to know what's behind the report and where the real limits are.
The pipeline in 5 steps
A ChatAnalyzer chat analysis goes through five stages. Only step 4 uses AI — the others are traditional code that prepares the context and validates the result.
Chat parsing
ChatAnalyzer detects the format of your .txt (Android, iOS, 12h, 24h, languages) and extracts structure: participants, dates, messages. This stage is deterministic — traditional code, not AI. If your file is corrupt, this is where it fails.
Basic statistics
Counting messages, words, emojis, response times, peak hour. Also deterministic, not AI. These metrics feed the context that's passed to the model afterward.
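These metrics are plain counting. A simplified sketch, assuming each parsed message carries an `author`, `text`, and timestamp (field names are illustrative, not ChatAnalyzer's internal schema):

```python
from collections import Counter
from datetime import datetime

def basic_stats(messages: list[dict]) -> dict:
    """Deterministic metrics over parsed messages; no AI involved."""
    per_author = Counter(m["author"] for m in messages)
    words = sum(len(m["text"].split()) for m in messages)
    hours = Counter(m["ts"].hour for m in messages)
    return {
        "messages": len(messages),
        "per_author": dict(per_author),
        "words": words,
        "peak_hour": hours.most_common(1)[0][0] if hours else None,
    }

msgs = [
    {"author": "Ana", "text": "see you tomorrow?", "ts": datetime(2024, 3, 12, 21, 5)},
    {"author": "Luis", "text": "yes, at 9", "ts": datetime(2024, 3, 12, 21, 7)},
    {"author": "Ana", "text": "perfect", "ts": datetime(2024, 3, 13, 9, 1)},
]
stats = basic_stats(msgs)
```

Because these numbers are computed before the model is called, the AI never has to count anything itself, which avoids a known weakness of language models.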
Prompt construction
Based on the chosen focus (psychological, relational, vibes, etc.), ChatAnalyzer builds a specialized prompt with precise instructions, relevant theoretical frameworks (OCEAN, attachment, etc.) and output rules (structured JSON format, mandatory literal quotes, disqualification of unsupported inferences).
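In outline, prompt construction is template assembly. The snippet below is a hypothetical sketch: the framework descriptions and rule wording are assumptions, not ChatAnalyzer's real prompts.

```python
# Hypothetical focus-to-framework mapping; wording is illustrative only.
FOCUS_FRAMEWORKS = {
    "psychological": "OCEAN (Big Five) traits and attachment theory",
    "relational": "communication dynamics between the participants",
    "vibes": "overall tone, humor and energy of the conversation",
}

OUTPUT_RULES = (
    "Return strictly valid JSON matching the given schema. "
    "Every conclusion MUST include a literal quote from the chat as evidence. "
    "If a claim cannot be backed by a quote, omit it."
)

def build_prompt(focus: str, stats: dict, chat_text: str) -> str:
    """Assemble a focus-specific prompt from precomputed context."""
    framework = FOCUS_FRAMEWORKS[focus]
    return (
        f"You are analyzing a chat export. Focus: {focus} ({framework}).\n"
        f"Precomputed statistics: {stats}\n"
        f"Output rules: {OUTPUT_RULES}\n\n"
        f"Chat:\n{chat_text}"
    )

prompt = build_prompt("psychological", {"messages": 3}, "Ana: see you tomorrow?")
```

The important design point is that the statistics from step 2 travel inside the prompt, so the model reasons over verified numbers instead of recounting.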
AI model call
The prompt, together with the chat content, is sent to the chosen model (Claude, GPT or Gemini). The model processes everything in a single pass and returns a structured analysis in JSON. This is the 'magic' part — but the output is tightly constrained by the prompt.
Validation and formatting
ChatAnalyzer validates that the returned JSON matches the expected schema (correct fields, citations present, scores in range). If something fails, it retries. Then it renders the visual report with scores and citations.
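The validate-and-retry loop can be sketched as follows. Field names (`findings`, `quote`, `score`) and the 0–100 range are assumptions for illustration, not ChatAnalyzer's actual schema:

```python
import json

def validate_report(raw: str) -> dict:
    """Schema check on the model's raw output (simplified sketch)."""
    report = json.loads(raw)  # raises ValueError on malformed JSON
    for finding in report["findings"]:
        if not finding.get("quote"):
            raise ValueError("finding without a literal citation")
        if not 0 <= finding["score"] <= 100:
            raise ValueError(f"score out of range: {finding['score']}")
    return report

def analyze_with_retries(call_model, prompt: str, max_retries: int = 2) -> dict:
    """Re-call the model when validation fails; give up after max_retries."""
    last_error = None
    for _ in range(max_retries + 1):
        try:
            return validate_report(call_model(prompt))
        except (ValueError, KeyError) as exc:
            last_error = exc
    raise RuntimeError(f"validation kept failing: {last_error}")
```

`call_model` stands in for whichever provider API is in use; the point is that a malformed or uncited response never reaches the rendered report.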
What the AI can do well
Current models (Claude 4, GPT-4, Gemini 2.5) are surprisingly good at tasks that reduce complex patterns to structured categories. Specifically:
- ✓ Detect repeated patterns in the chat (words, phrases, dynamics)
- ✓ Estimate OCEAN personality traits from linguistic markers
- ✓ Identify attachment styles from responses to conflict
- ✓ Recognize documented manipulation techniques (gaslighting, DARVO)
- ✓ Calculate precise statistics (messages, words, emojis, hours)
- ✓ Cite specific messages as evidence for each conclusion
- ✓ Detect inconsistencies between what's said at different points in the chat
What the AI CAN'T do (yet, or ever)
Being explicit here is important — honesty about limits is what separates a serious tool from one that overpromises:
- ✗ Diagnose psychiatric disorders (requires in-person clinical evaluation)
- ✗ Predict the future of a relationship with certainty
- ✗ Detect everything outside the text (body language, physical context)
- ✗ Compensate for a biased chat (only fights, only flirting) — it reflects what you gave it
- ✗ Replace therapy, coaching or professional advice
- ✗ Guarantee 0% errors: inferences are probabilistic
Why we require literal chat citations
AI models can hallucinate — generate plausible text that has no grounding in the data. The most effective way to mitigate this is to require every conclusion to come with a literal chat citation. If a "high extraversion" score can't be backed up with textual phrases from the chat itself, the conclusion is suspect. ChatAnalyzer builds prompts with this rule explicitly and discards inferences without evidence.
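The grounding rule can also be enforced mechanically after the model responds: check that each quoted piece of evidence actually appears verbatim in the uploaded chat. A minimal sketch, with assumed field names:

```python
def grounded(report: dict, chat_text: str) -> list[dict]:
    """Keep only findings whose quoted evidence appears verbatim in the chat.
    Simplified anti-hallucination filter; field names are assumptions."""
    kept = []
    for finding in report["findings"]:
        quote = finding.get("quote", "")
        if quote and quote in chat_text:
            kept.append(finding)
    return kept

chat = "Ana: I'd rather stay home tonight\nLuis: again??"
report = {"findings": [
    {"claim": "low extraversion marker", "quote": "I'd rather stay home tonight"},
    {"claim": "invented evidence", "quote": "I hate parties"},  # not in the chat
]}
```

An exact-substring check is deliberately strict: it can reject a quote the model lightly paraphrased, but it can never let a fabricated one through.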
Why it sometimes fails
Reports can be weak for concrete reasons:
- · Few messages: with under 50 messages the AI has little to triangulate. ChatAnalyzer marks low confidence in these cases.
- · Biased chat: if you upload only a fight, the report will overrepresent conflict. The AI doesn't know the full chat is broader.
- · Limited model: Gemini Flash is fast and cheap, but less precise than Claude Opus on psychological nuance.
- · Cultural bias: models are trained predominantly on English. Analyses in regional Spanish or other languages may lose nuances.
- · Context window: very long chats may exceed what the model processes in one pass. ChatAnalyzer truncates or summarizes when needed.
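The context-window point above is worth making concrete. One simple truncation strategy keeps the most recent messages that fit a budget; the sketch below uses characters as a crude stand-in for tokens, which real systems count properly:

```python
def fit_to_context(messages: list[str], max_chars: int) -> list[str]:
    """Keep the most recent messages that fit a rough character budget.
    Illustrative only: production code counts tokens, not characters."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):  # walk newest-first
        if used + len(msg) > max_chars:
            break
        kept.append(msg)
        used += len(msg)
    return list(reversed(kept))  # restore chronological order
```

Truncation is itself a source of bias: if only the last months of a chat fit, the report describes those months, not the whole relationship.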
Privacy: what happens to your chat
- · Content is sent only to the chosen AI provider (Anthropic, OpenAI or Google) for the requested analysis.
- · Commercial providers do not train on enterprise API data by default — it's part of the usage contract.
- · ChatAnalyzer does not store chat content after generating the report.
- · Users' own API keys are stored encrypted with AES-256-GCM.
- · For sensitive chat analyses, you can anonymize names before uploading.
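Anonymizing names before upload can be as simple as a search-and-replace pass. A basic sketch; it won't catch nicknames, typos, or names embedded in other words:

```python
import re

def anonymize(chat_text: str, names: list[str]) -> str:
    """Replace participant names with neutral placeholders before upload."""
    for i, name in enumerate(names, start=1):
        chat_text = re.sub(re.escape(name), f"Person{i}", chat_text, flags=re.IGNORECASE)
    return chat_text

sample = "12/03/2024, 21:05 - Ana: Luis, are you coming?"
# anonymize(sample, ["Ana", "Luis"]) replaces both names with Person1/Person2
```

Note that anonymization slightly changes what the AI sees: analyses that rely on how people address each other (pet names, formality) lose some signal.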
How to use the reports well
Some advice based on how the tool actually works:
- · Upload several different chats from the same person or relationship, not just one. The constants between reports are the most reliable.
- · Use Pro with Claude Opus or GPT-4 for serious psychological analyses. Free with Gemini Flash is good for statistics or a quick pass.
- · Treat scores as hypotheses, not verdicts. "Probability of anxious attachment: 70%" is an observation to explore, not a diagnosis.
- · Combine focuses: run the same chat with psychological, relational and social analyses. The three reads together give a more complete picture.
Try ChatAnalyzer
Now that you know how it works, try the different focuses with your own chat:
- 📊 Chat statistics: the most deterministic, barely uses AI.
- 🧠 Psychological analysis: OCEAN traits and attachment — the most technically demanding.
- 💕 Relational analysis: for chats between two people.
- 🎭 Vibes analysis: the most casual, ideal to start.
Frequently asked questions
Does the AI really understand what I'm saying?
It depends on what we mean by 'understand'. Current language models (GPT-4, Claude, Gemini) process text by detecting statistical patterns learned from billions of examples. They don't feel emotions or live through the conversations they read, but they can identify psychological, rhetorical and emotional patterns with surprising precision — because those patterns are well represented in their training data. Whether that counts as 'understanding' is still philosophically debated, but the functional performance is real.
Why does the report sometimes seem accurate and other times weak?
Three main factors. (1) Volume: with few messages the AI has few patterns to triangulate and leans heavily on the prompt. (2) Context diversity: if the chat is just a fight, the report will overrepresent conflict. (3) Model chosen: Claude Opus or GPT-4 are much finer than Gemini Flash on psychological nuance. Quality correlates with the model and with the material you gave it.
Is my data used to train the AI?
No. ChatAnalyzer sends chat content to the AI provider (Anthropic, OpenAI or Google) only to process the requested analysis. Current commercial providers (via their B2B APIs) do not train on that data by default — it's part of the API usage contract. ChatAnalyzer also does not store chat content after generating the report. For sensitive analyses you can anonymize names before uploading.
Why does the AI sometimes make things up?
It's a phenomenon known as 'hallucination'. Models generate statistically plausible text, which sometimes includes data that sounds right but isn't in the chat. To minimize it, ChatAnalyzer uses prompts that require literal chat citations as evidence for each conclusion and disqualify unsupported claims. If your report has a conclusion without a citation, take it skeptically.
Why are there different models and what changes?
Each model family has strengths. Claude (Anthropic) tends to be more cautious and nuanced in psychological analyses. GPT (OpenAI) has good general balance and is usually good at structured tasks. Gemini (Google) is very fast and affordable, ideal for basic analyses or statistics. ChatAnalyzer Pro lets you choose between 8 models so you can see which fits your case best, or use your own API key if you have a preference.
Can AI replace a therapist or coach?
No. An AI detects patterns, can show you things you weren't seeing, and give you a starting point to reflect. But a therapist or coach brings what AI can't: human presence, lived context, real-time adaptive intervention, and clinical responsibility. Think of AI as a complement — useful before a session to organize thoughts, useful after to integrate — not as a replacement.