AI Processing Troubleshooting

AI Processing is designed to fail silently: if anything goes wrong, the email is saved with its original content, as if AI had never been configured. So you'll never lose an email because of an AI hiccup. But you might find that AI didn't behave the way you expected.
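Conceptually, the save path works like the sketch below. This is a hedged illustration of the silent-fallback pattern, not the actual implementation; the names (`Email`, `saveEmail`, `transform`) are hypothetical.

```typescript
interface Email {
  subject: string;
  body: string;
}

// Hypothetical sketch: the AI step is wrapped so that any failure
// leaves the original email untouched. The save itself never depends
// on the AI call succeeding.
async function saveEmail(
  email: Email,
  transform: (e: Email) => Promise<Email>,
): Promise<Email> {
  try {
    return await transform(email); // AI-processed version
  } catch {
    return email; // any AI error: save the original, never drop the email
  }
}
```

The important property is the `catch` branch: an AI error degrades to a plain save rather than surfacing to you.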

This article covers the most common situations and how to debug them.

A field wasn't filled

When the model can't find a value for a property, it leaves the field blank rather than guessing. The activity feed on your destination shows you exactly which fields were extracted and which were skipped — click any row with the AI badge to expand and see the breakdown.

If a field is consistently skipped, the most common causes are:

  1. The information isn't in the email. AI doesn't make things up. If the email never mentions a deadline, the Deadline field stays blank.
  2. The model didn't understand the field name. Rename the property to something more descriptive ("Customer Email" instead of "Email 2") so the model knows what to look for.
  3. The select options don't match what the email contains. AI must pick from your defined options for select/list fields. Add an extraction guidance prompt explaining the mapping (e.g., "Mark Priority as High if the email mentions URGENT").
  4. The field type is too restrictive. A "rating" field expecting 1-5 won't accept "very high"; AI won't coerce free text into a number it didn't extract.
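Points 3 and 4 come down to the same validation rule: an extracted value either fits the field type (and, for select fields, one of your defined options) or it is dropped. A minimal sketch of that rule, with hypothetical names (`FieldType`, `coerceValue`) that are not the product's actual API:

```typescript
type FieldType = "text" | "number" | "select";

// Hypothetical validation step: a value the model returns is checked
// against the field type. Anything that doesn't fit becomes null
// (a skipped field) rather than a guess.
function coerceValue(
  type: FieldType,
  raw: string | null,
  options: string[] = [],
): string | number | null {
  if (raw === null) return null; // model found nothing
  switch (type) {
    case "text":
      return raw;
    case "number": {
      const n = Number(raw);
      return Number.isFinite(n) ? n : null; // "very high" fails here
    }
    case "select":
      // must match one of the property's defined options exactly
      return options.includes(raw) ? raw : null;
  }
}
```

This is why renaming options or adding extraction guidance helps: it gives the model values that survive the check, instead of values that get silently dropped.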

A field has the wrong value

If extraction is consistently wrong (not blank, but incorrect):

  1. Add extraction guidance. The third input in the AI Processing section ("Tell AI how to find each value") is for domain hints. Use it: "Company name is in the From address, not the signature."
  2. Make the property name unambiguous. "Status" can mean order status, deal status, or ticket status. Rename it to be specific.
  3. Test with a few different emails. A wrong value on one email might be a fluke. A wrong value on five emails is a prompt problem.

The body wasn't transformed

If your body prompt is set but the saved body looks the same as the original:

  1. Check that the destination's aiBodyPrompt is non-empty. Open the destination, scroll to AI Processing, confirm the textarea has content, and click Save.
  2. Check the activity feed. If the AI badge doesn't appear on a recent row, AI didn't run at all (probably because no prompt is configured or you're on the free plan).
  3. Check your plan. AI Processing requires Pro. Free-plan users see a paywall card instead of the prompt inputs.

I'm on Pro but the AI section is missing

The AI Processing section is gated by a beta flag during private rollout. If you're a Pro user and don't see the section, contact support — we'll add you to the rollout list.

The activity feed shows "couldn't extract: X"

This is a normal status, not an error. It means AI tried to extract field X but either:

  • Returned null because the email didn't have enough information
  • Returned a value that failed type coercion (e.g., "very high" for a number field, or a select option that doesn't exist on the property)

To fix it, see "A field wasn't filled" above.

AI is too slow

A typical AI call takes 2-4 seconds. If yours is slower:

  1. Check the email length. Very long emails (newsletters, long threads) take longer because the model has more input to read and may generate more output.
  2. Reduce the number of extraction targets. Extracting 10 fields takes longer than extracting 3.
  3. Simplify your body prompt. "Summarize in 3 bullets" is faster than "rewrite this email as a 5-paragraph essay with headers and footnotes".

If saves are timing out (the email lands without AI processing), the system took the silent fallback path. The email is still saved, just without the AI step.
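A timeout fallback like that can be sketched with `Promise.race`. This is an assumed behavior for illustration, not the actual pipeline code:

```typescript
// Hypothetical sketch: if the AI call doesn't finish within the time
// budget, resolve with the fallback (the original content) so the save
// completes instead of hanging or failing.
function withTimeout<T>(work: Promise<T>, ms: number, fallback: T): Promise<T> {
  const timer = new Promise<T>((resolve) =>
    setTimeout(() => resolve(fallback), ms),
  );
  return Promise.race([work, timer]);
}
```

Either way, the slow AI call never blocks the email from landing.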

AI is wrong about my domain-specific terminology

Add an extraction guidance prompt. The model has no idea what your internal jargon means until you tell it.

How accurate is it?

For straightforward emails (lead inquiries, support tickets, newsletters, project updates) extraction is reliable. For ambiguous content, technical jargon, or fields the model can't infer, you'll see more skipped values and need to iterate on your prompts.

We use Google's Gemini Flash Lite model — fast and cheap but not as accurate as larger models. We optimized for cost so AI Processing stays included in the Pro plan with no per-email surcharge.

Still stuck?

Contact support with:

  • The destination ID
  • The body prompt and extraction guidance you're using
  • 1-2 example emails (subject + first few lines of body) where AI didn't behave as expected
  • What you expected vs what happened

We'll take a look at the prompt and the activity logs and help you tune it.
