Why AI gives generic advice (and what it needs to give useful answers)
ChatGPT knows accounting, but nothing about your business. The difference between generic and useful answers isn't the model — it's the context.
You ask ChatGPT: "Can I deduct client lunches as a business expense?" And it responds with something that could be on any accounting blog: "Entertainment expenses may be deductible provided they are properly documented and necessary for the business activity..."
Technically correct. But it doesn't help you decide whether to claim that lunch. Because it doesn't know if you're a sole trader or a limited company. Doesn't know your industry code. Doesn't know your tax regime. It knows nothing about you — so it answers as if you were anyone.
Now imagine it knew you're a freelance architect on direct estimation, with entertainment expenses running at 3% of your revenue. The answer would be different: specific, with the limits that apply to your profile, with the criteria your tax authority typically uses.
Same model. What changes is what it knows about you.
The accountant who knows you vs the one who doesn't
In the previous article we talked about AI needing to be where you work. But being there isn't enough — it needs to know about you.
Think of it this way. Your accountant doesn't know more accounting than any other accountant. But they know more about you. They know that every December you get invoices from the same lumber supplier. That your biggest client pays at 60 days. That you already had an issue with the tax authority over entertainment expenses. That accumulated knowledge is what makes their advice useful, not generic.
ChatGPT is the accountant who doesn't know you. It knows accounting, but nothing about your business. And every time you start a new conversation, it forgets everything.
AI researchers have a name for this: context engineering — managing what the AI knows about you, in what format, and at what moment. It's not just "giving it data." It's deciding what data it needs, how to structure it, and when to load it. It's what makes the difference between an AI that gives textbook answers and one that gives useful ones.
A developer tools company (Augment Code) measured the difference: the same model, with full access to the project you're working on, performs 70% better than without that context. They didn't change the model. They just gave it the information it needed.
Three things we learned managing an AI's memory
It sounds simple: give it data and it responds better. In practice, there are traps that aren't obvious.
The AI answers with what it has, not what it should look up. A user asked us "how much have I invoiced María García this quarter?" The system responded only with invoices it had seen in that conversation — it didn't check the full database. It literally said it didn't have more invoices in its context.
It was like asking a new employee who only remembers what they've seen today. It wasn't lying — it was answering with what it had. The fix wasn't improving the model. It was adding an explicit rule: for questions about totals, always check the full database. Never answer from memory.
Sounds obvious. But the AI doesn't know this unless you tell it. And it's not natural to have to give instructions for things any person with common sense would do without thinking. That's part of the discipline of context engineering: codifying what an employee would learn on their own.
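What codifying that rule looks like can be sketched in a few lines. The keyword list, the `query_db` callable, and the data shapes below are all assumptions for illustration, not our actual implementation:

```python
# Illustrative routing rule: aggregate questions always go to the full
# database, never to whatever invoices happen to sit in the conversation.

AGGREGATE_KEYWORDS = ("how much", "total", "sum")

def answer_total(question: str, context_invoices: list[dict], query_db) -> float:
    """Return a total, choosing the data source by an explicit rule."""
    if any(kw in question.lower() for kw in AGGREGATE_KEYWORDS):
        rows = query_db(question)    # source of truth: the whole database
    else:
        rows = context_invoices      # fine for questions about what's in view
    return sum(row["amount"] for row in rows)

# The conversation has only seen two invoices; the database holds three.
seen_in_chat = [{"amount": 100.0}, {"amount": 200.0}]
full_db = seen_in_chat + [{"amount": 500.0}]
total = answer_total("How much have I invoiced this quarter?",
                     seen_in_chat, lambda q: full_db)
```

Without the rule, the answer would have been 300 — built from the two invoices in context — instead of the real 800.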
And the traps aren't just about what data to give it. Also how. We prepared a summary of recent activity so it wouldn't have to search for suppliers every time — but we put the invoice number (the one clients see) before the internal reference the system uses. The AI grabbed the human-readable number, the system rejected it. Same information, wrong order.
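A toy version of that ordering trap, with made-up reference formats (`INV-`, `ref_`) and a deliberately naive picker standing in for a model skimming a summary:

```python
# Same information, two orderings. A naive consumer grabs the first
# reference-looking token it finds — much as a model skimming a summary
# tends to. The reference formats here are invented for the example.

def first_reference(summary: str) -> str:
    """Return the first token that looks like a reference."""
    for token in summary.split():
        if token.startswith(("INV-", "ref_")):
            return token.rstrip(")")
    return ""

human_first = "invoice INV-2024-017 (internal ref_8841) from the lumber supplier"
system_first = "internal ref_8841 (invoice INV-2024-017) from the lumber supplier"
```

With `human_first`, the picker returns the client-facing number and the system rejects it; with `system_first`, it returns the internal reference the system expects. Same information, different order, different outcome.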
We also learned that the AI's memory fills up — like a desk. And when it's full, it doesn't warn you: it just starts doing things badly, silently. A user imported several invoices in a row and by the fifth one the AI created an invoice for €0. Not because it couldn't read it — because it had no room left to think. We had to automatically compress old information; after that change, processing time dropped 40%.
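A rough sketch of that compression step, with word counts standing in for tokens and a toy-sized budget (real budgets run to thousands of tokens, and real summarization is done by a model, not string slicing):

```python
# Toy compression: when the running context exceeds the budget, the oldest
# entries are folded into one short summary line instead of letting the
# context silently overflow. All numbers here are illustrative only.

TOKEN_BUDGET = 50  # real budgets are in the thousands of tokens

def token_count(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer

def compress(history: list[str]) -> list[str]:
    """Fold the oldest entries into summaries until the context fits."""
    history = list(history)
    while sum(token_count(t) for t in history) > TOKEN_BUDGET and len(history) > 2:
        oldest, history = history[:2], history[2:]
        history.insert(0, "summary: " + "; ".join(t[:20] for t in oldest))
    return history
```

The key property: compression is triggered automatically by the budget, not left to the user — by the fifth imported invoice there is still room to think.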
Managing that memory — what to load, in what format, when to compress — is part of the work. And it's not something the user should have to do.
What this means for your business
When you think about using AI, the first question shouldn't be "which model is best?" but "what does it know about me?"
If you use ChatGPT by pasting in your data each time, you'll get generic answers. Not because the AI is bad — because it has no context. And the next time you open a conversation, it will have forgotten everything.
The difference between "generic AI" and "useful AI" isn't the model. It's having permanent access to your business information — your clients, your suppliers, your history, your tax regime. And having someone who's thought about how to manage that memory: what to load, in what format, when to compress, what to always remember.
That connects to what we saw in the previous article: infrastructure doesn't just put AI where you work — it's what allows context to flow automatically, without you having to paste it by hand every time.
But there are things the AI shouldn't do even with all the context in the world. It can read an invoice perfectly — identify the supplier, the date, the line items. But when it comes to calculating base plus VAT minus withholding equals total... that's where things change. That's what the next article is about.