pyxta

The AI Context Gap

The models are getting smarter.
The answers aren’t getting more right.

AI excels at tasks with multiple acceptable outputs — drafting content, generating ideas, summarising research. These are tasks where “good enough” is the quality bar and nobody audits the result.

Business operations are different. An expense is classified correctly or it isn’t. A tax obligation is calculated right or it’s wrong. A compliance report meets the standard or it doesn’t. These tasks don’t need a creative AI. They need an informed one.

The gap between those two modes is not intelligence — it is knowledge. AI does not know how your specific organisation works. And a larger, more capable model does not help, for the same reason that a smarter new employee still needs to be onboarded: the knowledge is specific to your organisation and does not exist anywhere the model could have learned it.

Where AI goes wrong.

When AI produces incorrect outputs on business tasks, the errors are not random. They fall into three categories, and each tells you something different.

Data access problems

The AI could not get to the data it needed, or the data was incomplete. This is an infrastructure finding — it tells you about the state of your integrations and whether your systems are set up to share information at all.

Terminology and mapping problems

The AI had the data but misinterpreted it. It called a refund a sale, grouped two suppliers as one because they share a name, or applied the wrong rate because it could not distinguish between categories. This is a context finding — the raw data, without knowledge of what your business means by its own terminology, is not enough.

Reasoning problems

The AI had the right data and understood the terminology, but applied the wrong logic — it calculated a margin using revenue instead of gross profit, or averaged when it should have summed. This is a capability finding about the limits of the specific model.

In most deployments with real business data, the second category dominates. The AI is smart enough. The data exists. What is missing is the knowledge about how your specific organisation uses its data.

The cost problem.

Enterprises solve this by spending millions on data governance programmes, master data management platforms, semantic layers, and dedicated teams. They build structured models of how their organisations work, and they staff those models continuously because the underlying reality changes. This path produces results. It is also completely inaccessible to a business with 30 employees and no data engineering team.

For growing businesses, this knowledge lives in people — in the finance manager who knows the chart of accounts, in the operations lead who maintains the master spreadsheet, in the founder who remembers why that product line is categorised the way it is. It is real, it is valuable, and it is locked in their heads. No AI can access it, which is why no AI produces reliably correct answers for tasks that depend on it.

The viable path is not to ask growing businesses to undertake million-dollar data governance projects. It is to build systems where this knowledge is captured automatically — as a natural by-product of daily operations rather than as a separate, resource-intensive exercise.

What Makes This Different

Process-driven AI, not search-driven AI.

Most enterprise AI platforms have feedback loops — mechanisms that allow the system to learn from user interactions. But these loops are typically designed for search relevance: was this the right document? Was this result helpful? The system learns what users prefer to see. It does not learn whether the business logic was applied correctly.

Onboarded AI learns from process accuracy: was this expense classified according to our rules? Was this supplier matched to the right entity? Was this obligation calculated using the correct framework?

The distinction is structural. In a search-driven system, feedback adjusts relevance rankings. In a process-driven system, every correction is a supervised learning event. When a user fixes an output, that correction is recorded as a new truth for that specific process context. The system doesn’t just learn to show that result less often — it learns the rule that was violated and applies the correction globally.

The result is a self-correcting business model rather than a smart search engine. The validation is the training. The work people already do — correcting, confirming, classifying — becomes the mechanism that makes the system more accurate over time.
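The correction-as-rule mechanism described above can be sketched in a few lines. This is a minimal illustration only — the names (`CorrectionStore`, `record_correction`, `apply`) are hypothetical and not the platform's actual API; a real system would also handle persistence, rule conflicts, scoping, and audit trails.

```python
from dataclasses import dataclass, field

@dataclass
class CorrectionStore:
    """Records user corrections as rules keyed by process context (sketch)."""
    rules: dict = field(default_factory=dict)  # (process, key) -> corrected value

    def record_correction(self, process: str, key: str, corrected: str) -> None:
        # Every fix is a supervised learning event: it is stored as the new
        # truth for this process context, not applied as a one-off data patch.
        self.rules[(process, key)] = corrected

    def apply(self, process: str, key: str, model_output: str) -> str:
        # A known correction overrides the model's raw output globally.
        return self.rules.get((process, key), model_output)

store = CorrectionStore()
# The model classified a "Stripe fee" as "Sales"; a user fixes it once.
store.record_correction("expense_classification", "Stripe fee", "Payment processing")
# Every later occurrence inherits the rule; unknown items pass through unchanged.
store.apply("expense_classification", "Stripe fee", "Sales")   # -> "Payment processing"
store.apply("expense_classification", "AWS invoice", "Cloud")  # -> "Cloud"
```

The point of the sketch is the lookup order: the recorded rule wins over the model's output for every future transaction in that process context, which is what distinguishes a rule from a one-record fix.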

How It Works

The system that answers your questions is the same system that learns from your corrections.

When a finance manager corrects a misclassified expense, that correction is not just fixing one record. It is a piece of organisational knowledge: “in our business, this type of expense belongs in that category.” When an operations lead maps a supplier invoice to the right project, the mapping carries semantic meaning about how the business links its costs to its activities.

Our platform captures these corrections and mappings, structures them, and makes them available as context for AI at the point of decision. The knowledge compounds: each correction makes the next answer more accurate, and the model of how the organisation works grows richer through use rather than through a separate documentation effort.

When you fix a supplier mapping, the fix applies to every future transaction from that supplier — not as a one-off data patch, but as a rule the system now knows. When you confirm that a certain category rolls up to a specific line on your management report, that relationship is captured permanently and applied automatically going forward.
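Delivering that captured knowledge "at the point of decision" can be pictured as assembling the stored mappings and rules into a context block that precedes the task in the model prompt. A rough sketch under assumed names (`build_context_prompt` and the example mappings are illustrative, not the real implementation):

```python
def build_context_prompt(question: str, supplier_mappings: dict, rules: list) -> str:
    """Prepend captured organisational knowledge to a task prompt (sketch)."""
    lines = ["Business context:"]
    # Supplier mappings fixed once by a user apply to every future transaction.
    for raw_name, canonical in supplier_mappings.items():
        lines.append(f"- Supplier '{raw_name}' is the entity '{canonical}'.")
    # Confirmed roll-up relationships and classification rules travel with every query.
    for rule in rules:
        lines.append(f"- {rule}")
    lines.append("")
    lines.append(f"Task: {question}")
    return "\n".join(lines)

supplier_mappings = {"ACME Ltd": "Acme Holdings plc"}
rules = ["Category 'Software' rolls up to the 'Technology' line on the management report."]
prompt = build_context_prompt(
    "Classify this invoice from ACME Ltd and assign it to a report line.",
    supplier_mappings,
    rules,
)
```

Because the context is rebuilt from the knowledge store on every call, each new correction is reflected in the very next answer, with no retraining step in between.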

The underlying AI models are improving rapidly, and that makes this more urgent, not less.

Every improvement in model capability raises the ceiling of what AI can do when given the right context. An organisation that has captured and structured its business knowledge is positioned to benefit from each improvement automatically. One that has not will keep encountering the same knowledge gap, regardless of how capable the model becomes.

The investment is not in any particular AI product. It is in the knowledge layer that makes all of them effective.

Strategy

Three phases to scale.

Each phase builds on the Onboarded AI context captured before it. The organisational knowledge accumulated in Phase 1 becomes the training data for Phase 2, and the adapted models from Phase 2 become the foundation for Phase 3. This creates compounding value at every stage.

Phase 1 — Now

Semantic Prompting

Structured business context delivered as prompts to any AI model, improving accuracy in production today. SaaS applications built on the platform generate revenue while every user interaction captures organisational knowledge that feeds the next phase.

Phase 2 — Next

Organisation-Specific Fine-Tuning

The organisational knowledge captured in Phase 1 becomes training data for model adaptation. AI that doesn’t just receive context at inference time — it has internalised how each organisation works.

Phase 3 — Future

Semantics-First AI

Purpose-built AI with business context as a native capability — not general-purpose models adapted after the fact. Technology licensing to AI companies building the next generation of business applications.

Our roadmap moves from structured prompting toward organisation-specific model adaptation — each phase unlocking capabilities that the previous phase made possible.

Our Values

Superpowers are for heroes, not villains.

Social responsibility is at the core of our values.

As individuals we believe in the power of “voting with one’s wallet”: everyone, every day, can make a difference by deliberately channelling their purchases and support towards people, businesses, and even countries that share their values. Extending this idea to the way we conduct our business is only natural.

Artificial intelligence is a superpower. Pyxta is a superpower. We are proud that our partners and clients care deeply about fairness, equality, integration, sustainability, the environment, and transparency. We are deliberate about who we work with — and we choose partners and clients whose values align with ours.

Sustainability

We are always looking for new ways to uphold our core values, within our company and in the broader community. Got ideas? Let us know.