
Sage X3 on GraphQL.

Mid-market ERP (formerly Sage Enterprise Management) exposed via a GraphQL API through the Syracuse web server using Yoga GraphQL. Relay cursor pagination, OData-style filters, and a schema that varies per installation. Introspect first, query second.

Production · GraphQL · Syracuse API · Relay cursors · v1.1.0

A GraphQL front door into Sage's mid-market ERP.

Sage X3 is the enterprise management platform most mid-market manufacturers, distributors, and services businesses in South Africa already run. Modern instances expose a GraphQL API via the Syracuse web server, the same server that renders the web client. That API is the right door for AI agents: one endpoint, one schema, cursor-based pagination, and server-side filtering.

The endpoint pattern is predictable:

https://<host>:<port>/<folder>/api

The schema is organised into five top-level query types โ€” master data, sales, purchasing, stock, accounting โ€” each containing the entities you'd expect: customers, orders, invoices, stock movements, journal entries.

2nth is middleware, not the system of record.

The canonical skill pattern: read from Sage X3, enrich with AI, write results back or display in a dashboard. Never try to replace X3 as the source of truth for financial or stock data: the audit trail, tax filings, and regulatory obligations all live there.

HTTP Basic, over HTTPS only.

Sage X3's GraphQL endpoint uses plain HTTP Basic Authentication, the same username and password the user would log into the Syracuse web client with. Trivial to implement and hard to get wrong, provided you never put the credentials in code.

# Authorization header
Authorization: Basic base64(USERNAME:PASSWORD)

# Cloudflare Worker โ€” store secrets with wrangler
$ wrangler secret put SAGE_USER
$ wrangler secret put SAGE_PASS

# In the Worker
const auth = 'Basic ' + btoa(`${env.SAGE_USER}:${env.SAGE_PASS}`);
const res = await fetch('https://sage-host/folder/api', {
  method: 'POST',
  headers: { 'Authorization': auth, 'Content-Type': 'application/json' },
  body: JSON.stringify({ query }),
});

Never hardcode credentials.

Environment variables for local dev. wrangler secret put for Workers. AWS Secrets Manager for Lambda. If Sage X3 creds ever land in a git commit, rotate them immediately: the operational X3 user usually has write access to master data, which means a leak is a data integrity incident, not just a security one.

The schema, the query, the filter.

Three things you need to internalise to use Sage X3's GraphQL effectively: the domain structure, the Relay-style query shape with the odd node.code wrapper, and OData-style filters. Everything else is elaboration on these.

Domain structure. The top-level schema groups by business domain. Exact fields vary per installation, but the shape is stable.

x3MasterData          - Master data
  ├── customer        - Customer business partners
  ├── supplier        - Supplier business partners
  ├── product         - Product catalog
  └── stockSite       - Warehouses / stock sites

x3Sales               - Sales domain
  ├── salesOrder      - Sales orders
  ├── salesInvoice    - Sales invoices
  └── salesDelivery   - Delivery notes

x3Purchasing          - Purchasing domain
  ├── purchaseOrder   - Purchase orders
  └── purchaseReceipt - Goods receipts

x3Stock               - Inventory domain
  ├── stockChange     - Stock movements
  └── stockCount      - Stock counts

x3Accounting          - Financial domain
  ├── journalEntry    - Journal entries
  └── generalLedger   - GL accounts

Relay cursor pagination. Every list query uses first/after with edges → node. There's one weird thing about Sage's implementation: fields live inside node.code, not directly on node.

query {
  x3MasterData {
    customer {
      query(first: 10, after: "cursor_value") {
        edges {
          node {
            code {             # ← this wrapper matters
              code
              companyName1
              country { code countryName }
              currency { code localizedDescription }
              isCustomer
              isSupplier
            }
          }
        }
        pageInfo {
          hasNextPage
          endCursor
        }
      }
    }
  }
}
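The hasNextPage/endCursor loop is mechanical, so it is worth writing once. A sketch, in the same style as the Worker code above: runQuery stands for any function that POSTs a GraphQL document plus variables and returns the parsed data object, and fetchAllCustomers is a name invented here, not a Sage API.

```javascript
// Hypothetical helper: walk every page of the customer query.
async function fetchAllCustomers(runQuery, pageSize = 50) {
  const query = `query ($first: Int, $after: String) {
    x3MasterData { customer { query(first: $first, after: $after) {
      edges { node { code { code companyName1 } } }
      pageInfo { hasNextPage endCursor }
    } } }
  }`;
  const rows = [];
  let after = null;
  let hasNext = true;
  while (hasNext) {
    const data = await runQuery(query, { first: pageSize, after });
    const page = data.x3MasterData.customer.query;
    // Unwrap the node.code layer: fields are not directly on node.
    for (const edge of page.edges) rows.push(edge.node.code);
    hasNext = page.pageInfo.hasNextPage;
    after = page.pageInfo.endCursor;
  }
  return rows;
}
```

Injecting runQuery keeps the paging logic testable without a live X3 instance.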

OData-style filters. Filter strings are passed through the filter argument and follow OData syntax. This is where most real-world queries happen โ€” server-side filtering, not client-side post-processing.

query(first: 50, filter: "country.code eq 'ZA' and isCustomer eq true")
query(first: 100, filter: "orderDate ge '2026-01-01'")
Operator      Meaning      Example
eq            Equals       status eq 'Active'
ne            Not equals   country.code ne 'ZA'
gt/ge/lt/le   Comparison   totalAmount gt 10000
contains()    Substring    contains(companyName1, 'Umbrella')
and/or        Logical      isCustomer eq true and isSupplier eq true
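Hand-concatenating filter strings invites quoting bugs. A minimal builder sketch, assuming Sage's filter parser follows the standard OData convention of doubling embedded single quotes; the helper names (quote, eq, ge, and) are invented here:

```javascript
// Quote values for an OData-style filter: strings get single quotes
// (with embedded quotes doubled), everything else is stringified.
const quote = (v) =>
  typeof v === 'string' ? `'${v.replace(/'/g, "''")}'` : String(v);

const eq = (field, value) => `${field} eq ${quote(value)}`;
const ge = (field, value) => `${field} ge ${quote(value)}`;
const and = (...parts) => parts.join(' and ');

// Builds the same filter as the first example above.
const filter = and(eq('country.code', 'ZA'), eq('isCustomer', true));
// filter === "country.code eq 'ZA' and isCustomer eq true"
```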

Introspection. Because the schema varies per installation, start every new integration with an introspection query. It's the only reliable way to know what's actually available in a specific X3 instance.

{
  __schema {
    queryType {
      fields {
        name
        description
        type { name kind ofType { name kind } }
      }
    }
  }
}

# Discover fields on a specific type
{
  __type(name: "X3MasterDataQuery") {
    fields {
      name
      type { name kind }
    }
  }
}
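An agent usually only needs the field names out of the introspection response. A sketch of that extraction; topLevelFields is a name invented here, and the commented-out fetch reuses the auth pattern from the Worker example above:

```javascript
// First introspection query from above, as a string constant.
const INTROSPECT = '{ __schema { queryType { fields { name } } } }';

// Pull the top-level query field names out of a standard
// introspection response, e.g. ['x3MasterData', 'x3Sales', ...].
function topLevelFields(result) {
  return result.data.__schema.queryType.fields.map((f) => f.name);
}

// Against a real instance (auth as in the Worker example):
// const res = await fetch('https://sage-host/folder/api', {
//   method: 'POST',
//   headers: { 'Authorization': auth, 'Content-Type': 'application/json' },
//   body: JSON.stringify({ query: INTROSPECT }),
// });
// console.log(topLevelFields(await res.json()));
```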

Things that only bite in production.

Seven specific mistakes that waste an afternoon the first time you hit them. Each is lifted directly from the canonical skill: these are the failure modes the skill has seen in real deployments.

Fields live under node.code

Data is at node.code.fieldName, not node.fieldName. Every query needs the code wrapper or it returns nothing.

Pagination is required

Omitting first returns zero results (not an error, just zero). Always pass an explicit page size.

CORS blocks browser requests

The Syracuse endpoint rejects cross-origin fetches from browsers. You need a server-side proxy; a Cloudflare Worker is the cheapest option.
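The proxy itself is small. A Worker-style sketch, assuming env.SAGE_URL, SAGE_USER, and SAGE_PASS are configured as secrets; the wide-open Allow-Origin is for illustration only and should be locked to your app's origin in production:

```javascript
// CORS headers added to every proxied response (illustrative values).
const CORS = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Methods': 'POST, OPTIONS',
  'Access-Control-Allow-Headers': 'Content-Type',
};

async function handle(request, env) {
  if (request.method === 'OPTIONS') {
    // Answer the browser's CORS preflight without touching Sage.
    return new Response(null, { headers: CORS });
  }
  // Forward the GraphQL body to X3 with Basic auth attached.
  const auth = 'Basic ' + btoa(`${env.SAGE_USER}:${env.SAGE_PASS}`);
  const upstream = await fetch(env.SAGE_URL, {
    method: 'POST',
    headers: { Authorization: auth, 'Content-Type': 'application/json' },
    body: await request.text(),
  });
  return new Response(await upstream.text(), {
    status: upstream.status,
    headers: { ...CORS, 'Content-Type': 'application/json' },
  });
}

// In a real Worker: export default { fetch: handle };
```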

No aggregation in GraphQL

There's no GROUP BY. Pull the rows and aggregate client-side, or fall back to a reporting view on the X3 side.
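Client-side aggregation is usually a few lines. A GROUP BY sketch over rows already pulled out of X3; the field names (customer, totalAmount) are illustrative, not guaranteed schema fields:

```javascript
// Sum an amount per customer: the client-side equivalent of
// GROUP BY customer with SUM(totalAmount).
function totalsByCustomer(rows) {
  const totals = {};
  for (const r of rows) {
    totals[r.customer] = (totals[r.customer] || 0) + r.totalAmount;
  }
  return totals;
}
```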

ISO 8601 dates in filters

Dates in filters must be '2026-01-01' format. Other formats parse as strings and filter nothing.
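JavaScript's toISOString gives the right shape for free, so there is no reason to hand-format. A one-liner sketch:

```javascript
// Build a filter-safe 'YYYY-MM-DD' date from a JS Date.
const d = new Date(Date.UTC(2026, 0, 15)); // months are 0-based
const iso = d.toISOString().slice(0, 10);  // '2026-01-15'
const filter = `orderDate ge '${iso}'`;
```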

Self-signed certs on dev

Dev X3 instances usually have self-signed TLS. Node needs NODE_TLS_REJECT_UNAUTHORIZED=0 to talk to them; never set this in prod.

Schema varies per installation

Custom fields, activated modules, and Sage version all change what's in the schema. Introspect first every time; don't assume the fields from another X3 instance exist in this one.

What sage-x3 pulls on, and what pulls on it.

The canonical skill declares its requires and improves in frontmatter. Here are the nodes in the tree sage-x3 is wired into.

tech/cloudflare/workers
Cloudflare Workers
The runtime sage-x3 explicitly requires. Worker hosts the GraphQL proxy, handles CORS, and caches access tokens.
fin/reporting
Financial reporting
Required by sage-x3 for KPI output formats. X3 data feeds directly into the reporting skill's patterns.
biz/erp
ERP sub-domain
sage-x3 improves biz/erp: it contributes its patterns back up to the shared ERP skill.
biz/accounting
Accounting
X3's GL entries and journal data cross over into biz/accounting when the AI is generating management accounts.
data/analysis
Data analysis
Most X3 reporting work is client-side aggregation, which means the data/analysis skill is the next node once rows are in hand.
biz/erp · siblings
erpnext · shopify · woocommerce
All four ERP leaves follow the same auth/introspect/query/write/webhook shape. Different APIs, identical pattern.

Go deeper.

The canonical SKILL.md is the authoritative version and gets updates as the production skill evolves. Everything else here is context.