AI pricing
— what's real,
what's marketing
Every pricing software vendor in 2026 says they're AI-powered. Some are. Many aren't. The marketing is so consistent that the only way to tell is to ask better questions and recognise the deflections. Here's how to read the signals — what real AI looks like in pricing, what AI-washing sounds like, and the diagnostic questions that surface the difference in under ten minutes.
In 2026, "AI-powered" is on every pricing software website. The phrase has done something close to inflation — when everyone claims it, the word stops carrying information. The asymmetry is awkward: vendors describe their tools the way they want them perceived, and buyers without ML backgrounds have no easy way to verify. So the demo plays clean, the pricing is opaque, and a year later the platform is doing what a smart rules engine could have done for a third of the cost.
The good news is that the gap is bridgeable. Real AI in pricing draws on six specific techniques. Marketing AI has a small set of recognisable tells. Six diagnostic questions, asked the right way, separate the two reliably in a single discovery call.
Part one · The decoder
What vendors say
vs. what they mean
Every claim below is something we've heard in real evaluations. The pairs aren't always dishonest — sometimes the vendor genuinely doesn't know what their engineering team built. But the pattern is consistent enough to use as a translation table.
Part two · The six techniques
What real AI
in pricing actually is
"AI" in pricing software, when it's real, draws on six specific techniques — three established, three emerging. A serious platform uses several in different parts of the system. The first three (01–03) have been shipping in pricing software for years. The next three (04–06) are coming online now and signal which vendors are building for the next two years, not just defending the last five. Knowing what each one does — and when it's the right tool — gives you the language to evaluate vendors on substance.
01 · Reinforcement learning
02 · Elasticity modelling
03 · Demand forecasting
04 · Foundation models
05 · AI agents
06 · Headless pricing
Notice what's not on this list: a single "AI engine" that does everything. Real AI pricing software uses different techniques for different parts of the lifecycle, because the math problems are genuinely different. A vendor that says "our AI handles all of pricing" is either oversimplifying for marketing or overstating what the product actually does. Either way, the follow-up is "which technique handles which decision?" Bonus credit if the answer covers both the established three (01–03) and at least one of the emerging three (04–06) — that shows the vendor is investing in what comes next, not just maintaining what worked yesterday.
Part three · Diagnostic questions
Six questions
that decode any vendor
Use these in any vendor call, ideally with the technical lead present. The questions are designed to be answered in plain language by anyone with real model understanding. Vague answers are themselves the answer.
Ask these
The six
diagnostic
questions
Run these in any AI vendor evaluation. Each one is short, specific, and reveals more than it asks. The "tell" below each question is what to listen for.
- Question 01 · "What model type runs the pricing decisions?"
  A real answer names a class — reinforcement learning, hierarchical Bayesian, gradient boosting, transformer-based forecaster. Marketing answers stay at "AI" or "machine learning."
  Tell: if the answer doesn't include a named technique within 30 seconds, the model isn't doing as much as the marketing implies.
- Question 02 · "What data does the model train on?"
  A real answer names data sources, time windows, and feature engineering steps. "Customer transaction data" is too vague — "POS transactions plus stock levels plus competitor prices, on a rolling 90-day window with promotional flags" is real.
  Tell: "all your data" is a deflection. Real ML engineers care about what's in and out of the training set.
- Question 03 · "How often does the model retrain, and what triggers retraining?"
  Healthy retraining cadences range from twice-weekly to monthly depending on the use case. Triggers include time-based (cron) and event-based (data drift detected, performance degradation).
  Tell: "continuously" without a cadence usually means "weekly" or "monthly". Press for a number.
- Question 04 · "How does the system handle a brand-new product with no history?"
  Real AI uses cluster-based priors, similarity matching, or expert-set defaults with explicit low-confidence flagging. No model in the world magics demand data out of nothing.
  Tell: "it learns automatically" is a deflection. Cold-start is a known hard problem with known solutions; the answer should describe one.
- Question 05 · "When the model recommends a price, what does the explanation look like?"
  Real explainability shows feature attribution (which inputs drove this decision), confidence intervals, and the safeguards that constrained the recommendation. "An insights dashboard" usually doesn't.
  Tell: if you can't see why the model recommended this price, your category managers won't trust it past month three.
- Question 06 · "What does the model do when input data shifts unexpectedly?"
  Real systems detect distribution drift, raise alerts, and either revert to safer defaults or trigger retraining. They do not silently keep producing recommendations against bad inputs.
  Tell: "it adapts automatically" without describing detection or fallback usually means "it doesn't notice". Drift detection is engineering, not magic.
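A minimal sketch of the kind of drift check question 06 is probing for, using the Population Stability Index on one input feature. The bucket count and the 0.2 alert threshold are common conventions, not any vendor's settings; everything here is illustrative.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a
    recent sample of one input feature (e.g. competitor price)."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def shares(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # bucket index
        # floor each share so log() is defined for empty buckets
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def check_inputs(baseline, recent, threshold=0.2):
    """PSI above ~0.2 is conventionally read as significant drift:
    alert, fall back to safe defaults, and queue retraining."""
    score = psi(baseline, recent)
    return ("drift_detected" if score > threshold else "ok", score)
```

A vendor whose answer maps onto something like this (a named statistic, a threshold, a fallback action) is describing engineering. "It adapts automatically" is not.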
Part four · When AI is and isn't the right tool
Real AI is not
always the answer
One last thing worth saying: rule engines aren't bad. For many pricing decisions — competitive repricing within a defined gateway, margin floor enforcement, promotional freeze-out — rules are the right tool. They're fast, predictable, auditable, and don't need training data. The problem isn't that some pricing software is rule-based; the problem is when rule-based software is sold as AI at AI prices.
The honest framing is: great pricing software combines AI where it adds genuine value (optimisation, demand forecasting, dynamic adjustment) with rule engines where rules are the right tool (safeguards, compliance, predictable triggers). A vendor that explains where each technique fits is showing real product depth. A vendor that calls everything AI is selling marketing.
The pattern below is what good looks like. Use it as a quick mental check after any vendor demo.
- Optimisation — AI (elasticity modelling, RL)
- Markdown timing — AI (demand forecasting)
- Dynamic adjustment — AI (RL, contextual bandits)
- Competitive repricing — Rules + AI signal
- Margin floor enforcement — Rules
- Promotional freeze-out — Rules
- Compliance (Omnibus, MAP) — Rules
- Anomaly detection — AI (drift detection)
If a vendor describes their AI doing items 4–7, raise an eyebrow. If they describe rules doing items 1–3 and 8, that's the tell: what looks like AI is actually a sophisticated automation engine.
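The split above can be sketched as a two-layer decision: the AI proposes a price, the rules layer constrains it and records which safeguards fired. Function names and thresholds here are illustrative, not product defaults.

```python
def apply_safeguards(ai_price, cost, competitor_price,
                     min_margin=0.15, max_premium=0.10):
    """Rules layer around an AI recommendation: the model proposes,
    the rule engine enforces the margin floor and a competitor cap.
    Thresholds are illustrative."""
    applied = []

    floor = cost / (1 - min_margin)  # price below this breaks the margin floor
    if ai_price < floor:
        applied.append("margin_floor")
        ai_price = floor

    cap = competitor_price * (1 + max_premium)  # max premium over competitor
    if ai_price > cap:
        applied.append("competitor_cap")
        ai_price = cap

    return round(ai_price, 2), applied
```

The point of the structure: the AI part is probabilistic and retrained, the rules part is deterministic and auditable, and the audit trail (`applied`) is what compliance reviews.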
Part five · The next layer
When pricing AI
becomes available
to your AI
The conversation above is about the AI inside the pricing platform. There's a newer one starting that's worth flagging: pricing AI exposed to your AI. Modern pricing platforms are beginning to ship MCP servers — Model Context Protocol endpoints — that let your organisation's own AI agents query pricing models, fetch live recommendations, and reason about pricing decisions inside broader workflows.
Concretely: a category manager asks the company's internal Claude or ChatGPT a natural-language question — "why is this SKU's recommended price 12% above last week's," or "what would happen to gross margin if we held the spring outerwear assortment at full price for two more weeks" — and the assistant queries the pricing platform's MCP server, gets the model's real reasoning, and answers. No ticket to the data team. No CSV export. No screenshot pasted into Slack.
Why this matters now, specifically for non-food retail:
- Buyers are already using LLMs. Internal AI assistants are landing in retail organisations faster than any previous productivity wave. Buyers, category managers, and merchandisers are asking these tools real pricing questions today — without MCP access, the answers come from whatever public information the model has, which is a precision problem with direct margin consequences.
- Cross-system reasoning is the actual unlock. An internal agent that can query pricing, ERP, demand forecasting, and customer data simultaneously can answer questions no single system can. The MCP server is what makes the pricing platform a participant in that reasoning, not a silo outside it.
- The data stays governed. Unlike pasting numbers into a public AI tool, MCP server access keeps the data inside your authentication boundary. The internal AI gets answers from your pricing system; the pricing system gets to control what that AI can see and do.
What to ask vendors right now, even if you're not actively rolling out internal AI: "Do you have an MCP server, what's on the roadmap, and what permissions model does it use?" A vendor that has thought about this seriously can describe specific tools exposed (read-only price queries vs. write-back recommendations), authentication via your identity provider, and audit logging. A vendor that says "AI integration is on the roadmap" is restating an aspiration, not a capability.
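To make the permissions question concrete, here is a plain-Python sketch of the shape a good answer describes: read-only versus write-back tools, role checks, and an audit trail. This is not the MCP SDK; the tool names, roles, and in-memory log are all hypothetical.

```python
audit_log = []

TOOLS = {
    # read-only tools exposed to internal agents
    "get_price_recommendation": {"write": False},
    # write-back tools gated behind a stricter role
    "apply_price_change": {"write": True},
}

def call_tool(agent_id, role, tool, **args):
    """Gate every agent call against a permissions model, then record
    it. A real MCP server would do this behind your identity provider."""
    if tool not in TOOLS:
        raise KeyError(f"unknown tool: {tool}")
    if TOOLS[tool]["write"] and role != "pricing_admin":
        audit_log.append((agent_id, tool, "denied"))
        raise PermissionError(f"{role} may not call {tool}")
    audit_log.append((agent_id, tool, "allowed"))
    return {"tool": tool, "args": args}  # stand-in for the real handler
```

A vendor with a serious answer can describe each of these pieces for their own server: which tools exist, who can call them, and where the log lives.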
This is genuinely new. Most pricing software vendors don't have it yet. The ones who do are building it for the same reason the dynamic master data layer matters: pricing is most valuable when it's a participant in the broader retail decision-making system, not a separate tool people remember to consult.
The Pricen approach
Real AI
where it earns,
rules where
they fit
Pricen uses reinforcement learning for dynamic pricing decisions, hierarchical elasticity modelling for price optimisation, and demand forecasting for markdown and planning. Where rules are the right tool — safeguards, compliance, freeze-out logic — the platform uses rules and says so.
Every recommendation comes with model confidence, dominant inputs, and the safeguards that constrained it. The technical team joins evaluations and answers the six diagnostic questions in plain language. Dynamic master data keeps the AI working on a live picture of your assortment, not last quarter's snapshot.
An MCP server is shipping for the latest product tier — letting your organisation's own AI agents query pricing models, fetch recommendations with reasoning, and participate in broader retail workflows. Pricing AI becomes available to your AI, governed and audit-logged, without CSV exports or screenshots.
Continue the series
Other pieces
of the buying
decision
Retail Pricing Software: A Mid-Market Buyer's Guide (2026)
The full lifecycle map: nine functions, six stages, and the eight criteria that separate platforms that work from platforms that disappoint.
Back to the pillar

No. 01 · Evaluation
How to Choose Pricing Software: A Practical Evaluation Framework
Twelve criteria, vendor questions, and the red flags that show up in demos but not in decks.
Read the framework

No. 02 · Cost
How Much Does Retail Pricing Software Cost? A 2026 Reality Check
Pricing models, real ranges, and the total cost of ownership most vendors won't publish.
See the breakdown

No. 03 · ROI
Pricing Software ROI: What Mid-Market Retailers Actually Measure
Real benchmarks. Margin lift, sell-through, time savings — and how to build the business case.
See the numbers

Frequently asked
Quick answers
to common questions
How can I tell if a pricing platform is really using AI?
Ask three questions: (1) What model type runs the pricing decisions? (2) What data does it train on? (3) How often does it retrain and what triggers retraining? Real AI gets specific answers in plain language. Marketing AI deflects to "proprietary engine" or "machine learning" without naming a technique. The vendor's data scientist should be able to describe trade-offs and known failure modes in five minutes.
What's the difference between rule-based pricing and AI pricing?
Rules are if-then logic written by humans: "if competitor price drops below ours by more than 5%, match within margin floor." AI learns patterns from data and outputs predictions or actions: "given these features, recommended price is €127 with 84% confidence." Rules are predictable and auditable; AI is more nuanced and adaptive. Both have legitimate uses. Bad pricing software is rule-based but sold as AI; good pricing software combines them where each fits.
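The if-then rule quoted in the answer, spelled out as code. The 5% trigger and the 10% margin floor are the example's numbers, not recommendations.

```python
def competitor_match_rule(our_price, competitor_price, cost,
                          trigger_gap=0.05, min_margin=0.10):
    """If the competitor undercuts us by more than trigger_gap,
    match their price, but never below the margin floor."""
    if competitor_price < our_price * (1 - trigger_gap):
        floor = cost / (1 - min_margin)      # lowest price keeping min_margin
        return max(competitor_price, floor)  # match, clamped to the floor
    return our_price
```

Note what makes this a rule and not AI: every number is set by a human, the output is fully determined by the inputs, and an auditor can verify it by reading four lines.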
Is "AI-powered" the same as machine learning?
No, and the difference matters. "AI-powered" is a marketing label that gets applied to anything from a logistic regression to a deep transformer. Machine learning is the broader technical category. Within ML, the techniques used in pricing are specific: reinforcement learning, elasticity modelling, demand forecasting. A vendor saying "AI-powered" without naming a technique is using marketing vocabulary. A vendor saying "we use Gaussian process regression for elasticity" is using engineering vocabulary.
What's reinforcement learning in pricing?
Reinforcement learning is a class of ML where the system learns by trying actions and observing outcomes. In pricing: the system tries a price, observes the demand response, updates its policy, tries again. Variants used in pricing software include contextual bandits and policy gradient methods. RL is well-suited to dynamic pricing where the system can observe outcomes quickly. It's less useful for products with very long lifecycles or sparse data, where the feedback loop is too slow to learn.
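A minimal epsilon-greedy bandit over a discrete price grid shows the try-observe-update loop in miniature. Real dynamic-pricing RL adds context features and safeguards; everything here, including the simulated demand curve in the usage note, is illustrative.

```python
import random

def epsilon_greedy_pricer(price_points, rounds, demand_fn,
                          epsilon=0.1, seed=0):
    """Try a price, observe revenue, update the estimate, repeat:
    mostly exploit the best-observed arm, sometimes explore."""
    rng = random.Random(seed)
    revenue_sum = {p: 0.0 for p in price_points}
    pulls = {p: 0 for p in price_points}
    for _ in range(rounds):
        if rng.random() < epsilon or not any(pulls.values()):
            price = rng.choice(price_points)  # explore a random price
        else:                                 # exploit best mean revenue
            price = max(price_points,
                        key=lambda p: revenue_sum[p] / max(pulls[p], 1))
        units = demand_fn(price, rng)         # observe the demand response
        revenue_sum[price] += price * units
        pulls[price] += 1
    return max(price_points, key=lambda p: revenue_sum[p] / max(pulls[p], 1))
```

Against a simulated linear demand curve like `50 - 0.4 * price`, revenue peaks near 62.50, so a grid of [40, 60, 80, 100] should converge on 60. The feedback-loop caveat in the answer is visible here too: the learner is only as fast as the rate at which real outcomes arrive.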
Why do non-food retailers need different AI than food retailers?
Because the underlying problem is different. Food retail optimises baskets — customer persona, weekly rhythm, KVI image, lifetime value. The AI patterns that work there are about cross-product effects within a shop. Non-food optimises categories — assortment turnover, seasonality, competitive positioning per SKU. The AI patterns that work in non-food deal with high SKU count, short lifecycles, and master data that changes mid-season. Generic "AI pricing" tools that don't handle dynamic master data fail in non-food by year two.
What does explainable AI mean in pricing software?
Explainable AI shows you why the model made a specific decision. In pricing, this means three things visible alongside every recommendation: (1) feature attribution — which inputs drove this decision, (2) confidence — how certain the model is, (3) safeguards — which constraints (margin floor, competitor cap, compliance) were applied. An "Insights" dashboard that shows post-hoc explanations is not the same as native explainability — the difference is whether explanations come from the model or from a separate process trying to explain it.
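The three elements can be sketched as a single recommendation payload: attribution, confidence, and safeguards travel with the price rather than living in a separate dashboard. Field names are illustrative, not any vendor's schema.

```python
from dataclasses import dataclass, field

@dataclass
class PriceRecommendation:
    """One recommendation with its explanation attached."""
    sku: str
    price: float
    confidence_interval: tuple       # e.g. a 90% interval on the price
    feature_attribution: dict        # input name -> contribution share
    safeguards_applied: list = field(default_factory=list)

    def top_driver(self):
        """Which input moved this recommendation the most."""
        return max(self.feature_attribution, key=self.feature_attribution.get)
```

If the platform can emit something like this per recommendation, explanations come from the model itself; if explanations only exist as a separate report, that's the post-hoc case the answer warns about.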
When does it make sense to use rules instead of AI?
For predictable, auditable, compliance-bound decisions. Margin floor enforcement: rules. Promotional freeze-out (locking promo SKUs from automated repricing before campaigns): rules. EU Omnibus Directive compliance (30-day lowest-price reference): rules. MAP (minimum advertised price) enforcement: rules. Anywhere the right answer is "always do X when Y is true," rules are simpler, faster, and easier to audit than AI. The mistake is using rules for problems that genuinely need pattern recognition — like estimating elasticity across thousands of SKUs.
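The Omnibus-style reference rule is simple enough to sketch directly: take the lowest price in the prior 30-day window and compute the advertised discount against it. A production rule also handles missing days and country-specific variants; this is the core logic only, with the history assumed to be newest-last daily prices.

```python
def omnibus_reference_price(price_history, days=30):
    """Lowest price charged in the prior `days` daily entries,
    with `price_history` ordered oldest to newest."""
    return min(price_history[-days:])

def discount_vs_reference(promo_price, price_history):
    """Advertised discount (%) measured against the reference price,
    as the Omnibus Directive requires, not against yesterday's price."""
    ref = omnibus_reference_price(price_history)
    return round((ref - promo_price) / ref * 100, 1)
```

This is exactly the "always do X when Y is true" shape: no training data, no confidence intervals, and the audit question is answered by reading the code.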
What does it mean if a pricing platform offers an MCP server?
MCP (Model Context Protocol) servers let your organisation's own AI agents — internal Claude, ChatGPT Enterprise, Copilot, custom agents — query the pricing platform directly. A category manager can ask the company AI assistant a natural-language pricing question and get the model's real reasoning back, without exporting CSVs or pasting screenshots. The data stays inside your authentication boundary, the access is audit-logged, and the pricing platform controls what the AI can read or do. Most vendors don't offer this yet; the ones who do are positioning pricing as a participant in broader retail workflows rather than a separate tool.
Ready to see what fast
time-to-value pricing
software looks like?
The demo runs on your data, not a sample dataset. Twenty minutes. Real numbers.
Book a demo