As distributed energy scales, software platforms are increasingly expected to do more than store information or automate workflows. They’re asked to support real decisions — repeatedly, consistently, and, importantly, with imperfect inputs.
From a technology perspective, this creates a unique challenge. Building systems that work when data is clean and complete is relatively simple. Building systems that help people make informed, confident decisions when the data is partial, inconsistent, or arrives over time (which is almost always the case in distributed energy) is anything but.
Over the past year, we’ve learned that doing this well depends less on clever algorithms and more on how decision systems are designed at a foundational level. In particular: how they handle incomplete data, how they make similar assets comparable when described in different ways, and how they can be audited over time.
Irregular data isn’t a failure — it’s the norm
In distributed energy, project data comes from many parties — developers, operators, financiers, and digital systems — each describing similar assets, systems and devices in different ways. Two projects might be fundamentally comparable, yet be represented using different metrics, data formats, or terminology. Simple examples include (the first is reconciled in a short sketch after this list):
- Installed capacity (kWp) vs panel count and wattage
- Estimated annual generation vs monthly predicted output
- Performance ratio vs uptime percentage
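To make the first of those concrete, here’s a minimal sketch (the field names are hypothetical, not any real provider schema) of deriving one canonical capacity figure from either representation:

```python
def capacity_kwp(record: dict) -> float:
    """Derive installed capacity in kWp from whichever fields a project reports.

    The field names here are hypothetical; real feeds differ per provider.
    """
    if "installed_capacity_kwp" in record:
        return float(record["installed_capacity_kwp"])
    if {"panel_count", "panel_watts"} <= record.keys():
        # Panel count x per-panel wattage, converted from watts to kilowatts.
        return record["panel_count"] * record["panel_watts"] / 1000
    raise ValueError("no recognised capacity fields present")

# Two descriptions of the same ~50 kWp system:
print(capacity_kwp({"installed_capacity_kwp": 50.0}))          # 50.0
print(capacity_kwp({"panel_count": 125, "panel_watts": 400}))  # 50.0
```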
There’s also the case where both projects report the same calculated value, e.g. projected IRR, but the underlying calculations use different assumptions (sketched in code after this list), e.g.
- Project A models flat degradation, no downtime and stable tariffs
- Project B models early degradation, maintenance downtime and annual tariff growth
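To see how far the same headline metric can drift, here’s an illustrative sketch with invented figures: two models of the same plant, identical except for the assumptions listed above:

```python
def irr(flows, lo=-0.99, hi=1.0):
    """Internal rate of return via bisection on NPV (illustrative, not production-grade)."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid  # NPV still positive: the discount rate can go higher
        else:
            hi = mid
    return (lo + hi) / 2

def model_cashflows(capex, year1_kwh, tariff, years, degradation, uptime, tariff_growth):
    """Build annual cashflows under a given set of modelling assumptions."""
    flows = [-capex]
    for y in range(years):
        energy = year1_kwh * (1 - degradation) ** y * uptime
        price = tariff * (1 + tariff_growth) ** y
        flows.append(energy * price)
    return flows

# Same plant, same capex, same year-one yield; only the assumptions differ.
a = model_cashflows(capex=60_000, year1_kwh=80_000, tariff=0.12, years=15,
                    degradation=0.005, uptime=1.00, tariff_growth=0.00)  # Project A
b = model_cashflows(capex=60_000, year1_kwh=80_000, tariff=0.12, years=15,
                    degradation=0.015, uptime=0.97, tariff_growth=0.02)  # Project B

print(f"Project A projected IRR: {irr(a):.1%}")
print(f"Project B projected IRR: {irr(b):.1%}")
```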
Individually, each project makes sense. There’s no obvious “missing” data.
But taken together, those differences matter: the same two projects become difficult to compare or aggregate without intervention.
A common instinct in software systems is to treat this as a temporary problem to be fixed before decisions can happen. In practice, that often leads to false certainty: fields get filled too early, assumptions disappear into free text, and confidence is implied where it shouldn’t be.
One of the most important principles of the technology we build at Odyssey is that good decision systems don’t hide uncertainty — they make it visible. Partial data can still be evaluated. Differing datasets can still be compared. They just need closer attention.
Standardisation enables comparison, not uniformity
A large part of our technical work involves making data comparable across assets, projects, and portfolios. We apply logic and modelling to turn raw data into structured, standardised datasets that make meaningful comparison possible.
This means:
- Mapping diverse inputs into standardised models
- Normalising terminology while preserving original meaning
- Keeping source data alongside its standardised representation
Done well, this creates a consistent internal language that systems can reason over.
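As a rough picture of that internal language (a hypothetical schema, not our production model), each standardised value can carry its original representation and provenance with it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StandardisedMetric:
    """One normalised value, with its original representation kept alongside."""
    name: str           # canonical metric name, e.g. "installed_capacity_kwp"
    value: float        # value in the canonical unit
    unit: str           # canonical unit
    source_field: str   # field name exactly as it appeared in the input
    source_value: str   # raw value exactly as received
    source_system: str  # which party or feed supplied it

# The same underlying fact arriving in two vocabularies, mapped to one canonical form:
from_developer = StandardisedMetric(
    "installed_capacity_kwp", 50.0, "kWp",
    source_field="installed_capacity_kwp", source_value="50.0",
    source_system="developer_feed")
from_operator = StandardisedMetric(
    "installed_capacity_kwp", 50.0, "kWp",
    source_field="panel_count,panel_watts", source_value="125,400",
    source_system="operator_export")
```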
Auditability has to be built in, not added later
When visualisations and decisions are built on standardised data, being able to see the audit trail behind any figure matters even more. Auditability isn’t something you can bolt on at the end with reports or exports: every standardised value needs to stay traceable to its source and to the transformations applied to it, which makes auditability core to how the system itself is designed.
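One way to picture this (a sketch under assumed names, not a description of our implementation) is a value type that logs every transformation applied on the way from raw input to reported figure:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditedValue:
    """A value that records every transformation applied to reach a final figure."""
    value: float
    trail: list = field(default_factory=list)

    def apply(self, step: str, fn):
        """Apply a transformation and log what changed, and when."""
        before = self.value
        self.value = fn(self.value)
        self.trail.append({"step": step, "before": before, "after": self.value,
                           "at": datetime.now(timezone.utc).isoformat()})
        return self

# Trace a panel-count input all the way to a standardised capacity figure:
capacity = AuditedValue(125 * 400)  # watts, from panel count x per-panel wattage
capacity.apply("convert W to kWp", lambda w: w / 1000)
for entry in capacity.trail:
    print(entry["step"], entry["before"], "->", entry["after"])
```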
How we’re leveraging AI
AI is genuinely useful in this environment. It can help extract structure from documents, reconcile inconsistencies, summarise large volumes of information and, importantly, spot trends or patterns that would otherwise be missed.
We leverage AI not to make decisions or manipulate data, but to surface what would otherwise be hard to see. We use it to highlight irregularities, anomalies and patterns so that we can make more informed decisions about how to resolve them.
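As a deliberately simple illustration of surfacing rather than deciding (a plain statistical baseline, far simpler than a production model), flagging performance ratios that sit far from the portfolio norm hands the judgement to a person:

```python
from statistics import mean, stdev

def flag_for_review(ratios: dict[str, float], z_threshold: float = 2.0) -> list[str]:
    """Return project IDs whose performance ratio sits far from the portfolio mean."""
    mu, sigma = mean(ratios.values()), stdev(ratios.values())
    return [pid for pid, r in ratios.items()
            if sigma and abs(r - mu) / sigma > z_threshold]

# Invented figures: nine healthy sites and one that warrants a closer look.
portfolio = {"site_a": 0.81, "site_b": 0.79, "site_c": 0.80, "site_d": 0.78,
             "site_e": 0.82, "site_f": 0.80, "site_g": 0.79, "site_h": 0.81,
             "site_i": 0.80, "site_j": 0.52}
print(flag_for_review(portfolio))  # ['site_j'] is surfaced for a human to investigate
```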
Beyond data and into decision making
As distributed energy continues to grow, we see the potential for software to act not just as a way to visualise data, but to support real decision making — from how assets are compared across a portfolio, to how impact is understood, attributed, and enhanced over time.
We can’t make the raw data perfect — but we can build intelligence that turns it into insights capable of driving greater impact across the industry than ever before.