The workflow engine for energy operations.
Not another SaaS dashboard. A programmable pipeline that turns documents into structured data, applies your rules, and produces auditable outputs.
What it replaces
The tools you're currently stitching together
Gridline replaces the fragile workflows that energy operators rely on. No more duct-taped spreadsheets, no more copy-paste from PDFs, no more wondering if a statement went out. Instead, you get a deterministic pipeline that handles document ingestion, data extraction, rule application, and output generation — with full audit trails and the ability to rerun any step.
allocate_by: "kwh_usage"
fallback: "even_split"
validate: required_fields

How it works
Connectors → Workflow Engine → Templates → Data Model
Each use case is just a config + a few step functions, not a new app
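As a sketch of what "config + step functions" could look like, here is a single allocation step driven by a config like the fragment shown above. All names, types, and shapes are illustrative assumptions, not Gridline's real API:

```typescript
// Hypothetical config shape mirroring the allocate_by / fallback fragment.
interface WorkflowConfig {
  allocate_by: "kwh_usage" | "even_split";
  fallback: "even_split";
}

interface Tenant { id: string; kwh_usage?: number }

const config: WorkflowConfig = { allocate_by: "kwh_usage", fallback: "even_split" };

// One step function: split a bill total across tenants per the config.
// Falls back to an even split when usage data is missing.
function allocate(total: number, tenants: Tenant[], cfg: WorkflowConfig): Map<string, number> {
  const usable = tenants.every((t) => typeof t.kwh_usage === "number");
  const mode = cfg.allocate_by === "kwh_usage" && usable ? "kwh_usage" : cfg.fallback;
  const out = new Map<string, number>();
  if (mode === "kwh_usage") {
    const sum = tenants.reduce((s, t) => s + t.kwh_usage!, 0);
    for (const t of tenants) out.set(t.id, (total * t.kwh_usage!) / sum);
  } else {
    for (const t of tenants) out.set(t.id, total / tenants.length); // even_split fallback
  }
  return out;
}

const shares = allocate(300, [{ id: "a", kwh_usage: 200 }, { id: "b", kwh_usage: 100 }], config);
```

The point of the pattern: a new use case swaps the config and the step function, while ingestion, auditing, and delivery stay shared.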
The Pipeline
Six stages from document to delivery
These six primitives are the building blocks Gridline reuses for every workflow — billing, interconnection, financing, regulatory filings. Unlike typical document automation tools that pass raw model output straight downstream, Gridline pairs each probabilistic step with a deterministic check: LLM-powered extraction with structured validation, rule-based computation with version control, and templated generation with full lineage tracking.
No other platform in energy operations combines deterministic extraction, declarative rule engines, and workflow-as-data in a single pipeline. This isn't RPA. This isn't generic ETL. It's purpose-built infrastructure for regulated industries handling financial documents.
Upload: PDF ingestion, file validation, S3 storage → bill.pdf
Parse: LLM extraction, schema validation, confidence scoring → parsed_data.json
Compute: Allocation rules, rate schedules, credit distribution → allocations.json
Deliver: Email dispatch, portal tokens, delivery confirmation → delivered: true
Invoice: PayPal sync, payment links, webhook handlers → invoice_id
Generate: PDF templating, statement rendering, branding → statement.pdf

Tech Stack
Production infrastructure, not prototypes
Where We're Pushing
The unrefined edges. The things nobody else is building.
Deterministic Extraction
Shipped: LLM-powered PDF parsing with structured output validation. Not unchecked model guessing: every extracted field is schema-validated, confidence-scored, and backed by fallback rules.
Most document AI is black-box. We expose confidence scores, let you set thresholds, and provide human-in-the-loop for low-confidence extractions.
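A minimal sketch of the threshold-and-review routing described above. The threshold value, field names, and shapes are assumptions for illustration:

```typescript
// Route each extraction by confidence: accept above the threshold,
// queue the rest for human review.
interface Extraction { field: string; value: string; confidence: number }

const THRESHOLD = 0.9; // assumed operator-configurable cutoff

function route(items: Extraction[]) {
  const accepted: Extraction[] = [];
  const needsReview: Extraction[] = [];
  for (const item of items) {
    (item.confidence >= THRESHOLD ? accepted : needsReview).push(item);
  }
  return { accepted, needsReview };
}

const { accepted, needsReview } = route([
  { field: "kwh_usage", value: "1240", confidence: 0.97 },
  { field: "meter_id", value: "A-102?", confidence: 0.54 },
]);
```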
Workflow-as-Data
Shipped: Runs are first-class data objects, not ephemeral processes. Every step, artifact, and decision is queryable, replayable, and auditable.
Traditional ETL pipelines are fire-and-forget. We treat workflow execution as structured data you can inspect, debug, and replay.
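One way to picture "workflow-as-data": when a run is a plain record rather than a process, replay is just a query over its steps. The record shape below is a hypothetical sketch, not Gridline's actual schema:

```typescript
// A run persisted as structured data: each step records its inputs and outputs.
interface StepRecord {
  step: string;
  input: string;
  output: string;
  startedAt: string;
}

interface Run {
  id: string;
  steps: StepRecord[];
}

const run: Run = {
  id: "run_123",
  steps: [
    { step: "parse", input: "bill.pdf", output: "parsed_data.json", startedAt: "2024-01-01T00:00:00Z" },
    { step: "compute", input: "parsed_data.json", output: "allocations.json", startedAt: "2024-01-01T00:00:05Z" },
  ],
};

// "Replay from a step" is a slice of the record plus re-execution.
function stepsToReplay(r: Run, from: string): StepRecord[] {
  const idx = r.steps.findIndex((s) => s.step === from);
  return idx === -1 ? [] : r.steps.slice(idx);
}

const replay = stepsToReplay(run, "compute");
```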
Template Composition
Building: Each use case is just a config + step functions. New workflows are assembled from primitives, not built from scratch.
We're building towards a workflow DSL where operators can compose new automations without engineering support.
Provenance Graphs
Research: Every output artifact links back to its source documents and transformation steps. Full lineage for any data point.
When an auditor asks "where did this number come from?" — you can show the exact PDF page, extraction rule, and calculation.
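A provenance graph can be as simple as parent pointers between artifacts; walking them answers the auditor's question. All node ids and details here are made up for illustration:

```typescript
// Each node records what it is and which node it was derived from.
interface ProvNode {
  id: string;
  kind: "source" | "extraction" | "calculation";
  detail: string;
  parent?: string;
}

const graph: Record<string, ProvNode> = {
  "stmt.total": { id: "stmt.total", kind: "calculation", detail: "sum of allocations", parent: "alloc.kwh" },
  "alloc.kwh": { id: "alloc.kwh", kind: "extraction", detail: "kwh_usage field, rule v3", parent: "bill.p2" },
  "bill.p2": { id: "bill.p2", kind: "source", detail: "bill.pdf, page 2" },
};

// Walk parent pointers from an output back to its source document.
function lineage(id: string): ProvNode[] {
  const chain: ProvNode[] = [];
  for (let cur: ProvNode | undefined = graph[id]; cur; cur = cur.parent ? graph[cur.parent] : undefined) {
    chain.push(cur);
  }
  return chain;
}

const trail = lineage("stmt.total"); // calculation, then extraction, then source page
```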
Security
Built for regulated industries handling financial data
Row-Level Security
Every database query filtered by tenant. No cross-customer data leakage possible at the query layer.
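The idea, reduced to a sketch: reads only exist through a tenant-scoped accessor, so an unscoped query path never exists. The table and API here are hypothetical:

```typescript
// In-memory stand-in for a tenant-partitioned table.
interface Row { tenantId: string; amount: number }

const table: Row[] = [
  { tenantId: "acme", amount: 120 },
  { tenantId: "globex", amount: 300 },
];

// Every read goes through this scoped accessor; callers never touch `table`.
function queryForTenant(tenantId: string): Row[] {
  return table.filter((r) => r.tenantId === tenantId);
}

const acmeRows = queryForTenant("acme");
```

In Postgres this is typically enforced in the database itself with row-level security policies (`CREATE POLICY ... USING (tenant_id = ...)`), so even a buggy application query cannot cross tenants.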
Signed Portal Links
Time-limited, cryptographically signed URLs. Tokens expire, can be revoked, and are audit-logged.
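A minimal sketch of time-limited signed links using an HMAC, assuming Node's built-in crypto module. The path, parameter names, and key handling are simplifications:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = "portal-signing-key"; // in production, from a key manager, not a literal

// Sign path + expiry; the signature binds both so neither can be altered.
function signLink(path: string, expiresAt: number): { url: string; sig: string } {
  const sig = createHmac("sha256", SECRET).update(`${path}|${expiresAt}`).digest("hex");
  return { url: `${path}?exp=${expiresAt}&sig=${sig}`, sig };
}

function verifyLink(path: string, expiresAt: number, sig: string, now: number): boolean {
  if (now > expiresAt) return false; // token expired
  const expected = createHmac("sha256", SECRET).update(`${path}|${expiresAt}`).digest("hex");
  const a = Buffer.from(sig, "hex");
  const b = Buffer.from(expected, "hex");
  // Constant-time compare to avoid leaking signature bytes via timing.
  return a.length === b.length && timingSafeEqual(a, b);
}

const exp = 1_700_000_000;
const { sig } = signLink("/portal/stmt_42", exp);
```

Revocation would add a server-side denylist check on top of this; the signature alone only covers expiry and tampering.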
Encryption at Rest
All artifacts encrypted with AES-256. Database encrypted. Backups encrypted. Keys rotated.
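For concreteness, AES-256 artifact encryption might look like the following with Node's built-in crypto module (AES-256-GCM, which also authenticates the ciphertext). Key management is deliberately simplified here:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const key = randomBytes(32); // 256-bit key; in production from a KMS, with rotation
const iv = randomBytes(12);  // GCM nonce; must be fresh per encrypted artifact

function encrypt(plaintext: Buffer): { ciphertext: Buffer; tag: Buffer } {
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { ciphertext, tag: cipher.getAuthTag() }; // tag detects tampering on decrypt
}

function decrypt(ciphertext: Buffer, tag: Buffer): Buffer {
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
}

const { ciphertext, tag } = encrypt(Buffer.from("statement.pdf bytes"));
const roundTrip = decrypt(ciphertext, tag).toString();
```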
Full Audit Trail
Every action logged with actor, timestamp, IP. Immutable audit log. Compliance-ready exports.
Input Validation
Zod schemas on every endpoint. File type validation. Size limits. Sanitization.
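The page mentions Zod for endpoint schemas; the dependency-free sketch below shows the same idea for one hypothetical upload endpoint. The size limit and allowed extensions are assumptions:

```typescript
// Validate an untyped request body into a known shape, or reject it.
interface UploadRequest { filename: string; sizeBytes: number }

const MAX_SIZE = 25 * 1024 * 1024; // assumed 25 MB limit
const ALLOWED = [".pdf"];          // assumed file-type allowlist

function validateUpload(body: unknown): UploadRequest | null {
  if (typeof body !== "object" || body === null) return null;
  const { filename, sizeBytes } = body as Record<string, unknown>;
  if (typeof filename !== "string" || typeof sizeBytes !== "number") return null;
  if (!ALLOWED.some((ext) => filename.toLowerCase().endsWith(ext))) return null;
  if (sizeBytes <= 0 || sizeBytes > MAX_SIZE) return null;
  return { filename, sizeBytes };
}

const ok = validateUpload({ filename: "bill.pdf", sizeBytes: 1024 });
const bad = validateUpload({ filename: "bill.exe", sizeBytes: 1024 });
```

With Zod the same checks collapse into a declared schema, and the parsed type is inferred rather than hand-written.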
Network Security
TLS 1.3 everywhere. HSTS. CSP headers. Rate limiting. DDoS protection via Cloudflare.
Runs & Artifacts
Everything is data. Everything is replayable.
See it run.
No sales pitch. No signup. Just the product running on sample data.