Technology stack engineered for liquidation certainty.

From low-latency blockchain data ingestion to on-chain oracle delivery, the LiquidatedLabs technology stack prioritizes determinism, explainability, and observability so risk teams can trust every signal.

Layer 01: Data Ingestion

Real-Time Monitoring Layer

Our monitoring infrastructure ingests blockchain data from multiple sources with sub-500ms latency, normalizes protocol-specific state representations, and stores both hot and cold data for instant retrieval and historical analysis.

High-Performance Ingestion Mesh

Rust and TypeScript agents subscribe to node WebSocket connections, mempool streams, and subgraph APIs, reconciling data across sources in under 500ms. Each chain adapter is optimized for the specific virtual machine (EVM, WASM, Solana runtime) and includes circuit breakers that enforce latency budgets and prevent cascading failures.

  • Rust-based high-performance adapters for critical paths
  • TypeScript agents for rapid protocol integration
  • Chain-specific adapters per VM architecture
  • Latency budgets enforced via circuit breakers
  • Automatic failover and redundancy systems
  • Multi-source data reconciliation for accuracy
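
To make the latency-budget mechanics concrete, the sketch below shows one way a per-source circuit breaker could wrap a chain adapter in TypeScript. The class names, the 500ms budget, and the breach thresholds are illustrative assumptions, not the production implementation.

```typescript
// Hypothetical sketch of a latency-budgeted chain adapter; names and
// thresholds are illustrative, not the production implementation.
interface ChainEvent {
  chainId: number;
  blockNumber: number;
  payload: unknown;
  observedAt: number; // ms since epoch, when the adapter saw the event
}

class LatencyCircuitBreaker {
  private breachCount = 0;
  private open = false;

  constructor(
    private readonly budgetMs: number,   // e.g. the 500ms reconciliation budget
    private readonly maxBreaches: number // consecutive breaches before opening
  ) {}

  record(latencyMs: number): void {
    if (latencyMs > this.budgetMs) {
      this.breachCount += 1;
      if (this.breachCount >= this.maxBreaches) this.open = true; // stop routing to this source
    } else {
      this.breachCount = 0; // a healthy sample resets the streak
    }
  }

  isOpen(): boolean {
    return this.open;
  }
}

// Usage: wrap each source (WebSocket, mempool stream, subgraph) in its own breaker.
const breaker = new LatencyCircuitBreaker(500, 3);

function onEvent(event: ChainEvent): void {
  const latencyMs = Date.now() - event.observedAt;
  breaker.record(latencyMs);
  if (breaker.isOpen()) {
    // fail over to a redundant source instead of letting lag cascade downstream
    return;
  }
  // ...forward the event to the normalization engine
}
```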

State Normalization Engine

Position states from different protocols are standardized using unified schemas that capture collateral amounts, debt levels, liquidation thresholds, oracle price references, and volatility context. This normalization enables meaningful cross-protocol comparisons and portfolio-level risk aggregation.

  • Unified position state schemas
  • Protocol-specific adapter normalization
  • Cross-protocol health factor conversion
  • Oracle price source tracking
  • Volatility regime classification
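
As an illustration of what such a unified schema could look like, the TypeScript shape below captures the fields listed above; the exact field names and types are assumptions made for the example.

```typescript
// Illustrative shape of a normalized position record; field names are
// assumptions based on the description above, not the exact schema.
type VolatilityRegime = "calm" | "elevated" | "stressed";

interface NormalizedPosition {
  protocol: string;             // e.g. "aave-v3", "maker"
  chainId: number;
  account: string;
  collateralUsd: number;        // collateral value in USD terms
  debtUsd: number;              // outstanding debt in USD terms
  liquidationThreshold: number; // protocol-specific threshold, normalized to [0, 1]
  healthFactor: number;         // collateralUsd * liquidationThreshold / debtUsd
  oracleSource: string;         // which price feed produced the valuation
  volatilityRegime: VolatilityRegime;
  observedAt: number;           // unix ms timestamp of the underlying state
}

// With one schema per position, cross-protocol aggregation is a plain reduction:
function portfolioDebt(positions: NormalizedPosition[]): number {
  return positions.reduce((sum, p) => sum + p.debtUsd, 0);
}
```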

Hybrid Storage Architecture

Columnar storage formats (Parquet + Delta Lake) back historical queries and analytics, while in-memory Redis clusters serve hot data for instant retrieval. This hybrid approach balances query performance with storage costs while maintaining full historical audit trails.

  • Columnar storage for historical analytics
  • In-memory Redis for hot data access
  • Delta Lake for time-travel queries
  • Automatic data tiering and archival
  • Full audit trail preservation
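
A minimal sketch of the hot/cold read path, assuming an ioredis client for the hot tier and a hypothetical ColdStore interface standing in for the Parquet/Delta Lake query layer:

```typescript
import Redis from "ioredis";

// Sketch of the hot/cold read path described above. The cold-store client is a
// hypothetical stand-in for the Parquet/Delta Lake query layer.
interface ColdStore {
  queryPosition(key: string): Promise<string | null>;
}

const redis = new Redis(); // hot tier: in-memory, sub-millisecond reads

async function readPositionState(key: string, cold: ColdStore): Promise<string | null> {
  // 1. Serve from the hot tier when possible.
  const hot = await redis.get(key);
  if (hot !== null) return hot;

  // 2. Fall back to the columnar store for historical or evicted records.
  const historical = await cold.queryPosition(key);
  if (historical !== null) {
    // 3. Re-warm the hot tier with a short TTL so repeated reads stay fast.
    await redis.set(key, historical, "EX", 60);
  }
  return historical;
}
```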

Layer 02: Risk Computation

Advanced Risk Engine

The risk computation layer applies deterministic stress tests, runs Monte Carlo simulations, and generates ETA forecasts using models that account for volatility regimes, correlation structures, and market microstructure effects.

Deterministic Stress Testing Framework

Applies static shocks to collateral valuations with asset-specific liquidity penalties to mimic panic scenarios. The framework supports configurable shock levels, correlation overlays for systemic events, and multi-asset collateral basket stress testing. Each stress scenario produces graded warnings that feed into dashboards and automated bot systems.
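
A minimal sketch of a single deterministic shock, assuming USD-normalized positions and illustrative shock sizes, liquidity penalties, and warning bands:

```typescript
// Minimal sketch of one deterministic shock; shock sizes, penalties, and
// warning bands are illustrative assumptions, not production parameters.
interface StressInput {
  collateralUsd: number;
  debtUsd: number;
  liquidationThreshold: number; // normalized to [0, 1]
}

type WarningGrade = "ok" | "watch" | "critical";

function applyShock(
  position: StressInput,
  priceShock: number,       // e.g. 0.30 for a 30% drawdown
  liquidityPenalty: number  // asset-specific haircut for panic-selling depth
): { shockedHealthFactor: number; grade: WarningGrade } {
  const shockedCollateral =
    position.collateralUsd * (1 - priceShock) * (1 - liquidityPenalty);
  const shockedHealthFactor =
    (shockedCollateral * position.liquidationThreshold) / position.debtUsd;

  const grade: WarningGrade =
    shockedHealthFactor < 1.0 ? "critical" :
    shockedHealthFactor < 1.2 ? "watch" : "ok";

  return { shockedHealthFactor, grade };
}
```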

Monte Carlo Simulation Planner

Generates stochastic price paths with regime-switching volatility models that adapt to market conditions. The probability of liquidation per asset pair is recalculated every 15 seconds using thousands of simulation runs. Adaptive time steps adjust based on market speed, and correlation-aware simulations account for multi-asset positions.
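
The simplified sketch below illustrates the idea with a two-regime volatility switch over geometric Brownian paths; the regime parameters, step sizes, and run count are illustrative only, and the production planner uses richer regime and correlation models.

```typescript
// Simplified Monte Carlo sketch: two-regime volatility, geometric Brownian
// paths, liquidation counted when a path touches the liquidation price.
function liquidationProbability(
  spot: number,
  liquidationPrice: number,
  horizonHours: number,
  runs = 10_000,
  calmVol = 0.5,       // annualized volatility in the calm regime
  stressedVol = 1.5,   // annualized volatility in the stressed regime
  pSwitch = 0.02       // per-step chance of flipping regime
): number {
  const stepsPerHour = 4;
  const dt = 1 / (365 * 24 * stepsPerHour); // years per step
  let breaches = 0;

  for (let run = 0; run < runs; run++) {
    let price = spot;
    let stressed = false;
    for (let step = 0; step < horizonHours * stepsPerHour; step++) {
      if (Math.random() < pSwitch) stressed = !stressed;
      const vol = stressed ? stressedVol : calmVol;
      // Box-Muller draw for a standard normal increment
      const z = Math.sqrt(-2 * Math.log(Math.random() || 1e-12)) *
                Math.cos(2 * Math.PI * Math.random());
      price *= Math.exp(-0.5 * vol * vol * dt + vol * Math.sqrt(dt) * z);
      if (price <= liquidationPrice) { breaches++; break; }
    }
  }
  return breaches / runs;
}
```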

ETA Forecasting with Hazard Models

Uses hazard models to estimate when liquidation probability will cross configurable thresholds, enabling proactive rebalancing before positions become critical. The forecaster provides real-time countdown timers, confidence intervals, and historical accuracy tracking for continuous model calibration.
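
As a simplified illustration of the mechanics, a constant hazard rate h gives liquidation probability P(t) = 1 - exp(-h*t) by time t, so the threshold p is crossed at t* = -ln(1 - p) / h. The sketch below assumes this constant-hazard form purely for exposition; the production forecaster uses time-varying hazards.

```typescript
// Constant-hazard sketch of the ETA forecast: with hazard rate h (per hour),
// the chance of liquidation by time t is P(t) = 1 - exp(-h*t), so the
// threshold-crossing time is t* = -ln(1 - threshold) / h.
function liquidationEtaHours(hazardPerHour: number, probabilityThreshold: number): number {
  if (hazardPerHour <= 0) return Infinity; // no measurable risk
  return -Math.log(1 - probabilityThreshold) / hazardPerHour;
}

// Example: with a 5%-per-hour hazard rate, a 50% threshold is crossed in ~13.9 hours.
const etaHours = liquidationEtaHours(0.05, 0.5);
```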

Explainability and Lineage

Every alert includes complete lineage: oracle source identification, stress scenario parameters, volatility regime classification, and model confidence scores. This explainability is essential for risk teams who need to understand why a position is flagged and what actions are recommended.

  • Complete alert lineage tracking
  • Oracle source attribution
  • Model confidence scoring
  • Scenario parameter documentation
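
The hypothetical payload below illustrates how that lineage could travel with each alert; the field names are assumptions rather than the exact wire format.

```typescript
// Illustrative alert payload with the lineage fields described above;
// names are assumptions, not the exact wire format.
interface RiskAlert {
  positionId: string;
  protocol: string;
  severity: "info" | "warning" | "critical";
  lineage: {
    oracleSource: string;        // which price feed produced the valuation
    stressScenario: {            // parameters of the scenario that fired
      priceShock: number;
      liquidityPenalty: number;
    };
    volatilityRegime: "calm" | "elevated" | "stressed";
    modelVersion: string;
    modelConfidence: number;     // 0..1 confidence score
  };
  recommendedActions: string[];
  emittedAt: number;             // unix ms timestamp
}
```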

Observability and Monitoring

Prometheus metrics and OpenTelemetry traces expose ingestion lag, model runtimes, alert fan-out performance, and system health indicators. This observability enables proactive system maintenance and performance optimization while ensuring service level agreements are met.

  • Prometheus metrics export
  • OpenTelemetry distributed tracing
  • Real-time system health dashboards
  • Alert performance monitoring
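
As a small example of the metrics side, the sketch below registers an ingestion-lag histogram with the prom-client library; the metric name, labels, and bucket boundaries are illustrative assumptions.

```typescript
import client from "prom-client";

// Sketch of the kind of metric exported for ingestion lag; metric name and
// bucket boundaries are illustrative.
const ingestionLag = new client.Histogram({
  name: "ingestion_lag_ms",
  help: "Delay between on-chain event time and normalized state availability",
  labelNames: ["chain", "source"],
  buckets: [50, 100, 250, 500, 1000, 2500], // 500ms is the stated budget
});

// Record one observation per reconciled event.
ingestionLag.labels("ethereum", "websocket").observe(180);

// Expose everything on a /metrics endpoint for Prometheus to scrape.
async function metricsHandler(): Promise<string> {
  return client.register.metrics();
}
```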

Layer 03: Signal Delivery

Signals and Multi-Channel Delivery

Risk signals are delivered through multiple channels optimized for different use cases, from low-latency bot feeds to comprehensive institutional dashboards.

Bot-Grade API Infrastructure

gRPC and WebSocket endpoints distribute sorted liquidation candidates with profit estimates, gas cost projections, and execution difficulty scores. The API includes rate limiting, authentication, and historical data access for backtesting strategies.

  • gRPC for high-performance streaming
  • WebSocket for real-time updates
  • REST API for standard integrations
  • Rate limiting and authentication
  • Historical data access
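
A hedged sketch of how a liquidation bot might consume the WebSocket feed and rank candidates by net profit; the endpoint URL, authentication scheme, and message shape shown here are assumptions for illustration only.

```typescript
import WebSocket from "ws";

// Hypothetical consumer of the candidate feed; URL and payload shape are
// placeholders, not the documented API.
interface LiquidationCandidate {
  positionId: string;
  protocol: string;
  estimatedProfitUsd: number;
  estimatedGasUsd: number;
  executionDifficulty: number; // 0 (easy) .. 1 (hard)
}

const ws = new WebSocket("wss://api.example.com/v1/candidates", {
  headers: { Authorization: `Bearer ${process.env.API_KEY}` },
});

ws.on("message", (raw) => {
  const candidates: LiquidationCandidate[] = JSON.parse(raw.toString());
  // Rank by net profit after gas, skipping anything unprofitable.
  const ranked = candidates
    .filter((c) => c.estimatedProfitUsd - c.estimatedGasUsd > 0)
    .sort((a, b) =>
      (b.estimatedProfitUsd - b.estimatedGasUsd) -
      (a.estimatedProfitUsd - a.estimatedGasUsd)
    );
  // ...hand the top candidates to the execution engine
  console.log(ranked.slice(0, 5));
});
```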

Human-Focused Dashboards

Professional user interfaces showcase portfolio overlays, circuit-breaker recommendations, and defense playbooks that guide analysts and treasury managers. Export capabilities enable reporting and integration with existing risk management workflows.

  • Portfolio risk visualization
  • Circuit-breaker recommendations
  • Defense action playbooks
  • Export and reporting tools
  • Custom alert configuration

Notification and Integration Fabric

Webhook endpoints, PagerDuty integration, Slack notifications, email alerts, and custom hook systems tie risk events directly into existing incident management workflows, enabling 24/7 readiness without constant manual monitoring.

  • Custom webhook endpoints
  • PagerDuty and Opsgenie integration
  • Slack channel notifications
  • Email alerts with digest options
  • Custom integration API
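
As a small integration example, the sketch below relays a risk event to a Slack incoming webhook; the event shape and environment variable are placeholders, not the documented payload format.

```typescript
// Sketch of a relay from risk events to a Slack incoming webhook; the event
// shape and webhook URL are placeholders for illustration.
interface RiskEvent {
  positionId: string;
  severity: "info" | "warning" | "critical";
  message: string;
}

const SLACK_WEBHOOK_URL = process.env.SLACK_WEBHOOK_URL ?? "";

async function relayToSlack(event: RiskEvent): Promise<void> {
  // Slack incoming webhooks accept a simple JSON body with a text field.
  await fetch(SLACK_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      text: `[${event.severity.toUpperCase()}] ${event.positionId}: ${event.message}`,
    }),
  });
}
```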

Layer 04: On-Chain Integration

On-Chain Risk Oracle

The on-chain oracle publishes aggregated risk metrics to blockchain networks, enabling smart contracts to automatically respond to systemic risk conditions.

Oracle Publishing Loop

  1. Aggregation: Risk scores, red-zone status indicators, and systemic health metrics from all monitored protocols and positions are aggregated into a single risk vector.
  2. Signing: Updates are cryptographically signed using MPC (Multi-Party Computation) key shares distributed across a validator set, ensuring security and decentralization.
  3. Publication: Signed updates are pushed to oracle smart contracts on each supported blockchain, where protocols can subscribe and automate defensive responses; a simplified sketch of the full loop follows below.
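
The sketch below walks through those three steps in simplified form; the ThresholdSigner and OracleContract interfaces are hypothetical stand-ins for the MPC ceremony and the on-chain publisher, included only to illustrate the flow.

```typescript
// Simplified aggregate -> sign -> publish loop. The signer stands in for the
// distributed MPC ceremony and the contract interface is a hypothetical
// stand-in; both are assumptions for illustration.
interface RiskVector {
  systemicRiskScore: number; // 0..1 aggregate score
  redZone: boolean;          // true when systemic thresholds are breached
  timestamp: number;
}

interface ThresholdSigner {
  sign(payload: Uint8Array): Promise<Uint8Array>; // MPC key shares behind this call
}

interface OracleContract {
  publish(payload: Uint8Array, signature: Uint8Array): Promise<void>;
}

async function publishRiskUpdate(
  vector: RiskVector,
  signer: ThresholdSigner,
  oracles: OracleContract[] // one per supported chain
): Promise<void> {
  // 1. Aggregation: serialize the risk vector into a canonical byte payload.
  const payload = new TextEncoder().encode(JSON.stringify(vector));

  // 2. Signing: the validator set co-signs via MPC key shares.
  const signature = await signer.sign(payload);

  // 3. Publication: push the signed update to each chain's oracle contract.
  await Promise.all(oracles.map((oracle) => oracle.publish(payload, signature)));
}
```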

Protocol Integration Capabilities

Maker emergency modules, Aave risk councils, and DAO insurance vaults hook into the oracle to automatically throttle risk exposure. Integration is designed to be non-invasive, allowing protocols to maintain full control over their risk parameters while benefiting from real-time risk intelligence.

  • Maker emergency module integration
  • Aave risk council hooks
  • DAO treasury automation
  • Protocol-specific adapters
  • Non-invasive integration design

Audit-Ready Architecture

Cryptographic proofs, model versioning, and public changelogs help protocols pass governance proposals and exchange-level due diligence. The oracle maintains full transparency while protecting proprietary risk model details.

  • Cryptographic proof generation
  • Model versioning and changelogs
  • Public audit trails
  • Governance proposal support
  • Exchange compliance documentation