Admin Portal — Restricted Access

Investigation Dashboard

Overview of the evidence corpus, data integrity, and publication status.

Guided Workflow

1. Upload & Categorize — Add evidence files, tag with hypothesis & sector
2. Curate & Organize — Build curated lists, bind to report chapters
3. Data & Visualize — Normalize CSV data, configure timelines & charts
4. Build & Publish — Rebuild manifest, verify integrity, deploy

Sector Distribution

Hypothesis Coverage

Upload New Evidence

📄
Drag & drop files here, or click to browse
Supports HTML, PDF, CSV, JSON, images

    Metadata


    Chain of Custody & Integrity

    Optional but encouraged. Items without verification can still be browsed but will appear in the Evidence Health dashboard.

    Upload a file to compute hash...
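The integrity check above reduces to a file digest. A minimal sketch of the hash computation (the function name and chunk size are illustrative, not the portal's actual code):

```python
import hashlib

def sha256_of_file(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, streaming in chunks
    so large evidence files never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Storing this hex digest alongside the upload is what lets the Evidence Health dashboard flag items whose files no longer match their recorded hash.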

    Manage Evidence

    ID | Title | Hypotheses | Sector | Confidence Tier | Integrity | Actions

    Evidence Health Dashboard

    Items below are missing metadata, local archives, or integrity verification.

    ID | Title | Issues | Actions

    Bookmark Archives

    Browse saved bookmarks and spreadsheet evidence, and cross-reference them against the evidence manifest.

    Curated Lists

    Create named evidence lists and bind them to hypothesis chapters. Featured lists appear as primary evidence callouts in the report.

    Create / Edit List


    Select Evidence Items

    0 selected

    Existing Curated Lists

    Tags selected items as featured, binds hypothesis badges, and sets chapter positions in the manifest.

    Data Normalizer

    Strip Shodan/Censys/Prometheus CSVs down to investigation-essential columns. Choose exactly which fields to keep — raw files stay untouched. Normalized output gets SHA-256 hashes and full chain-of-custody tracking.
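The column-stripping step described above can be sketched in a few lines: read the raw CSV, keep only the selected fields, and hash the normalized output for the chain-of-custody record. Function and parameter names here are illustrative assumptions, not the portal's actual implementation:

```python
import csv
import hashlib
import io

def normalize_csv(raw_text, keep_columns):
    """Strip a CSV down to the chosen columns. The raw input is never
    modified; a new CSV string plus its SHA-256 digest is returned so
    the output can be registered in the provenance log."""
    reader = csv.DictReader(io.StringIO(raw_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=keep_columns)
    writer.writeheader()
    for row in reader:
        # Missing columns are written as empty strings rather than errors.
        writer.writerow({col: row.get(col, "") for col in keep_columns})
    normalized = out.getvalue()
    return normalized, hashlib.sha256(normalized.encode()).hexdigest()
```

Because the function returns a new string instead of rewriting the source file, the "raw files stay untouched" guarantee holds by construction.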

    Data Provenance & Chain of Custody

    Central registry of all normalized datasets with source hashes, output hashes, format detection, and transformation audit trail.
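One way to model a registry entry like the one described above; the field names are assumptions chosen to match the description (source hash, output hash, format, transformations), not the portal's actual schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ProvenanceRecord:
    """One entry in the provenance registry (illustrative schema)."""
    source_file: str
    source_sha256: str
    output_file: str
    output_sha256: str
    detected_format: str
    # Ordered audit trail of transformations applied to the source.
    transformations: list = field(default_factory=list)

    def to_dict(self):
        """Serialize for the registry's JSON store."""
        return asdict(self)
```

Keeping both the source and output hashes in one record is what makes the transformation audit trail verifiable end to end.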

    Data Pipeline

    AUDIT → EXTRACT → TRANSFORM → DASHBOARD pipeline for Example-Target spreadsheet data.

    Files Audited: --
    Total Rows: --
    Extracted: --

    Pipeline Steps

    1. AUDIT — Inventory all files, compute SHA-256 hashes, extract schemas
    2. EXTRACT — Parse high-value spreadsheets into normalized JSON
    3. TRANSFORM — Dedup, cross-reference, merge with evidence manifest
    4. DASHBOARD — Wire into visualizations
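The AUDIT step above can be sketched as a directory walk that records a file inventory with hashes and header schemas. The directory layout and record fields are assumptions for illustration:

```python
import csv
import hashlib
import pathlib

def audit_files(root):
    """AUDIT step sketch: inventory CSV files under a directory,
    recording name, SHA-256 digest, and header schema for each."""
    records = []
    for path in sorted(pathlib.Path(root).glob("*.csv")):
        data = path.read_bytes()
        with path.open(newline="") as f:
            # The first CSV row is treated as the file's schema.
            header = next(csv.reader(f), [])
        records.append({
            "file": path.name,
            "sha256": hashlib.sha256(data).hexdigest(),
            "schema": header,
        })
    return records
```

The resulting records feed the "Files Audited" counter directly, and the schemas tell the EXTRACT step which spreadsheets are high-value.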

    Ingest CSV / JSON Data

    Bulk import evidence from Shodan exports, Censys results, RIPEstat BGP data, or custom CSV/JSON files.

    📊
    Drop CSV or JSON files here
    Supported: Shodan CSV, Censys JSON, RIPEstat BGP CSV, custom formats
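Bulk import of mixed formats implies a sniffing step before parsing. A rough sketch of that dispatch; the signatures used here are assumptions, not the documented Shodan or Censys export schemas, which a real importer would check against their published column names:

```python
import json

def detect_format(filename, sample):
    """Guess an importer for a dropped file from its name and a
    text sample (illustrative heuristics only)."""
    first_line = sample.splitlines()[0] if sample else ""
    if filename.endswith(".json"):
        try:
            # One JSON object per line suggests an NDJSON export.
            json.loads(first_line)
            return "json-lines"
        except json.JSONDecodeError:
            return "json"
    # Assumption: scan exports carry an IP column in the CSV header.
    if "ip" in first_line.lower():
        return "csv-with-ip-column"
    return "custom-csv"
```

A dispatch like this only picks the parser; each parser still validates the columns it actually needs.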

    Timeline Editor

    Tag evidence with event dates and manage timeline tracks for the dual-timeline visualization.

    Add Timeline Event

    Timeline Events

    Date | Track | Title | Evidence | Actions

    Site Settings

    Edit the centralized project configuration. Changes apply site-wide.

    Site Identity

    Author

    Key Metrics

    Build & Publish

    Rebuild the evidence manifest and site data files.

    Current Status

    0 evidence items in manifest
    0 metadata sidecar files
    0 curated lists
    0 provenance records
    Last build: Unknown
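The status counters above amount to counting manifest entries and sidecar files on disk. A minimal sketch, assuming a JSON manifest with an "items" list and `*.meta.json` sidecars (both layout assumptions):

```python
import json
import pathlib

def build_status(manifest_path, sidecar_dir):
    """Recompute the 'Current Status' counters from disk."""
    manifest = json.loads(pathlib.Path(manifest_path).read_text())
    sidecars = list(pathlib.Path(sidecar_dir).glob("*.meta.json"))
    return {
        "evidence_items": len(manifest.get("items", [])),
        "sidecar_files": len(sidecars),
    }
```

Counting from disk rather than from a cached value means the numbers stay honest even if files were added or removed outside the portal.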

    Dashboard Pages

    Manage investigation dashboard pages. View status, open dashboards, and control which pages appear in the site navigation.

    Sprint Extractions

    View and run data extraction scripts. Each sprint targets specific source files and produces structured JSON for the dashboard pipeline.

    Extraction Scripts

    Extraction Results

    File | Records | Size | Last Modified | Actions

    Agent Scratchpad

    Impressions and follow-up flags captured by agents during data ingestion. Each flag identifies datasets to analyze further, comparisons to run, or data quality issues.