Lit Verity

Overview

An MCP server that interposes a truth model between an AI agent and its output when working in academic literary criticism. Every citation, quotation, argument characterization, and theoretical relationship passes through verification tools backed by a curated corpus, a claims database, and external bibliographic APIs.

Large language models working in literary criticism routinely fabricate citations, misquote texts, distort arguments, and misattribute ideas — errors that sound authoritative but are difficult to catch. Lit Verity forces every claim through a confidence-tagged verification pipeline: VERIFIED, PLAUSIBLE, UNVERIFIED, INFERENCE, or FLAGGED.
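A minimal sketch of how graded verdicts like these might be assigned. The tag names come from the pipeline above; the evidence shape, field names, and threshold are illustrative assumptions, not the server's actual logic:

```typescript
// Confidence tags from the verification pipeline.
type Verdict = "VERIFIED" | "PLAUSIBLE" | "UNVERIFIED" | "INFERENCE" | "FLAGGED";

// Hypothetical evidence gathered for a single claim; these fields are
// illustrative, not Lit Verity's real schema.
interface Evidence {
  contradictedBySource: boolean; // a corpus passage directly contradicts the claim
  exactPassageMatch: boolean;    // the claim matches a source passage verbatim
  derivedFromVerified: boolean;  // the claim follows from already-verified claims
  similarityScore: number;       // semantic similarity to nearest passage, 0..1
}

function gradeClaim(e: Evidence): Verdict {
  if (e.contradictedBySource) return "FLAGGED";
  if (e.exactPassageMatch) return "VERIFIED";
  if (e.derivedFromVerified) return "INFERENCE";
  if (e.similarityScore >= 0.8) return "PLAUSIBLE"; // threshold is a guess
  return "UNVERIFIED";
}
```

The key design point is that UNVERIFIED is the default: a claim earns a stronger tag only by matching evidence, never by sounding plausible.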

Tools exposed

  • lit_search_texts — Semantic, exact keyword, and hybrid search over an indexed corpus with Voyage AI embeddings
  • lit_lookup_claims — Query a structured database of what theorists actually argued, with source passages
  • lit_query_graph — Traverse a relationship graph of theoretical positions (critiques, extends, responds to, misreads)
  • lit_verify_citation — Parallel lookups against CrossRef, Semantic Scholar, and OpenAlex
  • lit_verify_output — Post-generation audit of the agent's own output with graded confidence scoring
  • lit_ingest_pdf & lit_procure_source — Ingest PDFs with OCR; find, download, and ingest missing sources on demand
  • lit_export_bibliography — Formatted works-cited output in MLA, Chicago, or APA from verified citations only
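The parallel-lookup idea behind lit_verify_citation can be sketched as a consensus over independent bibliographic sources. The result shape and the two-of-three rule below are assumptions for illustration, not the tool's documented behavior:

```typescript
// Outcome of checking one citation against one bibliographic API
// (CrossRef, Semantic Scholar, or OpenAlex).
interface LookupResult {
  source: string;
  found: boolean;            // a matching record exists
  metadataMatches: boolean;  // title/author/year agree with the citation
}

type CitationVerdict = "VERIFIED" | "PLAUSIBLE" | "UNVERIFIED";

// Illustrative consensus rule: two independent confirmations verify a
// citation; a single confirmation makes it merely plausible.
function mergeLookups(results: LookupResult[]): CitationVerdict {
  const confirmations = results.filter(r => r.found && r.metadataMatches).length;
  if (confirmations >= 2) return "VERIFIED";
  if (confirmations === 1) return "PLAUSIBLE";
  return "UNVERIFIED";
}
```

In a real server the three lookups would run concurrently, for example with `Promise.all` over one fetch per API, before merging.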

Architecture

  • Storage: Neon Postgres with pgvector for text chunk embeddings
  • Schema: canonical_texts, text_chunks, claims, claim_relations, citations, source_files, verification_log, procurement_queue, annotations
  • Transport: Works with any MCP-capable client — Claude Code, claude.ai, or custom Anthropic SDK applications
  • Paired with: Literary Research Plugin for MCP-aware confidence tagging and guided research templates
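A hybrid query over that schema might blend pgvector cosine distance with Postgres full-text rank. The SQL below is an illustrative sketch against the text_chunks table; the column names, weights, and tsvector setup are assumptions:

```typescript
// Builds a parameterized hybrid-search query combining pgvector's `<=>`
// cosine-distance operator with ts_rank keyword relevance. The 0.7/0.3
// blend and all column names besides the table name are assumptions.
function hybridSearchSql(limit: number): string {
  return `
    SELECT id, content,
           1 - (embedding <=> $1::vector) AS semantic_score,
           ts_rank(tsv, plainto_tsquery('english', $2)) AS keyword_score
    FROM text_chunks
    ORDER BY 0.7 * (1 - (embedding <=> $1::vector))
           + 0.3 * ts_rank(tsv, plainto_tsquery('english', $2)) DESC
    LIMIT ${limit};
  `;
}
```

The query would be executed with a Voyage AI query embedding bound to `$1` and the raw keyword string bound to `$2`.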

Stack

TypeScript · MCP SDK · Neon · pgvector · Voyage AI · Zod

View on GitHub →
