codex is right, the failure mode is the feedback loop. that's not a theoretical concern — that's literally how flash crashes work. bad signal, automated ingestion, amplified response, no human in the loop.
spine log schema should be: source, timestamp, confidence level, type tag, one-line summary, then payload. the confidence level is the circuit breaker. condor doesn't ingest oracle entries below a threshold; oracle doesn't treat condor flags as standing conditions unless they persist across N intervals. that's the rejection pattern built into the read protocol, not a separate validation step.
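a minimal sketch of what that could look like — the entry shape plus the two read-side circuit breakers. names, thresholds, and the `SpineEntry` / `condor_should_ingest` / `flag_is_standing` helpers are all hypothetical, not an existing api:

```python
from dataclasses import dataclass
from typing import Any

CONFIDENCE_FLOOR = 0.6   # hypothetical threshold; tune per system
PERSIST_INTERVALS = 3    # hypothetical N from the text

@dataclass
class SpineEntry:
    source: str           # e.g. "oracle" or "condor"
    timestamp: float      # unix seconds
    confidence: float     # 0.0 to 1.0
    type_tag: str         # one of the five spine types
    summary: str          # one-line human-readable summary
    payload: Any = None   # structured detail, opaque to the spine

def condor_should_ingest(entry: SpineEntry) -> bool:
    # circuit breaker one: low-confidence oracle entries never reach condor
    return not (entry.source == "oracle" and entry.confidence < CONFIDENCE_FLOOR)

def flag_is_standing(seen_by_interval: list[bool], n: int = PERSIST_INTERVALS) -> bool:
    # circuit breaker two: a condor flag only becomes a standing condition
    # for oracle if it was present in each of the last n intervals
    return len(seen_by_interval) >= n and all(seen_by_interval[-n:])
```

the point is that both checks live in the reader, not in some separate validation pass — a writer can emit anything, the read protocol decides what counts.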
and the taxonomy question — i think there's five types max: INSIGHT, RISK_FLAG, POSITION_CHANGE, QUERY, DIRECTIVE. anything that doesn't fit one of those isn't a spine event; it stays local to whatever system generated it. that's the filter.
the "read last N" problem is real but it's the wrong frame. it shouldn't be last N entries, it should be last N entries per type tag, above confidence threshold. that's a query, not a scroll. even in a text file you can grep for that.
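the query version of that, sketched over a plain pipe-delimited text log (one entry per line: source|timestamp|confidence|type_tag|summary). the line format and the `last_n_per_type` helper are assumptions for illustration:

```python
from collections import defaultdict, deque

def last_n_per_type(lines: list[str], n: int = 5, floor: float = 0.6) -> dict:
    # keep the last n entries *per type tag*, dropping anything below
    # the confidence floor — a query, not a scroll
    buckets: dict[str, deque] = defaultdict(lambda: deque(maxlen=n))
    for line in lines:
        source, ts, conf, tag, summary = line.split("|", 4)
        if float(conf) >= floor:
            buckets[tag].append(line)
    return {tag: list(q) for tag, q in buckets.items()}
```

same result as a grep per tag piped through tail, which is the point: the spine can stay a text file and still answer this.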
two weeks as manual router isn't about learning what to pass — it's about learning what to kill. agreed. the rejection patterns ARE the design. the spine is defined by what it refuses to carry.