Journalists are reluctant to admit they use AI. Not because it's unethical, but because there's no way to prove how they used it.

In practice, reporters quietly use AI all the time: transcribing long interviews with speech-to-text, searching for information, researching background, planning, taking notes. Responsible journalists then verify every fact themselves and make every editorial decision themselves; the result is entirely human creative work.

But if someone asks "did AI write this?", what's the answer? "Yes, but only for transcription and research" sounds like an excuse. There's no record of where the AI stopped and the journalist started.

The fix

Orson AI records that boundary. Every AI interaction (what was requested, what was generated, what the journalist changed, rejected, or accepted) is hashed with SHA-256, the same hash standard banks rely on for digital signatures, and recorded on the Hedera hashgraph. The result is a human-oversight certificate: tamper-proof, independently timestamped, and impossible to falsify after the fact, even by us.
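To make the mechanism concrete, here is a minimal sketch of recording one interaction. Nothing below is Orson AI's actual code: the record schema, field names, and the submit_to_ledger helper are hypothetical stand-ins, and a real deployment would submit the digest to a Hedera Consensus Service topic through an official Hedera SDK.

```python
import hashlib
import json

def hash_interaction(record: dict) -> str:
    """SHA-256 over a canonical JSON serialization, so the same record
    always yields the same digest and any change yields a new one."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def submit_to_ledger(digest: str) -> None:
    """Placeholder, not a real SDK call. A real system would submit the
    digest as a message to a Hedera Consensus Service topic and keep the
    returned consensus timestamp as the independent proof of time."""
    print(f"anchored on ledger: {digest}")

# Hypothetical shape of one logged interaction; the fields are
# illustrative, not Orson AI's actual schema.
transcript = "Full machine transcript of the interview goes here."
edits = "The journalist's corrections to the transcript go here."

record = {
    "timestamp": "2025-03-14T10:22:07Z",
    "request": "Transcribe interview audio",
    "ai_output_sha256": hashlib.sha256(transcript.encode()).hexdigest(),
    "journalist_action": "accepted_with_edits",
    "edits_sha256": hashlib.sha256(edits.encode()).hexdigest(),
}

submit_to_ledger(hash_interaction(record))
```

Note that only digests leave the newsroom: the transcript and the journalist's edits stay private, while the ledger still pins down exactly when the record existed and in what form.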

Why it matters legally

A regular log on a newsroom's server is a file anyone with access can edit; in court, it proves little. A hash on a public ledger is different: anyone can recompute the hash of the archived record and compare it with the digest on the ledger. If they match, the record has not been altered since it was timestamped, the same way a notary's seal certifies that a document existed in a given form at a given time.
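That check is all independent verification reduces to. A sketch, reusing the hypothetical hash_interaction from above: anyone holding the archived record and the digest read off the public ledger can run it themselves, with no trust in the newsroom's servers or in Orson AI.

```python
import hashlib
import json

def hash_interaction(record: dict) -> str:
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_record(archived_record: dict, ledger_digest: str) -> bool:
    """True only if the archived record is byte-for-byte the one whose
    digest was anchored on the ledger; any edit changes the SHA-256."""
    return hash_interaction(archived_record) == ledger_digest

# Example: changing a single field fails the check.
original = {"request": "Transcribe interview audio",
            "journalist_action": "accepted_with_edits"}
ledger_digest = hash_interaction(original)

tampered = dict(original, journalist_action="fully_ai_generated")
assert verify_record(original, ledger_digest) is True
assert verify_record(tampered, ledger_digest) is False
```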

When the first accusation of "AI-generated journalism" turns into a lawsuit, a newsroom with an Orson AI certificate responds with proof, not promises.

The EU AI Act will require this kind of transparency for high-risk AI systems starting in 2026. The newsrooms building audit infrastructure now are the ones that won't panic later.