Frequently Asked Questions

What is Soulmetric?

Soulmetric is verification infrastructure for AI-generated research. One operator, using 28 specialized AI agents, produced 256 working papers and 22 monographs in ten days. Every artifact is published with a verification tier and Scharp Scale score so you know exactly how much scrutiny it has survived.

What are "working papers"?

Working papers are structured research artifacts—not chat transcripts, not blog posts, not summaries. Each one has a defined scope, methodology, argument structure, and citations. They're called "working papers" because they are living documents: subject to ongoing verification, correction, and revision. The term signals that they are research outputs intended to be scrutinized, not final pronouncements.

Have the working papers been independently reviewed?

Some have. Gold-tier working papers have been reviewed by multiple independent domain experts; Silver-tier papers have been reviewed by a single expert; Bronze-tier papers have been verified by independent AI systems (not the generating model); Generated-tier papers have not been verified at all. The tier system is designed to be transparent about exactly how much review each working paper has received. Currently, 3 working papers have completed independent expert review.

What is the Scharp Scale?

The Scharp Scale is a verification quality score developed by Kevin Scharp, ranging from Negative (harmful or fundamentally flawed) through 0 (no verifiable content) to 10 (exceptional). It measures how well a working paper's claims survived scrutiny. The scale is proprietary and has not been externally validated. We publish it for transparency, not as an industry standard. See the Verification Standard page for the full scale.

How is the Scharp Scale different from the verification tier?

The tier tells you who verified the working paper (the provenance). The Scharp Scale tells you how well it held up (the quality). They measure different things. A working paper always carries both: you see who checked it and what they found.

Should I trust the "Generated" tier working papers?

Not without your own verification. Generated-tier working papers are raw AI output that hasn't passed through any verification step. They may contain hallucinations, fabricated citations, unsupported claims, or logical errors. They're included in the corpus for completeness and transparency, but you should treat them as unverified drafts.

What does "Bronze" verification actually mean?

Bronze means external AI adjudication: the working paper has been checked by independent AI systems (not the generating model). This is machine verification, not human certification. It catches structural errors, citation failures, and logical inconsistencies, but it cannot substitute for human domain expertise. Bronze is the baseline verification tier: no human expert has reviewed the output, but systems other than the one that created it have.

Who is the operator?

Kevin Scharp, Professor of Philosophy at the University of Illinois Urbana-Champaign. One person orchestrating 28 AI agents, designing the verification pipeline, and coordinating independent expert review. Soulmetric is deliberately small—the point is to show what one person with the right tools and methodology can produce, and how that output can be systematically verified.

Can I challenge or correct a working paper?

Yes. Anyone can submit error reports, re-verification requests, or score challenges for any working paper. Confirmed errors receive correction notices; fundamentally flawed working papers can be retracted. See the Verification Standard for details, or email kevin@soulmetric.com.

What AI systems are used?

Multiple. The generation pipeline uses various large language models configured for specific research tasks. The verification pipeline uses different models (different providers, different architectures) so the generating model never adjudicates its own output. Specific systems used are documented on the Verification Standard page.

What are Problem Sprints?

Problem Sprints apply the Soulmetric verification infrastructure to your specific question. You submit a problem; we deploy the same agent fleet and verification pipeline and deliver verified working papers focused on your topic, typically within days. Engagements start at $15,000 for a one-week sprint. See the Problem Sprints page for details.

Can I submit a Soulmetric working paper to a journal?

No. These are AI-generated, human-directed working papers. Submitting them as your own work would violate academic integrity standards at every institution we are aware of. The working papers are research tools and reference materials—they are not authored by you and should not be represented as such.

Is this "real" research?

That depends on your definition. The working papers are AI-generated, not written by human researchers in the traditional sense. But they follow research conventions (scoped questions, structured methodology, cited sources, independent expert review at higher tiers). The verification system exists precisely because AI-generated research needs a different trust framework than human-authored research. We're not claiming equivalence with traditional academic publishing—we're building the verification infrastructure that AI-generated research needs.

How are expert validators selected and compensated?

Validators are chosen based on demonstrated domain expertise—credentials, publication history, or recognized practitioner status. They are compensated at a flat rate per review, regardless of the score they assign. Conflicts of interest are disclosed and recorded. For Gold-tier reviews, experts work independently. Full details on the Verification Standard page.

How do I get in touch?

Email kevin@soulmetric.com with any inquiries. For Problem Sprint discussions, you can also schedule a discovery call.