A proposal for Mercor

Synthetic Expert Alignment

What if you could embed the complete aesthetic geometry of a genius — their works, their critics, their era — into a single latent space, and use it as an alignment signal?

No rubric. No $150/hr poets. Scalable taste at the fidelity of the greatest minds in human history.

“We’ll enshrine taste from every different decade and every different era. Then the model will be able to learn what taste you have.”

Brendan Foody — Conversations with Tyler, Ep. 267

Three paradigms of taste alignment

Tyler asked you on-mic: can taste be captured in a rubric? Your honest answer was that RLHF preference ranking is the fallback when rubrics fail. There’s a third option.

I. Rubric
   Mechanism: Expert writes scoring criteria. Model is graded against them.
   Captures:  Explicit, codifiable quality markers
   Misses:    Everything Kant said it would — taste, atmosphere, the ineffable
   Cost:      $150/hr per expert
   Scales?    Linearly with domains

II. RLHF
   Mechanism: Human picks preferred output from pairs. Model learns the preference gradient.
   Captures:  Implicit taste that can’t be written as rules
   Misses:    Regresses to the mean of available evaluators. Current taste ≠ peak taste.
   Cost:      $150/hr × thousands of comparisons
   Scales?    Linearly with domains × eras

Multimodal corpus → unified space → reward signal

Using natively multimodal embedding models, we map text, images, structural data, and critical analysis into one mathematical space. The subjective becomes computable.

[Diagram: Primary works (text, plans, scores) · Visual / spatial (photos, drawings) · Critical analysis (Scully, historians) · Context / era (history, culture) → Unified latent space, 3,072 dimensions → Vector proximity = reward signal → Model alignment at infinite scale]

Once embedded, the reward signal costs near-zero to query. Dead geniuses. Every era. Every modality.
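The core mechanic can be sketched in a few lines. This is a toy illustration, not the proposed system: it assumes some multimodal embedding model has already mapped each corpus item to a vector (here 4 dimensions stand in for the 3,072 mentioned above), and `reward` is simply cosine similarity to the centroid of the expert corpus.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def centroid(vectors):
    """Mean of unit vectors: a crude 'expert taste' anchor."""
    dim = len(vectors[0])
    mean = [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]
    return normalize(mean)

def reward(candidate_vec, expert_centroid):
    """Vector proximity as reward: cosine similarity to the anchor."""
    c = normalize(candidate_vec)
    return sum(a * b for a, b in zip(c, expert_centroid))

# Toy pre-embedded corpus: primary works + critical analysis.
corpus = [[1.0, 0.1, 0.0, 0.0], [0.9, 0.2, 0.1, 0.0], [1.0, 0.0, 0.2, 0.1]]
anchor = centroid([normalize(v) for v in corpus])

on_taste = reward([1.0, 0.1, 0.1, 0.0], anchor)   # near the corpus
off_taste = reward([0.0, 0.0, 0.1, 1.0], anchor)  # far from it
```

Once the anchor is computed, each reward query is a dot product — which is what makes the signal near-zero cost after embedding.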

The embedding space in action

Switch experts. Click clusters. See why the strongest pull is never to the obvious source — it’s to the analytical or contextual cluster that reveals the non-obvious structural parallel.

[Interactive figure legend: Primary works · Critical analysis · Context / material · Philosophy / tradition · Novel query]
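The cluster-attribution idea above reduces to ranking cluster centroids by similarity to a novel query. A minimal sketch, with invented 3-dim toy vectors in place of real embeddings:

```python
import math

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy centroids for the four cluster types shown in the figure.
clusters = {
    "primary works": [1.0, 0.0, 0.0],
    "critical analysis": [0.6, 0.8, 0.0],
    "context / material": [0.0, 1.0, 0.3],
    "philosophy / tradition": [0.0, 0.2, 1.0],
}

def strongest_pull(query):
    """Clusters ranked by centroid similarity, strongest first."""
    return sorted(clusters, key=lambda n: cos(query, clusters[n]), reverse=True)

# A query that superficially resembles a primary work can still
# pull hardest toward the analytical cluster:
ranking = strongest_pull([0.7, 0.7, 0.1])
```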

Beyond RLHF: two killer applications

The same embedding mechanism that supersedes expert-level RLHF also unlocks specific market opportunities.

Architectural intelligence
A civic platform that crowdsources revealed aesthetic preferences into the same embedding space, generating AI baselines that architects must compete to improve upon. Directly applicable to the Collison-Cowen New Aesthetics grants and the YIMBY problem Sam Altman flagged as unsolved.
Read the full proposal →
Synthetic RLHF at scale
Replace the $150/hr expert bottleneck with infinitely queryable synthetic personas. Enshrine peak-era taste from every domain, every decade. Orwell for political language. Steinbeck for economic reportage. Plath for phenomenology of consciousness. The evaluator panel humanity deserves.
Read the full proposal →
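Mechanically, a synthetic persona in this scheme is just a centroid over one expert's embedded corpus, and a preference label is whichever candidate sits closer to it. A hedged sketch with invented toy vectors (the "Orwell" values below are placeholders, not real model output):

```python
import math

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def synthetic_preference(persona_centroid, candidate_a, candidate_b):
    """Label a preference pair: which candidate the persona 'prefers'."""
    if cos(candidate_a, persona_centroid) >= cos(candidate_b, persona_centroid):
        return "a"
    return "b"

# Toy "Orwell" persona: plain, concrete political language.
orwell = [0.9, 0.1, 0.2]
plain_draft = [0.8, 0.2, 0.1]
ornate_draft = [0.1, 0.9, 0.8]

label = synthetic_preference(orwell, plain_draft, ornate_draft)
```

Each label is one comparison at the cost of two dot products, which is what replaces the $150/hr-per-comparison bottleneck.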

This needs two kinds of people I’m not

I have the idea, the theoretical architecture, and the applied verticals mapped out. What I need are collaborators who can make the embedding space real and fill it with the right material.

Expert curators
Architectural historians, phenomenologists, art critics, literary scholars — people with deep domain taste who can identify which corpora to embed, how to weight critical analysis against primary works, and whether the space is capturing genuine aesthetic structure or just surface similarity. You know what “good” looks like in your field and can articulate why.
ML engineers / researchers
People who work with multimodal embedding models, vector databases, and RL pipelines — who can build the ingestion architecture, configure the latent space, and turn vector proximity into a functioning reward signal. Ideally someone who’s thought about the limits of RLHF and is curious about what comes next.

If either of these is you, I’d love to talk.

matthew [at] latentlayer [dot] ai

Matthew McRedmond · Dublin, Ireland