dydact predicts material properties by learning the geometry, not the surface. The API exposes the Tc-prediction and candidate-ranking surface from the methodology paper — every response ships with honest confabulation telemetry, including a hold-out-validated confidence rating.
Query by compound + condition. Returns predicted Tc, confidence rating, pressure-curve envelope, and coverage-gap flags when your query sits outside the calibrated region.
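A pressure sweep is how you get the full pressure-curve envelope rather than a single point. A minimal sketch against the predict/tc/sweep endpoint listed below — the sweep payload field names (`pressure_gpa_start`, `pressure_gpa_stop`, `steps`) are illustrative assumptions, not the documented schema:

```python
import os


def sweep_payload(compound, start_gpa, stop_gpa, steps):
    """Build a pressure-sweep request body (hypothetical field names)."""
    return {
        "compound": compound,
        "pressure_gpa_start": start_gpa,
        "pressure_gpa_stop": stop_gpa,
        "steps": steps,
    }


def run_sweep(payload):
    """POST the sweep and return the parsed JSON response."""
    import httpx  # the cookbook's only external dependency

    r = httpx.post(
        "https://api.dydact.io/api/v1/predict/tc/sweep",
        headers={"Authorization": f"Bearer {os.environ['DYDACT_API_KEY']}"},
        json=payload,
    )
    r.raise_for_status()
    return r.json()
```

Each point in the returned envelope carries the same confidence and coverage-gap fields as a single-point query, so you can see exactly where along the curve the model leaves its calibrated region.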
Void-boundary inversion on the oracle/synthesize endpoint. Returns top-K candidates with per-candidate σ-convergence, stability, primitive-recall, and basin-tag fields.
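A sketch of consuming the synthesis output: request candidates, then keep only those clearing a stability cutoff. The per-candidate `stability` field is documented above; the request body (`target`, `top_k`) is an assumed schema — check the API reference for the real one:

```python
import os


def filter_stable(candidates, min_stability=0.8):
    """Keep candidates at or above a stability cutoff, best first."""
    kept = [c for c in candidates if c["stability"] >= min_stability]
    return sorted(kept, key=lambda c: c["stability"], reverse=True)


def synthesize(target, top_k=10):
    """POST a synthesis request and return the candidate list."""
    import httpx  # the cookbook's only external dependency

    r = httpx.post(
        "https://api.dydact.io/api/v1/oracle/synthesize",
        headers={"Authorization": f"Bearer {os.environ['DYDACT_API_KEY']}"},
        json={"target": target, "top_k": top_k},  # hypothetical schema
    )
    r.raise_for_status()
    return r.json()["candidates"]
```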
Reed's mode-I / mode-II / clean verdict surfaces how strong the pressure response is relative to crystal-axis noise. Never silent failure — the API tells you when to trust it.
The paper's Nb and FeSe hold-out tests reproduce end-to-end against a pinned model version. Cookbook recipe 03 runs the full flow from a fresh API key.
Three steps. No SDK. Python 3.10+, only httpx as an external dep. The env-var pattern matches every other API cookbook you've used.
Apply for access. Academic tier is free, allocated per department.
export DYDACT_API_KEY="dyd_academic_..."
import httpx, os

r = httpx.post(
    "https://api.dydact.io/api/v1/predict/tc",
    headers={"Authorization": f"Bearer {os.environ['DYDACT_API_KEY']}"},
    json={"compound": "Nb", "pressure_gpa": 0.0001},
)
print(r.json())
{
"predicted_tc_k": 9.82,
"predicted_tc_log_decades": 0.992,
"pressure_gpa": 0.0001,
"archetype": "BCS_classical",
"confidence": "clean",
"model_version": "v0.3",
"coverage_gap": false
}
confidence keys: clean (trust) · mode-II (cross-check experimental) · mode-I (directional only).
Never silent — the API refuses to pretend it's calibrated when it isn't.
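The confidence keys and the coverage_gap flag can be folded into one gate before you act on a prediction. A minimal sketch — the response fields match the sample payload above; the policy itself is just one reasonable way to consume the telemetry:

```python
def trust_level(resp):
    """Map a predict/tc response to an action string."""
    if resp.get("coverage_gap"):
        return "reject"          # query sits outside the calibrated region
    mode = resp.get("confidence")
    if mode == "clean":
        return "use"             # trust the predicted Tc directly
    if mode == "mode-II":
        return "cross-check"     # verify against experimental data
    if mode == "mode-I":
        return "directional"     # trend only, not magnitude
    return "reject"              # unknown key: fail closed
```

Note the default branch: an unrecognized confidence key is treated as a rejection, matching the API's own refuse-rather-than-pretend posture.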
Every recipe runs end-to-end from a fresh academic key. Full source at github.com/dydact/dydact-cookbook.
If you're applying dydact to known material classes, it's a transaction. If you're using dydact to find something new — novel compounds, post-silicon candidates, drug leads — dydact is a stakeholder in what comes out. Equity + IP sharing, not a licensing fee. Full breakdown →
Trained v0.3 operator applied to known material classes. Academic: free (1k calls/day per key, department-level). Corporate: bulk per-call pricing. Enterprise: contracted, unmetered. No IP claim on output.
predict/tc, predict/tc/sweep, holdout/calibration, models

Oracle synthesis + candidate ranking + private training. Generates novel compounds → generates IP. Entry requires a paid retainer (which funds substrate time regardless of outcome) AND an executed agreement covering equity, attribution, and spigot terms. Both conditions, not one or the other.
oracle/synthesize, candidates/rank, training/init

Tier (academic / corporate / enterprise) is orthogonal to mode. An academic key is mode-1-only by default; discovery access requires a separate agreement regardless of tier. See pricing for the verticals we actively partner in and the compliance overlays (GDPR / SOC 2 / HIPAA / ISO / GMP / FedRAMP) that stack on top.
Tell us who's using it and what for. Academic applications accept on a per-department basis — one key per dept per university. Corporate applications get a 15-minute scoping call.