
Large Engineering Model API

The Large Engineering Model API is a hosted knowledge layer for engineering agents. It returns cited context, source pointers, and workflow hints through an OpenAI-style REST API. It is separate from sim: sim remains the open-source local runtime for operating solver software, while LEM is a network-accessible model/API surface.

The production base URL is:

https://api.svdailab.com/v1

API keys are managed from platform.svdailab.com. Keys use OpenAI-style prefixes such as sk_live_...; product access is controlled by backend scopes.

The first public model is lem-1.

lem-1 is retrieval-first and deterministic. Runtime responses do not call an LLM; they return context blocks, citations, recommended records, and agent hints assembled for the request. Future synthesis models may use the same API family or a successor model id.

lem-1 is currently strongest for COMSOL-style multiphysics modeling questions, especially example discovery, setup patterns, solver troubleshooting, boundary conditions, and cited context packs. It is a retrieval-first context API: it does not run simulations, inspect private project files, or replace solver verification. Broader engineering corpora and client/private corpora may be added under the same API family over time.

Send the key in the Authorization header:

curl https://api.svdailab.com/v1/models \
  -H "Authorization: Bearer $SVD_API_KEY"

Keys for lem-1 must have the model:lem-1 scope. Responses include request and rate-limit headers so production agents can log failures and back off when needed.
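Every call therefore follows the same pattern: attach the Bearer header, and back off when the rate limit is hit. The sketch below is a minimal illustration; the helper names and backoff constants are assumptions, and only the Authorization header format comes from this page.

```python
# Illustrative helpers for authenticated calls with exponential backoff.
# Only the "Bearer <key>" header format is documented; everything else
# here is a sketch an agent could adapt.

def auth_headers(api_key: str) -> dict:
    """Build the Authorization header the API expects."""
    return {"Authorization": f"Bearer {api_key}"}

def backoff_seconds(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Delay before retry number `attempt` (0-based), capped at `cap` seconds."""
    return min(cap, base * (2 ** attempt))
```

An agent loop would sleep `backoff_seconds(n)` after the n-th consecutive 429 before retrying, and log the request id from the response headers alongside any failure.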

GET /v1/models

Example response:

{
  "object": "list",
  "data": [
    {
      "id": "lem-1",
      "object": "model",
      "owned_by": "svd-ai-lab",
      "status": "available",
      "required_scope": "model:lem-1",
      "description": "Retrieval-first Large Engineering Model for engineering agents."
    }
  ]
}
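Before issuing queries, an agent can check this list to confirm lem-1 is available to its key. A minimal sketch over the parsed response body (the helper name is an assumption):

```python
# Hypothetical helper: look up a model entry in a parsed /v1/models
# response. Returns the entry dict, or None if the id is absent.

def find_model(models_response: dict, model_id: str):
    for entry in models_response.get("data", []):
        if entry.get("id") == model_id:
            return entry
    return None
```

An agent would then gate on `find_model(body, "lem-1")["status"] == "available"` before sending search or response requests.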
POST /v1/search

Use /v1/search when an agent or tool wants raw ranked records, snippets, facets, and citations.

Request fields:

Field                 | Type                    | Notes
--------------------- | ----------------------- | -----
model                 | string                  | Required. Use lem-1.
query                 | string or content array | Required. Query or task text, limited to 4,000 characters.
max_results           | integer                 | Optional. 1 to 20, default 10.
max_tokens            | integer                 | Optional total snippet budget.
max_tokens_per_record | integer                 | Optional snippet cap per result.
filters               | object                  | Optional deterministic filters.
include               | string array            | Optional. snippets, facets, sources, scores.

Supported filters include application_area, source_kind, corpus, module, file_type, and has_local_path.
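The documented limits (4,000-character query, 1 to 20 results) are worth enforcing client-side so an agent fails fast instead of burning a request. A sketch of a payload builder; the function name is an assumption, while the field names and limits come from the table above:

```python
# Illustrative /v1/search payload builder that enforces the documented
# request limits before anything goes over the wire.

def build_search_payload(query, max_results=10, filters=None, include=None):
    if len(query) > 4000:
        raise ValueError("query is limited to 4,000 characters")
    if not 1 <= max_results <= 20:
        raise ValueError("max_results must be between 1 and 20")
    payload = {"model": "lem-1", "query": query, "max_results": max_results}
    if filters is not None:
        payload["filters"] = filters   # e.g. {"application_area": ...}
    if include is not None:
        payload["include"] = include   # e.g. ["snippets", "sources"]
    return payload
```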

Example:

curl https://api.svdailab.com/v1/search \
  -H "Authorization: Bearer $SVD_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lem-1",
    "query": "nonlinear solver convergence for coupled heating",
    "max_results": 5,
    "include": ["snippets", "sources", "facets"]
  }'

Response shape:

{
  "id": "search_...",
  "object": "search",
  "model": "lem-1",
  "results": [
    {
      "id": "record_...",
      "object": "engineering_record",
      "title": "Example record",
      "snippet": "Compact retrieval snippet...",
      "source": {
        "record_id": "record_...",
        "title": "Example record",
        "type": "engineering_reference"
      },
      "score": 8.231
    }
  ],
  "usage": {
    "input_tokens": 12,
    "output_tokens": 160,
    "total_tokens": 172
  },
  "lem": {
    "llm_used": false,
    "mode": "raw_search",
    "result_count": 1
  },
  "request_id": "req_..."
}
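Since each result carries a source object, an agent can turn the results directly into citation strings for its answer. A hypothetical post-processing helper, following the response shape above:

```python
# Illustrative helper: collect citation strings from a parsed /v1/search
# response so an agent can attach them to its output.

def extract_citations(search_response: dict) -> list:
    cites = []
    for result in search_response.get("results", []):
        source = result.get("source", {})
        cites.append(f"{source.get('title')} ({source.get('record_id')})")
    return cites
```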
POST /v1/responses

Use /v1/responses when an agent wants a compact context pack rather than raw search results.

Request fields:

Field                      | Type                    | Notes
-------------------------- | ----------------------- | -----
model                      | string                  | Required. Use lem-1.
input                      | string or content array | Required. Query or task text, limited to 4,000 characters.
max_output_tokens          | integer                 | Optional. 250 to 12,000, default 4,000.
metadata.agent             | string                  | Optional caller name such as codex, claude-code, or chatgpt.
metadata.solver            | string                  | Optional solver hint.
metadata.solver_version    | string                  | Optional solver version hint.
metadata.source_preference | string                  | Optional. any, local, or online.

Example:

curl https://api.svdailab.com/v1/responses \
  -H "Authorization: Bearer $SVD_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lem-1",
    "input": "solver convergence issue in a coupled heat-transfer model",
    "max_output_tokens": 900,
    "metadata": {
      "agent": "codex",
      "solver": "comsol",
      "source_preference": "any"
    }
  }'

Response shape:

{
  "id": "resp_...",
  "object": "response",
  "model": "lem-1",
  "output": [
    {
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "Deterministic context summary..."
        }
      ]
    }
  ],
  "usage": {
    "input_tokens": 14,
    "output_tokens": 426,
    "total_tokens": 440
  },
  "sources": [
    {
      "id": "src_0",
      "record_id": "record_...",
      "title": "Example record",
      "type": "engineering_reference"
    }
  ],
  "lem": {
    "llm_used": false,
    "context_blocks": [],
    "recommended_records": [],
    "agent_hints": []
  }
}
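To consume a context pack, an agent typically concatenates the output_text blocks and keeps the sources list alongside for citation. A sketch over the response shape above (the helper names are assumptions):

```python
# Illustrative helpers for unpacking a parsed /v1/responses body.

def context_text(response: dict) -> str:
    """Join all output_text blocks in the response's output messages."""
    parts = []
    for item in response.get("output", []):
        for block in item.get("content", []):
            if block.get("type") == "output_text":
                parts.append(block.get("text", ""))
    return "\n".join(parts)

def source_titles(response: dict) -> list:
    """Titles of the cited sources, in order."""
    return [s.get("title") for s in response.get("sources", [])]
```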

The versioned OpenAPI document is available at:

GET /v1/openapi.json

Use it for generated clients, agent tool schemas, and integration tests. Access still requires a scoped API key for protected endpoints.
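Once the OpenAPI document is fetched and parsed, a client generator or test harness can enumerate the operations it declares. A minimal sketch over a parsed spec dict (the helper name is an assumption; the paths structure follows the OpenAPI format):

```python
# Illustrative helper: list (METHOD, path) pairs declared in a parsed
# OpenAPI document, e.g. the body of GET /v1/openapi.json.

def list_operations(spec: dict) -> list:
    ops = []
    for path, methods in spec.get("paths", {}).items():
        for method in methods:
            ops.append((method.upper(), path))
    return sorted(ops)
```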

Agents should treat sources as citations and lem.context_blocks as retrieval context, not as final engineering truth. For production agent loops, cite returned sources, keep solver actions local and verifiable, and ask the user before loading proprietary project files.

When using LEM with sim, keep the separation clear: LEM provides context through api.svdailab.com; sim operates local solver software on the user’s machine.