This guide walks through ingesting a CSV file into PostgreSQL using the Datris pipeline API.
Onboarding data from an external source instead of a local file? Skip the manual config and use Discovery: describe the source, pick the datasets, and Datris generates the taps and pipelines and, optionally, schedules them for you, all in one wizard.
1. Start the stack
Wait for the pipeline container to show Started PipelineServiceApplication.
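If you run the stack with Docker Compose (an assumption, as is the pipeline service name; adjust both to your setup), starting it and watching for that log line looks roughly like:
# Start the services in the background, then follow the pipeline service log
docker compose up -d
docker compose logs -f pipeline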
2. Register a pipeline
Create a pipeline configuration that defines the source schema and PostgreSQL destination:
curl -X POST http://localhost:8080/api/v1/pipeline \
  -H "Content-Type: application/json" \
  -d '{
    "name": "employees",
    "source": {
      "schemaProperties": {
        "fields": [
          {"name": "id", "type": "int"},
          {"name": "first_name", "type": "string"},
          {"name": "last_name", "type": "string"},
          {"name": "email", "type": "string"},
          {"name": "salary", "type": "double"}
        ]
      },
      "fileAttributes": {
        "csvAttributes": {
          "delimiter": ",",
          "header": true,
          "encoding": "UTF-8"
        }
      }
    },
    "destination": {
      "database": {
        "dbName": "datris",
        "schema": "public",
        "table": "employees",
        "usePostgres": true
      }
    }
  }'
3. Upload a CSV file
Create a sample CSV file:
cat > /tmp/employees.csv << 'EOF'
id,first_name,last_name,email,salary
1,John,Doe,john@example.com,75000.00
2,Jane,Smith,jane@example.com,82000.00
3,Bob,Johnson,bob@example.com,68000.00
EOF
Upload it:
curl -X POST http://localhost:8080/api/v1/pipeline/upload \
  -F "file=@/tmp/employees.csv" \
  -F "pipeline=employees"
The response contains a pipelineToken for tracking the run.
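As a rough sketch, the body looks something like this (only the pipelineToken field is confirmed by this guide, and the value is shortened):
{
  "pipelineToken": "pt-abc12345-6789-..."
}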
4. Check status
curl "http://localhost:8080/api/v1/pipeline/status?pipelinetoken=pt-abc12345-6789-..."
The status shows processing stages: begin, processing, end.
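As an illustration only (the field names are assumptions based on the stages above), a completed run might report:
{
  "pipelineToken": "pt-abc12345-6789-...",
  "status": "end"
}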
5. Verify the data
Query the data via the REST API:
curl -X POST http://localhost:8080/api/v1/query/postgres \
  -H "Content-Type: application/json" \
  -d '{"sql": "SELECT * FROM public.employees"}'
The default database is datris — this is where all PostgreSQL pipelines write unless you specify a different dbName in the pipeline’s destination config.
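For example, to write to a different database you would change only that field in the destination block when registering the pipeline (the analytics name below is illustrative):
"destination": {
  "database": {
    "dbName": "analytics",
    "schema": "public",
    "table": "employees",
    "usePostgres": true
  }
}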
Next Steps