The data platform monitors the
{env}-raw MinIO bucket for new files. When a file is dropped into this bucket, the data platform picks it up and processes it automatically. There are three patterns: single data file, metadata file with data file, and bulk upload.
File Naming Convention
Single Data File
Place a file in the MinIO raw bucket following this naming pattern:

| Segment | Description |
|---|---|
| pipeline | Name of the registered pipeline configuration |
| publishertoken | Optional identifier for the data publisher; must be a unique value (UUID recommended) |
| anything | Any additional text (timestamp, version, etc.) |
| pipeline (literal) | Fixed literal sentinel string |
| ext | File extension: csv, json, xml |
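The segment table, together with the example data file shown below (stock_price.2026-03-15.pipeline.csv), suggests dot-separated segments in table order, with the optional publisher token omitted when absent. A minimal sketch of a name builder under that assumption (the function name and signature are illustrative, not part of the platform):

```python
from typing import Optional

def build_object_name(pipeline: str, ext: str,
                      anything: str = "",
                      publisher_token: Optional[str] = None) -> str:
    """Assemble a raw-bucket object name from the documented segments.

    Assumes dot-separated segments in table order, with the optional
    publishertoken dropped when not supplied (inferred from the docs'
    example, not an official spec).
    """
    if ext not in ("csv", "json", "xml"):
        raise ValueError(f"unsupported extension: {ext}")
    segments = [pipeline]
    if publisher_token:
        segments.append(publisher_token)
    if anything:
        segments.append(anything)
    segments.append("pipeline")  # fixed literal sentinel
    return ".".join(segments) + f".{ext}"
```

For example, `build_object_name("stock_price", "csv", anything="2026-03-15")` reproduces the data-file name used in the next section.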
Metadata File + Data File
For more control, upload the data file first, then upload a metadata file (.metadata.json):
Data file: stock_price.2026-03-15.pipeline.csv
Metadata file: stock_price.2026-03-15.metadata.json
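The pairing rule implied by the two example names (the trailing `.pipeline.<ext>` becomes `.metadata.json`, with the stem unchanged) can be sketched as a small helper; this derivation is inferred from the examples above, not from a formal spec:

```python
def metadata_name_for(data_file: str) -> str:
    """Derive the companion .metadata.json name from a data file name.

    Inferred from the docs' example pair: the trailing
    ".pipeline.<ext>" segment is replaced with ".metadata.json".
    """
    stem, sep, _ = data_file.rpartition(".pipeline.")
    if not sep:
        raise ValueError(f"not a pipeline data file name: {data_file}")
    return stem + ".metadata.json"
```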
Bulk Upload
Upload multiple files to a directory, then trigger processing with a metadata file. Use a new dataFilePath directory for each bulk upload. Do not reuse directories.
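A bulk-trigger metadata file might look like the sketch below. Only the dataFilePath field is mentioned in this document; the value shown and any additional fields the platform may require are illustrative assumptions:

```json
{
  "dataFilePath": "bulk-uploads/2026-03-15-run1/"
}
```

Because directories must not be reused, embedding a date or run identifier in the path is one simple way to keep each bulk upload unique.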
Compressed Files
Compressed archives (.zip, .gz, .tar, .jar) are automatically decompressed. The pipeline name and publisher token are parsed from the archive filename:
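To illustrate the decompression step, here is a sketch of how the supported archive types could be expanded using the Python standard library. This mirrors what the platform does internally; it is not the platform's actual code. Note that .jar files are zip-formatted, so they take the same path as .zip:

```python
import gzip
import shutil
import tarfile
import zipfile
from pathlib import Path

def decompress(archive: Path, dest: Path) -> None:
    """Expand a supported archive into dest (sketch of the platform's step)."""
    dest.mkdir(parents=True, exist_ok=True)
    suffix = archive.suffix.lower()
    if suffix in (".zip", ".jar"):      # .jar is zip-formatted
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(dest)
    elif suffix == ".tar":
        with tarfile.open(archive) as tf:
            tf.extractall(dest)
    elif suffix == ".gz":               # single gzip-compressed file
        target = dest / archive.with_suffix("").name
        with gzip.open(archive, "rb") as src, open(target, "wb") as out:
            shutil.copyfileobj(src, out)
    else:
        raise ValueError(f"unsupported archive type: {archive.name}")
```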
How It Works
- A file is placed in a MinIO bucket (via mc cp, the API, or another system)
- MinIO sends an S3-compatible event notification to the ActiveMQ file-notifier queue
- The pipeline polls the queue on a configurable schedule (default: every 5 seconds)
- The pipeline parses the event, resolves the pipeline, and processes the file
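The event-parsing step can be sketched against the standard S3 event notification shape that MinIO emits (bucket name and URL-encoded object key under Records[].s3). The function below is illustrative, not the platform's actual consumer:

```python
import json
from urllib.parse import unquote_plus

def parse_s3_event(message_body: str) -> list[tuple[str, str]]:
    """Extract (bucket, object_key) pairs from an S3-compatible event.

    MinIO publishes the standard S3 notification layout; object keys
    arrive URL-encoded, so they are decoded here.
    """
    event = json.loads(message_body)
    results = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        key = unquote_plus(s3["object"]["key"])
        results.append((bucket, key))
    return results
```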
Configuration
The queue name and polling interval are configured in application.yaml:
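As a sketch only, such a configuration section might look like the fragment below. The key names and nesting are assumptions for illustration, since this document does not show the actual schema:

```yaml
# Illustrative fragment; real key names depend on the platform's application.yaml schema.
file-notifier:
  queue-name: file-notifier   # ActiveMQ queue that receives MinIO events
  poll-interval: 5s           # polling schedule (default: every 5 seconds)
```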
