Getting started
IntelliJ run configs are provided for all node types, and execute using the provided config/config.yaml. These configurations are stored in the .run folder and should automatically be detected by IntelliJ upon importing the project.
Quick start
Build the Astra image, then run Docker Compose to bring up the dependencies and the Astra nodes
docker build -t slackhq/astra .
docker compose up
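Optionally, confirm that everything came up before moving on. A quick check using standard Docker Compose commands:
docker compose ps        # lists each container and its current state
docker compose logs -f   # tails logs from all containers; Ctrl-C to stop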
In the Kafka container's terminal, create the input topic (the preprocessor crashes if the topic does not exist before the manager is configured in the next step)
kafka-topics.sh --create --topic test-topic-in --bootstrap-server localhost:9092
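Optionally verify the topic was created, using the same Kafka CLI in the container (assumes the default localhost:9092 listener shown above):
kafka-topics.sh --describe --topic test-topic-in --bootstrap-server localhost:9092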
Run the following two curl commands to create a dataset and assign it one partition
curl -XPOST -H 'content-type: application/json; charset=utf-8; protocol=gRPC' 'http://localhost:8083/slack.proto.astra.ManagerApiService/CreateDatasetMetadata' -d '{
"name": "test",
"owner": "test@email.com",
"serviceNamePattern": "_all"
}'
curl -XPOST -H 'content-type: application/json; charset=utf-8; protocol=gRPC' 'http://localhost:8083/slack.proto.astra.ManagerApiService/UpdatePartitionAssignment' -d '{
"name": "test",
"throughputBytes": "4000000",
"partitionIds": ["0"]
}'
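To sanity-check the configuration, you can ask the manager to list its dataset metadata. The ListDatasetMetadata method name below is an assumption based on the service's naming convention; confirm the exact RPC name in the manager UI linked below.
# ListDatasetMetadata is an assumed RPC name; verify it in the manager UI
curl -XPOST -H 'content-type: application/json; charset=utf-8; protocol=gRPC' 'http://localhost:8083/slack.proto.astra.ManagerApiService/ListDatasetMetadata' -d '{}'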
This can optionally be achieved in the manager UI at http://localhost:8083/docs
Add logs via bulk ingest
curl --location 'http://localhost:8086/_bulk' \
--header 'Content-type: application/x-ndjson' \
--data '{ "index" : { "_index" : "test", "_id" : "100" } }
{ "@timestamp": "2024-03-07T12:00:00.000Z", "level": "INFO", "message": "This is a log message", "service-name": "test" }
'
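The request body follows the Elasticsearch bulk NDJSON convention: an action line ({ "index" : ... }) followed by a document line, repeated once per document. A sketch of ingesting two more documents in a single request (the _id values and timestamps are arbitrary examples):
curl --location 'http://localhost:8086/_bulk' \
--header 'Content-type: application/x-ndjson' \
--data '{ "index" : { "_index" : "test", "_id" : "101" } }
{ "@timestamp": "2024-03-07T12:00:01.000Z", "level": "WARN", "message": "Another log message", "service-name": "test" }
{ "index" : { "_index" : "test", "_id" : "102" } }
{ "@timestamp": "2024-03-07T12:00:02.000Z", "level": "ERROR", "message": "A third log message", "service-name": "test" }
'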
Example curl to read data. Note: this is similar to the ES _msearch API, but size, lte, and gte are all currently required.
curl --location 'http://localhost:8081/_msearch' \
--header 'Content-type: application/x-ndjson' \
--data '{ "index": "test"}
{"query" : {"match_all" : {}, "gte":1625156649889,"lte":2708540790265}, "size": 500}
'
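The gte and lte values are epoch milliseconds bounding the search time range. A small sketch that queries the last 24 hours by computing the bounds in the shell (assumes GNU date, as in the Linux containers above; the %3N millisecond specifier is not available in BSD/macOS date):
# compute the time range in epoch milliseconds
lte=$(date +%s%3N)
gte=$((lte - 24 * 60 * 60 * 1000))
curl --location 'http://localhost:8081/_msearch' \
--header 'Content-type: application/x-ndjson' \
--data '{ "index": "test"}
{"query" : {"match_all" : {}, "gte":'"$gte"',"lte":'"$lte"'}, "size": 500}
'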
Query via Grafana
http://localhost:3000/explore