Website | Getting started | Documentation | Discord | Crates
Iggy is a persistent message streaming platform written in Rust, supporting QUIC, TCP (custom binary specification) and HTTP (regular REST API) transport protocols. Currently, running as a single server, it allows creating streams, topics, partitions and segments, and sending/receiving messages to/from them. The messages are stored on disk as an append-only log and are persisted between restarts.
The goal of the project is to make a distributed streaming platform (running as a cluster) which will be able to scale horizontally and handle millions of messages per second (actually, it's already very fast; see the benchmarks below).
The name is an abbreviation for the Italian Greyhound - small yet extremely fast dogs, the best in their class. Just like my lovely Fabio & Cookie ❤️
- Highly performant, persistent append-only log for message streaming
- Very high throughput for both writes and reads
- Low latency and predictable resource usage thanks to Rust being a compiled language with no GC
- User authentication and authorization with granular permissions and PATs (Personal Access Tokens)
- Support for multiple streams, topics and partitions
- Support for multiple transport protocols (QUIC, TCP, HTTP)
- Fully operational RESTful API which can be optionally enabled
- Available client SDK in multiple languages
- Works directly with binary data (no enforced schema or serialization/deserialization)
- Configurable server features (e.g. caching, segment size, data flush interval, transport protocols etc.)
- Possibility of storing the consumer offsets on the server
- Multiple ways of polling the messages:
- By offset (using the indexes)
- By timestamp (using the time indexes)
- First/Last N messages
- Next N messages for the specific consumer
- Possibility of auto committing the offset (e.g. to achieve at-most-once delivery)
- Consumer groups providing message ordering and horizontal scaling across the connected clients
- Message expiry with auto deletion based on the configurable retention policy
- Additional features such as server-side message deduplication
- TLS support for all transport protocols (TCP, QUIC, HTTPS)
- Optional server-side as well as client-side data encryption using AES-256-GCM
- Optional metadata support in the form of message headers
- Built-in CLI to manage the streaming server
- Built-in benchmarking app to test the performance
- Single binary deployment (no external dependencies)
- Running as a single node (no cluster support yet)
- Streaming server caching and I/O improvements
- Low level optimizations (zero-copy etc.)
- Clustering & data replication
- Rich console CLI
- Advanced Web UI
- Developer friendly SDK supporting multiple languages
- Plugins & extensions support
For detailed information about the current progress, please refer to the project board.
The brand new, rich, interactive CLI is being implemented under the cmd project to provide the best developer experience. This will be a great addition to the Web UI, especially for all the developers who prefer using console tools.
There's an ongoing effort to build the administrative web UI for the server, which will allow managing the streams, topics, partitions, messages and so on. Check the Web UI repository.
You can find the `Dockerfile` and `docker-compose` in the root of the repository. To build and start the server, run: `docker compose up`.

Additionally, you can run the CLI, which is available in the running container, by executing: `docker exec -it iggy-server /cli`.

Keep in mind that running the container on an OS other than Linux, where Docker runs in a VM, might result in significant performance degradation.

The official images can be found here; simply run `docker pull iggyrs/iggy`.
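Put together, a minimal session using exactly the commands mentioned above might look like this:

```bash
# Build and start the server
docker compose up

# In another terminal: run the CLI available inside the running container
docker exec -it iggy-server /cli

# Or pull the official image instead of building locally
docker pull iggyrs/iggy
```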
The default configuration can be found in the `server.toml` (the default one) or `server.json` file in the `configs` directory.

The configuration file is loaded from the current working directory, but you can specify the path to the configuration file by setting the `IGGY_CONFIG_PATH` environment variable, for example `export IGGY_CONFIG_PATH=configs/server.json` (or the equivalent command for your OS).
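For example, on Linux or macOS:

```bash
# Point the server at the JSON configuration instead of the default server.toml
export IGGY_CONFIG_PATH=configs/server.json

# The server will pick up the path from the environment variable
cargo r --bin iggy-server
```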
For detailed documentation of the configuration file, please refer to the configuration section.
Build the project (the longer compilation time is due to LTO being enabled in the release profile):

`cargo build`
Run the tests:

`cargo test`
Start the server:

`cargo r --bin iggy-server`
Start the CLI (transports: `quic`, `tcp`, `http`):

`cargo r --bin iggy-cli -- --transport tcp`
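The same flag selects any of the listed transports, for example:

```bash
cargo r --bin iggy-cli -- --transport tcp   # custom binary protocol
cargo r --bin iggy-cli -- --transport quic
cargo r --bin iggy-cli -- --transport http
```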
Authenticate yourself with the default credentials:

`user.login|iggy|iggy`
Create a stream with ID 1 named `dev`:

`stream.create|1|dev`
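Spelled out, the field layout (as inferred from this example) is:

```
stream.create|stream_id|name
```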
List available streams:

`stream.list`
Get stream details (ID 1):

`stream.get|1`
Create a topic named `sample` (ID 1) for stream `dev` (ID 1), with 2 partitions (IDs 1 and 2) and disabled message expiry (0 seconds):

`topic.create|1|1|2|0|sample`
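Reading the fields off this example, the layout appears to be:

```
topic.create|stream_id|topic_id|partitions_count|message_expiry_seconds|name
```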
List available topics for stream `dev` (ID 1):

`topic.list|1`
Get topic details (ID 1) for stream `dev` (ID 1):

`topic.get|1|1`
Send a message 'hello world' (ID 1) to stream `dev` (ID 1), topic `sample` (ID 1), partition 1:

`message.send|1|1|p|1|1|hello world`
Send another message 'lorem ipsum' (ID 2) to the same stream, topic and partition:

`message.send|1|1|p|1|2|lorem ipsum`
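From the two examples, the field layout appears to be as follows, where `p` seems to select addressing by partition ID:

```
message.send|stream_id|topic_id|p|partition_id|message_id|payload
```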
Poll messages as a regular consumer `c` (`g` for consumer group) with ID 0, from stream `dev` (ID 1), topic `sample` (ID 1) and partition 1, starting at offset (`o`) 0, with a messages count of 2, without auto commit (`n`) (auto commit would store the consumer offset on the server), and using the string format `s` to render the messages payload:

`message.poll|c|0|1|1|1|o|0|2|n|s`
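Mapping each field of that command to the description above (an informal annotation, not an official reference):

```
message.poll|consumer_kind|consumer_id|stream_id|topic_id|partition_id|polling_kind|value|count|auto_commit|format
message.poll|c|0|1|1|1|o|0|2|n|s
```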
Finally, restart the server to see that it is able to load the persisted data.
The HTTP API endpoints can be found in the `server.http` file, which can be used with the REST Client extension for VS Code.
To see detailed logs from the CLI/server, run it with the `RUST_LOG=trace` environment variable.
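For example, in a Unix shell:

```bash
# Enable trace-level logging for the server
RUST_LOG=trace cargo r --bin iggy-server
```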
See the images below:

- Files structure
- Server start
- CLI start
- Server restart
You can find the sample consumer & producer applications under the `examples` directory. The purpose of these apps is to showcase the usage of the client SDK. To find out more about building the applications, please refer to the getting started guide.
To run the example, first start the server with `cargo r --bin iggy-server` and then run the producer and consumer apps with `cargo r --example message-envelope-producer` and `cargo r --example message-envelope-consumer` respectively.
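For instance, in three separate terminals:

```bash
# Terminal 1: the server
cargo r --bin iggy-server

# Terminal 2: the producer example
cargo r --example message-envelope-producer

# Terminal 3: the consumer example
cargo r --example message-envelope-consumer
```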
You might start multiple producers and consumers at the same time to see how the messages are being handled across multiple clients. Check the `Args` struct to see the available options, such as the transport protocol, stream, topic, partition, consumer ID, message size etc.
By default, the consumer will poll the messages using the next available offset with auto commit enabled, to store its offset on the server. With this approach, you can easily achieve at-most-once delivery.
To benchmark the project, first start the server in release mode: `cargo r --bin iggy-server -r` and then run the benchmarking app:

`cargo r --bin iggy-bench -r -- --tcp --test-send-messages --streams 10 --producers 10 --parallel-producer-streams --messages-per-batch 1000 --message-batches 1000 --message-size 1000`

`cargo r --bin iggy-bench -r -- --tcp --test-poll-messages --streams 10 --consumers 10 --parallel-consumer-streams --messages-per-batch 1000 --message-batches 1000`
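For a rough sense of the data volume (a back-of-the-envelope estimate based on the flags above, assuming each of the 10 producers writes its full batch count): 1000 messages per batch × 1000 batches × 1000 bytes ≈ 1 GB per producer, i.e. about 10 GB of messages written in total.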
Depending on the hardware, the settings in `configs/server.toml` (the default configuration) or the `server.json` file, the transport protocol (`quic`, `tcp` or `http`) and the payload size (`messages-per-batch * message-size`), you might expect over 4000 MB/s (e.g. 4 million 1 KB messages per second) throughput for writes and 6000 MB/s for reads. The current results have been achieved on an Apple M1 Max with 64 GB RAM.
- Write benchmark
- Read benchmark