Byron Ruth’s Post


VP of Product and Engineering, Synadia

This is such an important insight and a key reason why NATS.io is differentiated in this space. The Kafka model of topics, partitions, and keys is optimized for writes, but readers (consumers) can't granularly filter on a subset of keys. The consumption pattern requires consuming everything on the partition and dropping unwanted data on the floor. NATS.io takes a different approach to persisted messages, which we call streams. All messages are addressed by subject (https://lnkd.in/egvGwDaW), and this enables subject-based server-side filtering to be configured for consumers. As a side note, it's interesting you mention EventStore! NATS.io has a growing community of developers practicing event sourcing and CQRS who are adopting NATS since it supports optimistic concurrency control (OCC) on streams on a per-subject basis, messaging (including highly performant pub-sub and request-reply), and materialized views like key-value and object stores. Folks will definitely want to check out the `event-sourcing` channel in the NATS Slack (https://slack.nats.io).
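To make the filtering idea concrete, here is a minimal sketch of NATS-style subject matching in Python. It follows the documented subject rules (dot-delimited tokens, `*` matches exactly one token, `>` matches one or more trailing tokens); it is an illustration of the semantics, not the server's actual implementation, and the `transactions.alice.*` subject names are made up for the example.

```python
def subject_matches(pattern: str, subject: str) -> bool:
    """Check whether a NATS subject matches a filter pattern.

    Subjects are dot-delimited tokens; '*' matches exactly one
    token and '>' matches one or more trailing tokens.
    """
    p_tokens = pattern.split(".")
    s_tokens = subject.split(".")
    for i, p in enumerate(p_tokens):
        if p == ">":
            # '>' must be the final pattern token and must cover
            # at least one remaining subject token.
            return i == len(p_tokens) - 1 and len(s_tokens) > i
        if i >= len(s_tokens):
            return False
        if p != "*" and p != s_tokens[i]:
            return False
    return len(p_tokens) == len(s_tokens)

# A consumer filtered on "transactions.alice.>" sees only Alice's
# messages; the server never delivers the rest of the stream:
events = [
    "transactions.alice.deposit",
    "transactions.bob.withdrawal",
    "transactions.alice.transfer",
]
alice_events = [e for e in events
                if subject_matches("transactions.alice.>", e)]
```

The point of the sketch is the contrast with Kafka: the filter is applied server-side per consumer, so "everything for Alice" is a one-line subject filter rather than a partition-wide scan.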

Michael Drogalis

Founder shadowtraffic.io | Rapidly simulate production traffic

There's a lot to like about the Kafka protocol, but numeric partitions were clearly a mistake. For example, say you made a transactions topic with 16 partitions:

1. Why 16? What happens if you need more? Fewer? Everyone knows how painful this is.
2. How do you consume all transactions for Alice? Subscribe to partition 4. What?!

It would be way better if partitions were lightweight and self-identifiable, e.g. a partition for Alice, a partition for Bob. Stream processing would get a whole lot simpler, too. I know EventStore has had these semantics for a while, and I think Astradot (a new competitor) is out to do this. Can't wait to see this abstraction become more widespread.

