franz
Types
pub opaque type ClientBuilder
pub type ClientConfig {
RestartDelaySeconds(Int)
ReconnectCoolDownSeconds(Int)
AllowTopicAutoCreation(Bool)
AutoStartProducers(Bool)
DefaultProducerConfig(List(producer_config.ProducerConfig))
UnknownTopicCacheTtl(Int)
}
Constructors

- RestartDelaySeconds(Int)
  How long to wait between attempts to restart the FranzClient process when it crashes. Default: 10 seconds.

- ReconnectCoolDownSeconds(Int)
  How many seconds to wait before retrying to establish a new connection to the Kafka partition leader. Default: 1 second.

- AllowTopicAutoCreation(Bool)
  By default, Franz respects the broker's topic auto-creation setting, i.e. whether auto.create.topics.enable is set in the broker configuration. However, if allow_topic_auto_creation is set to False in the client config, Franz avoids sending metadata requests that may cause auto-creation of the topic, regardless of the broker configuration. Default: True.

- AutoStartProducers(Bool)
  If True, the Franz client spawns a producer automatically when a user calls produce without having explicitly called Franz.start_client(). Useful for applications that do not know beforehand which topics they will be working with. Default: False.

- DefaultProducerConfig(List(producer_config.ProducerConfig))
  Producer configuration to use when auto_start_producers is True. Default: [].

- UnknownTopicCacheTtl(Int)
  How long an unknown_topic error is cached, in milliseconds. Default: 120000.
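Each ClientConfig value is applied individually with with_config (documented under Functions below); since with_config takes the builder as its first argument, a list of options folds directly over it. A minimal sketch with illustrative values and endpoint:

```gleam
import franz
import gleam/list

pub fn build_client() -> Result(franz.FranzClient, franz.FranzError) {
  // Illustrative overrides of the defaults listed above.
  let configs = [
    franz.RestartDelaySeconds(5),
    franz.AllowTopicAutoCreation(False),
    franz.UnknownTopicCacheTtl(60_000),
  ]
  let builder = franz.new([franz.Endpoint("localhost", 9092)])
  list.fold(configs, builder, franz.with_config)
  |> franz.start()
}
```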
pub type ConsumerGroup {
ConsumerGroup(group_id: String, protocol_type: String)
}
Constructors

- ConsumerGroup(group_id: String, protocol_type: String)
pub type Endpoint {
Endpoint(host: String, port: Int)
}
Constructors

- Endpoint(host: String, port: Int)
pub type FetchOption {
MaxWaitTime(Int)
MinBytes(Int)
MaxBytes(Int)
IsolationLevel(isolation_level.IsolationLevel)
}
Constructors

- MaxWaitTime(Int)
  The maximum time (in milliseconds) to block and wait until there are enough messages that total at least min_bytes bytes. The wait ends as soon as either min_bytes is satisfied or max_wait_time is exceeded, whichever comes first. Defaults to 1 second.

- MinBytes(Int)
  The minimum size of the message set. If there are not enough messages, Kafka will block and wait (but at most for max_wait_time). This implies that the response may actually be smaller if the time runs out. If you set it to 0, Kafka will respond immediately (possibly with an empty message set). You can use this option together with max_wait_time to configure the throughput, latency, and size of message sets. Defaults to 0.

- MaxBytes(Int)
  The maximum size of the message set. Note that this is not an absolute maximum: if the first message in the message set is larger than this value, the message will still be returned to ensure that progress can be made. Defaults to 1 MB.

- IsolationLevel(isolation_level.IsolationLevel)
  Controls the visibility of transactional records. Using read_uncommitted makes all records visible. With read_committed, non-transactional and committed transactional records are visible. To be more concrete, read_committed returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard aborted transactional records. Defaults to read_committed.
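The options above are combined into a single list passed to fetch. A sketch with illustrative values overriding the defaults described above:

```gleam
import franz

// Wait up to 500 ms, return as soon as any data is available,
// and cap the message set at 1 MiB. Values are examples only.
pub fn fetch_options() -> List(franz.FetchOption) {
  [
    franz.MaxWaitTime(500),
    franz.MinBytes(1),
    franz.MaxBytes(1_048_576),
  ]
}
```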
pub type FranzClient
pub type FranzError {
UnknownError
ClientDown
UnknownTopicOrPartition
ProducerDown
TopicAlreadyExists
ConsumerNotFound(String)
ProducerNotFound(String, Int)
OffsetOutOfRange
}
Constructors

- UnknownError
- ClientDown
- UnknownTopicOrPartition
- ProducerDown
- TopicAlreadyExists
- ConsumerNotFound(String)
- ProducerNotFound(String, Int)
- OffsetOutOfRange
pub type KafkaMessage {
KafkaMessage(
offset: Int,
key: BitArray,
value: BitArray,
timestamp_type: TimeStampType,
timestamp: Int,
headers: List(#(String, String)),
)
KafkaMessageSet(
topic: String,
partition: Int,
high_wm_offset: Int,
messages: List(KafkaMessage),
)
}
Constructors

- KafkaMessage(offset: Int, key: BitArray, value: BitArray, timestamp_type: TimeStampType, timestamp: Int, headers: List(#(String, String)))
- KafkaMessageSet(topic: String, partition: Int, high_wm_offset: Int, messages: List(KafkaMessage))
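Fetched data typically arrives as the KafkaMessageSet variant wrapping individual KafkaMessage records. A sketch of unpacking both variants by pattern matching (the printed labels are illustrative):

```gleam
import franz
import gleam/int
import gleam/io
import gleam/list

pub fn print_messages(message: franz.KafkaMessage) -> Nil {
  case message {
    franz.KafkaMessageSet(topic: topic, messages: messages, ..) -> {
      io.println("message set from topic " <> topic)
      list.each(messages, print_messages)
    }
    franz.KafkaMessage(offset: offset, ..) ->
      io.println("message at offset " <> int.to_string(offset))
  }
}
```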
pub type TimeStampType {
Undefined
Create
Append
}
Constructors

- Undefined
- Create
- Append
Functions
pub fn create_topic(
endpoints endpoints: List(Endpoint),
name name: String,
partitions partitions: Int,
replication_factor replication_factor: Int,
configs configs: List(#(String, String)),
timeout_ms timeout: Int,
) -> Result(Nil, FranzError)
Create a new topic with the given number of partitions and replication factor.
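Assuming a broker reachable on localhost:9092, a minimal sketch of creating a topic (the topic name, retention setting, and timeout are illustrative):

```gleam
import franz

pub fn main() {
  let assert Ok(Nil) =
    franz.create_topic(
      endpoints: [franz.Endpoint("localhost", 9092)],
      name: "events",
      partitions: 3,
      replication_factor: 1,
      configs: [#("retention.ms", "86400000")],
      timeout_ms: 5000,
    )
}
```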
pub fn delete_topics(
endpoints endpoints: List(Endpoint),
names names: List(String),
timeout_ms timeout: Int,
) -> Result(Nil, FranzError)
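The shape of delete_topics mirrors create_topic, taking a list of topic names. A sketch with an illustrative endpoint and topic:

```gleam
import franz

pub fn main() {
  let assert Ok(Nil) =
    franz.delete_topics(
      endpoints: [franz.Endpoint("localhost", 9092)],
      names: ["events"],
      timeout_ms: 5000,
    )
}
```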
pub fn fetch(
client client: FranzClient,
topic topic: String,
partition partition: Int,
offset offset: Int,
options fetch_options: List(FetchOption),
) -> Result(#(Int, KafkaMessage), FranzError)
Fetch a single message set from the given topic-partition. On success, the function returns the messages along with the last stable offset (when using ReadCommitted mode, the last committed offset) or the high watermark offset (the offset of the last message that was successfully copied to all replicas, incremented by 1), whichever is lower. In essence, this is the offset up to which it was possible to read the messages at the time of fetching.
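A sketch of a single fetch, assuming an already started client; the returned Int is the offset from which reading can continue (topic name and partition are illustrative):

```gleam
import franz

pub fn read_from(
  client: franz.FranzClient,
  offset: Int,
) -> Result(#(Int, franz.KafkaMessage), franz.FranzError) {
  // Wait at most 1 second for data on partition 0 of "events".
  franz.fetch(
    client: client,
    topic: "events",
    partition: 0,
    offset: offset,
    options: [franz.MaxWaitTime(1000)],
  )
}
```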
pub fn new(bootstrap_endpoints: List(Endpoint)) -> ClientBuilder
Create a new client builder with the given bootstrap endpoints.
pub fn start(
client_builder: ClientBuilder,
) -> Result(FranzClient, FranzError)
Start a new client with the given configuration.
pub fn with_config(
client_builder: ClientBuilder,
client_config: ClientConfig,
) -> ClientBuilder
Add a client configuration to the client builder.
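Putting new, with_config, and start together, a minimal builder pipeline (the endpoint and chosen options are illustrative):

```gleam
import franz

pub fn connect() -> Result(franz.FranzClient, franz.FranzError) {
  franz.new([franz.Endpoint(host: "localhost", port: 9092)])
  |> franz.with_config(franz.AutoStartProducers(True))
  |> franz.with_config(franz.ReconnectCoolDownSeconds(2))
  |> franz.start()
}
```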