Kafka Tuning Recommendations

  • Kafka Brokers per Server
  • Recommend one Kafka broker per server. Kafka is not only disk-intensive but can also be network-intensive, so if you run multiple brokers on a single host, network I/O can become the bottleneck. Running a single broker per host and forming a cluster across hosts also gives you better availability.
  • Increase Disks allocated to Kafka Broker
  • Kafka parallelism is largely driven by the number of disks and partitions per topic.
  • From the Kafka documentation: “We recommend using multiple drives to get good throughput and not sharing the same drives used for Kafka data with application logs or other OS filesystem activity to ensure good latency. As of 0.8 you can format and mount each drive as its own directory. If you configure multiple data directories partitions will be assigned round-robin to data directories. Each partition will be entirely in one of the data directories. If data is not well balanced among partitions this can lead to load imbalance between disks.”
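  The multi-drive layout described above maps to the broker's log.dirs setting. A minimal sketch, assuming example mount points (the paths below are illustrative, not a required layout):

```properties
# server.properties — one data directory per dedicated drive (paths are examples)
# Partitions are assigned round-robin across these directories; each partition
# lives entirely on one of them.
log.dirs=/kafka-disk1/kafka-logs,/kafka-disk2/kafka-logs,/kafka-disk3/kafka-logs
```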
  • Number of Threads
  • Make sure you set num.io.threads to at least the number of disks you are going to use; the default is 8. It can be higher than the number of disks.
  • Set num.network.threads higher based on number of concurrent producers, consumers, and replication factor.
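  As a sketch, the two thread settings above might look like this for a broker with ten data disks (the values are illustrative assumptions; tune them for your workload):

```properties
# server.properties — illustrative thread counts for a 10-disk broker
num.io.threads=10       # at least one per data disk (default is 8)
num.network.threads=6   # raise with more concurrent producers, consumers, and replicas
```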
  • Number of partitions
  • Ideally you want to assign the default number of partitions (num.partitions) to at least n-1 servers. This can break up the write workload and it allows for greater parallelism on the consumer side. Remember that Kafka does total ordering within a partition, not over multiple partitions, so make sure you partition intelligently on the producer side to parcel up units of work that might span multiple messages/events.
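  For example, following the n-1 guidance above on a hypothetical 6-broker cluster, the default partition count could be set cluster-wide (the value is an assumption for illustration):

```properties
# server.properties — default partition count for auto-created topics
# Example for a 6-broker cluster (n-1 = 5), spreading leaders across brokers
num.partitions=5
```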
  • Message Size
  • Kafka is designed for small messages, so avoid using Kafka for larger messages where possible. If that is not avoidable, there are several ways to handle larger messages (for example, around 1 MB). If the original message is JSON, XML, or text, compression is the best option to reduce its size. Large messages will affect your performance and throughput. Check your topic partitions and replica.fetch.max.bytes to make sure the totals do not exceed your physical RAM.
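  The compression advice above is set on the producer. A minimal sketch, assuming the newer producer configuration (gzip is chosen here for compression ratio; snappy trades ratio for lower CPU — the choice is an assumption, not a recommendation from this document):

```properties
# producer configuration — compress text-like payloads (JSON/XML/text) before send
compression.type=gzip
```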
  • Large Messages
  • Another approach is to break the message into smaller chunks and use the same message key so that every chunk is sent to the same partition. This way you send small messages, and they can be re-assembled on the consumer side.
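  The chunking approach above can be sketched in Python. This is a minimal illustration, not a Kafka API: the chunk tuple layout and the 100 KB chunk size are assumptions, and a real implementation would carry the index/total metadata in message headers or the payload.

```python
# Sketch: split a large payload into chunks that share one message key, so
# Kafka routes every chunk to the same partition; reassemble on the consumer.
CHUNK_SIZE = 100 * 1024  # assumed per-chunk size, well under message.max.bytes


def split_message(key, payload, chunk_size=CHUNK_SIZE):
    """Yield (key, chunk_index, total_chunks, chunk_bytes) tuples."""
    total = (len(payload) + chunk_size - 1) // chunk_size
    for i in range(total):
        yield (key, i, total, payload[i * chunk_size:(i + 1) * chunk_size])


def reassemble(chunks):
    """Rebuild the original payload from all chunks of one message."""
    ordered = sorted(chunks, key=lambda c: c[1])  # order by chunk_index
    assert all(c[2] == len(ordered) for c in ordered), "missing chunks"
    return b"".join(c[3] for c in ordered)
```

  Because all chunks share a key, Kafka's per-partition total ordering means they normally arrive in order; the sort makes reassembly robust anyway.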
  • Broker side:
  1. message.max.bytes defaults to 1000000 (~1 MB). This is the maximum size of a message that a Kafka broker will accept.
  2. replica.fetch.max.bytes defaults to 1 MB. This has to be at least as large as message.max.bytes, otherwise brokers will not be able to replicate messages.
  • Consumer side:
  1. fetch.message.max.bytes defaults to 1 MB. This is the maximum size of a message that a consumer can read; it should be equal to or larger than message.max.bytes.
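  When raising the maximum message size, the three settings above should be raised together. A sketch with an illustrative 2 MB limit (the value is an assumption, not a recommendation):

```properties
# broker: server.properties
message.max.bytes=2097152
replica.fetch.max.bytes=2097152    # >= message.max.bytes, or replication stalls

# consumer configuration
fetch.message.max.bytes=2097152    # >= message.max.bytes, or large messages cannot be read
```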
  • Kafka Heap Size
  • By default the Kafka broker JVM heap is set to 1 GB; this can be increased using the Ambari kafka-env template. When you are sending large messages, JVM garbage collection can be an issue, so try to keep the Kafka heap size below 4 GB.
  • Example: In kafka-env.sh add the following settings.
  • export KAFKA_HEAP_OPTS="-Xmx16g -Xms16g"
  • export KAFKA_JVM_PERFORMANCE_OPTS="-XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80"
  • Dedicated Zookeeper
  • Have a separate ZooKeeper cluster dedicated to Storm/Kafka operations. This will improve Storm's and Kafka's performance when writing offsets to ZooKeeper, since they will not be competing with HBase or other components for read/write access.
  • ZK on separate nodes from Kafka Broker
  • Do not install ZooKeeper nodes on the same hosts as Kafka brokers if you want optimal Kafka performance: both Kafka and ZooKeeper are disk I/O intensive.
  • Disk Tuning sections
  • Please review the Kafka documentation's disk tuning sections.
  • Disable THP according to .
  • Either the ext4 or XFS filesystem is recommended for its performance benefits.
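  A common disk-tuning step for Kafka data volumes, sketched as an /etc/fstab entry (the device path and mount point are illustrative assumptions):

```
# /etc/fstab — example XFS mount for a Kafka data disk
# noatime avoids a metadata write on every read, which suits Kafka's sequential I/O
/dev/sdb1  /kafka-disk1  xfs  noatime  0 0
```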
  • Minimal replication
  • If you are doing replication, start with 2x rather than 3x for Kafka clusters larger than 3 machines. Alternatively, use 2x even on a 3-node cluster if you are able to reprocess upstream from your source.
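  The 2x guidance above can be applied cluster-wide for auto-created topics, as a sketch:

```properties
# server.properties — default replication for auto-created topics
default.replication.factor=2
```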
  • Avoid Cross Rack Kafka deployments
  • Avoid cross-rack Kafka deployments until Kafka 0.8.2 — see: