celery - Improve RabbitMQ throughput
I'm using RabbitMQ and Celery in a project, and I've reached a bottleneck.
My architecture is as follows:
- 1 RabbitMQ node
- between 1 and 7 worker nodes reading from RabbitMQ via Celery
I started performance measurements by pre-populating RabbitMQ with 200k messages; a single node performs at 600 msg/sec.
Starting 2 nodes against the same pre-populated queue, I get a little under 600 msg/sec on each node.
Adding more nodes in the same scenario leads to a drastic loss in throughput, dropping to under 400 msg/sec per node with 7 nodes.
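For context, the consumer side is essentially a no-op task. A minimal sketch of the kind of setup I'm measuring follows; the app name, broker URL, and task name here are placeholders, not my real ones:

# Sketch of the benchmark setup: a trivial Celery task plus a loop that
# pre-populates the broker with 200k messages. Names and the broker URL
# are placeholders.
from celery import Celery

app = Celery('bench', broker='amqp://guest:guest@rabbitmq-host//')

@app.task
def process_message(payload):
    # Deliberately trivial, so the numbers reflect broker/worker overhead
    # rather than task runtime.
    return None

if __name__ == '__main__':
    # Pre-populate the queue before any workers are started.
    for i in range(200000):
        process_message.delay(i)

Each consumer node then runs something like celery -A bench worker.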
I've started adding settings (some from the RabbitMQ site), but they lead to no improvement.
My current configuration is:
[
 {kernel, [
   {inet_default_connect_options, [{nodelay, true}]},
   {inet_default_listen_options, [{nodelay, true}]}
 ]},
 {rabbit, [
   {tcp_listeners, [{"0.0.0.0", 5672}]},
   {vm_memory_high_watermark, 0.6},
   {tcp_listen_options, [binary,
                         {packet, raw},
                         {reuseaddr, true},
                         {backlog, 128},
                         {nodelay, true},
                         {exit_on_close, false},
                         {keepalive, true},
                         {sndbuf, 32768},
                         {recbuf, 32768}]}
 ]}
].
I've been reading blogs and posts from users who mention achieving much bigger throughput. There are mentions of 100k msg/sec, while I'm barely getting 2.8k msg/sec.
Any thoughts on how I can improve throughput?
Try using transient delivery mode by setting the following in celeryconfig.py:
CELERY_DEFAULT_DELIVERY_MODE = 'transient'
This prevents RabbitMQ from persisting messages to disk, which can improve throughput.
Also, if you're using the RabbitMQ result backend, you'll get better performance by switching to Redis.
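Putting both of those together, a minimal celeryconfig.py sketch could look like this (old-style uppercase setting names; the broker and Redis URLs are placeholders for your own hosts):

BROKER_URL = 'amqp://guest:guest@rabbitmq-host//'

# Deliver task messages as transient so RabbitMQ keeps them in memory
# instead of writing them to disk.
CELERY_DEFAULT_DELIVERY_MODE = 'transient'

# Store task results in Redis instead of the RabbitMQ (amqp) result backend.
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'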