NAME READY STATUS RESTARTS AGE
messaging-topology-operator-68bdb4ffcd-9fq6n 1/1 Running 0 54m
rabbitmq-cluster-operator-645d7645c-sshhm 1/1 Running 0 54m
Deploying the setup
Please review the YAML file to see how the users and permissions are defined using the Messaging Topology Operator.
This Kubernetes configuration defines an upstream RabbitMQ cluster named upstream-rabbit with specific plugins enabled, including stream, schema sync, and standby replication. It configures schema and standby replication to connect to specific endpoints with provided credentials.
A Secret named upstream-secret stores the username and password. A User named rabbitmq-replicator is created, referencing the upstream cluster and importing credentials from the Secret. Permissions are granted to this user for the rabbitmq_schema_definition_sync vhost and a new vhost named test.
A Policy named upstream-policy is applied to queues in the test vhost, enabling remote data center replication. Finally, configurations for the default vhost ("/") are also included with similar replication policies and permissions for the rabbitmq-replicator user.
kubectl apply -f upstream-config.yaml
This Kubernetes configuration defines a downstream RabbitMQ cluster, downstream-rabbit, designed to replicate schema and data from an upstream cluster. It enables necessary plugins like stream and standby replication. The configuration specifies the upstream connection details, including address and credentials. It also defines rules for synchronizing schema definitions and managing local entities (users, queues, etc.) on the downstream cluster, filtering out those matching specified patterns.
kubectl apply -f downstream-config.yaml
Install the RabbitmqAdmin CLI
Interact with the RabbitMQ server using the rabbitmqadmin v2 CLI. The steps below work on macOS. For other operating systems, download the executable from the GitHub releases page and move it to /usr/local/bin.
Pull the default username and password created as a k8s Secret for RMQ:
The perf tests below are configured to use the default user created by the operator. Run this in your terminal for the instance you want to use for the labs below. The script will export the username and password into your terminal session.
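As a minimal sketch (the cluster name upstream-rabbit and the default namespace are assumptions; adjust them to the instance you are targeting), pulling the credentials from the <cluster>-default-user Secret created by the Cluster Operator looks like this:
# Assumed names: Secret "upstream-rabbit-default-user" in the "default" namespace.
export username=$(kubectl get secret upstream-rabbit-default-user -n default -o jsonpath='{.data.username}' | base64 --decode)
export password=$(kubectl get secret upstream-rabbit-default-user -n default -o jsonpath='{.data.password}' | base64 --decode)
# Optional: build the AMQP URI referenced as $service by the perf-test commands further below.
export service="amqp://$username:$password@upstream-rabbit.default.svc.cluster.local:5672"
echo "username: $username"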
When running on container platforms like Kubernetes, we need to port-forward to access the management UI. You can access the blue and green clusters using the URLs below.
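A sketch of the port-forwards, assuming the upstream-rabbit and downstream-rabbit Services from this workshop (substitute your own cluster names):
# Forward each cluster's management port to a different local port (run in separate terminals).
kubectl port-forward svc/upstream-rabbit 15672:15672
kubectl port-forward svc/downstream-rabbit 15673:15672
# Management UIs: http://localhost:15672 and http://localhost:15673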
"RabbitMQ has a throughput testing tool, PerfTest, that is based on the Java client and can be configured to simulate basic workloads and more advanced workloads as well. PerfTest has extra tools that produce HTML graphs of the output.
A RabbitMQ cluster can be limited by a number of factors, from infrastructure-level constraints (e.g. network bandwidth) to RabbitMQ configuration and topology to applications that publish and consume. PerfTest can demonstrate baseline performance of a node or a cluster of nodes.
PerfTest uses the AMQP 0.9.1 protocol to communicate with a RabbitMQ cluster. Use Stream PerfTest if you want to test RabbitMQ Streams with the stream protocol."
Classic Queue Perf Test
These kubectl run commands launch one-off Kubernetes Pods in the default namespace to run RabbitMQ performance tests using the pivotalrabbitmq/perf-test image.
The first command starts a Pod named sa-workshop with 10 producers sending 10,000 messages each to a pre-declared queue "sa-workshop" with routing key "sa-workshop" at a rate of 100 messages/second. It also starts 5 consumers reading from the same queue at 10 messages/second, acknowledging every 10 messages. The queue will not auto-delete.
The second command starts a Pod named sa-workshop-new with 10 producers sending 10,000 messages each to a pre-declared queue "sa-workshop-new" with routing key "sa-workshop-new" at a rate of 100 messages/second. This test does not include any consumers, and the queue will also not auto-delete.
Both commands target the RabbitMQ instance specified by the $service URI using provided credentials. They are designed to generate load on the RabbitMQ server for performance evaluation.
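A sketch of the first of these commands as described above (a reconstruction under assumptions, not the verbatim workshop command; $service is the AMQP URI exported earlier and includes the credentials):
# One-off Pod: 10 producers at 100 msg/s (10,000 messages each), 5 consumers at 10 msg/s
# acking every 10 messages, against the pre-declared, non-auto-delete queue "sa-workshop".
kubectl run sa-workshop --image=pivotalrabbitmq/perf-test --restart=Never -- \
  --uri "$service" \
  --producers 10 --pmessages 10000 --rate 100 \
  --consumers 5 --consumer-rate 10 --multi-ack-every 10 \
  --queue sa-workshop --routing-key sa-workshop \
  --predeclared --auto-delete false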
These kubectl run commands initiate performance tests against a RabbitMQ instance (specified by $service) within the default namespace as one-off Pods, except for perf-syn-check.
The first command (sa-workshop-quorum) tests a quorum queue. It uses 10 producers sending 1,000 messages each to a pre-declared "sa-workshop-quorum" queue with the same routing key at 100 messages/second. 5 consumers read from it at 10 messages/second, acknowledging every 10 messages.
The second command (sa-workshop-quorum-new) also tests a quorum queue. It uses 10 producers sending 1,000 messages each to a pre-declared "sa-workshop-quorum" queue (note the queue name is the same as the first command, potentially leading to interaction) with the routing key "sa-workshop-quorum-new" at 100 messages/second. It has no consumers.
The third command (perf-syn-check) runs persistently (--restart=Always) to perform a synthetic health check. It sends 5 persistent messages to the "q.sys.synthetic-health-check" queue over 120 iterations with specific message size, batch size, and other parameters, using one consumer.
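A sketch of the first quorum-queue command (same caveats as above; quorum queues are durable, so persistent messages are used):
# One-off Pod against the pre-declared quorum queue "sa-workshop-quorum":
# 10 producers at 100 msg/s (1,000 messages each), 5 consumers at 10 msg/s acking every 10 messages.
kubectl run sa-workshop-quorum --image=pivotalrabbitmq/perf-test --restart=Never -- \
  --uri "$service" \
  --producers 10 --pmessages 1000 --rate 100 \
  --consumers 5 --consumer-rate 10 --multi-ack-every 10 \
  --queue sa-workshop-quorum --routing-key sa-workshop-quorum \
  --predeclared --flag persistent --auto-delete false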
This kubectl run command deploys a persistent Pod named stream in the default namespace. It uses the pivotalrabbitmq/perf-test image to benchmark a RabbitMQ stream queue.
The test involves 10 producers sending a total of 100,000 messages to a pre-declared stream queue named "sa-workshop-stream" with the routing key "sa-workshop-stream" at 100 messages/second per producer. Simultaneously, 5 consumers read from the same stream queue at 10 messages/second each, acknowledging every message. Each consumer uses 10 concurrent connections. This command continuously evaluates the performance of RabbitMQ streams under load.
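A rough sketch of that command (reconstructed from the description above, not the exact workshop invocation; consuming a stream queue over AMQP 0.9.1 requires manual acks and a consumer prefetch, hence --qos):
# Long-running Pod against the pre-declared stream queue "sa-workshop-stream":
# 10 producers at 100 msg/s (10,000 messages each), 5 consumers at 10 msg/s with a prefetch of 100.
kubectl run stream --image=pivotalrabbitmq/perf-test --restart=Always -- \
  --uri "$service" \
  --producers 10 --pmessages 10000 --rate 100 \
  --consumers 5 --consumer-rate 10 --qos 100 \
  --queue sa-workshop-stream --routing-key sa-workshop-stream \
  --predeclared --flag persistent --auto-delete false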
We have enabled the Shovel plugin on the cluster via the YAML configuration. Shovel is an amazing plugin you can leverage to move messages from one queue to another.
Use cases:
Moving messages between queues on the same or a different cluster
The queue type has changed
The queue name has changed
The queue is full and needs to be drained
This kubectl exec command directly executes rabbitmqctl within the upstream-rabbit-server-0 Pod in the default namespace. It configures a shovel named my-shovel.
This shovel is set up to move messages from the quorum queue named sa-workshop-quorum on the upstream-rabbit service to another queue named sa-workshop-shovelq on the same service. The destination queue sa-workshop-shovelq is explicitly created as a quorum queue using the dest-queue-args. This command essentially sets up a mechanism to transfer messages between two quorum queues within the same RabbitMQ cluster. The rabbitmq_shovel plugin must be enabled for this to function.
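A minimal sketch of that dynamic shovel definition (the URIs are assumptions: "amqp://" connects via the local node; for a remote cluster, include the host and credentials in the URI):
# Create a dynamic shovel "my-shovel" that moves messages from the quorum queue
# "sa-workshop-quorum" to "sa-workshop-shovelq", declaring the destination as a quorum queue.
kubectl exec -n default upstream-rabbit-server-0 -- rabbitmqctl set_parameter shovel my-shovel \
  '{"src-protocol": "amqp091", "src-uri": "amqp://", "src-queue": "sa-workshop-quorum",
    "dest-protocol": "amqp091", "dest-uri": "amqp://", "dest-queue": "sa-workshop-shovelq",
    "dest-queue-args": {"x-queue-type": "quorum"}}'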
The command below will move messages from sa-workshop-stream to sa-workshop-shovel.
Verify Enterprise RMQ Operations Upgrade for Operators
kubectl get pods -n rabbitmq-system
Edit the upstream-rabbit cluster YAML, remove the image line, and save it.
kubectl edit rabbitmqclusters.rabbitmq.com -n default upstream-rabbit
Repeat the above for the downstream cluster to perform its upgrade.
kubectl edit rabbitmqclusters.rabbitmq.com -n default downstream-rabbit
🐰🐇 LAB 10: Working with the RabbitmqAdmin CLI 🐇🐰
NOTE: To simplify interacting with the rabbitmqadmin v2 CLI, we can create the guest user below with admin privileges. Alternatively, consider using the default credentials and specifying them as options to the rabbitmqadmin v2 CLI.
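For example, with the default credentials exported earlier and the management API port-forwarded to localhost, an invocation looks roughly like this (the flag names are assumptions based on rabbitmqadmin v2's global options; confirm with rabbitmqadmin --help):
# List queues in the default vhost using the operator-created default user.
rabbitmqadmin --host localhost --port 15672 --username "$username" --password "$password" list queues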
🎉 Congratulations, Messaging Maestro! 🎉 You've now taken a fantastic journey through deploying and interacting with RabbitMQ on Kubernetes! You've installed the operator, deployed single and multi-node clusters, enabled plugins, managed users, and even run performance tests.
Keep exploring, experimenting, and having fun with RabbitMQ and Kubernetes! The world of distributed messaging awaits your command! 🐰🐇
🎶🔥🐰🐇 One Server to Queue them All! 🐇🐰🔥🎶
An AI-generated song dedicated to RabbitMQ. Enjoy the music! 🎶🔥🐰🐇
Troubleshooting
Verify there is no typo in the token or username when logging in to the Helm repo to pull the enterprise images.
Check the pod logs.
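For example (resource names taken from this workshop; adjust the namespace and cluster name for your environment):
# Operator logs
kubectl logs -n rabbitmq-system deployment/rabbitmq-cluster-operator
kubectl logs -n rabbitmq-system deployment/messaging-topology-operator
# Logs of a RabbitMQ node
kubectl logs -n default upstream-rabbit-server-0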
kubectl command to clean up pods that are not in the Running state. Useful when you need to rerun the perf-test pods.
kubectl -n default delete pod $(kubectl -n default get pod -o jsonpath='{.items[?(@.status.phase!="Running")].metadata.name}')