Potkay, Peter M (CTO Architecture + Engineering)
2014-02-06 12:55:34 UTC
You can vote for this RFE here if you think it's a good idea:
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=44622
Description:
Similar to z/OS, we would like the ability to have one or more queues use a separate and dedicated area of storage on Windows and Unix MQ systems.
Use case:
For example on Windows, I would like to be able to create a T:\ drive and then have the SYSTEM.DEAD.LETTER.QUEUE use that for its storage, while the rest of the Queue Manager's queues reside on the E:\ drive. On Linux, I would like to be able to create a new file system separate from /var/mqm and put QM1.XMITQ and QM2.XMITQ there, while all other queues reside in the default location.
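On z/OS this separation already exists through storage classes, which map queues to page sets. A sketch of the existing z/OS MQSC for comparison (the storage class name and page set number are illustrative):

```
* z/OS only: map a storage class to a dedicated page set,
* then assign the DLQ to that storage class
DEFINE STGCLASS(DLQSC) PSID(3)
DEFINE QLOCAL(SYSTEM.DEAD.LETTER.QUEUE) STGCLASS(DLQSC) REPLACE
```

The RFE is essentially asking for an equivalent mechanism on distributed platforms, where today every queue under a queue manager shares one filesystem.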
Another use case might be to have all the queue manager's system queues (other than the DLQ) reside on the default storage, have the DLQ reside on its own storage, and have app queues reside on a third section of storage. If a new app comes to this queue manager and occasionally needs very deep queues, we could create a fourth area of storage and build its queues there.
Another use case would be to get a very large NAS qtree, maybe 750 GB, to be used for Dead Letter Queues. Connect this NAS to each MQ server over 10 Gigabit Ethernet, and then aim each QM's DLQ at this one common qtree. The odds of multiple QMs needing a large amount of DLQ storage at the same time are very small. But at any one time each QM would have the ability to queue up to 750 GB of dead letter messages, without having to give 750 GB of storage to QM1, another 750 GB to QM2, another 750 GB to QM3, etc., which would all end up sitting unused 99.99% of the time, but could certainly be useful at any time.
Currently we have to use a RCVR channel's Message Retry Count and Message Retry Interval to try to throttle how fast messages get offloaded to a DLQ, with serious impact on that channel's throughput for all the other innocent apps' messages trying to share it. Having giant jumbo DLQs would allow us to tune down these message retry counts and intervals, making for better channel performance in a shared environment when one app starts misbehaving.
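The throttling described above is done with the receiver channel's MRRTY and MRTMR attributes, which are the real MQSC names for Message Retry Count and Message Retry Interval. A sketch (the channel name and values are illustrative):

```
* Retry delivery of a message 5 times, 1000 ms apart,
* before offloading it to the dead letter queue
ALTER CHANNEL(TO.QM1) CHLTYPE(RCVR) MRRTY(5) MRTMR(1000)
```

With larger, isolated DLQ storage, these values could be lowered so a poison message stops blocking the channel sooner, without risking the shared filesystem filling up.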
Business justification:
If we could segregate the system queues from the app queues and the DLQ, we could make it far less likely that the QM would encounter a disk-full situation because some app on some other Queue Manager continued to send massive volumes of messages, causing a spillover to the DLQ. The common queues like DLQs and XMITQs need to be able to handle an occasional BIG message, and the occasional burst of millions of little messages, so we are forced to set the Max Q Depth and Max Message Length of these queues in such a way that we are vulnerable to having the queue hold millions of small-to-big messages, overwhelming the underlying storage. On a shared queue manager with dozens or hundreds of queues, it's not realistic to set every queue's Max Q Depth and Max Message Length to levels low enough that there is zero chance of filling the one and only pool of disk space for the entire QM. Being able to segregate queues with a higher likelihood of having to hold a large amount of data onto separate storage would be very useful. Being able to segregate an app's queues onto dedicated storage would allow us to put the app that requires big Max Q Depth and Max Message Length onto its own storage, which we could then charge back to them.
Peter Potkay
************************************************************
This communication, including attachments, is for the exclusive use of addressee and may contain proprietary, confidential and/or privileged information. If you are not the intended recipient, any use, copying, disclosure, dissemination or distribution is strictly prohibited. If you are not the intended recipient, please notify the sender immediately by return e-mail, delete this communication and destroy all copies.
************************************************************
To unsubscribe, write to LISTSERV-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org and,
in the message body (not the subject), write: SIGNOFF MQSERIES
Instructions for managing your mailing list subscription are provided in
the Listserv General Users Guide available at http://www.lsoft.com
Archive: http://listserv.meduniwien.ac.at/archives/mqser-l.html