Discussion:
How big is your DLQ?
Potkay, Peter M (CTO and Service Mgmt)
2014-01-28 22:00:09 UTC
Permalink
In a shared environment (multiple apps sharing the QM), on a QM that has multiple channels to and from other QMs, how big do you make your QM's DLQ?

For the DLQ's Max Message Size you want to be able to DLQ that occasional 10 MB message that one app sends two of once a month.

For the DLQ's Max Q Depth you want to be able to DLQ the 100,000 little 500-byte messages that another app sends every hour.

So in a shared environment you are forced to make the one DLQ able to handle the big messages and the numerous messages. Either use case on its own is not a problem in the DLQ - one 10 MB or 100K of the 500 bytes - who cares.

And then along comes app C. They start pumping messages as fast as they can to their remote queue on QM1 aiming at QM2. And because you set their Max Q Depth and Max Q Size on QM2 properly, they quickly fill their queues and start spilling into the DLQ.

And now you see why I ask the question - given that you have to tailor the DLQ to the occasional single big message and the occasional big batch of tiny messages, the DLQ is really big, and this 3rd app has the ability to put a lot of data into the DLQ. Probably so much that the disk space fills up before the DLQ's Max Q Depth is reached.

I could make the DLQ's Max Q Depth really low so if full of the biggest possible messages it won't fill disk, to protect against this 3rd app, but then that harmless batch of 100K tiny messages that we were able to DLQ easily will now cause the channel from the other QM to stop. Or, I could leave the Max Q Depth of the DLQ high and knock down the Max message Length, but then that one lonely 10 MB message that I was able to DLQ in the past will cause the channel to stop.

If you multiplied your DLQ's Max Q Depth times its Max Message Size, what do you end up with? 1 GB? 10 GB? Did you just max it out at 999,999,999 of 100 MB and cross your fingers and toes?
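For illustration, a quick back-of-the-envelope sketch of that worst case, using the example numbers above (MQ's per-message storage overhead is ignored, so real figures would be a bit higher):

```python
def worst_case_dlq_bytes(max_depth: int, max_msg_len: int) -> int:
    """Upper bound on DLQ disk usage: every slot filled with a
    max-size message. Ignores MQ's per-message overhead on disk."""
    return max_depth * max_msg_len

# The two legitimate cases, each harmless on its own:
one_big = worst_case_dlq_bytes(1, 10 * 1024**2)       # one 10 MB message
many_small = worst_case_dlq_bytes(100_000, 500)       # 100K x 500-byte messages

# But a DLQ provisioned for both also admits the pathological combination:
combined = worst_case_dlq_bytes(100_000, 10 * 1024**2)

print(one_big)      # 10485760        (~10 MB)
print(many_small)   # 50000000        (~50 MB)
print(combined)     # 1048576000000   (~1 TB)
```

That last figure is exactly the problem: a DLQ sized for both legitimate use cases can, in combination, swallow roughly a terabyte before Max Q Depth ever trips.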

How do you protect against this 3rd app? You can do all you want with setting this app's Max Q Depths and Max Message Sizes, but nothing prevents the app from sending unlimited numbers of messages that are < the Max Message Size of their SVRCONN channel as long as they fit into the XMITQ's Max Message Size. And then they can swamp the remote QM's DLQ.

I can set artificially low values for Size and Depth on the DLQ and the XMITQs to push the failure 'up the chain' until the problem app gets a failed MQPUT because the XMITQ behind their Remote Q def is full, but now I'm setting artificially low limits for all other well behaved apps and causing premature QM to QM channel hard stops to prevent that one app from filling a DLQ and then an XMITQ.

At MQ 7.1 I can set up a dedicated set of QM to QM channels with dedicated XMITQs for this 3rd app, set their channels to not use a DLQ, and set their queues and XMITQs to an artificially low limit. That way when they fill things up they are only impacting themselves. But that's a one-off and sets a bad example. Pretty soon I'm doing this for every app and I have a million SNDR/RCVR channels. Doesn't scale.

I wish we could throttle a SVRCONN channel to limit the number of bytes or number of messages an app could inject into the MQ layer per hour or per day.

Peter Potkay




************************************************************
This communication, including attachments, is for the exclusive use of addressee and may contain proprietary, confidential and/or privileged information. If you are not the intended recipient, any use, copying, disclosure, dissemination or distribution is strictly prohibited. If you are not the intended recipient, please notify the sender immediately by return e-mail, delete this communication and destroy all copies.
************************************************************

To unsubscribe, write to LISTSERV-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org and,
in the message body (not the subject), write: SIGNOFF MQSERIES
Instructions for managing your mailing list subscription are provided in
the Listserv General Users Guide available at http://www.lsoft.com
Archive: http://listserv.meduniwien.ac.at/archives/mqser-l.html
Neil Casey
2014-01-28 22:31:42 UTC
Permalink
Hi Peter,

I don’t have a way to throttle the performance of the channel, but I do have a suggestion that might help in a reasonably generic way, without having to move to MQ 7.5.

Assuming that you already have a configuration where you cope with DLQ activity, and you want to introduce a new (large message, high volume) application which could put that at risk, you could:
Create new channels for the message flow (or reuse channels if you already have some that work like this) and on the receiving channel:
Set the MRRTY to a non-default and very large value, say 999,999,999.
Set the MRTMR to a larger than normal value, say 10 000 - this is in milliseconds, so 10 seconds - the default is 1 second.
Set the BATCHSZ to a small value (1 or 2) because we don’t want to repeatedly fail a batch because we can only fit 9 new messages on a queue.

The effect will be that PUT failures to queues via this channel will cause the channel to pause for the time controlled by MRTMR (limiting the rate at which messages can arrive if the receiver is not coping). The PUTs are retried effectively forever. The messages will basically never get to the DLQ because of the MRRTY value.

Alternatively, if you didn't want to be this draconian, you could limit the rate at which messages could be sent to the DLQ by setting MRRTY to a smaller value (say 60). The channel will perform message retry for MRRTY * MRTMR ms (60 * 10,000 ms = 600 s = 10 minutes) before putting 1 message to the DLQ.
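The retry arithmetic above can be sketched as a toy calculation (not MQ code; MRRTY and MRTMR are the receiver-channel attributes discussed):

```python
def retry_window_seconds(mrrty: int, mrtmr_ms: int) -> float:
    """How long a receiver channel retries one failing message before
    dead-lettering it: MRRTY attempts spaced MRTMR milliseconds apart."""
    return mrrty * mrtmr_ms / 1000.0

# The "draconian" setting: effectively never dead-letter
forever = retry_window_seconds(999_999_999, 10_000)
print(forever / (3600 * 24 * 365))  # roughly 317 years

# The softer setting: 60 retries x 10 s = 10 minutes per dead-lettered message
soft = retry_window_seconds(60, 10_000)
print(soft / 60)  # 10.0 (minutes)
```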

There is a scalability risk with this… The message will block other messages on the channel while it is retrying. So you might need to have dedicated channels for applications.

Hopefully this would only be needed in rare and exceptional circumstances, and most applications could use the normal channels with standard DLQ processing.

Neil


--
Neil Casey
Senior Consultant | Syntegrity Solutions

+61 414 615 334 neil.casey-VLLIzlmz+***@public.gmane.org
Syntegrity Solutions Pty Ltd | Level 23 | 40 City Road | Southgate | VIC 3006
Analyse >> Integrate >> Secure >> Educate



On 29 Jan 2014, at 9:00 am, Potkay, Peter M (CTO and Service Mgmt) <Peter.Potkay-***@public.gmane.org> wrote:



Potkay, Peter M (CTO and Service Mgmt)
2014-01-28 22:56:49 UTC
Permalink
Yeah, I did think about new channels, but as I mentioned it doesn't scale. And in an MQ clustered environment it's particularly unpleasant to have to start creating new cluster channels for new overlapping clusters.

If it's worth doing for one app, isn't it worth doing for every app? Every app technically has the ability to misbehave like this and I don't see how to protect against it. You can do all the code reviews you want, set all the soft limits on queue sizes, have all the monitoring. At the end of the day the app can drop into a tight loop and have at it, flooding the system. We do use message retry on our RCVR channels, and that does slow things down a bit, but it penalizes all the well behaved apps at the same time in a shared environment.

Being able to assign a queue to its own storage might help, but what are you going to do - assign 500 GB of storage to each DLQ in your environment. And watch them all sit at 0% utilized 99.9999999% of the time?

You've got to thin-provision the queues, making shared queues like DLQs and XMITQs able to handle the biggest messages and the highest bursts of little messages, which means you open the door to high bursts of giant messages.

I keep going back to the idea of wishing there was a way to throttle a SVRCONN channel in the product. Once the app told us in the design phase what it was going to send worst case, we could ensure that's all they could do.


Peter Potkay


Paul Clarke
2014-01-28 22:58:30 UTC
Permalink
Hi Peter,

Perhaps not what you are looking for but it would be fairly easy to write an API Crossing Exit which did exactly that. Of course it may be more difficult to explain to the application programmer why you decided to slow down his messages and I’m not sure I could help you there.
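As a pure illustration of the throttling logic such an exit might wrap around MQPUT (the actual API crossing exit interface, parameter blocks, and registration are platform-specific and not shown here), a per-application token bucket could cap bytes per second:

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: allow up to rate_bytes per second
    of MQPUT payload, with bursts up to burst_bytes."""
    def __init__(self, rate_bytes: float, burst_bytes: float):
        self.rate = rate_bytes
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def try_consume(self, n_bytes: int) -> bool:
        # Refill tokens based on elapsed time, capped at bucket capacity
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if n_bytes <= self.tokens:
            self.tokens -= n_bytes
            return True   # the exit would let the MQPUT proceed
        return False      # the exit would delay or fail the call

# e.g. cap an app at 1 MB/s with a 10 MB burst allowance
limiter = TokenBucket(1_048_576, 10 * 1_048_576)
print(limiter.try_consume(5 * 1_048_576))   # True  - within the burst budget
print(limiter.try_consume(20 * 1_048_576))  # False - over budget, throttle
```

An hourly or daily byte budget (as Peter wished for) is the same idea with a slower refill rate and a larger bucket.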

Cheers,
Paul.

Paul Clarke
www.mqgem.com

Roger Lacroix
2014-01-28 23:17:32 UTC
Permalink
Hi Paul,

My first thought was a channel receive exit. Can I ask why you thought an API Exit is a good choice?

Regards,
Roger Lacroix
Capitalware Inc.

At 05:58 PM 1/28/2014, you wrote:
Paul Clarke
2014-01-28 23:40:32 UTC
Permalink
Well, for two main reasons:

a. It should really apply to all applications, not just channels. Who is to say a locally bound application won't throw a wobbly too?

b. Receive exits are tricky. IBM doesn't publish the format of them and you are not supposed to reverse engineer. If you wanted to do anything even slightly sophisticated, for example throttle only PUTs to the DLQ, then it would be very hard to do in a receive exit. And, of course, clients don't have Message Exits.

There may be other reasons but it's late.

Paul Clarke
www.mqgem.com

From: Roger Lacroix
Sent: Tuesday, January 28, 2014 11:17 PM
To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org
Subject: Re: How big is your DLQ?

Hi Paul,

My first thought was a channel receive exit. Can I ask why you thought an API Exit is a good choice?

Regards,
Roger Lacroix
Capitalware Inc.

At 05:58 PM 1/28/2014, you wrote:

Hi Peter,

Perhaps not what you are looking for but it would be fairly easy to write an API Crossing Exit which did exactly that. Of course it may be more difficult to explain to the application programmer why you decided to slow down his messages and I’m not sure I could help you there.

Cheers,
Paul.

Paul Clarke
www.mqgem.com

From: Potkay, Peter M (CTO and Service Mgmt)
Sent: Tuesday, January 28, 2014 10:00 PM
To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org
Subject: How big is your DLQ?

In a shared environment (multiple apps sharing the QM), on a QM that has multiple channels to and from other QMs, how big do you make your QM's DLQ?

For the DLQ's Max Message Size you want to be able to DLQ that occasional 10 MB message that one app sends two of once a month.

For the DLQ's Max Q Depth you want to be able to DLQ the 100,000 little 500-byte messages that another app sends every hour.

So in a shared environment you are forced to make the one DLQ able to handle the big messages and the numerous messages. Either use case on its own is not a problem in the DLQ - one 10 MB message or 100K of the 500-byte ones - who cares.

And then along comes app C. They start pumping messages as fast as they can to their remote queue on QM1 aiming at QM2. And because you set their Max Q Depth and Max Q Size on QM2 properly, they quickly fill their queues and start spilling into the DLQ.

And now you see why I ask the question - given that you have to cater the DLQ to the occasional single big message and the occasional big group of tiny messages, the DLQ is really big, and this 3rd app has the ability to put a lot of data into the DLQ. Probably so much that the disk space fills up before the DLQ's Max Q depth is reached.

I could make the DLQ's Max Q Depth really low so if full of the biggest possible messages it won’t fill disk, to protect against this 3rd app, but then that harmless batch of 100K tiny messages that we were able to DLQ easily will now cause the channel from the other QM to stop. Or, I could leave the Max Q Depth of the DLQ high and knock down the Max message Length, but then that one lonely 10 MB message that I was able to DLQ in the past will cause the channel to stop.

If you multiplied your DLQ's Max Q Depth times its Max Message Size, what do you end up with? 1 GB? 10 GB? Did you just max it out at 999,999,999 messages of 100 MB and cross your fingers and toes?
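To put numbers on that product, here is what a shared DLQ catering for both cases might look like in MQSC. The values are purely illustrative, not recommendations, and the comment shows the worst-case disk exposure such a compromise creates:

```
* Hypothetical shared DLQ sized for a 10 MB one-off AND a
* 100,000-message burst of small messages:
ALTER QLOCAL(SYSTEM.DEAD.LETTER.QUEUE) +
      MAXMSGL(10485760) +
      MAXDEPTH(200000)
* Worst case if a misbehaving app fills it with max-size messages:
* 200,000 x 10 MB = roughly 2 TB of disk.
```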

How do you protect against this 3rd app? You can do all you want with setting this app's Max Q Depths and Max Message Sizes, but nothing prevents the app from sending unlimited numbers of messages that are < Max Message Size of their SVRCONN channel as long as they fit into the XMITQ's Max Message Size. And then they can swamp the remote QM's DLQ.

I can set artificially low values for Size and Depth on the DLQ and the XMITQs to push the failure 'up the chain' until the problem app gets a failed MQPUT because the XMITQ behind their Remote Q def is full, but now I'm setting artificially low limits for all other well behaved apps and causing premature QM to QM channel hard stops to prevent that one app from filling a DLQ and then an XMITQ.

At MQ 7.1 I can set up a dedicated set of QM to QM channels with dedicated XMITQs for this 3rd app and set their channels to not use a DLQ and set their queues and XMITQs to an artificially low limit. That way when they fill things up they are only impacting themselves. But that's a one off and sets a bad example. Pretty soon I'm doing this for every app and I have a million SNDR/RCVR channels. Doesn't scale.
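A sketch of that MQ 7.1 isolation approach in MQSC. The names and limits are made up, and USEDLQ is assumed here to be the 7.1+ channel attribute that stops the receiving MCA dead-lettering:

```
* Dedicated, deliberately small path for the noisy app only:
DEFINE QLOCAL(APPC.XMITQ) USAGE(XMITQ) MAXDEPTH(5000) MAXMSGL(4194304)
DEFINE CHANNEL(QM1.TO.QM2.APPC) CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('qm2host(1414)') XMITQ(APPC.XMITQ)
* On QM2: a matching receiver that never spills into the shared DLQ
DEFINE CHANNEL(QM1.TO.QM2.APPC) CHLTYPE(RCVR) TRPTYPE(TCP) USEDLQ(NO)
* When app C floods, only APPC.XMITQ fills and only this channel stops.
```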

I wish we could throttle a SVRCONN channel to limit the number of bytes or number of messages an app could inject into the MQ layer per hour or per day.

Peter Potkay




************************************************************
This communication, including attachments, is for the exclusive use of addressee and may contain proprietary, confidential and/or privileged information. If you are not the intended recipient, any use, copying, disclosure, dissemination or distribution is strictly prohibited. If you are not the intended recipient, please notify the sender immediately by return e-mail, delete this communication and destroy all copies.
************************************************************


Archive: http://listserv.meduniwien.ac.at/archives/mqser-l.html
rweinger-5mf8PG+
2014-01-28 23:14:42 UTC
Permalink
You can trigger a DLQ handler to move the messages to an 'app error queue'. You would have to size those, but it won't shut down your channels. Then agree an SLA with the app owner as to what to do with them. Most of our volume is request-reply, and whatever ends up on the DLQ is usually discardable.
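A sketch of such a rules table for runmqdlq (queue names and reason choices are illustrative; the handler is typically triggered off the DLQ):

```
* Drain the shared DLQ into per-app error queues so it never
* backs up the channels.
INPUTQ(SYSTEM.DEAD.LETTER.QUEUE) RETRYINT(60) WAIT(NO)
* Messages dead-lettered because app C's queue was full go to a
* separately sized side queue (APPC.ERRORS is an assumed name):
REASON(MQRC_Q_FULL) DESTQ(APPC.REQUEST) ACTION(FWD) FWDQ(APPC.ERRORS)
* Anything else that failed on a full queue: retry the original
* destination a few times, then leave it for manual attention.
REASON(MQRC_Q_FULL) ACTION(RETRY) RETRY(5)
```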
The information contained in this message may be CONFIDENTIAL and is for the intended addressee only. Any unauthorized use, dissemination of the information, or copying of this message is prohibited. If you are not the intended addressee, please notify the sender immediately and delete this message.

Roger Lacroix
2014-01-29 01:28:15 UTC
Permalink
Hi Paul,

True, but the problem with an API Exit is that the channel name is only available in WMQ v7.1 and higher, whereas the receive exit has the channel name in any version of MQ.

My only concern (ok, 1 of many) is what is going to happen if an exit decides that the application needs a "5 second timeout" because it has reached its max limit. Would the client-side MCA think the server side has gone away? Hence, break the connection and return 2009 to the client application? I don't know.

Putting throttling into an exit will test the limits of MQ and MQ's MCA - "unpredictable behavior" is a phrase I can hear IBM saying.

Regards,
Roger Lacroix
Capitalware Inc.

Paul Clarke
2014-01-29 05:36:32 UTC
Permalink
I’m not really sure why you think you need the channel name in the exit. If you are trying to throttle puts to particular queues, does it really matter which channel is doing it?

I share your concern about artificially slowing down the channel. It is, without doubt, a risk that making the channel less responsive might cause the ‘other side’ to time out. You are correct that after ‘heartbeat seconds plus a bit’ the client end would think the server has gone away. However, I don’t see how you would be any better off implementing this in a receive exit - the same thing would happen. After all, the receive exit and the API crossing exit will be invoked on the same thread, will they not? The likelihood is that you would be slowing down the MQPUT call by a much smaller amount of time than the heartbeat interval, though, so the channel timing out ought not to be too much of an issue. Having said that, surely there does come a point, if you are constantly slowing down a putting application, where that application should be told in some way, i.e. the MQPUT fails with MQRC_YOU_ARE_SWAMPING_ME.

The bottom line is that all the solutions I have heard mentioned will have different behaviours which may or may not be what you wish for. I suggested the API crossing exit because it seemed closest to what Peter was asking for and seemed the most controllable. I’m not necessarily advocating that it is the ‘right’ solution. However, it does have the third advantage (which I failed to vocalise last night) that you can decide what reason code to give the application if you ultimately decide enough is enough. You can’t do that in a receive exit; the application will essentially see a channel failure.

Of course MQ’s philosophy tends to be that everyone puts to the DLQ (or something similar) and you deal with badly behaved applications ‘offline’. However, I agree with Peter that MQ should have some sort of ‘push back’ for impedance matching and have long advocated this in MQ development. Sadly, ‘push back’ has never made it to the top of anyone’s list so if you want it you are down to writing either wrappers or exits. And, as we all know, it is harder to get ‘ideal’ behaviour in an exit than if it is coded in the Queue Manager.
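For what it’s worth, the throttling decision itself is simple; the hard part is wiring it into an exit. A minimal token-bucket sketch of the "bytes per interval" logic such an API crossing exit might call before allowing an MQPUT (plain C; this is illustrative only and is not the MQ exit interface):

```c
/* Token-bucket byte-rate limiter: the decision logic an API crossing
 * exit could apply per application before letting an MQPUT proceed.
 * Sketch only - not tied to any real MQ exit structure. */
typedef struct {
    double capacity; /* maximum burst allowance, in bytes        */
    double tokens;   /* bytes the app may still send right now   */
    double rate;     /* refill rate, bytes per second            */
    double last;     /* time (in seconds) of the last refill     */
} Bucket;

static void refill(Bucket *b, double now)
{
    b->tokens += (now - b->last) * b->rate;
    if (b->tokens > b->capacity)
        b->tokens = b->capacity;   /* burst allowance is capped */
    b->last = now;
}

/* Returns 1 if a message of msg_bytes may be put now; 0 if the exit
 * should delay the call or fail it with a chosen reason code. */
static int allow_put(Bucket *b, double now, double msg_bytes)
{
    refill(b, now);
    if (b->tokens >= msg_bytes) {
        b->tokens -= msg_bytes;
        return 1;
    }
    return 0;
}
```

Whether the right response to a 0 is a sleep (with the heartbeat risk above) or an immediate failure with a reason code is exactly the policy question being debated.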

Cheers,
P.

Paul Clarke
www.mqgem.com

Ian Alderson
2014-01-29 12:08:09 UTC
Permalink
Hi Peter,
I agree - I think the thrust of your problem could be solved by adding the ability to throttle on a channel (or, possibly better, on a queue). Other messaging providers are, I believe, already there to some extent - for example, flow control for fast producers in other *cough* products.

The architecture and internal workings of MQ are different to other competitor products, but just as IBM have embraced features such as fault tolerant pairs (aka MIM with client reconnect), JMS “High Persistence” to match the performance for the JMS spec level of persistence, and adding Read Ahead capability to improve client performance over network latency at the sacrifice of assured delivery, I think this is one feature that would be a great enhancement for MQ Administrators.

Even if a lot of users don’t use these advanced features, the (lack of) ability to control the rate of data (bytes, not messages) hitting a queue is, I believe, a gap in the MQ product, and certainly an RFE I would vote for.

Ian




Ian Alderson
MQ Technical Architect

DL 0203 003 3055


________________________________
Ignis Asset Management
Fixed Income | Equities | Real Estate | Advisors | Solutions
150 Cheapside | London | EC2V 6ET

http://www.ignisasset.com
http://twitter.com/IgnisAM
http://www.linkedin.com/companies/ignis-asset-management

From: MQSeries List [mailto:***@LISTSERV.MEDUNIWIEN.AC.AT] On Behalf Of Potkay, Peter M (CTO and Service Mgmt)
Sent: Tuesday, January 28, 2014 10:57 PM
To: ***@LISTSERV.MEDUNIWIEN.AC.AT
Subject: Re: How big is your DLQ?

Yeah, I did think about new channels, but as I mentioned it doesn’t scale. And in an MQ clustered environment it’s particularly unpleasant to have to start creating new cluster channels for new overlapping clusters.

If it’s worth doing for one app, isn’t it worth doing for every app? Every app technically has the ability to misbehave like this and I don’t see how to protect against it. You can do all the code reviews you want, set all the soft limits on queue sizes, have all the monitoring. At the end of the day the app can drop into a tight loop and have at it, flooding the system. We do use message retry on our RCVR channels, and that does slow things down a bit, but in a shared environment it penalizes all the well-behaved apps at the same time.

Being able to assign a queue to its own storage might help, but what are you going to do, assign 500 GB of storage to each DLQ in your environment and watch them all sit at 0% utilized 99.9999999% of the time?

You gotta thin provision the queues, making shared queues like DLQs and XMITQs be able to handle the biggest messages and the highest bursts of little messages, which means you open the door for high bursts of giant messages.

I keep going back to the idea of wishing there was a way to throttle a SVRCONN channel in the product. Once the app told us in the design phase what it was going to send worst case, we could ensure that’s all they could do.


Peter Potkay

From: MQSeries List [mailto:***@LISTSERV.MEDUNIWIEN.AC.AT] On Behalf Of Neil Casey
Sent: Tuesday, January 28, 2014 5:32 PM
To: ***@LISTSERV.MEDUNIWIEN.AC.AT<mailto:***@LISTSERV.MEDUNIWIEN.AC.AT>
Subject: Re: How big is your DLQ?

Hi Peter,

I don’t have a way to throttle the performance of the channel, but I do have a suggestion that might help in a reasonably generic way, without having to move to MQ 7.5.

Assuming that you already have a configuration where you cope with DLQ activity, and you want to introduce a new (large message, high volume) application which could put that at risk, you could:
Create new channels for the message flow (or reuse channels if you already have some that work like this) and on the receiving channel:
Set the MRRTY to a non-default and very large value, say 999 999 999.
Set the MRTMR to a larger-than-normal value, say 10 000 (this is in milliseconds, so 10 seconds; the default is 1 second).
Set the BATCHSZ to a small value (1 or 2), because we don’t want to repeatedly fail a batch just because we can only fit 9 new messages on a queue.
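In MQSC, those three steps might look something like this (a sketch only; the channel name is hypothetical, and remember MRTMR is in milliseconds):

```
ALTER CHANNEL('QM1.TO.QM2') CHLTYPE(RCVR) +
      MRRTY(999999999) +
      MRTMR(10000) +
      BATCHSZ(1)
```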

The effect will be that PUT failures to queues via this channel cause the channel to pause for the time controlled by MRTMR (limiting the rate at which messages can arrive if the receiver is not coping). The PUTs are retried effectively forever; because of the MRRTY value, the messages will basically never go to the DLQ.

Alternatively, if you didn’t want to be this draconian, you could limit the rate at which messages could be sent to the DLQ by setting MRRTY to a smaller value (say 60). The channel will perform message retry for MRRTY * MRTMR ms (60*10000/1000/60 minutes = 10 minutes) before putting 1 message to the DLQ.
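To make the pacing arithmetic concrete, here is a quick sketch (the MRRTY/MRTMR values are the examples above; MRTMR is in milliseconds):

```python
# Pacing effect of message retry on a receiver channel: the channel
# waits MRTMR ms between attempts and gives up (dead-lettering the
# message) only after MRRTY attempts.

def seconds_until_dlq(mrrty: int, mrtmr_ms: int) -> float:
    """Time the channel spends retrying one message before it is
    finally put to the DLQ."""
    return mrrty * mrtmr_ms / 1000.0

# The "less draconian" example: MRRTY(60), MRTMR(10000)
secs = seconds_until_dlq(60, 10_000)
print(f"{secs:.0f} s = {secs / 60:.0f} min per dead-lettered message")
# prints "600 s = 10 min per dead-lettered message"
```

With BATCHSZ(1) that caps DLQ growth at roughly one message every ten minutes per channel while the target queue stays full.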

There is a scalability risk with this: the retrying message will block other messages on the channel while it is retrying, so you might need dedicated channels for such applications.

Hopefully this would only be needed in rare and exceptional circumstances, and most applications could use the normal channels with standard DLQ processing.

Neil


--
Neil Casey
Senior Consultant | Syntegrity Solutions

+61 414 615 334 | ***@syntegrity.com.au
Syntegrity Solutions Pty Ltd | http://www.syntegrity.com.au | Level 23 | 40 City Road | Southgate | VIC 3006
Analyse >> Integrate >> Secure >> Educate

On 29 Jan 2014, at 9:00 am, Potkay, Peter M (CTO and Service Mgmt) <***@THEHARTFORD.COM<mailto:***@THEHARTFORD.COM>> wrote:

In a shared environment (multiple apps sharing the QM), on a QM that has multiple channels to and from other QMs, how big do you make your QM's DLQ?

For the DLQ's Max Message Size you want to be able to DLQ that occasional 10 MB message that one app sends 2 of once a month.

For the DLQ's Max Q Depth you want to be able to DLQ the 100,000 little 500 byte messages that other app sends every hour.

So in a shared environment you are forced to make the one DLQ able to handle the big messages and the numerous messages. Either use case on its own is not a problem in the DLQ - one 10 MB or 100K of the 500 bytes - who cares.

And then along comes app C. They start pumping messages as fast as they can to their remote queue on QM1 aiming at QM2. And because you set their Max Q Depth and Max Q Size on QM2 properly, they quickly fill their queues and start spilling into the DLQ.

And now you see why I ask the question - given that you have to cater the DLQ to the occasional single big message and the occasional big group of tiny messages, the DLQ is really big, and this 3rd app has the ability to put a lot of data into the DLQ. Probably so much that the disk space fills up before the DLQ's Max Q depth is reached.

I could make the DLQ's Max Q Depth really low so if full of the biggest possible messages it won’t fill disk, to protect against this 3rd app, but then that harmless batch of 100K tiny messages that we were able to DLQ easily will now cause the channel from the other QM to stop. Or, I could leave the Max Q Depth of the DLQ high and knock down the Max message Length, but then that one lonely 10 MB message that I was able to DLQ in the past will cause the channel to stop.

If you multiplied your DLQ's Max Q Depth times its Max Message Size, what do you end up with? 1 GB? 10 GB? Did you just max it out at 999,999,999 messages of 100 MB and cross your fingers and toes?
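As a back-of-envelope check of that multiplication (a sketch; the numbers are the ones from this thread, not recommendations):

```python
# Worst-case DLQ disk exposure is bounded by MAXDEPTH * MAXMSGL, even
# though the two limits were chosen for two very different failure
# modes (one huge message vs. a flood of tiny ones).
MB = 1024 * 1024

def worst_case_bytes(maxdepth: int, maxmsgl: int) -> int:
    """Upper bound on message data held on the DLQ; queue-file
    overhead and log space come on top of this."""
    return maxdepth * maxmsgl

# Sized for 100,000 small messages per hour AND the occasional 10 MB one:
print(worst_case_bytes(100_000, 10 * MB) / MB / 1024)     # ~977 GB

# The "cross your fingers and toes" configuration:
print(worst_case_bytes(999_999_999, 100 * MB) / 1024**4)  # ~95,000 TB
```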

How do you protect against this 3rd app? You can do all you want with setting this apps Max Q Depths and Max Message Sizes, but nothing prevents the app from sending unlimited numbers of messages that are < Max Message Size of their SVRCONN channel as long as they fit into the XMITQ's Max Message Size. And then they can swamp the remote QM's DLQ.

I can set artificially low values for Size and Depth on the DLQ and the XMITQs to push the failure 'up the chain' until the problem app gets a failed MQPUT because the XMITQ behind their Remote Q def is full, but now I'm setting artificially low limits for all other well behaved apps and causing premature QM to QM channel hard stops to prevent that one app from filling a DLQ and then an XMITQ.

At MQ 7.1 I can set up a dedicated set of QM to QM channels with dedicated XMITQs for this 3rd app and set their channels to not use a DLQ and set their queues and XMITQs to an artificially low limit. That way when they fill things up they are only impacting themselves. But that's a one off and sets a bad example. Pretty soon I'm doing this for every app and I have a million SNDR/RCVR channels. Doesn't scale.
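For reference, that 7.1-era isolation might be sketched in MQSC along these lines (all names and limits here are hypothetical; USEDLQ is the channel attribute added in MQ 7.1):

```
* On QM1: a dedicated, deliberately small XMITQ for app C
DEFINE QLOCAL('APPC.XMITQ') USAGE(XMITQ) MAXDEPTH(5000) MAXMSGL(10485760)
DEFINE CHANNEL('QM1.APPC.QM2') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('qm2host(1414)') XMITQ('APPC.XMITQ')

* On QM2: the matching receiver, with dead-lettering disabled
DEFINE CHANNEL('QM1.APPC.QM2') CHLTYPE(RCVR) TRPTYPE(TCP) USEDLQ(NO)
```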

I wish we could throttle a SVRCONN channel to limit the number of bytes or number of messages an app could inject into the MQ layer per hour or per day.

Peter Potkay




************************************************************
This communication, including attachments, is for the exclusive use of addressee and may contain proprietary, confidential and/or privileged information. If you are not the intended recipient, any use, copying, disclosure, dissemination or distribution is strictly prohibited. If you are not the intended recipient, please notify the sender immediately by return e-mail, delete this communication and destroy all copies.
************************************************************

________________________________
List Archive<http://listserv.meduniwien.ac.at/archives/mqser-l.html> - Manage Your List Settings<http://listserv.meduniwien.ac.at/cgi-bin/wa?SUBED1=mqser-l&A=1> - Unsubscribe<mailto:***@LISTSERV.MEDUNIWIEN.AC.AT?subject=Unsubscribe&BODY=signoff%20mqseries>

Instructions for managing your mailing list subscription are provided in the Listserv General Users Guide available at http://www.lsoft.com<http://www.lsoft.com/resources/manuals.asp>




**************************************************************
The information contained in this email (including any attachments transmitted within it) is confidential and is intended solely for the use of the named person.
The unauthorised access, copying or re-use of the information in it by any other person is strictly forbidden.
If you are not the intended recipient please notify us immediately by return email to ***@ignisasset.com.

Internet communication is not guaranteed to be timely, secure, error or virus free. We accept no liability for any harm to systems or data, nor for personal emails. Emails may be recalled, deleted and monitored.

Ignis Asset Management is the trading name of the Ignis Asset Management Limited group of companies which includes the following subsidiaries:
Ignis Asset Management Limited (Registered in Scotland No. SC200801), Ignis Investment Services Limited* (Registered in Scotland No. SC101825)
Ignis Fund Managers Limited* (Registered in Scotland No. SC85610) Scottish Mutual Investment Managers Limited* (Registered in Scotland No. SC88674)
Registered Office: 50 Bothwell Street, Glasgow, G2 6HR, Tel: 0141-222-8000 and Scottish Mutual PEP & ISA Managers Limited* (Registered in England No. 971504)
Registered Office: 1 Wythall Green Way, Wythall, Birmingham B47 6WG and Ignis Investment Management Limited (Registered in England No. 5809046)
Registered Office: 150 Cheapside, London, EC2V 6ET Tel: 020 3003 3000. Scottish Mutual is a registered trade mark of Scottish Mutual Assurance Limited

*Authorised and regulated by the Financial Conduct Authority.

**************************************************************
Roger Lacroix
2014-01-29 19:11:05 UTC
Permalink
Hi Paul,

> I'm not really sure why you think you need the channel name in the exit. If you are trying to throttle puts to particular queues does it really matter which channel is doing it?

Yes and no. I see your point but if you have the same application (i.e. app001) connecting from different servers using different channels and you wanted to throttle the application (i.e. app001) connecting from a particular server, you would need the channel name. This may sound strange but I have seen it.

If you have 20 applications sending request messages to a queue for a server component for processing, and all client applications are Java or Java/JMS, and the MQMD Put-Application Name field (for every message) has "Websphere MQ Client for Java", how do you know which application to throttle? The channel name in this case (hopefully) would help pare it down to a single application rather than throttling all 20 applications.

> However, I don't see how you would be any better off if you decided to implement this in a receiver exit.

Oh no. I see the issue in both an API exit and a receive exit.

> ie. the MQPUT fails with MQRC_YOU_ARE_SWAMPING_ME.

Now that's an RFE worth creating!! :)

> However it does have the third advantage (that I failed to vocalise last night) that you can decide what reason code to give the application if you ultimately decide enough is enough.

I've tried to do that for years: when API exit code sets ExitResponse = MQXCC_SUPPRESS_FUNCTION it always returns RC 2374 (MQRC_API_EXIT_ERROR) to the application, no matter what I set for CC and RC. So, if you have some pointers, I'm all ears.

Regards,
Roger Lacroix
Capitalware Inc.

At 12:36 AM 1/29/2014, Paul Clarke wrote:
>I’m not really sure why you think you need the channel name in the exit. If you are trying to throttle puts to particular queues does it really matter which channel is doing it?
>
>I share your concern about artificially slowing down the channel. It is, without doubt, a risk that making the channel less responsive might cause the ‘other side’ to time out. You are correct that after ‘heartbeat seconds plus a bit’ the client end would think the server has gone away. However, I don’t see how you would be any better off if you decided to implement this in a receiver exit. The same thing would happen. After all, the receiver exit and API crossing exit will be invoked on the same thread, will they not? The likelihood is that you would be slowing down the MQPUT call by a much smaller amount of time than the heartbeat interval though, so the channel timing out ought not to be too much of an issue. Having said that, surely there does come a point, if you are constantly slowing down a putting application, where perhaps that application should be told in some way, ie. the MQPUT fails with MQRC_YOU_ARE_SWAMPING_ME.
>
>The bottom line is that all the solutions I have heard mentioned will have different behaviours which may or may not be what you wish for. I suggested the API crossing exit because it seemed closest to what Peter was asking for and seemed the most controllable. I’m not necessarily advocating that it is the ‘right’ solution. However it does have the third advantage (that I failed to vocalise last night) that you can decide what reason code to give the application if you ultimately decide enough is enough. You can’t do that in a receive exit; the application will essentially see a channel failure.
>
>Of course MQ’s philosophy tends to be that everyone puts to the DLQ (or something similar) and you deal with badly behaved applications ‘offline’. However, I agree with Peter that MQ should have some sort of ‘push back’ for impedance matching and have long advocated this in MQ development. Sadly, ‘push back’ has never made it to the top of anyone’s list, so if you want it you are down to writing either wrappers or exits. And, as we all know, it is harder to get ‘ideal’ behaviour in an exit than if it is coded in the Queue Manager.
>
>Cheers,
>P.
>
>Paul Clarke
>www.mqgem.com
Paul Clarke
2014-01-29 20:30:06 UTC
Permalink
Hi Roger,

Comments added below.

Cheers,
Paul.

Paul Clarke
www.mqgem.com

From: Roger Lacroix
Sent: Wednesday, January 29, 2014 7:11 PM
To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org
Subject: Re: How big is your DLQ?

Hi Paul,

> I'm not really sure why you think you need the channel name in the exit. If you are trying to throttle puts to particular queues does it really matter which channel is doing it?

Yes and no. I see your point but if you have the same application (i.e. app001) connecting from different servers using different channels and you wanted to throttle the application (i.e. app001) connecting from a particular server, you would need the channel name. This may sound strange but I have seen it.
And suppose you wanted to control local applications or receiver channels? Anyway, I’m not saying that you couldn’t do it in a receiver exit. If you prefer a receiver exit then go for it, but I think I’d still prefer an API exit for the reasons I mentioned before.

If you have 20 applications sending request messages to a queue for a server component for processing and all client applications are Java or Java/JMS and the MQMD Put-Application Name field (for every message) has "Websphere MQ Client for Java", how do you know which application to throttle. The channel name in this case (hopefully) would help par it down to a single application rather than throttling all 20 applications.
Two things here. Firstly, from 7.5 onwards I thought the Java client identified itself by the main class rather than the rather useless "WebSphere MQ Client for Java". Secondly, it would surprise me if the identity context were filled in on the MQMD on each MQPUT, and I don’t think you can rely on it. The identity context will be set at connect time and changed by the QM accordingly (unless the application has used one of the SET_IDENTITY_CONTEXT options, of course).

> However, I don't see how you would be any better off if you decided to implement this in a receiver exit.

Oh no. I see the issue in both an API exit and a receive exit.

> ie. the MQPUT fails with MQRC_YOU_ARE_SWAMPING_ME.

Now that's an RFE worth creating!! :)

> However it does have the third advantage (that I failed to vocalise last night) that you can decide what reason code to give the application if you ultimately decide enough is enough.

I've tried to do that for years, when API code sets ExitResponse = MQXCC_SUPPRESS_FUNCTION it always returns RC of 2374 (MQRC_API_EXIT_ERROR) to the application no matter what I set for CC and RC. So, if you have some pointers, I'm all hears.
Are you trying to overcomplicate things? Have you tried just setting the ReasonCode field in your API exit?

Regards,
Roger Lacroix
Capitalware Inc.

At 12:36 AM 1/29/2014, Paul Clarke wrote:

I’m not really sure why you think you need the channel name in the exit. If you are trying to throttle puts to particular queues does it really mater which channel is doing it ?

I share your concern about artificially slowing down the channel. It is, without doubt, a risk that making the channel less responsive might cause the ‘other side’ to time out. You are correct that after ‘heartbeat seconds plus a bit’ the client end would think the server has gone away. However, I don’t see how you would be any better off if you decided to implement this in a receiver exit. The same thing would happen. After all, the receiver exit and API crossing exit will be invoked on the same thread will they not ? The likelihood is that you would be slowing down the MQPUT call by a much smaller amount of time than the heartbeat interval though so the channel timing out ought not to be too much of an issue. Having said that surely there does come a point, if you are constantly slowing down a putting application, where perhaps that application should be told in some way ie. the MQPUT fails with MQRC_YOU_ARE_SWAMPING_ME.

The bottom line is that all the solutions I have heard mention will have different behaviours which may or may not be what you wish for. I suggested the API crossing exit because it seemed closest to what Peter was asking for and seemed the most controllable. I’m not necessarily advocating that it is the ‘right’ solution. However it does have the third advantage (that I failed to vocalise last night) that you can decide what reason code to give the application if you ultimately decide enough is enough. You can’t do that in a receive exit, the application will essentially see a channel failure.

Of course MQ’s philosophy tends to be that everyone puts to the DLQ (or something similar) and you deal with badly behaved applications ‘offline’. However, I agree with Peter that MQ should have some sort of ‘push back’ for impedance matching and have long advocated this in MQ development. Sadly, ‘push back’ has never made it to the top of anyone’s list so if you want it you are down to writing either wrappers or exits. And, as we all know, it is harder to get ‘ideal’ behaviour in an exit than if it is coded in the Queue Manager.

Cheers,
P.

Paul Clarke
www.mqgem.com

From: Roger Lacroix
Sent: Wednesday, January 29, 2014 1:28 AM
To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org
Subject: Re: How big is your DLQ?

Hi Paul,

True but the problem with an API Exit is that the channel name is only available in WMQ v7.1 and higher. Where as, the receive exit has the channel name in any version of MQ.

My only concern (ok, 1 of many), is what is going to happen if an exit decides that the application needs a "5 second timeout" because it has reached its max limit. Would the client-side MCA think the server-side has gone away? Hence, break the connection and return 2009 to the client application? I don't know.

Putting throttle into an exit will test the limits of MQ and MQ's MCA - "unpredictable behavior" is a sentence I can hear IBM saying.

Regards,
Roger Lacroix
Capitalware Inc.

At 06:40 PM 1/28/2014, you wrote:

Well, for two main reasons......
a.. It should really apply to all applications....not just channels. Who is to say a locally bound application won’t throw a wobbly too
b.. Receive exits are tricky. IBM doesn’t publish the format of them and you are not supposed to reverse engineer. If you wanted to do anything even slightly sophisticated, for example throttle only PUTs to the DLQ, then it would be very hard to do in a receive exit. And, of course, clients don’t have Message Exits.
there may be other reasons but it’s late

Paul Clarke
www.mqgem.com

From: Roger Lacroix
Sent: Tuesday, January 28, 2014 11:17 PM
To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org
Subject: Re: How big is your DLQ?

Hi Paul,

My first thought was a channel receive exit. Can I ask why you thought an API Exit is a good choice?

Regards,
Roger Lacroix
Capitalware Inc.

At 05:58 PM 1/28/2014, you wrote:

Hi Peter,

Perhaps not what you are looking for but it would be fairly easy to write an API Crossing Exit which did exactly that. Of course it may be more difficult to explain to the application programmer why you decided to slow down his messages and I’m not sure I could help you there.

Cheers,
Paul.

Paul Clarke
www.mqgem.com

From: Potkay, Peter M (CTO and Service Mgmt)
Sent: Tuesday, January 28, 2014 10:00 PM
To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org
Subject: How big is your DLQ?

In a shared environment (multiple apps sharing the QM), on a QM that has multiple channels to and from other QMs, how big do you make your QM's DLQ?

For the DLQ's Max Message Size you want to be able to DLQ that occasional 10 MB message that one app sends a couple of times a month.

For the DLQ's Max Q Depth you want to be able to DLQ the 100,000 little 500 byte messages that other app sends every hour.

So in a shared environment you are forced to make the one DLQ able to handle the big messages and the numerous messages. Either use case on its own is not a problem in the DLQ - one 10 MB or 100K of the 500 bytes - who cares.

And then along comes app C. They start pumping messages as fast as they can to their remote queue on QM1, aiming at QM2. And because you set their Max Q Depth and Max Q Size on QM2 properly, they quickly fill their queues and start spilling into the DLQ.

And now you see why I ask the question - given that you have to tailor the DLQ to the occasional single big message and the occasional big group of tiny messages, the DLQ is really big, and this 3rd app has the ability to put a lot of data into the DLQ. Probably so much that the disk space fills up before the DLQ's Max Q Depth is reached.

I could make the DLQ's Max Q Depth really low so if full of the biggest possible messages it won’t fill disk, to protect against this 3rd app, but then that harmless batch of 100K tiny messages that we were able to DLQ easily will now cause the channel from the other QM to stop. Or, I could leave the Max Q Depth of the DLQ high and knock down the Max message Length, but then that one lonely 10 MB message that I was able to DLQ in the past will cause the channel to stop.

If you multiplied your DLQ's Max Q Depth times its Max Message Size, what do you end up with? 1 GB? 10 GB? Did you just max it out at 999,999,999 messages of 100 MB and cross your fingers and toes?
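For concreteness, here is the back-of-the-envelope arithmetic behind that question, using the numbers from this thread. This is a rough sketch, not a sizing recommendation:

```python
# Worst-case DLQ disk footprint is simply Max Q Depth x Max Message Length.
def worst_case_bytes(max_depth: int, max_msg_len: int) -> int:
    """Upper bound on disk usage if every slot held a maximum-size message."""
    return max_depth * max_msg_len

MB = 1024 * 1024
GB = 1024 * MB

one_big    = worst_case_bytes(1, 10 * MB)         # the monthly 10 MB message
many_small = worst_case_bytes(100_000, 500)       # the hourly flood of tiny messages
combined   = worst_case_bytes(100_000, 10 * MB)   # a DLQ sized to absorb both cases
extreme    = worst_case_bytes(999_999_999, 100 * MB)

print(f"one big:    {one_big / GB:.3f} GB")
print(f"many small: {many_small / GB:.3f} GB")
print(f"combined:   {combined / GB:.1f} GB")      # ~976.6 GB before Max Q Depth is hit
print(f"extreme:    {extreme / GB:.0f} GB")
```

Either individual case is tens of megabytes; sized for both at once, the worst case is nearly a terabyte, which is the point of the question.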

How do you protect against this 3rd app? You can do all you want with setting this app's Max Q Depths and Max Message Sizes, but nothing prevents the app from sending unlimited numbers of messages that are < Max Message Size of their SVRCONN channel as long as they fit into the XMITQ's Max Message Size. And then they can swamp the remote QM's DLQ.

I can set artificially low values for Size and Depth on the DLQ and the XMITQs to push the failure 'up the chain' until the problem app gets a failed MQPUT because the XMITQ behind their Remote Q def is full, but now I'm setting artificially low limits for all other well behaved apps and causing premature QM to QM channel hard stops to prevent that one app from filling a DLQ and then an XMITQ.

At MQ 7.1 I can set up a dedicated set of QM to QM channels with dedicated XMITQs for this 3rd app and set their channels to not use a DLQ and set their queues and XMITQs to an artificially low limit. That way when they fill things up they are only impacting themselves. But that's a one-off and sets a bad example. Pretty soon I'm doing this for every app and I have a million SNDR/RCVR channels. Doesn't scale.
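For what it's worth, the 7.1 workaround described above might be sketched in MQSC along these lines; the channel and queue names, limits, and conname are all illustrative:

```
DEFINE QLOCAL('APPC.XMITQ') USAGE(XMITQ) MAXDEPTH(5000) MAXMSGL(4194304)
DEFINE CHANNEL('QM1.QM2.APPC') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('qm2.example.com(1414)') XMITQ('APPC.XMITQ') USEDLQ(NO)
```

USEDLQ(NO) on the channel (available from 7.1) is what keeps this app's failures out of the shared DLQ.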

I wish we could throttle a SVRCONN channel to limit the number of bytes or number of messages an app could inject into the MQ layer per hour or per day.

Peter Potkay




************************************************************
This communication, including attachments, is for the exclusive use of addressee and may contain proprietary, confidential and/or privileged information. If you are not the intended recipient, any use, copying, disclosure, dissemination or distribution is strictly prohibited. If you are not the intended recipient, please notify the sender immediately by return e-mail, delete this communication and destroy all copies.
************************************************************


--------------------------------------------------------------------------

List Archive - Manage Your List Settings - Unsubscribe

Instructions for managing your mailing list subscription are provided in the Listserv General Users Guide available at http://www.lsoft.com



To unsubscribe, write to LISTSERV-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org and,
in the message body (not the subject), write: SIGNOFF MQSERIES
Instructions for managing your mailing list subscription are provided in
the Listserv General Users Guide available at http://www.lsoft.com
Archive: http://listserv.meduniwien.ac.at/archives/mqser-l.html
Potkay, Peter M (CTO and Service Mgmt)
2014-01-29 21:57:10 UTC
Permalink
A Dead Letter Handler that moves the message to another queue doesn't help - all the queues are tied to the same storage. If you have enough disk to hold these additional queues for the DLH to put to, you have enough space for the DLQ to hold more to begin with.

Have the DLH move the messages to some other Dead Letter Queue Manager? Yeah, I guess we could do that at the expense of another MQ server (admin costs, license costs), but even then all you are doing is delaying the inevitable.

I used this analogy in the thread on mqseries.net on this topic. Night clubs have bouncers at the entrances to not only control who gets in, but how many get in and how fast they get in. I want a bouncer on each of my entry points into my MQ club. I don't think it's unreasonable to be able to control how much data gets injected into my MQ club.

You can do all the design work and requirements vetting and queue sizing and monitoring you want - the next morning the legitimate app in the DEV environment starts looping, dumping millions of messages into your QM. Or the Monday morning after a production release the Business Analyst mutters to themselves: "Pesky metric system, I guess it's 100,000 customers sending 1 MB messages and not 1,000 customers..."

As it stands now any suitably authorized app can send any number of messages any time they want.
When dealing with a single QM and local queues only, the problem is trivial to solve. Just set Max Q Depth x Max Message Size. Done. If the app goes nuts, they fill their queue and only they get impacted. Life is good. But as soon as you introduce distributed queuing and QM to QM channels and DLQs and XMITQs, one app can do a lot of damage.

For sure not every app is a candidate for a shared environment. My point is that any app in any environment can go from Dr Jekyll to Mr Hyde and you can't proactively do anything about it.

I'll be opening two RFEs and will share the links.

The first RFE will be for IBM to allow MQ Admins to set throttle limits on channels so an MQ Admin can restrict how many bytes or how many messages can go over that channel per minute, hour or day. And the option to make those limits persist across QM restarts, but more importantly across channel restarts, so the MQ Client app can't get cute and connect and disconnect for each message it wants to send - although that would seriously throttle the rate by itself.
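The throttle this RFE asks for would presumably behave something like a token bucket. Nothing like this exists in MQ itself; the class name, limits, and semantics below are a hypothetical sketch of the idea only:

```python
import time

class ChannelThrottle:
    """Token bucket limiting bytes per interval for one (hypothetical) channel."""

    def __init__(self, max_bytes: int, per_seconds: float):
        self.capacity = max_bytes                 # bytes allowed per window
        self.tokens = float(max_bytes)            # start with a full bucket
        self.rate = max_bytes / per_seconds       # refill rate, bytes/second
        self.last = time.monotonic()

    def try_send(self, msg_bytes: int) -> bool:
        """True if the message fits under the limit; False = reject or stop the channel."""
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last call, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if msg_bytes <= self.tokens:
            self.tokens -= msg_bytes
            return True
        return False

throttle = ChannelThrottle(max_bytes=1_000_000, per_seconds=3600)  # ~1 MB per hour
print(throttle.try_send(500))        # prints True: small message fits
print(throttle.try_send(2_000_000))  # prints False: burst exceeds the window
```

Whether a rejection should slow the MQPUT, fail it with a reason code, or stop the channel is exactly the design question debated later in this thread.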

The second RFE is to allow MQ Admins to allocate dedicated storage per queue. That way we can put all our system queues (not the DLQ) on separate storage from the app queues, so the queue manager isn't forced to keel over just because one app put too many messages. Or, to allow us to map an app queue to an app-owned chunk of storage - if they want to queue 100 GB of data, they can pay for 100 GB of storage. Or, relevant to the title of this post, imagine being able to create a 750 GB NAS qtree to mount to each MQ server, and then assign all your DLQs to this one common chunk of storage. The odds of multiple DLQs needing a giant amount of storage at the same time are tiny, but at any given time any one DLQ would have the ability to queue up to 750 GB of dead letter messages. And even if it did fill up, the QM would not croak since its own storage had plenty of space.


Peter Potkay


From: MQSeries List [mailto:MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org] On Behalf Of rweinger-5mf8PG+***@public.gmane.org
Sent: Tuesday, January 28, 2014 6:15 PM
To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org
Subject: Re: How big is your DLQ?


You can trigger a DLQ handler to move the messages to an 'app error queue'. You would have to size those, but it won't shut down your channels. Then get some SLA with the app owner as to what to do with them. Most of our volume is request-reply and whatever ends up on the DLQ is usually discardable.
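A minimal runmqdlq rules table along the lines rweinger describes might look like this; the queue and queue manager names are examples. The first line tells the handler which DLQ to service, the second forwards queue-full casualties to a per-app error queue (keeping the dead letter header for later analysis), and the last is a catch-all retry:

```
INPUTQ('SYSTEM.DEAD.LETTER.QUEUE') INPUTQM('QM2') WAIT(60)
REASON(MQRC_Q_FULL) ACTION(FWD) FWDQ('APPC.ERROR.QUEUE') HEADER(YES)
ACTION(RETRY) RETRY(3)
```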




From: "Potkay, Peter M (CTO and Service Mgmt)" <Peter.Potkay-***@public.gmane.org>
To: <MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org>
Date: 01/28/2014 05:00 PM
Subject: How big is your DLQ?
Sent by: MQSeries List <MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org>


Glenn Baddeley
2014-01-29 22:48:10 UTC
Permalink
That's exactly what we do in our enterprise internal MQ environment and it
works really well. System DLQ is maxdepth 500000, maxmsgl 104857600. App
DLQs are maxdepth 50000. We monitor and alert on the curdepths using Tivoli.
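In MQSC terms, the limits Glenn quotes would be set along these lines (the app DLQ name is illustrative):

```
ALTER QLOCAL('SYSTEM.DEAD.LETTER.QUEUE') MAXDEPTH(500000) MAXMSGL(104857600)
DEFINE QLOCAL('APPC.DLQ') MAXDEPTH(50000)
DISPLAY QLOCAL('SYSTEM.DEAD.LETTER.QUEUE') CURDEPTH
```

The last command is what a monitor such as Tivoli would poll to alert on rising depths.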

HTH,
Glenn Baddeley

On Tue, 28 Jan 2014 18:14:42 -0500, rweinger-5mf8PG+***@public.gmane.org wrote:
>You can trigger a DLQ handler to move the messages to an 'app error
>queue'. You would have to size those, but it won't shut down your
>channels. Then get some SLA with the app owner as to what to do with
them.
>Most of our volume is request-reply and whatever ends up on the DLQ is
>usually discardable.
>

Potkay, Peter M (CTO and Service Mgmt)
2014-01-30 22:00:30 UTC
Permalink
All things being equal, and rarely they are, I would rather have a solution that is implemented at the channel level versus the QM level.

A channel level solution can be implemented for SVRCONN Channel A, without touching channels B thru Z, and without restarting the QM. Yeah, I realize you can’t control what mix of channels are running in any one amqrmppa process, and maybe that exit for that one channel is now somehow a participant in that amqrmppa pool, so all channels are impacted anyway.

A channel level solution can be backed off for a channel by changing and restarting that channel, versus restarting the QM.

In my own little world, I happen to already have an API exit running from some other vendor. I don’t want to chain API exits. When things go bump it gets complicated if multiple API exits are being called.

In my own little world, I’m only concerned about throttling MQ Clients. I do realize this functionality, if implemented, would have broad appeal to customers who have lots of apps that connect in bindings mode, so an API exit would be needed for them.


Besides throttling, maybe an option is to just slam the door shut when the limit is exceeded. In that case the existing MQRC that tells you your channel was ended by an exit would suffice. But the ability to throttle is definitely a desire. Clear and concise evidence of the slowdown due to intended throttling would be required; otherwise, hello “MQ performance issue” PMRs left and right. Just stopping the connection cold when the limit has been exceeded is, I imagine, a far simpler use case to code for, and frankly would suit me just fine.


Peter Potkay


From: MQSeries List [mailto:***@LISTSERV.MEDUNIWIEN.AC.AT] On Behalf Of Paul Clarke
Sent: Wednesday, January 29, 2014 3:30 PM
To: ***@LISTSERV.MEDUNIWIEN.AC.AT
Subject: Re: How big is your DLQ?

Hi Roger,

Comments added below.

Cheers,
Paul.

Paul Clarke
www.mqgem.com

From: Roger Lacroix
Sent: Wednesday, January 29, 2014 7:11 PM
To: ***@LISTSERV.MEDUNIWIEN.AC.AT
Subject: Re: How big is your DLQ?

Hi Paul,

> I'm not really sure why you think you need the channel name in the exit. If you are trying to throttle puts to particular queues does it really matter which channel is doing it?

Yes and no. I see your point but if you have the same application (i.e. app001) connecting from different servers using different channels and you wanted to throttle the application (i.e. app001) connecting from a particular server, you would need the channel name. This may sound strange but I have seen it.

And suppose you wanted to control local applications or receiver channels ? Anyway, I’m not saying that you couldn’t do it in a receiver exit. If you prefer a receiver exit then go for it but I think I’d still prefer an Api exit for the reasons I mentioned before.

If you have 20 applications sending request messages to a queue for a server component for processing and all client applications are Java or Java/JMS and the MQMD Put-Application Name field (for every message) has "Websphere MQ Client for Java", how do you know which application to throttle? The channel name in this case (hopefully) would help pare it down to a single application rather than throttling all 20 applications.

Two things here. Firstly, from 7.5 onwards I thought the Java client identified itself by the main class rather than the rather useless "WebSphere MQ Client for Java". Secondly, it would surprise me if the identity context was filled in in the MQMD on an MQPUT and I don’t think you can rely on it. The identity context will be set at connect time and changed by the QM accordingly (unless the application has used one of the SET_IDENTITY_CONTEXT options, of course).


> However, I don't see how you would be any better off if you decided to implement this in a receiver exit.

Oh no. I see the issue in both an API exit and a receive exit.

> ie. the MQPUT fails with MQRC_YOU_ARE_SWAMPING_ME.

Now that's an RFE worth creating!! :)

> However it does have the third advantage (that I failed to vocalise last night) that you can decide what reason code to give the application if you ultimately decide enough is enough.

I've tried to do that for years; when API exit code sets ExitResponse = MQXCC_SUPPRESS_FUNCTION it always returns an RC of 2374 (MQRC_API_EXIT_ERROR) to the application no matter what I set for CC and RC. So, if you have some pointers, I'm all ears.

Are you trying to overcomplicate things? Have you tried just setting the ReasonCode field in your API Exit?

Regards,
Roger Lacroix
Capitalware Inc.

At 12:36 AM 1/29/2014, Paul Clarke wrote:
I’m not really sure why you think you need the channel name in the exit. If you are trying to throttle puts to particular queues does it really matter which channel is doing it?

I share your concern about artificially slowing down the channel. It is, without doubt, a risk that making the channel less responsive might cause the ‘other side’ to time out. You are correct that after ‘heartbeat seconds plus a bit’ the client end would think the server has gone away. However, I don’t see how you would be any better off if you decided to implement this in a receiver exit. The same thing would happen. After all, the receiver exit and API crossing exit will be invoked on the same thread, will they not? The likelihood is that you would be slowing down the MQPUT call by a much smaller amount of time than the heartbeat interval, though, so the channel timing out ought not to be too much of an issue. Having said that, surely there does come a point, if you are constantly slowing down a putting application, where perhaps that application should be told in some way, ie. the MQPUT fails with MQRC_YOU_ARE_SWAMPING_ME.

The bottom line is that all the solutions I have heard mention will have different behaviours which may or may not be what you wish for. I suggested the API crossing exit because it seemed closest to what Peter was asking for and seemed the most controllable. I’m not necessarily advocating that it is the ‘right’ solution. However it does have the third advantage (that I failed to vocalise last night) that you can decide what reason code to give the application if you ultimately decide enough is enough. You can’t do that in a receive exit, the application will essentially see a channel failure.

Of course MQ’s philosophy tends to be that everyone puts to the DLQ (or something similar) and you deal with badly behaved applications ‘offline’. However, I agree with Peter that MQ should have some sort of ‘push back’ for impedance matching and have long advocated this in MQ development. Sadly, ‘push back’ has never made it to the top of anyone’s list so if you want it you are down to writing either wrappers or exits. And, as we all know, it is harder to get ‘ideal’ behaviour in an exit than if it is coded in the Queue Manager.

Cheers,
P.

Paul Clarke
www.mqgem.com

From: Roger Lacroix<mailto:***@ROGERS.COM>
Sent: Wednesday, January 29, 2014 1:28 AM
To: ***@LISTSERV.MEDUNIWIEN.AC.AT<mailto:***@LISTSERV.MEDUNIWIEN.AC.AT>
Subject: Re: How big is your DLQ?

Hi Paul,

True but the problem with an API Exit is that the channel name is only available in WMQ v7.1 and higher. Where as, the receive exit has the channel name in any version of MQ.

My only concern (ok, 1 of many), is what is going to happen if an exit decides that the application needs a "5 second timeout" because it has reached its max limit. Would the client-side MCA think the server-side has gone away? Hence, break the connection and return 2009 to the client application? I don't know.

Putting throttle into an exit will test the limits of MQ and MQ's MCA - "unpredictable behavior" is a sentence I can hear IBM saying.

Regards,
Roger Lacroix
Capitalware Inc.

At 06:40 PM 1/28/2014, you wrote:

Well, for two main reasons......

* It should really apply to all applications....not just channels. Who is to say a locally bound application won’t throw a wobbly too
* Receive exits are tricky. IBM doesn’t publish the format of them and you are not supposed to reverse engineer. If you wanted to do anything even slightly sophisticated, for example throttle only PUTs to the DLQ, then it would be very hard to do in a receive exit. And, of course, clients don’t have Message Exits.
there may be other reasons but it’s late [Image removed by sender. Smile]

Paul Clarke
www.mqgem.com<http://www.mqgem.com/>

From: Roger Lacroix<mailto:***@ROGERS.COM>
Sent: Tuesday, January 28, 2014 11:17 PM
To: ***@LISTSERV.MEDUNIWIEN.AC.AT<mailto:***@LISTSERV.MEDUNIWIEN.AC.AT>
Subject: Re: How big is your DLQ?

Hi Paul,

My first thought was a channel receive exit. Can I ask why you thought an API Exit is a good choice?

Regards,
Roger Lacroix
Capitalware Inc.

At 05:58 PM 1/28/2014, you wrote:

Hi Peter,

Perhaps not what you are looking for but it would be fairly easy to write an API Crossing Exit which did exactly that. Of course it may be more difficult to explain to the application programmer why you decided to slow down his messages and I’m not sure I could help you there.

Cheers,
Paul.

Paul Clarke
www.mqgem.com<http://www.mqgem.com/>

From: Potkay, Peter M (CTO and Service Mgmt)<mailto:***@THEHARTFORD.COM>
Sent: Tuesday, January 28, 2014 10:00 PM
To: ***@LISTSERV.MEDUNIWIEN.AC.AT<mailto:***@LISTSERV.MEDUNIWIEN.AC.AT>
Subject: How big is your DLQ?

In a shared environment (multiple apps sharing the QM), on a QM that has multiple channels to and from other QMs, how big do you make your QM's DLQ?

For the DLQ's Max Message Size you want to be able to DLQ that occasional 10 MB message that one app sends 2 of once a month.

For the DLQ's Max Q Depth you want to be able to DLQ the 100,000 little 500 byte messages that other app sends every hour.

So in a shared environment you are forced to make the one DLQ able to handle the big messages and the numerous messages. Either use case on its own is not a problem in the DLQ - one 10 MB or 100K of the 500 bytes - who cares.

And then come along app C. The start pumping messages as fast as they can to their remote queue on QM1 aiming at QM2. And because you set their Max Q depth and Max Q size on QM2 properly, they quickly fill their queues and start spilling into the DLQ.

And now you see why I ask the question - given that you have to cater the DLQ to the occasional single big message and the occasional big group of tiny messages, the DLQ is really big, and this 3rd app has the ability to put a lot of data into the DLQ. Probably so much that the disk space fills up before the DLQ's Max Q depth is reached.

I could make the DLQ's Max Q Depth really low so if full of the biggest possible messages it won’t fill disk, to protect against this 3rd app, but then that harmless batch of 100K tiny messages that we were able to DLQ easily will now cause the channel from the other QM to stop. Or, I could leave the Max Q Depth of the DLQ high and knock down the Max message Length, but then that one lonely 10 MB message that I was able to DLQ in the past will cause the channel to stop.

If you multiplied your DLQ's Max Q Depth times its Max message Size, what do you end up with? 1 GB? 10 GB? Did you just max it out at 999,999,999 of 100 Mb and cross your fingers and toes?

How do you protect against this 3rd app? You can do all you want with setting this apps Max Q Depths and Max Message Sizes, but nothing prevents the app from sending unlimited numbers of messages that are < Max Message Size of their SVRCONN channel as long as they fit into the XMITQ's Max Message Size. And then they can swamp the remote QM's DLQ.

I can set artificially low values for Size and Depth on the DLQ and the XMITQs to push the failure 'up the chain' until the problem app gets a failed MQPUT because the XMITQ behind their Remote Q def is full, but now I'm setting artificially low limits for all other well behaved apps and causing premature QM to QM channel hard stops to prevent that one app from filling a DLQ and then an XMITQ.

At MQ 7.1 I can set up a dedicated set of QM to QM channels with dedicated XMITQs for this 3rd app and set their channels to not use a DLQ and set their queues and XMITQs to an artificially low limit. That way when they fill things up they are only impacting themselves. But that's a one off and sets a bad example. Pretty soon I'm doing this for every app and I have a million SNDR/RCVR channels. Doesn't scale.

I wish we could throttle a SVRCONN channel to limit the number of bytes or number of messages an app could inject into the MQ layer per hour or per day.

Peter Potkay




************************************************************
This communication, including attachments, is for the exclusive use of addressee and may contain proprietary, confidential and/or privileged information. If you are not the intended recipient, any use, copying, disclosure, dissemination or distribution is strictly prohibited. If you are not the intended recipient, please notify the sender immediately by return e-mail, delete this communication and destroy all copies.
************************************************************
________________________________
List Archive<http://listserv.meduniwien.ac.at/archives/mqser-l.html> - Manage Your List Settings<http://listserv.meduniwien.ac.at/cgi-bin/wa?SUBED1=mqser-l&A=1> - Unsubscribe<mailto:***@LISTSERV.MEDUNIWIEN.AC.AT?subject=Unsubscribe&BODY=signoff%20mqseries>

Instructions for managing your mailing list subscription are provided in the Listserv General Users Guide available at http://www.lsoft.com<http://www.lsoft.com/resources/manuals.asp>

Paul Clarke
2014-01-30 22:11:41 UTC
Permalink
Well, it sounds as though you really do want something simple and unsophisticated. If you just want to throttle total amount of traffic (ie. across all queues) then this would be a fairly trivial channel exit to write. Of course you could tie yourself in knots providing the ‘clear and concise evidence’ since putting a message to an event queue is perhaps not the wisest given what we are trying to avoid :)

I’m not sure I entirely follow your reasoning that an API exit requires a QM shutdown to change though. It is just code after all; if you want it to be dynamically switchable, that could be coded into it.

Cheers,
Paul.

Paul Clarke
www.mqgem.com

From: Potkay, Peter M (CTO and Service Mgmt)
Sent: Thursday, January 30, 2014 10:00 PM
To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org
Subject: Re: How big is your DLQ?

All things being equal, and rarely they are, I would rather have a solution that is implemented at the channel level versus the QM level.



A channel level solution can be implemented for SVRCONN Channel A, without touching channels B thru Z, and without restarting the QM. Yeah, I realize you can’t control what mix of channels are running in any one amqrmppa process, and maybe that exit for that one channel is now somehow a participant in that amqrmppa pool, so all channels are impacted anyway.



A channel level solution can be backed off for a channel by changing and restarting that channel, versus restarting the QM.



In my own little world, I happen to already have an API exit running from some other vendor. I don’t want to chain API exits. When things go bump it gets complicated if multiple API exits are being called.



In my own little world, I’m only concerned about throttling MQ Clients. I do realize this functionality, if implemented, would have broad appeal to customers who have lots of apps that connect in bindings mode, so an API exit would be needed for them.





Besides throttling, maybe an option is to just slam the door shut when the limit is exceeded. In that case the existing MQRC that tells you your channel was ended by an exit would suffice. But the ability to throttle is definitely a desire. Clear and concise evidence of the slowdown due to intended throttling would be required; otherwise, hello “MQ performance issue” PMRs left and right. Just stopping the connection cold when the limit has been exceeded is a far simpler use case to code for, I imagine, and frankly would suit me just fine.
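The two enforcement styles contrasted above, slowing the caller down versus cutting it off once the limit is hit, differ only in what a guard does on overrun. A hypothetical sketch in plain Python (QuotaExceeded, the mode names, and the delay formula are all invented for illustration; a real exit would signal through its exit response and reason-code fields instead):

```python
class QuotaExceeded(Exception):
    """Raised in 'refuse' mode once the budget is spent (invented name)."""

def guard_put(bytes_used, byte_budget, mode="throttle", delay_per_overrun=0.05):
    """Decide what to do before a put, given bytes already used this window.

    'throttle' returns a sleep duration proportional to the overrun;
    'refuse'   raises once over budget, modeling a hard connection stop.
    """
    overrun = bytes_used - byte_budget
    if overrun <= 0:
        return 0.0            # under budget: no delay, no error
    if mode == "refuse":
        raise QuotaExceeded(f"{overrun} bytes over budget")
    # throttle: back off harder the further over budget the app is
    return delay_per_overrun * (overrun / byte_budget)
```

The "refuse" branch is the simpler contract to explain to an application team: a clean error at a known threshold, rather than mysteriously slow puts.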





Peter Potkay





From: MQSeries List [mailto:MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org] On Behalf Of Paul Clarke
Sent: Wednesday, January 29, 2014 3:30 PM
To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org
Subject: Re: How big is your DLQ?



Hi Roger,



Comments added below.



Cheers,
Paul.



Paul Clarke
www.mqgem.com



From: Roger Lacroix

Sent: Wednesday, January 29, 2014 7:11 PM

To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org

Subject: Re: How big is your DLQ?



Hi Paul,

> I'm not really sure why you think you need the channel name in the exit. If you are trying to throttle puts to particular queues, does it really matter which channel is doing it?

Yes and no. I see your point but if you have the same application (i.e. app001) connecting from different servers using different channels and you wanted to throttle the application (i.e. app001) connecting from a particular server, you would need the channel name. This may sound strange but I have seen it.



And suppose you wanted to control local applications or receiver channels? Anyway, I’m not saying that you couldn’t do it in a receiver exit. If you prefer a receiver exit then go for it, but I think I’d still prefer an API exit for the reasons I mentioned before.

If you have 20 applications sending request messages to a queue for a server component for processing, and all client applications are Java or Java/JMS, and the MQMD Put-Application Name field (for every message) has "WebSphere MQ Client for Java", how do you know which application to throttle? The channel name in this case (hopefully) would help pare it down to a single application rather than throttling all 20 applications.



Two things here. Firstly, from 7.5 onwards I thought the Java client identified itself by the main class rather than the rather useless "WebSphere MQ Client for Java". Secondly, it would surprise me if the identity context were being filled in on the MQMD at MQPUT time, and I don’t think you can rely on it. The identity context will be set at connect time and changed by the QM accordingly (unless the application has used one of the SET_IDENTITY_CONTEXT options, of course).



> However, I don't see how you would be any better off if you decided to implement this in a receiver exit.

Oh no. I see the issue in both an API exit and a receive exit.

> ie. the MQPUT fails with MQRC_YOU_ARE_SWAMPING_ME.

Now that's an RFE worth creating!! :)

> However it does have the third advantage (that I failed to vocalise last night) that you can decide what reason code to give the application if you ultimately decide enough is enough.

I've tried to do that for years: when the API exit code sets ExitResponse = MQXCC_SUPPRESS_FUNCTION it always returns RC of 2374 (MQRC_API_EXIT_ERROR) to the application no matter what I set for CC and RC. So, if you have some pointers, I'm all ears.



Are you trying to overcomplicate things? Have you tried just setting the ReasonCode field in your API Exit?


Regards,
Roger Lacroix
Capitalware Inc.

At 12:36 AM 1/29/2014, Paul Clarke wrote:

I’m not really sure why you think you need the channel name in the exit. If you are trying to throttle puts to particular queues, does it really matter which channel is doing it?

I share your concern about artificially slowing down the channel. It is, without doubt, a risk that making the channel less responsive might cause the ‘other side’ to time out. You are correct that after ‘heartbeat seconds plus a bit’ the client end would think the server has gone away. However, I don’t see how you would be any better off if you decided to implement this in a receiver exit. The same thing would happen. After all, the receiver exit and API crossing exit will be invoked on the same thread, will they not? The likelihood is that you would be slowing down the MQPUT call by a much smaller amount of time than the heartbeat interval, though, so the channel timing out ought not to be too much of an issue. Having said that, surely there does come a point, if you are constantly slowing down a putting application, where perhaps that application should be told in some way, i.e. the MQPUT fails with MQRC_YOU_ARE_SWAMPING_ME.
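The rule of thumb here, that any injected pause must stay well below the heartbeat interval, can be made concrete. A minimal sketch in plain Python (the 10% safety factor is an invented assumption, and hbint_secs is just a number here, not read from the channel definition):

```python
def capped_delay(desired_delay_secs, hbint_secs, safety_factor=0.1):
    """Cap a throttling pause so it stays far below the channel heartbeat.

    safety_factor=0.1 (an assumption) keeps each pause to at most 10% of
    the heartbeat interval, so the partner should never mistake deliberate
    throttling for a dead channel.
    """
    return min(desired_delay_secs, hbint_secs * safety_factor)
```

With a 300-second heartbeat, a throttle asking for a 60-second pause would be capped at 30 seconds per call; longer backlogs get spread over many puts instead of one dangerous stall.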

The bottom line is that all the solutions I have heard mentioned will have different behaviours which may or may not be what you wish for. I suggested the API crossing exit because it seemed closest to what Peter was asking for and seemed the most controllable. I’m not necessarily advocating that it is the ‘right’ solution. However it does have the third advantage (that I failed to vocalise last night) that you can decide what reason code to give the application if you ultimately decide enough is enough. You can’t do that in a receive exit; the application will essentially see a channel failure.

Of course MQ’s philosophy tends to be that everyone puts to the DLQ (or something similar) and you deal with badly behaved applications ‘offline’. However, I agree with Peter that MQ should have some sort of ‘push back’ for impedance matching and have long advocated this in MQ development. Sadly, ‘push back’ has never made it to the top of anyone’s list so if you want it you are down to writing either wrappers or exits. And, as we all know, it is harder to get ‘ideal’ behaviour in an exit than if it is coded in the Queue Manager.

Cheers,
P.

Paul Clarke
www.mqgem.com

From: Roger Lacroix
Sent: Wednesday, January 29, 2014 1:28 AM
To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org
Subject: Re: How big is your DLQ?

Hi Paul,

True, but the problem with an API Exit is that the channel name is only available in WMQ v7.1 and higher. Whereas the receive exit has the channel name in any version of MQ.

My only concern (ok, 1 of many), is what is going to happen if an exit decides that the application needs a "5 second timeout" because it has reached its max limit. Would the client-side MCA think the server-side has gone away? Hence, break the connection and return 2009 to the client application? I don't know.

Putting throttling into an exit will test the limits of MQ and MQ's MCA - "unpredictable behavior" is a phrase I can hear IBM saying.

Regards,
Roger Lacroix
Capitalware Inc.

At 06:40 PM 1/28/2014, you wrote:



Well, for two main reasons:

* It should really apply to all applications, not just channels. Who is to say a locally bound application won’t throw a wobbly too?
* Receive exits are tricky. IBM doesn’t publish the format of them and you are not supposed to reverse engineer. If you wanted to do anything even slightly sophisticated, for example throttle only PUTs to the DLQ, it would be very hard to do in a receive exit. And, of course, clients don’t have Message Exits.

There may be other reasons, but it’s late.

Paul Clarke
www.mqgem.com

From: Roger Lacroix
Sent: Tuesday, January 28, 2014 11:17 PM
To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org
Subject: Re: How big is your DLQ?

Hi Paul,

My first thought was a channel receive exit. Can I ask why you thought an API Exit is a good choice?

Regards,
Roger Lacroix
Capitalware Inc.

At 05:58 PM 1/28/2014, you wrote:



Hi Peter,

Perhaps not what you are looking for but it would be fairly easy to write an API Crossing Exit which did exactly that. Of course it may be more difficult to explain to the application programmer why you decided to slow down his messages and I’m not sure I could help you there.

Cheers,
Paul.

Paul Clarke
www.mqgem.com

From: Potkay, Peter M (CTO and Service Mgmt)
Sent: Tuesday, January 28, 2014 10:00 PM
To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org
Subject: How big is your DLQ?

In a shared environment (multiple apps sharing the QM), on a QM that has multiple channels to and from other QMs, how big do you make your QM's DLQ?

For the DLQ's Max Message Size you want to be able to DLQ that occasional 10 MB message that one app sends 2 of once a month.

For the DLQ's Max Q Depth you want to be able to DLQ the 100,000 little 500 byte messages that other app sends every hour.

So in a shared environment you are forced to make the one DLQ able to handle the big messages and the numerous messages. Either use case on its own is not a problem in the DLQ - one 10 MB or 100K of the 500 bytes - who cares.

And then along comes app C. They start pumping messages as fast as they can to their remote queue on QM1, aiming at QM2. And because you set their Max Q Depth and Max Message Size on QM2 properly, they quickly fill their queues and start spilling into the DLQ.

And now you see why I ask the question - given that you have to size the DLQ for the occasional single big message and the occasional big group of tiny messages, the DLQ is really big, and this 3rd app has the ability to put a lot of data into the DLQ. Probably so much that the disk space fills up before the DLQ's Max Q Depth is reached.

I could make the DLQ's Max Q Depth really low so if full of the biggest possible messages it won’t fill disk, to protect against this 3rd app, but then that harmless batch of 100K tiny messages that we were able to DLQ easily will now cause the channel from the other QM to stop. Or, I could leave the Max Q Depth of the DLQ high and knock down the Max message Length, but then that one lonely 10 MB message that I was able to DLQ in the past will cause the channel to stop.

If you multiplied your DLQ's Max Q Depth times its Max Message Size, what do you end up with? 1 GB? 10 GB? Did you just max it out at 999,999,999 messages of 100 MB and cross your fingers and toes?
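That multiplication is worth doing explicitly; a quick sketch with illustrative numbers (assumptions, not recommendations):

```python
def worst_case_bytes(max_depth, max_msg_len):
    """Upper bound on DLQ disk use: every slot holds a maximum-size message."""
    return max_depth * max_msg_len

# Illustrative settings only: a DLQ sized for 100,000 messages at a
# 10 MB max length exposes roughly a terabyte of disk in the worst case,
# which is why depth alone is a poor guard against a runaway producer.
exposure = worst_case_bytes(100_000, 10 * 10 ** 6)
```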

How do you protect against this 3rd app? You can do all you want with setting this app's Max Q Depths and Max Message Sizes, but nothing prevents the app from sending unlimited numbers of messages that are < Max Message Size of their SVRCONN channel, as long as they fit into the XMITQ's Max Message Size. And then they can swamp the remote QM's DLQ.

I can set artificially low values for Size and Depth on the DLQ and the XMITQs to push the failure 'up the chain' until the problem app gets a failed MQPUT because the XMITQ behind their Remote Q def is full, but now I'm setting artificially low limits for all other well behaved apps and causing premature QM to QM channel hard stops to prevent that one app from filling a DLQ and then an XMITQ.

At MQ 7.1 I can set up a dedicated set of QM to QM channels with dedicated XMITQs for this 3rd app, set their channels to not use a DLQ, and set their queues and XMITQs to an artificially low limit. That way, when they fill things up they are only impacting themselves. But that's a one-off and sets a bad example. Pretty soon I'm doing this for every app and I have a million SNDR/RCVR channels. Doesn't scale.

I wish we could throttle a SVRCONN channel to limit the number of bytes or number of messages an app could inject into the MQ layer per hour or per day.

Peter Potkay




Potkay, Peter M (CTO and Service Mgmt)
2014-01-30 22:15:37 UTC
Permalink
“I’m not sure I entirely follow your reasoning that an API exit requires a QM shutdown to change though.”

Sorry, wasn’t clear. Maybe too concise ☺

Implementing an API exit, or removing it, requires a QM outage. Changing what the API exit does once it is loaded can, I assume, be done in real time with no outage to the QM.



Peter Potkay


From: MQSeries List [mailto:***@LISTSERV.MEDUNIWIEN.AC.AT] On Behalf Of Paul Clarke
Sent: Thursday, January 30, 2014 5:12 PM
To: ***@LISTSERV.MEDUNIWIEN.AC.AT
Subject: Re: How big is your DLQ?

Well, it sounds as though you really do want something simple and unsophisticated. If you just want to throttle total amount of traffic (ie. across all queues) then this would be a fairly trivial channel exit to write. Of course you could tie yourself in knots providing the ‘clear and concise evidence’ since putting a message to an event queue is perhaps not the wisest given what we are trying to avoid [Smile]

I’m not sure I entirely follow your reasoning that an API exit requires a QM shutdown to change though. It is just code after all, if you want it to be dynamically switchable then that could be coded into it.

Cheers,
Paul.

Paul Clarke
www.mqgem.com

From: Potkay, Peter M (CTO and Service Mgmt)<mailto:***@THEHARTFORD.COM>
Sent: Thursday, January 30, 2014 10:00 PM
To: ***@LISTSERV.MEDUNIWIEN.AC.AT<mailto:***@LISTSERV.MEDUNIWIEN.AC.AT>
Subject: Re: How big is your DLQ?

All things being equal, and rarely they are, I would rather have a solution that is implemented at the channel level versus the QM level.

A channel level solution can be implemented for SVRCONN Channel A, without touching channels B thru Z, and without restarting the QM. Yeah, I realize you can’t control what mix of channels are running in any one amqrmppa process, and maybe that exit for that one channel is now somehow a participant in that amqrmppa pool, so all channels are impacted anyway.

A channel level solution can be backed off for a channel by changing and restarting that channel, versus restarting the QM.

In my own little world, I happen to already have an API exit running from some other vendor. I don’t want to chain API exits. When things go bump it gets complicated if multiple API exits are being called.

In my own little world, I’m only concerned about throttling MQ Clients. I do realize this functionality if implemented would have broad appeal to customers who have lots of apps that connect in binding’s mode, so an API exit would be needed for them.


Besides throttling maybe an option is to just slam the door shut when the limit is exceeded. In that case the existing MQRC that tells you your channel was ended by an exit would suffice. But the ability to throttle is definitely a desire. Clear and concise evidence of the slowdown due to intended throttling would be required, otherwise hello “MQ performance issue” PMRs left and right. Just stopping the connection cold when the limit has been exceeded is a far simpler use case to code for I imagine, and frankly would suit me just fine.


Peter Potkay

From: MQSeries List [mailto:***@LISTSERV.MEDUNIWIEN.AC.AT] On Behalf Of Paul Clarke
Sent: Wednesday, January 29, 2014 3:30 PM
To: ***@LISTSERV.MEDUNIWIEN.AC.AT<mailto:***@LISTSERV.MEDUNIWIEN.AC.AT>
Subject: Re: How big is your DLQ?

Hi Roger,

Comments added below.

Cheers,
Paul.

Paul Clarke
www.mqgem.com<http://www.mqgem.com>

From: Roger Lacroix<mailto:***@ROGERS.COM>
Sent: Wednesday, January 29, 2014 7:11 PM
To: ***@LISTSERV.MEDUNIWIEN.AC.AT<mailto:***@LISTSERV.MEDUNIWIEN.AC.AT>
Subject: Re: How big is your DLQ?

Hi Paul,

> I'm not really sure why you think you need the channel name in the exit. If you are trying to throttle puts to particular queues does it really mater which channel is doing it ?

Yes and no. I see your point but if you have the same application (i.e. app001) connecting from different servers using different channels and you wanted to throttle the application (i.e. app001) connecting from a particular server, you would need the channel name. This may sound strange but I have seen it.

And suppose you wanted to control local applications or receiver channels ? Anyway, I’m not saying that you couldn’t do it in a receiver exit. If you prefer a receiver exit then go for it but I think I’d still prefer an Api exit for the reasons I mentioned before.

If you have 20 applications sending request messages to a queue for a server component for processing and all client applications are Java or Java/JMS and the MQMD Put-Application Name field (for every message) has "Websphere MQ Client for Java", how do you know which application to throttle. The channel name in this case (hopefully) would help par it down to a single application rather than throttling all 20 applications.

Two things here. Firstly, from 7.5 onwards I thought the Java client identified itself by the main class rather than the rather useless "WebSphere MQ Client for Java". Secondly, it would surprise me if the identity context was filling on in the MQMD on an MQPUT and I don’t think you can rely on it. The identity context will be set at connect time and changed by the QM according (unless the application has used one of the SET_IDENTITY_CONTEXT options of course.


> However, I don't see how you would be any better off if you decided to implement this in a receiver exit.

Oh no. I see the issue in both an API exit and a receive exit.

> ie. the MQPUT fails with MQRC_YOU_ARE_SWAMPING_ME.

Now that's an RFE worth creating!! :)

> However it does have the third advantage (that I failed to vocalise last night) that you can decide what reason code to give the application if you ultimately decide enough is enough.

I've tried to do that for years, when API code sets ExitResponse = MQXCC_SUPPRESS_FUNCTION it always returns RC of 2374 (MQRC_API_EXIT_ERROR) to the application no matter what I set for CC and RC. So, if you have some pointers, I'm all hears.

Are you trying to over complicate things ? have you tried just setting the ReasonCode field in your API Exit ?

Regards,
Roger Lacroix
Capitalware Inc.

At 12:36 AM 1/29/2014, Paul Clarke wrote:
I’m not really sure why you think you need the channel name in the exit. If you are trying to throttle puts to particular queues does it really mater which channel is doing it ?

I share your concern about artificially slowing down the channel. It is, without doubt, a risk that making the channel less responsive might cause the ‘other side’ to time out. You are correct that after ‘heartbeat seconds plus a bit’ the client end would think the server has gone away. However, I don’t see how you would be any better off if you decided to implement this in a receiver exit. The same thing would happen. After all, the receiver exit and API crossing exit will be invoked on the same thread will they not ? The likelihood is that you would be slowing down the MQPUT call by a much smaller amount of time than the heartbeat interval though so the channel timing out ought not to be too much of an issue. Having said that surely there does come a point, if you are constantly slowing down a putting application, where perhaps that application should be told in some way ie. the MQPUT fails with MQRC_YOU_ARE_SWAMPING_ME.

The bottom line is that all the solutions I have heard mention will have different behaviours which may or may not be what you wish for. I suggested the API crossing exit because it seemed closest to what Peter was asking for and seemed the most controllable. I’m not necessarily advocating that it is the ‘right’ solution. However it does have the third advantage (that I failed to vocalise last night) that you can decide what reason code to give the application if you ultimately decide enough is enough. You can’t do that in a receive exit, the application will essentially see a channel failure.

Of course MQ’s philosophy tends to be that everyone puts to the DLQ (or something similar) and you deal with badly behaved applications ‘offline’. However, I agree with Peter that MQ should have some sort of ‘push back’ for impedance matching and have long advocated this in MQ development. Sadly, ‘push back’ has never made it to the top of anyone’s list so if you want it you are down to writing either wrappers or exits. And, as we all know, it is harder to get ‘ideal’ behaviour in an exit than if it is coded in the Queue Manager.

Cheers,
P.

Paul Clarke
www.mqgem.com<http://www.mqgem.com/>

From: Roger Lacroix<mailto:***@ROGERS.COM>
Sent: Wednesday, January 29, 2014 1:28 AM
To: ***@LISTSERV.MEDUNIWIEN.AC.AT<mailto:***@LISTSERV.MEDUNIWIEN.AC.AT>
Subject: Re: How big is your DLQ?

Hi Paul,

True but the problem with an API Exit is that the channel name is only available in WMQ v7.1 and higher. Where as, the receive exit has the channel name in any version of MQ.

My only concern (ok, 1 of many), is what is going to happen if an exit decides that the application needs a "5 second timeout" because it has reached its max limit. Would the client-side MCA think the server-side has gone away? Hence, break the connection and return 2009 to the client application? I don't know.

Putting throttle into an exit will test the limits of MQ and MQ's MCA - "unpredictable behavior" is a sentence I can hear IBM saying.

Regards,
Roger Lacroix
Capitalware Inc.

At 06:40 PM 1/28/2014, you wrote:
Well, for two main reasons......

* It should really apply to all applications....not just channels. Who is to say a locally bound application won’t throw a wobbly too
* Receive exits are tricky. IBM doesn’t publish the format of them and you are not supposed to reverse engineer. If you wanted to do anything even slightly sophisticated, for example throttle only PUTs to the DLQ, then it would be very hard to do in a receive exit. And, of course, clients don’t have Message Exits.
there may be other reasons but it’s late [Image removed by sender. Smile]

Paul Clarke
www.mqgem.com<http://www.mqgem.com/>

From: Roger Lacroix<mailto:***@ROGERS.COM>
Sent: Tuesday, January 28, 2014 11:17 PM
To: ***@LISTSERV.MEDUNIWIEN.AC.AT<mailto:***@LISTSERV.MEDUNIWIEN.AC.AT>
Subject: Re: How big is your DLQ?

Hi Paul,

My first thought was a channel receive exit. Can I ask why you thought an API Exit is a good choice?

Regards,
Roger Lacroix
Capitalware Inc.

At 05:58 PM 1/28/2014, you wrote:
Hi Peter,

Perhaps not what you are looking for but it would be fairly easy to write an API Crossing Exit which did exactly that. Of course it may be more difficult to explain to the application programmer why you decided to slow down his messages and I’m not sure I could help you there.

Cheers,
Paul.

Paul Clarke
www.mqgem.com<http://www.mqgem.com/>

From: Potkay, Peter M (CTO and Service Mgmt)<mailto:***@THEHARTFORD.COM>
Sent: Tuesday, January 28, 2014 10:00 PM
To: ***@LISTSERV.MEDUNIWIEN.AC.AT<mailto:***@LISTSERV.MEDUNIWIEN.AC.AT>
Subject: How big is your DLQ?

In a shared environment (multiple apps sharing the QM), on a QM that has multiple channels to and from other QMs, how big do you make your QM's DLQ?

For the DLQ's Max Message Size you want to be able to DLQ that occasional 10 MB message that one app sends 2 of once a month.

For the DLQ's Max Q Depth you want to be able to DLQ the 100,000 little 500 byte messages that other app sends every hour.

So in a shared environment you are forced to make the one DLQ able to handle the big messages and the numerous messages. Either use case on its own is not a problem in the DLQ - one 10 MB or 100K of the 500 bytes - who cares.

And then come along app C. The start pumping messages as fast as they can to their remote queue on QM1 aiming at QM2. And because you set their Max Q depth and Max Q size on QM2 properly, they quickly fill their queues and start spilling into the DLQ.

And now you see why I ask the question - given that you have to cater the DLQ to the occasional single big message and the occasional big group of tiny messages, the DLQ is really big, and this 3rd app has the ability to put a lot of data into the DLQ. Probably so much that the disk space fills up before the DLQ's Max Q depth is reached.

I could make the DLQ's Max Q Depth really low so if full of the biggest possible messages it won’t fill disk, to protect against this 3rd app, but then that harmless batch of 100K tiny messages that we were able to DLQ easily will now cause the channel from the other QM to stop. Or, I could leave the Max Q Depth of the DLQ high and knock down the Max message Length, but then that one lonely 10 MB message that I was able to DLQ in the past will cause the channel to stop.

If you multiplied your DLQ's Max Q Depth by its Max Message Size, what do you end up with? 1 GB? 10 GB? Did you just max it out at a depth of 999,999,999 with 100 MB messages and cross your fingers and toes?

How do you protect against this 3rd app? You can do all you want with setting this app's Max Q Depths and Max Message Sizes, but nothing prevents the app from sending unlimited numbers of messages that are < the Max Message Size of their SVRCONN channel, as long as they fit into the XMITQ's Max Message Size. And then they can swamp the remote QM's DLQ.

I can set artificially low values for Size and Depth on the DLQ and the XMITQs to push the failure 'up the chain' until the problem app gets a failed MQPUT because the XMITQ behind their Remote Q def is full, but now I'm setting artificially low limits for all other well-behaved apps and causing premature QM to QM channel hard stops to prevent that one app from filling a DLQ and then an XMITQ.

At MQ 7.1 I can set up a dedicated set of QM to QM channels with dedicated XMITQs for this 3rd app, set their channels to not use a DLQ, and set their queues and XMITQs to an artificially low limit. That way when they fill things up they are only impacting themselves. But that's a one-off and sets a bad example. Pretty soon I'm doing this for every app and I have a million SNDR/RCVR channels. Doesn't scale.

I wish we could throttle a SVRCONN channel to limit the number of bytes or number of messages an app could inject into the MQ layer per hour or per day.

Peter Potkay




************************************************************
This communication, including attachments, is for the exclusive use of addressee and may contain proprietary, confidential and/or privileged information. If you are not the intended recipient, any use, copying, disclosure, dissemination or distribution is strictly prohibited. If you are not the intended recipient, please notify the sender immediately by return e-mail, delete this communication and destroy all copies.
************************************************************
________________________________
List Archive<http://listserv.meduniwien.ac.at/archives/mqser-l.html> - Manage Your List Settings<http://listserv.meduniwien.ac.at/cgi-bin/wa?SUBED1=mqser-l&A=1> - Unsubscribe<mailto:***@LISTSERV.MEDUNIWIEN.AC.AT?subject=Unsubscribe&BODY=signoff%20mqseries>

Instructions for managing your mailing list subscription are provided in the Listserv General Users Guide available at http://www.lsoft.com<http://www.lsoft.com/resources/manuals.asp>

Paul Clarke
2014-01-30 23:06:27 UTC
Permalink
Exactly, so I'm not sure that this is a good enough reason to avoid using an API exit. There will be times (maintenance windows, etc.) when the QM has to come down, and at that point one could presumably install an API exit. Provided it is sufficiently configurable at runtime, so that one could switch it on and off and change what it operates on, I don't see that as too much of a problem. If one wrote the exit well enough, with some form of bootstrap loader, then I suspect you could even change the version of the API exit at runtime too, without requiring a QM outage.

P.

Paul Clarke
www.mqgem.com

From: Potkay, Peter M (CTO and Service Mgmt)
Sent: Thursday, January 30, 2014 10:15 PM
To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org
Subject: Re: How big is your DLQ?

“I’m not sure I entirely follow your reasoning that an API exit requires a QM shutdown to change though.”



Sorry, wasn't clear. Maybe too concise. :)



To implement an API exit, or remove it, requires a QM outage. To change what the API exit is doing once loaded, I assume can be done real time and with no outage to the QM.







Peter Potkay





From: MQSeries List [mailto:MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org] On Behalf Of Paul Clarke
Sent: Thursday, January 30, 2014 5:12 PM
To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org
Subject: Re: How big is your DLQ?



Well, it sounds as though you really do want something simple and unsophisticated. If you just want to throttle the total amount of traffic (i.e. across all queues) then this would be a fairly trivial channel exit to write. Of course, you could tie yourself in knots providing the 'clear and concise evidence', since putting a message to an event queue is perhaps not the wisest choice given what we are trying to avoid.



I’m not sure I entirely follow your reasoning that an API exit requires a QM shutdown to change though. It is just code after all, if you want it to be dynamically switchable then that could be coded into it.



Cheers,
Paul.



Paul Clarke
www.mqgem.com



From: Potkay, Peter M (CTO and Service Mgmt)

Sent: Thursday, January 30, 2014 10:00 PM

To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org

Subject: Re: How big is your DLQ?



All things being equal, and rarely they are, I would rather have a solution that is implemented at the channel level versus the QM level.



A channel level solution can be implemented for SVRCONN Channel A, without touching channels B thru Z, and without restarting the QM. Yeah, I realize you can't control what mix of channels is running in any one amqrmppa process, and maybe the exit for that one channel is now somehow a participant in that amqrmppa pool, so all channels are impacted anyway.



A channel level solution can be backed off for a channel by changing and restarting that channel, versus restarting the QM.



In my own little world, I happen to already have an API exit running from some other vendor. I don’t want to chain API exits. When things go bump it gets complicated if multiple API exits are being called.



In my own little world, I’m only concerned about throttling MQ Clients. I do realize this functionality if implemented would have broad appeal to customers who have lots of apps that connect in binding’s mode, so an API exit would be needed for them.





Besides throttling, maybe an option is to just slam the door shut when the limit is exceeded. In that case the existing MQRC that tells you your channel was ended by an exit would suffice. But the ability to throttle is definitely a desire. Clear and concise evidence of the slowdown due to intended throttling would be required; otherwise, hello "MQ performance issue" PMRs left and right. Just stopping the connection cold when the limit has been exceeded is a far simpler use case to code for, I imagine, and frankly would suit me just fine.





Peter Potkay



From: MQSeries List [mailto:MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org] On Behalf Of Paul Clarke
Sent: Wednesday, January 29, 2014 3:30 PM
To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org
Subject: Re: How big is your DLQ?



Hi Roger,



Comments added below.



Cheers,
Paul.



Paul Clarke
www.mqgem.com



From: Roger Lacroix

Sent: Wednesday, January 29, 2014 7:11 PM

To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org

Subject: Re: How big is your DLQ?



Hi Paul,

> I'm not really sure why you think you need the channel name in the exit. If you are trying to throttle puts to particular queues does it really matter which channel is doing it?

Yes and no. I see your point but if you have the same application (i.e. app001) connecting from different servers using different channels and you wanted to throttle the application (i.e. app001) connecting from a particular server, you would need the channel name. This may sound strange but I have seen it.



And suppose you wanted to control local applications or receiver channels? Anyway, I'm not saying that you couldn't do it in a receiver exit. If you prefer a receiver exit then go for it, but I think I'd still prefer an API exit for the reasons I mentioned before.

If you have 20 applications sending request messages to a queue for a server component to process, and all the client applications are Java or Java/JMS, and the MQMD put application name field (for every message) has "WebSphere MQ Client for Java", how do you know which application to throttle? The channel name in this case (hopefully) would help pare it down to a single application, rather than throttling all 20 applications.



Two things here. Firstly, from 7.5 onwards I thought the Java client identified itself by the main class rather than the rather useless "WebSphere MQ Client for Java". Secondly, it would surprise me if the identity context was filled in in the MQMD on an MQPUT, and I don't think you can rely on it. The identity context will be set at connect time and changed by the QM accordingly (unless the application has used one of the SET_IDENTITY_CONTEXT options, of course).



> However, I don't see how you would be any better off if you decided to implement this in a receiver exit.

Oh no. I see the issue in both an API exit and a receive exit.

> ie. the MQPUT fails with MQRC_YOU_ARE_SWAMPING_ME.

Now that's an RFE worth creating!! :)

> However it does have the third advantage (that I failed to vocalise last night) that you can decide what reason code to give the application if you ultimately decide enough is enough.

I've tried to do that for years; when the API exit code sets ExitResponse = MQXCC_SUPPRESS_FUNCTION it always returns an RC of 2374 (MQRC_API_EXIT_ERROR) to the application, no matter what I set for CC and RC. So, if you have some pointers, I'm all ears.



Are you trying to overcomplicate things? Have you tried just setting the ReasonCode field in your API exit?


Regards,
Roger Lacroix
Capitalware Inc.

At 12:36 AM 1/29/2014, Paul Clarke wrote:

I'm not really sure why you think you need the channel name in the exit. If you are trying to throttle puts to particular queues does it really matter which channel is doing it?

I share your concern about artificially slowing down the channel. It is, without doubt, a risk that making the channel less responsive might cause the 'other side' to time out. You are correct that after 'heartbeat seconds plus a bit' the client end would think the server has gone away. However, I don't see how you would be any better off if you decided to implement this in a receiver exit. The same thing would happen. After all, the receiver exit and API crossing exit will be invoked on the same thread, will they not? The likelihood is that you would be slowing down the MQPUT call by a much smaller amount of time than the heartbeat interval, though, so the channel timing out ought not to be too much of an issue. Having said that, surely there does come a point, if you are constantly slowing down a putting application, where perhaps that application should be told in some way, i.e. the MQPUT fails with MQRC_YOU_ARE_SWAMPING_ME.

The bottom line is that all the solutions I have heard mentioned will have different behaviours which may or may not be what you wish for. I suggested the API crossing exit because it seemed closest to what Peter was asking for and seemed the most controllable. I'm not necessarily advocating that it is the 'right' solution. However, it does have the third advantage (that I failed to vocalise last night) that you can decide what reason code to give the application if you ultimately decide enough is enough. You can't do that in a receive exit; the application will essentially see a channel failure.

Of course MQ’s philosophy tends to be that everyone puts to the DLQ (or something similar) and you deal with badly behaved applications ‘offline’. However, I agree with Peter that MQ should have some sort of ‘push back’ for impedance matching and have long advocated this in MQ development. Sadly, ‘push back’ has never made it to the top of anyone’s list so if you want it you are down to writing either wrappers or exits. And, as we all know, it is harder to get ‘ideal’ behaviour in an exit than if it is coded in the Queue Manager.

Cheers,
P.

Paul Clarke
www.mqgem.com

From: Roger Lacroix
Sent: Wednesday, January 29, 2014 1:28 AM
To: MQSERIES-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org
Subject: Re: How big is your DLQ?

Hi Paul,

True, but the problem with an API exit is that the channel name is only available in WMQ v7.1 and higher, whereas the receive exit has the channel name in any version of MQ.

My only concern (OK, one of many) is what is going to happen if an exit decides that the application needs a "5 second timeout" because it has reached its max limit. Would the client-side MCA think the server side has gone away, and hence break the connection and return 2009 to the client application? I don't know.

Putting a throttle into an exit will test the limits of MQ and MQ's MCAs - "unpredictable behavior" is a phrase I can hear IBM saying.

Regards,
Roger Lacroix
Capitalware Inc.

At 06:40 PM 1/28/2014, you wrote:

Well, for two main reasons......

a. It should really apply to all applications, not just channels. Who is to say a locally bound application won't throw a wobbly too?
b. Receive exits are tricky. IBM doesn't publish the format of the data and you are not supposed to reverse engineer it. If you wanted to do anything even slightly sophisticated, for example throttle only PUTs to the DLQ, then it would be very hard to do in a receive exit. And, of course, clients don't have Message Exits.

There may be other reasons but it's late.

Paul Clarke
www.mqgem.com



