Discussion:
MQ shunting question
Marsh, Marcella
2014-04-28 12:39:00 UTC
Permalink
Hello all,

We are running MQ for z/OS v7.1 and we are getting the following message periodically:

CSQR026I MQ1 LONG-RUNNING UOW SHUNTED TO RBA=037F57314B5D, URID=037F5554E0C4 connection name=MQT1CHIN

Directly after this message we issue a DISPLAY QSTATUS(*) WHERE(UNCOM NE NO) to identify the queue(s) involved.
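For reference, the sequence looks something like this from the console (our command prefix and a made-up queue name shown; if I remember right, TYPE(HANDLE) then shows which connections hold the uncommitted messages):

```mqsc
* list queues that have uncommitted messages
-MQ1 DISPLAY QSTATUS(*) TYPE(QUEUE) WHERE(UNCOM NE NO)
* then drill into a suspect queue to see the handles/connections
-MQ1 DISPLAY QSTATUS(APPL.REQUEST.QUEUE) TYPE(HANDLE)
```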

In an effort to do some performance tuning, we had the application teams increase their commit frequency. This was done in an attempt to get rid of the shunting, and for the most part the shunting for those queues went away. However, we have several applications that we have changed from committing every 1000 messages to 500, then 250, then 100, to no avail.

Can anyone explain how MQ determines when shunting takes place? At what point does increasing the commit frequency potentially cause other performance issues?

We also have an application that uses an MQ get wait interval of 2 hours so that its batch job waits for the responses. Does anyone have any insight into best practice for this? Is there another way to do this that won't cause a long-running unit of work or shunting?
Marcy Marsh
MF Hosting Transactional, Messaging, Database (TMD) Technical Services
Office: 317-581-8134 Indianapolis
Cell: 317-691-5830
E-mail: ***@libertymutual.com




To unsubscribe, write to LISTSERV-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org and,
in the message body (not the subject), write: SIGNOFF MQSERIES
Instructions for managing your mailing list subscription are provided in
the Listserv General Users Guide available at http://www.lsoft.com
Archive: http://listserv.meduniwien.ac.at/archives/mqser-l.html
Bob Buxton
2014-05-02 14:33:39 UTC
Permalink
As the term implies, a long-running UOW is related to time, not the number of
messages, so you can't eliminate the problem solely by reducing the number of
messages between commits - if they are arriving slowly, any number greater than
1 could result in a long-running UOW!

There is no inherent problem with committing frequently; a common model is:
get a message, update the database, put the reply, commit.

If you do want to batch work before committing, use a short get wait
interval; when you get no message available, commit before going into your long wait.
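A rough sketch of that batching loop, in Python with a stand-in queue (the real thing would be MQGET with a wait interval and a syncpoint commit; all names here are illustrative, not the MQ API):

```python
import queue

def drain_and_commit(q, commit, batch_limit=100, short_wait=0.05):
    """Process available messages in batches; commit whenever the queue
    goes quiet, so no unit of work stays open across a long wait."""
    processed = 0
    batch = 0
    while True:
        try:
            msg = q.get(timeout=short_wait)   # short get-wait, not hours
        except queue.Empty:
            if batch:
                commit()                      # commit before the long wait
                batch = 0
            break                             # caller can now wait safely
        processed += 1
        batch += 1
        if batch >= batch_limit:
            commit()                          # mid-stream syncpoint
            batch = 0
    return processed
```

The point is simply that the commit always happens before the application goes idle, so the unit of work never spans the quiet period.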

Bob Buxton

Norbert Pfister
2014-05-02 14:13:18 UTC
Permalink
Hi Marcy,

I can't answer your first question directly,
but in our company we use utilities like CSQ1LOGP (with parameter
EXTRACT(YES)) and SupportPac MP1B to identify the bad eggs.
Please ask for further details.

For your second question:
You have V7.1, so applications can use the callback function instead of a get
wait (http://pic.dhe.ibm.com/infocenter/wmqv7/v7r1/topic/com.ibm.mq.doc/fg20500_.htm).
This will surely save resources.
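I can't show real MQCB code here, but conceptually the difference looks like this toy Python sketch: a background delivery thread invokes your callback per message, instead of your job blocking in a 2-hour get wait (everything here is a stand-in, not the MQ API):

```python
import queue
import threading

class CallbackConsumer:
    """Toy stand-in for callback-style delivery: a background thread hands
    each arriving message to a registered callback, so the application
    never sits in a multi-hour blocking get."""

    _STOP = object()  # sentinel used to shut the delivery loop down

    def __init__(self, q, on_message):
        self._q = q
        self._on_message = on_message
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def _run(self):
        while True:
            msg = self._q.get()       # delivery loop, not application code
            if msg is self._STOP:
                return
            self._on_message(msg)     # application callback runs per message

    def stop(self):
        self._q.put(self._STOP)
        self._thread.join()
```

With this shape there is no long-open get, and each callback invocation can commit its own short unit of work.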

HTH
Norbert
Lyn Elkins
2014-05-04 14:37:46 UTC
Permalink
Hi Marcella,

It looks like all the 'usual suspects' have been mentioned by others, so I
will ask about some of the less typical causes.

Has the volume of persistent messages increased suddenly? If the log usage
has gone up a lot, and the logs are switching more frequently, log shunting
rates can also increase. While not typical I've seen some instances of log
shunting and reports of long running UOW increase substantially when new
persistent workload was added to an existing queue manager, whether just
from volume or application changes. And this can show up in workload that
was not increased, but just happens to be in the same queue manager.

I've also seen it in situations where a heavy volume was split across two
queue managers. The commit point had been tuned to volume X, and when
volume X/2 became the new norm, the commit point needed to be reduced. It sounds
from the other responses as if you have checked into the commit interval.

Have your logs decreased in size? One situation I have seen a couple of
times was where several new log files were added to the active log string,
but they were very small. In one case the new logs were allocated at just
1,000 records, for some reason I never learned. Once the queue manager got to
those log files, the amount of shunting and switching went up rather
remarkably! In fact one customer reported a situation of an entire small
log file filling with shunts, and that stopped shunt processing.

These are far less common reasons, but I have seen them from time to time.
Explains my rapidly whitening hair.

Lyn

Marsh, Marcella
2014-05-12 16:03:08 UTC
Permalink
Lyn,

Our logs have not decreased in size, but interestingly enough, during the times when some of our heaviest shunting occurs we see our logs filling more rapidly. Perhaps part of the issue is that our logs are too small, although they seem huge to us at 14595 TRKS. If we need to increase the size of our logs, are there any performance issues with a larger log size? Are there any best practices for sizing these logs?

Marcy Marsh
MF Hosting Transactional, Messaging, Database (TMD) Technical Services
Office: 317-581-8134 Indianapolis
Cell: 317-691-5830
E-mail: ***@libertymutual.com




Yagudayeva, Irina
2014-05-12 16:53:56 UTC
Permalink
Hi Marcella,
Our logs are twice as large. We increased them a while ago to extend the period between archiving. The logs get full in less than 10 minutes when we have high activity in production.
It all depends on the size and persistence of the messages, and on what's going on in the application that processes them.
I spent significant time researching long-running UOWs and shunting.
I captured all the CICS transactions and Windows processes causing those long-running UOWs, then worked with developers to see whether any code changes were possible to avoid them.
Backend processing in all those cases involves complicated searches in DB2 tables, and they take a long time. There is not much you can do about the business logic in your app,
unless you can convince your developers to dump the messages into a database after a Get and Commit.
To me this just shifts the long UOW from MQ to the database.
I gave up and increased the number and size of the logs.
We use 12 dual logs of 1,500 cylinders each.
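That staging idea - Get, Commit the MQ unit of work immediately, park the message in a database, and do the slow backend work afterwards - might be sketched like this in Python, with SQLite standing in for DB2 (illustrative only; table and function names are made up):

```python
import sqlite3

def stage_message(db, msg_id, payload):
    """After the MQ Get and Commit, park the payload in a staging table.
    The MQ unit of work is already closed, so nothing is left open long
    enough to be shunted."""
    db.execute(
        "INSERT INTO staged(msg_id, payload, done) VALUES (?, ?, 0)",
        (msg_id, payload),
    )
    db.commit()  # short DB transaction replaces the long MQ one

def process_staged(db, slow_backend):
    """Later (or on another thread), work through the staged rows,
    committing each one as it finishes."""
    rows = db.execute(
        "SELECT msg_id, payload FROM staged WHERE done = 0").fetchall()
    for msg_id, payload in rows:
        slow_backend(payload)  # e.g. the long-running DB2 search
        db.execute("UPDATE staged SET done = 1 WHERE msg_id = ?", (msg_id,))
        db.commit()
```

As I said, this mostly relocates the long-running work rather than eliminating it, but it does keep the MQ log clean.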


Lyn Elkins
2014-05-13 16:33:38 UTC
Permalink
Hi Marcella,

Guidance on the size of the logs is given in SupportPac MP16, which can be found here:

http://www-01.ibm.com/support/docview.wss?rs=171&uid=swg24007421&loc=en_US&cs=utf-8&lang=en

The current log file sizes may already be at the maximum usable size (it depends on the archiving target, if memory serves), so what you have observed may simply be increased volume making transactions appear to run longer than they did formerly. They are probably not longer in clock time, but they are longer when measured in log file switches.

You may have discovered a new time unit! But do take a look at the SupportPac; Tony did a really good job of spelling out the various log file sizes.
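As a back-of-envelope check on your 14595-track logs (assuming 3390 DASD with twelve 4 KB control intervals per usable track - do verify against your own allocation before trusting the numbers):

```python
# Assumed geometry: 3390 DASD, twelve 4 KiB CIs per track (~48 KiB/track).
TRACKS = 14595
BYTES_PER_TRACK = 12 * 4096

log_bytes = TRACKS * BYTES_PER_TRACK
log_mib = log_bytes / (1024 * 1024)

def minutes_to_fill(write_rate_mib_per_sec):
    """How long one active log lasts at a sustained log-write rate."""
    return (log_mib / write_rate_mib_per_sec) / 60

print(round(log_mib))                  # ~684 MiB per active log
print(round(minutes_to_fill(2.0), 1)) # ~5.7 minutes at 2 MiB/s sustained
```

Plugging in your own peak log-write rate (from SMF or MP1B reports) tells you how fast you are switching, and how much headroom a bigger allocation would buy.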

Lyn

