Discussion:
Managed File Transfer and WMQ
David González Portusach
2014-04-10 14:07:56 UTC
Permalink
Hi,

We are testing MQ FTE transfers and we need to find the maximum transfer rate. We have a 10Gb network link, and we are using 4 agents against 1 queue manager on z/OS, sending to 1 queue manager on Linux, also with 4 agents.

We didn't get past 200MB/s. We tried several features, such as multiple channels enabled with a 512KB chunk size, and an MQ cluster across the network... and in fact none of the agents had memory problems.

When we doubled the number of receiving servers in EEDD, we reached up to 200MB/s per server, 400MB/s in total.

Does anybody know what the limit is, or the relationship between agents and servers? Since the queues are distinct and specific to each agent, the rate should scale independently on a perfect network, shouldn't it? I mean, I had understood that with more agents we could raise the throughput.

Has anybody reached more than 500MB/s? If so, what limit did you hit?

Obviously the rate is measured across several parallel transfers.

Thanks in advance




T.Rob
2014-04-10 14:25:32 UTC
Permalink
If you look at CPU utilization per-core, do you see one CPU maxed out? Last
time I ran into a throughput problem the CPU utilization looked great until
we looked at it per-core. It turned out that the thread running the channel
was maxing out its CPU but in aggregate the utilization appeared to be <
10%. I forget if we had to set the channel to use separate processes but do
remember we had to tune it a bit to get multiple channels running on
separate dedicated cores.
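
If it helps, here is a minimal sketch of how to check this on the Linux side, assuming the sysstat package is installed (on z/OS you would look at RMF or SDSF instead); the PID is a placeholder for whatever process is hosting the channel:

mpstat -P ALL 5          # per-core utilization, sampled every 5 seconds
pidstat -t -p <pid> 5    # per-thread CPU for one process, e.g. amqrmppa

If mpstat shows one core pinned at 100% while the rest idle, you have found the single-threaded bottleneck even though the aggregate number looks low.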



Kind regards,

-- T.Rob



T.Robert Wyatt, Managing partner

IoPT Consulting, LLC

+1 704-443-TROB

https://ioptconsulting.com

https://twitter.com/tdotrob



David González Portusach
2014-04-10 15:40:55 UTC
Permalink
Hi,

We have been checking overall CPU utilization, and we did not see high CPU usage.

How can we set the channels to use separate processes? We can try that and check.

Thank you.




T.Rob
2014-04-10 16:39:14 UTC
Permalink
So take a look at MCATYPE
(http://pic.dhe.ibm.com/infocenter/wmqv7/v7r5/topic/com.ibm.mq.ref.con.doc/q082000_.htm),
although I see the default is PROCESS for SDR and SVR channels.



Since the FTE agents use non-persistent messages and will recover from lost
messages, also look at NPMSPEED and make sure it is set to FAST:
http://pic.dhe.ibm.com/infocenter/wmqv7/v7r5/topic/com.ibm.mq.ref.con.doc/q082100_.htm
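
As a sketch, both attributes can be set together with runmqsc on the distributed queue manager; the queue manager name QM1 and channel name TO.LINUX.QM are placeholders for your own:

echo "ALTER CHANNEL(TO.LINUX.QM) CHLTYPE(SDR) MCATYPE(PROCESS) NPMSPEED(FAST)" | runmqsc QM1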



There is also a whole section on "Determining whether the channel can
process messages fast enough" here:
http://pic.dhe.ibm.com/infocenter/wmqv7/v7r5/topic/com.ibm.mq.mon.doc/q038110_.htm



So far you've told us the throughput numbers you have hit, but not much about
the diagnostics or tuning. Throughput depends on where the process is being
throttled. Somewhere it must hit a ceiling, and that is usually network,
CPU, memory, or disk I/O. It sounds like you are not hitting the network
limits, but that is an assumption so far not justified by any statistics.
For example, in most blade systems and large frames, the network I/O is
virtual NICs sharing one or a few physical NICs. So when comparing the new
implementation to EEDD (did you mean Quick-EDD by any chance?) we don't know
whether these are similar platforms, etc. It also sounds like you are not
hitting CPU or memory limits.



That leaves disk. Have you checked disk I/O? I once helped a customer
prove their issue was disk I/O by creating a virtual drive in memory and
running off of that. We saw close to 100-times improvement.
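
If you want to repeat that experiment, a rough sketch on Linux (the mount point and size are illustrative):

mkdir -p /mnt/mqramdisk
mount -t tmpfs -o size=8g tmpfs /mnt/mqramdisk
# point the transfer destination (or the queue manager's log and data
# directories) at the tmpfs mount and rerun the test

If the numbers jump, disk I/O is your ceiling.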



When you did multi-channel, did you confirm traffic across both channels?
If so, in what proportions did the traffic spread across both?
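
One way to check, as a sketch (QM1 and the channel name pattern are placeholders):

echo "DISPLAY CHSTATUS(TO.LINUX.*) MSGS BYTSSENT BYTSRCVD" | runmqsc QM1

Comparing MSGS and BYTSSENT across the channel instances shows whether the load is actually spreading or piling onto one channel.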



Are you seeing FDCs or error messages indicating channel or other problems?



You should be able to expect that WMQ will move traffic as fast as possible
until it hits some resource constraint. Eventually it reaches a point where
the resource that is throttling throughput can't be increased, and that's
your limit. But if you do not know which resource is throttling the
throughput, the first step is to identify it. If it is detectable by MQ,
you will usually see FDCs or errors pointing toward the culprit. If it is
not in MQ, then system capacity tools can usually spot the bottleneck.



Kind regards,

-- T.Rob



T.Robert Wyatt, Managing partner

IoPT Consulting, LLC

+1 704-443-TROB

https://ioptconsulting.com

https://twitter.com/tdotrob



George Carey
2014-04-12 06:16:41 UTC
Permalink
Say one is not already an MQ shop: what MFT product would be recommended as
best for partner interactions currently done extensively using SFTP file
transfers?

IBM's MQ FTE, a product like Linoma's GoAnywhere, or some other? Has anyone
done a head-to-head comparison of ease of setup, use, reliability, features,
robustness, etc.?

The GoAnywhere product seems to have a good set of features and can be used
with only a browser interface to transfer files; no MQ or queue managers
required.

It also seems to have a good set of intro videos that give a warm and fuzzy
feeling for how easy it is to use, yet robust. http://www.goanywheremft.com/videos



Glenn Baddeley
2014-04-13 23:41:24 UTC
Permalink
As T.Rob stated, MQ and FTE will always try to run as fast as possible. The
performance limit will usually be outside MQ and FTE, such as CPU, disk I/O,
network, or the OS. You just need to work out which one!

FTE is not designed for very high throughput; it's written in Java, after
all, and its usage of MQ messaging and queues is not highly optimized.

If you need high-throughput, reliable transfer of data, managed file
transfer using FTE is probably not the best choice. MQ point-to-point
messaging will likely be much faster.

HTH,
Glenn Baddeley
Senior Middleware Software Engineer
Coles Supermarkets Australia Pty Ltd

Michael Dag
2014-04-14 12:09:39 UTC
Permalink
With regard to your last two statements: what?!

Are you talking about a large number of small files, or something else? If
that's the case then I tend to agree...
But for large files I would like to see an alternative that is faster, more
reliable, and offers more control...

Michael Dag
www.mqsystems.com

Pere Guerrero Olmedo
2014-04-14 12:52:31 UTC
Permalink
My question is: what do you consider fast?

Nobody has mentioned this, in this thread or in others related to MQ FTE:
what limits has anybody actually reached?

In my case we've achieved 800 GB/h (roughly 220 MB/s) with one z/OS sender
and one Linux receiver. We've been asked to get to 4 TB/h (roughly 1.1
GB/s); has anybody achieved those numbers?

Thanks in advance.
Regards.
Pere


David González Portusach
2014-04-14 13:44:46 UTC
Permalink
Hi,

First of all, thank you for your answers.

In this architecture, we are trying to find the maximum throughput between one queue manager on the mainframe (MQ version 7.0.1) and 3 queue managers on Linux servers (MQ version 7.5.0.3). For each queue manager on Linux, we didn't get past 200-220 MB/s. We are using 4 FTE agents (in client mode) and files of 3 GB. We have 10Gb cards on both sides.

We also sent transfers to files placed directly in memory, and it improved the throughput, but not as much as we expected: 300 MB/s.

We think the bottleneck is not in the transport layer, because when we send transfers to different servers, each server sustains the same rate. This rules out the z/OS side: with 1 Linux server we achieve 200MB/s, and with 3 Linux servers (keeping 1 z/OS queue manager as sender) we reach 600MB/s.

We are using the multi-channel feature and have defined 20 channels, with traffic flowing correctly across each of them. We are using the MQ cluster feature, and these parameters in agent.properties:

agentMultipleChannelsEnabled=true
agentWindowSize=80
agentMessageBatchSize=40
agentChunkSize=524288
agentCheckpointInterval=400
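
For reference, the parallel transfers are driven with fteCreateTransfer along these lines (this is only a sketch; the agent, queue manager, and file names are placeholders):

# kick off four transfers in parallel, one per agent pair
for i in 1 2 3 4; do
  fteCreateTransfer -sa AGENT.ZOS$i -sm MQZ1 \
                    -da AGENT.LNX$i -dm MQL1 \
                    -df /data/in/file$i.bin /u/test/file$i.bin &
done
wait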


Apparently we didn't detect any problems with CPU or memory... but it seems that the FTE client has reached its ceiling.

I'll open a PMR to help us find the bottleneck.

Thank you.

Roger Lacroix
2014-04-14 14:53:40 UTC
Permalink
Hi George,

> Say one is not already an MQ shop: what MFT product would be recommended
> as best for partner interactions currently done extensively using SFTP
> file transfers?

Have you looked at the free open source project called Universal File Mover
(UFM)? http://www.capitalware.com/ufm_overview.html

It supports WebSphere MQ, of course, plus it supports sending/receiving
files via FTP, SFTP, SCP, HTTP and even Email (SMTP).

Regards,
Roger Lacroix
Capitalware Inc.

Lyn Elkins
2014-04-14 18:51:53 UTC
Permalink
Hi David,

One common bottleneck when using MFT on z/OS is that messages get flushed
from the buffer pool onto the page sets, introducing I/O that impacts
throughput and can often be avoided. Have you looked at your buffer pool
statistics to see whether I/O is taking place?
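
As a rough sketch, page set usage can be checked from the console with the z/OS MQSC command below (+CSQ1 stands in for your own command prefix); the buffer pool read/write activity itself shows up in the SMF 115 statistics records:

+CSQ1 DISPLAY USAGE PSID(*)

If the page sets are filling up during a transfer, messages are spilling out of the buffer pool and the I/O is hurting you.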





Glenn Baddeley
2014-04-14 23:28:28 UTC
Permalink
Hi Michael,

Yes, a large number of small files, where the overhead of managing a
transfer is high relative to the size of each file. I regard a small file
as anything less than one megabyte.

Glenn.

Glenn Baddeley
2014-04-14 23:37:29 UTC
Permalink
Have you tried using FTE in server (bindings) mode rather than client mode?
In client mode, all MQI and data traffic has to flow synchronously through
the TCP stack (and network) to and from the queue managers. In server mode,
the agent bypasses TCP and uses direct memory-to-memory transfer to the
local queue manager; MQ then handles the data transfer to and from remote
queue managers using a more efficient protocol across TCP.
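
As a sketch, the difference is just the connection entries in agent.properties (the names here are placeholders); if the host/port/channel entries are omitted, the agent connects in bindings (server) mode:

# client (TCP) connection to the agent queue manager
agentQMgr=MQL1
agentQMgrHost=linuxbox.example.com
agentQMgrPort=1414
agentQMgrChannel=SYSTEM.DEF.SVRCONN

# bindings (local) connection: leave out host, port, and channel
agentQMgr=MQL1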

Have you considered the impact of what happens if a 3GB file transfer fails
and FTE needs to recover?

Have you looked at alternatives like file replication products, typically used for
multiple data center high availability?

HTH,
Glenn.

T.Rob
2014-04-15 01:54:51 UTC
Permalink
Thanks Glenn, I was going to mention client mode as well. That adds a whole
other network hop, another channel, another core, etc.

But I'm puzzled over your assertions concerning FTE's design. Since it uses
non-persistent messages, it is possible to move files at wire speed and I've
helped people get some very impressive numbers. And if a 3GB transfer
fails, FTE will pick right up where it left off. The wording in your post
suggests FTE is somehow deficient in this regard compared to other
solutions. Am I reading that right?

I'm currently working an FTE engagement and the client is pumping massive
amounts of data over about 3k agents, all day, every day, and it's fast and
rock solid. I'm here to do a health check and won't have many
recommendations on the FTE aspects because it's working so well. (I will,
of course, have some recommendations regarding security so they will get
something more than a rubber stamp out of the engagement.)

Kind regards,
-- T.Rob

T.Robert Wyatt, Managing partner
IoPT Consulting, LLC
+1 704-443-TROB
https://ioptconsulting.com
https://twitter.com/tdotrob


Glenn Baddeley
2014-04-15 23:39:46 UTC
Permalink
Hi T.Rob,

I was concerned about the interruption to a 3GB transfer if FTE had a hiccup
and needed to recover and continue the transfer. It might mean that the
effective transfer rate won't meet service delivery expectations.

The company I work for has nearly 3K agents and averages nearly 200K
transfers per day, some up into the GB range. We make heavy use of FTE
client mode and find it quite robust, with good throughput for nearly all
needs.

We sometimes face delays recovering transfers and reconnecting agents, due
to unreliable and very slow networks at the agent client endpoints spread
all over Australia. We have a trail of PMRs and have found many bugs in FTE.

The main tuning point is agentChunkSize in agent.properties. There can be
benefit in tweaking it up or down from the default 32K.
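
For example (the value is purely illustrative):

# try doubling the chunk size from the default and re-measure
agentChunkSize=65536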

Glenn.
