Discussion:
Installing MQSeries binaries on SAN storage......
Coombs, Lawrence
2013-06-13 17:32:49 UTC
Permalink
I have a requirement to set up a three-node Veritas cluster so that a queue manager can run on any of the three nodes. Queue managers will be running on all three nodes at the same time (active/active/active).

Is it possible to install the binaries (/opt/mqm and /var/mqm) one time on SAN storage and then mount the file system to the node?
Even if this is possible, how could multiple queue managers run on one node? You can only mount a file system on one node at a time.



Neil Casey
2013-06-13 22:38:05 UTC
Permalink
Hi Lawrence,

This is an interesting thought, and one which was discussed as a
possibility for one of my customers some time ago. In essence, it is
similar to what happens with Solaris 10 zones.

I think that you should be able to make it work even without a clustered
file system (such as OCFS2, VxCFS, GPFS or GFS2).

Let's say you take one of your systems and designate it as primary... call
it SYS1.

Mount a small (say 2GB) LUN as a file system at /opt/mqm/inst1.

Perform an MQ installation (MQ 7.5 of course) at /opt/mqm/inst1.
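
For example, on Linux that might look roughly like this (the device name,
file-system type and package list are just illustrations; a real install
would include every MQ component you need):

   # create a file system on the small LUN and mount it
   mkfs -t ext4 /dev/mapper/mq_inst1
   mkdir -p /opt/mqm/inst1
   mount /dev/mapper/mq_inst1 /opt/mqm/inst1

   # accept the licence and install MQ 7.5 into the non-default location
   ./mqlicense.sh -accept
   rpm --prefix /opt/mqm/inst1 -ivh MQSeriesRuntime-*.rpm MQSeriesServer-*.rpm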

Now unmount that volume, and remount it read only. You should be able to
mount the same volume read only on all of your other systems (SYS2, SYS3
etc) at once. Whether this is possible probably depends on the exact
capabilities of the SAN you are using, but most will support concurrent
Read Only mounts, even if they don't support concurrent Read/Write.
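
As a rough sketch, assuming the LUN appears under the same device name on
every node:

   # on SYS1, once the install has finished
   umount /opt/mqm/inst1
   mount -o ro /dev/mapper/mq_inst1 /opt/mqm/inst1

   # on SYS2, SYS3, ... mount the same LUN read only
   mkdir -p /opt/mqm/inst1
   mount -o ro /dev/mapper/mq_inst1 /opt/mqm/inst1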

On each machine, you should now be able to use MQ either by setting up the
environment (setmqenv) or by making the installation primary (setmqinst).
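
Something along these lines on each node, for instance:

   # make this installation primary (needs root; it should not need to
   # write anything under the read-only /opt/mqm/inst1 itself)
   /opt/mqm/inst1/bin/setmqinst -i -p /opt/mqm/inst1

   # ...or just set up the current shell instead
   . /opt/mqm/inst1/bin/setmqenv -s
   dspmqver     # confirm which installation the shell is now using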

As far as I am aware, nothing on the /opt/mqm path should need to be
writeable at run time, so as long as you create /var/mqm and whichever
directories you want to use for queue manager data and logs, you should be
fine. The /var/mqm and other directories would be on private volumes, not
shared. If you want to add Multi-Instance support to the environment, then
your queue manager data and logs could be on a shared (RW) file system,
either with NFS v4, or using VxCFS, GPFS or GFS2, depending on what
platform you are running on.
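
A sketch of the multi-instance variant (queue manager name and mount points
are only examples, and the shared file system has to meet MQ's locking
requirements):

   # on SYS1: create the queue manager with data and logs on the shared RW
   # file system, while /var/mqm itself stays on the local private volume
   crtmqm -md /MQHA/QM1/data -ld /MQHA/QM1/log QM1

   # on SYS1: print the addmqinf command needed to register QM1 elsewhere
   dspmqinf -o command QM1

   # on SYS2 and SYS3: run the addmqinf command produced above, e.g.
   addmqinf -s QueueManager -v Name=QM1 -v Directory=QM1 -v Prefix=/var/mqm \
            -v DataPath=/MQHA/QM1/data/QM1

   # the first strmqm -x becomes the active instance, later ones standby
   strmqm -x QM1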

You still need to have licenses for all the systems of course, and I am
not sure that the systems produced this way would be officially supported,
so if you had problems, you might need to recreate them in a standard
environment before getting support.

Assuming that it all works, you should be able to do patching by defining
another file system (mounted at /opt/mqm/inst2). Install MQ there, and apply
a fix pack. Unmount, and remount read-only on all of your systems, and you
should be able to switch your queue managers to use the new patch level.
You can also switch your primary installation if you want to.
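
With MQ 7.5 the switch itself would look something like this for each queue
manager (a sketch; the installation name is whatever dspmqinst reports for
/opt/mqm/inst2):

   endmqm QM1
   # associate QM1 with the patched installation, then restart it from there
   /opt/mqm/inst2/bin/setmqm -m QM1 -n Installation2
   /opt/mqm/inst2/bin/strmqm QM1

   # optionally make the patched installation primary as well
   /opt/mqm/inst2/bin/setmqinst -i -p /opt/mqm/inst2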

Caveat: This is all just a thought experiment. I haven't tried to do any
of this myself, or seen it done successfully.



Regards

Neil Casey
Technical Consultant Messaging


Disclaimer: Opinions expressed are those of the author, and do not
represent any commitment (or anything else) from IBM.



From: "Coombs, Lawrence" <Lawrence.Coombs-***@public.gmane.org>
To: MQSERIES-JX7+OpRa80QeFbOYke1v4oOpTq8/***@public.gmane.org,
Date: 14/06/2013 03:38
Subject: Re: Installing MQSeries binaries on SAN storage......
Sent by: MQSeries List <MQSERIES-JX7+OpRa80QeFbOYke1v4oOpTq8/***@public.gmane.org>



I have a requirement to setup a three node Veritas cluster so that a queue
manager can run on any of the three nodes. Queue managers will be running
on all three nodes at the same time(active/active/active).

Is it possible to install the binaries (/opt/mqm and /var/mqm)one time on
SAN storage and then mount the file system to the node?
Even if this is possible how could multiple queue managers run on one
node? You can only mount a file system on one node at a time.



This message, including any attachments, is the property of Sears Holdings
Corporation and/or one of its subsidiaries. It is confidential and may
contain proprietary or legally privileged information. If you are not the
intended recipient, please delete it without reading the contents. Thank
you.


To unsubscribe, write to LISTSERV-0lvw86wZMd9k/bWDasg6f+***@public.gmane.org and,
in the message body (not the subject), write: SIGNOFF MQSERIES
Instructions for managing your mailing list subscription are provided in
the Listserv General Users Guide available at http://www.lsoft.com
Archive: http://listserv.meduniwien.ac.at/archives/mqser-l.html
Coombs, Lawrence
2013-06-13 23:07:43 UTC
Permalink
I am using 7.0.1.8, not 7.5.

Neil Casey
2013-06-13 23:53:03 UTC
Permalink
Hi Lawrence,

That makes things like patching a lot more difficult. You would need to
have MQ down on every system in order to apply a fix pack. You would bring
MQ down, unmount /opt/mqm, then mount /opt/mqm in Read/Write mode on one
system and apply the fix pack. It would have to be the same system that
originally installed MQ, because otherwise the RPM database won't be
right (on Linux, but there are equivalent issues on Solaris or AIX or
whatever).
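
Roughly, that outage would run like this (a sketch only; device names are
examples, and the fix pack itself is applied with whatever installer your
platform uses):

   # on every node: stop the queue managers, then release the shared binaries
   endmqm -i QM1            # repeat for each queue manager on the node
   umount /opt/mqm

   # on the original install system only: remount read/write and apply
   # the 7.0.1.x fix pack (rpm on Linux, installp on AIX, and so on)
   mount -o rw /dev/mapper/mq_opt /opt/mqm
   # ...apply maintenance here...
   umount /opt/mqm

   # on every node: remount read only and restart the queue managers
   mount -o ro /dev/mapper/mq_opt /opt/mqm
   strmqm QM1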

You would perform a normal install (to /opt/mqm) on one system, then
unmount it and remount it read-only on all systems.

On each system, after mounting /opt/mqm, you need to run a script to
create the symbolic links so that MQ will work. The script is
/opt/mqm/bin/crtmqlnk.

On each system, you create the /var/mqm directories and proceed as if for
a normal installation of MQ.
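
Per node, that might be along these lines (the crtmqlnk path is the one
described above; the /var/mqm tree otherwise has to end up matching what a
normal installation would create):

   # mount the shared binaries read only
   mount -o ro /dev/mapper/mq_opt /opt/mqm

   # recreate the symbolic links a local install would have made
   /opt/mqm/bin/crtmqlnk

   # create the local MQ data directory, owned by the mqm user and group
   mkdir -p /var/mqm
   chown mqm:mqm /var/mqm
   chmod 2775 /var/mqm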

I think you might have problems with GSKit. With MQ 7.5, the packaging of
GSKit has been redone so that it is fully integrated as an MQ component.
With MQ 7.0.1.?, GSKit will be separate. I haven't thought about what you
would need to do to make GSKit available on all your systems, but based on
what Solaris does for Solaris 10 zones, you might have to install it
separately on each system.

Going back to your original email, only /opt/mqm could be shared in this
read-only mode. /var/mqm would be separate on each system.



Regards

Neil Casey
Technical Consultant Messaging


Coombs, Lawrence
2013-06-13 23:53:26 UTC
Permalink
Thanks for your input.

Potkay, Peter M (CTO Architecture + Engineering)
2013-06-14 00:46:51 UTC
Permalink
Can you install MQ 3 times, once on each server into its local /opt/mqm? Then create a new cluster group for each QM that you want, each group with its own SAN disk, VIP and QM. Add as many of these groups as you like, within reason, and define their preferred nodes such that when things are normal the cluster groups (each with one QM) will be evenly spread across the 3 servers.

If you have an unscheduled or scheduled outage on one server, the other 2 servers will host the cluster groups while the 1st server gets fixed or gets its maintenance (i.e. an MQ upgrade to its local /opt/mqm).
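
A sketch of the MQ side of one such group, with the cluster software owning the disk group, mount point and VIP (all names here are examples):

   # on the group's preferred node, with its SAN disk mounted at /MQHA/QM1
   crtmqm -md /MQHA/QM1/data -ld /MQHA/QM1/log QM1

   # on each of the other two nodes, register the same queue manager;
   # dspmqinf -o command QM1 on the first node prints the exact command
   addmqinf -s QueueManager -v Name=QM1 -v Directory=QM1 -v Prefix=/var/mqm \
            -v DataPath=/MQHA/QM1/data/QM1

   # the cluster group's online/offline scripts then just run
   strmqm QM1
   endmqm -i QM1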


Peter Potkay

