

MQ & MB @ Unix



Unix MQM user & group

WebSphere MQ requires a user ID of the name mqm, with a primary group of mqm. The mqm user ID owns the directories and files that contain the resources associated with the product.

Create the required user ID and group ID before you install WebSphere MQ. Both user ID and group ID must be set to mqm.

It is also suggested that you set the mqm user's home directory to /var/mqm.
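A minimal sketch of the creation commands, run as root (both the AIX mkgroup/mkuser form and the generic SysV groupadd/useradd form are shown; adjust to your platform):

# AIX
mkgroup mqm
mkuser pgrp=mqm groups=mqm home=/var/mqm mqm

# Generic SysV Unix / Linux
groupadd mqm
useradd -g mqm -d /var/mqm mqm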

Quick Beginnings for HP-UX. url


AIX (BS)

See MQ v7 at AIX install

The installation directory for the MQ product code is /usr/mqm. Working data is stored in /var/mqm. You cannot change these locations.

MQ v 6.0 for AIX, Quick beginnings, GC34-6478-00, page 7 [21/72]

Before you install WebSphere MQ for AIX, create and mount a file system called /var/mqm. If possible, use a partition strategy with a separate volume for the data.

You can also create separate file systems for your log data (/var/mqm/log) and error files (/var/mqm/errors). If possible, store log files on a different physical volume from the MQ queues (/var/mqm).
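As a sketch, the corresponding AIX file systems could be created like this, assuming a volume group named mqvg and illustrative sizes (on older AIX releases the size attribute is given in 512-byte blocks rather than with a G suffix):

# create and mount the MQ working file systems (run as root)
crfs -v jfs2 -g mqvg -m /var/mqm -a size=4G
crfs -v jfs2 -g mqvg -m /var/mqm/log -a size=1G
crfs -v jfs2 -g mqvg -m /var/mqm/errors -a size=1G
mount /var/mqm
mount /var/mqm/log
mount /var/mqm/errors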

MQ v 6.0 for AIX, Quick Beginnings, GC34-6478-00, page 8 [22/72]

MQ requires a user ID of the name mqm, with a primary group of mqm. The mqm user ID owns the directories and files that contain the resources associated with the product. It is also suggested that you set the mqm user's home directory to /var/mqm.

1. File systems and logical volumes
df -k | grep mqm
/dev/mqm_usrlv  163840  108880  34%  1793  4%  /usr/mqm
/dev/mqmlv      163840  143524  13%   202  1%  /var/mqm

You can see that the file systems /usr/mqm and /var/mqm are defined on the logical volumes mqm_usrlv and mqmlv respectively. To see the details of a specific logical volume:

lslv mqm_usrlv

This shows that the logical volume mqm_usrlv belongs to the volume group t24vg. And to list the logical volumes that belong to a volume group:

lsvg -l volume_group_name

Here: lsvg -l t24vg
2. How do you give MQ a separate volume?
First create the logical volumes mqm_usrlv and mqmlv in the volume group you have chosen. The command would be:

mklv -y mqm_usrlv volume_group_name number_of_pps

The number of PPs depends on the PP size of the volume group. Example: the logical volume mqm_usrlv is 10 PPs x a PP size of 16 MB => 160 MB. Then create the file system on the previously defined logical volume, with smit:

smitty jfs / Add a fs on a previously defined logical volume / Add a standard journaled fs / give it the logical volume name and the mount point /usr/mqm

Finally, mount the file system: mount /usr/mqm
3. File system size
160 MB (163840 KB) for each file system.

(Solaris) Review the machine's configuration using the command sysdef -i; the kernel values are set in the /etc/system file.


Filesystem size

Allow 50 MB as a minimum for an MQ server and 15 MB as a minimum for an MQ client.

MQ v 6.0 for AIX, Quick Beginnings, GC34-6478-00, page 8 [22/72]

What's new in MQ for AIX, version 6.0

MQ v 6.0 for AIX, Quick Beginnings, GC34-6478-00, page IX [11/72]


MQ v 6.0 install on AIX

Log in as root and use smit. Select the required smit window using the following sequence:

Software Installation and Maintenance
  Install and Update Software
    Install and Update from ALL Available Software

MQ for AIX v6.0, "Quick Beginnings", GC34-6478-00, page 11 [25/72]

MQ v 6.0 Client install on AIX
mqm mqm 257.656.645  C8478ML.tar.Z
gunzip C8478ML.tar.Z   => C8478ML.tar (230 MB)
tar tvf C8478ML.tar
tar xvf C8478ML.tar
smit
  Software Installation and Maintenance
    Install and Update Software
      Install Software
        INPUT device / directory : /software/mq6cli

Installation Summary
--------------------
Name                    Level      Part   Event   Result
----------------------------------------------------------------------------
Java14_64.sdk           1.4.2.0    USR    APPLY   SUCCESS
Java14_64.license       1.4.2.0    USR    APPLY   SUCCESS
Java14_64.ext.javahelp  1.4.2.0    USR    APPLY   SUCCESS
Java14_64.ext.commapi   1.4.2.0    USR    APPLY   SUCCESS
Java14.license          1.4.2.0    USR    APPLY   SUCCESS
Java14.ext.commapi      1.4.2.0    USR    APPLY   SUCCESS
mqm.base.runtime        6.0.0.0    USR    APPLY   SUCCESS
mqm.base.runtime        6.0.0.0    ROOT   APPLY   SUCCESS
mqm.msg.en_US           6.0.0.0    USR    APPLY   SUCCESS
mqm.java.rte            6.0.0.0    USR    APPLY   SUCCESS
mqm.client.rte          6.0.0.0    USR    APPLY   SUCCESS
mqm.base.samples        6.0.0.0    USR    APPLY   SUCCESS
mqm.base.sdk            6.0.0.0    USR    APPLY   SUCCESS
mqm.man.en_US.data      6.0.0.0    SHARE  APPLY   SUCCESS
gsksa.rte               7.0.3.15   USR    APPLY   SUCCESS
gskta.rte               7.0.3.15   USR    APPLY   SUCCESS
mqm.keyman.rte          6.0.0.0    USR    APPLY   SUCCESS

mqm mqm 254105600  6.0.1-WS-MQ-AixPPC64-FP0001.tar
tar tvf 6.0.1-WS-MQ-AixPPC64-FP0001.tar
tar xvf 6.0.1-WS-MQ-AixPPC64-FP0001.tar
smit
  Software Installation and Maintenance
    Install and Update Software
      Install Software
        INPUT device / directory : /software/mq6clifp

Installation Summary
--------------------
Name                    Level      Part   Event   Result
-------------------------------------------------------------------------------
mqm.msg.en_US           6.0.1.1    USR    APPLY   SUCCESS
mqm.man.en_US.data      6.0.1.1    SHARE  APPLY   SUCCESS
mqm.java.rte            6.0.1.1    USR    APPLY   SUCCESS
mqm.client.rte          6.0.1.1    USR    APPLY   SUCCESS
mqm.base.sdk            6.0.1.1    USR    APPLY   SUCCESS
mqm.base.samples        6.0.1.1    USR    APPLY   SUCCESS
mqm.base.runtime        6.0.1.1    USR    APPLY   SUCCESS
mqm.base.runtime        6.0.1.1    ROOT   APPLY   SUCCESS
mqm.base.runtime        6.0.1.1    USR    COMMIT  SUCCESS
mqm.base.runtime        6.0.1.1    ROOT   COMMIT  SUCCESS
mqm.base.samples        6.0.1.1    USR    COMMIT  SUCCESS
mqm.base.sdk            6.0.1.1    USR    COMMIT  SUCCESS
mqm.client.rte          6.0.1.1    USR    COMMIT  SUCCESS
mqm.java.rte            6.0.1.1    USR    COMMIT  SUCCESS
mqm.man.en_US.data      6.0.1.1    SHARE  COMMIT  SUCCESS
mqm.msg.en_US           6.0.1.1    USR    COMMIT  SUCCESS
gskta.rte               7.0.3.18   USR    APPLY   SUCCESS
gsksa.rte               7.0.3.18   USR    APPLY   SUCCESS
mqm.keyman.rte          6.0.1.1    USR    APPLY   SUCCESS
mqm.keyman.rte          6.0.1.1    USR    COMMIT  SUCCESS

Create the queue manager:

crtmqm -q -lc -lf 1000 -ld "/var/mqm/log" -u DLQ QMNAME

Create objects:

define qlocal(DLQ) replace
define channel(SYSTEM.ADMIN.SVRCONN) CHLTYPE(SVRCONN)
alter channel(SYSTEM.ADMIN.SVRCONN) CHLTYPE(SVRCONN) MCAUSER('mqm')

Start listener :

nohup runmqlsr -t TCP -p 1414 -m MQDES01 &
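A quick sketch to verify the result (queue manager name and port as above):

dspmq -m MQDES01          # queue manager status should show Running
netstat -an | grep 1414   # the listener should be bound to the port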

Display Running Qmgrs

AIX :

#!/usr/bin/ksh
echo "Veamos ..."
dspmq | grep Running | cut -d '(' -f2,3 | cut -d ')' -f1 | while read qmgr
do
  status=`dspmq -m $qmgr | cut -d '(' -f2,3 | cut -d ')' -f2 | cut -d '(' -f2`
  echo QMgr $qmgr is $status
done

Windows:

for /F "tokens=1,2,3,4,5,* delims=()" %i IN ('dspmq') DO @IF NOT %l==Running echo QMgr %j is %l
SWVPD
The information about the packages/patches installed on AIX is kept in something called the "Software Vital Product Data" (SWVPD), a set of files hanging off ODM classes in /etc/objrepos (roughly the AIX equivalent of the Windows registry, so to speak).
The only way to update that database is with the commands:

lppchk -cu "problem_fileset"
lppchk -lu "problem_fileset"
lppchk
[root@cmqb132]:~> lppchk -v
lppchk: The following filesets need to be installed or corrected to bring
        the system to a consistent state:
  mqsi60           6.0.0.1  (COMMITTED)
  mqsi60.brokerc   6.0.0.1  (COMMITTED)
  mqsi60.brokerf   6.0.0.1  (COMMITTED)
  mqsi60.core      6.0.0.1  (COMMITTED)
  mqsi60.data      6.0.0.1  (COMMITTED)
  mqsi60.disthub   6.0.0.1  (COMMITTED)
  mqsi60.is        6.0.0.1  (COMMITTED)
  mqsi60.itlm      6.0.0.1  (COMMITTED)
  mqsi60.la        6.0.0.1  (COMMITTED)
  mqsi60.links     6.0.0.1  (COMMITTED)
  mqsi60.merant    6.0.0.1  (COMMITTED)
  mqsi60.mrm       6.0.0.1  (COMMITTED)
  mqsi60.profiles  6.0.0.1  (COMMITTED)
  mqsi60.samples   6.0.0.1  (COMMITTED)
  mqsi60.tsamples  6.0.0.1  (COMMITTED)
  mqsi60.tsc       6.0.0.1  (COMMITTED)
  mqsi60.tsf       6.0.0.1  (COMMITTED)
  rsct.basic.rte   2.4.4.0  (not installed; requisite fileset)

Solution:

url - the errors are due to a defect in the ISMP version


MS03 installation
url
1430135 Jan 14 19:40 ms03_unix.tar.Z     2629719 Feb 23 14:05 ms03_unix.tar.Z
uncompress ms03_unix.tar.Z               gunzip ms03_unix.tar.Z
3225600 Jan 14 19:40 ms03_unix.tar       5283840 Feb 23 14:05 ms03_unix.tar
tar -xvf ms03_unix.tar                   tar -xvf ms03_unix.tar

231041 Jul 24 20:25 saveqmgr.aix53
238401 Jul 24 20:25 saveqmgrc.aix53

mqm@lope:/home/mqm/eines> ./saveqmgr.aix53 -m QMX -o -f
SAVEQMGR V6.1.0 Compiled for Websphere MQ V7.0 on Jul 24 2008
Requesting attributes of the queue manager...
Writing Queue Manager definition to QMX.MQS.
Generating attributes for Websphere MQ Release 7.0.0
Generating code for platform UNIX
Requesting attributes of all authinfo objects...
Requesting attributes of all queues...
Requesting attributes of all channels...
Requesting attributes of all processes...
Requesting attributes of all namelists...
Requesting attributes of all listeners...
Requesting attributes of all services...
Requesting attributes of all topics...
Requesting attributes of all subscriptions...
Writing AuthInfo definitions to QMX.MQS.
Writing Queue definitions to QMX.MQS.
Skipping dynamic queue SAVEQMGR.496A84FB20003705
Writing Channel definitions to QMX.MQS.
Writing Process definitions to QMX.MQS.
Writing Namelist definitions to QMX.MQS.
Writing Listener definitions to QMX.MQS.
Writing Service definitions to QMX.MQS.
Writing Topic definitions to QMX.MQS.
Writing Subscription definitions to QMX.MQS.
mqm@lope:/home/mqm/eines> dir
MS03 compilation @ AIX 7.1
(ADLTAQMQ0):[mqm] /tools/mq/eines/ms03-> cat makefile.aix
# Module Name: makefile.aix
# WebSphere MQ save queue manager object definitions using PCFs (ms03 supportpac)
# This makefile makes the saveqmgr executables on aix (ms03)
#   make -f makefile.aix
#   run saveqmgr.aix

# Set the suffix for the target files
EXESUF = aix
# CC defines the compiler.
CC = xlc
# LC defines the linker
LC = $(CC)
# MQM library directory
MQMLIB = /usr/mqm/lib64
# set LIBS to list all the libraries ms03 should link with
LIBS = -lm -lmqm
LIBC = -lm -lmqic
# set INCS to list all the header the compiler needs
INCS = -I. -I/usr/include -I/usr/include/sys -I/usr/mqm/inc
# Set CCOPTS - the compiler options
CCOPTS = -c -DUNIX -q64 -o $*.$(OBJSUF)
CCOPTC = -c -DUNIX -DUSEMQCNX -q64 -o $@
# Set LCOPTS - the linker options
LCOPTS = -o $@ -L$(MQMLIB) -L. $(LIBS) -q64 -bnoquiet
LCOPTC = -o $@ -L$(MQMLIB) -L. $(LIBC) -q64
# Set the suffix for the object files
OBJSUF = o
# Include the file which does the real work
include makefile.common
(ADLTAQMQ0):[mqm] /tools/mq/eines/ms03->

MB 6.1 installation @ AIX

Requirements:

System Requirements for WebSphere Message Broker V6.1 for AIX

350337415 Jan 16 18:27 C19YNML.tar.gz
-rw-r--r--  1 root system  303249821 Jan 16 18:35 setup.jar
-rw-r--r--  1 root system   36810931 Jan 16 18:35 setupaix

Your user ID {wbrkadm} must have root authority to complete installation.
Create the 'mqbrkrs' group.
Add 'root' to the 'mqm' and 'mqbrkrs' groups.

{root} setupaix -console  /  setupaix -i console

IBM WebSphere Message Broker
Version: 6.1.0.2
Directory Name: [/opt/IBM/mqsi/6.1]
1. [x] Broker
2. [x] User Name Server
3. [x] Configuration Manager
IBM WebSphere Message Broker 6.1 will be installed in the following location:
  /opt/IBM/mqsi/6.1
with the following features:
  Broker
  User Name Server
  Configuration Manager
for a total size: 884.3 MB

mqm@lope:/home/mqm> df -k
Filesystem    1024-blocks      Free  %Used  Iused  %Iused  Mounted on
/dev/hd4            32768     19648    41%   1777     28%  /
/dev/hd2          7667712   4394568    43%  53195      6%  /usr
/dev/hd9var         32768     17208    48%    704     15%  /var
/dev/hd3           425984    164052    62%   1099      3%  /tmp
/dev/hd1          6291456   2985556    53%   2470      1%  /home
/proc                   -         -      -      -      -   /proc
/dev/hd10opt      2097152   2020740     4%   2350      1%  /opt

Installing IBM WebSphere Message Broker 6.1. Please wait...
|-----------|-----------|-----------|------------|
0%          25%         50%         75%         100%
|||||||||||||||||||||||||||||Unregistering
Errors occurred during the installation.
An error occurred and product installation failed.
Look at the log file /mqsi6_install.log for details. {readmes didn't exist}
Creating uninstaller...
Creating WorkPath...
Please wait...
-------------------------------------------------------------------------------
The InstallShield Wizard has successfully installed IBM WebSphere Message Broker 6.1.

wbrkadm@lope:/home/wbrkadm> mqsiservice -v
BIPv610 en US ucnv Console CCSID 819 dft ucnv CCSID 819 ICUW ISO-8859-1 ICUA ISO-8859-1
BIP8996I: Version:    6102
BIP8997I: Product:    WebSphere Message Brokers
BIP8998I: CMVC Level: S610-FP02 DH610-FP02
BIP8999I: Build Type: Production
BIP8071I: Successful command completion.

root@lope:/home/soft/mb61/src/messagebroker_runtime1> df -k
Filesystem    1024-blocks      Free  %Used  Iused  %Iused  Mounted on
/dev/hd4            32768     18248    45%   1777     30%  /
/dev/hd2          7667712   4394304    43%  53195      6%  /usr
/dev/hd9var         32768     17188    48%    723     16%  /var
/dev/hd3           425984    203472    53%    834      2%  /tmp
/dev/hd1          6291456   2985388    53%   2478      1%  /home
/proc                   -         -      -      -      -   /proc
/dev/hd10opt      2097152   1211936    43%   4419      2%  /opt
MB 6.1 configuration @ AIX

Get odbc64.ini:

wbrkadm@lope:/home/wbrkadm> cp /opt/IBM/mqsi/6.1/ODBC64/V5.3/odbc64.ini .

Adapt odbc64.ini:

[ODBC]
# To turn on ODBC trace set Trace=1 or 2 or 3.
Trace=0
TraceFile=/home/wbrkadm/trace/odbctrace64.out
TraceDll=/opt/IBM/mqsi/6.1/ODBC64/V5.3/lib/odbctrac.so
InstallDir=/opt/IBM/mqsi/6.1/ODBC64/V5.3
UseCursorLib=0
IANAAppCodePage=4
UNICODE=UTF-8

Edit .profile :

echo "Posar ODBC64.INI :" export ODBCINI="/home/wbrkadm/odbc64.ini"

If something goes wrong ...
truss mqsiservice -v 2> truss.txt

MB trace @ AIX

Start:

clear
NOM_BROKER=INTPRD01
NOM_EXGRP=GE_GENERICO
NOM_FLUX=MSG_FLW_TST_01_NET
echo "Starting the trace. Broker {" $NOM_BROKER "}, Group {" $NOM_EXGRP "}, Flow {" $NOM_FLUX "}."
mqsichangetrace $NOM_BROKER -u -e $NOM_EXGRP -f $NOM_FLUX -l normal -c 10240 -r

Stop and list:

clear
NOM_BROKER=INTPRD01
NOM_EXGRP=GE_GENERICO
NOM_FLUX=MSG_FLW_TST_01_NET
rm -f $NOM_FLUX.txt.old
rm -f $NOM_FLUX.xml
mv $NOM_FLUX.txt $NOM_FLUX.txt.old
echo "===================================================================== Stop the trace"
mqsichangetrace $NOM_BROKER -u -e $NOM_EXGRP -f $NOM_FLUX -l none
echo "===================================================================== Read the log"
mqsireadlog $NOM_BROKER -u -e $NOM_EXGRP -o $NOM_FLUX.xml
echo "===================================================================== Format the log"
mqsiformatlog -i $NOM_FLUX.xml -o $NOM_FLUX.txt
echo "Show the result"
echo "++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
cat $NOM_FLUX.txt
echo "++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"

HP-UX

Creating WebSphere MQ file systems

The installation directory for the MQ product code is /opt/mqm. Working data is stored in /var/mqm. You cannot change these locations.

{bestp}
Creating separate file systems for working data

You can also create separate file systems for your log data /var/mqm/log and error files /var/mqm/errors. If possible, store log files on a different physical disk from the MQ queues /var/mqm.

AMQCAC07.pdf = MQ for HP-UX v 6.0 Quick Beginnings

If we want to put each queue manager on a separate disk, so that a queue manager can be picked up from a different machine, we must create /var/mqm/qmgrs/<name>/ before calling crtmqm. That command will create /var/mqm/qmgrs/<name>.000/, which we must move to the proper place mentioned above, while also correcting /var/mqm/mqs.ini.
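A sketch of what the corrected mqs.ini stanza could look like, assuming a hypothetical queue manager QM1 whose data has been moved under /MQHA/QM1/data (so the files live in /MQHA/QM1/data/qmgrs/QM1.000):

QueueManager:
   Name=QM1
   Prefix=/MQHA/QM1/data
   Directory=QM1.000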

Stopping all queue managers
dspmq | grep -i "QMNAME" | cut -d '(' -f2,3 | cut -d ')' -f1 | while read QMGR
do
  echo "stopping Command Server for Queue Manager ${QMGR}"
  /usr/bin/su - $UID -c "endmqcsv $QMGR"
  echo "Command Server for Queue Manager ${QMGR} stopped"
  sleep 3
  echo "stopping Queue Manager ${QMGR}"
  /usr/bin/su - $UID -c "endmqm -i $QMGR"
  echo "Queue Manager ${QMGR} stopped"
  /usr/bin/su - $UID -c "endmqlsr -m $QMGR"
done

Kernel parameters

NCALLOUT =
NKTHREAD =
NPROC =
/etc/security/limits :

As the root user, edit the /etc/sysctl.conf file and check that the values listed below are present. These are minimum values, so if the file already contains a higher value, the higher one should prevail. You can also check the full list of values with the command sysctl -a.

kernel.msgmni = 1024
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.sem = 500 256000 250 1024
fs.file-max = 32768
net.ipv4.tcp_keepalive_time = 300
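After editing /etc/sysctl.conf, a sketch to apply and spot-check the values (Linux):

sysctl -p                       # reload the settings from /etc/sysctl.conf
sysctl kernel.sem fs.file-max   # spot-check a couple of the values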

ECI I-006: installation on Z (64).


WLB versus HA

WLB lets service for new transactions continue after a failure of one of the brokers, but the messages already queued on the failed queue manager remain stranded until the queue manager and broker service is restored. This is where HA comes in, restoring the service automatically and as quickly as possible.

[Diagram: two brokers, MB1 and MB2, each served by its own queue manager (QM1, QM2), with an instance of queue Q1 on each.]

What happens if ...

(1) a few messages remain trapped in QM2's queues, but future messages will flow without problems to QM1 and MB1.

(2) messages will keep being distributed between QM1 and QM2. If MB2 does not process them, they will pile up in Q1.


HACMP

HACMP = High Availability Cluster Multi-Processing

An HA cluster is a collection of nodes and resources (such as disks and networks) which cooperate to provide high availability of services running within the cluster.

This SupportPac includes a monitor for WMQ, which will allow the HA product to monitor the health of the queue manager and initiate recovery actions that you configure, including the ability to restart the queue manager locally or move it to an alternate system.

We will use the SupportPacs:

url MC91 - high availability for MQ on Unix.
url IC91 - high availability for MB on distributed platforms.

I have:

Online : part 1, part 2.

Great Resources list.

IP         172.16.16.175   Hostname RCMQB271   < IP used ?
IP-2       172.16.16.176   Hostname RCMQB272   < IP used ?
rcmqb_cfg  172.16.16.178   < IP used in channel between CfgMgr and Broker
rcmqb_mqb  172.16.16.177   < IP used in channel between Broker and CfgMgr
rcmqb_ora  172.16.16.179
HACMP installation

The operating system, the HA product and MQ should already be installed, using the normal procedures on all systems in the cluster. You should install MQ onto internal disks on each of the nodes and not attempt to share a single installation on shared disks.

Step 1. Configure the HA Cluster

Step 2. Configure the shared disks

For performance, it is recommended that a queue manager uses separate filesystems for logs and data.

Mount points must all be owned by the mqm user.

You will need the following filesystems:

Per node:
/var on internal disks - this is a standard filesystem or directory which will already exist. You only need one of these per node, regardless of the number of queue managers that the node may host. It is important that all queue managers that may run on this node use one filesystem for some of their internal control information, and the example scripts designate /var/mqm for this purpose. With the suggested configuration, not much WMQ data is stored in /var, so it should not need to be extended.

The filesystem layout from the previous step can be simplified if you know you are only going to have a single queue manager on a node, which fails over to a standby machine. For such a configuration, you can continue to use the /var/mqm and /var/mqm/log filesystems. However, this layout will not be simple to extend if you later change your mind and want to have an active/active system.

Per queue manager:

MC91.pdf, v7, page 15, 14/46

  1. Create the volume group that will be used for this queue manager's data and log files.
  2. Create the /MQHA/<qmgr>/data/ and /MQHA/<qmgr>/log/ filesystems using the volume group created above.
  3. For each node in turn, import the volume group, vary it on, ensure that the filesystems can be mounted, unmount the filesystems and vary off the volume group (see the command sketch after this list).
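A command sketch for step 3 on one standby node, assuming a volume group mqvg on hdisk2 and a queue manager QM1 (all names illustrative):

importvg -y mqvg hdisk2   # import the shared volume group
varyonvg mqvg             # activate it on this node
mount /MQHA/QM1/data      # check that both filesystems mount ...
mount /MQHA/QM1/log
umount /MQHA/QM1/log      # ... then undo everything
umount /MQHA/QM1/data
varyoffvg mqvg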

Step 3. Create the Queue Manager

  1. Select a node on which to perform the following actions
  2. Ensure the queue manager's filesystems are mounted on the selected node.
  3. Create the queue manager on this node, using the hacrtmqm script
  4. Start the queue manager manually, using the strmqm command (steps 4-7 are sketched after this list)
  5. Create any queues and channels
  6. Test the queue manager
  7. End the queue manager manually, using endmqm
  8. On the other nodes, which may take over the queue manager, run the halinkmqm script
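A sketch of steps 4 to 7, assuming a queue manager QM1 created with hacrtmqm (the test queue is illustrative):

strmqm QM1                                   # 4. start manually
echo "define qlocal(TEST.Q)" | runmqsc QM1   # 5. create a test object
echo "display qmgr" | runmqsc QM1            # 6. basic sanity test
endmqm -i QM1                                # 7. end manually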

Step 4. Configure the movable resources

The queue manager has been created and the standby/takeover nodes have been updated. You now need to define a resource or service group which will contain the queue manager and all its associated resources. The resource group can be either cascading or rotating. Whichever you choose, remember that the resource group will use the IP address as the service label. This is the address which clients and channels will use to connect to the queue manager.

  1. Create a resource group and select the type as discussed above.
  2. Configure the resource group in the usual way adding the service IP label, volume group and filesystem resources to the resource group.
  3. Synchronise the cluster resources.
  4. Start HACMP on each cluster node in turn and ensure that the cluster stabilizes, that the respective volume groups are varied on by each node and that the filesystems are mounted correctly.

Step 5. Configure the Application Server or Agent

The queue manager is represented within the resource group by an application server or agent. The SupportPac includes example server start and stop methods which allow the HA products to start and end a queue manager, in response to cluster commands or cluster events. For HACMP the hamqm_start, hamqm_stop and hamqm_applmon programs are ksh scripts.

  1. Define an application server which will start and stop the queue manager. The start and stop scripts contained in the SupportPac may be used unmodified, or may be used as a basis from which you can develop customized scripts. The examples are called hamqm_start and hamqm_stop.
  2. Add the application server to the resource group definition created in the previous step.
  3. Optionally, create a user exit in /MQHA/bin/rc.local
  4. Synchronise the cluster configuration.
  5. Test that the system can start and stop the queue manager, by bringing the resource group online and offline.

Step 6. Configure an Application Monitor

To benefit from queue manager monitoring you must define an Application Monitor. If you created the queue manager using hacrtmqm then one of these will have been created for you, in the /MQHA/bin directory, and is called hamqm_applmon.$qmgr.

  1. To enable queue manager monitoring, define a custom application monitor for the Application Server created in Step 5, providing the name of the monitor script and tell HACMP how frequently to invoke it. Set the stabilisation interval to 10 seconds.
  2. To configure for local restarts, specify the Restart Count and Restart Interval.
  3. Synchronise the cluster resources.
  4. Test the operation of the application monitoring, and in particular verify that the local restart capability is working as configured.

When the "stop" scripts are called, part of the processing is to forcefully kill all of the processes associated with the queue manager if they do not stop properly. In previous versions of the HA SupportPacs, the list of processes was hardcoded in the stop or restart scripts. For this version, the list of processes is in an external file called hamqproc.

Step 7. Removal of Queue Manager from Cluster

Should you decide to remove the queue manager from the cluster, it is sufficient to remove the application server (and application monitor, if configured) from the HA configuration. You may also decide to delete the resource group. This does not destroy the queue manager, which will continue to function normally, but under manual control.

  1. Delete the application monitor, if configured (HACMP/ES only)
  2. Delete the application server
  3. Remove the filesystem, service label and volume group resources from the resource group.
  4. Synchronise the cluster resources configuration.

Step 8. Deletion of Queue Manager

If you decide to delete the queue manager, then you should first remove it from the cluster configuration, as described in the previous step. Then, to delete the QM, perform the following actions.

  1. Make sure the queue manager is stopped, by issuing the endmqm command.
  2. On the node which currently has the queue manager's shared disks and has the queue manager's filesystems mounted, run the hadltmqm script provided in the SupportPac.
  3. You can now destroy the filesystems /MQHA/<qmgr>/data and /MQHA/<qmgr>/log.
  4. You can also destroy the volume group.
  5. On each of the other nodes in the cluster,
    1. Run the hadltmqm command as above, which will clean up the subdirectories related to the queue manager.
    2. Manually remove the queue manager stanza from the /var/mqm/mqs.ini file.

The queue manager has now been completely removed from the cluster and the nodes.

HACMP & smitty

Communications Applications and Services + HACMP for AIX

Operations
HACMP & MQ clusters

MQ Clusters reduce administration and provide load balancing of messages across instances of cluster queues. They also offer higher availability than a single queue manager, because following a failure of a queue manager, messaging applications can still access surviving instances of a cluster queue. However, MQ Clusters alone do not provide automatic detection of queue manager failure and automatic triggering of queue manager restart or failover. HA clusters provide these features. The two types of cluster can be used together to good effect.

MQ under HACMP migration to v 6

Applying maintenance
All nodes in a cluster should normally be running exactly the same levels of the WMQ software. Sometimes however it is necessary to apply updates, such as for service fixes. This is best done by means of a "rolling upgrade".
The principle of a rolling upgrade is to apply the new software to each node in turn, while continuing the WMQ service on other nodes. Assuming a two-node active/active cluster, the steps are

  1. Select one machine to upgrade first
  2. At a suitable time, when the moving of a queue manager will not cause a serious disruption to service, manually force a migration of the active queue manager to its partner node
  3. On the machine that is now running both queue managers, disable the failover capabilities for the queue managers.
  4. Upgrade the software on the machine that is not running any queue managers
  5. Re-enable failover, and move both queue managers across to the newly upgraded machine
  6. Disable failover again
  7. Upgrade the original box
  8. Re-enable failover
  9. When it will cause least disruption, move one of the queue managers across to balance the workload

Migration from WMQ V5.3 to WMQ V6
The internal implementation of WMQ V6 has changed from WMQ V5.3 in that it uses additional subdirectories for its IPC keys. A tool is provided with this SupportPac to assist in migrating queue managers created with WMQ V5.3 and its corresponding version of the hacrtmqm script. The hamigmqm script (which is in the common subdirectory of the tar file) can be used even with a currently-running V5.3 queue manager as it creates new files but does not modify any existing files. The procedure described above can be used for migration of WMQ V5.3 to WMQ V6 with this additional step before the first one:

MC91.pdf

.000 added to Queue Manager
crtmqm -lc -lf 1024 -ld "/MQHA/QMDESA01/log/" -u DLQ QMDESA01
dx0609-2:/MQHA/QMDESA01/log # dir
drwxrwx---  3 mqm  mqm    256 Jan 02 15:31 QMDESA01.000
drwxr-xr-x  2 root system 256 Dec 19 14:49 lost+found

Solution: remove the "lost+found" directory prior to launching "crtmqm".


Free space

File system good initial / minimum values:

Purpose    Filesystem        Good initial   Minimum
Software   /usr/mqm          1 GB
Queues     /var/mqm          4 GB           30 MB
Logs       /var/mqm/log      1 GB           20 MB
Errors     /var/mqm/errors   1 GB            4 MB

From wasv51base_gettingstarted.pdf, page 52 [SC31-6323-04].


Solaris

Quick beginnings for Solaris


qm.ini

A complete file looks like this :

ExitPath:
  ExitsDefaultPath=/var/mqm/exits
  ExitsDefaultPath64=/var/mqm/exits64
Service:
  Name=AuthorizationService
  EntryPoints=13
ServiceComponent:
  Service=AuthorizationService
  Name=MQSeries.UNIX.auth.service
  Module=/opt/mqm/bin/amqzfu
  ComponentDataSize=0
Log:
  LogPrimaryFiles=3
  LogSecondaryFiles=2
  LogFilePages=1024
  LogType=CIRCULAR
  LogBufferPages=0
  LogPath=/var/mqm/log/saturn!queue!manager/
XAResourceManager:
  Name=DB2 Resource Manager Bank
  SwitchFile=/usr/bin/db2swit
  XAOpenString=MQBankDB
  XACloseString=
  ThreadOfControl=THREAD
Channels:
  MaxChannels=20
  MaxActiveChannels=100
  MQIBindType=STANDARD
TCP:
  KeepAlive=Yes
QMErrorLog:
  ErrorLogSize=262144
  ExcludeMessage=7234
  SuppressMessage=9001,9002,9202
  SuppressInterval=30
ApiExitLocal:
  Name=ClientApplicationAPIchecker
  Sequence=3
  Function=EntryPoint
  Module=/usr/Dev/ClientAppChecker
  Data=9.20.176.20

qmgr configuration files, qm.ini

How to verify the size of the buffer of a Persistent queue:

TuningParameters:
  DefaultPQBufferSize=94371840

Software
Version   SPARC                             x86-64
v6        Server = C8470ML [379.564.401]    Server = C87RWML [455.930.544]
          Client = C847BML [202.016.088]    Client 6.0.1 = C87RZML [250.287.831]
          FP 6.0.2.5                        FP 6.0.2.5
v7        Server = C19LPML [493.660.774]    Server = C19LQML [508.647.631]
          Client = C19M0ML [332.445.292]    Client = C19M1ML [383.508.277]
          FP 7.0.0.1                        FP 7.0.0.1
CheckList
Create the required mqm user ID and mqm group ID before you install WebSphere MQ.
gunzip C48UPML.tar.Z => we get C48UPML.tar
tar xvf C48UPML.tar => we get MQ53Client_Solaris.tar
tar xvf MQ53Client_Solaris.tar => we get the "MQClient" directory
Create the mqm user in the mqm group.
Create a 500 MB file system and mount it at /var/mqm.
Verify the kernel requirements: sysdef -i
{root} Remove the old package: pkgrm mqm
{root} Accept the license: ./mqlicense.sh -text-only
{root} Install the new package: pkgadd -d ./mqs530.img
Verify the installation.
Result of the MQ v 6.0.2.5 installation on Solaris
$ pwd
/opt/mqm/bin
$ ls
amqcctca     amqwCleanSideQueue.sh     amqzslf0     dspmqras                  runmqbrk
amqcltca     amqwclientConfig.sh       amqzxma0     dspmqrte                  runmqchi
amqcrsta     amqwclientTransport.wsdd  amqzxma0_nd  dspmqtrc                  runmqchl
amqcrsta_nd  amqwdeployWMQService.sh   clrmqbrk     dspmqtrn                  runmqchl_nd
amqfcxba     amqwsetcp.sh              crtmqcvx     dspmqver                  runmqdlq
amqharmx     amqwstartwin.sh           crtmqlnk     endmqbrk                  runmqlsr
amqhasmx     amqxmsg0                  crtmqm       endmqcsv                  runmqlsr_nd
amqicdir     amqzdmaa                  dltmqbrk     endmqlsr                  runmqsc
amqiclen     amqzfuma                  dltmqlnk     endmqm                    runmqtmc
amqldmpa     amqzlaa0                  dltmqm       endmqtrc                  runmqtrm
amqoamd      amqzlaa0_nd               dltmqm_nd    ffstsummary               setmqaut
amqpcsea     amqzllp0                  dmpmqaut     migmqbrk                  setmqprd
amqrcmla     amqzlsa0                  dmpmqlog     mqrc                      strmqbrk
amqrdbgm     amqzlsa0_nd               dspmq        mqver                     strmqcsv
amqrfdm      amqzlwa0                  dspmqaut     rcdmqimg                  strmqm
amqrmppa     amqzmgr0                  dspmqbrk     rcrmqobj                  strmqtrc
amqrrmfa     amqzmuc0                  dspmqcsv     restrictedmode_migrateQM
amqsstop     amqzmur0                  dspmqfls     rsvmqtrn
$
AIX sample programs
mqm@lope:/usr/mqm/samp> ls *.c
amqsaem0.c  amqsapt0.c  amqscbf0.c  amqsget0.c  amqsmon0.c  amqsput0.c  amqsseta.c  amqstrg0.c  amqsvfc0.c  amqzscgx.c
amqsaicq.c  amqsaxe0.c  amqscnxc.c  amqsgrma.c  amqsprma.c  amqsqrma.c  amqsstma.c  amqstxgx.c  amqswlm0.c  amqzscix.c
amqsaiem.c  amqsbcg0.c  amqsecha.c  amqsinqa.c  amqsptl0.c  amqsreq0.c  amqsstop.c  amqstxpx.c  amqsxae0.c
amqsailq.c  amqsblst.c  amqsgbr0.c  amqsiqma.c  amqspuba.c  amqssbxa.c  amqssuba.c  amqstxsx.c  amqsxrma.c

Client 5.3 installation

This installation procedure uses the pkgadd program, enabling you to choose which components you want to install.

The installation directory for the WebSphere MQ product code is /opt/mqm. Working data is stored in /var/mqm. You cannot change these.

Any (client) error is logged in /var/mqm/errors/AMQERR01.LOG

If you cannot install the product code in this file system (for example, if it is too small to contain the product), you can do one of the following:

  1. Create a new file system and mount it as /opt/mqm.
  2. Create a new directory anywhere on your machine, and create a symbolic link from /opt/mqm to this new directory.

For example:

mkdir /bigdisk/mqm
ln -s /bigdisk/mqm /opt/mqm

Creating a file system for the working data :
Before you install WebSphere MQ for Solaris, create and mount a file system called /var/mqm. Use a partition strategy with a separate volume for the WebSphere MQ data.

Client 5.3 installation verification

On the server, issue

On the client, issue

  You have successfully verified the client installation.

WebSphere MQ for Solaris, Quick Beginnings, Version 5.3, chapter 5. GC34-6075-02

Compilation

SPARC 32-bit, C, client

cc -xarch=v8plus -mt -o amqsputc_32 amqsput0.c \
   -I/opt/mqm/inc -L/opt/mqm/lib -R/opt/mqm/lib -R/usr/lib/32 \
   -lmqic -lmqmcs -lmqmzse -lsocket -lnsl -ldl

SPARC 32-bit, C, server

cc -xarch=v8plus -mt -o amqsput_32 amqsput0.c \
   -I/opt/mqm/inc -L/opt/mqm/lib -R/opt/mqm/lib -R/usr/lib/32 \
   -lmqm -lmqmcs -lmqmzse -lsocket -lnsl -ldl

Chapter 24, "Building your application on Solaris", Application Programming Guide, MQ v 6.0, SC34-6595-01.

AIX [lope] :

(g)cc -L/usr/mqm/lib -lmqm -o amqsailq amqsailq.c
(g)cc -q64 -L/usr/mqm/lib64 -lmqm -o amqsput amqsput0.c

See "how to compile a user-exit"

Linking libraries

Server for C : libmqm.so
Client for C : libmqic.so   (libmqic_r are the threaded libs)

Page 351, Application Programming Guide, MQ v 6.0, SC34-6595-01.
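As a hedged sketch of the threaded variant on AIX, using the _r library noted above (same sample source; not from the original manual):

xlc_r -o amqsputc_r amqsput0.c -I/usr/mqm/inc -L/usr/mqm/lib -lmqic_r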

Link-edit error(s)
/usr/bin/ld: skipping incompatible /opt/mqm/lib/libmqmr.so when searching for -lmqmr


IC91 @ Unix

AIX, HACMP, Websphere MQ, Websphere MQ SupportPac MC91, WMB and DB2 should already be installed, using the normal procedures onto internal disks on each of the nodes.

HA for MB, IC91.pdf, page 12 [13/79]

The hamqsicreatebroker command will create the broker and will ensure that its directories are arranged to allow for HA operation.

On any other nodes in the resource group's nodelist (i.e. excluding the one on which you just created the broker) the hamqsiaddbrokerstandby command will create the information required for a cluster node to act as a standby for the broker.


32-bit Applications & 64-bit Qmgr

Initial start-up at Power Up

In AIX, startup of software is controlled by the /etc/inittab file, so you must add an entry like:

rclocal:2:wait:/etc/rc.local > /dev/console 2>&1

to /etc/inittab. The scripts involved are rc.local, rc.oracle and rc.mqm.
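A hypothetical /etc/rc.local chaining the two scripts shown below (this file's contents are an assumption; only rc.oracle and rc.mqm come from the original notes):

#!/bin/ksh
# hypothetical rc.local: start Oracle first, then MQ and the brokers
/etc/rc.oracle
/etc/rc.mqm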

rc.oracle :

dx0649:/etc # cat rc.oracle
su - ora9 -c "/oracle/ora92/bin/lsnrctl start"
su - ora9 -c "/sistemas/scripts/oracle_start.sh mqbkdesa"

rc.mqm :

dx0649:/etc # cat rc.mqm
# start the MQ queue managers
su - mqm -c "/MQHA/bin/strqmdesa06.ksh"
# start the brokers
su - wbrkadm6 -c "/MQHA/bin/strbrkdesa06.ksh"
ShutDown

In order to run an orderly stop script you must create the /etc/rc.shutdown file, which is read by the shutdown process every time AIX is rebooted. One problem with /etc/rc.shutdown is that it checks the return code of every script it executes; that is, if a script runs and returns an error, the shutdown operation halts. You must review the error condition and, once it is solved, issue the shutdown again.


Unix special MQ commands
AIX special MQ commands
[root@dmqb261]:/usr/mqm/bin> ./amqiclen /?
Usage: ./amqiclen {-c | -x}
    -c = check
    -x = destroy
  [-m <qmgr>]    -m = queue manager.
  [-p <prefix>]  qmgrs directory Prefix (/var/mqm)
  [-q]  queue manager subpool
  [-i]  IPCC subpool
  [-o]  persistent queue manager subpool
  [-t]  trace control
  [-s]  subpools lock
  [-c]  check only
  [-F]  Force (deleted active segments)
  [-v]  verbose
  [-h]  headings
  [-d]  display remaining resources

This is required after the queue manager has ended abnormally and before it is restarted. This command removes all shared memory resources and semaphores that belong to the mqm user:

ipcs -a | grep mqm | awk '{printf( "-%s %s ", $1, $2 )}' | xargs ipcrm
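Before firing that off, it is prudent to confirm that the queue manager really is down and to review what will be removed; a sketch:

ps -ef | egrep 'amq|runmq' | grep -v egrep   # no MQ processes should remain
ipcs -a | grep mqm                           # the resources about to be removed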

How to configure UNIX and Linux systems for MQ

Excellent url

mqconfig.sh

The mqconfig script analyzes your system and compares its settings to the IBM recommended values for WebSphere MQ 7.5, 7.1 or 7.0. It displays the results of this comparison in an easy-to-read format, along with a PASS, WARN, or FAIL grade for each setting. The mqconfig script does not make any modifications to your systems.

Get mqconfig-old for MQ v6 !

MQ v8 details

IPC tuning

A few parameters control the Inter-Process Communication (IPC) semaphore and shared memory resources used by WebSphere MQ.


A few fine shells

[mqm@mqm]$ echo 'DISPLAY QLOCAL('DLQ') CURDEPTH' | runmqsc Queue_Manager_Name

[mqm@MiHost config]$ cat ver_dlq.sh
#!/bin/bash
runmqsc MiQM << EOF
display qlocal('DLQ') CURDEPTH;
EOF

Watch CURDEPTH continuously:

dx0609:/home/mqm # cat verEncola2.sh
#!/bin/bash
while true
do
  runmqsc QMSAG < dis_depth.tst | grep CURDEPTH
  sleep 1
  date
done

where dis_depth.tst contains

DIS QLOCAL(NOM_CUA) CURDEPTH
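And a one-liner sketch to sweep the depth of every local queue (queue manager name illustrative):

echo "DISPLAY QLOCAL(*) CURDEPTH" | runmqsc MiQM | grep CURDEPTH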

I have a good repository!


Stopping queue managers in WebSphere MQ for UNIX systems

To stop a queue manager running under MQ for UNIX systems, these are the processes involved:

amqzmuc0   Critical process manager
amqzxma0   execution controller
amqzfuma   OAM process
amqzlaa0   LQM agents
amqzlsa0   LQM agents
amqzmgr0   process controller
amqzmur0   restartable process manager
amqrmppa   process pooling process
amqrrmfa   the repository process (for clusters)
amqzdmaa   deferred message processor
amqpcsea   the command server

Note: processes that fail to stop can be ended using kill -9. If you stop the queue manager manually, FFSTs might be taken, and FDC files placed in /var/mqm/errors. Do not regard this as a defect in the queue manager.

The queue manager should restart normally, even after you have stopped it using this method.
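A sketch of that last-resort cleanup, assuming a queue manager QMNAME (try a preemptive endmqm -p first):

endmqm -p QMNAME                             # preemptive stop, if it still answers
ps -ef | grep QMNAME | egrep 'amq|runmq' | awk '{print $2}' | xargs kill -9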

URL (sys admin guide)


Critical MQ processes [Doug]
amqzlaa0 - LQM agent - a qmgr will have at least one of these if running
amqzmur0 - journal utility manager
amqrmppa - channel process pooler
amqhasmn - the logger

Remote Administration

Required
display channel(SYSTEM.ADMIN.SVRCONN) all
     1 : display channel(SYSTEM.ADMIN.SVRCONN) all
AMQ8414: Display Channel details.
   CHANNEL(SYSTEM.ADMIN.SVRCONN)           CHLTYPE(SVRCONN)
   TRPTYPE(TCP)                            DESCR( )
   SCYEXIT( )                              MAXMSGL(4194304)
   SCYDATA( )                              HBINT(300)
   SSLCIPH( )                              SSLCAUTH(REQUIRED)
   KAINT(AUTO)                             MCAUSER(mqexplor)
   ALTDATE(2006-09-21)                     ALTTIME(11.59.46)
   SENDEXIT( )                             RCVEXIT( )
   SENDDATA( )                             RCVDATA( )
   SSLPEER()

Links

Updated 20140926