Topics: HP Output Server
HPOS on Windows
Windows is a supported platform for running HP Output Server. However, initial development of HPOS takes place in the UNIX environment, which is also considered the proper environment to run HPOS in: the UNIX version is usually tested far more thoroughly than the Windows version. You can use the Windows server version of HPOS as a test environment, but don't run your production system on it, even though some customers do run Windows environments in production. UNIX is typically more stable.
HPOS users may have wondered: is there still development on HP Output Server? The release of HPOS 3.5 in January 2006 is proof of continuing development. In the summer of 2005, HP gave a five-year commitment to continue development and marketing of the product (a few years earlier, this wasn't the case).
On Dazel 3.3 job databases tend to grow very large, sometimes up to 200 or 300 MB per JQM. Stopping or starting the JQM won't solve this problem; you'll have to recreate the JQM:
Stop the JQM:
# stop_server jqm
Delete the JQM:
# config_server -d jqm
Don't worry about the delivery pathways; the CM will still hold their configurations.
Recreate the JQM:
# config_server -t JQM jqm
If you have a special start order for the JQM, set it now:
# config_server -u -x"server-start-order 700" jqm
This will bring the job database back to approximately 400 KB.
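The steps above can be combined into a small helper. This is a hedged sketch, not a tested procedure: the setup_env.sh path and the start-order value 700 are assumptions taken from the examples on this page, so adjust them to your installation.

```shell
#!/bin/sh
# Sketch: recycle a JQM to shrink its job database.
recycle_jqm() {
    jqm=$1
    if [ -z "$jqm" ] ; then
        echo "Usage: recycle_jqm <jqm-name>" >&2
        return 1
    fi
    . /appl/dazel/etc/setup_env.sh          # assumed HPOS install path; adjust
    stop_server "$jqm"                      # stop the JQM
    config_server -d "$jqm"                 # delete it; pathways stay in the CM
    config_server -t JQM "$jqm"             # recreate it
    config_server -u -x"server-start-order 700" "$jqm"  # optional start order
    start_server "$jqm"
}
```

Run it as, for example, recycle_jqm jqm_01.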
Rule 1: No more than 200 Logical Destinations per JQM.
Rule 2: For high-volume and fax destinations, use a separate JQM for each of these destination types.
Rule 3: Use the same number of DLMs as JQMs.
Rule 4: 30 to 40 Physical Destinations per DSM, which comes down to about 5 DSMs per JQM.
Rule 5: Use a separate DSM for every fax destination.
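As a hedged illustration of how these rules interact, the shell arithmetic below computes server counts for an example installation. The destination totals are made-up figures, and rules 2 and 5 would add dedicated JQMs and DSMs for fax destinations on top of these counts.

```shell
#!/bin/sh
# Example sizing math for the rules above; the totals are invented.
logical=450     # total Logical Destinations (example figure)
physical=280    # total Physical Destinations (example figure)

jqms=$(( (logical + 199) / 200 ))   # rule 1: at most 200 logical dests per JQM
dlms=$jqms                          # rule 3: as many DLMs as JQMs
dsms=$(( (physical + 29) / 30 ))    # rule 4: 30 to 40 physical dests per DSM

echo "JQMs: $jqms"
echo "DLMs: $dlms"
echo "DSMs: $dsms"
```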
- Use as many PostScript templates as possible. If you use a lot of PCL templates, you'll get a lot of Ghostscript processes on your system, required for translating PostScript to PCL. Most printers nowadays understand PostScript, and PostScript usually gives the best result, without the need for translation processes.
- Keep the HPOS installation in one filesystem, if possible. When files are transferred through HPOS, they are moved from the DLM (dm) directory to the JQM (drm) directory and ultimately to the DSM (dsup) directory. If these three are in different filesystems, the job files must be copied from one filesystem to another multiple times, causing a lot of disk I/O. By keeping everything in one filesystem, moving a job file is nothing more than updating a directory entry, which is a lot faster and saves huge amounts of disk I/O.
- Recycle your JQMs at regular intervals (described above). By deleting the JQM and configuring it again, the job database is deleted and recreated, usually saving you hundreds of megabytes in disk space and memory consumption.
- Restart your system and the complete HPOS environment at least once a month. This will clear the memory.
- Set the server-log-level of each server to "terse" or "info". This will keep the logging to a minimum.
- Rotate the log files daily: copy the log files, clean out the original log files and remove old log files.
- The faster the CPUs in your system, the better (this is a very obvious point...)
- Put the JFS log on a different disk than your HPOS "var" directory. This will separate the disk I/O for JFS logging and the jobs of HPOS, avoiding disk contention.
- Keep the amount of jobs in HPOS to a minimum. The more jobs in HPOS, the more memory and CPU it uses.
- Use "generic" templates if you wish to stop HPOS from doing any job delivery monitoring. PJL templates will interrogate the printer about the job status frequently, and thus use more CPU. PJL templates do have a more reliable job delivery method though.
- Restart any DSMs that use a lot of disk space and/or memory. Restarting them clears the disk space and memory. This will restart any active print jobs, so be sure to check whether any jobs are active for the DSM before restarting it (especially those 1000-page print jobs at 99% completion...).
- If you have geographically dispersed users, it might be best to set up a secondary server, which also runs DLM, JQM, DSM, and EM processes. This keeps most network traffic local to the users, instead of moving documents from one location to another for processing and then transferring them back over the network to the users. Having an EM process local to the users also saves a lot of EM network traffic.
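The daily log rotation suggested in the list above could be sketched as follows. This is a hedged sketch: the log directory and the seven-day retention period are assumptions, so adjust them to your installation.

```shell
#!/bin/sh
# Sketch: rotate HPOS server log files daily.
# Copies each log, cleans out the original, and removes old copies.
rotate_logs() {
    dir=$1                      # log directory (an assumption; adjust)
    keep=$2                     # days to keep rotated copies
    stamp=$(date +%Y%m%d)
    for f in "$dir"/*.log ; do
        [ -f "$f" ] || continue
        cp "$f" "$f.$stamp"     # keep a dated copy
        : > "$f"                # truncate the live log in place
    done
    # remove rotated copies older than $keep days
    find "$dir" -name '*.log.*' -mtime +"$keep" -exec rm {} \;
}

# example (path is an assumption): rotate_logs /appl/dazel/var/log 7
```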
By default, the LPG daemon can process 11 jobs at a time. Under heavy load, the LPG daemon might stop because of this limitation. To avoid this, you can use the -R option for the LPR Gateway, which allows an unlimited number of incoming jobs.
This is how to add the -R option:
- Login as user root.
- Stop your line printer gateway:
# stop_server lpg
- Go to your HPOS installation directory, then switch to the etc subdirectory.
- Backup your current HostConfig.sgml file (just in case!):
# cp HostConfig.sgml HostConfig.sgml.original
- Edit the HostConfig.sgml file and add the -R option under server-executable-options in your DAZEL-SERVER NAME="lpg" section, like this:
<DAZEL-AVPAIR NAME="server-executable-options">
<DAZEL-VALUE STRING-VALUE="-n !{server-name}!">
<DAZEL-VALUE STRING-VALUE="-l !{server-login-name}!">
<DAZEL-VALUE STRING-VALUE="-R">
</DAZEL-AVPAIR>
- Now start your line printer gateway again:
# start_server lpg
- Check your LPG server to see if the -R option is enabled:
# list_server lpg
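As a rough cross-check, you can also scan HostConfig.sgml itself. The function below is a hedged sketch: it does only a crude text scan for the -R value inside the lpg server block and assumes the block layout shown above, rather than parsing the SGML properly.

```shell
#!/bin/sh
# Sketch: crude text scan for the -R option in the lpg block of
# HostConfig.sgml. Assumes the layout shown above; not an SGML parser.
has_lpg_R() {
    awk '
        /DAZEL-SERVER NAME="lpg"/                       { inlpg = 1 }
        inlpg && /DAZEL-SERVER NAME="/ && !/NAME="lpg"/ { inlpg = 0 }
        inlpg && /STRING-VALUE="-R"/                    { found = 1 }
        END                                             { exit !found }
    ' "$1"
}

# example: has_lpg_R /appl/dazel/etc/HostConfig.sgml && echo "-R is set"
```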
The following is a simple script to show you the number of jobs currently active within a given JQM per logical destination. The script is called nq:
You'll have to modify this script to set the correct path to setup_env.sh.
#!/bin/ksh
. /appl/dazel/etc/setup_env.sh
jqm=$1
if [ -z "$jqm" ] ; then
echo "Please enter the name of a JQM."
exit 1
fi
pdls -c j -a destination $jqm: | grep destination | sort | awk '{print $3}' > /tmp/nq.$$
cat /tmp/nq.$$ | sort -dfu | while read printer ; do echo $printer: `grep $printer /tmp/nq.$$ | wc -l | awk '{print $1}'` jobs
done
rm /tmp/nq.$$
When you run this script, enter the name of a JQM as a parameter for this script:
# nq jqm_01
pr31249: 1 jobs
pr43461: 8 jobs
pr58153: 11 jobs
pr77996: 1 jobs
pr03226: 5 jobs
The following is a simple script to show you the busiest logical destinations. The script is called tq and needs nq (described above) to run:
You'll have to modify this script to set the correct path to nq.
#!/bin/ksh
cd /appl/dazel
nmcp list ".*,config,server_manager" | grep jqm | cut -f1 -d, | while read JQM ; do
./nq $JQM >> /tmp/tq.$$
done
sort -nk2 /tmp/tq.$$
rm /tmp/tq.$$
# tq
pr26381: 1 jobs
pr50342: 1 jobs
pr50555: 1 jobs
pr50895: 1 jobs
pr69215: 1 jobs
pr36418: 2 jobs
pr39993: 2 jobs
pr50692: 3 jobs
Topics: HP Output Server
Core dump AIM
If your AIM daemon dumps core frequently and you have a lot of destinations in HP Output Server, it may well be that the default stack size for the AIM daemon is too small. By default, this is 64 KB. On AIX, you can confirm this by checking the errpt: if the errpt shows an error about "Too many stack elements", you should adjust the stack size of the AIM daemon. This may be encountered when searching for destinations by location within the Envoy Delivery Agent.
To solve this: go to $DAZEL_HOME/etc and edit HostConfig.sgml. Search for aim in this file and scroll down to server-executable-options. Add the following line:
<DAZEL-VALUE STRING-VALUE="-s 4096">
This will increase the stack size to 4 MB. You can increase it up to 8 MB if you like. Now restart the AIM daemon and your problems should be gone.
Topics: HP Output Server
Stack timeout
Problems may be encountered with printers connected via Axis Boxes to the network, producing errors with some of the prints, usually larger prints. The printer prints a page with the following text: "ERROR: timeout, OFFENDING COMMAND: timeout, STACK:".
The solution is quite easy: the port speed of the LPT port on the Axis box is probably set to "Standard". Set it to "High Speed" and the timeouts will no longer occur.


