My Tricks

Cable Info


Digi Portserver





Linux software




PC Hardware



Red Hat and Fedora



Sun Hardware

Sun Software




Veritas and Extricity info

Extricity Install notes


----------------------------------------

Cable Info

----------------------------------------

CAT 5 Cable Info

1. White / Orange
2. Orange
3. White / Green
4. Blue
5. White / Blue
6. Green
7. White / Brown
8. Brown

(Crossover)

1. White / Green
2. Green
3. White / Orange
4. Blue
5. White / Blue
6. Orange
7. White / Brown
8. Brown
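For reference, the crossover pinout above is just the straight-through (T568B) list with pins 1<->3 and 2<->6 swapped. A small Python sketch of that mapping:

```python
# T568B straight-through pin colors, pins 1-8, as listed above.
STRAIGHT = ["White/Orange", "Orange", "White/Green", "Blue",
            "White/Blue", "Green", "White/Brown", "Brown"]

def crossover(pins):
    """Derive the crossover end: swap pins 1<->3 and 2<->6 (the
    transmit/receive pairs); pins 4, 5, 7 and 8 stay put."""
    out = list(pins)
    out[0], out[2] = out[2], out[0]   # pin 1 <-> pin 3
    out[1], out[5] = out[5], out[1]   # pin 2 <-> pin 6
    return out

print(crossover(STRAIGHT))
```

Applying the swap twice gets you back to straight-through, which is a handy self-check.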

Unix Serial Port Resources
Sun Serial Port & Cables Pinouts


----------------------------------------

Bluetooth-enabled electronic devices connect and communicate wirelessly through short-range, ad hoc networks known as piconets. Each device can simultaneously communicate with up to seven other devices within a single piconet, and can also belong to several piconets simultaneously. Piconets are established dynamically and automatically as Bluetooth-enabled devices enter and leave radio proximity.

Bluetooth technology operates in the unlicensed industrial, scientific and medical (ISM) band at 2.4 to 2.485 GHz, using a spread spectrum, frequency hopping, full-duplex signal at a nominal rate of 1600 hops/sec. The 2.4 GHz ISM band is available and unlicensed in most countries.

Bluetooth technology's adaptive frequency hopping (AFH) capability was designed to reduce interference between wireless technologies sharing the 2.4 GHz spectrum. AFH works within the spectrum to take advantage of the available frequencies: it detects other devices in the spectrum and avoids the frequencies they are using. This adaptive hopping allows for more efficient transmission within the spectrum, giving users greater performance even when other 2.4 GHz technologies are in use alongside Bluetooth. The signal hops among 79 frequencies at 1 MHz intervals, which gives a high degree of interference immunity.

The operating range depends on the device class:

Class 3 radios – have a range of up to 1 meter or 3 feet
Class 2 radios – most commonly found in mobile devices – have a range of 10 meters or 30 feet
Class 1 radios – used primarily in industrial use cases – have a range of 100 meters or 300 feet
The most commonly used radio is Class 2 and uses 2.5 mW of power. Bluetooth technology is designed to have very low power consumption. This is reinforced in the specification by allowing radios to be powered
down when inactive.

Data Rate
1 Mbps for Version 1.2; up to 3 Mbps for Version 2.0 + EDR
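A quick back-of-the-envelope check on the hop numbers quoted above (1600 hops/sec across 79 channels):

```python
HOPS_PER_SEC = 1600   # nominal hop rate quoted above
CHANNELS = 79         # 1 MHz-spaced channels in the 2.4 GHz ISM band

dwell_us = 1_000_000 / HOPS_PER_SEC   # time spent on each frequency, in microseconds
print(dwell_us)   # 625.0
```

So each hop dwells on a channel for 625 microseconds before moving on.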


----------------------------------------

Digi Portserver

----------------------------------------

Digi Portserver Commands

who: shows who is connected
Terminate connections by port: kill tty=#
Restart the terminal server: boot action=reset (abbreviated: b a = r)

Set IP Info:

set config ip=
set config myname=tsnau012
set config gateway=
set config
set config submask=
set config nameserv=

Show firmware Rev: set config

Show Configuration: cpconf term


----------------------------------------

Distance Limitations
Precisely how much benefit you see will greatly depend on how far you are from the central office of the company providing the ADSL service. ADSL is a distance-sensitive technology: As the connection's length
increases, the signal quality decreases and the connection speed goes down. The limit for ADSL service is 18,000 feet (5,460 meters), though for speed and quality of service reasons many ADSL providers place a
lower limit on the distances for the service. At the extremes of the distance limits, ADSL customers may see speeds far below the promised maximums, while customers nearer the central office have faster
connections and may see extremely high speeds in the future. ADSL technology can provide maximum downstream (Internet to customer) speeds of up to 8 megabits per second (Mbps) at a distance of about 6,000 feet (1,820 meters), and upstream speeds of up to 640 kilobits per second (Kbps). In practice, the best speeds widely offered today are 1.5 Mbps downstream, with upstream speeds varying between 64 and 640 Kbps.

You might wonder, if distance is a limitation for DSL, why it's not also a limitation for voice telephone calls. The answer lies in small amplifiers called loading coils that the telephone company uses to boost voice signals. Unfortunately, these loading coils are incompatible with ADSL signals, so a loading coil in the loop between your telephone and the telephone company's central office will disqualify you from receiving
ADSL. Other factors that might disqualify you from receiving ADSL include:

* Bridge taps - These are extensions, between you and the central office, that extend service to other customers. While you wouldn't notice these bridge taps in normal phone service, they may take the
total length of the circuit beyond the distance limits of the service provider.
* Fiber-optic cables - ADSL signals can't pass through the conversion from analog to digital and back to analog that occurs if a portion of your telephone circuit comes through fiber-optic cables.
* Distance - Even if you know where your central office is (don't be surprised if you don't -- the telephone companies don't advertise their locations), looking at a map is no indication of the distance a signal must travel between your house and the central office.

----------------------------------------

SBC e-mail setup for dynamic IPs

Yahoo! Help - SBC Yahoo! Business Email

Sendmail Reference

SPF: Project Overview

How to route your outgoing mail through your Internet Service Provider's mail servers

The ol' port 25 email telnet test

Connected to
Escape character is '^]'.

250 OK

250 OK - Mail from <>

250 OK <> ... Recipient ok

354  Enter mail, end with "." on a line by itself
        (end data with <CR><LF>.<CR><LF>)

Subject: test message

This is a test and only a test.
(type <CR><LF>.<CR><LF> or [enter].[enter] to end data)

250 OK: Queued (Message accepted for delivery)

221 Closing connect, good bye
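The transcript above shows only the server's replies; the client commands that elicit them follow the usual SMTP sequence. A sketch of that command side (the hostnames and addresses here are placeholders, not the elided values from the original session):

```python
def smtp_test_dialogue(helo_host, sender, recipient, body):
    """Client-side commands for a manual 'telnet host 25' test like the
    transcript above. Hostnames/addresses are placeholders."""
    return [
        f"HELO {helo_host}",       # server replies 250 OK
        f"MAIL FROM:<{sender}>",   # server: 250 OK - Mail from <...>
        f"RCPT TO:<{recipient}>",  # server: 250 OK ... Recipient ok
        "DATA",                    # server: 354 Enter mail, end with "."
        "Subject: test message",
        "",
        body,
        ".",                       # server: 250 OK: Queued
        "QUIT",                    # server: 221 closing
    ]

for line in smtp_test_dialogue("client.example.com", "me@example.com",
                               "you@example.com",
                               "This is a test and only a test."):
    print(line)
```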

----------------------------------------

Traffic Delivery Services

In North America, there are (currently) three delivery methods. The first way is through a wireless data system like that offered by MSN Direct. They take the traffic data collected by and send it over wireless airwaves to a receiver that is either built into your GPS or added as an external antenna.

A second method is via a “silent” transmission over FM radio. In addition to the music you might hear from a radio station, other “silent” signals can be sent over the frequency. A special FM antenna is either built into your GPS or connected as an antenna which listens for those special signals being broadcast over FM radio frequencies.

The final, and least common method is to utilize a data connection from a mobile phone. Your mobile phone connects to the Internet over the phone’s cellular connection, downloads the traffic information, and sends it via Bluetooth to your GPS. Your phone must be in an area covered by your phone’s data plan’s coverage area. This is the method used by the TomTom PLUS services.


----------------------------------------

Here are a couple of LDAP links that seem to have good info....

Building an LDAP Server on Linux, Part 3

AW: Problem with ldapadd - parent does not exist

Google Groups : linux.debian.user

Google Groups : comp.sys.hp.mpe

----------------------------------------

Linux software

----------------------------------------

Linux and missing physical RAM:
If you're using LILO, try typing
LILO: linux mem=256m
at the boot prompt. If it works, add
append="mem=256m"
to the image section in /etc/lilo.conf and run /sbin/lilo.

Linux setiathome setup:
Put the setiathome executables someplace like /usr/local/bin/seti
and modify the crontab:
0,5,10,15,20,25,30,35,40,45,50,55 * * * * cd /usr/local/bin/seti; ./setiathome -proxy proxy:80 -nice 19 -email -graphics > /dev/null 2> /dev/null

Linux swap file setup:
You can create a swap FILE instead of a swap partition.
To initialize a 64 MB file called 'swapfile' in your current directory:
dd if=/dev/zero of=swapfile bs=1024 count=65536
mkswap swapfile 65536
swapon swapfile
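The dd line above sizes the file as bs times count. A quick sanity check of that arithmetic, plus a helper for other sizes:

```python
bs, count = 1024, 65536               # arguments from the dd command above
size_mib = bs * count / (1024 * 1024) # resulting file size in MiB

def dd_count(mib, bs=1024):
    """dd 'count' needed for a swap file of `mib` MiB at block size `bs`."""
    return mib * 1024 * 1024 // bs

print(size_mib)        # 64.0
print(dd_count(128))   # 131072, i.e. the count for a 128 MiB swap file
```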

Core dumps on Linux:

Disabling SELinux: disable SELinux altogether by setting the line
SELINUX=disabled
in your /etc/sysconfig/selinux file.

man chcon

----------------------------------------

MRTG Performance Monitoring Extensions

Quick HOWTO: Advanced Server Monitoring With MRTG


Net-SNMP tutorial

NET-SNMP Tutorial -- Using local MIBs

ByteSphere's MIB Download Area



Subject: stopping and starting mrtg
After logging in as root type source /.profile before doing the following:
To start mrtg type in
/usr/bin/perl -w /usr/local/bin/mrtg /usr/local/etc/mrtg/mrtg.cfg > /dev/null 2>&1

To get a log in /tmp
/usr/bin/perl -w /usr/local/bin/mrtg --logging=/tmp/mrtg.log /usr/local/etc/mrtg/mrtg.cfg >/tmp/mrtg.log 2>&1 &

To stop mrtg :
ps -ax|grep mrtg

=>ps -ax|grep mrtg
32089 ?? S 0:01.00 /usr/bin/perl -w /usr/local/bin/mrtg /usr/local/etc/mrtg/mrtg.cfg
32098 ?? S 0:00.96 /usr/bin/perl -w /usr/local/bin/mrtg /usr/local/etc/mrtg/mrtg.cfg
32103 ?? S 0:00.98 /usr/bin/perl -w /usr/local/bin/mrtg /usr/local/etc/mrtg/mrtg.cfg
32108 ?? R 0:00.99 /usr/bin/perl -w /usr/local/bin/mrtg /usr/local/etc/mrtg/mrtg.cfg
32158 ?? S 0:00.97 /usr/bin/perl -w /usr/local/bin/mrtg /usr/local/etc/mrtg/mrtg.cfg
32160 ?? S 0:00.69 /usr/bin/perl -w /usr/local/bin/mrtg /usr/local/etc/mrtg/mrtg.cfg
42226 ?? Is 1116:27.41 /usr/bin/perl -w /usr/local/bin/mrtg /usr/local/etc/mrtg/mrtg.cfg
35720 p0 S+ 0:00.00 grep mrtg

do a kill on process 42226 (always the one with 'Is' in the third field; this is the parent)

How do I get the mrtg ping probe to work on Fedora 4?
In mrtg-ping-probe, look for

# try to find round trip times
if ($ping_output =~m@round-trip(....................

I added
elsif ($ping_output =~m@rtt(................same as above string.......

Need to add the new string for packet loss

An MRTG ping shell script:

# Google, for example
DATA=`$PING -c10 -s500 $ADDR -q`
LOSS=`echo $DATA | awk '{print $18}' | tr -d %`
echo $LOSS
if [ "$LOSS" = "100" ]; then
    echo 0
else
    echo $DATA | awk -F/ '{print $5}'
fi
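The shell approach above scrapes loss and average RTT out of ping's summary by awk field position, which is fragile across ping versions (the round-trip vs rtt wording issue noted earlier). A regex-based sketch of the same extraction:

```python
import re

def parse_ping(output):
    """Extract (packet loss %, average RTT ms) from ping's summary.
    Handles both the old 'round-trip' and the newer 'rtt' wording
    that the mrtg-ping-probe patch above deals with."""
    loss = int(re.search(r"(\d+)% packet loss", output).group(1))
    m = re.search(r"(?:round-trip|rtt)[^=]*=\s*([\d.]+)/([\d.]+)/([\d.]+)", output)
    avg = float(m.group(2)) if m else None
    return loss, avg

sample = ("10 packets transmitted, 10 received, 0% packet loss, time 9012ms\n"
          "rtt min/avg/max/mdev = 10.1/12.3/15.0/1.2 ms")
print(parse_ping(sample))   # (0, 12.3)
```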

Title[]: Round Trip Time
PageTop[]: <H1>Round Trip Time</H1>
Target[]: `/etc/mrtg/`
MaxBytes[]: 2000
Options[]: growright,unknaszero,nopercent,gauge
LegendI[]: Pkt loss %
LegendO[]: Avg RTT
YLegend[]: RTT (ms)


----------------------------------------

Nagios plug-in development guidelines

Network UPS Tools

Cortona VRML Client Web3D viewer

NagiosExchange: Home: Welcome

# Define exit states for a nagios plugin's status
$STATE_OK = 0;
$STATE_WARNING = 1;
$STATE_CRITICAL = 2;
$STATE_UNKNOWN = 3;


----------------------------------------

The MAC sublayer collects bits from the reconciliation layer. One of its functions is to check for invalid MAC frames by checking the Frame Check Sequence (FCS) field. It does so by computing the 32-bit CRC of the received frame and comparing it to the received 32-bit CRC in the FCS field. In case of a mismatch, it should reject the frame.
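The FCS check described above can be sketched with a CRC-32 (Ethernet's polynomial, as implemented by zlib.crc32; real 802.3 framing details such as bit ordering and the complemented residue are glossed over here):

```python
import zlib

def append_fcs(frame: bytes) -> bytes:
    """Append a 32-bit CRC of the frame (little-endian byte order)."""
    return frame + zlib.crc32(frame).to_bytes(4, "little")

def fcs_ok(frame_with_fcs: bytes) -> bool:
    """Recompute the CRC over the payload and compare it with the
    received FCS field - the check the MAC sublayer performs."""
    payload, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return zlib.crc32(payload).to_bytes(4, "little") == fcs

frame = append_fcs(b"example frame payload")
print(fcs_ok(frame))                                  # True
print(fcs_ok(bytes([frame[0] ^ 0xFF]) + frame[1:]))   # False: a single-byte error is always caught
```

A 32-bit CRC detects any error burst of 32 bits or less, which is why the corrupted-byte case always fails the check.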

IP Address Format

Class A    First octet 1-126      Subnet Mask is reserved for use as the default route
           Private addresses - is reserved for use as a loopback address (typically

Class B    First octet 128-191    Subnet Mask
           Private addresses -

Class C    First octet 192-223    Subnet Mask
           Private addresses -

Class D    First octet 224-239    Used for multicast applications
           OSPF devices respond to packets sent to (AllSPFRouters)

Class E    First octet 240-255    Reserved for future use

Info on 802.11a Wi-Fi
5 GHz band
Theoretical throughput 54 Mbps
Eight nonoverlapping channels
Real-world throughput ~21 Mbps

Info on 802.11b Wi-Fi
2.4 GHz band
Theoretical throughput 11 Mbps
Three truly nonoverlapping channels
Real-world throughput ~5 Mbps

Info on 802.11g Wi-Fi
2.4 GHz band
Theoretical throughput 54 Mbps
Compatible with 802.11b

Netgear or Zyxel router and DYNDNS.ORG
For custom DNS, add "&system=custom" (without the quotes) after the domain in the "Host" field.

----------------------------------------

PC Hardware

----------------------------------------

IDE (ATA-100) hard drives
Maximum transfer rate is 100MB per second

IDE (ATA-133) hard drives
Maximum transfer rate is 133MB per second

SATA (Serial ATA) hard drives
Maximum transfer rate is 150MB per second

(DDR) double-data-rate SDRAM
PC1600 = 200MHz 184 pin DIMM
PC2100 = 266MHz 184 pin DIMM
PC2700 = 333MHz 184 pin DIMM
PC3200 = 400MHz 184 pin DIMM

PCI Express
hot-pluggable
Based on serial technology, the specification defines bit rates of 2.5 gigabits per second in each direction for each lane, and manufacturers will be able to implement the interface with as many as 32 lanes, for total throughput as high as 16GB per second.

AGP 8X tops out at 2.1GB per second.
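The 16GB/s figure above works out once you strip the 8b/10b encoding overhead and count both directions:

```python
raw_bps_per_lane = 2_500_000_000          # 2.5 Gbit/s per lane, per direction
payload_bps = raw_bps_per_lane * 8 // 10  # remove 8b/10b encoding overhead
mb_per_s_per_lane = payload_bps // 8 // 1_000_000   # bits -> megabytes per second
lanes = 32

total_gb_per_s = mb_per_s_per_lane * lanes * 2 // 1000  # x32 lanes, both directions
print(mb_per_s_per_lane)  # 250 MB/s per lane, per direction
print(total_gb_per_s)     # 16 - the aggregate figure quoted above
```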

PCI Technology Overview

Complete Information about iSCSI, SCSI, RAID, SAS and related subjects - David Woodsmall

----------------------------------------

Treo pin out

Pin 1 is on the left, pin 15 is on the right (with the Treo on its back, looking directly at the bottom)

This section defines the functions of the signals on the 15-pin Cradle Connector. The signals are described in alphabetical order following the table below. Active-low signals have a “*” at the end of their names.

Treo Bottom Connector Pin Summary

Pin Name I/O/P1 Function

1 RXD I Receive Data
2 TXD O/P Transmit Data/Power
3 No Connect No Connect
4 HS2* I Serial Cradle Detect
5 HS1* I HotSync Interrupt
6 GND P Ground
7 USB_D- I/O USB Data
8 USB_D+ I/O USB Data
9 No Connect No Connect
10 No Connect No Connect
11 No Connect No Connect
12 GND P Ground
13 GND P Ground
14 VDOCK P Cradle Power (charging)
15 VDOCK P Cradle Power (charging)
(1) I = input, O = output, P = power


----------------------------------------

PCGuide - Ref - RAID Levels

Complete Information about iSCSI, SCSI, RAID, SAS and related subjects - David Woodsmall

Will add the device specified so that it is accessible to the system.
echo "scsi add-single-device <H> <C> <I> <L>" > /proc/scsi/scsi
        where <H> <C> <I> <L> represents
            Host <H>,
            Channel <C>,
            Id <I>,
            Lun <L>

Will remove the device specified so that it is no longer accessible to the system.
echo "scsi remove-single-device <H> <C> <I> <L>" > /proc/scsi/scsi
        where <H> <C> <I> <L> represents
            Host <H>,
            Channel <C>,
            Id <I>,
            Lun <L>

Will scan all host adapters again to see if there are any new devices.
echo "scsi scan-new-devices" > /proc/scsi/scsi

Will dump the status of all current SCSI commands. <#> is a number specifying the level of detail for the dump; 0-9 are valid.
echo "scsi dump <#>" > /proc/scsi/scsi
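The echo strings above all follow one template; a trivial helper to format them (the H/C/I/L numbers below are examples, not values from the original):

```python
def scsi_cmd(action, host, channel, sid, lun):
    """Format the line echoed into /proc/scsi/scsi (2.4-era kernels)."""
    return f'echo "scsi {action} {host} {channel} {sid} {lun}" > /proc/scsi/scsi'

print(scsi_cmd("add-single-device", 0, 0, 2, 0))
```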


----------------------------------------

Red Hat and Fedora

----------------------------------------

Install rpm:
rpm -i <filename.rpm>

List all rpm:
rpm -qa

upgrade an rpm:
rpm -Uvh <filename.rpm>

Query package owning "file":
which iostat
rpm -qf /usr/bin/iostat

chkconfig also manages service files launched by xinetd.

dd SCSI:
dd if=/dev/sda of=/dev/sdb bs=65536

Default route, hostname, etc. are set in /etc/sysconfig/network

Add user from command line:
useradd -c 'RF Gun 001' -d /home/rf001 -m -u 8001 -g 8000 -s /bin/bash -p rf001 rf001

Make a service start at boot (example - ntpd):
/sbin/chkconfig --list
/sbin/chkconfig --level 35 ntpd on (sets ntpd to start at runlevels 3 and 5)

Options for services:

Show link info:
mii-tool

Nail up interfaces to 100 FDX:
mii-tool -F 100baseTx-FD

Multiple NICs: per-interface config files live in /etc/sysconfig/network-scripts/, e.g.:

more /etc/sysconfig/network-scripts/ifcfg-eth0

Network Info...

Setting up heartbeat on Fedora by steve

# This document describes the installation and configuration of a heartbeat
# based two server cluster.

1. Acquire the heartbeat package and other supporting packages.
1.1 Download the appropriate packages
1.2 Run "rpm -i"
1.3 Run "rpm -i"
1.4 Run "rpm -i"

2. Configure heartbeat
2.1 Go to /etc/ha.d
2.2 Move the example config files into place:
        cp /usr/share/doc/heartbeat-1.2.2/ .
        cp /usr/share/doc/heartbeat-1.2.2/haresources .
        cp /usr/share/doc/heartbeat-1.2.2/authkeys .
2.3 Edit and change the following:
        Uncomment "# logfile /var/log/ha-log"
        Uncomment "# bcast eth1" and add a line for each other heartbeat NIC
        Set auto_failback to off
        Modify the "node" lines as needed
2.4 Edit haresources and add a line like the below.
        Note that scripts to be run will be looked for in
        /etc/ha.d/resource.d and in /etc/rc.d/init.d in that order.
        <Node name>    <shared IP>    <script to run, e.g. apache>
2.5 Edit authkeys and enable crc auth
        auth 1
        1 crc
2.6 chmod 600 authkeys
3. Test the cluster (note that ipchains rules may need to be set to allow the heartbeat through)

Create a swap file (Solaris)

create a swap file using the command, for example:
    mkfile 25m /files/swapfile
activate the swap file, for example:
    /usr/sbin/swap -a /files/swapfile
add an entry for the swap file to the /etc/vfstab file, for example:
    /files/swapfile - - swap - no -
verify that the swap file is added by typing
    /usr/sbin/swap -l
to see swap space: /usr/sbin/swap -s

Red Hat Cluster
cluadmin> cluster status
cluadmin> service disable oracle_db_name
cluadmin> service enable oracle_db_name

Module information
lsmod     Lists modules currently loaded.
insmod    Loads a module into the kernel.
rmmod     Unloads a module currently loaded.
modinfo   Displays information about a module.
depmod    Creates a dependency file listing all other modules on which the specified module may rely.
modprobe  Loads a module along with any dependent modules.


----------------------------------------


Net-SNMP tutorial

NET-SNMP Tutorial -- Using local MIBs

ByteSphere's MIB Download Area

ipMonitor Support Portal


snmp walk
snmpwalk -c pub localhost ucdavis|grep dskPercent

This sends back all parameters from the ucdavis module; the grep limits the list to disk stats. Without the grep it will give all parameters.

UCD-SNMP-MIB::dskPercent.1 = INTEGER: 27   /

UCD-SNMP-MIB::dskPercent.2 = INTEGER: 48   /usr/local/extricity

UCD-SNMP-MIB::dskPercent.3 = INTEGER: 12   /ex01

UCD-SNMP-MIB::dskPercent.4 = INTEGER: 27

UCD-SNMP-MIB::dskPercent.5 = INTEGER: 48

UCD-SNMP-MIB::dskPercent.6 = INTEGER: 12

UCD-SNMP-MIB::dskPercentNode.1 = INTEGER: 2

UCD-SNMP-MIB::dskPercentNode.2 = INTEGER: 30

UCD-SNMP-MIB::dskPercentNode.3 = INTEGER: 4

UCD-SNMP-MIB::dskPercentNode.4 = INTEGER: 2

UCD-SNMP-MIB::dskPercentNode.5 = INTEGER: 30

UCD-SNMP-MIB::dskPercentNode.6 = INTEGER: 4

snmpwalk -v1 -c pub localhost .
UCD-SNMP-MIB::ssIndex.0 = INTEGER: 1
UCD-SNMP-MIB::ssErrorName.0 = STRING: systemStats
UCD-SNMP-MIB::ssSwapOut.0 = INTEGER: 0
UCD-SNMP-MIB::ssIOReceive.0 = INTEGER: 94
UCD-SNMP-MIB::ssSysInterrupts.0 = INTEGER: 52
UCD-SNMP-MIB::ssSysContext.0 = INTEGER: 96
UCD-SNMP-MIB::ssCpuUser.0 = INTEGER: 46
UCD-SNMP-MIB::ssCpuSystem.0 = INTEGER: 1
UCD-SNMP-MIB::ssCpuIdle.0 = INTEGER: 51
UCD-SNMP-MIB::ssCpuRawUser.0 = Counter32: 15521926
UCD-SNMP-MIB::ssCpuRawNice.0 = Counter32: 73
UCD-SNMP-MIB::ssCpuRawSystem.0 = Counter32: 605810
UCD-SNMP-MIB::ssCpuRawIdle.0 = Counter32: 17388159
UCD-SNMP-MIB::ssRawInterrupts.0 = Counter32: 146419452
UCD-SNMP-MIB::ssRawContexts.0 = Counter32: 203942247
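Output like the dskPercent walk above is easy to post-process; a sketch that maps each index to its percentage and mount point:

```python
def parse_dsk_percent(walk_output):
    """Map each dskPercent index to (percent used, mount point if shown)."""
    table = {}
    for line in walk_output.splitlines():
        if "dskPercent." not in line:
            continue
        left, right = line.split(" = INTEGER: ")
        idx = int(left.split(".")[-1])      # the instance index after the OID
        parts = right.split()
        table[idx] = (int(parts[0]), parts[1] if len(parts) > 1 else None)
    return table

sample = """UCD-SNMP-MIB::dskPercent.1 = INTEGER: 27   /
UCD-SNMP-MIB::dskPercent.2 = INTEGER: 48   /usr/local/extricity"""
print(parse_dsk_percent(sample))   # {1: (27, '/'), 2: (48, '/usr/local/extricity')}
```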



----------------------------------------

Notes on ntp *** Did not load on last test ***

 public ntp time server for everyone



Removing ^M from a file

cat <filename> | tr -d '\015' > <fixed_filename>    This will strip out the ^M chars and write the result to a new file.

Configuring and using syslogd

Unix - VNC
Remote LTSP access with SSHVNC applet

LISa - LAN Information Server

----------------------------------------

Sun Hardware

----------------------------------------

Unix Serial Port Resources Sun Serial Port & Cables Pinouts



----------------------------------------

Sun Software

----------------------------------------

Solaris FAQ

Booting issues & Problems in Solaris

Sys Admin Magazine>Successful SolarisTM Performance Tuning

lockfs -fa; init 6

lockfs -fa; init 0

Use the undocumented Solaris command netstat -k for kernel stats relating to the network
netstat -r to see the route table

Solaris multipathing

Add Logical Interface to Physical Interface:
ifconfig eri0:1 plumb
ifconfig eri0:1 netmask
ifconfig eri0:1 up
ifconfig eri0:1 netmask broadcast
add network to /etc/netmasks if applicable
in /etc/ create hostname.<interface>:<instance> (e.g. hostname.hme0:1)
the contents of this file can be a hostname or an IP address.
Add to /etc/hosts

100baseTX(FD/HD) Auto negotiate issues 

I used the ndd utility to check the settings of the hme interface. 
The commands were: 
ndd /dev/hme adv_100fdx_cap 
ndd /dev/hme adv_100hdx_cap 
ndd /dev/hme adv_10fdx_cap 
ndd /dev/hme adv_10hdx_cap 
ndd /dev/hme adv_autoneg_cap 

I then forced the hme interface to run at 100Mb FDX using the following: 
ndd -set /dev/hme adv_100fdx_cap 1 
ndd -set /dev/hme adv_100hdx_cap 0 
ndd -set /dev/hme adv_10fdx_cap 0 
ndd -set /dev/hme adv_10hdx_cap 0 
ndd -set /dev/hme adv_autoneg_cap 0 

The parameters changed by ndd remain until the next system reboot. 
To make them more permanent, I added the following to my /etc/system file 
set hme:hme_adv_100fdx_cap = 1 
set hme:hme_adv_100hdx_cap = 0 
set hme:hme_adv_10fdx_cap = 0 
set hme:hme_adv_10hdx_cap = 0 
set hme:hme_adv_autoneg_cap = 0 

To read floppy disks on Solaris 2.5.1 and greater:
Insert the floppy.
% ps -ef | grep vold

If the vold process is running:
% volcheck
% tar tvf /vol/dev/rdiskette0/unlabeled

If vold is not running:
% tar tvf /dev/rfd0

Sendmail/alias tools
praliases -f <filename> (to view contents of alias.db)
sendmail -v -bv <aliasname> (to see if aliases are working)

Subject: mail permissions for patching... 
#set permissions for mail complaints
print "setting /etc to 755...\n";
print LOGFILE "setting /etc to 755...\n";
`rsh $target chmod 755 /etc`;
print "setting /etc/mail to 755...\n";
print LOGFILE "setting /etc/mail to 755...\n";
`rsh $target chmod 755 /etc/mail`;
print "setting /etc/mail to root...\n";
print LOGFILE "setting /etc/mail to root...\n";
`rsh $target chown root /etc/mail`;
#we will try this twice, just to be sure
`rsh $target chmod 755 /etc`;
print "run newaliases...\n";
print LOGFILE "run newaliases...\n";
`rsh $target

SUN virtual desktop settings
To get the virtual desktop, modify .xinitrc to
startup olvwm instead of olwm.
For openwin.

SUN CDE setup
To get CDE running with our (GDC) login scripts,
you must check for both x11only
as well as the DT variables (in .profile). Like this:
# ExeBash if found (and not an Xterminal)

if [ -x $BASH ]
echo "Select bash."
echo "bash shell front end not found or failed, run /bin/sh."
SHELL=/bin/sh; export SHELL
exec $SHELL

if [ "$X11ONLY" != "YES" -a ! "$DT" ]; then
echo "Exec shell."
exec $SHELL -login

echo "Exit .profile"

To get Suns to disallow some user logins, use netgroups:
* In nsswitch.conf, make the passwd line "passwd: compat" - this makes the password scheme SunOS-compatible; the netgroup disallows will not work without it.
* Create the appropriate netgroups in the NIS netgroup file.
* Modify the NIS passwd and shadow files to look for specific netgroups to allow in. All else will get a sorry script.
NIS passwd additions sample:
NIS shadow sample:

Sorry script sample:

trap '' 1 2 3 18


echo " "
echo "This host is a primary file server; "
echo "Access is restricted. "
echo " "

logger -p auth.notice "Primary server access denied for $WHOAMI"

sleep 3

To clean out an old user's account:
1. copy users account to the staging area (/vol/staging/username)
2. move mail for user from /var/spool/mail to the staging area
The mail file for a given user is named username
3. move mail(notes) from technotes(epc945) d:\notes\data\mail\username.nsf
to the staging area
4. clean up group
5. comment out user in passwd
6. unlink home dir
7. remove user from auto.home
8. verify that staging area and old home are same (du -sk)
9. delete old home directory

Sun printing utilities:
use lpstat to see all current print jobs
lpstat -a for all printers
lpstat -d gives the current default printer
lpstat -o for queued jobs
lpstat -t for everything
- on 388 as root:
cancel <print-id> clears the queue of this print job
cancel user printqueuename clears all user jobs from the queue
lprm -P printer jobnumber kills a particular job
use lpq -P printqueuename to see jobs
lpshut stops the print scheduler
/usr/lib/lp/lpsched restarts the print scheduler

If lpstat will not return data (hangs) and lpshut hangs too,
check to see that /var is not full (/var/spool/lp/requests/tmp)
kill lpsched, then check and kill SCHEDLOCK in /var/spool/lp
if needed. Then restart lpsched.

To control print jobs, use lpc:
to elevate a job to the top of the queue, use
lpc <enter>
topq print_queue job_name

To handle the lpsched using 100% CPU on 2.5.1:
1. For Solaris 2.5.1 and below WITHOUT 103959-03 patch
# lpshut
# cd /var/spool/lp
# rm -r requests tmp temp fifos
# ps -ef | grep lp
(kill any lp process still running after the lpshut)
# /usr/lib/lpsched

use the script /vol/ecc/pkg/sun/ for solaris 2.5.1
to do the following:
add patch 103959-10 (if needed)
clear the queues
replace the /etc/lp/Systems file
remove all existing printers
add all good printers

UNIX mkdir whole trees
use mkdir -p to make whole trees of directories.

UNIX The 'variable syntax' error in csh
is caused by a colon directly after a variable reference, like:
setenv PATH $PATH:. is WRONG; use setenv PATH ${PATH}:.

SUN To set up a new HP JetDirect printer on the Suns:
Add the printer to the hosts file and the ethers file, then run /usr/lib/hpnp/jetadmin to set up bootp. When this works, add the print queue. When that works, add the printer name to the /etc/printcap file so that Samba can pick it up and share it. BTW, the printcap in esun388:/etc/NIS is dormant; the REAL one is /etc/printcap. Note also that all Suns have to be rdisted or admintooled to see the new queue.

ftp software from ssnau150:/ssnau150/patches-8
put in /knexasp0702/patches/wuftpd-2.6.2-sol8-sparc-local
# pkgadd -d wuftpd-2.6.2-sol-sparc-local
1 SMCwuftpd wuftpd (sparc) 2.6.2
puts a bunch of stuff in /usr/local/ directories
sample ftpaccess file in /usr/local/etc/ftpaccess
sample ftpconversions file in /usr/local/etc/ftpconversions
/usr/local/sbin/in.ftpd wu-ftpd binary
need to change /etc/inetd.conf to use this new one
/etc/shells build from list in "man shells" and add "/bin/false"
(copy from knexasp0200) since users will have /bin/false in their /etc/passwd entry, they won't be able to login, but can still ftp
unixids are already setup - so this will be a little different than the other servers I setup wu-ftpd on
/etc/group ftpartnr::104:
/etc/passwd - currently
ibroker:x:4603:104:for jeff 8/20/02:/tmp:/bin/false
dsptc100:x:4604:104:for jeff 8/20/02:/tmp:/bin/false
dsptc101:x:4605:104:for jeff 8/20/02:/tmp:/bin/false

ibroker:x:4603:104:for jeff 8/20/02:/usr/local/extricity/data/external/ibroker:/bin/false
dsptc100:x:4604:104:for jeff 8/20/02:/usr/local/extricity/data/external /dsptc100:/bin/false
dsptc101:x:4605:104:for jeff 8/20/02:/usr/local/extricity/data/external /dsptc101:/bin/false

Way it's currently setup, users in group "ftpartnr" are controlled
Sol 7 /etc/ftpaccess
Sol 8 /usr/local/etc/ftpaccess
class all real,guest,anonymous *
limit all 10 Any
readme    README*    login
readme    README*    cwd=*
message    /welcome.msg    login
message    .message    cwd=*
compress    yes    all
tar    yes    all
greeting brief
##restricted-uid *
restricted-gid ftpartnr
upload    /usr/local/extricity/data/external/ibroker
*    yes    ibroker    ftpartnr    0660    nodirs
upload /usr/local/extricity/data/external/dsptc100
*    yes    dsptc100    ftpartnr    0660    nodirs
upload /usr/local/extricity/data/external/dsptc101
*    yes    dsptc101    ftpartnr    0660    nodirs
log commands real
log transfers anonymous,real inbound,outbound
shutdown /etc/shutmsg
email user@hostname
check subdirs currently under /usr/local/extricity/data/external/ibroker
create same set under the rest of the unixids - then change owner/group/permissions
OLD    ftp    stream    tcp    nowait    root /usr/sbin/in.ftpd    in.ftpd
NEW    ftp    stream    tcp    nowait    root /usr/local/sbin/in.ftpd    in.ftpd    -a
kill -HUP <inetd pid>
I'm blocked from telneting in as the user; I can ftp in as the user, and only put files under my home dir

Steps against the Nimda Worm for Samba
Author: HASEGAWA Yosuke
Translator: TAKAHASHI Motonobu <>
The information in this article applies to
Samba 2.0.x
Samba 2.2.x
Windows 95/98/Me/NT/2000

This article describes measures against the Nimda Worm for a Samba server.

The Nimda Worm spreads through shared disks on a network, as well as through Microsoft IIS, Internet Explorer and the Outlook family of mailers. The worm copies itself under the names *.nws and *.eml on the shared disk, and moreover as Riched20.dll in any folder containing a *.doc file. To prevent infection through a shared disk served by Samba, set up the following:


# This can break Administration installations of Office2k.
# in that case, don't veto the riched20.dll
veto files = /*.eml/*.nws/riched20.dll/
By setting the "veto files" parameter, matching files on the Samba server are completely hidden from clients, making it impossible to access them at all. In addition, the following is pointed out in the samba-jp:09448 thread: when a file named "readme.txt.{3050F4D8-98B5-11CF-BB82-00AA00BDCE0B}" exists on a Samba server, it is visible only as "readme.txt", and dangerous code may be executed if this file is double-clicked.

With the following setting,
veto files = /*.{*}/
any file having a CLSID in its extension will be inaccessible from any client. This technical article was created based on the discussion in the samba-jp:09448 and samba-jp:10900 threads.
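The two veto settings above can be sanity-checked offline; this sketch approximates Samba's glob matching with Python's fnmatch (Samba's actual matching has its own case and wildcard rules):

```python
from fnmatch import fnmatch

# Both veto lines from the article, joined into one setting.
VETO = "/*.eml/*.nws/riched20.dll/*.{*}/"

def is_vetoed(name, veto=VETO):
    """Check a filename against smb.conf 'veto files' patterns:
    '/'-separated, shell-style globs."""
    patterns = [p for p in veto.split("/") if p]
    return any(fnmatch(name, p) for p in patterns)

print(is_vetoed("readme.eml"))      # True - worm payload is hidden
print(is_vetoed("riched20.dll"))    # True
print(is_vetoed("readme.txt.{3050F4D8-98B5-11CF-BB82-00AA00BDCE0B}"))  # True
print(is_vetoed("report.doc"))      # False - ordinary files stay visible
```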

SUN samba changes
on /vol/ecc/cfg/samba do a co -l on smb.conf.esunXXX
ci -u smb.conf.esunXXX
note that samba reads this config every few minutes
to reload... or just kill and restart

To move a user to the efiler:
1. Contact user and arrange a time
2. Remove existing link /export/home/(username) on the users current server. Note that this link points to the current directory, however.
3. Most servers have the efiler mounted as /mnt_filer. The filer dirs are files1...filesn. Use the last one available. Use the path /mnt_filer/filesn/home/(user).
4. Make a new dir on the filer
5. Chmod it to agree with the users existing dir.
6. Move the users files:
cd (old homedir)
tar cf - . | (cd new_home_dir; tar xfBp -)
or use gtar to avoid the path size limit (GNUtar):
gtar cf - . | (cd new_home_dir; gtar xfBp -)
or 'rsh esunxxx "(cd /filesxx/vol/ana_bld; ufsdump cf - .)" | ufsrestorexvf -'
note that the ufsdump method seems to recreate a sub-dir of the same name; this can be mv'd to get the target in the right place. Also, this is done from the target dir (not the source, as with tar) to retain permissions.
Watch for errors.
7. perform a du -sk (old_home) (new_home) to get a size comparison; they should be similar
8. Create a link to the new dir
ln -s /filesn/home/username /mnt_filer/exports/home/username
9. Create new NT account:
Check to see that account does not already exist
Get on epc945 (technotes)
connect F: to \\efiler2162\username
don't forget to set dial-in access
10. Set efiler to export:
browser to consoles/console (UNIX passwd required)
choose network appliance
choose CIFS
choose share
choose new
1) acct name (NT username)
2) mntpoint /vol/vol0/filesn/home/username
3) descrip. is full users name
4) userlimit NONE
5) force NONE
11. set access levels:
select the share just created
select new access
delete everyone
access by user FULL CONTROL
access by root FULL CONTROL
12. edit auto.home on esun388 to point to new home

The System Corefile is helpful during problem analysis on a SUN Solaris Computer.
When is a System corefile produced ?

A System Corefile is produced when the panic() routine calls vfs_syncall() and dumpsys() to sync physical memory to the appropriate disks and the current kernel image to the dump device. When savecore is run during bootup, it scans the top end of the primary swap partition and creates unix.0 and corresponding vmcore.0 files. These files are automatically incremented as additional corefiles are captured; the .bounds file keeps track of the current increment. Panic() is called when a situation occurs which would compromise the data integrity of the running system. The philosophy is that continuing would be worse than stopping and rebooting.

What to do when a System hangs ?

These are the steps to take when a SUN Solaris system hangs:

The first goal is to get the system to the OK> prompt by pressing the 'Stop' and 'A' keys together, by sending a 'break' signal from a TTY, or by unplugging and replugging the console keyboard.



A system corefile can then be manually produced by typing:

OK> sync

Successfully capturing a corefile depends on the patch level and the type of device used for primary swap space. Sun Support has a utility that will report whether the system is at the proper patch level and is configured properly to capture a corefile. This is available upon request from Sun Support.

What is captured in a System Corefile ?

All kernel memory pages are saved, active pages in the kernel segment map are saved, and running user process stacks are saved. By default, the kernel memory pages of active processes are saved. Setting the appropriate switches with dumpadm -u -c all, forces all memory pages to be captured, however most of this data is not useful and capturing it creates extremely large corefiles. Our advice is to not to enable this feature unless directed by Sun Support. See the manpage on dumpadm for more details.

Why is the System Corefile valuable to analyze a Problem ?

A system corefile is a snapshot of kernel memory at the moment of the panic. This data shows what threads are running on each CPU, the process table, the current threads on the dispatch queue, and the kernel memory structures. Through corefile analysis, SUN is able to reconstruct the events which led to the panic.

Based upon this information SUN can usually determine whether the problem was caused by hardware or software, which part caused the panic, and what code the CPU was running when the panic condition occurred, and then search for an existing bug and patch fix. Just because a CPU reported the panic does not mean that CPU was the cause.

How to get the Panic Strings ?

It's important to produce the panic strings and provide this info when opening a case with Sun. If this is a known problem, it could save hours of effort in finding a solution.

# strings vmcore.* | head

How is Savecore enabled ?

In Solaris 2.5 through 2.6, savecore is normally not enabled; the system administrator must enable it by editing the /etc/init.d/sysetup file. If the system panics, the /var/adm/messages file will show 'dumping pages....', which indicates that the system has captured a corefile. If savecore has not been enabled, it may be run manually shortly after reboot: cd into a directory with sufficient space to hold the system corefile and type the command savecore -v . which tells the system to dump the savecore 'here' and provides verbose status messages as it processes the savecore.

In Solaris 7 and above, savecore is enabled by default and is controlled by the dumpadm command. You can run the dumpadm command without arguments to get the current configuration. Starting with Solaris 7, the system corefile is automatically compressed to conserve room in the primary swap partition.

# dumpadm

Dump content: kernel pages
Dump device: /dev/dsk/c0t0d0s3 (swap)
Savecore directory: /var/crash/diamond
Savecore enabled: yes

If you are using something other than a raw primary swap partition, there is a risk that a savecore may not be produced. For instance if the ' vxfs ' driver caused the panic, the savecore may not work if swap is under ' vxfs ' control. The fewer layers of drivers involved, the better chance of capturing a useful corefile.

It's critical that the directory where /etc/init.d/sysetup puts the corefile:

Has enough space available

Is mounted at the time savecore is going to run

Savecore normally runs as part of /etc/rc2.d/SXXsysetup. /var is normally mounted right away, before run level 2, so that should be OK if there is enough room on /var for the core file.

Configuration Files and Setup

dumpadm - check if savecore is enabled
/etc/dumpadm.conf - configuration file for dumpadm
/var/crash/`uname -n` - default location of the crash dump directory
savecore - save a crash dump

Core dump analysis with crash
The crash utility can be used to perform some elementary core dump analysis.

Invoke crash on a crash dump with the command crash -d vmcore.# -n unix.#.

Use stat to obtain general information, including the program counter and the stack pointer, along with some error indications.

Use u to obtain information on the current process, including the process slot number.

proc lists out the process table. Match up process slot numbers to find the culprit in the case of a system crash. proc -l reports user credentials information, and proc -e reports all processes.

For system hangs, the kmastat option may provide useful clues regarding kernel memory usage. If it has been enabled (see the Kernel Memory Page), kmausers can provide detailed information regarding memory allocation inside each of the buckets.
defthread and defproc provide the current thread and process addresses. ? provides online help.

More on panic swtch - kadb and savecore
1) kadb. Instead of booting normally, boot kadb (i.e., from the PROM: "b kadb"). This will load and run /vmunix, but when the kernel crashes, instead of rebooting the system it will drop into an 'adb-like' debugger and let you perform stack traces etc. on the recently crashed kernel.
2) savecore. When the machine panics and dumps, it writes the current in-core memory image to the swap partition. Early in the boot sequence, before swapping is enabled, you can use the savecore command to extract the dumped image from swap and save it in the filesystem, enabling you to perform post-mortem debugging on the kernel.
Both methods allow you to post-mortem debug the kernel. Helpful if you have deep(ish) knowledge of kernel structures/memory, or if someone is trying to help you via email and you can send them stack traces etc.

Change Solaris Hostname
/etc/defaultdomain    Set the default domain name, if it changed.
/etc/defaultrouter    Set the default router's IP address, if it changed.
/etc/hostname.le0    (or .hme0 or ?) Update this if the hostname changed.
/etc/nodename        Update this if the hostname changed.
/etc/nsswitch.conf    Update if your name resolution method/order changed.
/etc/resolv.conf        Update if your name servers/domain changed (DNS only).
/etc/inet/hosts        Make sure your IP address is updated or added here.
/etc/inet/ipnodes    IPv6 version of hosts file (Solaris 8+).
/etc/inet/netmasks    Set your network number & netmask, if it changed.
/etc/inet/networks    Set your network name, if it changed.
/etc/net/ticlts/hosts    For the streams-level loopback interface.
/etc/net/ticots/hosts    For the streams-level loopback interface.
/etc/net/ticotsord/hosts    For the streams-level loopback interface.
/etc/dumpadm.conf    Update hostname.
/var/crash/<system name>    Rename this directory to match the new hostname.


cron tab
export EDITOR=vi
crontab -l = to list
crontab -e = to edit
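For reference, a crontab entry has five time fields followed by the command (the script path below is a made-up example):

```shell
# fields: minute hour day-of-month month day-of-week command
# e.g. run a cleanup script (hypothetical path) every day at 02:30
30 2 * * * /usr/local/bin/cleanup.sh >/dev/null 2>&1
```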

Back To Top


horizontal rule

Maintenance Commands
vmstat - report virtual memory statistics

Removing files by date (unix ADM stuff)
Listing files matching a date and printing only the filenames (no attribute details), for use in a command:
<some_command> `ls -l|grep "<insert date pattern>"|awk '{print $9}' `

Ex removing files from Dec only:
rm -f `ls -l|grep "Dec"|awk '{print $9}' `

by specific day:
rm -f `ls -l|grep "Dec 1"|awk '{print $9}' `

By range, Dec 1 to 13:
rm -f `ls -l|grep "Dec"|awk '$7 >= 1 && $7 < 14 {print $9}'`
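Parsing ls output breaks on unusual filenames; where GNU find is available, the same date-range delete can be sketched without ls (the dates below are examples):

```shell
# Delete regular files in the current directory last modified in a
# date range, without parsing ls. Requires GNU find (-newermt).
find . -maxdepth 1 -type f -newermt "2023-12-01" ! -newermt "2023-12-14" -delete
```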

Copying/moving directories and retaining links:
cd to directory that will be copied
tar cpf - .|(cd <dest_dir>; tar xf -)

Memory performance on UNIX:
First of all, there is paging. Paging is where the system takes part of a running process and pages it to disk. This is not so bad... a general rule of thumb is that 80% of a proc's time is spent in 20% of its code. Also be aware of "demand paging", where procs are put to disk on startup and paged in as necessary. Lots of page-outs over an extended period are not so good... but not so bad. Swapping, however, can be really bad. Swapping is taking a WHOLE process to disk. It has a large performance penalty. Also, know "desperation swapping", where RUNNING procs are swapped out in an emergency. This is the worst case, and the system will pound sand trying to run anything. Use vmstat to see where the rubber hits the road: vmstat 5
procs memory page disk faults cpu
r  b w  swap     free   re   mf   pi po fr de sr s3 s7 s1 s1  in    sy    cs  us  sy  id
4 0 0  98048 10032   0  258  0   0  0  0  0   0  0   0  0    65  789 176 80 20  0
5 0 0  94364   7012   0  188  0   0  0  0  0   0  0   0  0    78  489 143 86 14  0
5 0 0  93184   5488   0  246  0   0  0  0  0   0  0   0  0  162  253 217 77 23  0
4 0 0  94060   6088   0  209  0   0  0  0  0   0  0   0  0  211  211 257 78 22  0
2 0 0  95940   7408   0  181  0   0  0  0  0   0  0   0  0  182  160 221 56 20 23
3 0 0  96188   7664   0  195  0   0  0  0  0   3  0   0  0  221  185 248 76 24  0
3 0 0  95444   7468   0  212  0   0  0  0  0   0  0   0  0  222  186 265 74 26  0
3 0 0  96844   8036   0  175  0   0  0  0  0   0  0   0  0  209  183 246 78 21  1

Here, under procs, r is running, b is blocked (IO bottleneck if high), w is runnable procs SWAPPED out: BAD BAD BAD. Also important are the pi and po columns. These are page-ins and page-outs. Lots of page-outs are a sign of trouble.
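The warning signs above can be watched for with a small awk filter on the vmstat output (a sketch; the column positions and the po threshold are assumptions based on the layout shown):

```shell
# Flag vmstat samples where runnable procs are swapped out (w > 0)
# or page-outs are high (po > 50, an arbitrary threshold).
# Column positions ($3 = w, $9 = po) assume the Solaris layout above.
vmstat 5 | awk 'NR > 2 && ($3 > 0 || $9 > 50) { print "warning: w=" $3 " po=" $9 }'
```

NR > 2 skips the two header lines vmstat prints before the data rows.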

This data, by the way, was gleaned from O'Reilly's System Performance Tuning.

Linux Squid setup...
1) Check your disk space! Requirements and source are found here:
2) unpack the source,
run './configure'
run 'make'
if all is OK, then run 'make install'
3) squid is now installed.
Your squid.conf file is in /etc/squid. Go to ukproxy to see how we set that up.
4) smb_auth is what we use for smb authentication. find it here:
same thing: './configure', 'make', 'make install'
FYI, smb_auth requires samba (there are multiple packages to samba; when I did ukproxy one was missing... use rpm to verify). They are on the Linux CD if you need them.
5) get sqmgrlog for web based reports
do './configure', 'make', 'make install'
configure your system's httpd; be careful with port numbers: your proxy is using port 80, so you need to tell httpd to use something else... ukproxy httpd is set to port 8080.

make sure name resolution is working, use smb_auth debug to test
Piece o' cake...

To mount a SMB mount point on Linux:
"smbmount //AMSTECH/C /sound/test2 -N -I"

where arg1 is the netbios share name
arg2 is the local mount point
arg3 (-N) specifies no password for read-by-all
arg4 (-I) sppecifies the destination IP ... sometimes this is required


The following table shows which TCP/UDP ports BIND versions before 8.x use to send and receive queries:

Prot  Src    Dst    Use
udp   53     53     Queries between servers (eg, recursive queries), and replies to them
tcp   53     53     Queries with long replies between servers, zone transfers, and replies to them
udp   >1023  53     Client queries (sendmail, nslookup, etc ...)
udp   53     >1023  Replies to above
tcp   >1023  53     Client queries with long replies
tcp   53     >1023  Replies to above
Note: >1023 is for non-priv ports on Un*x clients. On other client
types, the limit may be more or less.

BIND 8.x no longer uses port 53 as the source port for recursive queries, nor uses it as the destination port for corresponding replies. By default it uses a random port >1023, although you can configure a specific port (and it can be port 53 if you want).

Another point to keep in mind when designing filters for DNS is that a DNS server uses port 53 both as the source and destination for its queries. So, a client queries an initial server from an unreserved port number to UDP port 53. If the server needs to query another server to get the required info, it sends a UDP query to that server with both source and destination ports set to 53. The response is then sent with the same src=53 dest=53 to the first server which then responds to the original client from port 53 to the original source port number.

The point of all this is that putting in filters to only allow UDP between a high port and port 53 will not work correctly, you must also allow the port 53 to port 53 UDP to get through.
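The filter design above can be sketched with iptables-style rules (iptables is used here only as illustrative syntax; interfaces and addresses are omitted, and the 53-to-53 rule is the one that's easy to forget):

```shell
# Illustrative packet-filter rules for a DNS server.
# Client queries arriving from unprivileged ports:
iptables -A INPUT -p udp --sport 1024:65535 --dport 53 -j ACCEPT
# Server-to-server queries and replies, both source and dest port 53:
iptables -A INPUT -p udp --sport 53 --dport 53 -j ACCEPT
# TCP port 53 for long replies and zone transfers:
iptables -A INPUT -p tcp --dport 53 -j ACCEPT
```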

Also, ALL versions of BIND use TCP for queries in some cases. The original query is tried using UDP. If the response is longer than the allocated buffer, the resolver will retry the query using a TCP connection. If you block access to TCP port 53 as suggested above, you may find that some things don't work.

Newer versions of BIND allow you to configure a list of IP addresses from which to allow zone transfers. This mechanism can be used to prevent people from outside downloading your entire namespace.

Show SSL server certs
openssl s_client -showcerts -connect <system name>:4443

Solaris Internals tools

  Back To Top


horizontal rule

HTTP return codes


100            CONTINUE
102            PROCESSING
200            OK
201            CREATED
202            ACCEPTED
203            NON_AUTHORITATIVE
204            NO_CONTENT
205            RESET_CONTENT
206            PARTIAL_CONTENT
207            MULTI_STATUS
300            MULTIPLE_CHOICES
301            MOVED_PERMANENTLY
302            MOVED_TEMPORARILY
303            SEE_OTHER
304            NOT_MODIFIED
305            USE_PROXY
400            BAD_REQUEST
401            UNAUTHORIZED
402            PAYMENT_REQUIRED
403            FORBIDDEN
404            NOT_FOUND
405            METHOD_NOT_ALLOWED
406            NOT_ACCEPTABLE
408            REQUEST_TIME_OUT
409            CONFLICT
410            GONE
411            LENGTH_REQUIRED
414            REQUEST_URI_TOO_LARGE
423            LOCKED
424            FAILED_DEPENDENCY
501            NOT_IMPLEMENTED
502            BAD_GATEWAY
504            GATEWAY_TIME_OUT
506            VARIANT_ALSO_VARIES
510            NOT_EXTENDED

usage: ./apachectl

start - start httpd
startssl - start httpd with SSL enabled
stop - stop httpd
restart - restart httpd if running by sending a SIGHUP or start if not running
fullstatus - dump a full status screen; requires lynx and mod_status enabled
status - dump a short status screen; requires lynx and mod_status enabled
graceful - do a graceful restart by sending a SIGUSR1 or start if not running
configtest - do a configuration syntax test
help - this screen

HTML Redirect code:
<meta http-equiv="refresh" content="0;URL=/horde/imp/">
<body text="#000000" bgcolor="#FFFFFF" link="#0000EE" vlink="#551A8B">
Redirecting to <a href="/horde/imp/">Easy2mail</a>

<Location> directive
Syntax: <Location URL-path|URL> ... </Location>
Context: server config, virtual host
Status: core
Compatibility: Location is only available in Apache 1.1 and later.

The <Location> directive provides for access control by URL. It is similar to the <Directory> directive, and starts a subsection which is terminated with a </Location> directive. <Location> sections are processed in the order they appear in the configuration file, after the <Directory> sections and .htaccess files are read, and after the <Files> sections.

Note that URLs do not have to line up with the filesystem at all; it should be emphasized that <Location> operates completely outside the filesystem. For all origin (non-proxy) requests, the URL to be matched is of the form /path/, and you should not include any http://servername prefix. For proxy requests, the URL to be matched is of the form scheme://servername/path, and you must include the prefix.

The URL may use wildcards. In a wild-card string, `?' matches any single character, and `*' matches any sequence of characters. Apache 1.2 and above: extended regular expressions can also be used, with the addition of the ~ character. For example, <Location ~ "/(extra|special)/data"> would match URLs that contained the substring "/extra/data" or "/special/data". In Apache 1.3 and above, a new directive <LocationMatch> exists which behaves identically to the regex version of <Location>.

The Location functionality is especially useful when combined with the SetHandler directive. For example, to enable status requests, but allow them only from browsers at a particular domain, you might use:

<Location /status>
SetHandler server-status
Order Deny,Allow
Deny from all
Allow from
</Location>

Apache 1.3 and above note about / (slash): The slash character has special meaning depending on where in a URL it appears. People may be used to its behavior in the filesystem where multiple adjacent slashes are frequently collapsed to a single slash (i.e., /home///foo is the same as /home/foo). In URL-space this is not necessarily true. The <LocationMatch> directive and the regex version of <Location> require you to explicitly specify multiple slashes if that is your intention. For example, <LocationMatch ^/abc> would match the request URL /abc but not the request URL //abc. The (non-regex) <Location> directive behaves similarly when used for proxy requests. But when (non-regex) <Location> is used for non-proxy requests it will implicitly match multiple slashes with a single slash. For example, if you specify <Location /abc/def> and the request is to /abc//def then it will match. See also: How Directory, Location and Files sections work for an explanation of how these different sections are combined when a request is received


Syntax: <LocationMatch regex> ... </LocationMatch>
Context: server config, virtual host
Status: core
Compatibility: LocationMatch is only available in Apache 1.3 and later.

The <LocationMatch> directive provides for access control by URL, in an identical manner to <Location>. However, it takes a regular expression as an argument instead of a simple string. For example:

<LocationMatch "/(extra|special)/data">

would match URLs that contained the substring "/extra/data" or "/special/data".

See also: How Directory, Location and Files sections work for an explanation of how these different sections are combined when a request is received

Reverse proxy settings

Listen 80
Listen 4080

<VirtualHost _default_:4080>
#DocumentRoot "/usr/apache/htdocs"
ErrorLog /usr/apache/logs/4080error_log
TransferLog /usr/apache/logs/4080access_log
ProxyRequests Off
ProxyPass / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/
NoCache *
</VirtualHost>

/etc/init.d/apache (start stop restart)

Script that will make the Makefile needed to compile and install apache with proxy.
## config.status -- apache auto-generated configuration restore script
## Use this shell script to re-run the APACI configure script for
## restoring your configuration. Additional parameters can be supplied.
make distclean
SSL_BASE="../openssl-0.9.5a" \
RSA_BASE="../rsaref-2.0/local" \
./configure \
"--with-layout=Apache" \
"--prefix=/usr/local/extricity/apache" \
"--enable-module=so" \
"--enable-module=ssl" \
"--enable-module=proxy" \
"--enable-shared=proxy" \

How to install the FrontPage 2000 Server Extensions to an Apache Web server


  Back To Top


horizontal rule

Win DNS:
Windows 2000 & XP have a bad habit of caching DNS names in the resolver cache - I used to do a reboot to clear it - instead ipconfig has a handy option:

ipconfig /flushdns - Purges the DNS Resolver cache.

also to show the resolver cache use the /displaydns option.

To kill a processes in NT
Type TLIST in a command shell to get a list of processes and KILL -F #id to force a kill on process #id.

NT Registry for Browse Master
The parameters that control network bindings for the Browser service
are described in "NetRules Subkey Entries" in the article "Network
Adapter Cards Entries, Part 1."

Under the following Registry path, two parameters are found:


CacheHitLimit REG_DWORD 0 to 256
Describes the number of NetServerEnum requests required to qualify
that the response to a NetServerEnum request be cached. If the
browser receives more than CacheHitLimit NetServerEnum requests
with a particular set of parameters, it caches the response and
returns that value to the client. Default: 1

CacheResponseSize REG_DWORD 0 to 0xffffffff
Specifies the maximum number of responses kept for each transport.
To disable this feature, set this value to 0. Default: 10

IsDomainMasterBrowser REG_SZ Boolean
For TCP/IP, specifies a workstation within a workgroup which can be
included in global LMHOSTS file. When this parameter is set to Yes,
it forces the elevation of a workstation's priority for the
browser. This helps with WAN browsing.

This value should be set on a few systems for the workgroup,
placing mappings for each in the global LMHOSTS file. For example,
in a workgroup with 20 members, set this value on three of the
computers to earn a better chance to act as master browsers. This
facilitates remote browsing ability for workstations in remote
domains whose domain master browser has successful mappings for
these special workgroup members.

MaintainServerList REG_SZ Boolean or Auto
If this value is No, this server is not a browse server. If this
value is Yes, this server becomes a browse server. It attempts to
contact the Master Browse Server to get a current browse list. If
it cannot find the Master Browse Server, it forces an election and
is, of course, a candidate to become the master.

If MaintainServerList is Auto, this server may or may not become a
browse server, depending on the results of the Registry exchange
with the Master Browse Server.

If MaintainServerList is set to Yes, the computer is configured to
always be a backup browser.

Default: Auto, if none is present. (This server contacts the Master
Browse Server, and the Master Browse Server tells this server
whether it should become a browse server.)

QueryDriverFrequency REG_DWORD 0 to 900
Indicates the time after which a browser master will invalidate its
NetServerEnum response cache and the frequency that a master
browser will query the browser driver to retrieve the list of
servers. Increasing this time makes browsing somewhat faster, but
browse information will not necessarily be 100 percent accurate to
the minute. Lowering this time makes browse response more accurate,
but will increase the CPU load on the browse master. Default: 30

The following Browser driver parameters are found under this Registry
path for the Datagram Receiver:


BrowserServerDeletionThreshold REG_DWORD
BrowserDomainDeletionThreshold REG_DWORD 0 to 0xffffffff
If more than BrowserServerDeletionThreshold servers (or
BrowserDomainDeletionThreshold) servers (or domains) are flushed in
a 30-second interval, this will cause an event to be generated.
Default: 0xffffffff

FindMasterTimeout REG_DWORD 0 to 0xffffffff
Specifies the maximum number of seconds that FindMaster requests
should be allowed to take. If you have a slow LAN, you may want to
increase this value (but only if directed by Microsoft Product
Support services). Default: 0xffffffff

GetBrowserListThreshold REG_DWORD Number
Represents the threshold that the Browser uses before logging an
error indicating that too many of these requests have been
"missed." If more requests than the value of GetBrowserServerList
are missed in an hour, the Browser logs an event indicating that
this has happened. Default: 0xffffffff (That is, never log events.)

MailslotDatagramThreshold REG_DWORD Number
Represents the threshold that the Browser uses before logging an
error indicating that too many of these requests have been
"missed." If more mailslots than the value of
MailslotDatagramThreshold are missed in an hour, the Browser logs
an event indicating that this has happened. Default: 0xffffffff
(That is, never log events.)

Unencrypted Password SP3 Fails to Connect to SMB Server

Applies to: Microsoft Windows NT Workstation version 4.0; Microsoft Windows NT Server version 4.0


After upgrading your Windows NT 4.0 computer to Service Pack 3 (SP3), you are unable to 
connect to SMB servers (such as Samba or Hewlett-Packard (HP) LM/X or LAN Manager 
for UNIX) with an unencrypted (plain text) password. When attempting to connect after you 
upgrade to Windows NT 4.0 Service Pack 3, you receive the following error message: 

System error 1240 has occurred.

The account is not authorized to login from this station.


This is because the SMB redirector in Service Pack 3 handles unencrypted passwords differently 
than previous versions of Windows NT. Beginning with Service Pack 3, the SMB redirector does 
not send an unencrypted password unless you add a registry entry to enable unencrypted passwords. 


To enable unencrypted (plain text) passwords, modify the registry in the following way: 

WARNING: Using the registry editor incorrectly can cause serious, system-wide problems. Use this tool at your own risk.

1.Run Registry Editor (Regedt32.exe). 

2.From the HKEY_LOCAL_MACHINE subtree, go to the following key: 


3.Click Add Value on the Edit menu. 

4.Add the following:

Value Name: EnablePlainTextPassword
Data Type: REG_DWORD
Data: 1

5.Click OK and then quit Registry Editor. 

6.Shut down and restart Windows NT. 

To enable unencrypted (plain text) passwords in an automated setup, modify the registry in the following way: 

WARNING: Using the registry editor incorrectly can cause serious, system-wide problems. Use this tool at your own risk.

Add the following line to the Product.Add.Reg section of the Update.inf file: 

"EnablePlainTextPassword", 0x10001, 1

Environment stuff:
When you run an MS-DOS - based application that requires a large amount of
environment space, such as a compiler, you may encounter Runtime Error

This error occurs when there are not enough bytes allocated for the

The default environment size for MS-DOS - based applications running under
Windows NT is 256 bytes. Windows NT sets up many more variables than an
average MS-DOS operating system usually does and can quickly exceed the
default size. The following list is an example of default variables after
you install Windows NT over MS-DOS:



The environment size can be adjusted from the command line or in a .BAT or
.CMD file in the SYSTEM32 directory by adding the following line:


Additionally, you can use the /P parameter to make the new command
interpreter permanent, and you can use the /C parameter to run a specific
program after initiating Command.

The environment can also be changed by adding the following line to the
CONFIG.NT file in the SYSTEM32 subdirectory


where "SIZE" is the maximum length in bytes you want COMMAND.COM to
allocate for each program.

The maximum size for the environment is 32768 bytes.



 Back To Top

Veritas and Extricity info

horizontal rule

Veritas Path:

To switch from 502 to 501:
/opt/VRTSvcs/bin/hagrp -switch appgrp -to knexasp0501

Status: / Status -summary:
/opt/VRTSvcs/bin/hastatus -summary

Clear Faults:
hares -clear xxxx -sys xxxx
hagrp -clear appgrp -sys knexasp0701

hares -online xxxx
hagrp -online [-nopre] <group> -sys <system>
hagrp -online appgrp -sys knexasp0702

hagrp -offline <group> -sys <system>:

Shuts down cluster, leaving resources up:
/opt/VRTSvcs/bin/hastop -all -force
Start; on both nodes

Start and stop resources:

If VERITAS Cluster Server is started on a system with a stale configuration file, force it with:
/opt/VRTSvcs/bin/hasys -force system_name

Clear sys faults (vxcluster manager fails to start for a member of cluster)
cd /etc/init.d
ps -ef | grep llt (if not running then do ./llt.rc start)
ps -ef | grep gab (if not running then do ./gab start)
Cluster member should start up.

To shut down Extricity:
As root (note: do a uname first to make sure which system host is online,
knexasp0701 or knexasp0702):
step 1:
/opt/VRTSvcs/bin/hares -offline app-apache -sys knexasp0701
step 2:
/opt/VRTSvcs/bin/hares -offline Extricity -sys knexasp0701

To restart Extricity:
Step 1:
/opt/VRTSvcs/bin/hares -online Extricity -sys knexasp0701
Step 2:
/opt/VRTSvcs/bin/hares -online app-apache -sys knexasp0701

Here are the commands you need to run in order to add a disk to a disk group (from VERITAS Support):
# Connect the SAN disk on the system
# Run the "format" command and label it.
# Run the  following VRTS commands to detect the new disks:
# vxprint -ht volname
        take note of the size of the volume/plex, this is in sector count
# vxdctl enable
# vxdisk list
        can you see the disk? If yes then proceed below
# vxdisksetup -i cxtxdx
# vxdg -g dgname adddisk diskname=cxtxdx
# vxdisk list
    can you see the disk under VM control? If yes then resize the volume
# vxresize -g dgname volname +length diskname
    ex:  vxresize -g datadg vol01 +34g disk01 disk02 disk03 etc.
        this will resize the volume and the filesystem as well.
# vxprint -ht volname
    take note of the size of the volume/plex, this is in sector count

Some really useful Veritas commands (from D.G.)
Mirroring and encapsulating rootdg:
1. Run vxdisk list to see what disks are available
2. Run vxdiskadm:  select option 2, do a list then select the free disk to encapsulate and take the rootdg dg (default). Answer the rest of the questions. When you get to the question of customizing the disk name of the encapsulated disk then ANSWER YES. Place in the appropriate name.
3. exit the disk adm tool.
4. eeprom use-nvramrc?=true
5. shutdown -g0 -y -i6 (From the option#2 Encapsulating disks output)
6. Done

To remove a mirror or clean up disabled plexes use:
1. Do a vxprint -ht to get the name of disk/plexes to remove
2. Disable & remove the plex : vxplex -g <dg_name> -o rm dis <plex_name>
(Note: if rootdg, you will need to remove both rootvol & swapvol. NEVER rm c0t0d0; typically it would be c0t1d0.)
3. vxprint -ht to check if the plex is removed.
4. Remove the disk from VXVM vol conf: vxdg -g <dg_name> rmdisk <disk_name>
5. vxprint -ht to check if disk is removed from dg in VXVM
6 vxdisk list to see if disk has been removed from dg
7. Done

To Grow an existing Volume (add more disks)
1. initialize the additional disk first
vxdisksetup -i cxtxdx
2. Add the additional disk to the disk group
vxdg -g <dgname_to_grow> adddisk <disk_name>=cxtxdx
3. Grow The DG:
vxresize -g <dgname_to_grow> <Volume_name_the_dg_belongs_to> +<total_capacity_of_new_disk>g <disk_name_from_previous_step>
4. Done. Note that this will do the file system automatically only when it is mounted and the data is preserved. This can run without taking the system down.

Example: to grow the volume exvol containing disk group (dg) exdg (with one 34g disk in exdg01) an additional 34GB:
1. vxdisksetup -i c2t5d20
2. vxdg -g exdg adddisk exdg02=c2t5d20
3. vxresize -g exdg exvol +34g exdg02
4. Done

How do I make vxfs support large files?
/usr/lib/fs/vxfs/fsadm -o largefiles /mountpoint

How to enable the largefiles option for a VERITAS File System

Following are ways to enable the largefiles option (permitting files larger than 2 gigabytes to be created) on a VxFS file system:

With the mkfs command when initially creating a VxFS file system:

         # mkfs -F vxfs -o largefiles /dev/vx/rdsk/<disk_group>/<volume_name>    

With the fsadm command, which permits online administration of a VxFS file system (file system must be mounted):

          #  /usr/lib/fs/vxfs/fsadm -o largefiles  /<mount_point_of_file_system>

                   AND to disable the largefiles option:

          # /usr/lib/fs/vxfs/fsadm -o nolargefiles  /<mount_point_of_file_system>

Note that this option to disable largefiles only works if there are no files greater than 2 GB resident in the file system

Verify that the largefiles flag is enabled:

    #  /usr/lib/fs/vxfs/fsadm  /<mount_point_of_file_system>


This is my cheat sheet, A little cryptic, but it can jog the memory. (from J.P.)

USCO Veritas/RaidMgr

VLN PCI RAID card info:
raidutil -d0 -L all

Root mirror:
eeprom use-nvramrc?=true
(can run multiple times)


vxlicense -p
vxlicense -c



vxdg list
vxinfo <volname>

vxtask list
vxtask monitor taskid 164

vxdisksetup -i <device>
vxdisk list


Single User Mode Backup
boot -s
vxdg import appdg
vxdg list
vxvol -g appdg startall
vxprint -th (look for ENABLE)
vxdump 0uf /dev/rmt/0cn /dev/vx/rdsk/appdg/u01

umount /u01
vxdg deport appdg


export PATH=$PATH:/sbin:/opt/VRTSvcs/bin



ha daemon:
hastop [-all -force]
hastop -local -evacuate
hastart (on all systems)
hastatus summ

hagrp -switch -to ssasp201
hagrp -flush marc -sys ssasp401
hagrp -freeze|-unfreeze <service group>
hagrp -clear <service group>

hares -display
hares -online|offline|offprop|clear Extricity -sys ssasp201

For Maintenance:
hastop -all -force (leaves resources up)
hasys -freeze|-unfreeze <system>

hastart (on both systems)

[edit config]
haconf -makerw
<edit, etc>
hacf -verify
haconf -dump -makero

# haconf -makerw

# hauser -add ????

Enter Password:

Enter Again:
# haconf -dump -makero
(hares -? and hagrp -? show command usage)


Heartbeat ports:
lltstat -nvv
lltconfig -a list

If hosts dropped simul, chk prvt interconnect:
gabconfig -a (status)
gabconfig -xc
-x (override)
-u (unregister)
-c (config)

"Stale Admin Wait"

hasys -force ssasp201
force read of

A <system name>          ADMIN_WAIT          0
A <system name>          ADMIN_WAIT          0
bash-2.03# hacf -verify .

pkginfo -l | grep SUNWosar

Back To Top