Monday, December 05, 2005

RHCE: Success!

I have received the results of the RHCE exam and I passed with a good score! Hopefully I can begin updating my blog more frequently.

Saturday, December 03, 2005

RHCE: I took the exam on 12/02/2005

After taking the exam yesterday, I should find out the results next week. I'll let you know how I did.

Note: As is standard practice, participants may not discuss the exam or its contents after taking it. Please refer to Red Hat's website for any questions you may have; there is a detailed exam prep page that outlines all of the requirements.

Sunday, November 27, 2005

RHCE: Run-time RAID Configuration

Here is a link to an excellent article on creating RAID arrays using the mdadm tool.

Here is an excerpt:

Creating an Array

Create (mdadm --create) mode is used to create a new array. In this example I use mdadm to create a RAID-0 at /dev/md0 made up of /dev/sdb1 and /dev/sdc1:

# mdadm --create --verbose /dev/md0 --level=0 \
    --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: chunk size defaults to 64K
mdadm: array /dev/md0 started.

The --level option specifies which type of RAID to create in the same way that raidtools uses the raid-level configuration line. Valid choices are 0, 1, 4, and 5 for RAID-0, RAID-1, RAID-4, and RAID-5 respectively. Linear (--level=linear) is also a valid choice for linear mode. The --raid-devices option works the same as the nr-raid-disks option when using /etc/raidtab and raidtools.

In general, mdadm commands take the format:

mdadm [mode] <raiddevice> [options] <component-devices>

Each of mdadm's options also has a short form that is less descriptive but shorter to type. For example, the following command uses the short form of each option and is nearly identical to the example shown above (the -c128 additionally sets a 128K chunk size instead of the 64K default).

# mdadm -Cv /dev/md0 -l0 -n2 -c128 /dev/sdb1 /dev/sdc1
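The excerpt stops at creation, but checking on an array afterward is just as useful. A quick sketch (this assumes the /dev/md0 array from the example above exists, and requires root):

```shell
# The kernel's view of all running arrays:
cat /proc/mdstat

# Detailed state of one array (level, size, member disks):
mdadm --detail /dev/md0

# Record the array in mdadm.conf so it can be re-assembled by name later:
mdadm --detail --scan >> /etc/mdadm.conf
```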

Saturday, November 12, 2005

RHCE: Installing RPMs from an NFS share

Using an NFS share to install packages is a very convenient means of ensuring that all of your systems are running similar packages and always having those packages available. I will not cover configuring an NFS share and will assume that the reader is already familiar with that function or has the capacity to figure it out.

1. Verify that the NFS share is available and mounted on the local filesystem, if not already mounted.

mount -t nfs server:/path/to/share /local/mount/point

2. If using the command line, simply use the rpm command to install the application:

rpm -Uvh /path/to/share/application.rpm

3. If using the package manager, use the following command from the command line:

system-config-packages --tree=/path/to/nfs/share &

You will now be able to select the applications that you would like to add or remove.
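Putting steps 1 and 2 together as a sketch; the server name and paths here are made up, so substitute your own:

```shell
# Mount the share (server and export path are hypothetical):
mkdir -p /mnt/rpms
mount -t nfs nfsserver:/exports/rhel-rpms /mnt/rpms

# Peek at a package's metadata before committing to it:
rpm -qpi /mnt/rpms/application.rpm

# Install or upgrade it:
rpm -Uvh /mnt/rpms/application.rpm
```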

RHCE: Logical Volume Manager

Red Hat Enterprise Linux 4.0 uses a logical volume manager to facilitate efficient management of disks and partitions. With the Logical Volume Manager, logical partitions can be created that span multiple physical volumes, where a physical volume is typically a hard disk or a partition on one. With this ability, an administrator can easily expand a partition or create an efficient partitioning scheme. RHEL 4.0 uses LVM2 by default.

When using LVM, it may be difficult to remember all of the commands that are possible and necessary to create a Logical Volume. An easy way to get a list of all of the related commands is to enter the lvm console by typing 'lvm' on the command line. Once in the LVM console, type 'help' and all of the available commands will be listed with a short description.

CAUTION: using the logical volume manager can and probably will destroy data. Verify that you have created backups of all of your data before trying the samples below.

Using the Logical Volume Manager

1. Partition the physical hard disks that will be used as part of the Logical Volume(s)

Use 'fdisk' as appropriate. Remember to set the system type as 'Linux LVM', which is type '8e'.

2. Create physical volume(s)

From within the lvm console, use pvcreate on each partition that will participate in the logical volume group.

pvcreate 'physical partition'


lvm> pvcreate /dev/hdd1
Incorrect metadata area header checksum
Physical volume "/dev/hdd1" successfully created

3. Create a volume group

Using the vgcreate command, create a volume group which consists of the physical volumes created previously.

vgcreate 'volume group name' 'physical volume' ['physical volume'] ...


lvm> vgcreate test1 /dev/hdd1 /dev/hdd2
Incorrect metadata area header checksum
Volume group "test1" successfully created

Verification of the volume group creation can be done with the vgdisplay command:

lvm> vgdisplay
--- Volume group ---
VG Name test1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 93.15 GB
PE Size 4.00 MB
Total PE 23846
Alloc PE / Size 0 / 0
Free PE / Size 23846 / 93.15 GB
VG UUID znEYOy-n4oJ-zmXq-QARI-cRvD-YmY5-Gq6qpd

4. Create a logical volume in an existing Logical Volume Group

Using the lvcreate command, create a logical volume:

lvcreate [-L 'size'] [-n 'logical volume name'] 'logical volume group'


lvcreate -L 200M -n vol1 test1
Incorrect metadata area header checksum
Logical volume "vol1" created

Verify that the logical volume is as desired with the lvdisplay command:

lvm> lvdisplay
Incorrect metadata area header checksum
--- Logical volume ---
LV Name /dev/test1/vol1
VG Name test1
LV UUID HPqCiY-58NT-X1ae-5vk3-1hLw-f2no-AYe52O
LV Write Access read/write
LV Status available
# open 0
LV Size 200.00 MB
Current LE 50
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:0

5. Format the logical volume with the filesystem desired (ext3 shown)

[root@primary ~]# mke2fs -j /dev/test1/vol1
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
51200 inodes, 204800 blocks
10240 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
25 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
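The mke2fs output above notes that tune2fs -c or -i can override the periodic checks. For example (same hypothetical volume as above, run as root):

```shell
# Disable the every-37-mounts and every-180-days forced checks:
tune2fs -c 0 -i 0 /dev/test1/vol1
```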

6. In good Red Hat form, create a label for the filesystem with the e2label command

[root@primary ~]# e2label /dev/test1/vol1 test1

7. Create an entry in the /etc/fstab file for this logical volume so that it will be mounted on subsequent boots. Use a meaningful mount point:

LABEL=test1 /test1 ext3 defaults 0 0
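For review, the whole procedure condenses into a short sequence. This is a sketch using the same hypothetical devices and names as above; it must be run as root and will destroy any data on the partitions involved:

```shell
pvcreate /dev/hdd1 /dev/hdd2         # 2. initialize physical volumes
vgcreate test1 /dev/hdd1 /dev/hdd2   # 3. create the volume group
lvcreate -L 200M -n vol1 test1       # 4. carve out a logical volume
mke2fs -j /dev/test1/vol1            # 5. format it with ext3
e2label /dev/test1/vol1 test1        # 6. label the filesystem
mkdir -p /test1                      # 7. mount point and fstab entry
echo 'LABEL=test1 /test1 ext3 defaults 0 0' >> /etc/fstab
mount /test1
```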

These steps have covered how to create a logical volume and use it. You can also expand an existing logical volume and perform other maintenance tasks. The only other task covered here will be expanding an existing logical volume.

Expand a logical volume

1. Add desired disk space to your volume group, if necessary

a. use fdisk to create a new partition of type '8e'
b. use pvcreate to initialize the new partition as a physical volume

[root@primary ~]# pvcreate /dev/hdd3
Physical volume "/dev/hdd3" successfully created

c. use vgextend to add the physical volume to the volume group

[root@primary ~]# vgextend test1 /dev/hdd3
Incorrect metadata area header checksum
Incorrect metadata area header checksum
Volume group "test1" successfully extended

CAUTION: Re-formatting the logical volume in step (e) below will remove all data on it. Verify that you have backups.

d. use lvextend to expand the logical volume to the desired size

[root@primary ~]# lvextend -L 300M /dev/test1/vol1
Incorrect metadata area header checksum
Extending logical volume vol1 to 300.00 MB
Logical volume vol1 successfully resized

e. re-format your logical volume, relabel it, and remount it

umount 'logical volume'
mke2fs -j 'logical volume'
e2label 'path to volume' 'label'
mount 'path to volume in /etc/fstab'
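Re-formatting throws away everything on the volume. An alternative I have not covered here is growing the ext3 filesystem in place with resize2fs after extending the volume. A sketch, using the same hypothetical volume (run as root, and keep backups; resizing is still risky):

```shell
umount /test1
lvextend -L 300M /dev/test1/vol1   # grow the logical volume first
e2fsck -f /dev/test1/vol1          # resize2fs insists on a clean filesystem
resize2fs /dev/test1/vol1          # grow ext3 to fill the new space
mount /test1
```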

Troubleshooting

1. 'physical partition' not identified as an existing physical volume

lvm> vgcreate 'volume group' 'physical partition' 'physical partition'
Incorrect metadata area header checksum
Incorrect metadata area header checksum
No physical volume label read from 'physical partition'
not identified as an existing physical volume
Unable to add physical volume 'physical partition' to volume group 'volume group'.

To correct this problem, use lvm to create a physical volume on each partition with the pvcreate command:

lvm> pvcreate 'physical partition'
Incorrect metadata area header checksum
Physical volume "physical partition" successfully created

Create the volume group with the vgcreate command.

Sunday, October 23, 2005

RHCE: Networking and Network Configuration

One of the greatest things about Linux is the ability to easily network systems. Linux excels when using wired or wireless networking. There are several basic requirements that must be met for your Linux machine to communicate with other machines or devices on a network.

The first requirement is that your machine must have a means of communicating with other machines through hardware. This requirement is typically met through a network interface card (NIC) at the host level and a switch or router at the local area network (LAN) level. These devices are connected through ethernet cables or another suitable medium (which may also include wireless devices). This requirement is also referred to as the Physical Layer in the OSI Reference Model.

The next requirement is that each network interface must be configured properly. Each interface must be configured with an IP address, netmask, and gateway. These simple parameters allow the interface to communicate with other interfaces on the network. Each interface may be configured through the GUI tools which are provided by Red Hat, by editing configuration files with a text editor, or they can be manually configured from the command line. I will review the command line and text file configurations only. This requirement covers layers 2 and 3 of the OSI Reference Model.

Another requirement, if communication outside of the immediate network is desired, is proper configuration of the /etc/resolv.conf file with the IP address of a valid DNS server.

The /etc/resolv.conf file, when configured properly, will allow the machine to obtain IP addresses for hosts which are known only by host name. It is critical that you do not allow any spaces before the nameserver directive in this file or it will not function. This file is configured automatically when using DHCP and manually when using a static IP.
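A minimal /etc/resolv.conf looks like the following; the addresses are placeholders, and note that the nameserver directives start in column one:

```shell
# /etc/resolv.conf (all values are examples only)
search example.lan
nameserver 192.168.1.1
nameserver 192.168.1.2
```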

There is a configuration file for each interface in the /etc/sysconfig/network-scripts/ directory (e.g. ifcfg-eth0). These files serve as parameter files for the network startup script, /etc/init.d/network. When the network service is started, either manually or on system boot, the startup script reads each configuration file to determine whether the interface should be started and how to configure it. If the interface is configured to obtain an IP address via DHCP, then it will broadcast for a DHCP server and set the IP address, netmask, default gateway, /etc/resolv.conf file, and possibly the hostname for the machine, depending on how the system is configured. If the interface is configured to use a static IP address, then most of the above settings are configured through the interface's file in /etc/sysconfig/network-scripts/; the hostname and /etc/resolv.conf file must be configured manually. An example of this file properly configured with a static IP address is as follows:

[root@primary network-scripts]# cat ifcfg-eth0
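A typical static ifcfg-eth0 looks roughly like this; every value below is an illustrative placeholder, not taken from a real system:

```shell
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes
```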
Most of the file is self-explanatory and fulfills the requirements listed above with regard to the interface configuration. If the interface is configured to use DHCP, the file will resemble the following:

[root@primary ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
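A typical DHCP ifcfg-eth0 is even shorter (again, an illustrative sketch):

```shell
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
```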
This simple file will cause the interface to obtain an IP address, netmask, and gateway on startup and configure the /etc/resolv.conf file with the nameserver used by the DHCP server. This is a very common configuration and makes network management very robust.

The last part of this article illustrates how to configure an interface manually on the command line. These are the steps required to configure the same interface as in the configuration file above, with the same parameters:

ifconfig eth0 'ip address' netmask 'netmask'
route add default gw 'gateway address'
ifconfig eth0 up
You can now verify your settings with:

ifconfig devicename
netstat -r
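As a concrete sketch with made-up addresses (run as root; substitute your own network's values):

```shell
ifconfig eth0 192.168.1.10 netmask 255.255.255.0
route add default gw 192.168.1.1
ifconfig eth0 up

# Verify the address and the routing table:
ifconfig eth0
netstat -rn
```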
To manually obtain an IP address via DHCP, use the dhclient utility. If no interface is specified then the utility will attempt to obtain a DHCP address for each non-multicast address on the system. An example of using this utility is as follows:

dhclient eth0
This command will configure the eth0 interface with an IP address, netmask, default gateway, and setup /etc/resolv.conf with the proper nameserver directives.

In this article I explained what is required to network a Red Hat Enterprise Linux system. I also explained how to configure an interface so that the machine it is on will be able to communicate with other machines and devices on a network. The configuration can be done manually on the command line, through editing text files, or with the GUI tools provided by Red Hat with the operating system. The ability to configure Linux to interact with other systems is an essential skill, as a computer without the ability to network is useless.

Saturday, October 22, 2005

RHEL: RAID Performance

I recently purchased 3 new 200GB Maxtor hard drives. I haven't bought a hard drive in a couple of years and the biggest one I owned before this was 80GB, with a second-place runner-up of 20GB. I'm doing pretty well with the new drives. They are 7200 rpm IDE drives with 16MB caches. I just hooked them up in my SMP Athlon system and configured a few RAID partitions in RAID 0 to test the performance:

[root@primary ~]# hdparm -tT /dev/md1

Timing cached reads: 956 MB in 2.00 seconds = 477.36 MB/sec
Timing buffered disk reads: 264 MB in 3.01 seconds = 87.72 MB/sec
[root@primary ~]# hdparm -tT /dev/md3

Timing cached reads: 948 MB in 2.00 seconds = 473.60 MB/sec
Timing buffered disk reads: 270 MB in 3.01 seconds = 89.68 MB/sec

It's not often that you can get this kind of performance out of an IDE drive, so I'm pretty happy with my purchases. I did test them with Knoppix in a non-RAID configuration and was able to get 62 MB/sec, so this is nearly 30 MB/sec better!

Wednesday, October 19, 2005

RHCE: Exam Prep

All right, it's time to continue my studies for the RHCE exam. I've registered for the exam and I'll be taking it on Friday, December 2nd. I hope to be able to prepare fully by then with the book that I purchased and by practicing a lot. I just ordered some new hard disks so that I'll be able to play around with more configurations and have enough space on my main machine.

Tonight I'll be reviewing the RHCE exam prep guide.

Tuesday, September 06, 2005

Perl: Great scripting language...

I have always been a bash or ruby scripter and very rarely used Perl. Lately I have been using Perl quite a bit and have found that it is a joy to use! There is a lot of power behind Perl and it is very widely used in the industry. Google may favor Python, but plenty of other shops run on Perl. When performing skill searches on the job boards of late, I have found quite a few postings that ask for Perl skills.

I am taking a short break from my studies for the RHCE because I received an unsolicited request for an interview from a large company in Seattle. I will keep you all updated.

Wednesday, August 24, 2005

NFS: Windows client?

In my recent studies for the RHCE I have covered the NFS file sharing system. I have used this system in the past, but not extensively. I have been told that it is faster than SMB by a great degree between Linux and Unix boxes so I am going to run some tests on it and see how it works. I am currently downloading Services for Unix from Microsoft so that I can test the Windows client that they provide for NFS. I'll post results.

RHCE: The Road to Certification

Aggregation of my posts on the RHCE:

Starting out!
RHCE Exam Prep Guide
RHCE Networking and Configuration

RHCE: The Road to Certification

Over the next couple of months I'm going to be studying for the RHCE exam. I have purchased the book, "RHCE 4th edition", by Michael Jang as a study guide since I heard it was the best one out there. I'm not terribly impressed with it so far, as I have found quite a few technical errors. I hope to be able to start posting about my learning experiences and relate that to how well I perform on the exam. One of the reasons that I am doing this is so that I can start doing some private consulting around my area and I'd like to have some certifications to back up my experience and knowledge. Stay tuned!

Sunday, August 14, 2005

NX: Remote X-Server Session

I recently discovered an excellent client application for connecting to X remotely. NX is a client/server application developed by NoMachine with some clever caching techniques which provide near real-time response for X sessions. The NX server works with an SSH server to provide secure authentication and SSL-encrypted traffic, if desired. A GPL'd server has been written, and a free client is available from NoMachine. The NX client will also connect to Microsoft applications which use the RDP protocol, as well as VNC sessions. An excellent series of articles has been written on NX which provides some details as to the actual technical operations.

For anyone who has used VNC for remote GUI control of their systems, NX is far superior!

Wednesday, August 10, 2005


I recently received an email from Speakeasy with an offer for broadband service in my area. I was impressed with the email as they gave me an option to test my current speed with one of their servers in Seattle. I have attached a link to this speed test in the left nav bar for all to use. I really like the way that Speakeasy does things and I do plan on switching to their service sometime in the next year. One of the good flexible options that Speakeasy has is that you can become a mini WISP if you purchase their service and they will take care of the billing and email problems with your customers. The bad part is that you have to take care of the tech support and security portions.

Tuesday, August 02, 2005

Responsible Disclosure: Ciscogate

For those who have not yet heard (shouldn't be anyone), Michael Lynn presented a flaw in Cisco routers at Black Hat 2005 that could bring the Internet to its knees. There are conflicting sides to the story, but the gist is that Cisco was trying to down-play the seriousness of the flaw and keep the researcher from disclosing the vulnerability. Responsible disclosure means that after a reasonable amount of time trying to work with the vendor, the researcher must disclose the vulnerability to the security community so that the flaw may be fixed or defended against. There are rumors that the Chinese have already been exploiting this flaw, which makes it imperative that the security community know about it.

Open Standards: HTML and web technologies

As an avid supporter of open standards in all things digital, I was pleased to see this article on Slashdot wherein Paul Thurrott talks about boycotting Internet Explorer 7.0 until Microsoft comes out with a standards-compliant browser. I think that IE is a huge disappointment and a very lazy offering by MS. Any self-respecting tech company will strive to better the field that they work in and IE has made the field of web browsing and development worse for the wear. Please use Firefox.

Other reasons to not use Internet Explorer:

1. Privacy
2. Security
3. Diversity
4. Competition is better for innovation (not patents -- contrary to popular belief)

Sunday, July 31, 2005

Linux Computing: Thin Clients

Thin clients may be the way of the future for computing in large corporations and governments. I do hope that thin clients never take over my PC computing at home, though; I always want to be able to control my computing experience. This thin client is the best one that I have seen recently. I have done some work with the Linux Terminal Server Project, which provides an excellent solution for using one server to host many thin clients and eases the burden of system administration. I really like the idea from a systems administrator perspective.

Friday, July 29, 2005

Black Hat USA 2005

I just got back from Las Vegas, NV where I attended Black Hat USA 2005. This IT security conference is incredible! All of the briefings are new material only, which give you a fresh perspective on security issues in the IT field. The presenters were people from The Schmoo Group, Dan Kaminsky, the Choicepoint CISO, and many others! I saw some excellent briefings and learned quite a bit. This conference is a "must attend" for any serious security professional.

Saturday, July 16, 2005

Network Monitoring: Storage of capture data

I recently played around with trying to store some pcap capture data in a MySQL database so that I could analyze it and look for trends. I had the capture set to create 20MB full content files so that I could manipulate them easily:

tcpdump -s 1515 -C 20 -w content.lpc

I next created a Ruby script that would open the pcap file and write the data that I wanted to store to a CSV file, which I would then bulk load into the MySQL database. This part worked very well and very quickly. I found that when I inserted the data into an InnoDB table, storing only the source IP, the destination IP and port, and the time of each packet, 20 capture files took up 1GB of space. Not only that, but it turned out to be over 1.3 million packets. This amount of data is really testing my SQL skills, as I try to create intelligent queries that will allow me to aggregate the data on specific parameters.
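The Ruby script itself isn't reproduced here, but the same extraction can be roughed out as a shell pipeline. This is illustrative only: it assumes a capture file named content.lpc, and the awk field positions depend on tcpdump's exact output format, so they may need adjusting:

```shell
# Read the capture back, one line per packet, and cut it down to
# timestamp, source, and destination as rough CSV:
tcpdump -n -tttt -r content.lpc \
  | awk '{print $1 " " $2 "," $4 "," $6}' > packets.csv

# The CSV can then be bulk loaded, e.g.:
#   LOAD DATA LOCAL INFILE 'packets.csv' INTO TABLE packets
#   FIELDS TERMINATED BY ',';
```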

Anyone have any better solutions?

Securing the mother-in-law's computer.

This week I had the opportunity to take a look at my mother-in-law's computer to make sure she was safe on-line, after having gone over it pretty thoroughly 6 months ago to put some basic security measures in place. I was talking to her about how she accessed the internet and browsed web pages, as well as using her digital camera to create photo pages. She told me that when she accessed the Internet, she had to disable 'that ZoneAlarm' program so that it wouldn't take as long...and sometimes it stopped web pages from loading altogether! This really surprised me, as I thought that I had explained the situation better than that. Her firewall was being disabled at the time she needed it most.

My mother-in-law is running Microsoft Windows 98 and has been using it for nearly 7 years. She knows how to get around and sees no reason to upgrade to Windows XP or Linux. As security people, I believe that we need to advise people to use systems that are as secure as possible...especially since Microsoft does not, and cannot, maintain the security of its operating systems. The real answer here is to use an operating system that is more secure, so that users do not have to understand so much about how the technology works to be secure on-line.

Sunday, June 19, 2005

Home PC: How secure do you feel?

I recently helped my brother-in-law set up a new computer that he had purchased, just to make sure that he would not be plagued with the endless spyware and adware that most home users are afflicted with. The biggest issue is that most people run their personal machines as a member of the Administrators group. One thing that I noticed as I waded through all of the "utility" software on his machine was that there is a lot of JUNK on OEM machines!! I have only purchased 1 OEM machine in my lifetime, building the rest of my machines or buying them used from University surplus sales, so I didn't realize how much crap they put on these things. I got the feeling that if the user feels safe because of the massive amount of software designed to make them safe, then they must be safe -- or at least that's what the OEM would have you believe. After I cleaned all of the AOL, Norton trial, and McAfee trial software off the machine, it booted twice as fast and ran much more smoothly. I also installed AVG Free edition for anti-virus and enabled the built-in Windows Internet Connection Firewall. Now he will be able to use the full power of his machine and not get plagued by viruses and other malicious code.

Some things that are just smart to do with a Windows machine to maintain it -- in order of importance:

1. Do not use an Administrator account unless you are installing software or configuring your machine (this will save most people)

2. Use a firewall of some sorts

3. Enable automatic updates for Windows

4. Use anti-virus software

Wednesday, June 15, 2005

Gentoo Linux: Founder hired by Microsoft

Gentoo founder and former Chief Architect Daniel Robbins has accepted a job with Microsoft to help them understand Open Source software. Gentoo has been my Linux distribution of choice for the past year and a half, and this comes as a huge surprise to me. I don't think that Gentoo will suffer because of this change, but I do think that Daniel Robbins will. I have so much respect for the Gentoo team that I cannot believe that the ideals of the founder would coincide with anything at Microsoft. I hope the best for Robbins and Gentoo.

(Announcement is on the front page of Gentoo site.)

VMWare: Seattle Conference

This morning I attended the VMWare conference in Seattle, WA. VMWare is an essential tool when analyzing malicious code. It's very easy to set up a [sandbox] network of 2-10 machines so that you don't damage any of your production machines -- and you have the option of freezing the virtual machine state so that you can restart any malware exam if you miss something. For the forensic examiners, you can mount a raw disk image in VMWare and start it as a virtual machine! If you plan on analyzing malicious code (viruses, worms, trojans), this software is invaluable!!

The main point behind the VMWare conference was for developers and testers, but I found it useful to go along and get the free $200 license for VMWare 5.0.

Wednesday, May 11, 2005

Resources: TCPDump Pocket Reference

I hate to copy other blogs, but I found a great reference on the open source weblog for anyone who uses TCPDump. This great reference is put out by the SANS Institute as a TCPDump pocket reference guide. The reference consists of a two-page printout that contains valuable information on processing the output of any network dump.

Monday, May 09, 2005

Command Line: find

One of the most valuable commands at your fingertips when using Linux or Unix is the find command. This versatile command can be used for a variety of tasks, from listing the contents of a directory or filesystem to indexing your entire filesystem. Find can be difficult for the novice to master, especially when there is no instruction available. The man pages don't really show the friendly side of the system:



NAME
find - search for files in a directory hierarchy

SYNOPSIS
find [path...] [expression]

Some basics to find are as follows:

A simple find command will list all files and folders recursively in your current working directory (CWD):

secondary ~ # find

The first argument to find is the path which, if omitted, is assumed to be your CWD, as shown previously. You can also give the path explicitly:

secondary ~ # find .

From the man page, we can see that after the path we can give find an expression. This is where most people have trouble when starting out. The tendency of most people is to limit themselves to a regular-expression style of pattern, when the bigger picture is that the expression possibilities are immense. Take the following, for example:

secondary ~ # find / -type d -name sbin

As you can see, this command listed all of the directories in the filesystem named sbin. The -type option was used to specify the type of file to find, in this case a directory. The options available are:

-type c
File is of type c:

b block (buffered) special

c character (unbuffered) special

d directory

p named pipe (FIFO)

f regular file

l symbolic link (never true if the -L option or the -follow option is in effect, unless the
symbolic link is broken).

s socket

D door (Solaris)

The two most used are going to be the file and directory options. The next option used was the -name option, which can be used to specify the name, or part of a name, to search for. You can use a wildcard to find variations: '-name bin' will not find 'sbin' or 'bind', but -name '*bin*' will find all of them (quote the pattern so the shell does not expand it before find sees it). Note that the -regex option can be very complicated, so it is often easier to use -name and possibly a wildcard or two.

While these few options are enough to get you started, they are nowhere near tapping the resources of this powerful command. I recommend exploring and using this command frequently, as it will make your CLI experience much more rewarding!
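To get a feel for the options above without touching anything important, you can practice on a throw-away directory tree (the paths here are arbitrary):

```shell
# Build a small scratch tree to experiment on:
mkdir -p /tmp/finddemo/usr/sbin /tmp/finddemo/usr/bin /tmp/finddemo/sbin
touch /tmp/finddemo/usr/bin/vi /tmp/finddemo/usr/sbin/useradd

# Directories named exactly "sbin" -- finds usr/sbin and sbin:
find /tmp/finddemo -type d -name sbin

# Quoted wildcard -- also matches usr/bin:
find /tmp/finddemo -type d -name '*bin*'

# Clean up:
rm -rf /tmp/finddemo
```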

Thursday, May 05, 2005

Spam Increase due to an address-harvesting service

I have noticed a marked increase in the amount of spam that I get since my sister tried to sign me up for the service I wrote about below. Since they now have control of her hotmail address book, it only makes sense that they would spam everyone in it, including me. The sad part is that most of the spam I am getting now is "adult" related. I have never received much spam and I am very careful with my email addresses. Now I am receiving 3-6 messages each day that I believe are a direct result of that service. It just takes one person who doesn't have a clue to ruin it for you.

Please do not use this service.

Reporting to Microsoft

After the episode a few days ago, I reported it to Microsoft. This morning I received an automated reply stating that I need to send a hotmail-addressed email to them. They are evidently not the right people to notify about the scandal this service is running. The email is as follows:


This is an auto-generated response designed to answer your question as quickly as possible. Please note that you will not receive a reply if you respond directly to this message.

Unfortunately, we cannot take action on the mail you sent us because it does not reference a Hotmail account. Please send us another message that contains the full Hotmail e-mail address and the full e-mail message to:

>>>>>> To forward mail with full headers

Using Hotmail:
1. Click "Options" to the right of the "Contacts" tab. The "Options" page appears.
2. Under "Additional Options", click "Mail Display Settings". The "Mail Display Settings" page appears.
3. Under "Message Headers", select "Full" and click "OK".
4. Forward the resulting mail to:

Using MSN Explorer:
1. Open the message, and then click "More" in the upper right corner.
2. Click "Message Source". The message opens in a new window with all the header information visible.
3. Copy all the text and paste it into a new message. Send this message to:

Using Outlook Express or Outlook:
1. On the unopened mail, place your cursor over the mail, right-click, and click "Options".
2. Under "Internet headers", copy the contents of the full header.
3. Open the e-mail in question and forward a complete copy of the message, including the full message header you copied at the beginning of your message, to:

If you're not a Hotmail member, consult the Help associated with your e-mail program to determine how to view complete header information. Then forward the message to:

If the unsolicited junk e-mail or "spam" comes from a non-Hotmail account, you can send a complaint to the service provider that sent the mail. Make sure that you include full headers when you send your complaint.

In the full header, look at the last "Received" notation to locate what .com domain it came from. It looks something like:
[service provider domain name].com

Forward a complete copy of the message, including the full message header, to:
abuse@[service provider domain name].com

If the domain does not have an abuse service, forward your complaint to:
webmaster@[service provider domain name].com

All Hotmail customers have agreed to the MSN Website Terms of Use and Notices (TOU), which forbid e-mail abuse. At the bottom of any page in Hotmail, click "Terms of Use" to view the document in its entirety.

Thank you for helping us enforce our TOU.

Tuesday, May 03, 2005

Identity Integrity:

This morning I received a very strange email from my sister asking me to update my personal contact information on Bebo. I was very skeptical that this email actually came from my sister, so I immediately emailed her to ask whether she had sent the message. She replied that everyone she knows uses this thing, and since she had lost her address book, she would too. The message was as follows:


I am updating my address book and it would be very helpful if you could click on the link below and enter your contact details for me:

I am using a new service that helps people stay in touch. It is only for direct friends and allows
you to privately exchange contact details and view one another's photos. You choose what to share.

Thank you for helping.

At this point I was worried that my sister may have fallen for a scam of some sort, so I told her I was concerned about what Bebo might be using my personal data for...and she replied that she was not worried. She was also under the impression that Bebo is part of Hotmail. Now I was getting worried that Microsoft was pulling a fast one on people and trying to take over the world by combining Bebo with their webmail service -- but I hadn't seen it on the all-knowing Slashdot yet.

I did a little research on the Bebo website and was not able to find anything that would link them to Microsoft. Some more googling showed that many people were receiving spam and were unhappy with how Bebo had hijacked their Hotmail password/account, so I thought I would investigate. The first step would be to create a throw-away email address with Hotmail.

I created a throw-away account with Hotmail. This took very little time, filling out each form with bogus information:

First name: bebo1
Last name: bebo2

The next step was to sign up with Bebo and try to find out where it links with Hotmail. I signed up with the username 'isthisbebo'. Bebo requests the following information about the person signing up:

My Contact Details
First Name
Last Name
Date Of Birth
Email Addresses: Home
Phone Numbers: Home
Postal Addresses: Home

The very next page shows a couple of text boxes that allow you to enter your Hotmail email address and password so that Bebo can show you who IN YOUR ADDRESS BOOK is already on Bebo. Is that scary or what? This service is using people's email credentials to access their address books. Why write a virus to do this? Just create a website and ask people -- they will give you their passwords!! I wonder if Microsoft condones this practice. The next step was to enter my Hotmail email address and password and watch it go over the wire in the clear...which it did:

Email form:

Add Friends

Request contact details from your own friends and populate your free address book.

Hotmail Users
Enter your Hotmail details below and we'll show you who's already using Bebo from your Hotmail Address Book.

Hotmail Email Address

Hotmail Password

~ OR ~

Copy and Paste the wording below into an email.
Send the email to friends to request their contact details. You can send from either your Hotmail account and/or ANY other email account you may have.
Need instructions on how to Copy and Paste? Click here

Ethereal Capture:

POST /WhosHere.jsp?Ran=289260571 HTTP/1.1
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.7) Gecko/20050414 Firefox/1.0.3
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Cookie: bdaysession=251377972379689953;; Username=isthisbebo; A=-1; G=0
Content-Type: application/x-www-form-urlencoded
Content-Length: 73

ScraperTypeCd=H&

HTTP/1.1 200 OK
Server: Resin/2.1.16
Content-Type: text/html; charset=utf-8
Content-Length: 8888
Connection: close
Date: Wed, 04 May 2005 02:04:24 GMT

In conclusion, Bebo is NOT integrated with Hotmail. The practice Bebo has started of trying to fool people into handing over their Hotmail username/password is very disconcerting. I am going to warn my family and friends to be very careful when using this service and not to give out other email addresses or passwords. If a hacker were to compromise this system, there is no requirement, as far as I know, for Bebo to disclose the breach to its users -- and the attacker would have a valid email address and password for some users. Bebo also reserves the right to send spam to those on its lists.

Monday, May 02, 2005

ISP Security

While researching ISPs lately, I came across an interesting concept on the website of Speakeasy, which allows individual customers to be a mini-ISP through its NetShare program. Under this program, any customer who considers themselves competent can share their connection with others for profit, and the customer is responsible for the actions of the users they share with. If someone sharing your connection is downloading child porn or conducting other illicit activity, you will be held responsible if you do not take care of it. The system is also designed for wireless connections, and the customer is responsible for the security of the configuration. It seems pretty unsafe to me to let consumers who "think" they are competent be responsible for the security of the information their neighbors pass over the network! I think there needs to be a qualification check in place to make sure this doesn't get out of hand. Another caveat: the customer who administers the NetShare connection must provide tech support for their users, which could be a big hassle if not properly managed.

Friday, April 29, 2005

Encryption: Enigmail for Mozilla Thunderbird

Enigmail is an extension for Mozilla Thunderbird that will allow integration with the GnuPG encryption utility. This is a very useful tool that features key management, email signing, and encryption of email. I heartily recommend this extension to anyone who uses Thunderbird.

While installing Enigmail for Mozilla Thunderbird, I had some difficulty getting the extension installed. I would open the extensions dialog, select the xpi file from my desktop, and nothing would happen. I have not had to install the Windows version for so long that I forgot I had to perform the install from an Administrator account before installing it as a Limited-Access User. I don't agree with the way this system works, as it means the application is too closely coupled with the system registry and affects more than my single user when I install this extension. If this extension requires Administrator privileges to install, why doesn't it install for every user on the system when I do perform the Admin install?

Wednesday, April 27, 2005

Security Principle: Separation of Privilege

There is an excellent article by Daniel Hanson that talks about the downfall of running any system under an administrative account. Daniel makes an excellent reference to the Linspire way of doing things, which follows Microsoft and runs all users as root. As Daniel so eloquently points out, running as root is like putting all of your vegetables in the same pile -- if one of them begins to rot, the rest will most likely begin rotting and you will have no vegetables left. On the other hand, if you put restrictions on your users and run with least-privilege user access, you will be able to maintain the integrity of your system. One of the fundamental elements of information security is integrity (the I in CIA) -- making sure that your data is the same now as when you put it there. If you run all of your users as root, or even if you run as root while you surf the web and check your email, you run a significant risk of losing control of the integrity of your data.

It is always easier to run as root -- until you lose some data. This can be compared to the person who doesn't believe they need to back up their data: they will quickly change their mind after they lose something critical (although some people never do learn). If Linspire has to go through the same maturity lesson that Microsoft has gone through, it will be a stain on the reputation of Linux, since Linspire ships it as part of the operating system.

Tuesday, April 26, 2005

Current Events: Server Compromise

This past weekend I noticed a huge amount of traffic from one IP trying to break into my SSH server at home. After some investigation, I discovered that this IP had made over 1100 intrusion attempts. The attacker was a script-kiddie using a dictionary attack. I performed an aggressive nmap on the IP to discover the type of machine attacking me with the following command:

nmap -sS -sV -O -v -T5 'ip address'

After discovering that the IP had a tempting number of services available, in addition to several IRC servers running, I attempted to view the web page the server was serving by opening it in Firefox. I was surprised to discover that the web site was an e-commerce site belonging to a religious organization. Armed with this new information, I was convinced that the site had been compromised and that they needed to be informed. By looking up the whois data, I discovered that the server was hosted in the US and that there was a technical contact listed. I emailed the technical contact, as well as the root/abuse/info addresses at the domain in question, and informed them of the problem. I received a response a couple of hours later and the site was taken down for maintenance.

A couple of things I take away from this are that I can make a difference by being aware of what is happening to me and by doing some minor investigating when an intrusion attempt occurs. Also, public whois data is essential for people like me who care about the safety of others to be able to inform server admins that the integrity of their systems may be compromised. Sorry about the lack of detail on the site, but I don't want to make them a target or give them any undue publicity.

Saturday, April 23, 2005

Book Review: The Art of Intrusion (Mitnick & Simon)

The Art of Intrusion is a book by a convicted cracker who solicited stories from other crackers so that he could retell them. Kevin Mitnick has made quite a name for himself through the crimes he committed and the sentence he received. The Art of Intrusion is aimed at the "not so technically inclined" who want to know how crackers feel and work.

Throughout The Art of Intrusion, Mitnick relates unfounded but convincing stories of cracking performed by others. With each event, Mitnick explains how to prevent the attack and how to fix the problem before it begins. He does not reveal anything in this book that a security professional worth their salt does not already know. Mitnick's story-telling reads as if he wants to be writing a technical document but never quite gets there, resulting in a book that is awkward to read and not very interesting until the last two chapters. I had to convince myself to keep reading in hopes of finding out something new.

The biggest complaint I have about this book is that Mitnick continually tries to convince the reader that crackers are doing society a favor by exploiting vulnerable systems and that all of the really good security consultants were once [or still are] black-hat crackers. Mitnick and others who commit cyber crimes evidently believe that they should not be punished if they report the crime to the party their crime affects -- even though malicious activity has occurred. If the crime is committed, the consequences should be faced.

I do not recommend this book.

Wednesday, April 20, 2005

Books: The Art of Intrusion

I am currently reading The Art of Intrusion, by Kevin Mitnick, and will post a full review when I am done. After reading the first 4-5 chapters I am disappointed by the lack of technical detail and the method Mitnick uses to tell the story. Mitnick is giving out security advice during and after each account which has not revealed any gems thus far. If the book continues as it has, I will be forced to give this book to my mother-in-law, as it does not reflect the level of knowledge that I expect.

To be continued...

Sunday, April 17, 2005

Email Clients: Mutt

I recently SSH'd into one of my servers running Fedora Core 3 and wanted to check my local mail. Not wanting to use the basic mail utility, I tried for my old favorite, Pine. With Pine nowhere to be found, I began to search for an alternative to it [since it is being used less and less, as I have found -- FreeBSD actually discourages the installation of Pine due to some security vulnerabilities]. After some very short searching on Google, I ran across Mutt and decided to try it out. It takes a few minutes to get the basics down, but this MUA is excellent! I really enjoy being able to use my "default editor" on any system to edit my email. I am a hard-core vi user, so being able to edit email in vi makes my life easier. There seems to be a fairly active user and developer base for Mutt so this bodes well for support and documentation. There is also a Mutt wiki.

Friday, April 15, 2005

Microsoft Security: Right direction?

The biggest problem with Windows security has always been that it is nearly impossible to run as a non-administrator when performing normal operations. It is possible, but it is very difficult. With it being so difficult to run as a non-administrator, most users run with full system privileges all the time which brings their system(s) under attack from every web page they visit and every email they open. Windows experts have instructed users to 'down-grade' their privileges when using their browser or email client, which is never done due to the additional steps that it takes to accomplish this seemingly simple task -- this is backwards, you should have to elevate your privileges to perform privileged functions!!

Microsoft has made some big strides in improving this model of operation recently with the 'Run-As' command but it has also been difficult to use. With the next release of Windows coming up, code-named Longhorn, Microsoft is embracing the principle of Least-privilege User Account (LUA). The principle of LUA has long been enforced in the Unix/Linux worlds with all users being able to control their own profile and nothing else or an account having access to control one daemon or service except the root user who is used to perform administrative functions. I am anxious to see how Microsoft does in this implementation, although I do expect it will take a few tries to get it right. This may turn into another version of the same thing we have now -- with there being 15 different levels of administrator and the Limited Account that still cannot function.

Wednesday, April 13, 2005

Linux Distro: OpenNA Linux

While reading a whitepaper in the SANS reading room today, I came across a reference to OpenNA Linux, a distribution designed with a high level of security in mind. The distribution was originally derived from Red Hat Linux but is now maintained by the OpenNA security solutions team, which offers it for free [without support]. Its Red Hat roots make it easy to install RPMs and give Red Hat admins a good sense of familiarity. After reading a bit on their website, I plan on testing OpenNA Linux.

OpenNA Linux aims to be more secure than the average mainstream Linux distribution by removing all unnecessary software and services through role-based installations. If you are going to deploy a web server, you install only the applications necessary to run a web server. While role-based security is fairly obvious, very few distributions give you the flexibility of installing only the bare minimum to run the services you desire. OpenNA Linux even discourages installing an X Window System, which is sound advice for any production server.

On a side note, Werner Puschitz has written an article on how to secure a Linux system that is well worth reading. I have just a couple of additions to it. The first is in the sshd_config file; replace the following line:

#Protocol 2,1

with this line:

Protocol 2

This change prevents the SSH server from falling back to SSH protocol 1 to authenticate users, which makes it more secure. The other item I don't quite agree with pertains to passwords. The author encourages very complex passwords, which makes it difficult for users to remember them. I do agree with his password scheme for privileged accounts or accounts with remote access, but for normal users who do not have remote access (outside the subnet) there should be a more relaxed scheme. I would recommend requiring only at least two of the many criteria he listed, along with a minimum length of 8 characters.

Overall, I highly recommend reading his article, and I will report back once I have reviewed the distribution.

Thursday, April 07, 2005

Biometrics: Good Idea or Not?

If using biometrics to increase your level of security or safety appeals to you, read about the Malaysian businessman who lost his finger because it was the only means of starting his Mercedes. This incident shows me that biometrics are NOT a viable security alternative. I don't want someone trying to cut off my finger or pull my eyeball out of its socket so they can take my car for a joy-ride around the city. I agree with the article's author: I do not want something used for security that is physically tied to me.

This incident reminds me of stories about foreign diplomats who are implanted with RFID tags so that they can be located and recovered in the event of a kidnapping. The crooks are not all foolish: they found out and began removing the limbs that held the RFID tags (usually hands). What are YOU willing to sacrifice for that level of "safety"?

Tuesday, April 05, 2005

FOSS Providing Means to Educate Millions

The MIT Media Lab is launching a program to develop and distribute $100 laptops to children around the world who are in need of education and technology. The program will provide laptops to children in developing nations who do not have access to the Internet or education materials, even books. With the idea to issue a laptop to a specific child who is able to take the laptop home and use it with their family, this program will provide a means to educate millions (orders will be for at least 1 million laptops).

Free and/or Open Source Software (FOSS) creates a means to provide technology to these children without spending lots of money on a proprietary operating system or office package, while still providing a complete computing experience. If this operation were required to spend $199 on MS Windows and $399 on MS Office, the whole endeavor would be dismissed as impossible. I know there are bulk licenses, but they cannot compete with free software. I'm not saying there is no place for MS software; it provides a rewarding experience for those who can pay for the expensive licenses and the support required to run these systems.

I applaud the efforts of MIT and I am certain that without the movement of FOSS that opportunities like this would not be available. If this effort is successful and laptops are distributed in mass quantities, the acceptance of Linux and Open Source software around the world will sky-rocket.

How do you compete with an opponent that has no price? I don't know, but MS has enough money they may find a way.

Monday, April 04, 2005

Security Principle: Least Privilege

One of the most important concepts in the IT security world is least privilege. When you create a user account and assign it access permissions, you should give that account the least amount of privilege it requires to perform its function. Following this principle will save you an incredible amount of time and hassle when administering a network and maintaining the security of your system(s).

With 5 years of Linux administration and 10 years of administering Windows machines, it is increasingly apparent to me that the biggest cause of security breaches is that of too much user privilege. I see many shops where the administrators are running as administrator or root on the machines that they use for email, web-browsing, and non-administrative tasks. I also see a MS Windows environment where it is incredibly difficult for a user to not run as administrator and still get normal day-to-day tasks done -- but it is possible. When administering a network with 30 users on Windows XP/2K machines for 1.5 years I had no virus or worm outbreaks, and no loss of data. I did experience one incident of spyware when a user played a joke on another user by installing a screen-saver. On every network that I administered where the users were able to access the administrator account(s), there were always problems with virus outbreaks and worms causing hours of work for me to recover the systems.

I have heard from some system administrators and even security professionals that it is not possible to force users not to run with administrator privileges. That is not correct. If you take the time to learn how to administer your systems properly, it will save you time in the long run. Unix and Linux have the 'su' command, which lets you temporarily become the administrator to perform administrative functions. MS Windows has the 'Run-as' command, which works fairly well to do the same. You should NEVER have to log in to your system as the administrator account. It is very difficult with MS Windows to maintain this security policy, but it is doable. One of the best ways to get used to this practice is to do it at home, where I'll bet most people do not! I can honestly say that I do not log in to my machines as root unless I am performing administrative tasks, and then I log out as soon as I am done.
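The habit boils down to a simple decision rule that many admin scripts encode. The sketch below is illustrative only (the function name and the injected uid parameter are mine); it mirrors the su/Run-as workflow described above:

```python
# Toy model of the least-privilege rule: do routine work unprivileged,
# and demand explicit elevation (su -c, sudo, or Run-As) only for the
# step that actually needs it. The uid is passed in for clarity.
def plan_action(task_needs_root, uid):
    if task_needs_root and uid != 0:
        return "elevate for this step via su -c / sudo / Run-As"
    if not task_needs_root and uid == 0:
        return "refuse: routine tasks should not run as root"
    return "proceed"

print(plan_action(False, 1000))  # -> proceed
print(plan_action(True, 1000))   # -> elevate for this step via su -c / sudo / Run-As
print(plan_action(False, 0))     # -> refuse: routine tasks should not run as root
```

The point of the model is the asymmetry: elevation is the exception you ask for, never the default you start from.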

The following Microsoft article gives a good overview of tools and methodologies that help you run with least privilege.

Tuesday, March 29, 2005

Bash Scripting: Tip of the Day!

I know that we all love and hate shell scripting. Bash can be the best friend in the world at times, then the worst enemy a second later. While working on a parsing script today, I came across a problem where I wanted to be able to pass an argument to a shell script, then use that argument in some processing. The only problem was, I needed to be able to pass arguments that had multiple words and spaces! I have come across this problem before and worked around it, but today I decided to solve the problem and figure out how to work with it.

In order to pass a multi-word argument to a shell script on the command line, you must quote the argument on the command line (single or double quotes) and quote the $1 variable as "$1" inside the script. An example:


#!/bin/bash
echo "$1"


non-root@localhost$> ./ 'hello my loyal blog readers'
hello my loyal blog readers

I hope that this tip helps someone looking for the same thing.
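The same rule matters when a script forwards all of its arguments onward: quoted "$@" preserves each original argument intact, while an unquoted $* re-splits everything on whitespace. A quick sketch:

```shell
set -- 'hello my' 'loyal blog readers'   # simulate two script arguments

count_args() { echo $#; }

count_args "$@"   # prints 2: both arguments preserved intact
count_args $*     # prints 5: re-split into five separate words
```

Inside a script, "$@" is almost always what you want when passing arguments through to another command.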

Wednesday, March 23, 2005

Mac OS X Security

Computer Crime Research News has an article discussing a report by the major anti-virus vendor Symantec about how Mac OS X is increasing in popularity and becoming a target of hackers. This discussion is excellent for the security industry and for those who would like to see more variety in the operating systems deployed on the desktop. With BSD and Mac OS X rated as the most secure OSes for either 2003 or 2004 (I forgot the link, I'll try to find it), we will now be able to see how secure Mac OS X really is. We will also be able to see how secure the applications Apple has added to the BSD base are, and how viable the Apple solution is compared to Linux [and Microsoft Windows].

I don't think the vulnerabilities will ever match, or come close to matching, what we have seen with Microsoft products. The reason is that Microsoft has been in the spotlight for many years as the front-runner, while the others have stood by the wayside, moving in a more secure fashion while learning from the mistakes and successes of Microsoft. It is much better to have options when deploying a server or desktop, so I welcome the competition of Microsoft, Apple, and any other OS vendor. I do like to preach about open standards, though: the consumer should not have to be the one struggling to implement a solution because the vendor has gone out of their way to prevent interoperability (read: Microsoft).

Friday, February 18, 2005

Commercial Software Regulation

With all the push lately for security in the software and IT markets, what will it take for companies to implement secure practices? According to Richard Clarke, former White House cybersecurity and counterterrorism adviser, some regulation must be put in place to force companies to adhere to open standards and regulations that will promote better cybersecurity:

Article with quote

"But Clarke, during one panel discussion yesterday, called on Microsoft and other software companies to become more publicly accountable in their efforts to develop secure software. He said he asked Microsoft last year to disclose the specific quality-assurance practices it was following in the pursuit of more-secure software code.

The idea, he said, would be for the software industry to collectively come up with a set of best practices for secure software development. Outside experts would then be able to judge how well each company lives up to those practices.

"There's no fine involved, there's no liability involved, but the marketplace is better informed, and the marketplace works better when it knows what's going on," Clarke said, drawing a round of applause from the crowd at San Francisco's Moscone Center. Panelists compared the concept to the effort to hold public companies to standards for financial reporting under the Sarbanes-Oxley Act."

With the creation of open standards which will be regulated by the IT industry itself, and held accountable by the government and people, the industry will be able to move forward with the security and safety of the Internet and applications that rely on the internet.

Thursday, February 17, 2005

Student Privacy in Public Schools

An elementary school in Sutter, California, implemented a policy of using RFID tags to track students' movements throughout the school. The system was supposed to make it easier for administrators and teachers to take attendance and monitor the location of students. The initial plan included tracking students into the bathrooms, which parents successfully protested.

This type of automated tracking is a clear invasion of privacy. I am not suggesting that we have a right to privacy (although I do support the right to privacy), but I am suggesting that in this situation, the parents should be able to decide whether or not school employees will have access to their children at all times. I would want measures to be in place to ensure that the local pedophile would not have access to the children's location when the school was short-staffed. We all know that background checks are not 100% accurate, and that school employees are under-paid. The RFID technology is not mature enough to prevent third party reading and tracking either. There should be more planning and risk-analysis involved in a policy such as this.

The idea of monitoring our children is not a bad one, as they require monitoring by responsible individuals who care for their well being. The monitoring becomes a problem when it is automated and access may be given to individuals who the parents are not informed about. I can see this issue getting more of the spotlight as more monitoring solutions are created.

Consequences for Hacking?

T-Mobile was the victim of a hacker for a period of about one year, possibly continuously. This hacker, Nicolas Jacobsen, was able to access the customer records and personal data of T-Mobile customers. Nicolas then offered this personal data for sale on-line, even offering it to the Secret Service agents who were investigating him at the time. Nicolas also accessed the classified email of a Secret Service agent who was using a Sidekick for email related to open and active cases. According to reporting by Kevin Poulsen, the maximum sentence for this crime will be 5 years. I can hardly see this as adequate punishment for the crime that was committed, and I do not think this type of consequence will deter other criminals or prevent identity theft! I realize that Nicolas will probably aid in the prosecution of other hacking cases, but the ruin he could cause to the thousands of people whose identities he may have stolen will follow them for their entire lives. Nicolas will serve 5 years, get out, and start his own consulting company that makes him a millionaire at 30. Is this fair?

PC Simulations in Court

PC simulations are used for many studies from weather and seismic activity, to nuclear explosions. However, using PC simulations in court has not been a common practice. I found an article this morning which discusses a trial taking place in Seattle where a man is being charged with vehicular homicide and a software simulation is being used to aid in the prosecution. I wonder how far out of hand this practice will become before sufficient regulation and certification is put in place to make it fair, if that is even possible. I can see a situation where the weekly software update was done improperly and the crime will have to be re-tried due to simulation error. I wonder if the convicted would be able to make a case against the application programmer for any mistake or suffering if the case were later overturned.

In the Seattle trial, a man is being charged with vehicular homicide after taking a ride with a friend in his new sports car. Witnesses saw the pair leave with the friend driving the car, who was then killed in an accident involving a tree and a mailbox. The prosecution is using PC-Crash, a computer simulation, to try to prove that the occupants switched roles and that the survivor of the crash was driving and caused the crash.

Wednesday, February 16, 2005

Finding Rootkits

I was reading Bruce Schneier's blog today and found a post on Ghostbuster, an idea from Microsoft for checking a system for rootkits and other hidden software. The application would reside on a CD with its own OS; once booted, it would check the system for hidden files and folders that may belong to a piece of malware or an exploit.

The idea seems very efficient, except that the system would have to be stopped to perform the check... A solution to this problem would be to have several servers load-balanced so that the sysadmin could check each system while there were other servers there to maintain the load.

This idea could also be accomplished using Knoppix, albeit not as quickly or efficiently unless the admin had written a script or program to check it for them.
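The core of the Ghostbuster idea is a "cross-view diff": take one file listing from the trusted environment (booted from the CD) and one from the running, possibly compromised OS, then diff them. Here is a toy Python sketch with made-up paths; a real tool would walk the actual filesystem in both environments:

```python
# Cross-view diff in miniature: files visible in the trusted (offline)
# listing but missing from the live listing are being hidden from the
# running OS -- the signature of a rootkit.
def hidden_files(trusted_view, live_view):
    return sorted(set(trusted_view) - set(live_view))

trusted = ["C:/Windows/explorer.exe", "C:/Windows/rootkit.sys", "C:/boot.ini"]
live    = ["C:/Windows/explorer.exe", "C:/boot.ini"]  # rootkit filters itself out

print(hidden_files(trusted, live))  # -> ['C:/Windows/rootkit.sys']
```

The Knoppix approach mentioned above is exactly this: boot the trusted environment from CD, record the listing, and compare it against what the live system claims.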

Tuesday, February 15, 2005

Slashdot Discussion with Martin Taylor

There is a very interesting discussion on Slashdot with Microsoft's Martin Taylor that I recommend reading. It is always good to hear from Martin Taylor, Microsoft's politically correct spokesman. I don't agree with everything he says, but he speaks well with the information that he has. Reading articles like this helps me become less biased and see the folly of using or avoiding a specific vendor for emotional reasons. The most valuable IT person is the one who knows how to analyze a situation and use the best tool for the job. I don't want to be a Linux or Windows expert; I want to be a security expert who knows how to use Windows or Linux and make either of them as secure as it needs to be for the situation at hand. On another note, the Linux professionals Martin talks about get paid quite a bit more than the Windows professionals, which tells me that if I want to get paid more I shouldn't work on becoming a Windows expert so much as continue building my Linux/Unix skill-set.

Friday, February 11, 2005

Transport Layer Protocols

While reviewing some API documentation on network programming, I was reminded that TCP and UDP are not the only transport protocols in use (as defined by the IETF). I did a bit of reading on SCTP, the Stream Control Transmission Protocol, which allows a multi-homed host to establish an association with another host. The big-picture scenario is that a multi-homed host can establish a session spanning any number of its interfaces, tolerating the failure of one or more of them. As long as one interface remains up, the host can continue the communication.

More documentation can be found in the RFCs which describe it:

RFC 2960
RFC 3309
RFC 3758
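On platforms that support it, SCTP is reachable through the ordinary sockets API. Here is a small probe in Python (assuming Linux; the fallback protocol number 132 is SCTP's IANA assignment) that simply reports whether the kernel will hand out an SCTP socket:

```python
import socket

# SCTP is exposed through the normal socket API on Linux kernels built
# with SCTP support; the IPPROTO_SCTP constant may be absent elsewhere.
def sctp_supported():
    """Report whether a one-to-one style SCTP socket can be created."""
    proto = getattr(socket, "IPPROTO_SCTP", 132)  # 132 = IANA number for SCTP
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, proto)
    except OSError:
        # Kernel lacks SCTP support (module not loaded or not built in).
        return False
    s.close()
    return True

print("SCTP available:", sctp_supported())
```

Actually binding multiple local addresses into one association requires the SCTP-specific calls (e.g. sctp_bindx) that the base socket module does not wrap, so this is only a starting point.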

Tuesday, February 08, 2005

10 Computer Security Laws

I have recently begun reading posts from, and read an article this morning that discussed 10 constants in the IT security field. The ideas presented in this article should be part of the training program of any IT shop; all system administrators, and those in charge of them, should be aware of these concepts as well. The article lists the laws as follows:

  • Law #1: If a bad guy can persuade you to run his program on your computer, it's not your computer anymore.
  • Law #2: If a bad guy can alter the operating system on your computer, it's not your computer anymore.
  • Law #3: If a bad guy has unrestricted physical access to your computer, it's not your computer anymore.
  • Law #4: If you allow a bad guy to upload programs to your Web site, it's not your Web site anymore.
  • Law #5: Weak passwords trump strong security.
  • Law #6: A computer is only as secure as the administrator is trustworthy.
  • Law #7: Encrypted data is only as secure as the decryption key.
  • Law #8: An out of date virus scanner is only marginally better than no virus scanner at all.
  • Law #9: Absolute anonymity isn't practical, in real life or on the Web.
  • Law #10: Technology is not a panacea.
I highly recommend reading the article located here or the Microsoft article located here.

Wednesday, February 02, 2005

Dial-Up Internet is Terrible!

I have been using a dial-up internet connection for a few days now, and it is terrible. I wonder how many people still don't have broadband. I have not been without broadband for nearly 8 years and would not go without it unless my home were physically incapable of supporting it. At least I now know that the modem in my laptop works.

Monday, January 24, 2005

Yet Another Web Resource

I have recently discovered, thanks to /., a new web resource for tech news and information. It appears to contain information related to the bigger players in the tech market.

Thursday, January 20, 2005

What exactly does a search engine do for me?

How many of your friends really understand what a search engine does? I have found that most of the people I associate with do not understand that a search engine does not actually search the Internet... so what does it search, then?

Even though the popular search engine Google has indexed over 8,000,000,000 pages that you can search, you are not searching the entire Internet... you are searching the Internet as Google sees it. What does this really mean? Google does not want to keep track of all of the trash and garbage on the Internet; it wants to keep track of the data it thinks people want to know about. Google has developed a very complex algorithm that lets it make an automated decision about each page it potentially indexes, based on predetermined and proprietary parameters. One of the greatest challenges to advertisers and marketers is figuring out exactly what a search engine is looking for so that their pages show up in the results! If you merely create a web page and then search for it an hour later, you will not find it there! Maybe an example will help...
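The point that an engine searches its index, not the Internet, can be shown with a toy inverted index in Python. Queries are answered entirely from a mapping built ahead of time, so a page the crawler never saw simply cannot appear (the page names and text below are made up):

```python
from collections import defaultdict

def build_index(pages):
    """Map each word to the set of page names containing it."""
    index = defaultdict(set)
    for name, text in pages.items():
        for word in text.lower().split():
            index[word].add(name)
    return index

def search(index, word):
    # The engine only consults its index; uncrawled pages never appear.
    return sorted(index.get(word.lower(), set()))

pages = {
    "page-a": "Linux security news",
    "page-b": "Windows security updates",
}
index = build_index(pages)
print(search(index, "security"))       # ['page-a', 'page-b']
print(search(index, "brand-new-page")) # [] -- not crawled, so invisible
```

A real engine layers ranking, crawling, and scale on top of this, but the principle is the same: your brand-new page is invisible until the crawler visits it and the index is rebuilt.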

A search engine is similar to a person whose occupation requires remembering a lot of things... When you need to know something, you go to this person because they usually have the answer, or something close to what you want. Well, this person has to obtain the information somehow, and has to prioritize what information to learn or retain. Google is a very intelligent person capable of retaining a great deal of information (it does run on over 100,000 Linux machines), which gives you a very good chance of finding the data you want. The only problem is that, like most people, Google does not know everything... and probably never will...

If you really had the capacity to search the entire Internet in the time your favorite search engine takes to return results, then the world you live in would have massive amounts of processing power and more bandwidth than you would know what to do with...

Keeping this in mind, it is very good for all of us that there is more than one search engine... We would not want one company to define the Internet we are able to search, just as we would not want one company deciding what we can do with our computers... (unless you use Windows, in which case you have already handed the keys to Microsoft).