Part IV. IT Professional 2000 - Today
[Previous] [Index] [Next]

Information Technology

It has been eight years since my parents purchased that 486DX computer in April of 1992. Those eight years have taken me from merely learning how to type in WordPerfect to designing and implementing secure and reliable information systems using Linux and Unix. The journey took me into the areas of computer hardware, operating systems, applications, software development, networking, the Internet, and security. These major components formed the foundation of my career in information technology.


High Availability Clustering

Those eight years have also taken my 486DX-33 from being a simple home desktop for word processing to an experimental server, and now to a full-time Internet server. The 486DX-33 provides information services for many people, including virtual hosting and email, as well as the all-important DNS service for multiple domains. Running all of those services on a single machine created a single point of failure: the failure of that one 486 could bring down every service, including DNS, web, and email. This situation was especially bad now that the machine is colocated at a remote site where I do not have easy physical access to it. The problem gave me the incentive to find and implement a solution, and the solution I settled on was high availability clustering.

High availability clustering is a system architecture that allows other nodes to take over for a failed node. There are many different types of high availability architectures. The one I selected was an active-active, loosely coupled cluster. I chose active-active because I was using two 486DX-33 servers for the cluster, and I did not want one node running at full load while the other sat idle; that would also have been a big waste of electric power. The loosely coupled design was not my original choice, but technical and economic reasons compelled me to adopt it.

I started the implementation of the cluster by pairing the original 486DX-33 with another 486DX-33 server of identical hardware and software. The two systems were configured to detect a failure and reroute traffic to the surviving node. Every component was implemented successfully, and then I came to the most difficult part: a shared storage system. The storage system was a RAID array with a RAID controller in each machine, and a clustering cable (SCSI Y) attached both nodes to the array. I installed FreeBSD on both nodes and mounted the user data and system applications on the shared storage array. Now my cluster was complete, or so I thought. Unfortunately, as soon as I had both nodes working, they crashed at the same time! That was not good for what was supposed to be high availability. I soon realized that the two nodes were clashing with each other in trying to access the shared storage, and the clashing corrupted the file system, causing both computers to crash.

I later learned that this was the classic concurrent resource problem that people first encountered when time-sharing machines came into use in the 1960s. The rule from computer theory states that only one entity may access a particular resource at a particular time. That entity acquires something called a lock, which restricts access to the resource until the entity releases it. This allows concurrent resource sharing without data corruption.
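To illustrate the idea (this is a minimal sketch of advisory file locking using Python's standard fcntl module, not the mechanism FreeBSD or a cluster file system actually uses; the lock file path is hypothetical):

    import fcntl

    # Two processes running this code cannot hold the lock at once:
    # the second blocks until the first releases it.
    with open("/var/lock/shared-resource.lock", "w") as lockfile:
        fcntl.flock(lockfile, fcntl.LOCK_EX)       # acquire an exclusive lock
        try:
            # ... read or modify the shared resource safely here ...
            pass
        finally:
            fcntl.flock(lockfile, fcntl.LOCK_UN)   # release the lock

With shared storage, the hard part is that both nodes must honor the same locks, which is exactly what my two independent FreeBSD systems were not doing.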

I searched for a solution to this problem. The technically feasible solutions I came across were NFS and GFS. The NFS solution required a NAS (Network Attached Storage) system, or a server with storage, to act as the NFS server. The problems with this solution were that the NAS system would itself be a single point of failure, that a third system dedicated to NFS would have to be purchased, and that NFS was slow. Addressing the single point of failure would require purchasing a second NAS system, which made the NFS solution both costly and inefficient.

The second solution was GFS, the Global File System. GFS allows multiple nodes to concurrently access a pool of shared storage. At first I believed GFS would be the way to go: it did not require the purchase of new servers, and since GFS works on direct-attached storage, it was much faster as well. Unfortunately, GFS required Linux while I was using FreeBSD, and, most importantly, GFS required special controllers and disk drives to do the resource locking, which made the solution prohibitively expensive. The controllers alone would cost more than a NAS unit, and they were difficult to acquire. Needless to say, I settled on a share-nothing, loosely coupled cluster architecture for my high availability system.

The cluster was finally complete; I tested the failover capabilities successfully and put it into production. Implementing a cluster taught me the fundamentals of enterprise system integration, not to mention that I sleep better at night too.
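The failure detection itself can be surprisingly simple. The following is a minimal sketch of the heartbeat idea, not my actual scripts; the peer hostname, interface name, and takeover address are hypothetical:

    import subprocess
    import time

    PEER = "node2.example.com"          # hypothetical peer node

    def peer_alive():
        # Send a single ICMP echo request; a nonzero exit means no reply.
        result = subprocess.run(["ping", "-c", "1", PEER], capture_output=True)
        return result.returncode == 0

    # Poll the peer; if it stops answering, take over its service address.
    while peer_alive():
        time.sleep(5)

    # Bring up the failed node's IP as an alias (hypothetical FreeBSD command).
    subprocess.run(["ifconfig", "em0", "alias", "192.0.2.10",
                    "netmask", "255.255.255.255"])

A production failover system also has to guard against false alarms and split-brain situations, which is where most of the real work goes.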


Databases

My experience with databases goes back to the days when I completed my first search engine. Though I did not know it at the time, I was working with a database. Until then, the database manipulation code had always been written by someone else. The databases I worked with in the beginning were flat files or Unix DBM. These databases were very simple and easy to work with, and the applications that used them were more than adequate for the task. But as my applications grew in sophistication and features, not to mention now having to run in a clustered environment, these simple databases were no longer adequate for the job. Security and reliability were other factors that pushed me to look for a better database solution.
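For readers who have never seen Unix DBM, it is a simple on-disk key-value store. A minimal sketch using Python's standard dbm module (the file name and keys are hypothetical):

    import dbm

    # Open (or create) a DBM file; keys and values are byte strings.
    with dbm.open("users.db", "c") as db:
        db[b"fjen"] = b"Fanying Jen"      # store a record
        print(db[b"fjen"].decode())       # retrieve it: Fanying Jen

A store like this is fast for simple lookups, but it offers no query language, no access controls, and no protection against concurrent writers.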

The database that had the scalability, reliability, and security I needed was an SQL database. An SQL database is a structured query language database that allows very fast queries over large data sets. Many SQL databases also have locking capabilities that prevent two or more entities from accessing the same records at the same time, thus preventing database corruption. In addition to this reliability feature, SQL databases generally have security controls that reduce the risk of compromises of data integrity and confidentiality, and of data corruption between applications.

The first SQL database I worked with was MySQL, since it was open source and easy to configure, and I was able to quickly integrate my application with it. Unfortunately, I discovered that MySQL, though scalable and very fast, lacked the crucial two-phase commit procedure. Two-phase commit ensures that any change to the database is atomic: it occurs either completely or not at all, with no in-between. This protects the data integrity of the database, which is especially crucial in a clustered operation like mine. I therefore began learning PostgreSQL, which does have two-phase commit at some cost in speed, but it was fast enough and is open source as well. PostgreSQL was a very different beast for me. I struggled with PostgreSQL, especially its security systems, which I do like. Though I have been able to create small demonstration applications with PostgreSQL, I am still working on it today.
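To illustrate the atomicity I was after, here is a minimal sketch using Python's built-in sqlite3 module rather than MySQL or PostgreSQL; the table and values are hypothetical:

    import sqlite3

    conn = sqlite3.connect("demo.db")
    conn.execute("CREATE TABLE IF NOT EXISTS accounts (name TEXT, balance INTEGER)")
    try:
        # Both updates commit together or not at all.
        conn.execute("UPDATE accounts SET balance = balance - 100 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 100 WHERE name = 'bob'")
        conn.commit()
    except sqlite3.Error:
        conn.rollback()   # any failure leaves the database unchanged
    finally:
        conn.close()

If the machine crashes between the two updates, the transaction is rolled back on recovery, and no money vanishes from the ledger.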


LDAP Directories

The directory I learned in the last year was not the directory most people associate with a file system. This directory is analogous to the Yellow Pages from your local phone company, and it is called LDAP. LDAP is a database arranged in a tree-like structure that permits very fast queries, at speeds an SQL database might not be able to keep up with. LDAP was introduced to me by one of my colleagues when I was searching for a secure central authentication solution; NIS was not an option. My systems and their services had grown to the point where managing user access had become burdensome.
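A directory query is compact compared with SQL. Here is a minimal sketch assuming the third-party python-ldap package; the server, credentials, base DN, and filter are hypothetical:

    import ldap   # third-party python-ldap package

    conn = ldap.initialize("ldap://ldap.example.com")
    conn.simple_bind_s("cn=admin,dc=example,dc=com", "secret")

    # Find a user entry anywhere under the base DN.
    results = conn.search_s("dc=example,dc=com", ldap.SCOPE_SUBTREE,
                            "(uid=fjen)", ["cn", "mail"])
    for dn, attrs in results:
        print(dn, attrs)
    conn.unbind_s()

The tree structure means a search can be scoped to one branch of the organization, which is what makes directory lookups so fast.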

Learning LDAP was very difficult, especially since I had to consider the security implications of my implementation. It took me a year just to learn the terminology of LDAP, on top of adding encryption and access controls. I eventually learned enough LDAP to create a more uniform authentication system for my cluster and for a third system, an Athlon, that I had just placed into production. I even created LDAP applications that allow quick manipulation of the directories. Furthermore, I wrote an application that synchronizes a single master LDAP server with multiple slave servers (the core idea is sketched below), which is perfect in a clustering environment and saves a lot of labor. My LDAP work is still a work in progress today, and I am increasingly integrating my current and legacy services into the LDAP structure.
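My synchronization tool is not reproduced here, but the core idea can be sketched as follows, again assuming python-ldap; the hosts, credentials, and DNs are hypothetical, and a real deployment would lean on OpenLDAP's own replication machinery:

    import ldap
    import ldap.modlist

    BASE = "dc=example,dc=com"

    master = ldap.initialize("ldap://master.example.com")
    replica = ldap.initialize("ldap://replica.example.com")
    master.simple_bind_s("cn=admin," + BASE, "secret")
    replica.simple_bind_s("cn=admin," + BASE, "secret")

    # Copy any entry present on the master but missing on the replica.
    # (Assumes parents are returned before children, fine for a sketch.)
    for dn, attrs in master.search_s(BASE, ldap.SCOPE_SUBTREE):
        try:
            replica.search_s(dn, ldap.SCOPE_BASE)
        except ldap.NO_SUCH_OBJECT:
            replica.add_s(dn, ldap.modlist.addModlist(attrs))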


DigiTrade Inc., ILX Systems

I landed a position as a Unix system administrator at DigiTrade Inc. after graduating from school in late 2000. DigiTrade Inc. is a subdivision of ILX Systems Corp., which is itself a division of the Thomson Corp. family of companies. DigiTrade designs and builds high quality financial trading systems for a variety of financial companies at economical cost. Its clients include brokerages, from small operations to large banks wanting to provide investment services for their customers. ILX Systems provides real-time quote feeds and news to large brokerages and fund managers.

Working at DigiTrade was my first experience in a corporate environment. Though I had the technical skills, I was weak in the area of people skills, particularly communicating my thoughts to managers. My manager, Wilfred Nyaki, or Fred as he preferred to be called, explained to me that the most important thing when working in a corporate environment is understanding the business. Prior to working at DigiTrade, my decisions were made on technical grounds; now I would have to make decisions based on business needs, because businesses must make decisions that generate as much revenue as possible with as little investment and risk as possible. That is where I learned the concepts of benefits versus costs, return on investment, and business processes.

Learning business concepts was difficult for me at first, particularly as this was my first employment. However, with help from my manager, I slowly began to understand the decision-making process. In addition to my manager, my colleagues Christopher M. Jones, another system administrator, and Javier Roman, a facilities person, supported me greatly in my adjustment to the corporate environment and assisted me in interacting with other personnel in the company. Working with my manager and colleagues enabled me to take the first steps from being a person who enjoys computers as a hobby to an IT professional who solves business problems using technology.


Integration

By late 2001, my growing skills across a wide area of information technology gave me a foundation in integration. Many individual applications were integrated into a cohesive and manageable system while maintaining the modularity of the applications. Previously separate services such as web, email, instant messaging via Jabber, and LDAP were brought together. Integration also enabled me to reduce the labor of management. But in addition to saving labor and providing more powerful services, I had to consider security when planning integrations: poor integration leads to easy security compromises and privacy breaches, as Microsoft found out the hard way with Code Red, Nimda, and SirCam.

While integration could lead to more powerful applications, my strongest motive for integrating was to be able to audit the system more thoroughly and react swiftly should an event arise. Information security and disaster planning have become my primary focus, particularly since I worked for DigiTrade, where security is a major concern. In addition to integrating applications within a single system, I am currently working with others to integrate security intelligence systems together and fine-tune them, so as to extend our eyes and ears against the increasing threats to information systems.

Integration enables an increased level of efficiency in the flow of information and flexibility in the management of information. Security plays the largest role in the decision-making process of integration: confidentiality, integrity, and authenticity must be preserved at all costs. From my own observation, attacks on Internet infrastructure are steadily increasing in number.


September 11, 2001

September 11, 2001 started out just like September 10, 2001, but by the end of the day things had changed dramatically. I was serving jury duty that day and, like so many other people, saw the second jet hit the World Trade Center and then watched the towers collapse. I don't need to tell you my reaction to the sight. The events of September 11 changed many things, including information technology.

Prior to September 11, my focus on information security was basically a response to the threats on the Internet. September 11 elevated security from a major priority to the main priority. The events also changed my career objectives. My original objective was to manage and maintain large information systems; now my primary objective is to enable people to properly secure their information infrastructures.

This change of objectives prompted me to engage in information security. My first steps were to join security-oriented mailing lists and attend meetings such as those of the ISSA. I have also refocused my efforts from information system development to information security, educating myself in more advanced security concepts. Though I still work on information systems and architecture, I am increasingly integrating security into the fundamental design process.

My experiences from when I first faced security compromises in 1996 have enabled me to understand security concepts, implementation, and operations today. I am fortunate that I was compromised back when my servers were nothing more than play machines. Nowadays, I run a full-fledged information system operation where any security compromise would be disastrous.


Open Source Advocate

My transition from Linux skeptic to Open Source and Linux advocate was a long and hard journey. My initial impressions of Linux in 1995 were not very positive, and I had no help from anyone except Rex Ballard. I didn't even use Linux as a desktop until 1999, a full four years after I first installed it. The one thing that kept me going on Linux was the freedom to decide what I wanted to do with the computer, at no monetary cost.

Using Microsoft Windows as a server would have required me to junk my 486 and purchase a new server; every upgrade of Windows required a new computer, not for features but simply to remain compatible. The licenses alone would have cost more than a new car. Furthermore, the proprietary nature of the software, along with poor auditing and poor engineering design, results in easy security compromises and makes detection and recovery harder. With Linux, and later FreeBSD, I did not have to worry about any of this.

Windows on the desktop still held its ground for a while because of the lack of productivity software and applications on Linux. However, that began to change in late 1998 and early 1999, when productivity suites and desktop environments began to appear: WordPerfect was finally released for Linux by Corel, and two desktop environments came to the fore, KDE and GNOME. Additional advances, including simpler installation and configuration, made Linux more viable on the desktop. I finally jumped to Linux completely in 1999, and I haven't looked back since.

Microsoft was already attempting to increase its control of and restrictions on the desktop. Combine those restrictions with security vulnerabilities that are difficult even for an experienced computer user like myself to resolve, and Windows was no longer viable for the desktop where critical operations are involved. Critical operations are those in which the user must authenticate to a remote resource that must be protected from compromise; examples are a user entering credit card information in Internet Explorer, or a user authenticating to a company's database.

Linux and Open Source opened the doors to my career in information technology. The cost of Microsoft Windows itself, along with the hardware and labor involved, simply would not have allowed me to acquire the knowledge and skills required to succeed in the information technology field. I have now joined Linux user groups to share my knowledge and experience with people just starting out with Linux and Open Source.


Today

As I look back from the time my parents purchased that first 486 to the time I first installed Linux, little did I know that what started out as a hobby would become my livelihood. Though computers are now my career, I still enjoy them as a hobby; that is what maintains my motivation, and that is how I believe it should be. The next most important thing I discovered while working with computers is diversification: diversification in skills and in technologies. Having mastered Linux and Open Source technologies, I am now educating myself in new skills, including enterprise information systems such as MQSeries, Oracle, DB2, and iPlanet.

Currently, I am working with organizations and individuals to inform and assist people about open source technology and information system security. Oh, and by the way, whatever happened to my first 486DX-33? That 486 has been coupled with a sister server as a cluster running my information system. Thanks to Linux, FreeBSD, and Open Source, my i486DX-33, now with 128MB of RAM and 18GB of RAID 1 storage, has run the gauntlet of obsolescence and is still in production today, a full decade after my parents purchased it. Not bad for a ten-year-old computer.


[Previous] [Index] [Next]

Copyright © 1996, 2002 Fanying Jen. All Rights Reserved.