Wednesday, December 22, 2010

Technology Today

There are millions of possible answers, I think, if I were to enumerate the reasons why we use technology. For me, the general ones are the following:

  • Advances in the field of education (education nowadays is not taught only in school; it can also be delivered through the Internet)

  • Benefits of medical technology (many people live longer because medicines and advanced medical technologies make people's lives healthier)

  • Travel and exploration (airplanes, ships, etc. let people see different sides of the world)

  • Communication (people can easily communicate and interact with their loved ones because cellphones, computers, etc. exist)

  • Acting as a guide (physically impaired persons find light as technology leads the way when they cannot see, helps them communicate when they cannot talk, and helps them listen when they cannot hear, etc.)

  • Living a peaceful life (armies, police, security guards, etc. fight criminals with their equipment to protect innocent civilians from harm or danger)

There are still a lot of other reasons behind the use of technology, but I enumerated only the ones to which technology makes a big contribution.

One point that has influenced us is that people are becoming dependent on technology, which has resulted in laziness and a lack of self-discipline in many people, even though in other respects technology is really beneficial to us. What I am trying to say is that we must instill self-control and discipline in ourselves so that technology does not end up abusing us.

I think the factors involved in technological change are people's dissatisfaction and the drive to keep things as close to perfect as possible, or in other words, to make complicated things easier.

Thursday, September 2, 2010

Enrollment Input Form (PRF) and the Enrollment University Interface

The enrollment input form (PRF) is used every semester; students merely write out the information needed. Data inputs such as name, gender, section, ID number, type (whether you are an old or new student and whether you belong to a day or evening class), subjects to be enrolled (consisting of the code, title, description, units, day, time, and room), course, year, school year, scholarship organization, semester, and the student's address are all the information needed for enrollment. If we observe the PRF, its structure is really just a simple paper form with printed labels. In practice, it serves as a guide for the encoders of the real enrollment input form, which is part of an automated system. It basically acts as the finalization of the input and also serves as the registrar's basis.
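To make the list of PRF fields above concrete, here is a rough sketch of how that record might be represented in code. This is only an illustration; the field names and types are my own assumptions, not the university's actual schema.

```python
# A rough sketch of the PRF data, based only on the fields listed above.
# Field names and types are assumptions, not the university's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubjectEntry:
    code: str
    title: str
    description: str
    units: int
    day: str
    time: str
    room: str

@dataclass
class PRF:
    name: str
    gender: str
    section: str
    id_number: str
    student_type: str          # "old" or "new"
    session: str               # "day" or "evening"
    course: str
    year_level: int
    school_year: str
    semester: str
    scholarship: str
    address: str
    subjects: List[SubjectEntry] = field(default_factory=list)
```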

I find that the enrollment university interface is structurally the same as the enrollment input form; I got a glimpse of it during enrollment. I can say that a draft is never "nothing" to the coexistence of an automated system, because I believe the draft is the beginning of an idea and the automated system builds on that idea. Consider the fact that until now we are still using both, and not just one alone. The point is that the draft comes first, and the common thing people do when making systems is to start from scratch; the draft can then be used as a basis, and if you want to understand the automated system, you can trace it back to where it started. And we all know about that!

I have already observed this kind of situation even outside the enrollment input form (manual or automated), such as when I do programming. Before I code a program, I first visualize the output, since I cannot do it by mere imagination. Usually, we get a piece of paper and a pen and sketch the structure of the program's output. What I realized when I read the question is that I have been doing my programs the proper way. If someone tries to code on their own without a draft, how will they code without any guide? If someone can do it, their mind must be very good at visualization; truly, that is hard to do, seriously! So, from the situation I have cited, I hope someone can appreciate and find the real importance of a single draft. Even though a draft is something meant to be forgotten once newer, nicer-looking things arrive, it still contributes a great deal to what is now being implemented.

For the encoder's part when inputting data, as I said a while ago, he or she is guided by the PRF. Come to think of it, students are its major users. Students are asked by advisers to write down the information requested on the PRF, because what is written there is what will also be reflected in the enrollment system interface. The encoder just records the information needed and, when finished, throws the PRF in the trash, simply because the data has already been stored, and what has been stored in the automated system is automatically recorded in the database itself; in other words, it is final.

Since I have already pointed out the structure of the system (manual and automated), I come to think about the appearance, and how the labels are arranged and used. I find that both systems appear to be the same; they differ only in the arrangement of labels. To sum it up, both have the same meaning; they are just organized differently.

What I learned from contrasting and discussing the enrollment system (manual and automated) is that we tend to understand certain things through comparison. And I think that is not bad, since from that experience a person's approach, especially when it comes to making a system, does not depart from what is really intended. Even though certain things are detailed separately, the relation of one component to another must still act as one, and that is what the system is meant for.

Take the example of filling up an application form where the automated system does not coordinate with its paper draft, or works the other way around; the structure of the application system is really different from the draft being filled up. In that kind of scene, the application form just turns out to be a waste, because what is the sense of having a form that the electronic form itself cannot follow? I just want to point out that drafts are guides, and through a draft the electronic system gets its information. If the information needed and the structure of input are the same in both, then there is nothing wrong with it. If the two means of storing or getting information are the same, then mistakes or falsification can be avoided. That, I can say, is the big reason why they are supposed to be the same. I agree with the idea of using both, since they help each other, and without one the process would be really difficult to handle. In other words, one feels empty without the other's presence.

To sum it up, in the world of work a draft always comes first, since ideas all start there. When a draft is well planned, it is more likely to be successful in terms of giving service to people. The product would not have been achieved if it had not been planned well in a draft. In other words, the sharing and binding of ideas happens in the draft itself. We all know that even when new things come to improve on the draft, the practice of making one is like a tradition that will never fade.

ACCION, HONEY LYNNE C.
BSCS-4


Thursday, July 29, 2010

Networking System in the University

In the university, a network is ideally present everywhere. Technology is really widespread and seems to touch the heart of every person. Whenever anyone talks about it, they tend to say, "I love technology because it helps in many things." Since people already live lives surrounded by different kinds of gadgets, they have come to understand its coexistence with them. Because the world has it, humans somehow live an easier life. But even though technology is there to supply our needs, perfection can never be achieved. Disadvantages are truly inevitable, and knowledgeable people can only try to lessen the bad effects. Technology specialists are there to attend to the issues a system or a network faces. They are expected to repair problems and deliver effective, good service to users.

If we consider a university implementing a network structure that handles a number of computers from different colleges and offices, what comes to my mind first are the users, and whether they still feel content with the service being provided. Others would simply observe that when there are many users, the flow of connections will somehow be affected. The personnel in charge of the network find ways of blocking social networking sites, games, and other prohibited sites that could slow the Internet connection. As a matter of fact, slow Internet connections are the main cause of a lot of student headaches. One wonders how things are managed down the network and how services are distributed equally to students. From what I understand about the service students have experienced on the network, most of the older computers seem to give not-so-good service, while the newer ones have clear advantages; they are good because new machines are built to deliver better service to their users. Even with new equipment on the line, of course, people should maintain the capability of their computers. If people have sense, it really only comes down to how they take care of and maintain the effectiveness of their devices. Technology can give good service, but it will not maintain that good service by itself; it needs someone behind it for continuity.


Moving on to the questions our group distributed to the said network specialist, he personally answered some of them. I have attached everything I learned and understood from his answers.

A. Reyes stated that “[T]alking about the hardware components and technology used, basically I, assigned as the network administrator, am entrusted to maintain our different servers to run 24/7. Currently, we have our Web Server hosted here in our University on our HP ProLiant ML350 Server. It’s an old but stable server set up here in our Networks Office and has been active since Engr. Val A. Quimno, not yet a dean then, was appointed as the Network Administrator. The said server has the following specifications:
• Intel Xeon 3.0 GHz, 3.2 GHz, or 3.4 GHz processors (dual processor capability) with 1MB level 2 cache standard. Processors include support for Hyper-Threading and Extended Memory 64 Technology (EM64T)
• Intel® E7520 chipset
• 800-MHz Front Side Bus
• Integrated Dual Channel Ultra320 SCSI Adapter
• Smart Array 641 Controller (standard in Array Models only)
• NC7761 PCI Gigabit NIC (embedded)
• Up to 1 GB of PC2700 DDR SDRAM with Advanced ECC capabilities (Expandable to 8 GB)
• Six expansion slots: one 64-bit/133-MHz PCI-X, two 64-bit/100-MHz PCI-X, one 64-bit/66-MHz PCI-X, one x4 PCI-Express, and one x8 PCI-Express
• New HP Power Regulator for ProLiant delivering server level, policy based power management with industry leading energy efficiency and savings on system power and cooling costs
• Three USB ports: 1 front, 1 internal, 1 rear
• Support for Ultra320 SCSI hard drives (six hot plug or four non-hot plug drives supported standard, model dependent)
• Internal storage capacity of up to 1.8TB; 2.4TB with optional 2-bay hot plug SCSI drive
• 725W Hot-Plug Power Supply (standard, most models); optional 725W Hot-Pluggable Redundant Power Supply (1+1) available. Non hot plug SCSI models include a 460W non-hot plug power supply.
• Tool-free chassis entry and component access
• Support for ROM based setup utility (RBSU) and redundant ROM
• Systems Insight Manager, SmartStart, and Automatic Server Recovery 2 (ASR-2) included
• Protected by HP Services and a worldwide network of resellers and service providers. Three-year Next Business Day, on-site limited global warranty. Certain restrictions and exclusions apply. Pre-Failure Notification on processors, memory, and SCSI hard drives.
Aside from it, our mail server, running on a Compaq ProLiant ML330 Server, our oldest server, is also hosted here in our Networks Office, together with other servers such as the Proxy and Enrollment Servers. Both the proxy and enrollment servers run on microcomputers/personal computers, but with higher specifications so they can act as servers.

All servers are connected on a shared medium grouped as one subnetwork. In general, our network follows an extended star topology which is connected to a dual-WAN router that serves as the load balancer between our two Internet Service Providers. All other workstations are grouped into different subnetworks, each in a star topology, branching out from our servers' subnetwork as in an extended star topology. At present, we are making use of class C IP addresses for private IP address assignments. Some workstations' IP assignments are configured statically (for example, laboratories) while others are dynamic (for example, offices). All workstations are connected via our proxy servers, which do some basic filtering/firewalling to control users' access to the Internet, aside from the router's filtering/firewall management. So, whenever any workstation has to connect to the Internet, it has to pass through both a software-based and a hardware-based firewall.

All workstations are connected via a proxy server. It means that whenever a workstation is turned on, it requests an IP address from the server (for dynamically configured IP addresses) and connects to the network after the IP address is acquired. Once the connection is established, each system can communicate and share resources within the same subnetwork and with the servers, following the concepts discussed in your Computer Networks class.
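To illustrate the idea of grouping workstations into class C private subnetworks as described above, here is a small sketch using Python's ipaddress module. The subnet and addresses are made up for the example and are not USEP's actual addressing plan.

```python
# Illustrative only: the subnet and addresses below are invented, not USEP's real plan.
import ipaddress

# A class C private subnetwork, e.g. one subnet per laboratory or office.
lab_subnet = ipaddress.ip_network("192.168.10.0/24")

# A statically configured laboratory workstation and a dynamically assigned office PC.
lab_pc = ipaddress.ip_address("192.168.10.25")
office_pc = ipaddress.ip_address("192.168.20.40")

for host in (lab_pc, office_pc):
    same_subnet = host in lab_subnet
    print(f"{host} in {lab_subnet}? {same_subnet} (private address: {host.is_private})")
```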

Basically, our servers are expected to be in good condition since they are required to be up 24/7. Daily, during my vacant period, the servers are monitored, which includes checking logs, checking hardware performance such as CPU health, etc. If problems are observed, remedies are applied then and there. Once a week, a regular overall checkup is done as preventive maintenance so that, as much as possible, we do not experience longer downtime.
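As a rough illustration of that kind of daily check, here is a minimal sketch using only the Python standard library. The thresholds and the monitored path are arbitrary assumptions, not the Networks Office's actual values.

```python
# A minimal daily-check sketch using only the standard library.
# Thresholds and the monitored path are arbitrary assumptions for illustration.
import os, shutil, datetime

def check_server_health(path="/", max_load=4.0, min_free_gb=10):
    report = {"time": datetime.datetime.now().isoformat(), "alerts": []}
    load1, load5, load15 = os.getloadavg()      # CPU load averages (Unix-like systems)
    total, used, free = shutil.disk_usage(path)
    report["load_1min"] = load1
    report["free_gb"] = free / 2**30
    if load1 > max_load:
        report["alerts"].append("CPU load is high")
    if report["free_gb"] < min_free_gb:
        report["alerts"].append("Disk space is low")
    return report

print(check_server_health())
```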

When I was appointed as the Network Administrator, everything was already in place except for some minor changes. Basically, different networking standards were already observed, such as cabling standards (TIA/EIA 568A/B), different IEEE standards as discussed in your Computer Networks subject, etc.

As I have mentioned, we have implemented both software- and hardware-based filtering/firewalls. Basically, risks or vulnerabilities and different mitigation techniques were considered to increase security in our network. Aside from filtering/firewalls, constant monitoring of network activity also increases the security of the system.

Major interferences are normally encountered as an effect of unforeseen events beyond our control, such as blackouts and the like. Such interference would of course affect the University's day-to-day business, since it obviously paralyzes all activities that rely on electricity, and it might further damage our network devices, etc., which may later be the reason for longer downtime. Problems encountered by our providers, such as connections to the national/international gateways, also affect University business, such as communicating with the University's business partners within and outside the country.


With regard to the book on networking I read, authored by G. Keiser, which also relates to what the university specialist talked about, it states that "Once the hardware and software elements of a local area network (LAN) have been properly installed and successfully integrated, they need to be managed to ensure that the required level of network performance is met. In addition, the network devices must be monitored to verify that they are configured properly to ensure that corporate policies regarding network use and security procedures are followed. This is carried out through network management, which is a service that uses a variety of hardware and software tools, applications, and devices to assist human network managers."

In an actual system, different groups of network operations personnel normally take separate responsibility for issues such as administration, performance monitoring, network integrity, access control, and security. There is no single method of organization; each organization may take a different approach to fit its own needs. Two categories are commonly used, namely LAN element management and LAN operations management. The first deals with administrative and performance aspects of individual network components, whereas the second is concerned with the operation of the LAN as a whole and its interaction with other networks.

What would probably aid an effective and efficient network environment ideal for the university is knowing the basic network management functions. These are performance, configuration, accounting, fault, and security management.

Performance Management

In carrying out performance management, a system monitors parameters such as network throughput, user response times, line utilization, the number of seconds during which errors occur, and the number of bad messages delivered. This function is also responsible for collecting traffic statistics and applying controls to prevent congestion. Another performance management function is to continually monitor and control the quality of service. This may include assigning threshold values to performance or resource parameters and informing the management system or generating alarms when these thresholds are exceeded. Examples of resource parameters include memory usage, free disk space, and the number of concurrent logins or sessions.

Performance management also permits proactive planning. For example, a software-based capacity-planning tool can be used to predict how network growth will affect performance metrics. Capacity planning involves making plans to ensure that the network will be able to support the anticipated resource demands.
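A small sketch of the threshold-and-alarm idea described above; the parameter names and limits are invented for illustration.

```python
# A toy illustration of threshold-based alarms; parameter names and limits are assumptions.
thresholds = {
    "memory_usage_pct": 90,
    "free_disk_gb": 5,
    "concurrent_sessions": 200,
}

def check_thresholds(sample):
    alarms = []
    if sample["memory_usage_pct"] > thresholds["memory_usage_pct"]:
        alarms.append("memory usage above threshold")
    if sample["free_disk_gb"] < thresholds["free_disk_gb"]:
        alarms.append("free disk space below threshold")
    if sample["concurrent_sessions"] > thresholds["concurrent_sessions"]:
        alarms.append("too many concurrent sessions")
    return alarms

print(check_thresholds({"memory_usage_pct": 93, "free_disk_gb": 12, "concurrent_sessions": 180}))
```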

Configuration Management

The goal of configuration management is to monitor both network setup information and network device configurations in order to track and manage the effects of the various constituent hardware and software elements on network operation. Configuration management allows a system to provision network resources and services, to monitor and control their state, and to collect status information. This provisioning includes reserving bandwidth for a user, distributing software to computers, scheduling jobs, and updating applications on corporate computers. In addition, information technology support personnel need to know what hardware, operating system, and application software resources are installed on both fixed and mobile computers.
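As a toy example of collecting configuration information from a machine, the sketch below gathers a few inventory fields with the Python standard library; which fields to track is just an assumption here.

```python
# A small sketch of collecting configuration/inventory data from one machine,
# using only the standard library; the fields gathered are just examples.
import platform, socket

def inventory():
    return {
        "hostname": socket.gethostname(),
        "operating_system": platform.system(),
        "os_release": platform.release(),
        "machine": platform.machine(),
        "python_version": platform.python_version(),
    }

print(inventory())
```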

Accounting Management

The purpose of accounting management is to measure network utilization parameters so that individuals or groups of users on the network can be regulated and billed for services appropriately. This regulation maximizes the fairness of network access across all users, since network resources can be allocated based on their capacities. Thus accounting management is responsible for measuring, collecting, and recording statistics on resource and network usage. In addition, accounting management may also examine current usage patterns in order to allocate network usage quotas.
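Here is a toy sketch of the accounting idea: summing usage per user and comparing it against a quota. The usage figures and the quota value are invented.

```python
# A toy accounting-management sketch: usage figures and the quota are invented.
monthly_quota_mb = 2048

usage_log = [
    ("student_a", 350), ("student_b", 1900), ("student_a", 400),
    ("student_c", 2200), ("student_b", 300),
]

totals = {}
for user, megabytes in usage_log:
    totals[user] = totals.get(user, 0) + megabytes

for user, total in sorted(totals.items()):
    status = "over quota" if total > monthly_quota_mb else "within quota"
    print(f"{user}: {total} MB ({status})")
```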

Fault Management

Faults in a network, such as a physical cut in a communication line or the failure of a circuit card, can cause portions of a network to be inoperable. Since network faults can result in system downtime or unacceptable network degradation, fault management is one of the most widely implemented and important network management functions. With the growing dependence of people on network resources for carrying out their work and communications, users expect rapid and reliable resolution of network fault conditions. Fault management involves the following process:

- Detecting fault or degradation symptoms; this is usually done through alarm surveillance.
- Determining the origin and possible cause of faults, either automatically or through the intervention of a network manager.
- Once the faults are isolated, the system issues trouble tickets that indicate what the problem is and possible means of resolving it.
- Once the problem has been fixed, the repair is operationally tested on all major subsystems of the network.
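A toy sketch of the detect-isolate-ticket flow listed above; the alarm source, messages, and suggested action are invented for illustration.

```python
# A toy sketch of the detect -> isolate -> ticket flow described above;
# alarm sources and messages are invented for illustration.
import itertools

ticket_ids = itertools.count(1)

def raise_trouble_ticket(alarm):
    return {
        "id": next(ticket_ids),
        "source": alarm["source"],
        "problem": alarm["symptom"],
        "suggested_action": "dispatch technician and verify link",
        "status": "open",
    }

alarm = {"source": "switch-2/port-14", "symptom": "link down for 120 seconds"}
ticket = raise_trouble_ticket(alarm)
print(ticket)
ticket["status"] = "resolved"   # set after the repair is operationally tested
```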

Security Management

The ability of users to gain worldwide access to information resources easily and rapidly has made network security a major concern among network administrators. In addition, the number of network users and personnel who telecommute to access corporate data from outside the corporation adds another dimension to network security. LAN security covers a number of disciplines, including:

- Developing security policies and principles
- Creating a security architecture for the network
- Implementing firewall software to prevent unauthorized access to corporate information from the Internet
- Applying encryption techniques to certain types of traffic
- Setting up virus protection software
- Establishing access authorization procedures
- Enforcing network security

The principal goal of network security management is to establish and enforce guidelines to control access to network resources. This control is needed to prevent sabotage, whether intentional or unintentional, of network capabilities and to prevent the viewing or modification of sensitive information by people who do not have appropriate access authorization.
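As a minimal illustration of controlling access to resources, the sketch below checks a role's permission before allowing an action; the roles and resources are invented examples, not an actual policy.

```python
# A minimal access-control check in the spirit of the goal above;
# roles, resources, and permissions are invented examples.
permissions = {
    "registrar": {"student_records": {"read", "write"}},
    "faculty":   {"student_records": {"read"}},
    "student":   {},
}

def can_access(role, resource, action):
    return action in permissions.get(role, {}).get(resource, set())

print(can_access("faculty", "student_records", "write"))    # False
print(can_access("registrar", "student_records", "write"))  # True
```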

A certain research paper I read points out policies and guidelines for effective network management which I found helpful. It stated that:

Policies and guidelines include IT security policies, organizational security, asset classification and control, personnel security, operations management, and information management. These guidelines, if implemented by the appropriate authorities, will go a long way in alleviating the problems of network insecurity.

IT Security Policy

IT security policies are the rules and practices that an institution uses to manage and protect its information resources. These policies must be developed, documented, implemented, reviewed and evaluated to ensure a properly managed and secured network. Hence, the need for IT security policies in any institution cannot be overemphasized.

Developing Security Policies

Developing security policies involves developing the following: program policies, system-specific policies, and issue-specific policies [1], [2].
Program policies address overall IT security goals and should apply to all IT resources within an institution. The institution's president or an appointed representative must direct policy development to ensure that the policies address the IT security goals of all systems operating within the institution. For instance, program policies can address confidentiality or service availability. All program policies should meet the following criteria:

•Comply with existing laws, regulations, and state and federal policies.

•Support and enforce the institution’s mission statement and organizational structure.

System-specific policies address the IT security issues and goals of a particular system. Large facilities may have multiple sets of system-specific policies that address all levels of security, from the very general (access control rules) to the particular (system permissions that reflect the segregation of duties among a group of employees).

Issue-specific policies address particular IT security issues such as Internet access, installation of unauthorized software or equipment, and sending/receiving e-mail attachments.

Once you have identified the IT security issues you need to address, develop issue-specific policies using the components defined in Table 2.
The guidelines for developing security policies are:

•Obtain a commitment from senior management to enforce security policies.

•Establish working relationships between departments, such as human resources, internal audit, facilities management, and budget and policy analysis.

•Establish an approval process to include legal and regulatory specialists, human resources specialists, and policy and procedure experts. Allow enough time for the review and respond to all comments whether you accept them or not.

Implementing Security Policies

Successful implementation of IT security policies requires security awareness at all levels of the organization. You can create awareness through widely disseminated documentation, newsletters, e-mail, a web site, training programs, and other notifications about security issues. Table 4 outlines the guidelines for implementing IT security policies:

Reviewing and Evaluating Policies

Institutions/organizations should review their security policies periodically to ensure they continue to fulfill the institution's security needs. Each department is also responsible for reviewing and evaluating the effectiveness of its policies and the accompanying procedures. After an institution/organization has developed IT security policies, the appointed security team will evaluate the policies and provide feedback.

Policy Review within the Institution

Each institution/organization should develop a plan to review and evaluate their IT security policies once they are in place. The guidelines are [2]:


Documentation guideline for security policy

Guideline: Define policies
Description:
Define policies by documenting the following information:
•Identify general areas of risk.
•State generally how to address the risk.
•Provide a basis for verifying compliance through audits.
•Outline implementation and enforcement plans.
•Balance protection with productivity.

Guideline: Define standards
Description:
Define IT security standards by documenting the following information:
•Define minimum requirements designed to address certain risks.
•Define specific requirements that ensure compliance with policies.
•Provide a basis for verifying compliance through audits.
•Outline implementation and enforcement plans.
•Balance protection with productivity.

Guideline: Define guidelines
Description:
Define IT security guidelines by documenting the following information:
•Identify best practices to facilitate compliance
•Provide additional background or other relevant information

Guideline: Define enforcement
Description:
Define how policies will be enforced by documenting the following information:
•Identify personnel who are authorized to review and investigate breaches of policy.
•Identify the means to enforce policies.

Guideline: Define exceptions
Description:
Define the possible exceptions to the IT security policies.

Guidelines for implementing IT security policies

Guideline: Create awareness
Description:
Create user awareness using the following methods:
•Notify employees about the new security policies.
•Update employees on the progress of new security policies.
•Publish policy documentation electronically and on paper.
•Develop descriptive security documentation for users.
•Develop user-training sessions.
•Require new users to sign a security acknowledgement.

Guideline: Maintain awareness
Description:
Maintain user awareness of ongoing and new security issues using the following methods:
•Web site
•Posters
•Newsletters
•E-mail for comments, questions, and suggestions
•Assign responsibility for reviewing policies and procedures.
•Implement a reporting plan in which departments report security incidents to designated personnel.
•Implement regular reviews to evaluate the following:
- Nature, number, and impact of recorded security incidents.
- Cost and impact of controls on business efficiency, including third-party vendor compliance.
- Effects of changes to organizations or technology.


References:

G. Keiser, "Local Area Networks."
Jonathan Gana Kolo and Umar Suleiman Dauda, "Network Security: Policies and Guidelines for Effective Network Management."








Tuesday, July 20, 2010

The Design of the Enrollment System

The design of the enrollment system is probably an aid for people who find the enrollment process confusing. The enrollment system design matters most for new incoming students, since they are still new to the school and just adjusting to the environment they have chosen. Through the said design, students can follow what it instructs; in other words, students find it useful and helpful in the process. Even though most people would say that the design is a good guide, there are still issues, such as some students having to ask other people questions, which is inevitable. This mostly happens to those new to the process, who do not yet know where the next step can be found; fortunately, that is not the worst thing to do, because the design is mainly a guide, or a set of procedural steps, for students to know. Being guided by the design does not mean you already know everything written in it; old students at the school can also help you complete the process. The design of the enrollment system just hopes that when students see it, they will follow it, and everything then relies on how eager the students are to find their way to the next step, in other words, to pursue the process.

Without the design, can students say, "I am confident about this enrollment"? Sadly, no. We really find the true importance of a thing when we discard it from the process. Its absence from the system would be a big loss. What would the enrollment system look like without it? If I were to guess, many would say, "What kind of process does this school have?"; "Is this the service the school talks about?"; "What do they think of us, nothing, in this school?" These may be the questions and issues the school would face if that happened. Of course, if the school really stands for education, it should think first about how students will be served and how students can take part in the process without difficulty. I can say that the design itself is one thing the school implemented to give good service to its students.

Tuesday, July 6, 2010

USEP Enrollment System

As a student, I can certainly say something about the enrollment system being implemented this semester. I can relate to and understand what goes on in the process, since students are the actors involved in the enrollment system itself. The enrollment system covers a number of subsystems, both manual and automated. With the number of operations involved, transactions get tense, and a large number of students enroll, so the school has to find ways to provide service to its students. In my long years of residence here at the University of Southeastern Philippines, I have witnessed different versions of the enrollment system. I am sure they really want to try new things and see whether each one will be effective. Every semester of enrollment, I hear issues such as the system being slow; some say that they come to school early to enroll and still finish the next day, since only a small number of students can be accommodated in a day. Others just choose to stay calm, because, after all, they will still be enrolled as long as they remain patient and follow the rules, and everything will be okay.

Turning to the enrollment system implemented this semester, if I were to rate it, it probably passes. Looking at the first procedural step of the system, where students pay their miscellaneous fees, even though the operation is manual, I can say it delivers good service because students are made to line up properly. But if I were to suggest how this service could be improved, I would want the OSCSS as well as the Headlight Office to have their own automated system by next semester or next year, so that transactions will be fast. For sure, bad comments would then lessen.

Scholars of different programs, such as barangay or city officials' scholars, have to exert too much effort when following up their scholarships at the OSS in order to be enrolled. I say this because I have encountered it a lot: you have to line up together with the other scholars in order to be accounted as a scholar for the semester. For the problem I observed in this area, the only solution that came to my mind is an automated system that coordinates between the school and the scholarship organization. If the organization immediately confirms to the school that the student has already been issued a scholarship, then the school can allow the student to enroll, with the system recording that the fees will be paid from a scholarship fund. The student could also be updated through mobile phone or e-mail regarding the status of their scholarship. I think with this idea, students will not be worried or discouraged from enrolling.

The next stop is advising. Here, I may say that the environment is hot, and as the quote always states, patience is a virtue. I say that because the line I faced this semester was quite long, but I still relaxed and just went with the flow. After the long hours of waiting, I was entertained and evaluated. For this area, what I could suggest is an automated system that automatically generates the subjects to be enrolled by the student, rather than the manual recording of inputs, which eats too much time. The system must also be fed with the current and past grades of the students so that it has a basis for the subjects he or she may enroll in. The miscellaneous payments must also be part of the system, since they are also a requirement for evaluation. I suggest this as one solution for improvement because every time I encounter this kind of situation, I imagine that scenario, which would in fact free us from stress and from the old system.
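To make the suggestion concrete, here is a hypothetical sketch of how such a system might pick the subjects a student can enroll in from past grades and prerequisites. The subject codes, the prerequisite chart, and the passing grade (3.0 or better, with 1.0 as the highest) are all assumptions for illustration.

```python
# A hypothetical advising sketch: generate subjects a student may enroll in from
# past grades and prerequisites. Subjects, prerequisites, and the passing grade
# are assumptions, not the university's actual curriculum or grading rules.
PASSING = 3.0

curriculum = {
    "CS101": [],            # subject: list of prerequisites
    "CS102": ["CS101"],
    "CS201": ["CS102"],
    "MATH1": [],
    "MATH2": ["MATH1"],
}

def advisable_subjects(grades):
    passed = {subj for subj, grade in grades.items() if grade <= PASSING}
    return sorted(
        subj for subj, prereqs in curriculum.items()
        if subj not in passed and all(p in passed for p in prereqs)
    )

student_grades = {"CS101": 2.0, "MATH1": 3.5}   # MATH1 failed in this example
print(advisable_subjects(student_grades))        # ['CS102', 'MATH1']
```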

As for the encoding, I don't really have any problem with it, since it is automated. The service is evidently fast. If I have something to say about it, it is only that at times many students are being encoded and you have to wait for your copy. But then again, that is no problem, because it takes only a little time to wait.

Because I am a scholar, at the cashier's area scholars just present their scholarship card, updated for this semester, together with the COR. These two things are the basis for the approval of enrollment and of the subjects taken for the semester. Scholars simply present the required documents, whereas ordinary students take time to line up, since they are issued an official receipt when they pay. Truly, counting money takes much time, but then again, the process is smooth and nothing to worry about, because students are guided accordingly. For improvement, I would suggest, if the school has the funds, that each college have its own cashier to ease the process, or if that is impossible, why not add to the number of cashiers? (Laughs; still the same!)

Moving on to the registrar's area, I see that the service is provided to all students; whether you are a scholar or not, you get no special treatment and are treated fairly. That is why students take time in line to be entertained. Still, the service is properly handled and organized, especially since there are four registrar officers catering to the students. It seems I do not have much comment about the registrar's service; things just went well.

The last stop is the library area, where you need to present your library card as well as the COR for verification. Then the library card itself is validated. That is the end of the process. As you can see, in the library area you really won't see any problem, since the service they provide is not that time consuming.

Thursday, March 11, 2010

ERP Implementation in Oil Refineries

By Muhammad Mubashir Nazir, ACCA, CISA

Published in Daily Business Recorder Karachi on 25 August 2005

Over recent years the acquisition, implementation and use of Enterprise Resource Planning (ERP) systems have become a standard feature of most national and multinational companies in Pakistan. To date, most of the literature on ERP implementation has focused on the standard methodologies of ERP implementations.

This article focuses on ERP implementation specifically in refining industry and highlights the issues faced by implementers in this industry.

Axline Markus defines ERP systems as "commercial software packages that enable the integration of transaction-oriented data and business processes throughout an organisation".

ERP systems provide cross-organization integration through embedded business processes and are generally composed of several modules, including human resources, sales, assets management, procurement, project management, etc. The world's leading ERPs include SAP, Oracle, PeopleSoft and JD Edwards.

During the 1990s ERP systems were the de facto standard for replacement of legacy (old) systems in large companies around the globe. In Pakistan, a large number of national and multinational companies, including Sui Southern Gas Corporation, Pak Arab Refinery Limited (Parco), Pakistan Tobacco Company, ICI Pakistan Limited, Pakistan State Oil, Shell Pakistan Limited, Unilever Pakistan Limited, etc., have implemented or are going to implement ERP systems.

The impact of ERP systems is so broad, touching many internal and external aspects of an organisation's operations, that the successful implementation and use of these systems are critical to organisational performance and survival.

Various oil refineries in Pakistan e.g. National Refinery (NRL), Pakistan Refinery (PRL) and Parco have recently implemented SAP and Oracle Financials applications to streamline their business processes. Major business processes of an oil refinery include procurement of crude oil and other feed stock, inventory management for hydrocarbons and stores and spares, product sales, production planning and scheduling, assets management, financial and operational budgeting and financial and managerial reporting.

Following a tested implementation methodology is a prerequisite for successful ERP implementation. All implementation methodologies e.g. Oracle Application Implementation Methodology (AIM), Accelerated SAP (ASAP) etc suggest at least five phases of ERP implementation: Define; Design; Build; Transition; and Go Live & Support.



Some methodologies split a phase into two, and some merge two phases into one. These phases have been depicted in a figure in the original article.

During "Define" phase, the company implementing the ERP should clearly determine the objectives of ERP implementation, business process change strategy and its specific information requirements (e.g. production quantities at various temperature and pressure levels in an oil refinery).

Current (as-is) and future (to-be) business processes should be documented. A dedicated project team should be developed and trained for ERP implementation. Various ERP systems should be evaluated on the basis of information requirements of the company.

A gap analysis should be performed between specific requirements of the refining industry and features available in the ERP products and the best-fit product should be selected. Data conversion requirements should be analysed. Readiness plan for senior and middle management should be developed.

In "Design" phase, information requirements should be mapped with the features of selected ERP. Technical architecture and interfaces of various applications with ERP should be designed, data transition strategy should be developed, functional and technical design of databases and applications should be finalised, and user learning plan should be developed.

During "Build" phase, interfaces between various applications should be developed, application forms & reports should be customised if required, data conversion programs should be developed, user guides and necessary reference material should be prepared, and applications and interfaces should be tested for all business scenarios in an integrated environment.

In this phase, all users of the applications should be provided with adequate training. User acceptance testing must also be performed in this phase. "Transition" phase involves applications setup and conversion of legacy systems data into the new system.

"Go Live & Support" phase is the final phase in ERP Implementation. In this phase, ERP should be assessed for its effectiveness, all errors appeared in live environment should be removed, legacy systems should be decommissioned, and future information requirements should be analysed.

During implementation of ERPs at oil refineries, a few tasks are critical to making the project successful. These tasks, as depicted in Figure 2, include business requirements analysis, mapping the business solution with the company's requirements, business process re-engineering, development of interfaces with other applications, data conversion from the legacy to the new system, and user readiness.




The reason for major ERP failures at oil refineries is that these steps are not adequately handled during implementation.

Oil refineries usually require unique sets of information for their operations. For example, volumes of hydrocarbons change with a change in temperature or pressure. An oil & gas accounting system should be capable of converting the volumes of hydrocarbons at ambient temperature and pressure to those at specific reference temperatures (e.g., 85°F).
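As a rough illustration of that kind of conversion, the sketch below restates an observed volume at a reference temperature. It only approximates the idea; real systems use the API/ASTM volume correction tables, and the expansion coefficient here is a placeholder, not a published value for any particular product.

```python
# A simplified sketch of correcting an observed hydrocarbon volume to a reference
# temperature. Real systems use the API/ASTM volume correction tables; the
# expansion coefficient below is a placeholder assumption for illustration.
import math

def corrected_volume(observed_volume, observed_temp_f, reference_temp_f=85.0,
                     alpha_per_f=0.0005):
    """Scale an observed volume to the reference temperature."""
    delta_t = observed_temp_f - reference_temp_f
    vcf = math.exp(-alpha_per_f * delta_t * (1 + 0.8 * alpha_per_f * delta_t))
    return observed_volume * vcf

# Example: 10,000 barrels observed at 100 F, restated at 85 F.
print(round(corrected_volume(10_000, 100.0), 1))
```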

Similarly, an Oil & Gas Accounting System should maintain calibration charts and records of dips to determine the product quantities. A refinery consists of a chain of process units (e.g. crude distillation unit, hydro-treating unit, catalytic cracking unit etc) Output of one unit may be input for other unit(s). Production Scheduling Software for the refinery must have capability to record the flow of raw materials and semi-finished goods from one unit to the other.

Most ERPs do not provide features to capture refinery-specific information. Oil refining companies usually develop in-house systems for oil & gas accounting, refinery management and production scheduling.

During ERP implementations, if refinery-specific information requirements are not correctly captured during "Requirements Analysis Task", it results in ERP failure because the new ERP is not in a position to satisfy the information requirements of top and middle management.

Similarly incorrect mapping of business processes with application features may result in complete ERP failure because the system will not be able to capture all business processes according to company requirements.

If a refining company decides to develop its in-house application to meet its specific information requirements, it takes a long time to develop and implement the application. Debugging of a new application takes a long time, which results in overall delay in the project.

Development of interfaces and their testing require more resources, effort and time. Problems in poorly designed interfaces can result in failure of the entire project.

Legacy systems of oil refineries usually do not work in an integrated environment. They do not have as much capability to record information as new ERP systems. During implementation, it becomes difficult to fill all the required fields of the new system, so the data conversion exercise faces a lot of problems.

Mapping of fields between the new and old systems also becomes a major issue, because the users are familiar with the old conventions and it is very difficult for them to recognise the new chart of accounts, new supplier and customer codes, etc.

Lack of change-management skills in project team also results in project failure. As Peter Drucker points out, "Experience has shown that grafting innovation on to a traditional enterprise does not work. The enterprise has to become a change agent... Instead of seeing change as a threat, its people will come to see it as an opportunity."

In my opinion, the biggest problem in ERP implementation in oil refineries is inadequate user readiness. Most refineries in Pakistan are owned by the public sector. A significant number of employees in these refineries are not properly trained to use an ERP.

It is a recognised fact today that if a technical solution such as an ERP does not induce the expected changes, it is not because of the technology; it is due to the lack of adequate social changes required for the success of an ERP system.

Technology itself does not drive the social game, the collective process. Only people together are able to make a success or a failure of technical systems, or to neutralise them, especially complex ones such as ERPs. As Ann Miller points out, "People are always key to any process improvement, so methods to help staff ramp up on the learning curve of a technology or process are extremely important."

ERP implementers should keep in mind a few realities while planning for change management. Firstly, when facing change, one should remain modest, because the collective game builds itself without obeying any single will or any predefined plan.

Actors have to build the story together. Secondly, one should not start from the ERP technical solutions, but from the problems to solve; that is, identify actual needs before making an adapted and robust technical offer.

Thirdly, in order to be able to analyse problems and evaluate needs, one should remain attentive to people and social behaviour so that help in educating people can be provided: both individual education (learning what the ERP modules are doing and how to use them) and collective education (learning how to integrate the ERP in each department or service operational practices).

For example, mastering all the new accounting capabilities of the ERP Finance module requires building a new knowledge base among all the individuals first, then in the Accounting Department(s) as a whole. Actually, any success will depend on the collective evolution of the organisation.

As far as resistance to change is concerned, the most problematic issue is that there is no resistance to change per se, neither because of habits gained, nor because of any "social inertia".

However, resistance to change does occur and has got a twofold origin: technology resists and social organisations too. Technology resists because it has got its own principle of reality: for example an ERP by itself will never be able to deliver manufactured goods, only a co-ordinated organisation can. Social organisations themselves have their own principle of reality.

They do not resist just for the sake of resisting, but build their needs depending on their goals and evolution of beliefs.

When technology meets a market ready to pay for it, there is no resistance. As proof, consider the speed with which technologies such as fax machines or mobile phones have spread.

Resistance to IT has been caused by weariness of forced computerisation failures and of the forced obsolescence of hardware, software and IT concepts. Operational users are fed up with this ongoing race to innovation, since the situation they are living in is not yet stabilised.

The discourse about the "technological plus" has come to some discredit among users who do not hesitate any more to express their concern. Technology evolves at such a pace that it generates what is called "techno-stress" among staff at all levels of an organisation.

In fact, workers say they are "techno-stressed" because they have to learn, know and use technologies that are constantly evolving. Moreover, they consider they have little control over the choice of technologies to use and they lack training on them.

Five major factors have been identified as generating "techno-stress": system problems; computing errors; the learning time needed to get used to new technologies; the fact that technologies said to be "time-saving" increase tasks more than they alleviate them; and the difficulty of keeping up with fast-evolving technologies.

To this, one can add the "technology-aided employee scrutiny" which results in job loss of those employees who are not capable enough to update themselves with the fast-moving technology.

According to various surveys, it seems that "techno-stress" is more and more affecting executives and managers. They fear IT generates a loss of privacy, an information overload, a lack of personal contacts, a need for a continuous learning of new skills and the missing of promotion due to lack of IT knowledge. Managers who frequently avoid technologies and suffer from a lack of technical knowledge, have nevertheless to make decisions about buying expensive IT equipment and have to manage investment, education and support budgets.

Moreover, it seems that managers who are familiar with technologies also suffer some "techno-stress" because of the fast-changing pace of IT. In short, the preceding human factors are paramount when it comes to ERP implementation and may explain to some extent why an ERP needs a lot of care and support when deployed in an organisation, both from internal management and from external consultants.

Although these issues can be faced by any organisation, refineries in the public sector around the globe usually face them due to a lack of skilled and motivated staff. We expect that in the near future, if properly implemented with all issues properly addressed, ERP systems will become an integral part of oil refineries' information systems.

Muhammad Mubashir Nazir
August 2005

Thursday, March 4, 2010

Characteristics That an Analyst Examines When Choosing or Defining the Deployment Environment

Analysts must consider the configuration of computer equipment, operating systems, and networks that will exist when the new application system is deployed.

• Configuration of:
  - Computer hardware
  - System software
  - Networks
  - Development tools

• Development environment – programming languages, CASE tools, and other software used to develop application software
  - Java and Visual Studio .NET are examples
  - Application deployment environment decisions limit development tool choices

• Operating system environment
• Database management system (DBMS)
• Distributed software standards

• Existing environment generally considered and compared with the proposed environment

Deployment Environment Characteristics to Consider

• Compatibility with system requirements
• Compatibility among hardware and system software
• Required interfaces to external systems
• Conformity with IT strategic plan and architecture plans
• Cost and schedule

Defining the Application Deployment Environment

• Centralized Systems
• Distributed Computing
• The Internet and Intranets
• Development and System Software Environments
• The Environment at Rocky Mountain Outfitters


Defining the Application Deployment Environment
Application deployment environment: The configuration of computer equipment, operating systems, and networks for the new system.

In selecting an appropriate solution, analysts need to first consider the application deployment environment. By application deployment environment, we mean the configuration of computer equipment, operating systems, and networks that will exist when the new application system is deployed. The client and users of the new system are obviously most interested in the functions of the application itself, because they need it to carry out the business of the organization. However, the application does not function in a vacuum. There must be a stable environment of supporting components to enable it to execute successfully. If the environment is not suitable and stable, then the application will not function. An important part of any project is therefore ensuring that the application deployment environment is defined, developed, and deployed so that it is stable. The following sections describe various alternative processing environments.


Centralized Systems
Centralized mainframes are generally used for large-scale batch processing applications. Such applications are common in industries such as banking, insurance, and catalog sales. Information systems in such industries often have the following characteristics:

• Some input transactions do not need to be processed in real-time (e.g., out-of-state checks delivered in large nightly batches from central bank clearinghouses).
• On-line data entry personnel can be centrally located (e.g., a centrally located group of telephone order takers can serve geographically dispersed customers).
• Large numbers of periodic outputs are produced by the system (e.g., monthly credit card statements mailed to customers).
Any application that has two or three of these characteristics is a viable candidate for implementation on a centralized mainframe.

Single Computer Architecture

As its name implies, single computer architecture places all information system resources on a single computer system and its directly attached peripheral devices. Users interact with the system via simple input/output devices that are directly connected to the computer. Single computer architecture requires that all system users be located near the computer. The primary advantage of single computer architecture is its simplicity.

Clustered and Multicomputer Architectures

A clustered architecture employs a group (or cluster) of computer systems to provide needed processing or data storage and retrieval capacity. Computers from the same manufacturer and model family are networked together. Similar hardware and operating systems allow application programs to execute on any machine in the cluster without modification. In effect, a cluster acts as a single large computer system. Often there is one computer that acts as the entry point to the system. The other computers in the system function as slave computers and are assigned tasks by the controlling computer.

A multicomputer architecture also employs multiple computer systems, but hardware and operating systems are not required to be as similar as in a clustered architecture. Hardware and software differences make it impractical to move application programs from one machine to another. Instead, a suite of application programs and data resources is exclusively assigned to each computer system. Even though this architecture is similar to a distributed configuration (discussed in the next section), we classify it as a centralized system since it functions as a single large computer.

Clustered architecture: A group of computers of the same type that have the same operating environment and share resources.

Multicomputer architecture: A group of dissimilar computers that are clustered together.

Distributed Computing
Components of a modern information system are typically distributed across many computer systems and geographic locations. For example, corporate financial data might be stored on a centralized mainframe computer. Personal computers in many locations might be used to access and view periodic reports as well as to directly update the central database. Such an approach to distributing components across computer systems and locations is generically called distributed computing.

Distributed computing: The approach to distributing a system across several computers and locations.

Computer Networks

A computer network is a set of transmission lines, specialized hardware and communication protocols that allow communication among different users and computer systems. Computer networks are divided into two classes depending on the distance they span. A local area network (LAN) is typically less than one kilometer in length and connects computers within a single building or floor. The term wide area network (WAN) can describe any network over one kilometer, though much greater distances spanning cities, countries, continents, or the entire globe are typically implied.

Computer network: A set of transmission lines, equipment and communication protocols to permit sharing of information and resources.

Local area network (LAN): A computer network where the distances are local such as in the same building.

Wide area network (WAN): A computer network across large distances such as a city, state, or nation.

There are many ways to distribute information system resources across a computer network. Users, application programs, and databases can be placed on the same computer system, on different computer systems on the same LAN, or on different computer systems on different LANs. Application programs and databases can also be subdivided, with each part distributed separately.

Client-Server Architecture

Client-server architecture is currently the dominant architectural model for distributing information system resources. Client-server architecture divides information system processes into two classes - client and server. A server computer manages one or more system resources and provides access to those resources through a well-defined communication interface. A client computer uses the communication interface to request resources, and the server responds to those requests. Software that implements the communication interface is usually called middleware.

Router: A piece of equipment that is used to direct information within the network.

Server computer: A computer that provides services to other computers on the network.

Client computer: A computer that requests services from other computers on the network.

Middleware: Computer software that implements communication protocols on the network and helps different systems communicate.
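
To make the request-and-response idea concrete, below is a minimal sketch of one server process and one client process written in Python. The host address, the port number, and the trivial "echo" style service are illustrative assumptions of mine, not features of any particular system described here.

# Minimal client-server sketch using Python's standard socket library.
# The address, port, and echo-style service are illustrative assumptions.
import socket

HOST, PORT = "127.0.0.1", 9090   # assumed location of the server computer

def run_server():
    """Server process: manages a resource and answers client requests."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _addr = srv.accept()               # wait for one client
        with conn:
            request = conn.recv(1024)            # read the client's request
            conn.sendall(b"SERVED: " + request)  # reply over the same interface

def run_client():
    """Client process: uses the communication interface to request a resource."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"get customer list")
        print(cli.recv(1024).decode())

In a real deployment the well-defined communication interface would usually come from middleware (an HTTP library, a database driver, and so on) rather than from hand-written socket code like this.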

N-Layer Client-Server Architecture

An information system application program can be divided into a set of client and server processes, or layers. When the application is split into three such layers (view, business logic, and data), the approach is called three-layer architecture; splitting it into more layers yields an n-layer architecture.

Data layer: The layer of a client-server configuration that contains the database.
Business logic layer: The layer of a client-server configuration that contains the programs implementing the application's processing logic.
View layer: The layer of a client-server configuration that contains the user interface and other components used to access the system.
Three-layer architecture: A client-server architecture that divides an application into a view layer, a business logic layer, and a data layer.
N-layer architectures or n-tiered architectures: A client-server architecture that divides an application into n layers.

Enterprise application development (EAD): An approach to developing information systems for enterprise-wide deployment in a distributed fashion.
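
As a rough sketch of how the three layers separate responsibilities, the Python fragment below keeps the data layer, the business logic layer, and the view layer in separate functions, with calls flowing only from view to logic to data. The in-memory "database", the order fields, and the pricing rule are placeholders I made up for illustration.

# Three-layer sketch: view -> business logic -> data. All names are illustrative.

# Data layer: knows only how to store and fetch records.
_ORDERS = {}   # stand-in for a real database table

def save_order(order_id, order):
    _ORDERS[order_id] = order

def load_order(order_id):
    return _ORDERS[order_id]

# Business logic layer: applies the application's rules.
def place_order(order_id, quantity, unit_price):
    if quantity <= 0:
        raise ValueError("quantity must be positive")   # an assumed business rule
    total = quantity * unit_price
    save_order(order_id, {"quantity": quantity, "total": total})
    return total

# View layer: the user interface; here just formatted console output.
def show_order(order_id):
    order = load_order(order_id)
    print(f"Order {order_id}: {order['quantity']} items, total {order['total']:.2f}")

if __name__ == "__main__":
    place_order("A-1", quantity=3, unit_price=9.95)
    show_order("A-1")

Because each layer talks only to the layer below it, the view could later be replaced (for example, by a Web page) without touching the business logic or data layers.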

The Internet, Intranets, and Extranets

The Internet and World Wide Web are becoming increasingly popular frameworks for implementing and delivering information system applications. The Internet is a global collection of networks that are interconnected using a common low-level networking standard—TCP/IP (Transmission Control Protocol/Internet Protocol). The World Wide Web (WWW), also called simply the Web, is a collection of resources (programs, files, and services) that can be accessed over the Internet by a number of standard protocols. The Internet is the infrastructure upon which the Web is based. In other words, resources of the Web are delivered to users over the Internet.

Internet: A global collection of networks that use the same networking protocol, TCP/IP.

World Wide Web (WWW): A collection of resources such as files and programs that can be accessed over the Internet using standard protocols.

An intranet is a private network that uses Internet protocols but is accessible only by a limited set of internal users (usually members of the same organization or workgroup). The term also describes a set of privately accessible resources that are organized and delivered via one or more Web protocols over a network that supports TCP/IP. An intranet uses the same protocols as the Internet and Web but restricts resource access to a limited set of users. An extranet is an intranet that has been extended to include directly related business users outside the organization (e.g., suppliers, large customers, and strategic partners). An extranet allows separate organizations to exchange information and coordinate their activities, thus forming a virtual organization.
Intranet: A private network that is accessible to a limited number of users, but which uses the same TCP/IP protocols as the Internet.

Extranet: An intranet that has been extended outside of the organization to facilitate the flow of information.

Virtual organization: A loosely coupled group of people and resources that work together as though they were an organization.

Virtual private network: A network with security and controlled access for a private group but built on top of a public network such as the Internet.

The Internet as an Application Platform

Internet and Web technologies present an attractive alternative for implementing information systems. For example, remote access for buyers can be provided by an application that uses a Web browser interface. Such an application executes on a Web server and is accessible from any computer with an Internet connection. Buyers can use a Web browser on their laptop computers and connect to the application through an Internet service provider wherever they are currently located.
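
As a hedged illustration of an application that executes on a Web server and is reached from any browser, the sketch below uses only Python's standard http.server module; the URL path and the hard-coded catalog contents are assumptions made for this example, not anyone's actual system.

# Minimal browser-accessible application using the Python standard library.
# The /catalog path and the product data are illustrative assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer

CATALOG = {"parka": 129.00, "gloves": 24.50}   # stand-in for real product data

class BuyerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/catalog":
            body = "\n".join(f"{name}: {price:.2f}" for name, price in CATALOG.items())
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.end_headers()
            self.wfile.write(body.encode("utf-8"))
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Any computer with a browser and a connection to this host could now
    # view the catalog at http://<host>:8000/catalog
    HTTPServer(("", 8000), BuyerHandler).serve_forever()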

Implementing an application using the Web, an intranet, or an extranet has a number of advantages over traditional client-server approaches to application architecture, including wide accessibility, low-cost communication, and widely implemented standards.

Of course, there are also negative aspects of application delivery via Internet and Web technologies, including concerns about security, reliability, throughput, and volatile standards.

B2B and B2C Applications and Hubs

With the widespread growth of e-commerce, there are many other alternative uses for the Internet. In business-to-business (B2B) relationships, a company such as RMO can also use the Internet to develop relationships with its suppliers. RMO's suppliers can use Web browser technology to check inventory levels at RMO and automatically replenish stock when order points are reached.

A new type of company has also come into existence to support higher levels of e-commerce. Currently, RMO sends its buyers out to visit different suppliers and establish contracts and purchase agreements. However, this function, too, can be done electronically. One approach to finding suppliers is through a company that acts as an aggregator or an electronic exchange. In this situation, equipment and materials suppliers register with the aggregator, which then acts as a broker to help buyers and sellers get together.

Similar concepts apply in business-to-consumer (B2C) relationships. Many companies have Web sites to promote and sell their own products. However, electronic storefronts are also appearing to provide a centralized shopping location for consumers. Thus, a company like RMO may have its own Web presence, but it may also sell its products through cybermalls and other electronic distributors.

Development and System Software Environments
The development environment consists of the standards and tools that are in use in the organization. For example, specific languages, CASE tools, and programming standards may be required. The system software environment includes operating systems, network protocols, database management systems, and so forth. In some projects, the development and system software environment may be open to choice. In other situations, they must conform to the existing environment. In either case, an important activity of the analysis phase is to determine the components of the environment that will control the development of the new application system.

The important components of the development and system software environment that will affect the project are the language environment and expertise, existing CASE tools and methodologies, required interfaces to other systems, the operating system, and the database system.

References:

ocw.kfupm.edu.sa/user/MIS30103/08IMCh%20notes.doc
hercules.gcsu.edu/~adahanay/cbis3210/Chapter%208-reviewQ.doc
people.stfx.ca/rpalanis/415/lecture/08.ppt

Thursday, February 25, 2010

Characteristics of an analyst when evaluating DFD quality

We shall first discuss what a Data Flow Diagram is.

When it comes to conveying how data flows through systems (and how that data is transformed in the process), data flow diagrams (DFDs) are the method of choice over technical descriptions for three principal reasons.

1. DFDs are easier for both technical and non-technical audiences to understand

2. DFDs can provide a high level system overview, complete with boundaries and connections to other systems

3. DFDs can provide a detailed representation of system components

DFDs help system designers and others during initial analysis stages visualize a current system or one that may be necessary to meet new requirements. Systems analysts prefer working with DFDs, particularly when they require a clear understanding of the boundary between existing systems and postulated systems. DFDs represent the following:

1. External devices sending and receiving data

2. Processes that change that data

3. Data flows themselves

4. Data storage locations

The hierarchical DFD typically consists of a top-level diagram (Level 0) underlain by cascading lower level diagrams (Level 1, Level 2…) that represent different parts of the system.


Defining DFD Components

DFDs consist of four basic components that illustrate how data flows in a system: entity, process, data store, and data flow.

Entity

An entity is the source or destination of data. Entities in a DFD represent people or systems outside the context of the system being modeled. Entities either provide data to the system (referred to as a source) or receive data from it (referred to as a sink). Entities are often represented as rectangles (a diagonal line across the right-hand corner means that this entity is represented somewhere else in the DFD). Entities are also referred to as agents, terminators, or sources/sinks.

Process

The process is the manipulation or work that transforms data: performing computations, making decisions (logic flow), or directing data flows based on business rules. In other words, a process receives input and generates some output. Process names are usually short verb phrases, such as “Submit Payment” or “Get Invoice”, that describe the transformation, which can be performed by people or machines. Processes can be drawn as circles or segmented rectangles on a DFD, and include a process name and process number.

Data Store

A data store is where a process stores data between processes for later retrieval by that same process or another one. Files and tables are considered data stores. Data store names (plural) are simple but meaningful, such as “customers,” “orders,” and “products.” Data stores are usually drawn as a rectangle with the right-hand side missing and labeled by the name of the data storage area it represents, though different notations do exist.

Data Flow

Data flow is the movement of data between the entity, the process, and the data store. Data flow portrays the interface between the components of the DFD. The flow of data in a DFD is named to reflect the nature of the data used (these names should also be unique within a specific DFD). Data flow is represented by an arrow, where the arrow is annotated with the data name.
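
One informal way to keep the four components straight is to write them down as plain data structures. The Python sketch below (the class fields and the tiny example fragment are my own assumptions, not a standard notation) models a Customer entity sending payment details to a “Submit Payment” process, which records them in an “orders” data store.

# The four DFD components as plain Python dataclasses; names are illustrative.
from dataclasses import dataclass
from typing import Union

@dataclass
class Entity:          # source or sink outside the system
    name: str

@dataclass
class Process:         # transforms data; numbered for the level it appears on
    number: str
    name: str

@dataclass
class DataStore:       # holds data between processes
    name: str

Component = Union[Entity, Process, DataStore]

@dataclass
class DataFlow:        # named movement of data between two components
    name: str
    source: Component
    destination: Component

customer = Entity("Customer")
submit_payment = Process("1.0", "Submit Payment")
orders = DataStore("orders")

flows = [
    DataFlow("payment details", customer, submit_payment),
    DataFlow("recorded payment", submit_payment, orders),
]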

For an analyst to examine a data flow diagram properly, he or she must know the set of standards used to evaluate DFD quality.

Evaluating DFD Quality

  • Readable

-your data flow diagram must be readable so that your audience can understand its contents and what it is meant to convey.

  • Internally consistent
-a number of rules and guidelines help ensure that the data flow diagram is consistent with the other system models (the entity-relationship diagram, the state-transition diagram, the data dictionary, and the process specification). There are also guidelines that help ensure the DFD itself is internally consistent.

The major consistency guidelines are these:

*Avoid infinite sinks, bubbles that have inputs but no outputs. These are also known by systems analysts as “black holes,” in an analogy to stars whose gravitational field is so strong that not even light can escape.


*Avoid spontaneous generation bubbles; bubbles that have outputs but no inputs are suspicious, and generally incorrect. One plausible example of an output-only bubble is a random-number generator, but it is hard to imagine any other reasonable example.


*Beware of unlabeled flows and unlabeled processes. This is usually an indication of sloppiness, but it may mask an even deeper error: sometimes the systems analyst neglects to label a flow or a process because he or she simply cannot think of a reasonable name. In the case of an unlabeled flow, it may mean that several unrelated elementary data items have been arbitrarily packaged together; in the case of an unlabeled process, it may mean that the systems analyst was so confused that he or she drew a disguised flowchart instead of a dataflow diagram.


*Beware of read-only or write-only stores. This guideline is analogous to the guideline about input-only and output-only processes; a typical store should have both inputs and outputs. The only exception to this guideline is the external store, a store that serves as an interface between the system and some external terminator.

  • Accurately represents system requirements
  • Reduces information overload: Rule of 6 +/- 3
*A single DFD should have no more than 6 +/- 3 processes
*No more than 6 +/- 3 data flows should enter or leave a process or data store on a single DFD
  • Minimizes required number of interfaces

Data Flow Consistency Problems

  • Differences in data flow content between a process and its process decomposition

-balancing is desired: equivalence of data content between the data flows entering and leaving a process and the data flows of its decomposition

  • Data outflows without corresponding inflows
  • Data inflows without corresponding outflows
  • Results in unbalanced DFDs
  • Black hole - a process with input that is never used to produce a data output
  • Miracle - a process with a data output that is created out of nothing (i.e., it “miraculously appears”)
  • Most CASE tools perform data flow consistency checking

*Black hole and miracle problems apply to both processes and data stores


Consistency Rules

  • All data that flows into a process must:
*Flow out of the process or
*Be used to generate data that flow out of the process

  • All data that flows out of a process must:
*Have flowed into the process or
*Have been generated from data that flowed into the process
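
A rough sketch of the kind of check a CASE tool might automate is shown below: it flags black holes (processes with inflows but no outflows) and miracles (processes with outflows but no inflows). Representing flows as (name, source, destination) triples of strings is my own simplification for this illustration.

# Sketch of an automated black-hole / miracle check; data shapes are assumed.
def check_processes(process_names, flows):
    problems = []
    for proc in process_names:
        has_input = any(dst == proc for _name, _src, dst in flows)
        has_output = any(src == proc for _name, src, _dst in flows)
        if has_input and not has_output:
            problems.append(f"black hole: {proc}")
        if has_output and not has_input:
            problems.append(f"miracle: {proc}")
    return problems

flows = [
    ("payment details", "Customer", "Submit Payment"),
    ("recorded payment", "Submit Payment", "orders"),
    ("sales summary", "Summarize Sales", "Management"),   # no inflow at all
]
print(check_processes(["Submit Payment", "Summarize Sales"], flows))
# prints ['miracle: Summarize Sales']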

Documentation of DFD Components

  • Lowest level processes need to be described in detail
  • Data flow contents need to be described
  • Data stores need to be described in terms of data elements
  • Each data element needs to be described
  • Various options for process definition exist

Some Guidelines about Valid and Non-Valid Data Flows


  • Before embarking on developing your own data flow diagram, there are some general guidelines you should be aware of.
  • Data stores are storage areas and are static or passive; therefore, having data flow directly from one data store to another doesn't make sense because neither could initiate the communication.
  • Data stores maintain data in an internal format, while entities represent people or systems external to them. Because data from entities may not be syntactically correct or consistent, it is not a good idea to have a data flow directly between a data store and an entity regardless of direction.
  • Data flow between entities would be difficult because it would be impossible for the system to know about any communication between them. The only type of communication that can be modeled is that which the system is expected to know or react to.
  • Processes on DFDs have no memory, so it would not make sense to show data flows between two asynchronous processes (between two processes that may or may not be active simultaneously) because they may respond to different external events.

Therefore, data flow should only occur in the following scenarios:

· Between a process and an entity (in either direction)

· Between a process and a data store (in either direction)

· Between two processes that can only run simultaneously
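
These scenarios translate directly into a small validity check. The sketch below encodes the allowed pairs as a lookup table; the type labels ("entity", "process", "store") and the table itself are simply my encoding of the guidelines above, not a standard API.

# Sketch: is a single data flow valid, given the component type at each end?
ALLOWED = {
    ("process", "entity"), ("entity", "process"),   # process <-> entity
    ("process", "store"),  ("store", "process"),    # process <-> data store
    ("process", "process"),                         # two processes running simultaneously
}

def flow_is_valid(source_type, destination_type):
    return (source_type, destination_type) in ALLOWED

print(flow_is_valid("entity", "process"))   # True  - an entity can feed a process
print(flow_is_valid("store", "store"))      # False - stores cannot initiate communication
print(flow_is_valid("entity", "store"))     # False - data must pass through a process first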

Here are a few other guidelines on developing DFDs:

· Data that travel together should be in the same data flow

· Data should be sent only to the processes that need the data

· A data store within a DFD usually needs to have an input data flow

· Watch for Black Holes: a process with only input data flows

· Watch for Miracles: a process with only output flows

· Watch for Gray Holes: insufficient inputs to produce the needed output

· A process with a single input or output may or may not be partitioned enough

· Never label a process with an IF-THEN statement

· Never show time dependency directly on a DFD (a process begins to perform tasks as soon as it receives the necessary input data flows)

*Data flow diagramming is a highly effective technique for showing the flow of information through a system. DFDs are used in the preliminary stages of systems analysis to help understand the current system and to represent a required system. The DFDs themselves represent external entities sending and receiving information (entities), the processes that change information (processes), the information flows themselves (data flows), and where information is stored (data stores).

DFDs are a form of information development, and as such provide key insight into how information is transformed as it passes through a system. Having the skills to develop DFDs from functional specs and being able to interpret them is a value-add skill set that is well within the domain of technical communications.

References:

  • http://www.stc.org/confproceed/2000/PDFs/00098.PDF