File Processing System

In this article, we will cover the File Processing System in detail: how data was traditionally stored in files, and the disadvantages of the file system.

Introduction

In daily life, we come across various needs to store data: daily household bills, bank account details, salary details, payment details, student information, student reports, books in the library, and so on. How can all of this be recorded in one place so that we can get it back when required? It should be recorded in such a way that we:

  1. Can get the data back at any point in time later
  2. Can add details to it whenever required
  3. Can modify the stored information as needed
  4. Can also delete information that is no longer needed

In the traditional approach, before computers, all pieces of information were stored on paper. When we needed information, we searched through the papers. If we knew the particular date or category of information we were looking for, we went to that section of the papers. When we wanted to update or delete some data, we searched for it and modified it or struck it off. If the data is limited, all of these tasks are easy. But imagine library records, information about every student in a school, or a banking system! How do we search for a single piece of required data among papers? It is a never-ending task! Yes, computers solved our problems.

File Processing System

When computers came, all these jobs became easier. But in the early days, these records were stored in the form of files. The way we stored data in files was similar to paper, in the form of flat files – to put it simply, in a notepad. Yes, all the pieces of information were in plain text files, with each field of information separated by a space, tab, comma, semicolon, or some other symbol.
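
For illustration only, a hypothetical Student file (say, Student_ClassA.txt) with its fields separated by commas might look like this:

  101,Anil Kumar,Class A,12 Park Street
  102,Meera Nair,Class A,45 Lake Road
  103,Rahul Shah,Class A,8 Hill View

The file carries no description of its own structure; every program that reads it simply has to know that the first field is the roll number, the second is the name, and so on.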

[Figure: a sample Student file stored as a flat file]

All the files were grouped based on their categories; each file held only related information, and each file was named properly. As we can see, the above sample file has Student information. Student files for each class were bundled inside different folders so they could be identified quickly.

[Figure: Student files for each class grouped into separate folders]

Now, if we want to see a specific student's details from a file, what do we do? We know which file has the data, so we open that file and search for the details. That is fine as long as we are looking through the files by hand. But imagine we want to display student details in a UI: how will a program open a file, read it, or update it? Programming languages like C, C++, COBOL, etc. help with this task. Using these languages, we can locate files, open them, search for the data inside them, go to a specific line in a file, and add, update, or delete specific information, as the sketch below illustrates.
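
As a rough illustration (not code from this article), here is how a small C program might search a flat Student file for one roll number. The file name, the comma delimiter, and the field order are all assumptions made for this sketch.

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      const char *wanted = "102";                   /* roll number to look for */
      FILE *fp = fopen("Student_ClassA.txt", "r");  /* hypothetical file name */
      if (fp == NULL) {
          perror("cannot open Student_ClassA.txt");
          return 1;
      }

      char line[256];
      while (fgets(line, sizeof line, fp) != NULL) {
          /* The program has to know that the roll number is the first
             comma-separated field; the file itself tells us nothing. */
          size_t len = strlen(wanted);
          if (strncmp(line, wanted, len) == 0 && line[len] == ',') {
              printf("Found: %s", line);
              fclose(fp);
              return 0;
          }
      }

      printf("Roll number %s not found\n", wanted);
      fclose(fp);
      return 0;
  }

Even this simple lookup requires the program to know the exact file name, the delimiter, and the position of each field; updating or deleting a record is harder still, because the whole file usually has to be rewritten.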

Disadvantages of file processing

The file processing system works well when there are only a limited number of files and very little data. As the data and the number of files in the system grow, handling them becomes difficult. The file system has the following disadvantages.

  1. Data Mapping and Access: – Although all the related information is grouped and stored in different files, there is no mapping between any two files; that is, dependent files are not linked. Even though the Student file and the Student_Report file are related, they are two different files and are not linked in any way. Hence, if we need to display a student's details along with his report, we cannot pick them directly from those two files. We have to write a lengthy program that searches the Student file first, gets all the details, then goes to the Student_Report file and searches for his report (see the first sketch after this list). When there is a huge amount of data, searching for a particular piece of information in the file system is always time-consuming and inefficient.
  2. Data Redundancy: – There is no method to prevent the insertion of duplicate data in the file system. Any user can enter any data. The file system validates neither the kind of data being entered nor whether the same data already exists in the same file. Duplicate data is undesirable: it wastes space and leads to confusion and mishandling of data. When there are duplicate records in the file and we need to update or delete one of them, we might end up updating or deleting only one record and leaving the other behind; again, the file system does not catch this, so the purpose of storing the data is lost. Moreover, although the file is named Student file, there is nothing to stop staff information or report information from being entered into it. The file system allows any information to be entered into any file and does not restrict the data to the group it belongs to.
  3. Data Dependence: – In the files, data is stored in a specific format, say separated by tabs, commas, or semicolons. If the format of any file changes, the programs that process that file also need to change. But there may be many programs depending on the same file; we need to know in advance every program that uses it and change them all. Missing the change in even one place will break the whole application. Similarly, a change in the storage structure, or in the way the data is accessed, affects every place where the file is used, and all of those programs have to be changed. In short, even the smallest change to the file affects all the programs that use it.
  4. Data Inconsistency: – Imagine that the Student and Student_Report files both contain the student's address, and there is a change request for one particular student's address. The program searches only the Student file and updates the address there correctly. Another program prints the student's report and mails it to the address recorded in the Student_Report file. What happens to the report of the student whose address was changed? There is a mismatch between the actual address and the stored one, and his report is sent to the old address. This mismatch between different copies of the same data is called data inconsistency. It has occurred here because there is no proper record of which files hold copies of the same data.
  5. Data Isolation: – Imagine we have to generate a single report for a student studying in a particular class, covering his study report, his library book details, and his hostel information. All of this information is stored in different files. How do we get all these details into one report? We have to write a program. But before writing the program, the programmer should find out which files hold the required information, what the format of each file is, how to search for data in each file, and so on. Only once all this analysis is done can he write the program. If only two or three files are involved, the programming is fairly simple; if many files are involved, it requires a lot of effort from the programmer. Since all the data is isolated in different files, programming becomes difficult.
  6. Security: – Each file can be password protected. But what if we have to give access to only a few records in a file? For example, a user should be allowed to view only his own bank account information in the file. This is very difficult to achieve in a file system.
  7. Integrity: – If we need to enforce certain criteria while entering data into a file, there is no direct way to do it; it can only be done by writing programs. Say we have to restrict entries to students above age 18: this can be enforced only through a program. There is no built-in checking facility in the file system, so these kinds of integrity checks are not easy in file systems.
  8. Atomicity: – If an insert, update, or delete fails partway through in the file system, there is no mechanism to switch back to the previous state. Imagine that marks for one particular subject need to be entered into the Report file and then the total needs to be recalculated. Suppose that after the new marks are entered, the file is closed without saving: the whole of the required transaction is not performed. The totalling of marks has been done, but the addition of the new marks has not, so the total recorded is wrong. Atomicity means that a transaction is either completed in full or not performed at all; partial completion leads to incorrect data in the system. The file system does not guarantee atomicity. It may be achievable with complex programs, but writing such a program for every transaction is costly.
  9. Concurrent Access: – Accessing the same data from the same file at the same time is called concurrent access. In a file system, concurrent access can lead to incorrect data. For example, a student wants to borrow a book from the library. He searches the library file and sees that one copy is available. At the same time, another student also wants to borrow the same book and likewise sees that one copy is available. The first student opts to borrow and gets the book, but the count in the file has not yet been updated to zero, so the second student also opts to borrow, even though no copies are actually left. This is the problem of concurrent access in a file system (see the second sketch after this list).
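
To make the first point (Data Mapping and Access) concrete, here is a rough C sketch of the kind of lengthy program needed just to show a student's details together with his report. The file names, the comma delimiter, and the field layout are assumptions made for this sketch; nothing in the file system itself links the two files.

  #include <stdio.h>
  #include <string.h>

  /* Scan one flat file for the line whose first field equals roll. */
  static int find_record(const char *path, const char *roll,
                         char *out, size_t out_size)
  {
      FILE *fp = fopen(path, "r");
      if (fp == NULL)
          return 0;
      char line[256];
      size_t len = strlen(roll);
      while (fgets(line, sizeof line, fp) != NULL) {
          if (strncmp(line, roll, len) == 0 && line[len] == ',') {
              strncpy(out, line, out_size - 1);
              out[out_size - 1] = '\0';
              fclose(fp);
              return 1;
          }
      }
      fclose(fp);
      return 0;
  }

  int main(void)
  {
      const char *roll = "102";
      char student[256], report[256];

      /* The two files are not linked in any way, so the program has to
         know about both of them and search each one separately. */
      if (!find_record("Student_ClassA.txt", roll, student, sizeof student) ||
          !find_record("Student_Report_ClassA.txt", roll, report, sizeof report)) {
          printf("Could not combine details for roll number %s\n", roll);
          return 1;
      }
      printf("Student: %s", student);
      printf("Report : %s", report);
      return 0;
  }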
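
To illustrate the last point (Concurrent Access), the sketch below shows the lost-update pattern in C. Imagine two copies of this program running at the same moment for two different students; the file name and its single-number format are assumptions made for this sketch.

  #include <stdio.h>

  int main(void)
  {
      int copies = 0;

      /* Step 1: read the current number of copies.  Both students'
         programs execute this step and both see "1". */
      FILE *fp = fopen("Library_Book_42.txt", "r");   /* hypothetical file */
      if (fp == NULL || fscanf(fp, "%d", &copies) != 1) {
          printf("could not read the book record\n");
          return 1;
      }
      fclose(fp);

      if (copies > 0) {
          /* Step 2: both programs decide the book is available. */
          copies = copies - 1;

          /* Step 3: both programs write the new count back.  Each student
             is told the book is issued, yet only one physical copy exists;
             nothing in the file system prevents this interleaving. */
          fp = fopen("Library_Book_42.txt", "w");
          if (fp != NULL) {
              fprintf(fp, "%d\n", copies);
              fclose(fp);
          }
          printf("Book issued; copies left as recorded: %d\n", copies);
      } else {
          printf("No copies available\n");
      }
      return 0;
  }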

We have covered the File Processing System. In the next tutorial, we will cover the Database Management System.
