Since a distributed database (DDB) allows multiple users to access data from different locations, or from the same location, it must handle concurrent access and querying throughout its lifecycle. The system therefore needs to be robust enough to manage all these concurrent accesses, which raise several issues.
Duplicate copies of data may be distributed over different databases in the system. Hence any change to the data at one DB must be propagated to all copies across the DBs to keep the data consistent; otherwise the system ends up with inconsistent, redundant data. Similarly, when recovering data after a failure, the system must restore the data at all the DBs to maintain consistency.
Communication Link Failure
Since this is a DDB, many users will be submitting queries, producing heavy traffic in the network. The network must be able to propagate user requests properly and send the results back. Sometimes a heavy load on the network causes the communication link between the user and the DB to fail.
Individual Site Failure
Suppose one or more DBs fail or crash. The recovery system should be robust enough to restore the failed DBs before they are made available to users again. In addition, users should not be affected by the failure: they should still be able to query and have their requests processed from the other DBs.
In a DDB, fragmentation is common, which means the data is not available at a single site as a whole: it is distributed among the DBs horizontally, vertically, or both. Hence any DML transaction (insert, update, or delete) modifies only part of the table's data at each site, and the change must be committed at every database that holds a fragment of the table. Commit in these systems is therefore a two-phase (or multi-phase) commit, executed across all the DBs involved.
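The commit described above can be sketched as a two-phase commit coordinator. This is a minimal in-memory illustration, not a real implementation: the `Participant` class and its method names are hypothetical stand-ins for the network messages and recovery logging a real DDB would use.

```python
class Participant:
    """One DB site holding a fragment touched by the transaction (hypothetical)."""
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "active"

    def prepare(self):
        # Phase 1: vote YES only if the local changes can be made durable.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.state = "aborted"


def two_phase_commit(participants):
    # Phase 1 (voting): every fragment-holding site must vote YES.
    if all(p.prepare() for p in participants):
        # Phase 2 (decision): commit at every site.
        for p in participants:
            p.commit()
        return "committed"
    # Any NO vote aborts the transaction at every site.
    for p in participants:
        p.abort()
    return "aborted"
```

A single NO vote from any fragment-holding site aborts the transaction everywhere, which is exactly why the commit must run at all DBs where fragments of the table exist.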
Any set of concurrent transactions carries the threat of deadlock, and a DDB is no exception. Multiple concurrent transactions executing at different DBs will lock data at all of those DBs, so there is a high possibility of deadlock if these locks are not handled correctly. This is addressed by concurrency control applied in a distributed way.
Distributed Concurrency Control
Different techniques are used to manage concurrency, and hence deadlock, in distributed database systems.
- Primary Site Technique – Here a single database site is designated as the primary site and is responsible for managing the transactions of the distributed database system. This site manages transactions by granting locks on tables, and releasing those locks, for tables placed at the different DB sites.
This site receives lock requests from different users' transactions. It queues the requests and, when a table becomes available, grants the lock to the waiting transaction. Once a transaction finishes, it commits and releases its locks. The primary site follows the two-phase locking technique to lock the data in the tables, which helps achieve concurrency in the DDB and avoids deadlocks.
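The queue-and-grant behaviour of the primary site can be sketched as below. This is a simplified, single-process illustration under assumed names (`PrimarySiteLockManager`, string table names and transaction ids); a real lock manager would also distinguish shared and exclusive lock modes and enforce two-phase locking per transaction.

```python
from collections import defaultdict, deque

class PrimarySiteLockManager:
    """Sketch of a primary-site lock manager: grants or queues lock requests."""
    def __init__(self):
        self.holder = {}                    # table -> transaction holding its lock
        self.waiting = defaultdict(deque)   # table -> FIFO queue of waiting txns

    def request(self, txn, table):
        """Grant the lock if the table is free; otherwise queue the request."""
        if table not in self.holder:
            self.holder[table] = txn
            return True
        self.waiting[table].append(txn)
        return False

    def release(self, txn, table):
        """Release the lock and hand it to the next queued transaction, if any."""
        if self.holder.get(table) != txn:
            return
        if self.waiting[table]:
            self.holder[table] = self.waiting[table].popleft()
        else:
            del self.holder[table]
```

The FIFO queue models the "queues the requests" step: a transaction that finds the table locked simply waits its turn rather than competing for the lock.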
In this approach, locks on data items are managed at one centralized, common site, while the data itself is accessed at the DB site where it actually resides. That means the access time for the data is unaffected, even though the lock is granted by a different site. Apart from this, implementing and managing the primary site is simple and easy.
But this technique also has disadvantages. All lock requests are directed to one single site, and since this is a DDB processing many transactions, the volume of lock and release requests is high. The primary site therefore becomes overloaded, which slows the whole system down. In addition, if the primary site fails, the entire system goes down: there is no backup or recovery for it.
Failure of the primary site can be overcome by designating another site as a backup site. When the primary site fails, the backup site takes over until the primary site recovers.
Primary Copy Technique
In this technique, instead of designating one site to grant all locks, the primary copy of the data item to be accessed is identified and locked. In other words, since the data is distributed over the databases (fragmentation), the requested data item is located on the network and locked at the DB that holds its primary copy. When the request completes, the lock is released.
This method reduces the load on any single site: since the data items are distributed among the databases, the lock-management load is distributed as well, so it does not slow the system down.
But identifying the primary copy of a data item is somewhat tedious, since the data is distributed. A separate distribution list must be maintained and stored at every site / DB in the DDB.
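The distribution-list lookup can be sketched as follows. The site names, item names, and in-memory dict here are purely illustrative assumptions; in a real DDB the distribution list would be replicated at every site and the lock held at the owning site itself.

```python
# Hypothetical distribution list: data item -> site holding its primary copy.
DISTRIBUTION_LIST = {
    "customer": "site_A",
    "orders":   "site_B",
}

locks = {}  # (site, item) -> transaction currently holding the lock

def lock_primary_copy(txn, item):
    """Locate the primary copy of `item` and lock it at its owning site.

    Returns the owning site on success, or None if the item is already locked.
    """
    site = DISTRIBUTION_LIST[item]   # consult the distribution list
    key = (site, item)
    if key in locks:
        return None
    locks[key] = txn
    return site

def unlock_primary_copy(txn, item):
    """Release the lock on `item` at its owning site, if `txn` holds it."""
    key = (DISTRIBUTION_LIST[item], item)
    if locks.get(key) == txn:
        del locks[key]
```

Because each item's lock lives at its own site, transactions touching different items contend at different sites, which is the load-spreading benefit described above.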
Suppose both of these concurrency control mechanisms fail, even with backup sites. How are the data and transactions then recovered?
Primary Site with No Backup Site
Suppose the primary site fails and there is no backup site. Then no other site can manage the transactions. In this case, the system aborts all transactions and restarts the active ones after randomly selecting one of the remaining sites as the new primary site.
Primary Site with Backup Site
If there is a backup site, the system aborts all the transactions and restarts them with the backup site acting as the primary site. Another DB site is then identified as the new backup site for this primary site, and transaction processing proceeds.
Both Primary and Backup Sites Fail
In this case, the system uses a voting technique instead of designated primary and backup sites. A lock request is sent to every DB site where the data item is present, since the item may be replicated at more than one site. If a majority of those sites grant the lock, the transaction obtains the lock on the data item. This locking information is sent back to all the sites, so that any new request must wait for the lock to be released. That may cause other transactions to wait for a long period, which can be avoided by setting a timeout on the wait: if a transaction does not obtain the lock within the timeout, it is aborted and restarted. Likewise, if the transaction fails to obtain a lock grant from the sites, it is aborted.
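The majority-grant decision, with the timeout rule, can be sketched as below. The votes are simulated as a plain list of booleans (one per replica site) and the timeout as a flag, both assumptions for illustration; real systems exchange these as network messages and track elapsed time.

```python
def majority_vote_lock(votes, timeout_expired=False):
    """Decide whether a transaction obtains a lock under the voting technique.

    `votes` holds one boolean per DB site that stores a copy of the data
    item: True if that site grants the lock, False otherwise. The lock is
    obtained only if a strict majority of sites grant it. If the timeout
    expires before a decision, the transaction is aborted (returns False)
    and the caller would restart it.
    """
    if timeout_expired:
        return False  # waited too long: abort and restart the transaction
    grants = sum(1 for v in votes if v)
    return grants > len(votes) // 2  # strict majority required
```

The strict-majority rule guarantees that two conflicting transactions can never both collect a winning set of grants for the same item at the same time, since two majorities must share at least one site.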