Gibbs sampling block updating
The root node was responsible for generating and partitioning the data matrix, transmitting the submatrices, updating and broadcasting the item parameters, and recording execution time, in addition to assuming the same duties as the slave nodes.
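The root node's workflow can be sketched in pure Python, with the cluster communication replaced by plain function calls so the structure is visible without an MPI runtime. This is an illustrative sketch only, not the original implementation; the function names and update callbacks are hypothetical.

```python
import time

def root_node(data, n_workers, n_iters, update_persons, update_items):
    """Illustrative root-node loop: partition row-wise, then alternate
    person-parameter and item-parameter updates for n_iters iterations."""
    # 1. Generate/partition: contiguous row blocks, one per worker.
    n = len(data)
    size = -(-n // n_workers)                      # ceiling division
    blocks = [data[i:i + size] for i in range(0, n, size)]

    start = time.perf_counter()
    items = None
    for _ in range(n_iters):
        # 2. Each node (root included) updates the person parameters
        #    for the rows in its own block.
        persons = [update_persons(block, items) for block in blocks]
        # 3. The root updates the item parameters and "broadcasts" them
        #    (here, simply carried into the next iteration).
        items = update_items(data, persons)
    # 4. The root records execution time.
    return items, time.perf_counter() - start
```

The callbacks stand in for the actual Gibbs sampling steps; in a real cluster implementation the list comprehension in step 2 would be replaced by scatter/gather communication.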
Each node in the cluster has two quad-core Intel Xeon CPUs clocked at 2.3 GHz, 8 GB of RAM, 90 TB of storage, and a 64-bit Linux operating system.
Since a large number of iterations is needed for the Markov chain to reach convergence, the algorithm is computationally intensive and requires a considerable amount of execution time, especially with large datasets. An earlier study developed a parallel algorithm that used domain decomposition to divide the updating tasks into blocks.
In particular, their approach was to assign each processor a column block of the data matrix. It is reasonable to believe that if the domain decomposition is done differently, such as the one depicted in Figure 2, a greater speedup can be achieved.
The present study focuses on developing a parallel algorithm based on such a row-wise decomposition.
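The row-wise decomposition amounts to splitting the N-person by K-item response matrix into contiguous row blocks, one per processor, so that each processor owns the person parameters for its rows while the much smaller set of item parameters is shared. A minimal sketch of such a partitioning, with an illustrative function name not taken from the original implementation:

```python
def row_blocks(n_persons, n_procs):
    """Return (start, stop) row ranges, one per processor,
    splitting n_persons rows as evenly as possible."""
    base, extra = divmod(n_persons, n_procs)
    ranges, start = [], 0
    for p in range(n_procs):
        # The first `extra` processors take one additional row each.
        stop = start + base + (1 if p < extra else 0)
        ranges.append((start, stop))
        start = stop
    return ranges

# Example: 10 persons over 3 processors
print(row_blocks(10, 3))  # [(0, 4), (4, 7), (7, 10)]
```

Each processor would then run the person-parameter updates for its own row range and receive the current item parameters by broadcast.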
In their implementation, the person parameters were communicated between the root and the processor nodes.
In spite of its many advantages, fully Bayesian estimation is computationally expensive, which further limits its practical application.
It is hence important to seek ways to reduce the execution time.
Given that fully Bayesian estimation of IRT models requires at least 20 to 30 times more subjects than test items, as is typical in a testing situation, it is believed that communication overhead can be reduced if item parameters are communicated instead of person parameters.
Hence, an alternative approach is to decompose the data matrix and the person parameters into row blocks.