21 May 2021
09:00 Master's Defense (fully remote)
Title
Acceleration of seismic stacking methods with GPUs in the computational cloud
Student
Gustavo Ciotto Pinton
Advisor
Edson Borin
Brief summary
Marine subsurface mapping techniques are fundamental to several industry applications, most notably oil and gas exploration. These techniques are important mainly because they allow geophysicists to obtain, with high precision, parameters such as the propagation velocity of the medium, as well as models of the regions of interest. Because they process very large volumes of seismic measurements and contain many loops, these methods tend to require a great deal of computation and, therefore, a lot of processing time on traditional architectures with a few dozen CPUs. These characteristics invite us to propose and explore ways to make their execution faster and more efficient. Given that such methods contain many loops whose iterations are largely independent of one another, parallelization, whether through GPUs, through nodes spread across the computational cloud, or both, becomes the simplest and most viable way to speed them up. In this work, we therefore implemented a parallelization method, supporting both platforms compatible with the CUDA framework and those compatible with the OpenCL standard, based on the differential evolution genetic algorithm, which searches for the parameters of the medium that maximize the semblance measure used for stacking under three traveltime models of increasing complexity: Common Midpoint (CMP), Zero-Offset Common Reflection Surface (ZO CRS), and Offset Continuation Trajectory (OCT). A data selection technique was also proposed for each of these models in order to minimize the amount of data transferred to the accelerators at each iteration. Finally, these solutions were integrated into the SPITS framework to distribute computational tasks among the nodes of a cloud. Compared with solutions found in the literature for the CMP and ZO CRS traveltime models, the performance obtained in this work was superior. For executions on compute nodes with multiple GPUs, we verified that scalability is preserved, that is, the execution time decreases in the same proportion as the number of GPUs grows. The same behavior is observed for executions distributed across several nodes of a computational cloud through the SPITS framework.
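At the core of the approach summarized above is a standard differential evolution search over the medium parameters, with the semblance of the stacked traces as the fitness to maximize. The sketch below is only illustrative and is not taken from the thesis: it shows the classic DE/rand/1/bin loop in Python, with a toy placeholder objective (evaluate_semblance) and made-up parameter bounds standing in for the CMP/ZO CRS/OCT traveltime parameters, whereas in the actual work the semblance evaluation runs on GPUs via CUDA or OpenCL and execution is distributed with SPITS.

import numpy as np

# Hypothetical stand-in for the semblance of traces stacked along the
# traveltime curve defined by `params`; in the thesis this evaluation is
# performed on the GPU for the CMP, ZO CRS or OCT model.
def evaluate_semblance(params):
    target = np.array([2000.0, 0.1])  # illustrative optimum (e.g. velocity, dip)
    return -np.sum(((params - target) / target) ** 2)

# Classic differential evolution (DE/rand/1/bin) maximizing the objective.
def differential_evolution(bounds, pop_size=32, generations=200, F=0.8, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = bounds.shape[0]
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([evaluate_semblance(p) for p in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals other than i.
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Crossover: mix mutant and current individual component-wise.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True  # guarantee at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            # Selection: keep whichever candidate has the higher semblance.
            f_trial = evaluate_semblance(trial)
            if f_trial > fitness[i]:
                pop[i], fitness[i] = trial, f_trial
    best = int(np.argmax(fitness))
    return pop[best], fitness[best]

# Illustrative usage with made-up parameter bounds.
bounds = np.array([[1500.0, 5500.0], [-0.5, 0.5]])
best_params, best_semblance = differential_evolution(bounds)
print(best_params, best_semblance)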
Examination Board
Members:
Edson Borin IC / UNICAMP
Alba Cristina Magalhães Alves de Melo CIC / UnB
Tiago Tavares Leite Barros CT / UFRN
Hermes Senger DC / UFSCar
Substitutes:
Sandro Rigo IC / UNICAMP
Alexandro José Baldassin IGCE / UNESP