Since 1978
Published in Sarov (Arzamas-16), Nizhegorodskaya oblast

RUSSIAN FEDERAL
NUCLEAR CENTER -
ALL-RUSSIAN RESEARCH INSTITUTE
OF EXPERIMENTAL PHYSICS
 




FORMAT FOR DESCRIPTION OF NON-REGULAR POLYHEDRAL GRID IN THE TIM TECHNIQUE

A. A. Voropinov, S. S. Sokolov, A. I. Panov, I. G. Novikov
VANT. Ser.: Mat. Mod. Fiz. Proc. 2007. Issue 3-4. P. 55-63.

The TIM technique solves time-dependent 3D continuum mechanics problems on arbitrarily structured, non-regular polyhedral Lagrangian grids. When developing the foundation of the technique, the authors faced the problem of choosing a format to describe the grid structure. The storage format should be economical, that is, it should require the minimum amount of main memory while still supporting the implementation of the technique's computational algorithms. Three universal formats have been investigated, and the "bound-by-bound" storage structure has been chosen for the TIM technique: a list of bounds is stored for each cell, a pair of adjacent cells and a list of nodes are stored for each bound, and the index of one incident bound is stored for each node.



APPLICATION OF OpenMP INTERFACE FOR TIM PARALLELIZATION

A. A. Voropinov, I. G. Novikov, S. S. Sokolov
VANT. Ser.: Mat. Mod. Fiz. Proc. 2007. Issue 3-4. P. 74-82.

The computational technique TIM is intended for solving time-dependent multidimensional continuum mechanics problems on arbitrarily structured Lagrangian grids. The technique supports computations for 2D problems (TIM-2D) in cylindrical and Cartesian coordinates and for 3D problems (TIM-3D) in Cartesian coordinates. To reduce run times, TIM implements parallelization in the shared-memory model using the OpenMP interface. Parallelization is performed by adding OpenMP directives to each loop whose iterations are independent of one another. The computational modules for gas dynamics, elasticity-plasticity, magnetohydrodynamics, two-flow and two-temperature behavior, and grid maintenance, as well as a number of support procedures, have been parallelized. Each loop has been parallelized independently; in some cases the algorithms used for sequential computations had to be revised. Parallelization covers the computational modules that account for 99% of the total run time of sequential computations. The algorithms implemented in the code have been verified on a number of test, methodical, and production runs. The average efficiency of computations on 8 processors is 85%.












 
 
 
© FSUE "RFNC-VNIIEF", 2000-2024