9/2/2023 Fp64 graphics

Predicting the future is hard, even with supercomputers. And maybe specifically when you are talking about predicting the future of supercomputers.

As we noted many years ago, the fact that AI training workloads using convolutional neural networks came along with enough data to actually start working at the same time that the major HPC centers of the world had been working with Nvidia for several years on a GPU offload approach for simulation and modeling was a very happy coincidence. A harmonic convergence, as we called it at the time, that this massively parallel processor could do both HPC simulation and modeling and AI training.

But only five years into the AI revolution, which began in earnest in 2012 when image recognition software could beat the accuracy of human beings performing the same task, we were wondering if this happy overlap between HPC and AI could last. We thought it could go either way, and said as much. And in the summer of 2019, in the wake of the iterative refinement work done to use mixed-precision math units to get to the same answer as FP64 compute on the Linpack benchmark, and ahead of Nvidia's "Ampere" GA100 GPU launch the following spring, we took another run at this HPC-AI divergence idea. And that was before we saw that the balance of vector core and tensor core computing in the Nvidia "Ampere" A100 GPU accelerators was going to heavily emphasize AI training on mixed-precision Tensor Cores, with HPC workloads using FP64 vector units taking a bit of a backseat in the architecture.
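The mixed-precision iterative refinement mentioned above works roughly like this: factor and solve the linear system in a fast, reduced precision, then repeatedly correct the solution using residuals computed in full FP64. Below is a minimal NumPy sketch of the general idea; the function name is our own, float32 stands in for the GPU's reduced-precision units, and this is an illustration of the technique, not the actual HPL-AI/Linpack implementation.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=10):
    """Sketch of mixed-precision iterative refinement for Ax = b.

    The expensive solve runs in float32 (a stand-in for fast
    reduced-precision hardware); residual correction runs in float64,
    recovering FP64-level accuracy for well-conditioned systems.
    """
    A32 = A.astype(np.float32)
    # Initial solve in low precision (proxy for tensor-core math).
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x  # residual computed in full FP64
        # Correction solve, again in low precision.
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d
    return x
```

Each pass cuts the error roughly by the ratio of the two precisions, so a handful of cheap low-precision solves plus FP64 residuals can match a direct FP64 factorization on well-conditioned problems.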