NVIDIA has planned to drop support for GPUs with the Tesla architecture (compute capability 1.x) in upcoming releases of the CUDA Toolkit. In fact, GPUs with compute capability 1.0 have already been removed as a target device in CUDA Toolkit 6.5, released in August 2014: with toolkit 6.5, you can no longer specify compute_10, sm_10 for code generation. Not only that, NVIDIA has also removed CC 1.0 from the comparison tables in the Programming Guide 6.5.
The default architecture has been changed to compute_20, sm_20 in the rules file of CUDA Toolkit 6.5. The remaining Tesla architectures, i.e. CC 1.1, 1.2 and 1.3, are still supported as targets, but are marked as deprecated. The compiler generates the following warning if we attempt to compile code for the Tesla architecture with CUDA 6.5:
CUDACOMPILE : nvcc warning : The 'compute_11', 'compute_12', 'compute_13', 'sm_11', 'sm_12', and 'sm_13' architectures are deprecated, and may be removed in a future release.
According to the release notes of the CUDA Toolkit 7.0 early access version, support for the Tesla architecture has been dropped altogether. The minimum target architecture supported by CUDA Toolkit 7.0 is compute_20, sm_20.
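Applications built with the newer toolkits can verify at run time that the device actually meets the compute_20 minimum. Here is a minimal sketch using the CUDA runtime API (the device index and error handling are illustrative):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);  // query device 0
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceProperties failed: %s\n",
                cudaGetErrorString(err));
        return 1;
    }
    // Binaries built with CUDA 7.0 require compute capability 2.0 (Fermi) or higher.
    if (prop.major < 2) {
        fprintf(stderr, "GPU %s is compute %d.%d; at least 2.0 is required.\n",
                prop.name, prop.major, prop.minor);
        return 1;
    }
    printf("GPU %s (compute %d.%d) is supported.\n",
           prop.name, prop.major, prop.minor);
    return 0;
}
```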
It is important to mention that it is the Tesla architecture that is being deprecated, not the Tesla series GPUs (which are based on the Fermi and Kepler architectures).
This deprecation is logical, as the Tesla architecture is very basic and has been around for a long time. Here is a list of important features that are valuable to CUDA applications but are not supported by the Tesla architecture:
- Concurrent kernel execution
- CUDA surfaces
- Larger shared memory (48 KB per multiprocessor, up from 16 KB)
- 3D grid of thread blocks
- Dynamic parallelism
- Floating point atomic operations
These features help achieve better speedups of CUDA applications over their CPU counterparts through better utilization of the GPU architecture. Developers are now encouraged to use them in their legacy CUDA applications to make room for further optimizations.
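For instance, single-precision atomicAdd() on global memory, one of the features listed above, requires compute capability 2.0, so a reduction like the following sketch could not target the Tesla architecture at all (the kernel and variable names are illustrative):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Sums all elements of `in` into *out using a float atomic add.
// atomicAdd() on a float in global memory requires compute capability 2.0+.
__global__ void atomicSum(const float *in, int n, float *out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(out, in[i]);
}

int main() {
    const int n = 1024;
    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, sizeof(float));
    cudaMemset(d_out, 0, sizeof(float));

    // Fill the input with ones on the host and copy it to the device.
    float h_in[n];
    for (int i = 0; i < n; ++i) h_in[i] = 1.0f;
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    atomicSum<<<(n + 255) / 256, 256>>>(d_in, n, d_out);

    float h_out = 0.0f;
    cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum = %f\n", h_out);  // 1024 ones summed -> 1024.0

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Compiling this with `-arch=sm_13` fails, because the float overload of atomicAdd() simply does not exist on that architecture.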