The GeForce 300 Series is a family of graphics cards developed by Nvidia, originally slated for production in November 2009,[2][3] but shipping was postponed until some time in March 2010.
On November 27, 2009, Nvidia released its first GeForce 300 series video card, the GeForce 310. However, this card is a rebrand of an older Nvidia model and is not based on the newer Fermi architecture.[4] According to Nvidia's specifications, the GeForce 310 supports the older DirectX 10.1[5] rather than the newer DirectX 11.
The Fermi chip is said to be a very large chip with 512 CUDA cores and 3 billion transistors. The silicon is to be manufactured by TSMC on a 40-nanometer process. This will also be Nvidia's first chip to support DirectX 11. Consumer GeForce cards are expected to come with 1.5 GB of memory, while 3 GB to 6 GB are expected to be made available on the professional Quadro and Tesla versions (that is, 256 MB, 512 MB or 1 GB attached to each of the chip's six independent GDDR5 memory controllers). The chip features ECC protection on the memory, and is believed to be 400% faster than previous Nvidia chips in double-precision floating-point operations; with these features, combined with support for Visual Studio and C++, Nvidia hopes to appeal to high-performance computing users who might presently be using Tesla systems.[6]
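The quoted memory capacities follow directly from the six-controller layout described above; a minimal arithmetic sketch, using only the per-controller sizes given in the text:

```python
# Total memory implied by six independent GDDR5 memory controllers,
# with 256 MB, 512 MB or 1 GB attached to each controller.
controllers = 6
per_controller_mb = [256, 512, 1024]

totals_gb = [controllers * mb / 1024 for mb in per_controller_mb]
print(totals_gb)  # [1.5, 3.0, 6.0] -> 1.5 GB GeForce; 3 GB and 6 GB Quadro/Tesla
```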
On 30 September 2009, Nvidia released a white paper describing the architecture.[7] The chip features 16 'Shader Clusters', each with 32 'Shader Cores' capable of one single-precision operation per cycle or one double-precision operation every other cycle; a 40-bit virtual address space that allows the host's memory to be mapped into the chip's address space, meaning that there is only one kind of pointer, which makes C++ support significantly easier; and a 384-bit-wide GDDR5 memory interface. As with the G80 and GT200, threads are scheduled in 'warps', sets of 32 threads each running on a single shader core. While the GT200 had 16 KB of 'shared memory' associated with each shader cluster and required data to be read through the texturing units if a cache was needed, GF100 has 64 KB of memory associated with each cluster, which can be used either as a 48 KB shared memory with a 16 KB cache, or as a 16 KB shared memory with a 48 KB cache.
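The white paper's figures imply the chip's peak per-clock throughput; a quick sketch of that arithmetic, using only the numbers stated above (16 clusters, 32 cores each, one single-precision operation per cycle and one double-precision operation every other cycle per core):

```python
# Peak shader operations per clock implied by the white paper's figures.
clusters = 16
cores_per_cluster = 32

total_cores = clusters * cores_per_cluster   # the 512 CUDA cores cited above
sp_ops_per_clock = total_cores * 1           # one single-precision op per cycle
dp_ops_per_clock = total_cores * 0.5         # one double-precision op every other cycle

print(total_cores, sp_ops_per_clock, dp_ops_per_clock)  # 512 512 256.0
```

The 2:1 single- to double-precision ratio is what underlies the large claimed double-precision speedup over earlier chips, which ran double-precision far slower relative to single-precision.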
Having already slipped from the original November date to March, the launch is now rumored to be delayed until May.