Posted on 2018-10-17 17:49 · Japan
Last edited by gzfrozen on 2018-10-17 17:53
Quoting Risbb (posted 2018-10-17 17:10):
I grabbed it via sci-hub and read it. It's just a short introductory piece for a SIGGRAPH 2018 course/workshop, with no technical details. ...
To my surprise, I also found this description on NVIDIA's official site:
DLSS requires a training set of full resolution frames of the aliased images that use one sample per pixel to act as a baseline for training. Another full resolution set of frames with at least 64 samples per pixel acts as the reference that DLSS aims to achieve. At runtime, a full resolution frame with motion vectors is required in addition to the input image to be anti-aliased. The motion vectors are used to temporally transform the previously anti-aliased frame to align with the current frame, enabling the algorithm to use temporal information. Training also requires a variety of scenes from the game in order to generate the best results.
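The temporal part of that description, warping the previously anti-aliased frame into the current frame's coordinates using per-pixel motion vectors, can be sketched roughly as below. This is only an illustrative assumption of how such reprojection works: the function name is made up, nearest-neighbor sampling is used for simplicity, and a real GPU implementation would use bilinear filtering plus history-rejection logic.

```python
import numpy as np

def reproject_previous_frame(prev_frame, motion_vectors):
    """Warp last frame's anti-aliased output to align with the current frame.

    prev_frame:     (H, W, 3) float array, previous anti-aliased frame
    motion_vectors: (H, W, 2) float array, per-pixel (dy, dx) offsets that
                    point from each current-frame pixel back to where that
                    surface was in the previous frame
    """
    h, w, _ = prev_frame.shape
    ys, xs = np.indices((h, w))
    # Follow each motion vector back to its source pixel (nearest neighbor),
    # clamping to the frame bounds.
    src_y = np.clip(np.round(ys + motion_vectors[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + motion_vectors[..., 1]).astype(int), 0, w - 1)
    return prev_frame[src_y, src_x]

# Sanity check: a static scene (zero motion) reprojects onto itself.
frame = np.random.rand(4, 4, 3)
zero_mv = np.zeros((4, 4, 2))
assert np.allclose(reproject_previous_frame(frame, zero_mv), frame)
```

The reprojected frame then serves as the temporal history input alongside the current aliased frame, which is what lets the network accumulate information across frames.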
According to this description, the GPU actually runs at native 4K and renders the frame without anti-aliasing; the AI then performs the anti-aliasing. There is no interpolation or upscaling step anywhere in the process, and none is needed.
If that's the case, RTX performance is even higher than expected.