
[News] Microsoft's mysterious power: the X1's ESRAM theoretical bandwidth soars 88%, from 102GB/s to 192GB/s

60fps vs 30fps?


TOP

Quote:
Originally posted by west2046 at 2013-6-29 08:07

The article says that although PS4 games currently don't look as good as the ONE's (Sony fans won't admit that), although the PS4's memory latency is severe, and although the PS4's audio chip may eat memory and CPU, the ONE's DDR3 bandwidth falls short, its high-speed cache is too small in capacity even if it can reach 192GB/s, and its hardware specs are inferior; but developers ...
What is there to discuss about the audio part? If not for the sake of uniformity and compatibility, PS3 games could have supported real-time 7.1 output of DTS-HD and TrueHD. Are we really still worrying about this kind of resource-allocation problem on the PS4? That's either turning back the clock of history or fretting over nothing.



TOP

Quote:
Originally posted by 小僵尸 at 2013-6-29 09:07


What is there to discuss about the audio part? If not for the sake of uniformity and compatibility, PS3 games could have supported real-time 7.1 output of DTS-HD and TrueHD. Are we really still worrying about this kind of resource-allocation problem on the PS4? That's either turning back the clock of history or fretting over nothing.
EUROGAMER is just stating an objective fact: however much of the resources it ends up taking, it still takes some, and the X1 does have a dedicated audio chip.


TOP

Another focus of the PS4 development team was moving many basic functions into dedicated processing units, so that no general resources need to be set aside to handle them, which makes the console more flexible. For example, a dedicated hardware unit handles audio, so in-game voice chat doesn't eat into the game's resources. The same goes for video compression and decompression. The audio unit is also responsible for decompressing the many MP3 streams used in games.

If audio/video processing wasn't a big problem even on a memory-starved machine like the PS3, there's no reason for it to become one on the PS4.
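As a toy illustration of the offloading argument in the quoted paragraph, a minimal sketch; all per-stream costs here are made-up placeholder numbers, not measurements from either console:

```python
# Toy frame-budget model of audio offloading. All costs are hypothetical.
FRAME_BUDGET_MS = 33.3  # one frame at 30 fps

def cpu_ms_left(mp3_streams: int, offloaded: bool) -> float:
    """CPU milliseconds left for game code after audio decode."""
    per_stream_ms = 0.0 if offloaded else 0.3  # assumed software-decode cost
    return FRAME_BUDGET_MS - per_stream_ms * mp3_streams

print(cpu_ms_left(64, offloaded=False))  # software decode eats into the frame
print(cpu_ms_left(64, offloaded=True))   # dedicated unit leaves the budget intact
```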

TOP

Even so, the Xone still has no real performance advantage.

That 192GB/s bandwidth only applies to the 32MB of ESRAM.

The external bandwidth is still the sore spot.

One kind of game does benefit from this sort of architecture, e.g. Nintendo's Smash Bros., because each frame repeats a very high proportion of the previous frame's data.

TOP

Congratulations to the Xone on closing the performance gap by a tiny bit.

Really just "a tiny bit".

TOP

posted by wap, platform: Chrome
Quote:
Originally posted by @小僵尸 at 2013-6-29 09:20
Another focus of the PS4 development team was moving many basic functions into dedicated processing units, so that no general resources need to be set aside to handle them, which makes the console more flexible. For example, a dedicated hardware unit handles audio, so in-game voice chat doesn't eat into the game's resources. The same goes for video compression and decompression. The audio unit is also responsible for decompressing the many MP3 streams used in games.

If audio/video processing wasn't a big problem even on a memory-starved machine like the PS3, there's no reason for it to become one on the PS4.
It's not the same. The PS3 used a separate memory pool, so audio use didn't touch video memory.
The PS4 this time has unified memory (strictly speaking, it uses video memory as main memory), and that one pool is all there is, so audio naturally takes a share of the bandwidth and performance (however small the actual impact; EUROGAMER is just stating facts).
The X1 has the ESRAM as its "video memory", which is relatively more independent.

Last edited by KoeiSangokushi on 2013-6-29 09:33 via mobile

TOP

Even if this paltry 32MB of ESRAM had a bandwidth above 200, it would still have no advantage.

And it makes programming comparatively more of a hassle.

[ Last edited by AngryMulch on 2013-6-29 10:23 ]

TOP

posted by wap, platform: Chrome
Quote:
Originally posted by @AngryMulch at 2013-6-29 09:34
Even if this paltry 32MB of ESRAM had a bandwidth above 200, it would still have no advantage.

And it makes programming comparatively more of a hassle.
Microsoft's just-released DX11.2 tackles exactly that: the API update (its tiled-resources feature) is meant to remove the programming hassle of making use of the ESRAM.

TOP

What does this have to do with whether the memory is unified or not?

PlayStation machines have had audio/video handled by independent modules ever since the PS1.

Even in PC use, look at how many performance resources a discrete sound card or a dedicated video-editing card actually takes.

Unless the PS4 does its audio/video decoding in software on the CPU,

singling this out for discussion is meaningless.

TOP

Quote:
Originally posted by 你老闆 at 2013-6-29 02:43

To the best of their knowledge, 800MHz remains the clock speed of the graphics component of the processor, and the main CPU is operating at the target 1.6GHz. In both respects, this represents parit ...
To the best of their knowledge
Of course MS wouldn't tell them about a downclock. I'm not sure whether memory bandwidth can be counted as read plus write, but even if it can, the original 102 should then become 102×2=204. How is it only 192?
Someone overseas offered an explanation:
Right, that 204 is the theoretical max if they could just read and write wherever they wanted on every clock cycle. A real-world 133 GB/s would be much more realistic.
However, they gave 192 GB/s as a theoretical max. If this is the accurate figure, there are precisely two possibilities why:
1-The methodology they use to simultaneously read and write to the eSRAM is mathematically proven to work, at best, on ~8/9 clock cycles. This would explain the 1.88 factor of increase, as even in a perfect world you could not double the bandwidth.
2-The theoretical max assumes with perfect optimization read/write is always possible, but the GPU has been downclocked 50 MHz so the bandwidth in each direction is now 96 GB/s.
The first scenario is less likely, imo, because 8/9 is a really weird ratio for a binary system. Also, there is very rarely a reason to do anything but double the max rate of theoretical access if you discover sometimes simultaneous read/write is possible. In theory, one should be able to read and write completely separate memory addresses exclusively, which would truly double the rate.
I am not an expert, and the article may not be accurate, but this sure looks a lot like a downclock.
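The arithmetic behind the downclock theory is easy to check. A minimal sketch; the 128-bytes-per-cycle (1024-bit) internal ESRAM path per direction is an assumption (it is the commonly reported figure), while the 800MHz clock, the 50MHz downclock, and the 192GB/s number come from the posts above:

```python
# Toy check of the ESRAM bandwidth figures discussed above.
BYTES_PER_CYCLE = 128  # assumed 1024-bit path per direction

def bandwidth_gb_s(clock_mhz: float) -> float:
    """Theoretical one-direction bandwidth in GB/s at a given clock."""
    return clock_mhz * 1e6 * BYTES_PER_CYCLE / 1e9

print(bandwidth_gb_s(800))      # ~102.4 GB/s, the originally quoted figure
print(2 * bandwidth_gb_s(800))  # ~204.8 GB/s if read+write doubled cleanly
print(2 * bandwidth_gb_s(750))  # ~192.0 GB/s, matching a 50 MHz downclock
```

Note how a 750MHz clock reproduces the 192GB/s figure exactly, which is why the downclock reading looks so tempting.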

TOP

posted by wap, platform: Chrome
Quote:
Originally posted by @BlackGod at 2013-6-29 11:12
To the best of their knowledge
Of course MS wouldn't tell them about a downclock. I'm not sure whether memory bandwidth can be counted as read plus write, but even if it can, the original 102 should then become 102×2=204. How is it only 192?
Someone overseas offered an explanation:
Right, that 204 is the theoretical max if they could just read and write wherever they wanted on every clock cycle. A real-world 133 GB/s would be much more realistic.
However, they gave 192 GB/s as a theoretical max. If this is the accurate figure, there are precisely two possibilities why:
1-The methodology they use to simultaneously read and write to the eSRAM is mathematically proven to work, at best, on ~8/9 clock cycles. This would explain the 1.88 factor of increase, as even in a perfect world you could not double the bandwidth.
2-The theoretical max assumes with perfect optimization read/write is always possible, but the GPU has been downclocked 50 MHz so the bandwidth in each direction is now 96 GB/s.
The first scenario is less likely, imo, because 8/9 is a really weird ratio for a binary system. Also, there is very rarely a reason to do anything but double the max rate of theoretical access if you discover sometimes simultaneous read/write is possible. In theory, one should be able to read and write completely separate memory addresses exclusively, which would truly double the rate.
I am not an expert, and the article may not be accurate, but this sure looks a lot like a downclock.
Two possibilities. One: a downclock to 750MHz, with both read and write reaching 96GB/s.
The other: no downclock, and 192GB/s means read and write can't both reach 102GB/s; for example, if reads reach 102GB/s, writes can only manage 90GB/s.

Last edited by KoeiSangokushi on 2013-6-29 11:14 via mobile
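Spelling out the two decompositions from that post, a purely illustrative sketch; the 102/90 split is just the example given above, not a published spec:

```python
# The two readings of the 192 GB/s figure, per the post above.
# Scenario A: GPU downclocked 800 -> 750 MHz, symmetric read/write.
read_a, write_a = 96.0, 96.0   # GB/s each direction at 750 MHz
# Scenario B: no downclock; read and write can't both peak at once.
read_b = 102.0                 # GB/s, reads at the full 800 MHz rate
write_b = 192.0 - read_b       # GB/s left for writes -> 90.0

assert read_a + write_a == 192.0
assert read_b + write_b == 192.0
print(f"Scenario A: {read_a}+{write_a}, Scenario B: {read_b}+{write_b}")
```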

TOP

posted by wap, platform: Android

The bandwidth in the article can't be reached even under ideal conditions.
I've read some articles and comments on foreign sites, and my sense is that the GPU was downclocked by 50MHz; then, not knowing how to explain it away, the engineers came up with this theoretical simultaneous-read/write notion to paper over the fact.

TOP

posted by wap, platform: Android
Quote:
Originally posted by @KoeiSangokushi at 2013-6-29 11:14
posted by wap, platform: Chrome

Two possibilities. One: a downclock to 750MHz, with both read and write reaching 96GB/s.
The other: no downclock, and 192GB/s means read and write can't both reach 102GB/s; for example, if reads reach 102GB/s, writes can only manage 90GB/s.

Last edited by KoeiSangokushi on 2013-6-29 11:14 via mobile
It's definitely a downclock, sigh.

TOP

posted by wap, platform: Chrome
Quote:
Originally posted by @madrista7 at 2013-6-29 11:18
posted by wap, platform: Android

It's definitely a downclock, sigh.
And what do you plan to do if it turns out there was no downclock?

TOP
