HNIZDOnVIDIA - graphics cards, technology, applications, games
Information on nVidia graphics cards - benchmarks, reviews, hands-on experience, overclocking, cooling, drivers, games, accelerated applications.
Also Cyberpunk2077 - the technical side, settings, gameplay. The reason for this club is the arbitrary and unjustified restriction of posting in the original forum by its moderator.

PhysX
CUDA, development

Drivers: nVidia download

    NVCUVENC dll download

Trolling, OT, and flaming are rewarded with curved fruit (i.e. a ban).
MCKIDNEY --- ---
HNIZDO: No doubt about it, FreeSync is a terrible piece of crap.

It's just that G-Sync is another win-more technology, and anyone who hoped it would solve the problems of weaker machines is going to be disappointed.
HNIZDO --- ---
MCKIDNEY: Sure, there's a buffer, so there will be lag, but at least it keeps working all the way down to zero fps. I get 30-60 fps in Arma, so they can shove their FreeSync if it's effectively switched off the whole time. I don't get how those idiots could ship this.

And one more thing - overdrive. The G-Sync module computes overdrive dynamically; FreeSync doesn't, so I'm awfully curious about those tests... Even on current TN panels it causes ghosting, because on the BenQ, for example, overdrive is not used while FreeSync is active (!). A TN panel can still get away with that, it's fast. But imagine an IPS matrix without overdrive.
MCKIDNEY --- ---
HNIZDO: From what I hear from people I know who game on G-Sync, it's nothing to write home about either.
Almost all of them praised how it works at around 60 FPS and above.
But they also complained about noticeable lag at around 40 FPS.
HNIZDO --- ---
PRAASHEK: ohh, right, humooour..
PRAASHEK --- ---
HNIZDO: ahem... I was only joking; manufacturing on a 16nm process is one thing, a 16nm-sized chip is quite another, ffs.
HNIZDO --- ---
PRAASHEK:
1) Chip dimensions stopped scaling linearly with the process node long ago; "16nm + FinFET" is really something like a 22nm process with 16nm FinFET gates.
2) Chips will be more complex and faster, and probably about the same size as today.
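Back-of-the-envelope, with my own numbers and assuming ideal quadratic scaling (which real layouts never reach): a straight 28nm-to-16nm shrink would suggest roughly (16/28)^2 ≈ 0.33x the area, but with the interconnect pitch closer to a 20-22nm node the realistic density gain is more like (20/28)^2 ≈ 0.51x - which is exactly why "more complex, same die size" is the likely outcome.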
PRAASHEK --- ---
HNIZDO: those are going to be really small chips
HNIZDO --- ---
NVIDIA will use Samsung's fabs - can we look forward to 14nm chips? | Svět hardware
    http://www.svethardware.cz/nvidia-vyuzije-tovarny-samsungu-muzeme-se-tesit-na-14nm-cipy/40227
HNIZDO --- ---
AMD Radeon Rx 300: mostly rebranded older versions | Svět hardware
    http://www.svethardware.cz/amd-radeon-rx-300-vetsinou-jde-o-preznacene-starsi-verze/40228

So if you're waiting for AMD's next generation: a new core concerns only the top model, and maybe the second-highest, the R380/390X. The expected price of the R390X is $/€700, i.e. over 23k CZK (roughly €700 at 2015 exchange rates plus 21% VAT - my own rough conversion). That would be consistent with a big, expensive 28nm die and expensive HBM VRAM.
HNIZDO --- ---
So we can safely forget about FreeSync.

From the very beginning I've been writing everywhere how worried I am about what happens when fps drops below the panel's limit. While G-Sync multiplies frames out of a buffer, FreeSync has no buffer. It's exactly the disaster I've been expecting from the start.

    AMD FreeSync First Impressions and Technical Discussion | Inside and Outside the VRR Window
    http://www.pcper.com/...eSync-First-Impressions-and-Technical-Discussion/Inside-and-Outside-VRR-Wind

    For that situation, the BENQ panel behaves like a fixed 40 Hz refresh rate display, and does what you would expect if V-Sync is on or off (judder or tearing).
    The take home point from my observations is that when gaming lower than the variable range, FreeSync panels retain the disadvantages of the V-Sync on/off setting, but amplify those effects as the panel is refreshing at an even lower rate than a standard display (e.g. 60 Hz).

That means that if fps drops below the panel's lower limit, we get either horrible tearing or, with V-Sync enabled, horrible lag.
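For illustration, here is a minimal C++ sketch of my own of the frame-multiplication idea that the G-Sync module's buffer enables and that these first FreeSync scalers lack. The 40-144 Hz VRR window is an assumed example, not any specific monitor's spec:

#include <cmath>
#include <cstdio>

// Assumed VRR window of the panel (Hz) - hypothetical example values.
constexpr double kMinHz = 40.0;
constexpr double kMaxHz = 144.0;

// Low-framerate compensation: when fps falls below the panel's minimum
// refresh, re-scan each buffered frame n times so the effective refresh
// (n * fps) lands back inside the VRR window.
int scanoutsPerFrame(double fps) {
    if (fps >= kMinHz) return 1;  // inside the window: one scanout per frame
    int n = static_cast<int>(std::ceil(kMinHz / fps));
    while (n > 1 && n * fps > kMaxHz) --n;  // stay under the panel maximum
    return n;
}

int main() {
    const double tests[] = {144.0, 60.0, 35.0, 20.0};
    for (double fps : tests) {
        int n = scanoutsPerFrame(fps);
        std::printf("%6.1f fps -> %d scanout(s) -> effective %6.1f Hz\n",
                    fps, n, n * fps);
    }
}

Without that re-scan step a panel has nowhere to go below its minimum refresh except fixed-rate behaviour, which is exactly the V-Sync on/off fallback the quote above describes.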
HNIZDO --- ---
Unofficial performance tests of the Radeon R9 390X and the GeForce Titan X – Živě.cz
    http://www.zive.cz/...px#utm_medium=selfpromo&utm_source=zive&utm_campaign=RSSfeed&utm_reader=feedly

According to the tests so far, the R390X won't beat the Titan; it'll be a draw. Final drivers may add maybe another 10% on top. The price is estimated at $700, and it's not entirely clear whether that means the air-cooled 4GB version or the water-cooled 8GB version.

It should be noted that according to the preliminary tests, the R390X reaches DOUBLE the performance of the R290X at some resolutions. That's a substantial generational step, even if it's paid for in power draw, price, and a water cooler.
HNIZDO --- ---
GeForce GTX Titan-X: the most powerful graphics chip on test
    http://pctuning.tyden.cz/...rty/33839-geforce-gtx-titan-x-nejvykonejsi-graficky-cip-v-testu?start=10

Performance really has grown noticeably. Even so, the card can't run some current games maxed out at 4K (and I mean without frills like SSAA or supersampling). Buying this card for Full HD is IMHO nonsense; it is quite certainly the first usable single-chip card for 4K gaming, on top of that with low power draw, quiet air cooling, and unusually high OC potential, and it deserves credit for that first.
HNIZDO --- ---
A bit about DX12

    D3D12 articles - so much misunderstandings and miscommunications - AnandTech Forums
    http://forums.anandtech.com/showthread.php?t=2422223

I have read many articles about D3D12, and I think many of them raise more questions than they answer. I'm tired of the misunderstandings and miscommunication around hardware support, so here is a summary of the new API. Please understand that even though I'm posting here anonymously, I have to keep some information secret. This post is based on public documentation, and these specs are finalized. Sorry to say, but most tech journalists are lazy these days, and they don't do even basic research for their articles.

    Here is a simple FAQ:
- Does D3D12 require new hardware?
No! The API will work fine on existing GPUs as long as a D3D12 driver exists for them. The actual hardware support has already been announced.
    - What about the features?
Some features will go, and some features will come.
The low-level API simplifies access to the hardware. In the past, many new features came into the API because the driver hid the GPU memory from the application. So every new thing had to be implemented in the API, and then a new driver introduced support for it; only after that could the application access the new feature. D3D12 allows explicit access to the GPU memory, so some earlier features will not be accessible in D3D12 in their "traditional D3D11 form". But this is not a problem, because with explicit memory access all of these (and many more) can be implemented in the application. For example, tiled resources will be gone in their current form, but it is possible to write your own implementation of them; a sketch of the underlying mechanism follows below.
The resource model is also advancing; for example, Typed UAV Load will be a new feature.
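To make "explicit access to the GPU memory" concrete, here is a minimal C++ sketch of my own (not from the quoted post): the application reserves a heap itself and places a resource into it at an offset it manages, which is the building block an application-side replacement for tiled resources would use. Error handling is omitted, and device is assumed to be an existing ID3D12Device.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void PlaceBufferInOwnHeap(ID3D12Device* device) {
    // 1) Reserve a 64 MiB heap of GPU-local memory ourselves.
    D3D12_HEAP_DESC heapDesc = {};
    heapDesc.SizeInBytes = 64 * 1024 * 1024;
    heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
    heapDesc.Alignment = D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT;
    heapDesc.Flags = D3D12_HEAP_FLAG_ALLOW_ONLY_BUFFERS;
    ComPtr<ID3D12Heap> heap;
    device->CreateHeap(&heapDesc, IID_PPV_ARGS(&heap));

    // 2) Place a 1 MiB buffer at an offset we choose: the driver no longer
    //    hides the memory, the application does the sub-allocation.
    D3D12_RESOURCE_DESC bufDesc = {};
    bufDesc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    bufDesc.Width = 1024 * 1024;
    bufDesc.Height = 1;
    bufDesc.DepthOrArraySize = 1;
    bufDesc.MipLevels = 1;
    bufDesc.Format = DXGI_FORMAT_UNKNOWN;
    bufDesc.SampleDesc.Count = 1;
    bufDesc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
    ComPtr<ID3D12Resource> buffer;
    device->CreatePlacedResource(heap.Get(), 0 /*HeapOffset*/, &bufDesc,
                                 D3D12_RESOURCE_STATE_COMMON, nullptr,
                                 IID_PPV_ARGS(&buffer));
}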
- Will these new features require new hardware?
The best answer is yes and no. This is a complicated question, and it's hard to answer while the specs are not public. But let's say Typed UAV Load will require hardware support. The GCN-based Radeons can support it, as can the Maxwell v2 (GM206/GM204) architecture. Maybe more NVIDIA hardware can access the feature, but I don't know, because they don't disclose what is possible with Maxwell v1/Kepler/Fermi. Intel might support it, but I'm not familiar with those iGPUs.
But many of these new features can be... I don't want to say emulated, but some workaround is possible. So even if the hardware support is not present, the actual effect might be achievable on all GPUs. Of course, these workarounds will come with some performance hit.

    These are the most important things to know.

There are some other important things, like the binding model. I have frequently read that D3D12 is bindless. No, it's not. Bindless is only possible with AMD GCN and NV Kepler/Maxwell. D3D12 is a universal API, so bindless is not suitable for it. But this doesn't mean the D3D12 binding model is bad. It's actually very nice.

In this PDF you can see the resource binding tiers on page 39 (if you don't want to download the file, then here is an image). This is the D3D12 binding table, and every GPU must support one of these tiers.
Most GPUs support the first tier, TIER1.
Maxwell v2 (GM206 and GM204) supports the second tier, TIER2.
All GCN-based Radeons support the third tier, TIER3.
I expect all future hardware to support TIER3.
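For concreteness, a short C++ sketch (mine, not the poster's) of how an application would query the binding tier, and the Typed UAV Load support mentioned earlier, at runtime via the standard D3D12 feature-support query; device is assumed to be a valid ID3D12Device:

#include <d3d12.h>

void QueryD3D12Options(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options)))) {
        // TIER_1 / TIER_2 / TIER_3, as in the binding table referenced above.
        D3D12_RESOURCE_BINDING_TIER tier = options.ResourceBindingTier;
        // Typed UAV loads beyond the mandatory formats are optional hardware
        // support - the "yes and no" case described earlier.
        BOOL extendedTypedUavLoads = options.TypedUAVLoadAdditionalFormats;
        (void)tier; (void)extendedTypedUavLoads;  // use these in real code
    }
}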

One more thing. We all know that D3D12 is built for efficiency. Yep, this is true, but Microsoft only talks about batch performance. Everybody knows those advantages: it will mostly(!) eliminate the limitations on the CPU side.
There are two other features in D3D12 that will eliminate limitations on the GPU side! These will help speed up rendering even when the application seems to be GPU-limited.
These optional features are called asynchronous DMA and asynchronous compute. Simple definitions:
- Asynchronous DMA allows data uploads without pausing the whole pipeline. It needs two active DMA engines in the GPU, so this feature is supported by all GCN-based Radeons and by Maxwell v2 (GM206/GM204) based GeForces. Most modern NVIDIA GPUs have two DMA engines, but one of them is disabled on the GeForce product line, so in the past this was a professional feature. On the GM206/GM204 GPUs both DMA engines are not just present in the hardware but activated as well.
- Asynchronous compute allows overlapping of compute and graphics workloads. Most GPUs can use this feature, but not all hardware can execute the workloads efficiently. The GCN-based Radeons with 8 ACEs(!) are very good at this in my own tests; see the sketch after this list.
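Here is a minimal C++ sketch (again mine, not from the quoted post) of how D3D12 exposes those engines: one command queue per engine type, with cross-queue ordering expressed explicitly through fences rather than hidden in the driver. device is assumed to be a valid ID3D12Device.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> MakeQueue(ID3D12Device* device,
                                     D3D12_COMMAND_LIST_TYPE type) {
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = type;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}

void CreateEngineQueues(ID3D12Device* device) {
    auto gfx     = MakeQueue(device, D3D12_COMMAND_LIST_TYPE_DIRECT);  // graphics
    auto compute = MakeQueue(device, D3D12_COMMAND_LIST_TYPE_COMPUTE); // async compute
    auto copy    = MakeQueue(device, D3D12_COMMAND_LIST_TYPE_COPY);    // async DMA
    // Whether compute/copy work actually overlaps with graphics depends on
    // the hardware having the extra engines, as described above; ordering
    // between queues is handled explicitly with ID3D12Fence (Signal/Wait).
}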

I can't tell you more, because some of this info is under embargo.

If you want to ask which GPU is best for D3D12 at present, then I'd say go for a GCN-based Radeon (prefer GPUs with 8 ACEs) or a Maxwell v2 (GM206/GM204) based GeForce. These are the most future-proof architectures right now, so they will support a higher resource binding tier and most of the optional D3D12 features.
MCKIDNEY --- ---
Hmm, the API and driver problems are interesting. That post makes me doubt that DX12 is a significant step forward.
On the other hand, DX11 was a big surprise, and he mentions it there as well.

I'm curious to see how it turns out.
HNIZDO --- ---
Partial Titan X specs and a 3DMark score. 40% faster than the GTX 980? | Cnews.cz
    http://www.cnews.cz/castecne-specifikace-titanu-x-skore-v-3dmarku-o-40-rychlejsi-nez-gtx-980
HNIZDO --- ---
Regarding all that talk about multi-core CPU support, which I've been questioning for years (partly because I write realtime PLC applications myself, and partly because I know a core programmer from MINDWARE):

    The third lesson: It's unthreadable. The IHVs sat down starting from maybe circa 2005, and built tons of multithreading into the driver internally. They had some of the best kernel/driver engineers in the world to do it, and literally thousands of full blown real world test cases. They squeezed that system dry, and within the existing drivers and APIs it is impossible to get more than trivial gains out of any application side multithreading. If Futuremark can only get 5% in a trivial test case, the rest of us have no chance.
HNIZDO --- ---
This is what an ex-NVIDIA developer writes:

    The first lesson is: Nearly every game ships broken. We're talking major AAA titles from vendors who are everyday names in the industry. In some cases, we're talking about blatant violations of API rules - one D3D9 game never even called BeginFrame/EndFrame. Some are mistakes or oversights - one shipped bad shaders that heavily impacted performance on NV drivers. These things were day to day occurrences that went into a bug tracker. Then somebody would go in, find out what the game screwed up, and patch the driver to deal with it. There are lots of optional patches already in the driver that are simply toggled on or off as per-game settings, and then hacks that are more specific to games - up to and including total replacement of the shipping shaders with custom versions by the driver team. Ever wondered why nearly every major game release is accompanied by a matching driver release from AMD and/or NVIDIA? There you go.

But there's a lot more in there; I recommend reading the whole thing.
    What are your opinions on DX12/Vulkan/Mantle? - Graphics Programming and Theory - GameDev.net
    http://www.gamedev.net/topic/666419-what-are-your-opinions-on-dx12vulkanmantle/#entry5215019
HNIZDO --- ---
GeForce GTX Titan X - 1 GHz, 35 percent above the GTX 980
    http://pctuning.tyden.cz/...1-aktualni-zpravy/33665-geforce-gtx-titan-x-1-ghz-35-procent-nad-gtx-980

35-50% over a (reference) 980 would be a realistic estimate.
HNIZDO --- ---
    NVIDIA Opens PhysX Code to UE4 Developers
    https://www.unrealengine.com/blog/nvidia-opens-physx-code-to-ue4-developers

only the CPU version, and through GameWorks, of course. There's no point in releasing the GPGPU part anyway (the drivers would have to be opened up too)