HNIZDOnVIDIA - graphics cards, technology, applications, games
    VIRUS31
    VIRUS31 --- ---
PRAASHEK: there isn't

By the way, an overview of the Photoshop CC functions that are GPU-accelerated is available here: http://www.adobe.com/.../creativesuite/production/cs6/pdfs/adobe-hardware-performance-whitepaper.pdf

I sent it in a needling email to Adobe's Czech support, following up on my amusing, zero-information phone call with them about GPU support and Photoshop CC features.
    HNIZDO
    HNIZDO --- ---
    New Witcher 3 details: Ciri, cities, real-time beard growth - PC Gamer
    http://www.pcgamer.com/new-witcher-3-details-ciri-cities-real-time-beard-growth/

    Fluid gameplay on 1080p with everything on maximum +nvidia hairworks on GTX 980

Well, shit :)
    HNIZDO
    HNIZDO --- ---
One more FreeSync vs. G-Sync summary:
    AMD FreeSync First Impressions and Technical Discussion | Inside and Outside the VRR Window
    http://www.pcper.com/...eSync-First-Impressions-and-Technical-Discussion/Inside-and-Outside-VRR-Wind

If a FreeSync monitor has a lower limit of 40 Hz, FreeSync switches off below 40 fps. The monitor then runs at a fixed 40 Hz refresh, and the graphics card runs either with V-Sync and the high lag corresponding to 40 Hz, or without V-Sync and the heavy tearing likewise corresponding to 40 Hz.

Unlike G-Sync, where the module multiplies the current frame rate, so nothing like this can happen.

    " I'll gladly take the potential slight flicker of G-Sync over the 40 Hz judder/tearing of the BENQ. The take home point from my observations is that when gaming lower than the variable range, FreeSync panels retain the disadvantages of the V-Sync on/off setting, but amplify those effects as the panel is refreshing at an even lower rate than a standard display (e.g. 60 Hz)."

It also looks like there will be problems with overdrive, because it needs to be computed dynamically according to fps (which the G-Sync module does). On the tested BenQ, overdrive switches off when FreeSync is active and the monitor ghosts.

    G-Sync uses its on-board memory to buffer the previous frame, using this data combined on-the-fly with the next incoming frame to calculate appropriate drive voltages and compensate for ghosting (similar to overdrive).
    FreeSync relies on the regular adaptive sync TCONs that at present don't seem capable of correcting for this when operating in VRR modes.

So while FreeSync really is "cheaper" for monitor makers (not yet for buyers), it also offers half the functionality of G-Sync.
For now one rule holds: don't buy monitors with a lower limit >9 Hz. And keep an eye on overdrive; tftcentral is already testing it.

In practice, FreeSync monitors with a lower limit >9 Hz are unusable, but even 9 fps panels will accentuate any stutter and short-term fps drops.
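The frame-multiplication behavior described above can be sketched as a toy model (hypothetical logic of my own, not the actual G-Sync firmware; the 40-144 Hz window is just an example):

```python
def effective_refresh(fps: float, vrr_min: float, vrr_max: float) -> float:
    """Return the panel refresh rate for a given frame rate.

    Inside the VRR window the panel simply follows the GPU. Below it,
    a G-Sync-style module redraws the buffered frame n times so that
    the resulting refresh (n * fps) lands back inside the window.
    """
    if fps >= vrr_min:
        return min(fps, vrr_max)
    n = 2
    while n * fps < vrr_min:
        n += 1
    return n * fps

# A 40-144 Hz panel at 25 fps: each frame is shown twice -> 50 Hz refresh
print(effective_refresh(25, 40, 144))  # 50
```

A FreeSync panel without this multiplication simply drops to its fixed lower limit below the window, which is the judder/tearing case described above.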
    HNIZDO
    HNIZDO --- ---
unfortunately, Apple is territory they haven't even touched :(
    PRAASHEK
    PRAASHEK --- ---
VIRUS31: there's no MSI Afterburner for the Mac?
    VIRUS31
    VIRUS31 --- ---
What's the name of that NVIDIA profiler that shows the card's current load, for OS X? I want to know whether the card is idling and what, if anything, is loading it. iStat Menus can't tell the processes apart.
    HNIZDO
    HNIZDO --- ---
MCKIDNEY: Sure, there's a buffer, there will be lag, but at least it keeps working all the way down to zero. In Arma I get between 30 and 60 fps; they can shove their FreeSync if it's going to be permanently off. I don't get how those clowns can ship this.

And one more thing: overdrive. The G-Sync module computes overdrive dynamically, but FreeSync doesn't; I'm terribly curious about those tests... Even on current TN panels it produces ghosting, because on that BenQ, for example, overdrive is not used while FreeSync is active (!). A TN panel can still get away with it, it's fast. But imagine an IPS matrix without overdrive.
    MCKIDNEY
    MCKIDNEY --- ---
HNIZDO: From the people I know who game on G-Sync, it's no great glory either.
Almost all of them praised how it works at around 60 FPS and above.
But they also complained about noticeable lag at around 40 FPS.
    HNIZDO
    HNIZDO --- ---
PRAASHEK: oh, riiight, humoorrrr..
    PRAASHEK
    PRAASHEK --- ---
HNIZDO: ehm... I was only joking; manufacturing on a 16nm process is one thing, a chip 16 nm in size is another, ffs
    HNIZDO
    HNIZDO --- ---
    PRAASHEK:
1) Chip dimensions stopped scaling linearly with the process node long ago; 16nm+ FinFET is something like 22nm with a 16nm FinFET gate
2) chips will be more complex, faster, and probably the same size as today
    PRAASHEK
    PRAASHEK --- ---
HNIZDO: those will be very small chips
    HNIZDO
    HNIZDO --- ---
NVIDIA will use Samsung's fabs; can we look forward to 14nm chips? | Svět hardware
    http://www.svethardware.cz/nvidia-vyuzije-tovarny-samsungu-muzeme-se-tesit-na-14nm-cipy/40227
    HNIZDO
    HNIZDO --- ---
AMD Radeon Rx 300: mostly rebadged older versions | Svět hardware
    http://www.svethardware.cz/amd-radeon-rx-300-vetsinou-jde-o-preznacene-starsi-verze/40228

So if you're waiting for AMD's next generation: the new core concerns only the top model and maybe the second-highest, the R380/390X. The expected price of the R390X is $/€700, i.e. over 23,000 CZK. That would fit a large, expensive die on 28nm plus expensive HBM VRAM.
    HNIZDO
    HNIZDO --- ---
So we can safely forget about FreeSync.

From the start I've been writing everywhere how I worried about what happens when fps drops below the panel's limit. While G-Sync multiplies data out of its buffer, FreeSync has no buffer. It's exactly the disaster I've been expecting from the beginning.

    AMD FreeSync First Impressions and Technical Discussion | Inside and Outside the VRR Window
    http://www.pcper.com/...eSync-First-Impressions-and-Technical-Discussion/Inside-and-Outside-VRR-Wind

    For that situation, the BENQ panel behaves like a fixed 40 Hz refresh rate display, and does what you would expect if V-Sync is on or off (judder or tearing).
    The take home point from my observations is that when gaming lower than the variable range, FreeSync panels retain the disadvantages of the V-Sync on/off setting, but amplify those effects as the panel is refreshing at an even lower rate than a standard display (e.g. 60 Hz).

That means that if fps drops below the panel's limit, we get either insane tearing, or, if V-Sync is on, insane lag.
    HNIZDO
    HNIZDO --- ---
Unofficial performance tests of the Radeon R9 390X and the GeForce Titan X – Živě.cz
    http://www.zive.cz/...px#utm_medium=selfpromo&utm_source=zive&utm_campaign=RSSfeed&utm_reader=feedly

According to the tests so far, the R390X won't beat the Titan; it'll be a draw. Final drivers will add maybe another 10% on top. The price is estimated at $700; it's not entirely clear whether that means the air-cooled 4GB version or the water-cooled 8GB version.

It should be noted that, according to the preliminary tests, the R390X reaches DOUBLE the performance of the R290X in some resolutions. That's a substantial generational step, even if it's paid for in power draw, price, and a water cooler.
    HNIZDO
    HNIZDO --- ---
GeForce GTX Titan-X: the most powerful graphics chip in our test
    http://pctuning.tyden.cz/...rty/33839-geforce-gtx-titan-x-nejvykonejsi-graficky-cip-v-testu?start=10

Performance really did grow noticeably. Even so, the card can't run some current games maxed out at 4K (and I mean without frills like SSAA or supersampling). Buying this card for Full HD is imho nonsense; it is quite certainly the first usable single-chip card for 4K gaming, with low power draw, quiet air cooling, and unusually high OC potential on top of that, and for that first it deserves credit.
    HNIZDO
    HNIZDO --- ---
Something about DX12

    D3D12 articles - so much misunderstandings and miscommunications - AnandTech Forums
    http://forums.anandtech.com/showthread.php?t=2422223

I have read many articles about D3D12 and I think many of them raise more questions than they answer. I'm tired of the misunderstandings and miscommunication about hardware support, so here is a summary of the new API. Please understand that even though I'm posting here anonymously, I have to keep some information secret. This post is based on public documentation, and these specs are finalized. Sorry to say, but most tech journalists are lazy these days and don't even do a basic search for their articles.

    Here is a simple FAQ:
- Does D3D12 require new hardware?
No! The API will work fine with existing GPUs as long as a D3D12 driver exists for them. The actual hardware support has already been announced.
- What about the features?
Some features will go, and some features will come.
The low-level APIs simplify access to the hardware. In the past, many new features came to the API because the driver hid the GPU memory from the application. So every new thing had to be implemented in the API, and then a new driver introduced support for it; only after that could the application access the new feature. D3D12 will allow explicit access to GPU memory, so some earlier features will not be accessible in D3D12 in their "traditional D3D11 form". But this is not a problem, because with explicit memory access all of these (and many more) can be implemented in the application. For example, tiled resources will be gone in their current form, but it is possible to write your own implementation of them.
The resource model is also advancing, so for example Typed UAV Load will be a new feature.
- Will these new features require new hardware?
The best answer is yes and no. This is a complicated question, and it is hard to answer while the specs are not public. But let's say Typed UAV Load will require hardware support. The GCN-based Radeons can support it, as can the Maxwell v2 (GM206/GM204) architecture. Maybe more NVIDIA hardware can access the feature, but I don't know, because they don't disclose what is possible with Maxwell v1/Kepler/Fermi. Intel might support it, but I'm not familiar with those iGPUs.
But many of these new features can be... I don't want to say emulated, but some workaround is possible. So even if hardware support is not present, the actual effect might be executable on all GPUs. Of course, these workarounds will come with some performance hit.

    These are the most important things to know.

There are some other important things, like the binding model. I have frequently read that D3D12 is bindless. No, it's not. Bindless is only possible with AMD GCN and NV Kepler/Maxwell. D3D12 is a universal API, so bindless is not suitable for it. But this doesn't mean that the D3D12 binding model is bad. It's actually very nice.

In this PDF you can see the resource binding tiers on page 39 (if you don't want to download the file, then here is an image). This is the D3D12 binding table, and every GPU must support one of these tiers.
Most GPUs support the first tier, or TIER1.
Maxwell v2 (GM206 and GM204) supports the second tier, or TIER2.
All GCN-based Radeons support the third tier, or TIER3.
I expect that all future hardware will support TIER3.
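The tier list above can be condensed into a lookup table (illustrative only; the tier assignments come from this post, not from the official D3D12 spec, and the GPU labels are my simplification):

```python
# D3D12 resource-binding tiers as summarized in the post above.
RESOURCE_BINDING_TIER = {
    "baseline D3D11-class GPU": 1,   # "most GPUs" -> TIER1
    "Maxwell v2 (GM204/GM206)": 2,   # TIER2
    "GCN-based Radeon": 3,           # TIER3
}

def binding_tier(gpu: str) -> int:
    # Unknown hardware is assumed to support at least the baseline tier.
    return RESOURCE_BINDING_TIER.get(gpu, 1)

print(binding_tier("GCN-based Radeon"))  # 3
```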

One more thing. We all know that D3D12 is built for efficiency. Yep, this is true, but Microsoft only talks about batch performance. Everybody knows the advantage: it will mostly(!) eliminate the limitations on the CPU side.
There are two other features in D3D12 that will eliminate limitations on the GPU side! These will help speed up rendering even when the application seems to be GPU-limited.
These optional features are called asynchronous DMA and asynchronous compute. Simple definitions:
- Asynchronous DMA allows data uploads without pausing the whole pipeline. It needs two active DMA engines in the GPU, so this feature is supported by all GCN-based Radeons and Maxwell v2 (GM206/GM204)-based GeForces. Most modern NVIDIA GPUs have two DMA engines, but one of them is disabled on the GeForce product line, so in the past this was a professional feature. On the GM206/GM204 GPUs both DMA engines are not just present in the hardware but enabled as well.
- Asynchronous compute allows overlapping of compute and graphics workloads. Most GPUs can use this feature, but not all hardware can execute the workloads efficiently. In my own tests, the GCN-based Radeons with 8 ACEs(!) are very good at this.
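The benefit of async DMA in the first bullet can be illustrated with a toy frame-time model (my own simplification with made-up numbers, not measured data): with a single engine the upload serializes with rendering, while with async DMA the two overlap.

```python
def frame_time(render_ms: float, upload_ms: float, async_dma: bool) -> float:
    """Toy model of one frame: a render pass plus a resource upload."""
    if async_dma:
        # A second DMA engine copies while the GPU keeps rendering,
        # so the frame costs whichever of the two takes longer.
        return max(render_ms, upload_ms)
    # A single engine: the copy stalls the pipeline, so times add up.
    return render_ms + upload_ms

# 16 ms of rendering + 4 ms of uploads per frame:
print(frame_time(16.0, 4.0, async_dma=False))  # 20.0 ms -> 50 fps
print(frame_time(16.0, 4.0, async_dma=True))   # 16.0 ms -> 62.5 fps
```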

I can't tell you more, because some info is under embargo.

If you ask which GPU is best for D3D12 at present, then I'd say go for a GCN-based Radeon (prefer GPUs with 8 ACEs) or a Maxwell v2 (GM206/GM204)-based GeForce. These are the most future-proof architectures right now, so they will support a higher resource binding tier and most of the optional D3D12 features.
    MCKIDNEY
    MCKIDNEY --- ---
Hmm, the API and driver problems are interesting. That article makes me doubt whether DX12 is a significant step forward.
On the other hand, DX11 was a big surprise, and he mentioned it here too.

I'm curious about it.