    Climate change / Thank you so much for ruining my day
    XCHAOS --- ---
    TUHO: last week the wind in Prague was blowing at over 80 km/h too :-) I'd actually celebrate that it finally rained a bit, though of course dried-out soil clearly slides more easily than soil held together by vegetation :-/
    TUHO --- ---
    So it finally rained in California...

    Strong storms hit the US state of California, toppling trees and power lines in many places devastated by this year's fire season and causing flash floods and landslides. Winds in San Francisco exceeded 80 kilometres per hour and meteorologists recorded a number of rainfall records.

    California hit by strong storms that caused floods, landslides and power outages — ČT24 — Česká televize
    https://ct24.ceskatelevize.cz/svet/3390778-kalifornii-zasahly-silne-boure-zpusobily-zaplavy-sesuvy-i-vypadky-proudu
    XCHAOS --- ---
    YMLADRIS: that's only at the start... then comes a tipping point and it flips into being a social obligation instead.
    XCHAOS --- ---
    TADEAS: well, hopefully the reservoirs at least fill up, after half a year without rain...
    GLOBETROTTER --- ---
    SCHWEPZ: What percentage of the energy do those 5 percent of power plants produce? Put like this it looks terribly simple to solve, but it's hard to compare some gigantic coal plant in China with some small diesel-fired plant on a small Greek island.
    JANDOUR --- ---
    TUHO:
    MAZA:
    this really threw me, and even though here https://www.ceskatelevize.cz/porady/1096902795-studio-6/221411010101025/video/871424 they have that graph on screen, they unfortunately didn't mention it, even though it's probably quite an important point and it would be worth explaining explicitly why it looks so different.
    In the Fakt o klimatu graph the value is the allowance price per MWh; in Maza's graph it's the price per tonne of CO2.

    And now I'm wondering: are the two growth curves so weakly proportional to each other because there is so much nuclear and renewables in Europe, so that the allowance price feeds through so little?
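    A minimal sketch of the conversion behind the two charts, assuming a hypothetical allowance price and hypothetical emission intensities (none of these numbers come from either chart): the per-MWh figure is just the per-tonne allowance price scaled by how much CO2 the generation mix emits per MWh, so a mix heavy on nuclear and renewables damps the pass-through.

```python
# Converting an ETS allowance price (EUR per tonne of CO2) into a cost per MWh
# of electricity. All numbers below are illustrative assumptions, not values
# taken from either chart.

def allowance_cost_per_mwh(price_eur_per_tonne: float,
                           intensity_tonnes_per_mwh: float) -> float:
    """Allowance cost passed through per MWh of generation."""
    return price_eur_per_tonne * intensity_tonnes_per_mwh

price = 60.0  # EUR/tCO2, hypothetical

# A lignite plant emits roughly 1 tCO2/MWh; a largely nuclear/renewable mix
# averages far less, so the same per-tonne price shows up much more weakly
# in a per-MWh chart.
print(allowance_cost_per_mwh(price, 1.0))   # lignite-like plant: 60.0 EUR/MWh
print(allowance_cost_per_mwh(price, 0.2))   # low-carbon mix: 12.0 EUR/MWh
```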
    TADEAS --- ---
    An ‘extreme and possible historic atmospheric river’ is battering California
    https://www.washingtonpost.com/weather/2021/10/24/california-atmospheric-river-soaked/
    TADEAS --- ---
    YMLADRIS: well then they're not humanists, they're transhumanists :)

    and they have well-stuffed wallets ,)

    effective altruism (EA) movement, which was introduced by Ord in around 2011 and now boasts of having a mind-boggling $46 billion in committed funding
    YMLADRIS --- ---
    TADEAS: imo it depends on whether the individual identifies with "humanism", or with what exactly they identify. whoever identifies with "being human" will be against it (genetic modification, AI, longtermism, transhumanism..). there will be a lot of hatred around it
    TADEAS --- ---
    TADEAS: to me the whole thing feels like a dangerous but interesting game ,) and it's already being played, because it's the ideology of the players, tough life :)

    i.e. yes, you can hate on Musk for the brutal ecological footprint of his private jets, but without these gadgets the big things are hard to organize. Neuralink for the general public should be roughly 12 years out - what will the synthesis of smart people, advanced algorithms and massive compute actually do, combined with e.g. the materials-science breakthroughs needed to realize that non-fossil/solar civilization? (as a precursor to an extraplanetary civilization) ... etc.

    so before the big crash there will still be some very interesting experiments here, certainly driven by this ideology too
    YMLADRIS --- ---
    TADEAS: thanks for finding/posting it

    as for the ethical objection, it's off topic here, but I don't know. it seems to me like two strategies: "minimize suffering" and "exist as long as possible". which one is more attractive to whom will probably turn out to be epigenetically or somehow encoded anyway. both seem fine to me; "most dangerous credo" is nonsense. on the contrary, the perspective that yes, it all sucks, but what matters is the survival of the species, will help people get through the climate thing, given that God is dead
    TADEAS --- ---
    YMLADRIS: it was an answer to your question why Musk says the problem is that there's too few people - because more people means more conscious beings, and that's better... from that ultra-long-term perspective. whether from the perspective of the next few generations, hard to say if that's what he was actually addressing and whether a non-ideological answer to it even exists :)

    more broadly, to what you write - hothouse earth apparently isn't considered likely, and judging from what these secondary sources write, the reasoning will be that halting civilizational development would be disadvantageous from that ultra-long-term view. hothouse probably won't happen, but there is a probability that civilizational development moves us (or rather some of us - the cyborgs) so far ahead that, again from that ultra-long-term view, it's a profit: more conscious beings
    YMLADRIS --- ---
    TADEAS: it doesn't seem that clear to me (that climate change is just a stumble). for one thing there's the question of the tipping points, i.e. how high the temperature climbs. if it meant only a small percentage of people survive, they could be wiped out by e.g. pandemics, say because they'd no longer have the apparatus to defend against them effectively.

    second, I understood it so that Musk is in such a hurry with the multiplanetary thing because nothing guarantees the Earth will stay safe for arbitrarily long. and who knows when there will again be a development window for flights and for building bases off-planet.

    but sure, it's nice to think that the people who seriously think through existential risks don't consider climate a fundamental problem, that's sort of reassuring

    as for Musk, I know that sometime in his youth he didn't understand why he should live, it made no sense to him, and then he told himself that joining the mission to preserve Consciousness in the universe is satisfying, and that's what he does. so yes.

    still, I don't yet see why population decline couldn't actually be solved fairly simply, even if artificial wombs had to be invented for it, should financial and other incentives fail
    TADEAS --- ---
    better safe than sorry .)

    Reid Hoffman, the multi-billionaire who cofounded LinkedIn, reports that “more than 50 percent of Silicon Valley’s billionaires have bought some level of ‘apocalypse insurance,’ such as an underground bunker.”

    Doomsday Prep for the Super-Rich | The New Yorker
    https://www.newyorker.com/magazine/2017/01/30/doomsday-prep-for-the-super-rich

    How Silicon Valley Billionaires Became Doomsday Preppers
    https://www.businessinsider.com/silicon-valley-doomsday-preppers-new-zealand-2020-3
    TADEAS --- ---
    TADEAS: here it is spelled out for you ,)

    The Dangerous Ideas of “Longtermism” and “Existential Risk” ❧ Current Affairs
    https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk

    Longtermism should not be confused with “long-term thinking.” It goes way beyond the observation that our society is dangerously myopic, and that we should care about future generations no less than present ones. At the heart of this worldview, as delineated by Bostrom, is the idea that what matters most is for “Earth-originating intelligent life” to fulfill its potential in the cosmos. What exactly is “our potential”? As I have noted elsewhere, it involves subjugating nature, maximizing economic productivity, replacing humanity with a superior “posthuman” species, colonizing the universe, and ultimately creating an unfathomably huge population of conscious beings living what Bostrom describes as “rich and happy lives” inside high-resolution computer simulations.

    This is what “our potential” consists of, and it constitutes the ultimate aim toward which humanity as a whole, and each of us as individuals, are morally obligated to strive. An existential risk, then, is any event that would destroy this “vast and glorious” potential, as Toby Ord, a philosopher at the Future of Humanity Institute, writes in his 2020 book The Precipice, which draws heavily from earlier work in outlining the longtermist paradigm. (Note that Noam Chomsky just published a book also titled The Precipice.)

    The point is that when one takes the cosmic view, it becomes clear that our civilization could persist for an incredibly long time and there could come to be an unfathomably large number of people in the future. Longtermists thus reason that the far future could contain way more value than exists today, or has existed so far in human history, which stretches back some 300,000 years. So, imagine a situation in which you could either lift 1 billion present people out of extreme poverty or benefit 0.00000000001 percent of the 10^23 biological humans who Bostrom calculates could exist if we were to colonize our cosmic neighborhood, the Virgo Supercluster. Which option should you pick? For longtermists, the answer is obvious: you should pick the latter. Why? Well, just crunch the numbers: 0.00000000001 percent of 10^23 people is 10 billion people, which is ten times greater than 1 billion people. This means that if you want to do the most good, you should focus on these far-future people rather than on helping those in extreme poverty today. As the FHI longtermists Hilary Greaves and Will MacAskill—the latter of whom is said to have cofounded the Effective Altruism movement with Toby Ord—write, “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.”

    This brings us back to climate change, which is expected to cause serious harms over precisely this time period: the next few decades and centuries. If what matters most is the very far future—thousands, millions, billions, and trillions of years from now—then climate change isn’t going to be high up on the list of global priorities unless there’s a runaway scenario. Sure, it will cause “untold suffering,” but think about the situation from the point of view of the universe itself. Whatever traumas and miseries, deaths and destruction, happen this century will pale in comparison to the astronomical amounts of “value” that could exist once humanity has colonized the universe, become posthuman, and created upwards of 10^58 (Bostrom’s later estimate) conscious beings in computer simulations.

    ...

    In the same paper, Bostrom declares that even “a non-existential disaster causing the breakdown of global civilization is, from the perspective of humanity as a whole, a potentially recoverable setback,” describing this as “a giant massacre for man, a small misstep for mankind.”

    ...

    These aren’t the only incendiary remarks from Bostrom, the Father of Longtermism. In a paper that founded one half of longtermist research program, he characterizes the most devastating disasters throughout human history, such as the two World Wars (including the Holocaust), Black Death, 1918 Spanish flu pandemic, major earthquakes, large volcanic eruptions, and so on, as “mere ripples” when viewed from “the perspective of humankind as a whole.”

    ... etc
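    The arithmetic in the quoted poverty-vs-far-future comparison can be checked in a couple of lines (the 10^23 figure and the percentage are taken from the quote; nothing else is added):

```python
# Check of the arithmetic in the quoted passage: 0.00000000001 percent of the
# 10^23 biological humans Bostrom calculates for the Virgo Supercluster.
total_future_people = 10**23
share = 0.00000000001 / 100        # percent -> fraction, i.e. 1e-13

beneficiaries = total_future_people * share
# ~1e10: about 10 billion, ten times the 1 billion people in extreme poverty
# from the quote's comparison.
```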
    TADEAS --- ---
    TADEAS:

    2020 The Precipice: Existential Risk and the Future of Humanity
    http://libgen.li/edition.php?id=138269530
    TADEAS --- ---
    longtermism in its truly long, hypercivilizational form / bostrom, musk, thiel & co.

    maybe also a direction where one can look for an answer for [YMLADRIS @ Elon Musk (SpaceX - Falcon / Dragon / Starship, Tesla Motors, Solarcity, Hyperloop, Neuralink, Starlink, Boring etc.)], i.e. climate change isn't fundamentally threatening; from a truly long-term view it's a small problem, solvable technologically and otherwise, and the real problem is rather if there are too few people realizing their potential in ways too limited to develop it... or something like that ,)


    Why longtermism is the world’s most dangerous secular credo | Aeon Essays
    https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo

    the topic of our extinction has received little sustained attention from philosophers until recently, and even now remains at the fringe of philosophical discussion and debate. On the whole, they have been preoccupied with other matters. However, there is one notable exception to this rule: over the past two decades, a small group of theorists mostly based in Oxford have been busy working out the details of a new moral worldview called longtermism, which emphasizes how our actions affect the very long-term future of the universe – thousands, millions, billions, and even trillions of years from now. This has roots in the work of Nick Bostrom, who founded the grandiosely named Future of Humanity Institute (FHI) in 2005, and Nick Beckstead, a research associate at FHI and a programme officer at Open Philanthropy. It has been defended most publicly by the FHI philosopher Toby Ord, author of The Precipice: Existential Risk and the Future of Humanity (2020). Longtermism is the primary research focus of both the Global Priorities Institute (GPI), an FHI-linked organisation directed by Hilary Greaves, and the Forethought Foundation, run by William MacAskill, who also holds positions at FHI and GPI. Adding to the tangle of titles, names, institutes and acronyms, longtermism is one of the main ‘cause areas’ of the so-called effective altruism (EA) movement, which was introduced by Ord in around 2011 and now boasts of having a mind-boggling $46 billion in committed funding.

    It is difficult to overstate how influential longtermism has become. Karl Marx in 1845 declared that the point of philosophy isn’t merely to interpret the world but change it, and this is exactly what longtermists have been doing, with extraordinary success. Consider that Elon Musk, who has cited and endorsed Bostrom’s work, has donated $1.5 million dollars to FHI through its sister organisation, the even more grandiosely named Future of Life Institute (FLI). This was cofounded by the multimillionaire tech entrepreneur Jaan Tallinn, who, as I recently noted, doesn’t believe that climate change poses an ‘existential risk’ to humanity because of his adherence to the longtermist ideology.

    Meanwhile, the billionaire libertarian and Donald Trump supporter Peter Thiel, who once gave the keynote address at an EA conference, has donated large sums of money to the Machine Intelligence Research Institute, whose mission to save humanity from superintelligent machines is deeply intertwined with longtermist values

    ...

    The point is that longtermism might be one of the most influential ideologies that few people outside of elite universities and Silicon Valley have ever heard about. I believe this needs to change because, as a former longtermist who published an entire book four years ago in defence of the general idea, I have come to see this worldview as quite possibly the most dangerous secular belief system in the world today. But to understand the nature of the beast, we need to first dissect it, examining its anatomical features and physiological functions.
    TUHO --- ---
    MAZA: unfortunately I can't find it directly on their site so that I could check the sources…
    MAZA --- ---
    TUHO --- ---
    A nice graphic for the debate on emission allowance prices
