    TADEAS
    planetarita - 'making life planetary'
    a small research club on the themes of planetarity

    Planetarity as a unifying perspective on current events on Earth and on the role of humans in them.

    Topics:

    - the architecture of planetary organization: climate, ecosystems, logistics
    - the sociopolitical regions of the planet and their path to planetarity
    - planetarity and the architecture of digital / computational infrastructure: data storage, computation, and identity
    - the process of civilization in the context of planetarity
    - space research, the gradation of civilization into hypercivilization - 'making life multiplanetary'
    - ecosystem management in the context of planetarity - making life 'planetary' [for the first time]
    - therapy in the context of planetarity: individual therapy, facilitation of societal processes, biogeotherapy
    - exoplanetarity / interplanetarity - the question of other civilizations on other planets [not as speculative babble and amateur ufology, but as a principled topic]

    and so on.
    TADEAS
    TADEAS --- ---
    Yann LeCun
    https://www.facebook.com/share/1BDH2cXmnz/

    At the AI Impact Summit in Delhi, Yoshua Bengio says that AI systems should make predictions without any goal.
    He says goals would bias the systems in possibly dangerous ways by giving them drives and desires.
    He claims that the proper template to use is idealized human scientists.

    I completely disagree with the whole premise.
    I don't think any system can do anything useful without an objective.

    One point we agree on is that LLMs are intrinsically unsafe. But they are unsafe precisely because they don't have any objectives and merely emulate the humans who produced the text they've been trained on.

    My recommendation is the exact opposite of Yoshua's.
    AI systems *should* have goals. They should be designed so that they can do nothing else but fulfilling the goals we give them.

    Naturally, these goals and objectives must include safety guardrails.

    But the point is that, by construction, the system *must* fulfill the goal we give it and *must* abide by the safety guardrail constraints.

    I call this objective-driven AI architectures.
    TADEAS
    TADEAS --- ---
    TADEAS:

    Adam Marblestone – AI is missing something fundamental about the brain
    https://youtu.be/_9V_Hbe-N1A?si=nrXVwPrLHAVWwsY_
    TADEAS
    TADEAS --- ---
    TADEAS:

    The Sweet Lesson of Neuroscience—Asterisk
    https://asteriskmag.com/issues/13/the-sweet-lesson-of-neuroscience

    I believe the brain may have something more to teach us about AI — and that, in the process, AI may have quite a bit to teach us about the brain. Modern AI research centers on three key ingredients: architectures, learning rules, and training signals. The first two — how to build up complex patterns of information from simple ones and how to learn from errors to produce useful patterns — have been substantially mastered by modern AI. But the third factor — what training signals (typically called “loss” functions, “cost” functions, or “reward”) should drive learning — remains deeply underexplored. And that, I think, is where neuroscience still has surprises left to deliver.

    I’ve been fascinated by this question since 2016, when advances in artificial deep learning led me to propose that the brain probably has many highly specific cost functions built by evolution that might train different parts of the cerebral cortex to help an animal learn exactly what it needs to in its ecological niche.

    More recently, Steve Byrnes, a physicist turned AI safety researcher, has shed new light on the question of how the brain trains itself. In a remarkable synthesis of the neuroscience literature, Byrnes recasts the entire brain as two interacting systems: a learning subsystem and a steering subsystem. The first learns from experience during the animal’s lifetime — a bit like one of AI’s neural networks that starts with randomly initialized “weights,” or “parameters,” inside the network, which are adjusted by training. The second is mostly hardwired and sets the goals, priorities, and reward signals that shape that learning. A learning machine — like a neural network — can learn almost anything; the steering subsystem determines what it is being asked to learn.

    Byrnes’ work suggests that some of the most relevant insights in AI alignment will come from neuroscientific frameworks about how the steering system teaches and aligns the learner from within. I agree. This perspective is the seed of what we might call the “sweet lesson” of neuroscience.
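    The two-subsystem framing described above can be sketched in code. This is my own minimal illustration, not Byrnes' actual model: a "learning subsystem" starts from a random weight and adjusts from experience, while a hardwired "steering subsystem" supplies the reward signal that decides what gets learned. The target value 0.7 and the gradient-probing update are invented for the sketch.

```python
import random

def steering_reward(action: float, target: float = 0.7) -> float:
    """Steering subsystem: hardwired, rewards actions near an innate target
    (think 'seek warmth') -- it sets WHAT the learner is asked to learn."""
    return -(action - target) ** 2

class LearningSubsystem:
    """Learning subsystem: a blank slate adjusted by experience,
    like a randomly initialized neural network weight."""
    def __init__(self):
        self.weight = random.uniform(-1.0, 1.0)

    def act(self) -> float:
        return self.weight

    def update(self, lr: float = 0.1):
        # Probe the reward landscape and nudge the weight uphill:
        # a crude finite-difference estimate of the reward gradient.
        base = steering_reward(self.act())
        probe = steering_reward(self.act() + 1e-3)
        gradient = (probe - base) / 1e-3
        self.weight += lr * gradient

learner = LearningSubsystem()
for _ in range(200):
    learner.update()

# The learner converges toward whatever the hardwired steering system rewards.
print(round(learner.weight, 3))
```

    The point of the separation: the same general-purpose learner would converge to a completely different behavior if the steering subsystem's reward were changed, which is why alignment work focuses on the steering side.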
    TADEAS
    TADEAS --- ---
    Dario Amodei — “We are near the end of the exponential”
    https://youtu.be/n1E9IZfvGMA?si=8IgDbhd631trdABS


    "But there is a puzzle either way, which is that in pre-training we use trillions of tokens. Humans don't see trillions of words. So there is an actual sample efficiency difference here. There is actually something different here. The models start from scratch and they need much more training. But we also see that once they're trained, if we give them a long context length of a million — the only thing blocking long context is inference — they're very good at learning and adapting within that context."

    ...

    I think there's something going on where pre-training is not like the process of humans learning, but it's somewhere between the process of humans learning and the process of human evolution. We get many of our priors from evolution. Our brain isn't just a blank slate. Whole books have been written about this. The language models are much more like blank slates. They literally start as random weights, whereas the human brain starts with all these regions connected to all these inputs and outputs. Maybe we should think of pre-training — and for that matter, RL as well — as some mix of two different things that happen in human life. One is human evolution and the other is human learning. Then we should think of in-context learning as a mix of two things that happen in human life. One is human learning and the other is the kind of immediate thinking and processing you do in real time. What we would want to reach is a learning algorithm where as the models get better, they move more toward the learning side and less toward the evolution side. Then you wouldn't need 10 trillion tokens. You'd maybe need 100 billion and then the models can figure out the rest from in-context learning or from RL that happens in real time
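    The sample-efficiency gap mentioned above is easy to put in rough numbers. Both figures here are illustrative assumptions, not from the quote: a ~10-trillion-token pretraining corpus versus an order-of-magnitude guess of ~5×10⁸ words heard or read by a human by adulthood.

```python
# Rough arithmetic behind the sample-efficiency gap (assumed figures).
pretraining_tokens = 10e12          # assumed corpus size, ~10T tokens
human_words_by_adulthood = 5e8      # assumed order-of-magnitude estimate

ratio = pretraining_tokens / human_words_by_adulthood
print(f"{ratio:,.0f}x")  # models see tens of thousands of times more text
```

    Even with generous assumptions for the human figure, the gap is four to five orders of magnitude, which is the puzzle the quote points at.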
    TADEAS
    TADEAS --- ---
    We Live in the Side Effects of Planetary Collapse, Say Anthropologists from the ResisTerra Project | Deník Alarm
    https://denikalarm.cz/2026/02/zijeme-ve-vedlejsich-efektech-planetarniho-krachu-rikaji-antropologove-z-projektu-resisterra/

    Where did the saying "leave it to the beavers" come from, and what does perceiving more-than-human actors mean? Anthropologists from the ResisTerra project study the influence of non-human actors as well, revealing unexpected alliances and connections. They also point to people's often ill-considered behavior. "People dreamed of feeding everyone for so long that plantations wiped out biodiversity. Industrialization was supposed to solve material problems, but global warming was overlooked," say the researchers.
    TADEAS
    TADEAS --- ---
    Guided by Plant Voices - Nautilus
    https://nautil.us/guided-by-plant-voices-237798/

    Researcher and ecologist Monica Gagliano says her experiences with indigenous people, such as the Huichol in Mexico (pictured), informed her view that plants have a range of feelings. “I don’t know if they would use those words to describe joy or sadness, but they are feeling bodies,” she says.

    Her experiments suggest plants have the capacity to learn, remember, and make choices.

    She's also embraced the use of psychedelics in her research. Gagliano considers these explorations of non-Western ways of seeing the world to be part of her scientific work.

    While a visiting scholar at Dartmouth College, Gagliano spoke with Steve Paulson about her experiments, the emerging field of plant intelligence, and her own experiences of talking with plants.
    TADEAS
    TADEAS --- ---
    TADEAS:

    Elon Musk – "In 36 months, the cheapest place to put AI will be space"
    https://youtu.be/BYXbuik3dgA?si=2aChlAllBox9GK4A
    TADEAS
    TADEAS --- ---
    Is there a ‘meta’-crisis? Yes. – Adapt Research Ltd
    https://adaptresearchwriting.com/2026/02/04/is-there-a-meta-crisis-yes/

    Global risk mitigation is like the parable of the blind monks and the elephant: each of at least six disciplines grasps a real part of the problem, but none sees or acts on the whole.

    Current disaster risk reduction reveals we are systematically underprepared for rare-but-catastrophic events; global catastrophic risk research shows that some of these threats could overwhelm civilisation entirely. Yet national risk assessments indicate that governments mostly plan as if risks were local, isolated, and manageable, when in reality they are not.

    Systemic risk and polycrisis research deepens the picture by showing that the world is not just facing many dangers, but rising, interacting stresses that can cascade across tightly coupled global systems. This means today’s risk landscape is not simply a series of external shocks, but a living, unstable system generating hazards from within itself.

    But these frameworks still leave a crucial question unanswered: why do humans keep building such a fragile world?

    In my talk I noted that the answer requires turning to human behaviour and cultural evolution. Human actions are shaped by biases, incentives, institutions, and evolved social dynamics that develop in response to built and inherited human environments.

    These processes give rise to many strategies that are locally successful but globally disastrous. Over time, these dynamics can create maladaptive "trap states"; even worse, they can erode society's very capacity to adapt.

    Evolvability is the key

    I contended that the notion of 'evolvability' becomes central. For societies to cope with an unpredictable future, humanity must avoid entrenchment and path-dependent maladaptation. There is a need for the right kinds of variation, modularity, institutional and informational stability, and effective constraints on harmful "outlaw" strategies; without these, complex adaptations to mitigate risk cannot emerge. Yet arguably all of them are currently degrading on the global stage.

    As a result, humanity is not just producing risks faster than it can manage them; it is undermining the mechanisms that would allow us to learn, adapt, and recover.

    ...

    Systemic risk thinking is no longer confined to niche complexity scholarship but is increasingly shaping both academic risk analysis and practical decision-making frameworks.

    I suggest that even with this convergence on the nuance and interdependent complexity of risk, we will never escape a cascade of escalating global risk until we find ways to address the behavioural and evolutionary generative mechanisms of the situation the world is presently in.

    We should build societies that are safe and resilient because they can evolve well, not because they try to predict everything or stay the same.

    A focus on engineering and nudging ‘evolvability’ provides the potential for a broad-based structural solution to global risk.
    VOYTEX
    VOYTEX --- ---
    TADEAS: "By using an electromagnetic mass driver" 🤔
    TADEAS
    TADEAS --- ---
    STINKY: step by step :)
    STINKY
    STINKY --- ---
    TADEAS: let's see when he promises a Dyson sphere :-))
    TADEAS
    TADEAS --- ---
    TADEAS: a vector toward hypercivilization


    SpaceX has acquired xAI

    Elon's letter: "The basic math is that launching a million tons per year of satellites generating 100 kW of compute power per ton would add 100 gigawatts of AI compute capacity annually, with no ongoing operational or maintenance needs. Ultimately, there is a path to launching 1 TW/year from Earth."

    For context, the entire U.S. currently generates 1.3 TW.

    "My estimate is that within 2 to 3 years, the lowest cost way to generate AI compute will be in space.

    Factories on the Moon can take advantage of lunar resources to manufacture satellites and deploy them further into space. By using an electromagnetic mass driver and lunar manufacturing, it is possible to put 500 to 1000 TW/year of AI satellites into deep space, meaningfully ascend the Kardashev scale and harness a non-trivial percentage of the Sun’s power.

    The capabilities we unlock by making space-based data centers a reality will fund and enable self-growing bases on the Moon, an entire civilization on Mars and ultimately expansion to the Universe.

    Thank you for everything you have done and will do for the light cone of consciousness."

    SpaceX
    https://spacex.com/updates#xai-joins-spacex
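    The "basic math" in the letter above checks out as stated; here is the arithmetic made explicit, with the U.S. generation figure (1.3 TW) taken from the context note rather than the letter itself.

```python
# Tons launched per year x compute per ton, per the letter's stated figures.
tons_per_year = 1_000_000
kw_per_ton = 100

added_kw_per_year = tons_per_year * kw_per_ton   # 1e8 kW
added_gw_per_year = added_kw_per_year / 1e6      # kW -> GW
print(added_gw_per_year)  # GW of AI compute capacity added per year

# The stated longer-term goal of 1 TW/year, against ~1.3 TW of total
# U.S. generation, would add a large fraction of U.S. capacity annually.
us_generation_tw = 1.3
fraction_of_us = round(1.0 / us_generation_tw, 2)
print(fraction_of_us)
```

    Note the units: these are capacity added per year, so each year of launches stacks another 100 GW (or, at the stated goal, ~77% of current U.S. generating capacity) on top of what is already in orbit.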
    TADEAS
    TADEAS --- ---
    "The Bioelectric Interface to the Collective Intelligence of Morphogenesis" by Michael Levin
    https://youtu.be/L0D4FdJ4K3g?si=j_yfF2z0yqFP-ANq
    TADEAS
    TADEAS --- ---
    Russia vs. USA - The Race to Crack UFO Technology
    https://youtu.be/wsfTIh8PV6o?si=P5w6hmpx8GU2ZFA-
    TADEAS
    TADEAS --- ---
    Neuroscience Beyond Neurons in the Diverse Intelligence Era | Michael Levin & Robert Chis-Ciure
    https://youtu.be/mu9Kv-o6iOI?si=TcAGxOrBu2lGTMHE



    TIMESTAMPS:
    0:00​ – Introduction: Why Neuroscience Must Go Beyond Neurons
    3:12​ – The Central Claim: Cognition Is Not Exclusive to Brains
    7:05​ – Defining Cognition, Intelligence, and Agency Without Neurons
    11:02​ – Bioelectricity as a Control Layer for Morphogenesis
    15:08​ – Cells as Problem-Solvers: Goals, Memory, and Error Correction
    19:41​ – The Body as a Cognitive System: Scaling Intelligence Across Levels
    24:10​ – Developmental Plasticity and Non-Neural Decision-Making
    28:36​ – Morphological Computation and Collective Cellular Intelligence
    33:02​ – Challenging Neuron-Centric Neuroscience Assumptions
    37:18​ – Bioelectric Networks vs Neural Networks: Key Differences
    41:55​ – Memory Without Synapses: Storing Information in Living Tissue
    46:07​ – Rewriting Anatomy: Regeneration, Repatterning, and Control
    50:29​ – Cancer, Developmental Errors, and Cognitive Breakdown
    54:48​ – Pluribus: Philosophical Implications
    59:14​ – From Cells to Selves: Where Does Agency Begin?
    1:03:22​ – Implications for AI: Intelligence Without Brains or Neurons
    1:08:11​ – Rethinking Consciousness: Gradualism vs Binary Models
    1:12:47​ – Ethics of Expanding the Moral Circle Beyond Humans
    1:17:31​ – Future Science: New Tools for a Post-Neuron Neuroscience
    1:22:54​ – Closing Reflections: Life, Mind, and Intelligence All the Way Down
    TADEAS
    TADEAS --- ---
    TADEAS:

    The Neanderthal Mind: Exploring Cognition and Language in Our Extinct Relatives | Human Origins
    https://youtu.be/Rs6BHxrLSTQ?si=WsaPOv-51cGiYsQr
    TADEAS
    TADEAS --- ---
    The Unacknowledged Other
    Sartrean Phenomenology and Encounters with Non-Human Intelligences (NHI)
    in Sartre Studies International
    Author: Kimberly Engels

    https://www.berghahnjournals.com/view/journals/sartre-studies/31/2/ssi310207.xml


    I examine encounters with alleged non-human intelligences (NHI) through the lens of Sartrean phenomenology. In a society that does not officially recognize non-human intelligences as “real” these encounters are existentially disruptive and often traumatic. In encounters that involve apparent NHI, subjects are faced with an Other who is unaccounted for and believed by mainstream society not to exist. Faced with an encounter with the seemingly impossible, experiencers undergo ontological shock, existential rupture, and abjection, questioning the boundaries of self, other, and the world. I conclude by giving suggestions for authentic existentialist responses to these alleged encounters.
    TADEAS
    TADEAS --- ---
    Paul Rosolie: Uncontacted Tribes in the Amazon Jungle | Lex Fridman Podcast #489
    https://youtu.be/Z-FRe5AKmCU?si=26YE63OkZyz-OlrY