    TADEAS
    planetarita - 'making life planetary'
    a small research club on the topics of planetarity

    Planetarity as a unifying perspective on current events on Earth and the role of humans in them.

    topics:

    - the architecture of the planetary order: climate, ecosystems, logistics
    - the planet's sociopolitical regions and their paths toward planetarity
    - planetarity and the architecture of digital / computational infrastructure: data storage, computation, and identity
    - the process of civilization in the context of planetarity
    - space research, the gradation of civilization into hypercivilization - 'making life multiplanetary'
    - ecosystem management in the context of planetarity - making life 'planetary' [for the first time]
    - therapy in the context of planetarity: individual therapy, facilitation of societal processes, biogeotherapy
    - exoplanetarity / interplanetarity - the topic of other civilizations on other planets, [not as speculative babble and amateur ufology] but as a matter of principle

    and so on.
    SCHWEPZ
    SCHWEPZ --- ---
    The mission is named Artemis II, and as part of it four astronauts will set out on a ten-day journey around the far side of the Moon and back. According to the agency, this will prepare the ground for future lunar landings. A NASA spokesperson said, however, that the crew will not land on the Moon's surface because it lacks the necessary equipment, such as a lunar lander, the CNN website reported. The mission is expected to break the record for the farthest journey into space, held to date by the Apollo 13 mission.

    Moon to Mars | NASA's Artemis Program - NASA
    https://www.nasa.gov/humans-in-space/artemis/


    Artemis II mission is about to fly humans to the Moon — here’s the science they’ll do
    https://www.nature.com/articles/d41586-026-00964-4
    TADEAS
    TADEAS --- ---
    These Mini Brains Just Learned to Solve a Classic Engineering Problem
    https://singularityhub.com/2026/03/24/these-mini-brains-just-learned-to-solve-a-classic-engineering-problem/


    Attaching living brain tissue to computers sounds like science fiction. But brain organoids have already made it reality.


    These blobs of brain cells often start life as skin cells that have been turned back into stem cells. After bathing in a special cocktail of nutrients, they develop into various types of brain cells that self-organize into intricate three-dimensional structures similar to parts of the brain. Neurons form networks, ripple with electrical waves, and when connected to other tissues—such as an artificial spinal cord and lab-grown muscles—can control them.


    Bioengineers have taken notice, envisioning organoids as potential living processors. Our brains use far less power and are more adaptable than the most advanced neuromorphic chips and brain-inspired AI. Brain organoids linked together into computers could theoretically enable computation in a dish at a fraction of the energy cost.


    There are hints this blue-sky idea could work. Scientists have taught hundreds of thousands of isolated neurons to play the video games Pong and, more recently, Doom. Separately, researchers used cultured neurons to control the simple movements of a vehicle.


    But mini brains are different. Unlike isolated neurons, organoids’ 3D structures and connections are harder to decipher. Yet predictable learning is essential to realizing “organoid intelligence.” Their electrical activity needs to rapidly adapt to inputs, strengthening or weakening circuits.
    TADEAS
    TADEAS --- ---
    Home page | Rangelands ATLAS
    https://www.rangelandsdata.org/atlas/

    Rangelands are areas of grasses, grass-like plants, forbs, shrubs and sometimes trees that are grazed or have the potential to be grazed by livestock and wildlife. Their vegetation is diverse and highly influenced by rainfall, temperature and other climate phenomena, and they are habitat for a wide range of wildlife, many species of which are found nowhere else.

    Rangelands are home to millions of people, from pastoralists to hunter-gatherers to ranchers to conservationists. Rangelands feed millions of people worldwide. Rangelands have significant cultural and aesthetic value too, and for many, are places of inspiration and beauty.

    This Rangelands Atlas has been developed to raise awareness of the importance of rangelands and to highlight the changes taking place which are having significant impacts on rangelands, demanding their protection and restoration.
    TADEAS
    TADEAS --- ---
    Kick-Off Webinar for Regenerating Earth Through Collapse
    https://youtu.be/_uBIkII3Mp4?si=9iw_LPMyYdnHV_qZ
    TADEAS
    TADEAS --- ---
    Design School for Regenerating Earth
    https://design-school-for-regenerating-earth.mn.co/
    TADEAS
    TADEAS --- ---
    This week, Google DeepMind proposed an important cognitive framework for measuring progress toward AGI by decomposing intelligence into ten human cognitive faculties and evaluating systems against human baselines.

    This is a major advance over vague, single-number claims about “AGI,” because it makes capability claims empirically testable, human-comparable, and diagnostically rich.

    We argue, however, that a human-comparative cognitive profile, while necessary, is not sufficient for the next phase of AGI evaluation. Three shortcomings motivate the extension:

    - First, a highly general system may operate in a deeply non-human style while still exhibiting powerful abstraction, transfer, world-modeling, and self-improvement.

    - Second, social fluency and ethical polish do not by themselves constitute beneficial moral agency.

    - Third, many of the most consequential deployment failures stem not from static capability deficits but from behavioral shifts that emerge under stress, competition, temptation, or self-preservation pressure.
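    The decomposition summarized above can be sketched as a capability profile: per-faculty scores normalized against a human baseline, so that each capability claim becomes a separate, empirically testable comparison. A minimal sketch in Python; the faculty names and numbers are invented for illustration and are not DeepMind's actual list:

```python
# Sketch of a human-comparative cognitive profile (illustrative only:
# the faculties and scores below are made up, not DeepMind's actual list).

HUMAN_BASELINE = 1.0  # scores are normalized so the human baseline = 1.0

profile = {
    "memory": 1.4,
    "planning": 0.6,
    "abstraction": 1.1,
    "social_reasoning": 0.8,
}

def capability_claims(profile, baseline=HUMAN_BASELINE):
    """Turn a profile into per-faculty, testable claims instead of
    one vague single-number statement about 'AGI'."""
    return {
        faculty: ("above human baseline" if score > baseline
                  else "at or below human baseline")
        for faculty, score in profile.items()
    }

claims = capability_claims(profile)
for faculty, claim in claims.items():
    print(f"{faculty}: {claim}")
```

    The point of the sketch is diagnostic richness: a system can be superhuman on some faculties and subhuman on others, which a single aggregate score would hide.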

    Learn more:

    Beyond Human Comparison - by Ben Goertzel - Eurykosmotron
    https://bengoertzel.substack.com/p/beyond-human-comparison
    TADEAS
    TADEAS --- ---
    ‘My ideas are a little revolutionary’: ecologist Suzanne Simard on intelligent forests, the climate and her critics | Trees and forests | The Guardian
    https://www.theguardian.com/environment/2026/mar/14/my-ideas-are-a-little-revolutionary-ecologist-suzanne-simard-on-intelligent-forests-the-climate-and-her-critics
    TADEAS
    TADEAS --- ---
    Here is an excellent paper that clearly explains the philosophy that guides Yann LeCun's research in AI and his new company, AMI Labs. It also perfectly expresses my complaints about the trope of artificial general intelligence -- AGI, or BS for short.

    LeCun et al reject the idée fixe that obsesses the Promethean dreams of too many of the AI boys: that they have the power, nearly there, to surpass human intelligence in every way: thus, it is general. The paper argues instead that human intelligence itself is not general: Each of us is good at some things, incompetent at others.

    To set the goal for AI development in anthropomorphic and ultimately hubristic terms is a mistake. Instead, how much better it will be to build systems that are specialized (as humans are) to concentrate scarce resources on efficiently advancing toward one skill or another, not all. "Given finite energy, an approach that directs available energy towards learning a finite set of tasks will reasonably outperform an approach that distributed the finite energy over an infinite amount of tasks." Or in its pithy conceit quoted here: "The AI that folds our proteins should not be the AI that folds our clothes!"

    LeCun also believes that embracing specialization will enable a system's creators to limit its function, thus its power, and ensure its safety. The other AI boys think they will create the God machine whose fury even they cannot contain. LeCun has the more mature view that machines, even intelligent ones, are still machines with plugs to pull.

    The paper indirectly illuminates LeCun's devotion to world models over large-language-models' text prediction. Or as the company's homepage puts it: "We share one belief: real intelligence does not start in language. It starts in the world." LeCun himself pioneered thinking that helped lead to LLMs, but he believes text can take the technology only so far. He aims to build systems that can adapt to reality because they are trained on reality, not on text as tokens or pixels next to pixels: machines able to train themselves to understand the laws of nature that toddlers and cats discern, without language.

    Here's the paper, written by LeCun, Judah Goldfeder, Philippe Wyder, and Ravid Shwartz-Ziv:

    https://arxiv.org/pdf/2602.23643


    src: https://www.facebook.com/share/1AmNPF2MNz/
    TADEAS
    TADEAS --- ---
    Ben Goertzel discussed a recent paper by Professor Yann LeCun and co-authors proposing Superhuman Adaptable Intelligence (SAI) as an alternative framing to AGI, which they describe as an overloaded and flawed concept.

    The paper defines SAI as intelligence capable of adapting to exceed humans at any task humans can perform, while also adapting to tasks outside the human domain that have utility.

    Dr. Goertzel assessed SAI as a specific parameterization within existing theoretical frameworks of general intelligence, particularly Efficient Pragmatic General Intelligence (EPGI).

    He suggested that extending the concept with explicit safety constraints and mechanisms for open-ended capability growth would better address key questions in long-term AGI development.

    Read Dr. Goertzel's analysis and explore why SAI is a special case of AGI, not an alternative:

    LeCun’s “Superhuman Adaptable Intelligence” Is a Special Case of AGI, Not an Alternative
    https://bengoertzel.substack.com/p/lecuns-sai-is-a-special-case-of-agi
    TADEAS
    TADEAS --- ---
    Intelligence is not a collection of skills nor an accumulation of declarative knowledge.

    Intelligence is the ability to accomplish new tasks with no prior training or with fast training.

    This points to the necessity of System 2, world models, and planning.

    [2602.23643v1] AI Must Embrace Specialization via Superhuman Adaptable Intelligence
    https://arxiv.org/abs/2602.23643v1
    TADEAS
    TADEAS --- ---
    NASA Study: Non-biologic Processes Don't Fully Explain Mars Organics - Astrobiology
    https://astrobiology.com/2026/02/nasa-study-non-biologic-processes-dont-fully-explain-mars-organics.html

    In a new study, researchers say that non-biological sources they considered could not fully account for the abundance of organic compounds in a sample collected on Mars by NASA’s Curiosity rover.
    TADEAS
    TADEAS --- ---
    Metallic sphere hypothesis: Are spheres spying on us? | Reality Check
    https://youtu.be/SGFrfW5seiI?si=x-Udfj7pEbDxJICj
    TADEAS
    TADEAS --- ---
    Is Buzz Aldrin being forced into silence after witnessing a UFO? | Reality Check
    https://youtu.be/qyNU8ZJbv1w?si=3ahqMpD39Xr02QfT
    TADEAS
    TADEAS --- ---
    Y LeCun
    https://www.facebook.com/share/1BDH2cXmnz/

    At the AI Impact Summit in Delhi, Yoshua Bengio says that AI systems should make predictions without any goal.
    He says goals would bias the systems in possibly dangerous ways by giving it drives and desires.
    He claims that the proper template to use is idealized human scientists.

    I completely disagree with the whole premise.
    I don't think any system can do anything useful without an objective.

    One point we agree on is that LLMs are intrinsically unsafe. But they are unsafe precisely because they don't have any objectives and merely emulate the humans who produced the text they've been trained on.

    My recommendation is the exact opposite of Yoshua's.
    AI systems *should* have goals. They should be designed so that they can do nothing but fulfill the goals we give them.

    Naturally, these goals and objectives must include safety guardrails.

    But the point is that, by construction, the system *must* fulfill the goal we give it and *must* abide by the safety guardrail constraints.

    I call this objective-driven AI architectures.
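    One way to read "objective-driven" concretely is as constrained optimization: the system can only emit actions that minimize a task objective, and guardrail constraints filter the action space by construction. A toy sketch under that reading (my simplification, not LeCun's actual architecture):

```python
# Toy sketch of an objective-driven agent (a schematic reading, not
# LeCun's actual architecture): actions outside the guardrails are
# never even considered, and among safe actions the one minimizing
# the task objective is chosen.

def task_cost(action, goal):
    """Lower is better: distance from the goal in a toy 1-D world."""
    return abs(goal - action)

def satisfies_guardrails(action, limit=10):
    """Hard constraint: actions outside [-limit, limit] are forbidden."""
    return -limit <= action <= limit

def choose_action(candidate_actions, goal):
    """By construction, only guardrail-satisfying actions compete."""
    safe = [a for a in candidate_actions if satisfies_guardrails(a)]
    if not safe:
        raise RuntimeError("no action satisfies the guardrails")
    return min(safe, key=lambda a: task_cost(a, goal))

action = choose_action(candidate_actions=range(-20, 21), goal=15)
print(action)  # → 10, the safe action closest to the (unsafe) goal
```

    The guardrail filter runs before the objective is optimized, which is the "must abide by the constraints by construction" part of the argument.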
    TADEAS
    TADEAS --- ---
    TADEAS:

    Adam Marblestone – AI is missing something fundamental about the brain
    https://youtu.be/_9V_Hbe-N1A?si=nrXVwPrLHAVWwsY_
    TADEAS
    TADEAS --- ---
    TADEAS:

    The Sweet Lesson of Neuroscience—Asterisk
    https://asteriskmag.com/issues/13/the-sweet-lesson-of-neuroscience

    I believe the brain may have something more to teach us about AI — and that, in the process, AI may have quite a bit to teach us about the brain. Modern AI research centers on three key ingredients: architectures, learning rules, and training signals. The first two — how to build up complex patterns of information from simple ones and how to learn from errors to produce useful patterns — have been substantially mastered by modern AI. But the third factor — what training signals (typically called “loss” functions, “cost” functions, or “reward”) should drive learning — remains deeply underexplored. And that, I think, is where neuroscience still has surprises left to deliver.
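    The three ingredients map directly onto the parts of any training loop. A toy sketch in plain Python, fitting y = 3x with a single weight, with each ingredient labeled (the task and numbers are invented for illustration):

```python
# The three ingredients of modern AI, labeled in a minimal training
# loop: a toy 1-parameter model fitting y = 3x (illustrative only).

# 1. Architecture: how patterns are built from inputs
#    (here, the simplest possible: a single weight).
w = 0.0

# 2. Training signal ("loss"/"cost" function): what learning optimizes.
def loss(w, x, y):
    return (w * x - y) ** 2

# 3. Learning rule: how errors adjust parameters (gradient descent).
def update(w, x, y, lr=0.01):
    grad = 2 * (w * x - y) * x   # d(loss)/dw
    return w - lr * grad

data = [(x, 3 * x) for x in range(1, 6)]  # targets generated by y = 3x
for _ in range(100):
    for x, y in data:
        w = update(w, x, y)

print(round(w, 3))  # → 3.0
```

    In the article's terms, AI research has largely settled the first two ingredients; the open question is what the training signal (here a hand-picked squared error) should be.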

    I’ve been fascinated by this question since 2016, when advances in artificial deep learning led me to propose that the brain probably has many highly specific cost functions built by evolution that might train different parts of the cerebral cortex to help an animal learn exactly what it needs to in its ecological niche.

    More recently, Steve Byrnes, a physicist turned AI safety researcher, has shed new light on the question of how the brain trains itself. In a remarkable synthesis of the neuroscience literature, Byrnes recasts the entire brain as two interacting systems: a learning subsystem and a steering subsystem. The first learns from experience during the animal’s lifetime — a bit like one of AI’s neural networks that starts with randomly initialized “weights,” or “parameters,” inside the network, which are adjusted by training. The second is mostly hardwired and sets the goals, priorities, and reward signals that shape that learning. A learning machine — like a neural network — can learn almost anything; the steering subsystem determines what it is being asked to learn.
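    Byrnes' split can be caricatured in a few lines: a learning subsystem that starts from random weights and adjusts them according to reward, and a mostly hardwired steering subsystem that defines what counts as rewarding. Everything below is an invented toy, not Byrnes' model:

```python
import random

# Caricature of Byrnes' two-subsystem view (an invented toy, not his
# model): the learner can learn almost anything; the hardwired steering
# subsystem determines what it is being asked to learn.

random.seed(0)  # fixed seed so the run is reproducible

class LearningSubsystem:
    """Learns from experience: random initial weights, adjusted by reward."""
    def __init__(self, n_actions=3):
        self.weights = [random.random() for _ in range(n_actions)]

    def act(self):
        return max(range(len(self.weights)), key=lambda a: self.weights[a])

    def learn(self, action, reward, lr=0.5):
        self.weights[action] += lr * reward

class SteeringSubsystem:
    """Mostly hardwired: defines what counts as rewarding (here, action 2)."""
    INNATE_GOAL = 2

    def reward(self, action):
        return 1.0 if action == self.INNATE_GOAL else -1.0

learner, steering = LearningSubsystem(), SteeringSubsystem()
for _ in range(20):
    a = learner.act()
    learner.learn(a, steering.reward(a))

print(learner.act())  # → 2: the learner converges on the steering goal
```

    Swapping in a different `INNATE_GOAL` retrains the same learner toward a different behavior, which is the alignment-relevant point: the steering subsystem, not the learner, carries the goals.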

    Byrnes’ work suggests that some of the most relevant insights in AI alignment will come from neuroscientific frameworks about how the steering system teaches and aligns the learner from within. I agree. This perspective is the seed of what we might call the “sweet lesson” of neuroscience.
    TADEAS
    TADEAS --- ---
    Dario Amodei — “We are near the end of the exponential”
    https://youtu.be/n1E9IZfvGMA?si=8IgDbhd631trdABS


    "But there is a puzzle either way, which is that in pre-training we use trillions of tokens. Humans don't see trillions of words. So there is an actual sample efficiency difference here. There is actually something different here. The models start from scratch and they need much more training. But we also see that once they're trained, if we give them a long context length of a million — the only thing blocking long context is inference — they're very good at learning and adapting within that context."

    ...

    I think there's something going on where pre-training is not like the process of humans learning, but it's somewhere between the process of humans learning and the process of human evolution. We get many of our priors from evolution. Our brain isn't just a blank slate. Whole books have been written about this. The language models are much more like blank slates. They literally start as random weights, whereas the human brain starts with all these regions connected to all these inputs and outputs. Maybe we should think of pre-training — and for that matter, RL as well — as some mix of two different things that happen in human life. One is human evolution and the other is human learning. Then we should think of in-context learning as a mix of two things that happen in human life. One is human learning and the other is the kind of immediate thinking and processing you do in real time. What we would want to reach is a learning algorithm where as the models get better, they move more toward the learning side and less toward the evolution side. Then you wouldn't need 10 trillion tokens. You'd maybe need 100 billion and then the models can figure out the rest from in-context learning or from RL that happens in real time
    TADEAS
    TADEAS --- ---
    We Are Living in the Side Effects of a Planetary Crash, Say Anthropologists from the ResisTerra Project | Deník Alarm
    https://denikalarm.cz/2026/02/zijeme-ve-vedlejsich-efektech-planetarniho-krachu-rikaji-antropologove-z-projektu-resisterra/

    Where does the saying "leave it to the beavers" come from, and what does perceiving more-than-human actors mean? The anthropologists of the ResisTerra project study the influence of other-than-human actors, revealing unexpected alliances and connections. They also point to the often ill-considered behavior of humans. "People dreamed for so long of feeding everyone that plantations wiped out biodiversity. Industrialization was supposed to solve material problems, but global warming was overlooked," the researchers say.
    TADEAS
    TADEAS --- ---
    Guided by Plant Voices - Nautilus
    https://nautil.us/guided-by-plant-voices-237798/

    Researcher and ecologist Monica Gagliano says her experiences with indigenous people, such as the Huichol in Mexico (pictured), informed her view that plants have a range of feelings. “I don’t know if they would use those words to describe joy or sadness, but they are feeling bodies,” she says.

    Her experiments suggest plants have the capacity to learn, remember, and make choices.

    She's also embraced the use of psychedelics in her research. Gagliano considers these explorations of non-Western ways of seeing the world to be part of her scientific work.

    While a visiting scholar at Dartmouth College, Gagliano spoke with Steve Paulson about her experiments, the emerging field of plant intelligence, and her own experiences of talking with plants.
    TADEAS
    TADEAS --- ---
    TADEAS:

    Elon Musk – "In 36 months, the cheapest place to put AI will be space”
    https://youtu.be/BYXbuik3dgA?si=2aChlAllBox9GK4A