    planetarita - 'making life planetary'
    TADEAS
    TADEAS --- ---
    Kick-Off Webinar for Regenerating Earth Through Collapse
    https://youtu.be/_uBIkII3Mp4?si=9iw_LPMyYdnHV_qZ
    TADEAS
    TADEAS --- ---
    Design School for Regenerating Earth
    https://design-school-for-regenerating-earth.mn.co/
    TADEAS
    TADEAS --- ---
    This week, Google DeepMind proposed an important cognitive framework for measuring progress toward AGI by decomposing intelligence into ten human cognitive faculties and evaluating systems against human baselines.

    This is a major advance over vague, single-number claims about “AGI,” because it makes capability claims empirically testable, human-comparable, and diagnostically rich.
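
    A minimal sketch of what such a profile-based evaluation could look like in practice (the faculty names, scores, and threshold below are my illustrative assumptions, not DeepMind's actual rubric):

    # Hypothetical capability-profile evaluation against human baselines.
    # Faculty names and the threshold are illustrative assumptions.
    FACULTIES = ["abstraction", "transfer", "world_modeling", "planning",
                 "memory", "perception", "language", "social_cognition",
                 "metacognition", "self_improvement"]

    def profile(system_scores: dict, human_baseline: dict) -> dict:
        # Per-faculty ratio of system score to human baseline (1.0 = human-level).
        return {f: system_scores[f] / human_baseline[f] for f in FACULTIES}

    def human_level_on_all(prof: dict, threshold: float = 1.0) -> bool:
        # Under this framing, a capability claim is a whole profile,
        # not a single aggregate number.
        return all(v >= threshold for v in prof.values())

    The point of the decomposition is diagnostic: a system can sit far above the baseline on some faculties and far below it on others, which a single score hides.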

    We argue, however, that a human-comparative cognitive profile, while necessary, is not sufficient for the next phase of AGI evaluation. Three shortcomings motivate the extension:

    - First, a highly general system may operate in a deeply non-human style while still exhibiting powerful abstraction, transfer, world-modeling, and self-improvement.

    - Second, social fluency and ethical polish do not by themselves constitute beneficial moral agency.

    - Third, many of the most consequential deployment failures stem not from static capability deficits but from behavioral shifts that emerge under stress, competition, temptation, or self-preservation pressure.

    Learn more:

    Beyond Human Comparison - by Ben Goertzel - Eurykosmotron
    https://bengoertzel.substack.com/p/beyond-human-comparison
    TADEAS
    TADEAS --- ---
    ‘My ideas are a little revolutionary’: ecologist Suzanne Simard on intelligent forests, the climate and her critics | Trees and forests | The Guardian
    https://www.theguardian.com/environment/2026/mar/14/my-ideas-are-a-little-revolutionary-ecologist-suzanne-simard-on-intelligent-forests-the-climate-and-her-critics
    TADEAS
    TADEAS --- ---
    Here is an excellent paper that clearly explains the philosophy that guides Yann LeCun's research in AI and his new company, AMI Labs. It also perfectly expresses my complaints about the trope of artificial general intelligence -- AGI, or BS for short.

    LeCun et al. reject the idée fixe that obsesses the Promethean dreams of too many of the AI boys: that they are nearly there, with the power to surpass human intelligence in every way -- thus, "general." The paper argues instead that human intelligence itself is not general: each of us is good at some things, incompetent at others.

    To set the goal for AI development in anthropomorphic and ultimately hubristic terms is a mistake. Far better to build systems that are specialized (as humans are), concentrating scarce resources on efficiently advancing toward one skill or another, not all: "Given finite energy, an approach that directs available energy towards learning a finite set of tasks will reasonably outperform an approach that distributed the finite energy over an infinite amount of tasks." Or in the paper's pithy conceit: "The AI that folds our proteins should not be the AI that folds our clothes!"
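
    A toy numerical reading of that finite-energy claim (my illustration under an assumed diminishing-returns curve, not a model from the paper):

    import math

    E = 100.0  # fixed total energy budget, arbitrary units

    def performance(energy: float) -> float:
        # Assumed concave (diminishing-returns) skill curve; the concave
        # shape, not this particular formula, carries the argument.
        return math.log1p(energy)

    for n_tasks in (1, 2, 10, 100):
        print(f"{n_tasks:>3} tasks -> per-task performance {performance(E / n_tasks):.2f}")

    # 1 task: ~4.62; 100 tasks: ~0.69. Splitting a fixed budget across more
    # tasks dilutes each one, so the specialist wins on any single task.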

    LeCun also believes that embracing specialization will enable a system's creators to limit its function, thus its power, and ensure its safety. The other AI boys think they will create the God machine whose fury even they cannot contain. LeCun has the more mature view that machines, even intelligent ones, are still machines with plugs to pull.

    The paper indirectly illuminates LeCun's devotion to world models over large-language-models' text prediction. Or as the company's homepage puts it: "We share one belief: real intelligence does not start in language. It starts in the world." LeCun himself pioneered thinking that helped lead to LLMs, but he believes text can take the technology only so far. He aims to build systems that can adapt to reality because they are trained on reality -- not on text as tokens or on pixels next to pixels, but as machines able to train themselves to understand the laws of nature that toddlers and cats discern, without language.

    Here's the paper, written by LeCun, Judah Goldfeder, Philippe Wyder, and Ravid Shwartz-Ziv:

    https://arxiv.org/pdf/2602.23643


    src: https://www.facebook.com/share/1AmNPF2MNz/
    TADEAS
    TADEAS --- ---
    Ben Goertzel discussed a recent paper by Professor Yann LeCun and co-authors proposing Superhuman Adaptable Intelligence (SAI) as an alternative framing to AGI, a concept they describe as overloaded and flawed.

    The paper defines SAI as intelligence capable of adapting to exceed humans at any task humans can perform, while also adapting to tasks outside the human domain that have utility.

    Dr. Goertzel assessed SAI as a specific parameterization within existing theoretical frameworks of general intelligence, particularly Efficient Pragmatic General Intelligence (EPGI).

    He suggested that extending the concept with explicit safety constraints and mechanisms for open-ended capability growth would better address key questions in long-term AGI development.

    Read Dr. Goertzel's analysis and explore why SAI is a special case of AGI, not an alternative:

    LeCun’s “Superhuman Adaptable Intelligence” Is a Special Case of AGI, Not an Alternative
    https://bengoertzel.substack.com/p/lecuns-sai-is-a-special-case-of-agi
    TADEAS
    TADEAS --- ---
    Intelligence is not a collection of skills nor an accumulation of declarative knowledge.

    Intelligence is the ability to accomplish new tasks with no prior training or with fast training.

    This points to the necessity of System 2, world models, and planning.
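
    A minimal sketch of what "world models and planning" means operationally (my generic illustration, not code from the paper): act by rolling candidate action sequences through a learned model of the world and keeping the one with the best predicted outcome.

    # Generic world-model planning loop (model-predictive-control flavor).
    # world_model, score, and candidates are assumed, externally supplied parts.
    def plan(state, world_model, score, candidates, horizon=10):
        # Pick the candidate action sequence whose simulated rollout scores best.
        best_seq, best_value = None, float("-inf")
        for seq in candidates:
            s, value = state, 0.0
            for action in seq[:horizon]:
                s = world_model(s, action)  # predict the next state; no real-world trial
                value += score(s)           # evaluate the predicted outcome
            if value > best_value:
                best_seq, best_value = seq, value
        return best_seq  # execute the first action, then replan

    The deliberate simulate-evaluate-choose loop is the "System 2" part; the learned world_model is what lets the system handle a task it was never trained on.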

    [2602.23643v1] AI Must Embrace Specialization via Superhuman Adaptable Intelligence
    https://arxiv.org/abs/2602.23643v1
    TADEAS
    TADEAS --- ---
    NASA Study: Non-biologic Processes Don't Fully Explain Mars Organics - Astrobiology
    https://astrobiology.com/2026/02/nasa-study-non-biologic-processes-dont-fully-explain-mars-organics.html

    In a new study, researchers say that non-biological sources they considered could not fully account for the abundance of organic compounds in a sample collected on Mars by NASA’s Curiosity rover.
    TADEAS
    TADEAS --- ---
    Metallic sphere hypothesis: Are spheres spying on us? | Reality Check
    https://youtu.be/SGFrfW5seiI?si=x-Udfj7pEbDxJICj
    TADEAS
    TADEAS --- ---
    Is Buzz Aldrin being forced into silence after witnessing a UFO? | Reality Check
    https://youtu.be/qyNU8ZJbv1w?si=3ahqMpD39Xr02QfT
    TADEAS
    TADEAS --- ---
    Y LeCun
    https://www.facebook.com/share/1BDH2cXmnz/

    At the AI Impact Summit in Delhi, Yoshua Bengio says that AI systems should make predictions without any goal.
    He says goals would bias the systems in possibly dangerous ways by giving them drives and desires.
    He claims that the proper template to use is idealized human scientists.

    I completely disagree with the whole premise.
    I don't think any system can do anything useful without an objective.

    One point we agree on is that LLMs are intrinsically unsafe. But they are unsafe precisely because they don't have any objectives and merely emulate the humans who produced the text they've been trained on.

    My recommendation is the exact opposite of Yoshua's.
    AI systems *should* have goals. They should be designed so that they can do nothing but fulfill the goals we give them.

    Naturally, these goals and objectives must include safety guardrails.

    But the point is that, by construction, the system *must* fulfill the goal we give it and *must* abide by the safety guardrail constraints.

    I call these objective-driven AI architectures.
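
    One way to read "must fulfill the goal and must abide by the guardrails, by construction" (my sketch, not LeCun's published design): the system can only emit actions that minimize a task objective within hard constraints, so guardrails bind at decision time instead of being trained in.

    # Sketch of an objective-driven decision rule with hard guardrails.
    # task_cost, guardrails, and candidate_actions are assumed components.
    def decide(state, candidate_actions, task_cost, guardrails):
        # Keep only actions that pass every guardrail constraint.
        feasible = [a for a in candidate_actions
                    if all(ok(state, a) for ok in guardrails)]
        if not feasible:
            raise RuntimeError("no safe action available; defer to a human")
        # Among safe actions, pursue the given objective and nothing else.
        return min(feasible, key=lambda a: task_cost(state, a))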
    TADEAS
    TADEAS --- ---
    TADEAS:

    Adam Marblestone – AI is missing something fundamental about the brain
    https://youtu.be/_9V_Hbe-N1A?si=nrXVwPrLHAVWwsY_
    TADEAS
    TADEAS --- ---
    TADEAS:

    The Sweet Lesson of Neuroscience—Asterisk
    https://asteriskmag.com/issues/13/the-sweet-lesson-of-neuroscience

    I believe the brain may have something more to teach us about AI — and that, in the process, AI may have quite a bit to teach us about the brain. Modern AI research centers on three key ingredients: architectures, learning rules, and training signals. The first two — how to build up complex patterns of information from simple ones and how to learn from errors to produce useful patterns — have been substantially mastered by modern AI. But the third factor — what training signals (typically called “loss” functions, “cost” functions, or “reward”) should drive learning — remains deeply underexplored. And that, I think, is where neuroscience still has surprises left to deliver.

    I’ve been fascinated by this question since 2016, when advances in artificial deep learning led me to propose that the brain probably has many highly specific cost functions built by evolution that might train different parts of the cerebral cortex to help an animal learn exactly what it needs to in its ecological niche.

    More recently, Steve Byrnes, a physicist turned AI safety researcher, has shed new light on the question of how the brain trains itself. In a remarkable synthesis of the neuroscience literature, Byrnes recasts the entire brain as two interacting systems: a learning subsystem and a steering subsystem. The first learns from experience during the animal’s lifetime — a bit like one of AI’s neural networks that starts with randomly initialized “weights,” or “parameters,” inside the network, which are adjusted by training. The second is mostly hardwired and sets the goals, priorities, and reward signals that shape that learning. A learning machine — like a neural network — can learn almost anything; the steering subsystem determines what it is being asked to learn.

    Byrnes’ work suggests that some of the most relevant insights in AI alignment will come from neuroscientific frameworks about how the steering system teaches and aligns the learner from within. I agree. This perspective is the seed of what we might call the “sweet lesson” of neuroscience.
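
    A toy rendering of that two-subsystem picture (my illustration of the framing, not Byrnes' code): a hardwired steering module emits the reward signal, and a generic learner adapts to it within a "lifetime."

    import random

    def steering_reward(behavior: float) -> float:
        # Hardwired "steering subsystem": fixed in advance (by evolution),
        # not learned. Here it rewards behavior near an arbitrary set point.
        return -abs(behavior - 0.7)

    # "Learning subsystem": starts from a random parameter and hill-climbs
    # on the steering signal over its lifetime.
    w = random.random()
    for _ in range(1000):
        candidate = w + random.gauss(0.0, 0.05)
        if steering_reward(candidate) > steering_reward(w):
            w = candidate
    print(f"learned behavior ~= {w:.2f} (steered toward the 0.7 set point)")

    The same learner pointed at a different steering_reward would learn something entirely different, which is the alignment-relevant point.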
    TADEAS
    TADEAS --- ---
    Dario Amodei — “We are near the end of the exponential”
    https://youtu.be/n1E9IZfvGMA?si=8IgDbhd631trdABS


    "But there is a puzzle either way, which is that in pre-training we use trillions of tokens. Humans don't see trillions of words. So there is an actual sample efficiency difference here. There is actually something different here. The models start from scratch and they need much more training. But we also see that once they're trained, if we give them a long context length of a million — the only thing blocking long context is inference — they're very good at learning and adapting within that context."

    ...

    I think there's something going on where pre-training is not like the process of humans learning, but it's somewhere between the process of humans learning and the process of human evolution. We get many of our priors from evolution. Our brain isn't just a blank slate. Whole books have been written about this. The language models are much more like blank slates. They literally start as random weights, whereas the human brain starts with all these regions connected to all these inputs and outputs.

    Maybe we should think of pre-training — and for that matter, RL as well — as some mix of two different things that happen in human life. One is human evolution and the other is human learning. Then we should think of in-context learning as a mix of two things that happen in human life. One is human learning and the other is the kind of immediate thinking and processing you do in real time.

    What we would want to reach is a learning algorithm where, as the models get better, they move more toward the learning side and less toward the evolution side. Then you wouldn't need 10 trillion tokens. You'd maybe need 100 billion, and then the models can figure out the rest from in-context learning or from RL that happens in real time.
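
    Rough arithmetic behind the sample-efficiency gap mentioned above (my back-of-envelope with assumed, illustrative figures, not Amodei's):

    words_per_day = 30_000  # generous assumed estimate of daily linguistic exposure
    years = 20
    human_words = words_per_day * 365 * years  # ~2.2e8 words over two decades
    pretraining_tokens = 10 ** 13              # "trillions of tokens"
    print(f"human exposure ~ {human_words:.1e} words")
    print(f"gap ~ {pretraining_tokens / human_words:,.0f}x")  # roughly 45,000x

    Four to five orders of magnitude -- which is why the analogy above reaches for evolution, not individual learning, to account for most of pre-training.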
    TADEAS
    TADEAS --- ---
    We are living in the side effects of a planetary crash, say anthropologists from the ResisTerra project | Deník Alarm
    https://denikalarm.cz/2026/02/zijeme-ve-vedlejsich-efektech-planetarniho-krachu-rikaji-antropologove-z-projektu-resisterra/

    Where does the saying "leave it to the beavers" come from, and what does it mean to perceive more-than-human actors? Anthropologists from the ResisTerra project study the influence of non-human actors, revealing unexpected alliances and connections. They also point to the often ill-considered behavior of humans. "People dreamed for so long of feeding everyone that plantations wiped out biodiversity. Industrialization was supposed to solve material problems, but global warming was overlooked," the researchers say.
    TADEAS
    TADEAS --- ---
    Guided by Plant Voices - Nautilus
    https://nautil.us/guided-by-plant-voices-237798/

    Researcher and ecologist Monica Gagliano says her experiences with indigenous people, such as the Huichol in Mexico (pictured), informed her view that plants have a range of feelings. “I don’t know if they would use those words to describe joy or sadness, but they are feeling bodies,” she says.

    Her experiments suggest plants have the capacity to learn, remember, and make choices.

    She's also embraced the use of psychedelics in her research. Gagliano considers these explorations in non-Western ways of seeing the world to be part of her scientific work.

    While a visiting scholar at Dartmouth College, Gagliano spoke with Steve Paulson about her experiments, the emerging field of plant intelligence, and her own experiences of talking with plants.
    TADEAS
    TADEAS --- ---
    TADEAS:

    Elon Musk – "In 36 months, the cheapest place to put AI will be space”
    https://youtu.be/BYXbuik3dgA?si=2aChlAllBox9GK4A
    TADEAS
    TADEAS --- ---
    Is there a ‘meta’-crisis? Yes. – Adapt Research Ltd
    https://adaptresearchwriting.com/2026/02/04/is-there-a-meta-crisis-yes/

    Global risk mitigation is like the parable of the blind monks and the elephant: each of at least six disciplines grasps a real part of the problem, but none sees or acts on the whole.

    Current disaster risk reduction practice reveals that we are systematically underprepared for rare-but-catastrophic events; global catastrophic risk research shows that some of these threats could overwhelm civilisation entirely. Yet national risk assessments indicate that governments mostly plan as if risks were local, isolated, and manageable, when in reality they are not.

    Systemic risk and polycrisis research deepens the picture by showing that the world is not just facing many dangers, but rising, interacting stresses that can cascade across tightly coupled global systems. This means today’s risk landscape is not simply a series of external shocks, but a living, unstable system generating hazards from within itself.

    But these frameworks still leave a crucial question unanswered: why do humans keep building such a fragile world?

    In my talk I noted that the answer requires turning to human behaviour and cultural evolution. Human actions are shaped by biases, incentives, institutions, and evolved social dynamics that develop in response to built and inherited human environments.

    These processes give rise to many strategies that are locally successful but globally disastrous. Over time, these dynamics can create maladaptive “trap states”; even worse, they can erode society’s very capacity to adapt.

    Evolvability is the key

    I contended that the notion of ‘evolvability’ becomes central. For societies to cope with an unpredictable future, humanity must avoid entrenchment and path-dependent maladaptation. Without the right kinds of variation, modularity, institutional and informational stability, and effective constraints on harmful “outlaw” strategies, complex adaptations to mitigate risk cannot emerge. Yet arguably all of these are currently degrading on the global stage.

    As a result, humanity is not just producing risks faster than it can manage them; it is undermining the mechanisms that would allow us to learn, adapt, and recover.

    ...

    Systemic risk thinking is no longer confined to niche complexity scholarship but is increasingly shaping both academic risk analysis and practical decision-making frameworks.

    I suggest that even with this convergence on the nuance and interdependent complexity of risk, we will never escape a cascade of escalating global risk until we find ways to address the behavioural and evolutionary generative mechanisms of the situation the world is presently in.

    We should build societies that are safe and resilient because they can evolve well, not because they try to predict everything or stay the same.

    A focus on engineering and nudging ‘evolvability’ provides the potential for a broad-based structural solution to global risk.
    VOYTEX
    VOYTEX --- ---
    TADEAS: "By using an electromagnetic mass driver " 🤔
    TADEAS
    TADEAS --- ---
    STINKY: step by step :)