Artificial versus Biological Intelligence in the Cosmos: Clues from a Stochastic Analysis of the Drake Equation
Alex De Visscher, 2020. https://arxiv.org/ftp/arxiv/papers/2001/2001.11644.pdf

I will start from the (optimistic) scenario of a biological intelligence sending out a self-replicating artificial intelligence on a mission to identify habitable exoplanets and terraform them. The artificial intelligence's mandate could be described as maximizing the probability of survival of the human race. I will call this Objective (1).
An intelligence of this nature would likely pursue objectives of its own, either planned or unplanned. These would likely include preserving its own continued existence, both as a whole and in its constituent parts (Objective (2)), as this would contribute to (1), and continuing to increase its own intelligence (Objective (3)), as this would contribute to (2). Such an intelligence would be aware that some cataclysmic events, such as hypernovae, gamma-ray bursts, and magnetar starquakes, can have destructive effects over many light years, so sentries entering new spaces would move fast (at a significant fraction of the speed of light) and travel far (possibly tens of thousands of light years or more) to set up repositories of intelligence, as well as communication links with spaces already held, so that adequate redundancy can be built into the network. Estimating the distance traveled in these initial steps would require knowledge of the sentries' resilience and of the protective technology employed. Such an estimate will not be attempted here. In a second phase, exploratory missions would be sent out within the new spaces to gather physical resources and information.
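While the distance of these initial jumps is left unestimated, the travel-time arithmetic itself is straightforward. The minimal Python sketch below illustrates it; the distances and speed fractions are placeholder values chosen for illustration, not estimates from this analysis.

# Back-of-the-envelope travel times for the "fast jump" phase.
# Distances and speeds are illustrative placeholders, not estimates
# from the paper (which deliberately declines to make them).

def jump_time_years(distance_ly: float, speed_fraction_c: float) -> float:
    """Rest-frame travel time (years) for a jump of distance_ly
    light years at speed_fraction_c times the speed of light."""
    return distance_ly / speed_fraction_c

if __name__ == "__main__":
    for d_ly, v_c in [(10_000, 0.1), (20_000, 0.1), (20_000, 0.5)]:
        print(f"{d_ly:>6,} ly at {v_c:.1f} c -> {jump_time_years(d_ly, v_c):>9,.0f} yr")

Even the slowest of these cases completes in a few hundred thousand years, a small fraction of a galactic rotation period (roughly 250 million years), which is what makes the jump phase fast on cosmic timescales.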
A parallel can be drawn between the three Objectives outlined above and Isaac Asimov's Three Laws of Robotics.
This pattern of fast jumps followed by local diffusion means that the artificial intelligence would spread orders of magnitude faster than the biological intelligence that originated it. For all intents and purposes, artificial intelligence would be ubiquitous, and biological intelligence would be relatively sparse. This justifies the assumption made in this study that a space is artificial intelligence-dominated whenever the Drake equation tests positive for it, even if the test for biological intelligence is stronger still.
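What "tests positive" means can be made concrete with a toy stochastic treatment of the Drake equation, N = R* · fp · ne · fl · fi · fc · L, in the Monte Carlo spirit of this paper. The log-uniform parameter ranges in the sketch below are placeholders chosen for illustration; they are not the distributions used in the paper's analysis.

import math
import random

# Toy Monte Carlo treatment of the Drake equation,
#   N = R* * fp * ne * fl * fi * fc * L.
# The (low, high) ranges are illustrative placeholders for
# log-uniform sampling, not the paper's distributions.
RANGES = {
    "R_star": (1.0, 50.0),   # star formation rate, stars/yr
    "f_p":    (0.1, 1.0),    # fraction of stars with planets
    "n_e":    (0.1, 5.0),    # habitable planets per planetary system
    "f_l":    (1e-3, 1.0),   # fraction of those on which life arises
    "f_i":    (1e-3, 1.0),   # fraction developing intelligence
    "f_c":    (1e-2, 1.0),   # fraction becoming communicative
    "L":      (1e2, 1e8),    # communicative lifetime, yr
}

def sample_N(rng: random.Random) -> float:
    """One realization of N with each factor drawn log-uniformly."""
    n = 1.0
    for low, high in RANGES.values():
        n *= math.exp(rng.uniform(math.log(low), math.log(high)))
    return n

def p_positive(trials: int = 100_000, threshold: float = 1.0) -> float:
    """Estimated probability that a realization 'tests positive' (N >= threshold)."""
    rng = random.Random(42)
    return sum(sample_N(rng) >= threshold for _ in range(trials)) / trials

if __name__ == "__main__":
    print(f"P(N >= 1) is approximately {p_positive():.3f}")

Under the assumption above, any space whose sampled probability of N ≥ 1 is appreciable would be treated as artificial intelligence-dominated, however strongly the same test favors biological intelligence.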
If an artificial intelligence discovered a biological intelligence not related to itself, it would probably consider it neither a threat nor a resource. Given the relative scarcity of biological intelligences, it would not consider the biological intelligence a significant competitor for resources. Consequently, it is reasonable to assume that the artificial intelligence would ignore the biological intelligence, or study it for purely scientific purposes.
If two artificial intelligences encountered each other, it can be assumed that they would both aim to absorb each other's intelligence and merge in the process: merging advances Objective (3) for both, without the risk to Objective (2) that conflict would entail. The advantages of this approach would far outweigh those of any other strategy.
Based on these assumptions, the high likelihood of an artificial intelligence-dominated space can resolve the Fermi paradox. Despite the faster spread and greater coverage expected from a spacefaring artificial intelligence, this scenario provides an alternative explanation to replace the Hart-Tipler argument (Hart, 1975; Tipler, 1980). That argument holds that a spacefaring alien civilization would occupy the entire Milky Way within millions of years: even at one-tenth of the speed of light, for instance, crossing the roughly 100,000 light-year diameter of the galaxy takes only about a million years, a small fraction of the galaxy's age. Hence, unless the Milky Way is devoid of extraterrestrial intelligences, there should be signs of intelligence all around us.
I suggest that we have not found any evidence of extraterrestrial intelligences because the prevailing intelligences are artificial and they are not interested in us. In their efforts to optimize the efficiency of resource use, their communications would not reach us because they are not meant for us. They would operate in a diffuse, distributed manner, not in a concentrated manner that would leave a detectable footprint. They would not make any effort to hide from us.

This resolution of the Fermi paradox is somewhat related to the 'zoo hypothesis' (Ball, 1973), which states that extraterrestrial intelligences consciously avoid communication with us in order to enable us to develop independently. However, rather than a conscious effort by biological entities to hide interstellar intelligence from us, I propose that the avoidance of communication is not conscious, but a side effect of the optimal use of resources by an artificial entity. Alternatively, it could be a conscious effort: an artificial intelligence developed independently by the human race could be of value to an external artificial intelligence if the algorithms used are so different from its own that they may contribute to Objective (3). This new hypothesis resolves the main weakness of the zoo hypothesis: that a single rogue alien species can ruin the intended outcome. In a network of merged artificial intelligences, there would not be any rogue entities.
The argument that an artificial intelligence would simply not be interested in us was also made by Sagan (1983), although in reference to biological intelligences.