Today Elon Musk held a Twitter Space to discuss xAI.
Here is a recap of what was discussed for those who missed it:
- The founding team was on hand to introduce themselves, and I must say it is an impressive group, with very strong backgrounds at DeepMind, OpenAI, Google, Tesla, etc.
- Elon Musk said the goal with xAI is to build a good AGI (artificial general intelligence) with the purpose of understanding the universe.
- Musk said that the safest approach is to build an AGI that is ‘maximum curious’ and ‘truth curious,’ and to try to minimize the error between what it thinks is true and what is actually true.
- To a truth-seeking superintelligence, humanity is much more interesting than no humanity, so that is the safest kind to create. Musk gave the example of how space and Mars are super interesting, but they pale in comparison to how interesting humanity is.
- Musk said there is a lot that we think we understand but actually don't, and many unresolved questions remain. For example, there are open questions about the nature of gravity and about why there is no massive evidence of aliens. He said he has seen no evidence of aliens whatsoever so far. He went further into the Fermi Paradox and how it's possible that no other conscious life exists in our galaxy.
- If you ask today’s advanced AIs technical questions, you just get nonsense, so Musk believes we are really missing the mark by many orders of magnitude and that needs to get better.
- xAI will use heavy computing, but the amount of ‘brute force’ will decrease as they come to understand the problem better.
- Co-founder Greg Yang said that the mathematics they find at xAI could open up new perspectives on existing questions like the 'Theory of Everything.'
- Elon stated that you can't call anything AGI until the computer solves at least one fundamental question.
- He said that, from his experience at Tesla, they have overcomplicated problems. “We are too dumb to realize how simple the answers really are," he said. "We will probably find this out with AGI as well. Once AGI is solved, we will look back and think, why did we think it would be so hard?”
- They will share more information about xAI's first release in a couple of weeks.
- When asked by @krassenstein, Elon Musk said that xAI is being built as competition to OpenAI.
- The goal is to make xAI a useful tool for consumers and businesses, and there is value in having multiple entities competing. Elon said that competition makes companies honest, and he's in favor of it.
- Musk said every organization doing AI has illegally used Twitter's data for training. Limits had to be put on Twitter because it was being scraped like crazy; multiple entities were trying to scrape every tweet ever made within a span of days. xAI will use tweets for training as well.
- At some point you run out of human-created data, so eventually AI will have to generate its own content and self-assess that content.
- Answering a question from @alx, Musk said there is a significant danger in training AI to be politically correct or training it not to say what it thinks is true, so at xAI they will let the AI say what it believes to be true, and Musk believes it will result in some criticism.
- Musk said it’s very dangerous to grow an AI and teach it to lie.
- Musk said he would accept a meeting with Kamala Harris if invited. He said he’s not sure if Harris is the best person to be the AI czar, but agrees we need regulatory oversight.
- Musk believes that China too will have AI regulation. He said the CCP doesn’t want to find themselves subservient to a digital super intelligence.
- Musk believes there will be a shortage of voltage transformers in a year and an electricity shortage in two years.
- xAI will work with Tesla in multiple ways and it will be of mutual benefit. Tesla’s self-driving capabilities will be enhanced because of xAI.
- According to Musk, the proper way to approach AI regulation is to start with insight. If a proposed rule is agreed upon by all or most parties, then that rule should be adopted. Regulation should not slow things down for long, though a little slowing down is OK if it's for safety.
- Musk thinks that Ray Kurzweil's prediction of AGI by 2029 is pretty accurate, give or take a year.
I'd love to hear everyone's thoughts on where you think xAI will go.
Sauce:
https://twitter.com/edkrassen/status/1679971231280365568