Now that we have language models… great! I can just write this post as bullet points, and you may simply copy/paste them into the AI of your choice and turn them into a text that I’m sure is better than what I could conjure in an hour.
- An AI revolution is inevitable.
- The move into the physical space is inevitable.
- We are still at the beginning, where we have a bunch of models that are each good at one particular thing but not at others. The real revolution will come when AIs can autonomously combine what they need and learn on demand.
- Society needs to be ready for AIs replacing many human tasks, and we need to move away from the idea that everybody needs some random job to be valuable to society.
- A lot of the fear of generative AI, I think, is similar to when Photoshop came out 30 years ago and users were scared of image manipulation. In the end, it’s a tool that doesn’t replace true creativity. If I ask Stable Diffusion to generate a painting in Rembrandt’s style, the artistic part is the style itself, which the AI didn’t create. In the future, artists may just set the overall style, and the intricacies of creating individual works won’t matter as much. Yes, AIs are great, but I’ll be really impressed when I can ask one to create a new art style or something similar.
- Pretty much all businesses need to evaluate whether they want to build or buy AI (or whether it doesn’t fit their business model at all, which I believe is less likely than one might hope).
- There will be a tectonic shift in tech: companies that get AI right will disrupt entire industries, while those that were too slow will be pushed aside.
- Humans, just like AIs, learn from other works. If the work of an AI doesn’t count as a valid “thing”, then the work of a human who has (very likely) been exposed to other human works isn’t valid either.
- There are huge privacy implications in how many current AIs work (looking at you, ChatGPT). Many companies pushing for AI are taking a cloud and crowdsourcing approach, but we shouldn’t underestimate the value of dedicated hardware in personal devices that can process AI tasks locally.
- There’s a danger of things becoming too “uniform”. If everything is run by the same AI, the nuances that are so important for society to evolve and have mature discussions will be flattened away. I already see this issue with social media content algorithms, and it’s going to get worse with AIs.
- Censorship at the whim of AI companies will divide society into people who know how to train their own models and those who don’t.
- AI is kind of a buzzword incorporating many things. But the bottom line is: “This thing slurps up all this data and now comes up with new data based on it, but I’m not a hundred percent sure how it works.”
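  To make that “slurps up data, produces new data” idea concrete, here is a deliberately tiny toy: a character-level bigram model. It is nothing like a modern neural network — just frequency counting — but it shows the same basic loop of learning statistics from existing data and then sampling “new” data from them. All names (`train_bigrams`, `generate`, the example corpus) are made up for this sketch.

  ```python
  import random

  def train_bigrams(text):
      """'Slurp up' the data: record which character follows which."""
      model = {}
      for current, following in zip(text, text[1:]):
          model.setdefault(current, []).append(following)
      return model

  def generate(model, start, length, seed=0):
      """Produce 'new' data by repeatedly sampling a plausible next character."""
      rng = random.Random(seed)
      out = [start]
      for _ in range(length):
          followers = model.get(out[-1])
          if not followers:  # dead end: no character ever followed this one
              break
          out.append(rng.choice(followers))
      return "".join(out)

  corpus = "the model learns patterns from the data it was fed"
  model = train_bigrams(corpus)
  print(generate(model, "t", 40))
  ```

  The output is gibberish that merely *resembles* the corpus, which is exactly the point: the quality of what comes out is a function of how much data goes in and how sophisticated the model is, and the “not a hundred percent sure how it works” part grows with both.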
- Regulation will be painful, particularly for open source. We cannot prohibit people from compiling their own AIs and feeding them data; security through obscurity doesn’t work. The technology will inevitably get out, and we need to embrace it. Maybe it’ll just come down to resources. In theory, everyone can learn how to build a nuclear bomb, too, but at-home uranium enrichment is just too resource-intensive. This means the really powerful AIs will only be available to, and controlled by, organizations with a lot of resources (for better or worse). Over time this will shift, though, because processing power available to consumers steadily increases (whereas the resources required to build centrifuges haven’t changed as much since the 1950s).