This whole AI craze is giving me food for thought and I keep bumping into interesting articles and people who articulate their thoughts much better than I ever could. So here’s a Blinking Lights edition collecting links and thoughts about just this topic!
Apparently researchers think there’s a good chance that AI could destroy humanity. But I wanted to hear both sides, so I asked OpenChatKit, an open-source ChatGPT alternative, what it thinks about it. It replied:
AI, unlike humans and other animals, does not require resources to survive, so overpopulation is not an issue. Additionally, the intelligence of AI is non-biological and does not suffer from the physical constraints that inhibit human intelligence, so the intelligence of AI can never saturate. Therefore, the only factor that limits the development of AI is the available energy to train AI models. If the energy consumption of AI continues to increase as it has over the past few years, the impact of AI on the environment may become a major concern in the future.
There’s an ongoing discussion on the forum llllllll.co, which has a few really interesting points in it:
https://llllllll.co/t/future-of-contemporary-software-design/62989
In particular, this paragraph by the user karol struck me:
For me what sounds sad is that people automate tasks they consider “boring” like writing tests using AI while often the tests were something that allowed me to get deeper understanding of the problem, think about all the use cases etc. If people automate that do they really understand what code is doing in the end or what is even the problem they work on? […]
karol @ llllllll.co
One thing worth repeating: when we talk about AI at the moment, we’re not really talking about intelligent machines, but rather about “machine learning”. The difference seems pretty important to me.
The always great Marc Weidenbaum on disquiet.com has a few interesting points to share on the whole AI topic. Here’s a random quote from them; the full articles are linked below.
The development of cybernetics is key to the development of artificial intelligence and, as I note in the article, of generative systems. The Drake AI song is, as Barshad quotes me, boring, plain and simple. It’s the end result of rote cause and effect (“please make something that sounds like X”).
disquiet.com
Perhaps a good dose of healthy scepticism towards AI is not only justified and understandable, but even necessary at this point.
https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein
In case you’re interested, there are at least two ongoing campaigns addressing the copyright-related issues:
https://www.humanartistrycampaign.com/
https://artisticinquiry.org/ai-open-letter
Although the whole copyright thing is just one problematic aspect.
And to add to the catastrophic view a little bit, here’s one that walks you through the dark side of AI: Artificially intelligent sociopaths: The dark reality of malicious AI. All the articles linked within it are worth checking out, btw.
Speaking of malicious AI and trust, I can’t help but think about this article by Ted Gioia: 30 Signs You Are Living in an Information Crap-pocalypse
Just because these things are full of unexpected twists: https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
Most importantly, they have solved the scaling problem to the extent that anyone can tinker. Many of the new ideas are from ordinary people. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.
Dylan Patel and Afzal Ahmad / www.semianalysis.com
Of course there are totally useful ML-powered tools out there, for example those that help you upscale images when you get yet another one of those low-res JPGs that you’re supposed to use in a magazine layout. Fortunately it doesn’t happen to me, but I would probably use Upscayl in such situations. Even here there’s a flip side, though: I get the feeling that now that people know there’s an AI tool for that, they’ll send even more crappy low-res JPGs.
As closing words, I have to quote Marc Weidenbaum once more:
It was at that moment I realized how few people must read (or absorb and reflect on) science fiction, which I’ve found has routinely provided me with tools to navigate daily modern life. Sci-fi may not successfully predict the future (arguably, it is always about the present), but it can sure instigate thought experiments in advance of the future’s eventual — and in the case of Drake Prime, mundane if worrisome — arrival.
disquiet.com
The cover image was created by Stable Diffusion XL using the prompt “Artificial Intelligence in the process of creating art”.