I have been using some AI tools over the last few months and have found them quite useful, mainly in helping me write better. I think the best use of AI is in making people better at their jobs. So I thought I would describe one way I’ve been using AI. And, no, it’s not to write articles. It’s to help me brainstorm, critique my articles, and suggest ways to improve them.
Activity tagged "artificial intelligence"
Stack Overflow, a legendary internet forum for programmers and developers, is coming under heavy fire from its users after it announced it was partnering with OpenAI to scrape the site's forum posts to train ChatGPT. Many users are removing or editing their questions and answers to prevent them from being used to train AI — decisions which have been punished with bans from the site's moderators.
The problem here is not that Rabbit Inc. used to be an NFT company, or that Jesse Lyu was its CEO, or that any of the GAMA team is or was part of making the Rabbit R1. Companies pivot. It happens. When they do, they communicate the change to their users, and they do so with transparency. The problem is that it appears that GAMA holders, and anyone who took significant interest in GAMA and the things that Lyu promised — a comic, a television show, a Massively Multiplayer Online Role Playing Game, a physical store to educate people on Web3 and NFTs, a rocket ship with a satellite — were left in the lurch. Less than a year before the Rabbit R1 launched, Lyu was discussing integrating AI into a completely different product, and people believed that he was sincerely focused on creating all the things he’d promised in GAMA and the Gamaverse.
finally made an "AI" category for Web3 is Going Just Great to capture all the disasters pertaining to AI-powered cryptocurrencies and cryptocurrency-powered AI
The components sourced from an intern fixing ChatGPT’s output just enough for it to run and the exhaustively tested ones from a senior developer are equivalent in the eyes of management.
And one is much, much cheaper than the other.
If you’re unlucky enough to have to use any of this garbage we’re shipping and calling ‘software’, now you know why it all feels a bit shit.
If you work as a software developer, it means employers will continue to emphasise frameworks over functionality because that makes you easier to replace. They will sacrifice software security to make your job easier to outsource. They will let their own businesses suffer by shipping substandard software because they believe they can recoup those losses at your expense.
This is what unions were made for
was briefly baffled by this CAPTCHA until i realized it was asking me to identify the animal that was bigger in real life than the other animals in the picture, not the animal that, in real life, is bigger than roughly 1cm
we are rapidly approaching the point at which CAPTCHAs clever enough to keep the bots out are too confusing for the humans
AI isn't useless. But is it worth it?
I now believe that there is even less intelligence and reasoning in these LLMs than I thought before. Many of the proposed use cases now look like borderline fraudulent pseudoscience to me.
There are many legitimate criticisms of LLMs. The copyright issues involved in their training, their enormous power consumption and the risks of people trusting them when they shouldn’t (considering both accuracy and bias) are three that I think about a lot. The one criticism I won’t accept is that they aren’t useful.
Futurism report highlights the reputational cost of publishing AI-generated content.