There are a variety of reactions to what has been happening in AI lately.
Some of them are depressing. All of the people who will not say the sun comes up until Microsoft Teams or Office says it does are doing the same thing with AI. They think it is all about OpenAI and ChatGPT. MS owns part of OpenAI. LLMs are one of the biggest changes to come along in technology in a decade. We should not pass up this chance to prevent the company that has made technology suck for the past three decades from having any more influence in our lives or society [Note 1].
Maybe I should not be shocked by the immensity of human stupidity, but I do not understand why people are not grabbing this chance to get Microsoft out of our lives with both hands.
I see this a lot at my employer [Note 2]. There the unholy trinity is OpenAI, ChatGPT, and Microsoft Azure. There are a lot of higher-ups who just push whatever garbage vendors are trying to sell, regardless of whether or not any of the companies that we are supposed to be helping (aka “paying clients”) want any of it. I do not understand why these people are paid lots of money to look at “new technology”, only to turn around and say “More Oracle! More SAP! More Microsoft!” A few years ago, a lot of them were pushing blockchain, which has gone nowhere. And none of these block-chumps admit that they were wrong about that. [Note 3]
Now these shysters are pushing the metaverse, even though literally the only person on the planet who wants it is Mark Snakerberg. One reason I think he wants it is that if they can get you to go to their site with their gear, then they own the whole experience. Right now you have to use a browser on a PC or an app on a phone to use Facehook, and Meta does not own an OS or a browser. But that is not anybody else’s problem. I think the other reason is that he is so stiff and robotic he is the only person who has an avatar that looks more human than he does, even without legs. Here is an article and discussion on Slashdot about retailers dumping the metaverse. I can see retailers using the metaverse: people might want to try out new clothes without actually changing several times. If retailers and Disney do not want the metaverse, then it really is dead. I have heard the headsets are too heavy. Do you want something wireless surrounding your entire head? (Hey, maybe that’s why Snakerberg keeps throwing money at something nobody else wants.)
I wonder if he is regretting giving Sandberg the boot.
Even the Emacs community is gung-ho about OpenAI and their products. I noticed that Sacha Chua started a section for AI in her weekly Emacs news posts. The first AI mention I could find was for a GPT package on 2022-11-21 (as of 2023-04-24, it looks like that package is for OpenAI models only). The section started showing up on 2022-12-12, on and off until February, and then consistently since then. There are a few packages that say they will incorporate other LLMs as more are available. Most of the AI packages are just more wrappers around ChatGPT. There are a few posts on the subreddit asking about open source alternatives, and one about OpenAssistant (which I mention below). The posts are here, here (with a response from Irreal here) and here. I know that OpenAI’s products are more mature than others at the moment, but it seems like even the open source crowd is going all-in on the billion-ai-res’ shiny object. [Note 4]
It will be interesting to see how the corrupt-o-currency crowd reacts to AI. A lot of people insisted they were in corrupt-o-currency “for the technology”. I think AI will show us whether those people are really interested in technology, or whether they are stupid, or grifters. Now we have something that really IS interesting. Corrupt-o-currency tech was not that interesting. Increment a number going into a hash function until your output starts with a particular number of zeroes. That is really it. Immutability is not the default in a lot of programming languages, and it can make data easier to manage and reason about, but it is not unique to corrupt-o-currency. And it’s a pretty dumb reason to fool yourself.
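Just to show how little is going on there, here is a toy Python sketch of that loop, counting leading zeroes in the hex digest of SHA-256; the block header is made up, and this illustrates the idea rather than any real coin’s code:

import hashlib

def mine(block_data: str, difficulty: int) -> int:
    # Increment a nonce until the hash starts with `difficulty` leading zero hex digits.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Example: find a nonce for a dummy block header with 4 leading zeroes.
print(mine("dummy block header", 4))

That is the celebrated “technology”.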
Blockchain was a stalking horse for bitcon that promised a glorious future that never arrived. Every time I read or watched anything about blockchain, it was always vendors pushing products, and there were never any user testimonials. Contrast that with AI, where we see users actually trying it out. There are a lot of people talking about AI who are not trying to sell you something, while blockchain/NFT/bitcon was nothing but grifting. We do not see vendors going on about “someday”, or AI bros saying, “Have fun staying dumb.” We do see people implementing models to compete with ChatGPT (more on that below).
A lot of corrupt-o-currency advocates say it’s still early days for their magic beans. They have been saying that for ten years, and there have been a LOT of people pushing it and trying to find a use for it. They might counter that this is not the first time people thought AI was going to change the world, and that there were a couple of AI winters (1974–1980 and 1987–1993), so could there be a corrupt-o-currency winter as well? One difference is that nobody went to jail for AI fraud. There were no “rug pulls” in AI back in the day. AI was a series of attempted solutions that, until recently, never quite solved the problems they were aimed at. Block-coin was a technology looking for a problem to solve (besides money laundering and selling drugs). The whole concept of “digital assets” makes no sense at all. They do not have income streams like companies do, and unlike commodities, they have no intrinsic use beyond financial transactions. It is like someone found a way to combine Beanie Babies and coin flips and people decided to start gambling on them. Aside from helping pedophiles launder North Korean drug money, none of this serves any purpose.
AI has been around for a while, but I think that it was the release of ChatGPT 3.5 in November that really changed things. The first time we talked about ChatGPT at the EmacsATX meetings was in December.
Right now AI and LLMs are en fuego. Not everyone is just pushing incumbent vendors and their products. There are a lot of projects working on open source LLMs. I wrote about the idea that people might try to run LLMs locally, and there are projects working on that.
Some of them are based on Facebook’s LLaMA model, so they cannot be used for commercial purposes. One is llama.cpp. There is one in Golang called llama.go. Another Golang project is LocalAI. One goal of these projects is to be able to run on a CPU, instead of needing a large cluster of GPUs. There is also a subreddit dedicated to running LLaMA locally. Some experts think that making models with more parameters is not the way to make progress, and that algorithms are where progress will occur. Will this mean that the need for GPUs will plateau, and CPUs will become more important in AI? I don’t know. I tried using some model files from Hugging Face that are based on LLaMA, like Stanford’s Alpaca, but so far I have not been able to get anything working with that one. I think it only works with the original LLaMA files.
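As a rough illustration of what running one of these locally looks like, here is a minimal sketch using the llama-cpp-python bindings to llama.cpp; the model path and parameters are placeholders, and you still need a quantized model file you are actually allowed to use:

# pip install llama-cpp-python  (Python bindings for llama.cpp)
from llama_cpp import Llama

# Placeholder path: point this at a quantized model file converted for llama.cpp.
llm = Llama(model_path="./models/ggml-model-q4_0.bin", n_ctx=512)

# Plain completion-style prompt; the response comes back in an OpenAI-like dict.
output = llm("Q: What is Emacs? A:", max_tokens=64, stop=["Q:"])
print(output["choices"][0]["text"])

This runs entirely on the CPU, which is the whole point of these projects.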
There is a project called RedPajama from Together.xyz to build a completely open source counterpart to the LLaMA model. Their plan is to have something that can be run on a consumer-grade GPU.
An interesting project is GPT4All. This project can be run locally. It is made by a company called Nomic.AI; their main product makes visual representations of AI datasets. I found out about it from Matthew Berman’s YouTube channel. It uses an older model from EleutherAI called GPT-J (Hugging Face page here, Wikipedia page here). I am part of the Discord for this project, so I will keep an eye on it. The CEO of Nomic was interviewed by Matthew Berman, and he talked about how they went through a lot of effort to get a dataset that is completely open source and can be used for research and/or commercial purposes. He said that he thinks OpenAI has a lot of proprietary data in their dataset, partially due to how they created it, partially due to people uploading their own internal data into it. He predicts there will be a lot of lawsuits over AI data for years.
I got GPT4All to work locally, but I did get an error the first time: /bin/chat: error while loading shared libraries: libxcb-cursor.so.0: cannot open shared object file: No such file or directory. A Google search led me to an issue on their GitHub repo. I did not need to run all of those commands; I only needed to run this one: apt install libxcb-cursor0.
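If you would rather skip the chat GUI, Nomic also has Python bindings. Here is a minimal sketch, with the caveat that the model file name is just an example and the API has been changing between releases, so check their documentation:

# pip install gpt4all  (Nomic's Python bindings; API details vary by version)
from gpt4all import GPT4All

# Example model name; the library downloads the file on first use if it is not already cached.
model = GPT4All("ggml-gpt4all-j-v1.3-groovy.bin")

print(model.generate("Explain what a large language model is, in one sentence.", max_tokens=64))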
Another project is OpenAssistant, made by a non-profit in Germany called LAION (Large-scale Artificial Intelligence Open Network, pronounced like the large feline “lion”) (GitHub page here, Hugging Face page here). I think the goal is not only to use their dataset to respond to prompts, but to be able to retrieve external data (like Google search results) in real time. They want to release something that can be run locally, but they want the main interface to be their website. One of the ways they developed their dataset is by having humans ask questions, other people answer questions, and still more people rate the questions and the answers; one person could perform all three roles for different questions.
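Their released conversation data is on Hugging Face (linked above), so you can poke at the raw question/answer trees yourself. Here is a quick sketch of loading it with the datasets library; the dataset id and field names are my best guess as of this writing, so check their Hugging Face page if they do not line up:

# pip install datasets  (pull the released OpenAssistant conversations from Hugging Face)
from datasets import load_dataset

oasst = load_dataset("OpenAssistant/oasst1", split="train")

# Each row is one message in a conversation tree: prompts and replies written by volunteers,
# linked to their parents by id and rated by other volunteers.
example = oasst[0]
print(example["role"], "->", example["text"][:120])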
I found out about this from the YouTube channel of one of the leaders of the project named Yannic Kilcher. At first I did not think he was a real person but an AI-generated avatar because there are some jumps and jerks in his speech. Then I looked at a few live streams, and I think that for his videos he edits out all the filler words and pauses. I am in their Discord and will keep an eye on this.
Another open source project is StableLM from Stability AI. They made the image generator Stable Diffusion. I thought they were all about images until their LLM came out. Right now I do not know too much about this one.
One possible clue to what might happen is a science fiction project called Orion’s Arm (site here, Wikipedia page here). Unlike most sci-fi, there are no humanoid aliens. There are baseline humans, humans augmented with various levels of technology, and AIs of various levels, up to almost god-like omniscience. Some people have put some thought into how humans will live with AI beyond just “AI will kill us all.” Interestingly, some of the images on that site are made with the Midjourney AI.
One thing about all of this that depresses me is I never got the chance to work with technologies that really interest me. When I read about or talk to Lisp and Smalltalk developers of yore, it seems like there was an age of heroes when gods and men strode the earth together. Now the world is full of pinheads pushing vendor nonsense, and people too stupid and too lazy to use something not made by Microsoft. Let’s get this company out of our lives forever.
Another thing that depresses me is I bet all the alternative medicine woo-woo people are not worried about their jobs.
You’re welcome.
Note 1: If you think OpenAI is actually open because it has the word “open” in it, you are probably the same sort of person who thinks SharePoint is good for sharing because it has the word “share” in it. Do us all a favor: go sit in a rocking chair right now and get all your nutrients through a straw. You are about as useful to society as someone already in that state.
Note 2: I do not speak for my employer, nor do they endorse this blog, as per the About page. Per the Disclaimer page: if any of this bothers you, go jump off a cliff.
Note 3: I don’t want anybody to think I have nothing good to say about my employer. Granted, I am currently making a living with a proprietary software tool that I really do not like. When I joined I knew nothing about it, and they paid me while they trained me how to use it. A lot of companies would never do that. And there were a couple of times in the past decade when someone found something else for me to do while things were slow. It is a big company. I think it has more than half a million employees. There are some smart people there, and frankly some dumb ones. It seems like too many decisions are made for the wrong reasons: inertia, the sunk cost fallacy, and this-is-what-the-vendor-told-us-to-do. Why can’t these companies founded by billionaires do their own marketing?
The company does actually do some useful things. They do a lot of work for various governments and large corporations around the world, entities that actually make products and services people use. Maybe startups have more interesting technology, but I never bought into the whole startup religion here in Austin. There are a lot of people who seem to want to work for startups just because they are startups. Ask them what their company does or why anyone should care, and a lot of them do not seem to like the question and sometimes do not have an answer. There were a few people that I would see at some meetups maybe once a year. Every time I saw them they were at a different company that I had never heard of. They all just seemed to fade away. Maybe they made more money than me, but what are they really doing for the world? If any of those companies are still around, they are just leaving messes for someone else to clean up.
Note 4: I do not want any of this to be interpreted as disparaging Sacha Chua or all the things she does for the Emacs community. I think her posts simply reflect what is happening in the Emacs community. There are some people in the Emacs community who are as leery of AI consolidation as I am.
Image from The Codex of Fernando I and Doña Sancha, aka Beatus Facundus, an 11th century manuscript of ‘Commentary on the Apocalypse’, written in the 8th century by Beatus of Liébana; manuscript created at the monastery of Santo Toribio de Liébana (Wikipedia page here), currently housed at the National Library of Spain; manuscript information here. Image from World Document Library, licensed under CC BY-NC-SA 4.0.