I Will Be Buying a New Laptop Soon

Around tax day the file system on my laptop started going into read-only mode. I followed the directions I used the last time this happened, back in 2016.

That trick worked a few times. After the first time, I got an external hard drive from Best Buy and saved everything I could onto it. I already had an external hard drive, but I decided to get another one; the easiest way to manage risk is redundancy and spare capacity. After a few more rounds of fsck, the machine stopped booting up.

I will go with another machine from System76. I have been happy with my Meerkat, and I think it is important to support a Linux vendor. I asked on the System76 subreddit about using a laptop for AI work, and the consensus is that AI work should be left to desktops. I will probably get a system that can hold 64 GB of memory.

I thought about getting another system from Discount Electronics, but they do not always have systems that can handle 64 GB of memory.

I did look again at Star Labs Systems, but it would take at least two months to get a decent machine. Maybe they need time to access the Chip Force (perhaps they should call the STAR Labs on Earth Prime). Even a machine from System76 will take two weeks. So far I have had no issues with my Meerkat, but I prefer having two systems.

A few years ago I was at a meetup talking to another member, and we started talking about what systems would be like in the future. I pointed out that all the Apple fanbois thought, when the iPad came out, that nobody would be buying desktops or laptops in five to ten years; obviously that prediction turned out to be wrong [Note 1]. He thought that we might just plug our phones into ports. I think it would be neat if there were stations at companies, hotels, conferences, and elsewhere set up for minis, like the Meerkat, the Librem Mini, systems by SimplyNUC, or the Focus NX [Note 2]. There could be a power outlet, with a monitor, mouse, keyboard, and other peripherals accessible via cables and a USB port. A mini would have more capacity than a phone, and it could still fit in a bag. I thought a mini could be inserted into a slot, but that might cause heat issues. The user would have to have a decent firewall.

I remember lugging around a massive desktop for a project at school around 2000 or 2001. The fact that these small systems are beyond what was available then is amazing.

I just have to get over being depressed about spending a lot of money before I make the purchase.

You’re welcome.

Note 1: It is amazing how many Apple users will pride themselves for being skeptical of Microsoft and mock MS fanbois for not having thoughts that do not come from MS, but then turn around and will not say whether or not the sun is out unless their iJunk tells them it is. Let’s not forget: Blow Jobs criticized MS for their lack of taste, not their business practices.

Note 2: Is the Focus NX the first Warp 5 mini?

Image from Grec 1208, a 12th-century manuscript housed at Bibliothèque nationale de France; image from Gallica BnF, assumed allowed under public domain.

Reactions To Recent AI Developments

There are a variety of reactions to what has been happening in AI lately.

Some of them are depressing. All of the people who will not say the sun comes up until Microsoft Teams or Office says it does are doing the same thing with AI. They think it is all about OpenAI and ChatGPT. MS owns part of OpenAI. LLMs are one of the biggest changes to come along in technology in a decade. We should not pass up this chance to prevent the company that has made technology suck for the past three decades from having any more influence in our lives or society [Note 1].

Maybe I should not be shocked by the immensity of human stupidity, but I do not understand why people are not grabbing this chance to get Microsoft out of our lives with both hands.

I see this a lot at my employer [Note 2]. There the unholy trinity is OpenAI, ChatGPT, and Microsoft Azure. There are a lot of higher-ups who just push whatever garbage vendors are trying to sell, regardless of whether or not any of the companies that we are supposed to be helping (aka “paying clients”) want any of it. I do not understand why these people are paid lots of money to look at “new technology”, only to turn around and say “More Oracle! More SAP! More Microsoft!” A few years ago, a lot of them were pushing blockchain, which has gone nowhere. And none of these block-chumps admit that they were wrong about that. [Note 3]

Now these shysters are pushing the metaverse, even though literally the only person on the planet who wants it is Mark Snakerberg. One reason I think he wants it is that if they can get you to go to their site with their gear, then they own the whole experience. Right now you have to use a browser on a PC or an app on a phone to use Facehook, and Meta does not own an OS or a browser. But that is not anybody else’s problem. I think the other reason is that he is so stiff and robotic he is the only person whose avatar looks more human than he does, even without legs. Here is an article and discussion on Slashdot about retailers dumping the metaverse. I can see retailers using the metaverse: people might want to try out new clothes without actually changing several times. But if retailers and Disney do not want the metaverse, then it really is dead. I have heard the headsets are too heavy. Do you want something wireless surrounding your entire head? (Hey, maybe that’s why Snakerberg keeps throwing money at something nobody else wants.)

I wonder if he is regretting giving Sandberg the boot.

Even the Emacs community is gung-ho about OpenAI and their products. I noticed that Sacha Chua started a section for AI in her weekly Emacs news posts. The first AI mention I could find was for a GPT package on 2022-11-21 (as of 2023-04-24, it looks like that package is for OpenAI models only). The section started showing up on 2022-12-12, on and off until February, and then consistently since then. There are a few packages that say they will incorporate other LLMs as more are available. Most of the AI packages are just more wrappers around ChatGPT. There are a few posts on the subreddit asking about open source alternatives, and one about OpenAssistant (which I mention below). The posts are here, here (with a response from Irreal here) and here. I know that OpenAI’s products are more mature than others at the moment, but it seems like even the open source crowd is going all-in on the billion-ai-res’ shiny object. [Note 4]

It will be interesting to see how the corrupt-o-currency crowd reacts to AI. A lot of people insisted they were in corrupt-o-currency “for the technology”. I think AI will show us whether those people are really interested in technology, or whether they are stupid, or grifters. Now we have something that really IS interesting. Corrupt-o-currency tech was not that interesting: increment a number going into a hash function until your output starts with a particular number of zeroes. That is really it. Immutability is not the default in a lot of programming languages, and it can make data easier to manage and reason about, but it is not unique to corrupt-o-currency. And it’s a pretty dumb reason to fool yourself.
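That hash-incrementing loop is simple enough to sketch in a few lines of Python (a toy illustration of the idea, not real mining; actual Bitcoin hashes a block header with double SHA-256 and compares against a difficulty target):

```python
import hashlib

def mine(data: str, zeros: int) -> int:
    """Toy proof-of-work: find a nonce whose SHA-256 hex digest
    starts with `zeros` zeroes."""
    prefix = "0" * zeros
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

# Each extra required zero multiplies the expected work by 16.
nonce = mine("some block data", 4)
print(nonce, hashlib.sha256(f"some block data{nonce}".encode()).hexdigest())
```

That is the whole trick: the work is in finding the nonce, and anyone can verify the answer with a single hash.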

Blockchain was a stalking horse for bitcon that promised a glorious future that never arrived. Every time I watched or read anything about blockchain, it was always vendors pushing products, and there were never any user testimonials. Contrast that with AI: with AI we see users actually trying it out. There are a lot of people talking about AI who are not trying to sell you something, while blockchain/NFT/bitcon was nothing but grifting. We do not see vendors going on about “someday”, or AI bros saying, “Have fun staying dumb.” We do see people implementing models to compete with ChatGPT (more on that below).

A lot of corrupt-o-currency advocates say it’s still early days for their magic beans. They have been saying that for ten years, and there have been a LOT of people pushing it and trying to find a use for it. They might counter that this is not the first time people thought AI was going to change the world, and there were a couple of AI winters (1974–1980 and 1987–1993), so could there be a corrupt-o-currency winter as well? One difference is that nobody went to jail for AI fraud. There were no “rug pulls” in AI back in the day. AI was a series of attempted solutions that until recently never solved the problems they were aimed at. Block-coin was a technology looking for a problem to solve (besides money laundering and selling drugs). The whole concept of “digital assets” makes no sense at all. They do not have income streams like companies do, and they have no intrinsic use beyond financial transactions like commodities do. It is like someone found a way to combine Beanie Babies and coin flips and people decided to start gambling on them. Aside from helping pedophiles launder North Korean drug money, none of this serves any purpose.

AI has been around for a while, but I think that it was the release of ChatGPT (running on GPT-3.5) in November that really changed things. The first time we talked about ChatGPT at the EmacsATX meetings was in December.

Right now AI and LLMs are en fuego. Not everyone is just pushing incumbent vendors and their products. There are a lot of projects working on open source LLMs. I wrote about the idea that people might try to run LLMs locally, and there are projects working on that.

Some of them are based on Facebook’s LLaMA model, so they cannot be used for commercial purposes. One is llama.cpp. There is one in Golang called llama.go, and another Golang project called LocalAI. One goal of these projects is to be able to run on a CPU, instead of needing a large cluster of GPUs. There is also a subreddit dedicated to running LLaMA locally. Some experts think that making models with more parameters is not the way to make progress, and that algorithms are where progress will occur. Will this mean that the need for GPUs will plateau, and CPUs will become more important in AI? I don’t know. I tried using some model files from Hugging Face that are based on LLaMA, like Stanford’s Alpaca, but so far I have not been able to get anything working; I think it only works with the original LLaMA files.

There is a project called Red Pajama from Together.xyz to build a completely open source counterpart to the LLaMA model. Their plan is to have something that can be run on a consumer grade GPU.

An interesting project is GPT4All. This project can be run locally. It is made by a company called Nomic.AI; their main product makes visual representations of AI datasets. I found out about it from Matthew Berman’s Youtube channel. It uses an older model from EleutherAI called GPT-J (Hugging Face page here, Wikipedia page here). I am part of the Discord for this project, so I will keep an eye on this. The CEO of Nomic was interviewed by Matthew Berman, and he talked about how they went through a lot of effort to get a dataset that is completely open source and can be used for research and/or commercial purposes. He said that he thinks OpenAI has a lot of proprietary data in their dataset, partially due to how they created it, and partially due to people uploading their own internal data into it. He predicts there will be lawsuits over AI data for years.

I got GPT4All to work locally, but I did get an error the first time: /bin/chat: error while loading shared libraries: libxcb-cursor.so.0: cannot open shared object file: No such file or directory. A Google search led me to an issue on their Github repo. I did not need to run all those commands, I only needed to run this one: apt install libxcb-cursor0.

Another project is OpenAssistant, made by a non-profit in Germany called LAION (Large-scale Artificial Intelligence Open Network, pronounced like the large feline “lion”) (Github page here, Hugging Face page here). I think the goal is not only to use their dataset to respond to prompts, but to be able to retrieve external data (like Google search results) in real time. They want to release something that can be run locally, but they want the main interface to be their website. One of the ways they developed their dataset was by having humans ask questions, other people answer them, and still more people rate the questions and the answers; someone could perform all three roles for different questions.

I found out about this from the YouTube channel of one of the leaders of the project named Yannic Kilcher. At first I did not think he was a real person but an AI-generated avatar because there are some jumps and jerks in his speech. Then I looked at a few live streams, and I think that for his videos he edits out all the filler words and pauses. I am in their Discord and will keep an eye on this.

Another open source project is StableLM from Stability AI. They made the image generator Stable Diffusion. I thought they were all about images until their LLM came out. Right now I do not know too much about this one.

One possible clue to what might happen is a science fiction project called Orion’s Arm (site here, Wikipedia page here). Unlike most sci-fi, there are no humanoid aliens. There are baseline humans, humans augmented with various levels of technology, and AIs of various levels, up to almost god-like omniscience. Some people have put some thought into how humans will live with AI beyond just “AI will kill us all.” Interestingly, some of the images on that site are made with the Midjourney AI.

One thing about all of this that depresses me is I never got the chance to work with technologies that really interest me. When I read about or talk to Lisp and Smalltalk developers of yore, it seems like there was an age of heroes when gods and men strode the earth together. Now the world is full of pinheads pushing vendor nonsense, and people too stupid and too lazy to use something not made by Microsoft. Let’s get this company out of our lives forever.

Another thing that depresses me is I bet all the alternative medicine woo-woo people are not worried about their jobs.

You’re welcome.

Note 1: If you think OpenAI is actually open because it has the word “open” in it, you are probably the same sort of person who thinks SharePoint is good for sharing because it has the word “share” in it. Do us all a favor: go sit in a rocking chair right now and get all your nutrients through a straw. You are about as useful to society as someone already in that state.

Note 2: I do not speak for my employer, nor do they endorse this blog, as per the About page. Per the Disclaimer page: if any of this bothers you, go jump off a cliff.

Note 3: I don’t want anybody to think I have nothing good to say about my employer. Granted, I am currently making a living with a proprietary software tool that I really do not like. When I joined I knew nothing about it, and they paid me while they trained me how to use it. A lot of companies would never do that. And there were a couple of times in the past decade when someone found something else for me to do while things were slow. It is a big company. I think it has more than half a million employees. There are some smart people there, and frankly some dumb ones. It seems like too many decisions are made for the wrong reasons: inertia, the sunk cost fallacy, and this-is-what-the-vendor-told-us-to-do. Why can’t these companies founded by billionaires do their own marketing?

The company does actually do some useful things. They do a lot of work for various governments and large corporations around the world. Entities that actually make products and services people use. Maybe startups have more interesting technology, but I never bought into the whole startup religion here in Austin. There are a lot of people who seem to want to work for startups just because they are startups. Asking them what their company does or why anyone should care is a question a lot of them do not seem to like and sometimes do not have an answer for. There were a few people that I would see at some meetups maybe once a year. Every time I saw them they were at a different company that I had never heard of. They all just seemed to fade away. Maybe they made more money than me, but what are they really doing for the world? If any of those companies are still around, they are just leaving messes for someone else to clean up.

Note 4: I do not want any of this to be interpreted as disparaging Sacha Chua or all the things she does for the Emacs community. I think her posts simply reflect what is happening in the Emacs community. There are some people in the Emacs community who are as leery of AI consolidation as I am.

Image from The Codex of Fernando I and Doña Sancha, aka Beatus Facundus, an 11th-century manuscript of ‘Commentary on the Apocalypse’, written in the 8th century by Beatus of Liébana; manuscript created at the monastery of Santo Toribio de Liébana (Wikipedia page here), currently housed at the National Library of Spain; manuscript information here. Image from World Document Library, licensed under CC BY-NC-SA 4.0.

Alternatives to Org (that I will not be using)

One of the items on my ever-growing to-do list in Org Mode was to look at other outlining software. I think I started the list with the intent of trying these programs out, but as it grew I just kept track of new ones as I came across them. I have decided to abandon the list: I think Org and Emacs will fit whatever needs I have now and in the future, and I kept finding more alternatives to Org (either as outliners or for to-do lists) faster than I could ever try them. Here I will just list what I found.

  • Leo Editor
  • Todo.txt
    • Main page.
    • Neil Van Dyke’s page on Todo.txt. He also has a page on Emacs packages he has written.
    • Emacs mode for Todo.txt.
  • TaskWarrior
    It looks like this requires a separate server to run.

  • A Plain Text Personal Organizer. This is more of a system, and there does not seem to be an application.
  • [x]it (I guess pronounced like “exit”)
    I looked at the syntax guide for about a second, and decided I did not need to read any further. With Org, you can set the status of an item with a word (like “TODO”, “INPROGRESS”, “CANCELLED”, “DONE”); I think you can add your own if you want. Using punctuation for that is inefficient. I do not need yet another thing to remember. I prefer thinking in words rather than symbols, and I prefer context being somewhere other than in my head.
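For what it’s worth, here is roughly what word-based states look like in an Org config (a minimal sketch; the keyword names are just the examples from above, and you can pick your own):

```elisp
;; Sketch of custom Org TODO states; the keywords are examples only.
;; States before the "|" are unfinished; states after it count as done.
(setq org-todo-keywords
      '((sequence "TODO" "INPROGRESS" "|" "CANCELLED" "DONE")))
```

With that in place, cycling a headline’s state with C-c C-t walks through those words instead of punctuation.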

One of the Hacker News discussions on xit had a comment that made me realize there was not much point in keeping this list:

If you want a plain text organizational, compositional and scheduling tool that you can use for the rest of your life and know that 30 years down the line it will be actively supported, developed and you will be able to tweak anything you want…. emacs/org mode is by far and away the best choice. It isn’t even remotely close, we are talking about the difference between a planet and a tiny asteroid when you compare org mode to other plain textish organizational, compositional and scheduling tools.

For as long as humanity doesn’t collapse and probably even after it does org mode and emacs will be used (there is going to be some nerd somewhere using org mode and ledger cli to meticulously track how many smoked rats and cockroach kebobs they have left to eat before they have to leave their bunker), there is just such an intense critical mass of utility under an open source license.

WRT outliners: I do not create plain text files anymore. I just make everything an Org file, and collapse parts that I am not interested in looking at in any given moment. When you have a tool that can do anything, why use anything else?

You’re welcome.

Image from an 11th century manuscript made at the monastery of St. Gall, housed at the Jagiellonian Library (Wikipedia page here), image from e-Codices, assumed allowed under CC BY-NC 4.0.

2023-04 Austin Emacs Meetup

There was another meeting a couple of weeks ago of EmacsATX, the Austin Emacs Meetup group. For this month we had no predetermined topic. However, as always, there was mention of new modes, packages, technologies and websites that I had never heard of, and some of this may be of interest to you as well.

#1 was one of the organizers; he used to live in Austin and now lives in East Texas.
#2 was the developer in north Texas.
#3 was not here. (I might give The Esteemed Gentleman of Oklahoma a permanent number.)
#4 was a hardware designer in north Texas near Dallas.
#5 a developer in Australia.
#6 did not speak much.
#7 was the other organizer, formerly working for the City of Austin.
#8 was the devops engineer from the company that makes quantum computers from lasers.
#9 was our professor in OKC.

In a change to the format, here is a list of the modes and packages that were mentioned (I will not list the big ones here, like Org, Doom, Spacemacs):

Non-Emacs Topics:

#1 got us started. He helped someone at his job with Base64 encoding and decoding for OAuth. He may have sold someone on trying Emacs. He does not evangelize too often since Emacs does require a commitment. However, #1 mentioned this person was a ham radio operator, so perhaps the Church of Emacs will save another soul.
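For anyone curious what that Base64 step looks like, here is a sketch in Python (the client ID and secret are made-up values; this is the HTTP Basic scheme many OAuth token endpoints accept):

```python
import base64

# Hypothetical credentials for illustration only.
client_id = "my-client-id"
client_secret = "my-client-secret"

# Many OAuth token endpoints expect "Basic base64(id:secret)".
token = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
print(f"Authorization: Basic {token}")

# Decoding reverses it exactly.
decoded = base64.b64decode(token).decode()
assert decoded == f"{client_id}:{client_secret}"
```

Worth remembering: Base64 is an encoding, not encryption. Anyone who sees the header can read the credentials, which is why these requests go over TLS.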

#2 pointed out that Emacs can convert text to Morse code and the NATO alphabet. As a non-native English speaker, he finds the NATO alphabet useful.

You know you want it, so here are those previous two sentences in Morse code:

#..--- .--./---/../-./-/./-.. ---/..-/- ./--/.-/-.-./... -.-./.-/-. -.-./---/-./...-/./.-./- -/./-..-/- -/--- 
--/---/.-./.../. -.-./---/-../. .-/-./-.. -/..../. -./.-/-/--- .-/.-../.--./..../.-/-..././-/.-.-.- 
.-/... .- -./---/-./-....-/-./.-/-/../...-/. ./-./--./.-../../.../.... .../.--././.-/-.-/./.-./--..-- 
..../. ..-./../-./-../... -./.-/-/--- .-/.-../.--./..../.-/-..././- ..-/..././..-./..-/.-../.-.-.-
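If you want to check that transcription outside of Emacs, a Morse encoder is just a dictionary lookup. Here is a toy Python sketch (letters only; Emacs handles digits and punctuation too), using the same "/" between letters and space between words as the block above:

```python
# Minimal Morse encoder; letters only, for illustration.
MORSE = {
    "a": ".-",   "b": "-...", "c": "-.-.", "d": "-..",  "e": ".",
    "f": "..-.", "g": "--.",  "h": "....", "i": "..",   "j": ".---",
    "k": "-.-",  "l": ".-..", "m": "--",   "n": "-.",   "o": "---",
    "p": ".--.", "q": "--.-", "r": ".-.",  "s": "...",  "t": "-",
    "u": "..-",  "v": "...-", "w": ".--",  "x": "-..-", "y": "-.--",
    "z": "--..",
}

def to_morse(text: str) -> str:
    # Join letters within a word with "/" and words with spaces,
    # mirroring the separators used in the encoded block above.
    words = text.lower().split()
    return " ".join("/".join(MORSE[c] for c in w if c in MORSE) for w in words)

print(to_morse("emacs"))  # prints "./--/.-/-.-./..."
```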

We looked at the “Games and Other Amusements” page in the Emacs documentation, and spent a few minutes trying some of them out. When people say Emacs is stuck in the 1970s, show them the built-in Tetris.

#4 asked about Hyperbole (Emacswiki link here, GNU link here). #4 cannot find a good use case for it since Org handles what he does.

I am surprised more people are not using Hyperbole. It is the best Emacs mode ever. It is so good, there is a Youtube channel for it. Really, the videos are just amazing. Hyperbole mode is totally top notch. I just cannot convey how great it is. High marks.

I am not the only one blown away by how excellent Hyperbole mode is. The critics have spoken:

..../-.--/.--././.-./-.../---/.-../. --/---/-../. .-/-.-/-.../.-/.-.
- Morse Code Allah

Try Hyperbole Mode. Think really hard. Write down how great Hyperbole Mode is.
- Richard Feynman

Zathras try Hyperbole. Hyperbole best thing happen to Zathras. 
Zathras must go tell Zathras.
- Zathras

I loved it. It was better than "Cats". I am going to use it again and again.
- Saucer-Eyed Tourist in NYC

But isn't vim better thaALL GLORY TO HYPERBOLE MODE.
- Resident of New New York, Year 2999

Much Lisp. Such Meta-X. Def wow.
- Dogechan

We also talked about Big Brother Database: ELPA page here, EmacsWiki page here.

#2 showed a ChatGPT package. Apparently there are a lot of them. He asked for a prompt, and I told him to ask about a good use for Hyperbole. I am not sure which package #2 used; I think it was either gptel or gpt.el. #1 wants syntax highlighting in his GPT client. I have not used any of the GPT clients, so I was not aware there was no highlighting. #2 says you have to give ChatGPT precise directions, and he uses it for proofreading.

We got to know #4 a bit more. He is originally from Austin, and now lives in North Texas near Dallas. He works for a semiconductor company based in Boise. He uses some Emacs in his job and uses Org to track tasks and his time. He mentioned Org Notion, which I assume helps Org talk to the Notion mentioned at this link (I had never heard of it). He runs Emacs on Windows through WSL. His job is hardware design, but I do not think he uses Emacs for that specific task.

I googled “Emacs CAD”, and the best result I got was a Reddit thread from 2017. There is also an architecture firm in Nairobi called “Emacs CAD”, but I do not think it has anything to do with the Emacs we are all familiar with. The founder’s name is Emmanuel, so perhaps that is a factor in why the firm is called “Emacs”. However, their “About Us” page has a few statements I think we can all agree with:

With a passion to raise the standards, Emacs will satisfy and exceed your requirements.

Emacs values and recognises talent and variety from other professions and approaches its solutions provision using the consortium approach.

The Emacs Vision is to be an excellent provider of life enhancing solutions.

Then there was a lot of talk about configuration. #4 is a former Vim user, likes to tinker, made the switch to Emacs four years ago, and might declare Emacs bankruptcy. #2 uses Doom core with mostly his own modules, and talked about the differences between Spacemacs and Doom. #2 showed us a few Doom macros for configuration: after! and map!. #4 makes a config file per module and groups them into directories; this allows him to zero in on the package that is causing issues. He talked about wanting to use daemons to run multiple instances to debug problems.

#8 revealed that he is one of the maintainers of Spacemacs, and sees using a daemon connecting to different clients as an anti-pattern. One exception would be if he used a different daemon to run unit tests when he updates a repo; that way a failing test would not upset his workflow. #8 said there has been cross-pollination between Doom and Spacemacs.

#9 said he formerly used a config based on Daniel Higginbotham’s config for Clojure for the Brave and True. Now his config is all in one file, which is easy to keep in version control. It is about 6K lines. He tried Org mode for his config, but could not get it to work. My Emacs config is also based on DH’s. At some point I will try to get it to work with use-package or as an Org file.
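As a sketch of what that conversion involves, a use-package declaration bundles a package’s loading, keybindings, and settings into one self-contained form (the packages and keys below are only examples, not anyone’s actual config):

```elisp
;; Illustrative use-package declarations; packages and keys are examples.
(require 'use-package)

(use-package magit
  :ensure t                         ; install from an ELPA if missing
  :bind (("C-x g" . magit-status))) ; autoload on this keybinding

(use-package org
  :defer t                          ; load lazily
  :config
  (setq org-log-done 'time))        ; runs only after Org is loaded
```

Because each form is self-contained, it is also easy to bisect a config: disable one declaration at a time until a problem disappears.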

#2 has his config set to be lazily loaded, so it starts fast but he pays for it later. #8 declared Emacs bankruptcy; he went bankrupt two times in three years. Now with Spacemacs his config is 300 lines of Emacs Lisp. Before, his config had 1000 pages. He said you can put page breaks into files and use a customized character for the page break, like a horizontal line. This can make a large file easier to parse visually. Here are a few links for my convenience: a page on EmacsWiki here, an old page at Indiana University here, page-break-lines.el on Github here, and the Emacs manual’s page on Pages (getting meta with Meta-X) here. I will add this to my ever-growing to-do list. The stereotype is that Emacs users are always changing their config; I keep adding to a list of things I say I will add to my config someday.

#8 said there might be some changes coming to tree-sitter, but did not say too much else. That link says to use the built-in Emacs tree-sitter module for 29+.

Then we got back to AI. #8 talked about a study that was done on AI and chess. The AI could beat a grandmaster. But if the AI gave an amateur a list of 10 possible moves and the amateur picked from that list, the amateur beat the AI. And if the AI gave a grandmaster options to choose from, the grandmaster beat everyone else. Perhaps that will be a model for software development going forward.

The conversation then shifted to ways of writing in general, not just software. A few participants talked about Tolkien’s Leaf by Niggle, and then the snowflake method of writing. Honestly it was a bit upsetting.

We talked a bit about LaTeX. I once had a job working with LaTeX, preparing papers for academic journals. #9 said he is the only one on his campus who uses it. That surprised me.

Lastly, as people dropped off, #7 and I talked to #5. #5 was in Melbourne, Australia. Not a banana bender, crow eater, top ender, or sandgroper. (For some reason there is no slang term for people from New South Wales.) Another continent checked off. All we need is Africa and South America, and we have a complete set.

You’re welcome.

I give people numbers since I do not know if they want their names in this write-up. Think of it as the stoner’s version of the Chatham House Rule. I figured that numbers are a little clearer than “someone said this, and someone else said that, and a third person said something else”. Plus it gives participants some deniability. People’s numbers are based on the order they are listed on the call screen, and the same person may be referred to by different numbers in different months.

I am not the official spokesperson for the group. I just got into the habit of summarizing the meetings every month, and adding my own opinions about things. The participants may remember things differently, and may disagree with opinions expressed in this post. Nothing should be construed as views held by anyone’s employers past, present or future. That said, if you like something in this post, I will take credit; if you don’t, blame somebody else.

Image from Matenadaran MS 1568, a 12th-century Armenian manuscript housed at the Matenadaran Institute of Ancient Manuscripts in Yerevan; image from Wikimedia, assumed allowed under public domain.


Notes on LLMs and AI

There has been a lot of press lately about AI, OpenAI, GPT-${X}, and how AI will affect the world. For the time being, I plan on not looking further into AI (unless my current or a future employer compels me to). I think that right now there is not enough diversity in the vendor population. I also have some thoughts on how it will affect things going forward.

I do not like to get too meta in my posts, but I have been writing this on and off for over a week, and I want to get it out before too much more time passes. I am still learning about this stuff, like what is the difference between models, weights, and datasets; some articles use a project name to refer to all three components. The LLaMA debacle is the textbook case: some stuff was released, some was leaked, there are projects that are based on Meta’s work, some that seem to be clean-room implementations, so how it all fits together is murky to me.

GPT-${X} by OpenAI is taking the world by storm, particularly ChatGPT. It was the focus of a recent EmacsATX meeting. It is disruptive in the sense that it has capabilities beyond prior AI technology, and will probably have a profound effect on society going forward. But in another sense, it is the opposite of disruptive; it consolidates power and influence in OpenAI. One of the owners of OpenAI is Microsoft, and for me that makes anything by OpenAI something to avoid. They are not doing this for you.

I think a lot of people do not realize that when they play around with the OpenAI prompts in ChatGPT, they are training the OpenAI models and making them better and more powerful. That power can be used by other users of the tool: not only the vendors, but also your competitors. There have been reports of confidential data and PII being put into ChatGPT, and then extracted by other users later. People need to be more careful. And stop making the rich and powerful richer and more powerful. A lot of people in corporate America might work at companies that are independent on paper, yet they all act like they want to be subsidiaries of Microsoft. Start looking out for your own company and your own career and your own life.

The GPT-${X} products were used in making GitHub Copilot. I mentioned Copilot when I posted I was moving from Github to Codeberg. It does not respect licenses, which could put a company at legal risk, and sometimes it “solves” a problem while violating stated constraints. GPT-${X} has the same issues: Who owns the training data? Who owns the output?

It is good to automate things, but could relying on AI too much make people stupider? A good point was brought up in the discussion about why MIT dropped SICP: When you rely on a black box, how do you know you can rely on the black box? I think we might be coming close to fulfilling a prophecy from Dune:

Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.

I think we should collectively make an effort to avoid anything by OpenAI, and anything Microsoft. I do not know how long Microsoft has been involved with OpenAI, but there are a few MS hallmarks: it is called “OpenAI” even though it is not open (they have been tight-lipped about how they trained their data), and when it is wrong it insists you are wrong. And when it is incorporated into MS products it has started pushing ads.

There are a few alternatives out there. There is a company called Hugging Face that I think provides datasets, different models and hosting for AI. I think you can provide your own data. There is a company called Lambda Labs which provides all your AI/GPU needs: cloud hosting, colocation, servers, workstations with terabytes of memory, and a very expensive and very nice looking laptop with Ubuntu pre-installed (a LOT more than System76, but it is nice to see more Linux laptops out there).

WRT software, there are some implementations of AI that are open source. NanoGPT can be run on a local system, although it might take a while. You can find the Github link here, and a link to what might be a fork on Codeberg here. It was started by Andrej Karpathy, who worked on autonomous driving at Tesla and worked at OpenAI.

GPT is a type of artificial neural network known as a large language model, or LLM. Then Facebook/Meta released an LLM called Large Language Model Meta AI, or LLaMA, so now there are a few projects with names referring to South American camelids: llama.cpp (Github link here, Hacker News discussion here), and a fork of llama.cpp called alpaca.cpp (Github link here, Codeberg link here). Once they saw money going to someone else’s pockets, Stanford decided to get in on the act with their own LLaMA implementation, also called Alpaca. There is one called Vicuna (intro page here, Github link here). And, last but not least, Guanaco, which looks like a fork of Stanford’s Alpaca (Github repos here, page here). You would think AI researchers would come up with more original names rather than run a theme into the ground.

Note: I think Facebook/Meta did release some papers about LLaMA, and then some parts of it were leaked. The status of these projects is a bit unclear to me at the moment. Some of the projects mentioned cannot be used for commercial purposes. IANAL, but I think that llama.cpp and alpaca.cpp can since they are clean-room implementations and were not created with any assistance or collaboration with Meta. Stanford got some early access to LLaMA, so its project and Vicuna cannot be used for commercial purposes.

You can find some more info about open source AI here on Medium (archive here), and here on HN. I think the group EleutherAI is trying to be an open source counter to OpenAI.

There are a LOT of other AI projects out there, but a lot of them are just interfaces to Chat-GPT or DALL-E or something else from OpenAI, as opposed to a program you can run for yourself. A lot of the forks and clean-room/non-OpenAI models require a LOT of memory. Some need at least 60 GB. The mini I got from System76 can have up to 64GB. They have desktops that can go up to 1TB of memory, and servers up to 8TB. Granted, maybe something local will never catch up to OpenAI, but as a few comments in the HN discussion on llama.cpp pointed out: the open source models are becoming very efficient very quickly. Granted, some of the commenters said that AI might be out-of-reach for the hobbyist. But then all this stuff is doing is simulating a human.

So where does all this go next? Honestly, who knows, but I will share my thoughts anyway.

First off: I dismiss the doomsday scenario that AI will kill us all. Like the Wikipedia page on “pessimism porn” states: A lot of people like to predict disaster because it makes them feel smart, even if years go by and their predictions never come to pass. There are a lot of people with blogs and YouTube channels that are always predicting a stock market collapse, or who think we are about to become Weimar Germany if the price of a gallon of milk goes up one cent. They dismiss you if you cannot offer irrefutable proof that the world will NOT end, yet they insist their predictions are to be regarded as self-evident. Granted, maybe those are not the best arguments against Skynet, but I have dealt with a lot of people who confuse the strength of their convictions for logic. Sometimes the best prediction is that things will mostly continue as they are, just with more of something you do (or do not) like.

Since this will be a major change, there will be an effect on jobs. Some jobs will be lost. But there might actually be more jobs due to AI. Scott McNealy pointed out that making a system used to be a master’s thesis, and systems were pretty limited. Now we have powerful software that is easy to install. We have packages (like JDK, Golang, Elixir) that are powerful compilers and runtimes, far beyond what people thought was possible a few decades ago, yet they can be downloaded as tar or zip files that once expanded let people create robust, powerful software. Linux and these VMs have created a lot of technology jobs. I think AI might wind up creating more jobs on net than we have now.

Granted, it is possible that the jobs that get created are more soul-sucking than what we have. I joked on Mastodon that AI will not take your job; it will just take away the parts you like, leaving you with the parts you do not like.

There is one group of people that I do hope loses their jobs: all the More Bad Advice pinheads [Note 1] who all sound the same and think the answer to everything is to cut costs. I have had good and bad bosses, but honestly, a lot of people in Corporate America sound the same: asking when things will be done, going on and on about how important some arbitrary deadline they pulled out of thin air is, harping on about innovation yet only having the same tired ideas (piling on more work during the so-called “good times”, then cutting staff when things start looking shaky).

And there will be more people thinking the same. One thing that really grates on me is that we are told in technology that we have to be constantly learning new things. Yet the world is full of business pinheads who cannot conceive of not using Excel, and there are plenty of software developers who cannot conceive of doing something in a language that is not Javascript. I have a bad feeling that OpenAI will become the Third Pillar of Technology Stupidity.

Sadly, maybe that will be the way to stay employed. Be a Microsoft drone, a Javascript drone, or an OpenAI drone. I have met tech people older than me who said they could do things decades ago with Lisp and Smalltalk that most languages and runtimes still cannot match. I feel like we took a wrong turn somewhere.

That said, even if AI leads to more jobs, there could still be downsides. We are already seeing this: Generative AI is already being used to craft more effective phishing emails. ChatGPT accused a law professor of sexual harassment (article here, HN discussion here). The HN comments have examples of AI making stuff up, but the professor gave a good summary: “ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper.” What if this is used for background checks and nobody verifies what the AI tells them? This could cause a lot of damage to people. Per the quote misattributed to Mark Twain, a lie can travel halfway around the world before the truth can get its boots on.

We should call AI “artificial inference”, because it mostly makes up stuff that sounds true. It just makes guesses about what seems logical. For a long time it was logical to think the earth is flat. Yet for some reason people think the output of AI is always true. Perhaps they are assuming that it must be true since it is based on large data and advanced technology. But sometimes the machine learning is just machine lying. Marissa Mayer said Google search results seem worse because the web is worse (articles here and here). People used to put content on the web to tell you things, and now they just want to sell you things. There is lots of junk on the web. I predict there will be a lot of junk in AI.

Microsoft is putting ads in Bing AI chat, which is already fostering distrust in AI (articles here and here). Unlike Google search ads, the ads in the chat are hard to distinguish from the rest of the results. If companies need to put ads in AI, then make it like Google ads. People realize that things need to be paid for. Intermingling ads with AI just ruins the AI. You do not need advanced AI to say something you are getting paid to say. Google has been able to serve ads based on user input since 2004.

I think AI will lead to a lot of artificial and misleading content. Not just text, but also audio and video. People might not be able to believe what they read, see or hear online. It could cause more cynicism and distrust in our society. Perhaps we will not get Skynet, just a slow decay and further fracturing of society.

AI could, of course, lead to massive job losses. A lot of people care more about cost than quality. And it is possible that after a time some of those jobs might come back. There is a post on Reddit (link here, HN discussion here) about a freelance writer who lost a gig to ChatGPT. (Another writer wrote an “AI survival guide“.) A few comments gave anecdotes of job applications that all sounded the same, which the HR people realized were all done with AI. If more companies start using AI, a lot of websites will all start to sound the same. A lot of people hate it when an article “sounds/feels like it was written by AI”. Perhaps the human touch will make a comeback. There is a joke I read somewhere:

An LLM walks into a bar.
The bartender asks, "What will you have?"
The LLM says, "That depends. What is everyone else having?"

Granted, it might be a while before jobs lost to AI come back, assuming they ever do. And not all of the jobs might come back.

I think that people who understand concepts will do better in the long run than people who just know a tool. At least, that is how things have been. It could be different this time. On the other hand, could an AI come up with “Artisanal Bitcoin“?

Software used to be done in binary or assembly, and over time the languages became more powerful, and the number of jobs increased. Software was always about automation, and there was always something to automate. Has that cycle stopped?

I am worried, but I cannot just yet get on board the Doom Train. I remember working at Bank of America in the 00s/Aughts/Whatever that decade is called, and we all thought that all our jobs would go to India and there would be No More Software Made In ‘Merica. That did not happen.

Or maybe it is all a bubble that will burst.

Maybe the AI is not as advanced as the companies are telling us. OpenAI does not publicize it, but they used people in Kenya to filter the bad stuff (Reddit discussions here, here and here, Time article here with archive here, Vice article here with archive here, another Vice article here with archive here). One major focus of the articles is that looking at all the toxic content was so traumatic for the workers that the company that got the contract ended it several months early. Looking at toxic content can wear on people. But isn’t the point of an AI to figure this stuff out?

My employer had us watch some videos on up and coming technology, and one of them was on AI. One of the people on the panel kept talking about how important it is to “train” and “curate” your data. They kept saying that over and over. And I had the same thought: isn’t that what the AI is supposed to do? They made it sound like AI was just a big fancy grep or SQL query.

Per the Vice articles, tech and social media companies have been using people in low-wage countries to flag content for years, while letting people think that their technology was so amazing. Perhaps ChatGPT is no different. I do not know if they have to re-flag everything for each version of GPT. I get the impression the data is trained when the AI is started up, and from there it is just repeating what it figured out. Does it actually learn in real-time the way a human can? Can an AI make new inferences and be an old dog learning new tricks the way a human can, or does it just keep reinforcing the same ideas and positions the longer it runs? What if you train your data and the world changes? What if neo-nazis stop using triple parentheses as an anti-Jewish code today, and your training data is from two years ago? I guess you are just back to web search.

I think part of what is going on is hype. As Charlie Stross pointed out, it does seem interesting that we see the AI hype just starting as the corrupt-o-currency hype is winding down. The vulture capitalists need something new to sell.

Another issue is: will this scale going forward? Technology does not always progress at the same rate. We could be headed for another AI winter. Research into AI for autonomous driving has hit a wall (no pun intended).

And how will this scale? The human brain still has 1000 times the number of connections as GPT-4 has parameters. There is already a shortage forming for the chips used for AI. Is it worth it to burn the planet and use all that metal and electricity to chew through a lot of data…to do what? Simulate a human brain in a world with 8 billion people? Especially when a lot of the humans’ intelligence is not being used efficiently (see penetration of Lisp vs Windows).

That said, I don’t think AI will go away. If I could have one thing, I would like to see alternatives to OpenAI, particularly open source. It might be possible to run LLMs locally. Do you really need an AI that knows about oceanography? Most of us do not. I do not think that AI will kill us all (it is not clear to me how we go from chatbot to Terminator). But corporate consolidation in AI would be a tragedy.

I just need a job where I can use Emacs and call people stupid.

You’re welcome.

Note 1: Not all More Bad Advice pinheads actually have a More Bad Advice degree. However, the ideas pushed by the More Bad Advice crowd are pretty much everywhere.

Image from an 11th-century manuscript housed in the Topkapi Palace in Istanbul, image from The Gabriel Millet Collection (collection page here), assumed allowed under public domain.

Tracking Tax Documents And Other Ideas For Learning Org Mode

Whenever Emacs or Org mode is mentioned on Hacker News, there is usually at least one comment from someone who said they started learning it, but had a hard time sticking with it. It is easier to learn a new technology if you have a goal to use it for something, especially something non-technical. “I want to learn Rails/Phoenix/etc to make a web app to keep track of $SOMETHING-IN-PARTICULAR” is better than “I want to learn Rails/Phoenix/etc”. They could learn Emacs and/or Org if they had a reason to learn them. Here are a few ideas on things you can do to learn Org. It doesn’t have to be a major project.

Right now it is tax time in the United States. Taxes have to be filed with the Internal Revenue Service by April 15. Employers, HSAs, banks and brokerages have been sending out a LOT of forms. You could make a simple checklist for each of the forms you get, and check them off as you get them. Or make them TODO headings, and mark them as DONE as they arrive; this will add a timestamp. Plus: You can make this outline into a template, and re-use it next year.
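As a sketch, such an outline could look like this (the form names here are just examples; yours will depend on your accounts):

```org
* Tax Forms
** TODO W-2 from employer
** TODO 1099-INT from bank
** TODO 1099-DIV from brokerage
** TODO 1099-SA from HSA
```

Mark a heading DONE with C-c C-t as each form arrives.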

I think there is a way to make templates in Org using something called Capture, based on what I have read about it. At some point I will read the Org manual or finish Rainer Koenig’s Org-mode course on Udemy. I only finished half of it, and I still got a lot out of it. Right now when I need to re-use a list, I make a template, and when I need a new one, I put it in the kill ring with org-cut-subtree, and I call org-paste-subtree twice: Once to bring back the template, and a second time to have a copy as an instance.

I also have a list of repeating tasks for things that I need to do every month: pay rent, pay electric bill, pay the water bill. I have monthly tasks for backing up my GnuCash files and my KeePassXC files, and I bundle my Org files and put them on thumb drives. I also have tasks for my car: getting my oil changed, and for replacing different fluids. I have yearly goals for bills that I pay once a year: web hosting, mail box, some insurance.

If you make repeating tasks, look into setting the org-log-into-drawer variable. A “drawer” in Org is a section that is hidden by default, but can still be viewed by calling M-x org-cycle.
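A minimal setting for your init file would be something like this:

```elisp
;; Put state-change notes and timestamps into a :LOGBOOK: drawer
;; instead of cluttering the body of the entry.
(setq org-log-into-drawer t)
```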

I just started Org by making to-do lists, and learned more over time.

You’re welcome.

Image from Hillinus Codex, Cod. 12, an 11-century manuscript housed at the Archbishop’s Diocesan and Cathedral Library in Cologne, Germany, allowed under CC BY-NC 4.0.

2023-03 Austin Emacs Meetup

There was another meeting a couple of weeks ago of EmacsATX, the Austin Emacs Meetup group. For this month we had no predetermined topic. However, as always, there was mention of new modes, packages, technologies and websites that I had never heard of, and some of this may be of interest to you as well.

#1 was one of the organizers; he used to live in Austin and now lives in East Texas.
#2 was a programming instructor in NYC.
#3 was The Artist Known as Number Three, the Esteemed Gentleman From Oklahoma.
#4 was a devops engineer for a quantum computing company in Madison, WI (he is in Madison, but the company is based elsewhere).
#5 was the other organizer, formerly working for the City of Austin.
#6 was our professor, the Other Esteemed Gentleman From Oklahoma.

When I dialed in the guys were talking about how to keep their kids out of their home offices, although some allow them to go in sometimes. #3 said his kids are adults, and they still do not go into his office. His adult daughter will not cross that yellow line.

#4 talked about the company he works for and his use of Emacs. They make quantum computers by using lasers to trap atoms in a vacuum. I asked him if he worked for Rigetti Computing, but he said it was another company. I guessed Rigetti because 1. It is the only quantum computing company I could name, and 2. Whenever someone on the web asks who uses Common Lisp, a Rigetti person mentions they use Common Lisp. I assume a company that uses Common Lisp would have a lot of Emacs users. He explained that Rigetti makes a chip, while making a quantum computer with trapped ions is a completely different process. #4 said he does not know too much about the lasers and that there is an engineering team that configures them. He said that the configurations are stored in Org files and the process can be run multiple times from the same configuration.

Still, I think it is interesting that Emacs can be used to control lasers. And because there were some snarky comments in the chat, he was compelled to point out the lasers are not mounted on sharks.

#4 did know more about lasers than the average person (like what they can and cannot cut through), and he and #3 spent a few minutes discussing lasers. #3 worked with lasers in the Navy. I remarked that #3 seemed like a Renaissance Man: He uses Emacs, he was in the military, he shoots lasers, he has had some music from different genres on in the background during meetings.

#2 introduced himself. He found out about the group from #4. They met through the nanofiction community. #2 has been using Emacs for 2 years, and came to Emacs from Vim because he thought Vim was too limited. #4 came to Emacs from SciTE.

I mentioned that I wished the developer from Dallas (who was last month’s #2) was on the call. He was speaking to a newcomer who was having a hard time getting into Emacs that many people see Emacs as an editor, tool, or IDE, and while it is those things, ultimately it is a Lisp REPL. My power went out, and I wished I heard him finish the thought. A few people said he went on for several minutes. This led into a rehashing of some of last month’s topics, including Verb mode. #2 thought we said “Bird mode”. It turns out there is a Bird mode; none of us had heard of it or could tell what it does from the sparse Readme.

The two words sound very similar. The word for last month’s mode is “verb”, and connotes action. The word for this month’s mode, on the other hand, has a more avian flair to it. The word for this month’s mode is an ornithological expression, as it were. What is the word? My friends, “bird” is the word. (According to Wikipedia, it took four people to write that song.)

#4 shared a link to RDE, a Scheme repo for reproducible development environments. He and #2 talked for a few moments about Guix, which uses Scheme for configuration.

I asked about an issue I was having with REPLs in CIDER: It would hang when I hit return. The solution was to add this to the Emacs configuration:

(define-key paredit-mode-map (kbd "RET") nil)

This solution was also suggested at Emacs Redux. But when I added it, I kept getting this error:

Symbol's value as variable is void: paredit-mode-map

#4 said it was because it was trying to set paredit-mode-map before paredit was loaded. I am not sure why that would be happening, since I put this at the end of my file. He suggested I change it to this:

(with-eval-after-load 'paredit (define-key paredit-mode-map (kbd "RET") nil))

That worked. I do not know why I was getting the error even though I put the “define-key” towards the end. Perhaps it is time to read Mastering Emacs, the Emacs manual, or perhaps both. Anyway, devops for the win.

I edited my config, and tried out the solution while the conversation continued. When I came back, they were talking about configs and start-up times.

#3 and #6 finally met up in OKC, and #3 helped #6 with his Emacs init, significantly reducing his startup time. #3 said the basic idea was to use use-package to defer what you need until you need it. Now #6 loads his packages at the beginning using add-to-list to add packages and then configures them with use-package. He has about 155 packages.

Now that I think about it, use-package used to get mentioned a lot at these meetings, but lately it has not been mentioned as much.

#4 runs Emacs server on login, and then runs Emacs client when he needs to edit files, like bluetooth: it starts when you login, not when you connect a device. If #4 comes back on later, I will have to ask him if he keeps any instances going all day, starts and exits, or some combination. He also said that other people on his team use Emacs and talked about their configs, but I did not add that to my notes.

#2 asked about tiling window managers, and #3 talked about WMs that he has used. I use Emacs with the “--no-window-system” option, and just stick with the GUIs that Ubuntu and System76 give me, so some of this went over my head. #3 avoids i3wm because others use it. Not only is he a Renaissance Man, he is a Hipster Renaissance Man.

#3 mentioned he tries to live in Emacs as much as possible, but is not an extremist. He checks email with Gnus and does a lot of things in Emacs that most people do in other programs, but he admits there are some things we should do outside of Emacs: he does not play music or browse the web in Emacs. He tried using Emacs to access KeePassXC, but did not like it. I also tried it using keepass-mode, but I did not like the fact that I could not sort the entries in a folder alphabetically. I also tried accessing my database on the command line with keepassxc-cli, and I still could not figure out how to list the entries in a group the way I wanted.

#1 talked about using Chemacs, using different profiles. #3 said there is a with-emacs shell script; I assume he was talking about this. Emacs 29 will probably make those obsolete. #3 pointed out that Chemacs can mess with some configs. He mentioned an “early init file”, which I had never heard of. I guess his work with Crafted Emacs forces him to deal with corner cases that most users never deal with.

#6 asked about blogging. He uses org2blog to publish to WordPress. I also blog with WordPress. I write the post in Org, use org-export-dispatch to export to HTML, and then copy and paste the HTML into the Classic Editor. I hate the Gutenberg editor, and based on the reviews for the Classic Editor plugin, I am not alone; instead of calling the new editor Gutenberg, they should have called it Torquemada. If the Classic Editor plugin gets discontinued, I might go with JBake. #3 uses WriteFreely. #4 pointed out Hugo supports Org, and mentioned write.as and Keybase. #6 is also looking at a way to display inline PDFs in Org.

#3 pointed out that Emacs runs on Android. At first I did not see the point, but #3 mentioned that Chromebooks can run Android.

#4 gave a link to a regex debugger, but this site might not work with Emacs regular expressions. So with Emacs regex, you wind up having three problems. You can find a page on Emacs regular expressions here, Perl here, and Java here. There is a free regex tester that is part of a group of sites called Dan’s Tools. A few people shared some regex horror stories. (Is “regex horror story” redundant?)

#5 had his cat say hello. #3 mentioned his dogs. I asked if there are any cat people in Oklahoma, and #3 said there are, but it is mostly a dog state.

#4 asked about Zile, GNU’s other configurable editor. Per GNU, it means “Zile Implements Lua Editors” and also “Zile is Lossy Emacs“. I guess it can be used to run a Lua editor named Zz, or an Emacs clone called Zemacs. This led to talk of other Emacsen. Many of them have fallen away, but they were specific to platforms that no longer exist. Linus Torvalds uses MicroEMACS, which was last released in 1996. The source to Gosling Emacs was released recently (Github link here, Hacker News link here). #5 gave a rundown of Gosling Emacs: Gosling put a wrapper around the TECO language, sold it to a company that charged for it, RMS got mad and rewrote Gosling’s Lisp.

#3 pointed out RMS does not use external packages, per RMS’s EmacsConf talk. He uses VC, and sees no point in integrating Magit into Emacs. To be fair, per the Emacs docs VC can interface with other version control systems in addition to Git.

The conversation turned to Emacs code: a lot of it is forgotten and hasn’t been looked at or changed in a long time, and there are a lot of features people do not know about. #3 said that working on Crafted Emacs led him to find features that he had never heard of. One feature that was mentioned was Whitespace mode (see here and here). #3 and #4 also talked about proced, which can manage processes. There is no mention of it in the Emacs documentation. There is an article about it on Mastering Emacs, and an article here with discussion here. The source code is here, and mirrored here. You have to invoke it old-school with M-x proced. As far as I know, there is no key chord for it.

Like Dired, there are some commands you can run in the buffer:

(n)ext, (p)revious, (m)ark, (u)nmark, (k)ill, (q)uit (type ? for more help)

Using M-x describe-bindings in the proced buffer, I was able to find the proced functions:

Key Binding
RET proced-refine
C-n next-line
C-p previous-line
SPC next-line
0 .. 9 digit-argument
< beginning-of-buffer
> end-of-buffer
? proced-help
C proced-mark-children
F proced-format-interactive
M proced-mark-all
P proced-mark-parents
T proced-toggle-tree
U proced-unmark-all
d proced-mark
f proced-filter-interactive
g revert-buffer
h describe-mode
k proced-send-signal
m proced-mark
n next-line
o proced-omit-processes
p previous-line
q quit-window
r proced-renice
s Prefix Command
t proced-toggle-marks
u proced-unmark
x proced-send-signal
DEL proced-unmark-backward
S-SPC previous-line
<down> next-line
<header-line> Prefix Command
<mouse-2> proced-refine
<remap> Prefix Command
<up> previous-line
<remap> <advertised-undo> proced-undo
<remap> <undo> proced-undo
<header-line> <mouse-1> proced-sort-header
<header-line> <mouse-2> proced-sort-header
s S proced-sort-interactive
s c proced-sort-pcpu
s m proced-sort-pmem
s p proced-sort-pid
s s proced-sort-start
s t proced-sort-time
s u proced-sort-user


You’re welcome.

I give people numbers since I do not know if they want their names in this write-up. Think of it as the stoner’s version of the Chatham House Rule. I figured that numbers are a little clearer than “someone said this, and someone else said that, and a third person said something else”. Plus it gives participants some deniability. People’s numbers are based on the order they are listed on the call screen, and the same person may be referred to by different numbers in different months.

I am not the official spokesperson for the group. I just got into the habit of summarizing the meetings every month, and adding my own opinions about things. The participants may remember things differently, and may disagree with opinions expressed in this post. Nothing should be construed as views held by anyone’s employers past, present or future. That said, if you like something in this post, I will take credit; if you don’t, blame somebody else.

Image from Grec 224, an 11th-century manuscript housed at the National Library of France; image assumed allowed under public domain.

Making An Elixir Project

Now I will go over making an Elixir project. This is a continuation of my post about learning project structure and testing from the beginning when learning a new programming language.

Elixir took a bit more work. I made a project and I thought I was doing it correctly, but after a certain point every time I ran the tests it ran the app instead. I could not figure out why. So I started over. I followed along with a project in Dave Thomas’s Elixir book. He does not start a project until Chapter 13, which I think is odd. Why not start a project from the beginning?

Right now I do not know a whole lot about Elixir or the Elixir community or ecosystem, so this post might contain some opinions and speculations that will seem $INSERT_NEGATIVE_TERM to Elixir experts.

You can install Elixir with the asdf tool. asdf manages installations of Elixir itself, but not the dependencies of your Elixir projects. Elixir requires another language, Erlang, to be installed; asdf can install that too. Check the asdf Getting Started page to download and install it.

After you install asdf, you need to install the Erlang and Elixir plugins, and then install Erlang and Elixir themselves.

asdf plugin add erlang https://github.com/asdf-vm/asdf-erlang.git
asdf plugin add elixir https://github.com/asdf-vm/asdf-elixir.git
asdf plugin list
asdf install erlang latest
asdf install elixir latest
asdf list
asdf list elixir
asdf list-all erlang
asdf list-all elixir

The tool to manage Elixir projects and dependencies is called Mix. To list all the commands, use “mix help”. You can find out more here and here. It is to Elixir what Maven or Gradle is to Java, or Leiningen is to Clojure. I think it is more like Gradle or Leiningen than Maven, because it is easier to add functionality to Mix, Gradle, and Leiningen than it is to Maven. I think the Phoenix web framework adds some Mix tasks. My installation of Elixir and Mix has some Phoenix tasks built-in. I do not know if that is because whoever made the asdf package included them, or if they are part of all Elixir installations. I would be a bit surprised if the Elixir maintainers would include Phoenix and play favorites.

First make a directory for Elixir projects.

ericm@latitude:~$ mkdir elixir.projects
ericm@latitude:~$ cd elixir.projects/

Next, run Mix to make a new project:

ericm@latitude:~$ cd elixir.projects/
ericm@latitude:~/elixir.projects$ mix new foothold
* creating README.md
* creating .formatter.exs
* creating .gitignore
* creating mix.exs
* creating lib
* creating lib/foothold.ex
* creating test
* creating test/test_helper.exs
* creating test/foothold_test.exs

Your Mix project was created successfully.
You can use "mix" to compile it, test it, and more:

    cd foothold
    mix test

Run "mix help" for more commands.
ericm@latitude:~/elixir.projects$ cd foothold/
ericm@latitude:~/elixir.projects/foothold$ ls
lib/  mix.exs  README.md  test/
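The mix.exs file is where the project is configured. For reference, “mix new” generates something roughly like this; your Elixir version constraint and comments may differ:

```elixir
defmodule Foothold.MixProject do
  use Mix.Project

  def project do
    [
      app: :foothold,
      version: "0.1.0",
      elixir: "~> 1.13",
      start_permanent: Mix.env() == :prod,
      deps: deps()
    ]
  end

  # Run "mix help compile.app" to learn about applications.
  def application do
    [
      extra_applications: [:logger]
    ]
  end

  # Run "mix help deps" to learn about dependencies.
  defp deps do
    [
      # {:dep_from_hexpm, "~> 0.3.0"},
    ]
  end
end
```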

Elixir uses modules instead of classes, and modules can be grouped into namespaces. I want to make one for my project called “foothold”. I ran “mix help”, but none of the task summaries looked like what I wanted, so we have to go old-school and do this by hand. I am not sure if Elixir calls them “namespaces”, but that is how I think of them.

ericm@latitude:~/elixir.projects/foothold$ mkdir lib/foothold
ericm@latitude:~/elixir.projects/foothold$ mkdir test/foothold

As with our golang project, make a package (or namespace, or prefix) for some modules that we will write.

ericm@latitude:~/elixir.projects/foothold$ mkdir lib/more_strings
ericm@latitude:~/elixir.projects/foothold$ mkdir test/more_strings

We will make a couple of files that make duplicates of strings and reverse strings, and we will include some tests for them. The modules will have the “ex” extension. The tests will have the “exs” extension because they are scripts; if we compile our app, the tests will not be included.

Make a file lib/more_strings/duplicate.ex:

defmodule MoreStrings.Duplicate do

  def duplicate_string(arg_string) do
    String.duplicate(arg_string, 2)
  end

  def make_three_copies(arg_string) do
    String.duplicate(arg_string, 3)
  end
end

Make a file test/more_strings/duplicate_test.exs:

defmodule MoreStrings.DuplicateTest do
  use ExUnit.Case          # bring in the test functionality
  import ExUnit.CaptureIO  # And allow us to capture stuff sent to stdout
  doctest MoreStrings.Duplicate
  alias MoreStrings.Duplicate, as: MSD

  test "try duplicate_string" do
    assert "andand" == MSD.duplicate_string( "and" )
    refute "andanda" == MSD.duplicate_string( "and" )
  end

  test "try make_three_copies" do
    IO.puts "In the test for make_three_copies"
    assert "zxcvzxcvzxcv" == MSD.make_three_copies( "zxcv" )
  end
end

Make lib/more_strings/reverse.ex:

defmodule MoreStrings.Reverse do

  def reverse_stuff do
    IO.puts "In MoreStrings.Reverse"
  end

  # why doesn't it like this?
  def actually_reverse_string(arg_string) do
    IO.puts "In MoreStrings.actually_reverse_string with arg #{arg_string}"
    IO.puts String.reverse(arg_string)
  end

  def revv(arg_string) do
    IO.puts "In MoreStrings.Reverse.revv with arg #{arg_string}"
    IO.puts String.reverse(arg_string)
  end
end

Make test/more_strings/reverse_test.exs:

defmodule MoreStrings.ReverseTest do
  use ExUnit.Case          # bring in the test functionality
  import MoreStrings.Reverse
  import ExUnit.CaptureIO  # And allow us to capture stuff sent to stdout

  # alias MoreStrings.Reverse, as: MSR
  # import MoreStrings.Reverse

  test "try reverse" do
    IO.puts "In the test try reverse"
    # assert "dolleh" == MSR.actually_reverse_string( "ahello" )
    assert MoreStrings.Reverse.actually_reverse_string("ahello") == "olleha"
    refute actually_reverse_string( "hello" ) == "dollehd"
  end

  test "ttttttt" do
    IO.puts "In test tttttt"
    assert 4 == 2 + 2
  end
end


Now compile the app with “mix compile” and run the tests with “mix test --trace”. Adding the --trace option will print a message to the console for each test being run, even if you do not have any IO.puts statements.

ericm@latitude:~/elixir.projects/foothold$ mix compile
Compiling 3 files (.ex)
Generated foothold app
ericm@latitude:~/elixir.projects/foothold$ mix test --trace
Compiling 3 files (.ex)
Generated foothold app
warning: unused import ExUnit.CaptureIO

warning: unused import ExUnit.CaptureIO

MoreStrings.DuplicateTest [test/more_strings/duplicate_test.exs]
  * test try duplicate_string (0.02ms) [L#7]
  * test try make_three_copies [L#12]In the test for make_three_copies
  * test try make_three_copies (0.03ms) [L#12]

FootholdTest [test/foothold_test.exs]
  * doctest Foothold.hello/0 (1) (0.00ms) [L#3]
  * test greets the world (0.00ms) [L#5]
In the test try reverse

MoreStrings.ReverseTest [test/more_strings/reverse_test.exs]
In MoreStrings.actually_reverse_string with arg ahello
  * test try reverse [L#9]olleha
In MoreStrings.actually_reverse_string with arg hello
olleh
  * test try reverse (0.1ms) [L#9]
  * test ttttttt [L#16]In test tttttt
  * test ttttttt (0.02ms) [L#16]

Finished in 0.1 seconds (0.00s async, 0.1s sync)
1 doctest, 5 tests, 0 failures

Randomized with seed 154594

Run “iex -S mix” in the root of your project to use your modules. IEx is the interactive Elixir shell that comes with Elixir. You can type in Elixir code and get results. It is sort of like un-automated unit tests. You can end the session by hitting Control-C (or as we say in Emacs land: C-c) and then “a” and the return key.

ericm@latitude:~/elixir.projects/foothold$ iex -S mix
Erlang/OTP 25 [erts-13.0.2] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [jit:ns]

Interactive Elixir (1.13.4) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> MoreStrings.Reverse.actually_reverse_string("ahello")
In MoreStrings.actually_reverse_string with arg ahello
iex(2)> alias MoreStrings.Duplicate, as: MSD
iex(3)> MSD.duplicate_string( "and" )
iex(4)> MSD.make_three_copies( "zxcv" )
BREAK: (a)bort (A)bort with dump (c)ontinue (p)roc info (i)nfo
       (l)oaded (v)ersion (k)ill (D)b-tables (d)istribution

Now add an external dependency to the project. The package we will add is Decimal, a package for arbitrary precision decimal arithmetic (Hex page here, documentation here, Github repo here). First we need to add a reference to it in our mix.exs file in the “defp deps” section:

defp deps do
  [
    {:decimal, "~> 2.0"}
    # {:dep_from_hexpm, "~> 0.3.0"},
    # {:dep_from_git, git: "https://github.com/elixir-lang/my_dep.git", tag: "0.1.0"}
  ]
end
Here are the Mix tasks associated with dependencies:

mix deps              # Lists dependencies and their status
mix deps.clean        # Deletes the given dependencies' files
mix deps.compile      # Compiles dependencies
mix deps.get          # Gets all out of date dependencies
mix deps.tree         # Prints the dependency tree
mix deps.unlock       # Unlocks the given dependencies
mix deps.update       # Updates the given dependencies

Run “mix deps.get” to fetch the dependencies and “mix deps.compile” if it makes you feel better:

ericm@latitude:~/elixir.projects/foothold$ mix deps.get
Resolving Hex dependencies...
Resolution completed in 0.033s
  decimal 2.0.0
* Getting decimal (Hex package)
ericm@latitude:~/elixir.projects/foothold$ mix deps.compile
==> decimal
Compiling 4 files (.ex)
Generated decimal app
ericm@latitude:~/elixir.projects/foothold$ mix deps
* decimal 2.0.0 (Hex package) (mix)
  locked at 2.0.0 (decimal) 34666e9c

Add a module that depends on Decimal in lib/foothold/decimal_stuff.ex, and make a few calls so we have something to test:

defmodule Foothold.DecimalStuff do
  def do_decimal_add(a, b) do
    Decimal.add(a, b)
  end

  def do_decimal_subtract(a, b) do
    Decimal.sub(a, b)
  end

  def do_decimal_compare(a, b) do
    Decimal.compare(a, b)
  end
end

Add the following to test/foothold/decimal_test.exs:

defmodule Foothold.DecimalTest do

  use ExUnit.Case
  import Foothold.DecimalStuff
  import Decimal

  test "test do_decimal_add" do
    assert Decimal.add(2,3) == do_decimal_add( 2, 3 )
  end

  test "test do_decimal_compare_lt" do
    assert :lt == do_decimal_compare(1, 2)
  end

  test "test do_decimal_compare_gt" do
    assert :gt == do_decimal_compare( 2, 1 )
  end

  test "test do_decimal_subtract" do
    # assert 3 == do_decimal_subtract( 5, 2 )
    # assert Decimal.subtract( 5, 2 ) == do_decimal_subtract( 5, 2 )
    assert Decimal.new( 3 ) == do_decimal_subtract( 5, 2 )
  end

  #  def do_decimal_subtract(a, b) do
  #   def do_decimal_compare(a, b) do
end

Now run the tests again:

ericm@latitude:~/elixir.projects/foothold$ mix test --trace
==> decimal
Compiling 4 files (.ex)
Generated decimal app
==> foothold
Compiling 1 file (.ex)
Generated foothold app
warning: unused import Decimal

warning: unused import ExUnit.CaptureIO

warning: unused import ExUnit.CaptureIO

FootholdTest [test/foothold_test.exs]
  * doctest Foothold.hello/0 (1) (0.00ms) [L#3]
  * test greets the world (0.00ms) [L#5]

MoreStrings.DuplicateTest [test/more_strings/duplicate_test.exs]
  * test try duplicate_string [L#7]In the test for make_three_copies
  * test try duplicate_string (0.00ms) [L#7]
  * test try make_three_copies (0.1ms) [L#12]
In the test try reverse

MoreStrings.ReverseTest [test/more_strings/reverse_test.exs]
In MoreStrings.actually_reverse_string with arg ahello
  * test try reverse [L#9]olleha
In MoreStrings.actually_reverse_string with arg hello
  * test try reverse (0.1ms) [L#9]
  * test ttttttt [L#16]In test tttttt
  * test ttttttt (0.02ms) [L#16]

Foothold.DecimalTest [test/foothold/decimal_test.exs]
  * test test do_decimal_compare_gt (0.01ms) [L#15]
  * test test do_decimal_subtract (0.01ms) [L#19]
  * test test do_decimal_add (0.01ms) [L#7]
  * test test do_decimal_compare_lt (0.00ms) [L#11]

Finished in 0.04 seconds (0.00s async, 0.04s sync)
1 doctest, 9 tests, 0 failures

Randomized with seed 333086

Next add a module to be the main module for a command line app. Put this in lib/foothold/cli.ex:

defmodule Foothold.CLI do

  import MoreStrings.Reverse
  import MoreStrings.Duplicate

  @default_count 4
  @moduledoc """
  Handle the command line parsing and the dispatch to
  the various functions
  """

  def main(argv) do
    IO.puts "in main for Foothold"

    # why doesn't it like this?
    actually_reverse_string( "this is my string" )
    revv( "this is my string for revv" )
    IO.puts duplicate_string "this is a string to be duplicated"
    IO.puts make_three_copies "one copy "

    argv
    |> parse_args
    |> process
    IO.puts "Done with CLI"
  end

  @doc """
  'argv' can be -h or --help, which returns :help

  Otherwise it is a github user name, project name, and (optionally)
  the number of entries to format.

  Return a tuple '{ user, project, count }', or ':help' if help was given.
  """
  def parse_args(argv) do
    OptionParser.parse(argv, switches: [ help: :boolean],
                             aliases:  [ h:    :help   ])
    |> elem(1)
    |> args_to_internal_representation()
  end

  def args_to_internal_representation([user, project, count]) do
    { user, project, String.to_integer(count) }
  end

  def args_to_internal_representation([user, project]) do
    { user, project, @default_count }
  end

  def args_to_internal_representation(_) do # bad arg or --help
    :help
  end

  def process(:help) do
    IO.puts """
    usage:  issues <user> <project> [ count | #{@default_count} ]
    """
    System.halt(0)
  end

  def process({_user, _project, _count}) do
    IO.puts "In process"
  end
end

Next, put the following in the mix.exs file for the project:

defp escript_config do
  [ main_module: Foothold.CLI ]
end
Escript is an Elixir utility that turns a compiled project into a single executable file (internally a zip archive of compiled BEAM files) that can run on any system with Erlang installed.
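The output below shows a warning that escript_config/0 is unused. To actually build a standalone script, you would also reference escript_config from the project section of mix.exs; roughly like this (a sketch, not what I ran):

```elixir
# In mix.exs, add an :escript entry to project/0:
def project do
  [
    app: :foothold,
    version: "0.1.0",
    elixir: "~> 1.13",
    escript: escript_config(),   # wire in the escript settings
    deps: deps()
  ]
end
```

After that, “mix escript.build” should produce an executable named foothold.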

Then we can compile our application with “mix compile” and run it with “mix run -e 'Foothold.CLI.main(["-h"])'”.

ericm@latitude:~/elixir.projects/foothold$ mix compile
warning: function escript_config/0 is unused

Compiling 2 files (.ex)
Generated foothold app
ericm@latitude:~/elixir.projects/foothold$ mix run -e 'Foothold.CLI.main(["-h"])'
warning: function escript_config/0 is unused

in main for Foothold
In MoreStrings.Reverse
In MoreStrings.actually_reverse_string with arg this is my string
gnirts ym si siht
In MoreStrings.Reverse.revv with arg this is my string for revv
vver rof gnirts ym si siht
this is a string to be duplicatedthis is a string to be duplicated
one copy one copy one copy 
usage:  issues <user> <project> [ count | 4 ]

Those are the basics to get a project up and running as you learn Elixir. As I stated before, I do not like having code floating in space, or making tiny edits to small files.

I think that deploying an Elixir app to production would take more steps, and you have to know more about the Erlang VM, but that should be enough to get you started.

You’re welcome.

Image from Jruchi II Gospel, a 12th-century manuscript housed at the National Research Center of Georgian Art History and Monument Protection, assumed allowed under public domain.

I Moved to Codeberg

I have moved my repos on Github to Codeberg.

I do not care if I sound like I am stuck in the 1990s, but I never liked Microsoft, and I never trusted them. I do not care how independent MS says Github is. If I can avoid using anything MS and use an alternative instead, I will.

I am tired of having to use Windows just because everybody else uses Windows. It is like Javascript: a status quo that everybody defends, nobody actually chose, and very few actually like. Using MS technologies makes people stupid [Note 1]. Try explaining a BPM server (like Camunda or jBPM) to “business people”. They get the concept until they ask how it integrates with Office and you tell them it does not. Then for some reason a concept they understood five seconds ago is beyond them. If you could solve any problem by putting another spreadsheet onto Sharepit, the world would not have any problems. Yet here we are.

And then there is Copilot. I refuse to work for free for one of the wealthiest companies on the planet. If Bill needs more lawyers to deal with the Epstein fallout, that is not my problem [Note 2]. Maybe he can sell some of the farmland he is buying. Not only are they stealing your work, they are putting you at legal risk: Copilot ignores licenses, and just gives you what it barfs up.

ChatGPT also ignores licensing issues, and insists it is never wrong. Sounds like Bill Gates and Microsoft to me. For anyone looking at ChatGPT: Do not touch it. It is made by OpenAI: which is owned by (among others) Microsoft, Elon Musk and Peter Thiel. If you use it, you will just make rich jerks richer [Note 3].

I started looking into alternatives after the Software Freedom Conservancy had a post on getting off of Github (blog post here, page here, post on Hacker News here). As the blog post points out, Copilot is not using MS code for input. Why not? MS owns them. If MS says there are licensing issues, all the repo owners on Github could say the same thing. Rules for thee, not for me. No thank you.

And now Windows is just a big spyware and ad machine, despite Microsoft having a captive audience and being one of the largest companies in the world (see article here). Yes, you can turn it off, but you should not have to. If Microsoft is so great, why do they constantly have to push their software on you? It’s my machine; I should be able to do what I want, and not see what I do not want. It is possible I am running something that is sending some telemetry somewhere, but I do not have to be perfectly consistent to want Microsoft out of my life.

So I moved all my repos over. I now have nothing on Github. I also moved where I store the repos on my own machine, so I had to change some paths in some config files. My Emacs config has a lot of URLs in comments for other packages, and there are a LOT of things on Github. The open source world is highly dependent on Github. At the next Austin Golang meetup I will ask how Go manages packages. I see a lot of github.com URLs in Go repos. For Java, I think the central Maven repository is run by a company called Sonatype, but I think the Apache Foundation can take it over if Sonatype goes down or goes under (see this article). Granted, if Apache runs Maven, that gives IBM more influence; can you see why I am looking beyond Java?

To migrate to Codeberg, you can follow the directions on Codeberg here. They even have a page that lets you migrate from another git repository, like Github, as long as the other repo is publicly web-accessible. You just put in the other repo’s URL, and it does it all for you. Granted, I have no idea what is to stop a person from migrating someone else’s repo and claiming it as their own. But I only did mine.

I like the fact that they display my name as “EMacAdie” and not “emacadie” like Github. I guess there are some people in Germany working on Codeberg with case sensitive names. Thanks, Herr von Cognizant, whoever you are.

Codeberg says they have enough money for ten years. Still, I might donate to them in the future. Unless the Emacs Mastodon server needs money.

You decided to bend over
Because Bill and Steve told ya
The new guy is named Satya
And if you're still on Windows, he's got ya

You’re welcome.

Note 1: You don’t think contact with MS makes people stupid? Look at his ex-wife. She got big fancy degrees, but then she married Bill Gates.

Most people never really admired Gates. People just liked his money.

On the other hand, he is not one of the many billionaires trying to overturn a legitimate election and cares about energy and climate change, so I guess he’s not ALL bad. Just 99% bad.

Note 2: To give you an idea of Bill Gates’ ethics and intelligence: His first personal investment manager was a convicted felon (see Forbes article here). Gates hired him post-conviction. Even though anyone with two brain cells could tell you that is as bad an idea as it looks, he had to be told by his mother this was a bad idea. He was an adult at the time. I wonder if she asked for a refund from the private school she sent him to. How do such stupid people get to be wealthy? Also: The first manager has not done so well since he got replaced per the linked article.

Note 3: Added 2023-03-31: Elon Musk is a founder of OpenAI, but not an owner. He and other people released an open letter saying there should be a pause in AI research.

Image from the Munich Serbian Psalter, a 14th-century manuscript housed in the Bavarian State Library, allowed under CC0 1.0 license.

2023-02 Austin Emacs Meetup

There was another meeting a few weeks ago of EmacsATX, the Austin Emacs Meetup group. For this month we had no predetermined topic.

#1 was one of the organizers. He used to live in Austin, but now lives in East Texas.
#2 was a developer near Dallas. He was a power user of IntelliJ, but now uses Doom.
#3 was one of our developers in OKC (Still the Artist Known as Number Three).
#4 did not say much; their name was unfamiliar to me.
#5 was one of the organizers, and formerly worked for the City of Austin.
#6 was a guy from San Francisco, who also did not say much.
#7 was our professor from OKC.
#8 was from Seattle. I think he attended in 2022-12, and was trying to transition from VS Code to Emacs.
There was a #9 and a #10, but they did not say anything while I was on.

I joined a bit late, and there was a lot of talk about running a meeting. I think #6 is involved in the Emacs group in San Francisco. He said that running a meeting is a lot of work. Someone mentioned recording the meeting, and that was shot down. I think a lot of people did not like being recorded unfiltered. Granted, Emacs users are even more intelligent, sophisticated and attractive than other IT folks, even readers of Tales From the Jar Side. EmacsSF does have a Youtube channel, but there are some gaps. I do not remember if #6 said why they stopped recording or if he had any part in the decision. Maybe they just got sick of doing it.

#8 said it is hard to get into Emacs. #1 recommended resources: YouTube, Sacha Chua, ChatGPT. There were some suggestions about how to discover things in Emacs: C-h o (which runs describe-symbol), and the info pages. #7 mentioned the Emacs Buddy initiative, through which you can connect with an experienced Emacs user. You can find the web page about it here, and an EmacsConf22 talk about it here. I have still not gotten around to watching the videos from prior EmacsConf years.

A lot of the meeting was #2 and #3 sharing their screens showing the rest of us how they use Emacs and Org to manage REST requests. They use different languages inside the Org files to make the requests and to process the results. I have to admit sometimes I was a bit lost during their demos; their Emacs-fu is very powerful.

They both mentioned an Emacs package called verb to help manage requests. #2 uses shell, awk and Python to make the requests, then transforms the JSON result into edn (extensible data notation) (pronounced like “Garden of edn”) and works with it using Clojure in a REPL inside Emacs. He also changed his file to make a request with curl. #3 had an elisp function inside javascript to populate his JSON request.

#3 talked about verb using the header line. I honestly had never heard of the header line. The mode line is the line at the bottom, and every config has that; the header line is like a mode line at the top. I think most configs do not use it or disable it. Prelude does not have it. My config (based on an Emacs config for Clojure by Daniel Higginbotham, aka flyingmachine) does not have it either (see Note 1 below). I think the mode line has always been in Emacs, and the header line was added more recently.

#3 also mentioned which-key: per its web page it is a package “that displays available keybindings in popup”. #3 says he lives in Org mode, and uses it to keep notes for meetings; he is an inspiration to us all.

#8 said he loves and hates VS Code: it is easy to use, but he is having a hard time getting into Emacs. I think he might be trying to do too much at once with Emacs. #1 mentioned you do not have to open PDFs in Emacs if you do not want to. #2 said many people see Emacs as an editor, tool, or IDE, and while it is those things, ultimately it is a Lisp REPL. I wish I had heard what he said after that, but then my power went out. Perhaps next time he will complete the thought.

You’re welcome.

Note 1: Prelude and flyingmachine’s Emacs config may have changed since I last downloaded them. My version of Prelude is from a year ago, and I have been altering flyingmachine’s config from about three years ago.

I give people numbers since I do not know if they want their names in this write-up. Think of it as the stoner’s version of the Chatham House Rule. I figured that numbers are a little clearer than “someone said this, and someone else said that, and a third person said something else”. Plus it gives participants some deniability. People’s numbers are based on the order they are listed on the call screen, and the same person may be referred to by different numbers in different months.

I am not the official spokesperson for the group. I just got into the habit of summarizing the meetings every month, and adding my own opinions about things. The participants may remember things differently, and may disagree with opinions expressed in this post. Nothing should be construed as views held by anyone’s employers past, present or future. That said, if you like something in this post, I will take credit; if you don’t, blame somebody else.

Image from Mungi Gospels, an 11th-century Armenian manuscript housed at the Matenadaran Institute of Ancient Manuscripts in Yerevan; image from Wikimedia, assumed allowed under public domain.