Trying Out GPT4All

I tried out GPT4All. It is an app that can run an LLM on your desktop. Some of the models it can use allow the output to be used for commercial purposes. The only downsides are that it is not very fast and it makes my CPU run hot. The project provides installers for Linux, Apple, and Windows (or, as I like to call it: Win-Don’t).

I found out about it while looking into alternatives to ChatGPT and OpenAI. There are a few videos about it on Matthew Berman’s channel, including an interview with the CEO of Nomic AI, the company that started the GPT4All project. I think it is great that people are making alternatives to ChatGPT and OpenAI.

I first downloaded it a couple of weeks ago. The installer itself worked fine, but I had to change the permissions to get it to run: chmod 755 gpt4all-installer-linux.run. I got an error when I tried to run the program: bin/chat: error while loading shared libraries: libxcb-cursor.so.0: cannot open shared object file: No such file or directory. I installed the library as root: apt install libxcb-cursor0. After that it worked fine.
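In case it helps, here are those install steps gathered in one place. This is just what I ran at the time; the installer file name will change with newer releases, and on some systems you would use sudo instead of switching to root.

chmod 755 gpt4all-installer-linux.run
./gpt4all-installer-linux.run
# if the chat program fails with the libxcb-cursor error, install the missing library as root:
apt install libxcb-cursor0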

I downloaded the installer again today and re-installed to see if anything was different. I did not get the libxcb-cursor error this time, though I suspect I would have if I had not already installed the library. Now the program lets you choose a model when you first run it. Before, the model file had to be in the same directory as the executable, and if there was more than one model file it would pick the default. When I ran it the first time, I could download a couple of models, but I could not get past that screen; I had to exit and restart the app to actually use the models. There is a list of the models available here (historical commit here). The output of the default model can be used in commercial applications. Most of the other models are based on LLaMA, and commercial use of the output is prohibited.

The new version also gives you the option of sending feedback on the responses back to the GPT4All project.

The model files range from about 3.5 GB to 7.5 GB. I have downloaded several model files over the past few weeks, and I have used up as much hard drive space in two weeks as I did in five years on my recently deceased laptop.

While the application is idle it does not use a lot of resources, but when it is generating output the CPU usage and temperature both spike. I do not know if I will be running this on a regular basis. My Meerkat has an Intel i5-10210U CPU (4.20 GHz, 6 MB cache, 4 cores, 8 threads) with 64 GB of memory. The GPU is Intel, so I do not think I will be running a lot of PyTorch.

When I was using the version of the software from two weeks ago with the ggml-gpt4all-j-v1.2-jazzy model file, I asked it to write some sample sales emails for companies in different industries. It kept writing emails asking for a sample sales email instead of writing an actual sales email. I tried the new ggml-gpt4all-j-v1.3-groovy model (“Current best commercially licensable model based on GPT-J and trained by Nomic AI on the latest curated GPT4All dataset”), and it did better, but the results were pretty similar for each one. They all started with “Exciting news!” and offered a “wide range of” something. I read “Exciting news!” so many times, I started hearing the voice of Professor Farnsworth in my head. Here is the output from the prompt “Write an email from a consulting company whose software developers use Emacs”:

Subject: Exciting news! Our software developers are now using Emacs.

Dear [Recipient’s Name],
I hope this email finds you well. My name is [Your Name] and I am the owner of [Company Name]. We are excited to announce that our software developers are now using Emacs as their primary text editor.

As a company that specializes in software development, we understand the importance of having the best tools for the job. That’s why we offer a wide range of software development tools, including Emacs. Our developers are trained to use the best tools available, and we are always looking for new clients to hire.

If you are interested in learning more about our services, please don’t hesitate to contact us. We would love to discuss your project and see how we can help you achieve your goals.

Thank you for considering [Company Name]. We look forward to hearing from you soon.

I have to agree, Emacs is the best tool for just about any job.

I tried the same prompt with the same model on the new version, and I got this:

Subject: Help with Emacs – Our Developers Need Assistance

Dear [Name],

I hope this email finds you well. We are writing to inquire about your experience using the popular text editor, Emacs. As we understand that you work as a developer for our esteemed client, we would like to offer some assistance in resolving any issues you may be facing while working on their projects.

We have noticed that many of our clients often struggle with certain aspects of using Emacs, such as navigating through files efficiently, finding specific lines of code, or even understanding basic syntax. These challenges can lead to frustration when trying to accomplish tasks within the program.

To help alleviate these frustrations, we would love to invite you to participate in a webinar where we will provide step-by-step instructions on how to effectively navigate and utilize the features of Emacs. This webinar will cover topics such as file navigation, search functions, and programming language support.

If you are interested in participating, please let us know by responding to this email. If not, no need to worry, we completely understand if you prefer to continue using another tool. However, we believe that our webinar could greatly benefit you and your team members who also work on projects involving Emacs.

Thank you for considering our invitation. We look forward to hearing back from you soon.

Best regards,
[Your Name]

Not as positive. I selected the option to regenerate output, and it was just one sentence that did not even mention Emacs.

I tried the prompt with the model ggml-gpt4all-l13b-snoozy, which is “Current best non-commercially licensable model based on Llama 13b and trained by Nomic AI on the latest curated GPT4All dataset”, and I got this:

Subject: Emacs Software Development at [Consulting Company]

Dear [Client],

I hope this email finds you well. My name is [Your Name], and I am the Director of Software Development at [Consulting Company]. I am writing to express our excitement about the opportunity to work with you on your upcoming project.

At [Consulting Company], we pride ourselves on our use of Emacs as our primary text editor. Our developers have found that Emacs provides a powerful and flexible platform for software development, and we believe that it will be an asset to your project as well.

We understand that your project is of the utmost importance, and we are committed to delivering a high-quality solution that meets your needs. Our team of experienced developers will work closely with your team to ensure that your project is completed on time and within budget.

If you have any questions or concerns, please do not hesitate to reach out to us. We look forward to working with you and delivering a successful project.

Best regards,

[Your Name]
Director of Software Development
[Consulting Company]

I agree that “Emacs provides a powerful and flexible platform for software development”, and that it is an asset to any project.

Some of the responses interpreted the prompt the way I intended (“We are amazing and Emacs is amazing and we can make amazing software for you with Emacs”), and one was offering to teach developers how to use Emacs. I tried a couple more times, and again the results were different with the same prompt.

There is a project that provides a command line interface to the models called LlamaGPTJ-chat, which I might get to in a few more days. I have tried a couple of other CLI programs, and they also use a LOT of CPU.

Michael Finney wrote about his experience with GPT4All here. Per his article, GPT4All will soon have an API that can be called and run on a local server. I joined the GPT4All discord, but I have not spent too much time there.

I do not know if I will be using this too much since it puts a lot of stress on my system. I will see if anyone is running models on the cloud and find out how much it costs. But I support anyone who will give the world alternatives to OpenAI.

You’re welcome.

I am not using an image of someone writing because while this post mentions Emacs, it is not about Emacs.

Image from Madrid Skylitzes, a 12th century manuscript of Synopsis of Histories by Greek historian John Skylitzes; manuscript housed at National Library of Spain; manuscript information here. Image from World Document Library, licensed under CC BY-NC-SA 4.0.

Reactions To Recent AI Developments

There are a variety of reactions to what has been happening in AI lately.

Some of them are depressing. All of the people who will not say the sun comes up until Microsoft Teams or Office says it does are doing the same thing with AI. They think it is all about OpenAI and ChatGPT. MS owns part of OpenAI. LLMs are one of the biggest changes to come along in technology in a decade. We should not pass up this chance to prevent the company that has made technology suck for the past three decades from having any more influence in our lives or society [Note 1].

Maybe I should not be shocked by the immensity of human stupidity, but I do not understand why people are not grabbing this chance to get Microsoft out of our lives with both hands.

I see this a lot at my employer [Note 2]. There the unholy trinity is OpenAI, ChatGPT, and Microsoft Azure. There are a lot of higher-ups who just push whatever garbage vendors are trying to sell, regardless of whether or not any of the companies that we are supposed to be helping (aka “paying clients”) want any of it. I do not understand why these people are paid lots of money to look at “new technology”, only to turn around and say “More Oracle! More SAP! More Microsoft!” A few years ago, a lot of them were pushing blockchain, which has gone nowhere. And none of these block-chumps admit that they were wrong about that. [Note 3]

Now these shysters are pushing the metaverse, even though literally the only person on the planet who wants it is Mark Snakerberg. One reason I think he wants it is that if they can get you to go to their site with their gear, then they own the whole experience. Right now you have to use a browser on a PC or an app on a phone to use Facehook, and Meta does not own an OS or a browser. But that is not anybody else’s problem. I think the other reason is that he is so stiff and robotic he is the only person who has an avatar that looks more human than he does, even without legs. Here is an article and discussion on Slashdot about retailers dumping the metaverse. I can see retailers using the metaverse: people might want to try on new clothes without actually changing several times. If retailers and Disney do not want the metaverse, then it really is dead. I have heard the headsets are too heavy. Do you want something wireless surrounding your entire head? (Hey, maybe that’s why Snakerberg keeps throwing money at something nobody else wants.)

I wonder if he is regretting giving Sandberg the boot.

Even the Emacs community is gung-ho about OpenAI and their products. I noticed that Sacha Chua started a section for AI in her weekly Emacs news posts. The first AI mention I could find was for a GPT package on 2022-11-21 (as of 2023-04-24, it looks like that package is for OpenAI models only). The section started showing up on 2022-12-12, on and off until February, and then consistently since then. There are a few packages that say they will incorporate other LLMs as more are available. Most of the AI packages are just more wrappers around ChatGPT. There are a few posts on the subreddit asking about open source alternatives, and one about OpenAssistant (which I mention below). The posts are here, here (with a response from Irreal here) and here. I know that OpenAI’s products are more mature than others at the moment, but it seems like even the open source crowd is going all-in on the billion-ai-res’ shiny object. [Note 4]

It will be interesting to see how the corrupt-o-currency crowd reacts to AI. A lot of people insisted they were in corrupt-o-currency “for the technology”. I think AI will show us if those people are really interested in technology, or if they are stupid, or grifters. Now we have something that really IS interesting. Corrupt-o-currency tech was not that interesting: increment a number going into a hash function until your output starts with a particular number of zeroes. That is really it. Immutability is not the default in a lot of programming languages, and it can make data easier to manage and reason about, but it is not unique to corrupt-o-currency. And it’s a pretty dumb reason to fool yourself.

Blockchain was a stalking horse for bitcon that promised a glorious future that never arrived. Every time I watched or read anything about blockchain, it was always vendors pushing products, and there were never any user testimonials. Contrast that with AI: with AI we see users actually trying it out. There are a lot of people talking about AI who are not trying to sell you something, while blockchain/NFT/bitcon was nothing but grifting. We do not see vendors going on about “someday”, or AI bros saying, “Have fun staying dumb.” We do see people implementing models to compete with ChatGPT (more on that below).

A lot of corrupt-o-currency advocates say it’s still early days for their magic beans. They have been saying that for ten years, and there have been a LOT of people pushing it and trying to find a use for it. They might counter that this is not the first time people thought AI was going to change the world, and there were a couple of AI winters (1974–1980 and 1987–1993), so could there be a corrupt-o-currency winter as well? One difference is that nobody went to jail for AI fraud. There were no “rug pulls” in AI back in the day. AI was a series of attempted solutions that until recently never solved the problems they were aimed at. Block-coin was a technology looking for a problem to solve (besides money laundering and selling drugs). The whole concept of “digital assets” makes no sense at all. They do not have income streams like companies do, and they have no intrinsic use beyond financial transactions like commodities do. It is like someone found a way to combine Beanie Babies and coin flips and people decided to start gambling on them. Aside from helping pedophiles launder North Korean drug money, none of this serves any purpose.

AI has been around for a while, but I think that it was the release of ChatGPT 3.5 in November that really changed things. The first time we talked about ChatGPT at the EmacsATX meetings was in December.

Right now AI and LLMs are en fuego. Not everyone is just pushing incumbent vendors and their products. There are a lot of projects working on open source LLMs. I wrote about the idea that people might try to run LLMs locally, and there are projects working on that.

Some of them are based on Facebook’s LLaMA model, so they cannot be used for commercial purposes. One is llama.cpp. There is one in Golang called llama.go, and another Golang project called LocalAI. One goal of these projects is to be able to run on a CPU, instead of needing a large cluster of GPUs. There is also a subreddit dedicated to running LLaMA locally. Some experts think that making models with more parameters is not the way to make progress, and that algorithms are where progress will occur. Will this mean that the need for GPUs will plateau, and CPUs will become more important in AI? I don’t know. I tried using some model files from Hugging Face that are based on LLaMA, like Stanford’s Alpaca, but so far I have not been able to get anything working with that one. I think it only works with the original LLaMA files.
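To give a rough idea of what running one of these CPU-only projects looks like, here is approximately how llama.cpp was built and run when I looked at it. The binary name and flags have changed between versions, and the model path below is just a placeholder; you have to supply your own converted and quantized model file.

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
# -m points at your own ggml-format model file, -p is the prompt, -n is how many tokens to generate
./main -m ./models/7B/ggml-model-q4_0.bin -p "Write an email from a consulting company whose software developers use Emacs" -n 256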

There is a project called RedPajama from Together.xyz to build a completely open source counterpart to the LLaMA model. Their plan is to have something that can be run on a consumer-grade GPU.

An interesting project is GPT4All. This project can be run locally. It is made by a company called Nomic.AI; their main product makes visual representations of AI datasets. I found out about it from Matthew Berman’s Youtube channel. It uses an older model from EleutherAI called GPT-J (Hugging Face page here, Wikipedia page here). I am part of the Discord for this project, so I will keep an eye on this. The CEO of Nomic was interviewed by Matthew Berman, and he talked about how they went through a lot of effort to get a dataset that is completely open source and can be used for research and/or commercial purposes. He said that he thinks OpenAI has a lot of proprietary data in their dataset, partially due to how they created it, partially due to people uploading their own internal data into it. He predicts there will be a lot of lawsuits over AI data for years.

I got GPT4All to work locally, but I did get an error the first time: /bin/chat: error while loading shared libraries: libxcb-cursor.so.0: cannot open shared object file: No such file or directory. A Google search led me to an issue on their Github repo. I did not need to run all those commands, I only needed to run this one: apt install libxcb-cursor0.

Another project is OpenAssistant, made by a non-profit in Germany called LAION (Large-scale Artificial Intelligence Open Network, pronounced like the large feline “lion”) (Github page here, Hugging Face page here). I think the goal is not only to use their dataset to respond to prompts, but to be able to retrieve external data (like Google search results) in real time. They want to release something that can be run locally, but they want the main interface to be their website. One of the ways they developed their dataset was by having humans ask questions, other people answer questions, and still more people rate the questions and the answers; someone could perform all three roles for different questions.

I found out about this from the YouTube channel of one of the leaders of the project named Yannic Kilcher. At first I did not think he was a real person but an AI-generated avatar because there are some jumps and jerks in his speech. Then I looked at a few live streams, and I think that for his videos he edits out all the filler words and pauses. I am in their Discord and will keep an eye on this.

Another open source project is StableLM from Stability AI. They made the image generator Stable Diffusion. I thought they were all about images until their LLM came out. Right now I do not know too much about this one.

One possible clue to what might happen is a science fiction project called Orion’s Arm (site here, Wikipedia page here). Unlike most sci-fi, there are no humanoid aliens. There are baseline humans, humans augmented with various levels of technology, and AIs of various levels, up to almost god-like omniscience. Some people have put some thought into how humans will live with AI beyond just “AI will kill us all.” Interestingly, some of the images on that site are made with the Midjourney AI.

One thing about all of this that depresses me is I never got the chance to work with technologies that really interest me. When I read about or talk to Lisp and Smalltalk developers of yore, it seems like there was an age of heroes when gods and men strode the earth together. Now the world is full of pinheads pushing vendor nonsense, and people too stupid and too lazy to use something not made by Microsoft. Let’s get this company out of our lives forever.

Another thing that depresses me is I bet all the alternative medicine woo-woo people are not worried about their jobs.

You’re welcome.


Note 1: If you think OpenAI is actually open because it has the word “open” in it, you are probably the same sort of person who thinks SharePoint is good for sharing because it has the word “share” in it. Do us all a favor: go sit in a rocking chair right now and get all your nutrients through a straw. You are about as useful to society as someone already in that state.

Note 2: I do not speak for my employer, nor do they endorse this blog, as per the About page. Per the Disclaimer page: if any of this bothers you, go jump off a cliff.

Note 3: I don’t want anybody to think I have nothing good to say about my employer. Granted, I am currently making a living with a proprietary software tool that I really do not like. When I joined I knew nothing about it, and they paid me while they trained me how to use it. A lot of companies would never do that. And there were a couple of times in the past decade when someone found something else for me to do while things were slow. It is a big company. I think it has more than half a million employees. There are some smart people there, and frankly some dumb ones. It seems like too many decisions are made for the wrong reasons: inertia, the sunk cost fallacy, and this-is-what-the-vendor-told-us-to-do. Why can’t these companies founded by billionaires do their own marketing?

The company does actually do some useful things. They do a lot of work for various governments and large corporations around the world. Entities that actually make products and services people use. Maybe startups have more interesting technology, but I never bought into the whole startup religion here in Austin. There are a lot of people who seem to want to work for startups just because they are startups. Asking them what their company does or why anyone should care is a question a lot of them do not seem to like and sometimes do not have an answer for. There were a few people that I would see at some meetups maybe once a year. Every time I saw them they were at a different company that I had never heard of. They all just seemed to fade away. Maybe they made more money than me, but what are they really doing for the world? If any of those companies are still around, they are just leaving messes for someone else to clean up.

Note 4: I do not want any of this to be interpreted as disparaging Sacha Chua or all the things she does for the Emacs community. I think her posts simply reflect what is happening in the Emacs community. There are some people in the Emacs community who are as leery of AI consolidation as I am.

Image from The Codex of Fernando I and Doña Sancha, aka Beatus Facundus, an 11th century manuscript of ‘Commentary on the Apocalypse‘, written in the 8th century by Beatus of Liébana; manuscript created at the monastery of Santo Toribio de Liébana (Wikipedia page here), currently housed at the National Library of Spain; manuscript information here. Image from World Document Library, licensed under CC BY-NC-SA 4.0.

Notes on LLMs and AI

There has been a lot of press lately about AI, OpenAI, GPT-${X}, and how AI will affect the world. For the time being, I plan on not looking further into AI (unless my current or a future employer compels me to). I think that right now there is not enough diversity in the vendor population. I also have some thoughts on how it will affect things going forward.

I do not like to get too meta in my posts, but I have been writing this on and off for over a week, and I want to get it out before too much more time passes. I am still learning about this stuff, like what is the difference between models, weights, and datasets; some articles use a project name to refer to all three components. The LLaMA debacle is the textbook case: some stuff was released, some was leaked, there are projects that are based on Meta’s work, some that seem to be clean-room implementations, so how it all fits together is murky to me.

GPT-${X} by OpenAI is taking the world by storm, particularly ChatGPT. It was the focus of a recent EmacsATX meeting. It is disruptive in the sense that it has capabilities beyond prior AI technology, and will probably have a profound effect on society going forward. But in another sense, it is the opposite of disruptive; it consolidates power and influence in OpenAI. One of the owners of OpenAI is Microsoft, and for me that makes anything by OpenAI something to avoid. They are not doing this for you.

I think a lot of people do not realize that when they play around with the OpenAI prompts in ChatGPT, they are training the OpenAI models and making them better and more powerful. That power can be used by other users of the tool: not only the vendors, but also your competitors. There have been reports of confidential data and PII being put into ChatGPT, and then extracted by other users later. People need to be more careful. And stop making the rich and powerful richer and more powerful. A lot of people in corporate America might work at companies that are independent on paper, yet they all act like they want to be subsidiaries of Microsoft. Start looking out for your own company and your own career and your own life.

The GPT-${X} products were used in making GitHub Copilot. I mentioned Copilot when I posted I was moving from Github to Codeberg. It does not respect licenses, which could put a company at legal risk, and sometimes it “solves” a problem while violating stated constraints. GPT-${X} has the same issues: Who owns the training data? Who owns the output?

It is good to automate things, but could relying on AI too much make people stupider? A good point was brought up in the discussion about why MIT dropped SICP: When you rely on a black box, how do you know you can rely on the black box? I think we might be coming close to fulfilling a prophecy from Dune:

Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.

I think we should collectively make an effort to avoid anything by OpenAI, and anything Microsoft. I do not know how long Microsoft has been involved with OpenAI, but there are a few MS hallmarks: it is called “OpenAI” even though it is not open (they have been tight-lipped about how they trained their data), and when it is wrong it insists you are wrong. And when it is incorporated into MS products it has started pushing ads.

There are a few alternatives out there. There is a company called Hugging Face that I think provides datasets, different models and hosting for AI. I think you can provide your own data. There is a company called Lambda Labs which provides all your AI/GPU needs: cloud hosting, colocation, servers, workstations with terabytes of memory, and a very expensive and very nice looking laptop with Ubuntu pre-installed (a LOT more than System76, but it is nice to see more Linux laptops out there).

WRT software, there are some implementations of AI that are open source. NanoGPT can be run on a local system, although it might take a while. You can find the Github link here, and a link to what might be a fork on Codeberg here. It was started by Andrej Karpathy, who worked on autonomous driving at Tesla and has also worked at OpenAI.
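If you want to see what “run on a local system” means in practice, the nanoGPT README has a small character-level Shakespeare example that looks roughly like the steps below. This is a sketch based on my reading of the README; it assumes you have Python and PyTorch installed, and the script and config names may have changed since I looked.

git clone https://github.com/karpathy/nanoGPT
cd nanoGPT
python data/shakespeare_char/prepare.py
# training is the slow part, especially on a CPU-only machine
python train.py config/train_shakespeare_char.py
python sample.py --out_dir=out-shakespeare-char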

GPT is a type of artificial neural network known as a large language model, or LLM. Then Facehook/Meta released an LLM called Large Language Model Meta AI, or LLaMA, so now there are a few projects with names referring to South American camelids: llama.cpp (Github link here, Hacker News discussion here), and a fork of llama.cpp called alpaca.cpp (Github link here, Codeberg link here). Once they saw money going to someone else’s pockets, Stanford decided to get in on the act with their own LLaMA implementation, also called Alpaca. There is one called Vicuna (intro page here, Github link here). And, last but not least, Guanaco, which looks like a fork of Stanford’s Alpaca (Github repos here, page here). You would think AI researchers would come up with more original names rather than run a theme into the ground.

Note: I think Facebook/Meta did release some papers about LLaMA, and then some parts of it were leaked. The status of these projects is a bit unclear to me at the moment. Some of the projects mentioned cannot be used for commercial purposes. IANAL, but I think that llama.cpp and alpaca.cpp can since they are clean-room implementations and were not created with any assistance or collaboration with Meta. Stanford got some early access to LLaMA, so its project and Vicuna cannot be used for commercial purposes.

You can find some more info about open source AI here on Medium (archive here), and here on HN. I think the group EleutherAI is trying to be an open source counter to OpenAI.

There are a LOT of other AI projects out there, but a lot of them are just interfaces to Chat-GPT or DALL-E or something else from OpenAI, as opposed to a program you can run for yourself. A lot of the forks and clean-room/non-OpenAI models require a LOT of memory. Some need at least 60 GB. The mini I got from System76 can have up to 64GB. They have desktops that can go up to 1TB of memory, and servers up to 8TB. Granted, maybe something local will never catch up to OpenAI, but as a few comments in the HN discussion on llama.cpp pointed out: the open source models are becoming very efficient very quickly. Granted, some of the commenters said that AI might be out-of-reach for the hobbyist. But then all this stuff is doing is simulating a human.

So where does all this go next? Honestly, who knows, but I will share my thoughts anyway.

First off: I dismiss the doomsday scenario that AI will kill us all. Like the Wikipedia page on “pessimism porn” states: a lot of people like to predict disaster because it makes them feel smart, even if years go by and their predictions never come to pass. There are a lot of people with blogs and YouTube channels who are always predicting a stock market collapse, or who think we are about to become Weimar Germany if the price of a gallon of milk goes up one cent. They dismiss you if you cannot offer irrefutable proof that the world will NOT end, yet they insist their predictions are to be regarded as self-evident. Granted, maybe those are not the best arguments against Skynet, but I have dealt with a lot of people who confuse the strength of their convictions for logic. Sometimes the best prediction is that things will mostly continue as they are, just with more of something you do (or do not) like.

Since this will be a major change, there will be an effect on jobs. Some jobs will be lost. But there might actually be more jobs due to AI. Scott McNealy pointed out that making a system used to be a master’s thesis, and systems were pretty limited. Now we have powerful software that is easy to install. We have packages (like the JDK, Golang, and Elixir) that are powerful compilers and runtimes, far beyond what people thought was possible a few decades ago, yet they can be downloaded as tar or zip files that once expanded let people create robust, powerful software. Linux and these VMs have created a lot of technology jobs. I think AI might wind up creating more jobs, on net, than we have now.

Granted, it is possible that the jobs that get created are more soul-sucking than what we have. I joked on Mastodon that AI will not take your job; it will just take away the parts you like, leaving you with the parts you do not like.

I do hope all the More Bad Advice pinheads who all sound the same and think the answer to everything is to cut costs lose their jobs. I have had good and bad bosses, but honestly, a lot of people in Corporate America sound the same: asking when things will be done, going on and on about how important some arbitrary deadline they pulled out of thin air is, harping on about innovation yet only having the same tired ideas (piling on more work during the so-called “good times”, then cutting staff when things start looking shaky).

And there will be more people thinking the same. One thing that really grates on me is that we are told in technology that we have to be constantly learning new things. Yet the world is full of business pinheads who cannot conceive of not using Excel, and there are plenty of software developers who cannot conceive of doing something in a language that is not Javascript. I have a bad feeling that OpenAI will become the Third Pillar of Technology Stupidity.

Sadly, maybe that will be the way to stay employed. Be a Microsoft drone, a Javascript drone, or an OpenAI drone. I have met tech people older than me who said they could do things decades ago with Lisp and Smalltalk that most languages and runtimes still cannot match. I feel like we took a wrong turn somewhere.

That said, even if AI leads to more jobs, there could still be downsides. We are already seeing this: generative AI is already being used to craft more effective phishing emails. ChatGPT accused a law professor of sexual harassment (article here, HN discussion here). The HN comments have examples of AI making stuff up, but the professor gave a good summary: “ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper.” What if this is used for background checks and nobody verifies what the AI tells them? This could cause a lot of damage to people. Per the quote misattributed to Mark Twain, a lie can travel halfway around the world before the truth can get its boots on.

We should call AI “artificial inference”, because it mostly makes up stuff that sounds true. It just makes guesses about what seems logical. For a long time it was logical to think the earth is flat. Yet for some reason people think the output of AI is always true. Perhaps they are assuming that it must be true since it is based on large data and advanced technology. But sometimes the machine learning is just machine lying. Marissa Mayer said Google search results seem worse because the web is worse (articles here and here). People used to put content on the web to tell you things, and now they just want to sell you things. There is lots of junk on the web. I predict there will be a lot of junk in AI.

Microsoft is putting ads in Bing AI chat, which is already fostering distrust in AI (articles here and here). Unlike Google search ads, the ads in the chat are hard to distinguish from the rest of the results. If companies need to put ads in AI, then make them like Google ads. People realize that things need to be paid for. Intermingling ads with AI just ruins the AI. You do not need advanced AI to say something you are getting paid to say. Google has been able to serve ads based on user input since 2004.

I think AI will lead to a lot of artificial and misleading content. Not just text, but also audio and video. People might not be able to believe what they read, see or hear online. It could cause more cynicism and distrust in our society. Perhaps we will not get Skynet, just a slow decay and further fracturing of society.

AI could, of course, lead to massive job losses. A lot of people care more about cost than quality. And it is possible that after a time some of those jobs might come back. There is a post on Reddit (link here, HN discussion here) about a freelance writer who lost a gig to ChatGPT. (Another writer wrote an “AI survival guide”.) A few comments gave anecdotes of multiple job applications that all sounded the same, which the HR people realized were all done with AI. If more companies start using AI, a lot of websites will all start to be the same. A lot of people hate it when an article “sounds/feels like it was written by AI”. Perhaps the human touch will make a comeback. There is a joke I read somewhere:

An LLM walks into a bar.
The bartender asks, "What will you have?"
The LLM says, "That depends. What is everyone else having?"

Granted, it might be a while before jobs lost to AI come back, assuming they ever do. And not all of the jobs might come back.

I think that people who understand concepts will do better in the long run than people who just know a tool. At least, that is how things have been. It could be different this time. On the other hand, could an AI come up with “Artisanal Bitcoin”?

Software used to be done in binary or assembly, and over time the languages became more powerful, and the number of jobs increased. Software was always about automation, and there was always something to automate. Has that cycle stopped?

I am worried, but I cannot just yet get on board the Doom Train. I remember working at Bank of America in the 00s/Aughts/Whatever that decade is called, and we all thought that all our jobs would go to India and there would be No More Software Made In ‘Merica. That did not happen.

Or maybe it is all a bubble that will burst.

Maybe the AI is not as advanced as the companies are telling us. OpenAI does not publicize it, but they used people in Kenya to filter the bad stuff (Reddit discussions here, here and here, Time article here with archive here, Vice article here with archive here, another Vice article here with archive here). One major focus of the articles is that looking at all the toxic content was so traumatic for the workers that the company that got the contract ended it several months early. Looking at toxic content can wear on people. But isn’t the point of an AI to figure this stuff out?

My employer had us watch some videos on up and coming technology, and one of them was on AI. One of the people on the panel kept talking about how important it is to “train” and “curate” your data. They kept saying that over and over. And I had the same thought: isn’t that what the AI is supposed to do? They made it sound like AI was just a big fancy grep or SQL query.

Per the Vice articles, tech and social media companies have been using people in low-wage countries to flag content for years, while letting people think that their technology was so amazing. Perhaps ChatGPT is no different. I do not know if they have to re-flag everything for each version of GPT. I get the impression the data is trained when the AI is started up, and from there it is just repeating what it figured out. Does it actually learn in real time the way a human can? Can an AI make new inferences and be an old dog learning new tricks the way a human can, or does it just keep reinforcing the same ideas and positions the longer it runs? What if you train your data and the world changes? What if neo-nazis stop using triple parentheses as an anti-Jewish code today, and your training data is from two years ago? I guess you are just back to web search.

I think part of what is going on is hype. As Charlie Stross pointed out, it does seem interesting that we see the AI hype just starting as the corrupt-o-currency hype is winding down. The vulture capitalists need something new to sell.

Another issue is: will this scale going forward? Technology does not always progress at the same rate. We could be headed for another AI winter. Research into AI for autonomous driving has hit a wall (no pun intended).

And how will this scale? The human brain still has 1,000 times as many connections as GPT-4 has parameters. There is already a shortage forming for the chips used for AI. Is it worth it to burn the planet and use all that metal and electricity to chew through a lot of data…to do what? Simulate a human brain in a world with 8 billion people? Especially when a lot of the humans’ intelligence is not being used efficiently (see penetration of Lisp vs Windows).

That said, I don’t think AI will go away. If I could have one thing, I would like to see alternatives to OpenAI, particularly open source. It might be possible to run LLMs locally. Do you really need an AI that knows about oceanography? Most of us do not. I do not think that AI will kill us all (it is not clear to me how we go from chatbot to Terminator). But corporate consolidation in AI would be a tragedy.

I just need a job where I can use Emacs and call people stupid.

You’re welcome.

Image from an 11th-century manuscript housed in the Topkapi Palace in Istanbul, image from The Gabriel Millet Collection (collection page here), assumed allowed under public domain.

Making An Elixir Project

Now I will go over making an Elixir project. This is a continuation of my post about learning project structure and testing from the beginning when learning a new programming language.

Elixir took a bit more work. I made a project and I thought I was doing it correctly, but after a certain point every time I ran the tests it ran the app instead. I could not figure out why. So I started over. I followed along with a project in Dave Thomas’s Elixir book. He does not start a project until Chapter 13, which I think is odd. Why not start a project from the beginning?

Right now I do not know a whole lot about Elixir or the Elixir community or ecosystem, so this post might contain some opinions and speculations that will seem $INSERT_NEGATIVE_TERM to Elixir experts.

You can install Elixir with the asdf tool. It handles Elixir itself and its prerequisite, the Erlang language, but not the dependencies of your Elixir projects. Check the asdf Getting Started page to download and install it.

After you install asdf, you need to install the Erlang and Elixir plugins, and then install Erlang and Elixir themselves.


asdf
asdf plugin add erlang https://github.com/asdf-vm/asdf-erlang.git
asdf plugin-add elixir https://github.com/asdf-vm/asdf-elixir.git
asdf plugin list
asdf install erlang latest
asdf install elixir latest
asdf list
asdf list elixir
asdf list-all erlang
asdf list-all elixir

The tool to manage Elixir projects and dependencies is called Mix. To list all the commands, use “mix help”. You can find out more here and here. It is to Elixir what Maven or Gradle is to Java, or Leiningen is to Clojure. I think it is more like Gradle or Leiningen than Maven, because it is easier to add functionality to Mix than it is to Maven, just as it is easier to add functionality to Gradle and Leiningen than to Maven. I think the Phoenix web framework adds some Mix tasks. My installation of Elixir and Mix has some Phoenix tasks built in. I do not know if that is because whoever made the asdf package included them, or if they are part of all Elixir installations. I would be a bit surprised if the Elixir maintainers would include Phoenix and play favorites.

First make a directory for Elixir projects.


ericm@latitude:~$ mkdir elixir.projects
ericm@latitude:~$ cd elixir.projects/
ericm@latitude:~/elixir.projects$ 

Next, run Mix to make a new project:


ericm@latitude:~$ cd elixir.projects/
ericm@latitude:~/elixir.projects$ mix new foothold
* creating README.md
* creating .formatter.exs
* creating .gitignore
* creating mix.exs
* creating lib
* creating lib/foothold.ex
* creating test
* creating test/test_helper.exs
* creating test/foothold_test.exs

Your Mix project was created successfully.
You can use "mix" to compile it, test it, and more:

    cd foothold
    mix test

Run "mix help" for more commands.
ericm@latitude:~/elixir.projects$ cd foothold/
ericm@latitude:~/elixir.projects/foothold$ ls
lib/  mix.exs  README.md  test/

Elixir uses modules instead of classes, and they can be grouped under a common prefix. I am not sure if Elixir calls these “namespaces”, but that is how I think of them. I want to make one for my project called “foothold”. I ran “mix help”, but none of the task summaries looked like what I wanted, so we have to go old-school and do this by hand.


ericm@latitude:~/elixir.projects/foothold$ mkdir lib/foothold
ericm@latitude:~/elixir.projects/foothold$ mkdir test/foothold

As with our Golang project, make a package (or namespace, or prefix) for some modules that we will write.


ericm@latitude:~/elixir.projects/foothold$ mkdir lib/more_strings
ericm@latitude:~/elixir.projects/foothold$ mkdir test/more_strings

We will make a couple of files with functions that duplicate strings and reverse strings, and we will include some tests for them. The modules will have the “ex” extension. The tests will have the “exs” extension because they are scripts; if we compile our app, the tests will not be included.

Make a file lib/more_strings/duplicate.ex:


defmodule MoreStrings.Duplicate do

  def duplicate_string(arg_string) do
    String.duplicate(arg_string, 2)
  end

  def make_three_copies(arg_string) do
    String.duplicate(arg_string, 3)
  end

end

Make a file test/more_strings/duplicate_test.exs:


defmodule MoreStrings.DuplicateTest do
  use ExUnit.Case          # bring in the test functionality
  import ExUnit.CaptureIO  # And allow us to capture stuff sent to stdout
  doctest MoreStrings.Duplicate
  alias MoreStrings.Duplicate, as: MSD

  test "try duplicate_string" do
    assert "andand" == MSD.duplicate_string( "and" )
    refute "andanda" == MSD.duplicate_string( "and" )
  end

  test "try make_three_copies" do
    IO.puts "In the test for make_three_copies"
    assert "zxcvzxcvzxcv" == MSD.make_three_copies( "zxcv" )
  end
end

Make lib/more_strings/reverse.ex:


defmodule MoreStrings.Reverse do

  def reverse_stuff do
    IO.puts "In MoreStrings.Reverse"
  end

  # why doesn't it like this?
  def actually_reverse_string(arg_string) do
    IO.puts "In MoreStrings.actually_reverse_string with arg #{arg_string}"
    IO.puts String.reverse(arg_string)
    String.reverse(arg_string)
  end

  def revv(arg_string) do
    IO.puts "In MoreStrings.Reverse.revv with arg #{arg_string}"
     IO.puts String.reverse(arg_string)
  end
end

Make test/more_strings/reverse_test.exs:


defmodule MoreStrings.ReverseTest do
  use ExUnit.Case          # bring in the test functionality
  import MoreStrings.Reverse
  import ExUnit.CaptureIO  # And allow us to capture stuff sent to stdout

  # alias MoreStrings.Reverse, as: MSR
  # import MoreStrings.Reverse

  test "try reverse" do
    IO.puts "In the test try reverse"
    # assert "dolleh" == MSR.actually_reverse_string( "ahello" )
    assert MoreStrings.Reverse.actually_reverse_string("ahello") == "olleha"
    refute actually_reverse_string( "hello" ) == "dollehd"
  end

  test "ttttttt" do
    IO.puts "In test tttttt"
    assert 4 == 2 + 2
  end

end

Now compile the app with “mix compile” and run the tests with “mix test --trace”. Adding the --trace option will print a message to the console for each test being run, even if you do not have any IO.puts statements.


ericm@latitude:~/elixir.projects/foothold$ mix compile
Compiling 3 files (.ex)
Generated foothold app
ericm@latitude:~/elixir.projects/foothold$ mix test --trace
Compiling 3 files (.ex)
Generated foothold app
warning: unused import ExUnit.CaptureIO
  test/more_strings/reverse_test.exs:4

warning: unused import ExUnit.CaptureIO
  test/more_strings/duplicate_test.exs:3


MoreStrings.DuplicateTest [test/more_strings/duplicate_test.exs]
  * test try duplicate_string (0.02ms) [L#7]
  * test try make_three_copies [L#12]In the test for make_three_copies
  * test try make_three_copies (0.03ms) [L#12]

FootholdTest [test/foothold_test.exs]
  * doctest Foothold.hello/0 (1) (0.00ms) [L#3]
  * test greets the world (0.00ms) [L#5]
In the test try reverse

MoreStrings.ReverseTest [test/more_strings/reverse_test.exs]
In MoreStrings.actually_reverse_string with arg ahello
  * test try reverse [L#9]olleha
In MoreStrings.actually_reverse_string with arg hello
olleh
  * test try reverse (0.1ms) [L#9]
  * test ttttttt [L#16]In test tttttt
  * test ttttttt (0.02ms) [L#16]


Finished in 0.1 seconds (0.00s async, 0.1s sync)
1 doctest, 5 tests, 0 failures

Randomized with seed 154594

Run “iex -S mix” in the root of your project to use your modules. IEx is the interactive Elixir shell that comes with Elixir. You can type in Elixir code and get results. It is sort of like un-automated unit tests. You can end the session by hitting Control-C (or as we say in Emacs land: C-c) and then “a” and the return key.


ericm@latitude:~/elixir.projects/foothold$ iex -S mix
Erlang/OTP 25 [erts-13.0.2] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [jit:ns]

Interactive Elixir (1.13.4) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> MoreStrings.Reverse.actually_reverse_string("ahello")
In MoreStrings.actually_reverse_string with arg ahello
olleha
"olleha"
iex(2)> alias MoreStrings.Duplicate, as: MSD
MoreStrings.Duplicate
iex(3)> MSD.duplicate_string( "and" )
"andand"
iex(4)> MSD.make_three_copies( "zxcv" )
"zxcvzxcvzxcv"
iex(5)> 
BREAK: (a)bort (A)bort with dump (c)ontinue (p)roc info (i)nfo
       (l)oaded (v)ersion (k)ill (D)b-tables (d)istribution

Now add an external dependency to the project. The package we will add is Decimal, a package for arbitrary precision decimal arithmetic (Hex page here, documentation here, Github repo here). First we need to add a reference to it in our mix.exs file in the “defp deps” section:


defp deps do
  [
    {:decimal, "~> 2.0"}
    # {:dep_from_hexpm, "~> 0.3.0"},
    # {:dep_from_git, git: "https://github.com/elixir-lang/my_dep.git", tag: "0.1.0"}
  ]
end

Here are the Mix tasks associated with dependencies:


mix deps              # Lists dependencies and their status
mix deps.clean        # Deletes the given dependencies' files
mix deps.compile      # Compiles dependencies
mix deps.get          # Gets all out of date dependencies
mix deps.tree         # Prints the dependency tree
mix deps.unlock       # Unlocks the given dependencies
mix deps.update       # Updates the given dependencies

Run “mix deps.get” to fetch the dependencies and “mix deps.compile” if it makes you feel better:


ericm@latitude:~/elixir.projects/foothold$ mix deps.get
Resolving Hex dependencies...
Resolution completed in 0.033s
New:
  decimal 2.0.0
* Getting decimal (Hex package)
ericm@latitude:~/elixir.projects/foothold$ mix deps.compile
==> decimal
Compiling 4 files (.ex)
Generated decimal app
ericm@latitude:~/elixir.projects/foothold$ mix deps
* decimal 2.0.0 (Hex package) (mix)
  locked at 2.0.0 (decimal) 34666e9c
  ok

Add a module that depends on Decimal in lib/foothold/decimal_stuff.ex, and make a few calls so we have something to test:


defmodule Foothold.DecimalStuff do
  def do_decimal_add(a, b) do
    Decimal.add(a, b)
  end


  def do_decimal_subtract(a, b) do
    Decimal.sub(a, b)
  end

  def do_decimal_compare(a, b) do
    Decimal.compare(a, b)
  end

end

Add the following to test/foothold/decimal_test.exs


defmodule Foothold.DecimalTest do

  use ExUnit.Case
  import Foothold.DecimalStuff 
  import Decimal

  test "test do_decimal_add" do
    assert Decimal.add(2,3) == do_decimal_add( 2, 3 )
  end

  test "test do_decimal_compare_lt" do
    assert :lt == do_decimal_compare(1, 2)
  end

  test "test do_decimal_compare_gt" do
    assert :gt == do_decimal_compare( 2, 1 )
  end

  test "test do_decimal_subtract" do
    # assert 3 == do_decimal_subtract( 5, 2 )
    # assert Decimal.subtract( 5, 2 ) == do_decimal_subtract( 5, 2 )
    assert Decimal.new( 3 ) == do_decimal_subtract( 5, 2 )
  end

  #  def do_decimal_subtract(a, b) do
  #   def do_decimal_compare(a, b) do
end

Now run the tests again:


ericm@latitude:~/elixir.projects/foothold$ mix test --trace
==> decimal
Compiling 4 files (.ex)
Generated decimal app
==> foothold
Compiling 1 file (.ex)
Generated foothold app
warning: unused import Decimal
  test/foothold/decimal_test.exs:5

warning: unused import ExUnit.CaptureIO
  test/more_strings/reverse_test.exs:4

warning: unused import ExUnit.CaptureIO
  test/more_strings/duplicate_test.exs:3


FootholdTest [test/foothold_test.exs]
  * doctest Foothold.hello/0 (1) (0.00ms) [L#3]
  * test greets the world (0.00ms) [L#5]

MoreStrings.DuplicateTest [test/more_strings/duplicate_test.exs]
  * test try duplicate_string [L#7]In the test for make_three_copies
  * test try duplicate_string (0.00ms) [L#7]
  * test try make_three_copies (0.1ms) [L#12]
In the test try reverse

MoreStrings.ReverseTest [test/more_strings/reverse_test.exs]
In MoreStrings.actually_reverse_string with arg ahello
  * test try reverse [L#9]olleha
In MoreStrings.actually_reverse_string with arg hello
olleh
  * test try reverse (0.1ms) [L#9]
  * test ttttttt [L#16]In test tttttt
  * test ttttttt (0.02ms) [L#16]

Foothold.DecimalTest [test/foothold/decimal_test.exs]
  * test test do_decimal_compare_gt (0.01ms) [L#15]
  * test test do_decimal_subtract (0.01ms) [L#19]
  * test test do_decimal_add (0.01ms) [L#7]
  * test test do_decimal_compare_lt (0.00ms) [L#11]


Finished in 0.04 seconds (0.00s async, 0.04s sync)
1 doctest, 9 tests, 0 failures

Randomized with seed 333086

Next add a module to be the main module for a command line app. Put this in lib/foothold/cli.ex:


defmodule Foothold.CLI do

  import MoreStrings.Reverse
  import MoreStrings.Duplicate

  @default_count 4
  @moduledoc """
  Handle the command line parsing and the dispatch to
  the various functions 
  """
  def main(argv) do
    IO.puts "in main for Foothold"

    reverse_stuff()
    # why doesn't it like this?
    actually_reverse_string( "this is my string" )
    revv( "this is my string for revv" )
    IO.puts duplicate_string "this is a string to be duplicated"
    IO.puts make_three_copies "one copy "


    argv
    |> parse_args
    |> process
    IO.puts "Done with CLI"
  end

  @doc """
  'argv' can be -h or --help, which returns :help

  Otherwise it is a github user name, project name, and (optionally)
  the number of entries to format.

  Return a tuple '{ user, project, count }', or ':help' if help was given.
  """
  def parse_args(argv) do
    OptionParser.parse(argv, switches: [ help: :boolean],
                             aliases:  [ h:    :help   ])
    |> elem(1)
    |> args_to_internal_representation()
  end

  def args_to_internal_representation([user, project, count]) do
    { user, project, String.to_integer(count) }
  end

  def args_to_internal_representation([user, project]) do
    { user, project, @default_count }
  end

  def args_to_internal_representation(_) do # bad arg or --help
    :help
  end

  def process(:help) do
    IO.puts """
    usage:  issues <user> <project> [ count | #{@default_count} ]
    """
    System.halt(0)
  end

  def process({_user, _project, _count}) do
    IO.puts "In process"
  end

end

Next, put the following in the mix.exs file for the project:


defp escript_config do
  [
    main_module: Foothold.CLI
  ]
end

Escript is an Erlang/Elixir mechanism that packages a compiled project into a single executable file (internally it embeds the compiled BEAM files in a zip archive), which can be run on any machine that has Erlang installed.

Then we can compile our application with “mix compile” and run it with “mix run -e 'Foothold.CLI.main(["-h"])'”.


ericm@latitude:~/elixir.projects/foothold$ mix compile
warning: function escript_config/0 is unused
  mix.exs:30

Compiling 2 files (.ex)
Generated foothold app
ericm@latitude:~/elixir.projects/foothold$ mix run -e 'Foothold.CLI.main(["-h"])'
warning: function escript_config/0 is unused
  mix.exs:30

in main for Foothold
In MoreStrings.Reverse
In MoreStrings.actually_reverse_string with arg this is my string
gnirts ym si siht
In MoreStrings.Reverse.revv with arg this is my string for revv
vver rof gnirts ym si siht
this is a string to be duplicatedthis is a string to be duplicated
one copy one copy one copy 
usage:  issues <user> <project> [ count | 4 ]
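
One more thing about that “escript_config/0 is unused” warning: as far as I can tell, it shows up because nothing in the project function refers to escript_config. If you want Mix to actually build a standalone escript, I believe you would add an escript entry to the keyword list returned by project/0, something like the sketch below (the other entries are what “mix new” generated for me; yours may differ), and then run “mix escript.build” and execute the resulting foothold file directly.

# In mix.exs; the escript: line is the addition that makes the warning go away.
def project do
  [
    app: :foothold,
    version: "0.1.0",
    elixir: "~> 1.13",
    start_permanent: Mix.env() == :prod,
    escript: escript_config(),
    deps: deps()
  ]
end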

That covers the basics of getting a project up and running as you learn Elixir. As I stated before, I do not like having code floating in space, or making tiny edits to small files.

I think that deploying an Elixir app to production would take more steps, and you have to know more about the Erlang VM, but that should be enough to get you started.

You’re welcome.

Image from Jruchi II Gospel, a 12th-century manuscript housed at the National Research Center of Georgian Art History and Monument Protection, assumed allowed under public domain.

Adding Dependencies To The Go Project

In a previous post I wrote that if a language has a standard way of making projects, then that should be taught right away, along with automated tests. Then I wrote a post in which I went over making a simple project with Go.

I realized I skipped something important: adding external dependencies. I will re-use the project from the previous post. You should complete the steps in that post before starting this one.

The project already has two packages. This time I will add two files in one of our existing packages. Each file will use a different module. I will also include a test for each.

I picked a couple of projects off the Awesome Go github page.

The first one is stringy (link to module on pkg.go.dev here).


cd spearhead
emacs morestrings/use_stringy.go

Here are the contents of the file:


package morestrings

import (
    "fmt"
    "github.com/gobeam/stringy"
) 

func UseReplaceLast(root_string, old_s, new_s string ) string {
    fmt.Println("in morestrings/UseReplaceLast")
    replaceLast := stringy.New(root_string)
    return replaceLast.ReplaceLast(old_s, new_s)
}

I am only using the “ReplaceLast” function. Notice the line importing “github.com/gobeam/stringy”.

Next I run “go build”:


go build
morestrings/use_stringy.go:5:2: no required module provides package github.com/gobeam/stringy; to add it:
    go get github.com/gobeam/stringy

I will add code to spearhead.go to invoke my function:


fmt.Println("about to call morestrings.UseReplaceLast('this is my string, working with strings, i love strings', 'string', 'rope') : ", 
        morestrings.UseReplaceLast("this is my string, working with strings, i love strings", "string", "rope"))

Before I run “go get”, here is what is in the go.mod file:


more go.mod 
module info/shelfunit/spearhead

go 1.18

Now I run “go get”:


go get github.com/gobeam/stringy
go: added github.com/gobeam/stringy v0.0.5

Here is go.mod:


module info/shelfunit/spearhead

go 1.18

require github.com/gobeam/stringy v0.0.5 // indirect

There are no issues running “go build” and running the executable:


go build
ericm@latitude:~/github/goapps/spearhead$ ./spearhead 
Hello, world.
about to call numberstuff.AddOneToSum( 5, 6):  12
about to call numberstuff.SubtractOneFromSum( 5, 6 ):  10
about to call numberstuff.AddOneToProduct( 5, 6):  31
about to call numberstuff.SubtractOneFromProduct( 5, 6 ):  29
about to call morestrings.ReturnStringTwice("twice"):  twicetwice
in morestrings/UseReplaceLast
about to call morestrings.UseReplaceLast('this is my string, working with strings, i love strings', 'string', 'rope') :  this is my string, working with strings, i love ropes

The next step is to make a test file: morestrings/use_stringy_test.go


package morestrings

import (
    "fmt"
    "testing"
)

func TestUseReplaceLast(t *testing.T) {
    fmt.Println( "Starting TestUseReplaceLast" )
    cases := []struct {
        root, old, new, want string
    }{
        {"Hello, world, i love the world, dog meets world", "world", "food", "Hello, world, i love the world, dog meets food"},
        {"learning go, will go places, gonna be big",       "go",    "wa",   "learning go, will go places, wanna be big"},
        {"I love life, go live your life, go create life", "life",   "lice", "I love life, go live your life, go create lice"},
    }
    for _, c := range cases {
        fmt.Println( "c.root: ", c.root, ", c.old: ", c.old, ", c.new: ", c.new, ", c.want: ", c.want )
        got := UseReplaceLast(c.root, c.old, c.new)
        if got != c.want {
            t.Errorf("UseReplaceLast(%q, %q, %q) == %q, want %q", c.root, c.old, c.new, got, c.want)
        }
    }
    fmt.Println( "Ending TestUseReplaceLast" ) 
}

When I run "go test -v ./morestrings/", I get all the tests for this package. I will only share the output of the test I just added:


=== RUN   TestUseReplaceLast
Starting TestUseReplaceLast
c.root:  Hello, world, i love the world, dog meets world , c.old:  world , c.new:  food , c.want:  Hello, world, i love the world, dog meets food
in morestrings/UseReplaceLast
c.root:  learning go, will go places, gonna be big , c.old:  go , c.new:  wa , c.want:  learning go, will go places, wanna be big
in morestrings/UseReplaceLast
c.root:  I love life, go live your life, go create life , c.old:  life , c.new:  lice , c.want:  I love life, go live your life, go create lice
in morestrings/UseReplaceLast
Ending TestUseReplaceLast
--- PASS: TestUseReplaceLast (0.00s)

If I run “go test -v ./morestrings/use_stringy_test.go”, I get an error:


# command-line-arguments [command-line-arguments.test]
morestrings/use_stringy_test.go:19:16: undefined: UseReplaceLast
FAIL	command-line-arguments [build failed]
FAIL

I am sure some Golang experts are shaking their heads because I did something wrong. As far as I know, while the "go test" command can run tests using a directory as an argument, it does not work as well when you specify individual files: go only compiles the files you name, so UseReplaceLast, which lives in use_stringy.go, is undefined as far as the test file is concerned.
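
Given that, listing the implementation file alongside the test file should satisfy the compiler, though I find the package path easier to type:


go test -v ./morestrings/use_stringy.go ./morestrings/use_stringy_test.go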

To test only the test I just created, I can run this command:


go test -v ./morestrings -run TestUseReplaceLast 

I suppose if you wanted to run multiple tests in a specific file, you would need to give them all a name that is common to the tests in that file, but also unique to that group of tests. So you need to be careful when naming your tests. The argument after "-run" is a regular expression. If I wanted to run the tests from the last post in ./numberstuff/multiplication_enhancements_test.go, I would use this command:


go test -v ./numberstuff/ -run Test.*Product

To run the tests in numberstuff/addition_enhancements_test.go, I would use this command:


go test -v ./numberstuff/ -run Test.*Sum.*
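
Since the argument is a regular expression, you can also use alternation to grab tests from both files in one shot. Assuming I am reading the testing docs correctly, this should match every test whose name contains either "Sum" or "Product":


go test -v ./numberstuff/ -run 'Sum|Product'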

The next package I will use is strutil (link to module on pkg.go.dev here). The file using this module is morestrings/use_strutil.go:


package morestrings

import (
    "fmt"
    "github.com/ozgio/strutil"
) 

func UseCountWords(root_string string ) int {
    fmt.Println("in morestrings/UseCountWords")
    return strutil.CountWords(root_string)
}

Again, note the import statement for the new module. Here is the output of "go build":


go build
morestrings/use_strutil.go:5:2: no required module provides package github.com/ozgio/strutil; to add it:
    go get github.com/ozgio/strutil

So I run the “go get” command:


go get github.com/ozgio/strutil
go: added github.com/ozgio/strutil v0.4.0

Now we have two modules in a require block in go.mod:


more go.mod 
module info/shelfunit/spearhead

go 1.18

require (
    github.com/gobeam/stringy v0.0.5 // indirect
    github.com/ozgio/strutil v0.4.0 // indirect
)
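
Both requirements are tagged "// indirect" even though the code imports them directly. As far as I can tell, that is because "go get" was handed the module path instead of scanning the source. Running "go mod tidy" should rewrite go.mod to match the actual imports and drop the comment, though I have not bothered with that here:


go mod tidy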

I will add a couple of calls to our new function in spearhead.go:


fmt.Println("about to call morestrings.UseCountWords('this is my string with words'): ", morestrings.UseCountWords("this is my string with words"))
    root_string := "I am again called upon by the voice of my country to execute the functions of its Chief Magistrate"
    fmt.Println("about to call UseCountWords('I am again called upon by the voice of my country to execute the functions of its Chief Magistrate'): ",
        morestrings.UseCountWords(root_string))

And again I call “go build” and run the executable:


go build
ericm@latitude:~/github/goapps/spearhead$ ./spearhead 
Hello, world.
about to call numberstuff.AddOneToSum( 5, 6):  12
about to call numberstuff.SubtractOneFromSum( 5, 6 ):  10
about to call numberstuff.AddOneToProduct( 5, 6):  31
about to call numberstuff.SubtractOneFromProduct( 5, 6 ):  29
about to call morestrings.ReturnStringTwice("twice"):  twicetwice
in morestrings/UseReplaceLast
about to call morestrings.UseReplaceLast('this is my string, working with strings, i love strings', 'string', 'rope') :  this is my string, working with strings, i love ropes
in morestrings/UseCountWords
about to call morestrings.UseCountWords('this is my string with words'):  6
in morestrings/UseCountWords
about to call UseCountWords('I am again called upon by the voice of my country to execute the functions of its Chief Magistrate'):  19

I will make a test file in morestrings/use_strutil_test.go:


package morestrings

import (
    "fmt"
    "testing"
)

func TestUseCountWords(t *testing.T) {
    fmt.Println( "Starting TestUseCountWords" )
    cases := []struct {
        in_string string
        want int
    }{
        {"Among the vicissitudes incident to life no event could have filled me with greater anxieties than that of which the notification was transmitted by your order, and received on the 14th day of the present month", 36},
        {"I am again called upon by the voice of my country to execute the functions of its Chief Magistrate", 19},
        {"When it was first perceived, in early times, that no middle course for America remained between unlimited submission to a foreign legislature and a total independence of its claims, men of reflection were less apprehensive of danger from the formidable power of fleets and armies they must determine to resist than from those contests and dissensions which would certainly arise concerning the forms of government to be instituted over the whole and over the parts of this extensive country", 79},
    }
    for _, c := range cases {
        fmt.Println( "----- c.in_string: ", c.in_string, ", c.want: ", c.want) // , ", here it is: ", UseCountWords(c.in_string))
        got := UseCountWords(c.in_string)
        if got != c.want {
            t.Errorf("ERROR: UseCountWords(%q) == %d, want %d", c.in_string, got, c.want)
        }
    }
    fmt.Println( "Ending TestUseCountWords" )
}

Here is the result of go test -v ./morestrings -run TestUseCountWords:


=== RUN   TestUseCountWords
Starting TestUseCountWords
----- c.in_string:  Among the vicissitudes incident to life no event could have filled me with greater anxieties than that of which the notification was transmitted by your order, and received on the 14th day of the present month , c.want:  36
in morestrings/UseCountWords
----- c.in_string:  I am again called upon by the voice of my country to execute the functions of its Chief Magistrate , c.want:  19
in morestrings/UseCountWords
----- c.in_string:  When it was first perceived, in early times, that no middle course for America remained between unlimited submission to a foreign legislature and a total independence of its claims, men of reflection were less apprehensive of danger from the formidable power of fleets and armies they must determine to resist than from those contests and dissensions which would certainly arise concerning the forms of government to be instituted over the whole and over the parts of this extensive country , c.want:  79
in morestrings/UseCountWords
Ending TestUseCountWords
--- PASS: TestUseCountWords (0.00s)
PASS
ok  	info/shelfunit/spearhead/morestrings	0.002s

In case you are wondering, while reading through post, what those quotes are, it is my duty to inform you, as any good upstanding citizen would, that they are from the first three inaugural addresses of the heads of government of our great country, whose writing style, which leaves much to be desired by people living in the post-Hemingway era, given that inserting entire thoughts in the middle of sentences can introduce unneeded complexity, I have done my utmost to emulate.

Given the length and content of Washington’s second inaugural address, I get the impression he did not want a second term.

You’re welcome.

Image from Synaxarium of Euthymius of Athos, an 11th-century Georgian manuscript, image from Wikimedia, assumed allowed under public domain.

Making a Go Project

In a post I mentioned that when learning a new programming language we should start out making projects and tests for our code.

One language I am interested in is Go. Here I will list the steps to make a Go project with some tests. I will also include the source code for each file. I know this is redundant since there is a tutorial for this on the Go language site, but I wanted to make one with multiple packages. [Note 1]

First cd to a directory where you will create your project.


go.dir$ mkdir spearhead
go.dir$ cd spearhead
go.dir/spearhead$ go mod init info/shelfunit/spearhead
go: creating new go.mod: module info/shelfunit/spearhead
go.dir/spearhead$ more go.mod 
module info/shelfunit/spearhead

go 1.18

NOTE: You must use slashes, not dots. I put in dots the first time, and it did not work.

First I will make a few functions that do some simple things with numbers. I will make a couple that add two numbers, and then add one to the total. I will make another function that subtracts one from the total. I will do the same for multiplication.


mkdir numberstuff
emacs -nw numberstuff/addition_enhancements.go

Here are the contents of the file:


package numberstuff 

func AddOneToSum( x, y int ) int {
    return x + y + 1
}

func SubtractOneFromSum( x, y int ) int {
    return x + y - 1
}

Now I will write a few tests for this. This will go in numberstuff/addition_enhancements_test.go. Since I am using Emacs, I will create the new buffer while in numberstuff/addition_enhancements.go. I will probably not include many more Emacs commands going forward. Also: I am not too familiar with the Go toolchain or the Emacs Go mode, so I will be running some tests and commands on the command line. When my Emacs-fu is more powerful, I will be able to do it all in Emacs.


package numberstuff

import (
    "fmt"
    "testing"
)

func Test_AddOneToSum( t *testing.T ) {
    fmt.Println("Testing AddOneToSum")
    result := AddOneToSum(3, 4)
    if result != 8 {
        t.Error("Incorrect result, expected 8, got ", result)
    }
}

func Test_AddOneToSumCases(t *testing.T) {
    fmt.Println("Testing AddOneToSumCases")
    cases := []struct {
        a, b, result int
    }{
        {3, 4, 8},
        {4, 5, 10},
        {5, 6, 12},
    }
    for _, c := range cases {
        got := AddOneToSum(c.a, c.b)
        if (got != c.result) {
            t.Errorf( "incorrect result: AddOneToSum(%d, %d) gave %d, wanted %d", c.a, c.b, got, c.result )
        }
    }
}

func Test_SubtractOneFromSum( t *testing.T ) {
    result := SubtractOneFromSum(3, 4)
    if result != 6 {
        t.Error("Incorrect result, expected 6, got ", result)
    }
}

Now I will do the same for multiplication. Here is numberstuff/multiplication_enhancements.go:


package numberstuff 

func AddOneToProduct( x, y int ) int {
    return (x * y) + 1
}

func SubtractOneFromProduct( x, y int ) int {
    return (x * y) - 1
}

Here is numberstuff/multiplication_enhancements_test.go:


package numberstuff

import (
    "fmt"
    "testing"
)

func Test_AddOneToProduct( t *testing.T ) {
    fmt.Println("Testing AddOneToProduct")
    result := AddOneToProduct(3, 4)
    if result != 13 {
        t.Error("Incorrect result, expected 13, got ", result)
    }
}

func Test_SubtractOneFromProduct( t *testing.T ) {
    result := SubtractOneFromProduct(3, 4)
    if result != 11 {
        t.Error("Incorrect result, expected 11, got ", result)
    }
}

Next I will invoke these from my main file, spearhead.go:


package main

import (
    "fmt"
    "info/shelfunit/spearhead/numberstuff"
) 

func main() {
    fmt.Println("Hello, world.")
    fmt.Println("about to call numberstuff.AddOneToSum( 5, 6): ", numberstuff.AddOneToSum( 5, 6 ) )
    fmt.Println("about to call numberstuff.SubtractOneFromSum( 5, 6 ): ", numberstuff.SubtractOneFromSum( 5, 6 ) )
    fmt.Println("about to call numberstuff.AddOneToProduct( 5, 6): ", numberstuff.AddOneToProduct( 5, 6 ) )
    fmt.Println("about to call numberstuff.SubtractOneFromProduct( 5, 6 ): ", numberstuff.SubtractOneFromProduct( 5, 6 ) )
}

Now run this from the command line.


go build

This will create an executable called “spearhead” next to “spearhead.go”. There are two ways I can run this program:


go run spearhead.go

Or just


./spearhead

Either way gives the following:


Hello, world.
about to call numberstuff.AddOneToSum( 5, 6):  12
about to call numberstuff.SubtractOneFromSum( 5, 6 ):  10
about to call numberstuff.AddOneToProduct( 5, 6):  31
about to call numberstuff.SubtractOneFromProduct( 5, 6 ):  29
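
A side note: if the main package ever grows beyond one file, "go run ." should also work, since it runs the whole package in the current directory instead of a single file:


go run .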

To run the tests, I need to go into the numberstuff directory:


cd numberstuff
go test

Here is the result:


Testing AddOneToSum
Testing AddOneToSumCases
Testing AddOneToProduct
PASS
ok  	info/shelfunit/spearhead/numberstuff	0.003s

I put in some calls to fmt to print out stuff to the command line so I know it is working. For that, we have “go test -v”:


go test -v
=== RUN   Test_AddOneToSum
Testing AddOneToSum
--- PASS: Test_AddOneToSum (0.00s)
=== RUN   Test_AddOneToSumCases
Testing AddOneToSumCases
--- PASS: Test_AddOneToSumCases (0.00s)
=== RUN   Test_SubtractOneFromSum
--- PASS: Test_SubtractOneFromSum (0.00s)
=== RUN   Test_AddOneToProduct
Testing AddOneToProduct
--- PASS: Test_AddOneToProduct (0.00s)
=== RUN   Test_SubtractOneFromProduct
--- PASS: Test_SubtractOneFromProduct (0.00s)
PASS
ok  	info/shelfunit/spearhead/numberstuff	0.003s

If I want to run the tests from the root of the project, I run go test with the relative path to the package, with or without "-v" as desired:


go test ./numberstuff
ok  	info/shelfunit/spearhead/numberstuff	0.002s

Just to make sure I am not going crazy, let’s change one of the result assertions in numberstuff/addition_enhancements_test.go from 8 to 9:


func Test_AddOneToSum( t *testing.T ) {
    fmt.Println("Testing AddOneToSum")
    result := AddOneToSum(3, 4)
    if result != 9 {
        t.Error("Incorrect result, expected 9, got ", result)
    }
}

That gives us this:


go test  ./numberstuff
Testing AddOneToSum
--- FAIL: Test_AddOneToSum (0.00s)
    addition_enhancements_test.go:12: Incorrect result, expected 9, got  8
Testing AddOneToSumCases
Testing AddOneToProduct
FAIL
FAIL	info/shelfunit/spearhead/numberstuff	0.003s
FAIL

Change it back before moving forward.

To prove that I know what I am doing, I will add another package: morestrings. This is the same package that is used in How to Write Go Code, but I am going to make a different function.

I will make a file morestrings/duplicate_string.go:


package morestrings

func ReturnStringTwice(s string) string {
    return s + s
}

Next I will make a test file morestrings/duplicate_string_test.go:


package morestrings

import (
    "fmt"
    "testing"
)

func TestDuplicateString(t *testing.T) {
    fmt.Println( "Starting TestDuplicateString" )
    cases := []struct {
        in, want string
    }{
        {"Hello, world", "Hello, worldHello, world"},
        {"eating nun arrays funny", "eating nun arrays funnyeating nun arrays funny"},
        {"negative houses gauge freedom", "negative houses gauge freedomnegative houses gauge freedom"},
    }
    for _, c := range cases {
        fmt.Println( "c.in: ", c.in, ", c.want: ", c.want )
        got := ReturnStringTwice(c.in)
        if got != c.want {
            t.Errorf("ReturnStringTwice(%q) == %q, want %q", c.in, got, c.want)
        }
    }
}

Now we run the tests:


go test -v ./morestrings/ 
=== RUN   TestDuplicateString
Starting TestDuplicateString
c.in:  Hello, world , c.want:  Hello, worldHello, world
c.in:  eating nun arrays funny , c.want:  eating nun arrays funnyeating nun arrays funny
c.in:  negative houses gauge freedom , c.want:  negative houses gauge freedomnegative houses gauge freedom
--- PASS: TestDuplicateString (0.00s)
PASS
ok  	info/shelfunit/spearhead/morestrings	(cached)
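
Note the "(cached)" at the end of that run: go caches successful test results and reuses them when nothing relevant has changed. To force the tests to actually execute again, pass "-count=1":


go test -v -count=1 ./morestrings/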

Now we update spearhead.go:


package main

import (
    "fmt"
    "info/shelfunit/spearhead/morestrings"
    "info/shelfunit/spearhead/numberstuff"
) 

func main() {
    fmt.Println("Hello, world.")
    fmt.Println("about to call numberstuff.AddOneToSum( 5, 6): ", numberstuff.AddOneToSum( 5, 6 ) )
    fmt.Println("about to call numberstuff.SubtractOneFromSum( 5, 6 ): ", numberstuff.SubtractOneFromSum( 5, 6 ) )
    fmt.Println("about to call numberstuff.AddOneToProduct( 5, 6): ", numberstuff.AddOneToProduct( 5, 6 ) )
    fmt.Println("about to call numberstuff.SubtractOneFromProduct( 5, 6 ): ", numberstuff.SubtractOneFromProduct( 5, 6 ) )
    fmt.Println("about to call morestrings.ReturnStringTwice(\"twice\"): ", morestrings.ReturnStringTwice( "twice" ) )
}

We can run all the tests with this command:


go test -v ./...

That is a period, a slash, and three periods. Sometimes the generator and browser turn the three periods into a single ellipsis character.

I know we should be open to new ideas and new ways of doing things, but I wish "go test" just ran all the tests in the project tree. Just like Maven and Gradle. And Leiningen. And Clojure CLI tools. And Ruby. And Elixir. Before anyone tries to play the "because Ken Effing Thompson sez so" card, I already do that with the ternary operator: he is the guy who put it in C, and he kept it out of Go.

You’re welcome.

Note 1: I have also been trying to do this with Elixir. For some reason, I have been able to make a few different modules, but I cannot get the tests to run. I also do not get any errors for the tests. I will step through the relevant chapter in Dave Thomas’ book and post about Elixir in the near future.

Image from Évangéliaire de Saint-Mihiel, aka Irmengard Codex, a 10th century manuscript held at the Colleges of the Université Catholique de Lille; image from Wikimedia, assumed allowed under public domain. “Irmengard, it’s a cerdex.”

2022-11-20: New Stuff I Learned: Grep and Databases

I recently learned a couple of new tricks, and I want to share a practice I engage in that I have gotten some compliments on.

I have worked on a few Java web applications, and instead of searching with the IDE, I use grep in cygwin. Sometimes the term I search for is not just in a Java file, but it is also in a Javascript file that has no line breaks. (Seriously, why do people make files like this?) This makes the results harder to read. One possibility is to pipe the results of the grep to another grep:

grep -riIn someString * | grep -v '.min.js' | grep -v '.js.map'

The disadvantage of this method is you lose the color in the output. I read the man page, and I found there is an exclusion flag: --exclude. Even better, you can use it more than once:

grep -riIn --exclude=*min.js --exclude=*.js.map someString * 

I also found out the -I option can exclude binary files, which were also polluting the output.
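
Since I end up typing the same exclusions over and over, I keep them in a shell alias (the alias name here is just something I made up):

alias codegrep="grep -riIn --exclude='*.min.js' --exclude='*.js.map'"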

The database we use is Oracle. I started using the Explain Plan to analyze some queries. You can significantly reduce the cost of a query by using upper to compare strings. Going from this:

where some_column like 'some string%'

to this:

where upper( some_column ) like upper ( 'some string%' )

reduced the cost of a few queries from somewhere between 20,000 and 40,000 down to between 30 and 40. I thought that using "upper" would take longer, since it has to run that function against the whole table, and that the plain comparison would be faster, but using "upper" is faster. I do not know if this also holds true for other databases. On Oracle "upper" seems to be faster than "lower".

Another thing I want to mention is that I have gotten a few compliments on how I format insert statements. I line up the column names, and I put comments in the values() part of the statement naming each column.

You have probably seen insert statements like this:

insert into us_state( state_id, state_name, capital,largest_city, abbreviation, population, median_age, persons_sq_mile,nickname, area_sq_mile, avg_elevation, north_latitude_deg, north_latitude_min, south_latitude_deg, south_latitude_min, east_longitude_deg, east_longitude_min, west_longitude_deg, west_longitude_min, created_date, created_by, modified_date, modified_by)
values (func_to_get_next_id(), 'Illinois', 'Springfield', 'Chicago', 'IL', 12882135, 39, 231, 'Prairie State', 57915, 600, 42, 30, 36, 58, 87, 30, 91, 31, sysdate(), user_id, sysdate(), user_id);

I have some that are worse than that. Imagine that statement with no spaces after the commas.

Compare that to this:

insert into us_state( 
    state_id,           state_name,         capital,            largest_city,       
    abbreviation,       population,         median_age,         persons_sq_mile,
    nickname,           area_sq_mile,       avg_elevation,      north_latitude_deg, 
    north_latitude_min, south_latitude_deg, south_latitude_min, east_longitude_deg,
    east_longitude_min, west_longitude_deg, west_longitude_min, created_date,       
    created_by,         modified_date,      modified_by 
) values (
    func_to_get_next_id(), -- state_id
    'Illinois',            -- state_name
    'Springfield',         -- capital
    'Chicago',             -- largest_city
    'IL',                  -- abbreviation
    12882135,              -- population
    39,                    -- median_age
    231,                   -- persons_sq_mile
    'Prairie State',       -- nickname
    57915,                 -- area_sq_mile
    600,                   -- avg_elevation
    42,                    -- north_latitude_deg
    30,                    -- north_latitude_min
    36,                    -- south_latitude_deg
    58,                    -- south_latitude_min
    87,                    -- east_longitude_deg
    30,                    -- east_longitude_min
    91,                    -- west_longitude_deg
    31,                    -- west_longitude_min
    sysdate(),             -- created_date
    user_id,               -- created_by
    sysdate(),             -- modified_date
    user_id                -- modified_by
);

It is a little more typing up front, but it is less thinking later on. [Note 1]

I also reformat numbers in my notes. I keep notes as I work on stories/defects. If I get an id or a long number from a database, I will copy it, and then add dashes after every third digit:

4198275630/419-827-563-0
5832064197/583-206-419-7

Years ago I worked on a project where the IDs were strings based on the date down to the nanosecond, with no dashes or separators. So I was dealing with very long strings of about 30 characters, half of which I did not need. I got tired of parsing long numbers in my head. More typing means less thinking. The fewer plates I have to spin in my head, the better. Granted, a lot of things that could reduce my cognitive load are usually out of my control, but I change what I can change.

You’re welcome.

Note 1: The formatting is lined up in the WordPress editor; sometimes it looks crooked in the final version. Things like this are why I sometimes think about going with a static site.

Image from Golden Gospels of Henry III, aka Codex Aureus of Speyer or Speyer Gospels (Speyerer Evangeliar), an 11th-century manuscript housed in the Royal Site of San Lorenzo de El Escorial; image from Wikimedia assumed allowed under public domain.

Learning Languages Test-first and Project-first

Sometimes I wonder why books and tutorials don't teach programming languages closer to the way web apps are taught and made: by making projects up front, and by incorporating tests right away.

I am going through some Pluralsight tutorials for Go and Elixir, and while there is a lot of information in them, in most of them the code is not well organized. Only one of the dozen or so Go tutorials that I have been through makes a project. Why not do that every time? That is how we make web apps, especially since, after Rails, most frameworks ship with generators. You can make projects with most languages, but for some reason it is not emphasized. Granted, Go is a language where you can run code outside of a project: You can just put code into "some_file.go" and then type "go run some_file.go", but in my opinion that is a bad practice.

Several years ago, I went through a Rails bootcamp. After I was done, another alumnus and I started a small study group to go through the RSpec book by David Chelimsky et al. We only had a few meetings, and I do not remember how far into the book we got. We thought that testing was not covered enough in the bootcamp, and we thought it was odd that some languages (like Ruby) have a strong "test-first" culture, but when it comes to learning the language, tests are treated like an afterthought. If we tell people to write their apps test-first (or at least simultaneously with their code), why not teach languages test-first? A lot of books wait until halfway through before talking about tests and/or projects, or leave it until the end.

Even Dave Thomas holds off on projects and testing. I thought one of the guys famous for being part of the agile/extreme/TDD movement would start out with tests right away. The pickaxe book and his Elixir book do not get to them until halfway through; it is chapter 13 out of 25 in the Elixir book. Chapter 13 starts with: “Let’s stop hacking and get serious.” Why not start out serious?

It also might help in learning a language. If you are going through a tutorial, I think it is better to have the different sections as files in a project as opposed to just one-off files or snippets floating in space.

It might seem like it is too much for beginners. But what is another chapter in a 200+ page book? And people should know it anyway. If we want people to be competent professionals, they should learn the proper way to do things from the beginning. I think we need to get rid of this idea that a smart developer can learn a language in a weekend. Yes, syntax can be picked up quickly. But to really be good at a language, you need to know the libraries, frameworks and tools. And how to structure your code so that other people can understand and use it.

To introduce people to projects and tests, a tutorial could make a package (or namespace, or whatever your language calls them) that just has a few functions/methods that take a couple of numbers, sum them, and add one to the sum. And another method that subtracts one from the sum. And you could have another package that does the same for multiplication.

I might re-do some of the Pluralsight tutorials by making them into projects. I noticed there is now one for the Gin web framework. Perhaps someone at Pluralsight read my post a few months back on things I do not like about Javascript. They have a tutorial on Go web services that includes a massive Angular app; it takes up more than 500 MB on my hard drive. I wrote it would have been easier (and better for learning Go) to just make a web app in Go.

I was looking for a way to make a project with Go, when I found a video that creates a project while using VS Code. This tutorial follows a page on the Go site. One thing I do not like about Go is that “go test” does not run tests in all the sub-directories by default. Maybe it does and I am just not as familiar with it. But “mvn test” or “gradle test” or “lein test” will run all the tests.

Keep in mind, I am not a TDD purist. I do not always write failing tests first. I have no issue with writing tests after code; Ken Kousen calls it GDD: Guilt Driven Development. I also have no problem with tests hitting the database. A presenter at Austin Ruby years ago pointed out that you can get mocks and stubs to do what you want. If your code has a lot of mocks and stubs, are your tests telling you that your code does what you think it should do, or are they telling you that you are good at mocking and stubbing? Some TDD people say it is fine to use an in-memory database. Why not use an actual database? You are still going outside your code either way.

If you are a TDD purist, do not bother arguing with me. You will not change my mind, and NOTHING ever changes the mind of a purist. Not even the fact that purists are more likely to turn people off to something than turn them on to it. I announced my Groovy Email Server on a list somewhere, and someone got pissy with me because the first thing they did was run tests, and they failed because there was no database connection. Even though the README clearly states that you need to set up a database first. Like Will Rogers said, some people read the sign, and some people just gots to touch the electric fence.

You’re welcome.

Image from León Antiphonary, a 10th-century manuscript hosted at Virtual Library of Bibliographic Heritage, image allowed under Creative Commons CC BY 4.0.

Random Thoughts On Javascript

Random thoughts on Javascript.

As Matthew Butterick put it, Lisp makes me smarter, while anything named *Script kills brain cells on contact.

Here is a comment from Hacker News that sums it up for me: "...they feel like walled gardens of arbitrary knowledge that only applies to their ecosystem rather than to some fundamental learning about software. When I learned about OOP I was able to understand OOP code in a broad range of applications, languages and frameworks, but learning how react works gives me no insight into anything but react, and it's even abstracted enough from JavaScript that you could learn React without ever properly knowing how to write JavaScript (which is something I've encountered in a handful of new devs)."

It does seem like a lot of Javascript developers are like small children: They only know Javascript, and cannot imagine anyone knowing anything but Javascript. Like iPhone users but worse. They seem to constantly re-invent the wheel, and there seems to be no collective learning. If Javascript is so wonderful, why are there so many libraries, frameworks and entire languages that transpile to Javascript? If Javascript is so wonderful, why do people go through all that effort to avoid dealing with Javascript directly?

Whenever I tell people I want nothing to do with Javascript, I get the same song and dance that “you have to know Javascript.” First off, I do not use it at my current job, so for a lot of jobs you do not. But I do have to use apps made by Microsoft. Because I have to. I am tired of using garbage because I have to. That is one reason I tend not to call in to the Austin Clojure Meetup when they talk about ClojureScript. Lisp is something I want to learn. I spend enough time with things I have to use. Don’t pollute the stuff I want to use with the stuff I have to use.

And usually you get to the first stage of “you have to use X” because a lot of people are too lazy or too stupid to investigate alternatives. The second stage is the X that you have to use is the only alternative, and most people are too lazy or too stupid to know there ever was an alternative. If I wanted to do what everybody else does, I would not be interested in Lisp or Clojure. You use Lisp because you know the world is wrong.

Maybe everybody else uses it because everybody else is stupid. JS is like Windows: It is a status quo that everybody defends, nobody actually chose, and very few actually like.

I don’t buy the argument that developers should learn Javascript because it is a combination of Lisp (the good parts) and C (everything else). “It has the stuff you like with the stuff you do not like.” I would like my life to have more of the stuff that I like, and less of the stuff I do not like, not to mix them together. “It’s Lisp and C” is really a reason to learn Lisp.

I am tired of being treated like a dog, and people thinking that I will just eat whatever is put in front of me.

But I am not the only one. The README for Coast states “It uses a relational database and renders html on the server without javascript which allows you to ship your web applications faster.” The Pragmatic Bookshelf has a book called Programming Phoenix LiveView: Interactive Elixir Web Programming Without Writing Any JavaScript. Granted, they do not seem to be against JS. I think that LiveView is like GWT: You write code in Elixir (LiveView) or Java (GWT), and it gets translated to JS. Because JS is too complicated.

The PDFs for that book said that we need JS to make single-page apps and give users all the “modern” features of the web: like notifications and infinite scroll. Personally, I need fewer notifications and I hate infinite scroll. JS seems to give nothing but bloat. Granted, there are some things it is useful for: I think a lot of video players are done with JS. But beyond that, all JS has given us is bloat and complexity. I feel like I am spending more and more time on websites waiting for circles or rings to stop spinning.

I think we need to get back to CRUD apps. The kind we used to make 10 years ago. Yes, you had to know several languages, like HTML, CSS, SQL, maybe some Javascript. But you could focus on a language that interested you, like Java, or Ruby, or Python. Now it seems like a lot of people want to make it all Javascript all the time; now it's Javascript AND maybe a little bit of the thing you really wanted to learn. Two years ago I took a course on Pluralsight about making web services in Go. The course included an Angular app to view the database. The Angular app takes up 589 megabytes on my hard drive. I took the course to learn how to use Go, not deal with Javascript. It reminds me of Joe Armstrong's line about OOP: I just wanted a banana, not the monkey holding the banana and the tree it is sitting in.

Why they did not have a prerequisite course about making a web application in Go is beyond me.

Javascript is like someone comes to a restaurant, and at first they seem outgoing and charismatic. Then you realize the guy is groping all the women, he’s hammered and you can’t get rid of him.

I haven’t gotten around to reading any of the PDFs I have going over the history of Lisp. Perhaps for a couple of decades the Lisp world was total chaos like JS is today. But now it isn’t. Why not stand on the shoulders of giants?

If my impression of Javascript is out of date, so be it. It wasn’t my idea to ignore history and push a language which was described by its own creator as having “a lot of stupid in it.” He is still jerking around the world by giving us a browser that shields you from ads so they can give you their ads. And if that isn’t shady enough, Brave is involved with corrupt-o-currency. This guy is the poster jerk for crank magnetism.

I am open to learning new things. Lisp. Go. Elixir. Functional programming. Graph databases. I probably need to learn more about relational databases. Just using Emacs will help you to pick up new things: Org mode, CIDER. I learned some Scheme and Racket and how to get them working in Emacs so I could go through Simply Scheme so I could eventually go through SICP.

I have noticed that when people judge you for not being open to new ideas or learning something new, what they are really doing is being judgemental because you are not interested in what they are interested in. Being open to new ideas does not mean being open to all ideas.

You’re welcome.

Image from the Melisende Psalter, a 12th century manuscript assumed to have been created at the Monastery of the Holy Sepulchre; housed in the British Library as Egerton MS 1139, assumed allowed under Public Domain.

2021-11 Emacs Meetup

There was a meeting of EmacsATX, the Austin Emacs meetup, last week. There was no set agenda.

I started off by mentioning that while I have not continued working through Rainer Konig's Org tutorial, I was using Org a lot more. I am using it to keep track of tasks. I am converting a lot of text files into org files, since a lot of topics in the files appear and re-appear. I also mentioned that I had started using the Emacs Desktop to save sessions (see this page in the Emacs manual). A lot of people were not familiar with that term. I don't know why Emacs calls the file that saves session information the "Desktop". Perhaps that is a holdover from the PDP-11 days. I used to keep an instance of JEdit open, and I am slowly phasing that over to an instance that uses the desktop. I also made an alias with "--no-desktop" so other instances do not try to use the desktop file. Since a lot of email is happening on GMail and Outlook, and the Emacs browser is not the greatest, it might be hard to do everything in Emacs these days. But I am using it more and more.

I am also using Org to write this post.

Anyway, one guy ranted about MELPA. He might make a blog post about it. I know some people do not think that Emacs handles packages and dependencies very well, but from what I have read, it used to be worse than it is today.

A few people talked about using LSP, the Language Server Protocol with Emacs. It started with VSCode. I think the idea is to make it easier for an editor to work with different programming languages. I don’t know if this makes language modes unnecessary in Emacs.

There is nothing planned for the next meeting. I pointed out that since EmacsConf 2021 is between now and then, we could watch the videos on our own and talk about them. A few people seemed to like that idea. I have a task tree in an Org file to watch the videos from prior years. This was the first meeting in a few months in which Sacha Chua, the organizer of EmacsConf, was not present.

You’re welcome.

Image from Add Ms 28815, a 10th century manuscript housed at the British Library; image assumed allowed under Public Domain.