I am still working through Clojure for the Brave and True.
I am on the exercises at the end of chapter 5, but I might skip a couple of them. I don’t do too well with these “re-implement function x” exercises. I will also start going through the videos on Purely Functional. I think he is raising the price, and my membership will renew at the end of the year.
I am looking into some Big Data/Deep Learning libraries for Clojure. This was inspired by the last Austin Clojure meetup: There were only four of us and we had a hard time thinking of topics. I tried searching for Clojure meetups around the country for topic ideas, and frankly the pickings were kind of slim.
Anyway, the consensus in the Clojure community is that Clojure needs to be a bigger player in these spaces. There is a LOT of Python in AI. Being a JVM language, Clojure will have wrappers around a lot of the Java libraries I wrote about in Thoughts On Native, GPU, Groovy, Java and Clojure (even though there was not a lot of Clojure in that post).
I know that Big Data and AI are different things. I was thinking about looking at Sparkling to work with Spark (which I hope I can do on my laptop; do you need big machines to work with Big Data libraries?). This weekend I started looking at some of the videos on the Clojure TV channel on YouTube from Clojure Conj 2017. I did not go, but there seemed to be a LOT of videos about AI/Deep Learning (yes, I am using those terms interchangeably even though a lot of people do not).
There was Deep Learning Needs Clojure by Carin Meier, author of Living Clojure. She wasted the first seven minutes on some stupid joke about Skynet, which is a lot for a thirty-minute presentation. I am glad I did not pay to see that. After that it got better. The talk was pretty general. She mentioned some Clojure libraries and Graal VM, and around the 25:00 mark she talks about how to get into Deep Learning.
Declarative Deep Learning In Clojure by Will Hoyt talked about Deeplearning4j. He says that matrices and linear algebra pretty much ARE deep learning. There is some neuroscience in this presentation. He also talks about how the Clojure code is easier to deal with than the equivalent Java with its builder classes. I do not think he ever posts a link to dl4clj, which according to the Deeplearning4j site is the official port.
The Tensors Must Flow by William Piel is about his library Guildsman, a new Clojure interface to Google’s TensorFlow. There are already two Clojure projects that give access to TensorFlow (clojure-tensorflow and tensorflow-clj); they do some Java interop around the TensorFlow Java API. (You can see Google’s not-quite-Javadoc here.) He wanted something more idiomatic for Clojure programmers. TensorFlow’s main API is Python (what else?), but the core is actually written in C++, with bindings for Python and other languages. So, like most Python AI libs, the Python layer is essentially a wrapper around CPU or GPU code.
I understood the talk when I watched it. Really, I did. From what I remember, TensorFlow uses gradients in its training process, and each operation has a gradient implementation that goes with it. bpiel says the best way to contribute to Guildsman is to actually contribute C++ gradient implementations to TensorFlow itself. He said in the talk that he wanted to be done with Guildsman before the Conj in October. It is almost June, and he is still working on it.
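For anyone fuzzy on the terminology: a gradient is just the derivative of a function’s output with respect to an input, which training uses to nudge the parameters. TensorFlow’s gradients are symbolic and registered per operation in C++; this little finite-difference sketch is my own illustration of the idea, not Guildsman or TensorFlow code:

```clojure
;; My own illustration of what a "gradient" computes, using central
;; differences. TensorFlow does this symbolically per operation; this
;; numeric version is only here to show the concept.
(defn numeric-gradient
  "Approximate the derivative of f at x using central differences."
  [f x]
  (let [h 1e-6]
    (/ (- (f (+ x h)) (f (- x h)))
       (* 2 h))))

;; For f(x) = x^2 the derivative is 2x, so at x = 3 this is about 6.
(numeric-gradient #(* % %) 3.0)
```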
The last one about Deep Learning was Building Machine Learning Models with Clojure and Cortex by Joyce Xu. She talked about a Clojure library called Cortex. It is closer to Uncomplicate in approach: it interfaces with a GPU library directly rather than wrapping a Java library in the middle.
Traffic on the Cortex mailing list seems to have dropped off; the library is not at version 1 yet, and contributions appear to have tapered off since January.
I do wish the speakers had spent a bit more time on the implementation details of these libraries. High-level languages (like Java or Python) do not do much of their AI number crunching directly. They usually call a native library that calculates on either the CPU or the GPU. For the GPU, some libraries can use either CUDA (in other words, NVidia) or OpenCL (every other video card). Some, like Uncomplicate and Deeplearning4j, support multiple backends. TensorFlow can use a CPU (they have instructions on getting the JNI file here) or a GPU, but NVidia only. I have not tried Guildsman, so I do not know how it handles things or whether it requires an NVidia GPU. I also have no idea how Cortex handles it. Their instructions tell you to get some SDK from NVidia. Perhaps it defaults to the CPU if there is no NVidia GPU.
I bought my laptop used, and the one I used before this one is at least six years old. I think the older one had an Intel video chip, but I could not find any SDK for that version of the chip. I think my current laptop may also be too old. (Dealing with the Intel Math Kernel Library is a LOT easier than wading through their OpenCL pages.) The only reason I can think of to buy an Apple laptop is to not have to deal with this. It is a bit frustrating. The whole point of using a language like Java or Ruby or Python is that I do not want to deal with hardware details.
Anyway, besides all that, I still have a few ideas for a few web apps to do in Clojure.
I looked at the Cortex mailing list, and apparently you can run the tests just using the CPU:
lein with-profile cpu-only test
It would be great if they put that in the README.
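For anyone (like me) who forgets how Leiningen profiles work: a name passed to with-profile has to be declared under :profiles in project.clj. This is a hypothetical sketch, not Cortex’s actual project.clj; the profile name is the only thing taken from the command above, and the JVM option is invented purely to show where a CPU-only switch would live.

```clojure
;; Hypothetical project.clj sketch showing a :cpu-only profile.
;; NOT Cortex's actual configuration -- the :jvm-opts value below is
;; made up to illustrate where a library might read a CPU-only flag.
(defproject example-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.9.0"]]
  :profiles {:cpu-only {:jvm-opts ["-Dexample.force-cpu=true"]}})
```

Running `lein with-profile cpu-only test` merges that profile into the project map before the tests run.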
Image from M.p.th.f.66, Quattuor Evangelia, a 9th century manuscript on Franconica, the online repository for the Würzburg University Library; image covered under CC BY-NC-SA 4.0 and CC BY-NC-ND 4.0; ImageMagick was used to enhance the colors, shrink the image, and rotate it.
1 thought on “2018-05-28 Update: Clojure AI”
Thanks for the Guildsman mention. I am still working on Guildsman, but it’s been pretty infrequent. This year has been almost non-stop distractions and hurdles. Unfortunately, I’m feeling a little burnt out from it all. I’m hoping to get back to it. We’ll see.