Xah’s JavaScript Tutorial

Many of us have benefited from Xah Lee’s Emacs Lisp tutorial. I found it very useful for learning the Elisp library and idioms. Now Lee has put together a JavaScript tutorial.

When I was learning Elisp, I had the advantage of already being familiar with Common Lisp and Scheme so I understood the Lisp parts. It was just the Elisp-specific bits that I needed help with. In the case of JavaScript, I’m a complete novice as I don’t know the language at all. It’s enough like C that I can get a general idea of what a simple program is doing but I certainly couldn’t write JavaScript or even understand anything harder than a toy program.

That makes me a perfect test case for Lee’s tutorial. I’ve gone through the first section (8 lessons) and learned a lot. I don’t have much interest in writing large JavaScript programs but I would like to have enough facility that I can read and understand others’ work and write my own (possibly simple) programs when necessary.

It’s still too early for me to say I’ve reached that goal but I do understand some of the things that confused me when I looked at JavaScript code before. If you’ve been wanting to learn JavaScript, Lee’s tutorial is an easy way to do it. You can go through a lesson or two a day in just a few minutes so it doesn’t require a large time commitment. Perhaps I’ll write a followup post after I finish the tutorial. In the meantime, I’m enjoying it and learning a bit of JavaScript.

Posted in Programming | Tagged | 1 Comment

14 Great Programmers

Over at NetworkWorld they have an interesting article featuring the 14 greatest living programmers. I was a bit surprised that DMR wasn’t on the list but then I remembered the part about living. Also, oddly, you and I aren’t on it either.

The latter shocking omissions aside, I think you’ll agree that the 14 are world class engineers. Perhaps your favorite isn’t there but by and large it’s a good list. My only complaint is that the article is in the form of one of those obnoxious slide shows designed to maximize ad impressions.

Definitely worth a glimpse.

Posted in General | Tagged | Leave a comment

Bad Spellers and Typists Rejoice

Some people are bad spellers or at least consistently have trouble with certain words. Others can spell but are poor typists and constantly mistype words. Some, I suppose, fall into both categories. If any of this describes you, don’t despair: Bruce Connor has a solution for you.

Over at Endless Parentheses he presents a bit of Elisp that will look up the correct spelling with ispell (or aspell or whatever you’re using) and make an abbreviation for your incorrect spelling so that it will be automatically corrected in the future. It’s probably not for everyone but if you consistently make spelling errors—for whatever reason—it may be helpful.
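Connor’s actual code is in his post; the idea, though, can be sketched in a few lines of Elisp. This simplified version is mine, not his: capture the word before point, let ispell correct it, then record an abbrev mapping the misspelling to the correction.

```elisp
;; A simplified sketch of the idea (not Connor's actual code): correct the
;; word at point with ispell, then remember the fix as a global abbrev so
;; the same typo is expanded automatically next time.
(defun my/ispell-word-then-abbrev ()
  "Correct the word before point and save the fix as an abbrev."
  (interactive)
  (let ((before (thing-at-point 'word t)))   ; the misspelled word
    (call-interactively #'ispell-word)       ; interactive correction
    (let ((after (thing-at-point 'word t)))  ; the corrected word
      (unless (string-equal before after)
        (define-abbrev global-abbrev-table before after)
        (message "\"%s\" now expands to \"%s\"" before after)))))
```

With `abbrev-mode` enabled, typing the misspelling again triggers the expansion without any further intervention.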

Endless Parentheses is a fairly new blog that concentrates on short, mostly weekly, Emacs-oriented posts. As of this writing there are only eight short posts so you might want to read them all. It won’t take much time and you’ll probably learn something.

Posted in Programming | Tagged | 1 Comment

Literate Programming and Your Emacs Configuration

My init.el isn’t very organized. Partly, that’s because I like having a single file rather than the multitude of special configuration files that many of my Emacs heroes prefer. But even given that it’s a single file, it’s not really organized. I have sections that more or less group functionality but like many Emacs configurations it has grown (some would say metastasized) over the years and accumulated a thick layer of barnacles.

Now Grant Rettke has shamed me even more by publishing his configuration. It’s a wonderful example of literate programming. Others have used Org-mode and Babel for their Emacs configurations but Rettke’s is a marvel. It’s really more of an essay that explains what he’s trying to do, why he chose each configuration item, and even why he didn’t choose others. He has a long section on key bindings, what he was trying to accomplish, and what he finally chose.

It’s worth reading through his configuration just for some of the ideas. I learned about packages and settings that I wasn’t familiar with. The easiest way to do that is to read the TC3F.txt file because it’s nicely formatted. Ideally, you’d want to read TC3F.org but for some reason GitHub displays it in raw mode rather than the nicely formatted mode that GitHub usually uses for Org files. It’s also worth taking a look at the makefile to see how he builds init.el from his TC3F.org file.

Most readers will probably find Rettke’s Org file overkill—it’s 4,460 lines long—but I like the idea of documenting why you made the choices you did and what a particular piece of code is doing. The table of contents at the beginning makes it easy to locate a particular part of the configuration. In Rettke’s case there’s the extra step of building the configuration file by running the makefile but these days I usually only make a couple of changes a month so it’s not much of a burden, at least for me. If that bothers you, you can have it loaded automatically by bootstrapping it from a minimal init.el as Sacha demonstrates with her configuration.
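The bootstrap itself is tiny. A minimal sketch, assuming the literate file lives at ~/.emacs.d/config.org (the filename is my assumption, not Sacha’s or Rettke’s):

```elisp
;; Minimal init.el that tangles and loads a literate configuration.
;; The filename is an assumption; point it at your own Org file.
(require 'org)
(org-babel-load-file (expand-file-name "config.org" user-emacs-directory))
```

With this in place there’s no separate build step: `org-babel-load-file` extracts the Elisp source blocks and loads the result every time Emacs starts.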

My point in writing about Rettke’s configuration is not that I think everyone will want to have the same kind of comprehensive document but that it serves as an excellent example of how you can use literate programming for your Emacs configuration. As I said above, it’s worth reading through the TC3F.txt file to find packages and features that you didn’t know about and the TC3F.org file to see how you can put together an Emacs configuration using literate programming methods.

Posted in General | Tagged , | Leave a comment

Proced

The invaluable Mickey has an excellent post up on proced, a ps/top-like utility built into Emacs. If you find yourself using top and ps—and what serious developer doesn’t?—you’ll love having the functionality built right into Emacs. One more reason not to leave the comfort of your editor.

As usual, Mickey explains the functionality in detail so I’ll just send you to him for the full picture. Sadly, if you’re an OS X user, like me, proced won’t work because OS X lacks /proc. It’s doubly insulting that it does work under Windows. If you’re a Linux or Windows user, check out Mickey’s post and enjoy the power. If you’re an OS X user, have a beer and weep into it. I feel your pain.

Posted in General | Tagged | Leave a comment

Kitchin on Org-mode

John Kitchin is on a roll. Just the other day, I wrote about his org-ref project and the really great work he is doing in introducing his students to reproducible research via Emacs and Org-mode. Now, he has a new video out on the splendors of Org-mode.

There are lots of videos on Org-mode, of course, but Kitchin emphasizes its use to produce technical papers using the reproducible research method. Kitchin is an expert on the subject matter so the video is definitely worth watching. It’s just over 18 minutes so it won’t take up a lot of your time. Very definitely recommended.

As one of the commenters noted, you can lose a day reading the Org-mode entries on his blog. I’ve spent some time there myself and recommend that as well. Kitchin has been doing some outstanding work on Org-mode so it’s worth paying attention to his blog.

Posted in General | Tagged , | Leave a comment

Automating Git Bisect

I’ve written a couple of times about Git bisect. It’s a way of finding the commit that introduced an error. It works by (essentially) doing a binary search on the commit history. Now, Curtis Poe over at Ovid shows us how to partially automate the process.

The Git bisect process involves testing each candidate commit and marking the result as good or bad. Usually this doesn’t involve a large number of steps (binary search needs only about log₂ n tests for n candidate commits) but it can be, in Poe’s words, boring. Happily, you can often automate the process and avoid the tedium. You do this by writing a script that tests each commit, then telling Git to run the script on each candidate commit and report the commit that introduced the error. See Poe’s post for the details.
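To make the mechanics concrete, here’s a self-contained sketch of `git bisect run`. The toy repository and the failing commit are invented for the demo; in real use you’d point `git bisect run` at your project’s own test script.

```shell
#!/bin/sh
# Sketch of automating bisection with `git bisect run` (assumes git is
# installed; the repo and "bug" below are made up for the demo).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"

# Six commits; commit 4 introduces the "bug" (the word "ok" disappears from f).
for i in 1 2 3 4 5 6; do
  if [ "$i" -lt 4 ]; then
    echo "version $i ok" > f
  else
    echo "version $i bad" > f
  fi
  git add f
  git commit -qm "commit $i"
done

# Mark HEAD bad and the root commit good, then hand the test command to bisect.
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)"
# The test must exit 0 for good commits and non-zero for bad ones;
# exit code 125 tells bisect to skip a commit it can't test.
out=$(git bisect run sh -c 'grep -q ok f' 2>&1)
subject=$(git log -1 --format=%s refs/bisect/bad)
git bisect reset >/dev/null
echo "first bad commit: $subject"
```

Any executable works as the test: a shell one-liner like the grep above, a make-and-run-tests invocation, or a wrapper like the Perl script Poe describes.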

This doesn’t always work: sometimes it’s hard to write a script that will reliably detect the error, and sometimes the test may fail in different ways. To help with the second case, Poe provides a Perl script that will look for some specific output or, alternatively, a pattern. If you do a lot of testing using Git bisect, you should give Poe’s post a read. Even if you only occasionally use Git bisect, the automation that Poe describes may save you some time and effort. If you’re a Magit user, the bisect function—including the run command that automates the process—is built in.

Posted in Programming | Tagged , , | Leave a comment

Reflections on Trusting Trust

A reference to Ken Thompson’s fantastic paper Reflections on Trusting Trust popped up yesterday on Hacker News. I’ve written about this paper before but it deserves a periodic mention.

If you haven’t read this paper before, I urge you in the strongest terms to do so right now. Suffice it to say that it has been described as “a truly moby hack” and “the greatest hack of all time.” Really, you’ll be glad you checked it out.

Posted in Programming | Tagged | Leave a comment

A Medical Decision Tree

Back in March, I wrote about Edward Frenkel and his explanation of the purported backdoor in the Dual_EC_DRBG random number generator. I also mentioned his book Love and Math: The Heart of Hidden Reality. Now Zygmunt Zając over at FastML has a nice post on one of the stories from the book.

It’s about how Frenkel hand-constructed a decision tree to determine the treatment plan for kidney transplant patients. What’s interesting is that Frenkel has no medical training or domain knowledge. Rather, he worked with a physician to capture the physician’s ad hoc decisions and turn them into an algorithm.

The story is interesting to folks like us because of the way he solved it. It’s a method that developers might try when attempting to capture a human decision process in a program. At first he asked the physician a series of questions, trying to understand the various parameters and how they related to a final decision. That approach was singularly unsuccessful so Frenkel tried something else.

He randomly selected some patient records and had the physician ask him questions, which he answered by consulting the patient’s file. By analyzing the questions, the order in which they were asked, and what the follow-up questions were, he was able to construct a decision tree that accurately captured the physician’s subjective thought process. After two and a half dozen cases, Frenkel’s algorithm was almost as good as the physician at making diagnoses. By the end of the process his diagnoses were 95% accurate.

In the end, the process was good enough to patent. What I like is how simple and direct the process was and yet it produced outstanding results. I really recommend this post: it’s short but clearly outlines an approach to problem solving that you may find useful.

Posted in General | Tagged | Leave a comment

Guess Who Those Targeted Individuals Are

The Targets

Here in the U.S., today is Independence Day. It’s a day to celebrate our forebears’ refusal to submit to what they considered unjust treatment at the hands of their government. One of their major complaints was general warrants: the notion that the government could enter your home and rifle through your belongings at will. They felt so strongly about it that the prohibition of the practice is enshrined in our Constitution as the Fourth Amendment.

Now, sadly, general warrants are making a comeback, at least in the digital realm, in the United States and most other first world countries. We all know this story: the NSA, GCHQ, and other intelligence agencies have decided that they have the right—in the name of national security, of course—to snoop on our communications and digital data without warrant or specific cause.

The NSA for its part insists that, yes, they collect almost everyone’s information but except for targeted individuals that information is flushed within 48 hours or, at most, 30 days. If, like me, you’re inclined to a nasty, suspicious mindset, you might wonder who those targeted individuals are. Your Aunt Millie is sure it’s just Osama Bin Laden and a couple of his friends but the more cynical of us wonder if “targeted individuals” might be a bit more general.

Now we have an answer. One way to be targeted is to read Boing Boing. Well, Boing Boing is vaguely left-wing, I suppose, although still well within the mainstream so why would that get you targeted? Surely, reading an apolitical, technical site won’t get you targeted. It turns out, though, that reading Linux Journal—reportedly considered an “extremist forum” by the NSA—can also get you on the list. What do these sites have in common that excites the NSA’s suspicions?

The answer is, at the same time, shocking and obvious: reading an article on the technical details of TOR, Tails, or other privacy enhancing software is enough to provoke the NSA’s interest. Cory Doctorow reports that one expert suggested that the NSA is trying to separate the sheep from the goats; to split the population into those who know how to protect their privacy and those that don’t. Naturally, those in the first group are suspicious and therefore targeted.

The Sources

If you read the links above you will see that the information is explosive. Almost too good to check, as cynical journalists like to quip. So where did this story come from? It apparently originated on the German site Tagesschau.de (in German, Google translation here). Happily, for those who don’t read German, there is an English language article on DasErste.de that extends the Tagesschau.de piece. If you read nothing else, you should read this article, as it explains in detail where the information comes from and includes a link to the XKeyscore deep packet inspection rules.

There has been speculation that these latest revelations point to the existence of a second NSA leaker as none of this information was included in the known Snowden documents. Bruce Schneier, who has access to the Snowden documents, does not believe this information came from Snowden and believes there is a second leaker.

There’s a lot of information in the above links but you really should read it all. The original Tagesschau article was about the targeting of German sites so this story concerns you whether or not you are an American. If, after you read it, you aren’t infuriated, let me nominate you for The Alfred E. Neuman “What, me worry?” award.

Posted in General | Tagged | Leave a comment