Emacs Byte Code

Over at null program, Christopher Wellons has an excellent post on Emacs Byte Code Internals. Most people won’t care, of course, but we’re nerds and we don’t like black boxes. As Wellons says, the byte code internals are under-documented, or in some cases undocumented, so this post is welcome for those of us who’d like to know what’s going on.

There’s no point in my recapitulating what Wellons said, so you should head on over and take a look. I don’t think this information is going to be of much practical use unless you’re interested in working on the byte compiler or interpreter, but you may find that it’s just what you need for some project you’re thinking of. If nothing else, you’ll discover a bit more of how Emacs works.

Wellons is on a roll. Last week, I wrote about his post on closures in Emacs. He’s providing the sort of technical information that helps us all become better Emacs Knights. The more we know about the minutiae of Emacs internals, the more productive Emacs users we’ll be.

Emacs Colors with OS X

Bozhidar Batsov has posted the third entry in his series on the features coming in Emacs 24.4. This time, he discusses the addition of sRGB support to the NextStep (NS) build section of Emacs. The NS build section is where all the Mac OS X specific code and configuration lives. If you’re looking for better color support, you’ll be getting it in Emacs 24.4.

I don’t use a prepackaged color theme; I prefer to just add a pale beige (oldlace) background to the otherwise standard color set on a white screen. From time to time I look at the themes people have made, but I still haven’t found anything I like better than my simple setup. That’s just me, though. Most people prefer darker screens (why is a mystery to me, just as my preference for light screens is doubtless a mystery to them), and most prefer one of the many themes that people have put together.

All that said, I’ll be interested to see if, and how, the new color facility changes my mind. If you’re one of those people who have been applying the sRGB patch or using Homebrew, your life is about to get easier.

Know Lisp, Get a Job?

Over at the Lisp subreddit, nhs111throwaway asks if anyone actually got a job because they know Lisp. The problem with a question like that is that you’re going to get only anecdotal information as answers. Still, it’s interesting to read about people’s experiences.

If I had to guess before reading those answers, I would have said that there are very few Lisp jobs and that knowing Lisp probably doesn’t help much, except as an indication that you have broad-based knowledge. It turns out, at least according to the answers to nhs111throwaway’s question, that my intuition is wrong. Several responders recounted how Lisp helped them get the job they wanted. Sometimes it was a Lisp job; other times it was a job using a different language, but they were able to leverage their Lisp knowledge in support of the other language.

If you’re a Lisper and would like to get a Lisp job, you might find comfort in the answers. It really does seem, as Paul Graham says, that Lisp is a secret weapon that no one talks about very much but that gives its practitioners a big leg up.

When Your git Server Dies

I’ve written before about how I use git to keep my two main machines in sync. The other day, the Linux server that I kept the repositories on died. I had turned it off for a few seconds to deal with a power problem, and afterwards it wouldn’t spin up. That was a little annoying because I’d recently rebuilt the OS. The big problem, though, was that it was the machine that hosted my git repositories.

Until I could replace the machine, I needed a temporary home for those repositories. Happily, I use git instead of a centralized system like Subversion, so my two machines had the full history for each repository. All I needed to do was make a directory on aineko, my iMac, to hold the repositories; clone the repositories into that directory; create the git-daemon-export-ok file in each repository; and finally point the individual repositories on each machine to aineko instead of the Linux machine.

Easy enough, but I wasn’t looking forward to the drudgery of it all. I decided to solve the problem once and for all by automating the process. It’s nothing special, but if anyone else has the misfortune to lose the server hosting their repositories, perhaps this will help.

#!/bin/bash
# -*- mode: sh -*-
REPOS="/Users/jcs/org /Users/jcs/medical /Users/jcs/tax /Users/jcs/.emacs.d /Users/jcs/tt"
for r in $REPOS
do
    base=$(basename "$r")
    # Make a bare clone for serving, and mark it exportable to git-daemon.
    git clone --bare "$r" "$base.git"
    touch "$base.git/git-daemon-export-ok"
    # Point the working repository's remote at the new server.
    sed 's/url = bedia:/url = aineko:/' "$r/.git/config" > tmp
    mv tmp "$r/.git/config"
done

Now all I had to do was create the new directory, put this script in it, and run it. I also had to run it, minus the git clone and touch lines, on my other machine to point its repositories at the new server. Once I get a new server and configure it, I can just copy the repository directory on aineko to the new machine.

Unicode Representation in Emacs Strings

Xah Lee posted a useful fact that I’m sure I knew but had forgotten, or at least never internalized. The tip is how to encode Unicode characters in Emacs strings. Given that Emacs supports Unicode and, indeed, uses UTF-8 as its default file encoding, you can usually just place the Unicode character right in the string. You can also encode it as \uXXXX or \UXXXXXX; see Lee’s post for the details.
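
Here’s a quick sketch of the two forms in Emacs Lisp (evaluate these in the *scratch* buffer; the λ example assumes your font can display that glyph):

```elisp
;; The literal character and the \u escape produce equal strings:
(string= "λ" "\u03BB")      ; U+03BB GREEK SMALL LETTER LAMBDA => t

;; For an invisible character the escape is far easier to read than an
;; embedded literal would be:
(insert "a\u200Bb")         ; inserts "a", a zero-width space, then "b"
```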

Why would we need this alternative representation? As Lee points out, sometimes you want to embed a non-printable character in the string, and the \u or \U representation is more convenient, especially when the non-printable character involves cursor motion of some type.

Another example is dealing with missing glyphs in a font. For example, the Inconsolata font that I use doesn’t support some of the glyphs that I need. An easy way of representing them is to use the alternative encoding. I can still embed the glyph in the string, of course, but it will appear as a tiny sliver of white space. Unless you look carefully, it just appears as if nothing is there. With the alternative encoding you can see that something’s there, even if you have to look up what it represents.

Optimizing Lisp

Over at the Lisp subreddit, they have a pointer to an interesting 2006 paper, How to make Lisp go faster than C by Didier Verna. One of the persistent myths about Lisp is that it’s slow. That comes from the old days when Lisp was interpreted, but it hasn’t been true for a long time.

Verna shows that Lisp can be as fast as C and sometimes even a bit faster. This always seems counterintuitive but it shouldn’t. After all, the C compiler and the Lisp compiler both do the same thing: they compile their source languages into machine language. It’s really (mostly) a matter of how good the compilers are. Yes, there are things like garbage collection, but especially for numerical work there’s no reason that one should run faster than the other.

What’s interesting to me are the reddit comments. They were mostly along the lines of, “Yeah but you have to add all these yucky declarations into the source code and it makes it ugly.” To me that completely misses the point. Leaving aside the fact that you always need declarations in C, the point is you can quickly develop a first version, working interactively with Lisp. If the situation calls for it, you can speed things up by adding declarations and proclamations to the code. Usually you won’t need to, but when you do, Lisp has the appropriate tools available. You get the best of both worlds: you can easily prototype an application and then, if needed, make it production-strength by adding a few declarations.
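
As a rough illustration of what those declarations look like (a toy example of my own, not from Verna’s paper), here is an untyped Common Lisp function next to a version annotated for speed:

```lisp
;; Untyped: the compiler must allow for any kind of number.
(defun sum-squares (v)
  (reduce #'+ v :key (lambda (x) (* x x))))

;; Typed: with these declarations a native compiler such as SBCL can
;; emit unboxed double-float arithmetic, much as a C compiler would.
(defun sum-squares-fast (v)
  (declare (optimize (speed 3) (safety 0))
           (type (simple-array double-float (*)) v))
  (let ((acc 0d0))
    (declare (type double-float acc))
    (dotimes (i (length v) acc)
      (incf acc (* (aref v i) (aref v i))))))
```

Note that the prototype works unchanged; the declarations are added only to the version that needs the speed.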

In any event, if you use Lisp and would like a nice introduction to optimizing your code, give Verna’s paper a read.

Before There Was Snowden

The New York Times has an absolutely fascinating story about a 43-year-old crime that was, until recently, unsolved. It’s a story that shows there is nothing new under the sun. It’s also a story intimately bound up with the U.S. government’s previous scandal over spying on its citizens.

The crime was the theft of documents from an FBI office in a suburb of Philadelphia. The documents showed that the FBI was spying on Americans, trying to infiltrate anti-Vietnam War groups, and inciting them to perform illegal acts so that the antiwar movement would be discredited. The documents, subsequently mailed to a number of newspapers, were instrumental in the formation of the Church Committee, which investigated that earlier scandal.

The parallels with the Snowden story are astounding. The NYT story at the link includes a 13-minute video that will seem very familiar. After the revelation, the government tried desperately to bury the story. When the papers published it anyway, high-ranking government officials lied on the record. The FBI assigned 200 agents to track down the perpetrators.

The way the people involved were finally discovered makes the story all the better. Whatever your thoughts on the Vietnam War—or even if you don’t have any—and on the Snowden affair, you really need to watch that video. If nothing else, it’s a great caper story.

Slime Moving to Github

Xach is reporting that Slime is moving to Github. That’s great news, although mostly it’s good news for Xach, who no longer has to deal with CVS to get the latest version of Slime for Quicklisp. The rest of us have Xach handle that for us when we use Quicklisp.

If you’re not already using Quicklisp, believe me, you’re missing out and should get it installed right away. It will take care of updating Slime for you, and you’ll never have to worry about it again. All you need to do, once Quicklisp is installed, is evaluate

(ql:quickload "quicklisp-slime-helper")

and follow the instructions to install and configure Slime.

If you choose not to use Quicklisp for some reason, the move to Github will make your life a bit easier.

How To Reallocate Memory

Chris Taylor has a nice post on reallocating arrays. The problem is a common one: you initially allocate an array (or other data structure) and later want to make it larger. In C, for example, you would use the realloc function. The problem at hand is how much additional memory to request with the reallocation. You’d like to arrange things so that some of the memory can be reused in further reallocations. Taylor explains exactly what this means in his post.

The theoretical answer turns out to be that you should increase the current allocation by a factor of ≈1.618 or, to be precise, by the golden ratio. As a practical matter, a factor of 1.5 is probably a good value. None of this is of great import, of course, but the mathematician in me was charmed by the result. The golden ratio keeps popping up in unexpected places.

FIDO

Ars Technica is reporting that Microsoft has joined the FIDO alliance. The FIDO (Fast IDentity Online) alliance is an industry group that is developing protocols to replace passwords for access to Web sites. The idea is to use public key cryptography in place of the current password system. The macro view is that you would have a public/private key pair for each site you visit. The site would hold the public key and you would hold the private key. When you log onto a site, it would send you a random message, which you would sign with your private key. The site would check the signature and, if it’s legitimate, sign you on.

Notice how this solves several problems with the current system. The three major problems with passwords are:

  1. Users choose weak passwords
  2. Users reuse passwords
  3. Sites don’t properly hash the stored passwords

Public key cryptography solves the first problem because the user doesn’t choose a password and the keys are secure by construction. The password reuse problem also goes away because a fresh key pair is generated for each site, so, again, the user doesn’t have an opportunity to do the wrong thing. Finally, even if a bad guy is able to recover the public keys from a site, he can’t derive the private keys needed to gain access. After all, in public key cryptography the public keys are available to anyone.
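
The challenge-response flow described above can be sketched in a few lines of Python. The RSA keypair here is a toy with tiny fixed primes, purely to make the signing step concrete; a real FIDO implementation would use proper key generation and far larger keys:

```python
import hashlib
import secrets

# Toy RSA parameters (illustration only; real keys are thousands of bits).
p, q = 61, 53
n = p * q            # modulus, 3233
e = 17               # public exponent (held by the site)
d = 2753             # private exponent (held by the user); e*d = 1 mod phi(n)

def sign(message: bytes, priv: int) -> int:
    """User side: hash the challenge and sign the hash with the private key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, priv, n)

def verify(message: bytes, signature: int, pub: int) -> bool:
    """Site side: check the signature against the stored public key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, pub, n) == h

challenge = secrets.token_bytes(16)   # the site's random message
sig = sign(challenge, d)              # signed with the user's private key
print(verify(challenge, sig, e))      # the site accepts: prints True
```

Nothing secret ever crosses the wire: the site stores only the public key, and a fresh random challenge defeats replay of old signatures.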

The Ars post has some of the details on how the protocol is envisioned to work. As you’d expect, getting the details right is the hard part. The system has to be easy for users and site operators alike. FIDO’s plans call for submitting the result of their research to a body such as the IETF for standardization.
