Seems indicative of shifting expectations/demands that SICP used to be freshman course book; now it's considered "super advanced reading" :/
— Andy Matuschak (@andy_matuschak) March 26, 2015
On the tenth anniversary of Git, Linux.com has a nice interview with Linus. In it they explore the reason for Git’s creation, its supposed opaqueness to ordinary users, its success, and its future.
Although many already know the story, Linus recounts how he was forced to give up using BitKeeper when one of the kernel hackers, Tridge, started reverse engineering its protocol in violation of the agreement that allowed its free use for kernel development. Add to that the fact that many developers objected to BK because it wasn’t open software, and Linus pretty much had to change. The problem was that there was nothing else that came close to meeting his requirements. As a result, he spent a few days hacking together the first version of Git. What I didn’t realize was that Git was self-hosting after the first day. The first kernel commit happened at about day 10. That’s pretty amazing for any non-trivial development effort.
Linus admits that Git was difficult to use for the first six months because all the effort went into getting it running and adding features. Now, he says, that’s no longer true. Its one weakness, he says, is that there are several ways of accomplishing some tasks.
As for Git’s success, Linus thinks it’s mostly because all the other systems are so bad. First, you have to have a distributed model for a successful VCS. Even the distributed systems that came before were lacking. All that said, I think it’s pretty clear that Git has a simple, easily understood model that contributes greatly to its success.
Linux.com asked Linus if he foresaw some other system replacing Git in the next ten years. Linus replied that one thing for sure is that he wouldn’t be the one to write it. He also noted that if something did replace Git, it would have to be Git-like.
It’s an interesting interview and well worth a few minutes of your time.
I’ve written before about Karl Voit (1, 2, 3) and his quest to record and digitize every aspect of his life. His Memacs system, which collects data from his email, phone, social media and other sources, is an excellent example of that quest. His latest post is about the lessons he learned digitizing all his papers and books.
Voit is an Open Source advocate so he turned first to open solutions for his scanning and OCR needs. Unfortunately, none of the tools were up to his specifications. After looking at—and even trying—different hardware, he settled on the Fujitsu ScanSnap S1500. I have the same scanner and agree with Voit that it’s a clear winner. When I used a flatbed scanner, the process was so inconvenient that I let my scanning pile up and would have to spend an hour or more scanning the accumulated documents. The ScanSnap is small enough to keep on my desk, is always ready, and takes no time at all to scan and run OCR on a document.
So far, Voit has scanned over 40,000 pages. That includes books, from which he removed the spines. When he was done, he threw away all the paper and books. Now everything is digitized and instantly searchable—a clear win, especially for someone like Voit who is dedicated to having the events in his life instantly retrievable.
If you’ve thought about getting that pile of old papers under control, take a look at Voit’s post. It may provide the inspiration you need to get started. As for me, all new paper coming into the house is scanned and shredded. I’m still working on the backlog and will probably never take my books apart to scan.
UPDATE: splines → spines
Howard Abrams is continuing his Literate Devops series with a great post on Literate Database Work. He takes the ideas he developed in his Literate Devops post and applies them to investigating some problems with MySQL.
This time, he handles communication with the remote host by using SSH port redirection to make it appear that the MySQL server is on his local machine. Then he can use Org mode code blocks marked SQL to experiment with the database and put the results right in his Org file.
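As a sketch of the general setup—the host name, port, and credentials below are hypothetical placeholders, not Abrams’s actual values—the tunnel plus an Org mode SQL block might look something like this:

```org
# Forward local port 3306 to the MySQL server on the remote host.
# Run this in a terminal (or its own shell block); it stays open
# until interrupted.
#+begin_src shell
ssh -N -L 3306:localhost:3306 user@db.example.com
#+end_src

# With the tunnel up, sql blocks talk to 127.0.0.1 as if the server
# were local; C-c C-c on the block puts the results right in the file.
#+begin_src sql :engine mysql :dbhost 127.0.0.1 :dbuser admin :database myapp
SELECT COUNT(*) FROM users;
#+end_src
```

The nice part is that the query, the connection details, and the results all live together in one Org file.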
This is a really nice technique and one I wish I’d known when I was still grappling with databases. It’s much easier than logging into the actual server to do your work, and you have a record of what you did and the results.
That last part is non-trivial because as Abrams says, you can pass those results to your team members by simply emailing them the Org file or by exporting them to HTML and posting them on a community Web server. Take a look at his post and see what he did and how he did it. If you work with SQL you may find it has some ideas you can use.
If you offer it, they will come. You’ve been warned.
There’s been a huge amount of interest and uptake on abo-abo’s Hydra package. I’ve got some Hydras in my init.el and am planning on implementing more.
If you’re wondering what the excitement is about or would like to see if it’s something you can use, Eric James Michael Ritz has a nice introduction to Hydra and how he uses it. It’s short and easy to digest. After reading it, you should have a good idea of whether or not it’s something you can use.
If you decide that you can use it, you’ll want to check the documentation on the project page at the link above. You might also enjoy watching abo-abo’s introductory video. It includes several examples of Hydras that he’s implemented for his own use.
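For a quick taste of what a Hydra looks like, here’s the text-scaling example from the project’s README: a single Hydra bound to f2 that lets you zoom in and out repeatedly without retyping the prefix:

```elisp
;; From the Hydra README: press <f2>, then hit g or l repeatedly
;; to zoom in and out; any other key exits the Hydra.
(defhydra hydra-zoom (global-map "<f2>")
  "zoom"
  ("g" text-scale-increase "in")
  ("l" text-scale-decrease "out"))
```

That’s the whole idea: a family of related commands behind one prefix, with a little hint string so you don’t have to remember the bindings.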
The UK government's unironic embrace of Orwellian imagery is really something. pic.twitter.com/11BvAZCUL8
— Franklin Harris (@FranklinH3000) March 10, 2015
I wanted to add further comment but, really, I don’t know what to say.
If you enjoyed the New Yorker article on RMS and GNU you may also enjoy this 1989 New York Times article on the same subject. It’s almost surrealistic to read about the GNU project from a 1989 point of view.
There was a great deal of doubt as to whether the concept of Free Software could ever gain traction, and the GNU project was regarded as, if not quite quixotic, at least a very iffy endeavor. At the time, GNU had mostly Emacs and GCC, so it had not yet attained the critical mass to make it a successful, stand-alone system.
My favorite quote of the article—and the part that shows how far we’ve come—is the obligatory discussion of “hacker:”
While the press has come to identify the term hacker with malicious individuals who break into computers over telephone lines, the hackers themselves have an earlier and different definition. A hacker, Mr. Stallman said, is one who “acts in the spirit of creative playfulness.”
I love that part about telephone lines. Doubtless, many reading this post don’t remember that and possibly some don’t even get the reference. In those days, home users didn’t keep their computers on all the time because connecting to the Internet involved dialing up with a modem and telephone line.
As far as “hacker” is concerned, we were fighting the same battle in 1989 that we are now. The press is never going to give up their preferred meaning of “one who breaks into computers; a criminal.”
If you remember those days, this article is a trip down memory lane; if you don’t, it’s a glimpse into the way things were not all that long ago. It’s a good read and well worth your time.
Someone else said it, and I may even have blogged about it before, but it bears repeating: If you start playing an unexpected video automatically when I visit your site, I’m gone. Immediately. No questions. No exceptions.
Michael Hicks is doing a series of interviews with PhDs in industry working on programming languages. In his latest post, he interviews Russ Cox and Sameer Ajmani both of whom work for Google on the Go language.
As many of you know, I’m a fan of Cox and have followed his work since his Plan 9 days at Bell Labs. He talks a little about those times in the interview but it’s mostly about Go and his (and Ajmani’s) work on it. Cox’s PhD research was in compilers so joining Google to work on Go was a natural for him. Currently, he and Rob Pike run the Go project jointly.
Ajmani was working at Google when he attended a workshop on Go given by Pike. He really liked the language, started using it, and spent some of his time contributing to it. Eventually he was asked to join the team full time. He works on making the language useful for building the large production systems at Google.
Most of the interview focuses on the design and origins of Go and how it is used at Google. Cox and Ajmani are engaging and make for an interesting interview. Definitely worth a read especially if you’re interested in Go.