
CGI Programming in Go


While perusing the Go documentation I found an enticing reference to CGI programming. Go is of course a reasonably modern language compared to TCL, but the fact that it includes support for CGI is notable given the state of Python, which has deprecated its cgi module.


Rather than do anything too interesting, I figured I'd try re-implementing the CGI program currently backing the search form on the archive page. It requires some minimal HTML, database access, and request form handling. I've done "real" Go programming before but could never tell whether I disliked the language itself or merely the sorts of places I was made to use it.

It turns out the CGI interface is actually pretty nice. Rather than building much of anything, cgi.Serve is functionally a translation layer to the usual (w http.ResponseWriter, r *http.Request) pattern of web request handling. This feels clever because all the existing libraries that use this interface basically just work, and there's not much documentation necessary because it is just the usual Go HTTP request handling. The only real CGI specifics are limited to the handler function, which prints HTML and looks up the HTML form data, all told about 15 lines of code. Dealing with SQL seems pretty painless, and I'm still puzzled by Go error handling. I don't think I'm really missing much about it; I'm just confused by how basic it is. I'm also pretty sure that is the point.

The Code

package main

import (
	"database/sql"
	"fmt"
	"net/http"
	"net/http/cgi"

	// the driver import path was elided in the original; mattn's driver
	// registers itself under the "sqlite3" name used below
	_ "github.com/mattn/go-sqlite3"
)

type PostResult struct {
	title     string
	path_slug string
	date      string
}

func postsMatchingQuery(terms string) ([]PostResult, error) {
	db, err := sql.Open("sqlite3", "/var/www/data/posts.db?mode=ro")
	if err != nil {
		return nil, fmt.Errorf("postsMatchingQuery %q: %v", terms, err)
	}
	defer db.Close()

	var posts []PostResult

	rows, err := db.Query(`select post.title, path_slug, date
				 from post_fts
				 join post using(path)
				where post_fts.body match ?
				order by date desc;`, terms)
	if err != nil {
		return nil, fmt.Errorf("postsMatchingQuery %q: %v", terms, err)
	}
	defer rows.Close()

	for rows.Next() {
		var postResult PostResult
		if err := rows.Scan(&postResult.title, &postResult.path_slug, &postResult.date); err != nil {
			return nil, fmt.Errorf("postsMatchingQuery %q: %v", terms, err)
		}
		posts = append(posts, postResult)
	}

	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("postsMatchingQuery %q: %v", terms, err)
	}
	return posts, nil
}

func handler(w http.ResponseWriter, r *http.Request) {
	header := w.Header()
	header.Set("Content-Type", "text/html; charset=utf-8")

	// r.Form is only populated once ParseForm has run
	if err := r.ParseForm(); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	terms := r.Form.Get("terms")

	posts, err := postsMatchingQuery(terms)
	if err != nil {
		fmt.Fprintln(w, "<!DOCTYPE html><html><p>A problem occurred with your search</p></html>")
		return
	}

	fmt.Fprintln(w, "<!DOCTYPE html>")
	fmt.Fprintln(w, "<html><body><ul>")

	for _, post := range posts {
		fmt.Fprintln(w, "<li><a href=\"/"+post.path_slug+"\">"+post.title+"</a></li>")
	}

	fmt.Fprintln(w, "</ul></body></html>")
}

func main() {
	err := cgi.Serve(http.HandlerFunc(handler))
	if err != nil {
		fmt.Println(err)
	}
}


While I don't know that this has entirely won me over to Go as a substitute for all of my recreational programming, I can see the appeal for projects I might develop with other people. There are remarkably few ways of doing things, so all the documentation and examples you might find look basically the same. It is easy enough to pick back up using the approach of monkey-see-monkey-do and worrying about the details later.

I will admit I was expecting Go to absolutely smoke my 33 lines of TCL on the performance front; I was ready to be soundly convinced that, for such an underpowered server as this, it was obviously worth the additional effort to port things from a slow language. Instead my very rudimentary testing shows a modest improvement at best, and any gains are probably lost in the general noise of real-world internet latency. CPU usage might be better, but it too seems marginal. My sense is that there's only so much to be done when the program is this simple: Go doesn't have much room to improve on what is probably a thin layer over the C implementations of TCL and SQLite. It is nonetheless interesting and I'll keep it in mind as I inevitably continue to write small TCL programs and CGI.

Half the fun in writing this was how painless it was to A/B test with the web server configuration. I ended up putting both programs in the cgi-bin directory, named search-tcl and search-go. Toggling between them was as simple as rewriting a symbolic link to search that the archive page POSTs to. Compared to most of my usual development cycles this proved positively delightful.
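The toggle itself amounts to a couple of commands. A sketch, using the search-tcl and search-go names from above (run inside the cgi-bin directory; the exact path is deployment-specific):

```shell
# repoint the "search" symlink that the archive page's form POSTs to;
# -f replaces an existing link, -n avoids following it as a directory
ln -sfn search-tcl search   # serve the TCL implementation
ln -sfn search-go search    # flip to the Go implementation
readlink search             # prints: search-go
```

Since the web server resolves the symlink on every request, no reload or restart is needed to swap implementations.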