setting a default virtual host in nginx Aug

If there are multiple conf files included in the sites-enabled directory, you can tell Nginx which site to serve by default by adding the default parameter to that site's listen directive:

server {
    listen 80 default;
    ...
}
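For a fuller picture, here is a sketch with two virtual hosts where the second catches any request that matches no server_name. In newer nginx versions the parameter is spelled default_server rather than default; the server names below are placeholders.

```nginx
server {
    listen 80;
    server_name example.com;
    # site-specific settings ...
}

server {
    # requests that match no server_name land here
    listen 80 default_server;
    server_name catchall.example.com;
    # ...
}
```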


install and remove .deb packages in ubuntu Aug

You can use the dpkg package manager to install downloaded .deb packages on Ubuntu and other Debian-based systems.

To install a package pass in the -i flag:

sudo dpkg -i <package.deb>

You can list installed packages with the -l flag and an optional pattern:

dpkg -l <optional pattern>

This list can be quite long, so it might be helpful to redirect the output into a file.

dpkg -l > dpkg.txt

Use -r to remove a package.

sudo dpkg -r <package>

-r uninstalls the package but leaves its configuration files in place, so if you ever install that package again it'll already be configured.

To remove the configuration files as well, use -P to purge all the data for the package.

sudo dpkg -P <package>


getting the browser to detect rss and atom feeds Aug

When writing sections that have periodic updates, like a blog, it's commonly requested to provide an RSS or Atom syndication feed for the section.

I develop with Django, and fortunately it comes with a great and easy to use syndication feed framework out-of-the-box to automate much of this process. However, the documentation doesn't talk about how to get browsers to automatically detect that a feed is available so that the syndication feed icon automatically shows up in the user's browser.

The technical term for this is "autodiscovery", and not surprisingly it's easy to implement.

All that needs to be done is to add something similar to this into the <head> of your HTML page for RSS, with href pointing at your feed's URL:

<link href="<feed-url>" rel="alternate" type="application/rss+xml" title="<title>"/>

or for Atom:

<link href="<feed-url>" rel="alternate" type="application/atom+xml" title="<title>"/>
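If you render these tags in several templates, a small helper keeps the mime type and markup consistent. This is a hypothetical sketch (the function name and defaults are mine, not part of Django or any library):

```python
def feed_link(feed_url, title, feed_type="rss"):
    """Build an autodiscovery <link> tag for the page <head>.

    The mime type must match the feed format: application/rss+xml
    for RSS, application/atom+xml for Atom.
    """
    mime = "application/%s+xml" % ("atom" if feed_type == "atom" else "rss")
    return '<link href="%s" rel="alternate" type="%s" title="%s"/>' % (
        feed_url, mime, title)

print(feed_link("/feeds/latest/", "Latest posts"))
# <link href="/feeds/latest/" rel="alternate" type="application/rss+xml" title="Latest posts"/>
```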


password protect nginx directory/location with htpasswd Jul

In apache you can password protect a directory with an .htaccess file. It's a little different in nginx, although I think it's more elegant.

All you need to do is add the following two directives to the relevant location block in the server conf settings.

location = / {
    auth_basic "Restricted";
    auth_basic_user_file /path/to/htpasswd;
    ...
}

and then run htpasswd in the proper location to create the password file (-c creates the file, overwriting it if it already exists; -b takes the password on the command line):

htpasswd -c -b htpasswd <user> <pass>


dynamically create an xml file in django Jun

Sometimes when communicating with external systems like Flash I'm required to dynamically create an XML file. You can utilise the template framework to do the heavy lifting.

It's worth noting that if you need to create syndication feeds you should use the syndication framework, and if you want to create a site map you should use the sitemap framework.

With that being said, let's say you have a model that looks something like this (I'll use a blog post concept for the demo):


class Post(models.Model):
    user = models.ForeignKey(User, blank=True, null=True)
    publish = models.BooleanField(default=True)
    title = models.CharField(max_length=255)
    slug = models.SlugField(unique=True)
    edited = models.CharField(max_length=255, blank=True)
    tags = models.ManyToManyField('Tag')
    body = models.TextField()
    date_created = models.DateTimeField(auto_now_add=True)
    date_modified = models.DateTimeField(auto_now=True)

    def __unicode__(self):
        return self.title

    @models.permalink
    def get_absolute_url(self):
        post_date = self.date_created
        return_vars = {
            'year': post_date.year,
            'month': post_date.month,
            'day': post_date.day,
            'slug': self.slug,
        }
        return ('blog_post', (), return_vars)

    @staticmethod
    def get_posts(limit=None):
        posts = Post.objects.filter(publish=True).order_by('-date_created')
        if limit:
            return posts[:limit]
        return posts

You can hook up the XML view in your URLconf:


from django.conf.urls.defaults import *

urlpatterns = patterns('project.quotes.views',
    url(r'^(?P<year>\d{4})/(?P<month>\d{1,2})/(?P<day>\d{1,2})/(?P<slug>[\w-]+)/$', 'post', name='blog_post'),
    url(r'^xml/latest.xml$', 'xml_latest', name='blog_xml_latest'),
)

You can load and render a template written in XML format, and return it with mimetype="text/xml". Your view can be something like:


from django.contrib.sites.models import Site
from django.http import HttpResponse
from django.template import Context, loader

from blog.models import Post  # adjust to wherever your Post model lives

def xml_latest(request):
    """Return an XML document of the latest posts."""
    template_vars = {
        'posts': Post.get_posts(30),
        'site': Site.objects.get_current(),
    }

    t = loader.get_template('blog/xml/posts.xml')
    c = Context(template_vars)

    return HttpResponse(t.render(c), mimetype="text/xml")

and your template can be something like:

# posts.xml

<?xml version="1.0" encoding="UTF-8"?>
<posts>
  {% for post in posts %}
  <post>
    <date>{{ post.date_created|date:"M j, Y" }}</date>
    {% if post.edited %}
    <edited>{{ post.edited|date:"M j, Y" }}</edited>
    {% endif %}
    <title>{{ post.title }}</title>
    <body>{{ post.body|striptags }}</body>
    {% for tag in post.tags.all %}
    <tag>
      <name>{{ tag.name }}</name>
      <url>http://{{ site.domain }}{{ tag.get_absolute_url }}</url>
    </tag>
    {% endfor %}
    <absolute_url>http://{{ site.domain }}{{ post.get_absolute_url }}</absolute_url>
  </post>
  {% endfor %}
</posts>

And that's that. There's probably a better, more dynamic way to generate the XML file, but this works for me when I need something quick and dirty.

Please let me know if you have any suggestions on how to improve this process.


postgresql cheatsheet Jun

I'm transitioning my databases from MySQL to PostgreSQL. Overall it's been an easy transition. One of the main differences that I had to adjust to is that by default postgres uses the unix user accounts to handle authentication into the CLI. This can be changed in the settings, and it also means that you might want to set the password for the postgres unix user.

Also it's good to note that postgres automatically folds unquoted identifiers (table and column names) to lowercase. This is an issue if you do things like camel casing your tables and columns, so it's best to avoid doing so if you're into this habit (or double-quote every identifier).

I do find the command-line tools to be more streamlined and the config to be more straightforward, but that's really just a personal preference.

Here's a rundown of the commands I often use. I'm posting them here for my convenience, but perhaps someone else might also find these useful.

# login
psql -U <user>
psql -U <user> -d <database>
psql -U <user> -h <host> -d <database>

# user management
CREATE USER <user> WITH PASSWORD '<password>';
ALTER USER <user> WITH PASSWORD '<password>';
DROP USER <user>;

createuser -D -A -P <user>
psql -U <admin> -c "CREATE USER <user> WITH PASSWORD '<password>'" -d template1

# database
CREATE DATABASE <database> OWNER <user>;
DROP DATABASE <database>;

ALTER TABLE <table> OWNER TO <user>;

createdb -O <user> <database>

# install
apt-get install postgresql
apt-get install python-psycopg2


apt-get install libpq-dev
pip install psycopg2

# configuration
# pg_hba.conf
host all all 0.0.0.0/0 md5 # accept password auth from any address, used for dev

# postgresql.conf
listen_addresses = '*' # listen on all interfaces

# dump and load data
pg_dump <database> > <file.sql>
psql <database> < <file.sql>

# examples from the pg_dump documentation
To dump a database called mydb into a SQL-script file:

$ pg_dump mydb > db.sql

To reload such a script into a (freshly created) database named newdb:

$ psql -d newdb -f db.sql

To dump a database into a custom-format archive file:

$ pg_dump -Fc mydb > db.dump

To reload an archive file into a (freshly created) database named newdb:

$ pg_restore -d newdb db.dump

To dump a single table named mytab:

$ pg_dump -t mytab mydb > db.sql

To dump all tables whose names start with emp in the detroit schema,
except for the table named employee_log:

$ pg_dump -t 'detroit.emp*' -T detroit.employee_log mydb > db.sql

To dump all schemas whose names start with east or west and end in gsm,
excluding any schemas whose names contain the word test:

$ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql

The same, using regular expression notation to consolidate the switches:

$ pg_dump -n '(east|west)*gsm' -N '*test*' mydb > db.sql

To dump all database objects except for tables whose names begin with ts_:

$ pg_dump -T 'ts_*' mydb > db.sql

To specify an upper-case or mixed-case name in -t and related switches,
you need to double-quote the name; else it will be folded to lower case (see Patterns).
But double quotes are special to the shell, so in turn they must be quoted.
Thus, to dump a single table with a mixed-case name, you need something like

$ pg_dump -t '"MixedCaseName"' mydb > mytab.sql

# export csv (\a switches to unaligned output so the field separator applies;
# \o without an argument reverts output to stdout and flushes the file)
psql -U <user> <database>
\a
\f ','
\o file.csv
SELECT <statement>;
\o


random numbers in javascript Jun

This is a stepwise procedure to generate a random number within a given range in JavaScript.

var number = Math.random(); // generate a random decimal in [0, 1)
number *= 10; // scale it into the range 0 to 10 (or any number you'd like to make the max limit)
number = Math.ceil(number); // round the result up to the nearest whole number

You can combine this into one line to easily create a random number generator:

var number = Math.ceil(10 * Math.random());
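One subtlety worth checking: Math.random() returns values in [0, 1), so 0 is a possible result but 1 is not, and the ceiling version can therefore yield 0. Here is the same arithmetic sketched in Python to compare the boundary behaviour of ceil against the safer floor-plus-one variant:

```python
import math

def scale_ceil(x, upper=10):
    # the one-liner above: ceil(upper * x)
    return math.ceil(upper * x)

def scale_floor(x, upper=10):
    # safer variant: floor(upper * x) + 1
    return math.floor(upper * x) + 1

# x = 0.0 is a legal return value of Math.random(); x close to 1 is the top end.
print(scale_ceil(0.0), scale_floor(0.0))      # 0 1  -- ceil falls outside the 1..10 range
print(scale_ceil(0.999), scale_floor(0.999))  # 10 10
```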


fixing jquery ajax caching Jun

Depending on browser settings you might run into problems with caching when calling pages through AJAX. I've run into issues with WebKit-based browsers (Chrome, Safari), as it seems they are more aggressive with their caching than Firefox and even IE. This was one of the few times where something worked in IE and not Safari.

If your AJAX call fetches a dynamically generated page whose content changes over time, you need to disable caching on the call.

In jQuery you can just set the cache option to false:

  $.ajax({
    url: "/path/to/page/",
    type: "POST",
    cache: false,
    success: function(data){ ... }
  });

Another technique is to append a random variable to the end of the URL on each call to trick the browser.

  $.ajax({
    url: "/path/to/page/?v=" + Math.ceil(10000 * Math.random()),
    type: "POST",
    success: function(data){ ... }
  });


create/export and load/import csv files with mysqldump in mysql May

I'm often asked to create csv files to import into other applications. If you're using MySQL it's handy to be able to create a CSV on the fly with mysqldump. This is easily achieved with the following command:

mysqldump -u <user> -p --tab=<directory> --fields-enclosed-by='"' --fields-terminated-by=',' <database> <table>

This creates a .sql dump file and a .txt CSV file in the directory that you specified with --tab. The second part of this operation (the .txt CSV creation) is performed by the MySQL server process, so that process needs write permission in the --tab directory. Also, the MySQL user needs the appropriate permissions (I believe it's the FILE privilege).

Alternatively you can import a csv into the database with the following command while logged into the mysql shell using the appropriate database:

LOAD DATA LOCAL INFILE '/path/to/my.csv' INTO TABLE <table>
(<field1>, <field2>, <field3>);
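To sanity-check a file produced with those flags, Python's csv module can parse it with matching delimiter and quote settings. The sample row here is made up for illustration:

```python
import csv
import io

# one row in the format the mysqldump flags above produce:
# fields terminated by ',' and enclosed by '"'
sample = '"1","hello, world","2009-05-01"\n'

reader = csv.reader(io.StringIO(sample), delimiter=',', quotechar='"')
rows = list(reader)
print(rows)  # [['1', 'hello, world', '2009-05-01']]
```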


auto-populate subject and body line in mailto link May

Sometimes I need to auto-populate the subject and body lines inside a mailto link. All you need to do is append subject and body as query-string variables.

<a href="mailto:<email>?subject=Your subject goes here.&body=Place the body text here.">Send me an email</a>

You can omit the email address if you want the user to be able to choose who she wants to send this to.

<a href="mailto:?subject=Your subject goes here.&body=Place the body text here.">Send me an email</a>
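Strictly speaking, spaces and punctuation in the subject and body should be percent-encoded to form a valid URL, even though most browsers tolerate the raw form above. A small Python sketch of building an encoded link (the helper name is mine):

```python
from urllib.parse import quote

def mailto_link(subject, body, address=""):
    # Percent-encode the query values; leave address empty to let
    # the user pick the recipient in their mail client.
    return "mailto:%s?subject=%s&body=%s" % (address, quote(subject), quote(body))

print(mailto_link("Your subject goes here.", "Place the body text here."))
# mailto:?subject=Your%20subject%20goes%20here.&body=Place%20the%20body%20text%20here.
```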