How to add GPS coordinates to your photos using data from Google

I have a Nikon DSLR and I wanted to add GPS location data to my photos, so that when I upload them to Flickr they appear on the map and I don't need to add the location by hand.

If you have an Android phone, you don't need any extra GPS hardware to get GPS locations onto photos taken with your camera.

The solution, if you have GPS enabled on your phone, is to export your location history from Google Maps using this URL: (the export option is in the dropdown under the gear icon at the bottom)

You will get a JSON file with GPS coordinates and timestamps for all the locations recorded by your phone.
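For reference, here is a minimal sketch of what one record in the exported `locations` array looks like and how to decode it. The field names (`timestampMs`, `latitudeE7`, `longitudeE7`) match Google's export format; the sample values are made up:

```python
import json
from datetime import datetime, timezone

# One location record, shaped like an entry in the exported "locations" array
sample = json.loads("""
{
  "timestampMs": "1552055000000",
  "latitudeE7": 500617000,
  "longitudeE7": 199372000
}
""")

# timestampMs is milliseconds since the epoch
when = datetime.fromtimestamp(int(sample['timestampMs']) / 1000.0, tz=timezone.utc)

# the coordinates are integers scaled by 1e7; divide to get plain degrees
lat = int(sample['latitudeE7']) / 1e7
lon = int(sample['longitudeE7']) / 1e7
print(lat, lon, when)
```

The script below does the same decoding when it writes the EXIF tags.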

Now, to extract the GPS data from the JSON file, I use this small script written in Python. It uses exiftool to write the EXIF data back to the file, because PIL can only read EXIF, not write it.

#!/usr/bin/env python

from __future__ import division
import simplejson as json
from PIL import Image
from dateutil import parser
from optparse import OptionParser
from subprocess import call
from datetime import datetime, timedelta

def get_date_taken(path):
    # EXIF tag 36867 is DateTimeOriginal ("YYYY:MM:DD HH:MM:SS")
    return Image.open(path)._getexif()[36867]

def nearest(items, pivot):
    return min(items, key=lambda x: abs(x - pivot))

def comparator(date, hours_shift = None):
    def compare(x):
        current = datetime.fromtimestamp(int(x['timestampMs']) / 1000.0)
        if hours_shift is not None:
            current = current + timedelta(seconds = hours_shift * 60 * 60)
        return abs(current - date)
    return compare

def get_gps(gps, date, hours_shift = None):
    # pick the location whose timestamp is nearest to the photo date
    return min(gps['locations'], key=comparator(date, hours_shift))

def parse_date(str):
    return datetime.strptime(str, "%Y:%m:%d %H:%M:%S")

def timestamp(dt, epoch=datetime(1970,1,1)):
    td = dt - epoch
    return (td.microseconds + (td.seconds + td.days * 86400) * 10**6) / 10**6

def timezone(date, hours):
    return date - timedelta(seconds = hours * 60 * 60)

if __name__ == '__main__':
    from sys import argv
    opt = OptionParser()
    opt.add_option('-l', '--location')
    opt.add_option('-t', '--timezone')
    (options, args) = opt.parse_args()
    if options.location is None or len(args) != 1:
        print "usage %s [--timezone <hours shift>] --location [History JSON File] <IMAGE FILE>" % argv[0]
    else:
        gps_list = json.loads(open(options.location).read())
        date = parse_date(get_date_taken(args[0]))
        if options.timezone is not None:
            loc = get_gps(gps_list, date, -float(options.timezone))
        else:
            loc = get_gps(gps_list, date)
        found = datetime.fromtimestamp(
            int(loc['timestampMs']) / 1000.0
        )
        print "%s == %s" % (date, found)
        # PIL can't write EXIF, so shell out to exiftool
        call([
            'exiftool',
            '-GPSLatitude=%s' % str(int(loc['latitudeE7']) / 1e7),
            '-GPSLongitude=%s' % str(int(loc['longitudeE7']) / 1e7),
            args[0]
        ])

To use it, open a terminal and execute the script with --timezone <hours shift> --location <Path to JSON> <Image File>.

The only issue I've found is that PIL can't extract EXIF from RAW/NEF files, so you can only read the creation time from JPEGs. You can still write EXIF to RAW/NEF files, though, so if you shoot in both JPG and RAW like I do, you can read the create time out of the JPG file and write the GPS data to both.
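Pairing the two formats is easy to script: read the date from the JPG, then point exiftool at the matching NEF. A small sketch of the filename pairing, assuming the camera writes pairs with identical basenames (as Nikon does):

```python
from pathlib import Path

def raw_counterpart(jpeg_path):
    """Return the RAW (.NEF) file that was shot alongside this JPEG.
    Assumes the camera writes pairs like DSC_0001.JPG / DSC_0001.NEF."""
    return Path(jpeg_path).with_suffix('.NEF')

nef = raw_counterpart('DSC_0001.JPG')
```

You would read the date from the `.JPG` with PIL and pass the `.NEF` path to exiftool instead.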

EDIT: I was on a trip to Cracow in March 2019, walking from my hotel to the train station and taking pictures. When I checked at home, the GPS matched without any timezone shift, so maybe the shift only comes from Daylight Saving Time. (When I was in Tuscany in April 2018 the shift was 1, which would be about right, because in April summer time is in effect in Poland.)

How to automate uploading SSL certificates from Let's Encrypt to DirectAdmin

On my shared hosting (where I have my personal website and my Polish blog) I have access to the DirectAdmin panel. It has an option to set an SSL certificate, but it doesn't support Let's Encrypt out of the box.

So I decided to create a Python script that uploads certificates for all of the domains I can access from DirectAdmin, and also uploads the HTTP challenges that Let's Encrypt uses to confirm that I'm in control of each domain.

The script can be found in a GitHub gist.

You need to update the script with your username/password, FTP host, and the URL of the DirectAdmin interface. In my case I used a static URL that redirected to a different port on the hosting provider's domain; I used that URL because the real server and domain change from time to time. That's why the script uses a recursive function that sends HEAD requests to find the real URL after the redirects.
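The redirect-following idea can be sketched like this. Here `head` is an injected callable (my own device, not from the gist) that performs a HEAD request and returns the status code and `Location` header, which keeps the recursion testable without touching the network:

```python
def resolve_redirects(url, head, limit=10):
    """Recursively follow HTTP redirects until reaching the final URL.
    `head` does a HEAD request and returns (status, location)."""
    status, location = head(url)
    if status in (301, 302, 303, 307, 308) and location and limit > 0:
        return resolve_redirects(location, head, limit - 1)
    return url

# Example with a canned redirect chain instead of a real server
chain = {'http://a.example': (302, 'http://b.example'),
         'http://b.example': (301, 'http://c.example'),
         'http://c.example': (200, None)}
final = resolve_redirects('http://a.example', lambda u: chain[u])
```

In the real script, `head` would be implemented with an HTTP client that does not auto-follow redirects.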

The gist includes a bash script that can be run from cron (you need to put in your domains and subdomains). If you execute the script for the first time and haven't used certbot before, you will need to provide your email and agree to the TOS (using command-line options).

The script works in two modes: if the certbot environment variables are set, it uploads a challenge to FTP; otherwise it sets the SSL certificate for each domain using DirectAdmin.
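The two-mode dispatch can be sketched by checking the environment variables certbot sets when it calls a script as an auth hook (`CERTBOT_DOMAIN` and `CERTBOT_VALIDATION` are real certbot hook variables; the function itself is my illustration, not code from the gist):

```python
import os

def pick_mode(env=None):
    """Decide which mode to run in: certbot sets CERTBOT_DOMAIN and
    CERTBOT_VALIDATION when invoking the script as an auth hook;
    without them we are running standalone and should install certs."""
    if env is None:
        env = os.environ
    if 'CERTBOT_DOMAIN' in env and 'CERTBOT_VALIDATION' in env:
        return 'upload-challenge'
    return 'install-certificates'
```

The real script would then either upload the validation file over FTP or log in to DirectAdmin, depending on the returned mode.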

Instructions for installing certbot can be found online. After installation, you can find short docs in the man page for certbot that explain each command-line option.

I've tested the script only on GNU/Linux. On Windows/macOS it will require modifications, such as the path to the certificates generated by Let's Encrypt.

Python one-liner for CSS file compression

I needed a script to compress CSS files, so I wrote this Python one-liner to do it. It removes unnecessary whitespace and strips comments.

python -c 'import re,sys;print re.sub("\s*([{};,:])\s*", "\\1",
re.sub("/\*.*?\*/", "", re.sub("\s+", " ", sys.stdin.read())))' < style.css
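The same three substitutions, unpacked into a reusable (Python 3) function so the order of operations is visible; whitespace is collapsed first so that the non-greedy comment regex doesn't have to cross newlines:

```python
import re

def compress_css(css):
    """Minify CSS: collapse whitespace, strip /* */ comments,
    then drop the spaces around punctuation."""
    css = re.sub(r"\s+", " ", css)                 # collapse runs of whitespace
    css = re.sub(r"/\*.*?\*/", "", css)            # strip comments (non-greedy)
    css = re.sub(r"\s*([{};,:])\s*", r"\1", css)   # tighten around { } ; , :
    return css.strip()

out = compress_css("body {\n  color: red; /* note */\n}")
```

Note that this is naive minification: it would also mangle whitespace inside quoted strings, which is fine for simple stylesheets.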

Download files from Rapidshare

Here is a function written in Python for downloading files from Rapidshare. Unfortunately, you must wait a couple of seconds; you can't skip this, because it seems the countdown happens on the server side too.

from urllib2 import urlopen, Request
from urllib import urlencode
import os
import re
import time

class DownloadLimitException(Exception):
    def __init__(self, *arg):
        Exception.__init__(self, *arg)

def download(url):
    page = urlopen(url).read()
    action_url_regex = '<form id="[^"]*" action="([^"]*)"'
    s_url = re.search(action_url_regex, page)
    if s_url:
        url = s_url.group(1)
        data = urlencode({'dl.start': 'Free'})
        request = Request(url, data)
        page = urlopen(request).read()
        action_url_regex = '<form name="[^"]*" action="([^"]*)"'
        c_regex = 'var c=([0-9]*);'
        limit_regex = 'You have reached the download limit for free-users'
        if re.search(limit_regex, page):
            raise DownloadLimitException(limit_regex)
        s_url = re.search(action_url_regex, page)
        s_c = re.search(c_regex, page)
        if s_url and s_c:
            url = s_url.group(1)
            c = int(s_c.group(1))
            print "wait %i seconds" % c
            # the countdown is enforced server-side, so we really have to wait
            time.sleep(c)
            os.system('wget "%s"' % url)

You can use this function like this. Notice that you must have wget installed on your system.

def main():
    from sys import argv
    if len(argv) == 2:
        try:
            download(argv[1])
        except DownloadLimitException:
            print "limit reached"

if __name__ == '__main__':
    main()

Program for showing similar things

On the site you can find similar things: movies, books and music. They provide an API which returns an XML or JSON response. Here is code in Python which uses this API. You can download the source code from GitHub.

This program:

  • display similar things if you run it without options
  • display information about a movie/book/show/author if you pass the -i option
  • display descriptions if you use the -d option
  • and translate them to your language with the -l option.

Check the -h option for more info.

from urllib2 import urlopen, Request, HTTPError
from urllib import quote_plus
from sys import argv, stderr
from os.path import basename
from getopt import getopt, GetoptError
from StringIO import StringIO
import json
import re

def gets(dict, *keys):
    "Generate values from the dictionary for each key on the list."
    for e in keys:
        yield dict[e]

def partial_first(fun, *args):
    "Return a single-argument function which executes fun with extra arguments."
    def p(object):
        return fun(object, *args)
    return p

def gets_fun(*args):
    "Return a function which returns dictionary values for keys on the list."
    return partial_first(gets, *args)

class ServerJsonException(Exception):
    pass

class Similar(object):
    def __init__(self, stuff, type=None):
        query = quote_plus(stuff)
        if type:
            query = '%s:%s' % (type, query)
        url = ''
        response_data = urlopen(url % query).read()
        if not re.match('{.*}', response_data):
            raise ServerJsonException
        # fix malformed json
        response_data = response_data.replace('}{', '},{')
        self.data = json.load(StringIO(response_data))

    def infos(self):
        for i in self.data['Similar']['Info']:
            yield Similar.Stuff(*list(gets(i, 'Name', 'Type', 'wTeaser')))

    def similar(self):
        "Generate a list of Stuff."
        results = self.data['Similar']['Results']
        if len(results) == 0:
            yield None
        for result in results:
            elems = list(gets(result, 'Name', 'Type', 'wTeaser', 'yTitle', 'yUrl'))
            yield Similar.Stuff(*elems)

    class Stuff(object):
        def __init__(self, name, type, description, y_title=None, y_url=None):
            self.name = name.encode('UTF-8')
            self.type = type.encode('UTF-8')
            self.description = description.encode('UTF-8')
            if y_title:
                self.y_title = y_title.encode('UTF-8')
            if y_url:
                self.y_url = y_url.encode('UTF-8')
usage = """usage:
%s -d -i -y -l  
d - display descriptions
i - display only info
y - display youtube links
l - translate descriptions
    lang should be one of:
    af - afrikaans
    sq - Shqip
    ar - عربي
    be - Беларускі
    bg - Български
    zh-CN - 中文（简体）
    zh-TW - 中文（繁體）
    hr - Hrvatski
    cs - Český
    da - Danske
    et - Eesti
    tl - Filipino
    fi - Suomi
    fr - Français
    gl - Galego
    el - Ελληνικά
    iw - עברית
    hi - हिन्दी
    es - Español
    nl - Nederlands
    id - Bahasa Indonesia
    ga - Gaeilge
    is - Íslenska
    ja - 日本語
    yi - ייִדיש
    ca - Català
    ko - 한국의
    lt - Lietuvos
    lv - Latvijas
    mk - Македонски
    ms - Melayu
    mt - Malti
    de - Deutsch
    no - Norsk
    fa - فارسی
    pl - polski
    ru - Русский
    ro - Română
    sr - Српски
    sk - Slovenský
    sl - Slovenski
    sw - Swahili
    sv - Svenska
    th - ภาษาไทย
    tr - Türk
    uk - Український
    cy - Cymraeg
    hu - Magyar
    vi - Việt
    it - Italiano

put "band:", "movie:", "show:", "book:" or "author:" before name if you want to specify search
""" % basename(argv[0])

def main():
    try:
        opts, rest = getopt(argv[1:], 'dl:iyh')
    except GetoptError:
        print usage
        return
    opts = dict(opts)
    if opts.has_key('-h'):
        print usage
        return
    description = opts.has_key('-d')
    info = opts.has_key('-i')
    youtube = opts.has_key('-y')
    lang = opts.get('-l')
    if len(rest) == 0:
        print usage
    else:
        try:
            stuff = Similar(' '.join(rest))
            if info:
                for info in stuff.infos():
                    print '%s (%s)' % (info.name, info.type)
                    if lang:
                        from xgoogle.translate import Translator
                        translate = Translator().translate
                        print translate(info.description, lang_to=lang)
                    else:
                        print info.description
            else:
                for stuff in stuff.similar():
                    if stuff is None:
                        break
                    print '%s (%s)' % (stuff.name, stuff.type)
                    if youtube:
                        print 'Youtube:'
                        print '\t%s' % stuff.y_title
                        print '\t%s' % stuff.y_url
                    if description:
                        if lang:
                            from xgoogle.translate import Translator
                            translate = Translator().translate
                            print translate(stuff.description, lang_to=lang)
                        else:
                            print stuff.description
        except ServerJsonException:
            print >> stderr, "Error: can't read received data from the server"

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        # when the user hits Ctrl-C
        pass
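One quirk worth noting: the `replace('}{', '},{')` line in the constructor exists because the server sometimes returns several JSON objects concatenated back to back. A (Python 3) sketch of the same fix, here wrapping the result in a list so it parses as an array:

```python
import json

def fix_concatenated_json(raw):
    """The API sometimes returns '{...}{...}' instead of valid JSON;
    patching the seams and wrapping in [] makes it parseable."""
    return json.loads('[%s]' % raw.replace('}{', '},{'))

records = fix_concatenated_json('{"a": 1}{"b": 2}')
```

This is naive (it would also fire on a literal `}{` inside a string), but it is enough for the responses this API sends back.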

You need the xgoogle library for it to work; you can download it from here.

You can use it like this (you can rename the script to like):

like Matrix
like Matrix, Ghost in the shell

If you want to check two or more movies/books/shows, separate them with commas. If a movie and a book share the same name, you can put the type of thing before the name:

like movie:the gathering

like music:the gathering

Allowed types are movie, show, book and author.

If you want to download all the YouTube files, install youtube-dl; on Debian/Ubuntu:

apt-get install youtube-dl


and then run the script:

for i in ` -y $1 | grep http`; do
    youtube-dl $i
done