Looking for similar point and click games

I’m on a constant lookout for humorous, sofa-friendly point-and-click/adventure games. Please send me a suggestion if you know one.

Here’s the list of games I’ve played. I hope it gets indexed so you can find it when you’re on a quest for a new game. I highly recommend all of the ones below:

  • Anna’s Quest
  • Broken Sword 1
  • Randal’s Monday
  • Book of Unwritten Tales
  • Book of Unwritten Tales – Critter Chronicles
  • Book of Unwritten Tales 2
  • Armikrog
  • Deponia
  • Deponia Doomsday
  • Edna & Harvey: The Breakout
  • Edna & Harvey: Harvey’s New Eyes
  • Goodbye Deponia
  • Her Majesty’s Spiffing
  • Kelvin and the Infamous Machine
  • Oxenfree
  • Tales of Monkey Island
  • Tesla Effect
  • Escape from Monkey Island™
  • Leisure Suit Larry

All above games are available on Steam and Humble Bundle.


imagemagick and ffmpeg cheat sheet

Make a sprite from photos

magick.exe  *.jpg +append sprites_P0100.jpg

Resize all images in place

magick.exe mogrify -resize 380 *.jpg

Crop images in place

magick.exe mogrify -crop 3200x3200+400+400 *.jpg

Make and scale video from images

ffmpeg -i DSC_%04d.jpg -c:v libx264 -vf "fps=25,scale=800:800" alpin_400.mp4

Smart Home DIY on a tight budget

After twenty years of reading about smart homes I decided to finally make mine smart(-ish) as well. I’m reusing as much of my existing infrastructure as possible, so I can spend as little as possible.

At the moment my system consists of:
* hub
* four google assistant speakers
* hive thermostat
* cctv camera
* two electric switches
More will follow shortly (they are on their way from China).

The hub

My hub is based on a QNAP TS-453A NAS, which I already had. I wouldn’t buy it just for this – a Raspberry Pi would work as well. The NAS runs the QTS operating system, which is basically a Linux machine with a very convenient web UI. Among many features it offers “Container Station”, a Docker subsystem with a large set of packages ready to install. I’m running two:
* Home Assistant – open-source hub for home automation (hass.io)
* Eclipse Mosquitto – MQTT broker working as a transport layer between the switches and the hub

Note: Container Station offers three ways to connect Docker-deployed apps to the network. I’m using “Host” mode, which means the apps bind directly to the network interface of the NAS. You want to set it that way so the devices on your network can easily reach both apps.

Google assistants

Home Assistant is available as a service for Google Assistant. There’s currently a limited set of accepted commands – they support lights and thermostats only. Look for the Hass.io service in the Assistant directory.

All switches/lights can be renamed from Google Home app on Android.


I’m using the cheapest Sonoff Basic switches (less than £4 on Banggood) flashed with the custom Sonoff-Tasmota firmware.
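For reference, wiring a Tasmota-flashed Sonoff into Home Assistant boils down to an MQTT switch entry in configuration.yaml. A minimal sketch – the topic names depend on how you configured Tasmota, and “sonoff1” is just an assumed device name, not from my setup:

```yaml
switch:
  - platform: mqtt
    name: "Living room lamp"
    command_topic: "cmnd/sonoff1/POWER"
    state_topic: "stat/sonoff1/POWER"
    payload_on: "ON"
    payload_off: "OFF"
    retain: false
```

Both topics go through the Mosquitto broker running on the hub, so make sure Home Assistant’s MQTT integration points at it.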


Case Study: Using Machine Learning to find my Teddy Bear


For the last 8 years I’ve been travelling with my teddy bear (Optymis), taking “selfies” of him wherever I go. The result is a massive collection of photos like these:

I wanted to find them all to create an album.

My wife and I have accumulated a lot of photos – my Google Drive shows over 20,000 photos taken with my mobile phone, and we have an additional 450,000 photos stored on the NAS drive. I didn’t fancy browsing through all that manually.

Solution idea

Machine learning excels at image recognition, so I decided to try this approach. My friend suggested looking for a pretrained model instead of starting from scratch. A quick search revealed that the Inception model from TensorFlow contains a “teddy bear” class, so it should work well for me.

Inception, a CNN (Convolutional Neural Network) developed by Google, is a mature, very sophisticated network and, luckily for me, is distributed with a checkpoint file containing the network state after training on 1.2M images from the ImageNet contest. I decided to use the latest available version, Inception v4, with the inception_v4_2016_09_09.tar.gz checkpoint (available to download here: https://github.com/tensorflow/models/tree/master/research/slim).


For development I used a subset of 1323 images, of which 650 contained the teddy. I sorted the photos manually to get a benchmark for the network’s results.

My first approach was to feed the whole image at once into the network and take the five classes with the highest score as the answer. This is naive and can be improved by using a score threshold instead of a fixed number of best guesses. That’s the first optimization point.
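The difference between the two strategies can be sketched in a few lines. This is a toy illustration, not my actual pipeline – the class names and logit values below are made up:

```python
import math

# Hypothetical logits for one photo (made-up values, illustrative names only)
names = ['church', 'alp', 'teddy, teddy bear', 'lakeside', 'dogsled']
logits = [4.1, 3.7, 2.9, 1.2, 0.3]

# Naive approach: always report the five best guesses, however weak they are
top5 = sorted(range(len(logits)), key=lambda i: -logits[i])[:5]

# Improvement: softmax the logits and keep only guesses above a threshold
total = sum(math.exp(l) for l in logits)
probs = [math.exp(l) / total for l in logits]
confident = [names[i] for i, p in enumerate(probs) if p > 0.05]

print([names[i] for i in top5])  # all five classes, even the weak ones
print(confident)                 # only the confident classes survive
```

With a threshold, a weak fifth-place guess no longer counts as a “detection”, which is exactly what the fixed top-5 approach gets wrong.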

The result was better than expected, but far from ideal.

Total Files: 1323
Total Positives: 650

Missed: 273
False positives:  7

I’ve checked the data and I’ve noticed that the bear was sometimes recognized as a dog, so I’ve tried to massage the data by loosening the criteria and allowing various breeds of dogs to be treated as Optymis:

Total Files: 1323
Total Positives: 650

Loose match ('teddy bear' and various dogs breeds):
Missed: 186
False:  13

There’s an improvement in matches, but I’m getting more false positives. This wasn’t the solution and, as you’ll see later, it gets worse. The takeaway: don’t make silly, random changes just because they seem to work in one specific case. If something sounds wrong, it is wrong.

Making it better – understanding the input data

My photos are “holiday snaps”, not portraits of my bear, and are meant to show the objects and scenery behind him. Because of that, Optymis is usually positioned so that he takes up a small portion of the picture.

My first approach asked the network to recognise objects in the whole picture at once, so in many cases it found a mountain or a church and missed the bear. So I decided to split the image into smaller chunks and process it in bits. I used a sliding window with 50% of the width and 50% of the height of the image. I used an overlap in the X axis only, to limit the number of images being worked on – that was pretty safe, as most of my images are landscape and the bear is almost always near the bottom edge (to hide my hand). I left full-image processing as a 7th step to catch the rare cases. This is another naive approach. In my data the bear is almost never in the 2nd or 3rd box, so I could skip them to optimise for speed.
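The window geometry above can be sketched standalone. This mirrors the crop arithmetic used in the source code further down (width/2 × height/2 windows, X advancing by a quarter of the width, plus the full frame as a final pass):

```python
def sliding_windows(width, height, stepsX=2, stepsY=2):
    """Boxes (left, top, right, bottom) for windows of 50% width and
    50% height, overlapping along X only, plus the whole frame last."""
    windowwidth = width / stepsX
    windowheight = height / stepsY
    stepX = width / (stepsX + 2)   # X advances by a quarter of the width
    stepY = height / stepsY        # Y does not overlap
    boxes = []
    for x in range(stepsX + 1):
        for y in range(stepsY):
            boxes.append((stepX * x, stepY * y,
                          stepX * x + windowwidth, stepY * y + windowheight))
    boxes.append((0, 0, width, height))  # 7th pass: the full image
    return boxes

# A 1200x800 landscape photo yields 6 half-size crops plus the full frame:
print(len(sliding_windows(1200, 800)))  # 7
```

Each of the 7 boxes is then resized to 299×299 and sent through the network independently.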

The sliding window approach gave amazing results:

Total processing time of 1323 items was 863.86130s
	of which ML time was 519.58978s

Strict ('teddy bear' found):
Missed: 23
False:  8

Loose match ('teddy bear' and various dogs breeds):
Missed: 21
False:  25

The error rate is less than 2.5%, which is incredible. There are some photos of different teddy bears (the network wasn’t trained on Optymis, so it catches other bears too). The application didn’t work correctly on panoramic images, which was expected – the image is resized to 299×299 before it’s fed to the network, so the wider the image, the greater the distortion. This can be fixed with improvements to the sliding window sizes.
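One way to fix the panorama case – my sketch of the idea, not code from the project – is to march square, height-sized windows across the image, so every crop keeps a 1:1 aspect ratio before the 299×299 resize:

```python
def square_windows(width, height, overlap=0.5):
    """Square, height-sized windows marching across a panorama, so no
    crop gets squashed when resized to the network's square input."""
    size = min(width, height)
    step = max(1, int(size * (1 - overlap)))
    boxes = []
    left = 0
    while left + size <= width:
        boxes.append((left, 0, left + size, height))
        left += step
    if boxes and boxes[-1][2] < width:  # make sure the right edge is covered
        boxes.append((width - size, 0, width, height))
    return boxes

# A 3000x1000 panorama becomes five 1000x1000 crops instead of one
# heavily squashed 299x299 image:
print(len(square_windows(3000, 1000)))  # 5
```

The cost is more crops per photo, but only for panoramas, which are rare in my collection.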

The network also uncovered nearly 20 mistakes I had made during manual classification – both false positives and missed positives.

Here’s one example of an image I missed but the network recognised correctly. Chapeau bas!

The initial “optimisation” I made (treating “dogs” as positives for the bear) gave much worse results – the penalty for a slightly lower miss rate is a higher increase in false positives. It’s probably not worth it.


The test was done using 1323 photos, nearly 4.7GB in total, stored in a VeraCrypt volume.
The code I wrote is quick and dirty, without much (premature) optimisation. It runs on a Windows 10 box with an i7 and a GTX 1070 GPU. The program is single-threaded and runs in a loop: open file, cut, scale, recognise, store the result in a MySQL database.

Total processing time of 1323 items was 863.86130s
	of which ML time was 519.58978s

TensorFlow takes about 11 seconds to initialise. After that it processes a photo (7 runs – one for each sliding window) in about 0.4s. Average GPU utilisation is 60%, with the remaining 40% of the time spent preparing input data and storing results. It should be fairly easy to shave off some of the preprocessing time.

The CPU utilisation reported by Windows for this program is about 15%, which is more than expected for a single-core application (this is a 12-thread CPU). Some of the libraries used must be doing multi-threading themselves (cool!).

Memory usage is negligible – about 130MB.

Production run on a bigger data set shows consistent results.


This weekend project was very successful. The recognition rate is incredibly high and the performance is acceptable. At 863.86s for 1323 photos, that’s roughly 0.65s per photo, so it would take less than 4 days to process ten years’ worth of my photos.

Source Code


import tensorflow as tf
from nets.inception_v4 import inception_v4
import nets.inception_utils as inception_utils
from PIL import Image
import numpy as np
from datasets import imagenet
from timeit import default_timer as timer

class FindABear():

    im_size = 299

    def __init__(self, checkpoint_path='inception_v4.ckpt'):
        start = timer()
        self.num_top_predictions = 5
        self.names = imagenet.create_readable_names_for_imagenet_labels()
        slim = tf.contrib.slim
        self.sess = tf.Session()
        inception_v4.default_image_size = self.im_size
        arg_scope = inception_utils.inception_arg_scope()
        self.inputs = tf.placeholder(tf.float32, (None, self.im_size, self.im_size, 3))

        with slim.arg_scope(arg_scope):
            self.logits, end_points = inception_v4(self.inputs, is_training=False)

        # restore the pretrained weights from the downloaded checkpoint
        saver = tf.train.Saver()
        saver.restore(self.sess, checkpoint_path)

        end = timer()
        self.init_time = end - start

    def find(self, image):
        start = timer()
        im = Image.open(image)
        im = im.resize((self.im_size, self.im_size))
        im = np.array(im)
        im = im.reshape(-1, self.im_size, self.im_size, 3)
        im = 2. * (im / 255.) - 1.  # scale pixels to [-1, 1] as Inception expects
        results, mltime = self.findInImage(im)

        end = timer()
        return results, (end - start)

    def findWithSlidingWindow(self, image):
        start = timer()

        resultsAll = []
        totalmltime = 0

        im = Image.open(image)
        width, height = im.size

        # X steps will be overlapping, Y steps won't
        stepsX = 2
        stepsY = 2

        windowwidth = (width / stepsX)
        windowheight = (height / stepsY)

        stepX = (width / (stepsX + 2))
        stepY = (height / stepsY)

        for x in range(0, stepsX + 1):
            for y in range(0, stepsY):
                #print("crop to (%d,%d,%d,%d)" % (stepX * x, stepY * y, stepX * x + windowwidth, stepY * y + windowheight))
                im2 = im.crop((stepX * x, stepY * y, stepX * x + windowwidth, stepY * y + windowheight))
                im2 = im2.resize((self.im_size, self.im_size))
                im2 = np.array(im2)
                im2 = im2.reshape(-1, self.im_size, self.im_size, 3)
                im2 = 2. * (im2 / 255.) - 1.
                results, mltime = self.findInImage(im2)
                resultsAll = resultsAll + results
                totalmltime += mltime

        # 7th pass: the whole image, to catch the rare cases
        im = im.resize((self.im_size, self.im_size))
        im = np.array(im)
        im = im.reshape(-1, self.im_size, self.im_size, 3)
        im = 2. * (im / 255.) - 1.
        results, mltime = self.findInImage(im)
        resultsAll = resultsAll + results
        totalmltime += mltime

        end = timer()
        return resultsAll, (end - start), totalmltime

    def findInImage(self, im):
        start = timer()

        logit_values = self.sess.run(self.logits, feed_dict={self.inputs: im})
        predictions = logit_values[0]

        results = []
        top_k = predictions.argsort()[-self.num_top_predictions:][::-1]
        for node_id in top_k:
            human_string = self.names[node_id]
            score = predictions[node_id]
            results.append((node_id, score, human_string))

        end = timer()
        return results, (end - start)


from find_a_bear import FindABear
import mysql.connector
import os

class Runner:

    def initDb(self):
        # host/database are placeholders – fill in your own connection details
        self.cnx = mysql.connector.connect(user='****', password='****',
                                           host='127.0.0.1', database='findabear')

    def cleanUp(self):
        self.cnx.close()

    def findCandidates(self, start_path):
        addFileQuery = ("INSERT IGNORE INTO files(filename) values (%(filename)s)")
        cursor = self.cnx.cursor()
        for dirpath, dirnames, filenames in os.walk(start_path):
            for filename in [f for f in filenames if (f.endswith(".jpg") or f.endswith(".JPG"))]:
                cursor.execute(addFileQuery, {"filename": os.path.join(dirpath, filename)})
        self.cnx.commit()

    def findPositives(self, start_path, data_path):

        addPositivesQuery = ("INSERT IGNORE INTO positives(filename) values (%(filename)s)")

        cursor = self.cnx.cursor()

        for dirpath, dirnames, filenames in os.walk(start_path):
            for filename in [f for f in filenames if (f.endswith(".jpg"))]:
                # the manually sorted copies live under start_path; map each one
                # back to its original location under data_path
                fullfilename = os.path.join(dirpath, filename)
                fullfilename = fullfilename.replace(start_path, data_path)
                cursor.execute(addPositivesQuery, {"filename": fullfilename})
        self.cnx.commit()

    def processFiles(self):
        addResultQuery = ("INSERT INTO results (id_files, score, name_id, name) values (%(id_files)s, %(score)s, %(name_id)s, %(name)s)")
        findFilesToProcessQuery = ("select id_files, filename from files where result is null")

        cursor = self.cnx.cursor()
        cursor.execute(findFilesToProcessQuery)

        files = []
        for (id_files, filename) in cursor:
            files.append((id_files, filename))

        if len(files) == 0:
            print("No new files")
            return

        finder = FindABear()
        print("Init time %.3f" % finder.init_time)

        cursor = self.cnx.cursor()

        total_items = 0
        total_processing_time = 0
        total_ml_time = 0

        for (id_files, filename) in files:
            try:
                results, processing_time, ml_time = finder.findWithSlidingWindow(filename)
                total_items += 1
                total_processing_time += processing_time
                total_ml_time += ml_time
                #print('Processing time %.3f' % processing_time)
                for result in results:
                    name_id, score, name = result
                    cursor.execute(addResultQuery, {"id_files": id_files, "score": float(score), "name": name, "name_id": int(name_id)})

                # store a short summary so the file is not picked up again
                allresults = ", ".join(name for (_, _, name) in results)[:200]
                updateQuery = ("update files set result=%(result)s where id_files=%(id_files)s")
                cursor.execute(updateQuery, {"result": allresults, "id_files": id_files})
                self.cnx.commit()

            except ValueError:
                print("Error processing %s" % filename)

            if (total_items % 100 == 0):
                print("\tProcessing time so far of %d items was %.5f" % (total_items, total_processing_time))
                print("\t\tof which ML time was %.5f" % total_ml_time)

        print("Total processing time of %d items was %.5f" % (total_items, total_processing_time))
        print("\tof which ML time was %.5f" % total_ml_time)

    def printResults(self):
        cursor = self.cnx.cursor()
        getAllQuery = "select filename, f.id_files, name from files f left join results r on f.id_files=r.id_files"
        cursor.execute(getAllQuery)
        for (filename, id_files, name) in cursor:
            print("%d %s %s" % (id_files, filename, name))

    def calculateStats(self):
        cursor = self.cnx.cursor()

        print("Updating stats")
        cursor.execute("update files set loosly_ok=false, strict_ok=false")
        cursor.execute("update files set strict_ok=true where id_files in (select id_files from results where name='teddy, teddy bear')")
        cursor.execute("""update files set loosly_ok=true where id_files in (select id_files from results where name in ( 
            'toy poodle',
            'standard poodle',
            'miniature poodle',
            'cocker spaniel, English cocker spaniel, cocker',
            'Airedale, Airedale terrier',
            'wire-haired fox terrier',
            'Welsh springer spaniel',
            'Irish water spaniel',
            'Brittany spaniel',
            'Irish terrier',
            'Bedlington terrier',
            'Eskimo dog, husky',
            'English foxhound',
            'French bulldog'
            ))""")
        self.cnx.commit()


    def displayStats(self):
        cursor = self.cnx.cursor()

        cursor.execute("SELECT count(*) FROM  `positives` ")
        totalPositives = cursor.fetchone()[0]

        cursor.execute("SELECT count(*) FROM  files ")
        totalFiles = cursor.fetchone()[0]

        cursor.execute("SELECT count(*) FROM  `positives` p left join files f on f.filename=p.filename WHERE f.strict_ok =false")
        missedStrict = cursor.fetchone()[0]

        cursor.execute("SELECT count(*) FROM  `positives` p left join files f on f.filename=p.filename WHERE f.strict_ok =false and f.loosly_ok = false")
        missedLoose = cursor.fetchone()[0]

        cursor.execute("select count(*) from files f left join positives p on f.filename=p.filename where f.strict_ok = true and p.id_positives is null")
        falseStrict = cursor.fetchone()[0]

        cursor.execute("select count(*) from files f left join positives p on f.filename=p.filename where (f.strict_ok =true or loosly_ok = true) and p.id_positives is null")
        falseLoose = cursor.fetchone()[0]

        print("Total Files: %s" % totalFiles)
        print("Total Positives: %s" % totalPositives)

        print("\nStrict ('teddy bear' found):")
        print("Missed: %s" % missedStrict)
        print("False:  %s" % falseStrict)

        print("\nLoose match ('teddy bear' and dogs):")
        print("Missed: %s" % missedLoose)
        print("False:  %s" % falseLoose)

runner = Runner()
runner.initDb()
runner.findCandidates("m:\\Google Drive\\Google Photos (1)")
runner.findPositives("e:\\workspace\\znajdz_optymisie\\using_inception_v4\\images","m:\\Google Drive\\Google Photos")
runner.processFiles()
runner.calculateStats()
runner.displayStats()
runner.cleanUp()

Database schema:

CREATE TABLE `files` (
  `id_files` int(11) NOT NULL,
  `filename` varchar(255) NOT NULL,
  `result` varchar(200) DEFAULT NULL,
  `strict_ok` tinyint(1) DEFAULT NULL,
  `loosly_ok` int(11) DEFAULT NULL
);

CREATE TABLE `positives` (
  `id_positives` int(11) NOT NULL,
  `filename` varchar(200) NOT NULL
);

CREATE TABLE `results` (
  `id_results` int(11) NOT NULL,
  `id_files` int(11) NOT NULL,
  `score` float NOT NULL,
  `name_id` int(11) NOT NULL,
  `name` varchar(200) NOT NULL
);

ALTER TABLE `files`
  ADD PRIMARY KEY (`id_files`),
  ADD UNIQUE KEY `filename` (`filename`);

ALTER TABLE `positives`
  ADD PRIMARY KEY (`id_positives`),
  ADD UNIQUE KEY `filename` (`filename`);

ALTER TABLE `results`
  ADD PRIMARY KEY (`id_results`),
  ADD KEY `name` (`name`);

Tensorflow 1.5 built with AVX support

TL;DR – download TensorFlow 1.5 with AVX support from the link at the bottom of this post.

When running machine learning code on new hardware using the libraries available on pip, we are not using all the capabilities of our CPU:
2018-01-10 09:35:05.048387: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2

Last night I rebuilt TensorFlow to support AVX CPU instructions. Setting up the build takes about an hour. The build itself took 2 hours 20 minutes on my i7-8700k desktop with Windows 10 and hit the computer quite hard.

I used the official build manual (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/cmake/README.md), but it doesn’t mention all the requirements:
* you need to install numpy in the environment you use for the build
* you need to install wheel in the environment you use for the build (otherwise it fails after 2 hours of building – sweet)
* if building against CUDA 9.1 you need to copy math_functions.h from cuda91/include/crt/ to the cuda91/include directory (otherwise it fails after 1 hour of building)

The results?
Sample program without AVX:
start: 2018-01-10 09:35:04.609053
finish:2018-01-10 09:36:00.339329

total: ~55.7s

The same code with AVX:
start: 2018-01-10 09:36:18.167291
finish:2018-01-10 09:36:55.693329

total: ~37.5s

Here is the wheel file with AVX support – tensorflow_gpu-1.5.0rc0-cp36-cp36m-win_amd64.whl – if you don’t want to run the build process yourself.

And the CPU usage during the build (I got a new computer yesterday and I’m still excited by the new toy :))
