Adding swap space to your DigitalOcean droplet if you run out of RAM

The Open Event Android App generator runs on a DigitalOcean droplet. The deployment runs on a USD 10 box that has 1 GB of RAM, but for testing I often use a USD 5 box that has only 512 MB of RAM.

When trying to build an Android app using Gradle and Java 8, you can easily run out of RAM (especially on the 512 MB box).

What we can do to remedy this problem is create a swap file. On an SSD-based droplet, swap space is fast enough for this purpose, because SSDs have very high read/write speeds.

Check hard disk space availability using

df -h

The output should look something like this:

Filesystem      Size  Used Avail Use% Mounted on
udev            238M     0  238M   0% /dev
tmpfs            49M  624K   49M   2% /run
/dev/vda1        20G  1.1G   18G   6% /
tmpfs           245M     0  245M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           245M     0  245M   0% /sys/fs/cgroup
tmpfs            49M     0   49M   0% /run/user/1001

The steps to create a swap file and enable it as swap are:

sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

We can verify using

sudo swapon --show
NAME      TYPE  SIZE USED PRIO
/swapfile file 1024M   0B   -1

And now if we check RAM usage using free -h, we'll see:

              total        used        free      shared  buff/cache   available
Mem:           488M         37M         96M        652K        354M        425M
Swap:          1.0G          0B        1.0G

Do not use this as a permanent measure on an SSD-based filesystem. Sustained heavy swapping wears the SSD down over time, which is why DigitalOcean discourages long-term swap use on droplets. We use this only for short periods of time to help us build Android APKs on low-RAM systems. Note that a swap file enabled this way does not persist across reboots anyway, unless you also add it to /etc/fstab.
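Once the build is done, the swap file can be turned off and deleted again. A small cleanup sketch (run it only when nothing is actively using the swap):

sudo swapoff /swapfile
sudo rm /swapfile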

Doing a table join in Android without using rawQuery

The Open Event Android App downloads data from the API (about events, sessions, speakers etc.) and saves it locally in an SQLite database, so that the app can work even without an internet connection.

Since there are multiple entities like Sessions, Speakers and Events, and each Session holds the ids of its speakers, the id of its venue and so on, we often need to use JOIN queries to combine data from two tables.

 

Android has some really nice SQLite helper classes and methods. The ones I like most are SQLiteDatabase.query, SQLiteDatabase.update and SQLiteDatabase.insert, because they take away quite a bit of the pain of typing out SQL commands by hand.

But unfortunately, if you have to use a JOIN, you usually have to fall back to the SQLiteDatabase.rawQuery method and end up typing your SQL by hand.

But but but, if the two tables you are joining do not have any common column names (and it is actually good design to avoid them, for example by prefixing every column name with tablename_), then you can hack the usual SQLiteDatabase.query() method into doing a JOINed query.

Now ideally, to get the Session whose speaker_id is 1, a nice-looking SQL query would be like this –

SELECT * FROM speaker INNER JOIN session
ON speaker_id = session_speaker_id
WHERE speaker_id = 1

Which, in Android, can be done like this –

String rawQuery = "SELECT * FROM " + SpeakerTable.TABLE_NAME + " INNER JOIN " + SessionTable.TABLE_NAME
        + " ON " + SessionTable.EXP_ID + " = " + SpeakerTable.ID
        + " WHERE " + SessionTable.ID + " = " +  id;
Cursor c = db.rawQuery(
        rawQuery,
        null
);

But because SQLite still supports the older, implicit-join style of querying (comma-separated tables with the join condition in the WHERE clause), we can turn that command into

SELECT *
FROM session, speaker
WHERE speaker_id = session_speaker_id AND speaker_id = 1

Now this we can write by hacking the arguments of the #query() method –

Cursor c = db.query(
        SessionTable.TABLE_NAME + " , " + SpeakerTable.TABLE_NAME,
        Utils.concat(SessionTable.PROJECTION, SpeakerTable.PROJECTION),
        SessionTable.EXP_ID + " = " + SpeakerTable.ID + " AND " + SpeakerTable.ID + " = " +  id,
        null,
        null,
        null,
        null
);

To explain a bit: the first argument, String table, can safely take "table1 , table2" as well. The second argument takes a String array of column names, so I concatenated the projections of the two tables. Finally, the whole WHERE clause goes into the String selection argument.
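Utils.concat is a small project helper that merges the two projection arrays; it is not shown in the snippet above, but a minimal sketch of it could look like this –

public class Utils {

    // Merge two projection (column name) arrays so the joined
    // query can select columns from both tables at once.
    public static String[] concat(String[] first, String[] second) {
        String[] result = new String[first.length + second.length];
        System.arraycopy(first, 0, result, 0, first.length);
        System.arraycopy(second, 0, result, first.length, second.length);
        return result;
    }
}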

You can see the code for all the database operations in the Android app here: https://github.com/fossasia/open-event-android/blob/master/android/app/src/main/java/org/fossasia/openevent/dbutils/DatabaseOperations.java

Keep your Node server running using PM2

The Open Event webapp generator is a Node project (using an Express server), and a copy of it runs all the time on my personal DigitalOcean box (in addition to our Heroku instance).

On a service like Heroku, the platform manages the task of bringing your server process up. But on, say, a plain Linux distro on the DO box, you have to manually run

 npm run server

to be able to run the server.

While that is all good, it is a foreground shell process, which means you will lose the Node server when you log out (or when your SSH connection breaks).
So we need a way to keep the process running in the background.

In bash on Unix, we can do either of the following

 npm run server&

The “&” at the end makes the task run as a background job. Or, if you have already started the server without it, you can do the following.

npm run server # starts in foreground
#Press Ctrl + Z, this pauses task and frees the shell
bg 1 # resumes job number 1 in the background

Again, both of these are hacky methods: they work only on Unix-like OSs, the background job can still be killed when the shell session ends unless you disown it, and they are not really recommended for production.
For production we need a process manager. For Node.js, the best we can get is pm2 – a purpose-built process manager for Node.

Install pm2 first

sudo npm install -g pm2

Using pm2, we can start any process that can be started with node. We can start the app.js script like this

pm2 start src/app.js

pm2 can run npm tasks too, like

pm2 start npm -- start

pm2 has a pretty status display, and we can start, stop, restart or delete any process with it.
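With a recent pm2 you can also move the start options into an ecosystem file, so you don't have to retype the command every time. A minimal sketch for the webapp generator (the app name and environment values here are assumptions, not the project's actual config):

// ecosystem.config.js – assumed minimal configuration
module.exports = {
  apps: [
    {
      name: "open-event-webapp",  // name shown in pm2's status table
      script: "src/app.js",       // same entry point as `pm2 start src/app.js`
      env: {
        NODE_ENV: "production"
      }
    }
  ]
};

Then pm2 start ecosystem.config.js brings the app up with those settings.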

 


Using Partials in Handlebars and Reusing Code

Open Event Webapp uses Handlebars partials to avoid duplicating template code. We can reuse a template using a Handlebars partial.

How to use Handlebars partials?

To use a Handlebars partial, we have to follow a few easy steps:

Step 1: Register your partial using the Handlebars.registerPartial function

Handlebars.registerPartial('myPartial', '{{name}}')

Step 2: Call the partial from any template

{{> myPartial }}

In Open Event Webapp we have made partials for common templates like the navbar and the footer.

1. Navbar template (navbar.hbs):

 <!-- Fixed navbar -->
 <nav class="navbar navbar-default navbar-fixed-top">
  <div class="container">
   <div class="navbar-header navbar-left pull-left">
    <a class="navbar-brand" href="{{ eventurls.main_page_url }}">
    {{#if eventurls.logo_url}}
    <img alt="{{eventurls.name}}" class="logo logo-dark" src="{{  eventurls.logo_url }}">
    {{else}}
    {{ eventurls.name }}
    {{/if}}
    </a>
   </div>
 <div class="navbar-header navbar-right pull-right">
   <ul style="margin-left:20px" class="nav navbar-nav pull-left">
   {{#sociallinks}}
   {{#if show}}
    <li class="pull-left"><a href="{{link}}" style="padding-right:0; padding-left:0;margin-left:15px"><i class="fa fa-lg fa-{{icon}}" aria-hidden="true" title="{{{icon}}}"></i></a></li>
   {{/if}}
   {{/sociallinks}}
   </ul>
 <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse" style="margin-left:1em;margin-top:1em;">
   <span class="sr-only">Toggle navigation</span>
   <span class="icon-bar"></span>
   <span class="icon-bar"></span>
   <span class="icon-bar"></span>
 </button>
 </div>

 <div class="hidden-lg hidden-md hidden-sm clearfix"></div>
   <div class="collapse navbar-collapse">
    <ul class="nav navbar-nav navbar-right">
     <li class="navlink"><a id="homelink" href="index.html">Home</a>
     {{#if timeList}}
     <li class="navlink">
     <a id="schedulelink"href="schedule.html">
     Schedule</a>
     </li>
     {{/if}}
     {{#if tracks}}
     <li class="navlink">
      <a id="trackslink" href="tracks.html">Tracks</a>
    </li>
  {{/if}}
     {{#if roomsinfo}}
     <li class="navlink">
      <a id="roomslink" href="rooms.html">Rooms</a>   
     </li>
    {{/if}}
    {{#if speakerslist}}
    <li class="navlink">
      <a id="speakerslink" href="speakers.html">Speakers</a>
     </li>
    {{/if}}
   </ul>
     </div>
   </div>
 </nav>
2. Compile the navbar template by providing its path:

const navbar = handlebars.compile(fs.readFileSync(__dirname + '/templates/partials/navbar.hbs').toString('utf-8'));

3. Register the partial:

handlebars.registerPartial('navbar', navbar);
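Once the partial is registered, any page template can pull the navbar in with {{> navbar}}. A rough sketch of that (the inline page template and context values below are only illustrative, not the webapp's actual templates):

const handlebars = require('handlebars');

// A page template that includes the registered navbar partial
const pageTemplate = handlebars.compile(
  '<!DOCTYPE html><html><body>{{> navbar}}<main>{{title}}</main></body></html>'
);

// The context supplies the values the navbar partial expects
const html = pageTemplate({
  title: 'Schedule',
  eventurls: { name: 'FOSSASIA Summit', main_page_url: 'index.html' },
  sociallinks: []
});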

PIL to convert type and quality of image

Image upload is an important part of the server. Images can arrive in different formats, and after certain JavaScript modifications on the client they may end up in yet another format. For example, when an image is uploaded after cropping in the Open Event Organizer Server, it is saved in PNG format. But a PNG is often more than 5 times larger than the equivalent JPEG, so when we upload a 150 KB image, the image finally reaching the server is around 1 MB, which is huge. So we need to decide on the server which image format to use in which case, and how to convert between them.

Continue reading “PIL to convert type and quality of image”
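As a rough illustration of the kind of conversion the full post covers (a hedged sketch, not the organizer server's actual code), re-saving an uploaded PNG as a smaller JPEG with PIL/Pillow could look like this:

from PIL import Image

def convert_to_jpeg(source_path, dest_path, quality=85):
    """Open an uploaded image and re-save it as a JPEG with the given quality."""
    image = Image.open(source_path)
    # JPEG has no alpha channel, so flatten transparent PNGs to RGB first
    if image.mode in ('RGBA', 'P'):
        image = image.convert('RGB')
    image.save(dest_path, 'JPEG', quality=quality)

convert_to_jpeg('upload.png', 'upload.jpg')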

File upload progress in a Node app using Socket.io

If you look at the webapp generator, you'll see that there is an option to upload a zip file containing event data. We wanted to give the user a visual cue during the upload, showing how much of the file has been uploaded.

We upload the file and send the "start generating" command via socket.io events here, instead of POST requests.

To observe file upload progress over the socket (when sending the file as a Buffer), there is an awesome Node module available called socketio-file-upload.

In our webapp you can see how we implemented it: on the frontend in form.js and on the backend in app.js.

Basically, on the backend you should add the socketio-file-upload module's router as middleware to Express.

var siofu = require("socketio-file-upload");
var app = express()
    .use(siofu.router)
    .listen(8000);

After a socket is opened, set up the upload directory and start listening for uploads

io.on("connection", function(socket){
    var uploader = new siofu();
    uploader.dir = "/path/to/save/uploads";
    uploader.listen(socket);
});

On the frontend, we'll listen for an input change on a file input element whose id is siofu_input.

var socket = io.connect();
var uploader = new SocketIOFileUpload(socket);
uploader.listenOnInput(document.getElementById("siofu_input"));

One thing to note here: if you observe the upload percentage on the frontend, it can give you misleading values. The correct figure for how much data has actually been transferred is available on the backend. So observe the progress on the backend, and send the percentage to the frontend over the same socket.

uploader.on('progress', function(event) {
    console.log(event.file.bytesLoaded / event.file.size);
    socket.emit('upload.progress', {
        percentage: (event.file.bytesLoaded / event.file.size) * 100
    });
});
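On the frontend, the matching listener just has to update the UI with whatever percentage the server sends. A small sketch (the #progress-bar element is an assumption; the webapp's actual markup may differ):

socket.on('upload.progress', function (data) {
    // data.percentage comes from the backend emit above
    var percentage = Math.round(data.percentage);
    var bar = document.getElementById('progress-bar');
    bar.style.width = percentage + '%';
    bar.textContent = percentage + '%';
});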

 

Adding Google Analytics To All Pages Using Flask

Google Analytics gives detailed insight into your website: how many people visited, when, their demographics, how many are returning visitors, and so on. It's a really important tool to have. All you have to do is create a Universal Analytics tracking code and use it in a small JavaScript snippet. The only problem is that this snippet needs to be present on every page you want analytics data for, so any change to the JavaScript has to be repeated in every .html file.

However, there is a better way of doing it in Flask. Create a file base.html and add this code:

<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');

ga('create', '<track-code>', 'auto');
ga('send', 'pageview');

</script>

Then, using Jinja2 template inheritance, extend this file in all the other HTML files, i.e. {% extends 'gentelella/admin/base.html' %}. Now when you make a change to the above JavaScript snippet, you only need to change it in one place and it is reflected on every page.
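A minimal sketch of how that inheritance fits together (the page file name and block name here are made up for illustration; the real templates live under gentelella/admin/):

<!-- base.html -->
<html>
  <head>
    <script>
      /* Google Analytics snippet from above */
    </script>
  </head>
  <body>
    {% block content %}{% endblock %}
  </body>
</html>

<!-- some_page.html -->
{% extends 'base.html' %}
{% block content %}
  <h1>Event dashboard</h1>
{% endblock %}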

Getting started with Docker Compose

In this post, I will talk about running multiple containers at once using Docker Compose.

The problem?

Suppose you have a complex app with database containers, Redis and whatnot. How are you going to start the app? One way is to write a shell script that starts the containers one by one.

docker run -d --name mydb postgres:latest
docker run -d --name myredis redis:3-alpine
docker run -d myapp

Now suppose these containers have lots of configurations (links, volumes, ports, environment variables) that they need to function. You will have to write those parameters in the shell script.

docker network create myapp_default
docker run -d --name db -p 5432:5432 --net myapp_default postgres:latest
docker run -d --name redis -p 6379:6379 --net myapp_default \
    -v redis:/var/lib/redis/data redis:3-alpine
docker run -d -p 5000:5000 --net myapp_default -e SOMEVAR=value \
    --link db:db --link redis:redis -v storage:/myapp/static myapp

Won't it get unmanageable? Wouldn't it be great if we had a cleaner way of running multiple containers? Here comes docker-compose to the rescue.

Docker compose

Docker Compose is a Python package which handles multiple containers for an application very elegantly. The main file of docker-compose is docker-compose.yml, a YAML file describing the settings/components required to run your app. Once you define that file, you can just do docker-compose up to start your app with all its components and settings. Pretty cool, right?

So let’s see the docker-compose.yml for the fictional app we have considered above.

version: '2'

services:
  db:
    image: postgres:latest
    ports:
      - '5432:5432'

  redis:
    image: 'redis:3-alpine'
    command: redis-server
    volumes:
      - 'redis:/var/lib/redis/data'
    ports:
      - '6379:6379'

  web:
    build: .
    environment:
      SOMEVAR: value
    links:
      - db:db
      - redis:redis
    volumes:
      - 'storage:/myapp/static'
    ports:
      - '5000:5000'

volumes:
  redis:
  storage:

Once this file is in the project's root directory, you can use docker-compose up to start the application. It will start the services, bringing up linked dependencies (here db and redis) before the services that depend on them.

Docker Compose has a lot of options that generally correspond to the parameters that docker run accepts. You can see the full list in the official docker-compose reference.
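For day-to-day use, a handful of docker-compose commands cover most of the workflow (a quick sketch using the file above):

docker-compose up -d      # build (if needed) and start all services in the background
docker-compose ps         # list the running services
docker-compose logs web   # show the logs of a single service
docker-compose down       # stop and remove the containers and the network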

Conclusion

There's no doubt that docker-compose is a boon when you have to run complex applications. I personally use Compose in every dockerized application that I write. In GSoC 16, I dockerized Open Event. Here is the docker-compose.yml file if you are interested.

PS – If you liked this post, you might find my other posts on Docker interesting. Do take a look and let me know your views.

 

{{ Repost from my personal blog http://aviaryan.in/blog/gsoc/docker-compose-starting.html }}

KISS Datatable

Recently I faced a problem with sorting columns in a DataTable.

What is Datatable?

DataTables is a plug-in for the jQuery library. It provides a lot of features like pagination, quick search and multi-column ordering. Besides that, you can easily apply Bootstrap or Foundation CSS styles to it. There are many more options, but it doesn't make sense to list them all here: you can visit the DataTables site and read the clear documentation. On the DataTables website you can see a lot of examples. The first of them shows how to turn an ordinary table into an awesome, feature-rich table. One function changes everything, it's fantastic!

$('#myTable').DataTable();

Returning to the problem I faced: as I said, it was related to sorting a column in a table.

I know sorting is a trivial thing, I hope everyone knows it 🙂 Sorting by date is also implemented in the DataTables library, so everything works as long as we don't change the date format to a human-readable one, by which I mean something like '3 hours ago' or '1 year ago'.

When the Open Event team tested how DataTables handles ordering columns in that format, it didn't work. It's quite hard to sort by that format, so I came up with an idea. Surely you are wondering what it was. I gave up on trying to sort by those human-readable values directly: it would have led to a lot of extra work, and when I thought about it, weird ideas came to my mind, lots of conditions, if statements… Therefore I dropped that approach and used the KISS principle instead. KISS means 'keep it simple, stupid'. I like it!

So the sorting is handled on the frontend side. I decided not to display the human-readable date format from the beginning. Instead, I find all the dates in the "YYYY-MM-DD HH:mm" format and then replace them with the human-readable format. It's very quick and convenient, and doesn't require writing a lot of conditions. Of course I did try to implement it inside the DataTables library, but I suppose that would have required much more effort than this approach.

Below you can see the function which changes the date on the frontend but does not change the data held by the DataTable. So the sorting still happens on the "YYYY-MM-DD HH:mm" format, but the user sees the human-readable format. Isn't it awesome?!

function change_column_time_to_humanize_format(datatable_name, column_id) {
  $(datatable_name).each(function(key, value) {
    $(value).children().each(function(key1, value2) {
      if (key1 === column_id) {
        var current_value = $(value2).text().slice(0, 16);
        var changed_value = moment(current_value, "YYYY-MM-DD HH:mm").fromNow();
        var isValid = moment(current_value, "YYYY-MM-DD HH:mm", true).isValid();
        if (changed_value !== current_value && isValid === true) {
          $(value2).text(changed_value);
        }
      }
    });
  });
}
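Calling it is straightforward. For example, to humanize the third column (index 2) of every row in a table body, something like this works (the selector and column index are just an illustration):

$('#myTable').DataTable();
// Change only what the user sees; the DataTable still sorts on the original format
change_column_time_to_humanize_format('#myTable tbody tr', 2);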

 

Building the Scheduler UI

{ Repost from my personal blog @ https://blog.codezero.xyz/building-the-scheduler-ui }

If you hadn't already noticed, Open Event has a shiny new feature: a graphical, interactive scheduler to organize sessions into their respective rooms and time slots.

As you can see in the screenshot above, we have a timeline on the left and a lot of session boxes to its right. All the boxes are resizable and drag-and-droppable. The columns represent the different rooms (a.k.a. micro-locations), and sessions can be dropped into their respective rooms. Above the timeline is a toolbar that controls the date; the timeline can be switched to each day by clicking the respective date button.

The Clear overlaps button automatically checks the timeline and removes any sessions that overlap each other. The removed sessions are moved to the unscheduled sessions pane on the left.

The Add new micro-location button can be used to instantly add a new room. A modal dialog opens, and the micro-location is added to the timeline as soon as it is saved.

The Export as iCal button allows the organizer to export all the sessions of the event in the popular iCalendar format, which can then be imported into various calendar applications.

The Export as PNG button saves the entire timeline as a PNG image file, which can then be printed by the organizers or circulated by other means if necessary.

Core Dependencies

The scheduler makes use of a few JavaScript libraries for most of its core functionality:

  • Interact.js – For drag-and-drop and resizing
  • Lodash – For array/object manipulations and object cloning
  • jQuery – For DOM Manipulation
  • Moment.js – For date time parsing and calculation
  • Swagger JS – For communicating with our API that is documented according to the swagger specs.

Retrieving data via the API

The Swagger JS client is used to obtain the session data from the API. The client is initialized asynchronously on page load and can be accessed from anywhere via the JavaScript function initializeSwaggerClient.

The initialization function accepts a callback, which is invoked immediately if the client is already initialized; otherwise it is invoked as soon as initialization completes.

var swaggerConfigUrl = window.location.protocol + "//" + window.location.host + "/api/v2/swagger.json";  
window.swagger_loaded = false;  
function initializeSwaggerClient(callback) {  
    if (!window.swagger_loaded) {
        window.api = new SwaggerClient({
            url: swaggerConfigUrl,
            success: function () {
                window.swagger_loaded = true;
                if (callback) {
                    callback();
                }

            }
        });
    } else {
        if (callback) {
            callback();
        }
    }
}

To get all the sessions of an event, we can do:

initializeSwaggerClient(function () {  
    api.sessions.get_session_list({event_id: eventId}, function (sessionData) {
        var sessions = sessionData.obj;  // Here we have an array of session objects
    });
});

In a similar fashion, all the micro-locations of an event can also be loaded.

Processing the sessions and micro-locations

Each session object is looped through; its start time and end time are parsed into moment objects, its duration is calculated, and its distance from the top of the timeline is calculated in pixels. The new object, with this additional information, is stored in an in-memory data store keyed by the date of the event, for use in the timeline.
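A rough sketch of that processing for a single session is shown below. It relies on the time configuration and the minutesToPixels helper that appear later in this post, and the field names on the session object are assumptions, not the project's exact code:

function processSession(session) {
    // Parse the API timestamps into moment objects (time.format is defined below)
    var startTime = moment(session.start_time, time.format);
    var endTime = moment(session.end_time, time.format);
    var duration = moment.duration(endTime.diff(startTime));

    // The top of the timeline corresponds to time.start (00:00 by default)
    var dayStart = startTime.clone().hours(time.start.hours).minutes(time.start.minutes);

    return {
        id: session.id,
        title: session.title,
        top: minutesToPixels(startTime.diff(dayStart, 'minutes')),  // pixel offset from the top
        height: minutesToPixels(duration.asMinutes()),              // pixel height from the duration
        day: startTime.format('Do MMMM YYYY')                       // key used in the in-memory store
    };
}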

The time configuration is specified in a separate time object.

var time = {  
    start: {
        hours: 0,
        minutes: 0
    },
    end: {
        hours: 23,
        minutes: 59
    },
    unit: {
        minutes: 15,
        pixels: 48,
        count: 0
    },
    format: "YYYY-MM-DD HH:mm:ss"
};

The smallest unit of measurement is 15 minutes, and 48px === 15 minutes in the timeline (so a 60-minute session, for example, is rendered 192px tall).

Each day of the event is stored in a separate array, keyed in the form Do MMMM YYYY (e.g. 2nd May 2013).

The array of micro-location objects is sorted alphabetically by the room name.

Displaying sessions and micro-locations on the timeline

According to the day selected, the sessions for that day are displayed on the timeline. Based on a session's time, the distance of its div from the top of the timeline is calculated in pixels and the session box is positioned absolutely. The height of the session in pixels is calculated from its duration and set.

For pixels-minutes conversion, the following are used.

/**
 * Convert minutes to pixels based on the time unit configuration
 * @param {number} minutes The minutes that need to be converted to pixels
 * @returns {number} The pixels
 */
function minutesToPixels(minutes) {  
    minutes = Math.abs(minutes);
    return (minutes / time.unit.minutes) * time.unit.pixels;
}

/**
 * Convert pixels to minutes based on the time unit configuration
 * @param {number} pixels The pixels that need to be converted to minutes
 * @returns {number} The minutes
 */
function pixelsToMinutes(pixels) {  
    pixels = Math.abs(pixels);
    return (pixels / time.unit.pixels) * time.unit.minutes;
}

Adding interactivity to the session elements

Interact.js is used to provide interactive capabilities such as drag-drop and resizing.

To learn how to use Interact.js, you can check out some previous blog posts on the same topic: Interact.js + drag-drop and Interact.js + resizing.

Updating the session information in the database on every change

We have to update the session information in the database whenever a session is moved or resized. Every time that happens, a jQuery event is triggered on $(document) with the session object as the payload.

We listen to this event, and make an API request with the new session object to update the session information in the database.
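A rough sketch of what that listener could look like is below. The custom event name and the PUT operation are assumptions for illustration; the actual identifiers live in the scheduler's source:

// Assumed event name and API operation – for illustration only
$(document).on('scheduling:change', function (e, session) {
    initializeSwaggerClient(function () {
        api.sessions.put_session({
            event_id: eventId,
            session_id: session.id,
            payload: {
                start_time: session.start_time.format(time.format),
                end_time: session.end_time.format(time.format),
                microlocation: session.microlocation
            }
        }, function (response) {
            console.log('Updated session ' + session.id);
        });
    });
});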


The scheduler UI is more complex than described in this blog post. To learn more about it, you can check out the scheduler's JavaScript code at app/static/js/admin/event/scheduler.js.