Open Event API Server: Implementing FAQ Types

In the Open Event Server, there was a long-standing request from users to let event organisers create an FAQ section for their events.

The FAQ API was implemented subsequently. It allows the user to send the following request schema:

{
 "data": {
   "type": "faq",
   "relationships": {
     "event": {
       "data": {
         "type": "event",
         "id": "1"
       }
     }
   },
   "attributes": {
     "question": "Sample Question",
     "answer": "Sample Answer"
   }
 }
}

 

But what if the user wanted to group certain questions under a specific category? The FAQ API offered no way to do that, so a new API, FAQ-Types, was created.

Why make a separate API for it?

Another question that arose while designing the FAQ-Types API was whether a separate API was necessary at all. Suppose a type attribute were simply added to the FAQ API itself. The client would then have to spell out the type every time a new FAQ record is created, which means trusting that the user will always enter exactly the same spelling for questions falling under the same type. The user cannot be relied on for that. A separate API keeps the set of types controlled and prevents multiple entries for the same type.

Helps in handling a large number of records:

Another concern was the case of a large number of FAQ records falling under the same FAQ-Type. Typing the type out for each of those questions would be cumbersome for the user; the FAQ-Types API overcomes this problem as well.

Following is the request schema for the FAQ-Types API:

{
 "data": {
   "attributes": {
     "name": "abc"
   },
   "type": "faq-type",
   "relationships": {
     "event": {
       "data": {
         "id": "1",
         "type": "event"
       }
     }
   }
 }
}

 

Additionally:

  • FAQ to FAQ-Type is a many-to-one relation.
  • A single FAQ can belong to only one FAQ-Type.
  • The FAQ-Type relationship is optional: users who want separate sections can add it, and if not, they can simply leave it out. A minimal sketch of how this relationship could be modelled is shown below.
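
For illustration, here is a minimal sketch of such a many-to-one relationship in SQLAlchemy. This is not the actual server code: the model and column names are assumptions, and the nullable foreign key is what keeps the FAQ-Type relationship optional.

from sqlalchemy import Column, ForeignKey, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()


class FaqType(Base):
    __tablename__ = 'faq_types'

    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    # one FAQ-Type groups many FAQs
    faqs = relationship('Faq', backref='faq_type')


class Faq(Base):
    __tablename__ = 'faqs'

    id = Column(Integer, primary_key=True)
    question = Column(String, nullable=False)
    answer = Column(Text, nullable=False)
    # nullable foreign key, so attaching a type to an FAQ stays optional
    faq_type_id = Column(Integer, ForeignKey('faq_types.id'), nullable=True)

(In the real server both models would, of course, also be linked to the event they belong to.)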

Advantage of Open Event Format over xCal, pentabarf/frab XML and iCal

Event apps like Giggity and Giraffe use event formats like xCal, pentabarf/frab XML, iCal, etc. In this blog, I present some advantages of using FOSSASIA’s Open Event data format over these other formats. I added support for the Open Event format in both of these apps, so I describe the advantages and improvements with respect to them.

  • The main problem faced in the Giggity app is that data such as social links, microlocations and the logo URL cannot be fetched from a single file, so a separate raw file has to be added to provide this data. The Open Event format provides all of this information from a single URL received from the server, so no separate file is needed.
  • Here is the pentabarf format data for the FOSSASIA 2016 conference, excluding sessions. Although it provides the basic information, it leaves out the logo URL, the latitude and longitude of microlocations (rooms), and links to social media and the website, while the Open Event format provides all of these details along with extra information such as language, comments, etc. See the FOSSASIA 2016 Open Event format sample.
<conference>
<title>FOSSASIA 2016</title>
<subtitle/>
<venue>Science Centre Road</venue>
<city>Singapore</city>
<start>2016-03-18</start>
<end>2016-03-20</end>
<days>3</days>
<day_change>09:00:00</day_change>
<timeslot_duration>00:00:00</timeslot_duration>
</conference>
  • Parsing the received file gets very complicated in the case of iCal, xCal, etc., as tags need to be matched to extract the data. In contrast, various libraries are available for parsing JSON data, so we can simply build an array list from the received data and send it to the adapter. See this example for working code. You can also look at the parser for iCal to compare the complexity of the code.
  • Another common problem is the structure of the received formats: it can be complicated to represent the sub-parts of a single element. For example, for a location the Open Event format defines latitude and longitude separately, while in iCal they are just separated by a comma. For example:

iCal

GEO:1.333194;103.736132

JSON

{
           "id": 1,
           "name": "Stage 1",
           "floor": 0,
           "latitude": 37.425420,
           "longitude": -122.080291,
           "room": null
}

The JSON version also provides more detailed information.

  • The Open Event format is well documented, which makes it easier for other developers to work with it. Find the documentation here.


Export an Event using APIs of Open Event Server

In FOSSASIA’s Open Event Server project, we allow the organizer, co-organizer and admins to export all the data related to an event as an archive of JSON files. This way the data can be reused elsewhere for various purposes. The basic workflow is as follows:

  • Send a POST request to /events/{event_id}/export/json with a payload specifying whether you require the various media files.
  • The POST request starts a Celery task in the background that extracts the data related to the event and converts it to JSON.
  • The Celery task URL is returned as the response. Sending a GET request to this URL gives the status of the task; if the status is FAILED or SUCCESS, the corresponding error message or result is included.
  • Separate JSON files are created for the event, speakers, sessions, microlocations, tracks, session types and custom forms.
  • All these files are then archived, and the zip is served at the endpoint /events/{event_id}/exports/{path}.
  • Sending a GET request to the above-mentioned endpoint downloads a zip containing all the data related to the event.

Let’s dive into each of these points one by one.

POST request ( /events/{event_id}/export/json)

To make the POST request you first need JWT authentication, as with most of the other API endpoints. You send a payload containing settings that specify whether the media files related to the event should be downloaded along with the JSON files. An example payload looks like this:

{
   "image": true,
   "video": true,
   "document": true,
   "audio": true
 }

def export_event(event_id):
    from helpers.tasks import export_event_task

    settings = EXPORT_SETTING
    settings['image'] = request.json.get('image', False)
    settings['video'] = request.json.get('video', False)
    settings['document'] = request.json.get('document', False)
    settings['audio'] = request.json.get('audio', False)
    # queue task
    task = export_event_task.delay(
        current_identity.email, event_id, settings)
    # create Job
    create_export_job(task.id, event_id)

    # in case of testing
    if current_app.config.get('CELERY_ALWAYS_EAGER'):
        # send_export_mail(event_id, task.get())
        TASK_RESULTS[task.id] = {
            'result': task.get(),
            'state': task.state
        }
    return jsonify(
        task_url=url_for('tasks.celery_task', task_id=task.id)
    )


Taking the media file settings and the event id, we pass them as parameters to the export event Celery task and queue it up. We then create an entry in the database with the task id, the event id and the user who triggered the export, to keep a record of the activity. After that we return the URL for the Celery task as the response.

If the Celery task is still underway, the task endpoint shows a response with 'state': 'WAITING'. Once the task is completed, the value of 'state' is either 'FAILED' or 'SUCCESS'. If it is SUCCESS, the result of the task is returned as well, in this case the download URL for the zip.
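
For example, while the task is still running, a GET on the task URL returns something along these lines (the exact payload may contain additional fields; only the state shown here is taken from the description above):

{
   "state": "WAITING"
}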

Celery Task to Export Event

Exporting an event is a very time-consuming process, and we don’t want it to get in the way of the user’s interaction with other services. So we needed a queueing system that would queue the tasks and execute them in the background without keeping the main worker from serving other user requests. We use Celery to queue tasks and execute them in the background without disturbing other user requests.

We have created a Celery task named “export.event” which calls event_export_task_base(), which in turn calls export_event_json() where all the JSON generation is carried out. To start the Celery task all we do is call export_event_task.delay(email, event_id, settings), and it returns a Celery task object with a task id that can be used to check the status of the task.

@celery.task(base=RequestContextTask, name='export.event', bind=True)
def export_event_task(self, email, event_id, settings):
    event = safe_query(db, Event, 'id', event_id, 'event_id')
    try:
        logging.info('Exporting started')
        path = event_export_task_base(event_id, settings)
        # task_id = self.request.id.__str__()  # str(async result)
        download_url = path

        result = {
            'download_url': download_url
        }
        logging.info('Exporting done.. sending email')
        send_export_mail(email=email, event_name=event.name, download_url=download_url)
    except Exception as e:
        print(traceback.format_exc())
        result = {'__error': True, 'result': str(e)}
        logging.info('Error in exporting.. sending email')
        send_export_mail(email=email, event_name=event.name, error_text=str(e))

    return result


After exporting, the path to the export zip is returned. We then get the download endpoint and return it as the result of the Celery task. In case there is an error in the Celery task, we print the entire traceback in the Celery worker and return the error as the result.

Make the Exported Zip Ready

We have a separate export_helpers.py file in the helpers module of the API for performing the various tasks related to exporting all the data of an event. The most important function in this file is export_event_json(), which accepts the event_id and the settings dictionary. The export helpers also define global constant dictionaries that specify the order in which fields should appear in the JSON files created during export.

Firstly, we create the directory for storing the exported JSON and, eventually, the archive of all the JSON files. Then we have a global constant named EXPORTS which lists all the tables and their corresponding models that we want to extract from the database and store as JSON. From EXPORTS we get the model classes, which we use to query the database with the given event_id and retrieve the data. After retrieving the data, we use another helper function named _order_json, which converts the SQLAlchemy data to JSON in the order specified by the constant dictionaries. After this we download the media data, i.e. the slides, images, videos, etc. related to that particular model, depending on the settings.
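
To make the loop in export_event_json() below easier to follow, here is a hypothetical sketch of what EXPORTS could look like. The import paths and exact entries are assumptions, and the field ordering used by _order_json() lives in the separate constant dictionaries mentioned above.

# Hypothetical sketch of the EXPORTS constant iterated over in export_event_json():
# each entry pairs an output file name with the model class to query.
# Import paths and entries are assumptions; the real server may differ.
from app.models.event import Event
from app.models.microlocation import Microlocation
from app.models.session import Session
from app.models.session_type import SessionType
from app.models.speaker import Speaker
from app.models.track import Track
from app.models.custom_form import CustomForms

EXPORTS = [
    ('event', Event),                   # single row, filtered by its id
    ('microlocations', Microlocation),  # the rest are filtered by event_id
    ('sessions', Session),
    ('session_types', SessionType),
    ('speakers', Speaker),
    ('tracks', Track),
    ('custom_forms', CustomForms),
]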

def export_event_json(event_id, settings):
    """
    Exports the event as a zip on the server and return its path
    """
    # make directory
    exports_dir = app.config['BASE_DIR'] + '/static/uploads/exports/'
    if not os.path.isdir(exports_dir):
        os.mkdir(exports_dir)
    dir_path = exports_dir + 'event%d' % int(event_id)
    if os.path.isdir(dir_path):
        shutil.rmtree(dir_path, ignore_errors=True)
    os.mkdir(dir_path)
    # save to directory
    for e in EXPORTS:
        if e[0] == 'event':
            query_obj = db.session.query(e[1]).filter(
                e[1].id == event_id).first()
            data = _order_json(dict(query_obj.__dict__), e)
            _download_media(data, 'event', dir_path, settings)
        else:
            query_objs = db.session.query(e[1]).filter(
                e[1].event_id == event_id).all()
            data = [_order_json(dict(query_obj.__dict__), e) for query_obj in query_objs]
            for count in range(len(data)):
                data[count] = _order_json(data[count], e)
                _download_media(data[count], e[0], dir_path, settings)
        data_str = json.dumps(data, indent=4, ensure_ascii=False).encode('utf-8')
        fp = open(dir_path + '/' + e[0], 'w')
        fp.write(data_str)
        fp.close()
    # add meta
    data_str = json.dumps(
        _generate_meta(), sort_keys=True,
        indent=4, ensure_ascii=False
    ).encode('utf-8')
    fp = open(dir_path + '/meta', 'w')
    fp.write(data_str)
    fp.close()
    # make zip
    shutil.make_archive(dir_path, 'zip', dir_path)
    dir_path = dir_path + ".zip"

    storage_path = UPLOAD_PATHS['exports']['zip'].format(
        event_id=event_id
    )
    uploaded_file = UploadedFile(dir_path, dir_path.rsplit('/', 1)[1])
    storage_url = upload(uploaded_file, storage_path)

    return storage_url


After we receive the JSON data from the _order_json() function, we create a dump of it using json.dumps with an indentation of 4 spaces and UTF-8 encoding. Then we save this dump in a file named after the model from which the data was retrieved. This process is repeated for all the models mentioned in EXPORTS. After all the JSON files are created and all the media is downloaded, we make a zip of the folder.

To do this we use shutil.make_archive, which creates the zip. The zip is then uploaded to the storage service used by the server, such as S3, Google Storage, etc., and the URL through which it can be accessed is returned.

Apart from this, the other major function in this file creates an export job entry in the database, so that we can keep track of which user started an export task for which event; this helps with debugging and auditing.
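
As a rough sketch (the ExportJob model and its field names here are assumptions, not the server’s actual code), create_export_job() could look something like this:

def create_export_job(task_id, event_id):
    # keep one export-job record per event: update it if it exists, else create it
    export_job = ExportJob.query.filter_by(event_id=event_id).first()
    if export_job:
        export_job.task = task_id
        export_job.user_email = current_identity.email
    else:
        export_job = ExportJob(task=task_id,
                               user_email=current_identity.email,
                               event_id=event_id)
    db.session.add(export_job)
    db.session.commit()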

Downloading the Zip File

After the exporting is completed, if you send a GET request to the task url, you get a response similar to this:

{
   "result": {
     "download_url": "http://localhost:5000/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
   },
   "state": "SUCCESS"
 }

So on opening the download url in the browser or using any other tool, you can download the zip file.

One big question remains: the workflow is fine, but how do you know, after sending the POST request, that the task is completed and the archive is ready to be downloaded? One way of solving this is a technique known as polling, where you send a GET request repeatedly at a fixed interval. From the POST request you get the URL of the export task, and you keep polling this task URL until the state is either “FAILED” or “SUCCESS”. If it is SUCCESS, you place the download URL somewhere in your website so that it can be clicked to download the archived export of the event.
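
A minimal client-side sketch of this polling loop, using the requests library, could look like this. The server URL and the JWT header value are placeholders; the endpoints and response fields follow the description above.

import time

import requests

SERVER = 'http://localhost:5000'  # assumption: local development server
HEADERS = {'Authorization': 'JWT <your-token>'}  # exact auth scheme may differ


def export_and_poll(event_id, interval=5):
    # start the export task with the media settings described earlier
    payload = {'image': True, 'video': True, 'document': True, 'audio': True}
    response = requests.post(
        '{}/events/{}/export/json'.format(SERVER, event_id),
        json=payload, headers=HEADERS)
    task_url = response.json()['task_url']

    # poll the returned task URL until the task either fails or succeeds
    while True:
        status = requests.get(SERVER + task_url, headers=HEADERS).json()
        if status.get('state') in ('SUCCESS', 'FAILED'):
            return status
        time.sleep(interval)

If the final state is SUCCESS, status['result']['download_url'] points to the exported zip, as in the response shown above.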

 


Creating Unit Tests for File Upload Functions in Open Event Server with Python Unittest Library

In FOSSASIA’s Open Event Server, we use the Python unittest library for unit testing various modules of the API code. The unittest library provides various assertion functions to compare the actual values returned by a function or module with the expected ones. For normal modules we simply use these assertions to compare results, since the functions mostly take ordinary data types as input. However, one very important area for unit testing is file uploading. We cannot simply hand the function a file or some such payload to test it properly, since it expects request.files-like data, which is only obtained when a file is uploaded or sent as a request to an endpoint. Take, for example, this function:

def uploaded_file(files, multiple=False):
    if multiple:
        files_uploaded = []
        for file in files:
            extension = file.filename.split('.')[1]
            filename = get_file_name() + '.' + extension
            filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
            if not os.path.isdir(filedir):
                os.makedirs(filedir)
            file_path = filedir + filename
            file.save(file_path)
            files_uploaded.append(UploadedFile(file_path, filename))

    else:
        extension = files.filename.split('.')[1]
        filename = get_file_name() + '.' + extension
        filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
        if not os.path.isdir(filedir):
            os.makedirs(filedir)
        file_path = filedir + filename
        files.save(file_path)
        files_uploaded = UploadedFile(file_path, filename)

    return files_uploaded


So we need to mock the upload to exercise this function. Inside the unit test we create an API route, just for this scope, that accepts a file as a request. Following is the code:

        @app.route("/test_upload", methods=['POST'])
        def upload():
            files = request.files['file']
            file_uploaded = uploaded_file(files=files)
            return jsonify(
                {'path': file_uploaded.file_path,
                 'name': file_uploaded.filename})


The above code creates an app route with the endpoint /test_upload. It reads the uploaded file from request.files, sends this object to the uploaded_file function (the function under test), and returns the result in JSON format. With this, the endpoint to mock a file upload is ready. Next we need to send a request with a file object. We cannot send plain data, which would be treated as request.form; we want it to arrive in request.files. So we create two small classes inheriting from existing ones:

def test_upload_single_file(self):

        class FileObj(StringIO):

            def close(self):
                pass

        class MyRequest(Request):
            def _get_file_stream(*args, **kwargs):
                return FileObj()

        app.request_class = MyRequest


The MyRequest class inherits Flask’s Request class, and we override its file stream to return a FileObj. Then we set the request_class attribute of the Flask app to this new MyRequest class. After everything is set up, we need to send the request and check whether the uploaded file is saved properly. For this purpose we take the help of the StringIO library, which creates a file-like object that can be used to replicate a file upload. So we send the data as {'file': (StringIO('1,2,3,4'), 'test_file.csv')} to the /test_upload endpoint that we created previously. As a result, the endpoint receives the file, saves it, and returns the filename and file_path of the stored file.

 with app.test_request_context():
            client = app.test_client()
            resp = client.post('/test_upload', data = {'file': (StringIO('1,2,3,4'), 'test_file.csv')})
            data = json.loads(resp.data)
            file_path = data['path']
            filename = data['name']
            actual_file_path = app.config.get('BASE_DIR') + '/static/uploads/' + filename
            self.assertEqual(file_path, actual_file_path)
            self.assertTrue(os.path.exists(file_path))


After this is done, we check whether the file_path we receive is the expected one, and also whether the file was actually created rather than just some dummy data being returned. We compute the expected path like this:

actual_file_path = app.config.get('BASE_DIR') + '/static/uploads/' + filename

Then we assert with assertEqual that actual_file_path is the same as the path we received. Finally, we use assertTrue to ensure that a file actually exists at that path:

self.assertTrue(os.path.exists(file_path))

os.path.exists returns True if the file exists and False otherwise.

So that basically sums up the unit test. It checks that:
1) the file is saved at the correct path, and
2) the file actually exists.
The unit test passes only if both are true; otherwise we get either an error or a failure.

Following is the entire code snippet for this unit testing function:

def test_upload_single_file(self):

        class FileObj(StringIO):

            def close(self):
                pass

        class MyRequest(Request):
            def _get_file_stream(*args, **kwargs):
                return FileObj()

        app.request_class = MyRequest

        @app.route("/test_upload", methods=['POST'])
        def upload():
            files = request.files['file']
            file_uploaded = uploaded_file(files=files)
            return jsonify(
                {'path': file_uploaded.file_path,
                 'name': file_uploaded.filename})

        with app.test_request_context():
            client = app.test_client()
            resp = client.post('/test_upload', data = {'file': (StringIO('1,2,3,4'), 'test_file.csv')})
            data = json.loads(resp.data)
            file_path = data['path']
            filename = data['name']
            actual_file_path = app.config.get('BASE_DIR') + '/static/uploads/' + filename
            self.assertEqual(file_path, actual_file_path)
            self.assertTrue(os.path.exists(file_path))


Adding JSONAPI Support in Open Event Android App

The Open Event API Server exposes a well-documented, JSONAPI-compliant REST API that can be used by the Open Event App Generator and the frontend to access and manipulate data. So JSONAPI support is also needed in external services like the Open Event App Generator and the frontend. In this post I explain how to add JSONAPI support in Android.

There are many client libraries that implement JSONAPI support in Android or Java, like moshi-jsonapi, morpheus, etc. You can find the list here. The main problem is that most of these libraries require models to inherit from a Resource class, but in the Open Event Android app our models already extend RealmObject, and in Java a class can’t extend more than one class. So we will be using the jsonapi-converter library, which uses annotation processing to add JSONAPI support.

1. Add dependency

In order to use jsonapi-converter in your app, add the following dependency in your app module’s build.gradle file.

dependencies {
	compile 'com.github.jasminb:jsonapi-converter:0.7'
}

2.  Write model class

Models will be used to represent requests and responses. To support JSONAPI we need to take care of the following when writing the models:

  • Each model class must be annotated with com.github.jasminb.jsonapi.annotations.Type annotation
  • Each class must contain a String attribute annotated with com.github.jasminb.jsonapi.annotations.Id annotation
  • All relationships must be annotated with com.github.jasminb.jsonapi.annotations.Relationship annotation

In Open Event Android we have many models, like event, session, track, microlocation, speaker, etc. Here I only define the track model because of its simplicity.

@Type("track")
public class Track extends RealmObject {

    @Id(IntegerIdHandler.class)
    private int id;
    private String name;
    private String description;
    private String color;
    private String fontColor;
    @Relationship("sessions")
    private RealmList<Session> sessions;

    //getters and setters
}

Jsonapi-converter uses Jackson for data parsing. To know how to use Jackson for parsing follow my previous blog.

The Type annotation instructs the serialization/deserialization library on how to process the given model class. Each resource must have an id attribute, and the Id annotation flags an attribute of the class as the id. In the class above the id attribute is an int, so we specify the IntegerIdHandler class, an id handler, in the annotation. The Relationship annotation designates other resource types as a relationship; its value should match the JSONAPI specification of the server. In the Open Event project each track has sessions, so we add a Relationship annotation for them.

3.  Set up API service and Retrofit

After defining the models, define the API service interface as you would usually do with standard JSON APIs.

public interface OpenEventAPI {
    @GET("tracks?include=sessions&fields[session]=title")
    Call<List<Track>> getTracks();
}

Now create an ObjectMapper & a retrofit object and initialize them.

ObjectMapper objectMapper = OpenEventApp.getObjectMapper();
Class[] classes = {Track.class, Session.class};

OpenEventAPI openEventAPI = new Retrofit.Builder()
                    .client(okHttpClient)
                    .baseUrl(Urls.BASE_URL)
                    .addConverterFactory(new JSONAPIConverterFactory(objectMapper, classes))
                    .build()
                    .create(OpenEventAPI.class);

 

The classes array contains all the model classes that will be supported by this Retrofit builder and API service. The main task here is to add a JSONAPIConverterFactory, which will be used to serialize and deserialize data according to the JSONAPI specification. The JSONAPIConverterFactory constructor takes two parameters: the ObjectMapper and the list of classes.

4.  Use API service  

Now, after setting everything up according to the above steps, you can use the openEventAPI instance to fetch data from the server.

openEventAPI.getTracks();

Conclusion

JSON API is designed to minimize both the number of requests and the amount of data transmitted between clients and servers.

Shrinking Model Classes Boilerplate in Open Event Android Projects Using Jackson and Lombok

JSON is the de facto standard format for REST API communication, and to consume any such API in Android apps like the Open Event Android client and the Organiser App, we need Plain Old Java Objects, or POJOs, to map JSON attributes to class properties. These are called models, for they model the API response or request. The basic structure of these models contains:

  • Private properties representing JSON attributes
  • Getters and Setters for these properties used to change the object or access its data
  • A toString() method which converts object state to a string, useful for logging and debugging purposes
  • An equals and hashcode method if we want to compare two objects

These can easily be generated automatically by any modern IDE, but they add unnecessary bulk to the code base for a relatively simple model class and are difficult to maintain. If you add, remove, or rename a property, you have to change its getters/setters, toString and the other standard data-class methods.

There are a couple of ways to handle it:

  • Google’s AutoValue: creates immutable classes with builders, creators and all the standard methods. But, as it generates a new class for the model, you need to add a library to make it work with JSON parsers and Retrofit. Also, there is no way to change the object attributes, as they are immutable. It is generally good practice to make your models immutable, but if you manipulate their data in your application and save it in your database, that is only possible by creating a copy of the object, and not all database storage libraries support immutable models.
  • Kotlin’s data classes: Kotlin has a nice way to create models using data classes, but it has certain limitations too. Only parameters in the primary constructor are included in the generated data methods, and to create a no-argument constructor (required by certain database libraries and annotation processors) you either need to assign a default value to each property or call the primary constructor filling in all the default values, which is a boilerplate of its own. Also, some JSON parsing frameworks need third-party libraries to work correctly with data classes, and you probably don’t want to include Kotlin in your project just for data classes.
  • Lombok: We’ll be talking about it later in this blog
  • Immutables, Xtend Lang, etc

This is one kind of boilerplate; the other kind is JSON property names. As we know, Java uses camel case for properties, while JSON attributes are mostly:

  • Snake Cased: property_name
  • Kebab Cased: property-name

Whether you are using Gson, Jackson or any other JSON parsing framework, it works great for unambiguous property names that match in both JSON and Java, but it requires special naming annotations to translate JSON attributes to camel case and vice versa. Wouldn’t you want your parser to intelligently convert property_name to propertyName without you writing the tedious mapping? That mapping is not only boilerplate but also error prone: if your API changes and you forget to update the annotations, or you make a spelling mistake, nothing will warn you, since these are just non-type-safe strings.

These boilerplates cause serious friction during development for what should be a simple Java model of a simple API response. The two kinds of boilerplate are also related, as all JSON parsers look for getters and setters of private fields, so there are two layers of potential errors when mapping JSON to Java models. This should be a lot simpler than it is. So, in this blog, we’ll see how to configure our project to eliminate this boilerplate and the errors that come with it. We reduced approximately 70% of the boilerplate using this configuration in our projects. For example, the Event model class (our biggest) shrank from 590 lines to just 74!

We will use a simpler class for our example here:

public class CallForPapers {

    private String announcement;
    @JsonProperty("starts-at")
    private String startsAt;
    private String privacy;
    @JsonProperty("ends-at")
    private String endsAt;

    // Getters and Setters

    @Override
    public String toString() {
        return "CallForPapers{" +
                "announcement='" + announcement + '\'' +
                ", startsAt='" + startsAt + '\'' +
                ", privacy='" + privacy + '\'' +
                ", endsAt='" + endsAt + '\'' +
                '}';
    }
}

 

Note that getters and setters have been omitted for brevity; the actual class is 57 lines long.

As you can see, we use the @JsonProperty annotation to map the starts-at attribute to the startsAt property, and similarly for endsAt. First, we’ll remove this boilerplate from our code. This may seem overkill for 2 attributes, but imagine the time you’ll save by not having to maintain hundreds of attributes across the whole project.

Jackson is smart enough to map different naming styles to one another when both serializing and deserializing. This is done using a naming strategy class in Jackson. There is an option to configure it globally, but I found that it did not work for our case, so we had to apply it to each model. That is simply done by adding one annotation on top of the class declaration and removing the JsonProperty annotations from the fields:

@JsonNaming(PropertyNamingStrategy.KebabCaseStrategy.class)
public class CallForPapers {

    private String announcement;
    private String startsAt;
    private String privacy;
    private String endsAt;

    // Getters and Setters

    @Override
    public String toString() {
        return "CallForPapers{" +
                "announcement='" + announcement + '\'' +
                ", startsAt='" + startsAt + '\'' +
                ", privacy='" + privacy + '\'' +
                ", endsAt='" + endsAt + '\'' +
                '}';
    }
}

 

Our class looks like this now. But be careful to name your getters and setters properly, because Jackson now maps attributes by method name: if you name the setter for startsAt setStartsAt, it automatically understands that the attribute to be mapped is “starts-at”; if the method name is something else, it won’t be able to map the field correctly. If your properties are not private, Jackson may instead use them directly to map fields, so be sure to name public properties correctly as well.

Note: If your API does not use kebab case, there are plenty of other options or naming strategies present in Jackson, one example will be

  • PropertyNamingStrategy.SnakeCaseStrategy for attributes like “starts_at”

Needless to say, this will only work if your API uses a uniform naming strategy.

Now we have removed quite a burden from the development lifecycle, but 70% of the class is still getters, setters, toString and other data methods. Next, we’ll configure Lombok to generate these for us automatically. First, we need to add Lombok to our project as a provided dependency in build.gradle and sync the project:

provided 'org.projectlombok:lombok:1.16.18'

 

Next, install the Lombok plugin in Android Studio by going to File > Settings > Plugins, searching for and installing Lombok Plugin, and restarting the IDE.

 

After restarting the IDE, navigate to your model, add the @Data annotation at the top of the class, and remove all getters/setters, toString, equals and hashCode. If the plugin was installed correctly and Lombok was picked up from the Gradle dependencies, these methods will be generated automatically at build time without any problem. One way to see the generated methods is the Structure view of the Project window.

 

There are many more fun tools in Lombok, and more fine-grained control options are provided. Our class looks like this now:

@Data
@JsonNaming(PropertyNamingStrategy.KebabCaseStrategy.class)
public class CallForPapers {
    private String announcement;
    private String startsAt;
    private String privacy;
    private String endsAt;
}

 

It is reduced to 16 lines (including imports and the package declaration). Now, there are some corner cases to iron out for the integration between Lombok and Jackson to work correctly.

Lombok uses property names to generate its getters and setters, but there is a different convention for booleans. For the sake of simplicity, we’ll only talk about the primitive boolean here. A primitive boolean property in the standard Java format, for example hasSessions, will generate hasSessions() as the getter and setHasSessions() as the setter. Jackson is smart, but it expects a getter named getHasSessions(), creating problems in serialization. Similarly, for a property named isComplete, the generated getter and setter will be isComplete() and setComplete(), creating a problem in deserialization too. There are ways to make Jackson map boolean values correctly with these getters/setters, but they require renaming the property itself, which changes the getters and setters generated by Lombok. Fortunately, there is a way to tell Lombok not to generate this format of getter/setter: create a file named lombok.config in your app/ module directory and put this in it:

lombok.anyConstructor.suppressConstructorProperties = true
lombok.addGeneratedAnnotation = false
lombok.getter.noIsPrefix = true

 

There are some other settings in it that configure Lombok for an Android-specific project.

There are some known issues with Lombok on Android. As Lombok is itself an annotation processor, and there is no defined order in which annotation processors run, it may create problems with other annotation processors; Dagger had issues with it until they were fixed in later versions. So you should check whether any of your libraries depend on Lombok-generated code like getters and setters: certain database libraries do, and Android Data Binding does too. Currently there is no real solution, as such processors will throw an error about not finding a getter/setter because they ran before Lombok. A possible workaround is to make the properties public so that these libraries use the fields directly instead of getters and setters. This is not good practice, but as this is a data class and you are already generating getters and setters for all fields, it is not a security concern.

There are tons of options for both Jackson and Lombok, with a lot of features to help the development process, so be sure to check out their documentation.

Serialisation, Deserialisation and gson in Open Event Android

JSON is a format for exchanging and interchanging data. It has almost become a universal means of transferring data because it is lightweight, human readable, machine readable and, most importantly, language independent: it can be used with, and in, any programming language. (Reference)

FOSSASIA’s Open Event project makes extensive use of JSON for transferring information about events, their speakers, sessions and more. The Open Event Android application itself loads all its data in JSON format. This JSON is uploaded by the user in the form of a .zip compressed file or by giving an API link. Before we use this data in the app, we have to parse it to get Java objects that can be used in Android. Deserialisation is the process of converting JSON to Java objects; in the same way, serialisation refers to converting Java objects into JSON.

In most applications and frameworks, JSON is serialised and deserialised in many places. The most common approach is to create objects corresponding to the JSON format and then write functions that convert them to and from JSON attribute by attribute. While this approach works, it means writing unnecessary code and spending a lot more time on it. In Open Event Android, we use Google’s Gson library to serialise and deserialise JSON and the corresponding Java objects.

Why are we using gson specifically? And why do we need any library in the first place?

Yes, we could obviously write functions in Java that define all the JSON parameters and convert the JSON into Java objects manually. That is the obvious approach, but the entire process would be time-consuming, complex and definitely not error-free. Also, as projects become bigger, it is inevitable to keep the code size in check and reduce it to only what is necessary.

Let’s take a look at one of the Open Event JSON files as a sample to understand things better. This is an example of event.json in the FBF8 2017 sample.

{

"id": 180,
"name": "F8-Facebook Developer Conference 2017",
"latitude": 37.329008,
"longitude": -121.888794,
"location_name": "San Jose Convention Center, 150 W San Carlos St  San Jose  CA USA 95113 ",
"start_time": "2017-04-18T10:00:00-07:00",
"end_time": "2017-04-19T10:00:00-07:00",
"timezone": "US / Pacific",
"description": "Join us for our annual 2-day event, where developers and businesses come together to explore the future of technology. Learn how Facebook connects the world through new products and innovation. This year's event is bigger than ever – with more than 50 sessions, interactive experiences, and the opportunity to meet one-on-one with members of the Facebook team.",
"background_image": "/images/background.jpg",
"logo": "/images/fbf8.png",
"organizer_name": "Facebook",
"organizer_description": "Join us for our annual 2-day event, where developers and businesses come together to explore the future of technology. Learn how Facebook connects the world through new products and innovation. This year's event is bigger than ever – with more than 50 sessions, interactive experiences, and the opportunity to meet one-on-one with members of the Facebook team.",
"event_url": "https://www.fbf8.com",
"social_links": [
{"id": 1,
"link": "https://www.facebook.com/FacebookForDevelopers",
"name": "Facebook"},
{"id": 2,
"link": "https://twitter.com/fbplatform",
"name": "Twitter"},
{"id": 3,
"link": "https://www.instagram.com/explore/tags/fbf8/",
"name": "Instagram"},
{"id": 4,
"link": "https://github.com/Facebook",
"name": "GitHub"}],
"ticket_url": null,
"privacy": "public",
"type": "",
"topic": "Science & Technology",
"sub_topic": "High Tech",
"code_of_conduct": "",
"copyright": {
"logo": "",
"licence_url": "https://en.wikipedia.org/wiki/All_rights_reserved",
"holder": "Facebook",
"licence": "All rights reserved",
"holder_url": null,
"year": 2017},
"call_for_papers": null,
"email": null,
"has_session_speakers": false,
"identifier": "7d16c124",
"large": null,
"licence_details": {
"logo": "",
"compact_logo": "",
"name": "All rights reserved",
"url": "https://en.wikipedia.org/wiki/All_rights_reserved",
"long_name": "All rights reserved",
"description": "The copyright holder reserves, or holds for their own use, all the rights provided by copyright law under one specific copyright treaty."},
"placeholder_url": null,
"schedule_published_on": null,
"searchable_location_name": null,
"state": "Published",
"version": {
"event_ver": 0,
"microlocations_ver": 0,
"sessions_ver": 0,
"speakers_ver": 0,
"sponsors_ver": 0,
"tracks_ver": 0}}

For a file with so many attributes, it can be very tedious to make a parser class in Java. Instead, we simply use the gson library to do the job.

To get Gson from Maven Central, add the following dependency:

<dependencies>
   <!--  Gson: Java to Json conversion -->
   <dependency>
     <groupId>com.google.code.gson</groupId>
     <artifactId>gson</artifactId>
     <version>2.8.0</version>
     <scope>compile</scope>
   </dependency>
</dependencies>                                             

[source:https://github.com/google/gson/blob/master/UserGuide.md]

To use gson in Android, we need to add the following dependency in build.gradle of the project.

compile 'com.google.code.gson:gson:2.6.2'

Using Gson in your code is as simple as using any built-in class in Java. There are two main functions, toJson() and fromJson(), that are used throughout the processes of serialisation and deserialisation.

First off, let’s declare a basic Java class. I will take the example of a class that represents a car to simplify things a bit. This Car class contains 3 fields: the name, model number and colour of the car. For example:

class Car {
    String name;
    int model_no;
    String colour;

    Car() {}
}

Serialisation:

Serialising the car object now, that is converting the object into JSON format:

Car one = new Car();
Gson gson = new Gson();
String serialised_output = gson.toJson(one); // serialised_output => JSON

Deserialisation:

Deserialising the JSON now, that is, converting the JSON back into a Java object:

Car car = gson.fromJson(serialised_output,Car.class);

Finally, you can find the GitHub repository for the Gson library here in case you want to know more about it, or to report any issues you face with the library. There is also an official Google group for discussions related to Gson.

JSON Deserialization Using Jackson in Open Event Android App

The Open Event project uses the JSON format for transferring event information like tracks, sessions, microlocations and more. The event exported as a zip from the Open Event Server also contains the data in JSON format, and the Open Event Android application uses this JSON data. Before we can use this data in the app, we have to parse it into Java objects that can be used for populating views. Deserialization is the process of converting JSON data to Java objects. In this post I explain how to deserialize JSON data using Jackson.

1. Add dependency

In order to use Jackson in your app, add the following dependencies in your app module’s build.gradle file.

dependencies {
	compile 'com.fasterxml.jackson.core:jackson-core:2.8.9'
	compile 'com.fasterxml.jackson.core:jackson-annotations:2.8.9'
	compile 'com.fasterxml.jackson.core:jackson-databind:2.8.9'
}

2.  Define entity model of data

In Open Event Android we have many models, like event, session, track, microlocation, speaker, etc. Here I define only the track model because of its simplicity.

public class Track {

	private int id;
	private String name;
	private String description;
	private String color;
	@JsonProperty("font-color")       
	private String fontColor;
    
	//getters and setters
}

Here, if the property name is the same as the JSON attribute key, there is no need to add the JsonProperty annotation, as with the id, name and color properties. But if the property name differs from the JSON attribute key, as with fontColor and “font-color”, the JsonProperty annotation is necessary.

3.  Create sample JSON data

Let’s create sample JSON format data we want to deserialize.

{
        "id": 273,
        "name": "Android",
        "description": "Sample track",
        "color": "#94868c",
        "font-color": "#000000"
}

4.  Deserialize using ObjectMapper

ObjectMapper is Jackson’s serializer/deserializer. Its readValue() method is used for simple deserialization; it takes two parameters, the JSON data we want to deserialize and the model entity class. Create an ObjectMapper object and initialize it:

ObjectMapper objectMapper = new ObjectMapper();

Now create a Model entity object and initialize it with deserialized data from ObjectMapper’s readValue() method.

Track track = objectMapper.readValue(json, Track.class);

So we have converted JSON data into the Java object.

Jackson is a very powerful library for JSON serialization and deserialization. To learn more about Jackson’s features, refer to its official documentation.

Creating A Sample Event Web App Using the Open Event Framework

I have been creating sample events for the Open Event Framework by writing JSON files for them. In this blog, I will walk you through 4 easy steps to create a sample event for FOSSASIA Open Event. These are the steps I followed while contributing sample web apps:

Let’s start by taking the example of a sample event. I recently created a sample for FBF8 2017. F8 is a conference held by Facebook annually.

I started by visiting the official F8 site. I used Google Chrome, as I was fairly sure the official site would have an API, which would make it easier to extract JSON for the event, and the Chrome developer tools make it easy to inspect API calls. I have previously explained how to access JSON through APIs in one of my earlier blog posts, which can be found here.

So on the F8 site, I looked for the page that would lead me to the full schedule of the conference:

After clicking on the “SEE FULL SCHEDULE” button, this is where it leads us, with the developer tools ON.

That weirdly named file that appears under the XHR tab contains our JSON!

[{1492502400: [{day: 1, sessionTitle: "Registration", sessionSlug: "day-1-registration",…},…],…}, {,…}]

0:{1492502400: [{day: 1, sessionTitle: "Registration", sessionSlug: "day-1-registration",…},…],…}

1:{,…}

This is how the file looks when you first click it. You will see small clickable arrows on the sides which expand the JSON to reveal all its contents. With that, we have completed our first step: obtaining JSON from the website’s API!

Step 2: Tailoring the JSON to the Open Event format. To view the latest API format, visit here. This step is obviously the most time-consuming and tedious: we need to modify the data we obtained from the F8 site to meet the Open Event requirements.

I mainly used the Sublime Text editor for this; it has a great feature for finding and replacing text in text files.

Taking a session object as an example, this is what we get from the API:

allDay:false

createdAt:"2017–01–28T01:24:12.770Z"

day:1

displayTime:"3:00pm"

endTime:{__type: "Date", iso: "2017–04–18T15:50:00.000Z"}

iso:"2017–04–18T15:50:00.000Z"

__type:"Date"

hasDetails:true

objectId:"62yCeJMvAf"

ogImage:{__type: "File", name: "296643cca9aad196bbd3534324c52049_Making Great Camera Effects.png",…}

name:"296643cca9aad196bbd3534324c52049_Making Great Camera Effects.png"

url:"https://f82017-parse.s3.amazonaws.com/296643cca9aad196bbd3534324c52049_Making%20Great%20Camera%20Effects.png"

__type:"File"

onMySchedule:false

sessionDescription:"With AI that transforms images, Facebook's Camera Effects Platform lets you create animated masks, interactive overlays, frames, and other effects that people can apply to their photos and videos in the Facebook camera. Learn best technical and design practices for creating effects using this innovative technology, and see examples of successful effects."

sessionLocation:"G"

sessionSlug:"making-great-camera-effects"

sessionTitle:"Making Great Camera Effects"

sortTime:"1492527600"

speakers:[{speakerName: "Volodymyr Giginiak", createdAt: "2017–01–28T01:27:49.292Z",…},…]

0:{speakerName: "Volodymyr Giginiak", createdAt: "2017–01–28T01:27:49.292Z",…}

1:{speakerName: "Stef Smet", createdAt: "2017–01–28T01:30:16.022Z",…}

startTime:{__type: "Date", iso: "2017–04–18T15:00:00.000Z"}

iso:"2017–04–18T15:00:00.000Z"

__type:"Date"

tags:["Camera"]

0:"Camera"

updatedAt:"2017–04–18T19:19:16.501Z"

Now, this is where I used the find & replace feature in Sublime Text to make it look like the Open Event standard JSON, for instance replacing “sessionLocation” with “microlocation”, “sessionDescription” with “long_abstract” and so on. There were a few manual changes I had to make as well.
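
If you prefer scripting the bulk renaming instead of doing it by hand in the editor, a small Python sketch like the one below works too. The key mapping and file names here are partial and only illustrative; they follow the renames described above.

import json

# partial mapping from F8 API keys to Open Event keys (extend as needed)
KEY_MAP = {
    'sessionTitle': 'title',
    'sessionSlug': 'subtitle',
    'sessionDescription': 'long_abstract',
    'sessionLocation': 'microlocation',
}


def rename_keys(obj):
    # walk the JSON tree and rename any key found in KEY_MAP
    if isinstance(obj, dict):
        return {KEY_MAP.get(key, key): rename_keys(value) for key, value in obj.items()}
    if isinstance(obj, list):
        return [rename_keys(item) for item in obj]
    return obj


with open('f8_sessions_raw.json') as source, open('sessions.json', 'w') as target:
    json.dump(rename_keys(json.load(source)), target, indent=4)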

This is what the final JSON session object looked like after completing step 2:

{
     "short_abstract": "",
     "id": 26,
     "subtitle": "making-great-camera-effects",
     "title": "Making Great Camera Effects",
     "long_abstract": "With AI that transforms images, Facebook's Camera Effects Platform lets you create animated masks, interactive overlays, frames, and other effects that people can apply to their photos and videos in the Facebook camera. Learn best technical and design practices for creating effects using this innovative technology, and see examples of successful effects.",
     "end_time": "2017-04-18T15:50:00-07:00",
     "microlocation": {
        "id": 5,
        "name": "Hall G"
     },
     "start_time": "2017-04-18T15:00:00-07:00",
     "comments": "",
     "language": "",
     "slides": "",
     "audio": "",
     "video": "",
     "signup_url": "",
     "state": "accepted",
     "session_type": {
        "id": 6,
        "name": "50 minutes session",
        "length": "00.50"
     },
     "track": {
        "id": 2,
        "name": "Camera",
        "color": "#FF3046"
     },
     "speakers": [{
           "city": "",
           "heard_from": "",
           "icon": "",
           "long_biography": "",
           "small": "",
           "photo": "",
           "thumbnail": "",
           "short_biography": "",
           "speaking_experience": "",
           "sponsorship_required": "",
           "organisation": "Facebook",
           "id": 40,
           "name": "Volodymyr Giginiak"
        }
     ]
  }

Apart from this, there were places where I had to dig for information about speakers, their social website links, their images and so on because the open-event JSON format expects all this.

Step 3: JSON Validation

It is very likely that one of your JSON files will contain invalid JSON, so it is important to validate them. I used this online tool and liked it very much. Once the JSON is validated, everything is set; all you need to do now is move on to step 4!
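
If you would rather validate locally instead of using an online tool, a few lines of Python with the built-in json module are enough; invalid JSON raises an error pointing at the offending line and column. A minimal sketch:

import json
import sys

# usage: python validate.py event.json sessions.json speakers.json
for path in sys.argv[1:]:
    with open(path) as json_file:
        try:
            json.load(json_file)
            print(path, 'is valid JSON')
        except ValueError as error:  # json.JSONDecodeError is a subclass of ValueError
            print(path, 'is invalid:', error)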

Step 4: Web app generation

Compress your JSON files into a .zip archive and then go to the web app generator. Enter your email ID and upload the zip file. Now you can deploy, download and preview your personalised web app, generated by the FOSSASIA Open Event web app generator.

The official web-app generator repo is here.

If you want to contribute to FOSSASIA by creating a sample, you can create an issue in the official open-event repo, along with a PR which contains the JSON sample files, the zipped file and a link to the web-app deployment on GitHub pages.

Unit testing JSON files in assets folder of Android App

So here is the scenario: your Android app has a lot of JSON files in the assets folder that are used to load some data when it first runs. You are writing some unit tests and want to make sure the integrity of the data in assets/*.json is preserved.

You’d assume that reading JSON files should not involve the Android runtime in any way, and that we should be able to read JSON files on the local JVM as well. But you’d be wrong: the JSONObject and JSONArray classes of Android are part of android.jar, and hence

 
JSONObject myJson = new JSONObject(someString);

The above code will not work when running unit tests on the local JVM.

Fortunately, our codebase already uses Google’s Gson library to parse JSON, and that works on the local JVM too (because Gson is a plain Java library, not an Android-specific one).

The second problem is that when running unit tests on the local JVM we do not have the getResources() or getAssets() functions. So how do we retrieve a file from the assets folder?

What I found out (after a bit of trial and error and poking around with various directory paths) is that the tests are run from the app folder (app being the Android application module; it is named app by default by Android Studio, though you might have named it differently).

So in the test file you can define, at the beginning:

    public static final String  ASSET_BASE_PATH = "../app/src/main/assets/";

And also create the following helper function

    public String readJsonFile (String filename) throws IOException {
        BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream(ASSET_BASE_PATH + filename)));
        StringBuilder sb = new StringBuilder();
        String line = br.readLine();
        while (line != null) {
            sb.append(line);
            line = br.readLine();
        }

        return sb.toString();
    }

Now wherever you need this JSON data you can just do the following

        Gson gson = new GsonBuilder().create();
        events = gson.fromJson(readJsonFile("events.json"),
                Event.EventList.class);
        eventDatesList = gson.fromJson(readJsonFile("eventDates.json"), EventDates.EventDatesList.class);