What is Open Source and why should you do it?

Since Codeheat is going on and Google Code-in has started, I would like to share some knowledge with new contributors through this blog post.

What is Open Source software?

If you google the term, you will see:

“Open-source software is computer software with its source code made available with a license in which the copyright holder provides the rights to study, change, and distribute the software to anyone and for any purpose.”

To put it in layman’s terms: “software whose source code is made available to everyone to let them change and improve it, provided that a contributor who changes the code cannot claim the software as their own.”

Thus, you don’t own the software outright. What you can do is change its code to make it better. Now, you may be wondering what’s in it for you. In my view it is all pros, and I have explained them in the latter half of this article.

Why am I writing this?

I was in the freshman year of my college when I came to know about the web and how it works. I started my journey as a developer, building things and doing some projects, but keeping them to myself. In those days, exploring further, I first came to know about Open Source software.

Curious to know more, I learned that anyone can make his/her software open so as to make it available to others for use and development. This led me to explore other people’s projects on GitHub; I went through the codebases and started contributing. I remember my first contribution was fixing a “typo”, i.e. correcting a spelling mistake in the README of a project. From there, I went on exploring more and more and got my hands dirty with Open Source, which is why I want to share some of my thoughts with you.

What’s in it for you when you contribute to Open Source?

1) Teaches you how to structure code:

Nowadays, many software projects are open sourced, and a community of developers works on them to constantly improve them. Big projects have big codebases too, which are really hard to understand at first, but after giving them some time you will be fine. The thing with such projects is that they have structured code. By “structured” I mean there are strict guidelines for the project, e.g. good tests that make you write code the way the maintainers want: clean and readable. By writing such code, you will learn how to structure your own, which is a great habit that every developer should practice.

2) Teamwork:

Creating and maintaining a large project requires teamwork. When you contribute to a project, you have to work in a team where you take others’ opinions, give your own, ask teammates for improvements, or ask about anything you are stuck with. Working in a team increases productivity, community interaction and your own network.

3) Improves the developer in you:

Okay, so I think one of the most important parts of your developer journey is, and should be, “ALWAYS LEARNING”. When you contribute, your code is reviewed by others (experts or maintainers of the project) who point out mistakes and improvements to be made, so that the code ends up much cleaner than what you had written. You also start to think about a problem more broadly. While solving it, you ensure that the code you have written keeps the app scalable for a large number of users, prolonging the life of the code.

4) Increases your Network:

One advantage of Open Source contribution is that it also increases your network in the developer community. You get to know about things you have never heard of, you get to explore them, you get to meet people, and you get to know what is going on in different parts of the world. Having connections with developers sitting in different countries is always a bonus.

5) Earn some bucks too:

At the end of the day, money matters. People used to think that contributing to Open Source projects won’t earn you money. But if you are a maintainer or a continuous contributor of a great project, you can receive donations to keep the project going and available to people.

For students in college, doing Open Source is a bonus. There are programmes like Google Summer of Code (GSoC), Google Code-in and Codeheat.

These programmes offer good incentives and stipends to participating students. FOSSASIA participates in GSoC, so you can go ahead and try getting into GSoC under FOSSASIA.

6) Plus point for job seekers:

When it comes to applying for a job, a good Open Source profile gives the recruiter a reason to single you out and offer you an interview, since you already know how to “manage a project”, “work in a team”, “get work done”, “solve a problem efficiently”, etc. Nowadays, many companies mention on their job application pages that “Open Source experience would be a bonus”.

7) Where can you start?

We have many projects at FOSSASIA to start with. There are no restrictions on language, since we have projects in most popular languages.

Currently, a couple of programs are open at FOSSASIA: Codeheat and Google Code-in.

Feel free to check out the programs and the projects under FOSSASIA at https://github.com/fossasia.

Conclusion

So, yeah. That was it. I hope you understood what Open Source is and how it would benefit you. Keep contributing to FOSSASIA and you will see the effects in no time.

Open Event Server: Creating/Rebuilding Elasticsearch Index From Existing Data In a PostgreSQL DB Using Python

Due to limited resources, the Elasticsearch instance in the current Open Event Server deployment is used only to store events and search through them.

The project uses a PostgreSQL database. This blog will focus on setting up a job that creates the events index if it does not exist; if the index exists, the job deletes all the previous data and rebuilds the events index.

Although the project uses the Flask framework, the job will be written in pure Python so that it can run properly in the background while the application continues its work. Celery is used for queueing up these jobs. The first step in building the job is to connect to our database:

from config import Config
import psycopg2

# connect to the PostgreSQL database using the URI from the app config
conn = psycopg2.connect(Config.SQLALCHEMY_DATABASE_URI)
cur = conn.cursor()


The next step is to fetch all the events from the database. We will only index certain attributes of each event that are useful in search; the rest are not stored in the index. The code given below fetches a collection of tuples containing the attributes mentioned in the query:

cur.execute(
    "SELECT id, name, description, searchable_location_name, organizer_name, organizer_description FROM events WHERE state = 'published' and deleted_at is NULL;")
events = cur.fetchall()


We will be using the bulk API, which is significantly faster than adding events one by one via the API. elasticsearch-py, the official Python client for Elasticsearch, provides the necessary functionality to work with the bulk API. The helpers present in the client enable us to use generator expressions to insert the data via the bulk API. The generator expression for events is as follows:

event_data = ({'_type': 'event',
                  '_index': 'events',
                  '_id': event_[0],
                  'name': event_[1],
                  'description': event_[2] or None,
                  'searchable_location_name': event_[3] or None,
                  'organizer_name': event_[4] or None,
                  'organizer_description': event_[5] or None}
                 for event_ in events)


We will now delete the events index if it exists, and then recreate it. The generator expression obtained above is passed to the bulk API helper, and the events index is repopulated. The complete code for the function is as follows:


@celery.task(name='rebuild.events.elasticsearch')
def cron_rebuild_events_elasticsearch():
   """
   Re-inserts all eligible events into elasticsearch
   :return:
   """
   conn = psycopg2.connect(Config.SQLALCHEMY_DATABASE_URI)
   cur = conn.cursor()
   cur.execute(
       "SELECT id, name, description, searchable_location_name, organizer_name, organizer_description FROM events WHERE state = 'published' and deleted_at is NULL ;")
   events = cur.fetchall()
   event_data = ({'_type': 'event',
                  '_index': 'events',
                  '_id': event_[0],
                  'name': event_[1],
                  'description': event_[2] or None,
                  'searchable_location_name': event_[3] or None,
                  'organizer_name': event_[4] or None,
                  'organizer_description': event_[5] or None}
                 for event_ in events)
   es_store.indices.delete('events')
   es_store.indices.create('events')
   helpers.bulk(es_store, event_data)

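Note that es_store in the function above is the project’s Elasticsearch client instance and helpers comes from the official client. A minimal sketch of setting these up (the host URL here is illustrative):

from elasticsearch import Elasticsearch, helpers

es_store = Elasticsearch(['http://localhost:9200'])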

Currently we run this job every week and also on each new deployment. Rebuilding the index is very important, as some records may not get indexed while the continuous sync is taking place.
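
Since the function above is registered as a Celery task, the weekly run can be scheduled with celery beat. A minimal sketch of such a schedule, assuming the standard CELERYBEAT_SCHEDULE setting (the exact configuration used in the project may differ):

from celery.schedules import crontab

celery.conf.CELERYBEAT_SCHEDULE = {
    'rebuild-events-elasticsearch': {
        'task': 'rebuild.events.elasticsearch',
        # run once a week, e.g. every Monday at midnight
        'schedule': crontab(minute=0, hour=0, day_of_week='monday'),
    },
}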

To know more about it please visit https://gocardless.com/blog/syncing-postgres-to-elasticsearch-lessons-learned/


Open Event API Server: Implementing FAQ Types

In the Open Event Server, there was a long-standing request from users to enable event organisers to create an FAQ section.

The API for the FAQ section was implemented subsequently. The FAQ API allowed the user to specify the following request schema:

{
 "data": {
   "type": "faq",
   "relationships": {
     "event": {
       "data": {
         "type": "event",
         "id": "1"
       }
     }
   },
   "attributes": {
     "question": "Sample Question",
     "answer": "Sample Answer"
   }
 }
}


But what if the user wanted to group certain questions under a specific category? There was no solution for that in the FAQ API. So a new API, FAQ-Types, was created.

Why make a separate API for it?

Another question that arose while designing the FAQ-Types API was whether a separate API was necessary at all. Consider that a type attribute was simply added to the FAQ API itself. It would mean the client would have to specify the type of the FAQ record every time a new record is created, and we would be trusting the user to always enter the same spelling for questions falling under the same type. The user cannot be trusted on this front. A separate API makes sure that the types remain controlled and there are no multiple entries for the same type.

Helps in handling a large number of records:

Another concern was what would happen if there were a large number of FAQ records under the same FAQ-Type. Entering the type for each of those questions would be cumbersome for the user. The FAQ-Type API overcomes this problem as well.

Following is the request schema for the FAQ-Types API:

{
 "data": {
   "attributes": {
     "name": "abc"
   },
   "type": "faq-type",
   "relationships": {
     "event": {
       "data": {
         "id": "1",
         "type": "event"
       }
     }
   }
 }
}


Additionally:

  • FAQ to FAQ-type is a many-to-one relation, as sketched below.
  • A single FAQ can only belong to one type.
  • The FAQ-type relationship is optional; if users want different sections they can add a type, and if not, that is their choice.
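
A hedged sketch of how this many-to-one relation could look in the SQLAlchemy models (class and column names here are illustrative, not the project’s exact code):

class FaqType(db.Model):
    __tablename__ = 'faq_types'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String, nullable=False)


class Faq(db.Model):
    __tablename__ = 'faqs'
    id = db.Column(db.Integer, primary_key=True)
    question = db.Column(db.String, nullable=False)
    answer = db.Column(db.String, nullable=False)
    # optional link to a type; many FAQs can share one FAQ-type
    faq_type_id = db.Column(db.Integer, db.ForeignKey('faq_types.id'), nullable=True)
    faq_type = db.relationship('FaqType', backref='faqs')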


Discount Codes in Open Event Server

The Open Event System allows the usage of discount codes with tickets and events. This blog post describes the types of discount codes and the endpoints that can be used to fetch and update their details.

In the Open Event API Server, each event can have two types of discount codes. One is the ‘event’ discount code, while the other is the ‘ticket’ discount code. As the names suggest, the event discount code applies at the event level and the ticket discount code at the ticket level.

Each event can have only one ‘event’ discount code, and it is accessible only to the server admin. The Open Event server admin can create, view and update the ‘event’ discount code for an event. The event discount code follows the DiscountCodeEvent schema, which is inherited from the parent class DiscountCodeSchemaPublic. To save the unique discount code associated with an event, the event model’s discount_code_id field is used.

The ‘ticket’ discount code is accessible by the event organizer and co-organizer. Each event can have any number of ‘ticket’ discount codes. This follows the DiscountCodeTicket schema, which is also inherited from the same base class DiscountCodeSchemaPublic. Which schema to use is decided based on the value of the field used_for, which can be either ‘events’ or ‘tickets’. The two schemas have different relationships with events and the marketer respectively.

We have the following endpoints for the discount codes of events and tickets:
‘/events/<int:event_id>/discount-code’
‘/events/<int:event_id>/discount-codes’

The first endpoint is based on the DiscountCodeDetail class. It returns the detail of one discount code which in this case is the event discount code associated with the event.

The second endpoint is based on the DiscountCodeList class which returns a list of discount codes associated with an event. Note that this list also includes the ‘event’ discount code, apart from all the ticket discount codes.
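
For reference, a hedged sketch of how such endpoints are typically registered with flask-rest-jsonapi (the view names here are illustrative, not necessarily the project’s exact code):

api.route(DiscountCodeDetail, 'discount_code_detail',
          '/events/<int:event_id>/discount-code')
api.route(DiscountCodeList, 'discount_code_list',
          '/events/<int:event_id>/discount-codes')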

class DiscountCodeFactory(factory.alchemy.SQLAlchemyModelFactory):
   class Meta:
       model = DiscountCode
       sqlalchemy_session = db.session

   event_id = None
   user = factory.RelatedFactory(UserFactory)
   user_id = 1


Since each discount code belongs to an event (either directly or through a ticket), the factory would normally have event as a related factory. But to test the /events/<int:event_id>/discount-code endpoint, we first need the event and then pass the discount code id as 1 for Dredd to check it. Hence, event is not included as a related factory, but is added as a separate object every time a discount code object is to be used.

@hooks.before("Discount Codes > Get Discount Code Detail of an Event > Get Discount Code Detail of an Event")
def event_discount_code_get_detail(transaction):
   """
   GET /events/1/discount-code
   :param transaction:
   :return:
   """
   with stash['app'].app_context():
       discount_code = DiscountCodeFactory()
       db.session.add(discount_code)
       db.session.commit()
       event = EventFactoryBasic(discount_code_id=1)
       db.session.add(event)
       db.session.commit()


The other tests and extended documentation can be found here.


Open Event Server: Getting The Identity From The Expired JWT Token In Flask-JWT

The Open Event Server uses JWT based authentication, where JWT stands for JSON Web Token. JSON Web Tokens are an open industry standard RFC 7519 method for representing claims securely between two parties. [source: https://jwt.io/]

Flask-JWT is used for JWT-based authentication in the project. Flask-JWT makes it easy to use JWT-based authentication in Flask, while at its core it still uses PyJWT.

To get the identity when a JWT token is present in the request’s Authorization header, the current_identity proxy of Flask-JWT can be used as follows:

@app.route('/example')
@jwt_required()
def example():
   return '%s' % current_identity


Note that it will only be set in the context of functions decorated by jwt_required(). The problem with the current_identity proxy when using jwt_required is that the token has to be active; the identity of an expired token cannot be fetched this way.

So why not write such a function on our own? A JWT token is divided into three parts separated by dots (.), which are:

  • Header
  • Payload
  • Signature

The first step would be to get the payload, which can be done as follows:

token_second_segment = _default_request_handler().split('.')[1]


The payload obtained above is still base64-encoded JSON; it can be converted into a dict as follows:

payload = json.loads(token_second_segment.decode('base64'))


The identity can now be found in the payload as payload[‘identity’]. We can get the actual user from the payload as follows:

def jwt_identity(payload):
   """
   Jwt helper function
   :param payload:
   :return:
   """
   return User.query.get(payload['identity'])


Our final function will now be something like:

def get_identity():
   """
   To be used only if identity for expired tokens is required, otherwise use current_identity from flask_jwt
   :return:
   """
   token_second_segment = _default_request_handler().split('.')[1]
   payload = json.loads(token_second_segment.decode('base64'))
   user = jwt_identity(payload)
   return user


But after using this function for some time, you will notice that for certain tokens, the system raises an error saying that the JWT token is missing padding. The JWT payload is base64 encoded, and decoding requires the payload string length to be a multiple of four. If it is not, the remaining places can be padded with extra = (equals) signs. Since Python 2.7’s .decode doesn’t do that by default, we can accomplish it as follows:

missing_padding = len(token_second_segment) % 4

# ensures the string is correctly padded to be a multiple of 4
if missing_padding != 0:
   token_second_segment += b'=' * (4 - missing_padding)

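For reference, here is a self-contained sketch of the same split-pad-decode steps using the standard base64 module; this is an illustrative variant rather than the project’s code, and it also works on Python 3:

import base64
import json

def decode_jwt_payload(token):
    # JWT structure: header.payload.signature
    payload_segment = token.split('.')[1]
    missing_padding = len(payload_segment) % 4
    if missing_padding != 0:
        payload_segment += '=' * (4 - missing_padding)
    # JWT uses the URL-safe base64 alphabet
    return json.loads(base64.urlsafe_b64decode(payload_segment))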


Checking Whether Migrations Are Up To Date With The SQLAlchemy Models In The Open Event Server

In the Open Event Server, when there is some change in a SQLAlchemy model in a pull request, the proper migrations for it are sometimes missed in the PR.

The first approach to check whether the migrations were up to date in the database was with the following health check function:

from subprocess import check_output
def health_check_migrations():
   """
   Checks whether database is up to date with migrations, assumes there is a single migration head
   :return:
   """
   head = check_output(["python", "manage.py", "db", "heads"]).split(" ")[0]
   version_num = (db.session.execute('SELECT version_num from alembic_version').fetchone())['version_num']

   if head == version_num:
       return True, 'database up to date with migrations'
   return False, 'database out of date with migrations'


In the above function, we get the head according to the migration files as follows; the heads command prints the latest revision hash first, so splitting the output gives us just the hash:

head = check_output(["python", "manage.py", "db", "heads"]).split(" ")[0]


The table alembic_version contains the latest alembic revision to which the database was actually upgraded. We can get this revision with the following line (already used in the function above):

version_num = (db.session.execute('SELECT version_num from alembic_version').fetchone())['version_num']


Then we compare the two heads and return a proper tuple based on the comparison. While this method was pretty fast, it had a drawback: if the user forgets to generate the migration files for the changes done in the SQLAlchemy model, this approach will fail to raise a failure status in the health check.

To overcome this drawback, all the SQLAlchemy models are fetched automatically and simple SQLAlchemy select queries are made to check whether the migrations are up to date.

Remember that a raw SQL query will not serve our purpose in this case, as you’d have to specify the columns explicitly in the query. A SQLAlchemy query, on the other hand, is generated from the fields defined in the db model, so if the migrations are missing for a model change, a proper error will be raised.
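
For instance, if a column exists on a model but the migration adding it was never run, even the simplest ORM query fails loudly (the model name here is illustrative):

# selects every column mapped on the model, so a column missing
# from the actual table raises an error immediately
db.session.query(Event).first()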

We can accomplish this with the following function:

def health_check_migrations():
   """
   Checks whether database is up to date with migrations by performing a select query on each model
   :return:
   """
    # Get all the models in the db; all models should have an explicit __tablename__
   classes, models, table_names = [], [], []
   # noinspection PyProtectedMember
   for class_ in db.Model._decl_class_registry.values():
       try:
           table_names.append(class_.__tablename__)
           classes.append(class_)
       except:
           pass
   for table in db.metadata.tables.items():
       if table[0] in table_names:
           models.append(classes[table_names.index(table[0])])

   for model in models:
       try:
           db.session.query(model).first()
       except:
           return False, '{} model out of date with migrations'.format(model)
   return True, 'database up to date with migrations'


In the above code, we automatically get all the models and tables present in the database. Then for each model we try a simple SELECT query that returns the first row found. If there is any error in doing so, (False, '{} model out of date with migrations'.format(model)) is returned, so as to ensure a failure status in the health check.


Implementing Health Check Endpoint in Open Event Server

A health check endpoint was required in the Open Event Server to be used by Kubernetes to know when the web instance is ready to receive requests.

Following are the checks that were our primary focus:

  • Connection to the database.
  • Ensuring the SQLAlchemy models are in line with the migrations.
  • Connection to the celery workers.
  • Connection to the redis instance.

Runscope/healthcheck seemed like the way to go. Healthcheck wraps a Flask app object and adds a way to write simple health-check functions that can be used to monitor your application. It’s useful for asserting that your dependencies are up and running and that your application can respond to HTTP requests. The healthcheck functions are exposed via a user-defined Flask route, so you can use an external monitoring application (monit, nagios, Runscope, etc.) to check the status and uptime of your application.

The health check endpoint was implemented at /health-check as follows:

from healthcheck import HealthCheck
health = HealthCheck(current_app, "/health-check")


Following is the function for checking the connection to the database:

def health_check_db():
   """
   Check health status of db
   :return:
   """
   try:
       db.session.execute('SELECT 1')
       return True, 'database ok'
   except:
       sentry.captureException()
       return False, 'Error connecting to database'


Check functions take no arguments and should return a tuple of (bool, str). The boolean indicates whether or not the check passed. The message is any string or output that should be rendered for this check; it is useful for error messages and debugging.

The above function executes a query on the database to check whether it is connected properly. If the query runs successfully, it returns the tuple (True, ‘database ok’). sentry.captureException() makes sure that the Sentry instance receives a proper exception event with all the information about the exception. If there is an error connecting to the database, the exception will be thrown and the tuple (False, ‘Error connecting to database’) returned.

Finally, to add this check to the endpoint:

health.add_check(health_check_db)

Following is the response for a successful health check:

{
   "status": "success",
   "timestamp": 1500915121.52474,
   "hostname": "shubham",
   "results": [
       {
           "output": "database ok",
           "checker": "health_check_db",
           "expires": 1500915148.524729,
           "passed": true,
           "timestamp": 1500915121.524729
       }
   ]
}

If the database is not connected the following error will be shown:

{
           "output": "Error connecting to database",
           "checker": "health_check_db",
           "expires": 1500965798.307425,
           "passed": false,
           "timestamp": 1500965789.307425
}
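
The checks for the celery workers and the redis instance follow the same (bool, str) pattern. As an illustration, here is a hedged sketch of a redis check, assuming a redis_store client object as in typical Flask-Redis setups (not necessarily the project’s exact wiring):

def health_check_redis():
    """
    Check health status of redis
    :return:
    """
    try:
        # PING round-trips to the redis server and raises on failure
        redis_store.ping()
        return True, 'redis ok'
    except Exception:
        return False, 'Error connecting to redis'

health.add_check(health_check_redis)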


Supporting Dasherized Attributes and Query Params in flask-rest-jsonapi for Open Event Server

In the Open Event API Server project, attributes of the API are dasherized.

What was the need for dasherizing the attributes in the API?

All the attributes in our database models are separated by underscores, i.e. first name is stored as first_name. But most API client implementations support dasherized attributes by default. The primary reasons behind this decision were to attract third-party client implementations in the future and to make the API easy to set up for them. Also, to quote the official JSON API spec recommendation for the same:

Member names SHOULD contain only the characters “a-z” (U+0061 to U+007A), “0-9” (U+0030 to U+0039), and the hyphen minus (U+002D HYPHEN-MINUS, “-“) as separator between multiple words.

Note: The dasherized version for first_name will be first-name.

flask-rest-jsonapi is the API framework used by the project. We were able to dasherize the API responses and requests by adding inflect=dasherize to each API schema, where dasherize is the following function:

def dasherize(text):
   return text.replace('_', '-')

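For context, a minimal sketch of wiring the helper into a schema, following the marshmallow-jsonapi convention of an inflect option on the schema’s Meta class (the schema and fields here are illustrative, not the project’s exact code):

from marshmallow_jsonapi import Schema, fields

class EventSchemaPublic(Schema):
    class Meta:
        type_ = 'event'
        # every underscored field name is emitted dasherized
        inflect = dasherize

    id = fields.Str(dump_only=True)
    searchable_location_name = fields.Str(allow_none=True)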

flask-rest-jsonapi also provides powerful features through query params, such as sparse fieldsets, sorting and inclusion of related objects.

But we observed that the query params were not being dasherized, which rendered these awesome features useless 🙁. The reason for this was that flask-rest-jsonapi took the query params as-is and searched for them in the API schema. As Python variable names cannot contain a dash, naming the attributes with a dash in the internal API schema was out of the question.

To add dasherizing support to the query params, changes in the QueryStringManager, located at querystring.py in the framework root, are required. A config variable named DASHERIZE_API was added to turn this feature on and off.

Following are the changes required for dasherizing query params:

For sparse fieldsets, in the fields function, replace the following line:

result[key] = [value]

with

if current_app.config['DASHERIZE_API'] is True:
    result[key] = [value.replace('-', '_')]
else:
    result[key] = [value]


For sorting, in the sorting function, replace the following line:

field = sort_field.replace('-', '')

with

if current_app.config['DASHERIZE_API'] is True:
   field = sort_field[0].replace('-', '') + sort_field[1:].replace('-', '_')
else:
   field = sort_field[0].replace('-', '') + sort_field[1:]


For inclusion of related objects, in the include function, replace the following line:

return include_param.split(',') if include_param else []

with

if include_param:
   param_results = []
   for param in include_param.split(','):
       if current_app.config['DASHERIZE_API'] is True:
           param = param.replace('-', '_')
       param_results.append(param)
   return param_results
return []
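
With these changes in place, a dasherized request like the following hypothetical example works as expected, with the dasherized query params translated back to the underscored attribute names internally:

GET /v1/events?sort=starts-at&include=event-topic&fields[event]=name,starts-at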


Running Dredd Hooks as a Flask App in the Open Event Server

The Open Event Server is based on the micro-framework Flask from its initial phases. After implementing API documentation, we decided to implement the Dredd testing in the Open Event API.

After isolating each request in Dredd testing, the real challenge was to bind the database engine to the Dredd hooks. Since we have been using the Flask-SQLAlchemy db.Model base class for building all the models, Flask, being a micro framework itself, came to our rescue, as we could easily bind the database engine to a Flask app. Conventionally, Dredd hooks are written in pure Python, but we will be running them as a self-contained Flask app.

So how do we initialise this Flask app in our Dredd hooks? The Flask app can be initialised in the before_all hook easily, as shown below:

def before_all(transaction):
    app = Flask(__name__)
    app.config.from_object('config.TestingConfig')


The database can be bound to the app as follows:

def before_all(transaction):
    app = Flask(__name__)
    app.config.from_object('config.TestingConfig')
    db.init_app(app)
    Migrate(app, db)


The challenge now is how to bind the application context when applying the database fixtures. In a normal Flask application this can be done as follows:

with app.app_context():
    # perform your operation


While for unit tests in Python:

with app.test_request_context():
    # perform tests


But as all the hooks are separate from each other, dredd-hooks-python supports the idea of a single stash where you can store all the desired variables (neither this particular structure nor the name stash is mandatory).

The app and db can be added to the stash as shown below:

# module-level stash shared across the hooks
stash = {}

@hooks.before_all
def before_all(transaction):
    app = Flask(__name__)
    app.config.from_object('config.TestingConfig')
    db.init_app(app)
    Migrate(app, db)
    stash['app'] = app
    stash['db'] = db


These variables stored in the stash can then be used wherever needed, as shown below:

@hooks.before_each
def before_each(transaction):
    with stash['app'].app_context():
        db.engine.execute("drop schema if exists public cascade")
        db.engine.execute("create schema public")
        db.create_all()

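Similarly, cleanup at the very end of the run can go in an after_all hook. A hedged sketch (dredd-hooks-python provides after_all; the exact teardown in the project may differ):

@hooks.after_all
def after_all(transaction):
    with stash['app'].app_context():
        # dispose of the scoped session once all transactions are done
        db.session.remove()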

and many other such examples.

Related Links:
1. Testing Your API Documentation With Dredd: https://matthewdaly.co.uk/blog/2016/08/08/testing-your-api-documentation-with-dredd/
2. Dredd Tutorial: https://help.apiary.io/api_101/dredd-tutorial/
3. Dredd Docs: http://dredd.readthedocs.io/

Displaying a Comments DialogFragment on a Button Click from the Feed Adapter in the Open Event Android App

While developing the live feed of the event page from Facebook for the Open Event Android App, there were questions about how best to display the comments in the feed. A dialog fragment over the feed on the click of a button was the most suitable solution. Now the problem was that a DialogFragment can only be called from an app component (e.g. a fragment or an activity). Therefore, the only remaining challenge was to call the DialogFragment from the adapter over the feed fragment, with the corresponding comments of the particular post, on a button click.

What is a DialogFragment?

A DialogFragment displays a dialog window floating on top of its activity’s window. This fragment contains a Dialog object, which it displays as appropriate based on the fragment’s state. Control of the dialog (deciding when to show, hide or dismiss it) should be done through the API here, not with direct calls on the dialog (developer.android.com).

Solution

The solution worked out was to define an adapter callback interface with an onMethodCallback method in the feed adapter class itself, with the list of comment items fetched at runtime on the button click of a particular post. The interface had to be implemented by the main activity housing the feed fragment, which would create the comments DialogFragment with the passed list of comments.

Implementation

Define an interface AdapterCallback with the method onMethodCallback, parameterized by the list of comment items, in your adapter class:

public interface AdapterCallback {
   void onMethodCallback(List<CommentItem> commentItems);
}


Create a constructor of the adapter with the AdapterCallback as a parameter. Do not forget to surround it with a try/catch.

public FeedAdapter(Context context, AdapterCallback adapterCallback, List<FeedItem> feedItems) {
     this.mAdapterCallback = adapterCallback;
}


On the click of the comments button, call the onMethodCallback method with the corresponding comment items of the particular feed post:

getComments.setOnClickListener(v -> {
   if(commentItems.size()!=0)
       mAdapterCallback.onMethodCallback(commentItems);
});


Finally, implement the interface in the activity to display the comments dialog fragment populated with the corresponding comments of a feed post. Pass the comments as an ArrayList through the bundle.

@Override
public void onMethodCallback(List<CommentItem> commentItems) {
   CommentsDialogFragment newFragment = new CommentsDialogFragment();
   Bundle bundle = new Bundle();
   bundle.putParcelableArrayList(ConstantStrings.FACEBOOK_COMMENTS, new ArrayList<>(commentItems));
   newFragment.setArguments(bundle);
   newFragment.show(fragmentManager, "Comments");
}


Conclusion

The comments generated with each feed post in the Open Event Android App complement the feed well. Pagination is an option for both the comments and the feed, but that is something for another time. Until then, keep coding!
