Implement Email in Open Event Server

In FOSSASIA’s Open Event Server project, we send out emails when various actions are performed through the API. For example, when a new user is created, they receive a welcome email as well as an email verification email. Users get role invites from event organisers as emails, and when someone buys a ticket they receive a PDF link to the ticket by email. In short, all the important information that the user needs to be notified about is sent as an email to the user, and sometimes to the organizer as well.

In FOSSASIA, we use sendgrid’s API or an SMTP server, depending on the admin settings, for sending emails. You can read more about how we use sendgrid’s API to send emails in FOSSASIA here. Now let’s dive into the modules that we have for sending the emails. The three main parts of the email sending system are:

  1. Model – Storing the Various Actions
  2. Templates – Storing the HTML templates for the emails
  3. Email Functions – Individual functions for various different actions

Let’s go through each of these modules one by one.

Model

USER_REGISTER = 'User Registration'
USER_CONFIRM = 'User Confirmation'
USER_CHANGE_EMAIL = "User email"
INVITE_PAPERS = 'Invitation For Papers'
NEXT_EVENT = 'Next Event'
NEW_SESSION = 'New Session Proposal'
PASSWORD_RESET = 'Reset Password'
PASSWORD_CHANGE = 'Change Password'
EVENT_ROLE = 'Event Role Invitation'
SESSION_ACCEPT_REJECT = 'Session Accept or Reject'
SESSION_SCHEDULE = 'Session Schedule Change'
EVENT_PUBLISH = 'Event Published'
AFTER_EVENT = 'After Event'
USER_REGISTER_WITH_PASSWORD = 'User Registration during Payment'
TICKET_PURCHASED = 'Ticket(s) Purchased'


In the Model file, named mail.py, we first declare the various actions for which we send out emails. These actions are used globally as the keys in the other modules of the email sending service. Here, we define global variables holding the name of each action as a string. These are constants, which means their values never change. For example, USER_REGISTER has the value ‘User Registration’, which essentially means that anything related to the USER_REGISTER key is executed when the User Registration action occurs. In other words, whenever a user registers into the system by signing up or creating a new user through the API, he/she receives the corresponding emails.
Apart from this, we have the model class which defines a table in the database. We use this model class to store a record of the emails sent, namely the action, the time at which the email was sent, the recipient, the subject and the message body. That way we have a record of all the emails that were sent out via our server.

class Mail(db.Model):
    __tablename__ = 'mails'
    id = db.Column(db.Integer, primary_key=True)
    recipient = db.Column(db.String)
    time = db.Column(db.DateTime(timezone=True))
    action = db.Column(db.String)
    subject = db.Column(db.String)
    message = db.Column(db.String)

    def __init__(self, recipient=None, time=None, action=None, subject=None,
                 message=None):
        self.recipient = recipient
        self.time = time
        if self.time is None:
            self.time = datetime.now(pytz.utc)
        self.action = action
        self.subject = subject
        self.message = message

    def __repr__(self):
        return '<Mail %r to %r>' % (self.id, self.recipient)

    def __str__(self):
        return unicode(self).encode('utf-8')

    def __unicode__(self):
        return 'Mail %r by %r' % (self.id, self.recipient,)


The table in which all this information is stored is named mails. It stores the recipient, the time at which the email is sent (timezone aware), the action which initiated the email, the subject of the email and the entire HTML body of the email. If a datetime value is passed in, we use that; otherwise we use the current time for the time field.
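
For illustration, here is a minimal sketch of how an email record could be stored, assuming the Flask-SQLAlchemy db session used by the project (the values are made up):

mail = Mail(
    recipient='user@example.com',
    action=USER_REGISTER,
    subject='Welcome to Open Event',
    message='<p>Thanks for signing up!</p>'
)
# time is not passed, so it defaults to the current UTC time
db.session.add(mail)
db.session.commit()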

HTML Templates

We store the HTML templates in the form of key-value pairs in a file called system_mails.py inside the helpers module of the API. Inside system_mails, we have a global dict variable named MAILS as shown below.

MAILS = {
    EVENT_PUBLISH: {
        'recipient': 'Organizer, Speaker',
        'subject': u'{event_name} is Live',
        'message': (
            u"Hi {email}<br/>" +
            u"Event, {event_name}, is up and running and ready for action. Go ahead and check it out." +
            u"<br/> Visit this link to view it: {link}"
        )
    },
    INVITE_PAPERS: {
        'recipient': 'Speaker',
        'subject': u'Invitation to Submit Papers for {event_name}',
        'message': (
            u"Hi {email}<br/>" +
            u"You are invited to submit papers for event: {event_name}" +
            u"<br/> Visit this link to fill up details: {link}"
        )
    },
    SESSION_ACCEPT_REJECT: {
        'recipient': 'Speaker',
        'subject': u'Session {session_name} has been {acceptance}',
        'message': (
            u"Hi {email},<br/>" +
            u"The session <strong>{session_name}</strong> has been <strong>{acceptance}</strong> by the organizer. " +
            u"<br/> Visit this link to view the session: {link}"
        )
    },
    SESSION_SCHEDULE: {
        'recipient': 'Organizer, Speaker',
        'subject': u'Schedule for Session {session_name} has been changed',
        'message': (
            u"Hi {email},<br/>" +
            u"The schedule for session <strong>{session_name}</strong> has been changed. " +
            u"<br/> Visit this link to view the session: {link}"
        )
    },
    # ... entries for the remaining actions follow the same pattern
}


Inside the MAILS dict, we have key-value pairs where the keys are the global action variables from the Model, tying each email template to its action. Each value is itself a dict with three keys – recipient, subject and message. The recipient defines the group who should receive this email, the subject goes into the subject line of the email, and message forms the body of the email. For subject and message we use unicode strings with named placeholders that are later filled in using Python’s .format() function.

Email Functions

This is the most important part of the entire email sending system, since this is where the functionality is implemented using the above two modules. All these functions live in a single file, mail.py, inside the helpers module of the API. Firstly, we import two things in this file – the global dict variable MAILS defined in the template file above, and the various global action variables defined in the model. There is one main function, send_email(to, action, subject, html), which is used by every other individual function for sending emails. It takes as parameters the email address to which the email is to be sent, the subject string and the HTML body string, along with the action, to store a record of the email in the database.

Firstly we ensure that the email address of the recipient is present and isn’t an empty string. After that, we retrieve the email service as set in the admin settings. It can be either “smtp” or “sendgrid”. The email address of the sender is formatted differently depending on the email service in use. While sendgrid uses just the email, say “medomag20@gmail.com”, smtp uses a slightly different format: Medozonuo Suohu<medomag20@gmail.com>. So we set that as well in the email_from variable.

def send_email(to, action, subject, html):
    """
    Sends email and records it in DB
    """
    if not string_empty(to):
        email_service = get_settings()['email_service']
        email_from_name = get_settings()['email_from_name']
        if email_service == 'smtp':
            email_from = email_from_name + '<' + get_settings()['email_from'] + '>'
        else:
            email_from = get_settings()['email_from']
        payload = {
            'to': to,
            'from': email_from,
            'subject': subject,
            'html': html
        }

        if not current_app.config['TESTING']:
            if email_service == 'smtp':
                smtp_encryption = get_settings()['smtp_encryption']
                if smtp_encryption == 'tls':
                    smtp_encryption = 'required'
                elif smtp_encryption == 'ssl':
                    smtp_encryption = 'ssl'
                elif smtp_encryption == 'tls_optional':
                    smtp_encryption = 'optional'
                else:
                    smtp_encryption = 'none'

                config = {
                    'host': get_settings()['smtp_host'],
                    'username': get_settings()['smtp_username'],
                    'password': get_settings()['smtp_password'],
                    'encryption': smtp_encryption,
                    'port': get_settings()['smtp_port'],
                }

                from tasks import send_mail_via_smtp_task
                send_mail_via_smtp_task.delay(config, payload)


After this we create the payload containing the email address of the recipient, the email address of the sender, the subject of the email and the HTML body of the email.
For unit testing and any other testing we skip the actual email sending, since it is not required in that flow. So we check that the current app is not configured to run in a testing environment. After that we have two different implementations depending on the email service used.

SMTP

There are three kinds of encryption that can be used with the smtp server – tls, ssl and tls_optional – and if none of these is set, no encryption is used. We determine this based on the admin settings again. Also, from the admin settings we collect the host, username, password and port for the smtp server.

After this we start a celery task for sending the email. Since sending email to a number of clients can be time consuming, we do it through the celery queueing service without disturbing the main workflow of the system.

@celery.task(name='send.email.post.smtp')
def send_mail_via_smtp_task(config, payload):
    mailer_config = {
        'transport': {
            'use': 'smtp',
            'host': config['host'],
            'username': config['username'],
            'password': config['password'],
            'tls': config['encryption'],
            'port': config['port']
        }
    }

    mailer = Mailer(mailer_config)
    mailer.start()
    message = Message(author=payload['from'], to=payload['to'])
    message.subject = payload['subject']
    message.plain = strip_tags(payload['html'])
    message.rich = payload['html']
    mailer.send(message)
    mailer.stop()

Inside the celery task, we use the Mailer and Message classes from Python’s marrow.mailer package. We configure the Mailer according to the various settings received from the admin and then use the payload to send the email.

Sendgrid

For sending email using the sendgrid API, we need to set the Bearer key which is used for authenticating with the email service. This key is also defined in the admin settings. After we have set the Bearer key in the authorization header, we again initiate the celery task corresponding to the sendgrid email sending service.
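
A minimal sketch of that part of send_email() might look like this (the exact settings key for the sendgrid API key is an assumption here):

key = get_settings()['sendgrid_key']  # assumed settings key holding the sendgrid API key
headers = {
    "Authorization": "Bearer " + key
}
from tasks import send_email_task
send_email_task.delay(payload, headers)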

@celery.task(name='send.email.post')
def send_email_task(payload, headers):
    requests.post(
        "https://api.sendgrid.com/api/mail.send.json",
        data=payload,
        headers=headers
    )


To send the email, all we need to do is make a POST request to the API endpoint “https://api.sendgrid.com/api/mail.send.json” with the headers containing the Bearer key, and the data containing the payload with all the information related to the recipient, sender, subject of the email and the body of the email.

Apart from these, this module implements all the individual functions that are called when the various actions occur. For example, let’s look into the email sending function called when a new session is created.

def send_email_new_session(email, event_name, link):
    """email for new session"""
    send_email(
        to=email,
        action=NEW_SESSION,
        subject=MAILS[NEW_SESSION]['subject'].format(
            event_name=event_name
        ),
        html=MAILS[NEW_SESSION]['message'].format(
            email=email,
            event_name=event_name,
            link=link
        )
    )


This function is called inside the Sessions API, for every speaker of the session as well as for every organizer of the event to which the session is submitted. Inside this function, we use send_email(). But first we need to create the subject and the message body of the email using the templates, replacing the placeholders with actual values using Python formatting. MAILS[NEW_SESSION]['subject'] returns a unicode string: u’New session proposal for {event_name}’. So we use the .format() function to replace {event_name} with the actual event_name received as a parameter. It is equivalent to doing something like:

u'New session proposal for {event_name}'.format(event_name='FOSSASIA')

which would give us a resulting string of the form:

u'New session proposal for FOSSASIA'

Similarly, we create the HTML message body using the templates and the parameters received. After this is done, we call send_email(), which then sends the final email.

Implementing Search Functionality In Calendar Mode On Schedule Page In Open Event Webapp

Calendar Mode (screenshot)

The list mode of the page already supported the search feature. We needed to implement it in the calendar mode. The corresponding issue for this feature is here. The whole work can be seen here.

First, we see the basic structure of the page in the calendar mode.

<div class="{{slug}} calendar">
 <!-- slug represents the currently selected date -->
 <!-- This div contains all the sessions scheduled on the selected date -->
 <div class="rooms">
   <!-- This div contains all the rooms of an event -->
   <!-- Each particular room has a set of sessions associated with it on that particular date -->
   <div class="room">
     <!-- This div contains the list of session happening in a particular room -->
     <div class="session"> <!-- This div contains all the information about a session -->
       <div class="session-name"> {{title}} </div> <!-- Title of the session -->
       <h4 class="text"> {{{description}}} </h4> <!-- Description of the session -->
       <!-- This div contains the info of the speakers presenting the session -->
       <div class="session-speakers-list">
         <div class="speaker-name"><strong>{{{title}}}</div> <!-- Name of the speaker -->
           <div class="session-speakers-more"> {{position}} {{organisation}} </div> <!-- Position and organization of speaker-->
         </div>
       </div>
     </div>
   </div>
 </div>
</div>

The user will type the query in the search bar near the top of the page. The search bar has the class fossasia-filter.

We set up a keyup event listener on that element, so that whenever the user presses and releases a key, the handler function runs and displays only those sessions which match the current query entered in the search bar. This way, the search results change dynamically as the user types.

Now the important part is how we actually show and hide the session elements. We compare a few session attributes with the text entered in the search box. The attributes we look at are the title of the session and the name, position and organization of the speaker(s) presenting the session. We check whether the text entered by the user in the search bar appears contiguously in any of the above-mentioned attributes. If it appears, the session element is shown; otherwise it is hidden. The check is case insensitive. We also count the number of visible sessions on the page, and if it is zero, we display a message saying that no results were found.

For example, suppose the user enters the string ‘wel’ in the search bar. We then iterate over all the sessions, and only those which have ‘wel’ in their title or in the name/position/organization of their speakers remain visible. All the other sessions are hidden.

Here is an excerpt from the code. The whole file can be seen here.

$('.fossasia-filter').change(function() {
  var filterVal = $(this).val(); // Search query entered by user
  $('.session').each(function() { // Iterate through all the sessions. Check the title of the session
    // and the name, position and organization of the speaker
    if ($(this).find('.session-name').text().toUpperCase().indexOf(filterVal.toUpperCase()) >= 0 ||
        $(this).find('.session-speakers-list a p span').text().toUpperCase().indexOf(filterVal.toUpperCase()) >= 0 ||
        $(this).find('.speaker-name').text().toUpperCase().indexOf(filterVal.toUpperCase()) >= 0) {
      $(this).show(); // Matched, so display the session
    } else {
      $(this).hide(); // Hide the element
    }
  });
  var calFilterLength = $('.calendar:visible').length;
  if (isCalendarView && calFilterLength == 0) { // No session elements found
    $('.search-filter:first').after('<p id="no-results">No matching results found.</p>');
  }
}).keyup(function() {
  $(this).change();
});

Below is the default view of the calendar mode on the schedule page

On entering ‘wel’ in the search bar, sessions get filtered


Implementing Roles API on Open Event Frontend to Create Roles Using an External Modal

This blog article will illustrate how roles are created via an external modal on the admin permissions page in Open Event Frontend, using the roles API. Our discussion will primarily involve the admin/permissions/index route to illustrate the process. The primary endpoint of the Open Event API with which we are concerned for creating a role is

POST /v1/roles

First we need to create a model for the role, which will have the fields corresponding to the API, so we proceed with the ember CLI command:

ember g model role

Next we define the model according to the requirements. The model needs to extend the base model class, and has only two fields – one for the title and one for the actual name of the role.

import attr from 'ember-data/attr';
import ModelBase from 'open-event-frontend/models/base';

export default ModelBase.extend({
  name      : attr('string'),
  titleName : attr('string')
});

Next we need to modify the existing modal to incorporate the API and the creation of roles in it. It is very important to note here that reusing a single createRecord instance as the model would result in a major flaw: if the user tries to create multiple roles, every request after the first POST would be a PATCH request and would keep modifying the same role. To avoid this, a new record needs to be created every time the user clicks on Add Role. We slightly modify the modal component call to pass the name and titleName to it.

{{modals/add-system-role-modal  isOpen=isAddSystemRoleModalOpen
                                isLoading=isLoading
                                name=name
                                titleName=titleName
                                addSystemRole=(action 'addSystemRole')}}

Upon entering the details of the roles and successful validation of the form, if the user clicks the Add Role button of the modal, the action addSystemRole will be triggered. We will write the entire logic for the same in the respective controller of the route.

addSystemRole() {
  this.set('isLoading', true);
  this.get('store').createRecord('role', {
    name      : this.get('name'),
    titleName : this.get('titleName')
  }).save()
    .then(() => {
      this.set('isLoading', false);
      this.notify.success(this.l10n.t('User permissions have been saved successfully.'));
      this.set('isAddSystemRoleModalOpen', false);
      this.setProperties({
        name          : null,
        roleTitleName : null
      });
    })
    .catch(() => {
      this.set('isLoading', false);
      this.notify.error(this.l10n.t('An unexpected error has occurred. User permissions not saved.'));
    });
},

At first the isLoading property is set to true. This adds the semantic UI class loading to the form, so the form goes into the loading state. Next, a record of type role is created and its properties are set to the corresponding values entered by the user.

Then save() is called, which subsequently makes a POST request to the server. If the request is successful, the modal is closed by setting the isAddSystemRoleModalOpen property to false. Also, the fields of the modal are cleared for a better user experience in case multiple roles need to be added one after the other.

In case there is an error while processing the request, the catch() block executes: the modal is not closed and the fields are not cleared.

Giving Offline Support to the Open Event Organizer Android App

Open Event Organizer is an Android application for event organizers and entry managers which uses the Open Event API Server as a backend. The core feature of the app is scanning a QR code to validate an attendee’s check in. The app maintains a local database and syncs it with the server. The basic check in workflow is: the app scans a QR code on an attendee’s ticket, the scanned code is processed to look up the attendee in the locally maintained attendees database, and once the attendee is found, the app makes a check in status toggling request to the server. The server toggles the status of the attendee and sends back a response containing the updated attendee data, which is then updated in the local database. All of this works well as long as the app has a good network connection, which cannot be assumed, as the network can go down at the event site. So, to support this functionality even in the absence of a network, the Orga App uses job schedulers which hold the requests while the network is absent and make them once the network is available again. I will be talking about its implementation in the app in this blog.

The app uses the Android-Job library developed by Evernote, which handles jobs in the background. The library provides a JobManager class which does most of the work; a singleton of this class is initialized in the Application class. Job is the class where a background task is actually implemented. There can be more than one job in the app, hence the library requires implementing the JobCreator interface, whose create method takes a string tag and returns the relevant Job. The JobCreator is passed to the JobManager in the Application class during initialization. The relevant code is:

JobManager.create(this).addJobCreator(new OrgaJobCreator());

Initialization of JobManager in Application class

public class OrgaJobCreator implements JobCreator {
   @Override
   public Job create(String tag) {
       switch (tag) {
           case AttendeeCheckInJob.TAG:
               return new AttendeeCheckInJob();
           default:
               return null;
       }
   }
}

Implementation of JobCreator

public class AttendeeCheckInJob extends Job {
   ...
   ...
   @NonNull
   @Override
   protected Result onRunJob(Params params) {
       ...
       ...
       Iterable<Attendee> attendees = attendeeRepository.getPendingCheckIns().blockingIterable();
       for (Attendee attendee : attendees) {
           try {
               Attendee toggled = attendeeRepository.toggleAttendeeCheckStatus(attendee).blockingFirst();
               ...
           } catch (Exception exception) {
               ...
               return Result.RESCHEDULE;
           }
       }
       return Result.SUCCESS;
   }

   public static void scheduleJob() {
       new JobRequest.Builder(AttendeeCheckInJob.TAG)
           .setExecutionWindow(1, 5000L)
           .setBackoffCriteria(10000L, JobRequest.BackoffPolicy.EXPONENTIAL)
           .setRequiredNetworkType(JobRequest.NetworkType.CONNECTED)
           .setRequirementsEnforced(true)
           .setPersisted(true)
           .setUpdateCurrent(true)
           .build()
           .schedule();
   }
}

Job class for attendee check in job

To create a Job, these two methods are overridden. onRunJob is where the actual background job runs; this is the place where you implement the job logic that should run in the background. In this method, the attendees with pending sync are fetched from the local database and the network requests are made. On failure, the same job is scheduled again, and the process goes on until the job is done. The scheduleJob method is where the related job options are set; it is used to schedule an incomplete job.

With this implementation, the workflow described above changes. Now, when an attendee is found, they are updated in the local database before making any request to the server and flagged as pending sync. Accordingly, a single tick is shown in the UI for an attendee pending sync with the server. Once the request is made to the server and the response is received, the pending sync flag of the attendee is removed and a double tick is shown against the attendee.

Links:
1. Documentation for Android-Job Library by evernote
2. Github Repository of Android-Job Library

Copying Event in Open Event API Server

The Event Copy feature of the Open Event API Server provides the ability to create an exact copy of an event with just one API call. This feature creates the complete copy of the event by copying the related objects as well, like tracks, sponsors, micro-locations, etc. The API is based on a simple method: an object is first removed from the current DB session and make_transient is applied to it. The next step is to remove the unique identifying columns like “id” and “identifier”, generate a new identifier and save the new record. The process seems simple, but becomes a little more complex when you have to generate copies of the associated media files and of multiple related objects, while ensuring that no orders, attendees or access_codes relations are copied.

Initial Step

The first step in copying the event is to get the event object and all the related objects.

identifier = 'identifier'
if view_kwargs.get('identifier').isdigit():
    identifier = 'id'

event = safe_query(db, Event, identifier, view_kwargs['identifier'], 'event_'+identifier)

The next thing is to get all the objects related to this event.
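
For example, the related objects can be fetched through their foreign key to the event, roughly along these lines (the model names here are illustrative, not an exhaustive list):

tickets = Ticket.query.filter_by(event_id=event.id).all()
tracks = Track.query.filter_by(event_id=event.id).all()
sponsors = Sponsor.query.filter_by(event_id=event.id).all()
microlocations = Microlocation.query.filter_by(event_id=event.id).all()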

Creating the new event

After removing the current event object from db.session, we need to remove its “id” attribute and regenerate the “identifier” of the event.

db.session.expunge(event)  # expunge the object from session
make_transient(event)
delattr(event, 'id')
event.identifier = get_new_event_identifier()
db.session.add(event)
db.session.commit()

Updating related objects with the new event

The new event created has a new “id” and “identifier”. This new “id” is set on the foreign key columns of the related objects, thus establishing their relationship with the newly created event.

for ticket in tickets:
   ticket_id = ticket.id
   db.session.expunge(ticket)  # expunge the object from session
   make_transient(ticket)
   ticket.event_id = event.id
   delattr(ticket, 'id')
   db.session.add(ticket)
   db.session.commit()

Finishing up

The step of updating related objects is repeated for all the related objects to create the copy. Thus a new event is created, with all related objects copied, through a single endpoint.
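
A generic sketch of that repeated step could look like this (a hypothetical helper, not the actual server code):

def copy_related_objects(objects, new_event_id):
    """Copy each related object and attach it to the newly created event."""
    for obj in objects:
        db.session.expunge(obj)   # detach the object from the current session
        make_transient(obj)       # so it can be re-inserted as a new row
        obj.event_id = new_event_id
        delattr(obj, 'id')        # let the database assign a fresh primary key
        db.session.add(obj)
    db.session.commit()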

References

How to clone a sqlalchemy object
https://stackoverflow.com/questions/28871406/how-to-clone-a-sqlalchemy-db-object-with-new-primary-key

Reset password in Open Event API Server

The addition of the reset password API in the Open Event API Server enables the user to send a forgot password request to the server so that they can reset their password. Resetting the password is a two step process. The first endpoint allows you to request a token to reset the password, and this token is sent to the user via email. The second step is making a PATCH request with the token and the new password to set the new password on the user’s account.

Creating a Reset token

This endpoint is not a JSON API spec based endpoint. A reset token is simply a string generated from random bits which is stored in a specific column of the users table.

hash_ = random.getrandbits(128)
self.reset_password = str(hash_)

Once the user has finished resetting the password using a specific token, the old token is flushed and a new token is generated, so each token can be used only once.
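
For instance, after a successful reset the stored token can simply be regenerated so that the old link stops working; a sketch using the same helpers shown in this article:

user.password = new_password                         # the new password supplied by the user
user.reset_password = str(random.getrandbits(128))   # rotate the token after use
save_to_db(user)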

Requesting a Token

A token can be requested at the endpoint POST /v1/auth/reset-password. An email containing the token as part of a direct link is then sent to the registered email address.

link = make_frontend_url('/reset-password', {'token': user.reset_password})
send_email_with_action(user, PASSWORD_RESET,     
                       app_name=get_settings()['app_name'], link=link)

Flow with frontend

The flow is broken into two steps, with the frontend talking to the backend. When the user clicks on “forgot password”, they are redirected to the reset password page in the frontend, which calls the API endpoint in the backend with the email address so that the token can be sent. The email received contains a frontend URL which, when clicked, redirects the user to the frontend page for entering the new password. The new password along with the token is then sent to the API server by the frontend, which completes the password reset.

Updating Password

After clicking the link in the email, the user is asked to provide the new password. This password is sent to the endpoint PATCH /v1/auth/reset-password; the request body carries the token and the new password. The user is identified using the token and the password is updated for that user.
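
For illustration only, a client request might look roughly like this (the payload field names and URL are assumptions, not taken from the server code):

import requests

requests.patch(
    "https://api.example.com/v1/auth/reset-password",
    json={
        "data": {
            "token": "<token received in the email>",    # illustrative field names
            "password": "my-new-password"
        }
    }
)

On the server side, the token is used to look up the user and the password is updated: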

try:
   user = User.query.filter_by(reset_password=token).one()
except NoResultFound:
   return abort(
       make_response(jsonify(error="User not found"), 404)
   )
else:
   user.password = password
   save_to_db(user)

References

  1. Understand Self-service reset password
    https://en.wikipedia.org/wiki/Self-service_password_reset
  2. Python – getrandbits()
    https://docs.python.org/2/library/random.html

Post Payment Charging in Open Event API Server

The order flow in the Open Event API Server follows a very simple process. On successfully creating an order through the API server, the user receives a payment-url. The API server provides out of the box support for two payment gateways, Stripe and PayPal.
The process is straightforward: on creating the order you receive the payment-url, the frontend completes the payment through that url, and on completion it hits a specific endpoint which confirms the payment and updates the order status.

Getting the payment-url

The payment url is sent on successfully creating the order. There are three payment modes that can be provided to the Order API when creating an order: “free”, “stripe” and “paypal”. To get the payment-url, just send the payment mode as stripe or paypal.

POST Payment

After the payment is processed through the frontend, the charges endpoint is called so that payment verification can be done on the server side.

POST /v1/orders/<identifier>/charge

This endpoint receives the stripe token if the payment mode is stripe; no token is required to process a paypal payment.
The response contains the order details on successful verification.
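
For example, a charge request for a stripe order might look roughly like this; the payload shape is only an assumption following the JSON API conventions used elsewhere in the server, and the URL is illustrative:

import requests

requests.post(
    "https://api.example.com/v1/orders/<identifier>/charge",
    json={
        "data": {
            "type": "charge",
            "attributes": {
                "stripe": "tok_visa"  # token returned by Stripe on the frontend
            }
        }
    },
    headers={"Content-Type": "application/vnd.api+json"}
)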

Implementation

The implementation of charging is based on a custom data layer in the Orga Server. The custom layer overrides the base data layer and provides a custom implementation of the “create_object” method, instead of using the default Alchemy layer.

def create_object(self, data, view_kwargs):
   order = Order.query.filter_by(id=view_kwargs['id']).first()
   if order.payment_mode == 'stripe':
       if data.get('stripe') is None:
           raise UnprocessableEntity({'source': ''}, "stripe token is missing")
       success, response = TicketingManager.charge_stripe_order_payment(order, data['stripe'])
       if not success:
           raise UnprocessableEntity({'source': 'stripe_token_id'}, response)

   elif order.payment_mode == 'paypal':
       success, response = TicketingManager.charge_paypal_order_payment(order)
       if not success:
           raise UnprocessableEntity({'source': ''}, response)
   return order

With the resource class as

class ChargeList(ResourceList):
   methods = ['POST', ]
   schema = ChargeSchema

   data_layer = {
       'class': ChargesLayer,
       'session': db.session
   }

Resources

  1. Paypal Payments API
    https://developer.paypal.com/docs/api/payments/
  2. Flask-json-api custom layer docs
    http://flask-rest-jsonapi.readthedocs.io/en/latest/data_layer.html#custom-data-layer
  3. Stripe Payments API
    https://stripe.com/docs/charges

Implementing Users API to Display the Users at Admin Route in Open Event Frontend

This article will illustrate how users, their roles, user links etc. are displayed and updated on the /admin/users route using the users API in Open Event Frontend. The primary endpoint of the Open Event API with which we are concerned for fetching the users is

GET /v1/users

First, we need to create a model for the user, which will have the fields corresponding to the API, so we proceed with the ember CLI command:

ember g model user

Next, we need to define the model according to the requirements. The model needs to extend the base model class. As a user can have multiple notifications, orders, sessions etc., we have to use the ember data relationship “hasMany”. Hence, the model will have the following format.

import attr from 'ember-data/attr';
import ModelBase from 'open-event-frontend/models/base';
import { hasMany } from 'ember-data/relationships';

export default ModelBase.extend({
  email        : attr('string'),
  password     : attr('string'),
  isVerified   : attr('boolean', { readOnly: true }),
  isSuperAdmin : attr('boolean', { readOnly: true }),
  isAdmin      : attr('boolean', { readOnly: true }),
  firstName    : attr('string'),
  lastName     : attr('string')
});

The complete code for the model can be seen here

Now, we need to load the data from the API using the above model, so we will send a GET request to the API to fetch the users. This can be easily achieved using this.

return this.get('store').query('user', {'page[size]': 10 });

The above line queries the users from the store, which is the place where all the records that have been loaded by our application are cached. If a route asks for a record, the store can return it immediately if it is in the cache. Since we want to display only 10 users per page, we also specify how many users should be loaded at a time.

Now we need to filter the users based on whether they are active or they have deleted their accounts. For this purpose, we need to pass a filter to the query which tells what type of users should be loaded at once.

The next thing we need to do is display the data fetched from the API in an ember table. Ember table allows us to render very large data sets by only rendering the rows that are being displayed. For this, we define a controller class which lets the table know which columns are required for display and which attributes of the API they correspond to. We can also define the template for each column. The code for the controller class looks like this.

import Ember from 'ember';

const { Controller } = Ember;
export default Controller.extend({
  columns: [
    {
      propertyName     : 'first-name',
      title            : 'Name',
      disableSorting   : true,
      disableFiltering : true
    },
    {
      propertyName     : 'email',
      title            : 'Email',
      disableSorting   : true,
      disableFiltering : true
    },
    {
      propertyName     : 'last-accessed-at',
      title            : 'Last Accessed',
      template         : 'components/ui-table/cell/admin/users/cell-last-accessed-at',
      disableSorting   : true,
      disableFiltering : true
    }
  ]
});

In the above code, we can see a field called ‘disableSorting’, which is true if we don’t want to sort the table based on that column. Since we want the last-accessed-at column to be customized, we have separately added a template for the column which defines how it will look. The complete code for the other columns of the table can be found here.

Now to display the ember table we will write the following code.

{{events/events-table columns=columns data=model
    useNumericPagination=true
    showGlobalFilter=true
    showPageSize=true
}}

In the above piece of code, we are calling the same ember table as we used in the case of events, to reduce code duplication. We pass in the columns and data, which remain unique to the table. We also ensure that our page shows how much data we fetch at one go and allows filtering the table based on the columns.

The UI of the users page for the above code snippets look like this.

Fig 1: The UI of the users table under admin/users route

The entire code for implementing the users API can be seen here.

To conclude, this is how we efficiently fetched the users using the Open-Event-Orga users API, ensuring that there is no unnecessary API call to fetch the data and no code duplication using the same ember table again.

Implementing Sessions API for the event in Open Event Frontend

This article will illustrate how the sessions available for a particular event are displayed and updated on the events/{event_id}/sessions route using the sessions API in Open Event Frontend. The primary endpoint of the Open Event API with which we are concerned for fetching the sessions is

GET /v1/sessions/{session_id}

First, we need to create a model for the sessions, which will have the fields corresponding to the API, so we proceed with the ember CLI command:

ember g model session

Next, we need to define the model according to the requirements. The model needs to extend the base model class. As a session can have multiple speakers and always belongs to an event, we have to use the ember data relationships “hasMany” and “belongsTo”. Hence, the model will have the following format.

import attr from 'ember-data/attr';
import ModelBase from 'open-event-frontend/models/base';
import { belongsTo, hasMany } from 'ember-data/relationships';

export default ModelBase.extend({
  title         : attr('string'),
  subtitle      : attr('string'),

  speakers      : hasMany('speaker'),
  event         : belongsTo('event')
});

Complete code for the model can be seen here

Now, we need to load the data from the API using the above model, so we will send a GET request to the API to fetch the sessions corresponding to a particular event. This can be easily achieved using this.

return this.modelFor('events.view').query('sessions');

The above line gets the current model for the events.view route and queries for the sessions property of that model.

Now we need to filter the sessions based on their state – whether they have been accepted, confirmed, pending or rejected – and display them on different pages. For this purpose, we need to pass a filter and pagination options to the query, which tell what type of sessions, and how many, should be loaded at once. Also, we need to display the speakers associated with a session and the event details. For this case, the above query is formatted like this.

return this.modelFor('events.view').query('sessions', {
  include      : 'event,speakers',
  filter       : filterOptions,
  'page[size]' : 10
});

In the above query, the filterOptions are designed in such a way which check for what type of sessions user is querying for. The code can be found here.

The next thing we need to do is display the data fetched from the API in an ember table. For this, we need a controller class which lets the table know which columns are required for display and which attributes of the API they correspond to. We can also define the template for each column. The code for the controller class looks like this.

export default Controller.extend({
  columns: [
    {
      propertyName   : 'state',
      title          : 'State',
      disableSorting : true,
      template       : 'components/ui-table/cell/events/view/sessions/cell-session-state'
    },
    {
      propertyName : 'title',
      title        : 'Title'
    },
    {
      propertyName   : 'speakers',
      template       : 'components/ui-table/cell/cell-speakers',
      title          : 'Speakers',
      disableSorting : true
    }
  ]
});

In the above code, we can see a field called ‘disableSorting’, which is true if we don’t want to sort the table based on that column. Since we want the state column to be customized, we have separately added a template for the column which defines how it will look. The complete code for the other columns of the table, apart from state, title and speakers, can be found here.

Now to display the ember table we will write the following code.

{{events/events-table columns=columns data=model
    useNumericPagination=true
    showGlobalFilter=true
    showPageSize=true
}}

In the above piece of code, we are calling the same ember table as we used in the case of events, to reduce code duplication. We pass in the columns and data, which remain unique to the table. We also ensure that our page shows how much data we fetch at one go and allows filtering the table based on the columns.

The UI of the sessions page for the above code snippets look like this.

Fig 1: The UI of the session table under events/{event_id}/session route

The entire code for implementing the sessions API can be seen here.

To conclude, this is how we efficiently fetched the sessions details using the Open-Event-Orga sessions API, ensuring that there is no unnecessary API call to fetch the data and no code duplication using the same ember table again.

Adding Service Workers In Generated Event Websites In Open Event Webapp

var urlsToCache = [
  './css/bootstrap.min.css',
  './offline.html',
  './images/avatar.png'
];
self.addEventListener('install', function(event) {
  event.waitUntil(
    caches.open(CACHE_NAME).then(function(cache) {
      return cache.addAll(urlsToCache);
    })
  );
});

All the other assets are cached lazily. Only when they are requested do we fetch them from the network and store them in the cache. This way, we avoid caching a large number of assets at the install step. Caching too many files in the install step is not recommended, since if any of the listed files fails to download and cache, the service worker won’t be installed!

self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request).then(function(response) {
      // Cache hit - return response
      if (response) { return response; }
      // Fetch resource from the network and put it into the cache
      var fetchRequest = event.request.clone();
      return fetch(fetchRequest).then(function(response) {
        var responseToCache = response.clone();
        caches.open(CACHE_NAME).then(function(cache) {
          cache.put(event.request, responseToCache);
        });
        return response;
      }).catch(function(err) {
        // Return fallback page when user is offline and resource is not cached
        if (event.request.headers.get('Accept').indexOf('text/html') !== -1) {
          return caches.match('./offline.html');
        }
      });
    })
  );
});

One thing we need to keep in mind is that the website content should not become stale: it might happen that the website is updated but the content shown is outdated, because every request is served from the old cache. We need to delete the old cache and install a new service worker when the content of the site is updated. To deal with this issue, we calculate a hash of the event website folder after the site has been generated. This hash value is used as the name of the cache inside which we keep the cached resources. When the content of the event website changes, the value of the hash changes. Due to this, a new service worker (containing the new value of the hash) is installed. After the currently open pages of the website are closed, the old service worker is killed and the activate event of the new service worker is fired. During this activate event, we clear the old cache containing the outdated content.

// Hash of the event website folder
var CACHE_NAME = 'Mtj5WiGtMtzesubewqMtdGS9wYI=';
self.addEventListener('activate', function(event) {
  event.waitUntil(caches.keys().then(function(cacheNames) {
    return Promise.all(cacheNames.map(function(cacheName) {
      if (cacheName !== CACHE_NAME) {
        console.log('Deleting cache ' + cacheName);
        return caches.delete(cacheName);
      }
    }));
  }));
});

Here are some screenshots showing service workers in action:

  • When the site is updated, the outdated cache contents are cleared

  • On visiting a page which has not yet been cached, its contents are placed into cache for utilization on future visits

  • On visiting the cached page again, the assets will be served from the cache directly.

  • The page will comfortably load even on offline connections

  • On requesting a page which has not yet been cached, we show a custom fallback page with a message
