Abstract

Many schools and universities still record classroom attendance with a traditional pen-and-paper method. This manual process is both highly time-inefficient and easy to cheat in classes with large enrollment. The purpose of this project is to develop a quick and secure system for recording attendance. The system must be easily deployable, scalable to a large number of attendees, and secure against spoofing attacks. We leverage Bluetooth Low Energy beacons and facial recognition technology to develop a smartphone application that analyzes and verifies students' credentials and location. Attendance records are managed on the AWS IoT platform, primarily with AWS Lambda and DynamoDB.

Background

1. Server and Database

The server-side setup is inspired by the design of other widely used systems; [1] is an implementation closely related to our system's requirements. For the management and storage of this attendance system, we plan to use a database, a database server, and a management server. The management server connects to the applications to receive submissions of student attendance information, such as the student ID and the unique code from the beacon. The database server mediates communication between the database itself and the management server.

2. Bluetooth Beacon

There have been several implementations of attendance systems that utilize card readers, NFC, Bluetooth, and smartphones, but none of them seem to strike the balance between convenience, scalability, and security that this project aims to achieve.

The method outlined in [2] presents a system where individual students are identified by unique Estimote sticker beacons. False attendances are tracked by comparing the number of students recorded against the number of students detected by an infrared motion analyzer. Scalability and deployability are poor with this system, since a pack of ten stickers retails for $99. The infrared counting method does not protect against physical body proxies and requires that exactly the expected number of people be in the room at any time; if that number changes, the system has to be manually corrected.

[3] uses a similar concept of taking a self-portrait to record attendance. However, the system is inconvenient: it uses only one device, which has to be passed around the classroom, and auditing the roll sheet is a manual operation performed by the instructor. That project digitizes the attendance process but offers no automation.

The approach outlined in [4] improves upon scalability and time efficiency by providing an application that students can install on their own smartphones. Students identify themselves by scanning their NFC identity cards either with their own device or with the instructor's terminal device. Presence in the classroom is determined by proximity to a Bluetooth beacon, similar to the approach we aim to take. Security, however, is not improved, since a student can simply leave their identification card for another student in the classroom to scan.

3. Facial Recognition

a. Traditional Facial Recognition methods:

Since 2012, OpenCV has shipped with packages that can be applied to facial recognition. The algorithms implemented are Eigenfaces, Fisherfaces, and Local Binary Patterns Histograms. OpenCV also provides implementations for both Android and iOS in their respective development environments [5]. OpenCV can accomplish the task we initially set, identifying whether the figures in two images are the same person, quite efficiently, given that images taken by the students will fall into roughly the same settings.

b. Combined with Machine Learning:

Dlib is an open source C++ library that provides machine learning algorithms we can use to build our own software. For our purposes, we want to combine it with facial recognition algorithms. According to tests run by the OpenFace team, the comparison completes roughly twice as fast as with traditional methods. Pre-trained models are available on Dlib's website [6], and we can train our own model if necessary.

4. Amazon Web Service

AWS IoT is a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices. IoT Core can support billions of devices and trillions of messages, and can process and route those messages to AWS endpoints and to other devices reliably and securely.

We use AWS Lambda to hold server-side logic without provisioning or managing specific servers. AWS Lambda replaces the "always active server" and automatically scales based on input volume.

Amazon DynamoDB is a fast and flexible NoSQL database service for all applications. It is a fully managed cloud database and supports both document and key-value store models.

An IAM role is similar to a user in that it is an AWS identity with permission policies that determine what the identity can and cannot do within AWS. In AWS, every component needs an IAM role to run or to communicate with other components.

Amazon Cognito is an identity service that grants mobile users temporary credentials tied to a proper IAM role in AWS. It scales to millions of users and supports sign-in with social identity providers such as Facebook, Google, and Amazon.

Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications running on AWS.

Assumptions

A cornerstone of our system design is scalability to a large number of students. We rely on the ubiquity of smartphones and require that every student who attends the class has one. Due to the limited scope and time constraints of this project, the initial release only supports Android devices; more specifically, users must run Android 5.0 or later. At least 72% of devices in the world meet this requirement.

Our design also assumes that students will always carry their phones during the class session. A student's credentials can only ever be registered on the application and server once, and submitting attendance results is only possible through the application installed on the student's phone. Therefore, a situation where a student forfeits access to their phone for the class duration is not expected.

The inspiration for this project comes from the EE297 Seminar at UCLA. In the seminar, not all people in the audience are registered students; our solution allows for guests, for whom registration and attendance are not required.

The last assumption is that a student would only attempt simple methods of cheating. These attacks would be analogous to students signing the names of other students in the traditional pen-and-paper attendance methods. Elaborate attacks such as decompiling the published Java code or repeating the broadcasted beacon signal to a remote location are not considered.

Approach and System Structure

System architecture diagram

Project Diagram

System Architectural Implementation

Our system design comprises four main components: a mobile application, a Bluetooth beacon, a database for storing attendance-related data (e.g., student IDs), and a cloud server platform to connect and link the other components. The mobile application is deployed on the students' smartphones and links each device to an individual student. A Raspberry Pi implements the iBeacon protocol and communicates with the application to determine a student's presence in a room. Amazon DynamoDB provides the NoSQL database where attendance records are maintained. AWS Lambda and Node.js work together to trigger IoT interactions with the application and beacon. This entire backend is managed on the AWS IoT cloud platform.

Methodology

Frontend

The Android application consists of three parts: face recognition, iBeacon detection, and AWS communication. The application first validates the user's identity against the stored face ID. During class, the application receives input from the server at random times to scan for the appropriate iBeacon signal; at the same time, the Raspberry Pi begins broadcasting that signal. True is only returned to the server if the identity was verified and the correct iBeacon message was received.

1. Face recognition

We implemented our facial recognition based on the algorithms developed in OpenCV. We decided to go with JavaCV, a wrapper developed by the Bytedeco group.

For security, no images are stored on the phone or uploaded to the cloud server. We implement facial recognition on a per-frame basis, making it real-time.

a. OpenCV vs JavaCV

Using JavaCV over OpenCV provides two advantages. First, OpenCV is developed in C++; in an Android application, native classes require the Java Native Interface to work, which is hard to debug and for others to extend. JavaCV wraps OpenCV in pure Java.

Second, our testing and research showed that JavaCV performs better on mobile devices such as Android phones: capturing and recognizing faces is faster in JavaCV than in native OpenCV.

b. Algorithm for Face Recognition
I. EigenFace and FisherFace:

EigenFace is an algorithm based on eigenvectors derived from many faces, which can be considered "standardized face ingredients": every face can be formed from these standardized vectors. While EigenFace captures the directions of greatest variance in facial appearance, its limitation is high sensitivity to lighting, pose, and other transforms of the face, so it requires strict control of the environment to work reliably. FisherFace, a further development of EigenFace, applies Linear Discriminant Analysis to find a subset of the eigenfaces that improves classification, at the cost of generality. However, FisherFace is still not robust enough for the everyday, everywhere use our application requires.

II. Local Binary Pattern Histogram:

Local Binary Pattern Histogram (LBPH) is the algorithm we use in our application. In our tests, this method produces by far the largest gap in confidence score between recognized and unrecognized faces, and is thus the most reliable in daily use.

2. iBeacon

An iBeacon message is essentially a special 31-byte instance of a Bluetooth Low Energy advertising message. The following is a typical example:

1E 02 01 1A 1A FF 4C 00 02 15 63 6F 3F 8F 64 91 4B EE 95 F7 D8 CC 64 A8 63 B5 00 00 00 00 C8

The first 10 bytes are the required headers for BLE messages and specifically distinguish this payload as iBeacon. The remaining 21 bytes are the iBeacon-specific message: a 16-byte UUID, 2-byte Major, 2-byte Minor, and 1-byte transmit power. The power can be calibrated depending on the size of the room; range is typically limited to about 100 m. The 20 bytes composed of the UUID, Major, and Minor are what our system randomly changes.
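To make the layout concrete, here is a minimal Python sketch that parses the example payload above into its fields, following the offsets just described (10-byte header, 16-byte UUID, 2-byte Major, 2-byte Minor, 1-byte power). The signed interpretation of the power byte is the usual iBeacon convention:

```python
import uuid

# The example advertisement from the text, as a hex string
RAW = ("1E 02 01 1A 1A FF 4C 00 02 15"
       " 63 6F 3F 8F 64 91 4B EE 95 F7 D8 CC 64 A8 63 B5"
       " 00 00 00 00 C8")

def parse_ibeacon(hex_str):
    data = bytes.fromhex(hex_str.replace(" ", ""))
    assert len(data) == 31, "iBeacon advertisements are 31 bytes"
    body = data[10:]  # skip the 10-byte BLE/iBeacon header
    return {
        "uuid": str(uuid.UUID(bytes=body[:16])),
        "major": int.from_bytes(body[16:18], "big"),
        "minor": int.from_bytes(body[18:20], "big"),
        # transmit power is a signed byte (two's complement), in dBm
        "tx_power": int.from_bytes(body[20:21], "big", signed=True),
    }

fields = parse_ibeacon(RAW)
```

For the example message, this yields UUID `636f3f8f-6491-4bee-95f7-d8cc64a863b5`, Major 0, Minor 0, and a measured power of -56 dBm (0xC8 as a signed byte).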

Commercial beacons were designed on the premise that the UUID would be fixed to a particular vendor or manufacturer while Major and Minor would change to represent the different locations where the beacons are deployed. Our design, however, uses a randomly changing UUID as security against spoofing an attendee's location, so commercially available beacons are not a good hardware platform for our implementation. Instead we use a Raspberry Pi, since its broadcast UUID can be changed easily and quickly through the Linux Bluetooth protocol stack. Since the Raspberry Pi also has internet connectivity, the UUID can easily be synced between the mobile application and the beacons through the AWS IoT server.

In this implementation, both the application and the Raspberry Pi subscribe to the same AWS topic. From the topic they receive an MQTT message, in JSON format, composed of five 32-bit integers. Each integer is then split into four bytes (two hexadecimal digits each). The first four integers thus become the UUID, and the last integer is the combined Major and Minor. The Raspberry Pi broadcasts this message while the application scans for the exact same message. Upon receiving a new UUID, the application has only 30 seconds to detect it before the scan times out. A value of true is published from the application back to the server if the scan is successful.
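The integer-to-bytes split above can be sketched as follows. This is an illustration, not the production code; it assumes big-endian packing and that the last integer carries Major in its high 16 bits and Minor in its low 16 bits, matching the description in the text:

```python
import struct
import uuid

def integers_to_beacon(ints):
    """Convert the five 32-bit integers from the MQTT payload into
    the iBeacon UUID string, Major, and Minor."""
    assert len(ints) == 5
    # First four integers: the 16 UUID bytes, big-endian
    uuid_bytes = b"".join(struct.pack(">I", n) for n in ints[:4])
    # Last integer: high 16 bits = Major, low 16 bits = Minor
    major, minor = struct.unpack(">HH", struct.pack(">I", ints[4]))
    return str(uuid.UUID(bytes=uuid_bytes)), major, minor

u, major, minor = integers_to_beacon(
    [0x636F3F8F, 0x64914BEE, 0x95F7D8CC, 0x64A863B5, 0x00010002])
```

With these illustrative inputs, the UUID reproduces the one in the example message, with Major 1 and Minor 2.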

3. AWS Connection

We use the AWS SDK for Android as the basis for connecting to the AWS IoT server. The AWS connection acts as a bridge between face recognition and iBeacon detection; communication between the phone and AWS IoT is over MQTT. When the user signs up, the recognition module calls the AWS connection to ask the server whether the student has already signed up. When the user signs in, the connection first talks to the server, and it subscribes to the "attendance poll" once it gets a successful response. During the random attendance period, the connection wakes when it gets a request from the server, then wakes the iBeacon scanner to perform the Bluetooth scan and returns the result to the server.

Backend

AWS General Graph

As mentioned above, the entire backend framework is integrated and developed on Amazon Web Services, including AWS IoT, AWS Lambda, Amazon DynamoDB, AWS Cognito, AWS IAM roles, and AWS CloudWatch. We implement our backend logic on AWS Lambda, connect our things with AWS IoT, monitor our actions on AWS CloudWatch, and, using AWS Cognito and IAM roles, define our functions, their event sources, and their permissions. The diagram below shows the backend workflow.

Reset Student Info →

For Each Seminar:

→ Initialize Session → Student Sign up → Student Sign in → Send UUID → Update Message Match Result → End Session

→ Course Final Grade Analysis

Before analyzing the workflow above, two major definitions in our AWS system need to be clarified:

Examples of the AWS CloudWatch Rules and AWS IoT Rules are shown in the images below: AWS IoT Rule Example AWS CloudWatch Rule Example

In addition, we need to define the tables in AWS DynamoDB, and the structure is shown below:

DynamoDB Table Design

Now we turn to the details of the workflow shown in the diagram above.

1. Student Information Reset

In this section, the lambda function StudentInfoReset is called by CloudWatch at the beginning of each quarter. It clears and resets the StudentInfo table in DynamoDB, ready for new student sign-ups in the new quarter.

2. Session Initialization

In this section, the lambda function InitializeSession is called by CloudWatch before each seminar class begins, and the ActiveDevicesList table is cleared and reset for the new seminar. Meanwhile, the CloudWatch times for triggering the SendUUID lambda function (to send the first UUID and major-minor) and the EndSession lambda function are scheduled.

3. Student Sign Up

In this section, the lambda function NormalSignUp is called by the mobile app through AWS IoT. The phone provides the student ID to the server in MQTT (Message Queuing Telemetry Transport) JSON format, and the server lets NormalSignUp check whether this is a duplicate sign-up against the StudentInfo table. If it is, the sign-up is ignored and the student is notified to sign in instead; if not, the student ID is recorded in the StudentInfo table with a freshly reset row.
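The duplicate-check decision can be sketched as follows. This is a simplified illustration with a plain dict standing in for the DynamoDB StudentInfo table; the real NormalSignUp function uses the AWS SDK, and the row fields shown are only indicative:

```python
def sign_up(student_info, student_id):
    """Mimic the NormalSignUp decision: ignore duplicates,
    otherwise create a row with default (reset) values."""
    if student_id in student_info:
        # Duplicate sign-up: ignore and tell the student to sign in instead
        return {"status": "duplicate", "action": "sign_in"}
    # New student: record the ID with a freshly reset row
    student_info[student_id] = {"attends": 0, "final_grade": False}
    return {"status": "registered", "action": "none"}

table = {}
first = sign_up(table, "123456789")   # new registration
second = sign_up(table, "123456789")  # duplicate, ignored
```

The second call leaves the table unchanged and tells the client to sign in instead, matching the server behavior described above.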

4. Student Sign In

In this section, the lambda function NormalSignIn is called by the mobile app through AWS IoT. The phone provides the student ID to the server in MQTT JSON format, and the server lets NormalSignIn check whether this is a duplicate sign-in against the ActiveDevicesList table. If it is, the server has the mobile end tell the student that they re-signed in successfully; if not, the student ID is added to the ActiveDevicesList table with a freshly reset row.

5. UUID Sending

In this section, the lambda function SendUUID is called by CloudWatch; recall that the first trigger time was set on CloudWatch when InitializeSession ran. When called, the server publishes a topic containing a randomly generated UUID and major-minor in MQTT JSON format on AWS IoT, as shown in the figure below, and the mobile end and the Raspberry Pi subscribe to the topic to receive the UUID and major-minor. SendUUID then schedules another random time within the seminar interval on CloudWatch for the next send. In our use case, SendUUID is called 5 times, i.e., the iBeacon is sent 5 times at random times to check attendance. The send count is stored as a global variable in the GlobalVar table, which helps monitor how many times the UUID and major-minor have been sent and stops the sending once the count reaches 5; this number can be changed to whatever the instructor requires.

UUID JSON Format
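One way SendUUID might generate and serialize its payload is sketched below. The top-level key name ("ints") is illustrative, not the actual field name in our JSON format; the essential point is five random 32-bit integers per message:

```python
import json
import secrets

def make_uuid_message():
    """Generate five random 32-bit integers (four for the UUID,
    one for the combined Major/Minor) as a JSON MQTT payload."""
    ints = [secrets.randbits(32) for _ in range(5)]
    return json.dumps({"ints": ints})

msg = make_uuid_message()
payload = json.loads(msg)
```

Using a cryptographic source (`secrets`) rather than a plain PRNG is a sensible choice here, since the unpredictability of the UUID is what makes spoofing hard.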

6. Message Match Result Update

In this section, the lambda function UpdateTimeSlotResult is called by the mobile app through AWS IoT. After the mobile end checks the received UUID and major-minor against what the Raspberry Pi broadcasts, it submits a boolean indicating whether the message matched (True or False), along with its student ID, to the server. If it is True (the default is False), the server lets UpdateTimeSlotResult update the time_stamps column in the ActiveDevicesList table in DynamoDB, which stores how many matched True events occurred, and the true_count column, which counts the Trues for that student, is updated as well.

7. Session End

In this section, the lambda function EndSession is called by CloudWatch at the time determined when the InitializeSession lambda function ran. EndSession pushes the CloudWatch trigger for SendUUID to a time far past the next seminar, until InitializeSession runs again and resets it. In addition, EndSession scans and analyzes the time_slots results from the ActiveDevicesList table to finalize the record of student attendance for that seminar in StudentInfo. In our use case, if a student's true_count in ActiveDevicesList is 4 or greater (5 total, 1 tolerance), True is added to the seminars column, which stores the list of per-seminar attendance results (default False), and the attends column in the StudentInfo table is incremented by 1, meaning the student is regarded as having attended this seminar. Otherwise, the student is considered absent from the seminar.
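The attendance rule above reduces to a small threshold check, sketched here with illustrative true_count values (the real EndSession reads them from DynamoDB):

```python
def attended(true_count, total_checks=5, tolerance=1):
    """A student counts as present when at least
    total_checks - tolerance beacon checks returned True."""
    return true_count >= total_checks - tolerance

# Illustrative per-student true_count values for one seminar
results = {"alice": 5, "bob": 4, "carol": 3}
attendance = {sid: attended(c) for sid, c in results.items()}
```

With the default 5 checks and 1 tolerance, 4 or 5 Trues count as present and 3 or fewer as absent.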

8. Course Final Grade Analysis

In this section, the lambda function AnalyzeFinalGrade is called by the course instructor to make the final grading decision for the course. It scans the attends column in the StudentInfo table for each student and updates the final_grade column to True (default False) if the value of attends is, in our case, 9 or greater (10 total, 1 tolerance). Otherwise, a student whose final_grade column remains False is considered not to have satisfied the attendance requirement of the course.
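The scan-and-update step can be sketched the same way, again with a dict standing in for the StudentInfo table rather than the real DynamoDB scan:

```python
def analyze_final_grade(student_info, total_seminars=10, tolerance=1):
    """Mark final_grade True for every student whose attends count
    is at least total_seminars - tolerance (9 of 10 by default)."""
    for row in student_info.values():
        row["final_grade"] = row["attends"] >= total_seminars - tolerance
    return student_info

# Illustrative roster: attends counts accumulated over the quarter
roster = {"123": {"attends": 10}, "456": {"attends": 8}}
analyze_final_grade(roster)
```

Students left with final_grade False, like "456" here with 8 of 10 seminars, fail the attendance requirement.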

Discussion

Face Recognition

Our current facial recognition technique is purely mathematical, meaning all we obtain is a confidence score for each recognized face. We could improve the results with machine learning: for example, Dlib is an open source library that provides neural-network-based machine learning algorithms. Furthermore, TensorFlow now has official support on both iOS and Android; we could use TensorFlow for easier integration and to avoid native integration on Android and iOS.

Security Concerns

Security is a key design criterion for this project. We feel we have developed a system that is robust against spoofing attacks and protects personal information. One possible attack is a student leaving their phone with another student who will be in the class. This attack is deterred by the design requirement of verifying the student's identity at the start of the attendance session. What this design does not protect against is using an image of the student to verify identity: a liveness check is not yet implemented in the facial recognition. This attack, however, also violates our assumption that a student would not be separated from their phone for an extended period.

A rogue student may also attempt to work around this by re-registering multiple students on the same device. This is not possible because the application limits registration to a one-time process. Simply reinstalling the application might seem to circumvent this, but the server side has protection in place as well: once a student has been registered, the server does not allow multiple registrations with the same name and student ID number.

Another attack scenario is someone relaying the iBeacon UUID to a remote site. While UUIDs are generated randomly, their values can be easily read, since broadcast iBeacon messages are inherently public. The one defense our system has against this is limiting the beacon detection scan time on the application side: a counterfeit iBeacon UUID would need to be created and broadcast at the remote site within this time limit.

The students' personal data is also protected. The application does not store any pictures during training or recognition. All training pictures are deleted immediately after training, and all data related to the trained face model is stored in application-specific directories available only to the application, a protection inherent to the Android operating system. During the face recognition phase, data is pulled from live camera frames; no pictures are taken or sent.

Future Work

User Guide

  1. Upon installation and first run, the application will ask the user for permission to use the camera, external storage, Bluetooth, and the internet;
  2. Every user is allowed to register once. If any mistake is made in the process, please uninstall and reinstall (if the results recorded on the server have mistakes, please also contact the instructor for correction);

Main Page:

SignUp SignIn

Before you register, sign-in is disabled, as shown on the left; after you register, the register button is disabled, as shown on the right.

Register Page:

register

On the register page, please enter your information as prompted. No registration is allowed until all information is entered, including face registration. A warning is shown if an error occurs. You can use the system back button or the "BACK" button on the page to return to the main page. The "Register your Face" button takes you to the face recognition training page; the "Register" button sends the required information to the server to be recorded.

Register Your Face:

registerFace

Any recognized face will be marked with a green square and a label on top of it. At the bottom of the page, there are three buttons:

User can return to the registration page using the system back button.

registerSuccessful

Sign In:

signIn

On the sign-in page, there are two buttons available:

Server

If you want to use, develop, or re-deploy this class attendance system, you first need an AWS account.

Then create three tables in DynamoDB, named StudentInfo, GlobalVar, and ActiveDevicesList.

After creating the tables with the specific settings mentioned above, it is time to deploy the logic for your copy of the system on AWS. Our code is open source for you to implement on your side; please visit our repository on GitHub.

In this folder of the repository, you can find all the deployment packages (.zip format), including the code and libraries for the AWS services we use in this system (AWS IoT, AWS Lambda, Amazon DynamoDB, AWS Cognito, AWS IAM roles, and AWS CloudWatch), as well as AWS Serverless Application Model (SAM) files (.yaml format) that define the functions, event sources, and permissions among the AWS services.

With these crucial packages and files in hand, you can use CloudFormation to deploy and manage a similar application. Please click here to learn more about how to deploy an application with CloudFormation in your own AWS development environment.

Beacon

Using the Raspberry Pi as an iBeacon requires only two steps. The first is to run the initialize.sh script after the Raspberry Pi has booted; this brings up the Bluetooth device and enables advertising while disabling connections. The script needs to be run every time the Raspberry Pi boots. The second is to run the Python script send_uuid_job.py, which subscribes the Raspberry Pi to the AWS topic and waits to receive the randomly generated UUIDs. This script terminates after the duration of one class session; schedule it in cron to run at the beginning of every class session.
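For intuition, here is a sketch of what send_uuid_job.py might do after receiving a new UUID: format the `hcitool` command that writes the 31-byte iBeacon advertising data (HCI LE Set Advertising Data, OGF 0x08, OCF 0x0008) described in the iBeacon section. The exact command layout and default power byte are assumptions matching the example message in this document, not a copy of the actual script:

```python
def hcitool_command(uuid_hex, major, minor, power=0xC8):
    """Build the hcitool invocation that sets iBeacon advertising data
    for the given UUID/Major/Minor (sketch; assumes adapter hci0)."""
    body = bytes.fromhex(uuid_hex.replace("-", ""))        # 16-byte UUID
    body += major.to_bytes(2, "big") + minor.to_bytes(2, "big")
    body += bytes([power])                                  # 1-byte TX power
    # 10-byte BLE/iBeacon header from the example message, then a pad byte
    payload = bytes.fromhex("1E02011A1AFF4C000215") + body + b"\x00"
    octets = " ".join(f"{b:02X}" for b in payload)
    return f"hcitool -i hci0 cmd 0x08 0x0008 {octets}"

cmd = hcitool_command("636f3f8f-6491-4bee-95f7-d8cc64a863b5", 0, 0)
```

The script would run this command (e.g. via subprocess) each time a fresh UUID arrives over MQTT, so the beacon and the phones stay in sync.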

Note: this is targeted at the Raspberry Pi 3 and later. Older units require an additional USB Bluetooth 4.0 module as well as BlueZ, the official Linux Bluetooth protocol stack. All Raspberry Pi units also require an internet connection.

Application

The mobile application requires at least Android 5.0 as the operating system.

Demo

Part 1


Part 2


Timeline

Week 3

Week 4

Week 4 - 5

Week 6 - 7

Week 8

Week 9

Week 10

Timeline last updated December 15, 2017.

References

[1] S. A. M. Noor, N. Zaini, M. F. A. Latip and N. Hamzah, “Android-based attendance management system,” 2015 IEEE Conference on Systems, Process and Control (ICSPC), Bandar Sunway, 2015, pp. 118-122.

[2] R. Apoorv and P. Mathur, “Smart attendance management using Bluetooth Low Energy and Android,” 2016 IEEE Region 10 Conference (TENCON), Singapore, 2016, pp. 1048-1052.

[3] J. Iio, “Attendance Management System Using a Mobile Device and a Web Application,” 2016 19th International Conference on Network-Based Information Systems (NBiS), Ostrava, 2016, pp. 510-515.

[4] S. Noguchi, M. Niibori, E. Zhou and M. Kamada, “Student Attendance Management System with Bluetooth Low Energy Beacon and Android Devices,” 2015 18th International Conference on Network-Based Information Systems, Taipei, 2015, pp. 710-713.

[5] Docs.opencv.org. (2017). Face Recognition with OpenCV — OpenCV 2.4.13.4 documentation. [online] Available at: https://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_tutorial.html?highlight=facial%20recognition#id27 [Accessed 23 Oct. 2017].

[6] King, D. (2017). High Quality Face Recognition with Deep Metric Learning. [online] Blog.dlib.net. Available at: http://blog.dlib.net/2017/02/high-quality-face-recognition-with-deep.html [Accessed 23 Oct. 2017].