The Spark Challenge Student Check-In/Out Tracking Script

Overview

This Python script matches each ID card swipe against the student ID database and records the verified entries along with their swipe times.

Big thanks to Claire Poukey!!

The verification file is not included in this repository because it contains sensitive and confidential information about students.
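
As a rough illustration, a verified swipe could be checked along the lines of the sketch below. This is only a sketch: it assumes verification.txt holds one student ID per line, which may not match the real file's format.

    # Hypothetical sketch: load the ID database and validate a swipe.
    # Assumes verification.txt contains one student ID per line.
    def load_verified_ids(path="verification.txt"):
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    def is_verified(puid, verified_ids):
        return puid in verified_ids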

What Do You Need To Run This Script?

  • Python 3
  • ID number database (verification.txt)
  • Magnetic stripe card reader

How Do You Run This Script?

  1. Clone the script to your machine.
  2. Connect the magnetic stripe card reader to your machine.
  3. Run make check-in in a terminal.
  4. Swipe the card once to check in.
  5. Swipe the card a second time to check out.
  6. When you are done with all entries, exit the program safely by entering exit at the SWIPE PUID prompt.
  7. Run the attendance script with make attendance.

Note that the script stops recording after two swipes; entries must be deleted manually from the log file.
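
For illustration, the two-swipe behavior could look roughly like the sketch below. It assumes the card reader acts as a keyboard, so each swipe arrives on standard input at the SWIPE PUID prompt, and it assumes a PUID,timestamp log format; the actual checkin.py may differ.

    import datetime

    # Hypothetical sketch of the check-in loop; not the actual checkin.py.
    swipes = {}  # PUID -> number of swipes seen so far

    while True:
        puid = input("SWIPE PUID: ").strip()
        if puid == "exit":                  # safe exit, as in step 6 above
            break
        count = swipes.get(puid, 0)
        if count >= 2:                      # ignore swipes after check-out
            print(f"{puid} already checked out; swipe ignored")
            continue
        swipes[puid] = count + 1
        with open("checkin.log", "a") as log:  # assumed PUID,timestamp format
            log.write(f"{puid},{datetime.datetime.now().isoformat()}\n")
        print(f"{'check-in' if count == 0 else 'check-out'} recorded for {puid}")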

What Does This Script Do?

  1. Records the check-in time on the first card swipe
  2. Records the check-out time on the second card swipe
  3. Ignores any entries after the second card swipe
  4. Takes the difference between the check-in and check-out times to determine how long the student stayed at the event
  5. Prints the ID card number and the time stayed to a CSV file (a rough sketch follows this list)
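
The attendance step could be sketched roughly as below. The log format used here (one PUID,ISO-timestamp pair per line) and the file names are assumptions; the real attandance.py may store its data differently.

    import csv
    import datetime

    # Hypothetical sketch of the attendance step; file names and log
    # format are assumptions, not the repository's actual layout.
    def write_attendance(log_path="checkin.log", out_path="attendance.csv"):
        times = {}  # PUID -> list of swipe times
        with open(log_path) as log:
            for line in log:
                puid, stamp = line.strip().split(",")
                times.setdefault(puid, []).append(
                    datetime.datetime.fromisoformat(stamp))
        with open(out_path, "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["PUID", "minutes_stayed"])
            for puid, stamps in sorted(times.items()):
                if len(stamps) >= 2:    # needs both check-in and check-out
                    stayed = stamps[1] - stamps[0]
                    writer.writerow([puid, round(stayed.total_seconds() / 60)])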

Parts of This Script:

  1. Check-in script (checkin.py)
  2. Attendance script (attandance.py)
  3. Makefile (to make things a little easier)
  4. Log files

Things to be Careful About:

  1. Running make clean clears the log files, so use it with caution.
  2. If the program crashes for any reason, you have to rerun the script manually with make check-in.
  3. You have to run make attendance manually after the event is over to process the data.

Things That Still Need Attention:

  1. Nothing. (To be updated)