Automated evaluation system for Full Stack Development DevOps assignments. Provisions AWS EC2 instances per student, SSHes in to run grading checks, and aggregates results into a final grades CSV.
## Workflow

- **Provision**: Terraform creates N EC2 instances (Ubuntu 24.04, t3.medium) on AWS
- **Evaluate**: `run_eval.py` SSHes into each instance, uploads `eval_machine.sh`, and runs 15 checks
- **Aggregate**: results are saved per run; `best_scores.py` picks the best score per student
- **Cleanup**: `restrict.py`, `logout.py`, `cleanup.py`, or `destroy.sh` to wind down
## Prerequisites

- Python 3 + `paramiko`
- Terraform
- AWS CLI configured with profile `nst`
## Setup

```sh
python3 -m venv venv
source venv/bin/activate
pip install paramiko
```

## Provision

```sh
./provision.sh batch-1
```

Terraform outputs credentials to `student_credentials.csv`. Populate `students.csv` from this file.
## Evaluate

```sh
# Evaluate specific students: list URNs in eval_targets.txt
python run_eval.py

# Or evaluate everyone
python run_eval.py --all
```

Results land in `results/run_<timestamp>/grades.csv`.
## Aggregate

```sh
python best_scores.py
```

Writes `final_grades.csv` with the highest score per student across all runs.
## Cleanup

```sh
python restrict.py   # Change passwords to "restricted"
python logout.py     # Log out all sessions
python cleanup.py    # Remove eval scripts from instances
./destroy.sh         # Tear down all EC2 instances
```

## Checks

| # | Check |
|---|---|
| 01 | Docker installed |
| 02 | Docker daemon running |
| 03 | System packages up to date |
| 04 | Exactly 2 containers running (app + MongoDB) |
| 05 | MongoDB container using mongo:7 image |
| 06 | App container bound to host port 80 |
| 07 | HTTP 200 from localhost and public IP |
| 08 | Built React frontend at / (not dev server) |
| 09 | GET /api/users returns 200 + JSON |
| 10 | MongoDB /data/db on persistent volume |
| 11 | POST /api/users creates a user |
| 12 | Created user readable via GET |
| 13 | MongoDB port 27017 NOT exposed on host |
| 14 | SPA routing via Express wildcard |
| 15 | App and MongoDB on shared Docker network |
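For illustration, the container checks (04 and 05) could be implemented by parsing `docker ps` output. The format string and the pass/fail logic below are assumptions about how `eval_machine.sh` might do it, not its actual code.

```python
def check_containers(ps_lines: list[str]) -> bool:
    """Checks 04/05: exactly two running containers, one using mongo:7.

    `ps_lines` is assumed to be the output of
    `docker ps --format '{{.Names}} {{.Image}}'`, one container per line.
    """
    rows = [line.split(maxsplit=1) for line in ps_lines if line.strip()]
    if len(rows) != 2:
        return False  # check 04: exactly 2 containers (app + MongoDB)
    images = [image for _, image in rows]
    # Check 05: one of the two must run a mongo:7 image.
    return any(image.startswith("mongo:7") for image in images)
```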
An ungraded integrity check flags git commits made before the lab start time.
## Results Layout

```
results/
└── run_20260426_HHMMSS/
    ├── grades.csv   # Per-student results for this run
    ├── raw/         # Raw eval_machine.sh output (URN.txt)
    └── errors/      # SSH error logs (URN.log)
```
## Key Files

| File | Purpose |
|---|---|
| `students.csv` | Student info + EC2 credentials (gitignored) |
| `eval_targets.txt` | URNs to evaluate in the next run |
| `final_grades.csv` | Aggregated best scores (gitignored) |
| `install-docker.sh` | Docker setup script provided to students |
| `INVIGILATOR_TOOLS.md` | Proctor instructions and mark deduction guide |
## Infrastructure

- Region: `ap-south-1`
- Instance: `t3.medium`, Ubuntu 24.04 LTS, 25 GB root volume
- Default count: 125 instances
- Open ports: 22, 80, 443, 3000, 8080
## Contributing

Feel free to raise pull requests.
Created with <3 by Krushn Dayshmookh at Newton School of Technology, Pune.