Being a fan of the FPS genre for nearly two decades, I could not let a game like Reflex get away from me. Reflex is a modern take on the classic Quake games, and a damn good one. The game is still in Early Access and far from done, but that does not mean you can't enjoy it. The developers have already come a long way and pushed out a lot of major updates, so if you feel like backing a solid project, this is the one!
A major feature that sets this game apart from the rest is the map editor. Instead of using a separate tool for building maps, you can edit levels in-game; it even works in multiplayer! With features like this you can expect a lot of maps, so members of the community eventually developed reflexfiles.com, a website where mappers can share their creations. However, the lack of an API at the time of writing can make it a real pain for server admins to keep their maps up to date. That's why I developed a bash script to make sure I always have the latest version of the maps my friends and I enjoy playing.
For a command line tool like this I would normally prefer something better suited to the task, say Ruby. But where's the challenge in that, when you can use proper libraries for actually parsing the HTML? Instead I wanted a simple bash script to do my bidding, using as few dependencies as possible. The dependencies that stand out the most are bash version 4, which is used for the associative arrays, and bsdtar, for its ability to read data through a pipe and extract on the fly.
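A minimal sketch of those two dependencies in action; the map ids and filenames are taken from the examples below, while the download URL is purely illustrative and not the script's actual endpoint:

```shell
#!/usr/bin/env bash
# Associative arrays require bash >= 4.
declare -A maps
maps[143]="dp5.map"
maps[157]="thct1.map"

for id in "${!maps[@]}"; do
  echo "map $id is ${maps[$id]}"
done

# bsdtar can read an archive straight from a pipe and extract on the
# fly, so no temporary file is needed (URL is illustrative only):
# curl -s "http://reflexfiles.com/download/143" | bsdtar -xf - -C maps/
```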
% ./reflexfilesget -h
Usage: reflexfilesget [-p path] [-l | -u] [-a id] [-r id]
Options:
  -p, --path <directory>   maps are located in <directory>
  -l, --list               display subscribed maps
  -a, --add <id>           add map
  -r, --remove <id>        remove map
  -u, --update             update all subscribed maps
  -q, --quiet              do not write messages on standard output
  -h, --help               display this
There is a default path set within the script, but it can be overridden. For the sake of demonstration I'll just start out in an empty directory.
% ./reflexfilesget -p test -a 143
:: Downloaded dp5.map (id:143)
Aiming for a minimalistic approach, the information about each file is stored as a symlink named .sub_<id>_<unixtime>-<uid> that points to a file with the .map extension. This way there's a timestamp and a unique id associated with each download. When checking for updates, the timestamps are compared first, and if they don't match (with a two minute margin) the download page is requested to see whether the unique id for the download still matches the local copy; if it doesn't, we can determine that there's been an update. The uid is needed because precision is lost when the site's timestamps go from "minutes ago" to "hrs ago"; without it, the script would just keep updating the timestamp and fetch the file endlessly if run often. With the -u parameter this information is scraped from the /files page, so we can make sure the files are kept up to date:
% ./reflexfilesget -p test -u
:: Downloaded dp5.map (id:143)
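The symlink bookkeeping described above can be sketched as follows; the timestamp and uid values are made up for the example, and the parsing relies on nothing but bash parameter expansion:

```shell
#!/usr/bin/env bash
# A hidden symlink .sub_<id>_<unixtime>-<uid> points at the .map file.
dir=$(mktemp -d)
ln -s "dp5.map" "$dir/.sub_143_1431000000-ab12cd"

# Parse the metadata back out of the link name.
link=$(basename "$dir"/.sub_*)
meta=${link#.sub_}      # 143_1431000000-ab12cd
id=${meta%%_*}          # 143
rest=${meta#*_}         # 1431000000-ab12cd
stamp=${rest%-*}        # 1431000000
uid=${rest##*-}         # ab12cd

echo "$id $stamp $uid -> $(readlink "$dir"/.sub_*)"
rm -rf "$dir"
```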
The script will also validate and sanitize user input:
% ./reflexfilesget -p test -a http://reflexfiles.com/file/157
:: Downloaded thct1.map (id:157)
% ./reflexfilesget -p test -a 157
error: File already exists
% ./reflexfilesget -p test -a non1digits2are9stripped
:: Downloaded thcdm13.map (id:129)
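One plausible way to get the behaviour shown above is to simply strip every non-digit character from the argument, so both a full URL and a noisy string reduce to a numeric id. This is only a sketch of the idea, not the script's actual implementation:

```shell
#!/usr/bin/env bash
# Reduce arbitrary input to a numeric id using parameter expansion
# only, no external tools (sketch of one possible approach).
sanitize() {
  local input=$1
  input=${input//[^0-9]/}   # drop every non-digit character
  echo "$input"
}

sanitize "http://reflexfiles.com/file/157"   # -> 157
sanitize "non1digits2are9stripped"           # -> 129
```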
To list files we keep track of:
$ ./reflexfilesget -p test -l
157 -> thct1.map
129 -> thcdm13.map
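Given the symlink scheme, a listing like the one above can be produced from the link names alone. A sketch, with made-up timestamps and uids:

```shell
#!/usr/bin/env bash
# Build an "-l" style listing from the hidden subscription symlinks.
dir=$(mktemp -d)
ln -s "thct1.map"   "$dir/.sub_157_1431000001-aa11bb"
ln -s "thcdm13.map" "$dir/.sub_129_1431000002-cc22dd"

listing=$(
  for link in "$dir"/.sub_*; do
    name=${link##*/}          # .sub_157_1431000001-aa11bb
    meta=${name#.sub_}        # 157_1431000001-aa11bb
    echo "${meta%%_*} -> $(readlink "$link")"
  done
)
echo "$listing"
rm -rf "$dir"
```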
I have made the script available for download over at my gist.