· 8 min read

Preamble

At this point, I think it's no secret that Winamp going "open-source" has been a bit of a shit-show.

The initial outrage on the repository was in response to Winamp's apparent lack of understanding of what open-source means: they claim that their license is "copyleft", while allowing neither redistribution of modified versions nor contributions to the official project, and initially not even forking! Someone did point out to them that they can't ban forking on GitHub, as that constitutes a violation of the TOS. So Winamp updated their license to allow forking, but it still doesn't allow anything else that would constitute "open-source" - redistribution, or even outside contributions.

A truly historic moment, ladies and gentlemen :)

In objective terms, this license falls into the category of "source-available" rather than open-source, and that is an important distinction to make. I have no problem with Winamp making their source available as a valuable case study, but they really shouldn't have used the term "copyleft" if that was the intention.

But we're here to discuss another, more glaring issue with the repository, one which might actually land a lawsuit at Winamp's door. And sadly, they can't really do anything about it anymore.

A rough overview of GitHub

In its simplest form, GitHub can be described as a web-hosted git server for the public. As a developer interested in open-source, you can create a git repository, write some incredible code that you AND your mum are really proud of, and share it with the world on GitHub, so everyone can appreciate your contribution to the democratisation of programming.

But GitHub is more than simply a place to host your git repository. It is also a collaboration platform, where anyone who feels like they like your code, and would like to change it until they love it, can "fork" your repository. Forking is in fact not a more aggressive version of spooning, but simply the act of making a clone of the original repository in your account. This forked repository contains all the history of the original - all the code, and all the commits that came before the fork.

Back to the beauty that is the Winamp GitHub repo

If you were to peruse the code on the Winamp repository in the first few days after its publication, you would have noticed some very intriguing things. For example, a BuildTools directory which included whole executables for 7zip, git, and other programs, just hanging around, designated as "build tools". And although very odd indeed, these things seem more stupid than harmful.

But there's much more! Inside the repo there was proprietary code from Dolby, some questionable SDKs which Winamp may or may not have the right to redistribute, an entire directory containing the commercial version of Qt, some (admittedly expired) credentials, and so on, and so forth.

Winamp did take some action

There are numerous pull requests deleting said files, and the maintainers did remove a lot, if not all, of the unlicensed code. And they did this... in public. You can clearly see many of the commits that delete stuff right there on the commits page. Which, you know, is a problem. If you open any of the commits listed there, you can simply look at the code that was deleted. If you clone the repository locally, you can move around the history and see the files as they were before removal. This public deletion without rebasing accomplishes nothing.
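As a minimal sketch, this is all it takes (the repository URL is the one the project used at the time; the checkout target is a placeholder):

# clone the repo and look around the pre-deletion history
git clone https://github.com/WinampDesktop/winamp.git
cd winamp
git log --diff-filter=D --summary   # show every file deletion recorded in the history
git checkout <deleting-commit>~1    # jump to just before a given deletion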

They deleted some of the commits

OK well, if you scroll through the commit history, you will notice that the commits that delete, for example, the Dolby code are nowhere to be found. But in the end, that doesn't really do anything either, since you can still just go back in history to a commit before the deletion, and see the code right there.

Does this count as distribution?

Ok, what can they do?

I mean, they could just rebase everything, which would theoretically get rid of the incriminating commit history. They'd need to get rid of the pull requests as well, since those contain code from specific points in history. And to be honest, at this point I'd just delete the repo from GitHub, clean up the codebase locally, and create the GitHub repo anew.
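For illustration, that cleanup could look something like the following sketch, using the third-party git-filter-repo tool (the directory path and branch name here are hypothetical):

# run on a fresh clone; git-filter-repo refuses to rewrite otherwise
git filter-repo --invert-paths --path Src/Plugins/Dolby
# filter-repo drops the origin remote as a safety net, so re-add it,
# then force-push the rewritten history over the public one
git remote add origin <repo-url>
git push --force origin master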

But really, even that would not be enough anymore.

A lesser-known "feature" of GitHub

Remember how I said that they deleted some of the commits from their history, so you can't simply see what code they removed? Well, even though that commit is gone from the git history, and you wouldn't see it even if you cloned the repository locally... you can still see it on GitHub.

You see, to facilitate collaboration through forking, GitHub introduces something called a "repository network". This network holds information about the "upstream" (parent) repository, all of its forks, and all the commits that belong to each fork.

Additionally, GitHub caches commits, so that they can be accessed by other repositories in the repository network. As explained by GitHub themselves, "GitHub stores the parent repository along with forks in a "repository network". It is a known behavior that objects from one network member are readable via other network members. Blobs and commits are stored together, while refs are stored separately for each fork. This shared storage model is what allows for pull requests between members of the same network.".

So, not only are the deleted commits potentially visible inside forks that grabbed them before they were deleted in the upstream repository. No, they are also just visible in general, as long as you know the commit hash. GitHub simply caches them, and you can just visit the page for that commit and see all the changes it made, and the entire code-base at that point in history.
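You can verify this behaviour on a repository you control: push a commit, rewrite it away with a force-push, and then ask GitHub for the commit page anyway. A sketch (OWNER, REPO, and FULL_SHA are placeholders):

# check whether GitHub still serves a commit that is gone from the branch history
curl -s -o /dev/null -w "%{http_code}\n" https://github.com/OWNER/REPO/commit/FULL_SHA
# a 200 response means the cached commit page is still reachable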

You can still see deleted commits on GitHub

So, even if you rebase your repository, the commit history is still there, cached and waiting. And as for all the forks that were made before the rebase - you can just peruse those anyway.

Now, to be fair - GitHub has an entire docs section about leaking sensitive data. It does mention that you can contact support and request that the cached commits be removed, if they deem that sensitive data has been exposed.

So, what if GitHub clears the cache, and we delete the repo?

Well, nothing. It has already been forked more than 2,600 times as of the time I'm writing this. It's out there on GitHub :) A lot of the forks were even made before anything was deleted from the repo, so you don't even have to rely on cached commit hashes. In the same docs page I linked above, GitHub themselves say that yeah - if someone has forked your repo, you're on your own, buddy.

If you remove the upstream repository of a repository network, GitHub simply designates another fork as the upstream one. And all of that history, and all those cached commits, remain accessible through any of the forked repositories.

Conclusion

I do find it ironic that the same collaborative feature Winamp tried to disallow us from using - forking their repo - is also the one which, in the end, will never let them escape the reality that they managed to leak confidential data in their repository.

But hey, what's done is done, and it did allow me to learn more about how GitHub works with its repository networks and whatnot, so I guess it is possible to learn from someone else's mistakes. I hope you learned something from this too, at the very least to be very careful with what you push to GitHub. And if you do make a mistake - just delete the repo before anyone forks it, and try again. QED

In terms of the Winamp repository, I am happy that they decided to share their codebase. Even though you literally can't do anything with it apart from reading it, it's still a nice case study. And I don't want them to regret going source-available, so I hope they don't run into any legal problems.

But on the other hand, I do wish they'd gone fully open-source. I mean, what are they protecting right now? Winamp is a big part of history - everyone has at least heard of it, if not used it. But nowadays it's just that: part of history. We've seen that, through collaboration and the openness of developers, a lot of software can live for a very long time. Winamp could become something modern and fully maintained if it allowed outside contribution. I do hope they take that next step at some point. After they clean up this mess of a repo...

· 7 min read

Motivation

I've been maintaining a dotfiles repository for a few years now. There, I keep configurations for all kinds of tools and applications from my development machines. It's great for maintainability and versioning, but keeping the dotfiles up to date can be a tedious task. But I've quite accidentally found a good and easy way to do it!

The three major options I've considered are:

  • Setting up the home directory as a --bare git repository
  • Symlinking every configuration manually
  • Using GNU Stow

The first option is the runt of the litter, and - from a storytelling standpoint, of course - the first one to discard.

If you set up your entire home directory as a repository, you have to be careful about exactly what to track and what not to. Basically, you'd need a .gitignore file that ignores everything except the things you want to track, and you'd have to update it constantly. It's very prone to accidentally adding something you don't want tracked and forgetting to exclude it in the .gitignore. Plus, a giant git repo in my home directory isn't really to my taste.
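For illustration, such a whitelist .gitignore would look something like this (the tracked paths are just examples):

# ignore everything in the home directory...
/*
# ...except the things we explicitly whitelist
!/.bashrc
!/.config/
/.config/*
!/.config/i3/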

Manual symlinking does solve those problems, but it can be quite complicated to automate and keep track of. Stow is an abstraction on top of symlinks that lets us automate symlink management and turn it into package management. Let's see how to set up your dotfiles repo to make use of stow.

Setup

Before using stow, my config files were laid out in a very simple way. Everything that was in the $HOME/.config/ directory was just copied into $HOME/code/dotfiles/.config/. Other configs that were just files or directories inside $HOME, I copied straight into the repo root $HOME/code/dotfiles/ - for example, $HOME/.bashrc became $HOME/code/dotfiles/.bashrc.

~/code/dotfiles
.
|-- .bashrc
|-- .config
|   |-- gtk-3.0
|   |   `-- [... files]
|   |-- i3
|   |   `-- [... files]
|   |-- nvim
|   |   |-- after
|   |   |   |-- ftplugin
|   |   |   |   `-- [... files]
|   |   |   `-- plugin
|   |   |       `-- [... files]
|   |   |-- init.lua
|   |   |-- lua
|   |   |   `-- kiroki
|   |   |       `-- [... files]
|   |   `-- plugin
|   |       `-- [... files]
|   `-- terminator
|       `-- [... files]
|-- i3blocks
|   `-- [... files]
`-- .local
    `-- share
        `-- fonts
            `-- [... files]

How stow likes it

Stow works more like (or exactly like) a package manager. We have to think of each configuration we manage as a package. So, instead of having a bunch of configurations under the .config directory, like $HOME/code/dotfiles/.config/i3 and $HOME/code/dotfiles/.config/nvim, we can split these into separate directories, in this example $HOME/code/dotfiles/i3/.config/i3 and $HOME/code/dotfiles/nvim/.config/nvim.

We can name the packages however we like, though, so it could just as well be $HOME/code/dotfiles/foo/.config/i3 for our i3 config.

And technically, if we want to be not-so-clever, we can just do something like $HOME/code/dotfiles/my-dot-config-directory/.config/<everything like i3 and nvim>. But the power of stow is that we can stow and unstow each config like a package. This also means that we can version our configs. For example, we could have one version of i3 for Arch under $HOME/code/dotfiles/i3-arch/.config/i3, and one for Ubuntu under $HOME/code/dotfiles/i3-ubuntu/.config/i3. For these reasons, I recommend this package structure.
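Here's a hypothetical sketch of that Arch/Ubuntu split, with the command you'd run on each machine (the -t flag is explained further down):

~/code/dotfiles
|-- i3-arch
|   `-- .config
|       `-- i3
`-- i3-ubuntu
    `-- .config
        `-- i3

# on the Arch machine, from inside ~/code/dotfiles:
stow -t $HOME i3-arch
# on the Ubuntu machine:
stow -t $HOME i3-ubuntu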

For another example, the .bashrc file typically sits right in the $HOME directory, so we can "package" it simply as $HOME/code/dotfiles/bash/.bashrc. I know, I know - who uses bash anymore... well, I do, apparently :)

tip

Stow is technically a package manager. To make full use of it, we can turn every configuration in our dotfiles into a package by placing it in its own directory.

How it is now

After we migrate to using stow, our repo structure now looks like this:

~/code/dotfiles
.
|-- bash
|   `-- .bashrc
|-- fonts
|   `-- .local
|       `-- share
|           `-- fonts
|               `-- [... files]
|-- gtk-3.0
|   `-- .config
|       `-- gtk-3.0
|           `-- [... files]
|-- i3
|   `-- .config
|       `-- i3
|           `-- [... files]
|-- i3blocks
|   `-- i3blocks
|       `-- [... files]
|-- nvim
|   `-- .config
|       `-- nvim
|           |-- after
|           |   |-- ftplugin
|           |   |   `-- [... files]
|           |   `-- plugin
|           |       `-- [... files]
|           |-- init.lua
|           |-- lua
|           |   `-- kiroki
|           |       `-- [... files]
|           `-- plugin
|               `-- [... files]
|-- stow_config.sh
`-- terminator
    `-- .config
        `-- terminator
            `-- [... files]

Usage

Stowing

Well, great. So far, we've basically just moved some directories around. So, what now?

Well, now we can just run stow for each of these newly created packages. The way stow works is that it takes the directory inside the "package" directory and creates a symlink to it in the parent of the current working directory.

So, for example, if we now cd into $HOME/code/dotfiles/, we can run stow i3. What this will do is create a symlink to $HOME/code/dotfiles/i3/.config/i3 inside $HOME/code/.config/. That will look something like this:

lrwxrwxrwx  1 kblagoev kblagoev   30 Oct  6 23:14 i3 -> ../code/dotfiles/i3/.config/i3/

"But wait!", I hear you say. "Isn't this i3 directory, or symlink, or whatever, supposed to be in our $HOME directory? What is it doing in $HOME/code/?".

You're absolutely right. Let's fix this. Stow has a flag -t, or --target, with which we can specify the root of the package management. This target is by default the parent of the current working directory - that's why, when running stow inside $HOME/code/dotfiles/, the symlinking occurred under $HOME/code/ and resulted in our symlink being $HOME/code/.config/i3 (it can be a bit confusing to keep track of this, but yeah). So, instead, we want to target the $HOME directory. That's why we should run stow -t $HOME i3 instead.
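Putting it together, the working sequence looks something like this sketch (the ls is just to confirm the symlink landed in ~/.config):

cd $HOME/code/dotfiles
stow -t $HOME i3
ls -la $HOME/.config | grep i3
# i3 -> ../code/dotfiles/i3/.config/i3/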

tip

If we don't place our dotfiles repository directly in the $HOME directory, we have to target $HOME explicitly when we use stow, by utilising the -t flag, e.g. stow -t $HOME i3.

Unstowing

Removing a config is super simple with stow as well. Following our example with i3, we can simply run stow -D -t $HOME i3. The -D flag deletes the symlink, and our config is gone from the $HOME/.config/ directory. And only that config!

Additional note on Usage

There is a flag --dotfiles, which lets us name hidden files and directories like .whatever-the-name-is as dot-whatever-the-name-is, and have stow pre-process them by replacing dot- with . when creating the symlinks. This is useful so there aren't hidden files and directories in the repo itself - quite handy for searching with tools that skip hidden files by default.
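For illustration, the bash package from earlier would then be laid out like this hypothetical sketch:

~/code/dotfiles
`-- bash
    `-- dot-bashrc   # becomes ~/.bashrc when stowed

# stow it with the flag enabled:
stow --dotfiles -t $HOME bash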

This is great and all, but the version of stow currently shipped for Ubuntu has a bug with that flag. The bug is fixed in the newest release of stow, but I will wait for the fix to land in apt before migrating to that setting - just for availability reasons.

But if you're going to install the latest version of stow, do keep that option in mind. It's pretty neat.

And lastly, for my own convenience, I've written a bash script which can stow and unstow all the packages inside my repo with one command. I've opted for a manually updated list of the packages, just because I keep some other junk in the dotfiles repo, but this can be changed. I'll paste the script here, in case you'd like to use it yourself (or a modified version of it).

stow_config.sh
#!/bin/bash

# Define an array of package names
packages=(
  "bash"
  "gtk-3.0"
  "i3"
  "i3blocks"
  "terminator"
  "nvim"
  "fonts"
)

# Check if the first argument is "remove" to use the -D flag
stow_flag="-t"
if [ "$1" == "remove" ]; then
  stow_flag="-D -t"
fi

# Loop through each package and run stow or unstow with -D
for package in "${packages[@]}"; do
  if [ "$1" == "remove" ]; then
    echo "Unstowing $package..."
  else
    echo "Stowing $package..."
  fi

  # $stow_flag is left unquoted on purpose, so that "-D -t" splits into two flags
  stow $stow_flag "$HOME" "$package"

  if [ $? -eq 0 ]; then
    if [ "$1" == "remove" ]; then
      echo "$package unstowed successfully."
    else
      echo "$package stowed successfully."
    fi
  else
    if [ "$1" == "remove" ]; then
      echo "Error unstowing $package."
    else
      echo "Error stowing $package."
    fi
  fi
done

echo "All done!"


Running ./stow_config.sh will stow, and running ./stow_config.sh remove will unstow the listed packages.

That's it! glhf

· 5 min read

Motivation

If you're like me, you try to keep all of your coding-based clutter on a virtual machine. Which usually works great. You can maintain a clean OS, separate work from pleasure, and quickly return to a snapshot if something goes terribly wrong. But when I tried doing Android development from a virtual machine, I quickly arrived at an unexpected hurdle.

If you try to run Android Studio's emulators, you'll quickly be reminded that you can't simply run a virtual machine inside a virtual machine. Technically you can, if you enable Nested VT-x/AMD-V, but even if you succeed in navigating the hell of starting an AVD Android emulator from inside your VM, you'll be met with the huge performance hit of nested virtualisation.

You can instead consider running either the Genymotion emulator or the Xamarin Android Player, both of which use Oracle's VirtualBox to run their emulators. But instead of running them inside your VM (and being met with the issue of nested virtualisation), you can run these emulators on your host OS. And since both of them use VirtualBox, you can make full use of VM networking to connect your development VM to your Android emulator VM!

note

Hence, my solution is to have two separate virtual machines running in VirtualBox on the host OS, and let them communicate using networking.

Setup

I'm assuming you've already

  1. Downloaded and installed VirtualBox
  2. Set up a development VM inside of VirtualBox
  3. Connected your dev VM to the outside world using either NAT or Bridged networking (I personally use bridged, since it makes it easier to access any web dev instances running in the VM)

Creating a virtual device

Alrighty, now you can download and install either the Genymotion emulator or the Xamarin Android Player on your host OS.

After you've done that, you can start up your software, and set up a virtual device (phone) by following the instructions.

created virtual device

Now we need to do some configuration work inside VirtualBox.

Configuring the virtual mobile device

If you open up VirtualBox, you'll see that the newly created virtual mobile device is listed in the list of machines.

list virtual devices

We need to edit the network settings for the mobile device. It will need two adapters. One is to connect to the network of the development machine (NAT or Bridged). In my case the machines are on a bridged network, which makes them easier to access from the host OS, as well as from each other.

tip

For some development platforms like React-Native, it's important that both the dev machine and mobile device are on the same network to allow easy debugging.

mobile bridged adapter

The other, arguably more important adapter to add/enable is the Host-only Adapter. This is the one adb will connect over.

mobile host-only adapter

Make note of the adapter name here!

VirtualBox has a habit of creating multiple host-only adapters, so it's very important to make note of this name.
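If you prefer the command line, VirtualBox's own CLI can list every host-only adapter it knows about, which makes it easy to spot duplicates:

VBoxManage list hostonlyifs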

Configuring the development VM

Now that we have set up the mobile device's settings, we need to mirror them in the development VM. If you've used NAT for the mobile device, redo those settings in the development one. If you've put it on a bridged network, do the same for the dev VM.

After this, we need to add a Host-only Adapter to the dev VM. Did you take note of the Adapter name from the mobile machine? This is where you use it!

dev machine host-only adapter

Starting up the machines

Normally, you just start them from their respective applications. Start the mobile machine from Xamarin or Genymotion, and start your dev VM from VirtualBox.

But I've found that if I start the mobile machine from Genymotion, it tends to reset the Host-only Adapter name, add a new one, and fail to start. So, if this happens to you, this is what I do:

  1. Start the mobile machine from VirtualBox. A command-line terminal will pop up, and will start loading.
  2. When it appears that the terminal isn't doing anything anymore, you can safely start the mobile device from Genymotion.

mobile device ready to be started from genymotion

This is how my terminal looks when it stops loading. At this stage, I start the device from Genymotion as normal.

If you haven't already, you can now start your development VM as well.

Connecting to the mobile device

In this scenario, we are going to use adb to connect to the mobile device from the dev VM. All we need is the IP that the device is running on. Both Genymotion and Xamarin provide some way to see the IP.

genymotion shows ip

But if you're having trouble finding it, you can go to VirtualBox and read the IP off the terminal we saw earlier.

virtualbox shows ip

So, now we can easily connect from our dev VM using the command-line tool adb:

kblagoev@deva:~/Android/Sdk/platform-tools$ ./adb connect 192.168.68.101
* daemon not running; starting now at tcp:5037
* daemon started successfully
connected to 192.168.68.101:5555
kblagoev@deva:~/Android/Sdk/platform-tools$

Success!

And since I've used a bridged network, I can also easily make network connections between the two machines, which can be quite useful in some cases. So I recommend it, unless you have your own way of doing it.

Now you can develop! Have fun!

· 5 min read

This is a writeup for the Key Mission challenge, part of Hack The Box's Cyberapocalypse CTF 2021, category Forensics.

Prompt

The secretary of earth defense has been kidnapped. We have sent our elite team on the enemy's base to find his location. Our team only managed to intercept this traffic. Your mission is to retrieve secretary's hidden location.

· 2 min read

This is a writeup for the CaaS challenge, part of Hack The Box's Cyberapocalypse CTF 2021, category Web.

Prompt

cURL As A Service or CAAS is a brand new Alien application, built so that humans can test the status of their websites. However, it seems that the Aliens have not quite got the hang of Human programming and the application is riddled with issues.

· 4 min read

Well, here we go. This was the very first CTF event that I took part in while it was happening, and I'm quite proud of my results!

Most importantly, almost every challenge that I finished taught me a bunch of new concepts and techniques, and showcased what can be expected in the field of hunting vulnerabilities.

By the end of the 5-day ordeal I got to 357th place out of 4740 teams and more than 9500 players, and I'm very satisfied with such an outcome from my very first event of this type.

But, as I can now see, the learning process continues even after the event, in the form of writing write-ups! The much more level-headed approach of trying to explain and follow the process of finding the flags means that I get to see my ideas and techniques in a new light, and organise the tools and concepts much better.

Reading write-ups is also indispensable, as there were quite a few other challenges that I was sure I was very near to solving, but never found out what I was missing. Reading other people's write-ups allowed me to see what I did right or wrong, and how to improve on that.

So I hope you get that from reading these write-ups as well, and learn a few things from them, or at least see another approach to solving the same problem.

Enjoy!

Category Web

Inspector Gadget - Cyberapocalypse 2021 CTF

An easy warm-up challenge in the style of a scavenger hunt

CaaS - Cyberapocalypse 2021 CTF

Exploiting curl running locally on the host, to get local file inclusion

miniSTRypalace - Cyberapocalypse 2021 CTF

Showcasing the importance of white-listing instead of black-listing commands and strings inside PHP

BlitzProp - Cyberapocalypse 2021 CTF

Exploring the interesting concept of AST injection and prototype pollution, resulting in remote code execution. And all of that because we used the wrong version of a library in Node

E.Tree - Cyberapocalypse 2021 CTF

Blind XPATH injection with a slight twist

Wild goose hunt - Cyberapocalypse 2021 CTF

A challenge showcasing the weakness of improper handling of mongo queries

Emoji Voting - Cyberapocalypse 2021 CTF

This fun challenge showcases blind SQL injection inside of an ORDER BY clause

Category Reverse

Passphrase - Cyberapocalypse 2021 CTF

A simple reverse-engineering challenge involving stringcompare

Authenticator - Cyberapocalypse 2021 CTF

Reverse-engineering a binary, involving both stringcompare and a bit of XOR magic

Category Forensics

Key Mission - Cyberapocalypse 2021 CTF

Having fun with USB Human Interface Devices, namely a keyboard. The twist was having to deal with the Shift key

Category Crypto

Nintendo Base64 - Cyberapocalypse 2021 CTF

An easy warm-up cryptography challenge, dealing with multilayered base64 encoding and obfuscation

Soulcrabber - Cyberapocalypse 2021 CTF

A challenge written in Rust, showcasing using known seeds for pseudo-random number generators

Phasestream 1 - Cyberapocalypse 2021 CTF

The first challenge of this series showcased XOR encryption with a 5-byte key

Phasestream 2 - Cyberapocalypse 2021 CTF

Still on the topic of XOR, this time using a 1-byte key, but hiding the real flag in a list of 9999 different strings

Phasestream 3 - Cyberapocalypse 2021 CTF

A challenge showcasing the devastating effects of reusing keystreams in AES encryption

Phasestream 4 - Cyberapocalypse 2021 CTF

A direct follow-up of the previous challenge, introducing a bit of a guessing game

Category Misc

Alien Camp - Cyberapocalypse 2021 CTF

A fun scripting challenge involving the automatic handling of nc based services

Input as a Service - Cyberapocalypse 2021 CTF

input() in Python 2.x is scary by default

· 3 min read

This is a writeup for the Alien Camp challenge, part of Hack The Box's Cyberapocalypse CTF 2021, category Misc.

Prompt

The Ministry of Galactic Defense now accepts human applicants for their specialised warrior unit, in exchange for their debt to be erased. We do not want to subject our people to this training and to be used as pawns in their little games. We need you to answer 500 of their questions to pass their test and take them down from the inside.