So it’s been a while since I’ve posted anything. That’s not for lack of material, but mostly for lack of time. I should be adding a few more posts in the next week, and I’m going to start off this latest string of posts with a quick one that extends my previous discussion of thin clients.

In the last post, I talked about using boto and python to push data around. I found boto to be very useful for streamlining the process of getting data on and off AWS instances, which can be a slow and tedious task when using a thin client setup and scp. While exploring other ways to make the thin client lifestyle easier to manage, I ‘discovered’ that github is not only a valuable tool for version control, but also a fairly useful tool for synchronized file storage. In addition to moving a lot of data around, I was regularly using scp to push python scripts and other files to and from different machines. The biggest pain point with scp is that you need to keep track of the IP address of each machine. With github, you only need to remember the repo name (much easier to memorize than an IP address) and then clone the repo to the machine. And of course you get the added bonus of being able to commit any changes you make back to the remote repo.
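
To make the comparison concrete, here is a rough sketch of the two workflows side by side. The file name, repo name, username, and IP address are all made up for illustration, and this assumes git is installed on the instance and an SSH key is already set up with github:

```
# The scp way: you have to remember the remote machine's IP address
scp analysis.py ubuntu@54.210.0.12:~/project/

# The github way: only the repo name has to be remembered
git clone git@github.com:myusername/project.git
cd project

# ...edit scripts, generate files, etc...

# Commit any changes back so every other machine can pull them
git add analysis.py
git commit -m "Update analysis script from AWS instance"
git push origin master
```

On any other machine, a `git pull` inside the repo brings down whatever was committed, so everything stays in sync without anyone having to look up an IP address.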

For seasoned veterans, this ‘revelation’ of mine will seem rather obvious. But for those of us who are still learning the ropes, version control adds a lot of extra overhead to the already heavy mental load that comes with learning new programming languages and tools. So in hindsight, after using github heavily for a few months, the idea that it could double as a synced file storage system seems obvious, but at the time I was quite happy to discover its secondary use case.