A grab bag of half-finished posts


Because I’ve started a bunch of posts, never finished them, and don’t really want to delete them since some of the stuff looks potentially useful. And I’m clearing things out.

HLDS improvements

“Less condescending, more slicing robots in half.”

First off, getting auto-updating to *actually* work:

Edit line 302 of tf2/orangebox/srcds_run: instead of PATH=..:.:${PATH}, use PATH=..:.:../..:${PATH}. This adds the directory two levels up from srcds_run to the search path, which is the directory that contains the steam binary.

I’m so wrong. Don’t edit the file; it’ll get overwritten every single time auto-update runs. Instead, I added ~ to the PATH variable in .bashrc, which doesn’t get overwritten.
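
For reference, that .bashrc line is just the following (assuming the steam binary sits directly in your home directory, two levels up from tf2/orangebox/srcds_run, as it does for my install):

export PATH="$HOME:$PATH"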

Something I’m keeping my eye on: I’m getting a lot of “_restart on mapchange” messages when running in gamelobby mode. I’m thinking it might be a race condition (really more of a deadlock): the server won’t restart until the map changes, but map changes are controlled by the lobby server, and the lobby server won’t change the map because the server’s files are out of date.

Python build error on Windows w/ VS2010

I was getting errors of the form

error: Unable to find vcvarsall.bat

This happens because distutils in Python 2.x looks for Visual Studio 2008 via the VS90COMNTOOLS environment variable, and VS2010 only sets VS100COMNTOOLS. The fix, in cmd (Command Prompt):

SET VS90COMNTOOLS=%VS100COMNTOOLS%

In PowerShell:

$Env:VS90COMNTOOLS = $Env:VS100COMNTOOLS
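
Both of those only last for the current shell session. Not something from the original notes, but I’d assume setx persists it to the user environment (it stores the expanded value, and only takes effect in newly opened shells):

setx VS90COMNTOOLS "%VS100COMNTOOLS%"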

More CrashPlan WTFery.

I’m doing one final upgrade for my family: replacing the hard drives in all the desktops.

And I got the error “logged out by authority: invalid application state”

I ended up having to ‘readopt’ the computer, having only changed the hard drive. Does hard drive size make up a component of the GUID?

(Writing this a few months later: more likely it’s the hard drive serial number that’s used in calculating the GUID. But that’d still be stupid.)

Configuring instances on EC2 to pull down stuff automagically

Because I’m a cheap person, I’ve got my EC2 instances running as spot instances instead of proper on-demand instances, so they can be killed at any point in time. And because they boot from an EBS volume snapshot, I need a way for a fresh instance to automatically pull everything back down.

v1 had everything embedded in the snapshot: a crontab with an @reboot entry, plus rsync and a fixed private key to upload data every 30 minutes (but not to download anything on startup, which is something I wanted to fix in v2).
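
Roughly what that crontab looked like; the paths, hostname, key and script names here are hypothetical stand-ins for the real ones:

    # push data up every 30 minutes; the private key was baked into the snapshot
    */30 * * * * rsync -az -e "ssh -i /home/ubuntu/.ssh/backup_key" /srv/data/ user@backuphost:backups/
    # @reboot just started the workload; there's no matching download step, which is the v1 gap
    @reboot /home/ubuntu/start_workload.sh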

v2 was going to be even more intricate embedding, but it looks like I should be using EC2 user data instead. So that’s what I shall look at.
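
User data can be a plain shell script that cloud-init runs on first boot, so presumably v2 would have been something like this untested sketch (same hypothetical paths as above):

    #!/bin/bash
    # pull the data back down when a fresh spot instance boots
    rsync -az -e "ssh -i /home/ubuntu/.ssh/backup_key" user@backuphost:backups/ /srv/data/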

(Writing this later: I never got around to looking at this because I switched to using a cheap OpenVZ based VPS instead of paying for EC2.)

iCal creation in Python

I started working on a script that would take course schedules from uWaterloo’s API and create an iCal file that I could import into Google Calendar. I’d publish it as a script so other people could do the same, because hey! I like making stuff people use.

But I’m giving up on doing it entirely in Python, for a couple of reasons:

  1. iCalendar’s documentation is horrible. I had to jump into the code to try and get the repeating-rule syntax right, though I seem to have got it working with event.add('rrule', {'FREQ': 'WEEKLY', 'BYDAY': 'TU', 'WKST': 'MO'}) (see the sketch after this list). (Tip: WKST is the day the week starts, i.e. Monday.)
  2. Timezones. TIMEZONES. ARGH. Google Calendar appears to let me declare a timezone for everything, and then it takes care of DST and other timezone transitions on repeated events. Unlike iCalendar, where I’d end up having to use UTC internally, so I’d need to declare timezones in each event and think about how to handle DST transitions myself.
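
To make points 1 and 2 concrete, here’s a minimal sketch of what I was attempting, using the icalendar and pytz packages; the course name, dates, and prodid are made up:

    from datetime import datetime

    import pytz
    from icalendar import Calendar, Event

    tz = pytz.timezone('America/Toronto')  # uWaterloo's timezone

    cal = Calendar()
    cal.add('prodid', '-//uwaterloo-schedule//EN')  # arbitrary identifier
    cal.add('version', '2.0')

    event = Event()
    event.add('summary', 'CS 241 Lecture')  # made-up course
    # timezone-aware datetimes get written out with a TZID parameter,
    # which leaves DST handling to whatever imports the file
    event.add('dtstart', tz.localize(datetime(2013, 1, 8, 10, 0)))
    event.add('dtend', tz.localize(datetime(2013, 1, 8, 11, 20)))
    event.add('rrule', {'FREQ': 'WEEKLY', 'BYDAY': 'TU', 'WKST': 'MO'})
    cal.add_component(event)

    with open('schedule.ics', 'wb') as f:
        f.write(cal.to_ical())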

Though Google says there are no Python packages for Google Calendar specifically, I found a bunch of Python wrappers for their APIs at developers.google.com/api-client-library/python/start/installation
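
Per that page, getting the wrapper should just be (assuming pip is set up):

pip install google-api-python-client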

(Again, writing later: I discarded the idea. The API doesn’t list irregularly scheduled classes, e.g. biweekly lab sessions don’t show up, and since keeping track of those irregular classes was the whole point, it’s pretty much useless for me. Instead, I’m using schedule.wattools.com/)


  #1 by Brett Trotter on November 10, 2014 - 10:17 pm

    I created a CrashPlan Docker image and I keep /usr/local/crashplan in a volume that’s shared when I re-build the container. Everything about the docker container is identical down to the last detail, but somehow CrashPlan knows and I have to re-adopt and re-index a 2TB repository. I’m about ready to build the container twice and byte compare the differences…
