Archive for category Personal

On interviews and looking back at the future

I don’t think I’ve published a post on my job searches as part of my time in the SoftEng program at UWaterloo, even though I’ve got a few in my drafts.

Since I’ve had my last interview (and Jobmine has closed & matched), here’s some fun stats without naming companies:

  • Favourite interviewer line: “So our alphabet ends with Zee… actually wait, you go to Waterloo, right? So Zed.”
  • Favourite interviewer compliment: “If we hired based on domain name, you’d be our first choice!”
  • Most common uncommon interview question: “What’s your favourite TF2 class, and why?”
  • Most common common interview question: “Tell me about yourself!” (Still don’t have a fixed answer for this)
  • Most unexpectedly large number of interviews from a small-ish company: 4
  • Most unexpectedly small number of interviews from a large company: 1 (+ code test)
  • Weirdest application experience: Rejected on Jobmine, requested on LinkedIn (larger company, left hand doesn’t know what the right is doing)
  • Weirdest process experience: “Thanks for your interest in the Winter 2016 Internship position! We’d like to schedule your interview ASAP!” *interview* “Thanks for your interest in the Fall 2016 Internship position! We’d like to schedule your interview ASAP!”
  • Most annoying experience: “We’ll definitely consider you for the SRE position since you explicitly requested it!” “Here is the interview schedule for your SWE interviews”
  • # of code problems in phone/skype interviews left unfinished: 3
  • Best code writing platform: Coderpad
  • Worst code writing platform: Google Docs (“Let me just create some indentation groups for you”)
  • # of technologies to investigate: 6 (Stemcell, Consul, Elasticsearch, Kibana, Fluentd, Graphana)



No Comments

My State of Serving, aka VPS recap

So I wrote a bunch about VPSes a few months ago, and what I thought my future looked like with them. Well, a bunch has changed since then (and will keep changing going forward), so let’s go:

Full out cloud hosting

Still good for buy-as-you-need systems, still not right for my usecase. Other than reading about price drops in The Register/on Twitter, I’ll be passing over them.

Cloudy VPSes

Haven’t seen too much movement on this. I’m going to lump RamNode (www.ramnode.com) into the cloudy VPSes section, simply because they’re at 5 locations. Sheer scale. Also, their prices are approaching DigitalOcean levels. I’d still go with DO though – RamNode’s cheaper plans are OpenVZ-based, not KVM like DO.

Traditional VPSes

This is probably the most significant change – instead of migrating to a Digital Ocean droplet/Vultr node, I decided to go with Crissic.

I haven’t actually had my VPS with them go down in ~1 year (yet), and the weird SSH issues resolved themselves around December, so I decided to bite the bullet and extend my existing VPS plan with them. (Or should I say him – all the support tickets have been signed “Skylar”, so it’s looking like a one-man operation.) Crissic consistently comes in the top 5 in the LowEndTalk forum provider poll, so that was additional validation.

With Nginx/PHP-FPM/MariaDB all going, I’m hovering around 300MB of RAM used. So I have a bunch of headroom, which is good.

I also picked up a bunch of NAT-ed VPSes – MegaVZ is offering 5 512MB nodes spread around the world for 20€. So I got them as OpenVPN endpoints. They’re likely going to be pressed into service as build nodes for a bunch of projects (moving Jenkins off my Crissic VPS), but those plans are still up in the air. We shall see…

Dedicated Server?

The most interesting thing I found was Server Bidding. Hetzner (massive German DC company) auctions off servers that were set up for customers who have since cancelled their service. Admittedly, most of their cheaper stuff is consumer grade (i7s instead of Xeons), but I can’t really complain. There’s an i7-3770 with 32GB of RAM and 2x 3TB drives going for €30.25/~USD$34 right now. And prices only go down over time (until someone takes it).

That mix of price/specs is pretty much unmatchable. KVM servers are ~$7/month for 1GB (and scale roughly linearly), so I’d be looking at $28/month for 4GB of RAM if I went the normal routes. And that’s totally ignoring the 2x3TB HDDs (admittedly, I’d want those in RAID 1, so effectively 3TB only.)

No Comments

My state of VPSes

The VPS market is really really interesting to watch. Maybe it’s just me, but the idea of being able to get a year of a decent system for the price of a pizza is fascinating – and somewhat dangerous to my wallet. At my peak, I had 4 VPSes running at the same time – and each of them doing practically nothing. So I ran TF2 servers off the ones that could support it until the prepaid year was over.

It’s just so… attractive. My ‘own’ servers, without the expense of needing to buy and run pizza boxes that sound like hurricanes and are apparently good places for cat puke. And now I find myself needing to get one to power everything in the near future, because I’ve been unsatisfied with Dreamhost for about 2 years, and they finally pushed me over recently. So I decided to take notes, with the target of migrating everything of mine onto a VPS in December.

My usecase can be summed up simply as an always-on, low-traffic webserver hosting a personal blog (WordPress, while I look at Ghost), a MySQL database, and a Jenkins build server (them Lightroom plugins!). Somewhat price-sensitive – I’m not made of money, but at the same time I’m not going to sweat the price of a coffee at Starbucks. Bonus points for extra space for hosting friends’ sites though. They can pay me back by buying me a drink or something.

Cloud Hosting

Cloud hosting is hard to quantify – off-hand, the big three are Amazon, Google, and Microsoft Azure. But I’m going to expand my definition to cloud = anything with on-demand, per-usage charges. Which opens it up.

The big 3 are pretty much unsuitable. Amazon’s still aiming at companies that need to spin instances up and down quickly – and prices itself out compared to the alternatives. Even looking at the heavy-use reserved instances, that’s $51 upfront + hourly fees per year for the smallest system (t2.micro, with 1GB of RAM & a throttled CPU). But wait, that actually works out to $4.25 + $2.16 = $6.41 per month (assuming the current $0.003/hour charge stays the same), which isn’t so bad.
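As a sanity check on that arithmetic (the prices are the ones quoted above; the ~720-hour month is my assumption):

```python
# Amortized monthly cost of a heavy-use reserved t2.micro,
# using the prices quoted above.
UPFRONT_PER_YEAR = 51.00   # one-time reserved-instance fee, USD
HOURLY_RATE = 0.003        # USD per hour while running
HOURS_PER_MONTH = 720      # assume a ~30-day month

monthly_upfront = UPFRONT_PER_YEAR / 12          # upfront fee spread over 12 months
monthly_hourly = HOURLY_RATE * HOURS_PER_MONTH   # running cost per month
total = monthly_upfront + monthly_hourly

print(f"${monthly_upfront:.2f} + ${monthly_hourly:.2f} = ${total:.2f}/month")
```

And that $6.41 is before bandwidth and disk, as noted below.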

Except I had a wonderful manager in the past who pointed out that it’s not the cost of the compute power that was the killer when it came to building a company on EC2, it’s the cost of the outbound bandwidth. Data out is still an extra fee. Oh, and disk space is charged per GB as well. The wonders of all those options!

Google Compute Engine and Microsoft Azure are pretty much the same. Google wants $6.05 for their lowest system (0.6GB of RAM & 1 “Virtual Core”, likely a throttled CPU much like Amazon’s); Microsoft just states an estimated monthly price of $13/month for an A0 instance, with 1 core and 0.75GB of RAM. Here’s a nice difference though: it comes with 20GB of disk.

So the big 3 are out for me (I write, with the benefit of having since looked in other places). Let’s look at other providers, shall we?

Cloudy VPSes

So named because they are charge-per-use VPSes, but all the ancillary stuff is included in the price like traditional VPSes.

I haven’t done a really extensive search for this type of provider. The biggest known one is DigitalOcean, and I found Vultr on Twitter. And then I found Atlantic.Net somewhere else. (I think a Hacker News thread on DigitalOcean, amusingly.)

The biggest thing for me recently was the sudden addition of Atlantic.Net’s lowest-end system (www.atlantic.net/vps/vps-hosting/) – $0.99/month for 256MB of RAM, 10GB of disk and 1TB of transfer is pretty much unheard of. Making the ultra-low price point generally available is entirely new, and it’s going to be interesting to see if anyone in the cloudy space matches it – it’s actually cheaper than traditional VPS providers for a small always-on box. It’s definitely on my list to watch (and possibly get, for things like a target for automatic MySQL backups).

DigitalOcean would have my vote right off the bat for that sweet sweet $100 in credit they’re giving to students (www.digitalocean.com/company/blog/were-participating-in-githubs-student-developer-program/), except for the fact it’s for new users only. Which is rather annoying.

All 3 providers appear to be about the same:

  • Vultr appears to be trying to undercut DigitalOcean in the low end – More RAM/less space/same price for the cheapest box, same RAM/less space/cheaper for the next level up.
Atlantic.Net also appears to be trying to undercut DigitalOcean, just with more space and bandwidth instead of RAM. And literally cents cheaper! ($4.97 vs $5? Mhmm…)
  • Meanwhile, DigitalOcean coasts along on brand recognition and testimonials of friends.

So cloudy VPSes are far better for the single always-on server usecase. How about traditional VPSes?

Traditional VPSes

Traditional VPSes and I have an interesting history. Prior to finding LowEndBox, VPSes were super expensive compared to shared hosting, e.g. www.1and1.com/vps-hosting – I still have a client/friend hanging onto his $50/month 1and1 VPS that I cringe about.

That pretty much changed when I found LowEndBox. For the ultra-low price point (like Atlantic.Net), LowEndBox has had $10/year offers in the past, primarily for systems with less RAM (but 256MB has happened – lowendbox.com/blog/crissic-solutions-10year-256mb-openvz-vps-and-more-in-jacksonville-florida/).

The downside of traditional VPSes is that the better prices mean prepaying annually. No pay-as-you-go here.

My history:

  • Virpus.com for 2 Xen VPSes in 2012 – I think they are one of the few who are doing Xen as opposed to OpenVZ or even KVM
  • Hostigation in 2013 – for a personal project that never actually happened
  • WeLoveServers in September 2013 – basic server for $19/year
  • Crissic in early 2014 – ostensibly better than WLS, at $2/month; I hosted a client site on it

Looking at the Virpus site, they’ve dropped their prices from $10/month to $5/month. I can’t remember what I paid for Hostigation, but I rarely even logged into that VPS.

WLS was at the very least decent. I used it mainly as an OpenVPN endpoint more than anything.

Crissic is good, but overly touchy on limits. I regularly get my SSH connection closed despite keepalives being set, and I can’t figure out why.

Overselling is a thing that happens though – CPU usage in particular is really limited, and suspensions happen. So I’m leery of moving my stuff to a traditional VPS provider without being able to test it. Especially if I just paid for a year of hosting.

New to me: RAM heavy VPSes

VPSDime (https://vpsdime.com/index.php) is a new-to-me provider. 6GB of RAM for $7 a month. Unfortunately, that’s offset by only 30GB of storage and 2TB of bandwidth, vs WeLoveServers’ 6GB/80GB/10TB for $19/month. Then again, I don’t really use that much storage or bandwidth.

Bonus: Storage VPSes!

The main reason I kept DreamHost for 2 years after I got annoyed with them was that their AUP allowed me to use RAW images on my photography site, and finding space for 500GB of photos was not cheap anywhere else. With the introduction of their new DreamObjects service (read: S3 clone), that changed, and my last reason for keeping them disappeared.

VPSDime has interesting storage VPS offers. 500GB of disk for $7 is $2 more than Amazon Glacier (not accounting for the fact that I’m unlikely to get to use all 500GB), but offset that against the fact that it’s online (no retrieval times) and I don’t have to pay retrieval fees. Maybe I’ll get to ditch one of my external drives…
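Back-of-envelope on that comparison (the ~$0.01/GB-month Glacier storage price is implied by the $2 difference above; retrieval fees and partial usage ignored):

```python
# Storage VPS vs Glacier, using the figures implied above.
GLACIER_PER_GB = 0.01    # USD per GB-month, implied by the $2 gap above
VPSDIME_MONTHLY = 7.00   # USD/month for the 500GB storage VPS
STORAGE_GB = 500

glacier_monthly = GLACIER_PER_GB * STORAGE_GB       # cold storage cost
premium = VPSDIME_MONTHLY - glacier_monthly         # what online access costs extra

print(f"Glacier: ${glacier_monthly:.2f}/mo; VPS premium: ${premium:.2f}/mo")
```

Two dollars a month for no retrieval delays and no retrieval fees is the whole trade-off in a nutshell.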

Runner up: BuyVM – more established, but about double the price for the same amount of space 🙁

TL;DR

I’m inclined to see what Vultr’s 1GB/$7/month plan is like for hosting my personal stuff. Also thinking of getting a 1GB WLS VPS to be an OpenVPN endpoint/development server. Possibly Atlantic.Net’s cheapest service instead, but an extra 58 cents per month isn’t going to break the bank, and I have the advantage of not needing to give Atlantic.Net my credit card details. In the worst case I can walk away from WLS without worrying about my credit card being charged.

Yay for backups and Ansible, which make it possible to deploy a new server in minutes once I get the environment set up just right.


3 Comments

Musings on the Mythical Man-Month Chapter 2

tl;dr: Scheduling tasks is hard

  1. We assume everything will go well, but we’re actually crap at estimating
  2. We confuse progress with effort
  3. Because we’re crap at estimating, managers are also crap at estimating
  4. Progress is poorly monitored
  5. If we fall behind, the natural reaction is to add more people

Overly optimistic:

Three stages of creation: Idea, implementation, interaction

Ideas are easy, implementation is harder (and interaction is from the end user). But our ideas are flawed.

We approach a task as a monolithic chunk, but in reality it’s many small pieces.

If we use probability and say that we have a 5% chance of issues, we budget 5% slack, because we treat it as one monolithic thing.

But the real situation is that each of the n small tasks has its own 5% chance of being delayed. The chance that nothing slips is thus 0.95^n, which shrinks fast as n grows.

Oh, and our ideas being flawed? Yeah… it’s a virtual certainty that the 5% slack will be used.
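A quick sketch of how those per-task odds compound (the 5% per-task delay chance is the figure above; the task counts are made up for illustration):

```python
def on_time_probability(n_tasks, p_delay=0.05):
    """Chance that every one of n independent tasks avoids its 5% delay."""
    return (1 - p_delay) ** n_tasks

# Even modest task counts make an on-time finish unlikely.
for n in (1, 10, 20, 50):
    print(f"{n:2d} tasks -> {on_time_probability(n):.1%} chance of no delays")
```

At 20 tasks you’re already down to roughly a one-in-three chance that nothing slips.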

Progress vs effort:

Wherein the man-month fallacy is discussed. It comes down to:

  1. Adding people works only if they’re completely independent and autonomous. No interaction means each person gets assigned a smaller portion, which equates to being done faster
  2. If you can’t split up the tasks, it’s going to take a fixed amount of time. The example in this case is childbearing – you can’t shard the child across multiple mothers. (“Unpartitionable task”)
  3. Partitionable w/ some communication overhead – pretty much the best you can do in Software. You incur a penalty when adding new people (training time!) and a communication overhead (making sure everyone knows what is happening)
  4. Partitionable w/ complex interactions – Significantly greater communication overhead

Estimation issues:

Testing is usually ignored/the common victim of schedule slippage. But it’s frequently the time most needed because you’re finding & fixing issues with your ideas.

Recommended time division is 1/3 planning, 1/6 coding, 1/4 unit tests, 1/4 integration tests & live tests (I modified the naming)
Without check-ins, if the schedule slips, people only find out towards the end of the schedule, when the product is almost due. This is bad because a) people are preparing for the new thing, and b) the business has invested in getting code out that day (purchasing & spinning up new servers, etc.)
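The recommended division above, applied to a hypothetical 12-week project (the 12 weeks is just an example number):

```python
from fractions import Fraction

# Brooks's suggested division of schedule time (naming per the notes above).
SPLIT = {
    "planning": Fraction(1, 3),
    "coding": Fraction(1, 6),
    "unit tests": Fraction(1, 4),
    "integration & live tests": Fraction(1, 4),
}
assert sum(SPLIT.values()) == 1  # the fractions cover the whole schedule

weeks = 12  # hypothetical project length
for phase, share in SPLIT.items():
    print(f"{phase}: {float(share * weeks):g} weeks")
```

Notice that coding is the smallest slice – half the schedule is testing.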

Better estimates:

When a schedule slips, we can either extend the schedule, or force the work into the original timeframe (crunch time!). Like cooking an omelette over higher heat, devs can increase intensity, but that rarely works out well

It’s common to schedule to an end-user’s desired date, rather than going on historical figures.

Managers need to push back against schedules done this way, going instead for at least somewhat data-based hunches rather than wishes

Fixing a schedule:

Going back to progress vs effort.

You have a delayed job. You could add more people, but that rarely turns out well. The overhead of training new people & communication takes its toll, and you end up taking more time than if you’d just stuck with the original team

The recommended thing is to reschedule, and to add sufficient time to make sure that testing can be done.

It comes down to this: the maximum number of people depends on the number of independent subtasks. You will need time, but you can get away with fewer people.


The central idea, that one can’t substitute people for months is pretty true. I’ve read things that say it takes around 3 months to get fully integrated into a job, and I’ve found that to be largely true for me. (It’s one of my goals for future co-op terms to try and get that lowered.)

The concept of partitioning tasks makes sense, and again it comes back to services. If services are small, 1 or 2 person teams could easily take care of things, and with minimal overhead. When teams start spiraling larger, you have to be better at breaking things down into small tasks, so you can assign them easily, and integrate them (hopefully) easily. It seems a bit random, but source control helps a lot here.

Estimation is tricky for me, and will continue to be, since it only comes from experience – various rules of thumb I’ve heard include: take the time you think you’ll need, double it, then double it again.

But it’s a knock-on effect – I estimate badly, tell my manager, he has bad data, so he estimates wrongly as well… I’ve heard stories that managers pad estimates. That makes sense, especially for new people. I know estimates of my tasks have been wildly off. Things that I thought would take days ended up taking a morning. Other things, like changing a URL, exposed a recently broken dependency, and then I had to fix that entire thing… yeah, a 5-minute fix became an afternoon+. One thing I’ll try to start doing is noting down how long I expect things to take me, and then comparing at the end to see whether or not I was accurate. Right now it’s a very handwavey “Oh, I took longer/shorter than I wanted, hopefully I’ll remember that next time!”

Which, sadly, I usually don’t.

No Comments

Musings on The Mythical Man-Month Chapter 1

Summary of the chapter:

Growing a program

A standalone product, running on the dev’s environment, is cheap.

It gets expensive if:

  1. You make it generic, such that it can be extended by other people. This means you have to document it, test it (unit tests!), and maintain it.
  2. You make it part of a larger system. For example, cross-platformness. You have to put effort into system integration.

Each of those tasks takes ~3x effort of creating the original program. Therefore, creating a final product takes about ~9x the effort. Suddenly, it doesn’t look simple anymore.

Why Software

  1. Sheer joy of making things. Especially things that you make yourself.
  2. Joy of making things for other people
  3. Fascination at how everything works together
  4. Joy of always learning
  5. Joy at working in an insanely flexible medium – a creative person, but the product of the creativity has a purpose. (Unlike poetry, for example)

In summary, programming is fun because it scratches an itch to design and make something, and that itch is surprisingly common among people.

Why not Software

  1. You have to be perfect. People introduce bugs. You are a person. Therefore you aren’t perfect, and a paradox occurs, which resolves in the program being less than perfect.
  2. Other people tend to dictate the function/objective of the program – leaving the writer with authority insufficient for his responsibility. In other words, you can’t order people around, even though you need stuff from them. Particularly infrastructure people, who get handed programs that aren’t necessarily working well and are expected to make them run.
  3. Designing things is fun. Bug fixing isn’t. (This is the actual work part.) Particularly where each successive bug tends to take longer to find & isolate than the last one.
  4. And when you’re done, frequently what you’ve made is ready to be superseded by a new better program. Luckily, that shiny new thing is usually also in gestation, so your program gets put into service. Also, it’s natural – tech is always moving on, unlike your specs, which are generally frozen at a fixed point. The challenge is finding solutions to problems within budget and on time.

So I got ahold of the much talked about Mythical Man-Month book of essays on Software Engineering… and I’ve decided to read an essay a night, and muse about it, after writing a summary of the chapter (read: taking notes about the book, so I’m not just passively reading).

I agree with pretty much everything – and I’ll cover points in order of where they appear in the essay.

Growing a Program: The extra work done in getting systems integrated is pretty accurate. I think that’s driving a lot of the move towards offering everything as services instead of one monolithic thing. Moving to using a service means a lot of stuff is abstracted away for you – you can ignore the internal workings (more so than using a library, which you have to keep track of) in the hope that stuff works as advertised. So you save some time on the integration side of things by reducing the amount of surface area you have to integrate with.

However, the fleshing out of the program – writing tests and documenting everything, is harder to avoid. A lot of the boilerplate stuff is automated away by IDEs (auto generating test stubs, for example), but there’s still work that needs to be done to make stuff into a proper dependable system – and that’s really the stuff that’s separating the small scale, internal software from public scale.

Admittedly, that’s a bit of a tautology. But I think a lot of the growing is just forced by not wanting to keep on fixing bugs in the same code. By having a test against it, you know whether or not at the very least, the expected behaviour occurs.

Why software: I chose software over hardware in Uni because it’s so much more flexible (#5). I like making things (#1), especially those things which help people (#2). I do a mental happy dance every time someone posts a nice comment on Lightroom Plugin page on deviantArt. Though the happiness of understanding how things fit together (#3) is more of “Ha! Got <complicated API> to actually work!” And #4 is more frantically Googling so as to not look like an idiot to the rest of my team.

Why not Software: Uh… yeah. #1 & #3 – damn bugs. See the 6+2 stages of debugging. Sadly true, especially the moment of realisation, followed by “how did that ever work?”. But fixing bugs is satisfying, particularly a new bug that you’ve never seen before. #2 – That’s, well, the nature of work when you’re not at the top. The authority/responsibility trade-off is real. I like to think I’ve worked around it at Twitter by following Twitter’s “Bias towards action” guideline – I have submitted fixes for other teams’ projects, gotten them reviewed, and merged the code. Much more efficient than filing a bug and saying “BTW, you’re blocking me right now.” And #4 – that goes along with the learning new stuff thing. Also, it’s probably a good thing that a new version will come along soon – you get closer to what the user wants by iterating. If you’ve stopped iterating, the product is either a) perfect, or b) the cost/benefit analysis says it’s not worth updating, so run it in pure maintenance mode.

Or c) you just really don’t care about it anymore. Which is really just a variation on b.

No Comments

Chocolate chip cookies

On a random note, because I’m living on my own and have been trying recipes:

Ingredients

  • 1/2 cup white sugar
  • 1/4 cup brown sugar (In theory, brown & white sugar amounts are swappable, haven’t tried it yet)
  • 1 egg
  • 1 tsp vanilla extract
  • 1 cup flour
  • 1/2 tsp baking soda
  • 1 cup chocolate chips
  • 1/2 cup butter (I use unsalted), or 1/2 cup margarine & a pinch of salt (which the recipe called for)

Cooking

  1. Beat butter, sugar, egg, vanilla, and baking soda till smooth (i.e., use an electric beater if you have one)
  2. Mix in flour & chocolate chips (don’t use the electric beater for this part, because everything gets way too thick to handle)
  3. Bake at 350F for 10 mins

Outcome

First batch was horrid – way overcooked to the point that one tray was inedible

Second batch was somewhat better – at 10 minutes they looked underdone, so I left them in for another two minutes, and they looked & felt good when they came out. However, they ended up overly crunchy. I also replaced the 1 cup of chocolate chips with 1/2 cup of chocolate chips & 1/2 cup of mini M&Ms.

The next batch will be baked for 11 minutes MAX, since overbaking seems to be the issue.


No Comments

LED cubes and Raspberry Pis…

My plan of making an LED cube with a Raspberry Pi is coming together… I’ve been making a list of stuff I need, and looking for good resources.

IC 1: Serial in/Parallel out.

Apparently this particular type of IC is called a 74HC595. Found it on a list of popular ICs for Arduino projects. The linked datasheet is helpful for getting the chip’s timings. Also, the 74x595 is supported by WiringPi, so that’s a drop-in library that looks easy to use.
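For my own reference: the 74HC595 takes one data bit per clock pulse, MSB first, then latches all eight outputs at once. WiringPi would handle this for me, but the bit ordering can be sketched in plain Python (this only computes the sequence you’d clock out on the SER line; actual GPIO wiring and pin numbers are left out):

```python
def bits_for_595(value):
    """Bit sequence to clock into a 74HC595's serial input, MSB first.

    In hardware, each bit would be set on the SER pin followed by a pulse
    on SRCLK; after all 8 bits, a pulse on RCLK latches them to the
    parallel outputs simultaneously.
    """
    if not 0 <= value <= 0xFF:
        raise ValueError("a single 74HC595 holds one byte")
    return [(value >> shift) & 1 for shift in range(7, -1, -1)]

# e.g. lighting alternate LEDs in a cube layer:
print(bits_for_595(0b10101010))
```

The simultaneous latch is the point – the LEDs in a layer all change at once, instead of rippling as bits shift in.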

LEDs need resistors to limit current; this explains why, and how to calculate the resistances needed. That explains why resistors appear in multiple component lists, but not the capacitors… apparently those are for filtering, as well as damping the surge when the LEDs in a layer kick on.
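The resistor calculation is just Ohm’s law applied to the voltage left over after the LED’s forward drop (the example numbers below – a 3.3V supply and a blue LED dropping ~3.0V at 20mA – are assumptions for illustration, not measured values):

```python
def led_resistor(supply_v, forward_v, current_a):
    """Minimum series resistance (ohms) to limit LED current: R = (Vs - Vf) / I."""
    if forward_v >= supply_v:
        raise ValueError("supply voltage must exceed the LED's forward voltage")
    return (supply_v - forward_v) / current_a

# Assumed example: 3.3V supply, blue LED with a 3.0V drop, targeting 20mA.
print(led_resistor(3.3, 3.0, 0.020))
```

In practice you’d round up to the next standard resistor value, which drops the current slightly below the target – erring on the safe side.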


Still to do:

  1. Order Rasp.PI
  2. Order components other than the blue LEDs that I’m prototyping with
  3. Design the circuit.
  4. Run the circuit by people who know stuff
  5. Find out how to do colour changing for RGB LEDs
  6. PWM stuff?


No Comments

OpenVPN & China’s Firewall

Ended up choosing an SSH SOCKS proxy + Tunnelblick because it had the fewest moving parts.

Combined with a passwordless SSH key, I saw this status on Facebook today:

Kyle is truly a computer wizard! as in, his Tunnelblick thingy is working!

Location? China.

Success.

8 Comments

LED Cubes, and Raspberry Pis

Raspberry Pis have been out for a while. And I’m thinking of picking one up for projects over the work term in May – August.  (Also, because a Raspberry Pi is just something interesting to have… the possibilities of a cheap & low-power Linux system are numerous.)

I’m not sure how I got onto the subject of LED cubes today in particular, but there only seems to be a single project doing one with a Raspberry Pi, though many tutorials exist for Arduinos.

Of course, I went poking around for components (particularly LEDs), and there’s a whole bunch of stuff posted on eBay.

Fun times ahead, hopefully…

Including new attempts at soldering! Whee!


No Comments

I’m glad I went

I went to an outdoor music festival thing (Igloo Fest) in Montreal today,  on the recommendation of a friend.

I almost didn’t go. But I’m glad I did.

It was kind of amusing.

They had a fire, and people were giving out marshmallows on sticks, so I grabbed one and toasted it. And as I was standing there, with thousands of watts of speakers thumping about 100m away, the wind kicks up and I end up squinting through the snow and wood smoke with tears in my eyes.

And then two girls next to me kiss each other. Not making out, just a short kiss, and they both walk off, presumably to get marshmallows.

It took a moment to register.

But then I started laughing to myself.

Simply because it was so absurdly different from what I’d experienced so far.

I was standing in the middle of a crowd (I don’t usually like crowds), toasting a marshmallow (Admittedly, I’ve done that before), squinting against driving snow (a rare experience for me) and wood smoke (more experience, but still rare), watching two people share a semi-intimate moment with each other. (Erm… TV.)

I do not believe I’d ever see any combination of that anytime soon in Singapore (even without the snow).

I can’t put my finger on why it struck me so much. But it did.

The only thing I could think of was that I’ve been isolated, partly by choice, partly by circumstances; and now I’m opening up to new experiences. And the world is going “Oh hey, you know that place you lived in for 18 years? Yeah, well, the world is a whole lot more messy. And dirty. And different. But it’s a whole lot more interesting. Enjoy.”

Hello world.

(Alternate title was The Real World Incorporated – from a TDWTF forum topic about expectations not matching to the world. But then I remembered a line from a song I heard somewhere.)

No Comments