Archive for May, 2012

You know you need more RAM when…

aka, my errors, let me show you them.
or, inadvertent DDoSes are fun when you do them to yourself!
finally, A: “I can see 4GB of RAM but I can only use 1.5.” Q: What is “Why I loathe 1and1”?

sudo -i
sudo: unknown uid: 10003
sudo: cannot fork: Cannot allocate memory

-bash: /usr/bin/sudo: Argument list too long

ps aux|grep httpd|wc -l
 -bash: fork: Cannot allocate memory
 -bash: start_pipeline: pgrp pipe: Cannot allocate memory
 -bash: ps: command not found
 -bash: /bin/grep: Cannot allocate memory
[s15353558 ~]$ sudo -i
 -bash: /bin/egrep: Cannot allocate memory
 -bash: echo: write error: Cannot allocate memory
 -bash: fork: Cannot allocate memory
 -bash-3.2# whoami
bash: fork: Resource temporarily unavailable
[root@… conf]# screen
bash: /bin/grep: Argument list too long
bash: /bin/grep: Cannot allocate memory
bash: /usr/bin/id: Cannot allocate memory
bash: [: =: unary operator expected
bash: /sbin/consoletype: Cannot allocate memory
bash: /usr/bin/id: Cannot allocate memory
[root@… conf]#
[root@… conf.d]# vim pagespeed.conf
 Vim: Warning: Output is not to a terminal
[root@… conf]# service httpd graceful
httpd: Syntax error on line 190 of /etc/httpd/conf/httpd.conf: Cannot load /etc/httpd/modules/ into server: /etc/httpd/modules/ cannot open shared object file: Too many open files in system
[root@… conf]# service httpd graceful
/usr/sbin/apachectl: line 102: 16247 Killed                  $HTTPD $OPTIONS -t >&/dev/null
apachectl: Configuration syntax error, will not run "graceful":
httpd: Syntax error on line 169 of /etc/httpd/conf/httpd.conf: Cannot load /etc/httpd/modules/ into server: /etc/httpd/modules/ failed to map segment from shared object: Cannot allocate memory
[root@… conf]# service httpd graceful
apachectl: Configuration syntax error, will not run "graceful":
/usr/sbin/httpd: error while loading shared libraries: cannot open shared object file: Error 23
[root@… conf]# service httpd graceful
/etc/rc.d/init.d/functions: line 19: /sbin/consoletype: Cannot allocate memory
/etc/profile.d/ line 53: /sbin/consoletype: Cannot allocate memory
/usr/sbin/apachectl: fork: Cannot allocate memory
[root@… conf]# service httpd graceful
/etc/rc.d/init.d/functions: fork: Cannot allocate memory
/etc/profile.d/ fork: Cannot allocate memory
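For what it’s worth, once a shell finally sticks, a rough triage goes something like this (a sketch – assuming an Apache prefork box like this one; the MaxClients value is a made-up starting point, not a recommendation):

```shell
# How much memory is actually left?
free -m
# What is fork-bombing the box? Count processes by command name.
ps -eo comm= | sort | uniq -c | sort -rn | head -n 5
# If httpd tops the list, cap the prefork MPM in httpd.conf, e.g.:
#   MaxClients 20    # hypothetical -- tune to available RAM / per-child RSS
```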


Quick tip: Use a QR code to get a site on your mobile device before you leave the house

A few times I’ve been in a rush to get somewhere, but have an article or two open on my screen that I’m in the middle of reading. Before I got a shiny new Galaxy Nexus, I used to email the link to myself, and open the email and then open the link.

It wasn’t too bad, but even for a single article it was tedious and annoying – composing a whole email just to carry one link, while rushing out the door.

And then I recently had a case where I had a long bus + train ride ahead of me, so I wanted to get a long article that I was reading on my desktop onto my phone. I don’t know why or how I thought of it, but the idea of using a QR code to get the URL onto my phone came to me. Possibly because I was reading an article lambasting the use of QR codes as links on websites.

Which is completely correct – QR codes are meant to carry data into a device, so using them as links within a website defeats the purpose. But carrying data from my desktop to my phone is exactly what they’re good at – all I had to do was turn the page’s URL into a QR code on my screen and scan it!

Some Google-fu later (for “QR code generator”) brought me to a generator site, which was a perfect solution. Now, all I have to do is open that site, paste the page URL in, and snap a picture using the Android app Barcode Scanner. The app then offers to open the URL in the Android browser.

Which felt very easy. Easier than emailing stuff to myself.



3 things about backups

In this post I’ll cover 3 things you should know about backups. By the end of it, you should understand the different types of data involved in a backup, why backups must be automated, and some programs which can help you with them.

3 different types of data

(Or, why not all data is equal)

Data can generally be classified into 3 groups:

  1. Important stuff that changes frequently – Think work documents, reports and the like
  2. Important stuff that is not changed frequently – Think family photos, videos, music
  3. Unimportant stuff – Stuff like recorded TV shows, or backups of your DVDs

If you want to get technical, you can split ‘unimportant stuff’ by whether it changes frequently or rarely, but this post is about keeping backups simple, so I’m ignoring that distinction.

These distinctions are quite important – the ideal type of backups for each is different. For stuff that changes frequently, a backup process that runs constantly and allows you to retrieve old versions of a file (in case you inadvertently delete a crucial section of your report, for example) is better than something that’s run once a week. But for the stuff that’s changed irregularly, you could probably get away with weekly backups.

I’d wager that most people think of backups as plugging in an external drive and copying files over. Which leads me to my next point: If that is your backup strategy, when was the last time you did it?

Automation of backups

See, backups are worthless unless they’re done regularly. In the first 4 months of the year, I shot 131GB worth of photos. All of them were automatically added to my backups. If I had to do it manually, I’m not sure I would have backed them up.

Which is my point: it’s not a backup unless it’s done without intervention. Human nature is simple – we don’t really want to do stuff. If we don’t have to do it, and we don’t want to do it, face it: we’re not going to do it. Surveys have shown that more than half of computer users back up less than once a year. So, yeah, backups aren’t sexy. In fact, you hope never to need them. But it’s when you don’t have them that you wish you did.
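(On Linux, “without intervention” can be as little as one cron entry. A hypothetical example – the schedule and paths are placeholders:)

```shell
# Added via `crontab -e`: mirror the home directory to the backup
# drive at 02:30 every night, deleting files that no longer exist.
30 2 * * * rsync -a --delete /home/me/ /mnt/backup/home/
```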

Programs to help you

So, the third and final point. Programs that will help you backup.

First, we’ll look at the important stuff that changes regularly. For that, take advantage of cloud sync tools like Dropbox or SkyDrive. They’re intended for synchronizing two or more computers, but the nice thing is that they keep copies of your stuff on their servers, so they act like a backup. And Dropbox allows you to restore old copies of files from the past 30 days. That meets our needs for backing up important, frequently changed documents. (Not to mention that for stuff like reports, it’s nicer to email people a link to a file rather than attach the entire file, especially if it’s a big one.)

Next, the important stuff that does not change regularly. For that, I look at online backup tools. The nice thing about them is that they’re offsite, so you don’t lose everything if your house burns down. If you’ve got one or two computers to back up, I can’t recommend BackBlaze enough. For less than the price of a coffee at Starbucks each month, you can keep your computer backed up online. If you’ve got more computers though, BackBlaze loses out to CrashPlan, which currently has a plan that covers up to 10 computers for USD$6/month. (That’s the plan that I’m personally using.)

And, finally, the unimportant stuff – stuff that you can live with losing. Good news for those who back up with external drives: your investment hasn’t gone to waste. This is the perfect use for that external drive. Programs like SyncToy or SyncBack allow you to synchronize the files on your desktop/laptop’s drive to the external drive. (I’ve also seen Karen’s Replicator mentioned favourably, but I haven’t heard as much about it.)

And if you want a scorched earth/bare metal backup policy… I shall point you at DriveImage XML. Like Acronis True Image, but free. I tend not to bother with that, because if my computer fails, it was probably overdue for an OS reinstall and the associated program cleanup anyway.



cron screwiness

While trying to diagnose a problem with my VMs (namely, why starting a Fedora 16-based VM fails to bring up the network connection), I ran into a strange issue – on my dom0 and some of the domUs, logrotate hadn’t been running, leaving me with insanely long logfiles!

So I set out trying to understand where the problem came from. Both the F16-based dom0 and domU had old logs, but my F14 domU was still working. That pointed to an update as the culprit. And because logrotate never ran, I still had the full yum logs of what was installed.

logrotate by default is set up to append the date to the rotated filename, so I knew when logrotate was last run – 20111127. So anything installed between 27th November and 4th December could be the culprit. To start with, I looked at cron in particular, since that’s what’s supposed to run logrotate.
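(That last-run date is easy to pull out mechanically – since the date is embedded in the rotated filename, the newest -YYYYMMDD suffix in /var/log is the last time logrotate fired. A sketch:)

```shell
# List rotated logs, strip everything but the 8-digit date suffix,
# and take the newest one.
ls /var/log \
  | grep -E -- '-[0-9]{8}$' \
  | sed -E 's/.*-([0-9]{8})$/\1/' \
  | sort -n \
  | tail -n 1
```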

When I checked in my dom0, I got this:

[root@… log]# grep cron yum.log
Dec 01 00:23:46 Updated: cronie-anacron-1.4.8-2.fc14.x86_64
Dec 01 00:23:48 Updated: cronie-1.4.8-2.fc14.x86_64
Dec 01 01:38:54 Updated: cronie-anacron-1.4.8-2.fc15.x86_64
Dec 01 01:38:56 Updated: cronie-1.4.8-2.fc15.x86_64
Dec 01 01:38:58 Updated: crontabs-1.11-2.20101115git.fc15.noarch
Dec 01 03:05:41 Updated: cronie-anacron-1.4.8-10.fc16.x86_64
Dec 01 03:05:43 Updated: cronie-1.4.8-10.fc16.x86_64

Which showed me that I upgraded my dom0 to f16 on December 1st. Which is kinda helpful – I did upgrade cronie and cronie-anacron. Also, crontabs was upgraded, but only in F15. Then I looked at my domU. Again, judging from the dates in /var/log, I was looking for something between November 7 – 13. And lo and behold, I found lines that looked exactly like those in my dom0:

[root@… log]# grep cron yum.log
Aug 20 15:20:05 Updated: cronie-1.4.8-2.fc14.x86_64
Aug 20 15:20:05 Updated: cronie-anacron-1.4.8-2.fc14.x86_64
Nov 07 18:33:50 Updated: cronie-anacron-1.4.8-2.fc15.x86_64
Nov 07 18:33:54 Updated: cronie-1.4.8-2.fc15.x86_64
Nov 07 18:33:54 Updated: crontabs-1.11-2.20101115git.fc15.noarch
Nov 07 20:50:42 Updated: cronie-anacron-1.4.8-10.fc16.x86_64
Nov 07 20:50:44 Updated: cronie-1.4.8-10.fc16.x86_64

Except the F14 update was done in August, before the problem started. So, hmm, maybe the upgrade to Fedora 15 broke it. It would certainly explain why the F14 domU is still working fine. And I just happened to have an F15 domU that I had yet to upgrade to F16.

But it was strange – when I checked it, logrotate was working fine. /var/log had stuff timestamped 20120429, so the upgrade to F15 couldn’t have been the problem. logrotate kept an old version of yum.log, so I was able to look for cron related entries:

[root@… log]# grep cron yum.log*
yum.log:Apr 19 13:16:38 Updated: cronie-anacron-1.4.8-4.fc15.x86_64
yum.log:Apr 19 13:16:40 Updated: cronie-1.4.8-4.fc15.x86_64
yum.log-20120101:Sep 12 18:54:29 Updated: cronie-anacron-1.4.8-2.fc15.x86_64
yum.log-20120101:Sep 12 18:54:30 Updated: cronie-1.4.8-2.fc15.x86_64

Which wasn’t very helpful – the F15 domU was running virtually the same version I had upgraded my F16 systems to – 1.4.8. So it wasn’t the F14 -> F15 upgrade. So maybe it was the F15 -> F16 upgrade?

I puzzled over this, and then I saw a log named ‘cron’. It turned out to be the real key:

[root@elemental log]# tail -n 5 cron
Dec  1 01:39:01 elemental crond[2011]: (*system*) RELOAD (/etc/cron.d/0hourly)
Dec  1 01:41:01 elemental crond[2011]: (*system*) RELOAD (/etc/cron.d/smolt)
Dec  1 01:43:01 elemental crond[2011]: (*system*) RELOAD (/etc/cron.d/sysstat)
May  1 23:49:29 elemental crontab[9780]: (root) BEGIN EDIT (root)
May  1 23:49:41 elemental crontab[9780]: (root) END EDIT (root)

The crontab entries made sense – I ran crontab -e to see if logrotate was in root’s crontab (which it wasn’t) – but the crond entries were interesting: for one, they showed me the exact time it stopped, but they also reminded me that cron runs as a daemon, crond. And guess what was part of the F15 -> F16 upgrade? Moving stuff to the new systemd.

So I quickly checked crond’s status on an F16 machine:

[root@… log]# systemctl status crond.service
crond.service - Command Scheduler
          Loaded: loaded (/lib/systemd/system/crond.service; disabled)
          Active: inactive (dead)
          CGroup: name=systemd:/system/crond.service

Which, oops, that looks bad. But the F15 machine looked a lot better:

[root@… log]# systemctl status crond.service
crond.service - Command Scheduler
          Loaded: loaded (/lib/systemd/system/crond.service)
          Active: active (running) since Mon, 30 Apr 2012 14:16:01 +0800; 1 day and 10h ago
        Main PID: 831 (crond)
          CGroup: name=systemd:/system/crond.service
                  └ 831 /usr/sbin/crond -n

And finally the F14 machine looked like this:

[root@… init.d]# service crond status
crond (pid  1118) is running...

So it seems the fix is to enable crond. I have no clue why crond ended up disabled by the F15 -> F16 upgrade, but I’m seeing it on two different F16 systems, both of which were upgraded from F15. I might do a clean install of F16 and look, as well as try another F15 -> F16 upgrade, but that’s for later.

For now, I’m doing a systemctl enable crond.service followed by systemctl start crond.service on the three F16 systems and calling it a day.
