About Lee Devlin

I'm Lee Devlin from Greeley, Colorado.

Ubuntu 9.04 upgrade and flash boot

I upgraded 3 dual-boot PCs to Ubuntu 9.04 (aka Jaunty Jackalope) over the weekend. One of the motivations for the upgrade was that I had heard it booted in less than a minute, and I can happily report that it really does. I found it easier to download the CD image via BitTorrent than to let each computer pull down a complete update individually. One computer was still running Ubuntu 7.10 and needed the 8.10 CD to upgrade it first. One of the benefits of Ubuntu is that updates are easy, but you do have to boot and run the computer periodically, because you generally can’t upgrade to the latest release until you have all the updates for the previous release. On a dual-boot computer that is only booted into Ubuntu infrequently, it’s easy to fall a full release behind, which is what happened with my 7.10 machine. Having a physical CD has another benefit, which I’ll get into later.

I’ve noticed that my Windows computer becomes like a ship at sea, collecting software much the same way that a ship collects barnacles. The accumulation appears to have the same effect, too: it slows everything down. My friend Chris gets around this problem by completely reinstalling Windows every day. OK, that’s a bit of an exaggeration, but he rarely goes a month without doing a fresh Windows install. This keeps his Windows boot times down to less than a minute, which is what my boot times were when I had a fresh XP install. How I long for those days. But the OS is just a small part of what I use on my Windows computer. The applications I have would take a day to re-install, so re-imaging the OS every month is not practical. And all my customizations would similarly take time to reconfigure. For example, whatever motivated Microsoft to hide file extensions by default? I think that ranks up there as the dumbest idea ever brought to us by our friends from Redmond.

My Windows XP boot times and shutdown times on my main desktop are getting so long that I tend to leave it on all the time. In case you think I’m a Luddite for using XP, I have tried Vista and found that it provided no advantages over XP and hid everything I knew how to find previously. I watched my productivity plummet as I struggled to find all the useful things you need on a Windows computer (like a DOS prompt ;-). I will wait for Windows 7 and see if they fix that egregious error, i.e., a gratuitous rearranging of menus and locations of utilities. Now that Linux is booting in less than a minute, I may use it instead of Windows for many of my computing needs. I may even turn it off at night. The majority of what I do on the computer is related to editing, email, and Internet access, which Linux does just fine. I switched over to Thunderbird and Firefox more than a year ago and don’t miss Outlook or IE at all. I’ve also been relying more on Gmail to consolidate most of my email addresses and it works great on Thunderbird, or the web browser, or even on the iPhone.

Who wants to wait 5 minutes to look up something on the Internet? Not me, and I know I’m not alone. More and more people are leaving their computers on all day (and sometimes all night) to avoid enduring a long boot time. Leaving a computer on 24×7 wastes energy, of course, but it saves time, and the cost of the wasted electricity will be less than the cost of lost productivity if you’re continually waiting for a reboot because you shut the machine off whenever you don’t need it. A typical desktop uses about 1 cent in electricity per hour (assuming a 100 watt average draw), or about a fifth of that for a laptop. Even at the U.S. minimum wage of $6.55/hour, a person’s time is worth more than 10 cents a minute. But if you leave several desktop computers on all month long, each one adds about $7.30 to your monthly electric bill, so even though a penny an hour doesn’t sound like much, it does add up over time, especially if you’ve got some fire-breathing gaming PC with 10 fans trying to keep it cooled, like some people I know.
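The penny-an-hour figure is easy to sanity-check with a little shell arithmetic. This is just a back-of-envelope sketch; the 100 watt draw is the figure from above, and the 10-cents-per-kWh rate is my assumed round number (yours will vary):

```shell
# Monthly cost of one always-on desktop, using assumed round figures:
# 100 W average draw, ~730 hours in a month, ~10 cents per kWh.
watts=100
hours=730
rate_cents=10                         # cents per kWh (assumed rate)
kwh=$(( watts * hours / 1000 ))       # 73 kWh per month
cents=$(( kwh * rate_cents ))         # 730 cents
printf 'Monthly cost: $%d.%02d\n' $(( cents / 100 )) $(( cents % 100 ))
```

That works out to $7.30 a month per desktop, which is where the figure above comes from.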

The culprits that really slow down the computer in my case are those annoying yet necessary programs with services that run all the time in the background. The instant messenger clients have become particularly bad over time, with Yahoo and MSN loading up many unrelated items to try to get you to visit their sponsors’ websites. I have found that Pidgin on Linux helps consolidate multiple chat accounts into one client that isn’t a constant source of spam or other distractions.

I like to use Linux because it doesn’t seem to suffer the performance-robbing effect of accumulating programs the way Windows does. Perhaps Linux programs just work better together and go to sleep when not in use. They don’t seem to affect the shutdown time as much either. During a shutdown, Windows invariably has to wait for programs that are no longer responding (and then kill them), and it takes a while to get through the list. It’s no wonder people dread a reboot when installing software on Windows. The Ubuntu systems shut down completely in about 15 seconds or less. I often wait several minutes for my Windows systems to shut down, and the shutdown time seems to increase in proportion to how long the computer’s been running.

I can go for many hours sometimes without having to use Windows. But I don’t think I can use Linux exclusively. Invariably, there’s a program that I’ll need to use that only runs on Windows. Also, I don’t want to be one of those annoying Linux bigots who haughtily dismisses anyone who uses Windows (or Macs).

One of the other issues associated with having multiple computers is that no two will be alike. Your bookmarks, applications, plugins, files, email, etc., all tend to require local data and settings that don’t propagate to the other computers. The proposed solution, of course, is to move all your data and applications to the ‘cloud’. I have done that with my bookmarks, using Google’s bookmark app and toolbar, but it’s hard to get everything into the cloud, and the cloud is not without its own set of issues. One of them is that you need a persistent Internet connection, without which you can’t get anything done. You also place a lot of trust in the vendor who runs the cloud, perhaps too much trust, to keep your data safe and private. So far, many cloud services are ‘free’, but in the future you may have to pay a fee to keep using them, and they’ll only implement that policy after you’ve become completely dependent on them.

I’ve experimented with a combination of portable computing approaches over the years, including flash drives (both U3 and PortableApps), but I’ve been thinking that a flash drive with the OS, programs, and data all on it would be even more useful. Well, there is a feature in Ubuntu that lets you take a live CD and make a complete bootable operating system on a USB thumb drive. Now that flash drives are big enough to store not just data but all the applications as well as an entire operating system, it just may be time for the flash-based virtual computer. I made up an Ubuntu boot flash drive yesterday and found that it booted perfectly on 4 separate computers. It usually takes a bit of fiddling with the BIOS to get it to work, but it comes very close to a total computing environment that fits in your pocket and remembers its state and all the other things that a truly ‘personal’ computer will remember.

USB flash memory isn’t as fast as a hard drive, taking several minutes to boot the OS, but it’s actually quite usable. Putting all your data on one device, however, is putting all your eggs in a single basket. If you lose the flash drive, it’s almost as bad as having your computer stolen, so it needs a workable backup solution too. That shouldn’t be too difficult if you copy your data to a networked backup drive whenever you’re working. You don’t really need to copy the applications or OS, since those can easily be downloaded again.

I really like the idea of the USB bootable flash drive. It had been a few years since I last experimented with the concept using DSL (Damn Small Linux), which, at an image limit of 50MB, was just too ‘DS’ for me. But with an 8GB flash drive, I easily fit a complete Ubuntu 9.04 distribution on it, and I put Apache, MySQL, and PHP on it as well. Imagine that, a server that fits in the palm of your hand! Well, almost, since you still need a motherboard to run it on. Next time, I’ll write about my new low-power computing platform built from an Atom Mini-ITX board and chassis.

Solar array is up and generating…

I flipped the switch on the solar array today and watched my electric meter begin to run backwards, erasing not just today’s electricity usage, but most of yesterday’s as well. Today was a very sunny day in Colorado. These words were written on a computer that was, at the time of the writing, operating on solar energy alone.

For as long as I can recall, I’ve always wanted to own a house that ran on solar energy. My dad worked on the very first government communications satellites back in the 60’s and 70’s and he’d sometimes bring home bits and pieces of that project for my amusement. One of those early artifacts was a solar cell which is one of the technologies that allowed satellites to be practical in the first place. I remember being fascinated as I watched the solar cell power a small motor from a lamp. This was long before solar cells started showing up in calculators (which didn’t even exist at the time). The solar cells I played with back then are very similar to the ones that are now powering our entire house.

A 5.6 kW Solar Array Generates all our electricity

This solar installation uses a method called ‘net metering’, which feeds any excess electricity to the grid for use by my neighbors when the sun is shining. During this time, my meter runs backwards. After the sun goes down, my meter runs forward again. Based on the size of the array and our annual electricity usage, our house should have net zero electricity consumption over the course of the year. A net metering system has a few advantages over batteries because I don’t have to worry if we get several days with no sun, since I’m still hooked up to the grid. Also, a bank of batteries to hold just a day’s worth of electricity would be enormous, weighing over 2000 lbs. and they would also be costly. The savings from generating your own electricity are real, since for every kWh I generate, it means less coal or natural gas that needs to be burned back at the power plant.
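The 2000 lb battery figure checks out on the back of an envelope. Here’s a sketch, where the daily usage, the energy density of lead-acid batteries, and the usable depth of discharge are all my assumed round numbers, not measurements from this system:

```shell
# Rough mass of a lead-acid bank holding one day of electricity.
# Assumed round figures: ~20 kWh/day household usage, ~35 Wh/kg
# for lead-acid cells, and only 50% usable depth of discharge.
daily_wh=20000
wh_per_kg=35
kg=$(( daily_wh * 2 / wh_per_kg ))    # x2 for 50% depth of discharge
lbs=$(( kg * 22 / 10 ))               # 1 kg is about 2.2 lbs
echo "About ${lbs} lbs of batteries for one day's storage"
```

With those assumptions it comes out around 2,500 lbs, so “over 2000 lbs” is, if anything, conservative.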

I’ve always looked at the large south facing roof of our house as a perfect location for a solar array and now it’s here.

For those interested in specifics, the system includes 32 Sharp 176W panels connected in 2 strings feeding a Sunny Boy inverter. Total capacity is 5.6 kW.

Looking for a Twitter app…

In order to improve my productivity, I am looking for a Twitter application with the following automation features:

1. Tweet a quote from some famous person every 5 minutes. I have a book of over 2,800 quotes, and it would be ideal if it could be scanned into a database so the app could feed the contents into my Twitter stream. It would take about 10 days to cycle through all the quotes at that rate. After it’s over, I want it to loop continuously for the benefit of my new followers and in case someone missed one of the quotes.

2. Check the local weather and send a message to all my peeps about what it looks like outside my window, at least 5 or 6 times a day. It should also tell people when it’s getting dark in my neighborhood.

3. Connect to a pillow sensor so that when I’m hitting the hay, everyone will know, as I’m sure they are curious. It should issue a random yet clever statement with the word ‘pillow’ somewhere in it.

4. Each morning when I arise, it must proclaim that momentous event and simply send the phrase, “Mornin’ Peeps!!!”

5. Whenever Guy Kawasaki tweets anything, which happens about 300-400 times a day, the app should be the first to Re-Tweet it, ideally within 30 milliseconds so I can get my mug to appear in the Tweet stream before his next posting, if possible. For an extra bonus, remove any gratuitous references to alltop.com.

6. It should monitor for any DMs sent to me and forward them to my spam bucket, because, frankly, I just don’t have the time to check my Twitter DMs.

7. It should search through Google’s newsfeed and tweet the top headlines as they change every 3 minutes. It should insert ambiguous and random catch phrases that go something like “This is cool!”, or “Can you believe this?!” in front of the tinyurl link.

8. Harvest the entire Twitter member database and follow everyone.

9. Auto-follow anyone who somehow manages to follow me before I can follow them. It must then send them a Tweet, an email, and a phone text telling them how much I appreciate their follow and how I intend to hang on their every word.

10. If anyone should ever stop following me, notify me about it immediately, so I can launch a marketing campaign to get them back, ASAP, unless it’s someone who doesn’t Tweet every hour, because I really could care less about those kinds of people.

11. Send out some blip.fm song link every 10 minutes that will make my followers think I have very sophisticated musical taste.

Have I left any out? Feel free to add your own ‘must have’ Twitter automation features in the comments…

🙂

UPDATE 2009-03-21: Just in case the satire didn’t shine through, I think that automation in social networking is a slippery slope that eventually ruins the experience. People who engage in the techniques above make me want to ‘unfollow’ them on Twitter.

Converting from Blogger to WordPress

For some time, I’ve wanted to change the landing page of my website to my blog, since it was the only part of my website that was changing on a regular basis. I figured the best way to do that was to switch from the blogging tool I had been using for years (Google’s Blogger) to a more full-featured solution that made adding content and customizations easier. After briefly considering Content Management Systems such as Joomla and Drupal, I ended up choosing WordPress. WordPress isn’t really a full content management system, but I had played around with it before using an account on WordPress.com, which anyone can get for free. What attracted me to WordPress was that it installs the scripts and database locally and that it is open source. Because it’s open source, it has attracted a number of developers who have written plug-ins for it; there are more than 4,000 WordPress plugins available. It also uses ‘widgets’, which let you customize the sidebar with things like a calendar, archive list, blogroll, and many other features. Blogger had the ability to do this too, but much of it required you to go in and edit the template, which was painful and prone to error. And if you ever switched templates, you had to start over with your customizations. Because I host the blog on my own domain, Blogger also required me to completely regenerate every page whenever I made the slightest change to the template, and that was taking longer and longer as my archive of postings grew.

WordPress uses PHP and MySQL to serve its pages, so if I make a change to the template, it doesn’t need to regenerate any pages; they are generated on demand. I had hoped that by using WordPress’s pretty permalinks all of my Blogger links could be preserved so as not to lose search engine traffic, but because I was moving the whole blog up a level in my domain, and because every page in a WordPress blog is essentially a PHP script, it didn’t work out that way. So I’m becoming skilled at adding ‘301 redirects’ for my more popular pages in the .htaccess file to maintain my website’s search engine mojo.
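For anyone attempting the same move, each redirect is a one-liner in .htaccess using Apache’s mod_alias. The paths below are made-up examples of a Blogger-style URL and a WordPress permalink, not my actual pages:

```apache
# .htaccess: permanently (301) redirect an old Blogger-era page to
# its new WordPress permalink. Paths are illustrative examples.
Redirect 301 /blog/2008/05/some-old-post.html /some-old-post/
```

A 301 (as opposed to a 302) tells search engines the move is permanent, so the old page’s ranking transfers to the new URL rather than being lost.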

Another big change I made last month was to switch hosting services, from a Windows IIS-based platform, something I chose about 8 years ago without thinking about it too much, to a Linux Apache-based platform that now allows me SSH login privileges. I think that once you have a website up and running, there is a natural reluctance to make major changes, because you never know how much work it will be to get all the pages and email addresses working again. Windows doesn’t care about capitalization in a URL, so a file called ‘image.JPG’ and one named ‘image.jpg’ are the same file. Not so on a Linux system, which can lead to a lot of broken links if you were sloppy when you created the original links. I also had a number of sites I help host as a ‘re-seller’ under my previous web host plan, and I took the opportunity to combine them all into subdomains under my top-level domain. This makes them easier to manage and cheaper for everyone.
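One way to hunt for those capitalization landmines before they turn into broken links is to list any filenames that collide when case is ignored. A sketch, run from the web root after copying the files over:

```shell
# Print filenames that differ only by case -- these were "the same"
# file on the old Windows/IIS host, but are distinct files (and
# likely broken links) on a case-sensitive Linux filesystem.
# sort -f sorts case-insensitively so variants end up adjacent;
# uniq -d -i then prints one line per case-insensitive duplicate.
find . -type f | sort -f | uniq -di
```

Any filename this prints has at least one case-variant twin, so links written with the “wrong” capitalization will 404 on the new host.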

I started taking two classes at the local college in January on web technologies that I knew about but felt I had only a superficial understanding of. It’s pretty easy to learn HTML through osmosis, by using the ‘view source’ feature and referring to a manual. But the technologies that run the web now, namely CSS, Javascript, PHP, and content databases, have completely changed how the web works. You really have to understand a lot more to put together a website these days, and you can’t do it by making static HTML pages. So I’m going through the rigor of taking these classes, doing the assignments and projects, and having many ‘a ha!’ moments where something that was confusing suddenly makes sense. I’ve also found myself in the role of tutor to others in the class. I welcome that opportunity, because the best way to really learn a subject is to attempt to teach it. Sometimes I feel a little like the blind leading the blind, but I am learning the material better as a result of tutoring others.

I had used the now-abandoned Microsoft FrontPage to generate my static web pages, something that always gets an audible groan from any self-respecting web developer, but now I’ve begun using Dreamweaver instead. However, in one of the classes, where we use Linux exclusively, I’ve switched to just gedit and Firefox as my sole web development tools. I had tried this approach a long time ago with Notepad and Internet Explorer, but it was painful because Notepad doesn’t color-code the text to help alert you to formatting errors, which are very easy to make in HTML and CSS files. But gedit on Linux (or Notepad++ on Windows) has almost made the WYSIWYG web editor obsolete. With Firefox plugins like Firebug to help debug CSS and Javascript, you can do web development without costly tools like Dreamweaver, which only seems to get in the way when pages depend on Javascript and CSS to render properly.

If you’re interested in changing over from Blogger to WordPress, drop me an email and I’m sure I can help now that I’ve just done it. I had thought it would be as simple as an XML export/import, but it turned out to be much more complicated, requiring multiple steps along with an account on WordPress.com to talk directly to an account on Blogspot.com in order to get the content to import properly.