<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[speakofthedevel.com]]></title><description><![CDATA[Software developer by profession. Hardware enthusiast, hockey watcher, and/or baritone saxophonist by night.]]></description><link>https://speakofthedevel.com/</link><generator>Ghost 0.11</generator><lastBuildDate>Thu, 26 Feb 2026 19:32:02 GMT</lastBuildDate><atom:link href="https://speakofthedevel.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Surviving Ubuntu's perpetual out of space boot partition]]></title><description><![CDATA[Ubuntu's default boot partition is too small. Using two techniques can help prevent apt upgrade errors due to being out of space on your boot partition.]]></description><link>https://speakofthedevel.com/surviving-ubuntus-out-of-space-boot-partition/</link><guid isPermaLink="false">c793376b-c1eb-4c99-b2f1-79f95c2743cf</guid><category><![CDATA[Servers]]></category><dc:creator><![CDATA[Kendrick Erickson]]></dc:creator><pubDate>Fri, 23 Dec 2016 04:02:25 GMT</pubDate><media:content url="https://speakofthedevel.com/content/images/2016/12/apt-get-failure.png" medium="image"/><content:encoded><![CDATA[<img src="https://speakofthedevel.com/content/images/2016/12/apt-get-failure.png" alt="Surviving Ubuntu's perpetual out of space boot partition"><p>I'm somewhat guilty of just using the default partitioning schemes when installing recent versions of Ubuntu (currently 16.04 LTS). I don't want to bother with manually setting up LVMs and encrypted partitions and segregating log data from user data and all of that. I just need a root partition and I'll handle everything else on my own. Unfortunately, that means Ubuntu gives you a &lt;256MB /boot partition. That's not enough.</p>

<p>Something like this happens while going about your business upgrading your system:</p>

<pre>
update-initramfs: Generating /boot/initrd.img-4.4.0-57-generic

gzip: stdout: No space left on device
E: mkinitramfs failure find 141 cpio 141 gzip 1
update-initramfs: failed for /boot/initrd.img-4.4.0-57-generic with 1.
run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1
dpkg: error processing package linux-image-extra-4.4.0-57-generic (--configure):
 subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of linux-image-generic:
 linux-image-generic depends on linux-image-extra-4.4.0-57-generic; however:
  Package linux-image-extra-4.4.0-57-generic is not configured yet.

dpkg: error processing package linux-image-generic (--configure):
 dependency problems - leaving unconfigured
</pre>

<h2 id="whatsgoingon">What's going on?</h2>

<p>If you're keeping up with your regular <code>apt</code> updates, you're usually keeping around four or so kernels on your boot partition. Generally, that includes the initial kernel that was installed with the system and the three most recent kernels (and if you haven't restarted in a while, your currently running kernel too). The problem is that you're now basically out of space.</p>
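<p>To see how crowded things are on your own system, you can list the installed kernel packages and check the partition usage (a quick sketch; exact package names vary by release):</p>

<pre><code># list the installed kernel image packages
dpkg --list 'linux-image-*' | grep '^ii'

# see how much of the boot partition is in use
df -h /boot
</code></pre>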

<p>Being out of space doesn't really manifest as a problem until you go to update your system again. <code>apt</code> will dutifully download the packages and install the kernel, library modules, header files, and whatever else you've installed or required. But by the time it gets around to rebuilding the initramfs images, you'll find that you're out of space on your boot partition, and things go south. Hopefully it's only the initramfs steps that fail, but sometimes additional packages queued to be installed or updated get caught in the crossfire.</p>

<p>Usually at this point you can run <code>apt autoremove</code> to reclaim some space from a now-obsolete kernel and then manually reinstall the failed kernel packages. Sadly, even diligently running autoremove before every update doesn't guarantee you'll avoid this.</p>
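<p>The typical recovery sequence looks something like this (assuming it was only the kernel packages that were left half-configured):</p>

<pre><code>sudo apt autoremove   # reclaim space from obsolete kernels
sudo apt install -f   # finish configuring the packages that failed
</code></pre>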

<p>So what can you do?</p>

<ol>
<li>Periodically running <code>apt autoremove</code> helps, but won't always prevent the problem.</li>
<li>There does not appear to be a way to configure the number of kernels that are kept installed.</li>
<li>Repartitioning a more-or-less production system is a headache.</li>
</ol>

<h2 id="keepingbootcruftfree">Keeping <code>/boot</code> cruft free</h2>

<p>What's left is a couple of techniques for keeping the boot partition slimmed down. The first is a script called <code>purge-old-kernels</code> that minimizes the number of installed kernels, and the second is adjusting how <code>initramfs-tools</code> creates the initrd images.</p>

<h3 id="purgeoldkernels"><code>purge-old-kernels</code></h3>

<p>This script created by Dustin Kirkland happened to be installed by default on my machine as part of the <code>byobu</code> package.<sup id="fnref:1"><a href="https://speakofthedevel.com/surviving-ubuntus-out-of-space-boot-partition/#fn:1" rel="footnote">1</a></sup> The author <a href="http://blog.dustinkirkland.com/2016/06/purge-old-kernels.html">goes into more detail</a> on his blog about what the utility is and does, and what to do if you're not running 16.04+. It is as simple as running:</p>

<pre><code>sudo purge-old-kernels
</code></pre>

<p>The two things we need to know are that it won't attempt to remove a running kernel, and that (by default) it keeps two additional kernels. Periodically running this (or maybe even just running it before a kernel package upgrade) should keep enough space available.</p>
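<p>If two extra kernels is still too many for your partition, the script also takes a <code>--keep</code> option (check <code>purge-old-kernels --help</code> to confirm the flags available in your version):</p>

<pre><code>sudo purge-old-kernels --keep 2
</code></pre>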

<h3 id="trimminginitrdimages">Trimming initrd images</h3>

<p>You'll probably find that the biggest disk space offenders in your boot partition are the initrd images, which Linux uses while booting to speed up the loading of modules and other things it might need to bring up your system (such as more esoteric file system or network drivers).<sup id="fnref:2"><a href="https://speakofthedevel.com/surviving-ubuntus-out-of-space-boot-partition/#fn:2" rel="footnote">2</a></sup></p>

<p>Changing the initramfs options to include only the modules our kernel actually uses, and switching to a (slightly) more efficient compression algorithm, will significantly slim down those images. On Ubuntu, we can edit <code>/etc/initramfs-tools/initramfs.conf</code> and then rebuild the images.</p>

<p>The two lines to edit in <code>initramfs.conf</code> are:</p>

<pre><code>MODULES=dep    # Rather than MODULES=most
COMPRESS=lzma  # Rather than COMPRESS=gzip
</code></pre>

<p>You can choose any of the listed compression algorithms; I just happened to pick lzma. Once you've edited the configuration file, you'll need to rebuild the images. Though, if you're paranoid like me, you'll first run <code>uname -r</code> to find your currently running kernel version and copy its initrd image to a safe place, just in case.</p>
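<p>Backing up the running kernel's image might look like this (the backup destination is arbitrary):</p>

<pre><code>uname -r                       # e.g., 4.4.0-57-generic
sudo cp /boot/initrd.img-$(uname -r) /root/initrd.img-$(uname -r).bak
</code></pre>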

<pre><code>sudo update-initramfs -u -k all
</code></pre>

<p>On my machine, doing those two things dropped individual image sizes from 35MB to around 9MB. Multiply those savings by the number of kernels you have installed, and that can be nearly half of the default partition size. A quick reboot to ensure that your new initrd images are working is all that's needed to finish this task up.</p>
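<p>Listing the images will confirm the savings (and that new images were actually generated):</p>

<pre><code>ls -lh /boot/initrd.img-*
</code></pre>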

<div class="footnotes"><ol><li class="footnote" id="fn:1"><p>Honestly, it is perhaps a little weird for a screen or tmux enhancement package to be providing such a utility, but as he mentioned in his blog post, he was tired of having to copy it around. *shrugs* <a href="https://speakofthedevel.com/surviving-ubuntus-out-of-space-boot-partition/#fnref:1" title="return to article">↩</a></p></li>

<li class="footnote" id="fn:2"><p>There's actually a lot going on in the initrd images. To learn more, check out the <a href="https://en.wikipedia.org/wiki/Initrd">initrd Wikipedia page</a>. <a href="https://speakofthedevel.com/surviving-ubuntus-out-of-space-boot-partition/#fnref:2" title="return to article">↩</a></p></li></ol></div>]]></content:encoded></item><item><title><![CDATA[radon: The "low-power" server build]]></title><description><![CDATA[<p>Deep in the bowels of my basement is a 42U rack, which formerly housed the fully operational battle station that was my office's online presence at the University of Minnesota. Alas, we were all told we couldn't have our own server rooms anymore and our server rack and its contents</p>]]></description><link>https://speakofthedevel.com/radon-the-low-power-server-build/</link><guid isPermaLink="false">cd1898eb-5bb2-4e87-9aa4-0a06632380d2</guid><category><![CDATA[Docker]]></category><category><![CDATA[Servers]]></category><dc:creator><![CDATA[Kendrick Erickson]]></dc:creator><pubDate>Wed, 14 Sep 2016 04:48:07 GMT</pubDate><media:content url="https://speakofthedevel.com/content/images/2016/09/img_1487.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://speakofthedevel.com/content/images/2016/09/img_1487.jpg" alt="radon: The "low-power" server build"><p>Deep in the bowels of my basement is a 42U rack, which formerly housed the fully operational battle station that was my office's online presence at the University of Minnesota. Alas, we were all told we couldn't have our own server rooms anymore and our server rack and its contents were sent to the Reuse Center to find a new loving home. I was able to snag the rack and it is now sitting mostly empty, except for one nice little 2U server that powers most of the important pieces of my network&mdash;including this blog. </p>

<p>Following my usual network naming scheme (elements of the periodic table), it received the apt name of <strong>radon</strong>.</p>

<p><img src="https://speakofthedevel.com/content/images/2016/09/radon-testing.jpg" alt="radon: The "low-power" server build"></p>

<h2 id="specifications">Specifications</h2>

<table>  
<thead>  
<tr>  
<th>Component</th>  
<th>Item</th>  
</tr></thead>  
<tbody>

<tr>  
<td>CPU</td>  
<td>Intel Core i7-4790S 3.2GHz Quad-Core Processor</td>  
</tr>

<tr>  
<td>Motherboard</td>  
<td>ASRock H97M PRO4 Micro ATX LGA1150 Motherboard</td>  
</tr>

<tr>  
<td>RAM</td>  
<td>Crucial Ballistix Tactical 32GB (4 x 8GB) DDR3-1600 Memory</td>  
</tr>

<tr>  
<td>Storage</td>  
<td>  
Samsung 850 Pro Series 256GB 2.5" Solid State Drive<br>  
SABRENT 2.5" to 3.5" Internal Hard Disk Drive Mounting Kit (BK-HDDH)</td>  
</tr>

<tr>  
<td>Case</td>  
<td>  
Rosewill RSV-Z2600 2U Rackmount Server Chassis<br>  
3x Cooler Master Blade Master 40.8 CFM 80mm Fans  
</td>  
</tr>

<tr>  
<td>Power Supply</td>  
<td>Corsair RM 450W 80+ Gold Certified Fully-Modular ATX Power Supply</td>  
</tr>

<tr>  
<td>Operating System</td>  
<td>Ubuntu 16.04.1 LTS Server</td>  
</tr>

</tbody>  
</table>

<h2 id="performance">Performance</h2>

<p>The biggest part of building this server was finding something that would be somewhat power efficient, since it's on 24/7 and usually idle, but still able to function adequately under some moderately heavy loads. Thus, I landed on the S variant of the i7-4790, which has a 65W TDP. (There is also a T variant with a 45W TDP, but its clock speed maxes out at 2.7GHz, whereas the S runs at 3.2GHz with a turbo to 4.0GHz.)</p>

<p>Measured with a Kill-a-Watt, the power usage of the entire system at idle was hovering around 20W. Using the stock cooler combined with a motherboard that supports PWM case fans, this makes the machine fairly silent even while sitting next to you in the room. Between this and a surprisingly efficient Synology 1U NAS device (which provides most of the bulk storage to my network via iSCSI) and a few other network devices, the smallish 1000VA UPS in the rack will last over two hours off of mains power.</p>

<p>My usual means of configuring servers is installing Ubuntu Server LTS and running some well-formed shell scripts from my configuration Git repository. However, I'm finding it nicer to move more and more things over to Docker as I get more comfortable with it. Docker handles both the software installation and the configuration, and I can be at least somewhat assured that things will "just work," assuming the network configuration was done properly up front.</p>

<p>Other than ensuring that data has a permanent home outside of the ephemeral Docker containers, moving the network services (DNS resolution, site hosting, monitoring, etc.) into containers has been a lot of fun. It also allows me to try out more stuff than I would normally be inclined to because of onerous installation requirements (i.e., install this awesome thing that requires a fully operational Ruby on Rails installation, or NodeJS installation, etc.). </p>]]></content:encoded></item><item><title><![CDATA[Ghost: The Blog That Could]]></title><description><![CDATA[I've been looking for a Markdown-based blog that could be used with a minimum of fuss to write small things in. Ghost is one entry in the litany of entrants]]></description><link>https://speakofthedevel.com/ghost-the-blog-that-could/</link><guid isPermaLink="false">118bfc8c-65c8-4986-9f4f-073782a3f8b3</guid><category><![CDATA[Docker]]></category><category><![CDATA[Programming]]></category><category><![CDATA[Ghost]]></category><dc:creator><![CDATA[Kendrick Erickson]]></dc:creator><pubDate>Tue, 26 Jan 2016 07:24:00 GMT</pubDate><media:content url="https://speakofthedevel.com/content/images/2016/08/IMG_20160705_183521.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://speakofthedevel.com/content/images/2016/08/IMG_20160705_183521.jpg" alt="Ghost: The Blog That Could"><p>I've been looking for a Markdown-based blog that could be used with a minimum of fuss to write small things in. <a href="https://ghost.org/">Ghost</a> is one entry in the litany of entrants. Written in JavaScript and running in <a href="https://nodejs.org/">Node.js</a>, it seems like I might have finally found what I'm looking for.</p>

<h2 id="thebasics">The Basics</h2>

<p>Ghost is JavaScript-based blogging software that runs on Node.js. In all of my testing, I've been running it in a Docker container with a permanent data directory.<sup id="fnref:1"><a href="https://speakofthedevel.com/ghost-the-blog-that-could/#fn:1" rel="footnote">1</a></sup> It's actually been pretty easy to set up, with a minimum of fuss. Assuming you have a running Docker installation, you can just run the following to get an instance set up:</p>

<pre><code>sudo docker run --name ghost-dev -p 8080:2368 -v /data/ghost:/var/lib/ghost -d ghost
</code></pre>

<p>In essence, this creates a Docker container named ghost-dev (<code>--name ghost-dev</code>), forwards port 8080 on the Docker host to the port Ghost listens on inside the container (<code>-p 8080:2368</code>), and provides a somewhat permanent data directory outside of the container (<code>-v /data/ghost:/var/lib/ghost</code>).</p>

<p>Docker looks up the information it needs to create the container on <a href="https://hub.docker.com/">Docker Hub</a>, downloads the necessary data, and starts the container running. About the only thing I needed to edit was the <code>config.js</code> file, which assumed it was still running on <code>localhost:2368</code> rather than the host name I configured for it.</p>
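<p>For reference, the setting in question is the <code>url</code> value in the relevant environment block of <code>config.js</code> (a trimmed sketch; the exact layout varies between Ghost 0.x releases, and the host name here is hypothetical):</p>

<pre><code>// config.js, inside the container at /var/lib/ghost
var config = {
    production: {
        url: 'http://blog.example.com',   // was 'http://localhost:2368'
        // ...database, mail, and server settings...
    }
    // ...other environments...
};

module.exports = config;
</code></pre>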

<p>That's it. You can go to the correct address (probably something like <a href="http://localhost:8080/">http://localhost:8080/</a>) and play around with the first post and the administrative interface. Eventually, you'll get things roughly where you want them, and you'll probably want to create a second container with the <code>-e "NODE_ENV=production"</code> switch so that it serves minified JavaScript and CSS, among whatever else Node.js does differently in production. I did have to copy <code>ghost-dev.db</code> to <code>ghost.db</code> in my <code>/data/ghost/data</code> directory to get all of my changes copied over to production.<sup id="fnref:2"><a href="https://speakofthedevel.com/ghost-the-blog-that-could/#fn:2" rel="footnote">2</a></sup></p>
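<p>The production container ends up looking nearly identical to the development one (the container name and host port here are just what I happened to pick):</p>

<pre><code>sudo docker run --name ghost-prod -p 8081:2368 -v /data/ghost:/var/lib/ghost -e "NODE_ENV=production" -d ghost
</code></pre>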

<h2 id="whatsitmissing">What's It Missing?</h2>

<p>As near as I can tell, these are the features missing from Ghost that I would really like:</p>

<ol>
<li><p><strong>Static site generation.</strong> <br>
I much prefer to host things on Amazon S3 and CloudFront since there are fewer moving parts. There is something called <a href="https://github.com/axitkhurana/buster">buster</a>, but it seems to revolve around using GitHub Pages in the same way <a href="https://jekyllrb.com/">Jekyll</a> is often used. The temporary workaround will be using Nginx's page caching abilities.</p></li>
<li><p><strong>Edit histories/versioning.</strong> <br>
I can't go back into the past to see what a post looked like (or in the case of multiple users, to see who edited what and when). This might prevent adoption at work. Having a blog that was backed by a Git repository was on my functionality short-list.</p></li>
<li><p><strong>Easier methods of templating.</strong> <br>
I've had some issues with getting short excerpts around the site to appear the way I would like. By default, you can truncate posts by word or character count. However, it's a hard stop: it doesn't continue to the end of the sentence or paragraph. This is apparently a <a href="https://github.com/TryGhost/Ghost/issues/5060">known issue</a> that they're looking to resolve. In the meantime, I've manually patched in <a href="https://github.com/TryGhost/Ghost/pull/5609/files">this closed pull request</a>, which gets me rounding to the next paragraph. This is incredibly annoying, as I have to re-patch the Ghost codebase inside the Docker container every time there's an update (which normally results in just skipping new versions).</p></li>
</ol>

<h2 id="conclusions">Conclusions</h2>

<p>For now, I will continue to use Ghost in more or less a vanilla state. I don't know that I'll be able to find anything closer to what I'm looking for, and I don't have the bandwidth to start a project from scratch. The biggest positive is that Ghost is open-source, backed by a non-profit entity. </p>

<div class="footnotes"><ol><li class="footnote" id="fn:1"><p>To be perfectly honest, setting up the blog was about finding something to test in a Docker container more than anything else. <a href="https://speakofthedevel.com/ghost-the-blog-that-could/#fnref:1" title="return to article">↩</a></p></li>

<li class="footnote" id="fn:2"><p>Clearly, this will be more useful going the other direction when trying to test a copy of the production database on a new version of Ghost (or testing my code hacks).  <a href="https://speakofthedevel.com/ghost-the-blog-that-could/#fnref:2" title="return to article">↩</a></p></li></ol></div>]]></content:encoded></item><item><title><![CDATA[Branching out on my own (Git it?)]]></title><description><![CDATA[<p>Lately I've been working with <a href="http://git-scm.com/">Git</a>. Git is a <a href="http://en.wikipedia.org/wiki/Revision_control">revision control system</a>, along the lines of CVS or Subversion. It has two main advantages: it is a distributed RCS meaning it allows for decentralized revision control and it does branching and merging quite well. It also happens to be the</p>]]></description><link>https://speakofthedevel.com/branching-out-on-my-own/</link><guid isPermaLink="false">1507fe73-1f46-4e80-b636-8d2e34f8deb1</guid><category><![CDATA[Programming]]></category><category><![CDATA[Git]]></category><category><![CDATA[Log Burning]]></category><dc:creator><![CDATA[Kendrick Erickson]]></dc:creator><pubDate>Mon, 23 Jun 2008 04:45:00 GMT</pubDate><content:encoded><![CDATA[<p>Lately I've been working with <a href="http://git-scm.com/">Git</a>. Git is a <a href="http://en.wikipedia.org/wiki/Revision_control">revision control system</a>, along the lines of CVS or Subversion. It has two main advantages: it is a distributed RCS meaning it allows for decentralized revision control and it does branching and merging quite well. It also happens to be the brainchild of <a href="http://en.wikipedia.org/wiki/Linus_Torvalds">Linus Torvalds</a>, of <a href="http://www.kernel.org/">Linux</a> fame. </p>

<p>Having mainly used CVS my entire professional life (all of eight years), I've grown accustomed to its eccentricities, especially when it comes to branching (and merging) and file management. Once I saw Linus's <a href="http://youtube.com/watch?v=4XpnKHJAok8">Google TechTalk on Git</a><sup id="fnref:1"><a href="https://speakofthedevel.com/branching-out-on-my-own/#fn:1" rel="footnote">1</a></sup><sup id="fnref:2"><a href="https://speakofthedevel.com/branching-out-on-my-own/#fn:2" rel="footnote">2</a></sup>, I decided that I'd give Git a try. I like it so much more than CVS that I'm plugging it wherever I can. Admittedly, this is probably not so much due to Git being awesome (it is, after all, somewhat similar to Mercurial, Bazaar, or Darcs) but rather to CVS being horrible.</p>

<p>I found Git a bit confusing beyond the basic update-and-commit workflow. Everything being a SHA1 sum led to much of that confusion. A commit is a SHA1 sum. The tree is a SHA1 sum. Content is a SHA1 sum. Reading a few articles and blog posts of other folks who were at one time or another similarly afloat in a sea of "Dur?" was tremendously helpful, especially <a href="http://www.newartisans.com/blog_files/git.from.bottom.up.php">Git from the Bottom Up</a> by John Wiegley. Git's documentation is also helpful, though not so much the man pages as the online documentation, including the <a href="http://www.kernel.org/pub/software/scm/git/docs/gitcvs-migration.html">CVS migration manual</a> and <a href="http://www.kernel.org/pub/software/scm/git/docs/user-manual.html">Git User's Manual</a>.</p>
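<p>You can poke at this model in any repository; <code>git cat-file</code> simply dumps the object behind a given SHA1:</p>

<pre><code>git rev-parse HEAD            # the current commit's SHA1
git cat-file -t HEAD          # its type: "commit"
git cat-file -p HEAD          # the commit: tree SHA1, parent SHA1(s), author, message
git cat-file -p HEAD^{tree}   # the tree: SHA1s of blobs (file contents) and subtrees
</code></pre>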

<p>After reading Git from the Bottom Up, everything pretty much clicked. The structure in which Git stores content is really easy to digest, and it is the basis for workflows you would never imagine possible in CVS.<sup id="fnref:3"><a href="https://speakofthedevel.com/branching-out-on-my-own/#fn:3" rel="footnote">3</a></sup> Git was at one time called "the stupid content tracker," and it really is just that (as an aside, you may find <a href="http://www.wincent.com/a/about/wincent/weblog/archives/2007/07/a_look_back_bra.php">this discussion</a> between Linus and Bram Cohen about merging strategies interesting).<sup id="fnref:4"><a href="https://speakofthedevel.com/branching-out-on-my-own/#fn:4" rel="footnote">4</a></sup> That rather basic content tracking is what allows for its distributed nature and painless branching and merging. </p>

<p>Think of all the things one must do to set up centralized revision control. First, you have to find some place for the repository to live. Then, you must grant commit access to those who need it, which may entail giving them login access to the machine on which the repository lives. After everyone is able to talk to the repository, rules about branching, merging, and tagging are usually set up to avoid problems. </p>

<p>With Git, the repository lives on the developer's machine. When he or she gets to the point of wanting to share that code with a wider audience, it is simply a matter of making it available via HTTP, which they're likely to have set up beforehand. (See <a href="http://git.kernel.org/">http://git.kernel.org/</a> for an example of gitweb, a nice front end to the <a href="http://www.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git">directory structure</a> that a repository lives in.) Any changes that someone else would make are sent by e-mail, pulled over HTTP as the clone was, or could potentially occur over SSH. (They'll rely on SHA1 sums of the content, tree, and commit history common to both repositories to facilitate the merge.) This eliminates a lot of the annoying server administrivia of managing a repository.</p>
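<p>Publishing over plain HTTP really is just files in a web root; a sketch (the paths and host are hypothetical):</p>

<pre><code>git clone --bare myproject myproject.git   # a repository without a working tree
cd myproject.git
git update-server-info                     # generate the files dumb HTTP clients need

# copy myproject.git into the web server's document root; others can then run:
git clone http://example.com/myproject.git
</code></pre>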

<p>Furthermore, setting up rules for branching and merging is completely unnecessary because the repository you cloned from the original programmer is yours to do with as you please. You can commit without worrying about whether it will break the code base or whether someone will be checking out code later in the day. Things like <a href="http://blog.madism.org/index.php/2007/09/09/138-git-awsome-ness-git-rebase-interactive">git rebase --interactive</a> are sufficiently advanced to be freakin' magic to CVS users such as myself, and they help in creating a single commit or series of commits to send back to the original (and possibly authoritative) developer for inclusion in their repository.</p>
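<p>For fellow CVS refugees, the incantation for collapsing a few messy commits into one clean patch before sending it upstream is simply:</p>

<pre><code>git rebase -i HEAD~3   # interactively reword, squash, or reorder the last three commits
</code></pre>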

<p>This has been rather haphazardly put together, but I hope that you'll take a look at Git if you haven't already. I highly recommend reading Git from the Bottom Up because it is interesting from a computer science standpoint and serves as a primer to the staging area and content storage model Git uses. I'm hoping that as time goes on, I will have worn down my co-workers' resolve (really, we just have to find the time to do it) and we'll finally port our CVS repository to Git. In the meantime, I'll be happily coding away on my personal projects with Git.</p>

<div class="footnotes"><ol><li class="footnote" id="fn:1"><p>As a side note, probably my favorite Linus quote comes from this video: "[...] the way merging is done is the way real security is done--by a network of trust. If you have ever done any  security work and it did not involve the concept of network of trust, it wasn't security work. It was masturbation." <a href="https://speakofthedevel.com/branching-out-on-my-own/#fnref:1" title="return to article">↩</a></p></li>

<li class="footnote" id="fn:2"><p>Randal Schwartz of Perl fame is apparently involved with Git and did a <a href="http://www.youtube.com/watch?v=8dhZ9BXQgc4">TechTalk</a> about six months after Linus. <a href="https://speakofthedevel.com/branching-out-on-my-own/#fnref:2" title="return to article">↩</a></p></li>

<li class="footnote" id="fn:3"><p>(2016) There's an XKCD for everything. <a href="https://xkcd.com/1597/">https://xkcd.com/1597/</a> <a href="https://speakofthedevel.com/branching-out-on-my-own/#fnref:3" title="return to article">↩</a></p></li>

<li class="footnote" id="fn:4"><p>(2016) Here's a Hacker News discussion about the blog post from 2014. <a href="https://news.ycombinator.com/item?id=8118817">https://news.ycombinator.com/item?id=8118817</a> <a href="https://speakofthedevel.com/branching-out-on-my-own/#fnref:4" title="return to article">↩</a></p></li></ol></div>

<hr>

<p><em>This post was originally posted on Log Burning, a personal blog I had on the University of Minnesota's now defunct UThink platform. Its content and formatting have been edited.</em></p>]]></content:encoded></item><item><title><![CDATA[iPhone 3G, New SDK Features]]></title><description><![CDATA[<p>Sometime in March, my Razr was on its last legs and I wasn't held down to any contract, so I started my search for a phone that would let me install software on it. The iPhone was the closest thing to it--the Palm and Blackberry options didn't appeal to my</p>]]></description><link>https://speakofthedevel.com/iphone-3g-new-sdk-features/</link><guid isPermaLink="false">90ed8c50-f878-48f0-baa3-b870870b5b01</guid><category><![CDATA[Apple]]></category><category><![CDATA[Log Burning]]></category><dc:creator><![CDATA[Kendrick Erickson]]></dc:creator><pubDate>Sun, 15 Jun 2008 08:21:00 GMT</pubDate><content:encoded><![CDATA[<p>Sometime in March, my Razr was on its last legs and I wasn't held down to any contract, so I started my search for a phone that would let me install software on it. The iPhone was the closest thing to it--the Palm and Blackberry options didn't appeal to my programming tastes and I was intrigued by the touch interface. Knowing the SDK was coming out, I took the plunge and bought it. However, it appears I bought my iPhone prematurely.</p>

<p>Although I wouldn't call myself a Mac zealot, I watched the live keynote updates on <a href="http://macrumorslive.com">macrumorslive.com</a> just like everyone else. Most of what was revealed was expected. There were three pieces I was most interested in: an addition to the SDK that was an attempt at pleasing those who wanted background processes, a way of adding applications to the iPhone without having to use the App Store or have an enterprise SDK license, and the addition of a GPS chip. </p>

<p>The push notification service, as they call it, is pretty neat. It does take care of a good 75% of use cases for background processes on a mobile device--but not all. Say you wanted to create a service that sent severe weather updates to a user's phone based on their location. If you assume the user is stationary, the push notification service will work. </p>

<p>However, if they're mobile, they could be driving 70 MPH into a dangerous situation and there's nothing you can do to warn them (although the large line of dark clouds will probably tip them off). Apple missed the boat in not doing an information pull at the same time as they do the information push. There's not a whole lot of other data that would change over time while the user is not utilizing their phone--and I'm sure there are other uses for live user location data. Maybe they'll work on that after September.</p>

<p>The ability to add your own applications to the iPhone (and 99 of your closest friends) without having to go through the App Store or shell out for the enterprise license is awesome. I figured that they would at least allow personal applications to be put on the iPhone, but the ability to send them to friends, family, co-workers, or whatever is great news. I guess others expected this, but I found it to be a nice surprise.</p>

<p>Then there's the addition of a true GPS receiver. Being an amateur radio operator (<a href="http://www.qrz.com/callsign/K0WMS">K0WMS</a>), it was at this point that an idea slapped me in the face so hard that I have to try and get to work. Now that the iPhone has a GPS, you can write custom software for it, and there will be an easier to use headphone jack--it seems that it would make for the perfect <a href="http://en.wikipedia.org/wiki/Automatic_Packet_Reporting_System">APRS</a> unit. Just connect the iPhone's headphone jack to the packet radio adapter on your trusty <a href="http://en.wikipedia.org/wiki/VX-7R">VX-7R</a> using the sound card to generate both the data stream and PTT signal and you're set.</p>

<p>As strange as it sounds, this might be the killer feature that will push me over the edge (no pun intended) to upgrade, rather than the better internet data speeds afforded by 3G. I will say that I have found EDGE to be an annoyingly slow service. If AT&amp;T is reasonable about upgrading to the iPhone 3G, I may have to consider it. I hate you, Steve Jobs. </p>

<hr>

<p><em>This post was originally posted on Log Burning, a personal blog I had on the University of Minnesota's now defunct UThink platform. Its content and formatting have been edited.</em></p>]]></content:encoded></item><item><title><![CDATA[Sony Vaio VGN-SZ791N/X Notes]]></title><description><![CDATA[<p>I just got a new Vaio to replace my dead Toshiba laptop on February 4th, 2008. The following are the installation notes and thoughts about my Sony Vaio VGN-SZ791N/X from someone new to the Sony Vaio family of laptops. Long story short: recent Linux distributions will work, and it</p>]]></description><link>https://speakofthedevel.com/sony-vaio-vgn-sz791n-x-notes/</link><guid isPermaLink="false">840f3caf-5fc0-45ea-b75f-fb87939e6c50</guid><category><![CDATA[Reviews]]></category><dc:creator><![CDATA[Kendrick Erickson]]></dc:creator><pubDate>Thu, 21 Feb 2008 06:50:00 GMT</pubDate><content:encoded><![CDATA[<p>I just got a new Vaio to replace my dead Toshiba laptop on February 4th, 2008. The following are the installation notes and thoughts about my Sony Vaio VGN-SZ791N/X from someone new to the Sony Vaio family of laptops. Long story short: recent Linux distributions will work, and it has a lot of untapped potential.</p>

<h2 id="initialthoughts">Initial Thoughts</h2>

<p>In terms of the specifications and looks of the laptop, the SZ791 is really nice. I like the way that it looks, and it weighs at least two or three pounds less than my old Toshiba laptop. I was waiting rather impatiently for <a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16834117676">NewEgg</a> to stock this laptop, which left enough time for the MacBook Air to be announced. While the SZ791 isn't as thin (at most 1.5" thick), at four pounds it's still rather portable (and certainly more user-serviceable).</p>

<p>For those that cannot be bothered to look up the specifications of the laptop:</p>

<ul>
<li><strong>CPU:</strong> Intel Core 2 Duo T9300 (Penryn, 45nm) 2.5GHz, 6MB shared L2 cache</li>
<li><strong>RAM:</strong> 4GB (DDR2 667)</li>
<li><strong>Video Cards:</strong> nVidia GeForce 8400M GS, Intel GMA X3100, 1280x800 native resolution (13.3" monitor)</li>
<li><strong>Hard Drive:</strong> SATA, 250GB, 5400RPM</li>
<li><strong>Networking:</strong> Modem, gigabit LAN, 802.11 a/b/g/n, EV-DO (CDMA, Sprint), Bluetooth</li>
<li><strong>Operating System:</strong> Windows Vista Business (32-bit) pre-installed</li>
<li><strong>Etc.:</strong> Webcam, fingerprint reader, Firewire, card reader, DVD burner</li>
</ul>

<p>Those of you paying attention to the specifications will notice a glaring problem with the initial configuration of this laptop: it comes with a 32-bit operating system pre-installed but contains 4GB of RAM. A 32-bit OS only has a 4GB address space, and part of that space is reserved for memory-mapped hardware, so programs will never get full access to the 4GB of RAM. To do that, you'll need a 64-bit operating system (any recent Linux distribution, or a purchased 64-bit version of XP/Vista). With the default installation, Vista reported a little over 3GB of RAM.</p>

<p>I had access to a free copy of 64-bit Vista Business and was planning on dual-booting the laptop anyway, so reinstalling Windows was already on my list of things to do (better to nuke the bloatware from orbit; it's the only way to be sure). I can't really blame Sony for not wanting to ship a 64-bit operating system with new laptops, but I naively thought that this would not be a problem. Sadly, I was wrong: Sony does not supply 64-bit drivers for this laptop. Other, older Vaio laptops have unofficial 64-bit drivers, so with a bit of luck Sony may supply them for the SZ791.</p>

<h2 id="whatworkswhatdoesnt">What Works, What Doesn't</h2>

<p>Let's start with Vista. You will not find a Vista recovery CD with your SZ791. Instead, you will find a CD (DVD? I didn't open it.) that will let you downgrade to Windows XP. If you want recovery DVDs for Vista, you'll have to make your own: Sony has put the drivers, recovery software, and apparently a somewhat clean Vista install on a separate partition of the hard drive, which can be copied to two DVD-Rs (using the built-in burner, of course). Those DVDs are important because they contain most of the drivers, which can also be used with 64-bit Vista.</p>

<p>A few rather important drivers refuse to install, including both video drivers (the X3100 and the nVidia 8400M GS), the Sony function keys (the programmable buttons and LCD brightness), and the touchpad. The most infuriating of these are the touchpad (no scrolling using the right and bottom edges; it otherwise operates just fine) and the function keys, because who knows when Sony will decide to release 64-bit versions. Other drivers, including the X3100's, are available from Windows Update, "Problem Reports and Solutions," or manufacturers' web sites.</p>

<p>I won't go through the full list of what works and what doesn't because I don't have an exhaustive list handy. The main features mostly work, as do a good percentage of the extras. I haven't tried the webcam (though I assume it falls into the same category as the touchpad and function keys), the CDMA modem, the card reader, or anything in the ExpressCard slot.</p>

<p>Linux is, of course, a much brighter picture. I installed Fedora 8 on the laptop and pretty much everything works out of the box. I have not yet attempted to install the drivers for either video card, but the X3100 handles Compiz just fine (if you ignore the blacklisting). The wireless network connection also works well; the wired ethernet driver, however, appears to be misconfigured, and I haven't been able to track down the problem (with what little attention I've given it so far). Otherwise, audio, all features of the touchpad, and controlling the monitor brightness (via the command line) all work without a problem.</p>

<p><img src="https://speakofthedevel.com/content/images/2016/12/IMG_20120304_134919-1.jpg" alt="Here's the laptop next to my oscilloscope working on my large seven-segment display project."></p>

<h2 id="bottomline">Bottom Line</h2>

<p>I love this laptop. I love the fact that it dual-boots without a problem. I don't like that Sony has completely shut out anyone who needs or wants to use a 64-bit version of Windows on their laptops, but I suppose that's their prerogative. I'm sure that with time they'll start pushing out more beta drivers and perhaps even support 64-bit Vista someday. In the meantime, it works without any major issues under 64-bit operating systems if you can rough it on your own. If 32-bit Vista works for you, then there's really no reason to be afraid of picking this laptop up.</p>

<p>I only have minor nitpicky issues with the keyboard (the size is great, but the key movement is somewhat mushy), the cover over the modem and ethernet ports (connected via a dongle attached to the side of the body), the antenna (another flimsy-feeling plastic piece that seems like it could be accidentally ripped off rather easily), and finally the battery (it's not solidly connected to the body of the laptop; it moves around slightly in the battery bay when you set the laptop down). They're not deal-breaking problems by any stretch of the imagination, but they are in the way of what would otherwise be a solid laptop, both literally and figuratively.</p>]]></content:encoded></item><item><title><![CDATA[I am allergic to spam.]]></title><description><![CDATA[We were under a tight deadline to switch over which meant no spam protection. After two weeks, the spam deluge started to bother me enough to do something.]]></description><link>https://speakofthedevel.com/i-am-allergic-to-spam/</link><guid isPermaLink="false">0df8c182-2721-462a-ac14-6d7480678de0</guid><category><![CDATA[Ancient Imports]]></category><category><![CDATA[Log Burning]]></category><dc:creator><![CDATA[Kendrick Erickson]]></dc:creator><pubDate>Tue, 24 Jul 2007 02:12:00 GMT</pubDate><content:encoded><![CDATA[<p>One of my uncle's friends, Marc Breitsprecher, runs an internet business from his home selling ancient coins. Back in 2000, before I even started my undergrad, he approached me and asked if I could build a web site for him. Previously, he was just selling his coins on eBay.</p>

<p>Over the last seven years, <a href="http://ancientimports.com/">Ancient Imports</a> has grown beyond both of our expectations. He was able to quit his job at the postal service and work on the site full-time. We've outgrown two hosting providers, the most recent event happening a few weeks ago.</p>

<p>We moved from a poor shared hosting environment to a spiffy virtual private server. It's the closest we can get right now to having full control over a physical machine. It's fun for me because I essentially have full control over the virtual machine, which means I'm pretty much free to do whatever I need to do to implement new functionality. The cost, however, is that I now have to maintain the security, e-mail services, and DNS that were previously handled by the hosting provider. The classic blessing and curse.</p>

<p>We were under a somewhat tight deadline to switch over (long story short, they blamed us for the problems we had with their service--that did not sit well with either of us), so I just threw up the e-mail server and configured it to make sure the mail was still delivered. That meant no spam protection. After about two weeks, the deluge of spam started to bother me enough to do something about it.</p>

<p>I was pleasantly surprised at how effective just a few anti-spam measures were. The first counter-measure was to make the server stricter about what it will accept as a properly formatted message (e-mail must originate from a domain that actually exists, and so on), which I assume isn't the default because of the extra DNS lookups it incurs. The second was to check a DNSBL to see whether the originating IP address is a known spammer (another DNS lookup). Both of these tweaks killed a bunch of spam for the small effort of adding four lines to the configuration file.</p>
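<p>The post doesn't name the mail server, but to make those "four lines" concrete: in Postfix, for instance, both checks are single restriction entries in <code>main.cf</code>. The following is a hypothetical sketch; the restriction names are real Postfix directives, but their exact arrangement here is my assumption, not the configuration actually used.</p>

```
# main.cf (Postfix) -- a sketch of the kind of "four lines" described above.
# Reject mail whose sender domain doesn't resolve in DNS:
smtpd_sender_restrictions = reject_unknown_sender_domain
# Reject connections from IPs listed on the Spamhaus DNSBL:
smtpd_recipient_restrictions =
    reject_unauth_destination,
    reject_rbl_client zen.spamhaus.org
```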

<p>The second, more involved, counter-measure I added was greylisting. This is a really nifty technique that is mostly invisible to people sending or receiving mail. The e-mail server will feign unavailability to any sender and recipient pair that it hasn't seen before. Upon receiving the temporary error message, normal e-mail servers will attempt to redeliver the message in another 10-20 minutes, at which point the server will remember the previous attempt and accept the e-mail.</p>

<p>Most of the software spammers use to send their advertisements, however, is not so well behaved. Spammers are more interested in sending out as much mail as possible in the shortest amount of time, so they're unlikely to attempt a redelivery (at least using the same source e-mail address) to the same person within a reasonable window. Even if they do, they greatly increase their chances of appearing on the DNSBL by the next time they attempt to connect. Greylisting essentially gives you a two-for-one special.</p>
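<p>The bookkeeping behind greylisting is small enough to sketch. This is a toy model in Python, not the actual software running on the server; the ten-minute retry window and the (IP, sender, recipient) key are typical choices, not ones the post specifies.</p>

```python
import time

RETRY_MIN = 600  # accept retries only after 10 minutes (assumed policy)

class Greylist:
    """Minimal greylisting policy: temporarily fail the first delivery
    attempt for an unseen (ip, sender, recipient) triplet, and accept
    retries that arrive at least RETRY_MIN seconds later."""

    def __init__(self):
        self.seen = {}  # triplet -> timestamp of first attempt

    def check(self, ip, sender, recipient, now=None):
        now = time.time() if now is None else now
        key = (ip, sender, recipient)
        if key not in self.seen:
            self.seen[key] = now
            return "450 try again later"  # temporary SMTP failure
        if now - self.seen[key] >= RETRY_MIN:
            return "250 ok"               # legitimate retry, accept it
        return "450 try again later"      # retried too soon
```

<p>A well-behaved mail server treats the 450 as "come back in a bit" and redelivers; most spam software never does, which is the whole trick.</p>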

<p>As for numbers, the server's been running with the new counter-measures since early Sunday morning. Since then, 611 attempts were blocked by the SpamHaus DNSBL, 163 attempts were greylisted, and only 40 e-mails were actually delivered to Marc and myself (and most of those originated from the website itself and not from outside sources). Of course, the default behavior of rejecting e-mail for unknown users (and domains) is the most effective "counter-measure." In the same period, the mail server rejected 3,957 attempts at sending mail to ghosts.</p>

<hr>

<p><em>This post was originally posted on Log Burning, a personal blog I had on the University of Minnesota's now defunct UThink platform. Its content and formatting have been edited.</em></p>]]></content:encoded></item></channel></rss>