
Monthly Archives: January 2015

So I mentioned in a previous post that I was getting the occasional bad reading from the DS18B20 temperature sensor that I was using to monitor beer fermentation.  I noticed the issue because I was getting very ‘choppy’ temperature profiles.  It was obvious that really low temperature values (in this case it turned out to be a value of -0.06, not sure why) were getting averaged into the temperature readings (the system averages 60 readings at an interval of approximately 1/sec, and then sends that average across the network).  While I was aware of the issue pretty soon after getting the system up and running, I was a bit pressed for time and decided to put fixing the issue on the back burner.

LoggerIssue

Example of the impact of the bad reads. Every few minutes, a bad read would get averaged into one of the data points, causing the temperature to appear lower, resulting in this ‘spiky’ profile.

Coincidentally, the issue ended up fixing itself (sort of).  After a few successful trial runs, I added an additional temperature sensor to the Arduino, and  in order to be able to identify the data coming from the two sensors, I needed to pull the serial addresses.  Turns out that the bad reads that I was getting also resulted in a bad address, so I added some code to throw out that data.

LoggerIssueFixed

Notice how the fluctuations are on the order of magnitude of ~0.1 degree F, as opposed to ~1 degree F previously. Much better!

So I should probably mention that this entire issue would have been avoided if I had just copied and pasted the example Arduino sketch code for the DS18B20.  The example code has a few different checks, such as a check that ensures the sensor is of the right type, as well as a CRC (cyclic redundancy check) validation.  The CRC is very similar to a checksum: the sensor performs a mathematical operation on the data and then sends along the result (the CRC) of that operation.  This allows the receiver to verify that the data was successfully transmitted by re-performing the operation and comparing the result to the CRC value.  If the values do not match, it probably means that there was an issue with sending the data across the wire.  The code in question is below.

if (OneWire::crc8(addr, 7) != addr[7]) {
    Serial.print("CRC is not valid!\n");
    return;
}

Had I included the above code in my sketch, this issue would have been avoided entirely.  But considering I am doing this project for fun and for the experience/education, I think writing the code from scratch is a good thing to do.  I have certainly created some unnecessary bugs with this approach, but I have also learned a lot more by not copying and pasting code.  It is also a good lesson in inspecting example and tutorial code very thoroughly to ensure that you are not leaving out anything important.  In this case, leaving out the sensor type check (the OneWire library can work with multiple digital sensor types running on the same pin) made sense because I knew I was only using a certain type of sensor.  But leaving out the CRC validation was a bad call.
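Incidentally, the check itself is easy to replicate off the Arduino too. Here is a rough Python translation of the Dallas/Maxim CRC8 that OneWire::crc8 computes (a sketch for illustration, not code from the project):

```python
def crc8_maxim(data):
    """Dallas/Maxim CRC8 (reflected polynomial, XOR constant 0x8C), the
    same check OneWire::crc8 performs over the DS18B20's ROM address."""
    crc = 0
    for byte in data:
        for _ in range(8):
            mix = (crc ^ byte) & 0x01
            crc >>= 1
            if mix:
                crc ^= 0x8C
            byte >>= 1
    return crc

# The last byte of a valid 8-byte ROM address is the CRC of the first
# seven, so a good read satisfies: crc8_maxim(addr[:7]) == addr[7]
```

A handy property of this CRC: appending the CRC byte to the data and re-running the computation yields zero, which is another way to phrase the validity check.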

  • Transferring the Single Hop Cascade and Single Hop Simcoe to secondary.

I brewed two more one-gallon batches of Single Hop Pale Ale.  I used Amarillo in one, and Sorachi Ace in the other.  Looking forward to drinking these.  I’ve got a lot more work to do on the project, and will try and fit it in here and there, so stay tuned.

One of my goals for the fermentation temperature logging project was to be able to accommodate multiple temperature sensors running from the same Arduino, as well as handle data from multiple sensor units in different locations.  I was scheduled to brew a 5 gallon batch of beer this past Saturday, and it just so happened that I received my shipment of DS18B20 sensors that same morning.

My brewing buddy had acquired the ingredients for an Imperial Stout for our brew session.  Due to the high gravity of the beer, the fermentation requires a stable temperature at the higher range of the yeast’s tolerance (~68F).  Additionally, due to the high levels of sugar in the beer, the initial stages of fermentation can be quite vigorous, which can result in higher temperatures than desired and has the potential to induce the dreaded foam explosion[1].  Considering the particulars of the fermentation for this style of beer, I felt it would be great to get a few of the sensors hooked up to it.

While we were brewing, I started assembling the sensor unit.  I didn’t have everything I needed on hand, as I have been stashing my electronics projects at my girlfriend’s place.  So I borrowed an Arduino and breadboard from my roommate, and got everything set up.  The only issue I can potentially see is that I am using a 10K pull-up resistor instead of a 4.7K.  Not sure if this has any impact on the calibration at all.  Something worth looking into later.  After a long brew day, however, I was in no shape to work on the code base, so I called it a day and went to sleep.

Getting the code working with a second sensor wasn’t too complicated.  The DS18B20 allows for multiple temperature sensors to be run off of the same Arduino pin.  Because they are digital sensors, they each have a unique address.  So I modified the Arduino code to return the unique sensor address in addition to the temperature.  On the server side, I simply stored the data into separate files based on the sensor address, and then added a separate view that sent along both of the data sets to be plotted together.

TempSensorsx2

Two temperature sensors after a few days of data collection.

While the modifications were fairly straightforward, I did stumble upon some issues that will probably have to be dealt with.  By updating the Arduino sketch, I also had to update the sensor server to handle the new data.  This broke the functionality of the original sensor unit.  And because this unit was at my girlfriend’s place, I had no way of updating the code on the Arduino[2].

The other issue has to do with the temperature sensor addresses.  While it is convenient that you can run multiple sensors off of the same pin, it is pretty difficult to figure out which address corresponds to which sensor.  In my case I was using one sensor to monitor ambient temperature, and another to monitor the fermentation temperature.  I made a lucky guess, and got the assignment right on the first try.  But I could see this being an issue with more sensors.  Making matters worse is that the address for each sensor is a 64-bit serial number (16 hexadecimal digits), which is difficult to memorize or write down and enter into a terminal or web form.  Considering the type of application we are using this for, I think it might make more sense to run the sensors off of different pins, and then use the pin numbers as a way to identify which sensor is which.  I doubt the extra pin space will be missed, and this would make it much easier for the user to identify the sensors.
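A stopgap until then is a lookup table on the server that maps each hex address to a friendly name. A sketch (both addresses below are invented, not real sensor addresses):

```python
# Hypothetical address-to-name mapping; the real addresses would be
# pulled once from the Arduino sketch's serial output.
SENSOR_NAMES = {
    "28FF4C5D6E150300": "fermenter",
    "28AA0912B3150377": "ambient",
}

def label_for(addr):
    """Return a human-friendly label, falling back to the raw address
    for any sensor that hasn't been named yet."""
    return SENSOR_NAMES.get(addr, addr)
```

This at least keeps the hex strings out of the plots and web forms, even if the initial address-to-sensor assignment still requires a guess or a warm finger on one probe.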

SensorOutput

Example Output from the sensor unit. Which sensor is which? Hard to tell from the unique HEX addresses.

Outside of these minor issues, the sensors have already proven their worth.  The brew session was without issue and we ended up with a very active fermentation.  I was around for most of the day when this occurred, so I was able to monitor the situation and make adjustments accordingly (pulled the blow off tube out, and left the bucket lid partially ajar to prevent any explosions).  But if you look at the graph above, it shows how the fermentation temperature rose from 65-66F to 70-71F[3].  Had I been out of the house, checking the temperature remotely would have alerted me to these potential issues.  And now that the fermentation has been settling down, the main concern is keeping it warm enough.  With temps dropping to ~65F, I will soon need to start adding some heat to the process to keep the fermentation from stalling out.

  • A hectic day filled with brewing, drinking, and tinkering. Here is a shot of the 'work(brew)bench'. Paper towels used to good effect to prevent any spills or splashes from causing too much damage.

The project is still very much in the ‘beer code’ phase (quite literally): there is no database on the backend, I am just copying and pasting html templates to create the different pages for the different profiles, and the sensor-to-server system is hacked together and very fragile.  But all-in-all, I think it is a good proof of concept, and a good starting point for making a more robust application.  Time permitting, I will try and solidify the code a bit more, and I have a few more features that I want to add, so hopefully I will have some more updates on the project soon.  Cheers!

[1] I’ve mentioned this phenomenon in a previous post.  It occurs when foam (a.k.a. krausen) from the fermentation clogs the airlock or blow off tube, which prevents CO2 from escaping the fermentation bucket.  As pressure builds up, the lid of the bucket will eventually blow off, resulting in a big mess.

[2] I am still looking into the options, but it looks like I might be able to use a CLI tool to update Arduino code remotely.  I will report back if and when I find a solution.

[3] I am fairly confident that the sensors are off (on the low side) by at least one, maybe two degrees.  Not sure if this is due to the 10K resistor or what.  Mainly this is just a hunch, but it is partially confirmed by some very inaccurate analog thermometers.  As such, the graph doesn’t show the true magnitude of the ‘active’ period.  It was probably peaking around 72F, which is way too high for an ale.

In my last post, I went over the development of a fermentation temperature monitoring sensor using an Arduino and Raspberry Pi.  The goal of the project was to build a system that would allow me to remotely monitor the temperatures of my beer fermentations.  In this post I will be discussing the server and web application portion of the project.

The code I wrote for the sensor portion of the project included a socket server program that could be used for testing the network connection between the sensor and a remote server.  This socket server code is also included in the web application code with a few small modifications.  Details on how to setup and run the code can be found in the readme.

The socket server listens for connection requests from the RasPi.  Upon receiving temperature data from the sensor, the data is appended to a csv file.  I wrote a fairly simple flask app that reads the temperature data and passes it along to the client.  I am using Flot, a JavaScript plotting library, to plot the data on the front-end.  At the moment, I have a live, working site up and running.

The project is very much in the prototype stage at the moment.  I have a few more temperature sensors on order, and as I get a better idea of how to handle the data I will start finalizing the backend and add some more features on the frontend.  In addition to handling multiple temperature sensors, I would also like to be able to specify data recording parameters from the web frontend.  While I wait for the sensors to arrive, I have been testing out the system, and have had some interesting results.

TempLoggerTest

Screenshot of Flot Chart

For the first test I simply set the sensor up in the kitchen and began recording and displaying the ambient temperatures.  A screenshot of the Flot chart is above (I still need to figure out how to export Flot charts as images).  Everything went smoothly with the test run.  As you can probably see, there was a brief outage on Jan 3rd.  This was because I had unplugged the sensor while cleaning the kitchen and forgot to plug it back in until later.

I finally got around to brewing some beer this past weekend, so I was able to get the sensor hooked up to an actual fermentation.  For this brew session, I made two one-gallon batches of beer.  The standard homebrew batch size is 5 gallons, but I like to experiment from time to time with smaller batches.  For this experiment, I was making two single hop pale ales, using the exact same grain and yeast for the two beers, but using a different variety of hop in each beer.  Once the beers were brewed, I taped the temperature sensor to one of the carboys and added some packaging foam to the back of the sensor to try and provide some insulation from the ambient conditions.

CascadeTemp

Cascade Single Hop Pale Ale Fermentation Profile

So if you are familiar with beer brewing, the first thing that you might notice is how high the initial temperature is.  I attached the temperature sensor right after I pitched the yeast, which means the beer was at a temperature of ~77F at the time.  Most ale yeasts prefer to be pitched into wort with a temperature in the high 60’s, so I was too hot by about 10 degrees F.  I primarily blame this hot pitch on the fact that it was getting late and I was too tired to sit around and wait for the wort to cool more, although I also did not realize how hot the wort actually was.  Of course, this demonstrates one of the benefits of temperature logging.  By keeping a detailed record of the temperature, I get an additional and more immediate form of feedback on my brewing methods.

Another interesting aspect of the chart is the temperature fluctuations.  It has only been a couple of days, so I don’t have a lot of data yet, but it looks like the temperature is fluctuating by about 5 degrees F due to the space conditioning in the apartment.  Because this is only a one-gallon batch, the beer is probably more susceptible to ambient temperature changes than the typical 5 gallon batch.  It will be interesting to see how a 5 gallon fermentation responds, and it could mean that I need to put more effort into controlling the environment for the one-gallon batches to prevent any ill effects from temperature fluctuations (according to the literature, beer prefers fairly stable temperatures).

  • Weighing out the grains. I got ~5 lbs of grain for two one-gallon batches. I split the grain in half, and used the same yeast, but used a different variety of hops in each batch. For the first batch, I used 1 oz of Cascade, and for the 2nd 1 oz of Simcoe.

One final curiosity that I came across.  I added a zoom in/out feature to the flot graphs, and when you zoom in on a small section of the plot, you can see some potential issues with the data.

LoggerIssue

Hmm, this doesn’t look right?

The Arduino sends a temperature reading to serial approximately once per second.  From there, serial2socket.py takes the average of 60 consecutive temperature readings and sends that average across the socket connection.  So my hunch is that the temperature sensor is returning some sort of bad signal or error code every now and then, and that signal is getting averaged in with the good data.  I will look into this further and report back on what I find.
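If that hunch is right, one low-effort guard would be to drop implausible values before averaging each 60-sample window. A sketch of the idea (the plausible-range bounds here are my assumptions, not values from the project):

```python
def clean_average(window, low=32.0, high=120.0):
    """Average one window of Fahrenheit readings, dropping anything
    outside a plausible indoor-fermentation range before averaging."""
    good = [r for r in window if low <= r <= high]
    if not good:
        return None  # the whole window was garbage; skip this data point
    return sum(good) / float(len(good))
```

A single bad read in a 60-sample window only shifts the average a little, which is exactly why the resulting profile looks spiky rather than obviously broken, so filtering before averaging matters more than it might seem.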

That’s all I have for now.  I am going to start playing around with some of the flot graphs, and once the extra temperature sensors get here, I will start adding some more features to allow for better and more dynamic control over the system.  Should be fun!

So I finally got around to setting up an Arduino with a temperature sensor in order to monitor beer fermentation.  The end goal is to have a web interface in which I can remotely monitor the fermentation.

For those of you who aren’t familiar with beer brewing, the fermentation temperature is a very crucial factor in the beer brewing process.  Not only does it impact the taste of the final product, but if the temperature is too warm, you can get an overly active fermentation which can result in a messy explosion.  Ideally, I will eventually add some type of temperature control system, but for now, simply monitoring the temperature is a good place to start.

For a handful of reasons, I have been sitting on this project for a while.  The concept is pretty simple, and hooking up a temperature sensor to an Arduino is very much a beginner level project.  The primary issue I was struggling with was how to send the data to the remote server.  While Arduinos provide a quick, easy, and cheap way to start playing around with sensors, the available options for remotely communicating with the Arduino all involve some sort of trade-off.  The available Arduino shields, whether it be wifi, bluetooth, or Xbee, were all pretty expensive.  The Spark Core is a nice option, but at the time, it was a little too expensive and the cloud service hadn’t been open sourced [1].  There were a few Chinese wifi chips floating around, but they were poorly documented and hard to get a hold of.  I had considered using some cheap RF modules, but that solution would require some sort of base station, adding to the cost and complexity.

In the end, I decided to simply hook up the Arduino to a Raspberry Pi.  I have a few Raspberry Pi’s sitting around that I use as servers.  Hooking them up to the Arduino was just a matter of plugging in the USB cable and reading the serial output.  I think in this case, I was letting the ‘perfect be the enemy of the good’.  I can always continue to research the other options and make adjustments as new or better options become available.  But for now, I am sticking with the Arduino -> RasPi -> Server solution.

The parts list is pretty simple, links and prices provided:

So all in, the parts list comes out to ~$70.  Of course your costs will vary depending on what else you need to buy.  If you were starting from scratch, I am not sure I would recommend this setup.  As mentioned, the main reason I went with this setup was because I already have a few RasPis sitting around, so my costs for this project were around $15.  It should be noted that using the RasPi as part of the fermentation monitoring setup does not impact my ability to continue to use them as servers; I simply set up separate tmux sessions for each application.  So if you already have a RasPi sitting around, this is a good option to get things up and running quickly.

Another issue I ran into was my initial choice of temperature sensor.  I was originally using the TMP36, a cheap analog temperature sensor.  It was working fine on the breadboard, but when I soldered some extension wires to it, the signal got very jumpy.  After some research, it turns out that you need to do some serious signal filtering when using analog sensors with long extension wires.  I played around with adding some resistors to the circuit to clear up the signal, but eventually decided to spring for a digital temperature sensor (the DS18B20 listed above).  On a tangential note, one of the downsides to hardware projects (as opposed to software) is that if you find yourself in need of parts or components, you can lose a lot of momentum while waiting for them to ship to you.  In this case, I ordered the temperature sensor from dx.com because it was cheaper and came with free shipping.  The catch was that it shipped from Asia and took over a month to arrive, which killed the momentum on the project considerably.  Definitely something to keep in mind for future hardware focused projects.

  • Beer Explosion!!! Fortunately I was working from home this day, so I caught it early, but this is what happens when you get an overactive fermentation. The airlock get blocked up with foam, the pressure builds (notice the bulging lid), and eventually blows the lid off, possibly covering your walls with beer! Using a blow off tube can help prevent this (although isn't a guarantee).

Once I had a working Arduino/temp sensor combo, it came time to write the code.  For a simple project like this, there is plenty of sample Arduino code that will do exactly what you need.  I personally prefer to try and write the code myself using the tutorials and example code as a guide.  Often the resultant code will end up looking pretty similar to the example code, but I think it provides a much better learning experience to at least attempt to do it yourself.

The Arduino code was pretty straightforward.  Using the OneWire library for Arduino, I was able to piece together an Arduino sketch that read the data from the sensor, converted the data to a temperature value, and then printed the temperature to serial output.  Before this project, I had only played around with analog sensors, so working with a digital sensor was a nice change of pace.

OneWire ds(pin);
//...//
ds.select(addr);
ds.write(0x44, 1);  // send conversion instruction
delay(1000);
present = ds.reset();
ds.select(addr);
ds.write(0xBE);  // read data from scratchpad

With an analog sensor, you simply read the voltage from the signal wire, and then use a formula to convert that signal into some form of data.  One wire digital sensors require two commands to get the data.  The first command, ds.write(0x44, 1);, instructs the sensor to perform an internal temperature conversion and write the result to a scratchpad (a small piece of RAM on the sensor).  In order to give the sensor time to perform the conversion, we issue a delay(1000), and then we send the read command with ds.write(0xBE); and pull the scratchpad bytes back with ds.read().  The ds.select(addr); call is used to address a specific device.  The cool thing about these sensors is that you can daisy-chain them (which is why you need the select command), so when I get more time I will definitely be exploring this, because it would be nice to have multiple temperature sensors running off the same Arduino.  That way I can monitor the ambient temperature as well as additional fermentation vessels.
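The first two scratchpad bytes encode the temperature as a signed 16-bit value in steps of 1/16 °C (at the sensor's default 12-bit resolution). The decoding step looks roughly like this, written in Python rather than C for readability:

```python
def scratchpad_to_f(lsb, msb):
    """Convert the first two DS18B20 scratchpad bytes to Fahrenheit.
    The raw reading is a signed 16-bit integer, LSB first, in units
    of 1/16 of a degree Celsius."""
    raw = (msb << 8) | lsb
    if raw & 0x8000:            # sign-extend for below-zero temperatures
        raw -= 1 << 16
    celsius = raw / 16.0
    return celsius * 9.0 / 5.0 + 32.0
```

As a sanity check, the sensor's power-on scratchpad value of 85 °C decodes to 185 °F, which is one reason suspiciously specific readings like that are worth filtering out.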

After uploading the sketch to the Arduino, I got started on the Python code.  Since I was using a RasPi to interface with the Arduino, it would have been easy enough to simply read the data from the serial port, and then set up a Flask server on the RasPi that could be accessed from the internet.  However, I eventually would like to add a handful of additional temperature sensors to the project, possibly in different locations, so I felt it was a good idea to learn about and tinker with socket programming in python.

The code for this project is available on github.  The Arduino sketch is located in the tempsensor directory (tempsensor.ino).  The sketch can be uploaded to the Arduino using any computer that has the Arduino IDE, and once the sketch is loaded, the Arduino can be hooked up to the RasPi (you do not need to install the Arduino IDE on the Pi; the Arduino will start running the sketch once powered via the USB).  The serial2socket.py program is run on the RasPi, and will pull the temperature data from the Arduino and send it over the network using a socket connection.  The socket_server.py program is primarily for testing purposes.  The program can be run on a second computer or server and used to verify that the RasPi is successfully sending the temperature data across the network.  See the repo readme for more information on setting up and running the code.
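Conceptually, serial2socket.py is just two steps: parse a line of the Arduino's serial output into a number, and push a reading down a socket. A stripped-down sketch of those two steps (the function names and one-reading-per-line format are my shorthand, not necessarily the repo's exact structure):

```python
import socket

def parse_serial_line(raw):
    """Decode one line of the Arduino's serial output (a temperature
    followed by a line ending) into a float."""
    if isinstance(raw, bytes):
        raw = raw.decode("ascii")
    return float(raw.strip())

def forward_reading(value, host, port):
    """Send one averaged reading across the network as a plain text line."""
    conn = socket.create_connection((host, port), timeout=10)
    try:
        conn.sendall(("%.2f\n" % value).encode("ascii"))
    finally:
        conn.close()
```

In the real program, the parsing side feeds the 60-sample averaging loop, and the forwarding side is wrapped in the error handling described below.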

While I have played around with the try/except pattern in python before, socket programming relies heavily on the pattern due to the potential connection issues that are typical when communicating across a network.  In order to handle these potential issues, I wrote my own custom error handler.

import datetime
import logging
import sys

def error_handler(errmsg, message, stop=True):
    """
    Handle errors for the main function.
    errmsg: the exception error message
    message: additional message
    stop: default=True, stops execution of the code
    """
    message += " Error Code: " + str(errmsg[0]) + " - " + str(errmsg[1])
    print message
    rightnow = str(datetime.datetime.now())
    logging.debug(rightnow + ": " + message)
    if stop:
        sys.exit()

In order to ensure some robustness in the program, it was necessary to allow the program to continue running after a failed connection attempt.  I chose to implement a simple counter that tracks the number of consecutive failed attempts, and if there are 10 failures in a row, exits the program.  The error handler allows me to specify whether to exit the program or not, and I can also pass custom error messages that are then printed to the log.  That way I can keep an eye on any issues that might be cropping up and make adjustments to the heuristic as necessary.  So far, my tests have been running pretty smoothly though.
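The counter logic itself is tiny. Isolated from the socket code, the heuristic amounts to something like this (an illustration, not the code as it appears in the repo):

```python
def should_keep_running(attempts, max_failures=10):
    """Scan a sequence of send attempts (True = success, False = failure)
    and report whether the sender should still be alive at the end,
    mirroring the 'exit after 10 consecutive failures' rule."""
    failures = 0
    for ok in attempts:
        if ok:
            failures = 0          # any success resets the streak
        else:
            failures += 1
            if failures >= max_failures:
                return False      # where the real code calls error_handler
    return True
```

The key design point is that the counter resets on success, so transient network blips never accumulate into a shutdown; only a sustained outage does.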

As I mentioned previously, the socket_server.py code is primarily meant for testing purposes. I am currently working on another git repo that will contain more robust socket server code as well as the web application code. The reason for keeping these repos separate is because I will probably be updating the application code fairly frequently, and having a bunch of copies of the application code sitting on the sensors isn’t really necessary.

That’s all I have for now.  I am pretty excited about this project.  One of my goals is to eventually turn the sensor-server communication into a full blown API with an authentication system and rate specifications (so you can specify how frequently you want to send/update the temperature data).  I will be posting about the web portion of this project shortly, so stay tuned.

[1] Since I originally evaluated Spark’s offerings, they have released a cheaper chip, the Photon, and have open sourced their cloud software.  Something to keep in mind for future versions of this project.

So one of my goals for the new year is to better document and present my side projects.  I have a habit of playing around with an idea and getting a very bare bones prototype, and then letting it sit and languish.  So to kick things off, I present to you Surf Check.

One of the first python programs I ever wrote was a surf scraping program that pulled surf forecast data from Swellinfo.com, a surf forecast website, for 6 different locations and printed out a condensed surf report for all of the locations on a single page.  Besides getting experience with python, the main reason I wrote the code was because it was a pain to try and flip through a handful of surf forecast pages on the website.  The website is loaded with graphics and ads, and it is not easily navigable.  So a quick check of the surf forecast would end up taking over 5 minutes, accounting for page loads and navigation.  I figured building a scraper to grab the data I wanted and condense it down to one page was a perfect first project.

I wrote the initial scraper around March 2013, when I was just getting started with Python.  Over time I tinkered around with the program, and eventually decided to re-write it and turn it into a small web page to make it easier to access.  So I added a flask server, re-wrote the scraper code, and set up a simple frontend with a Jinja template serving basic html.

surfappscreenshot

Screenshot of Surf Check Development

Comparing the before and after, I was able to make some pretty big improvements to the original code.  The original scraper was over 160 lines, and the new project is ~140 lines, including flask server, html template, and scraper.  Of course the comparison is thrown off by the fact that for some reason when I wrote the original program, I couldn’t get Beautiful Soup (a.k.a. bs4, a python html parser) to work.  My guess is it was due to my unfamiliarity with object oriented programming and python in general, but I did a weird workaround where I saved the bs4 output to a text file, imported the text file and then parsed the text to get what I needed.  Ahhh, yes, the things we do when we are inexperienced!  Makes me cringe now, but it is a good lesson.  Had I gotten bs4 to work the first time around, I am pretty sure the scraper code would have been pretty similar to my final version.

A quick note on the code for the project.  Below is the bulk of the code that makes up the views.py file.

app = Flask(__name__)
# Keep track of the time between scrapes to prevent unnecessary requests
LAST_SCRAPE = datetime.datetime.now()
# Get an initial set of scraped data
SPOT_CONDITIONS = run_scraper()
print LAST_SCRAPE, SPOT_CONDITIONS
# Interval between scrapes
DELTA = datetime.timedelta(hours=4)

def get_cond_data():
    """
    Returns a dictionary of spot conditions. Uses a global to save the forecast
    data between requests.  If the app has just been initialized, it will run
    the scraper, otherwise, it will re-run the scraper if the last scrape is
    over 4 hours old.
    """
    global SPOT_CONDITIONS, LAST_SCRAPE
    now = datetime.datetime.now()
    if now - LAST_SCRAPE > DELTA:
        SPOT_CONDITIONS = run_scraper()
        LAST_SCRAPE = now
    return SPOT_CONDITIONS

@app.route('/')
def surfs_up():
    """ Returns surf forecast. """
    spot_conditions = get_cond_data()
    return render_template('main.html', last_update=str(LAST_SCRAPE),
                           spots=spot_conditions)

Scraping the forecast data from the surf website takes a while, ~9 seconds for all six pages.  If I was expecting a lot of traffic, I would set up a scheduler that would automatically scrape the site in the background so that the data would be available immediately when a request hit the server.  However, as this is just a small project that I am doing for fun, I don’t want to hit Swellinfo’s servers with excessive scrape requests.  So I decided to scrape only when a request was made to my site.  The obvious downside is that this results in a really long load time for the page, as it has to wait for the scrape to finish before it can serve the data.  To mitigate this issue slightly, and to further limit requests to Swellinfo’s servers, I store the forecast data for a period of time (surf forecasts typically only get updated every 12 hours or so).  At the moment, I have that period set to 4 hours, so if the scraped data is over four hours old when a request hits my homepage, the app will re-scrape the data; every homepage request in the next 4 hours will then get served the saved scraped data.  Additionally, to keep things simple I chose to forgo any persistent data storage.  So at the moment, the scraped data gets stored in a global variable (SPOT_CONDITIONS).  While using global variables in Python is looked down upon, I thought it was an interesting way to change up the typical Flask MVC (Model-View-Controller) model.  Essentially I have just reduced it down to VC.

I thought that code snippet was fun because, despite its apparent simplicity, it hides some complex design decisions.  In the future, it might make sense to implement a more robust scraping mechanism, perhaps by figuring out the exact times that the Swellinfo surf forecasts get updated, and then only re-scraping if the data is old (instead of arbitrarily using the 4 hour cutoff).  I have a few ideas for improvements or features I would like to add to the site, but I also have some more ambitious projects on my plate that are hogging my attention, so we’ll see if I get around to it.  If you want to check out either the old scraper code (my first python program) or this current iteration of the project, the links are here:  My First Python Program!!!  Surf Check Github Repo!!!!

Today, sadly, I will be pulling down the CL-App site.  The site has been somewhat non-operational for a while now, as the IP address has been blocked by Craigslist.  I never meant for the site to be anything more than a demo project, so it was surprising that Craigslist was able to detect the activity.   Anyways, for posterity’s sake, I am going to do a quick overview of the site with some screenshots.

Disclaimer: Before launching into the overview, I think it is worth discussing my thoughts on web scraping.  While I think scraping is a very handy tool to have, I also think it needs to be used responsibly.  If there is an API available, that should always be used instead.  I built the app for my own entertainment and education; it was a great way to learn how to stitch together a number of python libraries and frameworks into a fully functional site.  I had a long list of features and improvements that I thought would be cool to implement, but in the end, because I knew Craigslist is hostile to scrapers, I felt it would be best to leave it as is and move on to other projects.

A Brief Overview

The purpose of the site was to alert users when new posts were created matching certain search criteria.  The alerts could be sent via e-mail or text message.  In order to prevent too many requests to Craigslist, the app would check for new updates once per hour, but the user could specify longer intervals between alerts if desired.  Below are some slides showing screenshots of the site.
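The alert-throttling logic described above amounts to one comparison: the user can stretch the interval between alerts, but the floor is the hourly scrape cadence.  A rough sketch, with hypothetical names (`due_for_alert` is not from the actual app):

```python
import time

MIN_CHECK_INTERVAL = 60 * 60  # the app polled Craigslist at most once per hour

def due_for_alert(last_alert_time, user_interval_hours, now=None):
    """True if enough time has passed to send this user another alert.

    The effective interval is the user's preference, clamped so it can
    never be shorter than the hourly check cadence.
    """
    if now is None:
        now = time.time()
    interval = max(user_interval_hours * 3600, MIN_CHECK_INTERVAL)
    return now - last_alert_time >= interval
```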

  • Login Page

As you can see, the site was pretty bare bones.  Considering the main purpose of the app was to send alerts, I didn’t feel the need to invest much time in the frontend.  I did add a feature that returned an xkcd-style matplotlib plot comparing the number of new posts in the last 24 hours to the average number of new posts by hour.  Had I put more time into this project, adding some more analytics would have been fun.
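For anyone curious, matplotlib ships with the hand-drawn xkcd style as a built-in context manager, so a plot like the one described above can be produced with very little code.  This is a sketch of the idea, not the app’s actual plotting code; the function name and arguments are my own invention.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed on a server
import matplotlib.pyplot as plt

def plot_post_counts(recent_counts, average_counts, out_path="posts.png"):
    """Bar chart in xkcd style: posts in the last 24h vs. the hourly average."""
    hours = range(len(recent_counts))
    with plt.xkcd():  # everything drawn inside this block gets the sketchy look
        fig, ax = plt.subplots()
        ax.bar(hours, recent_counts, label="last 24 hours")
        ax.plot(hours, average_counts, color="red", label="average")
        ax.set_xlabel("hour of day")
        ax.set_ylabel("new posts")
        ax.legend()
        fig.savefig(out_path)
    plt.close(fig)
```

In the app, the saved image would then be served back to the user on the alert status page.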

If I had to rate the usefulness of this application on a scale of 1 (useless) to 10 (useful), I would give it about a 4 or 5.  I tested it out with a handful of different searches, and never found the alerts to be that useful.  Because I limited the alert interval to a minimum of an hour, this service wasn’t very useful for items that moved very quickly (e.g. free items); in those cases, you would probably want a minimum of 15 minutes between scrapes.  I do think it would work well for someone looking for a very specific or rare item, but I never really bothered to test that hypothesis.  One feature that I found useful was that the alert status page would display the last 10 posts that fell under your search criteria.  Switching between my different alert status pages was a very convenient way to quickly check if there was anything interesting under any of the searches.  In essence, it was a page of bookmarks for various searches, and I found that to be pretty useful.

It should be noted that Craigslist itself allows you to save searches and send e-mail alerts, something I only noticed the other day.  You need to have an account with Craigslist, and from what I can tell, you cannot control the e-mail alerts very much.  I just started an alert on their site, so I can report back on how it works.

Anyways, that is all for now.  It was a fun project and a good learning experience, but I am glad to be putting this one away.  I still have a few projects that involve web scraping, and I will continue to dabble with it from time to time, but Craigslist is notoriously hostile to scrapers, so a cease and desist from them is one less thing I have to worry about.

In a previous post, I discussed some of the issues I was having with running Linux on a Chromebook (using crouton). The main issue was the small hard drive: 16 GB doesn’t give you a lot of room to work with, and accounting for the OS, you are left with about 7 GB of usable disk space. For even lightweight development, you will run out of space very quickly. Since I wrote that post, I have had a few personal developments regarding the issue that I think are worth discussing.

One of the main advantages of the Chromebook is the price. While Macs are great, laptops are inherently meant to be portable devices, and are therefore subject to high levels of risk for both damage and theft. At a cost of ~$250, I could go through 4 Chromebooks before reaching the cost of the lowest-priced MacBook Air. And as luck would have it, I ended up destroying (sort of) my Chromebook. While brewing beer with some friends, a pint was knocked over and spilled all over the keyboard. Surprisingly, after drying the laptop out, it appeared to be alright, minus a few sticky keys and some weird behavior from the track-pad. But about a week later, I spilled a cup of coffee on the thing (maybe it was a sign?), and although the keyboard worked initially, it completely crapped out a few weeks later.

How not to brew beer! I highly recommend keeping laptops in a safer location.

The Chromebook still worked with a USB keyboard and mouse, but considering I move around a lot and like to work from coffee shops and libraries, it had lost a lot of its usefulness. A replacement keyboard would be around $50. While that isn’t too bad, there were no guarantees that it would fix the issue, and considering I would be doing the repair myself, I figured it would be best to move on. After all, one of the main reasons for using a Chromebook was their relative expendability[1]. There were a handful of new Chromebooks being released, so I decided to ‘upgrade’. After some research, I chose to go with the Toshiba Chromebook 2 4GB model. The main reasons I went with the Toshiba were the 1080p screen, the slightly larger size (13-inch), and the availability of a 4 GB RAM model (a 2 GB model is also available). The higher resolution and larger screen were a very nice upgrade; my Samsung’s 11-inch screen felt very cramped in comparison. I sprang for the extra RAM because ChromeOS tends to resort to swap memory fairly often, and that slows things down. The Toshiba still only comes with 16 GB of hard disk space, so I also sprang for a 64 GB SD card. Including taxes, the laptop and card came in a little under $400. While this puts it a little outside of the ‘expendable’ range, I felt it was worth it.

My initial impressions were very positive. The 1080p screen is gorgeous. The 13-inch size was just right: still small enough to comfortably fit in my backpack, but big enough to have a terminal and editor open side-by-side. To get around the disk space issue, I chose to follow the instructions on this blog to get a crouton Linux environment running smoothly off of the 64 GB SD card. And now that I had 64 GB to work with, I went ahead and did a big install, downloading all of the standard Ubuntu desktop applications and features. I even set up a second environment that I could use to experiment with the Anaconda Python distribution. I was pretty stoked.

The Toshiba, look at that screen!

But alas, the honeymoon didn’t last much longer than a few days. It turns out the current version of crouton doesn’t play well with externally mounted environments: when the laptop goes to sleep, the external media gets unmounted and comes back in read-only mode. Essentially, this freezes the session and requires a reboot. All things considered, it’s not a huge deal; you just have to remember to shut down your Linux session before closing the laptop. But it is annoying, especially considering the boot time is pretty long due to the fact that the system files are read off of the SD card. One thing I might try is keeping the chroot on the internal SSD and symlinking the user and application folders to the SD card. I am not sure if this would work, but it is something to consider. Alternatively, I could do a bare-bones command-line install on the SSD and use that when I need quick access, saving the full Ubuntu desktop environments for when I am putting in some solid hours. Fortunately, I now have two Chromebooks, so I can test it out on the old one first!

Anyways, overall, I am still pretty happy with the Chromebook. It’s light, has super long battery life (I only bring my charger if I am going to be gone for more than a day, pretty awesome!), and most importantly, it runs Linux! As a word of advice, if you are looking at Chromebooks for the purpose of running a Linux distro, I would consider looking into a model with a 32 GB SSD. With 32 GB, you could easily fit a large Ubuntu chroot on the SSD and use an SD card for external storage. If I find a good workaround for the sleep/unmount issue, I’ll be sure to post it here.

[1] I am in no way endorsing excessive levels of consumption. As stewards of our planet and our environment, I think it is important that we consider the implications of our consumption and our waste. My previous experiences with Macs have led me to believe that despite their seemingly better build quality, the machines don’t have that much better longevity. The fact is, most of the internals for the electronics we buy are manufactured and supplied by many of the same companies, and the only real differentiation is premium materials/features and branding. And you will be happy to know that the Samsung Chromebook is still in service: I have it set up as a desktop and am playing around with some more experimental Linux distros on it. At some point down the road, it would be fun to attempt to repair/replace the keyboard, once I get some other projects off my plate.