Online courses from Stanford University

FYI: Stanford University will start offering some free public classes on its web site in January 2012. For more details, check the following links:


Civil Engineering

Electrical Engineering

Computer Science


Infocast 8 + Chumby OpenEmbedded (Angstrom Linux Kernel)

Over the last few days I was playing with an Infocast 8″; the goal was to replace the base OS provided by Insignia Inc. with the Angstrom Linux kernel built by the Chumby OpenEmbedded package builder. For some reason there are no similar images available for download.

FTDI Interface


After doing some minor tweaks to the chumby-oe project, getting some help from Guy Carpenter (thanks!) and fixing some BitBake files, I managed to create a new bootable image with Wi-Fi support (I mention this because the Marvell Wi-Fi chip requires some specific firmware files).

Infocast 8 booting Angstrom Linux Kernel

Now, connecting to the Wi-Fi AP is a little tricky; you need to run the following commands in this exact order:

# iwpriv mlan0 setregioncode 0x10
# ifconfig mlan0 up
# iwconfig mlan0 mode managed
# iwconfig mlan0 key YOUR_WEP_KEY
# iwconfig mlan0 key on
# iwconfig mlan0 essid YOUR_ESSID
# udhcpc -i mlan0

After that you will be able to connect to your AP and have network access.

You can download the ROM image from here, or if you prefer, you can browse the whole content. Once you have the ROM image, you need to write it to the internal 2 GB SD card; make sure to manually umount each mounted partition of the card first, then run:

# dd if=rom-chumby-silvermoon-chumby-starter-image.img of=/dev/sdX bs=8M

Monkey? NodeJS? When & where…

I cannot omit the huge impact that the NodeJS project is having nowadays as a server-side solution, given its performance and features for new projects. As I wrote yesterday, I attended Startechconf, and at least two companies are putting effort into moving to NodeJS as the backend solution for parts of their web infrastructure: Yahoo and ForkHQ.

I did not know much about NodeJS, so I dedicated some time to reading the available documentation and papers. Being a server-side web guy, I would like to share my opinion, because I hear a lot of talk about how everybody must move to NodeJS.

The primary feature of NodeJS is that it provides a framework based on a language handled by thousands of people: JavaScript. If you are a real web developer, you know what JavaScript is and how to deal with it, so you can jump directly from the client to the server side and write your own implementation on top of an event-driven infrastructure, with reduced I/O overhead and better performance than the dynamic content generators available for Ruby, Python or PHP. It is a pretty interesting technology that opens new possibilities for improving backends, but you must know when and where to use it.

The good thing is that Node abstracts you from the dirty low-level concepts of a web server: threading, shared memory, asynchronous sockets, reduced I/O, etc. But this has a cost; it is not magic. It is just cool because it works, has demonstrated that it performs very well, and enjoys a level of trust since it is written on top of the V8 JavaScript engine supported by Google. The cost of an event-driven solution is that if, for some reason, the program hits an unhandled exception, the whole service can block or even crash depending on the case, so you must be aware of what happens in that scenario. As an example, if some Apache context fails, Apache kills the process or thread and starts a new one, which is not the case with a common event-driven web server. What happens if you have 1000 connections transferring data and the program fails? It would be critical, and these things happen when working in high-load production environments; if you have 50 requests per day you are safe and you can stop reading now :)
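To illustrate the point, here is a small sketch (not code from any of the projects mentioned): an exception thrown from a callback in Node's event loop would normally terminate the whole process, dropping every in-flight connection, unless a last-resort handler like process.on('uncaughtException', ...) is installed:

```javascript
// Sketch: one bad callback can kill an event-driven server.
// Without the 'uncaughtException' handler below, the throw inside
// the timer callback would terminate the whole Node process and
// drop every connection it was serving.
var survived = false;

process.on('uncaughtException', function (err) {
  // Last-resort handler: the process stays alive, but any state
  // tied to the failed callback is simply lost.
  survived = true;
  console.error('caught:', err.message);
});

setTimeout(function () {
  throw new Error('boom'); // simulates a bug in a request handler
}, 10);
```

Compare this with the Apache model described above, where only the failing process or thread dies and a new one is started in its place.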

Node fits fine if you have thousands of incoming connections and your computing time per request is reduced, but if you will do some complex work querying a database, hitting memcache or similar, you should start considering different options.

From now on I will talk about solutions for really high performance. Node is fast, but you cannot compare it with Apache, because Apache is the slowest web server available; compare it with Nginx or Monkey instead. I will now run a test using the Apache Benchmark utility, comparing the NodeJS hello-world example against Monkey serving a file that contains the same Hello World message; the benchmark utility will perform 100,000 requests through 5000 concurrent connections.

NodeJS Benchmark

edsiper@monotop:/home/edsiper/# ab -n 100000 -c 5000 http://localhost:8888/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd,
Licensed to The Apache Software Foundation,
Benchmarking localhost (be patient)

Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests

Server Software:
Server Hostname:        localhost
Server Port:            8888
Document Path:          /
Document Length:        11 bytes

Concurrency Level:      5000
Time taken for tests:   9.403 seconds
Complete requests:      99747
Failed requests:        0
Write errors:           0
Total transferred:      7481025 bytes
HTML transferred:       1097217 bytes
Requests per second:    10608.48 [#/sec] (mean)
Time per request:       471.321 [ms] (mean)
Time per request:       0.094 [ms] (mean, across all concurrent requests)
Transfer rate:          776.99 [Kbytes/sec] received


The NodeJS server was able to serve 10,608 requests per second and took 9.4 seconds to complete the 100,000 requests. Now let's see how Monkey did…


Monkey HTTP Daemon Benchmark

edsiper@monotop:/home/edsiper/# ab -n 100000 -c 5000 http://localhost:2001/h.txt
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd,
Licensed to The Apache Software Foundation,
Benchmarking localhost (be patient)

Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests

Server Software:        Monkey/0.30.0
Server Hostname:        localhost
Server Port:            2001
Document Path:          /h.txt
Document Length:        13 bytes
Concurrency Level:      5000
Time taken for tests:   5.718 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      20300000 bytes
HTML transferred:       1300000 bytes
Requests per second:    17489.54 [#/sec] (mean)
Time per request:       285.885 [ms] (mean)
Time per request:       0.057 [ms] (mean, across all concurrent requests)
Transfer rate:          3467.16 [Kbytes/sec] received


Monkey did 17,489 requests per second and took 5.7 seconds to serve the 100,000 requests. Oops! :)


The impressive results are even better than they look, because Monkey performed 100,000 I/O operations to retrieve the file from the hard disk and also sent a couple of extra bytes on each response (Monkey does not cache file contents or metadata). Serving a file is a slow process due to I/O, so later I will run the same test serving fixed content through a plugin (something similar to what Node is doing in this example).

What I am trying to say here is that, depending on what you are trying to accomplish and the complexity of your backend, NodeJS can be the solution for your environment, or you could need something even more scalable like Monkey. The learning curve of NodeJS is short and the learning curve of Monkey is a little steep, but the latter provides better performance because everything is well written in C; likewise, any extension through the C API interface requires some knowledge that NodeJS hides from you. You have to balance goals, knowledge, learning curve and deadlines.


  • Joe provided me with new code to launch Node with multiple workers, which increases Node's performance; the numbers above have been updated.

Startechconf was a great event!

Last weekend I attended @startechconf to give a talk with Jonathan Gonzalez about the Monkey Project, called “Monkey, HTTP Server everywhere”. But let's talk about the event itself…

The physical venue was the Santa Maria University in Santiago. The chosen place was really nice: one big conference room that was split into three parts to later hold the parallel track sessions. Outside the conference room there is a courtyard where you can talk with each other, take some sun (others, a nap), have lunch and maybe drink some beer (more on that below).

There were different teams helping to run the event; I can remember people from the following teams: security, support, presenters, personal assistance (for international speakers who do not speak Spanish), etc. I would count no fewer than 120 people helping with this, so the event was something big. Just to mention: when I arrived last Friday morning there were about 600 attendees, and after accreditation… about 800.

I met very nice speakers, like Caridy Patino (Yahoo Senior Search Engineer), Charles Nutter (leader of JRuby), Stephanie Rewis (founder of W3Conversions), Mark Ramm (technical leader at …), Jano Gonzalez (Continuum developer), Hannu Krosing (PostgreSQL hacker) and Tomas Pollak (creator of the Prey Project). All the speakers were very open and nice; some of them dedicated a lot of time to talking with the attendees and got involved in different activities around the event, which is really good for everybody. It is not common that throughout a whole event the speakers dedicate time to talk to everyone (and even drink a beer).

I went to a couple of sessions and would like to highlight four of them. Caridy talked about how they are implementing Node.JS at Yahoo; before the talk we discussed a lot about what Node.JS is and where it is going… it seems a strong competitor for web services is already around and positioning itself in huge production environments. The talk was pretty good, covering different details about business requirements as well as technical solutions. Stephanie (who opened the event and gave the first keynote) talked about CSS3 (I have to admit that I am not a fan of CSS or HTML; I use them… but well…); she gave a master class on the new features of CSS3 and tips for building nice user interfaces, as well as how to deal with different browsers. Scott Chacon (author of the Pro Git book and VP of GitHub) gave an excellent talk about Git with a focus on trees and the reset command, with very nice slides and a well-done presentation. And I would consider the talk by Tom Preston-Werner (GitHub CTO) the one with the biggest personal impact on me. He talked about optimizing for happiness in your daily job, and how external and internal motivators act directly on your happiness (or maybe not). I felt very identified when he described a simple example: you go to the office, you do your work, you go back home and then you work on (or hack) your personal projects… and I have to admit that I am that guy.

On the non-technical side, it was really impressive that sponsor companies provided free beer for all attendees after each session day! Both days ended with a beer party in the courtyard. This is not common, trust me, and it is very valuable! Not just for the free beer, but for a different context where all attendees (including speakers and organizers) could interact in a different way and relax. Also, I cannot omit mentioning the effort put in by Movistar and (especially) Microsoft to make more than 800 hackers disconnect from their laptops and geek devices to enjoy something different; if you attended the event you know what I am referring to :)

This is the first time that I do not see any bad points in a conference. The organization did an excellent job; sometimes they looked tired but they always kept moving, putting all their energy into having a successful event, and it was. I just want to say THANK YOU for the opportunity to be part of this and to enjoy a nice two-day event.

I am sure that Startechconf 2012 is coming, and I cannot imagine how it could be better than the first edition. If you could not attend this year, please consider preparing for the next one, because it will rock!