I cannot ignore the huge impact that the NodeJS project is having nowadays as a server-side solution, in terms of both performance and features for new projects. As I wrote yesterday, I attended Startechconf, and at least two of the companies present are putting effort into moving to NodeJS as the backend solution for a few projects in their web infrastructure: Yahoo and ForkHQ.

I did not know much about NodeJS, so I dedicated some time to reading the available documentation and papers. Being a server-side web guy, I would like to share my opinion, because I hear a lot that everybody must move to NodeJS.

The primary feature of NodeJS is that it provides a framework based on a language handled by thousands of people: JavaScript. If you are a real web developer you know what JavaScript is and how to deal with it, so you can jump directly from the client side to the server side and write your own implementation on top of an event-driven infrastructure with reduced I/O and better performance than dynamic content generators such as Ruby, Python or PHP. It is a pretty interesting technology that opens new possibilities to improve the backend side, but you must know when and where to use it.

The good thing is that Node abstracts you from the dirty low-level concepts of a web server: threading, shared memory, asynchronous sockets, reduced I/O, etc. But this has a cost; it is not magic. It is cool because it works, it has demonstrated that it performs very well, and it has a level of trust since it is written on top of the V8 JavaScript engine supported by Google. The cost of an event-driven solution is that if for some reason the program hits an unhandled exception, the whole service can block or even crash depending on the case, so you must be aware of what happens in that situation. As an example, if some Apache context fails, Apache kills the process or thread and starts a new one, which is not the case in a common event-driven web server. What happens if you have 1000 connections transferring data and the program fails? It would be critical, and these things happen when working in high-load production environments. If you have 50 requests per day you are safe and you can stop reading now 🙂

Node fits fine if you have thousands of incoming connections and your computing time per request is small, but if you are going to do heavier work, querying a database, hitting memcached or something similar, you should start considering different options.
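The reason is the single event loop: any CPU-bound work in a handler stalls every other connection until it finishes. A tiny sketch of my own (naive Fibonacci standing in for an expensive query or computation):

```javascript
// Node runs all handlers on one event loop; while a synchronous,
// CPU-bound call runs, no other request makes progress.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// Inside an HTTP handler, a line like this would freeze the whole
// server for every client until it returns:
//   res.end(String(fib(40)));

console.log(fib(20)); // prints 6765
```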

From this point on I am talking about solutions for really high performance. Node is fast, but do not compare it with Apache, because Apache is the slowest web server available; compare it with Nginx or Monkey. I will now run a test using the Apache Benchmark utility, comparing the NodeJS hello world example against Monkey serving a file that contains the Hello World message; the benchmark utility will perform 100,000 requests through 5,000 concurrent connections.

NodeJS Benchmark

edsiper@monotop:/home/edsiper/# ab -n 100000 -c 5000 http://localhost:8888/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)

Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests

Server Software:
Server Hostname:        localhost
Server Port:            8888
Document Path:          /
Document Length:        11 bytes

Concurrency Level:      5000
Time taken for tests:   9.403 seconds
Complete requests:      99747
Failed requests:        0
Write errors:           0
Total transferred:      7481025 bytes
HTML transferred:       1097217 bytes
Requests per second:    10608.48 [#/sec] (mean)
Time per request:       471.321 [ms] (mean)
Time per request:       0.094 [ms] (mean, across all concurrent requests)
Transfer rate:          776.99 [Kbytes/sec] received

The NodeJS server was capable of serving 10,608 requests per second and took 9.4 seconds to complete the 100,000 requests. Now let's see how Monkey did…

Monkey HTTP Daemon Benchmark

edsiper@monotop:/home/edsiper/# ab -n 100000 -c 5000 http://localhost:2001/h.txt
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)

Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests

Server Software:        Monkey/0.30.0
Server Hostname:        localhost
Server Port:            2001
Document Path:          /h.txt
Document Length:        13 bytes
Concurrency Level:      5000
Time taken for tests:   5.718 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      20300000 bytes
HTML transferred:       1300000 bytes
Requests per second:    17489.54 [#/sec] (mean)
Time per request:       285.885 [ms] (mean)
Time per request:       0.057 [ms] (mean, across all concurrent requests)
Transfer rate:          3467.16 [Kbytes/sec] received

 

Monkey did 17,489 requests per second and took 5.7 seconds to serve the 100,000 requests. Oops! 🙂

The impressive results are actually even better than they look, because Monkey performed 100,000 I/O operations to retrieve the file from the hard disk and also sent a couple of extra header bytes in each response (Monkey does not cache file contents or metadata). Serving a file is a slow process due to the I/O involved, so I will run a test later for the same case, serving fixed content through a plugin (something similar to what Node is doing in the test example).

What I am trying to say here is that depending on what you are trying to accomplish and the complexity of your backend, NodeJS can be the solution for your environment, or you could need something even more scalable like Monkey. The learning curve of NodeJS is short and the learning curve of Monkey is a little higher, but the latter provides better performance because everything is well written in C, while any extension through the C API interface requires knowledge that NodeJS hides from you. You have to balance goals, knowledge, learning curve and deadlines.

[UPDATE]:

  • Joe provided me new code to launch Node with multiple workers, which increased Node's performance; the values above have been updated accordingly.

This past weekend I attended @startechconf to give a talk with Jonathan Gonzalez about the Monkey Project, called “Monkey, HTTP Server everywhere”. But let's talk about the event itself…

The physical place where the event was held was the Santa Maria University in Santiago. The chosen venue was really nice: one big conference room that was split into three parts to hold the parallel track sessions later on. Outside of the conference room there is a yard where you could talk with each other, take some sun (others, a nap), have lunch and maybe drink some beer (more on that below).

There were different teams helping to run the event; I can remember people from the following areas: security, support, presenters, personal assistance (for international speakers who do not speak Spanish), etc. I would count no fewer than 120 people helping with this, so the event was something big. Just to mention it: when I arrived last Friday morning there were about 600 attendees, and after the accreditation… about 800.

I met very nice speakers, like Caridy Patino (Yahoo Senior Search Engineer), Charles Nutter (leader of JRuby), Stephanie Rewis (founder of W3Conversions), Mark Ramm (technical leader at Sourceforge.net), Jano Gonzalez (Continuum developer), Hannu Krosing (PostgreSQL hacker) and Tomas Pollak (creator of the Prey Project). All the speakers were very open and friendly; some of them dedicated a lot of time to talking with attendees and got involved in different activities around the event, which is really good for everybody. It is not common that speakers dedicate time during a whole event to talk to everyone (and even drink a beer).

I went to a couple of sessions and I would like to highlight four of them. Caridy talked about how they are implementing Node.JS at Yahoo; before the talk we discussed a lot about what Node.JS is and where it is going… it seems like a strong competitor for web services is already around and positioning itself in huge production environments. The talk was pretty good, covering different details about business requirements as well as technical solutions. Stephanie (who opened the event and gave the first keynote) talked about CSS3 (I have to admit that I am not a fan of CSS or HTML; I use them… but well…); she gave a master class on the new features of CSS3 and tips for providing nice user interfaces, as well as how to deal with different browsers. Scott Chacon (author of the Pro Git book and VP of GitHub) gave an excellent talk about Git with a focus on trees and the reset command: very nice slides and a well-done presentation. And I would consider the talk by Tom Preston-Werner (GitHub CTO) the one with the biggest personal impact on me. He talked about optimizing for happiness in your daily job, and how external and internal motivators act directly for the benefit of your happiness (or maybe not). I felt very identified when he described a simple example: you go to the office, you do your work, you go back home and then you work on (or hack) your personal projects… and I have to admit that I am that guy.

On the non-technical side, it was really impressive that sponsor companies provided free beer for all attendees after each session day! Both days ended with a beer party in the yard. This is not common, trust me, and it is very valuable: not just for the free beer, but for providing a different context where all attendees (including speakers and organizers) could interact in a different way and relax. Also, I cannot omit to mention the effort put in by Movistar and (especially) Microsoft to make more than 800 hackers disconnect from their laptops and geek devices to enjoy something different; if you attended the event you know what I am referring to 🙂

This is the first time that I do not see any bad points in a conference. The organization did excellent work; sometimes they looked tired but they were always moving on, putting all their energy into having a successful event, and it was one. I just say THANK YOU for the opportunity to be part of this and enjoy a nice two-day event.

I am sure that Startechconf 2012 is coming, and I cannot imagine how it could be better than the first edition. If you could not attend this year, please consider preparing for the next one, because it will rock!

I have seen open source and private web server projects claim to be faster than the others… which is funny; a web server measurement should include memory, CPU, scalability, I/O, etc. There is no perfect web server for all needs, and that is why you cannot always be faster than the others…

It would be great if someone could host a web server contest (yearly, maybe?) where projects can participate and someone neutral can measure each one under a similar setup, conditions and network environment. The goals? Make public who does better in which area; with this information it would be easier for sysadmins to make better decisions about which web server to use given the business requirements. It would also motivate each project to improve the areas that need more work.

I can imagine that after a contest, all of us who develop web servers would start thinking about more innovative ways to improve performance, and maybe propose better architectures for the web.

Who should participate, at least? The Apache foundation, Nginx, Lighttpd, Cherokee, Hiawatha, G-WAN, LiteSpeed, Monkey, etc…

If anyone is interested in a web server contest, I can share more ideas and my vision of how it should be.

In August 2011 we will launch Bizitout.com, a business website for outsourcing services, mostly known as a freelancing site. But not just a common one like the sites already available: a business site where users can really trust and get high-quality services at fair rates, free of scammers.

I will share more details shortly; for now I invite you to follow the project on Twitter:

Follow BizItOut on Twitter


As usual we have been working on the next version of Monkey. We are delivering a lot of improvements in terms of performance, scheduler fixes, decreased memory usage, stable plugins for authentication and scripting support, a new security model based on network ranges, and many more things; more details will come with the official announcement.

I have been surprised that many people from different places have joined the IRC channel to get some support and learn a little bit more about the project. A couple of them are there daily, and the best thing is that they are providing patches and suggesting improvements, which is really cool. The project's visibility is growing, and that is terrific, but it also means more work 🙂

Remembering the Monkey life cycle: when 0.9.x was around, it used a common networking model, one thread per client. In 0.10 we reworked almost everything into a model with fixed threads and asynchronous sockets. In 0.11 we introduced a simple API for plugins and a new configuration mechanism based on indented text, plus performance improvements and scripting support. In 0.12 SSL arrived and we changed the internal mechanism for handling linked lists (Linux kernel style). The 0.13 series brought fixes and performance improvements… and now? What's next, 0.13.3? 0.14? The answer is NO: we will jump directly to Monkey 0.20. The 0.1x series has ended; this new cycle brings a more mature project and we are ready to go for more.

Monkey 0.20's codename is “Maduro Frito con queso” (fried ripe plantain with cheese); check how it looks:

Picture taken in Guayaquil, Ecuador: strong food before getting drunk

So that's all for now, we will keep you posted!

Oracle has a huge commitment to the Linux kernel project; nowadays we can find a lot of Oracle projects that are merged into mainline and benefit the community. A few of them are:

  • OCFS2: Oracle Cluster File System (for generic use)
  • ASMLib: Automatic Storage Management feature of the Oracle Database
  • RDS: Reliable Datagram Sockets, which allow multiple datagram sockets between two nodes to share a single connection-oriented link
  • BTRFS: a new scalable file system focused on fault tolerance, repair and easy administration
  • A full list is available at http://oss.oracle.com

 

Part of this commitment is also the improvement of virtualization technology inside the kernel. Oracle provides a full virtualization infrastructure based on Xen and a user-space software stack for its management. I invite you to read this interesting blog post from Wim Coekaerts about two new features already available in the mainline kernel that are making the VM stack rock:

Another feature hit Mainline Linux: CleanCache / transcendent memory

This has been a really busy time… I have new roles in my life which make me feel really excited: my daughter is about four months old now (she's so lovely!), this Sunday marks six years together with my wife (two of them married, but I like to count from the beginning), almost one year in my new Oracle team, Linux Support Engineering, hacking on my Monkey as we get near the 0.14.0 release, wasting a few minutes playing Angry Birds for Chrome (grr, I cannot stop playing!), studying Linux kernel development on my own, and just starting to submit some minor Linux kernel patches.

All this requires optimizing my time (yes, Angry Birds is not the best example) and balancing daily activities… it's a difficult task, but possible 🙂

[UPDATE May 25, 2011]
My first two minor patches for the kernel's Kbuild (gconfig) are going to mainline: https://lkml.org/lkml/2011/5/25/417, yay!

The Monkey Project team is looking for companies and individuals who require a really fast and lightweight HTTP server for their products. Our solution is based on open source technologies, delivering a high-quality product supported by our community as well as the main core developers.

Monkey provides an extensible HTTP framework for your needs; we are very careful about performance and low resource usage. On every release we perform strong QA and different tests to assure backward compatibility.

Here are a few reasons why Monkey is the right choice:

  • Binary size is 55KB
  • Event-driven (asynchronous sockets with fixed threads)
  • HTTP/1.1 compliant
  • Built on top of Linux 2.6 features (it depends on specific Linux syscalls)
  • Plugins support: fully modular, extensible through the C API
  • Indented configuration mode (configuration for human beings)
  • CGI supported through our Palm protocol server implementation

 

Monkey has been tested on the following devices:

  • Gumstix boards
  • Beagleboard
  • Android based phones/tablets
  • Laptop/PC

 

If you need to integrate a strong and lightweight web server into your Linux software stack, evaluate Monkey and let us know; we can help you out with the process.