1. Mar 12

    Better network performance in Docker/Test Kitchen with VirtualBox on Mac

    Posted in : virtualbox, docker, test-kitchen, and mac

    If you’ve experienced really slow downloads from within VirtualBox on Mac, chances are you’re using the default NIC for your NAT interface.

    I’ve seen docker pull take ages whenever a layer is more than a couple of MB, and the Chef installer take more than 10 minutes to download its package…

    Here’s how to fix it in docker-machine and test-kitchen.

    For docker-machine

    First, check your docker-machine VM’s name:

    docker-machine ls

    It will give you something like the following (it could differ depending on your config):

    NAME           ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
    default        -        virtualbox   Running   tcp://192.168.99.100:2376           v1.12.5

    If it’s Running, stop it first:

    docker-machine stop default

    Then change the NIC to the PCnet-FAST III (Am79C973) instead of the default Intel PRO/1000 MT Desktop (82540EM):

    VBoxManage modifyvm "default" --nictype1 "Am79C973"

    And finally start it; it will now use the new NIC with (hopefully) improved speed:

    docker-machine start default
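
    You can double-check that the change was applied by inspecting the VM settings (a quick sanity check; the grep below is just one way to filter the output):

    VBoxManage showvminfo "default" --machinereadable | grep nictype
    # the NIC type should now be reported as Am79C973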

    For test-kitchen

    Update your kitchen.yml or kitchen.local.yml to have your vagrant driver include the customize config below:

    driver:
      name: vagrant
      customize:
        nictype1: "Am79C973"
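
    Note that the new NIC type will only take effect the next time the VM is (re)created or booted, so you may need to recreate (or at least restart) any instances that already exist, e.g.:

    kitchen destroy
    kitchen create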

    Enjoy using all your available bandwidth!!


  2. Nov 18

    Senlima - Knight Animation

    Posted in : senlima, game, preview, unity, 3d, animation, and knight

    This article has been moved over to our dedicated Openhood Games blog.


  3. Nov 17

    Senlima - Upcoming Game Preview

    Posted in : senlima, game, preview, unity, and 3d

    This article has been moved over to our dedicated Openhood Games blog.


  4. Jun 28

    NGinx Useful Tips

    Posted in : nginx and sysadmin

    During an epic debugging session with an NGinx configuration for a project,
    I discovered some useful, but not so common (at least to me), configuration options.

    Debug

    NGinx does not provide much help (by default) when it comes to debugging internal redirects,
    proxying and other rewrite rules. But it comes with a handy debug module which gives you
    a lot more info.

    You have to enable it at compile time:

    ./configure --with-debug
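
    If you’re not sure whether your existing binary already includes it, the configure arguments it was built with can be checked like this:

    nginx -V 2>&1 | grep -o with-debug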

    And then in your configuration you can set:

    http {
      # At the http level, activate the debug log for everything
      error_log /path/to/my/detailed_error_log debug;
    
      server {
        # At the server level activate debug log only for this server
        error_log /path/to/my/detailed_error_log debug;
      }
    
      server {
        # At the server level, omitting the debug keyword disables debug logging for this server
        error_log /path/to/my/error_log;
      }
    }

    You can even enable debug logging for only certain client connections (this also requires the --with-debug build):

    error_log /path/to/my/detailed_error_log debug;
    events {
        debug_connection   10.0.0.1;
        debug_connection   10.0.1.0/24;
    }

    Source: NGinx Debugging Log

    Proxying

    NGinx is well known for its proxy/reverse-proxy/caching-proxy capabilities, but
    you’d better know how some things work so you don’t waste your time on odd
    behaviors.

    When proxying to a remote host by URL, be aware that NGinx uses its own internal resolver for DNS names.
    This means that in some cases it can’t resolve domains unless you tell it which DNS server to use.

    Let’s take an example:

    server {
      # Let's match everything which starts with /remote_download/
      location ~* ^/remote_download/(.*) {
        # but only when coming from internal request (proxy, rewrite, ...)
        internal;
    
        # Set the URI using the matched location
        set $remote_download_uri $1;
    
        # Set the Host header to send in the proxied request (useful if the remote
        # uses vhosts but you're reaching it by IP address in the proxy_pass config)
        set $remote_download_host download.mydomain.tld;
    
        # Set the URL we want to proxy to
        # When using the server's IP address, be sure to set $remote_download_host
        set $remote_download_url https://10.0.0.1/$remote_download_uri;
        # Or using the full domain
        # set $remote_download_url https://$remote_download_host/$remote_download_uri;
    
        # Set Host header for vhost to work
        proxy_set_header Host $remote_download_host;
    
        # This clears the Authorization
        proxy_set_header Authorization '';
        # If your remote server needs some auth you can set it there
        # Basic auth would be something like
        # proxy_set_header Authorization 'Basic kjslkjsalkdjaslasdoiejldfkj=';
    
        # Don't buffer the response to a local temp file when serving the file
        proxy_max_temp_file_size 0;
    
        # Finally send query to remote and response back to client
        proxy_pass $remote_download_url;
      }
    
      try_files $uri @fallback;
    
      location @fallback {
        proxy_pass http://my_backend;
      }
    }

    Example adapted from Nginx-Fu: X-Accel-Redirect From Remote Servers
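
    Since the download location is marked internal, it’s typically reached through an X-Accel-Redirect header returned by the proxied backend. As a rough sketch (assuming a sinatra backend and a hypothetical /files/:name route), that could look like:

    require 'sinatra'

    # Hypothetical route in the proxied backend (my_backend): answering with an
    # X-Accel-Redirect header makes NGinx replay the request against the internal
    # /remote_download/ location above.
    get '/files/:name' do
      headers 'X-Accel-Redirect' => "/remote_download/#{params[:name]}"
      200
    end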

    This example fully works because we used an IP address for $remote_download_url,
    but if we were using the domain (e.g. download.mydomain.tld), any request would fail
    with a 502 Bad Gateway error.

    This is due to the way the NGinx default resolver works. It’s smart enough to resolve
    the domains in proxy_pass directives as long as they are static (it can resolve them
    at boot time, so /etc/hosts works). But as we are constructing the URL here,
    it does not try to resolve it. Fortunately you can specify which DNS server
    it should use in such cases by setting:

    http {
      # Globally
      resolver 127.0.0.1; # Local DNS
    
      server {
        # By server
        resolver 8.8.8.8; # Google DNS
    
        location /demo {
          # Or even at location level
          resolver 208.67.222.222; # OpenDNS
        }
      }
    }
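
    After changing the resolver (or any of the above), don’t forget to validate and reload the configuration, for example:

    nginx -t          # check the configuration syntax
    nginx -s reload   # reload without dropping connections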


  5. Jun 14

    Benchmark ruby versus node.js on Heroku

    Posted in : ruby, node, heroku, sinatra, mongo_mapper, unicorn, express, mongoose, and cluster

    I’ve been playing a lot with node lately thanks to this PeepCode screencast, and since Heroku released their new Celadon Cedar stack I’ve been wanting to benchmark ruby versus node.js for a RESTful API which I need to build.

    At first I tried to compare bare node.js against eventmachine_httpserver. It quickly became obvious that this kind of micro-benchmark wasn’t going to be very helpful in deciding which one to choose.

    Building the API was going to be messy unless I started using something like sinatra or express.

    Methodology

    I set up two similar repositories on GitHub:

    https://github.com/JosephHalter/heroku_ruby_bench
    

    This repository is using sinatra, mongo_mapper, yajl and unicorn.

    https://github.com/JosephHalter/heroku_node_bench
    

    This repository is using express, mongoose and cluster.

    I deployed both on Heroku, set up a free MongoHQ account for each, and ran a few tests with various numbers of workers. All tests have been run from a datacenter in Paris using the following commands:

    ab -k -t 10 -c 1000 http://evening-robot-961.herokuapp.com/restaurant/4df550c5c3aaaa0100000001
    ab -k -t 10 -c 1000 http://cold-mist-128.herokuapp.com/restaurant/4df55a0fab9e270007000001

    evening-robot-961 is running on node 0.4.7 and cold-mist-128 is running on ruby 1.9.2.

    Checklist

    For those asking:

    • yes, I’ve waited enough time between each heroku scale
    • yes, I’ve retried each test a crazy number of times

    Warm-up

    Here are the raw results for a concurrency of 100, just to ensure everything works as expected before starting the real test:

                               | completed | failed |
    express/node               | 3358      | 0      |
    express/cluster/3 workers  | 7473      | 0      |
    sinatra/thin               | 1649      | 0      |
    unicorn/4 workers          | 6080      | 0      |
    

    Results

    Here are the raw results for a concurrency of 1000:

                               | completed | failed |
    node/1 dyno                | 40524     | 27865  |
    node/5 dynos               | 20755     | 11575  |
    node/10 dynos              | 36953     | 9866   |
    node/20 dynos              | 34724     | 7486   |
    node/40 dynos              | 33919     | 8863   |
    node/60 dynos              | 32218     | 8984   |
    cluster/1 worker           | 21307     | 13193  |
    cluster/2 workers          | 41679     | 18904  |
    cluster/3 workers          | 40700     | 12864  |
    cluster/3 workers/5 dynos  | 37292     |  8360  |
    cluster/3 workers/10 dynos | 19787     | 10870  |
    cluster/3 workers/15 dynos | 35894     |  7119  |
    cluster/3 workers/20 dynos | 35871     |  9807  |
    cluster/3 workers/40 dynos | 32727     |  8371  |
    cluster/4 workers          | 22262     | 14738  |
    thin/1 dyno                | 38813     | 36769  |
    thin/5 dynos               | 40885     | 35178  |
    thin/10 dynos              | 42141     | 29082  |
    thin/40 dynos              | 33014     |  9732  |
    thin/60 dynos              | 31392     |  8747  |
    unicorn/1 worker           | 41032     | 39498  |
    unicorn/2 workers          | 24991     | 22152  |
    unicorn/3 workers          | 22601     | 16319  |
    unicorn/3 workers/5 dynos  | 20386     | 11012  |
    unicorn/4 workers          | 44127     | 37607  |
    unicorn/4 workers/5 dynos  | 39591     | 13426  |
    unicorn/4 workers/10 dynos | 35672     |  9511  |
    unicorn/4 workers/15 dynos | 35925     |  7997  |
    unicorn/4 workers/20 dynos | 34611     |  8131  |
    unicorn/4 workers/40 dynos | 18873     | 11125  |
    unicorn/8 workers          | 45904     | 39819  |
    

    A few charts to make it easier to read:

    [Charts: express, express with cluster, express with cluster/3 workers, sinatra with thin, sinatra with unicorn, sinatra with unicorn/4 workers]

    Conclusions

    Using either ruby or node I could easily get more than 2000 req/s, so I think both are viable alternatives. With only 1 dyno, however, you’ll start to get failed connections when faced with massive concurrency because the backlog is full. Increasing the number of dynos doesn’t magically allow you to handle more requests per second, but it can decrease the number of failed connections. Heroku pricing is linear in the number of dynos you scale to, yet your throughput only improves marginally. It could be because I’m benchmarking from only one server, but I’ve seen almost no difference between having 40 dynos or 60. You’ll see a difference on the bill though, so be cautious, especially if you use an auto-scaling tool.

    Apart from that, we’re seeing a huge improvement when using unicorn with 4 workers instead of thin, which was the only possibility before Celadon Cedar, so a big thank you to Heroku for making this possible. The same applies to node with cluster: you can do more with 15 dynos running cluster with 3 workers than with 60 dynos of plain node (for a quarter of the price!). Special thanks to TJ Holowaychuk who helped us fix a stupid issue which prevented us from using cluster on Heroku.
