• 201606.14

    The world is burning

    It’s true. The world is burning, both literally and figuratively.

    Climate change

    Global warming is a real problem. We’re in the middle of a human-caused mass extinction, and our planet is being trashed by the burning of our favorite fuel source, yet we have people in high places in our governments who not only refuse to acknowledge the issue, but actively fight against it. As if turning your back on a tsunami will stop the inevitable.

    Those who deny climate change will be dead in the geological blink of an eye (or even sooner, we can hope), leaving the next generations to inherit the consequences. What does this mean? Droughts and famine, mainly. The thing is, we are already seeing these. California is in the middle (or the beginning?) of a horrible drought. The 2015/2016 El Niño gave us a few showers, but they were pathetic in comparison to prior years. This is not isolated. Many other regions are experiencing unprecedented drought conditions as well.

    So what? Well, when you have drought, you get food shortages. Without food and water, the next step is civil unrest. This isn’t a theory. When people can’t eat, they get pretty pissed off.

    The free market will save us. Right?

    Let’s leave the stupid climate for a minute. Let’s talk about the world economy. Now, just about every major country is based on a capitalistic growth economy. What does this mean? It means that if you don’t keep shoveling coal into the fire, the fire dies out. So in this example, coal is people working, right? Sorry, no. Coal is people. A growth economy requires more people. More and more and more people. The problem is, the planet’s resources can’t support the number of people we currently have, much less more of the wretched things. So we have an economy that is betting on resource exhaustion as a method of self-sustaining. On top of requiring more people, many of our wonderful growth economies are built on top of the fact that there is cheap labor in other countries.

    Here’s a pattern. We want something built cheaper. We build factories in <insert developing country here>. <Developing country> has so much capital pumped into it that prices rise and their standard of living starts to match the guys on top. Ahh, peasants and their desire to be kings! Suddenly, the factories are more expensive to operate. <Developing nation> starts enacting regulations (gasp), and sooner or later, the poor underdogs who run the multinational corporations are in search for <next developing country> to exploit. Won’t someone think of the shareholders??

    Their business model is sound, though. At least, their business model is sound assuming you have endless developing countries to exploit. What happens when the last stable developing country decides to charge the same to operate your iPhone factory as the factory in the United States? Well, the iPhone jumps from $650 to $3000. The shitty plastic trash bin you used to pay $3.99 for is now $59.99!! Your McDonalds cheeseburger is $24.99. Sacrilege!

    Suddenly our consumeristic growth economy becomes a…well, what the hell happens now? We don’t have a name for it because there is only the growth economy. Anything else is filthy communism. Nobody wants to talk about any other form of economy besides a growth economy, as if we can grow forever.

    Universal constants, anyone?

    Seriously, though. We have a growth economy. Growth requires resources. This is not an opinion, this is a universal constant. In order to grow, a system needs an influx of energy from some external source. Resources. What happens when we run out of these resources?

    Economic collapse. The engine stops, and because nobody is willing to talk about what this means or what happens next, it’s going to be a painful process.


    Let’s tie this all together: climate change is going to be displacing millions of people in coastal areas as ocean levels rise. At the same time, droughts and famine are going to become much more common. Our global economy, which requires growth, and more growth, oh and some growth, will stop growing roughly around the same time the droughts and famine happen.

    Increased population density, compounded by extreme resource limitation (food, water, etc), compounded by economic collapse leaves us with a near human extinction. What drought and famine don’t do to us, disease will. Viruses and antibiotic-resistant bacteria love sickly people living in close quarters!

    The next 100 years will certainly be an interesting time for humanity.

    I’m not saying these things because I hate humanity. I’m saying these things because I love humanity. I think we’re pretty cool. We’ve accomplished much more than many other ape species, probably. A few of us have even evolved past our ape nature. There’s something here worth saving. I just hope the good parts survive, and the climate-change-denying, growth-economy-touting simpletons die a slow, horrible death.

    Sleep well!

  • 201606.05

    Ansible: included handlers not running in v2.x

    I’ll keep this short. I recently installed Ansible 2.0 to manage the Turtl servers. However, once I ran some of the roles I used in the old version, my handlers were not running.

    For instance:

    # roles/rethinkdb/tasks/main.yml
    - name: copy rethinkdb monitrc file
      template: src=monit.j2 dest=/etc/monit.d/rethinkdb
      notify: restart monit
    # roles/rethinkdb/handlers/main.yml
    - include: roles/monit/handlers/main.yml
    # roles/monit/handlers/main.yml
    - name: restart monit
      command: /etc/rc.d/rc.monit restart

    Note that in Ansible <= 1.8, when the monitrc file gets copied over, it would run the restart monit handler. In 2.0, no such luck.

    The fix

    I found this github discussion which led to this google groups post which says to put this in ansible.cfg:

    task_includes_static = yes
    handler_includes_static = yes
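
For reference, these settings go under the [defaults] section of ansible.cfg (that's where mine are, anyway; ansible.cfg can live in /etc/ansible/ or next to the playbook):

```
# ansible.cfg
[defaults]
task_includes_static = yes
handler_includes_static = yes
```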

    This makes includes pre-processed at parse time instead of loaded dynamically. I don’t really know what that means, but I do know it fixed the issue. It breaks looping, but I don’t even use any loops in ansible tasks, so

    Do whatever you want, you little weasel. I don't care. I DON'T CARE.

  • 201605.27

    Spam entry: We are expert

    This is a post in a series of spam responses I’m doing after creating a new domain for my website. After receiving a flood of sales calls and emails, I’m deciding to have some fun.

    Finally, someone who knows what they’re doing.


    Would you be interested in building your website? We are a professional web design company based in India.

    We are expert in the following :-

    Joomla Websites
    Word press Websites
    Magento Websites
    Shopify Websites
    Drupal Website
    E-Commerce Solutions
    Payment Gateway Integration
    Custom Websites
    Mobile Apps
    Digital Marketing

    If you want to know the price/cost and examples of our website design project, please share your requirements and website URL.

    Business Consultant
    Note: We are Offering 20% Discount on Web Development Packages.

    Come to think of it, I DO need a website…


    Thank you for contacting us. I work for a very large government contractor in the United States and we are going to use our domain for a very important project of ours. We were going to put out a bid for website development, but looking over your offer makes me realize that maybe we can just subcontract the project directly through your firm. Now, this is a fairly low-budget project, around $750,000.00 USD so you may not have time to take it on. Also, thank you for your 20% discount, which brings the project total down to $600,000.00 USD. Very kind of you.

    A bit about the project: We’re trying to use open web technologies to create a supercomputer cluster out of visitors who come to the site. Essentially, government agencies submit “jobs” and those jobs are broken into tiny pieces. Anyone who visits the website is put to work such that their browser grabs the next available job, does the work, and submits it back to the website in completed form.

    What we need from your firm is to build a high-throughput queuing system that handles a) breaking large jobs into small ones b) queuing delivery of the jobs to visitors, handling things like connectivity issues and retrying failed jobs c) programming the algorithms in the actual browser that will handle the work itself.

    The algorithms are fairly simple, for instance one of them has to do with processing Fourier transforms on incoming SETI waveforms. You will then need to classify the deconstructed waveforms for a distributed self-organizing map (Kohonen network) step-by-step using the queue you build so eventually we can pump a waveform through the system and get an automated classification! Easy stuff, but we just don’t have the development bandwidth for it.

    Another one of the client-side algorithms is a stream processing system which takes certain sensor data from readings at our particle accelerator and searches for anomalies and outliers across a wide range of data. The detection mechanisms you use are up to you! We don’t want to micro-manage. However, if you provide inaccurate results, billions of dollars will be lost, so try to be mindful!

    There are about seven or eight more client-side distributed job algorithms we’ll need, but we can go into details later.

    Lastly, and I know this is stupid, but the website will need some sort of video streaming. Our users love videos. We have a feed coming from one of our space stations, however the transmitter on the station is broken and is sending data incorrectly. It’s an old transmitter, so it’s analog, even though the signal is digital. We’re planning on sending a mission out to fix it next year (does your firm do shuttle software?) but until then we need the website to be able to decode this analog signal and de-corrupt it, essentially. We have an internal expert on the video feed and the proprietary digital format it uses, however he’s away on vacation in France for a few months so you’ll need to figure out the format yourself and try to decode it from the analog stream. Kid’s stuff. We can send over his notes if needed, but they are scrawled inside of a Sears catalog (he’s a bit disorganized) and the pages are stuck together for some reason so we need to bring in an expert to digitize the notes. However, a firm of your stature should be able to brush this problem aside without too much effort even without his help.

    Thanks for your time, let us know if this is something you’re interested in!!

    Sometimes the simplest ideas are the best ones. This project should be a cake walk for Prerana and her team.

  • 201605.27

    Spam entry: Send us money so people can find your site!

    This is a post in a series of spam responses I’m doing after creating a new domain for my website. After receiving a flood of sales calls and emails, I’m deciding to have some fun.

    If I send them money, they will make my website findable. Sounds good.

    Attention: Important Notice , DOMAIN SERVICE NOTICE
    Domain Name: da-wedding-site.com

    ATT: Andrew Lyon
    Response Requested By
    22 - May. - 2016


    Attn: Andrew Lyon
    As a courtesy to domain name holders, we are sending you this notification for your business Domain name search engine registration. This letter is to inform you that it’s time to send in your registration.
    Failure to complete your Domain name search engine registration by the expiration date may result in cancellation of this offer making it difficult for your customers to locate you on the web.
    Privatization allows the consumer a choice when registering. Search engine registration includes domain name search engine submission. Do not discard, this notice is not an invoice it is a courtesy reminder to register your domain name search engine listing so your customers can locate you on the web.
    This Notice for: da-wedding-site.com will expire at 11:59PM EST, 22 - May. - 2016 Act now!

    Select Package:

    Payment by Credit/Debit Card

    Select the term using the link above by 22 - May. - 2016

    Must be from Google, right? Responded:



    Will nobody register my domain with the domain service? How could I have been so negligent?

  • 201605.27

    Spam entry: Logo Cheese

    This is a post in a series of spam responses I’m doing after creating a new domain for my website. After receiving a flood of sales calls and emails, I’m deciding to have some fun.

    What a great name! Cheesy logos! A bargain!

    You are going to need a LOGO!

    Let’s keep it simple - let us design your Logo and build your brand!

    Your Logo is your brand identity, most of the businesses don’t think about it and later on waist thousands of dollars.

    Avail Discount and get 2 custom logo concepts by industry specialist designers in 48 hours for just $29.96

    Activate Your Offer Now and let us take care of the rest!

    Awaiting your Order

    Jennifer Garner

    Design Consultant

    Logo Cheese - USA

    Ahh yes, this reminds me of the times my father and I spent in the English countryside…

    YES A LOGO!!!!! That is what my website is missing!! I knew something was off about my website, but I simply could not put my finger on it. I will certainly Activate My Offer and I would like to order twenty of your finest logos. Please have them sent directly to this email and I will certainly remit payment after I have the logos.

    Now, I know that your logo company specifically makes logos of various cheeses, but I am going to request that you do logos of things OTHER than cheese. I know this is a lot to ask of Logo Cheese - USA but hear me out. When I was but a young lad, my father used to take my brothers and myself horseback riding into the Yorkshire hills. We would laugh and sing and eat assortments of cheeses into the early evening. Then we would ride to my grandpapa’s estate and spend the week eating more cheese and chuckling over fresh cups of English breakfast tea. Not the store-bought tea you find at the local grocers, being bought by the common coupon-waving trash. No, we would have the finest handmade teas with the most expensive ingredients delivered personally by the craftsman himself, I think his name was Edward. No, it must have been Bartholomew. I believe Edward was the local butcher, who would give us the finest cuts of beef shoulder one could possibly eat!! The beef was from the most expensive cows in all the land, and Edward would let us pick out the cow and would butcher it, alive, right in front of us. It was delightful! You see, if you kill a cow and then butcher it, much of the flavor is lost. So we would all take turns butchering the poor beast as Edward cheered us on! A truly magnificent experience! Then Edward would package our meat and we would feast that very night!!! We would eat our beef shoulder roasts at my grandpapa’s 30-person dining table, waited on by his staff of servants, and then we would sit by the fire and talk of times past as we drank our English Breakfast tea, hand-delivered by Bartholomew himself. Now, Bartholomew was a character! The days he visited were some of the most exciting, because not only did he craft and deliver our tea, but the man was a magician!! You can imagine how wonderful that would be for a young lad, to drink his tea whilst watching a magic show right before him!! It was safe to say that Bartholomew was one of our greatest companions!! I digress, though.

    You see, one time, in the hillsides, as we were eating our artisanal cheeses and laughing and singing, just before riding to my grandpapa’s house and spending the week drinking the finest tea and eating the finest beef shoulder money can buy, we noticed a shadowy figure approaching from the Northern hills. Years before, papa had instructed us never to go into the Northern hills. There were stories of awful, sickly creatures there, but also of a village deep in the forest where a group of bandits was exiled by King George himself. As the tales go, the bandits had to choose either mating with each other or with the various beasts roaming the hillsides for generations. You can imagine the result! I personally once tried to mate with my father’s prize sheep, but the wretched thing would not sit still long enough. A man of my stature does not take kindly to anyone, or anything, refusing him. Thus, I relished sending that awful sheep to the butcher one day as my father was away on business. But that’s another story!

    As this shadowy figure approached, it became more and more grotesque in appearance. Its shirt (if you can call it that!!) had a stain of some sort right on the chest, and the hem around the trousers looked like it had come undone days ago! I couldn’t help but feel sorry for the disgusting, vile creature. As it came even closer though, I could make out its face. It was Bartholomew!! I had never seen him look so disheveled. It made me want to vomit. But papa says vomiting is for the peasants and the sickly, so I just looked away in disgust instead and tried to think of my mother’s forty-acre garden, instead of the monstrous image of Bartholomew, lurching through the hillsides with stains on his shirt and tattered trousers.

    My father got in between us and Bartholomew, protecting us from the vile image. Bartholomew spoke: “HELLLLLP…..ME…..” His raspy voice grated on my ears. Must he keep speaking in that despicable voice? Drink a cup of tea, man!

    “Really, Bartholomew,” said father, bravely. “Get a hold of yourself, man. You’re scaring the children. You ought to be ashamed, wandering the hillside looking like the common London street trash.”


    “I certainly shall not! I refuse to help a man who will not help himself, who staggers around in tattered clothing, expecting a hand out from those who work hard for themselves. It goes without saying we will no longer be needing your services at the estate, and I shall personally see to it that nobody else in the town of Yorkshire ever buys tea from Bartholomew Dunscrup ever again!”

    With that, father turned on his heel, gathered us onto the horses, and we set off for grandpapa’s house. But something was different this evening. The sky was a deep maroon color and the air stank of flesh. We had only made it halfway to grandpapa’s house when the horses slowed, then stopped. Nothing we could do would make them budge. We kicked and pushed, but they sat, still and silent, as if they had given up, like that wretched man we once knew as Bartholomew. The thought of him sickened me.

    Then it hit me. A hunger I cannot describe. It was not for the countryside’s finest beef shoulder. It was a deep hunger for something else. I could not determine the cause of it until I saw my youngest brother’s neck. My body lurched for him, uncontrollable. Everything turned red. When I came to, hours later (or so it felt), my brothers lay strewn across the hill, missing various body parts. My shirt was covered in what looked like blood, and I had bits of flesh between my teeth. What happened? I did not know. Someone had killed my brothers, and from the looks of it had almost killed me. I looked into the distance and saw a man running! I gave chase. Perhaps this fine gentleman could tell me of the events prior! Perhaps he witnessed this occurrence and could help investigate!

    As I gained on the gentleman, I noticed he had a familiar gait. It was father! He looked back at me and screamed.

    “Father, wait!” I shouted. But his pace only quickened. As I gained on him, I noticed a familiar feeling creeping in. A hunger. It gave me an energy I had not felt in the past, and my legs seemed to move on their own, accelerating beyond what I thought was possible. Just as I reached father, my vision turned red again.

    I woke up, in the dark, in a pool of father’s blood. Whoever had murdered my brothers had murdered father as well!! I swore vengeance to myself. You see, I did not care much for my brothers, but father was very dear to me.

    Then it struck me!! There was one other person in the hills that night. It was Bartholomew! The vile man had obviously done this to father! I rushed back to town and awoke the constable. He was a dear family friend, and as soon as he heard what had happened, what Bartholomew had done, he rounded up the entire police force and their most capable hounds, and we set off for an evening hunt. I have always loved a good fox hunt, you see, but had never had the opportunity to participate in a hunt at night!! The constable and I laughed together as we spoke of previous hunts and how we would surely catch Bartholomew on this eve!

    Not a minute after we reached the hillside, the dogs picked up a scent. I knew in my heart it was Bartholomew. We made haste and came to a clearing, lit only by the moon, where we saw the same shadowy figure from before, on its knees, crying into its hands. Aha! I thought to myself. We found the wretch!

    We dismounted our horses and as we walked toward the figure, I recognized its unnerving voice.


    Oh, I would help it, certainly. I would help it shed its mortal coil and release its vile soul back to the hell it came from. As I neared the figure, I felt the same hunger from before. It must have been Bartholomew, causing this odd feeling! It’s proof! My vision went red again.

    I awoke, but this time it was day. The entire hunting party, all their hounds, and Bartholomew lay strewn before me, their chewed and ravaged corpses beginning to cook slightly in the growing morning sun. Somehow Bartholomew had killed all the policemen, but from the looks of it the dogs must have torn him to shreds.

    I searched the pockets of the creature, more disgusted by him than ever before, and found that not only had he slain my brothers, my father, and the entire Yorkshire police department, but he had stolen cheese from my grandfather!!

    I was in quite a rage at finding this, and you see, to this day, after inheriting my father’s wealth and my grandfather’s estate, after living through this horrid event and living to tell the tale, and after finding the cheese in Bartholomew’s pocket, I no longer can eat cheese.

    Please consider this when sending the logos I have requested.

    Father would be proud that I am carrying on his legacy. I think of him every day. In fact, I am reminded of a time when we…

  • 201605.27

    Spam entry: Add me to your address book

    This is a post in a series of spam responses I’m doing after creating a new domain for my website. After receiving a flood of sales calls and emails, I’m deciding to have some fun.

    Richard just happened to stumble across my new domain!! What are the odds?

    Please add richard@thewebexperts.info to your address book to ensure future email delivery.

    Hi Andrew Lyon,

    My name is Richard. I recently came across da-wedding-site.com and was curious to find out if you have any design needs (redesign,landing pages, etc.)?

    My team and I have worked with organizations like dfwtacticalgear & lapazyachtcharter.

    We are offering an ideal package which has been especially tailor-made for you with no monthly and hidden cost:

    Business website starting @ 400

    e-Commerce/online store starting @ 695

    We also specialize in digital marketing, SEO, and analyzing your sites analytics to keep your audience engaged and on your site longer!

    If you are interested in speaking about your website, please feel free to share your contact and best time/day to reach you.

    Thanks for your time and I hope to hear back from you!

    Richard Direct Line: +1 7733828125 Business Hours: 0900 -1800 EST

    Promptly added richard to my address book, then responded:

    yes hi i want a website but i dont have much money so what i want is to build a website that makes LOTS of money (that’s where you come in) and then once it makes a bunch of money i can pay you back for making the website. lots of people do this. my uncle did this and he was able to put an addition on his trailer AND pay the company that built it back some of the money so its a win win. thx let me know if you are interested!

    I can’t wait to show my uncle the new website! His internet is super fast ever since his neighbor upgraded to cable and didn’t password their router!

  • 201605.27

    Spam entry: A website, for FREE

    This is a post in a series of spam responses I’m doing after creating a new domain for my website. After receiving a flood of sales calls and emails, I’m deciding to have some fun.

    I’m pretty sure the word “free” is somewhere in Vik’s email. Right??

    Dear Andrew,

    I just wanted to know if you would need any assistance with your domain da-wedding-website.com. We can help you in building a new website or a mobile application for your domain.

    We can also help you with SEO/ASO of any of your existing websites or mobile applications.

    Looking forward to hear from you.

    Thank you,

    DB Web Apps
    Phone: +1 415-671-6239
    Email: info@dbwebapps.com


    A website? For FREE? That’s a great deal! Most of the other people sending me emails want to charge me money. This is terrific! My wife will be so pleased at the great deal I have found. Why don’t you send a few free design ideas and I will look them over and tell you which is the best and then you can start work immediately for free.

    I am blown away by your generosity.

    In a world inundated with greed and selfishness, the biggest gesture one can make is an act of selflessness. Thank you, Vik, for your revolutionary kindness.

  • 201605.27

    Spam entry: A reputed web design company (with no website)

    This is a post in a series of spam responses I’m doing after creating a new domain for my website. After receiving a flood of sales calls and emails, I’m deciding to have some fun.

    Their website is so good, it will melt your computer lol which is why we don’t link to it!!!1

    Hi Andrew,

    Out of respect for your time, I thought an email might be less disruptive than an unannounced phone call. We noticed you recently registered “da-wedding-site.com” so thought of reaching out you.

    We have been designing and developing customer-friendly websites for more than 5 years and have managed to live up to the growing expectations of our respective clientele. We believe that a good design always pays off in the long run and helps you attract the attention of your target audience, which eventually converts into ascending sales.

    We are a reputed web design and development company offering business-specific solutions to our clients who are scattered all over the world. Over the past few years, we have helped hundreds of clients in having a distinct web presence. Our services include:

    • Responsive websites on WordPress, Joomla! and Drupal
    • Responsive eCommerce websites on Magento, Prestashop and Shopify
    • Custom Web Applications
    • Custom Mobile Applications
    • Specialized Quality Assurance Solutions

    If you want to have a new website or you want to revamp your existing website to make it more search engine friendly, we are the right company. Reply to this email, and we will get back to you with industry-specific solutions.


    Ken Morgan

    Because Ken was so incredibly respectful of my time I wrote him a very detailed response:

    hi ken thank you SO MUCH for respecting my time i was thinking about your enticing offer and your reputed website development company and i have some great ideas on websites ok so here they are idea 1 a website that gives people seizures when they visit whether or not they are epilptic funny rite? 2 a website that makes people CRAP THEY?RE PANTSS!! omg my friends would go crazy it would go VIRAL and i could put ads on it and make a million dollars which reminds me can you build the websites first and then after i get the million dollars THEN i can pay you after? k cool thx so idea 3 a website that when you go to it you hold the computer up to a wall and you cna SEE THROUGH THE WALL ON THE SCREEN like xmen and i want to put the xmen logo on it but if i get sued i can tell them you guys did it not me (ur insured rite??) next idea 4 is a website that you put in your bank acct # and it sends you $5 wouldnt that be great like everyone would use that every day including me free $5 rite?!! lol yeah so idea 5 is a website where you click a button and the computer starts to LEVITATE and you can sit on it and you are basically flying and you can go places ON TOP OF YOUR COMPUTER and when you get there and ur like “omg i need to check my email” boom your computer is RIGHT THERE UNDER YOU y has no 1 thought of this people are dumb i guess lol so i have more ideas but im going to hold off for now since i need you to confirm you can build these ideas for free up front hereto notwithstanding forgoing payments etc and then i pay after the work is done and my websites sell for big bucks and i also dont want your reputed company to steal my amzaing ideas so plz sign the attached nda and we can talk business kkthx

    <attached an actual NDA>

    Really looking forward to getting some of these exciting ideas off the ground. Sometimes the best way to market is to solve a very difficult technical problem, such as levitation. Surely Ken will deliver. After all, he does work for a very reputed web design and development company offering business-specific solutions.

  • 201603.30

    MarketSpace: Competitive intelligence for your industry

    We at MarketSpace just launched our Spaces page! Follow the industries you’re interested in or customize your own.

    MarketSpace takes information from various places on the web, puts everything into a standard format, removes duplicates, and finds companies and people with natural language processing and machine learning. But most importantly: we remove irrelevant items so we don’t send you updates on things that don’t matter.

    Follow companies or entire industries and get alerts through our supported channels:

    • Email
    • RSS
    • Google Sheets
    • Slack
    • Office 365

    Give it a try!

  • 201511.22

    SSH public key fix

    So once in a while I’ll run into a problem where I can log into a server via SSH as one user via public key, but taking the authorized_keys file and dumping it into another user’s .ssh/ folder doesn’t work.

    There are a few things you can try.

    Permissions

    Try this:

    chmod 0700 .ssh/
    chmod 0600 .ssh/authorized_keys
    sudo chown -R myuser:mygroup .ssh/

    That should fix it 99% of the time.
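
The rules above can be wrapped up in a quick sanity check. This is just a sketch (check_ssh_perms is a made-up helper, and `stat -c` is GNU coreutils syntax; BSD/macOS stat differs):

```shell
# sshd (with the default StrictModes) silently ignores keys when the
# .ssh dir or authorized_keys file is group/world accessible
check_ssh_perms() {
  home="$1"
  rc=0
  [ "$(stat -c %a "$home/.ssh")" = "700" ] || { echo ".ssh should be 0700"; rc=1; }
  [ "$(stat -c %a "$home/.ssh/authorized_keys")" = "600" ] || { echo "authorized_keys should be 0600"; rc=1; }
  return $rc
}
```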

    Locked account

    Tonight I had an issue where the permissions were all perfect…checked, double checked, and yes they were fine.

    So after poking at it for an hour (instead of smartly checking the logs) I decided to check the logs. I saw this error:

    Nov 23 05:26:46 localhost sshd[1146]: User deploy not allowed because account is locked
    Nov 23 05:26:46 localhost sshd[1146]: input_userauth_request: invalid user deploy [preauth]

    Huh? I looked it up, and apparently an account can become locked if its password is too short or insecure. So I did

    sudo passwd deploy

    Changed the password to something longer, and it worked!
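
A quicker way to spot this (assuming a standard Linux shadow-password setup) is to ask passwd for the account status directly; an L in the status field, or a ! prefix on the hash in /etc/shadow, means the account is locked:

```
sudo passwd -S deploy              # second field "L" means locked
sudo grep '^deploy:' /etc/shadow   # hash starting with ! means no usable password
```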

    Have any more tips on fixing SSH login issues? Let us know in the comments below.

  • 201509.05

    Nginx returns error on file upload

    I love Nginx and have never had a problem with it. Until now.

    Turtl, the private Evernote alternative, allows uploading files securely. However, after switching to a new server on Linode, uploads broke for files over 10K. The server was returning a 404.

    I finally managed to reproduce the problem in cURL, and to my surprise, the requests were getting stopped by Nginx. All other requests were going through fine, and the error only happened when uploading a file of 10240 bytes or more.

    My first thought was that Nginx v1.8.0 had a bug, but nobody on the internet seemed to have this problem. So I installed v1.9.4. Now the server returned a 500 error instead of a 404. Still no answer as to why.

    I finally found it: playing with client_body_buffer_size seemed to change the threshold for which files would trigger the error and which wouldn’t, but ultimately the error was still there. Then I read about how Nginx uses temporary files to store request body data. I checked that folder (in my case /var/lib/nginx/client_body): it was writable by the nginx user, but the parent folder /var/lib/nginx was owned by root:root and set to 0700. I set /var/lib/nginx to be readable/writable by the nginx user, and everything started working.

    Check your permissions

    So, check your folder permissions. Nginx wasn’t returning any useful errors (first a 404, which I’m assuming was a bug fixed in a later version, then a 500). It’s worth noting that after switching to v1.9.4, the Permission Denied error did show up in the error log, but at that point I had already decided the logs were useless (v1.8.0 silently ignored the problem).

    Another problem

    This is an edit! Shortly after I applied the above fix, I started getting another error. My backend was getting the requests, but the entire request was being buffered by Nginx before being proxied. This is annoying to me because the backend is async and is made to stream large uploads.

    After some research, I found the fix (I put this in the backend proxy’s location block):

    proxy_request_buffering off;

    This tells Nginx to just stream the request to the backend (exactly what I want).
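    For context, here’s a minimal sketch of where that directive might live (the upstream address and path are assumptions, not my actual config):

```nginx
location /api/ {
    # hypothetical backend address
    proxy_pass http://127.0.0.1:8181;
    # stream request bodies straight to the backend instead of buffering to disk
    proxy_request_buffering off;
}
```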

  • 201507.29

    Turtl's new syncing architecture

    For those of you just joining us, I’m working on an app called Turtl, a secure Evernote alternative. Turtl is an open-source note-taking app with client-side encryption which also allows private collaboration. Think of it as a private Evernote with a self-hosted option (sorry, no OCR yet =]).

    Turtl’s version 0.5 (the current version) has syncing, but it was never designed to support offline mode, so clients must be online to use Turtl. The newest upcoming release supports a fully offline mode (except for a few things like login, password changes, etc). This post will attempt to describe how syncing in the new version of Turtl works.

    Let’s jump right in.

    Client IDs (or the “cid”)

    Each object having a globally unique ID that can be client-generated makes syncing painless. We do this using a few methods, some of which are actually borrowed from MongoDB’s Object ID schema.

    Every client that runs the Turtl app creates and saves a client hash if it doesn’t have one. This hash is a SHA256 hash of some (cryptographically secure) random data (current time + random uuid).

    This client hash is then baked into every id of every object created from then on. Turtl uses the composer.js framework (somewhat similar to Backbone) which gives every object a unique ID (“cid”) when created. Turtl replaces Composer’s cid generator with its own that creates IDs like so:

    timestamp (12 hex chars) | client hash (64 hex chars) | counter (4 hex chars)

    For example, a cid breaks down as:

     timestamp    client hash                                                      counter
    014edc2d6580 b57a77385cbd40673483b27964658af1204fcf3b7b859adfcb90f8b895521597 0012
     |                                    |                                        |
     |- 1438213039488                     |- unique hash                           |- 18

    The timestamp is a new Date().getTime() value (with leading 0s to support longer times eventually). The client hash we already went over, and the counter is a value tracked in-memory that increments each time a cid is generated. The counter has a max value of 65535, meaning the only way a client can produce a duplicate cid is to create more than 65,536 objects within a single millisecond (wrapping the counter while the timestamp stays the same), which works out to over 65 million objects per second. We have some devoted users, but even for them, creating 65M notes in a second would be difficult.

    So, the timestamp, client hash, and counter ensure that each cid created is unique not just to the client, but globally within the app as well (unless two clients create the same client hash somehow, but this is implausible).

    What this means is that we can create objects endlessly in any client, each with a unique cid, use those cids as primary keys in our database, and never have a collision.

    This is important because we can create data in the client without server intervention or server-created IDs. A client can be offline for two weeks and then sync all of its changes the next time it connects, without problems and without needing a server to validate its objects’ IDs.

    Using this scheme for generating client-side IDs has not only made offline mode possible, but has greatly simplified the syncing codebase in general. Also, having a timestamp at the beginning of the cid makes it sortable by order of creation, a nice perk.

    Queuing and bulk syncing

    Let’s say you add a note in Turtl. First, the note data is encrypted (serialized). The result of that encryption is shoved into the local DB (IndexedDB) and the encrypted note data is also saved into an outgoing sync table (also IndexedDB). The sync system is alerted “hey, there are outgoing changes in the sync table” and if, after a short period, no more outgoing sync events are triggered, the sync system takes all pending outgoing sync records and sends them to a bulk sync API endpoint (in order).

    The API processes each one, going down the list of items and updating the changed data. It’s important to note that Turtl doesn’t support deltas! It only passes full objects, and replaces those objects when any one piece has changed.

    For each successful outgoing sync item that the API processes, it returns a success entry in the response, with the corresponding local outgoing sync ID (which was passed in). This allows the client to say “this one succeeded, remove it from the outgoing sync table” on a granular basis, retrying entries that failed automatically on the next outgoing sync.

    Here’s an example of a sync sent to the API:

        {id: 3, type: 'note', action: 'add', data: { <encrypted note data> }}

    and a response:

    success: [
        {id: 3, sync_ids: ['5c219', '5c218']}
    ]

    We can see that sync item “3” was successfully updated in the API, which allows us to remove that entry from our local outgoing sync table. The API also returns the server-side generated sync IDs for the records it creates in its sync log. We keep these returned IDs so that when the same records come back down in a later incoming sync, we can ignore them and avoid double-applying data changes.
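    As a rough illustration (function and variable names here are mine, not Turtl’s actual code), handling that response might look like:

```javascript
// `outgoing` maps local outgoing-sync ids to pending records; `seenSyncIds`
// collects server-generated sync ids so later incoming polls can skip them.
function handleSyncResponse(response, outgoing, seenSyncIds) {
  for (const entry of response.success) {
    outgoing.delete(entry.id);                               // succeeded: drop it
    for (const sid of entry.sync_ids) seenSyncIds.add(sid);  // ignore later
  }
  // anything left in `outgoing` failed and is retried on the next sync
}
```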

    Why not use deltas?

    Wouldn’t it be better to pass diffs/deltas around than full objects? If two people edit the same note in a shared board at the same time, then the last-write-wins architecture would overwrite data!

    Yes, diffs would be wonderful. However, consider this: at some point, an object would exist as an original plus a set of diffs, which would have to be collapsed back into the main object. Because the main object and the diffs are client-encrypted, the server has no way of doing this.

    What this means is that the clients would not only have to sync notes/boards/etc but also the diffs for all those objects, and collapse the diffs into the main object then save the full object back to the server.

    To be clear, this is entirely possible. However, I’d much rather get whole-object syncing working perfectly before adding the extra complexity of diff collapsing.

    Polling for changes

    Whenever data changes in the API, a log entry is created in the API’s “sync” table, describing what was changed and who it affects. This is also the place where, in the future, we might store diffs/deltas for changes.

    When the client asks for changes, it does so using a sequential ID, saying “hey, get me everything affecting my profile that happened after <last sync id>”.

    The client uses long-polling to check for incoming changes (either to one’s own profile or to shared resources). This means that the API call used holds the connection open until either a) a certain amount of time passes or b) new sync records come in.

    The API uses RethinkDB’s changefeeds to detect new data by watching the API’s sync table. This means changes come through very fast (usually within a second of being logged in the API). RethinkDB’s changefeeds are terrific, and eliminate the need to poll your database endlessly. The changefeed also collapses changes over a one-second window: it doesn’t return immediately after a new sync record comes in, it waits a second for more records. This is mainly because syncs happen in bulk, and it’s easier to wait a bit and send a few of them together than to make five API calls.

    For each sync record that comes in, it’s linked against the actual data stored in the corresponding table (so a sync record describing an edited note will pull out that note, in its current form, from the “notes” table). Each sync record is then handed back to the client, in order of occurrence, so it can be applied to the local profile.

    The result is that changes to a local profile are applied to all connected clients within a few seconds. This also works for shared boards, which are included in the sync record searches when polling for changes.
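    The client side of this loop can be sketched like so (the endpoint and field names are assumptions, not Turtl’s real API):

```javascript
// Apply a batch of incoming sync records in order, returning the last seen id.
function applyRecords(records, lastSyncId, applyFn) {
  for (const rec of records) {
    applyFn(rec);          // merge the record's data into the local profile
    lastSyncId = rec.id;   // sequential server-side sync id
  }
  return lastSyncId;
}

// The long-poll loop: the server holds each request open until new sync
// records arrive or a timeout elapses, then the client immediately re-polls.
async function pollLoop(lastSyncId, applyFn) {
  for (;;) {
    const res = await fetch('/api/sync?after=' + lastSyncId);
    lastSyncId = applyRecords(await res.json(), lastSyncId, applyFn);
  }
}
```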

    File handling

    Files are synced separately from everything else. This is mainly because they can’t just be shoved into the incoming/outgoing sync records due to their potential size.

    Instead, the following happens:

    Outgoing syncs (client -> API)

    When a new file is attached to a note and saved, a “file” sync item is created and passed into the outgoing sync queue without the content body. Keep in mind that at this point, the file contents are already safe (in encrypted binary form) in the files table of the local DB. The sync system notices the outgoing file sync record (sans file body) and pulls it aside. Once the normal sync has completed, the sync system adds the file record(s) it found to a file upload queue (after which the outgoing “file” sync record is removed). The upload queue (using Hustle) grabs the encrypted file contents from the local files table and uploads them to the API’s attachment endpoint.

    Attaching a file to a note creates a “file” sync record in the API, which alerts clients that there’s a file change on that note they should download.

    It’s important to note that file uploads happen after all other syncs in that bulk request are handled, which means that the note will always exist before the file even starts uploading.

    Encrypted file contents are stored on S3.
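    Pulling the file records aside can be sketched as a simple split over the pending queue (my naming, not Turtl’s actual code):

```javascript
// "file" records go to the upload queue only after the normal bulk sync
// completes, so the note a file belongs to always exists before the upload.
function splitFileSyncs(pending) {
  const files = pending.filter(rec => rec.type === 'file');
  const normal = pending.filter(rec => rec.type !== 'file');
  return {normal, files};
}
```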

    Incoming syncs (API -> client)

    When the client sees an incoming “file” sync come through, much like with outgoing file syncs, it pulls the record aside and adds it to a file download queue instead of processing it normally. The download queue grabs the file via the note attachment API call and, once downloaded, saves it into the local files database table.

    After this is all done, if the note that the file is attached to is in memory (decrypted and in the user’s profile) it is notified of the new file contents and will re-render itself. In the case of an image attachment, a preview is generated and displayed via a Blob URL.

    What’s not in offline mode?

    All actions work in offline mode, except for a few that require server approval:

    • login (requires checking your auth against the API’s auth database)
    • joining (creating an account)
    • creating a persona (requires a connection to see if the email is already taken)
    • changing your password
    • deleting your account

    What’s next?

    It’s worth mentioning that after v0.6 launches (which will include an Android app), there will be a “Sync” interface in the app that shows you what’s waiting to be synced out, as well as file uploads/downloads that are active/pending.

    For now, you’ll just have to trust that things are working ok in the background while I find the time to build such an interface =].

  • 201507.26

    Squeezebox setup without the remote/controller

    My dad recently gave me a Squeezebox for a present after he’d upgraded his home audio system. I was grateful but ultimately stumped on how to set it up. I read a bunch online about setting it up with the remote it comes with. Oh wait. Mine doesn’t have a remote.

    Let the fun begin.

    Connecting to the Squeezebox

    This is harder than it sounds. Initially, I tried wiring it into my router and seeing if it would show up. It did not. This was a WRT54G with Tomato firmware. Maybe the setups just weren’t compatible or something ridiculous like that.

    So I tried another way I found after poking around a lot: the Squeezebox has a built-in wireless SSID that you can connect to in ad-hoc mode (hold the only button for more than 6 seconds to enter reset/config mode).

    However, doing this is finicky and had me tearing my hair out. Ultimately, I got my Windows machine connected to it. When it connects, it gives your machine an IP in the range. If it gives you a 169.xxx.xxx.xxx address, it’s game over. Try restarting your machine. Try resetting the Squeezebox. Try a rain dance while wearing a tribal loincloth. You just might get that IP.

    I recently bought a new router (Buffalo, w/ DD-WRT) and plugged the reset-mode Squeezebox into it (via LAN) and was able to connect to it instantly, so try that first and only go the ad-hoc wireless route if you absolutely have to.

    Talking to the Squeezebox

    The Net-UDAP software is amazing and wonderful. I don’t know what it does under the hood, but it lets you talk to your Squeezebox should you finally get connected to it in some capacity.

    Unless you’re on Windows. Yes, I know, it supposedly works on Windows, but it just didn’t find the Squeezebox when running discover.

    My only solution was to spin up a linux VM with a bridged network adapter and run Net-UDAP there instead. It worked flawlessly. Hopefully you have a linux box lying around, or maybe it will just work for you in Windows. Try the Windows perl binary instead of cygwin’s perl.

    Anyway, once you’ve got everything connected, you run the Net-UDAP shell like so:

    cd /path/to/net-udap
    ./scripts/udap_shell.pl -a

    Note that the address given to -a is the address of the machine you’re running the shell on, not the Squeezebox itself.

    You should get a prompt:

    UDAP>

    Now run the discover command:

    UDAP> discover
    info: *** Broadcasting adv_discovery message to MAC address 00:00:00:00:00:00 on
    info:   adv_discovery response received from 69:69:69:69:69:69
    info: *** Broadcasting get_ip message to MAC address 69:69:69:69:69:69 on
    info:   get_ip response received from 69:69:69:69:69:69
    info: *** Broadcasting get_data message to MAC address 69:69:69:69:69:69 on
    info:   get_data response received from 69:69:69:69:69:69

    Hopefully you get output like that. If you get empty output, see “Connecting to the Squeezebox” =[.


    Once that finicky little bastard is discovered, run list:

    UDAP> list  
     #    MAC Address    Type       Status
    == ================= ========== ===============
     1 69:69:69:69:69:69 squeezebox init

    You can see it has an ID of 1 so we do:

    UDAP> conf 1

    Your prompt will now change, and you’re in config mode:

    UDAP [1] (squeezebox 696969)>

    Now you can connect it to your network via wireless by setting values into its config. This will differ network to network, but here are the commands I run to get things working on a network with WPA2-PSK TKIP+AES:

    set hostname=jammy interface=0 lan_gateway= lan_ip_mode=1 primary_dns=
    set wireless_SSID='my network SSID' wireless_wpa_mode=2 wireless_wpa_cipher=3 wireless_keylen=0 wireless_mode=0 wireless_region_id=4 wireless_wpa_on=1 wireless_wpa_psk='WPA passwordddd' wireless_channel=9
    set server_address=

    For a list of the fields and what they mean, type fields:

    UDAP [1] (squeezebox 696969)> fields
                 bridging: Use device as a wireless bridge (not sure about this)
                 hostname: Device hostname (is this set automatically?)
                interface: 0 - wireless, 1 - wired (is set to 128 after factory reset)
              lan_gateway: IP address of default network gateway, (e.g.
              lan_ip_mode: 0 - Use static IP details, 1 - use DHCP to discover IP details
      lan_network_address: IP address of device, (e.g.
          lan_subnet_mask: Subnet mask of local network, (e.g.
              primary_dns: IP address of primary DNS server
            secondary_dns: IP address of secondary DNS server
           server_address: IP address of currently active server (either Squeezenetwork or local server
    squeezecenter_address: IP address of local Squeezecenter server
       squeezecenter_name: Name of local Squeezecenter server (???)
            wireless_SSID: Wireless network name
         wireless_channel: Wireless channel (used by AdHoc mode???)
          wireless_keylen: Length of wireless key, (0 - 64-bit, 1 - 128-bit)
            wireless_mode: 0 - Infrastructure, 1 - Ad Hoc
       wireless_region_id: 4 - US, 6 - CA, 7 - AU, 13 - FR, 14 - EU, 16 - JP, 21 - TW, 23 - CH
       wireless_wep_key_0: WEP Key 0 - enter in hex
       wireless_wep_key_1: WEP Key 1 - enter in hex
       wireless_wep_key_2: WEP Key 2 - enter in hex
       wireless_wep_key_3: WEP Key 3 - enter in hex
          wireless_wep_on: 0 - WEP Off, 1 - WEP On
      wireless_wpa_cipher: 1 - TKIP, 2 - AES, 3 - TKIP & AES
        wireless_wpa_mode: 1 - WPA, 2 - WPA2
          wireless_wpa_on: 0 - WPA Off, 1 - WPA On
         wireless_wpa_psk: WPA Public Shared Key

    To see the values already set, run list when in config mode.

    Great, so you’ve set up all your network values and are confident that you’ve done it all right the first time. Good for you. Now you can run save_data:

    UDAP [1] (squeezebox 696969)> save_data
    info: *** Broadcasting set_data message to MAC address 69:69:69:69:69:69 on
    ucp_method set_data callback not implemented yet at /path/to/udap/../src/Net-UDAP/lib/Net/UDAP.pm line 292.
    Raw msg:
              00 01 02 03 04 05 06 07 - 08 09 0A 0B 0C 0D 0E 0F  0123456789ABCDEF
    00000000  00 02 00 00 00 00 00 00 - 00 01 00 04 20 16 5A 05  ............ .Z.
    00000010  00 01 C0 01 00 00 01 00 - 01 00 06 00 1A           .............
    info:   set_data response received from 69:69:69:69:69:69

    Make sure save_data returns a response similar to this. If it doesn’t, run it again. In fact, run it again anyway. Run it again…and again…and again.

    Great, now run reset to finalize everything:

    UDAP [1] (squeezebox 696969)> reset
    info: *** Broadcasting reset message to MAC address 69:69:69:69:69:69 on
    ucp_method reset callback not implemented yet at /path/to/udap/../src/Net-UDAP/lib/Net/UDAP.pm line 292.
    Raw msg:
              00 01 02 03 04 05 06 07 - 08 09 0A 0B 0C 0D 0E 0F  0123456789ABCDEF
    00000000  00 02 00 00 00 00 00 00 - 00 01 00 04 20 16 5A 05  ............ .Z.
    00000010  00 01 C0 01 00 00 01 00 - 01 00 04                 ...........
    info:   reset response received from 69:69:69:69:69:69

    All done. Now don’t ever change your network setup ever again or you’ll have to deal with this shit again. Or just get a fucking remote…

  • 201507.23

    Harry's razors review

    This is a review of Harry’s razors. I haven’t been paid by them at all or been sent any promotional materials. The words/opinions expressed here are my own.

    I hate shaving, but even more I hate having facial hair. I find it uncomfortable. I’ve used a good amount of shaving products throughout my life, and have settled on the standard cartridge razor, which gives (in my opinion) the best shave-time to shave-closeness ratio and offers a near-perfectly smooth face and neck while only taking about 5-8 minutes to complete.

    Now, I’m a bit different from other shavers (I think) in that, like my clothing, I keep my razors around far longer than most people. I will use the same razor head for up to four months (basically until it’s so dull it just won’t work anymore). I usually shave about 3-4 times a week.

    “Razor companies HATE him!!”

    Up until about 6 months ago, I’d been using mainly Gillette razors. I’d get a big pack of refills at Costco every now and then and work through them over a year or so.

    One thing that pissed me off endlessly about Gillette is that by the time I had gone through my set of razor heads, the handle would be obsolete and I’d have to buy a whole new kit (which they charge a lot extra for). So about the third time this happened, I decided there had to be a better way than continuously throwing money at Gillette. By the way, their higher-end razors are great, but their practice of releasing a different handle every week is infuriating.

    I had previously seen ads for Harry’s razors so decided to give them a shot. The company seems small enough that redesigning their handles/connectors every few weeks would bankrupt them, but initial reviews on the razors themselves were good. I picked up the Truman handle with a set of blades.

    Enough babbling, here’s a pros/cons list:

    The good

    • The handle is solid, and has a nice weight to it (as opposed to plasticy and bendy).
    • The razor heads snap in nicely, without any play.
    • The razors have an open back.

      I can’t say enough how great this is. All the razors I have ever used hide the back of the razor head with a bunch of plastic.

      With a covered back, most of the hair you shave off over the course of the blade’s life ends up staying inside the razor head. You can beat it against the sink or blast it with water all you want, there’s always going to be a bunch of old, moldy hair stuck inside your razor.

      With Harry’s, the back is open and a quick rinse under the faucet or showerhead gets rid of all the hair on the blade. I cannot stress enough how easy it is to unclog and clean the blades.

    • The blades are easy to unclog. Because of the open back, you can easily rinse the blades to get rid of any hair. This makes them ideal for shaving areas with lots of hair, and while Harry’s is marketed towards men, I see no reason why these razors wouldn’t work for women as well (and for a lot cheaper than women’s razor heads).
    • The blades last a long time. My maximum is about four months on one blade. This is made easier by how well the blades clean up after use (once again, thanks to the open back). This means for me that a 4-pack of blades should last about a year (sorry, Harry).

      They do get noticeably duller after about 4-5 uses, but they continue functioning admirably for many, many uses. Once again, I shave maybe 3-4 times a week. So conservatively (3 shaves/week over 12 weeks), that’s about 35-40 shaves per razor head.

    The bad

    • The handle is slippery. I routinely drop the handle while shaving. Having ugly rubber grips would detract from the look, but make shaving a lot easier.
    • The razor heads are somewhat bulky. I find it incredibly hard to reach places of my neck/face that the Gillette razors would glide over no problem. I think if they found a way to remove the thickness of the plastic housing the blades themselves, and possibly make the blades stick out of the housing by a few more micrometers, this would make shaving a lot easier.
    • The razors have a strange pulling feeling when shaving, somewhat like pulling a rubber eraser across your face. This is not painful or irritating, just somewhat odd feeling.
    • The piece of plastic that gives the blade “spring” when pushed against your face wears down over time, making it feel spongy (and eventually requiring you to hold the razor flat against your face with a finger/thumb on the hand holding the handle). Not a huge deal, and probably not an issue for most people since razor re-use isn’t a 3-4 month affair.


    The pros outweigh the cons easily.

    Definitely would recommend this brand. So far, they haven’t changed the handle or blade connectors at all. The blades work admirably. They are a bit bulky, but easy to clean and unclog. The whole setup also looks really nice.

    As mentioned, while Harry’s is marketed towards men, this setup could easily work great for women (or anyone) who wants to shave arms/legs as well because of the easy unclogging.

  • 201507.21

    Hackernews: a typical day

  • 201507.18

    Switching to Jekyll

    I’ve decided to get rid of Wordpress that was on blog.killtheradio.net as well as the PHP site at killtheradio.net and combine both into a Jekyll blog on the http://killtheradio.net/ domain.

    Moving to Jekyll from Wordpress took a few days, but I got all my posts moved, edited them to fix formatting errors, and switched all discussions to use Disqus (and of course imported the old comments).

    This site now works on mobile devices as well.

    There are a few reasons for all this, but mainly I’ve been intrigued by the idea of static site generators for a while now and wanted to try it out. Also, as time went on, I grew to despise Wordpress, including all the idiotic security vulnerabilities I suffered through week after week. It’s a slapped-together platform, and the plugins for it are even worse.

    There’s a certain thrill to authoring and publishing new content using only the command line.

  • 201501.26

    Node.js and Cygwin: Unknown system errno 203

    A recent Cygwin upgrade left me ripping my hair out, because none of my npm or grunt commands would work. They spat out the error

    Unknown system errno 203

    Helpful, right?

    The fix

    In your Cygwin ~/.bash_profile file (create it and chmod 755 .bash_profile if it doesn't exist):

    export TMP=/tmp
    export TEMP=$TMP

    This did the trick for me. Special thanks to this github comment.

  • 201409.18

    Sudoers syntax error for specific commands

    This will be short but sweet. When deploying some new servers today, I ran into a problem where no matter what, sudo bitched about syntax errors in my sudoers file. I tried a bunch of different options/whitespace tweaks/etc and nothing worked.

    deploy ALL= NOPASSWD: monit restart my-app

    Looks fine right? Nope.

    Use absolute paths

    This fixed it:

    deploy ALL= NOPASSWD: /usr/bin/monit restart my-app

    Everyone in the world's advice is to "just use visudo" but I couldn't find any info on what was actually causing the syntax error. Hopefully this helps a few lost souls.

  • 201407.20

    Composer.js v1.0 released

    The Composer.js MVC framework has just released version 1.0! Note that this is a near drop-in replacement for Composer v0.1.x.

    There are some exciting changes in this release:

    • Composer no longer requires Mootools... jQuery can be used as a DOM backend instead. In fact, it really only needs the selector libraries from Moo/jQuery (Slick/Sizzle) and can use those directly. This means you can now use Composer in jQuery applications.
    • Controllers now have awareness of more common patterns than before. For instance, controllers can now keep track of sub-controllers as well as automatically manage bindings to other objects. This frees you up to focus on building your app instead of hand-writing boilerplate cleanup code (or worse, having rogue objects and events making your app buggy).
    • The ever-popular RelationalModel and FilterCollection are now included by default, fully documented, and considered stable.
    • New class structures in the framework expose useful objects, such as Composer.Class which gives you a class structure to build on, or Composer.Event which can be used as a standalone event bus in your app.
    • There's now a full test suite so people who want to hack away on Composer (including us Lyon Bros) can do so without worrying about breaking things.
    • We updated the doc site to be much better organized!

    Breaking changes

    Try as we might, we couldn't let some things stay the same and keep a clear conscience. Mainly, the problems we found were in the Router object. It no longer handles hashbang (#!) fallback...it relies completely on History.js to handle this instead. It also fixes a handful of places where non-idiomatic code was used (see below).

    • Composer.Router: the on_failure option has been removed. Instead of

      var router = new Composer.Router(routes, {on_failure: fail_fn});

      you do

      var router = new Composer.Router(routes);
      router.bind('fail', fail_fn);
    • Composer.Router: The register_callback function has been removed. In order to achieve the same functionality, use router.bind('route', myfunction);.
    • Composer.Router: The "preroute" event now passes {path: path} as its argument instead of path. This allows for easier URL rewriting, but may break some apps depending on the old method.
    • Composer.Router: History.js is now a hard requirement.

    Sorry for any inconvenience this causes. However, since the rest of the framework is backwards compatible, you should be able to just use the old Composer.Router object with the new framework without any problems if you don't wish to convert your app.

    Have fun!

    Check out the new Composer.js, and please open an issue if you run into any problems. Thanks!

    - The Lyon Bros.

  • 201407.16

    Nanomsg as the messaging layer for turtl-core

    I recently embarked on a project to rebuild the main functionality of Turtl in common lisp. This requires embedding lisp (using ECL) into node-webkit (or soon, Firefox, as node-webkit is probably getting dumped).

    To allow lisp and javascript to communicate, I made a simple messaging layer in C that both sides could easily hook into. While this worked, I stumbled on nanomsg and figured it couldn't hurt to give it a shot.

    So I wrote up some quick bindings for nanomsg in lisp and wired everything up. So far, it works really well. I can't tell if it's faster than my previous messaging layer, but one really nice thing about it is that it uses file descriptors, which can be easily monitored by an event loop (such as cl-async), making polling and strange thread <--> thread event-loop locking schemes a thing of the past (although cl-async handles all this fairly well).

    This simplified a lot of the Turtl code, and although right now it's only using the nanomsg "pair" layout type, it could easily be expanded in the future to allow different pieces of the app to communicate. In other words, it's a lot more future-proof than the old messaging system and probably a lot more resilient (a dedicated messaging library authored by the 0MQ mastermind beats hand-rolled, hard-coded simple messaging built by a non-C expert).

