-
I’ll keep this short. I recently installed Ansible 2.0 to manage the Turtl servers. However, once I ran some of the roles I used in the old version, my handlers were not running.
For instance:
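Something along these lines (a minimal sketch; the file paths and module arguments here are illustrative, but the handler name is the real one):

# roles/monit/tasks/main.yml -- copy the config and notify the handler
- name: copy monitrc
  copy: src=monitrc dest=/etc/monit/monitrc owner=root mode=0600
  notify: restart monit

# roles/monit/handlers/main.yml
- name: restart monit
  service: name=monit state=restarted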
Note that in Ansible <= 1.8, when the monitrc file gets copied over, it would run the restart monit handler. In 2.0, no such luck.

The fix
I found a GitHub discussion which led to a Google Groups post which says to put this in ansible.cfg:
[defaults]
...
task_includes_static = yes
handler_includes_static = yes
This makes includes get pre-processed instead of loaded dynamically. I don't really know exactly what that means, but I do know it fixed the issue. It breaks looping over includes, but I don't use any loops in my Ansible tasks, so it's not a problem for me.
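For reference, by "breaks looping" I mean you can no longer loop over an include like this (a hypothetical example; static includes reject loop constructs):

- include: install_service.yml service_name={{ item }}
  with_items:
    - monit
    - nginx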
-
201511.22
SSH public key fix
So once in a while I'll run into a problem where I can log into a server over SSH as one user via public key, but taking that user's authorized_keys file and dumping it into another user's .ssh/ folder doesn't work. There are a few things you can try.
Permissions
Try this:
chmod 0700 .ssh/
chmod 0600 .ssh/authorized_keys
sudo chown -R myuser:mygroup .ssh/
That should fix it 99% of the time.
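If the permissions look right and it still fails, it's worth watching the SSH daemon's log on the server while you retry; the path varies by distro (this is the Debian/Ubuntu one):

sudo tail -f /var/log/auth.log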
Locked account
Tonight I had an issue where the permissions were all perfect…checked, double checked, and yes they were fine.
So after poking at it for an hour (instead of smartly checking the logs) I decided to check the logs. I saw this error:
Nov 23 05:26:46 localhost sshd[1146]: User deploy not allowed because account is locked
Nov 23 05:26:46 localhost sshd[1146]: input_userauth_request: invalid user deploy [preauth]
Huh? I looked it up, and apparently an account can become locked if its password is too short or insecure. So I did
sudo passwd deploy
Changed the password to something longer, and it worked!
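If you'd rather check for this instead of guessing, passwd can report the lock status and usermod can unlock the account without changing the password (commands from memory, so double-check against your distro's man pages):

sudo passwd -S deploy    # an "L" in the second field means the account is locked
sudo usermod -U deploy   # unlock without setting a new password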
Have any more tips on fixing SSH login issues? Let us know in the comments below.
-
201509.05
Nginx returns error on file upload
I love Nginx and have never had a problem with it. Until now.
Turtl, the private Evernote alternative, allows uploading files securely. However, after switching to a new server on Linode, uploads broke for files over 10K. The server was returning a 404.
I finally managed to reproduce the problem in cURL, and to my surprise, the requests were getting stopped by Nginx. All other requests were going through fine, and the error only happened when uploading a file of 10240 bytes or more.
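The reproduction itself was nothing fancy; something like this multipart upload with the file size right at the boundary (the endpoint here is made up):

# a 10240-byte file triggers the error, anything smaller goes through
dd if=/dev/urandom of=test.bin bs=1024 count=10
curl -v -F "file=@test.bin" https://api.turtl.example/files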
The first thing I thought was that Nginx v1.8.0 had a bug, but nobody on the internet seemed to have this problem. So I installed v1.9.4. Now the server returned a 500 error instead of a 404. Still no answer as to why.
I finally found it: playing with client_body_buffer_size seemed to change the threshold for which files would trigger the error and which wouldn't, but ultimately the error was still there.

Then I read about how Nginx uses temporary files to store body data. I checked that folder (in my case /var/lib/nginx/client_body) and it was writeable by the nginx user, however the parent folder /var/lib/nginx was owned by root:root and was set to 0700. I set /var/lib/nginx to be readable/writable by user nginx, and it all started working.

Check your permissions
So, check your folder permissions. Nginx wasn't returning any useful errors: first a 404 (which I'm assuming was a bug fixed in a later version), then a 500. It's worth noting that after switching to v1.9.4, the Permission denied error did show up in the error log, but at that point I had already decided the logs were useless (v1.8.0 silently ignored the problem).
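In my case the fix amounted to something like this (the nginx user name and temp path depend on how your distro packages Nginx):

sudo chown -R nginx:nginx /var/lib/nginx
sudo chmod 0700 /var/lib/nginx    # 0700 is fine once the nginx user owns it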
Another problem
This is an edit! Shortly after I applied the above fix, I started getting another error. My backend was getting the requests, but the entire request was being buffered by Nginx before being proxied. This is annoying to me because the backend is async and is made to stream large uploads.
After some research, I found the fix (I put this in the backend proxy's location block):

proxy_request_buffering off;
This tells Nginx to just stream the request to the backend (exactly what I want).
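For context, the relevant location block ends up looking something like this (the upstream address and path are placeholders for my actual config):

location /api/ {
    proxy_http_version 1.1;          # lets chunked request bodies stream through
    proxy_request_buffering off;     # hand the body to the backend as it arrives
    proxy_pass http://127.0.0.1:8181;
}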