Monitoring with Uptime Kuma

Earlier today, the server that hosts Gadget Wisdom was down for ten minutes. This happens every so often, and the server, the oldest one I have, is due for replacement one of these days. But one of the problems I have is that local monitoring is…well, local. You shouldn't run your monitoring solely on the server you are monitoring. You need something external as well.

So, enter Uptime Kuma. Uptime Kuma came onto the scene two years ago as a self-hosted alternative to something like UptimeRobot (which does offer a free tier). There are other self-hosted products as well, but I was able to get this running in a short period of time, it provides exactly what I want, and it looks like it has an active development team.
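
For reference, a minimal sketch of the Docker-based install I would expect, assuming the project's documented louislam/uptime-kuma image, its default port of 3001, and a named volume for its data (check the Uptime Kuma README for your own setup):

# run Uptime Kuma in Docker, persisting its data in a named volume
docker run -d --restart=always \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  --name uptime-kuma \
  louislam/uptime-kuma:1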

So, what features does Uptime Kuma offer?

  • Dozens of notification methods to configure: email, messaging, SMS, etc.
  • HTTP, ping, and server-specific monitoring.
  • Useful Stats and Graphs
  • Optional Public Status Pages

So now I'm waiting for my next downtime to see exactly how this works in production, but just having the ability to monitor remotely and get notifications is another tool in my monitoring arsenal.

Multiple Vulnerabilities found in Wink and Insteon Systems

Rapid7 reported that it detected major vulnerabilities in the Wink and Insteon smart hub systems.

This is of particular concern to me as a Wink hub user. The Wink Android app was storing sensitive information insecurely, which has now been patched.

The other vulnerability is apparently being fixed: the Wink API does not revoke authentication tokens when you log out, and new tokens do not invalidate old ones.

I've long been concerned about the long-term health of Wink. It has been through two different owners, and it is hard to say where it might go. And hubs in general might go away in favor of Wi-Fi or Bluetooth as a standard over protocols like Zigbee and Z-Wave.

But the fact that they fixed these issues at least suggests they plan to move forward.

Let’s Build a Server: Part 2 – Monitoring

Monit

Last time, in Part 1, we discussed setting up a firewall and an email relay so notifications from the firewall could get to us.

Now, in Part 2, we're going to talk about getting more signal: server monitoring and alerting. Our primary software for monitoring is Monit.

Monit has a single configuration file, but many distributions, including mine, set up an /etc/monit.d folder so you can divide your Monit configuration into separate files.
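
On such systems, the main configuration file (often /etc/monitrc or /etc/monit.conf, depending on the distribution) simply pulls that folder in with an include directive, roughly like this:

include /etc/monit.d/*

You might then keep one file per service, such as a hypothetical /etc/monit.d/nginx and /etc/monit.d/system.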

Once it is running, you can check its status by running
monit status
It will show the status of whatever it is monitoring. There is also an optional web component, if you want to check status in a web browser.
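
Enabling that web interface takes a few lines of configuration; a minimal sketch, where 2812 is Monit's usual default port and the credentials are obvious placeholders you should change and lock down further:

set httpd port 2812
    allow localhost
    allow admin:monit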

What can you monitor?

Monit can monitor any program and restart it if it crashes. Here is an example check for Nginx:
check process nginx with pidfile /var/run/nginx.pid
    start program = "/bin/systemctl start nginx.service"
    stop program = "/bin/systemctl stop nginx.service"
    if failed host 127.0.0.1 port 80 protocol http then restart
    if 5 restarts within 5 cycles then timeout

As you can see, the simple scripting language allows you not only to restart services and execute programs, but also to alert the user.

Not only can it make sure something is running, but it can monitor its resource usage, as well as system resource usage. It can monitor processes, network connections, programs and scripts, files, directories, etc.
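
For instance, a rough sketch of system and filesystem checks might look like the following; the thresholds are arbitrary examples, not recommendations:

check system $HOST
    if loadavg (5min) > 4 then alert
    if memory usage > 80% then alert
    if cpu usage (user) > 90% for 3 cycles then alert

check filesystem rootfs with path /
    if space usage > 90% then alert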

An Alternative to Email Alerts

The default for an alert is to send an alert email, but for bigger emergencies, a phone push notification is also useful.

Monit provides simple instructions on how to set this up for Pushover. There is also the alternative of Pushbullet.

Pushover costs $5 per platform (Android, iOS, Desktop) to use on as many devices as you want. There is a per-application limit of 7,500 messages per month. Pushbullet is, by comparison, free. The basic difference as I see it is that Pushbullet is more geared toward the consumer, and Pushover is more geared toward developers in how it was initially set up. They do have similar feature sets, though.

Here is Monit’s suggested Pushover script, which can be run instead of an email alert.

/usr/bin/curl -s \
  -F "token=your_mmonit_or_monit_app_token" \
  -F "user=your_pushover_net_user_token" \
  -F "message=[$MONIT_HOST] $MONIT_SERVICE - $MONIT_DESCRIPTION" \
  https://api.pushover.net/1/messages.json

Here is an alternative version for Pushbullet. Note the outer double quotes on the payload so the $MONIT_* variables actually expand:

curl -u <your_access_token_here>: -X POST https://api.pushbullet.com/v2/pushes \
  --header 'Content-Type: application/json' \
  --data-binary "{\"type\": \"note\", \"title\": \"$MONIT_HOST\", \"body\": \"$MONIT_SERVICE - $MONIT_DESCRIPTION\"}"
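
Either command is meant to live in a small script that Monit triggers with its exec action; Monit exports MONIT_HOST, MONIT_SERVICE, and MONIT_DESCRIPTION to the environment of programs it executes. A hedged sketch, assuming you saved one of the above as a hypothetical /usr/local/bin/push-alert.sh:

# inside the nginx check shown earlier, for example:
if failed host 127.0.0.1 port 80 protocol http
    then exec "/usr/local/bin/push-alert.sh"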

Conclusion

In all cases, Monit allows you to monitor your system and take action based on a change in performance. The complexity of your rules is entirely up to you. But if you give thought to their setup, you can not only be told when there is a server emergency, the system can also take action to fix it.

Let’s Build a Server: Part 1 – The Firewall


Necessity is the mother of invention. It is once again time to upgrade the Gadget Wisdom servers. And, as I have committed to writing more here, I will be writing some articles on server construction.

Now, this will all be done using a Virtual Private Server, so the hardware is outside of the scope of this series.

The first piece of software I usually install on network-accessible servers is ConfigServer Security & Firewall (CSF). This is a firewall with login/intrusion detection and other security features. Most Linux distributions come with some sort of firewall, but this set of scripts works with iptables to provide much tighter control.

CSF provides the firewall scripting and handles login-failure detection for a variety of stock services, as well as for unsupported services via regular expressions.

There are a lot of options in the CSF configuration file…read through the description of each…decide which ports you want open, and deploy. CSF will automatically update itself when there is a new version.
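
As a rough sketch of what that looks like, the relevant settings live in /etc/csf/csf.conf; the port lists below are just an example for a web server with SSH access, not a recommendation:

# /etc/csf/csf.conf (excerpt)
TESTING = "0"                 # turn off testing mode once you're sure you won't lock yourself out
TCP_IN = "22,80,443"          # inbound ports to allow
TCP_OUT = "22,25,53,80,443"   # outbound ports to allow

# apply the configuration
csf -r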

To ensure notifications from the firewall and other administrative messages actually reach you, you will likely want the server to be able to send mail. However, you may not need, or wish for the trouble of, setting up a full mail server. The simpler solution is to set up an SMTP relay.

The example below configures Postfix, available with many Linux distributions, to relay through a Gmail account. Add the following lines to the bottom of your /etc/postfix/main.cf:

relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_sasl_tls_security_options = noanonymous

Create /etc/postfix/sasl_passwd with your Gmail credentials; the host and port must match the relayhost above.

[smtp.gmail.com]:587 user@gmail.com:PASSWORD

Then secure the file and build the lookup table Postfix actually reads.

chmod 640 /etc/postfix/sasl_passwd*
postmap /etc/postfix/sasl_passwd
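
After reloading Postfix (for example, systemctl reload postfix), a quick way to test the relay is to send yourself a message. This sketch assumes a mailx-style mail command, which may be a separate package on your distribution, and you@example.com is a placeholder for your own address:

echo "Relay test from $(hostname)" | mail -s "Postfix relay test" you@example.com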

Now, any external email will route through your Gmail account. We have now protected our server from a variety of attacks and ensured that, if there is a problem, we'll be notified of it.

There are alternatives to Gmail. For example, Mandrill offers 12,000 emails per month for free and 20 cents per thousand after that, and SendGrid offers 200 free emails and 10 cents per thousand.

You can use Mandrill or SendGrid instead of Gmail by substituting the matching relayhost and one of the credential lines below.

[smtp.mandrillapp.com]:587 USERNAME:API_KEY
[smtp.sendgrid.net]:587 USERNAME:PASSWORD


Let’s Encrypt – A New Certificate Authority


Security Expert Bruce Schneier recently pointed to a joint project to create a new certificate authority that lets everyone get basic certificates for their domain through a simple process.


The idea is for certificates that are not only free, but also automatic, secure, transparent, open, and cooperative.

The service, called Let’s Encrypt, is set to launch in the summer of 2015.

The reason for the delay is that the service wants to leverage new standards. The most notable is ACME (Automated Certificate Management Environment). The idea is that the certificate authority communicates with the web server, and the two work together to prove ownership and download the certificate, as well as handle configuration and renewal.

Now, considering how much of a chore certificates are right now, the standard, even outside of Let's Encrypt, would save a lot of anguish. Once the server has proven that it is the server of record for that domain, it can handle everything.

There's more to it than that, and certainly there are still risks, but we'll see what these people come up with by the time the ACME standard is finalized.


KeyCDN: A Review

In a continuing effort to get the best combination of services and pricing, I often review my choice of providers. While it is a pain to migrate services, things do change over time.

As a small site, I want the benefits of a CDN, but the monthly cost of one is not within my budget, which is why I explore pay-per-use CDNs. Metering means that I can prepay for a few GB of traffic and it can last me a while. I wrote about this back in 2013, when I was talking about how new providers intrigued me.

After some problems with other incumbents, I was once again looking for new options, and came upon KeyCDN, a Swiss CDN that is well regarded so far. I've been using them for about six months now. They charge $0.04 per GB for the first 10 TB, and in the last six months they've continued to add features.

Here are a few of their features:

  • Pay-per-use pricing, so no minimum monthly costs
  • A free trial (although most of these services have that)
  • Unlimited zones that can be aliased as subdomains on your site (see the sketch after this list)
  • SSL – shared or custom. Shared SSL means using their certificate. If you want to alias the CDN zone as a subdomain on your site, you need to buy a certificate from them or supply your own.
  • SPDY support
  • Push zones (if you want them to store the content, not just pull and cache it from your site)
    • Cost is $0.90/GB per month.
    • They added rsync support after I signed up, in addition to FTP, allowing you to sync your static site to them if you want.
  • Export log files to your own syslog server
  • An API if you want to control your zones

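For the zone aliasing mentioned above, the setup on your side is just a DNS record pointing at the hostname KeyCDN assigns to your zone. A hedged sketch, where cdn.example.com and the zone hostname are placeholders for your own values:

; in your DNS zone file (or the equivalent record in your DNS provider's panel)
cdn.example.com.    IN    CNAME    yourzone-1234.kxcdn.com.
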
They keep adding more features…or I keep noticing them. Until I started writing this, I didn't know they had added syslog support, which brings me to my only real criticism of KeyCDN: the last time I looked intently, I don't think the feature was there, so they could certainly be better at conveying new features to me.

Whenever I’ve needed help, they have been prompt in their response and have worked with me.

But there is still so much here that I don't take advantage of and wish to. This is a basic review, but I may go into more detail in the future. I'd like to play with their API, as they have a PHP library on GitHub and I've been working on my PHP skills as part of maintaining this site, which runs WordPress (written in PHP).

So, give them a try…if you do, try my links…I wouldn’t mind the extra few credits in my account. If you have any questions, please send a message or leave a comment.

Responsible Disclosure: I am a customer of KeyCDN, and I am using my affiliate link, which provides me with extra CDN credit. However, my decision to finally get around to reviewing them is due to the responsiveness of their support and their feature set.


Reconsidering Powerline Networking

Years ago, I tried powerline networking, and it never quite worked for me. But, a recent dead spot in my residence caused me to give it a shot again. Wiring to connect the two locations where I needed network access would be an involved process.

Powerline networking adapters are simple square boxes with a network port. You plug your network into one adapter, and it comes out at the other. Some of these adapters also act as wireless access points.

It worked surprisingly well, although I was only able to get 2 Mbps…but that adapter was plugged into an extension cord, and the adapters tend to have degraded performance when not plugged directly into the wall.

The adapters I used were an inexpensive set of TP-Link AV200 adapters I got for $25, but there is a faster standard…AV500. On the far end, I hooked up a wireless access point, and I have the option of adding a switch in order to wire in devices.

So, if you haven’t considered powerline networking of late…you may wish to. A wire is still better, faster, and more reliable, but it is not always an option.

Nginx FastCGI Caching


Over the last few months, I've been doing a lot of work trying to speed up the sites on my server…perhaps to the detriment of this site, Gadget Wisdom.

Gadget Wisdom runs WordPress on an Nginx web server. To run PHP on an Nginx server, you need to pass requests to a FastCGI server, such as PHP-FPM.

Nginx supports caching those responses. So WordPress generates a page dynamically, Nginx caches the response, and it can serve the cached version on subsequent requests. Since the resource-intensive part is the application, and most people don't need a changing page, this works for the majority of cases.
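
As a rough sketch of the moving parts (the zone name, cache path, and PHP-FPM socket path below are assumptions you would adapt to your own setup):

# in the http block: where cached responses live and how they are keyed
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

# in the server/location block that handles PHP
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php-fpm.sock;   # assumed PHP-FPM socket path
    fastcgi_cache WORDPRESS;
    fastcgi_cache_valid 200 60m;
}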

For the last few years, refreshing the cache has been done by sending a request with a specific header, which tells the system to generate the page again and store the result. A recent upgrade added the optional Nginx cache purge module, which allows purging a specific page using a simple URL scheme.

The net difference between the two is that the purge function removes the cached version, to be regenerated on the next load, while the header option generates a new version of the page that is stored in the cache. The disadvantage of the purge module is that you have to custom-compile Nginx…which means you have to manually keep up with security bugfixes.
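
A hedged sketch of what each approach might look like in configuration; the header name and purge location are placeholders, and the purge example assumes the third-party ngx_cache_purge module is compiled in:

# Header approach: a request carrying this (arbitrary) header is not answered
# from the cache, but the fresh response is stored, regenerating the entry.
fastcgi_cache_bypass $http_x_refresh_cache;

# Purge-module approach: requesting /purge/some/page removes the cached entry,
# which is then regenerated on the next normal visit.
location ~ ^/purge(/.*) {
    fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
}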

Either way, once you decide on a methodology, you also have to decide on cache validity. For example, many people opt for a microcache solution, where the cache time is very short, measured in seconds. This means pages are at most a few seconds stale, and only visitors hitting the site during that short window are served the cached copy.

The alternative is a very long cache time…measured in hours or days. As long as you have a cache refresh function available, such as the options mentioned above, so you can remove stale pages on demand, you can keep pages around for longer periods of time.

Right now, my cache validity time continues to rise. There is also browser caching: images on this site are currently set to be cached by your browser for days, since I don't usually change my images much after posting…or at all.
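
That browser-side piece is separate from FastCGI caching and is just an expires header on static assets; a minimal sketch, with the file extensions and lifetime as arbitrary examples:

# ask browsers to keep static images for a week
location ~* \.(png|jpe?g|gif|ico|svg)$ {
    expires 7d;
    add_header Cache-Control "public";
}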

So, this post hopefully covered the basic decision-making process for FastCGI caching on Nginx. In Part 2 (if I get to it), we'll cover some of the settings that enable this, as well as some of the considerations you have to make while configuring it.

Why CDN.net Intrigues Me


CDN.net is the latest pay-per-use CDN I've been using. Pay-per-use is the best option if you want to manage your costs.

We were with CDN77 for a while, which offered a flat-rate $49/TB service. But the level of service was not one I was thrilled with. I felt as if they didn't understand my questions and weren't interested in working with me.

CDN.net is actually one of two fairly new pay-per-use CDNs based on the OnApp federated CDN, which essentially uses spare capacity on various servers to offer CDN services. The other is CDNify, which charges a flat $49/TB rate.

CDN.net is more of a marketplace. You can pick your CDN locations out of their available options and customize the package. When I started with them, you needed a free trial to see their pricing, but they’ve recently changed that. Their pricing is a variable rate, based on location. So, you can route your data through Salt Lake City at less than 2 cents a GB, or through Chelyabinsk for 1 dollar a GB.

Your rate is locked in until you update your package…which means that if the price goes down in one location and up in another, you have a hard decision to make. I'm sorry to any Chelyabinsk readers, but you'll have to wait an extra second or two to be served. I'm not paying that rate.

I like the idea of this choice, but sometimes the results are surprising compared to where my statistics say my users are. Analytics suggests that many of my users are in the eastern United States, yet I'm running more data through Dallas than New York. But it is never that simple.

But no one is writing about CDN.net or CDNify. No reviews that I could find. Only early startup announcements.

So, I set up a test of both. I had no problem with performance from either, but then, they are using the same backend. CDNify had some reporting issues in their free trial, but they fixed those quickly after I reported them.

CDN.net, which I ultimately settled on, has answered all of my questions…although they do not seem to respond on weekends. One of their people even invited me to chat.

CDNs are necessary to reduce load from your own server and speed loading of static assets. CDN.net is still new, and there are more things I hope they do in the future. They also believe that they can offer lower prices as they grow their user base, as the cost would be spread out.

In terms of services, I would like to see improvements to their reporting mechanisms, but they have made changes, so I’ll see what happens.

You can go for a free trial here.