Let’s Build a Server: Part 2 – Monitoring

Last time, in Part 1, we discussed setting up a firewall and an email relay so notifications from the firewall could get to us.

Now, in Part 2, we’re going to add more signal: server monitoring and alerting. Our primary monitoring software is Monit.

Monit has a single configuration file, but many distributions, including mine, set up an /etc/monit.d directory so you can split your Monit configuration across multiple files.
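On such systems, the main file simply pulls that directory in. A minimal sketch of the relevant lines (exact paths vary by distribution):

set daemon 60            # poll services every 60 seconds
include /etc/monit.d/*   # pull in per-service configuration files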

Once it is running, you can check its status by running

monit status

It will show the status of whatever it is monitoring. There is also an optional web component, if you want to check status in a web browser.
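The web component is enabled with the set httpd statement. Note that the monit command-line client itself talks to this HTTP interface, so it needs to be enabled for monit status to work. A minimal, localhost-only sketch:

set httpd port 2812
  use address localhost   # listen only on the loopback interface
  allow localhost         # permit connections from this machine
  allow admin:monit       # basic-auth username:password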

What can you monitor?

Monit can monitor any program and restart it if it crashes.
check process nginx with pidfile /var/run/nginx.pid
  start program = "/bin/systemctl start nginx.service"
  stop program = "/bin/systemctl stop nginx.service"
  if failed host 127.0.0.1 port 80 protocol http then restart
  if 5 restarts within 5 cycles then timeout

As you can see, the simple configuration language allows you to not only restart services and execute programs, but also alert the user.

Not only can Monit make sure something is running, it can also watch that service’s resource usage, as well as the system’s. It can monitor processes, network connections, programs and scripts, files, directories, and more.
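For example, here is a sketch of system-level and filesystem checks (the thresholds are arbitrary, and check system $HOST requires a reasonably recent Monit):

check system $HOST
  if loadavg (5min) > 4 then alert                    # sustained load
  if memory usage > 85% then alert                    # RAM pressure
  if cpu usage (user) > 90% for 5 cycles then alert   # persistent CPU saturation

check filesystem rootfs with path /
  if space usage > 90% then alert                     # disk filling up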

An Alternative to Email Alerts

The default action for an alert is to send an email, but for bigger emergencies, a push notification to your phone is also useful.

Monit’s documentation provides simple instructions for setting this up with Pushover. Pushbullet is an alternative.

Pushover costs $5 per platform (Android, iOS, desktop) to use on as many devices as you want, with a per-application limit of 7,500 messages per month. Pushbullet is, by comparison, free. The basic difference as I see it is that Pushbullet is geared more toward consumers, while Pushover was initially built with developers in mind. They have similar feature sets, though.

Here is Monit’s suggested Pushover script, which can be run instead of an email alert.

/usr/bin/curl -s \
  -F "token=your_pushover_app_token" \
  -F "user=your_pushover_user_key" \
  -F "message=[$MONIT_HOST] $MONIT_SERVICE - $MONIT_DESCRIPTION" \
  https://api.pushover.net/1/messages.json

Here is an alternative version for Pushbullet, with the JSON double quotes escaped so the shell can expand the $MONIT_* variables:

curl -u <your_access_token_here>: -X POST https://api.pushbullet.com/v2/pushes \
  --header 'Content-Type: application/json' \
  --data-binary "{\"type\": \"note\", \"title\": \"$MONIT_HOST\", \"body\": \"$MONIT_SERVICE - $MONIT_DESCRIPTION\"}"
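Either command can be dropped into a small shell script and wired to a check with exec. Monit exports MONIT_HOST, MONIT_SERVICE, and MONIT_DESCRIPTION in the environment of programs it executes, so the variables above resolve at run time. A sketch, with a hypothetical script path:

check process nginx with pidfile /var/run/nginx.pid
  if failed host 127.0.0.1 port 80 protocol http
    then exec "/usr/local/bin/push-alert.sh"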

Conclusion

In all cases, Monit allows you to monitor your system and take action based on a change in performance. The complexity of your rules is entirely up to you. But if you put thought into their setup, you can not only be told when there is a server emergency; the system can take action to fix it.

Let’s Build a Server: Part 1 – The Firewall

Necessity is the mother of invention. It is once again time to upgrade the Gadget Wisdom servers. And, as I have committed to writing more here, I will be writing some articles on server construction.

Now, this will all be done using a Virtual Private Server, so the hardware is outside of the scope of this series.

The first piece of software I usually install on network-accessible servers is ConfigServer Security & Firewall (CSF), a firewall with login/intrusion detection and other security features. Most Linux distributions come with some sort of firewall, but this set of scripts works with iptables to lock things down much further.

CSF provides scripting for a firewall and handles login failures for a variety of stock services, as well as unsupported services via regular expressions.

There are a lot of options in the CSF configuration file…read through the description of each, decide which ports you want open, and deploy. CSF will automatically update itself when there is a new version.
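As an illustration, the handful of settings most people touch first in /etc/csf/csf.conf look something like this (the port lists are an example, not a recommendation):

TESTING = "0"                  # disable the testing-mode cron job once your rules work
TCP_IN = "22,80,443"           # inbound TCP ports to allow
TCP_OUT = "22,25,80,443,587"   # outbound TCP ports to allow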

To ensure notifications from the firewall and other administrative notifications are read, you will likely want the server to be able to send mail. However, you may not need, or want the trouble of, a full mail server. The simpler solution is to set up an SMTP relay.

The example below configures Postfix, available with many Linux distributions, to relay through a Gmail account. Add the following lines to the bottom of your /etc/postfix/main.cf:

relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_sasl_tls_security_options = noanonymous

Create /etc/postfix/sasl_passwd, the password file referenced above, with your Gmail credentials.

[smtp.gmail.com]:587 user@gmail.com:PASSWORD

Then hash the file, and secure both it and the generated database.

postmap /etc/postfix/sasl_passwd
chmod 640 /etc/postfix/sasl_passwd*
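After reloading Postfix, you can verify the relay with a quick test message through Postfix’s sendmail wrapper (the address below is a placeholder):

systemctl reload postfix
printf "Subject: relay test\n\nIt works.\n" | sendmail you@example.com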

Now, any external email will route through your Gmail account. We have now protected our server from a variety of attacks and ensured that, if there is a problem, we’ll be notified of it.

There are alternatives to Gmail. For example, Mandrill offers 12,000 emails per month for free and 20 cents per thousand after that, and Sendgrid offers 200 emails a day for free and 10 cents per thousand after that.

You can use Mandrill or Sendgrid instead of Gmail by substituting the credentials below in /etc/postfix/sasl_passwd, and pointing relayhost at the matching server.

[smtp.mandrillapp.com]:587 USERNAME:API_KEY
[smtp.sendgrid.net]:587 USERNAME:PASSWORD

Why CDN.net Intrigues Me

CDN.net is the latest pay-per-use CDN I’ve been using. Pay-per-use is the best option if you want to manage your costs.

We were with CDN77 for a while, which offered a flat-rate $49/TB service. But the level of service was not one I was thrilled with. I felt as if they didn’t understand my questions and weren’t interested in working with me.

CDN.net is actually one of two fairly new pay-per-use CDNs built on the OnApp federated CDN, which essentially uses spare capacity on various servers to offer CDN services. The other is CDNify, which charges a flat-rate $49/TB fee.

CDN.net is more of a marketplace. You can pick your CDN locations out of their available options and customize the package. When I started with them, you needed a free trial to see their pricing, but they’ve recently changed that. Their pricing is a variable rate, based on location. So, you can route your data through Salt Lake City at less than 2 cents a GB, or through Chelyabinsk for 1 dollar a GB.

Your rate is locked in until you update your package…which means that if the price goes down in one location and up in another, you have a hard decision to make. I’m sorry to any Chelyabinsk readers, but you’ll have to wait an extra second or two to be served. I’m not paying that rate.

I like the idea of this choice, but sometimes the results are surprising compared to where my statistics say my users are. Analytics suggests that many of my users are in the Eastern United States, but I’m running more data through Dallas than New York. It is never that simple.

But no one is writing about CDN.net or CDNify. No reviews that I could find, only early startup announcements.

So, I set up a test of both. I had no problem with the performance of either, but then, they use the same backend. CDNify had some reporting issues during the free trial, but they fixed them quickly after I reported them.

CDN.net, which I ultimately settled on, has answered all of my questions…although they do not seem to respond on weekends. One of their people even invited me to chat.

CDNs are necessary to reduce load on your own server and speed up the loading of static assets. CDN.net is still new, and there are more things I hope they do in the future. They also believe they can offer lower prices as their user base grows, since the cost would be spread out.

In terms of service, I would like to see improvements to their reporting mechanisms, but they have been making changes, so I’ll see what happens.

You can go for a free trial here.

Lightweight Server Monitoring

I recently began a move of Gadget Wisdom and related sites to a new server. The purpose of this was to lay the infrastructure for a major upgrade.

One of the major pushes was upgrading monitoring features. Some of the software being used was no longer being maintained, and replacements had to be found.

Nagios and Munin are two of the most popular tools IT specialists use for infrastructure monitoring, but there are good reasons I opted for something more lightweight. There are dozens of monitoring tools, and choosing one is quite overwhelming. These are two that I have been happy with so far.

One of the first things I installed is collectd, a daemon that gathers and stores performance data. It is plugin-based, which means it can pipe data into a variety of different pieces of software. So it is incredibly extensible, which leaves room for future data gathering and future output. It is also incredibly lightweight, which has its advantages.
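A few LoadPlugin lines in collectd.conf are enough to start gathering data. This sketch (the plugin selection is illustrative) collects CPU, memory, and disk statistics and writes them to RRD files:

LoadPlugin cpu
LoadPlugin memory
LoadPlugin df
LoadPlugin rrdtool

<Plugin rrdtool>
  DataDir "/var/lib/collectd/rrd"   # where the RRD files are written
</Plugin>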

To turn the data into graphs, I’m using a simple front-end called Jarmon for now. Jarmon downloads the RRD files generated by collectd and renders them in the browser, entirely on the client side.

The second is a monitoring tool called Monit. Monit watches various services to ensure they are up, and can take action if they go down: sending an alert, restarting a service, executing a script, and so on. One of the most fun things about having alerts is reading them…and in many cases, knowing I don’t have to do anything, because I told Monit to do it for me.

There will be more to come on this, but what do you use in similar situations?

Trimming Your SSD

Nearly three years ago, I wrote an article on optimizing for SSDs under Linux. Recently, I decided to revisit the issue after reading a recent blog post.

The recommendation at the time was to enable TRIM support by mounting the drive with the discard option. The first question is: if this is such a good idea, why isn’t it enabled by default? Why do you have to add it to your mount options, like below?

/dev/sda1 / ext4 discard,defaults 0 1

It turns out that enabling the discard option does have a performance hit on deletes. So, how do you keep your SSD Trimmed and avoid a costly performance penalty?

It turns out you can trim manually using the fstrim command, and set up a cron job to run it once a day. The command takes only one argument by default: the mountpoint of the partition.
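A minimal sketch of such a job, dropped into /etc/cron.daily and made executable (add one fstrim line per mountpoint for your layout):

#!/bin/sh
# Trim the root filesystem once a day
fstrim /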

Seems like something worth thinking about. However, on the majority of the systems I run SSDs on, the solid state drive acts as an OS drive, so the number of deletes is minimal compared to writes.

In the end, enabling TRIM ensures that the drive will have the best wear-leveling and performance, but there is a cost. For some systems, it is just easier to mount with the discard option; for others, to run fstrim on a schedule.

Thinking about RAID vs Backup

The cost of storage hit a low the last time I was due for a storage upgrade. Then prices shot through the roof after flooding in Thailand closed factories.

This shut down all of my hard drive purchases for over two years. When I emerged from my cocoon, Samsung was gone as a hard drive manufacturer…and I had bought many Samsung OEM hard drives.

The purpose of RAID in a redundant system is to protect against hardware failure. There are different levels of RAID for this: RAID 1 for a straight mirror, and RAID 5 and 6, which require a minimum of three and four drives respectively.
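For reference, a two-drive software mirror on Linux is a single mdadm command (the device names are hypothetical, and this destroys any existing data on them):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1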

RAID is important if you care about uptime. If you can afford to be down for a bit, backups are a better choice.

What is being stored, in this case, falls into several categories: video, music, documents, and configuration files. There is no point in storing complete drive images. The OS can be reinstalled, and it will probably run better and cleaner after it is. The OS drive on every system I’ve built or refurbished in the last two years is an SSD, which is common practice nowadays.

I had been mulling this after reading an article on another hardware refresh by Adam Williamson. He hadn’t refreshed in seven and a half years, and used a separate NAS and server. So why did I refresh after only two and a half years? Partly, it was due to mistakes.

I’d been using WD Green drives, which have several limitations. They park the head after only 8 seconds of inactivity, which drives up the load cycle count. The WD Red is designed for 24/7 operation in network attached storage and carries a longer warranty; I now have two of their 3TB drives. The only other alternative in WD’s stable was the Black, their performance drive. It might be time to consider a Seagate, the main competitor, as well.

The warranty situation on hard drives continues to deteriorate: five years, down to three, down to two. So there is less protection from the manufacturer and less incentive to build quality products. That is why we were buying OEM over consumer drives the last few years.

Back to the subject at hand: why not RAID? It is simply a matter of cost versus benefit. This is terabytes of video data, mostly a DVD archive I intend to create by backing up my DVD collection to MKV. If it were lost, the original copies aren’t going anywhere. But, more importantly, cloud backup is impractical at that size.

Using Amazon S3, for example, at a rate of 9.5 cents a GB, that is just under $100 a month per TB. Amazon Glacier, which is their long-term backup option, is 1 cent a GB, or roughly $10 a TB. But once you take video out of the equation, or sharply reduce it, budgeting $5 a month for important data is a reasonable amount, and still gets you a lot of storage options to work with.

So, to ensure redundancy, there is a second drive in the system, and backups will be done to it. From there, the backups of everything but the video store will be sent up to the cloud. As I’ve mostly given up buying DVDs (due to Blu-ray), the collection should be fairly static.
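As a sketch of how that might look with rsync (the paths are hypothetical; --delete keeps the mirror exact, so point it carefully):

rsync -a --delete /srv/data/ /mnt/backup/data/            # mirror everything to the second drive
rsync -a --exclude='video/' /srv/data/ /mnt/cloud-stage/  # stage everything but video for the cloud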

Back to Adam Williamson: he had a great idea of having the other computers on the network back up their data to the server, isolated from one another by giving each machine a separate user account on the server. I’m not quite there yet, but it sounds good. I also have plans to download data from my cloud service providers (Google, Dropbox, etc.) and maintain a local backup, but that is a longer-term project. I’m reasonably certain that, in the interim, Google has a better backup system than I do.

What about off-site, then? I still have the old 1TB Green drives. They can be run through diagnostics, loaded up as a backup, and sent off to a relative’s house…I’ve added a hard drive dock on an eSATA port to support this.

So in the end, RAID wasn’t necessary for me, but some redundancy was. It may be for you. Comments?

More to come…

Chrome OS without Google

I have spent a good deal of time with Chrome OS and the Chromebook. The one troubling thing about it is not that it is an operating system in a web browser. You know that going in. It is Google.

Recent events have made all of us a bit wary of Google services, even as we keep using them. When Google recently launched Google Keep, a decent service in its own right, no one trusted it. Before the Reader debacle, Google enthusiasts would have tried it. Now, many are afraid to love a new service.

GigaOm picked up on a feature request for Chromium OS, the open-source version of Chrome OS. The feature would establish an API allowing extensions to integrate with the file manager, so cloud services other than Google Drive could act as ‘drives’ inside it.

Elsewhere in the Chromium project, there is a reference to using Chromium OS without a Google login, but so far, even on the open-source project, you need Google. But one can ask…if you are going to do Chrome without Google, what’s the point? You might as well run a full-fledged OS.

Years ago, computer design was based on ‘dumb terminals’ and powerful servers. Today’s computers are significantly more powerful than those ‘powerful’ servers, but the truth is that many of us would now rather have identical experiences across multiple devices. Products like ownCloud prove that while we may wish for these things, we want more control and certainty about them. We want control.

I will be looking to see more in this direction: free services versus paid services versus self-hosted services. I would welcome your thoughts on this, and on what areas you think are worth exploring.

Running Personal Services on a Low End VPS

For those of us who like to tinker with client/server software for personal or household use, there are many good options. You can use a Raspberry Pi as a server, for example. You can use an old computer.

Both of these mean running services out of your home or business. But we are an increasingly mobile society, and you might not have good upstream bandwidth, or your ISP may block inbound ports. That is where a low-end VPS offering comes in.

We chose ChicagoVPS, which offers a $12/year 128MB VPS with 10GB of storage and 100GB of monthly bandwidth. That is more than enough for personal use. They offer three locations: Chicago, Buffalo, or LA. There are similar services averaging around $12-15 a year.

This is not the sort of service where you expect a lot of reliability. The service has had some hiccups, but as long as you back up and take the precautions you should on any service, there shouldn’t be any problem.

On a 128MB instance, I have Tiny Tiny RSS running, as well as ZNC and a few other random services I use purely for my own interests.

What do you think? Do you have any other recommendations for a tiny VPS? Do you have alternative providers you recommend for cheap VPS services?