
Gadget Wisdom


Nginx FastCGI Caching

Nginx Logo (Photo credit: Wikipedia)

Over the last few months, I’ve been doing a lot of work to speed up the sites on my server… perhaps to the detriment of this site, Gadget Wisdom.

Gadget Wisdom runs on WordPress on an Nginx web server. To run PHP under Nginx, you need to pass requests to a FastCGI server.

Nginx supports caching those responses. So, WordPress generates a page dynamically, and Nginx caches the response and can serve the cached version on subsequent requests. Since the resource-intensive part is the application, and most visitors don’t need a freshly generated page, this works for the majority of cases.

For the last few years, refreshing the cache has been done by sending a request with a specific header, which tells the system to generate the page again and store the result. A recent upgrade added the optional Nginx Cache Purge module, which allows purging a specific page using a simple URL scheme.

The net difference between the two: the purge function removes the cached version, which is regenerated on the next load, while the header option generates a new version of the page immediately and stores it in the cache. The disadvantage of the Purge module is that you have to custom-compile Nginx… which means you have to manually keep up with security bugfixes.
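As a rough sketch of the two refresh methods (the zone name, paths, PHP-FPM socket, and the bypass header name here are assumptions for illustration, not this site’s actual configuration):

```nginx
# Cache storage and key (http block). Zone name and paths are assumed.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WORDPRESS:100m inactive=7d;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    # Method 1: a special header forces regeneration; the fresh response
    # is fetched from PHP and stored, replacing the cached copy.
    # X-Refresh-Cache is a hypothetical header name.
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php-fpm.sock;
        fastcgi_cache WORDPRESS;
        fastcgi_cache_bypass $http_x_refresh_cache;
    }

    # Method 2: the ngx_cache_purge module (requires a custom compile)
    # deletes the cached entry via a simple URL scheme.
    location ~ /purge(/.*) {
        fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
    }
}
```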

Either way, once you decide on a methodology, you also have to decide on cache validity. For example, many people opt for a microcache solution… where the cache time is very short, measured in seconds. This means that only when the site is being hit heavily will anyone be served a ‘stale’ page.

The alternative is a very long cache time… measured in hours or days. As long as you have a cache refresh function available, such as the options mentioned above, so you can remove stale pages on demand, you can keep pages around for longer periods.
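The two approaches differ only in the validity times you set; a sketch (values here are illustrative, assuming the same cache zone as above):

```nginx
# Microcache: pages go stale after seconds, so no manual refresh is needed.
fastcgi_cache_valid 200 301 10s;

# Long-lived cache: rely on the purge/header refresh mechanisms
# to remove stale pages on demand.
fastcgi_cache_valid 200 301 24h;
fastcgi_cache_valid 404 1m;
```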

Right now, my cache validity period continues to grow. There is also browser caching: images are currently instructed to be cached by your browser for days, and I don’t usually change my images much after posting… or at all.

So, this post hopefully covered the basic decision-making process for FastCGI caching on Nginx. In Part 2 (if I get to it), we’ll cover some of the settings that enable this, as well as some of the considerations you have to make while configuring it.

Published on September 16, 2014

The New Home Theater Rack

Storing electronic components is always hard. For years, my audio components have been housed in an Ikea Besta media cabinet with ventilation and wiring holes cut into the rear panel. This created some support issues, as the cabinet wasn’t designed for the way I was using it.

I could have hacked the thing further, but opted for replacement, as the Besta color I use was discontinued, and thus replacement parts weren’t going to be an option in the future.

All of my home networking equipment, cable modem, router, etc., happens to converge under my stereo system, which contains a receiver, Blu-ray player, DVR cable box (under protest from the cable company), etc.

I needed something that was structurally strong, open for ventilation, and easily adjustable. I looked at a variety of commercial media cabinets, relay racks, etc., before deciding on Elfa.

Yes, Elfa…I switched from Ikea, the Swedish home furnishings company, to Elfa, a Swedish shelving company. Elfa is owned by The Container Store, which happened to be having its annual Elfa shelving sale.

So, why Elfa? While they offer the traditional easy-hang closet-type shelving, they also offer a freestanding option, which is what I used. While I went with ventilated shelving, they offer solid shelving as well as decorative pieces.

I did go for a simple industrial look, but I didn’t have to. I found examples online of people using the same shelving for TV stands and other functions as well.

In the end, Elfa, even with a 30% off sale, was more expensive than the Besta solution I had, and comparable to assembled media cabinets. I could have gotten a pressboard or glass cabinet of similar design for the price.

But what appeals to me in design is modularity. I will replace a component, need to adjust the height of a shelf, need to replace wires… a modular layout permits me to do all of this. A fixed layout limits me.

What solutions have you come up with to solve this sort of problem? What do you think of this one?

Published on March 3, 2014

The Pebble: A Review

Pebble (Photo credit: the waving cat)

It was back in October that I commented on the ideal smartwatch: “Imagine a watch that contains a multicolored LED notification light, a vibrating alert, and a less bulky/blocky shape than the Pebble. Bluetooth 4.0 LE by default.”

So, here we are a few months later, and I saw a great deal on a Pebble and decided to take advantage of it. This was shortly before Pebble announced the Pebble Steel. The innards of the Steel and the software are the same, but the design is a bit more stylish and professional looking. There is an LED light, which may or may not be software-controllable. It is also over $100 more.

Bluetooth 4.0 LE is still not supported. In fact, the Android app for Pebble uses the Accessibility system rather than the Notification Listener added in Android 4.3; using the Notification Listener requires a third-party app that supports it.

Pebble has an Android app update in beta that includes their new watch app and face store, which organizes apps and watchfaces in a single place. This was previously something offered by third parties rather than Pebble itself. It is slow, but it is still in beta. I opted to upgrade rather than wait. As I do not have an iOS device, I can’t comment on that experience.

So, how does the Pebble stack up? Well, first, the band that came with it was too small, but the Pebble uses a standard 22mm watchband, so I was able to replace it with a longer one. The watch body itself is bigger than some watches, but I’ve been able to get used to the size.

The main purpose of the Pebble for me is notifications. It vibrates and flashes notifications on the screen. One of the biggest problems I’ve always had is that when I put my phone away, I miss calls and other information. This is especially an issue in crowded places.

With the Pebble, the watch vibrates and I can see the message on the screen without having to take the phone out. I saw one argument from a user that while the connection to the Pebble costs battery life, it could actually save battery overall, because the user would not be turning on the phone screen to read messages.

The device is at least as rugged as any other watch, and so far I’ve had no failure with it.

The Pebble is still disappointing on the app front, though it is getting there with the new Pebble App Store and version 2.0 of the firmware/SDK. The new firmware supports retrieving information from the Internet, something not natively supported before. This allows dynamic information such as weather conditions to be added to a watchface.

There is little interactivity I can think of that I’d want from the buttons of the Pebble, other than music control and other simple functions. Pebble supports basic music control, which I supplement with Music Boss, which adds additional features.

There is more coming from Pebble on the software front, and from third-party developers offering additional features. I’ll be interested to see what’s next.

 

Published on February 11, 2014

Review: Blu Dash 4.5

I don’t normally do cell phone reviews. But when I was looking, the Blu line of products seemed to be under-reviewed. Many of the news sources that mention Blu seem to be merely quoting press releases; there are few reviews of the actual product.

Blu, a Miami-based company, has been in business since 2009 and makes a variety of phones whose specifications are on par with those from better-known manufacturers, but whose prices are well below theirs. The company sells phones primarily in Latin America. In the U.S., you’ll find them at online retailers such as Amazon and Newegg.

The Blu Dash 4.5 runs near-stock Android 4.2 Jelly Bean. It has 4GB of internal storage and 512MB of RAM. It has, as the name suggests, a 4.5″ TFT LCD screen with 480×854 resolution. The processor is a 1.2GHz MediaTek quad-core. It comes with a screen protector, charger, and case.

It comes in two variants, 850/1900MHz and 850/2100MHz, both supporting HSPA+ at 21Mbps.

So, those are the specs. The design is fairly standard. It has three hardware buttons… Back, Menu, and Home. The oddest thing about it is that the charging port is on the top, next to the headphone jack.

For a budget phone, it is great. The phone isn’t as powerful as some, and that shows, but for most functions it is more than adequate. As I write this, I’ve only used it over a weekend, and I plan to give it a few more days.

So far, it has performed all of the functions I would normally use a phone for. I rarely watch video on my phone, but aside from some audio sync issues at resolutions higher than the screen’s, it handled it all.

Each year, the phones get bigger, the screen resolution gets higher, the processor gets faster, and so on. There are phones with higher specifications, even ones from Blu, but from the looks of it, these are value products.

There is one more aspect of the phone… reception. This is the hardest to evaluate. I’m primarily a Verizon customer, so I bought a T-Mobile prepaid SIM for testing. T-Mobile uses the 1900MHz band for its GSM service and is moving its HSPA+ service to this frequency; its HSPA+ network is mostly on the 1700MHz band, which the phone doesn’t support. But since New York City, where I live, has mostly been moved over, I’ve gotten consistent service at 3G speeds. Lacking another phone to compare it to, though, I may not be the best judge of this.

I will follow up after more time with the phone.

 

Published on October 20, 2013

The Ideal Smartwatch Is Discreet

Pebble Watch

I’ve been following items like the Pebble Watch, the Samsung Galaxy Gear, and so on.

None of these is exactly what I’m looking for, but the Pebble is probably the closest. This is why I’m hoping for a Pebble 2.0 or similar. Pebble as a company has had issues delivering its existing units, though.

So, what do I want in a watch? I’ve been thinking about this. It has to look like a watch. If you want to bridge the divide between a digital watch and a smarter watch, it should be the same size as a normal watch. I don’t want people coming up to me and staring at my watch.

It should be simple. If I wanted a full-size device on my arm, I could get a sport band for my cell phone and hold it there. I like the idea of phone integration, but the purpose needs to be simplified.

Look at Ion Glasses, which is set to offer a pair of sunglasses with control buttons and a notification LED visible only to the wearer. This is meant to be discreet… no one would know about the integration. That is what I want in a wearable device: it should integrate with its environment.

I have a phone and a tablet; I don’t need another fully functional screen. The blocky square of the Pebble is closer, but from pictures and video alone, the Pebble doesn’t look that impressive visually.

While writing this, I’m holding my actual watch for inspiration. Imagine a watch that contains a multicolored LED notification light, a vibrating alert, and a less bulky/blocky shape than the Pebble. Bluetooth 4.0 LE by default.

I could picture a Pebble 2 doing that, letting me notice important notifications without pulling out my phone, but not being something I watch continually.

What do you think?

Published on October 18, 2013

Why CDN.net Intrigues Me

CDN.net logo

CDN.net is the latest pay-per-use CDN I’ve been using. Pay-per-use is the best option if you want to manage your costs.

We were with CDN77 for a while, which offered a flat-rate $49/TB service. But I wasn’t thrilled with their level of service; I felt as if they didn’t understand my questions and weren’t interested in working with me.

CDN.net is actually one of two fairly new pay-per-use CDNs based on the OnApp federated CDN, which essentially uses spare capacity on various servers to offer CDN services. The other is CDNify, which charges a flat-rate $49/TB fee.

CDN.net is more of a marketplace. You can pick your CDN locations out of their available options and customize the package. When I started with them, you needed a free trial to see their pricing, but they’ve recently changed that. Their pricing is a variable rate based on location. So, you can route your data through Salt Lake City at less than 2 cents a GB, or through Chelyabinsk for $1 a GB.

Your rate is locked in until you update your package… which means that if the price goes down in one location and up in another, you have a hard decision to make. I’m sorry to any Chelyabinsk readers, but you’ll have to wait an extra second or two to be served. I’m not paying that rate.

I like having this choice, but sometimes the results are surprising compared to where statistics report my users are. Analytics suggests that many of my users are in the Eastern United States, but I’m running more data through Dallas than New York. It is never that simple.

But no one is writing about CDN.net or CDNify. I could find no reviews, only early startup announcements.

So, I set up a test of both. I had no problem with the performance of either, but then, they are using the same backend. CDNify had some reporting issues in their free trial, but they fixed those quickly after I advised them.

CDN.net, which I ultimately settled on, has answered all of my questions…although they do not seem to respond on weekends. One of their people even invited me to chat.

CDNs are necessary to reduce load on your own server and speed up the loading of static assets. CDN.net is still new, and there are more things I hope they do in the future. They also believe they can offer lower prices as they grow their user base, since the cost would be spread out.

In terms of services, I would like to see improvements to their reporting mechanisms, but they have made changes, so I’ll see what happens.

You can go for a free trial here.

Published on July 11, 2013

RIP Google Reader

Today, July 1st, Google Reader is officially gone.

Image representing Google Reader

Om Malik, of GigaOM, asked the question: Google killed Reader instead of updating it; if that was such a wise decision, why are so many companies scrambling to get into this space?

The truth is, Google Reader was based on the inbox model: you’d see everything. Nowadays, there is too much information and too many sites. My feed reader is rarely empty, but the same can be said of my Twitter stream.

The option to have a more curated experience is the business these companies are getting into. Build a better experience, and the signal it generates will allow for better ad targeting; that is why people are scrambling.

In the end, Google Reader is gone, and those who wanted what it offered will just have to move on.

For those of us who run websites, the question is how to have people learn about and follow our work. RSS is a big part of that, and will likely continue to be… although I wouldn’t trust FeedBurner. That’s a Google RSS product too, after all.

Published on July 1, 2013

Lightweight Server Monitoring

Collectd Architecture Schematic

I recently began moving Gadget Wisdom and related sites to a new server, to lay the infrastructure for a major upgrade.

One of the major pushes was upgrading the monitoring features. Some of the software being used was no longer maintained, and replacements had to be found.

Nagios and Munin are two of the most popular infrastructure monitoring tools among IT specialists, but there are good reasons I opted for something more lightweight. There are dozens of monitoring tools, and choosing one is quite overwhelming. These are two I have been happy with so far.

One of the first things I installed was collectd. Collectd is a daemon that collects and stores performance data. It is plugin-based, which means it can feed a variety of different pieces of software. So, it is incredibly extensible, which leaves room for future data gathering and future outputs. It is also incredibly lightweight, which has its advantages.
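A minimal sketch of what that plugin-based setup looks like (plugin names are standard collectd plugins; the interval and data directory are assumptions, not my actual values):

```
# /etc/collectd.conf -- gather basic system metrics every 60 seconds.
Interval 60

LoadPlugin cpu
LoadPlugin memory
LoadPlugin load
LoadPlugin interface
LoadPlugin rrdtool

# Write the collected data as RRD files, which a front-end can graph.
<Plugin rrdtool>
    DataDir "/var/lib/collectd/rrd"
</Plugin>
```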

To turn the data into graphs, I’m using a simple front-end called Jarmon for now. Jarmon downloads the RRD files generated by collectd and renders them on the client side.

The second is a monitoring tool called monit. Monit watches various services to ensure they are up, and can take action if they go down, such as sending an alert, restarting the service, or executing a script. One of the most fun things about having alerts is reading them… and in many cases, knowing I don’t have to do anything, because I told monit to do it for me.
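A sketch of a monit check for a web server (the pidfile and init script paths vary by distro; these are assumptions, not my actual configuration):

```
# /etc/monit.d/nginx -- restart nginx if it stops answering, alert if it flaps.
check process nginx with pidfile /var/run/nginx.pid
    start program = "/etc/init.d/nginx start"
    stop program  = "/etc/init.d/nginx stop"
    if failed host 127.0.0.1 port 80 protocol http then restart
    if 5 restarts within 5 cycles then alert
```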

There will be more to come on this, but what do you use in similar situations?

Published on June 26, 2013

Trimming Your SSD

Kingston SSD Ready for Installation

Nearly three years ago, I wrote an article on optimizing for SSDs under Linux. Recently, I decided to revisit the issue after reading a recent blog post.

The recommendation at the time was to enable TRIM support by mounting the drive with the discard option. The first question is: if this is such a good idea, why isn’t it enabled by default? Why do you have to add it to your mount options, as below?

/dev/sda1  /  ext4  defaults,discard  0  1

It turns out that enabling the discard option does carry a performance hit on deletes. So, how do you keep your SSD trimmed and avoid a costly performance penalty?

It turns out you can trim manually using the fstrim command, and set up a cron job to run it once a day. The command takes only one argument by default: the mountpoint of the partition.
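Something along these lines, as a daily cron script (the script path is an assumption; fstrim must run as root, and the mountpoint should be a filesystem mounted without the discard option):

```sh
#!/bin/sh
# /etc/cron.daily/fstrim -- trim the root filesystem once a day.
# -v reports how much was trimmed; add more mountpoints as needed.
fstrim -v /
```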

Seems like something worth thinking about. However, on the majority of the systems I run SSDs on, the solid state drive acts as an OS drive. Therefore, the number of deletes is minimal compared to writes.

In the end, enabling TRIM on your drive ensures the best wear-leveling and performance, but there is a cost. For some systems, it is easier to mount with the discard option; for others, to run fstrim.

Published on May 13, 2013

Thinking about RAID vs Backup

Six hard disk drives with cases opened showing...

The cost of storage hit a low the last time I was due for a storage upgrade. Then prices shot through the roof after flooding in Thailand closed factories.

This shut down all of my hard drive purchases for over two years. When I emerged from my cocoon, Samsung was gone as a hard drive manufacturer… and I had bought many Samsung OEM hard drives.

The purpose of RAID in a redundant system is to protect against hardware failure. There are different levels of RAID for this: RAID 1 is a straight mirror, while RAID 5 and 6 require a minimum of three and four drives respectively.

RAID is important if you care about uptime. If you can afford to be down for a bit, backups are a better choice.

What is being stored, in this case, falls into several categories: video, music, documents, and configuration files. There is no point in storing complete drive images; the OS can be reinstalled, and it will probably run better and cleaner after it is. The OS drive in every system I’ve built or refurbished in the last two years is an SSD, which is common practice nowadays.

I had been mulling this over after reading an article on another hardware refresh by Adam Williamson. He hadn’t refreshed in seven and a half years and used a separate NAS and server. So, why did I refresh after only two and a half years? Partly, it was due to mistakes.

I’d been using WD Green drives, which have several limitations: they park the head after only 8 seconds of inactivity, which increases the load cycle count. The WD Red drive is designed for 24/7 operation in network-attached storage and carries a longer warranty; I now have two 3TB Reds. The only other alternative in WD’s stable was the Black, their performance drive. It might be time to consider Seagate, the main competitor, as well.

Hard drive warranties continue to shrink: from five years, down to three, and down to two. So there is less protection from the manufacturer and less inclination to create quality products. That is why we were buying OEM over consumer drives over the last few years.

Back to the subject at hand… why not RAID? It is simply a matter of cost vs. benefit. This is terabytes of video data, mostly a DVD archive I intend to create by backing up my DVD collection to MKV. If it were lost, the original copies aren’t going anywhere. More importantly, cloud backup of that much data is impractical.

Using Amazon S3, for example, at a rate of 9.5 cents a GB, that is just under $100 a month per TB. Amazon Glacier, their long-term backup option, is 1 cent a GB, or roughly $10 a TB. But once you take video out of the equation, or sharply reduce it, budgeting $5 a month for important data is reasonable, and still gets you a lot of storage options to work with.
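The arithmetic above is easy to check (rates as quoted at the time; 1TB taken as 1000GB for simplicity):

```python
# Back-of-the-envelope monthly storage cost at a given per-GB rate.
def monthly_cost_per_tb(rate_per_gb):
    """Monthly cost in USD for one terabyte (1000 GB) of storage."""
    return rate_per_gb * 1000

s3 = monthly_cost_per_tb(0.095)      # S3 standard: 9.5 cents per GB
glacier = monthly_cost_per_tb(0.01)  # Glacier: 1 cent per GB

print(f"S3:      ${s3:.2f}/TB/month")      # $95.00 -- "just under $100"
print(f"Glacier: ${glacier:.2f}/TB/month") # $10.00
```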

So, to ensure redundancy, there is a second drive in the system, and backups will be made to it. From there, backups of everything but the video store will be sent up to the cloud. As I’ve mostly given up buying DVDs (due to Blu-ray), the collection should be fairly static.

Back to Adam Williamson: he had a great idea of having the other computers on the network back up their data to the server, isolated from one another by giving each machine a separate user account on the server. I’m not quite there yet, but it sounds good. I also have plans to download data from my cloud service providers (Google, Dropbox, etc.) and maintain a local backup, but that is a longer-term project. I’m reasonably certain that, in the interim, Google has a better backup system than I do.

What about off-site, then? I still have the old 1TB Green drives. They can be run through diagnostics, loaded up as a backup, and sent off to a relative’s house… I’ve added a hard drive dock on an eSATA port to support this.

So in the end, RAID wasn’t necessary for me, but some redundancy was. It may be necessary for you. Comments?

More to come…

Published on April 22, 2013
