
Gadget Wisdom

Category: Security & Networking

0 Responses

Why I Switched to POE Cameras and Frigate for Home Surveillance

Illustration of a person monitoring POE security cameras using Frigate NVR software on a computer, with outdoor cameras mounted on a house and detection alerts shown on screen.


During my recent renovation, I added two additional cameras to my new space, at the two points of ingress. This was something of a departure: these were the first Power over Ethernet (POE) cameras I’ve had installed, since I had someone on-site who could run the cables cleanly.

I’ve tried a variety of camera ecosystems, both for myself and for others. Many of them push you toward subscription-based cloud services, where features like video history, motion detection, and notifications only work fully if you pay monthly. Some barely provide any features without payment, despite the fact that you bought the device. Even when they offer local options, it is often storage on a microSD card in the camera, which is clunky, slow, and unreliable.

That is why I decided to go with a network video recorder (NVR): a server that takes the feeds from all the cameras and stores the recordings. You can buy commercial NVRs to install in your house, including some that integrate with the specific cameras you bought, but I wanted a solution that aligned with my philosophy of self-hosted, privacy-first smart home tech.

So I chose Frigate.

Why Frigate?

Frigate is an open-source NVR built around real-time object detection, all running on local hardware. It is deeply customizable and can be tuned to record only what matters to you – people, cars, or animals – depending on the zones and filters you configure.

For example, one of my outdoor cameras flagged every pedestrian across the street, well outside the area I am concerned about. By narrowing the zone to only my property, I dramatically reduced noise in footage and alerts.
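In Frigate, that tuning is a matter of zones and object filters in the config file. A minimal sketch, assuming a camera named front_door – the zone name and polygon coordinates are placeholders (in practice you draw the zone in Frigate’s UI):

```yaml
cameras:
  front_door:
    zones:
      front_yard:
        # polygon covering only my property (placeholder coordinates)
        coordinates: 0.1,0.55,0.9,0.55,0.9,1,0.1,1
    objects:
      track:
        # record and alert only on these object types
        - person
        - car
```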

Frigate recently added:

  • Facial recognition
  • License plate recognition
  • View-only user roles for shared access

Everything is processed locally, with no cloud dependency.

Frigate+: Smarter Detection, Optional Subscription

To improve detection, you can also subscribe to Frigate+, a $50/year subscription that offers better-trained detection models. These are trained on examples submitted by other Frigate users; you can participate by voluntarily submitting false positives and other information. If you cancel, you keep the downloaded models – you just stop getting updates.

This helps support the developers and doesn’t lock you into a traditional subscription model.

Frigate Notifications

One gap in the core Frigate setup is the lack of robust built-in multi-platform notifications. That’s where another piece of software, Frigate-Notify, comes in. It offers all of the notification options I might want:

  • Rich notifications
  • Cross-platform delivery including mobile, desktop, and messaging apps
  • Fully customizable

Next Steps For My Frigate NVR

Inspired by how well the new system is performing, I plan to replace more of my older Wi-Fi cameras with wired POE models. Wired cameras streaming directly to my NVR reduce lag, improve reliability, and give me full control over recording, storage, and alerts – without the cloud.

If you’re tired of cloud lock-in and unreliable Wi-Fi cams, and you want a privacy-respecting, smarter surveillance system, Frigate + POE may be the combo you’ve been looking for.


Published on September 8, 2025
Full Post
0 Responses

Monitoring with Uptime Kuma

Earlier today, the server that hosts Gadget Wisdom was down for ten minutes. This happens every so often; the server is the oldest one I have and is due for replacement one of these days. But one of the problems I have is that local monitoring is…well, local. You shouldn’t run your monitoring solely on the server you are monitoring. You need something external as well.

So, enter Uptime Kuma. Uptime Kuma came onto the scene two years ago as a self-hosted alternative to services like UptimeRobot (which does offer a free tier). There are other self-hosted products as well, but I was able to get this one running in a short period of time, it provides exactly what I want, and it has an active development team.
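Getting it running really was quick: it ships as a single container. A sketch of the Docker Compose file, following the pattern in the project’s README (the port and volume name are the usual defaults – adjust to taste):

```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: always
    ports:
      - "3001:3001"             # web UI
    volumes:
      - uptime-kuma:/app/data   # persists monitors, history, settings
volumes:
  uptime-kuma:
```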

So, what features does Uptime Kuma offer?

  • Dozens of notification methods to configure: email, messaging, SMS, etc.
  • HTTP, ping, and server-specific monitoring
  • Useful stats and graphs
  • Optional public status pages

So now I’m waiting for my next downtime to see exactly how this works in production, but just having the ability to remotely monitor and get notifications adds another tool to my monitoring arsenal.

Published on December 26, 2023
Full Post
0 Responses

Multiple Vulnerabilities found in Wink and Insteon Systems

Rapid7 reported that it detected major vulnerabilities in the Wink and Insteon smart hub systems.

This is of particular concern to me as a Wink hub user. The Wink Android app was storing sensitive information insecurely, which has now been patched.

The other vulnerability is apparently being fixed: the Wink API does not revoke authentication tokens when you log out, and new tokens do not invalidate old ones.

I’ve long been concerned about the long-term health of Wink. It has been through two different owners, and it is hard to predict where it might go. And hubs in general might go away in favor of Wi-Fi or Bluetooth as a standard over things like Zigbee and Z-Wave.

But the fact that they fixed these issues at least suggests they plan to move forward.

Published on September 28, 2017
Full Post
0 Responses

Mozilla-supported Let’s Encrypt goes out of Beta

Mozilla-supported Let’s Encrypt goes out of Beta (The Mozilla Blog)

In 2014, Mozilla teamed up with Akamai, Cisco, the Electronic Frontier Foundation, IdenTrust, and the University of Michigan to found Let’s Encrypt in order to …

Published on April 17, 2016
Full Post
1 Response

Let’s Build a Server: Part 2 – Monitoring

Monit

Last time, in Part 1, we discussed setting up a firewall and an email relay so notifications from the firewall could get to us.

Now, in Part 2, we’re going to talk about more signal: server monitoring and alerting. Our primary monitoring software is Monit.

Monit has a single configuration file, but many distributions, including mine, set up an /etc/monit.d folder so you can split your Monit configuration across multiple files.

Once it is running, you can check its status by running
monit status
It will show the status of whatever it is monitoring. There is also an optional web component, if you want to check status in a web browser.
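The web component is switched on in the Monit configuration itself; a sketch following the pattern in Monit’s documentation (port 2812 is the conventional default, and the credentials are placeholders to change):

```
set httpd port 2812
    use address localhost   # listen only on localhost
    allow localhost         # accept only local connections
    allow admin:monit       # basic-auth user:password - change these
```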

What can you monitor?

Monit can monitor any program and restart it if it crashes.
check process nginx with pidfile /var/run/nginx.pid
    start program = "/bin/systemctl start nginx.service"
    stop program = "/bin/systemctl stop nginx.service"
    if failed host 127.0.0.1 port 80 protocol http then restart
    if 5 restarts within 5 cycles then timeout

As you can see, the simple scripting language allows you not only to restart services and execute programs, but also to alert the user.

Not only can it make sure something is running, but it can monitor its resource usage, as well as system resource usage. It can monitor processes, network connections, programs and scripts, files, directories, etc.

An Alternative to Email Alerts

The default for an alert is to send an email, but for bigger emergencies, a push notification to your phone is also useful.

Monit provides a simple instruction on how to set it up for Pushover. There is also the alternative of PushBullet.

Pushover costs $5 per platform (Android, iOS, desktop) to use on as many devices as you want, with a per-application limit of 7,500 messages per month. Pushbullet is, by comparison, free. The basic difference, as I see it, is that Pushbullet is geared more toward consumers, while Pushover was initially set up more for developers. They do have similar feature sets, though.

Here is Monit’s suggested Pushover script, which can be run instead of an email alert.

/usr/bin/curl -s \
  -F "token=your_mmonit_or_monit_app_token" \
  -F "user=your_pushover_net_user_token" \
  -F "message=[$MONIT_HOST] $MONIT_SERVICE - $MONIT_DESCRIPTION" \
  https://api.pushover.net/1/messages.json

Here is an alternative version for Pushbullet

curl -u <your_access_token_here>: -X POST https://api.pushbullet.com/v2/pushes \
  --header 'Content-Type: application/json' \
  --data-binary "{\"type\": \"note\", \"title\": \"$MONIT_HOST\", \"body\": \"$MONIT_SERVICE - $MONIT_DESCRIPTION\"}"
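To have Monit run one of these scripts instead of sending the default email, save it as a shell script and call it with exec from a service check. A sketch, assuming the script lives at the hypothetical path /usr/local/bin/push-alert.sh:

```
check process nginx with pidfile /var/run/nginx.pid
    start program = "/bin/systemctl start nginx.service"
    stop program = "/bin/systemctl stop nginx.service"
    # on failure, fire the push notification script instead of an email
    if failed host 127.0.0.1 port 80 protocol http
        then exec "/usr/local/bin/push-alert.sh"
```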

Conclusion

In all cases, monit allows you to monitor your system and take action based on a change in performance. The complexity of your rules is entirely up to you. But, if you give thought to their setup, you can not only be told when there is a server emergency, but the system can take action to fix it.

Published on December 7, 2014
Full Post
1 Response

Let’s Build a Server: Part 1 – The Firewall

Tux, the Linux penguin

Necessity is the mother of invention. It is once again time to upgrade the Gadget Wisdom servers. And, as I have committed to writing more here, I will be writing some articles on server construction.

Now, this will all be done using a Virtual Private Server, so the hardware is outside the scope of this series.

The first piece of software I usually install on network-accessible servers is ConfigServer Security & Firewall (CSF). This is a firewall with login/intrusion detection. Most Linux distributions come with some sort of firewall, but this set of scripts works with iptables to provide much stronger protection.

CSF provides scripting for a firewall and handles login-failure detection for a variety of stock services, as well as unsupported services via regular expressions.

There are a lot of options in the CSF configuration file…read through the description of each, decide which ports you want open, and deploy. CSF will automatically update itself when there is a new version.
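A sketch of the lines in /etc/csf/csf.conf that do most of the work – the port lists here are illustrative, so open only what you actually run:

```
# Turn off testing mode once you have confirmed you are not locked out
TESTING = "0"

# Allowed incoming and outgoing TCP ports (SSH, SMTP, HTTP, HTTPS)
TCP_IN = "22,80,443"
TCP_OUT = "22,25,80,443"

# Login Failure Daemon: block an IP after this many failed logins
LF_TRIGGER = "5"
```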

In order to ensure notifications from the firewall and other administrative notifications are read, you will likely wish to arrange for the ability to send mail. However, you may not want the trouble of setting up a full mail server. The simpler solution is to set up an SMTP relay.

The example below configures Postfix, available with many Linux distributions, to relay through a Gmail account. Add the following lines to the bottom of your /etc/postfix/main.cf:

relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_sasl_tls_security_options = noanonymous

Create /etc/postfix/sasl_passwd with your Gmail credentials.

[smtp.gmail.com]:587 user@gmail.com:PASSWORD

Then secure the file and build the hash table Postfix reads.

chmod 640 /etc/postfix/sasl_passwd*
postmap /etc/postfix/sasl_passwd

Now any external email will route through your Gmail account. We have protected our server from a variety of attacks and ensured that, if there is a problem, we’ll be notified of it.

There are alternatives to Gmail. For example, Mandrill offers 12,000 emails per month for free and 20 cents per thousand after that, and Sendgrid offers 200 free emails, with 10 cents per thousand after that.

You can use Mandrill or Sendgrid instead of Gmail by using the below credentials.

[smtp.mandrillapp.com]:587 USERNAME:API_KEY
[smtp.sendgrid.net]:587 USERNAME:PASSWORD


Published on November 28, 2014
Full Post
0 Responses

Let’s Encrypt – A New Certificate Authority

Diagram of a public key infrastructure


Security Expert Bruce Schneier recently pointed to a joint project to create a new certificate authority that lets everyone get basic certificates for their domain through a simple process.


The idea is to be not only free, but also automatic, secure, transparent, open, and cooperative.

The service, called Let’s Encrypt, is set to launch in the summer of 2015.

The reason for the delay is that the service wants to leverage new standards, most notably ACME (Automated Certificate Management Environment). The idea is that the certificate authority communicates with the web server, and the two work together to prove ownership and download the certificate, as well as handle configuration and renewal.

Now, considering how much of a chore certificates are right now, the standard, even outside of Let’s Encrypt, would save a lot of anguish. Once the server has proven that it is the server of record for that domain, it can handle everything.

There’s more to it than that, and certainly there are still risks, but we’ll see what they come up with by the time the ACME standard is finalized.


Published on November 23, 2014
Full Post
0 Responses

KeyCDN: A Review

In a continuing effort to get the best combination of services and pricing, I often review my choice of provider. While it is a pain to migrate services, things do change over time.

As a small site, I want the benefits of a CDN, but the monthly cost of one is not within my budget, which is why I explore pay-per-use CDNs. Metering means that I can prepay for a few GB of traffic and it can last me a while. I wrote about this back in 2013, when I was talking about how new providers intrigued me.

After some problems with other incumbents, I went looking for new options and came upon KeyCDN, a Swiss CDN that is well regarded so far. I’ve been using them for about six months now. They charge $0.04 per GB for the first 10TB, and in the last six months they’ve continued to add features.

Here are a few of their features:

  • Pay-per-use pricing – so no minimum monthly costs
  • A free trial (although most of these services have that)
  • Unlimited zones that can be aliased as subdomains on your site
  • SSL – shared or custom. Shared SSL means using their certificate. If you want to alias the CDN zone as a subdomain of your site, you need to buy a certificate from them or supply your own.
  • SPDY support
  • Push zones (if you want them to store the content, not just pull and cache it from your site)
    • Cost is $0.90/GB per month.
    • They added rsync support after I signed up, in addition to FTP, allowing you to sync your static site to them if you want.
  • Export of log files to your own syslog server
  • An API if you want to control your zones

They keep adding more features…or I keep noticing them. Until I started writing this, I didn’t know they had added syslog support. Which brings me to my only real criticism of KeyCDN: the last time I looked intently, I don’t think I saw the feature, so they certainly could be better at conveying new features to me.

Whenever I’ve needed help, they have been prompt in their response and have worked with me.

But there is so much here I don’t yet take advantage of. This is a basic review, but I may go into more detail in the future. I’d like to play with their API, as they have a PHP library on GitHub and I’ve been working on my PHP skills as part of maintaining this site, which runs WordPress (written in PHP).

So, give them a try…if you do, try my links…I wouldn’t mind the extra few credits in my account. If you have any questions, please send a message or leave a comment.

Responsible Disclosure: I am a customer of KeyCDN, and I am using my affiliate link, which provides me with extra CDN credit. However, my decision to finally get around to reviewing them is due to the responsiveness of their support and their feature set.


Published on November 22, 2014
Full Post
0 Responses

Reconsidering Powerline Networking

Years ago, I tried powerline networking, and it never quite worked for me. But a recent dead spot in my residence caused me to give it a shot again, since wiring the two locations where I needed network access would be an involved process.

Powerline networking adapters are simple square boxes with a network port that use your electrical wiring as the link. You plug your network into one adapter, and it comes out at the other. Some of these adapters also act as wireless access points.

It worked surprisingly well, although I was only able to get 2 Mbps…but this was plugged into an extension cord. The adapters tend to have degraded performance if not plugged directly into the wall.

The adapters I used were an inexpensive set of TP-Link AV200 adapters I got for $25, but there is a faster standard, AV500. On the far end, I hooked up a wireless access point, and I have the option of adding a switch in order to wire in devices.

So, if you haven’t considered powerline networking of late…you may wish to. A wire is still better, faster, and more reliable, but it is not always an option.

Published on November 9, 2014
Full Post
0 Responses

Nginx FastCGI Caching

English: Nginx Logo Español: Logo de Nginx (Photo credit: Wikipedia)

Over the last few months, I’ve been doing a lot of work trying to speed up the sites on my server…perhaps to the detriment of this site, Gadget Wisdom.

Gadget Wisdom runs on WordPress on an Nginx web server. To run PHP on an Nginx server, you need to pass requests to a FastCGI server (typically PHP-FPM).

Nginx supports caching those responses. So WordPress generates a page dynamically, Nginx caches the response, and it can serve the cached version on subsequent requests. Since the resource-intensive part is the application, and most visitors don’t need a freshly generated page, this works for the majority of cases.

For the last few years, refreshing the cache has been done by sending a request with a specific header, which tells the system to generate the page again and store the result. A recent upgrade added the optional Nginx cache purge module, which allows purging a specific page using a simple URL scheme.
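To make the header option concrete, here is a minimal sketch of the caching directives involved – the zone name WORDPRESS, the socket path, and the X-Refresh header name are illustrative choices, not fixed names:

```nginx
# Cache storage and key, defined at the http level
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WORDPRESS:100m inactive=7d;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php-fpm.sock;
        include fastcgi_params;

        fastcgi_cache WORDPRESS;
        fastcgi_cache_valid 200 24h;

        # A request carrying the secret header skips the cache lookup;
        # the freshly generated response is stored in place of the old one.
        fastcgi_cache_bypass $http_x_refresh;
    }
}
```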

The net difference in effect is that the purge function removes the cached version, to be regenerated on the next load, while the header option generates a new version of the page and stores it in the cache. The disadvantage of the purge module is that you have to custom-compile Nginx, which means manually keeping up with security bugfixes.

Either way, once you decide on a methodology, you also have to choose cache validity. For example, many people opt for a microcache solution, where the cache time is very short, measured in seconds. This means that people are only served ‘stale’ pages while the site is being hit hard.

The alternative is a very long cache time…measured in hours/days. As long as you have a cache refresh function available, such as the options mentioned above so you can remove the stale pages on demand, you can keep the pages around for longer periods of time.

My cache validity time continues to rise. You also have browser caching: right now, images are instructed to be cached by your browser for days. I don’t usually change my images much after posting…or at all.

So, this post hopefully covered the basic decision-making process for FastCGI caching on Nginx. In Part 2 (if I get to it), we’ll cover some of the settings that enable this, as well as some of the considerations you have to make while coding it.

Published on September 16, 2014
Full Post
