
Gadget Wisdom


Review: Blu Dash 4.5

I don’t normally do cell phone reviews. But when I was looking, the Blu line of products seemed to be underreviewed: many of the news sources that mention Blu appear to be merely quoting its press releases, and there are few reviews of the actual products.

Blu, a Miami-based company, has been in business since 2009 and makes a variety of phones whose specifications are on par with those of better-known manufacturers, but whose prices are well below theirs. The company sells phones primarily in Latin America. In the U.S., you’ll find them at online retailers such as Amazon and Newegg.

The Blu Dash 4.5 runs near-stock Android 4.2 Jelly Bean. It has 4GB of internal storage and 512MB of RAM. As the name suggests, it has a 4.5″ TFT LCD screen with 480×854 resolution. The processor is a quad-core 1.2GHz MediaTek. It comes with a screen protector, charger, and case.

It comes in two variants, 850/1900MHz and 850/2100MHz, both with HSPA+ at 21Mbps.

So, those are the specs. The design is fairly standard, with three hardware buttons: Back, Menu, and Home. The oddest thing about it is that the charging port is on the top, next to the headphone jack.

For a budget phone, it is great. It isn’t as powerful as some, and that shows, but for most functions it is more than adequate. As I write this, I’ve only used it over a weekend, and I plan to give it a few more days.

So far, it has performed all of the functions I would normally use a phone for. I rarely watch video on my phone, but aside from some audio sync issues at resolutions higher than the screen’s, it handled everything.

Each year, phones get bigger, screen resolutions get higher, processors get faster, and so on. There are phones with higher specifications, even ones from Blu, but from the looks of it, these are value products.

There is one more aspect of the phone: reception. This is the hardest to evaluate. I’m primarily a Verizon customer, so I bought a T-Mobile prepaid SIM for testing. T-Mobile uses the 1900MHz band for its GSM service and is moving its HSPA+ service to that frequency as well; its HSPA+ network is mostly on the 1700MHz band, which the phone doesn’t support. But since New York City, where I live, has mostly been migrated, I’ve gotten consistent service at 3G speeds. Lacking another phone to compare it to, though, I may not be able to evaluate this well.

Will follow up after more time with the phone.

 

Published on October 20, 2013

The Ideal Smartwatch Is Discreet


I’ve been following devices like the Pebble watch, the Samsung Galaxy Gear, and so on.

None of these is exactly what I’m looking for, but the Pebble is probably the closest. This is why I’m hoping for a Pebble 2.0 or something similar, though Pebble as a company has had issues delivering its existing units.

So, what do I want in a watch? I was thinking about this. It has to look like a watch. If you want to bridge the divide between a digital watch and a smarter watch, it should be the same size as a normal watch. I don’t want people coming up to me and staring at my watch.

It should be simple. If I wanted a full-size device on my arm, I could get a sportband for my cell phone and wear it there. I like the idea of phone integration, but the purpose needs to be simplified.

Look at Ion Glasses, which are set to offer a pair of sunglasses with control buttons and a notification LED visible only to the wearer. They are meant to be discreet, so no one would know about the integration. That is what I want in a wearable device: it should blend into its environment.

I have a phone and a tablet; I don’t need another fully functional screen. The blocky square that the Pebble appears to be is closer, but judging only from pictures and video, it doesn’t look that impressive.

While writing this, I’m holding my actual watch for inspiration. Imagine a watch with a multicolored LED notification light, a vibrating alert, and a less bulky, blocky shape than the Pebble’s, with Bluetooth 4.0 LE by default.

I could picture a Pebble 2 doing that: alerting me to important notifications without my pulling out my phone, but not being something I watch continually.

What do you think?

Published on October 18, 2013

Why CDN.net Intrigues Me


CDN.net is the latest pay-per-use CDN I’ve been using. Pay-per-use is the best option if you want to manage your costs.

We were with CDN77 for a while, which offered a flat-rate $49/TB service, but the level of service was not one I was thrilled with. I felt as if they didn’t understand my questions and weren’t interested in working with me.

CDN.net is actually one of two fairly new CDNs built on the OnApp federated CDN, which essentially uses spare capacity on various servers to offer CDN services. The other is CDNify, which charges a flat-rate $49/TB fee.

CDN.net is more of a marketplace: you pick your CDN locations from the available options and customize the package. When I started with them, you needed a free trial to see their pricing, but they’ve recently changed that. Their pricing varies by location, so you can route your data through Salt Lake City for less than 2 cents a GB, or through Chelyabinsk for a dollar a GB.

Your rate is locked in until you update your package, which means that if the price goes down in one location and up in another, you have a hard decision to make. I’m sorry to any Chelyabinsk readers, but you’ll have to wait an extra second or two to be served; I’m not paying that rate.

I like having this choice, but sometimes the results are surprising compared to where statistics report my users are. Analytics suggests that many of my users are in the Eastern United States, yet I’m running more data through Dallas than through New York. But it is never that simple.

Yet no one is writing about CDN.net or CDNify. There are no reviews that I could find, only early startup announcements.

So, I set up a test of both. I had no problem with the performance of either, but then, they use the same backend. CDNify had some reporting issues in their free trial, but they fixed those quickly after I reported them.

CDN.net, which I ultimately settled on, has answered all of my questions, although they do not seem to respond on weekends. One of their people even invited me to chat.

CDNs are necessary to reduce load on your own server and to speed the loading of static assets. CDN.net is still new, and there are more things I hope they do in the future. They also believe they can offer lower prices as their user base grows, since the cost would be spread out.

In terms of services, I would like to see improvements to their reporting mechanisms, but they have made changes, so I’ll see what happens.

You can go for a free trial here.

Published on July 11, 2013

RIP Google Reader

Today, July 1st, Google Reader is officially gone.


Om Malik of GigaOm asked the question: Google killed Reader instead of updating it, so if that was such a wise decision, why are so many companies scrambling to get into this space?

The truth is, Google Reader was based on the inbox model: you’d see everything. Nowadays, there is too much information and there are too many sites; my feed reader is rarely empty. But the same can be said of my Twitter stream.

Offering a more curated experience is the business these companies are getting into. Build a better experience, and the signals it generates allow for better ad targeting; that is why people are scrambling.

In the end, Google Reader is gone, and those who wanted what it offered will just have to move on.

For those of us who run websites, the question is how to have people learn about and follow our work. RSS is a big part of that and will likely continue to be, although I wouldn’t trust FeedBurner; that’s a Google RSS product too, after all.

Published on July 1, 2013

Lightweight Server Monitoring


I recently began moving Gadget Wisdom and related sites to a new server. The purpose was to lay the infrastructure for a major upgrade.

One of the major pushes was upgrading monitoring. Some of the software being used was no longer maintained, and replacements had to be found.

Nagios and Munin are two of the most popular tools IT specialists use for infrastructure monitoring, but there are good reasons I opted for something more lightweight. There are dozens of monitoring tools, and choosing one is quite overwhelming. These are two that I have been happy with so far.

The first one I installed is collectd, a daemon that collects and stores performance data. It is plugin-based, which means it can pipe its data into a variety of other software, so it is incredibly extensible, leaving room for future data gathering and output. It is also incredibly lightweight, which has its advantages.
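To give a sense of how the plugins fit together, here is a minimal sketch of a collectd.conf; the plugin selection and data directory are illustrative examples, not my exact setup:

# /etc/collectd/collectd.conf -- minimal sketch
Interval 60                    # gather readings every 60 seconds

LoadPlugin cpu                 # input: CPU usage
LoadPlugin memory              # input: memory usage
LoadPlugin df                  # input: free disk space
LoadPlugin rrdtool             # output: write RRD files for graphing

<Plugin rrdtool>
    DataDir "/var/lib/collectd/rrd"
</Plugin>

Inputs and outputs are both just plugins, which is what makes it easy to swap out the graphing front-end later without touching the collection side.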

To turn the data into graphs, I’m using a simple front-end called Jarmon for now. Jarmon downloads the RRD files generated by collectd and renders them on the client side.

The second is a monitoring tool called monit. Monit watches various services to ensure they are up, and it can take action if they go down, such as sending an alert, restarting the service, or executing a script. One of the most fun things about having alerts is reading them and, in many cases, knowing I don’t have to do anything, because I told monit to do it for me.
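As a sketch of the kind of rule I mean (the service, paths, and email address are examples, not my actual configuration), a monitrc fragment might look like this:

# /etc/monitrc fragment -- example rules
set daemon 120                      # check services every 2 minutes
set alert admin@example.com         # where alert mail goes

check process nginx with pidfile /var/run/nginx.pid
    start program = "/usr/sbin/service nginx start"
    stop program  = "/usr/sbin/service nginx stop"
    if failed port 80 protocol http then restart

The last line is where the fun happens: if the HTTP check fails, monit restarts the service on its own and mails me about it afterward.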

There will be more to come on this, but what do you use in similar situations?

Published on June 26, 2013

Trimming Your SSD

Nearly three years ago, I wrote an article on optimizing for SSDs under Linux. Recently, I decided to revisit the issue after reading a recent blog post.

The recommendation at the time was to enable TRIM support by mounting the drive with the discard option. The first question is: if this is such a good idea, why isn’t it enabled by default? Why do you have to add it to your mount options, like below?

/dev/sda1 / ext4 discard,defaults 0 1

It turns out that enabling the discard option carries a performance hit on deletes. So, how do you keep your SSD trimmed and avoid a costly performance penalty?

You can trim manually using the fstrim command and set up a cron job to run it once a day. The command takes only one argument by default: the mountpoint of the partition.
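For example, a daily cron script could be as simple as the sketch below; the script path follows the common /etc/cron.daily convention, and the mountpoints are examples:

#!/bin/sh
# /etc/cron.daily/fstrim -- trim SSD-backed filesystems once a day
fstrim /
# add one fstrim line per additional SSD mountpoint, e.g. fstrim /home

This way you get the wear-leveling benefits of TRIM in one nightly batch, without paying the discard penalty on every delete.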

Seems like something worth thinking about. However, on the majority of the systems I run SSDs on, the solid-state drive acts as an OS drive, so the number of deletes is minimal compared to the number of writes.

In the end, enabling TRIM on your drive ensures the best wear-leveling and performance, but there is a cost. For some systems it is simply easier to mount with the discard option; for others, to run fstrim.

Published on May 13, 2013

Thinking about RAID vs Backup


The cost of storage hit a low the last time I was due for a storage upgrade. Then prices shot through the roof after flooding in Thailand closed factories.

That shut down all of my hard drive purchases for over two years. When I emerged from my cocoon, Samsung was gone as a hard drive manufacturer, and I had bought many Samsung OEM hard drives.

The purpose of RAID in a redundant system is to protect against hardware failure. There are different RAID levels for this: RAID 1 is a straight mirror, while RAID 5 and RAID 6 require a minimum of three and four drives, respectively.
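For reference, setting up a simple RAID 1 mirror under Linux is a one-liner with mdadm; the device names below are examples, so substitute your own partitions:

# Create a two-disk RAID 1 mirror as /dev/md0 (sdb1/sdc1 are example devices)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1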

RAID is important if you care about uptime. If you can afford to be down for a bit, backups are a better choice.

What is being stored, in this case, falls into several categories: video, music, documents, and configuration files. There is no point in storing complete drive images; the OS can be reinstalled, and it will probably run better and cleaner after it is. The OS drive on all of the systems I’ve built or refurbished in the last two years is an SSD, which is common practice nowadays.

I had been mulling this over after reading an article on another hardware refresh by Adam Williamson. He hadn’t refreshed in seven and a half years and used a separate NAS and server. So why am I refreshing after only two and a half years? Partly, it was due to mistakes.

I’d been using WD Green drives, which have several limitations. They park the head after only eight seconds of inactivity, which drives up the load-cycle count. The WD Red is designed for 24/7 operation in network-attached storage and carries a longer warranty, and I now have two 3TB Reds. The only other alternative in WD’s stable was the Black, their performance drive. It might be time to consider Seagate, the main competitor, as well.

Hard drive warranties continue to shrink: from five years, down to three, and down to two. So there is less protection from the manufacturer and less incentive to create quality products. That is why we had been buying OEM over consumer drives for the last few years.

Back to the subject at hand: why not RAID? It is simply a matter of cost versus benefit. This is terabytes of video data, mostly a DVD archive I intend to create by backing up my DVD collection to MKV. If it were lost, the original copies aren’t going anywhere. More importantly, cloud backup is impractical.

Using Amazon S3, for example, at a rate of 9.5 cents a GB, works out to just under $100 a month per TB. Amazon Glacier, their long-term backup option, is 1 cent a GB, or roughly $10 a TB. But once you take video out of the equation, or sharply reduce it, budgeting $5 a month for important data is reasonable, and it still gets you a lot of storage options to work with.

So, to ensure redundancy, there is a second drive in the system, and backups will be done to it. From there, backups of everything but the video store will be sent up to the cloud. As I’ve mostly given up buying DVDs (due to Blu-ray), the collection should be fairly static.
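The local mirror and the cloud-bound subset can both be expressed as rsync jobs; the paths and the video directory name here are hypothetical stand-ins for my actual layout:

# Nightly mirror of the data drive to the second drive
rsync -a --delete /srv/data/ /mnt/backup/data/
# Staging set for the cloud: everything except the video archive
rsync -a --delete --exclude='video/' /srv/data/ /srv/cloud-staging/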

Back to Adam Williamson: he had a great idea of having the other computers on the network back up their data to the server, isolated from one another by giving each machine a separate user account on the server. I’m not quite there yet, but it sounds good. I have other plans to download data from my cloud service providers (Google, Dropbox, etc.) and maintain a local backup, but that is a longer-term project. I’m reasonably certain that, in the interim, Google has a better backup system than I do.

What about off-site, then? I still have the old 1TB Green drives. They can be run through diagnostics, loaded up as a backup, and sent off to a relative’s house; I’ve added a hard drive dock on an eSATA port to support this.
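Running those diagnostics can be as simple as a SMART self-test with smartctl; the device name below is an example:

smartctl -t long /dev/sdd    # start an extended self-test
smartctl -H /dev/sdd         # overall health verdict once the test completes
smartctl -a /dev/sdd         # full attribute dump, including the load-cycle count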

So in the end, RAID wasn’t necessary for me, but some redundancy was. It may be for you. Comments?

More to come…

Published on April 22, 2013

Chrome OS without Google


I have spent a good deal of time with Chrome OS and the Chromebook. The troubling thing about it is not that it is an operating system in a web browser; you know that going in. It is Google.

Recent events have made all of us a bit wary of Google services, even as we continue using them. When Google recently launched Google Keep, a decent service in its own right, no one trusted it. Before the Reader debacle, Google enthusiasts would have tried it; now, many are afraid to love a new service.

GigaOm picked up on a feature request for Chromium OS, the open-source version of Chrome OS. The feature would establish an API allowing extensions to integrate with the file manager, so that cloud services other than Google Drive could act as ‘drives’ inside it.

Elsewhere in the Chromium project, there is a reference to using Chromium without a Google login, but so far, even on the open-source project, you need Google. One can ask: if you are going to do Chrome without Google, what’s the point? You might as well run a full-fledged OS.

Years ago, computer design was based on ‘dumb terminals’ and powerful servers. Today’s computers are significantly more powerful than those ‘powerful’ servers, but the truth is that many of us would now rather have identical experiences across multiple devices. Products like ownCloud prove that while we may wish for these things, we want more control and certainty about them. We want control.

I will be watching for more in this direction: free services versus paid services versus self-hosted services. I would welcome your thoughts on this, and on what areas you think are worth exploring.

 

Published on April 5, 2013

In Brief: HDHomeRun Prime Now Supports DLNA

SiliconDust has finally released DLNA support for its HDHomeRun Prime. As of last week, the feature is available for general release.


The limitation is that protected content can only be streamed to a player supporting DTCP-IP, which is a potential problem, as support is limited. SiliconDust does have plans to release an Android app; it has already released an app (link) for unencrypted channels, which goes for $2.99.

That is not much help right now, but if more devices come to support this, it could be an alternative to cable boxes, even for those of us unfortunate enough to be on Time Warner Cable. Imagine a single four-tuner Prime plus a series of inexpensive DLNA DTCP-IP devices as a replacement for $10-$15-per-month cable boxes; the entire setup could pay for itself in less than a year.

Published on April 5, 2013

In Brief: Bought Vinyl Records? Amazon Has You Covered


I really wish I had purchased my vinyl from Amazon, because today Amazon expanded its AutoRip service to vinyl records purchased since 1998.

AutoRip automatically adds MP3 versions of songs to your Amazon Cloud Player account. When Amazon launched AutoRip, it backdated the service to cover any eligible CDs purchased from Amazon, and it has now extended this to vinyl records. Interestingly tempting, but a marginal improvement.

Published on April 4, 2013
Full Post
