
Gadget Wisdom


Amazon MP3 Drops Linux Support, Adds DRM-Lite

DRM Is Killing Music

As we’ve previously mentioned, we’ve been redoing our music collection. Now, after weeks of part-time ripping, and some cleanup, it is time to upload the music to various sites, as a test.

Amazon has discontinued its music downloader for Linux and is no longer allowing Linux users to download the .amz file for use with a third-party application. The .amz file is used to download an entire album when it is purchased.

This occurred concurrently with the rollout of their new Cloud Player product, which included one other fun feature: DRM. Not on the file level. Amazon proudly sells DRM-free MP3s, but to upload or download albums, you need to authorize your device. You are allowed a maximum of 10 devices, and if you deauthorize a device, the slot will only reopen thirty days later. This includes Android devices. If you don’t authorize a device, you can only download albums one track at a time.

We wanted to see who else was pointing out that this is a DRM-like feature, and came across an interesting analysis of it by The Leisurely Historian. His theories are as follows (comments are ours):

  • Compromise negotiated with music labels over Cloud Player – This seems the most likely. But is increased monitoring of downloads and uploads really an unreasonable restriction? We made a complete backup of all of our Amazon purchases locally, and we can copy it anywhere (even back to Amazon Cloud Drive, ironically).
  • Back door to DRM – We agree that DRM on Kindle and Video has been good to Amazon. But they can’t reverse course on music. So, they’ve created this hybrid model to keep people in their ecosystem.
  • This is all about user tracking – This is quite possible. We can already see the tab…”You listened to ___; people who listened to ___ also bought ____.” This is the classic Amazon upsell method of getting you to buy more, based on offering you things they think you will like.
Basically, Amazon wants people to use Cloud Player and the Cloud Player apps. This keeps people inside their garden. It is bad enough that we are forced to boot up Windows, which we otherwise never use, to retrieve and upload our music…but there is no indication from Amazon that it plans to restore Linux support in the future.

Even if they do not want to write Linux apps, they could provide developers with an API to build support into their products, but third-party support is not what they want on any platform.

To be fair, the web player does work on Linux. And while we gave them $25 for a year of service, that does not mean we will next year…although it would cost more to store the same amount of data on Amazon S3 (although there is always Glacier). It is just disappointing.
Published on August 27, 2012

Amazon Glacier for the Home User

 

Backup Backup Backup - And Test Restores

Earlier this week, Amazon announced Glacier, long-term storage that costs one cent per gigabyte per month, compared to the 12 cents per gigabyte per month for S3. The basic difference is that Glacier can take between 3 and 5 hours to retrieve data, while S3 retrieval is instantaneous.

Amazon S3 is a durable, secure, simple, and fast storage service designed to make web-scale computing easier for developers. Use Amazon S3 if you need low latency or frequent access to your data. Use Amazon Glacier if low storage cost is paramount, your data is rarely retrieved, and data retrieval times of several hours are acceptable.

But, let’s go to the pricing. As a home user, we’re assuming you have less than 50TB.

  • Storage
    • Glacier – $0.01 per GB/month
    • S3 – $0.12 per GB/month
  • Data Transfer In – Free for both
  • Data Transfer Out – Glacier and S3 both use the same pricing.
    • 1st GB free
    • Next 10TB, $0.12 per GB
    • Next 40TB, $0.09 per GB
  • Requests
    • Glacier
      • Data retrievals are free; however, Glacier is designed with the expectation that retrievals are infrequent and unusual, and that data will be stored for extended periods of time. You can retrieve up to 5% of your average monthly storage (pro-rated daily) for free each month.
      • If you choose to retrieve more than this amount of data in a month, you are charged a retrieval fee starting at $0.01 per gigabyte. In addition, there is a pro-rated charge of $0.03 per gigabyte for items deleted before they are 90 days old. (A quick worked example follows this list.)
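
To make the difference concrete, here is a quick back-of-the-envelope calculation in Python (a sketch only; the 200GB collection size is just an example):

```python
# Rough monthly cost for an example 200GB backup set, using the 2012 pricing above.
GIGABYTES = 200

glacier_monthly = GIGABYTES * 0.01   # $2.00 per month on Glacier
s3_monthly = GIGABYTES * 0.12        # $24.00 per month on S3

# Glacier's free retrieval allowance is 5% of average monthly storage.
free_retrieval_gb = GIGABYTES * 0.05  # 10GB per month can be retrieved at no charge

print(f"Glacier: ${glacier_monthly:.2f}/mo, S3: ${s3_monthly:.2f}/mo, "
      f"free retrieval: {free_retrieval_gb:.0f}GB/mo")
```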

Amazon has promised an upcoming feature to export from S3 to Glacier based on data lifecycle policies. The details on how this will work aren’t fully available yet, but we could imagine offloading from S3 to Glacier based on age. So, you keep the last 1-2 months of data on S3, and the older backups on Glacier. It would allow you to save a good deal of money on backups.
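
As a purely speculative sketch of what such an age-based rule might look like, here it is expressed with the boto3 S3 client; the bucket name, prefix, and 60-day threshold are our own assumptions, not anything Amazon has announced:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical rule: anything under backups/ older than 60 days moves to Glacier.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-home-backups",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-backups",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 60, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```

Recent uploads would stay in S3 for fast access, while anything past the threshold quietly moves to cheaper storage.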

Not everyone, for that matter, needs instant availability…especially if you are keeping something that is infrequently modified. For example, the family photo album. You can keep your local backups, and for a cent per gigabyte per month, you get an off-site copy that you can access in an emergency.

What we’re missing is why so many reports indicated that retrieval is potentially costly; as far as we can tell, it is equivalent to S3, only slower.

But, what would you use this for? We’d like to hear your thoughts.

Published on August 25, 2012

Send Once, Read Everywhere – Kindle Everywhere

English: Amazon Kindle wordmark. (Photo credit: Wikipedia)

Earlier this week, Amazon launched the Send to Kindle Browser Extension for Chrome.

This adds to the Kindle ecosystem by allowing you to send web content to your Kindle, and choose to read it, or archive it for future use as part of the Kindle Personal Document Library.

Amazon continues to try to build out the Kindle ecosystem. There are several third-party applications with similar functionality, and multiple read-it-later services. But one of the brilliant moves made with the Kindle is that it is completely platform agnostic. Amazon may make the Kindle Fire and the e-ink line of Kindles, but it also has desktop apps, mobile apps, a browser-based reader, and continues to add functionality to the ecosystem completely independent of any hardware.

This is all part of the plan. People may complain about the walled gardens of certain closed systems, but if you can use your content on everything, then that is as close to open as you can get without actually being so.

Personal Documents was a good move on Amazon’s part, because it allowed reading of any document. People had been side-loading their own content anyway. Some even bought a Kindle and only acquired public-domain and free e-books. It makes the platform more valuable, which increases the tendency to buy from Amazon. Not only is your paid content there, but your personal content as well.

This is the sort of all-encompassing presence that turned Google into a verb for “to search online,” and iPad into a synonym for any tablet (much as we correct people when they say so). Kindle is becoming a synonym for e-book because of Amazon’s presence in the market.

Once again, Amazon is getting our business because they have made it so easy, and removed the restrictions…although we hold out hope for more liberal policies in certain areas, we have already accepted the compromises.

That said, Send to Kindle is one more option that allows us to put more into the system, and makes it easy to do so, offering more integration than existing third-party options. Thus, it is worth a look.

Published on August 17, 2012

Mandatory PSA: Secure Your Digital Life

The KeePass Password Safe icon.

Every tech pundit out there has been talking about the heartbreaking story of Mat Honan of Wired, how hackers used social engineering to gain access to one of his accounts, and the chain reaction that resulted.

One of Honan’s problems stemmed from how his accounts were daisy-chained together. The recovery email for one account led to another, account names on different networks were consistent, and so on. Figuring out how to mitigate this requires some thought. We have multiple email accounts, and it will probably require some diagramming and planning to sort everything out.

Then there are passwords. We admit to people all the time that we don’t even know half our passwords. We use a two-pronged attack on this. One is the open-source, multi-platform app KeePass. KeePass offers a password vault stored as a file, encrypted using a single Master Password. All of the passwords in it are generated by the program and impossible for most people to remember.
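
For the curious, the sort of credential a vault generates is easy to sketch in Python (a minimal illustration of the idea only; KeePass has its own, more configurable generator):

```python
import secrets
import string

# Character set similar to what a password manager might draw from.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 24) -> str:
    """Return a random password nobody is expected to memorize."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

if __name__ == "__main__":
    print(generate_password())
```

The point is not the specific code, but that each site gets a long, unique, machine-generated password that lives only in the vault.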

We also use Lastpass as a service. Lastpass has a plugin for every browser, offers one click login, form filling, and more. The basic service is free, but the premium version adds mobile support and additional features. We’re not using half of the options that it offers, even with the $12 a year we give them for premium.

But, as part of a redundant philosophy, you should have your most important passwords in multiple locations. Also, having passwords even you don’t know in a vault means you can easily change your credentials regularly for individual sites, should you choose to do so.

Two-factor authentication, although it could be a bit more user friendly, is enabled on all of our Google accounts and on Lastpass. This is not a challenge to hackers; there’s nothing very interesting there anyway.

In security, the mantra is trust no one. Try to walk the line between paranoia and rationality very carefully.

The second issue is backup. This is an area where we could be better. We have a backup plan that needs to be upgraded. We have various cloud backup solutions, and a few local ones. They need to be unified. We’ll get back to this in a future post, once we create a checklist.

But, for those of you out there, let’s cover a few basics. Periodically, extract your online data and store a copy somewhere, both locally and remotely, in addition to your cloud storage. Try a relative’s house. The likelihood of you and your relative both suffering calamities is probably slim. Remember that sending your data to a remote drive and deleting your original copy is an archive, not a backup.

Make a plan and automate as much as possible, because manual action is easy to fall behind on.

So, backup, secure your accounts, do some planning…we’ll be back with more. Consider yourself warned.

Published on August 12, 2012

Ripping Music Revisited

Bundle of CDs.

Amazon MP3 tech support is useless. Of course, as friendly as Amazon is, they have been consistently useless to us. From insisting our package would be delivered when UPS insisted it had been delayed, to the latest incident: asking us to repeatedly email log files to an address that replied that it did not accept incoming email…and it was apparently correct, as we’re still waiting.

At the beginning of the month, we wrote about Amazon upgrading Cloud Player. It prompted us to break out our music collection and try uploading it. Now, we’ve gone back and forth about cloud based music, having tried the now defunct mp3tunes, Moozone, Google Music, and Amazon Cloud Player.

We’ve also bought a lot of DRM-free MP3 files from Amazon during sales. Amazon is great at sales.

So, it made sense to give Amazon a shot, as they’ll store anything you buy from them for free. Their new model is $25 a year for more song space than we can use, and a good amount of general file storage. If only they had full Linux support and/or an API. But we hope this will come soon, at least for the Cloud Drive.

So, we uploaded the entire collection overnight. However, it was severely messed up in the metadata department. We spoke to Amazon, and they did not offer any suggestions. We’d had similar problems with Moozone and with Google Music.

Deciding the problem was likely with the decisions made during the initial ripping, we made the decision to rerip the entire collection. Armed with an old laptop and an external hard drive, we’ve been slowly making our way through the collection.

One of the issues came from the original decision to rip into the Ogg Vorbis format. Now, this was a freedom-based decision. We wanted to support open standards, and still do. But the limitations of this have come back to bite us many times, most notably that Moozone is the only cloud storage service that offers decent Ogg support without transcoding, and Moozone appears to be dead in terms of development.

That alone wouldn’t have caused us to go back and destroy all the old files. We’re not audiophile enough to try 320kbps or FLAC, but the original files did show some encoding glitches, and we will be encoding to 256kbps MP3 as opposed to the original quality of roughly 192kbps. The big issue, though, was metadata. Our metadata was in horrible shape, and it made it impossible to find things.

The hardest type of album to deal with, of which we have many, is one with multiple artists. ID3 tags initially did not have support for this; the Album Artist tag came later. In fact, until fairly recently our audio file tagging program on Linux, Easytag, didn’t support the Album Artist tag. It now does, which is most helpful. The other helpful tool was the free MusicBrainz Picard, available for multiple platforms, which tags files with metadata from the MusicBrainz database.
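
For anyone scripting their own cleanup, here is a minimal sketch of setting the Album Artist tag on an MP3 with the Python mutagen library (the file name is hypothetical, and this is just one way to do it, not what Easytag or Picard do internally):

```python
from mutagen.easyid3 import EasyID3

# Hypothetical compilation track that already carries an ID3 tag.
# In EasyID3, "albumartist" maps to the ID3 TPE2 frame.
tags = EasyID3("01 - Some Compilation Track.mp3")
tags["artist"] = ["Individual Artist"]
tags["albumartist"] = ["Various Artists"]  # keeps the whole album grouped together
tags.save()
```

With a consistent Album Artist, compilations stop scattering across dozens of single-track “albums” in players and cloud lockers.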

Even with this, being the musical mavericks we are, there are plenty of CDs in our collection with nonexistent or incomplete entries in these databases, which we’ll be going through manually. Also, this has inspired us to fill some gaps in the collection. Some of the files were encoded from audio cassettes, and we’ve been using Amazon Marketplace to cheaply purchase selected used CDs of that content, allowing high-quality copies to be made.

It may be time, however, to finally throw away the tapes and go completely digital. As we migrate further from analog media, it is odd that we have no intention of chucking the vinyl. What makes vinyl so nostalgic, and tapes…not?

So, the above chronicles the journey from freedom-loving Ogg user in search of a cloud to freedom-hating individual seeking to be locked into one platform…or not. The truth is, no matter what, we’re committed to a local copy. Cloud services are wonderful for keeping a backup copy, and for pulling music on the go when you have a hankering for something from your collection, but trusting any service 100% is foolish, and we all need to be more diligent about that.

Our ripping is being done with Linux-based tools. Audex is currently handling the ripping, Easytag the tag editing, and Picard filling in extra metadata. Amazon is providing cover art and data for manual correction, as needed, from its vast library of product pages. This is vastly different from last time. Although things have changed, and ripping music from CDs isn’t as popular as it once was in this digital age, we would be curious to see what people think, which is the purpose of this post.

How do you build a perfect digital music collection, what tools (Linux-based, preferably) do you use to build it, and what do you do with your collection?

For one, we’ve never created a single playlist. Playlists are the mix-tapes of the modern era. Perhaps it is time to find the mix tape we made in the 90s…Songs to Be Depressed By, and recreate it for the modern era. (Songs to Be Depressed By was actually made up of uplifting songs.)

Published on August 11, 2012

Amazon Cloud Player Updates – Matches Competitors

We’ve had a long road in cloud music. Back in December of last year, we compared the limitations of Google Music to those of Amazon MP3. At the time, Google won. The Amazon web player was not feature-filled, and the Google Music interface won for its ability to edit metadata, among other things.

But that has changed. Amazon has announced a revamped cloud offering. The most significant addition is one that iTunes already offers, and that Amazon now will as well: Amazon will scan music libraries and match the songs on users’ computers to its catalog. All matched songs, even music purchased elsewhere or ripped from CDs, will be made instantly available in Cloud Player as 256kbps audio.

Cloud Player now allows editing of metadata inside the player, a feature Google has had for some time.

Amazon Cloud Player is expanding to the Roku Box.

And, unlike previously, music purchased prior to the announcement of Amazon Cloud Player will now be available in your library. This was always a pet peeve, as Amazon knew the music was purchased…you bought it from them.

The new Cloud Player offers two options.

  • Cloud Player Free – Store all music purchased from Amazon, plus 250 imported songs.
  • Cloud Player Premium – Store up to 250,000 songs for $25 a year.

Amazon Cloud Player is now separate from Amazon Cloud Drive. Drive will now be used exclusively for file storage. 5GB is offered free, and 20GB is available for $10 per year.

In both cases, this is a compelling offer. However, there are some things missing: no Linux desktop client for either Drive or Player, and no API for third-party development, which we’ve mentioned before.

How does this compare to Google Music? Since we last visited it, Google Music now sells music itself, offers limited download functionality, and still has several limitations. Amazon is looking a lot more compelling.

Published on August 1, 2012

Time to Update the Tagline

English: A standard USB connector.

Sometimes, it is time to make a change. We’ve been writing stories for nearly six years now, under the tagline Guide to a Tech Savvy Lifestyle without Emptying Your Wallet.

Today, we’re changing things a bit. We’ll be changing the tagline to Living a Tech-Filled Lifestyle without Emptying Your Wallet. This may just be rearranging the deck chairs on the Titanic, but it represents what we want this blog to be.

In the past, we’ve covered news, Home Theater PCs, Linux, mobile technologies, security, downstreaming (downscaling your cable services), and more. There is so much more we’d like to discuss.

Published on July 25, 2012

Trying to Build a Better Web Server

We’ve been working hard here, behind the scenes, upgrading the Weneca Media servers. The Weneca Media Group is the umbrella term for all the sites we collectively host together.

The Weneca server runs on what is called a LEMP stack: Linux, Nginx, MySQL, and PHP. Nginx (pronounced “Engine-X”) is a lightweight web server that powers about 10% of the world’s websites, including WordPress.com and Netflix. Most of you have probably heard of Linux, the MySQL database server, and the PHP scripting language.

Nginx has just announced SPDY support in its development version, which should speed things up more. SPDY is a Google developed protocol to reduce web page load time, and is implemented in both Chrome and Firefox. It can work concurrently with HTTP, the common standard for web serving.

So, with this, we have a solid footing for implementing a lightweight framework to serve a lot of web pages. However, Nginx does not have built-in PHP support. You have to pass PHP off to be handled by another program. In this case, we are using PHP-FPM, which is now part of the official PHP package. PHP-FPM is a FastCGI process manager that creates a pool of processes to execute PHP scripts and return the results to the server.

To reduce load on this, Nginx supports FastCGI caching, so the results of any dynamically built page, with some deliberate exceptions, are cached for a few minutes and can be served as static files. The duration of the caching is variable. If you want essentially fresh content, you can microcache, caching for only a few seconds, so visitors see cached content only when your server is getting hammered. If your content is a bit less dynamic, you can increase that to minutes, or even hours.
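
The idea behind microcaching is simple enough to sketch in a few lines of Python (a conceptual illustration of the caching logic only, not Nginx’s actual implementation or configuration):

```python
import time

CACHE_TTL = 2.0  # seconds; "microcaching" keeps this window very short
_cache: dict[str, tuple[float, str]] = {}

def handle_request(path: str, render_page) -> str:
    """Serve a cached copy if it is younger than CACHE_TTL, else rebuild the page."""
    now = time.monotonic()
    cached = _cache.get(path)
    if cached and now - cached[0] < CACHE_TTL:
        return cached[1]           # burst of traffic: serve the cached page
    body = render_page(path)       # the expensive dynamic build (PHP, database, etc.)
    _cache[path] = (now, body)
    return body
```

Under a burst of identical requests, only the first one in each two-second window does the expensive work; everything else gets the cached copy, which is essentially what Nginx’s fastcgi_cache does for us at the server level.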

Now, we continue to tweak and improve the services. In future, we’ll be covering a little of the Nginx and PHP-FPM configuration settings you may find interesting.

 

 

Published on June 29, 2012

Mixing up the Workflow and Avoiding Overload

This is not the first time we’ve talked about our workflow. It has evolved over the years. Our workflow currently consists of a Read It Later service and a long-term bookmark archiving service.

When we started, the Read it Later service was Instapaper. We adopted Pinboard as the long-term archiving service. It is nice to know all the reference material we might use is stored for later use.

We later moved to Read It Later, which has recently rebranded as Pocket. The problem is we have 11,000+ bookmarks in Pinboard, and near 3000 in Pocket. Just reading all the stuff we need to learn to keep informed is a challenge.

[asa]B006GRYADO[/asa]

Clay Johnson’s The Information Diet discusses this problem, and makes a large number of suggestions on the subject. He refers to the idea as infoveganism. This is not to say we totally agree with Mr. Johnson, but we see his point that information overload is a problem.

Last year, Ars Technica posted an opinion piece titled, “Why keeping up with RSS is poisonous to productivity, sanity.” Perhaps RSS is, but so is the alternative: social media. Twitter, Facebook, Google Plus, etc. are all sources of often-repeated information. Who can keep up with all that?

The secret to a good workflow is to wisely choose your information flows, keep your inbox empty, and try to schedule spring cleaning for your accounts the same as anything else.

As part of that, we’re trying out ifttt.com, which lets you tie together parts of the Internet using If This Then That logic. For example, since the Pocket support in Pinboard doesn’t allow bookmarks to be added when an item is read, ifttt.com can add this functionality. There are dozens of suggested tie-ups between sites that otherwise would not be possible.
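
As a rough sketch of the kind of glue involved, here is our own illustration of adding an item to Pinboard via its posts/add endpoint with the Python requests library (the actual recipe lives on ifttt.com; the token and URLs below are placeholders):

```python
import requests

PINBOARD_TOKEN = "username:XXXXXXXXXXXXXXXXXXXX"  # placeholder API token

def archive_bookmark(url: str, title: str) -> bool:
    """Add an item (say, one just archived in Pocket) to the Pinboard archive."""
    resp = requests.get(
        "https://api.pinboard.in/v1/posts/add",
        params={
            "auth_token": PINBOARD_TOKEN,
            "url": url,
            "description": title,
            "toread": "no",
        },
    )
    return resp.ok

if __name__ == "__main__":
    archive_bookmark("https://example.com/article", "An article worth keeping")
```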

It is time to liquidate the Pocket account, get up to date, prune the Reader accounts again, prune the Twitter followers…

What is your workflow?

Published on May 4, 2012

Thinking about Dual Band Routers

RADIO FREQUENCY ENVIRONMENT AREA (Photo credit: elycefeliz)

Wireless-G has been the established standard for the last few years. We remember when we started playing with Wireless-B. It was only recently we jumped to Wireless-N. We didn’t need the speed jump.

With the increasing crowding of wireless spectrum, gigabit wired networks, where possible, are probably a good move.

We jumped this past month to dual-band Wireless-N because of the 5GHz frequency it offers. Wi-Fi usually operates at 2.4GHz, but N supports two different frequency ranges.

Very few devices take advantage of the 5GHz band, which means that there will be little interference. Living in a city, there are at least 16 2.4GHz wireless networks in range of our test device.

Dual-band routers offer antennas for both frequencies, which means that devices that do not support 5GHz can still operate.

After much consideration, we overbuilt and purchased the WNDR4500 when it was on sale.

[asa]B005KG44V0[/asa]

The router offers speed and reliability for the price, as well as multiple simultaneous full speed connections, guest networking, file sharing, and more. We needed the extra speed after we upgraded to wideband. The router had to keep up with the increased throughput.

This isn’t a router review. It is the most expensive router we have ever purchased. But if house networking is important to you, your router should be too. And if you are concerned about interference from other access points, upgrading to the 5GHz band is a viable option.

[asa]B0036BJN12[/asa]

The cost of a new Intel wireless mini-pci card is not prohibitive either. Most of these cards are easily accessible on a laptop, making it a simple upgrade.

But what do you think? Is less interference worth it? Do you care about the possible 450mbps throughput? What would be your rationale for going with a high-end router?

Published on April 2, 2012
