
Friday, August 31, 2012

Blinc M2 with FM Tuner Review - DO NOT BUY!

Enough is enough!  I've intended to write this review for some time now, but have neglected to do so.  Now is the time.  I don't usually do product reviews, but this must be said.

SPOILER ALERT: DO NOT BUY the Blinc M2 with FM Tuner! And don't expect any kind of customer support or service from Quickline Motorsports!




I purchased the Blinc M2 with FM Tuner (AKA 932 Vcan Blinc M2 with FM Tuner) about 15 months ago, in the spring of 2011.  Right from the start I began to notice what I'll call annoyances.  Well, initially they were annoyances; now I flat out hate this thing.  The problems usually occur during or after phone calls.  As a Bluetooth-connected wireless headset for playing music it does OK, so if I can ride without anyone calling me I usually don't have a problem. But....

The first issue with the Bluetooth phone headset is that no one can hear me on a call if I'm moving faster than about 20 MPH.  And often, even when I'm stopped, callers say they cannot hear or understand me.  Then, after a call (whether the call is missed or answered, and whether an answered call is disconnected by me or the caller), the music played via my phone's MP3 player gets very choppy.  Immediately after a missed or disconnected call the music cuts out randomly, for one to several seconds at a time.  The only way to correct this is to turn the unit off, then back on, and even that doesn't always work.  Not to mention that doing so is difficult and dangerous while driving.  Note: I've tried three different phones and all have had the same issue.

Next, numerous times the Blinc M2 has just frozen.  Whenever this happens the display shows either the phone number of the last caller or a message like BLINC or CONNECT.  The only way to recover from this condition is to use the reset key that came with the unit.  After resetting, the unit has to be re-paired with my MP3 player (i.e. phone).

Good
  • Decent headset for music from an MP3 player.
  • Bluetooth is great because it's wireless
  • Controls are OK.  Ability to advance to next (or previous) track is nice.  Volume controls OK.
Bad
  • Sucks as a phone headset
  • Music is choppy after phone calls
  • Unit hangs or freezes periodically requiring a reset
  • Proprietary/unique power cable (WTH couldn't they use a standard USB cable?)
It took me a couple weeks of use to figure out the pattern of exactly what was happening and when.  At that point I called Quickline Motorsports, where I purchased this thing.  I called three times.  Twice I got through to what I assume was a receptionist, who, after hearing my case, promised to have someone with authority call me.  No one ever did.  The third time I left a voice mail, which was never returned.  I also sent two emails to their supposed support department, neither of which was answered.  I will NEVER buy anything from Quickline Motorsports again and cannot recommend them.

The bottom line is that this is an adequate headset for music from an MP3 player, but nothing else.  Definitely not what it promises, nor worth the $150 or so MSRP.

Thursday, August 30, 2012

Elasticfox Firefox Extension for Amazon EC2 is Now Elasticfox-ec2tag

Previously I wrote about a great Amazon AWS management tool, Elasticfox, which is, or was, an extension for Firefox.  I started using Elasticfox a few years ago when I started using Amazon's EC2 (Elastic Compute Cloud) service.  At the time Amazon's Management Console wasn't all that great; it lacked many features and was a little tough to navigate, so I relied heavily on Elasticfox.  Development of Elasticfox stopped with Firefox 3, so it hasn't worked in quite a while anyway.  Fortunately, a guy named Genki Sugawara picked up the torch and actively develops its successor, Elasticfox-ec2tag.  Not only is Elasticfox-ec2tag a great Firefox plugin, it also has a pretty good stand-alone app.

In all fairness Amazon's Management Console has come a long way. And I use it regularly. But, and this may be because of my history using Elasticfox, I use Elasticfox-ec2tag faithfully. In fact, it's always the first tab in my browser.


With Elasticfox-ec2tag I can easily view and manage my EC2 servers, AMI images, Security Groups, EBS volumes, Elastic Load Balancers, etc.  With it I can easily connect to any of my AWS accounts, and to any AWS region within each account.  I really like that you can customize the columns, both which to view and their order.  Another great feature is support for the EC2 Name tag, which the AWS Management Console also supports; with it I can assign a meaningful name to each instance and see that name in both tools.

You can download the latest Elasticfox-ec2tag Firefox plugin (.xpi file) directly from Amazon at http://elasticfox-ec2tag.s3-website-ap-northeast-1.amazonaws.com/.

If you haven't used Elasticfox-ec2tag I would highly recommend giving it a try.

Monday, August 27, 2012

Resizing the Root Disk on an AWS EC2 EBS-backed Instance

Have you ever wanted to have a larger root EBS volume on an EC2 Ubuntu instance?  With these steps it's easily accomplished with minimal down time.  (Some of these steps were gleaned from this post at alestic.com.)

I'm setting up a new "base" image for some servers I'm starting in Amazon's us-west-1 region.  I started with an Ubuntu image built by RightScale, then did some basic setup to customize it.  Now I need to increase the root EBS volume a bit, and then I can use this as my own base image for starting new Linux servers.

Note: I use a combination of tools to manage my EC2 instances and EBS volumes, from Amazon Management Console, to command line tools, to ElasticFox.  Often the tool I use depends on the way the wind is blowing on a particular day.  For this post I'm using the Amazon Management Console.  For info on the command line tools see the previously mentioned post.  Finally, one critical step cannot be completed using ElasticFox.

First, my volume is only the default size of 10GB (I used df -lah to display the volume size in GB), but I need it to be a little bigger:


The next step is to create a snapshot of the volume.  This can be done a few different ways using various tools (command line tools, Elasticfox, EC2 Management Console, etc.).  My preferred method, though, is to create an AMI of the instance: this creates a snapshot of the volume and also gives me an AMI from which I can launch other, similar instances.


Once the AMI and/or snapshot is complete, create a new volume of the size you desire from the snapshot.  The new volume has to be in the same availability zone as the instance; my instance is in us-west-1a so I'll create the new volume in that AZ.  In the AWS Console select Snapshots under Elastic Block Store, right-click the volume's snapshot and select Create Volume from Snapshot.


Specify the size you want.  And, again, make sure to create it in the same AZ as your instance.


Next, stop your instance, detach the current volume, then attach the new volume (under Elastic Block Store, Volumes, right-click your new/available volume and select Attach EBS Volume).  When attaching the new volume select your stopped instance and specify /dev/sda1 for the device; this is the default first volume.  Click Yes, Attach.  Then start your instance and connect to it.


After connecting to your instance with its new volume, running df will still report the original volume size, not the new size.  So, the final step is to run sudo resize2fs /dev/sda1 (on Ubuntu).  Once this completes you can run df to see the new, increased size of your volume.
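You can rehearse this grow-the-filesystem step safely on a throwaway image file, with no instance or root access required.  This is just a sketch of how resize2fs behaves; disk.img stands in for /dev/sda1:

```shell
# Create a 100 MB ext4 filesystem inside a plain file, "grow the disk" by
# extending the file, then run resize2fs on it -- the same command run
# against /dev/sda1 on the instance.
truncate -s 100M disk.img
mkfs.ext4 -q -F disk.img       # -F: force; needed when the target is a regular file
truncate -s 200M disk.img      # stands in for attaching the bigger EBS volume
resize2fs disk.img             # grows the filesystem to fill the new size
```

Just as on the instance, the filesystem only fills the larger "device" after resize2fs runs.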


The last thing would be to delete the volume you detached from this instance.  Oh, and perhaps to make a new AMI.

Now, not only do you have a larger EBS volume on this instance, future instances made from (your new) AMI of this instance will have the same size volume.

Wednesday, May 30, 2012

What Version of Ubuntu Am I Running?

How to check the version of Ubuntu you are running from the command prompt / terminal (remotely using something like PuTTY or locally...).

Simply run the command: lsb_release -a

This will display the description, release and Ubuntu codename, which you can cross reference at Ubuntu Release Versions.
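If lsb_release isn't installed, the same information lives in the release files under /etc.  A quick fallback sketch (the os-release file is an addition of mine, standard on newer distributions, not mentioned in the original post):

```shell
# Try lsb_release first; fall back to the release files that ship with
# Ubuntu and most other distributions.
lsb_release -a 2>/dev/null || cat /etc/lsb-release 2>/dev/null || cat /etc/os-release
```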


Monday, April 30, 2012

HP Procurve Switches: Setting Time Using SNTP

I recently picked up a couple ProCurve 2910al-48G-PoE switches.  Since it had been a while since I set up time on a switch, I had to do a bit of research.  Here are the settings I used.

First, you can view the switch's current time, timezone, etc. with the show time command.

Next go into config mode (configure terminal) on the switch's CLI.
sntp unicast
sntp 30
sntp server priority 1 65.55.21.17
sntp server priority 2 64.250.177.145
timesync sntp
time timezone -300
time daylight-time-rule continental-us-and-canada
Now, when I run show time the switch displays the current local time!  (The -300 above is the timezone offset in minutes from UTC, i.e. UTC-5 Eastern, and the daylight-time-rule handles DST.)

Thursday, December 29, 2011

Update Windows Path Environment Variable

Today I had to make similar changes to 10 Windows servers.  The changes consisted of copying a couple specialized programs from a source location and updating the Windows path.  I couldn't remember how to update the Windows path variable from the command prompt (other than for the current command prompt, of course) so I tooled around the Internet a bit until I found it.  But as I was looking for the solution I found a lot of confusing and incomplete information about the Windows path variable.  Most of the info consisted of brief posts, often followed by discussions asking for more information.  Not knocking anyone here, but most of this appeared to be written by developers rather than system admin types.  So I wanted to provide a thorough discussion of the Windows path variable: what it is, how it works and how to use it.

The Windows path variable has its roots way back in MS-DOS, which means it's been around for over 30 years.  Basically, the path is a list of directories (or folders) to be searched when a command runs.  For example, say you're at the root of the C: drive in a Windows command prompt.  If you execute a command like robocopy, Windows will look in the current directory first for the executable, robocopy.exe.  Since robocopy.exe isn't in the root of C: by default, if you didn't have a path variable defined with the correct directories you would get the message, "'robocopy' is not recognized as an internal or external command, operable program or batch file."  But since Windows by default has at least "C:\Windows\system32;C:\Windows" (usually more) in the path, the first directory, C:\Windows\system32, will be searched for robocopy.exe, and since it's there the program will run.  If it weren't there, the next directory in the path, C:\Windows, would be searched, then the next, and so on until the program is either found or not.

The Windows path is searched whenever a command is executed (one that isn't a core command like dir, and isn't found in the current directory), whether from a command prompt or from Windows itself.  Remember, it's always possible to specify the full path to an executable, as with C:\Windows\system32\robocopy.  This launches the command or program directly without having to search the path.
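The search order described above is easy to demonstrate.  Here's a small shell sketch (Unix-flavored, purely illustrative) that walks a PATH-style list in order and stops at the first directory containing the executable, which is the same algorithm Windows applies to its path variable:

```shell
# Walk each PATH entry in order; the first directory that contains an
# executable with the requested name wins, just as described above.
find_in_path() {
  old_ifs=$IFS; IFS=:
  for dir in $PATH; do
    if [ -f "$dir/$1" ] && [ -x "$dir/$1" ]; then
      IFS=$old_ifs
      echo "$dir/$1"
      return 0
    fi
  done
  IFS=$old_ifs
  return 1          # searched every directory; command not found
}

find_in_path ls     # prints the first match, e.g. /bin/ls or /usr/bin/ls
```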

One of the quickest and easiest ways to view your current path is to open a command prompt and enter the command path.


Modifying the Windows Path Environment Variable
It's worth noting that often when a program is installed the path to your application may be added to your path.  Your path will often consist of a dozen or more entries (in fact, I stripped down my path for the previous screen shot to show only a minimal path).

Back in the days of DOS, and even Windows 95/98 and Windows NT, the path was set through one or (usually) both of the files config.sys and autoexec.bat.  To make a permanent change to your path it was common to edit either or both of these files.  But since these operating systems are so old and probably not used by many, we'll skip the details.

So, starting with Windows 2000 and XP (right up to and including Windows 2003, Windows 2008, Windows Vista and Windows 7, and I'm sure Windows 8), Microsoft did away with config.sys and autoexec.bat and moved the path variable (and others) into the Windows registry.  Of course you could edit the path directly in the registry, but that's usually not a good idea, and since there are other ways, that's what we'll look at.

First, it's important to understand that there are different path variables; system, user, and what I like to call context (or session) variables.  For example when you open a command prompt the system and user path variables are read from the registry and assigned to that command prompt.  If you use the path command to modify the path it will only be modified within that command prompt session, or the "current context."

Let's say you are using a program while in a command prompt and want to add that program's directory to your path just for the current session.  You would simply enter the command set path=%path%;<path to new program>.  This takes your current path (as specified with %path%) and appends the new entry.  NOTE: use a semi-colon (;) between path entries.

For this example I added the path to a useful Amazon S3 utility called S3.exe.

Example: set path=%path%;C:\Admin\Utils\S3exe


Now I can run S3.exe without having to specify the full path to the command.  However, when I close the command prompt and open a new one this entry will not be present.  I'd have to add it again.  Or, add it to the Windows registry.

Setting Path Variables (semi-)Permanently
There are a few ways to make the path variables (both system and user) permanent.  At least they are permanent until you change them....

Settings for both user and system path variables (and a lot of others) are accessed through the System Properties applet.  Here are two ways to access System Properties.
  1. Right-click on Computer (formerly My Computer), click Properties.  Within the System window click Advanced System Settings.  This will open the System Properties window.
  2. OR, click Start, Settings, Control Panel, System, Advanced System Settings.   This will also open the System Properties window.

Click Environment Variables in the lower-right.  This will (finally) open the Environment Variables dialogue window.


The Environment Variables window allows you to add/edit entries to both the current user's path and the system path.  Entries in the system variables path will apply to all users on the system, whereas the user variables will apply only to the currently logged on user - i.e. you.

Just like with about everything else, there's another way to do this, and that's with the setx command from a command prompt (my preferred method).  Using setx you can make permanent changes to either the user path variable or the system path variable (or both).  NOTE: use quotes where shown in the commands.  The /M switch sets the system path variable.  Both commands write to the system registry, as with the previous examples.

Example (user path variable): setx path "%path%;C:\Admin\Utils\S3exe"
Example (system path variable): setx path "%path%;C:\Admin\Utils\S3exe" /M


NOTE: these commands do NOT update your current path (in the command prompt), so you'll have to open a new command prompt for the path variables to include new/updated entries.
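As an aside for anyone on the Unix side of the fence, the rough equivalent of setx is appending an export line to a shell profile, which every new shell then picks up.  A sketch using a local stand-in file rather than the real ~/.profile, with /opt/s3exe as a made-up path:

```shell
# Persist a PATH addition by appending it to a profile file; demo_profile
# stands in for ~/.profile, and /opt/s3exe is a hypothetical directory.
profile=demo_profile
echo 'export PATH="$PATH:/opt/s3exe"' >> "$profile"
. "./$profile"                  # new shells source the profile automatically
case ":$PATH:" in
  *:/opt/s3exe:*) echo "path entry added" ;;
esac
```

Like setx, this only affects shells started after the change (or ones that re-source the profile), not other sessions already open.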

In the beginning I mentioned I had to make similar changes to several servers.  I was able to use robocopy to copy the programs/files I needed and setx to update the system path variable.  My commands consisted of about 5 lines that I just copied and pasted into the command prompt on each server and my "deployment" was completed in minutes using these simple commands.

Wednesday, November 9, 2011

Optimizing Images to Save Bandwidth on AWS

Last month our finance guy came to me in a bit of a panic to point out that our Amazon Web Services (AWS) bill was way higher than expected - by several thousand dollars.  After the initial shock wore off I started digging to figure out just what was going on.

From Amazon's bill I could easily determine that our bandwidth costs were way up, but other expenses (like EC2) were in line with previous months.  Since I have several S3 buckets that house content served from S3 and CloudFront, and since Amazon doesn't break down the costs enough I had to figure this one out on my own.

We serve millions of page views per day, and each page view causes several calls to different elements within our infrastructure.  Each call gets logged, which makes for a lot of log files - hundreds of millions of log lines per day to be exact.  But, this is where the detail I needed would be found so I had to dig in to the log files.

Because I collect so much log information daily I haven't built processes (yet) to get detailed summary data from the logs.  I do however, collect the logs and do some high level analysis for some reports, then zip all the logs and stuff them to a location on S3.  I like to hang on to these because you never know when a) I might need them (like now), and b) I'll get around to doing a deeper analysis on them (which I could really use, especially in light of what I've uncovered tracking down this current issue).

I have a couple of under-utilized servers so I copied a number of the log files from S3 to my servers and went to work analyzing them.

I use a plethora of tools on these log files (generally on Windows) such as S3.exe, 7zip, grep (GNU Win32 grep) and logparser.  One day I'm going to write a post detailing my log collection and analysis processes....

I used logparser to calculate the bandwidth served by each content type (css, html, swf, jpg, etc.) from each bucket on a daily basis.  My main suspect was image files (mostly jpg) because a) we serve a lot every day (100 million plus), and b) they are generally the biggest of the content we serve from S3/CloudFront.
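I used logparser for this, but the rollup itself is simple enough to show with a tiny awk sketch.  The fabricated two-column "key bytes" records below are mine for illustration; real S3 access logs have many more fields:

```shell
# Miniature fake log: object key and bytes sent.
cat > access.log <<'EOF'
thumbs/a.jpg 52000
stills/b.jpg 48000
player.swf 9000
site.css 1200
EOF

# Sum bytes served per file extension -- the same rollup I ran with logparser.
awk '{ n = split($1, parts, "."); ext = parts[n]; bytes[ext] += $2 }
     END { for (e in bytes) print e, bytes[e] }' access.log | sort
```

On the sample data the jpg line dominates, which is exactly the pattern that pointed me at the image bucket.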

Since my log files are so voluminous it actually took several days to cull enough information from them to get a good picture of just what was happening.  Even though I'd gotten over the initial shock of the increased Amazon bill, I was shocked again when I saw the cause: essentially overnight, daily bandwidth from the bucket serving my images (thumbs & stills) went up 3-4 times.  This told me either a) we were serving more images daily (other indicators didn't point to this), or b) our images were larger starting that day than they had been previously.

Well, that's exactly what it was - the images were bigger.  Turned out a developer, trying to overcome one bottleneck, stopped "optimizing" our images and the average file size grew by nearly 4 times - just about the amount of bandwidth increase I'd found.  So, in order to save a few minutes per day and even a couple bucks with our encoder this one thing ended up costing us thousands of dollars.

Once I identified the issue I began working with our developers to fast-track a solution so we can accomplish all our goals: save time and money encoding/optimizing the images, and get them as small as possible to save on Amazon bandwidth, thereby saving a lot of money.  I also went to work identifying the images that were too big and optimizing them.  In fact, this is an ongoing issue, as our dev team hasn't quite implemented their new image optimization deployment, so I actually grab unoptimized (i.e. too big) images a few times a day, optimize them, then upload them to S3.  This process probably justifies its own post, so I'll do that soon & link to it from here.

Delete Files by Date With DOS Batch File

I have a server that is continuously filling its drive with temporary files left over from a video encoding process we use.  Every week or so I have to manually delete the old files to prevent the drive from filling.  Finally, today I set out to find a way to programmatically clean up these old files regularly.  So I tooled around the Internet for a way to do it from the DOS command line.  Unfortunately, others had run into the same issue I had: DOS commands like del don't have a way to delete by date.  I found a few approaches using complex batch files and even tried one for a few minutes, but when I couldn't get it to work I went back to the drawing board.

I found a really old post suggesting using xcopy to copy the newest files (presumably the ones to keep) to a temp location, then delete the originals, then move the copies back to their original location.  This approach had some promise, but some real drawbacks too, particularly that I'm dealing with tens of gigabytes (so this would take forever) and with several, varying subdirectories.

Since xcopy has been deprecated and replaced with robocopy in Windows 2008, Windows 7, etc. that's what I chose to use.  In fact, robocopy has a couple switches that make it easy to move (/MOV) files older than x days (/MINAGE:x).

What I ended up with was a simple two line batch file that I'm using Windows task scheduler to run once a day.  The first line moves files older than x days to a temporary location & the second line deletes the moved files.
robocopy D:\Original_Location D:\DelOldFiles * /S /MOV /MINAGE:7
del D:\DelOldFiles\* /s /q
Breakdown
The robocopy syntax is ROBOCOPY source destination [file [file]...] [options].  I'm using the following switches (options):
  • /S - include subdirectories
  • /MOV - move (which is technically a copy, then delete original)
  • /MINAGE:x - act on files older than x days
After the files are moved I'm deleting all files (*), in all subdirectories (/s) in my temp folder, quietly (/q), i.e. don't prompt, just do it.
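For what it's worth, the Unix-side equivalent of this cleanup is a single find command (robocopy and del remain the right tools on Windows).  A sketch against a disposable demo directory, assuming GNU find and touch:

```shell
# Fake an old leftover file and a recent one to keep.
mkdir -p demo/sub
touch -d '10 days ago' demo/sub/old.tmp
touch demo/new.tmp

# Delete files older than 7 days, recursing into subdirectories --
# the same policy as /MINAGE:7 above, without the temp-folder step.
find demo -type f -mtime +7 -delete
ls -R demo
```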

See also
Using dos to delete based on date

Thursday, October 27, 2011

ASA 5500 SSL VPN Add Licenses to ASA

I recently had to enable some of my mobile Mac clients with the Cisco AnyConnect VPN Client for Mac.  Then, of course, since the ASA only included 2 SSL VPN licenses, and that's what the AnyConnect VPN Client uses, I had to purchase some additional licenses.  I purchased the licenses through a reseller, and a couple days later they sent me a PDF listing the product (L-ASA-SSL-10= ASA 5500 SSL VPN 10 Premium User License) and a Product Authorization Key.

First, go to the Cisco Product Registration Page and login with your TAC credentials.  In the Product Authorization Key (PAK) field enter the Product Authorization Key from your PDF then click submit.

Next, follow the prompts and agree to their end user license agreement.  You will have to provide the ASA's serial number which can be obtained from the chassis or via show version from the CLI (this is probably the best method as you can copy the S/N from the CLI, then paste it to the authorization screen).

Now wait.

After submitting the required information and verifying other info you'll see the following message indicating that you'll have to wait up to one hour to receive an email with the activation key.  You'd think Cisco would be able to provide this info right away.  Guess not.

You'll be presented with the following helpful message to read while you wait...
Your license and user information will be sent via email within 1 hour to the email address you specified. If you have not received an email within 1 hour, please open a Service Request using the TAC Service Request Tool. Please have your valid Cisco.com user Id and password available. As an alternative, you may also call our main Technical Assistance Center at 800-553-2447.
Please be sure to check your Junk/Spam email folders for this email from licensing@cisco.com with your license key attached.
Fortunately only a few minutes later I received the email with the ASA activation key (which is 77 characters  long) and the following instructions.

Installing Your Cisco Adaptive Security Appliance Activation Key
Step 1.  From the command line interface (CLI), enter configuration mode using the "conf t" command.
Step 2.  Type the "activation-key" command, and then, when prompted, enter the new activation key listed above.
Which I promptly followed.  Now I have 10 licenses with which to connect my clients.  This, by the way, is a bit of a disappointment, as I already had two.  I would have hoped Cisco would preserve the two gratis WebVPN licenses and add my 10 new ones.  No such luck.

Friday, October 21, 2011

Cisco AnyConnect VPN Client for Mac

Recently some of our mobile users needed to connect to one of our networks that's protected by a pair of Cisco ASA firewalls.  It was no problem for the Windows users as I already had what I needed in place, however it was a different story for our Mac users.  Since it had been a while since I setup the ASA for AnyConnect for Windows I'd forgotten everything that was needed so I ran into a little trouble.

First, I downloaded the latest AnyConnect VPN client for Macs from Cisco (anyconnect-macosx-i386-2.5.3055-k9.dmg at the time of this writing) and installed it on a MacBook Pro.

Notes:
  • Of course, you'll have to have a valid SmartNet agreement and account with Cisco to access these files.
  • And, since the legacy Cisco VPN client only runs on 32-bit Macs, AnyConnect is the only option for 64-bit Macs.
With the AnyConnect VPN Client installed on the Mac I launched it and tried to connect to my ASA.  Here's when I ran into my first problem, receiving the message, "The AnyConnect package on the secure gateway could not be located. You may be experiencing network connectivity issues. Please try connecting again."


After a little research I realized I needed to upload the accompanying package (.pkg) file to the ASA.  So I headed back to Cisco to download the package file (anyconnect-macosx-i386-2.5.3055-k9.pkg - must match the version of the AnyConnect VPN Client on the Mac).

With that in hand I copied it to the ASA via TFTP, after, of course, dusting off my (FREE!) SolarWinds TFTP Server I haven't used for quite some time.  Here's the (Cisco) IOS command to copy the file via the terminal:
copy tftp:anyconnect-macosx-i386-2.5.3055-k9.pkg disk0:
Of course you'll have to provide the name/IP address of your TFTP server, which will conveniently be asked.

With that in place I tried again to connect.  However, I had the same problem, again receiving the message, "The AnyConnect package on the secure gateway could not be located. You may be experiencing network connectivity issues. Please try connecting again."  WTF?

Oh, yeah, I had to register the Mac AnyConnect package with the ASA's IOS.  Since I already have the Windows AnyConnect package registered as #1, and since most who connect to my ASA are Windows clients I left that in the first position and registered the Mac package second with the following commands:
config terminal
webvpn
svc image disk0:/anyconnect-macosx-i386-2.5.3055-k9.pkg 2
Then, by running show webvpn svc I can see that both the Windows and Mac AnyConnect packages are registered with my ASA.  


And I can successfully connect my Mac clients.  Booyah!!!

Need help adding SSL VPN licenses to your ASA 5500?