
Friday, October 5, 2012

Improving Web Server Performance in a Big Way With Small Changes


A week ago today we pushed out some code to our IIS and MS SQL servers that has had a huge impact.  In a good way - a very good way.

First, a little background. I work for an online video company that has "widgets" and video "players" that load on thousands of websites millions of times a day.  Each time one of these loads, numerous calls are made to our systems for both static and dynamic content. One of the highest-volume services is what we call "player services," where calls are made to fetch playlists, video metadata, etc. based on our business rules - for example, whether a particular video may be displayed on a given website. Anyway, these player services get hit a lot: about 350,000,000 times (or more) a day. That's over 1/3 of a billion (yes, billion, with a B) hits a day to just this one service, and it represents only about 1/3 of the total hits to the systems I oversee. You do the math...

We do have a scalable infrastructure which has steadily been growing over the past few years.  In fact, we've grown tremendously as you can see by this Quantcast graph.

Figure 1 - Traffic growth over the past few years.
Over the years we've stumbled a few times, but have learned as we've grown. (I remember when we had our first Drudge link & it caused all kinds of mayhem. Recently we had five links on Drudge simultaneously and barely felt it.) We've also had many breakthroughs that have made things more efficient and/or saved us money.  This one did both.

We have fairly predictable daily traffic patterns: weekdays are heavier than weekends, and daytime is heavier than nighttime. Using a couple of my favorite tools (RRDTool & Cacti) I am able to graph all kinds of useful information over time to see how my systems are performing. The graph in figure 2 shows the number of web requests per second to a particular server. It also shows the relative traffic pattern over the course of a week.

Figure 2 - Cacti RRDTool graph showing daily traffic pattern over one week for one server.
I use a great little Cacti plug-in called "CURL BWTEST Response Time Graph Template" to monitor response times for various calls. As figure 3 shows, our average service response times would climb during peak traffic times each day. I monitor these closely every day, and if the response times get too high I can add more servers to bring them back down to an acceptable level.

Figure 3 - Cacti CURL graph showing service call response time.
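Incidentally, that plugin is essentially timing a curl request on a schedule. You can do the same kind of spot check by hand; something like this (the URL is just a placeholder) prints the total response time in seconds:

curl -s -o /dev/null -w "%{time_total}\n" "http://example.com/player-services/playlist"
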
Here's the exciting part, finally. As you can see in figure 3, we no longer have increasing response times during the day when our load increases. How did we do it, you ask? Well, a couple of really sharp guys on our team, our DBA and a .NET developer, worked on several iterations of changes to both SQL stored procedures and front-end code that now deliver the requested data more quickly and efficiently. Believe it or not, this took a little work. Earlier I alluded to stumbling a few times as we've grown, and that happened here.  A couple weeks ago we made a few additions to our code and deployed them. The first time, we basically took down our whole system for a few minutes until we could back the change out. Not only did that scare the crap out of us, it made us look bad.

So we went back to the drawing board and worked on the "new and improved" addition to our code. After a couple more false starts we finally cracked this one. Without boring you with the details, our DBA made the stored procedures much more efficient and our front-end developer made some tweaks to his code & the combination was dynamite.

As figure 3 shows, we no longer suffer from increased response times for calls under heavier load conditions. This in and of itself is fantastic; after all, quicker responses mean quicker load times for our widgets and players, which means a better user experience, which ultimately should translate to increased revenue. But this isn't the only benefit of this "update." The next improvement is that the responses are quicker overall. Much quicker. As you can see in figure 4, on Sept. 14 (before the code deployment) the average response time (broken out per hour) increased dramatically during peak traffic times and averaged a little over 100 ms for the day. After the deployment, looking at data from Oct. 4, not only did the average response time stay flat during the higher traffic times, but it is down considerably overall. Responses now average 25 ms at all hours of the day, even under heavy load. This is a great double-whammy improvement! In fact, Oct. 4 was a much heavier load day than Sept. 14, so not only were the responses quicker, the server handled way more traffic.

Figure 4 - Spreadsheet comparing average load time before and after code deployment.
So far I've discussed the benefits on the front-end web servers, but we've also seen a dramatic improvement on the back-end database servers that service the web servers. Figure 5 shows the CPU utilization of one of our DB servers over the past four weeks; since the deployment a week ago it has averaged almost half what it did before. This enabled me to shut down a couple of database servers to save quite a bit of money. I've also been able to shut down several front-end servers.

Figure 5 - Cacti graph of database server CPU utilization over 4 weeks.
Due to this upgrade I have been able to shut down a total of 12 servers, saving us over $5,000 per month; the calls return quicker, so our widgets and players load faster at all hours of the day; and our infrastructure is even more scalable. Win, win, win!

For more information, and if you're feeling especially geeky, see my post "I LOVE LogParser" for details on how I use Microsoft's LogParser to summarize hundreds of millions of IIS log lines each day, and on demand when needed, to keep a good pulse on just what my servers are doing.
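To give you a taste, a query along these lines (the log path is just an example) summarizes hits and average time-taken per URL across a day's IIS W3C logs:

logparser -i:W3C -rtp:-1 "SELECT cs-uri-stem, COUNT(*) AS Hits, AVG(time-taken) AS AvgMs FROM C:\inetpub\logs\ex*.log GROUP BY cs-uri-stem ORDER BY Hits DESC"

See that post for the full details.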

Wednesday, October 3, 2012

Enable Quick Launch in Windows 7, Windows 2008 and 2008 R2

Call me old-fashioned.  Say I'm stuck in the past.  Whatever.  I just don't like a lot of the things Microsoft has done to the Windows interface/desktop over the years.  Every time I get a new computer or start up a new server, certain things have to be done to make it usable, i.e. not annoying to me.  One of those things is to enable the Quick Launch bar.

So, to add Quick Launch to any Windows 7 or Windows 2008 server, do the following:
  1.  Right-click an empty area of the taskbar, select Toolbars, then click New toolbar.
  2. In the dialog box, enter %AppData%\Microsoft\Internet Explorer\Quick Launch, then click Select Folder.
Now you can customize Quick Launch by showing small icons, hiding labels, moving it around, etc.

Monday, October 1, 2012

Windows Command Line WhoIs

I regularly find myself trying to find the owner of a domain or needing other information, like authoritative name servers.  A few command line whois.exe programs exist for Windows, but the one I like best is the one by Mark Russinovich at Sysinternals.  You can visit the previous link to download it, or use wget http://download.sysinternals.com/files/WhoIs.zip at the command line (assuming you have wget for Windows).

One little trick I like is to place whois.exe in the Windows system32 directory (C:\Windows\system32, for example), because this directory is normally in your system path.  This way whois can be executed in a command prompt from any directory.
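For example, after extracting the zip (the path assumes a default Windows install):

copy whois.exe C:\Windows\system32\

After that you can run whois <domain> from any directory.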

Here's a look at whois microsoft.com:


Monday, September 10, 2012

Who's Yer GoDaddy? GoDaddy DNS Down!

About 2:15 EDT today I began to get alarms from some of my monitoring software that it "can't resolve hostname to IP address" for a couple of our lesser-used domains.  After a little digging, it looks like GoDaddy DNS (and other GoDaddy services) is currently down.
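If you want to check for yourself, nslookup makes it easy to see whether a domain's name servers are answering (substitute one of your own domains):

nslookup yourdomain.com
nslookup yourdomain.com <one of your domain's GoDaddy name servers>

If queries against the authoritative servers time out, the problem is on GoDaddy's end.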

Friday, August 31, 2012

Blinc M2 with FM Tuner Review - DO NOT BUY!

Enough is enough!  I've been meaning to write this review for some time now, but have kept putting it off.  Now is the time. I don't usually do product reviews, but this must be said.

SPOILER ALERT: DO NOT BUY the Blinc M2 with FM Tuner! And don't expect any kind of customer support or service from Quickline Motorsports!

I purchased the Blinc M2 with FM Tuner (AKA 932 Vcan Blinc M2 with FM Tuner) about 15 months ago, in the spring of 2011.  Right from the start I began to notice some, I guess, annoyances.  Well, initially they were annoyances; now I flat-out hate this thing. The problems usually occur during or after phone calls.  As a Bluetooth-connected wireless headset for playing music it does OK. So, if I can ride without anyone calling me I usually don't have a problem. But....

The first issue with the Bluetooth phone headset is that no one can hear me on the phone if I'm moving more than about 20 MPH.  And often, even when I'm stopped, callers say they cannot hear or understand me.  Then, after a call (whether the call is missed or answered, and whether an answered call is disconnected by me or by the caller), the music being played via my phone's MP3 player gets very choppy.  Immediately after a missed or disconnected call the music cuts out randomly, for anywhere from one to several seconds at a time.  The only way to correct this is to turn the unit off, then back on.  And that doesn't always work.  Not to mention that this is difficult and dangerous while driving. Note: I've tried three different phones and all have experienced the same issue.

Next, the Blinc M2 has frozen on me numerous times.  Whenever this happens the display shows either the phone number of the last caller or some message like BLINC or CONNECT.  The only way to recover from this condition is to use the reset key that came with the unit.  After resetting, the unit has to be re-paired with my MP3 player (i.e. phone).

Good
  • Decent headset for music from an MP3 player.
  • Bluetooth is great because it's wireless
  • Controls are OK.  Ability to advance to next (or previous) track is nice.  Volume controls OK.
Bad
  • Sucks as a phone headset
  • Music is choppy after phone calls
  • Unit hangs or freezes periodically requiring a reset
  • Proprietary/unique power cable (WTH couldn't they use a standard USB cable?)
It took me a couple weeks of use to figure out the pattern of exactly what was happening and when. At that point I called Quickline Motorsports, where I purchased this thing. I called three times. Two of the times I got through to what I assume was a receptionist who, after I pleaded my case, promised to have someone with authority call me. No one ever did. The third time I left a voice mail, which was never returned.  I also sent two emails to their supposed support department, neither of which was answered. I will NEVER buy anything from Quickline Motorsports again and cannot recommend them.

The bottom line is that this is an adequate headset for music from an MP3 player, but nothing else.  Definitely not what it promises, nor worth the $150 or so MSRP.

Thursday, August 30, 2012

Elasticfox Firefox Extension for Amazon EC2 is Now Elasticfox-ec2tag

Previously I wrote about a great Amazon AWS management tool, Elasticfox, which is, or was, an extension for Firefox.  I started using Elasticfox a few years ago when I started using Amazon's EC2 (Elastic Compute Cloud) service.  At the time Amazon's Management Console wasn't all that great; it lacked many features and was a little tough to navigate, so I relied heavily on Elasticfox. Since development of Elasticfox stopped back at Firefox 3, it's been a while since it would work in a current browser anyway. Thankfully, a guy named Genki Sugawara picked up the torch and actively develops a successor, Elasticfox-ec2tag. Not only is Elasticfox-ec2tag a great Firefox plugin, it also has a pretty good stand-alone app.

In all fairness Amazon's Management Console has come a long way. And I use it regularly. But, and this may be because of my history using Elasticfox, I use Elasticfox-ec2tag faithfully. In fact, it's always the first tab in my browser.


With Elasticfox-ec2tag I can easily view and manage my EC2 servers, AMI images, Security Groups, EBS volumes, Elastic Load Balancers, etc. With it I can easily connect to any of my AWS accounts, and any AWS region within each account. I really like that you can customize the columns, both which to view and their order.  Another great feature it supports is the EC2 name tag which is also supported by the AWS Management Console.  With this I can assign a meaningful name to each instance and view that name in both tools.

You can download the latest Elasticfox-ec2tag Firefox plugin (.xpi file) directly from Amazon at http://elasticfox-ec2tag.s3-website-ap-northeast-1.amazonaws.com/.

If you haven't used Elasticfox-ec2tag I would highly recommend giving it a try.

Monday, August 27, 2012

Resizing the Root Disk on an AWS EC2 EBS-backed Instance

Have you ever wanted to have a larger root EBS volume on an EC2 Ubuntu instance?  With these steps it's easily accomplished with minimal down time.  (Some of these steps were gleaned from this post at alestic.com.)

I'm setting up a new "base" image for some servers I'm starting in Amazon's us-west-1 region.  I started with an Ubuntu image built by RightScale, then did some basic setup to customize the image.  Now I need to increase the root EBS volume a bit.  Then I can use this as my own base image for starting new Linux servers.

Note: I use a combination of tools to manage my EC2 instances and EBS volumes, from Amazon Management Console, to command line tools, to ElasticFox.  Often the tool I use depends on the way the wind is blowing on a particular day.  For this post I'm using the Amazon Management Console.  For info on the command line tools see the previously mentioned post.  Finally, one critical step cannot be completed using ElasticFox.

First, my volume is only the default size of 10GB (I used df -lah to display the volume size in GB), but I need it to be a little bigger:
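For reference, df -lah output for the root volume looks something like this (sizes here are illustrative):

$ df -lah /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        10G  1.1G  8.4G  12% /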


The next step is to create a snapshot of the volume.  This can be done a few different ways.  Using various tools (command line tools, Elasticfox, EC2 Management Console, etc.) you can create a snapshot of the volume directly.  Or, my preferred method, you can create an AMI of the instance, which creates a snapshot of the volume and also gives me an AMI from which I can launch other, similar instances.


Once the AMI and/or snapshot is complete, create a new volume of the size you desire from the snapshot.  The new volume has to be in the same availability zone as the instance; my instance is in us-west-1a so I'll create my new volume in that AZ.  In the AWS Console select Snapshots under Elastic Block Store, right-click the volume's snapshot and select Create Volume from Snapshot.


Specify the size you want.  And, again, make sure to create it in the same AZ as your instance.


Next, stop your instance, detach the current volume, then attach the new volume (under Elastic Block Store, Volumes, right-click your new/available volume and select Attach EBS Volume).  When attaching the new volume select your "stopped" instance and specify /dev/sda1 for the device; this is the default first volume.  Click Yes, Attach.  Then start your instance and connect to it.


After connecting to your instance with its new volume, if you run df it will still report the original volume size, not the new size.  So the final step is to run sudo resize2fs /dev/sda1 (on Ubuntu).  Once this is complete you can run df to see the new, increased size of your volume.


The last thing would be to delete the volume you detached from this instance.  Oh, and perhaps to make a new AMI.
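For reference, here's roughly the same flow using Amazon's EC2 API command line tools; all of the IDs below are made up, so substitute your own (and double-check the flags against your version of the tools):

ec2-create-snapshot vol-11111111
ec2-create-volume -z us-west-1a -s 20 --snapshot snap-22222222
ec2-stop-instances i-33333333
ec2-detach-volume vol-11111111
ec2-attach-volume vol-44444444 -i i-33333333 -d /dev/sda1
ec2-start-instances i-33333333

Then connect to the instance and run sudo resize2fs /dev/sda1 as described above.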

Now, not only do you have a larger EBS volume on this instance, future instances made from (your new) AMI of this instance will have the same size volume.

Wednesday, May 30, 2012

What Version of Ubuntu Am I Running?

How to check the version of Ubuntu you are running from the command prompt / terminal (remotely using something like PuTTY or locally...).

Simply run the command: lsb_release -a

This will display the description, release and Ubuntu codename, which you can cross reference at Ubuntu Release Versions.
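For example, on one of my servers the output looks like this (yours will obviously vary):

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 12.04.1 LTS
Release:        12.04
Codename:       precise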


Monday, April 30, 2012

HP Procurve Switches: Setting Time Using SNTP

I recently picked up a couple of ProCurve 2910al-48G-PoE switches.  Since it had been a while since I'd set up time on a switch, I had to research a bit.  Here are the settings I used.

First, you can view the switch's current time, timezone, etc. with the show time command.

Next go into config mode (configure terminal) on the switch's CLI.
sntp unicast
sntp 30
sntp server priority 1 65.55.21.17
sntp server priority 2 64.250.177.145
timesync sntp
time timezone -300
time daylight-time-rule continental-us-and-canada
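For what it's worth, my understanding of those settings: sntp unicast tells the switch to poll the servers directly rather than listen for broadcasts; sntp 30 is the poll interval in seconds; the two sntp server priority lines define a primary and a backup time server; timesync sntp makes SNTP the active time-sync method; and time timezone -300 is the offset from UTC in minutes (US Eastern), so adjust that value for your own zone.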
Now, when I run show time the switch displays the current local time!

Thursday, December 29, 2011

Update Windows Path Environment Variable

Today I had to make similar changes to 10 Windows servers.  The changes consisted of copying a couple of specialized programs from a source location and updating the Windows path.  I couldn't remember how to update the Windows path variable from the command prompt (other than for the current command prompt, of course) so I tooled around the Internet a bit until I found it.  But as I was looking for the solution I found a lot of confusing and incomplete information about the Windows path variable.  Most of the info I found consisted of brief posts, often followed by several discussions asking for more information.  Not knocking anyone here, but most of this appeared to be by developers and not system admin types.  So I wanted to provide a thorough discussion of the Windows path variable: what it is, how it works and how to use it.

The Windows path variable has its roots way back in MS-DOS, which means it's been around for over 30 years.  Basically, the path is a list of directories (or folders) to be searched when a command runs.  For example, say you're at the root of the C: drive in a Windows command prompt.  If you execute a command like robocopy, Windows will look in the current directory first for the executable, robocopy.exe.  Since robocopy.exe isn't in the root of C: by default, if you didn't have a path variable defined with the correct directories you would get the message, "'robocopy' is not recognized as an internal or external command, operable program or batch file."  But since Windows by default has at least "C:\Windows\system32;C:\Windows" (usually more) in the path, the first directory, C:\Windows\system32, will be searched for robocopy.exe, and since it's there the program will run.  If it weren't there, the next directory in the path, C:\Windows, would be searched, then the next, and so on until the program is either found or not.

The Windows path is searched whenever a command is executed, either from within a command prompt or from Windows itself, as long as the command isn't an internal command like dir and the executable isn't found in the current directory.  Remember, it's always possible to specify the path along with the executable, as in C:\Windows\system32\robocopy.  This launches the command or program directly without having to search the path.

One of the quickest and easiest ways to view your current path is to open a command prompt and enter the command, path.


Modifying the Windows Path Environment Variable
It's worth noting that when a program is installed, the path to that application may be added to your path.  Your path will often consist of a dozen or more entries (in fact, I stripped down my path for the previous screen shot to show only a minimal path).

Back in the days of DOS, and even Windows 95/98 and Windows NT, the path was set through either (or usually both) of the files config.sys and autoexec.bat.  In order to make a permanent change to your path it was common to edit one or both of these files.  But since these operating systems are so old and probably not used by many, we'll skip the details.

So starting with Windows 2000 and XP (right up to and including Windows 2003, Windows 2008, Windows Vista and Windows 7, and I'm sure Windows 8), Microsoft did away with config.sys and autoexec.bat and moved the path variable (and others) into the Windows registry.  Of course you could edit the path directly in the registry, but that's usually not a good idea, and since there are better ways, those are what we'll look at.

First, it's important to understand that there are different path variables: system, user, and what I like to call context (or session) variables.  For example, when you open a command prompt the system and user path variables are read from the registry and assigned to that command prompt.  If you use the path command to modify the path, it will only be modified within that command prompt session, or the "current context."

Let's say you are using a program while in a command prompt and want to add the path to that program to your path just for the current session.  You would simply enter the command set path=%path%;<path to new program>.  This takes your current path (as specified with %path%) and appends the new path.  NOTE: use a semi-colon (;) between path entries.

For this example I added the path to a useful Amazon S3 utility called S3.exe.

Example: set path=%path%;C:\Admin\Utils\S3exe


Now I can run S3.exe without having to specify the full path to the command.  However, when I close the command prompt and open a new one this entry will not be present.  I'd have to add it again.  Or, add it to the Windows registry.

Setting Path Variables (semi-)Permanently
There are a few ways to make the path variables (both system and user) permanent.  At least they are permanent until you change them....

Settings for both user and system path variables (and a lot of others) are accessed through the System Properties applet.  Here are two ways to access System Properties.
  1. Right-click on Computer (formerly My Computer), click Properties.  Within the System window click Advanced System Settings.  This will open the System Properties window.
  2. OR, click Start, Settings, Control Panel, System, Advanced System Settings.   This will also open the System Properties window.

Click Environment Variables in the lower-right.  This will (finally) open the Environment Variables dialog window.


The Environment Variables window allows you to add/edit entries to both the current user's path and the system path.  Entries in the system variables path will apply to all users on the system, whereas the user variables will apply only to the currently logged on user - i.e. you.

Just like with about everything else, there's another way to do this, and that's with the setx command from a command prompt (my preferred method).  Using setx you can make permanent changes to either the user path variable or the system path variable (or both).  NOTE: use quotes where shown in the commands.  The /M switch is used to set the system path variable.  Both commands are written to the system registry, as with the previous examples.

Example (user path variable): setx path "%path%;C:\Admin\Utils\S3exe"
Example (system path variable): setx path "%path%;C:\Admin\Utils\S3exe" /M


NOTE: these commands do NOT update your current path (in the command prompt), so you'll have to open a new command prompt for the path variables to include new/updated entries.

In the beginning I mentioned I had to make similar changes to several servers.  I was able to use robocopy to copy the programs/files I needed and setx to update the system path variable.  My commands consisted of about 5 lines that I just copied and pasted into the command prompt on each server, and my "deployment" was completed in minutes using these simple commands.
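In case it helps, the gist of it looked something like this (the server and directory names are made up for illustration):

robocopy \\fileserver\tools\S3exe C:\Admin\Utils\S3exe /E
setx path "%path%;C:\Admin\Utils\S3exe" /M

Paste that into a command prompt on each server and the files are copied and the system path updated in one shot.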