Thursday, December 15, 2011

New User Profile, Shack Photos and Amateur Radio Features Released

Recently, we released these new Amateur Radio-related features to the community:
  • Amateur Radio callsign lookup and search support, prepopulated with all US Amateur Radio callsigns and updated weekly (a sketch of how a weekly refresh might work follows this list)
  • Amateur radio operators now have a new badge identifying them as such in the forums
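
For the curious: we haven't published our import pipeline, but a minimal sketch of a weekly refresh against the FCC's public ULS amateur license dump might look like the Python below. The download URL and the EN.dat field positions are assumptions based on the FCC's public record layouts, not our production code.

    import io
    import urllib.request
    import zipfile

    # Assumed location of the FCC's weekly complete amateur license dump.
    ULS_AMATEUR_URL = "https://data.fcc.gov/download/pub/uls/complete/l_amat.zip"

    def fetch_callsigns():
        """Download the weekly FCC dump and yield (callsign, licensee name) pairs."""
        raw = urllib.request.urlopen(ULS_AMATEUR_URL).read()
        with zipfile.ZipFile(io.BytesIO(raw)) as archive:
            with archive.open("EN.dat") as entities:
                for line in io.TextIOWrapper(entities, encoding="latin-1"):
                    fields = line.rstrip("\n").split("|")
                    # "EN" records carry the callsign (field 5) and entity name
                    # (field 8); positions are assumptions from the public layout.
                    if len(fields) > 7 and fields[0] == "EN":
                        yield fields[4], fields[7]

    if __name__ == "__main__":
        for callsign, name in fetch_callsigns():
            print(callsign, name)  # a real import would upsert into the database
            break  # this sketch just shows the first record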
Additionally, we released these new features for all of our registered members on the site:
  • Members can upload up to 5 photos of their radio shack / desk / mobile setup to be displayed in their user profile
  • Members can add a short biography about themselves to be displayed in their user profile
  • Members can upload a profile picture that is displayed on their user profile page
  • Members whose username is their Amateur Radio callsign automatically have their callsign information linked to their profile
  • Members may manually link their Amateur Radio callsign to their profile if their username is different from their callsign
To get started with the shack photos feature, head on over to your account page and click the "Shack" tab.

To get started with the biography and user profile picture features, head on over to your account page and click the "Bio" tab.

To get started using the Amateur Radio callsign lookup and search features, click the "Databases" drop-down menu at the top of the page, then click the "Amateur Radio Database" entry.
To manually link your amateur radio callsign to your RadioReference.com username, open your account profile and fill in your amateur radio callsign in the appropriate field. If your callsign does not exist in our callsign database, you will be given the opportunity to submit your callsign details to the database.


Enjoy!

Thursday, December 8, 2011

Lots of traffic

Many have inquired why the site was running so slowly this afternoon. Well, there was a shooting incident on the Virginia Tech campus today that left two people dead, and it resulted in tens of thousands of listeners flooding into the audio portions of the site within a period of about five minutes. Virginia Tech is a highly technical community, so its members are more online than most.

The site was mentioned by students being interviewed on CNN, as well as in numerous viral posts on social media (Facebook, Twitter, etc.), which drove a large amount of traffic to the site in a short period of time.

Luckily, we learned an important lesson a few months ago when CNN mentioned us during the east coast earthquake, so we put "throttles" in place to prevent a traffic surge from taking the site down before we could provision new servers. On the day CNN mentioned us on the air, the sudden flood of traffic to RadioReference crashed our database servers under the load and we were down for over 45 minutes! Running a platform that can instantly see tens of thousands of people flocking to it out of nowhere is challenging, since it is difficult and costly to keep servers constantly provisioned and prepared for such traffic. But the good news is the site stayed up and running, albeit slowly, and we were able to provision new servers to better handle the additional traffic we were seeing.

For those who want the nitty-gritty on what happened today: we have a proxy server that load balances requests across our Web servers and also throttles requests, so that if we hit a traffic spike like today's, we won't overwhelm what is already running (a simplified sketch of the throttling idea follows below). We currently throttle to 2000 req/sec, and we were maxed out for most of the afternoon. After the news hit we brought up 2 additional Web servers and a database replica server to handle the load, and things are back to reasonable levels of performance.
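
The throttle itself is a feature of the proxy software we run, but for those curious about the mechanics, here is a minimal token-bucket sketch in Python that illustrates capping requests at a fixed rate; the 2000 req/sec figure is the only number taken from our actual setup.

    # Minimal token-bucket throttle sketch; illustrative only, not our proxy's code.
    import time

    class TokenBucket:
        def __init__(self, rate, capacity):
            self.rate = rate          # tokens (requests) added per second
            self.capacity = capacity  # maximum burst size
            self.tokens = capacity
            self.last = time.monotonic()

        def allow(self):
            """Return True if a request may proceed, False if it should be rejected."""
            now = time.monotonic()
            # Refill tokens based on elapsed time, never exceeding capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    # Cap at 2000 requests/second with a burst allowance of 2000.
    bucket = TokenBucket(rate=2000, capacity=2000)
    served = sum(1 for _ in range(10000) if bucket.allow())
    print(f"served {served} of 10000 back-to-back requests")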

With that said - for those of you that were having trouble getting your RR fix this afternoon, our apologies.

Friday, August 20, 2010

The monthly bill

Has anyone ever wondered what RadioReference's monthly costs are to provide our services? I figured I would take a moment to share with the community what we pay each month in hosting expenses. We obviously have other expenses, like taxes, payroll, commissions, etc., but our largest expense is Web hosting (servers, bandwidth, etc.).

Our primary Web hosting partner is Amazon Web Services, which provides a fantastic cloud computing model that is critical to our ability to scale up and down based on demand and pay only for what we use. We have also partnered with ServerBeach to provide a server external to Amazon Web Services for mail and monitoring activities.

Below is our latest invoice for the month of July's hosting on Amazon Web Services...

That's right: just about $8,285.00. We also paid ServerBeach approximately $200 for a server hosted with them. That means on a typical busy month we are looking at ~$8,500/month in hosting and bandwidth costs.

How does all this break down?

- $2700/month in server costs (web, database, archive, audio, and management servers)
- $1300/month in storage costs (audio archives, backups, databases, files etc)
- $4500/month in bandwidth (all those audio feeds)
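
As a quick sanity check, here is the arithmetic behind those numbers in a few lines of Python (using the rounded figures above, which is why the total lands on ~$8,500 rather than the exact invoice amount):

    # Rounded monthly hosting costs from the breakdown above.
    costs = {"servers": 2700, "storage": 1300, "bandwidth": 4500}
    total = sum(costs.values())
    print(f"total: ${total}/month")  # $8500/month
    for item, dollars in costs.items():
        print(f"  {item}: ${dollars} ({dollars / total:.0%} of the bill)")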

Hopefully this gives a good perspective as to what it costs to run a high traffic, bandwidth intensive Web property.

Next up... our technical architecture. Ever wondered how many servers we have, what they do, and what software we run? Stay tuned!

Saturday, May 15, 2010

Coming in early June, Premium Feed Broadcaster Offering

We are excited to announce that in early June we will release a premium feed broadcaster offering. This is primarily intended for public safety agencies and commercial entities; however, any feed broadcaster can upgrade their feed. The features include:

1) 90 day archive retention
2) Ability to automatically trim dead air from archives
3) Brand free set of Web players
4) No ads on your Web players or feeds
5) User defined lead-in audio feature
6) Private Feed support with delegated administration (you can restrict access to your feed to only users that you specify)
7) Complete access to all mount points and server details - do what you want with your feed, anywhere, anytime.
8) Full archive access via API to display archive files and lists on your sites/systems (see the sketch after this list).
9) Full downloadable statistics
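
We'll publish the actual API details with the release. In the meantime, here is a purely hypothetical Python sketch of what pulling an archive list for display on your own site could look like; the endpoint, parameters, and response fields below are invented for illustration and are not the real interface:

    import json
    import urllib.request

    # Hypothetical example only: this endpoint and response format are invented
    # to illustrate the idea of pulling archive lists for display on your site.
    FEED_ID = "12345"  # placeholder feed identifier
    url = f"https://api.example.com/feeds/{FEED_ID}/archives?date=2010-06-01"

    with urllib.request.urlopen(url) as response:
        archives = json.load(response)  # assume a JSON list of archive clips

    for clip in archives:
        # e.g. render a download link on your own page
        print(clip["start_time"], clip["url"])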

Our current offerings will not change, and you can switch between being a normal RR feed broadcaster and a premium broadcaster at any time.

The cost for this offering will be $14.95/month per feed.

Look for more details coming soon, including the release date.

Friday, February 12, 2010

Listening to RadioReference Live Audio Feeds using the Logitech SqueezeBox



As you probably know, RadioReference has more than 1,800 live audio feeds of public safety communications from all over the world. With the recent release of the ScannerBox plug-in for the Logitech Squeezebox, you can now listen to all of those feeds directly from this full-featured Wi-Fi-enabled music player.

If you aren't familiar with the Logitech Squeezebox, this device is a dedicated Internet radio player. With the release of the ScannerBox plug-in, you can now instantly listen to all of RadioReference's live audio feeds on this great "appliance."

For more information on the plug-in, see the links below and watch the YouTube video.

ScannerBox Plug In
Logitech Squeezebox

Thursday, February 11, 2010

RadioReference and Incident Page Network (IPN) Announce Incident Syndication Partnership

RadioReference is pleased to announce that we have formed a partnership with Incident Page Network (IPN), the world's premier incident notification network, to syndicate incident alerts on RadioReference Live Audio County pages. Most incidents displayed will be delayed up to two hours; however, critical and large-scale incidents will be displayed in real time.

In addition, existing IPN subscribers will soon be able to associate their RadioReference.com username with their IPN profile, allowing all incidents to be displayed on RadioReference.com pages in real time for those subscribers.

More details and information on this partnership can be seen here:

Incident Page Network - The RadioReference Wiki

These new features will be rolled out during the evening of Monday, February 8th, 2010.

Enjoy!

Wednesday, September 2, 2009

RadioReference Infrastructure Updates Coming For September

I wanted to spend some time this evening to document for you some of the cool infrastructure updates that we have coming for RadioReference.com. We've seen tremendous growth this year, so these updates are really necessary. We are planning to move our entire hosting platform over to Amazon Web Services, a cloud-based computing environment that lets us build our infrastructure on demand and pay for what we use. It is a little more expensive than traditional hosting providers, but it provides tremendous flexibility for us. You can see more information on Amazon Web Services here:

Amazon Web Services

With what we have architected moving forward, we can scale within minutes to support millions of visitors to our site, something that isn't quite so easy with our current architecture. In fact, within the last six months we moved to a much larger infrastructure environment, only to exceed its capacity at times after all the growth... and during the summer no less, which is the slowest time for our business. Clearly, we need to take significant action.

So, what do we have planned? What are the technical details? For those of you who like this stuff, here is what the site will look like after the switch in the next few weeks. We will have:

1 Web Proxy Server. This server will proxy all requests to back-end Web servers, allowing us to load balance requests to LOTS of Web servers versus the two that we have today. It will be a front end to the entire back-end infrastructure of Web and database servers, and it will have a hot standby that can come up quickly (in minutes) in case of a failure.

3 Web servers... to start. The proxy server above will balance requests across each of these on a round-robin basis (a small sketch of the idea follows below). These Web servers are cloned identically, so I can literally bring online as many of them as I want and add them to the Web proxy server configuration. Understand that the bottlenecks we sometimes see in site performance happen at the back-end Web servers and/or database servers; a single proxy server can easily funnel 5,000 requests a second to a back-end infrastructure without breaking a sweat. We will also look at deploying, on demand, additional servers just for the forums, so that when they get busy from 3:00 PM to 11:00 PM we can add servers to support the requests and then bring them down when things get quiet. If we get a huge influx of traffic, I could bring up 30 servers if I needed to.
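
To make the round-robin idea concrete, here is a tiny Python sketch of how a proxy might pick a back end for each request. Real proxy software does far more (health checks, connection pooling, and so on), and the host names are placeholders:

    from itertools import cycle

    # Placeholder host names; adding capacity is just adding to this list.
    web_servers = cycle(["web1.internal", "web2.internal", "web3.internal"])

    def pick_backend():
        """Each call returns the next Web server in rotation."""
        return next(web_servers)

    for request_number in range(6):
        print(f"request {request_number} -> {pick_backend()}")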

1 Master database server. This thing will be a monster, with 8 virtual cores and 16GB of RAM. All database writes will go to this server, meaning anytime something needs to be changed in the database, this sucker is going to handle it.

2 Slave Databases... to start. A slave database server replicates all information from the master listed above but functions in read-only mode. One will be a primary slave server responsible for offloading read requests from the master server (the master will still serve a lot of read requests, but this is a start). The other slave database will be dedicated solely to database backups and snapshots. If we have to bring up a bunch of Web servers because of increased demand, we can also bring up slave database servers to serve those Web servers all their read requests; again, we can bring up as many of these as we need. We are also looking at advanced caching techniques (memcached) for the database servers. A rough sketch of how reads and writes get split follows below.
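
Here is a hand-rolled Python sketch of that read/write split; the host and table names are placeholders, and a real application would get this from its database layer:

    from itertools import cycle

    # Placeholder hosts; a real app would hold connections, not name strings.
    MASTER = "db-master.internal"
    slaves = cycle(["db-slave1.internal", "db-slave2.internal"])

    WRITE_VERBS = {"INSERT", "UPDATE", "DELETE", "REPLACE"}

    def route(sql):
        """Send writes to the master and rotate reads across the slaves."""
        verb = sql.lstrip().split(None, 1)[0].upper()
        return MASTER if verb in WRITE_VERBS else next(slaves)

    print(route("SELECT * FROM posts LIMIT 10"))        # -> a slave
    print(route("UPDATE users SET last_seen = NOW()"))  # -> the master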

1 NFS Server. NFS stands for "Network File System," and it allows us to put all of our Web content on a single server and let all the Web servers reference it. That way we only have to put things in one place, and if we have 100 Web servers they can all reference the same data.

1 Management Server. This server will update statistics on all the servers, monitor each of them for problems, and alert us when something goes bad. No more having a server die at 11 PM and not get fixed until 7 AM. A bare-bones sketch of this kind of health checking follows below.
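
For flavor, here is a bare-bones Python health-check loop along these lines; the hosts are placeholders, and our real monitoring will use dedicated tooling rather than a script like this:

    import time
    import urllib.error
    import urllib.request

    # Placeholder hosts; a real system would page us rather than print.
    HOSTS = ["http://web1.internal/", "http://web2.internal/", "http://db1.internal/"]

    def is_up(url):
        """Return True if the host answers with HTTP 200 within 5 seconds."""
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                return response.status == 200
        except (urllib.error.URLError, OSError):
            return False

    while True:
        for host in HOSTS:
            if not is_up(host):
                print(f"ALERT: {host} is not responding")
        time.sleep(60)  # re-check every minute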

2 Master Audio Servers. These servers will receive all of the audio feed broadcasts that are hosted on the site. Our plan is to have one master server for every 1000 audio feeds; we can grow this as needed.

2 Relay Audio Servers... to start. Relay servers are what you connect to when you listen to a live audio feed. We can add as many of these as we need to support all the listeners, up to millions of them. Our plan is to have 1 relay server per 3000 listeners.

3 Audio Archive Servers. The audio archive servers, well, archive all the audio. Each is connected to a 1TB disk store. Our plan is to have one archive server per 500 feeds; the arithmetic behind these ratios is sketched below.
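
Those ratios make capacity planning simple arithmetic. For example, in Python, with illustrative demand figures:

    # Capacity planning from the ratios above: ceil(demand / capacity_per_server).
    from math import ceil

    feeds, listeners = 1500, 6000  # illustrative demand figures

    masters = ceil(feeds / 1000)     # 1 master audio server per 1000 feeds -> 2
    relays = ceil(listeners / 3000)  # 1 relay server per 3000 listeners    -> 2
    archivers = ceil(feeds / 500)    # 1 archive server per 500 feeds       -> 3

    print(masters, relays, archivers)  # 2 2 3, matching the counts above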

So many will ask, how much will this cost per month? I would estimate that our charges will exceed $4,000/month. If we have to scale to meet additional demand, we will pay for what we use. But the benefits far outweigh the costs, and we will be prepared to scale up to the large events and traffic that are invariably going to come our way. We don't have a choice but to invest, and our existing services cost about $3,000/month, so this is a great business move for us.

And... many will ask, when is this going to happen? Well, half of our audio infrastructure is already on the new system, and we've moved all of the static Web content (logos, images, styles, etc.) to Amazon's CloudFront. The rest of the infrastructure is already up and running, but it is going through load testing to make sure things go smoothly when we switch. I would expect that by the end of September we will be fully moved over to this new environment, and we will be welcoming hordes of new visitors and users.

Thanks to everyone for your support, and I welcome your feedback.