I wanted to spend some time this evening documenting for you some of
the cool infrastructure updates that we have coming for
RadioReference.com. We've seen tremendous growth this year, so this
has really become a necessity. We are planning to move our entire
hosting platform over to Amazon Web Services, a cloud-based computing
environment that lets us build our infrastructure on demand and pay
for what we use. It is a little more expensive than traditional
hosting providers, but it provides tremendous flexibility for us.
You can see more information on Amazon Web Services here:
Amazon Web Services
With what we have architected moving forward, we can scale within
minutes to support millions of visitors to our site - something that
isn't quite so easy with our current architecture. In fact, within
the last 6 months we moved to a much larger infrastructure
environment, only to exceed its capacity at times after all the
growth... and during the summer no less, which is the slowest time
for our business. Clearly, we need to take significant action.
So, what do we have planned? What are the technical details? For
those of you that like this stuff... Here is what the site will look
like after the switch in the next few weeks. We will have:
1 Web Proxy Server. This server will proxy all requests to back end
Web servers, allowing us to load balance requests to LOTS of Web
servers versus the two that we have today. It will be a front end to
the entire back-end infrastructure of Web and database servers, and it
will have a hot standby that can come up quickly (in minutes) in case
of a failure.
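To give you a rough feel for what a proxy does, here is a tiny Python
sketch - purely illustrative, with a made-up back end hostname; the
real front end will be a dedicated proxy package, not a script like
this:

    # Toy reverse proxy, just to illustrate the idea: visitors talk to one
    # front end, and the front end fetches the page from a back end Web
    # server on their behalf. Hostname and port are made up.
    import http.server
    import urllib.request

    BACKEND = "http://web1.internal:8080"   # one back end, for illustration

    class ProxyHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            # Ask the back end server for the same path the visitor requested...
            with urllib.request.urlopen(BACKEND + self.path) as upstream:
                body = upstream.read()
            # ...then hand the response straight back to the visitor.
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    http.server.HTTPServer(("", 8000), ProxyHandler).serve_forever()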
3 Web servers... to start. The proxy server above will balance
requests across each of these on a round-robin basis. These Web
servers are identical clones, so I can literally bring online as many
of them as I want and add them to the Web proxy server
configuration. Understand that the bottlenecks that we sometimes see
in site performance happen at the back end Web servers and/or database
servers. A single proxy server could easily funnel 5,000 requests a
second to a back end infrastructure without even breaking a sweat. We
will also look at deploying, on demand, additional servers just for
the forums so that when they get busy from 3:00 PM to 11:00 PM we can
add servers to support the requests, and then bring them down when
things get quiet. If we get a huge influx of traffic, I could bring
up 30 servers if I needed to.
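"Round robin" just means handing each new request to the next Web
server in the list and wrapping back around to the first. Here is a
sketch of that bookkeeping, including the part where we bring clones
in and out of the rotation on demand - the hostnames are made up, and
it is only meant to show the idea:

    import threading

    class BackendPool:
        """Hands out back end Web servers in round-robin order, and lets us
        add or remove clones while the proxy keeps running."""

        def __init__(self, servers):
            self.servers = list(servers)
            self.lock = threading.Lock()
            self.index = 0

        def next_server(self):
            # Each request gets the next server in the list, wrapping around.
            with self.lock:
                server = self.servers[self.index % len(self.servers)]
                self.index += 1
                return server

        def add_server(self, server):
            # Bring another clone online, e.g. for the evening forum rush.
            with self.lock:
                self.servers.append(server)

        def remove_server(self, server):
            # Take a clone back out of rotation when things quiet down.
            with self.lock:
                self.servers.remove(server)

    pool = BackendPool(["web1.internal", "web2.internal", "web3.internal"])
    pool.add_server("forums4.internal")            # 3:00 PM, forums heating up
    print([pool.next_server() for _ in range(5)])  # requests rotate across all four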
1 Master database server. This thing will be a monster, with 8
virtual cores and 16GB of RAM. All database writes will go to this
server, meaning anytime something needs to be changed in the
database, this sucker is going to handle it.
2 Slave Databases... to start. A slave database server replicates all
information from the master listed above but functions in read-only
mode. One will be a primary slave server responsible for offloading
read requests from the master server (the master will still serve a
lot of read requests, but this is a start). Another slave database
will be dedicated solely to database backups and snapshots. If we
have to bring up a bunch of Web servers because of increased demand,
we can also bring up slave database servers to serve those Web
servers all their read requests. Again, we can bring up as many of
these as we need to. We are also looking at advanced caching
techniques, such as memcached, for the database servers.
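To make that concrete, here is roughly how the application side of a
master/slave split plus a cache looks. The server names are made up,
the database call is a stub, and the plain dictionary is only
standing in for memcached - it is a sketch of the technique, not our
actual code:

    import itertools

    MASTER = "db-master.internal"                     # all writes go here
    SLAVES = itertools.cycle(["db-slave1.internal",   # read-only copies kept in
                              "db-slave2.internal"])  # sync by replication

    cache = {}   # stands in for memcached; in production this would be a
                 # memcached pool shared by all of the Web servers

    def run_query(server, sql):
        # Stub for the real database call.
        return "rows from %s for: %s" % (server, sql)

    def read(sql):
        # Serve a read from the cache if we can, otherwise from a slave.
        if sql in cache:
            return cache[sql]
        result = run_query(next(SLAVES), sql)   # rotate reads across the slaves
        cache[sql] = result
        return result

    def write(sql):
        # Every change goes to the master; the slaves pick it up on their own.
        run_query(MASTER, sql)
        cache.clear()   # crude cache invalidation, fine for a sketch

    print(read("SELECT * FROM trunked_systems"))   # hits a slave, then the cache
    write("UPDATE feeds SET listeners = listeners + 1")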
1 NFS Server. NFS stands for "network file system", and allows us to
put all of our Web content on a single server and let all the Web
servers reference it. That way we only have to put things in one
place and if we have 100 Web servers they can all reference the same
data.
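In practice, every Web server just mounts the same export at the same
path and reads and writes it like local disk. A trivial sketch (the
mount point and filenames are made up):

    import os

    SHARED_ROOT = "/mnt/webcontent"   # the NFS mount, identical on every Web server

    def publish(filename, data):
        # Write a file once, on any one of the Web servers...
        with open(os.path.join(SHARED_ROOT, filename), "wb") as f:
            f.write(data)

    def serve(filename):
        # ...and every other Web server sees the exact same file right away.
        with open(os.path.join(SHARED_ROOT, filename), "rb") as f:
            return f.read()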
1 Management Server. This server will update statistics on all the
servers, monitor each of the servers for problems, and alert us when
something goes wrong. No more dead server at 11pm that doesn't get
fixed until 7am.
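Conceptually it is nothing fancy: poll every box on a schedule and
raise an alarm the moment one stops answering. A stripped-down sketch
with made-up hostnames and a print() where the real paging or email
would go - the real thing will be a proper monitoring package:

    import time
    import urllib.request

    # Every box we care about, with a URL that should always answer.
    SERVERS = {
        "proxy":     "http://proxy.internal/status",
        "web1":      "http://web1.internal/status",
        "db-master": "http://db-master.internal/status",
    }

    def is_up(url):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200
        except Exception:
            return False

    def alert(name):
        # Stub: the real system would page or email us immediately.
        print("ALERT: %s is not responding" % name)

    while True:
        for name, url in SERVERS.items():
            if not is_up(url):
                alert(name)
        time.sleep(60)   # check every minute, around the clock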
2 Master Audio Servers. These servers will receive all of the audio
feed broadcasts that are hosted on the site. Our plan is to have one
master server for every 1000 audio feeds. We can grow this as needed.
2 Relay Audio Servers... to start. Relay servers are what you connect
to when you listen to a live audio feed. We can add as many of these
as we need to support all the listeners - up into the millions.
Our plan is to have 1 relay server per 3000 listeners.
3 Audio Archive Servers. The audio archive servers, well, archive all
the audio. Each is connected to a 1TB disk store. Our plan is to
have one archive server per 500 feeds.
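Those ratios make the capacity planning simple arithmetic. A quick
back-of-the-envelope helper, using the ratios above (the feed and
listener counts in the example are made-up numbers, not our actual
traffic):

    import math

    def audio_servers_needed(feeds, listeners):
        # Sizing straight from the ratios above.
        return {
            "master servers":  math.ceil(feeds / 1000.0),      # 1 per 1000 feeds
            "relay servers":   math.ceil(listeners / 3000.0),  # 1 per 3000 listeners
            "archive servers": math.ceil(feeds / 500.0),       # 1 per 500 feeds
        }

    # Made-up example numbers:
    print(audio_servers_needed(feeds=1500, listeners=5000))
    # {'master servers': 2, 'relay servers': 2, 'archive servers': 3}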
So, many will ask: how much will this cost per month? I would
estimate that our charges will exceed $4,000/month. If we have to
scale to meet additional demand, we will pay for what we use. But
the benefits far outweigh the costs, and we will be prepared to scale
up for the large events and traffic that are invariably going to come
our way. We don't have a choice but to invest, and our existing
services are costing about $3,000/month, so this is a great business
move for us.
And... many will ask, when is this going to happen? Well, half of our
audio infrastructure is already on the new system, and we've moved
all of the static Web content (logos, images, styles, etc.) to
Amazon's CloudFront. The rest of the infrastructure is already up and
running, but it is going through load testing to make sure things go
smoothly when we switch. I would expect that by the end of September,
we will be fully moved over to this new environment, and we will be
welcoming hordes of new visitors and users.
Thanks to everyone for your support, and I welcome your feedback.
Wednesday, September 2, 2009
1 comment:
I can't help but think that it would be a really useful thing, now that I know you've finished your migration, if the people who had to actually *do* it wrote an article on how it went, what you did, what you didn't expect, and like that -- might even sell to some appropriate outlet. :-)