RadioReference Infrastructure Updates Coming For September

Status
Not open for further replies.

blantonl

Founder and CEO
Staff member
Super Moderator
Joined
Dec 9, 2000
Messages
11,115
Location
San Antonio, Whitefish, New Orleans
I wanted to spend some time this evening documenting some of the cool
infrastructure updates that we have coming for RadioReference.com.
We've seen tremendous growth this year, so these changes are really a
necessity. We are planning to move our entire hosting platform over
to Amazon Web Services, a cloud-based computing environment that lets
us build our infrastructure on demand and pay for what we use. It is
a little more expensive than traditional hosting providers, but it
provides tremendous flexibility for us.
You can see more information on Amazon Web Services here:
You can see more information on Amazon Web services here:

Amazon Web Services

With what we have architected moving forward, we can scale within
minutes to support millions of visitors to our site - something that
isn't quite so easy with our current architecture. In fact, within
the last six months we moved to a much larger infrastructure
environment, only to exceed its capacity at times after all the
growth... and during the summer no less, which is the slowest time
for our business. Clearly, we need to take significant action.

So, what do we have planned? What are the technical details? For
those of you who like this stuff, here is what the site will look
like after the switch in the next few weeks. We will have:

1 Web Proxy Server. This server will proxy all requests to back-end
Web servers, allowing us to load balance requests across LOTS of Web
servers versus the two that we have today. It will be the front end
to the entire back-end infrastructure of Web and database servers,
and it will have a hot standby that can come up quickly (in minutes)
in case of a failure.
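
To make the rotation logic concrete, here is a minimal Python sketch
of how a proxy like this picks the next back end for each request
(the server addresses are made up for illustration):

from itertools import cycle

# Hypothetical pool of back-end Web servers sitting behind the proxy.
BACKENDS = ["10.0.0.11:80", "10.0.0.12:80", "10.0.0.13:80"]

# cycle() hands the backends out in strict rotation - the round-robin
# policy described here. Adding a server to the pool is just adding
# an entry to this list.
backend_pool = cycle(BACKENDS)

def pick_backend():
    """Return the next back-end server that should receive a request."""
    return next(backend_pool)

if __name__ == "__main__":
    for _ in range(6):
        print(pick_backend())  # .11, .12, .13, then wraps around again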

3 Web servers... to start. The proxy server above will balance
requests across each of these on a round-robin basis. These Web
servers are cloned from the same image, so I can literally bring
online as many of them as I want and add them to the Web proxy
configuration. Understand that the bottlenecks we sometimes see in
site performance happen at the back-end Web servers and/or database
servers; a single proxy server could easily funnel 5,000 requests a
second to a back-end infrastructure without breaking a sweat. We
will also look at deploying, on demand, additional servers just for
the forums, so that when they get busy from 3:00 PM to 11:00 PM we
can add servers to support the requests and then bring them down when
things get quiet. If we get a huge influx of traffic, I could bring
up 30 servers if I needed to.
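
For the curious, bringing extra Web servers online looks roughly like
this with the boto Python library; the AMI ID, role string, and
instance type below are placeholders rather than our actual values:

# Sketch of launching clone Web servers on EC2 with boto. Credentials
# come from the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment
# variables.
import boto

def launch_web_servers(count):
    conn = boto.connect_ec2()
    reservation = conn.run_instances(
        "ami-00000000",          # hypothetical Web-server AMI
        min_count=count,
        max_count=count,
        instance_type="m1.small",
        user_data="role=web",    # tells the instance which role to assume
    )
    return [inst.id for inst in reservation.instances]

if __name__ == "__main__":
    print(launch_web_servers(3))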

1 Master database server. This thing will be a monster, with 8
virtual cores and 16GB of RAM. All database writes will go to this
server, meaning anytime something needs to be changed in the
database, this sucker is going to handle it.

2 Slave Databases... to start. A slave database server replicates all
information from the master listed above but functions in read-only
mode. One will be a primary slave responsible for offloading read
requests from the master server (the master will still serve a lot of
read requests, but this is a start). The other slave database will be
dedicated solely to database backups and snapshots. If we have to
bring up a bunch of Web servers because of increased demand, we can
also bring up slave database servers to serve those Web servers all
their read requests. Again, we can bring up as many of these as we
need. We are also looking at advanced caching techniques (memcached)
for the database servers, as illustrated below.
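
As a rough illustration of how memcached would sit in front of the
slave databases, here is a minimal read-through cache sketch using
the python-memcached client; the key format and query function are
hypothetical, not our actual schema:

import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def get_frequency(freq_id, query_db):
    """Serve from memcached when possible; fall back to a slave DB read."""
    key = "freq:%d" % freq_id
    row = mc.get(key)
    if row is None:
        row = query_db(freq_id)      # the read goes to a slave database
        mc.set(key, row, time=300)   # cache the result for five minutes
    return row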

1 NFS Server. NFS stands for Network File System; it allows us to
put all of our Web content on a single server and let all the Web
servers reference it. That way we only have to put things in one
place, and if we have 100 Web servers they can all reference the same
data.
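
As a small illustration, here is the kind of guard each Web server
could run at boot so it never serves without the shared content; the
mount point is a placeholder path:

import os
import sys

NFS_WEBROOT = "/mnt/nfs/webroot"   # hypothetical shared content mount

def check_shared_content():
    """Refuse to start serving until the NFS export is actually mounted."""
    if not os.path.ismount(NFS_WEBROOT):
        sys.exit("NFS webroot is not mounted; aborting Web server start")
    print("Serving content from " + NFS_WEBROOT)

if __name__ == "__main__":
    check_shared_content()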

1 Management Server. This server will update statistics on all the
servers, monitor each of them for problems, and alert us when
something goes bad. No more dead server at 11 PM that doesn't get
fixed until 7 AM.
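
Here is a bare-bones sketch of that monitoring loop; the host list,
health URLs, and alert address are placeholders:

import smtplib
import urllib2
from email.mime.text import MIMEText

SERVERS = ["http://web1/health", "http://web2/health", "http://db1/health"]
ALERT_TO = "ops@example.com"

def alert(server):
    """E-mail an alert for a server that failed its health check."""
    msg = MIMEText("Health check failed for %s" % server)
    msg["Subject"] = "Server down: %s" % server
    msg["From"] = ALERT_TO
    msg["To"] = ALERT_TO
    smtp = smtplib.SMTP("localhost")
    smtp.sendmail(ALERT_TO, [ALERT_TO], msg.as_string())
    smtp.quit()

def check_all():
    for url in SERVERS:
        try:
            urllib2.urlopen(url, timeout=5)
        except Exception:
            alert(url)

if __name__ == "__main__":
    check_all()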

2 Master Audio Servers. These servers will receive all of the audio
feed broadcasts that are hosted on the site. Our plan is to have one
master server for every 1000 audio feeds, and we can grow this as
needed.

2 Relay Audio Servers... to start. Relay servers are what you connect
to when you listen to a live audio feed. We can add as many of these
as we need to support the listener load - up to millions of
listeners. Our plan is to have 1 relay server per 3000 listeners.

3 Audio Archive Servers. The audio archive servers, well, archive all
the audio. Each is connected to a 1TB disk store. Our plan is to
have one archive server per 500 feeds.
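
For those who like the arithmetic, here is how those three ratios
translate into server counts; the sample feed and listener numbers
are hypothetical:

import math

def audio_servers_needed(feeds, listeners):
    masters = int(math.ceil(feeds / 1000.0))     # one master per 1000 feeds
    relays = int(math.ceil(listeners / 3000.0))  # one relay per 3000 listeners
    archives = int(math.ceil(feeds / 500.0))     # one archive server per 500 feeds
    return masters, relays, archives

if __name__ == "__main__":
    # e.g. 1500 feeds and 5000 concurrent listeners
    print(audio_servers_needed(1500, 5000))   # (2, 2, 3)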

So, many will ask: how much will this cost per month? I would
estimate that our charges will exceed $4,000/month. If we have to
scale to meet additional demand, we will pay for what we use. But
the benefits far outweigh the costs, and we will be prepared to scale
up for the large events and traffic that are invariably going to come
our way. We don't have a choice but to invest, and with our existing
services costing about $3,000/month, this is a great business move
for us.

And... many will ask, when is this going to happen? Well, half of our
audio infrastructure is already on the new system, and we've moved
all of the static Web content (logos, images, styles, etc.) to
Amazon's CloudFront. The rest of the infrastructure is already up
and running, but it is going through load testing to make sure things
go smoothly when we switch. I would expect that by the end of
September we will be fully moved over to this new environment, and we
will be welcoming hordes of new visitors and users.

Thanks to everyone for your support, and I welcome your feedback.

Warm regards,
 

INDY72

Monitoring since 1982, using radios since 1991.
Premium Subscriber
Joined
Dec 18, 2002
Messages
14,650
Location
Indianapolis, IN
Awesome!

This will also increase the speed of processes on the admin end! You're freaking amazing, boss!
 

bezking

Member
Joined
Aug 5, 2006
Messages
2,656
Location
On the Road
Lindsay,

I give you a heck of a lot of credit for managing such an infrastructure by yourself...
 

eorange

♦Insane Asylum Premium Member♦
Joined
Aug 20, 2003
Messages
2,942
Location
Cleveland, OH
I assume you're referring to the EC2 service. If so...do you need to upload an AMI for every fix and maintenance release you want to implement? Or is the AMI used to initially build your VM and then you have free control from there?
 

blantonl

Founder and CEO
Staff member
Super Moderator
Joined
Dec 9, 2000
Messages
11,115
Location
San Antonio, Whitefish, New Orleans
We have an AMI developed for every "role" that a server can have. They are stored in S3. For roles that we can spawn multiple machines for (Web Servers, Audio Slave Servers, Database Slave Servers) we just pass custom user data (such as name and elastic block store volume ID) when launching those instances and they provision themselves with their proper configuration.
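
As a rough sketch of that self-provisioning step, here is what an
instance might do at boot; the key names ("name", "ebs_volume") are
illustrative, not our actual user-data format:

import urllib2

# EC2 exposes the user data passed at launch through the instance
# metadata service at this well-known address.
METADATA_URL = "http://169.254.169.254/latest/user-data"

def load_user_data():
    """Fetch and parse "key=value" lines passed as launch user data."""
    raw = urllib2.urlopen(METADATA_URL, timeout=5).read()
    return dict(line.split("=", 1) for line in raw.splitlines() if "=" in line)

if __name__ == "__main__":
    cfg = load_user_data()
    print("Configuring host %s, attaching volume %s"
          % (cfg.get("name"), cfg.get("ebs_volume")))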
 

eorange

♦Insane Asylum Premium Member♦
Joined
Aug 20, 2003
Messages
2,942
Location
Cleveland, OH
I didn't ask my question clearly enough. What I meant was: do you need to upload a revised AMI every time you need to fix a software bug, or add a feature?
 

Mark

Member
Premium Subscriber
Joined
Jan 14, 2001
Messages
13,364
Location
Northeast Maryland
Lindsay, something tells me you have learned a lot about
computers and servers since the beginning. Probably more than
we want to know. LOL :)
Did you start with a single web server in your house?
Seems like yesterday I joined this site. My, how time flies!
Well done and keep up the good work.

Mark
Maryland USA
 

blantonl

Founder and CEO
Staff member
Super Moderator
Joined
Dec 9, 2000
Messages
11,115
Location
San Antonio, Whitefish, New Orleans
I didn't ask my question clearly enough. What I meant was: do you need to upload a revised AMI every time you need to fix a software bug, or add a feature?

No, all we need to do in that case is pull down SVN changes to the NFS server.

Remember, Amazon EC2 has the concept of Elastic Block Stores, which are persistent disks that you attach to EC2 instances.

The only reason we would need to update the AMI's themselves are for OS updates. All configurations and code reside on the NFS cluster.
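
In practice that deploy step is something like the following; the
working-copy path is a placeholder:

import subprocess

CODE_DIR = "/mnt/nfs/webroot"   # hypothetical NFS-mounted SVN working copy

def deploy():
    """Update the shared working copy; every Web server mounts the same
    NFS export, so they all see the new code immediately."""
    subprocess.check_call(["svn", "update", CODE_DIR])

if __name__ == "__main__":
    deploy()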
 

eorange

♦Insane Asylum Premium Member♦
Joined
Aug 20, 2003
Messages
2,942
Location
Cleveland, OH
Very good. I read about EC2, and they offer pretty comprehensive, a la carte hosting services.

It's interesting - in a good way - that Amazon is providing hosting services, because you know they already have a mature infrastructure supporting their primary business. Sounds like a good business model to me as far as choosing a hosting provider goes.

Good luck!
 

JLM7424

Member
Joined
Dec 14, 2004
Messages
349
Location
Williston,ND - Williams county
I hate to ask, Lindsay, but do subscriptions really cover enough for all this?

I have been here since '04 and can say RR is the next best thing to breathing.

I can only speak for myself, however I would pay double for this level of service.

Lindsay, thank you to you and the RR staff for all you do for our hobby.
Matt J.
 

PeterGV

K1PGV
Joined
Jul 10, 2006
Messages
754
Location
Mont Vernon, NH
Nice, Lindsay! Thanks for providing the details. Sounds like you've thought this out very well. Cloud-based services change the game a lot, don't they?

I've used S3 as a source for stored streaming audio/video (flash format) and it's worked surprisingly well across the globe, even without enabling Cloud Front.

The one thing this doesn't really address is scaling live audio distribution via Icecast, does it? Or do you plan to have a "master" Icecast server, and then have "endpoint" Icecast servers relay from the master... and fan the listeners out over the endpoint Icecast servers? With 3K users per server you're saturating a T3, so it seems like geographically distributing the listener load would be sensible (if I'm on the east coast, I connect to an east coast server, even if the feed I want to listen to is on the west coast). Of course, the root problem here is that Icecast is not really well suited to any of this, as far as I can tell.

Just curious, of course... thanks for sharing!

Peter
K1PGV
 

blantonl

Founder and CEO
Staff member
Super Moderator
Joined
Dec 9, 2000
Messages
11,115
Location
San Antonio, Whitefish, New Orleans
Peter,

Our Icecast infrastructure actually scales quite well. We have two Master Icecast Servers, and we can spin up quite a few relays without a problem. Geographically spreading the listener load isn't that big of a deal right now, and it would actually add to our costs, since distributing relays across different Amazon availability zones would cost us for the back-end bandwidth.

Icecast is pretty well suited for this - one of the issues with the MP3 streaming infrastructure is that it is rather difficult to migrate broadcasting clients over to new servers.
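
For reference, the back-of-the-envelope math on relay sizing works
out about like Peter describes, assuming a typical low-bitrate
scanner-audio MP3 stream (the 16 kbps figure is an assumption, not a
measured number):

BITRATE_KBPS = 16            # assumed per-listener stream bitrate
LISTENERS_PER_RELAY = 3000   # our planned ceiling per relay

mbps = BITRATE_KBPS * LISTENERS_PER_RELAY / 1000.0
print("%.0f Mbps per fully loaded relay" % mbps)   # ~48 Mbps, about a T3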
 