originally posted by: jimmyx
a reply to: StargateSG7
absolutely, and you are right to have those backups, modem (if needed) and extra router, backup laptop, backup hard drive that you can switch out (a few screws along with a data cable and power plug)...even a layman could be shown how to do this in ten minutes.....this isn't rocket science....if you are going to buy a laptop, always make sure it has a removable hard drive and a backup, along with a backup battery...don't depend on the "cloud"
=====
Your comment brings up a MASSIVE ISSUE with larger corporations that think everything is Hunky-Dory on the data centre front when it comes to their IT infrastructure. So much work and maintenance has been outsourced and pushed into the cloud that if there is ANY single failure in the system, there is a massive snowball effect that can bring down the ENTIRE system.
The company I work for NEVER lets go of its IT infrastructure! ALL OF IT is done in-house, from basic backups, to hot-site hardware backups with MULTIPLE servers ready to take over in case of failure, all the way to triple-AES encryption (3 x 256-bit AES) of ALL data on every disk in case of theft or network infiltration.
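To make the "three layers of 256-bit AES" idea concrete, here is a minimal Python sketch assuming the third-party cryptography package and AES-256-GCM; the mode, the helper names and the key handling are illustrative assumptions only, not a description of any particular in-house setup.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def triple_encrypt(plaintext: bytes, keys: list) -> bytes:
    # Apply three independent AES-256-GCM layers, innermost first.
    data = plaintext
    for key in keys:                 # three separate 32-byte (256-bit) keys
        nonce = os.urandom(12)       # fresh nonce for every layer
        data = nonce + AESGCM(key).encrypt(nonce, data, None)
    return data

def triple_decrypt(blob: bytes, keys: list) -> bytes:
    # Peel the layers off again in reverse key order.
    data = blob
    for key in reversed(keys):
        nonce, ciphertext = data[:12], data[12:]
        data = AESGCM(key).decrypt(nonce, ciphertext, None)
    return data

keys = [AESGCM.generate_key(bit_length=256) for _ in range(3)]
sealed = triple_encrypt(b"client video metadata", keys)
assert triple_decrypt(sealed, keys) == b"client video metadata"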
EVERY company (SMALL ONES ESPECIALLY!) should at the very least have an extra motherboard and power supply for its Windows Server 2012 or Linux server machines, keep ONE full on-site and one off-site image backup of the installed operating system, and have all of its documents backed up to MULTIPLE hard drives stored both on-site and off-site.
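As a bare-bones illustration of the "one image, multiple drives" part, here is a small Python sketch that copies an image file to several mount points and verifies each copy by checksum; the paths and filename are hypothetical placeholders, not anyone's real layout.

import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def replicate_image(image: Path, destinations: list) -> None:
    # Copy the image to every destination drive and verify each checksum.
    source_digest = sha256(image)
    for dest_dir in destinations:
        target = dest_dir / image.name
        shutil.copy2(image, target)          # keeps timestamps as well
        if sha256(target) != source_digest:
            raise RuntimeError("checksum mismatch on %s" % target)

# Hypothetical mount points: two on-site drives plus one rotated off-site.
replicate_image(
    Path("/backups/server2012-image.vhdx"),
    [Path("/mnt/onsite_a"), Path("/mnt/onsite_b"), Path("/mnt/offsite_rotation")],
)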
Total cost for such a system would be less than $10,000, and it will save you $50,000+ in headaches if you have, say, a lightning strike and need to recover your damaged servers (this actually happened to us!).
In our case the longest we would be out is 4 to 5 hours, and the only reason for that is that it takes 1.5 hours to restore the "Last Known Good" server drive image to a backup motherboard/power supply and 2.5+ hours to restore TWO PETABYTES (2,000+ terabytes) of client and in-house video data.
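A quick back-of-envelope on what those restore windows work out to (Python, figures taken straight from the numbers above):

image_restore_hours = 1.5
video_restore_hours = 2.5
video_bytes = 2000 * 10**12          # "2,000+ terabytes" of video data

total_hours = image_restore_hours + video_restore_hours
rate_gb_per_s = video_bytes / (video_restore_hours * 3600) / 10**9

print("Total restore window: %.1f hours" % total_hours)
print("Implied restore rate: %.0f GB/s" % rate_gb_per_s)
# -> about 4 hours total and roughly 222 GB/s aggregate, which is only
#    reachable when the restore is spread across a very wide parallel
#    disk array and network fabric.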
If our hardwired network connection goes down, we still have backup routers connected to MULTIPLE wireless smartphones on 20 megabit data plans, which will give us temporary aggregated 100 megabit internet bandwidth within 10 minutes of going down. Five phones on an $80/month unlimited plan (each at 20 megabits download and 8 megabits upload) is only $400 per month for peace of mind.
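Spelled out in a few lines of Python (the 10 GB transfer used for comparison is just a made-up example):

phones = 5
down_mbit_each, up_mbit_each = 20, 8
plan_per_month = 80                          # dollars, unlimited plan

aggregate_down = phones * down_mbit_each     # 100 megabits down
aggregate_up = phones * up_mbit_each         # 40 megabits up
monthly_cost = phones * plan_per_month       # $400/month

# Example: moving a 10 GB (80,000 megabit) file over the bonded link
# versus over a single phone.
transfer_mbit = 10 * 8 * 1000
print("Bonded link: %.0f minutes" % (transfer_mbit / aggregate_down / 60))
print("Single phone: %.0f minutes" % (transfer_mbit / down_mbit_each / 60))
print("Cost: $%d/month for %d/%d megabits down/up"
      % (monthly_cost, aggregate_down, aggregate_up))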
Our UPS/generator can also power the IT centre for at least two weeks on a 3,000-litre propane tank (10,000-watt generator at continuous 100% duty cycle), so we are set for uptime in a wide-area power failure disaster.
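A rough fuel sanity check on the two-week figure (Python); the tank size and load are from above, but the burn rate is an assumed ballpark for a generic 10 kW propane genset at full load, since no specific model is named here.

LITRES_PER_US_GALLON = 3.785

tank_litres = 3000
assumed_burn_gal_per_hour = 1.9          # ballpark full-load figure (assumption)

tank_gallons = tank_litres / LITRES_PER_US_GALLON
runtime_hours = tank_gallons / assumed_burn_gal_per_hour
print("~%.0f US gal -> ~%.0f hours (~%.0f days) at full load"
      % (tank_gallons, runtime_hours, runtime_hours / 24))
# -> roughly 417 hours, a bit over 17 days, so two weeks at 100% duty
#    cycle is in the right ballpark under that burn-rate assumption.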
We've also installed 60 km repeaters on multiple private properties so we can hop our internet over a private WiMAX-based VPN connection to outside of the city in case the entire city goes down. It's slow at only 7 megabits/second per repeater over 60 km line of sight, or 3 megabits/second at 8 to 12 km in mountainous or city terrain, but it does work!
Use these:
www.zyxel.com...
Cost: $5,000 to $15,000 per repeater depending on the long-range antenna and power configuration. If you buy the cheapie versions with cheap antennas, you will get only 1 to 2 megabits a second at a maximum range of 10 km (6 miles).
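The "60 km line of sight" part is the real constraint, and it is mostly about antenna height. A standard 4/3-earth radio-horizon approximation is d_km ≈ 4.12 × sqrt(h_metres); splitting the path between two masts of equal height over flat terrain is a simplifying assumption for this sketch.

def required_height_m(path_km):
    # Invert d_km = 4.12 * sqrt(h_m), splitting the path between two
    # equal-height masts that each see half the distance to the horizon.
    return (path_km / 2 / 4.12) ** 2

for path_km in (12, 60):
    print("%d km hop -> each mast roughly %.0f m above flat terrain"
          % (path_km, required_height_m(path_km)))
# -> about 2 m for a 12 km hop but about 53 m for the 60 km hop, which is
#    why the long links need hilltop sites and the pricier antenna setups.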
If WE can do that, then WHY can't the NYSE, which is many times bigger than we are? It's just common sense to create a backup system and TRAIN people how to implement it. We are fewer than 10 people and small potatoes money-wise! You would think that $500,000 and some decent IT guys SHOULD be able to fix the NYSE's IT downtime problem!
edit on 2015/7/9 by StargateSG7 because: sp