California Solar Powered Data Centre Specification

Solar Powered Data Centre
24h System Monitoring
Cisco Powered Network
VMware ESX Platform with VMotion
Technical Overview & Diagram
Server Backup


Solar Powered Data Centre

All power for the data centre's offices, air conditioners and computer equipment (network and servers) is provided by 120 solar panels, owned and operated by the data centre, capable of generating up to 60 kilowatt-hours of electricity per day.
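
As a rough sanity check on those figures, here is a minimal Python sketch; the sun-hours value is an illustrative assumption, not a published specification.

    # Rough sanity check on the stated figures (illustrative assumptions only).
    PANELS = 120
    DAILY_OUTPUT_KWH = 60          # stated maximum for the whole array

    per_panel_kwh = DAILY_OUTPUT_KWH / PANELS
    print(f"Average output per panel: {per_panel_kwh:.2f} kWh/day")  # 0.50 kWh/day

    # Assuming ~5 usable sun-hours per day (a guess, not a published figure),
    # each panel delivers roughly 100 W on average while the sun is up.
    SUN_HOURS = 5
    print(f"Implied average panel power: {per_panel_kwh / SUN_HOURS * 1000:.0f} W")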

DC power from the solar panel feed is stored in battery banks. A large APC UPS (Uninterruptible Power Supply) system also provides power backup and conditions the supply, protecting equipment from voltage fluctuations and spikes.

In an emergency, power is also available from natural gas standby generators (tested weekly), but in practice these have never been used.

Solar tubes have been installed, eliminating the need for electric lighting during daylight hours and saving even more energy.


24h System Monitoring

The data centre monitors our servers every minute, 24 hours a day, 7 days a week, using multiple independent, redundant monitoring systems. Alerts are sent to data centre technicians immediately in the event of any equipment malfunction.

We also remotely monitor all servers every minute from the UK and multiple other locations around the world. Any issue found and confirmed by several locations generates alert messages to us and/or our service desk partner, as sketched below.
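
As a rough illustration, quorum-based alerting of this kind can be sketched in a few lines of Python; the probe locations, threshold and URL below are made up for illustration and are not the actual monitoring setup.

    # Hypothetical sketch: alert only when several independent locations agree
    # that a server is down, filtering out problems local to a single probe.
    import urllib.request

    LOCATIONS = ["uk-probe", "us-probe", "sg-probe"]   # illustrative names
    QUORUM = 2                                         # confirmations required

    def check_from(location: str, url: str) -> bool:
        """Pretend each probe location performs its own HTTP health check."""
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except OSError:
            return False

    def confirmed_down(url: str) -> bool:
        failures = sum(not check_from(loc, url) for loc in LOCATIONS)
        return failures >= QUORUM

    if confirmed_down("https://example.com/health"):   # placeholder URL
        print("ALERT: outage confirmed by multiple locations")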


Cisco Powered Network

Fully redundant Cisco-powered network with multi-homed connectivity to the Internet backbone: no single point of failure; multiple redundant Cisco 7200 series routers, Cisco PIX firewalls and Cisco switches.

The high-availability network infrastructure is fully redundant, with automated failover and full BGP (Border Gateway Protocol) routing across three Internet service links (two wired and one very high-speed wireless).

The data centre's uptime guarantee to Athenaeum (Ecological Hosting) is 99.9+%.


VMware ESX Platform with VMotion


The data centre operates VMware virtual server technology on multi-processor IBM xSeries servers with large amounts of onboard memory (RAM) and multi-terabyte, RAID-protected Fibre Channel SANs (storage area networks) from NetApp.

Compared to large numbers of individual physical servers, this solution provides massive savings in data centre power consumption, as well as reductions in noise and other environmental impacts. In combination with the SAN and specialist software, each physical server can run multiple virtual servers through hardware emulation. The result is greater efficiency: fuller use of the available resources, considerably lower energy consumption and less equipment overall.

VMware VMotion brings a whole new level of redundancy to our server setup. If a physical server is struggling under load, or in the unlikely event of a major hardware failure, the virtual servers hosted on the failing or failed machine are automatically switched to run on a different physical host. This is possible because the servers are virtual and all of their data is stored away from the physical host, in the SAN storage arrays.
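
In outline, the failover idea can be pictured with the following Python sketch; the host names and logic are hypothetical, since the real decisions are made inside VMware's management layer rather than in user code.

    # Hypothetical sketch of the VMotion/failover idea: because a virtual
    # server's disks live on the shared SAN, it can be re-homed to any host.
    from dataclasses import dataclass, field

    @dataclass
    class Host:
        name: str
        healthy: bool = True
        load: float = 0.0                      # fraction of capacity in use
        vms: list = field(default_factory=list)

    def rehome_vms(failed: Host, others: list) -> None:
        """Move every VM off a failed or overloaded host to the least-loaded
        healthy host; only execution moves, the disk state stays on the SAN."""
        for vm in list(failed.vms):
            target = min((h for h in others if h.healthy), key=lambda h: h.load)
            failed.vms.remove(vm)
            target.vms.append(vm)
            print(f"{vm}: {failed.name} -> {target.name}")

    a, b = Host("esx-a", healthy=False, vms=["web1", "db1"]), Host("esx-b")
    rehome_vms(a, [b])                         # web1 and db1 move to esx-b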

The SAN also brings additional benefits: multiple snapshots are taken of each virtual server stored on the SAN throughout the day, enabling a server to be restored to an earlier point in time almost instantaneously if needed. These snapshots are created by the SAN hardware and are completely independent of the servers.
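
The retention side of such a snapshot scheme can be sketched as follows; the hourly interval and the number of snapshots kept are assumptions for illustration, and as noted above the real snapshots are created by the SAN hardware, not by software like this.

    # Hypothetical snapshot retention: keep a rolling window of snapshots
    # taken throughout the day; restoring just means picking one.
    from collections import deque
    from datetime import datetime

    KEEP = 24                                  # assumed: a day of hourly snapshots
    snapshots = deque(maxlen=KEEP)             # oldest entries fall off the end

    def take_snapshot(volume: str, when: datetime) -> None:
        snapshots.append((volume, when))

    def restore_point_before(cutoff: datetime):
        """Restoration is near-instant: select an existing snapshot."""
        candidates = [s for s in snapshots if s[1] <= cutoff]
        return max(candidates, key=lambda s: s[1]) if candidates else None

    take_snapshot("vm-web1", datetime(2008, 1, 1, 9, 0))
    take_snapshot("vm-web1", datetime(2008, 1, 1, 10, 0))
    print(restore_point_before(datetime(2008, 1, 1, 9, 30)))  # the 09:00 snapshot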

Athenaeum (Ecological Hosting) has full remote control over our servers, including remote console access (so even if a server is effectively off the Internet for some reason, we still have access as if we were sitting in front of the machine), the ability to power cycle servers, and server resource usage monitoring.


Technical Overview & Diagram

Below is a basic network diagram showing how the data centre network is laid out. The explanations that follow should be read in conjunction with the diagram.

[Network overview diagram]

There are three separate Internet backbone links running BGP (Border Gateway Protocol), which routes all traffic coming in and out of the network via the shortest possible path. BGP also provides complete redundancy: if one or two of the backbone links go down, the other(s) continue to handle the traffic, as illustrated below.
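
As a toy illustration of that behaviour, the sketch below picks the live route with the shortest AS path; real BGP best-path selection weighs many attributes, and the link names and AS numbers here are invented.

    # Toy BGP best-path selection: prefer the shortest AS path among the
    # links that are up, so traffic shifts automatically when a link fails.
    routes = {
        "link-1 (wired)":    {"up": True, "as_path": [64501, 64510]},
        "link-2 (wired)":    {"up": True, "as_path": [64502, 64520, 64510]},
        "link-3 (wireless)": {"up": True, "as_path": [64503, 64510]},
    }

    def best_route() -> str:
        live = {name: r for name, r in routes.items() if r["up"]}
        return min(live, key=lambda name: len(live[name]["as_path"]))

    print(best_route())                        # a shortest-path link wins
    routes["link-1 (wired)"]["up"] = False
    routes["link-3 (wireless)"]["up"] = False
    print(best_route())                        # the surviving link takes over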

The three separate Internet connections then feed two redundant Cisco 7200 series routers, which use HSRP (Hot Standby Router Protocol) so that one can take over from the other during a failure. From there, each router connects to a separate trunked switch, allowing one switch to go down and the other to take over.
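
The active/standby idea behind HSRP can be pictured roughly as follows; this is an illustrative Python sketch of the concept, not the protocol itself (real HSRP elects the active router by priority using hello packets on the wire).

    # Rough sketch of the HSRP idea: two routers back one virtual gateway;
    # the standby takes over when the active router stops sending hellos.
    import time

    HOLD_TIME = 10.0                           # seconds without hellos => failover

    class Router:
        def __init__(self, name: str, priority: int):
            self.name, self.priority = name, priority
            self.last_hello = time.monotonic() # updated by each hello received

        def alive(self) -> bool:
            return time.monotonic() - self.last_hello < HOLD_TIME

    def active_router(routers):
        """The highest-priority live router answers for the gateway address."""
        live = [r for r in routers if r.alive()]
        return max(live, key=lambda r: r.priority)

    pair = [Router("router-a", priority=110), Router("router-b", priority=100)]
    print(active_router(pair).name)            # router-a, until its hellos stop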

Two separate Cisco PIX firewalls operated by the data centre block all but the required ports and monitor each other, so if one goes down the other takes over. From the firewalls, traffic passes to another pair of separate trunked switches, which fail over from one to the other if one fails.
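
The "block all but the required ports" policy is simply a default-deny rule set. A minimal sketch, with an allowed-port list that is an assumption for illustration rather than the data centre's real policy:

    # Default-deny sketch: permit only explicitly allowed ports, drop the rest.
    ALLOWED_TCP_PORTS = {22, 25, 80, 443}      # illustrative, not the real list

    def permit(packet: dict) -> bool:
        """Return True only for traffic to an explicitly allowed TCP port."""
        return packet.get("proto") == "tcp" and packet.get("dport") in ALLOWED_TCP_PORTS

    print(permit({"proto": "tcp", "dport": 443}))    # True  - web traffic
    print(permit({"proto": "tcp", "dport": 3389}))   # False - denied by default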

At this point the servers connect to these switches as follows. Each server has two dual-port NICs (network interface cards). NIC 1 port 1 is teamed for failover with NIC 2 port 1, and NIC 1 port 2 is teamed for failover with NIC 2 port 2. NIC 1 port 1 and NIC 2 port 2 connect to Switch A-M, while NIC 1 port 2 and NIC 2 port 1 connect to Switch B-M. The two separate two-port teams are then teamed together for complete redundancy, so that every team spans both NIC cards and both switches (see the sketch below).
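
Laid out as data, the cabling pattern reads more easily; this sketch simply restates the paragraph above using the switch names from the diagram, and the check confirms that each team spans both NIC cards and both switches.

    # The teaming/cabling pattern above, expressed as a table: every team
    # survives the loss of either NIC card or either switch.
    wiring = {
        ("NIC 1", "port 1"): "Switch A-M",
        ("NIC 2", "port 2"): "Switch A-M",
        ("NIC 1", "port 2"): "Switch B-M",
        ("NIC 2", "port 1"): "Switch B-M",
    }
    teams = {
        "team 1": [("NIC 1", "port 1"), ("NIC 2", "port 1")],
        "team 2": [("NIC 1", "port 2"), ("NIC 2", "port 2")],
    }

    for name, members in teams.items():
        cards = sorted({nic for nic, _ in members})
        switches = sorted({wiring[m] for m in members})
        print(f"{name}: spans {cards} and {switches}")   # both cards, both switches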

All servers are clustered together, and each server has two separate SAN HBA (host bus adapter) cards giving it redundant connectivity to the SAN. The SAN is a high-speed sub-network of shared storage devices: machines that contain nothing but RAID hard disks for storing data. The SAN's architecture makes all storage devices available to all of the clustered servers. Because the data does not reside directly on any of the clustered servers, any server can go down and the remaining servers in the cluster will take over and balance the load.
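
The redundant-path idea can be sketched as below; the function names are hypothetical, since real multipathing is handled by the HBA drivers and the SAN fabric rather than by application code.

    # Sketch of SAN multipathing: two HBAs give two independent paths to
    # every storage device, so I/O falls back to the surviving path.
    FAILED_PATHS = {"hba0"}                    # simulate a dead HBA or cable

    def san_read(hba: str, lun: str, block: int) -> bytes:
        """Stand-in for a real HBA driver read."""
        if hba in FAILED_PATHS:
            raise IOError(f"{hba}: path down")
        return b"data"                         # pretend payload

    def read_block(lun: str, block: int, paths=("hba0", "hba1")) -> bytes:
        for hba in paths:
            try:
                return san_read(hba, lun, block)
            except IOError:
                continue                       # try the other path
        raise IOError(f"all paths to {lun} failed")

    print(read_block("lun0", 42))              # succeeds over hba1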


Server Backup

In addition to the server disaster recovery snapshots, overnight processes back up the data on each server to a remote location, over secured channels and onto encrypted storage.

Important note: clients are responsible for their data and should keep full copies of their web sites and any associated data (e.g. databases) safe on their own systems. The backups referred to here are intended for our own disaster recovery only, although on request we can sometimes restore data for clients from them (we store up to 60 days' worth of backups); we charge for our time to do this at our published level 2 support consultancy rates.
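
The 60-day retention side of such a process might look like the following sketch; the directory layout and one-archive-per-night convention are assumptions, and this is not our actual backup tooling.

    # Hypothetical retention pruning: keep up to 60 days of nightly backups.
    import os
    import time

    BACKUP_DIR = "/backups"                    # assumed: one archive per night
    RETENTION_DAYS = 60

    def prune(now=None):
        now = now or time.time()
        cutoff = now - RETENTION_DAYS * 86400
        for name in os.listdir(BACKUP_DIR):
            path = os.path.join(BACKUP_DIR, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)                # older than 60 days: drop it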
