
KernelCare Security Update Issued

An update for KernelCare was just released to address CVE-2014-9322, and your OS should be updated automatically unless specified otherwise.

Multiple Ways to Back Up

Remote backup service

A remote, online, or managed backup service, sometimes marketed as cloud backup, is a service that provides users with a system for the backup, storage, and recovery of computer files. Online backup providers are companies that provide this type of service to end users (or clients). Such backup services are considered a form of cloud computing.

Online backup systems are typically built around a client software program that runs on a schedule, typically once a day, and usually at night while computers aren’t in use. This program typically collects, compresses, encrypts, and transfers the data to the remote backup service provider’s servers or off-site hardware.
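
As a rough illustration, this nightly cycle can be approximated with standard Unix tools. The following sketch is not any particular vendor’s client; the paths, host name, and passphrase file are placeholders:

# Hypothetical nightly job, e.g. run from cron at 02:00: collect and
# compress the data, encrypt it locally, then transfer only ciphertext
# to the (placeholder) backup host.
tar czf - /etc /home \
  | gpg --symmetric --cipher-algo AES256 --batch --passphrase-file /root/.backup-pass \
  | ssh backup@vault.example.com "cat > backups/$(hostname)-$(date +%F).tar.gz.gpg"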

Characteristics

Service-based

  1. The assurance, guarantee, or validation that what was backed up is recoverable whenever it is required is critical. Data stored in the service provider’s cloud must undergo regular integrity validation to ensure its recoverability.
  2. Cloud BUR (BackUp & Restore) services need to provide a variety of granularity when it comes to RTOs (Recovery Time Objectives). One size does not fit all, either for customers or for the applications within a customer’s environment.
  3. The customer should never have to manage the back end storage repositories in order to back up and recover data.
  4. The interface used by the customer needs to enable the selection of data to protect or recover, the establishment of retention times, destruction dates as well as scheduling.
  5. Cloud backup needs to be an active process where data is collected from systems that store the original copy. This means that cloud backup will not require data to be copied into a specific appliance from where data is collected before being transmitted to and stored in the service provider’s data centre.

Ubiquitous access

  1. Cloud BUR utilizes standard networking protocols (which today are primarily but not exclusively IP based) to transfer data between the customer and the service provider.
  2. Vaults or repositories need to be always available to restore data to any location connected to the Service Provider’s Cloud via private or public networks.

Scalable and elastic

  1. Cloud BUR enables flexible allocation of storage capacity to customers without limit. Storage is allocated on demand and also de-allocated as customers delete backup sets as they age.
  2. Cloud BUR enables a Service Provider to allocate storage capacity to a customer. If that customer later deletes their data or no longer needs that capacity, the Service Provider can then release and reallocate that same capacity to a different customer in an automated fashion.

Metered by use

  1. Cloud Backup allows customers to align the value of data with the cost of protecting it. It is procured on a per-gigabyte per month basis. Prices tend to vary based on the age of data, type of data (email, databases, files etc.), volume, number of backup copies and RTOs.

Shared and secure

  1. The underlying enabling technology for Cloud Backup is a full stack native cloud multitenant platform (shared everything).
  2. Data mobility/portability prevents service provider lock-in and allows customers to move their data from one Service Provider to another, or entirely back into a dedicated Private Cloud (or a Hybrid Cloud).
  3. Security in the cloud is critical. One customer can never have access to another’s data. Additionally, even Service Providers must not be able to access their customers’ data without the customer’s permission.

Enterprise-class cloud backup

An enterprise-class cloud backup solution must include an on-premise cache, to mitigate any issues due to inconsistent Internet connectivity.

Hybrid cloud backup is a backup approach combining local backup, for fast backup and restore, with off-site backup, for protection against local disasters. According to Liran Eshel, CEO of CTERA Networks, this ensures that the most recent data is available locally when recovery is needed, while archived data that is needed far less often is stored in the cloud.

Hybrid cloud backup works by storing data to local disk so that the backup can be captured at high speed, and then either the backup software or a D2D2C (Disk to Disk to Cloud) appliance encrypts and transmits data to a service provider. Recent backups are retained locally, to speed data recovery operations. There are a number of cloud storage appliances on the market that can be used as a backup target, including appliances from CTERA Networks, Nasuni, StorSimple and TwinStrata. Examples of Enterprise-class cloud backup solutions include StoreGrid, Datto, GFI Software’s MAX Backup, and IASO Backup.

Recent improvements in CPU availability allow increased use of software agents instead of hardware appliances for enterprise cloud backup.  The software-only approach can offer advantages including decreased complexity, simple scalability, significant cost savings and improved data recovery times. Examples of no-appliance cloud backup providers include Intronis and Zetta.net.

Typical features

Encryption
Data should be encrypted before it is sent across the internet, and it should be stored in its encrypted state. Encryption should use a key of at least 256 bits, and the user should have the option of using their own encryption key, which should never be sent to the server.
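
A minimal sketch of that model, assuming OpenSSL 1.1.1 or later for the -pbkdf2 flag: the key is generated and kept locally, and only the encrypted file is ever uploaded.

openssl rand -out /root/.backup.key 32          # 256-bit key; never leaves this machine
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in files.tar.gz -out files.tar.gz.enc \
  -pass file:/root/.backup.key                  # upload files.tar.gz.enc only
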
Network backup
A backup service supporting network backup can back up multiple computers, servers or Network Attached Storage appliances on a local area network from a single computer or device.
Continuous backup – Continuous Data Protection
Allows the service to back up continuously or on a predefined schedule. Both methods have advantages and disadvantages. Most backup services are schedule-based and perform backups at a predetermined time. Some services provide continuous data backups which are used by large financial institutions and large online retailers. However, there is typically a trade-off with performance and system resources.
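
One way to sketch the continuous approach on Linux is with inotify, pushing each file as soon as it is written. The inotify-tools package and the target host below are assumptions, not part of any specific service:

# Watch a directory tree and upload each file once it is closed after writing.
inotifywait -m -r -e close_write --format '%w%f' /home/user/documents \
  | while read -r changed; do
      rsync -az "$changed" backup@vault.example.com:documents/
    done
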
File-by-File Restore
The ability for users to restore files themselves, without the assistance of a Service Provider, by allowing the user to select files by name and/or folder. Some services allow users to select files by searching for filenames and folder names, by dates, by file type, by backup set, and by tags.
Online access to files
Some services allow you to access backed-up files via a normal web browser. Many services do not provide this type of functionality.
Data compression
Data will typically be compressed with a lossless compression algorithm to minimize the amount of bandwidth used.
Differential data compression
A way to further minimize network traffic is to transfer only the binary data that has changed from one day to the next, similar to the open-source file transfer utility rsync. More advanced online backup services use this method rather than transferring entire files.
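
For instance, rsync sends only the changed blocks of a file that already exists on the far side (the host and paths below are placeholders). The --bwlimit option also illustrates the bandwidth throttling described under the next heading:

# Only the blocks of db.dump that changed since the last run are transmitted;
# --bwlimit caps the transfer rate (in KiB/s), e.g. during business hours.
rsync -az --partial --bwlimit=5000 \
  /var/backups/db.dump backup@vault.example.com:db.dump
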
Bandwidth usage
User-selectable option to use more or less bandwidth; it may be possible to set this to change at various times of day.
Off-Line Backup
Off-line backup, along with and as part of the online backup solution, covers daily backups when the network connection is down. In this case the remote backup software must perform the backup onto a local media device such as a tape drive, a disk, or another server. The minute the network connection is restored, the remote backup software updates the remote data center with the changes from the off-line backup media.
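
A simplified sketch of this fallback logic follows; the host, mount point, and marker file are assumptions:

# If the backup host is reachable, back up over the network; otherwise
# write to local media and leave a marker so a later run can push the
# pending changes to the remote data center.
if ping -c 1 -W 5 vault.example.com >/dev/null 2>&1; then
    rsync -az /var/backups/ backup@vault.example.com:vault/
    rm -f /var/run/backup-pending-sync
else
    rsync -az /var/backups/ /mnt/offline-backup/
    touch /var/run/backup-pending-sync
fi
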
Synchronization
Many services support data synchronization allowing users to keep a consistent library of all their files across many computers. The technology can help productivity and increase access to data.

Common features for business users

Bulk restore
A way to restore data from a portable storage device when a full restore over the Internet might take too long.
Centralized management console
Allows for an IT department or staff member to monitor backups for the user.
File retention policies
Many businesses require a flexible file retention policy that can be applied to an unlimited number of groups of files called “sets”.
Fully managed services
Some services offer a higher level of support to businesses that might request immediate help, proactive monitoring, personal visits from their service provider, or telephone support.
Redundancy
Multiple copies of data backed up at different locations. This can be achieved by having two or more mirrored data centers, or by keeping a local copy of the latest version of backed up data on site with the business.
Regulatory compliance
Some businesses are required to comply with government regulations that govern privacy, disclosure, and legal discovery. A service provider that offers this type of service assists customers with proper compliance with and understanding of these laws.
Seed loading
Ability to send a first backup on a portable storage device rather than over the Internet when a user has large amounts of data that they need backed up quickly.
Server backup
Many businesses require backups of servers and the special databases that run on them, such as groupware, SQL, and directory services.
Versioning
Keeps multiple past versions of files to allow for rollback to or restoration from a specific point in time.

Cost factors

Online backup services are usually priced as a function of the following things:

  1. The total amount of data being backed up.
  2. The number of machines covered by the backup service.
  3. The maximum number of versions of each file that are kept.
  4. Data retention and archiving period options.
  5. Managed backups vs. unmanaged backups.
  6. The level of service and features available.

Some vendors limit the number of versions of a file that can be kept in the system. Other services impose no such restriction and keep an unlimited number of versions. Add-on features (plug-ins), like the ability to back up currently open or locked files, are usually charged as an extra, but some services provide this built in.

Most remote backup services reduce the amount of data to be sent over the wire by backing up only changed files. This approach also reduces the customer’s total stored data. The amount of data sent and stored can be reduced further, and drastically, by transmitting only the changed data bits via binary or block-level incremental backups. Solutions that transmit only these changed binary data bits do not waste bandwidth by retransmitting the same file data over and over when only small amounts change.
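
The changed-files half of this can be reproduced with stock tools such as rsync’s --link-dest snapshots, where unchanged files are hard-linked rather than stored again; the vault paths here are illustrative:

# Each day's snapshot hard-links unchanged files against the previous one,
# so only changed data is copied and stored.
today=$(date +%F)
rsync -az --delete --link-dest=/vault/latest /home/user/ "/vault/$today/"
ln -sfn "/vault/$today" /vault/latest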

Advantages

Remote backup has advantages over traditional backup methods:

  • Perhaps the most important aspect of backing up is that backups are stored in a different location from the original data. Traditional backup requires manually taking the backup media offsite.
  • Remote backup does not require user intervention. The user does not have to change tapes, label CDs or perform other manual steps.
  • Unlimited data retention (presuming the backup provider stays in business).
  • Backups are automatic.
  • Some remote backup services will work continuously, backing up files as they are changed.
  • Most remote backup services will maintain a list of versions of your files.
  • Most remote backup services will use 128- to 448-bit encryption to send data over unsecured links (i.e. the Internet).
  • A few remote backup services can reduce backup time and bandwidth by transmitting only changed binary data bits.

Disadvantages

Remote backup has some disadvantages over traditional backup methods:

  • Depending on the available network bandwidth, the restoration of data can be slow. Because data is stored offsite, the data must be recovered either via the Internet or via a disk shipped from the online backup service provider.
  • Some backup service providers have no guarantee that stored data will be kept private — for example, from employees. As such, most recommend that files be encrypted.
  • It is possible that a remote backup service provider could go out of business or be purchased, which may affect the accessibility of one’s data or the cost to continue using the service.
  • If the encryption password is lost, data recovery will be impossible. However, with managed services this should not be a problem.
  • Residential broadband services often have monthly limits that preclude large backups. They are also usually asymmetric; the user-to-network link regularly used to store backups is much slower than the network-to-user link used only when data is restored.
  • In terms of price, when looking at the raw cost of hard disks, remote backups cost about 1-20 times per GB what a local backup would.

Managed vs. unmanaged

Some services provide expert backup management services as part of the overall offering. These services typically include:

  • Assistance configuring the initial backup
  • Continuous monitoring of the backup processes on the client machines to ensure that backups actually happen
  • Proactive alerting in the event that any backups fail
  • Assistance in restoring and recovering data

Disaster recovery

Disaster recovery (DR) involves a set of policies and procedures to enable the recovery or continuation of vital technology infrastructure and systems following a natural or human-induced disaster. Disaster recovery focuses on the IT or technology systems supporting critical business functions, as opposed to business continuity, which involves keeping all essential aspects of a business functioning despite significant disruptive events. Disaster recovery is therefore a subset of business continuity.

Classification of disasters

Disasters can be classified into two broad categories. The first is natural disasters such as floods, hurricanes, tornadoes or earthquakes. While preventing a natural disaster is very difficult, risk management measures such as avoiding disaster-prone situations and good planning can help. The second category is man-made disasters, such as hazardous material spills, infrastructure failure, bio-terrorism, and disastrous IT bugs or failed change implementations. In these instances, surveillance, testing and mitigation planning are invaluable.

Importance of disaster recovery planning

Recent research supports the idea that implementing a more holistic pre-disaster planning approach is more cost-effective in the long run. Every $1 spent on hazard mitigation (such as a disaster recovery plan) saves society $4 in response and recovery costs.

As IT systems have become increasingly critical to the smooth operation of a company, and arguably the economy as a whole, the importance of ensuring the continued operation of those systems, and their rapid recovery, has increased. For example, of companies that had a major loss of business data, 43% never reopen and 29% close within two years. As a result, preparation for continuation or recovery of systems needs to be taken very seriously. This involves a significant investment of time and money with the aim of ensuring minimal losses in the event of a disruptive event.

Control measures

Control measures are steps or mechanisms that can reduce or eliminate various threats to organizations. Different types of measures can be included in a disaster recovery plan (DRP).

Disaster recovery planning is a subset of a larger process known as business continuity planning and includes planning for resumption of applications, data, hardware, electronic communications (such as networking) and other IT infrastructure. A business continuity plan (BCP) includes planning for non-IT related aspects such as key personnel, facilities, crisis communication and reputation protection, and should refer to the disaster recovery plan (DRP) for IT related infrastructure recovery / continuity.

IT disaster recovery control measures can be classified into the following three types:

  1. Preventive measures – Controls aimed at preventing an event from occurring.
  2. Detective measures – Controls aimed at detecting or discovering unwanted events.
  3. Corrective measures – Controls aimed at correcting or restoring the system after a disaster or an event.

A good disaster recovery plan dictates that these three types of controls be documented and exercised regularly in so-called “DR tests”.

Strategies

Prior to selecting a disaster recovery strategy, a disaster recovery planner first refers to their organization’s business continuity plan which should indicate the key metrics of recovery point objective (RPO) and recovery time objective (RTO) for various business processes (such as the process to run payroll, generate an order, etc.). The metrics specified for the business processes are then mapped to the underlying IT systems and infrastructure that support those processes.

Incomplete RTOs and RPOs can quickly derail a disaster recovery plan. Every item in the DR plan requires a defined recovery point and time objective, as failure to create them may lead to significant problems that can extend the disaster’s impact. Once the RTO and RPO metrics have been mapped to IT infrastructure, the DR planner can determine the most suitable recovery strategy for each system. For example, an RPO of 24 hours can be met with nightly off-site backups, whereas an RTO measured in minutes typically requires replication to a standby site. The organization ultimately sets the IT budget, and therefore the RTO and RPO metrics need to fit within the available budget. While most business unit heads would like zero data loss and zero time loss, the cost associated with that level of protection may make the desired high-availability solutions impractical. A cost-benefit analysis often dictates which disaster recovery measures are implemented.

Some of the most common strategies for data protection include:

  • backups made to tape and sent off-site at regular intervals
  • backups made to disk on-site and automatically copied to off-site disk, or made directly to off-site disk
  • replication of data to an off-site location, which overcomes the need to restore the data (only the systems then need to be restored or synchronized), often making use of storage area network (SAN) technology
  • Hybrid Cloud solutions that replicate both on-site and to off-site data centers. These solutions provide the ability to fail over instantly to local on-site hardware, but in the event of a physical disaster, servers can be brought up in the cloud data centers as well. Examples include Quorum, rCloud from Persistent Systems, and EverSafe.
  • the use of high availability systems which keep both the data and system replicated off-site, enabling continuous access to systems and data, even after a disaster (often associated with cloud storage)

In many cases, an organization may elect to use an outsourced disaster recovery provider to provide a stand-by site and systems rather than using their own remote facilities, increasingly via cloud computing.

In addition to preparing for the need to recover systems, organizations also implement precautionary measures with the objective of preventing a disaster in the first place. These may include:

  • local mirrors of systems and/or data and use of disk protection technology such as RAID
  • surge protectors — to minimize the effect of power surges on delicate electronic equipment
  • use of an uninterruptible power supply (UPS) and/or backup generator to keep systems going in the event of a power failure
  • fire prevention/mitigation systems such as alarms and fire extinguishers
  • anti-virus software and other security measures

Basic operations in OpenVZ environment

This article assumes you have already installed OpenVZ. If not, see the OpenVZ installation section below to perform the steps needed.

Create and start a container

To create and start a container, run the following commands:

[host-node]# vzctl create CTID --ostemplate osname
[host-node]# vzctl set CTID --ipadd a.b.c.d --save
[host-node]# vzctl set CTID --nameserver a.b.c.d --save
[host-node]# vzctl start CTID

Here CTID is the numeric ID for the container; osname is the name of the OS template for the container, and a.b.c.d is the IP address to be assigned to the container.

Example:

[host-node]# vzctl create 101 --ostemplate fedora-core-5-minimal
[host-node]# vzctl set 101 --ipadd 10.1.2.3 --save
[host-node]# vzctl set 101 --nameserver 10.0.2.1 --save
[host-node]# vzctl start 101

Your freshly-created container should be up and running now; you can see its processes:

[host-node]# vzctl exec CTID ps ax

Enter and exit the container

To enter the container, run the following command:

[host-node]# vzctl enter CTID
entered into container CTID
[container]#

To exit from the container, just type exit and press Enter:

[container]# exit
exited from container VEID
[host-node]#

Stop and destroy the container

To stop the container:

[host-node]# vzctl stop CTID
Stopping container ...
Container was stopped
Container is unmounted

And to destroy the container:

[host-node]# vzctl destroy CTID
Destroying container private area: /vz/private/CTID
Container private area was destroyed

OpenVZ installation

Requirements

This guide assumes you are running RHEL (CentOS, Scientific Linux) 6 on your system. Currently, this is the recommended platform for running OpenVZ.

/vz file system

It is recommended to use a separate partition for containers (by default /vz) and format it to ext4.
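
For example, assuming the spare disk is /dev/sdb1 (adjust to your hardware):

mkfs.ext4 /dev/sdb1
mkdir -p /vz
echo "/dev/sdb1 /vz ext4 defaults 0 2" >> /etc/fstab
mount /vz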

yum pre-setup

Download the openvz.repo file and put it into your /etc/yum.repos.d/ directory:

wget -P /etc/yum.repos.d/ http://ftp.openvz.org/openvz.repo

Import OpenVZ GPG key used for signing RPM packages:

rpm --import http://ftp.openvz.org/RPM-GPG-Key-OpenVZ

Kernel installation

Limited OpenVZ functionality is supported when you run a recent 3.x upstream kernel (see vzctl for upstream kernel), so OpenVZ kernel installation is optional, but still recommended.

# yum install vzkernel

System configuration

Please make sure the following steps are performed before rebooting into OpenVZ kernel.

sysctl

There are a number of kernel parameters that should be set for OpenVZ to work correctly. These parameters are stored in the /etc/sysctl.conf file. Here are the relevant portions of the file; please edit accordingly.

# On Hardware Node we generally need
# packet forwarding enabled and proxy arp disabled
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.default.proxy_arp = 0

# Enables source route verification
net.ipv4.conf.all.rp_filter = 1

# Enables the magic-sysrq key
kernel.sysrq = 1

# We do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
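
After editing the file, the settings can be applied without a reboot:

sysctl -p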

SELinux

SELinux should be disabled. Set SELINUX=disabled in /etc/sysconfig/selinux:

echo "SELINUX=disabled" > /etc/sysconfig/selinux

Tools installation

Before installing tools, please read about vzstats and opt out if you don’t want to help the project.

OpenVZ needs some user-level tools installed:

# yum install vzctl vzquota ploop

Reboot into OpenVZ

Now reboot the machine and choose “OpenVZ” on the boot loader menu (it should be the default choice).
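
Once the machine is back up, you can confirm that the OpenVZ kernel is running; the exact version string will differ on your system:

# uname -r
2.6.32-042stab084.17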

Download OS templates

An OS template is a Linux distribution installed into a container and then packed into a gzipped tarball. Using such a cache, a new container can be created in a minute.

Download a precreated template directly from download.openvz.org/template/precreated, or from one of the mirrors. Put the tarballs as-is (no unpacking needed) into the /vz/template/cache/ directory.
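
For example (the template name is illustrative; pick whichever distributions your containers need):

wget -P /vz/template/cache/ http://download.openvz.org/template/precreated/centos-6-x86_64.tar.gz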