Providing Open Source High-Availability Software for Linux and other OSes since 1999.


This web page is no longer maintained. Information presented here exists only to avoid breaking historical links.
The project itself is still maintained and lives on: see the Linux-HA Reference Documentation.
To avoid this notice, you may want to browse the old wiki instead.

1 February 2010 Heartbeat 3.0.2 released; see the Release Notes

18 January 2010 Pacemaker 1.0.7 released; see the Release Notes

16 November 2009 LINBIT becomes the new Heartbeat steward; see the Announcement

Last site update:
2022-05-27 17:51:03

Pacemaker is the Project Successor

  • Note: Pacemaker is the project successor of "Heartbeat version 2"

If you are looking for documentation and information for the "Version 2" Heartbeat, you should be reading the Pacemaker home page.

If you are stuck with Heartbeat 2.1.3 or 2.1.4, the best-fitting documentation is probably that for Pacemaker 0.6, which can also be found on the Pacemaker Documentation page.

Quoting the first paragraph from v2 (in this wiki):

Special Note for Version 2 Users

The CRM is now maintained as an independent project called Pacemaker, and has gained many new features, including support for the OpenAIS cluster stack.

Heartbeat 2.1.4 was the last release to contain the CRM (the "Version 2 Resource Manager"). Since then, all development and maintenance has been performed as part of the Pacemaker project, and all CRM code has been removed from Heartbeat.

For more details on Pacemaker, including the latest versions, installation details, and documentation, please visit the Pacemaker home page.

Linux-HA Release 2 Fact Sheet

Linux-HA provides sophisticated high-availability (failover) capabilities on a wide range of platforms, supporting several tens of thousands of mission-critical sites all over the globe. A few of these are documented on our success stories page.

Linux-HA is the oldest, most capable, and best-tested open-source HA solution available, and has the largest associated community. By project policy, it always compiles with no warnings and with no faults found by static analysis. The source code is periodically scrutinized by security experts.

It monitors cluster nodes and applications, and provides a sophisticated dependency model with a rule-based resource placement scheme. When a fault occurs or a rule changes, the user-supplied rules are followed to restore the desired resource placement in the cluster.

It is generally at least as capable and easy-to-use as most commercial clustering offerings such as Veritas VCS, SunCluster, LifeKeeper, ServiceGuard and others.


  • Works on all known Linux variants and platforms, and is supplied with many distributions.
  • Sub-second server failure detection
  • Sophisticated modeling of service inter-dependencies; resources are always started and stopped quickly and in the correct order.
  • Supports both co-location and anti-co-location constraints.
  • Policy-based resource placement puts services exactly where you want them based on dependencies, user-defined rules, and (user-defined) node attributes and location constraints.
  • Fine-grained resource management can include user-defined attributes, meaning that failover can be triggered by any user-defined criteria.
  • Time-based rules allow different policies depending on the time. You can vary both where resources are placed and how failback behaves; for example, failback can be delayed until night or the weekend.
  • Easy-to-understand and deploy init-based application management

    - most services require no application scripts for basic management.

  • Resource groups provide simple, easy-to-use service management for straightforward environments.
  • Active fencing mechanism (STONITH) provides strong data integrity guarantees, even in unusual failure cases - stronger than simple quorum tie-breakers.

  • Full-featured GUI - for configuring, controlling, and monitoring HA services and servers

  • CIM (Common Information Model) support for industry-standard Systems Management support

  • Open source implementation avoids vendor lock-in and provides great flexibility, stability, responsiveness and testing.
  • Specific support for cluster-aware applications using clone resources - coordinating and keeping peer applications informed
  • Integrated support for LVS load balancing

  • Integrated and easy-to-use support for ClusterIP load distribution

  • Integrated and easy-to-use support for DRBD high-integrity host-based data replication

  • Special support for replication and similar resources using master/slave resources - providing greater data integrity
  • Long history on Linux and strong reputation for robustness.
  • Works on standalone machines to provide easy-to-use init-script-based service monitoring and restart.
  • Supports geographic (split-site) clusters including reliable automated failover.
  • Also available for FreeBSD, OpenBSD and Solaris. Portable to other POSIX-like platforms.
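As an illustration of the init-based model, a v1-style haresources line (still supported) simply names resources after their init scripts. The node name and IP address below are placeholders, not recommendations:

```
# /etc/ha.d/haresources - node name and IP are placeholders
# Resources start left-to-right on node1 and stop right-to-left on failover.
node1 IPaddr::192.168.1.100 apache
```

Here `apache` is the ordinary /etc/init.d/apache script, which is why most services need no special application scripts.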

Hardware Requirements

Supported Processors

Linux-HA is used on a huge range of platforms, from ARM processors through mainframes. We test extensively on ia32, PowerPC, and System z (mainframe) platforms, with test runs lasting from 8 hours to 8 days, and perform basic testing of each release on every platform supported by SUSE Linux:

  • ia32, ia64, x86_64, pSeries, zSeries mainframes

Beginning with release 2, we have extended our automated exhaustive testing procedures to OpenPower platforms, and have recently begun such testing on System z servers.

Linux-HA is portable to many platforms, and we treat portability bugs seriously. Patches to fix portability bugs are welcomed.

Data Sharing Arrangements

Linux-HA has no special shared disk requirements.

It supports (at least) the following data sharing configurations:

  • no shared data
  • replication (DRBD, or application-specific)

  • SCSI RAID controllers supporting clustering (IBM ServeRAID, ICP Vortex)

  • External RAID units - SCSI, Fiber Channel - any kind

The only requirement we place on shared disks is that they support mount and umount. In particular, Linux-HA does not rely on SCSI reservations (or their equivalent).

Supported Software Platforms

Linux-HA is highly portable, and runs on many POSIX-like platforms. It is best supported (and works best) on Linux - virtually any version. The build system creates RPMs and Debian packages automatically, and it is also integrated into the Gentoo Linux build system, among others. Linux-HA is provided natively with SUSE Linux, Mandriva Linux, TurboLinux, Red Flag Linux, Debian, Gentoo and a few other Linux distributions, and is a standard part of many Linux-based products.

It also works on FreeBSD, Solaris, and Mac OS X.

RAM Requirements

Heartbeat runs in whatever memory your OS and applications need, plus about 16 megabytes more. Although it is relatively lightweight, Linux-HA locks certain core components into memory.

Software Requirements

Special libraries

The RPMs document this best. If you don't want to install some of these libraries, many of the dependencies can be eliminated by rebuilding from source. The only two slightly unusual mandatory dependencies are glib2 and libnet >= 1.1. The GUI and CIM support require the GNU TLS libraries.

The STONITH plugins depend on the libraries of the devices they drive, but any given installation needs few (or none) of them, and Autoconf will not build plugins whose libraries are missing.

Kernel Versions

Linux-HA will run on any kernel that doesn't have a major scheduler bug. For Linux, that means basically anything but Red Hat Linux kernels 2.4.18-2.4.20.

It has no kernel dependencies, drivers, file system requirements, or other hooks.


Maximum number of nodes?

Beginning with version 2.0, there is no fixed maximum number of nodes. We have tested with up to 16 nodes, and reports occasionally trickle in of people using more than double that number.

Are there any Administration-Tools included?

Heartbeat currently comes with the following administrative tools:

  • haclient - graphical user interface for configuring, controlling, and monitoring the cluster

  • crmadmin - provides node-related details

  • cibadmin - allows the current configuration to be queried and modified

  • crm_verify - checks that a configuration is valid

  • crm_mon - provides the current cluster status in text or HTML

  • crm_resource - query and modify all things related to cluster resources/services

  • crm_standby - control a node's standby status (ability to run resources)

  • cl_status - provides low-level node-centric connectivity information.
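Typical one-shot invocations of these tools look like the following (a sketch against a running cluster; option spellings vary somewhat between releases, so consult each tool's man page):

```
crm_mon -1                   # print cluster status once and exit
cibadmin -Q > cib-backup.xml # dump the current CIB configuration as XML
crm_verify -L                # validate the live cluster configuration
crm_resource -L              # list the configured resources
```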


Heartbeat monitors node death, and includes built-in resource monitoring for all types of resources. Resources are automatically restarted on failure.

Supported Applications

Linux-HA can support virtually any application that can withstand a crash and be restarted robustly, and that can somehow access a good copy of its state data from whichever machine needs to run the service. See the section on Data Sharing Arrangements for common methods of sharing application state.

People do a huge variety of things with Linux-HA. If you want it, someone has probably already done it. It supports most applications immediately without writing any scripts.

Automated Notification when one node fails?

Linux-HA provides configurable automated notification whenever resources move from one machine to another, through the MailTo resource agent. You can easily write your own if you don't like ours. Additionally, you can run an SNMP agent which will send out SNMP traps when nodes fail, or monitor and control it through the Common Information Model (CIM) cluster model.
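As a sketch of how the MailTo agent might be configured, here is a hypothetical v2-style CIB primitive (the ids, email address, and subject are placeholders, and the exact CIB schema varies by version):

```
<primitive id="cluster-mail" class="ocf" provider="heartbeat" type="MailTo">
  <instance_attributes id="cluster-mail-ia">
    <attributes>
      <nvpair id="cluster-mail-email" name="email" value="admin@example.com"/>
      <nvpair id="cluster-mail-subject" name="subject" value="Cluster change"/>
    </attributes>
  </instance_attributes>
</primitive>
```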


Commercial support is available from a variety of sources including IBM and SUSE/Novell.


How much processor performance is used for managing the cluster?

Linux-HA's processor usage is usually negligible - typically much less than 1 percent. If you configure ultra-fast failover times (under 1 second), this overhead rises with the faster heartbeat rates required.

What level of availability is achievable?

This is a difficult question to answer, since it depends on where you start. As a rule of thumb, a good HA system, appropriately configured, adds about one "9" to your system's availability; this applies to Linux-HA as well. That is, if your pre-clustering availability was 99.9%, the resulting availability of your system ought to be something like 99.99%. You can improve on this through good administrative procedures and higher degrees of redundancy.
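As a quick sanity check on the arithmetic behind the "one extra 9" rule of thumb, annual downtime scales with (1 - availability). A minimal sketch (the helper name is ours, not a Linux-HA tool):

```shell
# Convert an availability fraction into approximate downtime minutes per year.
# 525600 = 365 days * 24 hours * 60 minutes.
downtime_minutes() {
    awk -v a="$1" 'BEGIN { printf "%.0f\n", (1 - a) * 525600 }'
}

downtime_minutes 0.999    # three nines: about 526 minutes (~8.8 hours) a year
downtime_minutes 0.9999   # four nines: about 53 minutes a year
```

So each added "9" cuts expected downtime by a factor of ten.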

How fast can it detect node failure?

When properly configured, Linux-HA can detect failure in less than a second. It is fairly common that people configure a failure detection time of a small number of seconds.
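Failure-detection timing in Heartbeat is governed by timing directives in ha.cf. The values below are illustrative only, not recommendations; check your version's ha.cf documentation for exact syntax and sensible defaults:

```
# /etc/ha.d/ha.cf (fragment) - illustrative values only
keepalive 500ms   # interval between heartbeat packets
warntime 1        # warn in the logs after 1 second of silence
deadtime 2        # declare a node dead after 2 seconds of silence
initdead 30       # allow extra time for the first heartbeat at boot
```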



Starting with release 2.0.5, Linux-HA comes with an easy-to-use GUI for configuring, monitoring, and controlling it. Screenshots of the GUI are available online, as are some screencasts.

Linux-HA release 2 comes with web-based and command-line-based cluster monitoring capabilities showing most aspects of current cluster status.

You can also monitor its basic capabilities through SNMP using your favorite SNMP-enabled systems management tool or through our cl_status tool.

CIM cluster model support is also provided for leading edge systems management support.

Command to Execute commands on all nodes simultaneously?

Linux-HA does not provide one at this time, but you could add one easily if you felt strongly about it. It would take around 30 lines of shell script.

Since ssh does the job well, and the security implications are significant, we haven't yet been motivated to provide such a facility.
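A minimal sketch of such a helper (the function name, the NODES/RUN variables, and the node names are our assumptions, not part of Linux-HA; set RUN=echo to dry-run without real hosts):

```shell
# Run the given command on every node listed in $NODES, via ssh by default.
# Override RUN (e.g. RUN=echo) to dry-run; set NODES to your own node list.
on_all_nodes() {
    for node in ${NODES:-node1 node2 node3}; do
        ${RUN:-ssh} "$node" "$@"
    done
}
```

Real deployments would add error handling and perhaps parallel execution, which is roughly where the "30 lines of shell script" estimate comes from.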

Remote Administration of nodes?

You can remotely administer nodes with ssh, or the GUI. Resource configuration can be accomplished on any node in the cluster. CIM administration can perform a wide variety of management operations on any cluster node.

Rebooting a node remotely?

We support rebooting through STONITH plugins which we provide. Appropriate hardware is required.

Function to distribute software to all nodes of the cluster?


See Also

Release 1 Fact Sheet, Linux-HA Release Roadmap, Heartbeat version 2 information