Pcsd

Install
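
On Debian/Ubuntu the cluster stack can be installed from the distribution packages; a minimal sketch, assuming apt-based nodes as used throughout this page:

  • root@adin:~# apt-get install pcs pacemaker corosync
  • root@keks:~# apt-get install pcs pacemaker corosync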

Configure

Enable the systemd service on both nodes

root@adin:~# systemctl start pcsd
root@keks:~# systemctl start pcsd
root@adin:~# systemctl enable pcsd
root@keks:~# systemctl enable pcsd
root@adin:~# systemctl stop pacemaker corosync
root@adin:~# systemctl disable pacemaker corosync
root@adin:~# rm  /etc/corosync/corosync.conf
root@keks:~# systemctl stop pacemaker corosync
root@keks:~# systemctl disable pacemaker corosync
root@keks:~# rm  /etc/corosync/corosync.conf
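
To check that pcsd is actually running and reachable (it listens on TCP port 2224), for example:

  • root@adin:~# systemctl status pcsd
  • root@adin:~# ss -tlnp | grep 2224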

Set the cluster password on both nodes

root@adin:~# passwd hacluster
root@keks:~# passwd hacluster
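
passwd prompts interactively; for scripted setups the password can also be set non-interactively, e.g. via chpasswd (a sketch, assuming the password "secret"):

  • root@adin:~# echo 'hacluster:secret' | chpasswd
  • root@keks:~# echo 'hacluster:secret' | chpasswd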

Register the nodes with the HA cluster on both nodes

root@adin:~# pcs host auth adin keks
Username: hacluster
Password: 
keks: Authorized
adin: Authorized

Finish the setup and start the cluster

  • root@adin:~# pcs cluster setup --name mycluster adin keks --force
  • root@adin:~# pcs cluster start --all
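
Note: pcs host auth (used above) is pcs 0.10+ syntax; on those versions pcs cluster setup no longer accepts --name, and the call becomes:

  • root@adin:~# pcs cluster setup mycluster adin keks --force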

Verify the installation

  • root@adin:~# corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
	id	= 192.168.50.51
	status	= ring 0 active with no faults
  • root@adin:~# corosync-cmapctl | grep members
runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.50.51) 
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined
runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.168.50.52) 
runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.2.status (str) = joined

Disable STONITH

  • pcs property set stonith-enabled=false
  • crm_verify -L

Show the pcs help

  • root@adin:~# pcs
Usage: pcs [-f file] [-h] [commands]...
Control and configure pacemaker and corosync.

Options:
    -h, --help  Display usage and exit
    -f file     Perform actions on file instead of active CIB
    --debug     Print all network traffic and external commands run
    --version   Print pcs version information

Commands:
    cluster     Configure cluster options and nodes
    resource    Manage cluster resources
    stonith     Configure fence devices
    constraint  Set resource constraints
    property    Set pacemaker properties
    acl         Set pacemaker access control lists
    status      View cluster status
    config      View and manage cluster configuration
    pcsd        Manage pcs daemon
    node        Manage cluster nodes
  • root@adin:~# pcs status help
Usage: pcs status [commands]...
View current cluster and resource status
Commands:
    [status] [--full | --hide-inactive]
        View all information about the cluster and resources (--full provides
        more details, --hide-inactive hides inactive resources)

    resources
        View current status of cluster resources

    groups
        View currently configured groups and their resources

    cluster
        View current cluster status

    corosync
        View current membership information as seen by corosync

    nodes [corosync|both|config]
        View current status of nodes from pacemaker. If 'corosync' is
        specified, print nodes currently configured in corosync, if 'both'
        is specified, print nodes from both corosync & pacemaker.  If 'config'
        is specified, print nodes from corosync & pacemaker configuration.

    pcsd [<node>] ...
        Show the current status of pcsd on the specified nodes.
        When no nodes are specified, status of all nodes is displayed.

    xml
        View xml version of status (output from crm_mon -r -1 -X)

Add a node

  • root@adin:~# pcs cluster node add <nodename>

Remove a node

  • root@adin:~# pcs cluster node remove <nodename>

Add a resource (ClusterIP)

  • root@francis:~# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.242.151 cidr_netmask=21 op monitor interval=30s
  • root@francis:~# pcs resource show --full
 Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: ip=192.168.242.151 cidr_netmask=21
  Operations: start interval=0s timeout=20s (ClusterIP-start-interval-0s)
              stop interval=0s timeout=20s (ClusterIP-stop-interval-0s)
              monitor interval=30s (ClusterIP-monitor-interval-30s)

Remove a resource again

  • root@francis:~# pcs resource delete ClusterIP

Apache2 as a cluster service

Install Apache2

  • apt-get install apache2

Create a test site

  • vi /var/www/html/index.html
 <html>
 <body>My Test Site - $(hostname)</body>
 </html>
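
Note that $(hostname) is a shell construct and is not expanded inside a static HTML file. To actually substitute the hostname, write the file through the shell, for example:

  • echo "<html><body>My Test Site - $(hostname)</body></html>" > /var/www/html/index.html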

Adjust status.conf

  • vi /etc/apache2/conf-available/status.conf
 <Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
 </Location>

Link/enable status.conf

  • cd /etc/apache2/conf-enabled/
  • ln -s ../conf-available/status.conf .
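
On Debian/Ubuntu the same link can also be created with the helper script, followed by a reload:

  • a2enconf status
  • systemctl reload apache2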

Create the resource

Heartbeat resource

  • pcs resource create WebSite ocf:heartbeat:apache configfile=/etc/apache2/apache2.conf statusurl="http://localhost/server-status" op monitor interval=1min

LSB resource

  • pcs resource create firewall-cluster lsb:firewall op monitor interval=1min --force

systemd resource

  • pcs resource create xl2tpd-cluster systemd:xl2tpd op monitor interval=1min --force

Create a resource within a group

  • pcs resource create WebSite ocf:heartbeat:apache configfile=/etc/apache2/apache2.conf statusurl="http://localhost/server-status" op monitor interval=1min --group www-group

Create dependencies

  • pcs constraint colocation add WebSite with VirtualIP INFINITY
  • pcs constraint
Location Constraints:
Ordering Constraints:
Colocation Constraints:
  WebSite with VirtualIP (score:INFINITY)
  • pcs status
Cluster name: underwood
Last updated: Wed Oct 26 11:28:03 2016		Last change: Wed Oct 26 11:27:39 2016 by root via cibadmin on francis
Stack: corosync
Current DC: francis (version 1.1.14-70404b0) - partition with quorum
2 nodes and 2 resources configured

Online: [ claire francis ]

Full list of resources:

 VirtualIP	(ocf::heartbeat:IPaddr2):	Started claire
 WebSite	(ocf::heartbeat:apache):	Started claire

PCSD Status:
  francis: Online
  claire: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Have a resource follow the master of a master/slave resource

  • pcs constraint colocation add WebSite with master WebDataClone INFINITY

Create an ordering

  • pcs constraint order VirtualIP then WebSite
Adding VirtualIP WebSite (kind: Mandatory) (Options: first-action=start then-action=start)
  • pcs constraint
Location Constraints:
Ordering Constraints:
  start VirtualIP then start WebSite (kind:Mandatory)
Colocation Constraints:
  WebSite with VirtualIP (score:INFINITY)

Prefer a specific node

  • pcs constraint location WebSite prefers francis=50
  • pcs constraint location VirtualIP prefers francis=50
  • pcs constraint
Location Constraints:
  Resource: VirtualIP
    Enabled on: francis (score:50)
  Resource: WebSite
    Enabled on: francis (score:50)
Ordering Constraints:
  start VirtualIP then start WebSite (kind:Mandatory)
Colocation Constraints:
  WebSite with VirtualIP (score:INFINITY)
  • pcs status
Cluster name: underwood
Last updated: Wed Oct 26 11:37:21 2016		Last change: Wed Oct 26 11:35:11 2016 by root via cibadmin on francis
Stack: corosync
Current DC: francis (version 1.1.14-70404b0) - partition with quorum
2 nodes and 2 resources configured

Online: [ claire francis ]

Full list of resources:

 VirtualIP	(ocf::heartbeat:IPaddr2):	Started francis
 WebSite	(ocf::heartbeat:apache):	Started francis

PCSD Status:
  francis: Online
  claire: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
  • crm_simulate -sL (to display the placement scores)
Current cluster status:
Online: [ claire francis ]

 VirtualIP	(ocf::heartbeat:IPaddr2):	Started francis
 WebSite	(ocf::heartbeat:apache):	Started francis

Allocation scores:
native_color: VirtualIP allocation score on claire: 0
native_color: VirtualIP allocation score on francis: 100
native_color: WebSite allocation score on claire: -INFINITY
native_color: WebSite allocation score on francis: 50

Manually move a resource

  • pcs resource move <resource-name> <zielnode>

Enable a resource

  • pcs resource enable <resource-name>

Disable a resource

  • pcs resource disable <resource-name>

Prevent a resource from running on a specific node

  • pcs resource ban <resource-name> <node-name>
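
Both move and ban work by inserting location constraints behind the scenes. To remove these constraints again, so that the cluster may place the resource freely:

  • pcs resource clear <resource-name>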

Start and debug a resource

  • pcs resource debug-start <resource-name>

Clone a resource (make it active on both nodes)

  • pcs resource clone <resource-name>

Create a resource in a master/slave setup

  • pcs resource master <resource-name>-Clone <resource-name> master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

Set up DRBD

  • apt-get install drbd8-utils

Create the partitions with LVM

  • pvcreate /dev/sdb
  • vgcreate ubuntu-francis /dev/sdb
  • lvcreate --name drbd-demo --size 1G ubuntu-francis

(Repeat for the other side, as sketched below for claire)
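
For the second node this amounts to the following (the volume group name ubuntu-claire matches the DRBD configuration below):

  • pvcreate /dev/sdb
  • vgcreate ubuntu-claire /dev/sdb
  • lvcreate --name drbd-demo --size 1G ubuntu-claire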

Configure DRBD

The following instructions must be executed on both nodes

  • vi /etc/drbd.d/wwwdata.res
resource wwwdata {
 protocol C;
 meta-disk internal;
 device /dev/drbd1;
 syncer {
  verify-alg sha1;
 }
 net {
  allow-two-primaries;
 }
 on francis {
  disk   /dev/ubuntu-francis/drbd-demo;
  address  192.168.50.51:7789;
 }
 on claire {
  disk   /dev/ubuntu-claire/drbd-demo;
  address  192.168.50.52:7789;
 }
}

Create the DRBD device

  • drbdadm create-md wwwdata
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.

Bring up the DRBD device

  • modprobe drbd
  • drbdadm up wwwdata
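
The state and initial synchronisation of the device can be watched through the drbd8 proc interface:

  • cat /proc/drbd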

Use this machine as the primary

The following command only needs to be executed on the node that is to become the primary node

  • drbdadm primary --force wwwdata

Create the filesystem

  • mkfs.xfs /dev/drbd1
meta-data=/dev/drbd1             isize=512    agcount=4, agsize=131066 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0
data     =                       bsize=4096   blocks=524263, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Mount the DRBD device

  • mount /dev/drbd1 /mnt

Create a test site for Apache2 on the DRBD device

  • vi /mnt/index.html
 <html>
  <body>My Test Site - DRBD</body>
 </html>

Unmount the DRBD device again

  • umount /dev/drbd1

Create the resources

Save the configuration to the file drbd_cfg

  • pcs cluster cib drbd_cfg

Create the resource for the master/slave setup

  • pcs -f drbd_cfg resource create WebData ocf:linbit:drbd \
        drbd_resource=wwwdata op monitor interval=60s
  • pcs -f drbd_cfg resource master WebDataClone WebData \
        master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 \
        notify=true
  • pcs -f drbd_cfg resource show
 VirtualIP	(ocf::heartbeat:IPaddr2):	Started francis
 WebSite	(ocf::heartbeat:apache):	Started francis
 Master/Slave Set: WebDataClone [WebData]
     Stopped: [ claire francis ]

Push the configuration from the file into the CIB

  • pcs cluster cib-push drbd_cfg
CIB updated
  • pcs status
Cluster name: underwood
Last updated: Wed Oct 26 16:07:39 2016		Last change: Wed Oct 26 16:07:29 2016 by root via cibadmin on francis
Stack: corosync
Current DC: claire (version 1.1.14-70404b0) - partition with quorum
2 nodes and 4 resources configured

Online: [ claire francis ]

Full list of resources:

 VirtualIP	(ocf::heartbeat:IPaddr2):	Started francis
 WebSite	(ocf::heartbeat:apache):	Started francis
 Master/Slave Set: WebDataClone [WebData]
     Masters: [ francis ]
     Slaves: [ claire ]

PCSD Status:
  francis: Online
  claire: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Save the configuration to the file fs_cfg

  • pcs cluster cib fs_cfg

Create the resource for the filesystem

  • pcs -f fs_cfg resource create WebFS Filesystem \
        device="/dev/drbd1" directory="/var/www/html" fstype="xfs"

Create dependencies between the master/slave setup and the web filesystem

  • pcs -f fs_cfg constraint colocation add WebFS with WebDataClone INFINITY with-rsc-role=Master
  • pcs -f fs_cfg constraint order promote WebDataClone then start WebFS
Adding WebDataClone WebFS (kind: Mandatory) (Options: first-action=promote then-action=start)
  • pcs -f fs_cfg constraint colocation add WebSite with WebFS INFINITY
  • pcs -f fs_cfg constraint order WebFS then WebSite
Adding WebFS WebSite (kind: Mandatory) (Options: first-action=start then-action=start)
  • pcs -f fs_cfg constraint
Location Constraints:
  Resource: VirtualIP
    Enabled on: francis (score:50)
  Resource: WebSite
    Enabled on: francis (score:50)
    Enabled on: claire (score:0)
Ordering Constraints:
  start VirtualIP then start WebSite (kind:Mandatory)
  promote WebDataClone then start WebFS (kind:Mandatory)
  start WebFS then start WebSite (kind:Mandatory)
Colocation Constraints:
  WebSite with VirtualIP (score:INFINITY)
  WebFS with WebDataClone (score:INFINITY) (with-rsc-role:Master)
  WebSite with WebFS (score:INFINITY)

Push the cached configuration into the CIB

  • pcs -f fs_cfg resource show
 VirtualIP	(ocf::heartbeat:IPaddr2):	Started francis
 WebSite	(ocf::heartbeat:apache):	Started francis
 Master/Slave Set: WebDataClone [WebData]
     Masters: [ francis ]
     Slaves: [ claire ]
 WebFS	(ocf::heartbeat:Filesystem):	Stopped
  • pcs cluster cib-push fs_cfg
CIB updated
  • pcs status
Cluster name: underwood
Last updated: Wed Oct 26 16:18:08 2016		Last change: Wed Oct 26 16:18:02 2016 by root via cibadmin on francis
Stack: corosync
Current DC: claire (version 1.1.14-70404b0) - partition with quorum
2 nodes and 5 resources configured

Online: [ claire francis ]

Full list of resources:

 VirtualIP	(ocf::heartbeat:IPaddr2):	Started francis
 WebSite	(ocf::heartbeat:apache):	Started francis
 Master/Slave Set: WebDataClone [WebData]
     Masters: [ francis ]
     Slaves: [ claire ]
 WebFS	(ocf::heartbeat:Filesystem):	Started francis

PCSD Status:
  francis: Online
  claire: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Test the cluster

  • pcs status
Cluster name: underwood
Last updated: Wed Oct 26 16:27:41 2016		Last change: Wed Oct 26 16:18:02 2016 by root via cibadmin on francis
Stack: corosync
Current DC: claire (version 1.1.14-70404b0) - partition with quorum
2 nodes and 5 resources configured

Online: [ claire francis ]

Full list of resources:

 VirtualIP	(ocf::heartbeat:IPaddr2):	Started francis
 WebSite	(ocf::heartbeat:apache):	Started francis
 Master/Slave Set: WebDataClone [WebData]
     Masters: [ francis ]
     Slaves: [ claire ]
 WebFS	(ocf::heartbeat:Filesystem):	Started francis

PCSD Status:
  francis: Online
  claire: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
  • pcs cluster standby francis
  • pcs status
Cluster name: underwood
Last updated: Wed Oct 26 16:27:58 2016		Last change: Wed Oct 26 16:27:54 2016 by root via crm_attribute on francis
Stack: corosync
Current DC: claire (version 1.1.14-70404b0) - partition with quorum
2 nodes and 5 resources configured

Node francis: standby
Online: [ claire ]

Full list of resources:

 VirtualIP	(ocf::heartbeat:IPaddr2):	Started claire
 WebSite	(ocf::heartbeat:apache):	Started claire
 Master/Slave Set: WebDataClone [WebData]
     Masters: [ claire ]
     Stopped: [ francis ]
 WebFS	(ocf::heartbeat:Filesystem):	Started claire

PCSD Status:
  francis: Online
  claire: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
  • pcs cluster unstandby francis
  • pcs status
Cluster name: underwood
Last updated: Wed Oct 26 16:28:13 2016		Last change: Wed Oct 26 16:28:11 2016 by root via crm_attribute on francis
Stack: corosync
Current DC: claire (version 1.1.14-70404b0) - partition with quorum
2 nodes and 5 resources configured

Online: [ claire francis ]

Full list of resources:

 VirtualIP	(ocf::heartbeat:IPaddr2):	Started francis
 WebSite	(ocf::heartbeat:apache):	Started francis
 Master/Slave Set: WebDataClone [WebData]
     Masters: [ francis ]
     Slaves: [ claire ]
 WebFS	(ocf::heartbeat:Filesystem):	Started francis

PCSD Status:
  francis: Online
  claire: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Libvirt

Web interface

  • https://<nodename>:2224
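
The web interface is served by pcsd itself (hence port 2224); log in with the hacluster user whose password was set during setup.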

Add a Strongswan resource

  • pcs resource create vpn-gw systemd:strongswan

Troubleshooting

  • If starting a resource fails once, error messages appear below the resources in the output of "pcs status". When this is the case, those resources can keep malfunctioning in the cluster even though they have since been configured correctly. For these resources to work normally again, the recorded failures in pcs must first be cleared. This is done with:
    • crm_resource -P
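
crm_resource -P triggers a re-probe of all resources. The failure history of a single resource can also be cleared specifically; a pcs equivalent, assuming the resource name:

    • pcs resource cleanup <resource-name>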

Links