Crm
Revision as of 13:27, 20 October 2016
=configure=
==show==
*Display the configuration
<source lang=bash>
root@worf:~# crm configure show
node 1: worf
node 2: kurn
primitive res-ip IPaddr2 \
	params ip=192.168.255.100 nic=eth0 \
	op monitor interval=10
primitive res-ipsec lsb:ipsec \
	op monitor interval=30s
primitive res-iptables lsb:iptables \
	op monitor interval=30s
primitive res-pppoe lsb:pppoe \
	op monitor interval=30s
group gr-vpn-gw res-pppoe res-ip res-iptables res-ipsec \
	meta target-role=Started
location cli-prefer-gr-vpn-gw gr-vpn-gw role=Started inf: worf
property cib-bootstrap-options: \
	have-watchdog=false \
	dc-version=1.1.14-70404b0 \
	cluster-infrastructure=corosync \
	cluster-name=debian \
	stonith-enabled=false \
	no-quorum-policy=ignore
</source>
==save==
*Save the configuration to a file
<source lang=bash>
root@worf:~# crm configure save vpn-gw.conf
</source>
==load==
*Load the configuration from a file: "update" merges it into the running configuration, "replace" replaces the running configuration completely
<source lang=bash>
root@worf:~# crm configure load update vpn-gw.conf
root@worf:~# crm configure load replace vpn-gw.conf
</source>
==edit==
*Edit the configuration interactively in an editor
<source lang=bash>
root@worf:~# crm configure edit
</source>
==commit==
*Apply pending changes
<source lang=bash>
crm(live)# configure commit
INFO: apparently there is nothing to commit
INFO: try changing something first
</source>
==monitor==
*monitor resource-name (check interval):(timeout of the monitor operation)
<source lang=bash>
monitor fs_res 6:10
</source>
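The values before and after the colon become the interval and timeout of the generated monitor operation. A hedged sketch of the effect, assuming the crm default unit of seconds: after committing, the resource definition should contain a matching op line:

<source lang=bash>
op monitor interval=6 timeout=10
</source>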
==delete==
*Delete a resource; dependent constraints are deleted or updated automatically
<source lang=bash>
crm(live)# configure delete res-apache2
INFO: hanging location:cli-prefer-res-apache2 deleted
INFO: constraint colocation:fs_drbd_colo updated
</source>
==erase==
*erase can delete the nodes from the configuration
<source lang=bash>
crm(live)# configure erase nodes
</source>
==node==
*Add a node
<source lang=bash>
crm(live)# configure node 1: tic
</source>
==rename==
*Rename a CIB object; references in constraints are updated automatically
<source lang=bash>
crm(live)configure# rename virtual-ip virtual-ipv4
INFO: modified colocation:fs_drbd_colo from virtual-ip to virtual-ipv4
</source>
==validate_all==
*Validate a CIB object
<source lang=bash>
crm(live)configure# validate_all virtual-ipv4
INFO: Using calculated netmask for 192.168.244.80: 255.255.248.0
</source>
==rsctest==
*Tests the resource on both nodes
*Parallax SSH must be installed: https://github.com/krig/parallax
<source lang=bash>
rsctest virtual-ip-2
Probing resources .Warning: could not find an executable path for askpass because
 Parallax SSH was not installed correctly. Password prompts will not
 work.
testing on tic: virtual-ip-2
testing on tuc: virtual-ip-2
</source>
==verify==
*Checks the configuration for errors (with a correct configuration no message appears)
<source lang=bash>
crm(live)configure# verify
ERROR: error: unpack_location_tags: Constraint 'cli-prefer-drbd_res': Invalid reference to 'drbd_res'
Errors found during check: config not valid
</source>
=corosync=
==show==
*Displays the corosync configuration
<source lang=bash>
crm(live)corosync# show
totem {
	version: 2
	cluster_name: debian
	secauth: off
	transport: udpu
	interface {
		ringnumber: 0
		bindnetaddr: 10.144.144.0
		broadcast: yes
		mcastport: 5405
	}
}

nodelist {
	node {
		ring0_addr: 10.144.144.2
		name: tic
		nodeid: 1
	}
	node {
		ring0_addr: 10.144.144.1
		name: tuc
		nodeid: 2
	}
}

quorum {
	provider: corosync_votequorum
	two_node: 1
	wait_for_all: 1
	last_man_standing: 1
	auto_tie_breaker: 0
}
</source>
==edit==
*Opens the file that "show" displays in vi
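A minimal invocation, following the prompt style of the other subsections (the file opened is presumably /etc/corosync/corosync.conf):

<source lang=bash>
crm(live)corosync# edit
</source>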
==status==
*Shows the status of corosync
<source lang=bash>
crm(live)corosync# status
Printing ring status.
Local node ID 1
RING ID 0
	id	= 10.144.144.2
	status	= ring 0 active with no faults

Quorum information
------------------
Date:             Thu Oct 20 14:58:41 2016
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          1
Ring ID:          64
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           1
Flags:            2Node Quorate WaitForAll LastManStanding

Membership information
----------------------
    Nodeid      Votes Name
         2          1 10.144.144.1
         1          1 10.144.144.2 (local)
</source>
=history=
==events==
<source lang=bash>
crm(live)history# events
INFO: Retrieving information from cluster nodes, please wait...
ERROR: no dateutil, please provide times as printed by date(1)
ERROR: datetime_to_timestamp error: unsupported operand type(s) for -: 'NoneType' and 'datetime.datetime'
WARNING: giving up on log /var/cache/crm/history/live/tuc/ha-log.txt
</source>

==info==
<source lang=bash>
crm(live)history# info
INFO: fetching new logs, please wait ...
ERROR: no dateutil, please provide times as printed by date(1)
ERROR: datetime_to_timestamp error: unsupported operand type(s) for -: 'NoneType' and 'datetime.datetime'
Source: live
Created on: Thu Oct 20 15:02:19 CEST 2016
By: report -Z -Q -f Thu Oct 20 14:02:00 2016 /var/cache/crm/history/live
Period: 2016-10-20 16:02:00 - --/--/-- --:--:--
Nodes: tic tuc
Groups:
Resources: drbd_res fs_res virtual-ip virtual-ip-2
Transitions:
</source>

==latest==
<source lang=bash>
crm(live)history# latest
ERROR: no dateutil, please provide times as printed by date(1)
ERROR: datetime_to_timestamp error: unsupported operand type(s) for -: 'NoneType' and 'datetime.datetime'
INFO: fetching new logs, please wait ...
ERROR: no transitions found in the source
</source>
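All three subcommands fail the same way: the "no dateutil" errors indicate that the Python dateutil module used by crm history is missing. On a Debian system it can presumably be installed with:

<source lang=bash>
apt-get install python-dateutil
</source>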
=resource=
==status==
*Display the state of all resources
<source lang=bash>
crm(live)resource# status
Master/Slave Set: drbd_master_slave [drbd_res]
     Slaves: [ tic tuc ]
fs_res	(ocf::heartbeat:Filesystem):	Stopped
virtual-ip	(ocf::heartbeat:IPaddr):	(target-role:Stopped) Stopped
virtual-ip-2	(ocf::heartbeat:IPaddr):	(target-role:Stopped) Stopped
</source>
==ban==
*Prevent a resource from being started on a node under any circumstances
<source lang=bash>
crm(live)resource# ban virtual-ip-2 tuc
WARNING: Creating rsc_location constraint 'cli-ban-virtual-ip-2-on-tuc' with a score of -INFINITY for resource virtual-ip-2 on tuc.
	This will prevent virtual-ip-2 from running on tuc until the constraint is removed using the 'crm_resource --clear' command or manually with cibadmin
	This will be the case even if tuc is the last node in the cluster
	This message can be disabled with --quiet
</source>
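The warning itself names the way back: the -INFINITY constraint can be removed again with crm_resource, for example (resource name taken from the example above):

<source lang=bash>
crm_resource --clear --resource virtual-ip-2
</source>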
==cleanup==
*Re-check the state of a resource and clear its fail counts
<source lang=bash>
crm(live)resource# cleanup virtual-ip-2
Cleaning up virtual-ip-2 on tic, removing fail-count-virtual-ip-2
Cleaning up virtual-ip-2 on tuc, removing fail-count-virtual-ip-2
* The configuration specifies that 'virtual-ip-2' should remain stopped
Waiting for 2 replies from the CRMd.. OK
</source>
==start==
*Start a resource on a node
<source lang=bash>
crm(live)resource# start virtual-ip
crm(live)resource# status
Master/Slave Set: drbd_master_slave [drbd_res]
     Masters: [ tic ]
     Slaves: [ tuc ]
fs_res	(ocf::heartbeat:Filesystem):	Started
virtual-ip	(ocf::heartbeat:IPaddr):	Started
virtual-ip-2	(ocf::heartbeat:IPaddr):	(target-role:Stopped) Stopped
</source>
==stop==
*Stop a resource on a node
<source lang=bash>
crm(live)resource# stop virtual-ip
crm(live)resource# status
Master/Slave Set: drbd_master_slave [drbd_res]
     Slaves: [ tic tuc ]
fs_res	(ocf::heartbeat:Filesystem):	Stopped
virtual-ip	(ocf::heartbeat:IPaddr):	(target-role:Stopped) Stopped
virtual-ip-2	(ocf::heartbeat:IPaddr):	(target-role:Stopped) Stopped
</source>
==restart==
*Restart a resource on a node
<source lang=bash>
crm(live)resource# restart virtual-ip
INFO: ordering virtual-ip to stop
waiting for stop to finish . done
INFO: ordering virtual-ip to start
</source>
==migrate==
*Stop a resource on this node and start it on another
<source lang=bash>
crm(live)resource# migrate virtual-ip tuc
crm(live)resource# status
Master/Slave Set: drbd_master_slave [drbd_res]
     Masters: [ tic ]
     Slaves: [ tuc ]
fs_res	(ocf::heartbeat:Filesystem):	Started
virtual-ip	(ocf::heartbeat:IPaddr):	Started
virtual-ip-2	(ocf::heartbeat:IPaddr):	(target-role:Stopped) Stopped
</source>
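Like ban, migrate works by inserting a cli-prefer location constraint (compare the cli-prefer-gr-vpn-gw entry in the configure show output above). To remove it and let the cluster place the resource freely again, crmsh offers unmigrate:

<source lang=bash>
crm(live)resource# unmigrate virtual-ip
</source>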
=script=
==list==
*Show the available scripts for building resources
<source lang=bash>
crm(live)script# list
Basic:

mailto           MailTo
virtual-ip       Virtual IP

Database:

database         MySQL/MariaDB Database
db2              IBM DB2 Database
db2-hadr         IBM DB2 Database with HADR
oracle           Oracle Database

Filesystem:

clvm             Cluster-aware LVM
clvm-vg          Cluster-aware LVM (Volume Group)
drbd             DRBD Block Device
filesystem       Filesystem (mount point)
gfs2             gfs2 filesystem (cloned)
gfs2-base        gfs2 filesystem base (cloned)
ocfs2            OCFS2 filesystem (cloned)
raid-lvm         RAID hosting LVM

SAP:

sap-as           SAP ASCS Instance
sap-ci           SAP Central Instance
sap-db           SAP Database Instance
sap-simple-stack SAP Simple Stack Instance
sap-simple-stack-plus SAP SimpleStack+ Instance

Server:

apache           Apache Webserver
exportfs         NFS Exported File System
haproxy          HAProxy
nfsserver        NFS Server

Stonith:

libvirt          STONITH for libvirt (kvm / Xen)
sbd              SBD, Shared storage based fencing
</source>
==show==
*Display the contents of a script
<source lang=bash>
crm(live)script# show virtual-ip
virtual-ip (Basic)
Virtual IP

This Linux-specific resource manages IP alias IP addresses. It can
add an IP alias, or remove one. In addition, it can implement Cluster
Alias IP functionality if invoked as a clone resource.

If used as a clone, you should explicitly set clone-node-max >= 2,
and/or clone-max < number of nodes. In case of node failure, clone
instances need to be re-allocated on surviving nodes. This would not
be possible if there is already an instance on those nodes, and clone-
node-max=1 (which is the default).

1. Manages virtual IPv4 and IPv6 addresses (Linux specific version)

  id (required) (unique)
      Identifier for the cluster resource
  ip (required) (unique)
      IPv4 or IPv6 address
  cidr_netmask
      CIDR netmask
  broadcast
      Broadcast address
</source>
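The parameters listed by show can be passed directly when a script is run; a hedged sketch (the id and ip values are made up):

<source lang=bash>
crm(live)script# run virtual-ip id=my-virtual-ip ip=192.168.244.90
</source>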