Monday, September 3, 2007

SAN disks managed by VxVM



On Solaris 10, if we have a SAN from any storage vendor (Sun, Hitachi, EMC, HP, IBM...), we can access its disks in two ways:

1.- Using each vendor's own SAN client (e.g. EMC PowerPath).

2.- Using a universal client, such as Veritas Volume Manager.

First of all, we have to verify that the fibre channel cards (HBAs) through which our system will connect to the SAN, via the fibre switches, have a correct hardware configuration.

In our particular scenario we will use two Emulex HBAs, connecting to an EMC Symmetrix array and using the DMP module of VxVM to manage the disks.

We confirm that the vendor's driver for those HBAs is installed.

# pkginfo | grep -i lpfc
system lpfc Emulex LightPulse FC SCSI/IP Host Bus Adapter driver
# pkginfo -l lpfc
PKGINST: lpfc
NAME: Emulex LightPulse FC SCSI/IP Host Bus Adapter driver
CATEGORY: system
ARCH: sparc
VERSION: Release 6.11c
BASEDIR: /
PSTAMP: sunv24020070207110814
INSTDATE: Mar 05 2007 16:19
STATUS: completely installed
FILES: 30 installed pathnames
16 shared pathnames
15 directories
9 executables
3690 blocks used (approx)
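
We can also check that the driver module is actually loaded into the kernel (a quick extra check; the module name should match the package above):

# modinfo | grep -i lpfc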

Their configuration lives mainly in two files: /kernel/drv/lpfc.conf (identification of the "world wide name" (WWN) ports and the targets that the array presents to our system on each HBA) and /kernel/drv/sd.conf (association of each target and free LUN assigned by the system to each discovered disk).
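
As an illustrative sketch (the WWPN below is a made-up placeholder, and the exact binding syntax may vary between lpfc driver releases), the entries in those two files typically look like this, one persistent binding per array port and one sd.conf line per target/LUN pair we want the system to probe:

/kernel/drv/lpfc.conf:
fcp-bind-WWPN="50060482cafd0000:lpfc2t0";

/kernel/drv/sd.conf:
name="sd" parent="lpfc" target=0 lun=117;
name="sd" parent="lpfc" target=0 lun=118;
name="sd" parent="lpfc" target=1 lun=117;

After editing sd.conf, a reconfiguration reboot is needed before the new LUNs show up.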

And the package with all the binaries we will use later (which depends on the previous one).

# pkginfo | grep HBAnyware
system HBAnyware Emulex HBAnyware FC Host Bus Adapter Remote Manager
# pkginfo -l HBAnyware
PKGINST: HBAnyware
NAME: Emulex HBAnyware FC Host Bus Adapter Remote Manager
CATEGORY: system
ARCH: sun4u
VERSION: 3.1a12
BASEDIR: /
PSTAMP: utilsun0420070103103238
INSTDATE: Mar 05 2007 16:26
STATUS: completely installed
FILES: 505 installed pathnames
10 shared pathnames
17 directories
24 executables
55388 blocks used (approx)

After installing these packages and rebooting our system, we can check that, at the software level, the system is correctly configured to support FCP.

# /usr/sbin/lpfc/lputil
LightPulse Common Utility for Solaris/SPARC. Version 2.0a13 (1/3/2006).
Copyright (c) 2005, Emulex Corporation
Emulex Fibre Channel Host Adapters Detected: 2
Host Adapter 0 (lpfc2) is an LP9K (Ready Mode)
Host Adapter 1 (lpfc3) is an LP9K (Ready Mode)

MAIN MENU
1. List Adapters
2. Adapter Information
3. Firmware Maintenance
4. Reset Adapter
5. Persistent Bindings
0. Exit
Enter choice => 2
ADAPTER INFORMATION MENU
1. PCI Configuration Parameters
2. Adapter Revision Levels
3. Wakeup Parameters
4. IEEE Address
5. Loop Map
6. Status & Counters
7. Link Status
8. Configuration Parameters
0. Return to Main Menu
Enter choice => 4
0. lpfc2
1. lpfc3
Select an adapter => 0
IEEE Address for Adapter 0:
[10000000] [C94951F4] <- WWN of the HBA

# format -e
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0
/pci@1f,4000/scsi@3/sd@0,0
1. c0t8d0
/pci@1f,4000/scsi@3/sd@8,0
2. c0t9d0
/pci@1f,4000/scsi@3/sd@9,0
3. c0t10d0
/pci@1f,4000/scsi@3/sd@a,0
4. c0t11d0
/pci@1f,4000/scsi@3/sd@b,0
5. c0t12d0
/pci@1f,4000/scsi@3/sd@c,0
6. c5t0d0
/pci@1f,4000/lpfc@2/sd@0,0
7. c5t0d117
/pci@1f,4000/lpfc@2/sd@0,75
8. c5t0d118
/pci@1f,4000/lpfc@2/sd@0,76
9. c5t0d119
/pci@1f,4000/lpfc@2/sd@0,77
10. c5t0d120
/pci@1f,4000/lpfc@2/sd@0,78
11. c5t0d121
/pci@1f,4000/lpfc@2/sd@0,79
12. c5t0d122
/pci@1f,4000/lpfc@2/sd@0,7a
13. c5t0d123
/pci@1f,4000/lpfc@2/sd@0,7b
14. c5t0d124
/pci@1f,4000/lpfc@2/sd@0,7c
15. c5t1d0
/pci@1f,4000/lpfc@2/sd@1,0
16. c5t1d117
/pci@1f,4000/lpfc@2/sd@1,75
17. c5t1d118
/pci@1f,4000/lpfc@2/sd@1,76
18. c5t1d119
/pci@1f,4000/lpfc@2/sd@1,77
19. c5t1d120
/pci@1f,4000/lpfc@2/sd@1,78
20. c5t1d121
/pci@1f,4000/lpfc@2/sd@1,79
21. c6t0d0
/pci@1f,4000/lpfc@4/sd@0,0
22. c6t0d117
/pci@1f,4000/lpfc@4/sd@0,75
23. c6t0d118
/pci@1f,4000/lpfc@4/sd@0,76
24. c6t0d119
/pci@1f,4000/lpfc@4/sd@0,77
25. c6t0d120
/pci@1f,4000/lpfc@4/sd@0,78
26. c6t0d121
/pci@1f,4000/lpfc@4/sd@0,79
27. c6t0d122
/pci@1f,4000/lpfc@4/sd@0,7a
28. c6t0d123
/pci@1f,4000/lpfc@4/sd@0,7b
29. c6t0d124
/pci@1f,4000/lpfc@4/sd@0,7c
30. c6t1d0
/pci@1f,4000/lpfc@4/sd@1,0
31. c6t1d117
/pci@1f,4000/lpfc@4/sd@1,75
32. c6t1d118
/pci@1f,4000/lpfc@4/sd@1,76
33. c6t1d119
/pci@1f,4000/lpfc@4/sd@1,77
34. c6t1d120
/pci@1f,4000/lpfc@4/sd@1,78
35. c6t1d121
/pci@1f,4000/lpfc@4/sd@1,79

The disks in the SAN region our system is allowed to access are now visible, so we can start installing VxVM from the CD-ROM:

# mount -F hsfs -o ro /dev/dsk/c0t6d0s2 /opt
# cd /opt/software/
# ls
10_Recommended.zip fcaw
C500A5.TXT file_system
CO150A9.PRG getting_started.pdf
EMCPower.SOLARIS.5.0.0.GA.b141.tar gnu
EMCpower installer
LP6DUTIL.EXE lpfc
LP6DUTIL.doc perl
O150a9.txt readme.txt
authentication_service readme_Sol_2612.txt
cd393a0.awc samplescript.txt
cd393a0.zip storage_foundation
cdc393a0.awc storage_foundation_cluster_file_system
cluster_management_console storage_foundation_for_db2
cluster_server storage_foundation_for_oracle
cluster_server_agents storage_foundation_for_oracle_rac
co150a9.zip storage_foundation_for_sybase
emc volume_manager
fca-pci.pkg volume_replicator
fca-pci2612.tar windows

# ./installer -rsh
Storage Foundation and High Availability Solutions 5.0

Symantec Product Version Installed Licensed
===========================================================================
Veritas File System 5.0 yes
Veritas Volume Manager 5.0 yes
Veritas Volume Replicator 5.0 no
Veritas Storage Foundation 5.0 no
Veritas Storage Foundation for Oracle 5.0 no
Veritas Storage Foundation for DB2 no no
Veritas Storage Foundation for Sybase no no
Veritas Storage Foundation Cluster File System 5.0 no
Veritas Storage Foundation for Oracle RAC 5.0 no

Task Menu:

I) Install/Upgrade a Product C) Configure an Installed Product
L) License a Product P) Perform a Pre-Installation Check
U) Uninstall a Product D) View a Product Description
Q) Quit ?) Help

Enter a Task: [I,C,L,P,U,D,Q,?] I
...

Once it is installed and the system has been rebooted again, we list the controllers:

# vxdmpadm listctlr all
CTLR-NAME ENCLR-TYPE STATE ENCLR-NAME
=====================================================
c0 Disk ENABLED Disk
c6 EMC ENABLED EMC1
c5 EMC ENABLED EMC1
c5 EMC ENABLED EMC0
c6 EMC ENABLED EMC0
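
We can also list the enclosures that DMP has built on top of those paths; both EMC enclosures (EMC0 and EMC1) should show up here (output omitted):

# vxdmpadm listenclosure all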

And the disks we can reach through each of them:

# vxdmpadm getsubpaths ctlr=c0
NAME STATE[A] PATH-TYPE[M] DMPNODENAME ENCLR-TYPE ENCLR-NAME ATTRS
================================================================================
c0t0d0s2 ENABLED(A) - c0t0d0s2 Disk Disk -
c0t8d0s2 ENABLED(A) - c0t8d0s2 Disk Disk -
c0t9d0 ENABLED(A) - c0t9d0 Disk Disk -
c0t10d0 ENABLED(A) - c0t10d0 Disk Disk -
c0t11d0 ENABLED(A) - c0t11d0 Disk Disk -
c0t12d0 ENABLED(A) - c0t12d0 Disk Disk -

Controller c0 gives us access to the local disks, as we can verify:

# cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c0 scsi-bus connected configured unknown
c0::dsk/c0t0d0 disk connected configured unknown
c0::dsk/c0t6d0 CD-ROM connected configured unknown
c0::dsk/c0t8d0 disk connected configured unknown
c0::dsk/c0t9d0 disk connected configured unknown
c0::dsk/c0t10d0 disk connected configured unknown
c0::dsk/c0t11d0 disk connected configured unknown
c0::dsk/c0t12d0 disk connected configured unknown
c1 scsi-bus connected unconfigured unknown
c2 scsi-bus connected unconfigured unknown

# vxdmpadm getsubpaths ctlr=c6
NAME STATE[A] PATH-TYPE[M] DMPNODENAME ENCLR-TYPE ENCLR-NAME ATTRS
================================================================================
c6t0d117s2 ENABLED(A) - c5t0d117s2 EMC EMC1 -
c6t0d118s2 ENABLED(A) - c5t0d118s2 EMC EMC1 -
c6t0d119s2 ENABLED(A) - c5t0d119s2 EMC EMC1 -
c6t0d120s2 ENABLED(A) - c5t0d120s2 EMC EMC1 -
c6t0d121s2 ENABLED(A) - c5t0d121s2 EMC EMC1 -
c6t0d122s2 ENABLED(A) - c5t0d122s2 EMC EMC1 -
c6t0d123s2 ENABLED(A) - c5t0d123s2 EMC EMC1 -
c6t0d124s2 ENABLED(A) - c5t0d124s2 EMC EMC1 -
c6t1d117s2 ENABLED(A) - c6t1d117s2 EMC EMC0 -
c6t1d118s2 ENABLED(A) - c6t1d118s2 EMC EMC0 -
c6t1d119s2 ENABLED(A) - c6t1d119s2 EMC EMC0 -
c6t1d120s2 ENABLED(A) - c6t1d120s2 EMC EMC0 -
c6t1d121s2 ENABLED(A) - c6t1d121s2 EMC EMC0 -

# vxdmpadm getsubpaths ctlr=c5
NAME STATE[A] PATH-TYPE[M] DMPNODENAME ENCLR-TYPE ENCLR-NAME ATTRS
================================================================================
c5t0d117s2 ENABLED(A) - c5t0d117s2 EMC EMC1 -
c5t0d118s2 ENABLED(A) - c5t0d118s2 EMC EMC1 -
c5t0d119s2 ENABLED(A) - c5t0d119s2 EMC EMC1 -
c5t0d120s2 ENABLED(A) - c5t0d120s2 EMC EMC1 -
c5t0d121s2 ENABLED(A) - c5t0d121s2 EMC EMC1 -
c5t0d122s2 ENABLED(A) - c5t0d122s2 EMC EMC1 -
c5t0d123s2 ENABLED(A) - c5t0d123s2 EMC EMC1 -
c5t0d124s2 ENABLED(A) - c5t0d124s2 EMC EMC1 -
c5t1d117s2 ENABLED(A) - c5t1d117s2 EMC EMC0 -
c5t1d118s2 ENABLED(A) - c5t1d118s2 EMC EMC0 -
c5t1d119s2 ENABLED(A) - c5t1d119s2 EMC EMC0 -
c5t1d120s2 ENABLED(A) - c5t1d120s2 EMC EMC0 -
c5t1d121s2 ENABLED(A) - c5t1d121s2 EMC EMC0 -

But if we list all the disks that VxVM sees, it looks as if there were twice as many... although they are really the same disks reached through controllers c5 and c6; the only thing that changes is the array board each HBA talks to (EMC0 or EMC1). We confirm this for one of the LUNs right after the listing below.

# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c0t0d0s2 auto:sliced rootdisk rootdg online
c0t9d0 auto:sliced - - online
c0t10d0 auto:none - - online invalid
c0t11d0 auto:none - - online invalid
c0t12d0 auto:none - - online invalid
c5t0d117s2 auto:none - - online invalid
c5t0d118s2 auto:none - - online invalid
c5t0d119s2 auto:none - - online invalid
c5t0d120s2 auto:none - - online invalid
c5t0d121s2 auto:none - - online invalid
c5t0d122s2 auto:none - - online invalid
c5t0d123s2 auto:none - - online invalid
c5t0d124s2 auto:none - - online invalid
c5t1d117s2 auto:none - - online invalid
c5t1d118s2 auto:none - - online invalid
c5t1d119s2 auto:none - - online invalid
c5t1d120s2 auto:none - - online invalid
c5t1d121s2 auto:none - - online invalid
c6t1d117s2 auto:none - - online invalid
c6t1d118s2 auto:none - - online invalid
c6t1d119s2 auto:none - - online invalid
c6t1d120s2 auto:none - - online invalid
c6t1d121s2 auto:none - - online invalid
c6t0d117s2 auto:none - - online invalid
c6t0d118s2 auto:none - - online invalid
c6t0d119s2 auto:none - - online invalid
c6t0d120s2 auto:none - - online invalid
c6t0d121s2 auto:none - - online invalid
c6t0d122s2 auto:none - - online invalid
c6t0d123s2 auto:none - - online invalid
c6t0d124s2 auto:none - - online invalid
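
To confirm that two of those device names are really the same LUN seen through both HBAs, we can ask DMP for the subpaths of one of its nodes; according to the DMPNODENAME column shown earlier, the c5 and the c6 path of each EMC1 disk hang from the same node (output omitted):

# vxdmpadm getsubpaths dmpnodename=c5t0d117s2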

We perform a disk scan from VxVM:

# vxdctl enable
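
On this VxVM 5.0 release we could also restrict the rescan to newly added devices (a lighter alternative when nothing else has changed):

# vxdisk scandisks new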

We initialize 3 of the disks; from that moment on, VxVM will fully manage all of their metadata.

# vxdisksetup -i c6t1d119
# vxdisksetup -i c6t1d120
# vxdisksetup -i c6t1d121

# vxdisk list | egrep "c6t1d119|c6t1d120s2|c6t1d121s2"
c6t1d119s2 auto:cdsdisk - - online
c6t1d120s2 auto:cdsdisk - - online
c6t1d121s2 auto:cdsdisk - - online

We create a disk group named app_dg, assigning these disks to it.

# vxdg init app_dg c6t1d119s2=c6t1d119
# vxdg -g app_dg adddisk c6t1d120s2=c6t1d120
# vxdg -g app_dg adddisk c6t1d121s2=c6t1d121
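
Before carving volumes out of the group, we can double-check it and its members (output omitted; the three disks should appear with the DM names used above):

# vxdg list app_dg
# vxdisk -g app_dg list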

And we create a 3 GB striped volume named test, which we format as VxFS and mount as the /prueba filesystem:

# vxassist -g app_dg make test 3072m layout=stripe
# mkfs -F vxfs -o largefiles /dev/vx/rdsk/app_dg/test
# vxprint -g app_dg -th

dg app_dg default default 27000 1176279780.32.teras6

dm c6t1d119s2 c6t1d119s2 auto 65536 47452288 -
dm c6t1d120s2 c6t1d120s2 auto 65536 47452288 -
dm c6t1d121s2 c6t1d121s2 auto 65536 47452288 -

v test - ENABLED ACTIVE 2097152 SELECT - fsgen
pl test-01 test ENABLED ACTIVE 2097152 STRIPE - RW
sd c6t1d119s2-01 test-01 c6t1d119s2 0 2097152 0/0 c6t1d119 ENA
sd c6t1d120s2-01 test-01 c6t1d120s2 0 2097152 1/0 c6t1d120 ENA
sd c6t1d121s2-01 test-01 c6t1d121s2 0 2097152 2/0 c6t1d121 ENA

# mount -F vxfs /dev/vx/dsk/app_dg/test /prueba
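
If we want the filesystem mounted automatically at boot (assuming the /prueba mount point has already been created), the /etc/vfstab line for it would be something like:

/dev/vx/dsk/app_dg/test /dev/vx/rdsk/app_dg/test /prueba vxfs 2 yes -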

So, apart from the benefits a SAN gives us in terms of performance, replication and backup, we have ended up with a VxFS filesystem managed by VxVM on the SAN region we were granted. The advantage of this model is simple: if controller c6, serving one of the two HBAs, stops working, we can always reach the data through the remaining path on controller c5, serving the other HBA, and the switchover would also be transparent because the DMP module takes care of it.
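
A simple way to exercise that failover (a test sketch, assuming we can afford to drop the paths of one HBA for a moment) is to disable one controller in DMP, keep writing to /prueba, and re-enable it afterwards; while c6 is disabled, vxdmpadm listctlr should report it as DISABLED and the I/O keeps flowing through c5:

# vxdmpadm disable ctlr=c6
# dd if=/dev/zero of=/prueba/testfile bs=1024k count=100
# vxdmpadm listctlr all
# vxdmpadm enable ctlr=c6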

4 comments:

  1. Very nice :D

    The way you did it is interesting; one of these days I will try that volume manager, but tell me, how does it perform? :P

    Regards.

  2. Just imagine the disk access performance with a fibre-optic physical layer at Gigabit Ethernet speeds...

  3. Wow, quite efficient.

    I will implement it one of these days.

    Regards.

  4. What a great tutorial; it really helped me a lot, because the graphical interface will not open for me.

    Thanks
