Welcome to the World of IBM Virtualization


"Everything can be virtualized except the reality!!!"


The main idea behind creating this blog is to provide profound insight into managing IT infrastructure running primarily on IBM AIX, along with the technologies listed below.

To show your support, become a follower of this blog.

1. Tivoli Storage Manager


2. HACMP (POWERHA)


3. VCS (Veritas Cluster Service)


4. Oracle on AIX


5. SAP (R/3, Enterprise portal, CRM, BW) on AIX


6. Informatica on AIX


7. Websphere on AIX




I'll also provide tips, techniques and best practices for configuring IBM VIO servers and virtual I/O client LPARs (logical partitions), and present them in a way that is comprehensible to system architects and AIX beginners.




Enjoy reading my posts and have intellectual fun.

Wednesday, January 9, 2013

IBM VIO Server Basics

VIO Server:
 
What is VIO? Why do we need it?
A VIO Server (Virtual I/O Server) is an LPAR used to virtualize physical adapters: Ethernet using the Shared Ethernet Adapter, Fibre Channel adapters using NPIV, and physical volumes using virtual SCSI.
It is needed to share physical adapters between LPARs and to overcome the limit on the number of physical adapters available in a system.
 
What are the different versions?
VIO 1.5, 2.1, 2.2.1
 
How do you log in to the VIO server?
Use ssh padmin@vioservername; once logged in, you can execute all the VIO commands. The padmin shell is a restricted shell, so not all UNIX commands will work as padmin. After logging in as padmin, execute the "oem_setup_env" command to switch to root. The underlying operating system of the VIOS is AIX, and most AIX commands will work once you are root via "oem_setup_env", but this IS NOT RECOMMENDED by IBM.
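
A minimal login session might look like this (a sketch; "vios1" is a placeholder hostname):

ssh padmin@vios1 : log in to the VIOS restricted shell
ioslevel : check the current VIOS level
oem_setup_env : switch to a root shell (not recommended by IBM for routine work)
exit : return to the padmin restricted shell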
 
What hardware is supported?
Starting with POWER5, followed by POWER6 and POWER7. The different editions are explained below. To use advanced features such as Active Memory Sharing and Live Partition Mobility, you need the Enterprise Edition.

Basic commands which can be run as padmin:
license -accept : Accept the license after an installation, migration or upgrade of the VIO server
ioslevel : To find out the current VIO version or level
mkvdev -lnagg : Command to create Ethernet link aggregation; Mainly to bundle two or more ethernet connections
mkvdev -sea ent0 -vadapter ent1 -defaultid 1 -default ent1 : Command to create Shared Ethernet Adapter
mkvdev -vdev hdisk2 -vadapter vhost2 : Command to create a virtual SCSI mapping to share a disk (physical volume)
lsmap -all : To list all Virtual SCSI mapping i.e. vhost - hdisk
vfcmap : To create Virtual Fibre Channel adapter mapping to share fibre channel adapter (HBA)
lsmap -all -npiv : To list all Virtual Fibre Channel adapter mapping (NPIV Configuration)
lsnports : to verify if a fibre channel adapter supports NPIV
backupios -file /home/padmin/mnt/mksysb-backup -mksysb : To backup VIO to a file /home/padmin/mnt/mksysb-backup
updateios : To update the VIOS level; steps are explained below
shutdown -force : to shutdown
shutdown -force -restart : to reboot
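
A few related query commands are handy alongside the list above (a sketch; the device names are examples):

lsdev -virtual : list the virtual devices defined on the VIO server
lsmap -vadapter vhost0 : show the mapping for a single virtual SCSI server adapter
lsdev -dev ent4 -attr : show the attributes of a device such as an SEA or a link aggregation
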
Migration and Upgrade methods:
Migration from version 1.5 to 2.1 is performed using the VIOS Migration DVD media.
For an upgrade from 2.1 to 2.2, or within the 2.2 level, example steps are outlined below.
Example Steps:
1. Log in to the VIO server as padmin.
2. Switch to root with "oem_setup_env".
3. mount nfsserver:/viopatches /mnt
4. Exit back to padmin with "exit".
5. updateios -commit (commit any previously applied, uncommitted updates)
6. updateios -install -accept -dev /mnt/viofixpackFP24
7. shutdown -force -restart
8. license -accept
9. ioslevel (confirm the new level)
10. updateios -commit
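
Before applying the fix pack (step 6 above), it is good practice to take a VIOS backup first. A minimal sketch, assuming an NFS export nfsserver:/backups is available:

mount nfsserver:/backups /home/padmin/mnt
backupios -file /home/padmin/mnt/vios1.mksysb -mksysb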
 
POWERVM:
PowerVM, formerly known as Advanced Power Virtualization (APV), is a chargeable feature of IBM POWER5, POWER6 and POWER7 servers and is required for support of micro-partitions and other advanced features. Support is provided for IBM i, AIX and Linux.
 
Description

IBM PowerVM has the following components:
A "VET" code, which activates firmware required to support resource sharing and other features.
Installation media for the Virtual I/O Server (VIOS), which is a service partition providing sharing services for disk and network adapters.
 
IBM PowerVM comes in three editions.

1.) IBM PowerVM Express
 Only supported on "Express" servers (e.g. Power 710/730, 720/740, 750 and Power Blades).
 Limited to three partitions, one of which must be a VIOS partition.
 No support for Multiple Shared Processor Pools.

This is primarily intended for "sandbox" environments

2.) IBM PowerVM Standard
 Supported on all POWER5, POWER6 and POWER7 systems.
 Unrestricted use of partitioning: 10 LPARs per core (20 LPARs per core on POWER7+ servers), up to a maximum of 1,000 LPARs per system.
 Multiple Shared Processor Pools (on POWER6 and POWER7 systems only).

This is the most common edition in use on production systems.

3.) IBM PowerVM Enterprise
 Supported on POWER6 and POWER7 systems only.
 As PowerVM Standard with the addition of Live Partition Mobility (which allows running virtual machines to migrate to another system) and Active Memory Sharing (which intelligently reallocates physical memory between multiple running virtual machines).

Wednesday, July 25, 2012

DLPAR: Removing I/O Adapters in AIX using HMC

Unlike removing CPU or memory through a dynamic LPAR operation, removing physical I/O adapters requires a few additional steps at the OS level. Each physical I/O adapter has a parent PCI device and belongs to a slot. To remove an I/O adapter, the PCI device and its child devices, and then the PCI I/O slot holding them, must be removed first.

Steps are detailed below.

In the following example, device fcs0 is removed through Dynamic LPAR operation.

1. Use lsdev -Cl fcs0 -F parent to find the parent PCI device. Let's assume it returns "pci2".

2. Use lsslot -c slot or lsslot -c pci to find the respective slot. e.g. U001.781.DERTRGD.P1-C3

3. Remove the PCI device and its child devices. The devices must not be busy: for fcs devices, the volume groups must be varied off, and for ent devices, the network interfaces must be down.

Use the following command to remove them: rmdev -Rdl pci2

4. After removing the devices, the respective slot has to be released using
"drslot -r -s U001.781.DERTRGD.P1-C3 -c pci"

5. Now, go to the HMC and perform the DLPAR physical I/O adapter remove operation. The slot should now be listed in the DLPAR window.
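
Putting the OS-side commands together (a sketch using the example device and slot names above):

lsdev -Cl fcs0 -F parent : returns the parent PCI device, e.g. pci2
lsslot -c pci : find the slot holding pci2, e.g. U001.781.DERTRGD.P1-C3
rmdev -Rdl pci2 : remove pci2 and all of its child devices
drslot -r -s U001.781.DERTRGD.P1-C3 -c pci : release the slot so it can be removed via DLPAR on the HMC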

Monday, October 3, 2011

IBM VIO Virtual SCSI Disk configuration

Using a VIO server, a disk, a logical volume or a volume group can be presented to a client LPAR as a "VIRTUAL SCSI DISK".

Steps to configure Virtual SCSI:
1. Choose a disk (physical volume) in the VIO server. For example hdisk1.

2. Modify LPAR profile of the VIO server to add a virtual server SCSI adapter. Specify a unique adapter slot number and also specify connecting client partition and slot.

3. After adding the virtual scsi adapter and activating the VIO server, you should see a virtual scsi server adapter vhost#. For example vhost0.

4. Modify the client LPAR profile to create new virtual scsi adapter and specify the connecting server partition as the VIO server and server SCSI adapter slot.

5. Then to create the mapping in the VIO execute the following command.

 mkvdev -vdev hdisk1 -vadapter vhost0 -dev client1_rootvg

In a dual-VIO setup with a PowerPath device coming from SAN:

Let's assume hdiskpower0 is shared between vioservera and vioserverb.

Follow the above steps 1-3 for both the VIO servers.

Execute the following on both the VIO servers:
a. Switch to root using the command "oem_setup_env".
b. chdev -l hdiskpower0 -a reserve_lock=no
c. Exit back to padmin ("exit") and create the mapping: mkvdev -vdev hdiskpower0 -vadapter vhost0 -dev client1_rootvg
d. Perform step 4 on the client LPAR profile twice, once for each VIO server.

The above steps create two virtual SCSI paths to the same physical volume. You may log in to the client and execute "lspath" to view the vscsi paths to the virtual SCSI device.
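
On the client LPAR, a quick way to confirm both paths are present (a sketch, assuming the virtual disk appears as hdisk0 on the client):

cfgmgr : scan for the newly mapped virtual SCSI devices
lsdev -Cc disk : the mapped disk shows up as a "Virtual SCSI Disk Drive"
lspath -l hdisk0 : should list two Enabled vscsi paths, one per VIO server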

To verify the mappings in the VIO server, use the command lsmap -all.

Wednesday, February 16, 2011

Configuring NPIV in VIO

Using an NPIV setup in the VIO server, physical fibre channel adapters can be shared across multiple LPARs. Traditionally we have been assigning physical adapters to AIX/Linux LPARs, and we would soon run out of adapters as the requirement to build more LPARs arises.

Below are the steps to configure NPIV:

Minimum Requirements to setup NPIV:

- POWER6
- HMC 7.3.4 or later
- VIOS 2.1.2
- AIX 5.3 TL9, or later / AIX 6.1 TL2 or later
- NPIV enabled Switch
- System Firmware level of EL340_039
- 8 Gigabit PCI Express Dual Port Fibre Channel Adapter (Feature Code 5735)

Steps:

1.) Modify LPAR Profile:
a. Assign the 8 Gigabit PCI Express Dual Port Fibre Channel Adapter to the VIO servers.
b. Create virtual "server" fibre channel adapters in the VIO servers and assign adapter IDs, specifying the client partition and its adapter ID. I would suggest keeping the client adapter ID the same as the VIO server's adapter ID.
c. Create virtual "client" fibre channel adapters in the AIX LPARs, specifying the VIO partition and its adapter ID as the connecting partition.


2.) Activate the VIO partition and execute "lsnports" to confirm that the physical adapters are assigned and support NPIV. You should also see the virtual adapters "vfchost#" which you created in step 1-b.

For this example, let's assume that fcs0 and fcs1 are the two physical fibre channel adapters and vfchost0 and vfchost1 are the virtual adapters.

3.) Execute the following command to create the mapping in the VIO server.
vfcmap -vadapter vfchost0 -fcp fcs0
vfcmap -vadapter vfchost1 -fcp fcs1

4.) To verify the mapping, run "lsmap -all -npiv" and check the output. This command is also useful when troubleshooting an NPIV setup.

5.) Now activate the client LPAR and you will see two virtual fibre channel adapters (fcs0 and fcs1). Use the WWPNs (worldwide port names; logical IDs assigned to the virtual adapters) of the client LPAR for zoning and LUN assignment; a quick way to look them up is sketched at the end of this post.

6.) Inform the storage admins to enable NPIV on the SAN switch ports where the VIO server is connected.

You can use the same physical adapters to create multiple virtual client fibre channel adapters and thus they are shared across LPARs.
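
To look up the client WWPNs mentioned in step 5, run the following on the client LPAR (a sketch; fcs0 is from the example above):

lscfg -vpl fcs0 | grep "Network Address" : the Network Address field is the WWPN of the virtual adapter

The same WWPNs are also shown in the HMC under the properties of the virtual fibre channel client adapter.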





Friday, February 4, 2011

VIO Shared Ethernet Setup

This article discusses the steps involved in setting up VIO Shared Ethernet. First of all, why do we need it? Why can't we simply assign a physical Ethernet adapter to each logical partition (LPAR) and configure it?
Imagine a physical server (a managed Power system) with just four Ethernet adapters. If we assign one physical adapter per LPAR, we run out of Ethernet adapters as soon as we build four LPARs. If there is a requirement to build 10 LPARs, how do we meet the Ethernet adapter requirement? This is where the VIO server comes in handy, by sharing the physical adapters across all the LPARs.
The VIO Shared Ethernet Adapter helps in sharing a physical adapter across all the LPARs. But if we use one physical adapter for all 10 LPARs, can it sustain the load, i.e. all the network traffic coming from all 10 LPARs? In the VIO server, we can create a link aggregation from multiple physical adapters to address the network traffic needs of the LPARs. Now let's look at how to set up link aggregation and a Shared Ethernet Adapter on the VIO servers.

Let’s say we have the following physical Ethernet adapters for public network:

Physical Ethernet Adapters
==========================

ent0 - Public network

ent1 - Public network

==========================


Create two virtual Ethernet adapters in the VIO LPAR profile. One will be used for communication between the VIO server and the LPARs, and the other as the control channel. The control channel is used in a dual-VIO setup for the heartbeat mechanism that detects failures.

=========================

ent2 - Virtual for Public - VLAN ID 1

ent3 - Virtual Control channel for public - VLAN ID 99
==========================

Command to Configure link aggregation

==========================

mkvdev -lnagg ent0,ent1 -attr mode=8023ad
==========================

The above command will create ent4, which is an aggregated link of the two physical adapters ent0 and ent1. Mode 8023ad specifies the IEEE 802.3ad standard with the Link Aggregation Control Protocol (LACP) on the switch side. Have the network team configure the EtherChannel on the corresponding switch ports; a quick check is sketched below.
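
To confirm the aggregation came up in 802.3ad mode, check the adapter statistics from padmin (a sketch; ent4 is the aggregated adapter created above):

==========================

entstat -all ent4

==========================

Look for the IEEE 802.3ad section in the output and verify that both backing adapters are active.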

Now it’s time to create the shared Ethernet adapter.
==========================
mkvdev -sea ent4 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3
==========================


The above command will create ent5, on which you can assign the VIO server's IP address for network connectivity. Now, in the client LPAR profiles, create virtual Ethernet adapters with VLAN ID 1 to make use of the Shared Ethernet Adapter.


Important Note: In a dual-VIO setup, make sure the control channel is configured properly, with the correct VLAN ID, on both VIO servers. Any misconfiguration can flood the network with BPDU packets.
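
Once the SEA is up, its configuration and failover state can be checked from padmin (a sketch; ent5 is the SEA created above):

==========================

lsdev -dev ent5 -attr

entstat -all ent5

==========================

The first command shows attributes such as ha_mode and ctl_chan; in a dual-VIO setup, the entstat output indicates whether this SEA is currently acting as PRIMARY or BACKUP.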

Monday, January 10, 2011

TSM Configuration Steps

See below the TSM configuration steps.


1.) Defining library, Tape drives and path:

define library autolibrary libtype=scsi
define path TSMSERVERA autolibrary srctype=server desttype=library device=/dev/smc2 online=yes
define drive autolibrary LIBTAPE0
define drive autolibrary LIBTAPE1
define drive autolibrary LIBTAPE2
define drive autolibrary LIBTAPE3
define drive autolibrary LIBTAPE4
define drive autolibrary LIBTAPE5
define path TSMSERVERA LIBTAPE0 srctype=server desttype=drive library=autolibrary device=/dev/rmt0 online=yes

define path TSMSERVERA LIBTAPE1 srctype=server desttype=drive library=autolibrary device=/dev/rmt1 online=yes

define path TSMSERVERA LIBTAPE2 srctype=server desttype=drive library=autolibrary device=/dev/rmt2 online=yes

define path TSMSERVERA LIBTAPE3 srctype=server desttype=drive library=autolibrary device=/dev/rmt3 online=yes

define path TSMSERVERA LIBTAPE4 srctype=server desttype=drive library=autolibrary device=/dev/rmt4 online=yes

define path TSMSERVERA LIBTAPE5 srctype=server desttype=drive library=autolibrary device=/dev/rmt5 online=yes

/******Second library******/

define library AUTOLIB2 libtype=scsi
define path TSMSERVERA AUTOLIB2 srctype=server desttype=library device=/dev/smc4 online=yes
define drive AUTOLIB2 LIBTAPE6
define path TSMSERVERA LIBTAPE6 srctype=server desttype=drive library=AUTOLIB2 device=/dev/rmt6 online=yes

2) Defining device class.

define devclass 3592LIB library=autolibrary devtype=3592 format=drive MOUNTRetention=5

define devclass 3592LIB2 library=AUTOLIB2 devtype=3592 format=drive MOUNTRetention=5

define devclass FILE directory=/adsmstore devtype=FILE


3) Defining Primary Storage Pool:

define stgpool DB2DBA3592LIB 3592LIB pooltype=primary MAXSCRatch=100
define stgpool DB2DB_OFFLINEA3592LIB 3592LIB pooltype=primary MAXSCRatch=100
define stgpool DB2FSA3592LIB 3592LIB pooltype=primary MAXSCRatch=100
define stgpool DB2LOG1A3592LIB 3592LIB pooltype=primary MAXSCRatch=100
define stgpool DB2LOG2A3592LIB 3592LIB2 pooltype=primary MAXSCRatch=100


4) Defining Copy Storage Pool:


define stgpool CPDB2DBA3592LIB 3592LIB pooltype=copy MAXSCRatch=100
define stgpool CPDB2LOG1A3592LIB 3592LIB pooltype=copy MAXSCRatch=100



5) Defining Policy Domain and policy set:

define domain DB2DOM description="DB2 Policy Domain"
define policyset DB2DOM DB2POL description="DB2 Policy Set"

6) Defining Management Class:

define mgmtclass DB2DOM DB2POL DB2MGDB description="Production DB Mgmt Class"
define mgmtclass DB2DOM DB2POL DB2MGDBOFFLINE description="Production DB Offline Mgmt Class"
define mgmtclass DB2DOM DB2POL DB2MGFS description="Production FS Mgmt Class "
define mgmtclass DB2DOM DB2POL DB2MGLOG1 description="Production DB Log 1 Mgmt Class"
define mgmtclass DB2DOM DB2POL DB2MGLOG2 description="Production DB Log 2 Mgmt Class"

/****assigning default management class************/
assign defmgmtclass DB2DOM DB2POL DB2MGFS


Copy Group under management class:

define copygroup DB2DOM DB2POL DB2MGDB standard type=archive destination=DB2DBA3592LIB retver=60 retmin=90

define copygroup DB2DOM DB2POL DB2MGDBOFFLINE standard type=archive destination=DB2DB_OFFLINEA3592LIB retver=9999

define copygroup DB2DOM DB2POL DB2MGFS standard type=backup destination=DB2FSA3592LIB VERExists=180 RETEXTRA=10 VERDELETED=NOLIMIT SERialization=SHRDYnamic

define copygroup DB2DOM DB2POL DB2MGLOG1 standard type=archive destination=DB2LOG1A3592LIB retver=60

define copygroup DB2DOM DB2POL DB2MGLOG2 standard type=archive destination=DB2LOG2A3592LIB retver=60

define copygroup DB2DOM DB2POL DB2MGFS standard type=archive destination=DB2FSA3592LIB retver=60

7) Labeling the Tapes:

Insert the scratch tapes into the library and label them.

label libv autolibrary search=yes checkin=scr labels=barcode
label libv AUTOLIB2 search=yes checkin=scr labels=barcode
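
Note that the copy groups defined above do not take effect until the policy set is validated and activated, and backup-archive clients still need to be registered as nodes. A minimal sketch (the node name and password are placeholders):

validate policyset DB2DOM DB2POL
activate policyset DB2DOM DB2POL
register node DB2PRODNODE secretpw domain=DB2DOM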


Sunday, August 29, 2010

Introduction to POWERVM

Some of the growing challenges for companies managing IT infrastructure include cutting down or sharing server resources (such as CPU, memory and I/O), reducing power and cooling costs, and reducing server rack space. IBM PowerVM technology, which was introduced with POWER6 systems, helps consolidate servers by virtualizing CPU, memory and I/O adapter resources. It helps in managing servers efficiently by improving their performance and availability.

CPU virtualization is achieved through a technology called micro-partitioning, which was introduced with POWER5 systems. Micro-partitioning allows a physical CPU to be segmented and shared across multiple logical partitions (LPARs); a quick client-side check is sketched at the end of this post. Memory sharing is achieved through Active Memory Sharing (AMS), set up between the VIO servers and the LPARs; AMS requires the PowerVM Enterprise edition. I/O adapters can be virtualized using Virtual I/O Servers (VIOS) by creating and configuring the following:

1. Virtual SCSI adapter for virtualizing local or SAN drives.
2. Shared Ethernet Adapter for virtualizing ethernet adapters.
3. NPIV (N-Port ID Virtualization) for virtualizing HBA (Host Bus Adapters).

I'll talk about the steps involved in creating and configuring these virtual adapters in the upcoming posts.
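
As a quick way to see micro-partitioning from inside an AIX LPAR, lparstat reports the partition's entitlement and mode (a sketch; the listed fields are only a subset of the output):

lparstat -i : look for Type (Shared or Dedicated), Mode (Capped or Uncapped), Entitled Capacity and Online Virtual CPUs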