How to Publish Your Own IPS Packages

August 11, 2011

Welcome back! This is the last of my three-part mini-series on IPS, the Image Packaging System, for Oracle Solaris 11. The topic of “How to Publish Your Own IPS Packages” builds upon the structure and information contained in Part 1 and Part 2. In Part 3, I will cover how to build your own IPS package, create a second local IPS repository for your own packages, publish the package to your repository, and test the pkg install process for the package.

As background information, we already have a system hosting the Oracle Solaris 11 (snv_167) IPS repository with an AI service properly configured. We will not need the AI service for this topic. However, keep in mind it is possible to customize your XML-based AI manifest files to use the new IPS repository we create here to automatically install your own IPS packages during a new system install. Also, remember we already created a top-level ZFS file system, rpool/IPS, mounted at /IPS, as a step in Part 1. We begin by creating new ZFS file systems as homes for our packaging playground and our new IPS repository.

# zfs list -r rpool/IPS
NAME                USED  AVAIL  REFER  MOUNTPOINT
rpool/IPS          5.88G  7.72G    32K  /IPS
rpool/IPS/s11-167  5.88G  7.72G  5.88G  /IPS/s11-167
# zfs create rpool/IPS/packaging
# zfs create rpool/IPS/myrepo
# zfs list -r rpool/IPS 
NAME                  USED  AVAIL  REFER  MOUNTPOINT
rpool/IPS            5.88G  7.72G    35K  /IPS
rpool/IPS/myrepo       31K  7.72G    31K  /IPS/myrepo
rpool/IPS/packaging    31K  7.72G    31K  /IPS/packaging
rpool/IPS/s11-167    5.88G  7.72G  5.88G  /IPS/s11-167
# 

Now, we will go ahead and create our own IPS repository using the publisher name mycompany and the location /IPS/myrepo. We will also give our repository its own port for remote access via HTTP, so as not to interfere with our default IPS repository. We do this by first verifying the port on our default repository and then creating our new one.

# svccfg -s pkg/server listprop pkg/port 
pkg/port  count    10000
# pkgrepo create /IPS/myrepo 
# pkgrepo set -s /IPS/myrepo publisher/prefix=mycompany
# svccfg -s pkg/server add mycompany
# svccfg -s pkg/server:mycompany addpg pkg application 
# svccfg -s pkg/server:mycompany setprop pkg/port=10001
# svccfg -s pkg/server:mycompany setprop pkg/inst_root=/IPS/myrepo 
# svccfg -s pkg/server:mycompany setprop pkg/readonly=false
# svcadm refresh pkg/server:mycompany
# svcadm enable pkg/server:mycompany
# svcs pkg/server
STATE          STIME    FMRI
online         Aug_09   svc:/application/pkg/server:default
online         16:54:41 svc:/application/pkg/server:mycompany
# 
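
Before building anything, a quick optional sanity check is to query the new repository directly with pkgrepo (we will run the same kind of check over HTTP later). At this point it should report the mycompany publisher with zero packages.

# pkgrepo info -s /IPS/myrepo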

To keep our packaging process and naming schemes simple, we will now create a subdirectory called mypackage and create the files we need for our package. For mypackage, I am using the contents of an old package I created in Perl for some Oracle VM Server for SPARC (LDom) automation. However, you may use any type of files, symbolic links, or directories you need for your own package.

# zfs list -r rpool/IPS
NAME                  USED  AVAIL  REFER  MOUNTPOINT
rpool/IPS            5.88G  7.72G    35K  /IPS
rpool/IPS/myrepo     42.5K  7.72G  42.5K  /IPS/myrepo
rpool/IPS/packaging    31K  7.72G    31K  /IPS/packaging
rpool/IPS/s11-167    5.88G  7.72G  5.88G  /IPS/s11-167
# mkdir /IPS/packaging/mypackage
# cd /IPS/packaging/mypackage/
# mv /var/tmp/mypackage.tar ./
# tar xf mypackage.tar 
# rm mypackage.tar 

Now that we’ve painstakingly developed all of our code, created the directory structures, and placed them into our mypackage package directory, we need to create our package manifest using the pkgsend generate command.

# cd /IPS/packaging/
# pkgsend generate /IPS/packaging/mypackage > \
  /IPS/packaging/mypackage.manifest.1
# cat /IPS/packaging/mypackage.manifest.1 
dir group=bin mode=0755 owner=root path=etc
dir group=bin mode=0755 owner=root path=opt
dir group=bin mode=0755 owner=root path=etc/WFBldom
file etc/WFBldom/farmInfo group=bin mode=0644 owner=root \
  path=etc/WFBldom/farmInfo
dir group=bin mode=0755 owner=root path=opt/WFBldom
dir group=bin mode=0755 owner=root path=opt/WFBldom/lib
dir group=bin mode=0755 owner=root path=opt/WFBldom/conf
dir group=bin mode=0755 owner=root path=opt/WFBldom/bin
file opt/WFBldom/README group=bin mode=0644 owner=root \
  path=opt/WFBldom/README
file opt/WFBldom/lib/WFBldom.pm group=bin mode=0644 owner=root \
  path=opt/WFBldom/lib/WFBldom.pm
file opt/WFBldom/bin/ldmDestroy group=bin mode=0550 owner=root \
  path=opt/WFBldom/bin/ldmDestroy
file opt/WFBldom/bin/ldomReport group=bin mode=0550 owner=root \
  path=opt/WFBldom/bin/ldomReport
file opt/WFBldom/bin/mk_pam_changes_for_vas group=bin mode=0500 owner=root \
  path=opt/WFBldom/bin/mk_pam_changes_for_vas
file opt/WFBldom/bin/configure group=bin mode=0500 owner=root \
  path=opt/WFBldom/bin/configure
file opt/WFBldom/bin/create_ldom_xml_bkups.sh group=bin mode=0500 owner=root \
  path=opt/WFBldom/bin/create_ldom_xml_bkups.sh
file opt/WFBldom/bin/ldmAddVdisk group=bin mode=0550 owner=root \
  path=opt/WFBldom/bin/ldmAddVdisk
file opt/WFBldom/bin/format_luns.sh group=bin mode=0500 owner=root \
  path=opt/WFBldom/bin/format_luns.sh
file opt/WFBldom/bin/ldmRemoveVdisk group=bin mode=0550 owner=root \
  path=opt/WFBldom/bin/ldmRemoveVdisk
file opt/WFBldom/bin/adddisks.sh group=bin mode=0500 owner=root \
  path=opt/WFBldom/bin/adddisks.sh
file opt/WFBldom/bin/ldmusage group=bin mode=0550 owner=root \
  path=opt/WFBldom/bin/ldmusage
file opt/WFBldom/bin/label_luns.sh group=bin mode=0500 owner=root \
  path=opt/WFBldom/bin/label_luns.sh
file opt/WFBldom/bin/ldmCreate group=bin mode=0550 owner=root \
  path=opt/WFBldom/bin/ldmCreate
file opt/WFBldom/bin/fix_privs group=bin mode=0500 owner=root \
  path=opt/WFBldom/bin/fix_privs
# 

From the output above, we can see our local file structure matches the paths to which the files will be installed. Additionally, the pkgsend generate command picked up our permissions (almost): it got the file and directory modes correct, but the group ownerships need some adjusting, and we need to eliminate the directories delivered by the SUNWcs package (/etc and /opt) so we don’t claim ownership of directories we should not deliver. We could do this by hand with an editor, but it’s much simpler to use pkgmogrify with a transform file to automate the transformations and eliminate potential mistakes. Refer to the pkgmogrify man page for details.

# cat mypackage.transform 
<transform -> edit group bin root>
<transform dir path=(etc|opt)$ -> drop>
# pkgmogrify mypackage.manifest.1 mypackage.transform > mypackage.manifest.2 
# egrep "path=etc$|path=opt$|group=bin" mypackage.manifest.2
# 
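
This is also a convenient point to add package metadata, such as a summary and description, using set actions. The sketch below simply appends them to the manifest; the summary and description text is, of course, hypothetical placeholder wording for this example.

# cat >> mypackage.manifest.2 << EOF
set name=pkg.summary value="WFB LDom automation utilities"
set name=pkg.description value="Perl and bash tools for Oracle VM Server for SPARC (LDom) automation"
EOF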

Next, we need to discover any dependencies our mypackage package may have. We do this by running the pkgdepend generate command.

# pkgdepend generate -md /IPS/packaging/mypackage mypackage.manifest.2 > \
  mypackage.manifest.3
# grep depend mypackage.manifest.3 | tail -3
depend fmri=__TBD pkg.debug.depend.file=bash pkg.debug.depend.path=usr/bin \
  pkg.debug.depend.reason=opt/WFBldom/bin/adddisks.sh \
  pkg.debug.depend.type=script type=require
depend fmri=__TBD pkg.debug.depend.file=perl pkg.debug.depend.path=usr/bin \
  pkg.debug.depend.reason=opt/WFBldom/bin/ldmAddVdisk \
  pkg.debug.depend.type=script type=require
depend fmri=__TBD pkg.debug.depend.file=perl pkg.debug.depend.path=usr/bin \
  pkg.debug.depend.reason=opt/WFBldom/bin/fix_privs \
  pkg.debug.depend.type=script type=require
# 

We can see our new manifest file now contains dependencies which we must resolve (noted by __TBD in the fmri). In this particular case, our file /opt/WFBldom/bin/adddisks.sh has a dependency on /usr/bin/bash and our other two files have a dependency on /usr/bin/perl. We resolve these automatically using the pkgdepend resolve command, which creates a new, resolved manifest file with “.res” appended to the manifest name (the command accepts multiple manifests, so several packages can be resolved at once). This command may take a few moments to run, so be patient here.

# pkgdepend resolve -m mypackage.manifest.3
# grep depend mypackage.manifest.3.res | tail -3
depend fmri=pkg:/runtime/perl-584@5.8.4-0.167 type=require
depend fmri=pkg:/shell/bash@4.1.9-0.167 type=require
# 
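
If your build ships the pkglint utility, it is also worth running it against the resolved manifest to catch common packaging mistakes before publishing; availability and checks vary by build, so treat this as an optional step.

# pkglint mypackage.manifest.3.res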

Notice how the output above (the same grep command as before, against the “.res” file) resolved all of our dependencies down to only two packages. This is because my files for mypackage contain only bash and Perl code. Your dependencies will vary based on the contents and specifics of your package. Now, let’s publish our package to the mycompany IPS repository we created at the beginning. To do this, we will use the pkgsend publish command, providing the mypackage package version and the Solaris 11 build to which it can be applied (minimum build version). In this example we will use mypackage version 1.0.0 and 0.167 for the Solaris 11 build version, which can be verified with the pkg list kernel output.

# pkgsend -s http://10.36.136.7:10001/ publish -d \
  /IPS/packaging/mypackage mypackage@1.0.0-0.167 \
  mypackage.manifest.3.res 
PUBLISHED
pkg://mycompany/mypackage@1.0.0,5.11-0.167:20110811T204626Z
# 
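
We can also confirm the publication from the server side. Assuming your build’s pkgrepo supports the list subcommand, it should show the new FMRI under the mycompany publisher.

# pkgrepo list -s /IPS/myrepo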

With our new package published to an IPS repository available via HTTP, we can now query and install our package remotely from another Solaris 11 system. To do this, we first need to add our repository as a publisher on the remote system. Let’s do it.

# pkg set-publisher -g http://10.36.136.7:10001 mycompany
# pkgrepo info -s http://10.36.136.7:10001
PUBLISHER PACKAGES STATUS           UPDATED
mycompany 1        online           2011-08-11T20:46:26.848795Z
# pkg search mycompany/mypackage
INDEX      ACTION VALUE               PACKAGE
pkg.fmri   set    mycompany/mypackage pkg:/mypackage@1.0.0-0.167
# pkg install mypackage
               Packages to install:     1
           Create boot environment:    No
DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                                  1/1       16/16      0.0/0.0
 
PHASE                                        ACTIONS
Install Phase                                  24/24 
 
PHASE                                          ITEMS
Package State Update Phase                       1/1 
Image State Update Phase                         2/2 
# pkg list mypackage
NAME (PUBLISHER)                                        VERSION              IFO
mypackage (mycompany)                                   1.0.0-0.167          i--
# ls -l /opt/WFBldom/bin/ldmCreate 
-r-xr-x---   1 root     root       19805 Aug 11 21:46 /opt/WFBldom/bin/ldmCreate
# 
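
One nice property of this workflow is that updates are trivial: publish a newer version of the package (say, a hypothetical 1.0.1 built from the same directory and resolved manifest) and clients simply pick it up.

# pkgsend -s http://10.36.136.7:10001/ publish -d \
  /IPS/packaging/mypackage mypackage@1.0.1-0.167 \
  mypackage.manifest.3.res

Then, on the client system (depending on your build, pkg update mypackage also works):

# pkg refresh mycompany
# pkg install mypackage@1.0.1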

There you have it. To recap, we created our own IPS repository called mycompany, built our own mypackage package, published it, and installed it from a remote system. This wraps up the three-part mini-series on IPS for Oracle Solaris 11. I hope all of the examples provided in Part 1, Part 2, and Part 3 of this series have helped you gain a better understanding of IPS. I covered a lot of ground in only a few examples, so use the man pages and the documentation to fill in the gaps as needed.

Finally, I have to give credit to Bart Smaalders at Oracle for providing a nice blog on IPS packaging and to the creators of the Wiki page Introduction to IPS for Developers for providing Example 3-3.

How to Create an AI Server

August 10, 2011

In my last post, I covered creating a local Solaris 11 IPS repository accessible via HTTP. It is a primer for what follows, so you should read it first. Today our topic is ‘How to Create an AI Server’, the second part of this three-part series. For those not familiar with it, AI is the Automated Installer for Solaris 11. It is a complete replacement of its predecessor, JumpStart, and a much more modern and robust installation system using customizable XML-based profiles. One of the key benefits of AI is the ability to completely install a system via HTTP without the need for a bootable CD/DVD/ISO locally installed on (or attached to) a system. While AI is capable of both SPARC based and x86 based installations, our scope will be limited to the SPARC based architecture (for which there is no need for DHCP or PXE). SPARC based AI installations are capable of installing over a WAN (which we will cover near the end of this post).

As a recap, there is already an IPS repository on our system using Solaris 11 (snv_167 build). Our existing repository is located at /IPS/s11-167/repo. It is now time to install the AI Server and configure it for use. First, we need to create a ZFS file system to contain our AI server. In this instance we will create two file systems rpool/AI and rpool/AI/s11-167 to logically separate Solaris 11 snv_167 from any potential future AI configurations we wish to host.

# zfs create -o mountpoint=/AI rpool/AI
# zfs create rpool/AI/s11-167
# zfs list -r rpool/AI
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool/AI           63K  8.13G    32K  /AI
rpool/AI/s11-167   31K  8.13G    31K  /AI/s11-167
#

Now, we need to ensure our previously downloaded ISO image has not been corrupted in any way by re-verifying the md5 checksums.

# cd /var/tmp
# grep ai-sparc md5sums.txt 
235326ebe3753b6fcc4acb5dc6f256d1  sol-11-dev-167-ai-sparc.iso
# md5sum sol-11-dev-167-ai-sparc.iso 
235326ebe3753b6fcc4acb5dc6f256d1  sol-11-dev-167-ai-sparc.iso
#

Next, we perform a couple of preliminary tasks. First, we need to enable the dns/multicast service, because it is listed as a dependency of the install/server service and AI expects it to be online. We do this via the svcadm command below.

# svcadm enable dns/multicast
# svcs dns/multicast
STATE          STIME    FMRI
online         14:01:43 svc:/network/dns/multicast:default
#

The second preliminary task is specific to the snv_167 build. During packaging, the Oracle developers missed a critical file, cssselect.py, in the library/python-2/lxml-26 package. This cost me days of trial-and-error debugging with AI on build snv_167 before I finally got the answer and received cssselect.py from support. This particular issue should not be present in future builds, but it helps to verify before creating the AI service. The cssselect.py file needs to be located at /usr/lib/python2.6/vendor-packages/lxml/cssselect.py with ownership set to root:bin and permissions set to 444, as follows.

# cp /var/tmp/cssselect.py /usr/lib/python2.6/vendor-packages/lxml/
# chmod 444 /usr/lib/python2.6/vendor-packages/lxml/cssselect.py 
# chown root:bin /usr/lib/python2.6/vendor-packages/lxml/cssselect.py
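
A quick listing confirms the file is in place with the expected ownership and permissions before we create the AI service.

# ls -l /usr/lib/python2.6/vendor-packages/lxml/cssselect.py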

Now, we can finally use the installadm command to create our AI service from the AI ISO image without DHCP. Note the Oracle documentation describes setups both with and without DHCP. If you are not using DHCP, your environment should statically assign IP addresses and hostnames via DNS, and DNS should be updated prior to attempting to install a system from the AI service.

# installadm create-service -n s11-167-sparc -s \
  /var/tmp/iso/sol-11-dev-167-ai-sparc.iso /AI/s11-167 
Setting up the target image at /AI/s11-167 ...
Refreshing install services

Detected that DHCP is not set up on this server.
If not already configured, please create a DHCP macro
named 10.36.128.0 with:
   Boot file      (BootFile) : \"http://10.36.136.7:5555/cgi-bin/wanboot-cgi\"
If you are running the Oracle Solaris DHCP Server, use the following
command to add the DHCP macro, 10.36.128.0:
   /usr/sbin/dhtadm -g -A -m 10.36.128.0 -d \
   :BootFile=\"http://10.36.136.7:5555/cgi-bin/wanboot-cgi\":

Note: Be sure to assign client IP address(es) if needed
(e.g., if running the Oracle Solaris DHCP Server, run pntadm(1M)).
Service discovery fallback mechanism set up
Creating SPARC configuration file
#
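
At this point, you can verify the new service is registered with the installadm list command; it should show the s11-167-sparc service and its image path.

# installadm list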

As mentioned previously, we will not use DHCP for our AI server. Even though the output of the installadm create-service command suggests DHCP must be configured, it is not required here, and relying on it is most likely not a best practice in large environments.

Now that our AI service has been created, we need to slightly tweak our default manifest file for installing the Oracle Solaris 11 operating system. For the sake of brevity, I will not cover all of the possible modifications. However, as general background, the AI service uses XML based configuration files. It is possible to completely customize an installation by creating XML based profiles specifying exactly the devices, clients (systems), networking, configuration, and default users you desire. There are example profiles under your default AI service location (e.g. /AI/s11-167/auto_install/sc_profiles). Likewise, there are XML based manifests that can be customized for a specific AI service to install any valid packages from any IPS repositories you wish. The manifest files are located under a similar path (e.g. /AI/s11-167/auto_install/manifest). Also note that with each new build (snv_167 for developers), changes are made to the XML specifications allowing more advanced customizations.

For our example, we will only edit the default manifest. The changes we make follow. First, we back up the default manifest.

# cd /AI/s11-167/auto_install/manifest/
# cp -p default.xml default.xml.orig

Next, since we are only dealing with SPARC based systems and not x86 based systems, we want to add the auto_reboot attribute to the ai_instance tag. This should not be done for x86 based systems since the boot order is not guaranteed on x86.

<ai_instance name="default" auto_reboot="true">

Next, let’s change the name of the boot environment to snv_167 from the default of solaris just so we’re clear on which build we are using.

<be name="snv_167"/>

Next, we need to change our default Solaris 11 IPS repository from the default of:

<origin name="http://pkg.oracle.com/solaris/release"/>

to the one we created in my previous post using the system’s IP address along with the port we specified for our IPS repository:

<origin name="http://10.36.136.7:10000"/>

If you are unsure of the port you specified, it can be verified using the svccfg -s pkg/server listprop pkg/port command.
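
For example:

# svccfg -s pkg/server listprop pkg/port
pkg/port  count    10000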

For our last change to the default manifest, we want to add a section that will install any missing or third party drivers our clients may need that we already have in our IPS repository using the add_drivers tag as follows.

        <add_drivers> 
          <search_all addall="true">
            <source>
              <publisher name="solaris">  
                <origin name="http://10.36.136.7:10000"/>  
              </publisher>  
            </source>  
          </search_all>  
        </add_drivers> 

Now that we’ve made our changes to the default manifest, we need to let the AI service know about them by updating our manifest.

# installadm list -m
 
Service Name   Manifest      Status 
------------   --------      ------ 
s11-167-sparc  orig_default  Default
 
# installadm update-manifest -n s11-167-sparc -f \
  /AI/s11-167/auto_install/manifest/default.xml -m orig_default
#

It’s time to test out our AI service backed by our Solaris 11 IPS repository. In this example, an LDom (Oracle VM Server for SPARC) is being used, but it works identically on any modern physical SPARC based system capable of wanboot. First we need to get the MAC address of the client system on which we want to install Solaris 11 with AI. We can do this from the OBP prompt as follows.

{0} ok cd net
{0} ok .properties
local-mac-address        00 14 4f f8 3c 49 
max-frame-size           00004000 
address-bits             00000030 
reg                      00000000 
compatible               SUNW,sun4v-network
device_type              network
name                     network
{0} ok 

Now that we know our MAC address we need to add our client system to the AI service to allow installation.

# installadm create-client -e 00:14:4f:f8:3c:49 -n s11-167-sparc
Creating SPARC configuration file
 
# installadm list -c 
 
Service Name   Client Address     Arch   Image Path 
------------   --------------     ----   ---------- 
s11-167-sparc  00:14:4F:F8:3C:49  Sparc  /AI/s11-167
 
# 

Now that our AI server knows about our client system, we can initiate the wanboot (remote installation) process from the client. First we set our client system to auto-boot?=true and set the OBP value network-boot-arguments with our client information and the HTTP location of our bootable image. It’s important to understand that the value below must not contain any spaces, tabs, or return characters between the comma-separated values for network-boot-arguments; including them will not work and will cause unnecessary troubleshooting. If you are not familiar with this value, it is documented in the man page for eeprom in the Oracle Solaris 11 Express documentation.

{0} ok setenv auto-boot? true
auto-boot? =            true
{0} ok setenv network-boot-arguments host-ip=10.36.136.8,
    subnet-mask=255.255.192.0,hostname=ilhsf001v002,
    router-ip=10.36.136.1,
    file=http://10.36.136.7:5555/cgi-bin/wanboot-cgi
network-boot-arguments =  host-ip=10.36.136.8,
subnet-mask=255.255.192.0,hostname=ilhsf001v002,router-ip=10.36.136.1,
file=http://10.36.136.7:5555/cgi-bin/wanboot-cgi
{0} ok 

Next, we can initiate our remote install using the boot net - install command.

{0} ok boot net - install
Boot device: /virtual-devices@100/channel-devices@200/network@0  File and args: - install
<time unavailable> wanboot info: WAN boot messages->console
<time unavailable> wanboot info: configuring /virtual-devices@100/channel-devices@200/network@0
 
<time unavailable> wanboot progress: wanbootfs: Read 366 of 366 kB (100%)
<time unavailable> wanboot info: wanbootfs: Download complete
Wed Aug 10 16:40:03 wanboot progress: miniroot: Read 280394 of 280394 kB (100%)
Wed Aug 10 16:40:03 wanboot info: miniroot: Download complete
SunOS Release 5.11 Version snv_167 64-bit
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
Hostname: ilhsf001v002
Remounting root read/write
Probing for device nodes ...
Preparing network image for use
Downloading solaris.zlib
--2011-08-10 16:40:30--  http://10.36.136.7:5555/AI/s11-167//solaris.zlib
Connecting to 10.36.136.7:5555... connected.
HTTP request sent, awaiting response... 200 OK
Length: 123897344 (118M) [text/plain]
Saving to: `/tmp/solaris.zlib'
 
100%[======================================>] 123,897,344 70.9M/s   in 1.7s    
 
2011-08-10 16:40:32 (70.9 MB/s) - `/tmp/solaris.zlib' saved [123897344/123897344]
 
Downloading solarismisc.zlib
--2011-08-10 16:40:32--  http://10.36.136.7:5555/AI/s11-167//solarismisc.zlib
Connecting to 10.36.136.7:5555... connected.
HTTP request sent, awaiting response... 200 OK
Length: 16560128 (16M) [text/plain]
Saving to: `/tmp/solarismisc.zlib'
 
100%[======================================>] 16,560,128  64.3M/s   in 0.2s    
 
2011-08-10 16:40:32 (64.3 MB/s) - `/tmp/solarismisc.zlib' saved [16560128/16560128]
 
Downloading .image_info
--2011-08-10 16:40:32--  http://10.36.136.7:5555/AI/s11-167//.image_info
Connecting to 10.36.136.7:5555... connected.
HTTP request sent, awaiting response... 200 OK
Length: 36 [text/plain]
Saving to: `/tmp/.image_info'
 
100%[======================================>] 36          --.-K/s   in 0s      
 
2011-08-10 16:40:32 (967 KB/s) - `/tmp/.image_info' saved [36/36]
 
Downloading install.conf
--2011-08-10 16:40:32--  http://10.36.136.7:5555/AI/s11-167//install.conf
Connecting to 10.36.136.7:5555... connected.
HTTP request sent, awaiting response... 200 OK
Length: 67 [text/plain]
Saving to: `/tmp/install.conf'
 
100%[======================================>] 67          --.-K/s   in 0s      
 
2011-08-10 16:40:32 (2.07 MB/s) - `/tmp/install.conf' saved [67/67]
 
Done mounting image
Configuring devices.
Service discovery phase initiated
Service name to look up: s11-167-sparc
Service discovery finished successfully
Process of obtaining install manifest initiated
Using the install manifest obtained via service discovery
 
Automated Installation started
The progress of the Automated Installation will be output to the console
Detailed logging is in the logfile at /system/volatile/install_log
 
ilhsf001v002 console login: Press RETURN to get a login prompt at any time.
 
16:41:17    Install Log: /system/volatile/install_log
16:41:17    Starting Automated Installation Service
16:41:17    Using XML Manifest: /system/volatile/ai.xml
16:41:17    Using profile specification: /system/volatile/profile
16:41:18    Using service list file: /var/run/service_list
16:41:18    100% manifest-parser completed.
16:41:18    Manifest /system/volatile/ai.xml successfully parsed
16:41:18    Configuring Checkpoints
16:41:19    2% target-discovery completed.
16:41:19    === Executing Target Selection Checkpoint ==
16:41:20    Selected Disk(s) : c5d0
16:41:20    8% target-selection completed.
16:41:20    12% ai-configuration completed.
16:41:22    cannot share 'rpool' for nfs: protocol not installed
16:41:22    cannot share 'rpool/export' for nfs: protocol not installed
16:41:22    cannot share 'rpool/export/home' for nfs: protocol not installed
16:41:26    15% target-instantiation completed.
16:41:26    === Executing generated-transfer-1080-1 Checkpoint ===
16:41:26    15% Beginning IPS transfer
16:41:26    Creating IPS image
16:41:40    Installing packages from:
16:41:40        solaris
16:41:40            origin:  http://10.36.136.7:10000/
17:04:51    17% generated-transfer-1080-1 completed.
17:04:52    19% initialize-smf completed.
17:04:53    Installing SPARC bootblk to root pool devices: ['/dev/rdsk/c5d0s0']
17:04:53    Setting openprom boot-device
17:04:54    30% boot-configuration completed.
17:04:54    33% update-dump-adm completed.
17:04:54    35% setup-swap completed.
17:04:55    37% set-flush-ips-content-cache completed.
17:04:56    39% device-config completed.
17:04:57    41% apply-sysconfig completed.
17:05:04    86% boot-archive completed.
17:05:04    88% transfer-ai-files completed.
17:05:04    99% create-snapshot completed.
17:05:04    Automated Installation succeeded.
17:05:04    System will be rebooted now
Automated Installation finished successfully
Auto reboot enabled. The system will be rebooted now
Log files will be available in /var/sadm/system/logs/ after reboot
Aug 10 17:05:11 ilhsf001v002 reboot: initiated by root
Aug 10 17:05:17 ilhsf001v002 syslogd: going down on signal 15
syncing file systems... done
rebooting...
Resetting...
 
T5240, No Keyboard
Copyright (c) 1998, 2011, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.0.b, 8192 MB memory available, Serial #83591076.
Ethernet address 0:14:4f:fb:7f:a4, Host ID: 84fb7fa4.
 
Boot device: /virtual-devices@100/channel-devices@200/disk@0:a  File and args: -Z rpool/ROOT/snv_167
SunOS Release 5.11 Version snv_167 64-bit
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
Loading smf(5) service descriptions: 206/206
Configuring devices.

Once the installation is complete (approximately 20 minutes on my network) the system will reboot and you will be prompted to walk through the basic configuration tasks required by the installer (image below).

[Screenshot: installer configuration tasks after a default AI installation]

Finally, once you have gone through the installer and completed the post-install configuration tasks, you now have a working system at a login prompt ready for work.

Exiting System Configuration Tool. Log is available at:
/var/tmp/install/sysconfig.log
Loading smf(5) service descriptions: 1/1
Hostname: ilhsf001v002
 
ilhsf001v002 console login: 

Note, however, the manual post-install configuration tasks required by the installer can be eliminated entirely by establishing customized profiles containing the specific information for your client systems. Refer to Oracle’s documentation (or the XML DTD files) for specifics on the allowed tags and attributes for both manifests and profiles for your customized AI service.
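
For example, depending on your build, attaching a completed configuration profile to the AI service looks roughly like the sketch below; the profile file name and criteria here are hypothetical, so check the installadm man page for the exact syntax your build supports.

# installadm create-profile -n s11-167-sparc \
  -f /var/tmp/ilhsf001v002-profile.xml -c mac=00:14:4f:f8:3c:49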

In my next post, I will continue on the topic of IPS and cover “How to Publish Your Own IPS Packages”. It will include creating an IPS repository to hold your own packages which others will be able to install remotely.

Happy installing and stay tuned for more!

How to Create the Solaris 11 IPS Repository

August 9, 2011

Being inspired by Solaris 11 and the fact that I have some down time between development activities, I thought some of you may be interested in a mini-series of posts (3 parts) on the topic of IPS in Solaris 11. IPS is the ‘Image Packaging System’ designed to replace the packaging methods that have existed in Solaris for well over a decade. This more modern approach to packaging has some great benefits for developers and system administrators alike, combined with a learning curve (especially if you are not familiar with ZFS and SMF). While Oracle has released documentation on IPS for Solaris 11 Express, it currently lacks solid examples, which I hope to help with here. As this topic can stray in many directions, I plan to cover the following in order:

  • How to Create the Solaris 11 IPS Repository (local copy of the repository) – post #1
  • How to Create an AI Server (automated installation server which is a replacement of JumpStart) – post #2
  • How to Publish Your Own IPS Packages – post #3

As background information, I will be using the bits from Solaris 11 Express (snv_167), which is currently only available to those who have a valid Oracle Solaris support contract and who are participating in the Solaris Platinum Program. However, the tasks should not change much, if at all, for the GA version slated to come out this year (aside from ISO names and locations). Note also that I am not covering the installation of Solaris 11 on my server (an LDom); refer to the admin guides if you need help installing with the text installer.

To begin, I start off with a fresh install of Solaris 11 (snv_167) using the text installer. Now that my system is installed, I have to obtain the ISO images from Oracle.

# cd /var/tmp/
# export http_proxy=<your http proxy>
# export https_proxy=<your https proxy>
# wget --user=<email> --ask-password <URI to md5sums.txt>
# wget --user=<email> --ask-password <URI to README.txt>
# wget --user=<email> --ask-password <URI to sol-11-dev-167-ai-sparc.iso>
# wget --user=<email> --ask-password <URI to sol-11-dev-167-repo-p01.iso-a>
# wget --user=<email> --ask-password <URI to sol-11-dev-167-repo-p01.iso-b>
# wget --user=<email> --ask-password <URI to sol-11-dev-167-repo-p02.iso-a>
# wget --user=<email> --ask-password <URI to sol-11-dev-167-repo-p02.iso-b>

Now we concatenate the ISO images following the instructions in the README.txt file. This may take some time since each piece is approximately 1.5 GB.

# cat sol-11-dev-167-repo-p01.iso-a sol-11-dev-167-repo-p01.iso-b > \
  sol-11-dev-167-repo-p01.iso
# cat sol-11-dev-167-repo-p02.iso-a sol-11-dev-167-repo-p02.iso-b > \
  sol-11-dev-167-repo-p02.iso

Now that we’ve created the ISOs properly, we need to validate the md5 checksums to ensure the bits are good. Simply verify visually that the md5sums are identical, as below.

# cat md5sums.txt | egrep "ai-sparc|p01.iso$|p02.iso$"
235326ebe3753b6fcc4acb5dc6f256d1 sol-11-dev-167-ai-sparc.iso
0eaedb33bdcb77c188094047981eadc6 sol-11-dev-167-repo-p01.iso
d17885413f6ea60047198ce167547d82 sol-11-dev-167-repo-p02.iso
# md5sum sol-11-dev-167-ai-sparc.iso sol-11-dev-167-repo-p01.iso \
  sol-11-dev-167-repo-p02.iso
235326ebe3753b6fcc4acb5dc6f256d1 sol-11-dev-167-ai-sparc.iso
0eaedb33bdcb77c188094047981eadc6 sol-11-dev-167-repo-p01.iso
d17885413f6ea60047198ce167547d82 sol-11-dev-167-repo-p02.iso
#

We have finally finished the tedious and very long process of obtaining the bits. Now we can move ahead and create our first local Solaris 11 IPS repository. The first step is to create a ZFS file system to hold the repository packages.

# zfs create -o mountpoint=/IPS rpool/IPS
# zfs create rpool/IPS/s11-167
# zfs list -r rpool/IPS
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool/IPS           63K  8.26G    32K  /IPS
rpool/IPS/s11-167   31K  8.26G    31K  /IPS/s11-167
#

Now we need to mount the first repository ISO image and copy its contents to the ZFS file system we created (/IPS/s11-167).

# lofiadm -a /var/tmp/sol-11-dev-167-repo-p01.iso
# mount -F hsfs /dev/lofi/1 /mnt
# rsync -aP /mnt/repo /IPS/s11-167
# umount /mnt
# lofiadm -d /dev/lofi/1

Note there should not be a trailing slash (“/”) on the directory paths above or the rsync command will not copy the data properly and your repository will not function.

Now we need to repeat the steps above for the second ISO image for the repository.

# lofiadm -a /var/tmp/sol-11-dev-167-repo-p02.iso
# mount -F hsfs /dev/lofi/1 /mnt
# rsync -aP /mnt/repo /IPS/s11-167
# umount /mnt
# lofiadm -d /dev/lofi/1

Both of the above rsync processes take quite some time since we are copying approximately 5.9 GB of data. Once the rsync is finished, we can move on to create the IPS repository using SMF. Using SMF lets us disable and enable the service as needed and provides a quick way to re-point at another ZFS file system containing a possibly newer repository.

# svccfg -s pkg/server setprop pkg/inst_root=/IPS/s11-167/repo
# svccfg -s pkg/server setprop pkg/readonly=true
# svccfg -s pkg/server setprop pkg/port=10000

Verify your changes.

# svccfg -s pkg/server listprop | egrep "inst_root|readonly|port"
pkg/inst_root     astring     /IPS/s11-167/repo
pkg/readonly      boolean     true
pkg/port          count       10000
#

Now we can enable our new repository.

# svcadm refresh pkg/server
# svcadm enable pkg/server
# svcs pkg/server
STATE   STIME     FMRI
online  16:55:13  svc:/application/pkg/server:default
#

Now that our repository has been successfully created, we need to validate it is accessible via HTTP. Simply open a web browser and point it to your new Solaris 11 IPS repository; as we configured it in SMF, the URI is http://localhost:10000/. The page displayed should be similar to http://pkg.oracle.com/solaris/release. In the sample output below, I used the IP address of my server to connect.

[Screenshot: the local Solaris 11 IPS repository front page served over HTTP]
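
If you prefer to verify from the command line, a simple HTTP fetch (using wget, which we already used to download the ISOs) should return the repository’s HTML front page.

# wget -qO- http://localhost:10000/ | head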

Success! Now let’s do some minor cleanup. By default, the Solaris 11 text installer adds the solaris publisher to your list of publishers, which can be verified with pkg publisher. Since the default publisher points back to Oracle, and we only want to use our local copy of the Solaris 11 repository, we need to change the publisher information. This is done as follows.

# pkg set-publisher -G http://pkg.oracle.com/solaris/release -g \
  /IPS/s11-167/repo solaris
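
You can confirm the change took effect with pkg publisher; the solaris publisher should now list only the local origin.

# pkg publisher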

All finished! Our local Solaris 11 repository has been created via SMF and made available via HTTP and via local ZFS to our hosting server, and future pkg search queries will only look at our repository instead of trying Oracle’s first.

In the next post I will cover how to use this Solaris 11 repository as the basis for creating an AI Server for remote installation (via HTTP) to SPARC based systems.

IPMP on Solaris 11

July 20, 2011

I have recently been testing the pre-release of Oracle Solaris 11, snv_167 to be precise. While Oracle has done well to update the documentation surrounding the recent updates to the Solaris 11 networking and virtualization stack, it will still come as a shock to many experienced Solaris administrators that the old methods of configuring IPMP (IP multipathing) using configuration files are no more! Yes, I’m referring to the /etc/hostname.<interface> files that were so easily understood (and misunderstood, in the case of IPMP).

Solaris 11 brings some long-overdue changes to how we administer networking. In particular, there are two new commands that effectively end the need for ever using /usr/sbin/ifconfig again. The first of these is /usr/sbin/ipadm, which is delivered by the system/network package. The ipadm command is very straightforward and easy to use. The developers intend it to eliminate ifconfig and the hostname.<interface> files, and I must commend them on their efforts: it not only accomplishes that goal but also simplifies administration of the network layer.

While there are many ways to configure IPMP, the full breadth of options will not be discussed here. Instead, I must refer you to the full documentation, the Oracle Solaris Administration: Network Interfaces and Network Virtualization guide. This post only covers IPMP in Active-Passive mode. In regards to documentation, be certain to read the ipadm man page, as it contains better examples than the guide; the guide lacks updates and still refers to ifconfig over ipadm.

As background information for my testing, I have created an LDom and assigned it two interfaces (vnet0 and vnet1) which are bound to separate virtual switches (vsw0 and vsw1) on aggregates (aggr1 and aggr2 respectively) composed of separate physical network cards, each of which connect to a separate physical switch. The diagram below will help ;-)

[Diagram: IPMP configuration in an LDom using link aggregation]
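
From inside the guest domain, only the two virtual network devices are visible; a quick dladm show-link should list vnet0 and vnet1 before we begin.

# dladm show-link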

Now, to configure IPMP in Active-Passive mode using vnet0 and vnet1 I simply used the following example.

Active-Passive IPMP Example:
# ipadm create-ip vnet0
# ipadm create-ip vnet1
# ipadm set-ifprop -p standby=on -m ip vnet1
# ipadm create-ipmp -i vnet0 -i vnet1 ipmp0
# ipadm create-addr -T static -a local=10.10.10.1/18 ipmp0/v4

Now what actually occurs is quite interesting. The first two commands, using the create-ip subcommand, each perform two tasks: they “plumb” (enable) the network interface and create an IP interface on it. Next, the ipadm set-ifprop command marks the vnet1 IP interface as the “STANDBY” interface. The fourth command, ipadm create-ipmp, creates the IPMP group and its IPMP interface, both named “ipmp0”, and adds the two IP interfaces (vnet0 and vnet1) to the group. Finally, the ipadm create-addr command sets a static IPv4 address on the IPMP interface “ipmp0”. Note the “/v4” syntax appended to the IPMP interface name; this is required for IPv4. Refer to the ipadm man page if configuring IPMP on an IPv6 network. This results in the following active working configuration.

# ifconfig -a4
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
ipmp0: flags=8001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,IPMP> mtu 1500 index 14
        inet 10.10.10.1 netmask ffffc000 broadcast 10.10.191.255
        groupname ipmp0
vnet0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 17
        inet 0.0.0.0 netmask ff000000
        groupname ipmp0
        ether 0:14:4f:f9:ed:f2
vnet1: flags=61000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,STANDBY,INACTIVE> mtu 1500 index 16
        inet 0.0.0.0 netmask ff000000
        groupname ipmp0
        ether 0:14:4f:f8:66:6d
#

There are also two new methods of verifying the new IPMP configuration. The first still uses the ipadm command and is shown below.

# ipadm show-addr ipmp0/v4
ADDROBJ           TYPE     STATE        ADDR
ipmp0/v4          static   ok           10.10.10.1/18
# ipadm show-if -o all
IFNAME     CLASS    STATE    ACTIVE CURRENT       PERSISTENT OVER
lo0        loopback ok       yes    -m-v------46  --46       --
ipmp0      ipmp     ok       yes    bm--------46  --46       vnet0 vnet1
vnet0      ip       ok       yes    bm---l----46  -l46       --
vnet1      ip       ok       no     bm--sli---46  sl46       --
#

The first command above, using the show-addr subcommand, displays the IP address previously assigned to the IPMP interface “ipmp0”. The second command, using the show-if subcommand, displays all of the interfaces with their assigned “CLASS” and “CURRENT” flags. These are explained in detail in the ipadm man page. However, a little more explanation is warranted here on the “CURRENT” flags. IPMP interface “ipmp0” has “bm--------46”, which stands for broadcast, multicast, IPv4, and IPv6 enabled, respectively. IP interfaces “vnet0” and “vnet1” both have flags enabled for broadcast, multicast, link-based probing, IPv4 and IPv6. IP interface “vnet1” has the additional flags “s” and “i” for standby and inactive, respectively. These last two flags were enabled by the ipadm set-ifprop -p standby=on -m ip vnet1 command.

Finally, we come to the second new command I mentioned toward the beginning of this post, /usr/sbin/ipmpstat. The ipmpstat command is much simpler and aimed solely at IPMP configurations. There are several options besides the ones shown below; those are covered in the ipmpstat man page.

# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmp0       ipmp0       ok        10.00s    vnet0 (vnet1)
# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
vnet0       yes     ipmp0       --mbM--   up        ok        ok
vnet1       no      ipmp0       is-----   up        ok        ok
#

The first command above quite simply displays the member IP interfaces of the IPMP group named “ipmp0”. The parentheses surrounding the IP interface “vnet1” indicate that it is a “STANDBY” interface. The second command, ipmpstat -i, displays roughly the same information, but with more detail in the “FLAGS” section. Note how “vnet1” shows the same inactive and standby flags as previously displayed using the ipadm show-if -o all command. However, “vnet0” displays a new flag, “M”, for IPv6 multicast. This shows that all multicast communication (both IPv4 and IPv6) is sent via the “ACTIVE” IPMP interface.

There you have it, IPMP without configuration files! However, I must throw out the disclaimer for the geeky Solaris Admins out there. A file does exist at /etc/ipadm/ipadm.conf that contains any and all information for the active networking configuration created by ipadm. Also, as usual, Oracle advises not to edit the file directly.
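
For completeness, unwinding the configuration is also done with ipadm rather than by editing that file. A sketch of the teardown (verify the exact subcommands against the ipadm man page for your build):

# ipadm delete-addr ipmp0/v4
# ipadm remove-ipmp -i vnet0 -i vnet1 ipmp0
# ipadm delete-ipmp ipmp0
# ipadm delete-ip vnet0
# ipadm delete-ip vnet1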

Happy networking.
