How to Create an AI Server

In my last post, I covered creating a local Solaris 11 IPS repository accessible via HTTP. It is a primer for the material below, so you should read it first. Today's topic is 'How to Create an AI Server', the second part of this three-part series. For those unfamiliar, AI is the Automated Installer for Solaris 11, a complete replacement for its predecessor, JumpStart. It is a much more modern and robust installation system built around customizable XML-based profiles. One of the key benefits of AI is the ability to install a system entirely over HTTP without a bootable CD/DVD/ISO locally installed on (or attached to) the system. While AI is capable of both SPARC-based and x86-based installations, our scope here is limited to SPARC (for which there is no need for DHCP or PXE). SPARC-based AI installations are capable of installing over a WAN, which we will cover near the end of this post.

As a recap, there is already an IPS repository on our system running Solaris 11 (snv_167 build). Our existing repository is located at /IPS/s11-167/repo. It is now time to install the AI server and configure it for use. First, we need to create a ZFS file system to contain our AI server. In this instance we will create two file systems, rpool/AI and rpool/AI/s11-167, to logically separate Solaris 11 snv_167 from any future AI configurations we wish to host.

# zfs create -o mountpoint=/AI rpool/AI
# zfs create rpool/AI/s11-167
# zfs list -r rpool/AI
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool/AI           63K  8.13G    32K  /AI
rpool/AI/s11-167   31K  8.13G    31K  /AI/s11-167

Now, we need to ensure our previously downloaded ISO image has not been corrupted in any way by re-verifying the md5 checksums.

# cd /var/tmp
# grep ai-sparc md5sums.txt 
235326ebe3753b6fcc4acb5dc6f256d1  sol-11-dev-167-ai-sparc.iso
# md5sum sol-11-dev-167-ai-sparc.iso 
235326ebe3753b6fcc4acb5dc6f256d1  sol-11-dev-167-ai-sparc.iso

Next, we perform a couple of preliminary tasks. First, we need to enable the dns/multicast service, which is listed as a dependency of the install/server service and must be online before AI will work. We do this with the svcadm command below.

# svcadm enable dns/multicast
# svcs dns/multicast
STATE          STIME    FMRI
online         14:01:43 svc:/network/dns/multicast:default

The second preliminary task is specific to the snv_167 build. During packaging, the Oracle developers missed a critical file in the library/python-2/lxml-26 package. This cost me days of trial-and-error debugging with AI in build snv_167 before I finally received the fix from support. This particular issue should not be present in future builds; however, it helps to verify before creating the AI service. The file needs to be located under /usr/lib/python2.6/vendor-packages/lxml/ with ownership set to root:bin and permissions set to 444, as follows.

# cp /var/tmp/ /usr/lib/python2.6/vendor-packages/lxml/
# chmod 444 /usr/lib/python2.6/vendor-packages/lxml/ 
# chown root:bin /usr/lib/python2.6/vendor-packages/lxml/
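Since the exact filename is truncated above, it is worth double-checking that the file you copied ended up with the expected 444 mode before creating the AI service. As a hedged sketch, a POSIX `find -perm` test does this; the temp file below is only a stand-in for the real lxml file:

```shell
# Hedged sketch: confirm a file's mode is exactly 444, as the AI service
# expects for the lxml file. The mktemp file here stands in for the real
# file under /usr/lib/python2.6/vendor-packages/lxml/ (path elided above).
tmp=$(mktemp)
chmod 444 "$tmp"
# find with -perm and an exact octal mode prints the path only on a match
if [ -n "$(find "$tmp" -perm 444)" ]; then
    result="mode OK"       # attributes match; safe to proceed
else
    result="mode WRONG"    # re-run the chmod/chown before create-service
fi
echo "$result"
rm -f "$tmp"
```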

Now, we can finally use the installadm command to create our AI service using the AI ISO image without DHCP. Note the Oracle documentation describes both with and without DHCP. If you are not using DHCP, your environment should have a system in place to statically assign IP addresses and hostnames via a DNS system and the DNS system should be updated prior to attempting to install a system from the AI service.

# installadm create-service -n s11-167-sparc -s \
  /var/tmp/iso/sol-11-dev-167-ai-sparc.iso /AI/s11-167 
Setting up the target image at /AI/s11-167 ...
Refreshing install services

Detected that DHCP is not set up on this server.
If not already configured, please create a DHCP macro
named with:
   Boot file      (BootFile) : ""
If you are running the Oracle Solaris DHCP Server, use the following
command to add the DHCP macro,
   /usr/sbin/dhtadm -g -A -m -d \

Note: Be sure to assign client IP address(es) if needed
(e.g., if running the Oracle Solaris DHCP Server, run pntadm(1M)).
Service discovery fallback mechanism set up
Creating SPARC configuration file

As mentioned previously, we will not use DHCP for our AI server. Even though the output of the installadm create-service command makes it appear that DHCP must be configured, it is not needed for SPARC wanboot clients, and it is most likely not a best practice in large environments anyway.

Now that our AI service has been created, we need to slightly tweak the default manifest file for installing the Solaris 11 operating system. For the sake of brevity, I will not cover all of the possible modifications. As general background, the AI service uses XML-based configuration files. It is possible to completely customize an installation by creating XML-based profiles specifying exactly the devices, clients (systems), networking, configuration, and default users you desire. There are example profiles under your default AI service location (e.g. /AI/s11-167/auto_install/sc_profiles). Likewise, there are XML-based manifests that can be customized for a specific AI service to install any valid packages from any IPS repositories you wish. The manifest files are located under a similar path (e.g. /AI/s11-167/auto_install/manifest). Also note that with each new build (snv_167 for developers), changes are made to the XML specifications allowing more advanced customizations.

For our example we will only edit the default manifest. The changes we make follow. First, we back up the default manifest.

# cd /AI/s11-167/auto_install/manifest/
# cp -p default.xml default.xml.orig

Next, since we are only dealing with SPARC based systems and not x86 based systems, we want to add the auto_reboot attribute to the ai_instance tag. This should not be done for x86 based systems since the boot order is not guaranteed on x86.

<ai_instance name="default" auto_reboot="true">

Next, let’s change the name of the boot environment to snv_167 from the default of solaris just so we’re clear on which build we are using.

<be name="snv_167"/>
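The two edits above are simple one-line substitutions, so they could also be scripted. As a sketch (demonstrated here against inline sample lines rather than the real default.xml, and assuming the original manifest contains the unmodified ai_instance and be tags shown), sed handles both at once:

```shell
# Hedged sketch: apply the two manifest edits above with sed. This runs
# against inline sample lines; on the real system you would feed sed the
# backed-up default.xml.orig and write the result to default.xml.
edited=$(printf '%s\n' '<ai_instance name="default">' '<be name="solaris"/>' |
  sed -e 's|<ai_instance name="default">|<ai_instance name="default" auto_reboot="true">|' \
      -e 's|<be name="solaris"/>|<be name="snv_167"/>|')
printf '%s\n' "$edited"
```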

Next, we need to change our default Solaris 11 IPS repository from the default of:

<origin name=""/>

to the one we created in my previous post using the system’s IP address along with the port we specified for our IPS repository:

<origin name=""/>

If you are unsure of the port you specified, it can be verified using the svccfg -s pkg/server listprop pkg/port command.
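The origin URI is simply http:// plus the repository server's address and the pkg/port value. With hypothetical values (your own host IP and port will differ; query the port with the svccfg command above), it is assembled like this:

```shell
# Hedged sketch with hypothetical values: build the IPS origin URI from
# the repository host IP and the pkg/port value reported by
#   svccfg -s pkg/server listprop pkg/port
REPO_HOST=192.168.1.10   # hypothetical: your repository server's IP
REPO_PORT=10000          # hypothetical: your pkg/port value
ORIGIN="http://${REPO_HOST}:${REPO_PORT}"
echo "$ORIGIN"           # this value goes into <origin name="..."/>
```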

For our last change to the default manifest, we want to add a section, using the add_drivers tag, that will install any missing or third-party drivers our clients may need that are already in our IPS repository, as follows.

<add_drivers>
  <search_all addall="true">
    <publisher name="solaris">
      <origin name=""/>
    </publisher>
  </search_all>
</add_drivers>

Now that we’ve made our changes to the default manifest, we need to let the AI service know about them by updating our manifest.

# installadm list -m
Service Name   Manifest      Status 
------------   --------      ------ 
s11-167-sparc  orig_default  Default
# installadm update-manifest -n s11-167-sparc -f \
  /AI/s11-167/auto_install/default.xml -m orig_default

It’s time to test our AI service backed by our Solaris 11 IPS repository. In this example an LDom (Oracle VM on SPARC) is being used, but it works identically on any modern physical SPARC-based system capable of wanboot. First, we need the MAC address of the client system on which we want to install Solaris 11 with AI. We can get it from the OBP prompt as follows.

{0} ok cd net
{0} ok .properties
local-mac-address        00 14 4f f8 3c 49 
max-frame-size           00004000 
address-bits             00000030 
reg                      00000000 
compatible               SUNW,sun4v-network
device_type              network
name                     network
{0} ok 

Now that we know our MAC address we need to add our client system to the AI service to allow installation.

# installadm create-client -e 00:14:4f:f8:3c:49 -n s11-167-sparc
Creating SPARC configuration file
# installadm list -c 
Service Name   Client Address     Arch   Image Path 
------------   --------------     ----   ---------- 
s11-167-sparc  00:14:4F:F8:3C:49  Sparc  /AI/s11-167

Now that our AI server knows about our client system, we can initiate the wanboot (remote installation) process from the client. First we set auto-boot? to true, then set the OBP network-boot-arguments variable with our client information and the HTTP location of our bootable image. It's important to understand that this value must not contain any spaces, tabs, or newline characters between the comma-separated values; including any will break the boot and cause unnecessary troubleshooting. If you are not familiar with this variable, it is documented in the eeprom man page in the Oracle Solaris 11 Express documentation.

{0} ok setenv auto-boot? true
auto-boot? =            true
{0} ok setenv network-boot-arguments host-ip=,
network-boot-arguments =  host-ip=,
{0} ok 

Next, we can initiate our remote install using the boot net - install command.

{0} ok boot net - install
Boot device: /virtual-devices@100/channel-devices@200/network@0  File and args: - install
<time unavailable> wanboot info: WAN boot messages->console
<time unavailable> wanboot info: configuring /virtual-devices@100/channel-devices@200/network@0
<time unavailable> wanboot progress: wanbootfs: Read 366 of 366 kB (100%)
<time unavailable> wanboot info: wanbootfs: Download complete
Wed Aug 10 16:40:03 wanboot progress: miniroot: Read 280394 of 280394 kB (100%)
Wed Aug 10 16:40:03 wanboot info: miniroot: Download complete
SunOS Release 5.11 Version snv_167 64-bit
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
Hostname: ilhsf001v002
Remounting root read/write
Probing for device nodes ...
Preparing network image for use
Downloading solaris.zlib
--2011-08-10 16:40:30--
Connecting to connected.
HTTP request sent, awaiting response... 200 OK
Length: 123897344 (118M) [text/plain]
Saving to: `/tmp/solaris.zlib'
100%[======================================>] 123,897,344 70.9M/s   in 1.7s    
2011-08-10 16:40:32 (70.9 MB/s) - `/tmp/solaris.zlib' saved [123897344/123897344]
Downloading solarismisc.zlib
--2011-08-10 16:40:32--
Connecting to connected.
HTTP request sent, awaiting response... 200 OK
Length: 16560128 (16M) [text/plain]
Saving to: `/tmp/solarismisc.zlib'
100%[======================================>] 16,560,128  64.3M/s   in 0.2s    
2011-08-10 16:40:32 (64.3 MB/s) - `/tmp/solarismisc.zlib' saved [16560128/16560128]
Downloading .image_info
--2011-08-10 16:40:32--
Connecting to connected.
HTTP request sent, awaiting response... 200 OK
Length: 36 [text/plain]
Saving to: `/tmp/.image_info'
100%[======================================>] 36          --.-K/s   in 0s      
2011-08-10 16:40:32 (967 KB/s) - `/tmp/.image_info' saved [36/36]
Downloading install.conf
--2011-08-10 16:40:32--
Connecting to connected.
HTTP request sent, awaiting response... 200 OK
Length: 67 [text/plain]
Saving to: `/tmp/install.conf'
100%[======================================>] 67          --.-K/s   in 0s      
2011-08-10 16:40:32 (2.07 MB/s) - `/tmp/install.conf' saved [67/67]
Done mounting image
Configuring devices.
Service discovery phase initiated
Service name to look up: s11-167-sparc
Service discovery finished successfully
Process of obtaining install manifest initiated
Using the install manifest obtained via service discovery
Automated Installation started
The progress of the Automated Installation will be output to the console
Detailed logging is in the logfile at /system/volatile/install_log
ilhsf001v002 console login: Press RETURN to get a login prompt at any time.
16:41:17    Install Log: /system/volatile/install_log
16:41:17    Starting Automated Installation Service
16:41:17    Using XML Manifest: /system/volatile/ai.xml
16:41:17    Using profile specification: /system/volatile/profile
16:41:18    Using service list file: /var/run/service_list
16:41:18    100% manifest-parser completed.
16:41:18    Manifest /system/volatile/ai.xml successfully parsed
16:41:18    Configuring Checkpoints
16:41:19    2% target-discovery completed.
16:41:19    === Executing Target Selection Checkpoint ==
16:41:20    Selected Disk(s) : c5d0
16:41:20    8% target-selection completed.
16:41:20    12% ai-configuration completed.
16:41:22    cannot share 'rpool' for nfs: protocol not installed
16:41:22    cannot share 'rpool/export' for nfs: protocol not installed
16:41:22    cannot share 'rpool/export/home' for nfs: protocol not installed
16:41:26    15% target-instantiation completed.
16:41:26    === Executing generated-transfer-1080-1 Checkpoint ===
16:41:26    15% Beginning IPS transfer
16:41:26    Creating IPS image
16:41:40    Installing packages from:
16:41:40        solaris
16:41:40            origin:
17:04:51    17% generated-transfer-1080-1 completed.
17:04:52    19% initialize-smf completed.
17:04:53    Installing SPARC bootblk to root pool devices: ['/dev/rdsk/c5d0s0']
17:04:53    Setting openprom boot-device
17:04:54    30% boot-configuration completed.
17:04:54    33% update-dump-adm completed.
17:04:54    35% setup-swap completed.
17:04:55    37% set-flush-ips-content-cache completed.
17:04:56    39% device-config completed.
17:04:57    41% apply-sysconfig completed.
17:05:04    86% boot-archive completed.
17:05:04    88% transfer-ai-files completed.
17:05:04    99% create-snapshot completed.
17:05:04    Automated Installation succeeded.
17:05:04    System will be rebooted now
Automated Installation finished successfully
Auto reboot enabled. The system will be rebooted now
Log files will be available in /var/sadm/system/logs/ after reboot
Aug 10 17:05:11 ilhsf001v002 reboot: initiated by root
Aug 10 17:05:17 ilhsf001v002 syslogd: going down on signal 15
syncing file systems... done
T5240, No Keyboard
Copyright (c) 1998, 2011, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.0.b, 8192 MB memory available, Serial #83591076.
Ethernet address 0:14:4f:fb:7f:a4, Host ID: 84fb7fa4.
Boot device: /virtual-devices@100/channel-devices@200/disk@0:a  File and args: -Z rpool/ROOT/snv_167
SunOS Release 5.11 Version snv_167 64-bit
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
Loading smf(5) service descriptions: 206/206
Configuring devices.

Once the installation is complete (approximately 20 minutes on my network) the system will reboot and you will be prompted to walk through the basic configuration tasks required by the installer (image below).

[Image: the installer's configuration task screens after a default AI installation]

Finally, once you have gone through the installer and completed the post-install configuration tasks, you now have a working system at a login prompt ready for work.

Exiting System Configuration Tool. Log is available at:
Loading smf(5) service descriptions: 1/1
Hostname: ilhsf001v002
ilhsf001v002 console login: 

Note, however, that the manual post-install configuration tasks required by the installer can be eliminated entirely by creating customized profiles containing the specific information for your client systems. Refer to Oracle's documentation (or the XML DTD files) for the allowed tags and attributes for both manifests and profiles for your customized AI service.
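Profiles use the standard SMF service_bundle DTD. As a hypothetical sketch only (the system configuration profile schema changes between builds, so treat the shipped examples under /AI/s11-167/auto_install/sc_profiles as authoritative for your build), a profile fragment that pre-answers the hostname question for our client might look like this:

```xml
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!-- Hypothetical sketch: set the node name so the installer does not
     prompt for it. Verify element names against your build's examples. -->
<service_bundle type="profile" name="sysconfig">
  <service name="system/identity" version="1" type="service">
    <instance name="node" enabled="true">
      <property_group name="config" type="application">
        <propval name="nodename" value="ilhsf001v002"/>
      </property_group>
    </instance>
  </service>
</service_bundle>
```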

In my next post, I will continue on the topic of IPS and cover “How to Publish Your Own IPS Packages”. It will include creating an IPS repository to hold your own packages which others will be able to install remotely.

Happy installing and stay tuned for more!


Brad Hudson is an Established Leader in IT Infrastructure, Engineering, Operations, and Customer Service with extensive experience developing and delivering infrastructure solutions, building and growing customer service organizations, leading change initiatives, providing data center management, and managing implementation activities.
