Wednesday, April 11, 2012

Openstack Installation (Diablo Release) Part II

The Essex release came out a few days ago, and I originally wanted to try installing it. But seeing that the Q&A area under the official documentation had not even been set up yet, I lost the courage to be the guinea pig.

Last time I installed everything by hand, mainly to understand how the modules interact during the installation, which settings affect the connections between them, and so that when trouble shows up I am not completely lost about how to fix it. This time I decided to use the official installation script:

https://github.com/managedit/openstack-setup
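
Roughly how I drove it (a sketch; settings, settings.local and clean-all.sh are the only filenames I actually touched, and the name of the top-level install script is whatever the repo's README points at):

git clone https://github.com/managedit/openstack-setup.git
cd openstack-setup
# overrides could also go into settings.local, which the settings file sources
# at the end, but I edited settings directly as shown below
vi settings
# then run the install script described in the repo's README for a single
# controller/compute node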

Contents of the settings file:
# General Settings
LIBVIRT_TYPE=${LIBVIRT_TYPE:-kvm} # Currently supports kvm or qemu

# MySQL Server Settings
# I changed this so that when compute nodes are added later, the other machines can also reach the database on this host
# MYSQL_HOST="127.0.0.1"
MYSQL_HOST="0.0.0.0"
MYSQL_ROOT_PASS="letmein"
MYSQL_NOVA_PASS="letmeinN"
MYSQL_GLANCE_PASS="letmeinG"
MYSQL_KEYSTONE_PASS="letmeinK"
MYSQL_HORIZON_PASS="letmeinH"

# Controller Node
# IP of this machine
HOST_IP="172.17.123.6"       # Do actually change this! This should be an IP address accessible by your end users. Not 127.0.0.1.

# Passwords
ADMIN_PASSWORD="letmein"  # Password for the "admin" user
SERVICE_TOKEN="abcdef123" # Pick something random-ish A-Za-z0-9 only. This is a password. Make it look like one.

# Networking and IPs
PUBLIC_INTERFACE="eth0"   # Interface for Floating IP traffic
# VLAN_INTERFACE="eth1"     # Interface for Fixed IP VLAN traffic
# There is only one NIC in this box for now, so I have to reuse eth0
VLAN_INTERFACE="eth0"     # Interface for Fixed IP VLAN traffic
# Each virtual machine can dynamically get a public-facing IP; the allocation range is set to 172.17.122.1 ~ 172.17.122.254
FLOATING_RANGE="172.17.122.0/24"
# Internal IPs for the VMs; I want them to start from 192.168.128.1, so RANGE_BITS is set to 17. With 16 the range would start from 192.168.0.1 instead
FIXED_RANGE_NET="192.168.128.0"
FIXED_RANGE_BITS="17"
FIXED_RANGE_MASK="255.255.0.0"
# Every 256 IPs form one subnet. I am not entirely sure what difference this makes in practice, but in testing each block of 256 IPs got its own VLAN and virtual switch (see the quick check right after this settings listing)
# i.e. 192.168.128.0 ~ 192.168.128.255 -> vlan100, br100
#      192.168.129.0 ~ 192.168.129.255 -> vlan101, br101
#      and so on
FIXED_RANGE_NETWORK_SIZE="256"
# 8 VLANs in total
FIXED_RANGE_NETWORK_COUNT="8"


# Misc
REGION="nova"             # Region name - "nova" is the default, and probably the most reliable without extra setup!

# Load overrides from settings.local if it exists
if [ -f settings.local ]
then
  . settings.local
fi

# Check for kvm (hardware based virtualization).  If unable to initialize
# kvm, we drop back to the slower emulation mode (qemu).  Note: many systems
# come with hardware virtualization disabled in BIOS.
if [[ "$LIBVIRT_TYPE" == "kvm" ]]; then
    modprobe kvm || true
    if [ ! -e /dev/kvm ]; then
        LIBVIRT_TYPE=qemu
    fi
fi

# Don't change anything below here!
FIXED_RANGE="${FIXED_RANGE_NET}/${FIXED_RANGE_BITS}"
export NOVA_PROJECT_ID="admin" # Tenant
export NOVA_USERNAME="admin" # Username
export NOVA_API_KEY=$ADMIN_PASSWORD
export NOVA_URL="http://$HOST_IP:5000/v2.0/"
export NOVA_VERSION=1.1
export NOVA_REGION_NAME=$REGION

COUNT=1



Before installing, make sure the machine is a completely clean Ubuntu 11.10; otherwise you may run into all kinds of conflicting network settings.
I also rewrote clean-all.sh a little. The original script did not seem to clean up thoroughly enough, so reinstalling OpenStack afterwards (with changed settings) would fail.

#!/bin/bash

# Stop the DHCP server and any running guests that may still hold the bridge
killall dnsmasq
killall kvm

# Tear down the nova bridge left over from the previous install
ifconfig br100 down
brctl delbr br100

./clean.sh

# Collect every installed OpenStack-related package name
PACKAGES=`dpkg -l | grep -E "(openstack|nova|keystone|glance|swift)" | grep -v "kohana" | cut -d" " -f3 | tr "\\n" " "`

apt-get purge -y $PACKAGES

# Remove leftover configuration and state
rm -rf /etc/nova /etc/glance /etc/keystone /etc/swift /etc/openstack-dashboard
rm -rf /var/lib/nova /var/lib/glance /var/lib/keystone /var/lib/swift /var/lib/openstack-dashboard
rm -rf /etc/ntp.conf

# Purge the supporting packages the setup script pulled in
apt-get purge -y python-mysqldb mysql-client curl
apt-get purge -y mysql*
apt-get purge -y python-httplib2
apt-get purge -y rabbitmq-server
apt-get purge -y libapache2-mod-wsgi
apt-get purge -y bridge-utils
apt-get purge -y ntp
apt-get autoremove -y

The installation finished in no time, really effortless, and much more pleasant than the last round of following the manual steps for a whole day and still not getting it to work.
After it was done, I sneaked into the mysql database for a look.
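
To get in, connect as root with MYSQL_ROOT_PASS from the settings file above, e.g.:

mysql -u root -pletmein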



mysql> use nova;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+-------------------------------------+
| Tables_in_nova                      |
+-------------------------------------+
| agent_builds                        |
| auth_tokens                         |
| block_device_mapping                |
| certificates                        |
| compute_nodes                       |
| console_pools                       |
| consoles                            |
| export_devices                      |
| fixed_ips                           |
| floating_ips                        |
| instance_actions                    |
| instance_metadata                   |
| instance_type_extra_specs           |
| instance_types                      |
| instances                           |
| iscsi_targets                       |
| key_pairs                           |
| migrate_version                     |
| migrations                          |
| networks                            |
| projects                            |
| provider_fw_rules                   |
| quotas                              |
| security_group_instance_association |
| security_group_rules                |
| security_groups                     |
| services                            |
| snapshots                           |
| user_project_association            |
| user_project_role_association       |
| user_role_association               |
| users                               |
| virtual_interfaces                  |
| virtual_storage_arrays              |
| volume_metadata                     |
| volume_type_extra_specs             |
| volume_types                        |
| volumes                             |
| zones                               |
+-------------------------------------+
39 rows in set (0.00 sec)

mysql> select id,netmask,bridge,gateway,vlan,dhcp_start,bridge_interface from networks;
+----+---------------+--------+---------------+------+---------------+------------------+
| id | netmask       | bridge | gateway       | vlan | dhcp_start    | bridge_interface |
+----+---------------+--------+---------------+------+---------------+------------------+
|  1 | 255.255.255.0 | br100  | 192.168.128.1 |  100 | 192.168.128.3 | eth0             |
|  2 | 255.255.255.0 | br101  | 192.168.129.1 |  101 | 192.168.129.3 | eth0             |
|  3 | 255.255.255.0 | br102  | 192.168.130.1 |  102 | 192.168.130.3 | eth0             |
|  4 | 255.255.255.0 | br103  | 192.168.131.1 |  103 | 192.168.131.3 | eth0             |
|  5 | 255.255.255.0 | br104  | 192.168.132.1 |  104 | 192.168.132.3 | eth0             |
|  6 | 255.255.255.0 | br105  | 192.168.133.1 |  105 | 192.168.133.3 | eth0             |
|  7 | 255.255.255.0 | br106  | 192.168.134.1 |  106 | 192.168.134.3 | eth0             |
|  8 | 255.255.255.0 | br107  | 192.168.135.1 |  107 | 192.168.135.3 | eth0             |
+----+---------------+--------+---------------+------+---------------+------------------+
8 rows in set (0.00 sec)

# From this result you can see the network is split into 8 groups, which presumably cannot reach each other; how to assign them from the management interface is still not clear to me, though
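
The same layout can also be read back without touching MySQL directly, since nova-manage can list the networks it manages (output omitted here):

sudo nova-manage network list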

mysql> show columns from floating_ips;
+---------------+--------------+------+-----+---------+----------------+
| Field         | Type         | Null | Key | Default | Extra          |
+---------------+--------------+------+-----+---------+----------------+
| created_at    | datetime     | YES  |     | NULL    |                |
| updated_at    | datetime     | YES  |     | NULL    |                |
| deleted_at    | datetime     | YES  |     | NULL    |                |
| deleted       | tinyint(1)   | YES  |     | NULL    |                |
| id            | int(11)      | NO   | PRI | NULL    | auto_increment |
| address       | varchar(255) | YES  |     | NULL    |                |
| fixed_ip_id   | int(11)      | YES  | MUL | NULL    |                |
| project_id    | varchar(255) | YES  |     | NULL    |                |
| host          | varchar(255) | YES  |     | NULL    |                |
| auto_assigned | tinyint(1)   | YES  |     | NULL    |                |
+---------------+--------------+------+-----+---------+----------------+
10 rows in set (0.00 sec)


mysql> select id,address,host from floating_ips;
+-----+----------------+------+
| id  | address        | host |
+-----+----------------+------+
|   1 | 172.17.122.1   | NULL |
|   2 | 172.17.122.2   | NULL |
|   3 | 172.17.122.3   | NULL |
...
| 248 | 172.17.122.248 | NULL |
| 249 | 172.17.122.249 | NULL |
| 250 | 172.17.122.250 | NULL |
| 251 | 172.17.122.251 | NULL |
| 252 | 172.17.122.252 | NULL |
| 253 | 172.17.122.253 | NULL |
| 254 | 172.17.122.254 | NULL |
+-----+----------------+------+
254 rows in set (0.00 sec)


# Public (floating) IPs run from 172.17.122.1 ~ 172.17.122.254
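
These can be listed through nova-manage as well, which also shows which project/host each address is bound to:

sudo nova-manage floating list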

mysql> show columns from fixed_ips;
+----------------------+--------------+------+-----+---------+----------------+
| Field                | Type         | Null | Key | Default | Extra          |
+----------------------+--------------+------+-----+---------+----------------+
| created_at           | datetime     | YES  |     | NULL    |                |
| updated_at           | datetime     | YES  |     | NULL    |                |
| deleted_at           | datetime     | YES  |     | NULL    |                |
| deleted              | tinyint(1)   | YES  |     | NULL    |                |
| id                   | int(11)      | NO   | PRI | NULL    | auto_increment |
| address              | varchar(255) | YES  |     | NULL    |                |
| network_id           | int(11)      | YES  | MUL | NULL    |                |
| instance_id          | int(11)      | YES  | MUL | NULL    |                |
| allocated            | tinyint(1)   | YES  |     | NULL    |                |
| leased               | tinyint(1)   | YES  |     | NULL    |                |
| reserved             | tinyint(1)   | YES  |     | NULL    |                |
| virtual_interface_id | int(11)      | YES  | MUL | NULL    |                |
| host                 | varchar(255) | YES  |     | NULL    |                |
+----------------------+--------------+------+-----+---------+----------------+
13 rows in set (0.00 sec)


mysql> select id,address,network_id,host,reserved from fixed_ips where id%256=0;
+------+-----------------+------------+------+----------+
| id   | address         | network_id | host | reserved |
+------+-----------------+------------+------+----------+
|  256 | 192.168.128.255 |          1 | NULL |        1 |
|  512 | 192.168.129.255 |          2 | NULL |        1 |
|  768 | 192.168.130.255 |          3 | NULL |        1 |
| 1024 | 192.168.131.255 |          4 | NULL |        1 |
| 1280 | 192.168.132.255 |          5 | NULL |        1 |
| 1536 | 192.168.133.255 |          6 | NULL |        1 |
| 1792 | 192.168.134.255 |          7 | NULL |        1 |
| 2048 | 192.168.135.255 |          8 | NULL |        1 |
+------+-----------------+------------+------+----------+
8 rows in set (0.01 sec)

# Internal IPs start from 192.168.128.x, in groups of 256; in each group .0 ~ .2 are reserved
# and allocation to the virtual bridge and the virtual machines starts from .3
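
nova-manage has a matching listing for the fixed range too; it prints every fixed IP, so trim it:

sudo nova-manage fixed list | head -20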

wistor@ubuntu:~$ sudo less /etc/nova/nova.conf

--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--force_dhcp_release=True
--verbose
--sql_connection=mysql://nova:letmeinN@0.0.0.0/nova
--public_interface=eth1
--vlan_interface=eth1
--zone_name=nova
--node_availability_zone=nova
--storage_availability_zone=nova
--allow_admin_api=true
--enable_zone_routing=true
--api_paste_config=api-paste-keystone.ini
--vncserver_host=0.0.0.0
--vncproxy_url=http://172.17.123.81:6080
--ajax_console_proxy_url=http://172.17.123.81:8000
--glance_api_servers=172.17.123.81:9292
--s3_dmz=172.17.123.81
--ec2_host=172.17.123.81
--s3_host=172.17.123.81
--osapi_host=172.17.123.81
--rabbit_host=172.17.123.81
--dmz_net=192.168.128.0
--dmz_mask=255.255.0.0
--fixed_range=192.168.128.0/17
--keystone_ec2_url=http://172.17.123.81:5000/v2.0/ec2tokens
--multi_host=True
--send_arp_for_ha=True
--libvirt_type=kvm
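
If you later edit nova.conf by hand, the services have to be restarted to pick the change up. A sketch, using the service names as packaged on Ubuntu 11.10 (adjust to whatever ls /etc/init/nova-* shows on your box):

for svc in nova-api nova-scheduler nova-network nova-compute nova-vncproxy; do
  sudo service "$svc" restart
done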



After creating a VM through the dashboard, you can see that it automatically brings up a virtual bridge, with IP 192.168.128.4

wistor@ubuntu:~$ sudo brctl show
bridge name     bridge id               STP enabled     interfaces
br100           8000.02163e38974d       no              vlan100
                                                        vnet1

wistor@ubuntu:~$ ifconfig br100
br100     Link encap:Ethernet  HWaddr 02:16:3e:38:97:4d
          inet addr:192.168.128.4  Bcast:192.168.128.255  Mask:255.255.255.0
          inet6 addr: fe80::640b:98ff:fe7e:4e8d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:510 (510.0 B)

wistor@ubuntu:~$ virsh list
 Id Name                 State
----------------------------------
  1 instance-00000001    running



You can also take a look at the VM's definition with virsh edit <id>

<emulator>/usr/bin/kvm </emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/nova/instances/instance-00000002/disk'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <interface type='bridge'>
      <mac address='02:16:3e:42:73:ac'/>
      <source bridge='br100'/>
      <filterref filter='nova-instance-instance-00000002-02163e4273ac'>
        <parameter name='DHCPSERVER' value='192.168.128.4'/>
        <parameter name='IP' value='192.168.128.3'/>
      </filterref>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
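
If you only want to read the domain XML rather than edit it, virsh dumpxml does the same thing without opening an editor:

sudo virsh dumpxml instance-00000001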

After installing keystone, I assumed euca2ools would work the same way as before, talking directly to nova without going through keystone authentication.
After trying for a long time and digging through related articles, I found out I was wrong: there is no such path, and you have to authenticate through keystone.


How to set things up without keystone:
http://docs.openstack.org/diablo/openstack-compute/admin/content/setting-up-openstack-compute-environment-on-the-compute-node.html
http://docs.openstack.org/diablo/openstack-compute/admin/content/creating-certifications.html
http://docs.openstack.org/diablo/openstack-compute/admin/content/managing-compute-users.html

How to set things up with keystone:
http://docs.openstack.org/diablo/openstack-compute/install/content/identity-define-services-endpoints.html
https://answers.launchpad.net/nova/+question/178940


root@ubuntu:~$ apt-get install -y euca2ools unzip
root@ubuntu:~$ mkdir -p creds
root@ubuntu:~$ cd creds

# This is the procedure when keystone is not installed; I just wanted to see which parameters the most basic approach sets up
root@ubuntu:~/creds$ nova-manage user admin novaadmin
export EC2_ACCESS_KEY=38d3466b-79d9-41ad-984c-c44cb65dcf50
export EC2_SECRET_KEY=7c04de4a-083c-47ec-bb39-10bee47c0342
root@ubuntu:~/creds$ nova-manage project create proj novaadmin
root@ubuntu:~/creds$ nova-manage project zipfile proj novaadmin
root@ubuntu:~/creds$ unzip nova.zip
Archive:  nova.zip
 extracting: novarc
 extracting: pk.pem
 extracting: cert.pem
 extracting: cacert.pem
root@ubuntu:~/creds$ cat novarc
NOVARC=$(readlink -f "${BASH_SOURCE:-${0}}" 2>/dev/null) ||
    NOVARC=$(python -c 'import os,sys; print os.path.abspath(os.path.realpath(sys.argv[1]))' "${BASH_SOURCE:-${0}}")
NOVA_KEY_DIR=${NOVARC%/*}
export EC2_ACCESS_KEY="novaadmin:proj"
export EC2_SECRET_KEY="7c04de4a-083c-47ec-bb39-10bee47c0342"
export EC2_URL="http://172.17.123.6:8773/services/Cloud"
export S3_URL="http://172.17.123.6:3333"
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set
alias ec2-bundle-image="ec2-bundle-image --cert ${EC2_CERT} --privatekey ${EC2_PRIVATE_KEY} --user 42 --ec2cert ${NOVA_CERT}"
alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_KEY} --url ${S3_URL} --ec2cert ${NOVA_CERT}"
export NOVA_API_KEY="novaadmin"
export NOVA_USERNAME="novaadmin"
export NOVA_PROJECT_ID="proj"
export NOVA_URL="http://172.17.123.6:8774/v1.1/"
export NOVA_VERSION="1.1"

# Once you have read it you can delete all of this; the important parts are EC2_ACCESS_KEY, EC2_SECRET_KEY and EC2_URL
root@ubuntu:~/creds$ nova-manage project delete proj
root@ubuntu:~/creds$ nova-manage user delete novaadmin


# Before experimenting with EC2, I first created another account in keystone, tenant/user = demo/demoUser, and granted it the roles
root@ubuntu:~/creds$ keystone-manage tenant add demo
root@ubuntu:~/creds$ keystone-manage user add demoUser
root@ubuntu:~/creds$ keystone-manage role grant Admin demoUser demo
root@ubuntu:~/creds$ keystone-manage role grant Member demoUser
root@ubuntu:~/creds$ keystone-manage role grant KeystoneAdmin demoUser
root@ubuntu:~/creds$ keystone-manage role grant KeystoneServiceAdmin demoUser

# Next, register a credential with keystone for EC2 to use
# The command is: keystone-manage credentials add <user> EC2 <tenant:user> <password> <tenant>
root@ubuntu:~/creds$ keystone-manage credentials add demoUser EC2 demo:demoUser 12345 demo

# Use novarc as a template to make another rc file
root@ubuntu:~/creds$ cp novarc myrc

# Change the EC2_ACCESS_KEY and EC2_SECRET_KEY inside it
root@ubuntu:~/creds$ cat myrc | grep EC2
export EC2_ACCESS_KEY="demo:demoUser"
export EC2_SECRET_KEY="12345"
export EC2_URL="http://172.17.123.6:8773/services/Cloud"
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
alias ec2-bundle-image="ec2-bundle-image --cert ${EC2_CERT} --privatekey ${EC2_PRIVATE_KEY} --user 42 --ec2cert ${NOVA_CERT}"
alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_KEY} --url ${S3_URL} --ec2cert ${NOVA_CERT}"

root@ubuntu:~/creds$ source myrc
root@ubuntu:~/creds$ euca-describe-availability-zones verbose
AVAILABILITYZONE        nova    available
AVAILABILITYZONE        |- ubuntu
AVAILABILITYZONE        | |- nova-vncproxy      enabled :-) 2012-04-12 05:32:49
AVAILABILITYZONE        | |- nova-scheduler     enabled :-) 2012-04-12 05:32:48
AVAILABILITYZONE        | |- nova-network       enabled :-) 2012-04-12 05:32:52
AVAILABILITYZONE        | |- nova-compute       enabled :-) 2012-04-12 05:32:48
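
With the rc sourced and the services answering, the usual euca2ools round-trip can be tried from here. A sketch; "mykey" is just an example keypair name, and the image id is a placeholder for whatever euca-describe-images reports once an image has been registered in glance:

euca-add-keypair mykey > mykey.priv    # example keypair name
chmod 600 mykey.priv
euca-describe-images
euca-run-instances ami-00000001 -k mykey -t m1.tiny    # placeholder image id
euca-describe-instances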



Let's sneak into the database for another look.

mysql> use keystone;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+--------------------+
| Tables_in_keystone |
+--------------------+
| credentials        |
| endpoint_templates |
| endpoints          |
| roles              |
| services           |
| tenants            |
| token              |
| user_roles         |
| users              |
+--------------------+
9 rows in set (0.00 sec)

mysql> select id,name,tenant_id from users;
+----+----------+-----------+
| id | name     | tenant_id |
+----+----------+-----------+
|  1 | admin    |      NULL |
|  2 | demoUser |      NULL |
+----+----------+-----------+

mysql> select * from roles;
+----+----------------------+------+------------+
| id | name                 | desc | service_id |
+----+----------------------+------+------------+
|  1 | Admin                | NULL |       NULL |
|  2 | Member               | NULL |       NULL |
|  3 | KeystoneAdmin        | NULL |       NULL |
|  4 | KeystoneServiceAdmin | NULL |       NULL |
+----+----------------------+------+------------+

mysql> select * from tenants;
+----+-------+------+---------+
| id | name  | desc | enabled |
+----+-------+------+---------+
|  1 | admin | NULL |       1 |
|  2 | demo  | NULL |       1 |
+----+-------+------+---------+

mysql> select * from services;
+----+----------+----------+---------------------------+
| id | name     | type     | desc                      |
+----+----------+----------+---------------------------+
|  1 | nova     | compute  | Nova Compute Service      |
|  2 | glance   | image    | Glance Image Service      |
|  3 | keystone | identity | Keystone Identity Service |
+----+----------+----------+---------------------------+

mysql> select * from credentials;
+----+---------+-----------+------+---------------+---------+
| id | user_id | tenant_id | type | key           | secret  |
+----+---------+-----------+------+---------------+---------+
|  1 |       1 |         1 | EC2  | admin:admin   | letmein |
|  2 |       2 |         2 | EC2  | demo:demoUser | 12345   |
+----+---------+-----------+------+---------------+---------+


In the database we can see the credentials EC2 uses, and it turns out the original script has already created a set for EC2 for us.
So all we really have to do is set EC2_ACCESS_KEY (admin:admin), EC2_SECRET_KEY (letmein) and EC2_URL, and it is ready to use.
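
In other words, a minimal rc for that pre-created credential looks like this (values taken straight from the settings file and the credentials table above, endpoints the same as in novarc):

export EC2_ACCESS_KEY="admin:admin"
export EC2_SECRET_KEY="letmein"
export EC2_URL="http://172.17.123.6:8773/services/Cloud"
export S3_URL="http://172.17.123.6:3333"

Source it and euca-describe-availability-zones should answer just like it did with myrc.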


Related articles:
http://wikitech.wikimedia.org/view/OpenStack#On_all_nova_nodes.2C_install_the_nova_PPA
http://docs.openstack.org/diablo/openstack-compute/admin/content/reference-for-flags-in-nova-conf.html


Addendum: the output of nova-manage config list, for the record
--storage_availability_zone=nova
--vc_image_name=vc_image
--ec2_dmz_host=$my_ip
--fixed_range=192.168.128.0/17
--compute_topic=compute
--vsa_topic=vsa
--dmz_mask=255.255.0.0
--fixed_range_v6=fd00::/48
--glance_api_servers=172.17.123.81:9292
--rabbit_password=guest
--user_cert_subject=/C=US/ST=California/L=MountainView/O=AnsoLabs/OU=NovaDev/CN=%s-%s-%s
--s3_dmz=172.17.123.81
--quota_ram=51200
--find_host_timeout=30
--aws_access_key_id=admin
--vncserver_host=0.0.0.0
--network_size=256
--virt_mkfs=windows=mkfs.ntfs --fast --label %(fs_label)s %(target)s
--virt_mkfs=linux=mkfs.ext3 -L %(fs_label)s -F %(target)s
--virt_mkfs=default=mkfs.ext3 -L %(fs_label)s -F %(target)s
--enable_new_services
--my_ip=172.17.123.81
--live_migration_retry_count=30
--lockout_attempts=5
--credential_cert_file=cert.pem
--quota_max_injected_files=5
--zone_capabilities=hypervisor=xenserver;kvm,os=linux;windows
--logdir=/var/log/nova
--sqlite_db=nova.sqlite
--nouse_forwarded_for
--cpuinfo_xml_template=/usr/lib/python2.7/dist-packages/nova/virt/cpuinfo.xml.template
--num_networks=1
--boot_script_template=/usr/lib/python2.7/dist-packages/nova/cloudpipe/bootscript.template
--vsa_manager=nova.vsa.manager.VsaManager
--live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER
--notification_driver=nova.notifier.no_op_notifier
--osapi_max_limit=1000
--vsa_multi_vol_creation
--rabbit_port=5672
--s3_access_key=notchecked
--rabbit_max_retries=0
--noresume_guests_state_on_host_boot
--ajax_console_proxy_url=http://172.17.123.81:8000
--injected_network_template=/usr/lib/python2.7/dist-packages/nova/virt/interfaces.template
--snapshot_name_template=snapshot-%08x
--vncproxy_url=http://172.17.123.81:6080
--osapi_port=8774
--ajax_console_proxy_topic=ajax_proxy
--minimum_root_size=10737418240
--quota_cores=20
--nouse_project_ca
--vsa_name_template=vsa-%08x
--default_log_levels=amqplib=WARN,sqlalchemy=WARN,boto=WARN,eventlet.wsgi.server=WARN
--volume_topic=volume
--nolibvirt_use_virtio_for_bridges
--volume_name_template=volume-%08x
--lock_path=/var/lock/nova
--live_migration_uri=qemu+tcp://%s/system
--allow_same_net_traffic
--flat_network_dns=8.8.4.4
--default_vsa_instance_type=m1.small
--live_migration_bandwidth=0
--connection_type=libvirt
--noupdate_dhcp_on_disassociate
--default_project=openstack
--s3_port=3333
--logfile_mode=0644
--logging_context_format_string=%(asctime)s %(levelname)s %(name)s [%(request_id)s %(user_id)s %(project_id)s] %(message)s
--s3_secret_key=notchecked
--instance_name_template=instance-%08x
--ec2_host=172.17.123.81
--norabbit_durable_queues
--credential_key_file=pk.pem
--vpn_cert_subject=/C=US/ST=California/L=MountainView/O=AnsoLabs/OU=NovaDev/CN=project-vpn-%s-%s
--logging_debug_format_suffix=from (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d
--network_host=wistor
--console_manager=nova.console.manager.ConsoleProxyManager
--rpc_backend=nova.rpc.impl_kombu
--rabbit_userid=guest
--osapi_scheme=http
--credential_rc_file=%src
--dhcp_domain=novalocal
--sql_connection=mysql://nova:letmeinN@0.0.0.0/nova
--console_topic=console
--instances_path=$state_path/instances
--noflat_injected
--use_local_volumes
--host=wistor
--fixed_ip_disassociate_timeout=600
--quota_instances=10
--quota_max_injected_file_content_bytes=10240
--libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtBridgeDriver
--floating_range=4.4.4.0/24
--multi_host
--lockout_window=15
--db_backend=sqlalchemy
--credentials_template=/usr/lib/python2.7/dist-packages/nova/auth/novarc.template
--dmz_net=192.168.128.0
--sql_retry_interval=10
--vpn_start=1000
--volume_driver=nova.volume.driver.ISCSIDriver
--crl_file=crl.pem
--nomonkey_patch
--rpc_conn_pool_size=30
--s3_host=172.17.123.81
--vlan_interface=eth1
--novolume_force_update_capabilities
--scheduler_topic=scheduler
--verbose
--sql_max_retries=12
--default_instance_type=m1.small
--vsa_volume_type_name=VSA volume type
--firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
--password_length=12
--console_host=wistor
--libvirt_type=kvm
--image_decryption_dir=/tmp
--vpn_key_suffix=-vpn
--use_cow_images
--block_size=268435456
--null_kernel=nokernel
--libvirt_xml_template=/usr/lib/python2.7/dist-packages/nova/virt/libvirt.xml.template
--vpn_client_template=/usr/lib/python2.7/dist-packages/nova/cloudpipe/client.ovpn.template
--credential_vpn_file=nova-vpn.conf
--service_down_time=60
--default_notification_level=INFO
--nopublish_errors
--quota_metadata_items=128
--allowed_roles=cloudadmin,itsec,sysadmin,netadmin,developer
--logging_exception_prefix=(%(name)s): TRACE:
--enabled_apis=ec2,osapi
--quota_max_injected_file_path_bytes=255
--scheduler_manager=nova.scheduler.manager.SchedulerManager
--ec2_port=8773
--monkey_patch_modules=nova.api.ec2.cloud:nova.notifier.api.notify_decorator,nova.compute.api:nova.notifier.api.notify_decorator
--rabbit_retry_backoff=2
--auth_token_ttl=3600
--quota_volumes=10
--libvirt_uri=
--ec2_scheme=http
--keys_path=$state_path/keys
--vpn_image_id=0
--host_state_interval=120
--block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_NON_SHARED_INC
--noauto_assign_floating_ip
--ca_file=cacert.pem
--quota_floating_ips=10
--max_vcs_in_vsa=32
--nofake_call
--default_publisher_id=wistor
--state_path=/var/lib/nova
--max_nbd_devices=16
--sql_idle_timeout=3600
--vpn_ip=$my_ip
--default_image=ami-11111
--aws_secret_access_key=admin
--nouse_ipv6
--stub_network=False
--key_file=private/cakey.pem
--nofake_network
--force_dhcp_release
--osapi_extensions_path=/var/lib/nova/extensions
--quota_gigabytes=1000
--region_list=
--auth_driver=nova.auth.dbdriver.DbDriver
--network_manager=nova.network.manager.VlanManager
--enable_zone_routing
--root_helper=sudo
--osapi_host=172.17.123.81
--zone_name=nova
--logging_default_format_string=%(asctime)s %(levelname)s %(name)s [-] %(message)s
--timeout_nbd=10
--compute_driver=nova.virt.connection.get_connection
--libvirt_vif_type=bridge
--nofake_rabbit
--rabbit_host=172.17.123.81
--vnc_keymap=en-us
--rescue_timeout=0
--ca_path=$state_path/CA
--nouse_syslog
--superuser_roles=cloudadmin
--osapi_path=/v1.1/
--nouse_deprecated_auth
--ec2_path=/services/Cloud
--norabbit_use_ssl
--rabbit_retry_interval=1
--node_availability_zone=nova
--lockout_minutes=15
--db_driver=nova.db.api
--create_unique_mac_address_attempts=5
--ajaxterm_portrange=10000-12000
--volume_manager=nova.volume.manager.VolumeManager
--nostart_guests_on_host_boot
--vlan_start=100
--rpc_thread_pool_size=1024
--ipv6_backend=rfc2462
--vnc_enabled
--global_roles=cloudadmin,itsec
--rabbit_virtual_host=/
--network_driver=nova.network.linux_net
--ajax_console_proxy_port=8000
--project_cert_subject=/C=US/ST=California/L=MountainView/O=AnsoLabs/OU=NovaDev/CN=project-ca-%s-%s
--image_service=nova.image.glance.GlanceImageService
--control_exchange=nova
--cnt_vpn_clients=0
--vsa_part_size_gb=100
--vncproxy_topic=vncproxy
--compute_manager=nova.compute.manager.ComputeManager
--network_topic=network



