http://etherealmind.com/iscsi-network-designs-part-1-some-basics/
http://etherealmind.com/iscsi-network-designs-part-2-simple-scaling/
iSCSI initiator performance
When implementing iSCSI in your server operating system you have two choices: software, or hardware (using a Host Bus Adapter).
- Software: the TCP/IP packets are generated by the operating system, using the server's main CPU
- Hardware: a Host Bus Adapter is a PCI card that performs the TCP processing. This is known as TCP Offload, and the cards are generally known as TCP Offload Engines (TOEs).
http://etherealmind.com/iscsi-network-designs-part-2-simple-scaling/
Increase the bandwidth
Use Link Aggregation Control Protocol (LACP) or EtherChannel to increase the bandwidth to between 2 and 6 gigabits per second.
There are some pre-conditions:
1) server NICs and drivers must support LACP
2) switches must support LACP
3) switches must support enough LACP bundles. (cheaper switches may only support a few LACP bundles per switch).
4) all bundles must terminate on the same switch (or switch stack or chassis).
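As a concrete illustration, the server side of such a bundle can be sketched with the Linux bonding driver in 802.3ad (LACP) mode. The interface names (eth0, eth1, bond0) are assumptions, and the matching switch ports must be configured as an LACP port-channel as well:

```shell
# Sketch: build an 802.3ad (LACP) bond on Linux with iproute2 (run as root).
# eth0/eth1/bond0 are placeholder interface names.
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
# Inspect the LACP negotiation state with the switch
cat /proc/net/bonding/bond0
```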
Software Initiators
- using a general purpose CPU to perform the data transformation
- it is not optimised for performance
TCP Offload Engines (TOE)
- TOE cards are able to improve the TCP performance of a server
Host Bus Adapters
- generic term for connecting the I/O bus of your server to an external system
- iSCSI Header and Data Digest calculations are very CPU intensive. Only a full iSCSI offload HBA has the logic built into the ASIC to accelerate these calculations.
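The digest cost is easy to see with a software initiator: open-iscsi computes the CRC32C digests on the host CPU when they are enabled, which is exactly the work a full iSCSI offload HBA moves into the ASIC. A sketch of enabling them per node (the target IQN is a placeholder):

```shell
# Sketch: enable iSCSI header and data digests with open-iscsi.
# The IQN below is a placeholder; these are stock iscsid.conf option names.
iscsiadm -m node -T iqn.2010-01.com.example:target0 \
    -o update -n node.conn[0].iscsi.HeaderDigest -v CRC32C
iscsiadm -m node -T iqn.2010-01.com.example:target0 \
    -o update -n node.conn[0].iscsi.DataDigest -v CRC32C
```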
Redundancy
iSCSI implements its own HA features. There are three ways to achieve this:
- Link Aggregation – LACP
- iSCSI Multipathing – Active / Standby
- iSCSI Multipathing – Active / Active
Link Aggregation – Etherchannel / LACP
- two network cards in the servers
- the two network cards must connect to the same Cisco switch
- network adapters and drivers must have support for LACP
- at least one switch that supports LACP (sometimes known as EtherChannel)
- you must be able to configure both the server drivers and switch configuration
Strictly speaking this is not really redundancy: both NICs sit in the same machine, and what usually fails is not the NIC but something else. If the machine itself has a problem, LACP cannot help. The real benefit of LACP is increased read/write performance, especially on writes, so it works well as load balancing. It also requires switch support to function.
iSCSI Multipathing Active / Standby => MC/S (Multiple Connections per Session)
This is defined in the iSCSI RFC as the method for achieving high availability. The iSCSI initiator will initiate at least two TCP connections to the iSCSI target. Data will flow down the primary connection until a failure is detected and data will then be diverted to the second connection.
I think the author's description here is wrong. After checking many sources, I believe both the diagram and the text are actually describing MC/S, so "Active/Standby" is not an appropriate title.
iSCSI Multipathing Active / Active => MPIO (Multi-Path Input/Output)
MPIO itself offers two schemes, Active/Standby and Active/Active, so I think the author's title is not quite right. Of the related articles, I find this one the clearest:
http://blog.xuite.net/weirchen/blog/22184665
MPIO is the more widely supported of the two today. I think that is because MC/S has more restrictions: besides not being configurable per LUN, it also seems to implicitly require that the two connections to the target terminate on two NICs of the same host (this still needs to be confirmed). What is certain is that if the target is backed by a distributed storage server, MPIO can spread the connections across different hosts, as long as they can all reach the same volume.
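On Linux, the MPIO approach described above is typically realised with open-iscsi plus dm-multipath: log in to the target through more than one portal and let the multipath layer group the resulting paths to the same LUN. The portal addresses below are placeholders:

```shell
# Sketch: MPIO with open-iscsi and dm-multipath; portal IPs are placeholders.
iscsiadm -m discovery -t sendtargets -p 192.168.10.1
iscsiadm -m discovery -t sendtargets -p 192.168.20.1
# Log in to the discovered target over both portals -> two independent sessions
iscsiadm -m node --login
# dm-multipath groups the paths to the same LUN into one device, using either
# a failover (active/standby) or multibus (active/active) path policy
multipath -ll
```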
Conclusion