Eric's Hexo Space

[Rambling] To the support staff at Synology Inc.: please stop making the same mistake, okay?

This article was published more than 1,683 days ago; its content may be outdated. If you have questions, please contact the author.

Update: Synology support wrote asking me to remove the customers' contact email addresses, phone numbers, device serial numbers, and similar details from the quoted messages. So I did.

Around May of last year, I unexpectedly received an email from Synology technical support.

Here is what it said:

Fwd: Synology Online Support #124653: [DS1010+ DSM 3.2-1955] [sameera]

Synology Technical Support <[email protected]>

Hi Eric,

Sorry to trouble

Support need your expertise for this inquiry:

[Environment]

Win2000/Win7

Multiple Sessions(iSCSI)

[Symptom]

A customer who uses multiple sessions from initiators to connect the same iSCSI target at the same time. Neither did he operate such function in VMFS nor OCFS environment, thus the volume had corrupted in the end.

Is there any chance to get back the customer’s data?

[Kernel Log] or [SSH remote]

In attachment.

Look forward to your advise. Thanks!

Sincerely,

Antoine Yang

---------- Forwarded message ----------

From: <>

Date: Thu, May 17, 2012 at 2:07 PM

Subject: Synology Online Support #124653: [DS1010+ DSM 3.2-1955] [sameera]

To: [email protected]










































Problem Explanation:

Dear Sir,this is to inform that one of our client using nas for iSCSI target with enable multiple seasons to his data backups. now he can not Access to the targets partition after restart the client machine and he gave the error massage i have given below for your kind consideration please give me solution to update him. he need his data recovery.



Problem Reproduce Steps:

Updated to DSM 4.0 and get debug.dat file

| Product Information/Question | Personal Information |
| --- | --- |
| Product: DS1010+ | Type of Customer: Distributor |
| Product Version: DSM 3.2-1955 | Company Name: Active Solution |
| Product Serial Number: | Full Name: |
| Type of Usage: For Business Use | Job Title: System Engineer |
| Hard Drive Model: | E-mail: |
| Printer Model: | Address: |
| Language: English | Location: Asia & Oceania / Sri Lanka |
| Client Operating System: Windows 2000, Windows 7 | Telephone: |
| LAN Setting: Static IP | Fax: |
| Type of Message: Need Technical Support | |

**Previous support ids reported by the same email or serial: 81415, 81836, 93625, 93829, 113967, 113972**

 
 

Uh-huh, just an ordinary technical-support email, right? What could be strange about that?

The point is: I never filed this support request in the first place!

At first I didn't take it seriously; I figured they had mixed something up and would notice on their own.

Three days later, though, they sent another email asking me to confirm whether the problem had been resolved.

That was too much, so I replied to tell them this request was never mine!

The day after I replied, someone at Synology wrote back to apologize, and I assumed that was the end of it.

Then on 5/31 it happened again, this time with an issue reported by a different customer.

Dear Eric,

User’s respond does not seem useful.

However we ask user to replace the HDD and also try with spare HDD to see if same issue.

In the meantime could we ask for QC’s assistant to test in raid 6 use one LUN with 2 VMware ESX 4.1 Servers as replication destination with the software Veeam?

Below is user’s response, please allow me to fully quote it:

“1. From which exact DS version the performance drop?

I can’t recall exactly what the build Nr. of the first available 4.0 DSM was, but i’m sure, it was the very first one which became available for the DS411. Since then i have checked all available versions, upgraded to all of them one after another and the issue was there with every one of them. I hope this covers your question.

2. How is the capacity usage (used/available) of block LUN used by VMFS. – I have tested with many scenarios, it doesn’t matter if the LUN is empty or if it is almost full.

I’m not sure if it’s the VMFS performance issue or just lvm block LUN on raid level 6.

Could you turn off the DS, take out all original HDD and obtain another 1 spare HDD to place on DS to install for block level ISCSI and see if same issue? – I can only try that some days or weeks later i’m afraid. Is this absolutely neccessary? How is it going to be representative since one HDD is not going to be the same as RAID6.

3. The above will also clarify the below:

/dev/sdd is abnormal in Reallocated_Sector_Ct field with RAW_VALUE 118

5 Reallocated_Sector_Ct 0x0033 194 194 140 Pre-fail Always - 118

It means that 118 bad sectors have mapped to reserved sectors by HDD firmware.

I’m not sure if it’s the root cause result to bad write performance on lvm block LUN of raid 6.

Does this mean that the 4th Disk has some bad sectors? I have another 2 Disks of the same type which i use as external backup destinations, maybe i could take one of them to replace the „bad one”. If it’s going to help you.

Thank you.

Best Regards

Jim Cheng
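As an aside, the SMART attribute line quoted above is easy to decode programmatically. A rough sketch, assuming the standard column layout of smartctl's attribute table (the `parse_smart_row` helper name is my own, purely illustrative):

```python
# Sketch: parse one smartctl attribute row like the one quoted above.
# Column layout assumed: ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
def parse_smart_row(row: str) -> dict:
    fields = row.split()
    return {
        "id": int(fields[0]),
        "name": fields[1],
        "flag": fields[2],
        "value": int(fields[3]),
        "worst": int(fields[4]),
        "thresh": int(fields[5]),
        "type": fields[6],
        "updated": fields[7],
        "raw_value": int(fields[-1]),
    }

row = "5 Reallocated_Sector_Ct 0x0033 194 194 140 Pre-fail Always - 118"
attr = parse_smart_row(row)
print(attr["name"], attr["raw_value"])  # the raw value is the 118 remapped sectors
```

The raw value, not the normalized VALUE/WORST/THRESH columns, is what the support engineer is reading: 118 sectors already remapped by the drive firmware.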

On Wed, May 30, 2012 at 5:39 PM, <> wrote:

Dear Jim

i answer your questions in the body of your mail.

From: Synology Technical Support [mailto:[email protected]]

Sent: Tuesday, May 29, 2012 9:23 AM

To:

Subject: Re: Synology Online Support #125527: [DS411 DSM 4.0-2228] [balazs.eckert]

Dear ,

Sorry about the wait.

We may need more information.

1. From which exact DS version the performance drop?

I can’t recall exactly what the build Nr. of the first available 4.0 DSM was, but i’m sure, it was the very first one which became available for the DS411. Since then i have checked all available versions, upgraded to all of them one after another and the issue was there with every one of them. I hope this covers your question.

2. How is the capacity usage (used/available) of block LUN used by VMFS. – I have tested with many scenarios, it doesn’t matter if the LUN is empty or if it is almost full.

I’m not sure if it’s the VMFS performance issue or just lvm block LUN on raid level 6.

Could you turn off the DS, take out all original HDD and obtain another 1 spare HDD to place on DS to install for block level ISCSI and see if same issue? – I can only try that some days or weeks later i’m afraid. Is this absolutely neccessary? How is it going to be representative since one HDD is not going to be the same as RAID6.

3. The above will also clarify the below:

/dev/sdd is abnormal in Reallocated_Sector_Ct field with RAW_VALUE 118

5 Reallocated_Sector_Ct 0x0033 194 194 140 Pre-fail Always - 118

It means that 118 bad sectors have mapped to reserved sectors by HDD firmware.

I’m not sure if it’s the root cause result to bad write performance on lvm block LUN of raid 6.

Does this mean that the 4th Disk has some bad sectors? I have another 2 Disks of the same type which i use as external backup destinations, maybe i could take one of them to replace the „bad one”. If it’s going to help you.

Thank you.

Best Regards

Jim Cheng

On Mon, May 28, 2012 at 9:30 AM, Synology Technical Support <[email protected]> wrote:

Dear Eckert,

Thank you for waiting.

We have pass the information to our developer for further advice.

Once any news we will update you as soon as possible.

Thank you.

Best Regards

Jim Cheng

On Wed, May 23, 2012 at 5:09 PM, <> wrote:

Dear Jim,

int he DS411 there are 4 Western Digital 2TB WD20EARS-00MVWB0 hard drives arranged in 1 SHR array, with 2 drives fail tolerance. I think in this configuration there is no possibility to try something with a spare HDD, since all 4 drives are already parts of the one array that exists. We have used exactly the same configuration with the previous Firmware, where the speeds has been in the 35-40 MB range for the block based LUNs.

The actual LUNs used for the tests are:

 

ESX

ESXblock

Backup

Backupblock

 

i hope i could help you to clarify the situation

if you need more information, or maybe a remote session please let me know!

regards

 

From: Synology Technical Support [mailto:[email protected]]

Sent: Wednesday, May 23, 2012 10:06 AM

To:

Subject: Re: Synology Online Support #125527: [DS411 DSM 4.0-2228] [balazs.eckert]

Dear Balázs,

may I ask which LUN and HDD do you use in the test?

If there is a spare HDD, could you try see if same issue?

In the meantime, could you please let us know the approximate speed in DSM3.2 and if you could obtain a kernel log for us to check:

Please send us a copy of your kernel log, the instructions have been mentioned in below. The Kernel log is a technical log file which can give us more technical information about your system

[DSM3.0-1134 and above]

1. Login Management UI

2. You can see the similar link http://192.168.xx.xx:5000/webman/index.cgi

3. Add ?diagnose=debug to the link, like http://192.168.x.xx:5000/webman/index.cgi?diagnose=debug

4. Press “Enter”, then the file can be downloaded.

Thank you.

Best Regards

Jim Cheng
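The four download steps quoted above boil down to appending one query parameter to the management URL. A minimal sketch of that transformation (the `debug_url` helper is my own illustration, not a Synology API):

```python
# Sketch: turn a DSM management URL into the debug-log download URL
# by setting the query string to diagnose=debug, as the quoted steps describe.
from urllib.parse import urlsplit, urlunsplit

def debug_url(management_url: str) -> str:
    parts = urlsplit(management_url)
    return urlunsplit(parts._replace(query="diagnose=debug"))

print(debug_url("http://192.168.1.10:5000/webman/index.cgi"))
# → http://192.168.1.10:5000/webman/index.cgi?diagnose=debug
```

Opening the resulting URL in a logged-in browser session is what triggers the debug.dat download.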

On Tue, May 22, 2012 at 4:24 PM, <> wrote:

Problem Explanation:

After i upgraded to DSM 4.0 i have problems with iSCSI performance. I use one LUN with 2 VMware ESX 4.1 Servers as replication destination with the software Veeam. Prior to the upgrade i achieved “normal” write rates to the 1 TB block based LUN. With DSM 4.0 however the write speed seems to be very limited, also read rate is not the best (or as it was with DSM 3.2). The volume is formatted as VMFS 3.46. When i create another LUN, this time a file based one (no thin provisioning) the write speed seems to be “normal” again, for a file based LUN.

Read and write speeds with ESX server are as following:

block based LUN read: 33 MB/sec

block based LUN write: 6 MB/sec

file based LUN read: 36 MB/sec

file based LUN write: 39 MB/sec

Furthermore i detected another similar problem: when i use a disk based LUN formatted as NTFS on a Windows 2003 R2 server the write speeds also seem not to be very good. I did another test with a file based and a block based LUN:

Read and write speeds with Windows server 2003 R2, NTFS:

block based LUN read: 21 MB/sec

block based LUN write: 10 MB/sec

file based LUN read: 27 MB/sec

file based LUN write: 8-10 MB/sec

Maybe write speeds are non represantive, because Windows caches write operations. I could do another test with my PC via total commander, where i can disable caching.


With DSM 3.2 i had no problems with iSCSI, all speeds seemed to be normal.


Thank you for your help.

regards



Problem Reproduce Steps:

| Product Information/Question | Personal Information |
| --- | --- |
| Product: DS411 | Type of Customer: End User |
| Product Version: DSM 4.0-2228 | Company Name: |
| Product Serial Number: | Full Name: |
| Type of Usage: For Business Use | Job Title: IT manager |
| Hard Drive Model: Western Digital 2TB WD20EARS-00MVWB0 | E-mail: |
| Printer Model: | Address: Szeder u. 5. |
| Language: Deutsch, English, Magyar | Location: Europe & Russia / Hungary |
| Client Operating System: Windows 7 | Telephone: |
| LAN Setting: Static IP | Fax: |
| Type of Message: Need Technical Support | |




 
 

Once again I replied to Synology to tell them they had sent the email to the wrong person!

It wasn't until 6/4 that I got another apology. Then on 6/18 I received yet another email like this.

After that the misdirected emails finally stopped, and I slowly forgot about the whole thing.

Until last week, when I received another one.

Hi, Eric

Sorry to disturb. Support needs your expertise for this issue.

**[Environment]**

DSM4.2-3211

RS2212RP+

**[Symptom]**

The capacity of the iSCSI LUN can’t be released even if the user remove the LUN and Target.

From the log, there’re many connection error for the iSCSI but could it be the reason to cause the system fail to release the capacity?

Apr 7 14:24:35 kernel: [6202392.886215] iSCSI: Login negotiation failed from [192.168.1.14]

Apr 7 14:24:35 kernel: [6202392.892396] Unsupported iSCSI IETF Pre-RFC Revision, version Min/Max 0x0a/0x32, rejecting login.

Apr 7 14:24:35 kernel: [6202392.901512] iSCSI: Login negotiation failed from [192.168.1.14]

Apr 7 14:24:40 kernel: [6202398.192167] rx_data() returned an error.

Apr 7 14:24:40 kernel: [6202398.196311] iSCSI: Login negotiation failed from [192.168.1.14]

Apr 7 14:25:34 cups-lpd[19733]: Unable to get client address - Transport endpoint is not connected

Apr 7 14:25:34 kernel: [6202451.377927] rx_data() returned an error.

Apr 7 14:25:34 kernel: [6202451.382035] iSCSI: Login negotiation failed from [192.168.1.14]

Apr 7 21:48:16 kernel: [6229002.224141] iSCSI: Client [iqn.1991-05.com.microsoft:data1.marketmakers.co.uk] logged out

Could you help to look into the problem?

Please let me know if you need remote access.

Best Regards

Derek Chung
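The kernel-log excerpt above can be sifted with a few lines of scripting. A small sketch that tallies login-negotiation failures per client IP (the log format is assumed from the excerpt itself):

```python
# Sketch: count iSCSI login-negotiation failures per client IP
# from kernel-log lines shaped like the excerpt above.
import re
from collections import Counter

log = """\
Apr 7 14:24:35 kernel: [6202392.886215] iSCSI: Login negotiation failed from [192.168.1.14]
Apr 7 14:24:35 kernel: [6202392.901512] iSCSI: Login negotiation failed from [192.168.1.14]
Apr 7 14:25:34 kernel: [6202451.382035] iSCSI: Login negotiation failed from [192.168.1.14]
"""

failures = Counter(
    match.group(1)
    for match in re.finditer(r"Login negotiation failed from \[([\d.]+)\]", log)
)
print(failures)  # → Counter({'192.168.1.14': 3})
```

In the quoted log every failure comes from the same initiator, which at least narrows the question of whether the connection errors and the unreleased capacity are related.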

---------- Forwarded message ----------

From: <>

Date: Tue, Apr 30, 2013 at 8:21 PM

Subject: RE: Synology Online Support #196757: [RS2212RP+ DSM 4.2-3202] [itgroup]

To: “[email protected]“ <[email protected]>

Hi Derek,

I have updated our NAS to the latest version and checked the volume info and it still appears to have not released the space used for the iSCSI luns that have been deleted so as requested please find the debug file attached.

 

Regards

 

From: Synology Technical Support [mailto:[email protected]]

Sent: 30 April 2013 11:51

To:

Subject: Re: Synology Online Support #196757: [RS2212RP+ DSM 4.2-3202] [itgroup]

 

Hi,

Thanks for your inquiry.

To investigate this issue, please help us to update the firmware to the DSM4.2-3211 first and then recheck it.

If the problem still persist, to help investigate this issue, may I first please have a copy of your kernel log?

The Kernel log is a technical log file which can give us more detailed information about your system.

Please see the instructions below to retrieve and download the kernel log.

1. Log into the DiskStation Manager as admin.

2. The URL will look similar to http://192.168.1.xxx:5000/webman/index.cgi

3. Add ?diagnose=debug to the URL so that- http://192.168.1.xxx:5000/webman/index.cgi?diagnose=debug

4. Press Enter to save the debug.dat file.

Please feel free to contact us again if you have further questions or suggestions.

Best Regards

Derek Chung

On Thu, Apr 25, 2013 at 7:32 PM, <> wrote:

Hi Derek,

Cloud station was installed but wasn’t actively being used so I have uninstalled it and chose not to keep any existing settings or files however the volume manager still doesn’t appear to be showing the correct amount of free space that I would expect after removing the LUN’s.

 

If you could advise what to check next that would be appreciated.

 

Regards

 

From: Synology Technical Support [mailto:[email protected]]

Sent: 25 April 2013 11:58

To: IT Group

Subject: Re: Synology Online Support #196757: [RS2212RP+ DSM 4.2-3202] [itgroup]

Hi,

Thanks for your inquiry.

May I know that did you install the Cloud Station and running it on your NAS?

If so , as we know , since it supports version control, the actual size would be at least twice of the Cloud Station shared folders.

For space saving purpose, please go to Cloud Station > Settings to decrease max. versions. If you would like to uninstall this package, please uninstall the database or the space occupied by versions will not be released.

Please feel free to contact us again if you have further questions or suggestions.

Best Regards

Derek Chung

On Mon, Apr 22, 2013 at 10:59 PM, <> wrote:

Problem Explanation:

I installed 6 disks in my NAS and created 2 volumes on them, then created a test iSCSI LUN on each volume however after testing and copying files to the LUN’s I have deleted them from the NAS and deleted the target too however from what I can tell the space that was being used hasn’t been released and it still showing as being used in the volume view despite there no longer being any LUN’s or Targets.

Is there any way to free this space up as it is needed for other usage but I can’t see any menu to do this from within the NAS. I have tried rebooting the NAS but this hasn’t released any of the space.



Problem Reproduce Steps:

Related hardware:

Hard Drive:

| Product Information/Question | Personal Information |
| --- | --- |
| Product: RS2212RP+ | Type of Customer: End User |
| Product Version: DSM 4.2-3202 | Company Name: |
| Product Serial Number: | Full Name: |
| Type of Usage: For Business Use | Job Title: |
| E-mail: | Address: |
| Language: English | Location: Europe & Russia / United Kingdom |
| Client Operating System: Windows 7 | Telephone: |
| LAN Setting: Static IP | Fax: |
| Type of Message: Need Technical Support | Category: Storage & iSCSI (iSCSI LUN & Target) |





 

It seems the misdirected emails all have something to do with iSCSI.

I find it hard to accept that a company of this size can make the same mistake over and over again.

I wonder whether the Personal Data Protection Act could be used to have some fun with them XD

 

 

 

 

 

 

 
