Congratulations on the new community, Brian.
Is there a community specific to Partners, or is this a common community for both Partners (Distis/Resellers) and Direct (End) Customers?
Hi All,
Is there any limit on how many virtual machine instances I can create using VMware Workstation? Also, are there any VMware employee discounts on the VMware Workstation or VMware Workstation Pro products?
Thanks
Chetna
One of our sites uses DL380 G8 servers (Site B) and the other uses DL580 G10 servers (Site A), so we could not install ESXi 6.7 on Site B because of the older hardware generation. Can I upgrade just my vCSA to 6.7 without updating the ESXi servers on Site B, and install ESXi 6.7 and vCSA 6.7 on Site A, so that both sites run the same vCSA version and I can use Enhanced Linked Mode (ELM)?
Glad we were able to address it! Thanks for sharing the update.
Cheers,
Supreet
As I thought, WINSvr-flat.vmdk seems to be missing. Since it is the data disk (and not the descriptor), it cannot be recreated; the only option would be to restore it from a backup, if one exists. If WINSvr16 is the new VM you are talking about, I would say let's go ahead and get it up.
Please consider marking this answer as "correct" or "helpful" if you think your questions have been answered.
Cheers,
Supreet
Technically, vCenter 6.7 should be able to manage ESXi 6.5 and 6.0 hosts. However, I would recommend staying at 6.5 on both sites, unless you want to use any of the new features introduced with vSphere 6.7.
Please consider marking this answer as "correct" or "helpful" if you think your questions have been answered.
Cheers,
Supreet
For those interested, here is my solution.
I found two causes; basically, something in VMware Workstation 14 had not upgraded properly. One problem was that vmx86 was an old version and was causing other errors. My solution was to follow the instructions to disable the service, reboot, rename the file, and repair the install, then reboot again. The keyboard problem only resolved itself after a good clean scrub of everything VMware and several reboots (maybe I got lucky?).
Incorrect version of driver "vmx86.sys"
I still have one more PC I can't fix; if I find an adaptation of this method, I will post it here.
Basically, I think a full uninstall of VMware is the best solution; VMware upgrades seem somewhat touchy in my experience.
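For reference, the service/driver part of that procedure can be run from an elevated Command Prompt roughly as below. This is only a sketch: the driver path is the standard Windows location, and the final step is the Workstation 14 installer's "Repair" option.
:: stop the stale vmx86 driver service and keep it from starting at boot
sc stop vmx86
sc config vmx86 start= disabled
:: rename the old driver so the repair install lays down a fresh copy
ren C:\Windows\System32\drivers\vmx86.sys vmx86.sys.old
:: then re-run the VMware Workstation installer, choose Repair, and reboot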
Yes, your last sentence is indicative of what happened when I tried it.
vMotion does not work if the destination LUN has less space than the current space consumed by the VM on vSAN.
This should not be the case, as it only needs free space equal to the size of the disk as seen by the guest OS (i.e., the provisioned size), plus some additional space for the swap file, etc.
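To put rough, purely illustrative numbers on it (not figures from this thread): a VM with a 100 GB provisioned virtual disk that currently consumes only 40 GB on vSAN still needs about 100 GB free on the destination datastore if the disk is migrated in thick format, plus room for the swap file (configured RAM minus any memory reservation, e.g. 8 GB for an 8 GB VM with no reservation) and a little extra for logs and other VM files. If the disk is migrated as thin provisioned, roughly the consumed 40 GB plus the same overhead is enough.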
So, after many years, I have built a new 6.7 machine to replace my 4.1 host, and I now need to migrate my VMs to the new host.
But SCP on 4.1 won't connect; it fails with the error "No matching KexAlgo".
I have come across this before on some old distros, but I can't find how to fix it on vSphere.
Anyone got the solution?
Thanks
Rob
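For anyone hitting the same message: "no matching key exchange" means the newer OpenSSH build no longer offers the legacy algorithms the 4.1 side uses. A minimal sketch of the usual workaround, assuming the copy is initiated from the 6.7 shell (the host name, VM name and datastore paths below are placeholders, and the 6.7 scp client needs to be a reasonably recent OpenSSH build for the "+" syntax):
# pull the VM folder from the old host while explicitly re-enabling the legacy algorithms
scp -r -o KexAlgorithms=+diffie-hellman-group1-sha1 -o HostKeyAlgorithms=+ssh-rsa root@esxi41-host:/vmfs/volumes/datastore1/MyVM /vmfs/volumes/datastore1/
# alternatively, add a matching KexAlgorithms line to /etc/ssh/sshd_config on the 6.7 host and restart its SSH service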
If I understand this correctly, you want to move the VMs from the 4.1 host's local datastore to the 6.7 host's local datastore? Or is it a shared datastore? That's one big leap from 4.1 to 6.7!
Cheers,
Supreet
Hello,
We have a dedicated server with NVMe drives.
We got the following error from the vmkernel log:
2018-08-25T07:57:55.549Z cpu7:2097662)ScsiDeviceIO: 3029: Cmd(0x459a40ef0440) 0x93, CmdSN 0x4bbe9 from world 2108876 to dev "t10.NVMe____INTEL_SSDPE2MX450G7_CVPF7453000Z450RGN__00000001" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x0 0x0.
So it causes damage after a while, and every VM on this datastore crashed.
I have checked the problem here, but I didn't understand what I should do.
0x93 is the SCSI WRITE SAME (16) command, which is used for the Write Same VAAI functionality. In this case, the controller (disk) is reporting a check condition when ESXi issues a write-zero command, so I am not sure this functionality is supported. You can run the command <esxcli storage core device vaai status get> to check whether the device supports VAAI. If the zero status shows as unsupported, you can disable the Write Same functionality using the command <esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit>. Having said that, I don't think the VMs would have crashed due to a non-functional VAAI primitive. Are you sure this is the cause?
More about Write Same functionality - WRITE SAME | Cody Hosterman / VMware Knowledge Base
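For convenience, the two commands from the paragraph above, as they would be typed in the ESXi shell (the setting can be reverted later with --int-value 1):
# list the VAAI primitive support reported for each device
esxcli storage core device vaai status get
# disable the Write Same (zero) primitive host-wide if it shows as unsupported
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit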
Please consider marking this answer as "correct" or "helpful" if you think your questions have been answered.
Cheers,
Supreet
Local to local over the network.
It's a home system, so it has never really needed upgrading.
It looks like they do support it:
t10.NVMe____INTEL_SSDPE2MX450G7_CVPF7453000Z450RGN__00000001
VAAI Plugin Name:
ATS Status: unsupported
Clone Status: unsupported
Zero Status: supported
Delete Status: supported
t10.NVMe____INTEL_SSDPE2MX450G7_BTPF807303DX450RGN__00000001
VAAI Plugin Name:
ATS Status: unsupported
Clone Status: unsupported
Zero Status: supported
Delete Status: supported
When the VMs crashed, I got these errors in the vmkernel log:
83)NMP: nmp_ThrottleLogForDevice:3618: last error status from device t10.NVMe____INTEL_SSDPE2MX450G7_CVPF7453000Z450RGN__00000001 repeated 160 times
7)nvme_ScsiCommand: queue:1 busy
69)nvme_ScsiCommand: queue:2 busy
99)nvme_ScsiCommand: queue:3 busy
69)nvme_ScsiCommand: queue:0 busy
69)nvme_ScsiCommand: queue:1 busy
37)nvme_ScsiCommand: queue:2 busy
37)nvme_ScsiCommand: queue:3 busy
21)nvme_ScsiCommand: queue:0 busy
44)nvme_ScsiCommand: queue:1 busy
21)nvme_ScsiCommand: queue:2 busy
21)nvme_ScsiCommand: queue:3 busy
21)nvme_ScsiCommand: queue:0 busy
21)nvme_ScsiCommand: queue:1 busy
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:140 qid:4
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_SyncCompletion: no sync request
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:141 qid:4
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_SyncCompletion: no sync request
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:142 qid:4
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_SyncCompletion: no sync request
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:143 qid:4
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_SyncCompletion: no sync request
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:144 qid:4
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:60 qid:4
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:24 qid:4
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:8 qid:4
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:91 qid:4
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:6 qid:4
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:88 qid:4
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:74 qid:4
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:72 qid:4
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:90 qid:4
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:65 qid:4
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:53 qid:4
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:63 qid:4
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:50 qid:4
2018-08-25T08:08:47.754Z cpu7:2097454)nvme_TaskMgmt: adapter:1 type:abort
2018-08-25T08:08:47.754Z cpu7:2097454)nvme_TaskMgmt: waiting on command SN:4c55a
2018-08-25T08:08:47.755Z cpu0:2097183)ScsiDeviceIO: 3029: Cmd(0x459a898340c0) 0x28, CmdSN 0x4c55a from world 0 to dev "t10.NVMe____INTEL_SSDPE2MX450G7_CVPF7453000Z450RGN__00000001" failed H:0x
2018-08-25T08:08:47.761Z cpu1:2100036)HBX: 3033: 'datastore1': HB at offset 3407872 - Waiting for timed out HB:
2018-08-25T08:08:47.761Z cpu1:2100036) [HB state abcdef02 offset 3407872 gen 345 stampUS 6247906187 uuid 5b80f5b9-95bf4dba-f27a-ac1f6b01063c jrnl <FB 9> drv 24.82 lockImpl 3 ip 145.239.3.79]
2018-08-25T08:08:57.754Z cpu0:2097454)nvme_ResetController: adapter:1
2018-08-25T08:08:57.762Z cpu1:2100036)HBX: 3033: 'datastore1': HB at offset 3407872 - Waiting for timed out HB:
2018-08-25T08:08:57.762Z cpu1:2100036) [HB state abcdef02 offset 3407872 gen 345 stampUS 6247906187 uuid 5b80f5b9-95bf4dba-f27a-ac1f6b01063c jrnl <FB 9> drv 24.82 lockImpl 3 ip 145.239.3.79]
2018-08-25T08:08:59.254Z cpu0:2097454)nvme_ResetController: adapter:1 disabled, clear queues and restart
2018-08-25T08:08:44.386Z cpu0:2097183)NMP: nmp_ThrottleLogForDevice:3689: Cmd 0x28 (0x459a898340c0, 0) to dev "t10.NVMe____INTEL_SSDPE2MX450G7_CVPF7453000Z450RGN__00000001" on path "vmhba3:C0::T0:L0" Failed: H:0xc D:0x0 P:0x0 Invalid sense data: 0x0 0x0
2018-08-25T08:08:44.386Z cpu10:2097700)NMP: nmp_ResetDeviceLogThrottling:3519:: last error status from device t10.NVMe____INTEL_SSDPE2MX450G7_CVPF7453000Z450RGN__00000001 repeated 10678 times
Hello, colleagues!
1. We have:
- VCSA 6.5.0.21000
- a few ESXi 6.5.0 hosts, build 8935087
- 2 physical DELL FORCE10 S4810 switches (combined into a LAG)
- 2 x 10G Intel X520 (82599) NICs in each host (each NIC connected to a different physical switch)
- MTU 9000 enabled everywhere
2. At the dvSwitch level, the following is configured:
- the NICs are combined into a LAG (active mode, load balancing: source and destination IP address, TCP/UDP port and VLAN)
- a Private VLAN port group; all the necessary settings have also been made on the physical switches.
3. Test machines running Windows Server 2012 R2 with vmxnet3 adapters (VM hardware version 13) and VMware Tools 10.2.5-8068406 have been prepared. All machines are on the same subnet and in the same port group.
So, should I be getting a total speed of around 20G between hosts, and a similar speed between virtual machines?
But:
- testing iperf between hosts gives about 8G
- testing iperf between VMs on the same host gives 3-3.5G
- testing iperf between VMs on different hosts gives only 1-2G
What could be the problem?
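One thing worth keeping in mind: with hash-based LAG load balancing, a single TCP stream is always pinned to one uplink, so one iperf stream can never exceed one 10G member. Testing with several parallel streams shows whether the LAG itself is the limit. A minimal sketch (10.0.0.2 and the stream count are placeholders, not values from this thread):
# on the receiving VM or host
iperf -s
# single TCP stream: the LAG hash pins it to one uplink, so ~10G is the ceiling
iperf -c 10.0.0.2
# several parallel streams: different source ports can hash onto different uplinks
iperf -c 10.0.0.2 -P 4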
Perhaps a temporary FreeNAS box will be built.
Can you share a screenshot of the command syntax you are using and the error being reported?
Cheers,
Supreet
It looks like the user in the thread below had a similar issue and got around it by disabling the VAAI primitives:
https://www.reddit.com/r/vmware/comments/7khhfy/esxi_65u1_psod_intelnvme/
You may want to give it a shot. Also, we see 'H:0xc' events, which indicate a transient error with the storage; in such scenarios, the commands will be reissued. Updating the NVMe controller driver/firmware to the latest version is also an important step towards isolating the cause.
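If you do try that workaround, the VAAI primitives are toggled through host advanced settings along these lines (a sketch only; whether you need all three or just Write Same depends on your case, and each can be re-enabled with --int-value 1 once the driver/firmware is sorted out):
# Write Same / zero offload
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit
# XCOPY / clone offload
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove
# ATS locking
esxcli system settings advanced set --int-value 0 --option /VMFS3/HardwareAcceleratedLocking
# and to see which NVMe driver VIB is currently installed before updating it
esxcli software vib list | grep -i nvme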
Please consider marking this answer as "correct" or "helpful" if you think your questions have been answered.
Cheers,
Supreet
Hello, thank you for the settings; they worked. I added the following settings. The attached vc.json does not even need the hostname specified, as it uses the one configured in the system. The system has a static IP and was joined to Active Directory prior to the vCenter installation.
"ceip": {
"ceip.enabled": false,
"ceip.acknowledged": true
}
Cheers,
Thomas