NSX-T Data Center REST API
Associated URIs:
| API Description | API Path |
|---|---|
List Host Transport Nodes: Returns information about all host transport nodes along with underlying host details. A transport node is a host that contains hostswitches. A hostswitch can have virtual machines connected to it. Because each transport node has hostswitches, transport nodes can also have virtual tunnel endpoints, which means that they can be part of the overlay. A request sketch follows this entry. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes
|
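A minimal request sketch for the listing call above. The manager host, credentials, and the site/enforcement-point IDs are placeholders ("default" is common but not guaranteed for your deployment), and `-k` merely skips certificate verification for a lab setup:

```sh
# List all host transport nodes under a site/enforcement point.
# Host, credentials, and IDs below are placeholders, not fixed values.
curl -k -u 'admin:PLACEHOLDER' \
  "https://nsx-mgr.example.com/policy/api/v1/infra/sites/default/enforcement-points/default/host-transport-nodes"
```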
Delete a Transport Node: Deletes the specified transport node. The query param force can be used to force-delete host nodes. Force delete is not supported if the transport node is part of a cluster on which a transport node profile is applied. This API also removes the specified host node from the system. If the unprepare_host option is set to false, the host is deleted without uninstalling the NSX components from it. If the delete is called with force unset or set to false and the uninstall of NSX components on the host fails, the TransportNodeState object is retained; if it is called with force set to true and the uninstall fails, the TransportNodeState object is deleted. A request sketch follows this entry. |
DELETE /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}
|
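A sketch of a force delete, assuming both force and unprepare_host are passed as query parameters as the description implies; all names are placeholders:

```sh
# Force-delete a host transport node without uninstalling NSX components.
# force and unprepare_host come from the description above; IDs are placeholders.
curl -k -u 'admin:PLACEHOLDER' -X DELETE \
  "https://nsx-mgr.example.com/policy/api/v1/infra/sites/default/enforcement-points/default/host-transport-nodes/esx-host-01?force=true&unprepare_host=false"
```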
Get a Host Transport Node: Returns information about a specified transport node. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}
|
Patch a Host Transport Node: Transport nodes are hypervisor hosts that participate in an NSX-T overlay; that is, they host VMs that communicate over NSX-T logical switches. This API creates a transport node for a host node (hypervisor) in the transport network. When you run this command for a host, NSX Manager attempts to install the NSX kernel modules, which are packaged as VIB, RPM, or DEB files. For the installation to succeed, you must provide the host login credentials and the host thumbprint. To get the ESXi host thumbprint, SSH to the host and run `openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout`. To generate the host key thumbprint using the SHA-256 algorithm, follow these steps: log in to the host, making sure the connection is not vulnerable to a man-in-the-middle attack; check whether a public key already exists (the host public key is generally located at '/etc/ssh/ssh_host_rsa_key.pub'); if the key is not present, generate a new one by running `ssh-keygen -t rsa` and following the instructions; then generate a SHA-256 hash of the key with `awk '{print $2}' id_rsa.pub | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64`, passing the appropriate file name if the public key is stored under a name other than the default 'id_rsa.pub' (these commands are collected into a runnable sequence after this entry). Additional documentation on creating a transport node can be found in the NSX-T Installation Guide. For the transport node to forward packets, the host_switch_spec property must be specified. Host switches (called bridges in OVS on KVM hypervisors) are the individual switches within the host virtual switch; virtual machines are connected to the host switches. When creating a transport node, you must specify whether the host switches are already manually preconfigured on the node or whether NSX should create and manage them, and you indicate this choice through the type of host switches you pass in the host_switch_spec property of the TransportNode request payload. For a KVM host, you can preconfigure the host switch, or you can have NSX Manager perform the configuration. For an ESXi host, NSX Manager always configures the host switch. To preconfigure the host switches on a KVM host, pass an array of PreconfiguredHostSwitchSpec objects that describes those host switches. In the current NSX-T release, only one preconfigured host switch can be specified. See the PreconfiguredHostSwitchSpec schema definition for the properties that must be provided. Preconfigured host switches are supported only on KVM hosts, not on ESXi hosts. To let NSX manage the host switch configuration on KVM or ESXi hosts, pass an array of StandardHostSwitchSpec objects in the host_switch_spec property, and NSX will automatically create host switches with the properties you provide. In the current NSX-T release, up to 16 host switches can be automatically managed. See the StandardHostSwitchSpec schema definition for the properties that must be provided. The request should provide node_deployment_info. |
PATCH /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}
|
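The thumbprint commands from the description above, collected into one runnable sequence (run on the host itself; adjust the public key file name if yours differs from the default):

```sh
# ESXi host certificate thumbprint (SHA-256):
openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout

# SSH host key thumbprint (SHA-256); generate a key first if none exists:
ssh-keygen -t rsa
awk '{print $2}' id_rsa.pub | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64
```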
Resync a Host Transport Node: Resyncs the TransportNode configuration on a host. This is similar to updating the TransportNode with its existing configuration, but it force-syncs the configuration to the host (no backend optimizations). |
POST /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}?action=resync_host_config
|
Update transport node maintenance mode: Puts the transport node into maintenance mode or exits it from maintenance mode. When a HostTransportNode is in maintenance mode, no configuration changes are allowed. |
POST /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}
|
Create or update a Host Transport Node: Transport nodes are hypervisor hosts that participate in an NSX-T overlay; that is, they host VMs that communicate over NSX-T logical switches. This API creates a transport node for a host node (hypervisor) in the transport network. When you run this command for a host, NSX Manager attempts to install the NSX kernel modules, which are packaged as VIB, RPM, or DEB files. For the installation to succeed, you must provide the host login credentials and the host thumbprint. To get the ESXi host thumbprint, SSH to the host and run `openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout`. To generate the host key thumbprint using the SHA-256 algorithm, follow these steps: log in to the host, making sure the connection is not vulnerable to a man-in-the-middle attack; check whether a public key already exists (the host public key is generally located at '/etc/ssh/ssh_host_rsa_key.pub'); if the key is not present, generate a new one by running `ssh-keygen -t rsa` and following the instructions; then generate a SHA-256 hash of the key with `awk '{print $2}' id_rsa.pub | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64`, passing the appropriate file name if the public key is stored under a name other than the default 'id_rsa.pub' (see the runnable sequence after the PATCH entry above). Additional documentation on creating a transport node can be found in the NSX-T Installation Guide. For the transport node to forward packets, the host_switch_spec property must be specified. Host switches (called bridges in OVS on KVM hypervisors) are the individual switches within the host virtual switch; virtual machines are connected to the host switches. When creating a transport node, you must specify whether the host switches are already manually preconfigured on the node or whether NSX should create and manage them, and you indicate this choice through the type of host switches you pass in the host_switch_spec property of the TransportNode request payload. For a KVM host, you can preconfigure the host switch, or you can have NSX Manager perform the configuration. For an ESXi host, NSX Manager always configures the host switch. To preconfigure the host switches on a KVM host, pass an array of PreconfiguredHostSwitchSpec objects that describes those host switches. In the current NSX-T release, only one preconfigured host switch can be specified. See the PreconfiguredHostSwitchSpec schema definition for the properties that must be provided. Preconfigured host switches are supported only on KVM hosts, not on ESXi hosts. To let NSX manage the host switch configuration on KVM or ESXi hosts, pass an array of StandardHostSwitchSpec objects in the host_switch_spec property, and NSX will automatically create host switches with the properties you provide. In the current NSX-T release, up to 16 host switches can be automatically managed. See the StandardHostSwitchSpec schema definition for the properties that must be provided. The request should provide node_deployment_info. An illustrative request sketch follows this entry. |
PUT /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}
|
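An illustrative PUT sketch with an NSX-managed (standard) host switch. The payload is a skeleton only: every value is a placeholder, and the authoritative property lists live in the TransportNode, StandardHostSwitchSpec, and node_deployment_info schema definitions referenced above.

```sh
# Create or update a host transport node with an NSX-managed host switch.
# Every value below is a placeholder; consult the schemas for required fields.
curl -k -u 'admin:PLACEHOLDER' -X PUT \
  -H 'Content-Type: application/json' \
  "https://nsx-mgr.example.com/policy/api/v1/infra/sites/default/enforcement-points/default/host-transport-nodes/esx-host-01" \
  -d '{
    "host_switch_spec": {
      "resource_type": "StandardHostSwitchSpec",
      "host_switches": [ { "host_switch_name": "nsxHostSwitch" } ]
    },
    "node_deployment_info": {
      "fqdn": "esx-host-01.example.com",
      "host_credential": {
        "username": "root",
        "password": "PLACEHOLDER",
        "thumbprint": "SHA256-THUMBPRINT-PLACEHOLDER"
      }
    }
  }'
```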
Apply cluster level Transport Node Profile on overridden host: A host can be overridden to have a different configuration than the Transport Node Profile (TNP) on its cluster. This action restores such an overridden host back to the cluster-level TNP. This API can also be used in another case: when a TNP is applied to a cluster and a validation fails (e.g. VMs running on a host), the existing transport node (TN) is not updated. Once the issue is resolved manually (e.g. VMs powered off), you can call this API to update the TN as per the cluster-level TNP. |
PUT /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}?action=restore_cluster_config
|
Fetch Discovered VIF State on given TransportNode: For the given TransportNode, fetches all the VIF info from VC (vCenter) and returns the corresponding state. Only host switches configured for security are considered. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}/discovered-vifs
|
Get the module details of a host transport node |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}/modules
|
Get a Host Transport Node's State: Returns information about the current state of the transport node configuration and about the associated hostswitch. Change introduced in 4.1.2 for ESX transport nodes: the VIB details are not retrieved on every state API call; they are retrieved by periodic polling on the host. Therefore an NSX VIB version mismatch or NSX VIB absence is reported by this API only after subsequent polling takes place. Currently, the poll frequency is 10 minutes. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}/state
|
Get the counter values for realtime datapath statistics: Supports multiple types in one query. Query types are declared as query parameters; by default the query type is packet_stats. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}/statistics/debug
(Experimental)
|
Get the counter values for cached datapath statistics: Supports multiple types in one query. Query types are declared as query parameters; by default the query type is packet_stats. A query sketch follows this entry. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}/statistics/monitor
(Experimental)
|
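A query sketch for the cached statistics. The parameter name `types` is an assumption (the description only says the query types are declared as query parameters), while packet_stats is the documented default type; host and IDs are placeholders:

```sh
# Fetch cached datapath statistics for one query type.
# The "types" parameter name is an assumption; verify against the API schema.
curl -k -u 'admin:PLACEHOLDER' \
  "https://nsx-mgr.example.com/policy/api/v1/infra/sites/default/enforcement-points/default/host-transport-nodes/esx-host-01/statistics/monitor?types=packet_stats"
```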
Submit a new TransportNode VTEP action: Submits a new VTEP action for a particular TransportNode. The status of submitted actions can be retrieved using the ListTransportNodeVtepActionsStatus API. |
POST /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}/vteps/actions
|
List all TransportNode VTEP actions' status: Lists all VTEP actions' status for a particular TransportNode. If some action status is missing in the response, that indicates the action has completed successfully. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}/vteps/actions/status
|
List transport nodes by realized state: Returns a list of transport node states whose realized state matches the value provided as a query parameter. If this API is called multiple times in parallel, it fails with an error indicating that another request is already in progress. In that case, try the API on another NSX Manager instance (if one exists) or retry after some time. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/state
|
List sub-clusters: Paginated list of all sub-clusters. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/sub-clusters
|
Move host from one sub-cluster to another sub-cluster: When a node is moved from one sub-cluster to another, the appropriate sub-configuration is applied to the node based on the TransportNodeCollection configuration. If the TransportNodeCollection has no sub-configuration for the target sub-cluster, the global configuration is applied. |
POST /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/sub-clusters?action=move
|
Delete a Sub-Cluster: Deletes a sub-cluster. Deletion is not allowed if the sub-cluster contains discovered nodes. |
DELETE /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/sub-clusters/{subcluster-id}
|
Read a Sub-cluster configuration: Returns the specified sub-cluster configuration. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/sub-clusters/{subcluster-id}
|
Patch Sub-Cluster: Patches a sub-cluster under a compute collection. |
PATCH /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/sub-clusters/{subcluster-id}
|
Create or Update a sub-cluster: Creates or updates a sub-cluster under a compute collection. The maximum number of sub-clusters that can be created under a compute collection is 16. |
PUT /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/sub-clusters/{subcluster-id}
|
List Transport Node collections: Returns all Transport Node collections. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections
|
Detach transport node profile from compute collection: Deleting a transport node collection detaches the transport node profile (TNP) from the compute collection. This has no effect on existing transport nodes; however, new hosts added to the compute collection are no longer automatically converted to NSX transport nodes. Detaching a TNP from a compute collection does not delete the TNP. |
DELETE /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections/{transport-node-collection-id}
|
Get Transport Node collection by id: Returns the transport node collection with the given id. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections/{transport-node-collection-id}
|
Patch Transport Node collection: Attaches a different transport node profile to the compute collection by updating the transport node collection. |
PATCH /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections/{transport-node-collection-id}
|
Retry the process on applying transport node profile: This API is relevant for compute collections on which vLCM is enabled. Invoke it to retry the realization of the transport node profile on the compute collection; this is useful when profile realization failed because of an error in vLCM. This API has no effect if vLCM is not enabled on the compute collection. |
POST /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections/{transport-node-collection-id}?action=retry_profile_realization
|
Configure the cluster for security: This API configures a compute collection for security. In the request body, specify a Transport Node Collection containing only the ID of the target compute collection; a Transport Node Profile ID must not be specified. The API defines a system-generated security Transport Node Profile and applies it to the compute collection to create the Transport Node Collection. A request sketch follows this entry. |
POST /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections/{transport-node-collection-id}?action=install_for_microseg
|
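A sketch of the security-only request body as the description specifies: a Transport Node Collection carrying only the target compute collection ID and no Transport Node Profile ID. The field name `compute_collection_id` is an assumption; verify it against the TransportNodeCollection schema.

```sh
# Configure a compute collection for security (micro-segmentation) only.
# IDs, host, and credentials are placeholders.
curl -k -u 'admin:PLACEHOLDER' -X POST \
  -H 'Content-Type: application/json' \
  "https://nsx-mgr.example.com/policy/api/v1/infra/sites/default/enforcement-points/default/transport-node-collections/tnc-01?action=install_for_microseg" \
  -d '{ "compute_collection_id": "CC-ID-PLACEHOLDER" }'
```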
Uninstall NSX from the specified Transport Node Collection: Uninstalls NSX from the Transport Node Collection whose ID is specified in the request. |
POST /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections/{transport-node-collection-id}?action=remove_nsx
|
Get Transport Node collection application state: Returns the state of the transport node collection, derived from the states of the transport nodes of the hosts in the compute collection. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections/{transport-node-collection-id}/state
|
Create transport node collection by attaching Transport Node Profile to cluster: When a transport node collection is created, the hosts in the compute collection are prepared automatically, i.e. NSX Manager attempts to install the NSX components on them. Transport nodes for these hosts are created using the configuration specified in the transport node profile. Set apply_profile to false if you do not want to apply the transport node profile to existing transport nodes that have the overridden-host flag set while the ignore-overridden-hosts flag is set to true on the transport node profile. |
PUT /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections/{transport-node-collections-id}
|
Read node properties: Returns information about the NSX appliance. Information includes release number, time zone, system time, kernel version, message of the day (motd), and host name. |
GET /api/v1/transport-nodes/{transport-node-id}/node
GET /api/v1/cluster/{cluster-node-id}/node
GET /api/v1/node
|
Restart or shutdown node: Restarts or shuts down the NSX appliance. |
POST /api/v1/transport-nodes/{transport-node-id}/node?action=restart
POST /api/v1/transport-nodes/{transport-node-id}/node?action=shutdown
POST /api/v1/cluster/{cluster-node-id}/node?action=restart
POST /api/v1/cluster/{cluster-node-id}/node?action=shutdown
POST /api/v1/node?action=restart
POST /api/v1/node?action=shutdown
|
Set the node system time: Sets the node system time to the given UTC time in the RFC3339 format 'yyyy-mm-ddThh:mm:ssZ'. A request sketch follows this entry. |
POST /api/v1/transport-nodes/{transport-node-id}/node?action=set_system_time
POST /api/v1/cluster/{cluster-node-id}/node?action=set_system_time
POST /api/v1/node?action=set_system_time
|
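A sketch of setting the clock. The RFC3339 timestamp format comes from the description, but the `system_time` body property name is an assumption; verify it against the node API schema:

```sh
# Set the appliance system time to an explicit UTC timestamp.
# Host and credentials are placeholders.
curl -k -u 'admin:PLACEHOLDER' -X POST \
  -H 'Content-Type: application/json' \
  "https://nsx-mgr.example.com/api/v1/node?action=set_system_time" \
  -d '{ "system_time": "2024-01-15T10:30:00Z" }'
```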
Update node properties: Modifies NSX appliance properties. Modifiable properties include the time zone, message of the day (motd), and hostname. The NSX appliance node_version, system_time, and kernel_version are read-only and cannot be modified with this method. |
PUT /api/v1/transport-nodes/{transport-node-id}/node
PUT /api/v1/cluster/{cluster-node-id}/node
PUT /api/v1/node
|
Delete directory in remote file server: Deletes a directory or file on the remote server. When a remote directory is specified for deletion, all files and sub-directories residing within it are removed as well. Supports only SFTP. You must provide the remote server's SSH fingerprint. See the NSX Administration Guide for information and instructions about finding the SSH fingerprint. |
DELETE /api/v1/transport-nodes/{transport-node-id}/node/file-store?action=delete_remote_directory
DELETE /api/v1/cluster/{cluster-node-id}/node/file-store?action=delete_remote_directory
DELETE /api/v1/node/file-store?action=delete_remote_directory
|
List node files |
GET /api/v1/transport-nodes/{transport-node-id}/node/file-store
GET /api/v1/cluster/{cluster-node-id}/node/file-store
GET /api/v1/node/file-store
|
Create directory in remote file server: Creates a directory on the remote server. Supports only SFTP. You must provide the remote server's SSH fingerprint. See the NSX Administration Guide for information and instructions about finding the SSH fingerprint. |
POST /api/v1/transport-nodes/{transport-node-id}/node/file-store?action=create_remote_directory
POST /api/v1/cluster/{cluster-node-id}/node/file-store?action=create_remote_directory
POST /api/v1/node/file-store?action=create_remote_directory
|
Retrieve ssh fingerprint for given remote server: Retrieves the SSH fingerprint for a given remote server and port. A request sketch follows this entry. |
POST /api/v1/transport-nodes/{transport-node-id}/node/file-store?action=retrieve_ssh_fingerprint
POST /api/v1/cluster/{cluster-node-id}/node/file-store?action=retrieve_ssh_fingerprint
POST /api/v1/node/file-store?action=retrieve_ssh_fingerprint
|
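A sketch of fetching a remote server's fingerprint before file-store transfers. The body field names `server` and `port` mirror the parameters the description mentions but are assumptions; host and credentials are placeholders:

```sh
# Retrieve the SSH fingerprint of a remote SFTP server.
curl -k -u 'admin:PLACEHOLDER' -X POST \
  -H 'Content-Type: application/json' \
  "https://nsx-mgr.example.com/api/v1/node/file-store?action=retrieve_ssh_fingerprint" \
  -d '{ "server": "sftp.example.com", "port": 22 }'
```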
Retrieve matching host key algorithm for the ssh fingerprint provided: Retrieves the matching host key algorithm for a given remote server, port, and SSH fingerprint. |
POST /api/v1/transport-nodes/{transport-node-id}/node/file-store?action=retrieve_matching_host_key_algorithm
POST /api/v1/cluster/{cluster-node-id}/node/file-store?action=retrieve_matching_host_key_algorithm
POST /api/v1/node/file-store?action=retrieve_matching_host_key_algorithm
|
Delete file |
DELETE /api/v1/transport-nodes/{transport-node-id}/node/file-store/{file-name}
DELETE /api/v1/cluster/{cluster-node-id}/node/file-store/{file-name}
DELETE /api/v1/node/file-store/{file-name}
|
Read file properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/file-store/{file-name}
GET /api/v1/cluster/{cluster-node-id}/node/file-store/{file-name}
GET /api/v1/node/file-store/{file-name}
|
Upload a file to the file store: When you issue this API, the client must specify the HTTP header Content-Type:application/octet-stream and a request body with the contents of the file to place in the file store. In the CLI, you can view the file store with the get files command. An example request follows this entry. |
POST /api/v1/transport-nodes/{transport-node-id}/node/file-store/{file-name}
POST /api/v1/cluster/{cluster-node-id}/node/file-store/{file-name}
POST /api/v1/node/file-store/{file-name}
|
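An upload sketch. The octet-stream header and raw file body are required per the description; the host, credentials, and file name are placeholders:

```sh
# Upload a local file into the node file store under the given file name.
curl -k -u 'admin:PLACEHOLDER' -X POST \
  -H 'Content-Type: application/octet-stream' \
  --data-binary @support-bundle.tgz \
  "https://nsx-mgr.example.com/api/v1/node/file-store/support-bundle.tgz"
```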
Copy a remote file to the file store: Copies a remote file to the file store. If you use scp or sftp, you must provide the remote server's SSH fingerprint. See the NSX-T Administration Guide for information and instructions about finding the SSH fingerprint. |
POST /api/v1/transport-nodes/{transport-node-id}/node/file-store/{file-name}?action=copy_from_remote_file
POST /api/v1/cluster/{cluster-node-id}/node/file-store/{file-name}?action=copy_from_remote_file
POST /api/v1/node/file-store/{file-name}?action=copy_from_remote_file
|
Copy file in the file store to a remote file store: Copies a file in the file store to a remote server. If you use scp or sftp, you must provide the remote server's SSH fingerprint. See the NSX-T Administration Guide for information and instructions about finding the SSH fingerprint. |
POST /api/v1/transport-nodes/{transport-node-id}/node/file-store/{file-name}?action=copy_to_remote_file
POST /api/v1/cluster/{cluster-node-id}/node/file-store/{file-name}?action=copy_to_remote_file
POST /api/v1/node/file-store/{file-name}?action=copy_to_remote_file
|
Read file contents |
GET /api/v1/node/file-store/{file-name}/data
|
Replace file contents |
PUT /api/v1/node/file-store/{file-name}/data
|
Read file thumbprint |
GET /api/v1/transport-nodes/{transport-node-id}/node/file-store/{file-name}/thumbprint
GET /api/v1/cluster/{cluster-node-id}/node/file-store/{file-name}/thumbprint
GET /api/v1/node/file-store/{file-name}/thumbprint
|
Read node message of the day: Returns the message of the day (motd) text. |
GET /api/v1/transport-nodes/{transport-node-id}/node/motd
GET /api/v1/cluster/{cluster-node-id}/node/motd
GET /api/v1/node/motd
|
List node network routes: Returns detailed information about each route in the node routing table. Routes can be IPv4, IPv6, or both. Route information includes the route ipv6 flag (True or False), route type (default, static, and so on), a unique route identifier, the route metric, the protocol from which the route was learned, the route source (the preferred egress interface), the route destination, and the route scope. If the ipv6 flag is True, the route information is for an IPv6 route; otherwise it is for an IPv4 route. The route scope refers to the distance to the destination network: the "host" scope leads to a destination address on the node, such as a loopback address; the "link" scope leads to a destination on the local network; and the "global" scope leads to addresses that are more than one hop away. |
GET /api/v1/transport-nodes/{transport-node-id}/node/network/routes
GET /api/v1/cluster/{cluster-node-id}/node/network/routes
GET /api/v1/node/network/routes
|
Create node network route: Adds a route to the node routing table. For static routes, the route_type, interface_id, netmask, and destination are required parameters. For default routes, the route_type, gateway address, and interface_id are required. For blackhole routes, the route_type and destination are required. All other parameters are optional. When you add a static route, the scope and route_id are created automatically. When you add a default or blackhole route, the route_id is created automatically. The route_id is read-only and cannot be modified. All other properties can be modified by deleting and re-adding the route. A sketch of a static-route request follows this entry. |
POST /api/v1/transport-nodes/{transport-node-id}/node/network/routes
POST /api/v1/cluster/{cluster-node-id}/node/network/routes
POST /api/v1/node/network/routes
|
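A static-route sketch using exactly the required parameters the description lists (route_type, interface_id, netmask, destination); all values are placeholders:

```sh
# Add a static route to the node routing table.
curl -k -u 'admin:PLACEHOLDER' -X POST \
  -H 'Content-Type: application/json' \
  "https://nsx-mgr.example.com/api/v1/node/network/routes" \
  -d '{
    "route_type": "static",
    "interface_id": "eth0",
    "destination": "10.20.30.0",
    "netmask": "255.255.255.0"
  }'
```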
Delete node network route: Deletes a route from the node routing table. You can modify an existing route by deleting it and then posting the modified version. To verify, remove the route ID from the URI, issue a GET request, and note the absence of the deleted route. |
DELETE /api/v1/transport-nodes/{transport-node-id}/node/network/routes/{route-id}
DELETE /api/v1/cluster/{cluster-node-id}/node/network/routes/{route-id}
DELETE /api/v1/node/network/routes/{route-id}
|
Read node network route: Returns detailed information about a specified route in the node routing table. |
GET /api/v1/transport-nodes/{transport-node-id}/node/network/routes/{route-id}
GET /api/v1/cluster/{cluster-node-id}/node/network/routes/{route-id}
GET /api/v1/node/network/routes/{route-id}
|
List node processes: Returns the number of processes and information about each process. Process information includes 1) mem_resident, which is roughly equivalent to the amount of RAM, in bytes, currently used by the process, 2) parent process ID (ppid), 3) process name, 4) process up time in milliseconds, 5) mem_used, which is the amount of virtual memory used by the process, in bytes, 6) process start time, in milliseconds since epoch, 7) process ID (pid), 8) CPU time, both user and system, consumed by the process in milliseconds. |
GET /api/v1/transport-nodes/{transport-node-id}/node/processes
GET /api/v1/cluster/{cluster-node-id}/node/processes
GET /api/v1/node/processes
|
Read node process: Returns information for a specified process ID (pid). |
GET /api/v1/transport-nodes/{transport-node-id}/node/processes/{process-id}
GET /api/v1/cluster/{cluster-node-id}/node/processes/{process-id}
GET /api/v1/node/processes/{process-id}
|
Read appliance management service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/node-mgmt
GET /api/v1/cluster/{cluster-node-id}/node/services/node-mgmt
GET /api/v1/node/services/node-mgmt
|
Restart the node management service |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/node-mgmt?action=restart
POST /api/v1/cluster/{cluster-node-id}/node/services/node-mgmt?action=restart
POST /api/v1/node/services/node-mgmt?action=restart
|
Retrieve Node Management loglevel |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/node-mgmt/loglevel
GET /api/v1/cluster/{cluster-node-id}/node/services/node-mgmt/loglevel
GET /api/v1/node/services/node-mgmt/loglevel
|
Set Node Management loglevel |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/node-mgmt/loglevel
PUT /api/v1/cluster/{cluster-node-id}/node/services/node-mgmt/loglevel
PUT /api/v1/node/services/node-mgmt/loglevel
|
Read appliance management service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/node-mgmt/status
GET /api/v1/cluster/{cluster-node-id}/node/services/node-mgmt/status
GET /api/v1/node/services/node-mgmt/status
|
Read NSX Platform Client service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-platform-client
GET /api/v1/cluster/{cluster-node-id}/node/services/nsx-platform-client
GET /api/v1/node/services/nsx-platform-client
|
Restart, start or stop the NSX Platform Client service |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-platform-client?action=restart
POST /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-platform-client?action=start
POST /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-platform-client?action=stop
POST /api/v1/cluster/{cluster-node-id}/node/services/nsx-platform-client?action=restart
POST /api/v1/cluster/{cluster-node-id}/node/services/nsx-platform-client?action=start
POST /api/v1/cluster/{cluster-node-id}/node/services/nsx-platform-client?action=stop
POST /api/v1/node/services/nsx-platform-client?action=restart
POST /api/v1/node/services/nsx-platform-client?action=start
POST /api/v1/node/services/nsx-platform-client?action=stop
|
Read NSX Platform Client service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-platform-client/status
GET /api/v1/cluster/{cluster-node-id}/node/services/nsx-platform-client/status
GET /api/v1/node/services/nsx-platform-client/status
|
Read node status: Returns information about the node appliance's file system, CPU, memory, disk usage, and uptime. |
GET /api/v1/transport-nodes/{transport-node-id}/node/status
GET /api/v1/cluster/{cluster-node-id}/node/status
GET /api/v1/node/status
|
Update node status: Clears the node bootup status. |
POST /api/v1/transport-nodes/{transport-node-id}/node/status?action=clear_bootup_error
POST /api/v1/cluster/{cluster-node-id}/node/status?action=clear_bootup_error
POST /api/v1/node/status?action=clear_bootup_error
|
Read node version |
GET /api/v1/transport-nodes/{transport-node-id}/node/version
GET /api/v1/cluster/{cluster-node-id}/node/version
GET /api/v1/node/version
|
List Transport Node collections: Returns all Transport Node collections. |
GET /api/v1/transport-node-collections
|
Create transport node collection by attaching Transport Node Profile to cluster: When a transport node collection is created, the hosts in the compute collection are prepared automatically, i.e. NSX Manager attempts to install the NSX components on them. Transport nodes for these hosts are created using the configuration specified in the transport node profile. |
POST /api/v1/transport-node-collections
|
Detach transport node profile from compute collection: Deleting a transport node collection detaches the transport node profile (TNP) from the compute collection. This has no effect on existing transport nodes; however, new hosts added to the compute collection are no longer automatically converted to NSX transport nodes. Detaching a TNP from a compute collection does not delete the TNP. |
DELETE /api/v1/transport-node-collections/{transport-node-collection-id}
|
Get Transport Node collection by id: Returns the transport node collection with the given id. |
GET /api/v1/transport-node-collections/{transport-node-collection-id}
|
Retry the process on applying transport node profile: This API is relevant for compute collections on which vLCM is enabled. Invoke it to retry the realization of the transport node profile on the compute collection; this is useful when profile realization failed because of an error in vLCM. This API has no effect if vLCM is not enabled on the compute collection. |
POST /api/v1/transport-node-collections/{transport-node-collection-id}?action=retry_profile_realization
|
Update Transport Node collection: Attaches a different transport node profile to the compute collection by updating the transport node collection. |
PUT /api/v1/transport-node-collections/{transport-node-collection-id}
|
Get Transport Node collection application state: Returns the state of the transport node collection, derived from the states of the transport nodes of the hosts in the compute collection. |
GET /api/v1/transport-node-collections/{transport-node-collection-id}/state
|
List Transport Node Profiles: Returns information about all transport node profiles. |
GET /api/v1/transport-node-profiles
(Deprecated)
|
Create a Transport Node Profile: A transport node profile captures the configuration needed to create a transport node. A transport node profile can be attached to compute collections for automatic transport node creation of member hosts. |
POST /api/v1/transport-node-profiles
(Deprecated)
|
Delete a Transport Node Profile: Deletes the specified transport node profile. A transport node profile can be deleted only when it is not attached to any compute collection. |
DELETE /api/v1/transport-node-profiles/{transport-node-profile-id}
(Deprecated)
|
Get a Transport Node Profile: Returns information about a specified transport node profile. |
GET /api/v1/transport-node-profiles/{transport-node-profile-id}
(Deprecated)
|
Update a Transport Node Profile: When the configuration of a transport node profile (TNP) is updated, all transport nodes in all compute collections to which the TNP is attached are updated to reflect the new configuration. |
PUT /api/v1/transport-node-profiles/{transport-node-profile-id}
(Deprecated)
|
List Transport Nodes: Returns information about all transport nodes along with underlying host or edge details. A transport node is a host or edge that contains hostswitches. A hostswitch can have virtual machines connected to it. Because each transport node has hostswitches, transport nodes can also have virtual tunnel endpoints, which means that they can be part of the overlay. |
GET /api/v1/transport-nodes
(Deprecated)
|
Create a Transport Node: Transport nodes are hypervisor hosts and NSX Edges that participate in an NSX-T overlay. For a hypervisor host, this means that it hosts VMs that communicate over NSX-T logical switches. For NSX Edges, this means that it has logical router uplinks and downlinks. This API creates a transport node for a host node (hypervisor) or edge node (router) in the transport network. When you run this command for a host, NSX Manager attempts to install the NSX kernel modules, which are packaged as VIB, RPM, or DEB files. For the installation to succeed, you must provide the host login credentials and the host thumbprint. To get the ESXi host thumbprint, SSH to the host and run `openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout`. To generate the host key thumbprint using the SHA-256 algorithm, follow these steps: log in to the host, making sure the connection is not vulnerable to a man-in-the-middle attack; check whether a public key already exists (the host public key is generally located at '/etc/ssh/ssh_host_rsa_key.pub'); if the key is not present, generate a new one by running `ssh-keygen -t rsa` and following the instructions; then generate a SHA-256 hash of the key with `awk '{print $2}' id_rsa.pub | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64`, passing the appropriate file name if the public key is stored under a name other than the default 'id_rsa.pub'. Additional documentation on creating a transport node can be found in the NSX-T Installation Guide. For the transport node to forward packets, the host_switch_spec property must be specified. Host switches (called bridges in OVS on KVM hypervisors) are the individual switches within the host virtual switch; virtual machines are connected to the host switches. When creating a transport node, you must specify whether the host switches are already manually preconfigured on the node or whether NSX should create and manage them, and you indicate this choice through the type of host switches you pass in the host_switch_spec property of the TransportNode request payload. For a KVM host, you can preconfigure the host switch, or you can have NSX Manager perform the configuration. For an ESXi host or NSX Edge node, NSX Manager always configures the host switch. To preconfigure the host switches on a KVM host, pass an array of PreconfiguredHostSwitchSpec objects that describes those host switches. In the current NSX-T release, only one preconfigured host switch can be specified. See the PreconfiguredHostSwitchSpec schema definition for the properties that must be provided. Preconfigured host switches are supported only on KVM hosts, not on ESXi hosts or NSX Edge nodes. To let NSX manage the host switch configuration on KVM hosts, ESXi hosts, or NSX Edge nodes, pass an array of StandardHostSwitchSpec objects in the host_switch_spec property, and NSX will automatically create host switches with the properties you provide. In the current NSX-T release, up to 16 host switches can be automatically managed. See the StandardHostSwitchSpec schema definition for the properties that must be provided. Note: previous versions of NSX-T also used a property named transport_zone_endpoints at the TransportNode level. This property is deprecated, which allows some combinations of new-client and old-client payloads. Examples [1] & [2] show an old/existing client request and response that populate the transport_zone_endpoints property at the TransportNode level. Example [3] shows a TransportNode creation request/response that populates the transport_zone_endpoints property at the StandardHostSwitch level along with other new properties. The node_id field is marked as deprecated; to convert an edge node into a transport node, use the PUT API https://<nsx-mgr>/api/v1/transport-nodes/ If the host node (hypervisor) or edge node (router) has already been added to the system, it can be converted to a transport node by providing node_id in the request. If it is not already present in the system, the information should be provided under node_deployment_info. |
POST /api/v1/transport-nodes
(Deprecated)
|
Clear edge transport node stale entries: An edge transport node maintains entries in many internal tables. In some cases a few of these entries might not get cleaned up during edge transport node deletion. This API cleans up any stale entries that may exist in the internal tables that store edge transport node data. |
POST /api/v1/transport-nodes?action=clean_stale_entries
|
Clear edge transport node stale entry: An edge transport node maintains entries in many internal tables. In some cases a few of these entries might not get cleaned up during edge transport node deletion. This API cleans up an individual stale edge entry that may exist in the internal tables that store edge transport node data. |
POST /api/v1/transport-nodes/{edge-node-id}/action/clean-stale-entries
|
Redeploys a new node that replaces the specified edge node: Redeploys an edge node at NSX Manager that replaces the edge node with identifier <node-id>. If NSX Manager can access the specified edge node, the node is put into maintenance mode and the associated VM is then deleted. This is a means to reset all configuration on the edge node. The communication channel between NSX Manager and the edge is re-established after this operation. |
POST /api/v1/transport-nodes/{node-id}?action=redeploy
(Deprecated)
|
Add or update deployment references of edge VM: Populates placement references for the registered edge VM and manages lifecycle operations, such as edit and delete, of the specified edge VM. This internal API may be used to convert a manually deployed edge VM into an NSX lifecycle-managed edge VM. The edge VM must be reachable from NSX Manager. NSX Manager fetches the live configuration from the edge and vCenter Server and reports the values in the GET API for the following configuration. From the edge, NSX Manager fetches hostname, NTP servers, syslog servers, DNS servers, search domains, and SSH settings. From vCenter Server, it fetches storage, networks, compute cluster, resource allocations, and reservations for CPU and memory. Fields that are not refreshed from external sources are saved from the request payload itself; these include login credentials, resource pool, static IP address, network interfaces with static port attachments, and the advanced configuration section. If these fields are configured on the edge but not specified in the request payload, the converted edge will have gaps compared with an NSX Manager lifecycle-managed edge deployed with this configuration. Any such gaps surface when subsequent lifecycle operations are performed. |
POST /api/v1/transport-nodes/{node-id}?action=addOrUpdatePlacementReferences
|
Get the module details of a transport node |
GET /api/v1/transport-nodes/{node-id}/modules
(Deprecated)
|
Get statistics for all logical router NAT rules on a transport node: Returns the summation of statistics for all rules from all logical routers present on the given transport node. Only cached statistics are supported; the query parameter "source=realtime" is not supported. |
GET /api/v1/transport-nodes/{node-id}/statistics/nat-rules
|
Invoke DELETE request on target transport node |
DELETE /api/v1/transport-nodes/{target-node-id}/{target-uri}
|
Invoke GET request on target transport node |
GET /api/v1/transport-nodes/{target-node-id}/{target-uri}
|
Invoke POST request on target transport node |
POST /api/v1/transport-nodes/{target-node-id}/{target-uri}
|
Invoke PUT request on target transport node |
PUT /api/v1/transport-nodes/{target-node-id}/{target-uri}
|
Delete a Transport Node: Deletes the specified transport node. The query param force can be used to force-delete host nodes; force deletion of edge and public cloud gateway nodes is not supported, nor is force delete supported if the transport node is part of a cluster on which a transport node profile is applied. If the delete is called with force unset or set to false and the uninstall of NSX components on the host fails, the TransportNodeState object is retained; if it is called with force set to true and the uninstall fails, the TransportNodeState object is deleted. This API also removes the specified node (host or edge) from the system. If the unprepare_host option is set to false, the host is deleted without uninstalling the NSX components from it. |
DELETE /api/v1/transport-nodes/{transport-node-id}
(Deprecated)
|
Get a Transport Node: Returns information about a specified transport node. |
GET /api/v1/transport-nodes/{transport-node-id}
(Deprecated)
|
Restart the inventory sync for the node if it is paused currently: Restarts the inventory sync for the node if it is currently internally paused. After this action, the next inventory sync coming from the node is processed. |
POST /api/v1/transport-nodes/{transport-node-id}?action=restart_inventory_sync
|
Disable flow cache for an edge transport node: Disables flow cache for an edge transport node. Caution: this restarts the edge dataplane and hence may cause network disruption. |
POST /api/v1/transport-nodes/{transport-node-id}?action=disable_flow_cache
|
Enable flow cache for an edge transport node: Enables flow cache for an edge transport node. Caution: this restarts the edge dataplane and hence may cause network disruption. |
POST /api/v1/transport-nodes/{transport-node-id}?action=enable_flow_cache
|
Refresh the node configuration for the Edge node: This API is applicable to Edge transport nodes. If you update the edge configuration and find a discrepancy between the Edge configuration at NSX Manager and the realized configuration, use this API to refresh the configuration at NSX Manager. It refreshes the Edge configuration from sources external to NSX Manager, such as vSphere Server or the Edge node CLI. After this action, the Edge configuration at NSX Manager is updated, and GET api/v1/transport-nodes shows the refreshed data. From the 3.2 release onwards, the refresh API updates the MP intent by default. |
POST /api/v1/transport-nodes/{transport-node-id}?action=refresh_node_configuration&resource_type=EdgeNode
(Deprecated)
|
Apply cluster level Transport Node Profile on overridden host: A host can be overridden to have a different configuration than the Transport Node Profile (TNP) on its cluster. This action restores such an overridden host back to the cluster-level TNP. This API can also be used in another case: when a TNP is applied to a cluster and a validation fails (e.g. VMs running on a host), the existing transport node (TN) is not updated. Once the issue is resolved manually (e.g. VMs powered off), you can call this API to update the TN as per the cluster-level TNP. |
POST /api/v1/transport-nodes/{transport-node-id}?action=restore_cluster_config
(Deprecated)
|
Update a Transport Node: Modifies the transport node information. The host_switch_name field must match the host_switch_name value specified in the transport zone (API: transport-zones). You must create the associated uplink profile (API: host-switch-profiles) before you can specify an uplink_name here. If the host is an ESX host and has only one physical NIC being used by a vSphere standard switch, TransportNodeUpdateParameters should be used to migrate the management interface and the physical NIC into a logical switch that is in a transport zone this transport node will join or has already joined. If the migration is already done, TransportNodeUpdateParameters can also be used to migrate the management interface and the physical NIC back to a vSphere standard switch. In other cases, TransportNodeUpdateParameters should NOT be used. When updating a transport node, fetch the existing transport node and modify only the required properties, keeping the other properties as they are (see the sketch after this entry). This API also modifies attributes of the node (host or edge). Note: previous versions of NSX-T also used a property named transport_zone_endpoints at the TransportNode level. This property is deprecated, which allows some combinations of new-client and old-client payloads. Example [1] shows an old/existing client request and response that populate the transport_zone_endpoints property at the TransportNode level. Example [2] shows an update of the TransportNode from example [1], adding a new StandardHostSwitch with transport_zone_endpoints populated at the StandardHostSwitch level. The TransportNode-level transport_zone_endpoints will ONLY contain TransportZoneEndpoints that were originally specified there during a create/update operation, and does not include TransportZoneEndpoints specified directly at the StandardHostSwitch level. If the API response is 200 OK, wait for the configuration to be realized; realization of the intent can be tracked using /api/v1/transport-nodes/ |
PUT /api/v1/transport-nodes/{transport-node-id}
(Deprecated)
|
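A sketch of the fetch-then-modify pattern the description calls for. jq is used only for illustration; the essential point is sending the full object back, including its current _revision (NSX rejects updates made against a stale revision), with only the intended property changed. Host, credentials, and IDs are placeholders:

```sh
# 1. Fetch the current transport node document.
curl -k -u 'admin:PLACEHOLDER' \
  "https://nsx-mgr.example.com/api/v1/transport-nodes/tn-01" > tn.json

# 2. Change only the property you need; everything else stays as fetched.
jq '.display_name = "tn-01-renamed"' tn.json > tn-updated.json

# 3. PUT the complete, updated object back.
curl -k -u 'admin:PLACEHOLDER' -X PUT \
  -H 'Content-Type: application/json' \
  -d @tn-updated.json \
  "https://nsx-mgr.example.com/api/v1/transport-nodes/tn-01"
```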
Return the list of capabilities of transport node: Returns information about the capabilities of the transport host node. Edge nodes do not have capabilities. |
GET /api/v1/transport-nodes/{transport-node-id}/capabilities
|
Get a Transport Node's State: Returns information about the current state of the transport node configuration and about the associated hostswitch. Change introduced in 4.1.2 for ESX transport nodes: the VIB details are not retrieved on every state API call; they are retrieved by periodic polling on the host. Therefore an NSX VIB version mismatch or NSX VIB absence is reported by this API only after subsequent polling takes place. Currently, the poll frequency is 10 minutes. |
GET /api/v1/transport-nodes/{transport-node-id}/state
(Deprecated)
|
Resync a Transport Node: Resyncs the TransportNode configuration on a host. This is similar to updating the TransportNode with its existing configuration, but it force-syncs the configuration to the host (no backend optimizations). |
POST /api/v1/transport-nodes/{transportnode-id}?action=resync_host_config
|
Update transport node maintenance mode: Puts the transport node into maintenance mode or exits it from maintenance mode. |
POST /api/v1/transport-nodes/{transportnode-id}
(Deprecated)
|
List transport nodes by realized state: Returns a list of transport node states whose realized state matches the value provided as a query parameter. If this API is called multiple times in parallel, it fails with an error indicating that another request is already in progress. In that case, try the API on another NSX Manager instance (if one exists) or retry after some time. |
GET /api/v1/transport-nodes/state
(Deprecated)
|