googleVertexAiEndpoint

Models are deployed into an Endpoint, and the Endpoint is then called to obtain predictions and explanations.

To get more information about Endpoint, see the Vertex AI API documentation.

Example Usage - Vertex AI Endpoint Network

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as google from "./.gen/providers/google";
const dataGoogleComputeNetworkVertexNetwork =
  new google.dataGoogleComputeNetwork.DataGoogleComputeNetwork(
    this,
    "vertex_network",
    {
      name: "network-name",
    }
  );
const dataGoogleProjectProject = new google.dataGoogleProject.DataGoogleProject(
  this,
  "project",
  {}
);
const googleComputeGlobalAddressVertexRange =
  new google.computeGlobalAddress.ComputeGlobalAddress(this, "vertex_range", {
    addressType: "INTERNAL",
    name: "address-name",
    network: dataGoogleComputeNetworkVertexNetwork.id,
    prefixLength: 24,
    purpose: "VPC_PEERING",
  });
new google.kmsCryptoKeyIamMember.KmsCryptoKeyIamMember(this, "crypto_key", {
  cryptoKeyId: "kms-name",
  member: `serviceAccount:service-${dataGoogleProjectProject.number}@gcp-sa-aiplatform.iam.gserviceaccount.com`,
  role: "roles/cloudkms.cryptoKeyEncrypterDecrypter",
});
const googleServiceNetworkingConnectionVertexVpcConnection =
  new google.serviceNetworkingConnection.ServiceNetworkingConnection(
    this,
    "vertex_vpc_connection",
    {
      network: dataGoogleComputeNetworkVertexNetwork.id,
      reservedPeeringRanges: [googleComputeGlobalAddressVertexRange.name],
      service: "servicenetworking.googleapis.com",
    }
  );
new google.vertexAiEndpoint.VertexAiEndpoint(this, "endpoint", {
  dependsOn: [googleServiceNetworkingConnectionVertexVpcConnection],
  description: "A sample vertex endpoint",
  displayName: "sample-endpoint",
  encryptionSpec: {
    kmsKeyName: "kms-name",
  },
  labels: {
    "label-one": "value-one",
  },
  location: "us-central1",
  name: "endpoint-name",
  network: `projects/${dataGoogleProjectProject.number}/global/networks/${dataGoogleComputeNetworkVertexNetwork.name}`,
});

Argument Reference

The following arguments are supported; a minimal sketch using them follows the list:

  • name - (Required) The resource name of the Endpoint. The name must be numeric with no leading zeros and can be at most 10 digits.

  • displayName - (Required) The display name of the Endpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.

  • location - (Required) The location for the resource.

  • description - (Optional) The description of the Endpoint.

  • labels - (Optional) The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

  • encryptionSpec - (Optional) Customer-managed encryption key spec for an Endpoint. If set, this Endpoint and all sub-resources of this Endpoint will be secured by this key. Structure is documented below.

  • network - (Optional) The full name of the Google Compute Engine network to which the Endpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. Only one of the fields, network or enable_private_service_connect, can be set. Format: projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name.

  • project - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used.
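
Putting these together, the construct id and all values below are placeholders, and the numeric name follows the constraint described above:

import * as google from "./.gen/providers/google";

new google.vertexAiEndpoint.VertexAiEndpoint(this, "minimal_endpoint", {
  // name must be numeric with no leading zeros, at most 10 digits.
  name: "1234567890",
  // displayName may be up to 128 UTF-8 characters.
  displayName: "minimal-endpoint",
  location: "us-central1",
  // Optional: defaults to the provider project when omitted.
  project: "my-project-id",
});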

The encryptionSpec block supports:

  • kmsKeyName - (Required) The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/myProject/locations/myRegion/keyRings/myKr/cryptoKeys/myKey. The key needs to be in the same region as where the compute resource is created.
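
To illustrate the block's shape in CDKTF, a hedged sketch of an Endpoint secured with a customer-managed key; the construct id, name, and key path are placeholders in the format above:

new google.vertexAiEndpoint.VertexAiEndpoint(this, "cmek_endpoint", {
  name: "2345678901",
  displayName: "cmek-endpoint",
  location: "us-central1",
  encryptionSpec: {
    // The key must be in the same region as the endpoint (us-central1 here).
    kmsKeyName:
      "projects/myProject/locations/us-central1/keyRings/myKr/cryptoKeys/myKey",
  },
});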

Attributes Reference

In addition to the arguments listed above, the following computed attributes are exported; a sketch of reading them with TerraformOutput follows the list:

  • id - an identifier for the resource with format projects/{{project}}/locations/{{location}}/endpoints/{{name}}

  • deployedModels - Output only. The models deployed in this Endpoint. To add or remove DeployedModels use EndpointService.DeployModel and EndpointService.UndeployModel respectively. Models can also be deployed and undeployed using the Cloud Console. Structure is documented below.

  • etag - Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

  • createTime - Output only. Timestamp when this Endpoint was created.

  • updateTime - Output only. Timestamp when this Endpoint was last updated.

  • modelDeploymentMonitoringJob - Output only. Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by CreateModelDeploymentMonitoringJob. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{modelDeploymentMonitoringJob}
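
The sketch below assumes the generated google provider bindings from the example above; the construct ids and values are placeholders:

import { TerraformOutput } from "cdktf";
import * as google from "./.gen/providers/google";

const endpoint = new google.vertexAiEndpoint.VertexAiEndpoint(
  this,
  "output_endpoint",
  {
    name: "3456789012",
    displayName: "output-endpoint",
    location: "us-central1",
  }
);
// Computed attributes are Terraform tokens at synth time and resolve to
// real values after apply.
new TerraformOutput(this, "endpoint_id", { value: endpoint.id });
new TerraformOutput(this, "endpoint_create_time", {
  value: endpoint.createTime,
});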

The deployedModels block contains:

  • dedicatedResources - (Output) A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration. Structure is documented below.

  • automaticResources - (Output) A description of resources that are, to a large degree, decided by Vertex AI and require only a modest additional configuration. Structure is documented below.

  • id - (Output) The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID. This value should be 1-10 characters, and valid characters are /[0-9]/.

  • model - (Output) The name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint.

  • modelVersionId - (Output) Output only. The version ID of the model that is deployed.

  • displayName - (Output) The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.

  • createTime - (Output) Output only. Timestamp when the DeployedModel was created.

  • serviceAccount - (Output) The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.

  • enableAccessLogging - (Output) If true, online prediction access logs are sent to Stackdriver Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request. Note that Stackdriver logs may incur a cost, especially if your project receives prediction requests at a high queries per second rate (QPS). Estimate your costs before enabling this option.

  • privateEndpoints - (Output) Output only. Provide paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured. Structure is documented below.

  • sharedResources - (Output) The resource name of the shared DeploymentResourcePool to deploy on. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}

  • enableContainerLogging - (Output) If true, the container of the DeployedModel instances will send stderr and stdout streams to Stackdriver Logging. Only supported for custom-trained Models and AutoML Tabular Models.

The dedicatedResources block contains:

  • machineSpec - (Output) The specification of a single machine used by the prediction. Structure is documented below.

  • minReplicaCount - (Output) The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.

  • maxReplicaCount - (Output) The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count is used as the default. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).

  • autoscalingMetricSpecs - (Output) The metric specifications that override a resource utilization metric's target value (CPU utilization, accelerator's duty cycle, and so on; the target defaults to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and the accelerator's duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, for Online Prediction, to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80. Structure is documented below.

The machineSpec block contains:

  • machineType - (Output) The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob, or as part of WorkerPoolSpec, this field is required.

  • acceleratorType - (Output) The type of accelerator(s) that may be attached to the machine as per accelerator_count. See the API documentation for possible values.

  • acceleratorCount - (Output) The number of accelerators to attach to the machine.

The autoscalingMetricSpecs block contains:

  • metricName - (Output) The resource metric name. Supported metrics for Online Prediction: aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle and aiplatform.googleapis.com/prediction/online/cpu/utilization.

  • target - (Output) The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.

The automaticResources block contains:

  • minReplicaCount - (Output) The minimum number of replicas this DeployedModel will always be deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.

  • maxReplicaCount - (Output) The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic is assumed, though Vertex AI may be unable to scale beyond a certain replica number.

The privateEndpoints block contains:

  • predictHttpUri - (Output) Output only. HTTP(S) path to send prediction requests.

  • explainHttpUri - (Output) Output only. HTTP(S) path to send explain requests.

  • healthHttpUri - (Output) Output only. HTTP(S) path to send health check requests.

  • serviceAttachment - (Output) Output only. The name of the service attachment resource. Populated if private service connect is enabled.

Timeouts

This resource provides the following Timeouts configuration options; a CDKTF sketch follows the list:

  • create - Default is 20 minutes.
  • update - Default is 20 minutes.
  • delete - Default is 20 minutes.
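
In CDKTF these map to the resource's timeouts property; the construct id and values below are placeholders:

new google.vertexAiEndpoint.VertexAiEndpoint(this, "patient_endpoint", {
  name: "5678901234",
  displayName: "patient-endpoint",
  location: "us-central1",
  // Terraform duration strings; each defaults to 20 minutes when omitted.
  timeouts: {
    create: "30m",
    update: "30m",
    delete: "30m",
  },
});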

Import

Endpoint can be imported using any of these accepted formats:

$ terraform import google_vertex_ai_endpoint.default projects/{{project}}/locations/{{location}}/endpoints/{{name}}
$ terraform import google_vertex_ai_endpoint.default {{project}}/{{location}}/{{name}}
$ terraform import google_vertex_ai_endpoint.default {{location}}/{{name}}

User Project Overrides

This resource supports User Project Overrides.
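
For example, a hedged sketch of enabling the override on the provider, assuming the same generated bindings as above; the billing project id is a placeholder:

new google.provider.GoogleProvider(this, "google", {
  // Send quota checks and billing to this project instead of the resource
  // project; this typically requires serviceusage.services.use there.
  billingProject: "my-billing-project",
  userProjectOverride: true,
});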