<1> The task type is `text_embedding` in the path and the `inference_id`, which is the unique identifier of the {infer} endpoint, is `azure_openai_embeddings`.
<2> The API key for accessing your Azure OpenAI services.
Alternatively, you can provide an `entra_id` instead of an `api_key` here.
The <<get-inference-api>> does not return this information.
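As a sketch, these settings correspond to a create request along the following lines; every value shown is a placeholder for your own Azure OpenAI resource details:

[source,console]
----
PUT _inference/text_embedding/azure_openai_embeddings
{
  "service": "azureopenai",
  "service_settings": {
    "api_key": "<api_key>",
    "resource_name": "<resource_name>",
    "deployment_id": "<deployment_id>",
    "api_version": "<api_version>"
  }
}
----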
<1> The task type is `text_embedding` in the path and the `inference_id`, which is the unique identifier of the {infer} endpoint, is `azure_ai_studio_embeddings`.
<2> The API key for accessing your Azure AI Studio deployed model. You can find this on your model deployment's overview page.
<3> The target URI for accessing your Azure AI Studio deployed model. You can find this on your model deployment's overview page.
<4> The model provider, such as `cohere` or `openai`.
<5> The deployed endpoint type. This can be either `token` (for "pay as you go" deployments) or `realtime` (for real-time deployment endpoints).
NOTE: It may take a few minutes for your model's deployment to become available
after it is created. If you try to create the model as above and receive a
`404` error message, wait a few minutes and try again.
Also, when using this model, the recommended similarity measure to use in the
`dense_vector` field mapping is `dot_product`.
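A create request matching the callouts above might look like this sketch; the API key and target URI are placeholders, and the `provider` and `endpoint_type` values are illustrative:

[source,console]
----
PUT _inference/text_embedding/azure_ai_studio_embeddings
{
  "service": "azureaistudio",
  "service_settings": {
    "api_key": "<api_key>",
    "target": "<target_uri>",
    "provider": "cohere",
    "endpoint_type": "token"
  }
}
----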
<1> The task type is `text_embedding` per the path. `google_vertex_ai_embeddings` is the unique identifier of the {infer} endpoint (its `inference_id`).
<2> A valid service account in JSON format for the Google Vertex AI API.
<3> For the list of the available models, refer to the https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api[Text embeddings API] page.
<4> The name of the location to use for the {infer} task. Refer to https://cloud.google.com/vertex-ai/generative-ai/docs/learn/locations[Generative AI on Vertex AI locations] for available locations.
<5> The name of the project to use for the {infer} task.
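Put together, these settings form a create request along these lines; all values are placeholders for your own Google Cloud project details:

[source,console]
----
PUT _inference/text_embedding/google_vertex_ai_embeddings
{
  "service": "googlevertexai",
  "service_settings": {
    "service_account_json": "<service_account_json>",
    "model_id": "<model_id>",
    "location": "<location>",
    "project_id": "<project_id>"
  }
}
----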
<1> The task type is `text_embedding` in the path and the `inference_id`, which is the unique identifier of the {infer} endpoint, is `mistral_embeddings`.
<2> The API key for accessing the Mistral API. You can find this in your Mistral account's API Keys page.
<3> The Mistral embeddings model name, for example `mistral-embed`.
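These settings can be sketched as a create request like the following; the API key is a placeholder and `mistral-embed` is the example model name from the callouts:

[source,console]
----
PUT _inference/text_embedding/mistral_embeddings
{
  "service": "mistral",
  "service_settings": {
    "api_key": "<api_key>",
    "model": "mistral-embed"
  }
}
----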
<1> The task type is `text_embedding` in the path and the `inference_id`, which is the unique identifier of the {infer} endpoint, is `amazon_bedrock_embeddings`.
<2> The access key can be found on your AWS IAM management page for the user account to access Amazon Bedrock.
<3> The secret key should be the paired key for the specified access key.
<4> Specify the region that your model is hosted in.
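As a sketch, the callouts above map onto a create request along these lines; all values are placeholders, and the `provider` and `model` fields, while not covered by the callouts, are part of this service's settings:

[source,console]
----
PUT _inference/text_embedding/amazon_bedrock_embeddings
{
  "service": "amazonbedrock",
  "service_settings": {
    "access_key": "<aws_access_key>",
    "secret_key": "<aws_secret_key>",
    "region": "<region>",
    "provider": "<provider>",
    "model": "<model_id>"
  }
}
----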
<1> The task type is `text_embedding` in the path and the `inference_id`, which is the unique identifier of the {infer} endpoint, is `alibabacloud_ai_search_embeddings`.
<2> The API key for accessing the AlibabaCloud AI Search API. You can find your API keys in
your AlibabaCloud account under the
https://opensearch.console.aliyun.com/cn-shanghai/rag/api-key[API keys section]. You need to provide
your API key only once. The <<get-inference-api>> does not return your API
key.
<3> The AlibabaCloud AI Search embeddings model name, for example `ops-text-embedding-zh-001`.
<4> The name of your AlibabaCloud AI Search host address.
<5> The name of your AlibabaCloud AI Search workspace.
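A create request matching these callouts might look like the following sketch; the API key, host, and workspace are placeholders, and the model name from the callouts is assumed to be passed as the `service_id`:

[source,console]
----
PUT _inference/text_embedding/alibabacloud_ai_search_embeddings
{
  "service": "alibabacloud-ai-search",
  "service_settings": {
    "api_key": "<api_key>",
    "service_id": "ops-text-embedding-zh-001",
    "host": "<host>",
    "workspace": "<workspace>"
  }
}
----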