Azure OpenAI integration setup
If you’re new to integrations in Rewst, read through our introductory integration documentation here.
For our general OpenAI integration, see this documentation.
Azure OpenAI setup steps in Microsoft Azure
Navigate to your Microsoft Azure Portal.
Click +Create a Resource.

Search for Azure OpenAI. Click the tile for Azure OpenAI.

Request access to the Azure OpenAI service.
You must fill out a form to request access to the service.

Wait for the welcome email.
Note that activation of the service can take up to 48 hours, based on Microsoft's timelines.

Return to the Azure Portal upon receiving the approval email.
Create the OpenAI Service in your Azure subscription.

Complete the Network > Tags > Review + Submit steps.
Click Go to resource after submitting and confirming that deployment has finished.

Click Go to Foundry Portal.

Navigate to the Deployments tab on Azure AI Foundry.

You can now deploy a desired OpenAI model. The below example deploys a gpt-5-mini model. Click Deploy Model > Deploy base model.
Search for gpt-5-mini and select gpt-5-mini. Click Confirm.
Click Deploy.

You'll be redirected to the deployed model page that contains the Key and the Target URI. Copy both values and store them somewhere secure. These will be needed for further setup steps in Rewst.
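Before continuing in Rewst, the Key and Target URI can be sanity-checked by calling the deployment directly. The sketch below only assembles the request URL and headers; the resource name, deployment name, and key shown are placeholders, not values from your tenant, and the api-version string is just an example:

```python
# Sketch: assemble a request that exercises an Azure OpenAI deployment.
# Substitute the Target URI and Key copied from the deployed model page.

def build_chat_request(target_uri: str, api_key: str, api_version: str) -> dict:
    """Assemble the URL and headers for a chat-completions call."""
    return {
        "url": f"{target_uri.rstrip('/')}/chat/completions?api-version={api_version}",
        "headers": {
            "api-key": api_key,  # Azure OpenAI uses an api-key header, not a Bearer token
            "Content-Type": "application/json",
        },
    }

request = build_chat_request(
    "https://my-resource.openai.azure.com/openai/deployments/gpt-5-mini",  # placeholder Target URI
    "<your-key>",     # placeholder Key
    "2024-02-01",     # example api-version; use one your deployment supports
)
# POST request["url"] with request["headers"] and a JSON body such as
# {"messages": [{"role": "user", "content": "Hello"}]} to verify the deployment responds.
```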

Set up steps in Rewst
If you have not yet set up the OpenAI integration in Rewst, follow the setup steps in our general OpenAI integration setup guide.
Configure Azure OpenAI with existing OpenAI integrations
If you already have an OpenAI integration configured, you can add an instance for Azure.
Navigate to the integration's configuration page at Configuration > Integrations > OpenAI.
Click Default > Add Configuration.

Name the configuration Azure. Confirm that your Base URL reflects the accurate resource and region collected from Azure. Incorrect URLs will prevent Rewst from accessing the OpenAI service.
Enter the Azure API Version.
Crafting the Base URL
The Base URL is what directs Rewst to communicate with the correct Azure OpenAI instance. Here’s how to construct it:
Anatomy of the Base URL: The standard structure of the Base URL provided by Azure typically looks like this: https://<resource-name>.openai.azure.com/openai/deployments/<deployment-name>
If you are using GPT 3.5 Turbo, for example, you will append it to the URL like so:
https://<resource-name>.openai.azure.com/openai/deployments/gpt-35-turbo
Customize Your Base URL: Depending on your service instance or specific requirements, you may need to alter the base URL. For example:
If you have a custom deployment named custom-model, you might append it to the URL like so:
https://<resource-name>.openai.azure.com/openai/deployments/<custom-model>
Ensure you replace <resource-name>, <deployment-name>, and/or <custom-model> with the actual information from your Azure OpenAI service.
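The anatomy above can be expressed as a small helper. This is an illustrative sketch; the function name and the my-resource value are hypothetical, not part of Rewst or Azure:

```python
def azure_openai_base_url(resource_name: str, deployment_name: str) -> str:
    """Construct the Base URL following the structure Azure provides:
    https://<resource-name>.openai.azure.com/openai/deployments/<deployment-name>
    """
    return (
        f"https://{resource_name}.openai.azure.com"
        f"/openai/deployments/{deployment_name}"
    )

# The GPT 3.5 Turbo example from above:
print(azure_openai_base_url("my-resource", "gpt-35-turbo"))
# → https://my-resource.openai.azure.com/openai/deployments/gpt-35-turbo
```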

You must complete the Azure API Version field; otherwise, the integration will fail.
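For reference, the API version travels as an api-version query parameter on every request made against the Base URL, which is why the field is required. A sketch of how a client combines the two (the version string and helper name below are only examples):

```python
from urllib.parse import urlencode

def with_api_version(base_url: str, path: str, api_version: str) -> str:
    """Append a request path and the required api-version query parameter."""
    query = urlencode({"api-version": api_version})
    return f"{base_url.rstrip('/')}/{path.lstrip('/')}?{query}"

print(with_api_version(
    "https://my-resource.openai.azure.com/openai/deployments/gpt-35-turbo",
    "chat/completions",
    "2024-02-01",  # example version; use the value entered in Rewst
))
# → https://my-resource.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=2024-02-01
```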
Configure workflow triggers to use Azure OpenAI
If this is your second instance of OpenAI (that is, you configured the direct OpenAI integration first), you can adjust your workflow triggers to choose which instance of the integration to use in your workflows.
Navigate to the integration's configuration page at Configuration > Integrations > OpenAI.
Find Integration Overrides under Trigger Configuration.
Choose OpenAI in the Integration drop-down selector.
Choose which of your OpenAI integration instances to use in the Integration Configuration drop-down selector.
