How to Set Up Azure AI Foundry: Create Hub, Projects & Deploy Models

Azure AI Foundry Tutorial

In this blog, we will look at Azure AI Foundry basics and then set up an AI Foundry hub and a project to deploy a large language model (LLM).

By the end of this blog, you will have learned:

  • What Azure AI Foundry is and how it works
  • How to set up an AI Foundry Hub and Project
  • How to deploy a large language model (LLM) in the project
  • Key features like model fine-tuning, deployment options, agent services, and playgrounds
  • The difference between Azure AI Foundry and Azure OpenAI
  • Best practices for setting up access, storage, and security

What is Azure AI Foundry?

Azure AI Foundry is Microsoft's end-to-end managed service for the complete AI/ML application lifecycle, including comprehensive MLOps capabilities.

It offers various AI/ML related services with integrated Azure services for the entire lifecycle of an AI/ML application.

In short, if you're looking for a one-stop solution to build, train, deploy, and scale AI applications without stitching together multiple separate services, Azure AI Foundry is the service you need.

When we first started using Azure AI Foundry, it was quite confusing.

There is a separate AI Foundry portal, and then there is also an AI Foundry service inside the Azure portal.

So before you begin, it's important to understand these two key concepts in Azure AI Foundry. In this blog we will be looking at the AI Foundry service through the Azure portal.

The following image shows the architecture of Azure AI Foundry.

architecture of Azure AI Foundry

Azure AI Foundry Hub: The hub acts as the parent resource that provides shared infrastructure and centralized management. It centralizes and manages the following.

    1. Security and networking configuration (Private endpoints, managed virtual networks, Azure Policies)
    2. Shared compute, storage resources and connections (e.g., to Azure OpenAI, AI Search, Storage, Key Vault, Container Registry)
    3. Access control and permissions (Azure RBAC and Azure ABAC)
    4. Cost management across all child projects
💡
Overall, the Hub manages shared settings and resources.

Azure AI Foundry Projects: Projects are individual workspaces within a hub where actual AI development happens (model training, deployment, etc.). Each project contains the following.

    1. Models (trained or imported)
    2. Datasets for training and testing
    3. Indexes for search and retrieval
    4. Experiments and deployments
    5. Endpoints for model serving

And within Projects, you have access to two main categories of AI services.

  1. Azure OpenAI: For integrating OpenAI's LLMs (GPT, etc.) into applications
  2. Azure AI Services: Readily available AI services, such as Bot Service, AI Content Safety, Machine Learning, AI Search, AI Speech, Vision, Language, etc.

Let's Begin

📌
The only prerequisite to follow this setup is to have an Azure user account with the required permissions to access AI Foundry.

Here is what we are going to do in this guide,

  1. Create Azure AI Foundry Hub (Through Azure Portal)
  2. Create a project inside the Hub
  3. Deploy a large language model in the project.
⚠️
Important Note: You can create a project directly from the Azure AI Foundry Portal, but this creates a 'Foundry project' with limited control over networking and security features.

If you are setting up AI Foundry and need good governance, including advanced networking, security configurations, and enterprise-grade controls, you need to create a hub-based project starting from the Azure portal. This guide is based on AI Foundry hub management through the Azure portal.

Create an Azure AI Foundry Hub

The first step is to create a Hub.

Let's get started.

In the search bar, type Azure AI Foundry and open the service.

Navigate to AI Hubs and click the + Create button, then select Hub to start the AI Hub creation.

the azure ai foundry hub creation page

The resource creation page will open.

On the Basics page, select the subscription, resource group, region, and name for the Hub.

💡
The Hub creation comes with Azure OpenAI (base model) access.
You can use an existing Azure OpenAI resource, or a new one will be created.
The basics page of the azure ai hub resource creation

In the Storage section, you need a storage account to store your project artifacts. You can choose an existing one, or a new one will be created.

Along with that, you need to select one of the credential store options to store the storage account and container registry credentials.

The storage section page of the azure ai hub resource creation

The next one is the Inbound Access section, where we can configure network access for clients to reach the Hub.

⚠️
This is a dev setup, so we are not restricting public access, but in production, you should allow only specific networks to access the Hub.
The inbound access section page of the azure ai hub resource creation

The next section is the Outbound Access section, where we can restrict outbound network access from the Hub.

The outbound access section page of the azure ai hub resource creation

In the Encryption section, we can configure encryption so that our data is encrypted with managed keys.

The encryption section page of the azure ai hub resource creation

On the Identity section, we define the identity so that the Hub can access the storage account, key vault, and container registry.

The identity section page of the azure ai hub resource creation

Finally, add the tags, review the given configuration, and click Create to create the Hub.

💡
The configured settings will be applied to all the projects we create under this hub.

The Hub creation will take a few minutes to complete.

Once the Hub creation is completed, navigate to the Azure AI Hub and click Launch Azure AI Foundry to manage the resources.

the overview page of the azure ai hub and the launch url of the azure ai foundry

You will be on a new dedicated page where you can manage your Hubs, Projects, and other services.

the dedicated azure ai foundry hub page

Now that we have the hub ready, the next step is to create a project under the hub.

Create an Azure AI Project

We can create multiple projects under one hub, and hub-level configurations will be inherited by all of them.

For the demo, we are going to create a single project under this hub (Demo ai hub).

In the overview section of the Hub, we can create a new project by giving a name.

The azure ai project creation page of the azure ai hub

Once the Project creation is complete, you will be redirected to the project page where you can view the endpoints.

In there, you will see three endpoints for various use cases.

listing the default endpoints and api key of the azure ai project

In the Project overview page, we can see the following endpoints,

  1. Azure AI Inference --> Endpoint for models other than the default ones
  2. Azure OpenAI --> Endpoint for the Azure OpenAI service (default model - gpt-4o-mini and the default embedding model - text-embedding-3-small)
  3. Azure AI Services --> Endpoint for Azure's AI services, including vision, speech, and language
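To make the distinction concrete, here is a small sketch of the URL patterns these endpoints typically follow. The resource name is hypothetical and the exact hostnames can vary by setup, so treat this as an illustration and copy the real URLs from your project overview page.

```python
# Hypothetical resource name -- substitute the values shown on your
# own project's overview page.
resource = "my-foundry-resource"

endpoints = {
    # Azure AI model inference endpoint (models beyond the defaults)
    "azure_ai_inference": f"https://{resource}.services.ai.azure.com/models",
    # Azure OpenAI endpoint (e.g., gpt-4o-mini, text-embedding-3-small)
    "azure_openai": f"https://{resource}.openai.azure.com/",
    # Azure AI Services endpoint (vision, speech, language, ...)
    "azure_ai_services": f"https://{resource}.cognitiveservices.azure.com/",
}

for name, url in endpoints.items():
    print(name, "->", url)
```

Each endpoint takes its own API key (or Entra ID token), which is also listed on the project overview page.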
💡
We can create standalone projects on the Azure AI Foundry or create projects under the AI Hub.

All the projects under the AI Hub can utilize the security, connections, and configurations of the Hub.

Now the project creation under the hub is complete, so we can start deploying models and using them.

Deploying a Model In the Project

To deploy a model for the project, navigate to the Models + endpoints section in the left side panel and select + Deploy model to choose from the existing base models or your fine-tuned models.

the model and endpoints page of the azure ai foundry project

For this demo, we will choose one of the base OpenAI models, but you can choose any from the available list.

Choosing Deploy base model will show a list of all available LLMs. We can narrow it down using filters based on our use case.

the llm model selection page to deploy the model for the azure ai foundry project

Once you select a model, it will show a description and details of that model.

the selected llm model and the description for the azure ai foundry project

On the next page, deployment details will be available, and we can customize them as per our requirements, such as the deployment type, tokens-per-minute limit, etc.

the deployment page of the llm model of the azure ai foundry project
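To get a feel for what the tokens-per-minute (TPM) setting means in practice, here is a back-of-the-envelope calculation with entirely hypothetical numbers:

```python
# Hypothetical numbers -- adjust to your own deployment and workload.
tpm_quota = 30_000           # tokens-per-minute assigned to the deployment
avg_prompt_tokens = 500      # average tokens in a request prompt
avg_completion_tokens = 300  # average tokens in a response

# Both prompt and completion tokens count against the TPM quota.
tokens_per_request = avg_prompt_tokens + avg_completion_tokens
max_requests_per_minute = tpm_quota // tokens_per_request

print(f"~{max_requests_per_minute} requests/minute fit in the TPM quota")
# -> ~37 requests/minute fit in the TPM quota
```

If your application needs more throughput than that, you would raise the TPM allocation or pick a different deployment type.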

Once the deployment is successfully completed, we can see the API key, endpoint, and sample usage on the deployment page.

the authentication details of the deployed model to use the llm on the project

This is how we deploy a model in an Azure AI Project and use it in our AI applications.

📦
Accessing the model using an API is similar to how you access Azure OpenAI model endpoints. You can refer to the Azure OpenAI blog section for more details.
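As a minimal sketch of that, the snippet below builds (but does not send) a chat-completions request against a deployed model, using only the Python standard library. The endpoint, deployment name, API version, and key are placeholder values; the real ones, and the exact URL format, come from the sample usage shown on your deployment page.

```python
import json
import urllib.request

# Placeholder values -- substitute the endpoint, deployment name, and
# API key shown on your own deployment page.
ENDPOINT = "https://my-foundry-resource.openai.azure.com"
DEPLOYMENT = "gpt-4o-mini"
API_VERSION = "2024-06-01"
API_KEY = "<your-api-key>"

def build_chat_request(messages):
    """Build (but do not send) a chat-completions request for the deployment."""
    url = (
        f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
        f"/chat/completions?api-version={API_VERSION}"
    )
    body = json.dumps({"messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json", "api-key": API_KEY},
        method="POST",
    )

req = build_chat_request([{"role": "user", "content": "Hello!"}])
# urllib.request.urlopen(req) would send the call once real credentials are set
```

In practice you would more likely use the official `openai` or Azure SDK client libraries, but the raw request above shows what they do under the hood.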
💰
There is no combined cost for Azure AI Foundry; instead, there is a dedicated cost for each service you use, plus the tokens and computational power you consume.
To see the pricing details, refer to the official documentation.
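As an illustration of per-token billing, here is a rough monthly estimate using entirely hypothetical per-token prices; the real rates depend on the model, deployment type, and region, so check the pricing page for actual numbers.

```python
# All prices here are hypothetical placeholders -- look up the real
# per-token rates for your model and region on the Azure pricing page.
price_per_1k_input = 0.00015   # USD per 1,000 input tokens (hypothetical)
price_per_1k_output = 0.0006   # USD per 1,000 output tokens (hypothetical)

requests_per_day = 10_000
input_tokens_per_request = 500
output_tokens_per_request = 300

# Input and output tokens are billed at different rates.
daily_cost = (
    requests_per_day * input_tokens_per_request / 1000 * price_per_1k_input
    + requests_per_day * output_tokens_per_request / 1000 * price_per_1k_output
)
monthly_cost = daily_cost * 30
print(f"Estimated token cost: ${monthly_cost:.2f}/month")
```

Remember that compute, storage, and any other Azure services connected to the hub are billed separately on top of token usage.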

In the next section, we can see some of the key features of the AI Foundry.

Features of the Azure AI Foundry

The following are some of the key features of the Azure AI Foundry service.

  1. You can fine-tune your own models or use Azure’s pre-built models with your own datasets.
  2. Models can be deployed in virtual machines or as serverless endpoints.
  3. The Agent Service helps automate and complete different parts of your AI workflows.
  4. Built-in playgrounds let you test your models or apps directly without setting up any infrastructure.

Azure AI Foundry vs Azure OpenAI

Azure OpenAI is a service that gives you access to OpenAI’s models like GPT-4, GPT-3.5, DALL·E, and Whisper through Microsoft’s Azure platform. It is a way to use these models with the security and scalability of Azure’s cloud.

Choose Azure OpenAI if you specifically want to use OpenAI's models in your applications

Azure AI Foundry is a broader platform designed for building, deploying, and managing AI projects. It supports different AI models and frameworks, not just OpenAI’s.

It also includes tools for MLOps, so teams can handle the full AI development lifecycle. Think of it like Azure’s version of AWS SageMaker.

Go with Azure AI Foundry if you are building machine learning models from scratch, want to train them using your own data, manage several AI projects across different teams, or need a single place to control and manage all your models.

Conclusion

If you have made it this far, nice work!

We started with the basics of Azure AI Foundry, then covered how to set up the Hub, start a project, and deploy a large language model.

The goal was to help you see where Azure AI Foundry fits in the bigger AI/ML picture, especially compared to Azure OpenAI. If you’re just using OpenAI models, Azure OpenAI might be all you need. But if you’re building full-on AI solutions from start to finish, AI Foundry gives you way more room to work.

And this is just the start.

In the next few posts, we will dig into real-world use cases, fine-tuning, version control for models, managing costs, monitoring tools, and tips for working with AI Foundry as a team.

Thanks for following along. If you decide to try out AI Foundry, I’d love to hear how it goes.

Let’s build some cool stuff.

About the author
Bibin Wilson

Bibin Wilson (authored over 300 tech tutorials) is a cloud and DevOps consultant with over 12 years of IT experience. He has extensive hands-on experience with public cloud platforms and Kubernetes.
