
Human-in-the-Loop (HITL) model and two main scenarios

The HITL model bridges the gap between AI capabilities and human expertise, ensuring more robust and reliable outcomes across various domains.

Author: AssemHijazi

The future of AI lies in hybrid intelligence, where we increasingly involve humans alongside AI, especially generative AI.

When critical decisions need to be made or tasks performed, human engagement becomes crucial. Generative AI doesn’t merely generate content; it can also execute actions and tasks, including critical ones, based on its underlying large language models (LLMs).

Relying solely on AI and GenAI for critical decisions and important tasks isn’t sufficient; we must actively engage humans to assist and even guide AI actions.

In critical domains like medicine, finance, and smart cities, human involvement in decision cycles, workflows, and scenarios is essential. The human-in-the-loop model plays a pivotal role.


We can divide the HITL model into two main scenarios of human engagement:

1. Human Engagement in LLM Training Scenario:

Humans actively participate during the training phase of large language models (LLMs).

Process:

  • Data Labeling: Humans label and annotate training data.
  • Model Tuning: They adjust hyperparameters, select architectures, and fine-tune the LLM.
  • Feedback Loop: Humans evaluate model performance and guide improvements.

Example: An LLM for medical diagnosis, where doctors provide labeled data and validate predictions.
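As a rough illustration of this training-time loop, the sketch below wires the three steps together in plain Python. All names in it (LabeledExample, collect_human_labels, fine_tune, human_feedback_loop) are hypothetical placeholders standing in for a real annotation tool, training framework, and evaluation harness, not any specific library.

```python
# Illustrative sketch only: training-time HITL with three human touchpoints.
from dataclasses import dataclass

@dataclass
class LabeledExample:
    text: str       # raw input, e.g. a clinical note or an image reference
    label: str      # label assigned by a human expert
    reviewer: str   # who validated it, kept for auditability

def collect_human_labels(raw_items, reviewer):
    """Step 1 - Data Labeling: humans annotate the raw training data."""
    labeled = []
    for item in raw_items:
        # In practice this would be an annotation UI; input() simulates it here.
        label = input(f"Label for '{item}': ")
        labeled.append(LabeledExample(text=item, label=label, reviewer=reviewer))
    return labeled

def fine_tune(model_state, dataset, learning_rate=1e-5):
    """Step 2 - Model Tuning: placeholder for an actual fine-tuning run."""
    # A real implementation would call a training framework here.
    return {**model_state, "trained_on": len(dataset), "lr": learning_rate}

def human_feedback_loop(model_state, dataset, rounds=3):
    """Step 3 - Feedback Loop: experts review results and guide improvements."""
    for round_no in range(rounds):
        model_state = fine_tune(model_state, dataset)
        verdict = input(f"Round {round_no}: accept model quality? (y/n) ")
        if verdict.lower().startswith("y"):
            break  # the human, not the loop counter, decides when to stop
    return model_state

if __name__ == "__main__":
    data = collect_human_labels(["X-ray #101", "X-ray #102"], reviewer="dr_smith")
    print(human_feedback_loop({"name": "demo-model"}, data))
```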

2. Human Engagement with Generative AI through Prompt Engineering Scenario:

Humans interact with the LLM during inference (when generating responses).

Process:

  • Prompt Design: Humans create prompts or queries.
  • Model Response: The LLM generates output based on the prompt and structures the interaction for human engagement.
  • Human Review: Users review the structured output and refine it before it is acted upon.

Example: The Microsoft Copilot Studio workflow exemplifies an effective approach to building the second scenario of the HITL model.
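A minimal sketch of this inference-time pattern, assuming nothing about any specific vendor tooling: the model proposes, and a human approves, edits, or rejects the proposal before anything is executed. call_llm and execute_action are hypothetical stubs, not a real API.

```python
# Illustrative sketch only: a human approval gate in front of a critical action.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call (any provider)."""
    return f"[draft response to: {prompt}]"

def execute_action(action: str) -> None:
    """Hypothetical stand-in for the critical task the AI would perform."""
    print(f"Executing: {action}")

def hitl_inference(prompt: str) -> None:
    draft = call_llm(prompt)                      # Prompt Design -> Model Response
    print("Model proposal:\n", draft)
    decision = input("Approve (a), edit (e), or reject (r)? ").lower()
    if decision.startswith("a"):
        execute_action(draft)                     # human approved as-is
    elif decision.startswith("e"):
        execute_action(input("Revised action: ")) # human overrides the draft
    else:
        print("Rejected; nothing was executed.")  # human blocks the action

if __name__ == "__main__":
    hitl_inference("Draft a maintenance plan for the smart-city sensor network")
```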

Here is a comparison of the two scenarios of the HITL model:

[Figure: The HITL model's two scenarios]

Human-centric AI

HITL also provides human-centric AI across all possible domains of AI implementation.

Examples:

Scenario 1: Human Engagement in LLM Training

In the healthcare domain, let’s consider the development of an AI system for diagnosing diseases.

  • Data Labeling: Doctors collaborate with data scientists to label medical images or patient data, such as identifying the presence of a tumor in X-rays.
  • Model Tuning: After the model makes predictions, doctors review its performance and adjust the model by providing feedback, which fine-tunes the LLM for more accurate results.
  • Feedback Loop: Continuous human input ensures the AI system remains accurate, especially in edge cases or rare diseases.

Example: A medical LLM that predicts cancer diagnoses based on a dataset labeled and validated by oncologists. Doctors provide feedback on the model’s accuracy, further refining its performance for real-world use.
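The feedback loop in this example can be sketched as a simple review queue: the clinician either accepts the model’s prediction or overrides it, and every override becomes labeled data for the next fine-tuning round. predict and the case IDs below are hypothetical placeholders, not a real diagnostic system.

```python
# Illustrative sketch only: clinician review feeding corrections back to training.

def predict(case_id: str) -> str:
    """Hypothetical diagnostic model; always returns a tentative finding."""
    return "benign"

def review_predictions(case_ids):
    corrections = []  # these become labeled data for the next fine-tuning round
    for case_id in case_ids:
        prediction = predict(case_id)
        answer = input(f"{case_id}: model says '{prediction}'. "
                       "Correct label (press Enter to accept): ")
        final_label = answer or prediction
        if final_label != prediction:
            corrections.append((case_id, final_label))  # expert overrode the model
    return corrections

if __name__ == "__main__":
    queued = review_predictions(["case-001", "case-002"])
    print(f"{len(queued)} corrected case(s) queued for the next fine-tuning round.")
```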


Scenario 2: Human Engagement with Generative AI through Prompt Engineering

In the financial services sector, AI-driven systems assist in creating automated reports or generating investment strategies.

  • Prompt Design: Financial analysts provide specific prompts to the LLM, like “Generate a 5-year investment strategy for a mid-sized tech company.”
  • Model Response: The LLM generates a draft strategy based on the input, drawing on the financial knowledge in its training data.
  • Human Review: Analysts review the generated strategies and make necessary adjustments before finalizing the reports or strategies, ensuring compliance with regulations or market conditions.

Example: Microsoft Copilot Studio enables users to interact with the LLM using prompts to build workflows. A financial analyst applies this scenario by prompting the system to generate a risk analysis report, which is then reviewed and customized by the human expert.
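The analyst workflow can be sketched the same way, without assuming anything about Copilot Studio’s actual API: the LLM drafts the report, and a named human reviewer edits or signs off on it before it is finalized, leaving an audit trail. generate_report and analyst_review are hypothetical placeholders.

```python
# Illustrative sketch only: draft -> human review -> signed-off final report.
from datetime import datetime, timezone

def generate_report(prompt: str) -> str:
    """Hypothetical stand-in for the LLM drafting step."""
    return f"Draft risk analysis based on: {prompt}"

def analyst_review(draft: str, analyst: str) -> dict:
    print("---- DRAFT ----")
    print(draft)
    edited = input("Paste revised text (press Enter to keep the draft): ") or draft
    return {
        "final_text": edited,
        "reviewed_by": analyst,  # explicit human sign-off, useful for compliance
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
        "was_edited": edited != draft,
    }

if __name__ == "__main__":
    draft = generate_report("5-year investment strategy for a mid-sized tech company")
    print("Finalized:", analyst_review(draft, analyst="j.doe"))
```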
