# Step 2: Evaluation

In this step you configure the human assessment. You can:

* define the search task the jurors will work on,
* create the assessment questionnaire,
* set up domain filtering if necessary,
* limit the workload per participant and restrict the range of results to be assessed,
* choose which item types the jurors will assess, such as organic results, AI Overviews, or complete SERPs,
* decide whether the URLs of the search results and the sources of AI responses are visible to the jurors,
* optionally allow participants to assess failed downloads.

Finally, you can copy the invitation link here and share it with your participants.
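Conceptually, these settings form one evaluation configuration. The following is a minimal sketch of how such a configuration could be represented as data; all field names are illustrative assumptions, not RAT's actual data model:

```python
# Hypothetical sketch of an evaluation configuration.
# Every key below is an assumption for illustration only.
evaluation_config = {
    "task": "Find information on the health effects of intermittent fasting.",
    "item_types": ["organic_result", "ai_overview", "serp"],  # what jurors assess
    "show_urls": True,                    # show result URLs to jurors
    "show_ai_sources": False,             # hide sources of AI responses
    "include_failed_downloads": False,    # skip failed downloads
    "max_items_per_juror": 50,            # workload limit per participant
    "result_range": (1, 10),              # only assess results ranked 1-10
    "excluded_domains": ["example.com"],  # domain filtering
}
```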

<figure><img src="/files/FW8XZRuH7loAMHxnaCcT" alt=""><figcaption></figcaption></figure>

After clicking “Add” in the questionnaire block, you can freely define the questions the jurors will answer while a search result is displayed. You can choose from the following question types (a combined sketch follows the examples below):

<details>

<summary><strong>Short Text</strong></summary>

**Example:**

Question: “What is the first thing that comes to mind when you see this search result?”

</details>

<details>

<summary><strong>Long Text</strong></summary>

**Example:**

Question: “Please describe why this search result is relevant to you or not.”

</details>

<details>

<summary><strong>Rating Scale</strong></summary>

**Example:**

Question: “How relevant is this search result to you?”\
Answer: “Not relevant”, “Slightly relevant”, “Quite relevant”, “Very relevant”

</details>

<details>

<summary><strong>Multiple Choice</strong></summary>

**Example:**

Question: “Which things come to mind when you see this search result?”\
Answer: “Relevant”, “Well designed”, “User-friendly”, “Credible”

</details>

<details>

<summary><strong>Single Choice</strong></summary>

**Example:**

Question: “Is this search result relevant to you?”\
Answer: “Yes” or “No”

</details>

<details>

<summary><strong>Sliding Scale</strong></summary>

**Example:**

Question: “How relevant is this search result to you?”\
Answer: “Not relevant (=0)” to “Very relevant (=10)”

</details>
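As a rough illustration, a questionnaire combining all six question types could be modeled as data along the following lines. The structure and type names are assumptions for illustration, not RAT's actual format:

```python
# Hypothetical questionnaire definition covering the six question types;
# the "type" identifiers and field layout are illustrative assumptions only.
questionnaire = [
    {"type": "short_text",
     "question": "What is the first thing that comes to mind when you see this search result?"},
    {"type": "long_text",
     "question": "Please describe why this search result is relevant to you or not."},
    {"type": "rating_scale",
     "question": "How relevant is this search result to you?",
     "options": ["Not relevant", "Slightly relevant", "Quite relevant", "Very relevant"]},
    {"type": "multiple_choice",
     "question": "Which things come to mind when you see this search result?",
     "options": ["Relevant", "Well designed", "User-friendly", "Credible"]},
    {"type": "single_choice",
     "question": "Is this search result relevant to you?",
     "options": ["Yes", "No"]},
    {"type": "sliding_scale",
     "question": "How relevant is this search result to you?",
     "min": 0, "max": 10,
     "min_label": "Not relevant", "max_label": "Very relevant"},
]
```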

<figure><img src="/files/2Sm77qXIxwJgYprmb9rI" alt=""><figcaption></figcaption></figure>

After starting the evaluation, jurors are presented with the items collected by RAT in random order: screenshots of individual search results, the texts of AI-generated answers, and search engine result pages (SERPs), shown on the right. The task and questions you defined appear on the left.
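The random ordering can be thought of as a simple shuffle of the collected items; here is a minimal sketch under that assumption (RAT's actual randomization logic is not documented here):

```python
import random

# Minimal sketch of randomizing item order per juror; an illustrative
# assumption, not RAT's actual implementation.
def order_items_for_juror(items: list, juror_id: str) -> list:
    rng = random.Random(juror_id)  # seed per juror for a reproducible order
    shuffled = items.copy()
    rng.shuffle(shuffled)
    return shuffled
```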

**Assessment interface with search result:**

<figure><img src="/files/VSl2suatPVxqMaCawOYT" alt=""><figcaption></figcaption></figure>

**Assessment interface with search engine result page (SERP):**

<figure><img src="/files/M2S34EeGVRrQmI5ChKNP" alt=""><figcaption></figcaption></figure>

**Assessment interface with AI response:**

<figure><img src="/files/jlsD7FRxVQCoFzeZnRHY" alt=""><figcaption></figcaption></figure>

