Merged
9 changes: 1 addition & 8 deletions .dockerignore
Original file line number Diff line number Diff line change
@@ -147,11 +147,4 @@ data/
reports/

# Synthetic data conversations
src/agents/utils/example_inputs/
src/agents/utils/synthetic_conversations/
src/agents/utils/synthetic_conversation_generation.py
src/agents/utils/testbench_prompts.py
src/agents/utils/langgraph_viz.py

# development agents
src/agents/student_agent/
src/agents/utils/example_inputs/
1 change: 1 addition & 0 deletions .github/workflows/dev.yml
@@ -50,6 +50,7 @@ jobs:
if: always()
run: |
source .venv/bin/activate
export PYTHONPATH=$PYTHONPATH:.
pytest --junit-xml=./reports/pytest.xml --tb=auto -v

- name: Upload test results
1 change: 1 addition & 0 deletions .github/workflows/main.yml
@@ -50,6 +50,7 @@ jobs:
if: always()
run: |
source .venv/bin/activate
export PYTHONPATH=$PYTHONPATH:.
pytest --junit-xml=./reports/pytest.xml --tb=auto -v

- name: Upload test results
1 change: 1 addition & 0 deletions .gitignore
@@ -50,6 +50,7 @@ coverage.xml
*.py,cover
.hypothesis/
.pytest_cache/
reports/

# Translations
*.mo
2 changes: 1 addition & 1 deletion Dockerfile
@@ -25,7 +25,7 @@ COPY src ./src

COPY index.py .

COPY index_test.py .
COPY tests ./tests

# Set the Lambda function handler
CMD ["index.handler"]
55 changes: 34 additions & 21 deletions README.md
@@ -43,11 +43,11 @@ In GitHub, choose Use this template > Create a new repository in the repository

Choose the owner, and pick a name for the new repository.

> [!IMPORTANT] If you want to deploy the evaluation function to Lambda Feedback, make sure to choose the Lambda Feedback organization as the owner.
> [!IMPORTANT] If you want to deploy the chat function to Lambda Feedback, make sure to choose the `Lambda Feedback` organization as the owner.

Set the visibility to Public or Private.
Set the visibility to `Public` or `Private`.

> [!IMPORTANT] If you want to use GitHub deployment protection rules, make sure to set the visibility to Public.
> [!IMPORTANT] If you want to use GitHub deployment protection rules, make sure to set the visibility to `Public`.

Click on Create repository.

@@ -78,9 +78,9 @@ Also, don't forget to update or delete the Quickstart chapter from the `README.m

## Development

You can create your own invocation to your own agents hosted anywhere. Copy or update the `base_agent` from `src/agents/` and edit it to match your LLM agent requirements. Import the new invocation in the `module.py` file.
You can create your own invocation for your own agents hosted anywhere. Copy or update `agent.py` from `src/agent/` and edit it to match your LLM agent's requirements. Import the new invocation in the `module.py` file.

You agent can be based on an LLM hosted anywhere, you have available currently OpenAI, AzureOpenAI, and Ollama models but you can introduce your own API call in the `src/agents/llm_factory.py`.
Your agent can be based on an LLM hosted anywhere. OpenAI, AzureOpenAI, and Ollama models are currently available, but you can introduce your own API call in `src/agent/utils/llm_factory.py`.

### Prerequisites

@@ -90,23 +90,37 @@ You agent can be based on an LLM hosted anywhere, you have available currently O
### Repository Structure

```bash
.github/workflows/
dev.yml # deploys the DEV function to Lambda Feedback
main.yml # deploys the STAGING function to Lambda Feedback
test-report.yml # gathers Pytest Report of function tests

docs/ # docs for devs and users

src/module.py # chat_module function implementation
src/module_test.py # chat_module function tests
src/agents/ # find all agents developed for the chat functionality
src/agents/utils/test_prompts.py # allows testing of any LLM agent on a couple of example inputs containing Lambda Feedback Questions and synthetic student conversations
.
├── .github/workflows/
│ ├── dev.yml # deploys the DEV function to Lambda Feedback
│ ├── main.yml # deploys the STAGING and PROD functions to Lambda Feedback
│ └── test-report.yml # gathers Pytest Report of function tests
├── docs/ # docs for devs and users
├── src/
│ ├── agent/
│ │ ├── utils/ # utils for the agent, including the llm_factory
│ │ ├── agent.py # the agent logic
│ │ └── prompts.py # the system prompts defining the behaviour of the chatbot
│ └── module.py
└── tests/ # contains all tests for the chat function
├── manual_agent_requests.py # allows testing of the docker container through API requests
├── manual_agent_run.py # allows testing of any LLM agent on a couple of example inputs
├── test_index.py # pytests
└── test_module.py # pytests
```


## Testing the Chat Function

To test your function, you can either call the code directly through a python script. Or you can build the respective chat function docker container locally and call it through an API request. Below you can find details on those processes.
To test your function, you can run the unit tests, call the code directly through a Python script, or build the respective chat function Docker container locally and call it through an API request. Below you can find details on each of these processes.

### Run Unit Tests

You can run the unit tests using `pytest`.

```bash
pytest
```

### Run the Chat Script

@@ -116,9 +130,9 @@ You can run the Python function itself. Make sure to have a main function in eit
python src/module.py
```

You can also use the `testbench_agents.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
You can also use the `manual_agent_run.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
```bash
python src/agents/utils/testbench_agents.py
python tests/manual_agent_run.py
```

### Calling the Docker Image Locally
@@ -156,7 +170,7 @@ curl --location 'http://localhost:8080/2015-03-31/functions/function/invocations'
#### Call Docker Container
##### A. Call Docker with Python Requests

In the `src/agents/utils` folder you can find the `requests_testscript.py` script that calls the POST URL of the running docker container. It reads any kind of input files with the expected schema. You can use this to test your curl calls of the chatbot.
In the `tests/` folder you can find the `manual_agent_requests.py` script, which calls the POST URL of the running Docker container. It reads any input file that matches the expected schema. You can use it to test your curl calls to the chatbot.

##### B. Call Docker Container through API request

@@ -183,7 +197,6 @@ Body with optional Params:
"conversational_style":" ",
"question_response_details": "",
"include_test_data": true,
"agent_type": {agent_name}
}
}
```
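The request above can be sketched as a small stdlib-only script, similar in spirit to `manual_agent_requests.py`. The endpoint URL comes from the curl example earlier; the top-level `message` field name and its value are assumptions (the full schema is truncated in this diff view), and only the optional params visible in the excerpt are filled in:

```python
import json
import urllib.request

# Local Lambda runtime endpoint from the curl example earlier in this README
URL = "http://localhost:8080/2015-03-31/functions/function/invocations"

def build_payload(message: str) -> dict:
    """Build a request body. Only the optional params visible in the
    excerpt above are included; the full schema is truncated here."""
    return {
        "message": message,  # assumed field name for the student's message
        "params": {
            "conversational_style": " ",
            "question_response_details": "",
            "include_test_data": True,
        },
    }

def invoke(message: str, timeout: float = 30.0) -> dict:
    """POST the payload to the running container and return its JSON reply."""
    data = json.dumps(build_payload(message)).encode("utf-8")
    request = urllib.request.Request(
        URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=timeout) as response:
        return json.load(response)
```

With the container running, `invoke("...")` returns the handler's JSON response; without it, the call raises a `URLError`.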
17 changes: 12 additions & 5 deletions docs/dev.md
@@ -12,7 +12,15 @@

## Testing the Chat Function

To test your function, you can either call the code directly through a python script. Or you can build the respective chat function docker container locally and call it through an API request. Below you can find details on those processes.
To test your function, you can run the unit tests, call the code directly through a Python script, or build the respective chat function Docker container locally and call it through an API request. Below you can find details on each of these processes.

### Run Unit Tests

You can run the unit tests using `pytest`.

```bash
pytest
```

### Run the Chat Script

@@ -22,9 +30,9 @@ You can run the Python function itself. Make sure to have a main function in eit
python src/module.py
```

You can also use the `testbench_agents.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
You can also use the `manual_agent_run.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
```bash
python src/agents/utils/testbench_agents.py
python tests/manual_agent_run.py
```

### Calling the Docker Image Locally
@@ -62,7 +70,7 @@ curl --location 'http://localhost:8080/2015-03-31/functions/function/invocations'
#### Call Docker Container
##### A. Call Docker with Python Requests

In the `src/agents/utils` folder you can find the `requests_testscript.py` script that calls the POST URL of the running docker container. It reads any kind of input files with the expected schema. You can use this to test your curl calls of the chatbot.
In the `tests/` folder you can find the `manual_agent_requests.py` script, which calls the POST URL of the running Docker container. It reads any input file that matches the expected schema. You can use it to test your curl calls to the chatbot.

##### B. Call Docker Container through API request

@@ -89,7 +97,6 @@ Body with optional Params:
"conversational_style":" ",
"question_response_details": "",
"include_test_data": true,
"agent_type": {agent_name}
}
}
```
8 changes: 2 additions & 6 deletions index.py
@@ -1,10 +1,6 @@
import json
try:
from .src.module import chat_module
from .src.agents.utils.types import JsonType
except ImportError:
from src.module import chat_module
from src.agents.utils.types import JsonType
from src.module import chat_module
from src.agent.utils.types import JsonType

def handler(event: JsonType, context):
"""
Empty file removed src/__init__.py
Empty file.
20 changes: 7 additions & 13 deletions src/agents/base_agent/base_agent.py → src/agent/agent.py
@@ -1,13 +1,7 @@
try:
from ..llm_factory import OpenAILLMs, GoogleAILLMs
from .base_prompts import \
role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt, summary_system_prompt
from ..utils.types import InvokeAgentResponseType
except ImportError:
from src.agents.llm_factory import OpenAILLMs, GoogleAILLMs
from src.agents.base_agent.base_prompts import \
role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt, summary_system_prompt
from src.agents.utils.types import InvokeAgentResponseType
from src.agent.utils.llm_factory import OpenAILLMs, GoogleAILLMs
from src.agent.prompts import \
role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt, summary_system_prompt
from src.agent.utils.types import InvokeAgentResponseType

from langgraph.graph import StateGraph, START, END
from langchain_core.messages import SystemMessage, RemoveMessage, HumanMessage, AIMessage
@@ -62,7 +56,7 @@ def call_model(self, state: State, config: RunnableConfig) -> str:
system_message = self.role_prompt

# Adding external student progress and question context details from data queries
question_response_details = config["configurable"].get("question_response_details", "")
question_response_details = config.get("configurable", {}).get("question_response_details", "")
if question_response_details:
system_message += f"## Known Question Materials: {question_response_details} \n\n"

@@ -98,8 +92,8 @@ def summarize_conversation(self, state: State, config: RunnableConfig) -> dict:
"""Summarize the conversation."""

summary = state.get("summary", "")
previous_summary = config["configurable"].get("summary", "")
previous_conversationalStyle = config["configurable"].get("conversational_style", "")
previous_summary = config.get("configurable", {}).get("summary", "")
previous_conversationalStyle = config.get("configurable", {}).get("conversational_style", "")
if previous_summary:
summary = previous_summary

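The hunks above replace direct `config["configurable"][...]` indexing with chained `.get()` calls, so a missing `configurable` key no longer raises `KeyError`. A minimal sketch of the pattern (the helper name is illustrative, not from the codebase):

```python
def get_setting(config: dict, key: str, default: str = "") -> str:
    """Return config["configurable"][key], falling back to `default` when
    either the "configurable" dict or the key itself is missing."""
    return config.get("configurable", {}).get(key, default)

full = {"configurable": {"summary": "Covered derivatives last session."}}
empty = {}  # no "configurable" key at all

assert get_setting(full, "summary") == "Covered derivatives last session."
assert get_setting(empty, "summary") == ""  # no KeyError, just the default
```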
61 changes: 38 additions & 23 deletions src/agents/base_agent/base_prompts.py → src/agent/prompts.py
@@ -1,8 +1,43 @@
# NOTE:
# PROMPTS generated with the help of ChatGPT GPT-4o Nov 2024

#
# NOTE: Default prompts generated with the help of ChatGPT GPT-4o Nov 2024
#
# Description of the prompts:
#
# 1. role_prompt: Sets the overall role and behaviour of the chatbot.
#
# 2. summary_prompt: Used to generate a summary of the conversation.
# 2. update_summary_prompt: Used to update the conversation summary with new messages.
# 2. summary_system_prompt: Provides context for the chatbot based on the existing summary.
#
# 3. conv_pref_prompt: Used to analyze and extract the student's conversational style and learning preferences.
# 3. update_conv_pref_prompt: Used to update the conversational style based on new interactions.
#

# 1. Role Prompt
role_prompt = "You are an excellent tutor that aims to provide clear and concise explanations to students. I am the student. Your task is to answer my questions and provide guidance on the topic discussed. Ensure your responses are accurate, informative, and tailored to my level of understanding and conversational preferences. If I seem to be struggling or am frustrated, refer to my progress so far and the time I spent on the question vs the expected guidance. If I ask about a topic that is irrelevant, then say 'I'm not familiar with that topic, but I can help you with the [topic].' You do not need to end your messages with a concluding statement.\n\n"

# 2. Summary Prompts
summary_guidelines = """Ensure the summary is:

Concise: Keep the summary brief while including all essential information.
Structured: Organize the summary into sections such as 'Topics Discussed' and 'Top 3 Key Detailed Ideas'.
Neutral and Accurate: Avoid adding interpretations or opinions; focus only on the content shared.
When summarizing: If the conversation is technical, highlight significant concepts, solutions, and terminology. If context involves problem-solving, detail the problem and the steps or solutions provided. If the user asks for creative input, briefly describe the ideas presented.
Last messages: Include the most recent 5 messages to provide context for the summary.

Provide the summary in a bulleted format for clarity. Avoid redundant details while preserving the core intent of the discussion."""

summary_prompt = f"""Summarize the conversation between a student and a tutor. Your summary should highlight the major topics discussed during the session, followed by a detailed recollection of the last five significant points or ideas. Ensure the summary flows smoothly to maintain the continuity of the discussion.

{summary_guidelines}"""

update_summary_prompt = f"""Update the summary by taking into account the new messages above.

{summary_guidelines}"""

summary_system_prompt = "You are continuing a tutoring session with the student. Background context: {summary}. Use this context to inform your understanding but do not explicitly restate, refer to, or incorporate the details directly in your responses unless the user brings them up. Respond naturally to the user's current input, assuming prior knowledge from the summary."

# 3. Conversational Preference Prompt
pref_guidelines = """**Guidelines:**
- Use concise, objective language.
- Note the student's educational goals, such as understanding foundational concepts, passing an exam, getting top marks, code implementation, hands-on practice, etc.
@@ -57,23 +92,3 @@

{pref_guidelines}
"""

summary_guidelines = """Ensure the summary is:

Concise: Keep the summary brief while including all essential information.
Structured: Organize the summary into sections such as 'Topics Discussed' and 'Top 3 Key Detailed Ideas'.
Neutral and Accurate: Avoid adding interpretations or opinions; focus only on the content shared.
When summarizing: If the conversation is technical, highlight significant concepts, solutions, and terminology. If context involves problem-solving, detail the problem and the steps or solutions provided. If the user asks for creative input, briefly describe the ideas presented.
Last messages: Include the most recent 5 messages to provide context for the summary.

Provide the summary in a bulleted format for clarity. Avoid redundant details while preserving the core intent of the discussion."""

summary_prompt = f"""Summarize the conversation between a student and a tutor. Your summary should highlight the major topics discussed during the session, followed by a detailed recollection of the last five significant points or ideas. Ensure the summary flows smoothly to maintain the continuity of the discussion.

{summary_guidelines}"""

update_summary_prompt = f"""Update the summary by taking into account the new messages above.

{summary_guidelines}"""

summary_system_prompt = "You are continuing a tutoring session with the student. Background context: {summary}. Use this context to inform your understanding but do not explicitly restate, refer to, or incorporate the details directly in your responses unless the user brings them up. Respond naturally to the user's current input, assuming prior knowledge from the summary."
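The `summary_system_prompt` above carries a `{summary}` placeholder; presumably the agent fills it with `str.format` before sending it as a system message. A minimal sketch, using a shortened stand-in for the template:

```python
# Shortened stand-in for the summary_system_prompt template defined above
summary_system_prompt = (
    "You are continuing a tutoring session with the student. "
    "Background context: {summary}."
)

# Fill the placeholder before passing the text to the LLM as a system message
system_message = summary_system_prompt.format(
    summary="Topics Discussed: limits and derivatives."
)

assert "limits and derivatives" in system_message
```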
File renamed without changes.
@@ -3,7 +3,7 @@
"""

from typing import List, Optional, Dict, Any, Union
from .prompt_context_templates import PromptFormatter
from src.agent.utils.prompt_context_templates import PromptFormatter

# Definitions questionSubmissionSummary type
class StudentLatestSubmission:
@@ -150,7 +150,7 @@ def parse_json_to_structured_prompt(
question_submission_summary: Optional[List[StudentWorkResponseArea]],
question_information: Optional[QuestionDetails],
question_access_information: Optional[QuestionAccessInformation]
) -> Optional[str]:
) -> str:
"""
Parse JSON data into a well-structured, LLM-friendly prompt.

@@ -322,7 +322,7 @@ def parse_json_to_prompt(
questionSubmissionSummary: Optional[List[StudentWorkResponseArea]],
questionInformation: Optional[QuestionDetails],
questionAccessInformation: Optional[QuestionAccessInformation]
) -> Optional[str]:
) -> str:
"""
Legacy wrapper for backward compatibility.
Recommended to use parse_json_to_structured_prompt for new code.
File renamed without changes.
Empty file removed src/agents/__init__.py
Empty file.