diff --git a/rag_and_distilled_model/Apollo11_rag&distilled.ipynb b/rag_and_distilled_model/Apollo11_rag&distilled.ipynb new file mode 100644 index 0000000..4da0f4e --- /dev/null +++ b/rag_and_distilled_model/Apollo11_rag&distilled.ipynb @@ -0,0 +1,440 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "269fd429", + "metadata": {}, + "source": [ + "# 1. Install required packages\n", + "\n", + "We install all the dependencies needed for building a\n", + "Retrieval-Augmented Generation (RAG) pipeline.\n", + "These include LangChain components, Hugging Face models,\n", + "ChromaDB for vector storage, and PyTorch for GPU acceleration." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b52616a0", + "metadata": {}, + "outputs": [], + "source": [ + "%pip install -qq langchain langchain-community langchain-core langchain-text-splitters langchain-huggingface sentence-transformers chromadb transformers torch accelerate unstructured" + ] + }, + { + "cell_type": "markdown", + "id": "ed814cfe", + "metadata": {}, + "source": [ + "# 2. Import libraries and set configuration\n", + "\n", + "Here we import the necessary modules and define paths, constants,\n", + "and model settings.\n", + "We also suppress warnings to keep the notebook output clean." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "03e12116", + "metadata": {}, + "outputs": [], + "source": [ + "from pathlib import Path\n", + "import json\n", + "from langchain_text_splitters import RecursiveCharacterTextSplitter\n", + "from langchain_community.vectorstores import Chroma\n", + "from langchain_huggingface import HuggingFaceEmbeddings, HuggingFacePipeline\n", + "from langchain_core.prompts import PromptTemplate\n", + "from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline\n", + "import torch\n", + "import warnings\n", + "\n", + "warnings.filterwarnings(\"ignore\")\n", + "\n", + "PROMPTS_FILE = \"data/test_data.json\"\n", + "PERSIST_DIR = \"data/chroma_db\"\n", + "EMBED_MODEL = \"all-MiniLM-L6-v2\"\n", + "CHUNK_SIZE = 400\n", + "CHUNK_OVERLAP = 50\n", + "TOP_K_RESULTS = 5\n", + "RELEVANCE_THRESHOLD = 0.3\n", + "LLM_MODEL = \"MBZUAI/LaMini-Flan-T5-248M\"\n", + "MAX_NEW_TOKENS = 100\n", + "LLM_TEMPERATURE = 0.2\n", + "USE_GPU = torch.cuda.is_available()\n", + "\n", + "PROMPT_TEMPLATE = \"\"\"Answer the question about Apollo 11 based on the context below. If you cannot answer based on the context, say \"I don't have enough information to answer that.\"\n", + "\n", + "Context:\n", + "{context}\n", + "\n", + "Question: {question}\n", + "\n", + "Answer:\"\"\"" + ] + }, + { + "cell_type": "markdown", + "id": "ad6e30b4", + "metadata": {}, + "source": [ + "# 3. Initialize embedding model and text splitter\n", + "\n", + "The embedding model converts text into numeric vectors, while the text\n", + "splitter breaks long documents into manageable chunks for retrieval." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "50dc94c7", + "metadata": {}, + "outputs": [], + "source": [ + "embedder = HuggingFaceEmbeddings(model_name=EMBED_MODEL)\n", + "splitter = RecursiveCharacterTextSplitter(\n", + " chunk_size=CHUNK_SIZE, chunk_overlap=CHUNK_OVERLAP\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "f8d3954a", + "metadata": {}, + "source": [ + "# 4. Load the local language model\n", + "\n", + "We initialize a small, local LLM (LaMini-Flan-T5) that can run on CPU or GPU.\n", + "This model will later generate answers based on retrieved context."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "86c57183", + "metadata": {}, + "outputs": [], + "source": [ + "def initialize_local_llm():\n", + " tokenizer = AutoTokenizer.from_pretrained(LLM_MODEL)\n", + " model = AutoModelForSeq2SeqLM.from_pretrained(\n", + " LLM_MODEL,\n", + " torch_dtype=torch.float16 if USE_GPU else torch.float32,\n", + " device_map=\"auto\" if USE_GPU else None,\n", + " low_cpu_mem_usage=True,\n", + " )\n", + " pipe = pipeline(\n", + " \"text2text-generation\",\n", + " model=model,\n", + " tokenizer=tokenizer,\n", + " max_new_tokens=MAX_NEW_TOKENS,\n", + " # Greedy decoding (do_sample=False) keeps answers deterministic;\n", + " # temperature and top_p only take effect if sampling is enabled.\n", + " temperature=LLM_TEMPERATURE,\n", + " repetition_penalty=1.2,\n", + " do_sample=False,\n", + " top_p=0.95,\n", + " # With device_map=\"auto\" the model is already placed on the GPU and\n", + " # passing a device as well would conflict, so pin one only on CPU.\n", + " device=None if USE_GPU else -1,\n", + " )\n", + " return HuggingFacePipeline(pipeline=pipe)\n", + "\n", + "\n", + "llm = initialize_local_llm()" + ] + }, + { + "cell_type": "markdown", + "id": "59668d86", + "metadata": {}, + "source": [ + "# 5. Load documents from JSON\n", + "\n", + "We read the source text and metadata directly from a JSON file,\n", + "sanitize the metadata (Chroma accepts only scalar values), and split\n", + "the text into chunks." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "85f71f6a", + "metadata": {}, + "outputs": [], + "source": [ + "def load_documents_from_json(json_path=PROMPTS_FILE):\n", + " data_path = Path(json_path)\n", + " if not data_path.exists():\n", + " print(f\"JSON file not found at: {json_path}\")\n", + " return []\n", + "\n", + " with open(data_path, \"r\", encoding=\"utf-8\") as f:\n", + " data = json.load(f)\n", + "\n", + " source_text = data.get(\"source_text\", \"\")\n", + " metadata = data.get(\"metadata\", {})\n", + "\n", + " if not source_text.strip():\n", + " print(\"No source text found in JSON.\")\n", + " return []\n", + "\n", + " # Build the section label while \"sections\" is still a list, before the\n", + " # sanitizing loop below turns lists into strings.\n", + " sections = metadata.get(\"sections\", [\"General\"])\n", + " section_label = \", \".join(sections) if isinstance(sections, list) else str(sections)\n", + "\n", + " for k, v in metadata.items():\n", + " if isinstance(v, (list, dict)):\n", + " metadata[k] = str(v)\n", + "\n", + " split_docs = splitter.create_documents([source_text])\n", + "\n", + " for doc in split_docs:\n", + " doc.metadata = metadata.copy()\n", + " doc.metadata[\"topic\"] = \"Apollo 11\"\n", + " doc.metadata[\"section\"] = section_label\n", + "\n", + " print(f\"Loaded and split {len(split_docs)} chunks from JSON.\")\n", + " return split_docs" + ] + }, + { + "cell_type": "markdown", + "id": "854b765d", + "metadata": {}, + "source": [ + "# 6. Build Chroma vector store\n", + "\n", + "Here we embed the document chunks and save them into a local vector database (Chroma).\n", + "This enables fast similarity-based retrieval of relevant context later." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3335a68a", + "metadata": {}, + "outputs": [], + "source": [ + "def build_chroma_store(docs, persist_dir=PERSIST_DIR):\n", + " db = Chroma.from_documents(\n", + " documents=docs, embedding=embedder, persist_directory=persist_dir\n", + " )\n", + " # chromadb >= 0.4 persists automatically; persist() is kept for\n", + " # compatibility with older versions.\n", + " db.persist()\n", + " return db" + ] + }, + { + "cell_type": "markdown", + "id": "790b3261", + "metadata": {}, + "source": [ + "# 7. Call the document loader\n", + "\n", + "This cell loads the source document (text and metadata) from the JSON file\n", + "and splits it into smaller chunks for embedding." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c349a0b7", + "metadata": {}, + "outputs": [], + "source": [ + "documents = load_documents_from_json()" + ] + }, + { + "cell_type": "markdown", + "id": "05a12c5f", + "metadata": {}, + "source": [ + "# 8. Build the vector database\n",
+ "\n", + "This cell embeds the document chunks and builds the Chroma vector\n", + "database used for similarity search.\n", + "\n", + "Once the database is built, it is saved to disk,\n", + "so you only need to run this cell once, unless you change or add new data.\n", + "\n", + "Running it again adds the same chunks to the existing database (creating\n", + "duplicates), so delete the `data/chroma_db/` folder first if you want to\n", + "rebuild it from scratch." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3a744010", + "metadata": {}, + "outputs": [], + "source": [ + "db = build_chroma_store(documents)" + ] + }, + { + "cell_type": "markdown", + "id": "670d000f", + "metadata": {}, + "source": [ + "# 9. Define query and response generation\n", + "\n", + "These functions retrieve the most relevant text chunks and use the\n", + "LLM to answer a question." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9ab0bc4d", + "metadata": {}, + "outputs": [], + "source": [ + "def query_database(query_text, k=TOP_K_RESULTS, threshold=RELEVANCE_THRESHOLD):\n", + " results = db.similarity_search_with_relevance_scores(query_text, k=k)\n", + "\n", + " # Treat a low-scoring top hit as \"nothing relevant found\".\n", + " if len(results) == 0 or results[0][1] < threshold:\n", + " return []\n", + "\n", + " return results\n", + "\n", + "\n", + "def generate_rag_response(\n", + " query_text, k=TOP_K_RESULTS, threshold=RELEVANCE_THRESHOLD, verbose=False\n", + "):\n", + " # Reuse query_database so retrieval and thresholding live in one place.\n", + " results = query_database(query_text, k=k, threshold=threshold)\n", + "\n", + " if not results:\n", + " return {\n", + " \"answer\": \"No relevant information found.\",\n", + " \"sources\": [],\n", + " \"context\": \"\",\n", + " \"prompt\": \"\",\n", + " }\n", + "\n", + " context_text = \"\\n\\n---\\n\\n\".join([doc.page_content for doc, _score in results])\n", + " prompt_template = PromptTemplate.from_template(PROMPT_TEMPLATE)\n", + " prompt = prompt_template.format(context=context_text, question=query_text)\n", + "\n", + " if llm is None:\n", + " return {\n", + " \"answer\": \"LLM not initialized.\",\n", + " \"sources\": [],\n", + " \"context\": context_text,\n", + " \"prompt\": prompt,\n", + " }\n", + "\n", + " response_text = llm.invoke(prompt)\n", + " sources = [doc.metadata.get(\"source\", \"Unknown\") for doc, _score in results]\n", + "\n", + " if verbose:\n", + " print(f\"\\nQuery: {query_text}\")\n", + " print(f\"\\nAnswer: {response_text}\")\n", + " print(f\"\\nSources: {', '.join([Path(s).name for s in sources])}\")\n", + "\n", + " return {\n", + " \"answer\": response_text,\n", + " \"sources\": sources,\n", + " \"context\": context_text,\n", + " \"prompt\": prompt,\n", + " \"scores\": [score for _, score in results],\n", + " }\n", + "\n", + "\n", + "def ask(query_text):\n", + " result = generate_rag_response(query_text, verbose=True)\n", + " return result[\"answer\"]" + ] + }, + { + "cell_type": "markdown", + "id": "8eff73ec", + "metadata": {}, + "source": [ + "# 10. Load evaluation prompts\n", + "\n", + "We load a list of test questions from a JSON file.\n", + "Each question is labeled with a category (e.g., summarization, reasoning, or RAG)."
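+ , + "\n", + "For reference, a minimal sketch of the prompt file's shape (only the fields the notebook reads are shown; \"expected_answer\" appears only on the RAG prompts, and the values below are copied from data/test_data.json):\n", + "\n", + "```json\n", + "{\n", + "  \"prompts\": [\n", + "    {\n", + "      \"id\": 11,\n", + "      \"category\": \"rag\",\n", + "      \"difficulty\": \"easy\",\n", + "      \"prompt\": \"At what time (UTC) did Eagle land on the Moon?\",\n", + "      \"expected_answer\": \"20:17:40 UTC on July 20\"\n", + "    }\n", + "  ]\n", + "}\n", + "```"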
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b4377d19", + "metadata": {}, + "outputs": [], + "source": [ + "with open(PROMPTS_FILE, \"r\", encoding=\"utf-8\") as f:\n", + " prompts_data = json.load(f)\n", + "\n", + "prompts = prompts_data[\"prompts\"]\n", + "print(f\"Loaded {len(prompts)} evaluation prompts\")\n", + "print(\"\\nCategories:\")\n", + "for category in [\"summarization\", \"reasoning\", \"rag\"]:\n", + " count = len([p for p in prompts if p[\"category\"] == category])\n", + " print(f\" - {category.title()}: {count} prompts\")" + ] + }, + { + "cell_type": "markdown", + "id": "1001c44c", + "metadata": {}, + "source": [ + "# 11. Run automated evaluation\n", + "\n", + "For each question, we generate an answer using the RAG system and print\n", + "both the model’s response and the expected answer (if provided)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "fef553e0", + "metadata": {}, + "outputs": [], + "source": [ + "results = []\n", + "\n", + "for p in prompts:\n", + " question = p[\"prompt\"]\n", + " expected = p.get(\"expected_answer\", None)\n", + " print(f\"\\nTesting Prompt {p['id']}: {question}\")\n", + "\n", + " result = generate_rag_response(question, verbose=False)\n", + " answer = result[\"answer\"]\n", + "\n", + " results.append(\n", + " {\n", + " \"id\": p[\"id\"],\n", + " \"category\": p[\"category\"],\n", + " \"difficulty\": p[\"difficulty\"],\n", + " \"prompt\": question,\n", + " \"answer\": answer,\n", + " \"expected\": expected,\n", + " \"context_used\": len(result[\"context\"]),\n", + " \"top_sources\": result[\"sources\"],\n", + " }\n", + " )\n", + "\n", + " print(f\" Model Answer: {answer}\")\n", + " if expected:\n", + " print(f\" Expected: {expected}\")" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.10" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/rag_and_distilled_model/README.md b/rag_and_distilled_model/README.md new file mode 100644 index 0000000..cae7c0c --- /dev/null +++ b/rag_and_distilled_model/README.md @@ -0,0 +1,63 @@ +# Apollo 11 RAG & Distilled Model Evaluation + +This notebook demonstrates a **Retrieval-Augmented Generation (RAG)** system +using data from the **Apollo 11 mission**. +It uses **LangChain**, **Hugging Face**, and **ChromaDB** to load, embed, and +query textual data, then evaluates responses using a set of predefined prompts +from a JSON file. + +--- + +## Project Description + +The notebook `Apollo11_rag&distilled.ipynb` contains a structured RAG pipeline +with four main parts: + +1. **Data Loading** – Reads Apollo 11 mission text data from a JSON file. +2. **Database Creation** – Builds a local ChromaDB vector store for semantic +search. + > This step should be run **only once**, as it creates and saves the +database locally. +3. **Query & Generation** – Retrieves relevant context for a given question and +uses a model to generate an answer. +4. **Evaluation** – Tests the model’s responses using predefined data from the +JSON file.
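+ +For example, once the notebook's cells have been run in order, a question can +be answered end-to-end with the `ask` helper defined in the notebook: + +```python +# Sketch: `ask` retrieves the top-scoring chunks from Chroma and prompts +# the local LLM with them. +answer = ask("At what time (UTC) did Eagle land on the Moon?") +print(answer) +```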
+ +--- + +## Folder Structure + +```text +rag_and_distilled_model/ +├── Apollo11_rag&distilled.ipynb ← Main Jupyter Notebook +├── README.md ← Project documentation +└── data/ + ├── test_data.json ← Apollo 11 text dataset and evaluation prompts + └── chroma_db/ ← Auto-created vector database folder + (appears after the first run) +``` + +--- + +## Models Used + +* **LaMini-Flan-T5-248M**: A local LLM, distilled from Google's Flan-T5 and +optimized for lightweight text generation. Used here for reasoning, +summarization, and RAG response generation. +* **all-MiniLM-L6-v2**: A compact sentence-transformers embedding model used +to convert text chunks into numerical vector embeddings for semantic search +and retrieval. + +These two models keep the project lightweight, fully local, and suitable for GPU +or CPU execution. + +--- + +## Notes + +* The ChromaDB folder (`data/chroma_db/`) is automatically generated the first +time you run the database-building cell. +* You can safely delete it to rebuild the embeddings later. +* The notebook does not require an external `.txt` file — all content is inside +the JSON. +* The notebook automatically detects whether a GPU is available +(`torch.cuda.is_available()`). diff --git a/rag_and_distilled_model/data/test_data.json b/rag_and_distilled_model/data/test_data.json new file mode 100644 index 0000000..b4d9139 --- /dev/null +++ b/rag_and_distilled_model/data/test_data.json @@ -0,0 +1,139 @@ +{ + "metadata": { + "source": "Wikipedia - Apollo 11", + "url": "https://en.wikipedia.org/wiki/Apollo_11", + "permanent_link": "https://en.wikipedia.org/w/index.php?title=Apollo_11&oldid=1252473845", + "revision_id": "1252473845", + "sections": ["Lunar landing", "Lunar surface operations"], + "date_accessed": "2025-10-22", + "license": "CC BY-SA 3.0", + "note": "Excerpted passages from Wikipedia sections; individual sentences unchanged, some paragraphs omitted for length", + "word_count": "approximately 1,400 words", + "language": "English" + }, + + "source_text": "As the descent began, Armstrong and Aldrin found themselves passing landmarks on the surface two or three seconds early, and reported that they were \"long\"; they would land miles west of their target point. Eagle was traveling too fast. The problem could have been mascons—concentrations of high mass in a region or regions of the Moon's crust that contains a gravitational anomaly, potentially altering Eagle's trajectory.\n\nFive minutes into the descent burn, and 6,000 feet (1,800 m) above the surface of the Moon, the LM guidance computer (LGC) distracted the crew with the first of several unexpected 1201 and 1202 program alarms. Inside Mission Control Center, computer engineer Jack Garman told Guidance Officer Steve Bales it was safe to continue the descent, and this was relayed to the crew. The program alarms indicated \"executive overflows\", meaning the guidance computer could not complete all its tasks in real-time and had to postpone some of them. Margaret Hamilton, the Director of Apollo Flight Computer Programming at the MIT Charles Stark Draper Laboratory later recalled: \"To blame the computer for the Apollo 11 problems is like blaming the person who spots a fire and calls the fire department. Actually, the computer was programmed to do more than recognize error conditions. A complete set of recovery programs was incorporated into the software. The software's action, in this case, was to eliminate lower priority tasks and re-establish the more important ones.
The computer, rather than almost forcing an abort, prevented an abort. If the computer hadn't recognized this problem and taken recovery action, I doubt if Apollo 11 would have been the successful Moon landing it was.\"\n\nWhen Armstrong again looked outside, he saw that the computer's landing target was in a boulder-strewn area just north and east of a 300-foot-diameter (91 m) crater, so he took semi-automatic control. Throughout the descent, Aldrin called out navigation data to Armstrong, who was busy piloting Eagle. Now 107 feet (33 m) above the surface, Armstrong knew their propellant supply was dwindling and was determined to land at the first possible landing site.\n\nArmstrong found a clear patch of ground and maneuvered the spacecraft towards it. They were now 100 feet (30 m) from the surface, with only 90 seconds of propellant remaining. Lunar dust kicked up by the LM's engine began to impair his ability to determine the spacecraft's motion.\n\nA light informed Aldrin that at least one of the 67-inch (170 cm) probes hanging from Eagle's footpads had touched the surface and he said: \"Contact light!\" Three seconds later, Eagle landed and Armstrong shut the engine down. Aldrin immediately said \"Okay, engine stop.\"\n\nEagle landed at 20:17:40 UTC on Sunday July 20 with 216 pounds (98 kg) of usable fuel remaining. Information available to the crew and mission controllers during the landing showed the LM had enough fuel for another 25 seconds of powered flight before an abort without touchdown would have become unsafe, but post-mission analysis showed that the real figure was probably closer to 50 seconds.\n\nArmstrong acknowledged Aldrin's completion of the post-landing checklist with \"Engine arm is off\", before responding to the CAPCOM, Charles Duke, with the words, \"Houston, Tranquility Base here. The Eagle has landed.\" Duke expressed the relief at Mission Control: \"Roger, Twan—Tranquility, we copy you on the ground. You got a bunch of guys about to turn blue. We're breathing again. Thanks a lot.\"\n\nPreparations for Neil Armstrong and Buzz Aldrin to walk on the Moon began at 23:43 UTC. These took longer than expected; three and a half hours instead of two. Six hours and thirty-nine minutes after landing, Armstrong and Aldrin were ready to go outside, and Eagle was depressurized.\n\nEagle's hatch was opened at 02:39:33. Armstrong initially had some difficulties squeezing through the hatch with his portable life support system (PLSS). At 02:51 Armstrong began his descent to the lunar surface. Climbing down the nine-rung ladder, Armstrong pulled a D-ring to deploy the modular equipment stowage assembly (MESA) folded against Eagle's side and activate the TV camera.\n\nDespite some technical and weather difficulties, black and white images of the first lunar EVA were received and broadcast to at least 600 million people on Earth.\n\nAfter describing the surface dust as \"very fine-grained\" and \"almost like a powder\", at 02:56:15, six and a half hours after landing, Armstrong stepped off Eagle's landing pad and declared: \"That's one small step for [a] man, one giant leap for mankind.\"\n\nArmstrong intended to say \"That's one small step for a man\", but the word \"a\" is not audible in the transmission, and thus was not initially reported by most observers of the live broadcast. 
When later asked about his quote, Armstrong said he believed he said \"for a man\", and subsequent printed versions of the quote included the \"a\" in square brackets.\n\nAbout seven minutes after stepping onto the Moon's surface, Armstrong collected a contingency soil sample using a sample bag on a stick. Twelve minutes after the sample was collected, he removed the TV camera from the MESA and made a panoramic sweep, then mounted it on a tripod. Aldrin joined Armstrong on the surface. He described the view with the simple phrase: \"Magnificent desolation.\"\n\nArmstrong said moving in the lunar gravity, one-sixth of Earth's, was \"even perhaps easier than the simulations ... It's absolutely no trouble to walk around.\" Aldrin joined him on the surface and tested methods for moving around, including two-footed kangaroo hops. The PLSS backpack created a tendency to tip backward, but neither astronaut had serious problems maintaining balance. The fine soil was quite slippery.\n\nThe astronauts planted the Lunar Flag Assembly containing a flag of the United States on the lunar surface, in clear view of the TV camera. Aldrin remembered, \"Of all the jobs I had to do on the Moon the one I wanted to go the smoothest was the flag raising.\" But the astronauts struggled with the telescoping rod and could only insert the pole about 2 inches (5 cm) into the hard lunar surface. Before Aldrin could take a photo of Armstrong with the flag, President Richard Nixon spoke to them through a telephone-radio transmission, which Nixon called \"the most historic phone call ever made from the White House.\"\n\nThey deployed the EASEP, which included a Passive Seismic Experiment Package used to measure moonquakes and a retroreflector array used for the lunar laser ranging experiment. Then Armstrong walked 196 feet (60 m) from the LM to take photographs at the rim of Little West Crater while Aldrin collected two core samples. He used the geologist's hammer to pound in the tubes—the only time the hammer was used on Apollo 11—but was unable to penetrate more than 6 inches (15 cm) deep.\n\nThe astronauts then collected rock samples using scoops and tongs on extension handles. Many of the surface activities took longer than expected, so they had to stop documenting sample collection halfway through the allotted 34 minutes. Aldrin shoveled 6 kilograms (13 lb) of soil into the box of rocks to pack them in tightly. Two types of rocks were found in the geological samples: basalt and breccia.\n\nWhile on the surface, Armstrong uncovered a plaque mounted on the LM ladder, bearing two drawings of Earth, an inscription, and signatures of the astronauts and President Nixon. The inscription read: \"Here men from the planet Earth first set foot upon the Moon July 1969, A. D. We came in peace for all mankind.\"\n\nMission Control used a coded phrase to warn Armstrong his metabolic rates were high, and that he should slow down. As metabolic rates remained generally lower than expected for both astronauts throughout the walk, Mission Control granted the astronauts a 15-minute extension.\n\nAldrin entered Eagle first. With some difficulty the astronauts lifted film and two sample boxes containing 21.55 kilograms (47.5 lb) of lunar surface material to the LM hatch using a flat cable pulley device called the Lunar Equipment Conveyor (LEC). Armstrong then jumped onto the ladder's third rung, and climbed into the LM. 
After transferring to LM life support, the explorers lightened the ascent stage for the return to lunar orbit by tossing out their PLSS backpacks, lunar overshoes, an empty Hasselblad camera, and other equipment. The hatch was closed again at 05:11:13. They then pressurized the LM and settled down to sleep.", + + "prompts": [ + { + "id": 1, + "category": "summarization", + "difficulty": "easy", + "prompt": "Summarize the main events during the Apollo 11 lunar landing in 3 sentences.", + "type": "general_summary" + }, + { + "id": 2, + "category": "summarization", + "difficulty": "easy", + "prompt": "What were the main challenges Armstrong faced while landing the Eagle?", + "type": "problem_identification" + }, + { + "id": 3, + "category": "summarization", + "difficulty": "medium", + "prompt": "Describe the activities the astronauts performed on the lunar surface.", + "type": "activity_summary" + }, + { + "id": 4, + "category": "summarization", + "difficulty": "medium", + "prompt": "Explain what scientific equipment the astronauts deployed on the Moon.", + "type": "technical_summary" + }, + { + "id": 5, + "category": "summarization", + "difficulty": "hard", + "prompt": "Compare the planned timeline for the lunar surface operations with what actually happened.", + "type": "comparative_summary" + }, + { + "id": 6, + "category": "reasoning", + "difficulty": "easy", + "prompt": "Why did the computer alarms (1201 and 1202) occur during the descent?", + "type": "causal_reasoning" + }, + { + "id": 7, + "category": "reasoning", + "difficulty": "medium", + "prompt": "What would have happened if Armstrong had not taken manual control during the landing?", + "type": "hypothetical_reasoning" + }, + { + "id": 8, + "category": "reasoning", + "difficulty": "medium", + "prompt": "Why did Armstrong's famous quote become controversial?", + "type": "interpretive_reasoning" + }, + { + "id": 9, + "category": "reasoning", + "difficulty": "hard", + "prompt": "Analyze how the fuel situation during landing reflects the risk management challenges of the mission.", + "type": "analytical_reasoning" + }, + { + "id": 10, + "category": "reasoning", + "difficulty": "hard", + "prompt": "Based on the text, what does Margaret Hamilton's statement reveal about the Apollo Guidance Computer's design philosophy?", + "type": "deep_analysis" + }, + { + "id": 11, + "category": "rag", + "difficulty": "easy", + "prompt": "At what time (UTC) did Eagle land on the Moon?", + "type": "factual_retrieval", + "expected_answer": "20:17:40 UTC on July 20" + }, + { + "id": 12, + "category": "rag", + "difficulty": "easy", + "prompt": "How much lunar material did the astronauts collect?", + "type": "numerical_retrieval", + "expected_answer": "21.55 kilograms (47.5 lb)" + }, + { + "id": 13, + "category": "rag", + "difficulty": "medium", + "prompt": "What was Armstrong's famous first words when stepping on the Moon?", + "type": "quote_retrieval", + "expected_answer": "That's one small step for [a] man, one giant leap for mankind" + }, + { + "id": 14, + "category": "rag", + "difficulty": "medium", + "prompt": "What scientific instruments were included in the EASEP package?", + "type": "list_retrieval", + "expected_answer": "Passive Seismic Experiment Package and retroreflector array" + }, + { + "id": 15, + "category": "rag", + "difficulty": "hard", + "prompt": "How much usable fuel remained when Eagle landed, and how many seconds of powered flight did this represent?", + "type": "complex_retrieval", + "expected_answer": "216 pounds (98 kg); about 25 
seconds according to initial estimates, but post-mission analysis showed closer to 50 seconds" + } + ], + + "evaluation_notes": { + "testing_approach": "All 15 prompts should be tested across all models to ensure a fair comparison.", + "prompt_categories": { + "summarization": "Prompts 1-5 test condensing and extracting key information", + "reasoning": "Prompts 6-10 test analysis, inference, and logical connections", + "rag": "Prompts 11-15 test retrieval accuracy from source text" + }, + "note": "Some prompts may be more challenging for smaller models, but attempting all prompts provides complete evaluation data" + } +} diff --git a/test_dataset_apollo11/README.md b/test_dataset_apollo11/README.md index 6c4bd0b..30ae59b 100644 --- a/test_dataset_apollo11/README.md +++ b/test_dataset_apollo11/README.md @@ -13,9 +13,10 @@ retrieval-augmented generation capabilities. ## 📂 Dataset Contents -- **[README.md][readme]** - This file (overview and instructions) -- **[source_text.txt][source]** - Apollo 11 excerpted text (~1,400 words, plain text) -- **[test_prompts.md][prompts]** - 15 test prompts (readable format) +- **[README.md][readme]** - This file (overview and instructions) +- **[source_text.txt][source]** - Apollo 11 excerpted text +(~1,400 words, plain text) +- **[test_prompts.md][prompts]** - 15 test prompts (readable format) - **[test_data.json][json]** - Complete dataset (structured format for automated testing) - **[RATIONALE.md][rationale]** - Detailed explanation of selection decisions @@ -35,8 +36,7 @@ team discussions, see the **[team briefing](https://docs.google.com/document/d/1 **Source:** Wikipedia - Apollo 11 article **URL:** -**Permanent Link:** - +**Link:** **Revision ID:** 1252473845 (Wikipedia internal revision number) **Date Accessed:** October 22, 2025 **Sections:** Excerpted passages from "Lunar landing" and "Lunar surface @@ -126,10 +126,10 @@ but attempting all prompts provides comprehensive evaluation data. **Testing Protocol:** -**1.** Use the source text from **[source_text.txt][source]** exactly -as provided -**2.** Use all 15 prompts from **[test_prompts.md][prompts]** -without modification +**1.** Use the source text from **[source_text.txt][source]** +exactly as provided +**2.** Use all 15 prompts from **[test_prompts.md][prompts]** without +modification **3.** *(Optional)* Use **[test_data.json][json]** for automated or scripted testing workflows **4.** Record responses for each prompt with model configuration details
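+ +As a rough sketch, the scripted workflow in step **3** could look like the +following (run from this folder; `generate_answer` is a placeholder for +whatever model is being tested): + +```python +import json + +# Load the structured dataset: source text, prompts, and expected answers. +with open("test_data.json", encoding="utf-8") as f: +    data = json.load(f) + +for p in data["prompts"]: +    answer = generate_answer(p["prompt"])  # placeholder for the model call +    print(p["id"], p["category"], answer, p.get("expected_answer")) +```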