Measure and Improve Your Typing Performance with AI-Powered Practice
- AI-Powered Sentence Generation: Use OpenAI's GPT models or HuggingFace models to generate dynamic typing targets
- Performance Tracking: Detailed recording of your typing performance with timestamps and accuracy metrics
- Multi-Language Support: Full support for Japanese (Hiragana, Katakana, Kanji) and English
- Flexible Input: Multiple correct input patterns (e.g., both 'ti' and 'chi' for 'ち')
- Console Interface: Clean, distraction-free typing practice environment
- Data Analysis Ready: Export records in JSON format for detailed analysis with pandas/matplotlib
- Python >= 3.10 (3.10, 3.11, 3.12, and 3.13 are supported)
- Dependencies (automatically installed): click, langchain (>= 1.0), langchain_openai, openai, pydantic (>= 2.0), pynput, python-dotenv, requests, sshkeyboard, types-pynput, types-requests
- OpenAI API Key (required for AI-generated typing targets)
See pyproject.toml for detailed information.
The recommended way to install is using uv, a fast Python package installer:
# Clone the repository
git clone https://github.com/hmasdev/simple_typing_application.git
cd simple_typing_application
# Install with uv (recommended)
uv sync
This will install all dependencies and set up the application for use.
Alternatively, install with pip:
pip install git+https://github.com/hmasdev/simple_typing_application.git
You can specify the following optional dependencies:
- [extra]: Data analysis packages (pandas, matplotlib, jupyterlab, seaborn)
- [huggingface]: HuggingFace model support (torch, transformers, etc.)
- [dev]: Development tools (pytest, mypy, ruff, etc.)
Using uv (recommended):
git clone https://github.com/hmasdev/simple_typing_application.git
cd simple_typing_application
# Install with specific optional dependencies
uv sync --extra extra --extra huggingface
Using pip:
git clone https://github.com/hmasdev/simple_typing_application.git
cd simple_typing_application
# Install with optional dependencies
pip install ".[extra,huggingface]"For more details, see ./pyproject.toml.
- Clone and install:
git clone https://github.com/hmasdev/simple_typing_application.git
cd simple_typing_application
uv sync
- Set up your OpenAI API key (a quick way to check that it is picked up is sketched after these steps):
echo "OPENAI_API_KEY=your-api-key-here" > .env
- Run the application:
python -m simple_typing_application
- Start typing! Follow the on-screen prompts and improve your typing skills.
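If you want to confirm that the key written to .env in step 2 is visible before launching, a minimal check might look like the sketch below. It only assumes that python-dotenv (installed as a dependency of this project) finds the .env file in the current directory; the application loads the key in its own way.

```python
# Minimal sanity check: does .env expose OPENAI_API_KEY?
# Assumes python-dotenv (a project dependency) and a .env file in the
# current directory; this is not how the application itself loads the key.
import os

from dotenv import load_dotenv

load_dotenv()  # read .env from the current directory, if present
print("OPENAI_API_KEY is set:", bool(os.getenv("OPENAI_API_KEY")))
```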
You can configure the application with a JSON config file. For example, ./sample_config.json is a valid configuration file; its content is as follows:
{
"sentence_generator_type": "OPENAI",
"sentence_generator_config": {
"model": "gpt-5-nano",
"temperature": 0.7,
"openai_api_key": "HERE_IS_YOUR_API_KEY",
"memory_size": 0,
"max_retry": 5
},
"user_interface_type": "CONSOLE",
"user_interface_config": {},
"key_monitor_type": "PYNPUT",
"key_monitor_config": {},
"record_direc": "./record"
}
By default, the contents of sample_config.json are used, except for openai_api_key.
In this case, you should set the environment variable OPENAI_API_KEY to your API key, or create a .env file like:
OPENAI_API_KEY={HERE_IS_YOUR_API_KEY}
You can specify the following as sentence_generator_type:
- OPENAI: Use the OpenAI API to generate typing targets (recommended: gpt-5-nano, gpt-4o, or gpt-4-turbo)
- HUGGINGFACE: Use models available on HuggingFace to generate typing targets
- STATIC: Use predefined typing targets that you have specified
For each sentence_generator_type, you can specify the detailed parameters as sentence_generator_config:
- OPENAI
  - model: See langchain.chat_models.openai.ChatOpenAI
  - temperature: See langchain.chat_models.openai.ChatOpenAI
  - openai_api_key: See langchain.chat_models.openai.ChatOpenAI
  - memory_size: See langchain.memory.buffer.ConversationBufferMemory
  - max_retry: Maximum number of times to rerun when an error occurs
- HUGGINGFACE
  - model: Model name. For example, "line-corporation/japanese-large-lm-3.6b", "rinna/japanese-gpt-neox-3.6b", "rinna/bilingual-gpt-neox-4b", and "cyberagent/open-calm-7b" are available as Japanese LLMs. For details, see huggingface.co/models.
  - do_sample: true or false. See huggingface.co/docs/transformers/pipeline_tutorial.
  - max_length: int. See huggingface.co/docs/transformers/pipeline_tutorial.
  - top_k: int. See huggingface.co/docs/transformers/pipeline_tutorial.
  - top_p: float between 0 and 1. See huggingface.co/docs/transformers/pipeline_tutorial.
  - device: cpu or cuda
- STATIC (see the example sketch below)
  - text_kana_map: key-value pairs whose keys are raw typing targets and whose values are the same targets written without kanji
  - is_random: whether typing targets are randomly selected or displayed sequentially
To see the default values, see ./simple_typing_application/models/config_models/sentence_generator_config_model.py.
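As an example of the STATIC generator, the sketch below writes a configuration file. The text_kana_map entries and the file name static_config.json are purely illustrative assumptions; the remaining fields mirror ./sample_config.json, and the config model linked above has the exact defaults.

```python
# Sketch: generate a STATIC configuration file.
# The text_kana_map entries and the "static_config.json" file name are
# illustrative; the other fields mirror ./sample_config.json.
import json

config = {
    "sentence_generator_type": "STATIC",
    "sentence_generator_config": {
        # keys: raw typing targets; values: the same text without kanji
        "text_kana_map": {
            "今日はいい天気です。": "きょうはいいてんきです。",
            "タイピングの練習をします。": "たいぴんぐのれんしゅうをします。",
        },
        "is_random": True,
    },
    "user_interface_type": "CONSOLE",
    "user_interface_config": {},
    "key_monitor_type": "PYNPUT",
    "key_monitor_config": {},
    "record_direc": "./record",
}

with open("static_config.json", "w", encoding="utf-8") as f:
    json.dump(config, f, ensure_ascii=False, indent=2)
```

You can then launch the application with python -m simple_typing_application -c static_config.json.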
Tip: For best results, we recommend using gpt-5-nano (fast and cost-effective) or gpt-4o (most capable) for OpenAI models.
You can specify the following as user_interface_type:
- CONSOLE: CUI (console user interface)
For each user_interface_type, you can specify the detailed parameters as user_interface_config:
- CONSOLE: no parameters
To see the default values, see ./simple_typing_application/models/config_models/user_interface_config_model.py.
You can specify the following as key_monitor_type:
- PYNPUT: pynput-based local key monitor
- SSHKEYBOARD: sshkeyboard-based key monitor
For each key_monitor_type, you can specify the detailed parameters as key_monitor_config:
- PYNPUT: no parameters
- SSHKEYBOARD: no parameters
To see the default values, see ./simple_typing_application/models/config_models/key_monitor_config_model.py.
You can launch this application with the following command:
python -m simple_typing_application -c HERE_IS_YOUR_CONFIG_FILE
If you want to launch this application in debug mode, run the following command:
python -m simple_typing_application -c HERE_IS_YOUR_CONFIG_FILE --debug
For more details, run python -m simple_typing_application --help.
simple_typing_application shows typing targets through the interface you have specified. Type the correct keys.
Note: The Typing Target (Romaji) displayed in your interface is one of the correct typing patterns. For example, when the Typing Target (Hiragana) is 'ち', both 'ti' and 'chi' are correct, although only one of them is displayed.
Available keyboard shortcuts:
- Esc or Ctrl+c: Quit the application
- Tab: Skip the current typing target
For each typing target, the application records your typing in the following format, in the directory specified in your config file.
{
"timestamp": "HERE IS TIMESTAMP THE TYPING START WITH FORMAT %Y-%m-%dT%H:%M:%S.%f",
"typing_target": {
"text": "HERE IS TYPING TARGET",
"text_hiragana_alphabet_symbol": "HERE IS TRANSFORMED STRING WHICH CONTAINS ONLY HIRAGANA, ALPHABET AND SYMBOLS",
"typing_target": [["CORRECT PATTERN. TYPICALLY ROMANIZED STRING", ...], ...]
},
"records": [
{
"timestamp": "HERE IS TIMESTAMP WHEN YOU TYPE %Y-%m-%dT%H:%M:%S.%f",
"pressed_key": "WHICH KEY YOU HAVE PRESSED",
"correct_keys": ["", ...],
"is_correct": true or false
},
...
]
}
Refer to ./sample_record.json for an example.
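Since the [extra] option installs pandas and matplotlib, a record directory can be turned into a small accuracy report. The sketch below assumes each typing target is saved as its own JSON file under ./record (the sample default for record_direc); the field names come from the format above, but the file layout is an assumption.

```python
# Sketch: summarize per-target accuracy from recorded JSON files.
# Assumes one JSON file per typing target under "./record"; field names
# follow the record format shown above.
import json
from pathlib import Path

import pandas as pd  # installed via the [extra] option

rows = []
for path in Path("./record").glob("*.json"):
    record = json.loads(path.read_text(encoding="utf-8"))
    keystrokes = record["records"]
    if not keystrokes:
        continue
    rows.append({
        "started_at": record["timestamp"],
        "target": record["typing_target"]["text"],
        "keystrokes": len(keystrokes),
        "accuracy": sum(k["is_correct"] for k in keystrokes) / len(keystrokes),
    })

df = pd.DataFrame(rows)
if df.empty:
    raise SystemExit("No records found under ./record")
print(df.sort_values("started_at"))
print("Overall accuracy:", df["accuracy"].mean())
```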
- Fork this repository.
- Clone your forked repository:
git clone https://github.com/hmasdev/simple_typing_application
cd simple_typing_application
- Create your feature branch:
git checkout -b feature/your-feature
- Set up your development environment (uv recommended):
uv sync
If you want to include optional dependencies for development (e.g., huggingface, pandas, etc.), run:
uv sync --extra huggingface --extra extra
To see which options are available, see ./pyproject.toml.
- Develop your feature and add tests.
- Test your feature:
uv run pytest               # unit tests
uv run pytest -m integrate  # integration tests
- Check the code style and static types:
uv run ruff check simple_typing_application
uv run ruff check tests
uv run mypy simple_typing_application
uv run mypy tests
(Optional) Format code:
uv run ruff format simple_typing_application
uv run ruff format tests
- Commit your changes:
git add .
git commit -m "Add your feature"
- Push to the branch:
git push -u origin feature/your-feature
- Create a new Pull Request.
Thank you for your contribution!
simple_typing_application is licensed under the MIT License. See the LICENSE file for more details.
- [1] https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.openai.ChatOpenAI.html
- [2] https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer.ConversationBufferMemory.html
- [3] https://huggingface.co/models?pipeline_tag=text-generation
- [4] https://huggingface.co/docs/transformers/pipeline_tutorial
