git » alan.git » commit afc7687

Clarify purpose, instructions, and rendering notes in README

author Alan Dipert
2025-12-04 05:50:25 UTC
committer Alan Dipert
2025-12-04 05:50:25 UTC
parent 3a955144a3053e56e221c18cd47c3fe8898db637

README.md +4 -3

diff --git a/README.md b/README.md
index 5377878..10ee29a 100644
--- a/README.md
+++ b/README.md
@@ -1,9 +1,9 @@
 # ALAN — Alan's Language Aptitude iNstrument
 
-ALAN is a fully self-contained artificial-language aptitude assessment inspired by DLAB-style tasks. It generates a consistent micro-grammar, produces a 32-item multiple-choice test, renders a booklet and answer key, and validates every form against strict grammatical and psychometric properties.
+ALAN is a fully self-contained artificial-language aptitude assessment inspired by DLAB-style tasks. It generates a consistent micro-grammar, produces a 32-item multiple-choice test, renders a booklet and answer key, and validates every form against strict grammatical and psychometric properties. The goal is to measure how quickly and accurately someone can infer and apply unfamiliar language rules—skills that map to disciplined software reasoning (spec reading, edge-case handling, protocol compliance).
 
 ## What This Is
-- **Purpose:** Measure rapid rule inference, pattern generalization, and attention to fine-grained grammatical cues—abilities correlated with learning new syntactic systems and with disciplined software engineering (e.g., reading specs, refactoring, reasoning about invariants).
+- **Purpose:** Measure rapid rule inference, pattern generalization, and attention to fine-grained grammatical cues—abilities correlated with learning new syntactic systems and with disciplined software engineering (spec reading, refactoring, edge-case handling).
 - **Format:** 32 multiple-choice items across sections that introduce rules, then test them with strictly grammatical distractors that differ by exactly one semantic/morphosyntactic feature (minimal pairs).
 - **Artifacts produced:** `generated_test.json` (canonical test), `test_booklet.txt` (questions only), `answer_key.txt` (answers with explanations).
 - **Dependencies:** Python 3 only, no external libraries.
@@ -56,13 +56,14 @@ cat answer_key.txt     # view the key
     --out generated_test.json
   python3 render_text.py --in generated_test.json --test-out test_booklet.txt --key-out answer_key.txt
   ```
-The chosen parameters are recorded in `generation_params` inside the JSON and printed in the booklet for reproducibility.
+If PDF engines are missing, PDF output is skipped; the Markdown text still renders correctly.
 
 ## Administering ALAN
 1. **Prepare materials:** Run `make run` to produce `test_booklet.txt` and `answer_key.txt`. Print or distribute the booklet only.
 2. **Time:** 25–30 minutes is typical for 32 items; you can standardize at 30 minutes for comparability.
 3. **Instructions to candidates:**
    - “You will see a small dictionary, a short rule cheat sheet, and examples. Every question has four options; exactly one is correct. All sentences follow the published rules—no tricks. Work quickly but carefully.”
+   - “This measures how well you can infer and apply new language rules; no linguistics background is required.”
 4. **Environment:** Quiet room, no external aids. Paper or on-screen is fine.
 5. **Scoring:** 1 point per correct item, no guessing penalty. Max = 32.
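
The scoring rule in step 5 (1 point per correct item, no guessing penalty, max 32) can be sketched in Python. The `score` helper and the dict shapes below are illustrative assumptions for hand-scoring, not the repo's actual API or the `generated_test.json` schema:

```python
# Minimal scoring sketch for ALAN responses (illustrative only).
# Assumes a hypothetical answer key of the form {item_number: correct_option}
# and a responses dict of the same shape; unanswered items score 0.

def score(responses, answer_key):
    """Return total points: 1 per correct item, no guessing penalty."""
    return sum(1 for item, correct in answer_key.items()
               if responses.get(item) == correct)

# Example: 32 items, candidate gets the first 28 right.
key = {i: "A" for i in range(1, 33)}
resp = {i: ("A" if i <= 28 else "B") for i in range(1, 33)}
print(score(resp, key))  # 28
```

With no penalty for wrong answers, skipped and incorrect items are equivalent, so candidates should be told to answer every item.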