-rw-r--r--	rust/README.md	21
1 file changed, 12 insertions, 9 deletions
diff --git a/rust/README.md b/rust/README.md
index 7ea2db6..f7148d9 100644
--- a/rust/README.md
+++ b/rust/README.md
@@ -5,17 +5,20 @@ Current syntax is based on S-expressions, ie. Lisp. There is a command-line inte
# Tester
-**NOTE**: This is a work-in-progress, current tester is only capable of running program locally through a forked process
-
The tester daemon is inspired by [Extreme Startup](https://github.com/rchatley/extreme_startup): It's a REST-ish server and client that allows one to repeatedly send λ-terms to a remote execution engine and compare the results against its expectations.
The interaction flow is simple:
-* The server listens on a well-known port
-* A client connects to this port and requests a test session (`POST /register`), giving a URL to callback,
-* The server returns an identification token, to passed to and from the client in subsequent calls,
-* Then the server repeatedly sends requests to the client (`POST /eval`) whose body is a S-expr representing a λ-term, and it expects an evaluation result.
- * Note the evaluation can return error if the term is syntactically malformed
- * The server waits at most 60 seconds for the answer
+* The HTTP server starts on a known port (e.g. 8080)
+* The client sends a `POST /register` request whose payload is a JSON object with `url` and `name` string fields:
+ ```
+ curl -v -X POST -d '{"url":"http://127.0.0.1:8888/eval", "name": "toto"}' -H 'Content-Type: application/json' http://localhost:8080/register
+ ```
+* Naturally, the client needs to run an HTTP server able to respond to `POST` requests at the given URL (a minimal sketch is given after this list)
+* If the URL is not already registered, the server accepts the registration (returning a 200 result) and starts a _testing thread_
+* The _tester_ then repeatedly sends `POST` requests to the client's registered URL
+ * The body of the request is a plain-text S-expression representing a λ-term
+ * The tester expects the response to be the plain-text result of evaluating that term
* If the client fails to answer, or answers wrongly, the server keeps sending the same request
-* If the client's answer is correct, the server sends another term, more complex, to evaluate.
+* If the client's answer is correct, the server sends another term to evaluate and awards 1 point to the client
+* The `/leaderboard` endpoint provides a crude HTML page listing each client's current score
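+
+For illustration, here is a minimal sketch of such a client, written against the Rust standard library only. It is not part of this repository: the port, the single-read request handling, and the `evaluate` stub are assumptions, and a real client would plug in an actual λ-term evaluator.
+
+```rust
+use std::io::{Read, Write};
+use std::net::TcpListener;
+
+// Placeholder: a real client would parse the S-expression and
+// reduce the λ-term here.
+fn evaluate(term: &str) -> String {
+    format!("evaluated: {}", term.trim())
+}
+
+fn main() -> std::io::Result<()> {
+    // Must match the `url` field sent in the /register payload.
+    let listener = TcpListener::bind("127.0.0.1:8888")?;
+    for stream in listener.incoming() {
+        let mut stream = stream?;
+        // A single read is enough for a sketch; a robust client would
+        // keep reading until Content-Length bytes have been received.
+        let mut buf = [0u8; 4096];
+        let n = stream.read(&mut buf)?;
+        let request = String::from_utf8_lossy(&buf[..n]);
+        // The λ-term is the plain-text body after the HTTP headers.
+        let term = request.split("\r\n\r\n").nth(1).unwrap_or("");
+        let result = evaluate(term);
+        let response = format!(
+            "HTTP/1.1 200 OK\r\nContent-Length: {}\r\n\r\n{}",
+            result.len(),
+            result
+        );
+        stream.write_all(response.as_bytes())?;
+    }
+    Ok(())
+}
+```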