Mirror of https://gitlab.rlp.net/mobitar/ReCo.jl.git (synced 2024-12-21 00:51:21 +00:00)

Further RL documentation

This commit is contained in:
parent 146b024e19
commit 51c9b0b72f

1 changed file with 43 additions and 2 deletions
README.adoc | 45

@@ -72,16 +72,57 @@ Import the package:
using ReCo
----

Run a reinforcement learning process and return the environment helper and the path of the process directory relative to the directory `ReCo.jl`:

[source, julia]
----
env_helper, rl_dir = run_rl(ENVTYPE)
----

ENVTYPE has to be replaced by one of the environments named after the file names in the directory `ReCo.jl/RL/Envs`, for example: `LocalCOMEnv`. A description of an environment is included at the beginning of the corresponding file.
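
For instance, with the `LocalCOMEnv` environment mentioned above, the call would look like this (a minimal sketch; all optional arguments are left at their defaults):

[source, julia]
----
# LocalCOMEnv is one of the environment types defined in ReCo.jl/RL/Envs.
env_helper, rl_dir = run_rl(LocalCOMEnv)
----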

For more information about all possible optional arguments, press `?` in the REPL after running `using ReCo`. Then type `run_rl` followed by pressing enter.
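
In the REPL this looks roughly as follows; pressing `?` switches the prompt to help mode (a sketch of the interaction, help output omitted):

----
julia> using ReCo

help?> run_rl
----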

`env_helper` has the abstract type `EnvHelper`. To access the Q-matrix, enter the following:

[source, julia]
----
env_helper.shared.agent.policy.learner.approximator.table
----

To generate a LaTeX table with the names of the state and action combinations for the Q-matrix, run the following:

[source, julia]
----
include("src/RL/latex_table.jl")

latex_rl_table(env_helper, FILENAME)
----

FILENAME has to be replaced by the desired file name of the `.tex` file. This file can then be found under `ReCo.jl/exports/FILENAME`.
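
As a concrete example, assuming `FILENAME` is passed as a string including the `.tex` extension and using a purely hypothetical file name:

[source, julia]
----
# "Q_matrix.tex" is a hypothetical file name; the table would then be found
# under ReCo.jl/exports/Q_matrix.tex.
latex_rl_table(env_helper, "Q_matrix.tex")
----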

To access the rewards, run the following:

[source, julia]
----
env_helper.shared.hook.rewards
----

To plot the rewards, run the following:

[source, julia]
----
plot_rewards(rl_dir)
----

To plot the mean of kappa, defined as the ratio of the eigenvalues of the gyration tensor, run the following:

[source, julia]
----
include("analysis/mean_kappa.jl")

plot_mean_kappa(; rl_dir=rl_dir, n_last_episodes=N_LAST_EPISODES)
----

`N_LAST_EPISODES` is the number of the last episodes of the learning process to average over.
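
For instance, to average over the last 100 episodes (100 is just an example value):

[source, julia]
----
plot_mean_kappa(; rl_dir=rl_dir, n_last_episodes=100)
----
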
== Run analysis

After running the following command blocks in the REPL, the output can be found in the directory `exports/graphics`.