Mirror of https://gitlab.rlp.net/mobitar/ReCo.jl.git (synced 2024-12-30 17:23:31 +00:00)
Add reward discount analysis and graphics docs
This commit is contained in:
parent 54f8a1bfaf
commit 1030e34d96
5 changed files with 24 additions and 16 deletions
README.adoc (28 changed lines)
@@ -52,11 +52,11 @@ You can import the package by running:
 using ReCo
 ----

-This will export the package's methods that are intended to be used by the end user.
+This will export the package's functions that are intended to be used by the end user.

 == Help mode

-To access the documentation of the presented package methods further in this README, run `using ReCo` first. Then, enter the help mode by pressing `?` in the REPL. Now, enter the method's name followed by enter to see its documentation.
+To access the documentation of the presented package functions further in this README, run `using ReCo` first. Then, enter the help mode by pressing `?` in the REPL. Now, enter the function's name followed by enter to see its documentation.

 == Run a simulation

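For quick reference while reading this diff, a minimal help-mode sketch; it assumes only that the package is installed and uses `run_sim`, whose docstring is touched later in this commit:

[source,julia-repl]
----
julia> using ReCo

help?> run_sim    # press `?` to switch to help mode, type the function name, then press Enter
----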
@@ -87,7 +87,7 @@ To generate an animation of a simulation, run the following:
 animate(sim_dir)
 ----

-The method's documentation includes all possible optional arguments and where the output can be found.
+The function's documentation includes all possible optional arguments and where the output can be found.

 === Snapshot plot

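As a hedged usage sketch of the animation step (the directory path is hypothetical; `animate` and `sim_dir` come from the hunk above):

[source,julia]
----
using ReCo

sim_dir = "exports/my_simulation"  # hypothetical path to an existing simulation directory
animate(sim_dir)
----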
@@ -98,7 +98,7 @@ To plot only one snapshot of a simulation, run the following:
 plot_snapshot(sim_dir)
 ----

-This will ask for the number of the snapshot to plot out of the total number of snapshots. The method's documentation includes all possible optional arguments and where the output can be found.
+This will ask for the number of the snapshot to plot out of the total number of snapshots. The function's documentation includes all possible optional arguments and where the output can be found.

 == Run a reinforcement learning process

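A corresponding sketch for the snapshot plot, again with a hypothetical simulation directory; per the docstring changed later in this commit, the output lands in `sim_dir/graphics/N.pdf`:

[source,julia]
----
using ReCo

plot_snapshot("exports/my_simulation")  # hypothetical directory; the call asks which snapshot to plot
----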
@@ -130,7 +130,7 @@ include("src/RL/latex_table.jl")
 latex_rl_table(env_helper, FILENAME_WITHOUT_EXTENSION)
 ----

-`FILENAME_WITHOUT_EXTENSION` has to be replaced by the wanted file name without extension of the `.tex` file. The method's documentation explains where the output is placed.
+`FILENAME_WITHOUT_EXTENSION` has to be replaced by the wanted file name without extension of the `.tex` file. The function's documentation explains where the output is placed.

 The output file can be used in a LaTeX document:

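A concrete, hedged instance of the placeholder (the file name `rl_table` is an arbitrary example, and `env_helper` is assumed to come from the reinforcement learning run described earlier in the README):

[source,julia]
----
include("src/RL/latex_table.jl")

latex_rl_table(env_helper, "rl_table")  # arbitrary example name; produces rl_table.tex
----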
@@ -155,7 +155,7 @@ To plot the rewards, run the following:
 plot_rewards(rl_dir)
 ----

-The method's documentation explains where the output is placed.
+The function's documentation explains where the output is placed.

 === Mean kappa

@@ -167,7 +167,7 @@ include("analysis/mean_kappa.jl")
 plot_mean_κ(; rl_dir=rl_dir, n_last_episodes=N_LAST_EPISODES)
 ----

-`N_LAST_EPISODES` is the number of the last episodes of the learning process to average over. The method's documentation explains where the output is placed.
+`N_LAST_EPISODES` is the number of the last episodes of the learning process to average over. The function's documentation explains where the output is placed.

 == Run analysis

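A hedged instance of this placeholder as well (averaging over the last 100 episodes is an arbitrary choice; `rl_dir` is assumed to point at a finished learning process):

[source,julia]
----
include("analysis/mean_kappa.jl")

plot_mean_κ(; rl_dir=rl_dir, n_last_episodes=100)  # 100 is an arbitrary example value
----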
@@ -182,6 +182,8 @@ run_msd_analysis()
 run_random_walk()
 ----

+The output is `ReCo.jl/exports/graphics/mean_squared_displacement.pdf` and `ReCo.jl/exports/graphics/random_walk.pdf`.
+
 === Radial distribution function

 [source,julia]

@@ -190,10 +192,18 @@ include("analysis/radial_distribution_function/radial_distribution_function.jl")
 run_radial_distribution_analysis()
 ----

+The output is `ReCo.jl/exports/graphics/radial_distribution.pdf` and `ReCo.jl/exports/graphics/radial_distribution_all_vs.pdf`.
+
 === Reward discount analysis

-//TODO
+[source,julia]
+----
+include("analysis/reward_discount_analysis.jl")
+run_reward_discount_analysis()
+----
+
+The output is `ReCo.jl/exports/graphics/reward_discount_analysis.pdf`.

 == Graphics

-//TODO
+The directory `ReCo.jl/graphics` has some Julia files that generate graphics related to this package. The function in every file that has to be run to generate the corresponding graphics starts with `plot_` or `gen_`. The output is placed in `ReCo.jl/exports/graphics`.

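To make the `plot_`/`gen_` convention concrete, a hedged sketch: the file name below is a guess, while `gen_elliptical_distance_graphics` is taken from a later hunk of this commit:

[source,julia]
----
include("graphics/elliptical_distance.jl")  # hypothetical file name under ReCo.jl/graphics

gen_elliptical_distance_graphics()  # the output should appear in ReCo.jl/exports/graphics
----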
@@ -5,7 +5,7 @@ using ReCo: ReCo

 include("../src/Visualization/common_CairoMakie.jl")

-function run_rl_prcesses_reward_discount(γs::AbstractVector)
+function run_reward_discount_processes(γs::AbstractVector)
     n_γs = length(γs)
     env_helpers = Vector{ReCo.RL.EnvHelper}(undef, n_γs)

@@ -68,7 +68,7 @@ end
 function run_reward_discount_analysis()
     γs = 0.0:0.25:1.0

-    env_helpers = run_rl_prcesses_reward_discount(γs)
+    env_helpers = run_reward_discount_processes(γs)

     plot_reward_discount_analysis(
         γs, env_helpers, (:solid, :dash, :dashdot, :solid, :solid)

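A side note on the context lines above: `0.0:0.25:1.0` yields five discount factors, matching the five line styles passed to `plot_reward_discount_analysis`. A quick check:

[source,julia]
----
γs = 0.0:0.25:1.0
length(γs) == 5  # true; one line style per discount factor: 0.0, 0.25, 0.5, 0.75, 1.0
----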
@@ -4,8 +4,6 @@ using LaTeXStrings: @L_str
 include("../src/Visualization/common_CairoMakie.jl")

 function gen_elliptical_distance_graphics()
-    box_length = 100
-
     init_cairomakie!()

     fig = gen_figure()

@@ -28,7 +28,7 @@ end

 Plot one snapshot of a simulation.

-The method will ask for the number of the snapshot to plot out of the total number of snapshots. The output is `sim_dir/graphics/N.pdf` with `N` as the number of the chosen snapshot.
+The function will ask for the number of the snapshot to plot out of the total number of snapshots. The output is `sim_dir/graphics/N.pdf` with `N` as the number of the chosen snapshot.

 # Arguments
 - `sim_dir::String`: Simulation directory.

@@ -16,7 +16,7 @@ end

 Run the initialized simulation in its directory `sim_dir`.

-This method starts or resumes a simulation. For long simulations, the simulation can be stopped by pressing `Ctrl` + `c`. This stopped simulation can be resumed later by running this method again with the same simulation directory.
+This function starts or resumes a simulation. For long simulations, the simulation can be stopped by pressing `Ctrl` + `c`. This stopped simulation can be resumed later by running this function again with the same simulation directory.

 Some of the last snapshots might be lost if the simulation is stopped (see the argument `n_bundle_snapshots`).

@@ -28,7 +28,7 @@ Return `nothing`.
 - `snapshot_at::Float64=$DEFAULT_SNAPSHOT_AT`: Snapshot time interval.
 - `seed::Int64=$DEFAULT_SEED`: Random number generator seed.
 - `n_bundle_snapshots::Int64=$DEFAULT_N_BUNDLE_SNAPSHOTS`: Number of snapshots in a bundle. This number is relevant for long simulations that can be stopped while running. A simulation can be continued from the last bundle of snapshots. If the number of snapshots in a bundle is too high and the simulation is stopped, many of the last snapshots can be lost. A low number results in high IO since snapshots are then bundled and stored more often. For example, setting this number to 1 results in saving every snapshot immediately without bundling it with other snapshots, which would be more efficient. Setting the number to 1000 could mean losing 999 snapshots in the worst case if the simulation is stopped before having 1000 snapshots to bundle and save.
-- `env_helper::Union{RL.EnvHelper,Nothing}=nothing`: Environment helper. It should be left as the default `nothing` unless this method is used internally for reinforcement learning.
+- `env_helper::Union{RL.EnvHelper,Nothing}=nothing`: Environment helper. It should be left as the default `nothing` unless this function is used internally for reinforcement learning.
 - `show_progress::Bool=$DEFAULT_SHOW_PROGRESS`: Show simulation progress bar.
 """
 function run_sim(

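To ground the docstring above, a hedged call sketch. The hunk cuts off at `function run_sim(`, so any required positional arguments besides the simulation directory are not visible here; the keywords below are only those documented in this diff:

[source,julia]
----
# Start or resume the simulation in `sim_dir`; bundles of 10 snapshots mean at most
# 9 snapshots are lost if the run is interrupted with `Ctrl` + `c`.
run_sim(sim_dir; n_bundle_snapshots=10, show_progress=true)
----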