
= ReCo.jl
:source-highlighter: highlight.js
:highlightjs-languages: bash, julia

image:https://img.shields.io/badge/code%20style-blue-4495d1.svg[Code Style: Blue, link=https://github.com/invenia/BlueStyle]

**Re**inforcement learning of **co**llective behavior.

== Setup

The setup steps below have to be completed before running anything in the following sections.

=== Launch Julia

Navigate to the main directory `ReCo.jl` and then run the following to launch Julia:

[source,bash]
----
cd ReCo.jl
julia --threads auto
----

`auto` sets the number of threads automatically. If you want to use a specific number `N` of threads, replace `auto` with `N`.
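
For example, to launch Julia with 4 threads:

[source,bash]
----
julia --threads 4
----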

=== Activate the environment

After launching Julia, the package environment has to be activated by running the following in the REPL:

[source,julia]
----
using Pkg
Pkg.activate(".")
----
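
Alternatively, the environment can be activated directly at launch. This uses a standard Julia command-line option and is not specific to ReCo.jl:

[source,bash]
----
julia --project=. --threads auto
----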

=== Install dependencies

After activating the package environment, run the following to install the package dependencies:

[source,julia]
----
Pkg.instantiate()
----

=== Import the package

You can import the package by running:

[source,julia]
----
using ReCo
----

This brings the package's exported methods, which are intended for the end user, into scope.

== Help mode

To access the documentation of the package methods presented later in this README, run `using ReCo` first. Then enter help mode by pressing `?` in the REPL and type the method's name followed by enter to see its documentation.
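
For example, to view the documentation of `run_sim` in help mode:

[source,julia]
----
help?> run_sim
----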

== Run simulation

Initialize a simulation with 100 particles, each with a self-propulsion velocity of 40.0, and return the relative path to the simulation directory:

[source,julia]
----
sim_dir = init_sim(100, 40.0)
----

Run the simulation:

[source,julia]
----
run_sim(sim_dir, duration=20.0)
----

The values for the number of particles, self-propulsion velocity and simulation duration are used here as an example. For more information about possible values and other optional arguments, see the documentation of `init_sim` or `run_sim`.

== Simulation visualization

=== Animation

To generate an animation of a simulation, run the following:

[source,julia]
----
animate(sim_dir)
----

The method's documentation includes all possible optional arguments.

=== Snapshot plot

//TODO

== Run reinforcement learning

Run a reinforcement learning process and return the environment helper and the path of the process directory relative to the directory `ReCo.jl`:

[source,julia]
----
env_helper, rl_dir = run_rl(ENVTYPE)
----

`ENVTYPE` has to be replaced by one of the environments named after the files in the directory `ReCo.jl/RL/Envs`, for example `LocalCOMEnv`. A description of each environment is included at the beginning of the corresponding file.
//TODO: Descriptions of envs
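
For example, using the `LocalCOMEnv` environment mentioned above (depending on what `ReCo` exports, the type may need to be qualified as `ReCo.LocalCOMEnv`):

[source,julia]
----
env_helper, rl_dir = run_rl(LocalCOMEnv)
----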

The documentation of `run_rl` includes all possible optional arguments.

=== Q-matrix

`env_helper` is of a subtype of the abstract type `EnvHelper`. To access the Q-matrix, run the following:

[source,julia]
----
env_helper.shared.agent.policy.learner.approximator.table
----
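
As a minimal sketch for inspecting it, assuming the table is a plain numeric matrix (its orientation of states versus actions can be checked against the LaTeX table described below):

[source,julia]
----
Q = env_helper.shared.agent.policy.learner.approximator.table
size(Q)     # Dimensions of the Q-matrix
extrema(Q)  # Smallest and largest Q-value
----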

To generate a LaTeX table with the state and action combination names for the Q-matrix, run the following:

[source,julia]
----
include("src/RL/latex_table.jl")
latex_rl_table(env_helper, FILENAME)
----

`FILENAME` has to be replaced by the desired file name of the `.tex` file, without the extension. The file can then be found under `ReCo.jl/exports/FILENAME.tex`.
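
For example, with the hypothetical file name `q_matrix`:

[source,julia]
----
latex_rl_table(env_helper, "q_matrix")
# The table is then written to ReCo.jl/exports/q_matrix.tex
----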

=== Rewards

To access the rewards, run the following:

[source,julia]
----
env_helper.shared.hook.rewards
----
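
As a small sketch, assuming `rewards` is a vector with one accumulated reward per episode and that at least 100 episodes were run, the mean reward over the last 100 episodes can be computed like this:

[source,julia]
----
using Statistics

rewards = env_helper.shared.hook.rewards
# Assuming one entry per episode, this is the mean reward
# over the last 100 episodes:
mean(rewards[(end - 99):end])
----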

To plot the rewards, run the following:

[source,julia]
----
plot_rewards(rl_dir)
----

=== Mean kappa

To plot the mean of kappa, defined as the ratio of the eigenvalues of the gyration tensor, run the following:

[source,julia]
----
include("analysis/mean_kappa.jl")
plot_mean_kappa(; rl_dir=rl_dir, n_last_episodes=N_LAST_EPISODES)
----

`N_LAST_EPISODES` is the number of last episodes of the learning process to average over.
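
For example, to average over the last 100 episodes (an arbitrary choice used here only for illustration):

[source,julia]
----
plot_mean_kappa(; rl_dir=rl_dir, n_last_episodes=100)
----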

== Run analysis

After running the following command blocks in the REPL, the output can be found in the directory `ReCo.jl/exports/graphics`.

=== Mean squared displacement

[source,julia]
----
include("analysis/mean_squared_displacement.jl")
run_msd_analysis()
run_random_walk()
----

=== Radial distribution function

[source,julia]
----
include("analysis/radial_distribution_function/radial_distribution_function.jl")
run_radial_distribution_analysis()
----

=== Reward discount analysis

//TODO

== Graphics

//TODO