= ReCo.jl

image:https://img.shields.io/badge/code%20style-blue-4495d1.svg[Code Style: Blue, link=https://github.com/invenia/BlueStyle]

**Re**inforcement learning of **co**llective behavior.

== Setup

The steps of this setup section have to be followed before running anything in the following sections.

=== Launch Julia

Navigate to the main directory `ReCo.jl` and run the following to launch Julia:

[source, bash]
----
cd ReCo.jl
julia --threads auto
----

`auto` automatically sets the number of threads to use. If you want to use a specific number `N` of threads, replace `auto` with `N`.

=== Activate the environment

After launching Julia, the package environment has to be activated by running the following in the REPL:

[source, julia]
----
using Pkg
Pkg.activate(".")
----

=== Install dependencies

After activating the package environment, run the following to install the package dependencies:

[source, julia]
----
Pkg.instantiate()
----

== Run simulation

Import the package:

[source, julia]
----
using ReCo
----

Initialize a simulation with 100 particles having a self-propulsion velocity of 40.0 and return the relative path to the simulation directory:

[source, julia]
----
sim_dir = init_sim(100, 40.0)
----

Run the simulation:

[source, julia]
----
run_sim(sim_dir, duration=20.0)
----

The values for the number of particles, the self-propulsion velocity and the simulation duration are used here as an example. For more information about possible values and other optional arguments, press `?` in the REPL after running `using ReCo`, then type `init_sim` or `run_sim` and press enter. This shows the method's documentation.

== Run reinforcement learning

Import the package:

[source, julia]
----
using ReCo
----

Run a reinforcement learning process and return the environment helper and the path of the process directory relative to the directory `ReCo.jl`:

[source, julia]
----
env_helper, rl_dir = run_rl(ENVTYPE)
----

`ENVTYPE` has to be replaced by one of the environments named after the file names in the directory `ReCo.jl/RL/Envs`, for example `LocalCOMEnv`. A description of an environment is included at the beginning of the corresponding file. For more information about all possible optional arguments, press `?` in the REPL after running `using ReCo`, then type `run_rl` and press enter.

`env_helper` has the abstract type `EnvHelper`. To access the Q-matrix, enter the following:

[source, julia]
----
env_helper.shared.agent.policy.learner.approximator.table
----

To generate a LaTeX table with the names of the state and action combinations of the Q-matrix, run the following:

[source, julia]
----
include("src/RL/latex_table.jl")
latex_rl_table(env_helper, FILENAME)
----

`FILENAME` has to be replaced by the wanted file name of the `.tex` file. This file can then be found under `ReCo.jl/exports/FILENAME`.

To access the rewards, run the following:

[source, julia]
----
env_helper.shared.hook.rewards
----

To plot the rewards, run the following:

[source, julia]
----
plot_rewards(rl_dir)
----

To plot the mean of kappa, defined as the ratio of the eigenvalues of the gyration tensor, run the following:

[source, julia]
----
include("analysis/mean_kappa.jl")
plot_mean_kappa(; rl_dir=rl_dir, n_last_episodes=N_LAST_EPISODES)
----

`N_LAST_EPISODES` is the number of the last episodes of the learning process to average over.
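As an illustration, the placeholders above can be filled with concrete values. The following is only a sketch: `LocalCOMEnv` is the example environment mentioned above (it may need to be qualified as `ReCo.LocalCOMEnv` if it is not exported), and the file name and the number of episodes are arbitrary example values.

[source, julia]
----
using ReCo

# Example environment from ReCo.jl/RL/Envs; qualify as ReCo.LocalCOMEnv if needed.
env_helper, rl_dir = run_rl(LocalCOMEnv)

# Q-matrix and rewards of the finished learning process.
q_matrix = env_helper.shared.agent.policy.learner.approximator.table
rewards = env_helper.shared.hook.rewards

# Export the states/actions table; "q_matrix_table" is an arbitrary example file name.
include("src/RL/latex_table.jl")
latex_rl_table(env_helper, "q_matrix_table")

# Plot the rewards and the mean of kappa over the last 100 episodes (example value).
plot_rewards(rl_dir)
include("analysis/mean_kappa.jl")
plot_mean_kappa(; rl_dir=rl_dir, n_last_episodes=100)
----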
== Run analysis

After running the following command blocks in the REPL, the output can be found in the directory `exports/graphics`.

=== Mean squared displacement

[source, julia]
----
include("analysis/mean_squared_displacement.jl")
run_msd_analysis()
run_random_walk()
----

=== Radial distribution function

[source, julia]
----
include("analysis/radial_distribution_function/radial_distribution_function.jl")
run_radial_distribution_analysis()
----
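== Putting it together

Putting the previous sections together, a minimal end-to-end session could look like the following sketch. All calls are the ones shown above; the particle number, self-propulsion velocity, duration and environment are example values, and `LocalCOMEnv` may need to be qualified as `ReCo.LocalCOMEnv`.

[source, julia]
----
# Activate the package environment and install the dependencies.
using Pkg
Pkg.activate(".")
Pkg.instantiate()

using ReCo

# Plain simulation with the example values from the sections above.
sim_dir = init_sim(100, 40.0)
run_sim(sim_dir, duration=20.0)

# Reinforcement learning with the example environment, then plot the rewards.
env_helper, rl_dir = run_rl(LocalCOMEnv)
plot_rewards(rl_dir)
----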