= ReCo.jl
image:https://img.shields.io/badge/code%20style-blue-4495d1.svg[Code Style: Blue, link=https://github.com/invenia/BlueStyle]
**Re**inforcement learning of **co**llective behavior.
== Setup
The setup steps have to be completed before running anything in the sections that follow.
=== Launch Julia
Navigate to the main directory `ReCo.jl` and run the following to launch Julia:
[source, bash]
----
cd ReCo.jl
julia --threads auto
----
`auto` sets the number of threads automatically. If you want to use a specific number `N` of threads, replace `auto` with `N`.
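For example, launching Julia with 8 threads (the number is only illustrative):

[source, bash]
----
julia --threads 8
----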
=== Activate the environment
After launching Julia, the package environment has to be activated by running the following in the REPL:
[source, julia]
----
using Pkg
Pkg.activate(".")
----
=== Install dependencies
After activating the package environment, run the following to install the package dependencies:
[source, julia]
----
Pkg.instantiate()
----
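Optionally, `Pkg.status()` can be used to check that the environment resolved correctly; it lists the installed dependencies of the active project:

[source, julia]
----
Pkg.status()
----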
== Run simulation
Import the package:
[source, julia]
----
using ReCo
----
Initialize a simulation with 100 particles having a self-propulsion velocity of 40.0 and return the relative path to the simulation directory:
[source, julia]
----
sim_dir = init_sim(100, 40.0)
----
Run the simulation:
[source, julia]
----
run_sim(sim_dir, duration=20.0)
----
The values used here for the number of particles, the self-propulsion velocity and the simulation duration are only examples. For more information about possible values and other optional arguments, press `?` in the REPL after running `using ReCo`, then type `init_sim` or `run_sim` and press Enter. This shows the method's documentation.
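A help-mode session looks like the following sketch (the `help?>` prompt replaces `julia>` after pressing `?`):

----
julia> using ReCo

help?> init_sim
----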
== Run reinforcement learning
Import the package:
[source, julia]
----
using ReCo
----
Run a reinforcement learning process and return the environment helper:
[source, julia]
----
env_helper = run_rl(ENVTYPE)
----
`ENVTYPE` has to be replaced by one of the environments named after the file names in the directory `ReCo.jl/RL/Envs`, for example `LocalCOMEnv`. A description of each environment is included at the beginning of the corresponding file.
For more information about all possible optional arguments, press `?` in the REPL after running `using ReCo`, then type `run_rl` and press Enter.
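As a concrete sketch, using the `LocalCOMEnv` environment mentioned above (assuming the environment type is in scope after `using ReCo`):

[source, julia]
----
using ReCo

# Run the reinforcement learning process with the LocalCOMEnv environment
# and keep the returned environment helper for later inspection.
env_helper = run_rl(LocalCOMEnv)
----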
== Run analysis
After running the following command blocks in the REPL, the output can be found in the directory `exports/graphics`.
=== Mean squared displacement
[source, julia]
----
include("analysis/mean_squared_displacement.jl")
run_msd_analysis()
run_random_walk()
----
=== Radial distribution function
[source, julia]
----
include("analysis/radial_distribution_function/radial_distribution_function.jl")
run_radial_distribution_analysis()
----