
Day 4 done

Mo8it 2022-03-31 04:26:38 +02:00
parent 7ad274779b
commit 54ce7b46a2

@@ -8,6 +8,8 @@ using InteractiveUtils
using BenchmarkTools
# ╔═╡ 8b1dfee2-bd8b-4d23-b9f8-9406002e0eaa
# Builtin package with useful functions for dealing with randomness
# Imported for the function `shuffle`
using Random
# ╔═╡ 1fb7d9af-333e-44f2-b693-09ff97937d4c
@@ -144,9 +146,13 @@ If you want syntax highlighting in your REPL, add the package `OhMyREPL` to your
# ╔═╡ 0e53e4ee-16a7-47ef-9992-77cbfd1ed258
md"""
# Benchmarking
Benchmarking is a tool to evaluate the performance of code.
Although you could simply measure the time a single function call takes, relying on only one measurement is not a good idea. Therefore, the package `BenchmarkTools.jl` runs a function multiple times and evaluates the timings statistically.
"""
# ╔═╡ d86e4a2f-e737-49a4-bc16-a149e81785bd
# Calculating factorials
function normal_for_loop(N)
factorials = zeros(BigInt, N)
@@ -161,9 +167,11 @@ end
N = 5000
# ╔═╡ 55d1d616-b6ee-40dd-a0ed-6274a98b1e73
# This macro shows a lot of information
@benchmark normal_for_loop(N)
# ╔═╡ 00c27927-9c72-417a-862f-9b66318d9751
# This macro only shows the most important values
@btime normal_for_loop(N)
# ╔═╡ d21771de-272e-4d57-8c76-c75be709ad0a
@@ -199,6 +207,7 @@ md"""
"""
# ╔═╡ cacc94b4-21e0-410e-acaa-80e37b447f94
# Using multiple threads for calculating factorials
function multithreaded_for_loop(N)
factorials = zeros(BigInt, N)
@@ -210,9 +219,15 @@ function multithreaded_for_loop(N)
end
# ╔═╡ a3029480-bcdc-44f3-b504-8bd3bf3aa14d
# Magic 🪄
@btime multithreaded_for_loop(N)
# ╔═╡ 3fcdf8ff-224c-4616-acd6-d8062f3a7af0
# Demonstration of shuffling
shuffle(1:10)
# ╔═╡ f141dbb4-bdc5-4f16-8d97-fc0a3d5981f2
# Shuffle to change the order of calculating factorials
function shuffle_multithreaded_for_loop(N)
factorials = zeros(BigInt, N)
@@ -224,11 +239,38 @@ function shuffle_multithreaded_for_loop(N)
end
# ╔═╡ e9dcda88-1eef-4c0a-99b2-12eaec56186b
# It boosted the performance even further 🤯
@btime shuffle_multithreaded_for_loop(N)
# ╔═╡ 768203a6-345d-4fa8-89a3-91e227579a38
md"""
The macro `@threads` splits the elements to iterate over evenly and gives every available thread a portion of these elements.
Example:
You have 2 threads and a loop over 1:10.
Then, one thread will get the values 1:5.
The other thread will get the values 6:10.
---
For our function above, calculating the factorials of the first elements takes much less time than calculating the factorials of the last elements.
Therefore, the threads that take the first elements finish their job quickly and then wait for the thread that got the portion with the biggest numbers.
With `shuffle`, we spread the expensive big numbers over all threads. The work is therefore distributed more evenly, and the threads finish (almost) at the same time.
If you don't want the static work allocation of the macro `@threads` and want more control over the threads, what they do and when (dynamic scheduling), take a look at the macro `Threads.@spawn`.
"""
# ╔═╡ 009bb2e8-f03e-40f7-a66b-166dc6a1962d
md"""
## Thread safety
Using threads is easy in Julia, but you have to pay attention when doing so!
If multiple threads try to modify a variable or an element in a container at the same time, weird and dangerous things happen! 😵‍💫
"""
# ╔═╡ dd5e5073-be29-47e7-91c5-9e47c35f905c
@@ -248,7 +290,13 @@ end
# The output is random 🤯
thread_unsafe()
# ╔═╡ 7520326f-5576-42c7-aefd-29bc7d2c6b56
md"""
Let's see how to avoid such situations using another example.
"""
# ╔═╡ ec08a80c-8886-4312-9481-5c89951681e1
# Calculating the sum of sums
function thread_unsafe_sum(N)
sum_of_sums = 0
@@ -263,8 +311,16 @@ end
N2 = 1000000
# ╔═╡ fbd2423b-aaea-47a7-a3cf-537860e11a93
# Also random!
thread_unsafe_sum(N2)
# ╔═╡ 73c86a45-d7f7-4d65-a588-1f5ff3adcf6f
md"""
The problem can be solved by using a vector with `N` elements. After calculating a sum, the thread that calculated it stores the result at a unique position in the vector that is reserved for this specific result.
At the end, the sum of sums is calculated by summing over the vector of sums.
"""
# ╔═╡ e65ad214-33ba-4d08-81f0-5f98022a9f78
function thread_safe_sum(N)
sums = zeros(Int64, N)
@@ -277,12 +333,15 @@ function thread_safe_sum(N)
end
# ╔═╡ 8ad3daa6-d221-4ff7-9bc2-8e8a66bdd8c7
# Stable!
@btime thread_safe_sum(N2)
# ╔═╡ 95dffc7f-3393-487e-8521-c96291cdc7bf
# Verify that we did not exceed the `Int64` limit!
typemax(Int64)
# ╔═╡ ebd3a9d9-7a12-4001-9b53-913f664fb1c8
# Let's try shuffling again
function shuffle_safe_thread_sum(N)
sums = zeros(Int64, N)
@@ -300,18 +359,55 @@ end
# Always benchmark! This is the only way to make sure that an "optimization" is indeed an optimization
@btime shuffle_safe_thread_sum(N2)
# ╔═╡ 24ad64f9-b0a4-48ac-b6dc-06a5d1c7b072
function shuffeling_cost(N)
shuffle(1:N)
return
end
# ╔═╡ fe0b18c0-cbf0-421d-b6a0-987321a0b09d
# The shuffling itself is too expensive compared to the calculation in the loop
@btime shuffeling_cost(N2)
# ╔═╡ 09f71a9e-6798-492f-98df-45087d0c4c8b
md"""
# Performance
# Performance optimization
Julia has a focus on high performance. But if you don't pay attention, you might write code that either cannot be optimized by the compiler or results in a lot of allocations. In both cases, your code will be slow.
In this section, tools and tips for performance optimization in Julia are presented.
In this section, some tools and tips for performance optimization in Julia are presented.
"""
# ╔═╡ 491e077c-fbf0-4ae7-b54b-9f9c68f8f1b0
md"""
## Tips
Most important performance tips:
- Don't use global variables! 🌐
- Don't use containers with an abstract element type! 🆎
- Don't write long functions! 🦒
- Don't change the type of a variable! 🥸
- Preallocate when possible! 🕰️
- Reuse containers when possible! 🔄
- Use views instead of copies when possible! 🧐
- Benchmark, benchmark, benchmark! Also pay attention to allocations. ⌚️
These are some of the tips from the official Julia documentation; the sketch after this cell illustrates the tips on preallocation and views.
If you are writing performance-critical code in Julia, make sure you have a **very good relationship** with all the performance tips in the documentation:
https://docs.julialang.org/en/v1/manual/performance-tips/
"""
# ╔═╡ 32981b03-edb9-417f-b5e0-c652e3ac715c
md"""
## Demo
"""
# ╔═╡ 6509dddd-ff17-49db-8e5e-fcea1ef0026c
N3 = 1000000
# ╔═╡ 2a24aebc-0654-4d00-bdab-627a8e1a75f2
# Use a global array of type Any
begin
sin_vals = []
@@ -327,6 +423,7 @@ begin
end
# ╔═╡ 56058ab1-4ea2-479d-88f9-5da6ac8c39c2
# Not using an array of type Any
begin
typed_sin_vals = Float64[]
@@ -342,6 +439,7 @@ begin
end
# ╔═╡ ef164e7c-668a-4312-83f1-687ca7d4c8f9
# Preallocation
begin
preallocated_sin_vals = zeros(Float64, N3)
@@ -357,6 +455,8 @@ begin
end
# ╔═╡ ebc621b5-3aa3-4cf7-bcdf-e4c5fbb79f50
# The difference made by not using global variables
# Never use global variables!
begin
passed_preallocated_sin_vals = zeros(Float64, N3)
@@ -387,6 +487,18 @@ begin
@btime local_access(N3, passed_sin_vals)
end
# ╔═╡ 43d2cbda-a21b-46ae-8433-7a9ef30c536b
md"""
## `StaticArrays`
If you are dealing with small arrays with fewer than 100 elements, then take a look at the package [`StaticArrays.jl`](https://github.com/JuliaArrays/StaticArrays.jl). Especially if you are dealing with 2D or 3D coordinates, using `StaticArrays` will make a big performance difference.
"""
# ╔═╡ f0b634a5-19a9-4c61-932f-7ae357e13be2
md"""
## Profiling
Of course, you can profile your code in Julia. Check out the package [ProfileView](https://github.com/timholy/ProfileView.jl), for example.
"""
# ╔═╡ 00000000-0000-0000-0000-000000000001
PLUTO_PROJECT_TOML_CONTENTS = """
[deps]
@@ -620,7 +732,7 @@ uuid = "3f19e933-33d8-53b3-aaab-bd5110c3b7a0"
# ╟─7f45c502-0909-42df-b93d-384f743df6a9
# ╟─f23ad33d-af1d-40c2-9efc-17ef8c4d1fb8
# ╟─6340aec8-6f77-4a30-8815-ce76ddecd6e8
# ╠═0e53e4ee-16a7-47ef-9992-77cbfd1ed258
# ╟─0e53e4ee-16a7-47ef-9992-77cbfd1ed258
# ╠═7a9ccfbc-bd2e-41d0-be5d-dea04b90d397
# ╠═d86e4a2f-e737-49a4-bc16-a149e81785bd
# ╠═bc777c73-9573-41c3-8ab5-843335539f96
@@ -632,26 +744,36 @@ uuid = "3f19e933-33d8-53b3-aaab-bd5110c3b7a0"
# ╠═cacc94b4-21e0-410e-acaa-80e37b447f94
# ╠═a3029480-bcdc-44f3-b504-8bd3bf3aa14d
# ╠═8b1dfee2-bd8b-4d23-b9f8-9406002e0eaa
# ╠═3fcdf8ff-224c-4616-acd6-d8062f3a7af0
# ╠═f141dbb4-bdc5-4f16-8d97-fc0a3d5981f2
# ╠═e9dcda88-1eef-4c0a-99b2-12eaec56186b
# ╠═009bb2e8-f03e-40f7-a66b-166dc6a1962d
# ╟─768203a6-345d-4fa8-89a3-91e227579a38
# ╟─009bb2e8-f03e-40f7-a66b-166dc6a1962d
# ╠═dd5e5073-be29-47e7-91c5-9e47c35f905c
# ╠═4554cbf0-36f5-45c6-a966-ad18b1592a60
# ╟─7520326f-5576-42c7-aefd-29bc7d2c6b56
# ╠═ec08a80c-8886-4312-9481-5c89951681e1
# ╠═3b5d9f7c-1fc9-4e85-8c07-8b5709895a10
# ╠═fbd2423b-aaea-47a7-a3cf-537860e11a93
# ╟─73c86a45-d7f7-4d65-a588-1f5ff3adcf6f
# ╠═e65ad214-33ba-4d08-81f0-5f98022a9f78
# ╠═8ad3daa6-d221-4ff7-9bc2-8e8a66bdd8c7
# ╠═95dffc7f-3393-487e-8521-c96291cdc7bf
# ╠═ebd3a9d9-7a12-4001-9b53-913f664fb1c8
# ╠═ddd2409e-de34-4eb9-a0b7-e10cc6c0ce9f
# ╠═24ad64f9-b0a4-48ac-b6dc-06a5d1c7b072
# ╠═fe0b18c0-cbf0-421d-b6a0-987321a0b09d
# ╟─09f71a9e-6798-492f-98df-45087d0c4c8b
# ╟─491e077c-fbf0-4ae7-b54b-9f9c68f8f1b0
# ╟─32981b03-edb9-417f-b5e0-c652e3ac715c
# ╠═6509dddd-ff17-49db-8e5e-fcea1ef0026c
# ╠═2a24aebc-0654-4d00-bdab-627a8e1a75f2
# ╠═56058ab1-4ea2-479d-88f9-5da6ac8c39c2
# ╠═ef164e7c-668a-4312-83f1-687ca7d4c8f9
# ╠═ebc621b5-3aa3-4cf7-bcdf-e4c5fbb79f50
# ╠═afcc15de-81e0-484f-80cf-3d805517c6e8
# ╟─43d2cbda-a21b-46ae-8433-7a9ef30c536b
# ╟─f0b634a5-19a9-4c61-932f-7ae357e13be2
# ╟─1fb7d9af-333e-44f2-b693-09ff97937d4c
# ╟─00000000-0000-0000-0000-000000000001
# ╟─00000000-0000-0000-0000-000000000002