rezasalatin committed · Commit 68f4afc · verified · 1 Parent(s): 0b70b07

Update README.md

Files changed (1): README.md (+32 −32)
README.md CHANGED

````diff
@@ -10,46 +10,46 @@ Liang, Y., Li, X., Tsai, B., Chen, Q., & Jafari, N. (2023). V-FloodNet: A video
 
 ## Prerequisites
 
-Install Conda on your Ubuntu 24.04 with default version of Python and Nvidia GPU.
+This code is tested on a newly installed Ubuntu 24.04 with default version of Python and Nvidia GPU.
 
 1. Install Anaconda prerequisite (Can also be accessed from [here](https://docs.anaconda.com/anaconda/install/linux/)):
-```sh
-sudo apt update && \
-sudo apt install libgl1-mesa-dri libegl1 libglu1-mesa libxrandr2 libxss1 libxcursor1 libxcomposite1 libasound2-data libasound2-plugins libxi6 libxtst6
-```
+```sh
+sudo apt update && \
+sudo apt install libgl1-mesa-dri libegl1 libglu1-mesa libxrandr2 libxss1 libxcursor1 libxcomposite1 libasound2-data libasound2-plugins libxi6 libxtst6
+```
 
 2. Download Anaconda3:
-```sh
-curl -O https://repo.anaconda.com/archive/Anaconda3-2024.06-1-Linux-x86_64.sh
-```
+```sh
+curl -O https://repo.anaconda.com/archive/Anaconda3-2024.06-1-Linux-x86_64.sh
+```
 
-3. Located the downloaded file and install it:
-```sh
-bash Anaconda3-2024.06-1-Linux-x86_64.sh
-```
+3. Locate the downloaded file and install it:
+```sh
+bash Anaconda3-2024.06-1-Linux-x86_64.sh
+```
 
 ## Steps
 
 1. Clone this repository and change directory:
-```sh
-git clone https://github.com/rezasalatin/V-BeachNet.git
-cd V-BeachNet
-```
+```sh
+git clone https://huggingface.com/rezasalatin/V-BeachNet.git
+cd V-BeachNet
+```
 
 2. Create the virtual environment with the requirements:
-```sh
-conda env create -f environment.yml
-conda activate vbeach
-```
-
-3. Visit the "Training_Station" folder and copy your manually segmented dataset to this directory. Open the following file to change any of the variables and save it. Then execute it to train the model:
-```sh
-./train_video_seg.sh
-```
-Access your trained model from log/ directory.
-
-4. Visit the "Testing_Station" folder and copy your data to this directory. Open the following file to change any of the variables (especially model path from the log/ folder) and save it. Then execute it to test the model:
-```sh
-./test_video_seg.sh
-```
-Access your segmented data from output directory.
+```sh
+conda env create -f environment.yml
+conda activate vbeach
+```
+
+3. Visit the "Training_Station" folder and copy your manually segmented (using [labelme](https://github.com/labelmeai/labelme)) dataset to this directory. Open the following file to change any of the variables and save it. Then execute it to train the model:
+```sh
+./train_video_seg.sh
+```
+Access your trained model from the `log/` directory.
+
+4. Visit the "Testing_Station" folder and copy your data to this directory. Open the following file to change any of the variables (especially the model path from the `log/` folder) and save it. Then execute it to test the model:
+```sh
+./test_video_seg.sh
+```
+Access your segmented data from the `output` directory.
````
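The updated step 3 expects frames annotated with labelme, which saves one JSON file per image containing polygon "shapes". As a minimal sketch of what such an annotation holds and how it could be rasterized into a training mask (the annotation values, the `water` label, and both function names are hypothetical; V-BeachNet's own loader may differ), using only the standard library:

```python
# Hypothetical labelme-style annotation: one polygon labeled "water"
# on an 8x8 image. Real files also carry imagePath, imageData, etc.
annotation = {
    "imageHeight": 8,
    "imageWidth": 8,
    "shapes": [
        {"label": "water", "shape_type": "polygon",
         "points": [[1, 1], [6, 1], [6, 6], [1, 6]]},
    ],
}

def point_in_polygon(x, y, pts):
    """Even-odd ray-casting test: count edge crossings to the right of (x, y)."""
    inside = False
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def labelme_to_mask(ann, label):
    """Rasterize all polygons with the given label into a 0/1 grid."""
    h, w = ann["imageHeight"], ann["imageWidth"]
    polys = [s["points"] for s in ann["shapes"]
             if s["label"] == label and s["shape_type"] == "polygon"]
    # Sample at pixel centers (x + 0.5, y + 0.5).
    return [[int(any(point_in_polygon(x + 0.5, y + 0.5, p) for p in polys))
             for x in range(w)] for y in range(h)]

mask = labelme_to_mask(annotation, "water")
print(sum(map(sum, mask)))  # number of foreground pixels
```

A real pipeline would load the JSON with `json.load` and typically use numpy/PIL for rasterization; the pure-Python version above just makes the geometry explicit.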