1. Extract video training latents:
   OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 training/extract_video_training_latents.py > ./logs/extract_video_latents_16k.log 2>&1 &

2. Cache directories:
   - Hugging Face hub: /root/.cache/huggingface/hub
   - torch.hub checkpoints: /root/.cache/torch/hub/checkpoints/
   - audioldm_eval: /root/.cache/audioldm_eval/ckpt/Cnn14_16k_mAP=0.438.pth

3. Training:
   OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 train.py exp_id=vgg_only_small_44k model=small_44k > ./logs/train_vgg_only_small_44k.log 2>&1 &

4. Inference:
   OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 batch_eval.py duration_s=8 dataset=vggsound model=small_44k num_workers=8 > ./logs/inference_vgg_only_small_44k.log 2>&1 &

5. Demo examples:
   CUDA_VISIBLE_DEVICES=0 python demo.py --variant="small_16k" --duration=4 --video='' --prompt ""
   CUDA_VISIBLE_DEVICES=0 python demo.py --variant="small_44k" --duration=10 --video='/inspire/hdd/ws-f4d69b29-e0a5-44e6-bd92-acf4de9990f0/gaopeng/zhoutao-240108120126/kwang/MMAudio/example_demo_videos/_jB-IM_77lI_000000_silent.mp4' --prompt ""
   CUDA_VISIBLE_DEVICES=2 python demo.py --variant="small_44k" --duration=4 --video='/inspire/hdd/ws-f4d69b29-e0a5-44e6-bd92-acf4de9990f0/gaopeng/zhoutao-240108120126/kwang/MMAudio/example_demo_videos/demo2.mp4' --prompt ""

6. MovieGen demo:
   CUDA_VISIBLE_DEVICES=1 python demo.py --variant="small_44k" --duration=11 --video='/inspire/hdd/ws-f4d69b29-e0a5-44e6-bd92-acf4de9990f0/gaopeng/zhoutao-240108120126/kwang/MMAudio/example_demo_videos/moviegen/video1.mp4' --prompt ""

7. Audio waveforms for VGGSound:
   /inspire/hdd/ws-f4d69b29-e0a5-44e6-bd92-acf4de9990f0/gaopeng/public/kwang/datasets/vggsound/audios_vggsound
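Nearly every job in these notes uses the same OMP_NUM_THREADS / nohup / torchrun launch pattern. A small wrapper can keep the flags and log paths consistent; this is a hypothetical sketch, not part of the repo (`launch` and `launch_dry` are names invented here):

```shell
#!/bin/sh
# Hypothetical helper around the recurring pattern:
#   OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 <script> <overrides> > <log> 2>&1 &
# Usage: launch <logfile> <script> [hydra overrides...]
launch() {
    log="$1"; shift
    mkdir -p "$(dirname "$log")"
    OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 "$@" > "$log" 2>&1 &
    echo "started: $* (pid $!, log $log)"
}

# Dry-run variant: only prints the command, useful for checking overrides.
launch_dry() {
    log="$1"; shift
    echo "OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 $* > $log 2>&1 &"
}
```

Example: `launch ./logs/train_vgg_only_small_44k.log train.py exp_id=vgg_only_small_44k model=small_44k`.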
8. New demo examples for the paper:
   CUDA_VISIBLE_DEVICES=2 python demo.py --variant="small_44k" --duration=4 --video='/inspire/hdd/ws-f4d69b29-e0a5-44e6-bd92-acf4de9990f0/gaopeng/zhoutao-240108120126/kwang/MMAudio/example_demo_videos/demo_new/.mp4' --prompt ""

9. Training on VGGSound with text captions:
   OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 train.py exp_id=vgg_only_small_44k_caption_jan26 model=small_44k > ./logs/train_vgg_only_small_44k_caption_jan26.log 2>&1 &

10. Generate data for DPO:
    (1) Edit eval_for_dpo_config.yaml.
    (2) Edit eval_data/base.yaml.
    (3) OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 batch_eval_dpo.py duration_s=8 dataset=vggsound_dpo model=small_44k num_workers=8 > ./logs_dpo/inference_vgg_only_small_44k_new_model_lumina_v2a_two_stream_Feb28_depth16_caption_dpo.log 2>&1 &

11. Create DPO files:
    nohup python dpo_training/create_dpo_file.py > ./logs/create_dpo_file.log 2>&1 &
    nohup python reward_models/cavp.py > ./logs/create_dpo_cavp_file.log 2>&1 &

12. DPO training:
    OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 train_dpo.py exp_id=vgg_only_small_44k model=small_44k > ./logs_dpo/train_vgg_only_small_44k.log 2>&1 &

13. Feature extraction for DPO:
    OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 dpo_training/extract_video_training_latents.py > ./logs_dpo/extract_video_latents_44k_vggsound_dpo-new_model_lumina_v2a_two_stream_Mar1_depth16_caption_inference_ema_iter1_cavp.log 2>&1 &
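Because every job above is backgrounded with nohup, failures only show up inside the log files. A quick check like the following can be run after launching; it is a hypothetical sketch, and the error patterns it greps for are assumptions, not an exhaustive list:

```shell
#!/bin/sh
# Hypothetical log check for backgrounded nohup jobs: scan a log file for
# common failure signatures (assumed patterns) and report OK or FAIL.
check_log() {
    log="$1"
    if grep -qE "Traceback|CUDA out of memory|RuntimeError" "$log"; then
        echo "FAIL: $log"
        return 1
    fi
    echo "OK: $log"
}
```

Example: `check_log ./logs/train_vgg_only_small_44k_caption_jan26.log`.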
14. Full DPO workflow.
    After generating data for DPO, create the DPO data files:
    (1) python ./dpo_training/generated_videos_file.py   # get the video JSON file
    (2) python ./dpo_training/create_dpo_file.py         # get the AV-Align DPO file
    (3) python ./reward_models/clap_multi_gpu.py (or clap.py)   # get the CLAP DPO file
    (4) python ./reward_models/cavp.py                   # get the CAVP DPO file; note that reencode_video.py must be run first to produce the 4 fps video
    Then extract video and audio features:
    (1) Edit dpo_training/extract_video_training_latents.py.
    (2) OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 dpo_training/extract_video_training_latents.py > ./logs_dpo/extract_video_latents_44k_vggsound_dpo-new_model_lumina_v2a_two_stream_Mar12_depth16_caption_10000samples_inference_ema_iter1_cavp.log 2>&1 &
    DPO training (first edit the config files: train_dpo_config.yaml, dpo_base_config.yaml, ./dpo_data/base.yaml):
    OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 train_dpo.py exp_id=vgg_only_small_44k_lumina_v2a_two_stream_May7_depth16_caption_beta10000 model=small_44k > ./logs_dpo/train_vgg_only_small_44k_lumina_v2a_two_stream_May7_depth16_caption_beta10000.log 2>&1 &
    Inference:
    (1) Change 'model_path' in ./mmaudio/eval_utils.py.
    (2) Edit the config files: eval_config.yaml, ./eval_data/base.yaml.
    (3) OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 batch_eval.py duration_s=8 dataset=vggsound model=small_44k num_workers=8 > ./logs_dpo/inference_vgg_only_small_44k_dpo_iter1_cavp_May7_lumina_v2a_two_stream_depth16_caption_2000samples_beta10000.log 2>&1 &
        OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 batch_eval.py duration_s=8 dataset=moviegen model=small_44k num_workers=8 > ./logs_dpo/lumina_v2a_moviegen_Sep24_inference_ema.log 2>&1 &
    Evaluation:
    (1) Edit ./av-benchmark/evaluate.sh.
    (2) bash evaluate.sh
    Later-iteration runs (beta20000, full reward, ib_desync, iter3, 5k steps):
    OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 train_dpo.py exp_id=vgg_only_small_44k_lumina_v2a_two_stream_May17_depth16_caption_beta20000_full_reward_ib_desync_iter3_steps5k model=small_44k > ./logs_dpo/train_vgg_only_small_44k_lumina_v2a_two_stream_May17_depth16_caption_beta20000_full_reward_ib_desync_iter3_steps5k.log 2>&1 &
    OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 batch_eval.py duration_s=8 dataset=vggsound model=small_44k num_workers=8 > ./logs_dpo/inference_vgg_only_small_44k_dpo_May17_lumina_v2a_two_stream_depth16_caption_2000samples_beta20000_full_reward_ib_desync_iter3_steps5k.log 2>&1 &
    OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 batch_eval_dpo.py duration_s=8 dataset=vggsound_dpo model=small_44k num_workers=8 > ./logs_dpo/inference_vgg_only_small_44k_new_model_lumina_v2a_two_stream_May17_depth16_caption__beta20000_full_reward_ib_desync_iter3_steps5k_for_dpo.log 2>&1 &
    OMP_NUM_THREADS=4 nohup torchrun --standalone --nproc_per_node=4 dpo_training/extract_video_training_latents.py > ./logs_dpo/extract_video_latents_44k_vggsound_dpo-new_model_lumina_v2a_two_stream_Mar17_depth16_caption_2000samples_steps5k_iter3_ib_desync.log 2>&1 &
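The DPO workflow above repeats per iteration, so its ordering is easy to get wrong. The sketch below chains the steps in the order the notes describe; it is hypothetical (config edits stay manual, log redirects and overrides are omitted, and `run`/`dpo_iteration`/`DRY_RUN` are names invented here). With DRY_RUN=1 it only prints each step, which lets the ordering be sanity-checked without the repo:

```shell
#!/bin/sh
# Sketch of one DPO iteration, following the step order in the notes.
# DRY_RUN=1 prints each command instead of executing it.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

dpo_iteration() {
    # 1) generate candidate data with the current model
    run torchrun --standalone --nproc_per_node=4 batch_eval_dpo.py duration_s=8 dataset=vggsound_dpo model=small_44k num_workers=8
    # 2) build the DPO preference files
    run python ./dpo_training/generated_videos_file.py   # video JSON file
    run python ./dpo_training/create_dpo_file.py         # AV-Align DPO file
    run python ./reward_models/clap_multi_gpu.py         # CLAP DPO file
    run python reencode_video.py                         # 4 fps re-encode, required before cavp
    run python ./reward_models/cavp.py                   # CAVP DPO file
    # 3) extract features, then train
    run torchrun --standalone --nproc_per_node=4 dpo_training/extract_video_training_latents.py
    run torchrun --standalone --nproc_per_node=4 train_dpo.py exp_id=vgg_only_small_44k model=small_44k
}
```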