Installing AlphaFold 3 on Rocky 8 -- local install and deployment -- no Docker, no Singularity, no virtualization (original; please cite the source when reposting)
2024-11-25 01:27:35
admin
1. Upgrade the GPU driver; downloads:
https://developer.nvidia.com/cuda-downloads
https://developer.download.nvidia.cn/compute/cudnn/redist/cudnn/linux-x86_64/
NVIDIA-SMI 565.57.01 Driver Version: 565.57.01 CUDA Version: 12.7
2. Set up the build environment:
source /appsnew/source/cmake-3.14.3.sh
source /appsnew/source/intel2022.sh
source /appsnew/source/gcc-12.1.0.sh
source /appsnew/source/cuda-12.6.2.sh
# change to your conda environment
source /appsnew/source/Anaconda3-2024.06-1-local.sh
conda create -n AF3 python=3.11
conda activate AF3
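Before building anything, it can save time to confirm the activated env really provides the expected interpreter. This is my own sanity-check sketch, not part of the original recipe; it assumes AlphaFold 3 targets Python 3.11 as created above.

```shell
# Sanity check (not part of the original recipe): confirm the AF3 conda env
# exposes Python 3.11 before proceeding with the build.
py_bin=$(command -v python || command -v python3)
py_ver=$("$py_bin" -c 'import sys; print("%d.%d" % sys.version_info[:2])')
if [ "$py_ver" != "3.11" ]; then
  echo "warning: expected Python 3.11 in the AF3 env, got ${py_ver}" >&2
fi
```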
3. Download AlphaFold 3:
git clone https://github.com/google-deepmind/alphafold3.git
4. Enter the alphafold3 directory, then build and install HMMER:
mkdir ./hmmer_build ./hmmer
wget http://eddylab.org/software/hmmer/hmmer-3.4.tar.gz --directory-prefix ./hmmer_build
cd ./hmmer_build && tar zxf hmmer-3.4.tar.gz && rm hmmer-3.4.tar.gz
cd ./hmmer-3.4 && ./configure --prefix $(realpath ../../hmmer)
make -j8
make install
cd ./easel && make install
cd ../../../
rm -rf ./hmmer_build
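After deleting the build tree, the only copy of HMMER is the one under the `./hmmer` prefix, so it is worth verifying the install before moving on. The helper below is a hypothetical sketch (the tool list matches what AlphaFold 3's data pipeline uses from HMMER, e.g. jackhmmer):

```shell
# Hypothetical helper: verify the HMMER binaries exist and are executable
# under the given install prefix; returns non-zero if any tool is missing.
check_hmmer_install() {
  local prefix="$1" missing=0
  for tool in jackhmmer hmmsearch hmmbuild hmmalign; do
    if [ ! -x "${prefix}/bin/${tool}" ]; then
      echo "missing: ${tool}" >&2
      missing=1
    fi
  done
  return "$missing"
}

# check_hmmer_install ./hmmer && echo "HMMER OK"
```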
pip3 install -r dev-requirements.txt
pip3 install --no-deps .
# if the pybind11 build fails,
# try installing it manually,
# or install it via a proxy (e.g. clash)
build_data
python run_alphafold.py --helpfull  # test
(Thanks to Zhu Jintao for the first build and test.) In the submission script, note that V100/L40 must use xla: the program defaults to triton, which V100/L40 cannot use (there is also a cudnn option).
A800/H800:
export XLA_CLIENT_MEM_FRACTION=0.95
L40/V100:
export XLA_CLIENT_MEM_FRACTION=3.2
and append this flag to the `python run..` command:
--flash_attention_implementation=xla
AF3RUN.sh already includes this check: nvidia-smi --format=csv --query-gpu=name | grep -qi 'v100\|l40'
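AF3RUN.sh itself is not shown in this post, so the following is only a sketch of how such a branch could look, built around the exact `nvidia-smi | grep` check quoted above: select the xla attention implementation on V100/L40 and keep the triton default on other GPUs. The function and variable names are my own.

```shell
# Hypothetical sketch of the GPU check in AF3RUN.sh: V100/L40 need the xla
# attention implementation; other GPUs keep the triton default (no flag).
attention_flag_for_gpu() {
  # $1: GPU name as reported by `nvidia-smi --format=csv --query-gpu=name`
  if printf '%s' "$1" | grep -qi 'v100\|l40'; then
    echo "--flash_attention_implementation=xla"
  fi
}

gpu_name=$(nvidia-smi --format=csv,noheader --query-gpu=name 2>/dev/null | head -n1)
extra_flags=$(attention_flag_for_gpu "$gpu_name")
# python run_alphafold.py ... $extra_flags
```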
How to submit on the 北极星 cluster:
pkurun-l40 1 1 AF3RUN.sh 1tce.json
pkurun-h800 1 1 AF3RUN.sh 1tce.json
pkurun-a800 1 1 AF3RUN.sh 1tce.json
(V100 is not supported at the moment and will crash; this applies to the gpu_2l and gpu_4l partitions.)
The submission script generated by the commands above:
[chenf@login28 testclass]# cat job.srp185320
#!/bin/bash
#SBATCH -J AF3185320
#SBATCH -p gpu_l40
#SBATCH -N 1
#SBATCH -o AF3185320_%j.out
#SBATCH -e AF3185320_%j.err
#SBATCH --no-requeue
#SBATCH -A chenf_g1
#SBATCH --qos=chenfl40
#SBATCH --gres=gpu:1
#SBATCH --overcommit
#SBATCH --mincpus=9
pkurun AF3RUN.sh 8ujo.json