
Explaining the ollama serve model-loading logs
```
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 5648.81 MiB
llm_load_tensors: CUDA1 model buffer size = 4777.06 MiB
llm_load_tensors: CUDA2 model buffer size = 4835.88 MiB
llm_load_tensors: CUDA3 model buffer size = 4777.06 MiB
llm_load_tensors: CUDA4 model buffer size = 4777.06 MiB
llm_load_tensors: CUDA5 model buffer size = 4835.88 MiB
llm_load_tensors: CUDA6 model buffer size = 4835.88 MiB
llm_load_tensors: CUDA7 model buffer size = 5491.86 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 352.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 320.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 320.00 MiB
llama_kv_cache_init: CUDA3 KV buffer size = 320.00 MiB
llama_kv_cache_init: CUDA4 KV buffer size = 320.00 MiB
llama_kv_cache_init: CUDA5 KV buffer size = 320.00 MiB
llama_kv_cache_init: CUDA6 KV buffer size = 320.00 MiB
llama_kv_cache_init: CUDA7 KV buffer size = 288.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA3 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA4 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA5 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA6 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA7 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 9
time=2025-02-11T11:34:37.966+08:00 level=INFO source=server.go:594 msg="llama runner started in 231.27 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
```
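Before walking through the lines, note that the placement described by this log can also be confirmed at runtime. A quick check (assuming the server is still running and `nvidia-smi` is installed):

```
ollama ps      # shows the loaded model and its CPU/GPU split (e.g. "100% GPU")
nvidia-smi     # shows the memory actually allocated on each of the 8 GPUs
```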
From the logs, the model's tensors are being offloaded to multiple GPUs, meaning different layers of the model are placed on different devices. (The second `llama_model_loader` block after the "llama runner started" line is a vocab-only pass, as shown by `vocab_only = 1` and the final `vocab only - skipping tensors`, so it loads no weights.) Below is an explanation of the log contents and some suggestions for improvement:
Explanation of the log contents

- `llm_load_tensors: offloading 80 repeating layers to GPU`
  The model's 80 repeating transformer layers are being offloaded to the GPUs. The model is too large for a single GPU's memory, so the layers are split across several devices.
- `llm_load_tensors: offloading output layer to GPU`
  The output layer is also offloaded to a GPU. It is relatively large (it projects to the full 128256-token vocabulary), so it needs extra memory on its device.
- `llm_load_tensors: offloaded 81/81 layers to GPU`
  All 81 layers (80 repeating layers plus the output layer) were offloaded successfully; only a small memory-mapped host buffer (563.62 MiB) remains on the CPU side.
- `CUDA{n} model buffer size`
  The weight memory allocated on each GPU. For example, CUDA0 (GPU 0) holds 5648.81 MiB and CUDA1 (GPU 1) holds 4777.06 MiB, and so on; the split follows how many layers each device was assigned.
- `llama_new_context_with_model: n_seq_max = 4`
  The context supports up to 4 parallel sequences (slots); this is not a sequence length. The total context `n_ctx = 8192` is divided among the slots, giving `n_ctx_per_seq = 2048` tokens per sequence, which is why the log warns that the model's trained context of 131072 tokens will not be fully utilized.
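The buffer sizes in the log are internally consistent, and it can be instructive to recompute them. A minimal shell sketch using only values printed above (`n_layer = 80`, `kv_size = 8192`, `head_count_kv = 8`, `key_length = 128`, f16 = 2 bytes):

```
# KV cache: K and V tensors, per layer, per KV head, per context position
echo "$(( 2 * 80 * 8192 * 8 * 128 * 2 / 1024 / 1024 )) MiB"   # -> 2560 MiB, matching "KV self size"

# Model weights: the eight CUDA buffers plus the CPU-mapped buffer
# (values truncated to whole MiB) sum to the reported 39.59 GiB
echo "$(( (5648 + 4777 + 4835 + 4777 + 4777 + 4835 + 4835 + 5491 + 563) / 1024 )) GiB"   # -> 39 GiB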
Analysis and suggestions

- Multi-GPU allocation: the logs show the model was successfully offloaded across multiple GPUs, each receiving a different share of the weights. You have 8 GPUs (`CUDA0` through `CUDA7`), and the model's total memory footprint spans all of them. This indicates the model is too large to run on a single GPU, so it runs with pipeline parallelism across the devices (see `pipeline parallelism enabled (n_copies=4)`). The per-GPU differences come from how many layers, and which extra tensors, each device was assigned, not necessarily from differences between the cards themselves.
- Memory usage: CUDA0 and CUDA7 received the most weight memory (5648.81 MiB and 5491.86 MiB), while the middle GPUs each received roughly 4777 to 4836 MiB. The first and last devices in the pipeline typically also hold boundary tensors such as the embedding and output layers, which accounts for the extra.
- Optimizing GPU usage:
  - With multiple GPUs, Ollama tries to spread the load automatically. If GPU memory is insufficient, consider a smaller quantization of the model, or leave some layers on the CPU (via the `num_gpu` option), at the cost of inference speed.
  - If you only want to use specific GPUs, or need to control how the model is distributed across them, set the `CUDA_VISIBLE_DEVICES` environment variable before starting the server, e.g. `export CUDA_VISIBLE_DEVICES=0,1` to use only GPU 0 and GPU 1.
- Parallel sequences (`n_seq_max`): `n_seq_max = 4` means the server can handle up to 4 requests in parallel, each with a 2048-token slice of the 8192-token context; it does not mean the maximum sequence length is 4. Depending on your workload, you may want to trade parallelism for longer per-request context, as shown in the sketch after this list; note that increasing either the context size or the number of slots increases KV-cache memory use.
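For example, to give a single request more of the context instead of serving four in parallel, a sketch (assuming a recent Ollama build; `OLLAMA_NUM_PARALLEL` and the `num_ctx` option are documented knobs, and the model tag below is illustrative):

```
# Run a single slot so one request gets the full configured context
export OLLAMA_NUM_PARALLEL=1
ollama serve

# The context window can also be requested per call through the API options
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:70b",
  "prompt": "Hello",
  "options": { "num_ctx": 8192 }
}'
```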
Summary

- To optimize GPU usage, make sure the model layers are distributed sensibly across the GPUs and that each device has enough VRAM. Check `CUDA_VISIBLE_DEVICES` and related environment settings to avoid resource bottlenecks.
- Adjusting `n_seq_max` and the context length lets you trade request parallelism against per-request input/output capacity.
- Size each device's workload to its memory and compute capacity to keep inference efficient and stable.
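One more knob visible in the log: `flash_attn = 0` means flash attention was disabled for this run, with each GPU reserving a 1216 MiB compute buffer. Assuming your Ollama build and GPUs support it, you can try enabling it:

```
# Enable flash attention before starting the server; on supported GPUs this
# typically shrinks compute buffers and speeds up prompt processing
export OLLAMA_FLASH_ATTENTION=1
ollama serve
```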